May 22, 2024

What is The ICE Scoring Method? Explained (with Examples)

Learn how the ICE scoring method simplifies feature prioritization. Includes a step-by-step guide, examples, and tips for avoiding common mistakes.
Ryan Lucht
Before joining Eppo, Ryan spent 6 years in the experimentation space consulting for companies like Clorox, Braintree, Yami, and DoorDash.

Knowing why, when, and how to prioritize certain features for SaaS products can be difficult. 

One thing is certain: It’s particularly tough to do this based solely on gut feeling and hunches. 

That’s where using a framework such as the ICE scoring system becomes so important. 

You’ll still rely on some subjective judgment, but in an organized way that makes your prioritization more consistent. 

In this article, we’ll explore what the ICE score means and how it’s a great way of speeding up both your workflow and decision-making process. 

We’ll go over:

  • What is the ICE scoring method?

  • How are ICE scores calculated?

  • A step-by-step guide for calculating ICE scores

  • The benefits of using ICE scores

  • Common mistakes to avoid

  • Some real-world example scenarios where ICE can be useful

Let’s get started.

What is the ICE scoring method?

The ICE scoring method is a straightforward way to rank ideas and make more intentional decisions about which features to develop for your product. 

What does ICE stand for?

  • Impact: The potential positive effect the feature will have on key business goals (e.g., boosting customer retention, attracting new users, increasing revenue).
    Of course, it’s impossible to accurately predict what impact your idea will actually have — which is why it’s necessary to run A/B experiments on every new feature or change. For now, just go with a reasonable guess — don’t waste time overthinking it. 

  • Confidence: The level of certainty you have that the feature will actually deliver the intended results. This can be based on data, user feedback, or experience.

  • Ease: The level of difficulty and resources required to develop the feature.

Although leveraging the ICE framework should be quick, sometimes teams can get stuck in long (and often inconclusive) debates about exactly which numerical score (from 1-10) to assign in each category. 

Remember that this is just a prioritization exercise and should never be the source of final “ship/no ship” decisions. For true data-driven decision-making, you’ll need to run randomized controlled experiments (or A/B tests) on each potential feature.

When it comes to assessing ideas relative to one another before you run an A/B test, however, ICE at least adds a layer of quantitative assessment to your decision-making.

How are ICE scores calculated?

The magic of ICE lies in its simple calculation. Here's how it works:

  1. Score each factor: For each feature idea, assign a score from 1 to 10 for Impact, Confidence, and Ease. Remember that these aren’t hard data points, just initial subjective judgments. Don’t spend too long on them or get hung up debating them as a team. 

  2. Multiply: Take the scores you just assigned and multiply them together. This is the feature's overall ICE score.

The formula: ICE Score = Impact * Confidence * Ease

Example: Imagine a feature that you think might have a moderate impact (score of 5), you have high confidence it will work as expected (score of 8), and it seems relatively easy to implement (score of 7). 

Your ICE score calculation would look like this: 

5 * 8 * 7 = 280

The higher the ICE score, the greater the potential for the feature to deliver value without draining too many resources. You may want to organize your A/B testing roadmap by ICE score.  
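If it helps to see the arithmetic spelled out, here’s a minimal sketch in Python. The function name and the 1-10 validation are illustrative choices, not part of the method itself:

```python
def ice_score(impact: int, confidence: int, ease: int) -> int:
    """Multiply the three 1-10 ratings together to get the ICE score."""
    for name, value in {"impact": impact, "confidence": confidence, "ease": ease}.items():
        if not 1 <= value <= 10:
            raise ValueError(f"{name} must be between 1 and 10, got {value}")
    return impact * confidence * ease

# The worked example above: moderate impact (5), high confidence (8), fairly easy (7).
print(ice_score(impact=5, confidence=8, ease=7))  # 280
```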

A step-by-step breakdown for calculating ICE scores

You now know that ICE stands for Impact, Confidence, and Ease. Let’s look at how to weigh each factor when scoring your feature ideas:

Impact: The potential benefit

  • Ask yourself: How much would this feature improve our key metrics (like revenue, user satisfaction, and new signups)? Would it solve major customer pain points or unlock a valuable new use case?

  • Scoring: Use a 1-10 scale. A low Impact score (1-3) means minimal improvement, while a high score (8-10) suggests a potentially industry-changing feature.

  • Example: A new in-app chat feature for an e-commerce platform could earn a high Impact score if you expect it to significantly reduce abandoned carts by providing quick customer support (a hypothesis you’d later verify with an A/B test).

Confidence: Certainty of success

  • Ask yourself: How sure are we that the feature will deliver the results we want? Do we have data, user feedback, or past experience to back this up?

  • Scoring: Low Confidence (1-3) means you're taking a shot in the dark, while higher Confidence (8-10) suggests a good understanding of the likely outcome.

  • Example: Introducing a completely new, untested product line would have lower Confidence compared to adding a feature similar to what competitors already offer successfully.

Ease: How easy it will be to implement

  • Ask yourself: How much time, effort, and resources would it take to make this feature a reality? Do we have the necessary skills in-house, or will we need to outsource?

  • Scoring: On a scale of 1-10, low Ease (1-3) means a long, complex development process, while high Ease (8-10) indicates a quick, effortless addition.

  • Example: Adding a simple "sort by price" option on a website likely has higher Ease than developing a complex AI-powered recommendation engine.

A word of advice: Getting a good ICE score is all about balance. A super high-impact idea with low Confidence or Ease scores might not be the best immediate choice. Prioritizing ideas with good scores across the board can help you find the sweet spot between value and feasibility.
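To tie the three factors together, here’s a small sketch that scores a backlog and sorts it from highest to lowest ICE score. The feature names and scores below are hypothetical, chosen only to echo the examples above:

```python
from dataclasses import dataclass

@dataclass
class FeatureIdea:
    name: str
    impact: int      # 1-10: expected effect on key metrics
    confidence: int  # 1-10: how sure you are it will deliver
    ease: int        # 1-10: how quick and cheap it is to build

    @property
    def ice_score(self) -> int:
        return self.impact * self.confidence * self.ease

# A hypothetical backlog -- the names and scores are illustrative judgments, not data.
backlog = [
    FeatureIdea("In-app chat", impact=8, confidence=6, ease=4),
    FeatureIdea("Sort by price", impact=4, confidence=7, ease=9),
    FeatureIdea("AI recommendations", impact=9, confidence=4, ease=2),
]

# Highest ICE score first -- the top of this list is where prioritization starts.
for idea in sorted(backlog, key=lambda f: f.ice_score, reverse=True):
    print(f"{idea.name}: {idea.ice_score}")
```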

Why should you use the ICE scoring method?

ICE scoring offers several benefits that make it a powerful tool for product development teams. Here's why it's worth trying:

  • Cut through gut feelings quickly: The ICE score introduces structure to your decision-making. Instead of arguing over whose idea is best, you have a framework to compare features based on their potential value and the effort required.

  • Prioritize with clarity: The ICE framework makes it easy to see top contenders at a glance. No more endless discussion — the numbers help you identify which features deserve your focus.

  • Use your resources wisely: Features with low Ease scores highlight projects that might drain time and money. The ICE method helps you spot potential resource hogs early on, so you can make better budget decisions.

  • Focus on what matters: Aligning ICE scoring with your overall business goals keeps everyone on the same page. It helps you avoid adding features just because they're “nice to have” and instead prioritize those that drive key business metrics like revenue.

  • Boost those key numbers: The cumulative effect of using ICE can be powerful. By consistently focusing on high-impact, feasible features, you increase your chances of:

    • Faster project completion times that free up your teams

    • Improved ROI, higher revenue, and increased profit margins

    • Greater customer satisfaction and retention rates

    • Stronger competitive advantage over other providers

Common pitfalls to avoid when using the ICE method 

ICE scoring is a handy tool, but it's important to be aware of a few potential obstacles. Here are the most common ones, each paired with a possible solution:

Subjectivity can waste time 

Everyone has different ideas of what a "high impact" feature looks like. Though the ICE method aims to offer an objective framework, subjectivity can and will end up affecting the process.

Recognize that these scoring exercises are just that — thought exercises. They are not good evidence for the actual value of an idea in the way that real-world experiments are. With that in mind, don’t let yourself get hung up on finding the “perfect” score for each factor. 

Solution: Define your scoring scale (1-10) clearly and agree on the definitions as a team. Having examples for low/medium/high scores on each factor can help everyone stay on the same page.
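One lightweight way to keep everyone on the same scale is to write the shared definitions down somewhere visible. The sketch below shows one possible shape for such a rubric; the wording of each band is illustrative, not a prescribed standard:

```python
# An illustrative Impact rubric the team could agree on up front.
IMPACT_RUBRIC = {
    (1, 3): "Minor improvement; little expected movement on key metrics",
    (4, 7): "Noticeable lift on at least one key metric",
    (8, 10): "Could meaningfully change a core business goal",
}

def describe_impact(score: int) -> str:
    """Return the agreed-upon meaning of an Impact score."""
    for (low, high), meaning in IMPACT_RUBRIC.items():
        if low <= score <= high:
            return meaning
    raise ValueError("Impact score must be between 1 and 10")

print(describe_impact(6))  # "Noticeable lift on at least one key metric"
```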

Don't undervalue technical debt

Features focused on fixing bugs or improving stability can get low ICE scores because they may not be seen as directly impacting customers. Neglecting these tasks can hurt your product's long-term health and impact future development.

Solution: Set aside a percentage of development time specifically for technical debt — treating it like a separate project bucket within your roadmap.

Forgetting about customer feedback

It's easy to score features based on what you think would be a great addition. Be sure to factor in actual customer data, requests, and pain points when assigning Impact scores to keep your roadmap customer-centric.

Solution: When scoring, link each feature back to specific customer feedback or data points (like feature requests from a high number of users).

The illusion of precision

ICE scores are still based on estimations. Don't treat them as the absolute truth. Use them as a starting point for comparisons and discussion, not as the final word on a feature's worth.

Solution: Rank features based on their ICE score, then review the top contenders with a more qualitative assessment, considering factors outside the ICE model.

Overthinking it 

The ICE score is meant to be a quick assessment tool. Avoid getting bogged down in analysis paralysis trying to find the 'perfect' score for each factor.

Solution: Start by focusing on a relative ranking rather than absolute score accuracy. Remember, quick comparisons are where the ICE method shines.

Real-world examples of the ICE method in use 

Now, let’s make the ICE method clearer through some examples. Take a look at the following scenarios:

Example 1: Startup prioritizes growth

A young startup wants to boost its user base rapidly. They have a few ideas, but limited resources. Using ICE, here's how they might compare options:

  • Feature A: Develop a complex referral program

    • Impact: Moderate = 5 (might attract new users, but is it compelling enough?)

    • Confidence: Low = 3 (they’re unsure how well it'll actually work)

    • Ease: Low = 2 (requires development time and potential user confusion)

    • Total ICE score = 30

  • Feature B: Invest heavily in social media advertising

    • Impact: High = 8 (potential for wide reach and new signups)

    • Confidence: Moderate = 6 (they have some past advertising data)

    • Ease: High = 7 (quick to set up and manage campaigns)

    • Total ICE score = 336

Outcome: Feature B gets a higher ICE score (336), encouraging the startup to prioritize social media advertising over a potentially less impactful referral program.
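If you want to sanity-check the arithmetic, the same comparison is easy to reproduce in a few lines of Python (the labels simply mirror the hypothetical features above):

```python
def ice_score(impact: int, confidence: int, ease: int) -> int:
    return impact * confidence * ease

candidates = {
    "Feature A: referral program": ice_score(impact=5, confidence=3, ease=2),        # 30
    "Feature B: social media advertising": ice_score(impact=8, confidence=6, ease=7),  # 336
}

# Highest score first: Feature B comes out on top.
for name, score in sorted(candidates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score}")
```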

Example 2: Software company chooses features

A software company has a long feature request list for their next release. ICE helps them sort through the chaos:

  • Feature A: Major back-end overhaul

    • Impact: Low = 3 (largely invisible to the average user)

    • Confidence: High = 9 (they are confident in the technical benefits)

    • Ease: Low = 2 (complex and time-consuming)

    • Total ICE score = 54

  • Feature B: Sleek UI improvements

    • Impact: High = 8 (improved user experience = happier customers)

    • Confidence: Moderate = 7 (they have some positive feedback from testing)

    • Ease: Moderate = 5 (requires design and front-end work)

    • Total ICE score = 280

Outcome: Feature B has the edge with an ICE score of 280, making user-facing improvements the priority for the release.

Example 3: E-commerce marketing strategy

An e-commerce company wants to boost its brand visibility. They use ICE to analyze different marketing approaches:

  • Option A: Traditional print advertising

    • Impact: Moderate = 5 (can reach an audience, but not highly targeted)

    • Confidence: Low = 3 (unsure how a digital-first audience will respond to print)

    • Ease: High = 8 (design and placement can be outsourced)

    • Total ICE score = 120

  • Option B: Social media influencer campaign

    • Impact: High = 8 (potential for viral reach and niche targeting)

    • Confidence: Moderate = 5 (dependent on choosing the right influencers)

    • Ease: Moderate = 6 (requires some research and campaign management)

    • Total ICE score = 240

Outcome: The social media campaign wins out (ICE score of 240) due to its potential for high impact with reasonable effort.

Next steps

While the ICE scoring method offers a framework for prioritizing ideas, Eppo's data-driven insights and experimentation tools take your ideas from rough scores to proven evidence.

Eppo is a comprehensive experimentation and feature management platform that allows you to validate ideas with real-world experiments, measuring the data and metrics that matter — like revenue impact. 

By integrating Eppo into your prioritization workflow, you gain the ability to measure the true impact of potential features, ensuring your team focuses on the highest-value initiatives.

Here's how Eppo helps you make smarter business decisions through experimentation:

  • Data you can trust: Eppo replaces subjective points of view with real user behavior data. Instead of relying on hunches, run experiments to measure the actual impact of potential features.

  • Rigorous analysis: Eppo's advanced statistical engine provides reliable results that you can trust. Mitigate the risk of inflated Impact scores and gain clarity on the true value of each idea.

  • Test before launching: Eppo's feature flagging capabilities allow you to test ideas with minimal development overhead. This translates to more accurate assessments of how resource-intensive it will be to implement new features in a live environment. 

  • Beyond gut feelings: Eppo helps you track key metrics directly related to your business goals. This approach, backed by your company’s internal source of truth (thanks to Eppo being warehouse-native), validates the impact of your new features. 

  • Easier prioritization: Eppo's reporting and analytics tools provide insights into experiment results, enabling you to compare features and make informed decisions aligned with your ICE scores.

Book a Demo and Explore Eppo.
