Every growth team has more ideas than time. ICE is the fastest, most honest way to decide what to test next — and it takes less than 60 seconds per idea.
By Matthis Duarte — Senior SEO professional and growth strategist, 12 years of experience
The graveyard of failed growth programmes is full of teams that ran the wrong experiments. Not because the ideas were bad — because they picked them the wrong way. The loudest voice in the room won. The founder’s pet project jumped the queue. The biggest idea got priority regardless of how long it would take to ship.
The ICE scoring framework exists to remove politics from prioritisation. Created by Sean Ellis — the person who coined “growth hacking” — it forces every idea through the same three filters before it earns a place on the roadmap. Fast to apply, hard to game, and particularly powerful for early-stage teams running weekly experiment cycles.
How ICE works
ICE stands for Impact, Confidence, and Ease. Each dimension is scored from 1 to 10, and the three scores are averaged to produce a single ICE score for each idea.
ICE score = (Impact + Confidence + Ease) ÷ 3
| Dimension | Question it answers | Score 1 | Score 10 |
|---|---|---|---|
| Impact | If this works, how much will it move the key metric? | Tiny, barely measurable | Game-changing, order-of-magnitude shift |
| Confidence | How certain are we that it will work? | Pure speculation, no evidence | Proven elsewhere, strong data backing it |
| Ease | How much effort is required to run this experiment? | Months of engineering work | Can be live by tomorrow |
The output is a ranked list. Ideas at the top of the list have the best combination of potential impact, realistic chance of working, and low cost to test. You start there.
Why confidence is the most important dimension
Most teams naturally inflate Impact scores — it’s easy to imagine a big outcome. Ease is also relatively honest because engineering and design teams know roughly how long things take. But Confidence is where ICE becomes genuinely useful.
Scoring confidence forces you to ask: do we actually have evidence this will work, or are we just excited about the idea? A referral programme might score 9 on Impact and 8 on Ease — but if you’ve never tested referral mechanics with your specific audience and have no benchmark data, a Confidence score of 2 or 3 will drag the ICE score into the middle of the pack where it belongs.
“The teams that grow fastest are not the ones with the best ideas — they’re the ones who are most honest about which ideas are most likely to work.”
This is ICE’s most underrated contribution: it introduces humility into the roadmap by forcing teams to score their own uncertainty.
ICE in practice: scoring a growth backlog
Imagine a SaaS startup growth team with five ideas competing for the next sprint:
| Idea | Impact | Confidence | Ease | ICE score |
|---|---|---|---|---|
| Add a progress bar to onboarding flow | 6 | 8 | 9 | 7.7 |
| Launch double-sided referral programme | 9 | 5 | 4 | 6.0 |
| A/B test CTA copy on pricing page | 5 | 7 | 9 | 7.0 |
| Rebuild checkout flow | 8 | 6 | 2 | 5.3 |
| Add in-app NPS survey | 4 | 9 | 8 | 7.0 |
The referral programme has the highest potential impact — but low confidence and significant engineering effort drag it to fourth place. The onboarding progress bar wins: moderate impact, high confidence (well-documented in the literature), and ships in a day. That’s the experiment you run this sprint, while you build the evidence base needed to raise confidence in the referral programme.
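The scoring and ranking above takes only a few lines to reproduce. Here's a minimal sketch — the idea names and scores come straight from the table; the `ice_score` helper and sorting logic are just one way to implement it.

```python
def ice_score(impact: float, confidence: float, ease: float) -> float:
    """ICE score: the simple average of the three 1-10 dimensions."""
    return round((impact + confidence + ease) / 3, 1)

# (impact, confidence, ease) scores from the backlog table above
backlog = {
    "Onboarding progress bar": (6, 8, 9),
    "Double-sided referral programme": (9, 5, 4),
    "Pricing-page CTA copy test": (5, 7, 9),
    "Checkout flow rebuild": (8, 6, 2),
    "In-app NPS survey": (4, 9, 8),
}

# Rank the backlog from highest to lowest ICE score
ranked = sorted(backlog.items(), key=lambda kv: ice_score(*kv[1]), reverse=True)
for idea, scores in ranked:
    print(f"{ice_score(*scores):>4}  {idea}")
```

Run this against your own backlog and the top of the printed list is your next sprint's experiment.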
🔴 Case study — Sean Ellis at Dropbox and LogMeIn
Sean Ellis developed the ICE framework while running growth at companies including Dropbox and LogMeIn — environments where the experiment backlog was always longer than the engineering calendar. The challenge was not generating ideas; startups never lack those. The challenge was ruthless, defensible prioritisation.
What Ellis found was that without a scoring system, teams defaulted to highest-effort, highest-ambition experiments — the ones that sounded most impressive in all-hands meetings. These were often low-confidence bets that took months to ship and produced ambiguous results.
ICE shifted the incentive. By explicitly scoring Ease and Confidence alongside Impact, the framework surfaced fast, high-confidence experiments that could run in days, generate clear results, and either confirm or kill an idea quickly. The learning velocity — number of validated experiments per quarter — increased dramatically.
→ Result: faster learning cycles led to better prioritisation in subsequent sprints, compounding into significantly better growth outcomes over a 6–12 month horizon.
ICE vs. RICE vs. PIE
ICE is not the only prioritisation framework. Here’s how the main alternatives compare:
| Framework | Formula | Best for | Limitation |
|---|---|---|---|
| ICE (Sean Ellis) | (Impact + Confidence + Ease) ÷ 3 | Growth experiment backlogs, fast-moving teams | No volume/reach dimension |
| RICE (Intercom) | (Reach × Impact × Confidence) ÷ Effort | Product feature prioritisation | More complex, slower to score |
| PIE (WiderFunnel) | (Potential + Importance + Ease) ÷ 3 | CRO and A/B test prioritisation | “Potential” and “Importance” overlap with ICE’s Impact |
ICE wins for growth teams because it’s the fastest to apply — you can score 20 ideas in a single 30-minute session. RICE is more precise but requires volume data that early-stage teams often don’t have. PIE is purpose-built for conversion rate optimisation but follows the same underlying logic.
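The structural difference between the two main formulas is easy to see side by side. A hedged sketch — the referral-programme numbers are illustrative, and the RICE unit conventions (reach per period, confidence as a fraction, effort in person-months) follow Intercom's usual framing:

```python
def ice(impact: float, confidence: float, ease: float) -> float:
    """ICE: average of three 1-10 scores. No volume dimension."""
    return (impact + confidence + ease) / 3

def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE: multiplies by audience size and divides by effort.
    Units here are illustrative: reach in users/quarter, confidence
    as a fraction (0-1), effort in person-months."""
    return reach * impact * confidence / effort

# The same referral-programme idea through both lenses:
print(round(ice(9, 5, 4), 1))           # three subjective scores, averaged
print(round(rice(2000, 2, 0.5, 3)))     # needs real reach data to be meaningful
```

Note what RICE demands that ICE doesn't: a defensible reach estimate. That's exactly the data early-stage teams usually lack, which is why ICE stays the default for experiment backlogs.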
Common mistakes when using ICE
Scoring by committee. If everyone on the team scores the same idea and averages are taken, politics re-enters through the back door. Have individuals score independently before comparing.
Never updating scores. An experiment that was low-confidence six months ago might have strong industry data behind it now. Revisit scores every quarter.
Treating ICE scores as final answers. ICE is a starting point for the conversation, not a replacement for it. If a 6.5-scored idea clearly addresses a strategic priority that a 7.8-scored idea doesn’t touch, the strategic context matters.
Ignoring cycle time. As with the K-factor, speed compounds. An idea that scores 7.0 and ships in two days beats an idea that scores 7.5 and takes three weeks to test.
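One way to make that cycle-time intuition concrete is to normalise the ICE score by how long the experiment takes to produce a result. To be clear, this velocity adjustment is our illustration, not part of Ellis's framework:

```python
def velocity_adjusted(ice_score: float, days_to_result: int) -> float:
    """ICE score per week of cycle time -- an illustrative extension
    of ICE, not part of the original framework."""
    return round(ice_score / (days_to_result / 7), 2)

fast = velocity_adjusted(7.0, 2)    # ships and reads out in two days
slow = velocity_adjusted(7.5, 21)   # three weeks to a result
print(fast, slow)                   # the two-day test wins by a wide margin
```

The exact numbers matter less than the ordering: dividing by cycle time makes fast, decisive experiments visibly dominant.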
Key takeaways
- ✓ ICE = (Impact + Confidence + Ease) ÷ 3 — a prioritisation framework created by Sean Ellis to help growth teams decide which experiments to run first
- ✓ Confidence is the most important dimension: it forces teams to score their uncertainty honestly and prevents high-excitement, low-evidence ideas from dominating the roadmap
- ✓ ICE surfaces fast, high-confidence experiments — these generate learning velocity, which compounds into better decisions over 6–12 months
- ✓ ICE vs. RICE vs. PIE: use ICE for growth experiment backlogs (fast), RICE for product feature prioritisation (precise), PIE for CRO-specific work
- ✓ Score independently before comparing — group scoring reintroduces the politics ICE was designed to eliminate
- ✓ An idea that ships in two days at a score of 6.5 often delivers more value than a three-week build at 7.5 — cycle time is a multiplier
Matthis Duarte is a senior SEO professional with 12 years of experience. HackingStory.com reverse-engineers how the fastest-growing startups actually grew — with real data, not press releases.
Article 5 — First SEO steps for a new startup
SEO GROWTH · 7 min read · March 2026
The first SEO steps for a new startup — in the right order
Most founders either ignore SEO for the first six months or try to do everything at once. Both approaches waste time. Here’s the exact sequence that builds a compounding organic foundation from day one.
By Matthis Duarte — Senior SEO professional, 12 years of experience across competitive industries
SEO is the only acquisition channel that gets cheaper over time. Paid ads charge you for every click, forever. SEO requires upfront investment — but once pages rank, they generate traffic continuously, often for years, without an additional dollar spent. For a capital-efficient startup, that math is hard to ignore.
The problem is that most founders either dismiss SEO as a long-term play not worth touching early, or try to implement everything at once and end up with a partially configured, inconsistent setup that helps nothing. Both mistakes are costly.
There is a right order. Start with the foundation, earn Google’s trust, then build content on top of infrastructure that can actually rank. Here’s what that looks like in practice.
Step 1: Set up Google Search Console on day one
Google Search Console is free, takes 15 minutes to configure, and is the single most important SEO tool you will ever use. It tells you which keywords your site ranks for, which pages Google is indexing, whether there are crawl errors blocking your content, and how your click-through rates compare to your impressions.
Every other SEO decision you make will eventually be validated or challenged by GSC data. Set it up on the day you launch — or ideally before. Connect it to Google Analytics 4 if you’re running it. Submit your XML sitemap through the GSC interface so Google knows the full scope of what you want crawled.
Step 2: Fix the technical foundation before writing a single article
Content built on a broken technical foundation will never rank to its potential. Before you publish anything, run through this checklist:
| Technical element | Why it matters | How to check |
|---|---|---|
| HTTPS | Google uses it as a ranking signal; non-HTTPS sites show security warnings | Check your URL bar |
| XML sitemap | Tells Google all the pages you want indexed | Submit in GSC |
| Robots.txt | Ensures you’re not accidentally blocking important pages from crawling | yoursite.com/robots.txt |
| Site speed | Core ranking factor; every second of load time increases bounce rate | Google PageSpeed Insights |
| Mobile usability | Google indexes mobile-first; broken mobile = ranking penalty | GSC Mobile Usability report |
| Canonical tags | Prevents duplicate content from diluting your ranking signals | Check via a crawl tool |
| Flat site architecture | No page should be more than 3 clicks from the homepage | Manual audit or Screaming Frog |
If you’re on WordPress, install Yoast SEO or RankMath — both handle sitemap generation, robots.txt, and canonical tags automatically and correctly. Don’t skip this step because it seems unglamorous. A fast, crawlable, properly structured site is the prerequisite for everything that follows.
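The robots.txt check in particular is worth automating, because an accidental `Disallow` on an important directory silently kills indexing. Python's standard library can parse the file directly. A minimal sketch — the robots.txt content and paths below are hypothetical; point it at your own live file when auditing:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt -- swap in the contents of
# yoursite.com/robots.txt when auditing your own domain.
robots_txt = """\
User-agent: *
Disallow: /admin/
Disallow: /checkout/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Pages you need indexed should NOT be blocked:
for path in ["/", "/blog/first-seo-steps", "/admin/"]:
    allowed = rp.can_fetch("*", path)
    print(f"{path}: {'crawlable' if allowed else 'BLOCKED'}")
```

Run this against every page template on the site; any unexpected `BLOCKED` line is a problem to fix before you publish a single article.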
Step 3: Do keyword research before you build your content plan
The most expensive content mistake a startup can make is writing articles before knowing whether anyone is searching for them. Keyword research is not optional — it’s the brief for your entire content operation.
For a new domain, the targeting strategy is clear: go after long-tail keywords with 200–2,000 monthly searches and low keyword difficulty. These are winnable. Head terms (broad, high-volume keywords) are dominated by established domains with years of authority. You will not rank for them in year one, regardless of content quality.
A practical research process using free tools:
- Use Google Search Console (once you have traffic) to find queries where you’re already getting impressions but not ranking well
- Use Google autocomplete and “People also ask” to identify how your audience phrases their questions
- Use Ahrefs Webmaster Tools (free) for keyword difficulty and traffic estimates on your own domain
- Search your target keyword and study the first page — are the results from established giants, or do smaller sites appear? Smaller sites on page one means the keyword is winnable
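The new-domain targeting rule — 200–2,000 monthly searches, low difficulty — is simple enough to apply as a filter over any keyword export. A sketch with hypothetical data; the field names and thresholds are illustrative, and the difficulty cut-off in particular is a judgment call:

```python
# Hypothetical keyword export from a research tool;
# terms, volumes and difficulty scores are illustrative.
keywords = [
    {"term": "ice scoring framework", "volume": 880,   "difficulty": 12},
    {"term": "growth hacking",        "volume": 40500, "difficulty": 78},
    {"term": "ice score template",    "volume": 320,   "difficulty": 8},
    {"term": "prioritisation",        "volume": 6600,  "difficulty": 55},
]

# The new-domain rule from the text: 200-2,000 searches, low difficulty.
winnable = [
    kw for kw in keywords
    if 200 <= kw["volume"] <= 2000 and kw["difficulty"] <= 30
]
for kw in sorted(winnable, key=lambda k: k["volume"], reverse=True):
    print(kw["term"], kw["volume"])
```

The head terms fall out immediately — exactly the ones a year-one domain cannot win — leaving a shortlist you can actually rank for.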
Step 4: Build topical authority through content clusters — not random articles
A startup blog that publishes one article per week across five different topics will build authority in none of them. Google rewards depth. The right approach is to pick 3–5 tightly defined topics your product or brand can genuinely own, build a pillar page for each, and then produce spoke articles targeting every related long-tail keyword within that cluster.
Every spoke article links back to its pillar. The pillar links out to every spoke. This hub-and-spoke architecture signals to Google that you are a comprehensive resource on the subject — not a generalist blog.
| Wrong approach | Right approach |
|---|---|
| 1 article per topic across 10 topics | 10 articles on 1 topic with internal linking |
| Writing what’s interesting to you | Writing what your audience searches for |
| Pillar page without spoke articles | Full cluster: pillar + 8–12 spokes |
| Spokes with no internal links | Every spoke links to pillar + related spokes |
Step 5: Optimise every page for search intent, not just keywords
Ranking requires matching what Google believes users want, not just including the right words. Before finalising any article, search your target keyword and look at the top 5 results. Ask:
- Are they listicles, guides, comparisons, or landing pages? Match the format.
- How long are they? Don’t write 800 words when the top results are 2,500.
- What subtopics do they all cover? If four of the five results include a comparison table, yours probably should too.
- What’s the tone — technical, beginner-friendly, conversational?
One of the most common ranking failures on new domains is publishing well-written content in the wrong format for the search intent. A product page trying to rank for “best [category] tools” will almost never beat a listicle. A 500-word overview trying to rank for “complete guide to X” has no chance against a 3,000-word comprehensive resource.
Step 6: Build internal links from every new article on publish day
Internal links are free PageRank distribution. Every time you publish a new article, spend 10 minutes identifying 3–5 existing articles it should link to — and adding a link from each of those articles back to the new one.
This is the step that 90% of startup content teams skip entirely. Without internal linking, your articles exist in isolation. With it, every new piece of content strengthens the ones around it.
“Your internal linking architecture is the invisible infrastructure that determines whether your content compounds or sits in isolation. Build it from day one.”
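Finding those 3–5 linking opportunities doesn't have to be manual. A crude but effective approach is to search your existing content for mentions of the new article's target keywords. This sketch is illustrative — the mini content library and the `link_candidates` helper are hypothetical, and real sites would scan rendered pages or a CMS export:

```python
# Hypothetical mini content library: slug -> article body.
articles = {
    "ice-framework": "The ICE scoring framework ranks growth experiments...",
    "rice-vs-ice": "RICE adds a reach dimension to experiment scoring...",
    "onboarding-checklist": "A progress bar nudges users through onboarding...",
}

def link_candidates(new_keywords: list[str], library: dict[str, str]) -> list[str]:
    """Return slugs of existing articles that mention any of the new
    article's target keywords -- natural places to add an internal link."""
    hits = []
    for slug, body in library.items():
        text = body.lower()
        if any(kw.lower() in text for kw in new_keywords):
            hits.append(slug)
    return hits

print(link_candidates(["scoring", "experiment"], articles))
```

Every slug it returns is a page where a contextual link to the new article will read naturally — add the link there on publish day.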
Step 7: Track, analyse, and update — don’t just publish and move on
SEO is not a publish-and-forget discipline. Once GSC starts returning data (typically 2–4 weeks after publishing), review it weekly:
- Which articles are generating impressions but low clicks? The title or meta description needs work.
- Which articles are ranking on page 2? A content update and expansion could push them to page 1.
- Which queries are you ranking for that you didn’t explicitly target? Create new articles to capture those.
Updating an existing article that already has some authority is often faster and higher-ROI than publishing something new. This is one of the most underused SEO levers for startups — and one of the most consistently effective.
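That weekly triage can be scripted against a GSC performance export. A sketch with hypothetical rows — the pages, numbers, and the 1% CTR threshold are illustrative, not GSC defaults:

```python
# Hypothetical rows from a GSC performance export;
# pages and numbers are illustrative.
gsc_rows = [
    {"page": "/blog/ice-framework",  "impressions": 9500, "clicks": 95, "position": 8.2},
    {"page": "/blog/seo-steps",      "impressions": 4200, "clicks": 12, "position": 6.1},
    {"page": "/blog/referral-loops", "impressions": 2100, "clicks": 60, "position": 14.5},
]

# High impressions but under 1% CTR: the title/meta description needs work.
low_ctr = [r["page"] for r in gsc_rows
           if r["impressions"] > 1000 and r["clicks"] / r["impressions"] < 0.01]

# Average position 11-20 means page 2: an update could push it to page 1.
page_two = [r["page"] for r in gsc_rows if 11 <= r["position"] <= 20]

print("Rewrite title/meta:", low_ctr)
print("Expand and update:", page_two)
```

Ten minutes with this output each week tells you exactly which existing articles to update before writing anything new.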
Key takeaways
- ✓ Set up Google Search Console on day one — it’s free, essential, and every SEO decision you make should eventually be validated by its data
- ✓ Fix the technical foundation before publishing content: HTTPS, sitemap, robots.txt, site speed, mobile usability, and flat architecture
- ✓ Do keyword research before writing anything — only target long-tail keywords with 200–2,000 monthly searches on a new domain; head terms are unwinnable early
- ✓ Build topical authority through content clusters: pick 3–5 tightly defined topics, create a pillar page for each, and produce 8–12 spoke articles per pillar
- ✓ Match search intent before finalising any article — format, length, and subtopics should mirror what Google is already rewarding for that query
- ✓ Internal linking is free PageRank distribution — build it from every new article on publish day, and update existing articles to link to new ones