SaaS Feature Prioritization Framework - RICE Scoring Worksheet
A formula-based worksheet for prioritizing features using reach, impact, confidence, and effort so roadmap discussions are less subjective.
Built for practical use. This worksheet covers:
- Reach / impact / confidence / effort scoring
- A worked example
- A workshop agenda
- Priority ranking
Score Your Feature Backlog
Add your candidate features and let the worksheet calculate score and rank automatically.
| Feature | Reach | Impact | Confidence | Effort | RICE Score |
|---|---|---|---|---|---|
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
About RICE
RICE is a prioritization framework developed by Sean McBride at Intercom in 2016. It was designed to help Intercom's product team make more objective, data-driven decisions about what to build next. Since then, RICE has become one of the most widely adopted prioritization methods in SaaS product management — used by companies from early-stage startups to Fortune 500s.
RICE stands for:
- Reach
- Impact
- Confidence
- Effort
Formula: RICE Score = (Reach × Impact × Confidence) / Effort
The higher the score, the higher the priority.
Who this is for: Product managers, founders, design leads, and engineering leads who need to decide what to build next from a backlog of ideas.
What this worksheet does:
- Explains each RICE factor with clear scoring guidelines
- Provides the exact scoring scales used at Intercom
- Walks through worked examples
- Gives you a ready-to-use spreadsheet template
- Shares common pitfalls and when RICE is/isn't appropriate
Part 1: The Four Factors Explained
R: Reach
Definition: How many people will this initiative affect within a specific time period?
How to score:
- Pick a timeframe (typically 1 quarter or 1 month)
- Estimate the number of users, customers, or events the feature will impact in that period
- Use real numbers from your analytics (not guesses)
Examples of reach:
- "500 customers per quarter" — if the feature appears in a flow 500 customers complete monthly
- "150 new signups per month" — if the feature is part of onboarding and 150 people sign up monthly
- "2,000 page views per month" — if the feature affects a page with that traffic
Rules:
- Use the same timeframe for all features you're comparing
- Be honest — don't count your entire user base if only 10% will ever see the feature
- Distinguish between one-time reach (launch) and recurring reach (ongoing)
Common mistake: Assuming 100% of users will encounter a new feature. Most features affect only a subset.
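A quick illustration (numbers are hypothetical): if 600 customers complete the affected flow each month but only about 40% of them will actually encounter the new feature, quarterly reach is roughly 600 × 3 × 0.40 = 720 customers per quarter, not 1,800.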
I: Impact
Definition: How much will this initiative impact each person who encounters it?
Intercom's official 5-tier scoring scale (use this):
| Impact Level | Score | Description |
|---|---|---|
| Massive | 3 | Game-changing; dramatic improvement in user experience or business metrics |
| High | 2 | Significantly better; clear improvement |
| Medium | 1 | Moderate improvement (baseline — the default) |
| Low | 0.5 | Small improvement; nice to have |
| Minimal | 0.25 | Barely noticeable; minor fix |
How to score:
- Consider the magnitude of change the feature creates for each affected user
- Think about both immediate impact and long-term behavior change
- Use quantitative data when possible (e.g., "will likely reduce time-to-complete by 40%")
Examples:
- Massive (3): Adding a feature that unblocks a common frustration and enables a workflow that wasn't possible before
- High (2): Making a complex process 2x faster
- Medium (1): A meaningful but incremental improvement to an existing workflow
- Low (0.5): Polish improvements, minor UX fixes
- Minimal (0.25): Tiny visual tweaks, edge case fixes
Common mistake: Defaulting everything to "High" impact. Be disciplined — most features are "Medium" at best.
C: Confidence
Definition: How sure are you about your Reach, Impact, and Effort estimates?
Intercom's official 3-tier scoring scale (use this):
| Confidence Level | Score | When to Use |
|---|---|---|
| High | 100% | Solid data from user research, A/B tests, or clear analytics |
| Medium | 80% | Some data supports this, but parts are based on assumption |
| Low | 50% | Mostly based on intuition; limited data to back it up |
Anything below 50% is what Intercom calls a "moonshot" — too speculative to prioritize via RICE. When plugging confidence into the formula, use the decimal form (100% → 1.0, 80% → 0.8, 50% → 0.5).
How to score:
- Ask: "If I presented this score to a skeptical executive, could I defend my assumptions with data?"
- Be honest about what's backed by research vs. what's a guess
- Don't inflate confidence to make a feature look better
Confidence anchors:
- 100% High: You have A/B test data, analytics from a similar feature, or direct user research validating both reach and impact
- 80% Medium: You have solid data for 2 of the 3 estimates (R, I, E) and reasonable assumptions for the third
- 50% Low: You have assumptions for most estimates or you're exploring a completely new area
Common mistake: Treating confidence as optional. Low confidence should significantly reduce a feature's priority — that's the whole point of this factor.
E: Effort
Definition: How many "person-months" will this take, totaled across design, engineering, testing, and product?
How to score:
- Estimate person-months (PM) across all roles
- Use whole numbers for anything over a month
- Use 0.5 for anything under a month
- Don't get into the weeds of precise estimates — ballpark is fine
Examples:
- 0.5 PM: A 1-2 week tweak or small feature
- 1 PM: ~1 month of total team effort
- 2 PM: ~2 months of work split across the team
- 5 PM: A major initiative requiring the full team for over a month
Who contributes to effort? Usually:
- Product manager (for spec, requirements, QA)
- Designer (for UX, UI, prototyping)
- Engineer(s) (frontend, backend)
- QA / tester
- Other specialists (data, infra) if applicable
Common mistake: Underestimating effort. Always ask engineers for their estimate — don't guess.
Part 2: The RICE Formula
The formula:
RICE Score = (Reach × Impact × Confidence) / Effort
Example calculation:
A new feature is estimated to:
- Reach 500 users per quarter
- Have "High" impact (2)
- Medium confidence (80%)
- Take 2 person-months
RICE Score = (500 × 2 × 0.80) / 2 = 800 / 2 = 400
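If you'd rather script the math than use a spreadsheet, here is a minimal Python sketch of the same calculation (the function name and structure are illustrative, not part of the original worksheet):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE Score = (Reach x Impact x Confidence) / Effort."""
    return (reach * impact * confidence) / effort

# The example above: reach 500/quarter, High impact (2), 80% confidence, 2 person-months.
print(rice_score(reach=500, impact=2, confidence=0.80, effort=2))  # 400.0
```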
Interpreting scores:
- There's no absolute "good" or "bad" RICE score
- Scores are only meaningful in comparison to other features in your backlog
- Rank all features by RICE score; prioritize from top to bottom
Part 3: Worked Example — 5 Feature Comparison
A product team is deciding between 5 feature ideas. Here's how RICE helps prioritize:
Feature A: Improve Email Verification Flow
- Reach: 1,200/mo (new signups)
- Impact: 2 (High — reduces early churn)
- Confidence: 100% (data from signup funnel analysis)
- Effort: 1 PM
- RICE: (1200 × 2 × 1.0) / 1 = 2,400
Feature B: Build AI-Powered Report Generator
- Reach: 200/mo (power users who generate reports)
- Impact: 3 (Massive — saves hours per week)
- Confidence: 50% (speculative — unclear if users want AI here)
- Effort: 5 PM
- RICE: (200 × 3 × 0.50) / 5 = 60
Feature C: Add Dark Mode
- Reach: 2,500/mo (all active users eventually)
- Impact: 0.5 (Low — polish, not core value)
- Confidence: 80% (user requests are frequent)
- Effort: 1 PM
- RICE: (2500 × 0.5 × 0.80) / 1 = 1,000
Feature D: Redesign Dashboard
- Reach: 2,500/mo (all active users)
- Impact: 1 (Medium — improved but not game-changing)
- Confidence: 80% (user research supports direction)
- Effort: 4 PM
- RICE: (2500 × 1 × 0.80) / 4 = 500
Feature E: Add Slack Integration
- Reach: 800/mo (estimated users who'd use Slack integration)
- Impact: 2 (High — unblocks workflow)
- Confidence: 100% (top-requested feature in surveys)
- Effort: 2 PM
- RICE: (800 × 2 × 1.0) / 2 = 800
Priority Order (by RICE score):
- Feature A: Email Verification Flow — 2,400
- Feature C: Dark Mode — 1,000
- Feature E: Slack Integration — 800
- Feature D: Dashboard Redesign — 500
- Feature B: AI Report Generator — 60
Insights from this example:
- Feature A wins easily because it affects all new users with high confidence and low effort
- Feature B (the "exciting" AI idea) scores low because of high effort + low confidence
- Feature C (Dark Mode) scores surprisingly high — easy to build, high reach — but delivers low impact
- This comparison forces the team to recognize that Dark Mode's "high priority" from user requests might not match business impact
This is the value of RICE: It surfaces trade-offs that intuition misses.
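As a sanity check, the same comparison can be scripted. This sketch simply re-computes the five scores above and sorts them (the data layout is one convenient choice, not a prescribed format):

```python
# Feature: (reach per month, impact, confidence, effort in person-months), from Part 3.
features = {
    "A: Email Verification Flow": (1200, 2, 1.00, 1),
    "B: AI Report Generator": (200, 3, 0.50, 5),
    "C: Dark Mode": (2500, 0.5, 0.80, 1),
    "D: Dashboard Redesign": (2500, 1, 0.80, 4),
    "E: Slack Integration": (800, 2, 1.00, 2),
}

# Score each feature, then print highest-priority first.
scores = {name: (r * i * c) / e for name, (r, i, c, e) in features.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:,.0f}")
# A: 2,400 -> C: 1,000 -> E: 800 -> D: 500 -> B: 60
```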
Part 4: RICE Worksheet Template
Copy this into Google Sheets, Airtable, or Notion:
| Feature | Reach (#/period) | Impact (0.25/0.5/1/2/3) | Confidence (0.5/0.8/1.0) | Effort (PM) | RICE Score | Priority Rank |
|---|---|---|---|---|---|---|
| Feature 1 | | | | | =(B×C×D)/E | |
| Feature 2 | | | | | | |
| Feature 3 | | | | | | |
| Feature 4 | | | | | | |
| Feature 5 | | | | | | |
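In Google Sheets or Excel, assuming the layout above starts in row 2 with Reach in column B through Effort in column E, the RICE Score cell can hold `=(B2*C2*D2)/E2` (copy it down the column), and the Priority Rank column can use `=RANK(F2, $F$2:$F$6)` so the highest score ranks first.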
Add columns for:
- Feature owner
- Target sprint/quarter
- Status (Backlog / In Progress / Done)
- Notes/assumptions
- Success metric (how you'll measure if the feature worked)
Part 5: When RICE Works Best (And When It Doesn't)
RICE works well for:
- ✅ Comparing similar types of features
- ✅ Building a quarterly roadmap
- ✅ Removing bias from prioritization debates
- ✅ Creating a shared team language for "what's important"
- ✅ Small-to-medium initiatives (1-10 PM of effort)
RICE doesn't work well for:
- ❌ Strategic bets / moonshot initiatives (confidence always too low)
- ❌ Platform investments (reach is hard to quantify — benefits are indirect)
- ❌ Technical debt (impact isn't directly user-facing)
- ❌ Legal/compliance work (not optional regardless of score)
- ❌ Very large initiatives that span quarters
- ❌ Creative/brand initiatives (hard to quantify impact)
For things RICE doesn't capture well:
- Strategic initiatives: Use a separate "strategic roadmap" with its own prioritization
- Tech debt: Allocate a fixed percentage of capacity (e.g., 20% of every sprint)
- Compliance/legal: Non-negotiable — do these regardless of RICE
- Moonshots: Reserve ~10% of capacity for high-risk, high-reward bets
Part 6: Common RICE Pitfalls
Pitfall 1: Inflating confidence to boost scores
If everything is "High" confidence, the factor adds no value. Be disciplined.
Pitfall 2: Underestimating effort
Always get engineering estimates. Designers and PMs consistently underestimate.
Pitfall 3: Using RICE for everything
RICE is one tool — not the only one. Strategic bets, tech debt, and compliance need different treatment.
Pitfall 4: Not revisiting scores
Reach and impact estimates should be validated after shipping. If you were wrong, adjust your future estimates.
Pitfall 5: Treating RICE as the final word
RICE informs decisions — it doesn't make them. Use judgment about things the numbers don't capture (brand fit, team morale, learning value).
Pitfall 6: Letting one person score everything
Collaborative scoring (PM + designer + engineer) surfaces different perspectives and assumptions.
Pitfall 7: Scoring in isolation
Score features as a batch — not one at a time. Comparison forces calibration.
Part 7: RICE Scoring Workshop Agenda
Run this workshop quarterly to prioritize your backlog:
Participants: PM, Design lead, Engineering lead, optional CEO/founder
Time: 2 hours
Prep: A list of 10-20 candidate features with brief descriptions
Agenda:
0:00–0:15 — Review Methodology
- Recap what Reach, Impact, Confidence, Effort mean
- Align on the timeframe for reach (e.g., "per quarter")
- Agree on impact scale interpretations
0:15–1:00 — Score Each Feature
- For each feature, discuss Reach, Impact, Confidence, Effort
- Use data when available
- When team disagrees, discuss — don't average
- Document assumptions ("We assumed 500 users because X")
1:00–1:20 — Calculate & Rank
- Compute RICE scores
- Rank features high-to-low
- Review the ranking — does it match intuition?
1:20–1:40 — Sanity Check
- Anything unexpectedly high? (Might be over-scored)
- Anything unexpectedly low? (Might have unmeasured strategic value)
- Adjust if clearly wrong — but require justification
1:40–2:00 — Commit to Top N
- Based on quarterly capacity, commit to top features
- Remaining features stay in backlog for next quarter
- Assign owners and rough timelines
Part 8: Alternative Prioritization Frameworks
RICE isn't the only option. When RICE doesn't fit, consider:
MoSCoW
- Must have / Should have / Could have / Won't have
- Good for release planning
- Less objective than RICE
Kano Model
- Basic expectations / Performance attributes / Delighters
- Good for new product development
- Focuses on customer satisfaction
Value vs. Effort Matrix
- Simple 2x2 plot (high/low value, high/low effort)
- Faster than RICE, less precise
- Good for quick triage
ICE (simplified RICE)
- Impact × Confidence × Ease
- Like RICE without Reach
- Good for small teams / early stage
Opportunity Scoring
- Opportunity = Importance + (Importance − Satisfaction)
- Focuses on underserved needs
- Good for market research-driven prioritization
Sources and References
- Sean McBride, "RICE: Simple Prioritization for Product Managers," Intercom (original article, 2016)
- Intercom Product Team, "How We Made Our Prioritization Framework"
- ProductPlan, "RICE Scoring Model" documentation
- Dan Olsen, "The Lean Product Playbook"
- Marty Cagan, "Inspired: How to Create Tech Products Customers Love"
Created by Desisle — SaaS UI/UX Design Agency
desisle.com | hello@desisle.com
Free to use and share with attribution.
For a custom product prioritization workshop, contact us at hello@desisle.com.
Keep Building With These Next
Competitive UX Analysis Template - How to Audit Your Competitors' Design
A structured way to compare 3 to 5 competitors across onboarding, navigation, dashboard design, mobile UX, pricing, discoverability, and error handling.
SaaS Pricing Page Design Guide - The Page That Makes or Loses You Money
A pricing page guide covering tier architecture, comparison tables, anchor pricing, highlighted plans, trial psychology, enterprise CTAs, FAQ placement, and trust signals.
UX Heuristic Evaluation Template - Nielsen's 10 Heuristics Applied to SaaS
A ready-to-use spreadsheet for heuristic evaluation with Nielsen's heuristics, severity levels, a findings log, and a priority matrix built in.
Need This Applied to Your Product? We'll Turn It Into Execution.
These resource pages are meant to be used hands-on. If you want the audit, plan, or framework translated into live product work, we can do that with your team.