
How to decide what to fix first (when everything feels broken)

Fix the earliest broken step in your funnel first. Everything downstream of a broken step is built on a leaky foundation. Prioritization is not about picking the most interesting problem — it is about finding the one whose fix unlocks the most value in everything that follows.

analytics · founders · decisions · prioritization
[Figure: vertical funnel — traffic passes (green check), conversion is broken (red marker), activation and retention muted downstream; an arrow points to conversion as the first fix]

You open your analytics. Traffic is lower than you want. Conversion rate is sitting at 0.9%. Activation is around 22%. You got two user complaints about a confusing feature this week. Your retention chart is flat. Someone on your team mentioned the pricing page could be better.

You have six things to fix. You have one week. You have no idea where to start.

So you start with whatever feels most urgent. Or the thing a user mentioned. Or the page you have been meaning to clean up for months. You work hard all week. You check the numbers on Friday.

Nothing meaningful moved.

This is not bad luck. It is the predictable result of fixing things in the wrong order. Prioritization is not just a productivity question. For early-stage products, it is the difference between work that compounds and work that disappears.

Why everything breaks at once

Early products have multiple simultaneous problems because they were built quickly, with limited real user feedback, under the pressure of shipping. You built the best product you could with the information you had. Now the information has arrived, in the form of analytics, and it is showing you where the gaps are.

This is normal. It does not mean the product is failing. It means the product is alive and generating data, and data is honest about problems.

The trouble is that analytics shows you everything simultaneously but gives you no priority order. A dashboard that displays traffic, conversion rate, activation rate, and retention in the same view implies they are all equally important and equally ready to fix. They are not.

Treating them as equals is where the waste begins.

Why fixing random things does not work

When problems are tackled in no particular order, three things go wrong.

You cannot measure what moved. If you change the landing page headline, reduce onboarding steps, and update the email sequence in the same week, and your activation rate goes up by 8%, you learned almost nothing. You do not know what caused it. You cannot reproduce it deliberately. You cannot decide what to do next with any confidence. The learning from each change is consumed by the noise of the others.

You fix symptoms instead of causes. A low retention rate is almost always a symptom. The cause is usually that users never got enough value in their first session to have a reason to come back. Pouring effort into retention campaigns, re-engagement emails, and push notifications before fixing activation is papering over a problem that will keep reasserting itself.

You work on low-leverage things. Redesigning a page that 4% of your users see will not move aggregate metrics regardless of how good the redesign is. Time spent on low-reach changes is time not spent on the bottleneck that is actually limiting your growth.

The upstream first rule

Your product has a funnel. Visitors arrive, some convert to signups, some activate, some return. Each stage depends on the one before it. This creates a clear priority logic: a broken step early in the funnel limits everything downstream of it.

If traffic is your constraint (fewer than 200 visitors per week), conversion rate optimization is premature. With 200 visitors and a 3% rate, you get 6 signups per week. Even if you somehow doubled conversion to 6%, you get 12 signups per week. The absolute numbers are too small to matter. The constraint is upstream.

If conversion is your constraint (traffic is adequate, but under 2% of visitors sign up), optimizing activation is premature. You need more signups before activation rate becomes a meaningful number to improve.

The rule: identify the first stage in your funnel that is clearly outside a reasonable range, and work there before touching anything downstream.

Rough benchmarks for each stage:

  • Traffic: under 200 visitors per week means traffic is the constraint
  • Conversion: under 2% signup rate means conversion is the constraint
  • Activation: under 25% means onboarding is the constraint
  • Retention: under 20% day-7 means the product experience is the constraint

Find the first broken stage. Start there. Do not work downstream until it is in a reasonable range.
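The "first broken stage" check is mechanical enough to sketch in a few lines. This is a minimal illustration, not a library: the thresholds are the rough benchmarks above, and the `first_broken_stage` helper and its metric names are made up for this example.

```python
# Walk the funnel in order and return the first stage outside a
# reasonable range. Thresholds are the rough benchmarks from this
# post, not universal constants — calibrate them to your market.
FUNNEL_CHECKS = [
    ("traffic",    lambda v: v >= 200),   # visitors per week
    ("conversion", lambda v: v >= 0.02),  # visitor -> signup rate
    ("activation", lambda v: v >= 0.25),  # signup -> activated rate
    ("retention",  lambda v: v >= 0.20),  # day-7 retention
]

def first_broken_stage(metrics):
    """metrics: dict like {'traffic': 450, 'conversion': 0.009, ...}.
    Returns the first stage below its benchmark, or None if all pass."""
    for stage, ok in FUNNEL_CHECKS:
        if not ok(metrics[stage]):
            return stage
    return None

# Numbers from Scenario A later in this post: traffic is fine,
# conversion (0.9%) is the first stage below range.
print(first_broken_stage(
    {"traffic": 450, "conversion": 0.009, "activation": 0.38, "retention": 0.24}
))  # conversion
```

Everything downstream of the returned stage is ignored on purpose: a 38% activation rate is irrelevant while conversion is the constraint.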

Evaluating individual problems

Once you know which stage of the funnel to focus on, you may still face multiple candidate fixes within that stage. A quick three-part test helps you pick the right one.

Volume: how many users does this problem affect?

A problem that 400 users hit every week is more important than a problem 12 users hit. Even if both have the same fix difficulty, fixing the first one moves aggregate metrics. Fixing the second one does not register.

Impact: how much does fixing this change the metric that matters?

Some fixes are adjacent to the bottleneck, not the bottleneck itself. Adding a tooltip to a confusing field might help slightly. Removing three unnecessary fields from the signup form entirely might double completion rates. Both touch the signup form. Only one is the leverage point.

Effort: how long does this actually take to fix?

A fix that takes half a day to ship can be measured in a week. A fix that takes three weeks to build delays your learning cycle by three weeks. When impact and effort are both high, the fix is worth doing but needs to be scheduled. When impact is high and effort is low, it is your next action.

[Figure: impact vs effort matrix — high impact / low effort: Fix First; high impact / high effort: Plan It; low impact / low effort: Maybe; low impact / high effort: Ignore]

The most useful fixes are the high-impact, low-effort ones. They exist more often than founders expect. Removing email verification before the user can access the product. Cutting three fields from a signup form. Adding sample data to replace an empty state. None of these require a week of engineering. All of them meaningfully reduce friction at a high-volume step.
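The matrix above can be made concrete as a toy classifier. The function name and both cutoffs are illustrative assumptions, not a standard scoring method: you would set the cutoffs from your own backlog, and "impact" here is your own estimate of relative lift on the bottleneck metric.

```python
def quadrant(impact, effort, impact_cutoff=0.5, effort_cutoff=1.0):
    """Classify a candidate fix into one of the four quadrants.
    impact: estimated relative lift on the bottleneck metric (0..1)
    effort: estimated days to ship
    Both cutoffs are illustrative, not universal."""
    high_impact = impact >= impact_cutoff
    low_effort = effort <= effort_cutoff
    if high_impact and low_effort:
        return "fix first"
    if high_impact:
        return "plan it"
    if low_effort:
        return "maybe"
    return "ignore"

# Cutting three signup-form fields: big expected lift, half a day of work.
print(quadrant(impact=0.8, effort=0.5))  # fix first
```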

The root cause test

Before committing to any fix, ask one question: is this the cause, or is it a downstream effect of something else?

Low retention is almost always downstream. If you check activation and find it is 18%, retention will be low regardless of how good your email sequences are. The product is not giving users a reason to return because they never fully experienced it. Fix activation first. Retention typically improves as a result without direct intervention.

Low conversion rate is sometimes downstream. If 70% of your traffic comes from high-bounce, low-intent sources (a Hacker News spike, referral spam, a misdirected social post), the conversion rate is a symptom of a traffic quality problem, not a landing page problem. Adding social proof or rewriting the headline will not move the number because the audience mismatch is the actual issue.

Finding the specific drop-off step tells you where users are leaving. That is useful input. But it does not tell you whether that drop-off is the root cause or a downstream effect of something further up the funnel. Asking "what would have to change upstream for this problem to disappear?" usually reveals which it is.

The three mistakes that look like prioritization

These patterns feel like good prioritization decisions. They are not.

Fixing what a user complained about

A user emails you that a specific feature is confusing. You spend three days redesigning it. But that user was one of 400 who used that feature this month. The other 399 had no complaint. Meanwhile, 62% of visitors are bouncing from your pricing page without reaching the signup form. The complaint was a signal from one person. The data is a signal from hundreds. The data wins.

This does not mean ignoring user feedback. It means verifying that the feedback is supported by data before treating it as a priority. If one complaint shows up, add it to a list. If five separate users mention the same friction point, and the data shows a drop-off there, it is real.

Fixing what is most visible

The homepage gets the most traffic, so founders spend disproportionate time on it. The hero section gets rewritten. The layout gets iterated. The CTA button gets tested in five different colors.

But if the homepage is already converting at 3.8% and the problem is a signup form that converts at 11% when similar products convert at 35%, the homepage is not the bottleneck. You are spending time on the most visible step because it feels high-leverage, not because the data says it is.

Tracking where visitors actually drop off fixes this. Once you know that 600 visitors reach the pricing page but only 80 click through to signup, you stop guessing where the problem is.

Fixing what is interesting to work on

This is the most honest mistake. Redesigning the dashboard is more interesting than shortening the signup form. Building a new feature is more interesting than removing fields from an onboarding questionnaire. The best prioritization work is often the least interesting work: simplify, shorten, remove.

If you notice that your fix list keeps trending toward things you find engaging rather than things the data points at, that is a signal to look at the data again and override the instinct.

Using the data you already have

You do not need a complex analytics setup to prioritize well. You need two numbers at each stage of your funnel: how many users entered that stage, and how many passed through to the next one.

Compare those counts sequentially. The stage with the largest absolute gap is your biggest lever. Not the lowest conversion rate. The largest absolute number of users who stopped.

If 600 visitors hit your homepage and 480 view the pricing page, that is a 120-user gap at a 20% drop rate. If 480 view pricing and 55 reach the signup form, that is a 425-user gap at an 89% drop rate. Even though the homepage drop looks alarming as a step, the pricing-to-signup gap affects more than three times as many users.

Work on the biggest absolute gap first. This is the core of drop-off analysis, and it is the most reliable input for prioritization decisions.
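The two numbers per stage are all the computation needs. A minimal sketch of the drop-off analysis, using the example counts from this section (the stage names and structure are illustrative):

```python
# Users entering each stage of the funnel, in order.
funnel = [
    ("homepage",    600),
    ("pricing",     480),
    ("signup form",  55),
]

# For each transition, compute the absolute gap and the drop rate,
# then sort by absolute gap: the biggest lever comes first.
gaps = []
for (stage, entered), (next_stage, passed) in zip(funnel, funnel[1:]):
    gaps.append({
        "transition": f"{stage} -> {next_stage}",
        "gap": entered - passed,
        "drop_rate": round((entered - passed) / entered, 2),
    })

gaps.sort(key=lambda g: g["gap"], reverse=True)
for g in gaps:
    print(g)
# {'transition': 'pricing -> signup form', 'gap': 425, 'drop_rate': 0.89}
# {'transition': 'homepage -> pricing', 'gap': 120, 'drop_rate': 0.2}
```

Sorting by `gap` rather than `drop_rate` is the whole point: a scary percentage on a low-volume step still loses fewer users than a moderate percentage on a high-volume one.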

The one-fix-at-a-time rule

Once you have identified the priority, fix one thing. Not a batch of improvements. Not a round of changes. One thing.

The reason is measurement. If you change the signup form, reduce the onboarding steps, and update the first-session email in the same week, and activation goes from 22% to 31%, you cannot reproduce the result or explain it. The learning is lost because you changed three variables.

If you change only the signup form and activation goes from 22% to 31%, you have a clear result. You know what worked. You can follow the thread. You can decide whether to make a second change in the same area or shift focus.

One fix per week feels slower but is faster in practice, because every iteration produces usable learning. A string of three unexplained improvements is less valuable than three clearly explained ones.

Once you have identified the right fix and shipped it, the three-question framework is the best way to evaluate the result: what happened, why, and what to do next. Prioritization tells you which problem to work on. That framework tells you how to read the outcome.

Three scenarios

Scenario A: Traffic 450/week. Conversion 0.9%. Activation 38%. Retention 24%.

Conversion is the constraint. Traffic is sufficient. Activation and retention are healthy given the users who do make it through. The entire problem is that almost no one signs up. Find the biggest drop between homepage and signup form using basic drop-off analysis. That step is your fix. Do not touch activation or retention.

Scenario B: Traffic 60/week. Conversion 4.5%. Activation 41%. Retention 26%.

Traffic is the constraint. Everything downstream is performing reasonably well. Doubling conversion from 4.5% to 9% at this volume produces 3 extra signups per week. That is not meaningful growth. The leverage is entirely in distribution: a new content channel, another launch, a partnership. Ship one piece of content targeted at the right audience. Do not optimize the landing page.

Scenario C: Traffic 520/week. Conversion 3.2%. Activation 17%. Retention 29% for activated users.

Activation is the constraint. Traffic and conversion are reasonable. Retention for activated users is decent, which confirms the product works for users who get to the core value. The problem is the 83% of signups who never get there. The fix is onboarding: find the step where most users stop, and remove it or replace it with something that gets users to first value faster. Do not work on retention until activation is above 30%.

When the data is not enough

There are situations where prioritization from analytics does not produce a clear answer. Usually this means volume is very low (under 100 visitors per week), numbers fluctuate too much to be reliable, or multiple metrics are equally broken with no obvious upstream cause.

In these cases, the right move is talking to users. Find 5 to 10 people who signed up recently, or people who visited and did not convert, and ask them what happened. Ask what they were trying to do, where they got confused, and what would have made them stay. A single conversation often reveals the root cause that three weeks of dashboard analysis would not.

This is not a failure of analytics. It is an acknowledgment that below a certain volume, qualitative signal is more reliable than quantitative. The right tool depends on the size of your data.

The weekly prioritization habit

What this looks like in practice: once a week, before opening any dashboard, write down one question you want answered. "Is the drop between pricing and signup still 85%?" or "Did removing the email verification step change activation?"

Then open the data, answer the question, make one decision, and act on it. The routine of translating data into a single decision is what separates founders who improve their product steadily from ones who check analytics often and change nothing.

You do not need a prioritization meeting or a feature roadmap to decide what to fix next. You need to know where your funnel is breaking, which break is furthest upstream, and the discipline to work on that one thing until the number moves.

When it does, move to the next one.

Frequently asked questions

How do I decide what to fix first?

Work upstream. Check each stage of your funnel in order: traffic, then conversion, then activation, then retention. The first stage that falls outside a reasonable range is your priority. Traffic under 200 visitors per week means traffic is the constraint. Conversion below 2% means conversion is the constraint. Do not work on downstream stages until the upstream one is in a reasonable range.

Should I act on user complaints or on the data?

Both are signals, but data beats individual feedback at scale. A user complaint tells you that one person had a specific experience. Analytics data tells you how many users are affected and how severely. Before acting on any complaint, check whether the data corroborates it. If 10% of users hit the same friction point, fix it. If it happened once and no data supports it, add it to a list and revisit.

How long should I wait before judging whether a fix worked?

At least one full week after the fix goes live, and ideally two weeks. Changes take time to accumulate enough data to be meaningful. If you check after 48 hours and the metric has not moved, that is normal variance, not a sign the fix did not work. Patience with the measurement cycle is as important as speed of execution.

What if I do not have enough traffic to measure anything?

Below about 200 visitors per week, quantitative prioritization is unreliable. Numbers fluctuate too much for small changes to be measurable. In this case, prioritize getting more traffic over optimizing anything downstream. Alternatively, shift to qualitative methods: talk to 5 to 10 users or non-users. Conversations reveal root causes that data cannot at low volume.

Can I fix more than one thing at a time?

Only if you do not need to measure the individual impact of each fix. If a change is clearly wrong and needs reverting regardless (a broken mobile layout, a confusing error message), fix it without waiting to measure. But if you are trying to improve a metric, fix one thing at a time. Multiple simultaneous changes make measurement impossible and learning slow.

How do I tell a root cause from a symptom?

Ask what would have to be true for this problem to disappear on its own if something upstream were fixed. Low retention usually disappears when activation improves. Low activation often disappears when onboarding friction is removed. If the problem is downstream of another broken step, fix the upstream problem first. The symptom may resolve without direct attention.

Try Muro on your own product

Start your 30-day free trial. No credit card required.

$5/month after the trial. Cancel anytime.