Outcomes Over Outputs: The Second Principle of THE OAK'S LAB WAY
Most product development teams are productive. They ship features, hit sprint commitments, and keep velocity trending upward. Unfortunately, a lot of these same teams are also building products that will fail.
What separates teams that stay busy from teams that actually move business metrics? It comes down to knowing what to measure and how to act on it. In THE OAK'S LAB WAY, we call this principle Outcomes Over Outputs. It's the principle that separates product development that adds real value from product development that burns cash and resources.
Key Takeaways
- Outcomes Over Outputs is the second of our five product principles. It connects every development effort to measurable business results and eliminates work that doesn't meaningfully contribute to them.
- Output metrics (velocity, features shipped, story points completed) can mask a complete lack of business progress. A team can be highly productive and still build the wrong things.
- Every feature our teams build has defined success criteria tied to a business outcome before development starts. If we can't articulate how we'll measure whether it worked, we don't build it.
- Auditing your roadmap against real outcomes regularly can reveal that a significant portion of planned work isn't connected to the metrics that actually matter.
When Activity Becomes the Enemy of Progress
Here's a scenario we've watched play out dozens of times: a company raises funding, the team grows, everyone starts tracking feature delivery, the roadmap fills with impressive initiatives, and the team ships at a solid pace. A year later, user adoption is flat, revenue hasn't improved meaningfully, and the team delivered everything they set out to do. But the business still isn't where it needs to be.
That gap between productivity and results is the output trap. Work feels productive because everyone is busy, but none of it is converting into business value that anyone can point to.
The root cause: teams fixate on outputs rather than outcomes, inadvertently optimizing for task completion instead of value creation.
What Outcomes Over Outputs Actually Means
Companies face pressure from every direction at once. The board expects growth. Engineering wants to address technical debt that's been accumulating since day one. Sales needs enterprise features to close a deal. Marketing requires analytics dashboards to understand where users are converting.
Without outcome-focused prioritization, teams default to building for whoever lobbies hardest or gets the request in first. The result is a scattered development process where every team member is "busy" but no one can demonstrate a positive impact on revenue, retention, or adoption.
Outcomes Over Outputs connects the team's development effort to measurable business results and cuts work that doesn't move the needle. It turns looking productive into actually being productive.
How Our Teams Implement This
Before we write a single line of code
We define success metrics and tie them to specific business objectives. We identify the desired business outcome and work backward to determine the minimum product functionality required to achieve it. This is where discovery proves its value: our Product Leads work through the "why" before the team gets into the "how." This approach is a crucial part of how our broader methodology works in practice.
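To make this tangible, here's a minimal sketch of what a pre-development success definition can look like as a structured record. The shape, field names, and example values are hypothetical illustrations, not a prescribed OAK'S LAB format:

```typescript
// Hypothetical shape for pre-development success criteria.
// Field names and values are illustrative, not a prescribed format.
interface SuccessCriteria {
  feature: string;
  businessOutcome: string;  // the "why" worked out during discovery
  targetMetric: string;     // e.g. "checkout conversion rate"
  baseline: number;         // measured before development starts
  targetValue: number;      // the movement we expect to see
  reviewAfterDays: number;  // when we check whether it worked
}

const checkoutRedesign: SuccessCriteria = {
  feature: "One-page checkout",
  businessOutcome: "More completed purchases per visitor",
  targetMetric: "Checkout conversion rate",
  baseline: 0.031,    // 3.1% today
  targetValue: 0.04,  // ~4% if the hypothesis holds
  reviewAfterDays: 30,
};
```

If the team can't fill in the baseline and the expected movement, that's the signal described above: we don't yet understand the problem well enough to build a solution.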
During development
We track user adoption and business impact, not the number of features completed. Progress means changes in user behavior or the achievement of business goals, not the total number of story points delivered. When the data shows we're on the wrong track, we adjust quickly rather than finish building something nobody needs.
After launch
We audit continuously, cutting or changing features that don't contribute to user or business success. It's an ongoing discipline to keep the product focused on what actually drives results. Our teams don't treat "shipped" as "done." They treat "adopted and driving the target metric" as done.
Common Mistakes Teams Make
Mistake 1: Vanity metrics masquerading as outcomes
What teams track: User signups, page views, feature usage counts.
Why it fails: These numbers feel like outcomes, but they're activity metrics in disguise. More page views isn't a straight line to more revenue.
What we focus on instead: Monthly recurring revenue, user retention at 30/60/90 days, customer acquisition cost, time-to-value for new users. These connect directly to whether the business is sustainable and growing. Keeping product development focused on these metrics directly ties your product goals to your business goals.
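As one example, "retention at 30/60/90 days" is a concrete computation rather than a dashboard abstraction. Here's a minimal sketch, assuming a simple user record with signup and last-activity timestamps (both the data shape and the retention definition are illustrative assumptions):

```typescript
// Hypothetical sketch: cohort retention at day N. A user counts as
// retained if they were still active N or more days after signup.
interface User {
  signedUpAt: Date;
  lastActiveAt: Date;
}

const DAY_MS = 24 * 60 * 60 * 1000;

function retentionAtDay(users: User[], day: number): number {
  // Only users who have had `day` days to come back are eligible.
  const eligible = users.filter(
    (u) => Date.now() - u.signedUpAt.getTime() >= day * DAY_MS
  );
  if (eligible.length === 0) return 0;
  const retained = eligible.filter(
    (u) => u.lastActiveAt.getTime() - u.signedUpAt.getTime() >= day * DAY_MS
  );
  return retained.length / eligible.length;
}

// Usage: [30, 60, 90].map((d) => retentionAtDay(users, d))
```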
Mistake 2: Quarterly planning without continuous measurement
What teams do: Set strategy during quarterly planning, then forget about it until the next planning cycle.
Why it fails: By the time anyone notices a strategy isn't working, months of effort have gone in the wrong direction. We've seen teams burn entire quarters on features that showed warning signs within the first couple of weeks.
How we handle it: Track metrics on a shorter cadence and adjust tactics based on the data. Your users are telling you something through their behavior. Maintain strategic alignment while staying responsive to real information. This kind of iteration cadence is fundamental to how we run projects.
Mistake 3: Building features without defined success criteria
What teams do: Start development because a feature "seems important" or an executive asked for it. Nobody defines what success looks like before writing code.
Why it fails: Without success criteria, you can't tell whether a feature worked once it ships. Teams ship and move on, accumulating features that may or may not be doing anything useful and adding unnecessary complexity.
What we do instead: Every feature has defined success metrics before development begins. If we can't articulate how we'll measure whether it worked, we don't build it. This constraint alone eliminates a surprising amount of speculative work.
The Business Impact
Companies that commit to prioritizing outcomes over outputs tend to see a few consistent patterns:
Faster progress toward business goals that matter. The team allocates development resources to high-impact work rather than building feature sets that look impressive but don't drive results.
Clearer resource allocation. When every feature request must demonstrate a connection to business outcomes, personal opinions or assumptions are no longer sufficient justification. Resources flow toward work that supports strategic objectives. We've seen this play out with clients where focusing on the outcomes that mattered most meant deliberately choosing not to build features that seemed "obvious" for the product but wouldn't actually improve core metrics.
Higher success rates overall. Products end up solving problems that users will actually pay to have solved. By measuring outcomes from day one, teams catch market-fit issues before they turn into expensive pivots or rebuilds.
What This Means in Practice
Here's how Outcomes Over Outputs shows up in our day-to-day work:
1. Success criteria before development starts
Our Product Leads define what success looks like for every significant feature before it enters the delivery track. That means identifying the target metric, establishing a baseline, and articulating what movement we expect to see. This isn't bureaucratic overhead. It's a five-minute conversation that prevents months of misdirected effort. If the team can't connect a feature to a business outcome, that's a signal to question whether we understand the problem well enough to build a solution.
Validate by checking whether recent features had defined success criteria before development began. If they did, the process is working. If features regularly enter development with vague justifications like "stakeholder requested" or "seems important," there's a gap.
Red flag: The team can describe what they're building in detail but can't articulate in a single sentence why it matters to the business.
2. Outcome tracking during and after sprints
Our teams review outcome metrics alongside delivery metrics during sprint reviews. The question isn't just "did we ship what we planned?" but "is the work we shipped moving the numbers we care about?" When the data shows a feature isn't driving the expected outcome, the team investigates and adjusts rather than moving on to the next backlog item. This feedback loop is what makes Outcomes Over Outputs a living practice rather than a planning exercise.
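As a rough illustration of that feedback loop, a sprint review can compare each shipped feature's current metric reading against the criteria defined up front. The fields below mirror the hypothetical pre-development record sketched earlier, and the half-of-expected-lift threshold is an invented heuristic, not a rule we prescribe:

```typescript
// Hypothetical sprint-review check. Criteria fields mirror the
// pre-development sketch shown earlier; values are illustrative.
interface Criteria {
  baseline: number;     // metric value before the feature shipped
  targetValue: number;  // the movement we said we expected
}

type Verdict = "on-track" | "investigate";

function reviewOutcome(c: Criteria, currentValue: number): Verdict {
  const expectedLift = c.targetValue - c.baseline;
  const actualLift = currentValue - c.baseline;
  // Simple heuristic: flag anything below half the expected movement
  // so the team investigates instead of moving to the next backlog item.
  return actualLift >= expectedLift / 2 ? "on-track" : "investigate";
}

// e.g. reviewOutcome({ baseline: 0.031, targetValue: 0.04 }, 0.033)
// -> "investigate": the feature shipped but isn't moving the metric.
```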
Validate by observing what gets discussed in sprint reviews. If the conversation centers on what was completed and what's next, outcomes aren't embedded yet. If the conversation includes "here's what we shipped and here's how it's performing against the success criteria," the principle is functioning.
Red flag: The team's primary progress metric is still velocity or story points completed, and nobody tracks what happened after features shipped.
3. Roadmap audits against target outcomes
Periodically, our teams step back and evaluate the entire roadmap against the business outcomes that actually matter. Every planned item gets asked: "Can we draw a direct line from this work to movement on our target metric?" This audit consistently surfaces work that felt important but doesn't connect to the outcomes the business needs. Cutting that work frees up capacity for initiatives that do.
Validate by running this exercise on the current roadmap. If a significant chunk of planned work can't be connected to a specific outcome, there's an opportunity to refocus.
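The mechanics can be deliberately simple: tag every roadmap item with the metric it's supposed to move, then list the items where nobody can fill that field in. A hypothetical sketch:

```typescript
// Hypothetical roadmap audit: surface items with no metric linkage.
interface RoadmapItem {
  title: string;
  targetMetric?: string; // left empty when nobody can name one
}

function auditRoadmap(items: RoadmapItem[]): RoadmapItem[] {
  return items.filter((item) => !item.targetMetric);
}

const needsJustification = auditRoadmap([
  { title: "Enterprise SSO", targetMetric: "Enterprise deals closed" },
  { title: "Dark mode" }, // "seems important" -- no outcome named
]);
// needsJustification -> [{ title: "Dark mode" }]
```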
Red flag: Stakeholders resist the audit because "everything is important" or "we already committed to this." That resistance is precisely why the audit matters.
Common Questions About Outcomes Over Outputs
Q: We track OKRs already. How is this different?
A: OKRs are a goal-setting framework, not a development methodology. Many teams set outcome-oriented OKRs at the start of the quarter and then spend the rest of the time tracking feature delivery across sprints. The gap between the OKR document and daily development decisions is exactly where Outcomes Over Outputs operates. If your OKRs don't change which features get built or killed each sprint, they're not functioning as outcome drivers.
Q: How do you handle features where the business impact is hard to measure directly?
A: If you genuinely can't define how you'd measure whether a feature succeeded, that's a signal to question whether you understand the problem well enough to build a solution. In practice, most "hard to measure" features are either infrastructure work (where the outcome is enabling future measurable features) or features nobody has thought through carefully enough. The discipline of defining success criteria before development starts eliminates a surprising amount of speculative work.
Q: Won't this slow down development? Defining metrics for everything adds overhead.
A: It slows down the start of development by a small amount per feature. It speeds up everything else. Teams stop building features that don't matter, stop debating scope without data, and stop discovering months later that a major initiative had no impact. The overhead of defining success criteria is minimal compared to the cost of building the wrong thing for a quarter.
Q: How does OAK'S LAB report progress to clients when using this approach?
A: We report outcome metrics alongside delivery metrics. Instead of just showing what was shipped, we show what was shipped and how it's performing against the success criteria we defined together. Stakeholders respond well to this because it connects engineering effort to business results, which is what they actually care about. The conversation shifts from "how many features did we ship" to "are we moving the metrics that matter."
Q: Does this mean you never build features that stakeholders request?
A: Stakeholder requests often contain real insights about user needs or market opportunities. The shift is from "build it because the VP of Sales asked" to "the VP of Sales sees a pattern. Let's define the outcome we'd expect if we address it, and measure whether we're right." Stakeholders generally prefer this approach once they see it working, because their requests get taken seriously and evaluated on merit rather than lost in a backlog.