
Minimum Viable Product (MVP): You’re doing it wrong

Quibi was a short-lived short-form video app. It was founded in August 2018, launched in April 2020, and folded in December 2020, wiping out $1.75 billion of investors’ money. That’s twenty months from founding to launch and just six months to fail. Ouch.

Forbes chalked this up to “a misread of consumer interests”; though the content was pretty good, Quibi only worked as a phone app while customers wanted TV streaming, and it lacked social sharing features that may have drawn in new viewers. It was also a paid service competing with free options like YouTube and TikTok. According to The Wall Street Journal, the company’s attempts to address the issues were too late: “spending on advertising left little financial wiggle room when the company was struggling”.

If only there were some way Quibi could have validated its concept prior to wasting nearly two billion dollars1.

‘Geez, why didn’t they just try it out first with a basic version and see if people liked it?’, I hear you say. Such a genius idea! As it turns out, there’s already a name for it: Minimum Viable Product (MVP).

MVP: What and why

Product, as in the good or service that you’re developing.

Viable, as in it has to be usable and valuable to actual customers in the real world.

And Minimum, as in just the features required for viability so that you can get there quickly and cheaply.

The concept is simple: create a version of your solution that’s just good enough and put it out in the real world to validate assumptions about the product and market. That way, if your assumptions are wrong, you can pivot or shut it down without having spent too many resources on a failed idea. If your assumptions are right, you’ll have some great feedback to propel you along and help populate your product roadmap.

Take, for example, the recent Saint Javelin line of products supporting Ukraine. Christian Borys just wanted to do something fun: take a funny pro-Ukraine meme and make a few stickers to share. The image got such great feedback on Instagram that he decided to post it for sale on Shopify with the goal of raising CA$500 (~US$385) for a charity that helps orphaned children of Ukrainian soldiers. You can guess what happened next: the orders blew up and Saint Javelin has expanded to a full line of products sporting a variety of pro-Ukraine memes.

That feels a little like a silly example; tons of successful companies started as one person’s hobby or bit of playfulness that was never intended to grow into something bigger. And yet, it’s the perfect illustration of how starting small can lead to success. The problem with high-profile failures like Quibi is not that they started with big ambitions, it’s that they were so convinced that their offering would be celebrated by the market that they didn’t bother to test their assumptions first.

An MVP is not a cheaper product, it’s about smart learning.

Steve Blank

An MVP is not a demo, proof of concept, or beta

An MVP is a very specific step that’s valuable in the startup stage of an idea.

It’s not a beta version. A beta is a pre-release version of software with all of the functionality you plan to deliver; it’s functionally complete, but it probably has some bugs to work out before it’s ready for the public. It’ll tell you if your idea has legs, but way too late; an MVP should have been done much earlier to prove the value of the idea before the major development effort.

It’s not a demo. A demo may look cool, and may convince investors, but it doesn’t actually tell you anything about a product’s likelihood of market adoption. Maybe the demo is a simple video of your concept, or maybe it’s based on your MVP and real-world data. But don’t confuse the two.

It’s not a proof of concept. A proof of concept is an internal engineering prototype to prove that the technology works. It’s essential if you’re doing something brand new to make sure it’s actually technically feasible. But it doesn’t tell you if there’s a need for the product; ideally you’d do an MVP first, to prove the need before you spend money on technology development.

And that’s a key point: the MVP has to actually address a customer’s needs, but it doesn’t have to be technologically complete! It can be entirely manual behind the scenes as long as the customer is able to experience the value of the service. Steve Blank offers a great example of a startup that planned to use drones to capture data from agricultural fields to help farmers make smarter decisions. The MVP didn’t require any actual drones or AI data processing; the team could gather data from a piloted aircraft and manually process it. Once it’s clear that the service is useful, the expensive, hard part of developing the hardware, software, and infrastructure can commence.

In the startup stage, the MVP helps answer a specific question: “we have this idea, how can we validate it before we invest too much in it?”

“Would consumers be interested in a short-form video service?”
“Let’s spend $250k to create a webapp and serialize some existing content!”

Founders become infatuated with a bold and ambitious mission—as they should. However, what separates a startup that actually brings its mission to life from one that doesn’t is the ability to shed the rose-colored glasses and solve for a small job to be done.

Shawn Carolan

The government needs MVPs

As this blog is focused on government system development, we have to devote at least a bit of space to describe how the MVP concept relates to government. And the honest answer is, I don’t see anything different. Government program offices should develop MVPs to test out ideas prior to major system acquisition efforts. Government offices have a number of vehicles at their disposal to fund such efforts, such as existing Systems Engineering and Technical Assistance (SETA) contracts, the Small Business Innovation Research (SBIR) program, and small-value Other Transaction Authority (OTA) contracts. They also have a number of research labs with simulation and prototyping capabilities as well as large military exercise events, great opportunities to validate an MVP.

Larger programs executing Agile development approaches may create MVPs for particular features or capabilities being considered. “Successful” experiments build the product roadmap while “unsuccessful” experiments provide valuable lessons learned for relatively little cost. Far more beneficial than minimizing failure, the ability to experiment enables sensible risk-taking on “extreme” ideas that never would’ve been considered otherwise.

Another bite of the Apple

Apple provides a great case study in the importance of MVPs, with an example I borrow from the book The Innovator’s Dilemma by Clayton Christensen.

Steve Jobs and Steve Wozniak built the Apple I on a shoestring budget, and on credit at that. They sold just 200 units in 1976 before pulling it from the market and refocusing on the Apple II. The initial product had limited functionality, but it proved there was a market. “Both Apple and its customers learned a lot about how desktop personal computers might be used,” wrote Christensen. The next year, Apple introduced the Apple II, which sold 43,000 units in its first two years.

The Apple I was an MVP for Apple as a consumer computer company.

Contrast this with the Apple Newton, a personal digital assistant (PDA)2 first released in 1993. Newton was the result of over $100 million in research, development, and marketing; the product was shaped by market research such as focus groups and surveys and sported unprecedented portable capabilities. And yet, it was a flop. It was pricey, buggy, and technically lacking. CEO John Sculley had staked his reputation on it, investors were disappointed, and Steve Jobs axed it when he returned to the company in 1997.

Apple had poured so many resources into the Newton that they couldn’t afford for it to fail, and yet consumers hated it. And it wasn’t for a lack of market. The PalmPilot launched in 1996 and was a great success, and of course we have many options for similar devices today. Apple’s Newton served as a sort of MVP for Palm, helping to flesh out consumer and market demands.

Do you have any good or bad examples of MVPs, especially in government and military systems? When are MVPs most important? Least important? Share your thoughts in the comments below.

Agile isn’t faster

A common misconception is that Agile development processes are faster. I’ve heard this from leaders as a justification for adopting Agile processes and read it in proposals as a supposed differentiator. It’s not true. Nothing about Agile magically enables teams to architect, engineer, design, test, or validate any faster.

In fact, many parts of Agile are actually slower. Time spent on PI planning, backlog refinement, sprint planning, daily stand-ups1, and retrospectives is time the team isn’t developing. Much of that overhead is avoided in a Waterfall style where the development follows a set plan.

What Agile does offer, however, is sooner realization of value. And that’s the source of the misconception. The charts below illustrate this notionally. You can see that Agile delivers a small amount of system capability early on and then builds on that value with incremental deliveries. By contrast, Waterfall delivers no system capability until the development is “done”, at which point all of the capability is delivered at once. Agile development isn’t faster, but it does start to provide value sooner; that adds up to more area ‘under the curve’ of cumulative value.

[Figure: Two graphs comparing the value of Agile and Waterfall approaches to system development, showing that Agile delivers value sooner and accumulates significantly more value over time]

But that’s not even the real value of Agile, in my opinion. On the charts you’ll also notice two different Waterfall lines, one for theory and one for practice. In theory, the Waterfall requirements should deliver exactly the correct system. In practice, requirements are often poorly written, incomplete, or misinterpreted, resulting in a system that misses the mark. It’s also possible for user needs to change over time, especially given the long duration of many larger projects.

But because validation testing is usually scheduled near the end of the Waterfall project, those shortcomings aren’t discovered until it’s very costly to correct them. With Agile, iterative releases mean we can adapt as we learn both on an individual feature level and on the product roadmap level.

In short, Agile isn’t faster. But it delivers value sooner, delivers more cumulative value overall, and ensures that the direction of the product provides the most value to the user.
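To put rough numbers on that ‘area under the curve’ idea, here’s a minimal sketch in Python using entirely hypothetical values (a 24-month timeline, an increment every three months, and 100 points of total capability). It simply sums the capability available to users in each month under each delivery model:

```python
# Minimal sketch with hypothetical numbers: compare the cumulative value
# ("area under the curve") delivered by incremental releases versus one
# big-bang delivery at the end of the same timeline.

TOTAL_VALUE = 100        # total capability value of the finished system
MONTHS = 24              # overall development timeline, in months
RELEASE_EVERY = 3        # Agile-style increment delivered every 3 months


def cumulative_value(delivery_schedule, months):
    """Sum the capability available to users each month (value-months)."""
    delivered = 0
    total = 0
    for month in range(1, months + 1):
        delivered += delivery_schedule.get(month, 0)
        total += delivered
    return total


# Agile: an equal slice of capability delivered at each release.
release_months = range(RELEASE_EVERY, MONTHS + 1, RELEASE_EVERY)
agile = {m: TOTAL_VALUE / len(release_months) for m in release_months}

# Waterfall: all capability arrives at once, at the very end.
waterfall = {MONTHS: TOTAL_VALUE}

print("Agile value-months:    ", cumulative_value(agile, MONTHS))      # 1150.0
print("Waterfall value-months:", cumulative_value(waterfall, MONTHS))  # 100
```

Under these made-up numbers, both approaches end with the same finished system, but the incremental schedule accrues more than ten times the value-months over the same two years. The specific figures don’t matter; the shape of the curves does.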

For more, check out my series on Agile Systems Engineering. Also, share your thoughts on the differences between Agile and Waterfall in the comments below.

Agile SE Part One: What is Agile, Anyway?


What is “Agile”?

Agile is a relatively new approach to software development based on the Agile Manifesto and Agile Principles. These documents are an easy read and you should absolutely check them out. I will sum them up as stating that development should be driven by what is most valuable to the customer and that our projects should align around delivering value.

Yes, I’ve obnoxiously italicized the word value as if it were in the glossary of a middle school textbook. That’s because value is the essence of this entire discussion.

Little-a Agile

With a little-a, “agile” is the ability to adapt to a changing situation. This means collaboration to understand the stakeholder needs and the best way to satisfy those needs. It means changing the plan when the situation (or your understanding of the situation) changes. It means understanding what is valuable to the customer, focusing on delivering that value, and minimizing non-value added effort.

Big-A Agile

With a big-A, “Agile” is a software development process that aims to fulfill the agile principles. There are actually several variants that fall under the Agile umbrella such as Scrum, Kanban, and Extreme Programming. Each of these has techniques, rituals, and processes that help teams deliver a quality product through a focus on value-added work1.

“Cargo Cult” Agile

“Agile” has become the hot-new-thing, buzzword darling of the U.S. defense industry2. Did I mean Big-A or Little-a? It hardly matters. As contractors have rushed to promote their new development practices, they have trampled the distinction. The result is Cargo Cult Agile: following the rituals of an Agile process and expecting that the project will magically become more efficient and effective as a result. I wrote about this previously, calling it agile-in-name-only and FrAgile.

This isn’t necessarily the fault of contractors and development teams. They want to follow the latest best practices from commercial industry to most effectively meet the needs of their customers. But as anyone who has worked in the defense industry can tell you, the pace of change is glacial due to a combination of sheer bureaucratic size and byzantine regulations. Most contracts just don’t support agile principles. For example, the Manifesto prioritizes “working software over comprehensive documentation” and one of the Principles is that “working software is the primary measure of progress”; but most defense contracts require heaps of documentation that are evaluated as the primary measure of progress.

The upshot is that, to most engineers in the defense industry, “Agile” is an annoying new project management approach. Project management is already the least enjoyable part of our job, an obstacle to deal with so that we can get on with the real work. Now we have to learn a new way of doing things that may not be the most effective way to organize our teams and has no real impact on the success of the program. This has left an undeserved bad taste in the mouths of many of us.

If this is your experience with Agile, please understand that this is not the true intent and practice. And that’s the point of this series: how can we achieve real agility to enhance the execution of our programs and deliver value to the field faster?

Agile Systems Engineering

So far, I’ve only mentioned Agile as a software development approach. Of course, we’re here because Agile is being applied to all types of engineering, especially as “Agile Hardware Development” and “Agile Systems Engineering”. Some people balk at this; how can a software process be applied to hardware and systems? Here, the distinction between little-a agile and big-A Agile is essential. Agile software development evangelists have taken the values in the Manifesto and Principles and created Agile processes and tools that realize them.

It’s incumbent upon other engineering disciplines to do the same. We must understand the agile values, envision how they are useful in our context (type of engineering, type of solution, customer, etc.), and then craft or adapt Agile processes and tools that make sense. Where many projects and teams3 go wrong is trying to shoehorn their needs into an Agile process that is a poor fit, and then blaming the process.

In the rest of this series we’ll explore how agile SE can provide customer value, how our contracts can be crafted to enable effective Agile processes, and what those processes might look like for a systems engineering team. Stay tuned!

Have you worked on a project with “Cargo Cult Agile”? Have you adapted agile principles effectively in your organization? What other resources are out there for Agile systems engineering? Share your thoughts in the comments below.

Agile SE Part Zero: Overview

“Agile” is the latest buzzword in systems engineering. It has a fair share of both adherents and detractors, not to mention a long list of companies offering to sell tools, training, and coaching. What has been lacking is a thoughtful discussion about when agile provides value, when it doesn’t, and how to adapt agile practices to be effective in complex systems engineering projects.

I don’t claim this to be the end-all guide on agile systems engineering, but I hope it will at least spark some discussion. Please comment on the articles with details from your own experiences. If you’re interested in contributing or collaborating, please contact me at benjamin@engineeringforhumans.com; I’d love to add your voice to the site.

Part 1: What is Agile Anyway?

A broad overview of Agile as a concept, including the difference between Agile processes and being agile and critical discussion of how Agile most often fails. Also, adapting the concepts which have been successful for software development in order to find success in a systems engineering context.

Part 2: What’s Your Problem?

Henry Ford’s apocryphally faster horse, a solid example of how customers can misunderstand their users, and requirements myopia. In short, requirements-based acquisition is terrible, let’s refocus on solving problems and providing value.

Part 3: Agile Contracts and the Downfall of Requirements

Requirements are the antithesis of agile1: impractical, time-consuming, and prone to misinterpretation. But they are the foundation for every large DoD acquisition. A major paradigm shift is required for true agile systems engineering.

Part 4: Digital Transformation

A slight detour to discuss an important enabler. Integrated digital engineering has enormous benefits in and of itself. It also addresses many of the objections to agile systems engineering and agile hardware engineering.

Agile Government Contracts

Agile is a popular and growing software development approach. It promotes a focus on the product rather than the project plan. This model is very attractive for many reasons and teams are adopting it across the defense industry. However, traditional government contracts and project management are entirely plan-driven. Can you really be agile in a plan-driven world?
