
Minimum Viable Product (MVP): You’re doing it wrong

Quibi was a short-lived short-form video app. It was founded in August 2018, launched in April 2020, and folded in December 2020, wiping out $1.75 billion of investors’ money. That’s twenty months from founding to launch and just six months to fail. Ouch.

Forbes chalked this up to “a misread of consumer interests”; though the content was pretty good, Quibi only worked as a phone app while customers wanted TV streaming, and it lacked social sharing features that may have drawn in new viewers. It was also a paid service competing with free options like YouTube and TikTok. According to The Wall Street Journal, the company’s attempts to address the issues were too late: “spending on advertising left little financial wiggle room when the company was struggling”.

If only there were some way Quibi could have validated its concept before wasting nearly two billion dollars1.

‘Geez, why didn’t they just try it out first with a basic version and see if people liked it?’, I hear you say. Such a genius idea! As it turns out, there’s already a name for it: Minimum Viable Product (MVP).

MVP: What and why

Product, as in the good or service that you’re developing.

Viable, as in it has to be usable and valuable to actual customers in the real world.

And Minimum, as in just the features required for viability so that you can get there quickly and cheaply.

The concept is simple: create a version of your solution that’s just good enough and put it out in the real world to validate assumptions about the product and market. That way, if your assumptions are wrong, you can pivot or shut it down without having spent too many resources on a failed idea. If your assumptions are right, you’ll have some great feedback to propel you along and help populate your product roadmap.

Take, for example, the recent Saint Javelin line of products supporting Ukraine. Christian Borys just wanted to do something fun: take a funny pro-Ukraine meme and make a few stickers to share. The image got such great feedback on Instagram that he decided to post it for sale on Shopify with the goal of raising CA$500 (~US$385) for a charity that helps orphaned children of Ukrainian soldiers. You can guess what happened next: the orders blew up and Saint Javelin has expanded to a full line of products sporting a variety of pro-Ukraine memes.

That may feel like a silly example; tons of successful companies started from one person’s hobby or playful side project that was never intended to grow into something bigger. And yet, it’s the perfect illustration of how starting small can lead to success. The problem with high-profile failures like Quibi is not that they started with big ambitions, it’s that they were so convinced their offering would be celebrated by the market that they didn’t bother to test their assumptions first.

An MVP is not a cheaper product, it’s about smart learning.

Steve Blank

An MVP is not a demo, proof of concept, or beta

An MVP is a very specific step that’s valuable in the startup stage of an idea.

It’s not a beta version. A beta is a pre-release version of software with all of the functionality you plan to deliver; it’s functionally complete, but it probably has some bugs to work out before it’s ready for the public. It’ll tell you if your idea has legs, but way too late; an MVP should have been done much earlier to prove the value of the idea before the major development effort.

It’s not a demo. A demo may look cool, and may convince investors, but it doesn’t actually tell you anything about a product’s likelihood of market adoption. Maybe the demo is a simple video of your concept, or maybe it’s based on your MVP and real-world data. But don’t confuse the two.

It’s not a proof of concept. A proof of concept is an internal engineering prototype to prove that the technology works. It’s essential if you’re doing something brand new to make sure it’s actually technically feasible. But it doesn’t tell you if there’s a need for the product; ideally you’d do an MVP first, to prove the need before you spend money on technology development.

And that’s a key point: the MVP has to genuinely address a customer’s needs, but it doesn’t have to be technologically complete! It can be entirely manual behind the scenes as long as the customer is able to experience the value of the service. Steve Blank offers a great example of a startup that planned to use drones to capture data from agricultural fields to help farmers make smarter decisions. The MVP didn’t require any actual drones or AI data processing; the team could gather data from a piloted aircraft and manually process it. Once it’s clear that the service is useful, then the expensive/hard part of developing the hardware, software, and infrastructure can commence.

In the startup stage, the MVP helps answer a specific question: “we have this idea, how can we validate it before we invest too much in it?”

“Would consumers be interested in a short-form video service?”
“Let’s spend $250k to create a webapp and serialize some existing content!”

Founders become infatuated with a bold and ambitious mission—as they should. However, what separates a startup that actually brings its mission to life from one that doesn’t is the ability to shed the rose-colored glasses and solve for a small job to be done.

Shawn Carolan

The government needs MVPs

As this blog is focused on government system development, we have to devote at least a bit of space to describe how the MVP concept relates to government. And the honest answer is, I don’t see anything different. Government program offices should develop MVPs to test out ideas prior to major system acquisition efforts. Government offices have a number of vehicles at their disposal to fund such efforts, such as existing Systems Engineering and Technical Assistance (SETA) contracts, the Small Business Innovation Research (SBIR) program, and small-value Other Transaction Authority (OTA) contracts. They also have a number of research labs with simulation and prototyping capabilities as well as large military exercise events, great opportunities to validate an MVP.

Larger programs executing Agile development approaches may create MVPs for particular features or capabilities being considered. “Successful” experiments build the product roadmap while “unsuccessful” experiments provide valuable lessons learned for relatively little cost. Far more beneficial than minimizing failure, the ability to experiment enables sensible risk-taking on “extreme” ideas that never would’ve been considered otherwise.

Another bite of the Apple

Apple provides a great case study in the importance of MVPs, with an example I borrow from the book The Innovator’s Dilemma by Clayton Christensen.

Steve Jobs and Steve Wozniak built the Apple I on a shoestring budget, and on credit at that. They sold just 200 units in 1976 before pulling it from the market and refocusing on the Apple II. The initial product had limited functionality, but it proved there was a market. “Both Apple and its customers learned a lot about how desktop personal computers might be used,” wrote Christensen. The next year, Apple introduced the Apple II, which sold 43,000 units in its first two years.

The Apple I was an MVP for Apple as a consumer computer company.

Contrast this with the Apple Newton, a personal digital assistant (PDA)2 first released in 1993. Newton was the result of over $100 million in research, development, and marketing; the product was shaped by market research such as focus groups and surveys and sported unprecedented portable capabilities. And yet it was a flop. It was pricey, buggy, and technically lacking. CEO John Sculley had staked his reputation on it, investors were disappointed, and Steve Jobs axed it when he returned to the company in 1997.

Apple had poured so many resources into the Newton that they couldn’t afford for it to fail, and yet consumers hated it. And it wasn’t for a lack of market. The PalmPilot launched in 1996 and was a great success, and of course we have many options for similar devices today. Apple’s Newton served as a sort of MVP for Palm, helping to flesh out consumer and market demands.

Do you have any good or bad examples of MVPs, especially in government and military systems? When are MVPs most important? Least important? Share your thoughts in the comments below.

Agile isn’t faster

A common misconception is that Agile development processes are faster. I’ve heard this from leaders as a justification for adopting Agile processes and read it in proposals as a supposed differentiator. It’s not true. Nothing about Agile magically enables teams to architect, engineer, design, test, or validate any faster.

In fact, many parts of Agile are actually slower. Time spent on PI planning, backlog refinement, sprint planning, daily stand-ups1, and retrospectives is time the team isn’t developing. Much of that overhead is avoided in a Waterfall style where the development follows a set plan.

What Agile does offer, however, is earlier realization of value. And that’s the source of the misconception. The charts below illustrate this notionally. You can see that Agile delivers a small amount of system capability early on and then builds on that value with incremental deliveries. By contrast, Waterfall delivers no system capability until the development is “done”, at which point all of the capability is delivered at once. Agile development isn’t faster, but it does start to provide value sooner; that adds up to more area ‘under the curve’ of cumulative value.

Two graphs showing that Agile delivers value sooner than Waterfall, resulting in significantly more value delivered over time
Comparing the value of Agile and Waterfall approaches to system development
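
To make the ‘area under the curve’ idea concrete, here’s a minimal Python sketch of the same comparison. The delivery cadence, capability sizes, and time horizon are invented for illustration, not taken from any real program:

```python
# Illustrative only: cumulative value of incremental (Agile) deliveries
# versus a single "big bang" (Waterfall) delivery. All numbers are invented.

def cumulative_value(deliveries, horizon):
    """Sum each delivery's value over every month it has been in the field.

    deliveries: list of (month_delivered, value_per_month) tuples
    horizon: total number of months evaluated
    """
    total = 0
    for month_delivered, value_per_month in deliveries:
        months_in_service = max(0, horizon - month_delivered)
        total += value_per_month * months_in_service
    return total

HORIZON = 36  # evaluate both approaches over three years

# Agile: one unit of capability every three months, twelve units in total
agile = [(month, 1) for month in range(3, HORIZON + 1, 3)]

# Waterfall: the same twelve units, all delivered at once at month 30
waterfall = [(30, 12)]

print("Agile cumulative value:    ", cumulative_value(agile, HORIZON))      # 198
print("Waterfall cumulative value:", cumulative_value(waterfall, HORIZON))  # 72
```

With these made-up numbers, the incremental approach accumulates roughly 198 “capability-months” against 72 for the single late delivery, even though both ultimately deliver the same total capability.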

But, that’s not even the real value of Agile, in my opinion. On the charts you’ll also notice two different Waterfall lines, one for theory and one for practice. In theory, the Waterfall requirements should deliver exactly the correct system. In practice, requirements are often poorly written, incomplete, or misinterpreted, resulting in a system that misses the mark. It’s also possible for user needs to change over time, especially given the long duration of many larger projects.

But because validation testing is usually scheduled near the end of the Waterfall project, those shortcomings aren’t discovered until it’s very costly to correct them. With Agile, iterative releases mean we can adapt as we learn both on an individual feature level and on the product roadmap level.

In short, Agile isn’t faster. But it delivers value sooner, delivers more cumulative value overall, and ensures that the direction of the product provides the most value to the user.

For more, check out my series on Agile Systems Engineering. Also, share your thoughts on the differences between Agile and Waterfall in the comments below.

You Don’t Understand Murphy’s Law: The Importance of Defensive Design

CALLBACK is the monthly newsletter of NASA’s Aviation Safety Reporting System (ASRS)1. Each edition features excerpts from real, first-person safety reports submitted to the system. Most of the reports come from pilots, many from air traffic controllers, and some from the occasional maintainer, ground crew member, or flight attendant. Human factors concerns feature heavily, and the newsletters provide insight into current safety concerns2. ASRS receives five to nine thousand reports each month, so there’s plenty of content for the CALLBACK team to mine.

The February 2022 issue contained this report about swapped buttons:

A Confusing Communication Interface.
An Aviation Maintenance Technician (AMT) described this incorrect interface configuration noted by a B777 Captain. It had already generated multiple operational errors. 

The Captain reported that the Controller Pilot Data Link Communications (CPDLC) ACCEPT and REJECT buttons were switched.... This caused 2 occasions of erroneous reject responses being sent to ATC. On arrival, the switches were confirmed [to be] in the wrong place (Illustrated Parts Catalog (IPC) 31-10-51-02), and [they were] switched back (Standard Wiring Practices Manual (SWPM) 20-84-13) [to their correct locations].... These switches can be inadvertently transposed.

This reminded me of the story of Capt. Edward Aloysius Murphy Jr., the very individual for whom Murphy’s Law is named. It’s a great story, uncovered by documentarian Nick Spark, whose work resulted in the key players receiving the 2003 Ig Nobel Prize3 in Engineering.

Murphy’s Law

You’ve probably heard Murphy’s Law stated as:

Anything that can go wrong, will go wrong.

That’s not incorrect, per se; in fact, it’s a useful generalization. The problem is that it is often misinterpreted. When something goes wrong, Murphy will be invoked with an air of inevitability: of course [whatever improbable event] would happen, it’s Murphy’s law!

You, dear reader and astute system thinker, may have already spotted the issue. If anything that can go wrong will, why not take steps to preclude (or at least mitigate the impact of (or at least be willing to accept)) this possibility?

The story of Murphy’s Law starts with some of the most important, foundational research in airplane and automotive crash safety. I will summarize the program, but there is no way I can do it justice. I’d highly recommend this article by Nick Spark, or the video below by YouTube sensation The History Guy.

Rocket sleds

Physician and US Air Force officer John Paul Stapp was a pioneer in researching the effects of acceleration and deceleration forces on humans. This work was done using a rocket-powered sled called the Gee Whiz4 at Edwards Air Force Base. A later version called Sonic Wind was even faster, capable of going Mach 1.75.

There’s a long history of doctors and scientists experimenting on themselves. Doctor Barry Marshall drank a culture of the bacterium Helicobacter pylori in order to prove that ulcers were not caused by stress and could be treated with antibiotics, winning a Nobel Prize for the work. Doctors Nicholas Senn and Jean-Louis-Marc Alibert each showed that cancer was not contagious by injecting or implanting it into themselves. Of course, self-experimentation is not without risk. Dr. William Stark died from scurvy while deliberately malnourishing himself to research the disease. Stapp was cut from the same cloth, subjecting himself to 29 rocket sled tests, including some of the most severe and uncertain of the setups.

Human strapped into a chair atop a rocket sled on railroad tracks
The Gee Whiz with human subject.
NASA/Edwards AFB History Office, and pilfered from the Annals of Improbable Research

And they were quite severe. Test subjects routinely experienced forces up to 40g. The peak force experienced by a human during these tests was a momentary 82.6g, which is insane. By comparison, manned spacecraft experience 3-4g and fighter pilots about 9g. People lose consciousness during sustained forces of 4-14g, depending on training, fitness, and whether they’re wearing a g-suit.

Stapp and his fellow subjects suffered tunnel vision, disorientation, loss of consciousness, “red outs” due to burst capillaries in the eyes, and black eyes due to burst capillaries around the eyes. They lost tooth fillings, were bruised and concussed, cracked ribs, and broke collarbones. Twice Stapp broke his wrist in a test, and one of those times he simply set it himself before heading back to the office. The team, particularly project manager George Nichols, were legitimately worried about killing test subjects as they were accelerated faster than bullets and close to the speed of sound6.

Six frames of subject's face clearly in discomfort and experiencing high winds
Stapp during a test on Sonic Wind.
NASA/Edwards AFB History Office, and pilfered from someone posting it on Reddit

All of this effort was designed to understand the forces humans could withstand. It had been thought that humans were not capable of surviving more than 18g, so airplane seats weren’t designed to withstand any more than that. Stapp thought, correctly, that aviators were dying in crashes not because of the forces experienced but because their planes didn’t have the structural integrity to protect them. The work led to major advances in survivability in military aviation.

Stapp then applied his expertise to automotive research, using the techniques he’d developed to create the first crash tests and crash test dummies. He also advocated for, tested, and helped to perfect automotive seatbelts, saving millions of lives. A non-profit association honors his legacy with an annual conference on car crash survivability. We all really owe a debt of gratitude to Dr. Stapp and this program.

Murphy

So, where does Murphy fit into all of this? Well, during the program there was some question about the accuracy of the accelerometers being used to measure g-forces. Another Air Force officer, Edward Aloysius Murphy Jr., had developed strain transducers to provide this instrumentation for his own work with centrifuges. He was happy to provide these devices to the rocket sled program and sent them down to Edwards Air Force Base with instructions for installing and using them.

Black and white yearbook photo of a man in military uniform
Murphy as a college student.
U.S. Military Academy, West Point

The gauges were installed, Stapp was strapped in, and the test was conducted. The engineers eagerly pulled the data from the devices and found… nothing. Confused, the team called Murphy to ask for help, and he flew out to Edwards AFB to see for himself. Upon investigation, he found that the transducers had each been meticulously installed backwards, so no data was recorded. He blamed himself for not considering that possibility when writing the instructions and in frustration said:

If there’s more than one way to do a job, and one of those ways will end in disaster, then somebody will do it that way.

Stapp then popularized the phrase, stating in a press conference that “We do all of our work in consideration of Murphy’s Law… If anything can go wrong, it will. We force ourselves to think through all possible things that could go wrong before doing a test and act to counter them”. With human subjects in highly risky experiments, the team put the utmost care into the design of the system and the preparation of each test event. By assuming that anything that can go wrong will indeed eventually go wrong, the team would put in the effort to minimize the number of things that could go wrong and thus maximize safety.

Conclusion

Murphy’s Law isn’t about putting your fate in the hands of the universe, it’s about defensive7 and robust design. Reliability engineering and the technique of Failure Modes, Effects, and Criticality Analysis (FMECA) have their roots in this concept.

It’s why we have polarized outlets and safety interlocks. It’s why any critical component should only be able to be installed the correct way: the ASRS database is filled with hundreds, if not thousands, of reports like the one at the top of this story, of parts that fit but were installed incorrectly8.
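
The same principle applies to software interfaces. Here’s a toy Python sketch, not drawn from the actual avionics in the CPDLC example, of how an API can make the “transposed” mistake impossible to express:

```python
# Toy illustration (not the actual avionics): design the interface so the
# "transposed buttons" class of error cannot be expressed at all.
from enum import Enum

class Response(Enum):
    ACCEPT = "accept"
    REJECT = "reject"

def send_cpdlc_response(response: Response) -> str:
    # Only a self-describing Response member is allowed; a raw string, boolean,
    # or pin number wired to the wrong button is rejected outright.
    if not isinstance(response, Response):
        raise TypeError("response must be a Response enum member")
    return f"Sending {response.value.upper()} to ATC"

print(send_cpdlc_response(Response.ACCEPT))  # "Sending ACCEPT to ATC"
```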

I relied heavily on the work of Nick Spark for this article. He tells the story much better than I do in A History of Murphy’s Law, which one reviewer compares favorably to The Right Stuff.

How have you seen defensive design practiced? Has Murphy’s law impacted your engineering approach? Share your thoughts in the comments.

Agile SE Part Four: Digital Transformation

Table of Contents

A quick detour

This article is a quick detour on an important enabler for agile systems engineering. “Digital transformation” means re-imagining the way businesses operate in the digital age, including how we engineer systems. As future articles discuss scaling agile practices to larger and more complex systems, it will be very helpful to understand the possibilities that digital engineering unlocks.

Digital engineering enables the agile revolution

The knee-jerk reaction to agile systems engineering is this: “sure, agile is great for the speed and flexibility of software development, but there’s just no way to apply it to hardware systems”. Objections range from development times to lead times to the cost of producing physical prototypes.

Enter digital engineering (DE). DE is the use of computer models to develop, analyze, prototype, simulate, and experiment with the system. Of course, we’ve used computers in engineering for decades for computer-aided design (CAD), modeling & simulation (M&S), and more. DE integrates and builds upon these individual tools to model the entire, integrated system. It takes advantage of the latest discipline tools for development, integrates them using model-based systems engineering (MBSE) techniques, and uses simulation tools to test performance in realistic mission environments.

Because all components are digitally modeled, the development can be treated much more like a software project using Dev*Ops. Engineering teams in every discipline work in a version control system and can branch, merge, experiment, and unit test. They feed in data from analyses and tests to improve the fidelity of the model. When a component design is ready, it gets pushed to a test environment where automated testing makes sure nothing breaks. Then, it is deployed into a “production” simulation environment where a combination of automated and manual simulation puts the system through its paces and evaluates performance in various test scenarios.
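
As a rough sketch of what such an automated gate might look like, here’s a hypothetical Python example; the model parameters, requirement thresholds, and metric names are invented and don’t represent any particular program or toolchain:

```python
# Hypothetical sketch of a Dev*Ops-style gate for a digital system model.
# Requirement names, thresholds, and the fake "simulation" are all invented.

REQUIREMENTS = {
    "max_takeoff_weight_kg": 12_000,
    "min_range_km": 1_500,
}

def simulate(model_config):
    """Stand-in for invoking the real CAD/M&S toolchain on a model branch."""
    return {
        "takeoff_weight_kg": model_config["structure_mass_kg"] + model_config["fuel_mass_kg"],
        "range_km": model_config["fuel_mass_kg"] * model_config["km_per_kg_fuel"],
    }

def regression_gate(model_config):
    """Automated check run on every pushed model change before promotion."""
    results = simulate(model_config)
    ok = (results["takeoff_weight_kg"] <= REQUIREMENTS["max_takeoff_weight_kg"]
          and results["range_km"] >= REQUIREMENTS["min_range_km"])
    verdict = "PROMOTE to simulation environment" if ok else "REJECT change"
    print(f"{results} -> {verdict}")
    return ok

regression_gate({"structure_mass_kg": 7_000, "fuel_mass_kg": 4_000, "km_per_kg_fuel": 0.5})
```

In a real DE pipeline the simulation step would invoke the discipline tools and mission environments described above; the point is simply that every pushed change gets checked against system-level measures before promotion.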

Ready to build for real? Pick a configuration and move into a real-world prototyping and/or initial production effort with confidence in the design. We can then compare actual and simulated performance data to verify the accuracy of the models, as well as feed the results back into the model to increase its fidelity going forward.

The most-cited success story for DE and agile system development is the Saab Gripen E multi-role strike fighter [pdf]. Using a Dev*Ops approach, any version of the aircraft could be “compiled” and virtually flown at any time. Want to compare different mission system packages, armament sets, or other configurations? It’s easy to pull those options and go fly.


Marketing the use of a “fully simulated digital environment” from a major U.S. defense contractor

As a systems engineer, I love this! We can use powerful simulation environments to understand complex system interactions, conduct trade studies, and optimize the design across the entire system. And because it’s virtual, we can make changes quickly to explore possibilities with no risk—even off-the-wall ideas just to see. Systems engineering becomes generative again, breaking out of the reductive mindset we often find ourselves in.

Lifecycle analyses, digital twins, and other benefits

The benefits of DE go far beyond the initial development. So far we’ve focused on system operational performance, but the entire system lifecycle can be simulated: maintainability, reliability, failure modes and effects, safety, robustness, survivability, usability, etc. WOW! This enables holistic lifecycle systems engineering like we’ve never seen before.

Billy Mays: "But wait, there's more"
But wait, there’s more

The digital system model can be reused in many ways, providing benefits far past the initial development. For example, simulation capabilities can be packaged into training devices, providing more accurate training at reduced cost and development time.

The biggest impact may come from the concept of a “digital twin”. This involves feeding real-world data back into a digital model for each individual unit in order to maintain an exact (as much as possible) simulation. There are many applications, such as:

  • Condition-based maintenance: a unit which has seen severe service may be brought in for maintenance sooner than scheduled, while a unit with light duty may be able to extend its maintenance interval
  • Performance prediction: test performance virtually before operating in previously unanticipated environments or configurations
  • Failure reconstruction: if a unit fails, having an exact digital model enhances the investigation
  • Software upgrade testing: perform testing of upgrades on each fielded configuration to ensure compatibility
  • Threat analysis: test each fielded configuration for vulnerabilities to new cyber and physical threats

Digital twins can be built after-the-fact, but a native digital design allows for a very simple transition from engineering to operations1.
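
As a hypothetical illustration of the condition-based maintenance case above, here’s a minimal Python sketch of a per-unit twin; the fatigue model, thresholds, and tail number are invented for illustration:

```python
# Hypothetical condition-based maintenance check driven by a per-unit digital twin.
# The fatigue model, thresholds, and tail number are invented for illustration.
from dataclasses import dataclass

@dataclass
class DigitalTwin:
    tail_number: str
    design_fatigue_limit: float = 1_000.0   # arbitrary fatigue-index units
    accumulated_fatigue: float = 0.0
    flight_hours: float = 0.0

    def ingest_sortie(self, hours, severity):
        """Fold real-world usage data back into this unit's model."""
        self.flight_hours += hours
        self.accumulated_fatigue += hours * severity

    def maintenance_due(self, scheduled_interval_hours=300.0):
        """Pull a hard-used unit in early; let a lightly used unit run longer."""
        fatigue_fraction = self.accumulated_fatigue / self.design_fatigue_limit
        return fatigue_fraction > 0.8 or self.flight_hours >= scheduled_interval_hours

twin = DigitalTwin(tail_number="86-0001")
twin.ingest_sortie(hours=120, severity=4.0)   # severe service
twin.ingest_sortie(hours=80, severity=5.5)
print(twin.maintenance_due())  # True: fatigue-driven, well before the 300-hour schedule
```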

Summary

DE isn’t necessarily a prerequisite to agility, nor does adopting DE make a program Agile. However, the combination of Agile and DE can be extremely powerful. The real key to agility is how the program is planned and how functionality is prioritized. More on that next time.

Have you worked with a DE for a large system? Did I overlook any impacts or benefits? What are pitfalls to be aware of? Share your experiences and opinions in the comments below!

Agile SE Part Three: Agile Contracts and the Downfall of Requirements

Table of Contents

The antithesis of agile

Requirements are a poor way to acquire a system. They’re great in theory, but frequently fail in practice. Writing good requirements is hard, much harder than you’d think if you’ve never had the opportunity. Ivy Hooks gives several examples of good and bad requirements in the paper “Writing Good Requirements”. Poor requirements can unnecessarily constrain the design, be interpreted incorrectly, and pose challenges for verification. Over-specification results in spending on capabilities that aren’t really needed, while under-specification can result in a final product that doesn’t provide all of the required functions.

If writing one requirement is hard, try scaling it up to an entire complex system. Requirements-based acquisition rests on the assumption that the specification and statement of work are complete, consistent, and effective. That requires a great deal of up-front work with limited opportunity to correct issues found later. A 2015 GAO report found that “DoD often does not perform sufficient up-front requirements analysis”, leading to “cost, schedule, and performance problems”.

And that’s just the practical issue. The systemic issue with requirements is that the process of analyzing and specifying them is time consuming. One of the more recent DoD acquisition buzzphrases is “speed of relevance”. Up-front requirements are antithetical to this goal. If it takes months or even years just to develop those requirements, the battlefield will have evolved before a contract can be issued. Add years of development and testing, and we’re deploying last-generation technology geared to meeting a past need. That’s the speed of irrelevance.

Agile promises a better approach to deliver capabilities faster. But we have to move away from large up-front requirements efforts.

Still from Back to the Future Part 2 with subtitles changed to: Requirements? Where we're going, we don't need requirements!

Agile contracting

Traditional requirements-based acquisition represents a fixed scope, with up-front planning to estimate the time and cost required to accomplish that scope. Pivoting during the development effort (for example, as we learn more about what is required to accomplish the mission) requires re-planning with significant cost and schedule impacts. The Government Accountability Office (GAO) conducts annual reviews of Major Defense Acquisition Programs (MDAPs). The most recent report analyzing 85 MDAPs found that they have experienced over 54 percent total cost growth and 29 percent schedule growth, resulting in an average delay of more than 2 years.

Defense acquisition leaders talk about delivering essential capabilities faster and then continuing to add value with incremental deliveries, which is a foundational Agile and Dev*Ops concept. But you can’t do that effectively under a fixed-scope contract where the emphasis is on driving to that “complete” solution.

The opposite of a fixed-scope contract is a value stream or capacity of work model. Give the development teams broad objectives and let them get to work. Orient the process around incremental deliveries, prioritize the work that will provide the most value soonest, and start getting those capabilities to the field.

triangle with vertices labeled "SCOPE", "COST", and "TIME", the center is the word "QUALITY"
Project Management Triangle

“But wait,” you say, “doesn’t the project have to end at some point?” That’s the best part of this model. The ‘fixed’ cost and schedule contract keeps getting renewed as long as the contractor is providing value. The contractor is incentivized to deliver quality products and to work with the customer to prioritize the backlog, or the customer may choose not to renew the contract. The customer has flexibility to adjust funding profiles over time, ramping up or down based on need and funding availability. If the work reaches a natural end point—any additional features wouldn’t be worth the cost or there is no longer a need for the product—the effort can be gracefully wrapped up.

You may be familiar with the project management triangle1. Traditional approaches try to fix all of the aspects, and very often fail. Agile approaches provide guardrails to manage each of the aspects but otherwise allow the effort to evolve organically. This is a radical shift for program managers and contracting offices, but is absolutely essential for true agile development.

Agile requirements

The most important aspect of agile approaches is that they shift requirements development from an intensive up-front effort to an ongoing, collaborative effort. The graphic below illustrates the difference between traditional and agile approaches. With traditional approaches, the contractor is incentivized to meet the contractual requirements, whether or not the system actually delivers value to the using organization or is effective to the end user.

Block diagram comparing acquisition models. In the traditional model, the using organization defines a need, the acquisition organization writes requirements, the contractor delivers a system to the using organization, and the using organization deploys it to end users. In the agile model, the using organization defines a need, the acquisition organization creates an agile contract, the contractor iterates with end-user feedback and collaborates with all groups, and continuous deliveries flow to the using organization, which deploys them to end users.

In an agile model, the development backlog will be seeded with high-level system objectives. Requirements (or “user stories”) are developed through collaboration among the stakeholders and the development is shaped by iterative user feedback. The agile contract may have a small set of absolute requirements or constraints. For example, the system may need to comply with an established architecture or interface, meet particular performance requirements, or adhere to relevant standards. The key is that these absolute requirements are as minimal and high-level as possible.

This enables the stakeholder requirements discovery, analysis, and development process to be collaborative, iterative, and ongoing. The resulting requirements set isn’t radically different from a traditional requirements decomposition: requirements still have to be traceable from top to bottom. A key difference is that the decomposition happens closer to the development, both in time and organization. The rationale and mission context for a requirement won’t get lost because the development team is involved in the process, so they understand the drivers behind the features they’ll be implementing. Also, because the requirements aren’t enshrined in a formal contract they can be easily changed as the teams learn more from prototyping and user engagements.
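
To illustrate what “traceable from top to bottom” can look like in this model, here’s a small hypothetical Python sketch of a backlog item tied to a contract-level objective (the IDs, objective, and story wording are invented):

```python
# Hypothetical illustration of "traceable from top to bottom" in an agile backlog.
# The objective, story wording, and IDs are invented for illustration.
from dataclasses import dataclass

@dataclass
class ContractObjective:
    """High-level objective or constraint written into the agile contract."""
    obj_id: str
    text: str

@dataclass
class UserStory:
    """Backlog item written collaboratively and traced to a contract objective."""
    story_id: str
    text: str
    traces_to: ContractObjective
    acceptance_criteria: list

objective = ContractObjective(
    obj_id="OBJ-3",
    text="Operators can plan a mission route in under 10 minutes.",
)

story = UserStory(
    story_id="US-42",
    text="As a mission planner, I want to import yesterday's route as a starting point.",
    traces_to=objective,
    acceptance_criteria=[
        "Previous route loads in under 5 seconds",
        "Planner can edit waypoints after import",
    ],
)

print(f"{story.story_id} traces to {story.traces_to.obj_id}: {story.traces_to.text}")
```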

I’m getting ahead of myself, though! In the next installment of this series we’ll look at digital engineering as a key enabler of Agile SE before moving on to the importance of cross-functional development teams, the role of Product Owner, and scaling up to a large project.

What are your experiences with agile contracts and agile requirements? Share your best practices, horror stories, and pitfalls to avoid below.

Agile SE Part Two: What’s Your Problem?

Table of Contents

A faster horse

“If I had asked people what they wanted, they would have said faster horses.”

Apocryphally attributed to Henry Ford1

When people trot out that quote they’re often trying to make the point that seeking user feedback will only constrain the design because our small-minded <sneer>users</sneer> cannot possibly think outside the box. I disagree with that approach. User feedback is valuable information. It should not constrain the design, but it is essential to be able to understand and empathize with your users. They say “faster horse”? It’s your job to generalize and innovate on that desire to come up with a car. The problem with the “singular visionary” approach is that for every wildly successful visionary there are a dozen more with equally innovative ideas that didn’t find a market.

Sometimes, your research will even lead you to discover something totally unexpected which changes your whole perspective on the problem.

Here’s a great, real-world example from a Stanford Hacking for Defense class2:

Customer ≠ user

Team Aqualink was tasked by their customer (the chief medical officer of the Navy SEALs) to build a biometric monitoring kit for Navy divers. These divers face both acute and long-term health impacts due to the duration and severe conditions inherent in their dives. A wearable sensor system would allow divers to monitor their own health during a dive and allow Navy doctors to analyze the data afterwards.

Team Aqualink put themselves in the flippers of a SEAL dive team (literally) and discovered something interesting: many of the dives were longer than necessary because the divers lacked a good navigation system. The medical concerns were, at least partially, actually a symptom. What the divers truly wanted was GPS or another navigational system that worked at depth. Solving that root cause would not only alleviate many of the health concerns, it would also improve mission performance. That’s a much broader impact than initially envisioned.

The customer was trying to solve the problem they saw without a deeper understanding of the user’s needs. That’s not a criticism of the customer. Truly understanding user needs is hard and requires substantial effort by systems engineers or UX researchers well-versed in user requirements discovery.

In the U.S. DoD, the Joint Capabilities Integration and Development System (JCIDS) process is intended to identify mission capability gaps and potential solutions. The Initial Capabilities Document (ICD), Capability Development Document (CDD), and Key Performance Parameters (KPPs) are the basis for every materiel acquisition. This process suffers from the same shortcoming as the biometric project: it’s based on data that is often removed from the everyday experiences of the user. But once requirements are written, it’s very hard to change them even if the development team uncovers groundbreaking new insights.

The Bradley Fighting Vehicle

Still capture from The Pentagon Wars (1998)

The Bradley Fighting Vehicle was lampooned in the 1998 movie The Pentagon Wars3. By contrast, the program to replace the Bradley is being held up as an example of a new way of doing business.

Instead of determining the requirements from the outset, the Army is funding five companies for an 18-month digital prototyping effort. The teams were given a set of nine desired characteristics for the vehicle and will have the freedom to explore varying designs in a low-cost digital environment. The Army realizes that the companies may have tools, experiences, and concepts to innovate in ways the Army hasn’t considered. The Army is defining the problem space and stepping back to allow the contractors to explore the solution space.

Requirements myopia

Systems engineering for the DoD is built around requirements. The aforementioned JCIDS process defines the need. Based on that need, the acquisition command defines the requirements. The contractor bids and develops to those requirements. The test commands evaluate the system against those requirements. In theory, since those requirements tie back to warfighter needs, if we met the requirements we must have met the need.

But, there’s a gap. In the proposal process, contractors evaluate the scope of work and estimate how much effort will be required to complete the work. Sometimes this is based on concrete data from similar efforts in the past. Other times, it’s practically a guess. If requirements are incompletely specified, there could be significant latitude for interpretation. As the Team Aqualink story illustrates, even the best requirement set cannot adequately capture the actual, boots-on-the-ground mission and user needs.

So, the contractor has bid a certain cost to complete the work based on their understanding of the requirements provided. If they learn more information about the user need during the development—and meeting that need would drive up the cost—they have three options:

  1. Ask the customer for a contract change and more money to develop the desired functionality
  2. Absorb the additional costs
  3. Build to the requirement even if it isn’t the best way to meet the need (or doesn’t really meet it at all)

None of these options is ideal. Shelling out much more than originally budgeted reflects poorly on the government program office, which has to answer to Congress for significant overruns. Contractors will absorb some additional development cost from a “management reserve” fund built into their bid, but that amount is pretty limited. In many cases, we end up with option 3.

This is heavily driven by incentive structures. Contractors are evaluated and compensated based on meeting the requirements. Therefore, the contractor’s success metrics and leadership bonuses are built around requirements. Leaders put pressure on engineers to meet requirement metrics and so engineers are incentivized to prioritize the metrics over system performance. DoD acquisition reforms such as Human Systems Integration (HSI) have attempted to force programs to do better, but have primarily resulted in more requirements-focused bureaucracy and rarely the desired outcome.

I call this “requirements myopia”: a focus on meeting the requirements rather than delivering value.

Refocusing on value

It doesn’t make sense to get rid of requirements entirely, but we can adapt our approach based on the needs of each acquisition. I touched on this briefly in an earlier article, Agile Government Contracts.

One major issue: if we don’t have requirements, how will we know what to build and when the development is done? Ponder that until next time, because in the next post in this series we’ll dive into some of the potential approaches.

What are your experiences with requirements, good or bad? Thoughts on the “faster horse”, Team Aqualink’s pivot, or the Optionally Manned Fighting Vehicle (OMFV) prototyping effort? Sound off below!

Agile SE Part One: What is Agile, Anyway?

Table of Contents

What is “Agile”?

Agile is a relatively new approach to software development based on the Agile Manifesto and Agile Principles. These documents are an easy read and you should absolutely check them out. I will sum them up as stating that development should be driven by what is most valuable to the customer and that our projects should align around delivering value.

Yes, I’ve obnoxiously italicized the word value as if it were in the glossary of a middle school textbook. That’s because value is the essence of this entire discussion.

Little-a Agile

With a little-a, “agile” is the ability to adapt to a changing situation. This means collaboration to understand the stakeholder needs and the best way to satisfy those needs. It means changing the plan when the situation (or your understanding of the situation) changes. It means understanding what is valuable to the customer, focusing on delivering that value, and minimizing non-value added effort.

Big-A Agile

With a big-A, “Agile” is a software development process that aims to fulfill the agile principles. There are actually several variants that fall under the Agile umbrella, such as Scrum, Kanban, and Extreme Programming. Each of these has techniques, rituals, and processes that help teams deliver a quality product through a focus on value-added work1.

“Cargo Cult” Agile

“Agile” has become the hot-new-thing, buzzword darling of the U.S. defense industry2. Did I mean Big-A or Little-a? It hardly matters. As contractors have rushed to promote their new development practices, they have trampled the distinction. The result is Cargo Cult Agile: following the rituals of an Agile process and expecting that the project will magically become more efficient and effective as a result. I wrote about this previously, calling it agile-in-name-only and FrAgile.

This isn’t necessarily the fault of contractors and development teams. They want to follow the latest best practices from commercial industry to most effectively meet the needs of their customers. But as anyone who has worked in the defense industry can tell you, the pace of change is glacial due to a combination of sheer bureaucratic size and byzantine regulations. Most contracts just don’t support agile principles. For example, the Manifesto prioritizes “working software over comprehensive documentation” and one of the Principles is that “working software is the primary measure of progress”; but most defense contracts require heaps of documentation that are evaluated as the primary measure of progress.

The upshot is that, to most engineers in the defense industry, “Agile” is an annoying new project management approach. Project management is already the least enjoyable part of our job, an obstacle to deal with so that we can get on with the real work. Now we have to learn a new way of doing things that may not be the most effective way to organize our teams and has no real impact on the success of the program. This has resulted in an undeserved bad taste for many of us.

If this is your experience with Agile, please understand that this is not the true intent and practice. And that’s the point of this series: how can we achieve real agility to enhance the execution of our programs and deliver value to the field faster?

Agile Systems Engineering

So far, I’ve only mentioned Agile as a software development approach. Of course, we’re here because Agile is being appropriated for all types of engineering, especially as “Agile Hardware Development” and “Agile Systems Engineering”. Some people balk at this; how can a software process be applied to hardware and systems? Here, the distinction between little-a agile and big-A Agile is essential. Agile software development evangelists have taken the values in the Manifesto and Principles and created Agile processes and tools that realize them.

It’s incumbent upon other engineering disciplines to do the same. We must understand the agile values, envision how they are useful in our context (type of engineering, type of solution, customer, etc.), and then craft or adapt Agile processes and tools that make sense. Where many projects and teams3 go wrong is trying to shoehorn their needs into an Agile process that is a poor fit, and then blaming the process.

In the rest of this series we’ll explore how agile SE can provide customer value, how our contracts can be crafted to enable effective Agile processes, and what those processes might look like for a systems engineering team. Stay tuned!

Have you worked on a project with “Cargo Cult Agile”? Have you adapted agile principles effectively in your organization? What other resources are out there for Agile systems engineering? Share your thoughts in the comments below.

The Operations Concept: Developing and Using an OpsCon

  • An Operations Concept is more detailed than a Concept of Operations
  • It is a systems engineering artifact that describes how system use cases are realized
  • It is versatile and serves many uses across the project
  • There is no set format, though there are some best practices to consider

Concept of Operations (ConOps)

Let’s start by talking about the OpsCon’s better-known big brother, the ConOps.


“Diversity of thought” is the “all lives matter” of corporate inclusion efforts

For at least the last decade, engineering companies have talked a great deal about “diversity and inclusion”. Inevitably, many people1 have the takeaway that this means “diversity of thought”. This is like telling a Black Lives Matter supporter that “all lives matter”; of course all lives matter, but that’s completely missing the point2. Diversity of thought is important to avoid groupthink and promote innovation; but that’s not the point of diversity and inclusion efforts3.

Diversity and inclusion means making sure that teams are actually diverse, across a range of visible and not-visible features. Why does that matter?

The business case

There are a lot of business justifications for fostering diverse teams. The consulting firm McKinsey has published some slick reports with charts and stock photos4 to make the case to business leaders: inclusion = performance = profits. There are also arguments about finding and retaining top talent, regulatory mandates, and employee engagement.

The thing is, who cares? This blog isn’t about corporate profit, it’s about effective engineering practices. In my experience, engineers tend not to care much about profit except as a means to do fun and innovative work5. Getting some business benefits from diversity and inclusion is a nice side effect, and if it helps get corporate buy-in it’s hard to complain too much. But it still doesn’t feel right.

The innovation case

All the talk about the business case often neglects to consider the mechanism: why do diverse teams perform better, and how do we leverage that to enhance performance? It’s actually fascinating. As Harvard Business Review puts it, “diverse teams feel less comfortable”, which slows down their decision making and causes them to think more critically.

If you’re a fan of Daniel Kahneman’s book Thinking, Fast and Slow, you may recognize this as engaging the “slow” system. We tend to rush to decisions with fast thinking, which is efficient but not always the most effective. The friction caused by diversity forces us to engage the more creative and thoughtful slow thinking. That’s interesting to understand and is a more compelling argument to the technically-minded, but it still doesn’t feel right.

The human case

When I think about diversity and inclusion, I always end up back at the same rationale: it’s just the right thing to do. We live in a world where some members of society have fewer opportunities because of historical racism, sexism, and homophobia, including the aftereffects of that discrimination that are still present today.

Ideally, we would live in a world that was a true meritocracy, where everyone has an equal opportunity to succeed based on their fit for the role, regardless of skin color, nationality, physical disability, cognitive disability, sex, gender identity, sexual orientation, religion, age, hairstyle, height, fashion sense, bench press ability, body modification, etc. We are moving toward that world, but we are still far from actually achieving it. A few representative statistics:

  • U.S. patent data show that women are inventing at an all-time high, but still less than a quarter of patents issued each year include a female inventor.
  • The American Bar Association analyzed the demographics of patent attorneys (who require a strong technical and legal background) and found that, despite recent gains, less than 7% are non-white.
  • Black and Hispanic people are underrepresented in STEM fields according to data from Pew Research.

We’re moving in the right direction, but it’s hard to argue that these are the outcomes of equitable opportunity. My personal opinion is that there actually is plenty of opportunity for those who know where to look for it, but that many students don’t pursue technical fields because they don’t see those fields as an option for them.

And who can blame them, when the most famous Black inventor lived a century ago, when we celebrate Watson and Crick but not the female scientist whose work was critical to their discovery, when chemistry labs are not built to accommodate scientists with disabilities.

That’s changing too. There are excellent, diverse STEM role models and communicators out there: Neil deGrasse Tyson, Raven the Science Maven, Abigail Harrison, Helen Arney, the late but still extremely influential Stephen Hawking, just to name a few. This is great!

But is it enough? It’s easy to point to the high-profile success stories and say the problem is solved. It will still take a generation for the students currently looking up to these role models to pursue technical degrees, begin working in the field, and become role models themselves. With each successive generation we move closer to parity and equality. But that doesn’t mean we shouldn’t take a more active role in bringing about this change as soon as possible.

Consider your role

Equality is the soul of liberty; there is, in fact, no liberty without it.

Frances Wright

There is a project called “I Am A Scientist” which aims to show students that anyone can be a STEM professional. In a few decades this effort will no longer be necessary; of course anyone can be a scientist or engineer, who would think otherwise? In the meantime, we (as a society, as engineers interested in fostering the next generation, as teachers and leaders) have to make a deliberate choice6 to recognize, affirm, and support the widest possible range of people who may be interested in STEM, including promoting diverse voices so every student can find a role model that appeals to them.

We must think about the way in which we approach diversity. So many efforts are mere tokenism, made obvious by phrases such as “diversity hire”7 and by carefully arranging corporate photos to “highlight” “diversity”8. If you recognize these types of practices at your company, take a moment to consider whether the priority is to foster true inclusion or merely to tick a box.

We have to keep promoting inclusion in our workplaces to serve our peers today and in the future. After all, a diverse crowd of STEM degree holders isn’t helpful if they aren’t actually included in the real work. It’s easy to make fun of “unconscious bias training” and the like. But when you actually speak to people from groups that face discrimination and ask about their experiences, you learn about the small inequities that compound to hold people back from participating and from career success. Countering those inequities can be as simple as making sure that everyone is heard and respected, ensuring that everyone has the resources and support to advocate for their own career opportunities, and offering mentorship.

Clear data about diversity in STEM fields exists and can be collected, and that should be our metric for success. When patents issued, papers published, degrees earned, and other outcome measures reach parity with the demographics of the general population, we can claim success. We should all do our small parts to make that happen.

Are you a “diversity candidate” with an experience to share? Do you have other suggestions for increasing inclusion? Leave your comments below.

Agile SE Part Zero: Overview

“Agile” is the latest buzzword in systems engineering. It has a fair share of both adherents and detractors, not to mention a long list of companies offering to sell tools, training, and coaching. What has been lacking is a thoughtful discussion about when agile provides value, when it doesn’t, and how to adapt agile practices to be effective in complex systems engineering projects.

I don’t claim this to be the end-all guide on agile systems engineering, but hope it will at least spark some discussion. Please comment on the articles with details from your own experiences. If you’re interested in contributing or collaborating, please contact me at benjamin@engineeringforhumans.com, I’d love to add your voice to the site.

Part 1: What is Agile Anyway?

A broad overview of Agile as a concept, including the difference between Agile processes and being agile and critical discussion of how Agile most often fails. Also, adapting the concepts which have been successful for software development in order to find success in a systems engineering context.

Part 2: What’s Your Problem?

Henry Ford’s apocryphally faster horse, a solid example of how customers can misunderstand their users, and requirements myopia. In short, requirements-based acquisition is terrible, let’s refocus on solving problems and providing value.

Part 3: Agile Contracts and the Downfall of Requirements

Requirements are the antithesis of agile1: impractical, time consuming, prone to misinterpretation. But, they are the foundation for every large DoD acquisition. A major paradigm shift is required for true agile systems engineering.

Part 4: Digital Transformation

A slight detour to discuss an important enabler. Integrated digital engineering has enormous benefits in and of itself. It also addresses many of the objections to agile systems engineering and agile hardware engineering.