A common misconception is that Agile development processes are faster. I’ve heard this from leaders as a justification for adopting Agile processes and read it in proposals as a supposed differentiator. It’s not true. Nothing about Agile magically enables teams to architect, engineer, design, test, or validate any faster.
In fact, many parts of Agile are actually slower. Time spent on PI planning, backlog refinement, sprint planning, daily stand-ups, and retrospectives is time the team isn’t developing. Much of that overhead is avoided in a Waterfall style where the development follows a set plan.
What Agile does offer, however, is sooner realization of value. And that’s the source of the misconception. The charts below illustrate this notionally. You can see that Agile delivers a small amount of system capability early on and then builds on that value with incremental deliveries. By contrast, Waterfall delivers no system capability until the development is “done”, at which point all of the capability is delivered at once. Agile development isn’t faster, but it does start to provide value sooner; that adds up to more area ‘under the curve’ of cumulative value.
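To make the “area under the curve” idea concrete, here’s a minimal sketch with invented numbers: both schedules deliver the same total capability, but the incremental schedule accumulates far more cumulative value along the way.

```python
# Illustrative sketch with invented numbers: cumulative value delivered by an
# incremental (Agile) schedule vs. a single-release (Waterfall) schedule.

def cumulative_value(deliveries, months=12):
    """deliveries maps month -> value delivered that month.
    Returns the month-by-month running total of delivered value."""
    total, curve = 0.0, []
    for month in range(1, months + 1):
        total += deliveries.get(month, 0.0)
        curve.append(total)
    return curve

# Agile: a small increment of value every two months.
agile = cumulative_value({m: 1.0 for m in range(2, 13, 2)})
# Waterfall: all value arrives at once when development is "done".
waterfall = cumulative_value({12: 6.0})

assert agile[-1] == waterfall[-1]    # same total value delivered...
print(sum(agile), sum(waterfall))    # ...but far more area under the Agile curve
```

Summing each curve is a crude proxy for the area under it; the exact shape doesn’t matter, only that early increments compound.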
But that’s not even the real value of Agile, in my opinion. On the charts you’ll also notice two different Waterfall lines, one for theory and one for practice. In theory, the Waterfall requirements should deliver the exactly correct system. In practice, requirements are often poorly written, incomplete, or misinterpreted, resulting in a system that misses the mark. It’s also possible for user needs to change over time, especially given the long duration of many larger projects.
But because validation testing is usually scheduled near the end of the Waterfall project, those shortcomings aren’t discovered until it’s very costly to correct them. With Agile, iterative releases mean we can adapt as we learn both on an individual feature level and on the product roadmap level.
In short, Agile isn’t faster. But it delivers value sooner, delivers more cumulative value over all, and ensures that the direction of the product provides the most value to the user.
For more, check out my series on Agile Systems Engineering. Also, share your thoughts on the differences between Agile and Waterfall in the comments below.
CALLBACK is the monthly newsletter of NASA’s Aviation Safety Reporting System (ASRS). Each edition features excerpts from real, first-person safety reports submitted to the system. Most of the reports come from pilots, many from air traffic controllers, and the occasional one from a maintainer, ground crew member, or flight attendant. Human factors issues feature heavily, and the newsletters provide insight into current safety concerns. ASRS receives five to nine thousand reports each month, so there’s plenty of content for the CALLBACK team to mine.
The February 2022 issue contained this report about swapped buttons:
That’s not incorrect, per se; in fact, it’s a useful generalization. The problem is that it is often misinterpreted. When something goes wrong, Murphy will be invoked with an air of inevitability: of course [whatever improbable event] would happen, it’s Murphy’s law!
You, dear reader and astute systems thinker, may have already spotted the issue. If anything that can go wrong will, why not take steps to preclude that possibility, mitigate its impact, or at least consciously accept it?
The story of Murphy’s Law starts with some of the most important, foundational research in airplane and automotive crash safety. I will summarize the program, but there is no way I can do it justice. I’d highly recommend this article by Nick Spark, or the video below by YouTube sensation The History Guy.
Physician and US Air Force officer John Paul Stapp was a pioneer in researching the effects of acceleration and deceleration forces on humans. This work was done using a rocket-powered sled called the Gee Whiz at Edwards Air Force Base. A later version called Sonic Wind was even faster, capable of reaching Mach 1.7.
There’s a long history of doctors and scientists experimenting on themselves. Doctor Barry Marshall deliberately ingested the bacterium Helicobacter pylori in order to prove that ulcers were caused by bacteria, not stress, and could be treated with antibiotics, winning a Nobel Prize for the work. Doctors Nicholas Senn and Jean-Louis-Marc Alibert each showed that cancer was not contagious by injecting or implanting it into themselves. Of course, self-experimentation is not without risk. Dr. William Stark died from scurvy while deliberately malnourishing himself to research the disease. Stapp was cut from the same cloth, subjecting himself to 29 rocket sled tests, including some of the most severe and uncertain of the setups.
And they were quite severe. Test subjects routinely experienced forces up to 40g. The peak force experienced by a human during these tests was a momentary 82.6g, which is insane. By comparison, manned spacecraft experience 3-4g and fighter pilots about 9g. People lose consciousness during sustained forces of 4-14g, depending on training, fitness, and whether they’re wearing a g-suit.
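For a sense of scale, the average deceleration of Stapp’s oft-cited December 1954 run (roughly 632 mph to a stop in about 1.4 seconds) can be estimated with back-of-the-envelope arithmetic; the instantaneous peak was far higher than this average.

```python
# Back-of-the-envelope estimate using widely quoted figures from Stapp's
# December 1954 Sonic Wind run: ~632 mph to a stop in roughly 1.4 seconds.
# Figures are approximate; the instantaneous peak g was far above this average.

MPH_TO_MS = 0.44704          # miles per hour to metres per second
G = 9.80665                  # standard gravity, m/s^2

speed_ms = 632 * MPH_TO_MS   # about 282.5 m/s
avg_decel = speed_ms / 1.4   # about 202 m/s^2
avg_g = avg_decel / G        # about 20.6 g, averaged over the stop

print(f"average deceleration ~= {avg_g:.1f} g")
```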
Stapp and his fellow subjects suffered tunnel vision, disorientation, loss of consciousness, “red outs” due to burst capillaries in the eyes, and black eyes due to burst capillaries around the eyes. They lost fillings from their teeth, were bruised and concussed, cracked ribs, and broke collarbones. Stapp broke his wrist in two separate tests; one of those times he simply set it himself before heading back to the office. The team, particularly project manager George Nichols, were legitimately worried about killing test subjects as they were accelerated faster than bullets and close to the speed of sound.
All of this effort was designed to understand the forces humans could withstand. It had been thought that humans could not survive more than 18g, so airplane seats weren’t designed to withstand any more than that. Stapp believed, correctly, that aviators were dying in crashes not because of the forces experienced but because their planes lacked the structural integrity to protect them. The work led to major advances in survivability in military aviation.
Stapp then applied his expertise to automotive research, using the techniques he’d developed to create the first crash tests and crash test dummies. He also advocated for, tested, and helped to perfect automotive seatbelts, saving millions of lives. A non-profit association honors his legacy with an annual conference on car crash survivability. We all really owe a debt of gratitude to Dr. Stapp and this program.
So, where does Murphy fit into all of this? Well, during the program there was some question about the accuracy of the accelerometers being used to measure g-forces. Another Air Force officer, Edward Aloysius Murphy Jr., had developed strain transducers to provide this instrumentation for his own work with centrifuges. He was happy to provide these devices to the rocket sled program, and he sent them down to Edwards Air Force Base with instructions for installing and using them.
The gauges were installed, Stapp was strapped in, and the test conducted. The engineers eagerly pulled the data from the devices and found… nothing. Confused, the team called Murphy to ask for help and he flew out to Edwards AFB to see for himself. Upon investigation, he found that each of the transducers had been meticulously installed backwards, so the devices recorded no data. He blamed himself for not considering that possibility when writing the instructions and in frustration said:
If there’s more than one way to do a job, and one of those ways will end in disaster, then somebody will do it that way.
Stapp then popularized the phrase, stating in a press conference that “We do all of our work in consideration of Murphy’s Law… If anything can go wrong, it will. We force ourselves to think through all possible things that could go wrong before doing a test and act to counter them”. With human subjects in highly risky experiments, the team put the utmost care into the design of the system and the preparation of each test event. By assuming that anything that can go wrong will indeed eventually go wrong, the team would put in the effort to minimize the number of things that could go wrong and thus maximize safety.
It’s why we have polarized outlets and safety interlocks. It’s why any critical component should be physically impossible to install the wrong way: the ASRS database is filled with hundreds, if not thousands, of reports like the one at the top of this story, of parts that fit correctly but are actually installed incorrectly.
This article is a quick detour on an important enabler for agile systems engineering. “Digital transformation” means re-imagining the way businesses operate in the digital age, including how we engineer systems. As future articles discuss scaling agile practices to larger and more complex systems, it will be very helpful to understand the possibilities that digital engineering unlocks.
Digital engineering enables the agile revolution
The knee-jerk reaction to agile systems engineering is this: “sure, agile is great for the speed and flexibility of software development, but there’s just no way to apply it to hardware systems”. Objections range from development times to lead times to the cost of producing physical prototypes.
Enter digital engineering (DE). DE is the use of computer models to develop, analyze, prototype, simulate, and experiment with the system. Of course, we’ve used computers in engineering for decades for computer-aided design (CAD), modeling & simulation (M&S), and more. DE integrates and builds upon these individual tools to model the entire, integrated system. It takes advantage of the latest discipline tools for development, integrates them using model-based systems engineering (MBSE) techniques, and uses simulation tools to test performance in realistic mission environments.
Because all components are digitally modeled, it can be treated much more like a software project using Dev*Ops. Engineering teams in every discipline work in a version control system and can branch, collapse, experiment, and unit test. They feed in data from analyses and tests to improve the fidelity of the model. When a component design is ready, it gets pushed to a test environment where automated testing makes sure nothing breaks. Then, it is deployed into a “production” simulation environment where a combination of automated and manual simulation puts the system through its paces and evaluates performance in various test scenarios.
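As a sketch of what such an automated test gate might look like, here’s a hypothetical example: the actuator surrogate model, load cases, and the 60 ms threshold are all invented for illustration, not taken from any real program.

```python
# Hypothetical test gate for a digitally modeled component. The actuator
# surrogate model, load cases, and 60 ms requirement are invented examples.

def actuator_response_time(load_kg: float) -> float:
    """Toy surrogate model: response time (ms) degrades linearly with load."""
    return 12.0 + 0.8 * load_kg

def gate_checks() -> list:
    """Run the automated checks; an empty list means the design is promoted
    to the integrated simulation environment."""
    failures = []
    for load in (0, 25, 50):                 # representative load cases (kg)
        t = actuator_response_time(load)
        if t > 60.0:                         # requirement: respond within 60 ms
            failures.append(f"{t:.1f} ms at {load} kg exceeds the 60 ms limit")
    return failures

print("PROMOTE" if not gate_checks() else gate_checks())
```

In a real pipeline these checks would run automatically on every push, just as unit tests do for software.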
Ready to build for real? Pick a configuration and move into a real-world prototyping and/or initial production effort with confidence in the design. We can then compare actual and simulated performance data to verify the accuracy of the models, as well as feed the results back into the model to increase its fidelity going forward.
The most-cited success story for DE and agile system development is the Saab Gripen E multi-role strike fighter [pdf]. Using a Dev*Ops approach, any version of the aircraft could be “compiled” and virtually flown at any time. Want to compare different mission system packages, armament sets, or other configurations? It’s easy to pull those options and go fly.
As a systems engineer, I love this! We can use powerful simulation environments to understand complex system interactions, conduct trade studies, and optimize the design across the entire system. And because it’s virtual, we can make changes quickly to explore possibilities with no risk—even off-the-wall ideas just to see. Systems engineering becomes generative again, breaking out of the reductive mindset we often find ourselves in.
Lifecycle analyses, digital twins, and other benefits
The benefits of DE go far beyond the initial development. So far we’ve focused on system operational performance, but the entire system lifecycle can be simulated: maintainability, reliability, failure modes and effects, safety, robustness, survivability, usability, and more. WOW! This enables holistic lifecycle systems engineering like we’ve never seen before.
The digital system model can be reused in many ways, providing benefits far past the initial development. For example, simulation capabilities can be packaged into training devices, providing more accurate training at reduced cost and development time.
The biggest impact may come from the concept of a “digital twin”. This involves feeding real-world data back into a digital model for each individual unit in order to maintain an exact (as much as possible) simulation. There are many applications, such as:
Condition-based maintenance: a unit which has seen severe service may be brought in for maintenance sooner than scheduled, while a unit with light duty may be able to extend its maintenance interval
Performance prediction: test performance virtually before operating in previously unanticipated environments or configurations
Failure reconstruction: if a unit fails, having an exact digital model enhances the investigation
Software upgrade testing: perform testing of upgrades on each fielded configuration to ensure compatibility
Threat analysis: test each fielded configuration for vulnerabilities to new cyber and physical threats
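As a concrete (and entirely hypothetical) sketch of the condition-based maintenance case, a twin-derived severity factor can scale how quickly each unit consumes a nominal maintenance interval:

```python
# Hypothetical condition-based maintenance logic. A digital twin estimates a
# severity factor for each unit; each operating hour then "consumes" that many
# hours of a nominal fleet-wide maintenance interval.

BASELINE_INTERVAL_HOURS = 500.0    # nominal schedule (invented number)

def hours_until_maintenance(operating_hours: float, severity_factor: float) -> float:
    """severity_factor > 1.0 means harder-than-nominal service; < 1.0, lighter duty."""
    consumed = operating_hours * severity_factor
    return max(0.0, BASELINE_INTERVAL_HOURS - consumed)

hard_use = hours_until_maintenance(300.0, severity_factor=1.5)   # comes in early
light_use = hours_until_maintenance(300.0, severity_factor=0.7)  # interval extends
print(hard_use, light_use)
```

The hard-used unit comes in with roughly 50 hours remaining while the lightly used unit still has about 290, which is exactly the flexing behavior described above.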
Digital twins can be built after-the-fact, but a native digital design allows for a very simple transition from engineering to operations.
DE isn’t necessarily a prerequisite to agility, nor does adopting DE make a program Agile. However, the combination of Agile and DE can be extremely powerful. The real key to agility is how the program is planned and how functions are prioritized. More on that next time.
Have you worked with a DE for a large system? Did I overlook any impacts or benefits? What are pitfalls to be aware of? Share your experiences and opinions in the comments below!
Requirements are a poor way to acquire a system. They’re great in theory, but frequently fail in practice. Writing good requirements is hard, much harder than you’d think if you’ve never had the opportunity. Ivy Hooks gives several examples of good and bad requirements in the paper “Writing Good Requirements”. Poor requirements can unnecessarily constrain the design, be interpreted incorrectly, and pose challenges for verification. Over-specification results in spending on capabilities that aren’t really needed, while under-specification can result in a final product that doesn’t provide all of the required functions.
If writing one requirement is hard, try scaling it up to an entire complex system. Requirements-based acquisition rests on the assumption that the specification and statement of work are complete, consistent, and effective. That requires a great deal of up-front work with limited opportunity to correct issues found later. A 2015 GAO report found that “DoD often does not perform sufficient up-front requirements analysis”, leading to “cost, schedule, and performance problems”.
And that’s just the practical issue. The systematic issue with requirements is that the process of analyzing and specifying requirements is time consuming. One of the more recent DoD acquisition buzzphrases is “speed of relevance”. Up-front requirements are antithetical to this goal. If it takes months or even years just to develop those requirements, the battlefield will have evolved before a contract can be issued. Add years of development and testing, and then we’re deploying last-generation technology geared to meeting a past need. That’s the speed of irrelevance.
Agile promises a better approach to deliver capabilities faster. But we have to move away from large up-front requirements efforts.
Traditional requirements-based acquisition represents a fixed scope, with up-front planning to estimate the time and cost required to accomplish that scope. Pivoting during the development effort (for example, as we learn more about what is required to accomplish the mission) requires re-planning with significant cost and schedule impacts. The Government Accountability Office (GAO) conducts annual reviews of Major Defense Acquisition Programs (MDAPs). The most recent report analyzing 85 MDAPs found that they have experienced over 54 percent total cost growth and 29 percent schedule growth, resulting in an average delay of more than 2 years.
Defense acquisition leaders talk about delivering essential capabilities faster and then continuing to add value with incremental deliveries, which is a foundational Agile and Dev*Ops concept. But you can’t do that effectively under a fixed-scope contract where the emphasis is on driving to that “complete” solution.
The opposite of a fixed-scope contract is a value stream or capacity of work model. Give the development teams broad objectives and let them get to work. Orient the process around incremental deliveries, prioritize the work that will provide the most value soonest, and start getting those capabilities to the field.
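One common heuristic for “most value soonest” is a Weighted Shortest Job First (WSJF) score, which I’ll borrow here purely for illustration; the backlog items, cost-of-delay values, and job sizes below are invented:

```python
# Illustrative backlog prioritization using a WSJF-style score
# (cost of delay divided by job size). All items and numbers are invented.

backlog = [
    # (feature, cost_of_delay, job_size)
    ("mission planning UI", 8, 2),
    ("full sensor fusion", 13, 13),
    ("secure data link", 20, 4),
    ("training mode", 5, 3),
]

def wsjf(item):
    _feature, cost_of_delay, job_size = item
    return cost_of_delay / job_size

# Highest score first: small, high-value items float to the top of the backlog.
ranked = sorted(backlog, key=wsjf, reverse=True)
for feature, cod, size in ranked:
    print(f"{feature:20s} WSJF = {cod / size:.2f}")
```

The scoring scheme matters less than the practice: re-rank regularly as value estimates change, rather than locking scope at contract award.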
“But wait,” you say, “doesn’t the project have to end at some point?” That’s the best part of this model. The ‘fixed’ cost and schedule contract keeps getting renewed as long as the contractor is providing value. The contractor is incentivized to deliver quality products and to work with the customer to prioritize the backlog, or the customer may choose not to renew the contract. The customer has flexibility to adjust funding profiles over time, ramping up or down based on need and funding availability. If the work reaches a natural end point—any additional features wouldn’t be worth the cost or there is no longer a need for the product—the effort can be gracefully wrapped up.
You may be familiar with the project management triangle. Traditional approaches try to fix all of the aspects, and very often fail. Agile approaches provide guardrails to manage each of the aspects but otherwise allow the effort to evolve organically. This is a radical shift for program managers and contracting offices, but is absolutely essential for true agile development.
The most important aspect of agile approaches is that they shift requirements development from an intensive up-front effort to an ongoing, collaborative effort. The graphic below illustrates the difference between traditional and agile approaches. With traditional approaches, the contractor is incentivized to meet the contractual requirements, whether or not the system actually delivers value to the using organization or is effective to the end user.
In an agile model, the development backlog will be seeded with high-level system objectives. Requirements (or “user stories”) are developed through collaboration among the stakeholders and the development is shaped by iterative user feedback. The agile contract may have a small set of absolute requirements or constraints. For example, the system may need to comply with an established architecture or interface, meet particular performance requirements, or adhere to relevant standards. The key is that these absolute requirements are as minimal and high-level as possible.
This enables the stakeholder requirements discovery, analysis, and development process to be collaborative, iterative, and ongoing. The resulting requirements set isn’t radically different from a traditional requirements decomposition: requirements still have to be traceable from top to bottom. A key difference is that the decomposition happens closer to the development, both in time and organization. The rationale and mission context for a requirement won’t get lost because the development team is involved in the process, so they understand the drivers behind the features they’ll be implementing. Also, because the requirements aren’t enshrined in a formal contract they can be easily changed as the teams learn more from prototyping and user engagements.
I’m getting ahead of myself, though! In the next installment of this series we’ll look at digital engineering as a key enabler of Agile SE before moving on to the importance of cross-functional development teams, the role of Product Owner, and scaling up to a large project.
What are your experiences with agile contracts and agile requirements? Share your best practices, horror stories, and pitfalls to avoid below.
When people trot out that quote they’re often trying to make the point that seeking user feedback will only constrain the design because our small-minded <sneer>users</sneer> cannot possibly think outside the box. I disagree with that approach. User feedback is valuable information. It shouldn’t constrain the design, but it is essential for understanding and empathizing with your users. They say “faster horse”? It’s your job to generalize and innovate on that desire to come up with a car. The problem with the “singular visionary” approach is that for every wildly successful visionary there are a dozen more with equally innovative ideas that never found a market.
Sometimes, your research will even lead you to discover something totally unexpected which changes your whole perspective on the problem.
Team Aqualink was tasked by their customer (the chief medical officer of the Navy SEALs) to build a biometric monitoring kit for Navy divers. These divers face both acute and long-term health impacts due to the duration and severe conditions inherent in their dives. A wearable sensor system would allow divers to monitor their own health during a dive and allow Navy doctors to analyze the data afterwards.
Team Aqualink put themselves in the flippers of a SEAL dive team (literally) and discovered something interesting: many of the dives were longer than necessary because the divers lacked a good navigation system. The medical concerns were, at least partially, actually a symptom. What the divers truly wanted was GPS or another navigational system that worked at depth. Solving that root cause would not only alleviate many of the health concerns, it would also improve mission performance. That’s a much broader impact than initially envisioned.
The customer was trying to solve the problem they saw without a deeper understanding of the user’s needs. That’s not a criticism of the customer. Truly understanding user needs is hard and requires substantial effort by systems engineers or UX researchers well-versed in user requirements discovery.
In the U.S. DoD, the Joint Capabilities Integration and Development System (JCIDS) process is intended to identify mission capability gaps and potential solutions. The Initial Capabilities Document (ICD), Capability Development Document (CDD), and Key Performance Parameters (KPPs) are the basis for every materiel acquisition. This process suffers from the same shortcoming as the biometric project: it’s based on data that is often removed from the everyday experiences of the user. But once requirements are written, it’s very hard to change them even if the development team uncovers groundbreaking new insights.
Instead of determining the requirements from the outset, the Army is funding five companies for an 18-month digital prototyping effort. The teams were given a set of nine desired characteristics for the vehicle and will have the freedom to explore varying designs in a low-cost digital environment. The Army realizes that the companies may have tools, experiences, and concepts to innovate in ways the Army hasn’t considered. The Army is defining the problem space and stepping back to allow the contractors to explore the solution space.
System engineering for the DoD is built around requirements. The aforementioned JCIDS process defines the need. Based on that need, the acquisition command defines the requirements. The contractor bids and develops to those requirements. The test commands evaluate the system against those requirements. In theory, since those requirements tie back to warfighter needs, if we met the requirements we must have met the need.
But, there’s a gap. In the proposal process, contractors evaluate the scope of work and estimate how much effort will be required to complete the work. Sometimes this is based on concrete data from similar efforts in the past. Other times, it’s practically a guess. If requirements are incompletely specified, there could be significant latitude for interpretation. As the Team Aqualink story illustrates, even the best requirement set cannot adequately capture the actual, boots-on-the-ground mission and user needs.
So, the contractor has bid a certain cost to complete the work based on their understanding of the requirements provided. If they learn more information about the user need during the development—and meeting that need would drive up the cost—they have three options:
Ask the customer for a contract change and more money to develop the desired functionality
Absorb the additional costs
Build to the requirement even if it isn’t the best way to meet the need (or doesn’t really meet it at all)
None of these solutions are ideal. Shelling out much more than originally budgeted reflects poorly on the government program office, who has to answer to Congress for significant overruns. Contractors will absorb some additional development cost from a “management reserve” fund built into their bid, but that amount is pretty limited. In many cases, we end up with option 3.
This is heavily driven by incentive structures. Contractors are evaluated and compensated based on meeting the requirements. Therefore, the contractor’s success metrics and leadership bonuses are built around requirements. Leaders put pressure on engineers to meet requirement metrics and so engineers are incentivized to prioritize the metrics over system performance. DoD acquisition reforms such as Human Systems Integration (HSI) have attempted to force programs to do better, but have primarily resulted in more requirements-focused bureaucracy and rarely the desired outcome.
I call this “requirements myopia”: a focus on meeting the requirements rather than delivering value.
Refocusing on value
It doesn’t make sense to get rid of requirements entirely, but we can adapt our approach based on the needs of each acquisition. I touched on this briefly in an earlier article, Agile Government Contracts.
One major issue: if we don’t have requirements, how will we know what to build and when the development is done? Ponder that until next time, because in the next post in this series we’ll dive into some of the potential approaches.
What are your experiences with requirements, good or bad? Thoughts on the “faster horse”, Team Aqualink’s pivot, or the Optionally Manned Fighting Vehicle (OMFV) prototyping effort? Sound off below!
Agile is a relatively new approach to software development based on the Agile Manifesto and Agile Principles. These documents are an easy read and you should absolutely check them out. I will sum them up as stating that development should be driven by what is most valuable to the customer and that our projects should align around delivering value.
Yes, I’ve obnoxiously italicized the word value as if it were in the glossary of a middle school textbook. That’s because value is the essence of this entire discussion.
With a little-a, “agile” is the ability to adapt to a changing situation. This means collaboration to understand the stakeholder needs and the best way to satisfy those needs. It means changing the plan when the situation (or your understanding of the situation) changes. It means understanding what is valuable to the customer, focusing on delivering that value, and minimizing non-value added effort.
With a big-A, “Agile” is a software development process that aims to fulfill the agile principles. There are actually several variants that fall under the Agile umbrella, such as Scrum, Kanban, and Extreme Programming. Each of these has techniques, rituals, and processes that help teams deliver a quality product through a focus on value-added work.
“Cargo Cult” Agile
“Agile” has become the hot-new-thing, buzzword darling of the U.S. defense industry. Did I mean big-A or little-a? It hardly matters. As contractors have rushed to promote their new development practices, they have trampled the distinction. The result is Cargo Cult Agile: following the rituals of an Agile process and expecting that the project will magically become more efficient and effective as a result. I wrote about this previously, calling it agile-in-name-only and FrAgile.
This isn’t necessarily the fault of contractors and development teams. They want to follow the latest best practices from commercial industry to most effectively meet the needs of their customers. But as anyone who has worked in the defense industry can tell you, the pace of change is glacial due to a combination of sheer bureaucratic size and byzantine regulations. Most contracts just don’t support agile principles. For example, the Manifesto prioritizes “working software over comprehensive documentation” and one of the Principles is that “working software is the primary measure of progress”; but most defense contracts require heaps of documentation that are evaluated as the primary measure of progress.
The upshot is that, to most engineers in the defense industry, “Agile” is an annoying new project management approach. Project management is already the least enjoyable part of our job, an obstacle to deal with so that we can get on with the real work. Now we have to learn a new way of doing things that may not be the most effective way to organize our teams and has no real impact on the success of the program. This has resulted in an undeserved bad taste for many of us.
If this is your experience with Agile, please understand that this is not the true intent and practice. And that’s the point of this series: how can we achieve real agility to enhance the execution of our programs and deliver value to the field faster?
Agile Systems Engineering
So far, I’ve only mentioned Agile as a software development approach. Of course, we’re here because Agile is being adapted for all types of engineering, especially as “Agile Hardware Development” and “Agile Systems Engineering”. Some people balk at this; how can a software process be applied to hardware and systems? Here, the distinction between little-a agile and big-A Agile is essential. Agile software development evangelists have taken the values in the Manifesto and Principles and created Agile processes and tools that realize them.
It’s incumbent upon other engineering disciplines to do the same. We must understand the agile values, envision how they are useful in our context (type of engineering, type of solution, customer, etc.), and then craft or adapt Agile processes and tools that make sense. Where many projects and teams go wrong is trying to shoehorn their needs into an Agile process that is a poor fit, and then blaming the process.
In the rest of this series we’ll explore how agile SE can provide customer value, how our contracts can be crafted to enable effective Agile processes, and what those processes might look like for a systems engineering team. Stay tuned!
Have you worked on a project with “Cargo Cult Agile”? Have you adapted agile principles effectively in your organization? What other resources are out there for Agile systems engineering? Share your thoughts in the comments below.
“Agile” is the latest buzzword in systems engineering. It has a fair share of both adherents and detractors, not to mention a long list of companies offering to sell tools, training, and coaching. What has been lacking is a thoughtful discussion about when agile provides value, when it doesn’t, and how to adapt agile practices to be effective in complex systems engineering projects.
I don’t claim this to be the end-all guide on agile systems engineering, but I hope it will at least spark some discussion. Please comment on the articles with details from your own experiences. If you’re interested in contributing or collaborating, please contact me at email@example.com; I’d love to add your voice to the site.
A broad overview of Agile as a concept, including the difference between following Agile processes and being agile, and a critical discussion of how Agile most often fails. Also, adapting the concepts that have been successful for software development in order to find success in a systems engineering context.
Henry Ford’s apocryphal faster horse, a solid example of how customers can misunderstand their users, and requirements myopia. In short: requirements-based acquisition is terrible; let’s refocus on solving problems and providing value.
Requirements are the antithesis of agile1: impractical, time-consuming, and prone to misinterpretation. But they are the foundation for every large DoD acquisition. A major paradigm shift is required for true agile systems engineering.
A slight detour to discuss an important enabler. Integrated digital engineering has enormous benefits in and of itself. It also addresses many of the objections to agile systems engineering and agile hardware engineering.
Human performance is a major factor in overall system performance
Humans are increasingly the bottleneck for system performance
Human factors engineering design drives human performance and thus system performance
Why care about humans?
In many system development efforts, the focus is on the capabilities of the technology: How fast can the jet fly? How accurately can the rifle fire?
We can talk about the horsepower of the engines and the bore of the rifle until the cows come home, but without a human pressing the throttle or pulling the trigger, neither technology is doing anything. A major mistake in many systems engineering efforts is neglecting the impact of the human on the performance of the system.
A great example is the FIM-92 Stinger Man Portable Air Defense System. Stinger had a requirement to hit the target 60% of the time, which was met easily in developmental testing. However, put in the hands of actual soldiers, it only hit the target 30% of the time. An Army report found that the system suffered from several shortcomings, including poor usability and a lack of consideration for the capabilities of the intended user population. The technology hit the mark, but the system as a whole failed1.
Let’s illustrate with a more everyday example. I play ice hockey and use a professional composite stick. I would guess that my fastest slap shot clocks in at around 50 mph. A pro using the exact same stick could easily break 100 mph. Clearly the technology isn’t any different; I just don’t have the same level of skill. The performance is the combination of the technology and the human using it.
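One simple way to think about that combination is as a product of technology capability and human proficiency. Here’s a minimal sketch in Python using the notional hockey numbers above; the multiplicative model and the 0.5 skill factor are illustrative assumptions, not measured values:

```python
def system_performance(tech_capability: float, human_proficiency: float) -> float:
    """Notional model: delivered performance is the technology's raw
    capability scaled by the human's proficiency (0.0 to 1.0)."""
    return tech_capability * human_proficiency

# Same stick, capable of ~100 mph in expert hands.
STICK_CAPABILITY_MPH = 100.0

pro_shot = system_performance(STICK_CAPABILITY_MPH, 1.0)  # skilled pro
my_shot = system_performance(STICK_CAPABILITY_MPH, 0.5)   # amateur (assumed factor)
```

The exact functional form doesn’t matter; the point is that the same technology delivers very different system performance depending on the human in the loop.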
Once we acknowledge that fact, it’s clear that we must understand the capabilities and limitations of the users to understand how the system is going to work in the real world. Most human factors models capture this interaction in one way or another. My preferred model for most systems is the FAA human factors interaction model, shown below. This model shows a continuous loop. The human takes in information through sensory capabilities, makes a decision, and translates that decision into actions to the system; then, the system takes those inputs, responds appropriately, and updates the displays for the loop to repeat.
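That continuous loop can be sketched in code. This is a toy Python rendering of the model’s stages, not an official FAA artifact; the stage names and the thermostat-style scenario are my own illustrative shorthand:

```python
def run_interaction_loop(displays, human, system, cycles):
    """Toy rendering of the human-system interaction loop: the human
    senses the displays, decides, and acts; the system responds and
    updates the displays for the next pass."""
    history = []
    for _ in range(cycles):
        perceived = human["sense"](displays)   # sensory intake
        decision = human["decide"](perceived)  # cognition / decision
        action = human["act"](decision)        # control input
        displays = system["respond"](action)   # system response + display update
        history.append((perceived, decision, action, displays))
    return history

def make_system(initial_reading):
    """A system with one internal state value, e.g. a temperature setting."""
    state = {"reading": initial_reading}
    def respond(delta):
        state["reading"] += delta  # apply the human's control input
        return dict(state)         # refreshed display for the next cycle
    return {"respond": respond}

# A human nudging the displayed reading up toward a target of 70.
human = {
    "sense": lambda displays: displays["reading"],
    "decide": lambda reading: "raise" if reading < 70 else "hold",
    "act": lambda decision: 1 if decision == "raise" else 0,
}

log = run_interaction_loop({"reading": 65}, human, make_system(65), cycles=10)
```

After enough cycles the display settles at the human’s target; the point is simply that the human and the technology are coupled through this loop, so both drive the outcome.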
This just drives home the point that system performance is driven by both technology and human performance. But, simply accounting for human performance is the bare minimum. In most cases we can go much further, designing the human-technology interactions to enhance the performance of the human and thus the integrated system.
The human bottleneck
A related model, often used by the military, is the OODA loop: Observe, Orient, Decide, Act. In any competition from ice hockey to strategy games to aerial dogfights, an entity that can execute the OODA loop faster and more accurately than their opponent, all other factors being equal, will win. This is a useful paradigm for exploring human performance in complex systems.
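As a toy illustration of why loop speed wins: over a fixed engagement window, the entity with the shorter cycle time completes more observe-orient-decide-act cycles and therefore gets more chances to act. The cycle times below are made-up numbers purely for the sketch:

```python
def completed_ooda_cycles(cycle_time_s: float, window_s: float) -> int:
    """Number of full observe-orient-decide-act cycles an entity
    finishes within a fixed engagement window."""
    return int(window_s // cycle_time_s)

# Notional cycle times: all else being equal, the faster loop acts more often.
us = completed_ooda_cycles(cycle_time_s=2.0, window_s=60.0)
them = completed_ooda_cycles(cycle_time_s=3.0, window_s=60.0)
```

A one-second difference per cycle compounds into many extra decisions over an engagement, which is exactly the edge the OODA paradigm describes.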
Systems developers have paid more and more attention to the OODA loop in recent decades, as computer technologies have significantly sped up the loop. We have more ability to collect and act upon information than ever before, to the point that it can be overwhelming if not managed effectively. We’ve come a long way from WWII cockpits with dial gauges and completely manual controls to point-and-click control of otherwise-autonomous aircraft. Computers used to require tedious manual programming with careful planning for even relatively simple tasks, and lots of waiting around for programs to finish running. Now, computers can complete tasks nearly instantaneously2 and are often idle waiting for the human’s next command. Automation has taken over many simpler tasks, and can do them better and more reliably than a human.
In short, it’s not the technology delaying the OODA loop; the human is the bottleneck.
The role of human factors engineering
Even selecting the very best humans and providing them with the very best training can only improve performance so much, and that’s a pretty costly approach. The solution is obvious: engineer superhumans. Short of that, effective human factors engineering can support and enhance human performance.
Human factors engineering (HFE) is a broad, multidisciplinary field that addresses any interface between human and technology. Depending on the needs of the system, this could be as simple as ensuring that displays are clearly readable. For advanced systems with autonomous capabilities, HFE supports effective functional allocation between the technology and human elements of the system, maximizing the value of both: the technology handles the things that don’t require human decision making, freeing the user to focus on the tasks that do require uniquely human capabilities. Effective human interfaces support the human’s tasks by presenting the right information at the right time in the most useful manner, allowing the human sensory and cognitive components to work quickly and accurately. That’s followed by intuitive controls for transmitting the human’s decision back to the technology.
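Functional allocation can be sketched as a simple partition over a task list. This is a toy Python example; the task names and the single "needs judgment" criterion are hypothetical, and a real allocation would weigh many more factors:

```python
def allocate(tasks):
    """Toy functional allocation: route routine tasks to automation so
    the human can focus on tasks requiring uniquely human judgment."""
    automation = [t["name"] for t in tasks if not t["needs_judgment"]]
    human_tasks = [t["name"] for t in tasks if t["needs_judgment"]]
    return automation, human_tasks

# Hypothetical task list, for illustration only.
tasks = [
    {"name": "monitor fuel level", "needs_judgment": False},
    {"name": "hold altitude", "needs_judgment": False},
    {"name": "identify an ambiguous contact", "needs_judgment": True},
    {"name": "authorize weapons release", "needs_judgment": True},
]

automation, human_tasks = allocate(tasks)
```

The design choice is the criterion itself: deciding which tasks genuinely require human judgment is where most of the HFE work lies, not the partitioning.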
The OODA loop is sped up when the human gets the right information, presented in an effective and timely manner, and can act on that information just as effectively and promptly. When the human is the bottleneck, any HFE design improvement that supports human performance has a direct corresponding impact on system performance. To have the biggest impact, the HFE effort must be initiated early, while those allocation and design decisions are still open. Additionally, the human must be captured in all system architectural, behavioral, and simulation models.
The Stinger example demonstrates the risk of pushing off human factors engineering, and that was for a relatively straightforward system. To enhance the OODA loop and maintain a competitive edge in advanced modern systems, HFE is a must. System performance is the product of technology and human performance, and HFE is essential for ensuring the human aspect of that equation.