
Agile SE Part Three: Agile Contracts and the Downfall of Requirements

Welcome to a series on Agile Systems Engineering, exploring the practical aspects of this emerging approach. If you didn’t see them already, check out Part 1: What is Agile, Anyway? and Part 2: What’s Your Problem?

The antithesis of agile

Requirements are a poor way to acquire a system. They're great in theory but frequently fail in practice. Writing good requirements is hard, much harder than you'd think if you've never had to do it. Ivy Hooks gives several examples of good and bad requirements in the paper "Writing Good Requirements". Poor requirements can unnecessarily constrain the design, be interpreted incorrectly, and pose challenges for verification. Over-specification means spending on capabilities that aren't really needed, while under-specification can leave the final product without all of the required functions.

If writing one requirement is hard, try scaling it up to an entire complex system. Requirements-based acquisition rests on the assumption that the specification and statement of work are complete, consistent, and effective. That requires a great deal of up-front work, with limited opportunity to correct issues found later. A 2015 Government Accountability Office (GAO) report found that "DoD often does not perform sufficient up-front requirements analysis", leading to "cost, schedule, and performance problems".

And that's just the practical issue. The systemic issue with requirements is that analyzing and specifying them takes time. One of the more recent DoD acquisition buzzphrases is "speed of relevance". Up-front requirements are antithetical to this goal. If it takes months or even years just to develop the requirements, the battlefield will have evolved before a contract can even be issued. Add years of development and testing, and we end up deploying last-generation technology geared to meeting a past need. That's the speed of irrelevance.

Agile promises a better approach to deliver capabilities faster. But we have to move away from large up-front requirements efforts.

Still from Back to the Future Part II, with the subtitle changed to: "Requirements? Where we're going, we don't need requirements!"

Agile contracting

Traditional requirements-based acquisition represents a fixed scope, with up-front planning to estimate the time and cost required to accomplish that scope. Pivoting during the development effort (for example, as we learn more about what is required to accomplish the mission) requires re-planning, with significant cost and schedule impacts. The GAO conducts annual reviews of Major Defense Acquisition Programs (MDAPs). The most recent report, analyzing 85 MDAPs, found that they have experienced over 54 percent total cost growth and 29 percent schedule growth, resulting in an average delay of more than two years.

Defense acquisition leaders talk about delivering essential capabilities faster and then continuing to add value with incremental deliveries, which is a foundational Agile and Dev*Ops concept. But you can’t do that effectively under a fixed-scope contract where the emphasis is on driving to that “complete” solution.

The opposite of a fixed-scope contract is a value stream or capacity of work model. Give the development teams broad objectives and let them get to work. Orient the process around incremental deliveries, prioritize the work that will provide the most value soonest, and start getting those capabilities to the field.
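To make "most value soonest" concrete, here is a minimal sketch of how a team might order a backlog under a capacity-of-work model. The items, scores, and the value-per-effort ranking heuristic are illustrative assumptions, not part of any specific contract or method described here.

    # Toy sketch of value-driven backlog ordering (illustrative assumptions only).
    from dataclasses import dataclass

    @dataclass
    class BacklogItem:
        name: str
        value: float   # relative benefit to the mission or user
        effort: float  # relative cost to implement

    backlog = [
        BacklogItem("Navigation display", value=8, effort=3),
        BacklogItem("Offline map cache", value=5, effort=5),
        BacklogItem("Cosmetic theming", value=2, effort=2),
    ]

    # Deliver the most value soonest: rank by value per unit of effort.
    for item in sorted(backlog, key=lambda i: i.value / i.effort, reverse=True):
        print(f"{item.name}: {item.value / item.effort:.2f} value/effort")

Whatever heuristic is used, the point is that the ordering is revisited continuously as the team and customer learn, rather than fixed at contract award.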

The project management triangle: vertices labeled "SCOPE", "COST", and "TIME", with "QUALITY" at the center.

"But wait," you say, "doesn't the project have to end at some point?" That's the best part of this model. The developer's 'fixed' cost and schedule keep getting renewed as long as they're providing value. The contractor is incentivized to deliver quality products and to work with the customer to prioritize the backlog, or the customer may choose not to renew the contract. The customer has the flexibility to adjust funding profiles over time, ramping up or down based on need and funding availability. If the work reaches a natural end point—any additional features wouldn't be worth the cost or there is no longer a need for the product—the effort can be gracefully wrapped up.

You may be familiar with the project management triangle1. Traditional approaches try to fix all of the aspects, and very often fail. Agile approaches provide guardrails to manage all of the aspects but otherwise allow the effort to evolve organically.

Agile requirements

The most important aspect of agile approaches is that they shift requirements development from an intensive up-front effort to an ongoing, collaborative one. The graphic below illustrates the difference between traditional and agile approaches. With traditional approaches, the contractor is incentivized to meet the contractual requirements, whether or not the system actually delivers value to the using organization or is effective for the end user.

Block diagram comparing acquisition models. Traditional model: the using organization defines a need, the acquisition organization writes requirements, the contractor delivers a system, and the using organization deploys it to end users. Agile model: the using organization defines a need, the acquisition organization creates an agile contract, and the contractor iterates with end-user feedback and collaborates with all groups while continuously delivering to the using organization, which deploys to end users.

In an agile model, the development backlog is seeded with high-level system objectives. Requirements are developed through collaboration among the stakeholders, and the development is shaped by iterative user feedback. The agile contract may include a small set of system requirements or constraints; for example, the system may be required to comply with an established architecture or interface, meet particular performance thresholds, or adhere to relevant standards. The key is that the provided set of requirements is as minimal as possible.

The requirements discovery, analysis, and development process is collaborative, iterative, and ongoing. It isn't radically different from a traditional requirements decomposition; requirements still have to be traceable to top-level objectives. A key difference is that the decomposition happens closer to the development, both in time and in organization. The rationale and mission context for a requirement won't get lost, because the development team is involved in the process and understands the drivers behind the features they'll be implementing.

I’m getting ahead of myself, though! In the next installment of this series we’ll look at cross-functional development teams, the role of Product Owner, and scaling up to a large project.

What are your experiences with agile contracts and agile requirements? Share your best practices, horror stories, and pitfalls to avoid below.

Agile SE Part Two: What’s Your Problem?

Welcome to a series on Agile Systems Engineering exploring the practical aspects of this emerging approach. If you didn’t see it already, check out Part 1: What is Agile, Anyway?

A faster horse

“If I had asked people what they wanted, they would have said faster horses.”

Apocryphally attributed to Henry Ford1

When people trot out that quote, they're often trying to make the point that seeking user feedback will only constrain the design because our small-minded <sneer>users</sneer> cannot possibly think outside the box. I disagree. User feedback is valuable information. It should not constrain the design, but it is essential for understanding and empathizing with your users. They say "faster horse"? It's your job to generalize and innovate on that desire to come up with a car. The problem with the "singular visionary" approach is that for every wildly successful visionary there are a dozen more with equally innovative ideas that never found a market.

Sometimes, your research will even lead you to discover something totally unexpected which changes your whole perspective on the problem.

Here’s a great, real-world example from a Stanford Hacking for Defense class:

Customer ≠ user

Team Aqualink was tasked by their customer (the chief medical officer of the Navy SEALs) to build a biometric monitoring kit for Navy divers. These divers face both acute and long-term health impacts due to the duration and severe conditions inherent in their dives. A wearable sensor system would allow divers to monitor their own health during a dive and allow Navy doctors to analyze the data afterwards.

Team Aqualink put themselves in the flippers of a SEAL dive team (literally) and discovered something interesting: many of the dives were longer than necessary because the divers lacked a good navigation system. The medical concerns were, at least partially, really a symptom. What the divers truly wanted was GPS or another navigational system that worked at depth. Solving that root cause would alleviate many of the health concerns and improve mission performance, a much broader impact.

The customer was trying to solve the problem they saw without a deeper understanding of the user’s needs. That’s not a criticism of the customer. Truly understanding user needs is hard and requires substantial effort by engineers well-versed in user requirements discovery.

In the US DoD, the Joint Capabilities Integration and Development System (JCIDS) process is intended to identify mission capability gaps and potential solutions. The Initial Capabilities Document (ICD), Capability Development Document (CDD), and Key Performance Parameters (KPPs) are the basis for every materiel acquisition. This process suffers from the same shortcoming as the biometric project: it’s based on data that is often removed from the everyday experiences of the user. But once requirements are written, it’s very hard to change them even if the development team uncovers groundbreaking new insights.

The Bradley Fighting Vehicle

Still capture from The Pentagon Wars (1998)

The Bradley Fighting Vehicle was lampooned in the 1998 movie The Pentagon Wars2. By contrast, the program to replace the Bradley is being held up as an example of a new way of doing business.

Instead of determining the requirements from the outset, the Army is funding five companies for an 18-month digital prototyping effort. The teams were given a set of nine desired characteristics for the vehicle and will have the freedom to explore varying designs in a low-cost digital environment. The Army realizes that the companies may have tools, experiences, and concepts to innovate in ways the Army hasn’t considered. The Army is defining the problem space and stepping back to allow the contractors to explore the solution space.

Requirements myopia

Systems engineering for the DoD is built around requirements. The aforementioned JCIDS process defines the need. Based on that need, the acquisition command defines the requirements. The contractor bids and develops to those requirements. The test commands evaluate the system against those requirements. In theory, since those requirements tie back to warfighter needs, if we meet the requirements we must have met the need.

But, there’s a gap. In the proposal process, contractors evaluate the scope of work and estimate how much effort will be required to complete the work. Sometimes this is based on concrete data from similar efforts in the past. Other times, it’s practically a guess. If requirements are incompletely specified, there could be significant latitude for interpretation. Even really good requirement sets cannot adequately capture the actual, boots-on-the-ground mission and user needs.

So, the contractor has bid a certain cost to complete the work based on their understanding of the requirements provided. If they learn more information about the user need but meeting that need would drive up the cost, they have three options:

  1. Ask the customer for a contractual change and more money to develop the desired functionality
  2. Absorb the additional costs
  3. Build to the requirement even if it isn’t the best way to meet the need (or doesn’t really meet it at all)

Obviously, none of these options is ideal. Shelling out much more than originally budgeted reflects poorly on the government program office, which has to answer to Congress for significant overruns. Contractors will absorb some additional development cost from a "management reserve" fund built into their bid, but that amount is pretty limited. In many cases, we end up with option 3.

This is heavily driven by incentive structures. Contractors are evaluated and compensated based on meeting the requirements. Therefore, the contractor's success metrics and leadership bonuses are built around requirements. Leaders put pressure on engineers to meet requirement metrics, so engineers are incentivized to prioritize the metrics over system performance. DoD acquisition reforms such as Human Systems Integration (HSI) have attempted to force programs to do better, but have primarily produced more requirements-focused bureaucracy and rarely the desired outcome.

I call this “requirements myopia”: a focus on meeting the requirements rather than delivering value.

Refocusing on value

It doesn’t make sense to get rid of requirements entirely, but we can adapt our approach based on the needs of each acquisition. I touched on this briefly in an earlier article, Agile Government Contracts.

One major issue: if we don’t have requirements, how will we know when the development is “done”? Ponder that until next time, because in the next post in this series we’ll dive into some of the potential approaches.

What are your experiences with requirements, good or bad? Thoughts on the “faster horse”, Team Aqualink’s pivot, or the Optionally Manned Fighting Vehicle (OMFV) prototyping effort? Sound off below!

Agile SE Part One: What is Agile, Anyway?

Welcome to a new series on Agile Systems Engineering exploring the practical aspects of this emerging approach.

What is “Agile”?

Agile is a relatively new approach to software development based on the Agile Manifesto and Agile Principles. These documents are straightforward. I will sum them up as stating that development should be driven by what is most valuable to the customer and that our projects should align around delivering value.

Yes, I’ve obnoxiously italicized the word value as if it were in the glossary of a middle school textbook. That’s because value is the essence of this discussion.

Little-a Agile

With a little-a, "agile" is the ability to adapt to a changing situation. This means collaborating to understand stakeholder needs and the best way to satisfy them. It means changing the plan when the situation (or your understanding of the situation) changes. It means understanding what is valuable to the customer, focusing on delivering that value, and minimizing non-value-added effort.

Big-A Agile

With a big-A, "Agile" is a software development process that aims to fulfill the agile principles. There are actually several variants that fall under the Agile umbrella, such as Scrum, Kanban, and Extreme Programming. Each of these has techniques, rituals, and processes that supposedly lead to delivery of a quality product by helping teams focus on value-added work.

“Cargo Cult” Agile

“Agile” has become the hot-new-thing, buzzword darling of the U.S. defense industry. Did I mean Big-A or Little-a? It hardly matters. As contractors have rushed to promote their “new” development practices, they have trampled the distinction. The result is Cargo Cult Agile: following the rituals of an Agile process and expecting that the project will magically become more efficient and effective as a result. I wrote about this previously, calling it agile-in-name-only and FrAgile.

This isn't necessarily the fault of contractors. They want to follow the latest best practices from commercial industry to most effectively meet the needs of their customers. But as anyone who has worked in the defense industry can tell you, the pace of change is glacial due to a combination of sheer bureaucratic size and byzantine regulations. Most contracts just don't support agile principles. For example, the Manifesto prioritizes "working software over comprehensive documentation" and one of the Principles is that "working software is the primary measure of progress"; yet most defense contracts require heaps of documentation that are evaluated as the primary measure of progress.

The upshot is that, to most engineers in the defense industry, "Agile" is an annoying new project management approach. Project management is already the least enjoyable part of our job, an obstacle to deal with so that we can get on with the real work. Now we have to learn a new way of doing things that may not be the most effective way to organize our teams and has no real impact on the success of the program. This has left an undeserved bad taste in many of our mouths.

If this is your experience with Agile, please understand that this is not the true intent and practice. The rest of this series will talk about how we achieve real agility.

Agile Systems Engineering

So far, I've only mentioned Agile as a software development approach. Of course, we're here because Agile is being applied to all types of engineering, especially as "Agile Hardware Development" and "Agile Systems Engineering". Some people balk at this: how can a software process be applied to hardware and systems? Here, the distinction between little-a agile and big-A Agile is essential. Agile software development evangelists have taken the values in the Manifesto and Principles and created Agile processes and tools that realize them.

It’s incumbent upon other engineering disciplines to do the same. We must understand the agile values, envision how they are useful in our context (type of engineering, type of solution, customer, etc.), and then craft or adapt Agile processes and tools that make sense. Where many projects and teams go wrong is trying to shoehorn their needs into an Agile process that is a poor fit, and then blaming the process.

Stay Tuned

In the rest of this series we’ll explore how agile SE can provide customer value, how our contracts can be crafted to enable effective Agile processes, and what those processes might look like for a systems engineering team. Stay tuned!

Have you worked on a project with “Cargo Cult Agile”? Have you adapted agile principles effectively in your organization? What other resources are out there for Agile systems engineering? Share your thoughts in the comments below.

The Operations Concept: Developing and Using an OpsCon

  • An Operations Concept is more detailed than a Concept of Operations
  • It is a systems engineering artifact that describes how system use cases are realized
  • It is versatile and serves many uses across the project
  • There is no set format, though there are some best practices to consider

Concept of Operations (ConOps)

Let's start by talking about the OpsCon's better-known big brother, the ConOps.


“Diversity of thought” is the “all lives matter” of corporate inclusion efforts

For at least the last decade, engineering companies have talked a great deal about “diversity and inclusion”. Inevitably, many people1 have the takeaway that this means “diversity of thought”. This is like telling a Black Lives Matter supporter that “all lives matter”; of course all lives matter, but that’s completely missing the point2. Diversity of thought is important to avoid groupthink and promote innovation; but that’s not the point of diversity and inclusion efforts3.

Diversity and inclusion means making sure that teams are actually diverse, across a range of visible and not-visible features. Why does that matter?

The business case

There are a lot of business justifications for fostering diverse teams. The consulting firm McKinsey has published some slick reports with charts and stock photos4 to make the case to business leaders: inclusion = performance = profits. There are also arguments about finding and retaining top talent, regulatory mandates, and employee engagement.

The thing is, who cares? This blog isn’t about corporate profit, it’s about effective engineering practices. In my experience, engineers tend not to care much about profit except as a means to do fun and innovative work5. Getting some business benefits from diversity and inclusion is a nice side effect, and if it helps get corporate buy-in it’s hard to complain too much. But it still doesn’t feel right.

The innovation case

All the talk about the business case often neglects the mechanism: why do diverse teams perform better, and how do we leverage that to enhance performance? It's actually fascinating. As Harvard Business Review puts it, "diverse teams feel less comfortable", which slows down their decision making and causes them to think more critically.

If you’re a fan of Daniel Kahneman’s book Thinking, Fast and Slow, you may recognize this as engaging the “slow” system. We tend to rush to decisions with fast thinking, which is efficient but not always the most effective. The friction caused by diversity forces us to engage the more creative and thoughtful slow thinking. That’s interesting to understand and is a more compelling argument to the technically-minded, but it still doesn’t feel right.

The human case

When I think about diversity and inclusion, I always end up back at the same rationale: it’s just the right thing to do. We live in a world where some members of society have fewer opportunities because of historical racism, sexism, and homophobia, including the aftereffects of that discrimination that are still present today.

Ideally, we would live in a world that was a true meritocracy where everyone has equal opportunity to succeed based on their fit for the role, regardless of skin color, nationality, physical disability, cognitive disability, sex, gender identity, sexual orientation, religion, age, hairstyle, height, fashion sense, bench press ability, body modification, etc. Though we are getting to that world, we are still far from actually achieving it. A few representative statistics:

  • U.S. patent data show that women are inventing at an all-time high, but still less than a quarter of patents issued each year include a female inventor.
  • The American Bar Association analyzed the demographics of patent attorneys (who require a strong technical and legal background) and found that, despite recent gains, less than 7% are non-white.
  • Black and Hispanic people are underrepresented in STEM fields according to data from Pew Research.

We're moving in the right direction, but it's hard to argue that these are the outcomes of equitable opportunity. My personal opinion is that there actually is plenty of opportunity for those who know where to look for it, but that many students don't pursue technical fields because they don't see them as an option.

And who can blame them, when the most famous Black inventor lived a century ago, when we celebrate Watson and Crick but not the female scientist whose work was critical to their discovery, when chemistry labs are not built to accommodate scientists with disabilities.

That’s changing too. There are excellent, diverse STEM role models and communicators out there: Neil deGrasse Tyson, Raven the Science Maven, Abigail Harrison, Helen Arney, the late but still extremely influential Stephen Hawking, just to name a few. This is great!

But is it enough? It’s easy to point to the high-profile success stories and say the problem is solved. It will still take a generation for the students currently looking up to these role models to pursue technical degrees, begin working in the field, and become role models themselves. With each successive generation we move closer to parity and equality. But that doesn’t mean we shouldn’t take a more active role in bringing about this change as soon as possible.

Consider your role

Equality is the soul of liberty; there is, in fact, no liberty without it.

Frances Wright

There is a project called “I Am A Scientist” which aims to show students that anyone can be a STEM professional. In a few decades this effort will no longer be necessary; of course anyone can be a scientist or engineer, who would think otherwise? In the meantime, we (as a society, as engineers interested in fostering the next generation, as teachers and leaders) have to make a deliberate choice6 to recognize, affirm, and support the widest possible range of people who may be interested in STEM, including promoting diverse voices so every student can find a role model that appeals to them.

We must think about the way in which we approach diversity. So many efforts are mere tokenism, made obvious by phrases such as "diversity hire"7 and by carefully arranging corporate photos to "highlight" "diversity"8. If you recognize these types of practices at your company, take a moment to consider whether the priority is to foster true inclusion or merely to tick a box.

We have to keep promoting inclusion in our workplaces to serve our peers today and in the future. After all, a diverse crowd of STEM degree holders isn't helpful if they aren't actually included in the real work. It's easy to make fun of "unconscious bias training" and the like. But when you actually speak to people from groups that face discrimination and ask about their experiences, you learn about the small inequities that compound to hold people back from participating and from career success. Countering those inequities can be as simple as making sure that everyone is heard and respected, that everyone has the resources and support to advocate for their career opportunities, and that mentorship is offered.

Clear data about diversity in STEM fields exists and can be collected, and that should be our metric for success. When patents issued, papers published, degrees earned, and other outcome measures reach parity with the demographics of the general population, we can claim success. We should all do our small parts to make that happen.

Are you a “diversity candidate” with an experience to share? Do you have other suggestions for increasing inclusion? Leave your comments below.

Human Factors Design Drives System Performance

Bottom Line Up Front:

  • Human performance is a major factor in overall system performance
  • Humans are increasingly the bottleneck for system performance
  • Human factors engineering design drives human performance and thus system performance

Why care about humans?

In many system development efforts, the focus is on the capabilities of the technology: How fast can the jet fly? How accurately can the rifle fire?

We can talk about the horsepower of the engines and the bore of the rifle until the cows come home, but without a human pressing the throttle or pulling the trigger, neither technology is doing anything. A major mistake in many systems engineering efforts is neglecting the impact of the human on the performance of the system.

A great example is the FIM-92 Stinger Man-Portable Air Defense System. Stinger had a requirement to hit the target 60% of the time, which was met easily in developmental testing. However, put in the hands of actual soldiers, it only hit the target 30% of the time. An Army report found that the system suffered from several shortcomings, including poor usability and a lack of consideration for the capabilities of the intended user population. The technology hit the mark, but the system as a whole failed1.

Let's illustrate with a more everyday example. I play ice hockey and use a professional composite stick. I would guess that my fastest slap shot clocks in at around 50 mph. A pro using the exact same stick could easily break 100 mph. Clearly the technology isn't any different; I just don't have the same level of skill. The performance is the combination of the technology and the human using it.

System performance = technology performance * human performance
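A back-of-the-envelope sketch of that relationship using the Stinger figures above; the 0.5 human-performance factor is inferred here for illustration only, not a measured value.

    # Illustrative only: system performance as the product of technology and
    # human performance, using the Stinger example (60% in developmental
    # testing, 30% in soldiers' hands). The 0.5 human factor is inferred.
    technology_performance = 0.60  # hit rate demonstrated in developmental testing
    human_performance = 0.50       # degradation implied by the fielded 30% result

    system_performance = technology_performance * human_performance
    print(f"Expected operational hit rate: {system_performance:.0%}")  # 30%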

Once we acknowledge that fact, it’s clear that we must understand the capabilities and limitations of the users to understand how the system is going to work in the real world. Most human factors models capture this interaction in one way or another. My preferred model for most systems is the FAA human factors interaction model, shown below. This model shows a continuous loop. The human takes in information through sensory capabilities, makes a decision, and translates that decision into actions to the system; then, the system takes those inputs, responds appropriately, and updates the displays for the loop to repeat.

This just drives home the point that system performance is driven by both technology and human performance. But, simply accounting for human performance is the bare minimum. In most cases we can go much further, designing the human-technology interactions to enhance the performance of the human and thus the integrated system.

The human bottleneck

A related model, often used by the military, is the OODA loop: Observe, Orient, Decide, Act. In any competition from ice hockey to strategy games to aerial dogfights, an entity that can execute the OODA loop faster and more accurately than their opponent, all other factors being equal, will win. This is a useful paradigm for exploring human performance in complex systems.

Systems developers have paid more and more attention to the OODA loop in recent decades, as computer technologies have significantly sped up the loop. We have more ability to collect and act upon information than ever before, to the point that it can be overwhelming if not managed effectively. We’ve come a long way from WWII cockpits with dial gauges and completely manual controls to point-and-click control of otherwise-autonomous aircraft. Computers used to require tedious manual programming with careful planning for even relatively simple tasks, and lots of waiting around for programs to finish running. Now, computers can complete tasks nearly instantaneously2 and are often idle waiting for the human’s next command. Automation has taken over many simpler tasks, and can do them better and more reliably than a human.

In short, it’s not the technology delaying the OODA loop; the human is the bottleneck.

The role of human factors engineering

Even selecting the very best humans and providing them with the very best training can only improve performance so much, and that's a pretty costly approach. The solution is obvious: engineer superhumans. Short of that, effective human factors engineering can support and enhance human performance.

Human factors engineering (HFE) is a broad and multidisciplinary field that addresses any interface between human and technology. Depending on the needs of the system, this could be as simple as ensuring that displays are clearly readable. For advanced systems with autonomous capabilities, HFE supports effective functional allocation among the technology and human elements of the system, maximizing the value of both; the technology handles the things that don’t require human decision making to allow the user to focus on the tasks that do require uniquely human capabilities. Effective human interfaces support the human’s tasks by presenting the right information at the right time in the most useful manner, allowing the human sensory and cognitive components to work speedily and accurately. That’s followed by intuitive controls for transmitting the human’s decision back to the technology.

The OODA loop is sped up when the human gets the right information presented in an effective and timely manner and can act on that information also in an effective and timely manner. When the human is the bottleneck, any HFE design improvements that support human performance have a direct corresponding impact on system performance. In order to have the biggest impact, the HFE effort must be initiated early on when those allocation and design decisions have not yet been made. Additionally, the human must be captured in all system architectural, behavioral, and simulation models.

The Stinger example demonstrates the risk of pushing off human factors engineering, and that was for a relatively straightforward system. To enhance the OODA loop and maintain a competitive edge in advanced modern systems, HFE is a must. System performance is the product of technology and human performance, and HFE is essential for ensuring the human aspect of that equation.

Ergonomics

The term ergonomics was coined by Wojciech Jastrzębowski in 1857 to mean "the science of work"1, with the goal of improving productivity and profit. He described the importance of physical, emotional, entertainment, and rational aspects of the labor and employee experience, but the context was squarely factory-type production.

Over time, this has evolved into two slightly different definitions.

Workplace safety

In the United States, ergonomics is most often associated with equipment or workplace design. An "ergonomic" computer mouse is supposedly more comfortable and less likely to cause repetitive strain injury. The Occupational Safety and Health Administration (OSHA) and the National Institute for Occupational Safety and Health (NIOSH) provide guidance for workplace design to reduce the risk of occupational injury.

This definition is a subset of human factors engineering (HFE) that may also be called occupational health and safety. It's related to anthropometrics (the study of human body measurements) and industrial engineering.

Human factors engineering

Around the world, ergonomics is more often synonymous with HFE. The International Ergonomics Association provides this definition: “scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data, and methods to design in order to optimize human well-being and overall system performance”.

Discussion

These different definitions of the same term came about by parallel evolution driven by broader demand for human engineering.

In the US, the term human factors engineering was coined to describe research into aviation human error during World War II. It began being applied to other industries and grew in scope to encompass a range of related fields. Some ergonomists began practicing HFE, while ergonomics itself continued to focus on workplace impacts and fell under the broader human factors umbrella.

The same demand for human engineering existed around the world, for aviation and then computers, but the term HFE wasn't in use. Instead, the application of ergonomics expanded to meet the need. This has led to the different terms being used in different parts of the world.

Human Factors Engineering (HFE)

Human factors engineering (HFE) is a broad and multidisciplinary field that designs and evaluates the human interfaces of a system.

Don’t stop reading — that definition masks a lot of complexity. Let’s break it down:

System

INCOSE defines system as “an arrangement of parts or elements that together exhibit behaviour or meaning that the individual constituents do not. Systems can be either physical or conceptual, or a combination of both.”

Systems may include any combination of hardware, software, people, organizations, processes, information, facilities, services, tools, consumables, etc. A system can be as complex as the entire universe or as simple as two people interacting.

Human interfaces

When people hear "human interface", they usually think of software or hardware interfaces. But an interface really encompasses any interaction between the human and any of the other system components defined above.

A great example is Crew Resource Management, which is a system for pilot interpersonal communication and shared decision making. No other system components are involved, just the humans in the cockpit1.

Think of a trip to the grocery store. You propel the cart, observe price tags and product packaging, smell the prepared foods, hear the muzak, talk to the butcher, handle products, place items on the checkstand conveyor belt, talk with the cashier, use the card reader to pay, check the accuracy of the receipt, etc. All of these are interfaces with some level of design. There’s a whole field of study on grocery store psychology.

Design and evaluate

What does it mean to design and evaluate an interface?

Obviously, it’s highly dependent on the requirements and context of the system. This is where relevant human factors expertise is required to understand the aims of the system and the interfaces to be designed, decompose those into human factors objectives, and specify how success will be evaluated.

It’s best to specify the verification method before designing, to ensure that you’re clear on the goal you’re working towards. Common metrics include user satisfaction, accuracy and error rate, speed, situation awareness, workload, usability, and engagement.
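As a sketch of what specifying verification up front might look like, here is a hypothetical example; the metric names, methods, and thresholds are illustrative assumptions, not values prescribed by any standard.

    # Hypothetical evaluation plan recorded before design begins.
    # Metric names, methods, and targets are illustrative assumptions only.
    evaluation_plan = {
        "task_completion_rate": {"method": "usability test", "target": ">= 0.95"},
        "mean_task_time_s":     {"method": "usability test", "target": "<= 30"},
        "error_rate":           {"method": "usability test", "target": "<= 0.02"},
        "workload_rating":      {"method": "post-task survey", "target": "<= 50"},
        "usability_score":      {"method": "post-task survey", "target": ">= 80"},
    }

    for metric, spec in evaluation_plan.items():
        print(f"{metric}: verify by {spec['method']}, target {spec['target']}")

Writing these targets down before design work starts keeps the team honest about what "good enough" means for each interface.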

Broad and multidisciplinary

HFE covers a range of fields that may include: human-computer interaction, anthropometry, physiology, psychology, macroergonomics and organizational psychology, cognitive science, industrial design, user experience, and more.

Because HFE is such a broad field, it may take a team of experts with different specialties to effectively address the range of considerations applicable to any given system.

Summary

You should now have a better understanding of the full scope of what it means that HFE designs and evaluates the human interfaces of a system.

You may also be interested in the relationship between HFE and ergonomics and user experience (UX).

User Experience (UX)

The term user experience was coined in 1993 by Don Norman while working at Apple. He intended it to encompass a person’s entire experience related to a product, from any feelings they had prior to using it, to first seeing it in the store, getting it home, turning it on and learning how to use it, telling someone else about it, etc.

I highly recommend this short video where Mr. Norman explains that history and also complains about the frequent misuse of the word.

How does UX relate to human factors engineering?

Human factors is an umbrella term that covers a range of fields which design and evaluate the human interfaces of a system. We often think of a system as hardware and/or software, but it can also include social and organizational interfaces.

Thus, UX is very much a type of human factors. UX is distinguished from related specialties like human computer interaction (HCI) or interaction design by extending the scope of consideration beyond the product itself to any interface which might affect the user’s perceptions and feelings of the product. Yet, the goal is the same: understand the human’s needs in order to design interfaces that meet them1.


Recently, the field of customer experience (CX) has begun to emerge. CX focuses on the interactions a customer has with a business, which may be independent of any product user experience. CX and UX are the same basic concept, just with slightly varying scopes. CX emphasizes the design of the sales process, with the customer as a user of that process. A product UX team may not consider the sales process if the "user" isn't the same as the customer.

Why do we care about the user’s experience? For the same reason we care about all of the other functions of human factors. People seek out products and services to meet their needs. When we meet those needs better than the competition2, they’ll come back for more.