
Ergonomics

The term ergonomics was coined by Wojciech Jastrzębowski in 1857 to mean “the science of work”1, with the goal of improving productivity and profit. He described the importance of the physical, emotional, entertainment, and rational aspects of labor and the employee experience, but his focus was squarely on factory-style production.

Over time, this has evolved into two slightly different definitions.

Workplace safety

In the United States, ergonomics is most often associated with equipment or workplace design. An “ergonomic” computer mouse is supposedly more comfortable and less likely to cause repetitive strain injury. The Occupational Safety and Health Administration (OSHA) and the National Institute for Occupational Safety and Health (NIOSH) provide guidance on workplace design to reduce the risk of occupational injury.

This definition is a subset of human factors engineering (HFE) that may also be called occupational health and safety. It’s related to anthropometrics (the study of human body measurements) and industrial engineering.

Human factors engineering

Around the world, ergonomics is more often synonymous with HFE. The International Ergonomics Association provides this definition: “scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data, and methods to design in order to optimize human well-being and overall system performance”.

Discussion

These different definitions of the same term came about by parallel evolution driven by broader demand for human engineering.

In the US, the term human factors engineering was coined during World War II to describe research into human error in aviation. It was then applied to other industries and grew in scope to encompass a range of related fields. Some ergonomists began practicing HFE, while ergonomics itself continued to focus on workplace impacts and fell under the human factors umbrella.

The same demand for human engineering existed around the world, first for aviation and then for computers, but the term HFE wasn’t in use. Instead, the application of ergonomics expanded to meet the need. This has led to different terms being used in different parts of the world.

Human Factors Engineering (HFE)

Human factors engineering (HFE) is a broad and multidisciplinary field that designs and evaluates the human interfaces of a system.

Don’t stop reading — that definition masks a lot of complexity. Let’s break it down:

System

INCOSE defines system as “an arrangement of parts or elements that together exhibit behaviour or meaning that the individual constituents do not. Systems can be either physical or conceptual, or a combination of both.”

Systems may include any combination of hardware, software, people, organizations, processes, information, facilities, services, tools, consumables, etc. A system can be as complex as the entire universe or as simple as two people interacting.

Human interfaces

When people hear “human interface”, they usually think of software or hardware interfaces. In reality, an interface is any point where a human interacts with any of the other system components defined above.

A great example is Crew Resource Management, which is a system for pilot interpersonal communication and shared decision making. No other system components are involved, just the humans in the cockpit1.

Think of a trip to the grocery store. You propel the cart, observe price tags and product packaging, smell the prepared foods, hear the muzak, talk to the butcher, handle products, place items on the checkstand conveyor belt, talk with the cashier, use the card reader to pay, check the accuracy of the receipt, etc. All of these are interfaces with some level of design. There’s a whole field of study on grocery store psychology.

Design and evaluate

What does it mean to design and evaluate an interface?

Obviously, it’s highly dependent on the requirements and context of the system. This is where relevant human factors expertise is required to understand the aims of the system and the interfaces to be designed, decompose those into human factors objectives, and specify how success will be evaluated.

It’s best to specify the verification method before designing, to ensure that you’re clear on the goal you’re working towards. Common metrics include user satisfaction, accuracy and error rate, speed, situation awareness, workload, usability, and engagement.
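
As a purely hypothetical sketch (the metric names and thresholds below are invented, not drawn from any standard), a verification plan might pin down the metrics and acceptance criteria before design begins and then simply check test results against them:

```python
# Hypothetical verification plan: metrics and thresholds are fixed before design,
# then observed results from user testing are checked against them.
# All names and numbers are illustrative assumptions.

ACCEPTANCE_CRITERIA = {
    "task_success_rate": (0.95, "min"),   # fraction of tasks completed unaided
    "mean_task_time_s":  (120.0, "max"),  # average time for the core task, seconds
    "errors_per_task":   (0.05, "max"),
    "satisfaction_1to5": (4.0, "min"),    # post-test questionnaire score
}

def evaluate(results: dict) -> dict:
    """Return pass/fail for each pre-specified metric."""
    verdicts = {}
    for metric, (threshold, kind) in ACCEPTANCE_CRITERIA.items():
        observed = results[metric]
        verdicts[metric] = observed >= threshold if kind == "min" else observed <= threshold
    return verdicts

print(evaluate({"task_success_rate": 0.97, "mean_task_time_s": 104.0,
                "errors_per_task": 0.08, "satisfaction_1to5": 4.3}))
# errors_per_task fails its threshold; everything else passes
```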

Broad and multidisciplinary

HFE covers a range of fields that may include: human-computer interaction, anthropometry, physiology, psychology, macroergonomics and organizational psychology, cognitive science, industrial design, user experience, and more.

Because HFE is such a broad field, it may take a team of experts with different specialties to effectively address the range of considerations applicable to any given system.

Summary

You should now have a better understanding of the full scope of what it means that HFE designs and evaluates the human interfaces of a system.

You may also be interested in the relationship between HFE and ergonomics and user experience (UX).

User Experience (UX)

The term user experience was coined in 1993 by Don Norman while working at Apple. He intended it to encompass a person’s entire experience related to a product, from any feelings they had prior to using it, to first seeing it in the store, getting it home, turning it on and learning how to use it, telling someone else about it, etc.

I highly recommend Norman’s short video explaining this history, in which he also laments the frequent misuse of the term.

How does UX relate to human factors engineering?

Human factors is an umbrella term that covers a range of fields which design and evaluate the human interfaces of a system. We often think of a system as hardware and/or software, but it can also include social and organizational interfaces.

Thus, UX is very much a type of human factors. UX is distinguished from related specialties like human-computer interaction (HCI) or interaction design by extending the scope of consideration beyond the product itself to any interface that might affect the user’s perceptions of and feelings about the product. Yet the goal is the same: understand the human’s needs in order to design interfaces that meet them1.


Recently, the field of customer experience (CX) has begun to emerge. CX focuses on whatever interactions a customer has with a business, which may be independent of any product user experience. CX and UX are the same basic concept with slightly different scopes: CX emphasizes the design of the sales process and treats the customer as a user of that process, whereas a product UX team may not consider the sales process at all if the “user” isn’t the same person as the customer.

Why do we care about the user’s experience? For the same reason we care about all of the other functions of human factors. People seek out products and services to meet their needs. When we meet those needs better than the competition2, they’ll come back for more.

The Boeing 737 Max crashes represent a failure of systems engineering

The 737 is an excellent airplane with a long history of safe, efficient service. Boeing’s cockpit philosophy of direct pilot control and positive mechanical feedback represents excellent human factors1. In the latest generation, the 737 Max, Boeing added a new component to the flight control system which deviated from this philosophy, resulting in two fatal crashes. This is a case study in the failure of human factors engineering and systems engineering.

The 737 Max and MCAS

You’ve certainly heard of the 737 Max, the fatal crashes in October 2018 and March 2019, and the Maneuvering Characteristics Augmentation System (MCAS) which has been cited as the culprit. Even if you’re already familiar, I highly recommend these two thorough and fascinating articles:

  • Darryl Campbell at The Verge traces the market pressures and regulatory environment which led to the design of the Max, describes the cockpit activities leading up to each crash, and analyzes the information Boeing provided to pilots.
  • Gregory Travis at IEEE Spectrum provides a thorough analysis of the technical design failures from the perspective of a software engineer along with an appropriately glib analysis of the business and regulatory environment.

Typically I’d caution against armchair analysis of an aviation incident until the final crash investigation report is in. However, given the availability of information on the design of the 737 Max, I think the engineering failures are clear even as the crash investigations continue.

Hazard analysis

The most glaring, obvious, and completely inexplicable design choice was a lack of redundancy in the MCAS sensor inputs. Gregory Travis blames “inexperience, hubris, or lack of cultural understanding” on the part of the software team. That certainly seems to be the case, but it’s nowhere near the whole story.

There’s a team whose job it is to understand how the various aspects of the system work together: systems engineering2. One essential job of the systems engineer is to understand all of the possible interactions among system components, how they interact under various conditions, and what happens if any part (or combination of parts) fails. That last part is addressed by hazard analysis techniques such as failure modes, effects, and criticality analysis (FMECA).

The details of risk management may vary among organizations, but the general principles are the same: (1) Identify hazards, (2) categorize by severity and probability, (3) mitigate/control risk as much as practical and to an acceptable level, (4) monitor for any issues. These techniques give the engineering team confidence that the system will be reasonably safe.

Figure: FAA Safety Risk Management Process and Risk Categorization Matrix, from FAA Order 8040.4B, Safety Risk Management Policy.
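
To make step (2) concrete, a risk matrix is essentially a lookup from severity and likelihood to a risk level that drives the accept/mitigate decision. The sketch below is purely notional: the category names, matrix values, and acceptability comments are invented and are not taken from FAA Order 8040.4B.

```python
# Notional risk matrix sketch: severity rows, likelihood columns, mapped to a
# risk level. Categories and acceptability rules are invented for illustration.

LIKELIHOOD = ["extremely improbable", "remote", "probable", "frequent"]
SEVERITY = ["minor", "major", "hazardous", "catastrophic"]

# Risk level indexed by [severity][likelihood].
MATRIX = [
    ["low",    "low",    "low",    "medium"],  # minor
    ["low",    "low",    "medium", "high"],    # major
    ["low",    "medium", "high",   "high"],    # hazardous
    ["medium", "high",   "high",   "high"],    # catastrophic
]

def risk_level(severity: str, likelihood: str) -> str:
    return MATRIX[SEVERITY.index(severity)][LIKELIHOOD.index(likelihood)]

# In this toy scheme, a catastrophic outcome is never simply "acceptable":
# even when extremely improbable it still warrants mitigation and monitoring.
assert risk_level("catastrophic", "probable") == "high"
assert risk_level("catastrophic", "extremely improbable") == "medium"
```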

On its own, the angle of attack (AoA) sensor is an important but not critical component. The pilots can fly the plane without it, though stall-protection, automatic trim, and autopilot functions won’t work normally, increasing pilot workload. The interaction between the sensor and flight control augmentation system, MCAS in the case of the Max, can be critical. If MCAS uses incorrect AoA information from a faulty sensor, it can push the nose down and cause the plane to lose altitude. If this happens, the pilots must be able to diagnose the situation and respond appropriately. Thus the probability of a crash caused by an AoA failure can be notionally figured as follows:

P(AoA sensor failure) × P(system unable to recognize failure) × P(system unable to adapt to failure) × P(pilots unable to diagnose failure) × P(pilots unable to disable MCAS) × P(pilots unable to safely fly without MCAS)

AoA sensors can fail, but that shouldn’t be much of an issue because the plane has at least two of them and it’s pretty easy for the computers to notice a mismatch between them and also with other sources of attitude data such as inertial navigation systems. Except, of course, that the MCAS didn’t bother to cross-check; the probability of the Max failing to recognize and adapt to a potential AoA sensor failure was 100%. You can see where I’m going with this: the AoA sensor is a single point of failure with a direct path through the MCAS to the flight controls. Single point of failure and flight controls in the same sentence ought to give any engineer chills.
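
To make the arithmetic concrete, here is a toy version of that chain. Every probability below is invented; the point is the structure, and what happens when the “unable to recognize” and “unable to adapt” terms become 1.0 because there is no cross-check.

```python
# Toy numbers only -- every probability here is invented for illustration.
import math

def p_crash(links):
    """Overall probability is the product of the conditional probabilities in the chain."""
    return math.prod(links)

# Order matches the chain above: sensor fails, system can't recognize it,
# system can't adapt, pilots can't diagnose, pilots can't disable MCAS,
# pilots can't fly safely without MCAS.
with_crosscheck = p_crash([1e-3, 1e-3, 1e-3, 1e-2, 1e-1, 1e-1])
no_crosscheck   = p_crash([1e-3, 1.0,  1.0,  1e-2, 1e-1, 1e-1])

print(f"{with_crosscheck:.0e}")  # 1e-13 -- vanishingly small
print(f"{no_crosscheck:.0e}")    # 1e-07 -- six orders of magnitude worse
```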

The next link in our failure chain is the pilots and their ability to recognize, diagnose, and respond to the issue. This implies proper training, procedures, and understanding of the system. From the news coverage, it seems that pilots were not provided sufficient information on the existence of MCAS and how to respond to its failure. Systems and human factors engineers, armed with a hazard analysis, should have known about and addressed this potential contributing factor to reduce the overall risk.

Finally, there’s the ability of the pilots to disable and fly without MCAS. The Ethiopian Airlines crew correctly diagnosed and responded to the issue, but the aerodynamic forces apparently prevented them from manually correcting it. The ability to override those forces, plus the time it takes to correct the flight path, should have been part of the FMECA.

I have no specific knowledge of the hazard analyses performed on the 737 Max. Based on recent events, it seems that the risk of this type of failure was severely underestimated or went unaddressed. Either one is equally poor systems engineering.

Cockpit human factors

An inaccurate hazard analysis, though inexcusable, could be an oversight. Compounding that, Boeing made a clear design decision in the cockpit controls which is hard to defend.

In previous 737 models, pilots could quickly override automatic trim control by yanking back on the yoke, similar to disabling cruise control in a car by hitting the brake. This is great human factors and it fit right in with Boeing’s cockpit philosophy of ensuring that the human was always in ultimate control. This function was removed in the Max.

As both the Lion Air and Ethiopian Airlines crew experienced, the aerodynamic forces being fed into the yoke are too strong for the human pilots to overcome. When MCAS directs the nose to go down, the nose goes down. Rather than simply control the airplane, Max pilots first have to disable the automated systems. Comparisons to HAL are not unwarranted.

In summary

Boeing is developing a fix for MCAS. It will include redundant AoA sensor inputs, inhibiting MCAS when the sensors disagree, activating MCAS only once per high-angle-of-attack indication (i.e. not repeatedly activating after the pilots have given contrary commands), and limiting the forces fed into the control yoke so that the pilots can overpower them. This functionality should have been part of the system to begin with.
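
As a rough illustration of how the cross-check and once-per-event behavior could be expressed, here is a notional sketch. It is not Boeing’s implementation; the 5.5-degree disagreement threshold and all names are assumptions for illustration.

```python
# Notional sketch of the described MCAS fixes, not Boeing's implementation.
# The disagreement threshold and all names are illustrative assumptions.

AOA_DISAGREE_THRESHOLD_DEG = 5.5

class McasActivationSketch:
    def __init__(self):
        self.activated_this_event = False

    def should_activate(self, aoa_left_deg: float, aoa_right_deg: float,
                        high_aoa_threshold_deg: float) -> bool:
        # Fix 1: cross-check the redundant AoA sensors; disagreement inhibits MCAS.
        if abs(aoa_left_deg - aoa_right_deg) > AOA_DISAGREE_THRESHOLD_DEG:
            return False

        aoa_deg = (aoa_left_deg + aoa_right_deg) / 2.0
        if aoa_deg < high_aoa_threshold_deg:
            # Angle of attack is back in the normal range: re-arm for the next event.
            self.activated_this_event = False
            return False

        # Fix 2: activate at most once per high-AoA event instead of repeatedly
        # overriding the pilots' contrary trim commands.
        if self.activated_this_event:
            return False
        self.activated_this_event = True
        return True

mcas = McasActivationSketch()
# False: the two sensors disagree by 12 degrees, so activation is inhibited.
print(mcas.should_activate(aoa_left_deg=14.0, aoa_right_deg=2.0, high_aoa_threshold_deg=12.0))
```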

Along with these fixes, Boeing is likely3 also re-conducting a complete hazard analysis of MCAS and other flight control systems. Boeing and the FAA should not clear the type until the hazards are completely understood, controlled, quantified, and deemed acceptable.

Many news stories frame the 737 Max crashes in terms of the market and regulatory pressures which resulted in the design. While I don’t disagree, these are not an excuse for the systems engineering failures. The 737 Max is a valuable case study for engineers of all types in any industry, and for systems engineers in high-risk industries in particular.