Category: Human Factors Engineering

Postal vehicles: Function over form

One of my favorite items in my small model collection is a 1:34 scale Grumman Long Life Vehicle (LLV)1 with sliding side doors, a roll-up rear hatch, and pull-back propulsion. The iconic vehicle has been plying our city streets for nearly 40 years, reliably delivering critical communiques, bills, checks, advertisements, Dear John letters, junk mail, magazines, catalogs, post cards from afar, chain letters2, and Amazon packages.

Read More

System Design Lessons from the USS McCain

The Navy installed touch-screen steering systems to save money.

Ten sailors paid with their lives.

ProPublica
USS McCain in 2019 (U.S. Navy Photo)

Ten sailors died after the crew of the destroyer USS John S. McCain lost control of their vessel, causing a collision with the merchant tanker Alnic MC. There was nothing technically wrong with the vessel or its controls. Though much of the blame was put on the Sailors and Officers aboard, the real fault rests with the design of the Integrated Bridge & Navigation System (IBNS).

Read More

World War III’s Bletchley Park

In a near-future battlefield against a peer adversary, effective employment of machine learning and autonomy is the deciding factor. While our adversary is adapting commercial, mass-market technologies and controlling them remotely, U.S. and allied forces dominate with the effective application of advanced technologies that make decisions faster and more accurately. The concept of Joint All-Domain Command and Control (JADC2) is a key enabler, driving better battlefield decisions through robust information sharing.

Fed by this information, advanced decision aiding systems present courses of action (COAs) to each commander and then crew in the battle, taking into account every possible factor: tasking, environment and terrain, threats, available sensors and effectors, etc. Options and recommendations adapt as the battle unfolds, supporting every decision made with actionable information while deferring to human judgment.
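To make that concrete, here is a minimal, purely illustrative sketch (in Python) of one way such a decision aid could rank courses of action: a weighted score over a handful of factors, recomputed as estimates change. The factor names, weights, and candidate COAs are all invented for this example; a real decision aid would weigh far more, and far messier, inputs.

```python
# A hypothetical, toy COA recommender: weighted scoring over a few factors.
# All factor names, weights, COAs, and values are invented for illustration.
from dataclasses import dataclass


@dataclass
class COA:
    name: str
    factors: dict[str, float]  # 0.0 (poor) to 1.0 (good) for each factor


# Commander-set priorities for the current mission (assumed values).
WEIGHTS = {
    "threat_avoidance": 0.4,
    "time_to_objective": 0.3,
    "sensor_coverage": 0.2,
    "terrain_suitability": 0.1,
}


def score(coa: COA) -> float:
    """Weighted sum of how well the COA satisfies each factor."""
    return sum(WEIGHTS[f] * v for f, v in coa.factors.items())


def recommend(coas: list[COA]) -> list[tuple[str, float]]:
    """Rank candidate COAs best-first; re-run whenever the estimates change."""
    return sorted(((c.name, score(c)) for c in coas), key=lambda x: x[1], reverse=True)


candidates = [
    COA("northern approach", {"threat_avoidance": 0.9, "time_to_objective": 0.4,
                              "sensor_coverage": 0.6, "terrain_suitability": 0.7}),
    COA("river crossing", {"threat_avoidance": 0.5, "time_to_objective": 0.9,
                           "sensor_coverage": 0.4, "terrain_suitability": 0.3}),
]

for name, s in recommend(candidates):
    print(f"{name}: {s:.2f}")
```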

In this campaign, the first few battles are handily won. It seems this war will be a cakewalk.

Until the enemy learns. They notice routines in behaviors and responses that are easy to exploit: System A is a higher threat priority, so it is used as a diversion; displaced earth is flagged as a potential mine, so the enemy digs random holes to slow progress; fire comes in specific patterns, so the enemy knows when a barrage is over and quickly counters.

Pretty soon, these adaptations evolve into active attacks on autonomy: dazzle camouflage tricks computer vision systems into seeing more or different units; noise added to radio communications causes military chatter to be misclassified as civilian; selective sensor jamming confuses autonomy.

As the enemy learns to counter and attack these advanced capabilities, they become less helpful and eventually a liability. Operators deem them unreliable and revert to human decision-making and manual control of systems. The enemy has evened the battle, and our investment in advanced decision support systems is wasted. Even worse, our operators lack experience controlling the systems manually and are actually at a disadvantage: the technology has actively hurt us.

The solution is clear: we must be prepared to counter the enemy’s learning and to learn ourselves. This is not a new insight. Learning and adaptation have always been essential elements of war, and now they’re more important than ever. The lessons learned from the field must be fed back into the AI/ML/autonomy development process. A short feedback, development, testing, and deployment cycle is essential for autonomy to adapt to the adversary’s capabilities and TTPs, limiting the ability of the adversary to learn how to defend against and defeat our technologies.

In World War II, cryptography was the game-changing technology. You’re doubtless familiar with Bletchley Park, the codebreaking site that provided critical intelligence in World War II 1. Here, men and women worked tirelessly every single day of the war to analyze communication traffic, break the day’s codes, and pass intelligence to decision-makers. This work saved countless lives, leading directly to the Allied victory and shortening the war by 2-4 years. With the advancement of communications security 2, practically unbreakable encryption is available to everyone. We will no longer have the advantage of snooping on enemy communication content and must develop some other unique capability to ensure our forces have the edge.

I submit that the advantage will come from military-grade autonomy. Not the autonomous vehicles themselves, which are commodities, but the ability of the autonomy to respond to changing enemy behavior 3. One key advantage of traditional human control is adaptability to unique and changing situations, something current autonomy is not capable of; the state of the art in autonomous systems today more closely resembles video game NPCs, mindlessly applying the same routines based on simple input. While we may have high hopes for the future of autonomy, the truth is that autonomous systems will be limited for the foreseeable future by an inability to think outside the box.

Average autonomous system

How, then, do we enable the autonomous systems to react rapidly to changing battlefield conditions?

World War III’s version of Bletchley Park will be a capability I’m calling the Battlefield Accelerated Tactics and Techniques Learning and Effectiveness Laboratory. BATTLE Lab is a simulation facility. It ingests data from the field in near-real time: every detail of every battle, including terrain, weather, friendly behaviors, enemy tactics, signals, etc. Through experimentation across hundreds of thousands of simulated engagements driven by observed behavior, we’ll develop courses of action for countering the enemy in every imaginable situation. Updated behavior models that reduce friendly vulnerabilities, exploit enemy weaknesses, and give our forces the edge will be pushed to the field multiple times per day.
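To illustrate the loop, here is a toy sketch (in Python) of a single BATTLE Lab cycle under heavy simplifying assumptions: field reports are reduced to a single "aggression" estimate, and a coin-flip engagement model stands in for high-fidelity simulation. None of this represents a real system; it only shows the shape of the ingest, simulate, select, and push cycle.

```python
# A toy sketch of the BATTLE Lab cycle: ingest field reports, re-estimate
# enemy behavior, evaluate candidate tactics in simulation, push the winner.
# The names, numbers, and coin-flip "engagement" are illustrative only.
import random
from dataclasses import dataclass


@dataclass
class EnemyModel:
    aggression: float  # estimated probability the enemy presses an attack


def ingest_field_reports(reports: list[bool]) -> EnemyModel:
    """Re-estimate enemy behavior from observed engagements (True = enemy attacked)."""
    return EnemyModel(aggression=sum(reports) / max(len(reports), 1))


def simulate_engagement(tactic: str, enemy: EnemyModel) -> bool:
    """Toy engagement model: returns True if friendly forces prevail."""
    enemy_attacks = random.random() < enemy.aggression
    if tactic == "aggressive":
        return random.random() < (0.4 if enemy_attacks else 0.7)
    return random.random() < (0.65 if enemy_attacks else 0.5)  # "defensive"


def evaluate_tactics(enemy: EnemyModel, trials: int = 10_000) -> dict[str, float]:
    """Monte Carlo win rate for each candidate tactic against the enemy model."""
    return {
        tactic: sum(simulate_engagement(tactic, enemy) for _ in range(trials)) / trials
        for tactic in ("aggressive", "defensive")
    }


def battle_lab_cycle(reports: list[bool]) -> str:
    """One update cycle: ingest -> simulate -> select -> (push to the field)."""
    enemy = ingest_field_reports(reports)
    win_rates = evaluate_tactics(enemy)
    best = max(win_rates, key=win_rates.get)
    print(f"enemy aggression={enemy.aggression:.2f}, win rates={win_rates}, pushing '{best}'")
    return best


# Example: today's reports show the enemy attacking in 8 of 10 engagements,
# so the cycle recommends pushing the defensive counter-tactic.
battle_lab_cycle([True] * 8 + [False] * 2)
```

In a real pipeline, the toy simulator would be replaced by high-fidelity models driven by the ingested terrain, weather, and signals data, and a selected update would be validated before being disseminated to the field.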

Of course, we already do this today with extensive threat intelligence capabilities, training, and tactics. The difference is that the future battlefield will be chockablock with autonomous systems, which can more rapidly integrate new threat and behavior models generated by BATTLE Lab. We’ll be able to move faster, using autonomy and simulation to shorten the OODA loop while nearly instantly incorporating lessons from every battle.

Without BATTLE Lab, the enemy will learn how our autonomy operates and quickly find weaknesses to exploit; autonomous systems will be vulnerable to spoofing, jamming, and unexpected behaviors by enemy systems. Bletchley Park shortened the OODA loop by providing better intelligence to strategic decision-makers (“Observe”). BATTLE Lab will shorten the OODA loop by improving the ability of autonomy to understand the situation and make decisions (“Orient” and “Decide”).

BATTLE Lab is enabled by technology available and maturing today: low-cost uncrewed systems 4, battlefield connectivity, and edge processing.

A critical gap is human-autonomy interaction solutions. To implement these advanced capabilities effectively, human crews need to task, trust, and collaborate with autonomous teammates, and these interaction strategies need to mature alongside autonomy capabilities to enhance employment at every step. Autonomy tactics may change rapidly based on new models disseminated from the BATTLE Lab, and human teammates need to be able to understand and trust the autonomous system’s behaviors. Explainability and trust are topics of ongoing research; additional efforts to integrate these capabilities into mission planning and mission execution will also be needed.

What do you think the future battlefield will look like and what additional capabilities need to be developed to make it possible? Share your thoughts in the comments below.

What makes a good human factors engineer? Five critical skills

Recently, the head of a college human factors program asked for my perspective on the human factors (and user experience) skills valued in industry. Here are five critical qualities that emerged from our discussion, in no particular order:

Systems thinking

Making sense of complexity requires identifying relationships, patterns, feedback loops, and causality. Systems thinkers excel at identifying emergent properties of systems and are thus suited to analyses such as safety, cybersecurity, and process, where outcomes may not be obvious from simply looking at the sum of the parts.

Read More

You Don’t Understand Murphy’s Law: The Importance of Defensive Design

CALLBACK is the monthly newsletter of NASA’s Aviation Safety Reporting System (ASRS)1. Each edition features excerpts from real, first-person safety reports submitted to the system. Most of the reports come from pilots, many from air traffic controllers, and the occasional one from a maintainer, ground crew member, or flight attendant. Human factors concerns feature heavily, and the newsletters provide insight into current safety concerns2. ASRS gets five to nine thousand reports each month, so there’s plenty of content for the CALLBACK team to mine.

The February 2022 issue contained this report about swapped buttons:

Read More

Human Factors Design Drives System Performance

Bottom Line Up Front:

  • Human performance is a major factor in overall system performance
  • Humans are increasingly the bottleneck for system performance
  • Human factors engineering design drives human performance and thus system performance

Why care about humans?

In many system development efforts, the focus is on the capabilities of the technology: How fast can the jet fly? How accurately can the rifle fire?

We can talk about the horsepower of the engines and the bore of the rifle until the cows come home, but without a human pressing the throttle or pulling the trigger, neither technology is doing anything. A major mistake in many systems engineering efforts is neglecting the impact of the human on the performance of the system.

Read More

Ergonomics

The term ergonomics was coined by Wojciech Jastrzębowski in 1857 to mean “the science of work”1 with the goal of improving productivity and profit. He described the importance of physical, emotional, entertainment, and rational aspects of the labor and employee experience, but his focus was squarely on factory-type production.

Over time, this has evolved into two slightly different definitions.

Read More

Human Factors Engineering (HFE)

Human factors engineering (HFE) is a broad and multidisciplinary field that designs and evaluates the human interfaces of a system.

Don’t stop reading — that definition masks a lot of complexity. Let’s break it down:

System

INCOSE defines system as “an arrangement of parts or elements that together exhibit behaviour or meaning that the individual constituents do not. Systems can be either physical or conceptual, or a combination of both.”

Read More

User Experience (UX)

The term user experience was coined in 1993 by Don Norman while working at Apple. He intended it to encompass a person’s entire experience related to a product, from any feelings they had prior to using it, to first seeing it in the store, getting it home, turning it on and learning how to use it, telling someone else about it, etc.

I highly recommend this short video where Mr. Norman explains this history and also complains about the frequent misuse of the word:

Read More

The Boeing 737 Max crashes represent a failure of systems engineering

The 737 is an excellent airplane with a long history of safe, efficient service. Boeing’s cockpit philosophy of direct pilot control and positive mechanical feedback represents excellent human factors1. In the latest generation, the 737 Max, Boeing added a new component to the flight control system which deviated from this philosophy, resulting in two fatal crashes. This is a case study in the failure of human factors engineering and systems engineering.

The 737 Max and MCAS

You’ve certainly heard of the 737 Max, the fatal crashes in October 2018 and March 2019, and the Maneuvering Characteristics Augmentation System (MCAS) which has been cited as the culprit. Even if you’re already familiar, I highly recommend these two thorough and fascinating articles:

  • Darryl Campbell at The Verge traces the market pressures and regulatory environment which led to the design of the Max, describes the cockpit activities leading up to each crash, and analyzes the information Boeing provided to pilots.
  • Gregory Travis at IEEE Spectrum provides a thorough analysis of the technical design failures from the perspective of a software engineer along with an appropriately glib analysis of the business and regulatory environment.

Typically I’d caution against armchair analysis of an aviation incident until the final crash investigation report is in. However, given the availability of information on the design of the 737 Max, I think the engineering failures are clear even as the crash investigations continue.

Read More