User Experience (UX)

The term user experience was coined in 1993 by Don Norman while he was working at Apple. He intended it to encompass a person’s entire experience with a product: the feelings they had before using it, first seeing it in the store, getting it home, turning it on, learning how to use it, telling someone else about it, and so on.

I highly recommend this short video, in which Mr. Norman explains the history and laments the frequent misuse of the term.

How does UX relate to human factors engineering?

Human factors is an umbrella term that covers a range of fields which design and evaluate the human interfaces of a system. We often think of a system as hardware and/or software, but it can also include social and organizational interfaces.

Thus, UX is very much a type of human factors. UX is distinguished from related specialties like human computer interaction (HCI) or interaction design by extending the scope of consideration beyond the product itself to any interface which might affect the user’s perceptions and feelings of the product. Yet, the goal is the same: understand the human’s needs in order to design interfaces that meet them1.

Recently the field of customer experience (CX) has begun to emerge. CX focuses on whatever interactions a customer has with a business, which may be independent of a product user experience. CX and UX are the same basic concept, just with slightly varying scopes. CX emphasizes the design of the sales process and the customer as a user of that process. A product UX team may not consider the sales process if the “user” isn’t the same as the customer.

Why do we care about the user’s experience? For the same reason we care about all of the other functions of human factors. People seek out products and services to meet their needs. When we meet those needs better than the competition2, they’ll come back for more.

Learn from the mistakes of others

The problem with being too busy to read is that you learn by experience… i.e. the hard way. By reading, you learn through others’ experiences, generally a better way to do business…

General James Mattis

The most successful people in any profession learn from the experiences of others. You can learn from their successes, sure. But don’t focus on doing things exactly the way they did; you’ll stifle your own innovation. Instead, understand their successes, extract the relevant lessons, and forge your own path.

More importantly, learn from others’ failures and mistakes.

That’s why I publish a Reading / Listening List. As of this article’s publication, 5 of the 6 recommendations are about poor engineering and design1. I find these stories fascinating, enlightening, and valuable. By avoiding the pitfalls of the past, we improve our own projects’ likelihood of success.

It’s okay to make mistakes, but strive to at least make original mistakes.

A Functional Team is NOT an Integrated Product Team

“My name is Inigo Montoya. You won a government contract. Prepare to deliver CDRLs.”

TL;DR: An Integrated Product Team (IPT) is a cross-functional group. If everyone on the team has the same background, that’s a functional or discipline team. There’s a difference.

Board man gets paid

For years I’ve been advocating for the effective inclusion of human systems integration (HSI) in the systems engineering (SE) process. I had to address a persistent misunderstanding of what HSI is and how it relates to human factors; while that can be frustrating, I recognized that it wasn’t going to change overnight. Instead, I worked diligently to share my message with anyone who would listen.

Recently, my diligence paid off. I was contacted by a group putting together a proposal for a defense contract. The government’s request outlined its expectations for HSI as part of the systems engineering effort in a way the proposal team hadn’t seen before. Someone on the team had heard me speak, knew I had the expertise they needed, and reached out to request my support.

It will be a while before we find out who won the contract, but I am certain that our proposal is much stronger for the inclusion of HSI. The HSI piece of the work is small but essential, and any competitors without the requisite expertise may not have understood its impact or importance to the customer.

This experience reminded me of basketball star Kawhi Leonard’s most popular catchphrase: “The board man gets paid.” See, Leonard is known for his skill at grabbing his team’s rebounds1. This is a key differentiator on the basketball court. The team has done all that work to get the ball up the court, yet failed to score. Grabbing the rebound before the opponent does gives the team another chance. Most of the time, the defensive team is in a better position to grab the rebound; Kawhi Leonard has made a career of getting to those balls first.

Leonard identified an underexploited opportunity and worked hard to develop the skill to take advantage of it. Throughout high school and college, he called himself “The Board Man”. He shaped his career around this unique skill and has been extraordinarily successful because of it.

That’s not to say you have to find a niche to be successful. Obviously there are superstars in every field. But it’s a heck of a lot easier if you can identify the opportunities nobody else is taking advantage of2.

Bonus read: The top 5%. Share your own tips, inspiration, and niche in the comments below.

Diversity in engineering careers

I had the privilege to attend the Society of Women Engineers conference WE19 in Anaheim, CA last week. I left inspired and optimistic.

Speakers and panelists relayed their experiences over the previous decades. These women had been denied entrance into engineering schools, marginalized in the workplace, and forced to become ‘one of the guys’ to be accepted among their peers.

We’ve come a long way. It’s never been a better time to enter the workforce as a woman/person of color/LGBTQ/etc. Diversity in the workforce and leadership of engineering companies is on the rise, barriers are falling, and the value of diversity is being recognized. And yet, we still have so far to go.

We recognize that diversity is good for business1, and companies are actively recruiting more diverse talent. Yet our organizational cultures are still adapting to this diversity. In many ways, we still expect all employees to conform to the existing culture rather than proactively shaping the inclusive culture we desire.

A great example is the “confidence gap” theory for why men are more successful in the workplace. Writing in The Atlantic in 2014, Katty Kay and Claire Shipman explain that “compared with men, women don’t consider themselves as ready for promotions, they predict they’ll do worse on tests, and they generally underestimate their abilities. This disparity stems from factors ranging from upbringing to biology.”

Jayshree Seth’s WE19 closing keynote combated the confidence gap with a catchy “confidence rap”. I was excited to share it with you in a gender-neutral post about combating imposter syndrome. In researching this post, however, I learned that the “confidence gap” is a symptom, not a cause. Telling women to be more confident won’t close the gap, because our workplace cultures are often biased against women who display confidence.

Jayshree Seth countered the “confidence gap” with the “confidence rap” in an excellent keynote.

Research demonstrates that an insidious double standard2 is what’s holding women back. Women who talk up their accomplishments the same way men do are perceived as less likeable. Women who are modest are more likeable, but nobody learns of their accomplishments and they appear to lack confidence. Women can be just as confident as men, but the cultural expectations of the workplace do not allow it.

That’s not to totally dismiss the confidence gap theory. The double standard stems partly (primarily?) from persistent societal expectations. Though gender equality has advanced significantly in recent decades, many parents continue to raise girls and boys differently3. A girl raised to be modest and display less confidence will join the workforce with the same attitude.

That’s not the whole story, of course. Our behaviors and habits continue to be shaped by the workplace culture, especially for younger employees just learning to fit in at the office. Currently most office cultures encourage confidence in men and discourage it in women.

I think this is changing slowly, along with other aspects of gender equality. I also think that gradual change is not good enough. We owe it to ourselves, to our female peers, and to the advancement of the profession to bring about gender equality in engineering more swiftly.

We should define what a gender-equal workplace looks like, identify where our cultures diverge from this ideal, and create strategies for closing that gap. As a starting point, Harvard Business Review shared some management and organizational strategies. And all of us can contribute by recognizing our own biases and by finding ways to highlight others’ accomplishments.

What does workplace gender equality mean to you? How does the culture of your office support (or not) gender equality? What strategies would you recommend for addressing bias on an individual, team, or organizational level? Post in the comments below.

The Swiss cheese model: Designing to reduce catastrophic losses

Failures and errors happen frequently. A part breaks, an instruction is misunderstood, a rodent chews through a power cord. The issue gets noticed, we respond to correct it, we clean up any impacts, and we’re back in business.

Occasionally, a catastrophic loss occurs. A plane crashes, a patient dies during an operation, an attacker installs ransomware on the network. We often look for a single cause or freak occurrence to explain the incident. Rarely, if ever, are these accurate.

Thoughts on “A Message to Garcia”

“A Message to Garcia” is a brief essay on the value of initiative and hard work written by Elbert Hubbard in 1898. It is often assigned in leadership courses, particularly in the military. Less often assigned but providing essential context is Col. Andrew Rowan’s first-person account of the mission, “How I Carried the Message to Garcia”.

There are also a number of opinion pieces, archived in newspapers and posted on the internet, both heralding and decrying the essay. Many interpretations and potential lessons can be extracted from this story. It’s important that developing leaders find the valuable ideas.

Work ethic

Hubbard’s original essay is something of a rant on the perceived scarcity of work ethic and initiative in the ranks of employees. He holds Rowan up as an example of the rare person who is dedicated to achieving his task unquestioningly and no matter the cost.

Of course, this complaint is neither unique to Hubbard1 nor shared universally. Your view on this theme probably depends on whether you are a manager or a worker, and on your views on the value of work2. Nevertheless, Hubbard’s point is clear: a strong work ethic is valuable and will be rewarded.

No questions asked

If that were the extent of the message, it would be an interesting read but not particularly compelling. One reason the essay gained so much traction is Hubbard’s waxing about how Rowan supposedly carried out his task: with little information, significant ingenuity, and no questions asked. This message appeals to a certain type of ‘leader’ who doesn’t think highly of their subordinates.

It’s also totally bogus.

Lt. Rowan was a well-trained Army intelligence officer and he was sufficiently briefed on the mission. Relying on his intelligence background, he understood the political climate and implications. Additionally, preparations were made for allied forces to transport him to Garcia. He did not have to find his own way and blindly search Cuba to accomplish his objective.

I don’t intend to minimize Rowan’s significant effort and achievement, only to point out Hubbard’s misguided message. Hubbard would have us believe that Rowan succeeded through sheer determination, when the truth is that critical thinking and understanding were his means.

There may be a time and place for blind execution, but the majority of modern work calls for specialized skills and critical thinking. Hubbard seems to conflate any question with a stupid question, which is misguided. We should encourage intelligent questions and clarifications to ensure that people can carry out their tasks effectively. After all, if Rowan hadn’t had the resources to reach Garcia, he might still be wandering Cuba and Spain might still be an empire.

The commander who dismisses all questions breeds distrust and dissatisfaction. Worse, they send their troops out underprepared.

Leadership

On the topic of work ethic, Hubbard is preaching to the choir. Those with a strong work ethic already have it, while those without it won’t be swayed by the message. Of course, managers always desire employees who demonstrate work ethic.

“A Message to Garcia” would be more effectively viewed as a treatise on leadership. After all, Army leadership effectively identified, developed, and utilized Rowan’s potential.

Perhaps the most important lesson, understated in the essay, is choosing the right person for the job. Rowan had the right combination of determination, brains, and knowledge to get the job done. In another situation, he might have been the worst person for it. How did Col. Wagner know about Rowan and decide he was the right person for the job? How do we optimize personnel allocation in our own organizations?

That’s my two pesetas, now you chime in below. What lessons do you take from Hubbard’s essay? Feel free to link to an interpretation, criticism, or praise which resonates with you.

It’s time to get rid of specialty engineering: A criticism of the INCOSE Handbook

Chapter 10 of the INCOSE Systems Engineering Handbook covers “Specialty Engineering”. Take a look at the table of contents below. It’s a hodge-podge of roles and skillsets with varying scope.

Table of contents for the Specialty Engineering section of the INCOSE handbook.

There doesn’t seem to be rhyme or reason to this list of items. Training Needs Analysis is a perfect example. There’s no doubt that it’s important, but it’s one rather specific task and not a field unto itself. If you’re going to include this activity, why not its siblings Manpower Analysis and Personnel Analysis?

On the other hand, some of the items in this chapter are supposedly “integral” to the engineering process. This is belied by the fact that they’re shunted into this separate chapter at the end of the handbook. In practice, too, they’re often organized into a separate specialty engineering group within a project.

This isn’t very effective.

Many of these roles really are integral to systems engineering. Their involvement early on in each relevant process ensures proper planning, awareness, and execution. They can’t make this impact if they’re overlooked, which often happens when they’re organizationally separated from the rest of the systems engineering team. By including them in the specialty engineering section along with genuinely tangential tasks, INCOSE has basically stated that these roles are less important to the success of the project.

The solution

The solution is simple: re-evaluate and remove, or at least re-organize, this section of the handbook.

The actual systems engineering roles should be integrated into the rest of the handbook. Most of them are already mentioned throughout the document. The descriptions of each role currently in the specialty engineering section can be moved to the appropriate process section. Human systems integration, for example, might fit into “Technical Management Processes” or “Cross-Cutting Systems Engineering Methods”.

The tangential tasks, such as Training Needs Analysis, should be removed from the handbook altogether. These would be more appropriate as a list of tools and techniques maintained separately online, where it can be updated frequently and cross-referenced with other sources.

Of course, the real impact comes when leaders internalize these changes and organize their programs to effectively integrate these functions. That will come with time and demonstrated success.

The Boeing 737 Max crashes represent a failure of systems engineering

The 737 is an excellent airplane with a long history of safe, efficient service. Boeing’s cockpit philosophy of direct pilot control and positive mechanical feedback represents excellent human factors1. In the latest generation, the 737 Max, Boeing added a new component to the flight control system which deviated from this philosophy, resulting in two fatal crashes. This is a case study in the failure of human factors engineering and systems engineering.

The 737 Max and MCAS

You’ve certainly heard of the 737 Max, the fatal crashes in October 2018 and March 2019, and the Maneuvering Characteristics Augmentation System (MCAS) which has been cited as the culprit. Even if you’re already familiar, I highly recommend these two thorough and fascinating articles:

  • Darryl Campbell at The Verge traces the market pressures and regulatory environment which led to the design of the Max, describes the cockpit activities leading up to each crash, and analyzes the information Boeing provided to pilots.
  • Gregory Travis at IEEE Spectrum provides a thorough analysis of the technical design failures from the perspective of a software engineer along with an appropriately glib analysis of the business and regulatory environment.

Typically I’d caution against armchair analysis of an aviation incident until the final crash investigation report is in. However, given the availability of information on the design of the 737 Max, I think the engineering failures are clear even as the crash investigations continue.

Hazard analysis

The most glaring, obvious, and completely inexplicable design choice was a lack of redundancy in the MCAS sensor inputs. Gregory Travis blames “inexperience, hubris, or lack of cultural understanding” on the part of the software team. That certainly seems to be the case, but it’s nowhere near the whole story.

There’s a team whose job it is to understand how the various aspects of the system work together: systems engineering2. One essential job of the systems engineer is to understand all of the possible interactions among system components, how they interact under various conditions, and what happens if any part (or combination of parts) fails. That last part is addressed by hazard analysis techniques such as failure modes, effects, and criticality analysis (FMECA).

The details of risk management may vary among organizations, but the general principles are the same: (1) Identify hazards, (2) categorize by severity and probability, (3) mitigate/control risk as much as practical and to an acceptable level, (4) monitor for any issues. These techniques give the engineering team confidence that the system will be reasonably safe.

FAA Safety Risk Management Process and Risk Categorization Matrix from FAA Order 8040.4B, Safety Risk Management Policy.
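
To make the categorization step concrete, here’s a minimal sketch in Python. The severity and likelihood category names loosely echo the FAA’s, but the scoring and thresholds are invented for illustration; the real matrix in Order 8040.4B is the authority.

```python
# Notional risk categorization: severity x likelihood -> risk level.
# Category names echo the FAA's; the scoring and thresholds are invented.
SEVERITY = ["minimal", "minor", "major", "hazardous", "catastrophic"]
LIKELIHOOD = ["extremely improbable", "extremely remote", "remote",
              "probable", "frequent"]

def risk_level(severity: str, likelihood: str) -> str:
    """Map a (severity, likelihood) pair to a notional risk level."""
    s = SEVERITY.index(severity)      # 0 (least severe) .. 4 (most severe)
    p = LIKELIHOOD.index(likelihood)  # 0 (least likely) .. 4 (most likely)
    if s == 4 and p > 0:
        return "high"                 # catastrophic outcomes get almost no slack
    score = s + p
    if score >= 6:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

# A flight-control failure with a plausible trigger should land here:
print(risk_level("catastrophic", "probable"))  # -> high
```

A hazard that lands in “high” must be mitigated (step 3) until it re-scores to an acceptable level; that re-scoring is exactly where an underestimated probability can quietly break the process.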

On its own, the angle of attack (AoA) sensor is an important but not critical component. The pilots can fly the plane without it, though stall-protection, automatic trim, and autopilot functions won’t work normally, increasing pilot workload. The interaction between the sensor and flight control augmentation system, MCAS in the case of the Max, can be critical. If MCAS uses incorrect AoA information from a faulty sensor, it can push the nose down and cause the plane to lose altitude. If this happens, the pilots must be able to diagnose the situation and respond appropriately. Thus the probability of a crash caused by an AoA failure can be notionally figured as follows:

P(AoA sensor failure) × P(system unable to recognize failure) × P(system unable to adapt to failure) × P(pilots unable to diagnose failure) × P(pilots unable to disable MCAS) × P(pilots unable to safely fly without MCAS)
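
To see why the chain structure matters, here’s a quick numeric sketch in Python. Every probability below is invented purely for illustration; none are estimates for the actual aircraft.

```python
# All numbers are hypothetical, for illustration only -- not real 737 Max figures.
p_sensor_fail   = 1e-4  # P(AoA sensor failure)
p_no_recognize  = 1.0   # P(system unable to recognize failure) -- no cross-check
p_no_adapt      = 1.0   # P(system unable to adapt to failure)
p_no_diagnose   = 0.5   # P(pilots unable to diagnose failure)
p_no_disable    = 0.5   # P(pilots unable to disable MCAS)
p_no_manual_fly = 0.5   # P(pilots unable to safely fly without MCAS)

p_crash = (p_sensor_fail * p_no_recognize * p_no_adapt
           * p_no_diagnose * p_no_disable * p_no_manual_fly)
print(f"{p_crash:.2e}")  # 1.25e-05 with these made-up numbers
```

The point of the sketch: with cross-checking in place, the two middle factors drop to near zero and the whole product collapses. Leaving them effectively fixed at 1.0, as the original MCAS did, makes the pilots the only remaining barriers.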

AoA sensors can fail, but that shouldn’t be much of an issue: the plane has at least two of them, and it’s easy for the computers to notice a mismatch between the sensors, or between the sensors and other sources of attitude data such as the inertial navigation system. Except, of course, that MCAS didn’t bother to cross-check; the probability of the Max failing to recognize and adapt to a potential AoA sensor failure was 100%. You can see where I’m going with this: the AoA sensor was a single point of failure with a direct path through MCAS to the flight controls. “Single point of failure” and “flight controls” in the same sentence ought to give any engineer chills.
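
That cross-check is not exotic. Here’s a minimal sketch in Python; the function name, the 5-degree tolerance, and the idea of folding in an inertially derived estimate are my assumptions, not Boeing’s implementation.

```python
def aoa_readings_consistent(aoa_left: float, aoa_right: float,
                            aoa_inertial_est: float,
                            tolerance_deg: float = 5.0) -> bool:
    """Cross-check the two AoA vanes against each other and against an
    angle-of-attack estimate derived from inertial/attitude data.
    Names, structure, and tolerance are illustrative only."""
    readings = (aoa_left, aoa_right, aoa_inertial_est)
    return max(readings) - min(readings) <= tolerance_deg

# A vane stuck ~20 degrees away from its peer is trivially detectable:
print(aoa_readings_consistent(24.0, 4.0, 4.5))  # -> False
```

Any automation consuming AoA data should refuse to act, and alert the crew, whenever a check like this fails.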

The next link in our failure chain is the pilots and their ability to recognize, diagnose, and respond to the issue. This implies proper training, procedures, and understanding of the system. From the news coverage, it seems that pilots were not provided sufficient information on the existence of MCAS and how to respond to its failure. Systems and human factors engineers, armed with a hazard analysis, should have known about and addressed this potential contributing factor to reduce the overall risk.

Finally, there’s the ability of the pilots to disable and fly without MCAS. The Ethiopian Airlines crew correctly diagnosed and responded to the issue, but the aerodynamic forces apparently prevented them from manually correcting it. The ability to override those forces, plus the time it takes to correct the flight path, should have been part of the FMECA.

I have no specific knowledge of the hazard analyses performed on the 737 Max. Based on recent events, it seems that the risk of this type of failure was severely underestimated or went unaddressed. Either one is equally poor systems engineering.

Cockpit human factors

An inaccurate hazard analysis, though inexcusable, could be an oversight. Compounding that, Boeing made a clear design decision in the cockpit controls which is hard to defend.

In previous 737 models, pilots could quickly override automatic trim control by yanking back on the yoke, similar to disabling cruise control in a car by hitting the brake. This is great human factors and it fit right in with Boeing’s cockpit philosophy of ensuring that the human was always in ultimate control. This function was removed in the Max.
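
In code terms, the older behavior amounts to something like the sketch below. The function name, units, and threshold are invented; the point is only that a firm opposing input on the yoke immediately removes the automation from the loop.

```python
def auto_trim_permitted(column_force: float, trim_cmd: float,
                        override_threshold: float = 10.0) -> bool:
    """Sketch of the classic 737 behavior: a firm push or pull on the
    column against the direction of the automatic trim command cuts the
    automation out, leaving the pilot in direct control.
    Units and threshold are notional."""
    opposing = column_force * trim_cmd < 0  # opposite signs = pilot disagrees
    return not (opposing and abs(column_force) > override_threshold)

# Pilot hauling back (+) against a nose-down trim command (-): automation cut out.
print(auto_trim_permitted(column_force=25.0, trim_cmd=-1.0))  # -> False
```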

As both the Lion Air and Ethiopian Airlines crew experienced, the aerodynamic forces being fed into the yoke are too strong for the human pilots to overcome. When MCAS directs the nose to go down, the nose goes down. Rather than simply control the airplane, Max pilots first have to disable the automated systems. Comparisons to HAL are not unwarranted.

In summary

Boeing is developing a fix for MCAS. It will include redundant AoA sensor inputs, inhibiting MCAS when the sensors disagree, activating MCAS only once per high-angle indication (i.e., not re-activating continuously after the pilots have given contrary commands), and limiting the feedback forces into the control yoke so that they aren’t stronger than the pilots. This functionality should have been part of the system to begin with.
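
Put together, the announced behavior reads like a small state machine. Here’s a sketch in Python; the class and constant names, thresholds, and units are all invented, and it compresses the fix list rather than reproducing Boeing’s software.

```python
class McasSketch:
    """Illustrative only: the announced MCAS fixes as a tiny state machine."""
    MAX_DISAGREEMENT_DEG = 5.5  # inhibit when the two AoA vanes disagree
    TRIM_INCREMENT = 1.0        # single bounded nose-down command (notional units)

    def __init__(self) -> None:
        self.fired_this_event = False

    def trim_command(self, aoa_left: float, aoa_right: float,
                     high_aoa_deg: float = 15.0) -> float:
        """Return a bounded nose-down trim increment, or 0.0 to do nothing."""
        if abs(aoa_left - aoa_right) > self.MAX_DISAGREEMENT_DEG:
            return 0.0                     # fix: sensors disagree -> inhibit
        if (aoa_left + aoa_right) / 2 < high_aoa_deg:
            self.fired_this_event = False  # high-AoA event over -> re-arm
            return 0.0
        if self.fired_this_event:
            return 0.0                     # fix: activate only once per event
        self.fired_this_event = True
        return self.TRIM_INCREMENT         # fix: trim authority is bounded
```

Each guard clause corresponds to one item in the fix list above (the yoke-force limit sits elsewhere in the control system); none of it is exotic control logic.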

Along with these fixes, Boeing is likely3 also re-conducting a complete hazard analysis of MCAS and other flight control systems. Boeing and the FAA should not clear the type until the hazards are completely understood, controlled, quantified, and deemed acceptable.

Many news stories frame the 737 Max crashes in terms of the market and regulatory pressures which resulted in the design. While I don’t disagree, these are not an excuse for the systems engineering failures. The 737 Max is a valuable case study for engineers of all types in any industry, and for systems engineers in high-risk industries in particular.