Our world and our systems are safer than ever. A major reason why is that we’ve learned from prior mistakes. Many of our practices, rules, and standards are “written in blood” from past, tragic failures. We learn so that we don’t repeat the same mistakes. Of course, we first identify the proximate causes—the specific events directly leading to a casualty. To truly learn, we must take a step back to examine the larger context: what were the preceding holes in the Swiss cheese, and how do we account for them in our systems engineering practice? This approach is increasingly important …

Written in Blood: Case Studies of Systems Engineering Failure

The Navy installed touch-screen steering systems to save money. Ten sailors paid with their lives. (ProPublica) Ten sailors died after the crew of the destroyer USS John S. McCain lost control of their vessel, causing a collision with the merchant tanker Alnic MC. There was nothing technically wrong with the vessel or its controls. Though much of the blame was put on the sailors and officers aboard, the real fault rests with the design of the Integrated Bridge & Navigation System (IBNS).

The vast majority of catastrophes are created by a series of factors that line up in just the wrong way, allowing seemingly small details to add up to a major incident. The Swiss cheese model is a great way to visualize this and is fully compatible with systems thinking. Understanding it will help you design systems that are more resilient to failures, errors, and even security threats.
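The intuition behind the Swiss cheese model can be sketched numerically. In the simplest reading, an incident occurs only when a hazard passes through a hole in every defensive layer at once, so with independent layers the combined probability is the product of the per-layer probabilities. The layer names and numbers below are illustrative assumptions, not data from any real system:

```python
from functools import reduce

def incident_probability(hole_probs):
    """Probability that a hazard passes through every defensive layer,
    assuming the layers fail independently (the Swiss cheese model)."""
    return reduce(lambda acc, p: acc * p, hole_probs, 1.0)

# Hypothetical per-layer probabilities that a given hazard slips through:
layers = {
    "design review": 0.05,
    "automated testing": 0.10,
    "operator training": 0.20,
    "on-watch supervision": 0.25,
}

p = incident_probability(layers.values())
print(f"Chance all holes line up: {p:.6f}")  # 0.05*0.10*0.20*0.25 = 0.000250
```

Each layer alone is leaky, yet together they make the incident rare; conversely, weakening any single layer (or letting layer failures become correlated, which this simple sketch ignores) multiplies the overall risk.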

HSI (Human Systems Integration) is a natural part of the systems engineering process. It adds minimal up-front cost for significant benefits and cost savings. Optimizing the system for performance and lifecycle cost benefits the customer. Ensuring early consideration of HSI factors reduces the contractor's risk of costly rework. Delivering a system that is mission-effective with reasonable lifecycle costs benefits everyone.