Shortly after midnight Moscow time on the 26th of September 1983, Stanislav Yevgrafovich Petrov saved the world. This incident, unknown outside the Soviet air defense forces until 1998, is a highly relevant case study as we accelerate the pace of automation and artificial intelligence in complex military systems.

Cold War backdrop

Cold War tensions between the US/NATO and USSR/Warsaw Pact were higher than they had been since the Cuban Missile Crisis. Both sides had intermediate-range nuclear missiles on alert, treaty negotiations were breaking down, and NATO was preparing to deploy upgraded Pershing II missiles and additional Ground Launched Cruise Missile systems …

Artificial Intelligence and Human Judgement: A Cold War Cautionary Tale

Our world and our systems are safer than ever. A major reason is that we have learned from prior mistakes. Many of our practices, rules, and standards are "written in blood" from past tragic failures. We learn so that we don't repeat the same mistakes. Of course, we first identify the proximate causes: the specific events directly leading to a casualty. But to truly learn, we must step back and examine the larger context: what were the preceding holes in the Swiss cheese model, and how do we account for them in our systems engineering practice? This approach is increasingly important …

Written in Blood: Case Studies of Systems Engineering Failure