
World War III’s Bletchley Park

In a near-future battlefield against a peer adversary, effective employment of machine learning and autonomy is the deciding factor. While our adversary adapts commercial, mass-market technologies and controls them remotely, U.S. and allied forces dominate by applying advanced technologies that make decisions faster and more accurately. The concept of Joint All-Domain Command and Control (JADC2) is a key enabler, driving better battlefield decisions through robust information sharing.

Fed by this information, advanced decision-aiding systems present courses of action (COAs) to each commander, and then to each crew in the battle, accounting for every relevant factor: tasking, environment and terrain, threats, available sensors and effectors, and more. Options and recommendations adapt as the battle unfolds, supporting every decision with actionable information while deferring to human judgment.
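As a toy illustration of the idea (not a description of any fielded system), such an aid might rank COAs with a weighted score over mission factors; the factor names, weights, and numbers below are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class CourseOfAction:
    name: str
    # Hypothetical factor scores in [0, 1]; a real system would derive
    # these from tasking, terrain, threat, and sensor data.
    factors: dict[str, float]

# Invented weights expressing the commander's current priorities.
WEIGHTS = {
    "threat_avoidance": 0.4,
    "time_to_objective": 0.3,
    "sensor_coverage": 0.2,
    "fuel_margin": 0.1,
}

def score(coa: CourseOfAction) -> float:
    """Weighted sum of factor scores; higher is better."""
    return sum(w * coa.factors.get(k, 0.0) for k, w in WEIGHTS.items())

coas = [
    CourseOfAction("flank_north", {"threat_avoidance": 0.9, "time_to_objective": 0.4,
                                   "sensor_coverage": 0.7, "fuel_margin": 0.6}),
    CourseOfAction("direct_assault", {"threat_avoidance": 0.3, "time_to_objective": 0.9,
                                      "sensor_coverage": 0.5, "fuel_margin": 0.8}),
]

# Present ranked options; the human commander makes the final call.
for coa in sorted(coas, key=score, reverse=True):
    print(f"{coa.name}: {score(coa):.2f}")
```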

In this campaign, the first few battles are handily won. It seems this war will be a cakewalk.

Until the enemy learns. They notice routines in behaviors and responses that are easy to exploit: System A is a higher threat priority, so is used as a diversion; displaced earth is flagged as a potential mine, so the enemy digs random holes to slow progress; fire comes in specific patterns, so the enemy knows when a barrage is over and quickly counters.

Pretty soon, these adaptations evolve into active attacks on autonomy: dazzle camouflage tricks computer vision systems into seeing more or different units; noise added to radio communications causes military chatter to be misclassified as civilian; selective sensor jamming confuses autonomy.
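These tricks are all flavors of adversarial input. As a minimal sketch of the core mechanism, here is the fast gradient sign method applied to a toy linear classifier; the model, weights, and numbers are invented stand-ins, not any real military system.

```python
import numpy as np

# Stand-in linear classifier: P(class = 1) = sigmoid(w . x + b).
# Weights are random, purely for illustration.
rng = np.random.default_rng(0)
w, b = rng.normal(size=16), 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=16)  # a "clean" feature vector, e.g. signal features
y = 1.0                  # its true class

# For this model, the gradient of the cross-entropy loss with respect
# to the input is (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# Fast gradient sign method: the perturbation, bounded by epsilon in
# each component, that most increases the loss.
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

print("clean prediction:    ", sigmoid(w @ x + b))
print("perturbed prediction:", sigmoid(w @ x_adv + b))
```

A perturbation that is tiny in each component can shift the prediction dramatically; the same principle scales up to images and radio waveforms.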

As the enemy learns to counter and attack these advanced capabilities, they become less helpful and eventually become a liability. The operators deem them unreliable and revert to human decision-making and manual control of systems. The enemy has evened the battle, and our investment in advanced decision support systems is wasted. Even worse, our operators lack experience controlling the systems manually and are actually at a disadvantage: the technology has actively hurt us.

The solution is clear: we must be prepared to counter the enemy’s learning and to learn ourselves. This is not a new insight. Learning and adaptation have always been essential elements of war, and they are now more important than ever. The lessons learned in the field must be fed back into the AI/ML/autonomy development process. A short feedback-development-testing-deployment cycle is essential for autonomy to adapt to the adversary’s capabilities and tactics, techniques, and procedures (TTPs), limiting the adversary’s ability to learn how to defend against and defeat our technologies.

In World War II, cryptography was the game-changing technology. You’re doubtless familiar with Bletchley Park, the codebreaking site that provided critical intelligence to the Allies. There, men and women worked tirelessly every single day of the war to analyze communication traffic, break the day’s codes, and pass intelligence to decision-makers. This work saved countless lives, contributed directly to the Allied victory, and is estimated to have shortened the war by two to four years. With the advancement of communications security, practically unbreakable encryption is available to everyone. We will no longer have the advantage of snooping on enemy communication content and must develop some other unique capability to ensure our forces have the edge.

I submit that the advantage will come from military-grade autonomy. Not the autonomous vehicles themselves, which are commodities, but the ability of the autonomy to respond to changing enemy behavior. One key advantage of traditional human control is adaptability to unique and changing situations, which current autonomy cannot match; the state of the art in autonomous systems today more closely resembles video game NPCs, mindlessly applying the same routines to the same inputs. While we may have high hopes for the future of autonomy, the truth is that autonomous systems will be limited for the foreseeable future by an inability to think outside the box.

[Image: average autonomous system]

How, then, do we enable the autonomous systems to react rapidly to changing battlefield conditions?

World War III’s version of Bletchley Park will be a capability I’m calling the Battlefield Accelerated Tactics and Techniques Learning and Effectiveness Laboratory, or BATTLE Lab. BATTLE Lab is a simulation facility. It ingests data from the field in near-real time: every detail of every battle, including terrain, weather, friendly behaviors, enemy tactics, and signals. Through experimentation across hundreds of thousands of simulated engagements driven by observed behavior, we’ll develop courses of action for countering the enemy in every imaginable situation. Updated behavior models will be pushed to the field multiple times per day, reducing friendly vulnerabilities, exploiting enemy weaknesses, and giving our forces the edge.
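To make that loop concrete, here is a deliberately toy sketch of the ingest-simulate-select-push cycle; the tactics, win probabilities, and function names are all invented for illustration.

```python
import random

# Toy payoff model: win probability for each (friendly tactic, enemy
# tactic) pairing. All numbers are invented.
WIN_PROB = {
    ("flank", "ambush"): 0.35, ("flank", "rush"): 0.70,
    ("probe", "ambush"): 0.60, ("probe", "rush"): 0.45,
}

def fit_enemy_model(observed_tactics):
    """Ingest: estimate the enemy's tactic mix from field reports."""
    total = len(observed_tactics)
    return {t: observed_tactics.count(t) / total for t in ("ambush", "rush")}

def simulate(friendly, enemy_model, n=100_000):
    """Experiment: Monte Carlo engagements against the estimated mix."""
    tactics, weights = zip(*enemy_model.items())
    wins = 0
    for _ in range(n):
        enemy = random.choices(tactics, weights=weights)[0]
        wins += random.random() < WIN_PROB[(friendly, enemy)]
    return wins / n

# Field reports show the enemy shifting toward ambush tactics.
reports = ["ambush"] * 70 + ["rush"] * 30
enemy_model = fit_enemy_model(reports)

# Select the best-performing friendly tactic and "push" it to the field.
best = max(("flank", "probe"), key=lambda t: simulate(t, enemy_model))
print("updated tactic pushed to the field:", best)
```

With a 70/30 ambush-heavy enemy mix, "probe" wins about 56% of simulated engagements versus about 46% for "flank", so the pushed model shifts fielded behavior accordingly.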

Of course, we already do this today with extensive threat intelligence capabilities, training, and tactics. The difference is that the future battlefield will be chockablock with autonomous systems, which can integrate new threat and behavior models from BATTLE Lab far more rapidly. We’ll be able to move faster, using autonomy and simulation to shorten the OODA (observe-orient-decide-act) loop while incorporating lessons from every battle nearly instantly.

Without BATTLE Lab, the enemy will learn how our autonomy operates and quickly find weaknesses to exploit; autonomous systems will be vulnerable to spoofing, jamming, and unexpected behaviors by enemy systems. Bletchley Park shortened the OODA loop by providing better intelligence to strategic decision-makers (“Observe”). BATTLE Lab will shorten it by improving the ability of autonomy to understand the situation and make decisions (“Orient” and “Decide”).

BATTLE Lab is enabled by technology available and maturing today: low-cost uncrewed systems, battlefield connectivity, and edge processing.

A critical gap is human-autonomy interaction. To employ these advanced capabilities effectively, human crews need to task, trust, and collaborate with autonomous teammates, and these interaction strategies must mature alongside the autonomy itself. Tactics may change rapidly as new models are disseminated from BATTLE Lab, and human teammates need to be able to understand and trust the resulting behaviors. Explainability and trust are topics of ongoing research; additional work will be needed to integrate these capabilities into mission planning and mission execution.

What do you think the future battlefield will look like and what additional capabilities need to be developed to make it possible? Share your thoughts in the comments below.

Agile SE Part Five: Agility on Large, Complex Programs


Putting it all together

In this series we’ve introduced agile concepts, requirements, contracting, and digital engineering (DE) for physical systems. These are all enablers of agility, but they don’t make a program agile per se. The key to agility is how the program is planned and how functions are prioritized.

Agile program planning

A traditional waterfall program is planned using the Statement of Work (SOW), Work Breakdown Structure (WBS), and Integrated Master Schedule (IMS). This requires scheduling all of the work before the project starts, accounting for dependencies, key milestones, and so on. Teams know what to work on because the schedule tells them what they’ll be working on and when. At least in theory.
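To see what that up-front scheduling entails, here’s a minimal sketch that orders a handful of hypothetical WBS elements by their dependencies, a toy version of the dependency ordering an IMS tool performs across thousands of tasks.

```python
from graphlib import TopologicalSorter

# Hypothetical WBS elements mapped to their predecessors; a real IMS
# would also carry durations, resources, and milestone dates.
wbs = {
    "requirements": [],
    "design": ["requirements"],
    "fabrication": ["design"],
    "software": ["design"],
    "integration": ["fabrication", "software"],
    "verification": ["integration"],
}

# The static order produced here is, in effect, the plan fixed before
# any work begins.
print(" -> ".join(TopologicalSorter(wbs).static_order()))
```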

Read More

Minimum Viable Product (MVP): You’re doing it wrong

Quibi was a short-lived short-form video app. It was founded in August 2018, launched in April 2020, and folded in December 2020, wiping out $1.75 billion of investors’ money. That’s twenty months from founding to launch and just six months to fail. Ouch.

Forbes chalked this up to “a misread of consumer interests”; though the content was pretty good, Quibi only worked as a phone app while customers wanted TV streaming, and it lacked social sharing features that may have drawn in new viewers. It was also a paid service competing with free options like YouTube and TikTok. According to The Wall Street Journal, the company’s attempts to address the issues were too late: “spending on advertising left little financial wiggle room when the company was struggling”.

If only there were some way Quibi could have validated its concept before wasting nearly two billion dollars.

Read More

Agile isn’t faster

A common misconception is that Agile development processes are faster. I’ve heard this from leaders as a justification for adopting Agile processes and read it in proposals as a supposed differentiator. It’s not true. Nothing about Agile magically enables teams to architect, engineer, design, test, or validate any faster.

In fact, many parts of Agile are actually slower. Time spent on PI planning, backlog refinement, sprint planning, daily stand-ups, and retrospectives is time the team isn’t developing. Much of that overhead is avoided in a Waterfall style, where development follows a set plan.

Read More

You Don’t Understand Murphy’s Law: The Importance of Defensive Design

CALLBACK is the monthly newsletter of NASA’s Aviation Safety Reporting System (ASRS). Each edition features excerpts from real, first-person safety reports submitted to the system. Most of the reports come from pilots, many from air traffic controllers, and a few from maintainers, ground crews, or flight attendants. Human factors issues feature heavily, and the newsletters provide insight into current safety concerns. ASRS receives five to nine thousand reports each month, so there’s plenty of content for the CALLBACK team to mine.

The February 2022 issue contained this report about swapped buttons:

Read More

Agile SE Part Four: Digital Transformation


A quick detour

This article is a quick detour on an important enabler for agile systems engineering. “Digital transformation” means re-imagining the way businesses operate in the digital age, including how we engineer systems. As future articles discuss scaling agile practices to larger and more complex systems, it will be very helpful to understand the possibilities that digital engineering unlocks.

Digital engineering enables the agile revolution

The knee-jerk reaction to agile systems engineering is this: “sure, agile is great for the speed and flexibility of software development, but there’s just no way to apply it to hardware systems”. Objections range from development times to lead times to the cost of producing physical prototypes.

Read More

Agile SE Part Three: Agile Contracts and the Downfall of Requirements


The antithesis of agile

Requirements are a poor way to acquire a system. They’re great in theory, but frequently fail in practice. Writing good requirements is hard, much harder than you’d think if you’ve never had the opportunity. Ivy Hooks gives several examples of good and bad requirements in the paper “Writing Good Requirements”. Poor requirements can unnecessarily constrain the design, be interpreted incorrectly, and pose challenges for verification. Over-specification results in spending on capabilities that aren’t really needed, while under-specification can result in a final product that doesn’t provide all of the required functions.

If writing one requirement is hard, try scaling it up to an entire complex system. Requirements-based acquisition rests on the assumption that the specification and statement of work are complete, consistent, and effective. That requires a great deal of up-front work with limited opportunity to correct issues found later. A 2015 GAO report found that “DoD often does not perform sufficient up-front requirements analysis”, leading to “cost, schedule, and performance problems”.

Read More

Agile SE Part Two: What’s Your Problem?


A faster horse

“If I had asked people what they wanted, they would have said faster horses.”

Apocryphally attributed to Henry Ford

When people trot out that quote, they’re often trying to make the point that seeking user feedback will only constrain the design because our small-minded <sneer>users</sneer> cannot possibly think outside the box. I disagree. User feedback is valuable information. It should not constrain the design, but it is essential for understanding and empathizing with your users. They say “faster horse”? It’s your job to generalize and innovate on that desire to come up with a car. The problem with the “singular visionary” approach is that for every wildly successful visionary, there are a dozen more with equally innovative ideas that never found a market.

Read More

Agile SE Part One: What is Agile, Anyway?


What is “Agile”?

Agile is a relatively new approach to software development based on the Agile Manifesto and Agile Principles. These documents are an easy read and you should absolutely check them out. I will sum them up as stating that development should be driven by what is most valuable to the customer and that our projects should align around delivering value.

Yes, I’ve obnoxiously italicized the word value as if it were in the glossary of a middle school textbook. That’s because value is the essence of this entire discussion.

Little-a Agile

With a little a, “agile” is the ability to adapt to a changing situation. This means collaborating to understand stakeholder needs and the best way to satisfy them. It means changing the plan when the situation (or your understanding of the situation) changes. It means understanding what is valuable to the customer, focusing on delivering that value, and minimizing non-value-added effort.

Read More

The Operations Concept: Developing and Using an OpsCon

  • An Operations Concept is more detailed than a Concept of Operations
  • It is a systems engineering artifact that describes how system use cases are realized
  • It is versatile and serves many uses across the project
  • There is no set format, though there are some best practices to consider

Concept of Operations (ConOps)

Let’s start by talking about the OpsCon’s better-known big brother, the ConOps.

Read More