One of my favorite items in my small model collection is a 1:34 scale Grumman Long Life Vehicle (LLV) with sliding side doors, a roll-up rear hatch, and pull-back propulsion. The iconic vehicle has been plying our city streets for nearly 40 years, reliably delivering critical communiqués, bills, checks, advertisements, Dear John letters, junk mail, magazines, catalogs, postcards from afar, chain letters, and Amazon packages.
System Design Lessons from the USS McCain
The Navy installed touch-screen steering systems to save money.
Ten sailors paid with their lives.
ProPublica
Ten sailors died after the crew of the destroyer USS John S. McCain lost control of their vessel, causing a collision with the merchant tanker Alnic MC. There was nothing technically wrong with the vessel or its controls. Though much of the blame was put on the Sailors and Officers aboard, the real fault rests with the design of the Integrated Bridge & Navigation System (IBNS).
World War III’s Bletchley Park
In a near-future battlefield against a peer adversary, effective employment of machine learning and autonomy is the deciding factor. While our adversary is adapting commercial, mass-market technologies and controlling them remotely, U.S. and allied forces dominate with the effective application of advanced technologies that make decisions faster and more accurately. The concept of Joint All-Domain Command and Control (JADC2) is a key enabler, driving better battlefield decisions through robust information sharing.
Fed by this information, advanced decision-aiding systems present courses of action (COAs) to each commander, and then to each crew, in the battle, taking into account every possible factor: tasking, environment and terrain, threats, available sensors and effectors, etc. Options and recommendations adapt as the battle unfolds, supporting every decision made with actionable information while deferring to human judgment.
In this campaign, the first few battles are handily won. It seems this war will be a cakewalk.
Until the enemy learns. They notice routines in behaviors and responses that are easy to exploit: System A is a higher threat priority, so it is used as a diversion; displaced earth is flagged as a potential mine, so the enemy digs random holes to slow progress; fire comes in specific patterns, so the enemy knows when a barrage is over and quickly counters.
Pretty soon, these adaptations evolve into active attacks on autonomy: dazzle camouflage tricks computer vision systems into seeing more or different units; noise added to radio communications causes military chatter to be misclassified as civilian; selective sensor jamming confuses autonomy.
As the enemy learns to counter and attack these advanced capabilities, they become less helpful and eventually become a liability. Operators deem them unreliable and revert to human decision-making and manual control of systems. The enemy has evened the battle, and our investment in advanced decision support systems is wasted. Even worse, our operators lack experience with manual control of the systems and are actually at a disadvantage; the technology has actively hurt us.
The solution is clear: we must be prepared to counter the enemy’s learning and to learn ourselves. This is not a new insight. Learning and adaptation have always been essential elements of war, and now they’re more important than ever. Lessons learned from the field must be fed back into the AI/ML/autonomy development process. A short feedback, development, testing, and deployment cycle is essential for autonomy to adapt to the adversary’s capabilities and tactics, techniques, and procedures (TTPs), limiting the adversary’s ability to learn how to defend against and defeat our technologies.
In World War II, cryptography was the game-changing technology. You’re doubtless familiar with Bletchley Park, the codebreaking site that provided critical intelligence throughout the war. There, men and women worked tirelessly every single day of the war to analyze communication traffic, break the day’s codes, and pass intelligence to decision-makers. This work saved countless lives, contributed directly to the Allied victory, and is credited with shortening the war by two to four years. With the advancement of communications security, practically unbreakable encryption is available to everyone. We will no longer have the advantage of snooping on enemy communications and must develop some other unique capability to ensure our forces have the edge.
I submit that the advantage will come from military-grade autonomy. Not the autonomous vehicles themselves, which are commodities, but the ability of the autonomy to respond to changing enemy behavior. One key advantage of traditional human control is adaptability to unique and changing situations, which current autonomy lacks; the state of the art in autonomous systems today more closely resembles video game NPCs, mindlessly applying the same routines to simple inputs. While we may have high hopes for the future of autonomy, the truth is that autonomous systems will be limited for the foreseeable future by an inability to think outside the box.
How, then, do we enable the autonomous systems to react rapidly to changing battlefield conditions?
World War III’s version of Bletchley Park will be a capability I’m calling the Battlefield Accelerated Tactics and Techniques Learning and Effectiveness Laboratory. BATTLE Lab is a simulation facility. It ingests data from the field in near-real time: every detail of every battle, including terrain, weather, friendly behaviors, enemy tactics, signals, and more. Through experimentation across hundreds of thousands of simulated engagements driven by observed behavior, we’ll develop courses of action for countering the enemy in every imaginable situation. Updated behavior models that reduce friendly vulnerabilities, exploit enemy weaknesses, and give our forces the edge will be pushed to the field multiple times per day.
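To make the concept concrete, here is a minimal Python sketch of the cycle BATTLE Lab would run: ingest field reports, evaluate candidate behavior models across many simulated engagements, and push the winner back to deployed systems. Every name, signature, and number below is invented for illustration; no real system or simulation framework is implied.

```python
import random
from dataclasses import dataclass

@dataclass
class BehaviorModel:
    version: int
    params: dict

def simulate_engagement(model: BehaviorModel, enemy_tactics: dict) -> float:
    """Score one simulated engagement (0 = loss, 1 = win). A stand-in
    for a real constructive simulation driven by observed behavior."""
    return random.random()

def battle_lab_cycle(current: BehaviorModel, field_reports: list[dict],
                     n_candidates: int = 100, n_sims: int = 1000) -> BehaviorModel:
    # 1. Ingest: aggregate observed enemy tactics from near-real-time field data.
    enemy_tactics = {"report_count": len(field_reports)}  # placeholder aggregation

    # 2. Experiment: evaluate candidate models across thousands of simulated battles.
    best, best_score = current, 0.0
    for _ in range(n_candidates):
        candidate = BehaviorModel(current.version + 1,
                                  {"mutation": random.random()})
        score = sum(simulate_engagement(candidate, enemy_tactics)
                    for _ in range(n_sims)) / n_sims
        if score > best_score:
            best, best_score = candidate, score

    # 3. Disseminate: push the winning model to the field, multiple times per day.
    return best
```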
Of course, we already do this today with extensive threat intelligence capabilities, training, and tactics. The difference is that the future battlefield will be chockablock with autonomous systems, which can integrate new threat and behavior models generated by BATTLE Lab far more rapidly. We’ll be able to move faster, using autonomy and simulation to shorten the OODA loop while nearly instantly incorporating lessons from every battle.
Without BATTLE Lab, the enemy will learn how our autonomy operates and quickly find weaknesses to exploit; autonomous systems will be vulnerable to spoofing, jamming, and unexpected behaviors by enemy systems. Bletchley Park shortened the OODA loop by providing better intelligence to strategic decision-makers (“Observe”). BATTLE Lab will shorten the OODA loop by improving the ability of autonomy to understand the situation and make decisions (“Orient” and “Decide”).
BATTLE Lab is enabled by technology available and maturing today: low-cost uncrewed systems, battlefield connectivity, and edge processing.
A critical gap is human-autonomy interaction. To employ these advanced capabilities effectively, human crews need to task, trust, and collaborate with autonomous teammates, and these interaction strategies need to mature alongside autonomy capabilities to enhance employment at every step. Autonomy tactics may change rapidly as new models are disseminated from BATTLE Lab, and human teammates need to be able to understand and trust the autonomous systems’ behaviors. Explainability and trust are topics of ongoing research; additional effort will be needed to integrate these capabilities into mission planning and mission execution.
What do you think the future battlefield will look like and what additional capabilities need to be developed to make it possible? Share your thoughts in the comments below.
OODA Loop: Observe, Orient, Decide, Act
All models are wrong, some models are useful.
George E. P. Box
“Observe, Orient, Decide, Act” (OODA) is a simple decision-making model developed by US Air Force Colonel John Boyd. The concept is straightforward: every entity in a competition is executing these four phases; the side that can execute them more quickly and accurately will win. “OODA” is a useful shorthand for discussing human decision-making and is commonly used in military circles.
Of course, this simple phrase masks an enormous amount of complexity regarding the amount of information observed, the participant’s ability to orient, the quality of decision-making, and the actions available to execute. Yet it is this simplicity that gives the model its strength. Because the model is so simple, it holds at every scale: engagement to engagement, battle to battle, campaign to campaign. Strategic decision-makers are looking at the forest while tactical decision-makers are looking at the trees, yet they’re all executing an OODA loop at their own scope and scale.
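For readers who think in code, the model reduces to a very small control loop. This Python sketch is purely illustrative; the stub functions stand in for whatever sensing, analysis, and action a given echelon actually performs, and the same loop applies whether the scope is a single engagement or a whole campaign.

```python
def observe(world):                # Observe: gather raw information
    return {"contact": world.get("contact")}

def orient(observation, context):  # Orient: interpret it against context and experience
    return {"threat": observation["contact"] == "hostile", **context}

def decide(assessment):            # Decide: choose a course of action
    return "engage" if assessment["threat"] else "monitor"

def act(decision, world):          # Act: execute, changing the world for the next cycle
    world["last_action"] = decision
    return world

def ooda_loop(world, context, cycles=3):
    for _ in range(cycles):        # whoever cycles faster and more accurately wins
        world = act(decide(orient(observe(world), context)), world)
    return world

print(ooda_loop({"contact": "hostile"}, {"scale": "engagement"}))
```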
What makes a good human factors engineer? Five critical skills
Recently, the head of a college human factors program asked for my perspective on the human factors (and user experience) skills valued in industry. Here are five critical qualities that emerged from our discussion, in no particular order:
Systems thinking
Making sense of complexity requires identifying relationships, patterns, feedback loops, and causality. Systems thinkers excel at identifying emergent properties of systems and are thus suited to analyses such as safety, cybersecurity, and process, where outcomes may not be obvious from simply looking at the sum of the parts.
Military-industrial complex
The phrase “military-industrial complex” was coined by President Eisenhower in his farewell address to the nation in 1961. In this address, Eisenhower spoke of the deterrence value of military strength:
A vital element in keeping the peace is our military establishment. Our arms must be mighty, ready for instant action, so that no potential aggressor may be tempted to risk his own destruction.
Simultaneously, he warned of the potential danger in the growing relationship between the military establishment and the defense industry:
Agile SE Part Five: Agility on Large, Complex Programs
Table of Contents
- Part 0: Overview
- Part 1: What is Agile, Anyway?
- Part 2: What’s Your Problem?
- Part 3: Agile Contracts and the Downfall of Requirements
- Part 4: Digital Transformation
- Part 5: Agility on Large, Complex Programs (you’re here!)
Putting it all together
In this series we’ve introduced agile concepts, requirements, contracting, and digital engineering (DE) for physical systems. These are all enablers of agility, but they don’t make a program agile per se. The key to agility is how the program is planned and how functions are prioritized.
Agile program planning
A traditional waterfall program is planned using the Statement of Work (SOW), Work Breakdown Structure (WBS), and Integrated Master Schedule (IMS). This essentially requires scheduling all of the work before the project starts, accounting for dependencies, key milestones, and so on. Teams know what to work on because the schedule tells them what they’ll be working on and when. At least in theory.
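As a toy illustration of what “scheduling all of the work before the project starts” entails, the sketch below builds a miniature IMS from a WBS with dependencies. The task names and durations are invented, and a real IMS tool tracks far more (resources, milestones, float); the point is that the whole plan is computed up front, so any change ripples through it.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Toy WBS: each task maps to the set of tasks that must finish first.
wbs = {
    "requirements": set(),
    "design": {"requirements"},
    "build": {"design"},
    "test": {"build"},
    "deliver": {"test"},
}
durations_weeks = {"requirements": 4, "design": 8, "build": 12, "test": 6, "deliver": 1}

# Compute the entire schedule before work starts: each task begins when
# its latest prerequisite finishes.
finish = {}
for task in TopologicalSorter(wbs).static_order():
    start = max((finish[dep] for dep in wbs[task]), default=0)
    finish[task] = start + durations_weeks[task]
    print(f"{task:>12}: weeks {start:>2}-{finish[task]:>2}")
```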
College interviewing tips
For several years I’ve been volunteering as an alumni interviewer for my alma mater. It’s enjoyable to spend a bit of time interacting with a younger generation and exploring their interests; my optimism is buoyed by their potential.
Minimum Viable Product (MVP): You’re doing it wrong
Quibi was a short-lived short-form video app. It was founded in August 2018, launched in April 2020, and folded in December 2020, wiping out $1.75 billion of investors’ money. That’s twenty months from founding to launch and just six months to fail. Ouch.
Forbes chalked this up to “a misread of consumer interests”; though the content was pretty good, Quibi only worked as a phone app while customers wanted TV streaming, and it lacked social sharing features that may have drawn in new viewers. It was also a paid service competing with free options like YouTube and TikTok. According to The Wall Street Journal, the company’s attempts to address the issues were too late: “spending on advertising left little financial wiggle room when the company was struggling”.
If only there were some way Quibi could have validated its concept before wasting nearly two billion dollars.
Agile isn’t faster
A common misconception is that Agile development processes are faster. I’ve heard this from leaders as a justification for adopting Agile processes and read it in proposals as a supposed differentiator. It’s not true. Nothing about Agile magically enables teams to architect, engineer, design, test, or validate any faster.
In fact, many parts of Agile are actually slower. Time spent on PI planning, backlog refinement, sprint planning, daily stand-ups, and retrospectives is time the team isn’t developing. Much of that overhead is avoided in a Waterfall style, where development follows a set plan.
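A quick back-of-envelope calculation makes the point; the hours below are my own illustrative guesses, not figures from any study.

```python
hours_per_sprint = 10 * 8              # two working weeks per sprint
ceremony_hours = {
    "sprint planning": 4,
    "daily stand-ups": 10 * 0.25,      # 15 minutes per working day
    "backlog refinement": 2,
    "retrospective": 1.5,
    "PI planning (amortized)": 3,      # roughly 15 h per five-sprint increment
}
overhead = sum(ceremony_hours.values())
print(f"{overhead:.1f} h of {hours_per_sprint} h "
      f"({overhead / hours_per_sprint:.0%}) goes to ceremonies, not development")
```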