Digital Twin

A digital twin is a persistent, dynamic virtual model of a specific physical system, synchronized with that system through real-world—and ideally real-time—data. The twin doesn’t just represent what the system is designed to do; it represents the state and capabilities of the actual, real-world unit based on sensor data, operational history, and observed behavior.

That’s immensely valuable for predictive maintenance, operational readiness, mission simulation, and many other needs. But like many technology buzzwords, the meaning of “digital twin” has been muddied by misuse, particularly conflation with generic simulation capabilities1.

A generic Black Hawk simulation is useful in many ways. But it doesn’t tell me the status and capabilities of the specific Black Hawk I’m looking at, and it doesn’t feed real-world performance data back to improve the simulation and performance prediction capabilities for the fleet.

History and concept

The desire to capture and maintain an accurate model of a specific system is not new. Versions of this concept have been practiced and discussed for decades in a wide variety of industries and applications. The core insight is the same in each application: a continuously updated virtual replica of a physical asset enables condition monitoring, failure prediction, problem diagnosis, and performance optimization at the unit level as well as the fleet or design level.

Three things separate a true digital twin from a conventional simulation:

  1. A persistent identity. The twin represents one specific physical instance and it accumulates the operational history of that particular unit: its flight hours, configurations, maintenance events, operational environments, detected anomalies, performance shift over time, etc.
  2. Bidirectional data flow. The physical system informs the twin through sensors, telemetry, maintenance logs, and other inputs; the twin informs decisions about the physical system through analysis and prediction. Pull one direction out and you have either a sensor feed or a model, but not a twin.
  3. Continuous synchronization. A digital twin may be updated through periodic snapshots rather than a true real-time data feed, but it must stay synchronized with its physical counterpart; it is that living, continuously maintained record that makes it a twin.
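The three properties can be made concrete with a minimal sketch. This is purely illustrative, with all names and numbers invented: one object per tail number (persistent identity), telemetry snapshots flowing in and a prediction flowing back out (bidirectional data flow), updated record by record (synchronization).

```python
from dataclasses import dataclass, field


@dataclass
class AirframeTwin:
    """Toy digital twin of one specific physical unit (hypothetical sketch)."""
    tail_number: str                      # identity: one specific physical aircraft
    flight_hours: float = 0.0
    hours_since_maintenance: float = 0.0
    history: list = field(default_factory=list)  # accumulated operational record

    def ingest_snapshot(self, snapshot: dict) -> None:
        """Physical -> virtual: fold a telemetry/maintenance snapshot
        into this unit's persistent record."""
        hours = snapshot.get("hours", 0.0)
        self.flight_hours += hours
        if snapshot.get("maintenance_performed"):
            self.hours_since_maintenance = 0.0
        else:
            self.hours_since_maintenance += hours
        self.history.append(snapshot)

    def maintenance_due(self, interval_hours: float = 500.0) -> bool:
        """Virtual -> physical: a (deliberately crude) prediction fed back
        to whoever operates the real aircraft."""
        return self.hours_since_maintenance >= interval_hours
```

Strip out `ingest_snapshot` and you have a static model; strip out `maintenance_due` and you have a data logger. Only both directions together, accumulated against one specific tail number, match the definition above.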

The appeal should be obvious. Think of the maintenance minder in your car. It is a very crude predictive maintenance capability using a small number of factors—speed, operating temperature, ambient temperature, time, and vehicle use—to determine when regular maintenance is necessary. Almost certainly your oil has more life in it when the car says it’s time for a change; in fact, there are communities online dedicated to extending oil change intervals through sample testing. But that actual oil-life data never makes it back to the maintenance minder. Nor does the real-world data from hundreds of thousands of other cars of the same model and year operating across a wide range of uses and environments.

If you had that data, you could run analyses and push insights back to the physical system that would enable you to optimize your usage and maintenance. The impact is small on an individual level, but enormous when managing a fleet and/or trying to push a system to its absolute limits to meet a critical need.
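A caricature of the gap, with every factor and number invented for illustration (not taken from any real manufacturer): the minder applies a fixed formula, while a twin-style approach would recalibrate that formula from observed fleet data.

```python
def oil_life_remaining(miles: float, severe_use_fraction: float,
                       base_interval_miles: float = 7500.0) -> float:
    """Toy maintenance minder: remaining oil life as a fraction 0.0-1.0.
    severe_use_fraction is the share of driving under harsh conditions
    (short trips, extreme temperatures, towing). All numbers invented."""
    # Severe use "spends" oil life up to twice as fast in this toy model.
    effective_miles = miles * (1.0 + severe_use_fraction)
    return max(0.0, 1.0 - effective_miles / base_interval_miles)


def calibrated_interval(observed_failure_miles: list) -> float:
    """The digital-twin part the minder lacks: set the interval from
    real-world oil-analysis results fed back from the fleet, with a
    (hypothetical) 20% safety margin below the worst observed case."""
    return 0.8 * min(observed_failure_miles)
```

The fixed `base_interval_miles` is the maintenance minder; `calibrated_interval` stands in for the feedback loop that real fleet data would close.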

Dilution

Like nearly every technology buzzword, “digital twin” has become a victim of its own appeal. It sounds impressive, which makes it useful in marketing decks and program briefings regardless of whether the concept actually applies. The result is that the term now gets applied to almost any computer model of a physical thing and, increasingly, digital things that really don’t have a physical history to manage.

A finite element model of a bridge isn’t a digital twin; it’s a structural simulation. A CAD model of a new aircraft isn’t a digital twin; it’s a design tool. A physics-based simulation environment for training operators isn’t a digital twin; it’s a simulator. These are all valuable and legitimate tools—they just aren’t twins.

The confusion matters for a few reasons. First, it raises expectations, then erodes trust in the concept when those expectations aren’t fulfilled. Second, it obscures the investments necessary to capture and integrate real-world data; that’s a far greater commitment than a one-time modeling effort. Third, it muddies requirements; if “digital twin” can mean anything, it means nothing2.

Because this concept is so valuable, it’s worth being clear about what it means and what it doesn’t.

Why it matters for systems engineers

The genuine digital twin concept has real implications for how we think about system design from day one. A system that will have a digital twin needs to be instrumented to support it, which is far easier to do in initial design than added later.

It also needs a lifecycle sustainment concept that leverages the technology. True digital twins can transform sustainment in a way that more common predictive or condition-based maintenance can’t. That’s a meaningful capability. It’s worth using the term carefully enough that it stays that way.

How have you seen “digital twin” used or misused in your organization? Is the definitional looseness causing real problems, or is it harmless buzzword inflation? Leave your thoughts below.

  1. I like how Wikipedia puts it: “A digital twin operating without real, continuous data from its physical counterpart is widely considered a contested and largely marketing-oriented interpretation of the concept, since authoritative definitions consistently require dynamic synchronization with the real system for the virtual model to qualify as a true digital twin.”
  2. Why yes, I have been accused of being a pedant. How did you know?