Enhancing Decision Making with AI in Tactical Decision Games

A few weeks back I rode with an unusual mix of people — one who cycles outside only during events, one who is adept at riding on the road alongside live Southeast Asian traffic, and one who comes from a country that is very cyclist-friendly.

I was the ride lead. I knew the target destination. It wasn’t that far. Because of that, I decided to adopt a very loose posture towards planning the ride.

It was a dumb idea. We had too many close calls and there was a lot of room for improvement — to the point that we had a major fight after the ride.

I said to myself: never again.

I have always prided myself on being careful about planning. But I never really thought about the step before planning — making correct and informed decisions, and then owning those decisions.

In this blog, I’ll discuss how that experience led me to experiment with building an AI agent designed to practice decisions in the same spirit as Tactical Decision Games.

Tactical Decision Games

Every so often you run into a training method that is deceptively simple. One of those methods is the Tactical Decision Game (TDG).

At first glance, a TDG is just a short scenario. A situation is presented, usually incomplete, often ambiguous. The trainee has to make a decision.

Then the facilitator asks the obvious question:

Why did you choose that?

That’s it.

But the real value isn’t the scenario. The value is the decision loop it creates. A typical TDG looks like this:

scenario
→ decision
→ consequences
→ discussion
→ improved mental model

Over time, repeated exposure to these situations builds something much harder to teach directly: judgment under uncertainty.

You’re not memorizing answers.

You’re building decision quality.

Throwing In AI Makes It More Interesting

Recently I’ve been experimenting with whether modern AI tools can help augment this kind of training.

The biggest limitation of traditional TDGs is simply time and bandwidth. A classroom facilitator can only run so many scenarios in a session. And designing good scenarios takes work.

AI can help in a few ways.

First, it can generate variations of scenarios quickly.

Second, it can evaluate decisions from different perspectives — including adversarial ones.

Third, it can simulate the consequences of a decision step by step so the trainee can see how the situation evolves.

The result looks something like this:

scenario generation
→ trainee decision
→ red-team evaluation
→ decision replay
→ after-action review

It's essentially the same learning loop TDGs already use, just with some wargaming and an after-action review (AAR) thrown in, augmented by AI.

The Hard Part: Reasoning, Not Scenarios

While building the experiment, a few design considerations became clear.

Scenario generation is relatively easy for modern AI systems, but evaluating decisions consistently is harder.

My approach, then, is to separate the two concerns: generating scenarios and reasoning about decisions.

This keeps the training loop structured while still supporting different domains and contexts. It also lets the system evaluate decisions consistently through multiple lenses: tactical consequences, operational implications, strategic effects, ethical considerations, and even stakeholder impact.
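To make the multi-lens idea concrete, here is a small sketch of fixing the lenses as data so every decision gets the same evaluation structure. The lens names come from the list above; the evaluation itself is a stub standing in for per-lens model calls.

```python
# Fixed set of evaluation lenses, applied to every decision so that
# feedback stays consistent across scenarios and domains.
LENSES = [
    "tactical consequences",
    "operational implications",
    "strategic effects",
    "ethical considerations",
    "stakeholder impact",
]

def evaluate(decision: str, scenario: str) -> dict[str, str]:
    # In a real system each lens would get its own prompt or model call;
    # here the "assessment" is just a formatted placeholder string.
    return {
        lens: f"Assess '{decision}' in '{scenario}' for {lens}"
        for lens in LENSES
    }

feedback = evaluate("evacuate via north route", "flash flood, comms degraded")
```

Because the lens list is external to any one scenario, two trainees who make different calls on the same scenario are critiqued along the same dimensions, which is what makes the feedback comparable.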

This externalized reasoning framework gives the TDG a consistent structure for analyzing decisions: in effect, it follows a decision analysis process.

This is important because TDGs are not about finding the “correct” answer. They are about examining the reasoning behind a decision.

Why Domain Knowledge Still Matters

A generalized reasoning framework is useful, but it can’t operate in a vacuum: different domains have different constraints.

Decision making in a search-and-rescue operation is not the same as decision making in a negotiation, and both are very different from crisis response.

That’s why the system I experimented with separates two things: (1) the reasoning framework, and (2) the domain knowledge.

The reasoning engine handles the decision process, while the domain modules provide context-specific rules and constraints. This allows the same training system to explore adjacent domains without rewriting the core logic.

In theory, it lets you reuse the same decision-training structure across multiple fields: plug-and-play domain knowledge.
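One way to express that separation is a small interface that domain modules implement, while the reasoning engine stays generic. This is a sketch under my own assumptions: DomainModule and the two sample modules are hypothetical, and real constraint sets would be far richer than two strings.

```python
from typing import Protocol

class DomainModule(Protocol):
    # Interface every plug-in domain must satisfy.
    name: str
    def constraints(self) -> list[str]: ...

class SearchAndRescue:
    name = "search-and-rescue"
    def constraints(self) -> list[str]:
        return ["golden-hour time pressure", "terrain access limits"]

class Negotiation:
    name = "negotiation"
    def constraints(self) -> list[str]:
        return ["counterparty interests", "walk-away alternatives"]

def frame_decision(domain: DomainModule, decision: str) -> str:
    # The reasoning engine's job: same decision process, different
    # domain constraints injected from the module.
    rules = "; ".join(domain.constraints())
    return f"[{domain.name}] Evaluate '{decision}' under: {rules}"

print(frame_decision(SearchAndRescue(), "split the team"))
print(frame_decision(Negotiation(), "open with a high anchor"))
```

Swapping Negotiation for SearchAndRescue changes the constraints but not the decision process, which is the plug-and-play property described above.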

The Risk of Going Too Far

There is a trap here though. Generalization is useful, but too much generalization can dilute the training value. If the reasoning framework becomes too abstract, the system starts producing advice that feels generic.

Anyone who has participated in real TDGs knows that the power of the exercise comes from specific context: terrain, time pressure, degraded information, and conflicting objectives.

If a training system ignores those realities, it stops being useful. So the trick is maintaining a balance between a general reasoning framework and strong domain context.

Too much domain specialization and the system becomes brittle; too much abstraction and the training becomes vague.

The interesting design challenge is finding the middle ground.

Why This Matters

The point of all this isn’t to build a fancy AI tool.

The point is to explore whether we can scale decision training. If learners can experience many decision scenarios safely and repeatedly, they can build better instincts. Instructors can then focus on the more valuable part: refining the reasoning behind those decisions.

Good judgment usually comes from experience. But experience is expensive and often risky. TDGs offer a way to practice decisions safely.

If AI can help people run more of these exercises — with better feedback and more varied scenarios — then it might help people develop better decision instincts faster.

At least, that’s the experiment.

And like any good TDG, the real value isn't in the answer; it's in the reasoning behind it.

About Me

In my spare time, I work on AI projects that can help save lives in dynamic and non-permissive environments.
