Large Language Models (LLMs) are impressive tools. They can summarize information, answer questions, and generate explanations on almost any topic.
But raw LLMs also operate in a high-noise environment. They generate responses based on statistical patterns learned from huge datasets. That means useful insights are often mixed with irrelevant details, shallow reasoning, or occasionally incorrect information.
If we want to use LLMs as serious analytical tools, especially inside a specific domain, we need to increase the signal-to-noise ratio.
In a regulatory framework project I am working on, I have struggled to see the big picture and where my component fits in. More than that, I could really use an expert on the topic to bounce ideas off and to challenge my assumptions.
To solve that, I decided to build an expert AI Agent I can use for analysis. I built it around three structures:
- curated domain knowledge
- a reasoning framework
- validation mechanisms
My thinking is that, when done correctly, the LLM stops behaving like a generic chatbot and starts acting like an assistant that helps analyze problems within a defined problem space.
Layer 1: Curated Domain Knowledge
My first step is controlling the knowledge inputs. LLMs are trained on massive datasets that contain both high-quality information and questionable sources. That mix is fine for general conversation, but it becomes risky when the system is expected to provide reliable analysis.
Even decades later, it is still garbage in, garbage out.
For my project, I used these authoritative sources: Regulations and Circulars published by a government body.
These sources form a bounded knowledge domain that the system can rely on when analyzing problems.
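One way to make that bounded knowledge domain explicit is a small source registry that the agent must go through whenever it cites material. This is a minimal sketch: the names `Source`, `KnowledgeBase`, and the sample entries are illustrative assumptions, not the project's actual implementation.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Source:
    """An authoritative document the agent is allowed to rely on."""
    title: str
    issuer: str      # e.g. the government body publishing it
    doc_type: str    # "regulation" or "circular"


class KnowledgeBase:
    """Bounded knowledge domain: only registered sources are usable."""

    def __init__(self) -> None:
        self._sources: dict[str, Source] = {}

    def register(self, key: str, source: Source) -> None:
        self._sources[key] = source

    def cite(self, key: str) -> Source:
        # Fail loudly instead of letting the agent invent a source.
        if key not in self._sources:
            raise KeyError(f"'{key}' is outside the curated domain")
        return self._sources[key]


kb = KnowledgeBase()
kb.register("reg-001", Source("Data Reporting Regulation",
                              "Central Authority", "regulation"))
print(kb.cite("reg-001").doc_type)  # prints "regulation"
```

The point of the hard failure in `cite` is that an out-of-domain reference surfaces as an error rather than silently blending into the analysis.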
Layer 2: Reasoning Framework
But just having the sources is not enough: knowledge needs to be in a usable form.
The second step was to formalize that knowledge into a reasoning framework.
I did that by decomposing the knowledge gathered into structured and layered components that act as the scaffolding that guides how the model thinks through a problem:
| Component | Definition | Example | Effect on the System |
|---|---|---|---|
| Doctrine | Foundational principle describing how the world or system works. It defines the conceptual model the system uses when interpreting problems. | “System behavior emerges from structure, feedback loops, delays, modes, and states.” | Establishes the worldview used during analysis. The system consistently interprets problems in terms of relationships and dynamics rather than isolated facts. |
| Ontology | A formal representation of the entities, relationships, and concepts within a domain. It defines what kinds of things exist in the system and how they relate. | Entities such as system, variable, stock, flow, feedback loop, delay, intervention, outcome. | Provides a consistent vocabulary and data structure for reasoning. The system can map problems into a structured representation instead of relying on vague language. |
| Heuristic | A procedural rule or guideline that shapes how analysis is performed. It defines the reasoning steps the system should follow. | “Map the system before proposing solutions.” | Changes the reasoning process. The system must identify variables, relationships, and feedback loops before proposing interventions, reducing premature conclusions. |
| Policy | A constraint or rule that governs acceptable system behavior. It acts as a guardrail ensuring quality and consistency. | “Do not propose interventions until the diagnostic checklist is complete.” | Acts as a quality gate. The system cannot move forward to solutions until required analysis steps are satisfied, preventing shallow or incomplete reasoning. |
| State Machine | A formal model describing system states and the transitions between them based on triggers or conditions. | System modes such as analysis mode, intervention evaluation mode, and validation mode | Allows the system to track operational conditions and adjust reasoning accordingly. Different states activate different rules, constraints, or behaviors. |
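The components in the table above can be sketched as code. This is a minimal sketch under my own assumptions: the class names, the `Mode` values, and the checklist fields are illustrative, not the project's actual schema. It shows how a policy acts as a quality gate and how the state machine tracks which mode the reasoning is in.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Mode(Enum):
    """State machine: each mode activates different rules."""
    ANALYSIS = auto()
    INTERVENTION_EVALUATION = auto()
    VALIDATION = auto()


@dataclass
class Analysis:
    """Working state built up while following the heuristics."""
    variables: list[str] = field(default_factory=list)
    feedback_loops: list[str] = field(default_factory=list)
    mode: Mode = Mode.ANALYSIS

    def checklist_complete(self) -> bool:
        # Heuristic: map the system before proposing solutions.
        return bool(self.variables) and bool(self.feedback_loops)

    def propose_intervention(self, intervention: str) -> str:
        # Policy: no interventions until the diagnostic checklist is done.
        if not self.checklist_complete():
            raise RuntimeError("Policy violation: map the system first")
        self.mode = Mode.INTERVENTION_EVALUATION
        return f"Evaluating intervention: {intervention}"


a = Analysis()
a.variables.append("operational load")
a.feedback_loops.append("burden shifted downstream")
print(a.propose_intervention("strengthen upstream infrastructure"))
```

Calling `propose_intervention` on an empty `Analysis` raises immediately, which is the quality-gate behavior the Policy row describes.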
Layer 3: Validation Mechanisms
Even a well-designed reasoning framework needs ways to check itself. I did that by introducing archetypes and validation tests.
Archetypes are recurring system structures that produce predictable patterns of behavior. Each archetype represents a specific configuration of feedback loops and constraints. Recognizing these patterns helps the system diagnose problems faster and focus on deeper structural causes instead of symptoms.
For example, when I analyzed the regulatory framework we are executing against, it turned out to exhibit the “Shifting the Burden” systems archetype. This happens because the operational load is, by design, concentrated on downstream institutions and customers rather than on the upstream infrastructure that causes the problem.
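One way to make archetypes machine-checkable is to represent each one as a signature of structural features and test whether the mapped system contains them all. This is a sketch under assumed feature names; the signatures below are simplified stand-ins, not a complete systems-thinking catalogue.

```python
# Archetypes as structural signatures: an archetype matches when all of
# its required structural features appear in the mapped system.
ARCHETYPES: dict[str, set[str]] = {
    "Shifting the Burden": {
        "symptomatic_fix_loop",       # quick fix that relieves the symptom
        "fundamental_fix_loop",       # slower fix at the source
        "side_effect_undermines_fundamental_fix",
    },
    "Limits to Growth": {
        "reinforcing_growth_loop",
        "balancing_constraint_loop",
    },
}


def matching_archetypes(observed_features: set[str]) -> list[str]:
    """Return every archetype whose signature is a subset of what was mapped."""
    return [name for name, signature in ARCHETYPES.items()
            if signature <= observed_features]


observed = {
    "symptomatic_fix_loop",
    "fundamental_fix_loop",
    "side_effect_undermines_fundamental_fix",
    "operational_load_downstream",
}
print(matching_archetypes(observed))  # prints ['Shifting the Burden']
```

The subset test (`signature <= observed_features`) keeps matching conservative: extra mapped features never block a match, but a missing feature always does.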
After identifying common system structures through archetypes, I designed self-tests. These validation tests function like unit tests for the reasoning system. The system checks whether it can map feedback loops, explain behavior over time, and recognize archetype patterns.
If the reasoning fails these checks, the system flags that the analysis is incomplete and continues mapping the system structure before proposing interventions.
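The self-tests can be sketched as unit-test-style checks over the analysis output. A minimal sketch, assuming the analysis step produces a dict with these keys; the key names are illustrative, not a fixed schema from the project.

```python
def run_self_tests(analysis: dict) -> list[str]:
    """Unit-test-style checks on the reasoning output; returns failure flags."""
    failures = []
    if not analysis.get("feedback_loops"):
        failures.append("no feedback loops mapped")
    if not analysis.get("behavior_over_time"):
        failures.append("behavior over time not explained")
    if not analysis.get("archetypes"):
        failures.append("no archetype pattern recognized")
    return failures


# An incomplete analysis: loops are mapped, but behavior over time and
# archetypes are missing, so the system keeps mapping before intervening.
incomplete = {"feedback_loops": ["burden shifted downstream"]}
flags = run_self_tests(incomplete)
print(flags)
```

A non-empty return value is the signal that the analysis is incomplete; an empty list means all checks passed and the system may move on to interventions.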
Together, these mechanisms keep the analytical process consistent and improving over time.
Signal-to-Noise Before and After
Before introducing a structured approach, the LLM shifted into what I observed as a narrative mode: broad but shallow explanations, inconsistent reasoning, topic drift, and hallucinations.
The signal is there—but buried in noise. So much noise.
After adding structure and converting it into an expert AI Agent, I observed that analysis follows a consistent structure, causal relationships become explicit, feedback loops are identified, and interventions target structural leverage points.
Structure reduces noise because it constrains the reasoning process. Instead of generating answers freely, the system must follow defined concepts, procedures, and validation checks. This dramatically narrows the space of possible outputs and increases analytical consistency.
Through structure, I successfully turned a stochastic text generator into a structured expert system that helps analyze problems.
Why This Matters
People turn to expert systems when they need help with:
- diagnosing complex problems
- interpreting regulations
- evaluating strategies
- planning operations
In those situations, reliability matters more than conversational ability.
From experience, a useful expert system needs curated domain knowledge, a reasoning framework, and validation mechanisms. With the strong push in the AI space, I believe adapting this structure to AI will be very powerful.
Without these elements, an LLM remains a helpful general assistant—but not a reliable analytical tool.
With them, it becomes part of a structured knowledge system designed to support human decision-making.
About Me
“Light is gone but the machine is still working”