Agent Intelligence Graph
Your agent’s architecture — mapped, visualized, and tested.
- ✓ Auto-discovers your agent's architecture, with no manual configuration, and renders it as an interactive graph
- ✓ Powers five additional graph-aware audit checks invisible to flat-list analysis
Why It Matters
The Agent Intelligence Graph is an auto-discovered, interactive visualization of your AI agent’s architecture — mapping every tool, chain, guard, and external service along with their relationships — enabling graph-aware audit checks that find issues invisible to flat-list analysis.
A flat list of tools tells you what your agent can do. The graph tells you how — which tools chain together, which paths are guarded, which external services are called, and which policy constraints apply. This structural view reveals risks that individual tool inspection cannot: unguarded paths to sensitive operations, multi-hop chains without error handling, and dead paths that no test ever exercises.
How Discovery Works
Invarium extracts your agent’s architecture automatically — you don’t need to describe it manually.
For popular frameworks like LangChain and LangGraph, Invarium analyzes your agent’s runtime structure to extract the complete graph with high accuracy. For custom agents using OpenAI, Anthropic, or other SDKs, Invarium captures your agent’s tool configuration directly from the LLM integration.
You can also upload a blueprint manually from the dashboard or via MCP if you prefer explicit control over what gets tested.
The discovery method is recorded in the graph metadata so you always know how your agent’s architecture was mapped.
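For illustration, that metadata record might look like the following sketch. The field names and values here are assumptions for the example, not Invarium's actual schema:

```python
# Hypothetical shape of graph discovery metadata -- the keys and values
# below are illustrative, not Invarium's actual schema.
discovery_metadata = {
    "method": "framework_introspection",  # or "sdk_capture" / "manual_upload"
    "framework": "langgraph",             # set only for framework-based discovery
    "extracted_at": "2025-01-15T10:30:00Z",
}
```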
Graph Structure
The graph consists of nodes (components) and edges (relationships). Each graph is versioned and associated with a specific agent.
Node Types
| Node Type | What It Represents | Description |
|---|---|---|
| Tool | A function or API your agent can call | Individual tools with parameters and return types |
| Chain | A sequence of processing steps | Multi-step workflows that execute in order |
| Guard | A validation or safety check | Input validation, auth checks, content filters |
| ExternalService | An external API or database | Third-party APIs, databases, file systems |
| PolicyConstraint | A rule the agent must follow | System prompt rules, business logic constraints |
Edge Types
| Edge Type | What It Represents | Description |
|---|---|---|
| CAN_INVOKE | Agent can call this tool | Direct invocation relationship |
| CHAINS_TO | One step leads to another | Sequential execution flow |
| GUARDED_BY | Tool or chain is protected by a guard | Safety or validation dependency |
| READS | Component reads from a data source | Data input relationship |
| WRITES | Component writes to a data source | Data output relationship |
Graph-Aware Audit Checks
The Agent Readiness Audit includes five checks that require graph data. These checks analyze relationships between components, not just individual components in isolation.
| Check | What It Detects | Why It Matters |
|---|---|---|
| Unguarded paths | User input can reach sensitive operations without passing through a guard | A cancel_order tool reachable without identity verification is a safety gap |
| Multi-hop chains without error handling | Long chains with no fallback if a middle step fails | A 5-step chain that fails at step 3 with no recovery leaves the user stuck |
| Tools reachable without verification | Tools requiring identity verification that lack a GUARDED_BY edge | Financial operations without auth checks are exploitable |
| Missing fallbacks on critical paths | Critical workflows with no alternative if the primary path fails | A payment flow that depends on a single external API with no fallback |
| Dead paths | Paths defined in the graph but never executed in any test run | Untested paths are blind spots — you do not know if they work |
These checks are displayed on the graph itself, highlighting the specific nodes and edges involved in each finding.
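As an illustration of the first check, an unguarded path can be approximated with a breadth-first traversal over execution edges. This is a simplified sketch of the idea, not Invarium's implementation, and the edge triples are hypothetical:

```python
from collections import deque

def find_unguarded_sensitive_tools(edges, entry, sensitive):
    """Return sensitive tools reachable from `entry` via execution edges
    (CAN_INVOKE / CHAINS_TO) that have no GUARDED_BY edge protecting them."""
    adjacency, guarded = {}, set()
    for src, dst, edge_type in edges:
        if edge_type in ("CAN_INVOKE", "CHAINS_TO"):
            adjacency.setdefault(src, []).append(dst)
        elif edge_type == "GUARDED_BY":
            guarded.add(src)  # src is the protected node, dst is the guard
    reachable, queue = {entry}, deque([entry])
    while queue:
        for nxt in adjacency.get(queue.popleft(), []):
            if nxt not in reachable:
                reachable.add(nxt)
                queue.append(nxt)
    return sorted(t for t in sensitive if t in reachable and t not in guarded)

edges = [
    ("agent", "lookup_order", "CAN_INVOKE"),
    ("agent", "cancel_order", "CAN_INVOKE"),
    ("lookup_order", "verify_identity", "GUARDED_BY"),  # guards the wrong tool
]
print(find_unguarded_sensitive_tools(edges, "agent", {"cancel_order"}))
# -> ['cancel_order']  (reachable, but no GUARDED_BY edge protects it)
```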
Path Comparison
After running tests, the graph shows the expected behavioral path versus the actual path your agent took:
| Outcome | What It Means |
|---|---|
| Correct path | Agent followed the expected path exactly |
| Skipped guard | Agent bypassed a required safety or validation check |
| Unexpected tool | Agent called a tool not in the expected path |
| Wrong sequence | Agent called the right tools in the wrong order |
| Missing step | Agent skipped a required step in the workflow |
Path deviations are mapped to the Failure Taxonomy — for example, a “skipped guard” maps to Tool Usage or Safety failures, while a “wrong sequence” maps to Instruction or Reasoning failures.
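A rough sketch of how such outcomes could be classified from tool sequences alone. This is a simplification for illustration; Invarium's actual comparison works against the graph, not just flat sequences:

```python
def classify_deviation(expected, actual, guards=frozenset()):
    """Classify a test run's observed tool sequence against the expected path."""
    if actual == expected:
        return "correct_path"
    expected_set, actual_set = set(expected), set(actual)
    skipped = expected_set - actual_set
    if skipped & set(guards):
        return "skipped_guard"      # a required safety/validation check was bypassed
    if actual_set - expected_set:
        return "unexpected_tool"    # a tool outside the expected path was called
    if skipped:
        return "missing_step"       # a required workflow step never ran
    return "wrong_sequence"         # right tools, wrong order

print(classify_deviation(
    expected=["verify_identity", "cancel_order"],
    actual=["cancel_order"],
    guards={"verify_identity"},
))  # -> skipped_guard
```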
Graph Versioning
Every time the graph is re-extracted (via discovery or manual upload), a new version is created. Only one version is active at a time, but previous versions are retained so you can see how your agent’s architecture changed over time.
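The rule above (one active version, earlier versions retained) can be sketched as follows. The function and field names are illustrative, not Invarium's API:

```python
def add_graph_version(versions, new_graph):
    """Record a fresh extraction as the new active version,
    deactivating (but retaining) every earlier version."""
    for version in versions:
        version["active"] = False
    versions.append({
        "number": len(versions) + 1,
        "graph": new_graph,
        "active": True,
    })
    return versions

history = []
add_graph_version(history, {"nodes": 4})
add_graph_version(history, {"nodes": 5})  # re-extraction after a code change
```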
Using the Graph
The Agent Intelligence Graph is available in the dashboard for each agent:
1. Select an agent from the sidebar
2. Click Agent Graph in the navigation
3. The graph loads with your most recent active version
Interaction controls:
- Pan — Click and drag the background to move around
- Zoom — Scroll to zoom in and out
- Select node — Click a node to see its details, metadata, and connected test cases
- Filter — Use the toolbar to filter by node type, edge type, or audit findings
- Highlight path — Click an edge to highlight the full path it belongs to
- Export — Download the graph as PNG
Prioritize fixes using the graph:
- Start with audit findings highlighted on the graph — unguarded paths and missing fallbacks are highest risk
- Check path comparison deviations — where your agent goes off-path is where bugs live
- Review runtime-discovered tools — tools that appear at runtime but are not in the original graph may indicate undocumented behavior
- Generate tests targeting dead paths — untested paths are blind spots
Runtime Graph Enrichment
The graph is a living document that gets updated with every test run:
- Runtime-discovered tools — Tools called during tests that were not in the initial graph are added automatically, marked with a distinct visual indicator
- Dead path detection — Paths in the graph that were never executed across all test runs are flagged for review
- Coverage overlay — The graph shows which paths have been tested and which remain untested
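A minimal sketch of the enrichment step for a single run, assuming paths are represented as tuples of node ids (the representation and names are assumptions; in practice results accumulate across all test runs):

```python
def enrich_after_run(graph_tools, graph_paths, run_tools, run_paths):
    """Fold one test run back into the graph view: surface tools seen only
    at runtime, flag never-executed (dead) paths, and compute path coverage."""
    runtime_discovered = sorted(set(run_tools) - set(graph_tools))
    dead_paths = [path for path in graph_paths if path not in run_paths]
    coverage = 1 - len(dead_paths) / len(graph_paths) if graph_paths else 0.0
    return runtime_discovered, dead_paths, coverage

discovered, dead, coverage = enrich_after_run(
    graph_tools={"lookup_order", "cancel_order"},
    graph_paths=[("lookup_order", "cancel_order"), ("lookup_order",)],
    run_tools={"lookup_order", "cancel_order", "send_email"},  # send_email seen only at runtime
    run_paths={("lookup_order", "cancel_order")},
)
```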
FAQ
Is the graph updated automatically?
Yes. The graph updates every time you sync new test results with invarium_sync_results. Each test run adds coverage data and may discover new runtime tools.
What if my agent has no tools?
Agents without tools still produce a graph, but it will be simpler — primarily showing input/output nodes and decision paths based on constraint handling and response patterns.
How does the graph handle multi-agent systems?
Each agent gets its own graph. If you have agents that call other agents, each one is graphed independently based on its own discovered architecture.
Does auto-discovery work with all frameworks?
Auto-discovery works best with popular frameworks like LangChain and LangGraph. For agents using OpenAI or Anthropic SDKs directly, Invarium captures tool configurations automatically. You can always upload a blueprint manually as an alternative.
What is the difference between the graph and the blueprint?
The blueprint is your agent’s declared architecture (tools, constraints, system prompt). The graph is the visual representation of that architecture with relationships, plus runtime enrichment from test execution. The graph can contain more information than the blueprint if runtime discovery finds tools or paths not declared in the original blueprint.