
Agent Intelligence Graph

Your agent’s architecture — mapped, visualized, and tested.

Key Takeaways
  • Auto-discovers your agent's architecture and renders it as an interactive graph, with no manual configuration required
  • Powers five graph-aware audit checks that detect issues invisible to flat-list analysis
  • Enriched at runtime: each test run adds coverage data and any newly discovered tools

Why It Matters

The Agent Intelligence Graph is an auto-discovered, interactive visualization of your AI agent's architecture. It maps every tool, chain, guard, and external service along with their relationships, enabling graph-aware audit checks that find issues invisible to flat-list analysis.

A flat list of tools tells you what your agent can do. The graph tells you how — which tools chain together, which paths are guarded, which external services are called, and which policy constraints apply. This structural view reveals risks that individual tool inspection cannot: unguarded paths to sensitive operations, multi-hop chains without error handling, and dead paths that no test ever exercises.


How Discovery Works

Invarium extracts your agent’s architecture automatically — you don’t need to describe it manually.

For popular frameworks like LangChain and LangGraph, Invarium analyzes your agent’s runtime structure to extract the complete graph with high accuracy. For custom agents using OpenAI, Anthropic, or other SDKs, Invarium captures your agent’s tool configuration directly from the LLM integration.

You can also upload a blueprint manually from the dashboard or via MCP if you prefer explicit control over what gets tested.

The discovery method is recorded in the graph metadata so you always know how your agent’s architecture was mapped.
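If you opt for manual upload, a blueprint is simply a structured description of the agent's tools, guards, and constraints. The sketch below shows what such a description might look like; the field names and the `discovery` metadata key are illustrative assumptions, not Invarium's actual schema:

```python
# Hypothetical blueprint structure; field names are illustrative only.
blueprint = {
    "agent": "order-support-bot",
    "tools": [
        {"name": "lookup_order", "params": {"order_id": "str"}, "returns": "OrderStatus"},
        {"name": "cancel_order", "params": {"order_id": "str"}, "returns": "bool"},
    ],
    "guards": [
        {"name": "verify_identity", "protects": ["cancel_order"]},
    ],
    "constraints": ["Never cancel an order without identity verification"],
    # Graph metadata records how the architecture was mapped,
    # e.g. framework discovery vs. a manually uploaded blueprint.
    "discovery": "manual",
}

print(blueprint["tools"][1]["name"])
```

Whichever route produces the graph, the recorded discovery method tells you whether a given node came from framework analysis, SDK capture, or a manual upload.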


Graph Structure

The graph consists of nodes (components) and edges (relationships). Each graph is versioned and associated with a specific agent.

Node Types

| Node Type | What It Represents | Description |
| --- | --- | --- |
| Tool | A function or API your agent can call | Individual tools with parameters and return types |
| Chain | A sequence of processing steps | Multi-step workflows that execute in order |
| Guard | A validation or safety check | Input validation, auth checks, content filters |
| ExternalService | An external API or database | Third-party APIs, databases, file systems |
| PolicyConstraint | A rule the agent must follow | System prompt rules, business logic constraints |
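The five node types map naturally onto a small typed model. This is an illustrative sketch, not Invarium's internal representation:

```python
from dataclasses import dataclass
from enum import Enum

class NodeType(Enum):
    TOOL = "Tool"
    CHAIN = "Chain"
    GUARD = "Guard"
    EXTERNAL_SERVICE = "ExternalService"
    POLICY_CONSTRAINT = "PolicyConstraint"

@dataclass(frozen=True)
class Node:
    id: str
    type: NodeType

# A guarded tool from a hypothetical order-support agent
cancel = Node("cancel_order", NodeType.TOOL)
auth = Node("verify_identity", NodeType.GUARD)
print(cancel.type.value)  # Tool
```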

Edge Types

| Edge Type | What It Represents | Description |
| --- | --- | --- |
| CAN_INVOKE | Agent can call this tool | Direct invocation relationship |
| CHAINS_TO | One step leads to another | Sequential execution flow |
| GUARDED_BY | Tool or chain is protected by a guard | Safety or validation dependency |
| READS | Component reads from a data source | Data input relationship |
| WRITES | Component writes to a data source | Data output relationship |
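Edges can be modeled the same way, as typed (source, edge, target) triples. A sketch of a tiny graph using these edge types, with hypothetical tool names:

```python
from enum import Enum

class EdgeType(Enum):
    CAN_INVOKE = "CAN_INVOKE"
    CHAINS_TO = "CHAINS_TO"
    GUARDED_BY = "GUARDED_BY"
    READS = "READS"
    WRITES = "WRITES"

# (source, edge type, target) triples for a minimal order-support agent
edges = [
    ("agent", EdgeType.CAN_INVOKE, "lookup_order"),
    ("agent", EdgeType.CAN_INVOKE, "cancel_order"),
    ("cancel_order", EdgeType.GUARDED_BY, "verify_identity"),
    ("lookup_order", EdgeType.READS, "orders_db"),
    ("cancel_order", EdgeType.WRITES, "orders_db"),
]

# Which components have a guard attached?
guarded = [src for src, etype, _ in edges if etype is EdgeType.GUARDED_BY]
print(guarded)  # ['cancel_order']
```

Representing the graph as triples makes the relationship-level queries behind the audit checks simple set and filter operations.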

Graph-Aware Audit Checks

The Agent Readiness Audit includes five checks that require graph data. These checks analyze relationships between components, not just individual components in isolation.

| Check | What It Detects | Why It Matters |
| --- | --- | --- |
| Unguarded paths | User input can reach sensitive operations without passing through a guard | A cancel_order tool reachable without identity verification is a safety gap |
| Multi-hop chains without error handling | Long chains with no fallback if a middle step fails | A 5-step chain that fails at step 3 with no recovery leaves the user stuck |
| Tools reachable without verification | Tools requiring identity verification that lack a GUARDED_BY edge | Financial operations without auth checks are exploitable |
| Missing fallbacks on critical paths | Critical workflows with no alternative if the primary path fails | A payment flow that depends on a single external API with no fallback |
| Dead paths | Paths defined in the graph but never executed in any test run | Untested paths are blind spots — you do not know if they work |

These checks are displayed on the graph itself, highlighting the specific nodes and edges involved in each finding.
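To make concrete how a graph-aware check differs from flat-list inspection, here is a minimal sketch of an unguarded-path detector: walk invocation edges from the entry point and flag any sensitive tool that lacks a GUARDED_BY edge. The graph, tool names, and traversal logic are illustrative assumptions, not Invarium's actual implementation:

```python
def find_unguarded(edges, entry, sensitive):
    """Return sensitive tools reachable from `entry` that have no guard."""
    guarded = {src for src, etype, _ in edges if etype == "GUARDED_BY"}
    # Breadth-first walk over invocation and chaining edges
    reachable, frontier = set(), [entry]
    while frontier:
        node = frontier.pop()
        for src, etype, dst in edges:
            if src == node and etype in ("CAN_INVOKE", "CHAINS_TO") and dst not in reachable:
                reachable.add(dst)
                frontier.append(dst)
    return sorted(t for t in reachable if t in sensitive and t not in guarded)

edges = [
    ("agent", "CAN_INVOKE", "lookup_order"),
    ("agent", "CAN_INVOKE", "cancel_order"),
    ("lookup_order", "GUARDED_BY", "verify_identity"),  # only lookup is guarded
]
print(find_unguarded(edges, "agent", {"cancel_order", "lookup_order"}))
# ['cancel_order']
```

A flat list of the same three tools would show that `verify_identity` exists; only the edge structure reveals that it never sits between the user and `cancel_order`.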


Path Comparison

After running tests, the graph shows the expected behavioral path versus the actual path your agent took:

| Deviation Type | What It Means |
| --- | --- |
| Correct path | Agent followed the expected path exactly |
| Skipped guard | Agent bypassed a required safety or validation check |
| Unexpected tool | Agent called a tool not in the expected path |
| Wrong sequence | Agent called the right tools in the wrong order |
| Missing step | Agent skipped a required step in the workflow |

Path deviations are mapped to the Failure Taxonomy — for example, a “skipped guard” maps to Tool Usage or Safety failures, while a “wrong sequence” maps to Instruction or Reasoning failures.
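A sketch of how expected-versus-actual comparison could classify these deviation types. The ordering of the rules and the path representation are illustrative assumptions, not Invarium's algorithm:

```python
def classify(expected, actual, guards=frozenset()):
    """Classify an actual tool-call path against the expected one."""
    if actual == expected:
        return "Correct path"
    skipped = [s for s in expected if s not in actual]
    extra = [s for s in actual if s not in expected]
    if any(s in guards for s in skipped):
        return "Skipped guard"       # a bypassed check outranks other deviations
    if extra:
        return "Unexpected tool"
    if skipped:
        return "Missing step"
    return "Wrong sequence"          # same tools, different order

expected = ["verify_identity", "lookup_order", "cancel_order"]
print(classify(expected, expected))                                          # Correct path
print(classify(expected, ["lookup_order", "cancel_order"],
               guards={"verify_identity"}))                                  # Skipped guard
print(classify(expected, ["lookup_order", "verify_identity", "cancel_order"]))  # Wrong sequence
```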


Graph Versioning

Every time the graph is re-extracted (via discovery or manual upload), a new version is created. Only one version is active at a time, but previous versions are retained so you can see how your agent’s architecture changed over time.


Using the Graph

The Agent Intelligence Graph is available in the dashboard for each agent:

  1. Select an agent from the sidebar
  2. Click Agent Graph in the navigation
  3. The graph loads with your most recent active version

Interaction controls:

  • Pan — Click and drag the background to move around
  • Zoom — Scroll to zoom in and out
  • Select node — Click a node to see its details, metadata, and connected test cases
  • Filter — Use the toolbar to filter by node type, edge type, or audit findings
  • Highlight path — Click an edge to highlight the full path it belongs to
  • Export — Download the graph as PNG

Prioritize fixes using the graph:

  1. Start with audit findings highlighted on the graph — unguarded paths and missing fallbacks are the highest-risk issues
  2. Check path comparison deviations — where your agent goes off-path is where bugs live
  3. Review runtime-discovered tools — tools that appear at runtime but are not in the original graph may indicate undocumented behavior
  4. Generate tests targeting dead paths — untested paths are blind spots

Runtime Graph Enrichment

The graph is a living document that gets updated with every test run:

  • Runtime-discovered tools — Tools called during tests that were not in the initial graph are added automatically, marked with a distinct visual indicator
  • Dead path detection — Paths in the graph that were never executed across all test runs are flagged for review
  • Coverage overlay — The graph shows which paths have been tested and which remain untested
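Dead-path detection and the coverage overlay both reduce to a set difference between edges defined in the graph and edges actually traversed in test runs. A minimal sketch, with a hypothetical edge representation:

```python
def dead_paths(graph_edges, executed_edges):
    """Edges present in the graph but never traversed in any test run."""
    return sorted(set(graph_edges) - set(executed_edges))

graph = [
    ("agent", "lookup_order"),
    ("agent", "cancel_order"),
    ("agent", "refund_order"),
]
executed = [("agent", "lookup_order"), ("agent", "cancel_order")]

print(dead_paths(graph, executed))  # [('agent', 'refund_order')]

# Coverage overlay: fraction of graph edges exercised by at least one test
coverage = 1 - len(dead_paths(graph, executed)) / len(graph)
print(f"{coverage:.0%}")            # 67%
```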

FAQ

Is the graph updated automatically?

Yes. The graph updates every time you sync new test results with invarium_sync_results. Each test run adds coverage data and may discover new runtime tools.

What if my agent has no tools?

Agents without tools still produce a graph, but it will be simpler — primarily showing input/output nodes and decision paths based on constraint handling and response patterns.

How does the graph handle multi-agent systems?

Each agent gets its own graph. If you have agents that call other agents, each one is graphed independently based on its own discovered architecture.

Does auto-discovery work with all frameworks?

Auto-discovery works best with popular frameworks like LangChain and LangGraph. For agents using OpenAI or Anthropic SDKs directly, Invarium captures tool configurations automatically. You can always upload a blueprint manually as an alternative.

What is the difference between the graph and the blueprint?

The blueprint is your agent’s declared architecture (tools, constraints, system prompt). The graph is the visual representation of that architecture with relationships, plus runtime enrichment from test execution. The graph can contain more information than the blueprint if runtime discovery finds tools or paths not declared in the original blueprint.