
What is a Decision Context Graph? An Architectural Guide

The infrastructure layer between raw data and intelligent decisions.


Swarnim Shrey

Founder, MindPalace

April 12, 2026 · 14 min read

The problem with dashboards

Dashboards were built to answer one question: "what happened?"

They do that well. Charts go up. Charts go down. Numbers get displayed.

Dashboards were never designed to answer the questions that actually matter:

  • Why did it happen?
  • Who is responsible for fixing it?
  • What should we do next?

When Revenue drops, a dashboard shows you a red number. It does not show you which upstream metric caused the decline. It does not show you that Returning Customers fell because Retention dropped because Onboarding Completion cratered last month. It does not show you that Sarah owns Retention and should already be investigating.

Dashboards show data. They do not show logic.

That gap, between seeing numbers and understanding causality, is what a Decision Context Graph fills.

What is a Decision Context Graph?

A Decision Context Graph is a living topology of your business that maps three layers traditional BI tools ignore.

The three layers of a Decision Context Graph. Each one builds on the layer below.

Layer 1: the semantic layer (entities)

This layer understands your data. Not just its schema, its meaning.

A traditional data catalog knows that a table called customers exists. It knows the columns and data types. What it does not know:

  • Is customers an Object (a thing that persists) or an Event (a thing that happens)?
  • How does customers relate to orders? One-to-many? Many-to-many?
  • Which join paths are "golden" (trusted, frequently used) vs dangerous (cause fan-out, inflate numbers)?

A Decision Context Graph knows all of this. Not because someone documented it, but because the system learned it from actual query behavior.

When analysts consistently use customers → orders → products to answer business questions, that path becomes a semantic edge in the graph. When a particular join causes row explosion and bad numbers, the graph learns to flag it.

Schema tells you what exists. Semantics tells you what it means.

Layer 2: the causal layer (metrics)

This layer maps how your metrics connect. The hierarchy from North Star to drivers.

In most organizations, Revenue is just a number on a dashboard. In a Decision Context Graph, Revenue is a node with children.

Revenue decomposed by customer source. New + Returning + Expansion = Revenue.

Every metric is a node. Every relationship is an edge. The edges encode causality: not mere correlation, but the actual mathematical decomposition.

The structure must be MECE: Mutually Exclusive, Collectively Exhaustive.

  • Mutually Exclusive: no overlaps. New Revenue and Returning Revenue should not double-count.
  • Collectively Exhaustive: no gaps. Child metrics must mathematically sum to the parent.

A single metric can have more than one valid MECE decomposition. The diagram above is one cut: by customer source. The same Revenue can also be cut by transaction structure (Customers × AOV × Frequency) or by product line, channel, or geography. The graph holds them all and lets you traverse whichever one answers the current question.

If your KPI structure is not MECE on any of these cuts, you have blind spots. A Decision Context Graph makes those blind spots visible.

Layer 3: the human layer (ownership)

This layer connects every metric to a person. Not a team. A specific human accountable for that number.

Every metric maps to a person. When Retention drops, the system already knows who to notify.

When Retention Rate drops, the graph does not just surface an alert. It knows to notify the CS Director. It knows the CS Director reports to the VP of Customer Success, whose North Star is Returning Revenue, which rolls up to the CEO.

Most BI tools do not know who Sarah is. A Decision Context Graph does.

Data does not make decisions. People do. The graph should know who they are.

Why this matters now

Feed an LLM your raw schema and ask "why is revenue down?" It will guess. Confidently. Often incorrectly. We covered this failure mode in detail in why LLMs should never calculate your churn rate.

No graph, no trust

The AI-for-BI tools we have evaluated cluster into two patterns. The ones that work ground the LLM in a structured graph of metrics and relationships before it runs a query. The ones that fail skip that step. We have not seen a third pattern.

LLMs hallucinate because they lack a grounding layer. They do not know how your metrics connect. They do not know which definitions your Finance team actually trusts. They do not know that "Revenue" in the Sales dashboard double-counts renewals.

A Decision Context Graph is that grounding layer. It constrains what the AI can query, validates the logic before execution, and ensures every insight is traceable to blessed definitions.

The thesis around context graphs has been articulated outside MindPalace too. Foundation Capital recently argued that the next generation of enterprise platforms will not just store data, they will capture decision traces: the context, reasoning, and precedent behind every action. Their framing focuses on workflow agents. Ours focuses on decision intelligence. The core insight is the same: data without context is noise, and context is the moat.

How a Decision Context Graph gets built

The traditional approach takes months.

You interview stakeholders. Document every metric definition. Write LookML or dbt models by hand. Map relationships in Lucidchart. Assign ownership in a spreadsheet. By the time you finish, half of it is outdated.

MindPalace takes a different approach: behavioral discovery. Instead of asking people how data should be used, we analyze how it is used.

Cartographer: six agents, one graph

Cartographer is our automated context discovery system. It ingests your query logs, the actual SQL your organization runs, and extracts the implicit business logic buried inside.

Cartographer's six specialized agents. Five extract patterns from query logs. One synthesizes them into a graph.

Agent 1: Entity Resolver

Classifies tables as Objects or Events based on behavioral patterns.

The heuristic: if a table has high UPDATE frequency relative to INSERTs, it is likely an Object (a thing that persists and changes state, like customers). If it is mostly INSERTs with rare updates, it is an Event (a thing that happens once, like orders).

def classify_entity(table_stats):
    # table_stats: any object exposing inserts, updates, total_writes
    if table_stats.total_writes == 0:
        return "OTHER"    # no write history to learn from
    insert_ratio = table_stats.inserts / table_stats.total_writes
    update_ratio = table_stats.updates / table_stats.total_writes

    if update_ratio > 0.3:
        return "OBJECT"   # customers, products, accounts
    elif insert_ratio > 0.8:
        return "EVENT"    # orders, sessions, transactions
    else:
        return "OTHER"    # mapping tables, junction tables,
                          # staging artifacts (refined further
                          # by Pattern Synthesizer)

Production uses additional signals: table size, primary-key shape, foreign-key structure, naming conventions, and column-type histograms. The snippet above captures the core intuition behind the first-pass classification.

Schema crawlers cannot make this distinction. They see tables. Cartographer sees business objects.

Agent 2: Measure Scout

Discovers metric definitions from actual aggregation patterns. When Finance runs SUM(payment_amount) WHERE status = 'completed' AND refund_id IS NULL, that is the real definition of Net Revenue, regardless of what the documentation says.

Measure Scout identifies which columns get aggregated, which filters are consistently applied, and which definitions different teams use (and where they conflict).

{
  "measure_name": "net_revenue",
  "expression": "SUM(payment_amount)",
  "filters": [
    "status = 'completed'",
    "refund_id IS NULL"
  ],
  "used_by": ["finance_team", "exec_dashboard"],
  "conflicts_with": "gross_revenue_marketing"
}

conflicts_with is not magic. Pattern Synthesizer cross-references measure definitions across teams and flags two entries when their expressions or filter sets diverge but their names suggest the same concept (Net Revenue from Finance vs Marketing's Revenue dashboard). The conflict surfaces for human review, not silent reconciliation.
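A toy first pass at what Measure Scout does can be sketched in a few lines. The regex approach, the `extract_measure` name, and the sample SQL are illustrative only; the real agent works over parsed SQL, not string patterns:

```python
import re

def extract_measure(sql: str) -> dict:
    """Toy sketch: pull the aggregate expression and WHERE predicates
    out of a single SELECT. A production scout parses the query
    properly instead of pattern-matching strings."""
    agg = re.search(r"\b(SUM|COUNT|AVG|MIN|MAX)\s*\(([^)]*)\)", sql, re.I)
    where = re.search(r"\bWHERE\b(.*?)(?:\bGROUP\b|\bORDER\b|$)", sql, re.I | re.S)
    filters = []
    if where:
        filters = [f.strip() for f in re.split(r"\bAND\b", where.group(1), flags=re.I)]
    return {"expression": agg.group(0) if agg else None, "filters": filters}

sql = """SELECT SUM(payment_amount)
         FROM payments
         WHERE status = 'completed' AND refund_id IS NULL"""
m = extract_measure(sql)
# m["expression"] -> "SUM(payment_amount)"
# m["filters"]    -> ["status = 'completed'", "refund_id IS NULL"]
```

Run over thousands of logged queries, even this crude pass surfaces which expression-plus-filter combinations a team actually treats as "the" metric.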

Agent 3: Relationship Grapher

Maps JOIN patterns to find the "golden paths" between entities. Foreign keys are often missing, outdated, or wrong. Relationship Grapher ignores schema definitions and instead maps relationships based on actual usage frequency.

If analysts consistently join customers.id = orders.customer_id, that is a confirmed relationship, regardless of whether a foreign key constraint exists. The agent also detects dangerous joins: paths that cause fan-out (row multiplication) and lead to inflated metrics.
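The core of this agent can be sketched as a frequency count over join predicates. The `golden_paths` helper, the threshold, and the sample log are hypothetical; a production grapher would also weigh recency, user diversity, and fan-out effects:

```python
import re
from collections import Counter

def golden_paths(query_log, min_count=3):
    """Count equality join predicates across a query log; joins seen
    often enough become 'golden' edges, regardless of declared FKs."""
    joins = Counter()
    pattern = re.compile(r"(\w+\.\w+)\s*=\s*(\w+\.\w+)")
    for sql in query_log:
        for left, right in pattern.findall(sql):
            joins[tuple(sorted((left, right)))] += 1  # direction-agnostic
    return {edge: n for edge, n in joins.items() if n >= min_count}

log = ["SELECT * FROM customers JOIN orders ON customers.id = orders.customer_id"] * 5
log += ["SELECT * FROM orders JOIN refunds ON orders.id = refunds.order_id"]  # seen once
print(golden_paths(log))
# {('customers.id', 'orders.customer_id'): 5}
```

The rarely-used `orders` to `refunds` join never crosses the threshold, so it stays off the golden-path list until usage proves it out.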

Agent 4: Temporal Analyst

Identifies time dimensions and granularity patterns. Every business has temporal logic: fiscal quarters vs calendar quarters, weekly cycles, monthly reporting cadences.

Temporal Analyst discovers which timestamp columns are authoritative, what granularity makes sense for each metric, and how time zones are handled (or mishandled).
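One way to sketch granularity inference, assuming timestamps have already been extracted from query results: look at the median gap between consecutive observations. The thresholds and the `infer_granularity` name are illustrative:

```python
from datetime import date, timedelta
from statistics import median

def infer_granularity(timestamps):
    """Guess a metric's natural grain from the median gap (in days)
    between consecutive observations."""
    ts = sorted(timestamps)
    gaps = [(b - a).days for a, b in zip(ts, ts[1:])]
    m = median(gaps)
    if m <= 1:
        return "daily"
    elif m <= 7:
        return "weekly"
    else:
        return "monthly"

daily = [date(2026, 4, 1) + timedelta(days=i) for i in range(30)]
print(infer_granularity(daily))  # daily
```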

Agent 5: Segment Detector

Finds the dimensions that matter for slicing data. When analysts run queries, they GROUP BY certain columns repeatedly: region, customer_tier, product_category, channel. These are the segments that matter to the business.

Segment Detector identifies statistically useful dimensions, the ones that actually explain variance, and ignores noise.
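One standard way to score "explains variance" is eta-squared: the share of a measure's variance accounted for by grouping on a dimension. A minimal sketch with invented rows (the real detector runs over warehouse-scale query results):

```python
from statistics import mean, pvariance

def variance_explained(rows, dim, measure):
    """Eta-squared: between-group variance over total variance.
    Near 0 means the dimension is noise; near 1 means it matters."""
    values = [r[measure] for r in rows]
    total = pvariance(values)
    if total == 0:
        return 0.0
    grand = mean(values)
    groups = {}
    for r in rows:
        groups.setdefault(r[dim], []).append(r[measure])
    between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups.values()) / len(values)
    return between / total

rows = [
    {"region": "NE", "tier": "A", "revenue": 100},
    {"region": "NE", "tier": "B", "revenue": 100},
    {"region": "SW", "tier": "A", "revenue": 200},
    {"region": "SW", "tier": "B", "revenue": 200},
]
print(variance_explained(rows, "region", "revenue"))  # 1.0 -> region explains everything
print(variance_explained(rows, "tier", "revenue"))    # 0.0 -> tier is noise
```

Dimensions with high scores become graph-blessed drill-down paths; the rest are pruned.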

Agent 6: Pattern Synthesizer

Connects the dots. Takes outputs from the other five agents and synthesizes them into a coherent Decision Context Graph. Entities become nodes. Relationships become semantic edges. Metrics become a hierarchy. Segments become valid drill-down paths. Conflicts get flagged for human review.

Four hours, not four months

The result: a complete Decision Context Graph foundation in approximately four hours.

Traditional approach                 Time
Stakeholder interviews               4 weeks
Metric documentation                 4 weeks
Relationship mapping                 2 weeks
Implementation                       4 weeks
Total                                ~4 months

Cartographer approach                Time
Query log ingestion                  30 minutes
Agent processing (5,000+ queries)    3 hours
Human validation                     1 to 2 days
Total                                ~4 hours + validation

This is not a guess or a template. It is a map built from your organization's actual behavior.

The KPI tree: strategy made visible

At the heart of every Decision Context Graph is a KPI tree. A hierarchical structure that decomposes your North Star metric into contributing drivers.

This is not new. KPI trees are core to how McKinsey, BCG, and Bain teach business diagnosis. The problem is that in most organizations, the tree lives in a PowerPoint, drawn once for a strategy offsite, then forgotten. We are making the analyst's whiteboard executable: every node connected to live data, every relationship validated, every owner mapped.

North Star decomposition

Every organization has one number that matters most.

  • E-commerce: GMV or Revenue
  • SaaS: ARR or MRR
  • Fintech: AUM or Transaction Volume
  • Marketplace: Take Rate × GMV

The graph starts at the North Star and decomposes downward. Each level answers "what drives this metric?" Below is the same Revenue cut by transaction structure rather than customer source: Customers × Average Order Value × Frequency. Same metric, different question.

Revenue ($12.4M, +4%)
├── Customers (45.2K, +6%)
│   ├── New (8.3K, +12%)
│   └── Returning (36.9K, -4%) ⚠
├── Average Order Value ($127, -2%)
│   ├── Price ($42, -2%)
│   └── Quantity (3.0, 0%)
└── Frequency (2.1x, +5%)

When Revenue growth slows, you do not guess. You traverse the tree. Returning Customers is down 4 percent. That is the driver. Drill into Returning Customers and find that Retention Rate dropped. Drill into Retention and find that Onboarding Completion cratered last month. Causality becomes visible.
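That traversal can be sketched as a search over the metric tree for the steepest decline. The node values mirror the example tree above; the Retention Rate figure and the `worst_driver` helper are illustrative, not product output:

```python
class Metric:
    """A node in the KPI tree: a name, a period-over-period change,
    and the child metrics that decompose it."""
    def __init__(self, name, change_pct, children=()):
        self.name = name
        self.change_pct = change_pct
        self.children = list(children)

def worst_driver(node, path=()):
    """Return (change, path) for the steepest decline anywhere in
    the tree: traverse, don't guess."""
    path = path + (node,)
    best = (node.change_pct, path)
    for child in node.children:
        best = min(best, worst_driver(child, path), key=lambda t: t[0])
    return best

revenue = Metric("Revenue", 4, [
    Metric("Customers", 6, [
        Metric("New", 12),
        Metric("Returning", -4, [Metric("Retention Rate", -9)]),  # -9 is invented
    ]),
    Metric("Average Order Value", -2, [Metric("Price", -2), Metric("Quantity", 0)]),
    Metric("Frequency", 5),
])
change, path = worst_driver(revenue)
print(" -> ".join(f"{m.name} ({m.change_pct:+d}%)" for m in path))
# Revenue (+4%) -> Customers (+6%) -> Returning (-4%) -> Retention Rate (-9%)
```

The headline number is still green, but the traversal walks straight past it to the decaying branch underneath.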

MECE enforcement

The graph validates that whichever decomposition you traverse is MECE on its own terms.

Mutually Exclusive check: do the children at each level use mutually exclusive sets? On the customer-source cut, do "New Revenue" and "Returning Revenue" double-count any transaction?

Collectively Exhaustive check: do the children sum (or multiply, depending on the cut) to the parent? Is there a category nobody mapped?

If the structure fails MECE on any cut, the graph flags it. Imagine a company that decided to track Revenue by source but only built two buckets: New and Returning. They forgot Expansion. The graph would surface this:

MECE violation detected (illustrative)

"New Revenue" + "Returning Revenue" = $11.8M. "Total Revenue" = $12.4M. Gap: $600K unattributed revenue. Suggested fix: add "Expansion Revenue" node for upsells.

That gap is exactly the kind of blind spot most dashboards hide. A complete decomposition would include the third bucket. The diagram earlier in this post shows the corrected structure.

Most dashboards hide these gaps. A Decision Context Graph surfaces them.
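On an additive cut, the collectively-exhaustive check reduces to a tolerance test on the children's sum. A minimal sketch using the illustrative figures above (the split between the two buckets is invented; multiplicative cuts need a product check instead):

```python
def check_collectively_exhaustive(parent_total, children, tolerance=0.005):
    """Flag a MECE gap when child metrics fail to sum to the parent,
    allowing a small tolerance for rounding and timing skew."""
    gap = parent_total - sum(children.values())
    if abs(gap) <= tolerance * parent_total:
        return None  # decomposition closes; no violation
    return {"gap": gap, "message": f"${gap:,.0f} unattributed"}

violation = check_collectively_exhaustive(
    parent_total=12_400_000,
    children={"New Revenue": 4_300_000, "Returning Revenue": 7_500_000},
)
print(violation)
# {'gap': 600000, 'message': '$600,000 unattributed'}
```

The mutual-exclusivity half of the check is the harder one: it needs row-level membership tests, not just sums.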

Strategy layers: role-based views

The KPI tree looks different depending on who is looking.

  • CEO view: full tree from Revenue down. Every metric, every driver, every owner.
  • VP Sales view: subtree starting at New Revenue. They see their North Star and everything that drives it.
  • Regional Manager view: subtree filtered to their geography. Northeast New Customers and the drivers beneath.

Everyone sees how their work connects to the top line. Alignment happens through structure, not through endless re-alignment meetings.

What you can do with a Decision Context Graph

Once the graph exists, it becomes the foundation for everything.

KPI Drilldown

Type any metric. Get a complete data story in 30 seconds. The system uses the graph to generate a four-act narrative.

The four-act narrative. Every metric becomes a story, not a chart.

An example from our reference e-commerce dataset. Type "Revenue":

  • Act 1: "Revenue is up 8.2 percent this month, driven by strong Electronics performance."
  • Act 2: "São Paulo drives 32 percent of revenue, up 12 percent. Rio de Janeiro is down 23 percent."
  • Act 3: "Strong correlation (r=0.87) between order campaigns and revenue growth."
  • Act 4: "INSIGHT: Electronics shows 2x average growth. ALERT: Rio revenue 23 percent below average. OPPORTUNITY: Order volume campaigns drive revenue boost."

No dashboard clicking. No analyst requests. Just type a metric and get the story.

Deep Analysis

When you need statistical rigor, the graph enables hypothesis testing at scale. The system suggests hypotheses based on graph patterns, then runs proper statistical analysis. ANOVA for comparing groups. Pearson correlation for relationships. Effect size for practical significance. Confidence intervals for uncertainty.

Critically, the LLM does not do the math. It plans the analysis. Deterministic Python (SciPy, statsmodels) executes the calculations on row-level data. Intent is linguistic. Math is deterministic. We covered the architecture in why LLMs should never calculate your churn rate.

Anomaly detection

The graph knows what "normal" looks like. Not just for individual metrics, but for relationships between them. Revenue dropped, but New Customers is flat. The graph knows this means the problem is in Returning Revenue. It traces the path automatically and surfaces the root cause before you ask. Cross-metric anomalies are invisible to traditional alerting. They are native to a Decision Context Graph.
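A minimal sketch of that cross-metric localization, with invented numbers: the parent declined, one child is flat, so suspicion falls on the children that actually dropped. The `localize` helper and the flat-band threshold are illustrative:

```python
def localize(parent_change, child_changes, flat_band=1.0):
    """If the parent metric declined, return the children whose decline
    exceeds the flat band (in pct points): they are the suspects.
    Children inside the band are flat and ruled out."""
    if parent_change >= 0:
        return []  # nothing to explain
    return [name for name, chg in child_changes.items() if chg < -flat_band]

suspects = localize(
    parent_change=-3.0,                       # Revenue down 3%
    child_changes={"New Revenue": 0.2,        # flat: ruled out
                   "Returning Revenue": -6.1, # the problem
                   "Expansion Revenue": 0.5},
)
print(suspects)  # ['Returning Revenue']
```

Applied recursively down the graph, the same rule traces an alert from the North Star to the specific driver, and from the driver to its owner.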

AI co-pilot (coming soon)

The trust property is already in the system today. Cartographer grounds every query in blessed definitions. The KPI tree enforces MECE structure. Deep Analysis runs deterministic statistics in Python. Every result carries an audit trail back to its plan and assumptions.

The co-pilot is the conversational surface on top of that foundation. Ask "why did revenue drop last week?" and get an answer that is traced through the KPI tree, validated against the graph, statistically tested on real data, and attributed to an owner. The LLM plans. The graph validates. Deterministic systems calculate. Trust comes from the existing layers, not from the chat interface.

Decision context graph vs traditional tools

Capability                  Dashboard   Data catalog   Semantic layer   Decision context graph
Shows current metrics       yes         no             no               yes
Maps table relationships    no          partial        partial          yes (behavioral)
Defines metric logic        no          no             yes              yes (discovered)
Enforces MECE structure     no          no             no               yes
Tracks ownership            no          no             no               yes
Enables causal traversal    no          no             no               yes
Grounds AI safely           no          no             partial          yes
Updates as data evolves     partial     no             partial          yes

A Decision Context Graph is not a replacement for your warehouse, your BI tool, or your semantic layer. It is the connective tissue that makes them all work together with causality, ownership, and trust.

The era of the context graph

Dashboards were built for a world where data was scarce and humans did the synthesis. That world is gone. The structure that makes data meaningful is the relationships, ownership, and causality that turn metrics into decisions. That structure is the Decision Context Graph.

VCs are calling this AI's largest infrastructure opportunity. We agree, but not because of hype. The graph is the missing layer between raw data and intelligent action. It is what makes AI trustworthy. It is what makes dashboards obsolete. It is what turns organizations from data-rich and insight-poor into genuinely decision-intelligent.

We are building it. If you are tired of being the Human API between your warehouse and your executives, take a look at the product or read more about Decision Intelligence.
