Case study · TPC-H (Snowflake) · May 10, 2026 · 4 min read

How MindPalace Handles Metrics Without a Timeline

About 30 percent of business questions have no time dimension. BI tools force a fake one or refuse to render. The engine emits snapshot SQL across multi-hop joins on its own.

Headline finding

9.7s end-to-end: snapshot KPIs resolved across a multi-hop join chain.

About thirty percent of the questions a business actually wants to ask have no time dimension.

How many active suppliers do we have right now? Which customer segments are largest? How is our inventory distributed across regions? How many open invoices live in the warehouse today? These are snapshot questions. There is no "compared to last month." There is just the current state.

Most BI tools struggle here. The metric editor wants a date column. The chart builder expects a trend. If the user picks a metric without a time series, the tool either prompts them to add a fake date or refuses to render. We have seen teams add a column called as_of and hard-code today's date just to get the chart to draw.
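For concreteness, here is a minimal sketch of that workaround. The metric name is illustrative, not taken from any team's warehouse; the point is the fabricated date column.

    -- The snapshot metric has no real time dimension, so a fake one is
    -- bolted on to satisfy the chart builder. "supplier" is the TPC-H
    -- table; "active_suppliers" is an illustrative metric name.
    SELECT
        CURRENT_DATE AS as_of,              -- hard-coded "today", carries no information
        COUNT(*)     AS active_suppliers    -- the actual snapshot metric
    FROM supplier;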

This case study is about a different default. The MindPalace engine emits snapshot SQL when the data has no temporal column, joins across multiple hops in the data context graph to bring the answer together, and runs on Snowflake without any configuration.

We ran it on TPC-H, the standard decision-support warehouse benchmark.

The setup

TPC-H is a public synthetic dataset. It has suppliers, nations, regions, customers, parts, and orders. The schema is normalized. To answer a question like "how many supplier-customer pairs exist per customer market segment," you have to route supplier through nation to customer; to break the same pairs down by supplier region, a fourth table, region, joins the chain.
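The engine's emitted SQL is not reproduced in this write-up, but a hand-written sketch of the segment-breakdown query, assuming pairs are formed by a supplier and a customer sharing a nation, looks roughly like this:

    -- Sketch, not the engine's verbatim output. Grain: one row per
    -- supplier-customer pair, pairs assumed to share a nation.
    SELECT
        c.c_mktsegment     AS customer_market_segment,
        COUNT(s.s_suppkey) AS pair_count
    FROM supplier s
    JOIN nation   n ON s.s_nationkey = n.n_nationkey
    JOIN customer c ON c.c_nationkey = n.n_nationkey
    GROUP BY c.c_mktsegment
    ORDER BY pair_count DESC;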

The KPI we asked the engine to resolve was supplier-customer pair count, grouped by customer market segment and by supplier region. No time dimension. No date column. Just the current snapshot.

What the engine did

Cartographer had scanned the workspace and built the data context graph. It knew that supplier joins to nation via s_nationkey. It knew nation joins to region via n_regionkey. It knew customer joins to nation separately via c_nationkey. Two separate multi-hop paths.

KPI Drilldown resolved the plan: grain supplier.s_suppkey, measure COUNT(supplier.s_suppkey), breakdowns by customer market segment and supplier region. The grounding step found no temporal column on supplier. The engine flipped to snapshot mode. No fake date column was added. No human configured this.

The emit step rendered two parallel multi-hop joins: supplier through nation to customer for the segment breakdown, and supplier through nation to region for the region breakdown. Two SQL queries went to Snowflake. Both returned.
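Again as a hand-written sketch rather than the engine's verbatim output, the second query plausibly looks like this. It keeps the same pair grain and adds the extra hop from nation to region:

    -- Sketch of the region-breakdown query. Same pair grain; the
    -- extra hop nation -> region resolves the supplier's region name.
    SELECT
        r.r_name           AS supplier_region,
        COUNT(s.s_suppkey) AS pair_count
    FROM supplier s
    JOIN nation   n ON s.s_nationkey = n.n_nationkey
    JOIN region   r ON n.n_regionkey = r.r_regionkey
    JOIN customer c ON c.c_nationkey = n.n_nationkey
    GROUP BY r.r_name
    ORDER BY pair_count DESC;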

End-to-end time: 9.7 seconds.

Figure: Two multi-hop join paths the engine emitted automatically, both routed through the nation table.

The results

Customer market segment    Count
BUILDING                   2,481
FURNITURE                  2,172
MACHINERY                  2,007
HOUSEHOLD                  1,738
AUTOMOBILE                 1,602

Supplier region    Count
AMERICA            2,036
MIDDLE EAST        2,019
ASIA               2,003
EUROPE             1,987
AFRICA             1,955

Total: 10,000 supplier-customer pairs.

The honest read on the data

There is no business story in these numbers. TPC-H is generated to be uniform on purpose. Suppliers are distributed roughly evenly across five regions. Customer segments come in close to the same volume. No segment is concentrating. No region is failing. The data is a synthetic benchmark, not a real business.

That is why this case study is framed as a capability demonstration rather than a discovery. Three things matter operationally, none of which are obvious from the totals.

The engine ran on Snowflake, not the Postgres demo workspace. The same engine that powers Net New MRR analysis on Postgres emits dialect-correct Snowflake SQL.

The engine handled snapshot mode automatically. No date column needed. No configuration. A typical BI metric layer would have rejected this metric or required a workaround.

The engine resolved multi-hop joins from the data context graph. Two different join paths were needed and the engine emitted both without being asked.

Why this matters

The as_of workaround from the introduction exists because the BI tool demanded a date. Most BI tools have a strong opinion about what a metric should look like: aggregation over time, sliced by a dimension, charted as a line or a bar. Snapshot KPIs do not fit. So they fall through the cracks. They get a separate dashboard. Or they live in a spreadsheet.

For a workspace that wants every metric to be queryable the same way, the cracks matter. The data context graph captures the relationships once, and both temporal and snapshot metrics go through the same plan, emit, execute path. The user does not pick a mode. The data tells the engine what mode to run in.

A note on the data

TPC-H is a public synthetic benchmark dataset, generated to be uniformly distributed across regions and segments. The figures above are real outputs from the live engine running against TPC-H on Snowflake on 2026-05-10. The numbers are not from a customer and do not represent a real-world distribution.

If you want to see this run against your warehouse, including snapshot metrics like active suppliers, current inventory, or open invoices, request a demo.