A pioneering researcher had developed a forensic methodology for mapping the causal architecture of complex societal problems — rigorous, novel, and entirely without software. TDG came in to build the tool her vision required, then stayed embedded long enough to discover what it actually needed. Fifteen months and three contracts later, the platform is live with her first client, the AI intelligence layer is ahead of schedule, and the endgame — an AI-native research practice where institutional memory compounds across every project — is now in sight.
The client is a researcher and educator whose forensic methodology for analyzing complex societal problems is unlike anything in the market. The approach maps entangled causal relationships across time, discipline, and scale — principal factors, contributing causes, cross-connections, epistemic confidence levels — in a visualization so mathematically demanding that the first question wasn't whether TDG could build it. It was whether it could be built at all.
The answer was yes. But the more important answer came later.
Building something genuinely new requires a different kind of partnership than building something that already has a category. There's no spec sheet for a tool that doesn't exist. There's a vision, a methodology, a set of needs the client can articulate — and a larger set she can't yet, because you can't fully know what a tool needs to do until you've used it with real clients on real work.
TDG came in to build the vision. We stayed to discover what it actually required. And fifteen months in, we're on the threshold of something that hasn't existed before.
The philosophy TDG brings to every novel product engagement: the version you ship on day one is only the version you could imagine before you built it. The real product emerges twelve months later, shaped by lived experience, client signals, and the workflows nobody could have anticipated from the outside.
When TDG started this engagement, the framing was simple: "I'll build this vision for you — but this is just step one. This is what we can imagine today. The best part will be twelve months after we ship, when we've evolved the product so far you'll never recognize what we pushed out on day one." That's exactly what happened.
Before writing a line of code, TDG ran a full discovery engagement to determine whether the visualization — mathematically complex, nothing like it anywhere on the market — was even buildable to spec. It was. That discovery phase established the technical foundation and the trust that made everything that followed possible.
A purpose-built canvas application for building, navigating, and presenting complex causal factor graphs. Radial visualization, factor tree, timeline view, multi-lens filtering, epistemic confidence tagging, undo/redo, full-text search. Designed from scratch to support a methodology no off-the-shelf tool could accommodate. Currently in active use on the client's first paid research engagement.
The second contract wasn't scoped from a spec sheet. It emerged from use — from the moments the tool met real clients and revealed what it still needed. Multi-tenant access, account roles, new editing modes, rapid table construction, sensemaking workflows. Each feature discovered together: TDG hearing the signals, showing what was possible, then building from the client's lived experience and domain wisdom. This is what embedded partnership looks like in practice.
The AI vision wasn't in the original scope. It emerged from fifteen months of embedded partnership — and from watching the frontier closely enough to know when the tools had finally caught up to the ambition. When the engagement began, what's now possible simply wasn't. The models weren't there. The infrastructure wasn't there. TDG tracked the space, held each new development up against the research methodology, and identified the moment the match was close enough to build.
The result is an Analyst and Builder agent mode that treats the research dataset the way Cursor treats a codebase — the AI already knows everything the project knows before the conversation starts. Researchers can interrogate their own causal graphs directly, ask questions of the data, surface cross-connections, and propose new factors with full project context loaded automatically. Live API integration is running. Data connection is a month ahead of schedule.
The full vision — now in sight — is a central nervous system for the entire research practice. Diverse inputs (papers, ethnographies, hard data) feed into the Forensic Tool. The tool produces findings. The intelligence layer seeds learnings across current and future projects. Institutional knowledge compounds. Every engagement makes the next one smarter.
This is what it means to build a genuinely AI-native organization — not AI bolted onto existing workflows, but AI woven into how the work itself is structured.
The Forensic Tool CMS is shipped and running in active use on the client's first paid research engagement, not a prototype. The intelligence layer is ahead of schedule, and a fifteen-month engagement is extending to twenty-four.
From first conversation to shipped product in eight months — a purpose-built platform for a methodology that had no software equivalent anywhere on the market.
In active use on the client's first paid research engagement. Not a prototype — a running platform doing real work with real clients.
AI-native agent modes — Analyst, Builder — connecting the research dataset directly to the model. Live API integration running, data connection a month ahead of schedule.
A fifteen-month engagement extending to twenty-four, with a defined roadmap to a fully AI-native research practice. Three contracts earned. Each one expanding the scope of what's possible.