I don’t know what you did last summer.

But I have a growing suspicion that parts of it weren’t real.

The perfect Amalfi Coast restaurant you saw online that felt oddly flat in person. The beach that didn’t match the version you bought into while you scrolled. You weren’t chasing a memory; you were chasing a synthetic edit.

The media and the internet used to show you what happened. Now they show you what could have happened.

For a decade, we dismissed this as harmless influencer theater and editorial slant. That phase is over.

This total collapse of authenticity is more than a cultural glitch; it is an infrastructure failure. It’s also the opportunity to birth a new category of valuable tools.

I’m starting a series of periodic notes on the infrastructure shifts I’m tracking and the white space I see emerging. This is the first. I’m sending it to a few builders and investors I’ve talked shop with; hopefully it’s a useful lens for the gaps we’re all seeing at the frontier.

We’ve entered the era where the cost of generating a lie has dropped to zero, while the cost of verifying it is scaling exponentially. This isn't just about being catfished. It’s about kinetic consequences.

In the last few weeks, AI-generated footage of the Burj Khalifa engulfed in flames and Tel Aviv apartment blocks being "flattened" by strikes amassed over 145 million views before the first official debunking could hit the wire. 

The distortion has become so pervasive that even our verification tools are hallucinating. Just last week, when Netanyahu posted a video from a Jerusalem cafe to debunk rumors of his own death, the Grok chatbot[1] flagged it as "100% deepfake," citing a "static coffee level" and "unnatural lip sync."

This is the "Wag the Dog" moment, but without the Hollywood soundstage[2]. It’s digital-first warfare designed to trigger action, not just attention.

At some point, and I worry sooner than we think, a fabricated signal will trigger a bank run, a riot, or a retaliatory strike. 

When truth is slower to produce than fiction is to deploy, the system doesn't just degrade, it fails.

This is no longer a philosophical debate; it is a structural crisis for any system that depends on a shared reality. Like a functioning society. Assuming that's still the goal. But this isn’t a new problem; the tools of deception have just escalated. Historically, there have been three distinct generations of arbitrating reality, and all of them have hit their breaking point.

The latest set of assumptions about how to combat AI-fueled reality-bending leans, unsurprisingly, on these same structures. New laws and regulations (Gen 1) lag, as they always have. Generative-media watermarking from the AI platforms (Gen 2) is cute idealism, since their incentives are tied to creation, not verification[3].

All of these architectures sometimes work, often fail, and frequently contradict one another. Governments have intelligence services to vet reality; the individual has nothing. In a world where information is this malleable and absolute truth is fleeting, “common sense” doesn’t help you, and you cannot wait for a referee to tell you what is real.

It’s time to build the next architecture. Gen 4: Personal Probabilistic Truth.

This system posits two beliefs:
1. Society is (sadly) in a “post-truth era.”
2. We need to empower the individual.

This shift from external trust to personal verification is the birth of a new opportunity category that I (annoyingly) call Truth as a Service (TaaS).

Unlike a centralized authority, TaaS moves the locus of control away from the platform and toward the individual.

These systems are personal watchdogs that live with you digitally and cross-reference the previous three generations in real time. They don’t hand you an absolute answer; they give you a confidence interval for how much you should trust what you see.
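To make the "confidence interval" idea concrete, here is a toy sketch of how such a watchdog might fuse independent verification signals into a single trust score. This is purely illustrative: `fuse_trust_signals` and the likelihood-ratio values are my own hypothetical construction (naive-Bayes fusion under an independence assumption), not anyone's shipping product.

```python
import math

def fuse_trust_signals(prior: float, likelihood_ratios: list[float]) -> float:
    """Naive-Bayes fusion of independent verification signals.

    prior: baseline probability the media is authentic (e.g. 0.5).
    likelihood_ratios: for each signal, P(signal | authentic) / P(signal | fake).
    Returns the posterior probability that the media is authentic.
    """
    # Work in log-odds so many weak signals combine without underflow.
    log_odds = math.log(prior / (1 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    return 1 / (1 + math.exp(-log_odds))

# Hypothetical signals: a valid provenance signature is strong evidence of
# authenticity (LR = 50), while a lip-sync anomaly flag is weak evidence of
# fakery (LR = 0.4). Posterior odds = 1 * 50 * 0.4 = 20, so p = 20/21.
score = fuse_trust_signals(prior=0.5, likelihood_ratios=[50.0, 0.4])
print(f"confidence it's real: {score:.0%}")  # → 95%
```

The design point is that no single detector gets to say "real" or "fake"; each one only nudges a running probability up or down, which is exactly the difference between a referee and a confidence interval.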
The opportunity for founders is to build three layers:

  1. Provenance technologies and standards: capture-time cryptographic signing (the "Axon" model for the smartphone). Ways to mutually trust that the raw media is what it says it is. The mutual-TLS handshake for reality.

  2. Detection: independent identification of anomalies, across a broad range of media and behaviors, that humans can't catch with common sense alone.

  3. End-user interface: the interface into "always-on" hardware (phones, AR glasses, earbuds, or OS-level integrations) that captures the context of what the individual is perceiving and communicates back how suspicious they should be. The "Jiminy Cricket" that acts less like your conscience and more like a personal "truthiness" helper.
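The provenance layer (1) is the most mechanical of the three, so here is a minimal sketch of capture-time signing. This is a stdlib-only toy, not the C2PA standard: `DEVICE_KEY`, `sign_at_capture`, and `verify` are hypothetical names, and a real system would use hardware-backed asymmetric keys with a certificate chain rather than a shared HMAC secret.

```python
import hashlib
import hmac
import json
import time

# Stand-in for a key sealed in the camera's secure enclave. A real provenance
# scheme would sign with an asymmetric device key; HMAC keeps this sketch
# self-contained.
DEVICE_KEY = b"hypothetical-enclave-key"

def sign_at_capture(media_bytes: bytes) -> dict:
    """Bind a hash of the raw media to the capture moment, signed by the device."""
    manifest = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "captured_at": time.time(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(media_bytes: bytes, manifest: dict) -> bool:
    """Any edit to the pixels or to the manifest breaks the signature."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    if hashlib.sha256(media_bytes).hexdigest() != claimed["sha256"]:
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

photo = b"\x89PNG...raw sensor bytes..."
manifest = sign_at_capture(photo)
assert verify(photo, manifest)                # untouched media checks out
assert not verify(photo + b"edit", manifest)  # a one-byte edit fails
```

This is the "handshake" framing in miniature: the consumer doesn't have to trust the platform that hosted the file, only the math binding the pixels to the moment of capture.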

Putting on my VC hat, getting this right is going to be especially hard, for many reasons. Founders building category-defining TaaS businesses will have to solve two structural tensions that aren't technical, but human:

First: The Luxury Reality Gap. 

If TaaS requires significant compute and capital to deploy, we risk creating a two-tiered reality where "Ground Truth" is a gated asset: a world where a Tier-1 fund or a high-net-worth individual has the tools to interrogate the world while everyone else is left to be more easily lied to. Reality becomes a luxury good.

The Challenge / Opportunity: To be a global protocol, TaaS cannot be a luxury filter. It is bad for society and it shrinks your TAM. How do you commoditize a process that is inherently expensive? The winning founder doesn't just build a better detector; they solve the unit economics of trust.

Second: The Apathy Gap.

Does anyone actually care? We’ve seen this movie before with privacy. Philosophically, everyone claims to value it; operationally, they trade it for convenience and a "good enough" user experience every single time. If people would rather live in a selectively curated bubble than face a difficult ground truth, TaaS isn't just a technical hurdle; it’s a human one.

The Challenge / Opportunity: To find the wedge, you can’t sell "Truth" as a virtue; you sell certainty as a requirement. The winning founders find the specific use case where truth-seeking is most urgent and the cost of being wrong is existential.

A handful of teams are going to build really impactful and valuable businesses in TaaS, and we are just starting to see founders pitch their ideas. I can’t wait to back them.

— AF

[1] Seems like Grok should probably stick to what it does best: synthetic pornography.

[2] A hoot of a movie. Dustin Hoffman. Robert De Niro. Hire a director to film a fake war in order to distract from a presidential sex scandal and secure a re-election. Wait…life never imitates art, right?

[3] And more importantly, you don’t ask cigarette companies to lead cancer research.
