Adam Mosseri recently posted about how Instagram is evolving to arbitrate what is, or is not, truthful in a generative AI era. Om Malik’s analysis of that post is well worth reading, in particular his framing of Instagram’s movement into what he calls a ‘trust graph’ era:
[Instagram] has moved from the social graph era, when you saw posts from people you knew, to the interest graph era, when you saw what algorithms though [sic] you will like. It is now entering a *trust graph* era, in which platforms arbitrate authenticity. And it is being dragged into this new era. [^ Emphasis added.]
…
AI is flooding the system, and feeds are filling with fakes. Visual cues are no longer reliable. Platforms will verify identities, trace media provenance, and rank by credibility and originality, not just engagement.
Malik’s framing is useful not simply because it captures a product evolution, but because it gestures toward a deeper shift, one whose implications extend well beyond Instagram as a platform. Namely, platforms are positioning themselves as arbiters of authenticity and credibility in an environment where traditional signals of truth are increasingly unstable.
There are already efforts to attest that certain content has not been made using generative systems. Notwithstanding the visibility Meta possesses to address such problems at scale, what is becoming more salient is not merely a technical response to synthetic media, but a broader epistemic and ontological shift, one that increasingly resembles Jean Baudrillard’s account of simulacra and of life lived in a state of simulation:
Simulacra are copies that depict things that either had no original, or that no longer have an original. Simulation is the imitation of the operation of a real-world process or system over time.
This framing matters because efforts to ground authenticity and truth are predicated on the existence of an original, authentic referent that can be recovered, verified, or attested to.
Generative AI content can, arguably, be said to be largely divorced from the ‘original’: once source material is vectorized and statistically weighted, the ‘original’ may persist only as a normalized residue within a lossy generative process derived from the world. Critically, generative systems do not simply remix content; they dissolve the very reference points on which provenance and authenticity regimes depend. And as generative LLMs (and Large World Models) are increasingly taken up, and used to operate in the world in semi-autonomous ways rather than simply to represent it, will they not constitute an imitation of the operation of real-world processes or systems themselves?
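To make the lossiness claim concrete, consider the toy sketch below. It is only an illustration, not a description of any production training pipeline: the `embed` function is a made-up stand-in for a real tokenizer and embedding model, and a simple mean stands in for the far more complex statistics of training. The point it demonstrates is narrow but real: collapsing many sources into shared weights is a many-to-one operation, so no individual source can be reconstructed from the result.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 8) -> np.ndarray:
    """Hypothetical stand-in for a real embedding model: derives a
    deterministic vector from a stable hash of the text."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).normal(size=dim)

# Three 'originals' entering a hypothetical training pipeline.
documents = ["the cat sat", "the dog ran", "the bird flew"]
vectors = np.stack([embed(doc) for doc in documents])

# Statistical weighting collapses them into shared parameters.
# A mean is a crude stand-in for what training actually does; the
# point is only that the aggregation is many-to-one.
weights = vectors.mean(axis=0)
print(weights)

# The aggregate carries a trace of its inputs, but it is not
# invertible: infinitely many sets of documents produce the same
# mean, so no 'original' can be recovered from the weights.
```

In Baudrillard’s terms, the weights are all residue and no referent: the statistics survive; the originals do not.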
This level of heightened abstraction will, to some extent, be resisted. People will seek out more conservative, more grounded, and perceptibly more ‘truthful’ representations of the world. Some companies, in turn, may conclude that it is in their financial interest to meet this market need by establishing what does, and does not, count as a ‘truthful’ representation of reality for their users.
How will companies, at least initially, try to exhibit the real? To some extent, they will almost certainly turn to identity monitoring and verification. In practice, this means shifting trust away from content itself and toward the identities, credentials, and attestations attached to published content. In this turn, they will likely be joined by some jurisdictions’ politicians and regulators; already, we see calls for identity and age verification regimes as tools to ameliorate online harms. In effect, epistemic uncertainty about content may be displaced onto confidence in identities attached to content.
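A minimal sketch of what such an attestation regime might mechanically involve, using Python’s `cryptography` library (the key handling and the bare hash-then-sign scheme are simplified for illustration; real provenance frameworks such as C2PA involve certificate chains and much richer manifests):

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Publisher side: bind an identity to a piece of content. ---
private_key = Ed25519PrivateKey.generate()  # held by the publisher
public_key = private_key.public_key()       # known/attested to the platform

media_bytes = b"...image or video bytes..."
signature = private_key.sign(hashlib.sha256(media_bytes).digest())

# --- Platform side: verify the attestation, not the content. ---
def is_attested(media: bytes, sig: bytes) -> bool:
    """Check the signature over the media hash. This proves that a
    holder of the key signed these exact bytes -- not that the bytes
    depict anything real."""
    try:
        public_key.verify(sig, hashlib.sha256(media).digest())
        return True
    except InvalidSignature:
        return False

print(is_attested(media_bytes, signature))        # True
print(is_attested(b"tampered bytes", signature))  # False
```

The comment in `is_attested` is the crux: cryptographic verification establishes who published, not what is true, which is precisely the displacement described above.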
This convergence between platform governance and regulatory activity may produce efforts to stabilize conservative notions of truth in response to emergent media creation and manipulation capabilities. Yet such stabilization may demand heightened digital surveillance systems to govern and police identity, age, and the generation and propagation of content. The mechanics of trust, in other words, risk becoming the mechanics of oversight, inviting heightened intrusions into private life and a continued erosion of privacy in digital settings.
Regardless of whether the AI bubble pops, the generative AI systems that are further throwing considerations of truth into relief are here to stay. What remains unsettled is not whether platforms will respond, but how different jurisdictions, companies, and regulators will choose to define authenticity, credibility, and trust in a world increasingly composed of simulacra and simulations. Whether the so-called trust-graph era ultimately serves users, or primarily reasserts institutional authority under conditions of ontological and epistemic uncertainty, will remain one of the more intriguing technology policy issues as we move into 2026 and beyond.
