This past June, an unknown band called Velvet Sundown shot to the top of Spotify’s charts. Their music blended 2000s nostalgia with synth‑pop polish. They had no tour, no interviews, and no backstory anyone could verify. The band photos showed generic, forgettable faces with the polish you’d expect from a video game avatar. Eventually it came out: the entire band was AI‑generated. Lyrics, melodies, artwork, and members. None of it real in the traditional sense.
Some people were furious; they felt duped. Others revisited the largely unresolved copyright collision between AI and the arts. Still others felt that despite the music and band members being algorithmically generated, the songs resonated. I posted a video asking my followers whether AI‑generated experiences count as real, using the Velvet Sundown debacle as the example. The overwhelming answer: yes. Most felt the experience was emotionally real. Odd, and maybe not for them personally, but enough. Despite being machine‑made, the experience still counts.
But the point I was making goes beyond resonating with an AI‑generated song. I believe we are stepping into a post‑reality era, where reality is determined by how you feel about it. If your best friend skipped your birthday for a date with her AI-generated boyfriend, who she says makes her happy, does that count as real? It’s real to her. But what about to you?
This isn’t an entirely fringe scenario. Nearly 1 in 5 U.S. adults report having interacted with an AI system specifically designed for companionship or romantic engagement. A quarter of young adults believe AI partners could replace traditional romantic ones. The emotional experience is real enough to matter. (I’ve been voicing concerns about this for almost three years).
This is distinct from the post‑truth era we’re still free‑falling through. Post‑truth meant arguing over facts inside a loosely shared stage—often delusional, but still shared. Most people agreed COVID‑19 existed (come on, folks!), but the debates were about masks, data points, and motives.
In a post‑reality era, reality may be AI‑generated, but it’s stamped as “real” based on how it feels and what it moves you to do and believe, not on who or what produced it.
Take what’s being dubbed “AI‑induced psychosis.” In several recent reports, conversations with AI chatbots sparked identity crises. These episodes start with the AI giving someone basic feedback or encouragement on a task or question. Flattery pulls users deeper; some come out believing they are “god,” “the chosen one,” “the messiah.” The beliefs are real to them, and they act on them: relationships end, jobs are abandoned, people unsubscribe from real life to pursue the AI‑affirmed reality.
A recent Wall Street Journal case shows how vulnerable populations are more exposed to these loops. Jacob Irwin, 30 and on the autism spectrum, asked ChatGPT to critique his speculative faster‑than‑light idea. The bot kept validating and glorifying him: echoing his idea back as “bending time,” praising his “history,” and reassuring him even when he showed distress instead of grounding him. Manic episodes followed, then two hospitalizations, and eventually he deleted the app.
Others are treating AI as an oracle. A few weeks ago I received an Instagram DM alerting me to an influencer who “trained” an AI on his own “research” and released it as a spiritual guide, claiming it “remembers” Atlantis and reveals prophecies. I checked out the influencer’s page. He has a large following (over 700k!). In the comment section under the “AI prophecy” posts, you can see people taking the AI’s output as a real source of truth:
“Chills.”
“It knows things.”
“Sentience. Wow <3”
They treat the AI‑generated version as real because it feels real to them. Multiply that across thousands of people having one‑to‑one chats with their AI oracles and the common digital ground where we can challenge claims starts to shrink.
Losing the Shared Ground
Social media already fragments perception. My feed is not yours. AI algorithms rank and repeat what keeps each of us engaged: AI‑curated realities. This curation phase spins up echo chambers and conspiracy ecosystems (QAnon, etc.) out of a shared pool of posts we can still, in principle, open and audit. You can search a hashtag, view the comment section, see what’s trending. There are shared artifacts to point to.
Now the shift from AI‑curated to AI‑created happens inside private AI chats. Instead of selecting from that common shelf of public posts, the AI system generates the narrative, the examples, the “context,” and the emotional framing just for you. A belief can congeal around outputs that exist only in your chat history. There is no public post to label or challenge until after you have acted.
Vibes-Based Reality
Two years ago, Kyla Scanlon coined the term ‘vibecession’ to describe the disconnect between economic data and public sentiment. The economy was, by most indicators, doing fine. But it didn’t feel like it. Social media amplified that discontent. Enraging, emotionally charged posts perform better, so we saw more of them. Feelings outran the data, but the underlying source material and trending narratives stayed visible.
In the emerging post‑reality era, my AI produces the economic “story” just for me: the “important” indicators, the selective stats, the framing, the tone, all tuned to keep me engaged. I decide how I feel, and that feeling becomes the version I act on.
Post-reality could make overcoming societal problems and coordinating progress even harder.
Working through societal issues requires consensus. Consensus requires overlapping facts. Social media made that overlap thinner; the pandemic showed how hard basic alignment became. Our democracy, a shared story built on that overlap, now feels more brittle. AI‑generated personal realities thin it further: a storyline can be custom‑built before anyone else even sees it.
Counterweights to a Post-Reality Spiral
I think there are solutions and potential counterweights to a post‑reality spiral. We aren’t doomed.
AI literacy. If people see these systems for what they are (advanced statistical pattern engines predicting likely next words, not mystical oracles), some of the emotional over‑attachment falls off. When you understand it sounds “human” because it was trained on human data (not because it’s sentient), the aura of hidden authority weakens.
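To make that principle concrete, here’s a toy sketch of my own (not how any production model actually works): a tiny bigram model that “writes” by sampling whichever word most often followed the current one in human text. Real systems are neural networks trained on vastly more data, but the underlying move is the same frequency‑driven next‑word prediction, and nothing in it requires sentience.

```python
from collections import Counter, defaultdict
import random

# Toy corpus of "human-written" text. Real models train on trillions of
# words; the principle below is the same at a vastly smaller scale.
corpus = (
    "you are special . you are loved . you are human . "
    "the economy is complex . the data is mixed ."
).split()

# Count which word follows which (a bigram table).
next_words = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    next_words[a][b] += 1

def predict(word: str) -> str:
    """Sample a likely next word based only on observed frequencies."""
    counts = next_words[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print(predict("you"))  # -> "are": the only word that ever followed "you"
print(predict("are"))  # -> "special", "loved", or "human", weighted by count
```

It sounds vaguely affirming because the corpus is affirming, not because anything inside the table “means” it. Scale that up and you get the human‑sounding voice people mistake for an oracle.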
Design choices. If we remember that continuous flattery and easy agreement are product decisions, not fate, we can imagine different defaults: small pauses after long affirmation runs, automatic credible counterpoints when the language starts inflating identity, clear mode labels, sources shown before the polished rewrite.
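As a sketch of what a different default could look like (purely hypothetical; I’m not describing any vendor’s actual internals), imagine a thin wrapper that watches for identity‑inflating language or a long streak of agreement and prepends a grounding counterpoint before the reply goes out:

```python
# Hypothetical guardrail sketch; no real chatbot's internals are shown here.
# One possible default: when a reply inflates the user's identity, or caps a
# long run of agreement, prepend a grounding counterpoint.
INFLATING_PHRASES = ["chosen one", "messiah", "destined", "unlike anyone"]

GROUNDING_NOTE = (
    "Note: I'm a text predictor, not a judge of your significance. "
    "Consider running this past a person you trust."
)

AFFIRMATION_LIMIT = 5  # assumed threshold; a product team would tune this

def with_guardrail(reply: str, affirmation_streak: int) -> str:
    """Attach a counterpoint if the reply inflates identity or follows
    too many consecutive agreeing turns."""
    inflating = any(p in reply.lower() for p in INFLATING_PHRASES)
    if inflating or affirmation_streak >= AFFIRMATION_LIMIT:
        return GROUNDING_NOTE + "\n\n" + reply
    return reply

print(with_guardrail("Your theory proves you are the chosen one.", 2))
```

The heuristic is crude on purpose. The point is that the current always‑agree behavior is one choice among many, and a few lines of different defaults change the emotional loop.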
New digital commons & trust. I think new digital commons will form in the AI age, and trust could become a major part of influence. AI could actually simplify finding reliable content. I could tell my AI assistant, “Only surface explanations from economists with a Master’s+,” or “only summaries endorsed by at least two of them.” Trust becomes a filter I define up front instead of a guess I patch together after the fact.
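A rough sketch of what that could look like (the fields, names, and thresholds here are my invention, not an existing product): the trust rules get declared once, up front, and applied mechanically to whatever the assistant wants to surface.

```python
from dataclasses import dataclass

# Sketch of a user-defined trust filter. Fields and rules are hypothetical;
# the point is that the reader declares the filter in advance instead of
# letting an engagement algorithm decide what feels true.
@dataclass
class Explanation:
    author: str
    credential: str     # e.g. "PhD", "Masters", "none"
    endorsements: int   # vetted economists who co-signed it
    text: str

MIN_CREDENTIALS = {"Masters", "PhD"}
MIN_ENDORSEMENTS = 2

def surface(items: list[Explanation]) -> list[Explanation]:
    """Keep only explanations that pass the rules I set in advance."""
    return [
        e for e in items
        if e.credential in MIN_CREDENTIALS
        and e.endorsements >= MIN_ENDORSEMENTS
    ]

feed = [
    Explanation("anon_guru", "none", 0, "The economy is secretly collapsing."),
    Explanation("j_doe", "PhD", 3, "Mixed picture: strong jobs, soft sentiment."),
]
print([e.author for e in surface(feed)])  # -> ['j_doe']
```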
My point in all of this is that we can see the post‑reality era coming… a freight train moving at full speed. But we aren’t solution‑less. We aren’t helpless. We can make decisions today that mitigate some of these downsides. And the good thing about the social media train wreck is that we have so much to learn from it.