OpenAI's Endgame Starts With Your Selfie
Sora 2 isn’t a social app. It’s a data pipeline for teaching robots how to see.
A few weeks ago, OpenAI launched the Sora 2 app, a social platform dedicated entirely to AI-generated video. Picture TikTok or Instagram, except every clip is AI-generated, featuring users themselves in AI-driven cameos or (with quasi-consent) other people’s likenesses. The reaction was immediate and polarized: half the internet was amazed by the cinematic realism; the other half asked, “Do we really need another AI feed that no one, literally no one, asked for?”
But OpenAI isn’t in the social-media business. They don’t have a social division or a strategy team chasing engagement metrics. Look closely at the Sora 2 page on OpenAI.com and they barely mention the word “social.” They repeatedly call Sora 2 a “general-purpose world simulator” and reference “robotic agents.”
What does a TikTok-like app (arguably with more nonsense) have to do with robotic agents? If you zoom out, everything.
From Social Gimmick to Strategic Move
Sora 2 isn’t an attempt to build the next TikTok. It’s an attempt to train and normalize a new behavior and, in the process, help OpenAI teach robots how to move through the world. From a user-behavior standpoint, OpenAI is getting people comfortable creating AI-generated video in their daily lives. It’s the same playbook Apple used with Photo Booth and the first front-facing cameras. What started as a fun way to “see yourself on screen” quietly trained us for a video-first world, one where looking into a lens would become as natural as looking into a mirror (cue FaceTime). In the same way, Sora 2 makes AI video creation feel ordinary: prompt, generate, iterate, share. Soon that workflow will feel as natural as taking a selfie, which was itself once seen as radical. Remember when the selfie stick was a best-selling item?
Outsourcing Labor to Users
What’s also happening here is a massive outsourcing of labor. OpenAI has cleverly packaged what would otherwise be expensive training and evaluation work as a “fun social experience.” Every prompt, every tweak, every video that gets shared or discarded, and every signal about what goes viral and what doesn’t is training their video generation model. That’s free labor that would cost millions to replicate in a controlled environment with paid testers. OpenAI is essentially getting millions of people to volunteer as unpaid quality assurance testers, prompt engineers, and data labelers. They have gamified reinforcement learning at scale.
A Data Pipeline for World Simulators
And the final move in the Sora 2 strategic play: training robots. OpenAI is gathering interactions: prompts, edits, approvals, re-renders, and crowd judgments about what looks real and plausible. Every attempt to create a video, every round of feedback, every clip saved to drafts and never posted, and everything the crowd does or doesn’t respond to is a micro-signal teaching an AI world model how physics, lighting, and human motion behave.
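To make the idea of a "micro-signal" concrete, here is a minimal sketch of how user interactions could, in principle, be collapsed into preference labels for training. Everything here, the action names, the reward weights, the like-count normalization, is an illustrative assumption, not a description of OpenAI's actual pipeline:

```python
from dataclasses import dataclass

# Hypothetical user actions and the implicit feedback each might carry.
# These weights are illustrative assumptions, not OpenAI's pipeline.
IMPLICIT_REWARD = {
    "posted": 1.0,        # user judged the clip good enough to share
    "re_rendered": -0.5,  # something looked wrong; user asked for another take
    "drafted": -0.2,      # saved but never posted: a lukewarm signal
    "discarded": -1.0,    # rejected outright
}

@dataclass
class Interaction:
    prompt: str
    action: str       # one of the IMPLICIT_REWARD keys above
    crowd_likes: int  # downstream engagement, if the clip was posted

def preference_label(event: Interaction) -> float:
    """Collapse one user interaction into a scalar reward signal."""
    reward = IMPLICIT_REWARD[event.action]
    if event.action == "posted":
        # Crowd response acts as a second, noisier vote on plausibility,
        # capped so a viral clip can't dominate the label.
        reward += min(event.crowd_likes / 1000, 1.0)
    return reward

events = [
    Interaction("water pouring into a glass", "re_rendered", 0),
    Interaction("skateboarder ollies a curb", "posted", 2400),
]
labels = [preference_label(e) for e in events]
```

The point of the sketch is simply that no explicit labeling step is required: the app's ordinary verbs (post, redo, draft, discard) already encode a human judgment about whether the generated physics looked right.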
These signals become the data for training robots. Why? Because robots need to understand how the physical world works before they can safely interact with it. World simulators create virtual environments where robots can learn without physical consequences, but these simulators are only useful if they accurately reflect real-world physics.
When users reject a Sora video because “water doesn’t flow like that” or “people don’t move that way,” they’re teaching the model fundamental truths that transfer directly to robotic systems. This explains the otherwise bizarre repeated mentions of “robotic agents” on a website ostensibly about a social video app. OpenAI is telling us exactly what Sora 2 is for, right on the product page.
The Strategic Play
OpenAI’s Sora 2 social app is simply the visible layer of a long-term strategy: to build the behavioral and technical foundation for AI that can act reliably in the physical world. We are training OpenAI’s simulation and robotics models. We are the product. Sora 2 is an on-ramp to a future where simulation and robotics merge. While everyone debates whether we need another social platform, OpenAI is quietly building something far more consequential: the operating system for the physical world.
The Ethical Cost of OpenAI’s Strategic Play
We are rolling into another example of “if it’s free, you’re the product” on steroids. We’re also the guinea pigs for testing deepfake boundaries at scale. While OpenAI has some guardrails, failures happen in public, not in labs. We’re creating a world where video evidence becomes increasingly meaningless, consent becomes a checkbox rather than a principle, and the line between real and synthetic blurs beyond recognition. And we’re doing this all for free, to help a company build better robots.
On this week’s episode of I’ve Got Questions
I sat down with leading AI scholar and professor, Kate Crawford, to discuss:
The environmental impacts of AI
The AI “Race” and the geopolitical and national security dynamics shaping AI development
Why China is forging ahead with renewables to power AI
And whether AI companies are more powerful than nation-states



