60% of Today’s Jobs Didn’t Exist 80 Years Ago—Will AI Upend the Next Wave?
AI, Disruption, UBI: Why Leaders Aren't Talking About This
I am a strategic foresight advisor, which means it’s my job to use data to inform scenarios that help reduce uncertainty around decision-making. I’ve advised governments across all levels, C-suite executives at startups and Fortune 500 companies, and steering committees at some of the most famous international organizations in the world. I believe the most responsible leaders plan for both the worst- and best-case scenarios. Still, I have yet to come across a state-level leader who truly grasps—and is appropriately preparing for—the scale of disruption that could be coming to the workforce.
We hear a lot about AI’s potential to drive the economy and, separately, to empower workers, create jobs, and strengthen the middle class. But the truth is, if not managed correctly, AI could upend the middle class—and the aspirations of workers who hope to join it.
If you are leading a frontier AI lab (e.g., OpenAI, Anthropic, or Google), of course it’s part of the marketing package to sell your product as something that may one day perform all jobs requiring “human-level” skills (AGI). But marketing aside, these labs—and many companies building AI agents on top of frontier models—are openly declaring the futures they’re creating. It would be wise to listen.
For example, economist Anton Korinek argues that Artificial General Intelligence (AGI) could, in fact, automate all human tasks, leading to wage collapse and extreme inequality, with owners of AI systems capturing most economic value while non-owners risk being left behind. In his NBER paper, he models scenarios where wages initially rise but collapse before full automation is achieved, transitioning into a steady-state economy in which labor and compute earn equal returns.
These views sit at the more extreme end of the spectrum, and many economists argue they’re unlikely to play out. History has also repeatedly debunked the lump-of-labor fallacy (the mistaken belief that there’s a fixed amount of work in an economy), and new jobs have always emerged as economies evolve. In fact, 60% of the occupations Americans work in today didn’t exist 80 years ago. Many economists expect this pattern to continue. I also believe AI (along with robotics, synthetic biology, space travel, etc.) will lead to new jobs and industries that are impossible to imagine now—picture explaining the role of a social media manager to someone in 1992.
But that doesn’t absolve leaders of the responsibility to have a plan for scenarios in which AI can handle a significant portion of the work required in the labor market (and to take steps today to steer away from that outcome if they deem it undesirable).
What are the new economic models for that future? What about new taxation and distribution frameworks? In countries without universal healthcare—where coverage is typically tied to employment—what’s the plan? These are seemingly basic questions that demand complex, deeply thoughtful analysis. The idea of universal basic income gets tossed around as though it’s a simple button that can be pressed if the automation emergency light goes on. But there are deep societal, cultural, geopolitical, and national security implications to this model. If your country has not secured an indispensable place in the AI supply chain, will you be importing most aspects of “the future” while also automating your workforce? Your security and continuity would be entirely dependent on the nations you’re importing from.
Alternatively, in a scenario where AI drives productivity and creates more jobs and tasks at a slower, more gradual pace, we could still see widening income inequality and unrest if not managed carefully. In his NBER paper “The Simple Macroeconomics of AI,” Daron Acemoglu suggests AI is likely to have only modest impacts on productivity and GDP over the next decade, but is still likely to worsen income inequality. Workers with skills complementary to AI (high-skill, tech-savvy individuals) may benefit more, while others face downward wage pressure or job loss. In this scenario, widespread, ongoing skills and training programs are vital, along with social safety nets to protect individuals’ quality of life during transitions.
I believe these sorts of training programs and economic safety nets should be in place regardless. Even so, adjusting social safety nets requires reconfiguring fiscal policy, which demands political buy-in. That process is costly and time-consuming. The wheels should be turning now.
Even in the best-case scenario—where AI ultimately becomes the tide that lifts all boats, allowing us to work fewer hours while producing far more—there will still be disruption on the road to get there. And the path to that future isn’t automatic: it requires substantial investments in retraining, honest conversations about which jobs will soon be obsolete, and practical social safety nets to carry people through the transition. It might also mean revisiting our tax structures, placing a bit more weight on capital and a bit less on labor, so that the gains from AI’s productivity don’t just concentrate at the top. Judging by the rise in corporate tax cuts amid a fiercely competitive global market, these are tricky conversations that require very thoughtful analysis, too.
Regardless of where you stand on these workforce projections, I think we can agree that disruption is coming. Among the companies I advise, there is already a significant rise in independent work, and we will see these trend lines continue. On one hand, AI will empower individuals to do more with less, and we should expect a boom in entrepreneurship (this is great!). On the other hand, companies will be more reluctant to hire for full-time roles as AI brings significant uncertainty to what those roles might look like in 12, 18, or 24 months. As a result, contract work is likely to become more prevalent, with major implications for worker bargaining power, social safety nets (like health insurance), and opportunities for re-skilling. It also presents more openings for algorithmic management, which has often proved detrimental to worker well-being. And these are just some of the scenarios we can see with high certainty.
History has given us enough examples of how rapid automation, combined with inadequate social protections and economic inequality, can ignite violent societal unrest. We have time to avoid this scenario. Are we using it wisely?
The most responsible leaders evaluate all scenarios, ask the big “what if” questions, and prepare for each possibility. They’re not just hoping for the best—they’re actively working to shape it. For now, time is still on our side. Even if AI continues to advance rapidly, it will take a while for businesses to adapt their models and properly integrate these technologies across society. But change is coming—that much is certain. The real question is: Are leaders doing enough to prepare for it?