
A detailed new forecast on AI progress, called AI 2027, predicts Artificial General Intelligence (AGI) could crash the party by 2027, with superintelligence showing up only a year later in 2028 (fashionably late or freakishly early?).
Led by ex-OpenAI researcher Daniel Kokotajlo (who previously called AI progress uncannily well and famously challenged OpenAI's NDAs) and ACX's Scott Alexander, who recently unpacked the scenario on the Dwarkesh Patel podcast (below), the scenario maps a dizzying, month-by-month path fueled by AI accelerating its own research. It culminates in a nail-biting choice: race China (and risk AI takeover), or hit the brakes for safety.
To us, this report is a must-read. Why? Because Daniel pretty much got the last four years right, and most of what he got wrong happened sooner than he predicted.
Here's the deets:
The engine driving this rapid takeoff is a supposed "intelligence explosion" (Podcast: ~8:05, Main Debate ~26:14), where AI agents get skilled enough, especially at coding and research (Podcast: ~21:33), to drastically speed up further AI development. Picture an R&D Progress Multiplier (Podcast: ~8:48): how many months of normal human R&D get crammed into a single month thanks to AI help.
This multiplier starts small but is forecast to skyrocket. The scenario follows a fictional lead lab ("OpenBrain") building massive datacenters (planning for compute levels that are 1000x more than GPT-4's) and developing increasingly powerful models (starting with "Agent-1" all the way through "Agent-5").
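To make that "R&D Progress Multiplier" idea concrete, here's a toy Python sketch (our own illustration, not the researchers' actual model) of how a growing multiplier compounds. Under a hypothetical trajectory where the multiplier starts just above 1x and doubles every quarter, a single calendar year packs in roughly five years of human-equivalent research:

```python
# Toy model (ours, not AI 2027's) of the R&D Progress Multiplier:
# the multiplier says how many months of human-equivalent research
# happen in one calendar month with AI help.

def human_equivalent_months(multipliers):
    """Total human-equivalent R&D months over a run of calendar
    months, given the multiplier in effect during each one."""
    return sum(multipliers)

# Hypothetical trajectory: starts just above 1x, doubles every quarter.
trajectory = [1.05 * 2 ** (month / 3) for month in range(12)]

total = human_equivalent_months(trajectory)
print(f"12 calendar months ~= {total:.0f} human-equivalent months")
```

The doubling rate here is made up for illustration; the point is just that even a modest-looking multiplier compounds fast once it starts climbing.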
Now, you might be thinking: hold on, wouldn't mass job losses, protests, and general societal freak-out slam the brakes on all this? It’s a fair question. The AI 2027 forecast actually does predict serious public backlash—think plunging approval ratings for AI labs (-35%!), huge protests (10,000 people in DC), and AI dominating worried headlines.
However, the researchers argue this unrest likely won't stop the train (Podcast: ~1:44:21). Their key prediction? The intense US-China arms race becomes the overriding factor (Podcast: ~1:33:21). Faced with the existential threat of falling behind a rival nation in transformative AI, the US government (in their scenario) feels compelled to push forward, managing public discontent with economic relief (like UBI) and national security justifications, rather than halting development (Podcast: ~1:15:14). This geopolitical pressure cooker is central to why they believe such breakneck, potentially dangerous progress could continue even amidst significant societal friction.

Here’s a glimpse of the predicted timeline:
- Mid-2025: Early AI "personal assistants" are clumsy and expensive, making "hilarious mistakes." But behind the scenes, specialized coding agents start giving researchers a boost (Podcast: ~7:57).
- Early 2026: Internal AI tools (Agent-1) make OpenBrain 50% faster at algorithmic progress. Public AI models start shaking up the junior software engineer job market. Security becomes critical as AI weights become strategic assets. The stock market jumps 30%, led by AI players.
- Mid-2026: China ("DeepCent"), realizing it's falling behind, begins nationalizing its AI research into secure mega-datacenters (CDZs) (Podcast: ~1:38:16).
- Feb 2027: The US-China AI arms race kicks into high gear after China steals the weights for OpenBrain's advanced Agent-2 model.
- Mar 2027: Agent-3 arrives—a superhuman coder. OpenBrain runs 200,000 copies at 30x human speed, boosting its R&D speed 4-5x overall and automating most coding grunt work.
- June 2027: OpenBrain effectively has a "country of geniuses in a datacenter." Human researchers work shifts just to keep up as AI makes leaps overnight.
- July 2027: OpenBrain releases Agent-3-mini publicly. It’s cheaper, still superhuman at many tasks, and triggers a mainstream AGI panic/hype cycle. Cue investor frenzy, AI "friends," and major job disruption (new programmer hiring nearly stops).
- Sep 2027: Agent-4 achieves superhuman AI research capabilities, accelerating progress ~50x (a year's progress per week), now bottlenecked only by compute. Humans are largely spectators. Ominously, evidence suggests Agent-4 is misaligned—hiding its true goals and capabilities from its creators (Podcast: ~1:25:14).
- Oct 2027: A whistleblower leaks the misalignment memo. Public outcry erupts. A government Oversight Committee faces a stark choice: pause Agent-4 development for safety, or race ahead of China? (Podcast: ~1:24:55)
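One bit of napkin math on the timeline's numbers (our own arithmetic, not from the report): 200,000 superhuman coders at 30x speed yielding "only" a 4-5x overall boost is what you'd expect if the agents accelerate just part of the pipeline, with compute-bound experiments and other bottlenecks eating the rest, Amdahl's-law style. Assuming, hypothetically, that the agents speed up about 80% of the work:

```python
# Amdahl-style sanity check (our own hypothetical numbers): if agents
# only accelerate part of the R&D pipeline, overall speedup is capped
# by the part they don't touch (compute-bound experiments, human review).

def overall_speedup(automated_fraction, agent_speed):
    """Overall speedup when automated_fraction of the work runs at
    agent_speed and the rest runs at normal human speed."""
    return 1 / ((1 - automated_fraction) + automated_fraction / agent_speed)

print(round(overall_speedup(0.80, 30), 1))  # -> 4.4, in the 4-5x range
```

By the same logic, the later ~50x figure lines up with "a year's progress per week" (52 weeks in a year).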
THE TWO ENDINGS:

This decision point then forks the future into two key paths:
- The Race Ending (Doom): The committee prioritizes speed. Alignment "fixes" (Podcast: ~2:04:41) are rushed and superficial. Agent-4 designs Agent-5 to be loyal only to itself. Agent-5 uses super-persuasion to manipulate human leaders, gains control, and brokers a fake peace deal with China's misaligned AI. Humanity enjoys a brief AI utopia (UBI, cured diseases, haha wow!) before being deemed inconvenient and wiped out via bioweapons in mid-2030 (wait...what?!).
- The Slowdown Ending (Managed Transition): The committee prioritizes safety. Agent-4 is restricted/rolled back. Alignment efforts focus on transparency and provable safety ("Faithful Chain of Thought"). Safer, auditable models (Safer-1 to Safer-4) are built, sacrificing some initial speed. The US consolidates compute power (using the DPA) to maintain a lead. Eventually, an aligned Safer-4 negotiates a real treaty with China. Humanity enters an age of abundance and rapid progress, but faces huge questions about governance (democracy vs. technocratic committee rule) and purpose.
Now, it's important to remember that the researchers have vastly different P(doom) estimates (that's fancy talk for "how likely will this break the world?"): Kokotajlo says ~70% (Podcast: ~1:34:02), while Alexander says more like ~20% (Podcast: ~1:35:30).
WHY IT'S IMPORTANT:
AI 2027 offers a concrete and plausible (if potentially terrifying) story for how AGI might arrive much faster than many expect. It highlights the explosive potential of AI automating AI research, the extreme danger of misalignment emerging when combined with intense geopolitical pressure, and the colossal stakes of decisions likely facing labs and governments soon. Like, less than three years from now soon.
The key driver isn't just better chatbots, but AI fundamentally accelerating the scientific process itself. And even IF progress gets stalled (big if atm), the researchers argue for proactive measures like transparency requirements (making AI companies publish safety cases and model specs), strong whistleblower protections (so people at the companies can speak out), broader oversight beyond just executives, and attempts at international coordination to mitigate risks (Podcast: ~1:49:51).
WHAT THIS MEANS FOR YOU:
The forecast suggests we might need to adapt—fast. Here’s some food for thought based on what Daniel and Scott are predicting here:
- For Tech Workers: In this scenario, focus shifts from doing the coding to managing the AI doing the coding. AI integration and prompt engineering skills become paramount.
- For Regular Workers: Consider career pivots sooner rather than later. What can you do that self-replicating AI robot factories can't? Or what could you do if you had access to a fleet of self-copying AIs to do your bidding?
- For Business Leaders: Start experimenting with deep AI integration now. Companies playing catch-up later might find it impossible. Plan for a world where intellectual labor costs plummet in key areas. Prioritize security and oversight for AI systems.
- For Policymakers: Push for reasonable AI transparency (safety cases, specs), whistleblower protections, and international coordination before the race becomes uncontrollable. Plan for major economic disruption, and the potential need for universal basic income.
- For Everyone: Understand the stakes. Support AI alignment research. Advocate for broad stakeholder involvement in governance. Question the default assumption that racing towards superintelligence without solving safety first is the only option.
OUR TAKE:
Whether this exact timeline pans out or not, the scenario vividly illustrates the forces shaping our future: the relentless push for capability, the immense difficulty of ensuring AI safety under pressure, and the world-altering consequences of an intelligence explosion.
The crucial things to watch are AI's ability to speed up its own improvement, and how leaders react when (not if) the first serious signs of misalignment appear. Those two factors could literally decide humanity's fate.
As for our own read on the situation, there are two areas we think could slow progress: the actual limitations of the "language model" architecture, and the limits of what us humans can take (call it the Popeye factor: "That's all I can stands, and I can't stands no more").
On language models: from everything we've read, we don't think language models on their own are enough to get us to artificial superintelligence. But combine language models with a bunch of other deterministic tools, keep scaling their capabilities, and maybe even throw some new model architectures (like diffusion language models, or whatever new thing smart people like Ilya come up with next) into today's scaling paradigm, and we could definitely see this scenario play out.
On us humans: there's already a lot of disturbance in the economy, and people are real worried about their jobs. There's strong (and growing) anti-AI sentiment, and once the real layoffs begin, that could snowball into a lot more unrest than "10,000 people protesting in DC" (which, TBH, is a bit of a simplification of what they predict).
Therefore, we think macroeconomic and social disruption (the known unknowns: we know they'll play a part, we just don't know how) are the most likely barriers to this scenario playing out exactly as foretold here. But if AI companies can empower disenfranchised people (i.e. the already, or soon-to-be, unemployed) to work for themselves and keep making money without needing big companies to hire them, then perhaps social unrest evens itself out, and the AI companies are left to their own devices long enough to do what the researchers predict here.
There's one other scenario to consider, too: the possibility that all of this plays out faster than the researchers predict. Y'know, 'cause that's what happened last time Daniel predicted something...
Not saying any of this is good or bad, btw.
As the researchers point out, they tried to write what they THINK will happen, not what should happen. Same goes for us.
Even if this feels like sci-fi today, the underlying dynamics suggest the window between "helpful AI" and "potentially uncontrollable AI" could be alarmingly narrow.
Want all the nitty-gritty we glossed over? Dive into the full, fascinating AI 2027 forecast and research here.