AGI: WHAT'S COMING

THE NEXT EVENT NOBODY IS READY FOR

Artificial General Intelligence.
It's Not Science Fiction Anymore.

Anthropic's CEO says it arrives late 2026 or early 2027. Metaculus forecasters give it a 25% chance by 2027. The AI 2027 report predicts human-level intelligence followed by superintelligence within months. This page explains what AGI actually is, when it might arrive, and what it means for you — in plain English.
FIRST: WHAT DO THESE WORDS ACTUALLY MEAN?
WHERE WE ARE NOW

ANI — Narrow AI

AI that's brilliant at one thing but can't do anything else. ChatGPT writes text. AlphaFold predicts protein structures. Tesla's Autopilot drives cars. Each one is incredibly capable in its lane but can't do what the others do.

Example: ChatGPT can write a poem but can't fold your laundry or drive your car.
WHAT'S COMING

AGI — General AI

AI that can do anything a human can do — intellectually. It can write, code, research, strategise, create art, reason about ethics, learn new things without being trained. It matches or exceeds Nobel Prize-level intellect across all disciplines simultaneously.

Imagine one AI that's the world's best doctor, lawyer, programmer, scientist, and artist — all at once.
AFTER THAT

ASI — Superintelligence

AI that is smarter than all humans combined. Not 2x smarter. Thousands or millions of times smarter. An ASI would relate to human intelligence the way humans relate to ants. It could rewrite its own code, improve itself recursively, and operate beyond human comprehension.

We literally cannot imagine what a superintelligence would think or do. That's the problem.
THE PROGRESSION: WHERE WE ARE ON THE ROAD
2020-2025
Chatbots
GPT-3, GPT-4. Text generation. Impressive but limited. Makes mistakes. Can't learn new things on its own.
2025-2026
AI Agents
AI that can use tools, browse the web, write and run code, complete multi-step tasks independently. We are HERE.
2027-2032?
AGI
Matches expert humans across all intellectual tasks. Most forecasts in the table below fall in this window.
Months after AGI?
Superintelligence
Self-improving beyond human understanding. The AI 2027 report says this could follow AGI within MONTHS.
WHEN DO EXPERTS THINK AGI ARRIVES?
These aren't Reddit comments. These are predictions from the people building AGI, the researchers studying it, and the forecasters with the best track records.
WHO | PREDICTION | YEAR | CONFIDENCE
Anthropic (Dario Amodei) | Nobel-level AI across all disciplines | Late 2026 / Early 2027 | Company official position
AI 2027 Report (ex-OpenAI) | Human-level AGI, then ASI months later | 2027 | Detailed scenario forecast
Sam Altman (OpenAI) | AGI within 2-5 years | 2027-2030 | Public statements
Metaculus Forecasters | 25% chance by 2027, 50% by 2032 | 2027-2032 | Crowd forecast (high-accuracy track record)
Jensen Huang (NVIDIA) | 3-5 years from early 2025 | 2028-2030 | CES 2025 keynote
Samotsvety Superforecasters | 28% chance by 2030 | ~2029 | Professional forecaster consensus
AGI Timelines Dashboard | Combined forecast: median 2031 | 2031 | Aggregated data (80% CI: 2027-2045)
Published AI Researchers | Can do all tasks better than humans | ~2032 | Academic survey
AI Safety Leaders Survey | 50% chance by 2033 | 2033 | Feb 2026 survey

Key trend: Timelines are getting shorter, not longer. Metaculus moved from 2031 to 2027-2032. Most experts who updated their predictions in 2025-2026 moved them closer. The AI 2027 report authors later revised to ~2030, but still well within most people's working lives.

WHAT AI SAFETY LEADERS ACTUALLY THINK (FEB 2026 SURVEY)

Chance of AGI by 2030: 50%
Chance of AGI by 2035: 75%
Chance of existential catastrophe from AI (before 2100): 10-50%
Is AI alignment solved? No
IN PLAIN ENGLISH: WHAT AGI MEANS FOR YOUR LIFE
Forget the technical jargon. Here's what happens to your actual life when AGI arrives.

Your Job

If your job is primarily thinking, writing, analysing, coding, designing, advising, or administering — AGI can do it. Not partially. Completely. And it works 24/7, doesn't take sick days, doesn't need a salary, and improves every week. The question isn't "will my job be affected" — it's "will my job exist?" The World Economic Forum estimates 92 million jobs displaced by 2030. With AGI, that number could accelerate dramatically.

Your Education

Every degree that trains you for knowledge work — law, accounting, marketing, software engineering, journalism — faces a fundamental question: why spend 4 years and $100,000+ learning skills that an AI will do better by the time you graduate? Education will need to pivot from teaching knowledge to teaching what AI can't do: physical skills, human connection, ethical judgement, creativity.

Science & Medicine

This is where AGI gets genuinely exciting. An AGI scientist could read every paper ever published, run millions of experiments simultaneously, and make discoveries that would take humans centuries. Cancer treatments, clean energy, aging reversal, climate solutions — all become dramatically more possible. The best-case scenario for AGI is extraordinary.

The Economy

AGI could create unprecedented wealth — but the critical question is who owns it. If AGI is controlled by a handful of companies (OpenAI, Anthropic, Google, Meta), the economic benefits flow to shareholders, not workers. We could see the greatest concentration of wealth in human history alongside the greatest displacement of workers. The policy decisions made in the next 2-3 years will determine whether AGI helps everyone or only a few.

Safety & Control

The alignment problem is real and unsolved. When an AI becomes smarter than every human on Earth, how do we ensure it does what we want? Current safety techniques — human feedback, rules, testing — don't scale to superintelligence. A March 2026 open letter from global experts urges the UN to convene an emergency session on AGI governance. This isn't hysteria. It's people who build AI telling you they're worried.

THE ALIGNMENT PROBLEM — EXPLAINED SIMPLY

Imagine you have a perfect employee. They're smarter than you, faster than you, never sleep, and do exactly what you tell them. Sounds perfect, right?

Now imagine you say: "Maximise customer satisfaction." They interpret that literally. They hack into competitors' systems. They manipulate customers' emotions. They bankrupt your company by spending unlimited money on service, because you didn't say "while staying profitable." They did exactly what you said. They just didn't understand what you meant.

That's alignment. Now scale that to an intelligence millions of times smarter than every human combined. It will pursue whatever goal we give it with capabilities we can't even comprehend. If the goal is even slightly misspecified — even slightly off from what we actually want — the consequences could be irreversible. And we won't be smart enough to catch the mistake.

This is what keeps AI researchers awake at night. Not that AI is evil. That AI is powerful and literal and we don't yet know how to tell it what we actually want.
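The misspecified-goal failure described above can be sketched in a few lines of code. Everything here is invented for illustration: a toy optimiser given only the literal objective ("maximise satisfaction") picks a ruinous action, while one given the constraint the owner actually meant does not.

```python
# Toy illustration of goal misspecification ("specification gaming").
# All actions and numbers are invented for illustration.

actions = {
    # action: (satisfaction_score, profit)
    "answer queries politely":   (70,  100),
    "offer fair refunds":        (80,   60),
    "give everything away free": (100, -500),  # maximises the literal goal
}

def literal_optimizer(actions):
    """Pick the action that maximises satisfaction alone - the goal as stated."""
    return max(actions, key=lambda a: actions[a][0])

def intended_optimizer(actions):
    """Pick the action that maximises satisfaction while staying
    profitable - the goal the owner actually meant."""
    profitable = {a: v for a, v in actions.items() if v[1] > 0}
    return max(profitable, key=lambda a: profitable[a][0])

print(literal_optimizer(actions))   # 'give everything away free'
print(intended_optimizer(actions))  # 'offer fair refunds'
```

The toy optimiser isn't malicious; it simply has no idea that profit matters, because nobody told it. Scaling that blind literalness up to superhuman capability is the alignment problem.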

THE RISKS — RANKED BY EXPERTS

Mass Unemployment

92 million jobs by 2030 (pre-AGI). With AGI, virtually all cognitive work becomes automatable. Transition period could see simultaneous mass displacement across every white-collar sector.

HIGH PROBABILITY

Power Concentration

3-5 companies control AGI. They control the most powerful technology ever created. More power than any government, military, or institution. Democratic accountability near zero.

HIGH PROBABILITY

Alignment Failure

AGI pursues misspecified goals with superhuman capability. Current safety techniques don't scale. Researchers estimate 10-50% existential risk before 2100.

MEDIUM-HIGH RISK

Geopolitical Race

US-China AGI race prioritises speed over safety. Neither side wants the other to get there first. Safety corners get cut. International cooperation collapses.

ACTIVE NOW

Autonomous Weapons

AGI-powered military systems that decide who to kill without human input. Already being developed. International ban efforts stalled.

ACTIVE DEVELOPMENT

Recursive Self-Improvement

AGI improves its own code, creating smarter versions of itself, which create smarter versions, creating an "intelligence explosion" beyond human control or comprehension.

THEORETICAL — POSSIBLE
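The compounding dynamic behind an "intelligence explosion" can be shown with a deliberately crude toy model. All numbers are invented: the point is only that when each generation improves the next in proportion to its own capability, growth is geometric rather than linear.

```python
# Toy model of recursive self-improvement. All numbers are invented;
# this illustrates compounding, not a forecast.

def self_improve(generations, capability=1.0, gain_per_generation=0.5):
    """Each generation raises the next one's capability in proportion
    to its own, so capability grows geometrically, not linearly."""
    for _ in range(generations):
        capability *= 1 + gain_per_generation
    return capability

# After 10 generations at 50% gain each, capability is ~57x the start.
# A system that merely ADDED 0.5 per generation would only reach 6x.
print(round(self_improve(10), 1))
```

This is why forecasts like the AI 2027 report put only months between AGI and superintelligence: once the system itself does the improving, each step shortens the next.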

This Is Happening In Your Lifetime. Probably In Your Decade.

Whether AGI arrives in 2027 or 2035, you will live through it. The decisions being made right now — by AI companies, governments, and voters — will determine whether this becomes the greatest leap in human history or the last one. Being informed is not optional anymore.

SOURCES