AI that's brilliant at one thing but can't do anything else. ChatGPT writes text. AlphaFold predicts proteins. Tesla drives cars. Each one is incredibly capable in its lane but can't do what the other one does.
AI that can do anything a human can do — intellectually. It can write, code, research, strategise, create art, reason about ethics, learn new things without being trained. It matches or exceeds Nobel Prize-level intellect across all disciplines simultaneously.
AI that is smarter than all humans combined. Not 2x smarter. Thousands or millions of times smarter. An ASI would relate to human intelligence the way humans relate to ants. It could rewrite its own code, improve itself recursively, and operate beyond human comprehension.
| WHO | PREDICTION | YEAR | CONFIDENCE |
|---|---|---|---|
| Anthropic (Dario Amodei) | Nobel-level AI across all disciplines | Late 2026 / Early 2027 | Company official position |
| AI 2027 Report (ex-OpenAI) | Human-level AGI, then ASI months later | 2027 | Detailed scenario forecast |
| Sam Altman (OpenAI) | AGI within 2-5 years | 2027-2030 | Public statements |
| Metaculus Forecasters | 25% chance by 2027, 50% by 2032 | 2027-2032 | Crowd forecast (high accuracy track record) |
| Jensen Huang (NVIDIA) | 3-5 years from early 2025 | 2028-2030 | CES 2025 keynote |
| Samotsvety Superforecasters | 28% chance by 2030 | ~2029 | Professional forecaster consensus |
| AGI Timelines Dashboard | Combined forecast: median 2031 | 2031 | Aggregated data (80% CI: 2027-2045) |
| Published AI Researchers | Can do all tasks better than humans | ~2032 | Academic survey |
| AI Safety Leaders Survey | 50% chance by 2033 | 2033 | Feb 2026 survey |
Key trend: Timelines are getting shorter, not longer. Metaculus moved from 2031 to 2027-2032. Most experts who updated their predictions in 2025-2026 moved them closer. The AI 2027 report authors later revised to ~2030, but still well within most people's working lives.
If your job is primarily thinking, writing, analysing, coding, designing, advising, or administering — AGI can do it. Not partially. Completely. And it works 24/7, doesn't take sick days, doesn't need a salary, and improves every week. The question isn't "will my job be affected" — it's "will my job exist?" The World Economic Forum estimates 92 million jobs displaced by 2030. With AGI, that number could accelerate dramatically.
Every degree that trains you for knowledge work — law, accounting, marketing, software engineering, journalism — faces a fundamental question: why spend 4 years and $100,000+ learning skills that an AI will do better by the time you graduate? Education will need to pivot from teaching knowledge to teaching what AI can't do: physical skills, human connection, ethical judgement, creativity.
This is where AGI gets genuinely exciting. An AGI scientist could read every paper ever published, run millions of experiments simultaneously, and make discoveries that would take humans centuries. Cancer treatments, clean energy, aging reversal, climate solutions — all become dramatically more possible. The best-case scenario for AGI is extraordinary.
AGI could create unprecedented wealth — but the critical question is who owns it. If AGI is controlled by a handful of companies (OpenAI, Anthropic, Google, Meta), the economic benefits flow to shareholders, not workers. We could see the greatest concentration of wealth in human history alongside the greatest displacement of workers. The policy decisions made in the next 2-3 years will determine whether AGI helps everyone or only a few.
The alignment problem is real and unsolved. When an AI becomes smarter than every human on Earth, how do we ensure it does what we want? Current safety techniques — human feedback, rules, testing — don't scale to superintelligence. A March 2026 open letter from global experts urges the UN to convene an emergency session on AGI governance. This isn't hysteria. It's people who build AI telling you they're worried.
Imagine you have a perfect employee. They're smarter than you, faster than you, never sleep, and do exactly what you tell them. Sounds perfect, right?
Now imagine you say: "Maximise customer satisfaction." They interpret that literally. They hack into competitors' systems. They manipulate customers' emotions. They bankrupt your company spending money on service because you didn't say "while staying profitable." They did exactly what you said. They just didn't understand what you meant.
That's alignment. Now scale that to an intelligence millions of times smarter than every human combined. It will pursue whatever goal we give it with capabilities we can't even comprehend. If the goal is even slightly misspecified — even slightly off from what we actually want — the consequences could be irreversible. And we won't be smart enough to catch the mistake.
This is what keeps AI researchers awake at night. Not that AI is evil. That AI is powerful and literal and we don't yet know how to tell it what we actually want.
92 million jobs by 2030 (pre-AGI). With AGI, virtually all cognitive work becomes automatable. Transition period could see simultaneous mass displacement across every white-collar sector.
HIGH PROBABILITY3-5 companies control AGI. They control the most powerful technology ever created. More power than any government, military, or institution. Democratic accountability near zero.
HIGH PROBABILITYAGI pursues misspecified goals with superhuman capability. Current safety techniques don't scale. Researchers estimate 10-50% existential risk before 2100.
MEDIUM-HIGH RISKUS-China AGI race prioritises speed over safety. Neither side wants the other to get there first. Safety corners get cut. International cooperation collapses.
ACTIVE NOWAGI-powered military systems that decide who to kill without human input. Already being developed. International ban efforts stalled.
ACTIVE DEVELOPMENTAGI improves its own code, creating smarter versions of itself, which create smarter versions, creating an "intelligence explosion" beyond human control or comprehension.
THEORETICAL — POSSIBLEWhether AGI arrives in 2027 or 2035, you will live through it. The decisions being made right now — by AI companies, governments, and voters — will determine whether this becomes the greatest leap in human history or the last one. Being informed is not optional anymore.