At the World Economic Forum in Davos in 2026, leaders from Google DeepMind, Anthropic, and The Economist weighed how society should prepare for artificial general intelligence and its fallout. The discussion centered on rules, safety checks, jobs, and how countries can act together if machines reach or pass human-level performance.
The panel asked what happens the day after such a leap, and what must be built now to manage it. Their focus made clear that planning cannot wait for a breakthrough. It must start before systems cross key thresholds.
Setting the Stakes
AGI refers to systems that can perform a wide range of tasks at or above human ability. Researchers still debate its timeline. Some expect decades; others see faster progress. The uncertainty fuels calls for early guardrails.
Governments and firms have begun to move. The United States secured voluntary safety commitments from major AI labs in 2023. The United Kingdom held a global AI Safety Summit that same year. The European Union advanced sweeping rules to govern high-risk uses. These steps are early and uneven, but they show momentum.
Google DeepMind has long researched alignment and evaluation. Anthropic is known for its “constitutional” methods and staged model releases. The Economist has mapped public concerns, from bias to disinformation to power concentration. Their shared stage put technical, policy, and media lenses in one debate.
What the Panel Aimed to Answer
The session framing read: “At the World Economic Forum Annual Meeting in Davos 2026, leaders from Google DeepMind, Anthropic and The Economist discuss what the day after AGI could look like, including governance, safety, economic impact and global coordination.”
This framing pointed to four fronts that will shape outcomes.
- Governance: Who sets rules and how they adjust as systems improve.
- Safety: How to test, monitor, and limit risky behavior before and after deployment.
- Economic impact: How productivity gains are shared and job shifts are managed.
- Global coordination: How nations align on standards and crisis response.
Governance and Safety Playbooks
Panelists stressed that rules must scale with capability. Model evaluations, red-teaming, and incident reporting can form a baseline. As systems approach sensitive thresholds, stronger steps may include compute tracking, phased releases, and independent audits.
Industry groups have proposed “safety cases” that demonstrate a model is fit for purpose, mirroring practice in aviation and medical devices. The idea is simple: no deployment without evidence of control. Open questions remain over who certifies, how claims are verified, and how to handle open-source models.
Liability is another pressure point. Clear lines of responsibility can speed fixes and deter negligence; without them, harms can spread before remedies arrive.
Jobs, Productivity, and Who Benefits
Economic effects could be far-reaching. Earlier waves of automation boosted output while reshaping work. Generative tools have already altered tasks in coding, marketing, and customer support. AGI could accelerate this trend.
The panel examined how gains might be shared. Options include wage supports, tax policy that rewards training, and public investment in skills. Labor groups want worker voice in deployment plans. Businesses seek clarity to invest. Both sides agree that adjustment support is cheaper than mass dislocation.
Education will need updates. Short, modular training can help workers move into new roles. Public data on which jobs are changing can guide policy and personal choices.
Global Rules for a Global Technology
AI systems and supply chains cross borders. So do risks. The panel explored how to align national rules without slowing useful research. Proposals included common testing protocols, shared incident databases, and rapid alert channels for high-severity findings.
Past models offer lessons. Nuclear oversight shows how inspections and transparency can build trust. Cybersecurity shows the value of information sharing and drills. Any AGI approach will need both cooperation and enforcement power.
Access to compute and talent also matters. If only a few hubs control key inputs, trust can fray. Broader access, paired with safety standards, can reduce tensions while keeping guardrails intact.
Scenarios for the Day After
The group outlined practical steps if systems meet or exceed human performance on many tasks:
- Trigger pre-agreed “capability thresholds” that activate stricter audits and oversight.
- Pause deployment of new features until safety tests clear set benchmarks.
- Stand up joint response teams across labs, governments, and civil society.
- Publish clear, accessible updates for the public on risks and mitigations.
These measures aim to pair speed with safety and maintain public trust.
The Davos conversation signaled a shift from speculation to planning. The message was that preparedness beats reaction. Whether AGI arrives sooner or later, the tools to manage it can be built now.
Next steps will be revealing. Watch for shared testing standards, stronger audit trails for large training runs, and worker-focused adjustment plans. If leaders can align on these basics, the day after AGI will look less uncertain—and more manageable.