The Pause
AI risk moves faster than the organizations built to manage it.
On September 26, 1983, the Soviet Union’s Oko satellite system told Lieutenant Colonel Stanislav Petrov that the United States had launched five intercontinental ballistic missiles. His training said to report it up the chain immediately. Under Soviet doctrine, his report could have triggered a retaliatory strike within minutes.
Petrov waited.
He knew a first strike would involve hundreds of missiles, not five. He knew ground radar showed nothing. He knew the system was new and untested in certain conditions. He weighed what the machine told him against what he knew about the world, and he judged the machine was wrong.
Sunlight reflecting off high-altitude clouds had confused the sensors. There were no missiles.
That decision — to override the system, to pause — is now treated as one of the most consequential acts of individual judgment in modern history. It was not protocol. It was a human algorithm: pattern recognition, institutional knowledge, and the willingness to trust his own assessment over the machine’s.
We are forty-three years past that moment, and we are building systems designed to remove it.

The World Economic Forum’s 2026 Global Risks Report ranks “adverse outcomes of AI” 30th among risks over the next two years and 5th over the next ten — the largest upward shift of any risk surveyed. The global AI market is projected to grow from $280 billion to $3.5 trillion by 2033. The report’s risk network map shows AI connecting directly to misinformation, cyber insecurity, geoeconomic confrontation, inequality, societal polarization, and illicit economic activity — among the most interconnected nodes on the register.
The report’s most unsettling contribution is a scenario: an automated early-warning system misinterprets a missile test and triggers defensive responses from an adversary’s AI. Conflict initiated by technical error, not strategic intent. The WEF’s framing is precise — traditional deterrence assumes human deliberation. Algorithmic speed removes that assumption.
This is the Petrov problem inverted. In 1983, a human overrode a machine’s false alarm. The WEF is describing a future where the machine overrides the human — not through malice, but through speed. The system acts before anyone can evaluate whether acting is warranted.
For risk practitioners, the speed problem is not abstract. AI compresses decision timelines faster than governance frameworks can adapt. Joint analysis from Brookings and Tsinghua found that AI-assisted nuclear decision-making systems, subjected to data tampering, could trigger a Level 3 alert within 25 minutes — well below the 90-minute safety threshold designed for human evaluation. The window for judgment is not closing gradually. It is being engineered shut.
The corporate version of this problem is quieter but structurally identical.
Eighty-six percent of companies expect AI to transform their business models by 2030. Roughly ten percent are using it in production today. That gap — between expectation and implementation — is where governance failures will concentrate. Organizations are making strategic commitments to a technology most of them have not yet operated at scale, and they are doing it with risk architectures designed for a slower world.
The deeper issue is organizational. AI risk touches cyber, geopolitical, operational, and personnel categories simultaneously. But most risk functions are built to brief one lane at a time. Your CISO briefs on cyber. Your political risk director briefs on geopolitical exposure. Your compliance team briefs on regulatory frameworks. Each of them is competent in their domain. None of them owns the intersection.
The WEF’s risk network illustrates this precisely. AI does not sit in a single risk category. It is a node connected to six of the most consequential risks on the register. A labor displacement event driven by AI adoption is simultaneously a personnel risk, an operational continuity risk, a reputational risk, and — depending on the jurisdiction — a regulatory risk. No single briefing deck captures it.
Boards are beginning to ask questions that expose this architecture. Deloitte’s 2025 Future of Cyber Survey found that 41 percent of boards were addressing cyber issues monthly — a cadence previously reserved for financial and strategic risks. Half of organizations have established AI governance committees. But committees do not solve the problem if they inherit the same siloed inputs. A cross-functional committee receiving single-function briefings is not synthesis. It is aggregation. The distinction matters.
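The distinction is easy to see in miniature. Below is a toy register in Python (all event names and categories are invented for illustration, not drawn from any real framework): aggregation files the same event into four separate decks; synthesis shows it as one event spanning four lanes.

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical events and categories, for illustration only.
@dataclass(frozen=True)
class RiskEvent:
    name: str
    categories: frozenset  # every lane this event touches

REGISTER = [
    RiskEvent("AI-driven labor displacement",
              frozenset({"personnel", "operational", "reputational", "regulatory"})),
    RiskEvent("Model supply-chain compromise",
              frozenset({"cyber", "operational"})),
    RiskEvent("Export-control exposure on training hardware",
              frozenset({"geopolitical", "regulatory"})),
]

def aggregate(register):
    """Aggregation: each function briefs its own lane. The same event
    appears in several decks, but its cross-category shape is invisible."""
    lanes = defaultdict(list)
    for event in register:
        for cat in event.categories:
            lanes[cat].append(event.name)
    return dict(lanes)

def synthesize(register, min_lanes=2):
    """Synthesis: surface events by how many lanes they span, the picture
    that only appears when the briefings are laid on top of each other."""
    return sorted(
        ((e.name, sorted(e.categories)) for e in register
         if len(e.categories) >= min_lanes),
        key=lambda pair: -len(pair[1]),
    )

print(aggregate(REGISTER))   # four lanes each see labor displacement in isolation
print(synthesize(REGISTER))  # one view shows it spans four categories at once
```

Nothing in the aggregated view is wrong. It is simply four correct briefings that never meet.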
The organizations that handle this well will be the ones that connect risk functions before a crisis forces it. Someone has to own the synthesis — not the cyber piece, not the geopolitical piece, not the compliance piece, but the picture that emerges only when you lay them on top of each other. It is a leadership problem.
Petrov’s story is usually told as a parable about heroism — one man who saved the world. That framing, while true, misses the structural lesson. The Soviet system worked that night not because of the protocol but because of the deviation from it. The system’s reliability depended on a human being willing to distrust it.
Every early-warning system, every automated risk platform, every AI-driven monitoring tool carries the same dependency. The value of automation is speed. The vulnerability of automation is also speed — the capacity to act faster than anyone can evaluate whether the action is correct.
The WEF report describes this as a governance challenge. It is. But governance is an abstraction until someone in a specific role, in a specific organization, has the authority and the judgment to say: wait. The data says one thing. I am not certain it is right. Let me check.
That is the load-bearing structure of every reliable system ever built. And it is the first thing that gets optimized away when speed becomes the metric.
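What preserving that structure looks like in an automation path is not mysterious. Here is a minimal sketch in Python, with every name hypothetical: irreversible or uncorroborated actions hold for a human decision, and silence defaults to inaction rather than execution.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# All names hypothetical: a sketch of a "pause gate" in an automated
# response path. The design rule is Petrov's: before an irreversible
# action, demand independent corroboration, and make the timeout
# default "do nothing", not "proceed".

HOLD_SECONDS = 30  # the window this design refuses to optimize away

@dataclass
class Alert:
    source: str
    claim: str
    reversible: bool

def pause_gate(
    alert: Alert,
    corroborate: Callable[[Alert], bool],                 # e.g. ground radar vs. satellite
    ask_human: Callable[[Alert, float], Optional[bool]],  # None = no answer in time
    act: Callable[[Alert], None],
) -> str:
    # Reversible actions with independent agreement can run at machine speed.
    if alert.reversible and corroborate(alert):
        act(alert)
        return "executed"
    # Everything else holds. Silence is treated as "no", never as "yes".
    approved = ask_human(alert, HOLD_SECONDS)
    if approved:
        act(alert)
        return "executed after human review"
    return "held: no corroboration and no affirmative human decision"

# Example: satellite reports launches, ground radar sees nothing, and no
# operator responds inside the hold window. The gate does nothing.
alert = Alert(source="satellite", claim="5 inbound missiles", reversible=False)
print(pause_gate(alert,
                 corroborate=lambda a: False,   # ground radar shows nothing
                 ask_human=lambda a, t: None,   # no answer inside the hold
                 act=lambda a: None))
```

A real system would wire corroborate and ask_human to actual sensors and actual people; the property worth carrying over is the default, in which no answer means no action.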
Every organization adopting AI at scale will face a version of Petrov’s moment — not a nuclear launch, but a decision moving faster than anyone can evaluate. The ones that survive it will be the ones that built something into their operating model, not their policy documents, that preserves those thirty seconds.
The machines will get faster. That part is settled. Whether anyone retains the authority to pause is not.
Devin Carlson has spent 17 years in security operations across three continents. He currently leads programs in South Asia.
References
Unal, B. et al., “Uncertainty and Complexity in Nuclear Decision-Making,” Chatham House Research Paper, March 2022, Chapter 5: Nuclear Decision-Making Case Studies.
World Economic Forum, Global Risks Report 2026, Section 2.7: “AI at Large,” pages 60–66. Figures 51–54.
Brookings Institution and Tsinghua University Center for International Security and Strategy, “How Unchecked AI Could Trigger a Nuclear War,” Brookings, 2024.
Deloitte, 2025 Future of Cyber Survey.
IANS Research, “The CISO’s Expanding AI Mandate: Leading Governance in 2026,” February 2026.
U.S. Census Bureau, Business Trends and Outlook Survey (BTOS), September 2025, via Eurasia Group as cited in WEF Global Risks Report 2026, page 62.
Grand View Research, AI market projections, as cited in WEF Global Risks Report 2026, page 60, Figure 52.

