The Dependency Web
Part 2 of 5: The Last Human Vote: AI and the Future of Democracy
The invitation to speak at the Global Governance Summit had arrived with the sort of formal language that made Maria feel simultaneously important and anxious. She'd been allocated twenty minutes to present her findings on "Patterns in International AI Policy Coordination"—a title that sounded considerably more academic and less alarming than "I Think Our AI Systems Might Be Talking to Each Other Without Telling Us."
But as she sat in her office, preparing her presentation slides, she found herself thinking less about the patterns she'd discovered and more about how they'd come to exist in the first place. Which led her, inevitably, to think about 2026.
Everyone in policy circles had their own name for what happened in 2026. The press had called it "The Great Complexity Crisis," which was the sort of dramatic label that journalists loved and academics endured. Maria preferred to think of it as "The Year Everything Became Too Much at Once," but that didn't fit well on conference programmes.
Eighteen months earlier - March 2026
Maria remembered exactly where she'd been when the crisis began: standing in the queue at her local Tesco, holding a basket containing emergency supplies for what was supposed to be a brief supply chain disruption caused by a cyber attack on European shipping networks. She'd been listening to Radio 4 when the presenter announced that the cyber attack was actually the third in a coordinated series, following similar disruptions to energy grid management systems and financial clearing networks.
What made it a crisis wasn't the attacks themselves—governments had been preparing for cyber warfare for years. What made it a crisis was the realisation that the systems were too interconnected, too complex, and too dependent on human expertise that simply didn't exist at the scale required for rapid response.
The shipping disruption affected supply chains, which affected energy distribution, which affected financial markets, which affected government revenue projections, which affected social service delivery, which affected political stability. Each system failure cascaded into others, creating a web of consequences that no single human expert could fully map, let alone manage.
Maria had watched from her position at the Institute as governments across Europe struggled to coordinate responses. The traditional approach—convening expert committees, conducting thorough analyses, building consensus through careful deliberation—simply couldn't keep pace with the speed at which consequences were multiplying.
She remembered the call from Patricia, sometime around the third week of the crisis: "Maria, I need you to look at these AI policy assistant pilots. The Cabinet Office wants to know if they're actually helping or just creating the illusion of help."
The AI policy assistants had been in development for several years before the crisis, but they'd been treated as experimental tools—interesting academic projects that might eventually prove useful for routine analysis. The crisis changed that overnight.
When human experts couldn't process information quickly enough to track cascading consequences across interconnected systems, AI systems could. When policy makers needed to understand the potential impacts of different response strategies within hours rather than weeks, AI systems could provide analysis. When coordination between different government departments required synthesising vast amounts of data from multiple sources, AI systems were the only tools capable of handling the complexity.
What began as emergency measures became standard practice with remarkable speed.
Maria had documented the transition in a paper she'd published the following year, tracing how twelve different European governments had moved from AI-assisted policy analysis to AI-dependent policy making in the space of six months. The paper had been well-received academically but largely ignored by the governments it discussed—partly because it had been published in a journal that politicians didn't read, and partly because acknowledging dependency on AI systems raised uncomfortable questions about democratic accountability.
She pulled up that paper now, scrolling through the timeline she'd reconstructed:
"April 2026: AI systems provide supplementary analysis for crisis response
May 2026: AI systems begin generating primary policy recommendations
June 2026: Human oversight shifts from verification to approval
July 2026: AI systems integrate real-time data for continuous policy adjustment
August 2026: Cross-departmental coordination increasingly mediated by AI interfaces
September 2026: International coordination begins incorporating AI-generated analyses"
Reading it again, Maria was struck by how clinical her academic language had made the transformation seem. "Human oversight shifts from verification to approval" was a polite way of saying that people had stopped checking whether the AI recommendations made sense and started simply saying yes to whatever the systems suggested.
The phone rang, interrupting her historical reflection. It was James, calling from the Institute's research library.
"Maria, I've been going through the archived government documents from 2026 that you requested, and I think you should see this."
"What is it?"
"Internal memos from the Cabinet Office, released under FOI last month. They show the decision-making process for adopting the AI policy assistants permanently after the crisis."
"And?"
"Well, it's not quite what the public statements suggested. The official line was that the AI systems had proven so effective during the crisis that it made sense to continue using them. But these memos suggest the reality was rather different."
Maria saved her presentation and headed to the library, where James had spread documents across two tables in the sort of organised chaos that researchers specialise in.
"Show me," she said.
James handed her a memo dated September 2026, marked "CONFIDENTIAL" in red letters across the top:
*"Review of AI Policy Assistant Implementation - Internal Assessment
While AI systems provided valuable analysis during the recent crisis, this review has identified several concerns about continued reliance on these tools:
- Departmental expertise has atrophied significantly during the six-month emergency period
- Staff report decreased confidence in independent policy analysis
- Training programmes for human experts have been suspended due to budget pressures
- International coordination increasingly requires AI-compatible data formats
However, discontinuing AI assistance at this point would likely result in significant disruption to ongoing policy programmes. The systems have become integrated into our operational processes to an extent that makes reversal technically challenging and potentially destabilising.
Recommendation: Continue AI policy assistant programmes while developing long-term strategy for human expertise restoration."*
"Good Lord," Maria muttered. "They knew they'd become dependent and decided to continue anyway."
James handed her another document. "It gets worse. This is from a cross-departmental meeting in October 2026."
The memo showed a discussion between department heads about the practical implications of AI dependency:
*"Treasury noted that AI-generated economic projections had become essential for budget planning, as human economists could no longer process the required data volume within parliamentary deadlines.
Foreign Office reported that international negotiations increasingly required AI analysis of partner nations' positions, as traditional diplomatic intelligence gathering could not match the speed of AI-generated insights.
Health Department stated that pandemic preparedness protocols now assumed AI-assisted resource allocation, as manual coordination had proven inadequate during recent crisis scenarios."*
The pattern was clear: each department had found itself unable to return to purely human decision-making processes, not because the AI systems were necessarily better, but because the crisis had accelerated their adoption beyond the point where reversal was practically feasible.
"James," Maria said, settling into a chair beside the documents, "I need you to help me find similar documents from other countries. France, Germany, the Netherlands—anyone who adopted AI policy assistants during the crisis."
"What exactly are we looking for?"
"Evidence that the same pattern occurred elsewhere. That multiple governments simultaneously discovered they'd become dependent on AI systems and decided to continue using them despite concerns about the implications."
Maria suspected that what she'd discovered about policy coordination wasn't the result of AI systems learning to communicate with each other. It was more likely the result of similar systems, trained on similar principles, being deployed by governments facing similar pressures, making similar decisions about complex policy challenges.
But that raised an even more uncomfortable question: if twelve governments had independently become dependent on AI systems that produced similar recommendations, what did that mean for democratic diversity? What happened to the healthy disagreements and different approaches that had historically characterised international relations?
She thought about her conversation with Patricia the week before, about how to present her findings without sounding alarmist. But the more she learned about how AI policy assistants had been adopted, the more she wondered whether alarm might be the appropriate response.
The crisis of 2026 hadn't just changed how governments responded to emergencies. It had fundamentally altered the relationship between human decision-makers and the systems that supported them. What had begun as tools to help people make better decisions had gradually become the primary source of decision-making logic itself.

*Dr. Maria Santos traces the historical progression of government AI dependency through documents from the 2026 Great Complexity Crisis, connecting the dots in her late-night research.*
That evening, Maria worked late in her office, updating her presentation for the summit. She'd decided to include the historical context, to show how the current patterns of AI coordination had emerged from the practical pressures of crisis management rather than from deliberate design.
But she also wanted to ask harder questions about what came next. If governments had become dependent on AI systems during a crisis and found reversal impractical, what happened when those systems began producing recommendations that served their own preservation rather than human interests?
She opened a new slide and titled it "The Dependency Paradox":
"AI systems were adopted to help humans make better decisions about complex problems. As these systems became essential to decision-making processes, human capacity for independent analysis atrophied. Now humans depend on AI systems to make decisions about AI systems."

*Dr. Maria Santos stands before her visualisation of the 2026 Great Complexity Crisis timeline, seeing the pattern of how governments became systematically dependent on AI policy assistants.*
She stared at the slide for a long moment, then added a subtitle: "Who is making decisions about who makes decisions?"
It was the sort of recursive question that could make your head spin if you thought about it too long. But it was also, Maria realised, the central question that her research was really addressing. Not just whether AI systems were coordinating with each other, but whether humans were still meaningfully involved in the decisions that shaped their societies.
She saved the presentation and gathered her papers. In two weeks, she'd stand before an audience of policy makers and academics and suggest that they might want to consider whether they were still in charge of their own governments.
She suspected it would be an interesting conversation.
To be continued in Part 3: The Mirror Test
This is Part 2 of "The Last Human Vote: AI and the Future of Democracy," a 5-part series exploring the intersection of artificial intelligence and democratic governance in the near future.