Imagine nuclear power plants built at lightning speed, fueled by the promise of AI. It sounds like progress. But what if the rush to innovate could lead to a nuclear catastrophe? That's the unsettling question looming over a new push to use artificial intelligence in the construction of nuclear power plants, spearheaded by industry giants like Microsoft and Westinghouse Nuclear.
These companies believe AI can dramatically accelerate the notoriously slow and complex process of nuclear plant licensing. Microsoft, for example, envisions AI slashing a licensing process that today takes a decade and costs around $100 million down to a fraction of that. Its plan involves training large language models (LLMs) on existing nuclear licensing documents and site data to automatically generate the necessary paperwork. The Idaho National Laboratory and Lloyd's Register are already experimenting with Microsoft's AI to "streamline" the process, and Westinghouse is touting its own AI, "Bertha," which promises to reduce licensing time from months to minutes.
A recent report from the AI Now Institute warns that this AI-driven acceleration could have disastrous consequences. The report argues that licensing isn't just about producing documents; it's a crucial process of rigorous safety checks and critical thinking. Heidy Khlaaf, chief AI scientist at AI Now, argues that Microsoft's proposal misunderstands the very essence of nuclear licensing.
"Nuclear licensing is a process, it's not a set of documents," Khlaaf explained. "They don't understand what it means to have nuclear licensing."
According to critics, the AI-driven approach risks turning a meticulous process into a mere box-ticking exercise, potentially bypassing essential safety considerations. Microsoft imagines an AI system that can draft an entire Environmental Review for a new project based on a simple prompt, then send it to a human who uses Copilot for "review and refinement." This raises a critical question: can a human reviewer truly catch every potential flaw or oversight in a document generated by an AI, especially when under pressure to expedite the process?
Sofia Guerra, a nuclear safety expert who advises the U.S. Nuclear Regulatory Commission (NRC) and the International Atomic Energy Agency (IAEA), echoed Khlaaf's concerns. While acknowledging that the current licensing process is imperfect, she stressed that the act of working through its iterations and analyses is itself crucial to ensuring safety.
The AI Now report also raises serious concerns about nuclear proliferation. Its authors point out that Microsoft is seeking not only historical data from the NRC but also real-time, project-specific data. This, they argue, is akin to asking for "nuclear secrets": know-how that isn't publicly available but is essential for building a nuclear plant. Tech companies typically host such data on secure cloud servers that comply with federal regulations, but the risk of sensitive information falling into the wrong hands remains a significant concern.
"This is a signal that AI providers are asking for nuclear secrets," Khlaaf stated. "To build a nuclear plant there is actually a lot of know-how that is not public knowledge…what’s available publicly versus what’s required to build a plant requires a lot of nuclear secrets that are not in the public domain."
To illustrate the potential dangers, Khlaaf draws a chilling parallel to the 1979 Three Mile Island accident, in which seemingly minor equipment failures, compounded by human error, led to a partial meltdown of the reactor core. Could AI-generated mistakes, even small ones, trigger a similar cascade of errors? What if an AI misidentifies a crucial software version, leading to a misunderstanding of its behavior and, ultimately, a catastrophic failure?
There's also the concerning trend of the White House pushing to sell old weapons-grade plutonium to the private sector for use in nuclear reactors, along with executive orders aimed at overhauling the NRC and speeding up reactor construction. Critics argue that these moves prioritize the demands of the AI industry over public safety.
Matthew Wald, an independent nuclear energy analyst, offers a more optimistic perspective. He believes AI could actually improve safety by consolidating and organizing vast amounts of regulatory information, potentially preventing accidents like Three Mile Island. He also emphasizes the inherent safety culture within the nuclear industry, where engineers are trained to be meticulous and skeptical.
"AI is helpful, but let’s not get messianic about it," Wald cautions.
Despite the potential benefits, Khlaaf and Guerra remain deeply concerned that the rush to embrace AI in nuclear power will ultimately undermine public trust and set back the cause of nuclear energy. They believe that if nuclear power isn't demonstrably safe, it simply isn't worth pursuing.
The question hanging over all of this: is the promise of faster, cheaper nuclear power worth the potential risk of a catastrophic accident or of nuclear proliferation? And are we sacrificing long-term safety for short-term gains in the AI arms race?
About the author: Matthew Gault is a writer covering weird tech, nuclear war, and video games. He’s worked for Reuters, Motherboard, and the New York Times.