Aurora supercomputer unlocks reactor-scale fusion simulations at Argonne

Category: Divertors, Simulations, Tokamak

Aurora supercomputer simulation visualises reactor-scale tokamak plasma behaviour, resolving edge physics and divertor heat loads in unprecedented detail.

(Image courtesy of the ALCF Visualization and Data Analytics Team; CS Chang and Julien Dominski/Princeton Plasma Physics Laboratory)

Aurora, the exascale supercomputer at Argonne National Laboratory, is starting to give fusion researchers something they have wanted for decades. Not faster post-processing, not better visualisation, but routine access to full-fidelity, reactor-scale physics in software, running on hardware that exists today rather than inside a machine that does not. Capable of more than a quintillion floating-point operations per second, its tightly coupled CPU-GPU architecture handles both large-scale simulation and AI workloads within the same infrastructure, which maps directly onto fusion’s mix of kinetic codes, fluid models, and data-driven prediction.


A core strand of work on Aurora targets tokamak disruptions. William Tang, principal research physicist at PPPL and lecturer with the rank of professor in astrophysical sciences at Princeton University, is leading the disruption prediction project alongside Argonne computational scientist Kyle Felker. Their team trains AI models on diagnostic archives from DIII-D and JET, using Aurora to extract spatiotemporal patterns in the run-up to instability. The goal is a disruption warning score delivered within milliseconds, fast enough to trigger mitigation or a controlled shutdown before a damaging event develops.
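As a loose illustration of the idea, the sketch below reduces a single diagnostic time series to a scalar warning score by comparing fluctuation amplitude across consecutive windows. The signal, window size, and thresholds are all invented for the example and bear no relation to the trained PPPL models, which learn spatiotemporal patterns across many diagnostics at once.

```python
import numpy as np

def disruption_warning_score(signal, window=20, growth_threshold=0.05):
    """Toy warning score in [0, 1]: how strongly the mean amplitude of
    the most recent window has grown relative to the window before it.
    All constants here are illustrative, not from any real predictor."""
    signal = np.asarray(signal, dtype=float)
    if len(signal) < 2 * window:
        return 0.0  # not enough history to form two windows
    recent = signal[-window:]
    baseline = signal[-2 * window:-window]
    # Relative growth of the windowed mean (small epsilon avoids /0)
    growth = (recent.mean() - baseline.mean()) / (abs(baseline.mean()) + 1e-12)
    # Logistic squashing centred on the growth threshold
    return float(1.0 / (1.0 + np.exp(-(growth - growth_threshold) / growth_threshold)))

rng = np.random.default_rng(0)
quiet = 1.0 + 0.01 * rng.standard_normal(100)          # quiescent plasma signal
ramp = np.concatenate([quiet, quiet[-1] + np.linspace(0.0, 1.0, 40)])  # growing mode

print("quiet:", disruption_warning_score(quiet))  # low score
print("ramp: ", disruption_warning_score(ramp))   # high score
```

A real predictor would feed many such signals through a learned model rather than one hand-tuned statistic, but the contract is the same: streaming diagnostics in, a single actionable risk score out, fast enough to act on.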


The plasma edge is getting similar treatment. Choongseok Chang, managing principal research physicist at PPPL, leads research resolving edge plasmas down to individual tungsten particles sputtered from divertor surfaces, tracking trajectories, radiation and confinement feedback in multi-trillion-particle kinetic runs. Simulations that once occupied leadership-class systems for days complete in hours. High-resolution parameter scans and rapid iteration have stopped being aspirational.


Those capabilities carry direct engineering consequences. With ensembles of high-fidelity runs rather than one-off hero simulations, design teams can quantify risk across divertor geometries and operating regimes: how often a configuration is likely to see unmanageable heat fluxes, where erosion concentrates on a complex three-dimensional surface, and which control strategies most reliably keep the machine within safe limits. That statistically grounded view is essential as projects move toward plant-class devices, where uncertainty margins narrow and regulatory requirements sharpen.
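The exceedance statistics such ensembles enable can be sketched in a few lines. The heat-flux distributions, geometry names, and the 10 MW/m² limit below are placeholders invented for the example, not published design data; the point is only the shape of the workflow, many runs in, one risk number per design out.

```python
import numpy as np

# Hypothetical ensembles of peak divertor heat flux (MW/m^2), one sample
# per high-fidelity run, for two made-up candidate geometries.
rng = np.random.default_rng(42)
ensembles = {
    "flat_target": rng.lognormal(mean=2.2, sigma=0.35, size=500),
    "v_shaped": rng.lognormal(mean=2.0, sigma=0.30, size=500),
}
LIMIT = 10.0  # assumed engineering limit (MW/m^2), illustrative only

def exceedance_probability(samples, limit):
    """Fraction of ensemble members whose peak heat flux exceeds the limit."""
    samples = np.asarray(samples, dtype=float)
    return float((samples > limit).mean())

for name, q in ensembles.items():
    p = exceedance_probability(q, LIMIT)
    print(f"{name}: P(q_peak > {LIMIT} MW/m^2) = {p:.3f}")
```

With one-off hero runs this probability is simply unknowable; with hundreds of runs per design it becomes a number that can be compared across geometries and carried into safety cases.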


Aurora’s design also makes it a natural hub for fusion’s growing AI stack. The same infrastructure training disruption predictors can support surrogate models for expensive kinetic codes, or real-time digital twins of operating machines, continuously updated with live diagnostic data. Argonne is already deploying analogous patterns in other domains. Fusion researchers are beginning to test how far they can push an always-on numerical counterpart of a reactor, evolving in step with each campaign.
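A surrogate in this sense is simply a cheap model fitted to the outputs of an expensive one. The toy below fits a low-degree polynomial to a stand-in function playing the role of a kinetic code; the function, the sampling, and the polynomial degree are arbitrary choices for illustration, and real surrogates for codes of this kind are typically neural networks over many input parameters.

```python
import numpy as np

def expensive_kinetic_code(x):
    """Stand-in for a simulation that would take hours per evaluation."""
    return np.sin(3 * x) * np.exp(-x)

# Sample the "expensive" code sparsely, then fit a cheap polynomial surrogate.
x_train = np.linspace(0.0, 2.0, 15)
y_train = expensive_kinetic_code(x_train)
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=6))

# Check the surrogate against the truth on a dense grid.
x_test = np.linspace(0.0, 2.0, 200)
err = np.max(np.abs(surrogate(x_test) - expensive_kinetic_code(x_test)))
print(f"max surrogate error on [0, 2]: {err:.4f}")
```

Once fitted, the surrogate answers in microseconds what the original code answers in hours, which is what makes ensemble scans and always-on digital twins computationally plausible.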


Behind the technical detail is a structural shift in how the work gets done. Exascale computing lets fusion move away from a slow loop of building hardware first and understanding it later, toward a tighter, software-driven cycle where designs, control strategies, and operating scenarios are stress-tested in simulation before a single shot is fired. Aurora is one of the first platforms to show what that looks like at fusion scale: the supercomputer becomes part of the experimental apparatus, not just a place to analyse what it produces.
