When seismologists at Los Alamos National Laboratory fed seismic data from Hawaii's 2018 Kīlauea volcano eruption into an AI system originally designed to transcribe human speech, something unexpected happened. The model, which had spent its career learning to distinguish "cat" from "caught," began detecting patterns in the rumbling earth that preceded fault slips—patterns invisible to traditional analysis. The AI could predict when the ground would shift, sometimes giving warning before the rock actually moved.
This wasn't the breakthrough we've been waiting for—AI still can't tell us when the Big One will hit San Francisco. But it represents something potentially more useful: a new ability to read the earth's warning signs in real time, when every second counts.
Why Speech Recognition Cracked the Code
The connection between Alexa understanding your grocery list and predicting earthquakes isn't immediately obvious, but it makes perfect sense to Brian Kulis. The Boston University professor spent years building speech recognition systems at Amazon before turning his attention to seismic data. His insight: earthquake waveforms recorded by seismometers look essentially identical to audio waveforms captured by microphones.
Both are continuous streams of vibrations converted into data. Both contain meaningful signals buried in noise. Both require algorithms that can distinguish subtle variations in frequency, amplitude, and timing. When Kulis and other researchers adapted Meta's Wav2Vec-2.0—a model trained on human speech—to analyze seismic signals, they bypassed a major obstacle that had stymied traditional machine learning approaches.
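The resemblance is more than a metaphor. In code, making a seismogram digestible to a speech model is largely a matter of matching sample rates and amplitude ranges. The sketch below is illustrative Python, not the researchers' actual pipeline; the sample rates and the `prepare_for_speech_model` helper are assumptions.

```python
import numpy as np

def prepare_for_speech_model(trace, fs_in=100, fs_out=16_000):
    """Make a seismogram look like the audio a speech model expects.

    Both seismometers and microphones produce a one-dimensional series of
    vibration measurements; the practical differences are sample rate
    (seismometers often record at ~100 Hz, speech models expect 16 kHz)
    and amplitude scale.
    """
    trace = np.asarray(trace, dtype=np.float64)
    trace = trace - trace.mean()                    # remove DC offset
    trace = trace / (np.abs(trace).max() + 1e-12)   # normalize to [-1, 1]
    # Resample by linear interpolation; a real pipeline would use a
    # proper anti-aliased resampler such as scipy.signal.resample_poly.
    n_out = int(len(trace) * fs_out / fs_in)
    t_in = np.arange(len(trace)) / fs_in
    t_out = np.arange(n_out) / fs_out
    return np.interp(t_out, t_in, trace)

# Example: 10 s of synthetic "fault rumble" recorded at 100 Hz
rng = np.random.default_rng(0)
seismogram = rng.standard_normal(1000) * np.hanning(1000)
audio_like = prepare_for_speech_model(seismogram)
print(len(audio_like))  # 160000 samples: 10 s at 16 kHz
```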
Previous earthquake prediction models required researchers to manually label training data, a time-consuming process that limited the scale of what could be learned. Wav2Vec-2.0 uses self-supervised learning, meaning it can train itself on raw, unlabeled seismic waveforms. Feed it enough data from rumbling faults and collapsing magma chambers, and it learns to recognize patterns on its own.
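The core trick fits in a few lines. Wav2Vec-2.0's actual objective is contrastive and operates on quantized latent representations, but the essence—hide stretches of the signal and train a model to recover them from context—can be shown in a toy numpy sketch. Everything here (function names, span sizes, the stand-in "model") is illustrative.

```python
import numpy as np

def masked_prediction_example(wave, span=20, n_spans=5, rng=None):
    """Build one self-supervised training example.

    No human labels are needed: the waveform itself supplies the target.
    We zero out random spans; a model would be trained to predict the
    hidden samples, with loss computed only over masked positions.
    """
    rng = rng or np.random.default_rng()
    wave = np.asarray(wave, dtype=np.float64)
    mask = np.zeros(len(wave), dtype=bool)
    for _ in range(n_spans):
        start = rng.integers(0, len(wave) - span)
        mask[start:start + span] = True
    corrupted = wave.copy()
    corrupted[mask] = 0.0        # the model sees zeros here...
    target = wave[mask]          # ...and must predict these values
    return corrupted, mask, target

wave = np.sin(np.linspace(0, 40 * np.pi, 2000))   # stand-in seismic trace
x, m, y = masked_prediction_example(wave, rng=np.random.default_rng(1))

# A trivial stand-in "model": linearly interpolate across the gaps,
# then score it only on the masked positions, as real pretraining does.
idx = np.arange(len(wave))
pred = np.interp(idx[m], idx[~m], x[~m])
loss = np.mean((pred - y) ** 2)
print(round(float(loss), 4))
```

Swap the interpolation for a neural network and repeat over millions of unlabeled waveforms, and you have the shape of self-supervised pretraining.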
The Los Alamos team pretrained their model on continuous seismic data, then fine-tuned it using recordings from Kīlauea's three-month collapse sequence. The model learned to analyze signals from the collapsing magma chamber and predict when faults would slip—outperforming traditional methods that struggle with the inherently chaotic nature of seismic activity.
The Aftershock Advantage
While AI can't yet predict initial earthquakes, it's proving remarkably capable of forecasting what happens next. When a major earthquake strikes, the critical question for emergency responders isn't whether aftershocks will occur—they almost certainly will—but where, when, and how strong they'll be.
Traditional forecasting relies on the ETAS model (Epidemic-Type Aftershock Sequence), which runs thousands of simulations to estimate aftershock probability. It works well enough that Italy, New Zealand, and the United States use it operationally. The problem: ETAS calculations take hours or days to run on a standard computer. By the time authorities get actionable forecasts, the most dangerous aftershock window may have already passed.
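The heart of ETAS is a compact formula: the earthquake rate at any moment is a background rate plus a contribution from every past shock, each scaled by its magnitude and decaying by the Omori-Utsu power law. Evaluating the rate is cheap—the expense comes from simulating thousands of possible futures from it. A minimal sketch, with parameter values that are illustrative rather than fitted to any real catalog:

```python
import math

def etas_rate(t, events, mu=0.02, K=0.05, alpha=1.0, c=0.01, p=1.1, m0=4.0):
    """Conditional intensity of the ETAS model.

    events: list of (time_in_days, magnitude) for past earthquakes.
    Returns the expected number of earthquakes per day at time t:
    a background rate mu, plus one Omori-Utsu power-law term per past
    event, each amplified exponentially by the event's magnitude.
    """
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m0)) / (t - t_i + c) ** p
    return rate

# A magnitude-6 mainshock at t=0 with a magnitude-4.5 aftershock at t=0.5
catalog = [(0.0, 6.0), (0.5, 4.5)]
rate_day1 = etas_rate(1.0, catalog)    # elevated rate one day after
rate_day10 = etas_rate(10.0, catalog)  # much lower ten days later
print(round(rate_day1, 3), round(rate_day10, 3))
```

An operational ETAS forecast draws thousands of synthetic aftershock sequences from this rate function, which is where the hours of computation go—and what the AI models sidestep.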
In November 2025, Foteini Dervisi, a PhD student working with the British Geological Survey and the Universities of Edinburgh and Padua, published research demonstrating that AI models can produce comparable forecasts in seconds. Her team trained models on earthquake data from five major seismic regions—California, New Zealand, Italy, Japan, and Greece—teaching them to predict where and how many aftershocks would occur within 24 hours following earthquakes of magnitude 4 or higher.
The speed matters tremendously for emergency response. Should rescue teams enter damaged buildings? Should residents shelter in place or evacuate? Answering these questions hours faster could mean the difference between saving lives and losing them.
Reading the Earth's Vital Signs
Another promising approach combines deep learning with satellite data. Researchers trained a neural network called M-Large on 10,000 simulated earthquakes, teaching it to predict ground shaking intensity using high-rate global navigation satellite system (HR-GNSS) data. The system achieved average warning times of 40.5 seconds for moderate shaking and 25.8 seconds for severe shaking—enough time for automated systems to shut down gas lines, halt trains, and trigger emergency alerts.
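The arithmetic behind such warning times is simple: alerts travel at effectively the speed of light, so the available warning is roughly the S-wave travel time to your location minus the time spent detecting the event and issuing the alert. A back-of-the-envelope sketch—the wave speed and processing delay below are typical textbook values, not M-Large's measured figures:

```python
def warning_time_s(distance_km, v_s=3.5, processing_s=5.0):
    """Approximate seconds of warning before damaging S waves arrive.

    v_s: shear-wave speed in km/s (~3.5 km/s in crustal rock).
    processing_s: assumed delay to detect the quake and push the alert.
    The alert itself travels at network speed, effectively instantly
    compared with the waves.
    """
    return distance_km / v_s - processing_s

for d in (50, 150, 300):
    print(f"{d} km from rupture: {warning_time_s(d):.1f} s of warning")
```

The geometry explains why warning systems help most at intermediate distances: too close to the rupture and the S waves outrun the alert; far away, the shaking is weaker anyway.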
This might sound modest, but 40 seconds is an eternity where earthquakes are concerned. Japan's bullet trains can brake to a stop. Nuclear reactors can initiate emergency protocols. People can take cover under desks or in doorways. The 2011 Tōhoku earthquake demonstrated both the value and limitations of such warnings: Japan's system provided crucial seconds of notice, but couldn't prevent catastrophic damage from the magnitude 9.0 quake and resulting tsunami.
What makes the M-Large approach intriguing is how it learns. The neural network appears to identify earthquake scaling relationships—mathematical patterns that connect initial seismic signals to eventual rupture size—without being explicitly programmed to look for them. It discovers these relationships by analyzing satellite measurements of ground displacement in real time, essentially watching the earth's surface deform as energy releases.
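The best-known relationship in this family is the textbook definition of moment magnitude, which ties rupture area and slip to the magnitude eventually assigned to the quake. The network presumably rediscovers relationships of this kind from data rather than computing them directly; the classical version looks like this:

```python
import math

def moment_magnitude(mu_pa, area_m2, slip_m):
    """Moment magnitude from the standard seismological definitions:

    M0 = mu * A * D          (seismic moment, in newton-meters)
    Mw = (2/3) * (log10(M0) - 9.1)

    mu_pa: crustal rigidity in pascals (~3e10 Pa is typical).
    area_m2: rupture area; slip_m: average slip on the fault.
    """
    m0 = mu_pa * area_m2 * slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# A 40 km x 15 km rupture with 1.5 m of average slip in typical crust
mw = moment_magnitude(mu_pa=3.0e10, area_m2=40e3 * 15e3, slip_m=1.5)
print(round(mw, 2))  # roughly magnitude 6.9
```

Because magnitude grows with the logarithm of moment, even coarse early estimates of how much ground is displacing constrain the eventual rupture size—which is what makes the real-time GNSS approach work.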
The Fault Line Between Prediction and Preparation
The seismic community remains divided on whether true earthquake prediction will ever be possible. Earthquakes result from complex interactions between tectonic stress, rock properties, fault geometry, and fluid pressure—a system so sensitive to initial conditions that small changes can dramatically alter outcomes. Some seismologists argue this makes long-term prediction fundamentally impossible, like trying to forecast exactly which raindrop will trigger an avalanche.
But AI's successes in aftershock forecasting and fault behavior monitoring suggest a middle path: not prediction in the sense of saying "a magnitude 7 will strike Los Angeles on March 15," but rather continuous risk assessment that identifies when and where the earth is becoming more dangerous. Faults emit distinct signals as they shift. Stress accumulates in measurable ways. AI models trained on enough data might learn to recognize when a fault is approaching failure, even if they can't pinpoint the exact moment of rupture.
The Los Alamos research on Kīlauea offers a proof of concept. The AI didn't predict the volcano would erupt, but once eruption began, it tracked the magma chamber collapse in real time and forecast subsequent fault movements. Applied to tectonic faults, similar systems might not tell us when the Big One will hit, but could provide evolving probability estimates—"elevated risk in the next 72 hours" rather than "earthquake at 3:47 PM Tuesday."
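Turning a forecast rate into an "elevated risk" statement is standard probability: under a Poisson assumption, a model's estimated event rate converts directly into the chance of at least one event in a window. The rate in this sketch is hypothetical, not the output of any real system.

```python
import math

def prob_at_least_one(rate_per_day, window_days):
    """Poisson probability of one or more events in a time window.

    With expected count lam = rate * window, the chance of zero events
    is exp(-lam), so the chance of at least one is 1 - exp(-lam).
    """
    return 1.0 - math.exp(-rate_per_day * window_days)

# Hypothetical model output: 0.02 expected M5+ events/day on this fault
p72 = prob_at_least_one(0.02, 3)
print(f"{p72:.1%} chance of an M5+ event in the next 72 hours")
```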
When Seconds Become Lifelines
The practical benefits extend beyond earthquakes. The same AI architectures being trained on seismic data can analyze signals from volcanoes, landslides, and other geological hazards. Kulis received NSF funding to develop what he calls "a large foundational model for earthquake understanding"—essentially, an AI system that learns general principles of how the earth moves and applies them across different contexts.
This matters because most of the world's earthquake-prone regions lack the dense sensor networks that blanket California and Japan. AI models trained on data-rich areas could potentially transfer their knowledge to data-poor regions, forecasting aftershock risk in Nepal or Turkey based on patterns learned from New Zealand and Italy. The computational efficiency makes this feasible: rather than requiring expensive supercomputers, these AI models run on ordinary hardware.
Emergency managers are already incorporating these tools into response protocols. When aftershock forecasts arrive in seconds rather than hours, they can make faster decisions about where to deploy rescue teams, which buildings to evacuate, and how to allocate limited resources. The forecasts aren't perfect—earthquakes remain inherently unpredictable—but they're good enough to improve outcomes.
The real achievement isn't that AI has learned to predict earthquakes. It hasn't, and may never fully succeed at that goal. The achievement is that AI has learned to listen to the earth more carefully than we could before, detecting warnings in the noise and translating them into actionable information while there's still time to act. In a domain where certainty remains impossible, better uncertainty might be the best we can hope for—and increasingly, it's proving to be enough.