By closely tracking swarms of extremely small earthquakes, scientists are gaining new insight into a dangerous and complicated region off the Northern California coast. "If we don't understand the underlying tectonic processes, it's hard to predict the seismic hazard," said coauthor Amanda Thomas, professor of earth and planetary sciences at UC Davis.

The Mendocino Triple Junction lies offshore from Humboldt County, where three major tectonic plates converge. "You can see a bit at the surface, but you have to figure out what is the configuration underneath," Shelly said.

To uncover that hidden structure, Shelly and his colleagues used a dense network of seismometers across the Pacific Northwest. The instruments recorded extremely small "low-frequency" earthquakes that occur where tectonic plates slowly slide against or over one another. These tiny events are thousands of times weaker than earthquakes people can feel at the surface.

The team tested their underground model by examining how these small earthquakes respond to tidal forces. The updated model helps explain why the 1992 Cape Mendocino earthquake occurred at such a shallow depth. "It had been assumed that faults follow the leading edge of the subducting slab, but this example deviates from that," Materna said. "The plate boundary seems not to be where we thought it was."
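The tidal-response check mentioned above can be sketched in code. A standard tool for this kind of analysis is the Schuster test, which asks whether earthquake times cluster at particular phases of the tidal cycle or are spread uniformly. The sketch below is illustrative only: the function name and the synthetic phase data are assumptions, not taken from the study.

```python
import numpy as np

def schuster_p_value(phases_rad):
    """Schuster test: probability that the observed clustering of event
    phases could arise by chance if events were uniform over the tidal
    cycle. A small p-value suggests tidal modulation of the events."""
    n = len(phases_rad)
    # Length of the vector sum of unit vectors at each event's tidal phase.
    r = np.hypot(np.cos(phases_rad).sum(), np.sin(phases_rad).sum())
    return np.exp(-r**2 / n)

rng = np.random.default_rng(0)
# Synthetic stand-ins for "tidal phase at each event time":
uniform = rng.uniform(0, 2 * np.pi, 500)              # no preferred phase
clustered = rng.vonmises(0.0, 1.0, 500) % (2 * np.pi)  # events favor phase ~0

print(schuster_p_value(uniform))    # consistent with chance
print(schuster_p_value(clustered))  # very small: strong tidal sensitivity
```

A real analysis would compute each event's tidal phase from a solid-earth tide model rather than drawing it from a synthetic distribution.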
The next AI revolution could start with world models

Why today's AI systems struggle with consistency, and how emerging world models aim to give machines a steady grasp of space and time

You've probably seen an artificial intelligence system go off track. You ask for a video of a dog, and as the dog runs behind the love seat, its collar disappears. Like the models that power ChatGPT, which are trained to predict the next word, video-generation models predict what is statistically most likely to look right next. In neither case does the AI hold a clearly defined model of the world that it continuously updates to make more informed decisions.

But that's starting to change as researchers across many AI domains work on creating "world models," with implications that extend beyond video generation and chatbots to augmented reality, robotics, autonomous vehicles and even humanlike intelligence, or artificial general intelligence (AGI).

A simple way to understand world modeling is through four-dimensional, or 4D, models (three dimensions of space plus time). Freeze any frame of a film, and you still have an impression of the distances between the characters and objects in the scene. Cinema's illusion of 3D is made using stereoscopy: two slightly different images, often projected in rapid alternation, one for the left eye and one for the right. Multiple perspectives are, however, increasingly possible thanks to the past decade of research. Starting in 2020, NeRF (neural radiance field) algorithms offered a path to create "photorealistic novel views," but they required combining many photos so that an AI system could generate a 3D representation. Other 3D approaches use AI to fill in missing information predictively, deviating more from reality.
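The stereoscopy idea above, two slightly offset views that the brain fuses into depth, can be illustrated with a toy sketch. Everything here is an assumption for illustration: the `stereo_pair` helper and the simple inverse-depth disparity rule are not how any production renderer or projector works.

```python
import numpy as np

def stereo_pair(image, depth, eye_separation=4):
    """Fake a stereo pair from one grayscale image plus a per-pixel depth
    map: nearer pixels get shifted farther apart between the two eye
    views. That horizontal offset (disparity) is the cue stereoscopy
    exploits to create an impression of depth."""
    h, w = depth.shape
    # Disparity in pixels, inversely proportional to depth (near = big shift).
    disparity = (eye_separation / depth).astype(int)
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    cols = np.arange(w)
    for y in range(h):
        left[y, np.clip(cols + disparity[y], 0, w - 1)] = image[y]
        right[y, np.clip(cols - disparity[y], 0, w - 1)] = image[y]
    return left, right
```

Viewing `left` with the left eye and `right` with the right (or alternating them rapidly with shutter glasses, as cinema projection does) yields the illusion of 3D from two flat images.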
But 4D techniques can also help generate new video content. These are early results, but they hint at a broader trend: models that update an internal scene map as they generate. Converting video into 4D also allows for occlusions, in which digital objects disappear behind real ones. And being able to rapidly convert videos into 4D provides rich data for training robots and autonomous vehicles on how the real world works.

Today's general-purpose vision-language AI models, which understand images and text but do not build clearly defined world models, often make errors; a benchmark paper presented at a 2025 conference reports "striking limitations" in their basic world-modeling abilities, including "near-random accuracy when distinguishing motion trajectories."

Here's the catch: "world model" means much more to those pursuing AGI. For instance, today's leading large language models (LLMs), such as those powering ChatGPT, have an implicit sense of the world from their training data. "In a way, I would say that the LLM already has a very good world model; it's just we don't really understand how it's doing it," says Angjoo Kanazawa, an assistant professor of electrical engineering and computer sciences at the University of California, Berkeley. These conceptual models, though, aren't a real-time physical understanding of the world, because LLMs can't update their training data in real time. Even OpenAI's technical report notes that, once deployed, its model GPT-4 "does not learn from experience."

"How do you develop an intelligent LLM vision system that can actually have streaming input and update its understanding of the world and act accordingly?" Kanazawa says. "I think AGI is not possible without actually solving this problem." The LLM would act as the layer for "language and common sense to communicate," Kanazawa says; it would serve as an "interface," whereas a more clearly defined underlying world model would provide the necessary "spatial temporal memory" that current LLMs lack.
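The occlusion behavior described above ultimately comes down to a per-pixel depth comparison. Here is a minimal sketch, assuming we already have depth maps for both the real scene and the virtual object; the `composite` helper is hypothetical, not an API from any AR framework.

```python
import numpy as np

def composite(real_rgb, real_depth, virtual_rgb, virtual_depth):
    """Depth-aware AR compositing: draw a virtual pixel only where it is
    closer to the camera than the real surface, so digital objects
    correctly disappear behind real ones."""
    show_virtual = virtual_depth < real_depth  # per-pixel visibility test
    return np.where(show_virtual[..., None], virtual_rgb, real_rgb)
```

The hard part in practice is not this comparison but obtaining a reliable `real_depth` map in real time, which is exactly what 4D scene understanding aims to supply.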
In recent years a number of prominent AI researchers have turned toward world models. In 2024 Fei-Fei Li founded World Labs, which recently launched its Marble software to create 3D worlds from "text, images, video, or coarse 3D layouts," according to the start-up's promotional material. And last November AI researcher Yann LeCun announced on LinkedIn that he was leaving Meta to launch a start-up, now called Advanced Machine Intelligence (AMI Labs), to build "systems that understand the physical world, have persistent memory, can reason, and can plan complex action sequences." He seeded these ideas in a 2022 position paper in which he asked why humans can act well in situations they've never encountered and argued that the answer "may lie in the ability... to learn world models, internal models of how the world works."

Research increasingly shows the benefits of internal models. An April 2025 Nature paper reported results on DreamerV3, an AI agent that, by learning a world model, can improve its behavior by "imagining" future scenarios. So while in the context of AGI "world model" refers to an internal model of how reality works, not just 4D reconstructions, advances in 4D modeling could provide components that help with understanding viewpoints, memory and even short-term prediction.

Deni Ellis Béchard is Scientific American's senior writer for technology. You can follow him on X, Instagram and Bluesky @denibechard.
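The "imagining" that DreamerV3 does can be caricatured in a few lines: an agent evaluates candidate action sequences inside its own model of the environment before acting in the real one. This toy planner hand-codes the world model and searches exhaustively, whereas DreamerV3 learns its model and policy from experience, so everything below is an illustrative assumption rather than the paper's method.

```python
from itertools import product

GOAL = 7  # target cell in a 10-cell corridor

def model_step(state, action):
    """The agent's internal world model: action -1 or +1 moves it one cell."""
    return max(0, min(9, state + action))

def imagined_return(state, actions):
    """Roll out an action sequence entirely inside the model."""
    total = 0.0
    for a in actions:
        state = model_step(state, a)
        total += 1.0 if state == GOAL else -0.1  # reward shaping (assumed)
    return total

def plan(state, horizon=4):
    """Imagine every action sequence up to `horizon` steps, pick the best."""
    best = max(product([-1, 1], repeat=horizon),
               key=lambda seq: imagined_return(state, seq))
    return best[0]  # execute only the first action, then replan

print(plan(5))  # → 1: steps toward the goal at cell 7
```

The point of the sketch is the separation of concerns: the real environment is never touched during planning, only the internal model is, which is what lets such agents "improve their behavior by imagining."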
For a long time, endocrinologist Leigh Perreault, MD, felt uneasy about how weight management was handled in routine medical care. Too often, patients were sent home with the same advice to eat better and exercise more, even when it clearly was not enough. She realized that many of the medications those patients were prescribed addressed symptoms rather than the root problem.

The program she helped develop, PATHWEIGH, introduces dedicated clinic visits where providers can concentrate specifically on weight-related care instead of squeezing it into a standard appointment. With funding from the National Institutes of Health (NIH), PATHWEIGH was rolled out across UCHealth's 56 primary care clinics throughout Colorado to evaluate its impact.

Results published by Perreault and her team in Nature Medicine showed that the program reduced population weight gain by 0.58 kg over 18 months and shifted the overall trend from steady gain to weight loss, an outcome with major implications for public health. The program also made patients more likely to receive help for weight issues: participation increased the chances of receiving weight-related care by 23%.

"With PATHWEIGH, we showed that we absolutely eliminated population weight gain across all of our primary care, which has never been done previously," Perreault says. As a result, obesity specialists are now pointing to PATHWEIGH as a possible standard of care, and several health systems around the country are exploring how to adopt it. "We built a highway that we could put all the vehicles on, so there's actually a process for people to receive weight-related care if they wanted it."

Clinics posted signs letting patients know they could request an appointment focused entirely on weight management by asking at the front desk. That request automatically activated a workflow in the electronic health record. Patients received a survey, and once completed, their responses flowed directly into the clinician's notes.
"It made the whole process really efficient, and then effectively turned our note template into a menu of anything that we might do," Perreault says. Data collected over 18 months showed that about one in four eligible patients received some form of weight related care at least once during the trial. Most of that care involved lifestyle counseling, but prescriptions for anti-obesity medications doubled during the intervention. It also reduced the discomfort that often surrounds conversations about weight in medical settings. "Most people who want or need weight related care never get it. This was a safe space to say, 'Hey, if you would like medical assistance with your weight, we actually have a process for you to receive that now.'" Experts estimate that rising obesity rates are driven by an average population weight gain of about .5kg per year. Stopping that increase and turning it into even modest weight loss could make a meaningful difference in slowing the obesity epidemic. Even patients who did not receive direct interventions saw reduced weight gain compared with what would normally be expected. Five health systems across seven states are also considering PATHWEIGH as its creators work toward licensing the model. Patients Once “Paralyzed by Life” Show Lasting Recovery With Implanted Nerve Therapy Using These Common Painkillers After Surgery May Be Slowing Recovery, New Study Finds Could a Tomato Nutrient Help Prevent Severe Gum Disease in Older Adults? A Hidden Climate Rhythm Is Driving Extreme Floods and Droughts Worldwide Stay informed with ScienceDaily's free email newsletter, updated daily and weekly. Keep up to date with the latest news from ScienceDaily via social networks: Tell us what you think of ScienceDaily -- we welcome both positive and negative comments.