How did Atlantic editor-in-chief Jeffrey Goldberg get added to a Signal group chat with Trump administration officials discussing their plans for an airstrike in Yemen? The simplest explanation: National Security Adviser Mike Waltz had Goldberg saved as a contact in his phone and accidentally added him. Indeed, when Waltz first claimed that Goldberg's phone number was “sucked in” from another contact, Goldberg scoffed, “This isn't ‘The Matrix.'” But according to the Guardian, an internal investigation conducted by the White House's information technology office concluded that something more complicated had taken place, with an iPhone auto-suggestion playing a key role: After Goldberg emailed the White House for comment on a story, a Trump spokesperson, Brian Hughes, texted the contents of Goldberg's email to Waltz. As a result, Waltz's iPhone offered a “contact suggestion update” that ultimately saved Goldberg's phone number under Hughes' name. Then, when Waltz tried to add Hughes — now a spokesperson for the National Security Council — to the chat, he supposedly ended up adding Goldberg instead. For his part, Goldberg said, “I'm not going to comment on my relationship with Mike Waltz beyond saying I do know him and have spoken to him.”
Microsoft has released a browser-based, playable level of the classic video game Quake II. This functions as a tech demo for the gaming capabilities of Microsoft's Copilot AI platform — though by the company's own admission, the experience isn't quite the same as playing a well-made game. You can try it out for yourself, using your keyboard to navigate a single level of Quake II for a couple of minutes before you hit the time limit. In a blog post describing their work, Microsoft researchers said their Muse family of AI models for video games allows users to “interact with the model through keyboard/controller actions and see the effects of your actions immediately, essentially allowing you to play inside the model.” To show off these capabilities, the researchers trained their model on a Quake II level (which Microsoft owns through its acquisition of ZeniMax). “Much to our initial delight we were able to play inside the world that the model was simulating,” they wrote. “We could wander around, move the camera, jump, crouch, shoot, and even blow-up barrels similar to the original game.” At the same time, the researchers emphasized that this is meant to be “a research exploration” and should be thought of as “playing the model as opposed to playing the game.” More specifically, they acknowledged “limitations and shortcomings,” like the fact that enemies are fuzzy, the damage and health counters can be inaccurate, and most strikingly, the model struggles with object permanence, forgetting about things that are out of view for 0.9 seconds or longer. In the researchers' view, this can “also be a source of fun, whereby you can defeat or spawn enemies by looking at the floor for a second and then looking back up,” or even “teleport around the map by looking up at the sky and then back down.” Writer and game designer Austin Walker was less impressed by this approach, posting a gameplay video in which he spent most of his time trapped in a dark room. (This also happened to me both times I tried to play the demo, though I'll admit I'm extremely bad at first-person shooters.) Referring to Microsoft Gaming CEO Phil Spencer's recent statements that AI models could help with game preservation by making classic games “portable to any platform,” Walker argued this reveals “a fundamental misunderstanding of not only this tech but how games WORK.” “The internal workings of games like Quake — code, design, 3d art, audio — produce specific cases of play, including surprising edge cases,” Walker wrote. “That is a big part of what makes games good. If you aren't actually able to rebuild the key inner workings, then you lose access to those unpredictable edge cases.”
FOSS unites Red and Green. In a surprising turn of events, an Nvidia engineer pushed a fix to the Linux kernel, resolving a performance regression seen on AMD integrated and dedicated GPU hardware (via Phoronix). As it turns out, the same engineer inadvertently introduced the problem in the first place with a set of changes to the kernel last week, attempting to increase the PCI BAR space to more than 10TiB. This ended up incorrectly flagging the GPU as addressing-limited and hampering performance, but thankfully it was quickly picked up and fixed. In the open-source paradigm, it's an unwritten rule to fix what you break. The Linux kernel is open source and accepts contributions from everyone, which are then reviewed. Responsible contributors are expected to help fix issues that arise from their changes. So, despite the two companies' rivalry in the GPU market, FOSS (Free and Open Source Software) is an avenue that bridges the chasm between AMD and Nvidia. The regression was caused by a commit intended to increase the PCI BAR space beyond 10TiB, likely for systems with large memory spaces. This indirectly reduced KASLR entropy on consumer x86 devices, which determines the randomness of where the kernel's data is loaded into memory on each boot for security purposes. At the same time, it artificially inflated the apparent end of the kernel's directly mapped physical memory (direct_map_physmem_end), typically to 64TiB. In Linux, memory is divided into different zones, one of which is the device zone that can be associated with a GPU. The problem here is that when the kernel initialized device-zone memory for Radeon GPUs, the associated variable (max_pfn), which represents the total addressable RAM known to the kernel, would artificially increase to 64TiB. Since the GPU cannot access the entire 64TiB range, dma_addressing_limited() would return true. That check effectively restricts the GPU to the DMA32 zone, which offers only 4GB of addressable memory, and that explains the performance regressions. The good news is that the fix should arrive as soon as the pull request lands, right before the Linux 6.15-rc1 merge window closes today. With the typical six-to-eight-week cadence between new Linux kernel releases, we can expect the stable 6.15 release to be available around late May or early June.
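The chain of cause and effect described above boils down to a single comparison. Below is a minimal, illustrative sketch in Python (not the actual kernel C); the 44-bit DMA mask is an assumed example value, and the function is a deliberate simplification of the real kernel check.

```python
# Illustrative sketch of the addressing check described above.
# Assumptions: a 44-bit device DMA mask (example value); real kernel logic differs.

GIB = 1 << 30
TIB = 1 << 40

def dma_addressing_limited(dma_mask_bits: int, physmem_end: int) -> bool:
    """Treat a device as addressing-limited if the kernel's apparent top of
    physical memory lies beyond what the device can reach via DMA."""
    device_reach = (1 << dma_mask_bits) - 1
    return physmem_end - 1 > device_reach

# Before the regression: the physical map ends near the machine's real RAM.
print(dma_addressing_limited(44, 64 * GIB))  # False -> GPU keeps its full DMA range

# After the regression: max_pfn / direct_map_physmem_end is inflated to ~64 TiB,
# so the same GPU looks limited and is confined to the 4 GiB DMA32 zone.
print(dma_addressing_limited(44, 64 * TIB))  # True -> the reported performance hit
```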
Will this cable unlock high refresh 8K gaming? The Shenzhen 8K UHD Video Industry Cooperation Alliance, a group made up of more than 50 Chinese companies, just released a new wired media communication standard called the General Purpose Media Interface, or GPMI. This standard was developed to support 8K and reduce the number of cables required to stream data and power from one device to another. According to HKEPC, the GPMI cable comes in two flavors — a Type-B that seems to have a proprietary connector and a Type-C that is compatible with the USB-C standard. Because 8K has four times as many pixels as 4K and 16 times as many as 1080p, GPMI is built to carry far more data than other current standards. There are other variables that can affect required bandwidth, of course, such as color depth and refresh rate. The GPMI Type-C connector is set to have a maximum bandwidth of 96 Gbps and deliver 240 watts of power. That is more than double the 40 Gbps data limit of USB4 and Thunderbolt 4, while matching the power limit of the latest USB Type-C connectors using the Extended Power Range (EPR) standard.

Standard                  Bandwidth   Power delivery
DisplayPort 2.1 UHBR20    80 Gbps     No power
GPMI Type-B               192 Gbps    480W
GPMI Type-C               96 Gbps     240W
HDMI 2.1 FRL              48 Gbps     No power
HDMI 2.1 TMDS             18 Gbps     No power
Thunderbolt 4             40 Gbps     100W
USB4                      40 Gbps     240W

GPMI Type-B beats all other cables, though, with its maximum bandwidth of 192 Gbps and power delivery of up to 480 watts. While that still isn't enough to power an RTX 5090 gaming PC through your 8K monitor, it's more than enough for many gaming laptops with high-end discrete graphics. This will simplify the desk setup of people who prefer a portable gaming computer, since one cable can carry both power and data. Aside from that, the standard also supports a universal control protocol similar to HDMI-CEC, meaning you can use one remote control for all appliances that connect via GPMI and support the feature. The only widely used video transmission standards that also deliver power right now are USB Type-C (DP Alt Mode/HDMI Alt Mode) and Thunderbolt connections, and those are mostly limited to monitors, with many TVs still using HDMI. If GPMI becomes widely available, we'll soon be able to use just one cable to build a TV and streaming setup, making things much simpler.
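For a rough sense of why 8K pushes past older links, here is a back-of-the-envelope bandwidth calculation. This is a sketch only: it ignores blanking intervals, link encoding overhead, and compression such as DSC, and the 10-bit color depth and refresh rates are assumed example values rather than figures from the GPMI spec.

```python
# Back-of-the-envelope uncompressed video bandwidth.
# Ignores blanking, link encoding overhead, and Display Stream Compression;
# 10-bit color and the refresh rates below are assumed example values.

def uncompressed_gbps(width, height, refresh_hz, bits_per_channel=10, channels=3):
    bits_per_frame = width * height * bits_per_channel * channels
    return bits_per_frame * refresh_hz / 1e9

print(round(uncompressed_gbps(7680, 4320, 60), 1))   # ~59.7 Gbps: fits GPMI Type-C (96 Gbps)
print(round(uncompressed_gbps(7680, 4320, 120), 1))  # ~119.4 Gbps: needs GPMI Type-B (192 Gbps)
print(round(uncompressed_gbps(3840, 2160, 120), 1))  # ~29.9 Gbps: 4K at 120 Hz, for comparison
```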
This laptop has dropped to its lowest price of all time. Right now at Amazon, you can find the 15.6-inch Samsung Galaxy AI Book4 Edge laptop for one of its best prices to date. This Snapdragon X Plus-based laptop usually goes for around $899, but right now it's marked down to just $695. No expiration has been specified for the discount, so we don't know how long it will stay at this price, but it is labeled as a limited offer. We haven't had the opportunity to review the Samsung Galaxy AI Book4 Edge, but we're plenty familiar with several Snapdragon-powered Copilot+ machines. Recently, some controversy arose when Surface Laptop 7s were frequently returned due to compatibility issues. If you're considering this laptop, you might want to do a little research and make sure your favorite games and apps run well on Windows-on-Arm systems. On the positive side, once you go Arm, you should enjoy some of the longest battery life available on Windows devices. Samsung 15-inch Galaxy AI Book4 Edge: now $695 at Amazon (was $899). This laptop is built around a Snapdragon X Plus X1P-42-100 processor. It has a 15.6-inch FHD display and relies on a Qualcomm Adreno GPU. It comes with 16GB of LPDDR5X and a 500GB internal SSD for storage. The main processor driving the Samsung Galaxy AI Book4 Edge is a Snapdragon X Plus X1P-42-100. This CPU has eight cores with a base speed of 3.4 GHz and a single-core boost that takes it up to 3.8 GHz. For graphics, it relies on a Qualcomm Adreno GPU, which outputs to a 15.6-inch anti-glare display with an FHD resolution of 1,920 x 1,080 pixels. As far as memory goes, this edition comes with 16GB of LPDDR5X, and a 500GB internal SSD is fitted for storage. It has a pair of 2W speakers integrated for audio output, plus a 3.5mm audio jack. It has an HDMI 2.1 port for outputting video to a secondary screen and a handful of USB ports, including one USB 3.2 port and two USB4 ports. It is also worth noting that this price is cheaper than the current offer on the official Samsung website. If you want to check out this deal for yourself, head over to the Samsung 15-inch Galaxy AI Book4 Edge product page on Amazon US for more information and purchase options.
Melting occurs despite Corsair's first-party 600W 12VHPWR cable being used. Another Blackwell GPU bites the dust, as the meltdown reaper has reportedly struck a Redditor's MSI GeForce RTX 5090 Gaming Trio OC, with the damage tragically extending to the power supply as well. Ironically, the user avoided third-party cables and specifically used the original power connector, the one that was supplied with the PSU, yet both sides of the connector melted anyway. Nvidia's GeForce RTX 50 series GPUs have an inherent design flaw in which all six 12V pins are internally tied together. The GPU has no way of knowing whether each pin is seated and making proper contact, which prevents it from balancing the power load across them. In the worst-case scenario, five of the six pins may lose contact, resulting in almost 500W (41A) being drawn through a single pin. Given that PCI-SIG originally rated these pins for a maximum of 9.5A, this is a textbook fire/meltdown risk. The GPU we're looking at today is the MSI RTX 5090 Gaming Trio OC, which set the Redditor back a hefty $2,900 at purchase. That's still a lot better than the average price of an RTX 5090 from sites like eBay, currently sitting around $4,000. Despite using Corsair's first-party 600W 12VHPWR cable, the user was left with a melted GPU-side connector, a fate which extended to the PSU. The damage, in the form of a charred contact point, is quite visible and clearly looks as if excess current was drawn through one specific pin, corresponding to the design flaw mentioned above. The user is weighing an RMA for their GPU and PSU, but a GPU replacement is quite unpredictable due to persistent RTX 50 series shortages. Sadly, these incidents are still rampant despite Nvidia's assurances before launch. With the onset of enablement drivers (R570) for Blackwell, both RTX 50 and RTX 40 series GPUs began suffering from instability and crashes. Despite multiple patches from Nvidia, RTX 40 series owners haven't seen a resolution and are still reliant on reverting to older 560-series drivers. Moreover, Nvidia's decision to discontinue 32-bit OpenCL and PhysX support with RTX 50 series GPUs has left the fate of many legacy applications and games in limbo. As of now, the only real safeguard for an RTX 50 series GPU is to verify that current draw is balanced across each pin. You might want to consider Asus' ROG Astral GPUs, as they can provide per-pin current readings, a feature that's absent in reference RTX 5090 models. Alternatively, if you're feeling adventurous, you could develop your own power connector with built-in safety measures and per-pin sensing capabilities.
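To see why a single pin carrying the load is so dangerous, here is a quick back-of-the-envelope check of the numbers cited above (the 600W figure is the connector's rated limit, and the 500W worst case and 9.5A per-pin rating are the figures from the report):

```python
# Quick arithmetic behind the melting risk described above.

RAIL_VOLTAGE = 12.0   # volts on the 12VHPWR / 12V-2x6 connector
PINS = 6              # 12V pins, internally tied together on RTX 50 cards
PIN_RATING_A = 9.5    # PCI-SIG per-pin rating cited in the article

balanced = 600 / RAIL_VOLTAGE / PINS    # ideal: load shared across all six pins
worst_case = 500 / RAIL_VOLTAGE         # article's worst case: one pin carries ~500W

print(f"balanced draw per pin: {balanced:.1f} A")                   # ~8.3 A, under the rating
print(f"single-pin worst case: {worst_case:.1f} A")                 # ~41.7 A
print(f"over the per-pin rating by: {worst_case / PIN_RATING_A:.1f}x")  # ~4.4x
```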
CFD simulation is cut down from almost 40 hours to less than two using 1,024 Instinct MI250X accelerators paired with Epyc CPUs. AMD processors were instrumental in achieving a new world record during a recent Ansys Fluent computational fluid dynamics (CFD) simulation run on the Frontier supercomputer at the Oak Ridge National Laboratory (ORNL). According to a press release by Ansys, it ran a 2.2-billion-cell axial turbine simulation for Baker Hughes, an energy technology company, testing its next-generation gas turbines aimed at increasing efficiency. The simulation previously took 38.5 hours to complete on 3,700 CPU cores. By using 1,024 AMD Instinct MI250X accelerators paired with AMD EPYC CPUs in Frontier, the simulation time was slashed to 1.5 hours. This is more than 25 times faster, allowing the company to see the impact of design changes much more quickly. In a social media post dated April 4, 2025, Ansys announced: "A new supercomputing record has been set! Ansys, @bakerhughesco, and @ORNL have run the largest-ever commercial #CFD simulation using 2.2 billion cells and 1,024 @AMD Instinct GPUs on the world's first exascale supercomputer. The result? A 96% reduction in simulation run…" Frontier was once the fastest supercomputer in the world, and it was also the first to break into exascale performance. It replaced the Summit supercomputer, which was decommissioned in November 2024. However, the El Capitan supercomputer, located at the Lawrence Livermore National Laboratory, broke Frontier's record at around the same time. Both Frontier and El Capitan are powered by AMD GPUs, with the former boasting 9,408 AMD EPYC processors and 37,632 AMD Instinct MI250X accelerators, while the latter uses 44,544 AMD Instinct MI300A accelerators. Given those numbers, the Ansys Fluent CFD run used only a fraction of the accelerators available on Frontier, meaning it could potentially run even faster if it utilized the entire machine. It also shows that, despite Nvidia's market dominance in AI GPUs, AMD remains a formidable competitor, with its CPUs and GPUs serving as the brains of some of the fastest supercomputers on Earth. “By scaling high-fidelity CFD simulation software to unprecedented levels with the power of AMD Instinct GPUs, this collaboration demonstrates how cutting-edge supercomputing can solve some of the toughest engineering challenges, enabling breakthroughs in efficiency, sustainability, and innovation,” said Brad McCredie, AMD Senior Vice President for Data Center Engineering. Even though AMD can deliver top-tier performance at a much cheaper price than Nvidia, many AI data centers prefer Team Green because of software issues with AMD's hardware. One high-profile example was Tiny Corp's TinyBox system, which had stability problems with its AMD Radeon RX 7900 XTX graphics cards. The problem was so bad that Dr. Lisa Su had to step in to fix the issues. And even though it was purportedly fixed, the company still released two versions of the TinyBox AI accelerator — one powered by AMD and the other by Nvidia. Tiny Corp also recommended the more expensive Team Green version, with its six RTX 4090 GPUs, because of its driver quality. If Team Red can fix the software support for its great hardware, it could likely win more customers for its chips and get on a more even footing with Nvidia in the AI GPU market.
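The headline figures are consistent with each other; here is a quick sanity check using only the runtimes quoted above and the 96% claim from the announcement:

```python
# Sanity-checking the reported speedup against the quoted runtimes.

cpu_hours = 38.5   # original run on 3,700 CPU cores
gpu_hours = 1.5    # run on 1,024 Instinct MI250X accelerators

speedup = cpu_hours / gpu_hours
reduction = 1 - gpu_hours / cpu_hours

print(f"speedup:   {speedup:.1f}x")    # ~25.7x -> "more than 25 times faster"
print(f"reduction: {reduction:.1%}")   # ~96.1% -> matches the announced 96% cut
```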
The Pentagon is expected to deliver plans for a “Golden Dome” to Trump this week. In the crudest sense, the Golden Dome is a missile defense system that would shoot nukes, missiles, and drones that threaten the U.S. out of the sky. A study published earlier this month detailed the scientific impossibility of the scheme. America has tried to build a missile defense system since before Ronald Reagan was president. Reagan wanted to put satellites into space that would use lasers to blast Soviet nukes out of the sky. What we built was somewhat more pedestrian. It also probably won't work. But defense contractors made a lot of money. “When engineers have been under intense political pressure to deploy a system, the United States has repeatedly initiated costly programs that proved unable to deal with key technical challenges and were eventually abandoned as their inadequacies became apparent,” explained a new study from the American Physical Society Panel on Public Affairs. Under Trump, we're going to do it again. Trump signed an executive order on January 27 that called on the Pentagon to come up with a plan for an “Iron Dome for America,” which the President and others have taken to calling a “Golden Dome.” According to the EO, Trump wants a plan that'll keep the homeland safe from “ballistic, hypersonic, advanced cruise missiles, and other next-generation aerial attacks from peer, near-peer, and rogue adversaries.” The dream of the Golden Dome is simple: shoot missiles out of the sky before they can do any damage. “It's important to not simply think of Golden Dome as the next iteration of the ground-based missile defense system or solely a missile defense system because it's a broader mission than that,” Jonathan Moneymaker, the CEO of BlueHalo, a defense company working on Golden Dome-adjacent tech, told Gizmodo. Moneymaker was clear-eyed about the challenges of building Golden Dome. “Everyone looks at it as a replication of Israel's Iron Dome, but we have to appreciate that Israel's the size of New Jersey,” he said. Israel's Iron Dome has done a great job shooting down Hamas rockets and Iranian missiles. It's also covering a small territory and shooting down projectiles that aren't moving as fast as a nuclear weapon or a Russian Kh-47M2 Kinzhal ballistic missile might. The pitch of the Golden Dome is that it would keep the whole of the continental U.S. safe. That's a massive amount of territory to cover, and the system would need to identify, track, and destroy nuclear weapons, drones, and other objects moving at high speed. That's like trying to shoot a bullet out of the sky with a bullet. The missile defense study, published on March 3, detailed a few of the challenges facing a potential Golden Dome-style system. Trump's executive order is vague and covers a lot of potential threats. “We focus on the fundamental question of whether current and proposed systems intended to defend the United States against nuclear-armed [intercontinental ballistic missiles are] now effective, or could in the near future be made effective in preventing the death and destruction that a successful attack by North Korea on the United States using such ICBMs would produce.” Stopping a nuke is the primary promise of a missile defense system. And if one of these systems can't stop a nuke, then of what use is it? The study isn't positive. “This is the most comprehensive, independent scientific study in decades on the feasibility of national ballistic missile defense.
Its findings may shock Americans who have not paid much attention to these programs,” Joseph Cirincione told Gizmodo. Cirincione is the retired president of the Ploughshares Fund and a former Congressional staffer. He investigated missile defense systems and nukes for the House Armed Services Committee. “We have no chance of stopping a determined ballistic missile attack on the United States despite four decades of trying and over $400 billion spent. This is the mother of all scandals,” he said. The study looked at a few different methods for knocking a North Korean nuke out of the sky. An ICBM launch has three phases: the boost phase, which lasts only a few minutes; the midcourse phase, which lasts around 20 minutes; and the terminal phase, which is less than a minute. During the boost phase, the nuke is building up speed and getting into the air. “Boost-phase intercept of ICBMs launched from even a small country like North Korea is challenging,” the study said. You have to get weapons close to the missile, and in the case of North Korea, that would require basing them close to China and then firing them over Chinese territory. Any defense system would only have a few moments to respond to the nuke because the boost phase only lasts a few minutes. For a countermeasure to hit that ICBM under those time constraints, it would need to be based close by, probably somewhere in the Pacific. And we would need a lot of them. China would not be happy about a ring of missile defense systems close to its borders, no matter how America tried to sell it. But what about space-based systems? It's a territory rivals have less power over. “The scientific review panel found that it would take over a thousand orbiting weapons to counter a single North Korean ballistic missile. Even then, ‘the system would be costly and vulnerable to anti-satellite attacks,'” Cirincione told Gizmodo. Around 3,600 interceptors, to be precise. So we're talking about ringing the planet in thousands of munitions-armed satellites. And remember that this is just to handle one nuke launched by North Korea. Imagine scaling up a similar defense shield to guard against all the nukes in Russia and you'll begin to see the size of the problem. Well, what about lasers? Reagan's original plan was lasers. Surely technology has advanced since the 1980s. “There is widespread agreement that laser weapons that could disable ICBMs during their boost-phase, whether based on aircraft, drones, or space platforms, will not be technically feasible within the 15-year time horizon of this study,” the study said. This hints at another one of the problems of missile defense: it takes a long time to build, and your enemies aren't standing still while it happens. While America works on the Golden Dome, Russia, North Korea, and China will be building their own new and different kinds of weapons meant to circumvent it. We may be able to build lasers capable of shooting nukes out of the sky in two decades, but by then America's enemies may have developed ways to deal with the lasers. OK, so building the systems to shoot down a nuke in its boost phase is a logistical and geopolitical nightmare. What about during its midcourse arc? There's more time to do something then, between 20 and 30 minutes. Most of America's currently deployed missile defense systems are designed to strike an object midcourse.
“The absence of air drag during this phase means that launch debris, such as spent upper stages, deployment and altitude control modules, separation debris and debris from unburned fuel, insulation, and other parts of the booster, as well as missile fragments deliberately created by the offense and light-weight decoys and other penetration aids, all follow the same trajectory as a warhead,” the study said. “This makes it difficult for the defense to discriminate the warhead from other objects in this ‘threat cloud,' so it can target the warhead.” In tests, America's midcourse interceptors only work about half the time. And those tests are done under perfect conditions against known threats. “After reviewing carefully the technology and test record of the [ground-based midcourse] system, the report concludes that its unreliability and vulnerability to countermeasures seriously limits its effectiveness,” the study said. There's still the terminal phase, that final minute or less before a nuke hits its target. The U.S. also has systems, like the Terminal High Altitude Area Defense (THAAD), designed to knock a missile out of the air during this crucial moment. The truth is that if a nuke is that close, you've probably already lost. “Even effective terminal-phase defenses can defend only limited areas,” the study said. “Moreover, terminal-phase sensors are vulnerable to the blinding effects of nuclear explosions in the atmosphere.” These are just a few of the problems that the researchers discussed in the 60-page report. There are many more. And remember, this is just talking about shooting down a North Korean salvo. Things get more complicated when you add Russia, China, or any of America's other enemies. For Cirincione, the report confirmed his long-held belief that any kind of intricate missile defense system isn't worth the cost of building it. “In short, we cannot defend the country against a determined ballistic missile attack now or anytime in the foreseeable future,” he said. “While we can intercept short-range missiles such as those used in the Middle East or Ukraine, there is zero chance we can intercept long-range missiles that span the oceans. We have spent over $400 billion since 1983 on nothing. Future expenditures will just be throwing money down a rat hole.” Moneymaker was bullish. “When a nation can get aligned around an objective, whether that's Star Wars or Golden Dome or sending someone to the moon, when you have a unity of mission, a lot of things can happen,” he said. He also noted that the Golden Dome was a massive opportunity for disruptive defense companies like Anduril and, yes, BlueHalo. He said that Golden Dome was a project at a scale that's never been seen before. Building any proposed system will require cooperation between state and local officials, police, the Coast Guard, the FBI, and the DHS. “There's a lot of constituents at play that have a next-level order of integration that needs to happen.” In Moneymaker's imagining, the Golden Dome wouldn't be just one system but a vast patchwork of weapons that cover the United States. “Is this one dome? Or is it a series of federated domes that interplay with each other? I just, just given the size and scale of the endeavor, we're going to see phases to this development,” he said.
Moneymaker explained that high-value targets like military bases or large metro areas might get protection first and then be woven together into a “tapestry or fabric of protection.” He said the project is so big that progress will be incremental. “The good news is that I think we can go fast as a nation when we need to or want to.” In Washington this week, there's talk of creating a whole new department just to handle the development of the Golden Dome. Booz Allen Hamilton has teased a swarm of refrigerator-sized drones flying in 20 orbital planes around 200 miles in the air. The plan is for these AI-connected drone swarms to identify missiles as they come in and slam into them. That's just one of the many pitches the Trump administration has received. According to Defense One, the Pentagon has gotten more than 360 plans related to the Golden Dome. “I fully expect the Trump administration to ignore this serious scientific advice, just as they reject scientific truth on the climate crisis, vaccines, and the environment,” Cirincione said. “When there is money to be made, science is shunted aside.”
The original version of this story appeared in Quanta Magazine. The French scholar Pierre-Simon Laplace crisply articulated his expectation that the universe was fully knowable in 1814, asserting that a sufficiently clever “demon” could predict the entire future given a complete knowledge of the present. His thought experiment marked the height of optimism about what physicists might forecast. Since then, reality has repeatedly humbled their ambitions to understand it. One blow came in the early 1900s with the discovery of quantum mechanics. Whenever quantum particles are not being measured, they inhabit a fundamentally fuzzy realm of possibilities. They don't have a precise position for a demon to know. Another came later that century, when physicists realized how much “chaotic” systems amplified any uncertainties. A demon might be able to predict the weather in 50 years, but only with an infinite knowledge of the present all the way down to every beat of every butterfly's wing. In recent years, a third limitation has been percolating through physics—in some ways the most dramatic yet. Physicists have found it in collections of quantum particles, along with classical systems like swirling ocean currents. Known as undecidability, it goes beyond chaos. Even a demon with perfect knowledge of a system's state would be unable to fully grasp its future. “I give you God's view,” said Toby Cubitt, a physicist turned computer scientist at University College London and part of the vanguard of the current charge into the unknowable, and “you still can't predict what it's going to do.” Eva Miranda, a mathematician at the Polytechnic University of Catalonia (UPC) in Spain, calls undecidability a “next-level chaotic thing.” Undecidability means that certain questions simply cannot be answered. It's an unfamiliar message for physicists, but it's one that mathematicians and computer scientists know well. More than a century ago, they rigorously established that there are mathematical questions that can never be answered, true statements that can never be proved. Now physicists are connecting those unknowable mathematical systems with an increasing number of physical ones and thereby beginning to map out the hard boundary of knowability in their field as well. These examples “place major limitations on what we humans can come up with,” said David Wolpert, a researcher at the Santa Fe Institute who studies the limits of knowledge but was not involved in the recent work. “And they are inviolable.” A striking example of unknowability came to physics in 1990 when Cris Moore, then a graduate student at Cornell University, designed an undecidable machine with a single moving part. His setup—which was purely theoretical—resembled a highly customizable pinball machine. Imagine a box, open at the bottom. A player would fill the box with bumpers, move the launcher to any position along the bottom of the box, and fire a pinball into the interior. The contraption was relatively simple. But as the ball ricocheted around, it was secretly performing a computation. Moore had become fascinated with computation after reading Gödel, Escher, Bach, a Pulitzer Prize–winning book about systems that reference themselves.
The system that most captured his imagination was an imaginary device that had launched the field of computer science, the Turing machine. Defined by the mathematician Alan Turing in a landmark 1936 paper, the Turing machine consisted of a head that could move up and down an infinitely long tape, reading and writing 0s and 1s in a series of steps according to a handful of simple rules telling it what to do. One Turing machine, following one set of rules, might read two numbers and print their product. Another, following a different set of rules, might read one number and print its square root. In this way, a Turing machine could be designed to execute any sequence of mathematical and logical operations. Today we would say that a Turing machine executes an “algorithm,” and many (but not all) physicists consider Turing machines to define the limits of calculation itself, whether performed by computer, human or demon. Moore recognized the seeds of Turing machine behavior in the subject of his graduate studies: chaos. In a chaotic system, no detail is small enough to ignore. Adjusting the position of a butterfly in Brazil by a millimeter, in one infamous metaphor, could mean the difference between a typhoon striking Tokyo and a tornado tearing through Tennessee. Uncertainty that starts off as a rounding error eventually grows so large that it engulfs the entire calculation. In chaotic systems, this growth can be represented as movement across a written-out number: Ignorance in the one-tenths place spreads left, eventually moving across the decimal point to become ignorance in the tens place. Moore designed his pinball machine to complete the analogy to the Turing machine. The starting position of the pinball represents the data on the tape being fed into the Turing machine. Crucially (and unrealistically), the player must be able to adjust the ball's starting location with infinite precision, meaning that specifying the ball's location requires a number with an endless procession of numerals after the decimal point. Only in such a number could Moore encode the data of an infinitely long Turing tape. Then the arrangement of bumpers steers the ball to new positions in a way that corresponds to reading and writing on some Turing machine's tape. Certain curved bumpers shift the tape one way, making the data stored in distant decimal places more significant in a way reminiscent of chaotic systems, while oppositely curved bumpers do the reverse. The ball's exit from the bottom of the box marks the end of the computation, with the final location as the result. Moore equipped his pinball machine setup with the flexibility of a computer—one arrangement of bumpers might calculate the first thousand digits of pi, and another might compute the best next move in a game of chess. But in doing so, he also infused it with an attribute that we might not typically associate with computers: unpredictability. In a landmark work in 1936, Alan Turing defined the boundary of computation by describing the key features of a universal computing device, now known as a Turing machine. Some algorithms stop, outputting a result. But others run forever. (Consider a program tasked with printing the final digit of pi.) Is there a procedure, Turing asked, that can examine any program and determine whether it will stop? This question became known as the halting problem. Turing showed that no such procedure exists by considering what it would mean if it did. 
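Turing's argument, which the next paragraph walks through in prose, can be compressed into a few lines of code. The following is a minimal Python sketch under one loud assumption: it pretends a halts() oracle exists, which is precisely what the proof then rules out.

```python
# Sketch of Turing's diagonal argument. `halts` is the hypothetical oracle the
# proof assumes and then rules out; it is not something anyone can implement.

def halts(program, argument) -> bool:
    """Pretend oracle: returns True if program(argument) eventually stops."""
    raise NotImplementedError("This is exactly what Turing proved cannot exist.")

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts the program does
    # when fed its own description.
    if halts(program, program):
        while True:   # oracle says it halts -> loop forever
            pass
    else:
        return        # oracle says it runs forever -> halt immediately

# Feeding the troublemaker its own description yields the contradiction:
# if halts(troublemaker, troublemaker) returned True, troublemaker would loop
# forever; if it returned False, troublemaker would halt. Either way the oracle
# is wrong, so no such oracle can exist.
```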
If one machine could predict the behavior of another, you could easily modify the first machine—the one that predicts behavior—to run forever when the other machine halts. And vice versa: It halts when the other machine runs forever. Then—and here's the mind-bending part—Turing imagined feeding a description of this tweaked prediction machine into itself. If the machine stops, it also runs forever. And if it runs forever, it also stops. Since neither option could be, Turing concluded, the prediction machine itself must not exist. (His finding was intimately related to a groundbreaking result from 1931, when the logician Kurt Gödel developed a similar way of feeding a self-referential paradox into a rigorous mathematical framework. Gödel proved that mathematical statements exist whose truth cannot be established.) In short, Turing proved that solving the halting problem was impossible. The only general way to know if an algorithm stops is to run it for as long as you can. If it stops, you have your answer. But if it doesn't, you'll never know whether it truly runs forever, or whether it would have stopped if you'd just waited a bit longer. “We know that there are these kinds of initial states that we cannot predict ahead of time what it's going to do,” Wolpert said. Since Moore had designed his box to mimic any Turing machine, it too could behave in unpredictable ways. The exit of the ball marks the end of a calculation, so the question of whether any particular arrangement of bumpers will trap the ball or steer it to the exit must also be undecidable. “Really, any question about the long-term dynamics of these more elaborate maps is undecidable,” Moore said. Cris Moore developed one of the earliest and simplest undecidable physical systems. Moore's pinball machine went beyond ordinary chaos. A tornado forecaster can't say exactly where a tornado will touch down for two reasons: the forecaster's ignorance of the precise position of every Brazilian butterfly, and limited computing power. But Moore's pinball machine featured a more fundamental form of unpredictability. Even for someone with complete knowledge of the machine and unlimited computing power, certain questions regarding its fate remain unanswerable. “This is a bit more dramatic,” said David Pérez-García, a mathematician at the Complutense University of Madrid. “Even with infinite resources, you cannot even write the program that solves the problem.” Other researchers have previously come up with systems that act like Turing machines—notably checkerboard grids with squares flickering on and off depending on the colors of their neighbors. But these systems were abstract and intricate. Moore crafted a Turing machine out of a simple apparatus you could imagine sitting in a lab. It was a vivid demonstration that a system obeying nothing more than high school physics could have an unpredictable nature. “It's a bit shocking that it's undecidable,” said Cubitt, who lectured about Moore's machine after it captured his imagination as a graduate student. “It's literally a single particle bouncing around a box.” After getting his doctorate in physics, Cubitt shifted into mathematics and computer science. But he never forgot the pinball machine, and how computer science put limits on the machine's physics. He wondered whether undecidability touched any physics problems that really matter. Over the last decade, he has discovered that it does. Cubitt put undecidability on a collision course with large quantum systems in 2012. 
He, Pérez-García, and their colleague Michael Wolf had gotten together for coffee during a conference in the Austrian Alps to debate whether a niche problem might be undecidable. When Wolf suggested they put that problem aside and instead tackle the decidability of one of the biggest problems in quantum physics, not even he suspected they might actually succeed. “It started as a joke. Then we started to cook up ideas,” Pérez-García said. Wolf proposed targeting a defining property of every quantum system called the spectral gap, which refers to how much energy it takes to jostle a system out of its lowest energy state. If it takes some oomph to do this, a system is “gapped.” If it can become excited at any moment, without any infusion of energy, it is “gapless.” The spectral gap determines the color that shines from a neon sign, what a material will do when you remove all heat from it, and—in a different context—what the mass of the proton should be. In many cases, physicists can calculate the spectral gap for a specific atom or material. In many other cases, they can't. A million-dollar prize awaits anyone who can rigorously prove from first principles that the proton should have a positive mass. David Pérez-García and Toby Cubitt designed a quantum material whose state can capture any calculation possible for a Turing machine. Cubitt, Wolf, and Pérez-García aimed high. They sought to prove or disprove the existence of a single strategy—a universal algorithm—that would tell you whether anything from a proton to a sheet of aluminum had a spectral gap or not. To do so, they resorted to the same approach Moore had used with his pinball machine: They devised a fictitious quantum material that could be set up to act like any Turing machine. They hoped to rewrite the spectral gap problem as the halting problem in disguise. Over the next three years they churned out 144 pages of dense mathematics, combining a handful of major results from the previous half-century of math and physics. The extremely rough idea was to use the quantum particles in a flat material—a grid of atoms, basically—as a stand-in for the Turing machine's tape. Because this was a quantum material, the particles could exist in a superposition of multiple states at the same time—a quantum combination of different possible configurations of the material. The researchers used this feature to capture the different steps of the calculation. They set up the superposition so that one of these possible configurations represented the initial state of the Turing machine, another configuration represented the first step of the calculation, another represented the second step, and so on. Finally, using techniques from quantum computing, they fiddled with the interactions between the particles so that if the superposition represented a calculation that halted, the material would have an energy gap. And if the computation continued forever, the material had no gap. In a paper published in Nature in 2015, they proved that the spectral gap problem is equivalent to the halting problem—and therefore undecidable. If someone handed you some complete description of the material's particles, it would either have a gap or not. But calculating this property mathematically, from the way the particles interact, couldn't be done, even if you had a quantum supercomputer from the year 3000. In 2020, Pérez-García, Cubitt, and other collaborators repeated the proof for a chain of particles (as opposed to a grid).
And last year, Cubitt, James Purcell, and Zhi Li further extended the setup to devise a material that, when subjected to a magnetic field that grows increasingly intense, will transition from one phase of matter to another at an unpredictable moment. Their research program inspired other groups. In 2021, Naoto Shiraishi, then at Gakushuin University in Japan, and Keiji Matsumoto of Japan's National Institute of Informatics dreamt up a similarly bizarre material, in which it was impossible to predict whether energy would “thermalize,” or spread evenly throughout the substance. None of these results mean that we can't predict specific properties of specific materials. Theorists might be able to calculate, for example, copper's energy gap, or even whether all metals thermalize under certain conditions. But the research does prove that no master method works for all materials. Said Shiraishi: “If you think too generally, you will fail.” Researchers have recently found an assortment of new limits on predictability outside quantum physics too. Miranda of UPC has spent the last few years trying to work out whether liquids can act as computers. In 2014, the mathematician Terence Tao pointed out that if they could, perhaps a fluid could be programmed to slosh in just the right way to bring forth a tsunami of unlimited violence. Such a tsunami would be unphysical, since no wave can accommodate infinite energy in the real world. And so anyone who found such an algorithm would prove that the theory of fluids, called the Navier-Stokes equations, predicts impossibilities—another million-dollar problem. Eva Miranda has shown that fluids can flow in such complicated ways that trajectories through them become undecidable. Along with Robert Cardona, Daniel Peralta-Salas, and Francisco Presas, Miranda started with a fluid obeying simpler equations. They converted a Turing machine's tape into a location on a plane (akin to the bottom of Moore's pinball box). As the Turing machine ticks along, this point on the plane jumps around. Then, with a series of geometric transformations, they were able to turn the hopping of this point into the smooth current of a fluid flowing through 3D space (albeit a weird one curled into a doughnut in its center). To illustrate the idea over Zoom, Miranda pulled out a rubber duck from behind her computer. “While the trajectory of the point in the water—it could be a duck—is moving around, this is the same as the tape of your Turing machine advancing somehow,” she said. And with Turing machines comes undecidability. In this case, a calculation that halts corresponds to a current that carries a duck to some specific region, while a never-ending calculation corresponds to a duck that forever avoids that spot. So deciding a duck's ultimate fate, the group showed in a 2021 publication, was impossible. While these systems have physically implausible features that would stop an experimentalist from building them, even as blueprints they show that computers and their undecidable problems are deeply woven into the fabric of physics. “We live in a universe where you can build computers,” Moore told me over Zoom on a sunny December afternoon from his backyard garden in Santa Fe. “Computation is everywhere.” Even if someone attempted to build one of the machines depicted in these blueprints, however, researchers point out that undecidability is a feature of physical theories and cannot literally exist in real experiments. 
Only idealized systems that involve infinity—an infinitely long tape, an infinitely extensive grid of particles, an infinitely divisible space for placing pinballs and rubber ducks—can be truly undecidable. No one knows whether reality contains these sorts of infinities, but experiments definitely don't. Every object on a lab bench has a finite number of molecules, and every measured location has a final decimal place. We can, in principle, completely understand these finite systems by systematically listing every possible configuration of their parts. So because humans can't interact with the infinite, some researchers consider undecidability to be of limited practical significance. “There is no such thing as perfect knowledge, because you cannot touch it,” said Karl Svozil, a retired physicist associated with the Vienna University of Technology in Austria. “These are very important results. They are very, very profound,” Wolpert said. “But they also ultimately have no implications for humans.” Other physicists, however, emphasize that infinite theories are a close—and essential—approximation of the real world. Climate scientists and meteorologists run computer simulations that treat the ocean as if it were a continuous fluid, because no one can analyze the ocean molecule by molecule. They need the infinite to help make sense of the finite. In that sense, some researchers consider infinity—and undecidability—to be an unavoidable aspect of our reality. “It's sort of solipsistic to say: ‘There are no infinite problems because ultimately life is finite,'” Moore said. And so physicists must accept a new obstacle in their quest to acquire the foresight of Laplace's demon. They could conceivably work out all the laws that describe the universe, just as they have worked out all the laws that describe pinball machines, quantum materials, and the trajectories of rubber ducks. But they're learning that those laws aren't guaranteed to provide shortcuts that allow theorists to fast-forward a system's behavior and foresee all aspects of its fate. The universe knows what to do and will continue to evolve with time, but its behavior appears to be rich enough that certain aspects of its future may remain forever hidden to the theorists who ponder it. They will have to be satisfied with being able to discover where those impenetrable pockets lie. “You're trying to discover something about the way the universe or mathematics works,” Cubitt said. “The fact that it's unsolvable, and you can prove that, is an answer.” Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.
Almost 40 years ago, deep in the Pacific, a single voice called out a song unlike any other. The sound reverberated through the depths at 52 Hertz, puzzling those listening to this solo ringing out from the ocean's symphony. The frequency was much higher than that of a blue whale or its cousin, the fin, leaving scientists to ponder the mystery of Whale 52.

The leviathan has been heard many times since, but never seen. Some suspect it might have some deformation that alters its voice. Others think it might simply exhibit a highly unusual vocalization — a tenor among baritones. But marine biologist John Calambokidis of Cascadia Research Collective suggests another possibility: “The loneliest whale,” so named because there may be no one to respond to its unique call, may not be an anomaly, but a clue.

Calambokidis, who has spent more than 50 years studying cetaceans, suspects Whale 52 may be a hybrid: part blue whale, part fin whale. Such a creature, often called a flue whale, is growing more common as warming seas push blues into new breeding grounds, where they are increasingly likely to mate with their fin relatives. A survey of North Atlantic blues published last year found that fin whale DNA comprised as much as 3.5 percent of their genome, a striking figure given that the two species diverged 8.35 million years ago.

If Whale 52 is indeed a hybrid, its presence suggests that genetic intermingling between Balaenoptera musculus, as blues are known among scientists, and Balaenoptera physalus has been occurring for decades, if not longer. The North Atlantic findings suggest it is accelerating.

Cetacean interbreeding has been documented before, notably among narwhals and belugas and between two species of pilot whales, combinations attributed largely to warming seas pushing these animals into new territory and closer proximity. But hybridization has been more closely studied among terrestrial creatures like the pizzly bears born of grizzlies and polar bears. It is scarcely understood in marine mammals, and little is known about what intermingling will mean for the genetics, behavior, and survival of the largest animal to have ever lived.

“Blue whales are still struggling to recover from centuries of whaling, with some populations remaining at less than 5 percent of their historical numbers,” Calambokidis said. While the number of confirmed hybrids remains low, continued habitat disruption could make them more common, eroding their genetic diversity and reducing the resilience of struggling populations.

Before the arrival of genomics 30 years ago, marine biologists identified hybrids primarily through morphology, or the study of physical traits. If an animal displayed the features of two species — the dappled skin of a narwhal and stout body of a beluga, for example — it might be labeled a hybrid based on external characteristics or skeletal measurements. Anecdotal evidence might also play a role: Historical whaling logs suggest blues and fins occasionally interbred, though such pairings went largely unconfirmed. But morphology can, at best, only reveal the first-generation offspring of two distinct species. By analyzing DNA, marine biologists like Aimee Lang can now identify intermingling that occurred generations ago, uncovering a far more complex history than was previously understood. This new level of detail complicates the picture: Are flues becoming more common, or are researchers simply better equipped to find them?
As scientists probe the genetic signatures of whales worldwide, they hope to distinguish whether hybridization is an emerging trend driven by climate change, or a long-standing, overlooked facet of cetacean evolution. In any case, some marine biologists find the phenomenon worrisome because flues are largely incapable of reproducing. Although some females are fertile, males tend to be sterile. These hybrids represent a small fraction of the world's blue whales — of which no more than 25,000 remain — but the lopsided populations of the two species suggest they will increase. There are four times as many fins as blues worldwide, and a survey of the waters around Iceland estimated 37,000 fins to 3,000 blues.

“Three thousand is not a very high density of animals,” said Lang, who studies marine mammal genetics at the National Oceanic and Atmospheric Administration. “So you can imagine if a female blue is looking for a mate and she can't find a blue whale but there's fin whales all over the place, she'll choose one of them.”

This has profound implications for conservation. If hybrids are not easily identifiable, it could lead to inaccurate estimates of the blue whale population and difficulty assessing the efficacy of conservation programs. More troubling, sterile animals cannot contribute to the survival of their species. Simply put, hybridization presents a threat to their long-term viability. “If it becomes frequent enough, hybrid genomes could eventually swamp out the true blue whale genomes,” Lang said. “It could be that hybrids are not as well adapted to the environment as a purebred blue or fin, meaning that whatever offspring are produced are evolutionary dead ends.”

This could have consequences for entire ecosystems. Each whale species plays a specific role in ensuring marine ecosystem health by, say, managing krill populations or providing essential nutrients like iron. Hybrids that don't play the role evolution has assigned to them undermine this symbiotic relationship with the sea. “Those individuals and their offspring aren't fully filling the ecological niche of either parent species,” Calambokidis said.

All of this adds to the uncertainty wrought by the upheavals already underway. Many marine ecosystems are experiencing regime shifts — abrupt and often irreversible changes in structure and function — driven by warming waters, acidification, and shifting prey distributions. These alterations are pushing some cetacean species into smaller, more isolated breeding pools. There is reason for concern beyond blue whales. Rampant inbreeding among the 76 orcas of the genetically distinct and critically endangered Southern Resident killer whale population of the Pacific Northwest is cutting their lifespans nearly in half by placing them at greater risk of harmful genetic traits, weakened immune systems, reduced fertility, and higher calf mortality. Tahlequah, the Southern Resident orca who became known around the world in 2018 for carrying her dead calf for 17 days, lost another calf in January. The 370 or so North Atlantic right whales that still remain may face similar challenges.

Some level of cetacean interbreeding and hybridization may be inevitable as species adapt to climate change. Some of it may prove beneficial. The real concern is whether these changes will outpace whales' ability to survive. Flue whales may be an anomaly, but their existence is a symptom of broader, anthropogenic disruptions.
“There are examples of populations that are doing well, even though they have low genetic diversity, and there are examples where they aren't doing well,” said Vania Rivera Leon, who researches population genetics at the Center for Coastal Studies in Provincetown, Massachusetts. “They might be all right under current conditions, but if and when the conditions shift more, that could flip.”

“The effect could be what we call a bottleneck,” she added. “A complete loss of genetic diversity.”

These changes often unfold too gradually for humans to perceive. Unlike fish, which have rapid life cycles and clear population booms or crashes, whales live for decades, with overlapping generations that obscure immediate trends. There have only been about 30 whale generations since whaling largely ceased. To truly grasp how these pressures are shaping whale populations, researchers may need twice that long to uncover what is happening beneath the waves and what, if anything, Whale 52 might be saying about it.

This article originally appeared in Grist at https://grist.org/oceans/what-the-worlds-loneliest-whale-may-be-telling-us-about-climate-change/. Grist is a nonprofit, independent media organization dedicated to telling stories of climate solutions and a just future. Learn more at Grist.org.
The Nissan Leaf was once the number-one selling EV in the world. Unveiled in 2009, it hit the road in 2010, beating Tesla to market as the first mass-produced EV and topping sales charts for a decade. It was a bold move, and one that should have cemented the company as one of the world's premier EV players. So what happened?

Some 15 years later, in the United States, the automaker has floundered. It didn't predict the rise of hybrids, and it's still dealing with the PR aftermath of a very public failed merger with Honda. Add in years of mismanagement and an outright neglect of its EV portfolio, and the former king of EV sales has been scrambling to find its footing. A chaotic market reeling from threats of tariffs really isn't helping.

Still, Nissan is at least attempting to claw its way back to its former glory. It has a new CEO, upcoming EVs and a hybrid for the US market, and the self-awareness to know it needs to adapt how it does business. Nissan is ready for change. Oh, and those talks with Honda aren't over yet.

“This is the heart of Nissan,” the automaker's former chief planning officer and shiny new CEO, Ivan Espinosa, tells media gathered at the Nissan Technical Center, located just outside of Yokohama in Atsugi, Japan. Notably absent from this event is outgoing CEO Makoto Uchida. This is the introduction to Espinosa as the boss—but more importantly, the automaker wants to share its plans for the future.

We're seated in a large design studio with a screen that fills an entire wall; executives address the crowd with a mix of renewed focus and humility. In an industry full of bravado, Nissan is refreshingly forthcoming about its issues. Chief performance officer Guillaume Cartier begins the two-day event by promising that the company will be an open book, honest about the external and internal issues that have plagued the brand in recent years. The Nissan/Honda merger fell apart largely due to the Nissan leadership's unwillingness to concede that in the automotive world, Nissan and Honda are not equals. This time around, there's a promise of better transparency.

The news Nissan wants everyone to now focus on is the unveiling of the third-generation Leaf. Gone is the hatchback—the Leaf has morphed into a sporty but handsome crossover. And after all, the US market loves a crossover. Powered by Nissan's 400-volt CMF-EV platform, the Leaf will rest on the same architecture as the Ariya. Outside of smart design and repeated insistence that the team focused on efficiency, the automaker shared no information about range, battery capacity, or price. The vehicle will be available first in the United States and Canada beginning in 2025.

The new Nissan Leaf.

For Europe, the automaker unveiled a new all-electric Micra, an urban runabout that senior vice president of global design Alfonso Albaisa refers to as “charming.” The wide eyes of the Micra have returned and are now powered by electrons instead of petroleum. It's built on the CMF-B EV platform that also underpins the Renault 5 E-Tech. As with the Leaf, Nissan was silent on details about range, price, and battery capacity, but we do know that Europe will get both the Micra and the Leaf in 2025.

The new Nissan Micra.

In a move meant to fix a huge earlier market oversight, Nissan will begin production of an all-new hybrid Rogue in 2026, and a PHEV is also in the works. The automaker's midsize SUV competes directly with the wildly popular Honda CR-V and Toyota RAV4, both of which have hybrid powertrain options.
The Rogue will use the third generation of Nissan's e-Power series hybrid technology. Unlike typical hybrids, an e-Power's wheels are powered only by the electric motor, while a specially tuned gas engine acts as a generator. The second generation of the technology is currently in use in the Nissan Qashqai, but this upcoming version combines the powerplants, gearbox, and inverter into a single 5-in-1 unit, using the same electric motor and other components found in Nissan's EVs. A clever way to lower costs.

Nissan says that this system delivers the attributes of an EV—increased lower-end torque, smoother acceleration, real-time motor-based torque vectoring (Nissan calls this e-4orce), and a quieter ride. At Nissan's Granddrive test track in Yokosuka, Japan, I was able to test the second- and upcoming third-generation e-Power systems, and I found the technology compelling and, in many cases, superior to traditional hybrid systems, although the small 1.8-kWh battery pack means drivers will still have to endure the rumble of an engine on a regular basis, even if it is a quieter one.
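To make the series-hybrid idea concrete, here is a minimal sketch of how such a powertrain routes energy: the motor alone turns the wheels, and the engine never drives them directly, running only as a generator to keep a small buffer battery inside a target charge window. The thresholds, generator output, and control logic below are illustrative assumptions, not Nissan's actual e-Power calibration; only the 1.8-kWh pack size comes from the figure cited above.

```python
# A back-of-the-envelope sketch of a series-hybrid power flow. The wheels are
# driven only by the electric motor; the gas engine spins a generator to keep
# a small buffer battery topped up. All numbers except the 1.8-kWh pack size
# are illustrative guesses, not Nissan e-Power specifications.

BATTERY_KWH = 1.8      # usable buffer, matching the pack size mentioned above
GENERATOR_KW = 40.0    # assumed engine-generator output at its efficient setpoint
CHARGE_FLOOR = 0.3     # assumed state of charge that wakes the engine
CHARGE_CEILING = 0.8   # assumed state of charge that lets the engine shut off


def simulate(demand_kw, hours_per_step=1 / 360, steps=60):
    """Step through a drive in 10-second slices, reporting when the generator runs."""
    soc = 0.6                                      # start at 60% charge (assumption)
    engine_on = False
    for t in range(steps):
        wheel_kw = demand_kw[t % len(demand_kw)]   # power the motor sends to the wheels
        if soc < CHARGE_FLOOR:
            engine_on = True                       # engine runs, but only as a generator
        elif soc > CHARGE_CEILING:
            engine_on = False                      # buffer is full, engine shuts off
        gen_kw = GENERATOR_KW if engine_on else 0.0
        soc += (gen_kw - wheel_kw) * hours_per_step / BATTERY_KWH
        soc = min(max(soc, 0.0), 1.0)
        status = "on" if engine_on else "off"
        print(f"t={t:3d}  wheels={wheel_kw:5.1f} kW  engine={status:>3}  soc={soc:.2f}")


# A repeating city-driving pattern: gentle cruising with an occasional hard pull.
simulate(demand_kw=[15, 15, 60, 15, 5, 0])
```

Run with that load, the engine cycles on and off at its setpoint regardless of what the wheels need at any given instant, which is broadly where the EV-like responsiveness Nissan describes comes from; the trade-off is that a 1.8-kWh buffer is too small to keep the engine silent for long.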
After a keynote, Nissan led us into a courtyard to look at (but not photograph) a series of vehicles in various states of development. The most intriguing was a rugged electric SUV that oozed Xterra vibes. The light off-roader will begin production at Nissan's Canton, Mississippi, plant in 2027, deftly escaping the latest tariffs announced by President Trump. Nissan sees the vehicle as a way to differentiate itself from competitors. “You saw an outdoorsy EV, which is not what you see today. The reason to do that is to be different, because the market will get very crowded very fast. We want to come in with an offer that is more unique,” Espinosa says.

Sometimes, however, there is good reason why a certain category of EV “is not what you see today,” and while trying to be different is certainly laudable, it is not always advisable. We'll see soon enough if Espinosa's strategy pans out. Regardless, this Canton-built rugged electric SUV will beat Scout's offerings to market, and will go head-to-head with Rivian's R2. That is, if everything goes according to plan for both automakers.

Nissan has big plans and an intriguing upcoming lineup that, on paper, seems to give it the automotive firepower to be a true competitor in the electrified vehicle market. Bringing those proposals to fruition requires leadership willing to move forward aggressively while taking a long, hard look at the current situation and making drastic changes.

There's a tinge of frustration in Espinosa's voice as the new Nissan CEO explains the current situation with Honda. “The fact that the integration talks stopped is in no way meaning that we are not collaborating with them,” Espinosa said. “The future of the industry is going to be very challenging, and it's clear that the name of the game is how you build efficient partnerships that add value to your company,” Espinosa told reporters during a roundtable event.

For automakers, sharing a platform reduces both parties' financial commitment. Parts procurement also benefits. Suppliers will always prioritize the customer who places the largest order. If a part is used in multiple vehicles across multiple brands, it's built sooner and at a lower cost. It's economies of scale in action. The issue? Nissan's scale has dropped dramatically. In 2018, the automaker was producing 5.8 million units a year. Currently, that number has dropped to 3.5 million units.

Its US factories are currently underutilized, and its lineup, while slowly undergoing a refresh over the past few years, in some cases still lags behind competitors. Recent moves to rectify the situation have come with their own issues. The Ariya was a fine reboot of the automaker's electric vehicle strategy, but the vehicle itself hasn't taken off like EV offerings from other automakers. Ponz Pandikuthira, Nissan's chief planning officer for North America, tells WIRED how timing hurt the vehicle's launch. As it was introduced, Tesla began cutting prices to ward off new competitors in the market, and suddenly, the Ariya was 20 percent more expensive than a similarly equipped Tesla. The Ariya also isn't eligible for the $7,500 EV tax credit unless the vehicle is leased. Then, add in manufacturing delays of eight to ten months, and the result is a vehicle hitting the market after any hype that had been generated died down.

Pandikuthira also explains the reason behind Nissan's lack of a hybrid in the coveted midsize SUV segment. When vehicle prices rose during the Covid lockdowns, Nissan (and other automakers) believed that this was the new normal. A rise in overall vehicle prices would make EVs seem more affordable at the price points automakers would need to hit to sell electric vehicles at a profit. Like many manufacturers, Nissan had bold plans to introduce a fleet of EVs, and at the time, adding a hybrid to its lineup would have meant making one less EV. So the automaker gambled on an inflated marketplace. Then the prices of vehicles came back down to earth, and Nissan's future lineup wouldn't generate much-needed profits.

It's tough to be nimble without capital. According to Espinosa, Nissan has 1 trillion yen ($6.65 billion) in cash. The issue is that the company has $1.5 billion in debt due this year, and $5.6 billion in debt due in 2026. “We're not in the situation in which we have an urgent need for cash,” Espinosa said. “What we have to work on is the free cash-flow generation, which is different. So we have to accelerate revenue generation. We have to get our sales pace in better shape, and we need to work on cost.”

A big part of that is Nissan's push to cut development time from 55 months to 37 months for an all-new vehicle, and then to 30 months for each subsequent vehicle built on the same platform. “We need to show what we're capable of doing,” the new CEO says.

At the multiday event in Japan, AutoPacific president and chief analyst Ed Kim tells WIRED, “One of the big takeaways I got from all this was that I don't think even Nissan knows how they're going to get there.” However, Nissan's willingness to partner with others, the introduction of a hybrid to compete with the RAV4 and CR-V, and its upcoming lineup are all good moves, Kim says. “Oftentimes, when an automaker has their backs against the wall, sometimes they pull out some of their best design work,” he adds.

But all of this meticulous planning and good intention could quickly be derailed by the chaotic financial situation in the US market. Nissan needs to win big in the United States, and the Trump administration's tariff chaos isn't helping. “We are working on multiple scenarios to be ready when some clarity comes. It's changing every other day,” Espinosa tells WIRED. Of course, Espinosa said this mere hours before Trump announced the 25 percent tariff on all imported vehicles and parts, and an even more recent 24 percent reciprocal tariff on other goods from Japan. Whether Nissan was planning for that kind of clarity is unclear.
WIRED reached out to Nissan for an update. The automaker wouldn't comment directly, but pointed us to a comment from Jennifer Safavian, president and CEO of Autos Drive America, a trade association that represents international automakers. “At a time when cost is the number one concern for American car buyers, US automakers are working to provide a range of affordable vehicles for consumers,” she says. “The tariffs will make it more expensive to produce and sell cars in the United States, ultimately leading to higher prices, fewer options for consumers, and fewer manufacturing jobs in the US.”

Sam Abuelsamid, automotive analyst and vice president of market research for Telemetry, believes these costs could hit Nissan hardest. “Of the three largest Japanese automakers operating in the US, Nissan will likely face some of the biggest challenges with the tariffs,” he says. “They only have two plants in the US and import a significant percentage of their products from either Mexico or Japan.”

Like many automakers, Nissan may spread the cost of the tariffs across its entire lineup to keep imported vehicles from becoming prohibitively expensive—although each vehicle will soon undoubtedly cost more. For Espinosa, in the face of such economic turmoil, the inconvenient truth is that Nissan could do everything right and still struggle because of forces beyond its control.

“From a product perspective, they're definitely headed in the right direction. But the big question mark is really on the business side,” Kim says. Indeed, it's the big question for every automaker right now. Except, for Nissan, the results could be catastrophic.