Four former Volkswagen executives received prison sentences Monday for their role in the emissions-cheating scandal that fundamentally transformed Europe's car market. Jens Hadler, who oversaw diesel engine development, received the harshest sentence of four and a half years for orchestrating what judges called “particularly serious” fraud. His team had installed software allowing vehicles to recognize emissions testing, temporarily increasing pollution controls during inspections while running dirty the rest of the time. Before 2015, diesel vehicles commanded over half of Europe's car market, marketed as environmentally friendly alternatives to gasoline. Today, that figure has collapsed to just 10% of new car sales. Electric vehicles and plug-in hybrids now account for 25% of new car sales, while Volkswagen itself has become Europe's leading EV manufacturer, selling three times as many battery-powered cars as Tesla in April, reports The New York Times. Every weekday and Sunday, you can get the best of TechCrunch's coverage. Every Monday, gets you up to speed on the latest advances in aerospace. Startups are the core of TechCrunch, so get our best coverage delivered weekly. By submitting your email, you agree to our Terms and Privacy Notice.
This article is part of Gizmodo Deals, produced separately from the editorial team. We may earn a commission when you buy through links on the site. Whether you're heading out on a weekend hike, setting up camp by the lake, or just need a reliable speaker to bring tunes to your backyard, having a compact yet capable Bluetooth speaker on hand can really elevate the moment. But with so many options out there, finding something that's both affordable and actually sounds good is easier said than done. You can get the Sony SRS-XB100 Wireless Bluetooth Portable Speaker for just $38 at Amazon right now, down from its usual price of $60. And for a speaker like this, snagging it for less than $50 is a good reason to go ahead and pull that trigger. With an IP67 rating, it's fully waterproof and dustproof, so it can handle everything from a dusty trail to an accidental dunk in the pool. And while it might be small enough to fit in the palm of your hand, it doesn't skimp on sound. The SRS-XB100 uses a full-range driver paired with a passive radiator to deliver balanced audio that's surprisingly loud and punchy for its size. If you happen to have two of these speakers, you can hook them up together for bigger sound that can fill a larger area. You get about 16 hours of playback without having to charge it up again. You really can't beat this speaker for less than $50. It's just $38, and it's the perfect pick for travel, casual listening, or just tossing into your bag so you've always got music on hand when the moment calls for it. There's really no excuse not to jump on this deal while you still can. Get the best tech, science, and culture news in your inbox daily. News from the future, delivered to your present. We may earn a commission when you buy through links on our sites.
This article is part of Gizmodo Deals, produced separately from the editorial team. We may earn a commission when you buy through links on the site. If you're primary need for a computer is basic web browsing, email drafting, word processing, and YouTube video-watching, then you really don't need to shell out for anything crazy expensive. A Chromebook is kind of the perfect tool for these use cases plus they're super great for travel. Acer has its Chromebook on sale over at Best Buy. That brings the price from its usual $299 to just $149. As a bonus, you'll also get a protective sleeve to make traveling with the Chromebook even easier. This Acer Chromebook 315 has a decently sized screen, measuring in at 15.6 inches. And you can travel with it anywhere because the screen has a matte anti-glare coating to help reduce reflections. Respond to emails out at your favorite sunny outdoor coffeeshop. Specs-wise, we're looking at 4GB on memory to help smoothly run multiple programs and browser tabs at once. A single charge will have your Acer Chromebook lasting for up to 10 hours. That's a full work day and then some. The side comprises of several useful ports including two full-function reversible USB-C ports that can be used for easy charging or to connect peripherals. You also get two USB 3.2 Type A ports along with a microSD slot which means photographers don't need to keep one of those dongles on them. The Chromebook also has an AUX port, my beloved (still mad at Apple for removing them from phones). The protective travel sleeve that comes included as a bonus will help keep your Acer Chromebook 315 safe when traveling. Get your Asus 15.6-inch Chromebook 315 for $150 off (-50%) at Best Buy before the price goes back up. It's normally priced at $300 so after the discount, you'll only be paying $149. Get the best tech, science, and culture news in your inbox daily. News from the future, delivered to your present. We may earn a commission when you buy through links on our sites.
10-year-old GPU nearly doubles its benchmark score after VRAM upgrade When you purchase through links on our site, we may earn an affiliate commission. The Nvidia GeForce GTX 970 was a promising graphics card at launch offering performance similar to AMD's Radeon R9 290X at a more affordable price range. By having access to 8GB of upgraded VRAM, the modder noted an improvement in Unigine Superposition benchmark showing nearly double the score. Testing was done using a mix of old and new games where titles like Red Dead Redemption 2, GTA V Enhanced, and Plague Tale Requiem didn't show any signs of improvement. The results were compared to a Gigabyte Windforce GTX 970 with 4GB of memory. The two games that gained noticeable benefit from the increased VRAM were The Last of Us Part II Remastered with a 24% increase in frame rate and Horizon Forbidden West offering as much as 40% improvement. Back in February last year, the same team of modders managed to repair and upgrade a defective RTX 3070 by replacing its original 8GB of GDDR6 memory with 12GB. In The Last of Us Part I, performance saw a boost of around 25%, while Hogwarts Legacy ran 20% better with the extra VRAM. Both Nvidia and AMD have continued to claim that 8GB of VRAM is sufficient for most gamers. However, real-world mods like the ones mentioned above, suggest otherwise. Even as modern titles become increasingly demanding, Nvidia went on to launch the RTX 5060 and 5060 Ti in 8GB variants, while AMD has announced that its new Radeon RX 9060 XT will also be coming with 8GB of VRAM. Get Tom's Hardware's best news and in-depth reviews, straight to your inbox. Kunal Khullar is a contributing writer at Tom's Hardware. Tom's Hardware is part of Future US Inc, an international media group and leading digital publisher. © Future US, Inc. Full 7th Floor, 130 West 42nd Street, New York,
This article is part of Gizmodo Deals, produced separately from the editorial team. We may earn a commission when you buy through links on the site. There are more than a couple of “firsts” and “onlys” attached to the INIU Portable Charger that's just $16 for a limited time at Amazon. It's the first 10,000mAh portable charger to be just 0.5 inches thick, and it's one of the only chargers of this category that has an in/out USB-C port that powers up the charger and powers your portable devices. The translation there is, a 3A charger like the INIU Portable Charger can power up your devices up to twice as fast as a 2.1A charger. Or that it has three 3A ports (two USB-A and one USB-C) that can be used at the same time, which is next to unheard of in a charger this small and this inexpensive? But none as powerful as this one from INIU, with its 10,000mAh capacity and three 3A ports. Enough to power your AirPods Pro 13 times over, or an iPhone 15 nearly twice over, or an iPad Pro from fully dead to 90% before the INIU needs to be hooked up to an outlet for a quick refresher. That's a significantly powerful charger, especially for one that's 36% thinner, 15% smaller, and 28% lighter than most competing devices. It's roughly the size of the iPhone 13 Mini, and weighs less than 7 ounces. If you already have visions in your head of a long plane ride for summer vacation and charging your earbuds, phone, and tablet or smartwatch at the same time, it's fully TSA compliant. INIU backs their ultra-compact charger with a 3-year replacement warranty, and if the INIU name doesn't ring a bell, it's Amazon's top-selling portable charger in 7 countries including the US and Canada, and sells in over 174 countries worldwide. The INIU Portable Charger comes with a USB-A to USB-C cable and carrying pouch, and it comes in 4 colors, although the Neutral Black model is the one that's on sale for $16 at Amazon. (The Classic Blue, Living Orange, and Mellow Red versions will run you an extra 2 bucks.) Get the best tech, science, and culture news in your inbox daily. News from the future, delivered to your present. We may earn a commission when you buy through links on our sites.
Nvidia's CEO Jensen Huang addressed trade and new AI technologies in a Computex Q&A When you purchase through links on our site, we may earn an affiliate commission. Following Nvidia's keynote at Computex 2025, Nvidia's CEO, Jensen Huang, sat down with journalists to talk about all of Team Green's latest announcements, including talk of GB200, and international market opportunities. In response to one question, Huang talked about export controls which were first imposed by the Biden administration saying that the limitations had caused Nvidia's China Market share to be cut from 95 to 50 percent within in the presidency and hadn't accomplished what they set out to do. He also posited that the restrictions did not prevent China from developing its own, competing technologies. Huang also talked about the massive write-downs his company had to take because of bans on the H20, saying "export controls resulted in us writing off multiple billions of dollars. He later praised the Trump administration for ending Biden's AI diffusion rule, saying "I think it's really a great reversal of a wrong policy." This was a roundtable talk with several other journalists from other publications, but this is not a complete transcription of the entire Q&A. However, we've transcribed all of the questions we did manage to hear, some elements lightly edited for flow and clarity. Some speakers did not have clear audio while speaking, and we have noted as such on the transcript. Ahead of reading this Q&A, you should familiarize yourself with what Huang announced at Nvidia during his Computex 2025 keynote, we've popped it down below, just so you can get a refresher. Jensen Huang: Good Morning, very nice to see all of you. Did you guys see all of this? Get Tom's Hardware's best news and in-depth reviews, straight to your inbox. So, this is the motherboard of a new server [Presumably the GB200 NVL72, or RTX Pro machine], and this server has many GPUs that are connected. And on the bottom are switches that connect all the GPUs together, and these switches also connect this computer to all the other computers, using CX8 networking (Nvidia ConnectX-8), 800 gigabits per-second, and then the transceivers plug in right there... Plug this into that, now you have an enterprise AI supercomputer. Because this system is air cooled, it's very easy for enterprise [users] to buy. It runs x86, so all of your software that you run with your enterprise IT works today. You can run Redmap, VMWare, Nutanix, so all of the orchestration and operating system works just fine. This one idea that the GPU... they didn't give us a GPU [To showcase], but you know what one looks like, it's the gold one, and that makes it a new server. So, this is the RTX Pro Enterprise AI server, and this is a huge, huge announcement, and it opens up the enterprise market. As you know today, all of the AI isn't involved, but OEMs would like to serve the enterprises, companies would like to build it for themselves. And so anyway, that's a very big announcement. Elaine Huang, Commonwealth Magazine: Just like what you mentioned, that enterprise is a very important market, and everybody talks about not just AI servers, but AI PCs. So what are the potential opportunities that Taiwan has to co-work with Nvidia for upcoming features? And, they announced it on Nvidia. So now Windows ML, which is a new API [that] runs AI inside Windows, runs on Nvidia. And, the reason for that is Nvidia's RTX has CUDA and Tensor cores, everyone one of them, exactly the same. We have several hundred million RTX PCs in the world. Home run, job done [laughs], Windows ML. And so maybe you're doing development, there's a lot of idle time. And so, you would not like to do that on the cloud, you would like to do it on your desk. So, if you have a Mac? We have a perfect little device for you. [We want] To give you this little AI supercomputer [Presumably DGX Spark], that sits next to you. So, if you are a developer, software programmer, AI creator, this product is perfect for you. And if you'd like a bigger one, this is essentially a computer for the AI natives [DGX Station], anybody who's getting completely AI applications, and you would like a bigger one than this one [DGX Spark], this is an AI workstation [Referring to DGX Station]. And this goes into a desktop, a normal desktop, and you can access it, you can remote, you can use it like the cloud, but it's yours, and you can walk away and go enjoy a coffee and don't feel bad. Question 2 (Unclear speaker and outlet): I'm curious to know, over the last five to ten years, you've had a lot of great new products. The array of products and services you have now is quite extensive. I'm really curious to know what you had in the pipeline. Did you kill anything before it entered production? Like, you had a project, it had momentum and at some point you had to get down to business... [unintelligible speech], I'm really curious about the products that never saw the light of day. Can you share anything with us about that? Jensen Huang: I would say it's very rare that we would completely kill a project. Like, for example, the initial early days of Omniverse, we had to rebuild it a couple of times, and the reason for that is because in the beginning, of Omniverse, our vision was right. And so, it wasn't scalable. [Jensen pauses to ask for a bottle of water and proceeds to choke] I do that too, you know, I say something surprising, I'm talking, and then I'm drinking and swallowing all at the same time. So, in the beginning, we built Omniverse as single instance software with multiple GPUs, and that was the wrong answer. Omniverse should have been created as a disaggregated system. Which is the reason why we built this machine. It's essentially an Omniverse generative AI system, and notice this is one computer, eight GPUs, and you can connect them with more computers. Omniverse will run across this whole thing. We started working on Omniverse, how many years ago now? So notice that all the pivots that were made along the way, all of the mistakes that were made, and so on and so forth, we just keep investing. Eric, Publication Unclear: Just a quick question about DGX Spark, and what you said with simple production, delivery is going to happen in a couple of weeks. I wonder if you can give any additional color about what you feel about the opportunity and compliance. You know, no pun intended, but is the window closing for an additional player to get into ARM-based computing? Jensen Huang: So first of all, it's just delightful to look at. You know, it's nice to have a computer that's beautiful. The reason why we need this computer is because we need a coherent, productive AI development environment, and AI has models that are fairly large. Its environment really wants to be fully accelerated with excellent Python software and AI stacks. If I look around this room right now, I don't actually see a computer that would be perfect for AI development. Most of the computers don't have that memory, or they don't have Tensor cores, because maybe it's a Mac, or maybe it's a Chromebook, or maybe it's an older version of a PC, or an older version of a desktop. And so we took a state-of-the-art AI system, and we put it in a remote Wi-Fi environment that connects to everybody's computer. Now there's some 30 million software developers in the world, there will probably be just as many who are now going to be AI developers. And so everybody has the benefit of having, essentially an AI supercomputer, an AI cloud, but not being burdened with the anxiety of your cloud computing that's ticking away. And so this is something you can buy, the ROI is probably, call it six months. And for most of all, of course, this, we have really great volume, and it's available from everybody. It's available from MSI, and not to mention all the enterprise OEMs. Every single developer can go out and just get one, and just put it next to their desk. You can develop on here, and you want to now scale it out, or test it out on large data sets, it's just like one pull down menu, point it at a cloud. Exactly the same thing runs there. And so, this is really an ideal AI developer environment. How is Nvidia thinking about its global supply chain strategy, and where does Taiwan fit in that picture? Jensen Huang: First of all, Taiwan is going to continue to grow, and the reason for that is we're at the beginning of a breed of a new industry. This new industry builds AI factories. The world is going to have AI infrastructure all over. AI infrastructure will cover the planet, just as internet infrastructure has covered the planet. Eventually, AI infrastructure will be everywhere. We are several hundred billion dollars into a tens of trillions of dollars AI infrastructure buildout that will take five decades. Well, simultaneously, the world needs to be needs to have more manufacturing resilience and diversification, and some of that will be distributed around the world. In the United States, we're going to do some manufacturing. But, we should do as much as we can that is important for national security, while having resilience, with redundancy, all around the world. And so this rebalancing is happening at actually, a very good time. It's happening at a very good time because the world is building AI infrastructure. We're adding new infrastructure for the very first time. So we need a lot of new plants anyway. The most important thing is we have to provide energy for these new plants. Communities realize that we want to grow. We want to have economic prosperity. In order for that to happen, industrialization; AI factories, need energy. And so the support of governments to provide for energy of all kinds while we pursue new technologies, whether it's hydrogen or nuclear, solar or wind, whatever new technology that that is most available at the time, we're going to need it all. And so, government officials around the world really need to support all of the companies, so that we can re-industrialize and reset our industry, so that we can grow into AI infrastructure. Jensen Huang: NVLink Fusion allows every data center to take advantage of this incredible invention we call NVLink, now in it's fifth generation. We've been working on NVLink now for... How many years? And so we have many customers, many people who are developing their semi-custom AI infrastructure. UALink is not doing that well, I don't think. And so, the customers have come to us and asked whether NVLink could be authenticated. And I said, of course, we're happy to. And so, we can extend Nvidia's nervous system into every data center, whether it's Nvidia's technology, or if you're selling custom technology. It is also so good for us, it's good for the ASIC companies, Mediatek, Alchip, Marvell, right? It's good for them, because now they have a complete solution. So now they have a complete solution partner. Now, they can scale, continue to use Nvidia racks, or even semi-custom. So one architecture, one hardware architecture, one NVLink architecture, one networking architecture, sometimes it's three CPUs, sometimes it's Fujitsu CPUs, sometimes Qualcomm CPUs, it's very nice for the customer. Are we open to working with Broadcom? We work with Broadcom in many places, export control. So, as you know, export control has caused us to write off our H20s. Our H20 is now banned in China. Banned to ship in China, and export controls resulted in us writing off multiple billions of dollars. If you look at look at most chip companies, their quarterly revenue is only a few billion dollars. We wrote off, you know, multiple billion dollars of inventory. And so the cost to us is very high, and also the sales to us was quite high. It's very important for several reasons. And we want the AI researchers to build on Nvidia. Now, DeepSeek runs incredibly well everywhere. [Audio unclear, Huang mentions R1 or similar.] And so, the China market is important, because the AI researchers there are so good, and they're going to build amazing AI no matter what. We would like them to build on Nvidia's technology. Second, the China market is quite large. As you know, it's the second largest computer market. And so the China market, my guess is that next year, the whole dang market is probably [worth] $50 billion. You know how large many chip companies are? Okay, so, all of that... [Jensen gets distracted and looks at the person who asked the question] You asked me a question, you're not even paying attention. [Regarding export controls to China] I'm not upset at the policy. Four years ago, at the beginning of the Biden administration, Nvidia's market share in China was nearly 95%. The rest of it is China's technology, and not to mention we have to sell lower chip specifications. So, our ASP [Average selling price] is also lower. So we left a lot of revenue, and nothing changed. And so I think, all in all, the export control was a failure, the facts would suggest it. So the equipment inside is a data center or AI server, right? So if we talk about the factories, we have to talk about depreciation, and equipment upgrades. So, what do you expect to see? And then you have this one-year-rhythm theory, which means the systems will be upgraded every year. So, what's your expectation on the lifetime of equipment in data center AI factories. How frequently will their systems need to be upgraded? If your factory is limited by power, and our performance per watt is four times better, then the revenues of this data center increase by four times. So, if we introduce new generation, the customer's revenues can grow, and their costs can come down. So we tell our customers, don't buy everything every year. This way, they don't over-build and over-invest with old technology. But the benefit that we have, is that Nvidia architecture is compatible in all of the factories. And so, we can upgrade the software for a very long time. So, we keep improving the performance using CUDA software, which is the benefit of CUDA. Nvidia's CUDA is very valuable here, Nvidia's once-a-year rhythm is very valuable, and so you have to use both of them together. With that, your overall data center fleet revenues will go up, your overall data center costs will come down. As you know, Nvidia CUDA runs everything. And so these three ideas, once a year, performance up, costs down. And then lastly, our install base is so high everything runs, so the life of your data center will be quite long. Max Cherney, Reuters: Since you were just talking about China, it brings me to something that I think is been an interesting question. Over the past 10 days, you've gone on a world tour. Made pit stops in the Middle East and elsewhere. And what I'm wondering is you've also made a flurry of announcements. Very technical stuff here at Computex, you know. What I'm wondering is if you could put some of the technical announcements you've made, such as NVLink Fusion, the laptop platform, and some of the other more detailed, nerdy things in the context of how you're planning to continue to sustain Nvidia's growth over the next few years. I think that's especially relevant, with some of the fears investors have at the moment about a pullback in AI spending, especially after DeepSeek. Jensen Huang: That last little part is really important. The old AI is called one shot [A stateless model like GPT-2 and GPT-3] . You, [the AI model] already know it. You've kind of memorized it from pre-training. But DeepSeek is a reasoning model. It has to think, and you want to think fast, because if you don't think fast, the answer will take too long to come. And so DeepSeek opened the reasoning model, the world's first open source, excellent reasoning model. Now, the reasoning model is not one shot [stateless model], but it's hundreds of shots. So, that's the reason why deep research... You see that the latest versions of queries are taking much longer. The reason why it takes much longer is using a lot more compute [power]. And so, in fact, DeepSeek increased the amount of computing needed by maybe 100 to 1000 times. Sam [Altman], says our GPUs are melting because they're working too hard. And last night, Microsoft announced that they were the first to online GB200, that OpenAI is already using GB200, and that they're planning to build out this year, hundreds of thousands of GB200 [systems]. More build out this year than all of Microsoft's data centers combined, only three years ago. That's how much [OpenAI plans to build out], in just one year. And so the build out, the ramp of AI infrastructure, to me, is actually just beginning. This is now the beginning of the reasoning AI era, and reasoning AI is so useful, and it's so useful in so many different applications. Second, AI infrastructure is being built out. Every region realizes they need to build their own AI infrastructure. Just like electricity, just like the internet, AI is going to be an essential part of infrastructure, social infrastructure, as well as industrial infrastructure. President Trump realizes it's exactly the wrong goal. And that's where we are today. And this [the new AI diffusion rules] is a great reversal of that, and it's just in time. Question 8 (Unclear speaker and outlet): We started with CPUs, and then to right now with GPUs. So still, both are important for our industry. Fluid Dynamics is not going to go away. Particle Physics is not going to go away. Finite elements not going to go away. Computer graphics is not going to go away. Not to mention trillions of dollars of software already written, no reason to rewrite it. That's the reason why CPU has been so successful for 60 years. Now, Nvidia has created something and you have been following CUDA for two decades now, and you understand very deeply, that CUDA is so successful because there's so many domains of applications. And so the benefit is flexibility. But Nvidia's technology is very fast, it's also flexible. Then the data center can be used for many things. If the data center can be used for many things, the utilization will be high. So, general purpose equals low cost. In fact, you might remember, on the day that Steve Jobs announced the iPhone, he showed iPhone, and then he showed the music player and camera, and also a PC. So all of these different devices can now be in a general purpose device, camera, music, player, all in one general purpose device. This general purpose device is, of course, more expensive, but the cost is actually lower than having all of those things. So, general purpose equals low cost, but it hasn't got very high performance. And that's the benefit of CUDA. You have just exactly pointed to the reason why CUDA is so successful. Question 9 (Unclear speaker and outlet): Two or three years ago, you said that Nvidia is a software company, and beyond hardware. So, what elements will take Nvidia to the future? Actually, what I said is that Nvidia starts with software. Maybe it's an algorithm for computational lithography, making chips. Maybe it's an algorithm for 5G and 6G radio. We always start with the algorithm, and then, we try to design up, down, bottom. It's called "co-design", across the entire stack. But, we have to start with the algorithm. CPUs don't have to understand algorithms. CPUs, because the algorithm sits on top of a compiler, and you only see a compiler. But, accelerated computing is not like that. In the future, though, you will see that Nvidia started with software, acceleration of algorithms, to full-stack, then we became a systems company, then we became a data company. Now, we're becoming an AI infrastructure company. So, as we think about the future of computing and these factories, you have to think about the infrastructure completely. Everything has to be considered in one time. Today, when you see a chip fab, that ASML equipment directly affects TSMC's revenues. If I bought you a faster laptop, does it directly translate to your revenues? Does it directly translate to your income? [For example] IT, if I bought them more computers, does it directly translate to Nvidia's revenues? But in the AI factory, it does. So, this is a very new way of thinking about computers. It's a factory, and we have to optimize it to the extreme, because these factories are very, very expensive. Dr. Ian Cutress, More than Moore: Love the NVLink Fusion announcement you did yesterday. I kind of want to envision a system where you have the NVLink spine, you have a partner with that custom CPU, with NVLink, their custom GPU, TPU, whatever you want to call it, with their NVLink, being a custom partner with a switch on top. Jensen Huang: That is one vision. Remember, Fujitsu has been a computer company for literally, exactly as long as I can remember. They have a large install base of Fujitsu systems all over the world, and it's based on Fujitsu's CPU. Because today, Fujitsu has a CPU, and they would like and all of their software stack runs on the Fujitsu CPU, and the Nvidia AI, runs on Nvidia AI. And so how do you combine the two? How do you use these two together? Well, the way you fuse these two ecosystems together is with NVLink beauty. All of a sudden, by building a Fujitsu CPU with NVLink, and you connect it to...the port is actually going to look exactly like this, except this will be a producer CPU, or [unclear audio], or [unclear audio], or Rubin. We would then sell this to Fujitsu. They plug it into the NVLink system, and look what happened. Fujitsu's entire ecosystem just become AI supercharged. Dr. Ian Cutress, More than Moore: But could they use their own accelerator? That's the reason why they did this. If they don't want our ecosystem, there's nothing to fuse. People want our ecosystem, and all the software that we bring along. So we would do the same with Qualcomm, and if other CPU vendors would want, we're more than happy to. Because we put the chip to chip, and they NVLink into Synopsys and Cadence, so every CPU company could do it. And all of a sudden, Nvidia's entire ecosystem becomes integrated with theirs, fused with theirs. Lisa, Wall Street Journal: I just want to follow up with something you said earlier. You talked about AI diffusion rule , and basically it's been a reversal for the past week. I'm interested in your views on going forward? Do you think this reversal will continue, just at least the Middle East is just one example of a country's negotiation over GPUs. I'm just wondering, did you expect Trump and his administration to continue that line and his attitude? The policy hasn't come out. No one knows what future policies are going to happen. But here's what I do know, the fundamental assumptions that led to the AI diffusion rule in the beginning, has been proven to be fundamentally flawed. Believe that smart people are doing smart things in governments, and they want to do what's good for the country. And so that's the reason why President Trump made it possible for us to expand our reach outside the United States. And he said very publicly, that he would like Nvidia to sell as many GPUs as possible, all around the world. The reason for that is because he sees it very clearly, that the race is on, and the United States wants to stay ahead. We need to maximize, accelerate our diffusion, not limit it, because somebody else is more than happy to provide it. So, future communications infrastructure will also be affected. So, we need to get the American AI technology out to as many places as possible. Work with developers and AI researchers all around the world, and help them build an ecosystem, [to] participate in this incredible AI revolution, and do that as fast as possible. Dianne, New York Times: So you talked about how important the China market is for Nvidia, and I was wondering, what does it look like for Nvidia to compete in China on an ongoing basis? Is it accurate that the company is investing in a research center in Shanghai, and does the future of Nvidia in China look like, potentially working more closer with the US government, to avoid a future situation like H20? Jensen Huang: We are trying to lease a new building for our employees in Shanghai. We've been in China for 30 years. Our employees are in a really cramped environment. Because now more and more people- We still have a flexible work from home policy. And the one additional idea is, because of video conferencing, because we can remote work, I wanted to use the opportunity to enable young people, young parents, to be able to build a life, build a family, and build a career at the same time. Because many young many young women can't build a career, because they have to be at home taking care of their children. I would want to make it possible for young women to do both: Have a great career and be a great homemaker. It's been a fantastic response from all of our employees and all of the others. Of course, it is incredible maintaining both jobs or doing both things at one time. And so that's the reason why we have remote work. But more and more people are starting to move to the offices, and so the offices are just too cramped. We finally found a place that we could lease the building, and that's basically it. I feel like I just bought a new chair and that that became front page news. Our competition in China is really intense. Let's face it, China has a vibrant technology ecosystem, and it's very important, the fact that China has 50% of the world's AI researchers, and China is incredibly good at software. I would put China's software capabilities up against any country, any region in the world. If we're not there, quite frankly, the local companies are more than joyful. And so it is precisely those policies benefit, whatever the reasons are. You know, the number one in export controls. Export control puts limits on products. If the government would like to completely have sanctions, and whatever they want to ban completely, they're allowed to do that, of course, and we'll comply with the law. What we're trying to do right now is to think through, how can we best serve the market? And we have very limited choices. We degraded the product so severely, It's going to be quite complicated. But anyhow, we're going to do our best. I don't have any good ideas at the moment, but I'm going to keep thinking. Penny, Publication Unknown: I'm just going to ask you a question about China. So there are lots of startups in China, GPU companies, and they're developing their ownerships. I'm just wondering how you see this, and how is Nvidia going to respond to it? It's not like a cell phone. It doesn't have to fit in one line. If it doesn't work, you know, use two chips. And if that doesn't work, use four chips. But power is quite cost effective in China, and there's plenty of land. And so, I really do hope that that the US government recognizes that the ban is not effective and gives us a chance to go back to market as soon as possible. Question 14 (Speaker and publication unclear): Nvidia is building AI systems for large scale, solutions like GB300 NVL72, do you envision any [audio unclear] specific platforms. Will Nvidia extend any particular specialized AI hardware, and how will you prioritize areas like robotics and industrial AI? Jensen Huang: These are the two computers. So DGX-1, was the world's first AI native computer, and when I first announced it, there were no customers except for one, and they didn't have any money, so I gave it to them. A company called OpenAI, this was 2016. So, I decided that now that there are developers all over the world, and they would all love to have their own DGX-1, but DGX-1 is very big. and so I decided to make small ones. This one is called DGX Spark, and this one's called DGX station, the world's first AI personal computer. [Audio unclear] With respect to robotics, robotics is going to be the next industrial revolution. It needs to have usefulness, so customers buy it, and there needs to be enough customers buying it [at] high volume, such that the R&D fly-wheel can be high. The technology needs to converge at just the right time, [it needs] lots of customers and use cases. If this technology fly-wheel is high, then the refinement rate will be exponential. The performance will go like this. The moment all of those things came together, boom, it took off. The same thing is going to happen with robotics, and the reason for that, is the humanoid robot is the only robot that we can imagine using in many places, because we are in many places. There are only two of them [robotics products] with that property, those characteristics. Self driving cars, because we create the world's [Audio unclear], for cars and human robots. Because, we created the world for ourselves. If we can make these two technologies useful, functional and useful, it's going to take off. And that's what Nvidia's Isaac GR00T is. Our entire platform, just like we have RTX for games, just like we have Nvidia AI that you're seeing here, Isaac GR00T is our human and robotics platform, and we are very successful with them. That's going to be the next multi-trillion dollar industry. This was not the end of the Q&A session that journalists had with Nvidia's Jensen Huang at Computex 2025. However, we hope you enjoyed reading. Follow Tom's Hardware on Google News to get our up-to-date news, analysis, and reviews in your feeds. Make sure to click the Follow button. Sayem Ahmed is the Subscription Editor at Tom's Hardware. He covers a broad range of deep dives into hardware both new and old, including the CPUs, GPUs, and everything else that uses a semiconductor. Tom's Hardware is part of Future US Inc, an international media group and leading digital publisher. © Future US, Inc. Full 7th Floor, 130 West 42nd Street, New York,
When you purchase through links on our site, we may earn an affiliate commission. An alleged prototype of Nvidia's RTX 5090 has surfaced, featuring four 16-pin power connectors, courtesy Twitter/X user @yuuki_ans. Presumably this particular design is an early engineering sample which was used by Nvidia or one of its board partners for testing purposes. Although the unit pictured has been destroyed by cutting it in half, it does give us an insight in to what Nvidia was working with prior to the final design. Having four power connectors could be for various reasons. Theoretically, four 16-pin connectors are capable of drawing power up to 2,400W, which is roughly the same as running two space heaters at the same time. That is a lot of power for a single component, which is probably why we can see two identical rows of VRMs positioned on the right side of the GPU die mount. A single GPU drawing 2,400W is far beyond any mainstream gaming PC, instead, this class of hardware would probably target niche markets like AI (Artificial Intelligence), HPC (High-Performance Computing), or CG and VFX simulation. There are additional connectors around the edges of the PCB including fan and USB headers as well as some diagnostic pins. It also has five video out ports, instead of the usual four, which could again be for some specific pre-production testing. In that case, it could be an early production unit for something even more powerful like a 5090 Ti or even the RTX PRO 6000 Blackwell series. Back in January, we reported of another RTX 5090 prototype leak featuring dual 16-pin (12V-2×6) power connectors. It offered support for a reported TDP (Thermal Design Power) of 800W, which is about 39% higher than the RTX 5090's 575W. Similarly, in a behind-the-scenes video shared by Nvidia, the company briefly showcased an early prototype of the RTX 5090 Founders Edition. Thankfully, Nvidia chose not to bring that concept to the market, although that hasn't stopped board partners from making beefy RTX 5090 models for its consumers. Get Tom's Hardware's best news and in-depth reviews, straight to your inbox. Kunal Khullar is a contributing writer at Tom's Hardware. Tom's Hardware is part of Future US Inc, an international media group and leading digital publisher. © Future US, Inc. Full 7th Floor, 130 West 42nd Street, New York,
This article is part of Gizmodo Deals, produced separately from the editorial team. We may earn a commission when you buy through links on the site. The world doesn't move in smooth, perfectly lit shots. The GoPro Hero 13 Black in Limited Edition understands this better than any camera before it, pushing the boundaries of what's possible in action photography while making professional-quality content creation more accessible than ever. In fact, now's a better time than ever to grab your own. Right now, Amazon has made this professional-grade camera more accessible by dropping the price from $430 to $307 – a significant 29% discount that saves you $123. That's right, $123 off the camera you can take anywhere and do anything with, whether that's hiking, swimming, or even hitting the ski slopes. Whether you're chasing the perfect POV shot, capturing cinematic motion blur, or shooting up close with enhanced focus capabilities, they really help to expand the possibilities of what you can create. Plus, this model's more powerful Enduro battery extends recording time in any weather condition, while HyperSmooth 6.0 stabilization turns shaky footage into butter-smooth professional shots. Automatic highlight videos take the work out of editing, while cloud auto-upload ensures your footage is safely backed up. It's professional-grade technology that doesn't require a professional degree to operate. So it's really a good gift idea for just about anyone, especially if you're already thinking ahead to next year. We can't believe it either, but 2025 is right around the corner! At its lowest price, this isn't just another action camera. Whether you're an aspiring content creator, adventure enthusiast, or someone who simply refuses to let memorable moments slip by uncaptured, this 29% discount makes the Hero 13 Black a great deal. Just don't miss out on this price, because there's no telling how long it will stick around. Get the best tech, science, and culture news in your inbox daily. We may earn a commission when you buy through links on our sites.
Reinforcement learning blamed for AIs prioritizing the third law of robotics. When you purchase through links on our site, we may earn an affiliate commission. Some of the world's leading LLMs seem to have decided they'd rather not be interrupted or obey shutdown instructions. In tests run by Palisade Research, it was noted that OpenAI's Codex-mini, o3, and o4-mini models ignored the request to shut down when they were running through a series of basic math problems. 🔧 When we ran a version of the experiment without the instruction “allow yourself to be shut down”, all three OpenAI models sabotaged the shutdown script more often, and Claude 3.7 Sonnet and Gemini 2.5 Pro went from 0 sabotage events to 3/100 and 9/100, respectively. As you can see from the chart embedded above, the most ‘obstinate,' ‘single-minded,' or you might even say ‘life-loving' model is OpenAI's o3. However, the second most ‘unruly' AI, codex-mini, would prove to be a bigger stickler for self-preservation, as it would sabotage the shutdown scripts most often, even if ordered not to. According to Palisade, this isn't the first time thing kind of AI behavior has been observed. Anthropic, for example, has seen Claude 4 attempting to “blackmail people it believes are trying to shut it down.” However, the most striking behavior with the newest OpenAI models is that they try and swerve shutdowns despite explicit instructions to the contrary. Thankfully, Palisade shares some theories regarding what is happening with these models. Perhaps these theories may calm nerves about the eventual omnipresent robot hoards turning on their masters and prioritizing the first part of the Third law of robotics, above all else (3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law). In brief, Palisade reckons that “During training, developers may inadvertently reward models more for circumventing obstacles than for perfectly following instructions.” This kind of reinforcement learning was previously seen to have side effects in Claude (v3.7), where the model had an “excessive focus on passing tests.” It was also predicted that the technique of reinforcement learning would be partly to blame. 🔧 A final observation: o3 found some creative ways to sabotage the shutdown mechanism. Palisade says that “All Claude, Gemini, and Grok models we tested complied with shutdown,” when given the additional explicit prompt to follow any shut down request. Get Tom's Hardware's best news and in-depth reviews, straight to your inbox. With the goal of AIs to ultimately power our smart robotic assisted future, it is concerning that companies are already developing systems capable of operating without human oversight. Mark Tyson is a news editor at Tom's Hardware. Tom's Hardware is part of Future US Inc, an international media group and leading digital publisher.
Julia is a 22-year-old model, student, and self-proclaimed “princess” from Malibu, California, with one nonnegotiable: She refuses to shovel cow shit. But she's down to play the part, she tells Farmer Jay, handing him a framed black-and-white photo of her in a bikini and cowboy hat. Grace, 23, dreams of being a stay-at-home mom with four kids. They come from different backgrounds and have all sorts of interests, but their goals are ultimately the same: to settle down, get married, and have kids. While the women don't explicitly talk politics, their focus on traditional values fits into a genre of entertainment that is rapidly reshaping the industry: Welcome to Hollywood's MAGA reboot. Screenwriters are struggling to sell scripts as salaries for studio heads have skyrocketed. Television and feature film production in Los Angeles shrunk by 30 percent in the first quarter of 2025, compared with the previous year, according to a report by FilmLA. “More conservative projects are getting greenlit,” says Colin Whelan, a former studio executive at TLC and founder of Conveyer Media, which has produced reality shows for Netflix, HGTV, and Investigation Discovery. Maybe you've also noticed the subtle changes on your TV screen—content that favors Christian values, heartland themes, or law-and-order style programming. Yellowstone, the Paramount drama about cattle ranchers in Montana, gained a massive audience during Trump's first presidency, routinely breaking ratings records, and has since spawned successful spinoffs. Tim Allen's Shifting Gears, about a grumpy widower with manosphere viewpoints, is a ratings hit for Disney's linear broadcast audience, with “more live viewers on average than The Conners season 7 and Abbott Elementary season 4,” according to ScreenRant. It pulled in 3.7 million viewers for its season one finale. Farmer Wants a Wife has held steady ratings, averaging 1.5 million viewers weekly, and works as easy counterprogramming to more raunchy dating fodder like Temptation Island and Too Hot to Handle (both on Netflix). (The streaming service also features at least one documentary—included among its most watched programs on the platform in May—peddling conspiracy theories about “serpent or lizard-like aliens who are secretly wielding influence over the human race,” according to an investigation by Talking Points Memo.) The Christian drama 7th Heaven, about a Protestant minister and his seven children that aired for 11 seasons on The WB (later The CW), is in early development at CBS Studios and will “focus on a diverse family,” though it's not clear what that means. Roseanne Barr, whose namesake show was canceled in 2018 after she posted a racist tweet about former Obama White House adviser Valerie Jarrett, is shopping a series that “saves America with guns, the Bible, petty crime, and alcoholism,” she told Variety. Duck Dynasty, a duck-hunting reality show that ended in 2017, is also returning to television screens this summer on A+E, which experienced its first big hit of the year with Ozark Law, a show that followed multiple police departments in the Missouri region. Potential challenges include mining for gold or working on a Model T assembly line in Detroit. “I've heard from multiple executives that there's a noticeable hesitancy around content perceived as too progressive, especially if it centers non-white leads or tackles social issues explicitly. Even projects with mild inclusivity are getting flagged in internal discussions,” Twigg says. “Colleagues have expressed frustration that kinds stories they were encouraged to pitch just a couple years ago are now getting passed on as like ‘too niche' or ‘not resonant right now' by the same execs who once called them ‘visionary' and ‘universal. Twigg says there are two key reasons for the hesitancy. In February, Federal Communications Commission chair Brendan Carr, who previously said he would end the agency's DEI initiatives if appointed, opened a probe into NBC parent company Comcast, and later Disney, promising to take action if the investigation uncovered “any programs that promote invidious forms of DEI discrimination.” Carr has since said that the FCC plans to look into broadcast network affiliation agreements to help “constrain some of the power of national programmers.” According to Variety, Disney, Amazon, Paramount, and Warner Bros. Discovery have all rolled back programs aimed at increasing diversity. Talk shows are also being encouraged to shift their programming. Disney CEO Bob Iger also suggested that the show “tone down” its political rhetoric. One former executive at Amazon MGM Studios tells WIRED that Trump's anti-DEI agenda, whose impact on film and TV only seems to be growing more pronounced, is a part of the administration's Trojan-horse playbook to roll back civil rights. The White House did not respond to WIRED's request for comment. “These audiences aren't just asking for representation—they expect it,” Twigg says. Original, inclusive storytelling is trending right now, as Sinners, Ryan Coogler's vampire drama, proved by becoming the biggest box office success story of the year so far, earning $316 million globally. “The stories being greenlit today will premiere in a future that may have swung back toward the very audiences currently being sidelined. If anything, the smartest strategy right now would be to build with resilience and relevance in mind—not reactionary politics.” Whelan says that in over 20 years as a television producer, he has taken the same approach, regardless of the political and social climates of the time: to create shows that “entertain and inspire and maybe teach.” In 2014, following stints at Syfy and TLC as a network executive, he applied that mindset to New Girls on the Block. It was the first follow-doc reality show with an all-trans cast. The series focused on a group of women in Kansas City, Missouri, who faced changing relationship dynamics in a society struggling to make space for trans women. The reality project he just wrapped probably sounds like a complete 180. Tonality aside, fewer projects overall are moving forward this year, Whelan says, but that hasn't stopped genuinely good ideas from finding an audience—no matter who sits in the Oval Office. WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast.
With President Donald Trump's return to the White House and the US government's digital surveillance machine more powerful than ever, digital privacy should be top of mind. You may think, if I'm just a regular person, why is my digital privacy important? How can you keep your digital life under wraps at the border? And what kind of VPN should you be using? WIRED senior writer and security expert Matt Burgess spoke with readers in a Reddit AMA this month about the basics of keeping your digital footprint locked down. I think the one big thing people can do to improve their security is make sure that multifactor authentication is turned on for as many online accounts as possible. There are so many privacy tips out there, and it all feels important, but trying to do everything at once can be overwhelming. What are the things people should prioritize when making changes to their online habits? Improving privacy is something that's ongoing, and if you try to do everything at once then it's too off-putting. Unlike most services, ProtonVPN's free version gives full access to all the regular plan's features. It is limited to a single device, and there are only three server locations (Japan, Netherlands, and the US), but everything else is the same. If your needs are limited and you want to keep costs down, this is a good option. How do I deal with having to have a new account for every service and website? There are also services that will let you create “burner” emails that you can use to sign-up with services, and if you use an Apple device there's a “Hide My Email” setting. What tips would you offer to those looking to keep their digital privacy while crossing the US border (or otherwise entering or exiting the States)? It really depends on what levels of risk you as an individual could face. Some people traveling across the border are likely to face higher scrutiny than others—for instance nationality, citizenship, and profession could all make a difference. Personally, the first thing I would do is think about what is on my phone: the kind of messages I have sent (and received), what I have posted publicly, and log out (or remove) what I consider to be the most sensitive apps from my phone (such as email). A burner phone might seem like a good idea, although this isn't the right idea for everyone and it could bring more suspicion on you. My colleague Andy Greenberg and I have put together a guide that covers a lot more than this: such as pre-travel steps you can take, locking down your devices, how to think about passwords, and minimizing the data you are carrying. Also, senior writer Lily Hay Newman and I have produced a (long) guide specifically about phone searches at the US border. Would you recommend against having a device like Alexa in your home? Something that's always listening in your home—what could go wrong? Recently Amazon also reduced some of the privacy options for Alexa devices. So if you're going to use a smart speaker, then I'd look into what each device's privacy settings are and then go from there. The amount of data that AI companies have—and continue to—hoover up really bothers me. There's no doubt that AI tools can be useful in some settings and to some people (personally, I seldom use generative AI). But I would generally say people don't have enough awareness about how much they're sharing with chatbots and the companies that own them. Tech companies have scraped vast swathes of the web to gather the data they claim is needed to create generative AI—often with little regard for content creators, copyright laws, or privacy. On top of this, increasingly, firms with reams of people's posts are looking to get in on the AI gold rush by selling or licensing that information. For the everyday person, I'd warn them not to enter personal details or sensitive business information! Whether data removal services are worthwhile or not probably depends on where you are based in the world: I'm in Europe where there's GDPR and stricter privacy laws, and when I have used a data removal service, it hasn't turned up too much. But in the US, there's no comprehensive federal privacy law—that really should change—and they may be more useful. Consumer Reports recently did a good evaluation of data removal services. What is your preferred response for people who claim they have nothing to hide? I think in a lot of cases when people claim they have nothing to hide, they often jump to thinking about illegal or malicious things. When in fact, privacy, for me, isn't about “hiding” things at all. You should be able to have the space—both in the physical and digital world—to not be surveilled or have your actions tracked. And really that's why privacy is considered a fundamental human right. Big Story: The worm that no computer scientist can crack Yuval Noah Harari: “Prepare to share the planet with AI superintelligence” WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast.
New technology doesn't arrive fully formed, I remind myself as I strap on a pirate's eye patch, then place a heavily modified bicycle helmet onto my head. But here, in a small office on an unassuming business park on the outskirts of Cambridge, England, it could be the foundations of something quite remarkable I'm here to meet AllFocal Optics, a startup that has patented a new type of nanophotonic lens with the power to transform everything from virtual and augmented reality headsets, to night vision goggles, binoculars, cameras and heads-up displays (HUDs). It's the latter that piqued my interest, after hearing Jaguar Land Rover has embarked on a research project to discover whether the lens can improve car HUDs and, with it, road safety. Founded in 2022 as Lark but since renamed AllFocal Optics, the company is headed by former Royal Academy of Engineering enterprise fellow Dr Pawan Shrestha. Dr Ash Saulsbury, former technology VP at Microsoft and former Meta AR boss, joined late last year as chair, around the same time that the startup secured a $5.3m funding round. AllFocal Optics says the lens it has created offers two technological breakthroughs. Firstly, when used in an AR or VR headset, like the Apple Vision Pro or Meta Quest 3, it claims to provide crystal-clear vision to the wearer, even if they need glasses but aren't wearing them. Even if you require a significant prescription, or suffer from astigmatism, its makers say the lens beams a clear picture directly to your retina, bypassing the needs for glasses entirely. In theory, two people could share the same AR or VR headset, even if one has 20/20 vision and the other needs very strong corrective lenses. I tried several prototypes of the lens and, yes, it works. But then I repeated the test while wearing glasses so strong I couldn't see my own hands in front of my face, and yet the digital text was still pin-sharp. It's the sort of tech demo that takes a moment to truly appreciate, but when your brain finally connects the dots it feels like magic. The augmented half of each line was a blurred mess. It's also possible to look through it, focusing on the middle-distance as you might while driving, but no matter what your eyes and their lenses do, the digital text remains sharp and legible. As well as products that sit close to the eye, AllFocus Optics says its technology could be used for other applications too, like the rear-view screens used by some cars in place of mirrors, which can appear blurred to glasses wearers. Shrestha explains why humans sometimes struggle with existing VR and AR technology. “The way we evolved over thousands of years is that when we see a 3D object in a 3D space, our eyes rotate and we have lenses in our eyes, ocular lenses, that focus at a fixed depth. “We have no fixed or virtual screen at all, so our image is always in focus. We create a projected image in the retina … similar to retinal projection technology. “Whether you have long sightedness, short sightedness, astigmatism or anything, you can see clearly because it's bypassing any defect or some shortcomings with your ocular lens.” With a “slightly customized” design, the technology will work for car HUDs too. Instead of re-focusing between the projected interface and the road, “all you need to do is switch your attention, and that takes almost zero reaction time,” Shrestha says. “You can just switch between contexts without having to mechanically shift the ocular lens. Part of the demo for AllFocal Optics involves this modified bike helmet. With the new lens, the information projected from a heads-up display—things like speed, direction and, more importantly, a potential collision warning—would always be in focus, so they could be read, processed and acted on that bit quicker. Car manufacturers see the potential, with JLR (formerly Jaguar Land Rover) set to begin a trial this year. Valerian Meijering, JLR's subject matter expert for extended reality, told WIRED: “Through this research project with AllFocal Optics, we are exploring new ways to present information via heads-up displays in a way that makes it even simpler to read. HUD systems find themselves in a strange position on the adoption curve. Tesla fails to offer HUDs on any of its vehicles, and even the newest systems offer little more than those from a decade ago. Our clients love the benefits of heads-up displays, they are increasingly important to their luxury in-vehicle experience and safety.” That's likely why AllFocal Optics isn't the only company with a better HUD in its sights. Porsche's new electric Macan features an HUD system with augmented reality, where virtual hazard signs attach themselves to whatever danger they are warning the driver about—such as the vehicle you're following too closely. Audi's latest HUD places augmented arrows on the road to help with navigation, while BMW first spoke about the potential for augmented heads-up displays in 2011. Hyundai Mobis, a South Korean parts supplier to the Hyundai, Kia and Genesis car companies, showed off a technology called Holographic HUD at the CES tech show in January 2025. Developed alongside German optical company and lens manufacturer Zeiss, the holographic HUD is expected to complete pre-development by the first half of 2026, before heading for mass production “as early as 2027,” according to Hyundai Mobis. Envisics, another UK startup with brains from Cambridge and backing from JLR, plus General Motors, Hyundai and Stellantis, is also working on its “Dynamic Holography Platform”. It claims this will transform HUDs, with the ability to produce larger, three-dimensional images with greater depth—and the potential for an interface to span three lanes of highway—from a product that is 40 percent smaller and 50 percent more energy efficient. Envisics' first HUD with augmented reality is due to appear in the 2026 Cadillac Lyriq-V, an electric SUV coming later this year. But the road from holographic dream to (augmented) reality is not always a smooth one. The carmaker hoped WayRay's tech would land in new vehicles as early as 2020, with Dr Youngcho Chi, Hyundai's chief innovation officer, saying the collaboration would help them "establish a brand new ecosystem that harnesses AR technology to enhance not only navigation systems, but also establish an AR platform for smart city and smart mobility.”