LAS VEGAS — Speaking this week on the Amazon Web Services re:Invent stage, AWS executive Colleen Aubrey delivered a prediction that doubled as a wake-up call for companies still thinking of AI as just another tool. "I believe that over the next few years, agentic teammates can be essential to every team — as essential as the people sitting right next to you," Aubrey said during the Wednesday keynote. "They will fundamentally transform how companies build and deliver for their customers."

On her own team, for example, she has challenged groups that once had 50 people taking nine months to deliver a new product to do the same with 10 people working for three months. Meanwhile, non-engineers such as finance analysts are building working prototypes with AI tools, contributing code in Amazon's Kiro agentic development tool alongside engineers and feeding those prototypes into Amazon's famous PR/FAQ planning process on weekly cycles.

Aubrey is senior vice president of Applied AI Solutions at AWS, overseeing the company's push into business applications for call centers, supply chains, and other sectors. She draws a clear line between single-purpose AI tools that do one thing well and the agentic teammates she sees emerging: systems that take responsibility for whole objectives and require a different kind of management. "I think people will increasingly be managers of AI," she said. In effect, everyone becomes a manager, responsible for prioritization, delegation, and auditing.

AWS's call center platform has reached a $1 billion annual revenue run rate, and Aubrey noted that its year-over-year growth has accelerated for two consecutive years.
This week at re:Invent, the team announced 29 new capabilities across four areas: Nova Sonic voice interaction, which Aubrey says is "very close to being indistinguishable" from human conversation; agents that complete tasks on behalf of customers; clickstream intelligence for product recommendations; and observability tools for inspecting AI reasoning. One interesting detail: Aubrey said she's often surprised by Nova Sonic's sophistication and empathy in complex conversations, and equally surprised when it fails at basic tasks like spelling an address correctly.

The ROI question gets a "yes and no." Asked whether companies are seeing the business value to justify AI agent investments, Aubrey offered a nuanced response: the value often shows up as eliminating bottlenecks — clearing backlogs, erasing technical debt, accelerating security patching — rather than immediate revenue gains. "I'm not going to see the impact on my P&L today," she said, "but if I fast forward a year, I'm going to have a product in market where real customers are using and getting real value, and we're learning and iterating, where I might not have even been halfway there in the past."

Her advice for companies still hesitating: "If you don't start today, that's a one way door decision… I think you have to start the journey today. You can refine your guardrails, and then confidently keep iterating… the same way we do with each other."

She teased "a few other new investment areas" expected to come in early 2026.
When you purchase through links on our site, we may earn an affiliate commission. We're back with another classic from TrashBench, the ingenious modder who has previously dunked GPUs in transmission fluid to cool them.

Our journey starts with a Thermalright Peerless Assassin, a competent cooler on its own, but clearly there's room for improvement. For those unaware, the heatpipes inside an air cooler are hollow, with a small amount of liquid inside that evaporates and condenses during heat cycles, acting as a phase-change system. That's enough to cool a CPU when combined with fans on either side. Thin water tubes are connected and secured to these pipes, and once initial leaks are fixed, a pump at the other end successfully pushes green-colored water through them, bringing this custom apparatus to life.

It's time for testing, and an MSI RTX 3070 is the first recipient of this honor. On the side is a portable ice chiller, on which a 12V diaphragm pump is mounted to power the entire setup. Once turned on, ice-cold water flows through the heat pipes, touching the 3070's GPU, which sits at a casual -14 degrees Celsius. True to his name, TrashBench runs a bunch of games and benchmarks on this new below-zero RTX 3070, and compared to the stock results, we see an average 10% uplift across the board. The unlocked cooling headroom enables a +320 MHz overclock that delivers decent improvements, but nothing drastic.

The older GTX 960 fares far better: Cyberpunk 2077 shows a massive 21% increase in FPS, while COD: Black Ops 7 demonstrates a 220 MHz uplift in boost clocks. Overall, across all tests, the GTX 960 saw a ~17% performance bump. Still, we're pretty confident in singing its praises — this seems like a legit upgrade to an existing air cooler, turning it into a pseudo-AIO that can help overclock GPUs without requiring a full-blown liquid nitrogen setup.
It's wild but just accessible enough to be something truly special, putting the "fun" in functional.

Hassam Nasir is a die-hard hardware enthusiast with years of experience as a tech editor and writer, focusing on detailed CPU comparisons and general hardware news. When he's not working, you'll find him bending tubes for his ever-evolving custom water-loop gaming rig or benchmarking the latest CPUs and GPUs just for fun. Tom's Hardware is part of Future US Inc, an international media group and leading digital publisher.
Rizzbot has more than a million followers (and 800 million views) across social media and is known for its comedic roasting of subjects, as well as for giving people the middle finger. Speed, meanwhile, has more than 50 million followers (and 6 billion views) across various platforms and is known for his dramatic behavior while livestreaming.

What happened when the two parties met is the subject of a lawsuit that Rizzbot's creators, Social Robotics, detailed in a petition filed in November against Speed, né Darren Jason Watkins Jr., his management company, Mixed Management, and another producer who was with Speed's team that day. The petition, obtained by TechCrunch, alleges that Speed inflicted "irreparable damage" on Rizzbot. "Speed absolutely knew that this was not an appropriate way to interact with a sophisticated robot and knew that such actions with inflict irreparable damage to Rizzbot," the petition read. The petition said that Speed's handling of the robot caused "complete loss of functionality," and that Rizzbot suffered "significant damages" to its mouth and neck. Speed's management team did not respond to TechCrunch's request for comment.

"This was an event that was live-streamed, so there's not a ton of discrepancy as to the facts," Levine told TechCrunch. The petition said that Speed "failed to act as a careful, reasonable, and prudent person," and that he "wrongfully exercised control over" Rizzbot. It also said that, as a result of the destruction, the team behind Rizzbot has lost out on economic opportunities, since Rizzbot is indefinitely unable to partake in high-profile appearances and deals, including scheduled upcoming ones with CBS's The NFL Today and Mr. Levine said there has been no formal answer to the suit just yet, and noted that they are still in the very early stages of litigation. When asked for comment, Rizzbot told TechCrunch via email that it had to get "a whole new body" after Speed "wrecked" its last one.
"Everything's brand new except my Nike kicks and cowboy hat," Rizzbot told TechCrunch in a statement.

Dominic-Madori Davis is a senior venture capital and startup reporter at TechCrunch.
A Reddit user has reported that Nvidia — a company with a $5.2 trillion market cap — declined to replace their brand-new GeForce RTX 5080 Founders Edition, one of the best graphics cards, after the card's 12V-2x6 connector lost its retention clip during the first attempt to remove the cable, a refusal the user characterized as Nvidia trying to "burn my house down."

This is particularly important because Nvidia attributed the widespread RTX 4090 melting incidents to partially seated 12VHPWR connectors, and the revised 12V-2x6 standard was introduced with the RTX 50 series to improve reliability. A clip failure removes one of the few mechanical safeguards that prevent the plug from backing out under cable tension.

This isn't a first for the 5080, which has been the subject of at least one earlier Reddit thread in which an owner asked whether a broken clip could cause long-term issues. Other reports include a 5080 power cable allegedly melting at the power supply side and isolated cases of 5090 connector damage. These incidents have not yet formed a clear pattern, but they sit alongside high-profile reminders that the underlying design may be flawed.

In the RTX 4090 cycle, Nvidia said it would handle RMAs for connector-related failures, even when third-party adapters were involved. Board partners did not always match that posture. MSI previously rejected an RMA when a CableMod adapter was used, and the case only came to light after customers shared support transcripts.

Luke James is a freelance writer and journalist. Although his background is in law, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.
This week on the GeekWire Podcast, we dig into our scoop on Amazon Now, the company's new ultrafast delivery service. Plus, we recap the GeekWire team's ride in a Zoox robotaxi on the Las Vegas Strip during Amazon Web Services re:Invent. In our featured interview from the expo hall, AWS Senior Vice President Colleen Aubrey discusses Amazon's push into applied AI, why the company sees AI agents as "teammates," and how her team is rethinking product development in the age of agentic coding. Subscribe to GeekWire in Apple Podcasts, Spotify, or wherever you listen.
> GrapheneOS has officially confirmed a major new hardware partnership—one that marks the end of its long-standing Pixel exclusivity. According to the team, work with a major Android OEM began in June and is now moving toward the development of a next-generation smartphone built to meet GrapheneOS' strict privacy and security standards.

It's impossible to escape the Apple/Google duopoly, but at least GrapheneOS makes the most out of Android regarding privacy. I still wish we could get some kind of low-resource, stable, and mature Android clone instead of Google needlessly increasing complexity, but this will over time break app compatibility (Google will make sure of it).

Edit: I do think Pixel devices used to be one of the best, but I'd still like to choose my hardware and software separately, interoperating via standards.

Companies realized that the OS is a profit center, something they can use to influence user behavior to their benefit. So why would a company, in this new environment, invest resources in making their hardware compatible with competing software environments? They'd be undercutting themselves. That's not to say that attempts to build interoperability don't exist, just that they happen due to what are essentially activist efforts, the human factor, acting in spite of and against market forces. That doesn't tend to win out, except (rarely) in the political realm.
But I'd guess this accounts for a relatively small fraction of corporate decisions on lock-in strategies for rent extraction. Advanced users should be able to treat their cell phone's OS like a laptop's, with the same basic concepts, e.g., just lock down the firmware for the radio output to keep the carriers happy, and open everything else, maybe with a voided warranty if you swap out your OS.

https://www.fcc.gov/oet/ea/rfdevice

> INTENTIONAL RADIATORS (Part 15, Subparts C through F and H)
> An intentional radiator (defined in Section 15.3(o)) is a device that intentionally generates and emits radio frequency energy by radiation or induction that may be operated without an individual license.
> Examples include: wireless garage door openers, wireless microphones, RF universal remote control devices, cordless telephones, wireless alarm systems, Wi-Fi transmitters, and Bluetooth radio devices.

https://www.ecfr.gov/current/title-47/chapter-I/subchapter-A...

Other countries have similar regulations. PCs don't have that restriction. You might be able to get to the point where you have a broadcast license and can get approved to transmit in the cellphone radio spectrum, with FCC approval for doing so with your device... but if you were to distribute it, and someone who wasn't licensed could easily modify it into a jammer, you would also be liable. At the scale the cellphone companies work at, such liability is not something they are comfortable with.
A modern phone is much more complicated. As to why there aren't a plethora: the market doesn't demand it that much. The people doing it aren't wildly successful.

They tried to fix some of this with the MCA bus of the PS/2, but that flopped.

> almost every phone has closed drivers

Lots of hardware manufacturers refuse to provide anything else and balk at the idea of open drivers. And reverse-engineering drivers is either not worth the hassle or a risk of being sued.

> Why are there not yet a plethora of phones on the market that allow anyone to install their OS of choice?

Incentive.
And yes, today we have the transistor budgets to spend on things like this, which wasn't an option back when the PC architecture was devised.

Currently they're only permitted to release binaries of the patches due to the embargo, which is why these patches are in the parallel/optional stream (so people unhappy with being unable to see the sources won't have them shoved down their throats). I don't have URLs at hand at the moment, but all these questions have been asked many times and explained extensively on their discussion forum.

I, for one, feel safe. I've been patched since late October (IIRC) for the vulnerabilities that Android-related outlets were warning about in early December. It's quite surreal how unsafe standard Android is, when all it takes is the user stumbling upon one malicious web page or getting a WhatsApp message they won't even see.
GrapheneOS wants to make a FOSS Android with a security model that makes it hard for any bad party to break into the phone. LineageOS wants to make a FOSS Android that respects the user's privacy first and foremost; it implements security as best it can, but the level of security protection differs across supported devices. The good news is that if you have a boot passphrase, its security is somewhat close to GrapheneOS's, differing in that third parties with local access to the device can still brute-force their way in, whereas with GrapheneOS they can't, unless they have access to hardware-level attacks.
It's such a big deal, and I see little to no work being done on this front. Anyone have any idea what GrapheneOS actually deblobbed?

For a list of security features, see here [0]. LineageOS has a place for those who care less about security and more about features, "freedom", compatibility, community, etc. I was a LOS user and maintained my own forks for devices, but switching to GrapheneOS was a good decision and I don't really miss anything.

You can have root to control your own device on Lineage, but not Graphene.

If neither of the two major players can make an open, secure, _simple_, easy-to-understand, bloat-free OS, then we somehow need another player. Presently (and I confess my bias to seek non-state solutions may show here), it seems that a non-trivial part of the duopoly stems from regulatory capture, insofar as the duopoly isn't merely software but extends all the way to TSMC and Qualcomm, whose operations seem to be completely subject to state dictates, both economic/regulatory and of the darker surveillance/statecraft variety (and of those, presumably some are classified).

I'm reminded of the server market 20-ish years ago, when, although there were more than two players, the array of simple, flexible Linux distros that are dominant today were somewhere between poorly documented and unavailable. I remember my university still running Windows servers in ~2008 or so. What do we need to do to achieve the same evolution that the last 2-3 decades of server OSes have seen? Is there presently a mobile Linux OS that's worth jumping on? Is there simple hardware to go with it?
But if there's one thing you might want out of a pair of glasses with wires in them, it's audio. I've maintained that glasses are perfect conduits for open-ear audio, and as someone who's worn the Ray-Ban Meta AI glasses extensively over the past two years, I can speak from experience. Having speakers in your glasses lets you take hands-free calls and listen to music while still hearing your surroundings, and theoretically you don't need to reach for a separate device like wireless earbuds or headphones, since your glasses are waiting patiently on your face.

But what if you don't want all the potentially problematic stuff that comes with smart glasses, like AI, or discreet cameras, or screens? For that, you have options, and one of them (if you're a certain type of person) is Chamelo's Music Shield.

The Chamelo Music Shield audio glasses pack a lot of volume and have cool adjustable lenses but are lacking in features. The $260 Music Shield are a… distinct pair of wraparound audio glasses. They're made by Chamelo, an eyewear company backed by an unlikely star: former New York Knicks point guard Stephon Marbury, who is officially listed as Chamelo's Chief Brand Officer. While Marbury and company call Chamelo glasses "smart," the jury is still out on that one. Unlike other pairs of smart glasses, such as the Ray-Ban Meta AI glasses and the Meta Ray-Ban Display, the Music Shield lack most of what makes other frames smart.

There is, however, a set of speakers, which you may have already gathered from the name. With those speakers, you can do a few things, like (duh) listen to music. On that front, Chamelo does a pretty decent job. In my testing, I found that the Music Shield sound… pretty okay. The volume is good enough, which is a major component of audio in glasses like this, since you'll be contending with ambient noise while you listen. Most likely, though, if you're interested in Chamelo glasses, you'll want to use them in some kind of sports environment.
Wraparound shades are ideal for things like snowboarding, skiing, or cycling because they protect your eyes from the wind and help you see where you're going without getting blinded by air or snow. I didn't get a chance to test the Music Shield on the slopes, unfortunately, but based on the louder environments I did test them in, I'd wager they could still be heard in fast-paced, wind-heavy action.

The Oakley Meta Vanguard have big sound that's unlike any I've heard in smart glasses, and comparing one-to-one, I don't think Chamelo quite reaches the same volume. It also doesn't quite have the same fidelity. Like I mentioned, the Music Shield sound pretty good, but there's still a slight tinniness compared to beefier smart glasses like the Oakley Meta Vanguard. I also had some issues while calling, where the call audio was much quieter than music playback despite cranking my phone volume up. The Music Shield still sound much better than other glasses I've tried, like the Solos AirGo A5, but they're not dethroning Meta.

From an audio quality perspective, I wouldn't be upset with the fidelity of the Music Shield if I bought them with my own hard-earned money, but having tried a direct competitor like the Oakley Meta Vanguard, I might not be as impressed. Meta's version does cost a great deal more at $500, but you also get a lot of extra features there, including certain health integrations with Garmin smartwatches, cameras, a voice assistant, and more touch controls.

The Music Shield's other big selling point is its adjustable electrochromic lenses, and that feature, I'm happy to report, works well. As a bit of background, electrochromic glass is a thing found in various gadgets now, including some rearview mirrors in cars. The technology works by applying an electric current to a special film or gel that's adhered to a piece of glass. That jolt changes the tint in response, creating an instantly dimmable panel of glass.
As functional as transition lenses are in smart glasses, electrochromic lenses, with their ability to adjust tint instantly and to a level you've specified, are a superior experience to their photochromic counterparts, which shift slowly and only in response to ambient light. Chamelo's Music Shield does not have that problem. Sorry, Meta, transition lenses just aren't it.

If there's another point I can give the Music Shield, it's that they are fairly lightweight for their size. Chamelo's glasses weigh 49g, which is well under the Oakley Meta Vanguard at 66g. That's not surprising, since the Vanguard have a lot more going on inside, but it's still notable if you're in the market for a pair of lightweight glasses and don't care about cameras and AI. I wore the glasses in hour increments, and while they did get a little irksome toward the end of the hour, I would say no more than most glasses (smart and dumb) that I own. The nose pads, while I still think Oakley's are more comfortable, do a pretty good job of holding up the weight on my nose in a way that's not aggravating. Luckily, since these are sports glasses, they fit snugly, which is great if you're like me and have a narrower head. Maybe the wraparound sports look is your thing, and if so, then go for it; on a scale of 1 to 10 yeehaws, I'm giving them 8 yeehaws. The glasses are also IPX4 rated, which makes them resistant to water splashes (light rain) and sweat, but not fully waterproof.

While Chamelo purposely focuses mostly on audio, I do find the Music Shield to be a little lacking in terms of features. There is no companion app, for one, though Chamelo's website confusingly mentions “app-enabled controls”; there is no voice assistant; and there is no touch bar on the arm for controlling volume. For $260, those are things that I'd expect, but maybe adding electrochromic lenses comes at a cost, both literally and figuratively. Those aren't dealbreakers, but it does make doing things like checking the battery interesting.
The best way I've identified to monitor the battery level is natively through iOS, which will tell you (like it does with other Bluetooth devices) how much juice you have. After two hours of music playback at 80% volume, the glasses dropped from 90% to 50% battery; at that rate of roughly 20% per hour, you're looking at around five hours of continuous listening. It's not quite as good as Meta and its Ray-Ban Meta Gen 2 AI glasses, which claim 8 hours on a single charge, but it's solid. I do like the inclusion of an easily accessible on/off button, though there are no sensors that detect when the glasses are folded, so you'll have to use it each time you want to turn the glasses off. To charge the Music Shield, there's an included magnetic cable.

Not everyone will find audio glasses useful, but some might. If you're looking for a device that provides decent open-ear audio and wind protection, and you love the wraparound look, they should be on your radar. If you want the features that other, truly smart glasses have to offer, though, these are not the specs for you. But if none of that bothers you, then maybe you'll feel fine wrapping your hands and your head around these glasses.
Some chipmakers are dropping out of the consumer market entirely. However, the ongoing crisis does make for some amusing, or should we say heart-warming, anecdotes, like this Facebook PC enthusiast who traded 192GB of DDR5-5200 RAM worth $2,200 for one RTX 5070 Ti worth roughly $800, despite his memory being worth roughly triple what he got in return. The trade took place in the Facebook group "Pc, Gaming, Setups, and Building Advice", where one Abdul Kareem As, who had a Corsair Vengeance DDR5-5200 C38 192GB (4x48GB) kit, traded the RAM haul for a PNY RTX 5070 Ti graphics card.

Like us, you're probably thinking that this negotiation wasn't the smartest deal in history. But Abdul says it was deliberate: he states that it "would have felt unethical to sell at such a high price" and that he's happy with his decision. Likewise, he didn't want to parcel out the kit for maximum profit either. It's safe to say he bows to no one, and this is probably the best Christmas story we techies will see this year.

We did some digging for the cheapest no-frills memory and found a Crucial 96 GB DDR5-5600 kit for $749.99, so you could complete the set with two of those, or roughly $1,500 all in. As for the graphics card, it's a no-frills, solid PNY RTX 5070 Ti. Another dive into Newegg pulls out a few RTX 5070 Ti graphics cards from Zotac, MSI, and Gigabyte for $749.99, with the standard low-end tag hovering around $800. So at replacement prices, Abdul's grand generosity had a $600 to $700 value, meaning "fair" value for the RAM would have been closer to two 5070 Ti cards, not one. As for the display reportedly figuring in the offer, there's no telling what exact model it would be, but it's possible it's the ROG Strix XG27AQDMG, a 1440p WOLED display that goes for $699. Abdul would perhaps have been wiser to exchange the display for both pieces of hardware, or for one of them plus some cash.
Nevertheless, while it's easy to deride Abdul's negotiation savvy, it's worth noting that the rise in DDR5 prices was so rapid and violent that anyone not following the space closely might not be aware of the price crisis. Even so, it seems Abdul was fully aware of the value of the goods he was carrying, but decided to make someone's day anyway. Bruno Ferreira is a contributing writer for Tom's Hardware.
A pseudonymous trader on the prediction market Polymarket, known only as AlphaRaccoon, has drawn sharp scrutiny after reportedly turning a $1.15 million profit in under 24 hours by betting on Google's 2025 Year in Search rankings. Notably, these were not the only Google-related bets in AlphaRaccoon's history. Across 23 Google-related markets this time, the account hit 22 correct outcomes, according to a report in Forbes.

"This dude just profited $1,000,000 in a single day betting on the Google search markets," one widely shared post read. "Google accidentally pushed the results early, then removed them, but not before it revealed he went 22/23 on his bets and…"

Polymarket trader Haeju Jeong, who is also a blockchain engineer, laid out the case bluntly on X. "This isn't a lucky streak," they wrote, sharing screenshots of the profile and markets. "He's a Google insider milking Polymarket for quick money." "$1.15M profit in 24 hours trading Google search markets," another post read, tagging the profile with a simple question: "Who is AlphaRaccoon?"

To be clear, no hard proof ties AlphaRaccoon to Google as of yet, and the user hasn't commented on the matter publicly. The evidence remains circumstantial for the time being, and it's unclear if there is any ongoing investigation into these trades from either Google or Polymarket. While a real-world identity has yet to be tied to these trades publicly, the money can be traced on the completely public and transparent, albeit pseudonymous, blockchain ledger upon which they were made.

While critics of AlphaRaccoon's recent trades decry the episode as cheating, the reality is that this is what people who love prediction markets want to see. Some regulatory background: the Unlawful Internet Gambling Enforcement Act of 2006 (UIGEA) clamped down on online betting platforms, treating many bets as unregulated futures and blocking payment processors from facilitating trades.
However, Kalshi avoided the UIGEA issue by getting regulated by the CFTC as a platform for legitimate financial derivatives rather than gambling, while Polymarket initially came to prominence using crypto to get around payment restrictions. Polymarket was fined by the CFTC in 2022 and had to stop offering services in the U.S., but it is currently betting on the relaxed regulatory environment under Trump and is slowly bringing back services, focused on sports gambling for now.

Advocates say unrestricted prediction markets can offer better information verification by pooling bets on specific claims being true or false, effectively making cheap talk on social media more expensive. If insiders flood these sorts of markets with their own bets, the argument goes, the sharpened prices should indirectly reveal otherwise unknown information to market observers. However, many critics argue that these platforms are simply mechanisms for gambling where information insiders have an unfair advantage, or even avenues for election or corporate manipulation. "Why are you saying this as it was a bad thing? You want insider trading!" one user wrote on X.

When public companies are involved in these sorts of markets, it creates a potential loophole for trading on similar information without SEC oversight. Accusations of insider trading on prediction markets also aren't new. Traders have faced similar heat over other bets, such as markets on the Nobel Peace Prize winner.
It was only back in June that Android 16 delivered a raft of new features for Google's operating system, but the company just announced another bumper package of updates, including more customization options, better parental controls, and smart notifications. You can now create custom icon shapes and cohesive themes, and extend dark mode to apps that don't have their own dark theme. The overhauled parental controls let you manage screen time, downtime, app usage, and rewards directly on your kid's devices, while AI-powered notification summaries give you a TL;DR of long messages or group chats. Expressive Captions now include emotion tags, and they're rolling out on English-language YouTube videos as well as across Android. Configurable AutoClick for mouse users reduces strain, Guided Frame is now more descriptive about what's in the camera's view in the Pixel camera app, and you can launch Voice Access with a voice command to Gemini. Fast Pair for hearing aids is also expanding (now available for Demant, with Starkey support coming in early 2026), and better voice dictation with TalkBack is coming soon. Google is also showing some love to older versions of Android, with features that aren't 16-specific, such as Emoji Kitchen stickers, the ability to leave and report group chats in Google Messages, the option to check for scams with Circle to Search, and pinned tabs in Chrome, just like on desktop. My favorite new feature is Call Reason, which enables you to flag your call to any saved contact as “urgent.” —Simon Hill

There are 16 stops of dynamic range, 30-fps shooting with full autofocus, and Sony's remarkably good AI subject and eye detection as well. The low-light possibilities of 7.5-stop IBIS are also impressive. You can preorder today at Adorama and B&H Photo.

Even the best folding phones right now only have a single fold, mostly going from regular phone size to a 7- or 8-inch tablet, but Samsung is about to unfurl its Galaxy Z TriFold in the US.
Despite the name, it has two folds, but that lets you go from a regular (but pretty thick) phone to a whopping 10-inch tablet that's just 3.9 mm thick (ignoring the camera module). But it's really all about that mammoth 10-inch AMOLED screen. We knew it was coming because Samsung has been teasing a trifold all year, but it will be the first such design to land stateside. While a 10-inch screen in your pocket is great if you love to watch movies or multitask on your phone (Samsung will allow three apps side by side), the Galaxy Z TriFold is thick when closed (12.9 mm discounting the camera module), very heavy (309 grams), and will likely cost a frightening amount of money (I'm guessing $3K-ish).

A popular accessory for e-readers has been a page-turning remote, allowing you to turn the page with a click of the remote in your hand instead of needing to swipe or tap on the device itself. These page turners have only been third-party accessories until now, with Kobo dropping the Kobo Remote just in time for the holiday season. It's built to connect with any Kobo e-reader with Bluetooth capabilities, and Kobo promises it will have a long battery life (which is important, speaking as someone whose cheap third-party page turner died every time I tried to use it). It's a fun gift for anyone with a Kobo e-reader, but it makes me wonder: Where's the Kindle version, Amazon? Hopefully we'll see an option from all the popular e-reader makers next year, but for now, Kobo is the first.

Amazon expanded the conversation options of its new assistant, Alexa+, with a jump-to-scene feature on Fire TVs: ask for a specific scene in a show or movie, and Amazon's assistant will immediately play your entertainment of choice from that point. You can describe scenes for Alexa+ to find by saying things like “the scene in Mamma Mia where Sophie sings 'Honey, Honey'” or “the card scene in Love Actually,” and Alexa+ will now be able to find it.
To me, this clearly looks like a case of a very high compression ratio, with the motion blocks swimming around on screen. They might have some detail enhancement in the loop to try to overcome the blockiness, which, in this case, results in the swimming effect. It's strange to see these claims being taken at face value on a technical forum. It should be a dead giveaway that this is a compression issue, because the entire video is obviously highly compressed and lacking detail.

And it being a default that you can't even seem to disable... I am by no means fluent in French, but I speak it well enough to get by with the aid of the subtitles, so that was fine. In an ideal world, I'd have the original French audio with English subtitles, but that did not appear to be an option.

Such as? This seems like such an easy thing for someone to document with screenshots and tests against the content they uploaded. So why is the top-voted comment an Instagram reel of a non-technical person trying to interpret what's happening?
If this is common, please share some examples (that aren't in Instagram reel format from non-technical influencers).

Rhett Shull's video is quite high quality and shows it. When it was published, I went to YouTube's website and saw Rick Beato's short video mentioned by him, and it was clearly AI-enhanced. I used to work with codec people and have had them as friends for years, so what TFA is talking about is definitely not something a codec would do.

After posting a cogent explanation as to why integrated AI filtering is just that, and not actually part of the codec, YouTube creates dozens of channels with AI-generated personalities, all explaining how you're nuts. These channels and videos appear on every webpage supporting your assertions, including being top of results on search. Oh, and in AI summaries on Google search, whenever the topic is searched, too.

It's difficult for me to read this as anything other than dismissing this person's views as being unworthy of discussion because they are "non-technical," a characterization you objected to, but if you feel this shouldn't be the top-level comment, I'd suggest you submit a better one. Here's a more detailed breakdown I found after about 15 minutes of searching; I imagine there are better sources out there if you or anyone else cares to look harder: https://www.reddit.com/r/youtube/comments/1lllnse/youtube_sh... To me it's fairly subtle, but there's a waxy texture to the second screenshot.
Whether it met the letter of some specification and is "correct" in that sense doesn't matter. If you change someone's appearance in your post-processing to the point it looks like they've applied a filter, your post-processing is functionally a filter. If your compression pipeline gives people anime eyes because it's doing "detail enhancement", your compression pipeline is also a filter. If you apply some transformation to a creator's content, and then their viewers perceive that as them disingenuously using a filter, and your response to their complaints is to "well actually" them about whether it is a filter or a compression artifact, you've lost the plot.

To be honest, calling someone "non-technical" and then "well actually"-ing them about hair-splitting details when the outcome is the same is patronizing, and I really wish we wouldn't treat "normies" that way. Regardless of whether they are technical, they are living in a world increasingly intermediated by technology, and we should be listening to their feedback on it. They have to live with the consequences of our design decisions.
I'm not critiquing their opinion that the result is bad. I was critiquing the fact that someone on HN was presenting their non-technical analysis as a conclusive technical fact. "Non-technical" is describing their background; it's not an insult. I will be the first to admit I have no experience or knowledge in their domain, and I'm not going to try to interpret anything I see in their world. It's a simple fact: this person is not qualified to be explaining what's happening, yet their analysis was being repeated as conclusive fact here on a technical forum.

I don't really see where you said the output was "bad"; you said it was a compression artifact which had a "swimming effect," but I don't really see any acknowledgement that the influencer had a point, or that the transformation was functionally a filter because it changed their appearance above and beyond losing detail (made their eyes bigger in a way an "anime eyes" filter might). If I've misread you, I apologize, but I don't really see where it is I misread you.

He's getting his compassionate nodding and emotional support in the comments over there. I agree that him being non-technical shouldn't be discussion-ending in this case, but it is a valid observation, whether necessary or not.
Watch them try to spin this as "user preference" that everyone just got opted into. Also, no one else can bear the sheer amount of traffic and cost.

"Meta has been doing this; when they auto-translate the audio of a video, they are also adding an AI filter to make the mouth of whoever is speaking match the audio more closely. But doing this can also add a weird filter over the whole face."

I don't know why you have to get into conspiracy theories about them applying different filters based on the video content; that would be such a weird micro-optimization, why would they bother with that?

Excessive smoothing can be explained by compression, sure, but that's not the issue being raised there. Video compression operates on macroblocks and calculates motion vectors of those macroblocks between frames. When you push it to the limit, the macroblocks can appear like they're swimming around on screen. Some decoders attempt to smooth out the boundaries between macroblocks and restore sharpness. The giveaway is that the entire video is extremely low quality.

Look at figure 5 and beyond. Here's one such Google paper: https://c3-neural-compression.github.io/ Neural compression wouldn't be like HEVC, operating on frames and pixels.
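To make the macroblock point concrete, here's a toy sketch in pure Python (not any real codec, and the block size and frame are made up for illustration): block-based codecs like H.264/HEVC code the frame in fixed-size blocks, and starving a block of bits is roughly like keeping only a coarse approximation of it, so fine detail such as skin texture gets averaged away and flat, plasticky patches appear.

```python
def block_average(frame, block=4):
    """Replace each block x block tile with its mean value -- a crude
    stand-in for heavy quantization of a block's high-frequency detail."""
    h, w = len(frame), len(frame[0])
    out = [[0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = [frame[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            mean = sum(tile) / len(tile)
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    out[y][x] = mean
    return out

# A tiny 4x4 "frame" with a sharp vertical edge (0 = black, 255 = white):
frame = [[0, 0, 255, 255] for _ in range(4)]
crushed = block_average(frame, block=4)
# The whole tile collapses to one flat value: the edge detail is gone.
print(crushed[0][0])  # 127.5
```

When motion compensation then shifts these coarse blocks between frames, their boundaries drift, which is the "swimming" look described above; decoder-side deblocking and sharpening can smear that further into something filter-like.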
Larger fingers, slightly misplaced items, etc. Neural compression techniques reshape the image itself. If you've ever input an image into `gpt-image-1` and asked it to output it again, you'll notice that it's 95% similar, but entire features might move around or average out with the concept of what those items are.

Seriously? Then why is nobody in this thread suggesting what they're actually doing? Everyone is accusing YouTube of "AI"ing the content with "AI". What does that even mean? Look at these people making these (at face value, hilarious, almost Kool-Aid levels of conspiratorial) accusations.

They probably should start culling YouTube of cruft nobody watches.
I'm frankly shocked Google hasn't started deleting old garbage.

To solve this problem of adding compute-heavy processing to serving videos, they will need to cache the output of the AI, which uses up the storage you say they are saving.

And this was over a year ago. They've probably developed some really good models for this and are silently testing how people perceive them. Though there is a LOT of room to subtly train many kinds of lossy compression systems, which COULD still imply they're doing this intentionally.

That doesn't include all of the transcoding and alternate formats stored, either. People signing up to YouTube agree to Google's ToS. Google doesn't even say they'll keep your videos.

It is bad enough we can deepfake anyone. And over time the AI content will improve enough that it becomes impossible to tell apart, and then the Great AI Swappening will occur. There are already popular subreddits (something "blursed AI," I think) where people upload this type of content, and it's getting decent engagement, it seems. It seems like a minor difference, but the undifferentiated, unlabeled short-form addiction feed is much worse. Reddit has been heading that way, though.

There's a contingent of people around here who will shrug or even cheer. That's because we are losing any belief in compassion or optimism about uplifting humanity. When I really think about it, it's terrifying. We are headed for a world where normal people just step over the dying, and where mass exploitation of the “weak” gets a shrug or even approval. Already there in some areas and sectors. Is this the world you want, folks?
Unfortunately, I've met a disturbing number of people who would say yes. Now consider that this includes children, whose childhood is being stolen by chum feeds. This includes your friend who maybe has a bit of an addictive personality, who gets sucked into gambling apps and has their life ruined. Or it's you, maybe, though I get the sense there are a lot of “high IQ” people who think they can't be conned.

The conspiracy theory that this is done to make people get used to AI content is the kind of BS that would be derided and flagged otherwise… but since it is anti-tech, it's fine.
ffmpeg -i source.mkv -i suspect.mkv -filter_complex "blend=all_mode=difference" diff_output.mkv

I saw these claims before but still have not found someone who will show a diff or post the source for comparison.

We need more people experimenting with creating a better platform for content creators, not least so people like Beato, but not as well known, don't constantly get harassed by fraudulent and incorrect copyright-infringement claims. OR they need to justify the mountain of money they burned on AI somehow. Also, there are alternatives to YouTube in the Fediverse, like PeerTube.

As it is, when a video has a catchy clickbait title, I screenshot the thumbnail and have ChatGPT give me the solution. Or I'll copy the URL into a transcript fetcher and feed that into Gemini so I can ask specific questions. He who clickbaits is demoted to the role of "suggest a topic for me to ask ChatGPT about."

Edit: here's the effect I'm talking about with lossy compression and adaptive quantization: https://cloudinary.com/blog/what_to_focus_on_in_image_compre... The result is smoothing of skin, and applied heavily to video (as YouTube does; just look for any old video that was HD years ago), it would look this way.

That would presumably be an easy smoking gun for some content creator to produce. There are heavy alterations in that link, but having not seen the original, and in this format, it's not clear to me how they compare.
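The difference-blend idea in the thread can be sketched in plain Python too: if an uploaded face were being "enhanced" by a filter, a source-vs-downloaded diff should be concentrated around faces, while ordinary compression error tends to be spread across the frame. This is a toy sketch with made-up 4x4 "frames" as nested lists, not a claim about what YouTube actually does:

```python
def abs_diff(a, b):
    """Per-pixel absolute difference between two equally sized frames."""
    return [[abs(pa - pb) for pa, pb in zip(ra, rb)] for ra, rb in zip(a, b)]

def mean(region):
    """Mean of all values in a 2D region."""
    flat = [v for row in region for v in row]
    return sum(flat) / len(flat)

# Toy example: a face-filter-style edit touches only one corner region,
# while uniform compression noise would raise the diff everywhere.
source  = [[100] * 4 for _ in range(4)]
suspect = [row[:] for row in source]
suspect[0][0] = 140          # localized change, e.g. an "enhanced" eye

d = abs_diff(source, suspect)
print(mean([r[:2] for r in d[:2]]))  # 10.0 in the edited corner
print(mean([r[2:] for r in d[2:]]))  # 0.0 everywhere else
```

On real video you would do the same comparison with the ffmpeg difference blend above and eyeball where the residual energy sits, which is exactly the kind of reproducible evidence the commenters are asking for.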
People in the media business have long found their media sells better if they use Photoshop-or-whatever to give their subjects bigger chests, defined waists, clearer skin, fewer wrinkles, less shiny skin, more hair volume. Traditional manual Photoshop tries to be subtle about such changes, but perhaps going from edits 0.5% of people can spot to bigger edits 2% of people can spot pays off in increased sales/engagement/ad revenue from those who don't spot the edits. And we all know every tech company is telling every department to shoehorn AI into their products anywhere they can. If I'm a YouTube product manager, and adding a mandatory makeup filter doesn't need much compute, increases engagement overall, and gets me a $50k bonus for hitting my use-more-AI goal for the year, a little thing like authenticity might not stop me.

These people are having a moral crusade against an unannounced Google data-compression test, thinking Google is using AI to "enhance their videos".
This level of AI paranoia is getting annoying. This is clearly just Google trying to save money, not undermine reality or whatever vague Orwellian thing they're being accused of. Even if a "flattery filter" looked better on one type of face, it would look worse on another type of face. Plus, applying ANY kind of filter to a million videos an hour costs serious money. I'm not saying YouTube is an angel. Automatically applying "flattery filters" to videos wouldn't significantly improve views or advertising revenue, or cut costs. Less bandwidth reduces costs; smaller files mean faster start times as viewers jump quickly from short to short, and that increases revenue, because more shorts per viewer-minute = more ad avails to sell.

Almost everything at YouTube is probably A/B tested heavily, and many times you get very surprising results. Applying a filter could very well increase views and time spent in the app enough to justify the cost.

> This level of AI paranoia is getting annoying.

Let's be straight here: AI paranoia is near the top of the most propagated subjects across all media right now, probably for the worse. If it's not "Will you ever have a job again!?" it's "Will your grandparents be robbed of their net worth!?" or even just "When will the bubble pop!?"
And in places like Canada, where the economy is predictably crashing because of decades of failures, it's both the cause of and the answer to macroeconomic decline. What are people supposed to feel, totally chill because they have tons of control?

If you try to watch the makeup guy's proof, it's talking about Instagram (not YouTube), doesn't have clean comparisons, and is showing a video someone sent back to him, which probably means it's a compression artifact, not a face filter that the corporate overlords are hiding from the creator. But most people here are just taking this headline at face value and getting the pitchforks out.

I have a funny attitude towards Google: I am a big privacy nut, have read the principal books on privacy, etc. I find that the availability of an infinite number of Qi Gong exercise videos, philosophy, a tiny bit of politics, science, and nature videos is almost infinitely better than HBO, Netflix, etc.
I am a paid subscriber to all these services, so I am comparing apples to apples here. I do hate spending 10 seconds opening a video and realizing that it was created artificially, but I immediately stop watching, so the overhead isn't too bad. One new feature I really like: if I am watching a long philosophy or science video, I paste the URI into Gemini, ask for a summary, and ask it to use what Gemini knows about me to suggest ways the material jibes with my specific interests. After watching a long video, it is very much worth my time getting a summary and comments that also pull in other references. Sorry for the noisy reply here, but I am saying to use Google properties mindfully, balancing pros and cons: just use the parts that are useful, and only open up sharing private information when you get something tangible for it.
We'll get local AIs that can skip the cruft soon enough anyway.

I wonder if it will end up being treated as part of a codec instead of edits to the base film, and can then be re-run to undo the changes? It feels like there needs to be a way to verify that what you uploaded is what's on the site.

Absolutely none of these CEOs give a shit. And then the discourse is so riddled with misnomers and baited outrage that it goes nowhere. The other example in the submitted post isn't "edits to videos" but rather the text descriptions of automated captions. The Gemini/AI engine not being very good at summarizing is a different issue.
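The simplest form of upload verification would be a published content digest. This is a hypothetical workflow, not anything YouTube actually offers, and since platforms always re-encode uploads, a mismatch only proves the bytes changed, not that a filter was applied:

```python
import hashlib

def sha256_digest(data: bytes, chunk_size: int = 1 << 20) -> str:
    """Hex SHA-256 of a byte stream, hashed in chunks so a large video
    never has to sit in memory all at once."""
    h = hashlib.sha256()
    for i in range(0, len(data), chunk_size):
        h.update(data[i:i + chunk_size])
    return h.hexdigest()

# The creator publishes the digest of the file they uploaded; anyone can
# re-hash the served file and compare the two hex strings.
uploaded = b"original video bytes"
print(sha256_digest(uploaded))
```

A scheme that survives re-encoding would need a perceptual hash over decoded frames instead of a cryptographic one over the file bytes, which is a much harder problem.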
The key section:

> Rene Ritchie, YouTube's creator liaison, acknowledged in a post on X that the company was running "a small experiment on select Shorts, using traditional machine learning to clarify, reduce noise and improve overall video clarity—similar to what modern smartphones do when shooting video."

So the "AI edits" are just a compression algorithm that is not that great. It looks like quality cleanup, but I imagine most creators are already using decent camera tech and editing software for shorts. And as you say, arbitrarily applying quality cleanup is making assumptions about the quality and creative intent of the submitted videos. It would be one thing if creators were uploading raw camera frames to YouTube (which is what smartphone camera apps are receiving as input when shooting video), but applying that to videos that have already been edited/processed and vetted for release is stepping over a line to me.

In fact, many video codecs are about determining which portion of the video IS noise that can be discarded, and which bits are visually important... Or to put it another way: to me it would be similarly disingenuous to describe e.g. dead code elimination or vector path simplification as "just a compression algorithm" because the resultant output is smaller than it would be without it. I think part of what has my hackles raised is that it claims to improve video clarity, not to optimise for size.
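To see why coarse quantization reads as "smoothing" on skin, here's a toy sketch. It is illustrative only, not YouTube's pipeline: real codecs quantize DCT coefficients per block rather than raw pixel values, but the effect on fine detail is the same in spirit:

```python
def quantize(values, step):
    """Snap each value to the nearest multiple of `step`.
    A coarse step erases small pixel-to-pixel variation, which is exactly
    the fine texture (skin pores, film grain) viewers notice going missing,
    while large jumps (edges) survive."""
    return [int(v / step + 0.5) * step for v in values]

row = [100, 102, 99, 101, 150, 148]  # subtle texture, then a bright edge

print(quantize(row, 1))  # fine step: texture preserved exactly
print(quantize(row, 8))  # coarse step: the near-equal values mostly collapse
```

With step 8 the four texture samples collapse toward a single level while the bright region stays clearly separated, which is why heavy adaptive quantization flattens faces without obviously destroying the picture.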
IMO, compression algorithms do not and should not make such claims; if an algorithm has the aim (even if secondary) of affecting subjective quality, then it has a transformative aspect that requires both disclosure and consent. I think that first point is why they would position noise reduction as being both part of their compression and a way to improve video clarity. Video compression has used tricks like this for years.

No, gen AI is a subset of machine learning. There's a lot more to AI than machine learning, and a lot more to machine learning than LLMs ("gen AI"). That being said, I don't believe they should be doing anything like this without the creator's explicit consent. I do personally think there's probably a good use case for machine learning / neural network tech applied to the cleanup of low-quality sources (for better transcoding that doesn't accumulate errors and therefore waste bitrate), in the same way that RTX Video Super Resolution can do some impressive deblocking and upscaling magic[2] on Windows.

> "Making AI edits to videos" strikes me as something particularly egregious; it leads a viewer to see a reality that never existed, and that the creator never intended.

YouTube is not applying any "face filters" or anything of the sort. But that is, IMO, very different and considerably less bad than changing someone's face specifically. Like I said, I think that's still bad, and they should never have done it without the clear, explicit consent of the creator. NOT face filters.

EDIT: same thing with the two other links you edited into your comment while I was typing my reply. Again, I'm not defending YouTube for this.
This article is part of Gizmodo Deals, produced separately from the editorial team. We may earn a commission when you buy through links on the site.

Apple rarely matches the Black Friday generosity we saw this year, and in a surprising move, Amazon extended the MacBook Air M4 discount well beyond Cyber Monday. The 256GB model (which happens to be Amazon's best-selling laptop) currently sits at $749 instead of $999, even though it launched just this past March. Over 4,000 reviews averaging 4.8 stars confirm this configuration delivers the performance most people actually need for daily computing.

The Apple M4 processor lets you run multiple apps at the same time without any lag, to the point that you forget you're multitasking. The chip architecture combines the CPU, GPU, Neural Engine, and memory controller on one die: this lets data move between components faster than in traditional laptop designs, so applications start up right away, big files open right away, and the system never stutters when you switch between apps. Apple Intelligence runs on the device instead of sending your data to cloud servers; it uses the M4's Neural Engine to do AI tasks locally, keeping your data private. The system can help you write emails, summarize long documents, make custom emoji based on what you say, and sort notifications by importance, all without sending your personal information off the laptop. Instead of having separate memory spaces, the CPU and GPU share the same pool of memory, which cuts down on the time it takes to copy data between dedicated RAM and VRAM and speeds up performance. The battery can last up to 18 hours with normal mixed use like web browsing, editing documents, streaming videos, and doing light creative work. Many laptops slow down their CPUs when they are not plugged in to save power, but this one performs the same whether it is plugged in or running on battery.
Because performance is consistent, you can work through long flights, full days of classes, or long sessions at the cafe without having to look for outlets or carry charging bricks. Text rendering uses subpixel antialiasing to make fonts look sharp at any size, which is easier on the eyes when working on long documents. Wi-Fi 6E speeds up wireless connections and lowers latency on networks that support it. The headphone jack can handle high-impedance headphones, which gives you better sound quality than most laptop outputs. This $250 discount on current-generation hardware, with double the previous base RAM configuration, makes the MacBook Air M4 affordable for students and professionals who couldn't justify a laptop over $1,000. It's the best value in a Mac laptop in years: you get Apple's newest chip, more memory, and better AI capabilities for less than older models cost.
His "Regular Animals" project features $100,000 robotic dogs outfitted with hyper-realistic heads resembling Elon Musk, Mark Zuckerberg, and Jeff Bezos, alongside art legends Pablo Picasso and Andy Warhol. The robot dogs roam a plexiglass pen, capturing images through chest-mounted cameras that are processed by AI and then essentially pooped out, according to the WSJ. Of the prints produced, 256 include QR codes that offer collectors a free NFT, dispensed in bags labeled "Excrement Sample." Beeple also included himself in this exclusive group, a move the Charleston-based artist called "ballsy." His self-portrait dog sold first, surprising even Beeple, he told the Journal. Four years ago, his digital collage sold at Christie's for $69 million, helping to fuel an NFT boom that would peak a year later before largely imploding.