Google is making a major push into AI-powered shopping with the unveiling of the Universal Commerce Protocol (UCP), a new open technical standard aimed at letting shoppers buy products directly through AI chatbots and search interfaces.

Announced over the weekend at the National Retail Federation conference in New York City, UCP was pitched by Google as a foundation for "agentic commerce," a fast-emerging concept in which AI agents help shoppers carry out multi-step tasks on their behalf. "AI agents will be a big part of how we shop in the not-so-distant future," Google CEO Sundar Pichai said on X.

Notably, one e-commerce giant was not included in Sunday's announcement: Amazon. UCP offers an alternative pathway that could bypass Amazon, potentially steering shoppers to competitors at the critical moment of product discovery. The protocol could challenge the idea that shopping must begin inside Amazon's app or website, said Maju Kuruvilla, CEO and founder of Seattle-based agentic commerce startup Spangle. "This doesn't change Amazon's core advantage — price, selection, and convenience," Kuruvilla said.

A new report from Adobe Digital Insights found that AI-driven traffic to retail sites surged 693% year-over-year during the 2025 holiday season, with AI-referred visitors converting at higher rates and spending more time on sites than non-AI traffic. But analysts caution that traffic growth and checkout partnerships do not equal behavior change. Juozas Kaziukenas, an independent e-commerce analyst, said many forecasts around agentic commerce assume unrealistically fast adoption. "Product discovery, curation, personalization, and recommendations are still barebones on most AI tools," he said.

Some argue that even if agentic commerce does take off, Amazon is unlikely to be displaced. "AI will have implications for retail, but those tenets won't change."
Ironically, an agentic commerce boom could actually give Amazon more leverage, said Sucharita Kodali, a retail industry analyst with Forrester. "If, big if, there does appear to be a winner — and that would be years away — the winner will likely pay Amazon billions for its feed and cooperation, like Google pays Apple," she said.

Kaziukenas said the growing wave of partnerships reflects a familiar dynamic: an anti-Amazon alliance. Amazon has not publicly announced support for open agentic commerce standards like UCP. Amazon CEO Andy Jassy acknowledged on a recent earnings call that agentic commerce "has a chance to be really good for e-commerce" and said that he expects the company to partner with third-party agents over time.

Under UCP, shoppers can check out using payment details already saved in Google Wallet, with PayPal support coming later. Google says retailers remain "the seller of record" and maintain control over customer relationships.

Last week, Microsoft debuted Copilot Checkout, allowing users to complete purchases directly inside its AI assistant. OpenAI, working with Stripe, has developed the Agentic Commerce Protocol (ACP) for completing transactions within ChatGPT. Emily Pfeiffer, a principal analyst at Forrester, said she's encouraged to see companies pushing for standards — but stressed that it's "still very early, the experiences are pretty poor, and adoption is very low."
Washington state lawmakers are taking another run at regulating artificial intelligence, rolling out a slate of bills this session aimed at curbing discrimination, limiting AI use in schools, and imposing new obligations on companies building emotionally responsive AI products.

The state has passed narrow AI-related laws in the past — including limits on facial recognition and distributing deepfakes — but broader efforts have often stalled, including proposals last year focused on AI development transparency and disclosure. The bills could affect HR software vendors, ed-tech companies, mental health startups, and generative AI platforms operating in Washington. An interim report issued recently by the Washington state AI Task Force notes that the federal government's "hands-off approach" to AI has created "a crucial regulatory gap that leaves Washingtonians vulnerable."

One sweeping bill would regulate so-called high-risk AI systems used to make or significantly influence decisions about employment, housing, credit, health care, education, insurance, and parole. Companies that develop or deploy these systems in Washington would be required to assess and mitigate discrimination risks, disclose when people are interacting with AI, and explain how AI contributed to adverse decisions. The measure could affect a wide range of tech companies, including HR software vendors, fintech firms, insurance platforms, and large employers using automated screening tools.

Another bill targets AI companion chatbots. Its findings warn that such chatbots can blur the line between human and artificial interaction and may contribute to emotional dependency or reinforce harmful ideation, including self-harm, particularly among minors. These rules could directly affect mental health and wellness startups experimenting with AI-driven therapy or emotional support tools, including companies exploring AI-based mental health services such as Seattle startup NewDays.
Babak Parviz, CEO of NewDays and a former leader at Amazon, said he believes the bill has good intentions, but added that it would be difficult to enforce because "building a long-term relationship is so vaguely defined here." "For critical AI systems that interact with people, it's important to have a layer of human supervision," he said. "For example, our AI system in clinic use is under the supervision of an expert human clinician."

Under the bill, companies could face lawsuits if their AI system encouraged self-harm, provided instructions for it, or failed to direct users to crisis resources — and would be barred from arguing that the harm was caused solely by autonomous AI behavior. If enacted, the measure would explicitly link AI system design and operation to wrongful-death claims. The bill comes amid growing legal scrutiny of companion-style chatbots, including lawsuits involving Character.AI and OpenAI.

On the schools front, educators and civil rights advocates have raised alarms about predictive tools that can amplify disparities in discipline.

Another proposal updates Washington's right-of-publicity law to explicitly cover AI-generated forged digital likenesses, including convincing voice clones and synthetic images. Using someone's AI-generated likeness for commercial purposes without consent could expose companies to liability, reinforcing that existing identity protections apply in the AI era — and not just for celebrities and public figures.
When you purchase through links on our site, we may earn an affiliate commission.

Due to the popularity of this product, the MSI Aegis ZS2 C9NVZ-1632US gaming desktop featured in this post is now sold out. There are alternative gaming desktop PCs on the MSI store with large discounts on both RTX 5090- and RTX 5080-powered rigs. Check out these alternative Aegis prebuilt PC deals, with savings of up to $1,160 on the top-end Aegis R2 C14NVZ9-1442US gaming desktop with an RTX 5090, 96GB of RAM, and a 4TB SSD.

We spotted an interesting deal today on a very high-end gaming PC from MSI. This high-powered rig sports the most powerful gaming graphics card available, but what's really interesting is that the entire PC build costs the same as the cheapest individual RTX 5090 GPU available to buy right now. Checking RTX 5090 prices at PCPartPicker, the lowest price for an available-to-buy RTX 5090 is $3,599, which also happens to be an MSI Gaming Trio 5090 card.

The MSI Aegis ZS2 C9NVZ-1632US features a combination of hardware components designed to deliver exceptional performance across a wide range of games. Inside this ultra-powerful PC are an Nvidia RTX 5090 GPU, an AMD Ryzen 7 9700X processor, 32GB of DDR5 RAM, and 2TB of SSD storage. There's also a 360mm AIO liquid cooler for the CPU and a 1,300W 80 Plus Gold-rated power supply providing the juice for that 5090.

No need to worry about sourcing separate components: this MSI rig comes prebuilt with some of the world's most powerful parts already installed, ready to game as soon as you connect a mouse, keyboard, and monitor. In our review of Nvidia's RTX 5090, we benchmarked the card against our suite of 16 games and compared its performance against the competition, as well as previous generations of Nvidia and AMD GPUs.
Stewart Bendle is a deals and coupon writer at Tom's Hardware.
It won't get new capacity up and running until 2028

For the first time since announcing its seismic decision to kill its consumer SSD and memory brand Crucial, Micron has addressed, in a new interview, the notion that it is leaving consumers behind. The company also warned that despite breaking ground on new memory fabs, we shouldn't expect meaningful output to affect memory supply until at least 2028.

In early December, the company said it plans to wind down its consumer business by the end of next month (January), reallocating its output and time to enterprise-grade DRAM and SSDs for AI buildouts.

In the interview, Micron's Moore was asked whether memory suppliers were inclined to cater to the AI sector, "leaving consumers behind" as a result. "Well, first I would want to try to help everybody understand that the perception may not be exactly correct, at least from our point of view," Moore said, citing Micron's sizeable businesses in the client and mobile markets. Moore hinted that Micron is still technically serving consumers by supplying LPDDR5 to OEMs like Dell and Asus for inclusion in laptops, among other things. And while the report claims Micron is in contact with "every single PC brand out there," the company simply cannot afford to ignore AI demand.

Micron recently announced it would begin work on a $100 billion New York 'megafab', where it plans to produce 40% of the company's overall DRAM output by the 2040s. Micron's CEO said in December that the company can only meet half to two-thirds of current demand, meaning that even the upcoming new capacity will initially go toward making up shortfalls for existing demand. As such, while 2028 might mark the first meaningful dent Micron makes in DRAM supply, it could be months more before consumers see any shift in pricing for PC builds.
Stephen is Tom's Hardware's News Editor with almost a decade of industry experience covering technology, having worked at TechRadar, iMore, and even Apple over the years.
Before the Trump admin dropped the Boeing case, Boeing was going to be held liable for design defects in its Max planes that caused crashes. The government wasn't going after Boeing because a plane crashed, but because Boeing did not take adequate steps to prevent that from happening.

Any product can be used to cause harm, and there are always steps that could be taken to prevent that. But that would often do more harm than it prevents. You can't go after a company that makes kitchen knives if they're used to harm, because there's nothing reasonable the maker could have done to prevent that harm, and there's a legitimate use case for knives in cooking. In this case, my understanding is other companies (OpenAI and Anthropic) have done more to limit harm, whereas xAI hasn't. But in this case it's pretty easy: other model providers have in fact limited harm better than Grok.

OK. A different and better question. The problem is, would it be considered reasonable to avoid harm to the mental wellbeing of bikinified persons at the cost of harm to all users enjoying a service supported by bikinification earnings?

The UK needs someone who knows how tech and business works to tackle this, and that's not Peter Kyle. A platform suspension in the UK should have been swift, with clear terms for how X can be reinstated.
As much as it appears Musk is doubling down on letting Grok produce CSAM as some form of free speech, the UK government should treat it as a limited breach or bug that the vendor needs to resolve, whilst taking action to block the site causing harm until they've fixed it. Letting X and Grok continue to do harm, and get free PR, is just the worst-case scenario for all involved.

Peter Kyle was in opposition until July 2024, so how could he have spearheaded it? I don't know exactly when various parts came into effect that would constitute that, but for the point of my post I'm going on Peter Kyle's own website's dated reference to holding companies accountable: "As of the 24th July 2025, platforms now have a legal duty to protect children" (https://www.peterkyle.co.uk/blog/2025/07/25/online-safety-ac...)

I don't understand why people are taking issue with that. Peter Kyle is the minister who delivered the measures from the bill that a lot of people are angry about, and this latest issue on X is just another red flag that the bill is poorly worded and thought out, putting too much emphasis on ID age checks for citizens rather than actually stopping any abuse.
Peter Kyle is now the one, despite having moved department, who is somehow commenting on this issue. Totally happy to call out the Tories, and prior ministers who worked on the Bill/Act, but Kyle implemented it, made reckless comments about it, and is now trying to look proactive about an issue the Act covers but applies so ineffectively.
The bill is designed to protect children from extreme sexual violence, extreme pornography, and CSAM. Not to protect adults from bikinification. It is working as designed. It does not empower platform suspension for bikinification. And there's as yet no substantiation of your claim that Grok produces CSAM.

Sometimes people just dig themselves into a hole and start going off the deep end. Why did it take until 1944 for someone to blow up Hitler?

If you believe in democracy, and the rule of law, and citizenship, then the responsibility obviously lies with the people who create and publish pictures, not the makers of tools. Think of it: you can use a phone camera to produce illegal pictures. What kind of a world would we live in if Apple were required to run an AI filter on your pics to determine whether they comply with the laws? A different question is whether X actually hosts generated pictures that are illegal in the UK.
In that case, X acts as a publisher, and you can sue them along with the creator for removal.

They don't. The Govt's problem is imagery it calls 'barely legal'. I.e., "legal, but we wish it wasn't."

Is there actually a significant number of problematic sexualised AI images of men on X that the title fails to mention? If not, the follow-up question would be: what are you actually complaining about, exactly?

Women are often sexualised, way more than men. Would it be more comfortable to you if this fact were invisibilized? I agree this is problematic, but I am inclined to see it as an opportunity to discuss the problem and illustrate how widespread it is. Besides, who is going to decide when people's images are sexualized enough?

They banned porn sites too.

> Why not you?

The UK Govt has no power to ban it, since it is legal.
After five years of building an edtech company, Nathan Nwachuku, 22, realized that Africa was at a crossroads. The continent is undergoing rapid industrialization, he told TechCrunch. There is money, opportunity, and a young, driven population.

His new company, Terra, announced Monday that it emerged from stealth with an $11.75 million round led by Joe Lonsdale's 8VC. Others in the round include Valor Equity Partners, Lux Capital, SV Angel, and Nova Global. The company previously raised an $800,000 pre-seed round, and Nwachuku said investor interest grew after the company appeared on CNN. African investors in the company include Tofino Capital, Kaleo Ventures, and DFS Lab. Maduka also served as an engineer in the Nigerian Navy and founded a drone company at 19.

The company, based in Nigeria's capital, Abuja, took a multi-domain approach to product development, considering how to protect critical infrastructure from the ground, water, and air. It is still working on maritime technology to help protect infrastructure such as offshore rigs and underwater pipelines. "We want to geofence all of Africa's critical infrastructure and resources," Nwachuku said, adding that the problem is not a lack of firepower (many African armies already have that). Instead, it's a lack of sovereign intelligence, as much of the intelligence that African countries depend on comes from Western powers, China, and Russia. "We want to take the defense of our continent's resources and infrastructure into Africa's own hands," Nwachuku continued.

Nwachuku said the company has generated more than $2.5 million in commercial revenue so far and is protecting assets valued at around $11 billion. Commercial revenue comes from protecting private infrastructure, like gold mines or power plants. Terra said it is protecting at least two hydropower plants and several smaller mines, with most of the company's clientele coming from Nigeria.
Terra will open software offices in San Francisco and London, but the company said manufacturing will remain in Africa, with more factories opening across the continent to boost job creation. "It's clear Africa today is undergoing what I see as an epic struggle for its very survival," Nwachuku said.

Dominic-Madori Davis is a senior venture capital and startup reporter at TechCrunch.