REDMOND, Wash. — For more than three years, much of the focus on AI in education has been on the implications of handing students what amounts to a technological cheat code. But what if this disruptive force could be used to improve education instead?

That's the idea behind an effort now getting under way in Washington state. Last week, school districts gathered at Microsoft in Redmond for the start of a two-year “community of practice” focused on AI in education. The Gates Foundation is funding a separate cohort of 10 districts focused on AI infrastructure and data systems. The Microsoft Elevate grants also include up to $25,000 in funding for technology consulting.

The project was inspired by listening sessions with high school students who receive special education services. It can be stressful and burdensome for students to explain their needs to each new teacher to ensure that their accommodations and goals are being met. Dr. Sharine Carver, the district's executive director of special services, said the goal is to “empower students, reduce that psychological burden and put them in the driver's seat of really understanding their IEP and being able to advocate for themselves.” Diana Eggers, the district's director of educational technology, said the IEP project is different in that it goes beyond seeking efficiency in existing activities to instead build AI for a new purpose.

Jane Broom, senior director of Microsoft Philanthropies, who grew up in Washington public schools, told the group that they are on the front lines of an unprecedented transformation. “These last two or three years have been pretty unreal.”

The 10 Microsoft grantees range from Seattle, the state's largest district with about 49,000 students, to Manson, a rural district in Chelan County with about 700 students. Collectively, the grantees serve about 17 percent of Washington's K-12 students. Microsoft highlighted the divide between the state's largest and smallest districts when it launched the Elevate program last fall.
The opening session Thursday morning came with an additional reality check: national research presented at the event showed that even the most ambitious districts are still in the early stages, and struggling to answer a basic question: Is any of this actually working?

Bree Dusseault, principal and managing director at the Center on Reinventing Public Education at the University of Washington, presented findings from a national study of early-adopter school districts. Her team surveyed 119 systems (with 45 responses), interviewed leaders at 14, and profiled 79 for a database of districts pushing ahead on AI adoption.

Districts have moved quickly to put infrastructure in place, but significant gaps remain. About two-thirds have data privacy protocols and dedicated AI staff. Only 24% of the surveyed districts have any system for measuring whether their AI efforts are working, and only 9% have updated learning standards to reflect new student competencies. Every early-adopter district in the study trains teachers and approves them to use AI.

A separate RAND/CRPE survey from September 2025 found that 54% of students use AI for schoolwork, but only 19% report getting any guidance on how to use it. Most early adopters aren't using AI to rethink education; about 30% are using it to support existing reform efforts.
Looks like Sam Altman and Jony Ive will have to wait until next year to kill AirPods, or the iPhone, or revolutionize the pen, or whatever it is they're actually doing with their AI-centric hardware venture. According to a report from Wired, court filings indicate that Sam Altman and Jony Ive have hit yet another snag in their nascent journey into AI gadgets with a newly formed company, io.

“Peter Welinder, OpenAI's vice president and general manager, said in the filing that OpenAI had reviewed its product-naming strategy and decided not to use the name ‘io'…in connection with the naming, advertising, marketing, or sale of any artificial intelligence-enabled hardware products.”

“Decided” is an interesting choice of words here, given that the company was actually sued and issued a court order in June over a trademark claim regarding the use of that name. It's also unclear what name they'll go with now, but maybe they could try “Pear,” or “Grape,” or some other one-word fruit, since last I checked, “Apple” was already taken. Wouldn't want to repeat that mistake twice.

Snag number two is that the company now has a new timeline for the release of its first piece of hardware, and it's a bit further out than we had anticipated. According to Wired, Sam Altman and Jony Ive's now nameless company might not start shipping its first gadget until February of 2027. I say “might” because it's really hard to tell just how far along any of this stuff is. Reports last year of difficulties getting devices to do basic stuff aren't instilling much confidence. One problem in particular has reportedly been getting the voice assistant to listen when you want it to and shut up when appropriate. Listen, I get it; figuring anything new out is going to come with its own unique set of challenges, and some of those challenges aren't going to be easily solvable overnight.
The problem is that when you take a simple fact like that and place it in the context of AI gadgets, it becomes very easy to cast doubt on the whole idea. AI gadgets have had a rough go; just ask Humane and its fallen AI Pin or Rabbit and the increasingly irrelevant R1. Maybe OpenAI can solve critical flaws with AI gadgets, but maybe the problem is that there are too many issues to solve. Could be that AI just isn't smart enough yet, and neither are the voice assistants powered by it. Or maybe our general vice grip on phones as the end-all, be-all form factor is too strong for a device like Altman and Ive's to pry open. No matter which way you spin it, the company formerly known as io has a lot to solve, and the problems just seem to keep piling up.
Luxury brands lose more than $30 billion a year to counterfeits, while buyers in the booming $210 billion second-hand market have no reliable way to verify that what they're purchasing is genuine. Veritas wants to solve both problems with a solution that combines custom hardware and software. The startup claims that it has developed a “hack-proof” chip that can't be bypassed by devices like Flipper Zero, a widely available hacking tool that can be used to tamper with wireless systems.

Veritas founder Luci Holland has experienced life as both a technologist and an artist. She has worked in different artistic mediums, including mixed media painting and metal sculpture. Holland noted that traditionally, luxury goods makers use various symbols or physical marks to authenticate their products. However, with the growing demand for these goods, counterfeiters have learned to create convincing copies of these marks along with high-quality fake certificates.

“For me, as someone who has a background in being a designer and then also has experience in tech, I saw this problem and thought about the different ways we could solve it. I think what's truly innovative is we've used and combined elements from both hardware and software to create this solution that helps protect brands in this way to convey the information,” she said.

The chip is the size of a small gem and can easily be inserted even after a product is made without compromising its integrity. Buyers can then tap a smartphone on the item to verify its authenticity. The company also creates a blockchain-based digital clone of the product for possible digital art gallery shows or metaverse activities.

The company didn't reveal who it is working with, but said that brands can use its software suite to get information about all the chipped products, add team members to manage items, and add product information along with the product story — details that can also be used to connect with their community.
While the counterfeiting market is big, Holland thinks the market still needs education around why it needs robust tech solutions. “It is shocking to see that some off-the-shelf solutions, like NFC chips that brands are using, are actually so vulnerable and could easily be bypassed. This is the one thing most people don't know, and we want to educate the ecosystem to adopt safer solutions,” Holland said.

Veritas said that it raised $1.75 million in pre-seed funding led by Seven Seven Six, along with DoorDash co-founder Stanley Tang, skincare brand Reys' co-founder Gloria Zhu, and former TechCrunch editor Josh Constine. Seven Seven Six's Alexis Ohanian said that he was impressed by Holland's combination of design taste and technology expertise. “It's absolutely an arms race [against fake goods makers], but we're used to fighting those and consistently winning in tech — and luxury brands need all the help they can get,” Ohanian said.
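The vulnerability Holland describes comes down to how basic NFC tags authenticate: many just emit a static identifier, which a tool like Flipper Zero can read once and replay forever. Veritas has not disclosed its actual protocol, but the standard defense is a keyed challenge-response scheme, sketched generically below (all names here are illustrative, not Veritas's design):

```python
import hashlib
import hmac
import secrets

def chip_response(key: bytes, challenge: bytes) -> bytes:
    """What a secure chip computes on each tap: a MAC over a fresh challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify_tap(key: bytes, chip) -> bool:
    """The verifier (e.g. a brand's app) issues a random challenge per tap."""
    challenge = secrets.token_bytes(16)
    return hmac.compare_digest(chip(challenge), chip_response(key, challenge))

key = secrets.token_bytes(32)

# A genuine chip holds the secret key and answers any challenge.
genuine = lambda c: chip_response(key, c)

# A clone that merely replays a previously recorded response fails,
# because every verification uses a brand-new random challenge.
recorded = chip_response(key, b"challenge observed yesterday")
clone = lambda c: recorded

assert verify_tap(key, genuine)
assert not verify_tap(key, clone)
```

A static-UID tag is the degenerate case where the "response" never changes, which is exactly what makes it replayable.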
Federal records obtained by WIRED show that over the past several months, Immigration and Customs Enforcement (ICE) and the Department of Homeland Security (DHS) have carried out a secret campaign to expand ICE's physical presence across the US. Documents show that more than 150 leases and office expansions have placed, or would place, new facilities in nearly every state, many of them in or just outside of the country's largest metropolitan areas.

In many cases, these facilities, which are to be used by street-level agents and ICE attorneys, are located near elementary schools, medical offices, places of worship, and other sensitive locations. In El Paso, Texas, for example, the agency is moving into a large campus of buildings right off of Interstate 10 near multiple local health providers and other businesses. In Irvine, California, ICE is moving into offices located next to a childcare agency. In New York, ICE is moving into offices on Long Island near a passport center.

The General Services Administration (GSA), which manages federal buildings and functions as the government's internal IT department, is playing a critical role in this aggressive expansion. In numerous emails and memorandums viewed by WIRED, DHS explicitly asked GSA to disregard usual government lease procurement procedures and even hide lease listings due to “national security concerns” in an effort to support ICE's immigration enforcement activities across the US.

“GSA is committed to working with all of our partner agencies, including our patriotic law enforcement partners such as ICE, to meet their workspace needs. GSA remains focused on supporting this administration's goal of optimizing the federal footprint, and providing the best workplaces for our federal agencies to meet their mission,” Marianne Copenhaver, GSA associate administrator for communications, tells WIRED. DHS, ICE's parent agency, did not reply to requests for comment.
The agency received nearly $80 billion in funding as part of Trump's One Big Beautiful Bill Act, giving it virtually unlimited resources to combat what the administration has consistently portrayed as an “invasion.” With new employees comes a desperate need for office space, and the possibility of deployment to new areas of operation.

In September, as NPR and The Washington Post reported, a number of GSA employees were added to an “ICE surge” team responsible for finding new office locations and expanding preexisting offices for ICE employees. More specifically, according to documents viewed by WIRED, workers at the Public Buildings Service (PBS), the department within GSA that handles government buildings and leases, were assigned to actively support ICE's physical expansion and told to find leasing spaces for ICE's Enforcement and Removal Operations (ERO) and Office of the Principal Legal Advisor (OPLA) divisions across the country. ERO is tasked with immigration enforcement, including the arrest, detention, and removal of immigrants, and previously operated out of only 25 field offices in the US; OPLA is the legal arm of ICE, and lawyers with OPLA litigate “all removal cases including those against criminal aliens, terrorists, and human rights abusers,” for DHS, according to ICE's website.

In addition to expanding previously held ICE offices, GSA has moved or is moving ICE into new buildings, or into space the government controlled under the terms of existing leases, in almost every US state and major city. Starting in September, GSA was pushed to bypass the Competition in Contracting Act (CICA), which requires open competition among bidders for federal building and lease procurements, because ICE requested that leases fall under the “unusual or compelling urgency” government statute.
These team members were told that around 250 new locations were needed for ICE employees, and that this would potentially be achieved through new lease acquisitions and by locating ICE in existing federal spaces. In a memorandum dated September 10, 2025, an OPLA representative asked GSA's office of general counsel to look past the usual leasing procedures with the “unusual and compelling urgency justification,” in accordance with Trump's executive order on immigration. “OPLA has critical space needs that require the ability to identify office locations nationwide that OPLA can readily occupy as soon as possible.”

GSA's ICE surge team began visiting potential leasing locations and worked to finalize deals within days. A DHS official sent GSA an email on September 24, 2025, asking that the agency not publicize leasing information, recognizing that this request was outside of the “normal” process. “Due to national security concerns and recent attacks against ICE, publicizing new lease locations puts our officers, employees, and detainees in grave danger,” the email stated.

GSA had been instructed in January 2025 to pause most acquisitions, deliveries, and modifications, except for projects under $50,000 and those related to supporting security measures for the president's office. Yet by September 29, GSA had already awarded leasing projects, and the ERO division at ICE had sent the ICE surge team a list of requirements for specific leasing locations, including sally ports—a secure entryway system with interlocking doors used by military troops, prisons, and police stations—and other security measures.

By early October, the ICE surge team was working through the government shutdown, even as other critical government work was put on hold. Days after the shutdown began, GSA was still awarding leases.
On October 6, 2025, a signed internal memorandum stated that GSA should “approve of all new lease housing determinations associated with ICE hiring surge,” in light of ICE's “urgent” space requirements and the purported impact of delays on the agency's ability to “meet critical immigration enforcement deadlines.”

In a memorandum dated October 29, 2025, a representative from Homeland Security Investigations—one of the two major departments within ICE, along with ERO, and tasked with a wide range of investigative work in cases ranging from human trafficking to art theft—asked GSA's office of general counsel to engage in nationwide lease acquisition on behalf of DHS “using the unusual and compelling urgency justification,” in accordance with Trump's executive immigration order. “If HSI cannot effectively obtain office space in a timely manner, HSI will be adversely impacted in accomplishing its mission—a mission that is inextricably tied to the Administration's priority in protecting the American People Against Invasion,” the memorandum states.

By early November, according to documents viewed by WIRED, 19 projects had been awarded in cities around the US, including Nashville, Tennessee; Dallas, Texas; Sacramento, California; and Tampa, Florida. Multiple projects were days away from being awarded in Miami, Florida; Pittsburgh, Pennsylvania; and New Orleans, Louisiana, among others, and emergency requests for short-term space had been made in eight cities, including Atlanta, Georgia; Baltimore, Maryland; Boston, Massachusetts; and Newark, New Jersey. In documents viewed by WIRED, ICE has repeatedly outlined its expansion to cities around the US.
The September memorandum citing “unusual and compelling urgency” for office expansion states that OPLA will be “expanding its legal operations” into Birmingham, Alabama; Fort Lauderdale, Fort Myers, Jacksonville, and Tampa, Florida; Des Moines, Iowa; Boise, Idaho; Louisville, Kentucky; Baton Rouge, Louisiana; Grand Rapids, Michigan; St. Louis, Missouri; Raleigh, North Carolina; Long Island, New York; Columbus, Ohio; Oklahoma City, Oklahoma; Pittsburgh, Pennsylvania; Charleston and Columbia, South Carolina; Nashville, Tennessee; Richmond, Virginia; Spokane, Washington and Coeur d'Alene, Idaho; and Milwaukee, Wisconsin.

The table below gives a detailed listing of planned ICE lease locations as of January, and includes current ICE offices that are set to expand and new spaces the agency is poised to occupy. It does not include more than 100 planned ICE locations across many states—including California, New York, and New Jersey—where WIRED has not viewed every specific address. As of January, ICE's expansion is heavily concentrated in a few key states.

In The Woodlands, a suburb of Houston, ICE appears poised to move into an office building at 1780 Hughes Landing Boulevard, blocks away from a Primrose preschool. In El Paso, ICE is moving into the Epicenter Office Community, a large campus of buildings right off Interstate 10 near many local health providers and other businesses. In San Antonio, ICE is considering a move into a building located at 15727 Anthem Parkway, near apartment buildings, dozens of restaurants, and the Methodist Hospital Landmark.

A Trump administration official recently told WIRED that California and New York are “next” for the type of fraud investigation that culminated in 3,000 ICE agents in Minneapolis. In Sacramento, ICE has installed security features at the John E. Moss building ahead of further expansion.
In Irvine, a city in Orange County located an hour's drive from Los Angeles, ICE is moving into offices at 2020 Main Street, right next to the airport and a childcare agency. In Van Nuys, a neighborhood of Los Angeles, ICE is expanding its offices at the James C. Corman federal building, which also houses offices for the IRS and Health and Human Services. In Woodbury, New York, a hamlet on Long Island, ICE is moving into offices at 88 Froehlich Farm Boulevard, near an expedited passport center. All three of these locations are within an hour and a half of a warehouse in Chester, New York, that DHS is pursuing as an immigrant detention center.

In Hyattsville, Maryland, ICE is expanding its offices at the Metro 1 building on 6505 Belcrest Road, which sits a few blocks from a Lutheran church. The One City Center building in Portland, Maine, where ICE plans to expand its offices, is within walking distance of at least six churches, a mosque, a synagogue, and a Salvation Army adult rehab center.

Several planned ICE office spaces are located near schools and early-childhood care centers. ICE is poised to move into a building on 1000 Westlakes Drive in the Philadelphia, Pennsylvania, suburb of Berwyn; the Hillside Elementary School is about a mile away. In Hartford, Connecticut, OPLA is poised to expand existing ICE office space in the Abraham A. Ribicoff federal building, which sits two blocks from the Betances elementary school. In the Columbus, Ohio, suburb of Westerville, OPLA appears ready to move into a small office building at 774 Park Meadow, near the Oakstone Academy High School.

Many other planned locations are near hospitals and medical offices. In Columbia, South Carolina, ICE is poised to move into offices at 1441 Main Street, which sits blocks away from the Prisma Health Baptist Hospital.
The Hyattsville location is about an hour and a half from a warehouse DHS recently purchased that ICE has said will be used to detain immigrants, as is a second office marked for OPLA use in Cockeysville, Maryland. Among the many ICE leasing projects underway in Florida is Research Commons, a building located in Orlando at 12249 Science Drive that's less than 25 minutes away from another warehouse identified by the Post as a potential large-scale detention center. In Alexandria, Louisiana, ICE is moving into a building on 1201 3rd Street, right in the city's historic downtown center and a 16-minute drive from the Alexandria Staging Facility, where immigrants have been detained, transferred, and then deported.

In some cases, ICE will expand existing offices and move in with unrelated government agencies—at 801 Arch Street in Philadelphia, Pennsylvania, for example, it will share space with the DMV.
Modern enterprises generate enormous amounts of security data, but legacy tools like Splunk still require companies to store all of it in one place before they can detect threats — a slow and costly process that's increasingly breaking down in cloud environments where volumes are exploding and data lives everywhere. AI cybersecurity startup Vega Security wants to flip that approach by running security where the data already lives: in cloud services, data lakes, and existing storage systems. And the two-year-old firm just raised a $120 million Series B round to scale that vision, TechCrunch has exclusively learned.

Led by Accel with participation from Cyberstarts, Redpoint, and CRV, the new round nearly doubles Vega's valuation to $700 million and brings its total funding to $185 million, money the startup will use to further develop its AI-native security operations suite, beef up its go-to-market team, and expand globally.

In complex cloud environments, Vega founder Sandler says, the current model often increases exposure to threat actors. Like so many cybersecurity founders, Sandler did his time in the Israeli military's cybersecurity unit before being one of the founding employees behind Granulate, which Intel acquired for $650 million in 2022. After a year at Intel, Sandler decided to “do it big time in the cybersecurity world.”

That pedigree is partly what attracted the attention of Andrei Brasoveanu, a partner at Accel. But it was also Vega's ambitious approach to security management in a market that is already dominated by one player: Splunk. Legacy tools, in his view, fail at processing the insane rise of data volumes driven by AI.
That's why Sandler says Vega's “North Star” was to not only build a solution that is more cost effective and better at threat detection, but “to make it no drama, as simple as possible for the biggest, most complex enterprises in the world to adopt it within minutes.”

Rebecca Bellan is a senior reporter at TechCrunch where she covers the business, policy, and emerging trends shaping artificial intelligence.
Netflix, already the largest streaming platform with over 325 million subscribers, took a bold step by acquiring Warner Bros.' film and television studios, as well as HBO, HBO Max, and other assets. The deal, announced in early December, will bring together some of the most legendary franchises, such as Game of Thrones, Harry Potter, and DC Comics properties, among others, all under one roof.

Warner Bros. Discovery (WBD) revealed it was exploring a potential sale after receiving unsolicited interest from several major players in the industry. For years, WBD has struggled under the weight of billions of dollars in debt, compounded by declining cable viewership and fierce competition from streaming platforms. Ultimately, WBD's board determined that Netflix's offer was the most attractive, despite Paramount offering approximately $108 billion in cash. Additionally, Netflix recently amended its agreement to an all-cash offer at $27.75 per WBD share, further reassuring investors and paving the way for the deal to proceed.

Even after Netflix emerged as the preferred buyer, tensions with Paramount remained high, as the rival company continued to pursue Warner Bros.' assets. Paramount persisted in its attempts to acquire WBD for several months. Still, the board repeatedly rejected its offers, citing concerns about Paramount's heavy debt load and the increased risk associated with its proposal. The board noted that Paramount's offer would have left the combined company burdened with $87 billion in debt, a risk it was unwilling to take.

In January, Paramount filed a lawsuit seeking more information about the Netflix deal. The company continues to assert that its offer is far superior. Earlier this week, it was reported that Netflix co-CEO Ted Sarandos is scheduled to testify before a U.S. Senate committee about the deal, a move that highlights just how seriously lawmakers are taking these concerns.
In November, prominent lawmakers — Senators Elizabeth Warren, Bernie Sanders, and Richard Blumenthal — voiced their concerns to the Justice Department's Antitrust Division, warning that such a massive merger could have serious consequences for consumers and the industry at large. Should regulators block the acquisition, Netflix would be obligated to pay a $5.8 billion breakup fee. It remains unclear whether Warner Bros. would remain an independent company or revisit previous acquisition proposals. There are also widespread concerns about potential job losses and lower wages.

For creators and theaters, uncertainty remains around release windows. Netflix co-CEO Ted Sarandos has stated that all films planned for theatrical release through Warner Bros. will continue as scheduled. However, he also hinted that, over time, release windows may be shortened, with movies coming to streaming platforms sooner than before.

What does all this mean if you're a Netflix or HBO Max subscriber? Netflix executives have reassured viewers that HBO's operations will remain largely unchanged in the near term. At this stage, the company says it's too early to make any definitive announcements about potential bundles or app integration. Regarding pricing, Sarandos has stated that no immediate changes will occur during the regulatory approval period. However, Netflix has historically raised subscription prices regularly, tending to hike its rates every year or two, so price increases are possible once the acquisition is finalized. Still, regulatory approvals are pending, and scrutiny could shape the final outcome.

Lauren covers media, streaming, apps and platforms at TechCrunch.
Former GitHub CEO Thomas Dohmke has raised the largest-ever seed round for a dev tool startup, according to its lead backer, Felicis. His new company, Entire, is building tools to help developers better manage code written by AI agents. One piece is a Git-compatible database to unify the AI-produced code; the final piece is an AI-native user interface designed with agent-to-human collaboration in mind.

The first product Entire is releasing is an open source tool it calls Checkpoints, which automatically pairs every bit of software an agent submits for use in a software project with the context that created it, including prompts and transcripts. Entire hopes to help developers better deal with the large volumes of software created by AI coding agents. Popular open source projects are particularly overwhelmed these days with suggested code contributions that may or may not be AI slop — meaning poorly designed and possibly unusable code.

Dohmke explains in the press release: “We are living through an agent boom, and now massive volumes of code are being generated faster than any human could reasonably understand.”

Dohmke was CEO of Microsoft's GitHub for four years, leaving in August 2025 to found a startup, he said in a post on X at the time.
When you purchase through links on our site, we may earn an affiliate commission. Get Tom's Hardware's best news and in-depth reviews, straight to your inbox. TSMC on Tuesday held a board meeting where it, among other things, approved plans to spend $44.962 billion on building new fabs and upgrading existing production capacities. In addition, the company promoted a developer of its 1nm-class process technology. TSMC holds board meetings — where it approves things like capital appropriations, capital injections, or distribution of dividends — every quarter. Approval of a plan to spend $44.962 billion is a record one and indicates that the company is getting more aggressive with its expansion, as well as its projects getting more costly, which is in line with the general industrial trend for fabs to get more expensive. It should be noted that approvals of capital appropriations are not indicators of actual spending, but these are allowances for management to spend them on certain projects, which may or may not be parts of the ongoing fiscal year's CapEx. Earlier this year, TSMC announced plans to spend between $52 billion and $56 billion on all-new manufacturing capacities, upgrading existing fabs, and building advanced packaging facilities. The contract chipmaker plans to spend between 70% and 80% of its 2026 CapEx on advanced process technologies, between 10% and 20% of the budget on advanced packaging and mask making, as well as approximately 10% on specialty technologies. If the company has considerably more capacity than its rivals, it will be more likely to land big orders from big customers. If TSMC has slightly more capacity than its clients need, then the latter will unlikely outsource even a part of their production to competitors. The promotion may also mean that as a VP, S.S. 
Lin will be able to oversee more technology programs than a single (yet very important) process node, as well as have greater influence over roadmap goals, priorities, and resource allocation, though we are speculating. Also, as the A10 program moves from research and development toward finalization and adoption by partners and customers, the program leader may need more power to execute their goals. Some believe that TSMC plans to start using High-NA EUV lithography tools with its A10 node. Anton Shilov is a contributing writer at Tom's Hardware. Tom's Hardware is part of Future US Inc, an international media group and leading digital publisher.
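TSMC's stated 2026 budget split can be turned into rough dollar figures with quick arithmetic. A minimal sketch, assuming the midpoint of the $52 billion to $56 billion range (our assumption purely for illustration, not TSMC guidance):

```python
# Rough dollar breakdown of TSMC's 2026 CapEx guidance ($52B to $56B).
# Share ranges come from the article; the midpoint is our assumption.
capex_low, capex_high = 52e9, 56e9
midpoint = (capex_low + capex_high) / 2  # $54B

shares = {
    "advanced process technologies": (0.70, 0.80),
    "advanced packaging and mask making": (0.10, 0.20),
    "specialty technologies": (0.10, 0.10),  # "approximately 10%"
}

for name, (lo, hi) in shares.items():
    print(f"{name}: ${midpoint * lo / 1e9:.1f}B to ${midpoint * hi / 1e9:.1f}B")
```

At the midpoint, advanced process technologies alone would absorb roughly $38 billion to $43 billion, which puts the record $44.962 billion appropriation in context.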
It's expensive, but unmatched in features, performance, and premium design for AM5 builders. One caveat: the beautiful decorative plate must be removed to access the primary PCIe slot. Thanks to X3D CPUs like the Ryzen 7 9850X3D, AMD's partners are still pushing out new high-end motherboards, like Asus' latest flagship ROG Crosshair X870E Glacial, which is on our test bench today. You can tell just by looking at it that this board is both impressive and expensive. At $1,199.99, it is costly and right up there with other recently released flagships like the MSI X870E Godlike X and Gigabyte's X870E Aorus Xtreme X3D AI TOP, both well over $1,000. It's also the only desktop-class board with two 10 GbE ports. On top of that, the Glacial has an incredibly robust 28-phase VRM, a high-end audio solution with an integrated DAC/amp, and loads of other perks, including EZ PC DIY and AI features (for overclocking, cooling, and performance) that aim to pull you in. The ROG Crosshair X870E Glacial performed well across our test suite. Productivity results were in line with other AM5 boards, occasionally surpassing or falling short of the average. While synthetic gaming benchmarks like 3DMark were slower, performance in actual games was good, with most differences only notable in synthetic tests. Overall, the board delivers solid performance, looks good, and is fully featured. Below, we'll examine the Glacial's performance and other features to determine whether it deserves a spot on our list of the best motherboards.
But before we share test results and discuss details, here are some specifications from Asus' website:

- (8) 4-pin fan headers (accept PWM and DC)
- (2) W_PUMP+ headers (4-pin)
- (1) AIO Q-Connector
- (1) Extra Flow fan header
- ROG SupremeFX audio (ALC4082) + ESS9219 Quad DAC, LED-illuminated audio jacks

The number of included accessories is too large to cover in a paragraph, so we've listed everything the box includes, along with a picture of the unique items, below. The board looks fantastic in white — a very clean look. I understand it's typically not visible, but if you use a vertically mounted GPU, the less-attractive M.2 heatsink plate is exposed. On the surface, the separate plate just seems wasteful (read: an increase in the BOM) when Asus could incorporate it into one heatsink and still show it off. Perhaps there's some engineering magic we're unaware of. The onboard display can show things like pre-loaded ROG animations, hardware information, or your own customized image. Under that is a large heatpipe-connected VRM heatsink to cool the highly capable power delivery below. Above that, and hidden beneath the top magnetically connected shroud, are two 8-pin EPS connectors (one required) to power the processor. It's a useful feature if you plan to use compatible Asus AIOs; otherwise, the gold contacts stick out from the white aesthetic (why not put a white rubber cover on it, Asus?). It's a tight fit for the bottom locks (you'll need something skinny to poke them), or you can remove another magnetic piece next to the PCIe latch that says Glacial on it to get better access. Or, use a single locking mechanism at the top. Still, that's plenty fast and way past AMD's sweet spot. This is a great way to add easily swappable M.2 storage. Asus also includes the Hyper M.2 card, which you install in a PCIe slot. The Hyper M.2 offers two more PCIe 5.0 x4 (128 Gbps) M.2 sockets, bringing the total to four (if you force M.2_2 to x4 speeds). If your build needs a lot of fast M.2 storage, the Glacial is where it's at.
Per usual, each supports PWM and DC-controlled devices. Power output is 1A/12W on most headers (CPU, Rad, Chassis, AIO, and EF fans), while the two pump headers allow 3A/36W. The Asus BIOS or Armoury Crate software controls these attached devices. To the right of that are the ProbeIt measurement points, which let you check your system's current voltages and overclocking settings. You can measure Vcore, Vmem, VSOC, and eight other voltages. This is primarily useful for the extreme overclocker, but it's always worth verifying against software, as software readings can be off. Looking down the right edge, we see another shroud with two buttons on top (Start and FlexKey), and beneath those are multiple 90-degree headers. These include the first 3-pin ARGB header, two additional 4-pin fan headers (W_PUMP and CHA_FAN2), the 24-pin ATX power connector, and two front-panel USB 3.2 Type-C connectors (both 20 Gbps). With a total of 28 phases (24 for Vcore), you're not going to find a more potent VRM. From there, power moves to the Infineon PMC41420 110A MOSFETs. The 2,640 amps available will handle any CPU you throw at it, whether you're using ambient or extreme cooling methods, even a Ryzen 9 9950X or the recently released Ryzen 9 9850X3D. Starting on the left, Asus uses the flagship-class SupremeFX audio solution (read: Realtek ALC4082 codec) along with an ESS9219 DAC. This is the best native audio combination you can get, and what you'd expect from a high-end board. Next are the two PCIe slots hidden beneath magnetically attached shrouds. I don't see the point of this decorative shroud, as you have to remove it to use either PCIe slot (and who's going to use only the iGPU on a board like this?). Both of these reinforced slots connect through the CPU, offering PCIe 5.0 bandwidth. The top slot is for primary graphics and runs at x16 speeds (breaking down to x8/x8, x8/x4/x4, or x4/x4/x4/x4 modes), while the bottom slot is limited to x8.
Note that this applies to 7000- and 9000-series desktop processors; APUs are different (see the specifications on Asus' website for details). Asus has moved away from its controversial PCIe latching mechanism. Under the shrouds and heatsinks are three M.2 sockets. The top socket, M.2_1, under the large 3D VC M.2 heatsink (with Q-Release), runs PCIe 5.0 x4 (128 Gbps) and supports up to 110mm modules. Asus made connecting M.2 drives easy with the M.2 Q-Latch or M.2 Q-Slide functionality. Curiously, the buried heatsinks ship with protective plastic film: they're never visible and don't need protection, so why even bother putting it on? I don't think it's a big deal, but it was a strange choice. Moving to the right edge, we see more horizontal connectivity. Along the bottom are several headers under the magnetic cover, ranging from all four SATA ports to BCLK adjustment buttons for overclocking, with a lot in between. If you're keen on using the shroud and the SATA ports, be sure to use 90-degree connectors so they fit underneath without excessive cable bends. A complete list of connectivity is listed below (from L to R): The eight red Type-A ports are all 10 Gbps. Joe Shields is a staff writer at Tom's Hardware.
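The review's headline VRM current figure is just the Vcore phase count multiplied by the per-stage rating. A quick sanity check (the phase count and 110A rating come from the review; the arithmetic is ours):

```python
# VRM current budget: 24 Vcore phases, each fed by a 110A power stage,
# per the review's spec rundown.
vcore_phases = 24
amps_per_stage = 110
total_amps = vcore_phases * amps_per_stage
print(total_amps)  # 2640, matching the "2,640 Amps" figure in the review
```

That current headroom far exceeds what any AM5 CPU draws, even under extreme overclocking, which is why the review is confident the board can feed a Ryzen 9 9950X or 9850X3D.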
When you purchase through links on our site, we may earn an affiliate commission. As per documents from the case, the initial class action lawsuit was brought over claims "that G.Skill deceptively advertised and labeled the speed of its DDR-4 and DDR-5 DRAM (non-laptop) memory products with rated speeds over 2133 MHz or 4800 MHz, and that G.Skill is liable for violations of consumer protection statutes and breach of express warranty." Specifically, the lawsuit seems to be about overclocking, with the represented plaintiffs alleging "they were led to believe that the advertised speeds were 'out of the box' speeds requiring no adjustments to their PCs." G.Skill denies the allegations, and as the court documents put it, "The Court has not decided which side is right." Instead, the settlement avoids "the uncertainties, burdens, and expenses associated with ongoing litigation," and ensures class members get a payout sooner rather than later. To that end, "All individuals in the United States who purchased one or more G.Skill DDR-4 and DDR-5 DRAM (non-laptop) memory products with rated speeds over 2133 MHz or 4800 MHz respectively from January 31, 2018 to January 7, 2026," are part of the settlement class and eligible for a payout. Court documents go on to specify that class members will be eligible for up to five qualifying purchases per household, provided they have proof of purchase. As is often the case with settlements, a lot of that money has already been allocated: $295,000 in settlement administration costs, up to $800,000 in attorneys' fees, an undetermined amount of attorneys' expenses, and service awards to class representatives of up to $5,000 mean that upwards of half the settlement pot has already been spent. How much you'll get if you're eligible depends entirely on how many class members there are, with the remainder of the fund split between them.
You'll need to submit a claim form by April 7, or you can, of course, submit an objection or exclude yourself from the class by the same deadline. Going forward, the court documents state that rated speeds will be listed as "up to" speeds and include the following disclaimer: "Requires overclocking/BIOS adjustments." Stephen is Tom's Hardware's News Editor with almost a decade of industry experience covering technology, having worked at TechRadar, iMore, and even Apple over the years. He has covered the world of consumer tech from nearly every angle, including supply chain rumors, patents, and litigation.
AMD is getting ready to ditch its AGESA firmware in favor of an open-source successor dubbed openSIL, starting with Zen 6. In the meantime, 3mdeb, a Polish open-source consulting firm, has announced that the first stages of porting openSIL to a consumer Zen 5 motherboard are underway. If you're an enthusiast for this kind of stuff, you can now take openSIL for a spin before it shows up with AMD's next-generation CPUs, though the firm warns that this is a "proof of concept" that is "not intended for production use." AMD published openSIL initialization code for its Turin server chips well before it published the same code for its desktop Phoenix CPUs. AGESA and openSIL handle silicon initialization; without these firmware platforms, your computer would not boot at all. OpenSIL represents a big improvement over AMD's outgoing AGESA platform in how the code can be inspected and guarded against cyberattacks. AGESA's main problem is that its code is closed source, preventing users from inspecting the firmware for security auditing, bug checking, or other purposes. With openSIL, AMD has addressed this by making the new firmware open source. AGESA is also designed around UEFI as the host firmware, whereas openSIL can pair with alternatives like coreboot. That said, there's not much reason to run openSIL right now if you own an MSI B850-P Pro: coreboot's initial support for openSIL on the B850-P Pro is still in development, and the board is technically not on coreboot's support list yet. Aaron Klotz is a contributing writer for Tom's Hardware, covering news related to computer hardware such as CPUs and graphics cards.
If you've been waiting for a reassuring answer to the question “Why does Kimbal Musk seem like he was so close to convicted sex criminal Jeffrey Epstein?” here you go: “My only meeting with that demon was in his New York office during the day.” One impression a reasonable person might get from looking at the Epstein documents involving Kimbal Musk is that Epstein was aware of at least some of the details of his romantic life, knew his partner well, and that Epstein planned to spend, or at least was considering spending, a great deal of time with them in many locations around the country back in 2012 and 2013. But as laid out in an X post from Musk on Monday night, his explanation is that he was running an email newsletter at the time. If this 2012-era newsletter was just Kimbal Musk using his normal email client to send out emails to “thousands of people” instead of using something like Mailchimp, that would be somewhat weird, but not unheard of, and there's conceivably a universe where that could at least partly explain why Epstein was on email chains with him. I say “partly” because it doesn't fully clarify all the reasons for the October 7, 2012 email in the files in which Kimbal Musk emails two people, Boris Nikolic (an associate of Epstein and Bill Gates) and Jeffrey Epstein, and writes in part: “Great to hang out today. Jeffrey and Boris, many thanks for connecting me with [name voluntarily redacted by Gizmodo].” One reply reads: “Kimbal — just fyi — you better be nice to [name redacted] 😉 Jeffrey goes crazy when someone mistreats his girls/friends.”
For that matter, Musk's explanation doesn't completely satisfy my curiosity as to why Epstein had a Google Calendar reminder from shortly before those emails about an event in September called “kimba musk, birthday four seasons,” which occurred the Saturday after Kimbal Musk's birthday. And it could leave the public with some lingering curiosity about why Epstein received an email a few weeks later, on October 26, purporting to be the schedule of a person with the same name as the apparent “girl” mentioned in the October 7 email. That schedule spans fall of 2012 through January of 2013 and includes items like “Kimbal in NYC”; this “Kimbal” character is mentioned five times in total. Kimbal Musk is a major shareholder in Tesla and, according to Electrek's Fred Lambert, has sold or exercised options multiple times when the price appeared to be peaking. Forbes placed his net worth at about $700 million in 2021. Musk was a board member at Burning Man from 2019 until this year, when he left amid the Epstein controversy. He was also a board member at Chipotle from 2013 through 2019.
It's a modern-day truism that if automation, AI or otherwise, makes any sort of positive change in your work life, you'll feel a sort of squeezing sensation, and additional work will materialize to erase any momentary feelings of relief. According to a case study highlighted in some “in-progress research” from Aruna Ranganathan, who teaches management at UC Berkeley, and Xingqi Maggie Ye, a Ph.D. student in Ranganathan's program, AI “intensifies” work, and certainly doesn't make people's days easier. It sounds, in other words, like hell on earth. If that is, paradoxically, what you want in your workday, then you probably work in a place like Silicon Valley, or even at OpenAI, where CEO Sam Altman has described AI's ability to intensify his own work in ways that make him sound strangely awed and humbled (even as he expresses little to no regret about his ambition to annihilate knowledge worker jobs). “I don't think I can come up with ideas fast enough anymore,” he said in an interview in October of last year, adding, “I think it will mean that stuff just happens faster and that you can… that you can try a lot more stuff, and figure out the better ideas quickly.” Altman's experience may resonate with the workers described in Ranganathan and Ye's article for Harvard Business Review, which covers an eight-month study of generative AI's effects on working life at a company with about 200 employees. Employees “worked at a faster pace,” the authors write, covered a “broader scope of tasks,” and found themselves working “more hours of the day, often without being asked to do so.” This was a workplace that, Ranganathan and Ye explain, didn't mandate AI use. This doesn't sound like a 200-person workplace where widgets were being glued together.
Instead, many of the roles described in the article involve engineering, writing code, and communicating in Slack, so it's safe to say these were knowledge workers and software engineers, quite possibly making use of tools like Claude Code. Due to AI, many of Ranganathan and Ye's subjects, it seems, started expanding the scope of their jobs, usurping one another's roles, and taking on side duties like coaching others on coding or correcting their vibe-coded work. Hiring new employees may have been postponed or circumvented altogether, because employees “absorbed work that might previously have justified additional help or headcount.” Workers also, it seems, furtively fed tasks into their AI tools while they were ostensibly in meetings, and submitted prompts while on breaks, while waiting for things to load, or while they were supposed to be having lunch. How you interpret this case study is going to vary. According to a 2024 Pew survey, about half of U.S. workers reported that they were either somewhat satisfied or “not too/not at all satisfied” with their jobs, and the other half said they were “extremely/very satisfied.” That “extremely/very satisfied” group shrinks from 50% to 42% among lower-income respondents. So I don't get the impression that leaner teams, people having to learn to do more things, and work that seeps into breaks will help most people's job satisfaction, but maybe I lack a certain kind of vision. But let's not assume that all tech workers love this kind of productivity theater, or that the sense of greater productivity in Ranganathan and Ye's case study is necessarily anything other than an illusion.
An anonymous worker at the cybersecurity firm CrowdStrike wrote into the newsletter Blood in the Machine last year, saying workers at that company “have been encouraged to handle the additional per capita workload by simply working harder and sometimes working longer for no additional compensation,” and that “While our Machine Learning systems continue to perform with excellence, I have yet to be convinced that our usage of genAI has been productive in the context of the proofreading, troubleshooting, and general babysitting it requires.” According to this person, “The net result is not a lightening of the load as has been so often promised,” and “Morale is at an all-time low.”
xAI's new Seattle-area office location was first reported Monday by the Puget Sound Business Journal, and confirmed by GeekWire based on references to xAI in online permit logs. In fact, the xAI and OpenAI offices in downtown Bellevue will be about a 10-minute walk from each other, should Musk and Sam Altman ever find themselves in their respective Seattle-area hubs at the same time and decide to patch things up. News of the location comes days after SpaceX announced its acquisition of xAI in a deal valuing the AI company at $250 billion, further consolidating Musk's businesses. Job listings show xAI is hiring for a range of engineering roles in the Seattle area, with salaries from $180,000 to $440,000. The positions go beyond networking and infrastructure to include core AI research roles, such as members of the technical staff focused on CUDA/GPU kernel development, image generation, video generation, and world models. This signals that the Bellevue office will serve as a hub for AI model development, not just operations support.