Thomas said Microsoft “shaped me in ways I never imagined.” He began his two-decade run at the company as an intern. “Leaving isn't easy — but some opportunities are so special and unique that you just have to go for them.” Thomas spent the past six years as a corporate vice president at Microsoft, where he led strategy, product management, and engineering execution for Microsoft Cloud for Industry. — Raji Rajagopalan has a new role at Microsoft: GitHub's vice president of engineering. “My goal is to help GitHub continue to be the place loved by devs, where innovation happens and human-agent workflows thrive, as we move into this new era of AI-driven development,” Rajagopalan said on LinkedIn. — Katie Bardaro is senior VP of customer experience at Avante, a Seattle startup building software to help companies reduce HR administration workloads and overall benefits program costs. Bardaro was previously chief customer officer at Syndio, a company that analyzes workplace pay equity issues and provides strategies for fixing disparities. Before that she spent more than a decade at Payscale. — Vivek Sharma is leaving Stripe for a cryptic new venture focused on “AI's potential to fundamentally change how people work.” Sharma, who has held executive roles at Microsoft and Meta, didn't provide further details about the stealthy startup in a LinkedIn post, but did name his collaborators. — Jeff Carr is now CEO of Atana, a startup building workplace training content that incorporates behavior-based learning and development. He succeeds Atana co-founder and former CEO John Hansen, who will remain as executive chair. — Hryb, known by his longtime handle “Major Nelson,” left Microsoft in 2023 after more than two decades in corporate communications, promoting the launches of games and other products. — Bartot is also co-founder and CTO of the software startup AirSignal, an affiliate professor at the UW, and a startup mentor at Creative Destruction Lab. Bartot said on LinkedIn that he looks forward to working with the TheFounderVC team “to help exceptional early-stage founders build the next generation of great Vertical AI companies and products.” — Dickey said on LinkedIn that he has used Atlas four times to start his own companies and aligns with Stripe's goal of “making the administrative layer a breeze — and helping new companies start strong from day one.”
On Tuesday, U.K.-based Iranian activist Nariman Gharib tweeted redacted screenshots of a phishing link sent to him via a WhatsApp message. “Do not click on suspicious links,” Gharib warned. This hacking campaign comes as Iran grapples with the longest nationwide internet shutdown in its history, as anti-government protests — and violent crackdowns — rage across the country. Given that Iran and its closest adversaries are highly active in offensive cyberspace (read: hacking people), we wanted to learn more. Gharib also shared a write-up of his findings. This data revealed dozens of victims who had unwittingly entered their credentials into the phishing site and were subsequently likely hacked. The list includes a Middle Eastern academic working in national security studies; the boss of an Israeli drone maker; a senior Lebanese cabinet minister; at least one journalist; and people in the United States or with U.S. phone numbers. TechCrunch is publishing our findings after validating much of Gharib's report. According to Gharib, the WhatsApp message he received contained a suspicious link, which loaded a phishing site in the victim's browser. Dynamic DNS providers allow people to connect easy-to-remember web addresses — in this case, a duckdns.org subdomain — to a server whose IP address might frequently change. It's not clear whether the attackers shut down the phishing site of their own accord or were caught and cut off by DuckDNS. We reached out to DuckDNS with inquiries, but its owner Richard Harper requested that we send an abuse report instead. The phishing domain has several other, related domains hosted on the same dedicated server, and these domain names, such as meet-safe.online and whats-login.online, follow a pattern that suggests the campaign also impersonated other services, like providers of virtual meeting rooms. The phishing page would not load in our web browser, preventing us from directly interacting with it. Depending on the target, tapping on a phishing link would open a fake Gmail login page, or ask for their phone number, and begin an attack flow aimed at stealing their password and two-factor authentication code. But the phishing page's source code had at least one flaw: TechCrunch found that by modifying the phishing page's URL in our web browser, we could view a file on the attackers' servers that was storing records of every victim who had entered their credentials. The records also contained each victim's user agent, a string of text that identifies the operating system and browser versions used to view websites. This data shows that the campaign was designed to target Windows, macOS, iPhone, and Android users. Some of the stolen entries were identifiable as Google two-factor codes, because Google sends those codes in a specific format (usually G-xxxxxx, featuring a six-digit numerical code). Beyond credential theft, this campaign also seemed to enable surveillance by tricking victims into sharing their location, audio, and pictures from their device. This is a long-known attack technique that abuses the WhatsApp device-linking feature and has been similarly abused to target users of the messaging app Signal. We asked Granitt founder Runa Sandvik, a security researcher who works to help secure at-risk individuals, to examine a copy of the phishing page code and explain how it functions. However, we did not see any location data, audio, or images that had been collected on the server. We do not know who is behind this campaign.
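To make the user-agent point concrete, here is a minimal sketch, in Python, of the kind of coarse platform classification that can be read out of a record's user-agent string; the function and the sample string are illustrative, not taken from the attackers' code or TechCrunch's tooling.

```python
# Illustrative only: map a browser user-agent string to a coarse platform label,
# the sort of inference described above for the exposed victim records.
def classify_platform(user_agent: str) -> str:
    ua = user_agent.lower()
    if "android" in ua:
        return "Android"
    if "iphone" in ua or "ipad" in ua:
        return "iOS"
    if "mac os x" in ua or "macintosh" in ua:
        return "macOS"
    if "windows" in ua:
        return "Windows"
    return "Unknown"

# A typical desktop Chrome user agent resolves to "Windows".
sample = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
          "(KHTML, like Gecko) Chrome/120.0 Safari/537.36")
print(classify_platform(sample))
```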
The number of victims hacked by this campaign (that we know of) is fairly low — fewer than 50 individuals — and they include seemingly ordinary people across the Kurdish community, as well as academics, government officials, business leaders, and other senior figures across the broader Iranian diaspora and Middle East. There may be far more victims than we are aware of; knowing who else was targeted could help explain why. It is also unclear what motivated the hackers to steal people's credentials and hijack their WhatsApp accounts; understanding the motive could help identify who is behind this hacking campaign. An espionage motive could make sense, since Iran is currently almost entirely cut off from the outside world, and getting information in or out of the country presents a challenge. Either the Iranian government or a foreign government with interests in Iran's affairs could plausibly want to know whom influential Iranian-linked individuals are communicating with, and about what. As such, the timing of this phishing campaign and who it appears to be targeting could point to an espionage operation aimed at collecting information about a narrow list of people. Miller said the attack “certainly [had] the hallmarks of an IRGC-linked spearphishing campaign,” referring to highly targeted email hacks carried out by Iran's Islamic Revolutionary Guard Corps (IRGC), a faction of Iran's military known for carrying out cyberattacks. Miller pointed to a mix of indications, including the international scope of victim targeting, credential theft, the abuse of popular messaging platforms like WhatsApp, and the social engineering techniques used in the phishing link. On the other hand, a financially motivated hacker could use the same stolen Gmail password and two-factor code of a high-value target, such as a company executive, to steal proprietary and sensitive business information from their inbox. The campaign's focus on accessing a victim's location and device media, however, is unusual for a financially motivated actor, who might have little use for pictures and audio recordings. We asked Ian Campbell, a threat researcher at DomainTools, which helps analyze public internet records, to look at the domain names used in the campaign to help understand when they were first set up, and whether these domains were connected to any other previously known or identified infrastructure. Campbell found that while the campaign targeted victims in the midst of Iran's ongoing nationwide protests, its infrastructure had been set up weeks earlier. He added that most of the domains connected to this campaign were registered in early November 2025, and one related domain was created months earlier, in August 2025. The U.S. Treasury has sanctioned Iranian companies in the past for acting as fronts for Iran's IRGC and conducting cyberattacks, such as launching targeted phishing and social engineering attacks. As Miller notes, “This drives home the point that clicking on unsolicited WhatsApp links, no matter how convincing, is a high-risk, unsafe practice.”
The US Digital Service's former coders, designers, and UX experts have watched in horror as Donald Trump rebranded the service as DOGE, effectively forced out its staff, and employed a strike force of young and reckless engineers to dismantle government agencies under the guise of eliminating fraud. A small though influential team is proposing to answer that exact question, working on a solution they hope to deploy during the next Democratic administration. Tech Viaduct's advisory panel includes former Obama chief of staff and Biden's secretary of Veterans Affairs Denis McDonough; Biden's deputy CTO Alexander Macgillivray; Marina Nitze, former CTO of the VA; and Hillary Clinton campaign manager Robby Mook. But most attention-grabbing is its senior adviser and spiritual leader, Mikey Dickerson, the crusty former Google engineer who was the first leader of USDS. His hands-on ethic and unfiltered distaste for bureaucracy embodied the spirit of Obama's tech surge. No one is more familiar with how government tech services fail American citizens than Dickerson. And no one is more disgusted with the various ways they have fallen short. Dickerson himself unwittingly put the Viaduct project in motion last April. He was packing up the contents of his DC-area condo to move as far away as possible from the political scrum (to an abandoned sky observatory in a remote corner of Arizona) when McDonough suggested he meet with Mook. “The basic idea is that it's too hard to get things done,” says Dickerson. “They're not wrong about that.” He admits that Democrats had blown a big opportunity: “For 10 years we've had tiny wins here and there but never terraformed the whole ecosystem,” Dickerson says. Dickerson was surprised a few months later when Mook called him to say he had found funding from Searchlight Institute, a liberal think tank devoted to novel policy initiatives, to get the idea off the ground. “When I was there, we were severely outgunned, 200 people running around trying to improve websites,” he says. Viaduct's first task is to produce a master plan to remake government services—establishing an unbiased procurement process, creating a merit-based hiring process, and assuring oversight to make sure things don't go awry. The idea is to design signature-ready executive orders and legislative drafts that will guide the recruiting strategy for a revitalized civil service. In the next few months, the group plans to devise and test a framework that could be executed immediately in 2029, without any momentum-killing consensus building. In Viaduct's vision, that consensus will be achieved before the election. “Thinking up bright ideas is going to be the easy part,” Dickerson says. “There needs to be a task force to triage and figure out what has been done” by DOGE, Dickerson says. One challenge will be reversing the de-siloing of personal information that violated previous privacy standards. “That was DOGE's whole schtick from the very beginning.” Writing a plan to roll back DOGE is tricky, because there are three years left for the current White House to muck things up—or perhaps course-correct to mitigate some of the missteps made in 2025. “Getting the buy-in is the key to a successful plan,” says Jenny Wang, a former official under both Obama and Biden, who is now Tech Viaduct's project manager. “If there's no support, it doesn't matter.” Republicans are usually willing to walk through coals to achieve their aims, while Democrats tiptoe over eggshells.
“Surrendering to a status quo that is not working right is a natural reaction, but it would be terrible for leadership to do that,” says one longtime government reformer familiar with the Viaduct plan. Dickerson acknowledges Viaduct's effort might well be for naught. “I am not sure at all that there's going to be what we recognize as a fair election in 2029, and I'm even less sure that someone who's not crazy is going to win it.” If the worst happens, Dickerson is prepared for that, too. “I'm half-retired in the middle of the Arizona desert, and if the US is going to continue to collapse into chaos, there's nothing I can do about it except be as far away from it as I can,” he says. In that case, a lot of his friends—and maybe a certain journalist—might show up on his doorstep, offering help to restore that abandoned sky observatory. This is an edition of Steven Levy's Backchannel newsletter.
YouTube is updating its advertiser-friendly content guidelines to allow more videos on controversial issues to earn full ad revenue, as long as the issues are dramatized or discussed in a non-graphic manner. YouTube notes that content on child abuse or eating disorders will remain ineligible for full monetization. YouTube announced the change this week in a video on its Creator Insider channel. “In the past, the degree of graphic or descriptive detail was not considered a significant factor in determining advertiser friendliness, even for some dramatized material,” YouTube explained. “Consequently, such uploads typically received a yellow dollar icon, which restricted their ability to be fully monetized.” The Google-owned company says it's making the change in response to creator feedback that YouTube's guidelines were leading to limited ad revenue on dramatized and topical content. YouTube notes that it wants to ensure that creators who are telling sensitive stories or producing dramatized content have the opportunity to earn ad revenue. “We took a closer look and found our guidelines in this area had become too restrictive and ended up demonetizing uploads like dramatized content,” YouTube said. “This content might reference topics that advertisers find controversial, but are ultimately comfortable running their ads against.” The policy shift comes at a time when social media platforms have been rolling back online speech moderation since President Donald Trump returned to office. YouTube notes that there are still some areas where ads will remain restricted: topics like child abuse, including child sex trafficking, and eating disorders are not included in this update. Descriptive segments of those topics, or dramatized content around them, remain ineligible for ad revenue.
Nvidia's share of China's AI processor market could drop to just 8% in the coming years as domestic suppliers can satisfy around 80% of local demand, reports Nikkei, citing analysis from Bernstein. "There will be no more need to wait for advanced products from overseas." Bernstein's analysts, cited by Chinese media, expect Nvidia's share of China's AI processor market to drop to around 8% this year from 66% in 2024 as Huawei, Cambricon, and other local independent hardware vendors (IHVs) together approach 80%. Moore Threads' Huashan can compete against Nvidia's Hopper H100 and H200 products, the company's previous-generation AI accelerators that the U.S. recently allowed to be exported to China, albeit with some serious strings attached. Meanwhile, Huawei's AI CloudMatrix 384 can beat both GB200 NVL72 and GB300 NVL72 systems in BF16 FLOPS, a popular format used for AI training, albeit with four times the power consumption. This is still behind leading Blackwell-based clusters, such as Oracle's OCI Supercluster running 131,072 B200 GPUs and offering peak performance of up to 2.4 FP4 ZettaFLOPS for inference, but it is evident that Chinese developers are rapidly increasing the performance of their AI hardware. A draft five-year plan reportedly circulated by the Communist Party in October calls for semiconductor self-reliance under a 'new national system' that directs state bodies, private companies, and financial institutions. At the heart of this effort are the so-called 'four little dragons' of Chinese GPUs: Moore Threads, MetaX, Biren Technology, and Suiyuan Technology (Enflame). Large hyperscalers are also intensifying their custom silicon programs. Baidu's Kunlunxin unit plans to introduce five AI processors by 2030, and Alibaba is also not giving up on its own silicon efforts. Yet, to a large degree, China's AI industry is limited by SMIC's ability to produce chips on its 7nm-class process technologies in sizable quantities. If the company cannot increase its output substantially in the coming years, then either China's AI sector will fall behind America's dramatically, or it will find a way to obtain high-performance GPUs from Nvidia to keep up.
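For a sense of scale, the cluster figure quoted above implies the per-GPU arithmetic below; the cluster peak and the GPU count come from the article, while the per-GPU split is our own back-of-the-envelope math.

```python
# Back-of-the-envelope: what 2.4 FP4 ZettaFLOPS across 131,072 B200 GPUs
# implies per GPU. Figures are taken from the article; the division is ours.
cluster_peak_flops = 2.4e21   # 2.4 ZettaFLOPS, FP4, inference
gpu_count = 131_072

per_gpu_pflops = cluster_peak_flops / gpu_count / 1e15
print(f"~{per_gpu_pflops:.1f} PFLOPS of FP4 per GPU")  # roughly 18.3 PFLOPS
```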
Italy has launched two investigations into Microsoft's Activision Blizzard, alleging the company has engaged in “misleading and aggressive” sales practices for its popular smartphone games Diablo Immortal and Call of Duty Mobile. The country's competition regulator, Autorità Garante della Concorrenza e del Mercato (AGCM), said the investigations focus on the use of design elements to induce users, particularly children, into playing for long periods and making in-game purchases by urging them not to miss out on rewards. That isn't particularly surprising, however, as, unlike full-priced games, free-to-play games have long relied on loot boxes and sales of in-game cosmetics for monetization. The AGCM also highlighted privacy concerns, as the games appear to lead users to select all consent options when signing up, and said it would look into the company's consent process for harvesting and using personal data. Activision Blizzard did not immediately respond to a request for comment.
More than 150 techies packed the house at a Claude Code meetup event in Seattle on Thursday evening, eager to trade use cases and share how they're using Anthropic's fast-growing technology. Software development has emerged as the first profession to be thoroughly reshaped by large language models, as AI systems move beyond answering questions to actively doing the work. Last summer GeekWire reported on a similar event in Seattle focused on Cursor, another AI coding tool that developers described as a major productivity booster. On stage at Thursday's event, Rector demoed an app that automatically fixed front-end bugs by having Claude Code control a browser. “It's kind of evolving the mentality from just writing code to becoming like an architect, almost like a product manager,” he said on stage during his demo. R. Conner Howell, a software engineer in Seattle, showed how Claude Code can act as a personal cycling coach, querying performance data from databases and generating custom training plans — an example of the tool's impact extending beyond traditional software development. Earlier this week Anthropic — which is reportedly raising another $10 billion at a $350 billion valuation — released Claude Cowork, essentially Claude Code's non-developer cousin that is built for everyday knowledge work instead of just programming. AI coding tools are energizing longtime software developers like Damon Cortesi, who co-founded Seattle startup Simply Measured in 2010 and is now an engineer at Airbnb. In a post titled “How Claude Reset the AI Race,” New York Magazine columnist John Herrman noted the growing concern around coding automation and job displacement. “If you work in software development, the future feels incredibly uncertain,” he wrote. However, analysts at William Blair issued a report this week expressing skepticism that other businesses will simply start building their own software with these new AI tools. “Vibe coding and AI code generation certainly make it easier to build software, but the technical barriers to coding have not been the drivers of software moats for some time,” they wrote. The tool reached a $1 billion run rate six months after launch in May. “We're excited to see all the cool things you do with Claude Code,” Caleb John, a Seattle entrepreneur working at Pioneer Square Labs, told the crowd. Editor's note: This story has been updated to reflect that the report cited was from William Blair.
It took Rebecca Yu seven days to vibe code her dining app. She was tired of the decision fatigue that comes from people in a group chat not being able to decide where to eat. Armed with determination, Claude, and ChatGPT, Yu decided to just build a dining app from scratch — one that would recommend restaurants to her and her friends based on their shared interests. Yu is part of the growing trend of people who, thanks to rapid advancements in AI technology, can easily build their own apps for personal use. Most are coding web applications, though they are also increasingly vibe coding mobile apps intended to run only on their own personal phones and devices. It is a new era of app creation that is sometimes called micro apps, personal apps, or fleeting apps, because the apps are intended to be used only by the creator (or the creator plus a select few other people) and only for as long as the creator wants to keep them. They are not intended for wide distribution or sale. For example, founder Jordi Amat told TechCrunch that he built a fleeting web gaming app for his family to play over the holidays and simply shut it down once the vacation was over. Interestingly enough, Darrell Etherington, a former TechCrunch writer who is now a vice president at SBS Comms, is also building his own personal podcast translation app. “A lot of people I know are using Claude Code, Replit, Bolt, and Lovable to build apps for specific use cases,” he said. One artist told TechCrunch that he built a “vice tracker” for himself to see how many hookahs and drinks he was consuming each weekend. Software engineer James Waugh told TechCrunch he built a web app planning tool to help with his cooking hobby. These are apps that are extremely context-specific, address niche needs, and then “disappear when the need is no longer present,” Legand L. Burge III, a professor of computer science at Howard University, said. “It's similar to how trends on social media appear and then fade away,” Burge III continued. “It's really exciting to be alive right now,” she said. In some ways, it was always easy for someone without much coding experience to create web apps via no-code platforms like Bubble and Adalo, which launched before LLMs became popular. What's new is the rising ability to create personal, temporary apps for mobile devices, too. Mobile has lagged because the standard way to load an app on an iPhone is to download it from the App Store, and distributing an app there requires a paid Apple Developer account. But increasingly, mobile vibe-coding startups like Anything (which raised $11 million, led by Footwork) and VibeCode (which raised a $9.4 million seed round from Seven Seven Six last year) have emerged to help people build mobile apps. Christina Melas-Kyriazi, a partner at Bain Capital Ventures, compared this era of app building to social media and Shopify, “where all of a sudden it was really easy to create content or to create a store online, and then we saw an explosion of small sellers,” she said. “Once I learned how to prompt and solve issues efficiently, building became much easier,” she said. Such personal apps may have bugs or critical security flaws — they can't just be sold as-is to the masses. But there is still significant potential in an era of personal app building, especially as AI and model reasoning, quality, and security become more sophisticated over time. The software engineer, Waugh, said he once built an app for a friend who had heart palpitations.
Another founder, Nick Simpson, told TechCrunch he was so bad at paying parking tickets — the consequence of San Francisco's tough parking availability — that he decided to build an app that would automatically pay them after scanning the ticket. As a registered Apple developer, he has the app in beta on TestFlight, and he said a bunch of his friends now want it, too. Nevertheless, Burge III believes that these types of apps can open “exhilarating opportunities” for businesses and creators to build “hyper-personalized situational experiences.” Etherington added to that, saying he believes a day is dawning when people stop subscribing to apps that charge monthly fees and instead just build their own apps for personal use. Melas-Kyriazi, meanwhile, expects people to use personal, fleeting apps the way spreadsheets like Google Sheets or Excel are used today. She had no technical experience and finished the web app in the same time it took her husband to go to dinner and back. Now, she said, they have two web apps, both built with Claude: one for allergies and sensitivities, and the other to keep tabs on chores around the house. She thinks vibe coding will bring “a lot of innovation and problem solving for communities that wouldn't have access otherwise,” and hopes to beta-test her allergy health app so she can one day release it to others.
The Pure Power 13 M 650W distinguishes itself with Platinum-level efficiency despite its Gold certification, exceptional voltage regulation, and outstanding ripple suppression. Elite capacitors raise minor concerns about longevity, though the 10-year warranty provides reassurance. The price sits high for a Gold certification but represents solid value when compared against Platinum-tier competitors. Be Quiet! maintains its reputation for engineering products that prioritize acoustic performance without compromising functionality. The German manufacturer's portfolio spans power supplies, cases, and thermal solutions, all designed with noise reduction as a fundamental principle. This focus has cultivated a loyal following among enthusiasts who refuse to accept unnecessary system noise as inevitable. The Pure Power 13 M is Be Quiet!'s latest effort to balance performance, efficiency, and value in the mid-range segment. This unit targets builders constructing systems where reliable power delivery and quiet operation matter more than bleeding-edge specifications. This 650W model provides adequate capacity for mainstream gaming configurations while maintaining headroom for transient load spikes, making it one of the best power supplies on the market. The Pure Power 13 M 650W arrives in sturdy cardboard packaging with an all-black aesthetic. Internal protection consists of a nylon pouch and basic paper inserts that secure the PSU during shipping. Mounting screws and an AC power cable constitute the entirety of the included accessories. A basic printed manual provides necessary installation guidance without excessive documentation. All cables feature uniform black coloring across connectors and wires. An unusual CPU power configuration provides one 4+4 pin EPS connector alongside a single 4-pin EPS connector, creating an asymmetric arrangement rarely seen in modern designs. Given the 120mm fan and 650W output, a more compact design could have been achievable with a slightly different internal layout, but Be Quiet! is primarily focused on optimal heat dissipation and would not let 20 mm of extra length get in the way of that. The external finish employs satin black chassis paint, applied with precision. Be Quiet!'s embossed logo appears on the right side panel, providing subtle branding without visual clutter. A removable parallel-wire fan guard sits above the intake, with a white decorative ring beneath creating modest visual interest. The front accommodates modular cable connectors with subtle white legends printed alongside each position. Be Quiet! clearly marks the 12V-2x6 connector's 450W limitation, preventing confusion about power delivery capabilities. With a total sustained output of just 650W, this unit is definitely not designed to power a top-tier graphics card. While rifle bearings represent an advancement over basic sleeve designs through enhanced lubrication and structural improvements, they typically exhibit shorter operational lifespans compared to fluid dynamic bearing or ball bearing alternatives. The platform employs established but modern topologies, emphasizing reliability and cost-effectiveness. Input filtering incorporates four Y capacitors, two X capacitors, and two filtering inductors at the AC receptacle entry point.
Two rectifying bridges occupy a dedicated heatsink immediately following the filtration stage, providing adequate thermal management for the rectification components. The APFC circuitry features two Toshiba TK20A60W MOSFETs and one diode on a substantial heatsink spanning the PCB edge. These Elite capacitors represent the first indicator of cost optimization in component selection. This configuration has become standard in modern mid-range units, offering good efficiency characteristics. Four MOSFETs generate the 12V rail through synchronous rectification, with small PCB-mounted heatsinks providing cooling. DC-to-DC conversion circuits on an additional daughterboard produce the 3.3V and 5V rails. Secondary-side capacitors consist primarily of Elite units, with just one Rubycon capacitor present. For the testing of PSUs, we are using high-precision electronic loads with a maximum power draw of 2700 Watts, a Rigol DS5042M 40 MHz oscilloscope, an Extech 380803 power analyzer, two high-precision UNI-T UT-325 digital thermometers, an Extech HD600 SPL meter, a self-designed hotbox, and various other bits and parts. Peak efficiency occurs near 50% load, reaching approximately 94% with 230 VAC input. These figures comfortably exceed 80Plus Platinum requirements and approach Titanium-level performance, making the Gold certification a curious understatement. The unit actually does have a Platinum certification from both Cybenetics and CLEAResult. Be Quiet! likely chose conservative marketing to position this unit below its Straight Power series, avoiding internal product cannibalization. This design choice prioritizes acoustic performance during typical operating conditions. Elevated ambient temperature testing reveals measurable but passable efficiency degradation. Even though the unit is technically rated for operation up to 40°C, it effortlessly delivers its full output while maintaining commendable performance levels. This thermal headroom demonstrates robust component selection and effective heatsink design. Despite earlier activation, fan speed remains subdued until load reaches approximately 90% of capacity. At this threshold, the thermal control circuit prioritizes reliability over acoustics, commanding maximum fan speed. This transition is very aggressive, suggesting that the unit is programmed to prioritize component protection over consistent acoustic performance when stressed. The internal temperatures remain relatively low even at maximum output, well below the point where over-temperature protection would engage. The electrical performance demonstrates competitive characteristics within its segment. Voltage regulation maintains tight tolerances, with the 12V rail exhibiting approximately 1% variance. This is fairly typical performance for Gold-certified units, and it is not bad compared to more premium Platinum-certified products either. The ripple figures are way below the ATX specification limits, demonstrating exceptional filtering capabilities. The 3.3V and 5V rails trigger OCP at 146% and 142% of maximum current respectively – a bit high, but not unnaturally so for a modern PSU. The 12V rail OCP activates at 120%, a bit tight for an ATX 3.1 unit. The OPP permits sustained operation up to 128% of nominal capacity before shutdown, offering substantial headroom for transient loads. FSP's platform delivers very solid performance through mature design choices rather than innovative approaches.
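To put the protection thresholds above into absolute numbers, here is a small sketch; the percentage trip points and the 650W rating come from the review, while the per-rail current ratings are assumed typical values for a unit of this class, not Be Quiet!'s published specifications.

```python
# Protection thresholds from the review, expressed as absolute figures.
# OCP/OPP percentages and the 650 W rating are from the text; the assumed
# rail current ratings below are hypothetical, typical-for-the-class values.
rated_watts = 650
opp_trigger_watts = 1.28 * rated_watts      # OPP at 128% of nominal -> ~832 W

assumed_rails = {
    # rail: (assumed rated amps, OCP trip point as a fraction of that rating)
    "12V":  (54.0, 1.20),
    "5V":   (20.0, 1.42),
    "3.3V": (20.0, 1.46),
}

print(f"OPP shutdown: ~{opp_trigger_watts:.0f} W sustained")
for rail, (amps, ocp) in assumed_rails.items():
    print(f"{rail}: OCP trips near {amps * ocp:.0f} A (assuming a {amps:.0f} A rating)")
```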
This conservative strategy ensures reliability while potentially limiting competitive differentiation beyond the core specifications. The electrical characteristics consistently exceed the Gold-level markings substantially, with efficiency and power quality reaching Platinum-tier performance. The ripple suppression achieves exemplary results, delivering cleaner power than many higher-certified competitors. While active components utilize quality silicon from reputable manufacturers, the reliance on Elite capacitors for bulk filtering raises questions about long-term stability. Elite is an established manufacturer and has been around for decades, but we rarely see its capacitors in top-tier units. Thermal and acoustic performance delivers great results, with a hint of compromise under heavy load. The semi-passive mode provides excellent silence during light loads, aligning with Be Quiet!'s low-noise ethos, but the abrupt jump to full fan speed near maximum load does not. This design choice favors reliability over consistent acoustic refinement, a reasonable engineering decision that nevertheless creates a slight contradiction with the company's quiet-focused branding. At approximately $100 retail, this Gold-certified unit commands a premium compared to competitors with identical certification. However, when evaluated against Platinum-certified alternatives with similar specifications, the value proposition becomes more compelling. The 10-year warranty, ATX 3.1 compliance, and genuine Platinum-level efficiency provide tangible benefits that justify the price premium for users prioritizing long-term reliability over initial cost savings. The Pure Power 13 M 650W targets a specific audience. Budget-focused builders seeking minimum cost for adequate Gold certification will find better value elsewhere. The ATX 3.1 compliance ensures compatibility with current and future mainstream graphics cards, providing partial protection against obsolescence. The only setback is the high retail price, but considering that it should be compared against Platinum-level products when it reaches the shopping cart, the Pure Power 13 M 650W is a great investment for those seeking a premium unit in this power range.
That's true to a certain degree; you can use a smart plug to add instant power control to any outlet, letting you turn the outlet on and off at your command from anywhere in your home (or even if you aren't there). The TV can now turn off on command, the lights will automatically flip on at 5 pm, and a simple coffee maker or appliance can essentially run itself if all it needs is power. If you're looking to control something simple that really only needs power sent to it for the full experience, then we've found some of the best smart plugs to do that for you. TP-Link's smart plugs have been my favorite for years, and the TP-Link Tapo Matter-Certified Smart Plug Mini (P125M) ($19, 3-pack) lets you skip getting an app and use Matter to directly connect it to your home hub of choice. Smart plugs are also great for outdoor use, and the Cync Outdoor Smart Plug ($19) is made for the outdoors and has two plugs built into it. Uncertain if a smart plug can solve your dumb device problems? Smart plugs get plugged into an outlet socket, and then you plug your device of choice (a lamp, a coffee maker, et cetera) into the plug to give you control over the power flow. The smart plug can connect to Wi-Fi and an app, along with your smart speaker if you have one, to let you control it with automated schedules, the dedicated app, or your voice. Controlling the power flow to a device can let you switch on lamps around your house at a certain time or turn them off without leaving your bed. I also really like outdoor-specific smart plugs for “dumb” outdoor lights and decorations (like my Santa Claus inflatable that hangs off my balcony), though I've now switched to permanent outdoor lights that have controls akin to a smart bulb. Maybe one day I'll go to bed on time. My electric tea kettle won't heat up until I choose how hot it should become, for example, so I can't use a smart plug to start my morning routine, as some people recommend. As mentioned above, the TV is another example that won't turn on when power is restored; I would still need to find the remote to turn it on and choose what I want to watch. We've tested many smart plugs over the years. The Tapo Smart Wi-Fi Plug Mini (TP15) has everything I'm looking for in a smart plug: a small form factor that doesn't block other outlets, Matter compatibility, and easy setup. The Matter aspect means you can skip getting the TP-Link app and set it up directly with home hubs like Google, Alexa, and Apple. Works with Apple Home, Amazon Alexa, Google Assistant, and Matter. If you need a smart plug made to withstand the elements, we like this one from Cync. I used it for controlling my outdoor Christmas decorations that aren't already smart (it's permanently attached to my inflatable Santa), while my smart string lights are plugged in next to it. If you want a smart plug you're certain will play well with Siri and Apple Home, Meross' plugs are my go-to. The MSS110 smart plug costs more than our other picks, but it's designed with Apple HomeKit in mind while also being compatible with Google and Alexa. It behaves like everything else; you're just paying extra for those HomeKit powers, and you will need an Apple HomePod, HomePod Mini, or Apple TV to act as your smart home hub. Works with Apple HomeKit, Google Assistant, Amazon Alexa, and Samsung SmartThings. TP-Link's Kasa line of mini smart plugs is a favorite at WIRED. If you use just one, it won't obstruct the second outlet at all.
WIRED editor Julian Chokkattu has also been using the larger version of these plugs, the HS103 ($14), for years on his lamps, Christmas lights, and fans with no issues. There's also the EP25 ($23) version, which offers energy monitoring. There are many smart plugs with similar features and designs, so choosing one might come down to price and brand preference. But if you aren't interested in mixing ecosystems and want to guarantee you'll never, ever need another app (which Matter plugs also guarantee), sticking to one brand's app works too. The new app is super simple to use and controls all your products. It works with Google Assistant and Amazon Alexa. Eve Energy Strip for $75: Eve's sleek black-and-silver casing will fit right in with your Apple aesthetic, and it also works with HomeKit. But it's very expensive and has only three outlets, despite its size. Hubspace Defiant Smart Indoor Plug for $10: This is part of Home Depot's smart home ecosystem, and it works fine, besides needing a little more effort to plug something into it. It also works with Google Assistant and Amazon Alexa. Roku Indoor Smart Plug SE for $9: Roku's smart-home ecosystem is made by Wyze, so it's the same product with some extra compatibility. If you have a smart speaker, you can use that to command your smart plug. Wyze Smart Plug for $29 (2-pack): This used to be my budget pick, but now that it's $15 a plug (in a two-pack for $30 total), it's been moved down to an honorable mention. Can You Control Smart Plugs Away From Home? Don't plug in anything that will still need to be turned on after power is established to the device. Personally, I prefer smart bulbs (see our guide to those here) since you'll get many more options, including control over the lamp's color and brightness, and can even sync some bulbs to music. Plugs are a better option if you have a lamp that won't work with standard light bulbs or if you just want simple on-and-off controls. Smart plugs aren't a super-complicated item, but they should integrate easily and quickly with your existing smart home ecosystem. My main test is always to determine how easy a plug is to set up and use in my everyday life, so I can tell you if it would be easy to use in yours. I test smart plugs by setting them up with both the associated app and via the smart home interoperability standard Matter, if available, to compare ease of setup. I also keep an eye out for whether it takes too much manual effort to plug something into the smart plug, if it isn't secure when plugged in, or if it blocks other outlets. Then I connect the smart plug with different voice assistants to check compatibility and response time, and use the plug in my everyday routines for about a week.
OpenAI has signaled its intentions to become a major player in brain-computer interfaces (BCIs). That's because Merge Labs, co-founded by Sam Altman, will be going forward with $252 million in its tech advancement war chest, reports Bloomberg. Another notable investor was Gabe Newell, co-founder of Valve, which owns the gaming storefront Steam. Newell's hat is already in this ring with his own brain tech company, Starfish Neuroscience. Altman's Merge Labs will be making ripples in Musk's Neuralink pond. However, their approaches to BCIs, as we currently understand them, are quite different. These differences will likely be pivotal to their relative successes. The limited amount of Merge Labs' currently public materials confirms that the fledgling BCI outfit will be developing fundamentally new approaches to this technology. “We believe this requires increasing the bandwidth and brain coverage of BCIs by several orders of magnitude while making them much less invasive,” explains a blog penned by the freshly uncloaked firm. “To make this happen, we're developing entirely new technologies that connect with neurons using molecules instead of electrodes, transmit and receive information using deep-reaching modalities like ultrasound, and avoid implants into brain tissue.” Merge Labs also says it will adopt the most recent breakthroughs in biotechnology, hardware, neuroscience, and computing. The key will be whether the firm's technology can achieve workable results from “AI operating systems that can interpret intent, adapt to individuals, and operate reliably with limited and noisy signals.” Meanwhile, Neuralink is pretty deep into testing its BCIs with humans, as are various Chinese competitors.
Just like RAM, expect them to be made of unobtainium in short order. A keen-eyed Reddit user browsing for high-capacity SSDs was struck by the following thought: with the AI-infused silicon shortages, we've reached the point where NVMe gumstick-style SSDs are more expensive than gold by weight. The thread generated discussion aplenty, so we figured we'd dig into this and look up pricing and weight across a range of models. We compiled multiple searches from Newegg, Microcenter, Best Buy, and Walmart, collecting over a hundred sample points. The requirements were: NVMe SSDs on a PCIe 4.0 or 5.0 interface, with four terabytes of capacity, sold by the store itself, and in stock. The selection excluded enterprise drives, as those would quickly throw off the math, plus everyone knows they are priced like antimatter anyway. An eyeball look says that dual-sided, higher-capacity drives don't weigh appreciably more. Needless to say, only models without heatsinks were considered. Gold is currently sitting at a shiny $148 per gram, so even picking the lower boundary of SSD weight at 8 grams, that makes your average SSD worth around $1,184 in gold terms. The average price for an 8 TB consumer drive is around $1,476, and far higher than that if you want a performance unit instead of just a mass-storage model. So yes, at 8 TB, solid-state drives are indeed pricier than gold. Even 4 TB drives aren't immune to high pricing, with a portion of models now also hitting prices close to their weight's equivalent in gold. Interestingly, there's definitely a strong divide between the higher-priced units and the rest of the pack. A cursory observation would state that WD is pricing itself out of the market, but it may be that WD drives are in high demand, and new stock is coming in at much higher prices. Anyone who's been shopping for SSDs recently has certainly noticed a rising trend overall, and PCPartPicker's price tracker illustrates this. However, our guess is that those offerings are all but guaranteed to dry up really quickly, so if you're on the fence about buying one of them, it's best to pull the trigger right away on one of the best SSDs. And even mass-storage units around $500 won't last long, either.
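The comparison boils down to a couple of multiplications, reproduced below; the gold price, the 8-gram lower-bound weight, and the average 8 TB drive price are the figures quoted above, and everything else is plain arithmetic.

```python
# Reproducing the article's gold-versus-SSD comparison with its own figures.
gold_usd_per_gram = 148        # quoted spot price
ssd_weight_grams = 8           # lower-bound drive weight used in the article
avg_8tb_price_usd = 1476       # average consumer 8 TB NVMe price

gold_equivalent = gold_usd_per_gram * ssd_weight_grams   # $1,184
print(f"8 g of gold: ${gold_equivalent:,}")
print(f"Average 8 TB drive: ${avg_8tb_price_usd:,}")
print(f"The drive costs {avg_8tb_price_usd / gold_equivalent:.2f}x its weight in gold")
```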
There is a current "show your personal site" post on top of HN [1] with 1500+ comments. I wonder how many of those sites are or will be hammered by AI bots in the next few days to steal/scrape content. If this can be used as a temporary guard against AI bots, that would have been a good opportunity to test it out. 1. They don't respect robots.txt, they don't care about your sitemap, they don't bother caching, just mindlessly churning away, effectively a DDoS. Google at least played nice. And so that is why things like Anubis exist, why people flock to Cloudflare and all the other tried and true methods to block bots. My site is hosted on Cloudflare and I trust its protection way more than a flavor-of-the-month method. This probably won't be patched anytime soon, but I'd rather have some people click my link and not just avoid it along with AI because it looks fishy :) There have been several amplification attacks using various protocols for DDoS too... Still, I think it would be interesting to know if anybody noticed a visible spike in bot traffic (especially AI) after sharing their site info in that thread. Unless you mean DDoS protection, this one helps for sure. I agree; my tinfoil-hat signal told me this was the perfect way to ask people for bespoke, hand-crafted content, which of course AI will love to slurp up to keep feeding the bear. I shortened a link and when trying to access it in Chrome I get a red screen with this message: "Dangerous site. Attackers on the site you tried visiting might trick you into installing software or revealing things like your passwords, phone, or credit card numbers." Deceptive site issue: "This web page at [...] has been reported as a deceptive site and has been blocked based on your security preferences." What's going on? I can't find any setting to disable this. But what I'd like to understand is why there are so many of the same thing. I have home made url shorteners in go, rust, java, python, php, elixir, typescript, etc. because I'm trying the language and this kind of project touches on many things: web, databases, custom logic, how and what design patterns can I apply using as much of the language as I can to build the thing. I'm not criticising the author or anyone who came before. I'm trying to understand the impetus behind redoing a joke that isn't yours. You don't learn anything new by redoing the exact same gag that you wouldn't learn by being even slightly original or making the project truly useful. Ideas are a dime a dozen. You could make e.g.
a Fonzie URL shortener (different lengths of “ayyyyy”), or an interstellar one (each is the name of a space object), or a binary one (all ones and zeroes)… Each of those would take about the same effort and teach you the same, but they're also different enough that they would make some people remember them, maybe even look at the author and their other projects, instead of just “oh, another one of these, close”. Plus, I don't think I've seen another of these which is exactly like this (just extremely close in concept), so the argument doesn't hold. You may want to learn about design and novelty. Edit: I see references to shadyurl in the comments, and I had heard of that, but probably wouldn't have thought of it. URL shorteners are still one of the most popular system design questions; building this project is a great way to get some experience with and understanding of it, for example. But when the same joke is overdone, it's no longer funny. > building this project is a great way to have some experience / understanding of it https://news.ycombinator.com/item?id=46632329 Giving the author the benefit of the doubt, they may have not seen it before, or were bored and just wanted to make a toy. And it seems like many on HN are in a similar enough boat to me to have upvoted it to trending, so at least some people found it entertaining, so it fulfilled its purpose I suppose. It's a good question though, and I don't think anyone really knows the answer. I know people have fond memories of long ago when they thought surely some big company's URL shortener would never be taken down and learned from that when it later was. For links to that domain, they would still transform them with lnks.gd, even though: 1) The emails would be very long and flashy, so they're clearly not economizing on space. 2) The "shortened" URL was usually longer! 3) That domain doesn't let you go straight to the root and check where the transformed URL is going. It's training users to do the very things that expose them to scammers! I use them in tests, just for fun: https://github.com/ClickHouse/ClickHouse/blob/master/tests/q... Funnily enough the domains appear to have been bought up and are now genuinely shady. While this seems like it would make it harder for them, I wouldn't be surprised if scammers eventually try to abuse this service too, and I have no doubt that people would happily click these if they found them in a phishing email; that said, I give the folks behind this a lot of credit for having a way to contact them and report links if that happens. And apart from that, I would indeed consider DNS records a database.
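For anyone wondering why this keeps getting rebuilt as a learning project, the core of the "system design classic" the commenters mention really is tiny: a slug-to-URL lookup plus a redirect. The sketch below is a generic toy in Python with Flask and an in-memory dict standing in for a real database; it is not the submitted project's code, and the endpoint names are made up for illustration.

```python
# Minimal toy URL shortener: slug -> URL mapping plus an HTTP redirect.
# In-memory dict instead of a database; purely illustrative.
import secrets
from flask import Flask, abort, redirect, request

app = Flask(__name__)
links: dict[str, str] = {}

@app.post("/shorten")
def shorten():
    url = request.form["url"]
    slug = secrets.token_urlsafe(4)      # short random identifier
    links[slug] = url
    return f"{request.host_url}{slug}\n"

@app.get("/<slug>")
def follow(slug: str):
    target = links.get(slug)
    if target is None:
        abort(404)
    return redirect(target, code=302)

if __name__ == "__main__":
    app.run(port=8000)
```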
Msn.com, Office.com, Sharepoint.com, Hotmail.com, etc., plus all the subdomains they insert before them. I honestly don't mind too much, since it's a once-a-year thing (Hacktober), and companies should be trying to catch out employees who click any and all links. Eventually we got asked to please make it stop.

https://c1ic.link/bzSBpN_login_page_2
Edit: Chrome on Android warned me not to visit the site! Was not expecting that, so I changed all the "." to "DOT" so I don't get punished for posting a spammy link, despite this literally being a website to make links as spammy and creepy as possible.
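To make the joke concrete: the service being discussed does roughly the opposite of a normal shortener, wrapping a clean URL in a deliberately suspicious-looking slug. A toy sketch of that idea, with an invented word list and domain (not taken from the actual site), might look like this:

```python
# Illustrative only: generate a deliberately shady-looking link for a clean URL.
# The word list, slug format, and domain are invented for this sketch.
import secrets

SHADY_WORDS = ["free_crypto", "login_verify", "hot_singles", "keygen",
               "totally_legit", "password_reset", "win_big", "urgent_invoice"]

def shadify(url: str, domain: str = "example-shady.link") -> str:
    """Map a real URL to a creepy-looking short link (a real service would persist the mapping and redirect)."""
    slug = f"{secrets.token_urlsafe(4)}_{secrets.choice(SHADY_WORDS)}_{secrets.randbelow(10)}"
    return f"https://{domain}/{slug}"

print(shadify("https://example.com/my-portfolio"))
```

The redirect side is identical to an ordinary shortener; only the slug generation is played for laughs, which is also why browsers' deceptive-site heuristics flag the results.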
Agents can call agents, and each agent can be designed with whatever model params make sense for your task. Your root agent can bring in other agents as needed, and we create a typesafe way for you to define the interfaces between those agents. Additionally, each step of the chain gets automatic evals, which we call graders. A grader is another deck type… but it's designed to evaluate and score conversations (or individual conversation turns). We also have test agents you can define on a deck-by-deck basis, which are designed to mimic scenarios your agent would face and generate synthetic data for either humans or graders to grade.

Prior to Gambit, we had built an LLM-based video editor, and we weren't happy with the results, which is what brought us down this path of improving inference-time LLM quality. We know it's missing some obvious parts, but we wanted to get this out there to see how it could help people or start conversations. We're really happy with how it's working with some of our early design partners, and we think it's a way to implement a lot of interesting applications:
- Truly open source agents and assistants, where logic, code, and prompts can be easily shared with the community.
- Rubric-based grading to guarantee you (for instance) don't leak PII accidentally.
- Spinning up a usable bot in minutes and having Codex or Claude Code use our command line runner / graders to build a first version that is pretty good with very little human intervention.
We'll be around if y'all have any questions or thoughts.
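The per-turn grading idea described above is easy to sketch generically. The following is not Gambit's API; every class, function, and rubric here is invented for illustration. It just shows the pattern of attaching a rubric-based grader to each agent turn, for example to flag possible PII leakage before a reply goes out.

```python
# Illustrative only: a generic "grade every agent turn against a rubric" pattern.
# None of these names come from Gambit; they are invented for this sketch.
import re
from dataclasses import dataclass
from typing import Callable

@dataclass
class GradeResult:
    passed: bool
    score: float
    notes: str

# A rubric here is just a named check over the text of one conversation turn.
Rubric = Callable[[str], GradeResult]

def no_pii_rubric(turn_text: str) -> GradeResult:
    """Very rough PII check: flag anything that looks like an email or card number."""
    email = re.search(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", turn_text)
    card = re.search(r"\b(?:\d[ -]?){13,16}\b", turn_text)
    leaked = bool(email or card)
    return GradeResult(passed=not leaked,
                       score=0.0 if leaked else 1.0,
                       notes="possible PII found" if leaked else "clean")

def run_turn(agent: Callable[[str], str], user_msg: str, rubrics: list[Rubric]) -> str:
    """Run one agent turn, then grade the reply with every rubric before returning it."""
    reply = agent(user_msg)
    for rubric in rubrics:
        result = rubric(reply)
        if not result.passed:
            # A real harness might retry, redact, or escalate instead of withholding.
            return "[response withheld: failed grader check: " + result.notes + "]"
    return reply

# Toy agent standing in for an LLM call.
echo_agent = lambda msg: f"You said: {msg}. Reach me at alice@example.com!"
print(run_turn(echo_agent, "hello", [no_pii_rubric]))
```

The point of the pattern is that the grader sits outside the agent, so the same rubric can score live traffic, test-agent conversations, or synthetic data alike.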
I look at Gambit as more of an "agent harness", meaning you're building agents that can decide what to do, more than you're orchestrating pipelines. Basically, if we're successful, you should be able to chain agents together to accomplish things extremely simply (using markdown). Mastra, as far as I'm aware, is focused on helping people use programming languages (TypeScript) to build pipelines and workflows. So yes, it's an alternative, but more like an alternative approach rather than a direct competitor, if that makes sense.

Basically, any time you find yourself instructing an LLM to follow a certain recipe, just break it down into multiple agents and do what you can with code.
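As a concrete, purely illustrative reading of that last suggestion (this is not Gambit code, and call_llm below is a stand-in, not a real library function): instead of one prompt that says "summarize, then translate, then format as bullets," each step becomes its own narrowly instructed agent call, and plain code handles the glue.

```python
# Illustrative only: splitting one "follow this recipe" prompt into small, focused agent calls.
# The LLM client is a stand-in lambda here; swap in whatever model client you actually use.
from typing import Callable

LLM = Callable[[str], str]

def make_agent(llm: LLM, instruction: str) -> Callable[[str], str]:
    """Each agent gets one narrow instruction instead of the whole recipe."""
    return lambda text: llm(f"{instruction}\n\n{text}")

def pipeline(llm: LLM, document: str) -> str:
    summarize = make_agent(llm, "Summarize the following text in three sentences.")
    translate = make_agent(llm, "Translate the following text into French.")
    bulletize = make_agent(llm, "Rewrite the following text as short bullet points.")
    # Plain code does the orchestration; each model call stays simple and easy to grade in isolation.
    return bulletize(translate(summarize(document)))

if __name__ == "__main__":
    fake_llm: LLM = lambda prompt: f"<model output for: {prompt.splitlines()[0]}>"
    print(pipeline(fake_llm, "Some long document..."))
```

Keeping each step this small is also what makes per-step graders practical, since every agent's output can be checked against a rubric before the next agent sees it.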