AI-native psychiatry, built for scale and covered by insurance

We're building the AI-native operations layer for psychiatric care. Not diagnostics, but what happens outside the visit—the real operational backend: scheduling, documentation, billing, intake, risk detection, and more. If you want to build core infrastructure with real AI, own systems end-to-end, and work directly with a deeply technical founder still up to his neck in the code—read on.

Hey—I'm Daniel, co-founder and CTO of Legion Health (YC S21). Mental health care is operationally broken—patients ghosted, clinicians buried in forms, payers chasing missing notes. The industry is flooded with AI startups trying to automate away diagnosis, or even providers as a whole. We're building a real-time, AI-powered backend for mental health clinics—LLM agents plus structured systems that coordinate human care like it's software. Our agent infrastructure supports over 2,000 patients with only one support human.

Unlike most AI startups, we are our own customer. The systems you build directly impact our clinicians and patients today. You're not joining an idea-stage pipe dream or a late-stage dinosaur. We move fast, shipping multiple meaningful features straight to real patients and providers every single day. You'll own entire domains end-to-end—architecture, implementation, iteration—not just JIRA tickets.

This is the frontier of applied AI in healthcare. You'll help answer questions like: What does reliable agent infrastructure look like in production? What's the role of structured data in a world with LLMs? How do we make agents auditable, evolvable, and fast?

We're looking for founding technologists who can think in systems and ship fast, across PHI security, audit pipelines, real-time schedulers, and transcript parsing. The next 20 years of mental health care can still be shaped by a few engineers with systems taste, speed, and conviction.
You'll shape core systems and help decide what we build next. If you've ever said, “I wish I could've been there when [insert legendary product] was getting built,” this is that moment. If this resonates, I want to work with you. Let's build the founding systems of AI-native mental healthcare—and make something people didn't think was possible. We're rebuilding psychiatry from the ground up—AI-native, insurance-covered, and engineered to scale. Legion Health is a high-quality psychiatry network where patients get care from licensed clinicians, fast and affordably.
When you purchase through links on our site, we may earn an affiliate commission.

Last month, a team of Google security researchers released Zentool, a tool that can modify the microcode of AMD processors based on the Zen microarchitecture. While this is a security vulnerability, for some it is an opportunity: members of the Chinese Jiachen Project are running a contest with the aim of developing microcode for AMD's modern Zen-based CPUs that makes them execute RISC-V programs natively.

CPU microcode is a low-level layer that translates complex x86 CISC instructions into the simple RISC-like internal instructions the CPU hardware actually executes. Internally, modern x86 cores rely on proprietary engines running a reduced instruction set computer (RISC) ISA to handle complicated instructions. These internal RISC ISAs are not documented, but they should generally be similar to well-known RISC ISAs, such as Arm or RISC-V. Microcode is only supposed to be modifiable by the CPU vendor, but sometimes this is not the case: apparently some parts of AMD's Zen 1/2/3/4 microcode can be changed using Zentool.

The Jiachen Project members want to find someone who can modify AMD's Zen CPU microcode on a modern processor — say, an EPYC 9004-series — to execute RISC-V binaries. The patch is expected to either enable direct execution of RISC-V programs or significantly boost their runtime speed compared to emulation on the same hardware. The work must be tested using RISC-V versions of benchmarks like Coremark or Dhrystone. A complete submission includes binaries or source code, configuration files, dependencies, and test instructions. If only binaries are submitted before the deadline on June 6, identical source code must be added via pull request later.

AMD's EPYC 9004-series and similar processors offer performance and core counts not achievable on currently available RISC-V-based processors, so executing proprietary RISC-V programs on EPYCs is a plausible idea.
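The translation described above — one complex CISC instruction expanding into a short sequence of RISC-like micro-ops — can be pictured with a toy sketch. Everything here (the instruction names, the micro-op syntax, the lookup-table model) is a made-up simplification; real Zen microcode is proprietary and undocumented.

```python
# Conceptual illustration only: real microcode is nothing like a Python dict.
# The sketch just shows the idea that one complex CISC instruction expands
# into several simple RISC-like micro-ops, while RISC-like instructions
# map almost one-to-one.
MICROCODE_ROM = {
    # An x86-style `ADD [mem], reg` is read-modify-write: load, add, store.
    "add_mem_reg": ["load tmp, [addr]", "add tmp, tmp, reg", "store [addr], tmp"],
    # A register-to-register add is already RISC-like: a single micro-op.
    "add_reg_reg": ["add dst, dst, src"],
}

def decode(instruction):
    """Expand a (hypothetical) CISC instruction into its micro-op sequence."""
    return MICROCODE_ROM[instruction]
```

The contest essentially asks whether the writable portion of this lookup layer is large and flexible enough to host an entirely different front-end ISA, which is what the skeptics quoted below doubt.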
However, microcode is designed to fix internal bugs rather than replace the front-end ISA entirely, and it is unclear whether the microcode can be completely rewritten, people over at Y Combinator's Hacker News noted. Back in the mid-2010s, AMD planned to offer both x86-64 and Armv8-A Zen CPUs (something recently recalled by Mike Clark, AMD's chief architect), so it is highly likely that there was microcode for the Zen 1 microarchitecture that supported an AArch64 front-end ISA. If so, Zen 1 CPUs could have featured multiple microcode 'slots,' one supporting x86-64 and another AArch64. AMD almost certainly never developed microcode that supports AArch64 or RISC-V for Zen 2/3/4 processors, so the microcode layer of these CPUs is strictly x86-64, and there is hardly enough microcode space to rewrite it from scratch.

"This is not achievable," one commenter named Monocasa wrote. "There is not enough rewritable microcode to do this even as a super slow hack. And even if all of the microcode were rewritable, microcode is kind of a fallback pathway on modern x86 cores with the fast path being hardwired decode for x86 instructions."

Another commenter criticized the contest format, suggesting it is a way to get complex work done for less than $3,000 in pay.

Tom's Hardware is part of Future US Inc, an international media group and leading digital publisher.
Just one day after TechCrunch revealed that Jeff Bezos is backing a secretive EV startup called Slate Auto, an early version of the company's low-cost electric pickup truck was spotted in the wild. There are no dramatic flourishes, and that's by design. As TechCrunch exclusively reported earlier this week, the young company is attempting to build a business around the truck that involves selling it at a low price of around $25,000 and upselling customers on a wide variety of customization options and accessories. The company's branding includes the tagline fragment "YOU MAKE IT," and its job listings mention something that sounds like a customization program called "Slate University." It is unclear if the company has settled on a name for the truck, despite being a few weeks away from coming out of stealth. The company did not immediately respond to a request for comment.
It's also going to be a little alarming, because that's just more data someone can access if they get your login or otherwise access your account. Both Google and Anthropic already provide similar features for paying customers; in this case OpenAI is just catching up with competitors.

Until today, if you were having a conversation with a GPT model, what you said stayed in that chat. If you told it you love the color yellow, it couldn't remember that when you opened a new chat and asked it about your favorite color. The only way around that was to toggle on a “Reference saved memories” button and then tell ChatGPT not to forget you love the color yellow. Starting today, if you're a ChatGPT Plus or Pro user, you can click a whole new toggle in the same preferences window to “Reference chat history.” Then you can just talk to ChatGPT like usual, and it will be able to remember and reference those conversations in future conversations.

I've been using this ability in Anthropic's Claude and Google's Gemini for a while, and I've always been thoroughly pleased with how magical it feels to just have the AI remember all the little things about me–particularly as I was blessed with the family curse of a goldfish memory. When I had a whole conversation with Claude about some new professional plans and then opened a new chat to start brainstorming, it immediately knew why I was brainstorming and was quick to help. But I'd love to see finer control over how these AIs access conversations. Right now ChatGPT offers temporary windows that don't save any of your conversations–essentially the AI version of an incognito browser window. There's also the constant, inescapable feeling that you're giving the AI a lot of details about yourself and your life that you probably shouldn't. Like when I first used Gmail in the 2000s to email saucy fanfiction, or when I uploaded every single one of the photos on my phone to the cloud. Will that potentially bite me in the ass later?
When I asked an instance of ChatGPT 4o, it said, “Honestly, that totally depends on what you're hoping to get out of this.” Nice to see we're on the same page.

This new feature will be available to ChatGPT Plus and Pro users starting today, except in the UK, EU, Iceland, Liechtenstein, Norway, and Switzerland. (Not surprising, given how much more the EU cares about data retention policies.) The new and improved memory will roll out to Enterprise, Edu, and Team users at a later date. To see if you've received it, keep your eyes peeled for a popup in ChatGPT titled “Introducing new, improved memory,” or check in Settings under Personalization for the new toggle.
This article is part of Gizmodo Deals, produced separately from the editorial team. We may earn a commission when you buy through links on the site.

Among our top picks for gadgets under $30, these Occer compact binoculars are a necessity for anyone who wishes to enjoy the great outdoors or simply get some quality time with friends and family. Currently listed on Amazon for $35, these binoculars become an even better deal with a 10% coupon that brings the price down further. The Occer binoculars are designed to deliver robust performance in a lightweight, ultra-portable package, which makes them ideal for activities like bird watching, sporting events, or sightseeing. The eye relief is long, and the large 15mm eyepieces offer easy viewing comfort, even with sunglasses or eyeglasses on; for those not wearing glasses, raising the eyecups provides a sharper image. What's more, these binoculars are extremely light and convenient to carry, easily fitting in a pocket or handbag. The ABS plastic casing is reinforced with rubber armor for grip and durability. While not completely waterproof, the binoculars are resistant to light splashes and moisture and can be used in most outdoor environments. Whether you're an avid birdwatcher or simply looking for a fun gadget for outdoor adventures, these binoculars won't disappoint.
The cuts that Elon Musk's Department of Government Efficiency made at the National Highway Traffic Safety Administration in February “disproportionately affected” employees working on vehicle automation safety, according to The Financial Times. That division was formed in 2023 and therefore included a number of staffers who were still in their initial probationary hiring period, which could have led to their firings, according to the report. About 30 people total were let go. The cuts came just a few months ahead of Tesla launching its first-ever robotaxi service in Austin. Musk has claimed that his company will launch similar services in California and potentially other states by the end of the year — the latest in a long line of as-yet-unfulfilled promises about automated vehicle technology that the world's richest man has made.
Researchers submitted a conceptual design report for the detector to the preprint server arXiv on March 26, where it's now hosted. “MATHUSLA” is an incredibly forced acronym for the MAssive Timing Hodoscope for Ultra-Stable neutraL pArticles, a reference to Methuselah, a biblical figure who lived nearly 1,000 years. The name fits because the hodoscope would seek out especially long-lived particles produced in the Large Hadron Collider, which have so far escaped detection amid the collider's subatomic fireworks show.

The LHC achieved one of its main goals over a decade ago, with the observation of a Higgs boson in 2012. Since then, particle physicists have been pondering how the gigantic, costly collider can yield further insights into the fundamental building blocks and interactions of our universe. The LHC is set to be upgraded into the High-Luminosity LHC (HL-LHC), which will increase the facility's luminosity by a factor of ten and increase the number of Higgs bosons CERN physicists will be able to study. That upgrade is expected to be completed by 2029, and MATHUSLA is proposed to work alongside the improved version of the world-famous collider.

The fundamental design of the detector is this: a massive box, 131 feet (40 m) on each side and 36 feet (11 m) tall, filled with detectors that would sniff out long-lived particles that elude the LHC's main detectors. Yet it's designed to be cost-effective: smaller than earlier proposals, but still big enough to be game-changing. Physicists hope to have MATHUSLA ready to ride alongside the HL-LHC, which is slated to begin full-throttle operations in the 2030s. As long as the detector's name doesn't become a punchline for the time it takes to become a reality, there's a new opportunity for physicists to find physics at the brink of our current understanding.
Manufacturers are trying to stay one step ahead of impending market changes. Reports from multiple market-data firms indicate that PC shipments increased globally during Q1 2025. Data shared by Canalys shows that combined shipments of desktops, laptops, and workstations reached 62.7 million units for the quarter, a year-over-year increase of 9.4%. It's important to note that each source calculates percentage increases using different metrics, but the number of units reported is almost identical. In an effort to get ahead of impending market challenges, many mainstream manufacturers are moving as much stock as possible to avoid tariff price increases for themselves and their customers. As such, this is not likely a long-term trend but rather a fluctuation in response to the current political climate.

Canalys also broke the data down by PC type: notebook and mobile workstation shipments surged 10% to 49.4 million units. We're not entirely sure how sales will shake out throughout 2025, but many factors will surely cause a stir. We'll keep an eye out for significant changes each quarter, so check back regularly to see how both tariffs and Windows 10 End of Support are playing out for corporate entities and end users alike.

Ash Hill is a contributing writer for Tom's Hardware with a wealth of experience in hobby electronics, 3D printing, and PCs. She manages the Pi projects of the month and much of our daily Raspberry Pi reporting while also finding the best coupons and deals on tech.
Among the most notable additions is an “Online” indicator at the top of a group chat that shows how many people are currently around to chat. Another feature offers the option to scan and send documents on an iPhone. iPhone users can now also set WhatsApp as their default messaging and calling app, and pinch to zoom in during video calls. The app's improved bandwidth detection should also allow for more HD-quality video calls. While WhatsApp has allowed users to create events in group chats for some time now, it's now rolling out the ability to create an event in 1:1 conversations. Plus, the events feature is getting updated with the ability to RSVP as “maybe,” invite a plus one, add an end date and time, and pin events so they're easier to find. Channels are also getting updates: admins can now record and share short videos with followers and share unique QR codes that link directly to their channel.
What kind of latency/throughput are people getting from R2? Does it benefit from parallelism in the same way S3 does? [0]
[0]: https://developers.cloudflare.com/r2/pricing/#class-a-operat...

Not sure about now, but upload speeds were very inconsistent when we tested it a year or so ago.

I tried looking for that thread again and I only found the exact opposite comment from the Cloudflare founder: "Not abuse. Thanks for being a customer. Bandwidth at scale is effectively free." [0] I distinctly remember such a thread though. Edit: I did find these but neither are what I remember: https://news.ycombinator.com/item?id=42263554 and https://news.ycombinator.com/item?id=33337183. [0] https://news.ycombinator.com/item?id=38124676
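For anyone wanting to answer the parallelism question empirically, a minimal harness like the following can compare serial against concurrent fetches. The bucket, keys, and endpoint in the commented lines are placeholders; R2 speaks the S3 API, so any S3-compatible client (e.g. boto3) can be plugged in as `fetch`.

```python
# Minimal throughput harness. `fetch(key)` is any callable returning the
# object bytes; swap in a real S3/R2 client to measure a live endpoint.
import time
from concurrent.futures import ThreadPoolExecutor

def measure_throughput(fetch, keys, workers=1):
    """Fetch every key with `workers` concurrent threads.
    Returns (total_bytes, elapsed_seconds)."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        sizes = [len(body) for body in pool.map(fetch, keys)]
    return sum(sizes), time.perf_counter() - start

# Against R2 with boto3 (endpoint and bucket names are placeholders):
# s3 = boto3.client("s3", endpoint_url="https://<account_id>.r2.cloudflarestorage.com")
# fetch = lambda key: s3.get_object(Bucket="my-bucket", Key=key)["Body"].read()
# print(measure_throughput(fetch, keys, workers=1))
# print(measure_throughput(fetch, keys, workers=16))
```

Comparing the `workers=1` and `workers=16` runs on the same key set shows directly whether the endpoint rewards parallel requests the way S3 typically does.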
AI labs like OpenAI claim that their so-called “reasoning” AI models, which can “think” through problems step by step, are more capable than their non-reasoning counterparts in specific domains, such as physics. According to data from Artificial Analysis, a third-party AI testing outfit, it costs $2,767.05 to evaluate OpenAI's o1 reasoning model across a suite of seven popular AI benchmarks: MMLU-Pro, GPQA Diamond, Humanity's Last Exam, LiveCodeBench, SciCode, AIME 2024, and MATH-500. Benchmarking Anthropic's recent Claude 3.7 Sonnet, a “hybrid” reasoning model, on the same set of tests cost $1,485.35, while testing OpenAI's o3-mini-high cost $344.59, per Artificial Analysis. All told, Artificial Analysis has spent roughly $5,200 evaluating around a dozen reasoning models, more than twice the $2,400 the firm spent analyzing over 80 non-reasoning models. Artificial Analysis co-founder George Cameron told TechCrunch that the organization plans to increase its benchmarking spend as more AI labs develop reasoning models. “At Artificial Analysis, we run hundreds of evaluations monthly and devote a significant budget to these,” Cameron said. Artificial Analysis isn't the only outfit of its kind that's dealing with rising AI benchmarking costs. Ross Taylor, the CEO of AI startup General Reasoning, said he recently spent $580 evaluating Claude 3.7 Sonnet on around 3,700 unique prompts. The vast majority of AI companies charge for model usage by the token, so you can see how this cost can add up. Modern benchmarks also tend to elicit a lot of tokens from models because they contain questions involving complex, multi-step tasks, according to Jean-Stanislas Denain, a senior researcher at Epoch AI, which develops its own model benchmarks. “[Today's] benchmarks are more complex [even though] the number of questions per benchmark has overall decreased,” Denain told TechCrunch.
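The per-token billing point is easy to see with a back-of-the-envelope sketch. All prices and token counts below are illustrative assumptions, not published figures for any real model:

```python
# Why reasoning models cost more to benchmark: billing is per token, and
# chain-of-thought inflates output tokens. All numbers are hypothetical.
def eval_cost(n_questions, in_tokens, out_tokens, in_price, out_price):
    """Benchmark cost in dollars; prices are per million tokens."""
    return n_questions * (in_tokens * in_price + out_tokens * out_price) / 1_000_000

# A terse model answering 3,700 prompts with ~500 output tokens each...
print(eval_cost(3700, 400, 500, 3.0, 15.0))    # ~$32
# ...versus a reasoning model emitting ~8,000 tokens of chain of thought.
print(eval_cost(3700, 400, 8000, 3.0, 15.0))   # ~$448
```

With identical per-token prices, the long-winded model costs roughly an order of magnitude more to run over the same benchmark, which is the dynamic the evaluators describe.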
“They often attempt to evaluate models' ability to do real-world tasks, such as write and execute code, browse the internet, and use computers.” “[S]ince models have gotten better over time, it's still true that the cost to reach a given level of performance has greatly decreased over time,” Denain said. Many AI labs, including OpenAI, give benchmarking organizations free or subsidized access to their models for testing purposes.
At the last minute, the Social Security Administration has announced it won't be cutting phone services for seniors—a policy it had previously claimed would go into effect on Monday. When reached for comment by Gizmodo, White House spokesperson Liz Huston said the following: “President Trump has repeatedly promised to protect social security and uproot waste, fraud and abuse across the federal government. Under President Trump's leadership, the Social Security Administration is taking bold steps to transform how they serve the public – improving frontline customer service, modernizing their technology, protecting beneficiaries and securing the integrity of their programs.” People who are flagged by the agency's new anti-fraud system will still be required to undergo an in-person ID-proofing check. There will be no disruptions to service, the government claimed.

The announcement earlier this year that SSA would nix its phone operations spurred much public outcry, as it would have potentially forced millions of seniors to visit dwindling field offices to collect retirement benefits. The agency subsequently backtracked slightly, claiming it would maintain phone services for retirees with disabilities. The chaos at the agency, as well as its recent unpopular policy shifts, has largely been blamed on Elon Musk's Department of Government Efficiency. Earlier this year, DOGE announced lease terminations for dozens of SSA field offices across the country. Those closures, paired with the agency's attempt to nix phone services, could have seriously hampered retirees' ability to get in-person help with their benefits. Critics maintain that changes ushered in under DOGE still pose a threat to the retirement system's integrity. Indeed, according to recent reports from the Washington Post, DOGE has sought major layoffs at the SSA, which already has a historically small workforce. And last week, many retirees were wrongly informed they would no longer receive benefits.
DOGE has also announced other unpopular initiatives, such as its mission to rewrite the SSA's “entire codebase” in a matter of months—a move that critics worry could lead to serious digital dysfunction.
Someone else's buyer's remorse can save you hundreds. A few Redditors have shared how they were able to score RTX 50-series GPUs at a significant discount despite Nvidia's declared shortage, massive scalping (even at the system-integrator level), and inflated prices. You may think that this was a one-off event, but another Redditor shared their Walmart experience: when they went to their local store, they found a completely sealed PNY RTX 5070 in the PC components cabinet, again sold as a returned product. Walmart has a 30-day return policy for most electronics, so these GPUs likely haven't been used much, if at all. Some employees say that these graphics cards are sold online only, and a few people would simply walk into their stores and return the items; whether the GPUs have been opened or not, the stores must mark them down and put them in the returns section. It could be that the original buyer had buyer's remorse and didn't actually need a new GPU, or somebody bought one as a gift for a person who already has a better card installed.

If you happen to be near a Walmart, searching the returns section for bargains like this is one way to score a deal on such a “rare” resource. Of course, there's no guarantee such marked-down products will be available at your local store, so it's down to luck. Furthermore, buying returns and open-box items comes with a few downsides. They don't carry a store warranty, so you'll have to deal directly with the AIB partner if you run into trouble. And there's an off chance that someone swapped the card inside the box, leaving you scammed instead.

Jowi Morales is a tech enthusiast with years of experience working in the industry.
The U.S. government has pulled back from its plan to block exports of Nvidia's H20 HGX GPUs to China, following a meeting between U.S. President Donald Trump and Nvidia chief executive Jensen Huang at a $1 million-a-head dinner at Trump's Mar-a-Lago resort. The administration had spent months preparing new restrictions on shipments of H20 HGX GPUs — the highest-performing AI GPUs still permitted for sale in China — and those measures were set to take effect as early as this week, according to NPR, which cites two sources. Shortly after the dinner, Nvidia reportedly promised to pour more money into U.S.-based AI data centers and domestic AI infrastructure, a move that helped ease concerns from the administration.

Under the new AI Diffusion Rule, China is effectively blocked from getting American processors: the license exceptions that account for limited performance or limited quantities will not apply to high-risk countries, including China. China therefore cannot use the low processing performance (LPP) exception to legally obtain even minimal amounts of advanced U.S. AI processors, as all AI processor exports to China require a license and the default position is to deny them, making it extremely difficult for Chinese firms to legally acquire advanced AI hardware from the U.S. For Nvidia, this is a major problem, as it reportedly sold $16 billion worth of H20 GPUs to Chinese entities in the first quarter of calendar 2025.

Anton Shilov is a contributing writer at Tom's Hardware.