“I kept hearing the same thing from creators and users: ‘I miss when social media was fun.’ So, instead of waiting for a platform to listen, I built one.” With TikTok's future still uncertain, Neptune hopes to attract creators looking for an alternative way to earn revenue while fostering an environment that prioritizes the quality of videos and connections over follower counts. The app plans to offer various revenue streams, including tips, livestreams, and subscriptions. Users can also add a cover photo to their profiles, similar to what X and other networking apps offer. A key distinguishing feature of Neptune is that it lets creators hide their total followers and likes. According to the company, Neptune's algorithm emphasizes user interests and content quality rather than creator popularity. Typically, social media algorithms prioritize content with the highest engagement, often leaving lesser-known creators, or “micro-influencers,” at a disadvantage. “Neptune is for connection, not clout,” chief marketing officer Timur Tugberk said. Another notable feature is “Hop Back,” which allows users to resume watching a video right where they left off, so they don't lose their place when the app refreshes. Neptune is in beta and doesn't offer all of its intended features yet. When testing the app, we also noticed it lacks in-app editing tools and direct messaging.
Intel has reached an agreement to transfer a majority stake in its Altera division to Silver Lake for $4.46 billion, which values the business at $8.75 billion. The deal is part of Intel's effort to streamline its operations and improve its financial position, while making Altera the world's largest pure-play FPGA provider. The transaction enables Altera to function as an independent company focused on programmable logic technologies. Specifically, Altera aims to double down on established fields such as automotive, aerospace, and communications, while also targeting growth in areas like artificial intelligence, cloud platforms, edge systems, and next-generation wireless networks. Intel, on the other hand, will reduce operational complexity and focus on its primary business areas: CPUs, GPUs, supporting platforms, and chip production. "Altera continues to make progress repositioning its product portfolio to participate in the fastest growing and most profitable segments of the FPGA market," said Intel CEO Lip-Bu Tan. Altera will be led by Raghib Hussain starting May 5, 2025. He replaces Sandra Rivera, who is stepping down after a 25-year tenure at Intel. "We are grateful for Sandra's strong leadership and lasting impact throughout her 25-year Intel career and wish her continued success as she begins a new chapter," said Tan. "Raghib is a superb executive we selected to lead the business forward based on his vast industry experience and proven track record of success. We look forward to partnering with Silver Lake upon closing of the transaction, as their industry expertise will help to accelerate Altera's efforts and unlock additional economic value for Intel." The $8.75 billion valuation marks a significant decrease from the roughly $16.7 billion Intel paid for Altera a decade ago. Once the deal closes, Intel will remove Altera's financial results from its consolidated statements.
Car rental giant Hertz has begun notifying its customers of a data breach that exposed their personal information and driver's licenses. Hertz also disclosed the breach to several U.S. states, including California and Maine. Emily Spencer, a spokesperson for Hertz, would not provide TechCrunch with a specific number of individuals affected by the breach but said it would be “inaccurate to say millions” of customers are affected. The Clop ransomware gang claimed last year to have exploited a zero-day vulnerability in Cleo's widely used enterprise file transfer products, which allow companies to share large sets of sensitive data over the internet. By breaching these systems, the hackers stole reams of data from Cleo's corporate customers. Soon after, the gang claimed on its dark web leak site that it had stolen data from close to 60 companies by exploiting the bug in their Cleo systems. On Monday, Hertz's spokesperson told TechCrunch it found no evidence that Hertz's own network was affected by the breach, but confirmed that Hertz data “was acquired by an unauthorized third party that we understand exploited zero-day vulnerabilities within Cleo's platform in October 2024 and December 2024.”
Google's AI research lab, Google DeepMind, says that it has created an AI model that can help decipher dolphin vocalizations, supporting research efforts to better understand how dolphins communicate. The model, called DolphinGemma, was trained using data from the Wild Dolphin Project (WDP), a nonprofit that studies Atlantic spotted dolphins and their behaviors. Built on Google's open Gemma series of models, DolphinGemma, which can generate “dolphin-like” sound sequences, is efficient enough to run on phones, Google says. This summer, WDP plans to use Google's Pixel 9 smartphone to power a platform that can create synthetic dolphin vocalizations and listen to dolphin sounds for a matching “reply.” WDP was previously using the Pixel 6 to conduct this work, and Google says upgrading to the Pixel 9 will enable researchers at the organization to run AI models and template-matching algorithms at the same time.
Now, instead of offering models that only work with either Apple's or Google's lost-item finding technology, the new Chipolo POP devices work with both companies' finding networks out of the box. Chipolo is among a handful of companies, including Tile, Pebblebee, and Samsung, that make their own AirTag-like trackers. Unlike Tile, which designed a finding network that leveraged the people who had its mobile app installed, Chipolo chose to work with the existing finding networks offered by platform makers Apple and Google. To date, Chipolo has sold more than 4.5 million of its devices.
Earth's meteorite collection just got called out for being a little biased, and what's more, a team of astronomers pinpointed exactly why that bias occurs. Carbonaceous asteroids are all over our solar system, both in the main belt and closer to Earth. But very few of the carbon-rich rocks are actually found on Earth, comprising just 4% of the meteorites recovered on our planet's surface. The team's findings, published today in Nature Astronomy, indicate that carbonaceous asteroids get obliterated by the Sun and Earth's atmosphere before they can make it to the ground. “We've long suspected weak, carbonaceous material doesn't survive atmospheric entry,” said Hadrien Devillepoix, a researcher at Australia's Curtin Institute of Radio Astronomy and co-author of the paper, in a university release. “What this research shows is many of these meteoroids don't even make it that far: they break apart from being heated repeatedly as they pass close to the Sun.” The team analyzed nearly 8,000 meteoroid impacts and 540 potential falls from 19 different observation networks around the globe to understand why carbonaceous asteroids are so rare on Earth. Carbonaceous meteorites on Earth give scientists the unique opportunity to study some of the oldest material in our solar system. But researchers also recover carbon-rich asteroid material from space; Japan's Hayabusa2 mission and NASA's OSIRIS-REx both plucked rocky material from distant asteroids and brought those samples to Earth, where they can be investigated to a fuller extent than remote observations allow. “However, we have so few of them in our meteorite collections that we risk having an incomplete picture of what's actually out there in space and how the building blocks of life arrived on Earth,” added study lead author Patrick Shober. The team found that meteoroids created by tidal disruption events, when asteroids swing by planets closely enough to be broken apart by the planet's gravitational forces, are particularly fragile and survive atmospheric entry less often than other types of asteroids.
I would have guessed that any kind of forest has a quite limited cap on how much carbon it can retain in dead wood, and that this cap is pretty much fixed. Unless something stops the natural decay processes from releasing the carbon back to the atmosphere, I don't see how an existing grown forest could increase its capacity, since I suppose it is already at its equilibrium. (Unlike peatlands, where most of the accumulated carbon remains underwater, so they presumably have a much larger capacity.) Simply said, without burying or sinking wood mass, I see no easy way to prevent carbon from returning to the atmosphere. Basically, if we need to take carbon from the atmosphere, we should ideally put it back where we have been mining it from for the last couple of centuries. The article says, "We found that a forest that's developing toward old-growth condition is accruing more wood in the stream than is being lost through decomposition" and "The effect will continue in coming decades, Keeton said, because many mature New England forests are only about halfway through their long recovery from 19th- and 20th-century clearing for timber and agriculture". I'm still a bit confused about the emphasis on wood deposits in "streams" (reportedly far more effective, but I'd guess with very limited capacity to really "lock" the mass) compared to regular humus (not as effective, but for a forest with a couple of centuries of growth ahead, I'd guess far more capacious). If that fraction isn't negligible, we'd be better off burning it. Determining that fraction, across a range of conditions, is nontrivial. But solar thermal with mirrors should work for lower technology. So we absolutely need trees to provide ecological functions, but in an era when a 5 kWp PV array pays for itself in 5-6 years (and keeps working afterwards), it is ridiculous to cut trees and burn them to have hot water. 80% of the time, a Canadian citizen can have 100% solar hot water from PV, and less than 100% the rest of the year. So photovoltaics is roughly 15 times more land-efficient than burning biomass. My only concern is that building those houses might actually emit more carbon than they are supposed to keep. It's the same logic for construction materials. A house has dozens of trees' worth of lumber in it, and that carbon is now trapped in the house for however many decades it takes until the house eventually burns down or rots. Meanwhile the trees that were cut regrew, so the total "inventory" of trapped carbon has increased.
There's a bit of nuance to be filled out, like the challenges of forest plantation monoculture and so on, but it always sounded quite practical to me. Store spent fuel in massive wooden dry casks. They stay frozen for a million years and don't rot. With saltwater it's a bit trickier because it's decently oxygenated even at depth, and there is a lot of life dedicated to breaking down wood in the ocean. Basically any place where you've got high timber production within a reasonably short distance of an arid area could make for a relatively low-tech sequestration/storage pipeline. I grew up in an area known for coal and logging. Maybe it would be more effective to drop wet lumber off in the desert for a few years by rail before moving the dry lumber to permanent underground storage. This assumes two stages of transport to and from the desert would cost less carbon than transport to a kiln and then to storage. I'm not convinced that the wood even needs to be dried before burying, though.
For reference: https://skeptoid.com/episodes/4307 [*] (.gov.cn) https://english.www.gov.cn/news/202411/17/content_WS6739adf7... ("China's first deep-ocean drilling vessel enters service") If moderators see this and choose to change the URL, here are several more versions of this story: https://www.nature.com/articles/s41561-025-01675-7 ("The Moho is in reach of ocean drilling with the Meng Xiang") https://www.science.org/content/article/china-s-dreamy-new-s... ("China's ‘dreamy' new ship aims for Earth's mantle—and assumes ocean-drilling leadership") Edit: Ah, AI written. :-(
I've got a little utility program that I can tell to get the weather or run common commands unique to my system. It is a lot cheaper to leverage existing user interfaces & tools (i.e., Outlook) than it is to build new UIs and then train users on them. Sure, we have phone calls, sometimes get together for lunch.But mostly it's just emails. If you don't need to have the lowest possible latency for your work and you're happy to have threads die then it's better than any bespoke solution you can build without an army of engineers to keep it chugging along.What's even better is that you can see all the context, and use the same command plane as the agents to tell them what they are doing wrong. Also I work for Val Town, happy to answer any questions. I use that for journaling: I made a little system that sends me an email every day; I respond to it and the response is then sent to a page that stores it into a db. This might not seem like much of a big deal. I would have preferred that sites had a style-free request format that returned XML or even JSON generated from HTML, rather than having to use a separate API. I have this sense that the way we do it today with a split backend/frontend, distributed state, duplicated validation, etc has been a monumental waste of time. I looked forward to this day back in the early 2000s when APIs started arriving, but felt even then that something was fishy. I would have preferred that sites had a style-free request format that returned XML or even JSON generated from HTML, rather than having to use a separate API. I have this sense that the way we do it today with a split backend/frontend, distributed state, duplicated validation, etc has been a monumental waste of time. I know note taking and journaling posts are frequent on HN, but I've thought that this is the best way to go, is universal from any client, and very expandable. - all attachments are stripped out and stored on a server in an hierarchical structure based on sender/recipient/subject line- all discussions are archived based on similar criteria, and can be reviewed EDIT: and edited like to a wiki I have not thought about adding memory log of all current things and feeding it into the context I'll try it out.Mine is a simple stateless thing that captures messages, voice memos and creates task entries in my org mode file with actionable items. I only feed current date to the context.Its pretty amusing to see how it sometimes adds a little bit of its own personality to simple tasks, for example if one of my tasks are phrased as a question it will often try to answer the question in the task description. Mine is a simple stateless thing that captures messages, voice memos and creates task entries in my org mode file with actionable items. I only feed current date to the context.Its pretty amusing to see how it sometimes adds a little bit of its own personality to simple tasks, for example if one of my tasks are phrased as a question it will often try to answer the question in the task description. Its pretty amusing to see how it sometimes adds a little bit of its own personality to simple tasks, for example if one of my tasks are phrased as a question it will often try to answer the question in the task description. 
- https://docs.mcp.run/tasks/tutorials/telegram-bot For memories (still not shown in this tutorial), I have created a pantry [0] and a servlet for it [1], and I modified the prompt so that it would first check if a conversation existed with the given chat id, and store the result there; a rough illustration of that pattern follows below. The cool thing is that you can add any servlets on the registry and make your bot as capable as you want. 1. How did he tell Claude to “update” based on the notebook entries? 2. Won't he eventually run out of context window? 3. Won't this be expensive when using hosted solutions? For just personal hacking, why not simply use ollama + your favorite model? 4. If one were to build this locally, can vector DB similarity search, or a hybrid combined with full-text search, be used to achieve this? I can totally imagine using pgai for the notebook logs feature and local ollama + deepseek for the inference. The email idea mentioned by other commenters is brilliant. But I don't think you need a new mailbox; just pull from Gmail and grep if sender and receiver are yourself (aka the self tag). Thank you for sharing; OP's project is something I have been thinking about for a few months now.
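The commenter's memory layer is a pantry servlet; purely as an illustration of the same check-then-store pattern, here is a sketch with SQLite standing in for the pantry (the table and function names are made up):

    import sqlite3

    db = sqlite3.connect("memories.db")
    db.execute("CREATE TABLE IF NOT EXISTS conversations (chat_id TEXT PRIMARY KEY, log TEXT)")

    def remember(chat_id: str, message: str) -> str:
        """Append a message to this chat's log, creating the log if it's new,
        and return the full history to feed back into the bot's prompt."""
        row = db.execute("SELECT log FROM conversations WHERE chat_id = ?",
                         (chat_id,)).fetchone()
        log = (row[0] + "\n" if row else "") + message
        db.execute(
            "INSERT INTO conversations (chat_id, log) VALUES (?, ?) "
            "ON CONFLICT(chat_id) DO UPDATE SET log = excluded.log",
            (chat_id, log))
        db.commit()
        return log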
The prompt can then be fed just information for today and the next few days, which will always be tiny. It's possible to save "memories" that are always included in the prompt, but even those will add up to not a lot of tokens over time. > Won't this be expensive when using hosted solutions? You may be under-estimating how absurdly cheap hosted LLMs are these days. Play around with my LLM pricing calculator for an illustration of that: https://tools.simonwillison.net/llm-prices > If one were to build this locally, can vector DB similarity search or a hybrid combined with full-text search be used to achieve this? Geoffrey's design is so simple it doesn't even need search: all it does is dump in context that's been stamped with a date, and there are so few tokens there's no need for FTS or vector search. SQLite has surprisingly capable FTS built in, and there are extensions like https://github.com/asg017/sqlite-vec for doing things with vectors.
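If a log like this ever did outgrow the context window, that built-in FTS takes only a few lines to adopt. A minimal sketch using Python's bundled sqlite3 module (the table and rows here are invented for illustration):

    import sqlite3

    db = sqlite3.connect("notes.db")
    # FTS5 virtual table: every column is full-text indexed.
    db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS notes USING fts5(day, entry)")
    db.execute("INSERT INTO notes VALUES (?, ?)",
               ("2025-04-14", "Lunch with Anna to talk over the Q3 roadmap"))
    db.commit()

    # MATCH supports boolean operators and prefix queries;
    # ORDER BY rank sorts results by BM25 relevance.
    for day, entry in db.execute(
            "SELECT day, entry FROM notes WHERE notes MATCH ? ORDER BY rank",
            ("roadmap",)):
        print(day, entry)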
Do we even need to think of these as agents, or will the agentic frameworks move towards being a call_llm() SQL function? This works really effectively with thinking models, because the thinking eats up tons of context but also produces very good "summary documents". The database also provides a form of fallback, or RAG I suppose, for situations where the summary leaves out important details, but the model must also recognize this and go pull context from the DB. Right now I have been using it to make essentially an inventory management/BOM optimization agent for a database of ~10k distinct parts/materials. Large swathes of the stack are commoditized OSS plumbing, and hosted inference is already cheap and easy. There are obvious security issues with plugging an agent into your email and calendar, but I think many will find it preferable to control the whole stack rather than ceding control to Apple or Google. "There are obvious security issues with plugging an agent into your email..." Isn't this how North Korea makes all their crypto happen? TL;DR: I made shortcuts that work on my Apple Watch directly to record my voice, transcribe it, and store my daily logs in a Notion DB. All you need are 1) a ChatGPT API key and 2) a Notion account (free). - I made one shortcut on my iPhone to record my voice, use the Whisper model to transcribe it (done locally using a POST request), and send the transcription to my Notion database (again a POST request in Shortcuts). - I made another shortcut that records my voice, transcribes it, and reads data from my Notion database to answer questions based on what exists in it. It puts all the data from the db into the context to answer; it costs a lot, but it's simple and works well. The best part is that this workflow works without my iPhone, directly on my Apple Watch.
It uses POST requests internally, so there's no need to host a server. And the Notion API happens to be free for this kind of use case. I like logging my day-to-day activities just using Siri on my watch, and possibly getting insights based on them. I'd use a hosted platform for this kind of thing myself, because then there's less for me to have to worry about. I have dozens of little systems running in GitHub Actions right now just to save me from having to maintain a machine with a crontab. Home-server AI is orders of magnitude more costly than heavily subsidized cloud-based options for this use case, unless you run toy models that might hallucinate meetings. edit: I now realize you're talking about the non-AI-related functionality.
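For readers who want the same flow outside Shortcuts, here is a rough Python equivalent of those two POST requests, using OpenAI's hosted transcription endpoint and Notion's public pages API. The keys, database ID, and the "Name" property are placeholders, and the original shortcuts may differ in detail:

    import requests

    OPENAI_KEY = "sk-..."    # placeholder
    NOTION_KEY = "ntn_..."   # placeholder
    DATABASE_ID = "..."      # placeholder Notion database ID

    # 1. Transcribe a voice memo with the hosted Whisper model.
    with open("memo.m4a", "rb") as f:
        text = requests.post(
            "https://api.openai.com/v1/audio/transcriptions",
            headers={"Authorization": f"Bearer {OPENAI_KEY}"},
            files={"file": f},
            data={"model": "whisper-1", "response_format": "text"},
        ).text

    # 2. Store the transcription as a new page in a Notion database.
    requests.post(
        "https://api.notion.com/v1/pages",
        headers={"Authorization": f"Bearer {NOTION_KEY}",
                 "Notion-Version": "2022-06-28"},
        json={"parent": {"database_id": DATABASE_ID},
              "properties": {"Name": {"title": [{"text": {"content": text[:80]}}]}}},
    )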
Personally, this appears to be extremely helpful for me, because instead of checking several different spots every day, I can get a coherent summary in one spot, tailored to me and my family. I'm literally checking the same things every day, down to USPS Informed Delivery. That's enough. I can't count the number of useful scripts and apps I've written that nobody else has used, yet I rely on them daily or nearly every day. For me, that is an extremely low barrier to cross. I find Siri useful for exactly two things at the moment: setting timers and calling people while I am driving. For these two things it is really useful, but even in these niches, when it comes to calling people, despite it having been around me for years now, it insists on stupid things like telling me there is no Theresa in my contacts when I ask it to call Therese. That said, what I really want is a reliable system I can trust with calendar access and that is possible to discuss with, ideally voice-based. (Or you don't trust them not to have security breaches that grant attackers access to logged data, which remains a genuine threat, albeit one that's true of any other cloud service.) I am wondering how powerful the AI model needs to be to power this app. Would a self-hosted Llama-3.2-1B, Qwen2.5-0.5B, or Qwen2.5-1.5B on a phone be enough? > cron job which makes a call to the Claude API It's about 652 tokens according to https://tools.simonwillison.net/claude-token-counter - maybe double that once you add all of the context from the database table. 1200 input tokens and 200 output tokens for Claude 3.7 Sonnet cost 0.66 cents - that's around 2/3 of a cent. LLM APIs are so cheap these days. "I've written before about how the endgame for AI-driven personal software isn't more app silos, it's small tools operating on a shared pool of context about our lives."
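Those numbers check out against Claude 3.7 Sonnet's published list prices of $3 per million input tokens and $15 per million output tokens:

    # Cost of one daily run at Claude 3.7 Sonnet list prices.
    input_tokens, output_tokens = 1200, 200
    cost = input_tokens / 1e6 * 3.00 + output_tokens / 1e6 * 15.00
    print(f"${cost:.4f} per run")         # $0.0066, i.e. 0.66 cents
    print(f"${cost * 365:.2f} per year")  # about $2.41 for a daily cron job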
Unlike Microsoft 365, where you'll pay more over time due to its subscription model, Office 2024 is a one-time, upfront purchase that gives you lifetime access to fundamental tools like Word, Excel, PowerPoint, Outlook, and OneNote. This makes it the best choice if you'd rather pay a single fee and avoid ongoing charges. Office 2024 includes all the latest updates in one package and does not require additional payments. Office 2024 Home & Business is filled with features appropriate for both personal and business use: It builds on the success of Office 2021 with major improvements in performance, appearance, and functionality. Excel users will welcome its more intelligent handling of large datasets, PowerPoint can now record presentations with audio and live video feeds, and Word introduces Focus Mode to eliminate distractions while composing, along with AI-powered Smart Compose to easily fill in sentences or generate ideas. The suite's new user interface is based on Fluent Design principles and provides a harmonious, consistent experience across every application. What's great is that Microsoft Office 2024 can be used offline: Unlike Microsoft 365, which is dependent on cloud services, Office 2024 will work completely offline after it has been installed. While it does not bundle cloud storage or applications like Teams, the suite still offers co-authoring capabilities that enable multiple users to edit the same document in real time. For those concerned with future updates, know that Office 2024 does not receive new features after purchase; security patches and bug fixes, yes. But this is not a drawback when you consider that Office 2024 already includes all the newest tools from Microsoft's productivity set. With Word, Excel, PowerPoint, Outlook, and OneNote all current in this release, there is little need for additional upgrades. For only $159, this lifetime license is excellent value if you're a Mac or PC user looking to be as productive as possible at a reasonable price. For business or individual projects, Microsoft Office 2024 delivers everything you need in one convenient package. With StackSocial's standing as an expert on software deals, now is a good time to grab this offer before prices go back to normal.
No open-source pedigree has been mentioned for the SDK, so it is likely proprietary and won't be of much benefit to developers outside China. The U.S. has implemented a series of export restrictions on China covering advanced AI chips, high-bandwidth memory (HBM), manufacturing equipment, and silicon wafers from leading players like Intel, TSMC, and Samsung. In a bid to reduce reliance on Western hardware, China is hard at work developing its semiconductor ecosystem with in-house silicon, fab equipment, memory, CPUs, and even GPUs. The latter is of great importance, as modern machine learning (sometimes under the buzzword banner of AI) is largely accelerated by parallel computing, something GPUs excel at. A strong GPU programming ecosystem offers high-level abstraction, ready-to-use libraries, documentation, and profiling tools. With high-performance Nvidia GPU exports still in limbo, Moore Threads is offering an alternative to CUDA. To ensure compatibility with already-written CUDA code, the MUSA SDK also includes Musify, a tool that translates CUDA code for the MUSA environment, likely by translating PTX code at runtime, similar to zLUDA. Moore Threads is demonstrating the prowess of its stack through several demonstrations on its website, including speech synthesis, AI image generation, image processing, and AI-powered 3D face modeling. You can actually try out a number of these demos right now (though you might need an account), some of which are reportedly running on Moore Threads' MTT S3000 datacenter GPUs. Despite CUDA's clear advantage in advancement, maturity, and support, MUSA could find many indigenous customers in small-scale environments, evolving over time. Breaking free from CUDA's reign requires superior alternatives, with ROCm being a key contender.
Ankylosaurs, a group of dinosaurs often compared to Pokémon, were built like walking tanks, with bony armor plating their backs and sides. An international team of researchers has identified the first ankylosaurid footprints known to science. “While we don't know exactly what the dinosaur that made Ruopodosaurus footprints looked like, we know that it would have been about 5-6 metres long [16 to over 19 feet long], spiky and armoured, and with a stiff tail or a full tail club,” Victoria Arbour, the curator of paleontology at the Royal BC (British Columbia) Museum, said in a Taylor & Francis Group statement. Arbour and her colleagues' work is detailed in a study published today in the Journal of Vertebrate Paleontology, which is published by Taylor & Francis Group. This makes the footprints doubly exceptional: prior to their discovery, some scholars had suggested that ankylosaurids did not exist in North America during the middle Cretaceous, given the lack of fossil evidence from that period. The newly discovered tracks fill in this gap in North America's fossil record, and also demonstrate that nodosaurids and ankylosaurids shared this region millions of years ago. The investigation began when Charles Helm, a co-author of the study and a scientific advisor at Tumbler Ridge Museum, documented three-toed tracks around Tumbler Ridge, a municipality in the foothills of British Columbia's Canadian Rockies, which is also in the Peace Region. “Ever since two young boys discovered an ankylosaur trackway close to Tumbler Ridge in the year 2000, ankylosaurs and Tumbler Ridge have been synonymous. It is really exciting to now know through this research that there are two types of ankylosaurs that called this region home, and that Ruopodosaurus has only been identified in this part of Canada,” said Helm. “This study also highlights how important the Peace Region of northeastern BC is for understanding the evolution of dinosaurs in North America—there's still lots more to be discovered,” Arbour added.
Elon Musk may have run tech companies, but building technology for government is an entirely different beast. Tech buzzwords are clanging through the halls of Washington, DC. The executive order that created DOGE in the first place claims the agency intends to “modernize Federal technology and software.” But jamming hyped-up tech into government workflows isn't a formula for efficiency. Successful, safe civic tech requires a human-centered approach that understands and respects the needs of citizens. Unfortunately, this administration laid off all the federal workers with the know-how for that: seasoned design and technology professionals, many of whom left careers in the private sector to serve their government and compatriots. What's going on now is not unconventional swashbuckling; it's wild incompetence. If this administration doesn't change its approach soon, American citizens are going to suffer far more than they probably realize. Consider HealthCare.gov: enormous demand famously took down the website two hours after its 2013 launch. On that first day, only six people were able to complete the registration process. DirectFile, the free digital tax filing system that the IRS launched last year, by contrast, emerged from years of careful research, design, and engineering and a thoughtful, multi-staged release. As a result, 90% of people who used DirectFile and responded to a survey said their experience was excellent or above average, and 86% reported that DirectFile increased their trust in the IRS. Recently, Sam Corcos, a DOGE engineer, told IRS employees he plans to kill the program. When 21 experienced technologists quit their jobs at USDS in January after their colleagues were let go, they weren't objecting on political grounds. Rather, they quit because staying would mean helping to “compromise core government services” under DOGE, whose orders are incompatible with USDS's original mission. As DOGE bulldozes through technological systems, firewalls between government agencies are collapsing and the floodgates are open for data-sharing disasters that will affect everyone. And it threatens everyone, albeit perhaps less imminently, as every American's Social Security number, tax returns, benefits, and health-care records are agglomerated into one massive, poorly secured data pool. Now imagine those same risks with all your government data, managed by a small crew of DOGE workers without a hint of institutional knowledge between them. Making data sets speak to each other is one of the most difficult technological challenges out there. Giants like Palantir have built entire business models around integrating government data for surveillance, and they stand to profit enormously from DOGE's dismantling of privacy protections. This is the playbook: gut public infrastructure, pay private companies millions to rebuild it, and then grant those companies unprecedented access to our data. DOGE is also coming for COBOL, the programming language that the entire infrastructure of the Social Security Administration is built on. According to reporting by Wired, DOGE plans to rebuild that system from the ground up in mere months, even though the SSA itself estimated that a project like that would take five years. If something goes wrong, more than 65 million people in the US currently receiving Social Security benefits will feel it where it hurts.
Any delay in a Social Security payment can mean the difference between paying rent and facing eviction, affording medication or food and going without. Once these systems are gutted and these firewalls are down, it could take years or even decades to put the pieces back together from a technical standpoint. Last month, an 83-year-old pastor in hospice care summoned her strength to sue this administration over its gutting of the Consumer Financial Protection Bureau, and we can follow her example. And everyday Americans who rely on government services, which is all of us, have a stake in this fight. Support the lawyers challenging DOGE's tech takeover, document and report any failures you encounter in government systems, and demand that your representatives hold hearings on what's happening to our digital infrastructure.
AirTags have been a godsend for those who have a habit of misplacing their belongings: Whether it's your keys, wallet, bag, or luggage (or even your pet's collar), these small Bluetooth trackers offer a simple and effective way to keep track of your things. Though these prices aren't all-time lows, they're close enough to make this offer very attractive, especially with word on the street that tariffs might double the price of electronics any time soon. Both options are Top 5 bestsellers in Amazon's Electronics category right now. The AirTag is a tiny attachment that makes finding lost items simple, and it's just as simple to set up: tap it to your iPhone or iPad, assign the AirTag a name for what you're using it for (such as “Keys” or “Backpack”), and you're ready to go. If the item is nearby but not in sight, you can use Precision Finding on recent iPhones with Ultra Wideband technology for a very precise location with on-screen directions and sound. Or you can have the AirTag play a sound from its built-in speaker to locate it in seconds. When you misplace something outside the home (at a hotel or airport, for example), the AirTag can transmit its location anonymously through nearby Apple devices and back to you via iCloud. This feature makes your items trackable even when they're out of Bluetooth range. Privacy and security are also important aspects of the AirTag design: The device uses encrypted communication to protect your data and prevent unwanted tracking. Moreover, if someone attempts to use an AirTag to track you against your will, your iPhone will detect it and alert you to any unfamiliar trackers nearby. As fears mount over tariff hikes that could send electronics prices through the roof, it's a great time to take advantage of Amazon's discounts. Purchasing AirTags today will ensure that you are not caught off guard by rising costs in the near future.
Georgia Tech scientists reckon the advance could make BCIs more important in everyday life. Researchers from Georgia Tech have developed a tiny, minimally invasive brain-computer interface (BCI). It is thought that this super-compact new "high fidelity" sensor will make continuous everyday use of BCIs a more realistic possibility. Hong Yeo, a professor in the George W. Woodruff School of Mechanical Engineering at Georgia Tech, decided to do something about the bulk and invasiveness of existing devices, while maintaining optimal impedance and data quality. "I started this research because my main goal is to develop new sensor technology to support healthcare and I had previous experience with brain-computer interfaces and flexible scalp electronics," explained Yeo. The Georgia Tech blog also mentions that this tiny new sensor uses conductive polymer microneedles to capture electrical signals and conveys those signals along flexible polyimide/copper wires. In addition to its naturally flexible construction, the device is less than a square millimeter in size. In Georgia Tech field tests, six subjects used the new device to control an augmented reality (AR) video call. It proved to be 96.4% accurate in recording and classifying neural signals. However, the high-fidelity neural signal capture persisted only for up to 12 hours, which might be a major drawback for certain applications; perhaps we should think of it as a disposable, occasional-use device. Georgia Tech researchers stressed that during that half day, subjects could stand, walk, and run, enjoying complete freedom of movement with the implant in place. Perhaps we shouldn't get too excited about the possibilities of BCIs unlocking superhuman powers, though. Recent research suggested that human thought runs at a leisurely 10 bits per second, so we might also need a brain overclocking upgrade to make the most of an advanced BCI's potential...
It's called Book Highlights and it lets you:• Import highlights from Kindle using My Clippings.txt• Create and manage your own personal book library• Add quotes manually or scan them with your phone's camera• Sync everything across your Apple devices via iCloudI tried to keep the interface clean and minimal, just focused on helping readers organize and revisit their favorite quotes. If anyone here gives it a try, I'd love to hear your thoughts or feedback! • Import highlights from Kindle using My Clippings.txt• Create and manage your own personal book library• Add quotes manually or scan them with your phone's camera• Sync everything across your Apple devices via iCloudI tried to keep the interface clean and minimal, just focused on helping readers organize and revisit their favorite quotes. If anyone here gives it a try, I'd love to hear your thoughts or feedback! • Create and manage your own personal book library• Add quotes manually or scan them with your phone's camera• Sync everything across your Apple devices via iCloudI tried to keep the interface clean and minimal, just focused on helping readers organize and revisit their favorite quotes. If anyone here gives it a try, I'd love to hear your thoughts or feedback! • Add quotes manually or scan them with your phone's camera• Sync everything across your Apple devices via iCloudI tried to keep the interface clean and minimal, just focused on helping readers organize and revisit their favorite quotes. If anyone here gives it a try, I'd love to hear your thoughts or feedback! • Sync everything across your Apple devices via iCloudI tried to keep the interface clean and minimal, just focused on helping readers organize and revisit their favorite quotes. If anyone here gives it a try, I'd love to hear your thoughts or feedback! If anyone here gives it a try, I'd love to hear your thoughts or feedback! The one issue we could not figure out, is how to import books/pdfs (side loaded is the term I think) that are on a kindle device but are not 'bought' books through Amazon. We might just add sth later to allow adding ebooks and pdfs, non Amazon related in our webapp.If you'd like to give it a try, it's called DeepRead (deepread.com), we're just a few peeps trying to create sth fun and useful ^^Ps. Sorry for the little self promotion, but I'm happy to see this issue popping up still nowadays.Pss. We were also thinking of building something that allows an easier export to Obsidian or other apps (we have an export functionality built in already which creates a markdown of the chapters and highlights). If you'd like to give it a try, it's called DeepRead (deepread.com), we're just a few peeps trying to create sth fun and useful ^^Ps. Sorry for the little self promotion, but I'm happy to see this issue popping up still nowadays.Pss. We were also thinking of building something that allows an easier export to Obsidian or other apps (we have an export functionality built in already which creates a markdown of the chapters and highlights). Sorry for the little self promotion, but I'm happy to see this issue popping up still nowadays.Pss. We were also thinking of building something that allows an easier export to Obsidian or other apps (we have an export functionality built in already which creates a markdown of the chapters and highlights). We were also thinking of building something that allows an easier export to Obsidian or other apps (we have an export functionality built in already which creates a markdown of the chapters and highlights). 
Now I need a computer and a USB cable, while I'm mostly living off-road and off-grid. This is in no way the fault of your app; just pointing out how Amazon isn't for serious researchers.

On a related note, I built a Kindle notes parser, which splits the highlights and transforms them into markdown. My workflow is to dump my highlights into Obsidian and build flashcards on top of that. The app is open source: https://github.com/woile/kindle-notes-parser
Edit: I still don't like that I have to plug in the Kindle to extract the highlights; it's a bummer, but in the end, it's simple and works for me.

I like my Kindle, but the software is very crummy.
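For anyone curious what's involved, here's a rough sketch of the same idea in Python (my own illustration, not the code from the repo above). It assumes the long-standing My Clippings.txt layout: a title line, a metadata line with page/location/date, a blank line, the highlight text, and a row of ten equals signs between entries.

    from collections import defaultdict
    from pathlib import Path

    SEPARATOR = "=========="

    def parse_clippings(path):
        """Parse My Clippings.txt into {book_title: [highlight, ...]}."""
        books = defaultdict(list)
        # Kindle typically writes the file with a BOM, so use utf-8-sig
        text = Path(path).read_text(encoding="utf-8-sig")
        for block in text.split(SEPARATOR):
            lines = [ln.strip() for ln in block.strip().splitlines() if ln.strip()]
            if len(lines) < 3:
                continue  # skip bookmarks and malformed blocks with no text
            title, _meta, *content = lines  # _meta holds page/location/date
            books[title].append(" ".join(content))
        return books

    def to_markdown(books):
        """Render the parsed highlights as one markdown section per book."""
        out = []
        for title, highlights in books.items():
            out.append(f"## {title}\n")
            out.extend(f"> {h}\n" for h in highlights)
        return "\n".join(out)

    if __name__ == "__main__":
        print(to_markdown(parse_clippings("My Clippings.txt")))

Pipe the output into a file inside an Obsidian vault and the flashcard workflow described above follows naturally.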
Preparing for a dangerous asteroid without launching nukes into space means getting creative. One day, in the near or far future, an asteroid about the length of a football stadium will find itself on a collision course with Earth. If we are lucky, it will land in the middle of the vast ocean, creating a good-size but innocuous tsunami, or in an uninhabited patch of desert. But if we are unlucky, it will come down near a populated area. As the asteroid steams through the atmosphere, it will begin to fragment—but the bulk of it will likely make it to the ground in just a few seconds, instantly turning anything solid into a fluid and excavating a huge impact crater in a heartbeat. A colossal blast wave, akin to one unleashed by a large nuclear weapon, will explode from the impact site in every direction. Homes dozens of miles away will fold like cardboard. About 25,000 asteroids more than 460 feet long—a size range that starts with midsize “city killers” and goes up in impact from there—are thought to exist close to Earth. Astronomers keep watch for them, and others are actively working on developing ways to prevent a collision should we find an asteroid that seems likely to hit us.

We already know that at least one method works: ramming the rock with an uncrewed spacecraft to push it away from Earth. In September 2022, NASA's Double Asteroid Redirection Test, or DART, showed it could be done when a semiautonomous spacecraft the size of a small car, with solar panel wings, was smashed into an (innocuous) asteroid named Dimorphos at 14,000 miles per hour, successfully changing its orbit around a larger asteroid named Didymos. But there are circumstances in which giving an asteroid a physical shove might not be enough to protect the planet. If that's the case, we could need another method, one that is notoriously difficult to test in real life: a nuclear explosion.

Scientists have used computer simulations to explore this potential method of planetary defense. But in an ideal world, researchers would ground their models with cold, hard, practical data, and sending a nuclear weapon into space would violate international laws and risk inflaming political tensions. Over the last few years, however, scientists have started to devise some creative ways around this experimental limitation.

The effort began in 2023, with a team of scientists led by Nathan Moore, a physicist and chemical engineer at the Sandia National Laboratories in Albuquerque, New Mexico. Sandia is a semi-secretive site that serves as the engineering arm of America's nuclear weapons program, and within that complex lies the Z Pulsed Power Facility, or Z machine, a cylindrical metallic labyrinth of warning signs and wiring, capable of summoning enough energy to melt diamond. It took a while to sort out the details, but by July 2023, Moore and his team were ready. They waited anxiously inside a control room, monitoring the thrumming contraption from afar. If the asteroid-like targets inside were knocked back by its x-rays, it would prove something that, until now, was purely theoretical: you can deflect an asteroid from Earth using a nuke. This experiment “had never been done before,” says Moore.

Asteroid impacts are a natural disaster like any other. You shouldn't lose sleep over the prospect; the probability of an asteroid striking Earth during any one lifetime is very small. But if we get unlucky, an errant space rock may rudely ring Earth's doorbell. Forget about the gigantic asteroids you know from Hollywood blockbusters.
Space rocks over two-thirds of a mile (about one kilometer) in diameter—those capable of imperiling civilization—are certainly out there, and some hew close to Earth's own orbit. But because these asteroids are so elephantine, astronomers have found almost all of them already, and none pose an impact threat. The day-to-day odds of an impact are extremely low, but even one of the smaller ones in that size range could do significant damage if it found Earth and hit a populated area—a capacity that has led astronomers to dub such midsize asteroids “city killers.”

If one of them is found heading our way, the response could be disruption—blowing the asteroid to pieces—or it could be something that can deflect the asteroid, pushing it onto a path that will no longer intersect with our blue marble. Because disruption could accidentally turn a big asteroid into multiple smaller, but still deadly, shards bound for Earth, it's often considered to be a strategy of last resort.

One way to achieve deflection is to deploy a spacecraft known as a kinetic impactor—a battering ram that collides with an asteroid and transfers its momentum to the rocky interloper, nudging it away from Earth. NASA's DART mission demonstrated that this can work, but there are some important caveats: you need to deflect the asteroid years in advance to make sure it completely misses Earth (see the back-of-the-envelope sketch below), and asteroids that we spot too late—or that are too big—can't be swatted away by just one DART-like mission. Instead, you'd need several kinetic impactors—maybe many of them—to hit one side of the asteroid perfectly each time in order to push it far enough to save our planet. That's a tall order for orbital mechanics, and not something space agencies may be willing to gamble on.

“There are scenarios where kinetic impact is insufficient, and we'd have to use a nuclear explosive device,” says Moore. Detonated at a distance from the rock, such a device would irradiate one hemisphere of the asteroid in x-rays, which in a few millionths of a second would violently shatter and vaporize the rocky surface.

Several decades ago, Peter Schultz, a planetary geologist and impacts expert at Brown University, was giving a planetary defense talk at the Lawrence Livermore National Laboratory in California, another American lab focused on nuclear deterrence and nuclear physics research. The question in the air was one long associated with Edward Teller, father of the hydrogen bomb: What would happen if you blasted an asteroid with a nuclear weapon's x-rays? Could you forestall a spaceborne disaster using weapons of mass destruction? But Teller's dream wasn't fulfilled—and it's unlikely to become a reality anytime soon. The United Nations' 1967 Outer Space Treaty states that no nation can deploy or use nuclear weapons off-world (even if it's not clear how long certain spacefaring nations will continue to adhere to that rule). “There're still many folks that don't want to talk about it at all … even if that were the only option to prevent an impact,” says Megan Bruck Syal, a physicist and planetary defense researcher at Lawrence Livermore. Nuclear weapons have long been a sensitive subject, and with relations between several nuclear nations currently at a new nadir, anxiety over the subject is understandable. “It isn't our preference to use a nuclear explosive, of course,” she adds.
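To get a feel for why lead time matters so much, here is a back-of-the-envelope sketch with purely illustrative numbers (my own, not from the researchers quoted here). In the crudest model, a velocity change applied today grows linearly into a miss distance later:

    # Back-of-envelope: how big a velocity change does a deflection need?
    # Simplest model: miss_distance ~ delta_v * lead_time (the drift grows
    # linearly). Real orbital mechanics amplifies the drift over repeated
    # orbits, so treat these as conservative, illustrative figures.

    EARTH_RADIUS_M = 6.371e6    # aim to shift the arrival point by ~1 Earth radius
    SECONDS_PER_YEAR = 3.156e7

    def required_delta_v(miss_distance_m, lead_time_years):
        """Velocity change (m/s) needed to accumulate the given miss distance."""
        return miss_distance_m / (lead_time_years * SECONDS_PER_YEAR)

    for years in (1, 5, 10, 25):
        dv_cm_s = required_delta_v(EARTH_RADIUS_M, years) * 100
        print(f"{years:>2} years of warning -> ~{dv_cm_s:.1f} cm/s")
    # 1 year: ~20 cm/s; 10 years: ~2 cm/s. DART's nudge to Dimorphos was of
    # the order of millimeters per second -- fine with decades of warning,
    # far too small if we spot the rock late.

The scaling is the point: push early and a whisper suffices; push late and you need a shout, which is where the nuclear option enters the conversation.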
Mostly, researchers have turned to the virtual world, using supercomputers at various US laboratories to simulate the asteroid-agitating physics of a nuclear blast. To put it mildly, “this is very hard,” says Mary Burkey, a physicist and planetary defense researcher at Lawrence Livermore. “When a nuke goes off in space, there's just x-ray light that's coming out of it. It's shining on the surface of your asteroid, and you're tracking those little photons penetrating maybe a tiny little bit into the surface, and then somehow you have to take that micrometer worth of resolution and then propagate it out onto something that might be on the order of hundreds of meters wide, watching that shock wave propagate and then watching fragments spin off into space.”

But recent research using these high-fidelity simulations does suggest that nukes are an effective planetary defense tool for both disruption and deflection. Still, open questions remain: can you be sure the explosion wouldn't accidentally shatter the asteroid, turning a cannonball into a hail of bullets still headed for Earth? Simulations can go a long way toward answering these questions, but they remain virtual re-creations of reality, with built-in assumptions. “Our models are only as good as the physics that we understand and that we put into them,” says Angela Stickle, a hypervelocity impact physicist at the Johns Hopkins University Applied Physics Laboratory in Maryland. To make sure the simulations are reproducing the correct physics and delivering realistic data, physical experiments are needed to ground them.

Researchers studying kinetic impactors can get that sort of real-world data. Along with DART, they can use specialized cannons—like the Vertical Gun Range at NASA's Ames Research Center in California—to fire all sorts of projectiles at meteorites. In doing so, they can find out how tough or fragile asteroid shards can be, effectively reproducing a kinetic impact mission on a small scale. Those studying the nuclear option have no such luxury: re-creating the physics of these confrontations on a small scale was long considered to be exceedingly difficult. Fortunately, those keen on fighting asteroids are as persistent as they are creative—and several teams, including Moore's at Sandia, think they have come up with a solution. “Planetary defense affects the entire planet,” Moore adds—making it, by default, a national security issue as well.

There was “lots of scribbling on my whiteboard, running computer simulations, and getting data to our engineers to design the test fixture for the several months it would take to get all the parts machined and assembled,” he says. Although there were previous and ongoing experiments that showered asteroid-like targets with x-rays, Moore and his team were frustrated by one aspect of them: to truly test whether x-rays could deflect asteroids, the targets would have to hang freely in a vacuum, and it wasn't immediately clear how that could be achieved.

Generating the nuke-like x-rays was the easy part, because Sandia had the Z machine, a hulking mass of diodes, pipes, and wires interwoven with an assortment of walkways that circumnavigate a vacuum chamber at its core. When it's powered up, electrical currents are channeled into capacitors—and, when commanded, blast that energy at a target or substance to create radiation and intense magnetic pressures. Flanked by klaxons and flashing lights, it's an intimidating sight. “It's the size of a building—about three stories tall,” says Moore. The original purpose of the Z machine, whose first form was built half a century ago, was nuclear fusion research. But over time, it's been tinkered with, upgraded, and used for all kinds of science, including squeezing matter to the sorts of pressures found deep inside planets: “And we can do experiments like that to better understand how planets form,” Moore says, as an example. And the machine's preternatural energies could easily be used to generate x-rays—in this case, by electrifying and collapsing a cloud of argon gas.
“The idea of studying asteroid deflection is completely different for us,” says Moore. And the machine “fires just once a day,” he adds, “so all the experiments are planned more than a year in advance.” In other words, the researchers had to be near certain their one experiment would work, or they would be in for a long wait to try again—if they were permitted a second attempt. For some time, they could not figure out how to suspend their micro-asteroids. But eventually, they found a solution: two incredibly thin bits of aluminum foil would hold their targets in place within the Z machine's vacuum chamber, a trick amounting to “x-ray scissors”: the opening burst would vaporize the foil, leaving the targets momentarily free-floating in the vacuum so that any recoil could be measured cleanly.

In July 2023, after considerable planning, the team was ready. Within the Z machine's vacuum chamber were two fingernail-size targets—a bit of quartz and some fused silica, both frequently found on real asteroids. The machine fired, and it was over before their ears could even register a metallic bang. But just after the shot, one of Moore's colleagues sent him a very concise text: IT WORKED. “We knew right away it was a huge success,” says Moore. The experimental setup was complex, but they were trying to achieve something extremely fundamental: a real-world demonstration that a nuclear blast could make an object in space move.

Patrick King, a physicist at the Johns Hopkins University Applied Physics Laboratory, was impressed. Previously, pushing back objects using x-ray vaporization had been extremely difficult to demonstrate in the lab. “They were able to get a direct measurement of that momentum transfer,” he says, calling the x-ray scissors an “elegant” technique. Sandia's work took many in the community by surprise. But the results shouldn't be overinterpreted: it isn't clear, from the deflection of the very small and rudimentary asteroid-like targets, how much a genuine nuclear explosion would deflect an actual asteroid.

King leads a team that is also working on this question, using the Omega Laser Facility at the University of Rochester. Upon being irradiated by the lasers, a target generates an x-ray flash, similar to the one produced during a nuclear explosion in space, which can then be used to bombard various objects—in this case, some Earth rocks acting as asteroid mimics, and (crucially) some bona fide meteoritic material too. King's Omega experiments have tried to answer a basic question: “How much material actually gets removed from the surface?” says King. Although experiments with Omega cannot produce the kickback seen in the Z machine, King's team has used a more realistic and diverse series of targets and blasted them with x-rays hundreds of times. The hope is that these results—which the team is still considering—will hint at how different types of asteroids will react to being nuked. That, in turn, should clue us in to how effectively, or not, actual asteroids would be deflected by a nuclear explosion.

“I wouldn't say one [experiment] has definitive advantages over the other,” says King. “Like many things in science, each approach can yield insight along different ‘axes,' if you will, and no experimental setup gives you the whole picture.” These experiments are likely the first in a long line of increasingly sophisticated tests. As with King's experiments, Moore hopes to place a variety of materials in the Z machine, including targets that can stand in for the wetter, more fragile carbon-rich asteroids that astronomers commonly see in near-Earth space. And it's expected that all this experimental data will be fed back into those nuke-versus-asteroid computer simulations, helping to verify the virtual results.
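What both machines are probing is, at bottom, rocket-style momentum bookkeeping: vaporized surface material jets away, and the body recoils in the opposite direction. A hedged sketch with purely illustrative numbers (none of these figures come from the Sandia or Omega measurements):

    import math

    # Momentum balance for ablation-driven deflection: x-rays vaporize a
    # thin surface layer, the vapor jets off at speed v_exhaust, and the
    # remaining body recoils the other way (conservation of momentum).
    # All numbers below are illustrative assumptions, not measured results.

    def recoil_velocity(ablated_mass_kg, exhaust_speed_m_s, body_mass_kg):
        """Recoil speed from m_ablated * v_exhaust = M_body * v_recoil."""
        return ablated_mass_kg * exhaust_speed_m_s / body_mass_kg

    # Lab scale: a ~1 g target losing ~1 mg of vapor at ~10 km/s
    print(f"lab target: ~{recoil_velocity(1e-6, 1e4, 1e-3):.0f} m/s")  # ~10 m/s

    # Asteroid scale: a 100 m rocky body (density ~2000 kg/m^3) shedding
    # ~100 tonnes of vapor at ~3 km/s after a standoff nuclear burst
    asteroid_mass = 2000 * 4 / 3 * math.pi * 50**3   # ~1e9 kg
    dv = recoil_velocity(1e5, 3e3, asteroid_mass)
    print(f"asteroid: ~{dv * 100:.0f} cm/s")         # ~29 cm/s

Set against the lead-time sketch earlier, a nudge in the tens of centimeters per second would be ample even with modest warning; that is precisely why pinning down how much mass actually comes off, and how fast, is the experimental question.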
Inevitably, Earth will one day be imperiled by a dangerous asteroid. There's still some way to go before scientists can be near certain that the nuclear asteroid-stopping technique will succeed, but comfort should be taken from the fact that they are researching this scenario, just in case it's our only protection against the firmament. “We are your taxpayer dollars at work,” says Burkey.

Robin George Andrews is an award-winning science journalist based in London and the author, most recently, of How to Kill an Asteroid: The Real Science of Planetary Defense.