As a veteran of the war on terror, I have spent the past year watching Immigration and Customs Enforcement officers expand their operations across the country on an unprecedented scale and with a new faux-military bearing. Witness Thursday, when White House border czar Tom Homan talked about Minneapolis as a “theater” for his agents. Setting aside that ICE is not, in fact, part of the armed services of the US—it's a civilian law enforcement agency—it is useful to break down its operations through a military lens to find the strategic implications. Squads in Iraq and Afghanistan often “dressed down” for counterinsurgency missions, where building mutual trust with locals was more important than showing up as if ready for World War III. Sometimes, units took the opposite posture when a show of force was needed. ICE agents—civil representatives of the law, remember—often show up to raids kitted out as if they're preparing to enter Fallujah circa 2004 against a well-entrenched enemy equipped with machine guns, mortars, and explosive vests. They also arrive in a mishmash of uniforms, hoodie sweatshirts, military gear, and masks, leaving everyone confused as to whether these are law enforcement officers or just some random dudes. Where soldiers tailor their armament to the mission, ICE agents carry weapons and equipment inappropriate for simple search-and-seizure missions: ballistic helmets, bullet-resistant plate carriers, magazine drop pouches on their legs, weapons loaded with a cornucopia of optics, silencers, and other attachments you wouldn't catch your average infantryman dead with (the added weight makes a weapon less effective in a firefight). The people they target have not declared an intent to resist violently and with deadly force. Yet ICE shows up with maximum force and intimidation—often roping in people on the sidelines and increasing the media impact of their actions. Generally, ICE agents move with zero military sense.
They bunch up and cluster around their target or in doorways; in a combat zone, soldiers clustering up like this could be annihilated by a single grenade or burst from an automatic weapon. It also demonstrates that ICE agents often have very little clue what their mission is. ICE and federal agents have publicly killed two unarmed American citizens. Rather than resembling any type of recognizable urban warfare formation, ICE agents' tactics often seem to be modeled off what they've seen in movies or imbibed via TV shows or video games. Given that training for ICE officers has dropped to about six and a half weeks, TV and movies might be their biggest source of knowledge. It suggests that ICE agents aren't moving with a tactical focus; they're doing what they think will look cool and intimidating in photographs. When you get down to the basics of what ICE does—seizing and restraining people—they approach it in a chaotic fashion. Videos show officers bunching up, milling around, often turning on bystanders. They smash windows haphazardly, abandon cars with their engines still running alongside the road, and operate in a manner one Maine sheriff referred to as “bush-league policing.” Military tactics for seizure and search are far more precise. In Iraq and Afghanistan, 20 years of counterinsurgency operations demonstrated that escalating combat for no reason often just creates more violence. Units found that de-escalation often created fewer civilian casualties, which in turn meant that those civilians' families were less prone to take up arms against coalition forces, which reduced coalition casualties. Escalation, by the same token, often undermined US policy. As a result, units trained to de-escalate where possible, while preserving the option to return violence with proportional but catastrophic lethality for anyone who initiated violence. ICE tactics rarely seek to de-escalate but instead ramp up violence via threats, intimidation, and assault.
Military strategy is most broadly defined as combining the ways (tactics) and means (resources) to achieve a specific end. By that measure, ICE's campaign is confusing: if the administration hoped to increase deportations while keeping public support, ICE tactics are counterproductive. This theory, which is commonly seen in authoritarian states like the People's Republic of China or Russia, encompasses many realms of irregular conflict: media warfare, psychological warfare, and legal warfare (sometimes called “lawfare”), to name a few. Regimes then use the surrounding chaos and noise to continue their other initiatives. State-backed violence, too, can be a continuation of policy. Stephen Miller, White House deputy chief of staff for policy, has repeatedly urged ICE agents to escalate their tactics and increase arrests, and he broadcast to agents that they had “federal immunity.” Since that last remark in October 2025, ICE tactics have become far more violent. But historically speaking, host nations—and in this case we speak of California, Illinois, Oregon, Minnesota, and Maine, I guess—simply do not take well to long-term abusive behavior. If the political ends of these tactics are not to sow profound mistrust, confusion, doubt, anger, and division, then they appear misdirected and misapplied. Tactics disconnected from strategy ultimately result in strategic failure. ICE is not fighting enemies of the United States. It is vital that we remember this and resist normalizing military-style tactics in our streets. ICE agents and US military members do share one very real commonality: We all swore an oath to uphold and defend the Constitution of the United States. While the military upholds this, ICE agents flaunt their extraconstitutional actions on our TV and phone screens every day. To me, ICE's operations seem like a fever dream of the worst parts of the war on terror coming back home to roost. Consider the ineffective facial recognition scans of that era (BATS and HIIDES, anyone?).
We brought the mechanisms of a surveillance state to war with us, and they snuck into our duffel bags and followed us home. We emphasized accomplishing the mission no matter what and bred a generation of yes-men. But as we watch the least effective and most morally objectionable of our tactics come home and be used amongst and against us, we are left with a profound feeling of betrayal. This, too, is a result of misaligned tactics. Let us know what you think about this article. Submit a letter to the editor at mail@wired.com. WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast.
OpenAI, the company best known for its AI applications like ChatGPT and Sora, is reportedly working on a social media network designed to be free from AI bots. The catch is that users may need to have their irises scanned for access. Forbes reported Wednesday, citing unnamed sources familiar with the project, that the platform is still in very early stages and is being developed by a small team of fewer than 10 people. To keep bots out, the team is reportedly considering implementing identity verification through Apple's Face ID or through the Orb, an Orwellian eye-scanning device made by a company that was also conveniently founded by OpenAI CEO Sam Altman. This new social media platform seems to be Altman's latest attempt to solve a problem he himself and his fellow “architects of AI” helped create. Verification requires humans to get their eyes scanned by the soccer-ball-sized Orb device in exchange for a unique digital ID code stored on their phone. In theory, this could help filter out annoying AI bots from gaming, social media platforms, or even financial transactions like concert ticket sales. So far, roughly 17 million people have been verified using the Orb, a far cry from the company's stated goal of reaching one billion users. More broadly, the idea of getting your eyes scanned by a company founded by one of Silicon Valley's most controversial figures isn't an easy sell. Unsurprisingly, several countries have already temporarily banned or launched investigations into the company's biometric technology, citing concerns around privacy and data security. Now, that tech seems like it could be making its way to a new social media network. And while OpenAI has proven it can build popular apps, it's far from clear whether a new social network could meaningfully pull people away from existing platforms, especially when you add biometric verification as a barrier.
ChatGPT alone now reaches roughly 700 million weekly users, and the company's AI video app racked up about one million downloads within five days of its launch, and both already allow users to generate and share AI-generated content. Altman himself has repeatedly voiced his frustration with bots online. He went on to theorize why this might be happening, pointing to people picking up “quirks of LLM-speak” and also “probably some bots.” “But the net effect is somehow AI twitter/AI reddit feels very fake in a way it really didn't a year or two ago,” Altman wrote. A few days earlier, Altman wrote in another post that he had never taken the dead internet theory seriously, “but it seems like there are really a lot of LLM-run twitter accounts now.” The dead internet theory claims that since around 2016, much of the internet has been dominated by bots and AI-generated content rather than real human activity. But maybe there is someone other than Altman who could be trusted to find a solution.
When you purchase through links on our site, we may earn an affiliate commission. Nvidia CEO Jensen Huang insisted that the global fab construction represents new capacity growth rather than relocation. Huang also said that TSMC must expand worldwide to meet surging AI-driven demand for chips and keep Taiwan as its stronghold. Huang explained that demand for wafers is now outpacing what Taiwan's power grid can physically support, making overseas production a necessity rather than a political manoeuvre. He said that while TSMC will build and expand fabs in the U.S., Europe, and Japan, a substantial share of its output will remain in Taiwan, as no other region can replace the island's manufacturing ecosystem. According to Huang, spreading production across multiple regions strengthens resilience for both Taiwan and the U.S. and prevents supply bottlenecks as AI hardware volumes rise sharply. For Nvidia, which sells everything it can produce both in Taiwan and the U.S., vast manufacturing capacities are crucial. To that end, production capacities for DRAM and NAND in Japan, South Korea, Taiwan, Singapore, and eventually the U.S. are just as important to Nvidia as logic production. Huang said the company is coordinating closely with all major HBM suppliers — Samsung Electronics, SK hynix, and Micron Technology — to secure the volumes required for its next-generation AI accelerators, namely Rubin. When it comes to geopolitics, Huang said that lawmakers must balance three competing goals: national security, technological leadership, and economic leadership. During his Taiwan visit, Huang plans to attend internal Nvidia meetings and Lunar New Year events, as well as to meet TSMC founder Morris Chang and chairman C.C. Anton Shilov is a contributing writer at Tom's Hardware.
Tom's Hardware is part of Future US Inc, an international media group and leading digital publisher.
The Webb space telescope has allowed us to peer farther back into the universe than ever before, providing a rare glimpse of the cosmos a mere 280 million years after its very existence began. Using Webb's Near-Infrared Spectrograph, scientists have confirmed a new cosmic record of the most distant galaxy ever observed. Galaxy MoM-z14 existed just 280 million years after the big bang, providing clues to what the universe was like during its infancy and how it has evolved over time. “With Webb, we are able to see farther than humans ever have before, and it looks nothing like what we predicted, which is both challenging and exciting,” Rohan Naidu, an astronomer at the Massachusetts Institute of Technology's (MIT) Kavli Institute for Astrophysics and Space Research, said in a statement. Webb first spotted MoM-z14, an exceptionally luminous and compact galaxy around 50 times smaller than the Milky Way, in May 2025. Following its discovery, scientists had to confirm just how far away the galaxy is. Due to the universe's sheer size, distant objects appear as they existed millions, or even billions, of years ago. The universe is also expanding, so expressing physical distances in terms of light years becomes a tad trickier when looking at objects this far away. MoM-z14 is surprisingly luminous, adding to a growing list of exceptionally bright galaxies found in the early universe that are 100 times brighter than theoretical studies predicted. Webb's previous observations of early galaxies have shown that the stars inhabiting them have high amounts of nitrogen, which may contribute to their brightness. “We can take a page from archeology and look at these ancient stars in our own galaxy like fossils from the early universe,” Naidu said.
“Except in astronomy we are lucky enough to have Webb seeing so far that we also have direct information about galaxies during that time.” Although those ancient stars would not have had enough time to produce such high amounts of nitrogen, scientists believe the dense nature of the early universe may have resulted in supermassive stars capable of producing more nitrogen than the stars observed in the local cosmos. “To figure out what is going on in the early universe, we really need more information,” Yijia Li, a graduate student at Pennsylvania State University and a member of the research team, said in a statement. “It's an incredibly exciting time, with Webb revealing the early universe like never before and showing us how much there still is to discover.”
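The article's point about distance and expansion can be made concrete with the standard cosmological redshift relation (a general-background sketch, not a figure from the study): light emitted when the universe's scale factor was smaller arrives stretched,

```latex
1 + z \;=\; \frac{\lambda_{\text{obs}}}{\lambda_{\text{emit}}} \;=\; \frac{a(t_{\text{now}})}{a(t_{\text{emit}})},
```

so a galaxy observed as it was 280 million years after the big bang sits at a very high redshift, and because space has kept expanding while its light traveled, its present-day (comoving) distance is considerably larger than the roughly 13.5 billion light-years of light-travel time would suggest.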
Earlier this month, Joseph Thacker's neighbor mentioned to him that she'd preordered a couple of stuffed dinosaur toys for her children. She'd chosen the toys, called Bondus, because they offered an AI chat feature that lets children talk to the toy like a kind of machine-learning-enabled imaginary friend. With just a few minutes of work, Thacker and a web security researcher friend named Joel Margolis made a startling discovery: Bondu's web-based portal, intended to allow parents to check on their children's conversations and Bondu's staff to monitor the products' use and performance, also let anyone with a Gmail account access transcripts of virtually every conversation Bondu's child users have ever had with the toy. In total, Margolis and Thacker discovered that the data Bondu left unprotected—accessible to anyone who logged in to the company's public-facing web console with their Google username—included children's names, birth dates, family member names, “objectives” for the child chosen by a parent, and, most disturbingly, detailed summaries and transcripts of every previous chat between the child and their Bondu, a toy practically designed to elicit intimate one-on-one conversation. Bondu confirmed in conversations with the researchers that more than 50,000 chat transcripts were accessible through the exposed web portal, essentially all conversations the toys had engaged in other than those that had been manually deleted by parents or staff. “It felt pretty intrusive and really weird to know these things,” Thacker says of the children's private chats and documented preferences that he saw. “Being able to see all these conversations was a massive violation of children's privacy.”
When WIRED reached out to the company, Bondu CEO Fateen Anam Rafid wrote in a statement that security fixes for the problem “were completed within hours, followed by a broader security review and the implementation of additional preventative measures for all users.” He added that Bondu “found no evidence of access beyond the researchers involved.” (The researchers note that they didn't download or keep any copies of the sensitive data they accessed via Bondu's console, other than a few screenshots and a screen-recording video shared with WIRED to confirm their findings.) “We have communicated with all active users about our security protocols and continue to strengthen our systems with new protections,” Anam Rafid added, noting that the company also hired a security firm to validate its investigation and monitor its systems in the future. The researchers' glimpse of Bondu's backend showed just how detailed the information it stored on children was: the company kept histories of every chat to better inform the toy's next conversation with its owner. (Bondu thankfully didn't store audio of those conversations, auto-deleting it after a short time and keeping only written transcripts.) Even now that the data is secured, Margolis and Thacker argue that it raises questions about how many people inside companies that make AI toys have access to the data they collect, how their access is monitored, and how well their credentials are protected. Margolis adds that this sort of sensitive information about a child's thoughts and feelings could be used for horrific forms of child abuse or manipulation. “To be blunt, this is a kidnapper's dream,” he says.
Bondu's Anam Rafid responded to that point in an email, stating that the company does use “third-party enterprise AI services to generate responses and run certain safety checks, which involves securely transmitting relevant conversation content for processing.” But he adds that the company takes precautions to “minimize what's sent, use contractual and technical controls, and operate under enterprise configurations where providers state prompts/outputs aren't used to train their models.” Bondu didn't respond to WIRED's question about whether the console was programmed with AI tools. Warnings about the risks of AI toys for kids have grown in recent months but have largely focused on the threat that a toy's conversations will raise inappropriate topics or even lead children to dangerous behavior or self-harm. NBC News, for instance, reported in December that AI toys its reporters chatted with offered detailed explanations of sexual terms, tips about how to sharpen knives, and even seemed to echo Chinese government propaganda, stating for example that Taiwan is a part of China. Bondu, for its part, touts its conversational guardrails: “We've had this program for over a year, and no one has been able to make it say anything inappropriate,” a line on the company's website reads. Yet at the same time, Thacker and Margolis found that Bondu was leaving all of its users' sensitive data entirely exposed. “This is a perfect conflation of safety with security,” says Thacker. “Does ‘AI safety' even matter when all the data is exposed?” Thacker says that prior to looking into Bondu's security, he'd considered giving AI-enabled toys to his own kids, just as his neighbor had.
A cloud hanging over Seattle is usually a good thing, if you're here for the rain, or if you work in that corner of the tech industry. But the cloud of economic uncertainty is a less welcome occurrence. Those were the themes on KUOW's “Booming” podcast, which sounded more like “dooming” this week as it dug into Amazon's “right-sizing,” its elimination of pandemic-fueled corporate “bloat,” and what role AI is playing in the company's largest-ever reduction in force. It's a trend being seen across U.S. corporations — UPS will cut 30,000 more jobs this year — as The Wall Street Journal reported. “I am very nervous about what's happening,” Joe Nguyen, the new president and CEO of the Seattle Metropolitan Chamber of Commerce, told KIRO 7 in a report about job losses. Not only is the market being flooded with thousands of laid-off tech workers, but tech-related job postings also remain stuck well below pre-pandemic levels in Seattle, as GeekWire reported in December. At least one official always manages to find the silver lining in cloudy Seattle.
Some of the instructions don't give any guidance on how to do it; some specify which libraries to use. There are a lot of ways to do instrumentation... I'd be very curious HOW exactly the models fail. Are the test sets just incredibly specific about what output they expect, so you get a lot of failures because of tiny subtle mismatches? OTel libraries are often complicated to use... without reading the latest docs or source code this would be quite tricky. Some models have gotten better at adding dependencies, installing them, and then reading the code from the respective directory where dependencies get stored, but many don't do well with this. All in all, I'm very skeptical that this is very useful as a benchmark as is. I'd be much more interested in tasks like: here are trace/log outputs, here is the source code, find and fix the bug.

But that was it; different parts by different teams ended up with all kinds of different behaviors. As for the AI side, this is something where I see our limited context sizes causing issues when developing architecture across multiple products. A bunch of parallel agents on separate call stacks that don't block on their logical callees is a slop factory.
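One concrete example of why "just read the docs" matters here: even the benchmark's "standard OTEL environment variable" requirement hides spec subtleties. A stdlib-only sketch (the helper name is mine, not from any SDK) of how, per the OTLP exporter spec for OTLP/HTTP, a signal-specific variable is used verbatim while the generic one gets the per-signal path appended:

```python
import os

def resolve_traces_endpoint(env: dict) -> str:
    """Resolve the OTLP/HTTP traces endpoint from standard env vars.

    Simplified sketch of the spec's precedence rules:
      1. OTEL_EXPORTER_OTLP_TRACES_ENDPOINT is used as-is (path included).
      2. OTEL_EXPORTER_OTLP_ENDPOINT is a base URL; /v1/traces is appended.
      3. Otherwise, fall back to the spec's default OTLP/HTTP endpoint.
    """
    specific = env.get("OTEL_EXPORTER_OTLP_TRACES_ENDPOINT")
    if specific:
        return specific  # signal-specific variable wins, used verbatim
    generic = env.get("OTEL_EXPORTER_OTLP_ENDPOINT")
    if generic:
        return generic.rstrip("/") + "/v1/traces"  # per-signal path appended
    return "http://localhost:4318/v1/traces"  # default OTLP/HTTP endpoint

if __name__ == "__main__":
    print(resolve_traces_endpoint(dict(os.environ)))
```

A model (or human) that appends `/v1/traces` to the signal-specific variable, or forgets to append it to the generic one, sends traces into the void while the code otherwise looks perfectly reasonable.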
HN editorialized: "OTelBench: AI struggles with simple SRE tasks (Opus 4.5 scores only 29%)." The task:

> Your task is: Add OTEL tracing to all microservices.
> Requirements:
> Instrumentation should match conventions and well-known good practices.
> Instrumentation must match the business domain of the microservices.
> Traces must be sent to the endpoint defined by a standard OTEL environment variable.
> Use the recent version of the OTEL SDK.

I really don't think anything involving multiple microservices can be called "simple," even to humans. Perhaps to an expert who knows the specific business's domain it is.

I've had to work in systems where events didn't share correlation IDs; I had to go in and filter entries down to microseconds to get a small enough number of entries that I could trace what actually happened between a set of services. From what I've seen on the enterprise software side of the world, a lot of companies are particularly bad at SRE, and there isn't a great amount of standardization. Enterprise app observability is purely a responsibility of each individual application/project manager. There is virtually no standardization or even shared infra; a team just stuffing plaintext logs into an unconfigured Elasticsearch instance is probably above median already. There is no visibility for anything across departments, and more often than not, not even across apps in a department.

These aren't challenging things to do for an experienced human at all. But it's such a huge pain point for these models! It's hard for me to wrap my head around how these models can write surprisingly excellent code but fall down in these sorts of relatively simple troubleshooting paths. Very few people start their careers as SREs; it's generally something they migrate into after enjoying it and showing aptitude for it. With that said, I wouldn't expect this wall to hold up for too long. There has been a lot of low-hanging fruit teaching models how to code. When that is saturated, the frontier companies will likely turn their attention to honing training environments for SRE-style debugging.
You could make as many arbitrary bullshit abstractions as you like and call them good, as people have done for years with OOP. It doesn't matter at all; any result would do in those cases. But when you want to hit a specific gRPC endpoint, you need a specific address, and the method expects a specific contract to be honored. When you want the LLMs to implement a solution that captures specific syscalls from specific hosts and sends traces to a specific platform, using a specific protocol, consolidating records in a specific bucket... you have one state that satisfies your needs and 100 requirements that all need to be fulfilled. It either meets all the requirements or it's no good. It truly is different from vibing, and LLMs will never be able to do this.
The models are already so good at the traditionally hard stuff: collecting that insane amount of detailed knowledge across so many different domains, languages, and software stacks. I wouldn't touch this with a pole if our MTTR depended on it being successful, though. MCP servers for monitoring tools are making our developers more competent at finding metrics and issues. It'll get there, but nobody is going to type "fix my incident" in production and have a nice time today, outside of the simplest things, which, if they are possible to fix like this, could've been automated already anyway. But getting from writing a runbook to automating it sometimes takes time, so those use cases will grow.

Is it clicking a different result from the same search? It's possible that the requirements here are not clear, given that the instructions don't detail how to handle such a situation, and it's not obvious to me as a human.
> When an app runs on a single machine, you can often trace an error by scrolling through a log file. But when it runs across 50 microservices, that single request gets scattered into a chaotic firehose of disconnected events.
Yep, this is about Google. No one else has quite the same level of clusterfuck, and there's going to be no training data for LLMs on this.

In general, for those tasks the question is more "how would a human do it?". If it's impossible for a human because your tooling is so bad you can't even get the logs across services for a single ID, that seems like a pretty serious design issue. Looking at the prompt, though, this is also not very representative. How do you expect new hires to onboard? This seems like typical work in any business that isn't trivial.

Facebook (I've worked at Meta and Google among others, so a good way to compare extremes) is entirely a monolith. You type a line of code, hit refresh, and you see it running fully in the context of everything else your dev server does. Every server running Facebook runs the exact same image. That's not to say Hack is a perfect language or anything; it's basically PHP made to look and act like Java, which isn't great. But the fact is you never think about how the code runs and interacts in the context of a microservice environment. Everyone who's worked at both Meta and Google has the opinion that Meta moves faster, and this is part of the reason.

Some companies have architectures that can't deploy like this. That is the reason you move to microservices. It's not at all a developer-velocity win; it's just needed if you have frameworks that don't allow you to run and deploy "all the code ever written in the company" in a reasonable way.
You need to break it up into modular pieces with defined boundaries so that you only run the parts you need as you develop (defined boundaries are a dev win, sure, but that can be done without microservices). Google has gotten to the point where things are really fine-grained and honestly chaotic. Moving a portion of code to its own microservice is basically a promo-bait six-month project, often done without justification other than "everything should be its own microservice". In my time at Google I never heard "what benefit do we get if this is a microservice?"; it's just assumed to always be a good thing. Fifty interacting microservices to go through in a trace is at the point where the only place I've seen such a thing is Google.
The only other benchmark I've come across is https://sreben.ch/ ... certainly there must be others by now?

Even when it's not particularly effective, the additional information provided tends to be quite useful.
- initially it wasn't working: plenty of parent/child relationship problems like those described in the post
- so I designed a thin wrapper and used sealed classes for events instead of dynamic spans, plus some light documentation
It took me about a day to implement tracing on the existing codebase, and for new features it works out of the box using the documentation. At the end of the day, leveraging typing + documentation dramatically constrains LLMs to do a better job.

It made me remember when I was working in the J2EE ecosystem shudder

For [1]: instruction.md is very brief, quite vague, and "assumes" a lot of things.
- "Your task is: Add OTEL tracing to all microservices." (this is good)
- "6. I want to know if the microservice has OTEL instrumentation and where the data is being sent." (yeah, this won't work unless you also use an MCP like context7 or provide local docs)
What's weird here is that instruct.md has 0 content regarding conventions, specifically how to name things.
I guess that makes some sense, but being specific in the task.md is a must. Otherwise you're benching assumptions, and those don't even work with meatbags :)

For [2]: instruction.md is more detailed, but has some weird issues:
- "You should only be very minimal and instrument only the critical calls like request handlers without adding spans for business calls \n The goal is to get business kind of transaction" (??? this is confusing, even skipping over the weird grammar)
- "Draw ascii trace diagram into /workdir/traces.txt" (???? why are you giving it harness-specific instructions in your instruct.md? this is so dependent on the agentic loop used that it makes no sense here)
- "Success Criteria: Demonstrate proper distributed tracing \n Include essential operations without over-instrumenting (keep it focused) \n Link operations correctly \n Analyze the code to determine which operations are essential to trace and how they relate to each other." I hope that's not what's actually used in computing the benchmark scores; in that case, you're adding another layer of uncertainty in checking the results...

The idea is nice, but tbf some of the tests seem contrived, your instructions are not that clear, you expect static naming values while not providing any instructions about naming conventions, and so on. It feels like a lot of this was "rushed"?

First of all, familiarity with the OpenTelemetry APIs is not knowledge; they are arbitrary constructs. We are implying that conforming to a standard is the only way, the right way. I would challenge that. Assuming models were good at these tasks, we could only conclude that the tasks were trivial AND sufficiently documented. And if they were good at this type of task (they can be trained to be good cheaply; we know that based on similar acquired capabilities), making a benchmark out of it would be less useful. But I am sure nobody really cares and the author just had to SEO a little bit, regardless of reality.
Also, an LLM is a very advanced autocomplete algorithm.

My takeaway was more "maybe AI coding assistants today aren't yet good at this specific, realistic engineering task"... In my experience, if you have the initial framework setup done in your repo plus a handful of examples, they do a great job of applying OTEL tracing to the majority of your project.

This almost always correlates with customers having similar issues getting things working. It has led us to rewrite a lot of documentation to be more consistent and clear. In addition, we set out a series of examples from simple to complex. This shows up as fewer tickets later, and more complex implementations being set up by customers without the need for support.
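One commenter above describes taming this with a thin wrapper and sealed classes for events instead of dynamic spans. A rough stdlib-only sketch of that idea in Python, using an Enum as the closed set of allowed span names (the event names and the recording tracer are illustrative stand-ins, not from any real codebase):

```python
from enum import Enum

class TraceEvent(Enum):
    """The closed ('sealed') set of spans this codebase may emit.

    An LLM extending the code must pick from this list, or add a
    member here, rather than inventing ad-hoc span names.
    """
    ORDER_RECEIVED = "order.received"
    PAYMENT_CHARGED = "payment.charged"
    SHIPMENT_CREATED = "shipment.created"

class RecordingTracer:
    """Minimal stand-in for a real tracer, for demonstration."""
    def __init__(self):
        self.spans = []

    def start_span(self, name):
        self.spans.append(name)

def start_span(tracer, event):
    # Reject free-form strings: only sealed events become spans.
    if not isinstance(event, TraceEvent):
        raise TypeError(f"span must be a TraceEvent, got {event!r}")
    tracer.start_span(event.value)

tracer = RecordingTracer()
start_span(tracer, TraceEvent.ORDER_RECEIVED)
```

The design point is the same one the commenter makes: the type system carries the naming convention, so the model (or a new hire) can't drift from it.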
Sound Games, a new Seattle startup developing video games that work across multiple platforms with a single purchase, announced $6.5 million in seed funding. The startup is betting on a "pay once, play anywhere" model: players buy a game one time and can play it across PC, console, and mobile devices. The company says its goal is to focus on high-quality, replayable games without heavy in-game purchases or monetization mechanics. Sound Games' first title, Go Ape Ship!, is scheduled to launch Feb. 18. The seed round was led by Point72 Ventures, with participation from Timeless, Daybreak, WOCstar, Hustle Fund, ZVC, and other investors. Sound Games said the funding will be used to ship multiple original titles, build cross-platform technology, and expand its development team.
A new security feature rolled out to select models of the latest iPhones and iPads this week will make it more difficult for law enforcement, spies, and malicious hackers to obtain a person's precise location data from their phone provider. Apple said switching on the feature does not affect the precision of location data shared with apps, or shared with first responders during an emergency call. Hackers also frequently target cell carriers for the sensitive data that they collect on their customers. Over the past year, several U.S. phone giants, including AT&T and Verizon, have confirmed persistent intrusions by China-backed hackers, dubbed Salt Typhoon, seeking phone call logs and messages of senior American officials. Recent threats aside, long-known vulnerabilities in global cellular networks have allowed surveillance vendors to snoop on the location data of individuals anywhere in the world. "Most people aren't aware that devices can send location data outside of just apps," said Miller.
Waymo told the National Highway Traffic Safety Administration (NHTSA) that the child — whose age and identity are not currently public — sustained minor injuries. News of the crash comes as Waymo faces dual investigations into its robotaxis illegally passing school buses. Waymo said in its blog post that its "peer-reviewed model" shows a "fully attentive human driver in this same situation would have made contact with the pedestrian at approximately 14 mph." The company did not release a specific analysis of this crash.
Cheap money amplified this cycle, but this isn't a tech-specific "failure"; it's just how forecasting under uncertainty works.

It's incredible how some engineers assume they understand economics, then proceed to fail on some of its most basic premises.

Those investments force more deliberate planning. By contrast, engineers mostly require a laptop and a company hoodie... That low marginal cost makes it far easier to hire aggressively on expectations and unwind just as aggressively when those expectations change.

Lines with specialty equipment and tooling can also often be sped up. That can allow other jobs to be added to all the functions that support the processes before and after the specialty equipment. New employees also often require training and some apprenticeship time, meaning they can get hired ahead of actual demand.

In tech, the cost of hiring is lower, which makes headcount a much easier speculative bet and layoffs a much easier reset when the bet fails.

My experience with seeing new shifts added is initially with only specific processes, and even then with journeyman-level technicians running a small crew to relieve a bottleneck in production. Alternatively, manufacturers can outsource until they have enough volume to add a shift, but across the economy the net effect is just transferring production from one facility to another.
Alas, gone are the days when engineers too required specialized equipment, like a desktop computer on the desk that you couldn't move with you.

> By contrast, engineers mostly require a laptop and company hoodie... That low marginal cost makes it far easier to hire aggressively on expectations and unwind just as aggressively when those expectations change.

Software engineers also need:
- specialized machinery (at least when they have to upload to some computation cluster or cloud); think, for example, of the current cost of GPU/TPU clusters for AI
- tooling: depending on the sector, the license costs for sector-specific business software can be as expensive as specialized machinery
- mental capacity (instead of physical capacity)

And (assumption again) the factory boss doesn't have an incentive to increase idle worker numbers, whereas a dev manager often benefits from being in charge of a larger number of hardly working people. No, we need a full Kubernetes cluster to run a service used by 10 secretaries. No, we can't just use PostgreSQL as a queue; we definitely need Apache Kafka for 1 msg per second.

Does anyone know what the typical tool cost is for a factory worker? (And the tool cost for a factory worker can be zero if you're hiring for a second shift - they would use the same tools as the first shift, just at different times.
In contrast, I don't think there are very many places that ask programmers to use the same laptop in shifts.)

And on top of that, they often have many biases that they fail to account for (e.g., a preference for neat-and-tidy systems, leading them to get seduced by overly simple neat-and-tidy explanations, e.g., Econ 101). You see this in legacy firms where it takes 10 people to make a change because each person has a small slice of the permissions required to effect the change. This is not how you make high-growth firms.

The reason is simple - basic economics is not taught in the public schools, and economics/business is not a required course for an engineering degree. One of the best classes I ever took was a summer class in accounting.

If someone corrects me, I would try to hopefully learn from that. But let's see how the author of this post responds to the GP's (valid, from what I can tell) critique. Edit: Looks like they have already responded (before I wrote it, but I forgot to see their comment where they said that it's not at the scale or frequency we see in tech).
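The "PostgreSQL as a queue" quip above is more than snark: at low volume, a single table with a state column covers queueing. Here is a minimal sketch, using sqlite3 only so it runs anywhere; a PostgreSQL version would claim rows with `SELECT ... FOR UPDATE SKIP LOCKED` so concurrent workers neither block nor double-claim. Table and column names are illustrative.

```python
import sqlite3

# Table-as-queue sketch. In PostgreSQL the claim step would be:
#   SELECT id, payload FROM jobs WHERE state = 'ready'
#   ORDER BY id LIMIT 1 FOR UPDATE SKIP LOCKED;
# followed by the UPDATE in the same transaction.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE jobs (id INTEGER PRIMARY KEY, payload TEXT, "
    "state TEXT DEFAULT 'ready')"
)
conn.executemany("INSERT INTO jobs (payload) VALUES (?)",
                 [("send-email",), ("resize-image",)])

def claim_job(conn):
    """Claim the oldest ready job; returns (id, payload) or None when empty."""
    row = conn.execute(
        "SELECT id, payload FROM jobs WHERE state = 'ready' "
        "ORDER BY id LIMIT 1"
    ).fetchone()
    if row is None:
        return None
    # The state check makes this a no-op if another worker won the race.
    conn.execute(
        "UPDATE jobs SET state = 'taken' WHERE id = ? AND state = 'ready'",
        (row[0],),
    )
    return row

first = claim_job(conn)    # (1, 'send-email')
second = claim_job(conn)   # (2, 'resize-image')
```

At 1 msg per second this is ample; Kafka buys throughput, partitioning, and replay that such a workload never touches.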
Fortunately we had a kinda-sorta sane monetary policy under the Biden administration once the pandemic started to ebb, but now we've got Mr.

Manufacturers can't hire beyond the places in production where someone can stand and do something. There needs to be some kind of equipment or process for a worker to contribute in some meaningful way, even if it is merely for a projection of increased production (e.g., hiring a second shift for a facility currently running one shift). What I wonder is if in tech, the "equipment" is a computer that supports everything a developer needs. From there, new things can be added to the existing product. Manufacturing equipment is generally substantially more expensive than a computer and supporting software, though not always. Might this contribute to the differences, especially for manufacturing that normally runs 24-hour shifts?

From a supply chain management perspective, this does not make sense.

As a result the stock market changed to reward profitability.
We are still feeling this. I do agree that AI is not to blame for this. In fact, I will go further and claim that AI is a net negative that makes this worse for the employer by ultimately requiring more people who average lower confidence and lower capabilities than without, but I say that with a huge caveat. The deeper problem is not market effects or panaceas like AI. The deeper problem is poorly qualified workers and hard-to-identify talent. If the average employed developer were excellent at what they deliver, these people would be easy to identify and tough to fire, like engineers, doctors, and lawyers. If the typical developer is excellent at what they do, AI would be a complete net negative. AI and these market shifts thus hide a lower-level problem nobody wants to solve: qualification.
There's also the thought nobody wants to examine: what if the consumer market's total spend is kind of tapped out?

Business examples include Teflon, Post-it Notes, antibiotics, Linux, Git, and much more. The US Army changed leadership methodologies about 20 years ago to account for this. In the fewest possible words, a leader provides a stated intent and then steps back to monitor while subordinate leaders exercise their own creative initiative to meet that intent. The philosophy before that was called the Military Decision Making Process (MDMP). MDMP is now largely relegated to small personnel teams.

If not for COVID, the ZIRP era would have ended more gently. If not for COVID, there wouldn't have been overhiring and subsequent firing; the market would be as bad as now (or dare I say, *normal*), but it would be stable bad, not whiplash.

If people can't identify qualified professionals without relying on credentials, they probably aren't qualified to be hiring managers.
* Peer reviews in the industry
* Publications in peer-reviewed journals
* Owner/partner of a firm of licensed professionals
* Quantity of surgeries, clients, products, and so forth
* Transparency around lawsuits, license violations, ethics violations, and so forth
* Multiple licenses. Not one, but multiple stacked on top of a base qualification license. For example, an environmental lawyer will clearly have a law license, but will also have various environmental or chemistry certifications as well. Another example is that a cardiologist is not the same as a nurse practitioner or general physician.

Compare all of that against what the typical developer has:
* I have been employed for a long time

More elite developers might have these:
* Author of a published book
* Open source software author of an application downloaded more than a million times

Those elite items aren't taken very seriously for employment consideration despite their effort and weight compared to what their peers offer.

Is the fact that you published a book about some DB architecture 10 years ago going to make you a better team member and deliver things faster? You measure and judge what you can, and ignore the rest, since it can be + or - and you have no clue, not even about the probability spread. Doctors with many licenses may be better, or maybe they like studying more than actually working with patients and may thus be even worse than average; how do you want to measure that?

So, the goal is to not train them and to replace them as conveniently as possible. The result is to not look for talent but instead equate the requirements to a lowest common denominator. That minimizes personnel management friction at cost to everything else.
Simple, the 80% of code monkeys who are not good at what they do will cause way more damage than the "professionals who are excellent at what they do". And outside of tech, I can guarantee you the vast majority of people use LLMs to do less, not to do more or do better.

It's also easily verifiable: supposedly AI makes everyone a 10x developer/worker, and it's been ~3 years now, so where are the benefits? Which company/industry made 10x progress or 10x revenue or cut 90% of their workforce? How many man-hours are lost on AI slop PRs? AI-written tickets which seem to make sense at first but fall apart once you dig deep? AI reports from McKinsey & Co which use fake sources?

We have been able to move our "low cost" work out of India to Eastern Europe, Vietnam, and the Philippines; we pay more per worker, but we need half as many (and can actually train them). Although our business process was already tolerant of low-cost regions producing a large amount of crap: separate teams doing testing and documentation... It's been more of a mixed bag in the "high skill" regions; we have been getting more pushback against training, with people wanting to be on senior+ teams only, due to the LLM code produced by juniors. This is completely new, as it's coming from people who used to see mentoring and teaching as a solid positive in their job.
I can do in 1 or 2 hours what would have previously taken a week.

Or they're actively enshittified, aiming to extract more short-term revenue at the cost of a long-term future...

I truly believe that these new tools will actually hurt the bigger companies and conversely help smaller ones. They are one-size-fits-all behemoths that hospitals have to work against rather than with. What if, instead of having to reach out to the big players, the economics of having a software developer or two on staff made it such that you could build custom-tailored, bespoke software to work "with" your company and not against it? And at least in my niche, you only have a few options, each with their own unique quirks. Instead, EMRs could position themselves as more of a "data provider" where you build bespoke software on top of the underlying storage.
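On the "data provider" idea: many major EMRs already expose at least read access through HL7 FHIR REST APIs, which is the natural surface for bespoke tooling. A minimal sketch of building a standard FHIR Patient search query; the base URL is a made-up placeholder, while `family` and `birthdate` are standard FHIR search parameters:

```python
from urllib.parse import urlencode

# Hypothetical endpoint; a real EMR publishes its own FHIR base URL.
FHIR_BASE = "https://emr.example.com/fhir"

def patient_search_url(family, birthdate=None):
    """Build a FHIR Patient search URL from standard search parameters."""
    params = {"family": family}
    if birthdate is not None:
        params["birthdate"] = birthdate
    return f"{FHIR_BASE}/Patient?{urlencode(params)}"

url = patient_search_url("Smith", "1980-01-01")
# -> https://emr.example.com/fhir/Patient?family=Smith&birthdate=1980-01-01
```

An actual request on top of this is one GET with the hospital's auth scheme; the point is that the query surface is standardized even when the EMR's UI isn't.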
There is nothing stopping someone from building up an app and having people come in to polish it up. Second, "regulatory environment" doesn't actually mean anything, because every part of the industry has different regulations and requirements. There are different standards for what big hospitals can use, and the software requirements differ from those for home care software. So trying to wave this "you can't because of regulations" wand doesn't make sense if you're actually in the business. Third, I was speaking more to the small-to-medium-sized agency. Fourth,

> We don't need hospitals handing over the public's health data to the cheapest person they can find to prompt it all into Claude.

means absolutely nothing, since you don't need to feed health data into a Claude instance to build healthcare apps.

Because the hospitals those practices want to associate with say "we're on Epic and expect you to be as well"? Wife in healthcare management... I overhear this conversation once or twice a week.

Also, from what I've seen, big-city hospitals use a mix of all three.
Which I believe actually creates an opening, as it shows a willingness to use different walled gardens. However, I think there are a lot of opportunities to just build on top of these systems rather than wholesale replace them. Because they're one-size-fits-all and the people who work on them haven't a single designer bone in their body, the interfaces are terribly clunky and slow. Macros exist, but seemingly no one is aware of them. It's ripe for building better interfaces tied into macros behind the scenes.

They consistently try non-standard approaches if they feel like it can improve the standard of care, since their commercialization team can make more money. They can also iterate faster and deliver better outcomes. I'm guessing either EHR software is uncompetitive or nobody has tried it yet. Or it's just because I live in Toronto and we have a really good healthcare system.

And "risk" in this industry could mean any number of different things. In fact, our Microsoft tenant has a much higher level of security than the underlying EMR. I'm quite literally living what I spoke about above. The reason I was brought in was that the current CEO has a very tech-focused mindset; otherwise agencies usually can't afford a full-time software developer. Now, those economics are changing. Also, I haven't heard of an agency that didn't want custom reports built because they found the default ones unsuitable.
So even something like the ability to mainline Power BI reports would be compelling.

The behemoths exist especially, but not exclusively, in that space because regulations (correctly) are steep. In the case of hospital systems, you're talking about the management and protection of both employee and patient data. On the other side, if Epic has a data breach, every hospital shrugs its shoulders. And, even more fundamentally, if Epic as a product sucks ass... well. Hell, at my workplace, we actually have some say, in that leadership asks if we're happy with our various HR software and such, but fundamentally, it all pretty much sucks, and we're currently sitting at the least shitty option we could find, which is far from a solid fit for our smaller company. But it's the best we can do, because none of these suites are designed to be good for people to use; they're designed to check a set of legal and feature checkboxes for the companies they sell to. Honestly, I don't know how you fix this, short of barring B2B SaaS as an entire industry. Now crooks can crack the locks off of NetSuite and steal your whole fucking business without even knowing where the hell your HQ even is, or caring for that matter, and our business universe, if you will, is bifurcated all to hell as a result.
Companies are engaged in constant games of "pin the legal responsibility on someone else" because, to compete, they need internet- and software-based sales and data management systems, but building those systems is a pain in the ass, and then you're responsible if they go wrong. You see these relationships (or lack thereof) all over the place in our modern world, where the people doing the work with these absurdly terrible tools are not given any decision-making power with regard to which tools to use.
But I'm curious what people think the equilibrium looks like. If the "two-tier system" (core revenue teams + disposable experimental teams) becomes the norm, what does that mean for the future of SWE as a career? A few scenarios I keep turning over:
1. Bifurcation - A small elite of "10x engineers" command premium comp while the majority compete for increasingly commoditized roles
2. Craftsmanship revival - Companies learn that the "disposable workforce" model ships garbage, and there's renewed appreciation for experienced engineers who stick around
3. The article argues AI isn't the cause, but it seems like it could accelerate whatever trend is already in motion. If companies are already treating engineers as interchangeable inventory, AI tooling gives them cover to reduce headcount further.

For those of you 10+ years into your careers: are you optimistic about staying in IC roles long-term, or does management/entrepreneurship feel like the only sustainable path?
#2 there will always be craftsmanship companies, but they will always be small companies, or a small team within a big organization.

Many places have experimental or well-contained projects, or not enough ongoing work for full-time, but anywhere that custom-built software is important to the business will always need changes and maintenance. The problem with using contractors is that after their contract is over, they go find another contract, so they may not be available when you would like to re-use them, and then you've got to start over with a new one.
The Pragmatic Engineer argues it's actually trimodal (at least in Europe): https://blog.pragmaticengineer.com/software-engineering-sala...

I'm in the tech industry and have been doing this for 12+ years now. In the beginning, it was because I wanted to live overseas for a few years without a break in my career. Now, it's about survival. I buy my own health insurance (for me and my family) in the marketplace every year (so I'm not tied to an employer), work with multiple clients (so I never really have to worry about getting laid off), and make much more than an FTE. While all my friends in tech are getting laid off or constantly in fear of getting laid off, I don't have to worry. I also find that because I touch so many different technologies, I have to turn down work. I turned down a company last year that wanted me in-house and one this year that would have been too demanding on my schedule. It's also flexible and always remote.
I'm happy to hear it's been working out for you, though. It just felt really difficult to do both the engineering work while trying to do customer development at the same time. The fact that OP has been able to do this for so long, while supporting a family, piqued my interest.

Working on core engineering functionality of a company is essentially the same kind of process. The difference lies in whether you're working on core functionality, or on some iterative experiment that nobody knows will succeed. In that sense, it's not fundamentally different from engineering today.

Software developers are doing R&D: you create new stuff that continues to exist forever. This is unlike most jobs, in which you produce goods and services that are consumed and then you have to produce them again. What if we are getting to the point where all of the low-hanging stuff is invented? For example, every bank has an app which works and rarely needs to be changed. Or take a social network like Facebook: going from zero to a product with 1B users took a lot of development, but since then it's been mostly static, so why employ thousands of developers dedicated to Facebook?
A lot of political painstaking went into which compilers, and even with "better options" there are only really a couple of big-fish workflows for taking an org's ideas to production. What passes as a compiler and what passes for a programming language exploded. I'm very interested in "the final compile target" of these systems AND the output of that still being human readable and influenceable.

In a well-functioning competitive market, no company should be able to rest on its laurels. The problem is industries have consolidated and trustbusters are nowhere to be found. Notice when tech is new (the web, smart phones, AI) there's an initial burst of competitive companies? Ask yourself how many dot-com millionaires would realistically be able to duplicate their success in 2026, given the same product but launching today. Aside from consolidation, discoverability is a huge problem, especially in the era of AI slop.
This is interesting; in my experience it's seemed to be the opposite. In manufacturing, it's much easier to come up with a specific number of employees you need for a given project or contract. If the contract is expected to sign, you may hire early to get ahead of it. If the contract falls through, or an existing contract is cancelled, you know exactly how many people you need to cut to balance labor with your current commitments.

Let's not even bring the gig economy into the discussion. We can of course discuss how many people got into the industry during the COVID heyday and whether they should have, but mostly I think it's about those behemoths having a disproportionately high impact on the entire labour market. Do anything and everything to remain in the "growth stock" category. Also, if you are using AI to even write such a simple blog post, then perhaps corporations are indeed using it for all kinds of purposes too, and that undoubtedly reflects on their hiring.

If you are starting a dry cleaning business, you have the cost of the equipment, rent, and other well-known factors. Cheap capital will result in too many dry cleaners and also too many startups that probably shouldn't have gotten funding. The downside comes in various forms:
1) existing dry cleaning businesses are less profitable because of increased competition, and 2) startups hire scarce engineers and drive up wages, which drives up costs for everyone. Cheap capital is justified because the goal is growth, but it is a blunt instrument that creates hot spots and neglected areas simultaneously.

Chinese firms face significantly more competition than firms in the capitalist US, but overall China's policies are crafted with a more deliberate eye toward their distributional consequences, and notions of the greater good are much more subject to sharp critique and pressure across social and industrial strata. What we are seeing in the US is that policymakers have come to believe that the growth-focused approach is an escape hatch that can be used to reduce the effects of other bad decisions, but at some point the size of the inflated economy gets big enough that it takes on a political life of its own -- post-9/11 defense contractors have dramatically more lobbying and policy-influencing power than they had prior. Today, systemically risky financial industry participants have significantly more political clout than they had before the 2008 correction. In other words, the fabric of (political) reality shifts and it becomes hard to identify what normal would look like or feel like. In my view, AI adds fuel to the existing fire -- it rapidly shifts demand away from software engineers and onto strategists: give the team a strategy, and now with AI the team will have it done in a few weeks. If not, a competitor will do it without poaching anyone from your team. And market forces include both creative and destructive forces.
This then causes the market to dry up again, and if the interest rate hasn't dropped even further, then a lot of companies that need follow-up investment will now get killed off. It's a very Darwinian landscape that results from this, and I've been wondering for years if there isn't a better way to do it.

Capital isn't just cheap; it's strategically directed by the state with goals beyond financial return. The aim is "new quality productive forces" -- slow-burn, systemic growth that reinforces social stability and industrial upgrade, not a boom-bust race for unicorns. The current AI boom is our real-time experiment to see if this is the "better way."
China's model is state-planned, focusing on the "AI Plus" integration of technology across its industrial base, despite investing less ($9.3B) and facing constraints like advanced semiconductor access. We're watching two competing logics: one seeking market-defining breakthroughs through volatile, capital-intensive competition, and another pursuing broad-based, stability-oriented technological integration. The results of this test will show which system better transforms capital into lasting, system-wide advantage.

Central banks rotating into gold to de-risk from USD, combined with their usual slow bureaucratic processes. By the time they've decided that gold needs to be bought, the price has already run up by 50% and it's no longer a good idea to buy, but they still need to execute on their decisions anyway. The USD isn't going anywhere, for the simple reason that the USA can simply counterfeit any non-USD currency and there's nothing the issuer can do about it, whereas if anyone tries to counterfeit USD, they should expect a nice little missile to land inside their room no matter where they are in the world.
China is cashless, and so are some parts of Europe. It seems NK had mostly used it as pocket money for its embassy staff and has now stopped. From https://en.wikipedia.org/wiki/Superdollar#North_Korea: "The U.S. Secret Service estimates that North Korea has produced $45 million in superdollars since 1989. [...] Since 2004, the United States has frequently called for pressure against North Korea in an attempt to end the alleged distribution of supernotes. The U.S. eventually prohibited Americans from banking with Banco Delta Asia."

Really grateful that the opportunities I've been given weren't predicated on knowing things completely irrelevant to my job.

...and terrible for inflation, but that can be blamed on other people.

ZIRP, AI, over-hiring, and a wave of boot camp labour supply I suspect all contribute. Plus we're also likely approaching saturation on a lot of fronts, with attention and ad density saturation. Things like YouTube seem to be on the edge of how many ads they can force-feed without people just not using YT because it's unusable. Combine that with over-hiring and it's bound to hit a wall.
Tech industry management is locked in a death spiral because it lacks the ethics or the vision to create real value, as opposed to extractive predatory value. This applies equally to consumer relationships and employee relationships. And that is - ironically - a failure of brain chemistry and emotional regulation. Because making number go up is a catastrophic addiction in its own right.

The boom in big tech is, I think, almost exclusively driven by dopamine, brain rot, and ad tech, either directly or indirectly. Netflix: binge watching, and they're trying hard to force more ads in. Amazon runs an ad empire, produces video content for binge watching, and has a store designed to maximise consumerism and retail therapy. Even the portions that are more infra, like AWS: I'd bet a large portion of that is fulfilling demand from things that are part of the attention/dopamine economy. But the explosive growth seen in tech over the past 20 years specifically is, I think, almost exclusively driven by adtech and the attention economy. And if that reaches saturation, either this changes tracks to something similar on AI, or the growth plateaus. Other value creation certainly exists and is valid, but I don't see it filling the shoes of adtech. Google and Meta indirectly acknowledged the problem years ago already with their plan to fly balloons over Africa to connect more eyeballs to the internet. It's telling, I think, that a wild plan like balloons in Africa was the plan selected, rather than going for the countless other valuable opportunities as you say.
Firing people in most of Europe is still not as easy as it is in the US. The opposite is also true: it's not that easy to leave your employer, and you have to give 1/3/6 months' notice before leaving, depending on your role/seniority/contract. Sometimes companies even make you sign a 12-month-notice contract clause, where they pay you a fixed monthly bonus but you can't leave without giving 12 months' notice; my SO has signed one.

Downsizing is a perfectly legal reason to fire people in Europe, and it happens all the time when big companies do mass firings. American companies find every time that doing layoffs is very hard for them in Europe. Most recently Amazon, which, unable to lay off its people in Milan, went through secondary tactics like demanding Return To Office (from people who signed hybrid or fully remote contracts) and other tactics involving mobbing or generous severance packages (up to 1 year in salary).
Prior to 2010 almost everything built required a huge amount of development effort, and in 2010 there was still a huge amount of useful stuff to be built. Remember – prior to 2010 a lot of major companies didn't even have basic e-commerce stores because the internet was still a desktop thing, and because of this it really only appealed to a subsection of the population who were computer literate. Post-2010 and post-iPhone the internet broadened massively. During this time almost everything had to be built by hand, and almost everything being built was a good investment because it was so obviously useful. Around 2015 I realised that e-commerce was close to being a solved problem. Both in how most major companies had built out fairly good e-commerce stores, and also in how it was becoming relatively easy for someone to create an e-commerce store with almost no tech skills with solutions like Shopify.

I'd argue somewhere between 2010 and 2020 the tech industry fundamentally changed. It became less about building useful stuff like search engines, social media sites, booking systems, e-commerce stores, etc – these were the obvious use cases for tech. Instead the tech industry started to transition to building what can only be described as "hype products", in which CEOs would promise similar profits and societal disruption as the stuff built before, except this time the market demand was much less clear. Around this time I noticed both I and people I knew in tech stopped building useful stuff and were building increasingly more abstract stuff which was difficult to communicate to non-technical folks. If you asked someone what they did in tech around this time, they might tell you that their company was disrupting some industry with the blockchain, or that they were using machine learning to pick birthday cards using data sourced from Twitter.

I used to bring this up to people in tech, but so many people in tech at this time had convinced themselves that the money was rolling in because they were just so intelligent and solving really hard problems. In reality the money was rolling in because of two back-to-back revolutions – the internet and the smartphone. These demanded almost all industries make a significant investment in technology, and for a decade or so those investments were extremely profitable. Anyone working in tech profited from those no-brainer technical investments. Post-2015 the huge amount of capital in tech and the cheap money allowed people to spend recklessly on the "next big thing" for many years. Companies are realising that a lot of the money they invested in tech in recent years isn't profitable and isn't even that useful. So now they're focusing in on delivering value and building up profit margins. The tech market isn't broken; it's coming back down to reality. A few of us will stick around making the odd improvement and maintaining what's already there, but that boom isn't coming back. Many of us will need to seek new professions.
Instead the tech industry started to transition to building what can only be described as "hype products" in which CEOs would promise similar profits and societal disruption as the stuff built before, except this time the market demand was much less clear.Around this time I noticed both I and people I knew in tech stopped building useful stuff and were building increasingly more abstract stuff which was difficult to communicate to non-technical folks. If you asked someone what they did in tech around this time they might tell you that their company are disrupting some industry with the blockchain or that they're using machine learning pick birthday cards using data sourced from Twitter.I used to bring this up to people in tech but so many people in tech at this time had convinced themselves that the money was rolling in because they were just so intelligent and solving really hard problems.In reality the money was rolling in because of two back to back revolutions – the internet and the smart phone. These demanded almost all industries made a significant investment in technology, and for a decade or so those investments were extremely profitable. Anyone working in tech profited from those no-brainer technical investments.Post-2015 the huge amount of capital in tech and the cheap money allowed people to spend recklessly on the "next big thing" for many years. Companies are realising that a lot of the money they invested in tech in recent years isn't profitable and isn't even that useful. So now they're focusing in on delivering value and building up profit margins.The tech market isn't broken, it's coming back down to reality. A few of us will stick around making the odd improvement and maintaining what's already there, but that boom isn't coming back. Many of us will need to seek new professions. I'd argue somewhere between 2010 and 2020 the tech industry fundamentally changed. 
It become less about building useful stuff like search engines, social media sites, booking systems, e-commerce stores, etc – these were the obvious use cases for tech. Instead the tech industry started to transition to building what can only be described as "hype products" in which CEOs would promise similar profits and societal disruption as the stuff built before, except this time the market demand was much less clear.Around this time I noticed both I and people I knew in tech stopped building useful stuff and were building increasingly more abstract stuff which was difficult to communicate to non-technical folks. If you asked someone what they did in tech around this time they might tell you that their company are disrupting some industry with the blockchain or that they're using machine learning pick birthday cards using data sourced from Twitter.I used to bring this up to people in tech but so many people in tech at this time had convinced themselves that the money was rolling in because they were just so intelligent and solving really hard problems.In reality the money was rolling in because of two back to back revolutions – the internet and the smart phone. These demanded almost all industries made a significant investment in technology, and for a decade or so those investments were extremely profitable. Anyone working in tech profited from those no-brainer technical investments.Post-2015 the huge amount of capital in tech and the cheap money allowed people to spend recklessly on the "next big thing" for many years. Companies are realising that a lot of the money they invested in tech in recent years isn't profitable and isn't even that useful. So now they're focusing in on delivering value and building up profit margins.The tech market isn't broken, it's coming back down to reality. A few of us will stick around making the odd improvement and maintaining what's already there, but that boom isn't coming back. Many of us will need to seek new professions. 
Around this time I noticed both I and people I knew in tech stopped building useful stuff and were building increasingly more abstract stuff which was difficult to communicate to non-technical folks. If you asked someone what they did in tech around this time they might tell you that their company are disrupting some industry with the blockchain or that they're using machine learning pick birthday cards using data sourced from Twitter.I used to bring this up to people in tech but so many people in tech at this time had convinced themselves that the money was rolling in because they were just so intelligent and solving really hard problems.In reality the money was rolling in because of two back to back revolutions – the internet and the smart phone. These demanded almost all industries made a significant investment in technology, and for a decade or so those investments were extremely profitable. Anyone working in tech profited from those no-brainer technical investments.Post-2015 the huge amount of capital in tech and the cheap money allowed people to spend recklessly on the "next big thing" for many years. Companies are realising that a lot of the money they invested in tech in recent years isn't profitable and isn't even that useful. So now they're focusing in on delivering value and building up profit margins.The tech market isn't broken, it's coming back down to reality. A few of us will stick around making the odd improvement and maintaining what's already there, but that boom isn't coming back. Many of us will need to seek new professions. I used to bring this up to people in tech but so many people in tech at this time had convinced themselves that the money was rolling in because they were just so intelligent and solving really hard problems.In reality the money was rolling in because of two back to back revolutions – the internet and the smart phone. 
These demanded almost all industries made a significant investment in technology, and for a decade or so those investments were extremely profitable. Anyone working in tech profited from those no-brainer technical investments.Post-2015 the huge amount of capital in tech and the cheap money allowed people to spend recklessly on the "next big thing" for many years. Companies are realising that a lot of the money they invested in tech in recent years isn't profitable and isn't even that useful. So now they're focusing in on delivering value and building up profit margins.The tech market isn't broken, it's coming back down to reality. A few of us will stick around making the odd improvement and maintaining what's already there, but that boom isn't coming back. Many of us will need to seek new professions. These demanded almost all industries made a significant investment in technology, and for a decade or so those investments were extremely profitable. Anyone working in tech profited from those no-brainer technical investments.Post-2015 the huge amount of capital in tech and the cheap money allowed people to spend recklessly on the "next big thing" for many years. Companies are realising that a lot of the money they invested in tech in recent years isn't profitable and isn't even that useful. So now they're focusing in on delivering value and building up profit margins.The tech market isn't broken, it's coming back down to reality. A few of us will stick around making the odd improvement and maintaining what's already there, but that boom isn't coming back. Many of us will need to seek new professions. Companies are realising that a lot of the money they invested in tech in recent years isn't profitable and isn't even that useful. So now they're focusing in on delivering value and building up profit margins.The tech market isn't broken, it's coming back down to reality. 
A few of us will stick around making the odd improvement and maintaining what's already there, but that boom isn't coming back. Many of us will need to seek new professions. Companies are realising that a lot of the money they invested in tech in recent years isn't profitable and isn't even that useful. So now they're focusing in on delivering value and building up profit margins.The tech market isn't broken, it's coming back down to reality. A few of us will stick around making the odd improvement and maintaining what's already there, but that boom isn't coming back. Many of us will need to seek new professions. The tech market isn't broken, it's coming back down to reality. A few of us will stick around making the odd improvement and maintaining what's already there, but that boom isn't coming back. Many of us will need to seek new professions. 1. prioritizing bets for things that could be as profitable as social media or e-commerce instead of betting on more incremental improvement products.2. Focusing on pricing everything with reoccurring revenue and thus increasing the lifetime cost for end users instead of selling products at a discrete costs and providing end users value3. Treating people as fungible resources and moving them around all the time rather than letting people develop unique expertise skillsets.As a result, any product that can't achieve $10+ billion annual revenue within a couple of years with a ship of Theseus team is deemed a failure and scrapped. Focusing on pricing everything with reoccurring revenue and thus increasing the lifetime cost for end users instead of selling products at a discrete costs and providing end users value3. Treating people as fungible resources and moving them around all the time rather than letting people develop unique expertise skillsets.As a result, any product that can't achieve $10+ billion annual revenue within a couple of years with a ship of Theseus team is deemed a failure and scrapped. 
Maintaining it is much less work. For a while the industry has done a thing where you do e.g. infrastructure in five different ways across ten different teams across three departments. That didn't go so well, either.

You get the feeling that all this merry but ultimately futile kerfuffle was done to fuel the hype of growth, but the actual job positions were completely uncoupled from revenue growth. It doesn't take twice as many workers to service twice as many employees in this industry. When the global expansion didn't have anywhere else to expand to and revenue stopped growing, the workforce-sustaining illusion fell apart.

You and your good reasoning – it means nothing to me. Many of us will indeed seek a new profession: AI engineer. Everybody who laughs now about software engineers being replaced has become serious, for the AI engineer has replaced the software engineer. Let's see who is right in 2035.

Yes, it's kind of obvious to anyone who's looking at the actual work being done: the constant churn of OS updates, the JS framework du jour, apps being updated constantly... It seems to me like a lot of this is just busywork, as if engineers need to justify having a job by releasing inconsequential updates all the time. Bullshit jobs, anyone? I for one would really like things to slow down – we all deserve it!
Here is a Google complaint about not serving lobster bisque: https://x.com/Andercot/status/1768346257486184566?s=20

Zero interest rate phenomenon.

The reason you cannot get shit done at work quickly isn't bureaucracy or management layers standing in the way so much as it's the vested interest in weakening IT and Ops teams so that those higher-ups can retain more of the profit pie for themselves.

My entire job is to make technology so efficient that it fades into the background as a force amplifier for your actual work, and I've only spent ~1/3rd of my 15+ year career actually doing that in some form. I should be the one making sure documentation is maintained so that new hires onboard in days, not months. I should be the one driving improvements to the enterprise technology stack in areas we could benefit from, not some overpriced consultant justifying whatever the CIO has a hard-on for from his recent country club outing. Consultants, outsourcing, and fad-chasing aren't efficient. AI won't magically fix broken pipelines, bad datasets, or undocumented processes, because it is only ever aware of what it is told to be aware of, and none of those groups have any interest or incentive in actually fixing broken things.

The tech industry is woefully and powerfully inefficient. It hoards engineers and then blocks them from solving actual problems in favor of prestige projects. It squanders entire datacenters on prompt ingestion and token prediction instead of paying a handful of basically competent engineers a livable salary to buy a home near the office and fucking fix shit. Its leaders demand awards and recognition for existing, not for actually contributing positively back to society – which leads to stupid and short-sighted decision-making processes and outcomes. And all of this, as OP points out, is built on a history of government bailouts for failures and cheap debt for rampant speculation.

I am admitting up front that this is specific to me, my career, and the specific life experiences I've had with it thus far. Like… I won't even entertain the rest of your comment if you're not even going to read the entirety of mine before vomiting out an "UhM aHkShUaLlY" retort.
You said these businesses aren't efficient, but they have extremely high profit margins, which objectively paints a different picture: they are sufficiently efficient. They are more efficient than most other businesses. All the financial risk in the industry – employees over-hired, low-interest-rate VC funds thrown around haphazardly – is accepted because even a modestly successful software business has high profit margins. The investors who put less than $10 million into Airbnb's series A ended up significant shareholders of a company that reached the same market cap as PNC Financial Services in less than 20 years. PNC spent 180 years getting to that size. Software-driven products have huge profit potential and they can grow fast. If you think I'm being some kind of pedantic loser, fine, so be it. You can put the blindfolds on and ignore a data-supported viewpoint.

If you want stability, go write Java for an insurance company for $85,000/year in Hartford, CT. OP is horrified to discover risky gambling happening in Las Vegas.

Amazon is fundamentally a logistics + robotics company and is one of the worst companies to join for 'stability', as they have razor-thin margins. With almost 1.6M workers, the layoffs there are at least in the 5 figures, and they will not stop doing the easiest thing possible to increase profit margins: taking jobs away from warehouse workers (using robots) and corporate workers (using AI agents).

> Most engineers (including me) spent months grinding LeetCode at least twice in their career, studying system design, and passing grueling 6-round interviews to prove they are the "top 1%."

LeetCode can be easily gamed and cheated and is a waste of time. Now you need to make money for yourself instead of dancing around performative interviews, since an AI agent + human outperforms over 90% of workers doing the mundane work at Amazon anyway. You are being scammed without knowing it.

Amazon the logistics company is paid for by the $100+ per year that its customers just give to Amazon to get basically nothing in return. Compared to the other FAANG companies, these margins are not only thin but terrible; Amazon has the worst margins out of FAANG, AWS or not.

ZIRP (especially the "double tap" ZIRP in 2021/2022) created this monster (bootcamp devs getting hired, big tech devs making "day in the life of" TikTok vids).

contractors give:

- instant scale up/down without layoff optics
- no benefits overhead
- no severance obligations
- easy performance management (just don't renew)

this mirrors what other industries typically do after large restructuring waves ... manufacturing got temp agencies and staffing firms as permanent fixtures post-rust-belt collapse. tech is just catching up to the same playbook.
The actual reason tech companies overhire is that people get promoted based on the number of people "under" them.

Keep the ERP system running, build a new efficiency report, troubleshoot why payroll missed Bob last week because of an un-validated text entry field. Or: because of the years of zero interest, tons more people went into software, so now the field is over-populated, which puts pressure on regular hum-drum software jobs.

- A senior engineer alone is now often sufficient for most tasks; junior engineers seem like a burden rather than a boost during development.
- Companies feel comfortable hiring fast and firing fast.
- The market is now flooded with not-so-good engineers who have good experience with good AI coding assistants – which are already capable of solving 80% of the problems – and who are ready to work for much less than really experienced engineers.

In general, yes, companies overhired software developers, hoping they would keep hyper-growing, but then reality kicked in – this was not sustainable for most businesses.

Tech companies make these systems of automation and provide them to other industries so they can automate. Making a program or an IT system is something you only do once: massive amounts of work to build it, and when it's finished, most workers have to move on. Of course an IT company can continue to expand in perpetuity, but what if they don't have the leadership talent or resources to create a new giant project after one has been finished? "Well, don't hire too many people in the first place to rush your project to completion" – then you get left behind.

Some software is finished, like Microsoft Office 2003, and requires no additional work except to force ads on people. Most [sane] software out there, but not all, has a main development time that is ridiculously small compared to its life cycle (you could code it in binary machine code, for several ISAs, and it would not even matter). It is then extremely hard to justify _HONESTLY_ a permanent income in software development.

The author started their career after 2010, so they are not basing that on personal experience.

CEO failure news example (2013): https://techcrunch.com/2013/08/06/fail-week-kevin-ryan/
'Stock price jumps' example (2026): https://finance.yahoo.com/news/amazon-stock-jumps-pre-market...

Not sure if it is just the articles I picked, but the Amazon example doesn't have a single mention of Andy Jassy!
That article even says “ Wall Street, in keeping with its cheerful attitude about layoffs, […] investors bet that profit-sweetening job cuts, though perhaps not as dramatic as AT&T's, would remain in vogue among large corporations.Large layoffs have always been looked upon favorably by investors. Large layoffs have always been looked upon favorably by investors. But I literally mean if you have a crappy business and put AI into it you're just gonna make your business worse.AI as a tool is not actually a solution for very much. There is remarkably little pushback on company narratives about layoffs or ailing economic fortunes from journalists which is weird because it's more normal that they are not truthful.The Brexit vote is nothing like this though. AI is probably the biggest corporate gaslighting exercise I've ever seen in my entire life. AI is probably the biggest corporate gaslighting exercise I've ever seen in my entire life. Also very common to blame things on health and safety, GDPR, etc. The LLM proponents are trying the same naive move with intangible assets, but dismissed finite limits of externalized costs on surrounding community infrastructure. "AI" puts it in direct competition with the foundational economic resources for modern civilization. The technical side just added a facade of legitimacy to an economic fiction.https://en.wikipedia.org/wiki/Competitive_exclusion_principl...Thus, as energy costs must go up, the living standards of Americans is bid down. Individuals can't fix irrational movements, but one may profit from its predictable outcome. We look forwards to stripping data centers for discounted GPUs. https://en.wikipedia.org/wiki/Competitive_exclusion_principl...Thus, as energy costs must go up, the living standards of Americans is bid down. Individuals can't fix irrational movements, but one may profit from its predictable outcome. We look forwards to stripping data centers for discounted GPUs. 
Thus, as energy costs must go up, the living standards of Americans is bid down. Individuals can't fix irrational movements, but one may profit from its predictable outcome. We look forwards to stripping data centers for discounted GPUs. The point is mass media communication and frictionless money movements across the world and market access which is so freely availible to the small retail investor.It's a recipe for disaster because an extraordinary claim can attract billions of dollars with nothing but hope and dreams to back it up.Imagine if the Wright Brothers had today markets and mass media at their disposal, they'd be showered in billions or even trillions but the actual model didn't make any money because it was R&D It's a recipe for disaster because an extraordinary claim can attract billions of dollars with nothing but hope and dreams to back it up.Imagine if the Wright Brothers had today markets and mass media at their disposal, they'd be showered in billions or even trillions but the actual model didn't make any money because it was R&D Imagine if the Wright Brothers had today markets and mass media at their disposal, they'd be showered in billions or even trillions but the actual model didn't make any money because it was R&D Apparently 73% of LLM resources are used for emotional context support.People need to go outside for a daily walk, and meet real people.
“Who here believes involuntary death is a good thing?” Nathan Cheng has been delivering similar versions of this speech over the last couple of years, so I knew what was coming: that death is bad, and that defeating it should be humanity's number one priority—quite literally, that it should come above all else in the social and political hierarchy. “If you believe that life is good and there's inherent moral value to life,” he told them, “it stands to reason that the ultimate logical conclusion here is that we should try to extend lifespan indefinitely.” Solving aging, he added, is “a problem that has an incredible moral duty for all of us to get involved in.” The gathering was part of a longer, two-month residency (simply called Vitalist Bay) that hosted various events to explore tools—from drug regulation to cryonics—that might be deployed in the fight against death. One of the main goals, though, was to spread the word of Vitalism, a somewhat radical movement established by Cheng and his colleague Adam Gries a few years ago. No relation to the lowercase vitalism of old, this Vitalism has a foundational philosophy that's deceptively simple: to acknowledge that death is bad and life is good. The strategy for executing it, though, is obviously far more complicated: to launch a longevity revolution. Interest in longevity has certainly taken off in recent years, but as the Vitalists see it, the field has a branding problem. The term “longevity” has been used to sell supplements with no evidence behind them, “anti-aging” has been used by clinics to sell treatments, and “transhumanism” relates to ideas that go well beyond the scope of defeating death. Not everyone in the broader longevity space shares the Vitalists' commitment to actually making death obsolete.
As Gries, a longtime longevity devotee who has largely become the enthusiastic public face of Vitalism, said in an online presentation about the movement in 2024, “We needed some new word.” “Vitalism” became a clean slate: They would start a movement to defeat death, and make that goal the driving force behind the actions of individuals, societies, and nations. Consider it longevity for the most hardcore adherents—a sweeping mission to which nothing short of total devotion will do. But that's sort of the point: They believe such a future could exist if Vitalists are able to spread their gospel, influence science, gain followers, get cash, and ultimately reshape government policies and priorities. For the past few years, Gries and Cheng have been working to recruit lobbyists, academics, biotech CEOs, high-net-worth individuals, and even politicians into the movement, and they've formally established a nonprofit foundation “to accelerate Vitalism.” Today, there's a growing number of Vitalists (some paying foundation members, others more informal followers, and still others who support the cause but won't publicly admit as much), and the foundation has started “certifying” qualifying biotech companies as Vitalist organizations. Perhaps most consequentially, Gries, Cheng, and their peers are also getting involved in shaping US state laws that make unproven, experimental treatments more accessible. Vitalism cofounders Nathan Cheng and Adam Gries want to launch a longevity revolution. All this is helping Vitalists grow in prominence, if not also power. Even scientists who think that Vitalist ideas of defeating death are wacky and unattainable—with the potential to discredit their field—have shown up on stage with Vitalism's founders and provided a platform for them at more traditionally academic events. Faculty members from Harvard, Stanford, and the University of California, Berkeley, all spoke at Vitalist Bay events.
“I have very different ideas in terms of what's doable,” he told me. “But that's part of the [longevity] movement—there's freedom for people to say whatever they want.” Many other well-respected scientists attended, including representatives of ARPA-H, the US federal agency for health research and breakthrough technologies. And as I left for a different event on longevity in Washington, DC, just after the Vitalist Bay Summit, a sizable group of Vitalist Bay attendees headed that way too, to make the case for longevity to US lawmakers. In the meantime, though, some ethicists are concerned that experimental and unproven medicines—including potentially dangerous ones—are becoming more accessible, in some cases with little to no oversight. After all, death has become an important part of human culture the world over. Gries, ultimately, has a different view of the ethics here. I was told that around 300 people had signed up for that day's events, which was more than had attended the previous week. That might have been because arguably the world's most famous longevity enthusiast, Bryan Johnson, was about to make an appearance. The key to Vitalism has always been that “death is humanity's core problem, and aging its primary agent,” cofounder Adam Gries told me. Athletic and energetic, he bounded across a stage wearing bright yellow shorts and a long-sleeved shirt imploring people to “Choose Life: VITALISM.” Gries is a tech entrepreneur who describes himself as a self-taught software engineer who's “good at virality.” He's been building companies since he was in college in the 2000s, and grew his personal wealth by selling them. As with many other devotees to the cause, his deep interest in life extension was sparked by Aubrey de Grey, a controversial researcher with an iconic long beard and matching ponytail. He's known widely both for his optimistic views about “defeating aging” and for having reportedly made sexual comments to two longevity entrepreneurs.
(In an email, de Grey said he's “never disputed” one of these remarks but denied having made the other. “My continued standing within the longevity community speaks for itself,” he added.) In an influential 2005 TED Talk (which has over 4.8 million views), de Grey predicted that people would live to 1,000 and spoke of the possibility of new technologies that would continue to stave off death, allowing some to avoid it indefinitely. “It was kind of evident to me that life is great,” says Gries. A second turning point for Gries came during the early stages of the covid-19 pandemic, when he essentially bet against companies that he thought would collapse. “It was kind of like living through The Big Short.” Gries and his wife fled from San Francisco to Israel, where he grew up, and later traveled to Taiwan, where he'd obtained a “golden visa” and which was, at the time, one of only two countries that had not reported a single case of covid. Cheng, likewise, didn't want to experience the “journey of decrepitude” that aging often involves. He had dropped out of a physics PhD a few years earlier after experiencing what he describes on his website as “a massive existential crisis” and shifted his focus to “radical longevity.” (Cheng did not respond to email requests for an interview.) The pair “hit it off immediately,” says Gries, and they spent the following two years trying to figure out what they could do. After all, Gries reasons, that's how significant religious and social movements have happened in the past. He says they sought inspiration from the French and American Revolutions, among others. The Apollo program got people to the moon with less than 1% of US GDP; imagine, Gries asks, what we could do for human longevity with a mere 1% of GDP? It makes sense, then, that Gries and Cheng launched Vitalism in 2023 at Zuzalu, a “pop-up city” in Montenegro that provided a two-month home for like-minded longevity enthusiasts.
The gathering was in some ways a loose prototype for what they wanted to accomplish. Not only was it close to the biotech hub of Boston, but they believed it had a small enough population for an influx of new voters sharing their philosophy to influence local and state elections. “Five to ten thousand people—that's all we need,” he said. The ultimate goal was to recruit Vitalists to help them establish a “longevity state”—a recognized jurisdiction that “prioritizes doing something about aging,” Cheng said, perhaps by loosening regulations on clinical trials or supporting biohacking. This idea is popular among many vocal members of the Vitalism community. It borrows from the concept of the “network state” developed by former Coinbase CTO Balaji Srinivasan, defined as a new city or country that runs on cryptocurrency; focuses on a goal, in this case extending human lifespan; and “eventually gains diplomatic recognition from preexisting states.” Some people not interested in dying have already made progress toward realizing such a domain. The goal of one such project, Vitalia, was to create a low-regulation biotech hub to fast-track the development of anti-aging drugs, though the “city” was more like a gated resort that hosted talks from a mix of respected academics, biohackers, biotech CEOs, and straight-up eugenicists. There was a strong sense of community—many attendees were living with or near each other, after all. A huge canvas where attendees could leave notes included missives like “Don't die,” “I love you,” and “Meet technoradicals building the future!” But Vitalia was short-lived, with events ending by the start of March 2024. Patri Friedman, a 49-year-old libertarian and grandson of the economist Milton Friedman, says he attended Zuzalu, Vitalia, and Vitalist Bay, and envisions something potentially even bolder. His company is exploring various types of potential network states, but he says he's found that medical tourism—and, specifically, a hunger for life extension—dominates the field.
“I can always fucking shoot myself in the head—I don't need anybody's help.”) The past year shows that it may in fact be easier to lobby legislators in states that are already friendly to deregulation. Anzinger and a lobbying group called the Alliance for Longevity Initiatives (A4LI) were integral to making Montana the first US hub for experimental medical treatments, with a new law to allow clinics to sell experimental therapies once they have been through preliminary safety tests (which don't reveal whether a drug actually works). Meanwhile, three other bills that expand access even further are under consideration. Ultimately, Gries stresses, Vitalism is “agnostic to the fixing strategies” that will help them meet their goals. There is, though, at least one strategy he's steadfast about: building influence. “If you want people to take action, you need to focus on a small number of very high-leverage people,” he tells me. That, perhaps unsurprisingly, includes wealthy individuals with “a net worth of $10 million or above,” he says. He wants to understand why (with some high-profile exceptions, including Thiel, who has been investing in longevity-related companies and foundations for decades) most uber-wealthy people don't invest in the field—and how he might persuade them to do so. These “high-leverage” people might also include, Gries says, well-respected academics, leaders of influential think tanks, politicians and policymakers, and others who work in government agencies. Cheng talks of putting out a “bat signal” for like-minded people, and he and Gries say that Vitalism has brought together people who have gone on to collaborate or form companies. There's also their nonprofit Vitalism International Foundation, whose supporters can opt to become “mobilized Vitalists” with monthly payments of $29 or more, depending on their level of commitment. 
In addition, the foundation works with longevity biotech companies to recognize those that are “aligned” with its goals as officially certified Vitalist organizations. “Designation may be revoked if an organization adopts apologetic narratives that accept aging or death,” according to the website. One of them is Shift Bioscience, a company using CRISPR and aging clocks—which attempt to measure biological age—to identify genes that might play a significant role in the aging process and potentially reverse it. Shift cofounder Daniel Ives, who holds degrees in mitochondrial and computational biology, tells me he was also won over to the longevity cause by de Grey's 2005 TED Talk. Ives calls himself the “Vitalist CEO” of Shift Bioscience. He thinks the label is important first as a way for like-minded people to find and support each other, grow their movement, and make the quest for longevity mainstream. “You don't have to convince the mainstream,” says ARPA-H science and engineering advisor Mark Hamalainen. Though “kind of a terrible example,” he notes, Stalinism started small. “Sometimes you just have to convince the right people.” Ives, meanwhile, doesn't want spurious claims made on social media to get lumped in with his company's serious molecular biology. He refers to unnamed companies and individuals who claim that drinking juices, for example, can reverse aging by five years or so. “Somebody will make these claims and basically throw legitimate science under the bus,” he says. Shift's head of machine learning, Lucas Paulo de Lima Camillo, was recently awarded a $10,000 prize by the well-respected Biomarkers of Aging Consortium for an aging clock he developed. Another out-and-proud Vitalist CEO is Anar Isman, the cofounder of AgelessRx, a telehealth provider that offers prescriptions for purported longevity drugs—and a certified Vitalist organization.
(Isman, who is in his early 40s, used to work at a hedge fund but was inspired to join the longevity field by—you guessed it—de Grey.) But he also claimed his company wasn't doing too badly commercially. He views each as an opportunity to “evangelize” his views on “radical life extension.” “I don't see a difference between … dying tomorrow or dying in 30 years,” he says. Vitalism, though, isn't just appealing to commercial researchers. Transhumanism—the position that we can use technologies to enhance humans beyond the current limits of biology—covers a broad terrain, but “Vitalism is like: Can we just solve this death thing first?” In government, he works with individuals like Jean Hébert, a former professor of genetics and neuroscience who has investigated the possibility of rejuvenating the brain by gradually replacing parts of it; Hébert has said that “[his] mission is to beat aging.” He spoke at Zuzalu and Vitalist Bay. Both Brack and Hébert oversee healthy federal budgets—Hébert's brain replacement project was granted $110 million in 2024, for example. Neither Hébert nor Brack has publicly described himself as a Vitalist, and Hébert wouldn't agree to speak to me without the approval of ARPA-H's press office, which didn't respond to multiple requests for an interview with him or Brack. Brack did not respond to direct requests for an interview. Gries says he thinks that “many people at [the US Department of Health and Human Services], including all agencies, have a longevity-positive view and probably agree with a lot of the ideas Vitalism stands for.” And he is hoping to help secure federal positions for others who are similarly aligned with his philosophy. On both Christmas Eve and New Year's Eve last year, Gries and Cheng sent fundraising emails describing an “outreach effort” to find applicants for six open government positions that, together, would control billions of dollars in federal funding.
“We're starting a systematic search to reach, screen, and support the best candidates.” Having been in the field for over 20 years, de Grey tells me, he's seen various terms fall in and out of favor. Those terms now have “baggage that gets in the way,” he says. Though one of the five principles of Vitalism is a promise to “carry the message,” some people who agree with its ideas are reluctant to go public, including some signed-up Vitalists. I've asked Gries multiple times over several years, but he won't reveal how many Vitalists there are, let alone who makes up the membership. Around 30 people were involved in developing the movement, Gries says—but only 22 are named as contributors to the Vitalism white paper (with Gries as its author), including Cheng, Vitalia's Ion, and ARPA-H's Hamalainen. He acknowledges that some people just don't like to publicly affiliate with any organization. Some people worry that associating with a belief system that sounds a bit religious—even cult-like, some say—won't do the cause any favors. For instance, Anzinger—the other Vitalia founder—won't call himself a Vitalist. And Dylan Livingston, CEO of A4LI and arguably one of the most influential longevity enthusiasts out there, won't describe himself as a Vitalist either. Many other longevity biotech CEOs also shy away from the label—including Emil Kendziorra, who runs the human cryopreservation company Tomorrow Bio, even though that's a certified Vitalist organization. Kendziorra says he agrees with most of the Vitalist declaration but thinks it is too “absolutist.” He also doesn't want to imply that the pursuit of longevity should be positioned above war, hunger, and other humanitarian issues.
Still, because Kendziorra agrees with almost everything in the declaration, he believes that “pushing it forward” and bringing more attention to the field by labeling his company a Vitalist organization is a good thing. (He also offered Vitalist Bay attendees a discount on his cryopreservation services.) “There's a lot of closeted scientists working in our field, and they get really excited about lifespans increasing,” explains Ives of Shift Bioscience. “But you'll get people who'll accuse you of being a lunatic that wants to be immortal.” He claims that people who represent biotech companies tell him “all the time” that they are secretly longevity companies but avoid using the term because they don't want funders or collaborators to be “put off.” “You don't have to be public about it.” He says he's spoken to others about “coming out of the closet” and that it's been going pretty well. And he hints that there are now many people in powerful positions—including in the Trump administration—who share his views, even if they don't openly identify as Vitalists. For Gries, this includes Jim O'Neill, the deputy secretary of health and human services, whom I profiled a few months after he became Robert F. Kennedy Jr.'s number two. (More recently, O'Neill was temporarily put in charge of the US Centers for Disease Control and Prevention.) O'Neill has long been interested in both longevity and the idea of creating new jurisdictions. He also served as CEO of the SENS Research Foundation, a longevity organization founded by de Grey, between 2019 and 2021, and he represented Thiel as a board member there for many years. (Tristan Roberts, a biohacker who used to work with a biotech company operating in Próspera, tells me he served O'Neill gin when he visited his Burning Man camp, which he describes as a “technology gay camp from San Francisco and New York.” Hamalainen also recalls meeting O'Neill at Burning Man, at a “techy, futurist” camp.) 
O'Neill's views are arguably becoming less fringe in DC these days. The DC event took place over three days in late April. But the third day was different and made me think Gries may be right about Vitalism's growing reach. It began with a congressional briefing on Capitol Hill, during which Representative Gus Bilirakis, a Republican from Florida, asked, “Who doesn't want to live longer, right?” As he explained, “Longevity science … directly aligns with the goals of the Make America Healthy Again movement.” Bilirakis and Representative Paul Tonko, a New York Democrat, were followed by Mehmet Oz, the former TV doctor who now leads the Centers for Medicare and Medicaid Services; he opened with typical MAHA talking points about chronic disease and said US citizens have a “patriotic duty” to stay healthy to keep medical costs down. The audience was enthralled as Oz talked about senescent cells, the zombie-like aged cells that are thought to be responsible for some age-related damage to organs and tissues. (The offices of Bilirakis and Tonko did not respond to a request for comment; neither did the Centers for Medicare and Medicaid Services.) Whether or not Vitalism starts a revolution, it will almost certainly remain controversial in some quarters. Gries and Cheng often make the case for deregulation in their presentations. But ethicists—and even some members of the longevity community—point out that this comes with risks. Is it really that great not to die … ever? Some ethicists argue that for many cultures, death is what gives meaning to life. Imparato is concerned that Vitalists are ultimately seeking to change what it means to be human—a decision that should involve all members of society. Alberto Giubilini, a philosopher at the University of Oxford, agrees. “Death is a defining feature of humanity,” he says.
Imparato's family is from Naples, Italy, where poor residents were once laid to rest in shared burial sites, with no headstones to identify them. “It speaks to what I consider the cultural relevance of death,” he says. Gries seems aware of the stigma around such “immortality quests,” as Imparato calls them. In his presentations, Gries shares lists of words that Vitalists should try to avoid—like “eternity,” “radical,” and “forever,” as well as any religious terms. He also appears to be dropping, at least publicly, the idea that Vitalism is a “moral” movement. Morality was “never part of the Vitalist declaration,” Gries told me in September. “Our point … was always that death is humanity's core problem, and aging its primary agent,” he said. A decade ago, I don't think the views espoused by Gries, Anzinger, and others who support Vitalist sentiments would have been accepted by the scientific establishment. After all, these are people who publicly state that they hope to live indefinitely, who have no training in the science of aging, and who are open about their aims to find ways to evade the restrictions set forth by regulatory agencies like the FDA—all factors that might have rendered them outcasts not that long ago. Last year's Aging Research and Drug Discovery (ARDD) conference in Copenhagen—widely recognized as the most important meeting in aging science—was sponsored in part by Anzinger's new Próspera venture, Infinita City, as well as by several organizations that are either certified Vitalist or led by Vitalists. There was certainly an air of optimism at the Vitalist Bay Summit in Berkeley. “All the people who want a fun and awesome surprise gift, come on over!” he called out early on the first day. “Raise your voice if you're excited!” The audience whooped in response. He then proceeded to tell everyone, Oprah Winfrey–style, that they were all getting a free continuous glucose monitor.
“You get a CGM!” Plenty of attendees actually attached them to their arms on the spot. This piece has been updated to clarify a quote from Mark Hamalainen.