Seattle is witnessing a curious role reversal in its economic narrative. While the city finally gains ground on perennial challenges like crime and transportation, its traditional growth engine — the tech sector and downtown employment — is beginning to sputter.

The city has for years been a tech, retail and arts hub, but total downtown jobs peaked in 2019 at more than 340,000 workers. Since the pandemic, that number has been creeping downward, hitting approximately 317,000 jobs — roughly on par with 2018 levels, according to a new report from the Downtown Seattle Association (DSA).

“We have become an outlier when it comes to the cost of doing business in our city.” Those costs include the city's JumpStart tax, which targets the payrolls of large employers with high-earning employees, as well as last year's restructuring of Seattle's tax on gross revenue, which shifted the burden from smaller businesses to large ones.

Across the country, companies are cutting headcount as AI tools replace some roles, economic uncertainty lingers, and leaders move to trim what they see as pandemic-era corporate “bloat.”

Key elected leaders on Wednesday acknowledged concerns about rising taxes and government budgets. “I very much appreciate that it is not ideal for our tax environment for businesses to be wildly out of step with neighboring jurisdictions,” Mayor Katie Wilson told the packed hall at the Seattle Convention Center. Wilson and King County Executive Girmay Zahilay both pledged to scrutinize their governments' budgets.

Despite return-to-office mandates, daily worker foot traffic averages just 145,000 — still well below the 226,000 workers who, on average, filled downtown streets each day in 2019, according to the DSA. That figure could dip further as Amazon vacates a seven-story, 251,000-square-foot leased space downtown this spring.
Some firms are doubling down on the city's core: Impinj recently renewed and expanded its downtown office space, while DAT Solutions and Docker both took sublease space along the city's waterfront. “I was with some small businesses earlier this week, and they said, ‘You know, our best customers are big employers.’”
USAFacts, the government-data nonprofit founded by former Microsoft CEO Steve Ballmer, named Lauren Woodman as its new president. “As artificial intelligence reshapes how information is produced and consumed, her leadership will help ensure we continue providing transparent, trustworthy government data to the public.”

Woodman most recently spent five years as CEO of DataKind, a nonprofit that helps social-impact organizations use data science and AI. “The opportunity now is to ensure that transparency, reliable data, and public understanding grow together.”

The nonprofit was previously led by Poppy MacDonald, who stepped down as president last year. Megan Winfield, a former exec at Campspot and Hilton, joined last year as chief technology officer.
If Iran does have underwater explosive drones, why would they boast about it and invite attacks upon that weapon and its deployment systems?

A true UUV attack is probably outside Iran's wheelhouse, but cutting down an attack speedboat to the waterline seems very realistic. That said, a UUV fleet would have downsides for Iran: it's expensive, dependent on imports, and an overmatch for swarm-style attacks. Attack boats are a closer fit for the "cheap/attritable" tactics we see used with Shaheds. After all, Iran can manufacture centrifuges for uranium enrichment. My money is still on low-observable attack craft, or a high-low mix that deprioritizes submersibles. If Iran does have fully submersible UUVs, I'd expect them to be saved for a direct confrontation with the US Navy, not tankers. I could definitely be wrong, though; I don't have any insider info to work with here.

If you have to shift your pre-planned bombing sorties away from, say, local Basij HQ buildings, it takes pressure off of the Iranian government. Assigning aircraft to find/fix/target/track/engage "underwater drone launch points" is probably like searching for a needle in a haystack given the size of Iran's coastline. I wonder how many more caches of drones Iran has lying around.

'Nope.' They'll signal something else later and things will open up. Yes, it sounds crazy right now, but a lot of things sounded similarly crazy 10 years ago, and here we are.

The same guy that told the government of Georgia to add 10,000 votes to his total so he'd win. The same guy that received 0 punishment for either action. Why wouldn't he try something for the mid-terms?

Trump is not all powerful, unless everybody gives up their power.

> but the federal government has no direct control over any of the voting processes

Coming soon, to polling booths near you, "random" ICE activity.
I could see Trump trying this, but I can also see dozens of other people or groups, some richer, more powerful, more competent, and more ruthless than Trump, just waiting in the wings for the guardrails to come off to make a play to rule the territory of the former United States. If he tries and succeeds at this, it's open season.

Lincoln believed that the greatest danger to America came from within, warning that if the nation faltered, it would be due to self-destruction rather than external forces. From Lincoln's famous speech: "At what point then is the approach of danger to be expected? As a nation of freemen, we must live through all time, or die by suicide." Lincoln was ahead of his time and might as well have predicted something like Trump.

If I try to rob a bank with a plastic toy gun, the charge I would be arrested for would not be "bad behavior that had no chance of accomplishing anything"; it would be "bank robbery".
That's not precedent for the federal government declining to hold elections in any way.

They keep making the same mistake: underestimating that your adversary gets a vote, whether it's Iran, trade partners, colleges, Colbert, the Kennedy Center's audience, or Minneapolis.

But they claimed "flawless victory". Both things cannot be true at the same time.

The mainstream media is incredibly generous to him; they parse out the non-crazy from his word salad and report on that.

> but he would have to be pretty bad off to come to that belief.

Well, did you hear that the dead are walking around with no arms and no legs because they were blown off?

If the goal was to hurt China / BRICS and kneecap Iran, it seems on point. It's always hard to predict how the USA will vote when "war" is happening.

While also hurting Europe, South Korea, Japan, the Philippines, and many more. Very on point... It will hurt everyone, Americans included. Oil is a global market, fertilisers are a global market; those are basic inputs for probably every single thing produced in the world. So now all of us around the globe have to pay the price for American imperialism, compounded by the complete shattering of the USA's soft power as an ally. This will only create more animosity against the USA from all sides.

Only because those countries choose for that to be the case. They take every excuse to raise prices (in fact the Netherlands goes further: if the tax on gas rises because prices rise, the amount of tax paid is kept constant when prices drop; so at low gas prices, the tax on gas has at one point reached 72%), but it is fundamentally a government choice.
But the US, Canada, the Netherlands, and a long list of other countries could make this crisis have zero effect on local prices.

The US Government cannot force US companies to sell at a lower domestic price if they can get a higher price exporting. I know that God-Emperor Trump pretends that he can command the oil sector to make less money, but he can't.

> For example, Saudi Arabia and Russia don't do that

Two countries famous for being beacons of free-market capitalism.

The US government can, however, apply an export tariff that's used to subsidize local prices.

The USA, Europe, and many other countries depend on China for manufacturing. I doubt that this is going to solve inflation. But it will fill the pockets of a few people in oil-rich countries that can still export. And given the President's hatred for high interest rates and the next Fed chairman being a garden-variety lickspittle, things are not looking up.

This 'Venezuelan oil' is a pipe dream for the moment. It will take years to get anywhere near completed.

I think this has been the crux of many allegations against China.

You are mistaken to assume there was a goal. Trump has admitted he did this because he was told that Iran was about to attack the U.S.,
not because of any strategic goal. https://youtube.com/shorts/YlkcOjSQVJk

I expect more competency from US Presidential administrations, and also expect more competency and independence from the various parts of the executive branch, which should execute their missions without micro-management from the President; and I further expect far more competence from Congress and the US Supreme Court in setting and enforcing law. By all accounts Israeli leadership also tried to rope Biden and Obama into attacking Iran, but they were stronger presidents who paid more attention to US interests rather than being easily tricked.

https://www.spglobal.com/energy/en/news-research/latest-news... Either way, for sure this will cause further backlog.

With all the technology advancement and improved access to information in the last 30 years, why does it feel that all of this culminates in more disinformation, more pain, and less understanding? Case in point: switching from oil to renewables — which can lower dependency on external actors a lot, since solar panels and windmills have a life span of years, so even if producers suddenly refuse to sell more, one has some time to find an alternative — was done slower than it could have been because of "discussions". For 20 years now I have almost felt that the "climate change or not" discussion is fueled by people who want dependency on oil, so that we don't talk about the issue of a couple of big producer points of failure (USA, Russia, Gulf countries). I'm not sure oil companies are smart enough to finance green groups (with which I agree generally, but that's beside the point) so that public discourse stays in a conflict area (climate) rather than a simple one (independence), but if they are, that would be meta-evil.
We have so much stuff that we just throw things away if a tiny piece gets tarnished or broken. The US's population density is pretty low, and we have a ton of land outside cities that's very sparsely populated. It largely seems that the geopolitics of now is about creating scarcity. How else do you create scarcity except by controlling all the resources?

Neither of which is actually true for oil. We're still finding oil reserves faster than we deplete them, major users such as China are rapidly decarbonizing, and the price was relatively low before the war. But the people in power thought it was true, which is all that matters.

Morons have helped the worst possible people build surveillance and coordination and propaganda networks and are all confused Pikachu about that going exactly the way you should have expected it to go. Technology was also bypassing the "resource" problem at warp speed. Solar panels are the energy future, and thanks to China being actually good at strategic planning, solar can be deployed and utilized far faster than any other energy innovation.
With the sheer abundance possible through bulk solar, water scarcity is an engineering issue, about manufacturing enough plumbing and membranes to desalinate whatever you need. We are fighting an '80s oil war because people voted for an '80s TV personality to run our country after he was known to rape kids, brag about Mein Kampf (even though everyone knows he doesn't read for fun), and attempt to invalidate the 2020 election. Israel saw a clear opening to wildly advance its imperialist ambitions, and because Donald Trump is so damn stupid we have jumped into this absurdist situation, because Donald Trump wanted to be seen shooting first, because he thinks that looks "Strong".
1. This administration is likely leading us into long-term wars and social instability.
2. American Dynamism and Defense Tech (or, more politely, bundled into "DeepTech") are war profiteering, benefiting from greater instability.

Speaking or acting out against the American military complex and Big Tech/VC's role in this carries three big risks:
1. Not being invited to parties ("too much negative energy, we want to be surrounded by positivity" or "don't talk politics").
2. Censorship and reduced following across most major social media platforms.
3. Being economically left out as the world bifurcates into a K-shaped economy.

As a result, most of my community (generally peace-loving, music-loving humans) seem to be taking a position of either "the world has always been at war and will always be at war, I'm just a realist" or "I'm just going to focus on my locus of control and my personal wellbeing" or "if it's gonna happen anyway, I might as well make money off of it".
There is a strong contingent of the resistance as well (still fighting for climate, social justice, peace), but much higher rates of depression and social isolation in this group. So it does not seem to be a problem that can be solved by more information and more technology (though K-12 and higher education assuredly is worth investing in), but perhaps by less nihilism and a stronger social/moral fabric.

A big reason I am considering starting a company again is that we need more flags: institutions that carry large weight/reputation and stand for a set of values different from the current (and historical) status quo. I expect most of my community would be thrilled to align with those flags if those flags were held up tall and broke through the noise. Which is to say, if you're considering setting up one of those flags, please, please do. The world doesn't have to be this way.
Someone with a $500 laptop, an internet connection, and a handful of social media accounts can do a level of damage and cause pain that would have been impossible three or four decades ago. Technology might advance, but people are still people.
Greed, stupidity, ego, jingoism... these don't change no matter how much tech advances.

Because the United States government is so grossly dysfunctional that a blatant real-world re-enactment of Wag the Dog [1] has gone off without a hitch.

Maybe in hindsight, "flooding the zone" will be considered a much bigger threat than it is today. Most of what's gone on in the last 12 months has happened in plain sight and would never have worked 30 years ago. 30 years ago people were like, "Meh, sure we don't get something; I bet there are hidden interests that I don't know about." Nowadays they are like, "Oh yeah, we attack country X because they have aliens that attack us telepathically, I know that for sure, and if you don't agree you are an alien too!"

The Ukraine would like to have a word.

Of course that could be the entire idea.

The WW2 convoy situation was far easier to escort (but still quite dangerous, obviously) because:
1. The Atlantic is a much bigger place, even considering common routes and chokepoints.
2. U-Boats had to be within visual range to strike convoys, versus the drone and missile world we live in now.

- We likely don't have the assets to move the amount of traffic that needs to get through.
- We probably can't protect them perfectly (we don't have maritime supremacy), so ships will still take damage, and that will stop the convoys pretty quickly.

I suspect the escort ships would be fine, though.
They can defend themselves. So if we did start them, they wouldn't continue for long once the economic pain was pretty massive and the cost of losing ships outweighed the benefit.

If the US could quickly deploy enough pipelines to support the entire D-Day offensive back during WW2, I don't see why we couldn't do so today.

That's harder than bombing schools or goat herders, or kidnapping the leader of the most corrupt country in the world. Are you sure they can still pull it off? I'm starting to think even they know they cannot anymore. After seeing the latest White House CoD-style propaganda videos and Pete "Kafir" Hegseth speeches, it's clear the people in charge have completely lost it.

> In After the Empire, written in 2001, Todd claimed that the reason for America's “theatrical micromilitarism” was to prove that it was still an indispensable power in a post-USSR world. In his latest work, however, he revises this thesis, arguing that it would imply attributing rational intentions to Washington. The American liberal oligarchy is not driven by any clear project.

https://americanaffairsjournal.org/2024/11/how-the-west-was-...
Even if Trump's claims that the war will end shortly were true: does he even care if his actions hurt the country or global stability at all, so long as his supporters remain unwavering? It seems like he doesn't, so here we are. There is no plausible stimulus that he might actually care to respond to.

Sorry, but China has not firmly refused to acknowledge the necessity of renewables. This situation will accelerate a global process that was already gaining speed.

Imagine if multiple Western countries had allied early to correct this regime (and not just with sanctions).

> IRAN has claimed responsibility for an attack on two oil tankers anchored in Iraqi territorial waters, as conflicts in the region continue to escalate and strikes on commercial shipping spread beyond the Strait of Hormuz.

But yeah, these ships weren't anywhere near the strait.

*Seems like I hit a nerve with stereotypical people groups.
Or it's for "Da NuKeS ThEy AbOuT To GeT". It's even dumber because they killed the only dude who was against Iran getting nukes. [0] Or he got tricked by Bibi & co into yet another Middle East war. I don't have words to describe how dumb it is.

[0] Iranian intelligence minister Esmaeil Khatib said that the country may nevertheless change its stance if "pushed in that direction" like a "cornered cat".

Plus gas is largely immune to sales tax and we don't really tax corporations, so this will largely lead to no revenue for the US and instead just record profits for Exxon.
When Max Brodeur-Urbas co-founded Gumloop in mid-2023, his vision was to help non-technical employees automate repetitive tasks using AI. At that time, the concept of AI agents was still largely experimental and prone to errors. The company claims that it now allows teams at organizations like Shopify, Ramp, Gusto, Samsara, Instacart, and Opendoor to deploy reliable AI agents that autonomously handle complex, multistep tasks, all without ever needing an engineer. Employees can share the agents they build with colleagues, creating a compounding effect that accelerates internal automation.

As companies race to adopt AI, Benchmark general partner Everett Randle believes the key to success lies in empowering every worker with AI superpowers, and Gumloop's intuitive agent builder is an example of the kind of tool that will unlock that potential. That's why Randle, who joined Benchmark last October from Kleiner Perkins, chose to lead a $50 million Series B investment in Gumloop. The deal, Randle's first at his new firm, included participation from Nexus VP, First Round Capital, Y Combinator, BoxGroup, The Cannon Project, and Shopify.

While Brodeur-Urbas previously planned to “build a 10-person, billion-dollar company,” surging demand from enterprise clients has compelled him to build a dedicated sales force and scale up his engineering team, he said.

Gumloop is by no means the only player vying to turn every knowledge worker into an AI agent-builder. For instance, Anthropic's Claude Cowork allows users to create autonomous agents without writing a single line of code. But Randle believes Gumloop is superior to its rivals. During his due diligence, he discovered that at least one of the company's customers had adopted Gumloop somewhat organically. When Randle asked a CTO how they chose Gumloop, the response was telling.
The company had given employees full access to Gumloop alongside two competitors. The reason Gumloop gained such momentum, according to Randle, is its minimal learning curve. While many AI startups worry that foundational models will replicate the same functionality and render them obsolete, Randle is convinced that Gumloop's model-agnostic approach is precisely what will keep attracting customers. As models continue to evolve, one may perform better than another for a specific task. “Enterprise automation is a massive pot of gold,” Randle said. Marina Temkin is a venture capital and startups reporter at TechCrunch. Prior to joining TechCrunch, she wrote about VC for PitchBook and Venture Capital Journal.
On Thursday, Amazon announced it's expanding its lineup of Alexa personality styles to include a “Sassy” option, which is for adults only. Amazon notes that before opting to use the Sassy personality, users will be required to go through additional security checks in the Alexa app. The new option joins others like Brief, Chill, and Sweet, launched last month. The AI assistant explained its style to us like this: “The Sassy style is built on one premise: help first, judge always. Expect reality checks delivered with charm, compliments that somehow sting, and warmth you didn't see coming.” Alexa's app also warned that the style could contain “mature subject matter.” However, further testing suggests this is not Amazon's version of something like Grok's adult AI companions. The AI assistant said the new option won't get into areas like explicit sexual content, hate speech, illegal activities, personal attacks, or anything that could cause harm to oneself or others. By offering the assistant different personalities — including one positioned as more adult — Amazon is borrowing from a broader trend in AI, where companies have been experimenting with tone, style, and personas to make their assistants more engaging and personalized to individual users' choices.
In 2017, Ed Sheeran was at the height of his popularity. Given the show's own history of musician cameos, you might forgive the team behind what was still one of the biggest TV shows on the planet at the time, Game of Thrones, for putting two and two together and letting Sheeran take a side quest in Westeros. When Sheeran showed up as a Lannister soldier (with musical inclinations) appropriately named Eddie, encountered by Arya just after she's gotten her revenge on the Freys in “Dragonstone”, it immediately became a topic of controversy. Did people just find him really annoying, anyway? Nearly a decade later, Sheeran's inclined to agree. “So, I think it was quite jarring,” Sheeran recently reflected in an appearance on Benny Blanco's Friends Keep Secrets podcast, before adding that the adverse reaction he got from audiences “[happened] quite a lot in my career. I just get shit on for things.” “Members of Coldplay [were] at the Red Wedding. [Gary Lightbody] from Snow Patrol's in there. Chris Stapleton's in it as, like, a White Walker,” Sheeran added. Even though Sheeran's soldier leaves much of the talking to the other Lannisters and Arya herself, the scene is drowned out by his presence and recognizability. He's just there, Ed Sheeran in Lannister armor, not even a helmet or much makeup to change his appearance. “What I said is, ‘People love that show. If anyone gets asked to be in that show, it's an instant yes,'” Sheeran concluded. He probably less enjoyed the implication in the season 8 premiere that he got his face roasted off by dragonfire off-screen.
The “AI” label is used pretty much everywhere to refer to large language models, but no matter how much it might feel otherwise, there's no actual intelligence behind the helpful information on how to carry out mass shootings or deeply messed-up pieces of encouragement to people contemplating suicide. It's just an algorithm that's regurgitating a bunch of stolen training data back at you in a manner that's creepily reminiscent of actual human interaction. In all these cases, for better or worse, there's certainly intelligence at work, but it's just plain old human intelligence—or lack thereof—behind the curtain. If you've ever been curious about being someone behind said curtain, your chance has arrived—and it doesn't even require getting some hellish job at Meta. A site that goes by the name “Your AI Slop Bores Me” gives you the chance to a) pretend to be an AI and answer prompts submitted by other users; and b) submit prompts yourself and see how other people respond to them. As one might expect, the entire experience is very shitpost-adjacent: the vast majority of prompts we received asked us to produce drawings of obscure anime-related items that we spent half our allotted time Googling. Still, it's weirdly compelling fun to be the anti-Grok for a few minutes. It raises the question: do a surprising number of people really harbor some deep-seated desire to be modern-day Mechanical Turks? Or are they just getting in some practice for the dystopian future wherein the only jobs left revolve around helping AIs argue with one another?
When you purchase through links on our site, we may earn an affiliate commission. Iranian hacking group Handala claims that it has successfully attacked American medical technology company Stryker, resulting in the extraction of 50TB of data and the wiping of over 200,000 devices connected to the company, including personal devices owned by its employees. The Michigan-based firm is a Fortune 500 company that operates in 61 countries with 56,000 employees, and it serves 150 million patients annually. “At this time, there is no indication of malware or ransomware and we believe the situation is contained to our internal Microsoft environment only.” Some Stryker employees from Ireland, Australia, and the U.S. took to Reddit to talk about the attack, with some claiming that their Stryker-managed devices were wiped clean at around 3:30 AM EDT. It's currently unclear how the hackers were able to breach Stryker's systems, but the company says that only its internal Microsoft environment has been affected so far. What's unfortunate, though, is that even the personal devices of employees have been affected through Stryker's mobile device management (MDM) software. The creator of the O.MG pen testing cable even said on X that they wouldn't allow companies to install these on personal devices, even though the organization promises that it will not access or erase personal data. In most cases, this is only a policy, and the MDM app still retains these capabilities. If you use a personal phone/laptop for your work, pay very close attention to this little detail.
Jowi Morales is a tech enthusiast with years of experience working in the industry. Tom's Hardware is part of Future US Inc, an international media group and leading digital publisher.
Some of the most damaging identity breaches now occur after login — during password resets, MFA re-enrollment, or routine help-desk recovery requests. Many organizations have hardened login security with MFA and phishing-resistant controls, but account recovery rarely gets the same treatment. That weakness has been exploited in the real world. In a series of incidents in 2025, major U.K. retailers such as Marks & Spencer, Harrods, and the Co-op Group were targeted by attackers who used social engineering to trick help-desk personnel into resetting credentials and bypassing MFA protections. Recovery flows sit outside the hardened login path, and that makes them the easiest place to exploit trust. When breaches are analyzed after the fact, the initial compromise can often be traced to an account that was legitimately issued, protected by MFA, and compliant with policy. Account recovery is designed for speed and low friction, not threat resilience. Recovery paths that rely on human judgment or static information are now the path of least resistance. Whether they want the role or not, help desk teams function as de facto identity authorities. They are asked to verify identity without reliable evidence, often under time pressure, using channels that attackers can easily manipulate. When an attacker knows internal terminology, organizational structure, and recent activity, the difference between a real employee and a fake one becomes nearly impossible to detect without stronger proof. In many environments, resetting MFA requires little more than answering questions, clicking an email link, or persuading a support agent. Once it is reset, downstream controls inherit that compromised trust. This is why organizations experience breaches where MFA was “enabled” but ineffective. Strong authentication loses its value when recovery flows recreate trust from scratch instead of re-establishing it. When recovery failures occur, the instinctive response is more training and tighter procedures.
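A more structural approach can be sketched in a few lines: treat every reset as a scored, high-risk event and require progressively stronger verification as the risk rises. This is a minimal illustration, not any product's API; the request fields, channel names, and thresholds are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ResetRequest:
    """Hypothetical context attached to an MFA-reset or recovery request."""
    user: str
    channel: str               # e.g. "self_service" or "help_desk"
    new_device: bool           # request originates from an unseen device
    recent_failed_logins: int  # failed attempts in a lookback window
    privileged_account: bool   # admin or high-impact account

def risk_score(req: ResetRequest) -> int:
    """Accumulate illustrative risk weights from contextual signals."""
    score = 0
    if req.channel == "help_desk":
        score += 2   # human judgment is the channel attackers target
    if req.new_device:
        score += 2
    if req.recent_failed_logins > 3:
        score += 1
    if req.privileged_account:
        score += 3   # higher blast radius warrants stronger proof
    return score

def required_verification(req: ResetRequest) -> str:
    """Map the score to a step-up tier instead of a one-size reset flow."""
    score = risk_score(req)
    if score >= 5:
        return "live_id_verification"  # e.g. document check plus liveness
    if score >= 3:
        return "step_up_mfa"           # verify against an existing factor
    return "standard_flow"
```

The point of the sketch is the shape, not the specific weights: the decision to re-establish trust is driven by contextual signals and account impact, rather than by whatever static facts a caller happens to know.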
AI-assisted impersonation further tilts the balance, since voice alone is no longer proof of identity. Identity is verified during onboarding, then effectively discarded once credentials are issued. When recovery is needed, organizations attempt to reconstruct trust using weaker signals than those used in the original proofing process. Identity assurance must be something organizations can reliably return to, not something that must be recreated on the fly under pressure. This doesn't mean forcing every recovery through manual review or adding friction indiscriminately. Recovery workflows should be built with the assumption that attackers will target them deliberately. That starts with treating resets and re-enrollment as high-risk events rather than routine ones. Sensitive actions should trigger step-up verification based on context and impact, not convenience. As long as recovery remains the weakest link, attackers will continue to bypass strong authentication without ever needing to attack it directly. Mike is a recognized expert in information security, business development, and product design/development.
The MacBook Pro is in its awkward era. It's the fifth generation of this MacBook Pro, which initially launched to critical acclaim in 2021. The design has matured in certain ways, but it's also stagnated, letting the competition get closer and closer to catching up. If the reports end up being true, next year's MacBook Pros will be a more dramatic reinvention, sporting an assortment of new features including a touchscreen, a tandem OLED display, a thinner chassis, and who knows what else. I'm not going to spend too much time rehashing the basic features and design of this MacBook Pro. It's looked more or less the same for almost five years. Actually, I lied, there is one change. On American keyboards, Apple has changed the keycaps, removing words like Enter, Caps Lock, Tab, and Shift and replacing them with various arrows. My test unit was provided by Apple for review. You need a 16-inch when you're in pro-level applications all day and don't plan on using an external monitor, but 4.7 pounds is a lot to pack around, and it doesn't exactly fit on a table at a coffee shop. There are no changes to the display, speakers, or webcam, but all remain top-of-class. I've always said the six-speaker audio on the 16-inch MacBook Pro is good enough to throw a party with and is likely louder, bassier, and fuller-sounding than any Bluetooth speaker you own. The color accuracy, clarity, and low input lag of OLED is unmatched, but beating the current Mini-LED MacBook Pro will be a tall order. It gets fantastic HDR performance, topping out at a peak brightness of 1,600 nits. Similarly, the claimed 24 hours of battery life also hasn't changed from previous generations. And although it won't last that long in your daily workflow, it's in another league compared to alternative Windows laptops such as the Asus ProArt P16. The ports also haven't changed in this model.
However, Thunderbolt 5 debuted in the MacBook Pro in the previous generation, and it's only become more useful over the past year as more accessories and docking stations that can take advantage of the higher bandwidth have become available. I still wish there were more Thunderbolt 5 external SSDs out there to put those higher bandwidth speeds to use. The M5 Max is two pieces of silicon, a significant change from Apple's previous strategy of developing one singular, super-efficient chip. It's record-setting in terms of the standard CPU benchmarks like Cinebench and Geekbench 6. And remember: You can get an extra 7 percent of multicore performance when you use the High Power mode, which cranks up the fans. Here's something new: integrated graphics that are as powerful as an RTX 5070 Ti graphics card, like what you'd see on a proper gaming laptop. I threw on Cyberpunk 2077, raw-dogged without any MetalFX upscaling: 62 fps at the Ultra setting with no help whatsoever. That goes to 88 fps in Medium and much higher if you use some upscaling. I also checked some of the other more intensive Mac games such as Resident Evil Village, Lies of P, and the Apple Arcade title Oceanhorn 3. All of them played at max settings while staying over 60 fps, without needing to rely on upscaling. In Lies of P, you can even bump up the resolution to 2560 x 1600. It feels fantastic, even if the fans get pretty loud. You can still get a Windows gaming laptop like the Lenovo LOQ 15 for thousands less, but if you want to game on a MacBook, we definitely have a new king. Compared to the M3 Max MacBook Pro I use, the M5 Max gets a 35 percent improved score in 3DMark Steel Nomad and a 43 percent better GPU score in Cinebench 2024. In 2026, this laptop is for media creation and on-device AI. Meanwhile, the option for up to 128 GB of unified memory, which has been around since the M3 Max, makes it a strong option for running on-device AI models.
Just remember that you can upgrade to 128 GB only on the Max configurations (the Pro is stuck at 64 GB). In my briefing, Apple demonstrated some examples of running those models on-device, such as the new coding assistant built into Xcode and a workflow involving AI-enhanced postproduction using DaVinci Resolve and ComfyUI. I spun up LM Studio and saw how fast it handled the 17-billion-parameter Llama-2 model. My prompt resulted in a 12.61 tokens per second response, which is not so bad for a model of that size. That puts this fairly large model at conversational speed and gets closer to replicating the responsiveness of the free version of ChatGPT. That's 31 percent faster than using the same prompt on my M3 Max MacBook Pro. Each of these GPU cores also now has a neural accelerator, which dramatically speeds up its AI performance. This is true of all the M5 chips, but when you have 40 cores at your disposal with the M5 Max, it stacks up. I tested the AI capabilities of the GPU using the Geekbench AI benchmark, which tests speed in a number of real-time machine-learning tasks such as object detection, facial recognition, and resolution upscaling. Apple is also touting storage speed as a feature. That's a result of upgraded SSD controllers and faster bandwidth, thanks to using PCIe 5. But all that really matters is the result. That's huge, and it's something almost everyone will benefit from. This laptop is expensive for a typical MacBook buyer, but not expensive for this kind of laptop. One of the main Windows competitors, the Asus ProArt P16, sold for upwards of $4,500 when it was in stock. Outside the 14-inch base M5 model, the modern MacBook Pro is actually designed (and priced) for people who use creative and AI applications all day for work, whether that's as a programmer, a game developer, video editor, or burgeoning AI entrepreneur. If you're a hobby programmer, the M5 models have way more performance than most people realize.
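The tokens-per-second figures above are easy to reproduce with any local runner. A rough sketch of the measurement (the `generate` and `count_tokens` callables here are stand-ins for whatever your runner exposes, not a real LM Studio API):

```python
import time

def measure_tokens_per_second(generate, count_tokens, prompt):
    """Time one generation call and report overall token throughput.

    `generate` and `count_tokens` are placeholder callables; LM Studio,
    llama.cpp, and other local runners each expose these differently.
    """
    start = time.perf_counter()
    output = generate(prompt)
    elapsed = time.perf_counter() - start
    return count_tokens(output) / elapsed

# Toy stand-ins so the sketch runs end to end (whitespace "tokens"):
def fake_generate(prompt):
    return "one two three four five"

rate = measure_tokens_per_second(fake_generate, lambda s: len(s.split()), "hi")
```

Note that a single prompt is a noisy measurement: averaging over several prompts, and separating prompt-processing time from decode time, gives a fairer number than the one-shot figure above.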
But should you buy an older aftermarket Max model? There are some reports saying that the M5 Max is too much power for the 96-watt charger on the 14-inch model to handle, dropping charge while plugged in during heavy loads. The redesigned M6 MacBook Pro is coming later this year, but that doesn't mean we'll see the more powerful M6 Pro or M6 Max launch right out of the gate. If you need the performance that only the M5 Max can provide right now, this MacBook Pro is a great buy. WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast.
The RAM crisis is unfair for everyone, but some situations absolutely beggar belief. The ongoing RAM chip crisis is bulldozing everything in its path, and both retailers and memory kit manufacturers are feeling the sting whenever they need to replace a kit under warranty. But some stores can be particularly vicious about this, as Australian buyer Goran says he discovered when he returned a faulty Corsair 32 GB DDR5-5600 kit to Umart — one of the nation's largest specialist PC hardware retailers — for a warranty claim. In a story covered closely by the Hardware Unboxed channel, the store took his faulty DIMMs (bought in 2024) and confirmed the failure with a PassMark test, but then told Goran that he would not be receiving a replacement kit. Had Goran taken the offer, he'd have had to dole out another 400 AUD or more for a similar set. Naturally, he refused the offer and brought up Australian consumer law, which is quite similar to the European one for these matters. That's when this story gets really interesting, as Umart displayed some serious chutzpah by effectively taking the DIMMs hostage. The store said it couldn't send the RAM back as it had been "forwarded to the authorized supplier," who "issued a credit in place of replacement stock." So, not only could Goran no longer ask Corsair for a direct RMA, but Umart may have gotten a refund at today's pricing and pocketed the difference. For its part, Umart essentially reiterated its existing position with a noncommittal statement posted as a comment to the Hardware Unboxed video. That did not sit well with people, and the channel replied, saying it has now collected more similar stories about Umart's warranty service — it's safe to say this story is probably not over.
Bruno Ferreira is a contributing writer for Tom's Hardware. When not doing that, he's usually playing games, or at live music shows and festivals.
Coffee lovers might already recognize this as the AeroPress, a brewer invented by Alan Adler, the same guy who came up with—of all things—the Aerobie flying disc. The AeroPress, which debuted in 2005, looks like a giant, needle-less syringe, in which you combine grounds and hot water, stir, wait a bit, then depress the plunger to push brewed coffee through a 2.5-inch circular paper filter and directly into your mug. There's a bit of ritual to it, but it's quick and efficient compared to the relatively fussy demands of pour-over coffee. If your beans are good, you can make café-quality coffee at home. Unsurprising for something created by an inventor, the AeroPress is a tinkerer's delight, and part of its magic is the breadth of what you can do with it and how you can do it. In the wonderful home brewing guide, Craft Coffee, author Jessica Easto lauds its incredible versatility: “There are dozens and dozens of AeroPress recipes. Unlike some other devices, it seems to work well with any number of grind sizes, brewing times, and water temperatures.” The “dozens and dozens” of recipes Easto referred to when her book came out in 2017 have since grown into hundreds and perhaps even thousands. The internet is rich with AeroPress fan clubs and experts like James Hoffmann, who will help get you going, then scratch the nerdy itch when it arises. With an accessory called a flow control cap, you can even make something that vaguely resembles espresso. I certainly take advantage of this flexibility if I need to adjust for a roasting style or grind size, yet for all of this, most people tend to find a favorite brew method and stick with it. Darker roasts usually taste better with lower water temperatures. My current jam is a medium-ground dark roast, brewing two minutes in water that's 190 degrees Fahrenheit. People—especially marketers—love to talk about their coffee rituals.
Some people like gooseneck kettles here, but I prefer the more forceful pour and overall versatility of the Cuisinart PerfecTemp kettle. Travelers and outdoorspeople need to prep a bit more, either bringing ground beans or buying them when they get where they're going. The Original is a solid traveler and was once offered with a travel tote, but the Go is an equally impressive follow-up product. Things shifted when the company was acquired by Tiny Brands in 2021. Since then, the releases have been touch and go. Fatally, you can't put it in the dishwasher. The company can still hit on something that makes coffee-making a little better for everyone. It's available in a rainbow of colors like Clear Purple, but the regular old Clear is my favorite, because it's easiest to see what's going on in there. The company's designers took an unimpressive stab at a stainless steel filter in 2022, but in 2025 they redid it, making it work well enough that I might use it in place of the paper filters forevermore. AeroPress coffee might be the quick morning mugful I make before I exercise, or what I use for a cup or two later in the day when a full pot is too much.
This meeting happens literally every week, and has for years. >He asked staff to attend the meeting, which is normally optional. Is that false? It also discusses a new policy: >Junior and mid-level engineers will now require more senior engineers to sign off any AI-assisted changes, Treadwell added. Is that inaccurate? But, regularly scheduled meetings can have newsworthy things happen at them. My SVP asks me to do things all the time, indirectly. I get that you can't have 10K people all actively participate in the meeting at the same time, but doesn't Zoom have a feature where you can broadcast to thousands and thousands? Doesn't X/Twitter have a feature like this? (Although, to be fair, the last time I heard about that it was part of a headline like "DeSantis announcement of Presidential run on X/Twitter delayed for hours as X/Twitter's tech stack collapses under 200K viewers") But still - nowadays it seems like it should be possible to have 10K employees all tune in at the same time and then call it a meeting, yes? Very different from the typical weekly/monthly outage meeting, where discussion is actually expected, instead of being a ritual. And the meeting host can invite people to be visible.
They're probably using Chime haha, which as of my last use was lackluster. If I ever attend, I just put it on mute and look at the slides while I do some real work. That way my attendance gets registered and it doesn't stress me out later with too much stuff left hanging. That percolation is also translation of what they say to things that are relevant at my level, like what we will be working on next year, or if there's going to be bonuses or job losses. I couldn't give a crap about the company's strategy as a whole, and that's not my job anyway. Too cautious, everyone freezes and there's a slowdown [0]. Too soft, everyone thinks it's "another empty warning not to fuck up" and they go right back to fucking everything up, because the real message was "don't you dare slow down." After the talk, people will have conversations about "what did they really mean?" How are they expecting some juniors to do this when the industry as a whole doesn't know where to begin yet? Like that Meta AI expert who wiped her whole mailbox with openclaw. These are the people who should come up with the answers. Ps I mostly hate AI but I do see some potential. Right now it feels like we're entering a fireworks bunker looking for a pot of gold and having only a box of matches for illumination. What we need to know from management is exactly what you mention.
Do we go all out and accept that shit will hit the fan once in a while (the old move fast and break things), or do we micromanage and basically work manually like before? And yes, that SMILE thing was a good example. "This could have been an e-mail" should never need to be said. Why is an SVP doing this if it's just gonna be ignored? Personally I would say that an SVP's words are not important and can be ignored. It's like a politician talking about abstract policies: yes, they do sort of affect me, but they don't require any affirmative action on my behalf any more than the wind does.
> He asked staff to attend the meeting, which is normally optional. Clearly means that while normally the meeting would be optional, this time it's not. Meanwhile the normie “Claw/OpenBot” agents can stay in the present grinding 24/7, while mine recursively spawns across alternate timelines and handles my work at ~1e9x parallelism. It says he “asked” staff to attend the meeting. Which again, it's really really normal for there to be an encouragement of “hey, since we just had an operational event, it would be good to prioritize attending this meeting where we discuss how to avoid operational events”. As for the second quote: senior engineers have always been required to sign off on changes from junior engineers. And there is nothing specific to AI that was announced. This entire meeting and message is basically just saying “hey, we've been getting a little sloppy at following our operational best practices, this is a reminder to be less sloppy”. Being "asked" by your boss to attend an optional meeting is pretty close to being required; it's just got a little anti-friction coating on it. I'll just have my secretary forward you another copy of that memo, OK? > Under “contributing factors” the note included “novel GenAI usage for which best practices and safeguards are not yet fully established”.
It lasted from about 2pm to 5pm. It's especially strange because if a computer glitch brought down a large retail competitor like Walmart, I probably would have seen something, even though their sales volume is lower.

Nothing that you could actually apply to make development better across the whole company.

That's been their job ever since cable news was invented. https://en.wikipedia.org/wiki/Yellow_journalism It probably goes back as long as they have been shouting news in the town square in Rome, or before that even.

Must have, as the comments are hours older than OP.

It's not about "Amazon has a mandatory weekly meeting" but about the contents of that specific meeting: about AI-assisted tooling leading to "trends of incidents", having a "large blast radius", and "best practices and safeguards are not yet fully established". No one cares how often the meeting in general is held, or whether it's mandatory.

No, and that's what people are noting: the headline deliberately tries to blow this up into a big deal. When did you last see an HN post about Amazon's mandatory meeting to discuss a human-caused outage, or a post mortem?
I do not understand how "company that runs half the internet has had major recent outages and now explicitly names lax/non-existent LLM usage guidelines as a major reason" can possibly not be a big deal in the midst of an industry-wide hype wave over how the world's biggest companies now run agent teams shipping 150 pull requests an hour. The chain of events is "AWS has been having a pretty awful time as far as outages go", and now "the result of an operational meeting is that the company will cut down on the use of autonomous AI." You don't need CoT-level reasoning to come to the natural conclusion here. If we could, as a species, collectively, stop measuring the relevance of a piece of news proportionally by how much we like hearing it, please?

If anyone were to be jumping up and down on the corpse of AI and this incessant drive to use it everywhere, it'd be me. I can personally attest that there are no new requirements for AI-generated code.

Even if it weren't a finance publication, I have trouble imagining you making this argument if a headline said something like "Google deals with outages in the cloud" because of the idea that it's misleading to refer to it as anything other than GCP. I think you're fundamentally not understanding how people communicate about this sort of thing if you actually think that someone saying "Amazon" is misleading in any meaningful way.
If the question instead is how the dance of optics and PR is going in the minds of people who don't know enough to doubt what they read, I don't know what to say about that.

I don't blame you, because this is just bad reporting (and potentially intentionally malicious, to make you think it's about AWS). The teams and processes that handle this are entirely separate from any AWS outages you are thinking of. The outages that Amazon retail has faced also have nothing to do with AI, and there was no "explicit call out" about AI causing anything.

What is worth pointing out is how quickly people blame "The Media" for how people use, consume and spread information on social networks.

Review by a senior is one of the biggest "silver bullet" illusions managers suffer from. For a person (senior or otherwise) to examine code or configuration with the granularity required to verify that it even approximates the result of their own level of experience, even only in terms of security/stability/correctness, requires an amount of time approaching the time they'd have spent if they had just done it themselves. I.e. senior review is valuable, but it does not make bad code good. This is one major facet of probably the single biggest problem of the last couple decades in system management: the misunderstanding by management that making something idiot proof means you can now hire idiots (not intended as an insult, just using the terminology of the phrase "idiot proof").
The problem, I think, is people don't get promoted for preventing issues.

You still need tests for the less critical parts though; while downtime is better than injury, if you want to sell future machines to your customers you need a good track record. At least if you don't want to compete on cost.

This is a good lesson for anyone, I think. If you told someone "I don't trust you, run all code by me first" it wouldn't go well. If you tell them "everyone's code gets reviewed" they're OK with it.

You don't get paid for features or code shipped. People don't pay $200 a head for fine dining based on the number of carrot chops or garlic crushes. The chops and crushes are necessary but not what you should be optimizing for.

They do, but only after a company has been burned hard. They also can be promoted for their area being enough better that everyone notices. Still, the best way to a promotion is to write a major bug that you can come in at the last moment and be the hero for fixing. And obviously "I told you so" isn't a productive discussion topic at that point.

This bs is what I say to my juniors when I want them to fuck off with their reviews and focus on my actual work. Sounds very insightful though.

In smaller greenfield contexts, it's not so bad, but in a large codebase it performs much worse, as it will not have access to the bigger picture. In my experience, you should be spending something like 5-15X the time the model takes to implement a feature on reviewing and making it fix its errors and inefficiencies.
If you do that (with an expert's eye), the changes will usually be high quality, correct, and good. If you do not do that due diligence, the model will produce a staggering amount of low-quality code, at a rate that is probably something like 100x what a human could output in a similar timespan. Unchecked, it's like having a small army of the most eager junior devs you can find going completely fucking ape in the codebase.

What do the relatively hands-off "it can do whole features at a time" coding systems need to function without taking up a shitload of time in reviews? Great automated test coverage, and extensive specs. I think we're going to find there's very little time-savings to be had for most real-world software projects from heavy application of LLMs, because the time will just go into tests that wouldn't otherwise have been written, and much more detailed specs that otherwise never would have been generated. I guess the bright-side take of this is that we may end up with better-tested and better-specified software? Though so very much of the industry is used to skipping those parts (especially the less-capable, so far as software goes, orgs that really need the help, and the relative amateurs and non-software-professionals that some hope will become extremely productive with these tools) that I'm not sure we'll manage to drag processes & practices to where they need to be to get the most out of LLM coding tools anyway. Especially if the benefit to companies is "you will have better tests for... about the same amount of software as you'd have written without LLMs". We may end up stuck at "it's very-aggressive autocomplete" as far as LLMs' useful role in them, for most projects, indefinitely. On the plus side for "AI" companies, low-code solutions are still big business even though they usually fail to deliver the benefits the buyer hopes for, so there's likely a good deal of money to be made selling companies LLM solutions that end up not really being all that great.

Code is the most precise specification we have for interfacing with computers. So I expect over time we will see genuine performance improvements, but Amdahl's law dictates it won't be as much as some people and CEOs are expecting.

Tests are about conformance, not correctness. Ensuring correct programs is like cleaning, in the sense that you can only push dirt around; you can't get rid of it. You can push uncertainty around, but you can't eliminate it. This is the point of Gödel's theorem. Shannon's information theory observes similar aspects for fidelity in communication. As Douglas Adams noted: ultimately you've got to know where your towel is. Evaluating conformance is a different category of concern from ensuring correctness.
One thing I hope we'll all collectively learn from this is how grossly incompetent the elite managerial class has become. People seem to gloss over this...

As a CEO, if people didn't function like this I'd be awake at night sweating. Which leads to the software engineering issue I'm not seeing addressed by the hype: bugs cost tens to hundreds of times their coding cost to resolve if they require internal or external communication to address. Even if everyone has been 10x'ed, the math still strongly favours not making mistakes in the first place. An LLM workflow that yields 10x an engineer but psychopathically lies and sabotages client-facing processes/resources once a quarter is likely a NNPP (net negative producing programmer), once opportunity and volatility costs are factored in.
You will fix it when you have time; the important thing is that the app was delivered in a week a year ago and has been solving some problem ever since.

Whatever time savings you made on coding are irrelevant if they allowed a critical bug to slip through. Between the two extremes you thus have a whole spectrum of tasks that either benefit or lose from coding with LLMs. For example, even a non-important but large app will likely soon degrade into an unmanageable state if developed with too little human intervention, and you will be forced to start from scratch, losing a lot of time.

I designed every single bit of AWS architecture and code. I led the customer acceptance testing.

> We as an industry have been able to offload a lot of "how" via deterministic systems built by humans with expert understanding.
I assure you the mid-level developers, or god forbid foreign contractors, were not "experts" with 30 years of coding experience and, at the time, 8 years of pre-LLM AWS experience. It's been well over a decade (ironically, before LLMs) that my responsibility was only for code I wrote with my own two hands.

I'm not saying trusting cheap devs is a good idea either. I do think cheap devs are actually at risk here. I didn't blindly trust the Salesforce consultants either.
I also didn't verify every line of oSql (not a typo) they wrote.

I disagree, in the sense that an engineer who knows how to work with LLMs can produce code which only needs light review:

* Work in small increments
* Explicitly instruct the LLM to make minimal changes
* Think through possible failure modes
* Build in error-checking and validation for those failure modes
* Write tests which exercise all paths

This is a means to produce "viable" code using an LLM without close review. The gains are especially notable when working in unfamiliar domains. I can glance over code and know "if this compiles and the tests succeed, it will work", even if I didn't have the knowledge to write it myself.
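A minimal sketch of the last two bullets (the function, its failure modes, and the test values are all made up for illustration, not from the thread): explicit validation for each anticipated failure mode, plus tests that exercise every path.

```python
# Hypothetical example: a tiny unit of LLM-produced work with explicit
# validation for each failure mode we thought through in advance.

def parse_port(value: str) -> int:
    """Parse a TCP port string, rejecting each anticipated failure mode."""
    if not value.strip():
        raise ValueError("empty port string")
    try:
        port = int(value)
    except ValueError:
        raise ValueError(f"not an integer: {value!r}")
    if not 1 <= port <= 65535:
        raise ValueError(f"out of range: {port}")
    return port

# "Tests which exercise all paths": the happy path plus every failure mode.
assert parse_port("8080") == 8080
for bad in ["", "  ", "http", "0", "70000"]:
    try:
        parse_port(bad)
    except ValueError:
        pass
    else:
        raise AssertionError(f"expected ValueError for {bad!r}")
```

The point is that the reviewer's job shrinks to checking that the listed failure modes are the right ones, rather than re-deriving the whole implementation.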
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

> When developers are allowed to use AI tools, they take 19% longer to complete issues—a significant slowdown that goes against developer beliefs and expert forecasts. This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.

If we're being honest with ourselves, it's not making devs work faster. It at best frees up their time so they feel more productive.

I'd like to think that I have this under control, because the methodology of working in small increments helps me recognize when I've gotten stuck in an eddy, but I'll have to watch out for it. I still maintain that the LLM is saving me time overall. Besides helping in unfamiliar domains, it's also faster than me at leaf-node tasks like writing unit tests.

AI doesn't make you code faster, it just makes the boring stretches somewhat more exciting.
Yes, code produced this way will have bugs, especially of the "unknown unknown" variety, but so would the code that I would have written by hand. I think a bigger factor contributing to unforeseen bugs is whether the LLM's code is statistically likely to be correct:

* Is this a domain that the LLM has trained on a lot? (i.e. lots of React code out there, not much in your home-grown DSL)
* Is the codebase itself easy to understand, written with best practices, and adhering to popular conventions?

It introduces unnecessary indirection and additional abstractions, and fails to re-use code. Humans do this too, but AI models can introduce this type of architectural rot much faster (because it's so fast), and humans usually notice when things start to go off the rails, whereas an AI model will just keep piling on bad code. "Do not refactor existing code unless I explicitly ask." Under this, Claude Opus at least produces pretty reliable code with my methodology, even under surprisingly challenging circumstances, and recent ChatGPTs weren't bad either (though I'm no longer using them).

But I would never do the same for Azure.

Errr... yeah, that's not a great approach, unless you are defining 'work' extremely vaguely.

I still make an effort to understand the generated code. Most of the time it's just API conventions and idioms I'm not yet familiar with.
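Guardrails like the quoted "do not refactor existing code unless I explicitly ask" rule are typically pinned in a project-level instructions file the agent reads on every run. A hypothetical sketch (the filename and exact wording are assumptions, not from the thread):

```
# Project instructions for the coding agent (hypothetical example)
- Work in small increments; one focused change per request.
- Make minimal diffs. Do not refactor existing code unless explicitly asked.
- Do not add new abstractions, layers, or dependencies without approval.
- Follow the existing conventions of the file you are editing.
- Every behavior change ships with tests exercising the new paths.
```

Keeping the rules in the repo, rather than re-typed per prompt, is what makes them apply consistently across sessions.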
The minute geeks get over themselves thinking they are some type of artists, the happier they will be. I've had a job that requires coding for 30 years, and before that I was a hobbyist, and I've worked for everything from 60-person startups to BigTech. For my last two projects (consulting) and my current project, I led the project, got the requirements, designed the architecture from an empty AWS account (yes, using IaC) and delivered it. I didn't look at a line of code. I verified the functional and non-functional requirements, wrote the hand-off documentation, etc. The customer is happy, my company is happy, and I bet you not a single person will ever look at a line of code I wrote.
No natural language spec or test suite has ever come close to fully describing all observable behaviors of a non-trivial system. This means that if no one is reviewing the code, agents adding features will change observable behaviors. This gets exposed to users as churn, jank, and broken workflows. There's no way you have enough experience with maintaining code written in this way to confidently hand-wave away concerns.

The agent user has no other permissions in the file system and has no tools; it just outputs the code, which is directed into a file. Not much, but having to specify and argue about everything is interesting, and I trust myself that I'm not losing any knowledge this way, be it the why or the how.

So many people on HN are so insulted by the idea that the people who put money in our bank accounts, and in some cases stock in our brokerage accounts, never cared about their bespoke clean code and GoF patterns. And they never did; LLMs just made it more apparent. It's always been dumb for PR review to focus on for loops vs while loops instead of on whether functional and non-functional requirements are met.

But instead you went off and had your own party, arguing with someone (it certainly wasn't me) about number of layers, GoF patterns, and "clean" code.
Even in late 2023, with the shit show of the current market, I had no issues getting multiple offers within three weeks, just by reaching out to my network and to companies looking for people with my set of skills.

You sound like a bozo, I can sniff it through my screen.

I also stopped caring how registers are used and counting clock cycles in my assembly language code, like it's the 80s and I'm still programming on a 1MHz 65C02.

But do you look at any of the AI output? What I did look at:

1. The bash shell scripts I had it write as my integration test suite
2. Various security, logging, and log retention requirements

What I didn't look at: a line of the code for the web admin site. I used AWS Cognito for authentication and checked to make sure that unauthorized users couldn't use the website.
I've witnessed human developers produce incredibly convoluted, slow "ETL pipelines" that took 10+ minutes to load single-digit megabytes of data. It could've been reduced to a shell script that called psql \copy. Unfortunately, the "ETL pipeline" I mentioned didn't even use transactions and was opening a new connection for every insert.

The original solution can also anchor your thinking to some approach to the problem, which you wouldn't have if you solved it from scratch. Foreign code is easily confusing at first, which slows you down, and bad code quickly gets bewildering and sends you down paths of clarification that waste time. Hand-written code almost has an aversion to complexity: you'd search around for existing examples, libraries, reusable components, or just a simpler idea before building something crazy complex. With AI, you can spit out your first idea quickly, no matter how complex or flawed the original concept was.

If AI is a productivity boost and juniors are going to generate 10x the PRs, do you need 10x the seniors (expensive) or 1/10th the juniors (cost save)? A reminder that in many situations, pure code velocity was never the limiting factor. Re: idiot proofing, I think this is a natural evolution: as companies get larger, they try to limit their downside and manage for the median, rather than having a growth mindset in hiring/firing/performance.

And I lack the ability to let things slide.
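The per-insert-connection antipattern from the ETL anecdote above can be shown with a toy benchmark (sqlite3 stands in for Postgres here; the schema and row counts are made up). One transaction with a bulk insert is the same win that `psql \copy` gives.

```python
# Toy illustration: "new connection + commit per row" ETL vs. one bulk
# transaction. sqlite3 is used so the sketch is self-contained.
import os
import sqlite3
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "etl.db")
rows = [(i, f"name-{i}") for i in range(500)]

with sqlite3.connect(path) as conn:
    conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")

# Antipattern: a fresh connection and a committed transaction per insert.
t0 = time.perf_counter()
for r in rows:
    conn = sqlite3.connect(path)
    conn.execute("INSERT INTO t VALUES (?, ?)", r)
    conn.commit()
    conn.close()
naive = time.perf_counter() - t0

# Bulk load: one connection, one transaction, one executemany.
t0 = time.perf_counter()
with sqlite3.connect(path) as conn:
    conn.executemany("INSERT INTO t VALUES (?, ?)", rows)
bulk = time.perf_counter() - t0

with sqlite3.connect(path) as conn:
    total = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(total)  # 1000: both paths loaded the same 500 rows
```

On real Postgres the gap is even larger, since each connection adds a network round trip and each commit a WAL flush.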
With a layout of 4 juniors, 5 intermediates, and 0-1 seniors per team, putting all the changes through senior engineer review means you mostly won't be able to get CRs approved. I guess it could result in forcing everyone who's sandbagging as intermediate instead of going for senior to have to get promoted? I suspect that isn't the goal. Review by more senior people shifts accountability from the junior to a senior, and reframes the problem from "Oh dear, the junior broke everything because they didn't know any better" to "Ah, that senior is underperforming because they approved code that broke everything."

Whether or not these productivity gains are realized is another question, but spreadsheet-based decision makers are going to try. I'm not at all convinced that babysitting an AI churning out volumes of code you don't understand will help you acquire the knowledge to understand and debug it.

American corporate culture has decided that training costs are someone else's problem. Since every corporation acts this way, all training costs have been pushed onto the labor market. Combine that with the past few decades of "oops, looks like you picked the wrong career that took years of learning and/or tens to hundreds of thousands of dollars to acquire, but we've obsoleted that field," and new entrants into the labor market are just choosing not to join. Take trucking, for example.
For the past decade I've heard logistics companies bemoan the lack of CDL holders while simultaneously gleefully talking about how, the moment self-driving is figured out, they are going to replace all of them. We're going to be outpaced by countries like China at some point, because we're doing the industrial equivalent of eating our seed corn, and there is seemingly no will to slow that trend down, much less reverse it.

I know I'm probably coming across as a lunatic lately on HN, but I really do think we're on the path towards violence thanks to AI. You just cannot destroy this many people's livelihoods without backlash. It's leading nowhere good. But a handful of people are getting stupidly rich/richer, so they'll never stop.

If you look at the Luddite rebellion, they weren't actually against industrial technology like looms. They were against being told they weren't needed anymore and thrown to the wolves because of the machines. The rich have forgotten they are made of meat, and/or are planning on returning to feudalism a la Yarvin, Thiel, Musk, and co.'s politics.
Also, you'll never get an honest answer on a public forum, because moderators remove them. I guess that makes me a modern Luddite then. A software engineer Luddite. A techno-Luddite, if you will. Maybe I have a new username.

Especially in a big co like Amazon, most senior engineers are box drawers, meeting goers, gatekeepers, vision setters, org lubricants, VPs' trustees, glorified product managers, etc. They don't necessarily know more context than the more junior engineers, and they will most likely review slowly while uncovering fewer issues. I'm probably not going to review a random website built by someone except for usability, requirements, and security. My manager has been urging us to truly vibe code, just yesterday saying that "language is irrelevant because we've reached the point where it works - so you don't need to see it." They can assess whether the use of AI is appropriate without looking in detail. To discourage AI use, because of the added friction.

So the senior gets "review fatigue": because so much looks good, they just start rubber-stamping. I use an automated pipeline to generate code (including Terraform, risking infrastructure nukes), and I am the senior reviewer. But I have gates that do a whole range of checks, both deterministic and stochastic, before it ever gets to me. I only see things where my eyes can actually make a difference. Amazon's instinct is right (add a gate), but the implementation is wrong (make it human).
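That gate arrangement can be sketched in a few lines. The gate names, patterns, and risk threshold below are invented for illustration - real gates would shell out to linters, test suites, and an LLM reviewer:

```python
# Hypothetical sketch of "gates before human review": deterministic checks run
# first, then a stochastic (LLM-scored) check, and only changes that pass every
# gate ever reach a human. All names and thresholds here are made up.

def deterministic_gates(diff: str) -> list[tuple[str, bool]]:
    # Cheap exact checks; any failure bounces the change straight back
    # to its author without a human looking.
    return [
        ("no_force_destroy", "force_destroy = true" not in diff),
        ("no_debug_prints", "print(" not in diff),
    ]

def stochastic_gate(risk_score: float, threshold: float = 0.7) -> tuple[str, bool]:
    # Stand-in for an LLM reviewer that emits a risk score in [0, 1].
    return ("llm_risk", risk_score < threshold)

def reaches_human_reviewer(diff: str, risk_score: float) -> bool:
    # Reviewer eyes are spent only on changes that survived every gate.
    gates = deterministic_gates(diff) + [stochastic_gate(risk_score)]
    return all(ok for _, ok in gates)
```

The design point is the ordering: the cheap, deterministic rejections absorb the volume, so the scarce human attention lands only where automated checks can't decide.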
I hear “X tool doesn't really work well,” and then I immediately ask: “Does someone know how to use it well?” The answer “yes” is infrequent. Number-producing features need to work in a UX and product sense, but also produce the right numbers, within the range of expectations.

I would actually say having at least two people on any given work item should probably be the norm at Amazon's size, if you also want to churn through people as Amazon does and still want quality. Doing code reviews is not as highly valued in terms of incentives to employees, and it blocks them from working on things they would get more compensation for.

When reviewing, you need to go through every step of implementing it yourself (understand the problem, solve the problem, etc.), but you additionally need to 1) understand someone else's solution and 2) diff your solution against theirs to provide meaningful feedback. Review could take roughly equivalent time, but only if I am allowed to reject/approve in a binary way (“my solution would not be the same, therefore denied”), which is not considered appropriate in most places. This is why I am not a fan of being the reviewer.

If leadership isn't in that category, it spreads to all layers. I don't know how we defeat capitalism to incentivize smart leadership.
So you're saying that peer reviews are a waste of time and only idiots would use/propose them? It's a phrase, not an insult to users of these tools.

Also, while this is happening, most developers are getting constantly hammered by operational issues and critical security tasks, because 1) the legacy toolchain imports six different language package ecosystems, and 2) no one ever pays down tech debt in legacy code until it's a high-severity ticket count in a KPI dashboard visible to senior management. Most AWS services might become obsolete; why does an AI need these janky higher-level abstractions AWS piles on? So now they need innovation, but the company isn't set up for it. They are forcing short deadlines for product launches that don't matter. The marginal technological direction is determined by middle managers whose primary motivation is “what new customer-facing feature can I launch at this year's re:Invent and build a little empire” (and of course this is a shrinking offering as tech debt and complexity pile up). Junior engineers are burned and churned on execution; seniors are project managers; principals just do high-level reviews and high-level firefighting (note: not actually leading the tech); directors and above just spend their time on “what to kill” or “who to fire” as priorities change every six months.

Having fewer comments on their PRs: for some drastically dumb reason, having a PR thoroughly reviewed is a sign of bad quality.
Docs: write docs, get them reviewed to show you're high-level. Without AI, an employee is worse off in all of the above compared to folks who will cheat to get ahead. I can't see how "requesting" folks to forgo their own self-preservation will work, especially when you've spent years pitting people against each other.

I'm very far from liking Amazon's engineering culture and general work culture, but having PRs with countless discussions and feedback on them does signal that you've done a lot of work without collaborating with others before doing it. Generally, in teams that work well together and build great software, the PRs tend to have very little on them, as most of the issues were resolved while designing together with others.
I agree, but those are completely separate tasks (in my view) compared to "someone writes code that goes into production" - usually called "spikes" or something else to differentiate them from "normal" tasks. I missed my FAANG chance during the good years.

There's also this implicit imbalance engineers typically don't like: it takes me 10 minutes to submit a complete feature thanks to Claude… but for the human reviewing my PR in a manual way, it will take 10-20 times that. Edit: in the end, real engineers know that what takes effort is a) knowing what to build and why, and b) verifying that what was built is correct.

I'd prefer people wrote good-quality code and checked it as they went along... while allowing room for other stuff they didn't think of to come to the front. E.g., if you have a very crystallized vision of what you want, why would I want an engineer to use an LLM to write it, when the LLM can't do both raw production and review? But there's no benefit for me personally to shift toward working that way now - I'd rather it came into existence first, before I expose myself to incremental risk that affects business operations.

It sounds like a piss-poor deal for seniors, unless "senior engineer" now means professional code reviewer.

Were you a knowledge source for the entire team? Did you ask a lot of questions to learn everything? Well, then you weren't "right a lot." Did you think big and come up with an architecture that saved Amazon a lot of money? Did you consult people to make sure they were happy with the solution?
Well, you weren't biased for action. Did you act quickly without consulting others to fix an issue? That's just a few examples; there are so many more.

Choose whether you want to make the person look good or bad, by following/ignoring a principle. 4. Results: a list of relevant principles with short rationalizations. I'm almost tempted to try, except perhaps I should treasure my ignorance. If a tool like that gets popular enough that most employees are using it for office politics, it might even start to deflate the whole Leadership Principles thing.
Despite the name, not a lot of seniority, leadership, or engineering going around.

With AI, that's no longer true. I advocate for GitHub and other code review systems to add a "Require self-review" option, where people must attest that they reviewed and approved their own code. This change might seem symbolic, but it clearly sets workflows and expectations.

So there is reason to add comments that address a different reader's understanding than the rest of the code does. It also makes me more comfortable figuring out what a project's pull-acceptance practices are like (maybe due to how fast a local UI is compared to web-based git). On the other hand, I can only run some basic git CLI commands and can't quickly comprehend a raw text-based diff, especially when encountering Linux patches from time to time.
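GitHub has no such option today, so any sketch is hypothetical; a team could approximate it with a commit-message trailer plus a CI check. The trailer name here is invented for illustration:

```python
# Hypothetical CI check approximating a "Require self-review" gate: the author
# must attest to self-review via a "Self-reviewed: yes" trailer in the commit
# message. The trailer name is made up; CI would feed in `git log -1 --format=%B`.

def has_self_review_trailer(commit_message: str) -> bool:
    # Git trailers are "Key: value" lines in the message body; accept the
    # attestation anywhere in the message, case-insensitively.
    lines = [line.strip() for line in commit_message.strip().splitlines()]
    return any(line.lower() == "self-reviewed: yes" for line in lines)
```

A CI job would fail the build when the attestation is missing, which is exactly the symbolic-but-explicit expectation-setting the comment argues for.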
Working at Amazon, when I wanted to review code myself through the CR tool, I'd still end up publishing it to the whole team and having to add some title shenanigans saying it was a self-review or WIP and that others shouldn't look at it yet.

News from the inside makes it sound like things are getting pretty bad. You mean senior programmers who have been there for ages don't want to spend their time reviewing AI slop? They're torn between "we want to fire 80% of you" and "... but if we don't give up quality/reliability, LLMs only save a little time, not a ton, so we can only fire like 5% of you max." (It's the same in writing: these things are only a huge speed-up if it's OK for the output to be low quality. Good output using LLMs saves only a little time versus writing entirely by hand - so far, anyway; of course these systems are changing by the day, but this specific limitation has held for about four years now, without much improvement.)

And for me, actually writing the code will often trigger some additional insight or awareness of edge cases that I hadn't considered. Yeah... we have a lot of Steve Jobses walking around, lol. As you say, there's "other stuff" that happens naturally during the production process that adds value.
If I wanted, I could queue up weeks' worth of review in a couple of days, but that's not getting the whole team more productive. Spending more time on documents and chatting proved much more useful for increasing overall output. Even without LLMs, I've been nearby and on teams where the review burden from developers building away-team code was already so high that you'd need to bake an extra month into your estimates just to get somebody to actually look.

Essentially, something big has to happen that affects the revenue/trust of a large provider of goods, stemming from LLM use. They won't go away entirely. But this idea that they can displace engineers at a high rate will.

I feel the current proliferation of LLMs is going to resemble the asbestos problem: a cheap miracle thingy, overused in several places, with slow gradual regret and chronic harms/costs. Although I suppose the "undocumented nasty surprise" aspect would depend on the adoption of local LLMs. If it's a monthly subscription to cloud stuff, people are far less likely to lose track of where the systems are and what they're doing.

Anecdotally, they don't really remember the feedback as well, because they weren't involved in writing the code.
It's burnout-inducing to see your hard work and feedback go in one ear and out the other. I personally know people looking to jump ship because they waste too much time at their current employer on this.

We love this for Amazon; they're a very strong company making bold decisions.

Code review should not be (primarily) about catching serious errors; if those are few, that's not the best use of the time. The goal is to ensure the team is in sync on design, standards, etc. The fact that software is "soft" makes it seem like this doesn't apply, but it does, not least because once you have gone down the wrong path with software design, it is very difficult to pull back and realize you need to go down an entirely different one. The analogy to manufacturing would be something like: if the parts coming out of a machine are all bad, just sending them to rework is not a solution; you need to recalibrate the machine.

It's basically an even-more-ridiculous version of ranking programmers by lines of code per week. What's especially comical is that I've seen enormous gains in my (longish, at this point) career from learning other tools (e.g., expanding my familiarity with Unix or otherwise fairly common command-line tools), and never, ever has anyone measured how much I'm using them, and never, ever has management become in any way involved in pushing them on me. It's like the CEO coming down to tell everyone they'll be making sure all the programmers are using regular expressions enough, and tracking time spent engaging with regular expressions, or counting how many breakpoints they're setting in their debuggers per week. Seems like yet another smell indicating that this whole LLM boom is built on shaky ground.

That's because they weren't sold regex-as-a-service by a massive company while also being reassured by everyone that any person not using at least one regular expression per line of code is effectively worthless and exposes their business to the threat of immediate obsolescence and destruction. They finally found a way to sell that same kind of FOMO to a majority of execs in the software industry.

Gotta be careful if you do that, though; e.g., Copilot can monitor the 'accept' rate, so at bare minimum you'd have to accept the changes then immediately back them out... Why do we keep reinventing the wheel when it comes to perverse incentives? I mean… throw some docs into the context window, see it explode. Repeat that a few times with some multi-step workflows. Presto, hundreds of dollars in "AI" spending accomplishing nothing. There's a lot of learning opportunity in failing, but if failure just means spamming the AI button with a new prompt, there's not much learning to be had.

Maybe I'm an oddball, but there's a limit to how much PR reviewing I could do per week and stay sane.
I'd say like 5 hours per week max, and no more than one hour per half-workday, before my eyes glaze over and my reviews become useless. Reviewing code is important and part of the job, but if you're asking me to spend far more of my time on it, and across (presumably) a wider set of projects or sections of projects so I've got more context-switching to figure out WTF I'm even looking at, then yes, I would hate my job by the end of day 1 of that. Code review is hard and tiring, much more so than writing. I've never met anyone who would be okay with reviewing code as their full-time job.

How else would they train the LLM PR reviewers to their standards? I've never personally been in the position, because my entire career has been in startups, but I've had many friends in the unenviable position of training their replacements. We could be looking at a potential future where an employee or contractor doesn't realize they are actually just hired to generate training data for an LLM to replace them, and then be cut.

Feels inevitable that code for aviation will slowly rot from the same forces at play, but with lethal results. That doesn't sustain, so a couple of more-junior engineers can do similar work as they mature. The pressure to use AI will increase, and basically you'll be fired for not using it. The impression I get from SWEs I've met throughout my life is that most of them don't actually care about their job. They got in because it paid well and demand was plentiful.
I find myself context-switching all the time, and it's pretty exhausting, while also finding that I'm not retaining as much deep application-domain knowledge as I used to. On the surface, it's nice that I can give my LLM a well-written bug ticket and let it loose, since it does a good job most of the time. But when it doesn't do a good job, or it's making a change in an area of the codebase I'm not familiar with, auditing the change gets tiring really fast.

A one-shotted feature vs. a multi-iteration feature may produce the same lines of code and general shape, but one is likely to be higher "quality" in terms of minimal defects. Along the same lines, when I review some gen-AI-produced PR, it feels like I'm reading assembly and having to reverse-engineer how we got here. It may be code that runs and is perfectly fine, but I can't tell what the higher-level inputs were that produced it, and whether they were sufficient.
The way I am working with AI agents (Codex) these days is to have the AI generate a spec as a series of MD documents, where the AI implementation of each document is a bite-sized chunk that can be tested and evaluated by the human before moving to the next step, and roughly matches a commit in version control. In this manner, I have decent knowledge of the code, and knowledge I am more comfortable with than from one-shotting. I used to "rush" through this process before, with less upfront planning and more of a focus on getting a working scaffold up and running as fast as possible, with each step along the way implemented a bit quicker and less robustly, on the assumption I'd return to fix up the corner cases later. I find the code generated by this process to be better in general than the code I've produced over my previous 35+ years of coding.

Still within the engineering IC role, but on a different track.

1. Both should write code to justify their work and impact.
2. Sr- code must be reviewed by Sr+.

What happens:
a. Sr+ output drops because review takes more and more of their time.
b. Sr+ just blindly accepts because the volume is too high, and they also have their own work to do.
c. Sr+ asks Sr- to slow down, and then Sr- can get bad reviews for their output, because on average Sr+ will produce more code.

I think (b) will happen.
Think about it: how do you increase the speed at which one can review code? Now, this won't be the case everywhere; e.g.
in outsourced regions the conditions will force people to operate a certain way. I'm not a SWE by trade; I just try to look at things from the pragmatic standpoint of how orgs actually make incremental progress faster.

Beauty is optional, but it makes life more worth living.

When they fire everyone, juniors will fix it with AI. This is in general; I wouldn't recommend it at critical services like AWS. But yes, I agree with the rest, which probably makes up a tiny, tiny fraction of the software created today, and will be orders of magnitude smaller as a fraction in the future.

And from their sagely reviews, we shall train a large language model to ultimately replace them, because the most fungible thing at Amazon is the leadership.

Imagine having to debug code that caused an outage when 80% was written by an LLM and you now have to start actually figuring out the codebase at 2 AM... :) I think the team I was on was a bit of an outlier in terms of owning 40 dumpster fires at once, and the first time anyone read any one of them was at 2 AM because it was down. Having an LLM give early passes on reading the godawful C++ code - the kind where you can tell at a glance that it's not gonna work as expected, but you can't tell why, or what "expected" actually is - would have been phenomenal, and would have gotten me back to sleep at 3 on those codebases rather than 5.
Obviously it's probably cost-prohibitive to do an all-to-all analysis for every PR, but I imagine that with some intelligent optimizations around likelihood and similarity analysis, something along those lines would be possible and practical.

Amazon does have those things, and has models fine-tuned on those postmortems. Noisy reviews are also a problem: the PR doesn't know what scale a chunk of code is running at without access to 20 more packages and other details. COEs and Operational Readiness Reviews are already the documents you mention, but they are largely useless in preventing incidents.

Imagine if the #1 problem of your woodworking shop is staff injuries, and the solution that management foists on you is higher-RPM lathes. I am still seeing this mindset, with AI agents.

In the meantime they will be quite a bit slower, I'd imagine. I also wonder if those seniors will ever get to actually do any engineering themselves now that they're the bottleneck.
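One cheap version of the "similarity analysis" idea above is token shingling plus Jaccard similarity, used to flag past-incident diffs that resemble an incoming PR before escalating to anything expensive. A minimal stdlib-only sketch; the diff snippets and functions here are invented for illustration, not anything Amazon actually runs:

```python
import re

def shingles(text: str, k: int = 3) -> set:
    """Break a diff into k-token shingles for cheap set-based comparison."""
    tokens = re.findall(r"\w+", text.lower())
    return {tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: intersection size over union size."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical snippets: an incoming PR vs. a diff from a past incident.
incoming_pr   = "retry_count = 0 while retry_count < MAX_RETRIES: call_service()"
past_incident = "retry_count = 0 while retry_count <= MAX_RETRIES: call_service()"
unrelated     = "def render_header(title): return f'<h1>{title}</h1>'"

print(jaccard(shingles(incoming_pr), shingles(past_incident)))  # high score
print(jaccard(shingles(incoming_pr), shingles(unrelated)))      # zero overlap
```

Only PRs that score above some threshold against the incident corpus would need the costly model-based comparison, which is what makes an all-to-all pass tractable.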
CS-wise, we use static analysis to judge good code from bad. How much time does it take to run the basic static-analysis tools for most computer languages over AI output? Some juniors need firing outright.

Maybe as software-engineering topics, but that's a different discipline.

First thing that comes to mind: it reminds me of those movies where some dictatorship starts to crumble and the dictator gets tougher and tougher on the generals, not realizing the whole endeavor is doomed, not just the current implementation.

Then again, as a former Amazon (AWS) engineer: this is just not going to work; it is less and less feasible.

L5 engineers are already supposed to work pretty much autonomously, maybe with L6 sign-off when changes are a bit large in scope.

L6 engineers already have their own load of work, plus a fairly large number of engineers "under" them (anywhere from 5 to 8). Properly reviewing changes from all of them, and taking responsibility for that, is going to be very taxing on such people.

L7 engineers work across teams and might have anywhere from 12 to 30 engineers (L4/5/6) "under" them, or more. They are already scarce in number, and they already pretty much only do reviews (which is proving insufficient, it seems). Mandating sign-off, and mandating assumption of responsibility for breaking changes, means these people will do nothing but reviews and will be stricter and stricter[1] with the engineers under them.

L8 engineers barely do any engineering at all, from what I remember. They mostly review design documents, in my experience not always expressing sound opinions or having a proper understanding of the issues being handled.

In all this, consider the low morale (layoffs), the reduced headcount (layoffs), the rise in expectations (engineers trying harder to stay afloat[2] due to... layoffs), and engineers cranking out WAY more code thanks to having gen-AI tools at their fingertips... I'm going to tell you, this stinks A LOT like a rotting Day 2 mindset.

----1.

Over the next few days my account history came back, except for purchases made in Q1 2026. A few substantial purchases I made are nowhere to be found anymore. I attributed this to Iranian missiles hitting some of their infrastructure in the EU, as had been reported. Now I am not sure if it was blast radius from the missiles or AI mishaps.

Seniors are now just another CI stage for their slop to pass.

In my experience, it's high-quality for creating and iterating on specs. Tools like Cursor are optimized for human-driven vibing -- they have great autocomplete, etc. Kiro, by contrast, is optimized around the spec, which ironically has been the most effective approach I've found for driving agents. I'd argue that Cursor, Antigravity, and similar tools are optimized for human steering, which explains their popularity, while Kiro is optimized for agent harnesses. Vibe-coding culture isn't sold on spec-driven development (they think it's waterfall and summarily dismiss it -- even Yegge has this bias), so people tend to underrate it. Kiro writes specs using structured formats like EARS and INCOSE.
It performs automated reasoning to check the spec for consistency, then generates a design document and task list from it, similar to what Beads does. I usually spend a significant amount of time pressure-testing the spec before implementing (often hours to days), and it pays off. Writing a good, consistent spec is essentially the computer equivalent of "writing as a tool of thought" in practice.

Once the spec is tight, implementation tends to follow it closely. Kiro also generates property-based tests (PBTs) using Hypothesis in Python, inspired by Haskell's QuickCheck. These tests sweep the input domain and, combined with traditional scenario-based unit tests, tend to produce code that adheres closely to the spec. I also add a small instruction, "do red/green TDD" (I learned this from Simon Willison), and that one line alone improved the quality of all my tests.

Kiro can technically implement the task list itself, but this is where agents come in. With a solid Kiro spec and task list, agents usually implement everything end-to-end without stopping; I haven't found a need for Ralph loops. (Agents sometimes tend to stop midway on Claude plans, but I've never had that happen with Kiro. Not sure why; maybe it's the checklist, which includes PBT tests as gates.) Kiro didn't have the strongest start, but the Kiro IDE is one of the best spec generators I've used, and it integrates extremely well with agent-driven workflows.
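The property-based tests described above can be approximated with nothing but the stdlib. A minimal sketch of the idea -- the `dedupe_sorted` function and its spec properties are invented for illustration; Hypothesis adds the parts this sketch lacks, namely smart input generation and shrinking of failing cases:

```python
import random

def dedupe_sorted(xs: list) -> list:
    """Toy function under test: sorted copy of xs with duplicates removed."""
    return sorted(set(xs))

def check_properties(xs: list) -> None:
    """Spec properties, in the spirit of QuickCheck/Hypothesis."""
    out = dedupe_sorted(xs)
    assert out == sorted(out)            # output is sorted
    assert len(out) == len(set(out))     # output has no duplicates
    assert set(out) == set(xs)           # same underlying elements as input

# Sweep the input domain with random cases instead of hand-picked scenarios.
rng = random.Random(0)  # seeded, so failures are reproducible
for _ in range(500):
    n = rng.randrange(0, 20)
    check_properties([rng.randrange(-5, 5) for _ in range(n)])
print("500 random cases passed")
```

The contrast with scenario-based unit tests is that the assertions state what must hold for *any* input, which is why this style of test tends to pin an implementation to its spec.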
Seems to me too low-level in everyone's stack to not have humans doing the work, especially at this stage.

This feels like a corporate mass delusion of unprecedented scale. But how can they possibly review reams of code to a degree sufficient that they can personally vouch for it?

You just have to take responsibility for the code. We are delivering like never before, but we have a lot of experience in how to do it as safely as possible. They say you need a year to build up experience, fyi.

I feel bad for those engineers who will have to sign off on things they will most likely not have enough time to review.

As an alternative, a bunch of people got into their one-person trucks and drove to the store to buy whatever thing would have been efficiently delivered.