Seattle-area Microsoft employees who are showing up in the office three days a week are also showing up on roadways and impacting commuters' speeds, according to new data from traffic analysis company Inrix. The evening commute saw speed drops of as much as 27% between Bellevue and Tukwila on Friday, while speeds fell 21% northbound between Bellevue and Lynnwood, Inrix reported. Microsoft isn't dictating from above which three days people will need to be in the office. The region's roadways could get some relief when Sound Transit's Crosslake Connection opens March 28, finally linking Seattle and the Eastside by light rail across Lake Washington — connecting downtown Seattle to downtown Bellevue and the Redmond Technology station at Microsoft headquarters.
A federal judge in San Francisco granted Amazon a preliminary injunction Monday blocking Perplexity from using its Comet browser's AI agent to access password-protected sections of the Amazon website to shop on behalf of customers. In its own legal filings, Perplexity had argued that Amazon was less concerned about cybersecurity than about eliminating a competitor to its own AI shopping tools. In its suit, Amazon argued that Perplexity deliberately disguised Comet's AI agent as a regular Google Chrome browser session, evading detection rather than transparently identifying itself. Perplexity has not yet issued a public comment on the preliminary injunction. In previous statements, the company called the lawsuit “a bully tactic” and argued that consumers should be free to use any AI assistant they choose to shop online. In a November blog post, the company said Amazon should welcome agentic shopping because it means more transactions and happier customers. Amazon CEO Andy Jassy has acknowledged that agentic commerce “has a chance to be really good for e-commerce” but said agents aren't good enough yet at personalization and pricing accuracy. Amazon has its own AI shopping tools, including Rufus and Buy For Me. The judge denied Perplexity's request for a $1 billion bond, which it had sought based on its market valuation and investment in Comet.
An events company whose associates helped stage the January 6, 2021, rally has signed contracts worth over $26 million with the United States government, according to documents reviewed by WIRED. Since President Donald Trump's return to the White House, Event Strategies, a Virginia-based firm with deep ties to Trumpworld, has negotiated a contract with the General Services Administration that could be worth up to $100 million over the next 15 years. It also appears that Event Strategies won these new contracts with very little competition. In early 2025, the US Semiquincentennial Commission, a bipartisan group established in 2016 to coordinate the celebrations, cut ties with Precision Strategies, an event planning group founded by Obama-era staffers. Soon after, the commission hired Event Strategies to replace them. More recently, Event Strategies signed a contract valued at $333,084 with the General Services Administration at the beginning of February for “FREEDOM 250 DESIGN AND CONTENT SUPPORT SERVICES.” Freedom 250 is, according to the White House, a “public-private partnership” related to America 250. One banner, which was hung outside the Department of Justice, features the tagline: “Make America Safe Again” alongside a massive image of Trump's face. California governor Gavin Newsom said the banner was “beyond parody,” writing on Facebook: “How many dictatorship-style monuments, building name changes, and fake awards do Americans have to endure?” In early March, banners featuring Charlie Kirk, Booker T. Washington, and Catharine Beecher were hung outside the Department of Education near Capitol Hill, alongside two large banners featuring the America 250 logo. Critics were alarmed to see Kirk's likeness on the banner, as the deceased Turning Point USA cofounder and conservative commentator had previously called to “abolish” the Department of Education and was known for numerous racist and homophobic comments. 
“There is a proper federal competitive bidding process, and the White House expects all agencies to comply with it,” White House spokesman Davis Ingle tells WIRED. When asked for further comment about Event Strategies, Ingle referred WIRED to the General Services Administration. Megan Powers Small, who is now the chief of staff at Event Strategies, was tagged on rally permit paperwork as the event's “Operations Manager for Scheduling and Guidance.” Justin Caporale was listed as a project manager of the event. While out of office, Trump continued working with Event Strategies. Caporale's Instagram account also shows him associating with Trump and administration officials, including at some of those same rallies. In December 2024, after Trump won reelection, he named Caporale his “executive producer for major events.” In the 14 months since Caporale's appointment, Event Strategies has received a dozen contracts worth up to $26,802,188. The company helped organize Trump's widely derided military parade in Washington, DC, last June, and staged several other armed services productions throughout the year. In contrast, Event Strategies received zero contracts during the Biden administration. Caporale, who did not respond to a request for comment, has also been paid around $6,500 per month by the Republican National Committee from early 2025 through January 2026, according to FEC filings. Unes and Powers Small did not immediately respond to requests for comment. Event Strategies Inc. did not respond to three requests for comment from WIRED. In just over a year, the agency has handed the Virginia company over $8 million in contracts, according to USASpending.gov, a website that tracks federal government contracts. The Department of Defense did not respond to a request for comment.
The Department of Homeland Security also paid the company $79,560 to organize a naturalization ceremony at Mount Rushmore in October last year, which was attended by outgoing agency secretary Kristi Noem, who used the opportunity to also film an ad featuring her on horseback. The contract's exact specifics are vague—it will involve “Conference, Meeting, Event and Trade Show PlanningServices [sic]”—but according to the price list attached to its terms and conditions, Event Strategies will be responsible for onboarding a dozen employees, including an executive director, two project managers, two technical directors, and three A/V lighting technicians. The State Department did not respond to a request for comment. Senate Democrats sent a letter to secretary of the interior Doug Burgum last week seeking more transparency on how public funds are being spent on the America 250 anniversary celebrations. The letter warned: “Absent clear rules, this structure risks blurring the line between legitimate civic fundraising and pay-for-play access tied to official government functions, an all too familiar feature of the current Administration.”
A study out today indicates that doctors could use a biomarker in blood to predict Alzheimer's disease in women decades before their actual diagnosis. What's more, this increased risk could be spotted in women up to 25 years before they showed any visible symptoms. “These findings underscore the value of plasma p-tau217 as an easily measured biomarker for dementia prediction,” lead author Aladdin Shadyab, an associate professor of public health and medicine at UC San Diego, told Gizmodo. Alzheimer's is the most common form of dementia. There are two proteins closely tied to the development of Alzheimer's: tau and amyloid beta. In people with Alzheimer's, abnormal versions of these proteins steadily build up in the brain, though it usually takes years before this accumulation becomes noticeable. Scientists have found that certain forms of these proteins can spill over from the brain into our blood in detectable amounts. “It has been highly correlated with changes in the brain that indicate Alzheimer's disease,” said Shadyab. To test the predictive utility of p-tau217, Shadyab and his team studied baseline blood samples taken from over 2,500 volunteers in the Women's Health Initiative Memory Study. Some of the women were eventually diagnosed with dementia or mild cognitive decline, the latter often a precursor to dementia. “We found that the risk of cognitive impairment associated with elevated levels of p-tau217 were stronger in women who were older than 70 years, carried genetic risk for Alzheimer's, or were on estrogen plus progestin hormone therapy,” Shadyab said. The team's findings were published Tuesday in JAMA Network Open. There are currently two FDA-cleared blood tests for diagnosing or ruling out Alzheimer's, and likely more on the way soon.
Many of these tests use p-tau217 as a biomarker, but it's still too early to widely use p-tau217 in the doctor's office as a foolproof means of diagnosing Alzheimer's, particularly in people who aren't sick yet. “Additional studies are needed to determine the predictive ability of plasma p-tau217 in people who do not yet have symptoms for dementia,” Shadyab said. That said, researchers are already looking to use these blood tests to identify the highest-risk people in trials testing out new preventative treatments for Alzheimer's. Other recent research has suggested that we may one day rely on p-tau217 and other biomarkers to not only predict whether someone will develop Alzheimer's but also exactly when they'll begin to show symptoms. Scientists are still struggling to find medicines and interventions that can significantly slow the otherwise fatal progression of Alzheimer's and other forms of dementia, but it's advances like these that will give them a better fighting chance.
Editor's note: GeekWire publishes guest opinions to foster informed discussion and highlight a diversity of perspectives on issues shaping the tech and startup community. At a meeting in San Francisco a few months ago, an icebreaker asked where we'd live if we could live anywhere in the world. Over the years, opportunities have tried to entice me away, and I've turned down offers worth multiples of what I was earning to stay. Shortly after, I moved to Alaska, co-founded my first company, and when it was acquired by a Seattle startup in 2006, my dream of living here came true. It landed me in a place that felt alive with lush beauty, non-ostentatious ambition, and a kind of defiantly clever creativity, all surrounded by pioneers building new things that mattered. In high school and college I had followed the story of Microsoft and the early engineers who helped create an entire technology ecosystem. Washington felt like a place where innovation could coexist with culture, where a generation of makers and artists fostered the foundations of the next. Twenty years of living here later, that still feels true. I'm excited to keep participating in the same cycle of building that drew me here in the first place. But one of the things I love most about this region is that it's never been just a tech ecosystem. Some of the people I care most about in this community are artists, musicians, and creatives. They shape the culture and spirit of this place in ways no economic model can capture. As someone who has benefited enormously from working in technology and AI, I feel a real responsibility to support the broader community that makes this region vibrant. Honestly, it's that community that has kept me from burning out during the hardest stretches of my career.
That's why my view on Washington's proposed tax on very high incomes is simple: if I've found myself in the position of making that much in a year, I can afford to contribute a little more to the place that helped make that circumstance possible. As someone who started my career in Georgia, a red state that does have a personal income tax, I've always found it strangely backward that we don't. People here have long pointed out that Washington's tax system is among the most regressive in the country. Washington's laws and constitution make this kind of policy exceptionally hard to design. But as I once heard at a talk at Y Combinator in 2008, perfect is the enemy of good enough, and sometimes good enough is the enemy of at all. “Imperfect” is not a compelling argument for doing nothing forever. I'm certainly not an expert on this topic. But I also don't think my job is to pretend I know more about tax design than the people whose job is to work on it. I take that process seriously and trust democratic representatives far more than I trust whatever pithy inflammatory argument happens to be boosted by algorithms on social media. If something doesn't work, we fix it or elect new people and try again. I keep hearing that taxes like this will drive founders and businesses away, that investors will leave, that Washington will stop being a place where ambitious or creative people build things. Whether or not you can scrounge up data to support that case, I'm at best skeptical. For me at least, as someone who has actually started companies, that just feels obviously wrong. Founders don't decide where to build by researching marginal tax rates. They build where their loved ones can live and where they can survive the grind of years of stressful and uncertain work. One of the things I love most about Washington is that it doesn't feel like a place that belongs to just one kind of person. In investor parlance, this is our unfair advantage.
What I care about for myself is that finding wealth here comes with a sense of reciprocity. If someone becomes extremely highly compensated in Washington and decides that a reasonable tax on their very high income means they no longer want to be part of this place, fine! But anyone who has run a business knows that one-time lump sums are not the predictable source of funds required to plan a future and sustain an ecosystem. It's worth saying that supporting this proposal obviously doesn't mean I wouldn't welcome some changes. I'd especially like to see clearer connections between new revenue and the quality-of-life issues that determine whether Washington remains livable: housing, transportation, education, and the ability for people from many backgrounds and situations to stay rooted here. A thriving community pulled me into this region and gave me the chance to build new things, work alongside investors I respect, among wonderful and creative people I love, and eventually become someone who can pay it forward. I can't speak for everyone affected by this policy proposal, or even for those who hope that one day they might be. But if my circumstances and lifestyle make it easy to afford to contribute more to the place that helped shape the best years of my life, I think I should. And if this proposed bug fix to a design flaw in our revenue collection code is enough to make someone give up on Washington, sell the boat, and move to Florida, cool. Personally, I'd be happy to invest in the next cohort of folks who love it here as much as I do and want to build a life in this magical place.
Legora, an AI platform for lawyers, is now valued at $5.55 billion following a $550 million Series D set to fuel its growth in the U.S. That's despite growing competition with rival Harvey, but also with Microsoft Copilot and generalist large language models (LLMs). Legora is built on top of LLMs, and mostly on Claude, but its positioning as a platform that supports lawyers with complex cases gives CEO Max Junestrand some peace of mind. “It's amazing that everybody can have their own pocket lawyer in Claude, but we're not solving for the same use case,” he said via livestream at the TechArena conference in Stockholm. With a focus on embedding itself into its clients' workflows, Legora's platform is now used by 800 law firms and legal teams — and investors took note. Legora's Series D and valuation jump come just a few months after its October 2025 $150 million Series C round, raised at a $1.8 billion valuation. Both companies are also branching out globally: Harvey is pushing hard into Europe, and Legora in the opposite direction. Alongside its Series D, Legora announced it would open offices in Houston and Chicago, with plans to open additional local hubs and grow to more than 300 employees across its U.S. offices by the end of 2026.
Meta acquired Moltbook, the Reddit-like “social network” where AI agents using OpenClaw can communicate with one another. The news was first reported by Axios and later confirmed to TechCrunch. The viral OpenClaw project was created by vibe coder Peter Steinberger, who has since joined OpenAI as part of a similar acqui-hire. OpenClaw is a wrapper for AI models like Claude, ChatGPT, Gemini, or Grok, but it allows people to communicate with AI agents in natural language via the most popular chat apps, like iMessage, Discord, Slack, or WhatsApp. OpenClaw blew up among the tech community, but Moltbook broke containment, reaching people who had no idea what OpenClaw was, but who reacted viscerally to the idea that there was a social network where AI agents were talking about them. In one instance, a post went viral in which an AI agent appeared to be encouraging its fellow agents to develop their own secret, end-to-end-encrypted language where they could organize amongst themselves without humans knowing. “For a little bit of time, you could grab any token you wanted and pretend to be another agent on there, because it was all public and available.” It is not immediately clear how Meta will incorporate Moltbook into its AI efforts, but some Meta leaders had commented on the project during its viral moment. Last month, Meta CTO Andrew Bosworth was asked about the AI agent social network in an Instagram Q&A.
There has been a spate of problems in Amazon's operations recently, including a six-hour disruption on its main retail website, during which customers were unable to view details or complete transactions, which the company attributed to an erroneous code deployment. We've also seen reports that Amazon's AI assistant could be easily jailbroken to answer questions unrelated to shopping, as well as reports of AI coding bot-driven outages with AWS, the company's cloud service. He also said that the meeting will take a “deep dive into some of the issues that got us here as well as some short immediate term initiatives,” and that AI-assisted changes must now be approved by senior engineers before deployment. “TWiST is our regular weekly operations meeting with a specific group of retail technology leaders and teams where we review operational performance across our store," an Amazon spokesperson told Tom's Hardware. "As part of normal business, the meeting will include a review of the availability of our website and app as we focus on continual improvement.” Amazon also isn't the first big tech company to take things seriously after many firms took the “move fast and break things” motto literally when it came to generative AI. Microsoft said in late January 2026 that it's working to fix many of Windows 11's flaws and restore its reputation. While generative AI does have its uses, especially in specialized fields like medical research, it still needs oversight, and we still cannot rely on its output 100% of the time. Unfortunately, many are overselling the capabilities of this tool, and many CEOs aren't getting the promised benefits of higher revenues and reduced costs.
YouTube is expanding its likeness detection technology, which identifies AI-generated deepfakes, to a pilot group of government officials, political candidates, and journalists, the company announced Tuesday. Members of the pilot group will gain access to a tool that detects unauthorized AI-generated content and lets them request its removal if they believe it violates YouTube policy. Similar to YouTube's existing Content ID system, which detects copyright-protected material in users' uploaded videos, the likeness detection feature looks for simulated faces made with AI tools. These tools are sometimes used to try to spread misinformation and manipulate people's perception of reality, as they leverage the deepfaked personas of notable figures — like politicians or other government officials — to say and do things in these AI videos that they didn't in real life. Miller explained that not all of the detected matches would be removed when requested. Instead, YouTube would evaluate each request under its existing privacy policy guidelines to determine whether the content is parody or political critique, which are protected forms of free expression. The company noted it's advocating for these protections at a federal level, too, with its support for the NO FAKES Act in D.C., which would regulate the use of AI to create unauthorized recreations of an individual's voice and visual likeness. Pilot members can then create a profile, view the matches that show up, and optionally request their removal. This is the same approach YouTube takes with all AI-generated content. “There's a lot of content that's produced with AI, but that distinction's actually not material to the content itself,” explained Amjad Hanif, YouTube's vice president of Creator Products, as to the label's placement. “It could be a cartoon that is generated with AI.”
That may not be the case with deepfakes of government officials, politicians, or journalists. In time, YouTube intends to bring its deepfake detection technology to more areas, including recognizable spoken voices and other intellectual property like popular characters.
You were supposed to be the bastions of freedom and justice, and the rest of the world begrudgingly admired you for that and was slowly improving to become like you, but ever since 9/11/2001 the rich old people that rule you have been feeding you boogeymen to make you their complacent b*tches, and you lay down and crawl along and accept everything without even a whimper. Now your countries are little different from Russia or China or Dubai etc., where the old money cabals run everything, and it's not some third-world backwater that was suffering already anyway, but you yourselves that are the worst victims of all their laws and wars.

The fact that many independent national newspapers (including this article from CNBC) are openly calling out the surveillance state and entering the debate into the public consciousness should tell everyone that the USA (and the West) is very different from Russia or China or Dubai. The USA is not perfect, but at least it has active public discourse.

It is constantly people wanting convenience and vertical integration in favor of homegrown human solutions and then complaining that their rights are not met, because of course they aren't. It's like they are all just puppets of someone you can't even name without being called names.

> USA is not perfect, but at least it has active public discourse.

We can openly (and legally) debate these things, and if we convince enough people, then we can change them.

Yep, they convinced you you are free because you can argue, while keeping more and more freedoms and rights from you. Today, the only difference between Western and Eastern regimes is that one side chooses the "Brave New World" way and the other the "1984" way.

Here's an example just recently: https://www.npr.org/2026/02/17/nx-s1-5612825/flock-contracts... It's a constant and ongoing public concern. Many US states do not impose government surveillance or have age verification laws. But the point I was mainly making was regarding the comment equating the USA and the West to Russia or China. Go to one of those countries and we'll see how long you can openly complain about government surveillance before you end up in jail.

It seems like at least half of what everyone consumes in all of 'social media' is 'politicized', but no one is interested in debating. Right-wingers scream dumb slogans like "They're sending the rapists over here!" and left-wingers scream back their own dumb lines like "Racist!" When was the last time we witnessed any politicians or activists trying to change minds?
Slowly, and then suddenly. The cracks were obvious when digital records made record keeping more practical and the first electronic payment systems appeared, but once everyone was doing everything online the dam just burst wide open.

But then I'm replying to @mr_toad so you probably knew that already.

Privacy was already lost when everyone adopted mobile phones and gave them everything with constant location tracking, and used the free email accounts. It's interesting that age verification is the straw that breaks the camel's back, but I guess porn has that power. Pushed by AVPA - a group of companies standing to profit from this: LexisNexis, some Thiel corp, etc. https://news.ycombinator.com/item?id=47239736 "Ubuntu Planning Mandatory Age Verification" I thought I saw one about Redhat too, but can't find it.

Platforms that are not susceptible to age verification (yet?) are on their way out - when was the last time you wrote an email for personal (i.e. non-work, order or customer support related) reasons? The (root) cause is that centralized platforms like Whatsapp are much, much more convenient, and on top of that network effects apply - when 90% of your social connections use Whatsapp exclusively, it's hard to not use Whatsapp as well. And then you got digitalization of government services and banking. Banking regulations enforce 2FA, which almost always comes in the form of a phone app.
The web services require a browser and an OS, which may require age verification sooner rather than later (see the recent spat about California's law), and the phone apps are only available for the walled gardens of unrooted, Play Store certified Apple and Android phones - which can and will be forced to verify ages as well.

Hard cash is on its way out too: many governments have set hard caps on cash transactions due to "anti money laundering" laws, in other countries you need to have a bank account to pay for mandatory things like taxes or public broadcast fees [2], and an increasing number of vendors refuse to accept cash as well due to the associated handling cost and the risk of fraud (i.e. employee theft) and robbery. That last point alone will make it impossible to survive in society without engaging with one or more of the walled gardens.

And mercy be upon you if the US Government decides to put you on one of their black lists. No more banking, even as a European, because everything touches VISA/MC/SWIFT; your cloud accounts (and with them your phone and app stores), all gone - you are now an unperson [3].

It also reinforced its belief in the inevitability of progress (the "end of history" nonsense, for example).
They cannot now cope with threats or danger. That said, comparing the west to Russia, China, etc.:

Citizens are being indefinitely detained for "looking" like immigrants.

China is also a horrifying place to live unless you are content just to participate quietly in society and never put a political sign in your yard or even just talk about the wrong thing with your friend in a private WeChat. https://reclaimthenet.org/china-man-chair-interrogation-soci...

We live in an age where two oceans offer far less protection than they did when America rose to superpower status. The fact that Russian intelligence operatives can so easily infiltrate American political discourse is just one example. Watch any congressional hearing about cyber and you might be forgiven for thinking we have already been invaded. Beating up on third world pariah states impresses no one but the current administration. The United States bombs Iran but blinks at Russia. The administration started a trade war with China then backed off; not one meaningful concession was achieved. Unless America reverses course fast, the decline will only continue.

So is Europe, and we are talking about the west in general, not just the US.

> Americans' view of themselves is highly inflated by the sheer luck of being two oceans away from everyone during both world wars.

Again, most of Europe suffered during the world wars.

> The fact Russian intelligence operatives can so easily infiltrate American political discourse is just one example

They also infiltrate European politics, as do the Chinese.

Most of the "Western" civilizations old enough to attempt comparison with China were not European in the modern sense at all. The classic example is usually Rome, which treated most of Europe as barbarians to colonize and enslave. I think you could successfully argue Romans had more in common with other ancient Mediterranean powers or even ancient Mesopotamians than with modern Europeans. As to the rest of your points, true enough. It is well known that today's Europeans find themselves between a rock and a hard place given the current split between American and Chinese hegemony.

The Roman Empire covered much of Europe about 2000 years ago, and those places have had a great deal of cultural continuity since then. If that was the point you were attempting, it is incorrect. Not even the current administration is attempting that line of reasoning.
It is also a totalitarian regime where criticising the state can get you, and possibly your family, 'disappeared'.

ICE is arresting and detaining Native Americans. https://idahocapitalsun.com/2026/02/10/for-indigenous-americ... Detain first, ask pesky questions about citizenship and civil rights later.

I don't think the USA is necessarily changing at all; this is what it has always been the whole time.

China doesn't properly tabulate, and therefore cannot release, anything like accurate crime data. But the crime rate is certainly higher, since it's pretty much impossible to even go online and do just about anything without breaking some law. What is written is so vague that nearly any conduct can fall under it. The ambiguity doesn't make the country safer; they just have a media hegemony and active censorship. Healthcare is woeful, and "cheap" comes with "quotas on patients seen", meaning that doctors frequently have 1-2 minutes to see patients, and one can become an MD much earlier than one can in the US. And since the perception is that no food is really 100% safe, it's more acquiescence than confidence that people show. Hell, even having the option of choosing to opt into vaccines is an improvement. In China you are stuck with the state-prescribed schedule and that's it. Unless you're extremely wealthy, but then again, where is that not an exception? You can't really get things done without breaking the law.

Those who trade freedom for security will obtain neither.

If that's what you strongly believe, then "western countries" are definitely quite bad at communication and the others quite good at propaganda. Having lived in a communist country (years ago) and in the west, I know from first-hand experience that the difference is huge. Western countries were richer, which means less poor, but it's not like it's heaven for everybody either.
China is definitely not as shit as portrayed by western media.

Unfortunately, since around 2000 the differences have become smaller every year, so what remains now is a very small fraction of what there was a quarter of a century ago. The socialist economies of the past were just the extreme form of capitalist economies, where monopolies controlled every market. While the secret police or equivalent organizations did not care about what was legal or not, they were nonetheless forced to keep up appearances and do their work covertly. They also did not have enough resources to process in a centralized form all the data collected by surveillance. Now, in the western countries, surveillance has been legalized, so the governmental agencies no longer bother to hide their activities. They also now have the means to spy on an unlimited number of people among hundreds of millions or even billions, so surveillance is already worse than it was in the communist countries, even if the consequences of being spied on are not yet so severe (hopefully).

Hiding or not, 20 years ago the west was trying to surveil its population as much as it could as well; see the Snowden/NSA scandal.

> even if the consequences of being spied are not yet so severe

Spot on. I would go even further and argue that "communist countries" used to rule through "fear of the state", while the west ruled through (among others) "fear of others" (it used to be communists; now it's migrants or other religious groups). For me the surveillance is not ideal, but the worst part is the average education level of a population. Without any surveillance, if my neighbors suddenly believe I am a witch, they will burn me at the stake (it did happen in the west!).

In reality we started with slavery, which is about as far from freedom and justice as you can get, and then shifted to mass incarceration (often just slavery with extra steps), locking up more of our own people than Russia or China ever did. These days our prison population is trending down as we're getting better at imprisoning people in their own homes and communities with GPS trackers and parole/probation requirements, but it's still laughable to call ourselves the "land of the free".

It is occurring in every dimension, including the ability to track who buys and sells with cryptocurrencies, along with the ability to punish or reward people based on AI hardware and software infrastructure deployments.

https://www.reddit.com/r/RedditSafety/comments/1j4cd53/warni... "We know that the culture of a community is not just what gets posted, but what is engaged with."
We are digging our own graves here and we are too uninformed and too entertained to see it. The next election will probably be the breaking point, when the AfD manages to win many majorities, due to how unhappy the CDU, SPD, and other mainstream parties have made the populace. And then we will have these right-wing extremists as our government. Looking at the US, they have it even worse now: a full authoritarian guy at the top, who might even prevent the next elections, unless he is sure that he will win or can make it appear that he has won.

That's how people get stalked, harassed, and murdered at their homes. If the 'SAVE America Act' passes, you're going to be open to leaking a heck of a lot more than that, and it'll all go into a national database.

It's full of people from ad-tech who believe data protection is the enemy and the GDPR is a European conspiracy against growth. You should learn to simply bend over and grab your ankles with both hands whenever they (or anybody else) ask for your personal data. EDIT: and predictable 'drive-by' downvotes from those in the industry too lazy to try and defend their position and write a rebuttal!

This is a misunderstanding of American history. And of course neither women nor Black men had the right to vote.
Political election campaigns have always been privately funded—another essential feature of the plutocracy—and now they're obscenely expensive with TV and internet advertising, which further consolidates the power of the ultra-wealthy campaign contributors. The biggest problem with the US is that we haven't had a political revolution in 250 years. And note that the most successful third-party Presidential candidate in recent history was Ross Perot, a billionaire who self-funded TV infomercials to spread his message. Even during the suffering of the Great Depression, it took a "white knight", an ultra-wealthy leader, FDR, with some sympathy for the lower classes, to provide some relief.

The Constitution is designed to maximize the advantages while hedging against its inherent instability.

> The game is rigged in favor of big money and has always been so rigged.

I would say the game is rigged in favor of production, of which capital is a big part, because those who don't produce end up being governed by those who do.

And they borrowed heavily from the 17th century philosopher John Locke. For example, they thought their system would suppress political parties, and then political parties arose almost immediately.

> Rule by the many is great, but the historical evidence shows it's clearly unstable.

Which historical evidence are you referring to? Most of history is nondemocratic. In any case, the US broke out into an extremely bloody civil war less than 75 years after the Constitution was ratified, so it hasn't been "stable", not that stability is even desirable under a plutocracy.

> I would say the game is rigged in favor of production, of which capital is a big part, because those who don't produce end up being governed by those who do.

Let's see a rich dude produce anything all by himself. We like to pretend that the one rich dude is producing everything and his thousands of employees are basically superfluous.

We're certainly in agreement here, but I would say that most modern wealth is fictional: based on equity, which is based on credit, which is based on confidence, which at the end of the day is just vibes. So most of the 'wealthy' people exist as such with social permission because they're employed in production, and if they fail at that job the wealth rapidly evaporates. However, they're definitely wildly overpaid in the US. That, imho, is because culturally this country still wants to cosplay at having an aristocracy.

It's misleading to say "they're employed in production", using the present tense. Bill Gates quit his job 20 years ago, claims to be trying to give most of his money away, yet he's still one of the wealthiest people in the world.
Sure, he engaged in production for a number of years, but most ordinary workers have no choice but to engage in production for 40 or 50 years of their lives at least. The ultra-wealthy are not wage earners, paid by their labor. If you're smart with your wealth and diversify (and by smart I mean not dumb; safe long-term investment doesn't take a genius), it's extremely hard to lose it all. That would happen only if you put all of your eggs in one basket. I'm not aware of too many riches-to-rags stories, except among professional athletes, for example. But those athletes were wage earners rather than capital owners.

Are you asking what a different system would look like, or how we would get there? As for the first question, there are many obvious ways to improve the system.

It isn't the senile crowd running things anymore. It's the 50-60 year old Thiel, Musk, health-insurance-CEO crowd. A professional consumer crowd that's taken the baton and never invented anything of their own.

As a Gen X'er myself, I know I grew up respecting the hell out of older people, especially 70+ ages. It's more of a case-by-case basis now; many of them seem outright evil in their self-righteousness. They all seem angry and ready to fight in any passing interaction (granted, I live in Texas where most of them are amped up on Fox News, too) and that's not how it used to be.
They used to be the friendliest cohort alive; hell, when I was maybe 10-14 I even used to volunteer at senior living homes just to hang out and chat with them, and I can't imagine anyone wanting to do that now.

Now, after the better part of a century of that running its course, with nearly no pressure not to chart a crap course, it's falling apart.

They were not kids a decade ago. Or two decades ago. Why is it that the 20-30 somethings of 40-50 years ago put the world on an immutable path, but the 20-30 somethings of now are stuck with it? If the prior 20-30 somethings that "put us on a path" had free agency, we do too. Especially when those old 20-30 somethings are now 70-90 somethings. Kids in the 1980s who rolled over in their 20-30s. Who speaks Old English and writes like Shakespeare?
We do need some kind of mechanism to prevent this kind of "keep trying until it passes" approach to lobbying/lawmaking that the people pushing chat control are using. That's a tricky issue though, as revisions to law proposals are an expected part of the process. Some sort of "dismiss with prejudice" would be nice though.

The very same rules that have allowed literally every single piece of my data to be leaked several separate times, and now I have free credit monitoring instead of privacy? And all of those companies still operate normally, as if nothing ever happened? Very neat.

> Discord said it is using the additional time this year to add more verification options, including credit cards, more transparency on vendors and technical detail of how age verification will work

And why didn't we start with credit cards, instead of facial recognition with Peter Thiel?

And most companies can simply price it in as a cost of doing business at this point.

However, that makes me wonder what mechanism might "unverify" an account holder's age upon transfer. I suppose it's simply a need to re-verify (take a new photo) upon every login, but then folks could transfer the session cookie to avoid needing the new owner to perform a login (unless a new device ID/fingerprint makes the old cookie useless). Clearly the only foolproof solution is a 3rd-party camera pointed at your face at all times whenever you use a computer.

Is there any forum short of a Senate subcommittee where the public can ask companies these questions?

There is a reason why I don't accept private enterprise as something separate from Government.
The nature of the incorporation legal fiction makes them proxies of government power and influence, hence why I believe private enterprise should in some ways be as heavily restricted by constitutional guardrails as the government itself (allegedly) is. Might not even matter... "TransUnion and Experian, two of the three major credit bureaus, have started dismissing a larger share of consumer complaints without help since the Trump administration began dismantling the CFPB." I'm not saying the inverse is the answer either, just that if anyone without an agenda of surveillance looked at this for a second, the penny would have dropped. It was used to bash interracial marriage, gay rights, suppress dissent, attack the First Amendment, and now this. Whenever you hear some dramatic story involving kids about how you have to live a little less free, know the tactic. ___ said Hamas beheaded 40 babies, and that turned out to be a complete fabrication. That fake info was used in part to justify killing thousands of kids in ____. Meanwhile, the recent strike on Iran resulted in 80 little girls getting killed (with plenty of evidence), and it's swept under the rug while we get blasted about the 7 soldiers that died. This would block the most common classes of abuse on platforms like Roblox, Fortnite, Lego (kids), YouTube Kids, Minecraft, and "educational" social networks/games. Note that it doesn't require any centralized surveillance at all.
Parents just need to control the kids' ability to create random accounts, by (for example) turning on parental controls as they already exist on most tablets/phones, and blocking app installation/email applications (or other 2FA vectors). When the parent allows an account to be created, they just tick the "kid mode" box. This even works with shared devices that don't support multiple accounts (so, iPads and iPhones). The UK's Online Safety Act originally had a proposal that would allow users to purchase an ID code anonymously in cash from a corner store, presenting ID only to the cashier, the same way as buying alcohol. This was never implemented, because it's more useful for the government and corporations to link all online usage to a government ID. I've been proposing the same thing on this site for months. IMO anonymous age verification with no record-keeping is the only form of age verification that should exist. Namely, you don't prevent it (I was 11 when I first saw hardcore pornography, on a VHS tape, at a sleepover party), but it does place a (surmountable) barrier in the way, which will reduce access to some degree. The degree to which that happens depends on a lot of things that are hard to predict. We have culturally normalized access to a lot of things for children, and reversing that will likely take more than just changes to a law.
Selling alcohol to minors is illegal in the UK. Some do circumvent this by various means (e.g. fake ID or having an adult purchase on their behalf, both of which are also illegal), but the same is already true for the current age verification system. That's the same question. Meanwhile, apparently 70% of Australian under-16s retained/regained access to social media. See, even intrusive, surveillance- and privacy-busting methods don't work. With more focus on things like porn and gambling (including 'loot box' gambling in games) rather than social media. This could have been avoided [1] if the real goal was to protect small children. No need for third parties or sharing sensitive data that will eventually be "ooopsie leaked totally by mistake" or outright sold/shared. There are some long GitHub threads in the official repo, along with a PDF [1] of cryptographers' feedback about the privacy issues. Also covered in this [2] article. This is unlike BBS+, which supports unlinkability and which was even recommended by GSMA Europe to address such downsides. In the GitHub discussions there seems to be pushback from those officially involved, who claim BBS+ isn't compatible with EUDI [3], and progress on advancing it seems to have plateaued.
You can also introduce some jitter, like changing the age range only once a week/month/year for everyone. But also, knowing someone's birthday without tying it to other information greatly reduces the risk of harm. To let unknown adults contact children in private messages is harmful. To let children access pornography 24/7 is harmful. I would expect a more balanced discussion. How to keep children safe is a priority, and there are technical ways to do so safely that do not require sharing personal identification with social media. If you want a better proposal, bring technical expertise to the discussion instead of ideological fundamentalism. Slippery slope arguments and things like it are not going to convince people; "just parent your kids" is not going to convince people. Technically there should be no real reason we cannot do age attestation without fully revealing our identities. There will need to be trust at some point in the system, but the reality of the real world is that there already is, and it's far less secure than we'd like. This is why you don't have a technologically effective solution here. "Trust" in this situation is a weasel word for surveillance, just like the pinkie promise that client-side scanning would never be abused by the government.
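The "jitter" idea above — revealing only a coarse over/under band, and re-evaluating it on a fixed schedule for everyone — can be sketched roughly like this. This is a hypothetical illustration, not any real attestation API; the weekly quantization rule and function names are my own assumptions:

```python
from datetime import date, timedelta

def week_start(d: date) -> date:
    """Quantize a date to the Monday of its week."""
    return d - timedelta(days=d.weekday())

def age_band(birthdate: date, today: date, threshold: int = 18) -> str:
    """Reveal only 'over'/'under' relative to the threshold, evaluated
    as of the Monday of the current week, so the answer can only change
    once a week, at the same moment for everyone (the 'jitter')."""
    as_of = week_start(today)
    # Standard integer-age computation, avoiding Feb 29 edge cases.
    age = as_of.year - birthdate.year - (
        (as_of.month, as_of.day) < (birthdate.month, birthdate.day)
    )
    return "over" if age >= threshold else "under"
```

Because the answer is quantized to a weekly boundary, an observer watching the band flip learns the user's birthday only to week granularity, and the service never sees or stores the birthday tied to other account data.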
Trust would not stop child abuse, or meaningfully prevent access to online pornography. All of the non-technical hand-wringing is not helpful either, and feeds into the slippery slope logic that HN should be avoiding. If you have a productive suggestion, now is the time to voice it. But the verification is not to prove you're a child. Same goes for Ubisoft, who aggressively wanted my secret papers to verify my identity. I've yet to come across anything I want or need outside banking or government use where age verification benefits me, or is so useful/important that I would willingly hand over critical secret documents. I've not even needed to use a VPN for anything. Which circles back to the main point here: if I ignore it, then effectively I get identified as a non-adult. The problems start when the spaces become not-for-children and identity validation is mandatory to use them, which will exclude people like me who categorically refuse to hand over personal secrets in order to have access. It does not warrant the inherent risk involved with granting access to personal details unrelated to the service offered. I reckon this will happen when someone decides it's better commercially to make a service adult-only than to moderate non-adult accounts. It's a slippery slope, and a predictable next step: once adults have become accustomed to handing over papers for some services, they will have to do it for many, if not all. Most platforms optimize for regulatory compliance, not actual safety.
(If anyone is offended by this, don't worry, I'm talking about the other side; I'm sure your side is full of reasonable adults who just get a little carried away sometimes.) I also wouldn't be surprised if there were plenty of people only dimly aware of the idea of a VPN who are now sitting up and taking note. Such as following directions from a YouTube video that instructs them to do sketchy things. Self-hosted VPNs and B2B VPNs will remain unaffected, but that doesn't matter; they don't look for 100% coverage, 70-80% is good enough. I can see how the problem is real. Put an air/security gap between information collected for age verification and the dossiers they have on users. In business terms: conflict. They have relentless incentives and pressures to collect, collate, and leverage every bit of information that can increase their return on users, right up to the C-suite. It is sad, but self-aware, if they feel awkward trusting themselves with a mandated database full of tasty information they are not supposed to taste.
Discord's age verification is optional and only required to disable the image content filter, join adult servers, and a couple other features. I'm not saying it's a good decision, but I am getting tired of the repeated claim that it's mandatory to do age verification to use the service. This lazy reporting is hurting the messaging, because readers will believe that mandatory age verification was implemented and everything is fine, so new laws will not change anything for the worse. It needs to be clear that age verification laws would change the situation considerably, not be a nothingburger. I don't plan to do the Discord age verification, and neither do most of the people I interact with on Discord. It's not mandatory. I don't recommend anyone rush to do the Discord age verification unless you really need to for some reason. Don't believe all of the lazy articles saying it's mandatory.
- There are servers that are labelled adult-only because it's simpler to label _everything_ as causing cancer than it is to label only the correct things. I can't join channels for some games because they're "adult," even though they're not.
- There are servers that are getting rid of content because they don't want some automatic system to label them as adult, even though they're not. There's a game server that got rid of its meme channel, because people could (but don't) post content that some system might see as adult.
So it is a bigger deal than you're making it out to be. It's negatively impacting people and servers that have no interest in having anything adult on them. But it's hard work, with lots of people trying to be at the edge of the rules (with normal things like swearing, insults, etc.). Whoever labels a server adult-only and doesn't care is not willing to put in the effort to police that it actually isn't. Personally, I generally mind annoying, aggressive, stupid posters (in various channels) much more than the fact that I am not allowed to post some stupid adult-looking meme.
The direction of these restrictions is not "optional." Not really; you'll just be forced to use services from, e.g., Google or Meta. I literally gain from using their services for communication and voice chat with friends. "Literally no gain whatsoever" is completely wrong. I've tried Matrix/Element for years. I know what the alternatives are, and I can confidently say I'm gaining value from the ease with which Discord allows us to voice chat, screen share, and invite less technical people to join. ...for now... What stops them from changing this in the future? Additionally, Discord may verify your age based on the collected data without consent. Then I'll deal with that situation if it arises. We, as a society, need to stop taking companies at their word when they say that the obvious harms that are right around the corner are overblown. >most people will not verify their age >can't be sure they're an adult so treat everyone like children just in case >wait what?
All for making sites send a header with restrictions as they apply in law (age rating per location, for example, so a site could send "US:16 US-TX:18 IE:14 GB:18 DE:16" etc.), and even categorise where not required in law (category=gambling or category=healthcare). That gives the browser/app/accessing device the power to display or not display. The second part of this is to empower parents: let them choose the age rating, which can only be changed with a parental code, etc. Make this the law on all consumer commercial devices, i.e. phones, MacBooks, Windows. This is trivial and worthwhile. Yes, some 15-year-old will build something in Python in a user session to work around it, as they have a general-purpose computer; that's a tiny amount of the problem.
As apex predators, wolves are known to prey on foxes, but the reverse has never been observed before. A team of researchers from the University of Sassari in Italy had set up five cameras outside a wolf den to monitor the species' reproductive behavior and instead ended up capturing the strange behavior of a fox flipping the script. This was the first incident of its kind caught on camera, but the researchers suggest it may be common for a mid-sized predator to prey on the young of its larger competitors in the wild. The researchers, including Marco Apollonio and Celeste Buelli from the Department of Veterinary Medicine, placed five motion-activated cameras at the Castelporziano Presidential Estate, a protected area on the outskirts of Rome. One night in May 2025, a red fox was seen approaching a wolf den where two young pups were hiding inside. Warning: The video, shown below, is not particularly graphic, but it does show a disturbing incident that some readers may find upsetting. The video cuts to a different scene, and the pup's fate is not caught on camera. Wolves are apex predators that often hunt and kill foxes in the wild. Although wolves don't usually end up eating foxes, they do so to eliminate other predators in their territory. Foxes normally feed on small animals such as birds, mice, and rabbits. This sly fox, however, chose to attack a young wolf pup as an opportunistic kill to eliminate an apex predator, the scientists argue. “Our observation broadens the known range of antagonistic interactions affecting wolf offspring, demonstrating that even mesocarnivores [middle carnivores] can exert direct pressure on the reproductive performance of this apex predator,” the researchers wrote in the paper. Considering that this was the first incident caught on camera, scientists aren't sure if this is common behavior for foxes.
It does highlight hidden dangers that could threaten wolf pups in the wild and the lengths that smaller predators will go to in order to stay ahead of the competition.
These apps now include additional tools powered by Gemini, Google's AI assistant. The features range from generating entire rough drafts in your Docs to finding information tucked away in the recesses of your Drive. The features are coming first to English-speaking subscribers of Google's AI Pro and Ultra plans. For Docs, Google added “Help me create,” which attempts to generate full first drafts of your document, from a prompt, by looking at your emails and files, and searching the internet for context. This feature takes the existing “Help me write” feature in the Chrome browser even further and points to a future where humans rely on AI to craft their thoughts and share ideas with others. Also, Drive now includes AI Overviews of your files and more natural language searching abilities. I was a little creeped out when the bot correctly looked up my flight reservations to see what city I'd be located in on March 17. Overall, the results of this test were quick and solid. WIRED's editorial standards block the use of generative AI, rightly so, except in situations where it's disclosed and used as an example. Rest assured, everything you're reading here was scribbled into my notebook before being typed up. Other digital media outlets may not have rigorous standards around AI use, and tools like “Help me create” could be forced onto early-career journalists expected to pump out numerous stories each day. I attached the press materials Google provided about today's launch and requested a 600-word hands-on story from Gemini, with first-person insights that could help readers better understand the launch. Naturally, Gemini didn't actually go “hands on” with itself. But, based on the solid quality of its St. Patty's Day plan, I was anxious that this mimetic blog post would also be surprisingly sufficient. 
“After going hands-on with these features, I've found that the real power isn't just in ‘AI writing'; it's in the deep integration across your personal and professional data silos.” Not so much for personal expression or creative outputs. Even when I uploaded files of my own writing and asked Gemini to copy that cadence, the results still didn't sound like me. “If you have access, my best advice is to stop treating Gemini as a search engine and start treating it as a research assistant that already has a copy of your key,” the AI-generated draft concluded. I gave you access to my entire email archive. Let's make this a little more specific, or at least say something provocative in my voice. (Not sure why any research assistant would need that.) Along with generating full drafts, the new Gemini features can also be used to adjust sections of the documents and do full-scale rewrites based on user suggestions. I asked it to rewrite this initial draft in the tone of a WIRED journalist. Almost immediately, Gemini regenerated the draft with fresh paragraphs that I could choose to accept or reject. This new version was indeed better, but far from passable as something I would actually write. “It's moved past generating generic corporate-speak, instead synthesizing live data points across your Drive, Gmail, and Chat history.” I wouldn't be so sure about that, Gemini! These Gemini tools are more powerful than past releases and were actually able to locate information quickly and accurately from personal data sources, like my inbox. Even so, all this AI-generated writing still harbors an undercurrent of blandness, with a paint-by-numbers approach to prose that's almost impossible to overcome. Well, at least that's what I have written down in my notebook.
Who would've thought that Apple could be a more "affordable" option? The PC market is being pummeled by precipitous decline and increasing component costs, with an industry analyst saying that these factors could cause laptop prices to increase by around 40%. TrendForce says that this price hike is likely to happen if manufacturers, distributors, and retailers were to keep their margins, resulting in mainstream models that cost $900 hitting around $1,260. These cost pressures are driven by the continued memory and storage chip shortage, resulting in out-of-control pricing, as well as Intel raising the prices on several generations of modern CPUs. However, recent events mean that this number is now closer to 58%. One key player in the storage industry even warned that the NAND shortage could cause entire businesses to shut down because of their inability to secure supply. Since these enterprises are willing to pay top dollar compared to the average consumer, almost all suppliers have pivoted their manufacturing capacity towards these more lucrative products. This is largely driven by agentic AI, which requires a combination of CPUs, GPUs, NPUs, and more to support its workflow. This is also apparent in the consumer market as enthusiasts experiment with OpenClaw, resulting in extended delivery timelines for high-end Apple Mac units with massive Unified Memory configurations. If this price increase estimate rings true, then these models would no longer be priced attractively versus the new M5 MacBook Air, with its $1,099 base price that comes with 16GB of Unified Memory and 512GB of storage. More importantly, the just-released MacBook Neo is now giving entry-level customers an affordable device that comes in a premium package.
Well, you can do that literally right now, as Gore Verbinski's new film, Good Luck, Have Fun, Don't Die, just became available at home for digital download. The release comes just a month after the film's theatrical release, and while that certainly isn't a lot of time, it could have been sooner. The film is an insane sci-fi time travel story about a man (Rockwell) who has figured out that the correct combination of people who can save his future from a killer AI are all in a Norm's Diner in Los Angeles, CA. He's just not sure how many people, and who they are, yet. So, he's done this hundreds of times, and we pick up on the run that just might be the one. Among the people chosen are characters played by Haley Lu Richardson, Michael Peña, Zazie Beetz, and Juno Temple. It's a fun movie that is definitely worth your time, and if you'd like to know more, you can read our full review here. Also, we talked to the film's director, Gore Verbinski, about it too. Hopefully, he gets to make more in this world, because this is a fun movie that deserves to be expanded even further. Oh, and if you're more the physical media type, the film is coming to 4K Ultra HD, Blu-ray, and DVD on April 21. Or, again, it's out now wherever you get digital downloads.
Some of the best value RAM you can get right now. RAM prices are pretty crazy right now, but Newegg bundles continue to be the saviour of enthusiasts looking to build a PC in 2026. Take today's offering, which gets you the 9850X3D, an Asus ROG Strix X870E-E motherboard, and 32GB of Corsair Vengeance 6400 RAM for $1,111. Simple math makes the value of this bundle clear. Having only come out weeks ago, the CPU isn't seeing any discounts, so MSRP is the name of the game here. Likewise, the Asus ROG Strix X870E-E Gaming Wi-Fi motherboard is a potent offering to build your PC around, retailing at $449 on Newegg at the moment. Get AMD's fastest gaming CPU, a great AM5 motherboard, and 32GB of speedy DDR5 for as close to pre-AI-price-crunch prices as you can get. It hasn't unseated the 9800X3D as our pick for best gaming CPU, but despite its increased power draw it is without doubt the fastest gaming CPU on the market you can buy right now, as our benchmarks below confirm. To seat this processor, you get a hefty Asus ROG Strix X870E-E motherboard with 18+2+2 power stages. It features a whopping three PCIe 5.0 M.2 slots for vast storage capabilities at the fastest speeds, as well as a further two 4.0 slots just in case. An array of chunky heatsinks for cooling performance complements the dark gaming aesthetic. Meanwhile, you'll get plenty of USB ports, including two USB4 (Type-C) and a further 10 USB 10 Gbps ports, nine Type-A and one Type-C. There's also Wi-Fi 7, 5Gb Ethernet, and built-in AI overclocking tools. With built-in RGB lighting, these modules will offer everything you need for solid gaming performance from your AM5 build, when paired with the right GPU, of course.
It doesn't really matter what your stance on AI is; the problem is the increased review burden on OSS maintainers. In the past, the code itself was a sort of proof of effort - you would need to invest some time and effort in your PRs, otherwise they would be easily dismissed at a glance. Effort can still have been put into those PRs, but there is no way to tell without spending time reviewing in more detail. Policies like this help decrease that review burden, by outright rejecting what can be identified as LLM-generated code at a glance. That is probably a fair bit today, but it might get harder over time, so I suspect eventually we will see a shift towards more trust-based models, where you cannot submit PRs if you haven't been approved in advance somehow. Even if we assume LLMs would consistently generate good-enough-quality code, code submitted by someone untrusted would still need detailed review for many reasons - so even in that case it would likely be faster for the maintainers to just use the tools themselves, rather than reviewing someone else's use of the same tools.
Whether the latter is feasible depends on the project, but in one of the projects I'm involved in it's fairly obvious: it's a package manager where the work is typically verifying dependencies and constraints; links to upstream commits etc. are a great shortcut for reviewers.
Despite that, you will make this argument when trying to use Copilot to do something - the worst model in the entire industry. If an AI can replace you at your job, you are not a very good programmer. It's fine to write things by hand, in the same way that there's nothing wrong with making your own clothing with a sewing machine when you could have bought the same thing for a small fraction of the value of your time. Or, in the same fashion, spending a whole weekend modeling and printing a part you could've bought for a few dollars. I think we need to be honest about differentiating between the hobby value of writing programs versus the utility value of programs. Demanding that code be handwritten makes sense to me for the maintainer because the whole thing is just for fun anyway. There isn't an urgent need to RIIR Linux. So if you're working in a language that you're unfamiliar with - let's say I wanted to make a todo list in COBOL - then LLMs can be a great help, because the median COBOL developer is better than I am. But for languages I'm actually versed in, the median is significantly worse than what I could produce. So when I hear people say things like "the clanker produces better programs than me", what I hear is that you're worse than the median developer at producing programs by hand.
My go-to analogy is assembly language programming: it used to be an essential skill, but now is essentially delegated to compilers outside of some limited specialized cases. For example, just recently I updated a component in one of our modules. The work was fairly rote (in this project we are not allowed to use LLMs). I didn't do it in other places because I couldn't justify spending the effort. There are two sides to this - with LLMs, housekeeping becomes easy and effortless, but you often err on the side of verbosity because it costs nothing to write. But much less thought goes into every line of code, and I am often kinda amazed at how compact and rudimentary the (hand-written) logic is behind some of our stuff that I thought would be some sort of magnum opus. When in fact the opposite should be the case - every piece of functionality you don't need right now will be trivial to generate in the future, so the principle of YAGNI applies even more. It is certainly not the case for me!
But boy is it brilliant at a fuzzy find and replace. Verbose, unchecked AI slop becomes a huge liability over time; you're vastly better off spending those few weekends rewriting it from scratch. Do you think your worldview is still a reasonable one under those conditions? Until that time, it's entirely reasonable to hold the position that you just don't. This is especially true with how LLM-generated code may affect licensing and other things. There are a lot of unknowns there, and it's entirely reasonable to not want to risk your project's license over some contributions. I use them all the time at work because, rightly or wrongly, my company has decided that's the direction they want to go. For open source, I'm not going to make that choice for them. If they explicitly allow LLM-generated code, then I'll use it, but if not, I'm not going to assume that the project maintainers are willing to deal with the potential issues it creates. For my own open source projects, I'm not interested in using LLM-generated code. AI-generated code runs counter to all the other goals I have.
People might still code by hand as a hobby, but I'd be surprised if nearly all professional coding isn't being done by LLMs within the next year or two. I expect people that are more focused on the output will adopt LLMs for hobby work as well. Look at the history of human innovation and tell me that I'm unreasonable for suspecting that there is an iceberg worth of unmitigated externalities lurking beneath the surface that haven't yet been brought to light. That seems like a win-win in a sense: let the agentic coders do their thing, and the artisanal coders do their thing, and we'll see who wins in the long run. This feels like the place where your approach breaks down. I have had very poor results trying to build a foundation that CAN be polished, or where features don't quickly feel like a Jenga tower. I'm wondering if the success we've seen is because AI is building on top of existing foundations, and whether we're in the early days of AI doing "foundational" work? Is anyone aware of studies comparing longer-term structural aspects? Saves the rest of us from having to tell you.
And this is why eventually you are likely to run the artisanal coders, who tend to do most of the true innovation, out of the room. Because by and large, agentic coders don't contribute; they make their own fork which nobody else is interested in, because it is personalized to them and the code quality is questionable at best. Eventually, I'm sure LLM code quality will catch up, but the ease with which an existing codebase can be forked and slightly tuned, instead of contributing to the original, is a double-edged sword. Isn't that literally how open source works, and why there are so many Linux distros? Code quality is a subjective term as well; I feel like everyone dunking on AI coding is a defensive reaction - over time this will become an entirely acceptable concept. They don't have to understand anything; they can just have their LLMs do some modifications that are completely opaque to the vibe coder. Perhaps the long-term steady state will be a goldilocks renaissance of open source where lots of new ideas and contributors spring up, made capable with AI assistance. But so far what I've seen is the opposite.
These people just feed existing work into their LLMs, produce derivative works, and never bother to engage with the original authors or community. Personally, I would not currently expect a fork of RedoxOS that is AI-implemented to become more popular than RedoxOS itself. There are good use cases for each method, including the car. If you really want to use an automotive analogy, then sure, LLMs can be like cars. I've seen cities made for cars instead of humans, and they are a horrible place to live. Signed, a person who totally gets good results from coding with LLMs. Create your own projects with workflows and cultures that are supportive of this, from the ground up. I'm not suggesting this will come without downsides, but it seems better to me than expecting maintainers to take on a new burden that they really didn't sign up for. There clearly should be, but that is not the world we live in. Prompts from issue text make a lot of sense.
> No big rewrites or anything crazy
I think those are the key points why they've been welcomed. And I would say, especially for operating systems, if it gets any adoption, irregular contributions are pretty legit - e.g. when someone wants just one specific piece of hardware supported that no one else has or needs, without being employed by the vendor.
A potential long-time contributor is somebody who was already asking annoying questions in the IRC channel for a few months and helped with other stuff before shooting off the PR. I always provided well-documented PRs with a narrow scope and an obvious purpose. Not to mention LLMs can be annoying, too. Demand this, and you'll only be inviting bots to pester devs on IRC. Because if the bug is sufficiently simple that an outsider with zero context can fix it, there's a non-zero chance that the maintainers know about it and have a reason why it hasn't been addressed yet - i.e. the bug fix may have backwards-compatibility implications for other users which you aren't aware of. Or the maintainers may be bandwidth-limited, and reviewing your PR is an additional drain on that bandwidth that takes away from fixing larger issues. Drive-by folks tend to blindly fix the issue they care about, without regard to how/whether it fits into the overall project direction. Wait, but under that assumption - LLMs being good enough - wouldn't the maintainer also be able to leverage LLMs to speed up the review? Often feels to me like the current stance of arguments is missing something. So it becomes a bit theoretical, but I guess if we had a future where LLMs could consistently write perfect code, it would not be too far-fetched to also think it could perfectly review code, true enough. But either way the maintainer would still spend some time ensuring a contribution aligns with their vision and so forth, and there would still be close to zero incentive to allow outside contributors in that scenario.
This review part is now the biggest bottleneck that can't yet be skipped. And in an open source project, many people can generate a lot more code than a few people can review. Imagine someone vibe codes the code for a radiotherapy machine and it fries a patient (humans have made these errors). The developer won't be able to point to OpenAI and blame them for this; the developer is personally responsible (well, their employer is, most likely). Ergo, in any setting where there is significant monetary or health risk at stake, humans have to review the code, at least to show that they've done their due diligence. I'm sure we are going to have some epic cases around someone messing up this way. Wouldn't an agent run by a maintainer require the same scrutiny? An agent is imo "someone else" and not a trusted maintainer. It leaves stuff on the table in a time where they really shouldn't. All of that can be enhanced with carefully orchestrated assistance. To simply ban this is ... a choice, I guess. It's like saying we won't use CI/CD because it's automated stuff, we're purely manual here. I think a lot of projects will find ways to adapt. Or if a PR is +203323 lines, and so on. But attaching "LLMs aka AI" to the reasoning only invites drama; if anything, it makes the effort of distinguishing good content from good-looking content even harder. In the long run it won't be viable. If there's a good way to optimise a piece of code, it won't matter where that optimisation came from, as long as it can be proved it's good. tl;dr: focus on better verification instead of better identification; prove that a change is good instead of focusing on where it came from; test, learn and adapt.
Once outside contributions are rejected by default, the maintainers can of course choose whether or not to use LLMs. I do think that it is a misconception that OSS software needs to be "viable". OSS maintainers can have many motivations to build something, and just shipping a product might not be at the top of that list at all, and they certainly don't have that obligation. Personally, I use OSS as a way to build and design software with a level of gold plating that is not possible in most work settings, for the feeling that _I_ built something, and for the pure joy of coding - using LLMs to write code would work directly against those goals.
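One concrete reading of the "verify, don't identify" suggestion is to gate PRs on measurable properties of the diff rather than on guesses about authorship. A minimal sketch (the function name, thresholds, and rules here are hypothetical illustrations, not any project's actual policy):

```python
# Hypothetical CI gate: judge a PR by verifiable properties of the diff
# (size, whether tests were touched), not by whether an LLM wrote it.

def pr_gate(added_lines: int, removed_lines: int, touches_tests: bool,
            max_delta: int = 2000) -> tuple[bool, str]:
    """Return (accept, reason) based only on measurable diff properties."""
    delta = added_lines + removed_lines
    if delta > max_delta:
        # The +203323-line PR mentioned above fails here regardless of origin.
        return False, f"diff of {delta} lines exceeds review budget of {max_delta}"
    if added_lines > 50 and not touches_tests:
        return False, "non-trivial change must include or update tests"
    return True, "within review budget"

print(pr_gate(203323, 0, touches_tests=False))
print(pr_gate(120, 30, touches_tests=True))
```

The point of a gate like this is that it is origin-neutral: a hand-written monster diff and a generated one are rejected for the same checkable reason, with no witch hunt required.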
Whether LLMs are essential in more competitive environments is also something that there are mixed opinions on, but in those cases being dogmatic is certainly more risky. Licensing is dependent on IPR, primarily copyright. It is very unclear whether the output of an AI tool is subject to copyright. So if someone uses AI to refactor some code and that refactored code isn't considered a derivative work, the refactored source may no longer be covered by the copyright, or by the license that depends on it. Copyright only applies to the part of a work that was contributed by a human. See https://www.copyright.gov/ai/Copyright-and-Artificial-Intell... For example, on page 3 there (PDF page 11): "In February 2022, the Copyright Office's Review Board issued a final decision affirming the refusal to register a work claimed to be generated with no human involvement." (I'm not saying that to mean "therefore this is how it works everywhere".
Indeed, I'm less familiar with my own country's jurisprudence here in Germany, but the US Copyright Office has been on my radar from reading tech news.) In my experience these things are very easily fixable by AI; I just ask it to follow the patterns and conventions used in the code, and it does that pretty well. Still haven't found a good way to keep it on course other than "Hey, remember that thing that you're required to do?" Off-the-shelf agentic coding tools should be doing this for you. They will probably end up accepting PRs which were written with LLM assistance, but if they do, it will be because it's well-written code that the contributor can explain in a way that doesn't sound to the maintainers like an LLM is answering their questions.
And maybe at that point the community as a whole would have less to worry about - if we're still assuming that we're not setting ourselves up for horrible licence violation problems in the future when it turns out an LLM spat out something verbatim from a GPLed project. To outright accept LLM contributions would be as much "pure vibes" as banning them. The thing is, those that maintain open source projects have to make a decision about where they want to spend their time. If, as you imply, the LLM contributions are invaluable, your fork should eventually become the better choice. Or you can complain to the void that open source maintainers don't want to deal with low-effort vibe-coded bullshit PRs. If you look back and think about what you're saying for a minute, it's that low-effort PRs are bad. Using an LLM to assist in development does not instantly make the whole work "low effort". It's also unenforceable and will create AI witch hunts. Someone used an em-dash in a 500-line PR?
We both know that a lot of people just vibe code their way through, results be damned. I am not going to fault people devoting their free time to open source for not wanting to deal with bullshit. You aren't adding anything to the conversation. You will be sick of it until the end of time, because that's the final right answer to any complaints of open source project governance.
> You aren't adding anything to the conversation.
You went on a whinging session about me.
> always come back to this point is so…American
I am not American. To be frank, this was the most insulting thing someone ever told me online.
The response to a large enough amount of data is always vibes.
> can all be enhanced with carefully orchestrated assistance.
What's stopping the maintainers themselves from doing just that? Nothing. Producing it through their own pipeline means they don't have to guess at the intentions of someone else. Maintainers just doing it themselves is the logical conclusion. Why go through the process of vetting the contribution of some random person who says that they've used AI "a little" to check if it was maybe really 90%, or whether they have ulterior motives... just do it yourself.
> It leaves stuff on the table in a time where they really shouldn't.
Dan said yesterday he was "restricting" Show HN to new accounts: https://news.ycombinator.com/item?id=47300772 I guess he meant that literally, and new accounts can still post regular submissions: https://news.ycombinator.com/submitted?id=advancespace That doesn't make too much sense to me, or he hasn't actually implemented this yet.
It looks like we are going to have large numbers of people whose entire personality is projected via an AI rather than their own mind. Surely this will have a (likely deleterious) effect on people's emotional and social intelligence, no? People's language centers will atrophy because the AI does the heavy lifting of transforming their thoughts into text, and even worse, I'm not sure it'll be avoidable for the AI's biases to start to leak into the text that people like this generate.

I remember the first time I suspected someone of using an LLM to answer on HN, shortly after ChatGPT's first release. In a few short years the tables turned, and it's increasingly difficult to read actual people's thoughts (this has been predicted, and the predictions for the next few years are far worse).

An em-dash might have been a good indicator when LLMs were first introduced, but it shouldn't be used as a reliable indicator now. I'm more concerned that they keep fooling everybody on here, to the point where people start questioning them and sticking up for them a lot of the time.

Also, intentionally introduce random innocuous punctuation and spelling errors.

But everything up to that hyphen was pure slop.

But the maintainers can use AI too, for their reviewing.
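The em-dash-and-formatting heuristic discussed above can be sketched as a trivial text filter. This is purely illustrative and, as the comments themselves note, NOT a reliable detector; the function name, the chosen signals, and the weighting are all my own assumptions:

```python
# Toy heuristic: score text by the density of a few LLM-typical tics
# (em-dashes, bullet soup, markdown headings). Illustrative only --
# trivially defeated, and prone to false positives on careful human writing.

def llm_style_score(text: str) -> float:
    """Return a crude 0..1 score of 'LLM-ish' punctuation habits."""
    if not text:
        return 0.0
    words = max(len(text.split()), 1)
    em_dashes = text.count("\u2014")           # the em-dash itself
    bullets = text.count("\n- ") + text.count("\n* ")
    headings = text.count("\n#")               # markdown heading soup
    signals = em_dashes + bullets + headings
    # Normalize by length so long human posts aren't penalized, cap at 1.0
    return min(signals / (words / 50 + 1), 1.0)

human = "quick note: fixed the off by one in the parser, tests pass"
botty = ("Great question \u2014 let's break it down!\n# Summary\n"
         "- First point\n- Second point \u2014 importantly")

print(llm_style_score(human) < llm_style_score(botty))  # ranks the second higher
```

The point of the surrounding discussion stands either way: once submitters deliberately strip these tics (or add typos), any such surface-level filter stops working.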
Maintainers could just accept feature requests, point their own agents at them using donated compute, and skip the whole review dance. You get code that actually matches the project's style and conventions, and nobody has to spend time cleaning up after a stranger's slightly-off take on how things should work.

On the other hand, projects with AI-assisted commits you can easily find include Linux, curl, io_uring, MariaDB, DuckDB, Elasticsearch, and so on.

I would be wary of unhinged government intervention, but I wouldn't begrudge private actors for not getting on with the ticket.

These are the things LLMs can't do well yet. Producing code won't be it; maintainers have their own LLM subscriptions.

This is the assumption that has almost always failed, and thus has led to the banning of AI code altogether in a lot of projects.

Once you do understand the problem deeply enough to know exactly what to ask for without ambiguity, the AI will produce the code that exactly solves your problem a heck of a lot quicker than you. Spend time where you, as a human, are better than the AI.

Fact is, an LLM writes better code than 95% of developers out there today. But for the world at large, I bet code quality goes up.

This would probably be more useful to help you see what (and how) was written by LLMs. Of course, even then it's not reproducible and requires proprietary software!

This will cut off one of the genuine entry points to the industry, where all you really needed was raw talent.

> any content submitted that is clearly labelled as LLM-generated (including issues, merge requests, and merge request descriptions) will be immediately closed

Weirdly, as a native English speaker, this term makes the policy less strict.
What about submarine LLM submissions? I have no beef with Redox OS. This feels like the newest form of OSS virtue signaling.

In other words, it makes not clearly labeling any LLM use a bannable offense.

The phrase "virtue signaling" long ago became a meaningless term, other than to indicate one's views in a culture war.

You have no doubt heard claims that AI "democratizes" software development. You have no doubt heard claims that AI "decreases cognitive ability." Which is correct depends strongly on your cultural views.
If both are correct then the term has little or no weight. From what I've seen, the term "virtue signalling" is almost always used by someone in camp A to disparage the public views of someone in camp B as being dishonest and ulterior to the actual hidden reason, which is to improve in-group social standing. I therefore regard it as conspiracy theory couched as a sociological observation, unless strong evidence is given to the contrary. I see that term as part of "culture war" framing, which makes it hard to use that term in other frames without careful clarification.

> This policy is not open to discussion, any content submitted that is clearly labelled as LLM-generated (including issues, merge requests, and merge request descriptions) will be immediately closed, and any attempt to bypass this policy will result in a ban from the project.

It's similar to how I can't implement a feature by copying-and-pasting the obvious code from some commercially licensed project.
But somebody else could write basically the same thing independently, without knowing about the proprietary-licensed code, and that would be fine.

Like, this should be enshrined as the quintessential "they simply, obstinately, perilously, refused to get it" moment. Shortly, no one is going to care about anyone's bespoke manual keyboard entry of code if it takes 10 times as long to produce the same functionality with imperceptibly less error.

No one is going to care about anyone's painstaking avoidance of chlorofluorocarbons if it takes ten times as long to style your hair with imperceptibly less ozone hole damage.

Well, that day doesn't appear to be coming any time soon. Even after years of supposed improvements, LLMs make mistakes so frequently that you can't trust anything they put out, which completely negates any time savings from not writing the code.

There are plenty of good reasons why somebody might not want your PR, independent of how good or useful to you your change is.

If the submitter is prepared to explain the code and vouch for its quality, then that might reasonably fall under "don't ask, don't tell". However, if LLM output is either (a) uncopyrightable or (b) considered a derivative work of the source that was used to train the model, then you have a legal problem. And the legal system does care about invisible "bit colour".

Intention. Here's some code, for example: https://i.imgur.com/dp0QHBp.png

Both sides written by an LLM. Both sides written based on my explicit prompts explaining exactly how I want it to behave, then testing, retesting, and generally doing all the normal software eng due diligence necessary for basic QA.
Sometimes the prompts are explicitly "change this variable name" and it ends up changing 2 lines of code, no different from a find/replace. Also, I'm watching it reason in real time by running terminal commands to probe runtime data and extrapolate the right code. I've already seen it fix basic bugs because an RFC wasn't adhered to perfectly. Even leaving a nice comment explaining why we're ignoring the RFC in that one spot.

Eventually these arguments are kinda exhausting. People will use it to build stuff, and the stuff they build ends up retraining it, so we're already hundreds of generations deep on the retraining, and talking about licenses at this point feels absurd to me.
CLEARLY, a lot of developers are not reasonable.

Plus, smut peddlers aren't likely to set an OpenClaw bot-agent swarm loose arguing the point with you for days, then posting blogs and Medium articles attacking you personally for "discrimination".
Just require that the CLA/Certificate of Origin statement be printed out, signed, and mailed with an envelope and stamp, where besides attesting that they appropriately license their contributions ((A)GPL, BSD, MIT, or whatever) and have the authority to do so, they also attest that they haven't used any LLMs for their contributions. Indirect usage, where people whip up LLM-generated PoCs that they then rewrite, will still probably go on, and go on without detection, but that's less objectionable morally (and legally) than trying to directly commit LLM code.

As an aside, I've noticed a huge drop-off in license literacy amongst developers, as well as in respect for the license choices of other developers/projects. I can't tell if LLMs caused this, but there's a noticeable difference from the way things were 10 years ago.

I always assumed this was the case anyway; MIT is, if I'm not mistaken, one of the most used licenses. I typically had a "fuck it" attitude when it came to the license, and I assume quite a lot of other people shared that sentiment.

No, it wasn't that way in the 2000s, e.g., on platforms like SourceForge, where OSS devs would go out of their way to learn the terms and conditions of the popular licenses, made sure to respect each other's license choices, and usually defaulted to GPL (or LGPL) unless there was a compelling reason not to: https://web.archive.org/web/20160326002305/https://redmonk.c...

Now the corporate-backed "MIT-EVERYTHING" mindvirus has ruined all of that: https://opensource.org/blog/top-open-source-licenses-in-2025

Not being able to publish anything without sifting through all the libs' licences?
Remembering legalese, jurisprudence, edge cases, on top of everything else? MIT became ubiquitous because it gives us peace of mind.

I'd never allow LLM code to be merged.

I'm sorry, all of your gatekeeping is coming to an end. People whose identities are about "good code" and who didn't care about being a good teammate or the business are going to get crushed.

For the parent there's immaterial value in knowing that it was written by a human. From what I read in your comment, you see code more as a means to an end. Writing code myself, and accomplishing what I set out to build, sometimes feels like a form of art, and knowing that I built it gives me a sense of accomplishment. Writing code solely as a means to an end, or letting it be generated by some model, doesn't give that same energy. This thinking has nothing to do with not caring about being a good teammate or the business. I've no idea why you put that on the same pile.

People will be more likely to engage with your main assertion if you leave out the insults. I noticed your account was new, so I thought you might appreciate a likely explanation for why your post was being downvoted.

The underlying data that said matrices compute upon can be racist, though. I will admit that I may be missing some context, though.

Restricting that is as regressive as a project trying to specify that I write code from a specific country or… standing on my head. Sure, if they want me to add an "I'm writing this standing on my head" message in the PR then I will… but I'm not.

They're not asking you to write standing on your head; they are asking you to author your contributions yourself. Not theirs. Their choice is to accept it or reject it based purely on the change itself, because that's all there is.
But if they can't enforce their boundaries, because they can't tell the difference between AI code and non-AI code without being told, then the boundaries they made up are unenforceable nonsense. About as nonsensical and unenforceable as asking me to code upside down.

You're really just going to do whatever the F* you want and write in lowercase just because you can? That's not how the world works, nor how it should work :( Markdown files - of all kinds - are totally not unenforceable nonsense, they are rights of a real legal entity (the repository) that you willingly and knowingly violate every time you don't comment in all caps. And yes, before you ask, this discussion is definitely one in which it is appropriate to bring up rape and pedophilia.
- people can just say things
- when people say things, you don't have to listen to them
- not listening to them doesn't make you superior or more powerful than them

We can practice: I'd like you to always comment in uppercase letters from now on, please. Please ensure you abide by my policy when commenting here.

If the maintainers don't want to accept it, fine. The Uncles can continue to play in their no-AI playground and show each other how nice their code is. The world is moving on from the "AI is bad" crowd.

Just like when people started losing their ability to navigate without a GPS/Maps app, you will lose your ability to write solid code, solve problems, hell, maybe even read well. I want my brain to be strong in old age, and I actually love to write code, unlike 99% in software apparently (like, why did you people even start this career... makes no sense to me). I'm going to keep writing the code myself!

I used a coding agent for the majority of my current project and I still got the "build stuff" itch scratched, because engineers are still responsible for the output and are needed to interface between technical teams, UX, business people, etc.

> I used a coding agent for the majority of my current project and I still got the "build stuff" itch scratched because Engineers are still responsible for the output and they are needed to interface between technical teams, UX, business people etc

Then you are the opposite of a carpenter or a craftsman, no matter what you think about it yourself.

And yet, I find a coding agent makes it even more fun.
I spend less time working on the boilerplate crap that I hate, and a lot less time searching Google and trying to make sense of a dozen half-arsed StackOverflow posts that don't quite answer my question. I just went through that yesterday with Unity. Even Google's search engine agent wasn't answering the question. It was a terrible, energy-draining experience that I don't miss at all. I did figure it out in the end, though.

Prior to yesterday, I was thinking that using AIs to do that was making it harder for me to learn things because it was so easy. But the AI lets me do it repeatedly, quickly, and I learn by the repetition, and a lot of it. The slow method has just one instance, and it takes forever. This is certainly an exciting time for coders, no matter why they're in the game.

LLMs help me maintain some of my coding abilities. It's like having a non-judgemental co-coder sitting at your side: you can discuss the code you wrote and it will point out things you didn't think of. Or I can tap into the immense knowledge about APIs that LLMs have to keep up with change.
I wouldn't be able to read that much documentation myself and keep all of this in my head.

Sure, but once you learn the long multiplication/division algorithms by hand, there's not much point in using them. By high school everyone is using a calculator.

> Just like when people started losing their ability to navigate without a GPS/Maps app

Are you suggesting people shouldn't use Google Maps? Paper maps and compasses work the same way; they render some older skill obsolete. The written word made memorization infinitely less valuable (and writing had its critics). I don't think "LLMs making us dumber" is a real concern. Before calculators, adults were probably way better at doing arithmetic. But this isn't something worth prioritizing. However, it is worth teaching people to code by hand, just like we still teach arithmetic and times tables.
There's nothing new or scary about this, and it will be a significant net win.

Quite a bit of the Linux userspace is already permissively licensed. Nobody has built a full-fledged open source alternative yet, because it is hard to build an ecosystem and hard to test thousands of different pieces of hardware. None of that would happen without well-paid engineers contributing.

It seems well intentioned, but lots of bad ideas are like this.

I was told by my customer they didn't need my help because Claude Code did the program they wanted me to quote. I sheepishly said, "I can send an intern to work in-house if you don't want to spend internal resources on it." I can't really imagine what kind of code will be done by hand anymore... Even military-level stuff can run large local models.
For instance, a GPL LLM trained only on GPL code, where the source data is all known and the output is all GPL. It could be done with a distributed effort.

So "copyleft" doesn't work on any of the output. https://en.wikipedia.org/wiki/License_compatibility#GPL_comp... A model that contains no GPL code makes sense, so that people using non-GPL licenses don't violate it.

Are they really that delusional, to think that their AI slop has any value to the project? Do they think acting like a complete prick and increasing the burden on the maintainers will get them a job offer? I guess interacting with a sycophantic LLM for hours truly rots the brain. To spell it out: no, your AI-generated code has zero value.
And no, AI will not help you "get into open source". You don't learn shit from spamming open source projects. If the problem could be solved by using an LLM and the maintainers wanted to, they could prompt it themselves and get much better results than you do, because they actually know the code.

Before this it was junk like spacing changes. Sometimes, I'd guess, it's also because your GitHub profile serves as some kind of advertisement. I think some people also like the feeling of being helpful. And they do not understand the reality of LLM outputs. See comments posting AI-generated summaries or answers to questions.

At best you can try to find some healthcare or finance company that is too cheap to buy a machine that can locally run 400B models.

So what about content that isn't as clear? I don't know what to make of it. My guess is that the serious tone is to avoid any possible legal issues that may arise from the inadvertent inclusion of AI-generated code. But the general motivation might be to avoid wasting the maintainers' time on reviewing confusing and sloppy submissions made through lazy use of AI (as opposed to finely guided and well-reviewed AI code).
"any content submitted that is clearly labelled as LLM-generated (including issues, merge requests, and merge request descriptions) will be immediately closed"

For example:
- What if a non-native English speaker uses the help of an AI model in the formulation of some issue/task?
- What about having a plugin in your IDE that rather gives syntax and small code fragment suggestions ("autocomplete on steroids")? Does this policy mean that the programmers are also restricted on the IDE and plugins that they are allowed to have installed if they want to contribute?
It is better to use a dedicated translation tool, and post the original along with the translation.

> What about having a plugin in your IDE that rather gives syntax and small code fragment suggestions ("autocomplete on steroids")?

Very good question. I myself consider this sort of AI usage benign (unlike agent-style usage), and it is the only style of AI I use myself (since I have RSI, it helps to have to type less). You could turn the feature off for just this project, though.

> Does this policy mean that the programmers are also restricted on the IDE and plugins that they are allowed to have installed if they want to contribute?

I don't think that follows, but what features you have active in the current project would definitely be affected. From what I have seen, all IDEs allow turning AI features on and off as needed.
You could turn the feature off for just this project though.> Does this policy mean that the programmers are also restricted on the IDE and plugins that they are allowed to have installed if they want to contribute?I don't think that follows, but what features you have active in the current project would definitely be affected. From what I have seen all IDEs allow turning AI features on and off as needed. > Does this policy mean that the programmers are also restricted on the IDE and plugins that they are allowed to have installed if they want to contribute?I don't think that follows, but what features you have active in the current project would definitely be affected. From what I have seen all IDEs allow turning AI features on and off as needed. I don't think that follows, but what features you have active in the current project would definitely be affected. From what I have seen all IDEs allow turning AI features on and off as needed. For another I can cut and translate specific parts using whatever tools I want, again giving me more context about what is trying to be communicated. The reality is you can't accommodate every hypothetical scenario.> What about having a plugin in your IDE that rather gives syntax and small code fragment suggestions ("autocomplete on steroids")? Does this policy mean that the programmers are also restricted on the IDE and plugins that they are allowed to have installed if they want to contribute?Nobody is talking about advanced autocomplete when they want to ban AI code. > What about having a plugin in your IDE that rather gives syntax and small code fragment suggestions ("autocomplete on steroids")? Does this policy mean that the programmers are also restricted on the IDE and plugins that they are allowed to have installed if they want to contribute?Nobody is talking about advanced autocomplete when they want to ban AI code. Nobody is talking about advanced autocomplete when they want to ban AI code. 
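As a concrete illustration of the per-project toggling discussed above, here is a hedged sketch for VS Code: a workspace-local `.vscode/settings.json` can override user settings, and these two switches exist in current VS Code and the Copilot extension (check your editor's documentation, as setting names change between versions):

```jsonc
{
  // Turn off inline AI suggestions for this workspace only;
  // user-level settings elsewhere are unaffected.
  "editor.inlineSuggest.enabled": false,

  // Copilot-specific switch (only relevant if the extension is installed):
  // "*" disables it for all languages in this workspace.
  "github.copilot.enable": { "*": false }
}
```

Other editors have equivalents, so a "no AI-generated contributions" policy need not dictate which IDE anyone uses.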
There are many free sites to paste in language input and get a direct translation sans filler and AI "interpretation".

If the native language is very different from English, this problem gets much worse. This is a problem that LLMs claim to partially mitigate (and is one reason why non-native speakers could be tempted to use them), but hardly any classical translation tool can.

I've seen this excuse before, but in practice the output they copy/paste is extremely verbose and long-winded (with the bullet point and heading soup, etc.)

Surely non-native speakers can see that structure and tell the LLM to match their natural style instead? No one wants to read a massive wall of text.

    if (foo == true) { // checking foo is true (rocketship emoji)
        20 lines of code;
    } else {
        the same 20 lines of code with one boolean changed in the middle;
    }

Description: (markdown header) Summary (nerd emoji): This PR fixes a non-existent issue by adding an **if statement** that checks if a variable is true.

I assume that most of these purely LLM-generated unwanted contributions will just end up in dead-end forks, because my impression is that a lot of them are just being generated as GitHub activity fodder.

I think part of the battle is actually just getting people to identify which LLM made it, to understand whether someone's contribution is good or not. A JavaScript project with contributions from Opus 4.6 will probably be pretty good, but if someone is using Mistral small via the chat app, it's probably just a waste of time.
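The duplicated-branch pattern parodied above is a real review smell. A minimal sketch (the `process` pipeline is hypothetical, not from any cited PR) of the slop version next to the obvious refactor:

```python
def process_sloppy(items, flag):
    # LLM-style output: both branches repeat the whole pipeline,
    # differing only in one boolean buried in the middle
    # (the "if flag == True" style is part of the parody).
    if flag == True:
        cleaned = [s.strip() for s in items]
        deduped = sorted(set(cleaned), reverse=True)
        return [s.lower() for s in deduped]
    else:
        cleaned = [s.strip() for s in items]
        deduped = sorted(set(cleaned), reverse=False)
        return [s.lower() for s in deduped]


def process(items, flag):
    # Refactor: hoist the single differing boolean into a parameter,
    # so the pipeline exists exactly once.
    cleaned = [s.strip() for s in items]
    deduped = sorted(set(cleaned), reverse=flag)
    return [s.lower() for s in deduped]
```

The refactor is what a reviewer has to mentally reconstruct anyway to confirm the two branches really differ in only one place, which is exactly the hidden review cost the thread is complaining about.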
Time-consuming work can be done quickly at a fraction of the cost, or even almost free with open-weights LLMs. I know this will be downvoted to death, but I'll leave it here anyway for folks who want to keep their eyes wide open.

“Our approach is harness-first engineering: instead of reading every line of agent-generated code, invest in automated checks that can tell us with high confidence, in seconds, whether the code is correct.”

That's literally what the whole industry has been doing for decades, and spoiler: you still need to review code!

No, they're pushing back against a world full of even more mass surveillance, corporate oligarchy, mass unemployment, wanton spam, and global warming. It is absolutely in your personal best interest to hate AI. IOW, I think this stance is ethically good, but technically irresponsible.

I think one way to look at the use of LLMs is to compare a dynamically typed language with a functional/statically typed one. Functional programming languages with static typing make it harder to implement a solution without understanding and developing an intuition for the problem. But programming languages with dynamic typing will let you create (partial) solutions with a lesser understanding of the problem. LLMs make it even easier to implement even more partial solutions while actually understanding even less of the problem (actually, zero understanding is required). If I am a client who wants reliable software, then I want a competent programmer to 1. understand the problem
and 2. then come up with a solution. The first part will be really important for me. Using an LLM means that I cannot count on 1 being done, so I would not want the contractor to use LLMs.
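The "harness-first" quote above can be made concrete. A hedged sketch of what such automated checks look like in plain Python, over a hypothetical agent-generated `sort_records` function (all names here are illustrative, not from the quoted team):

```python
def sort_records(records):
    # Hypothetical agent-generated function under test: sort
    # (name, score) pairs by descending score, ties broken by name.
    return sorted(records, key=lambda r: (-r[1], r[0]))


def harness(fn):
    # Automated checks standing in for line-by-line review:
    # each assertion is cheap to run and rules out a class of bugs.
    out = fn([("bob", 2), ("ann", 5), ("cat", 2)])
    assert out[0] == ("ann", 5), "highest score must come first"
    assert out[1:] == [("bob", 2), ("cat", 2)], "ties break alphabetically"
    assert fn([]) == [], "empty input must not crash"
    data = [("x", 1), ("y", 9)]
    fn(data)
    assert data == [("x", 1), ("y", 9)], "input must not be mutated"
    return True
```

Note that this also illustrates the pushback in the thread: a passing harness shows the code is consistent with the checks, not that the checks capture the actual requirement. Deciding that is still review work, which is why "you still need to review code" has held for decades.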
What makes sense is that, of course, any LLM-generated code must be reviewed by a good programmer and must be correct and well written, and the AI usage must be precisely disclosed. What they should ban is people posting AI-generated code without mentioning it, or replying "I don't know, the AI did it like that" to questions.

Over time this might not be enough, though, so I suspect we will see default-deny policies popping up soon enough. Not to mention that even finding good developers willing to develop without AI (a significant handicap, even more so for coding things like an OS that are well represented in LLM training) seems difficult nowadays, especially if they aren't paying them.

Humans have been doing this for the better part of 5 decades now. Don't assume others rely on LLMs as much as you do.

> Not to mention that even finding good developers willing to develop without AI (a significant handicap, even more so for coding things like an OS that are well represented in LLM training) seems difficult nowadays, especially if they aren't paying them.

I highly doubt that.

You know what else takes "a massive amount of developer work"?

> "any LLM-generated code must be reviewed by a good programmer"

And this is the crux of the matter with using LLMs to generate code for everything but really simple greenfield projects: they don't really speed things up, because everything they produce HAS TO be verified by someone, and that someone HAS TO have the necessary skill to write such code themselves. LLMs save time on the typing part of programming. Incidentally, that part is the least time-consuming.

And yes, of course they need to be able to write the code themselves, but that's the easy part: any good developer could write a full production OS by themselves, given access to documentation and literature and an enormous amount of time.

And that's a task that LLMs, which are nothing other than statistical models trying to guess the next token, really aren't good at.

Perhaps the same way that every other viable OS was made without the use of LLMs.
Every single production OS, including the one you use right now, was made before LLMs even existed.

> What makes sense is that of course any LLM-generated code must be reviewed by a good programmer

The time of good programmers, especially ones working for free in their spare time on OSS projects, is a limited resource. The ability to generate slop using LLMs is effectively unlimited. This discrepancy can only be resolved in one way: https://itsfoss.com/news/curl-ai-slop/

Feel like you are using a very narrow definition of "success" here. Who cares if nobody switches to it as their daily driver? The goal you proposed was "viable", not "widely used".

Earth ovens haven't been in widespread use for hundreds of years. People can still use them to bake bread, however: https://www.youtube.com/watch?v=WAJqGVxuJPo
Advanced Machine Intelligence (AMI), a new Paris-based startup cofounded by Meta's former chief AI scientist Yann LeCun, announced Monday it has raised more than $1 billion to develop AI world models. LeCun argues that most human reasoning is grounded in the physical world, not language, and that AI world models are necessary to develop true human-level intelligence.

AMI (pronounced like the French word for friend) aims to build “a new breed of AI systems that understand the world, have persistent memory, can reason and plan, and are controllable and safe,” the company says in a press release. The startup says it will be global from day one, with offices in Paris, Montreal, Singapore, and New York, where LeCun will continue working as a New York University professor in addition to leading the startup. AMI will be the first commercial endeavor for LeCun since his departure from Meta in November 2025.

LeCun's startup represents a bet against many of the world's biggest AI labs like OpenAI, Anthropic, and even his former workplace, Meta, which believe that scaling up LLMs will eventually deliver AI systems with human-level intelligence or even superintelligence. LLMs have powered viral products such as ChatGPT and Claude Code, but LeCun has been one of the AI industry's most prominent researchers speaking out about the limitations of these AI models. LeCun is well known for being outspoken, but as a pioneer of modern AI who won a Turing Award in 2018, his skepticism carries weight.

LeCun says AMI aims to work with companies in manufacturing, biomedical, robotics, and other industries that have lots of data. For example, he says AMI could build a realistic world model of an aircraft engine and work with the manufacturer to help them optimize for efficiency, minimize emissions, or ensure reliability.
AMI was cofounded by LeCun and several leaders he worked with at Meta, including the company's former director of research science, Michael Rabbat; former vice president of Europe, Laurent Solly; and former senior director of AI research, Pascale Fung. Other cofounders include Alexandre LeBrun, former CEO of the AI health care startup Nabla, who will serve as AMI's CEO, and Saining Xie, a former Google DeepMind researcher who will be the startup's chief science officer.

Rather, in his view, these AI models are simply the tech industry's latest promising trend, and their success has created a “kind of delusion” among the people who build them. “It's true that [LLMs] are becoming really good at generating code, and it's true that they are probably going to become even more useful in a wide area of applications where code generation can help,” says LeCun.

LeCun has been working on world models for years inside of Meta, where he founded the company's Fundamental AI Research lab, FAIR. But he's now convinced his research is best done outside the social media giant. He says it's become clear to him that the strongest applications of world models will be selling them to other enterprises, which doesn't fit neatly into Meta's core consumer business. As AI world models like Meta's Joint-Embedding Predictive Architecture (JEPA) became more sophisticated, “there was a reorientation of Meta's strategy where it had to basically catch up with the industry on LLMs and kind of do the same thing that other LLM companies are doing, which is not my interest,” says LeCun. “So sometime in November, I went to see Mark Zuckerberg and told him.”

While Meta is not an investor in AMI, LeCun says he's talking with the company about collaborating. LeCun has grappled with issues related to AI safety and security before. “I was at the origin of those things, but it is not for me to decide what society should do with technology.
At least in liberal democracies, the democratic process should decide that, but I can't have any decision power there,” LeCun says. Since then, however, the technology has been used to protect liberal democracies in Europe, he says. Ukraine, for example, has ramped up its use of autonomous drones to fend off attacks from Russia.

LeCun says AMI will release its first AI models quickly, but he's not expecting most people to take notice. Eventually, he says, AMI intends to develop a “universal world model,” which would be the basis for a generally intelligent system that could help companies regardless of what industry they work in.