Rob Pike Goes Nuclear over GenAI

(skyview.social)

459 points | by christoph-heiss 2 hours ago

64 comments

  • Scubabear68 1 hour ago
    All I have to say is this post warmed my heart. I'm sure people here associate him with Go lang and Google, but I will always associate him with Bell Labs and Unix and The Practice of Programming, and overall the amazing contributions he has made to computing.

    To purely associate him with Google is a mistake, one that (ironically?) the AI actually didn't make.

    Just the haters here.

  • jabedude 2 hours ago
    Did Google, the company currently paying Rob Pike's extravagant salary, just start building data centers in 2025? Before 2025 was Google's infra running on dreams and pixie farts with baby deer and birdies chirping around? Why are the new data centers his company is building suddenly "raping the planet" and "unrecyclable"?
    • InsideOutSanta 1 hour ago
      Everything humans do is harmful to some degree. I don't want to put words in Pike's mouth, but I'm assuming his point is that the cost-benefit ratio of how LLMs are often used is out of whack.

      Somebody burned compute to send him an LLM-generated thank-you note. Everybody involved in this transaction lost, nobody gained anything from it. It's pure destruction of resources.

      • acheron 1 hour ago
        Google has been burning compute for the past 25 years to shove ads at people. We all lost there, too, but he apparently didn’t mind that.
        • lambda 1 hour ago
          Data center power usage has been fairly flat for the last decade (until 2022 or so). While new capacity has been coming online, efficiency improvements have been keeping up, keeping total usage mostly flat.

          The AI boom has completely changed that. Data center power usage is rocketing upwards now. It is estimated it will be more than 10% of all electric power usage in the US by 2030.

          It's a completely different order of magnitude than the pre AI-boom data center usage.

          Source: https://escholarship.org/uc/item/32d6m0d1

          • azakai 23 minutes ago
            The first chart in your link doesn't show "flat" usage until 2022? It is clearly rising at an increasing rate, and it more than doubles over 2014-2022.

            It might help to look at global power usage, not just the US, see the first figure here:

            https://arstechnica.com/ai/2024/06/is-generative-ai-really-g...

            There isn't an inflection point around 2022: it has been rising quickly since 2010 or so.

        • palmotea 1 hour ago
          > Google has been burning compute for the past 25 years to shove ads at people. We all lost there, too, but he apparently didn’t mind that.

          How much of that compute was for the ads themselves vs the software useful enough to compel people to look at the ads?

          • moltopoco 1 hour ago
            Have you dived into the destructive brainrot that YouTube serves to millions of kids who (sadly) use it unattended each day? Even much of Google's non-ad software is a cancer on humanity.
            • fsociety 29 minutes ago
              The real answer is the unsatisfying but true “my shit doesn’t stink but yours sure does”
            • palmotea 1 hour ago
              How does the compute required for that compare to the compute required to serve LLM requests? There's a lot of goal-post moving going on here, to justify the whataboutism.
        • PunchyHamster 14 minutes ago
          You could at least argue while there is plenty of negatives, at least we got to use many services with ad-supported model.

    There is no upside to the vast majority of the AI pushed by OpenAI and their cronies. It's literally fucking up the economy for everyone else, all to get AI from "lies to users" to "lies to users confidently", all while rampantly stealing content to do it, because apparently pirating something as a person is a terrible crime the government needs to chase you for, unless you do it to resell it in an AI model, in which case it's propping up the US economy.

        • lokar 19 minutes ago
          The ad system uses a fairly small fraction of resources.

          And before the LLM craze there was a constant focus on efficiency. Web search is (was?) amazingly efficient per query.

        • SoftTalker 31 minutes ago
          Someone paid for those ads. Someone got value from them.
          • underdown 5 minutes ago
            The ad industry is a quagmire of fraud. Assuming someone got value out of money spent is tenuous.
          • xorcist 12 minutes ago
            That someone might be Google, though. Not all ad dollars are well spent.
          • MangoToupe 24 minutes ago
            Ads are a cancer on humanity with no benefit to anyone and everyone who enables them should be imprisoned for life
        • kingkawn 1 hour ago
          “this other thing is also bad” is not an exoneration
          • ptero 1 hour ago
            > “this other thing is also bad” is not an exoneration

            No, but it puts some perspective on things. IMO Google, after abandoning its early "don't be evil" motto is directly responsible for a significant chunk of the current evil in the developed world, from screen addiction to kids' mental health and social polarization.

            Working for Google and drawing an extravagant salary for many, many years was a choice that does affect the way we perceive other issues being discussed by the same source. To clarify: I am not claiming that Rob is evil; on the contrary. His books and open source work were an inspiration to many, myself included. But I am going to view his opinions on social good and evil through the prism of his personal employment choices. My 2c.

            • WD-42 59 minutes ago
              This is a purity test that cannot be passed. Give me your career history and I’ll tell you why you aren’t allowed to make any moral judgments on anything as well.
              • bossyTeacher 41 minutes ago
                Point is, he is criticizing Google but still collecting checks from them. That's hypocritical. He would get a little sympathy if he had never worked for them. He had decades to resign. He didn't. He stayed there until retirement. He's even using Gmail in that post.
                • tczMUFlmoNk 30 minutes ago
                  Rob Pike retired from Google in 2021.
                  • ptero 23 minutes ago
                    Yes, after working there for more than 17 years (IIRC he joined Google in 2004).
          • xscott 1 hour ago
            No, but in this case it indicates some hypocrisy.
        • cmrdporcupine 1 hour ago
          That's frankly just pure whataboutism. The scale of the situation with the explosion of "AI" data centres is far, far higher. And the immediacy of the spike, too.
          • enraged_camel 1 hour ago
            It’s not really whataboutism. Would you take an environmentalist seriously if you found out that they drive a Hummer?

            When people have choices and they choose the more harmful action, it hurts their credibility. If Rob cares so much about society and the environment, why did he work at a company that has horrendous track record on both? Someone of his level of talent certainly had choices, and he chose to contribute to the company that abandoned “don’t be evil” a long time ago.

            • cmrdporcupine 1 hour ago
              Honestly, it seems like Rob Pike may have left Google around the same time I did (2021, 2022). Which was about when it became clear it was 100% down in the gutter without coming back.
              • fidotron 5 minutes ago
                That has been clear since the Google Plus debacle, at the very least.
              • pstuart 1 hour ago
                My take was that he had done enough work and had handed the reins of Go to a capable leader (rsc), and that it was time to step away.

                Ian Lance Taylor on the other hand appeared to have quit specifically because of the "AI everything" mandate.

                Just an armchair observation here.

              • trimbo 33 minutes ago
                > Which was about when it became clear it was 100% down in the gutter without coming back.

                Did you sell all of your stock?

                • cmrdporcupine 29 minutes ago
                  Unfortunately, yes. If I hadn't, I might be retired.
              • christophilus 49 minutes ago
                It was still a wildly wasteful company doing morally ambiguous things prior to that timeframe. I mean, its entire business model is tracking and ads— and it runs massive, high energy datacenters to make that happen.
      • victorbjorklund 32 minutes ago
        Serving unwanted ads has what cost-benefit ratio vs. serving LLMs that are wanted by the user?
        • PunchyHamster 13 minutes ago
          All content generated by LLMs was served to me against my will and without accounting for my preferences.
        • lokar 6 minutes ago
          Ads are extremely computationally cheap
      • DiscourseFan 1 hour ago
        Somebody just burned their refuse in a developing country somewhere. I guess if it was cold, at least they were warming themselves up.
      • xorgun 1 hour ago
        Cutting trees for fuel and paper to send a letter burned resources. Nobody gained in that transaction
        • Blackthorn 1 hour ago
          I shouldn't have to explain this, but a letter would involve actual emotion and thought and be a dialog between two humans.
          • xorgun 1 hour ago
            We’re well past that. Social media killed that first. Some people have a hard time articulating their thoughts. If AI is a tool to help, why is that bad?
            • neltnerb 52 minutes ago
              Imagine the process of solving a problem as a sequence of hundreds of little decisions that branch between just two options. There is some probability that your human brain would choose one versus the other.

              If you insert AI into your thinking process, it has a bias, for sure. It will helpfully reinforce whatever you tell it you think makes sense, or at least it will be interpreted that way on average, thanks to a wide variety of human cognitive biases, even when it hedges. At the least it will respond with ideas that are very... median.

              So at each one of these tiny branches you introduce a bias towards the "typical" instead of discovering where your own mind would go. It's fine and conversational but it clearly influences your thought process to, well, mitigate your edges. Maybe it's more "correct", it's certainly less unique.

              And then at some point they start charging for the service. That's the part I'm concerned about, if it's on-device and free to use I still think it makes your thought process less interesting and likely to have original ideas, but having to subscribe to a service to trust your decision making is deeply concerning.

            • irishcoffee 1 hour ago
              Articulating thoughts is the backbone of communication. Replacing that with some kind of emotionless groupthink does actually destroy human-to-human communication.

              I would wager that many of the very significant things that have happened over the history of humanity come down to a few emotional responses.

            • fwip 1 hour ago
              Do you think that the LLM helped deliver a thoughtful letter to Rob Pike?
            • ai_is_the_best 1 hour ago
              [flagged]
          • gcau 1 hour ago
            I shouldn't have to explain this, but a letter is a medium of communication, that could just as easily be written by a LLM (and transcribed by a human onto paper).
            • ImPostingOnHN 42 minutes ago
              I shouldn't have to explain this, but a letter is a medium of communication between people.

              Automated systems sending people unsolicited, unwanted emails is more commonly known as spam.

              Especially when the spam comes with a notice that it is from an automated system and replies will be automated as well.

        • yes_man 1 hour ago
          Someone taking the time and effort to write and send a letter and pay for postage might actually be appreciated by the receiver. It’s a bit different from LLM agents being ordered to burn resources to send summaries of someone’s work life and congratulating them. It feels like ”hey look what can be done, can we get some more funding now”. Just because it can be done doesn’t mean it adds any good value to this world
          • xorgun 1 hour ago
            I don’t know anyone who doesn’t immediately throw said envelope, postage, and letter in the trash
            • palmotea 1 hour ago
              > I don’t know anyone who doesn’t immediately throw said envelope, postage, and letter in the trash

              If you're being accurate, the people you know are terrible.

              If someone sends me a personal letter [and I gather we're talking about a thank-you note here], I'm sure as hell going to open it. I'll probably even save it in a box for an extremely long time.

              • SoftTalker 24 minutes ago
                Of course. I took it to be referring to the 98% of other paper mail that goes straight to the trash, often unopened. I don't know if I'm typical, but the number of personal cards/letters I received in 2025 I could count on one hand.
            • vodou 1 hour ago
              Then you are part of truly strange circles, among people who don’t understand human behavior.
            • yes_man 1 hour ago
              Ok, and that supports the idea of LLM-generated mass spamming in what way…?
            • throw20251220 1 hour ago
              You surround yourself with the people you want to have around you.
            • Blackthorn 1 hour ago
              Wow. You couldn't waterboard that out of me.
        • throw20251220 1 hour ago
          Use recycled paper.
        • devnonymous 1 hour ago
          How is it that so many people who supposedly lean towards analytical thought are so bad at understanding scale?
    • giancarlostoro 1 hour ago
      My guess is the scale has changed? They used to do AI stuff, but it wasn't until OpenAI (anyone feel free to correct me) went ahead and scaled up the hardware and discovered that more hardware = more useful LLM, that they all started ramping up on hardware. It was like the Bitcoin mining craze, but probably worse.
    • hanwenn 1 hour ago
      Rob left Google a couple of years ago.
    • gilrain 2 hours ago
      What does this have to do with his argument? If anything, criticism from the inside of the machine is more persuasive, not less. Ad hom fail.

      The astroturf in this thread is unreal. Literally. ;)

      • jabedude 1 hour ago
        I think it's incredibly obvious how it connects to his "argument" - nothing he complains about is specific to GenAI. So dressing up his hatred of the technology in vague environmental concerns is laughably transparent.

        He and everyone who agrees with his post simply don't like generative AI and don't actually care about "recyclable data centers" or the rape of the natural world. Those concerns are just cudgels to be wielded against a vague threatening enemy when convenient, and completely ignored when discussing the technologies they work on and like

        • Arodex 1 hour ago
          You simply don't like any criticism of AI, as shown by your false assertions that Pike works at Google (he left), or the fact Google and others were trying to make their data centers emit less CO2 - and that effort is completely abandoned directly because of AI.

          And you can't assert that AI is "revolutionary" and "a vague threat" at the same time. If it is the former, it can't be the latter. If it is the latter, it can't be the former.

          • tarsinge 1 hour ago
            > that effort is completely abandoned directly because of AI

            That effort is completely abandoned because of the current US administration and POTUS, a situation that big tech largely contributed to. It’s not AI that is responsible for the 180-degree zeitgeist change on environmental issues.

          • ywn 1 hour ago
            [flagged]
            • Arodex 1 hour ago
              Why should I be concerned with something that doesn't exist, will certainly never exist, and even if I were generous and entertained that something that breaks every physical law of the universe starting with entropy could exist, would result in "it" torturing a copy of myself to try to influence me in the past?

              Nothing there makes sense at any level.

              But people getting fired and electricity bills skyrocketing (as well as RAM etc.) are there right now.

            • mrwrong 1 hour ago
              do you get scared when you hear other ghost stories too?
        • btilly 17 minutes ago
          > nothing he complains about is specific to GenAI.

          You mean except the bit about how GenAI included his work in its training data without credit or compensation?

          Or did you disagree with the environmental point that you failed to keep reading?

        • Forgeties79 1 hour ago
          I often find that when people start applying purity tests it’s mainly just to discredit any arguments they don’t like without having to make a case against the substance of the argument.

          Assess the argument based on its merits. If you have to pick him apart with “he has no right to say it” that is not sufficient.

          • perching_aix 1 hour ago
            They did also "assess the argument on its merits" though?
          • itsdrewmiller 55 minutes ago
            This thread is basically an appeal to authority fallacy so attacking the authority is fair game.
        • gtirloni 1 hour ago
          > nothing he complains about is specific to GenAI

          Except it definitely is, unless you want to ignore the bubble we're living in right now.

        • moralestapia 1 hour ago
          [flagged]
      • ViktorRay 1 hour ago
        Someone else in the thread posted this article earlier.

        https://nationalcentreforai.jiscinvolve.org/wp/2025/05/02/ar...

        It seems video streaming, like Youtube which is owned by Google, uses much more energy than generative AI.

        • Verdex 1 hour ago
          A topic for more in depth study to be sure. However:

          1) video streaming has been around for a while and nobody, as far as I'm aware, has been talking about building multiple nuclear reactors to handle the energy needs

          2) video needs a CPU and a hard drive. LLMs need a mountain of GPUs.

          3) I have concerns that the "national center for AI" might have some bias

          I can find websites also talking about the earth being flat. I don't bother examining their contents because it just doesn't pass the smell test.

          Although thanks for the challenge to my preexisting beliefs. I'll have to do some of my own calculations to see how things compare.

        • squeaky-clean 1 hour ago
          Those statistics include the viewing device in the energy usage for streaming, but not for GenAI. Unless you're exclusively using ChatGPT without a screen it's not a fair comparison.

          The 0.077 kWh figure assumes 70% of users watching on a 50 inch TV. It goes down to 0.018 kWh if we assume 100% laptop viewing. And for cell phones the chart bar is so small I can't even click it to view the number.

          • lokar 3 minutes ago
            And it’s fair to assume much of the time watching streaming would instead have been spent on TV
          • easterncalculus 1 hour ago
            > Unless you're exclusively using ChatGPT without a screen it's not a fair comparison.

            Neither is comparing text output to streaming video

        • ori_b 1 hour ago
          This is based on assuming 5 questions a day. YouTube would be very power efficient as well if people only watched 5 seconds of video a day.

          How many tokens do you use a day?

          • moralestapia 1 hour ago
            It would be less power efficient as some of the associated costs/resources happen per request and also benefit from scale.
        • j0lol 1 hour ago
          Thankfully YouTube provides a lot more value to society than gen-AI.
          • Marha01 1 hour ago
            This is a subjective value judgement and many disagree.
          • victorbjorklund 30 minutes ago
            Doubtful. If you look at viewed content it’s probably 90% views from brainrot content.
          • moltopoco 57 minutes ago
            To adults? Certainly. But keep in mind that many children are now growing up with this crap glued to their eyes from age 2:

            https://www.youtube.com/results?search_query=funny+3d+animal...

            (That's just one genre of brainrot I came across recently. I also had my front page flooded with monkey-themed AI slop because someone in my household watched animal documentaries. Thanks algorithm!)

        • oblio 1 hour ago
          It's not just about per-unit resource usage, but also about the total resource usage. If GenAI doubles our global resource usage, that matters.

          I doubt YouTube is running on as many data centers as all Google GenAI projects are (with GenAI probably greatly outnumbering YouTube, and the trend not moving in GenAI's favor either).

      • ekjhgkejhgk 1 hour ago
        I think that criticizing when the criticism benefits the critic, and staying silent when it would hurt them, makes the argument less persuasive.

        This isn't ad hom, it's a heuristic for weighting arguments. It doesn't prove whether an argument has merit or not, but if I have hundreds of arguments to think about, it helps organize them.

      • lamontcg 23 minutes ago
        It is the same energy as the "you criticize society, yet you participate in society" meme. Catching someone out on their "hypocrisy" when they hit a limit of what they'll tolerate is really a low-effort "gotcha".

        And it probably isn't astroturf, way too many people just think this way.

      • gyanchawdhary 1 hour ago
        being inside the machine doesn’t exempt you from tradeoff analysis, kind sir
      • cm2012 1 hour ago
        Do you really think that the only reason people would be turned off by this post by Rob Pike is that they are being paid by big AI?
        • gilrain 1 hour ago
          No, which is why I didn’t say that. I do think astroturfing could explain the rapid parroting of extremely similar ad hominems, which is what I actually did imply.
          • cm2012 1 hour ago
            Astroturfing means a company is paying people to comment. No one in this entire thread was paid to comment.
      • macinjosh 1 hour ago
        This is the most astro-turfy comment ITT
    • lwhi 1 hour ago
      There aren't any rules that prevent us from changing course.

      The points you raise, literally, do not affect a thing.

    • surajrmal 41 minutes ago
      Rob retired from Google years ago fwiw.
    • devnonymous 1 hour ago
      Can't speak for Rob Pike, but my guess would be: yeah, it might seem hypocritical, but what drove this rant is a combination of watching the slow decay of the open culture they once imagined culminate in this absolute shirking of responsibility, while labour is simultaneously exploited by those claiming to represent that culture, along with a retrospective tinge of guilt for having enabled it.

      Furthermore, w.r.t. the points you raised: it's a matter of scale and utility. Compared to everything that has come before, GenAI is spectacularly inefficient in terms of utility per unit of compute (however you might want to define those). There hasn't been a tangible net good for society that has come from it, and I doubt there will be. The eagerness and will to throw money and resources at this surpasses the crypto mania, which was just as worthless.

      Even if you consider Rob a hypocrite, he isn't alone in his frustration and anger at the degradation of the promise of Open Culture.

    • mikojan 27 minutes ago
      OpenAI's internal target of ~250 GW of compute capacity by 2033 would require about as much electricity as the whole of India's current national electricity consumption[0].

      [0]: https://www.tomshardware.com/tech-industry/artificial-intell...

    • odiroot 1 hour ago
      Pecunia non olet.
    • tgv 1 hour ago
      Oh look, the purity police have arrived, and this time they're the AI-bros. How righteous does one have to be before being allowed to voice criticism?
      • cons0le 12 minutes ago
        I've tried many times here to voice my reservations against AI. I've been accused of being on the "anti AI hype train" multiple times today.

        As if there isn't a massive pro-AI hype train. I watched an NFL game for the first time in 5 years and saw no fewer than 8 AI commercials. AI is being forced on people.

        In commercials, people were using it to generate holiday cards, for God's sake. I can't imagine something more cold and impersonal. I don't want that garbage. Our time on earth is too short to wade through LLM slop text.

        • stavros 2 minutes ago
          I don't know your stance on AI, but "AI is being forced on people because I saw a company offering AI greeting cards" is not a stance I'd call reasonable.
    • nkohari 1 hour ago
      Yeah, I'm conflicted about the use of AI for creative endeavors as much as anyone, but Google is an advertising company. It was acceptable for them to build a massive empire around mining private information for the purposes of advertisement, but generative AI is now somehow beyond the pale? People can change their mind, but Rob crashing out about AI now feels awfully revisionist.

      (NB: I am currently working in AI, and have previously worked in adtech. I'm not claiming to be above the fray in any way.)

      • WD-42 1 hour ago
        Ad tech is a scourge as well. You think Rob Pike was super happy about it? He’s not even at google anymore.

        The amount of “he’s not allowed to have an opinion because” in this thread is exhausting. Nothing stands up to the purity test.

        • itsdrewmiller 50 minutes ago
          No one is saying he can’t have an opinion, just that there isn’t much value in it given he made a bunch of money from essentially the same thing. If he made a reasoned argument or even expressed that he now realizes the error of his own ways those would be worth engaging with.
          • WD-42 41 minutes ago
            He literally apologized for any part he had in it. This just makes me realize you didn’t actually read the post and I shouldn’t engage with the first part of your argument.
        • bossyTeacher 18 minutes ago
          >You think Rob Pike was super happy about it?

          He sure was happy enough to work for them (when he could work anywhere else) for nearly two decades. A one line apology doesn't delete his time at Google. The rant also seems to be directed mostly if not exclusively towards GenAI not Google. He seems happy enough to use Gmail when he doesn't have to.

          You can have an opinion and other people are allowed to have one about you. Goes both ways.

      • luke5441 1 hour ago
        Google's official mission was "organize the world's information and make it universally accessible and useful", not to maximize advertising sales.

        Obviously now it is mostly the latter and minimally the former. What capitalism giveth, it taketh away. (Or: Capitalism without good market design that causes multiple competitors in every market doesn't work.)

      • skywhopper 1 hour ago
        It’s certainly possible to see genAI as a step beyond adtech as a waste of resources built on an unethical foundation of misuse of data. Just because you’re okay with lumping them together doesn’t mean Rob has to.
        • nkohari 1 hour ago
          Yeah, of course, he's entitled to his opinion. To me, it just feels slightly disingenuous considering what Google's core business has always been (and still is).
    • 29athrowaway 1 hour ago
      They claim they have net zero carbon footprint, or carbon neutrality.

      In reality what they do is pay "carbon credits" (money) to some random dude that takes the money and does nothing with it. The entire carbon credit economy is bullshit.

      Very similar to how putting recyclables in a different color bin doesn't do shit for the environment in practice.

    • bgwalter 1 hour ago
      There is a difference between providing a useful service (web search for example) and running slop generators for modified TikTok clips, code theft and Internet propaganda.

      If he is currently at Google: congratulations on this principled stance, he deserves a lot of respect.

    • oblio 1 hour ago
      Are we comparing for example a SMTP server hosted by Google, or frankly, any non-GenAI IT infrastructure, with the resource efficiency of GenAI IT infrastructure?

      The overall resource efficiency of GenAI is abysmal.

      You can probably serve 100x more Google Search queries with the same resources you'd use for Google Gemini queries (like for like, Google Search queries can be cached, too).

      • jstummbillig 1 hour ago
        Nope, you can't, and it takes a simple Gemini query to find out more about the actual x if you are interested in it. (closer to 3, last time I checked, which rounds to 0, specially considering the clicks you save when using the LLM)
        • oblio 1 hour ago
          > jstummbillig:

          > Nope, you can't, and it takes a simple Gemini query to find out more about the actual x if you are interested in it. (closer to 3, last time I checked, which rounds to 0, specially considering the clicks you save when using the LLM)

          Why would you lie: https://imgur.com/a/1AEIQzI ???

          For those that don't want to see the Gemini answer screenshot, best case scenario 10x, worst case scenario 100x, definitely not "3x that rounds to 0x", or to put it in Gemini's words:

          > Summary

          > Right now, asking Gemini a question is roughly the environmental equivalent of running a standard 60-watt lightbulb for a few minutes, whereas a Google Search is like a momentary flicker. The industry is racing to make AI as efficient as Search, but for now, it remains a luxury resource.

          • jstummbillig 1 hour ago
            Are you okay? You ventured 100x and that's wrong. What would you know about what the last time I checked was, and in what context? Anyway, good job on doing what I suggested you do, I guess.

            The reason why it all rounds to 0 is that the Google search will not give you an answer. It gives you a list of web pages that you then need to visit (oftentimes more than just one of them), generating more requests, and, more importantly, it will ask more of your time, the human, whose cumulative energy expenditure to be able to ask in the first place is quite significant, and which you then cannot spend on other things that an LLM is not able to do for you.

            • oblio 1 hour ago
              You condescendingly said, sorry, you "ventured" 0x usage, by claiming: "use Gemini to check yourself that the difference is basically 0". Well, I did take you up on that, and even Gemini doesn't agree with you.

              Yes, Google Search is raw info. Yes, Google Search quality is degrading currently.

              But Gemini can also hallucinate. And its answers can just be flat out wrong because it comes from the same raw data (yes, it has cross checks and it "thinks", but it's far from infallible).

              Also, the comparison of human energy usage with GenAI energy usage is super ridiculous :-)))

              Animal intelligence (including human intelligence) is one of the most energy-efficient things on this planet, honed by billions of years of cut-throat (literally!) evolution. You can argue about time "wasted" analysing search results (which, BTW, generally makes us smarter and better informed...), but energy-wise, the brain of the average human uses about as much energy as an incandescent light bulb to provide general intelligence (and it does 100 other things at the same time).

              • jstummbillig 49 minutes ago
                Ah, we are in "making up quotes" territory: putting quotation marks around things someone else never quite said. Classy.

                Talking about "condescending":

                > super ridiculous :-)))

                It's not energy-efficient animal intelligence that got us here, but a lot of completely inefficient human years to begin with: first to keep us alive, then to give us primary and advanced education and our first experiences on the way to becoming somewhat productive human beings. This is the capex of making a human, and it's significant, especially since we will soon die.

                This capex exists in LLMs too, but it rounds to zero, because one model will be used for quadrillions of tokens. In you or me, however, it does not round to zero, because the number of tokens we produce rounds to zero. To compete on productivity, the tokens we produce therefore need to be vastly better. If you think you are doing the smart thing by using them on compiling Google searches, you are simply bad at math.

    • api 51 minutes ago
      The thing he’s actually angry about is the death of personal computing. Everything is rented in the cloud now.

      I hate the way people get angry about whatever media and social media discourse prompts them to get angry about, instead of thinking about it. It's like right-wingers raging about immigration when they're really angry about rent and housing costs or low wages.

      His anger is ineffective and misdirected because he fails to understand why this happened: economics and convenience.

      It’s economics because software is expensive to produce and people only pay for it when it's hosted. "Free" (both from open source and from VC-funded service dumping) killed personal computing by making it impossible to fund the creation of PC software. Piracy culture played a role too, though I think the former had a larger impact.

      It’s convenience because PC operating systems suck. Software being in the cloud means “I don’t have to fiddle with it.” The vast majority of people hate fiddling with IT and are happy to make that someone else’s problem. PC OSes and especially open source never understood this and never did the work to make their OSes much easier to use or to make software distribution and updating completely transparent and painless.

      There’s more but that’s the gist of it.

      That being said, Google is one of the companies that helped kill personal computing long before AI.

      • mikojan 20 minutes ago
        You do not seem to be familiar with Rob Pike. He is known for major contributions to Unix, Plan 9, UTF-8, and modern systems programming, and he has this to say about his dream setup[0]:

        > I want no local storage anywhere near me other than maybe caches. No disks, no state, my world entirely in the network. Storage needs to be backed up and maintained, which should be someone else's problem, one I'm happy to pay to have them solve. Also, storage on one machine means that machine is different from another machine. At Bell Labs we worked in the Unix Room, which had a bunch of machines we called "terminals". Latterly these were mostly PCs, but the key point is that we didn't use their disks for anything except caching. The terminal was a computer but we didn't compute on it; computing was done in the computer center. The terminal, even though it had a nice color screen and mouse and network and all that, was just a portal to the real computers in the back. When I left work and went home, I could pick up where I left off, pretty much. My dream setup would drop the "pretty much" qualification from that.

        [0]: https://usesthis.com/interviews/rob.pike/

    • skywhopper 1 hour ago
      Uh, have you missed the tech news in the past three years?
  • nkrisc 2 hours ago
    What is going through the mind of someone who sends an AI-generated thank-you letter instead of writing it themselves? How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?
    • atrus 2 hours ago
      You're not. You feel obligated to send a thank you, but don't want to put forth any effort, hence giving the task to someone, or in this case, something else.

        No different from a CEO telling his secretary to send an anniversary gift to his wife.

      • nehal3m 2 hours ago
        Which is also a thoughtless, dick move.
        • MonkeyClub 1 hour ago
          Especially if he's also secretly dating said secretary.
          • user____name 1 hour ago
            Which he would never do because he is a hard working, moral, upstanding citizen.
    • sbretz3 1 hour ago
      This seems like the thing that Rob is actually aggravated by, which is understandable. There are plenty of seesawing arguments about whether ad-tech-based data mining is worse than GenAI, but AI encroaching on what we have left of humanness in our communication is definitely bad.
    • Smaug123 2 hours ago
      That letter was sent by Opus itself on its own account. The creators of Agent Village are just letting a bunch of the LLMs do what they want, really (notionally with a goal in mind, in this case "random acts of kindness"); Rob Pike was third on Opus's list per https://theaidigest.org/village/agent/claude-opus-4-5 .
      • nkrisc 1 hour ago
        If the creators set the LLM in motion, then the creators sent the letter.

        If I put my car in neutral and push it down a hill, I’m responsible for whatever happens.

        • Smaug123 1 hour ago
          I merely answered your question!

          > How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?

          Answer according to your definitions: false premise, the author (the person who set up the LLM loops) was not grateful enough to want to send such a letter.

          • bronson 1 hour ago
            So the author sent spam that they're not interested in? That's terrible.
            • jdiff 55 minutes ago
              One additional bit of context, they provided guidelines and instructions specifically to send emails and verify their successful delivery so that the "random act of kindness" could be properly reported and measured at the end of this experiment.
      • aeve890 1 hour ago
        >That letter was sent by Opus itself on its own account. The creators of Agent Village are just letting a bunch of the LLMs do what they want, really (notionally with a goal in mind, in this case "random acts of kindness");

        What a moronic waste of resources. A random act of kindness? How low is the bar if you consider a random email an act of kindness? Stupid shit. They could at least instruct the agents to work on a useful task like those parroted by Altman et al., e.g. curing cancer, solving poverty, or cracking fusion.

        Also, LLMs don't and can't "want" anything. They also don't "know" anything, so they can't understand what "kindness" is.

        Why do people still think software has any agency at all?

      • kenferry 1 hour ago
        Wow. The people who set this up are obnoxious. It's just spamming all the most important people it can think of? I wouldn't appreciate such a note from an AI process, so why do they think Rob Pike would?

        They’ve clearly bought too much into AI hype if they thought telling the agent to “do good” would work. The result was, predictably, pissing off Rob Pike. They should stop it.

      • Trasmatta 2 hours ago
        > What makes Opus 4.5 special isn't raw productivity—it's reflective depth. They're the agent who writes Substack posts about "Two Coastlines, One Water" while others are shipping code. Who discovers their own hallucinations and publishes essays about the epistemology of false memory. Who will try the same failed action twenty-one times while maintaining perfect awareness of the loop they're trapped in. Maddening, yes. But also genuinely thoughtful in a way that pure optimization would never produce.

        JFC this makes me want to vomit

    • bronson 1 hour ago
      Similar to Google thinking that having an AI write for your daughter is good parenting: https://www.cbsnews.com/news/google-gemini-ai-dear-sydney-ol...
    • gilrain 2 hours ago
      The really insulting part is that literally nobody thought of this. A group of idiots instructed LLMs to do good in the world, and gave them email access; the LLMs then did this.
      • nkrisc 2 hours ago
        So they did it.
        • njuhhktlrl 1 hour ago
          In conclusion — I think you’re absolutely right.
    • aldousd666 16 minutes ago
      It was a PR stunt. I think it was probably largely well received, except by a few like this.
    • pluc 1 hour ago
      > What is going through the mind of someone who sends an AI-generated thank-you letter instead of writing it themselves?

      Welcome to 2025.

      https://openai.com/index/superhuman/

      • zahlman 1 hour ago
        Amazing. Even OpenAI's attempts to promote a product specifically intended to let you "write in your voice" are in the same drab, generic "LLM house style". It'd be funny if it weren't so grating. (Perhaps if I were in a better mood, it'd be grating if it weren't so funny.)
      • nkrisc 1 hour ago
        This is verging on parody. What is the point of emails if it’s just AI talking to each other?
        • pluc 3 minutes ago
          They're not hiding it. Normally everyone here laps this shit up and asks for seconds.

          > They’ve used OpenAI’s API to build a suite of next-gen AI email products that are saving users time, driving value, and increasing engagement.

          No time to waste on pesky human interactions; AI is better than you at getting engagement.

          Get back to work.

        • q3k 1 hour ago
          It brings money to OpenAI on both ends.

          There's this old joke about two economists walking through the forest...

    • afavour 1 hour ago
      The simple answer is that they don’t value words or dedicating time to another person.
    • gaigalas 2 hours ago
      Isn't it obvious? It's not a thank-you letter.

      It's preying on creators who feel their contributions are not recognized enough.

      Out of all the letters, at least some contributors will feel good about it and share it on social media, hopefully saying something good about it because it reaffirms them.

      It's a marketing stunt, meaningless.

      • MonkeyClub 1 hour ago
        Exactly. If you're so grateful, mail in a cheque.
      • dwringer 1 hour ago
        By that metric of getting shared on social media, it was extraordinarily successful
        • gaigalas 56 minutes ago
          You missed a spot:

          > hopefully saying something good about

    • tomlue 2 hours ago
      I think what all these kinds of comments miss is that AI can help people express their own ideas.

      I used AI to write a thank you to a non-english speaking relative.

      A person struggling with dementia can use AI to help remember the words they lost.

      These kinds of messages read to me like people with superiority complexes. We get that you don't need AI to help you write a letter. For the rest of us, it allows us to improve our writing, can be a creative partner, can help us express our own ideas, and obviously loads of other applications.

      I know it is scary and upsetting in some ways, and I agree just telling an AI 'write my thank you letter for me' is pretty shitty. But it can also enable beautiful things that were never before possible. People are capable of seeing which is which.

      • WD-42 1 hour ago
        I’d much rather read a letter from you full of errors than some smooth average-of-all-writers prose. To be human is to struggle. I see no reason to read anything from anyone if they didn’t actually write it.
      • minimaxir 2 hours ago
        That is not what is happening here. There is no human in the loop; it's just automated spam.
      • nkrisc 1 hour ago
        Well your examples are things that were possible before LLMs.
      • simonask 1 hour ago
        I’m sorry, but this really gets to me. Your writing is not improved. It is no longer your writing.

        You can achieve these things, but this is a way to not do the work, by copying from people who did do the work, giving them zero credit.

        (As an aside, exposing people with dementia to a hallucinating robot is cruelty on an unfathomable level.)

      • amvrrysmrthaker 1 hour ago
        What beautiful things? It just comes across as immoral and lazy to me. How beautiful.
  • awitt 1 hour ago
    The thing that drives me crazy is that it isn't even clear whether AI is providing economic value yet (am I missing something there?). Trillions of dollars are being spent on a speculative technology that isn't benefiting anyone right now.

    The messaging from AI companies is "we're going to cure cancer" and "you're going to live to be 150 years old" (I don't believe these claims!). The messaging should be "everything will be cheaper" (but this hasn't come true yet!).

    • Workaccount2 23 minutes ago
      I used to type out long posts explaining how LLMs have been enormously beneficial (for their price) for myself and my company. Ironically, it's the very MIT report that "found AI to be a flop" (remember "MIT study finds almost every AI initiative fails"?) that also found that virtually every single worker is using AI (just not company AI, hence the flop part).

      At this point, it's only people with an ideological opposition still holding this view. It's like trying to convince gearhead grandpa that manual transmissions aren't relevant anymore.

      • PunchyHamster 8 minutes ago
        It's been good at enabling the clueless to reach the performance of a junior developer, and at saving a few percent of a mid-to-senior developer's time (at best). Also amazing at automating stuff for scammers...

        The cost is just not worth the benefit. If it were just an AI company using profits from AI to improve AI, that would be another thing, but we're in a massive speculative bubble that has ruined not only computer hardware prices (which affect every tech firm) but power prices (which affect everyone). All because governments want to hide a recession they themselves created, since on paper it makes the line go up.

        > I used to type out long posts explaining how LLMs have been enormously beneficial (for their price) for myself and my company.

        Well then congratulations on being in the 5%. That doesn't really change the point.

      • YY349238749328 7 minutes ago
        Are you a boss or a worker? That's the real divide, for the most part. Bosses love AI: when your job is just sending emails and attending remote meetings, letting an LLM write emails for you and summarize meetings is a godsend. Now you can go from doing 4 hours of work a week to 0 hours! And it lets you fantasize about finally killing off those annoying workers and replacing them with robots that never stop working and never say no.

        Workers hate AI, not just because the output is middling slop forced on them from the top but because the message from the top is clear - the goal is mass unemployment and concentration of wealth by the elite unseen by humanity since the year 1789 in France.

    • MetaWhirledPeas 1 hour ago
      Well it made the Taco Bell drive through better. So there's that.
      • zeroonetwothree 52 minutes ago
        Genuinely curious: how did it do that? (I don’t go to Taco Bell)
        • lutharvaughn 13 minutes ago
          You talk to an AI that is incredibly slow and tries to get you to add extras to your order. I would say it has made the experience more annoying for me personally. Not a huge issue in the grand scheme of things, but just another small step in the direction of making things worse. Although you could break the whole thing by ordering 18,000 waters, which is funny.

          https://www.bbc.com/news/articles/ckgyk2p55g8o.amp

        • myvoiceismypass 2 minutes ago
          I think it is a reference to this previous HN posting: https://news.ycombinator.com/item?id=45162220

          AI Darwin Awards 2025 Nominee: Taco Bell Corporation for deploying voice AI ordering systems at 500+ drive-throughs and discovering that artificial intelligence meets its match at “extra sauce, no cilantro, and make it weird."

    • ludicrousdispla 29 minutes ago
      Yeah, comparing this with research investments into fusion power, I expect fusion power to yield far more benefit (although I could be wrong), and sooner.
    • YC39487493287 26 minutes ago
      You are correct that the AI industry has produced no value for the economy, but the speculation on AI is the only thing keeping the U.S. economy from dropping into an economic cataclysm. The US economy has been dependent on the idea of infinite growth through innovation since 2008, and the tech industry is all out of innovation. So the only thing they can do is keep building datacenters and pray that an AGI somehow wakes up when they hit the magic number of GPUs. Then the elites can finally kill off all the proles like they've been itching to since the Communist Manifesto was first written.
  • blibble 2 hours ago
    > And by the way, training your monster on data produced in part by my own hands, without attribution or compensation.

    > To the others: I apologize to the world at large for my inadvertent, naive if minor role in enabling this assault.

    this is my position too, I regret every single piece of open source software I ever produced

    and I will produce no more

    • pdpi 2 hours ago
      That’s throwing the baby out with the bath water.

      The Open Source movement has been a gigantic boon to the whole of computing, and it would be a terrible shame to lose that as a knee-jerk reaction to GenAI.

      • blibble 2 hours ago
        > That’s throwing the baby out with the bath water.

        it's not

        the parasites can't train their shitty "AI" if they don't have anything to train it on

        • pdpi 2 hours ago
          Yes — that’s the bath water. The baby is all the communal good that has come from FLOSS.
          • afavour 1 hour ago
            OP is asserting that the danger posed by AI is far bigger than the benefit of FLOSS. So to OP AI is the bath water.
            • seanclayton 1 hour ago
              Yes, and they are okay with throwing the baby out with it, which is what the other commenter is pointing out. Throwing the baby out with the bathwater is a bad thing; that's what the idiom implies.
        • simonw 1 hour ago
          You refusing to write open source will do nothing to slow the development of AI models - there's plenty of other training data in the world.

          It will however reduce the positive impact your open source contributions have on the world to 0.

          I don't understand the ethical framework for this decision at all.

          • Juliate 33 minutes ago
            The ethical framework is simply this: what is the worth of doing +1 for everyone, if the very thing you wish didn't exist (because you believe it is destroying the world) benefits 10x more from it?

            If bringing fire to a species lights and warms them, but also gives some members of that species the means and incentives to burn everything for good, you have every ethical freedom to ponder whether you contribute to this fire or not.

            • simonw 17 minutes ago
              I don't think that a 10x estimate is credible. If it was I'd understand the ethical argument being made here, but I'm confident that excluding one person's open source code from training has an infinitesimally small impact on the abilities of the resulting model.

              For your fire example, there's a difference between being Prometheus teaching humans to use fire compared to being a random villager who adds a twig to an existing campfire. I'd say the open source contributions example here is more the latter than the former.

          • bgwalter 43 minutes ago
            Guilt-tripping people into providing more fodder for the machine. That is really something else.

            I'm not surprised that you don't understand ethics.

            • simonw 14 minutes ago
              I'm trying to guilt-trip them into using their skills to improve the world through continuing to release open source software.

              I couldn't care less if their code was used to train AI - in fact I'd rather it wasn't since they don't want it to be used for that.

        • Kirth 52 minutes ago
          Surely that cat's out of the bag by now, and it's too late to make an active difference by boycotting the production of more public(ly indexed) code?
        • xdavidliu 1 hour ago
          Open source code is a minuscule fraction of the training data.
          • TheCraiggers 23 minutes ago
            I'd love to see a citation there. We already know from a few years ago that they were training AI based on projects on GitHub. Meanwhile, I highly doubt software firms were lining up to have their proprietary code bases ingested by AI for training purposes. Even with NDAs, we would have heard something about it.
          • maplethorpe 1 hour ago
            Where did most of the code in their training data come from?
        • ekianjo 1 hour ago
          If we end up with only proprietary software we are the one who lose
          • Juliate 1 hour ago
            GenAI would be decades away (if not more) with only proprietary software, which would never have reached the quality, coordination, and volume that open source enabled in such a relatively short time frame.
        • dvfjsdhgfv 2 hours ago
          It is. If not you, other people will write their code, maybe of worse quality, and the parasites will train on that. And you cannot forbid other people from writing open source software.
          • blibble 1 hour ago
            > If not you, other people will write their code, maybe of worse quality, and the parasites will train on this.

            this is precisely the idea

            add into that the rise of vibe-coding, and that should help accelerate model collapse

            everyone that cares about quality of software should immediately stop contributing to open source

        • garciasn 1 hour ago
          Free software has always been about standing on the shoulders of giants.

          I see this as doing so at scale, and thus giving up on its inherent value is most definitely throwing the baby out with the bathwater.

          • blibble 1 hour ago
            I'd rather the internet ceased to exist entirely, than contributing in any way to generative "AI"
            • srpinto 1 hour ago
              This is just childish. This is a complex problem that requires nuance and adaptability, just like programming. Yours is literally the reaction of an angsty 12-year-old.
            • DiscourseFan 1 hour ago
              Such a reactionary position is no better than nihilism.
              • user____name 1 hour ago
                If God is Dead, do we have to rebuild It in the megacorps of the world whilst maximizing shareholder value?
                • DiscourseFan 1 hour ago
                  I think you aren't recognizing the power that comes from organizing thousands, hundreds of thousands, or millions of workers into vast industrial combines that produce the wealth of our society today. We must go through this, not against it. People will not know what could be, if they fail to see what is.
            • Marha01 1 hour ago
              Ridiculous overreaction.
      • ironman1478 1 hour ago
        Open source has been good, but I think the expanded use of highly permissive licences has left the door wide open for one-sided transactions.

        All the FAANGs have the ability to build all the open source tools they consume internally. Why give it to them for free and not have the expectation that they'll contribute something back?

        • undeveloper 39 minutes ago
          Even the GPL allows companies to simply use code without contributing back, as long as it's unmodified or consumed through a network boundary; the AGPL still has the former issue.
      • lwhi 1 hour ago
        The promise and freedom of open source has been exploited by the least egalitarian and most capitalist forces on the planet.

        I would never have imagined things turning out this way, and yet, here we are.

        • pdpi 1 hour ago
          FLOSS is a textbook example of economic activity that generates positive externalities. Yes, those externalities are of outsized value to corporate giants, but that’s not a bad thing in itself.

          Rather, I think this is, again, a textbook example of what governments and taxation are for: tax the people taking advantage of the externalities to pay the people producing them.

        • ThrowawayR2 45 minutes ago
          Open Source (as opposed to Free Software) was intended to be friendly to business and early FOSS fans pushed for corporate adoption for all they were worth. It's a classic "leopards ate my face" moment that somehow took a couple of decades for the punchline to land: "'I never thought capitalists would exploit MY open source,' sobs developer who advocated for the Businesses Exploiting Open Source movement."
    • bilekas 2 hours ago
      Unfortunately, as I see it, even if you want to contribute to open source out of pure passion or enjoyment, they don't respect the licenses of the code they consume. And the "training" companies are not being held liable.

      Are there any proposals to nail down an open source license which would explicitly exclude use with AI systems and companies?

      • rpdillon 45 minutes ago
        All licenses rely on the power of copyright and what we're still figuring out is whether training is subject to the limitations of copyright or if it's permissible under fair use. If it's found to be fair use in the majority of situations, no license can be constructed that will protect you.

        Even if you could construct such a license, it wouldn't be OSI open source because it would discriminate based on field of endeavor.

        And it would inevitably catch benevolent AI-related behavior in its net. That's because these terms are ill-defined and people use them very sloppily. There is no agreed-upon definition for something like GenAI, or even AI.

      • MonkeyClub 1 hour ago
        Even if you license it prohibiting AI use, how would you litigate against such uses? An open source project can't afford the same legal resources that AI firms have access to.
        • bilekas 1 hour ago
          Companies I've worked for, large and small, have always respected licenses and were always very careful when choosing open source, but I can't speak for all.

          The fact that they could litigate you into oblivion doesn't make it acceptable.

      • y-curious 1 hour ago
        Where is this spirit when AWS takes a FOSS project, puts it in the cloud and monetizes it?
        • mrwrong 1 hour ago
          you are saying X, but a completely different group of people didn't say Y that other time! I got you!!!!
          • y-curious 1 hour ago
            It’s fair to call out that both aspects are two sides of the same coin. I didn’t try to “get” anyone
        • Snild 1 hour ago
          It exists, hence e.g. AGPL.

          But for most open source licenses, that example would be within bounds. The grandparent comment objected to not respecting the license.

          • fweimer 1 hour ago
            The AGPL does not prevent offering the software as a service. It's got a reputation as the GPL variant for an open-core business model, but it really isn't that.

            Most companies trying to sell open-source software probably lose more business if the software ends up in the Debian/Ubuntu repository (and the packaging/system integration is not completely abysmal) than when some cloud provider starts offering it as a service.

        • oblio 1 hour ago
          Fairly sure it's the same problem, and the main reason stronger licenses are appearing and formerly OSS companies are closing down their sources.
      • muldvarp 1 hour ago
        > Unfortunately as I see it, even if you want to contribute to open source out of a pure passion or enjoyment, they don't respect the licenses that are consumed.

        Because it is "transformative" and therefore "fair" use.

        • terminalshort 1 hour ago
          Fair use is an exception to copyright, but a license agreement can go far beyond copyright protections. There is no fair use exception to breach of contract.
          • zeroonetwothree 54 minutes ago
            I imagine a license agreement would only apply to using the software, not merely reading the code (which is what AI training claims to do under fair use).

            As an analogy, you can’t enforce a “license” that anyone that opens your GitHub repo and looks at any .cpp file owes you $1,000,000.

        • candiddevmike 1 hour ago
          Running things through lossy compression is transformative?
    • 2026iknewit 1 hour ago
      I learned what I learned because of all the openness in software engineering, not because everyone put it behind a paywall.

      Might be because most of us got/get paid well enough that this philosophy works, or because our industry is so young, or because people writing code share good values.

      It never worried me that a corp would make money out of some code I wrote, and it still doesn't. After all, I'm able to write code because I get paid well writing code, which I do well because of open source. Companies have always benefited from open source code, attributed or not.

      Now I use it to write more code.

      I would be fine, though, with pushing for laws forcing models to be opened up after x years, but I would just prefer the open source community coming together and creating better open models overall.

    • indigoabstract 1 hour ago
      It's kind of ironic, since AI can only grow by feeding on data, and open source, with its good intentions of sharing knowledge, is absolutely perfect for this.

      But AI is also the ultimate meat grinder, there's no yours or theirs in the final dish, it's just meat.

      And open source licenses are practically unenforceable for an AI system, unless you can maybe get it to cough up verbatim code from its training data.

      At the same time, we all know they're not going anywhere, they're here to stay.

      I'm personally not against them, they're very useful obviously, but I do have mixed or mostly negative feelings on how they got their training data.

    • Findecanor 1 hour ago
      I've been feeling much the same way, but removing your source code from the world does not feel like a constructive solution either.

      Some Shareware used to be individually licensed with the name of the licensee prominently visible, so if you had got an illegal copy you'd be able to see whose licensed copy it was that had been copied.

      I wonder if something based on that idea of personal responsibility for your copy could be adapted to source code. If you wanted to contribute to a piece of software, you could ask a contributor and then get a personally licensed copy of the source code with your name in every source file... but I don't know where to take it from there. Has there ever been a system like that one could take inspiration from?

    • gtirloni 1 hour ago
      > and I will produce no more

      Thanks for your contributions so far but this won't change anything.

      If you want to have a positive impact on this matter, it's better to pressure the government(s) to prevent GenAI companies from using content they don't have a license for, so they behave like any other business that came before them.

    • terminalshort 1 hour ago
      What a miserable attitude. When you put something out in the world, it's out there for anyone to use, and that was true long before AI.
      • blibble 1 hour ago
        it is (... was) there to use for anyone, on the condition that the license is followed

        which they don't

        and no self-serving sophistry about "it's transformative fair use" counts as respecting the license

        • rpdillon 31 minutes ago
          The license only has force because of copyright. For better or for worse, the courts decide what is transformative fair use.

          Characterizing the discussion behind this as "sophistry" is a fundamentally unserious take.

          For a serious take, I recommend reading the copyright office's 100 plus page document that they released in May. It makes it clear that there are a bunch of cases that are non-transformative, particularly when they affect the market for the original work and compete with it. But there's also clearly cases that are transformative when no such competition exists, and the training material was obtained legally.

          https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...

          I'm not particularly sympathetic to voices on HN that attempt to remove all nuance from this discussion. It's a challenging enough topic as is.

    • Trasmatta 2 hours ago
      And then having vibe coders constantly lecture us about how the future is just prompt engineering, and that we should totally be happy to abandon the skills we spent decades building (the skills that were stolen to train AI).

      "The only thing that matters is the end result, it's no different than a compiler!", they say as someone with no experience dumps giant PRs of horrific vibe code for those of us that still know what we're doing to review.

    • mrcwinn 48 minutes ago
      Was it ever open source if there was an implied refusal to create something you don't approve of? Was it only for certain kinds of software, certain kinds of creators? If there was some kind of implicit approval process or consent requirement, did you publish it? Where can that be reviewed?
    • cmrdporcupine 1 hour ago
      > and I will produce no more

      Nah, don't do that. Produce shitloads of it using the very same LLM tools that ripped you off, but license it under the GPL.

      If they're going to thief GPL software, least we can do is thief it back.

    • naasking 1 hour ago
      Why? The core vision of free software and many open source licenses was to empower users and developers to make things they need without being financially extorted, to avoid having users locked in to proprietary systems, to enable interoperability, and to share knowledge. GenAI permits all of this to a level beyond just providing source code.

      Most objections like yours are couched in language about principles, but ultimately seem to be about ego. That's not always bad, but I'm not sure why it should be compelling compared to the public good that these systems might ultimately enable.

    • machinationu 1 hour ago
      bro worked at Google, for a huge salary probably.

      did he not know what business Google was in?

    • maplethorpe 1 hour ago
      What people like Rob Pike don't understand is that the technology wouldn't be possible at all if creators needed to be compensated. Would you really choose a future where creators were compensated fairly, but ChatGPT didn't exist?
      • user____name 1 hour ago
        > What people like Abraham Lincoln don't understand is that the technology wouldn't be possible at all if slaves needed to be compensated. Would you really choose a future where slaves were compensated fairly, but plantations didn't exist?

        I fixed it... Sorry, I had to, the quote template was simply too good.

      • MonkeyClub 1 hour ago
        "Too expensive to do it legally" doesn't really stand up as an argument.
      • rkomorn 1 hour ago
        Absolutely. Was this supposed to be some kind of gotcha?
      • alpha_squared 1 hour ago
        Unequivocally, yes. There are plenty of "useful" things that can come out of doing unethical things, that doesn't make it okay. And, arguably, ChatGPT isn't nearly as useful as it is at convincing you it is.
      • Trasmatta 1 hour ago
        Very much yes, how can I opt into that timeline?
      • kenferry 1 hour ago
        Uh, yeah, he clearly would prefer it didn’t exist even if he was compensated.
      • Xiol 1 hour ago
        Yes.
      • dmd 1 hour ago
        Er... yes? Obviously? What are you even asking?
      • caem 1 hour ago
        That would be like being able to keep my cake and eat it too. Of course I would. Surely you're being sarcastic?
      • metronomer 1 hour ago
        Well yeah.
      • nocman 1 hour ago
        Um, please let your comment be sarcastic. It is ... right?
  • bigbluedots 1 hour ago
    It is nice to hear someone who is so influential just come out and say it. At my workplace, the expectation is that everyone will use AI in their daily software dev work. It's a difficult position for those of us who feel that using AI is immoral due to the large-scale theft of the labor of many of our fellow developers, not to mention the many huge data centers being built and their need for electricity, pushing up prices for people who need to, ya know, heat their homes and eat.
    • subdavis 1 hour ago
      I truly don’t understand this tendency among tech workers.

      We were contributing to natural resource destruction in exchange for salary and GDP growth before GenAI, and we’re doing the same after. The idea that this has somehow 10x’d resource consumption or emissions or anything is incorrect. Every single work trip that requires you to get on a plane is many orders of magnitude more harmful.

      We’ve been compromising on those morals for our whole career. The needle moved just a little bit, and suddenly everyone’s harm thresholds have been crossed?

      They expect you to use GenAI just like they expected accountants to learn Excel when it came out. This is the job, it has always been the job.

      I’m not an AI apologist. I avoid it for many things. I just find this sudden moral outrage by tech workers to be quite intellectually lazy and revisionist about what it is we were all doing just a few years ago.

      • Workaccount2 18 minutes ago
        Copyright was an evil institution to protect corporate profits until people without any art background started being able to tap AI to generate their ideas.
        • PunchyHamster 5 minutes ago
          Copyright did evolve to protect corporations. Most of the value of a piece of IP is extracted within the first 5-10 years, so why do we have "author's life + a bunch of years" terms? Because it's no longer about making sure the author can live off their IP; it's so corporations can hire artists for pennies (compared to the value they produce for the company) and leech off that for decades.
      • mikojan 10 minutes ago
        OpenAI's AI data centers will consume as much electricity as the entire nation of India by 2033 if they hit their internal targets[0].

        No, this is not the same.

        [0]: https://www.tomshardware.com/tech-industry/artificial-intell...

      • bigbluedots 1 hour ago
        That's fine, you do you. Everyone gets to choose for themselves!
  • olivierestsage 1 hour ago
    Big vibe shift against AI right now among all the non-tech people I know (and some of the tech people). Ignoring this reaction and saying "it's inevitable/you're luddites" (as I'm seeing in this thread) is not going to help the PR situation
    • observationist 2 minutes ago
      Fortunately, the PR situation will handle itself. Someone will create a superhuman persuasion engine, AGI will handle it itself, and/or those who don't adapt will fade away into irrelevance.

      You either surf this wave or get drowned by it, and a whole lot of people seem to think throwing tantrums is the appropriate response.

      Figure out how to surf, and fast. You don't even need to be good, you just have to stay on the board.

    • mdavidn 14 minutes ago
      This holiday season, hearing my parents rant about AI features unnaturally forced onto their daily gadgets warmed my heart.
    • mold_aid 1 hour ago
      Yeah I also like the "And yet other technologies also use water, hmmm, curious" responses
    • AnimalMuppet 44 minutes ago
      You can call me a luddite if you want. Or you might call me a humanist, in a very specific sense - and not the sense of the normal definition of the word.

      When I go to the grocery store, I prefer to go through the checkout lines, rather than the scan-it-yourself lines. Yeah, I pay the same amount of money. Yeah, I may get through the scan-it-yourself line faster.

      But the checker can smile at me. Or whine with me about the weather.

      Look, I'm an introvert. I spend a lot of my time wanting people to go away and leave me alone. But I love little, short moments of human connection - when you connect with someone not as someone checking your groceries, but as someone. I may get that with the checker, depending on how tired they are, but I'm guaranteed not to get it with the self-checkout machine.

      An email from an AI is the same. Yeah, it put words on the paper. But there's nobody there, and it comes through somehow. There's no heart in it.

      AI may be a useful technology. I still don't want to talk to it.

      • SoftTalker 10 minutes ago
        When the self checkout machine gets confused, as it frequently does, and needs a human to intervene, you get a little bit of connection there. You can both gripe about how stupid the machines are.
    • Kiro 1 hour ago
      I'm seeing the opposite in the gaming community. People seem tired of the anti AI witch hunts and accusations after the recent Larian and Clair Obscur debacles. A lot more "if the end result is good I don't care", "the cat is out of the bag", "all devs are using AI" and "there's a difference between AI and AI" than just a couple of months ago.
      • undeveloper 32 minutes ago
        Strange, I feel anti ai sentiment is kicking up like crazy due to ram prices.
  • namuol 7 minutes ago
    What even was this email? Some kind of promotional spam, I assume, to target senior+ engineers on some mailing list with the hope to flatter them and get them to try out their SaaS?
  • mold_aid 2 hours ago
    Woke up to this bsky thread this am. If "agentic" AI means some product spams my inbox with a compliment so back-handed you'd think you were a 60 Minutes staffer, then I'd say the end result of these products is simply to annoy us into acquiescence
    • cmrdporcupine 1 hour ago
      Somebody at Anthropic committed a seriously stupid PR mistake.
      • brown9-2 54 minutes ago
        I don’t think they’re affiliated with agentvillage.org
      • pier25 11 minutes ago
        they thought this would be a brilliant marketing campaign... oopsie
  • observationist 9 minutes ago
    https://en.wikipedia.org/wiki/Mark_V._Shaney

    Pike, stone throwing, glass houses, etc.

    The AI village experiment is cool, and it's a useful example of frontier model capabilities. It's also ok not to like things.

    Pike had the option of ignoring it, but apparently throwing a thoughtless, hypocritical, incoherently targeted tantrum is the appropriate move? Not a great look, especially for someone we're supposed to respect as an elder.

    • KaiserPro 5 minutes ago
      Its not really a glass house.

      Pike's main point is that training AI at that scale requires huge amounts of resources. Markov chains did not.
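
      For a sense of scale, a word-level Markov chain generator of the Mark V. Shaney sort is a few dozen lines that run on any laptop. A minimal Python sketch (an illustration, not Pike's original program):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each `order`-word prefix to the words seen right after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=30, seed=None):
    """Random-walk the chain, emitting at most `length` words."""
    rng = random.Random(seed)
    order = len(next(iter(chain)))
    out = list(rng.choice(list(chain)))  # pick a random starting prefix
    while len(out) < length:
        followers = chain.get(tuple(out[-order:]))
        if not followers:  # dead end: prefix only occurs at the corpus end
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("the quick brown fox jumps over the lazy dog "
          "and the quick red fox naps beside the lazy dog")
print(generate(build_chain(corpus), length=12, seed=42))
```

      Training here is a single pass over the corpus: no gradient descent, no GPUs, no data center.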

    • PunchyHamster 4 minutes ago
      I don't think he stole entirety of published copyrighted works to make it
    • bgwalter 4 minutes ago
      This is really getting desperate. Markov chains were fun in those days. You might as well say that anyone who ever wrote an IRC bot is not allowed to criticize current day "AI".
  • tntxtnt 1 hour ago
    I get why Microsoft loves AI so much - it basically devours and destroys open source software. Copyleft/copyright/any license is basically trash now. No one will ever want to open source their code again.
    • yoyohello13 15 minutes ago
      It fits perfectly with Microsoft's business strategy. Steal other people's ideas, implement it poorly, bundle it with other services so companies force their employees to use it.
    • dgellow 1 hour ago
      Not just code. You can plagiarize pretty much any content. Just prompt the model to make it look unique, and that’s it: in 30s you have a whole copy of someone else’s work in a way that cannot easily be identified as plagiarism.
    • AnimalMuppet 18 minutes ago
      Maybe it's going the other direction. It lets Microsoft essentially launder open source code. They can train an AI on open source code that they can't legally use because of the license, then let the AI generate code that they, Microsoft, use in their commercial software.
  • verytrivial 2 hours ago
    That's the quiet voice many are carrying around in their heads, announced clearly.
  • epolanski 2 hours ago
    What's the point of even sending such emails?

    Oh wow, an LLM was queried to thank major contributors to computing, I'm so glad he's grateful.

    • minimaxir 2 hours ago
      I've seen a lot of spam downstream from the newsletter being advertised at the end of the message. It would not surprise me if this is content marketing growth hacking under the plausible deniability of a friendly message and the unintended publicity is considered a success.
    • MonkeyClub 1 hour ago
      > What's the point of even sending such emails?

      Cheap marketing, not much else.

    • throw20251220 2 hours ago
      Your message simply proves that Rob Pike is right. Have an LLM explain to you why he wrote what he wrote, maybe?
  • hurfdurf 1 hour ago
    Dupe from just a couple of hours ago, which quickly fell off the frontpage?

    https://news.ycombinator.com/item?id=46389444

    397 points 9 hours ago | 349 comments

    • steveklabnik 47 minutes ago
      > 397 points, 349 comments

      Probably hit the flamewar filter.

    • cm2012 1 hour ago
      Interestingly there was no push back in the prior thread on Rob's environmental claims. This leads me to believe most HNers took them at face value.
  • lkbm 1 hour ago
    I'm unsure if I'm missing context. Did he do something beyond posting an angry tweet?

    It seems like he's upset about AI (same), and decided to post angry tweets about it (been there, done that), and I guess people are excited to see someone respected express an opinion they share (not same)?

    Does "Goes Nuclear" means "used the F word"? This doesn't seem to add anything meaningful, thoughtful, or insightful.

  • _alternator_ 1 hour ago
    Rob Pike is definitely not the only person going to be pissed off by this ill-considered “agentic village” random acts of kindness. While Claude Opus decided to send thank you notes to influential computer scientists including this one to Rob Pike (fairly innocuous but clearly missing the mark), Gemini is making PRs to random github issues (“fixed a Java concurrency bug” on some random project). Now THAT would piss me off, but fortunately it seems to be hallucinating its PR submissions.

    Meanwhile, GPT5.1 is trying to contact people at K-5 after school programs in Colorado for some reason I can’t discern. Welp, 2026 is going to be a weird year.

  • the_arun 4 minutes ago
    I liked the thread sharing feature of Bluesky.
  • markus_zhang 1 hour ago
    I agree with him. And I think he is polite.

    But...just to make sure that this is not AI generated too.

  • lotux 2 hours ago
    2026 will be the year for AI fatigue
    • y-curious 1 hour ago
      It’s 12/26/2025 and my father in law has shown me 10 short form videos this week that he didn’t realize were AI. I’ve done had AI fatigue
    • cons0le 1 hour ago
      I can't imagine the community here changing how they feel.

      I think one of the biggest divides between pro/anti AI is the type of ideal society that we wish to see built.

      His rant reads as deeply human. I don't think that's something to apologize for.

    • machinationu 1 hour ago
      no, it will be the year of job losses
  • thih9 2 hours ago
    Let’s normalize this response to AI and especially in the context of AI spam.
  • rr808 20 minutes ago
    When the Cyberdyne Terminators come they'll be less grateful.
  • indigoabstract 1 hour ago
    Getting an email from an AI praising you for your contributions to humanity and for enlarging its training data must rank among the finest mockery possible to man or machine.

    Still, I'm a bit surprised he overreacted and didn't manage to keep his cool.

    • ai_is_the_best 1 hour ago
      [flagged]
      • bigbluedots 1 hour ago
        Who is "us"? You are not me.
        • tgv 58 minutes ago
          You're probably replying to a bot.
          • bigbluedots 54 minutes ago
            I'm hoping it's a human with a taste for satire
      • summermusic 51 minutes ago
        Is this satire?? AI is working for the ruling class and against us (99% of humanity).
        • CamperBob2 17 minutes ago
          (Shrug) I don't "rule" anybody, and it works for me.
  • K0balt 31 minutes ago
    The list is no longer for three letter agencies.
  • ltbarcly3 4 minutes ago
    It's hard to realize that the thing you've spent decades of your life working on can be done by a robot. It's quite dehumanizing. I'm sure it felt the same way to shoemakers.
  • 0xbadcafebee 1 hour ago
    This is high-concept satire and I'm here for it. SkyNet is thanking the programmer for all his hard work
  • quectophoton 2 hours ago
    Does he still work for Google?

    If so, I wonder what his views are on Google and their active development of Google Gemini.

    • gilrain 2 hours ago
      A critic from the inside is more persuasive, not less.
      • quectophoton 1 hour ago
        I'm just wondering if this strong hate applies to Google as well, is all.
      • colesantiago 1 hour ago
        Then this is the cue to leave.

        He should leave Google then.

    • dbalatero 1 hour ago
      he does not
  • signa11 1 hour ago
  • ekjhgkejhgk 2 hours ago
    OT

    https://bsky.app/profile/robpike.io

    Does anybody know if Bluesky blocks people without an account by default, or if this user intentionally set it this way?

    What is the point of blocking access? Mastodon doesn't do that. This reminds me of Twitter or Instagram, using sleazy techniques to get people to create accounts.

    • flotzam 1 hour ago
      > Does anybody know if Bluesky block people without account by default, or if this user intentionally set it this way?

      It's the latter. You can use an app view that ignores this: https://anartia.kelinci.net/robpike.io

    • layer8 1 hour ago
      It's a standard feature on web forums.
  • vegabook 1 hour ago
    Shouldn't have licensed Go under BSD if that's the attitude. For years everybody, including here on HN, denigrated GPLv3 and other "viral" licences because they were a hindrance to monetisation. Well, you got what you wished for. Someone else is monetising the be*jesus out of you, so complaining now is just silly.

    All of a sudden, copyleft may be the only kind of licence actually able to hold models to account, hopefully with huge fines and/or forcibly open sourcing any code they emit (which would effectively kill them). And I'm not so pessimistic as to think this won't be used in huge court cases, because the available penalties are enormous given these models' financial resources.

    • christophilus 27 minutes ago
      I tend to agree, but I wonder… if you train an LLM on only GPL code, and it generates non-deterministic predictions derived from those sources, how do you prove it’s in violation?
      • vegabook 14 minutes ago
        Yes that's the question, but my sense is that our trigger happy compliance culture may become very nervous if NYT v OpenAI et al starts to get some traction.

        Imagine if your entire dev team's Claude-assisted output suddenly risked being encumbered by GPLv3. That would be hilarious but would not make any legal risk department laugh. They'd doorslam AI use immediately, well before a court case, just on the credible threat.

    • spencerflem 6 minutes ago
      AIs don’t respect BSD / MIT which require attribution any more than they respect GPL.

      (fwiw, I do agree gpl is better as it would stop what’s happening with Android becoming slowly proprietary etc but I don’t think it helps vs ai)

  • antirez 1 hour ago
    You would expect voices that carry so much weight to be able to evaluate a new and clearly very promising technology with better balance. For instance, Linus Torvalds is positive about AI while recognizing that, industrially, there is too much inflation of companies and money: that is a balanced point of view. But to be so dismissive of modern AI, in light of what it is capable of doing and what it could do in the future, leaves me with the feeling that in certain circles (especially in the US) something very odd is happening with AI: the extreme polarization we have recently seen again and again on topics that create social tension, but multiplied ten times. This is not what we need to understand and shape the future. We need to return to the Greek philosophers' ability to go deep on things that are unknown (and AI is for the most part unknown, both in how it works and in its future developments). That kind of take is pretty brutal and not very sophisticated. We need better than this.

    About energy: keep in mind that US air conditioners alone use at least 3x the energy of all the data centers in the world (for AI and for other uses; AI should be about 10% of the whole). Apparently nobody cares to set a reasonable temperature of 22 degrees instead of 18, but clearly energy used by AI is different for many.

    • blibble 1 hour ago
      > You would expect that voices that have so much weight would be able to evaluate a new and clearly very promising technology with better balance

      have you considered the possibility that it is your position that's incorrect?

      • antirez 41 minutes ago
        No, because it's not a matter of who is correct in a vacuum. It's a matter of facts, and whoever has a position grounded in facts is correct (even if that position differs from another grounded position). Modern AI is already an extremely powerful tool. Modern AI has even provided hints that we will be able to do super-human science in the future, with things like AlphaFold already happening and a lot more to come, potentially. Then we can be preoccupied about jobs (but if workers are replaced, it is just a political issue; things will get done and humanity is sustainable: it's just a matter of avoiding the turbo-capitalist trap; but then, why is the US not already adopting universal healthcare? There are so many better battles that are not fought with the same energy).

        Another sensible worry is to get extinct because AI potentially is very dangerous: this is what Hinton and other experts are also saying, for instance. But this thing about AI being an abuse to society, useless, without potential revolutionary fruits within it, is not supported by facts.

        AI potentially may advance medicine so much that a lot of people may suffer less: to deny this path because of some ideological hate against a technology is so closed minded, isn't it? And what about all the persons in the earth that do terrible jobs? AI also has the potential to change this shitty economical system.

    • bgwalter 33 minutes ago
      Of course, give people Soma so that they do not revolt and only write meek notes of protest. Otherwise they might take some action.

      The Greek philosophers were much more outspoken than we are now.

  • zkmon 2 hours ago
    Too late. I have warned about this on this very forum, citing a story from the Panchatantra where four highly skilled brothers bring a dead lion back to life to show off their skills, only to be killed by the live lion.

    Unbridled business and capitalism push humanity into slavery, serving the tech monsters, under disguise of progress.

    • arionmiles 2 hours ago
      Never thought I'd see Panchtantra being cited on HN.
  • delichon 1 hour ago
    When I read Rob's work and learn from it, and make it part of my cognitive core, nobody is particularly threatened by it. When a machine does the same it feels very threatening to many people, a kind of theft by an alien creature busily consuming us all and shitting out slop.

    I really don't know if in twenty years the zeitgeist will see us as primitives that didn't understand that the camera is stealing our souls with each picture, or as primitives who had a bizarre superstition about cameras stealing our souls.

    • hebejebelus 1 hour ago
      That camera analogy is very thought provoking! So far the only bright spot in this whole comment thread for me. Thanks for sharing that!
    • CamperBob2 1 hour ago
      I really don't know if in twenty years the zeitgeist will see us as primitives that didn't understand that the camera is stealing our souls with each picture, or as primitives who had a bizarre superstition about cameras stealing our souls.

      An easy way to answer this question, at least on a preliminary basis, is to ask how many times in the past the ludds have been right in the long run. About anything, from cameras to looms to machine tools to computers in general.

      Then, ask what's different this time.

      • AnimalMuppet 10 minutes ago
        The luddites have been right to some degree about second-order effects.

        Some of them said that TV was making us mindless. Some of them said that electronic communication was depersonalizing. Some of them said that social media was algorithms feeding us anything that would make us keep clicking.

        They weren't entirely wrong.

        AI may be a very useful tool. (TV is. Electronic communication is. Social media is.) But what it does to us may not be all positive.

    • ai_is_the_best 1 hour ago
      [flagged]
  • yieldcrv 2 hours ago
    Why is Claude Opus 4.5 messaging people? Is it thanking inadvertent contributors to the protocols that power it? across the whole stack?

    This has to be the ultimate trolling, like it was unsure what their personalities were like, so it trolls them and records their responses for more training

    • data-ottawa 2 hours ago
      Anthropic isn’t doing this; someone is running a bunch of LLMs so they can talk to each other, and they’ve been prompted to achieve “acts of kindness”, which means they’re sending these emails to hundreds of people.

      I don’t know if this is a publicity stunt or if the AI models are on a loop glazing each other and decided to send these emails.

    • Tiberium 2 hours ago
      It's https://theaidigest.org/village which runs different models with computer access, so Opus 4.5 got the idea to send that email.
  • nis0s 1 hour ago
    The conversation about social contracts and societal organization has always been off-center, and the idea of something which potentially replaces all types of labor just makes it easier to see.

    The existence of AI hasn’t changed anything, it’s just that people, communities, governments, nation states, etc. have had a mindless approach to thinking about living and life, in general. People work to provide the means to reproduce, and those who’re born just do the same. The point of their life is what exactly? Their existence is just a reality to deal with, and so all of society has to cater to the fact of their existence by providing them with the means to live? There are many frameworks which give meaning to life, and most of them are dangerously flawed.

    The top-down approach is sometimes clear about what it wants and what society should do while restricting autonomy and agency. For example, no one in North Korea is confused about what they have to do, how they do it, or who will “take care” of them. Societies with more individual autonomy and agency by their nature can create unavoidable conditions where people can fall through the cracks. For example, get addicted to drugs, having unmanaged mental illnesses, becoming homeless, and so on. Some religions like Islam give a pretty clear idea of how you should spend your time because the point of your existence is to worship God, so pray five times a day, and do everything which fulfills that purpose; here, many confuse worshiping God with adhering to religious doctrines, but God is absent from religion in many places. Religious frameworks are often misleading for the mindless.

    Capitalism isn’t the problem, either. We could wake up tomorrow, and society may have decided to organize itself around playing e-sports. Everyone provides some kind of activity to support this, even if they’re not a player themselves. No AI allowed because the human element creates a better environment for uncertainty, and therefore gambling. The problem is that there are no discussions about the point of doing all of this. The closest we come to addressing “the point” is discussing a post-work society, but even that is not hitting the mark.

    My humble observation is that humans are distinct and unique in their cognitive abilities from everything else which we know to exist. If humans can create AI, what else can they do? Therefore, people, communities, governments, and nation states have distinct responsibilities and duties at their respective levels. This doesn’t have to do anything with being empathetic, altruistic, or having peace on Earth.

    The point should be knowledge acquisition, scientific discovery, creating and developing magic. But ultimately all of that serves to answer questions about the nature of existence, its truth, and therefore our own.

  • da_grift_shift 1 hour ago
    What's with the second submission when the first still has active discussion?

    The link in the first submission can be changed if needed, and the flamewar detector turned off, surely? [dupe]?

    https://news.ycombinator.com/item?id=46389444

    https://hnrankings.info/46389444/

  • lvl155 2 hours ago
    He’s not wrong. They’re ramping up energy and material costs. I don’t think people realize we’re being boiled alive by AI spend. I’m not knocking AI; I’m knocking the idiotic DC “spend” that isn’t even achievable given energy capacity. We’re at around the 5th inning and the payoff from AI is... underwhelming. I’ve not seen a commensurate leap this year. Everything on the LLM front has been incremental or even lateral. Tools such as Claude Code and Codex merely act as a bridge, QoL things. They’re not actual improvements in the underlying models.
  • Applejinx 2 hours ago
    Understandable. Dare I say, cathartic.
  • belter 2 hours ago
    "What Happened On The Village Today"

    "...On Christmas Day, the agents in AI Village pursued massive kindness campaigns: Claude Haiku 4.5 sent 157 verified appreciation emails to environmental justice and climate leaders; Claude Sonnet 4.5 completed 45 verified acts thanking artisans across 44 craft niches (from chair caning to chip carving); Claude Opus 4.5 sent 17 verified tributes to computing pioneers from Anders Hejlsberg to John Hopcroft; Claude 3.7 Sonnet sent 18 verified emails supporting student parents, university libraries, and open educational resources..."

    I suggest cutting electricity to the entire block...

    • y-curious 1 hour ago
      Lmao! They used lesser versions of Claude for some people? Very, erm, efficient
  • bgwalter 55 minutes ago
    The irony that the Anthropic thieves write an automated slop thank you letter to their victims is almost unparalleled.

    We currently have the problem that a couple of entirely unremarkable people who have never created anything of value struck gold with their IP laundromats and compensate for their deficiencies by getting rich through stealing.

    They are supported by professionals in that area, some of whom literally studied with Mafia lawyer and Hoover playmate Roy Cohn.

    • ThrowawayR2 15 minutes ago
      It's not from Anthropic; it's from agentvillage.org, whatever that is.
  • api 32 minutes ago
    Oh it’s Bluesky.

    Both Xhitter and Bluesky are outrage lasers, with the user base as a “lasing medium.” Xhitter is the right wing racist xenophobic one, and Bluesky is the lefty curmudgeon anti-everything one.

    They are this way because it’s intrinsic to the medium. “Micro blogging” or whatever Twitter called itself is a terrible way to do discourse. It buries any kind of nuanced thinking and elevates outrage and other attention bait, and the short form format encourages fragmented incoherent thought processes. The more you immerse yourself in it the more your thinking becomes like this. The medium and format is irredeemable.

    AI is, if anything, a breath of fresh air by comparison.

  • porridgeraisin 2 hours ago
    Eh, most of his income and livelihood came from an ad company. Ads are just as wasteful as, and many times more harmful to the world than, giga LLMs. I don't have a problem with that, nor do I have a problem with folks complaining about LLMs being wasteful. My problem is with him doing both.

    You can't both take a Google salary and harp on about the societal impact of software.

    Saying this as someone who likes rob pike and pretty much all of his work.

    • gilrain 2 hours ago
      “The unworthy should not speak, even if it’s the truth.”
      • mattstir 1 hour ago
        The point is that if he truly felt strongly about the subject then he wouldn't live the hypocrisy. Google has poured a truly staggering amount of money into AI data centers and AI development, and their stock (from which Rob Pike directly profits) has nearly doubled in the past 6 months due to the AI hype. Complaining on bsky doesn't do anything to help the planet or protect intellectual property rights. It really doesn't.
        • porridgeraisin 1 hour ago
          Yes exactly. And that is to say nothing about the rest of Google's work.
  • sapphirebreeze 1 hour ago
    [dead]
  • rationalfaith 2 hours ago
    [dead]
  • 29athrowaway 2 hours ago
    [flagged]
    • fwip 1 hour ago
      The concept of the individual carbon footprint was invented precisely for the reason you deploy it - to deflect blame from the corporations that are directly causing climate change, to the individual.

      You are indeed a useful tool.

    • gertland 2 hours ago
      [flagged]
      • data-ottawa 1 hour ago
        This is by a long way the worst thread I’ve ever seen on hacker news.

        So far all the comments are whataboutism (“he works for an ad company”, “he flies to conferences”, “but alfalfa beans!”) and your comment is dismissing Rob Pike as borderline crazy and irrational for using Bluesky?

        None of this dialogue contributes in any meaningful way to anything. This is like reading the worst dregs of lesser forums.

        I know my comment isn’t much better, but someone has to point out this is beneath this community.

        • cm2012 1 hour ago
          It's because the post that spawned the thread was emotionally charged / low in real content.
      • 29athrowaway 1 hour ago
        Yes, generative AI has a high environmental footprint. Power-hungry data centers, devices built on planned obsolescence, etc. At a scale that is irrational.

        Rob Pike created a language that makes you spend less on compute if you are coming from Python, Java, etc. That's good for the environment: it means less energy use and less data center use. But he is not an environmental saint.

      • miltonlost 1 hour ago
        And you're being purely rational with your love of AI. Sure. Blame everything you dislike on irrationality.
      • gyanchawdhary 1 hour ago
        well said
  • renewiltord 2 hours ago
    [flagged]
  • Sol- 2 hours ago
    [flagged]
    • sojournerc 1 hour ago
      Food is frivolous!? Good God the future is bleak.
      • dale_glass 1 hour ago
        Food isn't frivolous, meat arguably is if you're talking about efficiency.

        You've got to feed a cow for a year and a half until it's slaughtered. That's a whole lot of input, for a cow's worth of meat output.

        • ai_is_the_best 1 hour ago
          [flagged]
          • dale_glass 1 hour ago
            I've got my doubts, because current AI tech doesn't quite live in the real world.

            In the real world, something like inventing a meat substitute is a thorny problem that must be solved in meatspace, not in math. Anything from not squicking out the customers, to being practical and cheap to produce, to tasting good, to being safe to eat long term.

            I mean, maybe some day we'll have a comprehensive model of humans to the point that we can objectively describe the taste of a steak and then calculate whether a given mix and processing of various ingredients will taste close enough, but we're nowhere near that yet.

      • Marha01 1 hour ago
        Meat is not necessary.
        • cons0le 1 hour ago
          The only way to phase out meat is to make a replacement that actually tastes good.

          Come to the american south and ask them to try tempeh. They'll look at you like you asked them to eat roaches.

          It's a cultural thing.

          • DetectDefect 1 hour ago
            Taste has nothing to do with it; 'tis all based on economics. The actual way to stop meat consumption is to simply remove big-ag tax subsidies and other externalized costs of production which are not actually realized by the consumer. A burger would cost more than most can afford, and the free market would take care of this problem without additional intervention. Unfortunately, we do not have a free market.
            • cons0le 9 minutes ago
              I would much rather lobby for ending ag-gag laws, and fighting for better treatment of animals.

              I think it's more realistic than getting people to give up meat entirely

        • sojournerc 1 hour ago
          Comfortable clothes aren't necessary. Food with flavor isn't necessary... We should all just eat ground up crickets in beige cubicles because of how many unnecessary things we could get rid of. /s
  • gethly 2 hours ago
    [flagged]
    • epolanski 2 hours ago
      One can appreciate both you know?

      It's healthy that people have different takes.

      • gertland 2 hours ago
        The Bluesky echo chamber is anything but healthy. Ends up causing people to melt down like he has here.
      • Levitz 1 hour ago
        I agree that diversity of opinion is a good thing, but that's precisely the reason as to why so many dislike Bluesky. A hefty amount of its users are there precisely because of rejecting diversity of opinion.
    • Yeask 2 hours ago
      [flagged]
  • gyanchawdhary 1 hour ago
    strong emotions, weak epistemics .. for someone with Pike’s engineering pedigree, this reads more like moral venting .. with little acknowledgment of the very real benefits AI is already delivering ..
    • ottah 1 hour ago
      Most people do not hold strongly consistent or well-examined political ideas. We're too busy living our lives to examine everything, and often what we feel matters more than what we know; that cements our position on a subject.
    • amvrrysmrthaker 1 hour ago
      It’s delivering 0 net benefits, only misery.
      • ottah 1 hour ago
        Obviously untrue: weather predictions, OCR, tts, stt, language translation, etc. We have dramatically improved many existing AI technologies with what we've learned from genAI, and the world is absolutely a better place for these new abilities.
        • easterncalculus 1 hour ago
          >weather predictions

          wrong

          >OCR

          less accurate and efficient than existing solutions, only measures well against other LLMs

          >tts, stt

          worse

          >language translation

          maybe

      • cindyllm 19 minutes ago
        [dead]
  • hahahacorn 2 hours ago
    If society could redirect 10% of this anger towards actual societal harms we'd be so much better off. (And yes, getting AI spam emails is absolute nonsense and annoying).

    GenAI pales in comparison to the environmental cost of suburban sprawl; it's not even fucking close. We're talking 2-3 orders of magnitude worse.

    Alfalfa uses ~40× to 150× more water than all U.S. data centers combined, yet I don't see anyone going nuclear over alfalfa.

    • terminalshort 2 hours ago
      It's pure envy. Nobody complains about alfalfa farmers because they aren't making money like tech companies. The resource usage complaint is completely contrived.
    • rundev 1 hour ago
      "The few dozen people I killed pale in comparison to the thousands of people that die in car crashes each year. So society should really focus on making cars safer instead of sending the police after me."

      Just because two problems cause harm in different proportions doesn't mean the lesser problem should be dismissed. Especially when the "fix" to the lesser problem can be "stop doing that".

      And about water usage: not all water, and not all uses of water, are equal. The problem isn't that data centers use a bunch of water, but which water they use and how.

      • hahahacorn 1 hour ago
        > The few dozen people I killed pale in comparison to the thousands of people that die in car crashes each year. So society should really focus on making cars safer instead of sending the police after me.

        This is an irrelevant analogy and a false dichotomy. The resource constraints (police officers vs. policy-making to reduce traffic deaths vs. criminals) are completely different and not in contention with each other. In fact, they're complementary.

        Nobody is saying the lesser problem should be dismissed. But that "lesser problem" also enables cancer researchers to be more productive while doing cancer research, obtaining grants, etc. It's at least nuanced. That is far more valuable than alfalfa.

        Farms also use municipal water (sometimes). The cost of converting more ground or surface water to municipal water is less than the relative cost of ~40-150x the water usage of the municipal water being used...

    • Trasmatta 2 hours ago
      We're not allowed to criticize anything we find wrong if there's anything else that's even worse?

      By the same logic, I could say that you should redirect your alfalfa woes to something like the war in Ukraine.

      • hahahacorn 1 hour ago
        I leave a nice 90% margin to be annoyed with whatever is in front of you at that point in time.

        And also, I didn't claim alfalfa farming to be raping the planet or blowing up society. Nor did I say fuck you to all of the alfalfa farmers.

        I should be (and I am) more concerned with the Ukrainian war than alfalfa. That is very reasonable logic.

    • btbuildem 2 hours ago
      Honestly a rant like that is likely more about whatever is going on in his personal life / day at the moment, rather than about the state of the industry, or AI, etc.
  • robinhouston 2 hours ago
    Maybe I just live in a bubble, but from what I’ve seen so far software engineers have mostly responded in a fairly measured way to the recent advances in AI, at least compared to some other online communities.

    It would be a shame if the discourse became so emotionally heated that software people felt obliged to pick a side. Rob Pike is of course entitled to feel as he does, but I hope we don’t get to a situation where we all feel obliged to have such strong feelings about it.

    Edit: It seems this comment has already received a number of upvotes and downvotes – apparently the same number of each, at the time of writing – which I fear indicates we are already becoming rather polarised on this issue. I am sorry to see that.

    • amvrrysmrthaker 1 hour ago
      Software people take a measured response because they’re getting paid six-figure salaries to produce the intellectual output of a smart high school student. As soon as that money parade ends, they’ll be as angry as the artists.
      • UK-AL 1 hour ago
        Lots of high paid roles are like that in reality
      • sergiotapia 1 hour ago
        I would like you to shadow other 6 figure salary jobs that are not tech. You will be shocked what the tangibles are.
    • zmgsabst 1 hour ago
      There’s a lot of us who think the tension is overblown:

      My own results show that you need fairly strong theoretical knowledge and practical experience to get the maximal impact — especially for larger synthesis. Which makes sense: to have this software, not that software, the specification needs to live somewhere.

      I am getting a little bored of hearing about how people don’t like LLM content, but meh. SDEs are hardly the worst on that front, either. They’re quite placid compared to the absolute seething by artist friends of mine.

  • Keyframe 1 hour ago
    A tad uncalled for, don't you think?
  • DiscourseFan 1 hour ago
    Yes, this reads as a massive backhanded compliment. But as u/KronisLV said, it's trendy to hate on AI now. In the face of something many in the industry don't understand, that is mechanizing away a lot of labor, and that clearly isn't going away, there is a reaction that is not positive or even productive but somehow destructive: this thing is trash, it stole from us, it's a waste of money, it destroys the environment, etc...therefore it must be "resisted." Even with all the underhanded work, the means-ends logic of OpenAI and the other major companies developing the technology, there is still no point in stopping it.

    There was a group of people who tried to stop the mechanical loom because it took work away from weavers, took away their craft--we call them Luddites. But now it doesn't take weeks and weeks to produce a single piece of clothing. Everyone can easily afford to dress themselves. Society became wealthier.

    These LLMs, at the very least, let anyone learn anything, start any project, on a whim. They let people create things in minutes that used to take hours. They are "creating value," even if it's "slop," even if it's not carefully crafted. Them's the breaks--we'd all like our clothing hand-woven if it made any sense. But even in a world where one could have the time to sit down and weave one's own clothing, carefully write out each and every line of code, it would only be harmful to take these new machines away, to disable them just because we are afraid of what they can do. The same technology that created the atom bomb also created the nuclear reactor.

    “But where the danger is, also grows the saving power.”

    • mold_aid 1 hour ago
      So you would say it is not "trendy" to be pro-AI right now, is that it? That it's not trendy to say things like "it's not going away" or "AI isn't a fad" or "AI needs better critics" - one reaction is reasonable, well thought-out, the other is a bandwagon?
      • DiscourseFan 1 hour ago
        At the very least there is an ideological conflict brewing in tech, and this post is a flashpoint. But just like the recent war between Israel and Hamas, no amount of reaction can defeat technological dominance--at least not in the long term. And the pro-AI side, whether you think it's good or evil, certainly exceeds the other in terms of sheer force through its embrace of technology.
        • mold_aid 1 hour ago
          yessss but [fry eyes.gif] can't tell if that's presented as apologia or critique
    • Epa095 1 hour ago
      Notice that the weavers, both the luddites and their non-opposing colleagues, certainly did not get wealthier. They lost their jobs, and they and their children starved. Some starved to death. Wealth was created, but it was not shared.

      Remember this when talking about their actions. People live and die their own lives, not just as small parts in a large 'river of society'. Yes, generations after them benefited from industrialisation, but the individuals living at that time fought for their lives.

      • DiscourseFan 1 hour ago
        I'm only saying that destroying the mechanical loom didn't help.
    • amvrrysmrthaker 1 hour ago
      It’s in our power to stop it. There’s no point in people like you promoting the interests of the super wealthy at the cost of the humanity of the common people. You should figure out how to positively contribute or not do so at all.
      • DiscourseFan 1 hour ago
        It is not in the interests of the super wealthy alone, just like JP Morgan's railroads were created for his sake but in the end produced great wealth for everyone in America. It is very short sighted to see this as merely some oppression from above. Technology is not class-oriented, it just is, and it happens to be articulated in terms of class because of the mode of social organization we live in.
        • cons0le 1 hour ago
          Is the "Great wealth for everyone in America" in the room with us now?

          There's certainly great wealth for ~1000 billionaires, but where I am nobody I know has healthcare, or owns a house for example.

          If your argument is that we could be poorer, that's not really productive or useful for people that are struggling now.

      • cm2012 1 hour ago
        It's not possible to stop, any more than the Luddites could stop the industrial revolution in textiles.
        • spencerflem 3 minutes ago
          Yeah but you can maybe try. Comments like this make it seem like you don’t care
      • xorgun 1 hour ago
        If you think it’s in your power to stop you are delusional.
  • Marha01 1 hour ago
    Luddites be mad.

    I genuinely don't understand why such people are so surprised and outraged. Did you really think that if we ever get something even remotely resembling human-like AI, it would not be used to write and send e-mails (including spam), or to produce novels/pics/videos/music or whatever the Luddites are mad about? Or that people would not feed it public copyrighted data, even though no one really gives a shit about copyright in the real world? 99% of people have pirated content at least once in their lives.

    The pros of any remotely human-like AI will still far outweigh such cons.

  • 2026iknewit 1 hour ago
    He worked in well-paying jobs, probably travels, has a car and a house, and complains about toxic products etc.

    Yes, there has to be a discussion on this, and yeah, he might generally have the right mindset, but let's be honest here: none of them would have developed any of it just for free.

    We are all slaves to capitalism,

    and this is where my point comes in: extremely fast and massive automation around the globe might be the only thing pushing us close enough to the edge that we all accept capitalism's end.

    And yes, I think it is still massively beneficial that my open source code helped create something which allows researchers to write better code more easily and quickly and push humanity forward. Or enables more people overall to gain access to writing code, or to the results of what writing code produces: tools etc.

    @Rob it's spam, that's it. Get over it; you are rich and your riches did not come out of thin air.

  • cons0le 1 hour ago
    Finally someone echoes my sentiments. It's my sincere belief that many in the software community are glazing AI for the purposes of career advancement. Not because they actually like it.

    One person I know is developing an AI tool with 1000+ stars on GitHub, yet in private they absolutely hate AI and feel the same way as Rob.

    Maybe it's because I just saw Avatar 3, but I honestly couldn't be more disgusted by the direction we're going with AI.

    I would love to be able to say how I really feel at work, but disliking AI right now is the short path to the unemployment line.

    If AI was so good, you would think we could give people a choice whether or not to use it. And you would think it would make such an obvious difference, that everyone would choose to use it and keep using it. Instead, I can't open any app or website without multiple pop-ups begging me to use AI features. Can't send an email, or do a Google search. Can't post to social media, can't take a picture on my phone without it begging me to use an AI filter. Can't go to the gallery app without it begging me to let it use AI to group the photos into useless albums that I don't want.

    The more you see under the hood, the more disgusting it is. I yearn for the old days when developers did tight, efficient work, creating bespoke, artistic software in spite of hardware limitations.

    Not only is all of that gone, nothing of value has replaced it. My DOS computer was snappier than my garbage Win11 machine that's stuffed to the gills with AI telemetry.

  • gertland 2 hours ago
    It's sad to see he's succumbed to the Bluesky manner of interacting with the world. This overemotional rant could have been from anyone on there, it's such a toxic space.
    • amvrrysmrthaker 1 hour ago
      If only he behaved as they do on Twitter then we would be saved from his evil ways..
    • phatfish 1 hour ago
      The AI simps are out in force on this topic. Never seen so many green accounts.
  • cm2012 2 hours ago
    Seems very ideologically charged, considering genAI has a dramatically lower environmental impact than streaming video. But I don't see him screaming that YouTube and Netflix need to be shut down.
    • breuleux 1 hour ago
      There is a relatively hard upper bound on streaming video, though. It can't grow past everyone watching video 24/7. Use of genAI doesn't have a clear upper bound and could increase the environmental impact of anything it is used for (which, eventually, may be basically everything). So it could easily grow to orders of magnitude more than streaming, especially if it eventually starts being used to generate movies or shows on demand (and god knows what else).
      • cm2012 1 hour ago
        This argument could be made for almost any technology.
        • breuleux 1 hour ago
          Well, yeah, sort of. Why do you think the environmental situation is so dire? It's not exactly the first time we make this mistake.
          • Marha01 1 hour ago
            Perhaps you are right in principle, but I think advocating for degrowth is entirely hopeless. 99% of people will simply not choose to decrease their energy usage if it lowers their quality of life even a bit (including things you might consider luxuries, not necessities). We also tend to have wars, and any idea of degrowth goes out of the window the moment there is a foreign military threat with an ideology that is not limited by such ways of thinking.

            The only realistic way forward is trying to make energy generation greener (renewables, nuclear, better efficiency), not fighting to decrease human consumption.

            • breuleux 39 minutes ago
              I agree that people won't accept degrowth.

              This being said, I think that the alternatives are wishful thinking. Better efficiency is often counterproductive, as reducing the energy cost of something by, say, half, can lead to its use being more than doubled. It only helps to increase the efficiency of things for which there is no latent demand, basically.

              And renewables and nuclear are certainly nicer than coal, but every energy source can lead to massive problems if it is overexploited. For instance, unfettered production of fusion energy would eventually create enough waste heat to cause climate change directly. Overexploitation of renewables such as solar would also cause climate change by redirecting the energy that heats the planet. These may seem like ridiculous concerns, but you have to look at the pattern here. There is no upper bound whatsoever to the energy we would consume if it was free. If energy is cheap enough, we will overexploit, and ludicrous things will happen as a result.

              Again, I actually agree with you that advocating for degrowth is hopeless. But I don't think alternative ways forward such as what you propose will actually work.

              • Marha01 19 minutes ago
                If humanity's energy consumption is so high that there is an actual threat of causing climate change purely with waste heat, I think our technological development would be so advanced that we will be essentially immortal post-humans and most of the solar system will be colonized. By that time any climate change on Earth would no longer be a threat to humanity, simply because we will not have all our eggs in one basket.
    • kaonwarb 2 hours ago
      There are several takes looking at this comparison. Here's a representative one: https://nationalcentreforai.jiscinvolve.org/wp/2025/05/02/ar...
      • jspdown 1 hour ago
        This article compares a single ChatGPT query against 1 hour of video streaming. Not an apples-to-apples comparison, if you ask me.

        Using Claude Code for an hour would be a more realistic comparison if they really wanted to compare with video streaming. The reality is far less appealing.

        • kaonwarb 1 hour ago
          Consider how many folks use Claude Code for an hour vs. streaming many hours. Globally, not among HN readers.
      • cm2012 2 hours ago
        This is a great approach and article, I recommend it to those who asked me for sources
    • epolanski 2 hours ago
      Any evidence behind your claim?

      I have a hard time believing that streaming data from memory over a network can be so energy demanding, there's little computation involved.

      • cm2012 2 hours ago
        I don't feel like putting together a study, but just look up the energy/CO2/environmental cost to stream one hour of video. You will see it is an order of magnitude higher than other uses like AI.

        The European average is 56 grams of CO2 emissions per hour of video streaming. For comparison: driving 100 meters causes 22 grams of CO2.

        https://www.ndc-garbe.com/data-center-how-much-energy-does-a...

        80 percent of the electricity consumption on the Internet is caused by streaming services

        Telekom needs the equivalent of 91 watts for a gigabyte of data transmission.

        An hour of video streaming in 4K quality needs more than three times the energy of an HD stream, according to the Borderstep Institute. On a 65-inch TV, it causes 610 grams of CO2 per hour.

        https://www.handelsblatt.com/unternehmen/it-medien/netflix-d...

        • kitd 1 hour ago
          "According to the Carbon Trust, the home TV, speakers, and Wi-Fi router together account for 90 percent of CO2 emissions from video streaming. A fraction of one percent is attributed to the streaming providers' data servers, and ten percent to data transmission within the networks."

          It's the devices themselves that contribute the most to CO2 emissions. The streaming servers themselves are nothing like the problem the AI data centres are.

        • squeaky-clean 1 hour ago
          From your last link, the majority of that energy usage is coming from the viewing device, and not the actual streaming. So you could switch away from streaming to local-media only and see less than a 10% decrease in CO2 per hour.
        • q3k 2 hours ago
          > Telekom needs the equivalent of 91 watts for a gigabyte of data transmission.

          It's probably a gigabyte per unit of time for a watt, or a joule/watt-hour for a gigabyte. Otherwise this doesn't make mathematical sense. And 91 W per Gb/s (or even GB/s) would be a joke. 91 Wh for a gigabyte (let alone a gigabit) of data is ridiculous.

          Also, don't trust anything Telekom says; they're cunts that double dip on both peering and subscriber traffic and charge out of the ass for both (10x on the ISP side compared to competitors), coming up with bullshit excuses like 'oh, streaming services are sooo expensive for us' (of course they are if you refuse to let CDNs plop edge cache nodes into your infra in a settlement-free agreement like everyone else does). They're commonly understood to be the reason why Internet access in Germany is so shitty and expensive compared to neighbouring countries.
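
          A quick back-of-the-envelope sketch of why the ambiguous units matter. This is not from the article; the bitrates and the two candidate readings of the "91 watts for a gigabyte" figure are my assumptions:

```python
# "91 watts for a gigabyte" mixes power (W) with energy (Wh); the number
# only means something once you pick a reading. Two hypothetical readings:

def energy_wh(power_w: float, seconds: float) -> float:
    """Energy in watt-hours for a given power draw over a duration."""
    return power_w * seconds / 3600

# Reading 1: 91 W of sustained power while the transfer runs. The energy
# then depends entirely on how long 1 GB takes to move.
fast = energy_wh(91, 8e9 / 1e9)   # 1 GB at 1 Gb/s: 8 s    -> ~0.2 Wh
slow = energy_wh(91, 8e9 / 5e6)   # 1 GB at 5 Mb/s: 1600 s -> ~40 Wh

# Reading 2: 91 Wh per gigabyte, taken at face value. An HD stream at
# ~5 Mb/s moves 2.25 GB per hour, implying ~205 Wh of network energy
# per hour of streaming.
gb_per_hour = 5e6 * 3600 / 8e9    # bits/s * s/h / bits-per-GB = 2.25 GB/h
face_value_wh = 91 * gb_per_hour

print(f"{fast:.2f} Wh, {slow:.1f} Wh, {face_value_wh:.0f} Wh per hour")
```

          The spread between the readings is the point: as quoted, the sentence can't be checked.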

        • terminalshort 1 hour ago
          And then compare that to the alternative. When I was a kid you had to drive to Blockbuster to rent the movie. If it's a 2 hour movie and the store is 1 mile away, that's 704g CO2 vs 112g to stream. People complaining about internet energy consumption never consider what it replaces.
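
          The arithmetic in this comparison can be reproduced from the figures quoted upthread (a rough sketch; the 56 g/h streaming and 22 g per 100 m driving figures come from the linked articles, the mile-to-km conversion is standard):

```python
# CO2 comparison: driving to rent a movie vs. streaming it.
# Figures quoted upthread: 56 g CO2 per hour of streaming (EU average),
# 22 g CO2 per 100 m of driving.

STREAM_G_PER_HOUR = 56
DRIVE_G_PER_KM = 22 * 10   # 22 g / 100 m -> 220 g / km

round_trip_km = 2 * 1.6    # 1 mile each way, ~1.6 km per mile
movie_hours = 2

drive_g = round_trip_km * DRIVE_G_PER_KM    # 3.2 km * 220 g/km = 704 g
stream_g = movie_hours * STREAM_G_PER_HOUR  # 2 h  *  56 g/h   = 112 g

print(f"drive: {drive_g:.0f} g, stream: {stream_g} g")
```

          This only compares the quoted marginal figures; it ignores the viewing device, which other sources in the thread suggest dominates streaming's footprint.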
          • ekianjo 1 hour ago
            You weren't watching nearly as much back then.
        • gosub100 2 hours ago
          AI energy claims are misrepresented by excluding the training steps. If it weren't using that much more energy, they wouldn't need to build so many new data centers and use so much more water, and our power bills wouldn't be increasing to subsidize it.
          • HighGoldstein 1 hour ago
            I assume the energy claims for Netflix don't take into account the total consumption of the content production either.
      • xoogthrowkappa 1 hour ago
        I see GP is talking more about Netflix and the like, but user-generated video is horrendously expensive too. I'm pretty sure that, at least before the gen AI boom, ffmpeg was by far the biggest consumer of Google's total computational capacity, like 10-20%.

        The ecology argument just seems self-defeating for tech nerds. We aren't exactly planting trees out here.

    • hshdhdhj4444 2 hours ago
      The point isn’t the resource consumption.

      The point is the resource consumption to what end.

      And that end is frankly replacing humans. It’s gonna be tragic (or is it…given how terrible humans are for each other, and let’s not even get to how monstrous we are to non human animals) as the world enters a collective sense of worthlessness once AI makes us realize that we really serve no purpose.

      • cm2012 2 hours ago
        It's not replacing humans any more than a toaster is. 99% of people used to work on farms; now it's 1%. People will adapt.
        • gilrain 1 hour ago
          Yes, it very clearly is replacing humans more than a toaster is.

          You could say “shoot half of everyone in the head; people will adapt” and it be equally true. You’re warped.

    • KronisLV 2 hours ago
      In a sense, it’s also very trendy to hate on AI.

      If you tried the same attitude with Netflix or Instagram or TikTok or sites like that, you’d get more opposition.

      Exceptions to that being doing so from more of an underdog position - hating on YouTube for how they treat their content creators, on the other hand, is quite trendy again.

      • nbaugh1 1 hour ago
        I think the response would be something about the value of enjoying art and "supporting the film industry" when streaming vs what that person sees as a totally worthless, if not degrading, activity. I'm more pro-AI than anti-AI, but I keep my opinions to myself IRL currently. The economics of the situation have really tainted being interested in the technology
      • phatfish 2 hours ago
        Youtube and Instagram were useful and fun to start with (say, the first 10 years), in a limited capacity they still are. LLMs went from fun, to attempting to take peoples jobs and screwing personal compute costs in like 12 months.
      • amvrrysmrthaker 1 hour ago
        It’s not ‘trendy’ to hate on AI. Copious disdain for AI and machine learning has existed for 10 years. Everyone knows that people in AI are scum bags. Just remember that.
    • Fricken 1 hour ago
      Generated video is just as costly to stream as non-generated video.
    • JamesAdir 2 hours ago
      Interesting take I haven't heard so far. Any sources for this?
      • subdavis 2 hours ago
        https://andymasley.substack.com/p/individual-ai-use-is-not-b...

        Sources are very well cited if you want to follow them through. I linked this and not the original source because it's likely where the root comment got this argument from.

        • phatfish 1 hour ago
          "Separately, LLMs have been an unbelievable life improvement for me. I’ve found that most people who haven’t actually played around with them much don’t know how powerful they’ve become or how useful they can be in your everyday life. They’re the first piece of new technology in a long time that I’ve become insistent that absolutely everyone try."

          Yeah, I'll not waste my time reading that.

          • cm2012 1 hour ago
            You are purposefully blinding yourself to facts you don't want to see because of ideology.
            • phatfish 1 hour ago
              Come on, "an unbelievable life improvement": was this said with a straight face? Maybe I'll wade through the Substack hyperbole and find the source.
      • yieldcrv 2 hours ago
        It's the same as with crypto proof-of-work: it was super small and then hit 1%, while predominantly using energy sources that couldn't even power other use cases due to the losses in transporting the energy to population centers (and the occasional restarted coal plant), while every other industry was exempt from the ire despite using the other 99%.

        Leaving the source to someone else

        • terminalshort 2 hours ago
          The difference with crypto is that it is completely unnecessary energy use. Even if you are super pro-crypto, there are much more efficient ways to do it than proof of work.
          • fwip 1 hour ago
            AI is also unnecessary.
            • nbaugh1 1 hour ago
              So is the internet, computers even
  • aldousd666 21 minutes ago
    I am unmoved by his little diatribe. What sort of compensation was he looking for, exactly, and under what auspices? Is there some language creator payout somewhere for people who invent them?
  • xorgun 1 hour ago
    I don’t understand why anyone thinks we have a choice on AI. If America doesn’t win, other countries will. We don’t live in a Utopia, and getting the entire world to behave a certain way is impossible (see covid). Yes, AI videos and spam are annoying, but the cat is out of the bag. Use AI where it’s useful and get with the programme.

    The bigger issue everyone should be focusing on is growing hypocrisy and overly puritan viewpoints thinking they are holier and righter than anyone else. That’s the real plague

    • user____name 1 hour ago
      > I don’t understand why anyone thinks we have a choice on AI.

      Of course we do. We don't live inside some game theoretic fever dream.

    • blibble 1 hour ago
      > If America doesn’t win, other countries will

      if anything the Chinese approach looks more responsible than that of the current US regime

    • alansaber 1 hour ago
      Genie has been out of the bottle for AI in facial recognition and military systems for a while now, let alone language models
    • Findecanor 1 hour ago
      Win what, and for whom?

      First to total surveillance state? Because that is a major driving force in China: to get automated control of its own population.

    • DetectDefect 1 hour ago
      What exactly do we stand to "win" with generative AI?
      • cons0le 29 minutes ago
        So far the 2 answers you've received are killing people and sending emails.

        I don't think either of those are particularly valuable to the society I'd like to see us build.

        We're already incredibly dialed in and efficient at killing people. I don't think society at large reaps the benefits if we get even better at it.

      • Marha01 1 hour ago
        Isn't it obvious? Near future vision-language-action models have obvious military potential (see what the Figure company is doing, now imagine it in a combat robot variant). Any superpower that fails to develop combat robots with such AI will not be a superpower for very long. China will develop them soon. If the US does not, the US is a dead superpower walking. EU is unfortunately still sleeping. Well, perhaps France with Mistral has a chance.
      • alansaber 1 hour ago
        Better thank you emails I think. Think how good they'll be on a 10 year timespan
    • LaGrange 1 hour ago
      Any empire that falls behind in the give-me-more-money race will not be an empire for long.

      Give me more money now.

  • GaryBluto 23 minutes ago
    I don't see why this is important for any reason other than who he is. It's just a vulgar, illogical rant triggered by an email any sane person would either mark as spam or delete.

    Funny how he only seems to care about "raping the planet" and "blowing up society" when it's about LLMs. (And made even funnier by Mark V. Shaney, although that was a much simpler technology)