3 comments

  • sjdv1982 3 hours ago
    Interesting to hear the industrial SWE perspective, it is very different.

    I am a scientific research engineer (bioinformatics), and here no one cares much about covering all the possible code paths.

    What we care about is if the code computes "the correct thing", i.e. that it represents the underlying science.

    No such guarantee with LLMs. But no such guarantee without LLMs, either (the "code growing above our heads" has happened already, a long time ago). Still, I would say that LLMs are a big net positive for us: they are better at checking such things than we are.

  • aurareturn 1 hour ago
    For some context on this sub, r/BetterOffline: they follow Ed Zitron, who is an AI denier, AI skeptic, or whatever you want to call him. He's on the extreme end.

    They basically deny everything positive AI brings. If AI cures cancer, they'll say AI hasn't cured aging yet, so it's still useless. If AI solves a math conjecture, they'll say AI hallucinated that one time, so we can't trust it. The goalposts keep moving. It's the opposite of r/accelerate.

    When I read the sub, I can't help but get a cult-following feeling. They'll twist facts and bend them to fit their beliefs. It's not much different from people who seriously think the Earth is flat, in my opinion.

    • MultifokalHirn 28 minutes ago
      thank you for the context info - would have had no idea