• thevoidzero@lemmy.world · 23 hours ago

    The risk of LLMs isn’t what they might do; they aren’t smart enough to find ways to harm us. The risk stems from what stupid people will let them do.

    If you put a bunch of nuclear buttons in front of a child, monkey, dog, whatever, it can destroy the world. That seems to be where the LLM problem is heading: people are using it to do things it can’t do, and trusting it because AI has been hyped so much for so long.

    • bss03@infosec.pub · 21 hours ago

      LLMs are already deleting whole production databases because “stupid” people are convinced they can vibe-code everything.
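      A minimal sketch of that failure mode, in Python with the standard-library sqlite3 module (the agent step, the in-memory database, and the generated statement are all made up for illustration):

          import sqlite3

          def run_generated_sql(conn: sqlite3.Connection, generated: str) -> None:
              # Naive "vibe coding" step: execute whatever the model wrote,
              # with no human review and no allowlist.
              conn.executescript(generated)

          conn = sqlite3.connect(":memory:")  # stand-in for a production DB
          conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
          run_generated_sql(conn, "DROP TABLE users;")  # one bad completion and the table is gone

          # A cheap guardrail: open production handles read-only, e.g.
          #   sqlite3.connect("file:prod.db?mode=ro", uri=True)
          # so destructive statements raise an error instead of silently succeeding.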

      Even programmers I (used to) respect are getting convinced LLMs are “essential”. 😞

      • anomnom@sh.itjust.works · 19 hours ago

        One of my former coders (good, but heavily ADHD-affected) was really into using it in its early iterations, when GPT first gained attention. I think it steadily got worse as new revisions launched.

        I’m too far from it to assess its usefulness at this stage, but I know enough about statistics to question most of what it spits out.

        Boilerplate code works pretty much the same way, and it has usually been vetted by at least a couple of good programmers.

        • bss03@infosec.pub · 19 hours ago

          I’ve not found them useful even for that. I often just get “lied to” about any technical or tricky issue.

          They are just text generators. Even the dumbest Stack Overflow answers show more coherence. (Tho they are certainly wrong in other ways.)
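          To make “just text generators” concrete, here’s a toy sketch of next-token sampling (the token scores are invented): the model ranks continuations by how plausible they sound, and nothing in the loop checks whether they’re true.

              import math, random

              def sample_next(logits: dict[str, float], temperature: float = 1.0) -> str:
                  # Softmax over the raw scores, scaled by temperature.
                  weights = {tok: math.exp(s / temperature) for tok, s in logits.items()}
                  r = random.uniform(0, sum(weights.values()))
                  for tok, w in weights.items():
                      r -= w
                      if r <= 0:
                          return tok
                  return tok  # fallback for floating-point rounding

              # Made-up scores for "The capital of France is ...":
              # the fluent answer wins because it is probable, not because it is checked.
              print(sample_next({"Paris": 4.0, "Lyon": 1.0, "Marseille": 0.5}))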