Who are these people? This is ridiculous. :)

I guess with so many humans, there are bound to be a small number of people who have no ability to think for themselves and believe everything a chatbot writes in their web browser.

People even have romantic relationships with these things.

I don’t agree with the argument that ChatGPT should “push back”. They have an example in the article where a guy asked for tall bridges to jump from, and ChatGPT listed them, of course.

Are we expecting the LLM to act like a psychologist, evaluating whether the user’s state of mind is healthy before answering questions?

Very slippery slope if you ask me.

  • chunes@lemmy.world · 4 hours ago

    ffs, this isn’t chatgpt causing psychosis. It’s schizo people being attracted like moths to chatgpt because it’s very good at conversing in schizo.

  • aceshigh@lemmy.world · 5 hours ago

    ChatGPT is phenomenal at coming up with ideas to test out. Good critical thinking is necessary, though… I’ve actually been able to make a lot of headway with a project I’ve been working on, because when I get stuck emotionally, I can talk to ChatGPT and it gets me through it, because it knows how I think and work best. It’s scary how well it knows me… and I’m concerned about propaganda… but it’s everywhere.

  • RheumatoidArthritis@mander.xyz · 7 hours ago

    I know a guy who has all kinds of theories about sentient life in the universe, but no one to talk to about them. That’s because they’re pretty obvious to anyone who took a philosophy class, and too out there for people who aren’t interested in such discussions. I tried to be a conversation partner for him, but it always ended in awkward silence on my part and a monologue on his side at some point.

    So he finally found a sentient being who always knows what to answer, in the form of ChatGPT, and now they develop his ideas together. I don’t think it’s bad for him overall, but the last report I got from his conversations with the superbeing was that it told him to write a book about it because he’s full of innovative ideas. I hope he lacks the persistence to actually write one.

  • TimewornTraveler@lemmy.dbzer0.com · 3 hours ago

    hi, they’re going to be in psychosis regardless of what LLMs do. they aren’t therapists and mustn’t be treated as such. that goes for you too

  • muusemuuse@sh.itjust.works · 11 hours ago

    I use ChatGPT to kind of organize and sift through some of my own thoughts. It’s helpful if you are working on something and need to inject a simple “what if” into the thought process. It’s honestly great and has at times pointed out things I completely overlooked.

    But it also has a weird tendency to just agree with everything I say, just to keep engagement up. So even after I’m done, I’m still researching and challenging things anyway, because it wants me to be its friend. It’s very strange.

    It’s a helpful tool but it’s not magical and honestly if it disappeared today I would be fine just going back to the before times.

  • Deestan@lemmy.world · 18 hours ago

    > I don’t agree with the argument that ChatGPT should “push back”.

    Me neither, but if they are being presented as “artificial people to chat with” they must.

    I’d rather LLMs stay tools, not pretend people.

    > Are we expecting the LLM to act like a psychologist, evaluating whether the user’s state of mind is healthy before answering questions?

    Some of the LLMs referred to are advertised as AI psychological help, so they must either act like psychologists (which they can’t) or stop being allowed as digital therapists.

  • acosmichippo@lemmy.world · 15 hours ago

    > I don’t agree with the argument that ChatGPT should “push back”. They have an example in the article where a guy asked for tall bridges to jump from, and ChatGPT listed them, of course.

    but that’s an inherently unhealthy relationship, especially for psychologically vulnerable people. if it doesn’t push back they’re not in a relationship, they’re getting themselves thrown back at them.

    • TeddE@lemmy.world · 14 hours ago

      Counterpoint: it is NOT an unhealthy relationship. A relationship has more than one person in it. It might be considered an unhealthy behavior.

      I don’t think the problem is solvable if we keep treating the Speak’n’spell like it’s participating in this.

      Corporations are putting dangerous tools in the hands of vulnerable people. By pretending the tool is a person, we’re already playing their shell game.

      But yes, the tool seems primed for enabling self-harm.

      • Dyskolos@lemmy.zip · 8 hours ago

        As with everything else: if you don’t know how something basically works, or even what it is, maybe you shouldn’t use it, and especially shouldn’t voice an opinion about it. Besides, every tool can be used for self-harm if used incorrectly. You shouldn’t put a screwdriver in your eye. Just knowing what a plane does won’t make you an able pilot, and trying will likely end in dire harm too.

        Not directed at you personally though.

        • TeddE@lemmy.world · 3 hours ago

          Agreed, for sure.

          But if Costco modified their in-store sample booth policy and had their associates start offering free samples of bleach to children, then when kids started drinking bleach we wouldn’t blame the children; we wouldn’t blame the bleach; we’d be mad at Costco.

    • 1984@lemmy.todayOP · 17 hours ago

      It would take another five seconds to find the same info using the web. Unless you also think we should censor the entire web and make it illegal to have any information about things that can hurt people, like knives, guns, stress, partners, cars…

      People will not be stopped from killing themselves just because a chatbot won’t tell them the best way, unfortunately.

      • FartMaster69@lemmy.dbzer0.com · 17 hours ago

        This is also a problem for search engines.

        A problem that, while not solved, has been somewhat mitigated by including suicide prevention resources at the top of search results.

        This is a bare minimum that AI can’t meet, and in conversation with an AI, vulnerable people can get more than just information: there are confirmed cases of AI encouraging harmful behaviors, up to and including suicide.

      • acosmichippo@lemmy.world · 15 hours ago

        > It would take another five seconds to find the same info using the web.

        good. every additional hurdle between a suicidal person and the actual act saves lives.

        > Unless you also think we should censor the entire web and make it illegal to have any information about things that can hurt people, like knives, guns, stress, partners, cars…

        this isn’t a slippery slope. we can land on a reasonable middle ground.

        > People will not be stopped from killing themselves just because a chatbot won’t tell them the best way, unfortunately.

        you don’t know that. maybe some will.

        the general trend i get from your comment is you’re thinking in very black and white terms. the world doesn’t operate on all or nothing rules. there is always a balance between safety and practicality.

  • Lydia_K@lemmy.world · 17 hours ago

    These are the same people who Google stuff and then believe every conspiracy theory website they find telling them the 5G waves mind-control the pilots to release the chemtrails to top off the mind-control fluoride in the water supplies.

    They honestly think the AI is a sentient superintelligence instead of Google 2: Electric Gargling Boogaloo.