E.g. Ollama / Hugging Face

• NihilsineNefas@slrpnk.net · 1 day ago

    Only the ones pushing medical/astronomical/scientific analysis of huge swathes of data that would be too much for a human to go through are acceptable in my mind.

    Any company that’s pushing a chatbot/‘picture bot’/website generator/scalper/search engine can suck peanuts out of my taint

• sylver_dragon@lemmy.world · 1 day ago

    I trust every AI group to do their level best to separate me from my money. Beyond that, I wouldn’t trust them with a stolen identity.

• DigitalDilemma@lemmy.ml · 24 hours ago

You can’t trust an inherently untrustworthy industry.

The problem is that to make a good AI, you need a lot of input, and we know from leaks and reports that many, if not most, of the major players deliberately ignored copyright to train their models. If it was reachable, they used it. Are using it. Will use it. Like Johnny 5, there’s no limit to the data they want, or that their handlers want to feed them. They’re the Cookie Monster at a biscuit factory.

    So when the question of trust comes up, you’d have to be pretty forgiving to overlook that they’re built on foundations of theft, and pretty naive to assume these companies have suddenly grown ethics and won’t use your data and input to train with, even when you’re using commercial systems that promise they won’t.

Even in the event that there is an ethical provider that does its utmost to ensure your data doesn’t migrate (these do exist, at least in intention), this is an incredibly fast-moving, ultra-competitive market where huge amounts of data are shifted around constantly, and guardrails are notoriously hard to define accurately, let alone enforce. It’s inevitable that stuff will leak.

  • It depends on what you mean by trust: trust in the answers, trust in your privacy, what kind of trust are we talking about? If we’re talking about trust in the accuracy or usability of the data it responds with, I would say I trust Copilot, Grok, Claude, Lumo, and finally Gemini, in that order. However, if we’re talking about trusting them to keep your data private, that’s a big ol’ zero, because none of them are private (Lumo might be the most private, but I still don’t trust it 100%).

    My job asks us to use AI agents so I have been playing around with paid licenses on all of these (Lumo was tested on my personal computer)

• moistracoon@lemmy.zip · 2 days ago

Trust with my data? None. Strictly for fun on hardware with no personal data? I’ll dabble with that Ollama stuff, LM Studio, all in offline mode. Even then it feels wrong. I’m not, like, hardline against AI; I think scientists should be able to use it, I just don’t feel right using it, morally speaking.

  • selokichtli@lemmy.ml · 1 day ago

I’m in the same boat. The way it’s been used until now feels like mindlessly malnourishing the planet and society. The AI economic bubble burst will be brutal, though.

• juliebean@lemmy.zip · 1 day ago

MIRI seems to have their collective head on straight. I’d at least trust them more on the subject of AI than any of the blood-swilling oligarchs running the big AI companies these days.