LLMs can be very useful in my personal life. How should I deal with this going forward?

  • Quality depends heavily on the model, its size, internet access, etc.
  • They seem to get more accurate over time.

Personally, I can find information within a second. I can ask it which philosopher wrote about “free will” and it’ll provide a good chunk of information that sounds very plausible. Gemini is very impressive from a layman’s perspective; Llama is worse in this regard but still OK. It may only be good on the surface, but I can ask it for the book as well and it’ll provide that information too. It will get better over time.

Google already knows a lot about us, and now it will collect even more information about people. I caught myself asking it about a philosophical thought of my own.

I was asking the computer. I was not judging its output; I was asking it to judge mine.

I was asking the computer a philosophical question that has no clear answer. I evaluated the computer’s output and was happy it told me that I was right.

I also do maths with a computer. I trust it; it is usually deterministic.

I’ve also asked it for medical advice, and the answer sounded good.

Today, I wanted to ask it something else, and I noticed that I was asking a computer a question. Thinking it through myself would take many minutes, many difficult minutes. I’d need to research more information and talk to people. But I chose to prompt it instead.

I realised I should think about this myself and prompt a community to think about it too, to exchange ideas with (hopefully) humans.

Using LLMs, especially online LLMs such as Google’s, yields higher-quality output than local LLMs in my experience, so I’d like to use online LLMs. But I do not want to hand every question I have to Google. I do not want all of us giving everything to Google. Am I overreacting? Is this just fear of new technology?

It can save me a lot of time. “I could achieve more” by using it. But could I really? Wouldn’t the AI achieve it for me? Do I want the achievement anyway? Do I want a head start with AI? I write code for a living. Is there a huge difference between writing deterministic code and working with probabilistic LLM output?

Fear of missing out is kicking in.

I do not want to get left behind but I also do not want to give up my free will.

I do not want to lose my privacy (to google).

I do not want to lose my philosophical maturity, or at least what’s left of it.

Fear of missing out is kicking in.

  • technocrit@lemmy.dbzer0.com

    I also do maths with a computer. I trust it; it is usually deterministic.

    I wouldn’t trust it. While math is usually deterministic, language models are not. They’re just spitting out whatever text is statistically likely. But that’s not how math works. These programs are usually terrible at basic math like multiplication.

    https://garymarcus.substack.com/p/math-is-hard-if-you-are-an-llm-and
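
    The distinction can be sketched in a few lines of Python. The token strings and probabilities below are made up purely for illustration; real models sample from a distribution over their whole vocabulary:

```python
import random

# Conventional computation: the same inputs always give the same output.
assert 7 * 8 == 56  # deterministic, every single time

# An LLM picks the next token by sampling from a probability
# distribution, so the same prompt can yield different outputs.
# (Toy distribution, invented for illustration only.)
tokens = ["56", "54", "58"]
weights = [0.90, 0.06, 0.04]  # "56" is merely the most *likely* answer
answer = random.choices(tokens, weights=weights)[0]
print(answer)  # usually "56", but not guaranteed
```

    The arithmetic line can never be wrong; the sampled line can. That gap is the whole point.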

    Fear of missing out is kicking in.

    Missing out on what?

  • Aman Verasia@lemm.ee

    You’re wrestling with the pros and cons of using LLMs like me—convenience and speed versus privacy risks, potential over-reliance, and accuracy concerns. You’re a coder who fears losing your intellectual edge or free will to AI, especially when personal queries (philosophical or medical) get sent to tech giants. I suggest treating LLMs as a tool, not a crutch: use them for routine tasks, cross-check critical info with trusted sources, and keep sensitive questions offline or human-led. Balance is key—leverage AI to boost your work and growth, but don’t let it replace your critical thinking or humanity.

    • enemenemu@lemm.eeOP

      Thanks for the reality check. It’s not a simple in-or-out choice.

      Let’s see who ends up with the advantage: the kids who will never know anything else, or us.

      I hope we teach our kids how to use it responsibly.

  • 10001110101@lemm.ee

    Haven’t used it yet, but venice.ai looks interesting; they have a good privacy policy. Right now, I just use ChatGPT with the “improve models” setting turned off, and use “temporary chat” mode. I don’t really trust OpenAI to be doing the right thing though. I’ve used 14B models locally, but they aren’t as good as 72B+ models.

  • jonathanvmv8f@lemm.ee

    Adding to the original question: is it ethically OK to keep using these corporate AI tools? I fear that engaging with them would give these companies all the more reason to develop them further and continue their illegal data extraction. I would personally prefer to boycott them, but I’m unsure whether that would be shooting myself in the foot at this point.