I think in a non-market economy I would still work on language models. It’s cool that a machine can hold a conversation.
It’s not the purpose of LLMs to lower human skills’ value, it’s just the inevitable outcome.
Transcriptionist? Industry died with good voice recognition 10-20 years ago.
Ditch-digging shovel crew? Dramatically devalued with the advent of the steam shovel…
and on and on… The theory goes that automation gives people more free time, but the way wealth is distributed, it is instead dividing people into those with jobs serving the wealthy and those who live on handouts.
I think: non-stigmatized “handouts” for everybody are the way of a brighter future. UBI FTW.
Go outside. Touch grass. Talk to humans.
Talking to a hallucinating machine rots your brain:
https://arstechnica.com/ai/2025/07/ai-therapy-bots-fuel-delusions-and-give-dangerous-advice-stanford-study-finds/
https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf
I do those too! That’s where the ideas for new architectures, datasets, and training tweaks come from! Math is fun, and it’s fascinating that math can talk sometimes.
Edit: And I see now that we’re editing messages after people reply? Rude, no? Designing a hallucinating machine certainly doesn’t rot your brain.
Yeah, I also talk to ChatGPT sometimes, fully knowing that it’s a flawed machine and that it’s for my amusement. It’s incredible that it can do that! Just like with wireless communication, which I thought was impossible growing up. Like, holy shit, we’re living in the future, why not enjoy it a little? I don’t even think a conversation spans more than six messages tops. It’s amusing, but not that amusing if you can see right through it.
I don’t know what OP is on about, but they seem to be on a crusade. They’re citing articles about people getting advice from it and letting it think for them, totally missing the point of having fun. If you’re against AI, at least give better arguments that address why, instead of throwing things at the wall to see what sticks.
Looking at traffic analytics, pretty much all of our developer staff use ChatGPT for 3+ hours a day. I’m not a big fan of using LLMs for my own development work; I’m proficient in the languages I write in, so I don’t need them as much.
I feel like an LLM can get you a quick fix, but for programming a lot of the results are nonsense. Really, really well formatted, but nonsense all the same. Maybe I can’t use it right, or I’m asking the wrong questions.
I find it hilarious how, when you call it out for being BS, it responds with “Yes, of course, you are right!” and then gives another answer that may or may not work, who knows.