I learned that AI chat bots aren’t necessarily trustworthy in everything. In fact, if you aren’t taking their shit with a grain of salt, you’re doing something very wrong.
This is my personal take. As long as you’re careful and thoughtful whenever using them, they can be extremely useful.
Could you tell me what you use it for because I legitimately don’t understand what I’m supposed to find helpful about the thing.
We all got sent an email at work a couple of weeks back telling everyone that they want ideas for a meeting next month about how we can incorporate AI into the business. I’m heading IT, so I’m supposed to be able to come up with some kind of answer and yet I have nothing. Even putting aside the fact that it probably doesn’t work as advertised, I still can’t really think of a use for it.
The main problem is it won’t be able to operate our ancient and convoluted ticketing system, so it can’t actually help.
Everyone I’ve ever spoken to has said that they use it for DMing or story prompts. All very nice but not really useful.
@echodot @Redex68 off top of my head, script generation. making content more readable. dictating a brain dump while walking and having it spit out a cohesive summary.
it’s all about the prompt you put in. shit in/shit out. And making sure you check/understand what it spits out. and that sometimes it’s garbage.
BBC is probably salty the AI is able to insert the word Israel alongside a negative term in the headline
Some examples of inaccuracies found by the BBC included:
Gemini incorrectly said the NHS did not recommend vaping as an aid to quit smoking
ChatGPT and Copilot said Rishi Sunak and Nicola Sturgeon were still in office even after they had left
Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed “restraint” and described Israel’s actions as “aggressive”
I did not even read up to there but wow BBC really went there openly.
As always, never rely on LLMs for anything factual. They’re only good for things with a massive tolerance for error, such as entertainment (e.g. RPGs)
Nonsense, I use it a ton for science and engineering, it saves me SO much time!
Do you blindly trust the output or is it just a convenience and you can spot when there’s something wrong? Because I really hope you don’t rely on it.
How could I blindly trust anything in this context?
In which case you probably aren’t saving time. Checking bullshit usually takes longer than just researching shit yourself. Or it should, if you’re doing due diligence
It’s nice that you inform people that they can’t tell if something is saving them time or not, without knowing what their job is or how they’re using the tool.
If they think AI is working for them, then they can believe that. If you think AI is an effective tool for any profession, you are a clown. If my son’s preschool teacher used it to make a lesson plan, she would be incompetent. If a plumber asked it what kind of wrench he needed, he would be kicked out of my house. If an engineer on one of my teams uses it to write code, he gets fired.
AI “works” because you’re asking questions you don’t know the answers to, and it’s just putting words together so they make sense, without regard to accuracy. It’s a hard limit of “AI” that we’ve hit. It won’t get better in our lifetimes.
Anyone blindly saying a tool is ineffective for every situation that exists in the world is a tool themselves.
Funny, I find the BBC unable to accurately convey the news
Yeah, haha
Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed “restraint” and described Israel’s actions as “aggressive”
Perplexity did fail to summarize the article, but arguably it corrected it.