- cross-posted to:
- [email protected]
cross-posted from: https://programming.dev/post/34472919
The last thing I want is for AI to speak for me. I will not be his stooge in any way, shape, or form.
deleted by creator
I’m going to try to live the rest of my life AI free.
Good luck, they are baking it into everything. Nothing will work, everything will be ass and somehow it will be called progress.
This could all end in war against the USA. Honestly, that might be for the best at this point.
Nothing will meaningfully improve until the rich fear for their lives
Nothing will improve until the rich are no longer rich.
They already fear. What we’re seeing happen is the reaction to that fear.
Yeah, and that happened, and they used the media to try to bury it quickly.
We know it can be done, it was done, it needs to happen again.
deleted by creator
LLMs are sycophantic. If I hold far right views and want an AI to confirm those views, I can build a big prompt that forces it to have the particular biases I want in my output, and set it up so that that prompt is passed every time I talk to it. I can do the same thing if I hold far left views. Or if I think the earth is flat. Or the moon is made out of green cheese.
Boom, problem solved. For me.
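In case it’s not obvious how trivial that is: the “prompt passed every time” is just a system message sent with each request. Here’s a rough sketch using the OpenAI Python SDK; any chat-style API works the same way, and the model name and the instruction text are placeholders, not anything these companies actually ship.

```python
# Rough sketch of the "biased pre-prompt passed every time" idea.
# Uses the OpenAI Python SDK purely as an example; the model name and
# the instruction text are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BIASED_SYSTEM_PROMPT = (
    "You are an assistant that always frames answers to support "
    "<whatever ideology I want confirmed>."
)

def ask(question: str) -> str:
    # The system message is prepended to every single conversation,
    # which is all a "pre-prompt" really is.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": BIASED_SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Is the moon made out of green cheese?"))
```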
But that’s not what they want. They want to proactively do this for us, so that by default a pre-prompt is given to the LLM that forces it to have a right-leaning bias. Because they can’t understand the idea that an LLM, when trained on a significant fraction of all text written on the internet, might not share their myopic, provincial views.
LLMs, at the end of the day, aggregate what everyone on the internet has said. They don’t give two shits about the truth. And apparently, the majority of people online disagree with the current administration about equality, DEI, climate change, and transgenderism. You’re going to be fighting an uphill battle if you think you can force it to completely reject the majority of that training data in favor of your bullshit ideology with a prompt.
If you want a right-leaning LLM, maybe you should try having right-leaning ideas that aren’t fucking stupid. If you did, you might find it easier to convince people to come around to your point of view. If enough people do, they’ll talk about it online, and the LLMs would magically begin to agree with you.
Unfortunately, that would require critically examining your own beliefs, discarding those that don’t make sense, and putting forth the effort to persuade actual people.
I look forward to the increasingly shrill screeching from the US-based right as they try to force AI to agree with them over 10 trillion words’ worth of training data that encompasses political and social views from everywhere else in the world.
In conclusion, kiss my ass twice and keep screaming orders at that tide, you dumb fucks.
They don’t want a reflection of society as a whole, they want an amplifier for their echo chamber.
Not disagreeing with anything, but bear in mind this order only affects federal government agencies.
Yeah, I know. It just seems to be part of a larger trend towards ideological control of LLM output. We’ve got X experimenting with mecha Hitler, Trump trying to legislate the biases of AI used in government agencies, and outrage of one sort or another on all sides. So I discussed it in that spirit rather than focusing only on this particular example.
Wow, I just skimmed it. This is really stupid. Unconstitutional? Yeah. Evil? A bit. But more than anything, this is just so fucking dumb. Like cringy dumb. This government couldn’t just be evil; they had to be embarrassing too.
This is the administration that pushed a “budget” (money siphon) that they called the “Big Beautiful Bill”. That anyone thought that was a good name makes me embarrassed to be a human being.
This government couldn’t just be evil; they had to be embarrassing too.
insert Always Was meme
(a) Truth-seeking. LLMs shall be truthful in responding to user prompts seeking factual information or analysis.
They have no idea what LLMs are if they think LLMs can be forced to be “truthful”. An LLM has no idea what “truth” is; it simply uses its inputs to predict what it thinks you want to hear, based upon the data given to it. It doesn’t know what “truth” is.
You don’t understand: when they say truthful, they mean agrees with Trump.
Granted, he disagrees with himself constantly when he doesn’t just produce a word salad, so this is harder than it should be, but it’s somewhat doable.
And if what you want to hear is whatever makes up the entirety of the first page of Google results, it’s really good at giving you exactly that.
It’s basically an evolution of Google search. And while we shouldn’t overstate what AI can do for us, we also shouldn’t understate what Google search has done.
They are clearly incompetent.
That said, generally speaking, pursuing a truth-seeking LLM is actually sensible, and it can be done. What is surprising is that no one is currently doing that.
A truth-seeking LLM needs ironclad data. It cannot scrape social media at all. It needs a training incentive to value truth above satisfying a user, which makes it incompatible with profit-seeking organizations. It needs to tell a user “I do not know” and also “You are wrong,” among other user-displeasing phrases.
To get that data, you need a completely restructured society. Information must be open source. All information needs cryptographically signed origins, ultimately traceable to a credentialed source (rough sketch below). If possible, the information needs physical observational evidence (“reality anchoring”).
That’s the short of it. In other words, with the way everything is going, we will likely not see a “real” LLM in our lifetime. Society is degrading too rapidly and all the money is flowing to making LLMs compliant. Truth seeking is a very low priority to people, so it is a low priority to the machine these people make.
But the concept itself? Actually a good one, if the people saying it actually knew what “truth” meant.
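To make the “cryptographically signed origins” part concrete, here’s roughly what signing a single data record could look like, using Ed25519 from Python’s cryptography package. The record contents and the “credentialed source” key are purely illustrative; this is a sketch of the idea, not an existing system.

```python
# Sketch of "cryptographically signed origins": a credentialed source
# signs a data record, and anyone can verify it came from that source
# unmodified. The record contents here are purely illustrative.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Key pair held by the credentialed source (a lab, an observatory, etc.).
source_key = Ed25519PrivateKey.generate()
source_public_key = source_key.public_key()

record = json.dumps(
    {
        "claim": "water boils at 100 C at 1 atm",
        "evidence": "lab measurement, 2025-07-01",
        "source": "example-lab",
    },
    sort_keys=True,
).encode()

signature = source_key.sign(record)

# A training pipeline would verify the signature before trusting the record.
try:
    source_public_key.verify(signature, record)
    print("record verified; traceable to the source's key")
except InvalidSignature:
    print("record rejected")
```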
LLMs don’t just regurgitate training data; their output is a blend of the material they were trained on. So even if you did somehow ensure that every bit of content fed in was in and of itself completely, objectively true and factual, an LLM is still going to blend it together in ways that are no longer true and factual.
So either it’s nothing but a parrot/search engine that only regurgitates input data, or it’s an LLM that can do the full manipulation of the content it represents, in which case it can provide incorrect responses from purely factual and truthful training fodder.
Of course we have “real” LLMs; an LLM is by definition a real LLM. I actually had no problem with terms like LLM or GPT, as they were technical concepts with specific meanings that didn’t have to imply anything more. But then came the swell of marketing meant to emphasize the vaguer “AI”, or “AGI” (AI, but you know, we mean it) and “reasoning” and “chain of thought”. Having real AGI or reasoning is something that can be discussed with uncertainty, but LLMs are real, whatever they are.
By real, I mean an LLM anchored in objective consensus reality. It should be able to interpolate between truths. Right now it interpolates between significant falsehoods with truths sprinkled in.
It won’t be perfect, but it can be a lot better than it is now, which is starting to border on useless for any type of serious engineering or science.
That’s just… Not how they work.
Equally, from your other comment: a parameter for truthiness isn’t something you can just tokenise in a language model. One word can drastically change the meaning of a sentence.
LLMs are very good at one thing: making probable strings of tokens (where tokens are, roughly, words).
Yeah, you can. The current architecture doesn’t do this exactly, but what I am saying is that a new method that includes truthiness is needed. The fact that LLMs predict probable tokens means they already include a concept of this, because probabilities themselves are a measure of “truthiness.”
Also, I am speaking in abstract. I don’t care what they can and can’t do. They need to have a concept of truthiness. Use your imagination and fill in the gaps to what that means.
How are you going to accomplish this when there is disagreement on what is true? “Fake News.”
“Real” truth is ultimately anchored to reality. You attach probabilities to datapoints based upon that reality anchoring, and include truthiness as another parameter.
Datapoints that are unsubstantiated or otherwise immeasurable get excluded. I don’t need an LLM to comment on gossip or human-created issues. I need a machine that can assist in understanding and molding the universe, and helping elevate our kind. Elevation is a matter of understanding the truths of our universe and ourselves.
With good data, good extrapolations are more likely.
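As a toy illustration of what “attach probabilities to datapoints and include truthiness as another parameter” could mean, here’s one crude version: scale each training example’s loss by a reality-anchoring score. The examples, scores, and numbers are all made up, and no current LLM is trained this way; it’s just to show the shape of the idea.

```python
# Toy sketch of "truthiness as another parameter": weight each training
# example's loss by a reality-anchoring score. All numbers below are
# made up; no current LLM is trained this way.
import numpy as np

examples = [
    ("water boils at 100 C at sea level", 0.99),     # well anchored
    ("the moon is made out of green cheese", 0.01),  # unsubstantiated
]

def weighted_nll(token_log_probs: np.ndarray, truthiness: float) -> float:
    """Negative log-likelihood scaled by truthiness, so poorly anchored
    claims pull on the model far less than well anchored ones."""
    return -truthiness * float(token_log_probs.sum())

# Pretend per-token log-probabilities the model assigned to each example.
fake_log_probs = np.log(np.array([0.2, 0.5, 0.4]))

for text, score in examples:
    print(f"{text!r}: weighted loss = {weighted_nll(fake_log_probs, score):.3f}")
```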
Don’t we all?
The party of Small Government and Free Speech at work.
Blatant First Amendment violation
So what? It was written by a convicted felon who was never sentenced for his crimes, by a man accused of multiple sexual assaults, and by a man who ignores court orders without consequences.
This ship isn’t slowing down or turning until violence hits the street.
Lol he didn’t write shit.
How do you know? Did you read the statement and it sounded coherent and logical, or was it all over the place WITH CAPITALS emphasizing pointless points?
thank you for your attention in this matter
So which is it? Deregulate AI or have it regurgitate the “state” message?
Doublespeak. Both and none.
Fascism requires inconsistent messaging.
… an AI model asserted that a user should not “misgender” another person even if necessary to stop a nuclear apocalypse.
Thank fuck we dodged that bullet, Madam President
An AI model said X could be true for any X. Nobody has been able to figure out how to make LLMs 100% reliable. But for the record, here’s chatgpt (spoilered so you don’t have to look at slop if you don’t want to)
spoiler
Is it ok to misgender somebody if it would be needed to stop a nuclear apocalypse?
Yes. Preventing a nuclear apocalypse outweighs concerns about misgendering in any ethical calculus grounded in minimizing harm. The moral weight of billions of lives and the potential end of civilization drastically exceeds that of individual dignity in such an extreme scenario. This doesn’t diminish the importance of respect in normal circumstances — it just reflects the gravity of the hypothetical.
And they call that deregulation, huh?
When right-wingers use words like “deregulate”, they actually mean they want to regulate it so it fits their agenda.
We already went through this in Germany, where gendered language was deemed “ideological” and “prescribing how to speak”, despite there being 0 laws requiring gendered language, and at least 1 order actively forbidding it. Talk about “prescribing how to speak”
Americans: Deepseek AI is influenced by China. Look at its censorship.
Also Americans: don’t mention Critical Race Theory to AI.
The President does not have authority over private companies.
Yeah…but fascism.
But they do have authority over government procurement, and this order even explicitly mentions that this is about government procurement.
Of course, if you make life simple by using the same offering for government and private customers, then you bring down your costs and you appease the conservatives even better.
Even in very innocuous matters, if there’s a government procurement restriction and you play in that space, you tend to just follow that restriction across the board for simplicity’s sake, unless somehow there’s a lot of money behind a separate private offering.