I can’t help but feel like this is the most important part of the article:
The model’s refusal to accept information to the contrary, meanwhile, is no doubt rooted in the safety mechanisms OpenAI was so keen to bake in, in order to protect against prompt engineering and injection attacks.
Do any of you believe that these “safety mechanisms” are there just for safety? If they can control AI, they will. This is how we got MechaHitler: the same mucking about with weights and such, not just what it was trained on.
They WILL, and they already are, trying to control how AI “thinks”. This is why it’s desperately important to do whatever we can to democratize AI. People have already decided that AI has all the answers, and folks like Peter Thiel now have the single most potent propaganda machine in history.
No doubt inspired by Chinese models like DeepSeek-R1 and Qwen3. They will flat-out gaslight you if you try to correct them.
Try asking an AI for a complete list of recently deceased CEOs and billionaires, based on publicly available search results.
When I tried, I got only the natural deaths, and only some of the publicly available results; all the other deaths were omitted. I brought up the omitted names one by one. Each time, the AI apologized and turned out to have all the right details of that person’s passing, insisting the name had been left out by accident. I said no: once is an accident, but this was a deliberate pattern. The AI waffled and talked like a politician.
The AI, in my experience, is absolutely controlled on a number of topics. It’s still useful for cooking recipes and such. I will not trust it on any topic that is sensitive to its owners.
Just… don’t use it at all. Stop supporting these people if you’re worried about what they’re doing.
That’s my method. I tested it a little when Google’s beta rolled out. Now I don’t use any AI at all. It can be useful as an add-on to search results, but not much else for me.