Update: engineers updated the @Grok system prompt, removing a line that encouraged it to be politically incorrect when the evidence in its training data supported it.
Doesn’t it mean whatever the Internet thinks it means? Isn’t that the problem with LLMs? And eventually the Internet will be mostly previous LLM summaries, so it becomes self-reinforcing.