Yeah, you can. The current architecture doesn't do this exactly, but what I'm saying is that a new method that includes truthiness is needed. The fact that LLMs predict probable tokens means they already include a concept of this, because probabilities themselves are a measure of "truthiness."
Also, I'm speaking in the abstract. I don't care what they can and can't do today; they need to have a concept of truthiness. Use your imagination and fill in the gaps as to what that means.
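To make the point concrete, here's a minimal sketch of what "probabilities as truthiness" could mean. The logit values and the four-token vocabulary are made up for illustration; this is just the standard softmax that every LLM applies over its output logits, with the resulting probability read as a crude confidence signal.

```python
import math

def softmax(logits):
    # Convert raw logits into a probability distribution over tokens.
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for four candidate next tokens.
logits = [3.2, 1.1, 0.4, -0.5]
probs = softmax(logits)

# The probability the model assigns to its chosen token can be read as
# a rough "truthiness" signal: high probability = high model confidence.
confidence = max(probs)
```

Of course, a high token probability only means the continuation is likely under the training distribution, not that it is factually true, which is exactly why a method that treats truthiness as its own signal (rather than a side effect of likelihood) would be new.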