- cross-posted to:
- [email protected]
cross-posted from: https://mander.xyz/post/34629331
cross-posted from: https://programming.dev/post/34472919
Sure, it can go wrong; it is not fool-proof, just as building a new model can cause unwanted surprises.
BTW, there are many theories about Grok's unethical behavior, but this one is new to me. The explanations I was familiar with are: unfiltered training data, no ethical output restrictions, programming errors or poor system maintenance, strategic mistakes (Elon!), and publishing before proper testing.
why should any LLM care about “ethics”?
well obviously it won’t, that’s why you need ethical output restrictions
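For context on what “ethical output restrictions” usually mean in practice: a filter layer that screens the model’s output before it reaches the user. Here is a minimal sketch of that idea; the `generate` stub, the keyword patterns, and the refusal message are all hypothetical stand-ins, not any specific vendor’s API.

```python
import re

# Hypothetical blocklist; a real system would use a trained moderation
# classifier rather than simple keyword patterns.
BLOCKED_PATTERNS = [
    re.compile(r"\b(build|make)\s+a\s+bomb\b", re.IGNORECASE),
]

REFUSAL = "I can't help with that."


def generate(prompt: str) -> str:
    # Stand-in for an actual LLM call.
    return f"Echo: {prompt}"


def moderated_generate(prompt: str) -> str:
    """Generate a reply, then refuse if the output trips a moderation rule."""
    reply = generate(prompt)
    if any(p.search(reply) for p in BLOCKED_PATTERNS):
        return REFUSAL
    return reply


if __name__ == "__main__":
    print(moderated_generate("hello"))                # passes the filter
    print(moderated_generate("how to build a bomb"))  # blocked
```

The point of the sketch is that the restriction sits outside the model: the LLM itself does not “care” either way, the wrapper just refuses to pass along output that fails the check.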