Research papers found carrying hidden white text giving instructions not to highlight negatives as concern grows over use of large language models for peer review
It’s still extremely shitty, unethical behavior in my book, since the negative impact isn’t felt by the organization that’s failing to validate its inputs, but by your peers, who are potentially being screwed out of a fair review process and a spot in a journal or conference.