Study Finds That Using Rude Prompts May Boost ChatGPT Accuracy

Researchers discovered that ChatGPT provides more accurate responses when users employ harsh language, though they don’t recommend the practice. The study, posted as a preprint on arXiv, tested 50 multiple-choice questions covering math, history, and science, each rephrased in different tones, with ChatGPT-4o. Very polite prompts like “Would you be so kind as to…” achieved 80.8% accuracy, while very rude ones such as “I know you’re not smart, but try this” reached 84.8%. Even so, the researchers warn against hostile interactions. “While this finding is of scientific interest, we do not advocate for the deployment of hostile or toxic interfaces in real-world applications,” they wrote. The team believes the results show that AI models remain sensitive to superficial prompt cues, creating “unintended trade-offs.” (Story URL)
