Ask Onix
Beyond politeness: What actually improves AI responses
Research reveals that common strategies like flattery or role-playing rarely enhance large language model accuracy, but a few evidence-backed techniques do work.
The Star Trek experiment
A 2024 study tested whether framing questions in specific ways could improve AI performance. Researchers tried praising chatbots as "smart," urging careful thought, or ending prompts with "This will be fun!" Most approaches failed, except one: when instructed to act as if it were on Star Trek, an AI showed modest gains in basic math accuracy. The finding underscored how unpredictable prompt engineering can be.
Myths vs. reality in prompt engineering
Advice on interacting with AI ranges from threats to excessive politeness, with users swearing by role-playing as experts or even pleading for better answers. Yet experts argue much of this "context engineering" lacks consistent evidence. Jules White, a computer science professor at Vanderbilt University, notes: "It's not about magic words; it's about clearly expressing your goal."
"A lot of people think there's some magic set of words you can use that will make LLMs solve a problem. But it's not about word choice, it's about how you fundamentally express what you're trying to do."
Jules White, Vanderbilt University
Even politeness yields mixed results. A 2024 study found AI responses improved with courteous language in English and Chinese, but chatbots prompted in Japanese performed worse when the phrasing was excessively polite. Another test showed insults slightly boosted accuracy in an older ChatGPT version. With AI models constantly evolving, such findings quickly become outdated.
Proven techniques for better results
Experts highlight three strategies that consistently work:
- Request multiple options: Asking for three to five variations forces critical evaluation. "This makes the human re-engage and think about what they like," White explains.
- Provide examples: Sharing past work (e.g., emails) helps AI mimic your style more accurately than vague instructions.
- Use an interview format: For complex tasks like writing a job description, have the AI ask questions one at a time to refine its output.
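The three strategies above can be sketched as plain prompt templates. A minimal illustration follows; the function names and exact wording are hypothetical, not drawn from any study or vendor API:

```python
def request_variations(task, n=3):
    """Technique 1: ask for several options to force critical comparison."""
    return (f"{task}\n\n"
            f"Give me {n} distinct variations, numbered, so I can compare them.")

def with_examples(task, examples):
    """Technique 2: include past work so the model can mimic your style."""
    shots = "\n\n".join(f"Example {i + 1}:\n{text}"
                        for i, text in enumerate(examples))
    return f"{task}\n\nMatch the tone and style of these examples:\n\n{shots}"

def interview_prompt(task):
    """Technique 3: have the model interview you, one question at a time."""
    return (f"I want help with this task: {task}\n\n"
            "Before answering, interview me. Ask one question at a time, "
            "wait for my reply, and only produce the final result once you "
            "have enough information.")

# Usage: build a prompt asking for five subject-line options to compare.
prompt = request_variations("Write a subject line for a fundraising email", n=5)
print(prompt)
```

Any of these strings can be pasted into a chatbot as-is; the point is the structure of the request, not the specific wording.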
The risks of role-playing
While role-playing (e.g., pretending the AI is a math professor) can aid creative tasks, it often backfires for factual queries. Rick Battle, an applied machine learning engineer, warns: "You're encouraging hallucination by telling it to trust its internal knowledge." For open-ended tasks like brainstorming, however, role-playing remains useful.
Neutrality and human habits
Avoid leading questions. If comparing products, don't reveal your preference-"You'll likely get that answer," Battle says. Meanwhile, 70% of people in a 2025 survey admitted being polite to AI, citing habit or fear of future robot uprisings. While politeness doesn't affect accuracy, it may ease user discomfort. Sander Schulhoff, a prompt engineering researcher, notes: "If saying 'please' makes you more comfortable, it's still valuable."
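The difference between a leading and a neutral prompt is easy to see side by side. A small sketch, with made-up product names:

```python
# Leading prompt: reveals a preference, which the model will likely echo back.
leading = "I think Laptop A is better than Laptop B. Which should I buy?"

# Neutral prompt: withholds the preference and asks for criteria first.
neutral = ("Compare Laptop A and Laptop B on battery life, weight, and price. "
           "List the strengths and weaknesses of each before recommending one.")

print(neutral)
```

The neutral version forces the model to weigh evidence before committing to an answer, rather than confirming the opinion embedded in the question.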
Why it matters
AI tools are designed to mimic human behavior, but they lack consciousness. Treating them as tools, not people, yields better results. Efficient prompting also reduces energy consumption, addressing environmental concerns. As White puts it: "Stop treating AI like a person and start treating it like a tool."