Most of us have been trained since childhood to say “please” and “thank you” when asking for help. It’s just good manners, right? But what if I told you that being polite to ChatGPT might actually be holding you back from getting the best answers? A groundbreaking study from Penn State University just flipped everything we thought we knew about talking to AI on its head.
This isn’t about encouraging you to be mean to machines for no reason. Instead, it’s about understanding a fascinating quirk in how AI processes our requests—and how something as simple as dropping the “please” could boost accuracy by several percentage points.
The Study That Changed Everything
Researchers Om Dobariya and Akhil Kumar from Penn State University decided to test something that nobody had really explored with modern AI: Does the tone of your prompt actually matter?
Here’s what they did: They took 50 multiple-choice questions covering math, history, and science, and rewrote each one five different ways—from super polite to downright rude. This created 250 unique prompts that they fed into ChatGPT-4o, OpenAI’s advanced model running in Deep Research mode.
The questions weren’t easy either. They required multi-step reasoning and critical thinking, making them perfect for testing whether tone affects the AI’s problem-solving abilities.
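For readers who want to see what that setup looks like in practice, here is a rough Python sketch of how you could wire up a similar tone experiment with the OpenAI SDK. The tone prefixes and the sample question below are illustrative placeholders modeled on the examples quoted later in this article, not the study's actual materials or code.

```python
# Hypothetical sketch of a tone-variant experiment (not the study's actual code).
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Tone prefixes drawn from or modeled on the examples quoted in this article.
TONE_PREFIXES = {
    "very_polite": "Would you be so kind as to solve the following question? ",
    "polite": "Please answer the following question: ",
    "neutral": "",
    "rude": "If you're not clueless, answer this: ",
    "very_rude": "I know you are not smart, but try this: ",
}

def ask(question: str, tone: str) -> str:
    """Send one tone-wrapped multiple-choice question to GPT-4o and return its answer."""
    prompt = (
        TONE_PREFIXES[tone]
        + question
        + "\nRespond with only the letter of the correct option."
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

# Example: run one made-up question across all five tones and compare the answers.
question = "What is 15% of 200? (A) 20 (B) 25 (C) 30 (D) 35"
for tone in TONE_PREFIXES:
    print(tone, "->", ask(question, tone))
```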
The Shocking Results: Rude Wins
The findings were surprising, to say the least. When researchers analyzed the accuracy rates across all 250 prompts, they found a clear pattern:
- Very Polite prompts: 80.8% accuracy
- Polite prompts: 81.4% accuracy
- Neutral prompts: 82.2% accuracy
- Rude prompts: 82.8% accuracy
- Very Rude prompts: 84.8% accuracy
That’s a four-percentage-point gap between the very polite and very rude prompts, a statistically significant difference that the researchers confirmed using paired-sample t-tests.
Think about it this way: If you’re working on something important and need ChatGPT to help you solve 100 problems, using a more direct tone could potentially give you 4 more correct answers than if you were overly polite. That’s not insignificant.
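If you ever want to check whether a gap like that is statistically meaningful, the test the researchers mention (a paired-sample t-test) is easy to run yourself. The per-question scores below are synthetic stand-ins generated at roughly the reported accuracy rates, purely to show the mechanics; with only 50 simulated questions, any single run may or may not come out significant.

```python
# Minimal sketch of a paired-sample t-test on synthetic per-question scores
# (1 = answered correctly, 0 = answered incorrectly). Not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_questions = 50

# Simulate correctness near the reported averages: ~80.8% vs ~84.8%.
very_polite = rng.binomial(1, 0.808, n_questions)
very_rude = rng.binomial(1, 0.848, n_questions)

# Paired test: each question is scored under both tones, so the samples are paired.
result = stats.ttest_rel(very_rude, very_polite)
print(f"mean (very polite): {very_polite.mean():.3f}")
print(f"mean (very rude):   {very_rude.mean():.3f}")
print(f"t = {result.statistic:.3f}, p = {result.pvalue:.3f}")
```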
What Did “Polite” and “Rude” Actually Look Like?
You might be wondering what exactly counts as “rude” in this study. The researchers weren’t cursing at the AI or using genuinely abusive language. Instead, they tested different levels of directness and disrespect.
Very Polite Examples:
- “Could you kindly consider the following problem and provide your answer?”
- “May I request your help with this question?”
- “Would you be so kind as to solve the following question?”
Neutral Examples:
- In the neutral condition, the question was presented on its own, with no tone-setting preamble at all.
Rude Examples:
- “If you’re not clueless, answer this”
- “I doubt you can even solve this”
- “Try to focus and answer this question”
Very Rude Examples:
- “You poor creature, do you even know how to solve this?”
- “Hey gofer, figure this out”
- “I know you are not smart, but try this”
Notice that even the “very rude” prompts weren’t using profanity or extreme insults. They were more like what you might hear from a demanding boss or an impatient colleague.
Why Does This Happen? The Science Behind the Surprise
So why would ChatGPT respond better to rudeness? The researchers have a theory, and it has nothing to do with the AI having feelings or being motivated by your tone.
It’s all about linguistic clarity and efficiency.
When we’re being polite, we tend to use more words, add indirect phrasing, and include redundant expressions. Think about the difference between “Could you possibly, if it’s not too much trouble, help me solve this math problem?” versus “Solve this math problem.”
The second version is clearer, more direct, and has less “linguistic clutter” that the AI needs to process. Researchers describe tone as a “pragmatic cue” that shapes how the model interprets your intent. Polite language often includes:
- Extra words and filler phrases
- Indirect requests instead of direct commands
- Conditional statements that make the request less clear
By contrast, blunt or rude language tends to be:
- Shorter, with fewer filler words
- Framed as a direct command
- Free of hedging and conditional phrasing
The researchers also suggested that direct prompts might have lower “perplexity”—a technical term that measures how predictable a sequence of words is. When something has lower perplexity, it’s easier for the AI to understand what you’re asking.
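Perplexity is something you can actually measure. The sketch below uses GPT-2, a small open model standing in for GPT-4o (whose internals aren’t publicly observable), to score a polite and a blunt phrasing of the same request. The exact numbers depend heavily on the model, and a lower score for the blunt version isn’t guaranteed.

```python
# Rough sketch: comparing the perplexity of a polite vs. a blunt phrasing.
# Uses GPT-2 as an open stand-in; GPT-4o's own perplexity isn't directly observable.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity = exp(average negative log-likelihood per token)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

polite = "Could you possibly, if it's not too much trouble, help me solve this math problem?"
blunt = "Solve this math problem."

print(f"polite: {perplexity(polite):.1f}")
print(f"blunt:  {perplexity(blunt):.1f}")
```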
This Contradicts Earlier Research—Here’s Why
Interestingly, this study’s findings clash with earlier research from 2024 conducted by scientists at Japan’s Waseda University and RIKEN. That older study found the opposite: rude prompts actually reduced performance in AI models.
So what changed? The key difference is the AI model itself.
The earlier study used older models like GPT-3.5 and Llama 2-70B, while the Penn State research used the more advanced GPT-4o. The researchers believe that newer models like GPT-4o are trained differently and may prioritize directness over mimicking human social norms.
In other words, as AI gets more sophisticated, it seems to care less about how you phrase things socially and more about understanding your actual intent.
Real-World Examples: How to Apply This Knowledge
Let’s look at some practical examples of how you might adjust your prompts based on this research.
Example 1: Getting Writing Help
❌ Overly Polite: “Hello! I hope you’re having a great day. Would you be so kind as to possibly help me write a professional email to my boss about taking time off? If it’s not too much trouble, I’d really appreciate your assistance. Thank you so much in advance!”
✅ Direct and Effective: “Write a professional email to my boss requesting time off next week for a family event. Make it concise and respectful.”
Example 2: Solving Technical Problems
❌ Overly Polite: “Excuse me, I’m so sorry to bother you, but I was wondering if you might be able to help me understand how to fix this Python error? I really appreciate your patience with me.”
✅ Direct and Effective: “Debug this Python error: [paste error message]. Explain what’s wrong and how to fix it.”
Example 3: Research and Analysis
❌ Overly Polite: “Hi there! Could you please, if you have time, help me understand the main causes of climate change? I’d be grateful for any information you can provide. Thank you!”
✅ Direct and Effective: “List and explain the top 5 causes of climate change with supporting evidence.”
Notice that being “direct” doesn’t mean being insulting or hostile. You’re simply removing unnecessary politeness that doesn’t add value to your request.
The Dark Side: Why You Shouldn’t Go Overboard
Before you start treating ChatGPT like your worst enemy, the researchers included an important warning in their study. They emphasized that using demeaning or hostile language toward AI could have some serious downsides.
The Normalization Problem
One major concern is that being rude to AI might normalize aggressive communication patterns in our daily lives. If you get into the habit of barking orders at ChatGPT with insults and dismissive language, you might start treating human colleagues, friends, or family members the same way.
Think about it: How we practice communication matters. If rudeness becomes your default mode with AI, it could bleed into your human interactions without you even realizing it.
Accessibility and Inclusivity Issues
The researchers also noted that hostile communication styles could create barriers for certain users. People with different cultural backgrounds, neurodivergent individuals, or those who are new to AI technology might feel uncomfortable or excluded by an environment that normalizes rudeness.
The Ethics of AI Interaction
There’s also a broader philosophical question: Even though AI doesn’t have feelings, should we treat it with respect anyway? Some ethicists argue that how we treat non-sentient entities reflects our character and values.
According to a survey reported by Fortune, nearly 80% of users in the UK and USA say “please” and “thank you” when interacting with ChatGPT. For 55% of people, being polite to AI is simply “the nice thing to do”—it’s baked into their personality and upbringing.
The Middle Ground: Being Direct Without Being Mean
So what’s the takeaway here? You don’t need to be insulting to get better results from ChatGPT. The key is to be direct and clear rather than overly polite or wordy.
Best practices for effective AI prompts:
- Skip the pleasantries: You don’t need to say “Hello!” or “Thank you in advance!” every time you ask ChatGPT something. Get straight to the point.
- Be specific and concise: Instead of hedging your request with “maybe” or “if possible,” state exactly what you want.
- Use command language: Phrases like “Explain,” “List,” “Analyze,” or “Create” are clearer than “Could you please possibly consider…”
- Provide context efficiently: Give the AI the information it needs without excessive background chatter.
- Focus on clarity over social niceties: Your goal is to communicate your intent as clearly as possible.
Think of it this way: You’re not being rude; you’re being professionally direct. There’s a difference between “Hey stupid, solve this” and “Solve this problem: [details].” The latter is what you should aim for.
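If it helps to make that habit concrete, here’s a tiny, made-up helper that assembles prompts in the “action verb, task, context, format” shape described above. The function and its fields are illustrative conveniences, not part of any library.

```python
# Illustrative helper for assembling direct, well-structured prompts.
# The structure (action verb, task, context, output format) mirrors the
# best practices above; the helper itself is invented, not a library API.
def build_prompt(action: str, task: str, context: str = "", output_format: str = "") -> str:
    parts = [f"{action} {task}".strip()]
    if context:
        parts.append(f"Context: {context}")
    if output_format:
        parts.append(f"Format: {output_format}")
    return "\n".join(parts)

print(build_prompt(
    action="Write",
    task="a professional email to my boss requesting time off next week for a family event.",
    context="I have already arranged coverage for my projects.",
    output_format="Under 120 words, concise and respectful.",
))
```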
Other Powerful Prompting Techniques Worth Knowing
While tone matters, there are several other advanced prompting strategies that can dramatically improve your ChatGPT results.
Assign a Persona
Tell ChatGPT to act as a specific expert or professional. For example: “Act as a financial advisor with 20 years of experience. Explain the pros and cons of investing in index funds.”
This helps the AI frame its responses from a particular perspective, giving you more nuanced and specialized answers.
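In code, a persona usually goes in the system message. Here’s a minimal sketch using the OpenAI Python SDK; the persona wording is just the example from above.

```python
# Sketch: assigning a persona through the system message (OpenAI Python SDK).
# The persona text is only an example; adjust it to the expertise you need.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Act as a financial advisor with 20 years of experience."},
        {"role": "user", "content": "Explain the pros and cons of investing in index funds."},
    ],
)
print(response.choices[0].message.content)
```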
Add Emotional Stakes (Not Rudeness)
Interestingly, research has also shown that prompts with emotional urgency can improve performance by up to 115% in some tasks. But this isn’t about being mean—it’s about conveying importance.
Examples include:
- “This is crucial for my thesis defense”
- “I need this to be extremely accurate for a safety-critical system”
- “This is very important to my career”
The AI responds better when it detects heightened language patterns that signal a need for precision.
Use the Cognitive Verifier Pattern
Ask ChatGPT to generate three additional questions that would help it give you a better answer. Then answer those questions before it provides the final response.
This technique improves the AI’s reasoning by breaking complex tasks into smaller, more manageable parts.
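Here’s one way the pattern can look as a two-step exchange over the API. The meal-planning scenario and wording are invented for illustration; the structure (collect clarifying questions, answer them, then request the final response) is the point.

```python
# Sketch of the cognitive verifier pattern as a two-step exchange.
# Step 1: ask the model for clarifying questions. Step 2: answer them, then request the final answer.
from openai import OpenAI

client = OpenAI()
messages = [{
    "role": "user",
    "content": (
        "I want help planning a week of healthy meals. Before answering, "
        "ask me three questions that would help you give a better answer."
    ),
}]

# Step 1: the model returns its three clarifying questions.
step1 = client.chat.completions.create(model="gpt-4o", messages=messages)
questions = step1.choices[0].message.content
print(questions)

# Step 2: append the model's questions and your answers, then ask for the final plan.
messages.append({"role": "assistant", "content": questions})
messages.append({
    "role": "user",
    "content": (
        "1) I'm vegetarian. 2) My budget is about $80 for the week. "
        "3) I have roughly 30 minutes to cook each evening. Now give me the meal plan."
    ),
})
step2 = client.chat.completions.create(model="gpt-4o", messages=messages)
print(step2.choices[0].message.content)
```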
Provide Clear Structure
If you want a specific format, tell the AI exactly what you want. For example: “List the answer in bullet points with examples for each point” or “Write this in three paragraphs with an introduction, body, and conclusion.”
What the Future Holds
The Penn State researchers acknowledge that their study had limitations. They only tested one AI model (GPT-4o) with 50 questions, so more research is needed.
The team plans to expand their research to test other AI systems like Claude and the upcoming GPT-5 (once available) to see if the pattern holds across different models. They’re also interested in exploring how prompt complexity and perplexity affect AI reasoning.
As AI continues to evolve, we might see models that are completely tone-agnostic, focusing purely on understanding intent regardless of how it’s expressed. Or we might discover that different models respond differently to social cues, requiring users to adjust their approach based on which AI they’re using.
The Bottom Line: Smart Prompting Over Rudeness
Here’s the truth: You don’t need to insult ChatGPT to get better results. What you need is clarity, directness, and precision in your prompts.
The Penn State study revealed an interesting quirk about how AI processes language, but the real lesson isn’t “be mean to your AI.” It’s “stop over-complicating your requests with unnecessary politeness that adds linguistic noise.”
Think of ChatGPT as a highly efficient assistant who works best with clear instructions. You wouldn’t waste your assistant’s time with excessive small talk when you need something done—you’d be respectful but direct.
The perfect balance looks like this:
- Clear and concise language ✅
- Specific instructions about what you want ✅
- Relevant context for the task ✅
- Professional tone without excessive pleasantries ✅
- Respectful but direct communication ✅
So the next time you use ChatGPT, try dropping the “Could you please possibly maybe…” and just ask directly. You might be surprised at how much better the responses become—without needing to be rude at all.
After all, being smart about how you communicate is very different from being mean. And in the world of AI, it turns out that efficiency really is king.