How should we talk to an AI chatbot?

Threats, flattery, “please” and “thank you,” even assigning roles such as professor or lawyer: users of large language models (LLMs)—the technology behind chatbots like ChatGPT and Gemini—are experimenting with an ever-growing range of strategies in an effort to obtain better and more accurate answers.

This practice, known as prompt engineering, has evolved into a small industry of advice and tips. Some claim that politeness helps the system perform better; others believe that a more aggressive tone can “activate” the model. There are also users who ask the chatbot to “play the role” of an expert, hoping this will produce more reliable responses.

However, as experts tell BBC News, many of these approaches are not supported by strong scientific evidence. In some cases, they may even lead to less reliable results.

“Many people believe there’s a ‘magic’ phrase that will make an LLM solve a problem,” notes Jules White, a computer science professor at Vanderbilt University. “In reality, it’s not about specific words, but about how clearly and meaningfully you express what you’re trying to achieve.”

Large language models work by breaking text into smaller units known as tokens and statistically predicting which token is most likely to come next. This means that even small details, from a single word choice to a comma, can influence the final response. The challenge is that the impact of these changes is extremely difficult to predict.
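The next-token idea can be illustrated with a toy sketch: a tiny bigram model that counts which word follows which in a sample text and picks the most frequent continuation. This is a drastic simplification (real LLMs use subword tokens and neural networks trained on vast corpora, not raw counts), but the "predict what comes next" principle is the same.

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on trillions of tokens, not a few sentences.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count, for each token, which token tends to follow it (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(token):
    """Return the statistically most likely next token after `token`."""
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it follows "the" twice; "mat" and "fish" once each
```

A real model does the same kind of thing at a vastly larger scale, computing a probability distribution over a vocabulary of tens of thousands of subword tokens at every step, which is why a single changed word in a prompt can ripple through the whole answer.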

Does politeness matter?

Researchers have attempted to identify patterns in how models behave. A 2024 study suggested that polite wording may lead to more accurate responses. However, another experiment found that an earlier version of ChatGPT actually performed better when it received… insults.

Available data remains limited, and the constant updates to these models mean that conclusions can quickly become outdated.

In any case, experts agree that tactics such as flattery or aggression are unlikely to meaningfully influence the accuracy of the output. These systems are designed to mimic human communication, which often creates the illusion that they possess “moods” or “personalities.” In reality, they are simply language simulation systems. For that reason, experts suggest treating them more as tools than as conversational partners.

So how should we talk to a chatbot?

For those who regularly use large language models, experts recommend several simple practices that can improve results.

Ask for more than one answer
“Don’t limit yourself to a single solution—ask for three or five,” White suggests. Especially in creative tasks, alternative responses help users compare ideas and refine their initial request.

Provide examples
If you ask the model to write an email and the result doesn’t match your style, it’s better to show examples of your own writing. Models tend to mimic tone and style far more effectively through examples than through abstract instructions.
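This technique is often called few-shot prompting: instead of describing your style, you show it. A minimal sketch of how such a prompt might be assembled (the instruction wording and sample emails below are invented placeholders, not a required format):

```python
# Sketch of a few-shot prompt: your own emails act as style examples.
# The samples and instruction text are illustrative, not a fixed template.
my_emails = [
    "Hi Anna, quick one: can we push the call to 3pm? Cheers, Sam",
    "Hi team, short update: the draft is done, comments welcome by Friday. Sam",
]

def build_prompt(task, examples):
    """Assemble a prompt that shows the model examples of the desired style."""
    shots = "\n\n".join(f"Example email:\n{e}" for e in examples)
    return (
        "Write an email in the same tone and style as the examples below.\n\n"
        f"{shots}\n\nTask: {task}"
    )

print(build_prompt("Politely decline a meeting invitation.", my_emails))
```

The same idea works when typing a prompt by hand: paste two or three of your own emails above the request, and the model will usually imitate their tone more faithfully than it would follow an abstract description like "write casually."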

Let it ask you questions
Instead of requesting a complete description right away, you can ask the AI to interview you—posing questions one by one until it gathers the necessary information. This makes the process more interactive and tailored to your needs.

Be cautious with role-playing
According to researcher and entrepreneur Sander Schulhoff, it was long believed that asking a model to "act as," for example, a mathematics professor would lead to more accurate answers. However, when a question has a single correct answer, this approach can actually reduce accuracy: it may push the model toward overconfidence and so-called hallucinations, responses that sound convincing but are incorrect.

For brainstorming, advice, or creative exploration, however, assigning roles can still be useful.

Stay neutral
If you’re looking for help making a decision, it’s best not to influence the system with your personal preferences. For instance, if you say you are leaning toward a particular car brand, the chatbot may reinforce that exact choice.

And finally… should we say “please”?

A 2025 survey found that about 70% of users speak politely to chatbots. Most do it simply because they believe it’s the right thing to do, while a small share—around 12%—admit they do it “just in case,” in the event that robots ever rebel.

Politeness probably doesn’t significantly affect model performance. Still, it may offer a different kind of benefit.

“The most important thing is that it can make people feel more comfortable when interacting with artificial intelligence,” Schulhoff says. “It doesn’t improve the model’s performance, but if it helps you use the tool more easily and more often, then politeness still has value.”