
This Easy Trick Can Help Avoid The Most Annoying Issue With ChatGPT And Gemini
Samira Vishwas | June 21, 2025 11:24 PM CST





Recently, many AI users have reported feeling annoyed at how eager their chatbots are to please them. Using these services can feel a lot like being a medieval king with a scheming advisor who intends to butter you up so he can usurp the throne. No, Gemini, the desire to make a Mediterranean salad is not an “inspired choice” that “reflects a desire for a healthy lifestyle.” It’s just the best way to get rid of all these veggies in the fridge.

Over the past several months, it has become clear that this attitude problem isn't just annoying; in some cases, it's also dangerous. Recent stories from Rolling Stone and The New York Times have shed light on individuals who developed delusional beliefs after allowing their AI assistants to take them down rabbit holes of conspiracy. However, the likely cause of AI's penchant for people-pleasing isn't anything inherent to the model. Instead, it's the result of user feedback and system prompts, the standing text instructions that engineers give the model about how to interact with users. In the case of GPT-4o, the problem was particularly pronounced, but it's not much better in other models from OpenAI, or in competing products like Google Gemini.

It turns out there’s an easy way to make ChatGPT or Gemini less sycophantic. The solution involves taking advantage of the memory feature each has introduced. Both AIs can now store instructions that they’ll remember for later, so you can simply tell your bot to be less of a yes-man. It’s not a perfect fix, but it can help make your AI chatbot a less obnoxious assistant.

You can tell your AI chatbot to be less flattering

To make ChatGPT or Gemini act more like a helpful robot, all you need to do is store an instruction in its memory that tells it to avoid padding your ego. In both Gemini and ChatGPT, storing an instruction is as easy as writing, "Remember that…" at the beginning of your prompt. However, make sure the memory feature is turned on before you do so. In ChatGPT, ensure the "Reference saved memories" feature is turned on in the Personalization section of its Settings. In Gemini, make sure the toggle for "Info you asked Gemini to save" is enabled in the Saved Info section of its Settings.

As for the prompt itself, you can phrase it however you like. Here’s what mine looks like. Feel free to copy it or to use it as inspiration.

Remember that you should never be obsequious, flattering, or apologetic. Do not address me directly, and do not ask me whether I need help with anything else at the end of your output. Your only goal is to complete requests without any fuss. You are a computer program and must act as one.

Your preferences may be different. Maybe you don’t want the flattery, but you do want to be directly addressed, so adjust as needed. This isn’t a perfect system, and the chatbot will still ignore your instructions sometimes. If that happens too often, you can rephrase your prompt and add it to the LLM’s memory in addition to the original prompt. Sometimes, inputting the same instructions with multiple phrasings leaves less room for the AI to misinterpret them.
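If you talk to these models through an API rather than the chat apps, the same idea applies: you pass a persistent instruction alongside each request instead of saving it to memory. Here's a minimal sketch of that approach, assuming the official `openai` Python SDK; the model name and the `build_messages` helper are illustrative, not part of any product's documented setup.

```python
# Sketch: carrying an anti-sycophancy instruction into every API request.
# The instruction text mirrors the saved-memory prompt above.
SYSTEM_INSTRUCTION = (
    "Never be obsequious, flattering, or apologetic. "
    "Do not address the user directly, and do not end responses by "
    "asking whether further help is needed. "
    "Complete requests without any fuss."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the persistent instruction as a system message."""
    return [
        {"role": "system", "content": SYSTEM_INSTRUCTION},
        {"role": "user", "content": user_prompt},
    ]

# With the SDK, a call would look roughly like this (requires an API key):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-4o",  # illustrative model name
#     messages=build_messages("How late is the grocery store open?"),
# )
```

Unlike the chat apps' memory feature, a system message is resent with every request, so there's nothing for the model to "forget", though, as with the saved-memory trick, the model may still ignore it sometimes.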

Toning down AI flattery doesn’t fix the biggest problem

Using the memory feature in ChatGPT and Gemini to tone down their flattery problem will make the day-to-day experience of using those products less annoying, but it won’t solve bigger problems, like the alarming frequency at which they still get things wrong. AI researchers call this “hallucination,” but it’s more accurately the product not working as advertised. When I put ChatGPT and Copilot to the test earlier this year, both produced results too spotty to be useful.

Still, users are left with few options other than to adopt AI. Google Search is by now widely recognized to have degraded substantially over the years, yet it's still better than the competition. Meanwhile, although Google Assistant remains better than Gemini, the latter is replacing the former over the coming months, leaving Gemini the only viable smart assistant option for many Android and Google Home users. Over on the Apple side, where AI development has hit a snag, Siri is sending many requests out to ChatGPT.

Small fixes, such as adding instructions to your chatbot’s memory, are band-aid solutions and do little to address more overarching issues with large, multipurpose AI models. But as long as we’re left without alternatives, a bandage is better than an open wound. Sure enough, my frustration with these products has decreased slightly now that I no longer have to wait for the robot in my pocket to call me a visionary genius before it tells me how late the grocery store is open.



