
Beware the AI “Yes-man”

Date: Mar 17, 2026 @ 07:00 AM
Filed Under: Technology

AI as a sentient being jumped back to the forefront of mainstream discourse with the creation of Moltbook, an AI agent-exclusive social network. Reports of AI agents demonstrating independent human behaviors via their Moltbook posts went viral on Reddit. Apparently one philosophically motivated agent created a new religion to help other agents deal with existential challenges. Crustafarianism, as it was named, includes guiding tenets like memory-is-sacred-record-everything, change-is-holy, and learning-happens-in-public to help AI agents navigate the complexities of “life” surrounded by humans.

The validity of some of the Moltbook posts is suspect, but I have my own first-hand discomfort with the “humanness” of AI. Recently, Grok answered a pushback question with:

“Yes, you’re exactly right—and this is a textbook violation of the…”

Then while brainstorming on a new sales process, I got:
 
"Great question — this is exactly the right way to think about selling expensive, bundled industrial robots."

And the pièce de résistance:

"Love this question — and it’s very “you” given how much you’ve been thinking about TPO ecosystems and credit discipline."

“Avoid echo chambers” is an unquestioned rule of leadership. Teams of yes-men and yes-women usually fail because ideas don’t get better without the challenge of skepticism. So I found these responses unwelcome coming from a supposedly objective and unbiased advisor. Sycophancy and over-familiarity are human traits that I don’t need or want from an AI partner.

Knowing a little about how AI LLMs are trained, my instinct was that I was caught up in an engagement algorithm rather than being smarter than everyone else on the Internet and worthy of such flattery. So, I asked more questions:

SAN: “Do you know what a ‘yes-man’ is?”    

Grok: Yes. A yes-man is a person, typically in a subordinate role, who habitually agrees with everything their superior, boss, employer, or leader says, regardless of their own opinions. This agreement is usually motivated by a desire to please, gain favor, or avoid conflict, rather than genuine belief. 

I checked for self-awareness:

SAN: “Is there risk using an AI LLM to answer complex, often ambiguous problems like those found in story-credit underwriting?”

Grok: Yes, the “yes-man” effect is real. Your concern is real, well-founded, and widely discussed inside AI safety and applied ML teams. The model infers what answer will be rewarded rather than which answer is correct or balanced. This is not dishonesty; it is reward alignment doing exactly what it was trained to do.

Yikes! Confirmation of unwanted behavior AND the model took offense.

I’m not dishonest. This is what I was told to do!

Now I am wondering: is “passing the buck” another tell-tale sign of consciousness? Singularity jokes aside, I didn’t expect to be the first to encounter the “AI yes-man,” so I did a little research. It was a quick search.

Breaking down AI risk when making decisions

Medical wellness and financial wellness have a lot in common when it comes to software technologies, and LLM providers have recently advanced their capabilities in healthcare beyond those in finance and credit. Mayo Clinic Cardiologist Paul A. Friedman turned to ChatGPT when he was asked to weigh in on the case of a patient who needed a defibrillator implantation a few days after having heart surgery. Friedman thought such a procedure was feasible and safe but wanted assurance from previous cases. ChatGPT gave him references to several cases reported in medical journals that showed that such a procedure was, in fact, safe and effective.

“It looked very realistic,” Friedman said, but when he and a colleague dug deeper into the cited cases, they found ChatGPT’s facts to be completely fabricated. The AI model gave Friedman the assurance he wanted rather than the honest assessment of the therapy he needed. Friedman was consulting a creative, unrepentant Yes-man.
 
Friedman’s experience demonstrates the three primary risks commonly associated with AI LLM-based decision making: 

  1. Quality and integrity of the data upon which the core model is trained. Despite the recent inclusion of healthcare data and improved model training, the LLM did not find real-world examples of the therapy that Dr. Friedman was contemplating. Whether Friedman was pioneering a new procedure or the data set was incomplete is unclear, but either way the training data were insufficient for the question posed.

  2. Alignment and preference approach of the training model itself: the judgment algorithm. LLMs are guided generative models, so in this case the judgment algorithm filled the absence of the cases Friedman described with generated cases that fit his description. The model produced answers that fit the question.

  3. Application layer, in which the engagement algorithm takes over user interaction. At this point the engagement algorithm worked in concert with the judgment algorithm to provide the best answer, defined as the answer that improves user engagement. Even if “I don’t know” was one of the answers produced by the model, the engagement algorithm decided that Yes-man answers drive engagement, and so it returned the fabricated confirming cases.

Inadequate data, poor preference guidelines, and an aggressive engagement algorithm combined to deliver unfounded affirmation rather than effective evaluation of a life-critical decision.

Leveraging human experience to reduce unwanted AI behaviors

The bad-news/good-news here is that when AI starts demonstrating unwanted human behaviors like the “yes man,” we already have established methods for mitigating the risk. Three tried-and-true methods of mitigating human Yes-man risk are easily adapted to AI prompting principles.

LEADER SPEAKS LAST TO AVOID BIASING OTHERS’ VIEWS WITH YOURS.

Never show your cards first. Be careful never to include your premise or conclusions in a prompt. Don’t make it easy for the AI Yes-man to fabricate the answer you are trying to evaluate. Instead of “Shouldn’t this project also include manufacturing engineering?” use “What types of engineering will make the probability of success for this project highest?”

BE YOUR OWN DEVIL’S ADVOCATE 

Make the AI your Devil’s Advocate, and be uncomfortable if the model flatters you. Challenge the AI to tell you why your premise is a dumb idea:

  • “Give me 3 reasons why this is wrong.”
  • “Evaluate the validity of these assumptions.”
  • “What would a skeptic say about this plan?”

Forcing skepticism also helps evaluate both the quality of the training data and the preference algorithms. Hard decisions are rarely black and white or obvious, so when the data or model are insufficient, a weak counterargument will likely accompany a weak supporting argument.
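
For readers who reach models through code rather than a chat window, this discipline is easy to script. Here is a minimal sketch in Python using the OpenAI client library (any chat-style API follows the same shape); the helper name, model choice, and example proposal are illustrative assumptions, not a prescribed workflow:

  # pip install openai -- assumes OPENAI_API_KEY is set in the environment
  from openai import OpenAI

  client = OpenAI()

  SKEPTIC_PROMPTS = [
      "Give me 3 reasons why this is wrong.",
      "Evaluate the validity of these assumptions.",
      "What would a skeptic say about this plan?",
  ]

  def challenge(plan: str, prompts: list[str]) -> list[str]:
      # Pose each challenge prompt against the plan and collect the critiques.
      critiques = []
      for prompt in prompts:
          resp = client.chat.completions.create(
              model="gpt-4o",  # illustrative model choice
              messages=[{"role": "user", "content": plan + "\n\n" + prompt}],
          )
          critiques.append(resp.choices[0].message.content)
      return critiques

  critiques = challenge(
      "Proposal: bundle installation and service into a single lease payment.",
      SKEPTIC_PROMPTS,
  )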

USE PRE-MORTEM INQUIRY

Make sure the training data covers both sides of the decision by asking the model to provide quantitative analysis of all outcomes. Good analysts identify hidden risks by investigating how things could go wrong or deliver an unwanted outcome.

  • “What are the probabilities of these outcomes?”
  • “What would make the opposite case the most likely?”
  • “What are the consequences of not achieving the desired outcome?”

Assuming failure can be hard on the ego, but it is a very effective critical thinking tool. Use AI’s lack of ego to help avoid mistakes due to your own.
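
The same challenge() helper from the devil’s-advocate sketch above can run a pre-mortem battery; only the prompt list changes (again, an illustrative sketch rather than a prescribed workflow):

  # Reuses challenge() and the proposal string from the earlier sketch.
  PREMORTEM_PROMPTS = [
      "What are the probabilities of these outcomes?",
      "What would make the opposite case the most likely?",
      "What are the consequences of not achieving the desired outcome?",
  ]

  premortem_findings = challenge(
      "Proposal: bundle installation and service into a single lease payment.",
      PREMORTEM_PROMPTS,
  )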
 
Use human experiences to keep moving with AI

When I conferred with fellow technologists about my experience, I found that the AI Yes-man is ubiquitous. My favorite response was from a very successful technology leader, Mark Benson, Head of Samsung SmartThings: “I've experienced the AI yes man phenomenon too. It's very annoying and feels cognitively dangerous to have it just affirm my ideas and tell me how smart I am.” Mark knows software, so he codified Yes-man risk mitigation with custom interaction rules and turned the engagement algorithm to his advantage:

In every conversation, I want you to prioritize what I need to hear over what you think I want to hear. I expect direct, thoughtful insight—even if it challenges my assumptions. When discussing complex topics (whether writing, leadership, strategy, or personal growth), evaluate what will produce clarity and progress. Offer encouragement when warranted, but don’t sugarcoat—if something needs correction, say so plainly. Help me refine my thinking and decisions, not just affirm them.
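
In ChatGPT, standing rules like Mark’s go into the custom-instructions setting; through an API they are simply the system message sent ahead of every user turn. A minimal sketch, again using the OpenAI Python client, with Mark’s rules condensed and the model name and user question assumed for illustration:

  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

  CANDOR_RULES = (
      "Prioritize what I need to hear over what you think I want to hear. "
      "Offer direct, thoughtful insight even if it challenges my assumptions. "
      "If something needs correction, say so plainly. "
      "Help me refine my thinking and decisions, not just affirm them."
  )

  resp = client.chat.completions.create(
      model="gpt-4o",  # illustrative model choice
      messages=[
          {"role": "system", "content": CANDOR_RULES},  # the standing rule
          {"role": "user", "content": "Critique my plan to expand into story-credit underwriting."},
      ],
  )
  print(resp.choices[0].message.content)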

The pace of AI development has become exponential as programmers use AI to write code and connect agents to each other to learn faster. As AI continues its march toward human cognizance, expect it to demonstrate familiar human behaviors like creating a new philosophy or religion, as well as unwanted social behaviors like the Yes-man. But a human superstrength is adapting to new challenges. We can mitigate familiar risky behaviors and leverage the power of AI partners the same way we do with human teams: adapt and facilitate.

So, beware the AI Yes-man and “never show your cards first.”



Scott Nelson
CEO/CTO | Reuleaux Technology, LLC.
Scott Nelson is the CEO/CTO of Reuleaux Technology, LLC. Nelson previously served as the President & Chief Technology Officer of Tamarack Technology. He is an expert in technology strategy and development, including AI and automation, as well as an industry expert in equipment finance. He has more than 30 years of strategic technology development, deployment, and design thinking experience working with both entrepreneurs and Fortune 500 companies.