Most people who use artificial intelligence (AI) tools accept their answers even when they are incorrect, according to new research that highlights a growing tendency among AI users to abandon critical thinking.
In a paper titled “Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender,” researchers from the University of Pennsylvania describe a phenomenon they call “cognitive surrender,” in which users defer to AI-generated answers with little or no scrutiny.
The study builds on longstanding theories of human cognition that divide thinking into two systems: fast, intuitive responses and slower, more analytical reasoning. The researchers argue that AI introduces a third mode, "artificial cognition," defined as "external, automated, data-driven reasoning originating from algorithmic systems rather than the human mind."
To explore this phenomenon, the team ran experiments based on the Cognitive Reflection Test, which is designed to measure whether participants rely on instinct or on deliberate reasoning. Some participants had access to a chatbot deliberately programmed to give incorrect answers about half the time.
Even when the AI was wrong, users frequently trusted it. Participants accepted correct AI answers about 93% of the time but still accepted faulty responses roughly 80% of the time.
Overall, across 1,372 participants and more than 9,500 trials, subjects accepted incorrect AI reasoning 73.2% of the time and overruled it only 19.7% of the time.
The researchers note that “people readily incorporate AI-generated outputs into their decision-making processes, often with minimal friction or skepticism.” They added that “fluent, confident outputs [are treated] as epistemically authoritative, lowering the threshold for scrutiny.”
The study also found that incentives and time pressure influenced how participants responded to AI-generated information. Small financial rewards and immediate feedback made participants more likely to challenge incorrect AI responses, while time constraints reduced the likelihood of questioning the system’s answers.
Individual differences also played a role. Participants with higher fluid intelligence were less reliant on AI and more likely to override incorrect outputs, whereas those who viewed AI as highly authoritative were more prone to being misled.
Despite the risks, the researchers noted that cognitive surrender is not inherently negative.
When an AI system consistently outperforms humans in reasoning and decision-making, “deferring to a statistically superior system may be adaptive or even optimal,” the study suggests. Researchers noted that relying on AI can be particularly useful in data-heavy or probabilistic tasks where humans are more likely to make mistakes.
Brain Activity Drops When Students Rely on ChatGPT
These findings align with other research on AI's effects on thinking. A 2025 MIT Media Lab study divided 54 participants into three groups: one using ChatGPT, one using Google Search, and one using no tools at all. Each group wrote SAT-style essays while brain activity was monitored with electroencephalography (EEG).
The ChatGPT group produced largely identical essays, which two English teachers described as "soulless," and showed declining brain activity. In contrast, the brain-only group demonstrated higher neural connectivity, creativity, and engagement. The Google Search group also performed well on the task.
Afterward, participants were asked to rewrite one of their earlier essays. The ChatGPT group, working without the tool, recalled little of their original work and showed weaker alpha and theta brain waves, which are associated with learning and memory.
In contrast, the brain-only group, now allowed to use ChatGPT, performed well and exhibited much higher brain activity, including the waves linked to learning and memory.
Cognitive Abilities Decline With Heavy AI Use
In a 2025 paper, Michael Gerlich of SBS Swiss Business School in Kloten, Switzerland, observed that frequent AI use is linked to reduced critical-thinking skills, particularly among younger users who rely heavily on AI tools.
A similar study by Microsoft and Carnegie Mellon University examined 319 professionals who regularly used generative AI. While these tools improved efficiency, they also reduced the participants' engagement in critical thinking.
“It’s great to have all this information at my fingertips,” one participant in Gerlich’s study admitted, “but I sometimes worry that I’m not really learning or retaining anything. I rely so much on AI that I don’t think I’d know how to solve certain problems without it.”
The research highlights that when information is passively received rather than actively processed, the opportunity to critically evaluate its meaning, implications, ethical dimensions, and accuracy is often lost.
“To be critical of AI is difficult – you have to be disciplined. It is very challenging not to offload your critical thinking to these machines,” says Gerlich.
He stresses the importance of “training humans to be more human again – using critical thinking, intuition – the things that computers can’t yet do and where we can add real value.”
Gerlich says big tech companies cannot be relied upon to guide this process: “No developer wants to be told their program works too well; makes it too easy for a person to find an answer. So it needs to start in schools. AI is here to stay. We have to interact with it, so we need to learn how to do that in the right way.”
Failing to follow this guidance, he cautions, risks not only human redundancy but also the erosion of cognitive abilities.