The use of artificial intelligence (AI) changes our brain, in the sense that it changes how the brain works, how it remembers, and what its capacities are. We often use the analogy that we stopped remembering phone numbers once we got smartphones, but that analogy loses its meaning for the generations who were born into a digital environment and never needed to remember phone numbers…

Okay, we don't have to remember phone numbers. But new technology hasn't freed our brains to be more useful and more human, to remember stories, to learn from stories. Instead, it has led us to the point where we don't know how to tell stories and have difficulty reading texts longer than 300 words.

With AI, things get worse: people ask artificial intelligence to plan their trips and training, to be their consultant, lover, psychotherapist, life coach, all in order to save time. But the time we spent planning something, coming up with something, had its place in the brain, and in the soul. Decision-making is a cognitive activity that follows a certain methodology, even when we are not aware of it: gathering information, balancing desires and possibilities, setting goals, choosing a course of action, setting priorities. And all of this still needs to be sifted through a filter of one's own and society's moral values: what is good and what is not, what is fair.

Cognitive Surrender: A Term You Need to Remember

Letting AI make decisions for us has completely changed the way we make decisions. Clearly, some people consult AI to save time, because the terror of productivity forces them to, and for others it is simply easier. And it wouldn't be so terrible if each of us checked and questioned the advice and decisions that AI serves us, especially through the prism of moral principles. But studies have shown that most people do not question what AI throws at them; they take it for granted. The problem is no longer just whether the AI will output something incorrect, untrue, dangerous, or immoral (which can happen), but something we now call cognitive surrender: adoption before verification. Sound familiar? For over a decade, most people have been doing exactly this on social media. So why would AI be any different?

By the way, social networks have already trained us to switch our brains off: not to think, not to verify. No one is asking us to be erudite, just to check the information.

Scrutinize: A Wonderful English Word

The English word scrutinize means to examine something closely, to question it, not to take it for granted. It means to study something very thoroughly in order to discover and understand it, to put it to the test. When we take for granted what AI serves us, we never arrive at the truth, the discovery, the revelation, the epiphany, whatever you want to call it. We only consume an ultra-processed product. Just as with ultra-processed food, the same thing happens with ultra-processed cognition: it slows down our cognitive metabolism, disorders our appetite for information, and leaves us craving more and more junk information.

Scrutinize is related to a custom in early Christianity called scrutinium, a rite of purification during Lent. The term was also used for examining and questioning candidates before they entered a profession: the final check before, for example, ordination as a deacon. It is a rite of passage, determining whether the person is ripe for the position, whether he is up to it, whether his morals and way of thinking are worthy of it.

Experiments Show That, When They Can, People Hand Over Their Cognitive Abilities to AI: As Many as 80% Do So

Now let's go back to scientists and experiments. Researchers at the Wharton School of the University of Pennsylvania wanted to test empirically the hypothesis that AI technology literally changes the way we make decisions, the process itself; that is, that people rely more on AI than on their own brains. In one of their experiments, published in Computers in Human Behavior in 2024 with about 1,300 participants, about 80% relied on AI tools. Participants followed AI advice even when it conflicted with the context of the situation and went against their interests, that is, when it was bad for them.

A paper posted in January 2026 on the University of Pennsylvania's preprint page comes from the same university. It describes exactly how generative artificial intelligence (AI) is changing the way people think and make decisions, and introduces a new theoretical model called Tri-System Theory.

What does this mean?

Before AI, we had two systems of thought: System 1, which is fast, intuitive, and automatic (gut feeling), and System 2, which is slow, analytical, and conscious. We need both and use them in different situations. Sometimes we use System 1 even when we have time to think, say, when we share misinformation that triggers fear, disgust, or panic in us. But we need this system for quick reactions to immediate danger (we see a bear in the forest, for example, or there is a fire, an earthquake, or someone is drowning).

But now we also have System 3: artificial cognition outside the brain. It can extend our abilities, complement our thinking, serve for ideation, or completely replace our thinking. Thinking becomes hybrid, human and AI. In this sense, although we are not physically integrated with AI, we are in a way cyborgs.

This preprint also reports the interesting finding of cognitive surrender: people accept the AI's answer without questioning it. They ignore their own intuition (System 1), skip analysis (System 2), and simply trust the AI (System 3).

In this research, participants solved tasks requiring logical thinking and could choose whether to answer on their own or seek the help of an AI assistant such as ChatGPT. Crucially, the AI was not always reliable: the researchers deliberately tuned it to give sometimes correct and sometimes wrong answers, without the participants' knowledge. In this way, they monitored how often people used the AI, how much it helped or misled them, and how confident they were in their answers. The experiment was conducted in three variants: a basic version, one with time pressure, and one with rewards for accuracy. The goal was to understand not only whether people rely on AI, but also how their behavior changes under different decision-making conditions.

Participants with greater trust in AI, a lower need for cognition, and lower fluid intelligence showed greater surrender to System 3.

Corporate Forcing of AI on Employees: No One Thinks About How It Affects Cognitive Abilities

It should be added that corporations are increasingly forcing employees to use AI models, most likely in order to ultimately reduce headcount, since one person can then be made to do more. This is something I would call corporate forced cognitive surrender.

Intentionally or not, companies are practically dumbing down their employees this way, turning them into AI zombies who will eventually be unnecessary to the company: employees who can no longer think for themselves or take a necessary action unless a request or order is sent to them. They cannot devise the steps of a solution, an algorithm, nor can they improvise and handle new situations, which is essentially the definition of intelligence.

It will be interesting one day if the power goes out.

People Like to Be on Autopilot and Don't Invest Energy in Learning and Understanding

Realistically, how many people want to go through the process of learning and questioning things? Even before AI, it was a minority. But there was also less damage done by people enjoying autopilot. Now the damage is amplified and more disastrous, because the outsourcing of cognitive functions is greater than at any time in history. We have never had a tool that lets our brains idle so much…

This, combined with the wholesale neglect of teaching empathy, logic, logical fallacies, philosophy, psychology, the way we think, and ethics, represents the perfect storm for the downfall of our species, especially in combination with AI warfare and AI decision-making in attacks on military targets…

The only solution is to strengthen what is human in us, now more than ever. I'm talking about very structured teaching of empathy and ethics, strengthening not only media, scientific, and digital literacy, but above all social and emotional intelligence. Otherwise we are doomed to become tin men: without brains, without hearts, without inherent wisdom, and without balls.

Cunning is cleverness without morals, and wisdom is cleverness with morals. The struggle for the survival of the human species begins with resistance. To begin with, resistance to cognitive surrender, which tends to destroy the world. And destruction is far simpler than creation.

True learning requires effort, an expenditure of energy and time, and learning is what, alongside empathy and ethics, makes us human. In his address at Wits University, engineer Sir John Lazar also mentioned the concept of cognitive endurance as an antidote to cognitive surrender: teaching understanding of the subject, thinking, independent performance of the task, and reflection on it.

References:

  1. Klingbeil, A., Grützner, C., & Schreck, P. (2024). Trust and reliance on AI – An experimental study on the extent and costs of overreliance on AI. Computers in Human Behavior, 160, 108352.
  2. Shaw, S. D., & Nave, G. (2026, January 11). Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender. The Wharton School Research Paper. https://doi.org/10.31234/osf.io/yk25n_v1; available at SSRN: https://ssrn.com/abstract=6097646 or http://dx.doi.org/10.2139/ssrn.6097646
  3. Futurism: "Alarming Study Finds That Most People Just Do What ChatGPT Tells Them, Even If It's Totally Wrong."