The use of artificial intelligence (AI) changes our brain: it changes how the brain works, how it remembers, and what its capacities are. We often use the analogy that we stopped remembering phone numbers once we got smartphones, but that analogy loses its meaning with generations born into a digital environment who never needed to remember phone numbers…
Okay, we don't have to remember phone numbers. But new technology hasn't freed our brains to be more useful and more human, to remember stories, to learn, to learn from stories. It has led us to a point where we don't know how to tell stories and have difficulty reading texts longer than 300 words.
With AI, things get worse: people ask artificial intelligence to plan their trip or their training, to be their consultant, lover, psychotherapist, life coach, all in order to save time. But the time we used to spend planning and coming up with things had its place in the brain, and in the soul. Decision-making is a cognitive activity that follows a certain methodology even when we are not aware of it: gathering information, balancing desires against possibilities, setting goals, choosing a course of action, setting priorities. And all of this still has to be sifted through a filter of personal and social moral values – what is good and what is not, what is fair.
Cognitive Surrender: A Term You Need to Remember
Letting AI make decisions for us has completely changed the way we make decisions. Clearly, some people consult AI to save time, because the terror of productivity forces them to, and for others it is simply easier. And it wouldn't be so terrible if each of us checked and questioned the advice and decisions AI serves us, especially through the prism of moral principles. But studies have shown that most people do not question what AI throws at them; they take it at face value. The problem is no longer just whether AI will throw out something incorrect, untrue, dangerous or immoral (which can happen); it is a matter of something we now call cognitive surrender: acceptance before verification. Sound familiar? For over a decade, most people have been doing exactly this on social media. So why would AI be any different?
By the way – social networks have trained us to switch our brains off: not to think, not to verify. No one is asking us to be erudite – just to check the information.
Scrutinize: A Wonderful English Word
The English word scrutinize means to examine something closely, to question it, not to take it for granted; to study something very thoroughly in order to discover and understand it; to put it to the test. When we take what AI serves us for granted, we never arrive at the truth, at discovery, revelation, epiphany, whatever you want to call it. We only consume an ultra-processed product. And just as with ultra-processed food, so with ultra-processed cognition: it slows down our cognitive metabolism, disorders our appetite for information, and we seek out more and more junk information.
Scrutinize is related to a custom in early Christianity called scrutinium, a rite of purification during Lent. The term was also used for examining and questioning candidates before they entered a profession – a final check, for example, before ordination as a deacon. It is a rite of passage: is the person ready for the position, is he up to it, are his morals and way of thinking worthy of it?
Experiments Show That People, When They Can, Hand Over Their Cognitive Abilities to AI – As Many as 80% Do So
Now let's go back to the scientists and their experiments. Researchers at the Wharton School of the University of Pennsylvania wanted to test empirically the hypothesis that AI technology literally changes the way we make decisions – the process itself – that is, that people rely more on AI than on their own brains. In one of their experiments, published in Computers in Human Behavior in 2024 with about 1,300 respondents, about 80% of them relied on AI tools. Respondents followed AI advice even when it conflicted with the context of the situation and went against their own interests – that is, when it was bad for them.
A paper posted in January 2026 on the University of Pennsylvania's preprint page comes from the same university. It describes exactly how generative artificial intelligence is changing the way people think and make decisions, and introduces a new theoretical model called Tri-System Theory.
What does this mean?
Before AI, we had two systems of thought: System 1, which is fast, intuitive and automatic (the gut feeling), and System 2, which is slow, analytical and conscious. We need both and use them in different situations. Sometimes we use System 1 even when we have time to think – say, when we share misinformation that triggers fear, disgust or panic in us. But we need this system for quick reactions to immediate danger (we see a bear in the forest, for example, or there is a fire, an earthquake, or someone is drowning).
But now we also have System 3: artificial cognition outside the brain. It can extend our abilities, complement our thinking, serve for ideation, or replace our thinking entirely. Thinking becomes hybrid – human and AI. In this sense, although we are not physically integrated with AI, we are in a way cyborgs.
This preprint also reports that interesting finding of cognitive surrender: people accept the AI's answer without questioning it. They ignore their own intuition (System 1), skip analysis (System 2) and simply trust AI (System 3).