Generative artificial intelligence is changing the way we learn, but it also brings a number of challenges. Professor Federico Gobbo from the University of Amsterdam warns of risks such as plagiarism, echo chambers and false information. He emphasizes the importance of self-awareness, critical thinking and returning to books as an act of resistance to the rapid consumption of information. Can we use AI ethically? The answer lies in understanding its limitations.
Professor Gobbo, PhD, is a professor of interlinguistics and Esperanto at the University of Amsterdam, with a doctorate in computer science. He taught philosophy and history of computing at the University of Insubria, Varese-Como, Italy. Professor Gobbo was a participant in the Language and Artificial Intelligence meeting organized by the Esperanto Association of Bosnia and Herzegovina (Esperanto-Ligo de Bosnio kaj Hercegovino) in February 2025.
Students don't understand what genAI is and use it the wrong way
Science speaks: You have identified four problems in the use of generative artificial intelligence, along with possible answers to them. What are these problems, and what might the solutions look like?
Prof. Federico Gobbo: The challenges posed by generative AI (genAI) chatbots are at the same time new and old. New, because of the magnitude of the data and the appeal of using them for free. Old, because most epistemological and ethical questions surrounding AI have been with us for decades, long before the emergence of Large Language Models. The four problems you mentioned are restricted to the use of generative AI in the context of higher education, in particular by university students.
The most relevant problem is plagiarism. While it is relatively easy to spot a cut-and-paste from a text produced by a genAI chatbot, it is much harder when a student has reworked that text by hand. It is even more difficult with source code, as in introductory Computer Science courses.
So, plagiarism involves both natural language texts and program code. The only partial solution I see for now is honesty: let's acknowledge when we have used genAI tools while writing our papers. This applies to students and researchers alike.
Another important issue, seen specifically among students, is the use of genAI chatbots as if they were search engines. The point is that the algorithms behind genAI were not designed to retrieve information in response to natural language queries, but to understand questions and generate answers.
A paradigm shift is needed: we have to teach students to shut down their electronic devices and go back to libraries, I mean physical libraries, so as to feel a bit less dependent on the digital world. At the very least, they gain a different perspective. The art of browsing through thematically organized bookshelves should be taught again, in my view.
Internal and external awareness
Science speaks: What about situations when AI lies?
Prof. Federico Gobbo: This is also an antidote to the third problem, the so-called hallucination of genAI results. In a nutshell, the answers we obtain are not to be trusted, as the machine fills its knowledge gaps by inventing the information it does not have. And it is good at doing that, making such invented information seem plausible and credible. If students are trained to double-check their sources — also using books as primary sources, reading original authors rather than their commentators — this problem can be, if not avoided, at least reduced.
This brings us to the fourth problem, the so-called echo chamber effect. In short, a genAI chatbot is a people pleaser: it confirms the biases of the user, ultimately narrowing his or her perspective and withholding information that could challenge views, values, and beliefs. The only way to counter this is to be very well aware, as conscious human beings, of who we are in front of the mirror (internal awareness) and in society (external awareness), so as to mitigate the echo chamber effect.
Echo chambers within virtual space
Science speaks: One of the things you emphasized is the importance of being aware of your internal attitude, that is, your perception of yourself, and your external attitude, that is, how others perceive you, in the context of fighting echo chambers. Can you explain in more detail what this means and how it can help us?
Prof. Federico Gobbo: Research shows that internal and external self-awareness are completely independent. At one extreme, a person who considers himself or herself unsuccessful and weak may be regarded as successful and strong by the others with whom he or she is in contact. Having a good job, a satisfying relationship, and a sense of personal and social control increases internal self-awareness; anxiety, stress, and depression, on the contrary, diminish it.
The external counterpart is based on how our values, passions, aspirations, and reactions (thoughts, feelings, behaviours) are perceived by others.
I am deeply convinced that working on both internal and external self-awareness improves our happiness, not only in relation to AI but in general.
Science speaks: When we talk about the relative predictability and control of technology, including AI, you mentioned one important point – the intersection of curves where control decreases and predictability increases… Have we perhaps already passed the point where we can control these technologies, given that the world has changed so much compared to just five years ago?
Prof. Federico Gobbo: I think we are still in the infancy stage, where many different genAI tools are being proposed, with different specialisations, and the situation is magmatic, prone to sudden changes. Basically, we have not yet passed the point of stability, when these new technologies (after all, they emerged in 2022!) will be taken for granted, as machine translation or free encyclopedic resources such as the Wikipedias are today. But that point will be reached soon. We have to act quickly if we want to steer society in one direction or the other. This is what politics, in its highest sense, should do.
Science speaks: What do you notice as the biggest student mistakes in learning and using technology?
Prof. Federico Gobbo: Blind trust. Most of them believe, in a kind of naive positivism, that a new technology is beneficial simply because it is new. This happens because a new technology carries an aura of magic, as we do not fully understand how it works. We should demystify our relationship with technology.
Books are resistance
Science speaks: Can we bring books back – now that they have become outcasts and, in a way, avant-garde? To what extent do you think that reading today, and owning libraries, is a political act of resistance?
Prof. Federico Gobbo: I think that reading serious books to acquire structured knowledge is a form of political activism, almost revolutionary. We are more and more accustomed to approaching texts as fast consumption, which leads to a simplification of phrases, paragraphs, sections, chapters… Thirty years ago, when I entered university, having 12 full books to read for an exam in the humanities (approximately 2,000 pages) was normal. Now, as a university professor, if I did something similar, students would bring me before some commission to justify my crazy choice.
Esperanto as a humanistic project
Science speaks: How does Esperanto teach us humanism and the ethical use of AI technology?
Prof. Federico Gobbo: Esperanto is a miracle: a language without political power, sustained by an ideal of brotherhood between individuals and respect among nations, religions, and beliefs in general, which survived two world wars in spite of explicit persecution. Who could have believed that such a humanistic foundation could be so strong?
Esperanto is a humanistic project that became reality, against all predictions. Its ethical solidity teaches us to be better human beings, beyond our differences in passports, gender, income, ethnicity, age, and so on. All Esperanto speakers are at least bilingual. It is very intriguing to see how the echo chamber effect changes when you chat with genAI bots in your other natural language(s) and in Esperanto; the answers may be slightly different. Comparing the results makes us more aware of how the algorithms work in their everyday reality; and the more we understand how they work, the more ethically we can use these technologies.
Jelena Kalinić, MA in comparative literature and graduate biologist, is a science journalist and science communicator. She holds a WHO infodemic manager certificate and has completed training in health metrics, study design, and evidence-based medicine. Winner of the 2020 EurekAlert! (AAAS) Fellowship for Science Journalists and runner-up (second place) in the selection for European Science Journalist of the Year for 2022.