The Pope’s AI guru on the human impact of tech

Think:Act Magazine "Thinking in decades"

February 17, 2025

Preserving our human rights and human values in an era of expanding digital dominance


Interview

by Steffan Heuer
Illustrations by Jan Robert Dünnweller

Franciscan Friar Paolo Benanti has carved out a respected niche as the Vatican's semi-official voice on the thorny matters of rapid technological progress and what it means to be human in the 21st century.

He may live in a 16th-century monastery in the heart of Rome and teach at the Pontifical Gregorian University, which traces its roots back to a Jesuit college founded in 1551, but Paolo Benanti is deeply engaged with the here and now. He grew up playing Dungeons & Dragons, wears an Apple Watch and is a fixture on the international tech scene, putting artificial intelligence into context as the Vatican's most prominent tech theologian. In this interview with Think:Act, he delves into the moral as well as the very tangible risks if artificial intelligence is left to innovate at breakneck speed.

You are a professor of ethics at the Pontifical Gregorian University in Rome, which is considered the Harvard of the Vatican, but you're also in the headlines frequently as the advisor to the Pope on artificial intelligence. Why does the Vatican need an AI advisor?

My daily work is at the university, teaching and writing. The idea of me advising the Pope is a misunderstanding by the media. I'm only a consultant. A consultant is a special role in the Vatican where the Pope appoints someone, usually a professor, to help work on some topics. I'm not someone who whispers into the Pope's ear. My role is to serve as an official element of advice in a broader framework. When the Holy See calls on consultants like me with a specific question, we write documents and have some kind of dialogue.

Paolo Benanti 

Paolo Benanti studied engineering before dropping out, joining the Franciscan Order and earning a Ph.D. in moral theology at the Pontifical Gregorian University, where he has taught on the moral questions of technology since 2008. He is also an AI advisor to the Italian government and author of the book Homo Faber: The Techno-Human Condition.

Bridging the worlds of faith and tech, you also work to facilitate a dialogue between tech companies such as Microsoft and Cisco and the Vatican. How would you describe Pope Francis' knowledge of AI?

Pope Francis is 86 (at the time of the interview) and not an engineer, but when you look at the pontifical declarations made by him, they are characterized by a really deep ability to perceive what the most challenging questions are for human beings. It started at the outset of his papacy with a visit to Lampedusa in July 2013 to pray for refugees and migrants lost at sea. He followed up with the encyclical Laudato si' in 2015 which addressed ecological concerns. And now you have the big topic of AI, where he has repeatedly spoken about the potential to contribute in a positive way to the future of humanity, but also cautioned that this requires a constant and consistent commitment on the part of those developing these technologies to act ethically and responsibly. Don't forget that one of the most viral images was the Pope "wearing" an AI-generated white puffer jacket. In his message for the Catholic Church's World Day of Social Communications in May 2024, he clearly stated: I, too, was an object of a deepfake. His own perspective, then, is not on himself – it's on the effect on people.

"AI can be a tool or can easily be weaponized."

Paolo Benanti

AI consultant to Pope Francis

What are, to your mind, the biggest and potentially existential risks that AI poses to mankind?

Just as when we first picked up a club 10,000 years ago, AI can be a tool or can easily be weaponized. It depends on culture, which serves as a complement to nature and arises out of language. If you can hack language, you can hack culture, knowledge and a core set of beliefs that characterize humanity. Language is a very powerful technology. First words were spoken, then written down, then came the printing press. Now we have a computed language. I think this is where the major risk comes from, because we learned during modernity to shape our knowledge in an architectural way, with knowledge remaining in self-sufficient silos. Engineering is engineering. Science is science. Now, if you interact with an AI, the knowledge that the AI can suggest to you is oracular. Everything is mixed together – and you have to trust it. What we know of ourselves is not automatic knowledge, but something that is handed from one generation to another. Once you insert AI in this chain and it's not well designed, it could have a really profound effect on who we are and how we understand the world.

Helping the Pope with AI thoughts - How some of Paolo Benanti's ideas found their way into Pope Francis' speech at the G7 on AI in 2024

Paolo Benanti:
"Human dignity and human rights tell us that it is man who must be protected in the relationship between man and the machine."

Pope Francis:
"We need to ensure and safeguard a space for proper human control over the choices made by artificial intelligence programs: Human dignity itself depends on it."

Paolo Benanti:
"What we are witnessing is a fundamental and deep transformation in the way of conceiving reality and ourselves especially […] and what is meant by 'man as a moral agent.'"

Pope Francis:
"We seem to be losing the value and profound meaning of one of the fundamental concepts of the West: that of the human person."

Paolo Benanti:
"Technology is part of what it means to be a human being. That is the techno-human condition. Technology is always present, even though it’s not always the same throughout the human story."

Pope Francis:
"Our ability to fashion tools […] speaks of a techno-human condition: Human beings have always maintained a relationship with the environment mediated by the tools they gradually produced."

Paolo Benanti:
"We must establish a language that can translate moral value into something calculable for the machine. The perception of ethical value is a purely human capacity. The ability to work on numerical values is rather the ability of the machine. Algor-ethics is born if we are able to transform moral value into something calculable."

Pope Francis:
"An ethical decision is one that takes into account not only an action's outcomes, but also the values at stake and the duties that derive from those values … With the term 'algor-ethics,' a series of principles are condensed into a global and pluralistic platform ... "

In which specific areas do we need regulation? You have made the case that guardrails for AI are similar to mandating the use of seat belts.

When we develop a car, we start by drafting the rules for driving in public. Those rules are not made to constrain the technological evolution of cars but to avoid accidents. Because we now have machines that can automate decision-making or can be applied in medicine – in other words, replace humans in a lot of processes – we need guardrails to avoid accidents in those realms. Think of it as some kind of safety belt to prevent human dignity, the most fragile part of us, from being damaged by the speed of innovation. Innovation has to serve the public good.

"Think of it as some kind of safety belt to prevent human dignity, the most fragile part of us, from being damaged by the speed of innovation."

Paolo Benanti

AI consultant to Pope Francis

How much time do you think the public, politicians and regulators have to put these guardrails and mandates in place?

The world of technology is evolving really fast. But there is a flip side when regulations are put in place too fast, without us being able to understand or allowing new technology to develop. Take the privacy regulations passed in the EU called GDPR. We were able to define human rights around our data – and the slowness of this process turned out to be an advantage. It allowed us to protect not a fixed list of data, but all data that could be personalized. Now that AI can associate lots of new data points to a single human being, it also becomes protected data. The slowness of the legal process gives us not a list of forbidden and permitted technology, but the broad criteria to highlight what we'd like to protect when it comes to human identity – and dignity. The European Union's AI Act is going in the same direction because it isn't covering any industrial use of technology but focusing on protecting human rights.

The debate over regulation vis-à-vis progress is not new. Do you consider it an uneven race?

Regulation is always a compromise. I don't see it as a cat-and-mouse game. AI technologies are in some way temporary because they are continuously evolving. We also have to understand regulation as something temporary that needs to be adjusted.

What about enforcement? We are dealing with multinational players and multinational problems. Do we need global entities or something similar to the Non-Proliferation Treaty for nuclear arms?

It's certainly debatable if that treaty is really working, but something is better than nothing. Enforceability is the real problem, because we don't have global police that can step in. From my experience working on the United Nations' High-level Advisory Body on Artificial Intelligence, which was formed in October 2023, we have no solution. Sure, we can draft the ideal regulatory setup for the world, but then every country has its autonomy, which also involves the power of markets and money.

The Rome Call

Initiated in part by Paolo Benanti, the Rome Call is a nonbinding, interreligious document to promote shared global responsibility for ethical AI. The first signatories included IBM and Microsoft.

When it comes to ethics in AI, you're the driving force behind something called the Rome Call. What do you want to accomplish?

The Rome Call isn't a normative framework. It's an attempt to establish a setting in which different entities – churches or faiths, states or NGOs, universities and tech companies – can come together and define ethics for AI and then align what they do with those principles. Companies, for example, are discovering some kind of responsibility in their own processes and might mandate their engineers to take ethics classes. We hope we can create an ethical movement through shared principles, but we are not giving anyone a stamp of approval.

The initiative has been able to win over other world religions since it started in 2020. How can the world's large religions work together to ensure that AI serves all of humanity?

Having other faiths join is the most interesting part, from Asian religions to Judaism. It's a new perspective to look at how religions are leaving divisions and fights behind to say we are united on the topic of AI ethics. We want to make sure AI will be a tool for human development and not oppression.

What does that look like in practical terms?

Education is a good example. If you look at just Christianity, we are serving tens of millions of students. Then you have Buddhist, Shinto, Muslim and Jewish educational institutions all talking to future generations about the ethics of AI. We are fostering, nourishing a healthy sensibility of technology. This will have an effect, because it's part of the cultural process and how we understand reality. And we're not an industry. We are not thinking in quarters or investor returns. Religions work with long time horizons.

"We have systems that may fool us into believing they are conscious, but not more."

Paolo Benanti

AI consultant to Pope Francis

There's a debate as to whether those systems can become, or might have already become, sentient or conscious. Are some AI experts just delusional?

I come from an engineering background and I know very well that a Turing machine can only solve certain problems. Proving consciousness is not one of them. We have systems that may fool us into believing they are conscious, but not more. This makes it a hypothetical problem for now, while it is much more urgent and important to discuss the effects that these AI systems can have on human labor, equality and access to resources such as energy or water.

Let's assume we reach some form of super AI in the near future. What role will religions play when there are sentient beings other than humans among us?

If someone is interacting with a machine like an oracle, that is a religious approach. So, instead of fighting this new divinity, we should tell humans that they are treating machines as if they were gods. Idols will arise when we follow a solution suggested by a machine that we cannot understand. There is a risk that such a religious view of machines will spread in society. People who are not able to understand what these systems are will use them – not because we built a new God, but because we didn't educate the public enough on how to use these tools. To be honest, I would not be shocked if, in a nonreligious country like China, the power of AI produced a sort of religious movement. Some may ascribe that to the absence of traditional religion and the human need to get answers. Let's remember how early Christians encountered Greek gods. So, forgive me the joke, it's not the first time that we have competitors in the market.

Key takeaways from this piece
AI can be a tool, or a weapon:

Little in the world can be described as entirely safe or inherently dangerous. Human intention makes the difference.

Regulation must be adjustable:

Technology is in a constant state of evolution and the guardrails we put in place need to be able to move as needed.

Think for the long term:

A healthy sensibility of technology requires education that remains focused on the horizon as well as the here and now.

About the author
Steffan Heuer
Steffan Heuer has been covering the intersection of technology, commerce and culture in Silicon Valley for more than two decades. His work has appeared in The Economist, the MIT Technology Review and the German business monthly brand eins. He currently divides his time and reporting between Berlin and California.
Think:Act Edition

Thinking in decades


The latest edition of Think:Act Magazine explores enduring business themes from past decades through a new lens of pertinence for the next 20 years.

Published February 2025.