Italian Priest: Artificial Intelligence Prompts Us to Think About What It Means to Be Truly Human

Father Luca Peyron, a leading proponent of the dialogue between faith and new technologies, discusses the challenges and opportunities posed by artificial intelligence.


On April 21, the European Commission unveiled its proposals for a legal framework on Artificial Intelligence (AI) with the aim of regulating its use to protect the privacy of European citizens and their fundamental rights.
AI, as defined by the European Parliament, is “the ability of a machine to display human-like capabilities such as reasoning, learning, planning and creativity.” Unlike simple automation or programming, such a machine can make decisions without human intervention. AI encompasses a range of technologies and touches many areas of everyday life, from health care to services, transportation and customer relations. 
The new European AI legal project, which will be debated and potentially adopted by the various European states in the coming years, is considered the largest ever undertaken in the West. As new technologies develop ever faster — and play an increasingly important role in citizens’ lives amid the ongoing pandemic health restrictions — the European Commission is seeking to limit potential abuses connected to their use, notably by banning “high-risk” systems such as biometric recognition in public spaces (with a few exceptions) and social credit systems, as well as the use of AI to manipulate human behavior or exploit the vulnerabilities of individuals or groups. 
With the commission’s almost 100-page document already arousing debate and criticism, either for not being sufficiently protective or, conversely, for holding back innovation, the Register sought the views of Father Luca Peyron, a priest of the Archdiocese of Turin (northern Italy) and founder of the Digital Apostolate Service, one of the first services worldwide to address the connection between the digital world and faith. 
The author of several publications on AI from an ethical and theological perspective, Father Peyron has stood out as an authority in this field in recent years. 
Commenting on the subject to the Register, he explained that while AI necessarily carries risks, it could never compete with human intelligence, whose dimensions we are only beginning to explore. He also believes that the Church represents a much-needed voice in this public debate and should address these issues in a more direct and audacious way. 
 
The European Commission has just taken on a very ambitious legal project to address the potential risks connected to AI. In your view, is this legislation moving in the right direction?
It seems to me that it is along the right lines for a number of reasons, and I would say that it shows an interesting display of courage on the part of the European authorities, as it implies the creation of a legal and economic space that in some way claims its own independence without losing the founding values of the European Union. This perhaps also stems from the understanding that 650 million European citizens are also an economic pool of consumers that can be significant. 
What is new and important is, first of all, the idea that legislation must come before the creation of an artificial intelligence service or product, so that these are designed from within this framework of values. This aspect seems prophetic to me, because legislation that tends to chase technological innovation always risks being late: innovation always moves much faster than the ability of nations or states to legislate, not to mention international consensus. 
The other aspect that seems important to me is that it reveals a true anthropocentrism. Everything is perfectible, but the human being seems to me to be the ultimate goal of this process. That is, it is not only that artificial intelligence must not damage the human being; it seems to me that the direction of thought here is to help the human being to be himself. And this is a valuable orientation. 
 
Yet several associations for the protection of individual rights, as well as members of the European Parliament, have denounced the fact that the use of facial-recognition technology in public places could be allowed in some contexts, notably within the framework of criminal investigations. These critics say it paves the way for mass surveillance. What do you think about this?
We can never completely avoid risks. When we build a prison, there is always the risk that a dictator will fill it, and following that logic, we should no longer build prisons. The moment there are judges who can decide on the freedom of a fellow citizen, there can be a corrupt judge who acts in bad faith. It is clear that since there are instruments that affect personal freedom, there is a risk that these instruments will be used badly. 
It is evident that, once certain processes are automated, new injustices are likely to arise. But I don’t think there is any legislation or tool in the history of mankind that has not been potentially harmful. I believe that a denialist approach to technology risks leaving in limbo the application of norms to certain very real issues. We worry about what is happening in an airport, when in fact it is already happening inside our homes with our smartphones. 
Facial-recognition tools are indeed potentially dangerous. But this issue requires that we take responsibility and identify who is accountable in a timely and precise fashion. It is also true that European legislation can never replace a digital culture able to deal with these issues.
 
How do you explain the lack of a proper digital culture in the West? 
The truth is that most people in the West — even in the most cultured circles — still don’t know what AI is. It is a technology that remains, far too much, in the hands of too few people who understand its scope and who, in fact, risk taking advantage of the public’s ignorance in the use of these technologies. What needs to become more and more widespread is a culture of debate on this issue and a real knowledge of what we are talking about. 
AI seems almost esoteric or magical to most people nowadays. In this sense, the word “artificial” counts more than “intelligence” in people’s imagination. We must bear in mind that artificial intelligence is not that intelligent. Today we look at machines as if they could do much more than they are actually capable of doing. We should perhaps get used to focusing on humans again and be concerned that there isn’t a proper, widespread virtue ethic, rather than being afraid that there isn’t a substantial enough ethic of AI. 
 
You’ve just said that this new legislation is perfectible. What would you improve? 
I think that the relationship between the human and technology is still not that clear, in the sense that the definition of what is actually human is still too weak, and the definition of what is actually technological is still too general. One big advantage that AI can give us is a real reflection on what is truly human and what is not. We have defined as intelligent what is not intelligent, and we have called human things that are not really human. I think we still have so much to discover about what the human is and can become. The greatest gift that technology can give us today is to bring us a new reflection on what the human actually is. This is one of the greatest challenges that this time poses to us. 
 
How should the Catholic Church position itself with respect to these issues? 
In its dialogue with the world, the Church receives a great deal of attention on these very issues nowadays. I believe that this is an extraordinary opportunity for a re-evaluation of human rights and their effective implementation. We realize that these are global phenomena, to which we need to respond on a global level, as much as possible. We do not have a globally shared ethic; human rights are the only shared ethic. In order to reach a shared horizon, we should go back to human rights and ensure that they have — also thanks to technology, paradoxically — a new season of vitality. On this matter, the Church certainly has something to say. 
Another very important aspect for the Church is the possibilities of inclusion and exclusion that some technologies imply. AI is a very powerful technique. This means that it can greatly widen the gap between rich and poor or it can be a tool that narrows that gap. Technology can trivially use statistics to keep excluding the excluded or to identify them and then put them back in the game. But this stems from a political choice. 
In its relationship of dialogue with the world, and in educating the various generations to a synergistic coexistence with this kind of energy, the Church surely has something significant to teach. We remain one of the very few institutions that has an absolutely precise mission and vision. We have an anthropology, a metaphysics, an ontology, a philosophy, a moral doctrine that are organic and logical, that hold together and are not ideological. 
In the twilight of the great ideologies, and in the great darkness that these ideologies have generated, we have a lumen fidei, a light that comes from faith but does not exclude rationality and logic. We can offer this reasonableness to the world, and I believe the world is willing to listen. 
 
Is it something you’ve been witnessing, as a priest and expert in AI?
Over the past two years, I have been asked to give lectures and classes mostly in non-ecclesiastical contexts. It looks like there is a greater focus on what the Church has to say on these issues ... outside the Church. 
I think that, inside the Church, we should also realize that dealing with these issues is dealing with the Gospel. Digital transformation is a sign of the times, and as such, we need to listen to the Holy Spirit and turn to him and to Christ for guidance. Perhaps we struggle to see this as a fruitful field because it is totally new. But all things considered, the issues that AI touches are those that the Church has always addressed, because they concern the human dimension and its relationship with limits and with God. We must have the courage to go beyond the fear we have of all this, because we do not understand it well, and discover that it is perfectly comprehensible and that we are already equipped to deal with it and give answers. 
 
Is homo sapiens only a transition toward “machina sapiens,” as some experts wondered during a conference promoted by the Vatican in 2017? 
We have a very limited knowledge of human intelligence. Do we actually believe we can create an artificial intelligence that would be better than a human intelligence that we don’t even know properly? 
Today, a 4-year-old child is able to move through reality in a way that is infinitely better than any autonomous artificial-intelligence system. Artificial intelligence requires a huge effort to work, along with loads of energy and data. Any human being, with an infinitesimal amount of data and energy, is capable of doing better. 
The human being that technology is able to replace is a being that is simply able to function. It is not a human being in all the beauty of his being. 
Yes, technology is able to replace the human; it was created for this, to solve problems. But the human being was not born to solve problems. He was born to enter into relationship with others, with himself and with God. These are two very different things. If we look at the human being as the one who does things, then yes, technology can imitate him, because it does things. But if we look at the human being as the one who is the image and likeness of his Creator, then technology will never imitate him. 
 
Many historians of ideas see the Renaissance as a turning point in the history of humanity, when human beings stopped seeing themselves as the summit of Creation and became the center of the universe. Does the advent of AI represent the emergence of a new paradigm, in your view? If so, what could it look like?  
With the modern era, everything was reduced to power and might. I think we need to take a leap back and stop seeing technology as a mere instrument of power, in order to turn it into a service. After the balance of terror of the ’80s, during the Cold War, we rediscovered nuclear power as it was originally meant to be: an energy for the good of humanity. This also applies to technology. As long as technology is an instrument of power, it will always be a dangerous instrument. When it becomes a tool geared toward the common good, it becomes something that makes us less afraid and that can perhaps help us coexist on this planet. 
The coronavirus crisis has taught us clearly that we cannot live as individuals but that we must live as one body, as St. Paul once wrote. Salvation comes from Christ, and AI can also remind us that it is not technology that saves us, but the Savior. 
 
What can be the possible bulwarks of ethics and humanity in the face of the risks that AI also represents?
Children. We must take the child as our boundary. Human rights must be defined with respect to children. Artificial intelligence has to guard the life of a child, to adapt to his capabilities, and so on. Then we would have the guarantee of a boundary. Because preserving children means generating life, helping life to grow. If the most fragile are the standard of measurement for everything, then we will have the guarantee that none of us, not even the most fragile, can be crushed by AI.