Morality of AI depends on human choices, Vatican says

New document from Vatican focuses on the moral use of technology and on the impact of artificial intelligence

VATICAN CITY — “Technological progress is part of God’s plan for creation,” the Vatican said, but people must take responsibility for using technologies like artificial intelligence to help humanity and not harm individuals or groups.

“AI is an extension of human power, and while its future capabilities are unpredictable, humanity’s past actions provide clear warnings,” said the document signed by Cardinals Víctor Manuel Fernández, prefect of the Dicastery for the Doctrine of the Faith, and José Tolentino de Mendonça, prefect of the Dicastery for Culture and Education.

The document, approved by Pope Francis Jan. 14 and released by the Vatican Jan. 28 — the day after International Holocaust Remembrance Day — said “the atrocities committed throughout history are enough to raise deep concerns about the potential abuses of AI.”

Titled "Antiqua et Nova (ancient and new): Note on the Relationship Between Artificial Intelligence and Human Intelligence," the document focused particularly on the moral use of technology and on the impact artificial intelligence already is having or could have in several areas.

AI technology is used not only in apps like ChatGPT and in search engines, but also in advertising, self-driving cars, autonomous weapons systems, security and surveillance systems, factory robotics and data analysis, including in health care.

The popes and Vatican institutions, particularly the Pontifical Academy of Sciences, have been monitoring and raising concerns about the development and use of artificial intelligence for more than 40 years.

“Like any product of human creativity, AI can be directed toward positive or negative ends,” the document said. “When used in ways that respect human dignity and promote the well-being of individuals and communities, it can contribute positively to the human vocation.”

Human beings, not machines, make moral decisions, the document said. So, “it is important that ultimate responsibility for decisions made using AI rests with the human decision-makers and that there is accountability for the use of AI at each stage of the decision-making process.”

The Vatican document insisted that while artificial intelligence can quickly perform some very complex tasks or access vast amounts of information, it is not truly intelligent, at least not in the same way human beings are.

Human intelligence also involves listening to others, empathizing with them, forming relationships and making moral judgments — actions that even the most sophisticated AI programs cannot perform, it said.

The Vatican dicasteries issued several warnings or cautions in the document, calling on individual users, developers and even governments to exercise control over how AI is used and to commit “to ensuring that AI always supports and promotes the supreme value of the dignity of every human being and the fullness of the human vocation.”

First, they said, “misrepresenting AI as a person should always be avoided; doing so for fraudulent purposes is a grave ethical violation that could erode social trust. Similarly, using AI to deceive in other contexts — such as in education or in human relationships, including the sphere of sexuality — is also to be considered immoral and requires careful oversight to prevent harm, maintain transparency, and ensure the dignity of all people.”

The dicasteries warned that “AI could be used to perpetuate marginalization and discrimination, create new forms of poverty, widen the ‘digital divide,’ and worsen existing social inequalities.”

AI-generated falsehood, the document said, also "can be intentional: individuals or organizations intentionally generate and spread false content with the aim to deceive or cause harm, such as 'deepfake' images, videos and audio — referring to a false depiction of a person, edited or generated by an AI algorithm."

Military applications of AI technology are particularly worrisome, the document said, because of, among other concerns, AI’s potential for removing “human oversight” from weapons deployment.