United By AI: The Unusual Alliance Between G7 Leaders, Big Tech, and the Pope

By George Valases

for the Deepfakes & Democracy Initiative of EthicalTech@GW

Earlier this month, Pope Francis made history as the first Pope to address the G7, a group of leaders representing some of the most powerful and influential countries in the world. Among the issues the Pope addressed was AI. The Pope was himself the target of an AI deepfake just last year, when an image spread online of him wearing a white puffer jacket. However, his comments pertained very little to that relatively insignificant incident. Instead, the Pope emphasized that AI must be human-focused so that it does not deprive us of our human dignity.

Strangely, this is not even the first time the Pope has gotten involved with AI. In 2020, the Vatican was among the signatories of the Rome Call for AI Ethics, alongside tech companies like Microsoft and IBM. While the document does not propose specific policy, it offers broad guidelines and principles on how to develop AI ethically so that it minimizes harm and maximizes benefit to humanity.

 

The Pope’s G7 Meeting

Chief among the concerns the Pope addressed at the G7 Summit was AI's decision-making. The Pope stated that important decisions must be left to humans and not algorithms, for there are factors to consider that cannot simply be reduced to a numerical value. One such issue is criminal sentencing. Judges are using algorithms to help determine whether a prisoner should be granted home confinement. These algorithms, the Pope says, take into account factors like the severity of the sentence and the prisoner's behavior, but also their ethnicity, education, and even credit rating. The Pope stressed that humans are not numbers and that these algorithms cannot properly account for a person's ability to surprise others.
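To make the "humans are not numbers" point concrete, here is a deliberately simplified sketch in Python of how a risk-scoring algorithm collapses a person's record into a single number. The factor names, weights, and values are invented for illustration and do not correspond to any real tool used by courts.

```python
# Purely illustrative: a toy "risk score" that reduces a person to one number
# from weighted factors. The factors and weights below are invented, not taken
# from any real sentencing or home-confinement tool.
TOY_WEIGHTS = {
    "sentence_severity": 2.0,         # harsher original sentence raises the score
    "disciplinary_incidents": 1.5,    # each recorded incident raises the score
    "years_of_education": -0.3,       # more schooling nudges the score down
    "normalized_credit_rating": -1.0, # better credit nudges the score down
}

def toy_risk_score(person: dict) -> float:
    """Collapse a person's file into a single number."""
    return sum(weight * person.get(factor, 0.0)
               for factor, weight in TOY_WEIGHTS.items())

prisoner = {
    "sentence_severity": 3.0,
    "disciplinary_incidents": 1.0,
    "years_of_education": 12.0,
    "normalized_credit_rating": 0.6,
}

print(f"Toy risk score: {toy_risk_score(prisoner):.2f}")
# Whatever cutoff a court attaches to this number, nothing in the arithmetic
# can represent the "ability to surprise" the Pope says a human judge must weigh.
```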

However, the greatest AI decision-making threat the Pope addressed was autonomous weapons: machines that can choose to kill a human. While much of the Pope's address dealt with broad principles, here he made an outright call for specific policy, urging world leaders to ban the use of autonomous weapons and stating that "No machine should ever choose to take the life of a human being."

Another point the Pope addressed was AI-generated essays. While these essays seem analytical, the Pope stated that they do not actually develop any new analysis or concepts; they merely copy existing analysis and compile it into an appealing form. Rather than offering the opportunity for authentic reflection that education should provide, the prevalence of these essay-writing algorithms "runs the risk of [education] being reduced to a repetition of notions, which will increasingly be evaluated as unobjectionable, simply because of their constant repetition."

One final point the Pope addressed was the need for "algor-ethics." The Pope believes that AI is shaped by the worldview of its creators, and there is often no consensus on the issues confronting our world today. Algor-ethics is a set of global principles meant to mitigate the harmful effects of the biases AI may have acquired through its development.

 

The Rome Call for AI Ethics

In February 2020, the Rome Call for AI Ethics was signed by the Vatican alongside companies like Microsoft and IBM. The Call established three areas of impact and six principles to ensure the ethical, human-focused development of AI. The areas of impact are ethics, education, and rights. The six principles are transparency, inclusion, responsibility, impartiality, reliability, and security/privacy.

The first area of impact is ethics. It starts with a basic recognition that all humans are free and equal in dignity and rights. Overall, the primary goal of AI should be to serve humanity and its progress, not undermine it. This can be achieved through three requirements: "It must include every human being, discriminating against no one; it must have the good of humankind and the good of every human being at its heart; finally, it must be mindful of the complex reality of our ecosystem and be characterized by the way in which it cares for and protects the planet (our "common and shared home") with a highly sustainable approach, which also includes the use of artificial intelligence in ensuring sustainable food systems in the future." AI must protect, not exploit, humanity, especially the weak and vulnerable. In addition, a person should always know when they are interacting with a machine and not a human.

The next area of impact is education. This area calls for working with younger generations for their benefit, through the development of quality, accessible, and non-discriminatory education. These same educational opportunities should also be available to the elderly, including through offline services during the technological transition. Education must raise awareness about both the opportunities and the issues posed by AI. The goal should be that nobody is left behind and everyone is able to express themselves and contribute to the benefit of humanity.

The final area of impact is rights. This area focuses on ensuring AI development is done in a way that protects the weak and underprivileged, as well as the natural environment. This includes making human rights the core of AI. In particular, a "duty of explanation" should be considered: what goes into the AI's decision-making, and what its objective and purpose are, should be transparent.
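As a rough sketch of what a "duty of explanation" could look like in practice (this is an assumption about one possible implementation, not something the Rome Call prescribes), an AI system might be required to return, alongside every decision, a record of its stated purpose, the inputs it relied on, and the factors that drove the outcome:

```python
from dataclasses import dataclass, field

# Hypothetical illustration of a "duty of explanation": every automated decision
# carries its purpose, its inputs, and the factors behind the outcome.
@dataclass
class ExplainedDecision:
    purpose: str      # what the system is for
    inputs: dict      # the data the decision was based on
    outcome: str      # the decision itself
    factors: dict = field(default_factory=dict)  # how much each input mattered

    def explanation(self) -> str:
        lines = [f"Purpose: {self.purpose}", f"Outcome: {self.outcome}", "Factors:"]
        lines += [f"  {name}: {weight:+.2f}" for name, weight in self.factors.items()]
        return "\n".join(lines)

# Invented loan-screening example, purely for illustration.
decision = ExplainedDecision(
    purpose="Screen loan applications for manual review",
    inputs={"income": 42000, "existing_debt": 9000},
    outcome="flagged for human review",
    factors={"existing_debt": 0.7, "income": -0.4},
)
print(decision.explanation())
```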

To achieve the objectives outlined in the areas of impact, the Pope calls for "algor-ethics" that abides by six principles. The first is transparency, which ties back to the duty of explanation: the AI's decision-making process must be clear and knowable. Next is inclusion, meaning that people cannot be excluded or discriminated against by AI, which is part of both the ethics and education areas of impact. Next is responsibility: those who create AI systems are to be accountable for the AI's outcomes. After that is impartiality, which means AI development must avoid the biases of its developers. Then there is reliability, which means AI should be able to be relied upon, for example by providing accurate information; AI is known to "hallucinate" and make up facts out of thin air, which must be addressed. Finally, there is security/privacy: AI must respect a person's private information.

 

Conclusion

The Catholic Church is one of the oldest global organizations in the world. Given the considerable impact AI will have on the world, it makes a strange kind of sense for the Vatican to get involved. The Pope's concerns are well founded and should be weighed as AI continues to advance. If humanity is not at the center of AI's development, we risk AI replacing humanity.
