Summit explores role of ethics in development of artificial intelligence
WASHINGTON (CNS) -- Universities around the world are taking steps alongside major technology companies to explore ways to bolster ethics education in the artificial intelligence field in line with an initiative supported by the Vatican.
The effort seeks to help those already working or aspiring to work in the tech fields understand that the development of artificial intelligence, or AI, should benefit humanity rather than pose uncontrollable challenges to human life.
Participants at a global summit at the University of Notre Dame Oct. 25-26 explored ways to encompass ethics education in coursework with speakers calling for widespread integration in both technical and nontechnical curricula.
Casey Fiesler, associate professor of information science at the University of Colorado, told in-person and online attendees in a session that the long-held view that ethical topics are a "specialization" within technology education must be put aside.
"We should not be teaching ethics in the context of computing so that it is completely separate from everything else that we are doing," Fiesler said in calling for a culture shift in higher education that can reach across society.
The Vatican's role stems from its involvement in the "Rome Call for AI Ethics," which calls for ethical principles and guidelines to be used in AI development so that products that are developed, sold and used actually promote the good of all humanity.
Archbishop Vincenzo Paglia, president of the Pontifical Academy for Life, was among the first five signatories to the charter in February 2020. He joined executives from Microsoft, IBM, the U.N.'s Food and Agriculture Organization and Italy's minister of innovation.
He told the summit that the Vatican has led the development of the Rome Call for AI Ethics because the church sees the advantages of technological innovation in improving human life, but that such progress must be guided by ethical principles.
Progress, he said, "must be humankind's servant, not a monster that gobbles us up, wears us down, lets us die."
The summit included representatives of about three dozen Catholic, other faith-based and public universities in Europe, Africa, Asia, South America and the U.S. Speakers and panel discussions examined issues of transparency, accountability, impartiality, reliability, and security and privacy.
The integration of AI systems in university life already is widespread, said Joseph Glover, provost and senior vice president of academic affairs at the University of Florida. For example, AI software allows school officials to gauge whether a student is under stress or at risk of withdrawing from a class based on certain nonstandard cues.
Having access to such information poses moral questions on whether to intervene, Glover said during a panel discussion.
"We're grappling on a practical level (with) how do we make use of this information and how to advance the student and promote student success," he said, explaining that "I know something about the student which the student may not know. How am I obligated to communicate that?"
Presenters also said such ethical questions can be raised in any number of areas, ranging from more sophisticated monitoring of social media usage to the development of military weapons whose growing efficiency puts human life at greater risk.
Speaker Pascale Fung, professor of electronic and computer engineering at the Hong Kong University of Science and Technology, told the summit she began looking at ethical implications in the AI field after working on a voice command system for fighter jet pilots in a project funded by the military industry in the 1990s.
She said her primary concern at the time was to make the technology more accurate and robust, thinking that such an application eventually would benefit civilian endeavors as well. But others began asking her why she was investing her talents in a system that was reducing human involvement in decision-making that placed human life at risk.
"I was shocked by the questions and began to think about the why of what we were doing," Fung said.
A deeper integration of ethical issues can reach across technology fields and can involve other disciplines such as psychology, philosophy, sociology, law and business so that students begin to think about how AI can serve humanity, Fung and other participants said.
At the University of Illinois Urbana-Champaign, ongoing events bring people together to explore ethics and artificial intelligence, Karrie Karahalios, professor of computer science at the school, said during a panel discussion.
Such gatherings allow for having "a common language across all our different departments" so that human needs are not eclipsed in the development of new technologies, she said.
Karahalios also said the same type of ethics education can be extended across society, reaching lawmakers, middle school and high school students and even consumers.
Brian Green, director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University in California, echoed this view.
"Ethics needs to be everywhere all the time. It's like the air. You run out of air, you'd be in bad shape," he said.
During the conference's second day, eight institutions, including Notre Dame, signed on to the "Rome Call for AI Ethics." Others are the University of Navarra and Schiller International University in Spain; the Catholic University of Croatia; SWPS University of Social Sciences and Humanities in Poland; Chuo University in Japan; the University of Johannesburg in South Africa; and the University of Florida.
The summit was planned by the Pontifical Academy for Life, IBM and Notre Dame and hosted by the Notre Dame-IBM Technology Ethics Lab.