By Rachel Amiri
Last September, Fr. Jeff Johnson, SJ, president of Strake Jesuit College Preparatory School in Houston, opened the annual President’s Dinner with a prerecorded video greeting.
“Would you know if what you saw with your own eyes wasn’t real?” he asked the audience from the screen.
In the video, Fr. Johnson appears seated in a sunny atrium that could be one of the buildings on Strake Jesuit’s campus. An astute observer might notice that his voice doesn’t quite track with the movement of his lips, or that the image seems a little stilted.
“The likeness you see, and the words you are hearing, were generated by artificial intelligence (AI). Raw audio and video material were used to train a computer to make this image appear – and sound – as real as possible,” the AI-generated voice continued.
In this case, AI was used as a video-creation tool to highlight the challenges facing today’s students and teachers. “What you saw was a very convincing fake,” the real Fr. Johnson told the audience.
How, he asked, are educators to prepare young people with the virtue and wisdom to act in a world where AI could be used for great good – or great harm? It’s a question the schools in the Jesuits USA Central and Southern Province (UCS) are grappling with.
What is Generative AI?
Such “deepfake” technology can both entertain and alarm, particularly when we can easily imagine its use in war or during political unrest. Of course, not every use of this new technology is quite so dramatic.
By now, many people have heard of generative AI thanks to tools like ChatGPT, Microsoft’s Copilot or Google’s Gemini. AI-powered tools are now integrated into technologies we use every day.
This new generation of artificial intelligence utilizes large language model (LLM) technology, drawing on vast datasets gathered from across the web and using algorithms to produce written content based on that data. LLMs are probabilistic: they generate the words most likely to follow what came before, so their output sounds plausible but may be factually inaccurate. In 2024, generative AI tools can also produce visual, audio or video content that is sometimes indistinguishable from what humans might create. And they are getting better every day.
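To see what “probabilistic” means in practice, here is a minimal sketch of weighted next-word sampling, the basic move an LLM makes at each step. The vocabulary and probabilities below are invented for illustration; real models compute distributions over tens of thousands of tokens learned from their training data.

```python
import random

# Hypothetical probabilities for words that might follow the
# prompt "The capital of France is". Invented for illustration.
next_token_probs = {
    "Paris": 0.92,   # most likely, and factually correct
    "Lyon": 0.05,    # plausible-sounding but wrong
    "London": 0.03,  # unlikely, clearly wrong
}

def sample_next_token(probs, rng=random.random):
    """Pick one token at random, weighted by its probability."""
    r = rng()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fallback for floating-point rounding

# Sampling many times shows the point: the model is usually
# right, but never guaranteed right.
counts = {t: 0 for t in next_token_probs}
for _ in range(1000):
    counts[sample_next_token(next_token_probs)] += 1
```

Run enough times, “Paris” dominates, but a small fraction of samples still come back “Lyon” or “London.” That gap between *likely* and *true* is the source of the factual errors, or “hallucinations,” the article describes.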
The Ethical Implications of AI
Many have begun considering the implications of this transformative and potentially disruptive technology. What do we want – and not want – AI to do? When AI can complete tasks in a matter of seconds that could take hours of human labor, what makes our work valuable?
Pope Francis has directly addressed the topic of artificial intelligence on multiple occasions this year. “I am convinced that the development of artificial intelligence and machine learning has the potential to contribute in a positive way to the future of humanity,” he said. “We cannot dismiss it.”
This openness to AI can be read alongside the two frameworks Pope Francis contrasted in Laudato si’: the “creation paradigm” and the “technocratic” one, says Dr. Brian J. A. Boyd, director of the Center for Ethics and Economic Justice at Loyola University New Orleans.
“If we come at this from the perspective of a creature made by a loving creator, we will appreciate these tools for what they can and can’t do,” he says. “If we come at it viewing all things as instruments for domination, then AI becomes an object of envy. We have the pride of creating something so powerful, and, on the other hand, the insecurity of creating something better than ourselves in some ways, such as rapidly processing data, creating images, or summarizing texts.”
Technology can legitimately assist in completing some tasks; Boyd even instructs students to use a popular AI tool to edit essays before submission for the business ethics course he teaches. However, there is also a risk of “outsourcing thinking” and habitually relying on AI so that people no longer know how to exercise independent judgment, losing important skills even in moral decision-making.
As Pope Francis says, “It is up to us to decide whether we will become fodder for algorithms or will nourish our hearts with that freedom without which we cannot grow in wisdom.”
Boyd seeks embodied craftsmanship in using AI as a tool, rather than blurring the boundaries between human and machine. “You don’t want to be so closely integrated with an AI tool that you can’t tell where it ends and where you begin,” he says.
Ultimately, just as AI challenges our human abilities, it may remind us of who we are.
“The biggest principle is to know who you are: a child beloved by God and called by God to become a man or woman for others,” Boyd says. “How is what I’m doing striving to bring glory to God?”
A New Era of Work
The possibility of replacing human workers and even relationships with chatbots could worsen inequity and damage our capacity for self-giving relationships.
“There’s a real risk as we are building better and more compelling robots that we will cease to learn how to interact with other people through self-gift and self-restraint,” says Boyd. “We will start to associate giving commands with being human.”
Beyond the personal, the ethical challenges related to AI for institutions are wide-ranging, from data privacy concerns to social justice and even environmental ones.
Will automation displace workers? Or might it improve their capacity and free up time to dedicate themselves to more meaningful work? There is great promise for AI to allow workers to “upskill,” accessing higher-paid positions, says Boyd. “Any employer has profound obligations in workplace justice to seek the good of their employees and offer them the ability to fulfill their role. The goal is always going to be helping workers do their jobs in a meaningful and rewarding way.”
The Jesuit Conference of Canada and the United States (JCCU) has chosen to be an early adopter of generative AI while remaining committed to its ethical implementation.
“We like to be proactive in exploring and integrating new tools into our workflows, ensuring that we stay ahead of the curve while aligning with our mission and values,” says Christine Stack, JCCU IT director.
The Conference and province offices are piloting generative AI using Microsoft Copilot, experimenting across departments. According to Stack, these tools help automate repetitive tasks, such as drafting documents, generating meeting minutes and creating reports. “This allows Jesuits and their colleagues to focus on more strategic and mission-critical activities,” she explains.
The formation of an AI Ethics Council reflects a commitment to ensuring AI use aligns with the Universal Apostolic Preferences and the Jesuit mission. Inclusivity, bias and the potential for AI systems to replace human workers are all areas the council is monitoring. Council members are drafting an AI Ethics Policy calling for disclosure of when and how AI has been used in the Conference’s work, as well as for human accountability for using it.
“The council will strive to make sure that our AI systems foster the common good, safeguard the environment and honor the dignity of all people,” says Stack.
The Challenge of Generative AI in Jesuit Education
When ChatGPT burst on the scene in late 2022, students were among the first to experiment with prompting it to write essays or answer homework questions. Educators noticed and updated academic integrity policies prohibiting the use of AI to complete student work.
Now, just two years later, the challenges are more complex, as generative AI has become more ubiquitous and harder to detect.
“It is changing so rapidly; we are in constant monitoring mode. I feel like we’re the CDC in 2021,” said Dr. Kevin Foy, vice principal for academics at St. Louis University High School. “We understand what’s out there, but we don’t know how it will mutate tomorrow.”
AI-detection software has proven unreliable, according to school administrators. Instead of playing catch-up by focusing on detecting AI use and penalizing students, some Ignatian educators have taken a different approach. At schools and universities across the UCS Province, they have approached new technology with a spirit of discernment while remaining rooted in Ignatian pedagogy.
Generative AI challenges how students and teachers have become accustomed to learning and demonstrating mastery. Today, a lesson could be planned by AI, an essay prompt generated by AI, the essay written for the student by AI, and that essay graded for the teacher by another AI tool.
“You need to design a lesson that can’t be authentically ChatGPTed,” said Spencer Wagner, a computer science teacher and AI specialist at Regis Jesuit High School in Aurora, Colorado.
Wagner, the author of the 2023 book, The Rise of the Teacher Coach: Adapting AI to the Classroom, is an early adopter of AI and enthusiastic about its potential to deepen student learning as teachers restructure lessons and build “thinking classrooms.” Rather than focusing on producing evidence of learning through essays and tests, this approach requires a focus on the learning process.
“Are we designing a curriculum that challenges students and requires them to think creatively?” Wagner asks. “Your goal is not just a finished paper anymore. It’s, ‘Show me your writing process. How did you come up with these ideas? When did you ask the AI for help?’”
Wagner suggests that as educators adapt, we may see a return to “where education used to be,” even utilizing oral assessments. And with time saved by using AI as an assistant, teachers may have more time in the classroom to gauge student progress one-on-one.
Other educators, such as Boyd at Loyola New Orleans, see great potential for using AI “tutors” in providing differentiated instruction – an enormous challenge for classroom teachers.
Forming Students Today for Tomorrow
At St. Louis University High School, which utilizes a stoplight scale of red, yellow and green zones to guide student use of AI in coursework, Foy sees a renewed opportunity to form lifelong learners in the shift to assignments that are difficult for AI to complete.
“It’s process over product, not just about what essay they produce,” he says. “We want students to want to do the work.”
How do educators continue to form students to put their education in service of the greater good? Although today’s technical and moral challenges were unimaginable just a few years ago, they are ones Jesuit educators are well-equipped to handle.
Mike Lally, a theology teacher who sits on a committee at SLU High tasked with developing the student-use policy, says it has prompted faculty discussions about formation. As AI tools become unavoidable, so, too, does personal discernment about their use.
“The question our committee really grappled with is, ‘How do you form students to be thoughtful, prayerful, discerning individuals?’” he says.
“I don’t think AI needs to change how our teachers accompany our students and look for those ideals of intellectual growth and commitment to justice and love. Those core values will not change,” says Wagner of Regis Jesuit. “That human element of embodied learning is why students still come to our schools.”
Those who find themselves anxious about approaching the new digital frontier might take courage from this balanced approach.
“Ignatius gave us pretty good advice: use the things that bring you closer to God and avoid the things that drive you from him,” Foy said.
Adapting to the age of AI may require us to deepen the human capacities that machines cannot replace: attentive reflection on our lives, discernment and prayer – keys to the way of proceeding begun by St. Ignatius centuries ago.
Author’s note: This article was edited with the assistance of AI but written without it.
[Featured image: Spencer Wagner teaches AP Computer Science at Regis Jesuit High School.]