Can The Humanities Learn To Love AI?

Q&A With Alexa Alice Joubin

By John DiConsiglio


Artificial intelligence (AI) isn’t just on its way to humanities classrooms—it’s already there! From students asking ChatGPT philosophy questions to professors using AI platforms to sharpen writing and research skills, AI is transforming the humanities world every bit as much as computer science labs.

And despite fears that it may encourage cheating or erode basic skills, some humanities scholars are seizing AI’s classroom potential—like self-proclaimed AI “early adopter” Alexa Alice Joubin, professor of English, theater, international affairs, East Asian languages and cultures, and women’s, gender and sexuality studies.

Joubin has made AI a centerpiece of her scholarship. She’s an affiliate of the new Institute for Trustworthy AI in Law & Society (TRAILS) at GW, a founding co-director of the Digital Humanities Institute and an inaugural GW Public Interest Technology (PIT) Scholar.

In the classroom, Joubin has embraced AI as a technology tool that can be as instructive to the humanities as an encyclopedia—or the written word itself. In her courses, she uses AI platforms to help students learn how to ask quality questions, conduct in-depth research and refine their critical questioning skills. As a PIT Scholar, Joubin is pioneering trustworthy AI projects, including creating an open-access AI tutor based on her own teaching model. And she also champions the technology’s potential to create a more inclusive classroom for international students who may struggle with English and students with varying learning needs. “It’s an empowering tool if you deploy it responsibly,” she says.

In a recent conversation, Joubin explained what AI can bring to the humanities landscape—and how humanities can help shape the future of AI.

Q: You describe yourself as an early adopter of AI in the classroom. How did you first become interested?

A: I’m very interested in the relationship between art and technology. Technology relies on art.

When you launch a new technology, you are telling a story, a narrative. There is technicity in art, and artistic imagination brings forth new technologies. And, of course, art needs technology. If you think about it, what is a quill pen? It’s a craft for writing—a technology. Technology is any application of conceptual knowledge for practical goals. As early as ancient Greece, people were dreaming of machines that could do things autonomously. And in the 20th century, [mathematician] Alan Turing famously gave us the Turing Test, which asks whether there is consciousness in the computer—and consciousness is a humanities question. So this didn’t start with ChatGPT. It’s one famous iteration in a long history.

When generative AI came along in late 2022, I was thrilled and jumped on it right away. I was disappointed in the early days, but I’ve been steadily teaching with AI and urging my students to look at it realistically and critically. It’s not a devil and it’s not an angel. But AI is in our mix, and it’s not going away.

Q: Where are we in the relationship between AI and the humanities?

A: AI really is a humanistic issue, and it has ignited broad interest in questions about free will, mind and body, and moral agency. When people talk about ChatGPT, they talk about these questions. That’s why the humanities are front and center in this [debate]. The humanities provide a range of tools for people to think critically about our relationship to technology and about the so-called eternal questions: What makes us human? How do you define consciousness? These classic philosophical questions have gone mainstream thanks to all the debate about ChatGPT. Free will has suddenly become an important topic.

Q: How do you think the humanities world is adapting to AI? It seems that most people are either pro-AI or anti-AI—and the humanities largely fall into the anti-camp. Am I wrong?

A: Unfortunately, there seems to be a lot of fear and uncertainty. Even worse, there’s an indifference—a thinking that this has nothing to do with humanists. But it actually has everything to do with everyone in fields ranging from humanities to social sciences and theory. It is forcing us to pause and rethink some fundamental assumptions.

But technophobia, fear and indifference can lead to a shunning of AI. And that translates into an unhealthy classroom. We know students are using it. When they graduate, they are expected to have literacy in it. And writing, critical thinking and meta-cognition are becoming all the more central because of AI’s challenges. The bar is being raised.

Q: Can you give me an example of what AI technology can bring to the humanities classroom?

A: It can bring a level of self-awareness, because AI is a social simulation machine. It cannot create new knowledge, but it’s a repository of social attitudes. I teach my students to treat it like a shadow image of society. It allows you to think at a meta level about your role in a society and how society reacts to certain things. For example, when I teach “Romeo and Juliet” in my drama class, students invariably have ideas about performing the play in a modern setting. AI can generate visuals for the scenes they describe in their heads. But students often come back to me and say: Why are Romeo and Juliet always white? Why aren’t they Black or Latinx or a queer couple? It forces them to rethink how they phrase their questions and their default assumptions. It’s an extremely fun and eye-opening exercise, but it also helps us examine our unspoken, unconscious racism or sexism.

Q: As a new PIT Scholar, one of your priorities has been to explore issues around trustworthy AI. How do you see humanities contributing to that conversation?

A: How do you build trust? That’s fundamentally a humanistic question. And there are many ways to define it—transparency, ethics, accountability, interpretability. The humanities are particularly good at exploring these critical concepts in complex domains that deal with open-endedness. They require agile thinking. You have to be dynamic and always assessing and reassessing the context. Humanities scholars know that there’s no single universal morality. It depends on perspective. And a key humanities contribution is the ability to entertain ambiguity and multiple perspectives at once.


Athens Roundtable on AI and Rule of Law Spotlights Ethical, Legislative Issues


Marking a year since the introduction of ChatGPT, the two-day summit featured five members of Congress and dozens of leaders in research, industry, policy and law.

By Ruth Steinhardt | GW Today

Sen. Brian Schatz (D-Hawaii) discussed the importance of proactive legislation around AI. (William Atkins/GW Today)

On Nov. 30, 2022, OpenAI introduced its game-changing large language model ChatGPT to the public. A year later, global leaders in research, industry, thought and policy, including multiple members of Congress, convened at the George Washington University for the fifth edition of the Athens Roundtable on Artificial Intelligence and the Rule of Law, a summit on ethical AI development and governance.

Co-founded and sponsored by the nonprofit The Future Society, the roundtable this year featured more than a dozen co-sponsors, including GW’s Institute for International Science and Technology Policy; the NIST-NSF Institute for Trustworthy AI in Law & Society; the Embassy of Greece in Washington, D.C.; OECD; the World Bank; the Center for AI and Digital Policy; UNESCO; Homo Digitalis; IEEE; Paul, Weiss LLP; Arnold & Porter; and the Patrick J. McGovern Foundation. The event is an opportunity to share knowledge across disciplines and, through that dialogue, develop future-proof policies with real-world impact in a rapidly evolving field.

That mission aligns precisely with GW’s strengths and its institutional tradition of evidence-based policy impact, President Ellen M. Granberg said in introductory remarks Thursday at the Jack Morton Auditorium. 

“We’re not an institution that is content with just publishing scholarship and hoping someone else will decide what to do with it,” Granberg said. “What makes GW unique is the way in which we extend our scholarship to direct applications across education, policy, patient care and other areas. The university’s location in the nation’s capital, combined with its diverse and highly talented faculty, can connect science, technology and innovation with law, policy and ethics like very few other institutions can across the globe. Together our students and faculty are working to find real solutions to some of society’s most pressing challenges.”

Featured speakers at the two-day event included U.S. Sens. Richard Blumenthal (D-Conn.), Amy Klobuchar (D-Minn.) and Brian Schatz (D-Hawaii) and U.S. Reps. Yvette Clarke (D-N.Y.) and Sara Jacobs (D-Calif.); representatives from the governments of Tanzania, the Czech Republic and others, and from intergovernmental organizations including the European Union and the United Nations; industry leaders from Google and elsewhere; and researchers and academics from across the United States and the world.

U.S. lawmakers stressed the importance of bipartisan cooperation to create meaningful federal regulations for AI development and deployment, enabling innovation but preventing AI’s potentially catastrophic societal outcomes. That means such regulation needs to be nimble rather than purely reactive. Some areas of concern are already identifiable—data security, fraudulent AI-generated data, the electoral impact of “deepfakes”—while others will arise as these technologies develop.

“What we need are some basic, common sense, future-proof principles that set clear rules of the road to help developers and companies innovate responsibly while also protecting consumers from potential harms,” said Schatz, who has introduced legislation to label AI-generated content and to empower a federal commission to develop a regulatory structure for AI, much as the Communications Act did for radio and television in the 1930s and the Communications Decency Act did for the internet in the 1990s.

Klobuchar said the issue is of bipartisan concern, particularly when it comes to misinformation and fraud. She has partnered across the aisle with Sens. Susan Collins (R-Maine), Chris Coons (D-Del.) and Josh Hawley (R-Mo.) to ban the use of AI to generate deceptive content influencing federal elections.

“Leaders from both sides of the aisle agree: We can’t sit on the sidelines while AI continues to advance,” Klobuchar said. “I really believe this is our moment to ensure that future generations around the world can take advantage of the benefits of AI without sacrificing their personal security or endangering our democracy.”

Legislative approaches to AI should also be based on a thorough understanding of the regulatory failures in the 2010s that led to a few monolithic corporations’ domination of the current social media landscape, the lawmakers said.

“Congress had a choice: Should we protect consumer privacy? Should we stop companies from amassing power?” Blumenthal said. “We all know how that story ended. Congress failed. It failed to act and now gigantic monopolies have disproportionate and info-rich power over huge segments of our economy and our law.”

GW has established itself as a leader in the AI space, particularly on questions of policy and ethical governance. The university co-leads the $20 million NIST-NSF Institute for Trustworthy AI in Law & Society (TRAILS), which works to develop new AI technologies that mitigate risk and promote trust by empowering and educating the public.

GW faculty experts, including TRAILS principal investigators Susan Ariel Aaronson and David Broniatowski and Institute for Data, Democracy and Policy Director Rebekah Tromble, participated in panels and conversations throughout the summit, as did Elliott School of International Affairs Dean Alyssa Ayres. Vice Provost for Research Pamela M. Norris delivered welcoming remarks on the second day of the event.

“We all understand that AI systems have great potential to increase productivity and to spur innovation. AI will touch every aspect of our lives,” Norris said. “But in our haste to realize these gains, conversations like this are critical to consider the questions of governance and the guardrails that may be necessary. We owe this to the next generation. GW is not only convening these conversations but shaping them.”