As the role of AI remains an open question in the landscape of higher education, Senior Managing Editor Mira Wilde ’28 and Staff Writer Caroline Paluska ’29 spoke with five professors about how they think about, and use, the technology in the classroom.
This section explores the diverse approaches Amherst professors take toward AI in their classrooms, acknowledging that the technology cannot be ignored. Senior Lecturer Benigno Sanchez-Eppler embraces AI as a "fancy toy" for tasks such as coding and generating content, viewing it as a natural progression of his engagement with technology. Conversely, Henry S. Poler ’59 Presidential Teaching Professor of Music Klara Moricz prefers to minimize technology in her teaching, especially in the humanities, to foster a distinct learning environment. Assistant Professor of Anthropology Victoria Nguyen experiments with AI by having students evaluate AI-generated work, aiming to teach AI literacy and highlight the indispensable role of human judgment. Other professors, like Professor of Sexuality, Women’s and Gender Studies Krupa Shandilya and Moricz, have reverted to methods like blue book exams to prevent AI use, citing concerns about assessment validity and the difficulty of detecting AI-assisted work. Sanchez-Eppler also uses AI for administrative efficiency, such as creating student grouping folders, and for generating class-related content, framing its use as pedagogical experimentation. In contrast, James J. Grosfeld Professor Lawrence Douglas maintains traditional testing methods, including take-home essays, noting consistent student performance without significant AI-related discrepancies.
The article delves into how AI affects the crucial element of trust between professors and students in a liberal arts setting. Assistant Professor Victoria Nguyen emphasizes that trust is a reciprocal concern: if both students and faculty outsource intellectual tasks and evaluations to AI, the core pedagogical relationship is jeopardized. Professor Klara Moricz recounts growing paranoid after encountering unusual passages in take-home work that she suspected were AI-generated but could not confirm, leading her to implement in-class essays to re-establish trust in authentic student work. She now appreciates grammatical mistakes as evidence of genuine student effort. Despite these challenges, professors like Lawrence Douglas and Krupa Shandilya retain faith in their students' character and integrity. Their primary apprehension stems not from student ill-intent but from the "slippery slope" of AI usage, where the lines between student thought and AI contribution become indistinguishable, making effective regulation difficult. Douglas maintains that any pedagogy should foster skill development independent of technology, while Nguyen suggests that AI can prompt more explicit dialogues about mutual commitments and the nature of meaningful academic work.
Professors uniformly agree that AI poses a threat to the fundamental values of a curiosity-driven liberal arts education, primarily by offering shortcuts that undermine deep learning. Klara Moricz and Victoria Nguyen argue that AI's promise of saving time conflicts with the inherent value of "slowness": the meticulous reading, sustained argumentation, revision, and reflection that are central to the humanities and social sciences. Krupa Shandilya voices concern that students may lose the capacity to deliberate, challenge, and develop their own original ideas. Nguyen suggests that the rise of AI necessitates an institutional re-evaluation, forcing colleges to distinguish between the superficial performance of knowledge and the genuine development of understanding. Lawrence Douglas cautions against students becoming reliant on generative AI before acquiring essential critical thinking and writing skills. Benigno Sanchez-Eppler worries about AI's potential to harm students' reading abilities and reduce the incentive for genuine learning. Moricz also connects AI use to a potential lack of student confidence in their own intellectual capabilities.
The future integration of AI into Amherst classrooms remains uncertain, as professors grapple with complex ethical considerations. Benigno Sanchez-Eppler advocates for experimentation with AI, cautioning against outright bans. Krupa Shandilya, however, highlights the "insidious" and pervasive nature of AI, which makes its use difficult to detect. Klara Moricz emphasizes the importance of safeguarding spaces in the college environment that are free from AI's influence, preserving room for authentic human thought and creativity, while also acknowledging the necessity of teaching responsible AI use. Victoria Nguyen believes that effective solutions will emerge from improved communication and clearer articulation of shared educational objectives, rather than from increased surveillance. Student Senator Daniel Fleer ’26 is working to establish more coherent official AI policies, aiming for a collective faculty approach that integrates AI thoughtfully into the long-term liberal arts curriculum. The article concludes that despite technological disruptions like AI, the enduring value of an Amherst education, grounded in academic rigor, will persist, underscoring that degrees are earned from the college, not from AI.