AI Project director Thomas Hofweber looks at the moral implications of AI algorithms.
Philosophy professor Thomas Hofweber, originally from Germany, discovered his passion for philosophy by accident while pursuing chemistry and math at the University of Munich. Now a faculty member at UNC-Chapel Hill for two decades, he specializes in metaphysics and the philosophy of language and mathematics.

Noticing the pervasive discussions around artificial intelligence across Carolina's campus, Hofweber launched the AI Project in spring 2023. The initiative aims to foster interdepartmental collaboration and enhance understanding of AI. The project operates both informally, through bi-monthly discussions on recent AI research, and formally, via discussion groups, reading groups, research presentations, and lectures. As director, Hofweber curates virtual events each semester focused on specific AI themes, such as language models or explainability, allowing participants from various disciplines to share insights and perspectives.

Peter Hase, a doctoral student in computer science and an AI Project member, values the diverse viewpoints, noting that they provide new angles for thinking about complex problems related to machine learning models and human language generation.
Hofweber asserts that humanity is merely at the beginning of AI's technological progression, highlighting a distinct interconnectedness between linguistics, computer science, and philosophy in exploring the field. Artificial intelligence is poised to drive greater efficiency and productivity across industries, creating significant opportunities in healthcare, personalized education, accelerated research, and advanced urban infrastructure.

However, Hofweber underscores the critical importance of addressing potential challenges, including ethical dilemmas, the specter of job displacement, and inherent biases in AI's decision-making algorithms. His personal interest in AI is rooted in its profound moral implications and the urgent concerns surrounding how these computational processes analyze data to make choices or predictions. He points out that these algorithms can make errors, perpetuate discrimination, and reflect existing biases embedded in their training data.

Acknowledging the rapid evolution of AI, Hofweber admits the difficulty of staying current with every new development. Still, he remains optimistic that his research, combined with the collaborative efforts of the AI Project, will lead to a deeper understanding of these machines. That understanding, he believes, will enable the implementation of crucial safety measures and controls, ultimately benefiting society. Hofweber speculates that studying AI could even offer unique insights into the essence of human existence by providing a truly "alien form of intelligence" for analysis.