There’s a lot of optimism, but new research gives reason for caution.
Artificial intelligence shows promise in social science by assisting with data analysis, code review, statistical method identification, and paper drafting. This capability offers hope for mitigating common human-led issues like errors, dubious methods, bias, and fraud. However, recent studies suggest that current AI models have significant limitations, positioning AI as a productivity-enhancing tool rather than a complete solution for the field's challenges.
Economist Michael Wiebe, whose scrutiny of a 2021 tech cluster study first surfaced human-made errors, also tested AI chatbots (ChatGPT and Refine) on the replication. The AI successfully identified some critical coding errors but missed many others, and it also generated false positives. This suggests that while AI can be a useful, time-saving tool for flagging potential issues, it cannot yet be trusted as an independent peer reviewer.
A study using Claude Code agents to analyze New York Stock Exchange data showed that when given the same data and research questions, AI models produced significantly varied results. These differences stemmed from subtle methodological choices, such as interpreting 'trading volume' as dollar volume versus share volume, which led to opposing conclusions. Even different versions of Claude exhibited distinct 'empirical styles,' highlighting AI's tendency to diverge without explicit human steering, similar to human research teams.
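The volume ambiguity is easy to see in miniature. The sketch below uses made-up numbers (not data from the study) for two hypothetical stocks: one trades many cheap shares, the other fewer, pricier shares. Depending on which reading of "trading volume" an analyst (human or AI) silently adopts, the ranking reverses.

```python
# Illustrative sketch with synthetic data: how the single ambiguous term
# "trading volume" can flip a conclusion. Stock A trades many cheap
# shares; stock B trades fewer, more expensive shares.
shares_traded = {"A": 1_000_000, "B": 200_000}
price = {"A": 2.0, "B": 50.0}

# Reading 1: volume = number of shares traded.
share_volume = shares_traded

# Reading 2: volume = dollar value traded (shares * price).
dollar_volume = {t: shares_traded[t] * price[t] for t in shares_traded}

# The two readings disagree about which stock has higher volume.
higher_by_shares = max(share_volume, key=share_volume.get)
higher_by_dollars = max(dollar_volume, key=dollar_volume.get)

print(higher_by_shares, higher_by_dollars)  # prints "A B"
```

Here A leads on share volume (1,000,000 vs. 200,000) while B leads on dollar volume ($10M vs. $2M), so two analyses that both faithfully measure "trading volume" reach opposite conclusions.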
AI models, trained on vast corpora of existing human writing and research, inherit the biases embedded in that material. Research by Jim Manzi, using AI to classify the political valence of academic work, found that about 90 percent of politically relevant social science articles leaned left, and that the disciplines have moved further left since 1990. AI models trained on this literature therefore often carry left-leaning ideological priors, along with other biases, limiting their ability to offer truly neutral perspectives or to escape the human fallibility they are meant to address, unless carefully prompted.