There’s a lot of optimism, but new research gives reason for caution.
Artificial intelligence can already analyze data, review code, identify statistical methods, and generate papers in social science, capabilities that could mitigate the flaws of human-led research, such as mistakes, bias, and fraud. Recent studies, however, suggest that current AI models have significant limitations that warrant caution.
Economist Michael Wiebe's replication study on tech clusters found that AI chatbots, while useful for catching some technical and coding errors, still missed many problems and sometimes flagged false positives. AI can therefore enhance productivity and quality in peer review, but it is not yet trustworthy enough to replace human scrutiny.
When given the same data and research questions, AI models (e.g., Claude Code agents) produced considerably different results. The differences stemmed from subtle choices in interpreting variables (such as measuring trading volume in dollars versus shares) and from distinct 'empirical styles' across AI versions. Providing examples from top papers helped the models converge, but the exercise underscores how much human steering is still needed for reliable scientific outcomes, even though humans bring their own fallibility and bias.
AI models are trained on existing human-generated data, including potentially biased research. Research by Jim Manzi indicates that a large majority of politically relevant social science articles lean left, and AI systems often exhibit similar left-leaning ideological priors, along with other quirks such as favoring the first of two presented options. These inherited biases limit AI's ability to offer unbiased perspectives without careful prompting.
In summary, AI offers remarkable speed and efficiency in spotting errors and generating content, making it a valuable productivity tool in social science. Nevertheless, it remains prone to frequent mistakes, carries ideological biases inherited from its training data, and struggles to produce consistent results without substantial human guidance. AI is a powerful assistant, but not a complete solution to the challenges facing social science.