Policies to ensure that the public benefits from the adoption of artificial intelligence resemble those designed to protect communities from climate change and the energy transition.
This section discusses the growing concern that advanced AI, or "superintelligence," poses an existential risk to humanity, including catastrophic scenarios such as massive job losses, bioweapons, and critical-infrastructure disruptions. Drawing a parallel to climate change, the author references Martin Weitzman's "fat tails" argument: even a small, uncertain chance of catastrophic outcomes justifies robust public policy interventions. AI is presented as an "existential risk on steroids" because its impacts are global, potentially irreversible, and more imminent than those of climate change. The article highlights that decisions about releasing powerful AI models like Anthropic's Mythos, which could disrupt essential systems, should not be left solely to corporate executives. Instead, it calls for transparent and consistent federal legislation to establish clear guardrails, ensuring AI's massive benefits are realized without profound damage.
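The fat-tails logic summarized above has a simple mathematical core, which can be sketched as follows. This is a stylized illustration of why tail thickness dominates the expected-value calculation, not a reproduction of Weitzman's full "dismal theorem." Suppose catastrophic damages $D$ have a Pareto (power-law) tail:

$$\Pr(D > d) = \left(\frac{d}{d_0}\right)^{-\alpha}, \qquad d \ge d_0,$$

where $d_0$ is a minimum damage scale and $\alpha$ controls how fast the tail thins out. The expected damage is then

$$\mathbb{E}[D] = \int_{d_0}^{\infty} d \cdot \frac{\alpha d_0^{\alpha}}{d^{\alpha+1}}\, \mathrm{d}d = \frac{\alpha\, d_0}{\alpha - 1} \quad \text{for } \alpha > 1,$$

and the integral diverges for $\alpha \le 1$. With a thin-tailed distribution (say, a normal), extreme outcomes contribute negligibly to $\mathbb{E}[D]$; with a sufficiently fat tail, the rare catastrophic outcomes dominate it entirely, which is why even a small, uncertain probability of catastrophe can justify costly precautionary policy.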
This section explores lessons from local opposition to energy infrastructure projects (e.g., pipelines, fracking, solar farms) and applies them to the proliferation of AI data centers. It notes that communities often resist major projects unless those projects offer significant local benefits and give residents a voice in decision-making. Historically, fossil fuel projects gained support by providing high-paying jobs, royalty payments, and public revenue. Similarly, successful wind and solar developers have engaged communities and delivered economic advantages, such as property taxes, land-lease payments, local sourcing, and innovative land-use solutions like "agrivoltaics." The article emphasizes that AI data center developers must prioritize community input and local benefits (e.g., local hiring, avoiding large tax abatements, supporting civic institutions) to overcome growing resistance, which has already halted or delayed billions of dollars in data center investments across the United States.
The section argues that the rapid growth and scale of the AI industry present an opportunity to deliver substantial national economic benefits, drawing a comparison to the historically profitable oil and gas sector. It notes corporate speculation about AI companies "capturing the light cone of all future value" and Nvidia's $5 trillion market cap, indicating immense profit potential. Echoing models from oil- and gas-rich regions like Alaska, New Mexico, Norway, and Saudi Arabia, which invest resource revenues into permanent savings funds, the article proposes a new "token tax" or an updated capital gains tax on AI profits. These revenues could capitalize a permanent national wealth fund, providing long-term economic benefits for generations. This strategy is presented as crucial, especially if AI leads to widespread job displacement, to ensure public perception of AI as a wealth creator rather than a job destroyer, a concept explicitly supported by companies like Anthropic and OpenAI.
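The permanent-fund mechanism described above can be made concrete with a small simulation. This is a minimal sketch under assumed, illustrative parameters (the revenue level, return, and payout rate below are not figures from the article); it loosely mirrors the percent-of-market-value payout used by funds like Alaska's, in which tax revenue is deposited each year, the balance earns investment returns, and a fixed share of the fund's value is distributed to the public.

```python
# Stylized simulation of a national wealth fund financed by a hypothetical
# "token tax" on AI profits. All parameters are illustrative assumptions.

def simulate_fund(annual_revenue, real_return, payout_rate, years):
    """Return (final balance, annual payouts) for a percent-of-value fund.

    Each year: deposit the tax revenue, apply investment growth to the
    whole balance, then pay out a fixed share of the resulting value.
    """
    balance = 0.0
    payouts = []
    for _ in range(years):
        balance += annual_revenue       # token-tax deposits
        balance *= 1 + real_return      # real investment return
        payout = payout_rate * balance  # percent-of-market-value payout
        payouts.append(payout)
        balance -= payout
    return balance, payouts

# Illustrative run: $50B/yr in revenue, 5% real return, 4% payout, 30 years.
balance, payouts = simulate_fund(50e9, 0.05, 0.04, 30)
```

The design choice worth noting is that as long as growth outpaces the payout rate (here 1.05 × 0.96 > 1), the principal compounds indefinitely and the annual public dividend rises year over year, which is the "benefits for generations" property the article attributes to the Alaska and Norway models.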
This section underscores the importance of proactive planning for the societal impacts of AI by drawing parallels with research on energy transitions. It warns that unmanaged AI-driven automation and efficiency gains could lead to unprecedented job losses, potentially "swamping the impacts of an energy transition." The author suggests that the rapid pace of AI development might overwhelm workers' and institutions' ability to adapt, producing large-scale job displacement across many sectors. The article stresses that avoiding such an outcome is vital to prevent immense human suffering, since jobs are fundamental to people's lives and job loss affects far more than earnings. It also highlights the risk of public backlash, citing recent state-level moratoria on data center construction and similar proposals at the federal level. Lessons from the energy transition suggest policy options such as robust wage insurance, public health coverage, job-training programs aligned with growing industries, and universal basic income to mitigate these effects.
This concluding section emphasizes the critical need to translate policy research into actionable legislation for AI governance, contrasting it with the "dust-gathering" fate of many welfare-enhancing energy and climate policies. It warns that failure to enact robust AI policies could lead to severe, potentially catastrophic, impacts. The article points out that even AI companies themselves, including OpenAI and Anthropic, have advocated for federal intervention to mitigate risks, protect national security, and prepare the workforce for job losses. This industry appeal stems from the recognition that comprehensive federal governance is essential not only for societal well-being but also for these companies to achieve the sustained returns demanded by their investors, underscoring the urgency and shared interest in legislative action.