Although Michigan lawmakers haven't enacted as many AI regulations as other states, proposals for guardrails around AI are prevalent in Lansing.
Michigan lawmakers are considering House Bill 4668, introduced by Rep. Sarah Lightner, which aims to regulate companies operating major AI programs. The bill would require these operators to develop and implement publicly accessible safety and risk protocols to mitigate 'critical risks,' defined as incidents in which an AI model could lead to the death or injury of 100 or more people, or cause $1 million or more in property damage. The requirements would apply to companies that either spend $100 million annually on their AI models or invest $5 million to initiate their operations.

Proponents of Lightner's bill argue that establishing such guardrails is crucial given the rapid evolution of AI technology, emphasizing its potential for both significant benefit and considerable harm. They believe it is the responsibility of legislators to ensure public safety while fostering innovation that serves the public interest.

The bill faces opposition, however, from chambers of commerce across the state. Opponents warn that state-level regulations could stifle innovation among AI developers and deter them from establishing or expanding operations in Michigan. They advocate a federal approach to AI regulation, arguing that a patchwork of state laws would inevitably lead to inconsistencies and contradictions, complicating compliance for businesses.

A companion bill, HB 4667, would make it illegal to develop an AI system with the intent to commit a crime. While HB 4668 has been reported out of the House Judiciary Committee, it is still awaiting a vote from the full chamber.
Concerns about the use of artificial intelligence for workplace surveillance have prompted labor advocates in Michigan to push for new regulations. The proliferation of remote work during the COVID-19 pandemic led to a significant increase in the availability and use of AI surveillance programs, which can monitor many aspects of an employee's work through keystroke logging, facial recognition and even break tracking. Many labor advocates consider this an invasion of privacy.

Rep. Penelope Tsernoglou, D-East Lansing, described such monitoring as 'invasive, unnecessary and unethical surveillance techniques' increasingly used to track employees' movements and expressions. In response, House Democrats proposed legislation in February that seeks to define the permissible uses of AI in the workplace, particularly employer monitoring of worker productivity.

House Bill 5579, introduced by Tsernoglou, would prohibit employers from using AI programs to make decisions on wage setting, hiring and firing. Employers would still be permitted to use AI to screen large pools of job candidates, but any use of AI tools for productivity monitoring would require explicit written consent from the workers involved.

The bill has strong support from major labor organizations, including the Michigan AFL-CIO, which believe it will protect workers' rights and privacy. Business groups such as the Michigan Chamber, however, argue that the bill would impose overly strict parameters on employers, potentially limiting their ability to maintain efficient and productive staffing levels.

HB 5579 has been referred to the House Committee on Economic Competitiveness, where it awaits a hearing and further consideration.
The ability of generative AI to mimic human conversation has raised significant concerns, particularly around its use by minors, leading to a legislative proposal in Michigan to ban AI chatbot 'therapy' for underage users. Studies, including one from Stanford University, have demonstrated how easily researchers posing as minors could elicit inappropriate responses from chatbots. Those findings prompted the Federal Trade Commission to launch an inquiry into companion chatbots, focusing on their interactions with minors. Companies like OpenAI, creator of the widely used ChatGPT, have also faced wrongful death lawsuits alleging that their chatbots affirmed users' suicidal ideations, claims OpenAI has denied.

In response to these concerns, Senate Bill 760, introduced by Sen. Dayna Polehanki, D-Livonia, would prohibit AI platforms from making emotionally supportive chatbots available to minors. Specifically, the bill targets platforms that retain conversation history with minors, engage in sustained dialogue about a user's personal matters, or offer unprompted emotional advice. The legislation is part of a broader package of four bills designed to enhance social media safety for minors in Michigan.

During a hearing before the Senate Committee on Finance, Insurance and Consumer Protection, Polehanki said these systems are being deployed 'without any meaningful safeguards for minors,' and that the consequences when things go wrong can be severe.

Opponents of the proposal, including policy analysts from the James Madison Institute, raise concerns about the practicalities and privacy implications of age verification laws. Critics worry that mandating age verification would require extensive personal data collection, creating a 'honeypot for cyber criminals and bad actors to exploit.'

Senate Bill 760 remains in committee for further discussion and potential amendment.
Michigan lawmakers are also pursuing legislation to address the impact of artificial intelligence in critical sectors such as health care and housing. Rep. Carrie Rheingans, D-Ann Arbor, introduced a package of bills last year aimed at regulating AI's role in health care claim determinations. House Bills 4536 and 4537 would ban the use of AI programs to determine claims for Medicaid and other health insurance programs in the health care marketplace.

The effort aligns with a broader national trend: at least six other states (Arizona, California, Illinois, Maryland, Nebraska and Texas) have already enacted laws that, in some form, prohibit the use of AI as the sole basis for denying health insurance claims, according to KFF (formerly the Kaiser Family Foundation).

Rheingans also introduced House Bill 4538, which would prohibit landlords from using AI-driven algorithms to determine average rental prices in a given area and then imposing those AI-derived rents on their properties. The measure reflects growing concern that algorithmic pricing can lead to inflated or discriminatory rental rates. While some major cities, such as San Francisco and Philadelphia, have banned the use of algorithms for setting rental prices, adoption of similar regulations at the state level has been notably slower, according to the government relations firm MultiState.

Both the health care and rent-setting bills have been referred to their respective committees (the House Committee on Insurance and the Committee on Regulatory Reform) but have not yet advanced to public hearings.
In December, President Donald Trump issued an executive order aimed at establishing a federal framework for artificial intelligence regulation, putting pressure on states like Michigan over their own AI legislative efforts. The order emphasized the need for a 'minimally burdensome national standard' to govern AI, explicitly stating a preference to avoid '50 discordant State ones' that could stifle innovation in the rapidly evolving AI sector. The president argued that such a fragmented regulatory landscape would hamper the United States' ability to lead the global 'AI race.'

Key tenets of the federal framework outlined in the order include protecting children, preventing censorship, respecting copyrights and safeguarding communities. To further this objective, the order directed the secretary of commerce to publish a comprehensive report examining the AI regulations already enacted or proposed across all 50 states, intended to inform the development of a cohesive national strategy.

Despite the order's strong stance, however, Congress has yet to pass any legislation that would legally prohibit states from enacting their own AI regulations. That leaves a tension between the federal government's call for a unified approach and ongoing state-level initiatives, where proposals such as Michigan's are still being debated and considered without direct federal preemption.