The proposed bills would place responsibility for wrongdoing caused by AI on those who direct it, defining AI's legal status as non-sentient and establishing clear accountability for its use and potential harm.
In Jefferson City, two House bills now under consideration would establish clear lines of responsibility for any harm caused by artificial intelligence, addressing the legal implications of both intended and unintended uses of AI systems and ensuring accountability in their operation and deployment.
The House Committee on Emerging Issues convened a hearing to review the proposed legislation. These bills, spearheaded by Representatives Phil Amato (R-Arnold) and Scott Miller (R-St. Charles), are designed to enact the 'AI Non-Sentience and Responsibility Act,' a framework intended to legally define AI systems as non-sentient entities and assign legal obligations to their human operators.
A key provision of the proposed legislation explicitly recognizes artificial intelligence systems as non-sentient entities. From a legal standpoint, AI could not be granted personhood or be recognized as a spouse, legal entity, or owner of any form of property, preventing it from acquiring rights traditionally reserved for humans or established organizations.
The legislation also states that any harm, whether direct or indirect, caused by the deliberate use or misuse of an AI system will be the sole responsibility of the individual or entity directing that AI at the time. This clause is central to assigning liability when AI technologies cause unforeseen or harmful outcomes, placing the burden of accountability on human operators.
Representative Miller emphasized the clear intent behind the bills: to hold individuals accountable while also acknowledging efforts made to operate AI ethically. When questioned by Rep. Elizabeth Fuchs (D-St. Louis) about liability in cases of AI misuse or intentional wrongdoing, Miller explained that ongoing revisions to the bills focus on establishing a 'Missouri AI risk standard' that businesses must meet in order to be absolved of criminal liability or punitive damages. This standard would be based primarily on the National Institute of Standards and Technology's (NIST) AI Risk Management Framework, filling gaps where a specific Missouri standard is not yet defined.
When asked by Rep. Wick Thomas (D-Kansas City) whether the proposed bills would conflict with President Donald Trump's executive order aimed at removing barriers to American leadership in AI, Miller said they would not. He maintained that the bills would not undermine America's competitive advantage in the global AI landscape and are aligned with national goals for AI leadership.
During the hearing, testimony highlighted the importance of a proactive approach to AI regulation. One witness urged the state to address potential AI challenges preventatively rather than waiting for negative incidents before implementing regulatory measures, reflecting a desire for forward-thinking governance in the rapidly evolving AI sector.
Miller also noted that similar legislative initiatives have already passed in other states, specifically mentioning Oklahoma, Idaho, and Utah. This points to a growing trend among state governments to establish legal frameworks for AI, offering potential models and precedents for Missouri's own legislative efforts.