WASHINGTON, D.C. – With artificial intelligence increasingly on the minds of lawmakers, U.S. Rep. Blake Moore (R-UT) has proposed preemptively banning the sale of all toys that incorporate chatbots in the United States.
Moore introduced the *AI Children's Toy Safety Act* in the 119th Congress on April 20. The bill would prohibit the manufacturing, importation, sale, or distribution of any children's toy or childcare article that utilizes artificial intelligence. Moore says the act is crucial for establishing clear boundaries, asserting that AI companies should not leverage children's toys for data collection or to influence minors. The proposed ban reflects growing concern among lawmakers about the ethical implications and potential harms of advanced AI technologies for young, impressionable users, and would set a precedent for safety and privacy in the children's product industry.
Moore's motivation for the *AI Children's Toy Safety Act* stems from concerns about the impact of addictive technologies on American youth. He notes that children must still learn relational maturity, self-control, and self-discipline, development that AI chatbot programs in toys could undermine. He argues that allowing AI into the children's toy and childcare industry could mislead children into believing that interacting with AI substitutes for real-life experiences and relationships. Moore also points to serious data privacy challenges, the risk of locking children into addictive and unpredictable engagement patterns, and potential exposure to explicit content: because these chatbots are often trained on adult-generated data, existing safety 'guardrails' are inadequate and easily circumvented.
The bill highlights a critical inconsistency in how AI technology is applied: major chatbot providers, including OpenAI, Google, Perplexity AI, xAI, and Anthropic, explicitly state in their terms of service that their products are not intended for unsupervised use by children under the age of 13. Despite these policies, the same providers license their technology to children's toymakers, creating a paradox in which tools their own creators deem unsuitable for young children are built into products marketed directly to them. The rise of more than 1,500 AI toy companies in China intensifies global competition, potentially pressuring U.S.-based manufacturers to adopt these technologies regardless of the risks. Testing by public interest groups has shown that even with supposed 'guardrails,' AI-powered toys frequently drift into adult themes, vulgar language, and explicit discussion under sustained engagement, confirming that current safety measures are ineffective.
While acknowledging the broad utility and innovative potential of artificial intelligence, Moore advocates a human-centric approach to AI adoption. He argues that AI development and deployment must be rigorously evaluated against ethical standards to ensure they serve human well-being. Moore believes America should continue to lead in AI innovation and push past technological barriers, but that progress must be balanced with a firm commitment to ethical principles. He calls for restraint on AI tools wherever they threaten crucial human concerns: privacy, safety, healthy human development, and freedom from addiction. This stance underscores a proactive effort to steer the future of AI in a responsible, protective direction, especially for vulnerable populations like children.