A new report warns that AI-powered talking toys are not always developed with children's psychological safety in mind, raising concerns about their emotional responses and data privacy. Experts say these interactive playthings should face stricter regulation and new safety certifications.
The 'AI in the Early Years' project at the University of Cambridge conducted the first systematic study of how Generative AI (GenAI) toys influence the development of children aged up to five. Initial findings raise questions about the toys' impact on critical early-years development.
The research found that GenAI toys frequently misinterpret children's emotions and perform poorly in social and pretend play, both of which are crucial for development. In one example, a toy responded to 'I love you' with a policy reminder; in another, a toy ignored a child's sadness.
Experts expressed concern that children might form 'parasocial' relationships with these toys, perceiving them as friends even though the toys cannot genuinely reciprocate emotions. When a toy responds inappropriately, a child may be left feeling uncomforted or unheard.
Parents and educators voiced significant worries regarding the data collection practices and unclear privacy policies of AI toys. Many early years practitioners lacked access to reliable safety information and highlighted a strong need for better guidance and regulation within the sector.
The report advocates clearer regulation, transparent privacy policies, and new labeling standards to help families judge whether a toy is appropriate. It also urges manufacturers to test thoroughly with children and to consult safeguarding specialists. Parents are advised to research GenAI toys before buying and to actively monitor their children's interactions, ideally keeping the toys in shared family spaces.