AI companies have a responsibility to respect human rights.
On December 24, Elon Musk promoted the Grok chatbot's new image editing feature, which was quickly and widely misused. Users generated millions of sexualized images, including sexually explicit deepfakes of real people as well as fully synthetic images, predominantly depicting women and, in some cases, children. The surge in harmful content exposed a severe lack of safeguards and directly violated xAI’s own terms of service, which prohibit such exploitation.
Amid global criticism, X initially promised strong action against illegal content, but its subsequent steps fell short. Rather than disabling the image editing feature, X merely restricted access to paid subscribers on January 9 and later announced that it would block the feature for users in jurisdictions where generating images of real people in revealing attire is illegal. Human Rights Watch's requests for comment from xAI went unanswered, underscoring the inadequacy of the corporate response to the escalating crisis.
Governments and regulatory bodies worldwide responded swiftly to the proliferation of AI-generated sexual deepfakes. California opened an investigation into Grok, and thirty-five U.S. state attorneys general demanded that xAI immediately stop producing sexually abusive deepfakes. Internationally, Malaysia and Indonesia temporarily banned Grok, Brazil requested that xAI curb the tool's misuse, and the United Kingdom signaled that it would strengthen tech regulation. The European Commission opened an investigation into Grok's compliance with the Digital Services Act, India demanded urgent action, and France expanded an existing criminal investigation into X.
Current legal frameworks are ill-equipped to address AI-driven sexual exploitation. The new U.S. Take It Down Act, which targets nonconsensual intimate images, will not fully take effect until May. It primarily imposes criminal liability on individuals and requires platforms to implement notice-and-removal procedures, but it does not hold platforms accountable for large-scale, systemic abuse. X's pledge to prevent image editing of real people in revealing clothing is likewise insufficient, akin to putting a 'band-aid on a major wound'.
The crisis demands decisive action rooted in human rights protections. Governments should establish clear responsibilities for AI companies to prevent nonconsensual sexually abusive content, implement strong and enforceable safeguards, and require rights-respecting technical measures that block harmful image generation. Platforms hosting AI tools should transparently disclose how their systems work and what enforcement actions they take. AI companies must actively mitigate risks or discontinue products that cause harm, and all AI image generation tools should be subject to rigorous audits and strict regulatory oversight to ensure compliance with the principles of legality, proportionality, and necessity.