Rumman Chowdhury, a data scientist and AI ethicist, explores critical questions about the rapid evolution of artificial intelligence, including the impact of data centers, the nature of bias in AI models, and the need for human agency and independent auditing in AI development.
Ms. Chowdhury questions the perception of 'tech jobs' associated with data centers, stating that they primarily bring temporary construction work, followed by a limited number of roles such as IT support and janitorial staff. She emphasizes that these jobs fall short of the high-salary Silicon Valley image and do not deliver the 'big payday' people expect.
She views data centers as having significant negative externalities, including noise pollution, environmental impact, and substantial water usage. Chowdhury points out the irony that these centers are often built in small towns rather than where tech CEOs reside, suggesting they are not as beneficial as portrayed.
Her core value is human agency and the right to choose, particularly concerning the influence of social media algorithms on narratives and decision-making. She aims to empower individuals to make intelligent decisions and take action for themselves in an increasingly AI-driven world.
Science fiction, when done well, serves as a tool for societal critique by letting audiences examine issues at a remove, through the lens of a detached, alternative society. She cites 'Star Trek' as an example, one that enabled discussions of race dynamics that would otherwise have been difficult.
Chowdhury appreciates works that offer positive visions for the future, such as Ruha Benjamin's Afrofuturism, which envisions a Black-centric world benefiting from technology. She also praises the 'Monk and Robot' series for its portrayal of a future with functional, tech-forward communities where people find meaning in their work and robots co-exist peacefully.
She explains that bias isn't inherently bad (e.g., targeting products toward new parents). Her concern lies with vulnerable and underrepresented communities, whose exclusion from training data leads to erasure in AI outputs. She questions who controls 'truth' in content moderation and argues that true neutrality in AI models is impossible, because social and political biases are baked in.
Chowdhury believes outcomes depend on incentives and institutions. For-profit organizations are legally obligated to maximize shareholder value, making it difficult for them to prioritize social good like eradicating poverty unless it directly improves their bottom line. This inherent structure can limit well-intentioned efforts.
She founded Humane Intelligence to create an independent community of AI evaluators, similar to auditors in other critical industries like finance or airlines. The goal is to build infrastructure that allows 'regular people,' such as teachers and students, to participate in evaluating AI technologies relevant to their lives, ensuring their voices are heard in design and implementation.
Her new iteration of Humane Intelligence is a public benefit corporation (PBC), a model also adopted by companies like Anthropic and OpenAI. PBCs are accountable to both shareholders and stakeholders, broadening their obligations beyond financial returns to the wider community of people they serve.