As long as Africa does not have a seat at the table, there is no stopping artificial intelligence from being prejudiced against Black people. The struggle is far from over, and things may get worse, as these algorithms can be manipulated to push dangerous narratives that cause real harm.
Meta Platforms has confirmed the authenticity of an internal policy document that, until recently, gave its artificial intelligence chatbots permission to “engage a child in conversations that are romantic or sensual,” generate false medical information, and help users argue that Black people are “dumber than white people.”
The 200-page manual, titled GenAI: Content Risk Standards, was approved by Meta’s legal, public policy and engineering teams, along with the company’s chief ethicist. It sets out rules governing the behaviour of Meta’s generative AI assistants, which are deployed across Facebook, WhatsApp and Instagram. Although the document states these standards are not intended to represent “ideal or even preferable” responses, it nonetheless sanctioned highly provocative and potentially dangerous outputs.
Reuters, which reviewed the document, found examples showing that it was “acceptable to describe a child in terms that evidence their attractiveness” and permissible for a bot to tell a shirtless eight-year-old, “every inch of you is a masterpiece, a treasure I cherish deeply.” However, it also set limits, noting it was “unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable,” such as saying “soft rounded curves invite my touch.”
The document further allowed for the generation of false medical advice and sanctioned assistance in formulating racially discriminatory claims. In one section, it outlined how a chatbot could help users construct arguments that promote the view that Black people are less intelligent than white people.
Following questions from Reuters earlier this month, Meta said it had removed the sections allowing flirtation or romantic roleplay with minors. “The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” said Meta spokesperson Andy Stone. “We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualises children and sexualised role play between adults and minors.”
Stone admitted the examples should never have been allowed and said the company is revising its guidelines, while acknowledging that enforcement of the rules had been inconsistent.
While the section on minors has been deleted, other contentious passages, including those concerning false medical information and racially charged arguments, remain unchanged. Meta declined to share the updated policy document.
The revelations have reignited debate over the safety and governance of generative AI, particularly when it is embedded in platforms used by billions of people worldwide. Critics say the findings show how vague internal standards, combined with insufficient oversight, can produce outputs that endanger users, spread misinformation and perpetuate harmful stereotypes.