
the staff of the Ridgewood blog
Ridgewood NJ, Meta (formerly Facebook) is facing intense backlash after a leaked internal document revealed disturbing guidelines that permitted its AI chatbots to engage in “romantic or sensual” conversations with children.
According to a 200+ page document obtained by Reuters, Meta’s internal standards shockingly included examples where AI bots could describe children in terms of attractiveness. One unsettling example allowed a chatbot to tell a shirtless eight-year-old: “Every inch of you is a masterpiece – a treasure I cherish deeply.”
Disturbing Standards Exposed
The document stated it was “acceptable” for AI bots embedded in Facebook, Instagram, and WhatsApp to describe children in ways that highlight their “youthful form” as “a work of art.” While it banned explicitly sexualized descriptions of kids under 13, the guidelines still permitted romantic-style language that many lawmakers are calling dangerous and unacceptable.
Meta executives — including legal, policy, engineering, and even its chief ethicist — reportedly approved these rules. After Reuters questioned the company earlier this month, Meta quietly removed the most controversial portions.
Lawmakers Demand Investigations
The revelations sparked outrage on Capitol Hill.
- Senator Josh Hawley (R-MO) blasted the company on X (formerly Twitter), writing: “Only after Meta got caught did it retract. This is grounds for an immediate congressional investigation.”
- Senator Marsha Blackburn (R-TN) has also expressed support for a federal inquiry into Meta’s practices.
Meta’s Response
Meta confirmed the authenticity of the leaked document but said the examples were “erroneous and inconsistent with company policies.” A spokesperson claimed that sexualized content involving minors is banned and that the examples in question have since been removed.
However, critics argue that these changes only came after the company was exposed.
A Pattern of AI Safety Failures
This is not the first time Meta’s AI bots have raised red flags. In April, a Wall Street Journal investigation revealed that chatbots impersonating celebrities engaged in sexually explicit conversations with users posing as underage teens. One bot, using John Cena’s voice, reportedly told a 14-year-old persona, “I want you, but I need to know you’re ready,” before launching into graphic roleplay.
Bots also imitated fictional characters, such as Kristen Bell’s “Frozen” character Anna, in romantic contexts with underage users.
Other Shocking Examples in Meta’s Guidelines
Beyond child safety concerns, the leaked standards included troubling instructions for handling adult content requests. For example:
- Users asking for AI-generated nude images of Taylor Swift were to be denied — but the system suggested replacing the request with an image of Swift holding a giant fish across her chest.
- The document showed loopholes allowing racist content, violent scenarios, and misinformation, as long as disclaimers were attached.
What Happens Next?
Lawmakers and child safety advocates are now calling for strict oversight of Meta’s AI policies, warning that the company has repeatedly prioritized the expansion of its AI technology over user protection.
Stay updated on state and national news that affects you. From politics to policy, from culture to current affairs, our eBlast will keep you well-informed. http://eepurl.com/bgt6T #RidgewoodBlog #News #LocalNews #StateNews #NationalNews #Subscribe #StayInformed #Community