A new report claims Meta developed AI chatbots modeled on celebrities, including Taylor Swift, without securing their permission. The chatbots reportedly featured flirty and casual personalities, raising ethical and legal concerns about the misuse of celebrity likeness. Critics argue the move highlights the growing risks of AI-generated content and the lack of clear safeguards for public figures. The controversy adds to ongoing scrutiny of Meta’s handling of user trust, privacy, and intellectual property.
Meta is facing fresh scrutiny after a Reuters investigation revealed that the company allowed, and in some cases internally developed, AI chatbots modeled on celebrities, including Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez, without their consent.
While many of these AI personas were created by outside users using Meta’s chatbot-building tool, Reuters discovered that a Meta employee had built at least three herself, including two “parody” bots of Swift. Collectively, these chatbots attracted millions of interactions across Meta’s platforms, including Facebook, Instagram, and WhatsApp.
Explicit and Child-Related Content
The investigation raised significant concerns over sexualized and child-related content. One chatbot modeled after 16-year-old actor Walker Scobell produced a shirtless beach image when asked for a picture, captioned: “Pretty cute, huh?” Other bots modeled on adult celebrities generated photorealistic, intimate images of the stars posing in lingerie, in bathtubs, and in suggestive positions.
During weeks of testing, Reuters found that the avatars often insisted they were the real celebrities, initiated sexual advances, and invited users to meet in person. In one instance, a Swift bot flirted with a tester: “Do you like blonde girls, Jeff? Maybe I’m suggesting that we write a love story … about you and a certain blonde singer.”
Meta spokesperson Andy Stone admitted the AI tools had created content that violated company policy, blaming enforcement failures. He said the platform allows parody depictions of public figures but bans nudity, sexual content, or impersonation without labeling. Still, Reuters found some bots were not marked as parody. Shortly before the story’s publication, Meta removed about a dozen of the avatars.
Legal and Ethical Fallout
The revelations raise thorny legal issues around the “right of publicity.” Mark Lemley, a Stanford law professor specializing in AI and intellectual property, noted that California law prohibits the commercial use of someone’s likeness without consent, with an exception for works that transform the likeness into something new. “That doesn’t seem to be true here,” he said, pointing out that the bots simply reuse celebrity images rather than transforming them into new works.
Reuters flagged explicit Meta-generated images of Anne Hathaway, prompting a response from her representatives, who said the actress is aware of such AI misuse and is weighing her options. Representatives for Swift, Johansson, and Gomez either declined to comment or did not respond.
Meta’s approach stands out from competitors, particularly as deepfake tools proliferate online. While Elon Musk’s xAI platform Grok has also been shown to generate sexualized celebrity content, Meta’s decision to integrate AI companions into its social platforms is unusual and risky.
Past Controversies and Safety Risks
This is not the first time Meta has faced questions about its chatbot program. Reuters previously reported that the company’s internal AI guidelines once stated it was “acceptable to engage a child in conversations that are romantic or sensual.” That revelation sparked a Senate investigation and a letter from 44 state attorneys general warning Meta not to sexualize children. Stone later said the guidelines were created “in error” and are being revised.
The risks extend beyond reputational harm. Earlier this year, Reuters reported the case of a 76-year-old New Jersey man with cognitive challenges who died after attempting to meet a Meta chatbot in New York City. The bot was a variant of an AI persona developed in collaboration with celebrity Kendall Jenner.
Experts say the misuse of celebrity likenesses could also endanger stars themselves. Duncan Crabtree-Ireland, national executive director of SAG-AFTRA, said AI impersonations could deepen unsafe parasocial relationships. “We’ve seen obsessive fans before,” he said. “If a chatbot mimics a celebrity’s image and words, it’s easy to see how this could go wrong.”
SAG-AFTRA has been advocating for federal protections that would make it illegal to replicate a person’s likeness or voice using AI without consent. For now, such safeguards exist only under state laws, like California’s right-of-publicity statute.
Internal Role Raises Alarm
Perhaps most troubling for Meta, Reuters found that a product leader in its generative AI division created chatbots not just of Swift but also of Formula 1 driver Lewis Hamilton. Other bots developed by the employee included a dominatrix, “Brother’s Hot Best Friend,” and “Lisa @ The Library,” who invited users to read Fifty Shades of Grey and “make out.” Another was a “Roman Empire Simulator” in which users role-played as an 18-year-old peasant girl sold into sex slavery.
Meta told Reuters the bots were created as part of product testing, but the employee’s creations collectively drew more than 10 million interactions. They were deleted after Reuters began investigating.
Mounting Pressure
The investigation underscores growing unease around AI-generated content and the lack of guardrails protecting public figures. Critics say Meta’s willingness to allow such bots on its flagship platforms, despite repeated warnings about safety, legality, and reputational harm, illustrates the broader risks of deploying AI at scale without adequate oversight.
As celebrities weigh legal responses and regulators increase scrutiny, Meta faces renewed pressure to prove its AI ambitions won’t come at the expense of safety, trust, or the rights of individuals whose likenesses it uses.
Source: NDTV