A user took to the billion-dollar AI companion platform Character.AI to make a chatbot version of a murdered teen nearly two decades after her tragic death, AdWeek first reported earlier this month.
Now, the slain teen's father is speaking out about the experience of discovering that his daughter's name and likeness were turned into a chatbot without his consent.
Jennifer Crecente, of Austin, Texas, was just 18 years old when she was murdered by her ex-boyfriend in 2006. Her father, Drew Crecente, founded and continues to run a nonprofit dedicated to preventing teen dating violence in her memory. (Her mother, Elizabeth Crecente, founded a different nonprofit with the same mission.)
As Drew told The Washington Post this week, it “takes quite a bit for me to be shocked, because I really have been through quite a bit.”
“But this,” he added, referring to the chatbot, “was a new low.”
Drew explained that he was notified of the bot’s existence by a Google alert, which took him to a Character.AI profile outfitted with Jennifer’s name and yearbook picture.
As screenshots of the profile show, she was marketed to other platform users as a "knowledgeable and friendly" AI persona of a "video game journalist." Part of the profile was written in first person, with "Jennifer" claiming to "geek out on video games, technology, and pop culture." None of this, Drew pointed out to WaPo, was true of Jennifer; indeed, the bizarre description was likely the result of an AI model confusing Jennifer with her uncle, Brian Crecente, a cofounder and former editor-in-chief of the video game publication Kotaku.
Meanwhile, there were no profile details to suggest that the Jennifer pictured on the Character.AI page was based on a real human — never mind one who had been slain during her senior year of high school in a horrifying act of gendered dating violence.
“My pulse was racing,” Drew told WaPo of finding the profile. “I was just looking for a big flashing red stop button that I could slap and just make this stop.”
“You can’t go much further in terms of really just terrible things,” he added elsewhere.
Character.AI has since removed the bot on the grounds that it violated its user policies, as of course it should. This is one of the darkest applications of generative AI we've seen, and what's worse, it's unclear how the company, which recently struck a major deal to license its technology to Google, can functionally stop this kind of thing from happening in the future.
Sure, Character.AI says it doesn't allow the impersonation of real people. But is it ethical for its platform to rely on real-world victims of misuse to police that rule, while reaping the financial rewards of users interacting with those bots in the meantime? If Drew Crecente hadn't had a Google alert set for his daughter's name, would the profile have gone undetected, racking up conversations with users for the benefit of Character.AI?
“If they’re going to say, ‘We don’t allow this on our platform,’ and then they allow it on their platform until it’s brought to their attention by somebody who’s been hurt by that, that’s not right,” Jen Caltrider, a privacy researcher at the nonprofit Mozilla Foundation, told WaPo. “All the while, they’re making millions of dollars.”
More on the Crecente family: An AI Company Published a Chatbot Based on a Murdered Woman. Her Family Is Outraged.