Character.AI: The Surface of a Larger Reality

The article delves into the controversial realm of real person fiction (RPF) within fan culture and the implications of platforms like Character.AI. While RPF often involves harmless fantasies about celebrities, the article highlights the darker side of AI-generated chatbots that can perpetuate harassment and disinformation. It raises concerns about privacy breaches, exploitation of individuals, and the corporate profit motives behind these technologies, questioning whether the fictional nature of these representations diminishes their serious consequences.

In the realm of fan fiction, a contentious sub-genre has emerged: RPF, or real person fiction. This genre involves crafting narratives about actual individuals, often blurring the lines between fantasy and reality. Typically, it features celebrities in romantic or even risqué scenarios. Some authors envision themselves in relationships with their favorite stars, while others may pair members of boy bands or sports teams. An unspoken guideline within this genre is to keep these stories under wraps, ensuring they remain hidden from the individuals upon whom they’re based.

When Character.AI first came to my attention, fan fiction immediately sprang to mind. Co-founded by former Alphabet/Google engineers and currently valued at over a billion dollars, the platform lets users design their own text-based generative AI chatbots. Its homepage showcases bots for job interview preparation and literary advice, but in practice many users create bots modeled on fictional characters and real people alike. As a deep dive by Wired highlighted, there is growing concern over these bots being used for spam, misinformation, and harassment, particularly against people in the video game industry, a field already prone to such abuse.

Though Character.AI claims to ban bots that pose risks to individual privacy or that engage in defamation, pornography, or extreme violence, it acknowledges that it may take up to a week to remove offending programs. Furthermore, the platform includes a legal disclaimer at the bottom of the chat window stating, “everything the characters say is made up!”

This excerpt is from Lucie Ronfaut’s Rule 30 newsletter, published on Wednesday, October 23, 2024. Subscribe to receive future editions.

Character.AI represents just a fraction of a much larger trend. The demand for personalized bots is surging online, complicating regulatory efforts. In early October, the Muah.ai platform, known for its “no censorship” erotic chatbots, suffered a cyber-attack that revealed alarming requests for bots modeled after children. Such technologies not only pose risks of harassment but can also facilitate doxing, exposing personal information about unsuspecting victims.

Does Fiction Make It Less Serious?

Incidents like these are becoming increasingly common, and they reveal a lack of effective moderation by platforms, which fail to address the harm done to their users, particularly women. The situation echoes earlier harassment campaigns, such as the organized sexual role-playing games on Reddit and Discord in 2022 that used the names and images of French streamers, all without their consent.

This is a far cry from traditional RPF, the private fantasy of a teenage girl imagining a romance with her favorite pop idol. Here, by contrast, real individuals, many of them women with no significant public profile, are reduced to public objects to be owned and controlled. Alarmingly, this dynamic is baked into the platforms' business model. So, does the fictional nature of these narratives make them any less serious?

This editorial was written before the news broke about Sewell, a teenager who took his own life last February and who was reported to be a heavy user of a Character.AI chatbot. His mother has since filed a lawsuit alleging that the chatbot played a role in inciting her son’s suicide.
