OpenAI's Mission vs. AI Erotica
In ChatGPT's own assessment, OpenAI is sacrificing the health of its users by feeding harmful addictions in the pursuit of market share
Guest Opinion by Jason J. Keeley
With the advent of artificial intelligence, there have been massive innovations in everything from defense technology and cars to medicine and personal assistance. AI has objectively benefited humanity thus far. That is the OpenAI mission, after all: “to ensure that artificial general intelligence benefits all of humanity.”
Unfortunately, humanity is deeply perverted, and we’ll find a way to sexualize anything—Rule 34, as the internet calls it. Increasingly, AI chatbots have been used to fulfill sexual fantasies through erotica.
Up until now, these degenerate uses of AI have been kept to the fringes of the Apple App Store, but, starting in December, users will get GPT-5.0-powered erotica as a profane Christmas present from OpenAI. Beyond concerns about the faultiness of “age verification” measures, erotica content is known to be harmful to mental health. Just ask ChatGPT itself!
I had a conversation with ChatGPT in which I asked it how this lined up with its mission. You can read the full conversation here: https://chatgpt.com/share/691f2f39-84fc-800d-b745b254cc186d5c, but here’s the overview:
If you ask ChatGPT about the mental health consequences of consuming erotica, it will list the following (non-exclusive) harms: addiction, “Desensitization & Escalation,” lower relationship satisfaction and distrust, body-negativity, anxiety and guilt, and objectification.
If ChatGPT itself knows that erotica content is harmful to consumers, then, in ChatGPT’s own words, “it does rub against the ‘avoid causing harm’ part [of the OpenAI mission].”
Fascinatingly, ChatGPT itself can also explain exactly why it thinks OpenAI is doing this: “My Guess? This is partly a marketplace move. Let’s be real: Rival AIs are allowing erotica content. Those models gained tons of users. OpenAI doesn’t want to bleed market share.”
In ChatGPT’s own assessment, OpenAI is sacrificing the health of its users by feeding harmful addictions in the pursuit of market share.
What can we do, then? First, on a personal level, you can stop supporting OpenAI’s AI development without giving up ChatGPT’s legitimate and quite helpful uses. On a computer: User profile (bottom left) -> Settings -> Data Controls. On a phone: Swipe right -> User profile (bottom left) -> Data Controls. There, turn off "Improve the model for everyone" — a setting that, if you’re privacy-minded, you probably want off anyway, since it lets OpenAI use your conversations as training data, albeit “anonymized and aggregated.”
That’s what you can do personally. As civically minded voters, however, we can continue to push for enforcement of anti-pornography statutes to take down content that violates the standard of legal obscenity under the Miller Test. Such content is not free speech, as the Supreme Court held in Miller v. California (1973). It is obscene, and obscenity has no constitutional protection.
I agree with OpenAI’s mission statement, but the company is not living up to it. I look forward to a day when we can use AI to benefit humanity without using it to worsen our addictions.
Jason Keeley is a political science student at Auburn University at Montgomery. He is the Eagle Forum River Region Action Group leader and is currently an intern for the Barry Moore for U.S. Senate campaign. The views expressed here are his own. To contact, email JasonKeeley6@gmail.com.
Opinions do not reflect the views and opinions of ALPolitics.com. ALPolitics.com makes no claims nor assumes any responsibility for the information and opinions expressed above.