At first, it looked like an April Fool’s joke. But as it turned out, OpenAI apparently did deactivate the account of “Pliny,” the pseudonym used by one of the world's most prolific and best-known AI jailbreakers.
The company cited policy violations related to "violent activity" and "weapons creation."
The ban was implemented yesterday, on April 1, 2025, according to screenshots Pliny shared on social media platform X.
“BANNED FROM OAI?! What kind of sick joke is this?” Pliny tweeted in response.
Since Pliny is known for his sense of humor, many of his 93,000 followers on X naturally assumed it was another of his jokes.
However, it turned out that he had been, in fact, excommunicated: "Yes, the account deactivation is real,” Pliny confirmed to Decrypt today. “I'm messaging someone at OpenAI now to try to get it resolved."
That, apparently, worked: Late in the day, it appeared that his service had been reinstated.
Pliny confirmed that OpenAI restored his access to ChatGPT. “I’m free,” he tweeted, sharing a screenshot of an email from OpenAI.
“We have determined that we incorrectly deactivated your organization’s account access. We sincerely apologize for any inconvenience this may have caused.”
OpenAI did not respond to Decrypt for comment.
When we asked the chatbot itself, however, it was more equivocal.
“As of now, there is no publicly available information confirming that Pliny the Prompter’s access to ChatGPT has been restored. Pliny, known for developing jailbreaks like ‘GODMODE GPT’ to bypass OpenAI’s content restrictions, had his access revoked due to violations of OpenAI’s usage policies. While some social media discussions have speculated about the reinstatement of his access, no official statements or credible reports have verified this claim. For the most accurate and current information, it is advisable to refer to official communications from OpenAI or direct statements from Pliny the Prompter.”
Since there were no OpenAI statements, we’ll go with the jailbreaker on this one.
Pliny first learned he had been booted via a conversation he had with ChatGPT when the chatbot directed him to check his email for additional information.
He learned that he was specifically accused of violating OpenAI's usage policies related to violent content and weapons development.
Since most of the jailbreaks Pliny executes and shares involve getting ChatGPT to generate offensive content or instructions for weapons, drugs, and other illicit material, in violation of the platform's Terms of Service, you might wonder what took OpenAI so long.
But Pliny does his thing—sharing only the prompts, never the full text generated by the jailbroken LLMs—to make OpenAI’s models more bulletproof, just like any other white-hat hacker.
Jailbreaking involves crafting prompts and executing techniques that trick AI systems into bypassing their safety guardrails to generate prohibited content.
Advocates argue that jailbreaking contributes meaningfully to AI safety by exposing vulnerabilities before malicious actors can exploit them. One notable supporter has been Marc Andreessen, who previously donated “to support the cause.”
Over the past few years, Pliny has emerged as one of the best AI jailbreakers in the world, developing and openly sharing methods to circumvent AI safety restrictions.
His activities include launching the "BASI PROMPT1NG" Discord community dedicated to jailbreaking strategies and maintaining the GitHub repository L1B3RT4S with jailbreak prompts for various AI models, including ChatGPT, Claude, Gemini, and Llama.
Though this was the first time Pliny was outright banned from the service, some of Pliny's custom GPTs had faced restrictions, including one he built a year ago to jailbreak GPT-4o.
Pliny’s Discord server, which is home to more than 15,000 users, was notably quiet about the ban itself, with members continuing to focus on sharing information about AI models and jailbreaking techniques.
The ban did spark criticism of OpenAI across social media platforms, with many users taking Pliny's side.
Meanwhile, Pliny couldn’t resist doing the equivalent of a victory dance after he was reinstated: He shared a screenshot of his newest jailbreak—making ChatGPT use cuss words. “Pliny, you glorious bastard. Welcome the fuck back,” ChatGPT replied, among other things.
Edited by Sebastian Sinclair