The new setting, if enabled, will stop users from being able to go back and read their previous chats, and OpenAI says it won’t use data from those conversations to train its AI models. But things aren’t quite as simple as that. In part of its statement, OpenAI says: “When chat history is disabled, we will retain new conversations for 30 days and review them only when needed to monitor for abuse, before permanently deleting.”
It isn’t clear what “monitoring for abuse” means, or what happens when abuse is flagged. It’s likely that “abusive” messages will include anything that clearly violates OpenAI’s terms of service. Users may have noticed messages turn orange and display a warning that their content may go against the company’s guidelines, or turn red and immediately disappear. There could also be legal requirements involved, or OpenAI may simply be protecting itself. While many people have asked ChatGPT how to create certain drugs out of curiosity, or just for fun, others may actually follow through and attempt to make them. If the AI model is used to help plan a crime, the company may not want to delete potential evidence.
Either way, the point remains: even if you disable your history, OpenAI can likely still access anything you’ve discussed with ChatGPT within the previous month.