OpenAI has taken decisive action against an Iranian influence operation that was using ChatGPT to spread disinformation about the U.S. presidential election. The AI company identified and banned multiple accounts linked to the "Storm-2035" group, which was responsible for generating fake news articles and social media posts targeting both Republican and Democratic voters.

While the operation produced a substantial volume of content, it garnered minimal engagement, leading OpenAI to classify it as a low-level threat. Despite this, the company’s intervention underscores its commitment to combating the misuse of AI for malicious purposes.

The timing of this disclosure is noteworthy, coming just a week after the Trump campaign accused Iran of hacking its computer systems. While a direct connection between the two events hasn’t been established, it highlights the broader context of heightened tensions between the U.S. and Iran.

OpenAI’s actions serve as a reminder of the potential for AI to be exploited for disinformation campaigns and the importance of robust safeguards to prevent such abuses. By proactively identifying and blocking these accounts, the company has demonstrated its role as a responsible steward of AI technology.
