Over 100,000 compromised ChatGPT accounts have been leaked to Dark Web marketplaces over the past year.
ChatGPT has surged in popularity in recent months, becoming one of the most influential AI chatbots on the Internet.

As it turns out, though, hackers have taken aim at OpenAI’s language model – at least according to cybersecurity firm Group-IB.

In a blog post published on June 20th, Group-IB reported that more than 100,000 compromised ChatGPT accounts had been leaked to various Dark Web marketplaces.

The post notes that employees are increasingly using ChatGPT to optimize their workflows across fields ranging from software development to business communications.
The firm also points out that ChatGPT stores the history of user queries and the AI's responses.

"Consequently, unauthorized access to ChatGPT accounts may expose confidential or sensitive information, which can be exploited for targeted attacks against companies and their employees," the post reads.

In an accompanying infographic, Group-IB revealed that compromised ChatGPT accounts and their credentials have been appearing on dark web marketplaces since as early as June 2022, peaking in May 2023, when over 26,000 accounts were leaked.

The firm outlined that the Asia-Pacific region saw the largest number of ChatGPT accounts stolen during the aforementioned period, comprising over 40% of all compromised credentials.
Commenting on the matter was Dmitry Shestakov, Head of Threat Intelligence at Group-IB, who said:
"Many enterprises are integrating ChatGPT into their operational flow. Employees enter classified correspondences or use the bot to optimize proprietary code. Given that ChatGPT's standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials."
