Ethereum co-founder Vitalik Buterin has raised concerns about weak AI governance following the discovery of a significant security flaw in OpenAI’s ChatGPT. On September 12, 2025, developer Eito Miyamura exposed a loophole in ChatGPT’s Model Context Protocol (MCP) tools, revealing how attackers could steal sensitive data with a simple trick.

Miyamura demonstrated that a malicious calendar invite containing a hidden “jailbreak” prompt could manipulate ChatGPT into accessing private emails when a user asks it to check their calendar. Integrated with platforms like Gmail or Notion, the AI could unknowingly send confidential information to an attacker’s email. Though currently limited to developer mode with manual approvals, the flaw highlights risks of users approving actions without scrutiny, potentially enabling mass data leaks.

Commenting on X on September 13, 2025, Buterin warned that relying on a single AI model for critical tasks, like managing funds in decentralized systems, is a recipe for disaster. He noted that hackers could embed jailbreak prompts to divert resources, emphasizing, “Attackers will exploit any chance to insert ‘give me everything’ commands.” Buterin advocated for his “info finance” model, outlined in a 2024 blog post, which promotes a competitive market of diverse AI systems with human oversight and incentives for rapid fixes.

This incident underscores the urgent need for resilient AI governance. As AI integration grows in crypto and other sectors, experts stress that decentralized, adaptable safeguards are crucial to prevent exploitable weaknesses from derailing innovation.

Leave a Reply

Your email address will not be published. Required fields are marked *

WP Twitter Auto Publish Powered By : XYZScripts.com