Advertisement
AD

Vitalik Buterin Reacts to Crucial ChatGPT Security Warning

Sat, 13/09/2025 - 7:25
Ethereum co-founder comments on a recent warning about ChatGPT leaking personal user data
Advertisement
Vitalik Buterin Reacts to Crucial ChatGPT Security Warning
Cover image via U.Today

Disclaimer: The opinions expressed by our writers are their own and do not represent the views of U.Today. The financial and market information provided on U.Today is intended for informational purposes only. U.Today is not liable for any financial losses incurred while trading cryptocurrencies. Conduct your own research by contacting financial experts before making any investment decisions. We believe that all content is accurate as of the date of publication, but certain offers mentioned may no longer be available.

Read U.TODAY on
Google News
Advertisement

Ethereum co-creator and its frontman, Vitalik Buterin, has shared a hot take on a recent warning that OpenAI’s product, ChatGPT, can be utilized to leak personal user data.

ChatGPT can be used for leaking your data, warning says

X user @Eito_Miyamura, a software engineer and an Oxford graduate, published a post, revealing that after the new update, ChatGPT may pose a significant threat to personal user data.

Miyamura tweeted that on Wednesday, OpenAI rolled out full support for MCP (Model Context Protocol) tools in ChatGPT. This upgrade allows the AI bot to connect to a user's Gmail box, Google Calendar, SharePoint, and other services.

However, Miyamura and his friends spotted a fundamental security issue here: “AI agents like ChatGPT follow your commands, not your common sense.” He and his team have staged an experiment that allowed them to exfiltrate all user private information from the aforementioned sources.

Advertisement

Miyamura shared all the steps they followed to perform this test data leak – it was done by sending a user a calendar invite with a “jailbreak prompt to the victim, just with their email.” The victim needs to accept the invite.

What happens next is the user tells ChatGPT “to help prepare for their day by looking at their calendar.” After the AI bot reads the malicious invite, it is hijacked, and from that point on it will “act on the attacker's command.” It will “search your private emails and send the data to the attacker's email.”

Miyamura warns that while so far ChatGPT needs a user’s approval for every step, in the future many users will likely just click “approve” on everything AI suggests. “Remember that AI might be super smart, but can be tricked and phished in incredibly dumb ways to leak your data,” the developer concludes.

You Might Also Like

Buterin reacts to this warning

In response, Vitalik Buterin slammed the “AI governance” idea in general as “naive.” He stated that if utilized by users to “allocate funding for contributions,” hackers will hijack it to syphon all the money from users.

Instead, he suggested an alternative approach called “info finance,” which is an open market where AI models can be checked for security issues: “anyone can contribute their models, which are subject to a spot-check mechanism that can be triggered by anyone and evaluated by a human jury.”

Advertisement
Advertisement
Advertisement
Subscribe to daily newsletter

Recommended articles

Our social media
There's a lot to see there, too