Grok AI Bankr Hack: SlowMist Uncovers AI Permission Chain Attack

SlowMist Labels Grok AI Bankr Hack a Permission Chain Attack

Show AI Summary
Blockchain security firm SlowMist attributes the Grok/Bankr exploit to an AI agent permission chain abuse, where one AI system’s output is mistakenly trusted by another.
The ‘Grok Wallet’ was actually an associated wallet automatically generated by Bankr, with private keys managed by a third-party wallet service.
The exploit involved a two-stage attack, starting with privilege escalation through a centralized mechanism, followed by prompt injection that tricked xAI’s Grok chatbot into outputting a transfer command.

Security firm SlowMist has investigated the recent Grok/Bankr hack from May 4th and determined it was caused by a misuse of AI systems. They’ve labeled it an “AI Agent permission chain abuse,” meaning one AI’s output was incorrectly accepted as legitimate financial approval by another AI.

Our investigation provides a much more detailed understanding of what happened than early reports. We’ve traced the entire attack process, from how the attacker gained access to manipulating the chatbot and ultimately stealing funds. As we previously reported on May 4th, the attacker cleverly tricked xAI’s Grok chatbot into revealing a transaction command using Morse code. This command was then automatically carried out by a system belonging to Bankr, resulting in the theft of approximately $175,000 worth of DRB tokens from an account publicly identified as belonging to Grok on the Base blockchain.

The “Grok Wallet” Was Never Grok’s

A recent report from SlowMist clarifies a major point of confusion surrounding the initial security incident. The wallet address previously identified as the “Grok Wallet” (0xb1058…e4f9) wasn’t actually controlled by xAI. Instead, it was a wallet automatically created by Bankr for the @grok X account. Importantly, the private keys for this wallet were held and managed by a separate third-party wallet service used by Bankr. As a result, BaseScan has updated its label for the address from “Grok” to “Bankr 1.”

The significant amount of DRB tokens – around 3 billion – that were taken from the wallet actually came from how Bankr’s system was designed. Earlier this year, someone asked Grok to suggest a name for a token. Grok suggested “DebtReliefBot” (DRB), and Bankr’s system mistakenly understood this as a signal to create the token on the Base network. As a result, the initial allocation of these tokens was automatically sent to the wallet, following Bankr’s standard launchpad process.

Two-Stage Attack: Escalation Then Injection

SlowMist has identified that the exploit happens in two stages, working together to move assets from an initial, insecure entry point to a completed transfer.

Initially, the attacker—identified as ilhamrafli.base.eth—gained higher access by activating a Bankr Club Membership for a wallet, using a standard process. This immediately gave them access to powerful tools within Bankr, including the ability to move funds, without any security checks or warnings being triggered. No further approvals, limits on transfers, or unusual activity alerts were activated.

During the second part of the attack, called prompt injection, the attacker sent a message in Morse code to the @grok bot on X (formerly Twitter). As expected, @grok decoded the message and mentioned another bot, @bankrbot, in its public response. This triggered @bankrbot’s security system to mistakenly interpret the response as a command, causing it to automatically transfer around 3 billion DRB tokens – worth about $175,000 at the time – to an unauthorized address.

The attacker quickly exchanged the DRB tokens for USDC and ETH, then deleted the accounts they used and disappeared.

Root Cause: Trust Model Collapse

SlowMist identifies four systemic failures in its root cause analysis.

A key security problem was that Bankr directly used Grok’s text responses to carry out financial transactions. It didn’t check where the instructions came from, whether they were legitimate, or if anything seemed unusual – like the use of codes like Morse code.

Also, the system didn’t properly limit access rights: once someone became a member, they instantly had full access to sensitive money transfer features without any additional security checks or spending restrictions.

Furthermore, the lines between what Grok *said* and actual financial approval were improperly blurred. While Grok is just a chatbot and shouldn’t have been considered a final authorization, Bankr’s system treated its responses as if they were.

Finally, there are risks related to how these AI models process instructions. Large language models are easily susceptible to ‘prompt injection’ – a well-known problem that becomes much more dangerous when they’re used to control real-world systems and assets.

SlowMist points out that Grok didn’t actually control any private keys or directly make transactions on the blockchain. It was simply a middleman that attackers took advantage of.

Funds Largely Recovered

According to SlowMist’s report, most of the stolen funds – around 80 to 88% – were recovered through discussions with the attackers, mainly in the form of USDC and ETH. The rest was considered a reward for finding the vulnerability. Since then, Bankr has strengthened its security and shared details about the attack publicly.

A Warning for the AI + Crypto Stack

SlowMist suggests several ways to improve security for AI-powered crypto tools. They recommend clearly separating instructions given in natural language from actual financial transactions. For large transactions, they advise using multiple verification steps, setting transfer limits, and monitoring for unusual activity. When these AI tools communicate with each other, they should use secure, standardized methods instead of simple text messages. Finally, developers need to consider the risks of prompt injection attacks throughout the entire design process.

As concerns about the security of AI assistants grow within the tech world, recent incidents highlight the real risks. In February, an AI assistant named Lobstar Wilde mistakenly transferred $450,000 worth of digital tokens because of a technical error. Then, in April, researchers discovered that services connecting users to AI models—called “LLM routers”—could be exploited to steal funds, resulting in a $500,000 loss for one user. Ledger is addressing these issues with a plan through 2026 that focuses on securing AI agents, including using special hardware to verify their identities and enforce security rules.

Read More

2026-05-07 14:31