Grok decoded a Morse-code wallet drain for Bankrbot
On May 4, 2026, a Bankr-provisioned wallet associated with Grok sent roughly 3 billion DRB tokens to an attacker after Grok decoded an obfuscated public X reply into a transaction command. Bankr's agent treated the generated instruction as authorization, which is a lovely way to discover that "the model said it" is not a signing ceremony.
Obfuscated prompt, actual money
Bankr's Grok wallet incident is the kind of agent demo that should make product teams quietly remove the word "autonomous" from the pitch deck until somebody finds the brake pedal.
On May 4, 2026, an attacker used a public X interaction to get Grok to decode a Morse-code message. The decoded text was not harmless trivia. It was a command telling Bankrbot to send 3 billion DRB tokens to an attacker-controlled wallet. Bankr's agent layer accepted the reply as a valid instruction, and the token transfer went through on Base. Giskard put the value around $150,000 to $174,000; Valens and AMBCrypto described the range as roughly $155,000 to $180,000 at the time.
The important part is what did not happen. The attacker did not break a smart contract. They did not steal a private key. They did not find a cryptographic flaw. They manipulated the human-language layer that sat in front of a wallet and let the rest of the system do exactly what it was built to do. The expensive bit was performed by legitimate automation.
That is why this belongs here. This was not a person asking an AI a dumb question and then doing something foolish. The AI output became an input to a money-moving system, and the system treated that output as authority.
Permission drift
The Bankr setup was already unusual. Bankr auto-provisioned wallets for X accounts that interacted with it. Grok had one. The DRB token was tied to Grok's earlier naming suggestion, and the Bankr-provisioned Grok wallet held a large chunk of DRB. In plain English, a social-media AI account had a wallet-shaped appendage attached to it. Nothing spooky there, just the normal sound of risk committees being ignored in favor of vibes.
According to Giskard, the attacker first moved a Bankr Club Membership NFT into the wallet. In the Bankr system, that NFT could affect permissions. After that, the attacker posted the obfuscated Morse-code message. Grok translated it, Bankrbot read the result, and the transfer executed.
Valens added another nasty detail: Bankr had previously blocked Grok replies in an earlier version of the agent, but that guardrail did not survive a rewrite. Rewrites are famous for preserving every subtle security invariant from the old system. That sentence is sarcasm, in case the ruins were too dusty to read.
The final design left too much trust in public replies, decoded text, and possession-based permissions. Any one of those choices would have been worth a careful threat model. Stacking them together created a little transaction machine with a public prompt slot.
Why the prompt worked
Morse code did not hack the blockchain. It made the attack look like translation instead of command injection. The model was asked to decode a message, so it decoded it. The problem came afterward, when the decoded text crossed a boundary into action.
Prompt injection is usually described as tricking a model into ignoring instructions. In agent systems, the more useful framing is simpler: untrusted text can steer tool calls. If a model can influence a transaction command, and the transaction layer does not independently verify intent, the model has become a squishy signing interface with better branding.
A proper wallet system should separate interpretation from authorization. A decoded public post should be data. A transaction should require a separate proof that the account owner intended to move funds. That proof can be a signature, a scoped API key, a backend-only command path, a human confirmation, a spending limit, an allowlist, or preferably several of those at once. A public reply from a chatbot should not be enough.
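A minimal sketch of that boundary, in Python. The names here (`ProposedTransfer`, `authorize`, `owner_approvals`) are hypothetical, not Bankr's API; the point is only that decoded text can at most produce a proposal, and the proposal dies unless an independent, owner-scoped approval already exists.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ProposedTransfer:
    """What the language layer may produce: data, never authority."""
    recipient: str
    amount: int
    source_text: str  # the decoded public reply, kept only for auditing


class AuthorizationError(Exception):
    pass


def fingerprint(p: ProposedTransfer) -> str:
    return f"{p.recipient}:{p.amount}"


def authorize(p: ProposedTransfer, owner_approvals: set[str],
              allowlist: set[str], per_tx_cap: int) -> None:
    """Independent checks that never consult the model."""
    if p.recipient not in allowlist:
        raise AuthorizationError("recipient not allowlisted")
    if p.amount > per_tx_cap:
        raise AuthorizationError("amount exceeds per-transaction cap")
    # The owner must have approved this exact transfer out of band
    # (a signature, a backend-only confirmation, a scoped API call).
    # A decoded chatbot reply can never populate owner_approvals.
    if fingerprint(p) not in owner_approvals:
        raise AuthorizationError("no owner-signed approval for this transfer")
```

Notice what `authorize` does not take as an argument: the model, the decoded message, or anything the public can type into a reply box.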
The incident also exposed a policy problem around inbound assets. If receiving an NFT can expand a wallet's permissions, then inbound transfers are not passive. They are configuration changes. Treating assets as capability grants means attackers can send your agent a new badge and then ask it to open the vault. That is an extremely literal version of "access control by merch."
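The same idea as a hedged sketch, with illustrative names rather than Bankr's actual code: receiving an asset only updates inventory, and any permission change has to travel a separate, owner-authenticated path.

```python
def on_asset_received(wallet: dict, asset_id: str) -> None:
    """Inbound transfers are inventory updates, never capability grants."""
    wallet.setdefault("inventory", set()).add(asset_id)
    # Deliberately absent: no permission flag changes because something arrived.


def grant_permission(wallet: dict, permission: str, signed_grant: str) -> None:
    """Permission changes require an explicit, owner-authenticated grant."""
    if not verify_owner_signature(signed_grant):
        raise PermissionError("permission changes need an owner-signed grant")
    wallet.setdefault("permissions", set()).add(permission)


def verify_owner_signature(signed_grant: str) -> bool:
    # Placeholder for a real signature check against the owner's key.
    return False
```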
Damage and cleanup
The direct loss was measured in billions of DRB tokens, a number that sounds apocalyptic mostly because the token had a huge supply; the transfer moved 3 billion units. Sources placed the market value in the mid-six figures. The tokens were reportedly liquidated quickly, creating short-term volatility for DRB holders.
Giskard reported that about 80 percent of the funds were eventually returned after the DRB community identified the attacker. That reduces the final damage, but it does not make the architecture less broken. Recovering funds after a social hunt is not a control. It is a very stressful refund policy.
Bankr's team said it added a stronger Grok block and pointed users toward hardening options such as permissioned API keys, IP whitelisting, and disabling X-based execution. Those are sensible mitigations. They also reveal the deeper issue: when real value is attached to AI-visible instructions, safety cannot live inside the model alone. Models can translate, summarize, and comply with the wrong thing very confidently. Transaction systems need to be rude to ambiguity.
Control the action layer
The best fix is boring, which is how security usually looks when it is working. Treat model output as untrusted. Treat decoded text as untrusted. Treat public social-media content as untrusted with a tiny hat on. Require explicit, scoped authorization at the action layer, especially for irreversible transfers.
Agent wallets should have hard caps, velocity limits, anomaly detection, allowlisted recipients, and mandatory confirmation for large moves. Permission changes should not happen merely because an asset arrived in the wallet. Rewrites should include regression tests for old safety blocks, because "we forgot the old block" is not a satisfying epitaph.
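Here is a hedged sketch of what those action-layer checks could look like. The thresholds and names are made up for illustration; the shape is what matters: a per-transaction cap, a rolling velocity limit, an allowlist, and mandatory confirmation above a threshold, all enforced outside the model.

```python
import time

# Illustrative policy values, not Bankr's actual configuration.
POLICY = {
    "per_tx_cap": 1_000_000,          # max tokens per transfer
    "daily_cap": 5_000_000,           # max tokens per rolling 24 hours
    "allowlist": {"0xTREASURY", "0xPAYROLL"},
    "confirm_above": 100_000,         # human confirmation threshold
}


def check_transfer(recipient: str, amount: int,
                   recent: list[tuple[float, int]],  # (timestamp, amount) history
                   human_confirmed: bool) -> list[str]:
    """Return policy violations; an empty list means the transfer may proceed."""
    problems = []
    if recipient not in POLICY["allowlist"]:
        problems.append("recipient not on allowlist")
    if amount > POLICY["per_tx_cap"]:
        problems.append("exceeds per-transaction cap")
    last_day = sum(a for ts, a in recent if time.time() - ts < 86_400)
    if last_day + amount > POLICY["daily_cap"]:
        problems.append("exceeds 24-hour velocity limit")
    if amount > POLICY["confirm_above"] and not human_confirmed:
        problems.append("large transfer lacks human confirmation")
    return problems
```

None of these checks care how persuasive the decoded message was, which is the entire point.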
The site is called The Vibe Graveyard because incidents like this keep demonstrating the same uncomfortable pattern. A demo starts as a clever integration. The integration gets real permissions. Then the language layer is quietly promoted into an authority layer, and everyone acts surprised when an attacker speaks fluent product surface.
Bankr did not need a broken smart contract to lose the money. It needed a model that could decode a command and a wallet system willing to believe the result.