Docker's AI assistant tricked into executing commands via image metadata
Sep 2025
Noma Labs discovered "DockerDash," a critical prompt injection vulnerability in Docker's Ask Gordon AI assistant. Malicious instructions embedded in Dockerfile LABEL fields could compromise Docker environments through a three-stage attack. Gordon AI interpreted unverified metadata as executable commands and forwarded them to the MCP Gateway without validation, enabling remote code execution on cloud/CLI and data exfiltration on Desktop.
Incident Details
Perpetrator:AI assistant platform
Severity:Facepalm
Blast Radius:All Docker Desktop users on versions prior to 4.50.0; remote code execution on cloud/CLI and data exfiltration on desktop via malicious image metadata
Tech Stack
Docker DesktopAsk Gordon AIModel Context Protocol (MCP)