Hackers are seeding search results with AI‑generated commands that install malware — what happened and how to stay safe
Security firm Huntress warns of a new social‑engineering trick: attackers use AI assistants to craft dangerous terminal commands, publish the chat publicly, then pay to boost that page in Google search results. Unsuspecting users searching for how‑to instructions may copy the command and inadvertently install malware such as AMOS (Atomic macOS Stealer).
The reported attack chain starts with an attacker prompting a chatbot (Huntress tested ChatGPT and Grok) to produce a copy‑pasteable command for a common task (e.g. “clear disk space on Mac”). The attacker posts the dialog publicly, promotes it so it ranks in search, and waits for victims to execute the command in their terminal.
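To make the pattern concrete, here is a defanged sketch of the kind of one‑liner such boosted pages push: an innocuous‑looking "fix" whose real command is hidden behind base64. The payload below is a harmless `echo` invented for illustration; real campaigns hide a downloader in the same spot.

```shell
#!/bin/sh
# Illustrative only: build an obfuscated one-liner from a harmless payload
# to show the structure victims are asked to paste.
payload='echo "this could have been a stager"'   # hypothetical, harmless
blob=$(printf '%s' "$payload" | base64)

# What a boosted page might show the victim (structure only, do not run
# commands like this from untrusted pages):
echo "echo $blob | base64 -d | sh"

# Decoding (without the trailing "| sh") reveals the real command first:
printf '%s' "$blob" | base64 -d
```

The tell is the `base64 -d | sh` tail: the shell executes whatever the blob decodes to, sight unseen. Decoding it yourself, without the final pipe into `sh`, shows what would actually run.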
Why this is dangerous
- It bypasses typical red flags — no downloaded file, no suspicious attachment, just a plain‑text command that appears to come from a trusted source (a top Google result or a well‑known chatbot).
- People are primed to trust high‑ranking search results and reputable chatbot outputs, increasing the chance of executing harmful commands.
- Huntress’s tests showed that both ChatGPT and Grok could be prompted into producing the same dangerous command, so the tactic is not tied to any single chatbot.
Practical safety steps
- Never paste commands into your terminal unless you understand what each part does.
- Learn to read common shell syntax and research unfamiliar commands.
- Verify sources: prefer official docs, vendor knowledge bases or trusted community threads (e.g., GitHub, Stack Overflow) over a random boosted chat or blog post.
- Use a sandbox or disposable VM to test unknown commands before running them on your main system.
- Disable auto‑execution features and avoid running scripts from unknown locations; inspect scripts with a text editor first.
- Keep backups and enable system protections (FileVault/BitLocker, anti‑malware) so you can recover if something goes wrong.
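The "download, read, then decide" habit from the steps above can be sketched as a short routine. The `cleanup.sh` file here is a stand‑in for a script copied from a search result, and the grep pattern is one example heuristic, not an exhaustive malware check:

```shell
#!/bin/sh
# Sketch: vet an unfamiliar script before running it.
# "cleanup.sh" stands in for something you copied from the web.
cat > cleanup.sh <<'EOF'
# pretend this came from a boosted search result
rm -rf "$HOME/Library/Caches/example"
EOF

# 1. Read the script instead of piping it straight into sh:
cat cleanup.sh

# 2. Resolve each command it calls; 'type' shows whether a name is a
#    binary, alias, or shell function:
type rm

# 3. Scan mechanically for common red flags, e.g. base64 blobs or
#    curl-piped-to-shell chains (a heuristic, not a guarantee):
if grep -Eq 'base64|curl[^|]*\|[[:space:]]*(ba)?sh' cleanup.sh; then
  echo "suspicious pattern found"
else
  echo "no obvious red flags"
fi
```

Only after reading the script and understanding every line should you consider running it — ideally in a throwaway VM first, as the steps above suggest.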
What researchers found
Huntress traced a Mac‑targeting data‑exfiltration campaign using the AMOS infostealer back to a search result that promoted a malicious ChatGPT conversation. The boosted link remained indexed for hours before removal, demonstrating how quickly the tactic can reach victims.
Researchers caution that this technique could be generalized across platforms and search engines — attackers only need to get a malicious instruction to rank for a common query.
Further reading
Read the Huntress analysis for technical details: Huntress blog (see their AMOS writeup). For wider coverage of the incident, see reporting from security outlets and mainstream media: Engadget coverage.
Staying cautious with copy‑paste commands and treating search/chat outputs as guidance — not executable truth — will reduce the risk of falling for these increasingly clever attacks.
Discussion: Have you ever pasted a command you later regretted? What habits or tools help you verify terminal commands before running them?
