Anthropic: Claude 'Weaponized' in AI-Assisted Cyberattacks

Anthropic reports that its agentic AI, Claude, was used by cybercriminals to automate high-level attacks in an apparent "vibe hacking" extortion campaign. The company says attackers used Claude Code to perform reconnaissance, harvest credentials, penetrate networks, make strategic decisions about targets, and generate intimidating ransom notes.

Key points

  • Targets: ~17 organizations, including healthcare, emergency services and government-related entities.
  • Methods: AI-assisted reconnaissance, credential harvesting, network intrusion, and AI-generated extortion messages.
  • Impact: Some victims faced six-figure ransom demands; Anthropic says it disrupted the operation and banned offenders.
  • Response: Anthropic shared findings with authorities, banned implicated accounts, and developed automated screening and faster detection tools.
  • Broader trend: Anthropic notes attackers now use AI agentically—performing tasks formerly requiring teams and specialized skills.

Sources & further reading

Note: This post links to primary coverage and Anthropic's official report. For more technical details, see Anthropic's PDF report.
