AI Tools Are Learning to Crack Crypto Smart Contracts—And Boost Security Too
A recent discovery shows that artificial‑intelligence assistants are becoming serious partners in the fight over blockchain security. A security researcher used Claude, an AI model from Anthropic, to spot a critical flaw in the Aztec Network’s roll‑up contracts—a set of smart contracts that help scale Ethereum. The bug lived in a Merkle library and could have let attackers tamper with token balances.

What’s striking is how quickly AI is improving at this game. According to Anthropic, the newest generation of large language models can now break more than half of the smart contracts they’re tested on, a jump from almost zero success just two years ago. Yet the same tools still struggle to find brand‑new vulnerabilities: when they scanned 2,849 fresh contracts from mid‑2025, only two genuine issues were flagged—a read‑only function that could inflate token supplies and a poorly checked fee claim that could reroute payments.

The upside is that the same AI “hackers” are being turned into defenders. Security teams are already leaning on these models to speed up code reviews and catch bugs before they go live. As the technology matures, we may see a future where AI works side by side with human auditors, making blockchain applications safer while also exposing new attack vectors.