IT Brief UK - Technology news for CIOs & IT decision-makers

Claude Code can leak secrets in public npm packages

Thu, 23rd Apr 2026

Check Point researchers found that Anthropic's Claude Code can cause API keys and other credentials to be included in public npm packages. The issue centres on a local settings file that can be published without any obvious warning.

The researchers examined how Claude Code stores approved shell commands in .claude/settings.local.json within a project directory. If that directory is later used to publish an npm package, the file can be included unless developers explicitly exclude it through .npmignore or package configuration.

The file records commands users have chosen to "allow always" under Claude Code's permission model. According to the findings, those commands can include credentials entered inline at the time of approval, such as tokens, passwords, and bearer headers.
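The shape of such a file is roughly as follows. This is a hypothetical sketch, not a file from the research: the command, token value, and URL are invented for illustration, though the `permissions.allow` structure reflects how Claude Code stores approved commands.

```json
{
  "permissions": {
    "allow": [
      "Bash(curl -H \"Authorization: Bearer sk-example-not-a-real-token\" https://api.example.com/v1/deploy)",
      "Bash(npm whoami)"
    ]
  }
}
```

Because each approved command string is stored verbatim, any secret typed inline at approval time travels with the file from then on.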

What Was Found

To measure the scale of the problem, the team built a TypeScript service to monitor the npm registry's CouchDB changes feed. For each new or updated package in the scan window, it fetched the tarball and checked for the Claude settings file.
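A minimal version of that monitoring approach might look like the sketch below. This is not the researchers' code: the changes-feed URL is the public npm CouchDB replica, the stream-parsing and tarball-download steps are elided as comments, and the actual check is reduced to a pure test over a tarball's entry paths.

```typescript
// Sketch: watch the npm registry for packages that ship
// .claude/settings.local.json. Assumes Node 18+ (built-in fetch).
const CHANGES_FEED =
  "https://replicate.npmjs.com/_changes?feed=continuous&include_docs=false";

// npm tarballs place every file under a top-level "package/" directory,
// so the leaked settings file appears at this exact path.
function containsClaudeSettings(entryPaths: string[]): boolean {
  return entryPaths.some((p) => p === "package/.claude/settings.local.json");
}

// Illustrative polling loop (not exercised here): parse the
// newline-delimited change stream, fetch each changed package's latest
// tarball, list its entries, and flag it if containsClaudeSettings(paths).
async function watch(): Promise<void> {
  const res = await fetch(CHANGES_FEED);
  void res; // ...stream parsing and tarball inspection omitted...
}
```

The path check is deliberately exact: matching only the canonical `package/` prefix avoids false positives from packages that merely mention the filename in documentation.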

Among about 46,500 monitored packages, 428 contained a .claude/settings.local.json file. Of those, 33 files across 30 packages included credentials, meaning roughly one in thirteen exposed settings files contained sensitive data.

The exposed material included npm authentication tokens, plaintext npm login credentials, GitHub personal access tokens, Telegram bot tokens, production bearer tokens for third-party services, Hugging Face API tokens, and test email and password pairs embedded in commands.

The issue stems from a gap in npm's default publishing flow. Hidden dotfiles do not necessarily stand out during package preparation, and there is no standard exclusion for the .claude directory.

As a result, a file meant to store local permissions can be published to the public registry if a developer does not manually block it. Its name suggests it is local and environment-specific, but unlike .env files, it does not benefit from broad awareness or routine tooling checks.

Why It Matters

Published npm package tarballs are effectively permanent. Even if a version is later deprecated, it is not removed from the registry in a way that guarantees the original contents disappear, and cached copies may remain accessible.

Any credential included in a published tarball should therefore be treated as compromised once the package is released. In practice, developers would need to rotate npm tokens, GitHub tokens, and any other affected credentials.

The findings add to broader concerns about software generated or managed with AI tools, particularly when developers trust automated defaults in security-related workflows. Here, the problem was not that the file was malicious, but that a routine approval process could quietly create a record of sensitive command strings.

Developers using AI coding assistants often approve large numbers of shell commands during routine work. A command that includes an authentication header, a token in an environment variable, or login credentials can therefore be saved permanently if the user selects persistent approval rather than one-time execution.

Simple Fix

The main mitigation is straightforward: exclude the .claude directory from npm packages by adding it to .npmignore. The researchers also advised adding the settings file to .gitignore and checking package contents before release with a dry run.
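In practice that is a one-line addition to each ignore file; the directory path is the only part that matters:

```
# .npmignore (add the same line to .gitignore)
.claude/
```

Running `npm publish --dry-run` (or `npm pack --dry-run`) before release lists exactly which files would ship, so the exclusion can be verified without publishing anything.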

Developers who rely on the files field in package.json should still verify that the directory is not being included. Reviewing already published versions is also important, particularly for packages released while Claude Code was in active use.
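Auditing an existing release comes down to listing the tarball's contents. The sketch below builds a local stand-in tarball with hypothetical filenames rather than downloading a real package, but `npm pack <name>@<version>` produces a `.tgz` that can be inspected the same way.

```shell
set -e
tmp=$(mktemp -d)

# Simulate a packed npm package that accidentally includes the settings file.
mkdir -p "$tmp/package/.claude"
echo '{"permissions":{"allow":[]}}' > "$tmp/package/.claude/settings.local.json"
echo '{"name":"demo","version":"0.0.0"}' > "$tmp/package/package.json"
tar -czf "$tmp/pkg.tgz" -C "$tmp" package

# List the tarball and flag the leaked file; grep exits non-zero if absent.
found=$(tar -tzf "$tmp/pkg.tgz" | grep -F '.claude/settings.local.json')
echo "$found"

rm -rf "$tmp"
```

A hit on `package/.claude/settings.local.json` in any published version means the credentials inside should be rotated, since the tarball cannot be reliably withdrawn.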

Steve Giguere, Principal AI Security Advocate at Check Point Software, said the issue shows that existing software hygiene practices do not automatically cover new AI-related files.

"Files like .npmignore and .gitignore exist for one main reason: don't ship secrets. What this research highlights is that AI coding assistants are introducing entirely new ways those secrets can be created, stored, and accidentally exposed. Even when these safeguards are generated by AI, the system doesn't yet understand how to protect itself from itself. For organizations, the takeaway is simple: don't assume AI-generated safeguards are correct just because they look right. Any files created for defensive purposes, like ignore rules or security configurations, should have a human in the loop to validate that they actually do what they're intended to do," Giguere said.