IT Brief UK - Technology news for CIOs & IT decision-makers

AI agents expose risks in insecure default databases

Wed, 4th Feb 2026

A database security lapse linked to AI agent deployments has renewed scrutiny of default configurations in managed database services and modern application platforms.

Security publication 404 Media reported an exposure involving Moltbook, where outsiders could access the database backing other users' agents and interact with its contents. The report raised questions about how AI agent systems store data and how teams secure those environments as products move into production.

Blair Rampling, Vice President Engineering at Percona, said default database security misconfigurations still appear in production environments and that the Moltbook case illustrates a broader pattern.

"Even in 2026 we continue to see default database security misconfigurations in production, and the recent Moltbook incident underscores why this remains a systemic issue across the industry. Many teams still assume that managed services or modern platforms are secure by default. In practice, defaults are optimised for developer velocity, not for security. Without explicitly configuring authentication, access controls, row-level security and network exposure, databases can end up publicly reachable with sensitive data and control surfaces exposed," said Rampling.

Defaults in Production

Managed database services and platform tooling often ship with settings that prioritise ease of deployment, letting teams move quickly from development to production. Rampling said organisations still treat those defaults as sufficient once a system goes live.

He pointed to specific controls that teams need to configure directly, rather than rely on inherited settings. These include authentication, access controls, row-level security and network exposure limits. He described these as baseline measures for systems holding sensitive data or acting as a control layer for other services.
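The baseline controls he lists can be sketched as a pre-launch audit. The configuration shape below is an illustrative assumption for this article, not any vendor's actual API:

```python
# Hypothetical pre-launch audit of the baseline controls named above:
# authentication, least-privilege roles, row-level security, network exposure.
# The config dict keys are assumptions made for illustration.

def audit_baseline(config: dict) -> list[str]:
    """Return a list of findings for missing baseline controls."""
    findings = []
    if not config.get("authentication_required", False):
        findings.append("authentication is not enforced")
    if config.get("network_exposure") == "public":
        findings.append("database is publicly reachable")
    if not config.get("row_level_security", False):
        findings.append("row-level security is disabled")
    for role, grants in config.get("roles", {}).items():
        if "ALL" in grants:
            findings.append(f"role '{role}' has blanket privileges")
    return findings

# A deployment left on velocity-friendly defaults trips every check.
insecure_default = {
    "authentication_required": False,
    "network_exposure": "public",
    "row_level_security": False,
    "roles": {"app": ["ALL"]},
}
for finding in audit_baseline(insecure_default):
    print("finding:", finding)
```

The point of the sketch is that each control is asserted explicitly; nothing is assumed safe because it was inherited from a platform default.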

Agent Risk

Rampling said the stakes increase when database access issues intersect with AI systems that act on behalf of users. He described agentic AI systems as autonomous or semi-autonomous services that execute tasks with delegated authority.

"What makes this situation significantly more dangerous is the combination of agentic AI and insecure data stores. Autonomous or semi-autonomous AI agents operate with delegated authority. They can post content, trigger workflows, interact with APIs and act on behalf of users or systems," said Rampling.

He said the outcome can shift from data exposure to operational control. "When the underlying database is exposed, the impact goes far beyond a traditional data leak. An attacker can potentially take control of agents themselves, manipulate their behaviour, abuse credentials, or propagate actions at scale and at machine speed. In agentic systems, a database misconfiguration becomes a control plane compromise," said Rampling.

Launch Controls

Rampling set out steps he said developers should treat as essential before launch for agentic or AI-driven products. He said teams need to secure databases first and apply explicit settings rather than trust defaults.

"For developers building agentic or AI-driven products, there are several steps that should be non-negotiable before launch: Secure the database first by explicitly configuring authentication, enforcing least-privilege access, enabling row-level security and locking down network exposure. Defaults should never be treated as a safe endpoint," said Rampling.

He said security checks need automation as part of software delivery. "Automate security checks as part of CI/CD and pre-launch validation. This includes continuous configuration audits, exposure scans, and policy enforcement so that insecure settings are caught before they reach production," said Rampling.
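One way to wire such a check into a pipeline is a configuration scanner that flags known-insecure settings before deployment. The keys below are PostgreSQL-style names used purely for illustration, and the ruleset is a minimal sketch rather than a complete policy:

```python
# Sketch of a CI/CD gate: scan a key = value config file and report
# settings that should never reach production. The setting names are
# PostgreSQL-style examples, not an exhaustive or authoritative list.

INSECURE_SETTINGS = {
    "listen_addresses": "*",   # bound to all network interfaces
    "ssl": "off",              # unencrypted client connections
}

def scan_config(text: str) -> list[str]:
    """Return the insecure `key = value` pairs found in the config text."""
    violations = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if "=" not in line:
            continue
        key, value = (part.strip() for part in line.split("=", 1))
        if INSECURE_SETTINGS.get(key) == value.strip("'\""):
            violations.append(f"{key} = {value}")
    return violations

sample = "listen_addresses = '*'\nssl = off\nmax_connections = 100\n"
for violation in scan_config(sample):
    print("insecure setting:", violation)
# In a real pipeline, a non-empty result would fail the build.
```

Run on every commit, a gate like this catches the "secure later" configuration drift Rampling describes before it ships.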

He also singled out how teams store and handle secrets. "Treat secrets as toxic data. API keys, tokens and credentials should never be stored in clear text in databases. Teams should sanitise data pipelines, use secret management systems, and implement automated scanning to detect and prevent secrets from being persisted or logged," said Rampling.
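The automated secret scanning he mentions might look like the following sketch. The regular expressions are deliberately simplified examples, not a production ruleset:

```python
# Illustrative secret scanner for a data pipeline: reject or redact
# records before they are persisted or logged. Patterns are simplified
# examples only; real scanners ship far larger, tuned rulesets.
import re

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "bearer_token": re.compile(r"\bBearer\s+[A-Za-z0-9._\-]{20,}"),
    "generic_api_key": re.compile(r"(?i)\bapi[_-]?key\s*[:=]\s*\S{16,}"),
}

def find_secrets(record: str) -> list[str]:
    """Return the names of secret patterns matched in a record."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(record)]

row = "user=alice api_key=3f9c2b7e8d1a4c6f9b2e note=hello"
if find_secrets(row):
    print("record quarantined:", find_secrets(row))
```

Treating any match as grounds to quarantine the record reflects the "secrets as toxic data" stance: the cost of a false positive is far lower than persisting a live credential in clear text.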

Identity and Monitoring

Rampling framed AI agents as identities that need controls similar to users and services. He said teams should assume compromise risk and scope permissions narrowly.

"Apply a zero-trust model to AI agents and other non-human identities. Agents should have tightly scoped permissions and be assumed compromise-prone, just like any external user or service," said Rampling.
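A deny-by-default wrapper illustrates one way to scope a non-human identity as he suggests. The class and action names here are hypothetical:

```python
# Sketch of zero-trust scoping for an agent identity: every action must
# appear on an explicit allowlist, and anything else is refused. The
# ScopedAgent class and action names are illustrative assumptions.

class ScopedAgent:
    """Wraps an agent identity with an explicit allowlist of actions."""

    def __init__(self, name: str, allowed_actions: frozenset):
        self.name = name
        self.allowed_actions = allowed_actions

    def perform(self, action: str, payload: str) -> str:
        if action not in self.allowed_actions:
            # Deny by default: a compromised or manipulated instruction
            # cannot widen the agent's authority.
            raise PermissionError(f"{self.name} is not permitted to {action}")
        return f"{self.name} executed {action}: {payload}"

support_bot = ScopedAgent("support_bot", frozenset({"read_ticket", "post_reply"}))
print(support_bot.perform("post_reply", "Thanks, looking into it."))
try:
    support_bot.perform("delete_database", "users")
except PermissionError as exc:
    print("blocked:", exc)
```

Because the agent is assumed compromise-prone, the blast radius of a hijacked instruction is bounded by the allowlist rather than by whatever the database would permit.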

He said monitoring and auditing need to remain active after launch. "Continuously monitor and audit production systems. Security is not a one-time checklist item, especially when autonomous systems can amplify the impact of any failure," said Rampling.
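A minimal sketch of that continuous auditing, assuming an in-memory list stands in for an immutable log store, could record every agent action as a structured event:

```python
# Sketch of an append-only audit trail for agent actions, so post-launch
# monitoring has something concrete to review. The field names and the
# in-memory list are illustrative; production systems would write to an
# immutable, centralised log store.
import json
from datetime import datetime, timezone

def audit_event(log: list, agent: str, action: str, outcome: str) -> None:
    """Append one structured audit record to the trail."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "outcome": outcome,
    }
    log.append(json.dumps(record))

trail = []
audit_event(trail, "support_bot", "post_reply", "allowed")
audit_event(trail, "support_bot", "drop_table", "denied")
print(len(trail), "events recorded")
```

Denied actions are logged alongside allowed ones, since a spike in denials is often the earliest signal that an autonomous system is being probed or manipulated.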

Percona works with organisations running open source databases, with a focus on security, management and performance. Rampling said the company sees recurring issues across different sizes of organisation.

"At Percona, we see these issues repeatedly across organisations of all sizes. The takeaway is not to slow innovation in AI, but to recognise that as systems gain autonomy, the fundamentals of database security, automation, and secret hygiene become even more critical," said Rampling.