Someone on your team just connected an AI coding agent to your production database. They had good reasons — the agent needed to check schema, run queries for debugging, maybe write a migration. It worked great in staging. What could go wrong?
Plenty. This is the database access problem for AI agents, and it's one of the most common — and most underappreciated — risks in the current wave of autonomous AI tooling.
## The Problem Is Not "AI Is Untrustworthy"
Let's be precise about the actual risk. It's not that AI agents are malicious. It's that they're operating under uncertainty, with incomplete context, and executing actions that are hard to reverse. Databases amplify all three problems.
Uncertainty: The agent may not know whether the table it's updating is shared with another service. It may not know that "users" in staging has dummy data but "users" in prod has 2 million real accounts. It may not know that the migration it's running was already run in production last week.
Incomplete context: The agent has the task description, some code context, maybe some schema. It doesn't have the operational knowledge your senior engineer has — the kind that says "we never run migrations during business hours" or "that table has a trigger that fires webhooks."
Irreversibility: A dropped table is gone. Rows clobbered by a mass UPDATE with the wrong WHERE clause are gone. A truncated table is gone. Your AI agent does not share your dread of these operations. It will execute them with the same calm confidence it uses to run `SELECT 1`.
## The Spectrum of Database Access
Before we talk solutions, let's map the spectrum:
| Access Level | What Agent Can Do | Risk Level |
|---|---|---|
| Read-only replica | SELECT queries, schema inspection | Low — can't modify data |
| Read-write, no DDL | INSERT/UPDATE/DELETE, no schema changes | Medium — data loss possible |
| Full access, no DROP | DML + DDL, but DROP blocked | High — schema corruption possible |
| Superuser / owner | Everything, including DROP, TRUNCATE, grants | Critical — full data loss possible |
Most teams give AI agents the access level that makes the task "just work," which usually means read-write or superuser. This is backwards. You should give the minimum access that makes the task possible, with oversight for anything above that floor.
## Pattern 1: Read-Only Replica for Inspection Tasks
A huge portion of what AI agents need database access for is inspection — reading schema, running diagnostic queries, checking data shapes. None of this requires write access.
Set up a read-only replica (or a read-only user if you can't run a replica) and point your agent there for anything that doesn't require writes. Benefits:
- Agent can't accidentally modify data
- Agent query load lands on the replica, so you're not hammering prod with agent queries
- You can revoke access to the replica without touching prod credentials
- Slow replica queries don't affect production performance
This sounds obvious, but most teams skip it because the replica isn't already set up for dev use. Set one up. It's worth the overhead.
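One way to make the replica the default target is a small routing shim in front of your agent's database client. This is a minimal sketch: the DSNs are placeholder names, and the statement-type check is deliberately naive (first keyword only), which a real implementation would replace with proper SQL parsing.

```python
import re

# Hypothetical connection strings -- substitute your own.
REPLICA_DSN = "postgresql://agent_ro@replica.internal/app"
PRIMARY_DSN = "postgresql://agent_rw@primary.internal/app"

# Statements that never modify data can go to the read-only replica.
READ_ONLY_KEYWORDS = {"select", "show", "explain"}

def target_dsn(sql: str) -> str:
    """Route read-only statements to the replica, everything else to primary.

    Naive by design: a data-modifying CTE (WITH ... INSERT) or a
    multi-statement string needs a real parser, not a keyword check.
    """
    first = re.split(r"\s+", sql.strip(), maxsplit=1)[0].lower()
    return REPLICA_DSN if first in READ_ONLY_KEYWORDS else PRIMARY_DSN
```

Even this crude version fails safe: anything it doesn't recognize as read-only goes to the primary, where Pattern 2's scoped credentials take over.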
## Pattern 2: Scoped Write Credentials
When agents need write access, scope it tightly. Don't give them your application superuser. Create a dedicated role:
```sql
-- PostgreSQL: agent-specific role
CREATE ROLE ai_agent_user LOGIN PASSWORD '...';

-- Grant only the tables the agent actually needs
GRANT SELECT, INSERT, UPDATE ON schema_migrations TO ai_agent_user;
GRANT SELECT ON users, orders TO ai_agent_user; -- read-only on sensitive tables

-- Strip destructive privileges. Note: REVOKE removes grants; Postgres has
-- no explicit deny, so also check nothing leaks in via PUBLIC or a parent role.
REVOKE DELETE ON users FROM ai_agent_user;
REVOKE TRUNCATE ON ALL TABLES IN SCHEMA public FROM ai_agent_user;

-- No DDL: the role can't create objects in the schema
REVOKE CREATE ON SCHEMA public FROM ai_agent_user;
```
The principle: grant write access only to tables the agent is actively working on, read-only to tables it needs for context, and nothing else.
## Pattern 3: Human Approval for High-Risk Queries
Some queries are just too dangerous to run without a human in the loop. Any query that matches these patterns should require approval:
- `DELETE` without a `WHERE` clause
- `UPDATE` affecting more than N rows
- `TRUNCATE TABLE`
- `DROP TABLE` / `DROP COLUMN` / `DROP INDEX`
- `ALTER TABLE` on tables with more than N rows (expensive lock)
- Any DDL in production during business hours
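A first pass at this checklist can be a simple pattern match before the query ever reaches the database. This is an illustrative sketch, not a complete classifier: the patterns are examples, and the row-count thresholds (the "N" above) can't be regexed at all; a real gate would estimate them from `EXPLAIN` output or table statistics.

```python
import re

# Illustrative patterns mirroring the checklist; tune for your dialect.
HIGH_RISK_PATTERNS = [
    r"^delete\s+from\s+\w+\s*;?\s*$",   # DELETE without a WHERE clause
    r"^truncate\b",                      # TRUNCATE TABLE
    r"^drop\s+(table|index)\b",          # DROP TABLE / DROP INDEX
    r"\bdrop\s+column\b",                # ALTER TABLE ... DROP COLUMN
    r"^alter\s+table\b",                 # DDL: may take an expensive lock
]

def needs_approval(sql: str) -> bool:
    """True if the statement matches a high-risk pattern.

    Row-count thresholds for UPDATE/ALTER can't be determined from the
    text alone -- check EXPLAIN estimates or pg_class.reltuples for those.
    """
    s = sql.strip().lower()
    return any(re.search(p, s) for p in HIGH_RISK_PATTERNS)
```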
This is exactly what expacti does at the shell level — intercepts the command before execution, scores it, and routes high-risk operations to a human reviewer. You can apply the same pattern to your SQL layer: wrap your database client in a proxy that intercepts queries, runs them through a risk scorer, and holds dangerous ones for approval.
The nice thing about this pattern is it's not all-or-nothing. You're not blocking your agent from database access. You're adding a review gate on the subset of operations that warrant it. Low-risk queries fly through instantly. The agent barely notices. The scary stuff gets a second set of eyes.
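Here's what that proxy shape can look like as a DB-API cursor wrapper. This is a toy sketch: it uses `sqlite3` so it runs anywhere, the risk regex is a stand-in for a real scorer, and the `approve` callable stands in for whatever routes the query to a human reviewer.

```python
import re
import sqlite3

# Stand-in risk check; a real scorer would be far more thorough.
RISKY = re.compile(r"^\s*(drop|truncate|alter)\b|^\s*delete\s+from\s+\w+\s*;?\s*$", re.I)

class GatedCursor:
    """Wrap a DB-API cursor; hold risky statements for a human decision.

    `approve` is any callable returning True/False -- in production it
    would notify a reviewer and block, rather than deciding inline.
    """

    def __init__(self, cursor, approve):
        self._cursor = cursor
        self._approve = approve

    def execute(self, sql, params=()):
        if RISKY.search(sql) and not self._approve(sql):
            raise PermissionError(f"blocked pending approval: {sql!r}")
        return self._cursor.execute(sql, params)

# Low-risk queries pass straight through; risky ones hit the gate.
conn = sqlite3.connect(":memory:")
cur = GatedCursor(conn.cursor(), approve=lambda sql: False)  # deny all risky ops
cur.execute("CREATE TABLE t (id INTEGER)")   # passes through
cur.execute("INSERT INTO t VALUES (1)")      # passes through
try:
    cur.execute("DROP TABLE t")              # held at the gate
except PermissionError as e:
    print(e)
```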
## Pattern 4: Sandbox Databases for Development Tasks
For AI agents working on migrations, schema changes, or data backfills — don't give them production access at all during development. Give them a sandbox that mirrors production schema but with anonymized or synthetic data.
Tools like Snaplet, Tonic, or just a `pg_dump`/`pg_restore` workflow can get you a sanitized copy. The agent develops and tests against the copy. You review the migration before running it in prod.
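If you roll your own sanitization step, the core transform is small. This is a toy sketch of the idea: the column list is hypothetical (derive yours from a real PII inventory, not guesswork), and hashing is just one anonymization strategy among several.

```python
import hashlib

# Hypothetical PII columns -- replace with your actual inventory.
PII_COLUMNS = {"email", "name", "phone"}

def anonymize_row(row: dict) -> dict:
    """Replace PII values with a short, stable hash.

    Stable hashing keeps joins and uniqueness intact across tables,
    while ensuring no real data reaches the sandbox.
    """
    return {
        col: hashlib.sha256(str(val).encode()).hexdigest()[:12]
        if col in PII_COLUMNS else val
        for col, val in row.items()
    }
```

Run this over the dump before restoring into the sandbox, and the agent gets realistic shapes and cardinalities without ever touching a real account.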
This is especially important for agents doing greenfield development. They're generating DDL based on incomplete understanding of your production constraints. You want that to blow up in a sandbox, not in prod.
## Pattern 5: Audit Everything, Always
Whatever access level you grant, log every query. Not just errors — every query. You want to know:
- What did the agent do while you weren't watching?
- What data did it access or modify?
- When did it access it, and from which task?
- Was any of it unexpected given the task description?
PostgreSQL's `pgaudit` extension, MySQL's general query log, and most cloud databases' audit-log features can give you this. Logging won't prevent incidents, but it makes them diagnosable. You can't investigate an incident from "the agent did something to the database" — you need the full query trail.
Correlate the database audit log with your agent command log. If the agent ran a shell command that spawned a database client, you want both traces — what the agent was asked to do, what shell commands it ran, and what SQL those commands executed.
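The correlation itself can be as simple as matching on timestamps within a window. The log shapes below are hypothetical (your agent runner and `pgaudit` will emit different fields), and session IDs make a much stronger join key than time when you have them:

```python
from datetime import datetime, timedelta

# Hypothetical log records -- real fields will differ.
agent_commands = [
    {"task": "fix-billing-bug",
     "cmd": "psql -c 'UPDATE invoices ...'",
     "ts": datetime(2024, 5, 1, 14, 0, 5)},
]
db_audit = [
    {"sql": "UPDATE invoices SET status = 'paid' WHERE id = 42",
     "ts": datetime(2024, 5, 1, 14, 0, 6)},
]

def correlate(commands, audit, window=timedelta(seconds=5)):
    """Attach each audited query to the agent command closest in time."""
    pairs = []
    for q in audit:
        near = [c for c in commands if abs(c["ts"] - q["ts"]) <= window]
        match = min(near, key=lambda c: abs(c["ts"] - q["ts"]), default=None)
        pairs.append((q["sql"], match["task"] if match else None))
    return pairs
```

Queries with no matching task are exactly the entries worth a closer look: SQL the database saw that no recorded agent command explains.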
## The Migration Problem Is Special
Database migrations deserve their own section because they're where AI agents cause the most damage.
Agents writing migrations have a specific failure mode: they generate correct SQL for the desired state but miss the production conditions that make it dangerous. Common examples:
- Adding a `NOT NULL` column without a default: fails outright on a populated table, and the obvious fix — adding a `DEFAULT` — used to rewrite the whole table under an exclusive lock (before PostgreSQL 11).
- Building an index without `CONCURRENTLY`: blocks writes to the table for the duration of the build. On a busy prod table, that's an outage.
- Renaming a column: breaks any code still reading the old column name until the deploy completes.
- Dropping a column: permanent, no undo. The agent may not know the column is referenced somewhere.
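These failure modes are mechanical enough to lint for before a human ever reads the diff. This is a toy version of the idea — dedicated tools like `squawk` do it properly against a parse tree, and these regexes are illustrative, not exhaustive:

```python
import re

# Toy checks mirroring the failure modes above.
CHECKS = [
    (re.compile(r"add\s+column\s+\w+\s+\w+.*not\s+null(?!.*default)", re.I | re.S),
     "NOT NULL column without a DEFAULT: fails or locks on a populated table"),
    (re.compile(r"create\s+(unique\s+)?index\s+(?!concurrently)", re.I),
     "CREATE INDEX without CONCURRENTLY: blocks writes on Postgres"),
    (re.compile(r"drop\s+column", re.I),
     "DROP COLUMN is irreversible"),
]

def lint_migration(sql: str) -> list[str]:
    """Return a warning for each risky pattern found in the migration."""
    return [msg for pattern, msg in CHECKS if pattern.search(sql)]
```

Run it in CI on every agent-generated migration file; a non-empty result routes the migration to mandatory human review rather than auto-merge.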
Your review process for agent-generated migrations should be the same as (or stricter than) your review process for human-generated migrations. Don't give the agent a pass because "it's just a migration file, a human will review it before it runs." Human reviewers get fatigued. They approve migrations they shouldn't. The review needs to be meaningful, not ceremonial.
## Putting It Together: A Practical Setup
Here's a concrete setup for a team running AI coding agents with database access:
- Read-only replica → default target for all agent queries during development and debugging
- Scoped write user (no DDL, no DELETE on sensitive tables) → for agents actively developing features
- Sandbox database (schema mirror, synthetic data) → for migration development and testing
- Approval gate for production DDL, bulk DML, and TRUNCATE → no exceptions
- Audit log on all tiers → correlated with agent command history
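One way to make those tiers enforceable rather than aspirational is to encode them as data the agent harness reads. The structure below is illustrative — the names, environment variables, and task categories are assumptions, not a real schema:

```python
# Hypothetical tier policy; the harness resolves the DSN from the
# named environment variable and enforces the flags at runtime.
ACCESS_TIERS = {
    "replica":  {"dsn_env": "AGENT_REPLICA_DSN", "writes": False, "ddl": False, "approval": None},
    "scoped":   {"dsn_env": "AGENT_SCOPED_DSN",  "writes": True,  "ddl": False, "approval": "bulk_dml"},
    "sandbox":  {"dsn_env": "AGENT_SANDBOX_DSN", "writes": True,  "ddl": True,  "approval": None},
    "prod_ddl": {"dsn_env": "AGENT_PROD_DSN",    "writes": True,  "ddl": True,  "approval": "always"},
}

def tier_for(task_kind: str) -> str:
    """Pick the least-privileged tier that can do the job."""
    return {
        "inspect": "replica",
        "feature": "scoped",
        "migration_dev": "sandbox",
        "migration_prod": "prod_ddl",
    }.get(task_kind, "replica")  # unknown task kinds default to the safest tier
```

The useful property is the default: a task the policy doesn't recognize gets the read-only replica, not a shrug and a superuser connection.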
This isn't zero-risk. Nothing is. But it means the blast radius of any agent mistake is limited to what it had access to at the time, and you have the audit trail to understand what happened.
## The Mindset Shift
The mistake teams make is thinking about AI agents the way they think about trusted developers — people who have internalized your operational wisdom, who hesitate before running a destructive command, who will ask before doing something irreversible.
AI agents don't hesitate. They don't have operational intuition. They'll execute whatever achieves the stated goal, at full speed, without the background anxiety that makes human engineers careful.
That's not a flaw — it's what makes them fast and useful. But it means the guardrails you'd normally rely on a human's judgment to provide need to come from somewhere else. From your access controls, your approval gates, your audit logs. From the system, not the agent.
Build the system. Then let the agent go fast.
Expacti provides runtime approval gates for AI agent shell commands, with risk scoring and audit logging. If your agents are running database clients from the shell, expacti can intercept those commands before execution. Try the interactive demo or join the waitlist.