AI-generated code works, but is it secure?


A client recently asked me to review the security of their VPS. They had several services running with Docker and Traefik as a reverse proxy. Everything worked fine, had been running for months without apparent issues.

What I found made me rethink how we use LLMs to generate configurations.

The finding that kept us up at night

Among the deployed services was an invoicing application. They'd set it up with Claude's help; it worked perfectly and had basic authentication configured in Traefik… all correct, right?

No.

The docker-compose had this:

services:
  invoices:
    image: invoice-app
    ports:
      - "8000:8000"
    # ... rest of config

See the problem? That ports: "8000:8000" exposes the container directly to the world, bypassing Traefik. The basic auth middleware they'd so carefully configured only protected requests that actually went through the proxy via the domain.

But anyone could do:

curl http://their-ip:8000/api/invoices

And get a JSON with all invoices. Clients, amounts, dates, everything. No password, no SSL, nothing.
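You don't need an attacker's outside view to catch this. On the server itself, a quick look at which sockets are listening on all interfaces reveals the exposure. A sketch, assuming a Linux host with ss available:

```shell
# List TCP listeners bound to all interfaces (0.0.0.0, *, or [::]);
# anything shown here is reachable from outside unless a firewall blocks it
ss -tln | awk 'NR > 1 && ($4 ~ /^(0\.0\.0\.0|\*|\[::\]):/) {print $4}'
```

If your invoicing app's port shows up in that list, it is published to the world regardless of what your reverse proxy is doing.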

Why LLMs generate insecure code by default

When you ask Claude, GPT, or any LLM to generate a docker-compose, its goal is to give you something that works. And the most direct way to make it work is to expose the port:

# What the AI generates
ports:
  - "8000:8000"

It works. Start the container, open your browser, see your app. Success.

But “works” doesn’t mean “production-ready.” The LLM doesn’t know (or ask) if:

  • You’re putting this behind a reverse proxy
  • You need it accessible only locally
  • You have a firewall configured
  • It’s a development or production environment

AI optimizes for the shortest path to a visible result, not for security. And this isn’t a bug, it’s an inherent limitation of the context in which these models operate.

If you’re interested in the intersection of security and AI, I also wrote about how AI has become a new data leak vector.

The solution: three levels of exposure

Depending on your case, there are three ways to handle ports in Docker:

Level 1: Exposed to the world (you almost never want this)

ports:
  - "8000:8000"  # Equals 0.0.0.0:8000:8000

Use this only if you really need direct external access AND have other security layers (firewall, in-app authentication, etc.).

Level 2: Localhost only

ports:
  - "127.0.0.1:8000:8000"

The service is only accessible from the machine itself. Useful if your reverse proxy runs on the host (not in Docker) or for admin tools.

Level 3: Internal Docker network only

expose:
  - "8000"
# Without "ports", the container is only accessible
# from other containers on the same network

If you use Traefik or another proxy inside Docker, this is the safest. External traffic must go through the proxy, where you have your SSL, auth, rate limiting, etc.
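Putting it together, a Level 3 setup might look like the sketch below. The service name, domain, entrypoint, and certresolver are placeholders you'd adapt to your own Traefik setup; the labels follow Traefik v2's router syntax:

```yaml
services:
  invoices:
    image: invoice-app
    expose:
      - "8000"          # internal only: note there is no "ports" section
    networks:
      - proxy
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.invoices.rule=Host(`invoices.example.com`)"
      - "traefik.http.routers.invoices.entrypoints=websecure"
      - "traefik.http.routers.invoices.tls.certresolver=letsencrypt"
      - "traefik.http.services.invoices.loadbalancer.server.port=8000"

networks:
  proxy:
    external: true      # shared network that Traefik is also attached to
```

The key detail is the absence of ports: external traffic can only reach the app through Traefik's router, where your SSL and auth live.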

Post-AI security checklist for docker-compose

Every time an LLM generates a docker-compose, check this before deploying:

  1. Does it have ports defined? → Ask yourself if you really need them
  2. Is the port bound to 0.0.0.0? → Change it to 127.0.0.1 or use expose
  3. Using a reverse proxy? → Services should use expose, not ports
  4. Environment variables with secrets? → Move them to .env or Docker secrets
  5. Is the container running as root? → Add user: "1000:1000" if possible
  6. Does it have Docker socket access? → Only if strictly necessary
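Point 2 is easy to automate. A rough grep like the one below flags port mappings that don't pin a bind address and therefore default to 0.0.0.0. It's a sketch: the regex only catches the common "HOST:CONTAINER" form, not every compose syntax:

```shell
# Flag port mappings with no explicit bind address (they default to 0.0.0.0)
grep -nE '^[[:space:]]*-[[:space:]]*"?[0-9]+:[0-9]+"?[[:space:]]*$' docker-compose.yml
```

In CI you'd invert it (! grep …) so the build fails whenever an unbound port slips in.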

How to request more secure configurations from the start

Instead of asking:

“Generate a docker-compose for a Node app on port 8000”

Try:

“Generate a docker-compose for a Node app that will run behind Traefik. It should not expose ports directly, only through Docker’s internal network. Include Traefik labels for routing.”

Giving context about your infrastructure helps the LLM generate something more appropriate. It’s not foolproof, but it reduces the chances of messing up.

Conclusion

LLMs are incredible tools for accelerating development, but they don’t replace security knowledge. Every line of AI-generated code should go through the same scrutiny as a junior’s code: it works, but is it correct?

My client’s app had been exposed for months. It worked perfectly. But “works” and “is fine” are not the same thing.

Do the audit. Review your docker-compose files. Your future self will thank you.
