
Container ENV configs are an Anti-Pattern

Using the OS environment to configure your containers is an insecure practice, and we need to stop encouraging people to do it! The twelve-factor app methodology says to use the running environment of a container to configure your service. I would contend this does not mandate OS environment variables, but rather that you bring your configuration in at runtime (discussed further in the Idiomatic Pattern: Config at Runtime).

The reason this is clearly an Anti-Pattern is that the OS environment is weakly protected, at best. I don't have all of the answers; my goal is to spur discussion and raise visibility of the issue by providing demonstrable examples.

For example, I have a Node.js process running in a container. From the base host running my containers I can see all processes, even those inside a container. I can find this process with a process list and a grep:

HOST:~$ ps -eaf|grep node
root  10496 10469  0 Jan18 ?  00:02:03 /app/node/bin/node index.js

From this I can see it is process ID 10496. To read my Node container's environment variables I can use a simple Python script I save as readenv:

#!/usr/bin/env python
import sys
with open("/proc/{}/environ".format(sys.argv[1])) as infile:
    # entries are NUL-separated; print one per line
    print(infile.read().replace("\0", "\n"))

This will read in that process's environment and print it to my screen. I then run it as follows (I have redacted some lines, leaving only a few to demonstrate the results):


Try this on your own, to get a feel for the scope on your own systems. Do you see your passwords and cryptographic key material? If so, how do your security and compliance measures address this trivial plain-text disclosure?
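If you don't have root handy, here is a minimal sketch you can run as any user: every process may read its own /proc/&lt;pid&gt;/environ, so this inspects the current Python process the same way readenv inspects an arbitrary PID (Linux only; nothing here is from the original article's redacted output).

```python
import os

# Read this process's environment from /proc, exactly as readenv does
# for an arbitrary PID (no privileges needed for your own process).
with open("/proc/{}/environ".format(os.getpid())) as infile:
    entries = [e for e in infile.read().split("\0") if e]

# Each entry is a plain-text NAME=value pair, visible to any reader
# of this file.
for entry in entries[:5]:
    print(entry)
```

Anything a root user (or host operator) can do to your PID, you can rehearse against your own.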


A few counterpoints can be made. Consider all of this in the context that your backing-store passwords and any cryptographic key material are commonly available in these environment variables.

  1. You must have root access — you are right, but root pivots happen. And what about a managed environment? Do you trust the operations team managing your hosts (Heroku, or whoever else is hosting for you)?
  2. Any process in my container can already see my environment — This is not a good thing. Following the principle of least privilege, your sensitive data should only live where it is required. Storing it at a global level is dangerous.
  3. Docker Secrets! (or Kubernetes Secrets) — these are a poor attempt to address the problem. Anything within the container can still read the secret configurations, meaning they are only marginally different from the OS environment, and they are difficult to manage.
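Counterpoint 2 is easy to demonstrate: a secret placed in the environment is inherited by every child process by default. A minimal sketch (the DB_PASSWORD name and its value are made up for illustration):

```python
import os
import subprocess
import sys

os.environ["DB_PASSWORD"] = "hunter2"  # hypothetical planted secret

# Any child process -- a shell, a logging agent, a helper spawned by a
# compromised library -- inherits the full environment by default.
leaked = subprocess.check_output(
    [sys.executable, "-c", "import os; print(os.environ['DB_PASSWORD'])"]
).decode().strip()
print(leaked)
```

The child never asked for the secret; it simply received the whole environment along with everything else.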

The point of my assertion is that this is an Anti-Pattern: something easy to do, but ultimately a dangerous practice.

So what can you do?

Start by thinking more about how your sensitive configurations are introduced to your containers. Don't just default to stuffing secrets into environment variables. A few options:

  • Have your service directly query a secure data store, such as Vault or Reflex, and store the received configuration safely.
  • Improve our secrets delivery systems so that the secrets themselves are ephemeral — ideally the secret would disappear after it is read.
  • Introduce your configuration on STDIN — just as we expect containers to write to STDOUT, why not read configuration from STDIN? (Reflex supports this.)
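The STDIN option can be sketched in a few lines. This is a hypothetical shape, not the actual Reflex format: the orchestrator pipes a one-shot JSON config into the process at startup, and nothing lingers in /proc/&lt;pid&gt;/environ afterwards.

```python
import io
import json

def load_config(stream):
    """Read a one-shot JSON config from a stream (e.g. sys.stdin)."""
    return json.load(stream)

# In a real service the supervisor would pipe the config in once:
#   config = load_config(sys.stdin)
# Simulated here with an in-memory stream; key name is illustrative:
config = load_config(io.StringIO('{"db_password": "hunter2"}'))
print(config["db_password"])
```

Once the pipe is closed, the secret exists only in the process's own memory, not in a globally readable kernel file.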
Brandon Gillespie

Brandon loves tech and has a breadth of experience, including hands-on implementation and leadership roles across many companies: small startups, enterprises, and the USAF.
