Immutable Infrastructure with Docker and Containers – by Jerome Petazzoni

This is part of my live blogging from QCon 2015. See my QCon table of contents for other posts.

Immutable Infrastructure

Never change what’s on a server: no upgrading packages, no installing new ones.

We upgrade by creating a new server and deploying to it. Keep the old server around just in case.

We do this to avoid configuration drift between “identical” servers, which is caused by provisioning servers at different times or by manual operations. Normally you deal with drift by following careful instructions, or by automation with many edge cases. For example, with parallel ssh: what if one server is down or unreachable, or the package repo is partially down? Now your servers are in different states.

Rolling back is hard because of transitive dependencies. With immutable servers you have old versions of everything, so rolling back is easy: switch back to the old server.

Reprovision servers regularly even if “nothing changed”, so you know you can, and so you’re always using recent packages. Also, manual changes get reverted, so you find out about them while you still remember the problem/solution.

Improvement: golden image – create multiple identical servers from the same image. This lets you keep past versions around. Downside: small changes are cumbersome. Even if you automate, rebuilding takes time, though the time a human is involved is small. You can save time with an intermediate golden image: deploy from a checkpoint rather than from scratch.
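
As a hedged sketch of that flow (the instance and image IDs are placeholders; the talk didn’t show commands), using AWS machine images as the golden images:

    # Snapshot a configured server as a new golden image (AMI).
    aws ec2 create-image --instance-id i-0abc123 --name "golden-2015-06-30"

    # Provision identical servers from a golden image; older AMIs stay
    # around, so rolling back is just launching from a previous image.
    aws ec2 run-instances --image-id ami-0def456 --count 3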

How to debug

Workarounds for getting debugging tools onto servers you never modify:

  • Allow drift, but tag the server as “re-imaging” and schedule its self-destruction. This lets you install debug tools or whatever you need.
  • Install debug tools on all servers, with a feature switch to enable/disable them.
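
A minimal sketch of the first workaround (the host name, packages, and four-hour window are my assumptions, not from the talk):

    # Mark the server as tainted so it gets re-imaged, then install debug tools.
    ssh web-42 'sudo touch /etc/needs-reimage && sudo apt-get install -y strace tcpdump'

    # Schedule the self-destruct, e.g. with at(1).
    ssh web-42 "echo 'shutdown -h now' | sudo at now + 4 hours"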

How to store data

Data needs to survive server reprovisioning. One option is to use AWS (or otherwise outsource storage). Another option is to externalize files, such as putting them on a SAN. Or you can use containers.
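
A minimal sketch of the container option, using the data-container pattern (the names are placeholders):

    # A dedicated "data container" owns the volume...
    docker create -v /var/lib/app --name appdata busybox

    # ...and app containers mount it, so the data outlives any one app container.
    docker run -d --name web-v1 --volumes-from appdata myapp:v1

    # Redeploy: the new container attaches to the same data.
    docker stop web-v1
    docker run -d --name web-v2 --volumes-from appdata myapp:v2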

Containers

Containers can be built from scratch or incrementally. A simple DSL breaks the build down into steps.

A Dockerfile is a glorified shell script (it looks a bit like DOS), with commands like RUN git clone repo.

Docker recognizes which parts haven’t changed, so it doesn’t re-install all the infrastructure for an incremental change such as new app code. But you still get the benefit of two copies: the old image is still around.
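
A minimal Dockerfile sketch (the base image and commands are placeholders, not from the talk) showing the step-by-step DSL and cache-friendly ordering:

    FROM ubuntu:14.04

    # Infrastructure layers first; Docker caches these across builds.
    RUN apt-get update && apt-get install -y python python-pip

    # Dependencies change rarely, so they get their own cached layer.
    COPY requirements.txt /app/requirements.txt
    RUN pip install -r /app/requirements.txt

    # App code last: an incremental code change only rebuilds from here down.
    COPY . /app
    WORKDIR /app
    CMD ["python", "app.py"]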

Containers can share directories for logs, backups, metrics, etc., since they’re on the same machine.

You can make a container read-only to enforce immutability. Security gets easier because nothing can be installed. You can make an exception for data volumes, and prevent execution in that area.
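
A hedged sketch of that (paths and names are mine; this assumes a Docker version with --read-only and --tmpfs):

    # Read-only root filesystem: nothing in the image can be modified or installed.
    # The data volume is the one writable exception; the tmpfs for /run is
    # mounted noexec so nothing dropped into a writable area can be executed.
    docker run -d --read-only \
        --tmpfs /run:rw,noexec,nosuid \
        -v appdata:/var/lib/app \
        myapp:v1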

Q&A

  • How do you think correctly about config in a container? Some goes in the container, some in the app. What about the middle ground? You can use command-line parameters if there’s a small amount of config, or environment variables. If there’s a lot of config, use a volume, or a repository of config and pass its URL. Another approach is dynamic DNS: use virtual names and inject the real addresses later, e.g. have “redis” point to different locations for dev/qa/prod. (See the sketch after this list.)
  • What kinds of workloads is container tech suited for? He thinks any workload can be containerized. It’s harder for desktop apps such as OpenOffice, but it gets easier over time, because things get automated as more people need them.
  • What is the best host to run a container on? The one your ops team knows best.
  • For an app with core data storage, do you need to bring apps down when updating the database? Yes, there’s a tiny amount of downtime, and it does break existing connections/transactions. You have that when doing it manually too; it’s just longer.
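
A minimal sketch of the environment-variable approach from the first answer (the variable names and URLs are hypothetical):

    # Inject environment-specific config at run time; the image stays
    # identical across dev/qa/prod.
    docker run -d -e REDIS_HOST=redis.qa.internal -e REDIS_PORT=6379 myapp:v1

    # Or, for lots of config, point the app at a config repository by URL.
    docker run -d -e CONFIG_URL=http://config.internal/myapp/prod.json myapp:v1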

Impressions: I liked the mix of lecture and demo, and that it wasn’t very Docker-specific.
