Containerisation of services has numerous advantages in terms of repeatability, security, and more. This is all well and good for an application server and related infrastructure, but how can it be done in a lightweight fashion for, say, serving static websites?
The ultimate goal is to leverage the benefits of modern devops with low overhead; this blog, for instance, is hosted on a VPS with 1GB of RAM. To do this we utilise:
- SaltStack for configuration management
- nginx-proxy for ingress
- goStatic based web image for providing static content
## Configuration Management
SaltStack is a Python-based configuration management system, in the same league as Puppet, Chef, and others. It can be used in a master/minion (worker) mode, or in a standalone mode, which is how it is used here (via `salt-call --local state.highstate`).
The details of how to use Salt are beyond the scope of this article.
The key states are:
- `network` which sets up the Docker network
- `webfront` which sets up the ingress
- `sites` which starts the site containers
I also use the fail2ban formula as a first-line security response.
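In standalone mode these states all live under Salt's default file root. A sketch of the layout, assuming the stock `/srv/salt` file root, with the files shown below:

```
/srv/salt/
├── top.sls
├── network.sls
├── webfront.sls
└── sites/
    ├── init.sls
    └── blog_nigelsim_org.sls
```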
`top.sls`:

```yaml
base:
  'host':
    - network
    - webfront
    - sites
    - fail2ban
```
`network.sls`:

```yaml
main_network:
  docker_network.present:
    - name: main_network
```
`webfront.sls`:

```yaml
include:
  - network

nginx_container:
  docker_container.running:
    - restart: unless-stopped
    - image: jwilder/nginx-proxy:alpine
    - port_bindings:
      - 80:80
      - 443:443
    - volumes:
      - /etc/nginx/certs
      - /etc/nginx/vhost.d
      - /usr/share/nginx/html
    - binds:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /srv/web/vhost.d:/etc/nginx/vhost.d:rw
    - networks:
      - main_network

letsencrypt:
  docker_container.running:
    - restart: unless-stopped
    - image: jrcs/letsencrypt-nginx-proxy-companion:v1.12.1
    - binds:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    - volumes_from:
      - nginx_container
```
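Certificate issuance can take a minute after a new site container comes up; it can be watched from the companion container's logs. A sketch, assuming Salt's default behaviour of naming the container after the state ID:

```shell
# Follow the Let's Encrypt companion's logs to watch certificate requests.
docker logs -f letsencrypt
```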
`sites/init.sls`:

```yaml
include:
  - sites.blog_nigelsim_org
```
`sites/blog_nigelsim_org.sls`:

```yaml
site-blog-nigelsim-org:
  docker_container.running:
    - image: blog.nigelsim.org
    - restart: unless-stopped
    - environment:
      - VIRTUAL_HOST: blog.nigelsim.org,nigelsim.org,www.nigelsim.org
      - LETSENCRYPT_HOST: blog.nigelsim.org,nigelsim.org,www.nigelsim.org
      - VIRTUAL_PORT: 8043
    - networks:
      - main_network
    - ports: 8043
```
## Ingress - nginx-proxy

nginx-proxy is a Docker container that provides really easy web ingress. It listens on the Docker socket for containers coming up and shutting down, and adjusts its proxying according to the `VIRTUAL_HOST` environment variable on each container.
It also has a Let's Encrypt companion container that handles certificate generation based on the `LETSENCRYPT_HOST` environment variable.
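The mechanism can be seen outside of Salt with plain `docker run`. A sketch, assuming the `main_network` and proxy containers from above are already running, and using a hypothetical `example.com` domain:

```shell
# Start any web container on the shared network with VIRTUAL_HOST set;
# nginx-proxy sees the event on the Docker socket and generates the
# vhost config automatically, with no proxy restart needed.
docker run -d --network main_network \
  -e VIRTUAL_HOST=example.com \
  -e LETSENCRYPT_HOST=example.com \
  nginx:alpine
```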
This makes exposing sites very easy, as long as each container serves an entire site and not just a context path (a topic for another time).
## Content Serving

This is possibly the main point of interest: how do you serve up files in a lightweight manner? I investigated a number of options, including:
- Host-mounting a directory into the nginx container: this is hard to deploy atomically. Git would be one option, but it didn't feel in keeping with the spirit of things
- A data-only Docker container that could be mounted: this does not appear to be a thing, at least not in the sense that it could be mounted/unmounted against a running container
- A very lightweight webserver to serve the files: this seemed like the only option
Wanting to keep both the container and the CPU/RAM overhead tiny, options like Apache/Nginx/lighttpd weren't going to fly. The solution I ended up running with uses the goStatic Docker image, which is 6MB in size (small, but not tiny) and very memory-friendly (although serving large files can apparently be problematic).
To do this, I build a site image based on the goStatic base image and copy my files in.
`Dockerfile`:

```dockerfile
FROM pierrezemb/gostatic
COPY . /srv/http/
```
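The image can be smoke-tested locally before deploying. A sketch, assuming goStatic's default listen port of 8043 (which is why `VIRTUAL_PORT` is set to 8043 in the site state above):

```shell
# Build the site image and serve it locally.
docker build -t blog.nigelsim.org:latest .
docker run --rm -d -p 8043:8043 --name blog-test blog.nigelsim.org:latest

# Fetch the front page, then clean up.
curl http://localhost:8043/
docker stop blog-test
```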
And to cut out the middle-man of a container registry, I build the image directly on the host, so that at the next highstate it gets deployed:

```shell
DOCKER_HOST=ssh://host docker build -t blog.nigelsim.org:latest .
```
## Conclusion
Next steps would include putting the build/deploy into a CI/CD pipeline, but given the low frequency of updates this isn’t a priority.
Also, while nginx-proxy is sufficient and highly configurable, it can be hard to do things like routing one domain name to multiple backends (microservices). That just isn't the use case here, and something like Traefik may be a better fit.
Finally, Salt is fine for this kind of deployment, but you do have to cover off all of your state changes (e.g., to make a container go away you need to write an `absent` state), which you don't need to worry about in a fully managed system like Kubernetes.
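For illustration, the removal state mentioned above might look like the following sketch, using Salt's `docker_container.absent` state against the site container defined earlier:

```yaml
site-blog-nigelsim-org:
  docker_container.absent:
    - force: True
```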