Category Archives: Linux

containerized blog

Another chapter in the ever-growing book that is the story of my blog, as is good and right for any developer.

This is now coming at you from docker-compose. The blog, I mean. It used to be on a normal DigitalOcean droplet running on bare metal (well, a low-tier instance, so probably a vmware guest, but you know what I mean). Even worse, to my great shame, it was just a plain wordpress install. Now it's still running on that same instance and it's still wordpress, but it's built on roots/bedrock.

bedrock by roots: WordPress boilerplate with modern development tools, easier configuration, and an improved folder structure

roots/bedrock lets you manage wordpress as a composer dependency, including themes and plugins. Essentially that means the whole blog is now a git repo with a single composer.json and composer.lock file. Of course there's a bit more to it with .env files and persistent storage, but essentially that's it. This is very cool on its own, but just moving one wordpress site to composer isn't cool enough, so I did the same for the archive. The archive was using some plugins that don't even exist anymore, but I managed to find and patch their successors well enough to keep it afloat, so now that's also managed with composer. That means I can easily upgrade and patch both blogs on my machine, test them here, and if everything works, quickly run the same upgrade in a predictable manner in production. Cool.
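
For illustration, a Bedrock-style composer.json pulling in wordpress core plus a plugin and a theme from wpackagist might look roughly like this (the package versions and the specific plugin/theme here are hypothetical examples, not the blog's actual dependencies):

```json
{
  "name": "example/blog",
  "repositories": [
    { "type": "composer", "url": "https://wpackagist.org" }
  ],
  "require": {
    "roots/wordpress": "^5.4",
    "vlucas/phpdotenv": "^4.1",
    "wpackagist-plugin/akismet": "^4.1",
    "wpackagist-theme/twentytwenty": "^1.3"
  }
}
```

Running composer update then bumps core, plugins, and themes in one step, and composer.lock pins exactly what tested locally, which is what makes the production upgrade predictable.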

But this server doesn’t just host wordpress, it’s also running my nrk_subs app, my cv app, and new as of today, my lolz aggregator. What I really want is to run everything in nice little docker containers so I can duplicate everything locally and develop it further there in the same way I would do at work, so that’s what I did. I first built the containers I needed for the blogs and then started incorporating the other projects which were already mostly containerized. So currently, this is the docker-compose.yml that manages everything here.

version: "2.4" # volumes_from (used below) was dropped in the v3 compose file format

services:
  database:
    build:
      context: "./database/docker"
    volumes:
      - "./storage/blog_and_archive.sql.gz:/docker-entrypoint-initdb.d/initdb.sql.gz"
      - "./database/data:/var/lib/mysql"
    container_name: "database"
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "--silent"]
    command: "--default-authentication-plugin=mysql_native_password"
    env_file: .env
    environment:
      MYSQL_DATABASE: $MYSQL_BLOG_DATABASE
      MYSQL_RANDOM_ROOT_PASSWORD: 1

  blog:
    image: brbcoffee/blog-base
    env_file: .env
    depends_on:
      - database
    environment:
      DB_HOST: database:3306
      DB_USER: $MYSQL_USER
      DB_PASSWORD: $MYSQL_PASSWORD
      DB_NAME: $MYSQL_BLOG_DATABASE
      WP_HOME: $WP_HOME_BLOG
      WP_SITEURL: $WP_SITEURL_BLOG
      XDEBUG_CONFIG: remote_host=172.17.0.1
    volumes:
      - "./blog/:/var/www/blog"
      - "./storage/media/blog:/var/www/blog/web/app/uploads"

  archive:
    image: brbcoffee/blog-base
    env_file: .env
    depends_on:
      - database
    environment:
      DB_HOST: database:3306
      DB_USER: $MYSQL_USER
      DB_PASSWORD: $MYSQL_PASSWORD
      DB_NAME: $MYSQL_ARCHIVE_DATABASE
      WP_HOME: $WP_HOME_ARCHIVE
      WP_SITEURL: $WP_SITEURL_ARCHIVE
      XDEBUG_CONFIG: remote_host=172.17.0.1
    volumes:
      - "./archive/:/var/www/archive"
      - "./storage/media/archive:/var/www/archive/web/app/uploads"

  proxy:
    image: brbcoffee/proxy
    env_file: .env
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - blog
      - archive
      - cv
      - subs
    volumes_from:
      - blog
      - archive
      - lolz

  mailhog:
    image: mailhog/mailhog
#    ports:
#      - "1025:1025"
#      - "8025:8025"

  cv:
    image: brbcoffee/cv
    volumes:
      - "./storage/resume/CV.xml:/app/data/CV.xml"

  subs:
    image: "brbcoffee/subs"

  lolz:
    image: php:7.3-fpm
    environment:
      - APP_ENV=prod
    volumes:
      - "./lolz:/var/www/lolz"

  lolz-cron:
    image: brbcoffee/lolz-cron
    environment:
      - APP_ENV=prod
    volumes:
      - "./lolz:/app"

As you can see, a lot is managed in the .env file, and a lot of code is mounted in. The code mounting isn't necessary for everything, and I'll be tweaking it going forward, but for now I mostly wanted to get it live so I had an MVP to work from. There are also a lot of brbcoffee/* images here; those are built by a project-specific Makefile. I factored the builds out into the Makefile to separate concerns a bit once the docker-compose.yml started getting too hairy. The goal is to get rid of the droplet entirely and run the whole setup in kubernetes or something like that.
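
A sketch of what such a Makefile could look like (the image names come from the compose file above, but the build-context directories are assumptions about the repo layout):

```makefile
# Build every brbcoffee/* image referenced by docker-compose.yml.
# Context directories are hypothetical -- adjust to the actual layout.
IMAGES := blog-base proxy cv subs lolz-cron

.PHONY: all $(IMAGES)

all: $(IMAGES)

$(IMAGES):
	docker build -t brbcoffee/$@ ./$@
```

With this, `make` rebuilds everything and `make proxy` rebuilds just one image, keeping the compose file itself free of build concerns.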

One hiccup was ssl. The rest had actually been working for weeks, but I couldn't figure out a clean way to do ssl. In the end I decided I'm ok with the certificates not renewing automatically in version one, so I just fetched a wildcard certificate with certbot and built it into the proxy container for now.
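
Baking the certificate into the proxy image could look roughly like this (a sketch assuming an nginx-based proxy; the base image, file names, and paths are all assumptions, not the actual brbcoffee/proxy Dockerfile):

```dockerfile
FROM nginx:stable

# Wildcard cert fetched manually beforehand, e.g.:
#   certbot certonly --manual --preferred-challenges dns -d "*.brbcoffee.com"
COPY certs/fullchain.pem /etc/nginx/ssl/fullchain.pem
COPY certs/privkey.pem   /etc/nginx/ssl/privkey.pem
COPY nginx.conf          /etc/nginx/nginx.conf
```

The obvious cost is that renewing the certificate means rebuilding and redeploying the image, which is the trade-off accepted for version one.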

So there it is, all the stuff on brbcoffee now runs in docker containers under docker-compose. The blogs and the proxy are in the main repo, while the other services have their own repositories which are installed as git submodules. I can toggle a single .env variable and add a build arg and have node serve in dev mode, have the blog containers run xdebug, and have the python containers run a debugpy listener for fullstack local dev. Pretty cool stuff.

Certbot and apache

I promised a blog post detailing the changes I needed to make to my apache config to move BRBcoffee to https, but in hindsight there isn't much to write about: it's basically just a refactor.

Certbot, the tool from the EFF (written in Python, yay!) that gets ssl certs from Let's Encrypt, doesn't work with monolithic conf files containing multiple hosts. I run all my projects on the same server, routing traffic based on the site address using apache's VirtualHost directive. It used to look like this:

<VirtualHost *:80>
    DocumentRoot "/var/www/blog"
    ServerName blog.brbcoffee.com
    # Some more directives
</VirtualHost>
<VirtualHost *:80>
    DocumentRoot "/var/www/archive"
    ServerName archive.brbcoffee.com
    # Some more directives
</VirtualHost>
<VirtualHost *:80>
    ProxyPreserveHost On
    ProxyRequests Off
    ServerName cv.brbcoffee.com
    ProxyPass / http://localhost:5000/
    ProxyPassReverse / http://localhost:5000/
</VirtualHost>

So what you need to do is rip all of that out; you don't want it in there. In its place you want this:

IncludeOptional conf.d/*.conf

Depending on your packager, this directive may already be somewhere in your httpd.conf file. If it is, great, just leave it be. After that, take each VirtualHost that you ripped out of the main httpd.conf and place it in its own file, like so:

<VirtualHost *:80>
    DocumentRoot "/var/www/blog"
    ServerName blog.brbcoffee.com
    # Some more directives
</VirtualHost>

blog.conf

The configuration doesn't change, it just needs to be in a separate file for certbot to be able to do its thing. You see, after certbot goes out and gets your certificates, it needs to add some rules to each vhost to redirect traffic to ssl. I guess they didn't want to write a lot of ugly parsing code in a program that really isn't about that (although it should be trivial with BeautifulSoup).

Anyway, before this refactor, running certbot --apache probably didn't work: it got the certs for you, but couldn't edit your conf to do what we want. Now, when you run it, it'll complete just fine. If you choose to send all traffic to https, it will add three redirect lines to each of your conf files, and it will create a new file as well, in my case blog-le-ssl.conf. It's identical to the old conf file, except that it listens on port 443 and checks that mod_ssl is loaded. All of this is stuff we could have done ourselves, of course, but it's a convenience thing.
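
For reference, the three redirect lines certbot typically adds to the original port-80 vhost look like this (with your own ServerName in the condition, of course):

```apacheconf
RewriteEngine on
RewriteCond %{SERVER_NAME} =blog.brbcoffee.com
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
```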

So that's all there is to it. Refactor your httpd.conf by separating each VirtualHost into a different file, and run certbot --apache again.

Switching from screen to tmux

Hello avid readers of yesteryear!

I've recently moved from working in a Linux/Windows environment to a Linux/Windows/OS X environment, and as such I've had to make some small changes to my workflows. I'm here to tell you what went wrong and how to fix it (hint: it's in the title).

XKCD comic about using old software configured for you
Relevant XKCD title text: 2078: He announces that he’s finally making the jump from screen+irssi to tmux+weechat.

Now I’m the guy who just has it set up the way I want. I use screen in linux, and I use screen in the amazingly named Bash on Ubuntu on Windows. It works how I need it to work and I’m able to get things done. Now we introduce Mac OS X to the mix, and a seemingly tiny problem arises:

screenshot of vim
vim in normal terminal session

Screenshot of vim with different colors
vim inside a screen session

Try to spot the difference. I’ll wait.

The problem is a vim plugin called airline, which uses a lot of colors while enhancing vim's normal ui. Something about the way screen identifies itself confuses airline and makes the colors change slightly. No big deal, but it also makes the text less readable, which can be a bigger problem. Now, there does exist a separately compiled gnu screen for mac, made specifically to fix problems with screen colors. That second screenshot was taken using that binary, and it looks identical to the native one. I spent about two hours trying to figure out a workaround for this problem, but in the end I decided to just finally give tmux a try; I'd been meaning to get around to that anyway.

vim in tmux session
That was easy

Okay, so tmux handles colors better than screen, but what about all the other features from screen that we're used to and love? Well, once you've remapped your prefix key to the one you're used to from GNU screen, you should be totally fine. To do this, add "unbind C-b" and "set -g prefix C-a" to your .tmux.conf, replacing 'a' with whatever key you prefer. Splitting panes is done with '%' and '"' in tmux, but you can simply unbind and rebind those to whatever you're used to. Detaching is the same as always; attaching is done with "attach" instead of "-r", so fairly easy to remember. Mouse mode is just as easy to enable as before: just replace mousetrack on from .screenrc with set -g mouse on in .tmux.conf.
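
Put together, a minimal .tmux.conf for an ex-screen user might look like this (a sketch; the C-a prefix and the |/- split bindings are common choices, not anything from my actual config):

```
# Use C-a as the prefix, like GNU screen
unbind C-b
set -g prefix C-a
bind C-a send-prefix

# Friendlier pane-split bindings than the default % and "
bind | split-window -h
bind - split-window -v

# Enable mouse support (tmux >= 2.1)
set -g mouse on
```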

All in all, there isn't much to write about when moving from screen to tmux. They do the same job, but tmux does it better, since it was built for a modern world, unlike the literally 30-year-old GNU Screen. If you're still using screen, give tmux a try. You may spend a little time in the config file at first remapping things, but I swear it's worth it.

Configuring a linux firewall

So you've got your Linux server going, it's configured the way you want it, and everything is coming up roses. You try to ping it, but the server doesn't seem to exist. You've been blocked by one of the best/most insane firewalls in the galaxy: iptables. A firewall's job is to block unwanted traffic, and iptables takes its job seriously. By default, it. drops. everything. Got an http request incoming? Psh, drop that packet. I don't care if you've got apache running. FTP request? Same story. Mysql? Nope.

A cat shoving things off a desk
This is iptables

Ssh is usually fine though, so we can log in and edit the rules. Iptables rules are added to rule chains. The only chain we're interested in for now is the INPUT chain; we want to be able to receive http requests to our server, ssh connections, and nothing else. We'll also want to allow existing connections to persist. These are the switches we'll be using (you can find all of these in the manpages, of course, but some are in the iptables-extensions manpage):

  • -F flushes the rule chains. This means exactly what you’d think.
  • -A [chain] adds a rule to the specified chain.
  • -I [chain] [number] same as -A but inserts rule at a given point in the chain.
  • -i [interface] specifies the interface the rule will act on. If you don’t specify this, the rule will act on all interfaces, including loopback (more on this later).
  • -p [protocol] specifies whether the rule is for tcp, udp, or whathaveyou.
  • --dport [port] further narrows down the packets to look at by checking which port they’re headed for.
  • -m [match] this is an extension that looks at the type of traffic the packet belongs to. We use it with:
  • --state [state], which asks a different module called conntrack whether the connection is INVALID, NEW, ESTABLISHED, RELATED, or UNTRACKED. This is magic, I have only a vague understanding of how it works.
  • -j [target] says what to do with a matching packet, e.g. whether to accept or drop it.

Alright, let’s get to it. You can think of iptables as a sieve, where every rule along the way checks out a packet and decides whether to keep it or discard it. If the rule doesn’t match the packet, it moves further down the sieve to be checked by other rules. Therefore, at the end of it, if the packet doesn’t match any of our rules, we will just discard it. A good policy for internet traffic is that if you don’t know what it is, don’t touch it. Every rule we add gets added last in the chain/sieve.

A script demonstrating the use of iptables
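
A sketch of such a script, using only the switches described above, might look like this (ports 22 and 80 are the standard ssh and http ports; run it as root):

```sh
#!/bin/sh
# Start from a clean slate
iptables -F

# Anything on the loopback interface is fine
iptables -A INPUT -i lo -j ACCEPT

# Let established and related connections keep flowing
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Accept new ssh and http connections
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT

# Everything else falls through the sieve and gets dropped
iptables -A INPUT -j DROP
```

Note the loopback rule near the top: without -i lo, that final catch-all DROP would also kill local traffic like apache talking to mysql on 127.0.0.1, which is the "more on this later" from the -i bullet above.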

And that’s it. We’ve configured our firewall. It will reset every time you reboot your server, but that isn’t often. I just keep a script like the one above to reconfigure it. You can get NetworkManager to configure it for you on boot, but I don’t really see the point unless you reboot your server all the time, which, I mean, why would you do such a thing?