All posts by Bjørn

Docker MySQL with multiple databases

As discussed before, I am containerizing this server for ease of development and portability. One problem is initializing the database, a single RDBMS instance with several databases inside for different applications. Having the container initialize with the correct permissions on all the databases was a headache that I at first solved by customizing the docker image, but I finally have a stable way of doing it with the upstream image.

Docker is awesome, I think we can all agree, and the relational database images like mysql and mariadb are essential for a lot of applications. Those images come ready to go with helpful initialization support: environment variables to create the desired database and user and to set the root and user passwords. The containers also support initializing database state from a .sql or .sql.gz file placed in /docker-entrypoint-initdb.d, which is very useful when you want to work on real data rather than fixtures or a fresh, empty database. Using docker-compose, you could initialize a db container like this:

services:
  database:
    image: mysql:5.7
    volumes:
      - "./blog.sql.gz:/docker-entrypoint-initdb.d/initdb.sql.gz"
      - "./database/data:/var/lib/mysql"
    container_name: "database"

    environment:
      MYSQL_USER: "dbuser"
      MYSQL_PASSWORD: "somepassword"
      MYSQL_DATABASE: "blog"
      MYSQL_RANDOM_ROOT_PASSWORD: 1

The variables are only hardcoded for the purposes of the example; you should be using secrets instead. An extra neat thing is that we're using environment variables to tell the docker image to create the blog database, but the sqldump also contains that database, and this all works as you would expect: the database is created, the dump is applied, and the user is granted access on it. There is one huge limitation, though: using the environment variables you can only create a single database this way, and that is the only database the user will be granted privileges on. My server has several apps with separate databases, and I would like to be able to keep adding more! How can I do that?

It turns out the initialization files are loaded in alphabetical order! If only I could create an SQL file that grants access on the databases I need…

-- Runs after the entrypoint has created the MYSQL_DATABASE database and the MYSQL_USER user.
CREATE DATABASE IF NOT EXISTS archive;

-- The only account that is neither root nor bound to localhost is the one created from MYSQL_USER.
SET @grantto = (SELECT User FROM mysql.user WHERE User != "root" AND Host != "localhost");
-- The username isn't known until runtime, so build the GRANT as a string and execute it.
SET @grantStmtText = CONCAT("GRANT ALL ON archive.* TO ", @grantto);
PREPARE grantStmt FROM @grantStmtText;
EXECUTE grantStmt;

Now this is dark magic, and it is likely to break in the future in strange ways. The first statement speaks for itself. The second assigns the username created from the MYSQL_USER environment variable to the MySQL user-defined variable @grantto. I'm taking advantage of a known initial state for the database, as there doesn't seem to be a way to read actual environment variables from within MySQL: at this point the only accounts are root, a few internal mysql users bound to localhost, and our user created from the environment variable. Next I just construct the grant statement as a string. The last two lines turn that string into a prepared statement and execute it, et voilà, our user has access to the archive database!

Now we just take advantage of the alphabetical ordering of the init files and add this short sql file to our docker-compose.yml, mounting it under a name that sorts after the dump, like so:

services:
  database:
    image: mysql:5.7
    volumes:
      - "./blog_and_archive.sql.gz:/docker-entrypoint-initdb.d/initdb.sql.gz"
      - "./grant-all.sql:/docker-entrypoint-initdb.d/zz-grant-all.sql"
      - "./database/data:/var/lib/mysql"
    container_name: "database"

    environment:
      MYSQL_USER: "dbuser"
      MYSQL_PASSWORD: "somepassword"
      MYSQL_DATABASE: "blog"
      MYSQL_RANDOM_ROOT_PASSWORD: 1

If we clear out the database data and restart the container, it will come up with our user having access to both “blog” and “archive”! We could tweak it even further to figure out which databases were actually created and grant on all of them, but I have a manageable number of databases and a job, so I'm not doing it.
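If I ever do, it would probably look something like the untested sketch below: loop over every non-system schema and run the same dynamic GRANT for each, reusing the known-initial-state trick to find the user. Treat it as a sketch only; the system-schema list and quoting may need adjusting for your MySQL version.

-- Untested sketch: grant the MYSQL_USER user access to every non-system database.
DELIMITER //
CREATE PROCEDURE grant_on_all_databases()
BEGIN
  DECLARE done INT DEFAULT FALSE;
  DECLARE dbname VARCHAR(64);
  DECLARE cur CURSOR FOR
    SELECT SCHEMA_NAME FROM information_schema.SCHEMATA
    WHERE SCHEMA_NAME NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

  -- Same trick as before: the only non-root, non-localhost account is our user.
  SET @grantto = (SELECT User FROM mysql.user WHERE User != 'root' AND Host != 'localhost');

  OPEN cur;
  grant_loop: LOOP
    FETCH cur INTO dbname;
    IF done THEN
      LEAVE grant_loop;
    END IF;
    SET @grantStmtText = CONCAT('GRANT ALL ON `', dbname, '`.* TO ', @grantto);
    PREPARE grantStmt FROM @grantStmtText;
    EXECUTE grantStmt;
    DEALLOCATE PREPARE grantStmt;
  END LOOP;
  CLOSE cur;
END //
DELIMITER ;

CALL grant_on_all_databases();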

containerized blog

Another chapter in the ever growing book that is the story of my blog, as is good and right for any developer.

This is now coming at you from docker-compose. The blog, I mean. It used to be on a normal DigitalOcean droplet running on bare metal (well, it's a low-tier instance, so probably a VM, but you know what I mean). Even worse, to my great shame, it was just a plain WordPress install. Now, it's still running on that same VM and it's still WordPress, but it's using roots/bedrock.


bedrock by roots: WordPress boilerplate with modern development tools, easier configuration, and an improved folder structure

roots/bedrock lets you manage WordPress as a composer dependency, including themes and plugins. Essentially that means the whole blog is now a git repo with a single composer.json and composer.lock file. Of course there's a bit more to it, with .env files and persistent stuff, but essentially that's it. This is very cool on its own, but just moving one WordPress site to composer isn't cool enough, so I did the same for the archive. The archive was using some plugins that don't even exist anymore, but I managed to find and patch their successors well enough to keep it afloat, so now that's also managed with composer. That means I can easily upgrade and patch both blogs on my machine, test them here, and if everything works, quickly run the same upgrade in a predictable manner in production. Cool.
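To make that concrete, a heavily abridged, hypothetical composer.json for a Bedrock-style site looks something like this; the plugin, theme, and version constraints here are arbitrary examples, not what this blog actually uses:

{
  "repositories": [
    { "type": "composer", "url": "https://wpackagist.org" }
  ],
  "require": {
    "composer/installers": "^1.8",
    "roots/wordpress": "^5.4",
    "wpackagist-plugin/akismet": "^4.1",
    "wpackagist-theme/twentytwenty": "^1.2"
  },
  "extra": {
    "installer-paths": {
      "web/app/plugins/{$name}/": ["type:wordpress-plugin"],
      "web/app/themes/{$name}/": ["type:wordpress-theme"]
    },
    "wordpress-install-dir": "web/wp"
  }
}

Core, plugins, and themes all end up pinned in composer.lock, which is what makes the upgrade-locally-then-deploy workflow predictable.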

But this server doesn't just host WordPress; it's also running my nrk_subs app, my cv app, and, new as of today, my lolz aggregator. What I really want is to run everything in nice little docker containers so I can duplicate everything locally and develop it further there the same way I would at work, so that's what I did. I first built the containers I needed for the blogs and then started incorporating the other projects, which were already mostly containerized. So currently, this is the docker-compose.yml that manages everything here:

version: "2.4"

services:
  database:
    build:
      context: "./database/docker"
    volumes:
      - "./storage/blog_and_archive.sql.gz:/docker-entrypoint-initdb.d/initdb.sql.gz"
      - "./database/data:/var/lib/mysql"
    container_name: "database"
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "--silent"]
    command: "--default-authentication-plugin=mysql_native_password"
    env_file: .env
    environment:
      MYSQL_DATABASE: $MYSQL_BLOG_DATABASE
      MYSQL_RANDOM_ROOT_PASSWORD: 1

  blog:
    image: brbcoffee/blog-base
    env_file: .env
    depends_on:
      - database
    environment:
      DB_HOST: database:3306
      DB_USER: $MYSQL_USER
      DB_PASSWORD: $MYSQL_PASSWORD
      DB_NAME: $MYSQL_BLOG_DATABASE
      WP_HOME: $WP_HOME_BLOG
      WP_SITEURL: $WP_SITEURL_BLOG
      XDEBUG_CONFIG: remote_host=172.17.0.1
    volumes:
      - "./blog/:/var/www/blog"
      - "./storage/media/blog:/var/www/blog/web/app/uploads"

  archive:
    image: brbcoffee/blog-base
    env_file: .env
    depends_on:
      - database
    environment:
      DB_HOST: database:3306
      DB_USER: $MYSQL_USER
      DB_PASSWORD: $MYSQL_PASSWORD
      DB_NAME: $MYSQL_ARCHIVE_DATABASE
      WP_HOME: $WP_HOME_ARCHIVE
      WP_SITEURL: $WP_SITEURL_ARCHIVE
      XDEBUG_CONFIG: remote_host=172.17.0.1
    volumes:
      - "./archive/:/var/www/archive"
      - "./storage/media/archive:/var/www/archive/web/app/uploads"

  proxy:
    image: brbcoffee/proxy
    env_file: .env
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - blog
      - archive
      - cv
      - subs
    volumes_from:
      - blog
      - archive
      - lolz

  mailhog:
    image: mailhog/mailhog
#    ports:
#      - "1025:1025"
#      - "8025:8025"

  cv:
    image: brbcoffee/cv
    volumes:
      - "./storage/resume/CV.xml:/app/data/CV.xml"

  subs:
    image: "brbcoffee/subs"

  lolz:
    image: php:7.3-fpm
    environment:
      - APP_ENV=prod
    volumes:
      - "./lolz:/var/www/lolz"

  lolz-cron:
    image: brbcoffee/lolz-cron
    environment:
      - APP_ENV=prod
    volumes:
      - "./lolz:/app

As you can see, a lot is managed in the .env file, and a lot of code is mounted in. The code mounting isn't necessary for everything, and I'll be tweaking it going forward, but for now I mostly wanted to get it live so I had an MVP to work from. There are also a lot of brbcoffee/* images here; those are built from a Makefile specific to the project. I factored the builds out of docker-compose.yml in order to separate concerns a bit once the file started getting too hairy. The goal is to get rid of the droplet entirely and run the whole setup in kubernetes or something like that.

One hiccup was SSL. The rest has actually been working for weeks, but I couldn't figure out a clean way to do SSL. In the end I decided I'm OK with the certificates not renewing automatically in version one, so I just fetched a wildcard certificate with certbot and built it into the proxy image for now.
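For reference, fetching a wildcard manually is roughly this; certbot walks you through a DNS challenge (adding a TXT record) before it issues anything, and the domain here is just for illustration:

certbot certonly --manual --preferred-challenges dns \
    -d brbcoffee.com -d '*.brbcoffee.com'

The resulting certificate and key then get baked into the proxy image at build time, which is exactly the manual step a later version should automate away.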

So there it is, all the stuff on brbcoffee now runs in docker containers under docker-compose. The blogs and the proxy are in the main repo, while the other services have their own repositories which are installed as git submodules. I can toggle a single .env variable and add a build arg and have node serve in dev mode, have the blog containers run xdebug, and have the python containers run a debugpy listener for fullstack local dev. Pretty cool stuff.

I made an nrk thing

Have you ever wanted to leisurely browse the subtitles for your favorite TV show? Does that specific show currently exist in the web player of NRK, the Norwegian Broadcasting Corporation? Boy do I have news for you!

The app was made as a language-learning aid, and it started as a very simple scraping app. The problem with that approach was that NRK kept changing their website, so my app kept breaking. Recently I found out that they actually have an API for most of the stuff I needed, and now the app is stable enough to publish! I still have to scrape the website to figure out which shows are available, but after that we're pretty stable.

I'm not a designer. Actually, I don't have a single designer bone in my body, but I still did my best. The React app has drop shadows, and it's somewhat responsive, all of which goes away when you want to print something. What more could you want?

If anyone wants to help me pick better colors, feel free to make a pull request to the repo! If you’re wondering how it looks on desktop, just run the docker container and find out! It’s all there in the readme.

Let's do a song of the blog; it's been a while.


I’m a wizard

My week

So you’ve been working hard and you’re just about to go on holiday break, but what’s this? You have to enter your hours into <insert invoicing system here> before you can go? You don’t remember what you did two days ago?

So I made this for me, but also more importantly I made this for you.

So you’re looking to move in Norway

Hey, long time no blag.

My other half and I are looking to relocate, which means spending lots of time on FINN.no trying to find the perfect house. One of our main criteria is travel distance to work, as I imagine is the case for most people. Lucky for us, we both work in the same area, which makes figuring that part out quite easy. Now, FINN has some nifty features that can help with this, but all of them move you away from the page you're browsing, or open in a new tab, or take you to Google Maps or something. Super useful if you're only looking at one property, but that is most decidedly not how we do things around here.

Just a few tabs

Can't go opening another tab for each of those tabs to see which is closer to work; can't follow links, get lost, and possibly lose the best one out there. So what's a poor web developer to do?

It’s a chrome extension

I made a chrome extension. It's not in the “app store”; you'll have to go to chrome://extensions, turn on developer mode, and load the unpacked extension yourself. Once you have it installed it's pretty self-explanatory: set your work address, then start finding trips to work by searching in the From address field.

But that's not all! This extension was made for FINN.no, so if you're viewing an ad for a house/apartment/box on FINN, the extension will automatically use the metadata on the page to grab the coordinates of your dream home and look up the trip right away, no interaction required!

That's it for me for now. I got to play with two Norwegian public APIs to make this: one to look up addresses and turn them into GPS coordinates, which was just plain old REST from https://geonorge.no. Cool stuff, but hardly revolutionary. The second one was from https://en-tur.no, which really surprised me: they have a fully fleshed-out GraphQL API! Documentation was a bit sparse, but with your standard in-browser GraphiQL “IDE”, we got there.
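For the curious, the two lookups boil down to something like the sketch below. This is a reconstruction, not the extension's actual code, and the endpoints, query fields, and header name are my best recollection of those APIs at the time, so treat them as assumptions and check the current docs.

// Hypothetical sketch of the two lookups; endpoints and response shapes are assumptions.

// 1) Kartverket's address API: free-text address -> coordinates.
async function geocode(address: string): Promise<{ lat: number; lon: number }> {
  const url = `https://ws.geonorge.no/adresser/v1/sok?sok=${encodeURIComponent(address)}`;
  const res = await fetch(url);
  const data = await res.json();
  // representasjonspunkt holds the point geometry for the first hit.
  const point = data.adresser[0].representasjonspunkt;
  return { lat: point.lat, lon: point.lon };
}

// 2) Entur's JourneyPlanner GraphQL API: trip between two coordinates.
async function tripDuration(from: { lat: number; lon: number }, to: { lat: number; lon: number }) {
  const query = `{
    trip(
      from: {coordinates: {latitude: ${from.lat}, longitude: ${from.lon}}}
      to: {coordinates: {latitude: ${to.lat}, longitude: ${to.lon}}}
    ) {
      tripPatterns { duration }
    }
  }`;
  const res = await fetch("https://api.entur.io/journey-planner/v2/graphql", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Entur asks callers to identify themselves with a client-name header.
      "ET-Client-Name": "brbcoffee-commute-extension",
    },
    body: JSON.stringify({ query }),
  });
  const data = await res.json();
  return data.data.trip.tripPatterns[0]?.duration; // trip duration of the first suggestion
}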

I’ve started to take note of the different Norwegian services that expose public APIs, and I’m definitely pleasantly surprised by what’s out there. Just look around and ye shalt finde coole shit.

Bye

PS: I’ve been postponing blogging because I’ve been working on replacing WordPress with some homebrew solution. It’s getting there, but I have a job and a house hunt to deal with, so don’t hold your breath.

Certbot and apache

I promised a blog post detailing the changes I needed to make to my Apache config in order to move BRBcoffee to HTTPS, but in hindsight there isn't much to write about; it's basically just a refactor.

Certbot, the tool from the EFF (written in Python, yay!) that gets SSL certificates from Let's Encrypt, doesn't work with a monolithic conf file containing multiple hosts. I run all my projects on the same server, routing traffic based on the site address using Apache's VirtualHost directive. It used to look like this:

<VirtualHost *:80>
    DocumentRoot "/var/www/blog"
    ServerName blog.brbcoffee.com
    # Some more directives
</VirtualHost>
<VirtualHost *:80>
    DocumentRoot "/var/www/archive"
    ServerName archive.brbcoffee.com
    # Some more directives
</VirtualHost>
<VirtualHost *:80>
    ProxyPreserveHost On
    ProxyRequests Off
    ServerName cv.brbcoffee.com
    ProxyPass / http://localhost:5000/
    ProxyPassReverse / http://localhost:5000/
</VirtualHost>

So what you need to do is rip all of that out; you don't want it in there. In its place you want this:

IncludeOptional conf.d/*.conf

Depending on your distribution's packaging, this directive may already be somewhere in your httpd.conf file. If it is, great, just leave it be. After that you want to take each VirtualHost that you ripped out of the main httpd.conf and place it in its own file, like so:

<VirtualHost *:80>
    DocumentRoot "/var/www/blog"
    ServerName blog.brbcoffee.com
    # Some more directives
</VirtualHost>

blog.conf

The configuration doesn't change, it just needs to be in a separate file for certbot to be able to do its thing. You see, after certbot goes out and gets your certificates it needs to add some rules to each vhost to redirect traffic to SSL, and I guess they didn't want to write a lot of ugly parsing code for that in a program that really isn't about that (although it should be trivial with BeautifulSoup).

Anyway, before this refactor, running certbot --apache probably didn't fully work: it got the certs for you, but couldn't edit your conf to do what we want. Now, when you run it, it'll complete just fine. If you chose to redirect all traffic to https, it will add three redirect lines to each of your conf files, and it will create a new file as well, in my case blog-le-ssl.conf. It's identical to the old conf file, except that it listens on port 443 and it checks that mod_ssl is loaded. All of this is stuff we could have done ourselves, of course, but it's a convenience thing.
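For reference, the three lines it adds to each port-80 vhost look roughly like this (with each site's own ServerName substituted in):

RewriteEngine on
RewriteCond %{SERVER_NAME} =blog.brbcoffee.com
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]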

So that's all there is to it. Refactor your httpd.conf by separating each VirtualHost into its own file, and run certbot --apache again.

Https

Welcome to HTTPS BRBcoffee, and thank you to Let's Encrypt and certbot for making it a breeze, mostly.
Tomorrow there will be a blog post up about the changes I needed to make to my Apache configuration to get certbot to play nice.

For now, glory in that green padlock! And don't go to the archive if you want it to stay green; WordPress doesn't automatically update image links, so I'll need to fix that at some point. The CV, though, I had no trouble with, even though it's a Flask app that Apache just proxies traffic to. Good job, Apache; good job, Python!

Typescript is genius

I'm teaching myself TypeScript, because why not, and right off the bat I'm blown away by the genius that is using public as a prefix on constructor parameters.
If you haven’t seen it before, it looks like this:

class Human {
    constructor(public name, public age, public job){
        // do more constructor things here
    }
}


And that’s the same as doing this:

class Human {
    constructor(name, age, job){
        this.name = name;
        this.age = age;
        this.job = job;
    }
}
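A quick usage sketch with made-up values shows the generated fields in action:

const dev = new Human("Ada", 36, "developer");
console.log(dev.name); // "Ada", set automatically by the public constructor parameter
console.log(dev.job);  // "developer"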


It just takes the argument and sets a field with that same name in the object! It’s not a revolutionary feature, but it saves so much time in the long run. Anyway, back to it. Look forward to reading about my journey through NativeScript soon, possibly, maybe.

Switching from screen to tmux

Hello avid readers of yesteryear!

I've recently moved from working in a Linux/Windows environment to a Linux/Windows/OS X environment, and as such I've had to make some small changes to my workflows. I'm here to tell you what went wrong and how to fix it (hint: it's in the title).

XKCD comic about using old software configured for you
Relevant XKCD title text: 2078: He announces that he’s finally making the jump from screen+irssi to tmux+weechat.

Now, I'm the guy who just has it set up the way I want. I use screen on Linux, and I use screen in the amazingly named Bash on Ubuntu on Windows. It works how I need it to work and I'm able to get things done. Then we introduce Mac OS X to the mix, and a seemingly tiny problem arises:

screenshot of vim
vim in normal terminal session

Screenshot of vim with different colors
vim inside a screen session

Try to spot the difference. I’ll wait.

The problem is a vim plugin called airline, which uses a lot of colors while enhancing vim's normal UI. Something about the way screen identifies itself to the programs running inside it (its $TERM value) confuses airline and makes the colors change slightly. No big deal, but it also makes the text less readable, which can be a bigger problem. There does exist a separately compiled GNU screen for Mac, specifically made to fix problems with screen colors. That screenshot was taken using that binary, and it looks identical to the native one. I spent about two hours trying to figure out a workaround for this problem, but in the end I decided to just finally give tmux a try; I'd been meaning to get around to that anyway.

vim in tmux session
That was easy

Okay, so tmux handles colors better than screen, but what about all the other screen features that we're used to and love? Well, once you've remapped your prefix key to the one you're used to from GNU screen, you should be totally fine. To do this, just unbind C-b and set the prefix to C-a, replacing 'a' with whatever key you prefer. Splitting panes is done with '%' and '"' in tmux, but you can simply unbind and rebind those to whatever you're used to. Detaching is the same as always, and attaching is done with "attach" instead of "-r", so it's fairly easy to remember. Mouse mode is just as easy to enable as before: just replace mousetrack on from .screenrc with set -g mouse on in .tmux.conf. All of this fits in a few lines of config, sketched below.
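Here's a minimal ~/.tmux.conf along those lines; the split-key choices are just a suggestion, not anything tmux mandates:

# Use C-a as the prefix, screen style
unbind C-b
set -g prefix C-a
bind C-a send-prefix

# Optional: friendlier split bindings than % and "
bind | split-window -h
bind - split-window -v

# Mouse support (tmux >= 2.1), the rough equivalent of screen's mousetrack on
set -g mouse on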

All in all, there isn't much to write about when moving from screen to tmux. They do the same job, but tmux does it better, since it was built for a modern world rather than being the literally 30-year-old GNU Screen. If you're still using screen, give tmux a try. You may spend a little time in the config file at first remapping things, but I swear it's worth it.