Ansible Webserver Rebuild


In this post I want to cover a project I recently finished that uses Ansible to completely reinstall the VPS this site runs on. I already had much of the webserver configuration automated, but I have now finished adding all the other jobs. The drive to get this finished was that I was moving from my previous VPS supplier (Digital Ocean) to a new one (Vultr).

My goal was to be in a situation where I could easily move providers again, with the entire setup of the server fully automated. I simply create a new VPS with my SSH key on it, wait for it to provision, change the IP in DNS to match the new one and then run my playbook. The good news is that this is exactly where I am now. It's 100% repeatable, and I have destroyed the server and done the whole thing again and again to ensure it's flawless.

Why the Move

Firstly, a quick side note on why I moved away from Digital Ocean. I had been a happy Digital Ocean customer for about three years; during this time I never had any issues and I can honestly say I was 100% happy with their service. Then CentOS 8 was released and I decided to move my blog from Fedora Server to CentOS 8, however Digital Ocean didn't have a CentOS 8 image. So I waited for a month, still nothing; it's now nearly four months and still no CentOS 8 image. While in the forums reading various "where is CentOS 8" threads, I noticed someone reply that Vultr has had the image available for a while. I needed no other reason to give them a try. I signed up and played around firing up their CentOS 8 image in various locations around the globe, and I was really impressed. So impressed that I decided to move my site over to Vultr right away.

Finishing the Job

As I say, I had much of the configuration around the webserver automated, but I took the opportunity to automate the rest. This post details what I have done; it may not be the best or most efficient way, but it works and I understand it. From the time the new server is available to the time everything is up and running with a fresh Let's Encrypt certificate is around 5 minutes (5 minutes 19 seconds to be precise).

Automate all the Things

So other than the obvious webserver config and website content, what else is it that I automated? Well, just read on.


  • /etc/hosts
  • checkmk client
  • enable EPEL
  • configure automatic updates
  • install and configure sslh
  • firewall configuration
  • Let's Encrypt / Certbot config
  • fresh 4096 bit dhparam file creation
  • copy blog content over (Hugo)
  • install, configure & start nginx

There is nothing particularly hard in any of these steps, but they all take time and, if done by hand, are prone to human error. The most complex part is actually the Certbot certificate creation. Previously I had written my own Ansible scripts to do this, but I have now taken the opportunity to use the Galaxy role created by the prolific geerlingguy. Where possible I use built-in modules, then I look for Galaxy roles, and finally, if what I find is not suitable or the Galaxy role is too complex, I write a simple role myself.

For each of the bullet points above, let's dive in a little deeper.


/etc/hosts

This is a super simple role that adds a single line to the /etc/hosts file for the server's own IP address and hostname.


- name: updating /etc/hosts
  template:
    src: etc/hosts.j2
    dest: /etc/hosts
    owner: root
    group: root
    mode: 0644

files/etc/hosts.j2:

127.0.0.1 localhost
{{ ansible_default_ipv4.address }} {{ ansible_nodename }}


checkmk client

A very simple role that ensures that checkmk can log in and grab data to feed into my checkmk server.


- name: place checkmk script in place
  copy:
    src: files/check_mk_agent
    dest: /usr/bin/check_mk_agent
    owner: root
    group: root
    mode: 0755

- name: add entry to authorized keys
  authorized_key:
    user: root
    key: "{{ lookup('file', item) }}"
    state: present
  with_items:
    - files/checkmk_key


Enable EPEL

Enables the EPEL repo (Extra Packages for Enterprise Linux). I use the geerlingguy.repo-epel Galaxy role for this.
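On CentOS 8 the role essentially boils down to installing one package; a sketch (not the role's actual task file) would be:

```yaml
# Roughly what geerlingguy.repo-epel achieves on CentOS 8 (a sketch)
- name: Enable the EPEL repository
  dnf:
    name: epel-release
    state: present
```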


Configure Automatic Updates

Now this is worthy of a few more lines. In the past I had installed a package called yum-cron which, on a regular basis, checks for updates and, depending on how you configure it, will also install them. When a server is internet facing I think it's essential to a) always have the latest patches installed and b) ensure that SELinux is enabled. With the move from yum to dnf we can now achieve this in a different way: dnf-automatic has a very similar configuration, in that it checks periodically and can notify, cache and/or update based on a systemd timer.

You can learn more about it here
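If you were doing this without the Galaxy role, the moving parts look roughly like this (a sketch; package and timer names as shipped with dnf-automatic on CentOS 8):

```yaml
# A sketch of what the role automates: install dnf-automatic and
# enable the systemd timer that applies updates on a schedule
- name: Install dnf-automatic
  dnf:
    name: dnf-automatic
    state: present

- name: Enable and start the install timer
  systemd:
    name: dnf-automatic-install.timer
    enabled: yes
    state: started
```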


Install and Configure sslh

One of the best things you can do to reduce the number of people trying and failing to log in via SSH is to move SSH to a non-standard port. I also like to reduce the number of ports that are open on my server, and sslh allows you to multiplex the SSL port. This means that multiple daemons can share port 443 - SSH, webserver, OpenVPN etc. can all use it.


# requires EPEL enabled, this is done by geerlingguy.repo-epel

# install package
- name: SSLH | Install package
  dnf:
    name: sslh
    state: present

# upload config
- name: SSLH | template config file
  template:
    src: sslh.cfg.j2
    dest: /etc/sslh.cfg
  notify: Restart sslh

# start service
- name: Enable the sslh service
  systemd:
    name: sslh.service
    enabled: yes
    state: started

This one also has a handler:


# handlers file for sslh
- name: Restart sslh
  systemd:
    name: sslh.service
    state: restarted

The config file only really needs one line changing - the one where I added {{ sslh_ip_address }}


# This is a basic configuration file that should provide
# sensible values for "standard" setup.

verbose: false;
foreground: true;
inetd: false;
numeric: false;
transparent: false;
timeout: 2;
user: "sslh";

# Change hostname with your external address name.
listen:
(
    { host: "{{ sslh_ip_address }}"; port: "443"; }
);

protocols:
(
     { name: "ssh"; service: "ssh"; host: "localhost"; port: "22"; fork: true; },
     { name: "openvpn"; host: "localhost"; port: "1194"; },
     { name: "xmpp"; host: "localhost"; port: "5222"; },
     { name: "http"; host: "localhost"; port: "80"; },
     { name: "tls"; host: "localhost"; port: "443"; log_level: 0; },
     { name: "anyprot"; host: "localhost"; port: "443"; }
);

You can learn more about it here


Firewall Configuration

As mentioned above, I like to limit the number of open firewall ports on my server, so I have just 80 and 443 open and everything else is rejected.


# tasks file for blog_firewall
- name: open http and https
  firewalld:
    service: "{{ item }}"
    permanent: yes
    immediate: yes
    state: enabled
  with_items:
    - https
    - http

- name: close ssh and cockpit
  firewalld:
    service: "{{ item }}"
    immediate: yes
    permanent: yes
    state: disabled
  with_items:
    - ssh
    - cockpit


Let's Encrypt / Certbot

This is the big one; it will enrol a new SSL certificate for you and set up a cron job to automatically renew it. I use the geerlingguy.certbot Galaxy role for this.

You can read more about it here
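To give a flavour, the variables passed to the role look something like this (a sketch based on the role's documented variables; the email address and domain here are placeholders, not my real values):

```yaml
# A sketch of geerlingguy.certbot variables (email/domain are placeholders)
certbot_create_if_missing: true
certbot_create_method: standalone
certbot_admin_email: admin@example.com
certbot_certs:
  - domains:
      - example.com
```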


Fresh dhparam File Creation

Part of increasing the security of your webserver involves generating a 4096-bit dhparams file. There is an Ansible openssl module which can do this, but it's very slow as it's limited in the extra params you can pass it, so I wrote a quick role to run the command manually, with the additional param that speeds up the generation of this file.

I have left in the other way to do it, just for reference, but it's commented out.


# tasks file for blog_dhparam
- name: Create /etc/ssl/ directory
  file:
    path: /etc/ssl/
    state: directory

# not using this as it takes too long and there is no -dsaparam option
#- name: Generate DH Parameters file (4096 bits)
#  openssl_dhparam:
#    path: /etc/ssl/dhparam.pem
#    size: 4096

- name: Generate DH Parameters file (4096 bits)
  command: openssl dhparam -dsaparam -out /etc/ssl/dhparam.pem 4096
  args:
    creates: /etc/ssl/dhparam.pem

More Info


Copy Blog Content Over (Hugo)

This role simply uses the built-in synchronize module to copy over the files that Hugo generates. I have a GitLab CI/CD pipeline that handles uploading new posts, so this only ever needs to be run once.


- name: ensure webroot directory
  file:
    path: "/var/www/{{ ansible_nodename }}"
    recurse: yes
    state: directory
    setype: httpd_sys_content_t

# dest_port is 443 because sslh is multiplexing SSH on that port
- name: Copy file with owner and permissions
  synchronize:
    src: ~/ansible_blog-dev2_setup/hugo/public/
    dest: "/var/www/{{ ansible_nodename }}"
    dest_port: 443


Install, Configure & Start nginx

This one installs and configures nginx - it can take a huge amount of variables to ensure I'm always getting an A+ on the SSL Labs tests. I use the geerlingguy.nginx Galaxy role for this.

I think I will cover all these variables in a different post.

You can learn more here
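As a taster before that post, a vhost definition for the role looks something like this (a sketch using the role's documented nginx_vhosts variable; the server name and paths are placeholders rather than my real config):

```yaml
# A sketch of a geerlingguy.nginx vhost (server_name/root are placeholders)
nginx_remove_default_vhost: true
nginx_vhosts:
  - listen: "443 ssl http2"
    server_name: "example.com"
    root: "/var/www/example.com"
    index: "index.html"
    extra_parameters: |
      ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
      ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
      ssl_dhparam         /etc/ssl/dhparam.pem;
```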

The playbook

The actual playbook just references the roles in the correct order. There are a stack of variables at the top of the playbook, but as mentioned above I shall cover these in another post.

  roles:
    - etc-hosts
    - checkmk_client
    - geerlingguy.repo-epel
    - exploide.dnf-automatic
    - sslh
    - blog_firewall
    - geerlingguy.certbot
    - blog_dhparam
    - blog_content
    - geerlingguy.nginx

The playbook takes just over 5 mins to complete and can, of course, be run multiple times if required

I hope this was somewhat interesting, OSG