Ansible is a tool for managing infrastructure. It uses YAML files to define the desired state of your servers and then applies that state by SSHing into one or more remote hosts and running commands. In a live environment, you don't want to be running untested commands on your production servers, which is where testing your Ansible playbooks locally can save you time and effort.

Prerequisites

To follow along, you'll need Docker and Ansible installed on your local machine.

Docker Image for Testing

In development, we can test our Ansible playbooks against a Docker container, which lets us start from a clean slate each time. This sounds simple but is tricky: Ansible is designed to run over SSH, and Docker containers don't have SSH installed by default. To get around this, we can build a Docker image that has SSH installed.

FROM ubuntu:22.04

# set shell to bash and error on any failure, unset variables, and pipefail
SHELL ["/bin/bash", "-euo", "pipefail", "-c"]

RUN apt-get update && apt-get install --no-install-recommends openssh-server sudo python3 python3-pip -y && \
    rm -rf /var/lib/apt/lists/* && \
    pip install --no-cache-dir supervisor
    
COPY ./ubuntu.ssh.supervisor.conf /etc/supervisor.conf

# Set the root password
RUN echo "root:root" | chpasswd && \
    # Enable password authentication over SSH for root
    sed 's/\(#\{0,1\}PermitRootLogin .*\)/PermitRootLogin yes/' /etc/ssh/sshd_config > /tmp/sshd_config && mv /tmp/sshd_config /etc/ssh/sshd_config && \
    sed 's/\(#\{0,1\}PasswordAuthentication .*\)/PasswordAuthentication yes/' /etc/ssh/sshd_config > /tmp/sshd_config && mv /tmp/sshd_config /etc/ssh/sshd_config && \
    # Start the SSH service to create the necessary run symlinks
    service ssh start
    
# Expose port 22 for SSH
EXPOSE 22

CMD ["supervisord", "-c", "/etc/supervisor.conf"]

There's a fair amount going on above so let's break it down.

  • We start with the ubuntu:22.04 image as our base image.
  • We update the package list and install openssh-server, sudo, python3, and python3-pip.
  • We remove the package list to keep the image size down.
  • We install supervisor using pip, which pulls a recent release from PyPI.
  • We add a supervisor configuration file that will start the SSH service when the container starts.
  • We set the root password to root.
  • We enable password authentication over SSH for the root user.
  • We start the SSH service to create the necessary run symlinks.
  • We expose port 22 so that we can SSH into the container.
  • We start supervisor as the container's main process. This last point is important as it allows us to restart the SSH service without killing the container. By default, a Docker container stops when its main process stops, so if sshd were the main process, restarting it would kill the container. With supervisor in charge, the SSH service can stop and restart while the container keeps running.

The supervisor config referenced in the Dockerfile as ubuntu.ssh.supervisor.conf is as follows:

[supervisord]
nodaemon=true
user=root

[supervisorctl]

[inet_http_server]
port = 127.0.0.1:9001

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[program:sshd]
directory=/
command=/usr/sbin/sshd -D
autostart=true
autorestart=true
redirect_stderr=true
user=root
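
Because the config exposes supervisor's HTTP control interface on 127.0.0.1:9001 (which happens to be supervisorctl's default server URL), you can restart the SSH service inside a running container without stopping the container. For example, once the container is up (see the docker run command later in this post):

docker exec ubuntu supervisorctl restart sshd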

You can build the Docker image with the following command:

docker build . --file ubuntu.ssh.dockerfile -t prosopo/ubuntu:latest

Initialising a new VPS

When commissioning a new VPS, there are usually a few common steps that you need to take to secure the machine and install relevant software. In our example, we'll demonstrate the following:

  • Installing aptitude
  • Installing required system packages
  • Installing Docker and Docker Compose
  • Creating a new user
  • Disabling root login via SSH
  • Disabling password authentication via SSH
  • Adding our SSH key to the authorized keys file

You can view the playbook in the repo that completes these steps at ./playbooks/init.yml. It includes several task files named apt.yml, docker.yml and ssh.yml (a sketch of ssh.yml follows the play below). Breaking playbooks up like this makes the pieces reusable across other projects.

---
# 1st play: install the dependencies, add the SSH key, add the prosopo SSH user, and restart SSH.
- hosts: all
  gather_facts: false
  become: true
  vars:
    ansible_ssh_user: "{{ ansible_user }}"
    ansible_ssh_pass: "{{ ansible_ssh_password }}"
  tasks:
    - name: Install and configure apt and SSH
      include_tasks: "{{ item }}"
      loop:
        - apt.yml
        - docker.yml
        - ssh.yml
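
The included task files aren't reproduced in this post. As a rough idea of what they do, here is a minimal sketch of ssh.yml; the task names and module choices are illustrative assumptions rather than the repo's actual file, and it leans on the sshuser variables defined in the inventory shown later:

---
# Sketch of ssh.yml (assumed, not the repo's actual file): create the service
# user, authorise our public key, then lock down sshd.
- name: Create the service user with sudo access
  ansible.builtin.user:
    name: "{{ sshuser.ansible_ssh_user }}"
    password: "{{ sshuser.ansible_ssh_user_password | password_hash('sha512') }}"
    groups: sudo
    append: true
    shell: /bin/bash

- name: Add our public key to the user's authorized keys
  ansible.posix.authorized_key:
    user: "{{ sshuser.ansible_ssh_user }}"
    key: "{{ lookup('file', inventory_dir + '/../accounts/ssh/' + sshuser.ansible_ssh_public_key_file) }}"

- name: Disable root login and password authentication over SSH
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    regexp: "^#?{{ item.key }} "
    line: "{{ item.key }} {{ item.value }}"
  loop:
    - { key: PermitRootLogin, value: "no" }
    - { key: PasswordAuthentication, value: "no" }

- name: Restart SSH to apply the new configuration
  ansible.builtin.service:
    name: ssh
    state: restarted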

Testing Docker Containers Within Docker Containers

Once you have your Docker image set up, you can use it to test your Ansible playbooks. At Prosopo, we run most of our software in Docker containers, so we have a playbook that installs Docker and Docker Compose on a fresh Ubuntu machine. But wait, this is Docker inside Docker! How can we run Docker commands inside a Docker container? The answer is to run the container in privileged mode.

docker run --privileged -d --name=ubuntu -p 33322:22 prosopo/ubuntu:latest

⚠️ Please note that running containers in privileged mode gives them almost unrestricted access to the host. It's fine for local testing, but you should never run Docker in privileged mode in production.

Defining Hosts and Running Playbooks

Ansible requires details of the host to run the playbooks on. We can use the following inventory file to specify a host.

---
all:
  hosts:
    server1:
      ansible_host: "localhost"
      ansible_port: "33322"
      ansible_user: "root"
      ansible_ssh_password: "root"
      ansible_sudo_pass: "root"
  vars:
    environment: "development"
    NODE_ENV: "development"
    NODE_OPTIONS: "--max-old-space-size=256"
    sshuser:
      ansible_python_interpreter: /usr/bin/python3
      ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
      ansible_ssh_user: sshuser
      ansible_ssh_user_password: sshuser
      ansible_ssh_public_key_file: "id_ecdsa_prosopo.pub"
      ansible_ssh_private_key_file: "id_ecdsa_prosopo"
      home_folder: "/home/sshuser"

The inventory file above specifies a host called server1 that Ansible will run the playbooks on. The host is listening on localhost on port 33322, and the initial user is root with the password root.

vars are common variables available to all of your hosts. They are made available to your playbooks as follows: {{ vars.key_name }}. You can use any name you like for keys under the vars section.
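
For example, a throwaway debug task (illustrative only) prints one of these values for each host:

- name: Show the NODE_ENV var for this host
  ansible.builtin.debug:
    msg: "NODE_ENV is {{ vars.NODE_ENV }}"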

In our vars we specify the sshuser that Ansible will create and the path to an SSH key pair for authentication.

Running Ansible Playbooks

Remove any running containers and start a new container by running the following command:

docker rm -f $(docker ps -aq -f name=ubuntu) && docker run --privileged -d --name=ubuntu -p 33322:22 prosopo/ubuntu:latest

To run the initialisation playbook, use the following command:

ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook ./playbooks/init.yml -i ./inventory/hosts.development.yml
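
You can also validate a playbook without executing anything using ansible-playbook's built-in --syntax-check flag:

ansible-playbook ./playbooks/init.yml -i ./inventory/hosts.development.yml --syntax-check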

Now test that we are able to SSH into the container:

> ssh -p 33322 sshuser@localhost -i ./accounts/ssh/id_ed25519_example_DO_NOT_USE_IN_PRODUCTION 
Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 6.8.0-40-generic x86_64)

...

sshuser@647cb69720bf:~$ printenv
SHELL=/bin/bash
PWD=/home/sshuser
LOGNAME=sshuser

...

⚠️ Please note that the SSH key above is an example and should not be used in production.

We should also test that we can no longer SSH into the container as the root user:

ssh -p 33322 root@localhost                                                          
root@localhost: Permission denied (publickey).

Great! We're now in a position to test our Ansible playbooks locally. Once we're happy with the playbooks, we can create additional inventory files for different environments, such as staging and production.
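
For instance, a hosts.production.yml could mirror the development inventory but point at a real address and reuse the same vars block with production values (the address below is a placeholder):

---
all:
  hosts:
    server1:
      ansible_host: "203.0.113.10" # placeholder address
      ansible_port: "22"
  vars:
    environment: "production"
    # ...the same sshuser block as in development, pointing at your
    # production SSH key pair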

Setting environment variables

The inventory file we used had settings related to users and SSH keys. We can also set environment variables in a vars file. This is useful for storing environment variables that are specific to a host.

For example, we might want to define all of our server1 env file variables in a section called env_file and then write these to a .env file on the host.

We create the file ./vars/development/server1.yml with the following content:

---
env_file:
  MONGO_INITDB_ROOT_PASSWORD: "dbPassword"
  MONGO_INITDB_ROOT_USERNAME: "dbUsername"
  MONGO_INITDB_DATABASE: "dbName"

A second playbook at ./playbooks/env.yml reads the env_file variables and writes them to a .env file on the host. It also starts the Docker service and copies over a simple Docker Compose file that spins up an instance of MongoDB. The .env file is created in the home directory of the sshuser and is named .env.development; it is used to set environment variables for the Docker Compose file.

---
# 2nd play, run as the service account so that root is not used.
- hosts: all
  gather_facts: false
  # this is used to change the ssh user to the one defined in the inventory file
  remote_user: "{{ vars.sshuser.ansible_ssh_user }}"
  vars:
    # this is the ssh user we created in the init.yml playbook
    ansible_ssh_user: "{{ vars.sshuser.ansible_ssh_user }}"
    ansible_sudo_pass: "{{ vars.sshuser.ansible_ssh_user_password }}"
    # use relative path to the ssh key within this repository
    ansible_ssh_private_key_file: "{{ inventory_dir }}/../accounts/ssh/{{ sshuser.ansible_ssh_private_key_file }}"
    # use ssh user name to set the environment file location on the host
    env_location: "/home/{{ sshuser.ansible_ssh_user }}/.env.{{ hostvars[inventory_hostname].environment }}"
    # set a global NODE_ENV environment variable for the host
    NODE_ENV: "{{ hostvars[inventory_hostname].environment }}"

  tasks:

    # include the vars file for the host by using relative path and the `environment` which is set in the inventory file
    - include_vars: "{{ inventory_dir }}/../vars/{{ hostvars[inventory_hostname].environment }}/{{ inventory_hostname }}.yml"

    - name: Create an env file using the env_file set for this host
      lineinfile:
        path: "{{ env_location }}"
        create: yes
        state: present
        line: "{{ item.key }}={{ item.value}}"
        regexp: "^{{ item.key }}="
        insertafter: EOF
      # this section loops the values in our env_file dictionary
      with_items:
        - "{{ hostvars[inventory_hostname].env_file | dict2items }}"

    # check that the sudo password is set. Remove this in production
    - debug: var="ansible_sudo_pass"

    # We need to be sudo to do this
    - name: Ensure the docker daemon is running
      become: yes
      become_method: sudo
      service:
        name: docker
        state: started

    - name: Copy the docker compose file to the server
      ansible.builtin.copy:
        src: "{{ inventory_dir }}/../docker/docker-compose.yml"
        dest: "/home/{{ sshuser.ansible_ssh_user}}/docker-compose.yml"
        owner: "{{ sshuser.ansible_ssh_user }}"
        group: "{{ sshuser.ansible_ssh_user }}"
        mode: '0755'

    - name: Run `docker-compose up`
      environment:
        NODE_ENV: "{{ hostvars[inventory_hostname].environment }}"
        MONGO_IMAGE: "{{ hostvars[inventory_hostname].mongo_image_version }}"
      community.docker.docker_compose_v2:
        project_src: "/home/{{ sshuser.ansible_ssh_user }}"
        files: "docker-compose.yml"
        env_files: "{{ env_location }}"
      register: output

    # you can stick debug statements anywhere you want to see the output of a variable
    - debug: var=output
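
The docker-compose.yml that the playbook copies isn't shown in this post. A minimal sketch consistent with the env vars above and the sshuser-database-1 container name in the output below might look like this (the service name database is an assumption):

---
# Sketch of docker/docker-compose.yml (assumed, not the repo's actual file).
# MONGO_IMAGE comes from the playbook's environment section; the
# MONGO_INITDB_* values come from the .env file we generated.
services:
  database:
    image: "${MONGO_IMAGE:-mongo:5.0.4}"
    environment:
      MONGO_INITDB_ROOT_USERNAME: "${MONGO_INITDB_ROOT_USERNAME}"
      MONGO_INITDB_ROOT_PASSWORD: "${MONGO_INITDB_ROOT_PASSWORD}"
      MONGO_INITDB_DATABASE: "${MONGO_INITDB_DATABASE}"
    ports:
      - "27017:27017"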

Run the env.yml playbook with the following command:

ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook ./playbooks/env.yml -i ./inventory/hosts.development.yml

You should see output like the following:

PLAY [all] ********************************************************************************************************************************************************************************************************

TASK [include_vars] ***********************************************************************************************************************************************************************************************
ok: [server1]

TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [server1] => {
    "vars.sshuser.ansible_ssh_user_password": "sshuser"
}

TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [server1] => {
    "vars.sshuser.ansible_ssh_user": "sshuser"
}

TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [server1] => {
    "env_location": "/home/sshuser/.env.development"
}

TASK [Create an env file using the env_file set for this host] ****************************************************************************************************************************************************
changed: [server1] => (item={'key': 'MONGO_INITDB_ROOT_PASSWORD', 'value': 'dbPassword'})
changed: [server1] => (item={'key': 'MONGO_INITDB_ROOT_USERNAME', 'value': 'dbUsername'})

...

Now when we SSH into the machine, we can check that Mongo is running.

sshuser@1e54ae1351b4:~$ docker ps
CONTAINER ID   IMAGE         COMMAND                  CREATED          STATUS          PORTS                                           NAMES
f9142751cb71   mongo:5.0.4   "docker-entrypoint.s…"   55 seconds ago   Up 52 seconds   0.0.0.0:27017->27017/tcp, :::27017->27017/tcp   sshuser-database-1

And our initialised database is running! We can docker exec into the MongoDB container and check that the database has been set up with the correct credentials.

sshuser@1e54ae1351b4:~$ docker exec -it 5c28f93ae92f bash
root@5c28f93ae92f:/# mongosh -u dbUsername -p dbPassword
Current Mongosh Log ID: 66c6f5dfe7548c40d048ebf0
Connecting to:  mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000
Using MongoDB:  5.0.4
Using Mongosh:  1.1.2

For mongosh info see: https://docs.mongodb.com/mongodb-shell/


To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy).
You can opt-out by running the disableTelemetry() command.

------
   The server generated these startup warnings when booting:
   2024-08-22T08:24:33.320+00:00: Soft rlimits for open file descriptors too low
------

test> 

Hooray! We have a MongoDB instance running in Docker with the correct credentials on a fresh Ubuntu machine. We now have a repeatable process for setting up new machines and can test our Ansible playbooks locally before deploying to any new VPS instances.
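
When you've finished testing, you can tear everything down with the same cleanup command used earlier, ready for the next clean run:

docker rm -f $(docker ps -aq -f name=ubuntu)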
