ansible

Add a path to the global $PATH with Ansible

When building certain roles or playbooks, I often need to add a new directory to the global system-wide $PATH so automation tools, users, or scripts can find the scripts or binaries they need to run. Just today I did this yet again for my geerlingguy.ruby Ansible role, and I thought I should document how I do it here—mostly so I have an easy, searchable reference for it the next time I have to do this and want a copy-paste example!

    - name: Add another bin dir to system-wide $PATH.
      copy:
        dest: /etc/profile.d/custom-path.sh
        content: 'PATH=$PATH:{{ my_custom_path_var }}'

In this case, my_custom_path_var would refer to the path you want added to the system-wide $PATH. Now, on your next login, you should see your custom path when you echo $PATH!

Updating all your servers with Ansible

From time to time, there's a security patch or other update that's critical to apply ASAP to all your servers. If you use Ansible to automate infrastructure work, then updates are painless—even across dozens, hundreds, or thousands of instances! I've written about this a little bit in the past, in relation to protecting against the Shellshock vulnerability, but that was specific to one package.

I have an inventory script that pulls together all the servers I manage for personal projects (including the server running this website) and organizes them by OS, so I can run commands in the form ansible [os] [command]. That enables me to run things like:
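For example, something like the following (the ubuntu and centos group names are assumptions based on how my inventory is organized; substitute your own groups):

# Upgrade all packages on the Debian/Ubuntu hosts:
ansible ubuntu -b -m apt -a "upgrade=dist update_cache=yes"

# Upgrade all packages on the RHEL/CentOS hosts:
ansible centos -b -m yum -a "name=* state=latest"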

Mount an AWS EFS filesystem on an EC2 instance with Ansible

If you run your infrastructure inside Amazon's cloud (AWS), and you need to mount a shared filesystem on multiple servers (e.g. for Drupal's shared files folder, or Magento's media folder), Amazon Elastic File System (EFS) is a reliable and inexpensive solution. EFS is basically a 'hosted NFS mount' that can scale as your directory grows, and mounts are free—so, unlike many other shared filesystem solutions, there are no per-server or per-mount fees; all you pay for is the storage space (bandwidth is even free, since it's all internal to AWS!).

I needed to automate the mounting of an EFS volume on an Amazon EC2 instance so I could perform some operations on the shared volume, and Ansible makes managing this really simple. In the playbook below—which works with any popular distribution (just change nfs_package to suit your needs)—an EFS volume is mounted on an EC2 instance:
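Here's a minimal sketch of such a playbook (the filesystem DNS name and mount path are placeholders, and I'm assuming a Debian/Ubuntu host for the default nfs_package):

- hosts: all
  become: yes

  vars:
    nfs_package: nfs-common  # Use nfs-utils on RHEL/CentOS.
    efs_mount_dir: /mnt/efs
    # Placeholder: substitute your own EFS filesystem's DNS name.
    efs_dns_name: fs-12345678.efs.us-east-1.amazonaws.com

  tasks:
    - name: Ensure NFS client software is installed.
      package:
        name: "{{ nfs_package }}"
        state: present

    - name: Ensure the mount directory exists.
      file:
        path: "{{ efs_mount_dir }}"
        state: directory

    - name: Mount the EFS volume.
      mount:
        path: "{{ efs_mount_dir }}"
        src: "{{ efs_dns_name }}:/"
        fstype: nfs4
        opts: nfsvers=4.1
        state: mounted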

Ansible playbook to upgrade all Ubuntu 12.04 LTS hosts to 14.04 (or 16.04, 18.04, 20.04, etc.)

Generally speaking, I'm against performing major OS upgrades on my Linux servers; there are often little things that get broken, or configurations gone awry, when you attempt an upgrade... and part of the point of automation (or striving towards a 12-factor app) is that you don't 'upgrade'—you destroy and rebuild with a newer version.

But there are still cases where you have legacy servers running one little task that you haven't yet automated entirely, or that hold data that isn't yet stored in a way that would let you tear down the server and build a new replacement. In these cases, assuming you've already done a canary upgrade on a similar but disposable server (to make sure there are no major gotchas), it may be the lesser of two evils to use something like Ubuntu's do-release-upgrade.
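The core of such a playbook can be sketched roughly as follows (a sketch assuming Ubuntu hosts, not a drop-in upgrade path; the reboot module requires Ansible 2.7 or later):

- hosts: all
  become: yes

  tasks:
    - name: Ensure all current packages are up to date first.
      apt:
        upgrade: dist
        update_cache: yes

    - name: Run the release upgrade non-interactively.
      command: do-release-upgrade -f DistUpgradeViewNonInteractive

    - name: Reboot into the new release.
      reboot: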

Changing a deeply-nested dict variable in an Ansible playbook

I recently had to build an Ansible playbook that takes in a massive inventory structure (read from a YAML file), modifies a specific key in that structure, then dumps the file back to disk. There are other approaches that may be more efficient standalone (e.g. a separate Python/PHP/Ruby/etc. script with a good YAML library), but since I had to do a number of other things in this playbook, I thought it would keep things simple if I could also modify the key inside the playbook.

I was scratching my head for a while: I knew I could use the combine() filter to merge two dicts together (a feature introduced in Ansible 2.0), but I hadn't done so for a deeply-nested dict.
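One way to handle this is combine()'s recursive flag. A minimal sketch, assuming a hypothetical dict where only foo.bar.baz should change:

- name: Change one deeply-nested key, leaving the rest of the dict intact.
  set_fact:
    my_dict: "{{ my_dict | combine({'foo': {'bar': {'baz': 'new-value'}}}, recursive=True) }}"

Without recursive=True, combine() would replace the entire top-level foo key instead of merging down to baz.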

Adding strings to an array in Ansible

From time to time, I need to dynamically build a list of strings (or a list of other things) using Ansible's set_fact module.

Since set_fact is a module like any other, you can use a with_items loop to loop over an existing list, and pull out a value from that list to add to another list.
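As a generic sketch (my_list, source_list, and some_key are hypothetical names):

- name: Append one value from each item to a growing list.
  set_fact:
    my_list: "{{ my_list | default([]) + [item.some_key] }}"
  with_items: "{{ source_list }}"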

For example, today I needed to retrieve a list of all the AWS EC2 security groups in a region, then loop through them, building a list of all the security group names. Here's the playbook I used:
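In outline, it looked something like this (a reconstruction rather than the verbatim playbook; the region is a placeholder, and ec2_group_info was called ec2_group_facts in older Ansible releases):

- hosts: localhost
  connection: local
  gather_facts: no

  tasks:
    - name: Get all security groups in the region.
      ec2_group_info:
        region: us-east-1
      register: sg_result

    - name: Build a list of all the security group names.
      set_fact:
        sg_names: "{{ sg_names | default([]) + [item.group_name] }}"
      with_items: "{{ sg_result.security_groups }}"

    - name: Show the resulting list.
      debug:
        var: sg_names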

dockrun oneshot — quick local environments for testing infrastructure

Since I work with a ton of different Linux distros and environments in my day-to-day work, I have a lot of tooling set up that's mostly OS-agnostic. I found myself in need of a quick barebones CentOS 7 VM to play around in or troubleshoot an issue. Or I needed to run Ubuntu 16.04 and Ubuntu 14.04 side by side and run the same command in each, checking for differences. Or I needed to bring up Fedora. Or Debian.

I used to use my Vagrant boxes for VirtualBox to boot a full VM, then vagrant ssh in. But that took at least 15-20 seconds—assuming I already had the box downloaded on my computer!
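Docker containers fill that gap nicely, since a one-shot container boots in a second or two. The core idea boils down to something like this (centos:7 is just an example image; this illustrates the idea rather than the exact dockrun implementation):

# Boot a throwaway CentOS 7 environment; --rm removes the container on exit.
docker run --rm -it centos:7 /bin/bash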

Bash logic structures and conditionals (if, case, loops, etc.) in Travis CI

Travis CI's documentation often mentions that it can call out to shell scripts in your repository, and recommends anything more complicated than a command or two (maybe including a pipe or something) be placed in a separate shell script.

But there are times when it's a lot more convenient to just keep the Travis CI-specific logic inside the repository's .travis.yml file.

As it turns out, YAML is well-suited to, basically, inlining shell scripts. YAML's literal scalar indicator (a pipe, or |) lets you mark a block of content in which newlines are preserved, while the leading indentation common to the block is stripped.

So if you have a statement like:

if [ "${variable}" == "something" ]; then
  do_something_here
fi

You can represent that in YAML with a literal block, something like the following (using Travis's script phase as an example):
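script:
  - |
    if [ "${variable}" == "something" ]; then
      do_something_here
    fi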

How to fix SSH errors when using Ansible with newer OSes like Ubuntu 16.04

Recently, as I've been building more and more servers running Ubuntu 16.04, I've hit the following errors:

PLAY [host] ************************************************************************************************************

TASK [Gathering Facts] *************************************************************************************************
fatal: [1.2.3.4]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to remote host \"1.2.3.4\". Make sure this host can be reached over ssh", "unreachable": true}

or:

/bin/sh: 1: /usr/bin/python: not found

The former error seems to happen when you're running a playbook on an Ubuntu 16.04 host (with gather_facts: yes), while the latter happens if you're using a minimal distribution that doesn't include Python at all. The problem, in both cases, is that Python 2.x is not installed on the server, and there are two different fixes:
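The first fix is to install Python before Ansible gathers facts, using the raw module (a sketch assuming apt-based hosts; on Ubuntu 16.04 the python-minimal package provides /usr/bin/python):

- hosts: all
  gather_facts: no

  pre_tasks:
    - name: Install Python 2 so Ansible can manage this host.
      raw: test -e /usr/bin/python || (apt-get -y update && apt-get install -y python-minimal)
      become: yes

    - name: Gather facts once Python is available.
      setup:

The second fix is to point Ansible at the Python 3 interpreter that ships with the OS, by setting ansible_python_interpreter=/usr/bin/python3 for the affected hosts in your inventory.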

Cloning private GitHub repositories with Ansible on a remote server through SSH

One of Ansible's strengths is that its 'agentless' architecture uses SSH for control of remote servers. And one classic problem in remote Git administration is authentication; if you're cloning a private Git repository that requires authentication, how can you do this while also protecting your own private SSH key (by not copying it to the remote server)?

As an example, here's a task that clones a private repository to a particular folder:

- name: Clone a private repository into /opt.
  git:
    repo: git@github.com:geerlingguy/private-repo.git
    version: master
    dest: /opt/private-repo
    accept_hostkey: yes
  # ssh-agent doesn't allow key to pass through remote sudo commands.
  become: no

If you run this task as-is, you'll probably end up with a 'Permission denied (publickey)' failure when Git tries to authenticate, because the remote server has no SSH key that GitHub will accept.
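The fix is to forward your local ssh-agent through to the remote host, so your own key (loaded locally with ssh-add) can answer GitHub's authentication challenge without ever being copied anywhere. One way to wire that up is the ansible_ssh_common_args variable in your inventory or play vars (a sketch; you can also set ssh_args in ansible.cfg):

# Forward the local ssh-agent to the remote host during the play.
ansible_ssh_common_args: '-o ForwardAgent=yes'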