After the announcement on September 26 that Ansible will be adopting molecule and ansible-lint as official 'Ansible by Red Hat' projects, I started moving more of my public Ansible projects over to Molecule-based tests instead of using the homegrown Docker-based Ansible testing rig I'd been using for a few years.
There was also a bit of motivation from readers of Ansible for DevOps, many of whom have asked for a new section on Molecule specifically!
In this blog post, I'll walk you through how to use Molecule, and how I converted all my existing roles (which were using a different testing system) to use Molecule and Ansible Lint-based tests.
Installing Molecule
Assuming you have pip installed, installing Molecule is quite easy:
pip install molecule
Check if it's working:
$ molecule --version
molecule, version 2.19.0
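One note: if you plan to use the Docker driver (as in the examples below), you may also need the Docker SDK for Python; a couple of commenters below ran into errors during test runs until they installed it:
pip install docker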
Initializing a new role with Molecule
Before we start integrating Molecule into an existing role, let's take a look at the 'happy path'—using Molecule itself to init a new role:
molecule init role geerlingguy.example -d docker
This command uses ansible-galaxy behind the scenes to generate a new Ansible role, then it injects a molecule directory in the role and sets it up to run builds and test runs in a docker environment. Inside the molecule directory is a default directory, indicating the default test scenario. It contains the following:
$ cd geerlingguy.example/molecule/default/ && ls
Dockerfile.j2
INSTALL.rst
molecule.yml
playbook.yml
tests
Here's a quick rundown of what all these files are:
- Dockerfile.j2: This is the Dockerfile used to build the Docker container used as a test environment for your role. You can customize it to your heart's content, and you can even use your own Docker image instead of building the container from scratch every time—I'll cover how to do that later though. The key is this makes sure important dependencies like Python, sudo, and Bash are available inside the build/test environment.
- INSTALL.rst: Contains instructions for installing required dependencies for running molecule tests.
- molecule.yml: Tells Molecule everything it needs to know about your testing: what OS to use, how to lint your role, how to test your role, etc. We'll cover a little more on this later.
- playbook.yml: This is the playbook Molecule uses to test your role. For simpler roles, you can usually leave it as-is (it will just run your role and nothing else; see the sample playbook after this list). But for more complex roles, you might need to do some additional setup, or run other roles prior to running your role.
- tests/: This directory contains a basic Testinfra test, which you can expand on if you want to run additional verification of your build environment state after Ansible's done its thing.
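For reference, the generated playbook.yml is very simple; it should look something like this (the exact contents may vary slightly by Molecule version):
---
- name: Converge
  hosts: all
  roles:
    - role: geerlingguy.example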
You can customize and/or remove pretty much everything. In many of my roles, I use a custom pre-made Docker image which already has Python and Ansible, so I removed the Dockerfile.j2 template, and updated the image in my molecule.yml to point to my public Docker Hub images. You can also use environment variables anywhere inside the molecule.yml file, so you could have an environment variable like MOLECULE_IMAGE and specify a different OS base image whenever you do a test run, without adding additional scenarios (besides default).
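As a quick sketch (MOLECULE_IMAGE is just an illustrative variable name here, not something Molecule defines for you), the platform image in molecule.yml could be set like this:
platforms:
  - name: instance
    image: "${MOLECULE_IMAGE:-geerlingguy/docker-centos7-ansible:latest}"
Then running MOLECULE_IMAGE=geerlingguy/docker-ubuntu1804-ansible:latest molecule test would test against the Ubuntu image instead.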
I also use Ansible itself (mostly the uri, assert, and fail modules) to do functional tests after the playbook.yml runs, so I delete the tests/ directory, and Molecule automatically detects there are no Testinfra tests to run.
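For example, a converge playbook with a couple of functional checks tacked on might look like this (just a sketch; it assumes the role sets up a web server listening on port 80, so adjust the URL and assertion to whatever your role actually does):
---
- name: Converge
  hosts: all

  roles:
    - role: geerlingguy.example

  post_tasks:
    - name: Check that the site responds on port 80.
      uri:
        url: http://localhost/
        return_content: true
      register: site_response

    - name: Verify the response contains the expected text.
      assert:
        that: "'Welcome' in site_response.content"
        msg: The homepage did not contain the expected text.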
Running your first Molecule test
Your new example role doesn't have anything exciting in it yet, but it should—at this point—pass all tests with flying colors! So let's let Molecule do its thing:
$ cd ../../
$ molecule test
...
--> Validating schema /Users/jgeerling/Downloads/geerlingguy.example/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix
└── default
├── lint
├── destroy
├── dependency
├── syntax
├── create
├── prepare
├── converge
├── idempotence
├── side_effect
├── verify
└── destroy
--> Executing Yamllint on files found in /Users/jgeerling/Downloads/geerlingguy.example/...
Lint completed successfully.
--> Action: 'syntax'
playbook: /Users/jgeerling/Downloads/geerlingguy.example/molecule/default/playbook.yml
--> Action: 'converge'
--> Action: 'idempotence'
Idempotence completed successfully.
...
After a couple minutes, Molecule runs through all the testing steps: linting, checking playbook syntax, building the Docker environment, running the playbook in the Docker environment, running the playbook again to verify idempotence, then cleaning up after itself.
This is great! You can work on the role, and then whenever you want, run molecule test and verify everything still passes.
But what about development? Molecule is actually great for that too!
Role development with Molecule
I used to use this Ansible role testing 'shim' script to do lightweight development of my roles, but it was a bit of a hassle to use, and I didn't have a quick "build me a local environment to work in and leave it running" mode without setting some extra environment variables.
With Molecule, any time you want to bring up a local environment and start running your role, you just run molecule converge. And since you can use Molecule with VirtualBox, Docker, or even AWS EC2 instances, you can have your role run inside any type of virtual environment you need! (Sometimes it can be hard to test certain types of applications or automation inside a Docker container.)
So my new process for developing a role (either a new one, or when fixing or improving an existing role) is:
molecule init role geerlingguy.example -d docker
molecule converge
<do some work on the role>
molecule converge
<see that some changes didn't work>
molecule converge
<see everything working well, commit my changes>
molecule converge
<idempotence check - make sure Ansible doesn't report any changes on a second run>
molecule destroy
It's a lot more fun and painless to work on my roles when I can very quickly get a local development environment up and running, and easily re-run my role against that environment over and over. The faster I can make that feedback cycle (make a change, see if it worked, make another change...), the more—and higher quality—work I can get done.
The converge command is even faster after the first time it's run, as the container already exists and doesn't have to be created again.
Also, when I get a build failure notification from Travis CI, I can just cd into the role directory, run molecule test, and quickly see if the problem can be replicated locally. I used to have to set a few environment variables (which I would always forget) to reproduce the failing test locally, but now I have a more efficient—and easy to install, via pip install molecule—option.
Configure Molecule
If you've been working through the examples in this blog post, you'll notice Molecule puts out a lot of notices about skipped steps, like:
--> Action: 'dependency'
Skipping, missing the requirements file.
--> Scenario: 'default'
--> Action: 'create'
Skipping, instances already created.
--> Scenario: 'default'
--> Action: 'prepare'
Skipping, prepare playbook not configured.
What if you want to customize the order of tasks run when you run molecule test? Or if you need to remove a step (e.g. for some very specialized use cases, you don't want to verify idempotence—the playbook is supposed to make a change on every run)? The scenario's test sequence (in Molecule's terms) can be modified, along with just about everything else, in the molecule.yml file.
For the example above, since we don't need to install any dependencies or run a preparatory playbook, we can take those steps out by adding the following configuration under the scenario top-level configuration. Here's the default test sequence as of Molecule 2.19:
scenario:
  name: default
  test_sequence:
    - lint
    - destroy
    - dependency
    - syntax
    - create
    - prepare
    - converge
    - idempotence
    - side_effect
    - verify
    - destroy
Since we don't need certain steps, we can comment out:
- dependency: If this role required other roles, we could add a requirements.yml file in the default scenario directory, and Molecule would install them in this step. But we don't have any requirements for this role.
- prepare: You can create a prepare.yml playbook in the default scenario directory and have Molecule run it during the prepare step, but we don't need one for this role. (A sample prepare.yml follows this list.)
- side_effect: You can create a side_effect.yml playbook in the default scenario directory and have Molecule run it during the side_effect step, but we don't need one for this role.
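For illustration only (this role doesn't need one), a prepare.yml playbook is just a normal playbook that Molecule runs once after creating the instance and before converge, for example to install a package the role under test assumes is present:
---
- name: Prepare
  hosts: all

  tasks:
    - name: Install a package the role under test expects to be present.
      package:
        name: curl
        state: present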
So now our scenario definition looks like:
scenario:
  name: default
  test_sequence:
    - lint
    - destroy
    # - dependency
    - syntax
    - create
    # - prepare
    - converge
    - idempotence
    # - side_effect
    - verify
    - destroy
And if we run molecule test, we won't see all the notices about skipped actions, because they're not in the test_sequence.
There are many other things you can configure in molecule.yml; in fact, pretty much every setting and option in Molecule is configurable or overridable. See the Molecule configuration documentation for all the gory details.
Use pre-built Docker images with Molecule
Speaking of things you can configure in molecule.yml—if you want to speed up your test runs (especially in ephemeral CI environments), you can swap out the default Docker build configuration for your own Docker image, and tell Molecule you've already configured the image with Ansible.
And wouldn't you know, I already maintain a number of base Docker images with Ansible pre-installed, for most popular Linux distributions. All of these base images have not only Ansible, but also systemd (or sysvinit in older Ubuntu and CentOS releases), so you can test service management, and pretty much anything you'd be able to test in a full-fledged virtual machine.
To use those images, I customize my molecule.yml file's platforms configuration a bit:
platforms:
  - name: instance
    image: "geerlingguy/docker-${MOLECULE_DISTRO:-centos7}-ansible:latest"
    command: ${MOLECULE_DOCKER_COMMAND:-""}
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    privileged: true
    pre_build_image: true
Many of the options, like volumes, command, image, and privileged, are passed through exactly like you'd expect from a Docker Compose file, or in the Ansible docker_container module's parameters. The special pre_build_image option, when set to true, means that the image you're using already has Ansible inside, so Molecule doesn't need to waste time building it if the image doesn't exist locally.
You might also notice my use of variables like ${MOLECULE_DISTRO:-centos7}; Molecule conveniently supports environment variables inside the molecule.yml file, and you can even provide defaults like I have with the :-centos7 syntax (this means that if MOLECULE_DISTRO is an empty string or not set, it will default to centos7).
So when I run molecule commands, I can run them with whatever OS I like, for example:
MOLECULE_DISTRO=ubuntu1804 molecule test
This will run the test suite inside an Ubuntu 18.04 environment. If I don't set MOLECULE_DISTRO, it will run inside a CentOS 7 environment.
I also use volumes to mount /sys/fs/cgroup inside the image, and set the privileged flag, because without these, systemd won't run correctly inside the container. If your Ansible roles aren't managing services using systemd, you might not need to use those options (even if you use my Docker images); many Ansible playbooks and roles work just fine without the elevated Docker container privileges.
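For example, for a role that doesn't manage services at all, a stripped-down platform definition (just a sketch, with the systemd-related options removed) could be as simple as:
platforms:
  - name: instance
    image: "geerlingguy/docker-${MOLECULE_DISTRO:-centos7}-ansible:latest"
    pre_build_image: true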
You don't have to use your own pre-built images (or mine), but if you have the tests run in Travis CI or any other environment where Docker images are not persistent (heck, I run docker system prune --all on my local workstation all the time!), it's much faster to use pre-built images. Plus you can have images which simulate a more full VM-like experience.
If you use VirtualBox, EC2, or some other platform, then the amount of time and effort required to maintain a custom image might negate the minimal performance gain from pre-building the images; but at least the option is out there. Oh, and I have plenty of Vagrant base image builds for VirtualBox (built automatically with Packer and Ansible) you could use to get started (e.g. packer-ubuntu-1804)!
Integrating Molecule into Travis CI
The holy grail, at least for me, is reproducing my entire multi-OS testing platform I've built up over the years using Molecule instead of some cobbled-together shell scripts. One reason I am able to maintain nearly 100 popular Ansible roles on Ansible Galaxy (and lots of other projects besides) is the fact that all the common usage scenarios are thoroughly and automatically tested—not only on every pull request and commit, but also on a weekly cron schedule—via Travis CI.
I won't go into the full details of how I switched from my old setup to Molecule in Travis CI, but I will offer a couple examples you can look at to see how I did it, and managed to make a maintainable test setup for multiple operating systems and playbook test cases using Molecule and Travis CI:
- geerlingguy.kibana (Travis CI build): This is one of the simpler test cases; it just tests on two distros, the latest versions of Ubuntu and CentOS, and runs one playbook which installs Kibana, then makes sure it's reachable.
- geerlingguy.jenkins (Travis CI build): This is one of my most complicated roles, therefore it has a much more complex array of tests. Not only is it tested against four different distros, there are also five separate test playbooks used to test various common use cases and configurable options.
In the Jenkins role (and many others), I specify a different MOLECULE_PLAYBOOK for the converge playbook. Some people may prefer an entirely different Molecule scenario (besides the default scenario) for different test cases, but in my case, I set the converge playbook name (provisioner.playbooks.converge in the molecule.yml) to an environment variable so I can use a different playbook per test case:
provisioner:
  name: ansible
  lint:
    name: ansible-lint
  playbooks:
    converge: ${MOLECULE_PLAYBOOK:-playbook.yml}
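Then a particular test case can point at its own playbook when invoking Molecule (the playbook name here is just a hypothetical example):
MOLECULE_PLAYBOOK=playbook-alternate.yml molecule test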
In all my roles, I use Travis CI's build matrix feature to perform each distro/playbook combination test in a separate build container, running in parallel with the other tests. It would take 7x longer to run all the Jenkins tests in series, and I hate waiting to see if a pull request will break anything in my role!
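For reference, the matrix itself is just a list of environment variable combinations in .travis.yml. Here's a trimmed-down sketch (not the exact file from any of my roles; the distros and playbook name are only examples):
language: python
services:
  - docker

# Each env entry becomes its own parallel build job.
env:
  - MOLECULE_DISTRO=centos7
  - MOLECULE_DISTRO=ubuntu1804
  - MOLECULE_DISTRO=debian9
  - MOLECULE_DISTRO=ubuntu1804 MOLECULE_PLAYBOOK=playbook-alternate.yml

install:
  - pip install molecule docker

script:
  - molecule test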
Dive Deeper into Ansible role and playbook testing
You can find out even more about Molecule, Travis CI, Docker integration, and other testing topics in my book, Ansible for DevOps.
Comments
Molecule sounds cool and all, but will it support roles intended for multiple environments, including un-Dockerable platforms like macOS? https://github.com/mcandre/ansible-curl
For macOS role testing, I still use Travis CI's native macOS build support—see, for example, my geerlingguy.mas role.

I haven't tested it, but it could be possible to use Molecule with a macOS VirtualBox image. It's not very fun to get working though—Apple doesn't really give a lot of support to running macOS in virtualized environments :(
Hey Jeff, a couple months ago I had a crack at porting your Homebrew and Elliot's command line tools roles to Travis. It ended up working pretty well, but it took a very long time to run; the main issue was that I needed to manually recompile Ruby because of the brew links when uninstalling the already-installed Homebrew and command line tools. I don't know if that's needed anymore. I think I attempted MAS locally but I had issues with my OSX version, but MAS shouldn't need Ruby compiling because it's just a brew package. My GitHub link for the Homebrew role is: https://github.com/audibailey/ansible-homebrew-role
The Travis link is: https://travis-ci.org/audibailey/ansible-homebrew-role
I didn’t try the vagrant option for OSX but I’m thinking the overall run time would take longer because it needs to download a 14GB+ packer OSX image.
Anyways, thanks for your work and hopefully this helps.
Regards,
Audi Bailey
I've been using Molecule for Ansible testing since I was fortunate enough to be in the room for this talk at AnsibleFest 2017.
https://www.ansible.com/infrastructure-testing-with-molecule
It's quite flexible in what you target for machines, and for one integration test that required a lot of resources, I used the Delegated driver to defer the infrastructure setup to Terraform, and then used the Terraform module for Ansible to provision from the Terraform template. Since Terraform accepts JSON configuration files, the playbook was able to assemble the infrastructure using a series of set_fact calls, then pipe an object to JSON and store it in a file. As proof-of-concepts go, it was pretty ugly, but with a couple filter plugins the whole thing could have been made very elegant.
Just remember that the version presented at AnsibleFest was Molecule v1, which is not very relevant today.
Just a quick note to add to your article. One very nice feature of Molecule when testing with Docker is the `--destroy never` option (`molecule test --destroy never`). It keeps the container from being destroyed upon failure, which allows you to inspect the state of the container after the failure. This is particularly useful when debugging Testinfra tests.
I used to use Molecule. At some point it started to crash my Ubuntu workstation. This is due to running systemd in a container with `--privileged`. So the systemd in Docker was actually communicating with the outside, which freaked my workstation out. I consider this a very unsafe practice, especially if you use other people's roles.
https://github.com/ansible/molecule/issues/1102
https://github.com/ansible/molecule/issues/1104
Therefore, I actually switched back to your shim. Do you have any idea how to do this in a better way?
In many cases, if you're not managing services inside Docker containers, you can just not use privileged, and it will work fine. I only use it because in a few cases my roles don't build correctly without it. But I should really default to off and only turn it on for the roles that actually need it...

Also, you can use VirtualBox instead if you want (or one of the other providers supported by Molecule).
Thanks again for the article. Using the TravisCI matrix is a nice trick to speed up the testing. Caching the pip dependencies saves another 40 seconds for me.
```
# .travis.yml
language: python
cache: pip
```
I get the following errors. My setup is a MacBook Pro.
--> Action: 'destroy'
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) deletion to complete] *******************************
failed: [localhost] (item=None) => {"attempts": 1, "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
fatal: [localhost]: FAILED! => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
PLAY RECAP *********************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=1
ERROR:
An error occurred during the test sequence action: 'destroy'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) deletion to complete] *******************************
failed: [localhost] (item=None) => {"attempts": 1, "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
fatal: [localhost]: FAILED! => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
PLAY RECAP *********************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=1
ERROR:
It looks like Molecule is having trouble deleting the Docker instance used for your testing. Is Docker running on your Mac? If you run docker ps, do you see any running containers?

I had the exact same errors. Docker was running, but the docker Python library was not installed. Fixed by using: pip install docker
Once I did that, a newly initialized module tested cleanly. Hope this helps...
Thanks for writing this. I've been using your homegrown docker testing method for a while now so I'm looking forward to trying this out.
Thanks for your reply.
`pip install docker` worked for me.
Hi Jeff, I noticed that you opt for testing all the different platforms using the Travis CI matrix rather than having a list of platforms in `molecule.yml`. I presume that this gives you better error messages from Travis when things fail, i.e. you can see which specific item in the matrix failed, which also tells you the OS and scenario.
This seems like it would slow down role development though, since you're running converge against a single platform at a time, rather than against all the platforms at once.
Is this the case, or have I misinterpreted your setup?
Thanks!
This is true; when developing roles, though, I only work on one OS at a time; my brain couldn't handle having 5 or 6 distros running at once. I can control which OS I'm working on with MOLECULE_DISTRO=ubuntu1804 molecule [command]. This isn't 100% matching the paradigms Molecule itself sets up—and indeed if you do a molecule converge and forget to do a molecule destroy later (before switching gears to another OS/version), you might think you're on one OS but actually be on the other, but it fits the way I work, so that's how I do it :)

Are you setting $MOLECULE_DISTRO within Travis CI?
Yes; see, for example: https://github.com/geerlingguy/ansible-role-apache/blob/master/.travis…
Happy to see you finally adopting those man. I remember as if it was yesterday when you specifically disbanded one of my PRs to one of the ansible roles including it. Hope mine was one of the tipping points to get you started properly with it ;)
I couldn't get systemctl to work on your ubuntu1804 image on my mac until I added the following to the platforms section.
Just thought I would share for anyone else running into this problem.
This is my part of molecule.yml:
platforms:
  - name: instance
    image: geerlingguy/docker-debian10-ansible:latest
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    privileged: true
    pre_build_image: true
I'm getting an error about systemd not working. I've already run 'docker system prune --all'. Any ideas what I'm doing wrong?
Hi Jeff,
We are facing an issue where the "prepare.yml" does not run on every test run; the output says (Action: 'prepare' Skipping, instances already prepared).
There is an existing issue on Github: https://github.com/ansible-community/molecule/issues/1459 (says instances already prepared and skips for no reason)
We wanted to know how to add the state file in Molecule's temp directory, as mentioned in the solution. An example of the syntax of the "State" would help us understand and fix the issue with the above-mentioned solution (molecule prepare --force; state is kept in the state file in $TMPDIR/molecule/).
Note: Running the molecule destroy before the test run is not solving the issue either.
Thanks in advance for the solution.
molecule init role -r geerlingguy.example -d docker does not work anymore.
The -r needs to be dropped.
Thanks! I've updated the post.
The geerlingguy.example/molecule/default directory content is slightly different:
Instead of:
cd geerlingguy.example/molecule/default/ && ls
Dockerfile.j2
INSTALL.rst
molecule.yml
playbook.yml
tests
there is
cd geerlingguy.example/molecule/default/ && ls
INSTALL.rst converge.yml molecule.yml verify.yml
I also got the same directory output after trying. Not sure if I am doing something wrong or if it's not pulling from ansible-galaxy, but if you figured out how to get it correctly, please post! Thanks
Is there a way to compute the code coverage of the molecule test?
Hi guys,
I use GitLab CI for testing Ansible with Molecule, and I've used a matrix for testing multiple OSes simultaneously, like this:
```
molecule-test:
  image: qwe1/dind-ansible-molecule:root
  stage: test
  variables:
    ANSIBLE_FORCE_COLOR: '1'
  parallel:
    matrix:
      - MOLECULE_DISTRO: [debian10,debian11]
```
The problem is, when I use a matrix with two OSes, I get this error:
```
[WARNING]: Error deleting remote temporary files (rc: 1, stderr: Error response
from daemon: Container
4d46de18c6021eb094beeeebad174afd1c3a11da2ba1c927a8d19c94b9aa4077 is not running
})
fatal: [instance]: FAILED! => changed=false
module_stderr: ''
module_stdout: ''
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
rc: 137
```
Could anyone help me out?
Jeff -
I noticed an issue with running molecule on ubuntu 22.04 compared to 20.04 (and Arch for that matter). In the molecule/default/molecule.yml file you have
platforms:
  - name: instance
    image: "geerlingguy/docker-${MOLECULE_DISTRO:-ubuntu2204}-ansible:latest"
    command: ${MOLECULE_DOCKER_COMMAND:-""}
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    privileged: true
    pre_build_image: true
Note the command: line. That works fine when the host system running docker and molecule is 20.04, but causes a failure with 22.04 with the instance creation. It's not the instance creation that fails, it's the command, which disallows connection to the container.
With that command: line, instance creation looks like this
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: [localhost]: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': '419720698727.21394', 'results_file': '/home/ubuntu/.ansible_async/419720698727.21394', 'changed': True, 'item': {'command': '', 'env': {'DOCKER_HOST': 'unix:///var/run/docker.sock'}, 'image': 'geerlingguy/docker-ubuntu2204-ansible:latest', 'name': 'instance-ubuntu2204', 'pre_build_image': True, 'privileged': True, 'volumes': ['/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 'ansible_loop_var': 'item'})
Note the command bit just before the env bit. Works fine when the host system running docker and molecule is 20.04, but fails on 22.04. The prepare stage fails with
TASK [Gathering Facts] *********************************************************
fatal: [instance-ubuntu2204]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo ~/.ansible/tmp `\"&& mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1670863232.4766915-25138-84648872554305 `\" && echo ansible-tmp-1670863232.4766915-25138-84648872554305=\"` echo ~/.ansible/tmp/ansible-tmp-1670863232.4766915-25138-84648872554305 `\" ), exited with result 1", "unreachable": true}
I can't figure out WHY that's the case. The host system should have no impact - the instance-ubuntu2204 container should be exactly the same regardless of the host OS. Unfortunately, the instance-ubuntu2204 container exits right after the molecule run, so no way to see what actually happened via logs.
I've tested this using multipass to create 20.04 and 22.04 VMs, then creating a python venv, installing ansible/molecule/etc and a test role with
pip install ansible molecule molecule-docker molecule-goss molecule-containers molecule-inspec
molecule init role -d docker molecule_test
Modify the molecule/default/molecule.yml file to use your geerlingguy/docker-ubuntu2204-ansible:latest image and the command: line from above. Create a simple molecule/default/prepare.yml file that adds a user for example. Then "molecule create".
Works fine in 20.04, fails in 22.04, with otherwise-identical configuration.