Installing PHP 7 and Composer on Windows 10, Natively
Note: If you want to install and use PHP 7 and Composer within the Windows Subsystem for Linux (WSL) using Ubuntu, I wrote a guide for that, too!
Ansible 2.4 is notable for a number of improvements and changes, but one that flew under my radar was the addition of a set of new openssl_* crypto-related modules.
The following modules were added in Ansible 2.4.0:
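As a quick, hedged illustration of what these openssl_* modules let you do inside a playbook (the file paths and common name below are placeholders, not taken from the post itself):

- name: Generate a private key (placeholder path).
  openssl_privatekey:
    path: /etc/ssl/private/example.com.pem
    size: 4096

- name: Generate a certificate signing request for that key.
  openssl_csr:
    path: /etc/ssl/private/example.com.csr
    privatekey_path: /etc/ssl/private/example.com.pem
    common_name: example.com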
For many of my AWS-specific Ansible playbooks, I need to have some operations (e.g. AWS inspector agent, or special information lookups) run when the playbook is run inside AWS, but not run if it's being run on a local test VM or in my CI environment.
In the past, I would set up a global playbook variable like aws_environment: False, and set it manually to True when running the playbook against live AWS EC2 instances. But managing vars like aws_environment can get tiresome, because if you forget to set it to the correct value, a playbook run can fail.
So instead, I'm now using the existence of AWS' internal instance metadata URL as a check for whether the playbook is being run inside AWS:
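A minimal sketch of that check (the task layout and variable names here are illustrative, not necessarily the exact tasks from the playbook): hit the metadata endpoint with a short timeout, ignore failures, and set a fact based on the result.

- name: Check whether the AWS instance metadata URL is reachable.
  uri:
    url: http://169.254.169.254/latest/meta-data
    timeout: 2
  register: aws_uri_check
  failed_when: false

- name: Set a fact indicating whether we're running inside AWS.
  set_fact:
    is_aws_environment: "{{ aws_uri_check.status == 200 }}"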
I'm documenting this here, just because it's something I imagine I might have to do again someday... and when I do, I want to save myself hours of pain and misdirection.
A client had an old SOAP web service that used IP address whitelisting to authenticate/allow requests. The new PHP infrastructure was built using Docker containers and auto-scaling AWS instances. Because of this, we had a problem: a request could come from one of millions of different IP addresses, since the auto-scaling instances use a pool of millions of AWS IP addresses in a wide array of IP ranges.
Because the client couldn't change their API provider (at least not in any reasonable time frame), because we didn't want to give up the ability to auto-scale, and because we didn't want to build some sort of 'Elastic IP reservation system' drawing from a pool of known/reserved IP addresses, we had to find a way to make all of our backend SOAP API requests come from a single IP address.
The solution? Reverse-proxy all requests to the backend SOAP API.
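As a hedged sketch of the idea (Nginx is used here as one possible proxy, and the hostnames are placeholders): every app server sends its SOAP requests through a single proxy host, so only that host's IP needs to be whitelisted.

server {
    listen 80;
    server_name soap-proxy.example.internal;

    location / {
        # All auto-scaled app servers send SOAP requests here, so the API
        # provider only ever sees this proxy host's single, static IP.
        proxy_pass https://legacy-soap-api.example.com;
        proxy_set_header Host legacy-soap-api.example.com;
    }
}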
Generally speaking, I'm against performing major OS upgrades on my Linux servers; there are often little things that get broken, or configurations gone awry, when you attempt an upgrade... and part of the point of automation (or striving towards a 12-factor app) is that you don't 'upgrade'—you destroy and rebuild with a newer version.
But there are still cases where you have legacy servers running one little task you haven't yet automated entirely, or holding data that isn't yet stored in a way that lets you tear down the server and build a new replacement. In these cases, assuming you've already done a canary upgrade on a similar but disposable server (to make sure there are no major gotchas), it may be the lesser of two evils to use something like Ubuntu's do-release-upgrade.
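For reference, the typical invocation is something like the following (run on the server itself, ideally after taking a snapshot or backup first):

# Bring the current release fully up to date first.
sudo apt-get update && sudo apt-get dist-upgrade -y

# Then kick off the (interactive) release upgrade.
sudo do-release-upgrade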
I recently had to build an Ansible playbook that takes in a massive inventory structure (read from a YAML file), modifies a specific key in that structure, then dumps the result back to disk. There are other ways to do this that might be more efficient on their own (e.g. a separate Python/PHP/Ruby/etc. script with a good YAML library), but since I had to do a number of other things in this Ansible playbook anyway, I thought it would keep things simpler if I could also modify the key inside the playbook.
I was scratching my head for a while: I knew I could use the dict | combine() filter to merge two dicts together (a feature introduced in Ansible 2.0), but I hadn't done so with a deeply-nested dict.
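A minimal sketch of the pattern (the variable and key names are invented for the example): merge a sparse dict that contains only the key you want to change over the original, passing recursive=True so nested keys are merged instead of replaced wholesale.

- hosts: localhost
  connection: local
  gather_facts: no
  vars:
    inventory_data:
      all:
        children:
          web:
            vars:
              app_version: "1.0.0"
  tasks:
    - name: Override one deeply-nested key, leaving the rest of the structure intact.
      set_fact:
        inventory_data: "{{ inventory_data | combine({'all': {'children': {'web': {'vars': {'app_version': '2.0.0'}}}}}, recursive=True) }}"

    - debug:
        var: inventory_data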
From time to time, I need to dynamically build a list of strings (or a list of other things) using Ansible's set_fact module.
Since set_fact is a module like any other, you can use a with_items loop to loop over an existing list, and pull out a value from that list to add to another list.
For example, today I needed to retrieve a list of all the AWS EC2 security groups in a region, then loop through them, building a list of all the security group names. Here's the playbook I used:
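A representative reconstruction of that playbook (the region and variable names are placeholders, and it assumes the ec2_group_facts module available in that era of Ansible; the original may differ in details):

- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Get information about all security groups in the region.
      ec2_group_facts:
        region: us-east-1
      register: sg_facts

    - name: Build a list of all the security group names.
      set_fact:
        sg_names: "{{ (sg_names | default([])) + [item.group_name] }}"
      with_items: "{{ sg_facts.security_groups }}"

    - debug:
        var: sg_names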
Dockerfiles have been able to use ARGs to pass parameters into a docker build via the CLI argument --build-arg for some time. But until recently (Docker's 17.05 release, to be precise), you couldn't use an ARG to specify all or part of your Dockerfile's mandatory FROM instruction.
But since the pull request Allow ARG in FROM was merged, you can now specify the image/repository to use at build time. This is great for flexibility; as a concrete example, I used this feature to pull from a private Docker registry when building a Dockerfile in production, or to build from a local Docker image created as part of a CI/testing process inside Travis CI.
To use an ARG in your Dockerfile's FROM:
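A minimal sketch (the default image and the registry path in the comment are placeholders):

# Declare the build arg before FROM, with a sensible default.
ARG BASE_IMAGE=php:7.1-apache

# Override it at build time, e.g.:
#   docker build --build-arg BASE_IMAGE=registry.example.com/my-app-base:latest .
FROM ${BASE_IMAGE}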
For my Raspberry Pi Time-Lapse App, I find myself having to copy either hundreds (or thousands!) of 3+ MB image files or a 1-2 GB video file from a Raspberry Pi Zero W to my Mac.
Copying over the WiFi network works, but it's extremely slow (usually topping out around 5 Mbps... which means it could take a couple hours to copy). So I decided to finally try mounting the Raspberry Pi's drive directly on my MacBook Pro (running macOS Sierra 10.12). This is normally a bit tricky, because the Raspberry Pi uses the Linux ext4 filesystem, which is not compatible with either macOS or Windows!
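One common approach, sketched here with Homebrew-installed FUSE tooling (the disk identifier is just an example, so check diskutil list for yours, and note that ext4fuse mounts are read-only):

# Install FUSE for macOS and an ext4 FUSE driver via Homebrew.
brew cask install osxfuse
brew install ext4fuse

# Find the Pi's ext4 partition (e.g. /dev/disk2s2), then mount it.
diskutil list
sudo mkdir -p /Volumes/rpi
sudo ext4fuse /dev/disk2s2 /Volumes/rpi -o allow_other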
Recently, as I've been building more and more servers running Ubuntu 16.04, I've hit the following errors:
PLAY [host] ************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************
fatal: [1.2.3.4]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to remote host \"1.2.3.4\". Make sure this host can be reached over ssh", "unreachable": true}
or:
/bin/sh: 1: /usr/bin/python: not found
The former error seems to happen when you're running a playbook on an Ubuntu 16.04 host (with gather_facts: yes), while the latter happens if you're using a minimal distribution that doesn't include Python at all. The problem, in both cases, is that Python 2.x is not installed on the server, and there are two different fixes:
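One commonly-used fix, sketched here (this may not match the post's exact tasks), is to bootstrap Python with the raw module before any fact gathering:

- hosts: all
  gather_facts: no
  pre_tasks:
    - name: Install Python 2 if it's not already present.
      raw: test -e /usr/bin/python || (apt-get -y update && apt-get install -y python-minimal)
      changed_when: false

    - name: Gather facts now that Python is available.
      setup: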