Ansible is a system for automating the updating of server configurations and other administration tasks. In this post I’ll explain what’s needed to get started with Ansible: creating a configuration structure, telling Ansible about your hosts, and running ad hoc commands on multiple hosts.
Ansible is useful when you have 3 or more VPSs and need to keep changes synchronised or apply updates in a consistent manner. It takes a little more work to do something through a configuration management system, but the reward is that once that work is done you can apply your configuration change to 3 (or 3000) servers with little extra effort.
The Ansible software is installed on your workstation (most often) or perhaps on another machine closer on the network to the machines you want to manage. This machine is called the control node, and it needs to be running a recent Linux distribution. If you want to manage many VPSs in Dallas, for example, you could create a new VPS in Dallas as the control node; that would run faster than managing them from your workstation.
Ansible uses ssh, the same access method Linux administrators would use, to manage the servers (target servers). Python must be installed on each target server, but since Ansible is aimed at enterprise servers it requires only Python 2.4 or later. That is, you don’t need to upgrade or install any specialised software on your target servers before you can start managing them.
Ansible is an open source product with good documentation available at http://docs.ansible.com/ansible/index.html. In this blog post I’m going to demonstrate how to use Ansible to manage 3 different servers I have access to. Since Ansible uses ssh to access servers, you can only manage servers you already have ssh access to, and you can only manage root-owned files if you already have permission to do so; Ansible doesn’t add or use any separate permission layer, though it can take advantage of sudo if you normally use that. I’ll show two servers where I ssh as root, and one where I ssh as a regular user and use sudo. I have ssh access to the hosts I’m demonstrating on; you should try this on hosts you have access to.
Install Ansible
Ansible is usually installed on your laptop or workstation. It doesn’t require root privileges, but it must run on a Linux or Mac system; if you use Windows on your workstation, install it on a VPS instead. Installation instructions are available at http://docs.ansible.com/ansible/intro_installation.html. They describe how to get the latest version (version 2.1 as I write) from git or package repositories. However, I normally just install the version packaged in Debian stable (1.7), or the Ubuntu equivalent:
apt-get install ansible
Or if you’re running CentOS with EPEL enabled:
yum install ansible
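If your distribution’s packaged version is older than you’d like but you don’t want to build from git, Ansible is also available from PyPI (this assumes pip is already installed on your control node):

pip install ansible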
Define your hosts
Ansible is “configured” by creating a directory containing files that describe your hosts and how you want them configured. This directory is normally kept under your home directory, and can be shared with others using git or svn. Using a terminal window (or a PuTTY connection to the VPS where Ansible is installed), I make a new directory for it:
mkdir ~/myansible
cd ~/myansible
Every other file I create is relative to ~/myansible, so I won’t keep repeating that here. Except….
The hosts file is the only file that by default is not inside my ~/myansible directory; it lives in /etc/ansible instead. Ansible needs to know which hosts you intend to manage, and this information goes in the hosts file. List each server there under one or more groups, in an “ini file” style of list, with each section a group. The groups normally describe roles a server takes, such as webserver or mailserver. I’ll also create a group for each location. Here’s mine:
# /etc/ansible/hosts: list of hosts and groups ansible knows about
[webserver]
cheapred

[mailserver]
4800680121

[nameserver]
cheapred
t1

[dal]
4800680121

[akl]
cheapred

[dud]
t1
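(If you’d prefer to keep the hosts file inside ~/myansible along with everything else, you can point Ansible at it with a small ansible.cfg in that directory. A minimal sketch; note that Ansible 1.x calls the setting hostfile, while 2.0 and later name it inventory:

# ~/myansible/ansible.cfg: look for the hosts file in this directory
[defaults]
hostfile = ./hosts

Alternatively, pass -i ./hosts on each ansible command line.)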
Note you can call the managed servers anything you like. Most often it will be the short name of the server, but you could use an id number as I’ve done with 4800680121; use whatever makes the most sense to you. Ansible needs to know a bit more information about each server, so I create a YAML file for each server in a host_vars directory:
mkdir host_vars
I create the following 3 files:
#host_vars/cheapred: data specific to cheapred
---
ansible_ssh_host: cheapred.linuxworks.nz
ansible_ssh_user: root

#host_vars/t1: data specific to t1
---
ansible_ssh_host: t1.king.net.nz

#host_vars/4800680121: data specific to 4800680121
---
ansible_ssh_host: 74.50.53.118
ansible_ssh_port: 2222
ansible_ssh_user: root
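Variables shared by a whole group can instead go in a group_vars directory, which saves repeating them per host. A minimal sketch, assuming every host in my dal group is accessed as root:

mkdir group_vars

#group_vars/dal: data shared by all hosts in the [dal] group
---
ansible_ssh_user: root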
At this point it should be possible to test that ansible can connect to those machines:
ansible -m ping all
The “-m” here means “use the Ansible module called…”. Modules are explained more below. That command should result in a “pong” message for each server. It assumes you have password-less ssh access set up for your servers using ssh keys; if not, you could instead use:
ansible --ask-pass -m ping all
but really you should set up ssh keys.
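If you haven’t done that yet, the usual sequence is to generate a key pair on the control node and copy the public key to each target (adjust the user and hostname to match your host_vars entries):

ssh-keygen -t rsa
ssh-copy-id root@cheapred.linuxworks.nz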
Get the facts
Ansible can give me a list of information, called “facts”, about the servers it can access. I’m going to check what facts are available for 4800680121, using the “setup” module:
ansible -m setup 4800680121
Which results in:
4800680121 | success >> {
    "ansible_facts": {
        "ansible_all_ipv4_addresses": [
            "74.50.53.118"
        ],
        "ansible_all_ipv6_addresses": [
            "fe80::a800:ebff:fe12:fd30"
        ],
        "ansible_architecture": "x86_64",
        "ansible_bios_date": "",
        "ansible_bios_version": "",
        ...
        "ansible_user_id": "root",
        "ansible_userspace_architecture": "x86_64",
        "ansible_userspace_bits": "64",
        "ansible_virtualization_role": "guest",
        "ansible_virtualization_type": "xen",
        "module_setup": true
    },
    "changed": false
}
I’ve left a lot out of the display, but this information tells me a great deal about the system, and it can in turn be used in playbooks, which I’ll explain in the next post.
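If you only want a few facts rather than the full list, the setup module also takes a filter argument with a shell-style wildcard. For example, to show just the distribution-related facts for every host:

ansible -m setup -a "filter=ansible_distribution*" all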
Running ad hoc commands
At this point, if ansible is successfully pinging, it can be used to run one-off commands on the hosts in my hosts file. I’d like to find out the uptime of those machines:
ansible -a uptime all

t1 | success | rc=0 >>
 14:43:17 up 208 days, 20:56,  1 user,  load average: 0.00, 0.01, 0.05

cheapred | success | rc=0 >>
 02:43:24 up 12 days,  7:18,  3 users,  load average: 0.07, 0.03, 0.01

4800680121 | success | rc=0 >>
 02:43:30 up 1 day,  6:00,  2 users,  load average: 0.00, 0.00, 0.00
So this tells me t1 has been up for 208 days, cheapred for 12 days, and 4800680121 for about 30 hours.
Ansible Modules
At the core of Ansible are its modules. Each task is done by a module. When you run “ansible” on the command line of the control node and don’t explicitly select a different module with “-m”, the “command” module is used by default. The command module executes a single command on the target machine.
The “-a” option is used to pass arguments to the Ansible module. The arguments for each module are documented on the website. In the case of the command module, the argument is a free-form string describing a program (and its arguments, if any) to be executed on each node.
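Note that the command module runs the program directly rather than through a shell, so shell features like pipes and redirection won’t work with it. For those there is a separate “shell” module; for example, to count the accounts defined on every host:

ansible -m shell -a "cat /etc/passwd | wc -l" all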
“all” means run this on all machines. So the result of my ansible command was to ssh to each server, run “uptime” on it, and return the output to the screen.
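The host pattern doesn’t have to be “all”; any host or group name from the hosts file works, and groups can be combined with a colon. For example, to check uptime on just my nameservers, or on the Dallas and Auckland hosts together:

ansible -a uptime nameserver
ansible -a uptime 'dal:akl'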
Next I want to update my ssh installation. To do this I run:
ansible --sudo --ask-sudo-pass -m apt -a "name=openssh-server state=latest update_cache=yes" all
Which results in:
sudo password:
4800680121 | FAILED >> {
    "cmd": "apt-get update && apt-get install python-apt -y -q",
    "failed": true,
    "msg": "/bin/sh: apt-get: command not found",
    "rc": 127,
    "stderr": "/bin/sh: apt-get: command not found\n",
    "stdout": ""
}

cheapred | success >> {
    "changed": false
}

t1 | success >> {
    "changed": false
}
Since I don’t connect as root on t1, but need root access to update a package, I’ve used --sudo to run the command via sudo, and --ask-sudo-pass so Ansible will prompt me for the password to pass on to sudo.
-m apt means use Ansible’s apt module to manage packages, and -a "name=openssh-server state=latest update_cache=yes" means update the package list and then install the latest openssh-server package.
For cheapred and t1, “success” means the command executed successfully, “changed: false” means there was nothing to do because openssh-server was already at the latest version.
For 4800680121, oops, that’s running CentOS which does not use apt, hence the error. So instead I use the “yum” module and run:
ansible -m yum -a "name=openssh-server state=latest" 4800680121

4800680121 | success >> {
    "changed": false,
    "msg": "",
    "rc": 0,
    "results": [
        "All packages providing openssh-server are up to date"
    ]
}
Often you will only be looking after servers of the same distribution, so a single command would do. For more complex cases like this one, it’s better to put the commands in a playbook file, which can query each server and run the appropriate commands depending on its distribution. (The distribution information is held in the facts returned above; in this case “ansible_os_family”: “RedHat”, “ansible_distribution”: “CentOS”, “ansible_distribution_major_version”: “7” and “ansible_distribution_version”: “7.2.1511” are the interesting ones. Those could be used in a playbook to determine which servers to use the apt module for, and which to use yum for.)
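A simple stop-gap for a mixed fleet like mine is to add distribution-based groups to the hosts file (the group names below are my own invention) and point the matching package module at each group:

# additions to /etc/ansible/hosts
[debianlike]
cheapred
t1

[centoslike]
4800680121

ansible --sudo --ask-sudo-pass -m apt -a "name=openssh-server state=latest update_cache=yes" debianlike
ansible -m yum -a "name=openssh-server state=latest" centoslike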
Playbooks are the topic of my next post. In the meantime, if you have questions about how to most effectively manage your own servers, pop in a ticket at https://rimuhosting.com/ticket/startticket.jsp and our experienced sysadmins can assist you.