Posted by Steve on Mon 1 Aug 2011 at 18:48
fabric is described as a simple library and command-line tool for performing application deployment and system administration tasks. Here we'll take a look at using it to deploy simple applications remotely.
Recently I wanted to streamline the deployment of several services, applications, and utilities I maintain and develop. There are several deployment systems out there which promised to be useful, but ultimately I decided to standardise upon fabric which is packaged for Debian GNU/Linux.
Some people have suggested that I look at CFEngine, Puppet, Chef, or similar. On the face of it I understand that advice completely as I do use CFEngine already to maintain 150+ systems.
However I largely regard such complex tools as being responsible for "systems", not "projects". To me a project can be deployed upon many hosts, and on that basis I don't want its installation and configuration to be tied into any system/company-wide infrastructure.
(Obviously there are exceptions and the use of CFEngine, Puppet, or Chef can and does often involve the manipulation of software and the installation of packages.)
Although there are many deployment tools out there, several of the commonly used systems suffer from a lack of documentation. (Capistrano, for example, is a system I've struggled with in the past; historically its documentation was both scant and unreliable.)
Happily fabric is wonderfully documented, at least in part because it is available as both a library and a simple driver tool. The expectation of the authors is presumably that you will interface with the library, and the driver is merely a slightly useful example. That said I've not needed to integrate with it so far, because all the obvious things I wish to do are easily supported right out of the box.
When it comes to managing remote machines there are several things that you want to do, and these basic primitives are supported nicely. Although there are many other things you can do we're going to concentrate primarily upon these three operations:

- Connecting: connections to remote hosts are carried out via SSH, and you can specify usernames, hostnames, and appropriate SSH keys.
- Uploading files: using the put() primitive files may be uploaded from your local system to the remote host(s).
- Running commands: this is one of the most important things that fabric gives you. It allows you to run commands locally, remotely, and using sudo to gain additional privileges.
To get started you'll need to install the package:
apt-get update
apt-get install fabric
Then we can create our first file. Since fabric is designed around the idea of connecting to a remote host we will need one. Let us start by assuming you can ssh to the host test.example.org as user bob.
The fabric driver code will read and execute the contents of a file named fabfile.py in the current directory. This file is actually Python, which gives you a lot of room for "cleverness", but let's start by saving the following contents there:
# ~/fabfile.py
from fabric.api import *

env.hosts = ['test.example.org']
env.user = 'bob'

def remote_info():
    run('uname -s')

def local_info():
    local('uname -s')
Given this file we can now run:

fab remote_info
If you preferred to run this command upon another host you could manually specify that upon the command line:
skx@birthday:~$ fab --hosts=localhost,birthday.my.flat --user=skx remote_info
[birthday.my.flat] run: uname -s
[birthday.my.flat] out: Linux
[localhost] run: uname -s
[localhost] out: Linux

Done.
Disconnecting from birthday.my.flat... done.
Disconnecting from localhost... done.
Note two things about the second sample above: we explicitly changed the username, and we specified two hosts to connect to, separated by commas.
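The comma-separated value given to --hosts ends up as the same kind of list we assigned to env.hosts in the fabfile. As a purely illustrative sketch of that mapping (parse_hosts is a hypothetical helper, not part of fabric's API):

```python
# Illustrative only: roughly how a --hosts=a,b,c flag becomes a host
# list like env.hosts.  parse_hosts is a hypothetical helper, not part
# of fabric itself.
def parse_hosts(value):
    """Split a comma-separated --hosts value into a list of host names."""
    return [host.strip() for host in value.split(',') if host.strip()]

print(parse_hosts('localhost,birthday.my.flat'))
# → ['localhost', 'birthday.my.flat']
```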
Also note that we ran the code in the function "remote_info"; we could similarly have executed the code in the local_info function by running:
skx@birthday:~$ fab local_info
Now that we've demonstrated a simple example of connecting and running a command we can do something more useful. Let us pretend we have a remote cluster of machines, each of which is running Varnish. Further suppose that we have a configuration file that we wish to deploy upon these hosts. We can do this via the following fabfile.py:
# ~/fabfile.py
#
# deploy a varnish configuration file upon the cluster.
#
from fabric.api import *

env.user = "steve"
env.hosts = [ 'cluster-1.example.org',
              'cluster-2.example.org',
              'cluster-3.example.org' ]

def update():
    """
    Upload varnish configuration file and restart it.
    """
    put( "/tmp/varnish.vcl", "~steve/varnish.vcl" )
    sudo( "mv ~steve/varnish.vcl /etc/varnish/", pty=True )
    sudo( "/etc/init.d/varnish restart", pty=True )
With this in place we can then run "fab update", which will upload the file from the local filesystem, using SSH as the transport, and then restart varnish with that updated configuration file. We've had to break the upload into two steps as it isn't possible to use multiple usernames within a single operation.
The sudo() primitive is exactly like the run() primitive we previously demonstrated, with the exception that it runs a command as root. Obviously we need to be root to restart the varnish server. Perhaps less obviously, we uploaded the file initially into a location where we had permission to write, then used sudo to move it into its final location - that is because we can only upload files as "ourself".
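The pattern generalises: stage the file somewhere you can write as an ordinary user, then use elevated privileges only for the final move. Here is a local sketch of the same two-step dance, using temporary directories in place of ~steve/ and /etc/varnish/ so it runs without root:

```python
# Local sketch of the upload-then-move pattern.  The temporary
# directories stand in for ~steve/ (where put() may write) and
# /etc/varnish/ (where only root may write); with fabric the second
# step would be the sudo("mv ...") call.
import os
import shutil
import tempfile

staging = tempfile.mkdtemp()   # stands in for ~steve/
target = tempfile.mkdtemp()    # stands in for /etc/varnish/

# Step one: write the file as "ourself".
with open(os.path.join(staging, 'varnish.vcl'), 'w') as fh:
    fh.write('backend default { .host = "127.0.0.1"; }\n')

# Step two: move it into the final location (remotely, sudo territory).
shutil.move(os.path.join(staging, 'varnish.vcl'),
            os.path.join(target, 'varnish.vcl'))

print(os.path.exists(os.path.join(target, 'varnish.vcl')))  # → True
```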
A More Complex Example
As fabric is a mere wrapper around a Python library we can avoid using the driver entirely, and instead use the library directly. That is what the following script does:

#!/usr/bin/python
#
# save this as ~/project/deploy
#
from __future__ import with_statement
from fabric.api import *
from fabric.contrib.console import confirm
import sys
import subprocess

env.user = 'steve'
env.hosts = [ 'example.org' ]

def test():
    """
    Run our test suite.  If it fails prompt the user for action.
    """
    with settings(warn_only=True):
        result = local('make test', capture=True)
    if result.failed and not confirm("Tests failed. Continue anyway?"):
        abort("Aborting at user request.")

def deploy():
    """
    Archive our current code and upload to the remote host.

    NOTE: Insecure since /tmp/project.tar.gz might be a symlink ..!
    """
    local( "tar czf /tmp/project.tar.gz ." )
    put( "/tmp/project.tar.gz", "/tmp/project.tar.gz" )
    local( "rm /tmp/project.tar.gz" )

def install():
    """
    Install remotely if the test suite passes.
    """
    test()
    deploy()
    with cd('/home/steve/project/'):
        run("tar xzf /tmp/project.tar.gz")
        run('make install')

#
# This is our entry point.
#
if __name__ == '__main__':
    if len(sys.argv) > 1:
        #
        # If we got an argument then invoke fabric with it.
        #
        subprocess.call(['fab', '-f', __file__] + sys.argv[1:])
    else:
        #
        # Otherwise list our targets.
        #
        subprocess.call(['fab', '-f', __file__, '--list'])
The example above has three targets which you can see if you run it with no arguments:

skx@birthday:~$ ./deploy
Available commands:

    deploy   Archive our current code and upload to the remote host.
    install  Install remotely if the test suite passes.
    test     Run our test suite.  If it fails prompt the user for action.
This output could also be found by running "fab --list", which lists the top-level functions along with any associated documentation strings that have been supplied. (That's what the """ Text """ is for.)
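Under the hood that listing is simple to reproduce: collect the public top-level functions from the fabfile's namespace and pair each name with the first line of its docstring. A hypothetical re-implementation of the idea (not fabric's actual code):

```python
# Hypothetical sketch of how a "--list" style listing can be built:
# walk a module namespace, keep the public functions, and take the
# first line of each docstring as the summary.  Not fabric's code.
import inspect

def list_commands(namespace):
    """Return sorted (name, summary) pairs for public functions."""
    commands = []
    for name, obj in sorted(namespace.items()):
        if name.startswith('_') or not inspect.isfunction(obj):
            continue
        doc = inspect.getdoc(obj) or ''
        summary = doc.splitlines()[0] if doc else ''
        commands.append((name, summary))
    return commands

# A stand-in "fabfile" namespace:
def deploy():
    """Archive our current code and upload to the remote host."""

def install():
    """Install remotely if the test suite passes."""

print(list_commands({'deploy': deploy, 'install': install}))
```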
The command line handling is pretty basic, but adequate for our demonstration purposes. The more interesting part of this code is that it runs a local test suite prior to deployment, and if that fails it will prompt you to decide whether to abort or complete.
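The settings(warn_only=True) block behaves like any Python context manager: it temporarily overrides values on the shared env object and restores them on exit. A simplified, illustrative re-implementation of that idea (fabric's real settings() does rather more):

```python
# Simplified sketch of fabric's settings() context manager: temporarily
# override attributes on a shared env object, restoring them on exit.
# Illustrative re-implementation only, not fabric's actual code.
from contextlib import contextmanager

class Env(object):
    warn_only = False

env = Env()

@contextmanager
def settings(**overrides):
    saved = dict((key, getattr(env, key)) for key in overrides)
    for key, value in overrides.items():
        setattr(env, key, value)
    try:
        yield
    finally:
        for key, value in saved.items():
            setattr(env, key, value)

with settings(warn_only=True):
    print(env.warn_only)   # → True

print(env.warn_only)       # → False
```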
In the case of the code above we're obviously just falling back to a Makefile to do the real work, as we can see from the two targets I've invoked:
- "make test"
- To run a local test suite.
- "make install"
- To install system-wide.
However with the freedom to run commands locally, remotely, and remotely as root, along with the ability to upload files securely, it should be possible for you to script almost any deployment scenario.