New Project Setup

AWS Configuration

Some configuration within the AWS console is necessary to begin using FabulAWS:

IAM User

First, you’ll need to create credentials via IAM that have permissions to create servers in EC2 and manage autoscaling groups and load balancers. Amazon will provide you with a credentials file which will contain AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, which you will need later in this document.

Security Groups

You’ll also need the following security groups. These can be renamed for your project and updated in fabulaws-config.yml.

  • myproject-sg
    • TCP port 22 from
  • myproject-cache-sg
    • TCP port 11211 from myproject-web-sg
    • TCP port 11211 from myproject-worker-sg
  • myproject-db-sg
    • TCP port 5432 from myproject-web-sg
    • TCP port 5432 from myproject-worker-sg
    • TCP port 5432 from myproject-db-sg
  • myproject-queue-sg
    • TCP port 5672 from myproject-web-sg
    • TCP port 5672 from myproject-worker-sg
  • myproject-session-sg
    • TCP port 6379 from myproject-web-sg
    • TCP port 6379 from myproject-worker-sg
  • myproject-web-sg
    • For EC2-classic:
      • TCP port 80 from amazon-elb/amazon-elb-sg
      • TCP port 443 from amazon-elb/amazon-elb-sg
    • For VPC-based AWS accounts:
      • TCP port 80 from myproject-web-sg
      • TCP port 443 from myproject-web-sg
  • myproject-worker-sg
    • (used only as a source - requires no additional firewall rules)
  • myproject-incoming-web-sg
    • TCP port 80 from any address
    • TCP port 443 from any address
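To make the rule matrix above easier to review, it can be expressed as data. A minimal sketch (group names and ports are taken from the list above; the RULES dict and sources_for helper are illustrative, not part of FabulAWS):

```python
# Hypothetical summary of the ingress rules listed above: for each security
# group, the (port, source group) pairs it should allow.
RULES = {
    'myproject-cache-sg': [(11211, 'myproject-web-sg'), (11211, 'myproject-worker-sg')],
    'myproject-db-sg': [(5432, 'myproject-web-sg'), (5432, 'myproject-worker-sg'),
                        (5432, 'myproject-db-sg')],
    'myproject-queue-sg': [(5672, 'myproject-web-sg'), (5672, 'myproject-worker-sg')],
    'myproject-session-sg': [(6379, 'myproject-web-sg'), (6379, 'myproject-worker-sg')],
}

def sources_for(group, port):
    # Security groups allowed to reach `group` on `port`.
    return sorted(src for p, src in RULES.get(group, []) if p == port)

print(sources_for('myproject-db-sg', 5432))
```

Note, for example, that only the web, worker, and db groups may reach PostgreSQL on port 5432.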

Load Balancer

You will need to create a load balancer for your instances: at least one per environment. Multiple load balancers can be used if the site serves different domains (though a single load balancer with a wildcard SSL certificate may suffice). Use the following parameters as a guide:

  • Choose a name and set it in fabulaws-config.yml
  • Ports 80 and 443 should be mapped to 80 and 443 on the instances
  • If on EC2-Classic (older AWS accounts), you can use ‘EC2-Classic’ load balancers. Note that this will cause a warning to be shown when you try to ‘Assign Security Groups’. That warning can be skipped.
  • If on newer, VPC-based AWS accounts:
    • Add security group myproject-incoming-web-sg to the load balancer so the load balancer can receive incoming requests.
    • Add security group myproject-web-sg to the load balancer so the backend instances will accept forwarded requests from the load balancer.
  • Set up an HTTPS health check on port 443 that monitors /healthcheck.html at your desired frequency (you’ll set up the health check URL in your app below)
  • Backend authentication and stickiness should be disabled
  • The zones chosen should match those in fabulaws-config.yml (typically 2)
  • Configure a custom SSL certificate, if desired.

After the load balancer is created, you can set the domain name for the associated environment in fabulaws-config.yml to your custom domain or to the load balancer’s default domain.
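Classic ELB health checks are specified as a single target string of the form PROTOCOL:PORT/PATH, so the check described above is HTTPS:443/healthcheck.html. A small sketch of how that string is composed (the helper function is ours, not an AWS API):

```python
def elb_health_target(protocol, port, path):
    # Classic ELB health check targets look like "HTTPS:443/healthcheck.html".
    return '{0}:{1}{2}'.format(protocol, port, path)

print(elb_health_target('HTTPS', 443, '/healthcheck.html'))
```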

Auto Scaling Group

You will also need to create one auto scaling group per environment, with the following parameters:

  • Choose a name and set it in fabulaws-config.yml

  • Choose (or create) a dummy launch configuration, and start with “min” and “desired” instances of 0 and a “max” of at least 4 (a higher max is fine).

  • Select Advanced, choose your load balancer, and select the ELB health check

  • Choose the same availability zones as for your load balancer

  • You don’t need to configure scaling policies yet, but these will need to be set eventually based on experience

  • You must configure the auto scaling group to tag instances like so:
    • Name: myproject_<environment>_web
    • deployment: myproject
    • environment: <environment>
    • role: web
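The tag scheme above can be sketched in code; assuming the myproject deployment and a testing environment, these are the tags FabulAWS expects the auto scaling group to apply (the asg_tags helper is illustrative):

```python
def asg_tags(deployment, environment, role='web'):
    # Tags the auto scaling group should apply to instances, per the list above.
    return {
        'Name': '{0}_{1}_{2}'.format(deployment, environment, role),
        'deployment': deployment,
        'environment': environment,
        'role': role,
    }

print(asg_tags('myproject', 'testing'))
```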

Local Machine

You’ll need to make several changes to your local machine to use FabulAWS:

System Requirements

  • Ubuntu Linux 14.04 or later
  • Python 2.7
  • PostgreSQL 9.3
  • virtualenv and virtualenvwrapper are highly recommended

AWS API Credentials

First, you need to define the AWS credentials you created above in your shell environment:

export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...

It’s helpful to save these to a file that you can source (with the shell’s “.” command) each time they’re needed.


Local passwords

A number of passwords are required during deployment. To reduce the number of prompts that need to be answered manually, you can use a file called fabsecrets_<environment>.py in the top level of your repository.

If you already have a server environment set up, run the following command to get a local copy of fabsecrets_<environment>.py:

fab <environment> update_local_fabsecrets

Note: If applicable, this will not obtain a copy of the luks_passphrase secret, which for security’s sake is not stored directly on the servers. If you will be creating new servers, this passphrase must be obtained securely from another developer.

If this is a brand-new project, you can use the following template for fabsecrets_<environment>.py:

database_password = ''
broker_password = ''
smtp_password = ''
newrelic_license_key = ''
newrelic_api_key = ''
s3_secret = ''
secret_key = ''

All of these are required to be filled in before any servers can be created.
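Since every secret must be non-empty before servers can be created, a quick local sanity check can help. An illustrative helper (not part of FabulAWS) mirroring the template above:

```python
# The secret names from the fabsecrets_<environment>.py template above.
REQUIRED_SECRETS = [
    'database_password', 'broker_password', 'smtp_password',
    'newrelic_license_key', 'newrelic_api_key', 's3_secret', 'secret_key',
]

def missing_secrets(secrets):
    # Return the names of required secrets that are empty or absent.
    return [name for name in REQUIRED_SECRETS if not secrets.get(name)]

print(missing_secrets({'database_password': 'hunter2'}))
```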

Remote passwords

To update passwords on the server, first retrieve a copy of fabsecrets_<environment>.py using the above command (or from another developer) and then run the following command:

fab <environment> update_server_passwords


It’s only necessary to have a copy of fabsecrets_<environment>.py locally if you will be deploying new servers or updating the existing passwords on the servers.


Note that this command is only useful for the web and worker servers; on all other servers, nothing will update the configuration files to use the new secrets.

Project Configuration

You’ll need to add several files to your repository, typically at the top level. You can use the following as templates, beginning with the project’s fabfile.py:

import logging

root_logger = logging.getLogger()
fabulaws_logger = logging.getLogger('fabulaws')
logger = logging.getLogger(__name__)

# XXX import actual commands needed
from fabulaws.library.wsgiautoscale.api import *


Next is fabulaws-config.yml:

instance_settings:
  ami: ami-b2e3c6d8 # us-east-1 14.04.3 LTS 64-bit w/EBS-SSD root store
  key_prefix: 'myproject-'
  admin_groups: [admin, sudo]
  run_upgrade: true
  # Secure directories, volume, and filesystem info
  secure_root: #/secure # no trailing /
  secure_home: #/home/secure
  fs_type: ext4
  fs_encrypt: false
  # create swap of swap_multiplier * available RAM
  swap_multiplier: 1

deploy_user: myproject
webserver_user: myproject-web
database_host: localhost
database_user: dbuser
home: /home/myproject/
python: /usr/bin/python2.7

disable_known_hosts: true
ssh_keys: deployment/users/
password_names: [database_password, broker_password, smtp_password,
                 newrelic_license_key, newrelic_api_key, s3_secret,
                 secret_key]
project: myproject
wsgi_app: myproject.wsgi:application
requirements_file: requirements/app.txt
settings_managepy: myproject.local_settings
static_html:
  upgrade_message: deployment/templates/html/503.html
  healthcheck_override: deployment/templates/html/healthcheck.html
localsettings_template: deployment/templates/
logstash_config: deployment/templates/logstash.conf

# Set gelf_log_host to the host of your Graylog2 server (or other GELF log
# receiver)
# gelf_log_host: hostname

# Set syslog_server to a "hostname:port" (quote marks required due
# to the ":" in there) and server logs will be forwarded there using
# syslog protocol.  "hostname:port" could be e.g. papertrail or a
# similar service.
# (You might want to set this in fabsecrets instead of here.)
# syslog_server: "hostname:port"

# Set awslogs_access_key_id to the AWS_ACCESS_KEY_ID of the user with
# permissions to create log groups, log streams, and log events.
# NOTE: You will also need to set awslogs_secret_access_key in your
# fabsecrets_<environment>.py file
# awslogs_access_key_id: AK....

# Set extra_log_files to a list of log files you want to monitor, in addition
# to the default logs monitored by FabulAWS itself:
# extra_log_files:
#   /path/to/file:
#     tag: mytag
#     date_format: '%Y-%m-%d %H:%M:%S'

vcs_cmd: git # or hg
latest_changeset_cmd: git rev-parse HEAD # or: hg id -i
# Mapping of Fabric deployments and environments to the branch names
# that should be deployed.
branches:
    production: master
    staging: master
    testing: master


# Local server port for pgbouncer
pgbouncer_port: 5432

# Version of Less to install
less_version: 2.5.3

# Local server ports used by Gunicorn (the Django app server)
server_ports:
  staging: 8000
  production: 8001
  testing: 8002

# Whether we're hosting static files on our webservers ('local')
# or somewhere else ('remote')
static_hosting: remote

# Mapping of celery worker names to options
# The worker name (key) can be any text of your choosing. The value should
# be any additional options you'd like to pass to celeryd, such as the
# concurrency and queue name(s)
celery_workers:
  main: -c 10 -Q celeryd

# Start this many Gunicorn workers for each CPU core
gunicorn_worker_multiplier: 8

# Mapping of environment names to domain names. Used to update the
# primary site in the database after a refresh and to set ALLOWED_HOSTS
# Note that the first domain in the list must not be a wildcard as it
# is used to update a Site object in the database.
# Wildcard format used per ALLOWED_HOSTS setting
site_domains_map:
  production:
  - myproject.com
  staging:
  - staging.myproject.com
  testing:
  - testing.myproject.com


default_deployment: myproject
deployments:
- myproject
environments:
- staging
- production
- testing
valid_roles:
- cache
- db-master
- db-slave
- web
- worker


region: us-east-1
avail_zones:
- e
- c

# Mapping of role to security group(s):
security_groups:
  db-master: [myproject-sg, myproject-db-sg]
  db-slave: [myproject-sg, myproject-db-sg]
  cache: [myproject-sg, myproject-session-sg, myproject-cache-sg, myproject-queue-sg]
  worker: [myproject-sg, myproject-worker-sg]
  web: [myproject-sg, myproject-web-sg]

# Mapping of environment and role to EC2 instance types (sizes)
instance_types:
  production:
    cache: c3.large
    db-master: m3.xlarge
    db-slave: m3.xlarge
    web: c3.large
    worker: m3.large
  staging:
    cache: t1.micro
    db-master: m1.small
    db-slave: m1.small
    web: m1.small
    worker: m3.large
  testing:
    cache: t1.micro
    db-master: t1.micro
    db-slave: t1.micro
    web: m1.small
    worker: m1.small

# Mapping of Fabric environment names to AWS load balancer names.  Load
# balancers can be configured in the AWS Management Console.
load_balancers:
  myproject:
    production:
    - myproject-production-lb
    staging:
    - myproject-staging-lb
    testing:
    - myproject-testing-lb

# Mapping of Fabric environment names to AWS auto scaling group names. Auto
# scaling groups can be configured in the AWS Management Console.
auto_scaling_groups:
  myproject:
    production: myproject-production-ag
    staging: myproject-staging-ag
    testing: myproject-testing-ag

# Mapping of Fabric environment and role to Elastic Block Device sizes (in GB)
volume_sizes:
  production:
    cache: 10
    db-master: 100
    db-slave: 100
    web: 10
    worker: 50
  staging:
    cache: 10
    db-master: 100
    db-slave: 100
    web: 10
    worker: 50
  testing:
    cache: 10
    db-master: 100
    db-slave: 100
    web: 10
    worker: 50

# Mapping of role to Elastic Block Device volume types
# Use SSD-backed storage (gp2) for all servers. Change to 'standard' for slower
# magnetic storage.
volume_types:
  cache: gp2
  db-master: gp2
  db-slave: gp2
  web: gp2
  worker: gp2

app_server_packages:
  - python2.7-dev
  - libpq-dev
  - libmemcached-dev
  - supervisor
  - mercurial
  - git
  - build-essential
  - stunnel4
  - pgbouncer

db_settings:
  # for help adjusting these settings, see pgtune
  postgresql_settings:
    # Settings to apply to Postgres servers
    # You can put anything here from postgresql.conf

    # connections
    max_connections: '80' # _active_ connections are limited by pgbouncer

    # replication settings
    wal_level: 'hot_standby'
    hot_standby: 'on'
    hot_standby_feedback: 'on'
    max_wal_senders: '3'
    wal_keep_segments: '3000' # during client deletion 50 or more may be generated per minute; this allows an hour

    # resources - let pgtune set these based on actual machine resources
    # shared_buffers: '8GB' # 25% of available RAM, up to 8GB
    # work_mem: '750MB' # (2*RAM)/max_connections
    # maintenance_work_mem': '1GB' # RAM/16 up to 1GB; high values aren't that helpful
    # effective_cache_size': '48GB' # between 50-75%, should equal free + cached values in `top`

    # checkpoint settings
    wal_buffers: '16MB'
    checkpoint_completion_target: '0.9'
    checkpoint_timeout: '10min'
    checkpoint_segments: '256' # if checkpoints are happening more often than the timeout, increase this up to 256

    # logging
    log_min_duration_statement: '500'
    log_checkpoints: 'on'
    log_lock_waits: 'on'
    log_temp_files: '0'

    # write optimizations
    commit_delay: '4000' # delay each commit this many microseconds in case we can do a group commit
    commit_siblings: '5' # only delay if at least N transactions are in process

    # index usage optimizations
    random_page_cost: '2' # our DB servers have a lot of RAM and may tend to prefer Seq Scans if this is too high

  # More Postgres-related settings.
  # How to install Postgres:
  postgresql_packages:
    - postgresql
    - libpq-dev
  # Whether and how to apply pgtune
  postgresql_tune: true
  postgresql_tune_type: Web
  # Kernel sysctl settings to change
  postgresql_shmmax: 107374182400  # 100 GB
  postgresql_shmall: 26214400  # 100 GB / PAGE_SIZE (4096)
  # Networks to allow connections from
  postgresql_networks:
    - ''
    - ''
  # Whether to disable the Linux out-of-memory killer
  postgresql_disable_oom: true
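As a sanity check on the kernel settings above: postgresql_shmmax is expressed in bytes, while postgresql_shmall is expressed in pages, so the two values should differ by a factor of the page size (4096 here). Verifying the values used in the template:

```python
# postgresql_shmmax is in bytes; postgresql_shmall is in pages.
PAGE_SIZE = 4096
shmmax = 100 * 1024 ** 3       # 100 GB in bytes
shmall = shmmax // PAGE_SIZE   # the same 100 GB expressed in pages

print(shmmax, shmall)
```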

The next template is your project’s local settings file. It should be placed at the location specified by localsettings_template in fabulaws-config.yml, typically under deployment/templates/

from myproject.settings import *

DEBUG = False

# logging settings
#LOGGING['filters']['static_fields']['fields']['deployment'] = '{{ deployment_tag }}'
#LOGGING['filters']['static_fields']['fields']['environment'] = '{{ environment }}'
#LOGGING['filters']['static_fields']['fields']['role'] = '{{ current_role }}'
AWS_STORAGE_BUCKET_NAME = '{{ staticfiles_s3_bucket }}'
AWS_SECRET_ACCESS_KEY = "{{ s3_secret }}"

SECRET_KEY = "{{ secret_key }}"

# Tell django-storages that when coming up with the URL for an item in S3 storage, keep
# it simple - just use this domain plus the path. (If this isn't set, things get complicated).
# This controls how the `static` template tag from `staticfiles` gets expanded, if you're using it.
# We also use it in the next setting.
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME

# This is used by the `static` template tag from `static`, if you're using that. Or if anything else
# refers directly to STATIC_URL. So it's safest to always set it.
STATIC_URL = 'https://%s/' % AWS_S3_CUSTOM_DOMAIN

# Tell the staticfiles app to use S3Boto storage when writing the collected static files (when
# you run `collectstatic`).
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'

# Auto-create the bucket if it doesn't exist
AWS_AUTO_CREATE_BUCKET = True

AWS_HEADERS = {
    'Expires': 'Thu, 31 Dec 2099 20:00:00 GMT',
    'Cache-Control': 'max-age=94608000',
}

# Having AWS_PRELOAD_METADATA turned on breaks django-storages/s3 -
# saving a new file doesn't update the metadata and exists() returns False
AWS_PRELOAD_METADATA = False

# database settings
DATABASES = {
{% for server in all_databases %}
    '{{ server.database_key }}': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': '{{ server.database_local_name }}',
        'USER': '{{ database_user }}',
        'PASSWORD': '{{ database_password }}',
        'HOST': 'localhost',
        'PORT': '{{ pgbouncer_port }}',
    },{% endfor %}
}

# django-balancer settings
DATABASE_POOL = {
{% for server in slave_databases %}
    '{{ server.database_key }}': 1,{% endfor %}
}
MASTER_DATABASE = '{{ master_database.database_key }}'

# media roots
MEDIA_ROOT = "{{ media_root }}"
STATIC_ROOT = "{{ static_root }}"

# email settings
EMAIL_HOST_PASSWORD = '{{ smtp_password }}'
EMAIL_SUBJECT_PREFIX = '[{{ deployment_tag }} {{ environment }}] '

# Redis DB map:
# 0 = cache
# 1 = unused (formerly celery task queue)
# 2 = celery results
# 3 = session store
# 4-16 = (free)

# Cache settings
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '{{ cache_server.internal_ip }}:11211',
        'VERSION': '{{ current_changeset }}',
    },
    'session': {
        'BACKEND': 'redis_cache.RedisCache',
        'LOCATION': '{{ cache_server.internal_ip }}:6379',
        'OPTIONS': {
            'DB': 3,
        },
    },
}

# Task queue settings

BROKER_URL = "amqp://{{ deploy_user }}:{{ broker_password }}@{{ cache_server.internal_ip }}:5672/{{ vhost }}"
CELERY_RESULT_BACKEND = "redis://{{ cache_server.internal_ip }}:6379/2"

# Session settings
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
SESSION_CACHE_ALIAS = 'session'

# django-compressor settings
# Use MEDIA_ROOT rather than STATIC_ROOT because it already exists and is
# writable on the server.
COMPRESS_ROOT = MEDIA_ROOT
COMPRESS_OFFLINE_MANIFEST = 'manifest-{{ current_changeset }}.json'

ALLOWED_HOSTS = [{% for host in allowed_hosts %}'{{ host }}', {% endfor %}]

SSH keys

Before attempting to deploy for the first time, you should add your SSH public key to a file named deployment/users/<yourusername> in the repository. This path can also be configured in fabulaws-config.yml. Multiple SSH keys are permitted per file, and additional files can be added for each username (developer).

Django Settings

FabulAWS uses django_compressor and django-storages to store media on S3. The following settings changes are required in your base settings file:

  1. compressor, storages, and djcelery should be added to your INSTALLED_APPS.
  2. Add the following to the end of your settings file, modifying as needed:
# Celery settings
import djcelery
from celery.schedules import crontab

djcelery.setup_loader()

# List of finder classes that know how to find static files in
# various locations.
STATICFILES_FINDERS = (
    'django.contrib.staticfiles.finders.FileSystemFinder',
    'django.contrib.staticfiles.finders.AppDirectoriesFinder',
    'compressor.finders.CompressorFinder',
)

STATIC_ROOT = os.path.join(BASE_DIR, 'static')

COMPRESS_ENABLED = False  # enable if needed
COMPRESS_PRECOMPILERS = (
    ('text/less', 'lessc {infile} {outfile}'),
)

You’ll need to change the default DJANGO_SETTINGS_MODULE in your project’s manage.py to myproject.local_settings.
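Assuming a standard Django layout, pointing Django at this settings module typically looks like the following sketch (the module path mirrors the settings_managepy value in fabulaws-config.yml):

```python
import os

# Point Django at the deployment settings module before django.conf is
# imported; FabulAWS renders the local_settings template on each server.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.local_settings')

print(os.environ['DJANGO_SETTINGS_MODULE'])
```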

Static HTML

You need to create two static HTML files, one for displaying an upgrade message while you’re deploying to your site, and one to serve as a “dummy” health check to keep instances in your load balancer healthy while deploying.

The paths to these files can be configured in the static_html dictionary in your fabulaws-config.yml:

  upgrade_message: deployment/templates/html/503.html
  healthcheck_override: deployment/templates/html/healthcheck.html

The healthcheck.html file can contain anything you’d like. We recommend something distinctive so that you can tell whether your health check is being served by Django or by the “dummy” health check HTML file, e.g.: OK (nginx override)

Similarly, the 503.html file can contain anything you’d like, from something as simple as Upgrade in progress. Please check back later. to a complete HTML file with stylesheets and images that displays a “pretty” upgrade-in-progress message.

Basic Auth

If you want to add HTTP Basic Auth to a site, add a section to fabulaws-config.yml like this:

# Any sites that need basic auth
# This is NOT intended to provide very high security.
use_basic_auth:
  testing: True
  anotherenv: True

Add basic_auth_username and basic_auth_password to password_names:

password_names: [a, b, c, ..., basic_auth_username, basic_auth_password]

And add the desired username and password to each environment secrets file:

basic_auth_username: user1
basic_auth_password: password1

You’ll need to add these entries to all secrets files; just set them to an empty string for environments where you are not using basic auth.

Then, in the testing and anotherenv environments, FabulAWS will apply basic auth to the sites. For testing, the user user1 will be able to log in with password password1, and so forth.
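As an illustration of what basic auth involves on the wire, the testing credentials above translate into a standard Authorization header that nginx checks on each request (the helper function is ours):

```python
import base64

def basic_auth_header(username, password):
    # The Authorization header value a client sends for HTTP Basic Auth:
    # "Basic " followed by base64("username:password").
    creds = '{0}:{1}'.format(username, password).encode('ascii')
    return 'Basic ' + base64.b64encode(creds).decode('ascii')

print(basic_auth_header('user1', 'password1'))  # Basic dXNlcjE6cGFzc3dvcmQx
```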


FabulAWS will also turn off basic auth for the health check URL so that the load balancer can access it. It assumes that the health check URL is /healthcheck.html and that Django will be serving the health check URL (rather than, for example, Nginx serving it directly as a static file). If either of those assumptions is not correct, you will need to tweak the behavior by copying and modifying the template for nginx.conf.

Health Check

You’ll need to configure a health check within Django as well. The following is a sample you can use.

Add the view below to your app’s views.py:

import logging

from django.db import connections
from django.http import HttpResponse, HttpResponseServerError

def health_check(request):
    """Health check for the load balancer."""
    logger = logging.getLogger('fabutest.views.health_check')
    db_errors = []
    for conn_name in connections:
        conn = connections[conn_name]
        try:
            cursor = conn.cursor()
            cursor.execute('SELECT 1')
            row = cursor.fetchone()
            assert row[0] == 1
        except Exception, e:
            # note that there doesn't seem to be a way to pass a timeout to
            # psycopg2 through Django, so this will likely not raise a timeout
            # exception
            logger.warning('Caught error checking database connection "{0}"'
                           ''.format(conn_name), exc_info=True)
            db_errors.append(e)
    if not db_errors:
        return HttpResponse('OK')
    else:
        return HttpResponseServerError('Configuration Error')

Add lines similar to those below to your urls.py:

from django.conf.urls import include, url
from django.contrib import admin

from fabutest import views as fabutest_views

urlpatterns = [
    url(r'^admin/', include(admin.site.urls)),
    url(r'^healthcheck.html$', fabutest_views.health_check),
]

Python Requirements

The following are the minimum Python requirements for deploying a web application using FabulAWS (update version numbers as needed):


In addition, the following requirements are needed for deployment:


First Deployment

Once you have your EC2 environment and project configured, it’s time to create your initial server environment.

To create a new instance of the testing environment, you can use the create_environment Fabric command, like so:

fab create_environment:myproject,testing

In addition to watching the console output, be sure to inspect the log files generated (*.out, in the current directory) to troubleshoot any problems that may arise.

For more information, please refer to the Deployment documentation.