New Project Setup¶
AWS Configuration¶
Some configuration within the AWS console is necessary to begin using FabulAWS:
IAM User¶
First, you’ll need to create credentials via IAM that have permission to create
servers in EC2 and manage autoscaling groups and load balancers. Amazon will
provide you with a credentials file containing AWS_ACCESS_KEY_ID and
AWS_SECRET_ACCESS_KEY, both of which you will need later in this document.
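FabulAWS does not prescribe a specific IAM policy document, so the exact permissions are up to you. A broad sketch (assuming you are comfortable granting the deployment user full EC2, Auto Scaling, and Elastic Load Balancing access) might look like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:*",
        "autoscaling:*",
        "elasticloadbalancing:*"
      ],
      "Resource": "*"
    }
  ]
}
```

A tighter policy that enumerates only the actions actually used is preferable for production accounts.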
Security Groups¶
You’ll also need the following security groups. These can be renamed for your project and updated in fabulaws-config.yml.
- myproject-sg
  - TCP port 22 from 0.0.0.0/0
- myproject-cache-sg
  - TCP port 11211 from myproject-web-sg
  - TCP port 11211 from myproject-worker-sg
- myproject-db-sg
  - TCP port 5432 from myproject-web-sg
  - TCP port 5432 from myproject-worker-sg
  - TCP port 5432 from myproject-db-sg
- myproject-queue-sg
  - TCP port 5672 from myproject-web-sg
  - TCP port 5672 from myproject-worker-sg
- myproject-session-sg
  - TCP port 6379 from myproject-web-sg
  - TCP port 6379 from myproject-worker-sg
- myproject-web-sg
  - For EC2-Classic:
    - TCP port 80 from amazon-elb/amazon-elb-sg
    - TCP port 443 from amazon-elb/amazon-elb-sg
  - For VPC-based AWS accounts:
    - TCP port 80 from myproject-web-sg
    - TCP port 443 from myproject-web-sg
- myproject-worker-sg
  - (used only as a source - requires no additional firewall rules)
- myproject-incoming-web-sg
  - TCP port 80 from any address
  - TCP port 443 from any address
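Because these groups reference one another as traffic sources, typos are easy to introduce. A small sketch (a hypothetical helper, not part of FabulAWS) that models a few of the rules above as data and checks that every referenced source group is actually defined:

```python
# Each rule is (protocol, port, source); a source may be another security
# group or a CIDR block. This mirrors part of the list above.
RULES = {
    'myproject-sg': [('tcp', 22, '0.0.0.0/0')],
    'myproject-cache-sg': [('tcp', 11211, 'myproject-web-sg'),
                           ('tcp', 11211, 'myproject-worker-sg')],
    'myproject-db-sg': [('tcp', 5432, 'myproject-web-sg'),
                        ('tcp', 5432, 'myproject-worker-sg'),
                        ('tcp', 5432, 'myproject-db-sg')],
    'myproject-web-sg': [('tcp', 80, 'myproject-web-sg'),
                         ('tcp', 443, 'myproject-web-sg')],
    'myproject-worker-sg': [],
}

def undefined_sources(rules):
    """Return source groups that are referenced but never defined."""
    missing = set()
    for group_rules in rules.values():
        for _proto, _port, source in group_rules:
            # CIDR sources (containing '/') are not group references
            if '/' not in source and source not in rules:
                missing.add(source)
    return sorted(missing)

print(undefined_sources(RULES))  # [] when every referenced group exists
```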
Load Balancer¶
You will need to create at least one load balancer per environment for your instances. Note that multiple load balancers can be used if the site serves different domains (though a single load balancer suffices with a wildcard SSL certificate). Use the following parameters as a guide:
- Choose a name and set it in fabulaws-config.yml
- Ports 80 and 443 should be mapped to ports 80 and 443 on the instances
- If on EC2-Classic (older AWS accounts), you can use ‘EC2-Classic’ load balancers. Note that this will cause a warning to be shown when you try to ‘Assign Security Groups’. That warning can be skipped.
- If on newer, VPC-based AWS accounts:
  - Add security group myproject-incoming-web-sg to the load balancer so the load balancer can receive incoming requests.
  - Add security group myproject-web-sg to the load balancer so the backend instances will accept forwarded requests from the load balancer.
- Set up an HTTPS health check on port 443 that monitors /healthcheck.html at your desired frequency (you’ll set up the health check URL in your app below)
- Backend authentication and stickiness should be disabled
- The zones chosen should match those in fabulaws-config.yml (typically 2)
- Configure a custom SSL certificate, if desired.
After the load balancer is created, you can set the domain name for the
associated environment in fabulaws-config.yml to your custom domain or the
default domain for the load balancer.
Auto Scaling Group¶
You will also need to create one auto scaling group per environment, with the following parameters:
- Choose a name and set it in fabulaws-config.yml
- Choose an existing dummy launch config, and set the “min” and “desired” number of instances to 0 to start, with a “max” of at least 4 (a higher max is fine)
- Select Advanced, choose your load balancer, and select the ELB health check
- Choose the same availability zones as for your load balancer
- You don’t need to configure scaling policies yet; these will need to be set eventually based on experience
- You must configure the auto scaling group to tag instances like so:
  - Name: myproject_<environment>_web
  - deployment: myproject
  - environment: <environment>
  - role: web
Local Machine¶
You’ll need to make several changes to your local machine to use FabulAWS:
System Requirements¶
- Ubuntu Linux 14.04 or later
- Python 2.7
- PostgreSQL 9.3
- virtualenv and virtualenvwrapper are highly recommended
AWS API Credentials¶
First, you need to define the AWS credentials you created above in your shell environment:
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
It’s helpful to save these to a file (e.g., aws.sh) that you can source
(. aws.sh) each time they’re needed.
Passwords¶
Local passwords¶
A number of passwords are required during deployment. To reduce the number of
prompts that need to be answered manually, you can use a file called
fabsecrets_<environment>.py
in the top level of your repository.
If you already have a server environment set up, run the following command to get a local copy of fabsecrets_<environment>.py:
fab <environment> update_local_fabsecrets
Note: If applicable, this will not obtain a copy of the luks_passphrase
secret, which for security’s sake is not stored directly on the servers. If you
will be creating new servers, this must be obtained securely from another
developer.
If this is a brand-new project, you can use the following template for
fabsecrets_<environment>.py:
database_password = ''
broker_password = ''
smtp_password = ''
newrelic_license_key = ''
newrelic_api_key = ''
s3_secret = ''
secret_key = ''
All of these are required to be filled in before any servers can be created.
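For secret_key, any long random string will do. One way to generate it (a sketch using only the standard library, compatible with Python 2.7 and 3):

```python
import random
import string

def make_secret_key(length=50):
    """Generate a random secret key using the OS's secure RNG."""
    chars = string.ascii_letters + string.digits + '!@#$%^&*(-_=+)'
    rng = random.SystemRandom()
    return ''.join(rng.choice(chars) for _ in range(length))

print(make_secret_key())
```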
Remote passwords¶
To update passwords on the server, first retrieve a copy of
fabsecrets_<environment>.py using the above command (or from another
developer) and then run the following command:
fab <environment> update_server_passwords
Note
It’s only necessary to have a copy of fabsecrets_<environment>.py locally if
you will be deploying new servers or updating the existing passwords on the
servers.
Note
This command is only useful on the web and worker servers. On all other servers, nothing will update the configuration files to use the new secrets.
Project Configuration¶
You’ll need to add several files to your repository, typically at the top level. You can use the following as templates:
fabfile.py¶
import logging

root_logger = logging.getLogger()
root_logger.addHandler(logging.StreamHandler())
root_logger.setLevel(logging.WARNING)

fabulaws_logger = logging.getLogger('fabulaws')
fabulaws_logger.setLevel(logging.INFO)

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# XXX import actual commands needed
from fabulaws.library.wsgiautoscale.api import *
fabulaws-config.yml¶
instance_settings:
  # http://uec-images.ubuntu.com/releases/trusty/release/
  ami: ami-b2e3c6d8 # us-east-1 14.04.3 LTS 64-bit w/EBS-SSD root store
  key_prefix: 'myproject-'
  admin_groups: [admin, sudo]
  run_upgrade: true
  # Secure directories, volume, and filesystem info
  secure_root: #/secure # no trailing /
  secure_home: #/home/secure
  fs_type: ext4
  fs_encrypt: false
  ubuntu_mirror: us.archive.ubuntu.com
  # create swap of swap_multiplier * available RAM
  swap_multiplier: 1

## REMOTE SETTINGS ##
deploy_user: myproject
webserver_user: myproject-web
database_host: localhost
database_user: dbuser
home: /home/myproject/
python: /usr/bin/python2.7

## LOCAL / PROJECT SETTINGS ##
disable_known_hosts: true
ssh_keys: deployment/users/
password_names: [database_password, broker_password, smtp_password,
                 newrelic_license_key, newrelic_api_key, s3_secret,
                 secret_key]
project: myproject
wsgi_app: myproject.wsgi:application
requirements_file: requirements/app.txt
requirements_sdists:
settings_managepy: myproject.local_settings
static_html:
  upgrade_message: deployment/templates/html/503.html
  healthcheck_override: deployment/templates/html/healthcheck.html
localsettings_template: deployment/templates/local_settings.py
logstash_config: deployment/templates/logstash.conf
# Set gelf_log_host to the host of your Graylog2 server (or other GELF log
# receiver)
# gelf_log_host: hostname

# Set syslog_server to a "hostname:port" (quote marks required due
# to the ":" in there) and server logs will be forwarded there using
# syslog protocol. "hostname:port" could be e.g. papertrail or a
# similar service.
# (You might want to set this in fabsecrets instead of here.)
# syslog_server: "hostname:port"

# You can alternatively supply a multi-line config for rsyslog as follows
# (e.g., in the event you need to enable TLS). For more information, see:
# http://www.rsyslog.com/doc/v8-stable/tutorials/tls_cert_client.html#sample-syslog-conf
# syslog_server: |
#   # make gtls driver the default
#   $DefaultNetstreamDriver gtls
#
#   # certificate files
#   $DefaultNetstreamDriverCAFile /rsyslog/protected/ca.pem
#   $DefaultNetstreamDriverCertFile /rsyslog/protected/machine-cert.pem
#   $DefaultNetstreamDriverKeyFile /rsyslog/protected/machine-key.pem
#
#   $ActionSendStreamDriverAuthMode x509/name
#   $ActionSendStreamDriverPermittedPeer central.example.net
#   $ActionSendStreamDriverMode 1 # run driver in TLS-only mode
#   *.* @@central.example.net:10514 # forward everything to remote server

# Set awslogs_access_key_id to the AWS_ACCESS_KEY_ID of the user with
# permissions to create log groups, log streams, and log events. For help
# setting up this role, see:
# http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html
# NOTE: You will also need to set awslogs_secret_access_key in your
# fabsecrets_<environment>.py file
# awslogs_access_key_id: AK....

# Set extra_log_files to a list of log files you want to monitor, in addition
# to the default logs monitored by FabulAWS itself:
# extra_log_files:
#   /path/to/file:
#     tag: mytag
#     date_format: '%Y-%m-%d %H:%M:%S'

backup_key_fingerprint:
vcs_cmd: git # or hg
latest_changeset_cmd: git rev-parse HEAD # or hg id -i
repo: git@github.com:username/myproject.git
# Mapping of Fabric deployments and environments to the VCS branch names
# that should be deployed.
branches:
  myproject:
    production: master
    staging: master
    testing: master

## SERVER SETTINGS ##
# Local server port for pgbouncer
pgbouncer_port: 5432
# Version of Less to install
less_version: 2.5.3
# Local server ports used by Gunicorn (the Django app server)
server_ports:
  staging: 8000
  production: 8001
  testing: 8002
# Whether we're hosting static files on our webservers ('local')
# or somewhere else ('remote')
static_hosting: remote
# Mapping of celery worker names to options
# The worker name (key) can be any text of your choosing. The value should
# be any additional options you'd like to pass to celeryd, such as specifying
# the concurrency and queue name(s)
celery_workers:
  main: -c 10 -Q celeryd
# Start this many Gunicorn workers for each CPU core
gunicorn_worker_multiplier: 8
# Mapping of environment names to domain names. Used to update the
# primary site in the database after a refresh and to set ALLOWED_HOSTS
# Note that the first domain in the list must not be a wildcard as it
# is used to update a Site object in the database.
# Wildcard format used per ALLOWED_HOSTS setting
site_domains_map:
  production:
  - dualstack.myproject-production-1-12345.us-east-1.elb.amazonaws.com
  staging:
  - dualstack.myproject-staging-1-12345.us-east-1.elb.amazonaws.com
  testing:
  - dualstack.myproject-testing-1-12345.us-east-1.elb.amazonaws.com

## ENVIRONMENT / ROLE SETTINGS ##
default_deployment: myproject
deployments:
- myproject
environments:
- staging
- production
- testing
production_environments:
- production
valid_roles:
- cache
- db-master
- db-slave
- web
- worker

## AWS SETTINGS ##
region: us-east-1
avail_zones:
- e
- c
# Mapping of role to security group(s):
security_groups:
  db-master: [myproject-sg, myproject-db-sg]
  db-slave: [myproject-sg, myproject-db-sg]
  cache: [myproject-sg, myproject-session-sg, myproject-cache-sg, myproject-queue-sg]
  worker: [myproject-sg, myproject-worker-sg]
  web: [myproject-sg, myproject-web-sg]
# Mapping of environment and role to EC2 instance types (sizes)
instance_types:
  production:
    cache: c3.large
    db-master: m3.xlarge
    db-slave: m3.xlarge
    web: c3.large
    worker: m3.large
  staging:
    cache: t1.micro
    db-master: m1.small
    db-slave: m1.small
    web: m1.small
    worker: m3.large
  testing:
    cache: t1.micro
    db-master: t1.micro
    db-slave: t1.micro
    web: m1.small
    worker: m1.small
# Mapping of Fabric environment names to AWS load balancer names. Load
# balancers can be configured in the AWS Management Console.
load_balancers:
  myproject:
    production:
    - myproject-production-lb
    staging:
    - myproject-staging-lb
    testing:
    - myproject-testing-lb
# Mapping of Fabric environment names to AWS auto scaling group names. Auto
# scaling groups can be configured in the AWS Management Console.
auto_scaling_groups:
  myproject:
    production: myproject-production-ag
    staging: myproject-staging-ag
    testing: myproject-testing-ag
# Mapping of Fabric environment and role to Elastic Block Device sizes (in GB)
volume_sizes:
  production:
    cache: 10
    db-master: 100
    db-slave: 100
    web: 10
    worker: 50
  staging:
    cache: 10
    db-master: 100
    db-slave: 100
    web: 10
    worker: 50
  testing:
    cache: 10
    db-master: 100
    db-slave: 100
    web: 10
    worker: 50
# Mapping of Fabric environment and role to Elastic Block Device volume types
# Use SSD-backed storage (gp2) for all servers. Change to 'standard' for slower
# magnetic storage.
volume_types:
  cache: gp2
  db-master: gp2
  db-slave: gp2
  web: gp2
  worker: gp2
app_server_packages:
- python2.7-dev
- libpq-dev
- libmemcached-dev
- supervisor
- mercurial
- git
- build-essential
- stunnel4
- pgbouncer
db_settings:
  # for help adjusting these settings, see:
  # http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
  # http://wiki.postgresql.org/wiki/Number_Of_Database_Connections
  # http://thebuild.com/presentations/not-my-job-djangocon-us.pdf
  postgresql_settings:
    # Settings to apply to Postgres servers
    # You can put anything here from postgresql.conf

    # connections
    max_connections: '80' # _active_ connections are limited by pgbouncer

    # replication settings
    wal_level: 'hot_standby'
    hot_standby: 'on'
    hot_standby_feedback: 'on'
    max_wal_senders: '3'
    wal_keep_segments: '3000' # during client deletion 50 or more may be generated per minute; this allows an hour

    # resources - let pgtune set these based on actual machine resources
    # shared_buffers: '8GB' # 25% of available RAM, up to 8GB
    # work_mem: '750MB' # (2*RAM)/max_connections
    # maintenance_work_mem: '1GB' # RAM/16 up to 1GB; high values aren't that helpful
    # effective_cache_size: '48GB' # between 50-75%, should equal free + cached values in `top`

    # checkpoint settings
    wal_buffers: '16MB'
    checkpoint_completion_target: '0.9'
    checkpoint_timeout: '10min'
    checkpoint_segments: '256' # if checkpoints are happening more often than the timeout, increase this up to 256

    # logging
    log_min_duration_statement: '500'
    log_checkpoints: 'on'
    log_lock_waits: 'on'
    log_temp_files: '0'

    # write optimizations
    commit_delay: '4000' # delay each commit this many microseconds in case we can do a group commit
    commit_siblings: '5' # only delay if at least N transactions are in process

    # index usage optimizations
    random_page_cost: '2' # our DB servers have a lot of RAM and may tend to prefer Seq Scans if this is too high

  # More Postgres-related settings.
  # How to install Postgres:
  postgresql_packages:
  - postgresql
  - libpq-dev
  # Whether and how to apply pgtune
  postgresql_tune: true
  postgresql_tune_type: Web
  # Kernel sysctl settings to change
  postgresql_shmmax: 107374182400 # 100 GB
  postgresql_shmall: 26214400 # 100 GB / PAGE_SIZE (4096)
  # Networks to allow connections from
  postgresql_networks:
  - '10.0.0.0/8'
  - '172.16.0.0/12'
  # Whether to disable the Linux out-of-memory killer
  postgresql_disable_oom: true
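A missing top-level key in this file typically only surfaces mid-deployment, so it can be worth sanity-checking the parsed config up front. A sketch of such a check (a hypothetical helper; in practice you would obtain the dict with PyYAML, e.g. yaml.safe_load(open('fabulaws-config.yml'))):

```python
# A subset of the top-level keys the deployment relies on, taken from the
# template above.
REQUIRED_KEYS = [
    'deploy_user', 'project', 'wsgi_app', 'repo', 'branches',
    'server_ports', 'region', 'avail_zones', 'security_groups',
    'instance_types', 'load_balancers', 'auto_scaling_groups',
]

def missing_keys(config):
    """Return the required keys absent from the parsed config dict."""
    return [key for key in REQUIRED_KEYS if key not in config]
```

Running missing_keys() over the loaded config before kicking off a deployment turns a mid-run crash into an immediate, readable error.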
local_settings.py¶
This file should be placed at the location specified in fabulaws-config.yml,
typically deployment/templates/local_settings.py.
from myproject.settings import *

DEBUG = False

# logging settings
#LOGGING['filters']['static_fields']['fields']['deployment'] = '{{ deployment_tag }}'
#LOGGING['filters']['static_fields']['fields']['environment'] = '{{ environment }}'
#LOGGING['filters']['static_fields']['fields']['role'] = '{{ current_role }}'

AWS_STORAGE_BUCKET_NAME = '{{ staticfiles_s3_bucket }}'
AWS_ACCESS_KEY_ID = 'YOUR-KEY-HERE'
AWS_SECRET_ACCESS_KEY = "{{ s3_secret }}"
SECRET_KEY = "{{ secret_key }}"

# Tell django-storages that when coming up with the URL for an item in S3 storage, keep
# it simple - just use this domain plus the path. (If this isn't set, things get complicated).
# This controls how the `static` template tag from `staticfiles` gets expanded, if you're using it.
# We also use it in the next setting.
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME

# This is used by the `static` template tag from `static`, if you're using that. Or if anything else
# refers directly to STATIC_URL. So it's safest to always set it.
STATIC_URL = "https://%s/" % AWS_S3_CUSTOM_DOMAIN

# Tell the staticfiles app to use S3Boto storage when writing the collected static files (when
# you run `collectstatic`).
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'

# Auto-create the bucket if it doesn't exist
AWS_AUTO_CREATE_BUCKET = True

AWS_HEADERS = {  # see http://developer.yahoo.com/performance/rules.html#expires
    'Expires': 'Thu, 31 Dec 2099 20:00:00 GMT',
    'Cache-Control': 'max-age=94608000',
}

# Having AWS_PRELOAD_META turned on breaks django-storages/s3 -
# saving a new file doesn't update the metadata and exists() returns False
#AWS_PRELOAD_METADATA = True

# database settings
DATABASES = {
{% for server in all_databases %}
    '{{ server.database_key }}': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': '{{ server.database_local_name }}',
        'USER': '{{ database_user }}',
        'PASSWORD': '{{ database_password }}',
        'HOST': 'localhost',
        'PORT': '{{ pgbouncer_port }}',
    },{% endfor %}
}

# django-balancer settings
DATABASE_POOL = {
{% for server in slave_databases %}
    '{{ server.database_key }}': 1,{% endfor %}
}
MASTER_DATABASE = '{{ master_database.database_key }}'

# media roots
MEDIA_ROOT = "{{ media_root }}"
STATIC_ROOT = "{{ static_root }}"

# email settings
EMAIL_HOST_PASSWORD = '{{ smtp_password }}'
EMAIL_SUBJECT_PREFIX = '[{{ deployment_tag }} {{ environment }}] '

# Redis DB map:
# 0 = cache
# 1 = unused (formerly celery task queue)
# 2 = celery results
# 3 = session store
# 4-16 = (free)

# Cache settings
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '{{ cache_server.internal_ip }}:11211',
        'VERSION': '{{ current_changeset }}',
    },
    'session': {
        'BACKEND': 'redis_cache.RedisCache',
        'LOCATION': '{{ cache_server.internal_ip }}:6379',
        'OPTIONS': {
            'DB': 3,
        },
    },
}

# Task queue settings
# see https://github.com/ask/celery/issues/436
BROKER_URL = "amqp://{{ deploy_user }}:{{ broker_password }}@{{ cache_server.internal_ip }}:5672/{{ vhost }}"
BROKER_CONNECTION_TIMEOUT = 4
BROKER_POOL_LIMIT = 10
CELERY_RESULT_BACKEND = "redis://{{ cache_server.internal_ip }}:6379/2"

# Session settings
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
SESSION_CACHE_ALIAS = 'session'

# django-compressor settings
COMPRESS_URL = STATIC_URL
# Use MEDIA_ROOT rather than STATIC_ROOT because it already exists and is
# writable on the server.
COMPRESS_ROOT = MEDIA_ROOT
COMPRESS_STORAGE = STATICFILES_STORAGE
COMPRESS_OFFLINE = True
COMPRESS_OFFLINE_MANIFEST = 'manifest-{{ current_changeset }}.json'
COMPRESS_ENABLED = True

ALLOWED_HOSTS = [{% for host in allowed_hosts %}'{{ host }}', {% endfor %}]
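FabulAWS fills in the {{ ... }} placeholders at deploy time when it uploads this template to each server. The substitution of simple placeholders can be sketched as follows (this handles only bare {{ name }} placeholders, not the {% for %} loops, which the real Jinja-style renderer expands):

```python
import re

def render(template, context):
    """Replace {{ name }} placeholders with values from context."""
    def substitute(match):
        return str(context[match.group(1)])
    return re.sub(r'\{\{\s*(\w+)\s*\}\}', substitute, template)

print(render("EMAIL_HOST_PASSWORD = '{{ smtp_password }}'",
             {'smtp_password': 'sekrit'}))
# prints: EMAIL_HOST_PASSWORD = 'sekrit'
```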
SSH keys¶
Before attempting to deploy for the first time, you should add your SSH public key
to a file named deployment/users/<yourusername>
in the repository. This path
can also be configured in fabulaws-config.yml
. Multiple SSH keys are permitted
per file, and additional files can be added for each username (developer).
Django Settings¶
FabulAWS uses django-compressor and django-storages to store media on S3. The following
settings changes are required in your base settings.py:

- compressor, storages, and djcelery should be added to your INSTALLED_APPS.
- Add the following to the end of your settings.py, modifying as needed:
# Celery settings
import djcelery
from celery.schedules import crontab
djcelery.setup_loader()

CELERY_SEND_TASK_ERROR_EMAILS = True

# List of finder classes that know how to find static files in
# various locations.
STATICFILES_FINDERS = (
    'django.contrib.staticfiles.finders.FileSystemFinder',
    'django.contrib.staticfiles.finders.AppDirectoriesFinder',
    'compressor.finders.CompressorFinder',
)

STATIC_ROOT = os.path.join(BASE_DIR, 'static')

COMPRESS_ENABLED = False  # enable in local_settings.py if needed
COMPRESS_CSS_HASHING_METHOD = 'hash'
COMPRESS_PRECOMPILERS = (
    ('text/less', 'lessc {infile} {outfile}'),
)
wsgi.py¶
You’ll need to change the default DJANGO_SETTINGS_MODULE
in your project’s
wsgi.py
to myproject.local_settings
.
Static HTML¶
You need to create two static HTML files, one for displaying an upgrade message while you’re deploying to your site, and one to serve as a “dummy” health check to keep instances in your load balancer healthy while deploying.
The paths to these files can be configured in the static_html dictionary
in your fabulaws-config.yml:
static_html:
upgrade_message: deployment/templates/html/503.html
healthcheck_override: deployment/templates/html/healthcheck.html
The healthcheck.html file can contain anything you’d like. We recommend something
distinctive so that you can tell whether your health check is being served by Django
or by the “dummy” health check HTML file, e.g.: OK (nginx override)
Similarly, the 503.html file can contain anything you’d like, from
something as simple as Upgrade in progress. Please check back later. to
a complete HTML file with stylesheets and images that displays a “pretty”
upgrade-in-progress message.
Basic Auth¶
If you want to add HTTP Basic Auth to a site, add a section to fabulaws-config.yml like this:
# Any sites that need basic auth
# This is NOT intended to provide very high security.
use_basic_auth:
  testing: True
  anotherenv: True
Add basic_auth_username and basic_auth_password to password_names:
password_names: [a, b, c, ..., basic_auth_username, basic_auth_password]
And add the desired username and password to each environment secrets file:
basic_auth_username: user1
basic_auth_password: password1
You’ll need to add these entries to all secrets files; just set them to an empty string for environments where you are not using basic auth.
Then in the testing and anotherenv environments, fabulaws will apply
basic auth to the sites. For testing, user user1 will be able to use password
password1, and so forth.
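For reference, HTTP Basic Auth simply sends the username and password base64-encoded (not encrypted) in the Authorization header, which is one reason it should only be relied on over HTTPS. The encoding can be sketched as:

```python
import base64

def basic_auth_header(username, password):
    """Build the value of an HTTP Authorization header for Basic Auth."""
    credentials = '%s:%s' % (username, password)
    encoded = base64.b64encode(credentials.encode('utf-8')).decode('ascii')
    return 'Basic ' + encoded

# The web server compares these decoded credentials against the
# configured username/password.
print(basic_auth_header('user1', 'password1'))
# prints: Basic dXNlcjE6cGFzc3dvcmQx
```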
Note
Fabulaws will also turn off Basic Auth for the health check URL so that the load balancer
can access it. It assumes that the health check URL is /healthcheck.html and that Django will
be serving the health check URL (rather than it being served as a static file directly by Nginx, for
example). If either of those assumptions is not correct, you will need to tweak it by copying
and modifying the template for nginx.conf.
Health Check¶
You’ll need to configure a health check within Django as well. The following is a sample you can use. Add to views.py:
import logging

from django.db import connections
from django.http import HttpResponse, HttpResponseServerError


def health_check(request):
    """
    Health check for the load balancer.
    """
    logger = logging.getLogger('fabutest.views.health_check')
    db_errors = []
    for conn_name in connections:
        conn = connections[conn_name]
        try:
            cursor = conn.cursor()
            cursor.execute('SELECT 1')
            row = cursor.fetchone()
            assert row[0] == 1
        except Exception, e:
            # note that there doesn't seem to be a way to pass a timeout to
            # psycopg2 through Django, so this will likely not raise a timeout
            # exception
            logger.warning('Caught error checking database connection "{0}"'
                           ''.format(conn_name), exc_info=True)
            db_errors.append(e)
    if not db_errors:
        return HttpResponse('OK')
    else:
        return HttpResponseServerError('Configuration Error')
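The view above probes every configured database with a trivial query and reports a server error if any probe fails. The same pattern can be exercised outside Django, here with stdlib sqlite3 connections standing in for the Postgres ones:

```python
import sqlite3

def failing_connections(connections):
    """Return the names of connections that fail a 'SELECT 1' probe."""
    errors = []
    for name, conn in connections.items():
        try:
            cursor = conn.cursor()
            cursor.execute('SELECT 1')
            assert cursor.fetchone()[0] == 1
        except Exception:
            # a closed or misconfigured connection ends up here
            errors.append(name)
    return errors

conns = {'default': sqlite3.connect(':memory:')}
print(failing_connections(conns))  # [] when every database responds
```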
Add lines similar to the following to your urls.py:
from django.conf.urls import include, url
from django.contrib import admin

from fabutest import views as fabutest_views

urlpatterns = [
    url(r'^admin/', include(admin.site.urls)),
    url(r'^healthcheck.html$', fabutest_views.health_check),
]
Python Requirements¶
The following are the minimum Python requirements for deploying a web application using FabulAWS (update version numbers as needed):
Django==1.8.8
psycopg2==2.6.1
pytz==2015.7
django-celery==3.1.17
celery==3.1.19
gunicorn==19.4.5
django-balancer==0.4
boto==2.39.0
django-storages==1.1.8
django-compressor==2.0
python-memcached==1.57
redis==2.10.5
django-redis-cache==1.6.5
django-cache-machine==0.9.1
newrelic==2.60.0.46
In addition, the following requirements are needed for deployment:
fabric==1.10.2
boto==2.39.0
pyyaml==3.11
argyle==0.2.1
First Deployment¶
Once you have your EC2 environment and project configured, it’s time to create your initial server environment.
To create a new instance of the testing environment, you can use the
create_environment Fabric command, like so:
fab create_environment:myproject,testing
In addition to the console output, be sure to inspect the generated log files
(*.out in the current directory) to troubleshoot any problems that may arise.
For more information, please refer to the Deployment documentation.