The Nextstrain website


Domains

  • The main site domain is hosted on Heroku.
  • The data domain fronts an AWS CloudFronted S3 bucket, nextstrain-data.
  • The staging domain fronts an AWS CloudFronted S3 bucket, nextstrain-staging.
  • A separate domain is used by our AWS Cognito user pool.


Heroku

The production Heroku app is nextstrain-server, which is part of a Heroku app pipeline of the same name. Deploys of the master branch happen automatically once Travis CI tests pass.

A testing/staging app, nextstrain-dev, is also used. Deploys to it are manual, via the dashboard or git pushes to the Heroku remote.
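A manual deploy to nextstrain-dev is just a git push to the app's Heroku remote. The remote name below ("heroku-dev") is an assumption — it depends on what you called the remote when you added it locally:

```shell
# Add the nextstrain-dev Heroku remote once (the remote name "heroku-dev" is arbitrary).
heroku git:remote --app nextstrain-dev --remote heroku-dev

# Push the branch you want to test; Heroku builds whatever lands on its "master".
git push heroku-dev my-feature-branch:master
```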

Environment variables

  • SESSION_SECRET must be set to a long, securely generated string. It protects the session data stored in browser cookies. Changing this will invalidate all existing sessions and forcibly log out everyone.

  • AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are tied to the AWS IAM user. These credentials allow the backend web server limited access to private S3 buckets.

  • REDIS_URL is provided by the Heroku Redis add-on. It should not be modified directly. Our authentication handlers rewrite it at server start to use a secure TLS connection.

  • FETCH_CACHE is not currently used, but can be set to change the location of the on-disk cache used by (some) server fetch()-es. The default location is /tmp/fetch-cache.

Redis add-on

The Heroku Redis add-on is attached to our nextstrain-server and nextstrain-dev apps. Redis is used to persistently store login sessions after authentication via AWS Cognito. A persistent data store is important for preserving sessions across deploys and regular dyno restarts.
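The role Redis plays can be sketched as a key/value store of session data with a time-to-live per key. Here a Map stands in for Redis, and the class and method names are illustrative, not our server's actual API:

```javascript
// Illustrative session store; a Map stands in for Redis. Because the real
// store is Redis rather than dyno memory, sessions survive deploys and
// regular dyno restarts.
class SessionStore {
  constructor() {
    this.sessions = new Map(); // sessionId -> { data, expiresAt }
  }

  // Store session data with a TTL, as Redis does with SET ... EX.
  set(sessionId, data, ttlSeconds) {
    this.sessions.set(sessionId, { data, expiresAt: Date.now() + ttlSeconds * 1000 });
  }

  // Return session data, or null if the session is missing or expired.
  get(sessionId) {
    const entry = this.sessions.get(sessionId);
    if (!entry || entry.expiresAt <= Date.now()) return null;
    return entry.data;
  }
}

const store = new SessionStore();
store.set("abc123", { user: "alice" }, 3600); // hypothetical session
```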

The maintenance window is set to Friday at 22:00 UTC through Saturday at 02:00 UTC. This aims to fall outside (or on the fringes of) business hours in relevant places around the world while staying within US/Pacific business hours, so the Seattle team can respond to any issues that arise.

If our Redis instance reaches its maximum memory limit, existing keys will be evicted using the volatile-ttl policy to make space for new keys. This should preserve the most active logged in sessions and avoid throwing errors if we hit the limit. If we regularly start hitting the memory limit, we should bump up to the next add-on plan, but I don’t expect this to happen anytime soon with current usage.
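Under volatile-ttl, only keys with a TTL set are eviction candidates, and the one closest to expiry goes first. A sketch of that selection (Redis actually samples keys rather than scanning them all):

```javascript
// Pick the eviction victim under a volatile-ttl-style policy: only keys with
// a TTL are candidates, and the one with the nearest expiry is evicted first.
function evictionVictim(keys) {
  // keys: [{ name, expiresAt }] where expiresAt is null when no TTL is set
  const candidates = keys.filter(k => k.expiresAt !== null);
  if (candidates.length === 0) return null; // nothing evictable
  return candidates.reduce((a, b) => (a.expiresAt <= b.expiresAt ? a : b)).name;
}
```

Active sessions have their TTLs refreshed on use, so they sort toward the back of the eviction order, which is why this policy should preserve the most active logins.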


AWS

All resources are in the us-east-1 region. If you don’t see them in the AWS Console, double-check which region you’re viewing.

S3 buckets


nextstrain-data

Public. CloudFronted. Contains JSONs for our core builds, as well as the nextstrain.yml conda environment definition. Fetches by the server happen over unauthenticated HTTP.


nextstrain-staging

Public. CloudFronted. Contains JSONs for staging copies of our core builds. Fetches by the server happen over unauthenticated HTTP.


Private buckets

Private. Access controlled by IAM groups/policies. Fetches by the server happen via the S3 HTTP API using signed URLs.

EC2 instances

A long-running instance hosts the lab’s fauna instance, used to maintain data for the core builds.

Ephemeral instances are automatically managed by AWS Batch for nextstrain build --aws-batch jobs.


Cognito

A Cognito user pool provides authentication for Nextstrain logins. Cognito is integrated with the server using the OAuth2 support from PassportJS in our authn.js file.
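The wiring is roughly Passport's generic OAuth2 strategy pointed at the user pool's hosted endpoints. The Cognito domain, callback path, and environment variable names below are placeholders, not our actual configuration:

```javascript
// Sketch of an OAuth2 strategy against a Cognito user pool, approximating
// what authn.js sets up. COGNITO_BASE and the env var names are placeholders.
const OAuth2Strategy = require("passport-oauth2");

const COGNITO_BASE = "https://YOUR-COGNITO-DOMAIN.auth.us-east-1.amazoncognito.com";

const strategy = new OAuth2Strategy(
  {
    authorizationURL: `${COGNITO_BASE}/oauth2/authorize`,
    tokenURL: `${COGNITO_BASE}/oauth2/token`,
    clientID: process.env.COGNITO_CLIENT_ID,
    clientSecret: process.env.COGNITO_CLIENT_SECRET,
    callbackURL: "/logged-in",
  },
  // Verify callback: resolve the tokens to a user record for the session.
  (accessToken, refreshToken, profile, done) => done(null, { accessToken })
);
```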

We currently don’t use Cognito’s identity pools. It may be beneficial to use one in the future so we can get temporary AWS credentials specific to each Nextstrain user with the appropriate authorizations baked in (instead of using a server-wide set of credentials).


DNS

Nameservers for our DNS zone are hosted by DNSimple.


GitHub repos

nextstrain/ is the GitHub repo for the Nextstrain website.

Core and staging narratives are sourced from the nextstrain/narratives repo (the master and staging branches, respectively).