2.2 Install
Concourse is distributed as a single concourse binary, which contains the logic for running both a web node and a worker node. The binary is fairly self-contained, making it ideal for tossing onto a VM by hand or orchestrating it with Docker, Kubernetes, or other ops tooling.
For the sake of brevity and clarity, this document focuses solely on the concourse binary. Documentation for other platforms lives in their respective GitHub repositories, as linked from the Download page.
Note: this document is not an exhaustive reference for the concourse CLI! Consult the --help output if you're looking for a knob to turn.
Prerequisites
Grab the appropriate binary for your platform from the downloads section.
On Linux you'll need kernel v3.19 or later, with user namespace support enabled. Windows and Darwin don't really need anything special.
A PostgreSQL 9.5+ server running somewhere with an empty database created. If you're going to run a server yourself, refer to your platform or Linux distribution's installation instructions; we can't feasibly maintain the docs for this subject ourselves.
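If you just want to try things out locally and already have PostgreSQL installed, creating the database Concourse expects by default can be as simple as the following (a minimal sketch, assuming the PostgreSQL client tools are on your PATH and your current UNIX user is allowed to create databases):
createdb atc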
Quick Start
Before you spend time getting a cluster up and running, you might want to just kick the tires a bit and run everything at once. This can be achieved with the quickstart command:
concourse quickstart \
--add-local-user myuser:mypass \
--main-team-local-user myuser \
--external-url http://my-ci.example.com \
--worker-work-dir /opt/concourse/worker
This command is shorthand for running a single web node and worker node on the same machine, auto-wired to trust each other. We've also configured some local auth for the main team - you may want to change that (see Configuring Team Auth).
So far we've assumed that you have a local PostgreSQL server running on the default port (5432) with an atc database, accessible by the current UNIX user. If your database lives elsewhere, just specify the --postgres-* flags (consult concourse quickstart --help for more information).
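For instance, pointing quickstart at a database on another host might look something like this (a sketch reusing the --postgres-* flags shown in the multi-node examples below; adjust the values for your environment):
concourse quickstart \
  --add-local-user myuser:mypass \
  --main-team-local-user myuser \
  --external-url http://my-ci.example.com \
  --worker-work-dir /opt/concourse/worker \
  --postgres-host 10.0.32.0 \
  --postgres-user user \
  --postgres-password pass \
  --postgres-database concourse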
The --external-url flag is not technically necessary, so it's safe to omit if you're just testing things. Concourse uses it as the base when generating URLs to itself, so you won't want those URLs to use the default http://127.0.0.1:8080 when you're working on a different machine than the server.
Configuring Auth Providers
An operator will need to provide the following information to concourse web upon startup:
Locally configured users
Third-party auth providers (e.g. GitHub Auth, CF Auth, etc.)
Users who should be members of the main team (either local or from auth providers)
Local Auth
In order to add new users to the system, you can supply the --add-local-user flag to the concourse web subcommand like so:
concourse web \
--add-local-user myuser:mypass
This will allow someone to log in with the username myuser and password mypass. Note that this user doesn't intrinsically have access to anything - they must be granted access on a team-by-team basis (see Teams).
Adding Local Users to the main Team
Once you have added the user myuser, you can add them to the main team:
concourse web \
...
--main-team-local-user myuser
GitHub Auth
A Concourse server can authenticate against GitHub to take advantage of their permission model and other security improvements in their infrastructure.
Creating a GitHub application
First you need to create an OAuth application on GitHub.
The callback URL will be the external URL of your Concourse server with /sky/issuer/callback appended. For example, Concourse's own CI server's callback URL would be https://ci.concourse-ci.org/sky/issuer/callback.
Configuring the client
You will be given a Client ID and a Client Secret for your new application. These will then be passed as the following flags to concourse web:
--github-client-id=CLIENT_ID
--github-client-secret=CLIENT_SECRET
Note that the client must be created under an organization if you want to authorize users based on organization/team membership. If the client is created under a personal account, only individual users can be authorized.
Here's a full example:
concourse web \
--github-client-id CLIENT_ID \
--github-client-secret CLIENT_SECRET
If you're configuring GitHub Enterprise, you'll also need to set the following flags:
--github-host=github.example.com
--github-ca-cert=/tmp/some-cert
The GitHub Enterprise host should not contain a scheme or a trailing slash.
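Putting the GitHub Enterprise pieces together, the full invocation might look something like this (a sketch reusing the flags above; the host and certificate path are placeholders for your own values):
concourse web \
  --github-client-id CLIENT_ID \
  --github-client-secret CLIENT_SECRET \
  --github-host github.example.com \
  --github-ca-cert /tmp/some-cert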
Adding GitHub Users to the main Team
Once you have added the GitHub auth connector, you can add a GitHub Organization, GitHub Team, and/or GitHub User to the main team.
concourse web \
... \
--main-team-github-org=ORG_NAME \
--main-team-github-team=ORG_NAME:TEAM_NAME \
--main-team-github-user=USERNAME
GitLab Auth
A Concourse server can authenticate against GitLab to take advantage of their permission model.
Creating a GitLab application
First you need to create an OAuth application on GitLab.
The redirect URI will be the external URL of your Concourse server with /sky/issuer/callback appended. For example, Concourse's own CI server's callback URL would be https://ci.concourse-ci.org/sky/issuer/callback.
Configuring the client
You will be given a Client ID and a Client Secret for your new application. These will then be passed as the following flags to concourse web:
--gitlab-client-id=CLIENT_ID
--gitlab-client-secret=CLIENT_SECRET
Here's a full example:
concourse web \
--gitlab-client-id CLIENT_ID \
--gitlab-client-secret CLIENT_SECRET
If you're configuring a self-hosted GitLab instance, you'll also need to set the following flag:
--gitlab-host=https://gitlab.example.com
The GitLab host must contain a scheme and must not have a trailing slash.
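For a self-hosted instance, the full set of flags might therefore look something like this (a sketch combining the flags above; gitlab.example.com is a placeholder for your own host):
concourse web \
  --gitlab-client-id CLIENT_ID \
  --gitlab-client-secret CLIENT_SECRET \
  --gitlab-host https://gitlab.example.com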
Adding GitLab Users to the main Team
Once you have added the GitLab auth connector, you can add a GitLab Group and/or GitLab User to the main team.
concourse web \
... \
--main-team-gitlab-group=GROUP_NAME \
--main-team-gitlab-user=USERNAME
CF Auth
Cloud Foundry (CF) Auth is for operators who wish to authenticate users against their Cloud Foundry instance via the UAA product.
Creating the client
You'll first need to create a client for Concourse in UAA.
The callback URL will be the external URL of your Concourse server with /sky/issuer/callback appended. For example, Concourse's own CI server's callback URL would be https://ci.concourse-ci.org/sky/issuer/callback.
The client should look something like this, under uaa.clients:
concourse:
  id: my-client-id
  secret: my-client-secret
  scope: openid,cloud_controller.read
  authorized-grant-types: "authorization_code,refresh_token"
  access-token-validity: 3600
  refresh-token-validity: 3600
  redirect-uri: https://concourse.example.com/sky/issuer/callback
Configuring the client
You will be given a Client ID and a Client Secret for your new application. These will then be passed as the following flags to concourse web:
--cf-client-id=CLIENT_ID
--cf-client-secret=CLIENT_SECRET
You will also need to configure the base API URL for your CF deployment. Concourse uses this URL to discover the information it needs for authentication.
--cf-api-url=CF_API_URL
Here's a full example:
concourse web \
--cf-client-id CLIENT_ID \
--cf-client-secret CLIENT_SECRET \
--cf-api-url CF_API_URL
Adding CF Users to the main Team
Once you have added the CF auth connector, you can add a CF Org, CF Space, CF Space GUID, and/or CF User to the main team.
concourse web \
... \
--main-team-cf-org=ORG_NAME \
--main-team-cf-space=ORG_NAME:SPACE_NAME \
--main-team-cf-space-guid=SPACE_GUID \
--main-team-cf-user=USERNAME
Generic OIDC
If your auth provider adheres to the OIDC specification then you should use this provider. Unlike the oAuth provider, you don't need to provide the auth-url, token-url, and userinfo-url. Instead you can simply provide an issuer-url, and the system will query the .well-known/openid-configuration endpoint to discover all the information it needs.
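If you'd like to see what Concourse will discover, you can query that endpoint yourself (a sketch; https://oidc.example.com stands in for your actual issuer URL):
curl https://oidc.example.com/.well-known/openid-configuration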
Creating the client
You'll first need to create a client with your OIDC provider.
The callback URL will be the external URL of your Concourse server with /sky/issuer/callback appended. For example, Concourse's own CI server's callback URL would be https://ci.concourse-ci.org/sky/issuer/callback.
Configuring the client
Configuring generic OIDC for X company's internal OIDC service may look something like:
concourse web \
--oidc-display-name='X' \
--oidc-client-id=CLIENT_ID \
--oidc-client-secret=CLIENT_SECRET \
--oidc-issuer=https://oidc.example.com
Adding OIDC Users to the main Team
Once you have added the OIDC auth connector, you can add a Group and/or User to the main team.
concourse web \
... \
--main-team-oidc-group=GROUP_NAME \
--main-team-oidc-user=USERNAME
Generic oAuth
If your auth provider supports oAuth2 but doesn't adhere to the OIDC specification then you should be using this provider. It's mostly the same as the OIDC provider, except it gives you more control over specifying the full set of authorization endpoints (auth-url, token-url, and userinfo-url).
Creating the client
First you'll need to create a client with your oAuth provider.
The callback URL will be the external URL of your Concourse server with /sky/issuer/callback appended. For example, Concourse's own CI server's callback URL would be https://ci.concourse-ci.org/sky/issuer/callback.
Configuring the client
Configuring generic oAuth for X company's internal oAuth service may look something like:
concourse web \
--oauth-display-name='X' \
--oauth-client-id=CLIENT_ID \
--oauth-client-secret=CLIENT_SECRET \
--oauth-auth-url=https://oauth.example.com/oauth2/auth \
--oauth-token-url=https://oauth.example.com/oauth2/token \
--oauth-userinfo-url=https://oauth.example.com/oauth2/userinfo
Adding OAuth Users to the main Team
Once you have added the OAuth auth connector, you can add a Group and/or User to the main team.
concourse web \
... \
--main-team-oauth-group=GROUP_NAME \
--main-team-oauth-user=USERNAME
Dex Connectors & Future Providers
Concourse uses a fork of coreos/dex for its authentication. You can find additional documentation on the supported auth providers in the dex connectors documentation.
Adding a new auth provider to Concourse is as simple as submitting a pull request to our fork at concourse/dex and then adding a bit of configuration to concourse/skymarshal.
Multi-node Cluster
Beyond quickstart, the Concourse binary includes separate commands for running a multi-node cluster. This is necessary for high availability, or simply for being able to run builds across more than one worker.
Generating Keys
First, you'll need to generate 3 private keys (well, 2, plus 1 for each worker):
session_signing_key (currently must be RSA): used for signing user session tokens, and by the TSA to sign its own tokens in the requests it makes to the ATC.
tsa_host_key: used for the TSA's SSH server. This is the key whose fingerprint you see when the ssh command warns you when connecting to a host it hasn't seen before.
worker_key (one per worker): used for authorizing worker registration. There can actually be an arbitrary number of these keys; they are just listed to authorize worker SSH access.
To generate these keys, run:
ssh-keygen -t rsa -f tsa_host_key -N ''
ssh-keygen -t rsa -f worker_key -N ''
ssh-keygen -t rsa -f session_signing_key -N ''
...and we'll also start on an authorized_keys file, currently listing this initial worker key:
cp worker_key.pub authorized_worker_keys
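If you later generate keys for additional workers, their public keys just get appended to the same file (a sketch, using a hypothetical second key named other_worker_key generated with the same ssh-keygen invocation as above):
# other_worker_key is a hypothetical second worker key
cat other_worker_key.pub >> authorized_worker_keys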
Running web Nodes
The concourse binary can run a web node via the web subcommand, like so:
concourse web \
--add-local-user myuser:mypass \
--main-team-local-user myuser \
--session-signing-key session_signing_key \
--tsa-host-key tsa_host_key \
--tsa-authorized-keys authorized_worker_keys \
--external-url http://my-ci.example.com
Just as with Quick Start, this example is configuring local auth for the main team, and assumes a local PostgreSQL server. You'll want to consult concourse web --help for more configuration options.
The web node can be scaled up for high availability; the nodes will also roughly share their scheduling workloads, using the database to synchronize. This is done by just running more web commands on different machines and optionally putting them behind a load balancer.
To run a cluster of web nodes, you'll just need to pass the following flags:
The --postgres-* flags must all be set to the same database.
The --peer-url flag must be specified as a URL used to reach the individual web node from other web nodes, so it just has to be a URL reachable within their private network, e.g. a 10.x.x.x address.
The --external-url should be the URL used to reach any ATC, i.e. the URL pointing to your load balancer.
For example:
Node 0:
concourse web \
--add-local-user myuser:mypass \
--main-team-local-user myuser \
--session-signing-key session_signing_key \
--tsa-host-key tsa_host_key \
--tsa-authorized-keys authorized_worker_keys \
--postgres-host 10.0.32.0 \
--postgres-user user \
--postgres-password pass \
--postgres-database concourse \
--external-url https://ci.example.com \
--peer-url http://10.0.16.10:8080
Node 1 (the only difference is --peer-url):
concourse web \
--add-local-user myuser:mypass \
--main-team-local-user myuser \
--session-signing-key session_signing_key \
--tsa-host-key tsa_host_key \
--tsa-authorized-keys authorized_worker_keys \
--postgres-host 10.0.32.0 \
--postgres-user user \
--postgres-password pass \
--postgres-database concourse \
--external-url https://ci.example.com \
--peer-url http://10.0.16.11:8080
Running worker Nodes
The concourse binary can run a worker node via the worker subcommand, like so:
sudo concourse worker \
--work-dir /opt/concourse/worker \
--tsa-host 127.0.0.1:2222 \
--tsa-public-key tsa_host_key.pub \
--tsa-worker-private-key worker_key
Note that the worker must be run as root, as it orchestrates containers.
You may want a few workers, depending on the resource usage of your pipeline. There should be one per machine; running multiple on one box doesn't really make sense, as each worker runs as many containers as Concourse requests of it.
The --work-dir flag specifies where container data should be placed. Make sure it has plenty of disk space available, as it's where all the disk usage across your builds and resources will end up.
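If you want to sanity-check how much space the work dir has before pointing a worker at it, a plain df will do (a trivial sketch; the path matches the example above):
df -h /opt/concourse/worker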
The --tsa-host refers to wherever the TSA on your web node is listening. This may be the address of a load balancer if you're running multiple web nodes, or just a local address like 127.0.0.1:2222 if you're running everything on one box.
The --tsa-public-key flag is used to ensure we're connecting to the TSA we should be connecting to, and is used like known_hosts with the ssh command. Refer to Generating Keys if you're not sure what this means.
The --tsa-worker-private-key flag specifies the key to use when authenticating to the TSA. Refer to Generating Keys if you're not sure what this means.
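For a worker running on a separate machine from the web node, the command is the same except --tsa-host points at the web node or load balancer rather than localhost (a sketch; ci.example.com:2222 is a placeholder address, and tsa_host_key.pub and worker_key are assumed to have been copied onto the worker machine):
# ci.example.com:2222 is a placeholder for your web node or load balancer address
sudo concourse worker \
  --work-dir /opt/concourse/worker \
  --tsa-host ci.example.com:2222 \
  --tsa-public-key tsa_host_key.pub \
  --tsa-worker-private-key worker_key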
Workers have a statically configured platform and a set of tags, both of which determine where steps in a Build Plan are scheduled.
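Tags are given to the worker at startup; to the best of our knowledge this is done with the worker's --tag flag (a hedged sketch, so confirm the exact flag name via concourse worker --help for your version), e.g.:
# "private-network" is an arbitrary example tag name
sudo concourse worker \
  --work-dir /opt/concourse/worker \
  --tsa-host 127.0.0.1:2222 \
  --tsa-public-key tsa_host_key.pub \
  --tsa-worker-private-key worker_key \
  --tag private-network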
The Linux concourse binary comes with a set of core resource types baked in. If you are planning to use them, you need to have at least one Linux worker.
Community Installation Tools
These are installation tools built by the community. Use at your own risk!
Concourse Up by @EngineerBetter. A tool for easily deploying Concourse in a single command.
Concourse Helm Chart. The latest stable Helm chart for deploying Concourse into k8s. Official support coming soon.
Concourse Formula by @marco-m. All-in-one Concourse installation using Vagrant and VirtualBox, with S3-compatible storage and the Vault secrets manager.
BUCC by @starkandwayne. The bucc command line utility allows for easy bootstrapping of the BUCC stack (BOSH, UAA, CredHub, and Concourse), which is the starting point for many deployments.