Concourse
=========

Manages all parts of [Concourse](CONCOURSE), including web nodes, worker
nodes, and databases.

Usage
-----

### Overview
Concourse is configured as a set of clusters. Each cluster consists of

- 1 database (a database within PostgreSQL)
- 1 or more web nodes
- 1 load-balancing nginx
  (required even for a single web node, due to how this module is written)
- 1 or more worker nodes

### Keys

There are also a number of [different keys](https://concourse-ci.org/concourse-generate-key.html)
needed for concourse to operate correctly.

These are:

- The session signing key, used by the web node for signing user session tokens.
- The TSA host key, used by worker nodes to verify their connection to the web node.
- The worker keys, plain SSH keys used by the worker nodes when connecting.

The session signing key and the TSA host key are **NOT** managed by this
module, since they need to be the same for all nodes in a cluster (and
there is no good way to mark a single node as the "master" without extra
work, which might as well be spent generating the keys manually).
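For example, the two shared keys could be generated once by hand and then distributed through Hiera. A sketch using `ssh-keygen` (Concourse also ships its own `generate-key` helper, see the link above; the file names and output directory here are illustrative):

```shell
# Generate the two cluster-wide keys in a scratch directory.
mkdir -p /tmp/concourse-keys

# Session signing key, used by the web node for user session tokens.
ssh-keygen -t rsa -b 4096 -m PEM -N '' -q \
  -f /tmp/concourse-keys/session_signing_key

# TSA host key; ssh-keygen writes the public half alongside it
# (tsa_host_key.pub), which workers use to verify the web node.
ssh-keygen -t rsa -b 4096 -m PEM -N '' -q \
  -f /tmp/concourse-keys/tsa_host_key
```

The private keys would then go into Hiera (ideally encrypted, e.g. with hiera-eyaml) as `session_signing_key`, `tsa_private_key`, and `tsa_public_key`, matching the lookups in the example below.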

The worker keys *are*, however, managed by this module. Each worker
generates its own key and creates an exported resource, which
each web node in the same cluster then realizes.
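Conceptually this is the standard Puppet exported-resource pattern; a simplified sketch (the resource name `concourse::web::worker_key` and its parameters are illustrative, not the module's actual internals):

```puppet
# On each worker: export the public key, tagged with the cluster.
@@concourse::web::worker_key { $facts['networking']['fqdn']:
  public_key => $worker_public_key,
  cluster    => $cluster,
}

# On each web node: collect every worker key exported for this cluster.
Concourse::Web::Worker_key <<| cluster == $cluster |>>
```

Note that exported resources require PuppetDB to be configured.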

### Example Configuration

A complete concourse configuration might look like this.

Note that the `session_signing_key`, `tsa_private_key`, and `tsa_public_key` are
looked up through Hiera in this example, as explained under [Keys](#keys).

```puppet
$cluster = 'default'
$external_domain = 'concourse.example.com'

# Cluster configuration should be set on the main resource. All other resources
# reference this hash through their cluster parameter.
class { 'concourse':
  default_cluster => $cluster,
  clusters        => {
    $cluster => {
      'postgres_user'       => 'concourse',
      'postgres_password'   => 'CHANGEME',
      'external_url'        => "https://${external_domain}",

      # Keys are looked up through Hiera here.
      'session_signing_key' => lookup('session_signing_key'),
      'tsa_private_key'     => lookup('tsa_private_key'),
      'tsa_public_key'      => lookup('tsa_public_key'),
    }
  }
}

# Creates the database and user.
# Omit this if the database is managed elsewhere.
concourse::database { $cluster:
  cluster => $cluster,
}

# Configures the load balancer.
# This should only be done once per cluster
# (unless you load balance your load balancers...).
#
# Ensure that `nginx::stream` is set to true.
concourse::proxy::nginx { $external_domain:
  cluster => $cluster,
}

# Configures a web node and attaches it to the cluster.
# Note that multiple web nodes in the same cluster should have identical
# configurations (except for their peer_address).
# Note that Concourse currently always binds to port 8080.
class { 'concourse::web':
  cluster => $cluster,
}

# Some authentication method needs to be configured. Authentication happens
# on the web nodes (although an extra layer could be added through nginx).
# Check the `concourse::auth::` namespace for available methods.
#
# The simplest is `concourse::auth::local`:
class { 'concourse::auth::local':
  users => [
    {
      'name'     => 'hugo',
      'password' => 'This password is stored in cleartext',
    }
  ]
}

# Configures a worker node and attaches it to the cluster.
class { 'concourse::worker':
  cluster => $cluster,
}

# Finally, this installs the `fly` CLI.
include concourse::fly
```
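The `nginx::stream` requirement mentioned in the proxy comment above could be satisfied through Hiera, for example (assuming the puppet/nginx module's `stream` class parameter):

```yaml
nginx::stream: true
```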

Note that only some keys are managed through
`concourse::configured_clusters`, and using Hiera is *strongly* recommended for
more advanced setups with multi-node clusters.

### Nodes
As mentioned above, a concourse cluster contains a number of different roles
(here called nodes). A short summary of each node.

#### Web node
Web nodes act as the front-end and dispatcher.

Each web node is stateless and keeps its state in a shared
database. If multiple web nodes are used, then a
[web node cluster](#web-node-cluster)
should be used.

(Technically clusters are always used, defaulting to the cluster "default".)
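In a multi-web-node setup, each node would set its own peer address while sharing the rest of the configuration. A sketch, assuming `peer_address` is a parameter of `concourse::web` (as the example above implies):

```puppet
# Identical on every web node, except for peer_address.
class { 'concourse::web':
  cluster      => $cluster,
  peer_address => $facts['networking']['ip'],
}
```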

##### Authentication

TODO

#### Worker Node

TODO

#### Database

TODO

#### Fly Client

TODO

#### Web node cluster

TODO


### Special Hiera Keys

TODO

```yaml
concourse::${cluster}:
    postgres_user: pg_username
    postgres_password: pg_password
    session_signing_key: 'A valid key'
    tsa_private_key: 'A valid key'
    tsa_public_key: 'A public key matching the private key'
```

[CONCOURSE]: https://concourse-ci.org/