author     Hugo Hörnquist <hugo@lysator.liu.se>   2023-06-18 20:35:48 +0200
committer  Hugo Hörnquist <hugo@lysator.liu.se>   2023-06-20 00:26:09 +0200
commit     5e1032519189f3b6fa793cec81833a781a91d8f2 (patch)
tree       51a5ba59974e61f7a56128afcb324d49c9f8b7c8 /README.md
parent     Initial add. (diff)
Rewrote almost everything.
Diffstat (limited to 'README.md')
-rw-r--r--  README.md  169
1 file changed, 129 insertions, 40 deletions
diff --git a/README.md b/README.md
index 6f51548..a97f7ec 100644
--- a/README.md
+++ b/README.md
@@ -7,71 +7,160 @@ nodes, and databases.
Usage
-----
+### Overview
+Concourse is configured as a set of clusters. Each cluster consists of:
+
+- 1 database (a database within PostgreSQL)
+- 1 or more web nodes
+- 1 load-balancing nginx
+  (this is needed even for a single web node, due to how this module is written)
+- 1 or more worker nodes
+
+### Keys
+
+There are also a number of [different keys](https://concourse-ci.org/concourse-generate-key.html)
+needed for concourse to operate correctly.
+
+These are:
+
+- The session signing key, used by the web node for signing and verifying user session tokens.
+- The TSA host key, used by the web node for the SSH worker registration gateway ("TSA"); its public key is given to worker nodes so they can verify the web node when connecting.
+- The worker keys, plain SSH keys used by the worker nodes when connecting to the web node.
+
+The session signing key and the TSA host key are **NOT** managed by this
+module. This is because they need to be the same for all nodes in a cluster,
+and there isn't a good way to mark a single node as the "master" without extra
+work, and that effort might as well be spent generating the keys manually.
+
+The worker keys, however, *are* managed by this module. Each worker
+generates its own key and then creates an exported resource, which each
+web node realizes (collection is scoped to the cluster). A sketch of this
+pattern is shown below.
+
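+A (hypothetical) sketch of this pattern. The exported resource name
+`concourse::worker_key` is made up for illustration; the actual resource used
+inside the module may differ.
+
+```puppet
+# On each worker node: export the worker's public key, tagged with its cluster.
+@@concourse::worker_key { $trusted['certname']:
+  cluster => $cluster,
+  key     => $worker_public_key,  # illustrative; the module reads the generated key itself
+}
+
+# On each web node: collect the keys of all workers in the same cluster.
+Concourse::Worker_key <<| cluster == $cluster |>>
+```
+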
+### Example Configuration
+
+A complete concourse configuration might look like this.
+
+Note that the `session_signing_key`, `tsa_private_key`, and `tsa_public_key`
+are found through Hiera in this example, as explained under [Keys](#keys).
+
+```puppet
+$cluster = 'default'
+$external_domain = 'concourse.example.com'
+
+# Cluster configuration should be set on the main resource. All other resources
+# reference this hash through their cluster parameter.
+class { 'concourse':
+  default_cluster => $cluster,
+  clusters        => {
+    $cluster => {
+      'postgres_user'     => 'concourse',
+      'postgres_password' => 'CHANGEME',
+      'external_url'      => "https://${external_domain}",
+
+      # Keys are fetched through Hiera here.
+      'session_signing_key' => lookup('session_signing_key'),
+      'tsa_private_key'     => lookup('tsa_private_key'),
+      'tsa_public_key'      => lookup('tsa_public_key'),
+    },
+  },
+}
+
+# Creates the database and user.
+# Omit this if managing the database elsewhere.
+concourse::database { $cluster:
+  cluster => $cluster,
+}
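+
+# If managing the database elsewhere, the hypothetical equivalent using the
+# puppetlabs/postgresql module would look roughly like:
+#
+#   postgresql::server::db { 'concourse':
+#     user     => 'concourse',
+#     password => postgresql::postgresql_password('concourse', 'CHANGEME'),
+#   }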
+
+# Configures the load balancer.
+# This should only be done once per cluster
+# (unless you load balance your load balancers...).
+#
+# Ensure that `nginx::stream` is set to true.
+concourse::proxy::nginx { $external_domain:
+  cluster => $cluster,
+}
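+# (Assuming the voxpupuli/nginx module is in use, setting `nginx::stream: true`
+# in Hiera is typically enough to satisfy the requirement above.)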
+
+# Configures a web node and attaches it to the cluster.
+# Note that multiple web nodes in the same cluster should have identical
+# configurations (except for their peer_address).
+# Note that concourse currently always binds to port 8080.
+class { 'concourse::web':
+  cluster => $cluster,
+}
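+
+# A second web node (on another machine) would, hypothetically, only differ in
+# its peer address, assuming a peer_address parameter as mentioned above:
+#
+#   class { 'concourse::web':
+#     cluster      => $cluster,
+#     peer_address => $facts['networking']['ip'],
+#   }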
+
+# Some authentication method needs to be configured. The authentication happens
+# in the web nodes (although an extra layer could be added through nginx).
+# Check the classes under the `concourse::auth::` namespace for available methods.
+#
+# The simplest is `concourse::auth::local`:
+class { 'concourse::auth::local':
+  users => [
+    {
+      'name'     => 'hugo',
+      'password' => 'This password is stored in cleartext',
+    },
+  ],
+}
+
+# Configures a worker node and attaches it to the cluster.
+class { 'concourse::worker':
+  cluster => $cluster,
+}
+
+# Finally, this installs the `fly` CLI.
+include concourse::fly
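+
+# Once everything is applied, logging in from a workstation with the local
+# user defined above would look something like:
+#
+#   fly --target main login --concourse-url https://concourse.example.com \
+#       --username hugo --password '...'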
+```
+
+Note that only some keys are managed through
+`concourse::configured_clusters`; using Hiera is *strongly* recommended for
+more advanced setups with multi-node clusters.
+
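+For a multi-node cluster, a (hypothetical) sketch of how the pieces from the
+example above might be split over separate node definitions. The node names are
+made up; the classes are the ones used above, with the cluster hash assumed to
+come from Hiera:
+
+```puppet
+node 'web01.example.com' {
+  # Cluster definition, e.g. taken from Hiera.
+  include concourse
+
+  class { 'concourse::web':
+    cluster => 'default',
+  }
+  # ...plus concourse::auth::*, concourse::proxy::nginx, etc. where appropriate.
+}
+
+node 'worker01.example.com' {
+  include concourse
+
+  class { 'concourse::worker':
+    cluster => 'default',
+  }
+}
+```
+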
### Nodes
+As mentioned above, a concourse cluster contains a number of different roles
+(here called nodes). A short summary of each node follows.
#### Web node
Web nodes act as the front-end and dispatcher.
Each web node is stateless, and manages its state through a shared
-database. If multiple nodes are used, then a
-[web node cluster](#web node cluster)
+database. If multiple nodes are used, then a
+[web node cluster](#web-node-cluster)
should be used.
(technically clusters are always used, and default to the cluster "default").
-```puppet
-class { 'concourse::web':
- postgres_user => '',
- postgres_password => '',
-}
-```
-
##### Authentication
-#### Worker Node
-
-#### Database
+TODO
-#### Fly Client
+#### Worker Node
-#### Web node cluster
+TODO
+#### Database
-### Special Hiera Keys
-- `concourse::${cluster}::postgres_user`
-- `concourse::${cluster}::postgres_password`
-- `concourse::${cluster}::session_signing_key`
-- `concourse::${cluster}::tsa_private_key`
-- `concourse::${cluster}::tsa_public_key`
+TODO
-Keys
-----
-### Session signing key
-Used by the web node for signing and verifying user session tokens.
+#### Fly Client
-### TSA host key
-Used by the web node for the SSH worker registration gateway server ("TSA").
+TODO
-The public key is given to each worker node to verify the remote host when
-connecting via SSH.
+#### Web node cluster
-### Worker key
+TODO
-Each worker node verifies its registration with the web node via a SSH key.
-The public key must be listed in the web node's *authorized worker keys* file
-in order for the worker to register.
+### Special Hiera Keys
-Hiera Examples
---------------
+TODO
```yaml
-concourse::cluster::tsa_host: concourse.example.com
-concourse::cluster::postgres_user: concourse
-concourse::cluster::postgres_password: MySuperSecretPassword
-concourse::cluster::session_signing_key: 'A valid key'
-concourse::cluster::tsa_private_key: 'A valid key'
-concourse::cluster::tsa_private_key: 'A valid key'
+concourse::${cluster}:
+  postgres_user: pg_username
+  postgres_password: pg_password
+  session_signing_key: 'A valid key'
+  tsa_private_key: 'A valid key'
+  tsa_public_key: 'A public key matching the private key'
```
[CONCOURSE]: https://concourse-ci.org/