Gitpod
Always ready-to-code.
# Installer
The best way to get started with Gitpod is by using our recommended & default installation method [described in our documentation](https://www.gitpod.io/docs/self-hosted/latest/getting-started). In fact, our default installation method actually wraps this installer into a UI that helps you manage, update and configure Gitpod in a streamlined way. This document describes how to use the installer directly.
> The installer is an internal tool and, as such, is not expected to be used by those external to Gitpod.
# Requirements
- A machine running Linux
  - macOS and Windows are not currently supported, but may be in the future
- [Kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) installed
- A [Kubernetes cluster configured](https://www.gitpod.io/docs/self-hosted/latest)
- A [TLS certificate](#tls-certificates)
Or, [open a Gitpod workspace](https://gitpod.io/from-referrer/)
The process to install Gitpod is:
1. generate a base config
2. amend the config for your own use-case
3. validate
4. render the Kubernetes YAML
5. `kubectl apply`
# Quickstart
## Download the Installer on Linux
Releases can be downloaded from [GitHub Releases](https://github.com/gitpod-io/gitpod/releases/). Select your desired binary, then download and install it:
1. Download the latest release with the command:
```shell
curl -fsSLO https://github.com/gitpod-io/gitpod/releases/latest/download/gitpod-installer-linux-amd64
```
2. Validate the binary (optional)
Download the checksum file:
```shell
curl -fsSLO https://github.com/gitpod-io/gitpod/releases/latest/download/gitpod-installer-linux-amd64.sha256
```
Validate the binary against the checksum file:
```shell
echo "$( gitpod.yaml
```
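3. Make the binary executable and move it into your `PATH` (the install location below is only a suggestion):

   ```shell
   chmod +x gitpod-installer-linux-amd64
   sudo mv gitpod-installer-linux-amd64 /usr/local/bin/gitpod-installer
   ```

## Generate the base config

```shell
gitpod-installer init > gitpod.config.yaml
```

Amend `gitpod.config.yaml` for your own use-case - see [Config](#config) below.

## Validate

Check both the config file and your cluster before rendering:

```shell
gitpod-installer validate config --config gitpod.config.yaml
gitpod-installer validate cluster --kubeconfig ~/.kube/config --config gitpod.config.yaml
```

## Render the Kubernetes YAML

```shell
gitpod-installer render --config gitpod.config.yaml > gitpod.yaml
```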
## Deploy
```shell
kubectl apply -f gitpod.yaml
```
After a few minutes, your Gitpod installation will be available on the
specified `domain`.
## Uninstallation
The Installer generates a ConfigMap recording the metadata of every Kubernetes
object it creates. This can be retrieved to remove Gitpod from your cluster.
```shell
kubectl get configmaps gitpod-app -o jsonpath='{.data.app\.yaml}' \
  | kubectl delete -f - # deletes every object recorded in the ConfigMap
```
**Important**: this may leave some objects in your Kubernetes cluster,
including `Secrets` generated from internal `Certificates` and
`PersistentVolumeClaims`. These must be deleted manually.
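For example, to see what was left behind (a sketch only: the `default` namespace and the `app=gitpod` label selector are assumptions about your installation):

```shell
# List candidate leftovers; namespace and label selector are assumptions
kubectl get secrets,pvc --namespace default --selector app=gitpod
```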
[Batch jobs](https://kubernetes.io/docs/concepts/workloads/controllers/job/) are
not included in this ConfigMap by design. These have `ttlSecondsAfterFinished`
defined in the spec and so will be deleted shortly after the jobs have run.
# Advanced topics
## Post-processing the YAML
> Here be dragons.
>
> Whilst you are welcome to post-process your YAML should the need arise, it is not
> recommended and is entirely unsupported. Do so at your own risk.
The Gitpod Installer is designed to provide a robust, well-tested framework
for installing Gitpod on your own infrastructure. There may be times when
this framework doesn't meet your individual requirements; in these situations,
you can post-process the generated YAML.

As an example, the following command changes the `proxy` service from a
`LoadBalancer` to a `ClusterIP` type using [yq](https://mikefarah.gitbook.io/yq):
```shell
yq eval-all --inplace \
'(select(.kind == "Service" and .metadata.name == "proxy") | .spec.type) |= "ClusterIP"' \
gitpod.yaml
```
Similarly, if you are doing a `Workspace`-only install (specifying `Workspace`
as the `kind` in config), you may want to change the service type of `ws-proxy`
from the default `LoadBalancer` to `ClusterIP`. You can post-process the YAML to change that:
```shell
yq eval-all --inplace \
'(select(.kind == "Service" and .metadata.name == "ws-proxy") | .spec.type) |= "ClusterIP"' \
gitpod.yaml
```
## Error validating `StatefulSet.status`
```shell
error: error validating "gitpod.yaml": error validating data: ValidationError(StatefulSet.status): missing required field "availableReplicas" in io.k8s.api.apps.v1.StatefulSetStatus; if you choose to ignore these errors, turn validation off with --validate=false
```
Depending upon your Kubernetes implementation, you may receive this error. This is
due to a bug in the underlying StatefulSet dependency, which is used to generate the
OpenVSX proxy (see [#8529](https://github.com/gitpod-io/gitpod/issues/8529)).
To fix this, you will need to post-process the rendered YAML to remove the `status` field.
```shell
yq eval-all --inplace \
'del(select(.kind == "StatefulSet" and .metadata.name == "openvsx-proxy").status)' \
gitpod.yaml
```
---
# What is installed
- All Gitpod components
- Container registry*
- MySQL database*
- Minio object storage*
\* By default, these dependencies are installed if the `inCluster` setting
is `true`. External dependencies can be used in their place.
# Config
> Not every parameter is discussed in this table, just ones that are likely
> to need changing. The full config structure is available in [config.go](/install/installer/pkg/config/v1/config.go).
| Property | Required | Description | Notes |
| --- | --- | --- | --- |
| `domain` | Y | The domain to deploy to | This will need to be changed on every deployment |
| `kind` | Y | Installation type to run - for most users, this will be `Full` | Available options are:<br />- `Meta`: installs the components that make up the front-end-facing side of Gitpod<br />- `Workspace`: installs the components that make up Gitpod Workspaces<br />- `Full`: installs the complete setup, i.e. both `Meta` and `Workspace` |
| `metadata.region` | Y | Location for your `objectStorage` provider | If using Minio, set to `local` |
| `workspace.runtime.containerdRuntimeDir` | Y | The location of containerd on the host machine | Common values are:<br />- `/run/containerd/io.containerd.runtime.v2.task/k8s.io` (K3s)<br />- `/var/lib/containerd/io.containerd.runtime.v2.task/k8s.io` (AWS/GCP)<br />- `/run/containerd/io.containerd.runtime.v1.linux/k8s.io`<br />- `/run/containerd/io.containerd.runtime.v1.linux/moby` |
| `workspace.runtime.containerdSocket` | Y | The location of the containerd socket on the host machine | |
| `workspace.runtime.fsShiftMethod` | Y | File system shift method used for workspaces | Can be `shiftfs` |
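For illustration, a minimal config using the properties above might look like the following. All values are placeholders; `certificate.name` must match the TLS secret described in [TLS certificates](#tls-certificates).

```yaml
# Placeholder values only - adjust for your own cluster
domain: gitpod.example.com
kind: Full
metadata:
  region: local
certificate:
  kind: secret
  name: https-certificates
workspace:
  runtime:
    containerdRuntimeDir: /run/containerd/io.containerd.runtime.v2.task/k8s.io
    containerdSocket: /run/containerd/containerd.sock
    fsShiftMethod: shiftfs
```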
## Auth Providers
Gitpod must be connected to a Git provider. This can be done via the
dashboard on first load, or by providing `authProviders` configuration
as a Kubernetes secret.
### Setting via config
1. Update your configuration file:
```yaml
authProviders:
- kind: secret
name: public-github
```
2. Create a secret file:
```yaml
# Save this public-github.yaml
id: Public-GitHub
host: github.com
type: GitHub
oauth:
clientId: xxx
clientSecret: xxx
callBackUrl: https://$DOMAIN/auth/github.com/callback
settingsUrl: xxx
```
3. Create the secret:
```shell
kubectl create secret generic public-github --from-file=provider=./public-github.yaml
```
## In-cluster vs External Dependencies
Gitpod requires certain services for it to function correctly. The Installer
provides all of these in-cluster, but they can be configured to use services
external to the cluster.
To use the in-cluster dependency, set `inCluster` to `true`.
## Container Registry
```yaml
containerRegistry:
inCluster: false
external:
url:
certificate:
kind: secret
name: container-registry-token
```
The `container-registry-token` secret must contain a `.dockerconfigjson`
key - this can be created by using the `kubectl create secret docker-registry`
[command](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-by-providing-credentials-on-the-command-line).
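For example, the secret can be created as follows (the registry server and credentials are placeholders):

```shell
# Produces a secret whose .dockerconfigjson key holds the registry credentials
kubectl create secret docker-registry container-registry-token \
  --docker-server=registry.example.com \
  --docker-username=my-user \
  --docker-password=my-password
```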
### Using Amazon Elastic Container Registry (ECR)
Gitpod is compatible with any registry that implements the [Docker Registry HTTP API V2](https://docs.docker.com/registry/spec/api/)
specification. Amazon ECR does not implement this spec fully: the spec expects
that, if an image is pushed to a repository that doesn't exist, the registry
creates the repository before uploading the image. Amazon ECR does not do
this - if the repository doesn't exist, the push fails.

To use Gitpod with Amazon, run the in-cluster registry and configure it to
use S3 as its backend storage.
```yaml
containerRegistry:
inCluster: true
s3storage:
bucket:
certificate:
kind: secret
name: s3-storage-token
```
The secret must contain two keys:
- `s3AccessKey`
- `s3SecretKey`
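A secret with those keys can be created like this (the credentials shown are placeholders):

```shell
kubectl create secret generic s3-storage-token \
  --from-literal=s3AccessKey=AKIAIOSFODNN7EXAMPLE \
  --from-literal=s3SecretKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```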
## Database
Gitpod requires an instance of MySQL 5.7 for data storage.
The default encryption keys are `[{"name":"general","version":1,"primary":true,"material":"4uGh1q8y2DYryJwrVMHs0kWXJlqvHWWt/KJuNi04edI="}]`
### Google Cloud SQL Proxy
If using a GCP SQL instance, a [Cloud SQL Proxy](https://cloud.google.com/sql/docs/mysql/sql-proxy)
connection can be used.
```yaml
database:
inCluster: false
cloudSQL:
    instance: <project>:<region>:<instance>
serviceAccount:
kind: secret
name: cloudsql-token
```
The `cloudsql-token` secret must contain the following key/value pairs:
- `credentials.json` - GCP Service Account key with `roles/cloudsql.client` role
- `encryptionKeys` - database encryption key. Use default value as above if unsure
- `password` - database password
- `username` - database username
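For example, the secret can be created with `kubectl` (the file path, username and password are placeholders; the `encryptionKeys` value is the default key material shown above):

```shell
kubectl create secret generic cloudsql-token \
  --from-file=credentials.json=./gcp-service-account.json \
  --from-literal=encryptionKeys='[{"name":"general","version":1,"primary":true,"material":"4uGh1q8y2DYryJwrVMHs0kWXJlqvHWWt/KJuNi04edI="}]' \
  --from-literal=username=gitpod \
  --from-literal=password=change-me
```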
### External Database
For all other connections, use an external database configuration.
```yaml
database:
inCluster: false
external:
certificate:
kind: secret
name: database-token
```
The `database-token` secret must contain the following key/value pairs:
- `encryptionKeys` - database encryption key. Use default value as above if unsure
- `host` - IP or URL of the database
- `password` - database password
- `port` - database port, usually `3306`
- `username` - database username
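A sketch of creating this secret, with placeholder host, username and password:

```shell
kubectl create secret generic database-token \
  --from-literal=encryptionKeys='[{"name":"general","version":1,"primary":true,"material":"4uGh1q8y2DYryJwrVMHs0kWXJlqvHWWt/KJuNi04edI="}]' \
  --from-literal=host=db.example.com \
  --from-literal=port=3306 \
  --from-literal=username=gitpod \
  --from-literal=password=change-me
```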
## Object Storage
Gitpod supports the following object storage providers:
### GCP
```yaml
metadata:
region:
objectStorage:
inCluster: false
cloudStorage:
project:
serviceAccount:
kind: secret
name: gcp-storage-token
```
The `gcp-storage-token` secret must contain the following key/value pairs:
- `service-account.json` - GCP Service Account key with `roles/storage.admin` and `roles/storage.objectAdmin` roles
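For example, with a service account key file downloaded from GCP (the file path is a placeholder):

```shell
kubectl create secret generic gcp-storage-token \
  --from-file=service-account.json=./gcp-service-account.json
```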
### S3
> This is currently only tested with AWS. Other S3-compatible providers should
> work, but there may be compatibility issues - please raise a ticket if you have
> issues with other providers.
```yaml
metadata:
region:
objectStorage:
inCluster: false
s3:
endpoint: s3.amazonaws.com
credentials:
kind: secret
name: s3-storage-token
```
The `s3-storage-token` secret must contain the following key/value pairs:
- `accessKeyId` - username that has access to S3 account
- `secretAccessKey` - password that has access to S3 account
> In AWS, the `accessKeyId`/`secretAccessKey` are an IAM user's credentials with the
> `AmazonS3FullAccess` policy
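The secret can be created like so (placeholder credentials; note the key names differ from the container registry's S3 secret above):

```shell
kubectl create secret generic s3-storage-token \
  --from-literal=accessKeyId=AKIAIOSFODNN7EXAMPLE \
  --from-literal=secretAccessKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```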
# Cluster Dependencies
For the deployment to work successfully, certain dependencies need to be
installed.
## Kernel and Runtime
Your Kubernetes nodes must run Linux kernel v5.4.0 or above and use a
containerd runtime.
## Affinity Labels
Your Kubernetes nodes must have the following labels applied to them:
- `gitpod.io/workload_meta`
- `gitpod.io/workload_ide`
- `gitpod.io/workload_services`
- `gitpod.io/workload_workspace_regular`
- `gitpod.io/workload_workspace_headless`
It is recommended to have a minimum of two node pools, grouping the `meta`
and `ide` nodes together and the `workspace` nodes together.
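For example, the labels can be applied with `kubectl`. This is a sketch only: the node names are placeholders, and setting each label's value to `true` is an assumption about your affinity setup.

```shell
# Group the meta, ide and services workloads on one pool...
kubectl label node services-pool-node-1 \
  gitpod.io/workload_meta=true \
  gitpod.io/workload_ide=true \
  gitpod.io/workload_services=true

# ...and the workspace workloads on another
kubectl label node workspace-pool-node-1 \
  gitpod.io/workload_workspace_regular=true \
  gitpod.io/workload_workspace_headless=true
```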
## TLS certificates
It is a requirement that a certificate secret exists, named as per
`certificate.name` in your config YAML, with `tls.crt` and `tls.key`
in the secret data. How this certificate is generated is entirely your
choice - we suggest [cert-manager](https://cert-manager.io/) for
simplicity; however, any certificate authority can be used by creating a
Kubernetes secret.
The certificate must be associated with the following domains (where
`$DOMAIN` is the value in config `domain`):
- `$DOMAIN`
- `*.$DOMAIN`
- `*.ws.$DOMAIN`
See [FAQs](#how-do-i-use-cert-manager-to-create-a-tls-certificate) for help
with creating a TLS certificate using cert-manager.
### cert-manager
cert-manager **MUST** be installed to your cluster. To secure communication
between the various components, the application creates internal certificates
using the cert-manager `Certificate` and `Issuer` Custom Resource Definitions.
```shell
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade \
--atomic \
--cleanup-on-fail \
--create-namespace \
--install \
--namespace='cert-manager' \
--reset-values \
--set installCRDs=true \
--set 'extraArgs={--dns01-recursive-nameservers-only=true,--dns01-recursive-nameservers=8.8.8.8:53\,1.1.1.1:53}' \
--wait \
cert-manager \
jetstack/cert-manager
```
# FAQs
## Why are you writing your own Installer instead of using Helm/Kustomize/etc?
The Installer is a complete replacement for our Helm charts. Over time,
those charts had grown too complex to support effectively and were a barrier
to entry for new users - the base config was many hundreds of lines long
and did not have effective validation on it. By contrast, the Installer's
config is fewer than 50 lines long and can be fully validated before running.

Also, by baking the container image versions into each release of the
Installer, we reduce the potential for variance, which makes it easier to
support the community.
## How do I use Cert Manager to create a TLS certificate?
> Please see [cert-manager.io](https://cert-manager.io) for full documentation.
> This should be considered a quickstart guide.
There are two steps to creating a public TLS certificate using cert-manager.
### 1. Create the Issuer/ClusterIssuer
As the certificate is a wildcard certificate, you must use the DNS01 challenge
provider. Please consult the cert-manager [documentation](https://cert-manager.io/docs/configuration/acme/dns01)
for instructions. This can be either an
[`Issuer` or `ClusterIssuer`](https://cert-manager.io/docs/concepts/issuer).
### 2. Create the certificate
Replace `$DOMAIN` with your domain. This example assumes you have created a
`ClusterIssuer` called `gitpod-issuer` - please change this if necessary.
The certificate is named `https-certificates` - use that name as the
`certificate.name` in your Gitpod Installer [config](#config).
```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: https-certificates
spec:
secretName: https-certificates
issuerRef:
name: gitpod-issuer
kind: ClusterIssuer
dnsNames:
- $DOMAIN
- "*.$DOMAIN"
- "*.ws.$DOMAIN"
```
## How do I use my own TLS certificate?
If you don't wish to use cert-manager to create a TLS certificate with a public
certificate authority, you can bring your own.
To do this, generate your certificate as you would normally, then create a
secret with the certificate stored under the `tls.crt` key and the private
key under the `tls.key` key. Note that values under `data` must be
base64-encoded.

The DNS names must be `$DOMAIN`, `*.$DOMAIN` and `*.ws.$DOMAIN`, where `$DOMAIN`
is your domain.
```yaml
apiVersion: v1
kind: Secret
metadata:
name: https-certificates
data:
tls.crt: xxx
tls.key: xxx
```
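Alternatively, the same secret can be created directly from the certificate files (the file paths here are placeholders):

```shell
kubectl create secret tls https-certificates \
  --cert=./tls.crt \
  --key=./tls.key
```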
## How can I install to a Kubernetes namespace?
By default, Gitpod will be installed to the `default`
[Kubernetes namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces).
To install to a different namespace, pass a `namespace` flag to the `render`
command.
```shell
gitpod-installer render --config gitpod.config.yaml --namespace gitpod > gitpod.yaml
```
The `validate cluster` command also accepts a namespace, allowing you to run
the checks on that namespace.
```shell
gitpod-installer validate cluster --kubeconfig ~/.kube/config --config gitpod.config.yaml --namespace gitpod
```
**IMPORTANT**: this does not create the namespace, so you will need to create
that separately. This is so that uninstallation of Gitpod does not remove any
Kubernetes objects, such as your TLS certificate or connection secrets.
```shell
kubectl create namespace gitpod
```