Getting Started

Before jumping into the getting started guide, please check the supported providers (AWS, DigitalOcean, minikube) to learn more about their differences and prerequisites.

To get started, you will need a GitLab account with the necessary permissions to create and configure Groups, as well as an account with the necessary permissions at the desired provider.

Prerequisites

  1. Create a GitLab Group under which you will be creating a new cluster. A Group is required because Grove uses the GitLab Dependency Proxy, which is only available under Groups; we use it to bypass Docker Hub rate limits.
  2. In the group's settings, create a Group Deploy Token with the read_registry and write_registry scopes. Take note of the deploy token's username and password; this token will later be used by the Kubernetes cluster to pull images from the container registry and the dependency proxy.
  3. Create a fork of grove-template. Note that this is different from grove; the grove-template repository is used to set up and configure the cluster, while the grove repository provides the provisioning scripts and management commands.
  4. In the new repository, create a new GitLab Cluster Agent token by going to the Infrastructure -> Kubernetes Clusters menu and clicking the "Connect a new cluster (agent)" button. Fill in a name for your agent and note the token generated on the Register screen.
  5. Important: Once the fork has finished, go to Settings > General > Visibility, and change the project visibility to Private. You will store credentials in this repository, so you need to keep it private.
  6. When viewing your fork in GitLab, go to Settings > General > Advanced > Change Path, and rename the fork from grove-template to the desired cluster name, like my-cluster.

In the documentation we will refer to the forked repository as my-cluster or mycluster.

Provider and Cluster configuration

Differences between providers

Grove operates equally well on AWS and DigitalOcean, though there are some minor differences to keep in mind when selecting a provider. For a more detailed comparison, please consult the provider-specific documentation.

The major differences between the two providers are the cost of running the cluster and how MongoDB is installed. To use the AWS provider, you also need to set up MongoDB Atlas manually.

Also, Grove is able to operate on a local minikube cluster through a tunnel, which might be useful for development.

Getting started on AWS

  1. Clone the my-cluster repository.
  2. Edit the cluster.yml to set the name of your cluster. This file contains the cluster configuration. Feel free to adjust any other settings based on your needs.
  3. Update the README.md to include the purpose of your cluster and any additional information.
  4. Do not commit your changes yet; we will do that in a later step.
  5. Log in to AWS and create an IAM user named grove. Set the Access Type to "Programmatic access" and attach the existing "AdministratorAccess" policy directly. Take note of the access key and secret key for the next steps.
  6. Create an API key pair in the MongoDB Atlas console.
  7. In the my-cluster repository on GitLab, go to Settings > CI / CD > Variables, and set the following variables:
    • AWS_ACCESS_KEY_ID (use the access key value given for the new IAM user - it should start with AKIA)
    • AWS_SECRET_ACCESS_KEY (use the secret key value given for the new IAM user)
    • GITLAB_TOKEN (create a personal access token in GitLab)
    • TF_VAR_gitlab_group_deploy_token_username (use the username of the Deploy token you have created earlier)
    • TF_VAR_gitlab_group_deploy_token_password (use the password of the Deploy token you have created earlier)
    • TF_VAR_gitlab_cluster_agent_token (use the Cluster agent token created earlier)
    • MONGODB_ATLAS_PRIVATE_KEY (The MongoDB Atlas private key created above)
    • MONGODB_ATLAS_PUBLIC_KEY (The MongoDB Atlas public key created above)
    • K8S_CLUSTER_AGENT_CONTEXT (use the "Agent Name" you used when adding the K8S cluster agent in the Prerequisites)
    • Set each variable to (at least) "Masked". This ensures the values are not accidentally printed to the job output.
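If you prefer the command line over the UI, the same variables can also be created through the GitLab project variables API. The sketch below only prints the requests so you can review them first; `PROJECT_ID` and `GITLAB_API_TOKEN` are placeholders you must supply, and the values shown are stand-ins, not real credentials.

```shell
# Sketch: create masked CI/CD variables via the GitLab API instead of the UI.
# PROJECT_ID and GITLAB_API_TOKEN are assumptions -- fill in your own values.
set_ci_variable() {
  # Prints the curl invocation; drop the leading "echo" to actually call the API.
  echo curl --request POST \
    --header "PRIVATE-TOKEN: ${GITLAB_API_TOKEN:-<your-token>}" \
    --form "key=$1" --form "value=$2" --form "masked=true" \
    "https://gitlab.com/api/v4/projects/${PROJECT_ID:-<project-id>}/variables"
}

set_ci_variable AWS_ACCESS_KEY_ID "<access key starting with AKIA>"
set_ci_variable AWS_SECRET_ACCESS_KEY "<secret key>"
```

Repeat the call for each variable listed above; the DigitalOcean and minikube variable sets can be created the same way.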

Getting started on DigitalOcean

  1. Clone the my-cluster repository.
  2. Edit the cluster.yml to set the name of your cluster. This file contains the cluster configuration. Feel free to adjust any other settings based on your needs.
  3. Set the TF_VAR_cluster_provider to digitalocean.
  4. Update the README.md to include the purpose of your cluster and any additional information.
  5. Do not commit your changes yet; we will do that in a later step.
  6. Log in to the DigitalOcean control panel and go to Account > API > Tokens.
  7. Create both a "Personal access token" and a "Spaces access key".
  8. In the my-cluster repository on GitLab, go to Settings > CI / CD > Variables, and set the following variables:
    • DIGITALOCEAN_TOKEN (use the secret part of your personal access token)
    • SPACES_ACCESS_KEY_ID (use the new access key ID)
    • SPACES_SECRET_ACCESS_KEY (use the new access key secret)
    • GITLAB_TOKEN (create a personal access token in GitLab)
    • TF_VAR_do_token (same as the DIGITALOCEAN_TOKEN)
    • TF_VAR_gitlab_group_deploy_token_username (use the username of the Deploy token you have created earlier)
    • TF_VAR_gitlab_group_deploy_token_password (use the password of the Deploy token you have created earlier)
    • TF_VAR_gitlab_cluster_agent_token (use the Cluster agent token created earlier)
    • K8S_CLUSTER_AGENT_CONTEXT (use the "Agent Name" you used when adding the K8S cluster agent in the Prerequisites)
    • It is easiest to set each variable to "Masked" but not "Protected"; however, you can do what you feel is best.

Getting started on minikube

  1. Create a cluster as described on the minikube provider page.
  2. Clone the my-cluster repository.
  3. Edit the cluster.yml to set the name of your cluster. This file contains the cluster configuration. Feel free to adjust any other settings based on your needs.
  4. Set the TF_VAR_cluster_name to minikube.
  5. Set the TF_VAR_cluster_provider to minikube.
  6. Update the README.md to include the purpose of your cluster and any additional information.
  7. Do not commit your changes yet; we will do that in a later step.
  8. In the my-cluster repository on GitLab, go to Settings > CI / CD > Variables, and set the following variables:
    • AWS_ACCESS_KEY_ID (set to mock_access_key)
    • AWS_SECRET_ACCESS_KEY (set to mock_secret_key)
    • GITLAB_TOKEN (create a personal access token in GitLab)
    • TF_VAR_minikube_host (tunneled minikubes K8S API-server URL)
    • TF_VAR_localstack_host (tunneled LocalStack URL)
    • TF_VAR_gitlab_group_deploy_token_username (use the username of the Deploy token you have created earlier)
    • TF_VAR_gitlab_group_deploy_token_password (use the password of the Deploy token you have created earlier)
    • TF_VAR_gitlab_cluster_agent_token (use the Cluster agent token created earlier)
    • K8S_CLUSTER_AGENT_CONTEXT (use the "Agent Name" you used when adding the K8S cluster agent in the Prerequisites)
    • Set each variable to (at least) "Masked". This ensures the values are not accidentally printed to the job output.

Warning

There is no MySQL 5.7 image for ARMv8, so you cannot run an Open edX instance on ARMv8 machines by default. Although MySQL 8.x images are available, they cannot be used because of breaking changes.

To resolve the issue, set DOCKER_IMAGE_MYSQL: docker.io/mariadb:10.2.44 in your instance's config.yml to use a compatible MariaDB image instead.
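The relevant line in the instance's config.yml would look like this (the key and value are taken verbatim from the note above; any other keys in the file stay as they are):

```yaml
# Use MariaDB instead of the unavailable ARMv8 MySQL 5.7 image
DOCKER_IMAGE_MYSQL: docker.io/mariadb:10.2.44
```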

Note

You may experience "random" unable to connect to the server: net/http: TLS handshake timeout errors raised by minikube. The root cause is that the minikube server was OOM killed. Try increasing your Docker memory limit in the Docker settings and start minikube as follows:

minikube start --memory <DESIRED MEMORY -- at least 7GB>
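Before starting minikube, you can sanity-check Docker's memory limit against the roughly 7 GB minimum mentioned above. This is a small illustrative sketch, not part of Grove; it assumes the docker CLI is on your PATH and falls back to 0 when it is not.

```shell
# Quick check that Docker's memory limit meets the ~7 GB minikube needs.
REQUIRED=$((7 * 1024 * 1024 * 1024))  # 7 GiB in bytes
AVAILABLE="$(docker info --format '{{.MemTotal}}' 2>/dev/null || echo 0)"
if [ "${AVAILABLE:-0}" -lt "$REQUIRED" ]; then
  echo "Raise Docker's memory limit, then run: minikube start --memory 7g"
fi
```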

Note

If you set the minikube driver to docker, the images may fail to pull from the private GitLab Docker registry. To work around this problem, you need to create a Kubernetes secret and assign it to the default service account in the desired namespace(s), one by one.

To do so, run the following commands, filling in the credentials as appropriate:

export NAMESPACE=<NAMESPACE OF THE INSTANCE>
export GITLAB_USERNAME=<YOUR GITLAB USERNAME>
export GITLAB_TOKEN=<YOUR GITLAB PERSONAL ACCESS TOKEN>
export GITLAB_EMAIL=<YOUR GITLAB EMAIL>

minikube kubectl -- --namespace="${NAMESPACE}" create secret docker-registry default-credentials \
    --docker-server=registry.gitlab.com \
    --docker-username="${GITLAB_USERNAME}" \
    --docker-password="${GITLAB_TOKEN}" \
    --docker-email="${GITLAB_EMAIL}"

minikube kubectl -- --namespace="${NAMESPACE}" patch serviceaccount default -p '{"imagePullSecrets": [{"name": "default-credentials"}]}'

Then, try to redeploy the instance, or delete the affected pods -- Kubernetes will recreate them.

Provision the infrastructure

The GitLab pipelines defined by Grove parse commit messages, can be triggered by pipeline triggers, and can be scheduled using GitLab Schedules.

To simplify cluster management, the pipelines listen for the [AutoDeploy][Infrastructure] <COMMIT MESSAGE> message pattern. Triggering a pipeline with a commit message following this pattern starts the infrastructure provisioning.

Now that the repository is prepared and the missing configuration is provided in the cluster.yml config, commit your changes using git commit -m '[AutoDeploy][Infrastructure] cluster setup'. Make sure you don't append any trailing message, such as sign-off or co-author notes.

After pushing your changes, the GitLab pipeline should start and provision the infrastructure.
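Putting the steps together, the sequence might look like the following. The branch name main and the list of changed files are assumptions; adjust them to your repository.

```shell
# Commit with the exact pattern the pipeline listens for, then push.
git add cluster.yml README.md
git commit -m '[AutoDeploy][Infrastructure] cluster setup'
git push origin main
```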

Create your first instance

Although the repository contains an example instance, you will want to create a new instance. To do so, please follow the Deploying Instances guide.

Warning

The instance name must meet the following restrictions to have a smooth experience and successful provisioning:

  • The instance name must be longer than 2 and shorter than 50 characters
  • The instance name must be slugified
  • The instance name must start with a letter
  • The instance name shall not match any reserved word:
    • default
    • gitlab-kubernetes-agent
    • kube-node-lease
    • kube-public
    • kube-system
    • monitoring
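The rules above can be sketched as a small shell check. This is an illustrative helper, not Grove's own validator, and it assumes "slugified" means lowercase letters, digits, and hyphens only:

```shell
# Returns 0 when the name satisfies the restrictions listed above.
valid_instance_name() {
  case "$1" in
    default|gitlab-kubernetes-agent|kube-node-lease|kube-public|kube-system|monitoring)
      return 1 ;;  # reserved words
  esac
  # starts with a letter, slug characters only, 3-49 characters long
  printf '%s' "$1" | grep -Eq '^[a-z][a-z0-9-]{2,48}$'
}

valid_instance_name my-cluster && echo "my-cluster is a valid instance name"
```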