
Working Locally

This guide describes how to work with Grove locally. Under normal circumstances, you would let the GitLab pipelines manage the cluster, but there are cases when manual intervention is more convenient or necessary for debugging.

Make sure you meet the prerequisites in order to have a smooth experience.

Prerequisites

  1. Ensure that Ruby and Docker are installed on your computer, and that Docker is running.
  2. Clone my-cluster if you have not already done so.
  3. Change directory to my-cluster.
  4. Run git submodule init and git submodule update.
  5. Create a new file called private.yml by copying private.yml.example. This file is intentionally not version controlled. Its variables section must contain all provider-specific variables.
  6. Change to the control directory. We will execute commands from the control directory to manage the cluster.
  7. Run ./tf init to initialize Terraform. Please note that this is a mandatory step, even if you are not planning to run Terraform itself. These steps are summarized in the sketch below.
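
For reference, the prerequisite steps above roughly correspond to the following commands. This is a minimal sketch: the repository URL is a placeholder and your checkout location may differ.

# Clone the repository and initialize the submodules
$ git clone <my-cluster repository URL>
$ cd my-cluster
$ git submodule init && git submodule update

# Create the private configuration file (not version controlled)
$ cp private.yml.example private.yml

# Switch to the control directory and initialize Terraform
$ cd control
$ ./tf init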

Warning

Before running any commands below, remember to complete the prerequisites and make sure that your working directory is my-cluster/control. Also, make sure to set up the cluster with only the bare minimum features enabled, and add extra settings, such as monitoring, only after your DNS record points to the cluster.

Run Terraform commands

To run Terraform, use the ./tf <subcommand> script, for example, ./tf plan. As these scripts are tightly integrated with the GitLab pipelines, we always create a plan output and apply that. The ./tf wrapper script takes care of this for you, but it comes with an extra requirement: whenever you want to run ./tf apply, you must run ./tf plan beforehand. The command is executed in a Docker container, so the Terraform version it uses may differ from the version you have installed locally.
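
For reference, a routine change therefore always follows this order:

# Create the plan file that the wrapper stores for the next apply
$ ./tf plan

# Apply the plan created by the previous command
$ ./tf apply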

If you are about to locally provision a new DigitalOcean-backed cluster, please note that the Kubernetes cluster won't exist yet, so the providers that depend on it cannot be initialized. To launch a new cluster, you will need to apply the Terraform scripts in two steps:

  1. Create the DigitalOcean Kubernetes cluster
  2. Run all other Terraform targets

Creating only the Kubernetes cluster resource requires planning the changes in targeted mode and then applying them:

# Plan only the Kubernetes cluster resource first
# (replace <resource id> with the DigitalOcean Kubernetes cluster resource address)
$ ./tf plan -target <resource id>

# Apply the planned changes
$ ./tf apply

The ./tf plan command creates a temporary plan file, which is then used by the ./tf apply command. Running ./tf apply without running ./tf plan first, or running ./tf apply multiple times with a stale plan, will therefore result in an error. Since ./tf apply always applies the saved plan, targeted mode (-target <resource id>) has no effect on the ./tf apply command; it shall be used only with ./tf plan, after which you can safely run a plain ./tf apply.

Run Kubernetes maintenance commands

Warning

Some network-related kubectl commands may not work as expected, since we run kubectl inside a Docker container. Specifically, by default, kubectl can only forward network traffic using port 8001 on your host, and kubectl must be listening on 0.0.0.0. See "Access the Kubernetes Dashboard" below for an example of port forwarding.
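
For example, to forward a service port through the wrapper, you have to bind to 0.0.0.0 and use port 8001 on the host. The service name, namespace, and target port below are placeholders, not values defined by this repository:

# Forward host port 8001 to port 80 of a service in your instance's namespace
# Replace <NAME> and <SERVICE> with your instance's namespace and service
./kubectl port-forward --address 0.0.0.0 -n <NAME> svc/<SERVICE> 8001:80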

To run Kubernetes commands, use the ./kubectl <subcommand> script, for example, ./kubectl get pods -n test-instance. The Kubernetes configuration is retrieved from the Terraform state and used by the wrapper script, so you don't have to have kubectl installed or configured locally. The configuration is rendered in the my-cluster repository, but the file is intentionally excluded from version control.

Some useful commands:

# Launch a shell on the first LMS pod
# Replace <NAME> with your instance's name
NAMESPACE=<NAME>; ./kubectl exec -it $(./kubectl get pods --namespace="${NAMESPACE}" -l app.kubernetes.io/name=lms -o jsonpath='{range .items[0]}{@.metadata.name}{end}') -n "${NAMESPACE}" -- bash

# Tail the logs from the first LMS pod
# Replace <NAME> with your instance's name
NAMESPACE=<NAME>; ./kubectl logs -f $(./kubectl get pods --namespace="${NAMESPACE}" -l app.kubernetes.io/name=lms -o jsonpath='{range .items[0]}{@.metadata.name}{end}') -n "${NAMESPACE}"
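
A few other standard kubectl invocations also work through the wrapper; these are generic examples rather than commands specific to this repository:

# List all namespaces in the cluster (each instance has its own namespace)
./kubectl get namespaces

# Watch the pods of an instance; replace <NAME> with your instance's name
./kubectl get pods -n <NAME> -w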

Run Tutor commands

To run Tutor commands, use the ./tutor <subcommand> script, for example, ./tutor test-instance k8s quickstart --non-interactive. The command will be executed in a Docker container, hence you don't need to install Tutor locally.
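
For instance, assuming the same test-instance name as above, you could inspect a configuration value or open a shell in the LMS container using standard Tutor subcommands (shown here as illustrations, not commands defined by this repository):

# Print a single Tutor configuration value for the instance
./tutor test-instance config printvalue LMS_HOST

# Open a shell inside the LMS container
./tutor test-instance k8s exec lms bash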

Access the Kubernetes Dashboard

In the case of DigitalOcean, the Kubernetes dashboard is available from the DigitalOcean UI. Log in to your DigitalOcean account and click Clusters > CLUSTER NAME > Dashboard.

When using AWS, on the other hand, you will need to expose the UI yourself. To do so:

  1. Run ./kubectl -n kube-system describe secret eks-admin-token | grep token: and copy the token value.
  2. Run ./kubectl proxy --address='0.0.0.0' to start the proxy, which opens a tunnel that allows you to connect to services running on the cluster.
  3. Go to http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:https/proxy/ and enter the token you copied in step 1. These steps are combined in the sketch below.
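
As a rough sketch, the sequence is (these are the same commands as in the steps above):

# 1. Print the dashboard token and copy the value after "token:"
./kubectl -n kube-system describe secret eks-admin-token | grep token:

# 2. Start the proxy, listening on all interfaces
./kubectl proxy --address='0.0.0.0'

# 3. Open the dashboard URL in a browser and paste the token
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:https/proxy/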

Access the tools-container shell

While working locally, it might be useful to debug in, or simply access, the running tools-container shell. To access it, run ./shell from the control directory.
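
For example, from my-cluster/control:

# Open an interactive shell in the running tools-container
./shell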