
Configuring your Grove cluster

There are two levels to consider when configuring Grove: the cluster and each individual instance. This page deals with the configuration options for the cluster.

There are two places to configure your cluster. Secrets should be configured in your repository's CI/CD variables or in private.yml. Any other cluster configuration should be saved in cluster.yml.
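As an illustration of this split (assuming both files are flat key/value maps of the variables described below, and using placeholder values), a DigitalOcean setup could look like:

    # private.yml (or CI/CD variables) -- secrets only
    DIGITALOCEAN_TOKEN: "dop_v1_0000000000000000"   # placeholder token

    # cluster.yml -- everything that is not secret
    TF_VAR_cluster_name: "myorg-do-openedx"
    TF_VAR_cluster_provider: "digitalocean"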

Notes

Regardless of whether you do an automated (using GitLab CI) or manual (using the CLI) cluster installation, it is recommended to set up the cluster in multiple steps. Some features, like the monitoring setup, require a DNS record pointing to the cluster. Since Grove does not control DNS records, it cannot set them before proceeding with the dependent features.

The rule of thumb is to create your infrastructure with the bare minimum of features enabled and, after pointing the domain to the cluster, proceed with turning on specific features and updating the infrastructure.
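A sketch of this two-step flow, using TF_VAR_monitoring_ingress purely as an example of a DNS-dependent feature toggle (its exact type should be checked against the Cluster Monitoring page):

    # Step 1: cluster.yml with the bare minimum, before DNS exists
    TF_VAR_cluster_name: "myorg-do-openedx"
    TF_VAR_cluster_provider: "digitalocean"

    # Step 2: after pointing the domain at the cluster, enable the
    # DNS-dependent features and update the infrastructure again
    TF_VAR_cluster_domain: "mycluster.grove.com"
    TF_VAR_monitoring_ingress: true   # assumed boolean toggle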

Secrets

GITLAB_PROJECT_NUMERIC_ID: required
The numeric ID of the GitLab project. It doesn't need to be set on GitLab CI as it can be autodetected (via $CI_PROJECT_ID). For example, the grove-template repo's project ID is 24377526.
GITLAB_USERNAME: required

As part of the Grove setup you need to set up a deploy token. This field contains the generated username. You can also use your account's username.

This and the GITLAB_PASSWORD field are used for authenticating you to GitLab's Terraform state backend and Container Registry.

GITLAB_PASSWORD: required
As part of the Grove setup you need to set up a deploy token. This field contains the generated password.
TF_VAR_gitlab_group_deploy_token_username: required
Contains the generated username.
TF_VAR_gitlab_group_deploy_token_password: required
Contains the generated password.
CI_REGISTRY_IMAGE: required
Your GitLab container registry where the images that Grove creates will be stored. It is of the form registry.gitlab.com/foo/bar where foo/bar is the GitLab repository.
AWS_ACCESS_KEY_ID: required for AWS clusters
If your cluster runs on AWS, complete this field with your AWS access key.
AWS_SECRET_ACCESS_KEY: required for AWS clusters
If your cluster runs on AWS, complete this field with your AWS secret key.
MONGODB_ATLAS_PUBLIC_KEY: required for AWS clusters
Since AWS doesn't provide a MongoDB service, we need to use MongoDB Atlas. Complete this field with your MongoDB Atlas public key. For instructions, please visit the AWS documentation.
MONGODB_ATLAS_PRIVATE_KEY: required for AWS clusters
If running an AWS cluster, add your MongoDB private key to this field.
DIGITALOCEAN_TOKEN: required for Digital Ocean clusters
For clusters running on Digital Ocean, you will need to complete this field to authenticate to the Digital Ocean API. You can create this token directly on Digital Ocean.
TF_VAR_do_token: required for Digital Ocean clusters
Contains the same value as DIGITALOCEAN_TOKEN and is used in the terraform scripts.
SPACES_ACCESS_KEY_ID: required for Digital Ocean clusters
For Digital Ocean clusters, Grove makes use of their Spaces in lieu of Amazon's S3 service. Add your Spaces Access key to this field.
SPACES_SECRET_ACCESS_KEY: required for Digital Ocean clusters
For Digital Ocean clusters, Grove makes use of their Spaces in lieu of Amazon's S3 service. Add your Spaces Secret key to this field.
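Putting the secrets together, a sketch for a Digital Ocean cluster (every value below is a placeholder) could look like:

    GITLAB_PROJECT_NUMERIC_ID: "24377526"
    GITLAB_USERNAME: "grove-deploy-token"            # deploy token username
    GITLAB_PASSWORD: "gldt-xxxxxxxxxxxxxxxxxxxx"     # deploy token password
    TF_VAR_gitlab_group_deploy_token_username: "grove-deploy-token"
    TF_VAR_gitlab_group_deploy_token_password: "gldt-xxxxxxxxxxxxxxxxxxxx"
    CI_REGISTRY_IMAGE: "registry.gitlab.com/foo/bar"
    DIGITALOCEAN_TOKEN: "dop_v1_0000000000000000"
    TF_VAR_do_token: "dop_v1_0000000000000000"       # same value as DIGITALOCEAN_TOKEN
    SPACES_ACCESS_KEY_ID: "DO00EXAMPLEACCESSKEY"
    SPACES_SECRET_ACCESS_KEY: "EXAMPLESECRETKEY"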

Other cluster configuration

The configuration options below are not secret and therefore belong in your repository's cluster.yml file.

TF_VAR_cluster_name: required

The name of this cluster. Example: "myorg-aws-openedx". The name must satisfy these conditions (valid and invalid examples are shown after this list):

  • The length is at most 63 characters.
  • It must contain only lowercase alphanumeric characters or '-'.
  • It must start with an alphabetic character.
  • It must end with an alphanumeric character.
  • It must be unique within your service provider's account.
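For illustration:

    TF_VAR_cluster_name: "myorg-aws-openedx"   # valid: lowercase, hyphens, starts with a letter
    # "Myorg_AWS_openedx"  would be invalid: uppercase letters and underscores
    # "1cluster"           would be invalid: does not start with an alphabetic character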
TF_VAR_cluster_provider: required
The infrastructure provider to use for this cluster. One of aws or digitalocean.
TF_VAR_cluster_domain: required for Digital Ocean clusters

The main domain of your cluster. Subdomains of this domain will be used to set up services such as openfaas and monitoring.

This domain needs to point to the DO or AWS load balancer, and a wildcard CNAME record for subdomains, pointing to the main domain, has to be set up. For example:

*.mycluster 300 IN CNAME mycluster.grove.com.
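A fuller sketch of the same zone, assuming mycluster.grove.com is the cluster domain and lb.example.digitalocean.com is a hypothetical load balancer hostname:

mycluster   300 IN CNAME lb.example.digitalocean.com.
*.mycluster 300 IN CNAME mycluster.grove.com.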

Note

When setting up DNS for instances on Digital Ocean, create CNAME records that point to this domain instead of A records. Otherwise, SSL certificates will not be provisioned on your cluster.

TF_VAR_aws_region: required for AWS clusters
AWS specific; set the desired AWS region in this setting.
TF_VAR_ami_id: optional, AWS only.
AWS specific; specify the AWS AMI ID for the selected region. For Ubuntu AMIs (if you want to use Codejail), use the image IDs listed at https://cloud-images.ubuntu.com/aws-eks/.
TF_VAR_do_region: required for Digital Ocean clusters
DigitalOcean specific; set the desired DigitalOcean region in this setting.
TF_VAR_max_worker_node_count: default(5)
As the cluster auto-scales, this is the maximum number of worker nodes you will allow. Choose a value you are comfortable with based on the number of instances and your scaling/budget needs.
TF_VAR_rds_instance_class: default(db.t3.micro), AWS only
The RDS MySQL instance class used for AWS. Please visit the Amazon docs for more details.
TF_VAR_rds_min_storage: default(10), AWS only
Your RDS cluster's minimum storage size in GB.
TF_VAR_rds_max_storage: default(15), AWS only
Your RDS cluster's maximum storage size in GB.
TF_VAR_mongodbatlas_project_id: required, AWS only
Your MongoDB Atlas project ID.
TF_VAR_worker_node_volume_size: default(20), AWS only
Your EC2 worker nodes' EBS volume size in GB.
DEPLOYMENT_BRANCH: default(main)
GitLab CI deployment related jobs will only run on the specified branch.
TUTOR_DOCKER_REGISTRY: required
GitLab dependency proxy URL. Make sure it ends with a slash.
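Bringing the non-secret options together, a cluster.yml sketch for an AWS cluster (all values are illustrative, not recommendations) might look like:

    TF_VAR_cluster_name: "myorg-aws-openedx"
    TF_VAR_cluster_provider: "aws"
    TF_VAR_aws_region: "us-east-1"
    TF_VAR_max_worker_node_count: 5
    TF_VAR_rds_instance_class: "db.t3.micro"
    TF_VAR_rds_min_storage: 10
    TF_VAR_rds_max_storage: 15
    TF_VAR_mongodbatlas_project_id: "0123456789abcdef01234567"   # placeholder Atlas project ID
    TF_VAR_worker_node_volume_size: 20
    DEPLOYMENT_BRANCH: "main"
    TUTOR_DOCKER_REGISTRY: "gitlab.com/foo/dependency_proxy/containers/"   # must end with a slash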
TF_VAR_k8s_resource_quotas: optional

Configure the Kubernetes resources for Grove services. This value should be a valid YAML string.

Example Configuration
TF_VAR_k8s_resource_quotas: |
    nginx:
      limits:
        cpu: "200m"

    monitoring-opensearch:
      limits:
        cpu: "200m"
      requests:
        cpu: "200m"
        memory: 2Gi

!!! note

    For OpenSearch the default value for the CPU request is "1000m". When updating the limit, be explicit and set the request as well, because the deployment will fail if the limit is less than the request.
TF_VAR_alertmanager_config: default(null)
To receive alerts from Alertmanager/Prometheus, update this configuration option. Complete documentation is available on the Cluster Monitoring page.
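As an illustration only, and assuming the variable accepts a standard Alertmanager configuration as a multi-line YAML string (the authoritative format is on the Cluster Monitoring page), a Slack receiver could be sketched like this:

    TF_VAR_alertmanager_config: |
      route:
        receiver: "slack-notifications"
      receivers:
        - name: "slack-notifications"
          slack_configs:
            - api_url: "https://hooks.slack.com/services/T000/B000/XXXX"   # placeholder webhook URL
              channel: "#alerts"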
TF_VAR_opensearch_persistence_size: default(8Gi)
Set the size of the PVC for the OpenSearch statefulset. Note that this value cannot be changed once the PVC has been created. For details, see the Cluster Monitoring page.
TF_VAR_opensearch_index_retention_days
The number of days to retain logs in OpenSearch. For this setting to work, TF_VAR_monitoring_ingress needs to be enabled.
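For example, to keep two weeks of logs on a slightly larger volume (TF_VAR_monitoring_ingress is assumed to be a boolean toggle; see the Cluster Monitoring page for its exact form):

    TF_VAR_opensearch_persistence_size: "16Gi"    # cannot be changed once the PVC exists
    TF_VAR_opensearch_index_retention_days: 14
    TF_VAR_monitoring_ingress: true               # assumed; retention only applies when this is enabled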