
Deploying Instances

Creating a new instance

You can create a new instance from both your local environment and from CI.

Locally

  1. Make sure you are set up to work with this locally (see Working Locally).
  2. Change into the control directory, e.g. cd my-cluster/control/.
  3. Run ./grove prepare NAME where NAME is the ID you want to use for the new instance.
  4. If you're using minikube, review the my-cluster/instance/<INSTANCE_NAME>/config.yml file, remove the s3 plugin, and set the following values:
RUN_MYSQL: true
RUN_MONGODB: true
ENABLE_HTTPS: false
  5. Run ./tf init && ./tf plan && ./tf apply to create the infrastructure (MySQL, S3, etc.) for the new instance.
  6. Run ./grove tutor sync NAME, where NAME is the instance ID, to generate the tutor env and push it to S3.
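The steps above can be wrapped in a small script. The following is a hypothetical convenience wrapper (the `run` helper and the `DRY_RUN` flag are illustrative, not part of Grove) that executes the Grove and Terraform commands in order:

```shell
#!/usr/bin/env bash
# Hypothetical wrapper around the local instance-creation steps.
set -euo pipefail

run() {
  # With DRY_RUN=1, print each command instead of executing it.
  if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

create_instance() {
  local name="$1"
  run ./grove prepare "$name"      # scaffold the instance config
  run ./tf init                    # initialise Terraform
  run ./tf plan                    # preview infrastructure changes
  run ./tf apply                   # create MySQL, S3, etc.
  run ./grove tutor sync "$name"   # generate the tutor env and push it to S3
}
```

For example, running `DRY_RUN=1 create_instance demo` from the control directory prints the exact command sequence without executing it.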

Through the CI

  1. In your private fork, go to Settings > CI / CD > Pipeline triggers, and create a new trigger (you can call it, e.g. "Deploy").
  2. Go to Settings > Repository > Protected branches, and set the following:

    • Branch: deployment/* (type it manually and click on the "Create wildcard" option)
    • Allowed to merge: Maintainers
    • Allowed to push: Maintainers
    • Allowed to force push: false
  3. Run the following request:

    curl -X POST \
    -F token=YOUR_TOKEN `# Replace "YOUR_TOKEN" with the token you generated in the previous step.` \
        -F "ref=main" `# A branch or tag you want to use as a base.` \
        -F "variables[INSTANCE_NAME]=my-instance" `# A name of the Open edX instance.` \
        -F "variables[DEPLOYMENT_REQUEST_ID]=1" `# The deployment (release) number of this instance.` \
        `# Tutor-specific configurations. All tutor configurations can be added with a "TUTOR_" prefix.` \
        -F "variables[TUTOR_CONTACT_EMAIL]=test@example.com" \
        -F "variables[NEW_INSTANCE_TRIGGER]=1" `# To differentiate this from other triggered jobs.` \
        `# Replace "PROJECT_ID" with the numeric ID of the GitLab project.` \
        `# The project should be a fork of https://gitlab.com/opencraft/dev/grove-template/` \
        `# You can see at https://gitlab.com/opencraft/dev/grove-template/ that its project ID is 24377526.` \
        https://gitlab.com/api/v4/projects/PROJECT_ID/trigger/pipeline
    

    For nested data structures, pass the request data as a JSON payload (refer to the GitLab API docs):

    curl -X POST \
        --header "Content-Type: application/json" \
        --data \
        '{
           "ref":"main",
           "token":"YOUR_TOKEN",
           "variables":{
              "INSTANCE_NAME":"my-instance",
              "DEPLOYMENT_REQUEST_ID":"1",
              "TUTOR_CONTACT_EMAIL":"test@example.com",
              "NEW_INSTANCE_TRIGGER":"1",
              "TUTOR_LMS_HOST":"LMS_HOSTNAME",
              "TUTOR_CMS_HOST":"STUDIO_HOSTNAME",
              "GROVE_SIMPLE_THEME_SCSS_OVERRIDES":{
                 "footer-bg": "#0075b4",
                 "footer-color": "#6d9cae"
              },
              "TUTOR_SITE_CONFIG":{
                 "version":0,
                 "static_template_about_content":"<p>This is a custom about page</p>"
              }
           }
        }' \
        https://gitlab.com/api/v4/projects/PROJECT_ID/trigger/pipeline
    
  4. You can pass any supported Tutor config with the TUTOR_ prefix and any Grove config with the GROVE_ prefix; these override the values in config.yml and grove.yml respectively.

  5. Add or override LMS or CMS environment variables using the dedicated config keys.
  6. In your private fork, go to Pipelines to see the progress of the configuration job.
  7. The MR will get merged automatically once the pipeline succeeds.
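A malformed payload (for example, a trailing comma) makes the trigger API return an error, so it can help to lint the JSON locally before sending it. A minimal check, assuming python3 is on your PATH:

```shell
# Lint the trigger payload before POSTing it to GitLab.
payload='{
  "ref": "main",
  "token": "YOUR_TOKEN",
  "variables": {
    "INSTANCE_NAME": "my-instance",
    "DEPLOYMENT_REQUEST_ID": "1",
    "NEW_INSTANCE_TRIGGER": "1"
  }
}'
# json.tool exits non-zero (with a parse error) if the JSON is invalid.
echo "$payload" | python3 -m json.tool >/dev/null && echo "payload OK"
```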

Note: Parallel deployment requests for the same instance are not currently supported. Firing them one after another creates merge requests from the same git branch, so only one of them may be merged automatically. For now, handle this case on the caller's end (e.g. with rate limiting on the backend that sends the trigger requests). Before triggering another deployment for the same instance, wait for the previous one to finish, i.e. until the GitLab job completes and the resulting merge request is merged.

Deployment

Deploy through the CI

Deploying an instance via CI should be straightforward. Check the generated pipeline and click the Run button to start deployment.

Aborting CI Deployment

Sometimes a CI deployment needs to be aborted, e.g. when a bug is discovered or the timing is not right. Cancelling the CI pipeline will not cancel the deployment, because deployments run as child pipelines and GitLab does not cascade pipeline cancellation. Instead, abort the deployment using one of the following methods.

1. From Local Machine
  1. Open the GitLab Pipelines page of your project and note the ID of the CI pipeline that was started.
  2. From the control directory, run

    ./grove abortpipeline <your-pipeline-id>
    

This will cancel the parent pipeline and all of its child pipelines via the GitLab API.
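Under the hood, this can be done with two GitLab REST endpoints: listing a pipeline's trigger ("bridge") jobs to discover downstream pipelines, and cancelling each pipeline. A sketch (the helper names are illustrative, and GITLAB_TOKEN must be a token with API access):

```shell
# Build the cancel endpoint for a project/pipeline pair.
cancel_url() {
  echo "https://gitlab.com/api/v4/projects/$1/pipelines/$2/cancel"
}

# Cancel one pipeline via the GitLab API.
cancel_pipeline() {
  curl -s -X POST --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}" "$(cancel_url "$1" "$2")"
}

# Child pipelines are discoverable via the bridges endpoint:
#   GET /projects/:id/pipelines/:pipeline_id/bridges
# Each bridge job carries a downstream_pipeline.id that must be cancelled too.
```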

2. Using a GitLab Trigger

The abort command can also be executed remotely, via another GitLab CI job, using the following request:

curl -X POST \
    -F token=YOUR_TOKEN `# Replace "YOUR_TOKEN" with your pipeline trigger token.` \
    -F "ref=main" `# A branch or tag you want to use as a base.` \
    -F "variables[ABORT_DEPLOYMENT_TRIGGER]=1" `# Marks this trigger as an abort request.` \
    -F "variables[PIPELINE_ID]=123456" `# GitLab pipeline ID that is running the deployment.` \
    `# Replace "PROJECT_ID" with the numeric ID of the GitLab project.` \
    `# The project should be a fork of https://gitlab.com/opencraft/dev/grove-template/` \
    `# You can see at https://gitlab.com/opencraft/dev/grove-template/ that its project ID is 24377526.` \
    https://gitlab.com/api/v4/projects/PROJECT_ID/trigger/pipeline

This will cancel the running pipeline and all of its child pipelines, aborting the full deployment process.

Deploy from a local machine

If you are creating the instance for the first time, you first need to run Terraform to create the required infrastructure (S3 storage, database, etc.) for the instance.

  1. Run ./tf init && ./tf plan && ./tf apply to deploy the infrastructure.

Then deploy the instance using Tutor.

  1. Run ./grove deploy NAME to deploy the new instance onto your cluster.
  2. Commit your changes with git to save the new config.yml. (It contains sensitive values, so the repository must be private!)
  3. Log in to your cloud provider's control panel (AWS, DigitalOcean, etc.) and go to the "load balancers" page. Find the IP or hostname of the load balancer and create the required DNS records pointing to it: your LMS domain, preview.[lms domain], and your Studio domain all need to point to the load balancer. Currently the load balancer is different for each instance, but we will soon change that so the whole cluster shares a consistent IP / load balancer (more affordable and simpler).
  4. Wait a few minutes, then try accessing the instance. If you get an "SSL protocol error", wait a bit longer - it takes some time until DNS records have propagated and Caddy configures the required HTTPS certificates.
  5. Create an admin user with ./tutor NAME k8s createuser --staff --superuser USERNAME EMAIL
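The DNS requirement in step 3 amounts to three records pointing at the same load balancer. A small hypothetical helper (not part of Grove) that prints the checklist for given hostnames:

```shell
# dns_records LMS_HOST STUDIO_HOST LB_TARGET — print the records step 3 asks for.
dns_records() {
  local lms="$1" studio="$2" target="$3"
  # printf reuses the format string for each host/target pair,
  # emitting one "record -> target" line per required DNS entry.
  printf '%s -> %s\n' "$lms" "$target" "preview.$lms" "$target" "$studio" "$target"
}
```

For example, `dns_records lms.example.com studio.example.com lb-1234.example.net` prints the three records to create.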

Update an Open edX instance

Say you've made changes to a config.yml, or new Open edX images have been released upstream.

  1. Change into the control directory, e.g. cd my-cluster/control/
  2. Run ./grove deploy NAME to update the instance with ID NAME.

For more involved changes, like a change to the instance's requirements.txt, or for instances using custom images:

  1. Change into the control directory, e.g. cd my-cluster/control/
  2. Update the python venv with ./grove venv update NAME
  3. Re-generate the tutor config with ./grove tutor sync NAME
  4. Build new images for the instance with ./grove run-pipeline build-instance-image NAME openedx
    (Note: if this fails with the error "An image does not exist locally with the tag", check your GitLab container image registry; the image may already be pushed. If you see a recently updated image in the registry, this step is done and it is safe to ignore the error.)
  5. Deploy the new images with ./grove deploy NAME
  6. If needed, roll out the new pods/images using ./kubectl -n NAME rollout restart deployment/cms and ./kubectl -n NAME rollout restart deployment/lms