# Deploying Instances

## Creating a new instance

You can create a new instance from both the local environment and the CI.

### Locally

- Make sure you are set up to work with this locally (see Working Locally).
- Change into the `control` directory, e.g. `cd my-cluster/control/`.
- Run `./grove prepare NAME`, where `NAME` is the ID you want to use for the new instance.

    ```yaml
    RUN_MYSQL: true
    RUN_MONGODB: true
    ENABLE_HTTPS: false
    ```

- Run `./tf init && ./tf plan && ./tf apply` to create the infrastructure (MySQL, S3, etc.) for the new instance.
- Run `./grove tutor sync NAME`, where `NAME` is the instance ID, to generate the tutor env and push it to S3.
### Through the CI

- In your private fork, go to Settings > CI / CD > Pipeline triggers, and create a new trigger (you can call it, e.g., "Deploy").
- Go to Settings > Repository > Protected branches, and set the following:
    - Branch: `deployment/*` (type it manually and click on the "Create wildcard" option)
    - Allowed to merge: `Maintainers`
    - Allowed to push: `Maintainers`
    - Allowed to force push: `false`
- Run the following request:

    ```sh
    curl -X POST \
      -F token=YOUR_TOKEN `# Replace "YOUR_TOKEN" with the token you have generated in the previous point.` \
      -F "ref=main" `# A branch or tag you want to use as a base.` \
      -F "variables[INSTANCE_NAME]=my-instance" `# The name of the Open edX instance.` \
      -F "variables[DEPLOYMENT_REQUEST_ID]=1" `# The deployment (release) number of this instance.` \
      `# Tutor-specific configurations. All tutor configurations can be added with a "TUTOR_" prefix.` \
      -F "variables[TUTOR_CONTACT_EMAIL]=test@example.com" \
      -F "variables[NEW_INSTANCE_TRIGGER]=1" `# To differentiate this from other triggered jobs.` \
      `# Replace "PROJECT_ID" with the numeric ID of the GitLab project.` \
      `# The project should be a fork of https://gitlab.com/opencraft/dev/grove-template/` \
      `# You can see at https://gitlab.com/opencraft/dev/grove-template/ that its project ID is 24377526.` \
      https://gitlab.com/api/v4/projects/PROJECT_ID/trigger/pipeline
    ```
    For nested data structures you should pass the request data as a JSON payload (refer to the GitLab API docs):

    ```sh
    curl -X POST \
      --header "Content-Type: application/json" \
      --data '{
        "ref": "main",
        "token": "YOUR_TOKEN",
        "variables": {
          "INSTANCE_NAME": "my-instance",
          "DEPLOYMENT_REQUEST_ID": "1",
          "TUTOR_CONTACT_EMAIL": "test@example.com",
          "NEW_INSTANCE_TRIGGER": "1",
          "TUTOR_LMS_HOST": "LMS_HOSTNAME",
          "TUTOR_CMS_HOST": "STUDIO_HOSTNAME",
          "GROVE_SIMPLE_THEME_SCSS_OVERRIDES": {
            "footer-bg": "#0075b4",
            "footer-color": "#6d9cae"
          },
          "TUTOR_SITE_CONFIG": {
            "version": 0,
            "static_template_about_content": "<p>This is a custom about page</p>"
          }
        }
      }' \
      https://gitlab.com/api/v4/projects/PROJECT_ID/trigger/pipeline
    ```
- You can pass any supported tutor config with the `TUTOR_` prefix and any grove config with the `GROVE_` prefix. These override the `config.yml` and the `grove.yml` file accordingly.
- Add or override LMS or CMS environment variables using the following config keys (see the example after this list):
    - `TUTOR_GROVE_COMMON_SETTINGS`: Settings common to all environments.
    - `TUTOR_GROVE_COMMON_ENV_FEATURES`: Common feature flags, applied to both LMS and CMS configs.
    - `TUTOR_GROVE_CMS_ENV`: CMS env configs.
    - `TUTOR_GROVE_CMS_ENV_FEATURES`: CMS feature flags.
    - `TUTOR_GROVE_LMS_ENV`: LMS env configs.
    - `TUTOR_GROVE_LMS_ENV_FEATURES`: LMS feature flags.
    - `TUTOR_GROVE_OPENEDX_AUTH`: Open edX auth configuration.
- In your private fork, go to Pipelines to see the progress of the configuration job.
- The MR will get merged automatically once the pipeline succeeds.
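For example, a trigger request that overrides a few LMS settings and feature flags through these keys could look like the sketch below. The concrete setting names and values (`SESSION_COOKIE_AGE`, `ENABLE_COURSE_DISCOVERY`, `ENABLE_CORS_HEADERS`) are illustrative assumptions, not values Grove requires:

```sh
# Illustrative only: override LMS settings and feature flags via trigger variables.
curl -X POST \
  --header "Content-Type: application/json" \
  --data '{
    "ref": "main",
    "token": "YOUR_TOKEN",
    "variables": {
      "INSTANCE_NAME": "my-instance",
      "DEPLOYMENT_REQUEST_ID": "2",
      "NEW_INSTANCE_TRIGGER": "1",
      "TUTOR_GROVE_LMS_ENV": {
        "SESSION_COOKIE_AGE": 1209600
      },
      "TUTOR_GROVE_LMS_ENV_FEATURES": {
        "ENABLE_COURSE_DISCOVERY": true
      },
      "TUTOR_GROVE_COMMON_ENV_FEATURES": {
        "ENABLE_CORS_HEADERS": true
      }
    }
  }' \
  https://gitlab.com/api/v4/projects/PROJECT_ID/trigger/pipeline
```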
Note: Parallel deployment requests for the same instance are not currently supported. Firing them one after another will create merge requests from the same git branch, so only one of them may be merged automatically. For now, you need to handle this case on the caller's end (e.g. with rate limiting in the backend that sends the trigger requests). Before triggering another deployment for the same instance, wait for the previous one to finish, i.e. the GitLab job finishes and the new merge request is merged.
## Deployment

### Deploy through the CI

Deploying an instance via the CI should be straightforward. Check the generated pipeline and click the Run button to start the deployment.

### Aborting CI Deployment

Sometimes a CI deployment needs to be aborted, for example when a bug is discovered or the timing is not right. In such cases, cancelling the CI pipeline will not cancel the deployment, because deployments run as child pipelines and GitLab does not cascade the cancellation of pipelines. Instead, the deployment process can be aborted using one of the following methods.
#### 1. From Local Machine

- Access the GitLab Pipelines page of your project and note the pipeline ID of the CI pipeline that has been started.
- From the `control` directory, run `./grove abortpipeline <your-pipeline-id>`. This will cancel the parent pipeline and all the child pipelines using the GitLab API.
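For reference, the sketch below shows roughly what such a cascading cancel looks like when done directly against the GitLab API with `curl` and `jq`. It is not the actual grove implementation, and `GITLAB_TOKEN`, `PROJECT_ID`, and `PIPELINE_ID` are placeholders you need to supply yourself:

```sh
# Rough sketch only; not the actual grove implementation.
API="https://gitlab.com/api/v4/projects/$PROJECT_ID"

# Cancel the parent pipeline.
curl --request POST --header "PRIVATE-TOKEN: $GITLAB_TOKEN" "$API/pipelines/$PIPELINE_ID/cancel"

# Find the child pipelines started by the parent's bridge jobs and cancel each one.
curl --header "PRIVATE-TOKEN: $GITLAB_TOKEN" "$API/pipelines/$PIPELINE_ID/bridges" \
  | jq -r '.[].downstream_pipeline.id // empty' \
  | while read -r child_id; do
      curl --request POST --header "PRIVATE-TOKEN: $GITLAB_TOKEN" "$API/pipelines/$child_id/cancel"
    done
```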
#### 2. Using a GitLab Trigger

The abort command can also be executed remotely from another GitLab CI task using the following request:

```sh
curl -X POST \
  -F token=YOUR_TOKEN `# Replace "YOUR_TOKEN" with the token you have generated in the previous point.` \
  -F "ref=main" `# A branch or tag you want to use as a base.` \
  -F "variables[ABORT_DEPLOYMENT_TRIGGER]=1" `# Marks this trigger as an abort request.` \
  -F "variables[PIPELINE_ID]=123456" `# GitLab pipeline ID that is running the deployment.` \
  `# Replace "PROJECT_ID" with the numeric ID of the GitLab project.` \
  `# The project should be a fork of https://gitlab.com/opencraft/dev/grove-template/` \
  `# You can see at https://gitlab.com/opencraft/dev/grove-template/ that its project ID is 24377526.` \
  https://gitlab.com/api/v4/projects/PROJECT_ID/trigger/pipeline
```
This will cancel the running pipeline and all its child pipelines, thus aborting the full deployment process.
### Deploy from a local machine

If you are creating the instance for the first time, you first need to run Terraform to create the required infrastructure for the instance (S3 storage, database, etc.).

- Run `./tf init && ./tf plan && ./tf apply` to deploy the infrastructure.

Then deploy the instance using Tutor.

- Run `./grove deploy NAME` to deploy the new instance onto your cluster.
- Commit your changes using git to save the new `config.yml`. (It contains sensitive values, so your repo had better be private!)
- Log in to your cloud provider's control panel (AWS, DigitalOcean, etc.) and go to the "load balancers" page. Find the IP or hostname of the load balancer, and set up the required DNS records to point to it (your LMS domain, `preview.[lms domain]`, and your Studio domain all need to point to the load balancer). Currently the load balancer is different for each instance, but we will soon fix that so a consistent IP / load balancer is used for the whole cluster (much more affordable and simpler). A quick way to verify the records is shown after this list.
- Wait a few minutes, then try accessing the instance. If you get an "SSL protocol error", wait a bit longer - it takes some time until the DNS records have propagated and Caddy configures the required HTTPS certificates.
- Create an admin user with `./tutor NAME k8s createuser --staff --superuser USERNAME EMAIL`.
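As a quick, optional sanity check of the DNS setup, you can query the records directly; the hostnames below are placeholders for your actual LMS, preview, and Studio domains:

```sh
# Placeholder hostnames; substitute your own domains.
dig +short lms.example.com
dig +short preview.lms.example.com
dig +short studio.example.com
# Each should return the IP (or CNAME target) of the cluster's load balancer.
```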
## Update an Open edX instance

Say you've made changes to a `config.yml`, or new Open edX images have been released upstream.

- Change into the `control` directory, e.g. `cd my-cluster/control/`
- Run `./grove deploy NAME` to update the instance with ID `NAME`.
For more involved changes, like a change to the instance's `requirements.txt`, or for instances using custom images:

- Change into the `control` directory, e.g. `cd my-cluster/control/`
- Update the python venv with `./grove venv update NAME`
- Re-generate the tutor config with `./grove tutor sync NAME`
- Build new images for the instance with `./grove run-pipeline build-instance-image NAME openedx` (Note: if this fails with the error "An image does not exist locally with the tag", check your GitLab container image registry; the image may already have been pushed. If you see a recently updated image in the registry, this step is done and it is safe to ignore the error.)
- Deploy the new images with `./grove deploy NAME`
- If needed, roll out the new pods/images using `./kubectl -n NAME rollout restart deployment/cms` and `./kubectl -n NAME rollout restart deployment/lms` (see the optional check after this list)
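If you want to confirm that a restart has finished, the standard `kubectl rollout status` subcommand (run through the same `./kubectl` wrapper) waits until the new pods are ready, for example:

```sh
# Optional: block until the restarted deployments are running new pods.
./kubectl -n NAME rollout status deployment/lms
./kubectl -n NAME rollout status deployment/cms
```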