Tracking logs

Grove can extract the tracking logs of instances. This is useful for gaining more insight into the instances and their usage.

By default, the tracking logs are sent to OpenSearch; however, you can also send them to S3 or CloudWatch.

Sending logs to S3

You can send both the tracking logs and the Kubernetes logs of the pods to S3. The two log types are configured separately, so you can send them to different buckets or regions, or disable one of them.

In both cases, you must specify the TF_VAR_fluent_bit_aws_config variable, which sets the AWS credentials, in the cluster.yml or CI/CD settings of the cluster repository. The configuration is written in JSON, which must remain convertible to YAML; since valid JSON is also valid YAML, any well-formed JSON object qualifies.

An example configuration that sends both Kubernetes and tracking logs to S3 buckets could be:

TF_VAR_fluent_bit_aws_config: |
    {
        "aws_access_key_id": "<AWS-ACCESS-KEY-ID>",
        "aws_secret_access_key": "<AWS-SECRET-ACCESS-KEY>"
    }

TF_VAR_fluent_bit_aws_kube_log_config: |
    {
        "s3_bucket_name": "<S3-BUCKET-NAME>",
        "s3_endpoint": "s3.<S3-REGION>.amazonaws.com",
        "s3_region": "<S3-REGION>",
        "s3_compress_logs": "gzip",
        "s3_log_key_format":  "/logs/kubernetes/$TAG[1]-$TAG[3]/kube.log-%Y%m%d-%s.gz"
    }

TF_VAR_fluent_bit_aws_tracking_log_config: |
    {
        "s3_bucket_name": "<S3-BUCKET-NAME>",
        "s3_endpoint": "s3.<S3-REGION>.amazonaws.com",
        "s3_region": "<S3-REGION>",
        "s3_compress_logs": "gzip",
        "s3_log_key_format": "/logs/tracking/$TAG[1]/tracking.log-%Y%m%d-%s.gz"
    }
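
Once the variables above are applied and the cluster is redeployed, you can check that log objects are arriving in the bucket. The following is a minimal verification sketch, assuming boto3 is installed and the local AWS credentials can read the bucket; the placeholders match the example above, and it is illustrative only, not part of Grove:

    # Minimal verification sketch; not part of Grove. Assumes boto3 and read access.
    import boto3

    s3 = boto3.client("s3", region_name="<S3-REGION>")

    # List a few tracking log objects; the prefix mirrors s3_log_key_format above.
    # Depending on how the leading slash of the key format is handled, the
    # prefix may need to be "/logs/tracking/" instead.
    response = s3.list_objects_v2(
        Bucket="<S3-BUCKET-NAME>",
        Prefix="logs/tracking/",
        MaxKeys=10,
    )
    for obj in response.get("Contents", []):
        print(obj["Key"], obj["Size"], obj["LastModified"])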

The s3_log_key_format option defines the format of the log object's key (file name). The key must follow fluent-bit's S3 key format, which is documented in fluent-bit's S3 output plugin documentation (https://docs.fluentbit.io/manual/pipeline/outputs/s3).

The $TAG[n] parts are parsed from the log tag, which has the form kube.<namespace_name>.<pod_name>.<container_name>.<docker_id>; hence, in the example above, $TAG[1] and $TAG[3] resolve to the namespace and the container name, respectively.
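
To make the resolution concrete, here is a minimal sketch that mimics fluent-bit's substitution rules in Python for a hypothetical tag; it is illustrative only and not part of Grove or fluent-bit:

    # Sketch of how fluent-bit expands s3_log_key_format (hypothetical tag and values).
    import time

    # Hypothetical tag of the form kube.<namespace_name>.<pod_name>.<container_name>.<docker_id>
    tag = "kube.openedx-demo.lms-5f7d9c.lms.0a1b2c3d"
    parts = tag.split(".")

    key = "/logs/kubernetes/$TAG[1]-$TAG[3]/kube.log-%Y%m%d-%s.gz"
    key = key.replace("$TAG[1]", parts[1])  # namespace: "openedx-demo"
    key = key.replace("$TAG[3]", parts[3])  # container name: "lms"
    key = key.replace("%s", str(int(time.time())))  # epoch seconds, replaced before strftime
    key = time.strftime(key)  # %Y%m%d becomes the current date
    print(key)  # e.g. /logs/kubernetes/openedx-demo-lms/kube.log-20240115-1705312800.gz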

Sending logs to CloudWatch

It is also possible to send logs to CloudWatch instead of S3. The configuration is similar to the S3 configuration, and the two log types can be sent separately, just as in the case of S3.

As with S3, you must specify the TF_VAR_fluent_bit_aws_config variable, which sets the AWS credentials, in the cluster.yml or CI/CD settings of the cluster repository; the configuration is in JSON that must remain convertible to YAML.

An example configuration that sends both Kubernetes and tracking logs to CloudWatch log groups could be:

TF_VAR_fluent_bit_aws_config: |
    {
        "aws_access_key_id": "<AWS-ACCESS-KEY-ID>",
        "aws_secret_access_key": "<AWS-SECRET-ACCESS-KEY>"
    }

TF_VAR_fluent_bit_aws_kube_log_config: |
    {
        "cloudwatch_group_name": "<GROUP-NAME>",
        "cloudwatch_region": "<AWS-REGION>"
    }

TF_VAR_fluent_bit_aws_tracking_log_config: |
    {
        "cloudwatch_group_name": "<GROUP-NAME>",
        "cloudwatch_region": "<AWS-REGION>"
    }
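
After deployment, you can confirm that events are reaching the log groups. The following is a minimal sketch, assuming boto3 is installed and the local AWS credentials can read the log group; the placeholders match the example above, and it is illustrative only, not part of Grove:

    # Minimal verification sketch; not part of Grove. Assumes boto3 and read access.
    import boto3

    logs = boto3.client("logs", region_name="<AWS-REGION>")

    # Fetch a handful of recent events from the tracking log group.
    response = logs.filter_log_events(
        logGroupName="<GROUP-NAME>",
        limit=5,
    )
    for event in response["events"]:
        print(event["timestamp"], event["message"][:120])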