Explore default alerts

Every Gloo Platform installation includes a default Prometheus server that collects metrics from the Gloo agents and the management server. To help you monitor the Gloo Platform components, Gloo automatically sets up alerts for certain metrics and notifies you if issues occur.

Review the following default alerts that are automatically set up in Gloo, and explore options for customization.

You can optionally review the default alert configuration by running the following command:

kubectl get cm prometheus-server -n gloo-mesh -o yaml
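
The alerting rules are stored as a data entry in this config map. If your installation follows the upstream Prometheus community chart conventions, that entry is named alerting_rules.yml, and you can extract just the rules with a JSONPath query; verify the key name in your config map first, as it might differ.

kubectl get cm prometheus-server -n gloo-mesh -o jsonpath='{.data.alerting_rules\.yml}'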

If you want to customize the default alerts or add your own alerts, you can add them to your Helm values file and apply the file during a Gloo upgrade. To get the current Helm values and upgrade Gloo, see Upgrade Gloo Mesh.
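
For illustration, the following Helm values sketch adds a custom alerting rule alongside the defaults. The prometheus.serverFiles path and the alerting_rules.yml key follow the upstream Prometheus community chart layout, and the rule name and thresholds are hypothetical; check your current Helm values for the exact structure before you apply a change like this.

prometheus:
  serverFiles:
    alerting_rules.yml:
      groups:
      - name: custom-gloo-alerts
        rules:
        # Hypothetical rule: warn when the 90th percentile of translation
        # times stays above 5 seconds for 10 minutes.
        - alert: GlooPlatformTranslationLatencyElevated
          expr: histogram_quantile(0.9, sum(rate(gloo_mesh_translation_time_sec_bucket[5m])) by(le)) > 5
          for: 10m
          labels:
            severity: warning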

Latency alerts

Gloo sets up default alerts that monitor the time it takes for a Gloo resource to get translated or reconciled.

GlooPlatformTranslationLatencyIsHigh

Use this alert to receive warnings when the Gloo management server takes longer than usual to translate Gloo resources into the corresponding Istio resources. The alert is configured as a histogram and evaluated at the 99th percentile: an alert is sent when the 99th percentile of translation times stays above 10 seconds for the configured duration.

You can customize this alert, for example to send critical alerts when the 99th percentile is exceeded and warnings for lower percentiles, such as the 70th. Note that the right percentile values depend on your environment. For example, if the cluster that runs your Gloo management server is shared, you might want to use a lower percentile so that you have enough time to reschedule workloads or add resources if translation times become critical. On the other hand, if your cluster is dedicated to the Gloo management plane only and comes with additional compute resources, you can use higher percentile values for your critical alerts.

Review the default configuration of the alert, which is assembled into a complete rule in the sketch after the table. You can customize values, such as the severity, the overall timeframe that the threshold must be met before an alert is sent (duration), or the latency buckets that the histogram groups observations into (bucket distribution).

Type: Histogram
Expression: histogram_quantile(0.99, sum(rate(gloo_mesh_translation_time_sec_bucket[5m])) by(le)) > 10
Duration: 15 minutes
Severity: Warning
Bucket distribution in seconds: 1, 2, 5, 10, 15, 20, 25, 30, 45, 60, 120
Recommended troubleshooting guide: Link
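
Put together from the values in this table, the default alert corresponds to a standard Prometheus alerting rule along the lines of the following sketch; the group name is a placeholder, and the shipped configuration might differ in details such as annotations.

groups:
- name: gloo-translation-latency
  rules:
  - alert: GlooPlatformTranslationLatencyIsHigh
    # Fire when the 99th percentile of translation times exceeds
    # 10 seconds for 15 minutes.
    expr: histogram_quantile(0.99, sum(rate(gloo_mesh_translation_time_sec_bucket[5m])) by(le)) > 10
    for: 15m
    labels:
      severity: warning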

GlooPlatformReconcilerLatencyIsHigh

The Gloo reconciler applies translated Gloo resources in your workload clusters so that the desired state of your Gloo environment can be reached. This alert notifies you when the reconciler takes longer than 80 seconds to apply the desired resources. The alert is configured as a histogram and evaluated at the 99th percentile.

You can customize this alert, for example to send critical alerts when the 99th percentile is exceeded and warnings for lower percentiles, such as the 70th (see the sketch after the following table). Note that the right percentile values depend on your environment. For example, if the cluster that runs your Gloo management server is shared, you might want to use a lower percentile so that you have enough time to reschedule workloads or add resources if reconciliation times become critical. On the other hand, if your cluster is dedicated to the Gloo management plane only and comes with additional compute resources, you can use higher percentile values for your critical alerts.

Review the default configuration of the alert. You can customize values, such as the severity, the overall timeframe that the threshold must be met before an alert is sent (duration), or the latency buckets that the histogram groups observations into (bucket distribution).

Type: Histogram
Expression: histogram_quantile(0.99, sum(rate(gloo_mesh_reconciler_time_sec_bucket[5m])) by(le)) > 80
Duration: 15 minutes
Severity: Warning
Bucket distribution in seconds: 1, 2, 5, 10, 15, 30, 50, 80, 100, 200
Recommended troubleshooting guide: Link
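
The tiered setup mentioned before the table could look like the following sketch: the default 99th percentile rule is raised to critical, and a hypothetical early-warning rule watches the 70th percentile against a lower, assumed threshold of 30 seconds. Both the rule names and the warning threshold are assumptions to adapt to your environment.

groups:
- name: gloo-reconciler-latency
  rules:
  # Critical: the slowest 1% of reconciles exceed 80 seconds.
  - alert: GlooPlatformReconcilerLatencyCritical
    expr: histogram_quantile(0.99, sum(rate(gloo_mesh_reconciler_time_sec_bucket[5m])) by(le)) > 80
    for: 15m
    labels:
      severity: critical
  # Warning: the slowest 30% of reconciles exceed an assumed 30 seconds.
  - alert: GlooPlatformReconcilerLatencyWarning
    expr: histogram_quantile(0.70, sum(rate(gloo_mesh_reconciler_time_sec_bucket[5m])) by(le)) > 30
    for: 15m
    labels:
      severity: warning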

Gloo agent alerts

Gloo automatically monitors the relay connection between the Gloo management server and Gloo agents, and notifies you if issues are found.

GlooPlatformAgentsAreDisconnected

This alert notifies you when a Gloo agent in a workload cluster is not connected to the Gloo management server. By default, you get a warning alert as soon as one cluster loses connectivity to the management server. However, depending on your cluster environment, you might want to change the severity of this alert to critical (see the sketch after the following table).

Review the default configuration of the alert. You can customize values, such as the severity, or the overall timeframe that the threshold must be met before an alert is sent (duration).

Type: Counter
Expression: count by(cluster) (sum by(cluster) (relay_push_clients_warmed == 0)) > 0
Duration: 5 minutes
Severity: Warning
Recommended troubleshooting guide: Relay connection, Agent
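
If a disconnected agent is critical in your environment, the change amounts to raising the severity label on the default rule, as in this sketch; everything else stays as listed in the table, and the group name is a placeholder.

groups:
- name: gloo-agent-connectivity
  rules:
  - alert: GlooPlatformAgentsAreDisconnected
    # Fire when any cluster reports no warmed relay connection for 5 minutes.
    expr: count by(cluster) (sum by(cluster) (relay_push_clients_warmed == 0)) > 0
    for: 5m
    labels:
      severity: critical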

Translation alerts

Gloo automatically sets up alerts to monitor Gloo resources that cannot be translated correctly.

GlooPlatformTranslationWarnings

Sometimes Gloo resource configurations include partial errors or refer to unknown Gloo resources, such as a gateway or destination. When the Gloo management server translates such a resource, the translation itself succeeds, but a referenced resource might not be found. The Gloo resource is then marked with a warning state. When the number of resources in a warning state increases within a 5-minute timeframe, the alert is triggered.

Review the default configuration of the alert. You can customize values, such as the expression or severity.

Type: Counter
Expression: increase(translation_warning[5m]) > 0
Severity: Warning
Recommended troubleshooting guide: Link
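
To inspect the underlying metric before the alert fires, you can query the bundled Prometheus server directly. The deployment name prometheus-server is an assumption based on the config map name shown earlier; adjust it to your installation.

# Forward the Prometheus API to your local machine.
kubectl port-forward -n gloo-mesh deploy/prometheus-server 9090

# In a second terminal, evaluate the alert expression.
curl -s 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=increase(translation_warning[5m]) > 0'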

GlooPlatformTranslationErrors

Translation errors can happen if the Gloo resource configuration is correct, but the Gloo management server has an issue with applying the resource or reconciling Gloo agents. If such an error occurs, the resource state changes to Failed. When the number of resources in a failed state increases within a 5-minute timeframe, the alert is triggered.

Review the default configuration of the alert. You can customize values, such as the expression or severity.

Type: Counter
Expression: increase(translation_error[5m]) > 0
Severity: Warning
Recommended troubleshooting guide: Link
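
If occasional transient errors are expected in your environment, one possible customization of the expression is to require a minimum number of errors before the alert fires; the threshold of 5 in this sketch is an arbitrary example.

groups:
- name: gloo-translation-errors
  rules:
  - alert: GlooPlatformTranslationErrors
    # Fire only when more than 5 translation errors occur within 5 minutes.
    expr: increase(translation_error[5m]) > 5
    labels:
      severity: warning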

Redis alerts

Gloo sets up alerts to monitor the read and write operations between the Gloo management server and Redis.

GlooPlatformRedisErrors

If the Gloo management server cannot read from the Gloo Redis instance within a 5-minute timeframe, an alert is automatically triggered.

Type: Counter
Expression: increase(gloo_mesh_redis_sync_err[5m]) > 0
Severity: Warning
Recommended troubleshooting guide: Link
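
The default configuration lists no duration, so the alert can fire on a single 5-minute window with errors. If you prefer to alert only on sustained problems, a sketch of adding a duration could look like the following; the 10-minute value is an assumption.

groups:
- name: gloo-redis-errors
  rules:
  - alert: GlooPlatformRedisErrors
    expr: increase(gloo_mesh_redis_sync_err[5m]) > 0
    # Hypothetical customization: the condition must hold for 10 minutes
    # before the alert fires.
    for: 10m
    labels:
      severity: warning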