Alerts
Review the alerts that are automatically configured for you in Prometheus to monitor Gloo Network components.
To monitor the Gloo Network components more easily, Gloo automatically sets up alerts for certain metrics. These metrics include:
- Latency: The time it takes to translate or reconcile Gloo resources in your environment.
- Gloo agents: Monitors the connection between the Gloo management server and workload clusters.
- Translation errors: Reports the Gloo resources that cannot be correctly translated into Istio or Cilium resources.
- Redis errors: Lists connection failures between the Gloo management server and the Redis database where all of the Gloo configuration is stored.
To view the alerts that Gloo automatically sets up and any alerts that you configure in your Gloo Network environment:
- Check the /alerts page of the Prometheus UI to manually review alerts.
- Deploy an instance of Prometheus Alertmanager to set up alert notifications, such as in the sketch that follows this list.
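If you deploy Alertmanager, you route the alerts that Prometheus fires to a notification channel. The following snippet is a minimal sketch of an Alertmanager configuration that sends all alerts to a single Slack channel. The receiver name, channel, and webhook URL are placeholders that you replace with your own values.

```yaml
route:
  # Send every alert to the single receiver that is defined below.
  receiver: gloo-alerts
  group_by: ['alertname', 'cluster']
receivers:
  - name: gloo-alerts
    slack_configs:
      # Placeholder values; replace with your own Slack webhook and channel.
      - api_url: https://hooks.slack.com/services/<your-webhook>
        channel: '#gloo-alerts'
        send_resolved: true
```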
View default alerts
The Prometheus server alert configuration is stored in the gloo-prometheus-server secret in the gloo-mesh namespace. You can extract the alerting rules from that secret by using the following command:
kubectl get secret gloo-prometheus-server -n gloo-mesh --context $MGMT_CONTEXT \
-o=jsonpath='{.data.alerting_rules\.yml}' | \
base64 -d
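The decoded output is a standard Prometheus alerting rules file. As a rough sketch of its shape, based on the default values that are described in the following sections (the group name is illustrative and annotations are omitted):

```yaml
groups:
  - name: translation-alerts   # illustrative group name
    rules:
      - alert: GlooPlatformTranslationLatencyIsHigh
        # Fire when the 99th percentile of translation time exceeds 10 seconds.
        expr: histogram_quantile(0.99, sum(rate(gloo_mesh_translation_time_sec_bucket[5m])) by(le)) > 10
        for: 15m
        labels:
          severity: warning
```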
Latency alerts
Gloo Network sets up default alerts that monitor the time it takes for a Gloo resource to get translated or reconciled.
GlooPlatformTranslationLatencyIsHigh
Use this alert to receive warnings when the Gloo management server takes longer than usual to translate Gloo resources into the corresponding Istio resources. The alert is configured as a histogram and is set on the 99th percentile. Alerts are sent when the 99th percentile of translation times stays above 10 seconds for the configured duration.
You can customize this alert, for example to send critical alerts when you reach the 99th percentile and warnings for lower percentiles, such as 70, as shown in the sketch after the following table. Note that the percentile value depends on your environment. For example, if the cluster that runs your Gloo management server is shared, you might want to use a lower percentile so that you have enough time to reschedule workloads or add resources if translation times become critical. On the other hand, if your cluster is dedicated to the Gloo management plane only and comes with additional compute resources, you can use higher percentile values for your critical alerts.
Review the default configuration of the alert. You can customize values, such as the severity, the overall timeframe that the threshold must be met before an alert is sent (duration), or the interval in which data is collected for the histogram (bucket distribution).
Characteristic | Value |
---|---|
Type | Histogram |
Expression | histogram_quantile(0.99, sum(rate(gloo_mesh_translation_time_sec_bucket[5m])) by(le)) > 10 |
Duration | 15 Minutes |
Severity | Warning |
Bucket distribution in seconds | 1, 2, 5, 10, 15, 20, 25, 30, 45, 60, 120 |
Recommended troubleshooting guide | Link |
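For example, to follow the tiered approach described above, you might keep the 99th percentile rule as the critical tier and add a second, hypothetical rule that warns at the 70th percentile. The rule name and severity labels in this sketch are illustrative:

```yaml
- alert: GlooPlatformTranslationLatencyIsHigh
  expr: histogram_quantile(0.99, sum(rate(gloo_mesh_translation_time_sec_bucket[5m])) by(le)) > 10
  for: 15m
  labels:
    severity: critical    # raised from the default warning
# Hypothetical additional rule for a lower-percentile warning
- alert: GlooPlatformTranslationLatencyIsElevated
  expr: histogram_quantile(0.70, sum(rate(gloo_mesh_translation_time_sec_bucket[5m])) by(le)) > 10
  for: 15m
  labels:
    severity: warning
```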
GlooPlatformReconcilerLatencyIsHigh
The Gloo reconciler applies translated Gloo resources in your workload clusters so that the desired state of your Gloo environment can be reached. This alert notifies you when the reconciler takes longer than 80 seconds to apply the desired resources. The alert is configured as a histogram and is set on the 99th percentile.
You can customize this alert, for example to send critical alerts when you reach the 99th percentile and warnings for lower percentiles, such as 70, as shown in the sketch after the following table. Note that the percentile value depends on your environment. For example, if the cluster that runs your Gloo management server is shared, you might want to use a lower percentile so that you have enough time to reschedule workloads or add resources if reconciliation times become critical. On the other hand, if your cluster is dedicated to the Gloo management plane only and comes with additional compute resources, you can use higher percentile values for your critical alerts.
Review the default configuration of the alert. You can customize values, such as the severity, the overall timeframe that the threshold must be met before an alert is sent (duration), or the interval in which data is collected for the histogram (bucket distribution).
Characteristic | Value |
---|---|
Type | Histogram |
Expression | histogram_quantile(0.99, sum(rate(gloo_mesh_reconciler_time_sec_bucket[5m])) by(le)) > 80 |
Duration | 15 Minutes |
Severity | Warning |
Bucket distribution in seconds | 1, 2, 5, 10, 15, 30, 50, 80, 100, 200 |
Recommended troubleshooting guide | Link |
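For example, if your cluster is dedicated to the Gloo management plane, you might raise the severity of this alert and require the threshold to be met for a longer duration before notifications are sent. The values in this sketch are illustrative:

```yaml
- alert: GlooPlatformReconcilerLatencyIsHigh
  expr: histogram_quantile(0.99, sum(rate(gloo_mesh_reconciler_time_sec_bucket[5m])) by(le)) > 80
  for: 30m               # illustrative; the default duration is 15m
  labels:
    severity: critical   # raised from the default warning
```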
Gloo agents alerts
Gloo automatically monitors the relay connection between the Gloo management server and Gloo agents, and notifies you if issues are found.
GlooPlatformAgentsAreDisconnected
This alert notifies you when a Gloo agent in a workload cluster is not connected to the Gloo management server. By default, you get a warning alert as soon as one cluster loses connectivity to the management server. However, depending on your cluster environment, you might want to change this alert to critical, as shown in the sketch after the following table.
Review the default configuration of the alert. You can customize values, such as the severity, or the overall timeframe that the threshold must be met before an alert is sent (duration).
Characteristic | Value |
---|---|
Type | Counter |
Expression | count by(cluster) (sum by(cluster) (relay_push_clients_warmed == 0)) > 0 |
Duration | 5 Minutes |
Severity | Warning |
Recommended troubleshooting guide | Relay connection, Agent |
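For example, if a disconnected agent must notify your on-call team immediately, you might change the severity label of this alert to critical. A sketch that keeps the default expression and duration:

```yaml
- alert: GlooPlatformAgentsAreDisconnected
  expr: count by(cluster) (sum by(cluster) (relay_push_clients_warmed == 0)) > 0
  for: 5m
  labels:
    severity: critical   # raised from the default warning
```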
GlooPlatformClustersNotWarming
For environments where safe mode is enabled, this alert is triggered when one or more Gloo management server replicas run in safe mode for more than 10 minutes. If you see this alert, make sure to check the workload clusters that triggered safe mode for connectivity issues. Safe mode might also be triggered because the Gloo management server or Redis is unavailable or unstable.
Characteristic | Value |
---|---|
Type | Counter |
Expression | count by(cluster) (sum by(cluster) (gloo_mesh_safe_mode_active != 0)) > 0 |
Duration | 10 Minutes |
Severity | Error |
For more information about how to troubleshoot this error, see Management server or Agent.
Redis alerts
Gloo sets up alerts to monitor the read and write operations between the Gloo management server and Redis.
GlooPlatformRedisErrors
If the Gloo management server fails to read from the Gloo Redis instance within a 5-minute timeframe, an alert is automatically triggered.
Characteristic | Value |
---|---|
Type | Counter |
Expression | increase(gloo_mesh_redis_sync_err_total[5m]) > 0 |
Severity | Warning |
Recommended troubleshooting guide | Link |