Customization options
Review options to customize the default Prometheus setup.
Bring your own Prometheus
The built-in Prometheus server is the recommended approach for scraping metrics from Gloo components and feeding them to the Gloo UI Graph to visualize workload communication. When you enable the built-in Prometheus during your installation, it is set up with a custom scraping configuration that ensures that only a minimal set of metrics and metric labels is collected.
However, the Prometheus pod is not set up with persistent storage and metrics are lost when the pod restarts or when the deployment is scaled down. Additionally, you might want to replace the built-in Prometheus server and use your organization’s own Prometheus-compatible solution or time series database that is hardened for production and integrates with other applications that might exist outside the cluster where your API Gateway runs. Review the options that you have for bringing your own Prometheus server.
Forward metrics to the built-in Prometheus in OpenShift
OpenShift comes with built-in Prometheus instances that you can use to monitor metrics for your workloads. Instead of using the built-in Prometheus that Gloo Network provides, you might want to forward the metrics from the telemetry gateway and collector agents to the OpenShift Prometheus to have a single observability layer for all of your workloads in the cluster.
For more information, see Forward metrics to OpenShift.
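For illustration only, one common way to let the OpenShift user workload monitoring stack scrape an additional workload is a PodMonitor resource. The following is a minimal sketch, not the supported Gloo configuration: the namespace, pod labels, and metrics port name are assumptions that depend on your installation, so follow the linked guide for the exact setup.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: gloo-telemetry-gateway
  namespace: gloo-mesh                 # assumption: namespace where the telemetry gateway runs
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: gloo-telemetry-gateway   # assumption: label on the gateway pods
  podMetricsEndpoints:
    - port: metrics                    # assumption: name of the metrics port on the pod
      path: /metrics
      interval: 30s
```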
Replace the built-in Prometheus with your own
If you have an existing Prometheus instance that you want to use in place of the built-in Prometheus server, you configure Gloo Network to disable the built-in Prometheus instance and to use your production Prometheus instance instead. This setup is a reasonable approach if you want to scrape raw Istio metrics to collect them in your production Prometheus instance. However, you cannot control the number of metrics that you collect, or federate and aggregate the metrics before you scrape them with your production Prometheus.
To query the metrics and compute results, you use the compute resources of the cluster where your production Prometheus instance runs. Depending on the number and complexity of the queries that you run, especially if you also use the instance to consolidate metrics from other apps, your production instance might become overloaded or respond more slowly.
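As a rough sketch, disabling the built-in Prometheus server and pointing Gloo at your own instance might look like the following Helm values. The key names shown here (`prometheus.enabled`, `common.prometheusUrl`) and the example URL are assumptions; verify them against the Helm values reference for your Gloo version before you use them.

```yaml
# Sketch of Helm values; key names are assumptions, check your chart's values reference.
prometheus:
  enabled: false                       # do not install the built-in Prometheus server
common:
  # assumption: URL that Gloo components and the UI use to query metrics
  prometheusUrl: http://prometheus.monitoring.svc.cluster.local:9090
```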
To have more granular control over the metrics that you want to collect, it is recommended to set up additional receivers, processors, and exporters in the Gloo telemetry pipeline to make these metrics available to the Gloo telemetry gateway. Then, forward these metrics to the third-party solution or time series database of your choice, such as your production Prometheus or Datadog instance. For more information, see the Prometheus receiver and Prometheus exporter OpenTelemetry documentation.
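To give a sense of what such a pipeline looks like, the following is a minimal OpenTelemetry collector configuration sketch that scrapes additional metrics with the Prometheus receiver and forwards them by remote write. The scrape job, the remote write endpoint, and how you merge this snippet into the Gloo telemetry gateway Helm values are assumptions; consult the OpenTelemetry and Gloo documentation for the exact configuration.

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: istio-raw                        # assumption: extra scrape job for raw Istio metrics
          kubernetes_sd_configs:
            - role: pod
processors:
  batch: {}
exporters:
  prometheusremotewrite:
    endpoint: https://prometheus.example.com/api/v1/write   # assumption: your production endpoint
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [batch]
      exporters: [prometheusremotewrite]
```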
Run another Prometheus instance alongside the built-in one
You might want to run multiple Prometheus instances in your cluster that each capture metrics for certain components. For example, you might use the built-in Prometheus server in Gloo Network to capture metrics for the Gloo components, and use a different Prometheus server for your own apps’ metrics. While this setup is supported, make sure that you check the scraping configuration for each of your Prometheus instances to prevent metrics from being scraped multiple times.
In Gloo version 2.5.0, the `prometheus.io/port: "<port_number>"` annotation was removed from the Gloo management server and agent. However, the `prometheus.io/scrape: true` annotation is still present. If another Prometheus instance runs in your cluster and is not set up with custom scraping jobs for the Gloo management server and agent, that instance automatically scrapes all ports on the management server and agent pods, which can lead to error messages in the management server and agent logs. This issue is resolved in version 2.5.2. To resolve it in earlier patch versions, you can choose between the following options:
- Add the `prometheus.io/port: "<port_number>"` annotation to the management server and agent pods by using the deployment override option in your Helm chart.
  ```yaml
  glooMgmtServer:
    deploymentOverrides:
      spec:
        template:
          metadata:
            annotations:
              prometheus.io/port: "9091"
  glooAgent:
    deploymentOverrides:
      spec:
        template:
          metadata:
            annotations:
              prometheus.io/port: "9093"
  ```
- Configure your Prometheus server with the same scraping configuration that the built-in Prometheus server uses to capture metrics from the management server and agents. To get the scraping configuration of the built-in Prometheus, see Default Prometheus setup. A sketch of such a scraping job follows this list.
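The following is a minimal sketch of a scraping job that honors the `prometheus.io/scrape` and `prometheus.io/port` pod annotations. The job name and relabel rules are assumptions for illustration; compare them against the configuration shown in Default Prometheus setup, because the built-in setup may differ.

```yaml
scrape_configs:
  - job_name: gloo-management          # assumption: job name; mirror the built-in configuration
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Scrape only pods that opt in with the prometheus.io/scrape annotation.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Scrape only the port declared in the prometheus.io/port annotation.
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: '([^:]+)(?::\d+)?;(\d+)'
        replacement: '$1:$2'
        target_label: __address__
```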