Gloo Edge Metrics
All Gloo Edge pods ship with optional Prometheus monitoring capabilities.
This functionality is turned on by default, and can be turned off in a couple of different ways: through Helm chart install options, or through environment variables.
You can take a look at the Help strings we publish to see what kind of metrics are available.
Toggling Pod Metrics
Helm Chart Options
The first way is via the Helm chart. A global settings value in the chart toggles metrics and debug endpoints on all pods that are part of the Gloo Edge installation.
In addition, every deployment resource in the chart accepts a stats argument which, when set, overrides any default value inherited from the global setting.
For example, to add stats to the Gloo Edge gateway, set the gateway deployment's stats value when installing with Helm.
For example, to add stats to the Gloo Edge
discovery pod, first write your values file. Run:
```shell
echo "discovery:
  deployment:
    stats:
      enabled: true
" > stats-values.yaml
```
Then install using one of the following methods. Note that Helm 2 is not supported as of Gloo Edge v1.8.0.
```shell
glooctl install gateway --values stats-values.yaml
```

```shell
helm install gloo gloo/gloo --namespace gloo-system -f stats-values.yaml
```
Here’s what the resulting discovery manifest would look like. Note the addition of the START_STATS_SERVER environment variable and the prometheus.io/* pod annotations.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: gloo
    gloo: discovery
  name: discovery
  namespace: gloo-system
spec:
  replicas: 1
  selector:
    matchLabels:
      gloo: discovery
  template:
    metadata:
      labels:
        gloo: discovery
      annotations:
        prometheus.io/path: /metrics
        prometheus.io/port: "9091"
        prometheus.io/scrape: "true"
    spec:
      containers:
      - image: "quay.io/solo-io/discovery:0.11.1"
        imagePullPolicy: Always
        name: discovery
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: START_STATS_SERVER
          value: "true"
```
This flag will set the START_STATS_SERVER environment variable to true in the container, which will start the stats server on port 9091.
Environment Variables
The other method is to manually set the START_STATS_SERVER=1 environment variable in the pod.
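For reference, the container-spec addition has this shape (a sketch matching the START_STATS_SERVER entry shown in the discovery manifest above):

```yaml
# Add to the container's env list in any Gloo Edge deployment spec;
# any non-empty value enables the stats server.
env:
  - name: START_STATS_SERVER
    value: "true"
```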
Monitoring Gloo Edge with Prometheus
Prometheus has great support for monitoring Kubernetes pods; docs for that can be found here. If the stats are enabled through the Helm chart, then the Prometheus annotations are automatically added to the pod spec, and those Prometheus stats are available from the admin page in our pods.
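If you run your own Prometheus, those pod annotations can be honored with a standard annotation-driven scrape job. The following is a minimal sketch (the job name is illustrative and this config is not part of the Gloo Edge chart; it follows the common Prometheus relabeling pattern for prometheus.io/* annotations):

```yaml
scrape_configs:
  - job_name: gloo-pods
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names: [gloo-system]
    relabel_configs:
      # Only scrape pods annotated prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Use prometheus.io/path as the metrics path (defaults to /metrics)
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      # Use prometheus.io/port as the scrape port
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
```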
For example, assuming you installed Gloo Edge with Helm as above and enabled stats for discovery, you can kubectl port-forward those pods (or deployments/services selecting those pods) to access their admin page as follows.
```shell
kubectl --namespace gloo-system port-forward deployment/discovery 9091:9091
```
And then open http://localhost:9091 for the admin page, including the Prometheus metrics at http://localhost:9091/metrics.
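The /metrics page serves the standard Prometheus text exposition format, so it is also easy to inspect programmatically. Here is a short sketch (standard library only; the URL assumes the port-forward above is still running, and the function names are ours, not part of Gloo Edge) that extracts each metric's HELP description:

```python
import urllib.request


def help_strings(metrics_text: str) -> dict:
    """Map metric name -> description from '# HELP <name> <description>' lines."""
    helps = {}
    for line in metrics_text.splitlines():
        if line.startswith("# HELP "):
            name, _, description = line[len("# HELP "):].partition(" ")
            helps[name] = description
    return helps


def fetch_help_strings(url: str = "http://localhost:9091/metrics") -> dict:
    """Fetch a /metrics page (assumes a port-forward is running) and parse it."""
    with urllib.request.urlopen(url) as resp:
        return help_strings(resp.read().decode("utf-8"))
```

Calling fetch_help_strings() while the port-forward is active returns the same descriptions you would see with curl, keyed by metric name.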
More information on Gloo Edge’s admin ports can be found here.
You can see exactly what metrics are published from a particular pod by taking a look at our Prometheus
Help strings. For a given
pod you’re interested in, you can curl
/metrics on its stats port (usually
9091) to see this content.
For example, here’s a look at the Help strings published by our
gloo pod as of 0.20.13. You can do the
same thing for any of our pods, including the closed-source ones in the case of Gloo Edge Enterprise.
```shell
$ kubectl port-forward deployment/gloo 9091 &
$ portForwardPid=$!
$ curl -s localhost:9091/metrics | grep HELP
# HELP api_gloo_solo_io_emitter_resources_in The number of resource lists received on open watch channels
# HELP api_gloo_solo_io_emitter_snap_in Deprecated. Use api.gloo.solo.io/emitter/resources_in. The number of snapshots updates coming in.
# HELP api_gloo_solo_io_emitter_snap_missed The number of snapshots updates going missed. this can happen in heavy load. missed snapshot will be re-tried after a second.
# HELP api_gloo_solo_io_emitter_snap_out The number of snapshots updates going out
# HELP eds_gloo_solo_io_emitter_resources_in The number of resource lists received on open watch channels
# HELP eds_gloo_solo_io_emitter_snap_in Deprecated. Use eds.gloo.solo.io/emitter/resources_in. The number of snapshots updates coming in.
# HELP eds_gloo_solo_io_emitter_snap_out The number of snapshots updates going out
# HELP gloo_solo_io_setups_run The number of times the main setup loop has run
# HELP grpc_io_server_completed_rpcs Count of RPCs by method and status.
# HELP grpc_io_server_received_bytes_per_rpc Distribution of received bytes per RPC, by method.
# HELP grpc_io_server_received_messages_per_rpc Distribution of messages received count per RPC, by method.
# HELP grpc_io_server_sent_bytes_per_rpc Distribution of total sent bytes per RPC, by method.
# HELP grpc_io_server_sent_messages_per_rpc Distribution of messages sent count per RPC, by method.
# HELP grpc_io_server_server_latency Distribution of server latency in milliseconds, by method.
# HELP kube_events_count The number of events sent from kuberenets to us
# HELP kube_lists_count The number of list calls
# HELP kube_req_in_flight The number of requests in flight
# HELP kube_updates_count The number of update calls
# HELP kube_watches_count The number of watch calls
# HELP runtime_goroutines The number of goroutines
# HELP setup_gloo_solo_io_emitter_resources_in The number of resource lists received on open watch channels
# HELP setup_gloo_solo_io_emitter_snap_in Deprecated. Use setup.gloo.solo.io/emitter/resources_in. The number of snapshots updates coming in.
$ kill $portForwardPid
```