This guide describes how to get started with Gloo Mesh Enterprise's out-of-the-box metrics suite.

This feature currently only supports Istio meshes.

Before you begin

This guide assumes the following:

- A Gloo Mesh Enterprise management cluster (called cluster-1 in the examples below).
- One or more registered clusters running Istio, each with the Enterprise Agent installed.
- The Bookinfo sample application deployed to the bookinfo namespace of a registered cluster.

Environment Prerequisites


Each managed Istio control plane must be installed with the following configuration in the IstioOperator manifest.

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: example-istiooperator
  namespace: istio-system
spec:
  meshConfig:
    defaultConfig:
      envoyMetricsService:
        address: enterprise-agent.gloo-mesh:9977
      proxyMetadata:
        # needed for annotating Gloo Mesh cluster name on envoy requests (i.e. access logs, metrics)
        GLOO_MESH_CLUSTER_NAME: ${gloo-mesh-registered-cluster-name}
  values:
    global:
      # needed for annotating istio metrics with cluster
      multiCluster:
        clusterName: ${gloo-mesh-registered-cluster-name}

The envoyMetricsService config ensures that all Envoy proxies are configured to emit their metrics to the Enterprise Agent, which acts as an Envoy metrics service sink. The Enterprise Agents then forward all received metrics to Enterprise Networking, where metrics across all managed clusters are centralized.
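
To spot-check this on a registered cluster, you can inspect a sidecar's Envoy bootstrap, which should reference the Enterprise Agent as its metrics sink. A minimal sketch, assuming istioctl is installed and the Bookinfo sample is running:

# grab a productpage pod and look for the Enterprise Agent address in its bootstrap
istioctl proxy-config bootstrap -n bookinfo \
  $(kubectl -n bookinfo get pod -l app=productpage -o jsonpath='{.items[0].metadata.name}') \
  | grep enterprise-agent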

The multiCluster config enables Istio collected metrics to be annotated with the Gloo Mesh registered cluster name. This allows for proper attribution of metrics in multicluster environments, and is particularly important for attributing requests that cross cluster boundaries.
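
Similarly, you can confirm that the cluster name is being injected by looking for the GLOO_MESH_CLUSTER_NAME variable on a sidecar (again assuming the Bookinfo sample; proxyMetadata entries surface as environment variables on the istio-proxy container):

kubectl -n bookinfo get pod -l app=productpage -o yaml | grep -A1 GLOO_MESH_CLUSTER_NAME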

Gloo Mesh Enterprise

When installing Gloo Mesh Enterprise, the metricsBackend.prometheus.enabled Helm value must be set to true. This can be done by passing --set metricsBackend.prometheus.enabled=true to helm install.

This configures Gloo Mesh to install a Prometheus server which comes preconfigured to scrape the centralized metrics from the Enterprise Networking metrics endpoint.
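
For example, a standard (non-OpenShift) installation might look like the following. Note that when installing via the umbrella gloo-mesh-enterprise chart, the value is prefixed with the enterprise-networking subchart name:

% helm install gloo-mesh-enterprise gloo-mesh-enterprise/gloo-mesh-enterprise --namespace gloo-mesh \
--set licenseKey=${GLOO_MESH_LICENSE_KEY} \
--set enterprise-networking.metricsBackend.prometheus.enabled=true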

After installing the Gloo Mesh management plane into cluster-1, list the running pods; among them you should see the following deployments:

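kubectl get pods --all-namespaces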
gloo-mesh      enterprise-networking-69d74c9744-8nlkd               1/1     Running   0          23m
gloo-mesh      prometheus-server-68b58c79f8-rlq54                   2/2     Running   0          23m

OpenShift Integration

If you are installing Gloo Mesh Enterprise on an OpenShift cluster, you will need some additional Helm values to make Prometheus run, because OpenShift requires pods to run as a user ID from the namespace's assigned range:

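--set enterprise-networking.prometheus.server.securityContext.runAsUser=$OPENSHIFT_ID
--set enterprise-networking.prometheus.server.securityContext.runAsGroup=$OPENSHIFT_ID
--set enterprise-networking.prometheus.server.securityContext.fsGroup=$OPENSHIFT_ID
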
Where $OPENSHIFT_ID is a single valid ID from the range that OpenShift has assigned to your intended Gloo Mesh Enterprise namespace. The valid ID range can be found by examining your namespace's metadata. Note that this requires that your intended installation namespace already exists. If it does not, you must create it first:

% MESH_NAMESPACE='gloo-mesh' # Replace with your namespace if you are installing Gloo Mesh Enterprise elsewhere.
% oc create ns $MESH_NAMESPACE 

Once your namespace is established, check its metadata:

% oc get ns $MESH_NAMESPACE -ojsonpath='{.metadata.annotations}' 
map[openshift.io/sa.scc.mcs:s0:c27,c9 openshift.io/sa.scc.supplemental-groups:1000720000/10000 openshift.io/sa.scc.uid-range:1000720000/10000]
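
To pull just the UID range annotation directly (escaping the dots in the annotation key):

% oc get ns $MESH_NAMESPACE -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.uid-range}'
1000720000/10000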

OpenShift's range syntax is N/M, meaning N through N + M - 1 inclusive. So in this case, the valid ID range would be 1000720000 through 1000729999. Select a number from this range as your ID. Assuming the number 1000720000, an example installation command would look like this:

% OPENSHIFT_ID=1000720000
% helm install gloo-mesh-enterprise gloo-mesh-enterprise/gloo-mesh-enterprise --namespace gloo-mesh \
--set licenseKey=${GLOO_MESH_LICENSE_KEY} \
--set enterprise-networking.metricsBackend.prometheus.enabled=true \
--set gloo-mesh-ui.GlooMeshDashboard.apiserver.floatingUserId=true \
--set enterprise-networking.prometheus.server.securityContext.runAsUser=$OPENSHIFT_ID \
--set enterprise-networking.prometheus.server.securityContext.runAsGroup=$OPENSHIFT_ID \
--set enterprise-networking.prometheus.server.securityContext.fsGroup=$OPENSHIFT_ID


Generate Traffic

Before any meaningful metrics can be collected, you need to generate traffic in the mesh.

Port forward the productpage deployment (the productpage workload is convenient because it makes requests to the other workloads, but any workload of your choice will suffice).

kubectl -n bookinfo port-forward deploy/productpage-v1 9080

Then using a utility like hey, send requests to that destination:

# send 1 request per second
hey -z 1h -c 1 -q 1 http://localhost:9080/productpage\?u\=normal
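
If you do not have hey installed, a plain curl loop works just as well:

# send roughly 1 request per second
while true; do curl -s -o /dev/null http://localhost:9080/productpage?u=normal; sleep 1; done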

Note that you may need to wait a few minutes before the metrics are returned from the Gloo Mesh API discussed below. The metrics need time to propagate from the Envoy proxies to the Gloo Mesh server, and for the Prometheus server to scrape the data from Gloo Mesh.

Prometheus UI

The Prometheus server comes with a built-in UI suitable for basic metrics querying. You can view it with the following command:

# port forward prometheus server
kubectl -n gloo-mesh port-forward deploy/prometheus-server 9090

Then open localhost:9090 in your browser of choice. Here is a simple PromQL query to get you started navigating the collected metrics. It fetches the istio_requests_total metric (which counts the total number of requests) emitted by the productpage-v1.bookinfo.cluster-1 workload's Envoy proxy. You can read more about PromQL in the official documentation.

# total requests reported by the productpage-v1.bookinfo.cluster-1 workload's proxy,
# grouped by response code (assumes Gloo Mesh's workload_id relabeling)
sum(
  istio_requests_total{
    workload_id="productpage-v1.bookinfo.cluster-1"
  }
) by (
  response_code
)