Get started on OpenShift
Quickly get started with Gloo Mesh Enterprise by deploying a demo environment to your OpenShift clusters.
With this guide, you can use a managed OpenShift environment, such as clusters in Google Cloud Platform (GCP) or Amazon Web Services (AWS), to install Gloo Mesh Enterprise in a management cluster, register remote clusters, and try out multicluster traffic management. The following figure depicts the multi-mesh architecture created by this quick-start setup.

This quick start guide creates a setup that you can use for testing purposes across three clusters. To set up a production-level deployment, see the Setup guide instead.
Before you begin
-
Install the following CLI tools.
- istioctl, the Istio command line tool. The resources in this guide use Istio version 1.12.5.
- helm, the Kubernetes package manager.
- oc, the OpenShift command line tool. Download the oc version that is the same minor version as the OpenShift clusters you plan to use with Gloo Mesh.
- meshctl, the Gloo Mesh command line tool for bootstrapping Gloo Mesh, registering clusters, describing configured resources, and more.
-
Create three OpenShift clusters. In this guide, the cluster names mgmt-cluster, cluster1, and cluster2 are used. The mgmt-cluster serves as the management cluster, and cluster1 and cluster2 serve as the remote clusters in this setup.
-
Set the names of your clusters from your infrastructure provider. If your clusters have different names, specify those names instead.
export MGMT_CLUSTER=mgmt-cluster
export REMOTE_CLUSTER1=cluster1
export REMOTE_CLUSTER2=cluster2
-
Save the kubeconfig contexts for your clusters. Run kubectl config get-contexts, look for your cluster in the CLUSTER column, and get the context name in the NAME column.
export MGMT_CONTEXT=<management-cluster-context>
export REMOTE_CONTEXT1=<remote-cluster1-context>
export REMOTE_CONTEXT2=<remote-cluster2-context>
-
Set the Gloo Mesh Enterprise license key that you got from your Solo account representative as an environment variable. If you do not have a key yet, you can get a trial license by contacting an account representative.
export GLOO_MESH_LICENSE_KEY=<license_key>
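Optionally, before you continue, confirm that the CLI tools and environment variables are in place. The following quick check is only a sketch; the exact version output varies by tool and release.
# Confirm that each CLI tool is installed and on your PATH
istioctl version --remote=false
helm version --short
oc version --client
meshctl version
# Confirm that the cluster names, contexts, and license key are set
echo $MGMT_CLUSTER $REMOTE_CLUSTER1 $REMOTE_CLUSTER2
echo $MGMT_CONTEXT $REMOTE_CONTEXT1 $REMOTE_CONTEXT2
test -n "$GLOO_MESH_LICENSE_KEY" && echo "License key is set"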
Step 1: Install Istio in the remote clusters
Install an Istio service mesh into both remote clusters. Later in this guide, you register these clusters with the Gloo Mesh Enterprise management plane so that Gloo Mesh can discover and configure Istio workloads running in these registered clusters. By installing Istio into remote clusters before you install the Gloo Mesh management plane, your Istio service meshes can be immediately discovered when you register the remote clusters.
Note that the following Istio installation profiles are provided for their simplicity, but Gloo Mesh can discover and manage Istio deployments regardless of their installation options. Additionally, to configure multicluster traffic routing later in this guide, ensure that the Istio deployment on each cluster has an externally accessible ingress gateway.
For more information, see the Istio documentation for OpenShift installation.
-
Set the Istio version. The latest version is used as an example. If you downloaded a different version, make sure to specify that version instead.
export ISTIO_IMAGE=1.12.5-solo
-
Set the Istio image repo. For Istio 1.12 and later, use a Gloo Mesh Istio repo key that you can get by logging in to the Support Center and reviewing the Istio images built by Solo.io support article. Or, for Istio 1.11 or earlier, you can use gcr.io/istio-enterprise. For more information, see Get the Gloo Mesh Istio version that you want to use.
export REPO=<repo-key>
-
Elevate the permissions of the istio-system and istio-operator service accounts that will be created in cluster1 and cluster2. These permissions allow the Istio sidecars to make use of a user ID that is normally restricted by OpenShift.
oc --context $REMOTE_CONTEXT1 adm policy add-scc-to-group anyuid system:serviceaccounts:istio-system
oc --context $REMOTE_CONTEXT1 adm policy add-scc-to-group anyuid system:serviceaccounts:istio-operator
oc --context $REMOTE_CONTEXT2 adm policy add-scc-to-group anyuid system:serviceaccounts:istio-system
oc --context $REMOTE_CONTEXT2 adm policy add-scc-to-group anyuid system:serviceaccounts:istio-operator
-
Install Istio in
cluster1
.CLUSTER_NAME=$REMOTE_CLUSTER1 cat << EOF | istioctl install -y --context $REMOTE_CONTEXT1 -f - apiVersion: install.istio.io/v1alpha1 kind: IstioOperator metadata: name: gloo-mesh-demo namespace: istio-system spec: # Openshift specifc installation (https://istio.io/latest/docs/setup/additional-setup/config-profiles/) profile: openshift # Solo.io Istio distribution repository hub: $REPO # Solo.io Gloo Mesh Istio tag tag: ${ISTIO_IMAGE} meshConfig: # enable access logging to standard output accessLogFile: /dev/stdout defaultConfig: # wait for the istio-proxy to start before application pods holdApplicationUntilProxyStarts: true # enable Gloo Mesh metrics service (required for Gloo Mesh Dashboard) envoyMetricsService: address: enterprise-agent.gloo-mesh:9977 # enable GlooMesh accesslog service (required for Gloo Mesh Access Logging) envoyAccessLogService: address: enterprise-agent.gloo-mesh:9977 proxyMetadata: # Enable Istio agent to handle DNS requests for known hosts # Unknown hosts will automatically be resolved using upstream dns servers in resolv.conf # (for proxy-dns) ISTIO_META_DNS_CAPTURE: "true" # Enable automatic address allocation (for proxy-dns) ISTIO_META_DNS_AUTO_ALLOCATE: "true" # Used for gloo mesh metrics aggregation # should match trustDomain (required for Gloo Mesh Dashboard) GLOO_MESH_CLUSTER_NAME: ${CLUSTER_NAME} # Set the default behavior of the sidecar for handling outbound traffic from the application. outboundTrafficPolicy: mode: ALLOW_ANY # The trust domain corresponds to the trust root of a system. # For Gloo Mesh this should be the name of the cluster that cooresponds with the CA certificate CommonName identity trustDomain: ${CLUSTER_NAME} components: ingressGateways: # enable the default ingress gateway - name: istio-ingressgateway enabled: true k8s: service: type: LoadBalancer ports: # health check port (required to be first for aws elbs) - name: status-port port: 15021 targetPort: 15021 # main http ingress port - port: 80 targetPort: 8080 name: http2 # main https ingress port - port: 443 targetPort: 8443 name: https # Port for gloo-mesh multi-cluster mTLS passthrough (Required for Gloo Mesh east/west routing) - port: 15443 targetPort: 15443 # Gloo Mesh looks for this default name 'tls' on an ingress gateway name: tls pilot: k8s: env: # Allow multiple trust domains (Required for Gloo Mesh east/west routing) - name: PILOT_SKIP_VALIDATE_TRUST_DOMAIN value: "true" values: # https://istio.io/v1.5/docs/reference/config/installation-options/#global-options global: # needed for connecting VirtualMachines to the mesh network: ${CLUSTER_NAME} # needed for annotating istio metrics with cluster (should match trust domain and GLOO_MESH_CLUSTER_NAME) multiCluster: clusterName: ${CLUSTER_NAME} EOF
Example output:
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Installation complete
-
Install Istio in
cluster2
.CLUSTER_NAME=$REMOTE_CLUSTER2 cat << EOF | istioctl install -y --context $REMOTE_CONTEXT2 -f - apiVersion: install.istio.io/v1alpha1 kind: IstioOperator metadata: name: gloo-mesh-demo namespace: istio-system spec: # Openshift specifc installation (https://istio.io/latest/docs/setup/additional-setup/config-profiles/) profile: openshift # Solo.io Istio distribution repository hub: $REPO # Solo.io Gloo Mesh Istio tag tag: ${ISTIO_IMAGE} meshConfig: # enable access logging to standard output accessLogFile: /dev/stdout defaultConfig: # wait for the istio-proxy to start before application pods holdApplicationUntilProxyStarts: true # enable Gloo Mesh metrics service (required for Gloo Mesh Dashboard) envoyMetricsService: address: enterprise-agent.gloo-mesh:9977 # enable GlooMesh accesslog service (required for Gloo Mesh Access Logging) envoyAccessLogService: address: enterprise-agent.gloo-mesh:9977 proxyMetadata: # Enable Istio agent to handle DNS requests for known hosts # Unknown hosts will automatically be resolved using upstream dns servers in resolv.conf # (for proxy-dns) ISTIO_META_DNS_CAPTURE: "true" # Enable automatic address allocation (for proxy-dns) ISTIO_META_DNS_AUTO_ALLOCATE: "true" # Used for gloo mesh metrics aggregation # should match trustDomain (required for Gloo Mesh Dashboard) GLOO_MESH_CLUSTER_NAME: ${CLUSTER_NAME} # Set the default behavior of the sidecar for handling outbound traffic from the application. outboundTrafficPolicy: mode: ALLOW_ANY # The trust domain corresponds to the trust root of a system. # For Gloo Mesh this should be the name of the cluster that cooresponds with the CA certificate CommonName identity trustDomain: ${CLUSTER_NAME} components: ingressGateways: # enable the default ingress gateway - name: istio-ingressgateway enabled: true k8s: service: type: LoadBalancer ports: # health check port (required to be first for aws elbs) - name: status-port port: 15021 targetPort: 15021 # main http ingress port - port: 80 targetPort: 8080 name: http2 # main https ingress port - port: 443 targetPort: 8443 name: https # Port for gloo-mesh multi-cluster mTLS passthrough (Required for Gloo Mesh east/west routing) - port: 15443 targetPort: 15443 # Gloo Mesh looks for this default name 'tls' on an ingress gateway name: tls pilot: k8s: env: # Allow multiple trust domains (Required for Gloo Mesh east/west routing) - name: PILOT_SKIP_VALIDATE_TRUST_DOMAIN value: "true" values: # https://istio.io/v1.5/docs/reference/config/installation-options/#global-options global: # needed for connecting VirtualMachines to the mesh network: ${CLUSTER_NAME} # needed for annotating istio metrics with cluster (should match trust domain and GLOO_MESH_CLUSTER_NAME) multiCluster: clusterName: ${CLUSTER_NAME} EOF
-
Expose the istio-ingressgateway load balancer on each cluster by using an OpenShift route.
oc --context $REMOTE_CONTEXT1 -n istio-system expose svc/istio-ingressgateway --port=http2
oc --context $REMOTE_CONTEXT2 -n istio-system expose svc/istio-ingressgateway --port=http2
To prevent issues during cluster registration in subsequent steps, do not label the gloo-mesh project in each remote cluster for automatic Istio injection. To ensure the project is not labeled for injection, run oc label namespace gloo-mesh istio-injection=disabled --overwrite.
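Before you continue to the next step, you can optionally confirm that Istio is running and that the ingress gateway received an external address on each cluster. This is only a sketch of a quick check; the exact output depends on your cloud provider.
# Check that istiod and the ingress gateway pods are running in each remote cluster
oc --context $REMOTE_CONTEXT1 get pods -n istio-system
oc --context $REMOTE_CONTEXT2 get pods -n istio-system
# Check that the istio-ingressgateway service has an external IP address or hostname
oc --context $REMOTE_CONTEXT1 get svc istio-ingressgateway -n istio-system
oc --context $REMOTE_CONTEXT2 get svc istio-ingressgateway -n istio-system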
Step 2: Install Gloo Mesh Enterprise in the management cluster
Install the Gloo Mesh Enterprise management components into your management cluster. The management components serve as the control plane where you define all service mesh configurations that you want Gloo Mesh to enforce across clusters and service meshes. The control plane also aggregates all of the discovered Istio service mesh components into the simplified Gloo Mesh API Mesh, Workload, and Destination custom resources.
Note that this guide uses helm to install a minimum deployment of Gloo Mesh Enterprise for testing purposes, and some optional components are not installed. For example:
- Self-signed certificates are used.
- The role-based API (RBAC) is not enforced.
- Prometheus is installed, but the security context for the default Prometheus instance is set to false due to a Helm bug where null values do not overwrite non-null subchart values. Although you see a Helm warning due to this setting, the rendered YAML file is still valid. Alternatively, you can configure a custom Prometheus instance.
- The floatingUserId setting is used, which is needed for proper dashboard functionality in OpenShift.
To learn more about these installation options, including advanced configuration options available in the Gloo Mesh Enterprise Helm chart, see the Setup guide.
-
Switch to the management cluster context.
oc config use-context $MGMT_CONTEXT
-
Add and update the gloo-mesh-enterprise Helm repository.
helm repo add gloo-mesh-enterprise https://storage.googleapis.com/gloo-mesh-enterprise/gloo-mesh-enterprise
helm repo update
-
Save the gloo-mesh project name in an environment variable, and create the project. If you want to install Gloo Mesh into a different project, specify that project instead.
INSTALL_NAMESPACE=gloo-mesh
oc new-project $INSTALL_NAMESPACE
-
Install Gloo Mesh Enterprise in your management cluster.
helm install gloo-mesh-enterprise gloo-mesh-enterprise/gloo-mesh-enterprise --kube-context $MGMT_CONTEXT -n $INSTALL_NAMESPACE \
  --set licenseKey=${GLOO_MESH_LICENSE_KEY} \
  --set rbac-webhook.enabled=false \
  --set enterprise-networking.prometheus.server.securityContext=false \
  --set enterprise-networking.enterpriseNetworking.floatingUserId=true \
  --set gloo-mesh-ui.dashboard.floatingUserId=true \
  --set gloo-mesh-ui.redis-dashboard.redisDashboard.floatingUserId=true
By default, self-signed certificates are used to secure communication between the management and data planes. If you prefer to set up Gloo Mesh without secure communication for quick demonstrations, include the --set enterprise-networking.global.insecure=true flag. For more information, see Gloo Mesh-managed certificates for POC installations.
-
Verify that the management components have a status of Running.
oc get pods -n gloo-mesh
Example output:
NAME                                     READY   STATUS    RESTARTS   AGE
dashboard-749dc7875c-4z77k               3/3     Running   0          41s
enterprise-networking-778d45c7b5-5d9nh   1/1     Running   0          41s
prometheus-server-86854b778-r6r52        2/2     Running   0          41s
redis-dashboard-844dc4f9-jnb4j           1/1     Running   0          41s
-
Verify that the management plane is correctly installed. This check might take a few seconds to complete.
meshctl check server --kubecontext $MGMT_CONTEXT
Note that because no remote clusters are registered yet, the agent connectivity check returns a warning.
Gloo Mesh Management Cluster Installation
🟢 Gloo Mesh Pods Status
🟡 Gloo Mesh Agents Connectivity
   Hints:
   * No registered clusters detected. To register a remote cluster that has a deployed Gloo Mesh agent, add a KubernetesCluster CR.
     For more info, see: https://docs.solo.io/gloo-mesh-enterprise/latest/setup/enterprise_cluster_registration/
Management Configuration
2021-10-08T17:33:05.871382Z info klog apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
🟢 Gloo Mesh CRD Versions
🟢 Gloo Mesh Networking Configuration Resources
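As an aside, if you prefer to keep the installation settings in a Helm values file instead of repeating --set flags, the following sketch mirrors the --set paths used in the install command above. Verify the field names against the chart's values reference before you rely on it.
# Optional: capture the same settings in a values file and pass it with -f
cat > /tmp/gloo-mesh-enterprise-values.yaml << EOF
licenseKey: ${GLOO_MESH_LICENSE_KEY}
rbac-webhook:
  enabled: false
enterprise-networking:
  prometheus:
    server:
      securityContext: false
  enterpriseNetworking:
    floatingUserId: true
gloo-mesh-ui:
  dashboard:
    floatingUserId: true
  redis-dashboard:
    redisDashboard:
      floatingUserId: true
EOF
# helm install gloo-mesh-enterprise gloo-mesh-enterprise/gloo-mesh-enterprise \
#   --kube-context $MGMT_CONTEXT -n $INSTALL_NAMESPACE -f /tmp/gloo-mesh-enterprise-values.yaml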
Step 3: Register remote clusters
Register your remote clusters with the Gloo Mesh management plane.
When you installed Gloo Mesh Enterprise in the management cluster, a deployment named enterprise-networking was created to run the relay server. The relay server is exposed by the enterprise-networking LoadBalancer service. When you register remote clusters to be managed by Gloo Mesh Enterprise, a deployment named enterprise-agent is created on each remote cluster to run a relay agent. Each relay agent is exposed by an enterprise-agent ClusterIP service, from which all communication is outbound to the relay server on the management cluster. For more information about relay server-agent communication, see the Architecture page.
Cluster registration also creates a KubernetesCluster custom resource on the management cluster to represent the remote cluster and store relevant data, such as the remote cluster's local domain (“cluster.local”). To learn more about cluster registration and how to register clusters with Helm rather than meshctl, review the cluster registration guide.
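After you complete the registration steps below, you can inspect these KubernetesCluster resources directly to see what Gloo Mesh recorded for each cluster; a quick sketch:
# List the KubernetesCluster resources that registration creates in the management cluster
oc --context $MGMT_CONTEXT get kubernetescluster -n gloo-mesh
# View the details stored for one cluster, such as its local cluster domain
oc --context $MGMT_CONTEXT get kubernetescluster $REMOTE_CLUSTER1 -n gloo-mesh -o yaml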
-
In the management cluster, find the external address that was assigned by your cloud provider to the enterprise-networking LoadBalancer service. When you register the remote clusters in subsequent steps, the enterprise-agent relay agent in each cluster accesses this address via a secure connection. Note that it might take a few minutes for your cloud provider to assign an external address to the LoadBalancer.
If your cloud provider assigns an IP address to the load balancer, such as in GCP:
ENTERPRISE_NETWORKING_DOMAIN=$(oc get svc -n gloo-mesh enterprise-networking -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
ENTERPRISE_NETWORKING_PORT=$(oc -n gloo-mesh get service enterprise-networking -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
ENTERPRISE_NETWORKING_ADDRESS=${ENTERPRISE_NETWORKING_DOMAIN}:${ENTERPRISE_NETWORKING_PORT}
echo $ENTERPRISE_NETWORKING_ADDRESS
If your cloud provider assigns a hostname to the load balancer, such as in AWS:
ENTERPRISE_NETWORKING_DOMAIN=$(oc get svc -n gloo-mesh enterprise-networking -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
ENTERPRISE_NETWORKING_PORT=$(oc -n gloo-mesh get service enterprise-networking -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
ENTERPRISE_NETWORKING_ADDRESS=${ENTERPRISE_NETWORKING_DOMAIN}:${ENTERPRISE_NETWORKING_PORT}
echo $ENTERPRISE_NETWORKING_ADDRESS
-
Create a Helm values file to ensure that the enterprise-agent Helm chart uses floatingUserId.
cat > /tmp/enterprise-agent-values.yaml << EOF
enterpriseAgent:
  floatingUserId: true
EOF
-
Register cluster1 with the management plane. If you installed the management plane insecurely, include the --relay-server-insecure=true flag in this command.
meshctl cluster register enterprise \
  --mgmt-context=$MGMT_CONTEXT \
  --remote-context=$REMOTE_CONTEXT1 \
  --relay-server-address $ENTERPRISE_NETWORKING_ADDRESS \
  --enterprise-agent-chart-values /tmp/enterprise-agent-values.yaml \
  $REMOTE_CLUSTER1
Example output:
Registering cluster
📃 Copying root CA relay-root-tls-secret.gloo-mesh to remote cluster from management cluster
📃 Copying bootstrap token relay-identity-token-secret.gloo-mesh to remote cluster from management cluster
💻 Installing relay agent in the remote cluster
Finished installing chart 'enterprise-agent' as release gloo-mesh:enterprise-agent
📃 Creating remote.cluster KubernetesCluster CRD in management cluster
⌚ Waiting for relay agent to have a client certificate
Checking...
Checking...
🗑 Removing bootstrap token
✅ Done registering cluster!
-
Register cluster2 with the management plane. If you installed the management plane insecurely, include the --relay-server-insecure=true flag in this command.
meshctl cluster register enterprise \
  --mgmt-context=$MGMT_CONTEXT \
  --remote-context=$REMOTE_CONTEXT2 \
  --relay-server-address $ENTERPRISE_NETWORKING_ADDRESS \
  --enterprise-agent-chart-values /tmp/enterprise-agent-values.yaml \
  $REMOTE_CLUSTER2
-
Verify that each remote cluster is successfully registered with Gloo Mesh.
oc get kubernetescluster -n gloo-mesh
Example output:
NAME       AGE
cluster1   27s
cluster2   23s
-
Verify that Gloo Mesh successfully discovered the Istio service meshes in each remote cluster.
oc get mesh -n gloo-mesh
Example output:
NAME                           AGE
istiod-istio-system-cluster1   68s
istiod-istio-system-cluster2   28s
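With both clusters registered, you can also re-run the meshctl check from Step 2. The Gloo Mesh Agents Connectivity check that previously returned a warning should now report the connected agents.
meshctl check server --kubecontext $MGMT_CONTEXT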
Now that Gloo Mesh Enterprise is installed in the management cluster, the remote clusters are registered with the management plane, and the Istio meshes in the remote clusters are discovered by Gloo Mesh, your Gloo Mesh Enterprise setup is complete! To try out some of Gloo Mesh Enterprise's features, continue with the following sections to configure Gloo Mesh for a multicluster use case.
Step 4: Create a virtual mesh
Bootstrap connectivity between the Istio service meshes in each remote cluster by creating a VirtualMesh. When you create a VirtualMesh resource in the management cluster, each service mesh in the remote clusters is configured with certificates that share a common root of trust. After trust is established, the virtual mesh configuration facilitates cross-cluster communications by federating services so that they are accessible across clusters. To learn more, see the concepts documentation.
-
Create a VirtualMesh resource named virtual-mesh in the gloo-mesh project of the management cluster.
oc apply -f - << EOF
apiVersion: networking.mesh.gloo.solo.io/v1
kind: VirtualMesh
metadata:
  name: virtual-mesh
  namespace: gloo-mesh
spec:
  mtlsConfig:
    # Note: Do NOT use this autoRestartPods setting in production!!
    autoRestartPods: true
    shared:
      rootCertificateAuthority:
        generated: {}
  federation:
    # federate all Destinations to all external meshes
    selectors:
    - {}
  # Disable global access policy enforcement for demonstration purposes.
  globalAccessPolicy: DISABLED
  meshes:
  - name: istiod-istio-system-${REMOTE_CLUSTER1}
    namespace: gloo-mesh
  - name: istiod-istio-system-${REMOTE_CLUSTER2}
    namespace: gloo-mesh
EOF
-
Verify that the virtual mesh is created and your service meshes are configured for multicluster traffic.
oc get virtualmesh -n gloo-mesh virtual-mesh -oyaml
In the status section of the output, ensure that each service mesh and the virtual mesh have a state of ACCEPTED.
...
status:
  deployedSharedTrust:
    rootCertificateAuthority:
      generated: {}
  destinations:
    istio-ingressgateway-istio-system-cluster1.gloo-mesh.:
      state: ACCEPTED
    istio-ingressgateway-istio-system-cluster2.gloo-mesh.:
      state: ACCEPTED
  meshes:
    istiod-istio-system-cluster1.gloo-mesh.:
      state: ACCEPTED
    istiod-istio-system-cluster2.gloo-mesh.:
      state: ACCEPTED
  observedGeneration: 1
  state: ACCEPTED
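Because the virtual mesh establishes a shared root of trust, you can also confirm that each remote cluster received an intermediate CA certificate for Istio. As described in the certificate section at the end of this guide, the signed intermediate CA is stored in the cacerts secret of the istio-system project.
oc --context $REMOTE_CONTEXT1 get secret cacerts -n istio-system
oc --context $REMOTE_CONTEXT2 get secret cacerts -n istio-system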
Step 5: Route multicluster traffic
Now that the individual Istio service meshes are unified by a single virtual mesh, use the Bookinfo sample app to see how Gloo Mesh can facilitate multicluster traffic.
Deploy Bookinfo across clusters
Start by deploying different versions of the Bookinfo sample app to both of the remote clusters. cluster1 runs the app with versions 1 and 2 of the reviews service (reviews-v1 and reviews-v2), and cluster2 runs version 3 of the reviews service (reviews-v3).
-
Create the bookinfo project in each cluster.
oc new-project bookinfo --context $REMOTE_CONTEXT1
oc new-project bookinfo --context $REMOTE_CONTEXT2
-
Create a NetworkAttachmentDefinition custom resource for the bookinfo project of each remote cluster. In each OpenShift project where Istio must create workloads, a NetworkAttachmentDefinition is required.
cat <<EOF | oc --context $REMOTE_CONTEXT1 -n bookinfo create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: istio-cni
EOF
cat <<EOF | oc --context $REMOTE_CONTEXT2 -n bookinfo create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: istio-cni
EOF
-
Elevate the permissions of the bookinfo service accounts to allow the Istio sidecars to make use of a user ID that is normally restricted by OpenShift.
oc --context $REMOTE_CONTEXT1 adm policy add-scc-to-group anyuid system:serviceaccounts:bookinfo
oc --context $REMOTE_CONTEXT2 adm policy add-scc-to-group anyuid system:serviceaccounts:bookinfo
-
Install Bookinfo with the reviews-v1 and reviews-v2 services to cluster1.
# prepare the bookinfo namespace for Istio sidecar injection
kubectl --context $REMOTE_CONTEXT1 label namespace bookinfo istio-injection=enabled
# deploy bookinfo application components for all versions less than v3
kubectl --context $REMOTE_CONTEXT1 -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/{{< readfile file="static/content/version_istio.txt" markdown="true">}}/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app,version notin (v3)'
# deploy all bookinfo service accounts
kubectl --context $REMOTE_CONTEXT1 -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/{{< readfile file="static/content/version_istio.txt" markdown="true">}}/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account'
# configure ingress gateway to access bookinfo
kubectl --context $REMOTE_CONTEXT1 -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/{{< readfile file="static/content/version_istio.txt" markdown="true">}}/samples/bookinfo/networking/bookinfo-gateway.yaml
-
Verify that the Bookinfo app is running in cluster1.
oc --context $REMOTE_CONTEXT1 get pods -n bookinfo
Example output:
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-558b8b4b76-w9qp8       2/2     Running   0          2m33s
productpage-v1-6987489c74-54lvk   2/2     Running   0          2m34s
ratings-v1-7dc98c7588-pgsxv       2/2     Running   0          2m34s
reviews-v1-7f99cc4496-lwtsr       2/2     Running   0          2m34s
reviews-v2-7d79d5bd5d-mpsk2       2/2     Running   0          2m34s
If your Bookinfo deployment is stuck in a pending state, you might see the following error:
admission webhook "sidecar-injector.istio.io" denied the request: template: inject:1: function "Template_Version_And_Istio_Version_Mismatched_Check_Installation" not defined
Your istioctl version does not match the IstioOperator version that was used during Istio installation. Ensure that you download the same version of istioctl, which is 1.12.5 in this example.
-
Install Bookinfo with the reviews-v3 service to cluster2.
# prepare the bookinfo namespace for Istio sidecar injection
kubectl --context $REMOTE_CONTEXT2 label namespace bookinfo istio-injection=enabled
# deploy reviews and ratings services
kubectl --context $REMOTE_CONTEXT2 -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/{{< readfile file="static/content/version_istio.txt" markdown="true">}}/samples/bookinfo/platform/kube/bookinfo.yaml -l 'service in (reviews)'
# deploy reviews-v3
kubectl --context $REMOTE_CONTEXT2 -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/{{< readfile file="static/content/version_istio.txt" markdown="true">}}/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app in (reviews),version in (v3)'
# deploy ratings
kubectl --context $REMOTE_CONTEXT2 -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/{{< readfile file="static/content/version_istio.txt" markdown="true">}}/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app in (ratings)'
# deploy reviews and ratings service accounts
kubectl --context $REMOTE_CONTEXT2 -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/{{< readfile file="static/content/version_istio.txt" markdown="true">}}/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account in (reviews, ratings)'
-
Verify that the Bookinfo app is running on cluster2.
oc --context $REMOTE_CONTEXT2 get pods -n bookinfo
Example output:
NAME                          READY   STATUS    RESTARTS   AGE
ratings-v1-7dc98c7588-qbmmh   2/2     Running   0          3m11s
reviews-v3-7dbcdcbc56-w4kbf   2/2     Running   0          3m11s
-
Get the address of the Istio ingress gateway on cluster1.
If your cloud provider assigns an IP address to the load balancer, such as in GCP:
CLUSTER_1_INGRESS_ADDRESS=$(oc --context $REMOTE_CONTEXT1 get svc -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$CLUSTER_1_INGRESS_ADDRESS/productpage
If your cloud provider assigns a hostname to the load balancer, such as in AWS:
CLUSTER_1_INGRESS_ADDRESS=$(oc --context $REMOTE_CONTEXT1 get svc -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo http://$CLUSTER_1_INGRESS_ADDRESS/productpage
-
Navigate to http://$CLUSTER_1_INGRESS_ADDRESS/productpage in a web browser.
open http://$CLUSTER_1_INGRESS_ADDRESS/productpage
-
Refresh the page a few times to see the black stars in the Book Reviews column appear and disappear. The presence of black stars represents reviews-v2, and the absence of stars represents reviews-v1.
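If you prefer to check from the command line before opening a browser, you can send a few requests through the ingress gateway and confirm that the product page responds; a minimal sketch:
# Send five requests to the Bookinfo product page and print the HTTP status codes
for i in $(seq 1 5); do
  curl -s -o /dev/null -w "%{http_code}\n" http://$CLUSTER_1_INGRESS_ADDRESS/productpage
done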

Shift traffic across clusters
In order for the productpage service on cluster1 to access reviews-v3 on cluster2, you can use Gloo Mesh to route traffic across the remote clusters. You can route traffic to reviews-v3 by creating a Gloo Mesh TrafficPolicy resource that diverts 75% of reviews traffic to the reviews-v3 service.
-
Create a TrafficPolicy resource named simple in the gloo-mesh project of the management cluster.
cat << EOF | oc apply -f -
apiVersion: networking.mesh.gloo.solo.io/v1
kind: TrafficPolicy
metadata:
  namespace: gloo-mesh
  name: simple
spec:
  sourceSelector:
  - kubeWorkloadMatcher:
      namespaces:
      - bookinfo
  destinationSelector:
  - kubeServiceRefs:
      services:
      - clusterName: ${REMOTE_CLUSTER1}
        name: reviews
        namespace: bookinfo
  policy:
    trafficShift:
      destinations:
      - kubeService:
          clusterName: ${REMOTE_CLUSTER2}
          name: reviews
          namespace: bookinfo
          subset:
            version: v3
        weight: 75
      - kubeService:
          clusterName: ${REMOTE_CLUSTER1}
          name: reviews
          namespace: bookinfo
          subset:
            version: v1
        weight: 15
      - kubeService:
          clusterName: ${REMOTE_CLUSTER1}
          name: reviews
          namespace: bookinfo
          subset:
            version: v2
        weight: 10
EOF
-
In the http://$CLUSTER_1_INGRESS_ADDRESS/productpage page in your web browser, refresh the page a few times again. Now, the red stars for reviews-v3 are shown in the book reviews.

Bookinfo services in cluster1 are now successfully accessing the Bookinfo services in cluster2!
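Behind the scenes, Gloo Mesh translates the TrafficPolicy into Istio routing configuration on the registered clusters. If you are curious, you can list the translated Istio resources in the bookinfo project; this is only a sketch, and the exact names of the generated resources are managed by Gloo Mesh.
oc --context $REMOTE_CONTEXT1 -n bookinfo get virtualservices,destinationrules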
Step 6: Launch the Gloo Mesh Enterprise dashboard
The Gloo Mesh Enterprise dashboard provides a single pane of glass through which you can observe the status of your service meshes, workloads, and services that run across all of your clusters. You can also view the policies that configure the behavior of your network.

-
Access the Gloo Mesh Enterprise dashboard.
meshctl dashboard
-
Click through the tabs on the dashboard navigation, such as the Overview, Meshes, and Policies tabs, to visualize and check the health of your Gloo Mesh environment. For example, click the Graph tab to see the visualization of 75% of traffic flowing to reviews-v3, 15% to reviews-v1, and 10% to reviews-v2, as defined by your traffic policy.
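If meshctl dashboard cannot open a browser, for example when you run it on a remote machine, you can port-forward the dashboard service instead. This sketch assumes the default dashboard service name shown in the pod list earlier and the default dashboard port of 8090; adjust both if your installation differs.
oc --context $MGMT_CONTEXT -n gloo-mesh port-forward svc/dashboard 8090:8090
# Then open http://localhost:8090 in your browser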
To learn more about what you can do with the dashboard, see the dashboard guide.
Next steps
Now that you have Gloo Mesh Enterprise up and running, check out some of the following resources to learn more about Gloo Mesh or try other Gloo Mesh features.
- Browse the complete set of Gloo Mesh guides to try out some of Gloo Mesh Enterprise's features.
- Check out the setup guide for advanced installation and cluster registration options.
- Talk to an expert to get advice or build out a proof of concept.
- Join the #gloo-mesh channel in the Solo.io community Slack.
- Try out one of the Gloo Mesh workshops.
Cleanup
If you no longer need this quick-start Gloo Mesh environment, you can deregister remote clusters, uninstall management components from the management cluster, and uninstall Istio resources from the remote clusters.
Deregister remote clusters
-
Uninstall the enterprise-agent Helm chart that runs on cluster1 and cluster2.
helm uninstall enterprise-agent -n gloo-mesh --kube-context $REMOTE_CONTEXT1
helm uninstall enterprise-agent -n gloo-mesh --kube-context $REMOTE_CONTEXT2
-
Delete the corresponding KubernetesCluster resources from the management cluster.
oc delete kubernetescluster $REMOTE_CLUSTER1 $REMOTE_CLUSTER2 -n gloo-mesh
-
Delete the Custom Resource Definitions (CRDs) that were installed on cluster1 and cluster2 during registration.
for crd in $(oc get crd --context $REMOTE_CONTEXT1 | grep mesh.gloo | awk '{print $1}'); do oc --context $REMOTE_CONTEXT1 delete crd $crd; done
for crd in $(oc get crd --context $REMOTE_CONTEXT2 | grep mesh.gloo | awk '{print $1}'); do oc --context $REMOTE_CONTEXT2 delete crd $crd; done
-
Delete the gloo-mesh project from cluster1 and cluster2.
oc --context $REMOTE_CONTEXT1 delete project gloo-mesh
oc --context $REMOTE_CONTEXT2 delete project gloo-mesh
Uninstall management components
Uninstall the Gloo Mesh management components from the management cluster.
-
Uninstall the Gloo Mesh management plane components.
helm uninstall gloo-mesh-enterprise -n gloo-mesh --kube-context $MGMT_CONTEXT
-
Delete the Gloo Mesh CRDs.
for crd in $(oc get crd | grep mesh.gloo | awk '{print $1}'); do oc delete crd $crd; done
-
Delete the gloo-mesh project.
oc delete project gloo-mesh
Uninstall Bookinfo and Istio
Uninstall Bookinfo resources and Istio from each remote cluster.
-
Uninstall Bookinfo from cluster1.
# remove the sidecar injection label from the bookinfo namespace
kubectl --context $REMOTE_CONTEXT1 label namespace bookinfo istio-injection-
# remove bookinfo application components for all versions less than v3
kubectl --context $REMOTE_CONTEXT1 delete -n bookinfo -f https://raw.githubusercontent.com/istio/istio/{{< readfile file="static/content/version_istio.txt" markdown="true">}}/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app,version notin (v3)'
# remove all bookinfo service accounts
kubectl --context $REMOTE_CONTEXT1 delete -n bookinfo -f https://raw.githubusercontent.com/istio/istio/{{< readfile file="static/content/version_istio.txt" markdown="true">}}/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account'
# remove ingress gateway configuration for accessing bookinfo
kubectl --context $REMOTE_CONTEXT1 delete -n bookinfo -f https://raw.githubusercontent.com/istio/istio/{{< readfile file="static/content/version_istio.txt" markdown="true">}}/samples/bookinfo/networking/bookinfo-gateway.yaml
-
Uninstall Istio and delete the istio-system project from cluster1.
istioctl --context $REMOTE_CONTEXT1 x uninstall --purge
oc --context $REMOTE_CONTEXT1 delete project istio-system
-
Uninstall Bookinfo from cluster2.
# remove the sidecar injection label from the bookinfo namespace
kubectl --context $REMOTE_CONTEXT2 label namespace bookinfo istio-injection-
# remove reviews and ratings services
kubectl --context $REMOTE_CONTEXT2 -n bookinfo delete -f https://raw.githubusercontent.com/istio/istio/{{< readfile file="static/content/version_istio.txt" markdown="true">}}/samples/bookinfo/platform/kube/bookinfo.yaml -l 'service in (reviews)'
# remove reviews-v3
kubectl --context $REMOTE_CONTEXT2 -n bookinfo delete -f https://raw.githubusercontent.com/istio/istio/{{< readfile file="static/content/version_istio.txt" markdown="true">}}/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app in (reviews),version in (v3)'
# remove ratings
kubectl --context $REMOTE_CONTEXT2 -n bookinfo delete -f https://raw.githubusercontent.com/istio/istio/{{< readfile file="static/content/version_istio.txt" markdown="true">}}/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app in (ratings)'
# remove reviews and ratings service accounts
kubectl --context $REMOTE_CONTEXT2 -n bookinfo delete -f https://raw.githubusercontent.com/istio/istio/{{< readfile file="static/content/version_istio.txt" markdown="true">}}/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account in (reviews, ratings)'
-
Uninstall Istio and delete the istio-system project from cluster2.
istioctl --context $REMOTE_CONTEXT2 x uninstall --purge
oc --context $REMOTE_CONTEXT2 delete project istio-system
-
Revoke the elevated user ID permissions that you granted to the Istio and Bookinfo service accounts.
oc --context $REMOTE_CONTEXT1 adm policy remove-scc-from-group anyuid system:serviceaccounts:istio-system
oc --context $REMOTE_CONTEXT1 adm policy remove-scc-from-group anyuid system:serviceaccounts:istio-operator
oc --context $REMOTE_CONTEXT1 adm policy remove-scc-from-group anyuid system:serviceaccounts:bookinfo
oc --context $REMOTE_CONTEXT2 adm policy remove-scc-from-group anyuid system:serviceaccounts:istio-system
oc --context $REMOTE_CONTEXT2 adm policy remove-scc-from-group anyuid system:serviceaccounts:istio-operator
oc --context $REMOTE_CONTEXT2 adm policy remove-scc-from-group anyuid system:serviceaccounts:bookinfo
-
Delete the NetworkAttachmentDefinition resources.
oc --context $REMOTE_CONTEXT1 -n bookinfo delete network-attachment-definition istio-cni
oc --context $REMOTE_CONTEXT2 -n bookinfo delete network-attachment-definition istio-cni
About the Gloo Mesh-managed certificates in your POC installation
If you install Gloo Mesh Enterprise for exploration, testing, or proof-of-concept purposes, you can use the default root CA certificate and intermediate signing CAs that are automatically generated and self-signed by Gloo Mesh to secure communication between the management and data planes. Certificates are used by relay agents in remote clusters to secure communication with the relay server, and by Istio deployments to assign certificates to workload pods in the service mesh.
The root CA certificates and unencrypted keys that Gloo Mesh Enterprise autogenerates are stored in Kubernetes secrets. Using the autogenerated certificates is not recommended for production use. For more information about certificates for production setups, see Certificate management.
Autogenerated root CA certificate and intermediate CA for relay agents
To secure communication between the management and data planes, relay agents (enterprise-agent
) in remote clusters use server/client mTLS certificates to secure communication with the relay server (enterprise-networking
) in the management cluster.
By default, the Gloo Mesh Enterprise Helm chart autogenerates its own root CA certificate and intermediate signing CA for issuing the server and client certificates.
- The relay-root-tls-secret secret for the root CA certificate is stored in the gloo-mesh project of the management cluster.
- The relay-tls-signing-secret secret for the relay server certificate is stored in the gloo-mesh project of the management cluster.
- The relay-client-tls-secret secret for the relay agent certificate is stored in the gloo-mesh project of each remote cluster.
When a remote cluster is registered with Gloo Mesh Enterprise, the initial setup of a secure communication channel between the relay server and the relay agent follows this flow:
- To validate authenticity, the relay agent uses simple TLS to transmit a token value, which is defined in relay-identity-token-secret on the remote cluster, to the relay server.
- The token must match the value stored in relay-identity-token-secret on the management cluster, which is created during deployment of the relay server.
- When the token is validated, the relay server generates a TLS client certificate for the relay agent.
- The relay agent saves the client certificate in the relay-client-tls-secret.
- All future communication from relay agents to the server, which uses the gRPC protocol, is secured by using mTLS provided by this certificate.
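To see the resulting client certificate on a remote cluster, you can decode it with openssl. This is a sketch that assumes the secret follows the standard Kubernetes TLS secret layout with a tls.crt key; adjust the key name if your secret differs.
oc --context $REMOTE_CONTEXT1 -n gloo-mesh get secret relay-client-tls-secret -o jsonpath='{.data.tls\.crt}' \
  | base64 --decode | openssl x509 -noout -subject -issuer -dates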
Autogenerated root CA certificate for Istio
The Istio deployment in each remote cluster requires a certificate authority (CA) certificate in the cacerts
Kubernetes secret in the istio-system
project.
Gloo Mesh Enterprise uses a VirtualMesh resource to configure the relay server (enterprise-networking
) to generate and self-sign a root CA certificate. This root CA certificate can sign Istio intermediate CA certificates whenever an Istio deployment in a remote cluster must create a certificate for a workload pod in its service mesh. Gloo Mesh stores the signed intermediate CA certificate in the cacerts
Kubernetes secret in the istio-system
project of the remote cluster.
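You can inspect the intermediate CA that Gloo Mesh wrote for Istio in a remote cluster. This sketch assumes that the cacerts secret uses the standard Istio plugin CA key names, such as ca-cert.pem.
oc --context $REMOTE_CONTEXT1 -n istio-system get secret cacerts -o jsonpath='{.data.ca-cert\.pem}' \
  | base64 --decode | openssl x509 -noout -subject -issuer -dates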
Example VirtualMesh resource to autogenerate a root CA to sign intermediate CA certificates for each Istio deployment:
apiVersion: networking.mesh.gloo.solo.io/v1
kind: VirtualMesh
metadata:
name: virtual-mesh
namespace: gloo-mesh
spec:
# specify Gloo Mesh certificate policy
mtlsConfig:
# When a new intermediate CA certificate is signed for Istio,
# a restart is required of all pods in the mesh to receive new certificates.
# Note: Do NOT use this autoRestartPods setting in production!
autoRestartPods: true
shared:
# root ca config
rootCertificateAuthority:
# autogenerate root CA certificate
generated: {}
federation:
selectors:
- {}
meshes:
- name: istiod-istio-system-cluster1
namespace: gloo-mesh
- name: istiod-istio-system-cluster2
namespace: gloo-mesh
Saving the autogenerated certificates
The root CA certificates and unencrypted keys that Gloo Mesh Enterprise autogenerates are stored in Kubernetes secrets. If the secrets are deleted, you must regenerate new certificates for all relay agents and Istio deployments. Be sure to download them and store them in a safe location in the event of an accidental deletion.
-
From the management cluster, download the root certificate from the relay-root-tls-secret secret.
mkdir relay-root-tls-secret
kubectl get secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.ca\.crt}' relay-root-tls-secret | base64 --decode > relay-root-tls-secret/ca.crt
kubectl get secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.tls\.key}' relay-root-tls-secret | base64 --decode > relay-root-tls-secret/tls.key
-
Download the relay signing certificate from the relay-tls-signing-secret secret.
mkdir relay-tls-signing-secret
kubectl get secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.ca\.crt}' relay-tls-signing-secret | base64 --decode > relay-tls-signing-secret/ca.crt
kubectl get secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.tls\.key}' relay-tls-signing-secret | base64 --decode > relay-tls-signing-secret/tls.key
-
Download the Istio root certificate from each VirtualMesh resource.
VIRTUAL_MESH_NAME=gloo-mesh
SECRET_NAME=virtual-mesh.$VIRTUAL_MESH_NAME
mkdir $SECRET_NAME
kubectl get secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.root-cert\.pem}' $SECRET_NAME | base64 --decode > $SECRET_NAME/ca.crt
kubectl get secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.key\.pem}' $SECRET_NAME | base64 --decode > $SECRET_NAME/tls.key
-
In the event that the secrets are deleted, you can use these downloaded ca.crt and tls.key pairs to recreate the secrets in the management cluster. For example, if the relay-root-tls-secret is deleted, you can recreate the secret by running the following:
kubectl -n gloo-mesh --context $MGMT_CONTEXT create secret tls relay-root-tls-secret \
  --cert=relay-root-tls-secret/ca.crt \
  --key=relay-root-tls-secret/tls.key
Insecure installations
If you prefer to set up Gloo Mesh without secure communication for quick demonstrations, you can install Gloo Mesh and register remote clusters in insecure mode. Insecure mode means that no certificates are generated to secure the connection between the Gloo Mesh server and agent components. Instead, the connection uses HTTP.
- In the helm install gloo-mesh-enterprise command to create the Gloo Mesh management plane, include the --set enterprise-networking.global.insecure=true flag.
- In the meshctl cluster register command to register a remote cluster, include the --relay-server-insecure=true flag.
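Putting the two flags together, an insecure demo installation and registration might look like the following sketch, with all other flags the same as in the steps earlier in this guide.
helm install gloo-mesh-enterprise gloo-mesh-enterprise/gloo-mesh-enterprise \
  --kube-context $MGMT_CONTEXT -n gloo-mesh \
  --set licenseKey=${GLOO_MESH_LICENSE_KEY} \
  --set enterprise-networking.global.insecure=true

meshctl cluster register enterprise \
  --mgmt-context=$MGMT_CONTEXT \
  --remote-context=$REMOTE_CONTEXT1 \
  --relay-server-address $ENTERPRISE_NETWORKING_ADDRESS \
  --relay-server-insecure=true \
  $REMOTE_CLUSTER1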