Get started on OpenShift

Quickly get started with Gloo Mesh Enterprise by deploying a demo environment to your OpenShift clusters.

With this guide, you can use a managed OpenShift environment, such as clusters in Google Cloud Platform (GCP) or Amazon Web Services (AWS), to install Gloo Mesh Enterprise in a management cluster, register remote clusters, and try out multicluster traffic management. The following figure depicts the multi-mesh architecture created by this quick-start setup.

Figure of a three-cluster Gloo Mesh quick-start architecture.

This quick start guide creates a setup that you can use for testing purposes across three clusters. To set up a production-level deployment, see the Setup guide instead.

Before you begin

  1. Install the following CLI tools.

    • istioctl, the Istio command line tool. The resources in the guide use Istio version 1.11.4.
    • helm, the Kubernetes package manager.
    • oc, the OpenShift command line tool. Download the oc version that matches the minor version of the OpenShift clusters that you plan to use with Gloo Mesh.
    • meshctl, the Gloo Mesh command line tool for bootstrapping Gloo Mesh, registering clusters, describing configured resources, and more.
  2. Create three OpenShift clusters. In this guide, the cluster names mgmt-cluster, cluster-1, and cluster-2 are used. The mgmt-cluster serves as the management cluster, and cluster-1 and cluster-2 serve as the remote clusters in this setup.

  3. Set the names of your clusters from your infrastructure provider as environment variables. If your clusters have different names, specify those names instead.

    export MGMT_CLUSTER=mgmt-cluster
    export REMOTE_CLUSTER1=cluster-1
    export REMOTE_CLUSTER2=cluster-2
    
  4. Save the kubeconfig contexts for your clusters. Run kubectl config get-contexts, look for the cluster name in the CLUSTER column, and get the context name in the NAME column.
    export MGMT_CONTEXT=<management-cluster-context>
    export REMOTE_CONTEXT1=<remote-cluster-1-context>
    export REMOTE_CONTEXT2=<remote-cluster-2-context>
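
    If your kubeconfig context names contain the cluster names, you can optionally set the context variables with a lookup such as the following sketch. Verify the values with echo before you continue.

    export MGMT_CONTEXT=$(kubectl config get-contexts -o name | grep $MGMT_CLUSTER)
    export REMOTE_CONTEXT1=$(kubectl config get-contexts -o name | grep $REMOTE_CLUSTER1)
    export REMOTE_CONTEXT2=$(kubectl config get-contexts -o name | grep $REMOTE_CLUSTER2)
    echo $MGMT_CONTEXT $REMOTE_CONTEXT1 $REMOTE_CONTEXT2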
    
  5. Set the Gloo Mesh Enterprise license key that you got from your Solo account representative. If you do not have a key yet, contact an account representative for a trial license.

    export GLOO_MESH_LICENSE_KEY=<license_key>
    
  6. Set the Istio version. This guide uses version 1.11.4 as an example. If you downloaded a different version, make sure to specify that version instead.

    export ISTIO_VERSION=1.11.4
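
    Because sidecar injection fails later if your istioctl client version does not match this version (see the troubleshooting note in Step 5), you can optionally confirm the client version now:

    istioctl version --remote=false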
    

Step 1: Install Istio in the remote clusters

Install an Istio service mesh into both remote clusters. Later in this guide, you register these clusters with the Gloo Mesh Enterprise management plane so that Gloo Mesh can discover and configure Istio workloads running in these registered clusters. By installing Istio into remote clusters before you install the Gloo Mesh management plane, your Istio service meshes can be immediately discovered when you register the remote clusters.

Note that the following Istio installation profiles are provided for their simplicity, but Gloo Mesh can discover and manage Istio deployments regardless of their installation options. Additionally, to configure multicluster traffic routing later in this guide, ensure that the Istio deployment on each cluster has an externally accessible ingress gateway.

For more information, see the Istio documentation for OpenShift installation.

  1. Elevate the permissions of the service accounts in the istio-system and istio-operator namespaces that will be created in cluster-1 and cluster-2. These permissions allow the Istio pods to use a user ID that is normally restricted by OpenShift.

    oc --context $REMOTE_CONTEXT1 adm policy add-scc-to-group anyuid system:serviceaccounts:istio-system
    oc --context $REMOTE_CONTEXT1 adm policy add-scc-to-group anyuid system:serviceaccounts:istio-operator
    oc --context $REMOTE_CONTEXT2 adm policy add-scc-to-group anyuid system:serviceaccounts:istio-system
    oc --context $REMOTE_CONTEXT2 adm policy add-scc-to-group anyuid system:serviceaccounts:istio-operator
    
  2. Install Istio in cluster-1.

    
    CLUSTER_NAME=$REMOTE_CLUSTER1
    cat << EOF | istioctl install -y --context $REMOTE_CONTEXT1 -f -
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      name: gloo-mesh-demo
      namespace: istio-system
    spec:
      # OpenShift-specific installation profile (https://istio.io/latest/docs/setup/additional-setup/config-profiles/)
      profile: openshift
      # Solo.io Istio distribution repository
      hub: gcr.io/istio-enterprise
      # Solo.io Gloo Mesh Istio tag
      tag: ${ISTIO_VERSION}
    
      meshConfig:
        # enable access logging to standard output
        accessLogFile: /dev/stdout
    
        defaultConfig:
          # wait for the istio-proxy to start before application pods
          holdApplicationUntilProxyStarts: true
          # enable Gloo Mesh metrics service (required for Gloo Mesh Dashboard)
          envoyMetricsService:
            address: enterprise-agent.gloo-mesh:9977
          # enable the Gloo Mesh access log service (required for Gloo Mesh Access Logging)
          envoyAccessLogService:
            address: enterprise-agent.gloo-mesh:9977
          proxyMetadata:
            # Enable Istio agent to handle DNS requests for known hosts
            # Unknown hosts will automatically be resolved using upstream dns servers in resolv.conf
            # (for proxy-dns)
            ISTIO_META_DNS_CAPTURE: "true"
            # Enable automatic address allocation (for proxy-dns)
            ISTIO_META_DNS_AUTO_ALLOCATE: "true"
            # Used for gloo mesh metrics aggregation
            # should match trustDomain (required for Gloo Mesh Dashboard)
            GLOO_MESH_CLUSTER_NAME: ${CLUSTER_NAME}
    
        # Set the default behavior of the sidecar for handling outbound traffic from the application.
        outboundTrafficPolicy:
          mode: ALLOW_ANY
        # The trust domain corresponds to the trust root of a system. 
        # For Gloo Mesh, this should be the name of the cluster that corresponds with the CA certificate CommonName identity
        trustDomain: ${CLUSTER_NAME}
      components:
        ingressGateways:
        # enable the default ingress gateway
        - name: istio-ingressgateway
          enabled: true
          k8s:
            service:
              type: LoadBalancer
              ports:
                # health check port (required to be first for aws elbs)
                - name: status-port
                  port: 15021
                  targetPort: 15021
                # main http ingress port
                - port: 80
                  targetPort: 8080
                  name: http2
                # main https ingress port
                - port: 443
                  targetPort: 8443
                  name: https
                # Port for gloo-mesh multi-cluster mTLS passthrough (Required for Gloo Mesh east/west routing)
                - port: 15443
                  targetPort: 15443
                  # Gloo Mesh looks for this default name 'tls' on an ingress gateway
                  name: tls
        pilot:
          k8s:
            env:
              # Allow multiple trust domains (Required for Gloo Mesh east/west routing)
              - name: PILOT_SKIP_VALIDATE_TRUST_DOMAIN
                value: "true"
      values:
        # https://istio.io/v1.5/docs/reference/config/installation-options/#global-options
        global:
          # needed for connecting VirtualMachines to the mesh
          network: ${CLUSTER_NAME}
          # needed for annotating istio metrics with cluster (should match trust domain and GLOO_MESH_CLUSTER_NAME)
          multiCluster:
            clusterName: ${CLUSTER_NAME}
    EOF
    

    Example output:

    ✔ Istio core installed
    ✔ Istiod installed
    ✔ Ingress gateways installed
    ✔ Installation complete
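
    Optionally, verify that the Istio control plane and ingress gateway pods are running in cluster-1 before you continue:

    oc --context $REMOTE_CONTEXT1 get pods -n istio-system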
    
  3. Install Istio in cluster-2.

    
    CLUSTER_NAME=$REMOTE_CLUSTER2
    cat << EOF | istioctl install -y --context $REMOTE_CONTEXT2 -f -
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      name: gloo-mesh-demo
      namespace: istio-system
    spec:
      # OpenShift-specific installation profile (https://istio.io/latest/docs/setup/additional-setup/config-profiles/)
      profile: openshift
      # Solo.io Istio distribution repository
      hub: gcr.io/istio-enterprise
      # Solo.io Gloo Mesh Istio tag
      tag: ${ISTIO_VERSION}
    
      meshConfig:
        # enable access logging to standard output
        accessLogFile: /dev/stdout
    
        defaultConfig:
          # wait for the istio-proxy to start before application pods
          holdApplicationUntilProxyStarts: true
          # enable Gloo Mesh metrics service (required for Gloo Mesh Dashboard)
          envoyMetricsService:
            address: enterprise-agent.gloo-mesh:9977
          # enable the Gloo Mesh access log service (required for Gloo Mesh Access Logging)
          envoyAccessLogService:
            address: enterprise-agent.gloo-mesh:9977
          proxyMetadata:
            # Enable Istio agent to handle DNS requests for known hosts
            # Unknown hosts will automatically be resolved using upstream dns servers in resolv.conf
            # (for proxy-dns)
            ISTIO_META_DNS_CAPTURE: "true"
            # Enable automatic address allocation (for proxy-dns)
            ISTIO_META_DNS_AUTO_ALLOCATE: "true"
            # Used for gloo mesh metrics aggregation
            # should match trustDomain (required for Gloo Mesh Dashboard)
            GLOO_MESH_CLUSTER_NAME: ${CLUSTER_NAME}
    
        # Set the default behavior of the sidecar for handling outbound traffic from the application.
        outboundTrafficPolicy:
          mode: ALLOW_ANY
        # The trust domain corresponds to the trust root of a system. 
        # For Gloo Mesh, this should be the name of the cluster that corresponds with the CA certificate CommonName identity
        trustDomain: ${CLUSTER_NAME}
      components:
        ingressGateways:
        # enable the default ingress gateway
        - name: istio-ingressgateway
          enabled: true
          k8s:
            service:
              type: LoadBalancer
              ports:
                # health check port (required to be first for aws elbs)
                - name: status-port
                  port: 15021
                  targetPort: 15021
                # main http ingress port
                - port: 80
                  targetPort: 8080
                  name: http2
                # main https ingress port
                - port: 443
                  targetPort: 8443
                  name: https
                # Port for gloo-mesh multi-cluster mTLS passthrough (Required for Gloo Mesh east/west routing)
                - port: 15443
                  targetPort: 15443
                  # Gloo Mesh looks for this default name 'tls' on an ingress gateway
                  name: tls
        pilot:
          k8s:
            env:
              # Allow multiple trust domains (Required for Gloo Mesh east/west routing)
              - name: PILOT_SKIP_VALIDATE_TRUST_DOMAIN
                value: "true"
      values:
        # https://istio.io/v1.5/docs/reference/config/installation-options/#global-options
        global:
          # needed for connecting VirtualMachines to the mesh
          network: ${CLUSTER_NAME}
          # needed for annotating istio metrics with cluster (should match trust domain and GLOO_MESH_CLUSTER_NAME)
          multiCluster:
            clusterName: ${CLUSTER_NAME}
    EOF
    

  4. Expose the istio-ingressgateway load balancer on each cluster by using an OpenShift route.

    oc --context $REMOTE_CONTEXT1 -n istio-system expose svc/istio-ingressgateway --port=http2
    oc --context $REMOTE_CONTEXT2 -n istio-system expose svc/istio-ingressgateway --port=http2
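
    Optionally, confirm that the routes were created:

    oc --context $REMOTE_CONTEXT1 -n istio-system get routes
    oc --context $REMOTE_CONTEXT2 -n istio-system get routes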
    

To prevent issues during cluster registration in subsequent steps, do not label the gloo-mesh namespace in each remote cluster for automatic Istio injection. To ensure the namespace is not labeled for injection, run oc label namespace gloo-mesh istio-injection=disabled --overwrite.

Step 2: Install Gloo Mesh Enterprise in the management cluster

Install the Gloo Mesh Enterprise management components into your management cluster. The management components serve as the control plane where you define all service mesh configurations that you want Gloo Mesh to enforce across clusters and service meshes. The control plane also aggregates all of the discovered Istio service mesh components into the simplified Gloo Mesh API Mesh, Workload, and Destination custom resources.

Note that this guide uses Helm to install a minimal deployment of Gloo Mesh Enterprise for testing purposes, and some optional components are not installed. For example, the rbac-webhook component is disabled in the installation command in this guide.

To learn more about these installation options, including advanced configuration options available in the Gloo Mesh Enterprise Helm chart, see the Setup guide.

  1. Switch to the management cluster context.

    oc config use-context $MGMT_CONTEXT
    
  2. Add and update the gloo-mesh-enterprise Helm repository.

    helm repo add gloo-mesh-enterprise https://storage.googleapis.com/gloo-mesh-enterprise/gloo-mesh-enterprise
    helm repo update
    
  3. Save the gloo-mesh namespace in an environment variable, and create the namespace. If you want to install Gloo Mesh into a different namespace, specify that namespace instead.

    INSTALL_NAMESPACE=gloo-mesh
    oc create namespace $INSTALL_NAMESPACE
    
  4. Install Gloo Mesh Enterprise in your management cluster.

    helm install gloo-mesh-enterprise gloo-mesh-enterprise/gloo-mesh-enterprise --kube-context $MGMT_CONTEXT -n $INSTALL_NAMESPACE \
    --set licenseKey=${GLOO_MESH_LICENSE_KEY} \
    --set rbac-webhook.enabled=false \
    --set enterprise-networking.prometheus.server.securityContext=false \
    --set enterprise-networking.enterpriseNetworking.floatingUserId=true \
    --set gloo-mesh-ui.dashboard.floatingUserId=true \
    --set gloo-mesh-ui.redis-dashboard.redisDashboard.floatingUserId=true
    
    By default, self-signed certificates are used to secure communication between the management and data planes. If you prefer to set up Gloo Mesh without secure communication for quick demonstrations, include the --set enterprise-networking.global.insecure=true flag. For more information, see Gloo Mesh-managed certificates for POC installations.
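
    If you want to review or pin the chart version, you can list the available chart versions and check the installed release; for example:

    # list the available chart versions
    helm search repo gloo-mesh-enterprise/gloo-mesh-enterprise --versions
    # confirm the installed release and its chart version
    helm list -n $INSTALL_NAMESPACE --kube-context $MGMT_CONTEXT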
  5. Verify that the management components have a status of Running.

    oc get pods -n gloo-mesh
    

    Example output:

    NAME                                     READY   STATUS    RESTARTS   AGE
    dashboard-749dc7875c-4z77k               3/3     Running   0          41s
    enterprise-networking-778d45c7b5-5d9nh   1/1     Running   0          41s
    prometheus-server-86854b778-r6r52        2/2     Running   0          41s
    redis-dashboard-844dc4f9-jnb4j           1/1     Running   0          41s
    
  6. Verify that the management plane is correctly installed. This check might take a few seconds to complete.

    meshctl check server --kubecontext $MGMT_CONTEXT
    

    Note that because no remote clusters are registered yet, the agent connectivity check returns a warning.

    Gloo Mesh Management Cluster Installation
    
    🟢 Gloo Mesh Pods Status
    
    🟡 Gloo Mesh Agents Connectivity
       Hints:
       * No registered clusters detected. To register a remote cluster that has a deployed Gloo Mesh agent, add a KubernetesCluster CR.
          For more info, see: https://docs.solo.io/gloo-mesh/latest/setup/enterprise_cluster_registration/
    
    Management Configuration
    2021-10-08T17:33:05.871382Z	info	klog	apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
    
    🟢 Gloo Mesh CRD Versions
    
    🟢 Gloo Mesh Networking Configuration Resources
    

Step 3: Register remote clusters

Register your remote clusters with the Gloo Mesh management plane.

When you installed Gloo Mesh Enterprise in the management cluster, a deployment named enterprise-networking was created to run the relay server. The relay server is exposed by the enterprise-networking LoadBalancer service. When you register remote clusters to be managed by Gloo Mesh Enterprise, a deployment named enterprise-agent is created on each remote cluster to run a relay agent. Each relay agent is exposed by an enterprise-agent ClusterIP service, from which all communication is outbound to the relay server on the management cluster. For more information about relay server-agent communication, see the Architecture page.

Cluster registration also creates a KubernetesCluster custom resource on the management cluster to represent the remote cluster and store relevant data, such as the remote cluster's local domain (“cluster.local”). To learn more about cluster registration and how to register clusters with Helm rather than meshctl, review the cluster registration guide.
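
For reference, the registration process creates a resource similar to the following sketch. The field values are illustrative, and the exact apiVersion can vary by Gloo Mesh version.

apiVersion: multicluster.solo.io/v1alpha1
kind: KubernetesCluster
metadata:
  name: cluster-1
  namespace: gloo-mesh
spec:
  # local domain of the registered cluster
  clusterDomain: cluster.local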

  1. In the management cluster, find the external address that was assigned by your cloud provider to the enterprise-networking LoadBalancer service. When you register the remote clusters in subsequent steps, the enterprise-agent relay agent in each cluster accesses this address via a secure connection. Note that it might take a few minutes for your cloud provider to assign an external address to the LoadBalancer.

    
    If your cloud provider assigns an IP address to the load balancer (such as clusters in GCP), run:

       ENTERPRISE_NETWORKING_DOMAIN=$(oc get svc -n gloo-mesh enterprise-networking -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
       ENTERPRISE_NETWORKING_PORT=$(oc -n gloo-mesh get service enterprise-networking -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
       ENTERPRISE_NETWORKING_ADDRESS=${ENTERPRISE_NETWORKING_DOMAIN}:${ENTERPRISE_NETWORKING_PORT}
       echo $ENTERPRISE_NETWORKING_ADDRESS

    If your cloud provider assigns a hostname to the load balancer (such as AWS ELBs), run:

       ENTERPRISE_NETWORKING_DOMAIN=$(oc get svc -n gloo-mesh enterprise-networking -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
       ENTERPRISE_NETWORKING_PORT=$(oc -n gloo-mesh get service enterprise-networking -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
       ENTERPRISE_NETWORKING_ADDRESS=${ENTERPRISE_NETWORKING_DOMAIN}:${ENTERPRISE_NETWORKING_PORT}
       echo $ENTERPRISE_NETWORKING_ADDRESS

  2. Create a Helm values file to ensure that the enterprise-agent Helm chart uses floatingUserId.

    cat > /tmp/enterprise-agent-values.yaml << EOF
    enterpriseAgent:
      floatingUserId: true
    EOF
    
  3. Register cluster-1 with the management plane. If you installed the management plane insecurely, include the --relay-server-insecure=true flag in this command.

    meshctl cluster register enterprise \
      --mgmt-context=$MGMT_CONTEXT \
      --remote-context=$REMOTE_CONTEXT1 \
      --relay-server-address $ENTERPRISE_NETWORKING_ADDRESS \
      --enterprise-agent-chart-values /tmp/enterprise-agent-values.yaml \
      $REMOTE_CLUSTER1
    

    Example output:

    Registering cluster
    📃 Copying root CA relay-root-tls-secret.gloo-mesh to remote cluster from management cluster
    📃 Copying bootstrap token relay-identity-token-secret.gloo-mesh to remote cluster from management cluster
    💻 Installing relay agent in the remote cluster
    Finished installing chart 'enterprise-agent' as release gloo-mesh:enterprise-agent
    📃 Creating remote.cluster KubernetesCluster CRD in management cluster
    ⌚ Waiting for relay agent to have a client certificate
           Checking...
           Checking...
    🗑 Removing bootstrap token
    ✅ Done registering cluster!
    
  4. Register cluster-2 with the management plane. If you installed the management plane insecurely, include the --relay-server-insecure=true flag in this command.

    meshctl cluster register enterprise \
      --mgmt-context=$MGMT_CONTEXT \
      --remote-context=$REMOTE_CONTEXT2 \
      --relay-server-address $ENTERPRISE_NETWORKING_ADDRESS \
      --enterprise-agent-chart-values /tmp/enterprise-agent-values.yaml \
      $REMOTE_CLUSTER2
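
    Optionally, confirm that the enterprise-agent relay agent pod is running in each remote cluster:

    oc --context $REMOTE_CONTEXT1 get pods -n gloo-mesh
    oc --context $REMOTE_CONTEXT2 get pods -n gloo-mesh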
    
  5. Verify that each remote cluster is successfully registered with Gloo Mesh.

    oc get kubernetescluster -n gloo-mesh
    

    Example output:

    NAME           AGE
    cluster-1      27s
    cluster-2      23s
    
  6. Verify that Gloo Mesh successfully discovered the Istio service meshes in each remote cluster.

    oc get mesh -n gloo-mesh
    

    Example output:

    NAME                            AGE
    istiod-istio-system-cluster-1   68s
    istiod-istio-system-cluster-2   28s
    

Now that Gloo Mesh Enterprise is installed in the management cluster, the remote clusters are registered with the management plane, and the Istio meshes in the remote clusters are discovered by Gloo Mesh, your Gloo Mesh Enterprise setup is complete! To try out some of Gloo Mesh Enterprise's features, continue with the following sections to configure Gloo Mesh for a multicluster use case.

Step 4: Create a virtual mesh

Bootstrap connectivity between the Istio service meshes in each remote cluster by creating a VirtualMesh. When you create a VirtualMesh resource in the management cluster, each service mesh in the remote clusters is configured with certificates that share a common root of trust. After trust is established, the virtual mesh configuration facilitates cross-cluster communications by federating services so that they are accessible across clusters. To learn more, see the concepts documentation.

  1. Create a VirtualMesh resource named virtual-mesh in the gloo-mesh namespace of the management cluster.

    
    oc apply -f - << EOF
    apiVersion: networking.mesh.gloo.solo.io/v1
    kind: VirtualMesh
    metadata:
      name: virtual-mesh
      namespace: gloo-mesh
    spec:
      mtlsConfig:
        # Note: Do NOT use this autoRestartPods setting in production!!
        autoRestartPods: true
        shared:
          rootCertificateAuthority:
            generated: {}
      federation:
        # federate all Destinations to all external meshes
        selectors:
        - {}
      # Disable global access policy enforcement for demonstration purposes.
      globalAccessPolicy: DISABLED
      meshes:
      - name: istiod-istio-system-${REMOTE_CLUSTER1}
        namespace: gloo-mesh
      - name: istiod-istio-system-${REMOTE_CLUSTER2}
        namespace: gloo-mesh
    EOF
    

  2. Verify that the virtual mesh is created and your service meshes are configured for multicluster traffic.

    oc get virtualmesh -n gloo-mesh virtual-mesh -o yaml
    

    In the status section of the output, ensure that each service mesh and the virtual mesh have a state of ACCEPTED.

    ...
     status:
      deployedSharedTrust:
        rootCertificateAuthority:
          generated: {}
      destinations:
        istio-ingressgateway-istio-system-cluster-1.gloo-mesh.:
          state: ACCEPTED
        istio-ingressgateway-istio-system-cluster-2.gloo-mesh.:
          state: ACCEPTED
      meshes:
        istiod-istio-system-cluster-1.gloo-mesh.:
          state: ACCEPTED
        istiod-istio-system-cluster-2.gloo-mesh.:
          state: ACCEPTED
      observedGeneration: 1
      state: ACCEPTED
    

Step 5: Route multicluster traffic

Now that the individual Istio service meshes are unified by a single virtual mesh, use the Bookinfo sample app to see how Gloo Mesh can facilitate multicluster traffic.

Deploy Bookinfo across clusters

Start by deploying different versions of the Bookinfo sample app to both of the remote clusters. cluster-1 runs the app with versions 1 and 2 of the reviews service (reviews-v1 and reviews-v2), and cluster-2 runs version 3 of the reviews service (reviews-v3).

  1. Create a NetworkAttachmentDefinition custom resource for the default namespace of each remote cluster. In each OpenShift namespace where Istio must create workloads, a NetworkAttachmentDefinition is required.

    cat <<EOF | oc --context $REMOTE_CONTEXT1 -n default create -f -
    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: istio-cni
    EOF
    
    cat <<EOF | oc --context $REMOTE_CONTEXT2 -n default create -f -
    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: istio-cni
    EOF
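
    Optionally, verify that the NetworkAttachmentDefinition resources exist in the default namespace of each remote cluster:

    oc --context $REMOTE_CONTEXT1 -n default get network-attachment-definitions
    oc --context $REMOTE_CONTEXT2 -n default get network-attachment-definitions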
    
  2. Elevate the permissions of the service accounts in the default namespace to allow the Istio sidecars to use a user ID that is normally restricted by OpenShift.

    oc --context $REMOTE_CONTEXT1 adm policy add-scc-to-group anyuid system:serviceaccounts:default
    oc --context $REMOTE_CONTEXT2 adm policy add-scc-to-group anyuid system:serviceaccounts:default
    
  3. Install Bookinfo with the reviews-v1 and reviews-v2 services in the default namespace of cluster-1.

    # prepare the default namespace for Istio sidecar injection
    kubectl --context $REMOTE_CONTEXT1 label namespace default istio-injection=enabled
    # deploy bookinfo application components for all versions less than v3
    kubectl --context $REMOTE_CONTEXT1 apply -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app,version notin (v3)'
    # deploy all bookinfo service accounts
    kubectl --context $REMOTE_CONTEXT1 apply -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account'
    # configure the ingress gateway to access bookinfo
    kubectl --context $REMOTE_CONTEXT1 apply -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/networking/bookinfo-gateway.yaml
    
  4. Verify that the Bookinfo app is running in cluster-1.

    oc --context $REMOTE_CONTEXT1 get pods
    

    Example output:

    NAME                              READY   STATUS    RESTARTS   AGE
    details-v1-558b8b4b76-w9qp8       2/2     Running   0          2m33s
    productpage-v1-6987489c74-54lvk   2/2     Running   0          2m34s
    ratings-v1-7dc98c7588-pgsxv       2/2     Running   0          2m34s
    reviews-v1-7f99cc4496-lwtsr       2/2     Running   0          2m34s
    reviews-v2-7d79d5bd5d-mpsk2       2/2     Running   0          2m34s
    

    If your Bookinfo deployment is stuck in a pending state, you might see the following error:

    admission webhook "sidecar-injector.istio.io" denied the request: template:
          inject:1: function "Template_Version_And_Istio_Version_Mismatched_Check_Installation"
          not defined
    

    Your istioctl version does not match the IstioOperator version that was used during Istio installation. Ensure that you download the same version of istioctl, which is 1.11.4 in this example.

  5. Install Bookinfo with the reviews-v3 service in the default namespace of cluster-2.

    # prepare the default namespace for Istio sidecar injection
    kubectl --context $REMOTE_CONTEXT2 label namespace default istio-injection=enabled
    # deploy the reviews service
    kubectl --context $REMOTE_CONTEXT2 apply -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'service in (reviews)'
    # deploy reviews-v3
    kubectl --context $REMOTE_CONTEXT2 apply -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app in (reviews),version in (v3)'
    # deploy the ratings service and deployment
    kubectl --context $REMOTE_CONTEXT2 apply -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app in (ratings)'
    # deploy the reviews and ratings service accounts
    kubectl --context $REMOTE_CONTEXT2 apply -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account in (reviews, ratings)'
    
  6. Verify that the Bookinfo app is running on cluster-2.

    oc --context $REMOTE_CONTEXT2 get pods
    

    Example output:

    NAME                          READY   STATUS    RESTARTS   AGE
    ratings-v1-7dc98c7588-qbmmh   2/2     Running   0          3m11s
    reviews-v3-7dbcdcbc56-w4kbf   2/2     Running   0          3m11s
    
  7. Get the address of the Istio ingress gateway on cluster-1.

    
    If your cloud provider assigns an IP address to the load balancer, run:

       CLUSTER_1_INGRESS_ADDRESS=$(oc --context $REMOTE_CONTEXT1 get svc -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
       echo http://$CLUSTER_1_INGRESS_ADDRESS/productpage

    If your cloud provider assigns a hostname to the load balancer (such as AWS ELBs), run:

       CLUSTER_1_INGRESS_ADDRESS=$(oc --context $REMOTE_CONTEXT1 get svc -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
       echo http://$CLUSTER_1_INGRESS_ADDRESS/productpage

  8. Navigate to http://$CLUSTER_1_INGRESS_ADDRESS/productpage in a web browser.

    open http://$CLUSTER_1_INGRESS_ADDRESS/productpage
    
  9. Refresh the page a few times to see the black stars in the Book Reviews column appear and disappear. The presence of black stars represents reviews-v2, and the absence of stars represents reviews-v1.
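
    If you prefer to check from the command line, you can verify that the product page responds successfully. The following sketch prints the HTTP status code, which should be 200:

    curl -s -o /dev/null -w "%{http_code}\n" http://$CLUSTER_1_INGRESS_ADDRESS/productpage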

Shift traffic across clusters

To allow the productpage service on cluster-1 to reach reviews-v3 on cluster-2, you can use Gloo Mesh to route traffic across the remote clusters. Create a Gloo Mesh TrafficPolicy resource that shifts 75% of reviews traffic to the reviews-v3 service in cluster-2.

  1. Create a TrafficPolicy resource named simple in the gloo-mesh namespace of the management cluster.

    
    cat << EOF | oc apply -f -
    apiVersion: networking.mesh.gloo.solo.io/v1
    kind: TrafficPolicy
    metadata:
      namespace: gloo-mesh
      name: simple
    spec:
      sourceSelector:
      - kubeWorkloadMatcher:
          namespaces:
          - default
      destinationSelector:
      - kubeServiceRefs:
          services:
            - clusterName: ${REMOTE_CLUSTER1}
              name: reviews
              namespace: default
      policy:
        trafficShift:
          destinations:
            - kubeService:
                clusterName: ${REMOTE_CLUSTER2}
                name: reviews
                namespace: default
                subset:
                  version: v3
              weight: 75
            - kubeService:
                clusterName: ${REMOTE_CLUSTER1}
                name: reviews
                namespace: default
                subset:
                  version: v1
              weight: 15
            - kubeService:
                clusterName: ${REMOTE_CLUSTER1}
                name: reviews
                namespace: default
                subset:
                  version: v2
              weight: 10
    EOF
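
    Optionally, confirm that Gloo Mesh accepted the policy before checking the browser. Assuming the status format matches the VirtualMesh example above, look for a state of ACCEPTED in the status section:

    oc get trafficpolicy -n gloo-mesh simple -o yaml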
    

  2. In your web browser, refresh http://$CLUSTER_1_INGRESS_ADDRESS/productpage a few more times. Now, the red stars of reviews-v3 appear in the Book Reviews column for most requests.

Bookinfo services in cluster-1 are now successfully accessing the Bookinfo services in cluster-2!

Step 6: Launch the Gloo Mesh Enterprise dashboard

The Gloo Mesh Enterprise dashboard provides a single pane of glass through which you can observe the status of your service meshes, workloads, and services that run across all of your clusters. You can also view the policies that configure the behavior of your network.

  1. Access the Gloo Mesh Enterprise dashboard.

    meshctl dashboard
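
    If meshctl cannot open a browser in your environment, you can alternatively port-forward the dashboard service and open it manually. This sketch assumes the service is named dashboard; check the service to confirm its name and port, and replace <port> accordingly.

    # inspect the dashboard service to find its port
    oc -n gloo-mesh get svc dashboard
    # forward a local port to the dashboard service
    oc -n gloo-mesh port-forward svc/dashboard 8090:<port>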
    
  2. Click through the tabs on the dashboard navigation, such as the Overview, Meshes, and Policies tabs, to visualize and check the health of your Gloo Mesh environment. For example, click the Graph tab to see the visualization of 75% of traffic flowing to reviews-v3, 15% to reviews-v1, and 10% to reviews-v2, as defined by your traffic policy.

To learn more about what you can do with the dashboard, see the dashboard guide.

Next steps

Now that you have Gloo Mesh Enterprise up and running, check out some of the following resources to learn more about Gloo Mesh or try other Gloo Mesh features.

Cleanup

If you no longer need this quick-start Gloo Mesh environment, you can deregister remote clusters, uninstall management components from the management cluster, and uninstall Istio resources from the remote clusters.

Deregister remote clusters

  1. Uninstall the enterprise-agent Helm chart that runs on cluster-1 and cluster-2.

    helm uninstall enterprise-agent -n gloo-mesh --kube-context $REMOTE_CONTEXT1
    helm uninstall enterprise-agent -n gloo-mesh --kube-context $REMOTE_CONTEXT2
    
  2. Delete the corresponding KubernetesCluster resources from the management cluster.

    oc delete kubernetescluster $REMOTE_CLUSTER1 $REMOTE_CLUSTER2 -n gloo-mesh
    
  3. Delete the Custom Resource Definitions (CRDs) that were installed on cluster-1 and cluster-2 during registration.

    for crd in $(oc get crd --context $REMOTE_CONTEXT1 | grep mesh.gloo | awk '{print $1}'); do oc --context $REMOTE_CONTEXT1 delete crd $crd; done
    for crd in $(oc get crd --context $REMOTE_CONTEXT2| grep mesh.gloo | awk '{print $1}'); do oc --context $REMOTE_CONTEXT2 delete crd $crd; done
    
  4. Delete the gloo-mesh namespace from cluster-1 and cluster-2.

    oc --context $REMOTE_CONTEXT1 delete namespace gloo-mesh
    oc --context $REMOTE_CONTEXT2 delete namespace gloo-mesh
    

Uninstall management components

Uninstall the Gloo Mesh management components from the management cluster.

  1. Uninstall the Gloo Mesh management plane components.

    helm uninstall gloo-mesh-enterprise -n gloo-mesh --kube-context $MGMT_CONTEXT
    
  2. Delete the Gloo Mesh CRDs.

    for crd in $(oc get crd | grep mesh.gloo | awk '{print $1}'); do oc delete crd $crd; done
    
  3. Delete the gloo-mesh namespace.

    oc delete namespace gloo-mesh
    

Uninstall Bookinfo and Istio

Uninstall Bookinfo resources and Istio from each remote cluster.

  1. Uninstall Bookinfo from cluster-1.

    # remove the sidecar injection label from the default namespace
    kubectl --context $REMOTE_CONTEXT1 label namespace default istio-injection-
    # remove bookinfo application components for all versions less than v3
    kubectl --context $REMOTE_CONTEXT1 delete -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app,version notin (v3)'
    # remove all bookinfo service accounts
    kubectl --context $REMOTE_CONTEXT1 delete -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account'
    # remove ingress gateway configuration for accessing bookinfo
    kubectl --context $REMOTE_CONTEXT1 delete -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/networking/bookinfo-gateway.yaml
    
  2. Uninstall Istio and delete the istio-system namespace from cluster-1.

    istioctl --context $REMOTE_CONTEXT1 x uninstall --purge
    oc --context $REMOTE_CONTEXT1 delete namespace istio-system
    
  3. Uninstall Bookinfo from cluster-2.

    # remove the sidecar injection label from the default namespace
    kubectl --context $REMOTE_CONTEXT2 label namespace default istio-injection-
    # remove reviews and ratings services
    kubectl --context $REMOTE_CONTEXT2 delete -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'service in (reviews)'
    # remove reviews-v3
    kubectl --context $REMOTE_CONTEXT2 delete -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app in (reviews),version in (v3)'
    # remove ratings
    kubectl --context $REMOTE_CONTEXT2 delete -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app in (ratings)'
    # remove reviews and ratings service accounts
    kubectl --context $REMOTE_CONTEXT2 delete -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account in (reviews, ratings)'
    
  4. Uninstall Istio and delete the istio-system namespace from cluster-2.

    istioctl --context $REMOTE_CONTEXT2 x uninstall --purge
    oc --context $REMOTE_CONTEXT2 delete namespace istio-system
    
  5. Revoke the elevated user ID permissions that you granted to the istio-system, istio-operator, and default service account groups.

    oc --context $REMOTE_CONTEXT1 adm policy remove-scc-from-group anyuid system:serviceaccounts:istio-system
    oc --context $REMOTE_CONTEXT1 adm policy remove-scc-from-group anyuid system:serviceaccounts:istio-operator
    oc --context $REMOTE_CONTEXT1 adm policy remove-scc-from-group anyuid system:serviceaccounts:default
    oc --context $REMOTE_CONTEXT2 adm policy remove-scc-from-group anyuid system:serviceaccounts:istio-system
    oc --context $REMOTE_CONTEXT2 adm policy remove-scc-from-group anyuid system:serviceaccounts:istio-operator
    oc --context $REMOTE_CONTEXT2 adm policy remove-scc-from-group anyuid system:serviceaccounts:default
    
  6. Delete the NetworkAttachmentDefinition resources.

    oc --context $REMOTE_CONTEXT1 -n default delete network-attachment-definition istio-cni
    oc --context $REMOTE_CONTEXT2 -n default delete network-attachment-definition istio-cni
    

About the Gloo Mesh-managed certificates in your POC installation

If you install Gloo Mesh Enterprise for exploration, testing, or proof-of-concept purposes, you can use the default root CA certificate and intermediate signing CAs that are automatically generated and self-signed by Gloo Mesh to secure communication between the management and data planes. Certificates are used by relay agents in remote clusters to secure communication with the relay server, and by Istio deployments to assign certificates to workload pods in the service mesh.

The root CA certificates and unencrypted keys that Gloo Mesh Enterprise autogenerates are stored in Kubernetes secrets. Using the autogenerated certificates is not recommended for production use. For more information about certificates for production setups, see Certificate management.

Autogenerated root CA certificate and intermediate CA for relay agents

To secure communication between the management and data planes, relay agents (enterprise-agent) in remote clusters use server/client mTLS certificates to secure communication with the relay server (enterprise-networking) in the management cluster.

By default, the Gloo Mesh Enterprise Helm chart autogenerates its own root CA certificate and intermediate signing CA for issuing the server and client certificates.

Figure of the relay certificates that secure communication between the relay server and relay agents.

When a remote cluster is registered with Gloo Mesh Enterprise, the initial setup of a secure communication channel between the management server and remote server follows this flow:

  1. To validate authenticity, the relay agent uses simple TLS to transmit a token value, which is defined in relay-identity-token-secret on the remote cluster, to the relay server.
  2. The token must match the value stored in relay-identity-token-secret on the management cluster, which is created during deployment of the relay server.
  3. When the token is validated, the relay server generates a TLS client certificate for the relay agent.
  4. The relay agent saves the client certificate in the relay-client-tls-secret.
  5. All future communication from relay agents to the server, which uses the gRPC protocol, is secured by using mTLS provided by this certificate.
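
To spot-check the issued client certificate on a remote cluster, you can decode it with openssl. This sketch assumes the secret stores the certificate under the standard tls.crt key:

    oc --context $REMOTE_CONTEXT1 -n gloo-mesh get secret relay-client-tls-secret -o jsonpath='{.data.tls\.crt}' | base64 --decode | openssl x509 -noout -subject -issuer -dates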

Figure of the relay certificate exchange during cluster registration.

Autogenerated root CA certificate for Istio

The Istio deployment in each remote cluster requires a certificate authority (CA) certificate in the cacerts Kubernetes secret in the istio-system namespace.

Gloo Mesh Enterprise uses a VirtualMesh resource to configure the relay server (enterprise-networking) to generate and self-sign a root CA certificate. This root CA certificate can sign Istio intermediate CA certificates whenever an Istio deployment in a remote cluster must create a certificate for a workload pod in its service mesh. Gloo Mesh stores the signed intermediate CA certificate in the cacerts Kubernetes secret in the istio-system namespace of the remote cluster.
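
To inspect the intermediate CA certificate that Gloo Mesh writes to a remote cluster, you can decode it from the cacerts secret. This sketch assumes the secret follows Istio's standard cacerts layout with a ca-cert.pem key:

    oc --context $REMOTE_CONTEXT1 -n istio-system get secret cacerts -o jsonpath='{.data.ca-cert\.pem}' | base64 --decode | openssl x509 -noout -subject -issuer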

Figure of how the signed Istio intermediate CA certificate is stored in the cacerts secret of the remote cluster.

Example VirtualMesh resource to autogenerate a root CA to sign intermediate CA certificates for each Istio deployment:

apiVersion: networking.mesh.gloo.solo.io/v1
kind: VirtualMesh
metadata:
  name: virtual-mesh
  namespace: gloo-mesh
spec:
  # specify Gloo Mesh certificate policy 
  mtlsConfig:
    # When a new intermediate CA certificate is signed for Istio,
    # a restart is required of all pods in the mesh to receive new certificates.
    # Note: Do NOT use this autoRestartPods setting in production!
    autoRestartPods: true
    shared:
      # root ca config
      rootCertificateAuthority:
        # autogenerate root CA certificate
        generated: {}
  federation:
    selectors:
    - {}
  meshes:
  - name: istiod-istio-system-cluster-1 
    namespace: gloo-mesh
  - name: istiod-istio-system-cluster-2
    namespace: gloo-mesh

Saving the autogenerated certificates

The root CA certificates and unencrypted keys that Gloo Mesh Enterprise autogenerates are stored in Kubernetes secrets. If the secrets are deleted, you must regenerate new certificates for all relay agents and Istio deployments. Be sure to download the certificate and key files and store them in a safe location so that you can recover from an accidental deletion.

  1. From the management cluster, download the root certificate from the relay-root-tls-secret secret.

    mkdir relay-root-tls-secret
    kubectl get secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.ca\.crt}' relay-root-tls-secret | base64 --decode > relay-root-tls-secret/ca.crt
    kubectl get secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.tls\.key}' relay-root-tls-secret | base64 --decode > relay-root-tls-secret/tls.key
    
    
  2. Download the relay signing certificate from the relay-tls-signing-secret secret.

    mkdir relay-tls-signing-secret
    kubectl get secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.ca\.crt}' relay-tls-signing-secret | base64 --decode > relay-tls-signing-secret/ca.crt
    kubectl get secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.tls\.key}' relay-tls-signing-secret | base64 --decode > relay-tls-signing-secret/tls.key
    
  3. Download the Istio root certificate from each VirtualMesh resource.

    VIRTUAL_MESH_NAME=virtual-mesh
    VIRTUAL_MESH_NAMESPACE=gloo-mesh
    SECRET_NAME=$VIRTUAL_MESH_NAME.$VIRTUAL_MESH_NAMESPACE
    mkdir $SECRET_NAME
    
    kubectl get secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.root-cert\.pem}' $SECRET_NAME | base64 --decode > $SECRET_NAME/ca.crt
    kubectl get secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.key\.pem}' $SECRET_NAME | base64 --decode > $SECRET_NAME/tls.key
    
  4. In the event that the secrets are deleted, you can use these downloaded ca.crt and tls.key pairs to recreate the secrets in the management cluster. For example, if the relay-root-tls-secret is deleted, you can recreate the secret by running the following:

    kubectl -n gloo-mesh --context $MGMT_CONTEXT create secret tls relay-root-tls-secret \
      --cert=relay-root-tls-secret/ca.crt \
      --key=relay-root-tls-secret/tls.key
    

Insecure installations

If you prefer to set up Gloo Mesh without secure communication for quick demonstrations, you can install Gloo Mesh and register remote clusters in insecure mode. Insecure mode means that no certificates are generated to secure the connection between the Gloo Mesh server and agent components. Instead, the connection uses HTTP.
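
For reference, an insecure demo setup combines the flags mentioned earlier in this guide. The following is a minimal sketch; on OpenShift, you still need the floatingUserId, securityContext, and enterprise-agent value overrides shown in Steps 2 and 3.

# install the management plane without relay certificates (demo only)
helm install gloo-mesh-enterprise gloo-mesh-enterprise/gloo-mesh-enterprise \
  --kube-context $MGMT_CONTEXT -n gloo-mesh \
  --set licenseKey=${GLOO_MESH_LICENSE_KEY} \
  --set enterprise-networking.global.insecure=true

# register each remote cluster without relay certificates (demo only)
meshctl cluster register enterprise \
  --mgmt-context=$MGMT_CONTEXT \
  --remote-context=$REMOTE_CONTEXT1 \
  --relay-server-address $ENTERPRISE_NETWORKING_ADDRESS \
  --relay-server-insecure=true \
  $REMOTE_CLUSTER1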