Get started on Kubernetes

Quickly get started with Gloo Mesh Enterprise by deploying a demo environment to your Kubernetes clusters.

With this guide, you can use a managed Kubernetes environment, such as clusters in Google Kubernetes Engine (GKE) or Amazon Elastic Kubernetes Service (EKS), to install Gloo Mesh Enterprise in a management cluster, register remote clusters, and try out multicluster traffic management. The following figure depicts the multi-mesh architecture created by this quick-start setup.

Figure of a three-cluster Gloo Mesh quick-start architecture.

This quick start guide creates a setup that you can use for testing purposes across three clusters. To set up a production-level deployment, see the Setup guide instead.

Before you begin

  1. Install the following CLI tools. (A quick check to verify these prerequisites follows this list.)

    • istioctl, the Istio command line tool. This guide uses Istio version 1.10.5.
    • kubectl, the Kubernetes command line tool. Download the kubectl version that is within one minor version of the Kubernetes clusters you plan to use with Gloo Mesh.
    • meshctl, the Gloo Mesh command line tool for bootstrapping Gloo Mesh, registering clusters, describing configured resources, and more.
  2. Create three Kubernetes clusters. In this guide, the cluster names mgmt-cluster, cluster-1, and cluster-2 are used. The mgmt-cluster serves as the management cluster, and cluster-1 and cluster-2 serve as the remote clusters in this setup.

  3. Save the name of each cluster in environment variables. If your clusters have different names, specify those names instead.

    export MGMT_CLUSTER=mgmt-cluster
    export REMOTE_CLUSTER1=cluster-1
    export REMOTE_CLUSTER2=cluster-2
    
  4. Save the context names of each cluster in environment variables. If you don't know the cluster context names, you can run kubectl config get-contexts, look for the cluster name in the CLUSTER column, and get the context name in the NAME column.

    export MGMT_CONTEXT=<management-cluster-context>
    export REMOTE_CONTEXT1=<remote-cluster-1-context>
    export REMOTE_CONTEXT2=<remote-cluster-2-context>
    
  5. Save the enterprise license key that you received from your account representative as an environment variable. If you do not have a key yet, you can get a trial license by contacting an account representative.

    export GLOO_MESH_LICENSE_KEY=<license_key>
    
  6. Save the Istio version 1.10.5 as an environment variable. If you downloaded a different version, make sure to specify that version instead.

    export ISTIO_VERSION=1.10.5
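
After you complete these steps, you can optionally confirm that the CLI tools are installed and that your context variables resolve. This is a minimal sanity check; version output formats vary by release.

    istioctl version --remote=false
    kubectl version --client
    meshctl version
    echo $MGMT_CONTEXT $REMOTE_CONTEXT1 $REMOTE_CONTEXT2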
    

Step 1: Install Istio in the remote clusters

Install an Istio service mesh into both remote clusters. Later in this guide, you register these clusters with the Gloo Mesh Enterprise management plane so that Gloo Mesh can discover and configure Istio workloads running in these registered clusters. By installing Istio into remote clusters before you install the Gloo Mesh management plane, your Istio service meshes can be immediately discovered when you register the remote clusters.

Note that the following Istio installation configuration is provided for simplicity, but Gloo Mesh can discover and manage Istio deployments regardless of their installation options. However, to configure multicluster traffic routing later in this guide, ensure that the Istio deployment on each cluster has an externally accessible ingress gateway.

  1. Install Istio in cluster-1.

    
    CLUSTER_NAME=$REMOTE_CLUSTER1
    cat << EOF | istioctl install -y --context $REMOTE_CONTEXT1 -f -
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      name: gloo-mesh-istio
      namespace: istio-system
    spec:
      # only the control plane components are installed (https://istio.io/latest/docs/setup/additional-setup/config-profiles/)
      profile: minimal
      # Solo.io Istio distribution repository
      hub: gcr.io/istio-enterprise
      # Solo.io Gloo Mesh Istio tag
      tag: ${ISTIO_VERSION}

      meshConfig:
        # enable access logging to standard output
        accessLogFile: /dev/stdout

        defaultConfig:
          # wait for the istio-proxy to start before application pods
          holdApplicationUntilProxyStarts: true
          # enable the Gloo Mesh metrics service (required for the Gloo Mesh Dashboard)
          envoyMetricsService:
            address: enterprise-agent.gloo-mesh:9977
          # enable the Gloo Mesh access log service (required for Gloo Mesh Access Logging)
          envoyAccessLogService:
            address: enterprise-agent.gloo-mesh:9977
          proxyMetadata:
            # enable the Istio agent to handle DNS requests for known hosts;
            # unknown hosts are automatically resolved by using the upstream DNS servers in resolv.conf
            # (for proxy-dns)
            ISTIO_META_DNS_CAPTURE: "true"
            # enable automatic address allocation (for proxy-dns)
            ISTIO_META_DNS_AUTO_ALLOCATE: "true"
            # used for Gloo Mesh metrics aggregation;
            # must match trustDomain (required for the Gloo Mesh Dashboard)
            GLOO_MESH_CLUSTER_NAME: ${CLUSTER_NAME}

        # set the default behavior of the sidecar for handling outbound traffic from the application
        outboundTrafficPolicy:
          mode: ALLOW_ANY
        # the trust domain corresponds to the trust root of a system;
        # for Gloo Mesh, this must be the name of the cluster that corresponds with the CA certificate CommonName identity
        trustDomain: ${CLUSTER_NAME}
      components:
        ingressGateways:
        # enable the default ingress gateway
        - name: istio-ingressgateway
          enabled: true
          k8s:
            service:
              type: LoadBalancer
              ports:
                # main HTTP ingress port
                - port: 80
                  targetPort: 8080
                  name: http2
                # main HTTPS ingress port
                - port: 443
                  targetPort: 8443
                  name: https
                # port for Gloo Mesh multicluster mTLS passthrough (required for Gloo Mesh east-west routing)
                - port: 15443
                  targetPort: 15443
                  # Gloo Mesh looks for the default name 'tls' on an ingress gateway
                  name: tls
        pilot:
          k8s:
            env:
              # allow multiple trust domains (required for Gloo Mesh east-west routing)
              - name: PILOT_SKIP_VALIDATE_TRUST_DOMAIN
                value: "true"
      values:
        # https://istio.io/v1.5/docs/reference/config/installation-options/#global-options
        global:
          # needed for connecting VirtualMachines to the mesh
          network: ${CLUSTER_NAME}
          # needed for annotating Istio metrics with the cluster (must match trustDomain and GLOO_MESH_CLUSTER_NAME)
          multiCluster:
            clusterName: ${CLUSTER_NAME}
    EOF
       

    Example output:

    ✔ Istio core installed
    ✔ Istiod installed
    ✔ Ingress gateways installed
    ✔ Installation complete
    
  2. Install Istio in cluster-2.

    
    CLUSTER_NAME=$REMOTE_CLUSTER2
    cat << EOF | istioctl install -y --context $REMOTE_CONTEXT2 -f -
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      name: gloo-mesh-istio
      namespace: istio-system
    spec:
      # only the control plane components are installed (https://istio.io/latest/docs/setup/additional-setup/config-profiles/)
      profile: minimal
      # Solo.io Istio distribution repository
      hub: gcr.io/istio-enterprise
      # Solo.io Gloo Mesh Istio tag
      tag: ${ISTIO_VERSION}

      meshConfig:
        # enable access logging to standard output
        accessLogFile: /dev/stdout

        defaultConfig:
          # wait for the istio-proxy to start before application pods
          holdApplicationUntilProxyStarts: true
          # enable the Gloo Mesh metrics service (required for the Gloo Mesh Dashboard)
          envoyMetricsService:
            address: enterprise-agent.gloo-mesh:9977
          # enable the Gloo Mesh access log service (required for Gloo Mesh Access Logging)
          envoyAccessLogService:
            address: enterprise-agent.gloo-mesh:9977
          proxyMetadata:
            # enable the Istio agent to handle DNS requests for known hosts;
            # unknown hosts are automatically resolved by using the upstream DNS servers in resolv.conf
            # (for proxy-dns)
            ISTIO_META_DNS_CAPTURE: "true"
            # enable automatic address allocation (for proxy-dns)
            ISTIO_META_DNS_AUTO_ALLOCATE: "true"
            # used for Gloo Mesh metrics aggregation;
            # must match trustDomain (required for the Gloo Mesh Dashboard)
            GLOO_MESH_CLUSTER_NAME: ${CLUSTER_NAME}

        # set the default behavior of the sidecar for handling outbound traffic from the application
        outboundTrafficPolicy:
          mode: ALLOW_ANY
        # the trust domain corresponds to the trust root of a system;
        # for Gloo Mesh, this must be the name of the cluster that corresponds with the CA certificate CommonName identity
        trustDomain: ${CLUSTER_NAME}
      components:
        ingressGateways:
        # enable the default ingress gateway
        - name: istio-ingressgateway
          enabled: true
          k8s:
            service:
              type: LoadBalancer
              ports:
                # main HTTP ingress port
                - port: 80
                  targetPort: 8080
                  name: http2
                # main HTTPS ingress port
                - port: 443
                  targetPort: 8443
                  name: https
                # port for Gloo Mesh multicluster mTLS passthrough (required for Gloo Mesh east-west routing)
                - port: 15443
                  targetPort: 15443
                  # Gloo Mesh looks for the default name 'tls' on an ingress gateway
                  name: tls
        pilot:
          k8s:
            env:
              # allow multiple trust domains (required for Gloo Mesh east-west routing)
              - name: PILOT_SKIP_VALIDATE_TRUST_DOMAIN
                value: "true"
      values:
        # https://istio.io/v1.5/docs/reference/config/installation-options/#global-options
        global:
          # needed for connecting VirtualMachines to the mesh
          network: ${CLUSTER_NAME}
          # needed for annotating Istio metrics with the cluster (must match trustDomain and GLOO_MESH_CLUSTER_NAME)
          multiCluster:
            clusterName: ${CLUSTER_NAME}
    EOF
       

To prevent issues during cluster registration in subsequent steps, do not label the gloo-mesh namespace in each remote cluster for automatic Istio injection. To ensure the namespace is not labeled for injection, run kubectl label namespace gloo-mesh istio-injection=disabled --overwrite.
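
Before you continue, you can verify that the Istio control plane and ingress gateways are up in both remote clusters, and that each istio-ingressgateway service received an external address from your cloud provider.

    kubectl get pods -n istio-system --context $REMOTE_CONTEXT1
    kubectl get svc istio-ingressgateway -n istio-system --context $REMOTE_CONTEXT1
    kubectl get pods -n istio-system --context $REMOTE_CONTEXT2
    kubectl get svc istio-ingressgateway -n istio-system --context $REMOTE_CONTEXT2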

Step 2: Install Gloo Mesh Enterprise in the management cluster

Install the Gloo Mesh Enterprise management components into your management cluster. The management components serve as the control plane where you define all service mesh configurations that you want Gloo Mesh to enforce across clusters and service meshes. The control plane also aggregates all of the discovered Istio service mesh components into the simplified Gloo Mesh API custom resources: Mesh, Workload, and Destination.

Note that this guide uses meshctl to install a minimal deployment of Gloo Mesh Enterprise for testing purposes, and some optional components are not installed. For example, self-signed certificates are used, and the role-based API is not enforced. To learn more about these installation options, including advanced configuration options available in the Gloo Mesh Enterprise Helm chart, see the Setup guide.
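
If you prefer Helm over meshctl, the same installation can be done with the Gloo Mesh Enterprise Helm chart. The following sketch is for reference only; the repository URL and the licenseKey value are assumptions here, so confirm both in the Setup guide before you use them.

    # Assumed repository location and chart name; verify in the Setup guide.
    helm repo add gloo-mesh-enterprise https://storage.googleapis.com/gloo-mesh-enterprise/gloo-mesh-enterprise
    helm repo update
    helm install gloo-mesh-enterprise gloo-mesh-enterprise/gloo-mesh-enterprise \
      --namespace gloo-mesh --create-namespace \
      --kube-context $MGMT_CONTEXT \
      --set licenseKey=$GLOO_MESH_LICENSE_KEY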

  1. Install Gloo Mesh Enterprise in your management cluster. This command creates a gloo-mesh namespace and uses default Helm chart values to install the management components.

    meshctl install --kubecontext=$MGMT_CONTEXT --license $GLOO_MESH_LICENSE_KEY
    

    Example output:

    Finished installing chart 'gloo-mesh-enterprise' as release gloo-mesh:gloo-mesh
    
  2. Verify that the management components have a status of Running.

    kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
    

    Example output:

    NAME                                     READY   STATUS    RESTARTS   AGE
    dashboard-749dc7875c-4z77k               3/3     Running   0          41s
    enterprise-networking-778d45c7b5-5d9nh   1/1     Running   0          41s
    prometheus-server-86854b778-r6r52        2/2     Running   0          41s
    redis-dashboard-844dc4f9-jnb4j           1/1     Running   0          41s
    
  3. Verify that the management plane is correctly installed. This check might take a few seconds to complete.

    meshctl check server --kubecontext $MGMT_CONTEXT
    

    Note that because no remote clusters are registered yet, the agent connectivity check returns a warning.

    Gloo Mesh Management Cluster Installation
    
    🟢 Gloo Mesh Pods Status
    
    🟡 Gloo Mesh Agents Connectivity
       Hints:
       * No registered clusters detected. To register a remote cluster that has a deployed Gloo Mesh agent, add a KubernetesCluster CR.
          For more info, see: https://docs.solo.io/gloo-mesh/latest/setup/cluster_registration/enterprise_cluster_registration/
    
    Management Configuration
    2021-10-08T17:33:05.871382Z	info	klog	apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
    
    🟢 Gloo Mesh CRD Versions
    
    🟢 Gloo Mesh Networking Configuration Resources
    

Step 3: Register remote clusters

Register your remote clusters with the Gloo Mesh management plane.

When you installed Gloo Mesh Enterprise in the management cluster, a deployment named enterprise-networking was created to run the relay server. The relay server is exposed by the enterprise-networking LoadBalancer service. When you register remote clusters to be managed by Gloo Mesh Enterprise, a deployment named enterprise-agent is created on each remote cluster to run a relay agent. Each relay agent is exposed by an enterprise-agent ClusterIP service, and all communication is initiated as outbound traffic from the relay agent to the relay server on the management cluster. For more information about relay server and agent communication, see the Architecture page.

Cluster registration also creates a KubernetesCluster custom resource on the management cluster to represent the remote cluster and store relevant data, such as the remote cluster's local domain (“cluster.local”). To learn more about cluster registration and how to register clusters with Helm rather than meshctl, review the cluster registration guide.
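
For example, the KubernetesCluster resource that registration creates for cluster-1 looks similar to the following sketch; the exact apiVersion can vary by Gloo Mesh release.

    apiVersion: multicluster.solo.io/v1alpha1
    kind: KubernetesCluster
    metadata:
      name: cluster-1
      namespace: gloo-mesh
    spec:
      clusterDomain: cluster.local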

  1. In the management cluster, find the external address that was assigned by your cloud provider to the enterprise-networking LoadBalancer service. When you register the remote clusters in subsequent steps, the enterprise-agent relay agent in each cluster accesses this address via a secure connection. Note that it might take a few minutes for your cloud provider to assign an external address to the LoadBalancer.

    
     If your cloud provider assigns an IP address to the load balancer, such as in GKE:

       ENTERPRISE_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh enterprise-networking --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
       ENTERPRISE_NETWORKING_PORT=$(kubectl -n gloo-mesh get service enterprise-networking --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
       ENTERPRISE_NETWORKING_ADDRESS=${ENTERPRISE_NETWORKING_DOMAIN}:${ENTERPRISE_NETWORKING_PORT}
       echo $ENTERPRISE_NETWORKING_ADDRESS

     If your cloud provider assigns a hostname to the load balancer, such as in EKS:

       ENTERPRISE_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh enterprise-networking --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
       ENTERPRISE_NETWORKING_PORT=$(kubectl -n gloo-mesh get service enterprise-networking --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
       ENTERPRISE_NETWORKING_ADDRESS=${ENTERPRISE_NETWORKING_DOMAIN}:${ENTERPRISE_NETWORKING_PORT}
       echo $ENTERPRISE_NETWORKING_ADDRESS

  2. Register cluster-1 with the management plane.

    meshctl cluster register \
      --mgmt-context=$MGMT_CONTEXT \
      --remote-context=$REMOTE_CONTEXT1 \
      --relay-server-address $ENTERPRISE_NETWORKING_ADDRESS \
      $REMOTE_CLUSTER1
    

    Example output:

    Registering cluster
    📃 Copying root CA relay-root-tls-secret.gloo-mesh to remote cluster from management cluster
    📃 Copying bootstrap token relay-identity-token-secret.gloo-mesh to remote cluster from management cluster
    💻 Installing relay agent in the remote cluster
    Finished installing chart 'enterprise-agent' as release gloo-mesh:enterprise-agent
    📃 Creating remote.cluster KubernetesCluster CRD in management cluster
    ⌚ Waiting for relay agent to have a client certificate
           Checking...
           Checking...
    🗑 Removing bootstrap token
    ✅ Done registering cluster!
    
  3. Register cluster-2 with the management plane.

    meshctl cluster register \
      --mgmt-context=$MGMT_CONTEXT \
      --remote-context=$REMOTE_CONTEXT2 \
      --relay-server-address $ENTERPRISE_NETWORKING_ADDRESS \
      $REMOTE_CLUSTER2
    
  4. Verify that each remote cluster is successfully registered with Gloo Mesh.

    kubectl get kubernetescluster -n gloo-mesh --context $MGMT_CONTEXT
    

    Example output:

    NAME           AGE
    cluster-1      27s
    cluster-2      23s
    
  5. Verify that Gloo Mesh successfully discovered the Istio service meshes in each remote cluster.

    kubectl get mesh -n gloo-mesh --context $MGMT_CONTEXT
    

    Example output:

    NAME                            AGE
    istiod-istio-system-cluster-1   68s
    istiod-istio-system-cluster-2   28s
    

Now that Gloo Mesh Enterprise is installed in the management cluster, the remote clusters are registered with the management plane, and the Istio meshes in the remote clusters are discovered by Gloo Mesh, your Gloo Mesh Enterprise setup is complete! To try out some of Gloo Mesh Enterprise's features, continue with the following sections to configure Gloo Mesh for a multicluster use case.
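
Optionally, re-run the management plane check from step 2. Now that the remote clusters are registered, the Gloo Mesh Agents Connectivity check that previously returned a warning should report a green status.

    meshctl check server --kubecontext $MGMT_CONTEXT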

Step 4: Create a virtual mesh

Bootstrap connectivity between the Istio service meshes in each remote cluster by creating a VirtualMesh. When you create a VirtualMesh resource in the management cluster, each service mesh in the remote clusters is configured with certificates that share a common root of trust. After trust is established, the virtual mesh configuration facilitates cross-cluster communications by federating services so that they are accessible across clusters. To learn more, see the concepts documentation.

  1. Create a VirtualMesh resource named virtual-mesh in the gloo-mesh namespace of the management cluster.

    
       kubectl apply --context $MGMT_CONTEXT -f - << EOF
       apiVersion: networking.mesh.gloo.solo.io/v1
       kind: VirtualMesh
       metadata:
         name: virtual-mesh
         namespace: gloo-mesh
       spec:
         mtlsConfig:
           # Note: Do NOT use this autoRestartPods setting in production!!
           autoRestartPods: true
           # automatically generate CA certificates for istiod deployments
           shared:
             rootCertificateAuthority:
               generated: {}
         federation:
           # federate all Destinations to all external meshes
           selectors:
           - {}
         # Disable global access policy enforcement for demonstration purposes.
         globalAccessPolicy: DISABLED
         meshes:
         - name: istiod-istio-system-${REMOTE_CLUSTER1}
           namespace: gloo-mesh
         - name: istiod-istio-system-${REMOTE_CLUSTER2}
           namespace: gloo-mesh
       EOF
       

  2. Verify that the virtual mesh is created and your service meshes are configured for multicluster traffic. An optional certificate check follows this step.

    kubectl get virtualmesh -n gloo-mesh virtual-mesh -oyaml --context $MGMT_CONTEXT
    

    In the status section of the output, ensure that each service mesh and the virtual mesh have a state of ACCEPTED.

    ...
     status:
      deployedSharedTrust:
        rootCertificateAuthority:
          generated: {}
      destinations:
        istio-ingressgateway-istio-system-cluster-1.gloo-mesh.:
          state: ACCEPTED
        istio-ingressgateway-istio-system-cluster-2.gloo-mesh.:
          state: ACCEPTED
      meshes:
        istiod-istio-system-cluster-1.gloo-mesh.:
          state: ACCEPTED
        istiod-istio-system-cluster-2.gloo-mesh.:
          state: ACCEPTED
      observedGeneration: 1
      state: ACCEPTED
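
Optionally, confirm that the shared root of trust reached the remote clusters. With the generated root certificate authority, Gloo Mesh typically issues an intermediate CA to each mesh through Istio's plug-in CA mechanism, which stores the signing certificates in a cacerts secret in the istio-system namespace. This check is a sketch based on that mechanism; the secret name can vary with your CA setup.

    kubectl get secret cacerts -n istio-system --context $REMOTE_CONTEXT1
    kubectl get secret cacerts -n istio-system --context $REMOTE_CONTEXT2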
    

Step 5: Route multicluster traffic

Now that the individual Istio service meshes are unified by a single virtual mesh, use the Bookinfo sample app to see how Gloo Mesh can facilitate multicluster traffic.

Deploy Bookinfo across clusters

Start by deploying different versions of the Bookinfo sample app to both of the remote clusters. cluster-1 runs the app with versions 1 and 2 of the reviews service (reviews-v1 and reviews-v2), and cluster-2 runs version 3 of the reviews service (reviews-v3).

  1. Install Bookinfo with the reviews-v1 and reviews-v2 services to cluster-1.

    # prepare the default namespace for Istio sidecar injection
    kubectl --context $REMOTE_CONTEXT1 label namespace default istio-injection=enabled
    # deploy bookinfo application components for all versions less than v3
    kubectl --context $REMOTE_CONTEXT1 apply -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app,version notin (v3)'
    # deploy all bookinfo service accounts
    kubectl --context $REMOTE_CONTEXT1 apply -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account'
    # configure ingress gateway to access bookinfo
    kubectl --context $REMOTE_CONTEXT1 apply -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/networking/bookinfo-gateway.yaml
    
  2. Verify that the Bookinfo app is running in cluster-1.

    kubectl --context $REMOTE_CONTEXT1 get pods
    

    Example output:

    NAME                              READY   STATUS    RESTARTS   AGE
    details-v1-558b8b4b76-w9qp8       2/2     Running   0          2m33s
    productpage-v1-6987489c74-54lvk   2/2     Running   0          2m34s
    ratings-v1-7dc98c7588-pgsxv       2/2     Running   0          2m34s
    reviews-v1-7f99cc4496-lwtsr       2/2     Running   0          2m34s
    reviews-v2-7d79d5bd5d-mpsk2       2/2     Running   0          2m34s
    

    If your Bookinfo deployment is stuck in a pending state, you might see the following error:

    admission webhook "sidecar-injector.istio.io" denied the request: template:
          inject:1: function "Template_Version_And_Istio_Version_Mismatched_Check_Installation"
          not defined
    

    This error indicates that your istioctl version does not match the version of the IstioOperator that was used during the Istio installation. Ensure that you download the same version of istioctl, which is 1.10.5 in this example.

  3. Install Bookinfo with the reviews-v3 service to cluster-2.

    # prepare the default namespace for Istio sidecar injection
    kubectl --context $REMOTE_CONTEXT2 label namespace default istio-injection=enabled
    # deploy reviews and ratings services
    kubectl --context $REMOTE_CONTEXT2 apply -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'service in (reviews)'
    # deploy reviews-v3
    kubectl --context $REMOTE_CONTEXT2 apply -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app in (reviews),version in (v3)'
    # deploy ratings
    kubectl --context $REMOTE_CONTEXT2 apply -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app in (ratings)'
    # deploy reviews and ratings service accounts
    kubectl --context $REMOTE_CONTEXT2 apply -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account in (reviews, ratings)'
    
  4. Verify that the Bookinfo app is running on cluster-2.

    kubectl --context $REMOTE_CONTEXT2 get pods
    

    Example output:

    NAME                          READY   STATUS    RESTARTS   AGE
    ratings-v1-7dc98c7588-qbmmh   2/2     Running   0          3m11s
    reviews-v3-7dbcdcbc56-w4kbf   2/2     Running   0          3m11s
    
  5. Get the address of the Istio ingress gateway on cluster-1.

    
     If your cloud provider assigns an IP address to the load balancer, such as in GKE:

       CLUSTER_1_INGRESS_ADDRESS=$(kubectl --context $REMOTE_CONTEXT1 get svc -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
       echo http://$CLUSTER_1_INGRESS_ADDRESS/productpage

     If your cloud provider assigns a hostname to the load balancer, such as in EKS:

       CLUSTER_1_INGRESS_ADDRESS=$(kubectl --context $REMOTE_CONTEXT1 get svc -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
       echo http://$CLUSTER_1_INGRESS_ADDRESS/productpage

  6. Navigate to http://$CLUSTER_1_INGRESS_ADDRESS/productpage in a web browser.

    open http://$CLUSTER_1_INGRESS_ADDRESS/productpage
    
  7. Refresh the page a few times to see the black stars in the Book Reviews column appear and disappear. The presence of black stars represents reviews-v2, and the absence of stars represents reviews-v1.

If you use EKS clusters and cannot connect to the Bookinfo product page, your istio-ingressgateway load balancer for cluster-1 might not use the required port 15443. See the troubleshooting steps at the end of this guide.

Shift traffic across clusters

To enable the productpage service on cluster-1 to access reviews-v3 on cluster-2, use Gloo Mesh to route traffic across the remote clusters. You can route traffic to reviews-v3 by creating a Gloo Mesh TrafficPolicy resource that diverts 75% of reviews traffic to the reviews-v3 service.

  1. Create a TrafficPolicy resource named simple in the gloo-mesh namespace of the management cluster.

    
       cat << EOF | kubectl --context $MGMT_CONTEXT apply -f -
       apiVersion: networking.mesh.gloo.solo.io/v1
       kind: TrafficPolicy
       metadata:
         namespace: gloo-mesh
         name: simple
       spec:
         sourceSelector:
         - kubeWorkloadMatcher:
             namespaces:
             - default
         destinationSelector:
         - kubeServiceRefs:
             services:
               - clusterName: ${REMOTE_CLUSTER1}
                 name: reviews
                 namespace: default
         policy:
           trafficShift:
             destinations:
               - kubeService:
                   clusterName: ${REMOTE_CLUSTER2}
                   name: reviews
                   namespace: default
                   subset:
                     version: v3
                 weight: 75
               - kubeService:
                   clusterName: ${REMOTE_CLUSTER1}
                   name: reviews
                   namespace: default
                   subset:
                     version: v1
                 weight: 15
               - kubeService:
                   clusterName: ${REMOTE_CLUSTER1}
                   name: reviews
                   namespace: default
                   subset:
                     version: v2
                 weight: 10
       EOF
       

  2. In the http://$CLUSTER_1_INGRESS_ADDRESS/productpage page in your web browser, refresh the page a few more times. Now, red stars for reviews-v3 appear in the Book Reviews column for most requests.

Bookinfo services in cluster-1 are now successfully accessing the Bookinfo services in cluster-2!
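
You can also approximate the traffic split from the command line. The following sketch assumes that the product page renders red stars with a color="red" attribute when reviews-v3 serves the request; with the weights above, roughly three quarters of 20 requests should include them.

    for i in $(seq 1 20); do
      curl -s http://$CLUSTER_1_INGRESS_ADDRESS/productpage \
        | grep -q 'color="red"' && echo reviews-v3 || echo reviews-v1-or-v2
    done | sort | uniq -c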

Step 6: Launch the Gloo Mesh Enterprise dashboard

The Gloo Mesh Enterprise dashboard provides a single pane of glass through which you can observe the status of your service meshes, workloads, and services that run across all of your clusters. You can also view the policies that configure the behavior of your network.

  1. Access the Gloo Mesh Enterprise dashboard.

    meshctl dashboard --kubecontext $MGMT_CONTEXT
    
  2. Click through the tabs on the dashboard navigation, such as the Overview, Meshes, and Policies tabs, to visualize and check the health of your Gloo Mesh environment. For example, click the Graph tab to see the visualization of 75% of traffic flowing to reviews-v3, 15% to reviews-v1, and 10% to reviews-v2, as defined by your traffic policy.

To learn more about what you can do with the dashboard, see the dashboard guide.
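
If the meshctl dashboard command cannot open a browser in your environment, you can port-forward the dashboard service manually instead. The service name dashboard and port 8090 are assumptions based on the default installation; adjust them if your deployment differs.

    kubectl port-forward -n gloo-mesh svc/dashboard 8090 --context $MGMT_CONTEXT
    # then open http://localhost:8090 in your browser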

Next steps

Now that you have Gloo Mesh Enterprise up and running, explore the rest of the Gloo Mesh documentation to learn more about Gloo Mesh or try other Gloo Mesh features.

Cleanup

If you no longer need this quick-start Gloo Mesh environment, you can deregister remote clusters, uninstall management components from the management cluster, and uninstall Istio resources from the remote clusters.

Deregister remote clusters

  1. Uninstall the enterprise-agent that runs on remote cluster-1 and delete the corresponding KubernetesCluster resource from the management cluster.

    meshctl cluster deregister \
      --mgmt-context $MGMT_CONTEXT \
      --remote-context $REMOTE_CONTEXT1 \
      $REMOTE_CLUSTER1
    

    Example output:

    deleting KubernetesCluster CR cluster-1.gloo-mesh from management cluster...
    uninstalling enterprise agent from remote cluster...
    Finished uninstalling release enterprise-agent
    
  2. Uninstall the enterprise-agent that runs on remote cluster-2 and delete the corresponding KubernetesCluster resource from the management cluster.

    meshctl cluster deregister \
      --mgmt-context $MGMT_CONTEXT \
      --remote-context $REMOTE_CONTEXT2 \
      $REMOTE_CLUSTER2
    
  3. Delete the Custom Resource Definitions (CRDs) that were installed on cluster-1 and cluster-2 during registration.

    for crd in $(kubectl get crd --context $REMOTE_CONTEXT1 | grep mesh.gloo | awk '{print $1}'); do kubectl --context $REMOTE_CONTEXT1 delete crd $crd; done
    for crd in $(kubectl get crd --context $REMOTE_CONTEXT2 | grep mesh.gloo | awk '{print $1}'); do kubectl --context $REMOTE_CONTEXT2 delete crd $crd; done
    
  4. Delete the gloo-mesh namespace from cluster-1 and cluster-2.

    kubectl --context $REMOTE_CONTEXT1 delete namespace gloo-mesh
    kubectl --context $REMOTE_CONTEXT2 delete namespace gloo-mesh
    

Uninstall management components

Uninstall the Gloo Mesh management components from the management cluster.

  1. Uninstall the Gloo Mesh management plane components.

    meshctl uninstall --kubecontext $MGMT_CONTEXT
    

    Example output:

    Uninstalling Helm chart
    Finished uninstalling release gloo-mesh
    
  2. Delete the Gloo Mesh CRDs.

    for crd in $(kubectl get crd --context $MGMT_CONTEXT | grep mesh.gloo | awk '{print $1}'); do kubectl --context $MGMT_CONTEXT delete crd $crd; done
    
  3. Delete the gloo-mesh namespace.

    kubectl --context $MGMT_CONTEXT delete namespace gloo-mesh
    

Uninstall Bookinfo and Istio

Uninstall Bookinfo resources and Istio from each remote cluster.

  1. Uninstall Bookinfo from cluster-1.

    # remove the sidecar injection label from the default namespace
    kubectl --context $REMOTE_CONTEXT1 label namespace default istio-injection-
    # remove bookinfo application components for all versions less than v3
    kubectl --context $REMOTE_CONTEXT1 delete -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app,version notin (v3)'
    # remove all bookinfo service accounts
    kubectl --context $REMOTE_CONTEXT1 delete -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account'
    # remove ingress gateway configuration for accessing bookinfo
    kubectl --context $REMOTE_CONTEXT1 delete -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/networking/bookinfo-gateway.yaml
    
  2. Uninstall Istio and delete the istio-system namespace from cluster-1.

    istioctl --context $REMOTE_CONTEXT1 x uninstall --purge
    kubectl --context $REMOTE_CONTEXT1 delete namespace istio-system
    
  3. Uninstall Bookinfo from cluster-2.

    # remove the sidecar injection label from the default namespace
    kubectl --context $REMOTE_CONTEXT2 label namespace default istio-injection-
    # remove reviews and ratings services
    kubectl --context $REMOTE_CONTEXT2 delete -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'service in (reviews)'
    # remove reviews-v3
    kubectl --context $REMOTE_CONTEXT2 delete -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app in (reviews),version in (v3)'
    # remove ratings
    kubectl --context $REMOTE_CONTEXT2 delete -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app in (ratings)'
    # remove reviews and ratings service accounts
    kubectl --context $REMOTE_CONTEXT2 delete -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account in (reviews, ratings)'
    
  4. Uninstall Istio and delete the istio-system namespace from cluster-2.

    istioctl --context $REMOTE_CONTEXT2 x uninstall --purge
    kubectl --context $REMOTE_CONTEXT2 delete namespace istio-system
    

Troubleshooting

If you use EKS clusters and cannot connect to the Bookinfo product page in step 5, your istio-ingressgateway load balancer for cluster-1 might not use the required port 15443. By default, EKS load balancer health checks target the first port in the load balancer's port list. In some cases, this causes the health checks to run against port 80 instead of 15443, because 80 is listed first, such as in this example load balancer YAML file:

...
spec:
  clusterIP: 10.100.108.166
  externalTrafficPolicy: Cluster
  ports:
  - name: http2
    nodePort: 31143
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 30131
    port: 443
    protocol: TCP
    targetPort: 8443
  - name: tls
    nodePort: 32287
    port: 15443
    protocol: TCP
    targetPort: 15443
  selector:
    app: istio-ingressgateway
    istio: ingressgateway

To redeploy the istio-ingressgateway load balancer with port 15443 instead, edit the istio-ingressgateway service in cluster-1 by running kubectl edit svc istio-ingressgateway -n istio-system --context $REMOTE_CONTEXT1. Then, move the tls port for 15443 to the top of the ports list, such as in the following example:

...
spec:
  clusterIP: 10.100.108.166
  externalTrafficPolicy: Cluster
  ports:
  - name: tls
    nodePort: 32287
    port: 15443
    protocol: TCP
    targetPort: 15443
  - name: http2
    nodePort: 31143
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 30131
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
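
Alternatively, you can reorder the ports non-interactively with a JSON patch. This sketch assumes that the tls port is the third entry in the ports list, as in the example above; verify the index in your service before you run it.

    kubectl patch svc istio-ingressgateway -n istio-system --context $REMOTE_CONTEXT1 \
      --type='json' \
      -p='[{"op": "move", "from": "/spec/ports/2", "path": "/spec/ports/0"}]'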