Federate clusters and isolate workloads

After you install Gloo Mesh in a multicluster environment, use the Bookinfo sample app and Gloo resources to test multitenancy, federation, and isolation across multiple clusters.

The following figure depicts the multi-mesh architecture created by this quick-start guide.

Figure: Multicluster Bookinfo traffic in the Gloo Mesh quick-start architecture.

Step 1: Set up multitenancy by creating workspaces for your workloads

Gloo introduces a new concept for Kubernetes-based multitenancy: the Workspace custom resource. A workspace consists of one or more Kubernetes namespaces that are in one or more clusters. Think of a workspace as the boundary of your team's resources. To get started, you can create a workspace for each of your teams. Your teams might start with their apps in a couple of Kubernetes namespaces in a single cluster. As your teams scale across namespaces and clusters, their workspaces scale with them. Note that for simplicity, you create all your Gloo custom resources only in the management cluster. For more information, see the Multi-tenancy concept.

  1. Create the namespaces for your workspaces in each cluster.

    kubectl create ns bookinfo --context $MGMT_CONTEXT
    kubectl create ns istio-system --context $MGMT_CONTEXT
    kubectl create ns bookinfo --context $REMOTE_CONTEXT1
    kubectl create ns bookinfo --context $REMOTE_CONTEXT2
    
  2. Create a bookinfo workspace that spans across all your clusters, and includes only the bookinfo namespaces in each cluster. Note that you must create the workspace resource in the gloo-mesh namespace of the management cluster. For more information about setting up workspaces, see Create a workspace.

    kubectl apply --context $MGMT_CONTEXT -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: Workspace
    metadata:
      name: bookinfo
      namespace: gloo-mesh
    spec:
      workloadClusters:
        - name: '*'
          namespaces:
          - name: bookinfo
    EOF
    
  3. Configure settings for the bookinfo workspace, which include:

    • Federation: These settings control whether matching services are automatically made reachable from other clusters in the workspace through a shared hostname suffix (global). In this guide, automatic federation is disabled; instead, in subsequent steps you create a virtual destination and routes in a route table to forward traffic between Bookinfo services across clusters.
    • Enabled service isolation: Services are isolated by default and cannot communicate with services outside the mesh or in another workspace. Communication between services is secured with mTLS.
    • Exporting: Because service isolation is enabled, the resources in this workspace are exported to the istio-system workspace. This setting lets the ingress gateway in the istio-system workspace use the resources in the bookinfo workspace.
    kubectl apply --context $MGMT_CONTEXT -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: WorkspaceSettings
    metadata:
      name: bookinfo-settings
      namespace: bookinfo
    spec:
      exportTo:
      - workspaces:  
        - name: istio-system
      options:
        serviceIsolation:
          enabled: true
        federation:
          enabled: false
          serviceSelector:
            - {}
          hostSuffix: 'global'
    EOF
    
  4. Create an istio-system workspace that spans across clusters, and includes the istio-system and gloo-mesh-gateways namespaces in each cluster. This ensures that the istiod control plane components as well as the gateways are included in the same workspace.

    kubectl apply --context $MGMT_CONTEXT -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: Workspace
    metadata:
      name: istio-system
      namespace: gloo-mesh
    spec:
      workloadClusters:
        - name: '*'
          namespaces:
          - name: istio-system
          - name: gloo-mesh-gateways
    EOF
    
  5. Configure settings for the istio-system workspace so that it imports resources from the bookinfo workspace.

    kubectl apply --context $MGMT_CONTEXT -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: WorkspaceSettings
    metadata:
      name: istio-system-settings
      namespace: istio-system
    spec:
      importFrom:
      - workspaces:
        - name: bookinfo
    EOF
    
  6. After you set up your other workspaces, modify the default workspace that you created in the quick-start guide to become a management-only workspace.

    kubectl apply --context $MGMT_CONTEXT -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: Workspace
    metadata:
      name: $MGMT_CLUSTER
      namespace: gloo-mesh
    spec:
      workloadClusters:
        - name: '$MGMT_CLUSTER'
          namespaces:
            - name: 'gloo-mesh'
    EOF
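
    To double-check the layout before moving on, you can list the workspaces that you created. This is an optional sanity check, assuming the Workspace CRD is registered under the `workspaces` resource name; the exact output columns depend on your Gloo version.

    ```shell
    # List all workspaces in the management cluster. Expect to see the
    # bookinfo and istio-system workspaces alongside the management-only
    # workspace that is named after your management cluster.
    kubectl get workspaces -n gloo-mesh --context $MGMT_CONTEXT
    ```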
    

As you create more workspaces, you can create global workspace settings in the management cluster that apply by default to each workspace in your Gloo Mesh environment. For more information, see Configure workspace settings.

Step 2: Deploy Bookinfo across clusters

To test out service federation and isolation, deploy different versions of the Bookinfo sample app to both of the workload clusters. cluster-1 runs the app with versions 1 and 2 of the reviews service (reviews-v1 and reviews-v2), and cluster-2 runs version 3 of the reviews service (reviews-v3).

  1. OpenShift only: Create a NetworkAttachmentDefinition custom resource for the bookinfo project of each workload cluster, and elevate the permissions of the bookinfo service account so that the Istio sidecars can use a user ID that OpenShift normally restricts. Run the following commands for each workload cluster.

    
    cat <<EOF | oc --context $REMOTE_CONTEXT1 -n bookinfo create -f -
    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: istio-cni
    EOF
    oc --context $REMOTE_CONTEXT1 adm policy add-scc-to-group anyuid system:serviceaccounts:bookinfo
    
    
    cat <<EOF | oc --context $REMOTE_CONTEXT2 -n bookinfo create -f -
    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: istio-cni
    EOF
    oc --context $REMOTE_CONTEXT2 adm policy add-scc-to-group anyuid system:serviceaccounts:bookinfo
    

  2. Export the Istio version that your cluster runs as an environment variable, such as 1.16.2 in the following example. You can check your Istio version by running istioctl version --context $REMOTE_CONTEXT1.

    export ISTIO_VERSION=1.16.2
    
  3. Run the following commands to install Bookinfo with the reviews-v1 and reviews-v2 services in cluster-1, and the reviews-v3 service in cluster-2.

    
    # prepare the bookinfo namespace for Istio sidecar injection
    kubectl --context $REMOTE_CONTEXT1 label namespace bookinfo istio-injection=enabled
    # deploy all bookinfo application components except version v3
    kubectl --context $REMOTE_CONTEXT1 -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app,version notin (v3)'
    # deploy all bookinfo service accounts
    kubectl --context $REMOTE_CONTEXT1 -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account'
    
    
    # prepare the bookinfo namespace for Istio sidecar injection
    kubectl --context $REMOTE_CONTEXT2 label namespace bookinfo istio-injection=enabled
    # deploy the reviews service
    kubectl --context $REMOTE_CONTEXT2 -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'service in (reviews)'
    # deploy reviews-v3
    kubectl --context $REMOTE_CONTEXT2 -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app in (reviews),version in (v3)'
    # deploy ratings
    kubectl --context $REMOTE_CONTEXT2 -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app in (ratings)'
    # deploy reviews and ratings service accounts
    kubectl --context $REMOTE_CONTEXT2 -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/$ISTIO_VERSION/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account in (reviews, ratings)'
    

  4. Verify that the Bookinfo pods have a status of Running in each cluster. If they do not, see Troubleshooting.

    kubectl --context $REMOTE_CONTEXT1 get pods -n bookinfo
    kubectl --context $REMOTE_CONTEXT2 get pods -n bookinfo
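
    If the pods are still starting, you can block until they are ready instead of polling manually. This is an optional convenience step, not part of the original instructions:

    ```shell
    # Wait up to 5 minutes for every Bookinfo pod in each cluster to become Ready
    kubectl --context $REMOTE_CONTEXT1 -n bookinfo wait --for=condition=Ready pods --all --timeout=300s
    kubectl --context $REMOTE_CONTEXT2 -n bookinfo wait --for=condition=Ready pods --all --timeout=300s
    ```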
    

Step 3: Verify in-cluster routing

Verify that the productpage service can route to the reviews-v1 and reviews-v2 services within the same service mesh in cluster-1.

  1. In a separate tab in your terminal, open the Bookinfo product page from your local host.

    1. Enable port-forwarding on the product page deployment.
      kubectl --context ${REMOTE_CONTEXT1} -n bookinfo port-forward deployment/productpage-v1 9080:9080
      
    2. Open your browser to http://localhost:9080/. You might need to click Normal user to open the app.
  2. Refresh the page a few times to see the black stars in the Book Reviews column appear and disappear. The presence of black stars represents reviews-v2 and the absence of black stars represents reviews-v1. Note that the styling of red stars from reviews-v3 is not shown because the services in cluster-1 do not currently communicate with the services in cluster-2.
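
To run this check from the terminal instead of a browser, you can curl the same port-forward repeatedly. This sketch assumes the Bookinfo product page renders stars with the glyphicon-star CSS class, which can vary by sample app version:

```shell
# Request the product page six times and count star icons in each response.
# A count of 0 means reviews-v1 answered; a nonzero count means reviews-v2.
for i in 1 2 3 4 5 6; do
  curl -s http://localhost:9080/productpage | grep -o 'glyphicon-star' | wc -l
done
```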

Step 4: Test service federation by routing multicluster traffic

Next, to test whether your services are federated across service meshes, use Gloo Mesh to route traffic across the workload clusters. In order for the productpage service on cluster-1 to access reviews-v3 on cluster-2, you create a virtual destination that represents all versions of the reviews app across both clusters. Then, you create a route table to route from productpage to the virtual destination, and divert 75% of reviews traffic to the reviews-v3 service.

  1. Create a Gloo Mesh root trust policy to ensure that services in cluster-1 securely communicate with the reviews service in cluster-2. The root trust policy sets up the domain and certificates to establish a shared trust model across multiple clusters in your service mesh.

    kubectl apply --context $MGMT_CONTEXT -f - <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: RootTrustPolicy
    metadata:
      name: root-trust
      namespace: gloo-mesh
    spec:
      config:
        autoRestartPods: true
        mgmtServerCa:
          generated: {}
    EOF
    
  2. Create a virtual destination resource and define a unique hostname that in-mesh gateways can use to send requests to the reviews app. This virtual destination is configured to listen for incoming traffic on the internal-only, arbitrary hostname reviews.mesh.internal.com:8080. Note that this host value differs from the actual in-cluster address that the reviews app can be reached by: it is an internal address that is used only by the gateways in your mesh.

    kubectl apply --context $MGMT_CONTEXT -n bookinfo -f- <<EOF
    apiVersion: networking.gloo.solo.io/v2
    kind: VirtualDestination
    metadata:
      name: reviews-vd
      namespace: bookinfo
    spec:
      hosts:
      # Arbitrary, internal-only hostname assigned to the endpoint
      - reviews.mesh.internal.com
      ports:
      - number: 8080
        protocol: HTTP
        targetPort:
          number: 9080
      services:
        - labels:
            app: reviews
    EOF
    
  3. Create a route table that defines how east-west requests within your mesh from the productpage service to the reviews-vd virtual destination should be routed.

    • For hosts, specify reviews.bookinfo.svc.cluster.local, which is the actual internal hostname that the reviews app listens on. The east-west gateway in your mesh does the work of taking requests made to the reviews.bookinfo.svc.cluster.local hostname and routing them to the reviews.mesh.internal.com virtual destination hostname that you specified in the previous step.
    • Create 3 references for the virtual destination, and specify the subset version and a weight in each reference. When you apply this route table, requests from productpage to /reviews now route to one of the three reviews versions depending on their differently assigned weights.
    kubectl apply --context $MGMT_CONTEXT -n bookinfo -f- <<EOF
    apiVersion: networking.gloo.solo.io/v2
    kind: RouteTable
    metadata:
      name: bookinfo-east-west
      namespace: bookinfo
    spec:
      hosts:
        - 'reviews.bookinfo.svc.cluster.local'
      workloadSelectors:
        - selector:
            labels:
              app: productpage
      http:
        - name: reviews
          matchers:
          - uri:
              prefix: /reviews
          forwardTo:
            destinations:
              # Reference to the virtual destination that directs 15% of reviews traffic to reviews-v1 in cluster-1
              - ref:
                  name: reviews-vd
                kind: VIRTUAL_DESTINATION
                port:
                  number: 8080
                subset:
                  version: v1
                weight: 15
              # Reference to the virtual destination that directs 10% of reviews traffic to reviews-v2 in cluster-1
              - ref:
                  name: reviews-vd
                kind: VIRTUAL_DESTINATION
                port:
                  number: 8080
                subset:
                  version: v2
                weight: 10
              # Reference to the virtual destination that directs 75% of reviews traffic to reviews-v3 in cluster-2
              - ref:
                  name: reviews-vd
                kind: VIRTUAL_DESTINATION
                port:
                  number: 8080
                subset:
                  version: v3
                weight: 75
    EOF
    
  4. In the http://localhost:9080/ page in your web browser, refresh the page a few times again. Now, the red stars for reviews-v3 are shown in the book reviews.
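
To observe the traffic split from the terminal instead, you can query the reviews route repeatedly from the productpage pod, reusing the kubectl debug pattern from the isolation test later in this guide. This is a rough check that assumes the reviews JSON response embeds the serving pod's name (for example, reviews-v3-...):

```shell
# Send ten requests from the productpage pod to the reviews route, then
# tally which reviews version served each one. With the weights above,
# v3 should answer most often.
pod=$(kubectl --context ${REMOTE_CONTEXT1} -n bookinfo get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}')
for i in 1 2 3 4 5 6 7 8 9 10; do
  kubectl --context ${REMOTE_CONTEXT1} -n bookinfo debug -i ${pod} --image=curlimages/curl -- \
    curl -s http://reviews.bookinfo.svc.cluster.local:9080/reviews/0
done | grep -o 'reviews-v[0-9]' | sort | uniq -c
```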

Bookinfo services in cluster-1 are now successfully accessing the Bookinfo services in cluster-2!

Step 5: Test service isolation by blocking access from services

In Step 1, you enabled service isolation for the Bookinfo workspace. When service isolation is enabled, the Bookinfo services accept requests only from services in the same workspace, or in workspaces that are set up with the appropriate import and export rules. Requests from any other source are denied.

To verify that services within the workspace are isolated, deploy two httpbin demo apps to an httpbin namespace in cluster-1, which is not part of the bookinfo workspace. One version of the app is included in the Istio service mesh, and one version is not included in the mesh. Then, attempt to curl the Bookinfo services from the httpbin apps to test service isolation.

  1. Create the httpbin namespace on cluster-1. Do not label this namespace for Istio injection.

    kubectl --context $REMOTE_CONTEXT1 create ns httpbin
    
  2. Deploy the in-mesh app on cluster-1. These commands download the deployment file, manually inject the Istio sidecar into the deployment, and create the deployment in your cluster.

    curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/getting-started/2.2/in-mesh.yaml > in-mesh.yaml
    istioctl kube-inject -f in-mesh.yaml | kubectl apply --context $REMOTE_CONTEXT1 -n httpbin -f -
    
  3. Deploy the not-in-mesh app on cluster-1. These commands download the deployment file and create the deployment in your cluster, but no Istio sidecar is injected.

    curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/getting-started/2.2/not-in-mesh.yaml > not-in-mesh.yaml
    kubectl apply --context $REMOTE_CONTEXT1 -n httpbin -f not-in-mesh.yaml
    
  4. Verify that both app pods have a status of Running.

    kubectl --context $REMOTE_CONTEXT1 -n httpbin get pods
    
  5. Curl the reviews service from the in-mesh service. This curl returns a 403 response code because the sidecar of the reviews service blocks the request. Even though the services are both in the same Istio service mesh, they cannot communicate because they are not in the same workspace, and the httpbin namespace is not in a workspace with import/export rules for the Bookinfo workspace.

    pod=$(kubectl --context ${REMOTE_CONTEXT1} -n httpbin get pods -l app=in-mesh -o jsonpath='{.items[0].metadata.name}')
    kubectl --context ${REMOTE_CONTEXT1} -n httpbin debug -i ${pod} --image=curlimages/curl -- curl -s -o /dev/null -w "%{http_code}" http://reviews.bookinfo.svc.cluster.local:9080/reviews/0
    
  6. Curl the reviews service from the not-in-mesh service. This curl returns a 000 response code because communication cannot be established. Because the Bookinfo services are isolated, they cannot communicate with services outside of the same Istio service mesh.

    pod=$(kubectl --context ${REMOTE_CONTEXT1} -n httpbin get pods -l app=not-in-mesh -o jsonpath='{.items[0].metadata.name}')
    kubectl --context ${REMOTE_CONTEXT1} -n httpbin debug -i ${pod} --image=curlimages/curl -- curl -s -o /dev/null -w "%{http_code}" http://reviews.bookinfo.svc.cluster.local:9080/reviews/0
    
  7. Optional: For additional testing, you can change the Bookinfo workspace settings to disable service isolation. Then, when you repeat the commands in steps 5 and 6, the curl commands from both the in-mesh and not-in-mesh services receive 200 response codes. Services within the same mesh and outside of the mesh can access the Bookinfo services because the services are no longer isolated.

    kubectl apply --context $MGMT_CONTEXT -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: WorkspaceSettings
    metadata:
      name: bookinfo-settings
      namespace: bookinfo
    spec:
      exportTo:
      - workspaces:
        - name: istio-system
      options:
        serviceIsolation:
          enabled: false
        federation:
          enabled: false
          serviceSelector:
            - {}
          hostSuffix: 'global'
    EOF
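
    The two curl checks from steps 5 and 6 can also be run as a single loop, which makes it easy to compare results before and after you toggle service isolation. With isolation enabled, expect 403 for in-mesh and 000 for not-in-mesh; after you disable isolation in this step, both checks return 200:

    ```shell
    # Curl the reviews service from each httpbin app and print the HTTP code
    for app in in-mesh not-in-mesh; do
      pod=$(kubectl --context ${REMOTE_CONTEXT1} -n httpbin get pods -l app=${app} -o jsonpath='{.items[0].metadata.name}')
      code=$(kubectl --context ${REMOTE_CONTEXT1} -n httpbin debug -i ${pod} --image=curlimages/curl -- \
        curl -s -o /dev/null -w "%{http_code}" http://reviews.bookinfo.svc.cluster.local:9080/reviews/0)
      echo "${app}: ${code}"
    done
    ```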
    

Step 6: Launch the Gloo Mesh UI

The Gloo Mesh UI provides a single pane of glass through which you can observe the status of your service meshes, workloads, and services that run across all of your clusters. You can also view the policies that configure the behavior of your network.

Figure: Gloo Mesh UI overview page.

  1. Access the Gloo Mesh UI.

    meshctl dashboard --kubecontext $MGMT_CONTEXT
    
  2. Click through the Workspace cards to view the configuration of your Bookinfo resources.

    • From the default collapsed view, you can see the number of clusters, namespaces, and gateways in each workspace.
    • When you expand the card, you see more information about the imported and exported resources, destinations, and policies.
    • Click MORE DETAILS to jump to the workspace details page for that workspace. Take a look at the Destinations and Routing tabs to review which resources are being exported. For example, in the details for the istio-system workspace, you can see the various Bookinfo destinations listed as imported to this workspace.

To learn more about what you can do with the UI, see the Gloo UI guides.
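
If you want a quick health summary without opening the dashboard, meshctl also provides a check command. Treat this as an optional extra; the available flags and output can vary by meshctl version:

```shell
# Validate the Gloo management plane and connectivity to registered clusters
meshctl check --kubecontext $MGMT_CONTEXT
```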

Next steps

Now that you have Gloo Mesh Enterprise up and running, check out some of the following resources to learn more about Gloo Mesh or try other Gloo Mesh features.

Cleanup

If you no longer need this quick-start Gloo Mesh environment, you can deregister workload clusters, uninstall management components from the management cluster, and uninstall Istio resources from the workload clusters by following the steps in Uninstalling Gloo Mesh and Istio.