Overview

If you have existing Istio installations and want to switch to using the Gloo Operator for service mesh management, you can use one of the following guides:

  • Revisioned Helm: You installed Istio with Helm. To add namespaces to the service mesh, you used revision labels such as istio.io/rev=1-27.
  • Revisionless Helm: You installed Istio with Helm. To add namespaces to the service mesh, you used the sidecar injection label, istio-injection=enabled.
  • Istio lifecycle manager: You installed Istio and gateways by using Solo’s Istio lifecycle manager, such as by using the default settings in the getting started guides, the istioInstallations Helm settings in your Gloo Helm chart, or by directly creating IstioLifecycleManager and GatewayLifecycleManager custom resources.
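
If you are not sure which pattern matches your setup, a quick look at your Helm releases and namespace labels can tell you. A minimal sketch with standard Helm and kubectl commands; the last command simply returns an error if the lifecycle manager CRDs are not installed:

  # List Helm releases to check for istiod or gateway charts
  helm ls -A | grep -i istio
  # Show which namespaces use revision labels versus the injection label
  kubectl get namespaces -L istio.io/rev -L istio-injection
  # Check for lifecycle manager resources
  kubectl get istiolifecyclemanagers,gatewaylifecyclemanagers -A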

Migrate from revisioned Helm installations

If you currently install Istio by using Helm and use revisions to manage your installations, you can migrate from your community Istio revision, such as 1-27, to the gloo revision. The Gloo Operator uses the gloo revision by default to manage Istio installations in your cluster.

  1. Save your Istio installation values in environment variables.

    1. If you do not already have a license, decide the level of licensed features that you want, and contact an account representative to obtain the license.

    2. Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions table.

    3. Save each value in an environment variable. If you prefer to specify license keys in a secret instead, see Licensing. Note that the Gloo Operator installs the Solo distribution of Istio by default for the version you specify, so neither the -solo image tag nor the repo URL is required.

           export SOLO_LICENSE_KEY=<license_key>
         export ISTIO_VERSION=1.27.4
           
    4. Install or upgrade istioctl with the same version of Istio that you saved.

           curl -L https://istio.io/downloadIstio | ISTIO_VERSION=${ISTIO_VERSION} sh -
         cd istio-${ISTIO_VERSION}
         export PATH=$PWD/bin:$PATH
           
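
      To confirm that the istioctl client matches the version you saved, you can run a client-only version check:

           istioctl version --remote=false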

  2. Install the Gloo Operator and deploy a managed istiod control plane.

    1. Install the Gloo Operator to the gloo-mesh namespace. This operator deploys and manages your Istio installation. For more information, see the Helm reference. Note that if you already installed Gloo Mesh, you can optionally reference the secret that Gloo Mesh (OSS APIs) automatically creates for your license by using the --set manager.env.SOLO_ISTIO_LICENSE_KEY_SECRET_REF=gloo-mesh/license-keys flag instead.

        helm install gloo-operator oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
        --version 0.4.3 \
        -n gloo-mesh \
        --create-namespace \
        --set manager.env.SOLO_ISTIO_LICENSE_KEY=${SOLO_LICENSE_KEY}
        
    2. Verify that the operator pod is running.

        kubectl get pods -n gloo-mesh -l app.kubernetes.io/name=gloo-operator
        

      Example output:

        gloo-operator-78d58d5c7b-lzbr5     1/1     Running   0          48s
        
    3. Create a ServiceMeshController custom resource to configure an Istio installation. Make sure that the $CLUSTER_NAME environment variable is set to your cluster's name, or replace the variable directly in the following manifest. For more information about the configurable fields, see the installation guide.

        kubectl apply -n gloo-mesh -f -<<EOF
      apiVersion: operator.gloo.solo.io/v1
      kind: ServiceMeshController
      metadata:
        name: managed-istio
        labels:
          app.kubernetes.io/name: managed-istio
      spec:
        cluster: $CLUSTER_NAME
        dataplaneMode: Sidecar
        version: ${ISTIO_VERSION}
        # Uncomment if you installed the istio-cni
        # onConflict: Force
      EOF
        
    4. Verify that the ServiceMeshController is ready. In the Status section of the output, make sure that all statuses are True, and that the phase is SUCCEEDED.

        kubectl describe servicemeshcontroller -n gloo-mesh managed-istio
        

      Example output:

        ...
      Status:
        Conditions:
          Last Transition Time:  2024-12-27T20:47:01Z
          Message:               Manifests initialized
          Observed Generation:   1
          Reason:                ManifestsInitialized
          Status:                True
          Type:                  Initialized
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               CRDs installed
          Observed Generation:   1
          Reason:                CRDInstalled
          Status:                True
          Type:                  CRDInstalled
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               Deployment succeeded
          Observed Generation:   1
          Reason:                DeploymentSucceeded
          Status:                True
          Type:                  ControlPlaneDeployed
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               Deployment succeeded
          Observed Generation:   1
          Reason:                DeploymentSucceeded
          Status:                True
          Type:                  CNIDeployed
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               Deployment succeeded
          Observed Generation:   1
          Reason:                DeploymentSucceeded
          Status:                True
          Type:                  WebhookDeployed
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               All conditions are met
          Observed Generation:   1
          Reason:                SystemReady
          Status:                True
          Type:                  Ready
        Phase:                   SUCCEEDED
      Events:                    <none>
        
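
      Rather than scanning the conditions by eye, you can block until the controller reports ready. A minimal sketch, based on the Ready condition type shown in the output above:

        kubectl wait servicemeshcontroller/managed-istio -n gloo-mesh \
        --for=condition=Ready --timeout=120s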
  3. Migrate your Istio-managed workloads to the managed gloo control plane.

    1. Get the workload namespaces that you previously labeled with an Istio revision, such as 1-27 in the following example.

        kubectl get namespaces -l istio.io/rev=1-27
        
    2. Overwrite the revision label for each of the workload namespaces with the gloo revision label.

        kubectl label namespace <namespace> istio.io/rev=gloo --overwrite
        
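
      If you have many workload namespaces, you can relabel them all in one pass. A sketch, based on the 1-27 revision from the previous step:

        for ns in $(kubectl get namespaces -l istio.io/rev=1-27 -o jsonpath='{.items[*].metadata.name}'); do
          kubectl label namespace "$ns" istio.io/rev=gloo --overwrite
        done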
    3. Restart the workloads in each labeled namespace so that they are managed by the Gloo Operator Istio installation.

      • To restart all deployments in the namespace:
          kubectl rollout restart deployment -n <namespace>
          
      • To restart individual deployments in the namespace, such as to test a small number of deployments or to stagger the restart process:
          kubectl rollout restart deployment <deployment> -n <namespace>
          
    4. Verify that the workloads are successfully migrated. In the output, the name of istiod includes the gloo revision, indicating that the workload is now part of the Gloo-revisioned service mesh.

        istioctl proxy-status
        

      Example output:

        NAME                                                              CLUSTER     ...     ISTIOD                         VERSION
      details-v1-7b6df9d8c8-s6kg5.bookinfo                              cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.27.4-solo
      productpage-v1-bb494b7d7-xbtxr.bookinfo                           cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.27.4-solo
      ratings-v1-55b478cfb6-wv2m5.bookinfo                              cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.27.4-solo
      reviews-v1-6dfcc9fc7d-7k6qh.bookinfo                              cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.27.4-solo
      reviews-v2-7dddd799b5-m5n2z.bookinfo                              cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.27.4-solo
        
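
      To spot any workloads that are still attached to the old control plane, you can filter out entries that already point at the gloo-revisioned istiod; aside from the header row, any remaining entries still need to be restarted:

        istioctl proxy-status | grep -v istiod-gloo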
  4. Update any existing Istio ingress or egress gateways to the gloo revision.

    1. Get the name and namespace of your gateway Helm release.

        helm ls -A
        
    2. Get the current values for the gateway Helm release in your cluster.

        helm get values <gateway_release> -n <namespace> -o yaml > gateway.yaml
        
    3. Upgrade your gateway Helm release.

        helm upgrade -i <gateway_release> istio/gateway \
        --version ${ISTIO_VERSION} \
        --namespace <namespace> \
        --set "revision=gloo" \
        -f gateway.yaml
        
    4. Verify that the gateway is successfully migrated. In the output, the name of istiod includes the gloo revision, indicating that the gateway is now included in the Gloo-revisioned data plane.

        istioctl proxy-status | grep gateway
        

      Example output:

        NAME                                                  CLUSTER    ...     ISTIOD                           VERSION
      istio-ingressgateway-bdc4fd65f-ftmz9.istio-ingress    cluster1    ...     istiod-gloo-6495985689-rkwwd     1.27.4-solo
        
  5. Verify that Istio still correctly routes traffic requests to apps in your mesh. For example, if you deployed the Bookinfo sample app, you can send a curl request to the product page.

      kubectl port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
    curl -v http://localhost:8080/productpage
      
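
    If you only want the status code, run the port-forward in one terminal and the following check in another; a 200 response indicates that routing still works:

      curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/productpage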
  6. Get the name and namespace of your previous istiod Helm release.

      helm ls -A
      
  7. Uninstall the unmanaged control plane.

      helm uninstall <istiod_release> -n istio-system
      
  8. Optional: If you previously installed the Istio CNI pods with a Helm chart, uninstall the release and delete the secret stored by Helm. Helm stores release state in secrets named sh.helm.release.v1.<release_name>.v<revision>, so adjust the secret name in the following command if your release is not named istio-cni.

      helm uninstall <cni_release> -n istio-system
    kubectl delete secret "sh.helm.release.v1.istio-cni.v1" -n istio-system
      
  9. Send another request to your apps to verify that traffic is still flowing.

      kubectl port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
    curl -v http://localhost:8080/productpage
      

The migration of your service mesh is now complete!

Migrate from revisionless Helm installations

If you currently install Istio by using Helm and do not use revisions to manage your installations, such as by labeling namespaces with istio-injection=enabled, you can migrate the management of the MutatingWebhookConfiguration to the Gloo Operator. The Gloo Operator uses the gloo revision by default to manage Istio installations in your cluster.

  1. Save your Istio installation values in environment variables.

    1. If you do not already have a license, decide the level of licensed features that you want, and contact an account representative to obtain the license.

    2. Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions table.

    3. Save each value in an environment variable. If you prefer to specify license keys in a secret instead, see Licensing. Note that the Gloo Operator installs the Solo distribution of Istio by default for the version you specify, so neither the -solo image tag nor the repo URL is required.

           export SOLO_LICENSE_KEY=<license_key>
         export ISTIO_VERSION=1.27.4
           
    4. Install or upgrade istioctl with the same version of Istio that you saved.

           curl -L https://istio.io/downloadIstio | ISTIO_VERSION=${ISTIO_VERSION} sh -
         cd istio-${ISTIO_VERSION}
         export PATH=$PWD/bin:$PATH
           

  2. Install the Gloo Operator and deploy a managed istiod control plane.

    1. Install the Gloo Operator to the gloo-mesh namespace. This operator deploys and manages your Istio installation. For more information, see the Helm reference. Note that if you already installed Gloo Mesh, you can optionally reference the secret that Gloo Mesh (OSS APIs) automatically creates for your license by using the --set manager.env.SOLO_ISTIO_LICENSE_KEY_SECRET_REF=gloo-mesh/license-keys flag instead.

        helm install gloo-operator oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
        --version 0.4.3 \
        -n gloo-mesh \
        --create-namespace \
        --set manager.env.SOLO_ISTIO_LICENSE_KEY=${SOLO_LICENSE_KEY}
        
    2. Verify that the operator pod is running.

        kubectl get pods -n gloo-mesh -l app.kubernetes.io/name=gloo-operator
        

      Example output:

        gloo-operator-78d58d5c7b-lzbr5     1/1     Running   0          48s
        
    3. Create a ServiceMeshController custom resource to configure an Istio installation. Make sure that the $CLUSTER_NAME environment variable is set to your cluster's name, or replace the variable directly in the following manifest. For more information about the configurable fields, see the installation guide.

        kubectl apply -n gloo-mesh -f -<<EOF
      apiVersion: operator.gloo.solo.io/v1
      kind: ServiceMeshController
      metadata:
        name: managed-istio
        labels:
          app.kubernetes.io/name: managed-istio
      spec:
        cluster: $CLUSTER_NAME
        dataplaneMode: Sidecar
        version: ${ISTIO_VERSION}
        # Uncomment if you installed the istio-cni
        # onConflict: Force
      EOF
        
    4. Describe the ServiceMeshController and note that it cannot take over the istio-injection=enabled label until the existing webhook is deleted.

        kubectl describe ServiceMeshController -n gloo-mesh managed-istio
        

      Example output:

          - lastTransitionTime: "2024-12-12T19:41:52Z"
          message: MutatingWebhookConfiguration istio-sidecar-injector references default
            Istio revision istio-system/istiod; must be deleted before migration
          observedGeneration: 1
          reason: ErrorConflictDetected
          status: "False"
          type: WebhookDeployed
        
    5. Delete the existing webhook.

        kubectl delete mutatingwebhookconfiguration istio-sidecar-injector
        
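
      If the deletion fails because the webhook in your cluster has a different name, list the mutating webhook configurations and adjust the command:

        kubectl get mutatingwebhookconfigurations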
    6. Verify that the ServiceMeshController is now healthy. In the Status section of the output, make sure that all statuses are True, and that the phase is SUCCEEDED.

        kubectl describe servicemeshcontroller -n gloo-mesh managed-istio
        

      Example output:

        ...
      Status:
        Conditions:
          Last Transition Time:  2024-12-27T20:47:01Z
          Message:               Manifests initialized
          Observed Generation:   1
          Reason:                ManifestsInitialized
          Status:                True
          Type:                  Initialized
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               CRDs installed
          Observed Generation:   1
          Reason:                CRDInstalled
          Status:                True
          Type:                  CRDInstalled
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               Deployment succeeded
          Observed Generation:   1
          Reason:                DeploymentSucceeded
          Status:                True
          Type:                  ControlPlaneDeployed
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               Deployment succeeded
          Observed Generation:   1
          Reason:                DeploymentSucceeded
          Status:                True
          Type:                  CNIDeployed
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               Deployment succeeded
          Observed Generation:   1
          Reason:                DeploymentSucceeded
          Status:                True
          Type:                  WebhookDeployed
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               All conditions are met
          Observed Generation:   1
          Reason:                SystemReady
          Status:                True
          Type:                  Ready
        Phase:                   SUCCEEDED
      Events:                    <none>
        
  3. Migrate your Istio-managed workloads to the managed control plane.

    1. Get the workload namespaces that you previously included in the service mesh by using the istio-injection=enabled label.

        kubectl get namespaces -l istio-injection=enabled
        
    2. Label each workload namespace with the gloo revision label.

        kubectl label namespace <namespace> istio.io/rev=gloo --overwrite
        
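
      As in the revisioned migration, you can relabel all injection-enabled namespaces in one pass. A sketch:

        for ns in $(kubectl get namespaces -l istio-injection=enabled -o jsonpath='{.items[*].metadata.name}'); do
          kubectl label namespace "$ns" istio.io/rev=gloo --overwrite
        done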
    3. Restart your workloads so that they are managed by the Gloo Operator Istio installation.

      • To restart all deployments in the namespace:
          kubectl rollout restart deployment -n <namespace>
          
      • To restart individual deployments in the namespace, such as to test a small number of deployments or to stagger the restart process:
          kubectl rollout restart deployment <deployment> -n <namespace>
          
    4. Verify that the workloads are successfully migrated. In the output, the name of istiod includes the gloo revision, indicating that the workload is now part of the Gloo-revisioned service mesh.

        istioctl proxy-status
        

      Example output:

        NAME                                                              CLUSTER     ...     ISTIOD                         VERSION
      details-v1-7b6df9d8c8-s6kg5.bookinfo                              cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.27.4-solo
      productpage-v1-bb494b7d7-xbtxr.bookinfo                           cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.27.4-solo
      ratings-v1-55b478cfb6-wv2m5.bookinfo                              cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.27.4-solo
      reviews-v1-6dfcc9fc7d-7k6qh.bookinfo                              cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.27.4-solo
      reviews-v2-7dddd799b5-m5n2z.bookinfo                              cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.27.4-solo
        
    5. Remove the istio-injection=enabled label from the workload namespaces.

        kubectl label ns <namespace> istio-injection-
        
  4. Migrate any existing Istio ingress or egress gateways to the managed gloo control plane.

    1. Get the deployment name of your gateway.

        kubectl get deploy -n <gateway_namespace>
        
    2. Update each Istio gateway by restarting it.

        kubectl rollout restart deploy <gateway_name> -n <namespace>
        
    3. Verify that the gateway is successfully migrated. In the output, the name of istiod includes the gloo revision, indicating that the gateway is now included in the Gloo-revisioned data plane.

        istioctl proxy-status | grep gateway
        

      Example output:

        NAME                                                  CLUSTER    ...     ISTIOD                           VERSION
      istio-ingressgateway-bdc4fd65f-ftmz9.istio-ingress    cluster1    ...     istiod-gloo-6495985689-rkwwd     1.27.4-solo
        
  5. Verify that Istio still correctly routes traffic requests to apps in your mesh. For example, if you deployed the Bookinfo sample app, you can send a curl request to the product page.

      kubectl port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
    curl -v http://localhost:8080/productpage
      
  6. Get the name and namespace of your previous istiod Helm release.

      helm ls -A
      
  7. Uninstall the unmanaged control plane.

      helm uninstall <istiod_release> -n istio-system
      
  8. Optional: If you previously installed the Istio CNI pods with a Helm chart, uninstall the release and delete the secret stored by Helm. Helm stores release state in secrets named sh.helm.release.v1.<release_name>.v<revision>, so adjust the secret name in the following command if your release is not named istio-cni.

      helm uninstall <cni_release> -n istio-system
    kubectl delete secret "sh.helm.release.v1.istio-cni.v1" -n istio-system
      
  9. Send another request to your apps to verify that traffic is still flowing.

      kubectl port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
    curl -v http://localhost:8080/productpage
      

The migration of your service mesh is now complete!

Migrate from the Istio lifecycle manager

You might have previously installed Istio and gateways by using Solo’s Istio lifecycle manager, such as by using the default settings in the getting started guides, the istioInstallations Helm settings in your Gloo Helm chart, or by directly creating IstioLifecycleManager and GatewayLifecycleManager custom resources. You can migrate from the Istio revision that your lifecycle manager currently runs, such as 1-27, to the revision that the Gloo Operator uses by default to manage Istio installations in your cluster, gloo.

Single cluster

  1. Save your Istio installation values in environment variables.

    1. If you do not already have a license, decide the level of licensed features that you want, and contact an account representative to obtain the license.

    2. Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions table.

    3. Save each value in an environment variable. If you prefer to specify license keys in a secret instead, see Licensing. Note that the Gloo Operator installs the Solo distribution of Istio by default for the version you specify, so neither the -solo image tag nor the repo URL is required.

           export SOLO_LICENSE_KEY=<license_key>
         export ISTIO_VERSION=1.27.4
           
    4. Install or upgrade istioctl with the same version of Istio that you saved.

           curl -L https://istio.io/downloadIstio | ISTIO_VERSION=${ISTIO_VERSION} sh -
         cd istio-${ISTIO_VERSION}
         export PATH=$PWD/bin:$PATH
           

  2. Install the Gloo Operator and deploy a managed istiod control plane.

    1. Install the Gloo Operator to the gloo-mesh namespace. This operator deploys and manages your Istio installation. For more information, see the Helm reference. Note that if you already installed Gloo Mesh, you can optionally reference the secret that Gloo Mesh (OSS APIs) automatically creates for your license by using the --set manager.env.SOLO_ISTIO_LICENSE_KEY_SECRET_REF=gloo-mesh/license-keys flag instead.

        helm install gloo-operator oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
        --version 0.4.3 \
        -n gloo-mesh \
        --create-namespace \
        --set manager.env.SOLO_ISTIO_LICENSE_KEY=${SOLO_LICENSE_KEY}
        
    2. Verify that the operator pod is running.

        kubectl get pods -n gloo-mesh -l app.kubernetes.io/name=gloo-operator
        

      Example output:

        gloo-operator-78d58d5c7b-lzbr5     1/1     Running   0          48s
        
    3. Create a ServiceMeshController custom resource to configure an Istio installation. Make sure that the $CLUSTER_NAME environment variable is set to your cluster's name, or replace the variable directly in the following manifest. For more information about the configurable fields, see the installation guide.

        kubectl apply -n gloo-mesh -f -<<EOF
      apiVersion: operator.gloo.solo.io/v1
      kind: ServiceMeshController
      metadata:
        name: managed-istio
        labels:
          app.kubernetes.io/name: managed-istio
      spec:
        cluster: $CLUSTER_NAME
        dataplaneMode: Sidecar
        version: ${ISTIO_VERSION}
        # Uncomment if you installed the istio-cni
        # onConflict: Force
      EOF
        
    4. Verify that the ServiceMeshController is ready. In the Status section of the output, make sure that all statuses are True, and that the phase is SUCCEEDED.

        kubectl describe servicemeshcontroller -n gloo-mesh managed-istio
        

      Example output:

        ...
      Status:
        Conditions:
          Last Transition Time:  2024-12-27T20:47:01Z
          Message:               Manifests initialized
          Observed Generation:   1
          Reason:                ManifestsInitialized
          Status:                True
          Type:                  Initialized
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               CRDs installed
          Observed Generation:   1
          Reason:                CRDInstalled
          Status:                True
          Type:                  CRDInstalled
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               Deployment succeeded
          Observed Generation:   1
          Reason:                DeploymentSucceeded
          Status:                True
          Type:                  ControlPlaneDeployed
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               Deployment succeeded
          Observed Generation:   1
          Reason:                DeploymentSucceeded
          Status:                True
          Type:                  CNIDeployed
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               Deployment succeeded
          Observed Generation:   1
          Reason:                DeploymentSucceeded
          Status:                True
          Type:                  WebhookDeployed
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               All conditions are met
          Observed Generation:   1
          Reason:                SystemReady
          Status:                True
          Type:                  Ready
        Phase:                   SUCCEEDED
      Events:                    <none>
        
  3. Migrate your Istio-managed workloads to the managed gloo control plane.

    1. Get the workload namespaces that you previously labeled with an Istio revision, such as 1-27 in the following example.

        kubectl get namespaces -l istio.io/rev=1-27
        
    2. Overwrite the revision label for each of the workload namespaces with the gloo revision label.

        kubectl label namespace <namespace> istio.io/rev=gloo --overwrite
        
    3. Restart the workloads in each labeled namespace so that they are managed by the Gloo Operator Istio installation.

      • To restart all deployments in the namespace:
          kubectl rollout restart deployment -n <namespace>
          
      • To restart individual deployments in the namespace, such as to test a small number of deployments or to stagger the restart process:
          kubectl rollout restart deployment <deployment> -n <namespace>
          
    4. Verify that the workloads are successfully migrated. In the output, the name of istiod includes the gloo revision, indicating that the workload is now part of the Gloo-revisioned service mesh.

        istioctl proxy-status
        

      Example output:

        NAME                                                              CLUSTER     ...     ISTIOD                         VERSION
      details-v1-7b6df9d8c8-s6kg5.bookinfo                              cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.27.4-solo
      productpage-v1-bb494b7d7-xbtxr.bookinfo                           cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.27.4-solo
      ratings-v1-55b478cfb6-wv2m5.bookinfo                              cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.27.4-solo
      reviews-v1-6dfcc9fc7d-7k6qh.bookinfo                              cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.27.4-solo
      reviews-v2-7dddd799b5-m5n2z.bookinfo                              cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.27.4-solo
        
  4. For each gateway that the gateway lifecycle manager created, create Helm releases to deploy new Istio gateways to the gloo revision.

    1. Create a new ingress gateway Helm release for the gloo control plane revision. Note that if you maintain your own services to expose gateways, you can disable the load balancer services that are defined by default in the gateway Helm release by including the --set service.type=None flag in this command. Then, you can switch from the old to the new gateways by updating the load balancer services to point to the new gateways.

        helm install istio-ingressgateway istio/gateway \
        --version ${ISTIO_VERSION} \
        --namespace istio-ingress \
        --set "revision=gloo"
        
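
      If you rely on the default load balancer service, you can confirm that it receives an external address before you shift traffic to it. This assumes that the service keeps the release name, which is the chart default:

        kubectl get svc istio-ingressgateway -n istio-ingress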
    2. Verify that the gateway is successfully deployed. In the output, the name of istiod includes the gloo revision, indicating that the gateway is included in the Gloo-revisioned data plane.

        istioctl proxy-status | grep gateway
        

      Example output:

        NAME                                                  CLUSTER    ...     ISTIOD                           VERSION
      istio-ingressgateway-bdc4fd65f-ftmz9.istio-ingress    cluster1    ...     istiod-gloo-6495985689-rkwwd     1.27.4-solo
        
  5. Verify that Istio now routes traffic requests to apps in your mesh through the new gateway that you deployed. For example, if you deployed the Bookinfo sample app, you can send a curl request to the product page.

      kubectl port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
    curl -v http://localhost:8080/productpage
      
  6. Delete the GatewayLifecycleManager and IstioLifecycleManager managed installations. The steps vary based on whether you created the resources directly, or used the istioInstallations section of the gloo-platform Helm chart.
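
    For example, if you created the resources directly, deleting them removes the old installations. A sketch with placeholder resource names, assuming the resources live in the gloo-mesh namespace:

      kubectl delete gatewaylifecyclemanager <gateway_installation> -n gloo-mesh
      kubectl delete istiolifecyclemanager <istiod_installation> -n gloo-mesh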

  7. Optional: If you previously installed the Istio CNI pods with a Helm chart, uninstall the release and delete the secret stored by Helm. Helm stores release state in secrets named sh.helm.release.v1.<release_name>.v<revision>, so adjust the secret name in the following command if your release is not named istio-cni.

      helm uninstall <cni_release> -n istio-system
    kubectl delete secret "sh.helm.release.v1.istio-cni.v1" -n istio-system
      
  8. Send another request to your apps to verify that traffic is still flowing.

      kubectl port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
    curl -v http://localhost:8080/productpage
      

The migration of your service mesh is now complete!

Multicluster

Considerations

Before you install a multicluster sidecar mesh, review the following considerations and requirements.

Version and license requirements

  • Multicluster setups require the Solo distribution of Istio version 1.24.3 or later (1.24.3-solo), including the Solo distribution of istioctl.
  • This feature requires your mesh to be installed with the Solo distribution of Istio and an Enterprise-level license for Gloo Mesh (OSS APIs). Contact your account representative to obtain a valid license.

Components

In the following steps, you install the Istio ambient components in each workload cluster to successfully create east-west gateways and establish multicluster peering, even if you plan to use a sidecar mesh. However, sidecar mesh setups continue to use sidecar injection for your workloads. Your workloads are not added to an ambient mesh. For more information about running both ambient and sidecar components in one mesh setup, see Ambient-sidecar interoperability.

Migrate each service mesh

  1. Save your Istio installation values in environment variables.

    1. Set your Enterprise-level license for Gloo Mesh as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing.

           export GLOO_MESH_LICENSE_KEY=<enterprise_license_key>
           
    2. Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions.

    3. Save the Solo distribution of Istio version.

           export ISTIO_VERSION=1.27.4
         export ISTIO_IMAGE=${ISTIO_VERSION}-solo
           
    4. Save the repo key for the minor version of the Solo distribution of Istio that you want to install. This is the 12-character hash at the end of the repo URL us-docker.pkg.dev/gloo-mesh/istio-<repo-key>, which you can find in the Istio images built by Solo.io support article.

           # 12-character hash at the end of the repo URL
         export REPO_KEY=<repo_key>
         export REPO=us-docker.pkg.dev/gloo-mesh/istio-${REPO_KEY}
         export HELM_REPO=us-docker.pkg.dev/gloo-mesh/istio-helm-${REPO_KEY}
           
    5. Get the Solo distribution of Istio binary and install istioctl, which you use for multicluster linking and gateway commands. This script automatically detects your OS and architecture, downloads the appropriate Solo distribution of Istio binary, and verifies the installation.

           bash <(curl -sSfL https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/install-istioctl.sh)
         export PATH=${HOME}/.istioctl/bin:${PATH}
           

  2. Each cluster in the multicluster setup must have a shared root of trust. You can achieve this by providing a root certificate signed by a PKI provider, or by creating a custom root certificate for this purpose. The root certificate signs a unique intermediate CA certificate for each cluster.
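
    To sanity-check that clusters already share a root of trust, you can compare the digests of the root certificate that Istio distributes in the istio-ca-root-cert configmap; matching sums indicate a shared root. A sketch:

      kubectl --context <context1> get configmap istio-ca-root-cert -n istio-system \
        -o jsonpath='{.data.root-cert\.pem}' | sha256sum
      kubectl --context <context2> get configmap istio-ca-root-cert -n istio-system \
        -o jsonpath='{.data.root-cert\.pem}' | sha256sum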

  3. Save the name and kubeconfig context of a workload cluster in the following environment variables. Each time you repeat the steps in this guide, you change these variables to the next workload cluster’s name and context.

      export CLUSTER_NAME=<workload-cluster-name>
    export CLUSTER_CONTEXT=<workload-cluster-context>
      
  4. Install the Gloo Operator and deploy a managed istiod control plane.

    1. Install the Gloo Operator to the gloo-mesh namespace. This operator deploys and manages your Istio installation. For more information, see the Helm reference. Note that if you already installed Gloo Mesh, you can optionally reference the secret that Gloo Mesh (OSS APIs) automatically creates for your license by using the --set manager.env.SOLO_ISTIO_LICENSE_KEY_SECRET_REF=gloo-mesh/license-keys flag instead.

        helm install gloo-operator oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
        --version 0.4.3 \
        -n gloo-mesh \
        --create-namespace \
        --set manager.env.SOLO_ISTIO_LICENSE_KEY=${GLOO_MESH_LICENSE_KEY}
        
    2. Verify that the operator pod is running.

        kubectl get pods -n gloo-mesh --context ${CLUSTER_CONTEXT} -l app.kubernetes.io/name=gloo-operator
        

      Example output:

        gloo-operator-78d58d5c7b-lzbr5     1/1     Running   0          48s
        
    3. Create a ServiceMeshController custom resource to configure an Istio installation. For more information about the configurable fields, see the installation guide.

        kubectl --context ${CLUSTER_CONTEXT} apply -n gloo-mesh -f -<<EOF
      apiVersion: operator.gloo.solo.io/v1
      kind: ServiceMeshController
      metadata:
        name: managed-istio
        labels:
          app.kubernetes.io/name: managed-istio
      spec:
        cluster: ${CLUSTER_NAME}
        network: ${CLUSTER_NAME}
        dataplaneMode: Ambient # required for multicluster setups
        installNamespace: istio-system
        version: ${ISTIO_VERSION}
        # Uncomment if you installed the istio-cni
        # onConflict: Force
      EOF
        
    4. Verify that the ServiceMeshController is ready. In the Status section of the output, make sure that all statuses are True, and that the phase is SUCCEEDED.

        kubectl --context ${CLUSTER_CONTEXT} describe servicemeshcontroller -n gloo-mesh managed-istio
        

      Example output:

        ...
      Status:
        Conditions:
          Last Transition Time:  2024-12-27T20:47:01Z
          Message:               Manifests initialized
          Observed Generation:   1
          Reason:                ManifestsInitialized
          Status:                True
          Type:                  Initialized
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               CRDs installed
          Observed Generation:   1
          Reason:                CRDInstalled
          Status:                True
          Type:                  CRDInstalled
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               Deployment succeeded
          Observed Generation:   1
          Reason:                DeploymentSucceeded
          Status:                True
          Type:                  ControlPlaneDeployed
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               Deployment succeeded
          Observed Generation:   1
          Reason:                DeploymentSucceeded
          Status:                True
          Type:                  CNIDeployed
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               Deployment succeeded
          Observed Generation:   1
          Reason:                DeploymentSucceeded
          Status:                True
          Type:                  WebhookDeployed
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               All conditions are met
          Observed Generation:   1
          Reason:                SystemReady
          Status:                True
          Type:                  Ready
        Phase:                   SUCCEEDED
      Events:                    <none>
        
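
      As in the single-cluster flows, you can script this readiness check per cluster:

        kubectl --context ${CLUSTER_CONTEXT} wait servicemeshcontroller/managed-istio \
        -n gloo-mesh --for=condition=Ready --timeout=120s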
  5. Migrate your Istio-managed workloads to the managed gloo control plane. The steps vary based on whether you labeled workload namespaces with revision labels, such as istio.io/rev=1-27, or with injection labels, such as istio-injection=enabled.

  6. For each ingress or egress gateway that the gateway lifecycle manager created, create Helm releases to deploy new Istio gateways to the gloo revision.

    1. For ingress gateways: Create a new ingress gateway Helm release for the gloo control plane revision. Note that if you maintain your own services to expose the gateways, you can disable the load balancer services that are defined by default in the gateway Helm release by including the --set service.type=None flag in this command. Then, you can switch from the old to the new gateways by updating the load balancer services to point to the new gateways.

        helm install istio-ingressgateway istio/gateway \
        --kube-context ${CLUSTER_CONTEXT} \
        --version ${ISTIO_VERSION} \
        --namespace istio-ingress \
        --create-namespace \
        --set "revision=gloo"
        
    2. Verify that the gateways are successfully deployed. In the output, the name of istiod includes the gloo revision, indicating that the gateways are included in the Gloo-revisioned data plane.

        istioctl --context ${CLUSTER_CONTEXT} proxy-status | grep gateway
        

      Example output:

        NAME                                                  CLUSTER    ...     ISTIOD                           VERSION
      istio-eastwestgateway-bdc4fd65f-ftmz9.istio-eastwest  cluster1    ...     istiod-gloo-6495985689-rkwwd     1.27.4-solo
      istio-ingressgateway-bdc4fd65f-ftmz9.istio-ingress    cluster1    ...     istiod-gloo-6495985689-rkwwd     1.27.4-solo
        
  7. Verify that Istio now routes traffic requests to apps in your mesh through the new gateway that you deployed. For example, if you deployed the Bookinfo sample app, you can send a curl request to the product page.

      kubectl --context ${CLUSTER_CONTEXT} port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
    curl -v http://localhost:8080/productpage
      
  8. Optional: If you previously installed the Istio CNI pods with a Helm chart, uninstall the release and delete the secret stored by Helm. Helm stores release state in secrets named sh.helm.release.v1.<release_name>.v<revision>, so adjust the secret name in the following command if your release is not named istio-cni.

      helm uninstall <cni_release> -n istio-system
    kubectl delete secret "sh.helm.release.v1.istio-cni.v1" -n istio-system
      
  9. Apply the CRDs for the Kubernetes Gateway API to your cluster, which are required to create components such as waypoint proxies for L7 traffic policies, gateways with the Gateway resource, and more.

      kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml --context ${CLUSTER_CONTEXT}
      
  10. Create an east-west gateway in the istio-eastwest namespace. In each cluster, the east-west gateway is implemented as a ztunnel that facilitates traffic between services across clusters in your multicluster mesh. You can use the following istioctl command to quickly create the east-west gateway configuration. For customization options, see the gateway guide in the Istio docs.

      kubectl create namespace istio-eastwest --context ${CLUSTER_CONTEXT}
    istioctl multicluster expose --namespace istio-eastwest --context ${CLUSTER_CONTEXT} --generate > ew-gateway.yaml
    kubectl apply -f ew-gateway.yaml --context ${CLUSTER_CONTEXT}
      

    In this example of the generated Gateway resource, the istio-eastwest gatewayClassName is included by default when you install Istio in ambient mode. For customization options, see the gateway guide in the Istio docs.

      apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      labels:
        istio.io/expose-istiod: "15012"
        topology.istio.io/network: "cluster1"
        topology.kubernetes.io/region: "us-east"
        topology.kubernetes.io/zone: "us-east-1"
      name: istio-eastwest
      namespace: istio-eastwest
    spec:
      gatewayClassName: istio-eastwest
      listeners:
      - name: cross-network
        port: 15008
        protocol: HBONE
        tls:
          mode: Passthrough
      - name: xds-tls
        port: 15012
        protocol: TLS
        tls:
          mode: Passthrough
      
  11. Verify that the east-west gateway is successfully deployed.

      kubectl get pods -n istio-eastwest --context $CLUSTER_CONTEXT
      
  12. If you have Istio installations in multiple clusters that the GatewayLifecycleManager and IstioLifecycleManager managed, be sure to repeat steps 3 through 11 in each cluster before you continue, resetting the $CLUSTER_NAME and $CLUSTER_CONTEXT environment variables to the next workload cluster's values each time. The next step deletes the GatewayLifecycleManager and IstioLifecycleManager resources from the management cluster, which uninstalls the old Istio installations from every workload cluster in your multicluster setup.

Link clusters to enable cross-cluster service discovery and allow traffic to be routed through east-west gateways across clusters.

  1. Optional: Before you link clusters, you can check the individual readiness of each cluster for linking by running the istioctl multicluster check --precheck command. For more information about this command, see the CLI reference. If any checks fail, run the command with --verbose, and see Validate your multicluster setup.

      istioctl multicluster check --precheck --contexts="<context1>,<context2>,<context3>"
      

    Before continuing to the next step, make sure that the following checks pass as expected:
    ✅ Relevant environment variables on istiod are supported.
    ✅ The license in use by istiod supports multicluster.
    ✅ All istiod, ztunnel, and east-west gateway pods are healthy.
    ✅ The east-west gateway is programmed.

  2. Link clusters to enable cross-cluster service discovery and allow traffic to be routed through east-west gateways across clusters. The steps vary based on whether you have access to the kubeconfig files for each cluster.

  3. Verify that peer linking was successful by running the istioctl multicluster check command. If any checks fail, run the command with --verbose, and see Validate your multicluster setup.

      istioctl multicluster check --contexts="<context1>,<context2>,<context3>"
      

    In this example output, the remote peer gateways are successfully connected, the intermediate certificates are compatible between the clusters, each cluster has a unique, properly configured network, and no stale workloads were found because no autogenerated workload entries existed in the clusters prior to peering. If you do have preexisting autogenerated workload entries, the check verifies whether all entries are up to date.

      === Cluster: cluster1 ===
    ✅ Incompatible Environment Variable Check: all relevant environment variables are valid
    ✅ License Check: license is valid for multicluster
    ✅ Pod Check (istiod): all pods healthy
    ✅ Pod Check (ztunnel): all pods healthy
    ✅ Pod Check (eastwest gateway): all pods healthy
    ✅ Gateway Check: all eastwest gateways programmed
    ✅ Peers Check: all clusters connected
    ====== 
    
    === Cluster: cluster2 ===
    ✅ Incompatible Environment Variable Check: all relevant environment variables are valid
    ✅ License Check: license is valid for multicluster
    ✅ Pod Check (istiod): all pods healthy
    ✅ Pod Check (ztunnel): all pods healthy
    ✅ Pod Check (eastwest gateway): all pods healthy
    ✅ Gateway Check: all eastwest gateways programmed
    ✅ Peers Check: all clusters connected
    ====== 
    
    ✅ Intermediate Certs Compatibility Check: all clusters have compatible intermediate certificates
    ✅ Network Configuration Check: all network configurations are valid
    ⚠  Stale Workloads Check: no autogenflat workload entries found
      
  4. Optional: Verify that the istiod control plane for each peered cluster is included in each cluster’s proxy status list.

      istioctl proxy-status --context $REMOTE_CONTEXT1
    istioctl proxy-status --context $REMOTE_CONTEXT2
      

    Example output for cluster-1, in which you can verify that the istiod control plane for cluster-2 is listed:

      NAME                                               CLUSTER          ISTIOD                      VERSION              SUBSCRIBED TYPES
    istio-eastwest-67fd5679dc-fhsxs.istio-eastwest     cluster-1        istiod-7b7c9cc4c6-bdm9c     1.27.4-solo-fips     2 (WADS,WDS)
    istiod-6bc6765484-5bbhd.istio-system               cluster-2        istiod-7b7c9cc4c6-bdm9c     1.27.4-solo-fips     3 (FSDS,SGDS,WDS)
    ztunnel-5f8rb.kube-system                          cluster-1        istiod-7b7c9cc4c6-bdm9c     1.27.4-solo-fips     2 (WADS,WDS)
    ztunnel-f96kh.kube-system                          cluster-1        istiod-7b7c9cc4c6-bdm9c     1.27.4-solo-fips     2 (WADS,WDS)
    ztunnel-vtj4f.kube-system                          cluster-1        istiod-7b7c9cc4c6-bdm9c     1.27.4-solo-fips     2 (WADS,WDS)
      

Delete previous resources

  1. Now that your multicluster mesh is set up, delete the GatewayLifecycleManager and IstioLifecycleManager managed installations. The steps vary based on whether you created the resources directly, or used the istioInstallations section of the gloo-platform Helm chart.
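
    For example, if you created the resources directly, a sketch with placeholder resource names, assuming the resources live in the gloo-mesh namespace of the management cluster:

      kubectl delete gatewaylifecyclemanager <gateway_installation> -n gloo-mesh
      kubectl delete istiolifecyclemanager <istiod_installation> -n gloo-mesh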

  2. Send another request to your apps to verify that traffic is still flowing.

      kubectl --context ${CLUSTER_CONTEXT} port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
    curl -v http://localhost:8080/productpage
      

The migration of your service mesh is now complete!

Optional: Validate your multicluster setup

Both before and after you link clusters into a multicluster mesh, you can use the istioctl multicluster check command, along with other observability checks, to verify multiple aspects of multicluster ambient mesh support and status.

For example, you can use the istioctl multicluster check --precheck command to check the individual readiness of each cluster before running istioctl multicluster link to link them in a multicluster mesh, and run it again after linking to confirm that the connections were successful. This command performs the checks listed in the following sections, which you can review to understand what each check validates. Additionally, if any of the checks fail, run the command with the --verbose option, and review the following troubleshooting recommendations.

  istioctl multicluster check --verbose --contexts="<context1>,<context2>,<context3>"
  

For more information about this command, see the CLI reference.

Incompatible environment variables

Checks the ENABLE_PEERING_DISCOVERY=true and, optionally, K8S_SELECT_WORKLOAD_ENTRIES=true environment variables on istiod for values that are set incorrectly or are not supported for multicluster ambient mesh.

Example verbose output:

  --- Incompatible Environment Variable Check ---

✅ Incompatible Environment Variable Check: K8S_SELECT_WORKLOAD_ENTRIES is valid ("")
✅ Incompatible Environment Variable Check: ENABLE_PEERING_DISCOVERY is valid ("true")
✅ Incompatible Environment Variable Check: all relevant environment variables are valid
  

If this check fails, check your environment variables in your istiod configuration, such as by running helm get values --kube-context ${CLUSTER_CONTEXT} istiod -n istio-system -o yaml, and update your configuration.

License validity

Checks whether the license in use by istiod is valid for multicluster ambient mesh. Multicluster capabilities require an Enterprise-level license for Gloo Mesh.

Example verbose output:

  --- License Check ---

✅ License Check: license is valid for multicluster
  

If your license does not support multicluster ambient mesh, contact your Solo account representative.

Pod health

Checks the health of the pods in the cluster. All istiod, ztunnel, and east-west gateway pods across the checked clusters must be healthy and running for the multicluster mesh to function correctly.

Example verbose output:

  --- Pod Check (istiod) ---

NAME                        READY     STATUS      RESTARTS     AGE
istiod-6d9cdf88cf-l47tf     1/1       Running     0            10m18s

✅ Pod Check (istiod): all pods healthy


--- Pod Check (ztunnel) ---

NAME              READY     STATUS      RESTARTS     AGE
ztunnel-dvlwk     1/1       Running     0            10m6s

✅ Pod Check (ztunnel): all pods healthy


--- Pod Check (eastwest gateway) ---

NAME                                READY     STATUS      RESTARTS     AGE
istio-eastwest-857b77fc5d-qgnrl     1/1       Running     0            9m33s

✅ Pod Check (eastwest gateway): all pods healthy
  

To check any unhealthy pods, run the following commands. Consider checking the pod logs, and review Debug Istio.

  kubectl get po -n istio-system
kubectl get po -n istio-eastwest
  

East-west gateway status

Checks the status of the east-west gateways in the cluster. When an east-west gateway is created, the gateway controller creates a Kubernetes service to expose the gateway. Once this service is correctly attached to the gateway and has an address assigned, the east-west gateway has a Programmed status of true.

Example verbose output:

  --- Gateway Check ---

Gateway: istio-eastwest
Addresses:
- 172.18.7.110
Status: programmed ✅

✅ Gateway Check: all eastwest gateways programmed
  

If the Programmed status is not true, an issue might exist with the address allocation for the service. Check the east-west gateway with a command such as kubectl get svc -n istio-eastwest, and verify that your cloud provider can correctly allocate addresses to the service.
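
You can also read the condition directly from the Gateway resource; an empty result or False indicates that the gateway is not yet programmed:

  kubectl get gateway istio-eastwest -n istio-eastwest \
    -o jsonpath='{.status.conditions[?(@.type=="Programmed")].status}'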

Remote peer gateway status

Checks the status of the remote peer gateways in the cluster, which represent the other peered clusters in the multicluster setup. These remote gateways configure the connection between the local cluster’s istiod control plane, and the peered clusters’ remote networks to enable XDS communication between peers. When the initial network connection between istiod and a remote peer is made, the gateway’s gloo.solo.io/PeerConnected status updates to true. Then, when the full XDS sync occurs between peers, the gateway’s gloo.solo.io/PeeringSucceeded status also updates to true.

Example verbose output:

  --- Peers Check ---

Cluster: cluster2
Addresses:
- 172.18.7.130
Conditions:
- Accepted: True
- Programmed: True
- gloo.solo.io/PeerConnected: True
- gloo.solo.io/PeeringSucceeded: True
- gloo.solo.io/PeerDataPlaneProgrammed: True
Status: connected ✅

✅ Peers Check: all clusters connected
  

If the connection is severed between the peers, the gloo.solo.io/PeerConnected status becomes false. A failed connection between peers can be due to either a misconfiguration in the peering setup, or a network issue blocking port 15008 on the remote cluster, which is the cross-network HBONE port that the east-west gateway listens on. Review the steps you took to link clusters together, such as the steps outlined in the Helm default network guide. Additionally, review any firewall rules or network policies that might block access through port 15008 on the remote cluster.

Intermediate certificate compatibility

Confirms the certificate compatibility between peered clusters. This check reads the root-cert.pem from the istio-ca-root-cert configmap in the istio-system namespace, and uses x509 certificate validation to confirm the root cert is compatible with all of the clusters’ ca-cert.pem intermediate certificate chains from the cacerts secret.

Example verbose output:

  --- Intermediate Certs Compatibility Check ---

ℹ  Intermediate Certs Compatibility Check: cluster cluster1 root certificate SHA256 sum: 6d18f32e134824c158d97f32618657c45d5a83839f838ada751757139481537e
ℹ  Intermediate Certs Compatibility Check: cluster cluster2 root certificate SHA256 sum: 6d18f32e134824c158d97f32618657c45d5a83839f838ada751757139481537e
✅ Intermediate Certs Compatibility Check: cluster cluster1 has compatible intermediate certificates with cluster cluster2 
✅ Intermediate Certs Compatibility Check: cluster cluster2 has compatible intermediate certificates with cluster cluster1 
✅ Intermediate Certs Compatibility Check: all clusters have compatible intermediate certificates
  

If this check fails because the root certs are not valid for each peered cluster's intermediate certificate chain, you can check the istiod logs for TLS errors when attempting to communicate with a peered cluster, such as the following:

  2025-12-04T22:09:22.474517Z     warn    deltaadsc       disconnected, retrying in 24.735483751s: delta stream: rpc error: code = Unavailable desc = connection error: desc = "error reading server preface: remote error: tls: unknown certificate authority"       target=peering-cluster2
  

Ensure each cluster has a cacerts secret in the istio-system namespace. To regenerate invalid certificates for each cluster, follow the example steps in Create a shared root of trust.
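
For example, to confirm that the secret exists in each cluster:

  kubectl get secret cacerts -n istio-system --context <context1>
  kubectl get secret cacerts -n istio-system --context <context2>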

Network configuration

Confirms the network configuration of the multicluster mesh. For multicluster peering setups that do not use a flat network topology, each cluster must occupy a unique network. The network name must be defined with the label topology.istio.io/network and set on both the istio-system namespace and the istio-eastwest gateway resource. The same network name must also be set as the NETWORK environment variable on the ztunnel daemonset. Each remote gateway that represents that cluster must have the topology.istio.io/network label equal to the network of the remote cluster.

Example verbose output:

  --- Network Configuration Check ---

✅ Cluster cluster1 has network: cluster1
✅ Eastwest gateway istio-eastwest/istio-eastwest has correct network label: cluster1
✅ Cluster cluster2 has network: cluster2
✅ Eastwest gateway istio-eastwest/istio-eastwest has correct network label: cluster2
✅ Remote gateway istio-eastwest/istio-remote-peer-cluster2 references network cluster2 (clusters: [cluster2])
✅ Remote gateway istio-eastwest/istio-remote-peer-cluster1 references network cluster1 (clusters: [cluster1])
✅ Network Configuration Check: all network configurations are valid
  

Mismatched network identities cause errors in cross-cluster communication, which leads to error logs in ztunnel pods that indicate a network timeout on the outbound communication. Notably, the destination address on these errors is a 240.X.X.X address, instead of the correct remote peer gateway address. You can run kubectl logs -l app=ztunnel -n istio-system --tail=10 --context ${CLUSTER_CONTEXT} | grep -iE "error|warn" to review logs such as the following:

  2025-11-18T16:14:53.490573Z     error   access  connection complete     src.addr=240.0.2.27:46802 src.workload="ratings-v1-5dc79b6bcd-zm8v6" src.namespace="bookinfo" src.identity="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-ratings" dst.addr=240.0.9.43:15008 dst.hbone_addr=240.0.9.43:9080 dst.service="productpage.bookinfo.mesh.internal" dst.workload="autogenflat.portfolio1-soloiopoc-cluster1.bookinfo.productpage-v1-54bb874995-hblwp.ee508601917c" dst.namespace="bookinfo" dst.identity="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-productpage" direction="outbound" bytes_sent=0 bytes_recv=0 duration="10001ms" error="connection timed out, maybe a NetworkPolicy is blocking HBONE port 15008: deadline has elapsed"
  

To troubleshoot these issues, be sure that you use unique network names to represent each cluster, and that you correctly labeled the cluster’s istio-system namespace with that network name, such as by running kubectl label namespace istio-system --context ${CLUSTER_CONTEXT} topology.istio.io/network=${CLUSTER_NAME}. You can also relabel the east-west gateway in the cluster, and the remote peer gateways in other clusters that represent this cluster.

Stale workload entries

In flat network setups, checks for any outdated workload entries that must be removed from the multicluster mesh. Stale workload entries can remain when pods are deleted but the autogenerated entries for those workloads are not correctly cleaned up. If you do not use a flat network topology, no autogenerated workload entries exist to be validated, and this check can be ignored.

Example verbose output for a non-flat network setup:

  --- Stale Workloads Check ---

⚠  Stale Workloads Check: no autogenflat workload entries found
  

If you use a flat network topology, and this check fails with stale workload entries, run kubectl get workloadentries -n istio-system | grep autogenflat to list the autogenerated workload entries in the remote cluster, and compare the list to the output of kubectl get pods in the source cluster for those workloads. You can safely delete the stale workload entries in the remote cluster for pods that no longer exist in the source cluster, such as by running kubectl delete workloadentry -n istio-system <entry_name>.

Next

  • Launch the Gloo UI to review the Istio insights that were captured for your service mesh setup. Gloo Mesh (OSS APIs) comes with an insights engine that automatically analyzes your Istio setups for health issues. These issues are displayed in the UI along with recommendations to harden your Istio setups. The insights give you a checklist to address issues that might otherwise be hard to detect across your environment. For more information, see Insights.
  • When it’s time to upgrade your service mesh, you can perform a safe in-place upgrade by using the Gloo Operator.