Overview

If you have existing Istio installations and want to switch to using the Gloo Operator for service mesh management, you can use one of the following guides:

  • Revisioned Helm: You installed Istio with Helm. To add namespaces to the service mesh, you use revision labels such as istio.io/rev=1-25.
  • Revisionless Helm: You installed Istio with Helm. To add namespaces to the service mesh, you use the sidecar injection label, istio-injection=enabled.
  • Istio lifecycle manager: You might have installed Istio and gateways by using Solo’s Istio lifecycle manager, such as by using the default settings in the getting started guides, the istioInstallations Helm settings in your Gloo Helm chart, or by directly creating IstioLifecycleManager and GatewayLifecycleManager custom resources.

Migrate from revisioned Helm installations

If you currently install Istio by using Helm and use revisions to manage your installations, you can migrate from your community Istio revision, such as 1-25, to the gloo revision. The Gloo Operator uses the gloo revision by default to manage Istio installations in your cluster.

  1. Save your Istio installation values in environment variables.

    1. If you do not already have a license, decide the level of licensed features that you want, and contact an account representative to obtain the license.

    2. Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions table.

    3. Save each value in an environment variable. If you prefer to specify license keys in a secret instead, see Licensing. Note that the Gloo Operator installs the Solo distribution of Istio by default for the version you specify, so neither the -solo image tag nor the repo key is required.

           export GLOO_MESH_LICENSE_KEY=<license_key>
         export ISTIO_VERSION=1.25.2
           
    4. Install or upgrade istioctl with the same version of Istio that you saved.

           curl -L https://istio.io/downloadIstio | ISTIO_VERSION=${ISTIO_VERSION} sh -
         cd istio-${ISTIO_VERSION}
         export PATH=$PWD/bin:$PATH
           

  2. Install the Gloo Operator and deploy a managed istiod control plane.

    1. Install the Gloo Operator to the gloo-mesh namespace. This operator deploys and manages your Istio installation. Note that if you already installed Gloo Mesh, you can optionally reference the secret that Gloo Mesh automatically creates for your license by including the --set manager.env.SOLO_ISTIO_LICENSE_KEY_SECRET_REF=gloo-mesh/license-keys flag instead.

        helm install gloo-operator oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
        --version 0.2.3 \
        -n gloo-mesh \
        --create-namespace \
        --set manager.env.SOLO_ISTIO_LICENSE_KEY=${GLOO_MESH_LICENSE_KEY}
        
    2. Verify that the operator pod is running.

        kubectl get pods -n gloo-mesh -l app.kubernetes.io/name=gloo-operator
        

      Example output:

        gloo-operator-78d58d5c7b-lzbr5     1/1     Running   0          48s
        
    3. Create a ServiceMeshController custom resource to configure an Istio installation. For more information about the configurable fields, see the installation guide.

        kubectl apply -n gloo-mesh -f -<<EOF
      apiVersion: operator.gloo.solo.io/v1
      kind: ServiceMeshController
      metadata:
        name: managed-istio
        labels:
          app.kubernetes.io/name: managed-istio
      spec:
        cluster: $CLUSTER_NAME
        dataplaneMode: Sidecar
        version: ${ISTIO_VERSION}
        # Uncomment if you installed the istio-cni
        # onConflict: Force
      EOF
        
    4. Verify that the ServiceMeshController is ready. In the Status section of the output, make sure that all statuses are True, and that the phase is SUCCEEDED.

        kubectl describe servicemeshcontroller -n gloo-mesh managed-istio
        

      Example output:

        ...
      Status:
        Conditions:
          Last Transition Time:  2024-12-27T20:47:01Z
          Message:               Manifests initialized
          Observed Generation:   1
          Reason:                ManifestsInitialized
          Status:                True
          Type:                  Initialized
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               CRDs installed
          Observed Generation:   1
          Reason:                CRDInstalled
          Status:                True
          Type:                  CRDInstalled
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               Deployment succeeded
          Observed Generation:   1
          Reason:                DeploymentSucceeded
          Status:                True
          Type:                  ControlPlaneDeployed
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               Deployment succeeded
          Observed Generation:   1
          Reason:                DeploymentSucceeded
          Status:                True
          Type:                  CNIDeployed
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               Deployment succeeded
          Observed Generation:   1
          Reason:                DeploymentSucceeded
          Status:                True
          Type:                  WebhookDeployed
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               All conditions are met
          Observed Generation:   1
          Reason:                SystemReady
          Status:                True
          Type:                  Ready
        Phase:                   SUCCEEDED
      Events:                    <none>
        
  3. Migrate your Istio-managed workloads to the managed gloo control plane.

    1. Get the workload namespaces that you previously labeled with an Istio revision, such as 1-25 in the following example.

        kubectl get namespaces -l istio.io/rev=1-25
        
    2. Overwrite the revision label for each of the workload namespaces with the gloo revision label.

        kubectl label namespace <namespace> istio.io/rev=gloo --overwrite
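
      If you have many workload namespaces, you can relabel them in one pass. The following Bash sketch is optional and assumes that your namespaces still carry the 1-25 revision label from the previous step; adjust the label selector to match your revision.

        # Relabel every namespace that currently uses the old revision.
        for ns in $(kubectl get namespaces -l istio.io/rev=1-25 -o jsonpath='{.items[*].metadata.name}'); do
          kubectl label namespace "$ns" istio.io/rev=gloo --overwrite
        done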
        
    3. Restart the workloads in each labeled namespace so that they are managed by the Gloo Operator Istio installation.

      • To restart all deployments in the namespace:
          kubectl rollout restart deployment -n <namespace>
          
      • To restart individual deployments in the namespace, such as to test a small number of deployments or to stagger the restart process:
          kubectl rollout restart deployment <deployment> -n <namespace>
          
    4. Verify that the workloads are successfully migrated. In the output, the name of istiod includes the gloo revision, indicating that the workload is now part of the Gloo-revisioned service mesh.

        istioctl proxy-status
        

      Example output:

        NAME                                                              CLUSTER     ...     ISTIOD                         VERSION
      details-v1-7b6df9d8c8-s6kg5.bookinfo                              cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
      productpage-v1-bb494b7d7-xbtxr.bookinfo                           cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
      ratings-v1-55b478cfb6-wv2m5.bookinfo                              cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
      reviews-v1-6dfcc9fc7d-7k6qh.bookinfo                              cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
      reviews-v2-7dddd799b5-m5n2z.bookinfo                              cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
        
  4. Update any existing Istio ingress or egress gateways to the gloo revision.

    1. Get the name and namespace of your gateway Helm release.

        helm ls -A
        
    2. Get the current values for the gateway Helm release in your cluster.

        helm get values <gateway_release> -n <namespace> -o yaml > gateway.yaml
        
    3. Upgrade your gateway Helm release.

        helm upgrade -i <gateway_release> istio/gateway \
        --version 1.25.2 \
        --namespace <namespace> \
        --set "revision=gloo" \
        -f gateway.yaml
        
    4. Verify that the gateway is successfully migrated. In the output, the name of istiod includes the gloo revision, indicating that the gateway is now included in the Gloo-revisioned data plane.

        istioctl proxy-status | grep gateway
        

      Example output:

        NAME                                                  CLUSTER    ...     ISTIOD                           VERSION
      istio-ingressgateway-bdc4fd65f-ftmz9.istio-ingress    cluster1    ...     istiod-gloo-6495985689-rkwwd     1.25.2-solo
        
  5. Verify that Istio still correctly routes traffic requests to apps in your mesh. For example, if you deployed the Bookinfo sample app, you can send a curl request to the product page.

      kubectl port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
    curl -v http://localhost:8080/productpage
      
  6. Get the name and namespace of your previous istiod Helm release.

      helm ls -A
      
  7. Uninstall the unmanaged control plane.

      helm uninstall <istiod_release> -n istio-system
      
  8. Optional: If you previously installed the Istio CNI pods with a Helm chart, uninstall the release and delete the secret stored by Helm.

      helm uninstall <cni_release> -n istio-system
    kubectl delete secret "sh.helm.release.v1.istio-cni.v1" -n istio-system
      
  9. Send another request to your apps to verify that traffic is still flowing.

      kubectl port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
    curl -v http://localhost:8080/productpage
      

The migration of your service mesh is now complete!

Migrate from revisionless Helm installations

If you currently install Istio by using Helm and do not use revisions to manage your installations, such as by labeling namespaces with istio-injection=enabled, you can migrate the management of the MutatingWebhookConfiguration to the Gloo Operator. The Gloo Operator uses the gloo revision by default to manage Istio installations in your cluster.

  1. Save your Istio installation values in environment variables.

    1. If you do not already have a license, decide the level of licensed features that you want, and contact an account representative to obtain the license.

    2. Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions table.

    3. Save each value in an environment variable. If you prefer to specify license keys in a secret instead, see Licensing. Note that the Gloo Operator installs the Solo distribution of Istio by default for the version you specify, so neither the -solo image tag nor the repo key is required.

           export GLOO_MESH_LICENSE_KEY=<license_key>
         export ISTIO_VERSION=1.25.2
           
    4. Install or upgrade istioctl with the same version of Istio that you saved.

           curl -L https://istio.io/downloadIstio | ISTIO_VERSION=${ISTIO_VERSION} sh -
         cd istio-${ISTIO_VERSION}
         export PATH=$PWD/bin:$PATH
           

  2. Install the Gloo Operator and deploy a managed istiod control plane.

    1. Install the Gloo Operator to the gloo-mesh namespace. This operator deploys and manages your Istio installation. Note that if you already installed Gloo Mesh, you can optionally reference the secret that Gloo Mesh automatically creates for your license by including the --set manager.env.SOLO_ISTIO_LICENSE_KEY_SECRET_REF=gloo-mesh/license-keys flag instead.

        helm install gloo-operator oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
        --version 0.2.3 \
        -n gloo-mesh \
        --create-namespace \
        --set manager.env.SOLO_ISTIO_LICENSE_KEY=${GLOO_MESH_LICENSE_KEY}
        
    2. Verify that the operator pod is running.

        kubectl get pods -n gloo-mesh -l app.kubernetes.io/name=gloo-operator
        

      Example output:

        gloo-operator-78d58d5c7b-lzbr5     1/1     Running   0          48s
        
    3. Create a ServiceMeshController custom resource to configure an Istio installation. For more information about the configurable fields, see the installation guide.

        kubectl apply -n gloo-mesh -f -<<EOF
      apiVersion: operator.gloo.solo.io/v1
      kind: ServiceMeshController
      metadata:
        name: managed-istio
        labels:
          app.kubernetes.io/name: managed-istio
      spec:
        cluster: $CLUSTER_NAME
        dataplaneMode: Sidecar
        version: ${ISTIO_VERSION}
        # Uncomment if you installed the istio-cni
        # onConflict: Force
      EOF
        
    4. Describe the ServiceMeshController and note that it cannot take over sidecar injection for namespaces labeled istio-injection=enabled until the existing webhook is deleted.

        kubectl describe ServiceMeshController -n gloo-mesh managed-istio
        

      Example output:

          - lastTransitionTime: "2024-12-12T19:41:52Z"
          message: MutatingWebhookConfiguration istio-sidecar-injector references default
            Istio revision istio-system/istiod; must be deleted before migration
          observedGeneration: 1
          reason: ErrorConflictDetected
          status: "False"
          type: WebhookDeployed
        
    5. Delete the existing webhook.

        kubectl delete mutatingwebhookconfiguration istio-sidecar-injector
        
    6. Verify that the ServiceMeshController is now healthy. In the Status section of the output, make sure that all statuses are True, and that the phase is SUCCEEDED.

        kubectl describe servicemeshcontroller -n gloo-mesh managed-istio
        

      Example output:

        ...
      Status:
        Conditions:
          Last Transition Time:  2024-12-27T20:47:01Z
          Message:               Manifests initialized
          Observed Generation:   1
          Reason:                ManifestsInitialized
          Status:                True
          Type:                  Initialized
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               CRDs installed
          Observed Generation:   1
          Reason:                CRDInstalled
          Status:                True
          Type:                  CRDInstalled
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               Deployment succeeded
          Observed Generation:   1
          Reason:                DeploymentSucceeded
          Status:                True
          Type:                  ControlPlaneDeployed
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               Deployment succeeded
          Observed Generation:   1
          Reason:                DeploymentSucceeded
          Status:                True
          Type:                  CNIDeployed
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               Deployment succeeded
          Observed Generation:   1
          Reason:                DeploymentSucceeded
          Status:                True
          Type:                  WebhookDeployed
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               All conditions are met
          Observed Generation:   1
          Reason:                SystemReady
          Status:                True
          Type:                  Ready
        Phase:                   SUCCEEDED
      Events:                    <none>
        
  3. Migrate your Istio-managed workloads to the managed control plane.

    1. Get the workload namespaces that you previously included in the service mesh by using the istio-injection=enabled label.

        kubectl get namespaces -l istio-injection=enabled
        
    2. Label each workload namespace with the gloo revision label.

        kubectl label namespace <namespace> istio.io/rev=gloo --overwrite
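
      If you have many workload namespaces, you can relabel them in one pass. The following Bash sketch is optional and selects the namespaces by their existing istio-injection=enabled label; a later step removes that label after the migration.

        # Add the gloo revision label to every namespace that currently uses the injection label.
        for ns in $(kubectl get namespaces -l istio-injection=enabled -o jsonpath='{.items[*].metadata.name}'); do
          kubectl label namespace "$ns" istio.io/rev=gloo --overwrite
        done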
        
    3. Restart your workloads so that they are managed by the Gloo Operator Istio installation.

      • To restart all deployments in the namespace:
          kubectl rollout restart deployment -n <namespace>
          
      • To restart individual deployments in the namespace, such as to test a small number of deployments or to stagger the restart process:
          kubectl rollout restart deployment <deployment> -n <namespace>
          
    4. Verify that the workloads are successfully migrated. In the output, the name of istiod includes the gloo revision, indicating that the workload is now part of the Gloo-revisioned service mesh.

        istioctl proxy-status
        

      Example output:

        NAME                                                              CLUSTER     ...     ISTIOD                         VERSION
      details-v1-7b6df9d8c8-s6kg5.bookinfo                              cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
      productpage-v1-bb494b7d7-xbtxr.bookinfo                           cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
      ratings-v1-55b478cfb6-wv2m5.bookinfo                              cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
      reviews-v1-6dfcc9fc7d-7k6qh.bookinfo                              cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
      reviews-v2-7dddd799b5-m5n2z.bookinfo                              cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
        
    5. Remove the istio-injection=enabled label from the workload namespaces.

        kubectl label ns <namespace> istio-injection-
        
  4. Migrate any existing Istio ingress or egress gateways to the managed gloo control plane.

    1. Get the deployment name of your gateway.

        kubectl get deploy -n <gateway_namespace>
        
    2. Update each Istio gateway by restarting it.

        kubectl rollout restart deploy <gateway_name> -n <namespace>
        
    3. Verify that the gateway is successfully migrated. In the output, the name of istiod includes the gloo revision, indicating that the gateway is now included in the Gloo-revisioned data plane.

        istioctl proxy-status | grep gateway
        

      Example output:

        NAME                                                  CLUSTER    ...     ISTIOD                           VERSION
      istio-ingressgateway-bdc4fd65f-ftmz9.istio-ingress    cluster1    ...     istiod-gloo-6495985689-rkwwd     1.25.2-solo
        
  5. Verify that Istio still correctly routes traffic requests to apps in your mesh. For example, if you deployed the Bookinfo sample app, you can send a curl request to the product page.

      kubectl port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
    curl -v http://localhost:8080/productpage
      
  6. Get the name and namespace of your previous istiod Helm release.

      helm ls -A
      
  7. Uninstall the unmanaged control plane.

      helm uninstall <istiod_release> -n istio-system
      
  8. Optional: If you previously installed the Istio CNI pods with a Helm chart, uninstall the release and delete the secret stored by Helm.

      helm uninstall <cni_release> -n istio-system
    kubectl delete secret "sh.helm.release.v1.istio-cni.v1" -n istio-system
      
  9. Send another request to your apps to verify that traffic is still flowing.

      kubectl port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
    curl -v http://localhost:8080/productpage
      

The migration of your service mesh is now complete!

Migrate from the Istio lifecycle manager

You might have previously installed Istio and gateways by using Solo’s Istio lifecycle manager, such as by using the default settings in the getting started guides, the istioInstallations Helm settings in your Gloo Helm chart, or by directly creating IstioLifecycleManager and GatewayLifecycleManager custom resources. You can migrate from the Istio revision that your lifecycle manager currently runs, such as 1-25, to the gloo revision, which the Gloo Operator uses by default to manage Istio installations in your cluster.

Single cluster

  1. Save your Istio installation values in environment variables.

    1. If you do not already have a license, decide the level of licensed features that you want, and contact an account representative to obtain the license.

    2. Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions table.

    3. Save each value in an environment variable. If you prefer to specify license keys in a secret instead, see Licensing. Note that the Gloo Operator installs the Solo distribution of Istio by default for the version you specify, so neither the -solo image tag nor the repo key is required.

           export GLOO_MESH_LICENSE_KEY=<license_key>
         export ISTIO_VERSION=1.25.2
           
    4. Install or upgrade istioctl with the same version of Istio that you saved.

           curl -L https://istio.io/downloadIstio | ISTIO_VERSION=${ISTIO_VERSION} sh -
         cd istio-${ISTIO_VERSION}
         export PATH=$PWD/bin:$PATH
           

  2. Install the Gloo Operator and deploy a managed istiod control plane.

    1. Install the Gloo Operator to the gloo-mesh namespace. This operator deploys and manages your Istio installation. Note that if you already installed Gloo Mesh, you can optionally reference the secret that Gloo Mesh automatically creates for your license by including the --set manager.env.SOLO_ISTIO_LICENSE_KEY_SECRET_REF=gloo-mesh/license-keys flag instead.

        helm install gloo-operator oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
        --version 0.2.3 \
        -n gloo-mesh \
        --create-namespace \
        --set manager.env.SOLO_ISTIO_LICENSE_KEY=${GLOO_MESH_LICENSE_KEY}
        
    2. Verify that the operator pod is running.

        kubectl get pods -n gloo-mesh -l app.kubernetes.io/name=gloo-operator
        

      Example output:

        gloo-operator-78d58d5c7b-lzbr5     1/1     Running   0          48s
        
    3. Create a ServiceMeshController custom resource to configure an Istio installation. For more information about the configurable fields, see the installation guide.

        kubectl apply -n gloo-mesh -f -<<EOF
      apiVersion: operator.gloo.solo.io/v1
      kind: ServiceMeshController
      metadata:
        name: managed-istio
        labels:
          app.kubernetes.io/name: managed-istio
      spec:
        cluster: $CLUSTER_NAME
        dataplaneMode: Sidecar
        version: ${ISTIO_VERSION}
        # Uncomment if you installed the istio-cni
        # onConflict: Force
      EOF
        
    4. Verify that the ServiceMeshController is ready. In the Status section of the output, make sure that all statuses are True, and that the phase is SUCCEEDED.

        kubectl describe servicemeshcontroller -n gloo-mesh managed-istio
        

      Example output:

        ...
      Status:
        Conditions:
          Last Transition Time:  2024-12-27T20:47:01Z
          Message:               Manifests initialized
          Observed Generation:   1
          Reason:                ManifestsInitialized
          Status:                True
          Type:                  Initialized
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               CRDs installed
          Observed Generation:   1
          Reason:                CRDInstalled
          Status:                True
          Type:                  CRDInstalled
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               Deployment succeeded
          Observed Generation:   1
          Reason:                DeploymentSucceeded
          Status:                True
          Type:                  ControlPlaneDeployed
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               Deployment succeeded
          Observed Generation:   1
          Reason:                DeploymentSucceeded
          Status:                True
          Type:                  CNIDeployed
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               Deployment succeeded
          Observed Generation:   1
          Reason:                DeploymentSucceeded
          Status:                True
          Type:                  WebhookDeployed
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               All conditions are met
          Observed Generation:   1
          Reason:                SystemReady
          Status:                True
          Type:                  Ready
        Phase:                   SUCCEEDED
      Events:                    <none>
        
  3. Migrate your Istio-managed workloads to the managed gloo control plane.

    1. Get the workload namespaces that you previously labeled with an Istio revision, such as 1-25 in the following example.

        kubectl get namespaces -l istio.io/rev=1-25
        
    2. Overwrite the revision label for each of the workload namespaces with the gloo revision label.

        kubectl label namespace <namespace> istio.io/rev=gloo --overwrite
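
      If you have many workload namespaces, you can relabel them in one pass with an optional Bash sketch like the following, which assumes the 1-25 revision label from the previous step.

        # Relabel every namespace that currently uses the old revision.
        for ns in $(kubectl get namespaces -l istio.io/rev=1-25 -o jsonpath='{.items[*].metadata.name}'); do
          kubectl label namespace "$ns" istio.io/rev=gloo --overwrite
        done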
        
    3. Restart the workloads in each labeled namespace so that they are managed by the Gloo Operator Istio installation.

      • To restart all deployments in the namespace:
          kubectl rollout restart deployment -n <namespace>
          
      • To restart individual deployments in the namespace, such as to test a small number of deployments or to stagger the restart process:
          kubectl rollout restart deployment <deployment> -n <namespace>
          
    4. Verify that the workloads are successfully migrated. In the output, the name of istiod includes the gloo revision, indicating that the workload is now part of the Gloo-revisioned service mesh.

        istioctl proxy-status
        

      Example output:

        NAME                                                              CLUSTER     ...     ISTIOD                         VERSION
      details-v1-7b6df9d8c8-s6kg5.bookinfo                              cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
      productpage-v1-bb494b7d7-xbtxr.bookinfo                           cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
      ratings-v1-55b478cfb6-wv2m5.bookinfo                              cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
      reviews-v1-6dfcc9fc7d-7k6qh.bookinfo                              cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
      reviews-v2-7dddd799b5-m5n2z.bookinfo                              cluster1    ...     istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
        
  4. For each gateway that the gateway lifecycle manager created, create Helm releases to deploy new Istio gateways to the gloo revision.

    1. Create a new ingress gateway Helm release for the gloo control plane revision. Note that if you maintain your own services to expose gateways, you can disable the load balancer services that are defined by default in the gateway Helm release by including the --set service.type=None flag in this command. Then, you can switch from the old to the new gateways by updating the load balancer services to point to the new gateways.

        helm install istio-ingressgateway istio/gateway \
        --version ${ISTIO_VERSION} \
        --namespace istio-ingress \
        --set "revision=gloo"
        
    2. Verify that the gateway is successfully deployed. In the output, the name of istiod includes the gloo revision, indicating that the gateway is included in the Gloo-revisioned data plane.

        istioctl proxy-status | grep gateway
        

      Example output:

        NAME                                                  CLUSTER    ...     ISTIOD                           VERSION
      istio-ingressgateway-bdc4fd65f-ftmz9.istio-ingress    cluster1    ...     istiod-gloo-6495985689-rkwwd     1.25.2-solo
        
  5. Verify that Istio now routes traffic requests to apps in your mesh through the new gateway that you deployed. For example, if you deployed the Bookinfo sample app, you can send a curl request to the product page.

      kubectl port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
    curl -v http://localhost:8080/productpage
      
  6. Delete the GatewayLifecycleManager and IstioLifecycleManager managed installations. The steps vary based on whether you created the resources directly, or used the istioInstallations section of the gloo-platform Helm chart.
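
     If you created the resources directly, the cleanup might look like the following sketch. The resource names are placeholders, and this assumes that the resources live in the gloo-mesh namespace; list them first to find the names that your setup uses. If you used the istioInstallations section of the gloo-platform Helm chart, you typically remove that section in a Helm upgrade instead.

      # List the lifecycle manager resources that currently manage Istio and the gateways.
      kubectl get istiolifecyclemanagers,gatewaylifecyclemanagers -A

      # Delete each resource by name. The names shown here are placeholders.
      kubectl delete gatewaylifecyclemanager <gateway_lifecycle_manager> -n gloo-mesh
      kubectl delete istiolifecyclemanager <istio_lifecycle_manager> -n gloo-mesh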

  7. Optional: If you previously installed the Istio CNI pods with a Helm chart, uninstall the release and delete the secret stored by Helm.

      helm uninstall <cni_release> -n istio-system
    kubectl delete secret "sh.helm.release.v1.istio-cni.v1" -n istio-system
      
  8. Send another request to your apps to verify that traffic is still flowing.

      kubectl port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
    curl -v http://localhost:8080/productpage
      

The migration of your service mesh is now complete!

Multicluster

Considerations

Before you install a multicluster sidecar mesh, review the following considerations and requirements.

Version and license requirements

  • In Gloo Mesh version 2.7 and later, multicluster setups require the Solo distribution of Istio version 1.24.3 or later (1.24.3-solo), including the Solo distribution of istioctl.
  • This feature requires your mesh to be installed with the Solo distribution of Istio and an Enterprise-level license for Gloo Mesh. Contact your account representative to obtain a valid license.

Components

In the following steps, you install the Istio ambient components in each workload cluster to successfully create east-west gateways and establish multicluster peering, even if you plan to use a sidecar mesh. However, sidecar mesh setups continue to use sidecar injection for your workloads. Your workloads are not added to an ambient mesh. For more information about running both ambient and sidecar components in one mesh setup, see Ambient-sidecar interoperability.

Migrate each service mesh

  1. Save your Istio installation values in environment variables.

    1. Set your Enterprise-level license for Gloo Mesh as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing.

           export GLOO_MESH_LICENSE_KEY=<enterprise_license_key>
           
    2. Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions. In Gloo Mesh version 2.7 and later, multicluster setups require version 1.24.3 or later.

    3. Save the details for the version of the Solo distribution of Istio that you want to install, such as the image tag and the repository key that the download command in the next step uses, as in the following sketch.
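
      A minimal sketch of the values to save follows. The $ISTIO_IMAGE and $REPO_KEY variables are used by the download command in the next step; the repository key is specific to the Solo distribution of Istio, so replace the placeholder with the key for your account.

           export ISTIO_VERSION=1.25.2
           export ISTIO_IMAGE=${ISTIO_VERSION}-solo
           export REPO_KEY=<solo_distribution_repo_key>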

    4. Get the Solo distribution of Istio binary and install istioctl, which you use for multicluster linking and gateway commands.

      1. Get the OS and architecture that you use on your machine.

             OS=$(uname | tr '[:upper:]' '[:lower:]' | sed -E 's/darwin/osx/')
           ARCH=$(uname -m | sed -E 's/aarch/arm/; s/x86_64/amd64/; s/armv7l/armv7/')
           echo $OS
           echo $ARCH
             
      2. Download the Solo distribution of Istio binary and install istioctl.

             mkdir -p ~/.istioctl/bin
           curl -sSL https://storage.googleapis.com/istio-binaries-$REPO_KEY/$ISTIO_IMAGE/istioctl-$ISTIO_IMAGE-$OS-$ARCH.tar.gz | tar xzf - -C ~/.istioctl/bin
           chmod +x ~/.istioctl/bin/istioctl
           
           export PATH=${HOME}/.istioctl/bin:${PATH}
             
      3. Verify that the istioctl client runs the Solo distribution of Istio that you want to install.

             istioctl version --remote=false
             

        Example output:

             client version: 1.25.2-solo
             

  2. Each cluster in the multicluster setup must have a shared root of trust. You can achieve this by providing a root certificate that is signed by a PKI provider, or by creating a custom root certificate for this purpose. The root certificate signs a unique intermediate CA certificate for each cluster.
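
     One common way to establish the shared trust, borrowed from upstream Istio's plugin CA pattern, is to create a cacerts secret in the Istio installation namespace of each workload cluster before istiod is deployed. The following sketch assumes that you already generated a shared root certificate and a per-cluster intermediate certificate; the file paths and context are placeholders, and your certificate workflow might differ.

      kubectl --context <workload_cluster_context> create namespace istio-system
      kubectl --context <workload_cluster_context> create secret generic cacerts -n istio-system \
        --from-file=ca-cert.pem=<path_to_cluster_intermediate_cert> \
        --from-file=ca-key.pem=<path_to_cluster_intermediate_key> \
        --from-file=root-cert.pem=<path_to_shared_root_cert> \
        --from-file=cert-chain.pem=<path_to_cert_chain>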

  3. Save the name and kubeconfig context of a workload cluster in the following environment variables. Each time you repeat the steps in this guide, you change these variables to the next workload cluster’s name and context.

      export CLUSTER_NAME=<workload-cluster-name>
    export CLUSTER_CONTEXT=<workload-cluster-context>
      
  4. Install the Gloo Operator and deploy a managed istiod control plane.

    1. Install the Gloo Operator to the gloo-mesh namespace. This operator deploys and manages your Istio installation. Note that if you already installed Gloo Mesh, you can optionally reference the secret that Gloo Mesh automatically creates for your license by including the --set manager.env.SOLO_ISTIO_LICENSE_KEY_SECRET_REF=gloo-mesh/license-keys flag instead.

        helm install gloo-operator oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
        --version 0.2.3 \
        -n gloo-mesh \
        --create-namespace \
        --set manager.env.SOLO_ISTIO_LICENSE_KEY=${GLOO_MESH_LICENSE_KEY}
        
    2. Verify that the operator pod is running.

        kubectl get pods -n gloo-mesh --context ${CLUSTER_CONTEXT} -l app.kubernetes.io/name=gloo-operator
        

      Example output:

        gloo-operator-78d58d5c7b-lzbr5     1/1     Running   0          48s
        
    3. Create a ServiceMeshController custom resource to configure an Istio installation. For more information about the configurable fields, see the installation guide.

        kubectl --context ${CLUSTER_CONTEXT} apply -n gloo-mesh -f -<<EOF
      apiVersion: operator.gloo.solo.io/v1
      kind: ServiceMeshController
      metadata:
        name: managed-istio
        labels:
          app.kubernetes.io/name: managed-istio
      spec:
        cluster: ${CLUSTER_NAME}
        network: ${CLUSTER_NAME}
        dataplaneMode: Ambient # required for multicluster setups
        installNamespace: istio-system
        version: ${ISTIO_VERSION}
        # Uncomment if you installed the istio-cni
        # onConflict: Force
      EOF
        
    4. Verify that the ServiceMeshController is ready. In the Status section of the output, make sure that all statuses are True, and that the phase is SUCCEEDED.

        kubectl --context ${CLUSTER_CONTEXT} describe servicemeshcontroller -n gloo-mesh managed-istio
        

      Example output:

        ...
      Status:
        Conditions:
          Last Transition Time:  2024-12-27T20:47:01Z
          Message:               Manifests initialized
          Observed Generation:   1
          Reason:                ManifestsInitialized
          Status:                True
          Type:                  Initialized
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               CRDs installed
          Observed Generation:   1
          Reason:                CRDInstalled
          Status:                True
          Type:                  CRDInstalled
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               Deployment succeeded
          Observed Generation:   1
          Reason:                DeploymentSucceeded
          Status:                True
          Type:                  ControlPlaneDeployed
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               Deployment succeeded
          Observed Generation:   1
          Reason:                DeploymentSucceeded
          Status:                True
          Type:                  CNIDeployed
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               Deployment succeeded
          Observed Generation:   1
          Reason:                DeploymentSucceeded
          Status:                True
          Type:                  WebhookDeployed
          Last Transition Time:  2024-12-27T20:47:02Z
          Message:               All conditions are met
          Observed Generation:   1
          Reason:                SystemReady
          Status:                True
          Type:                  Ready
        Phase:                   SUCCEEDED
      Events:                    <none>
        
  5. Migrate your Istio-managed workloads to the managed gloo control plane. The steps vary based on whether you labeled workload namespaces with revision labels, such as istio.io/rev=1-25, or with injection labels, such as istio-injection=enabled.
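
     In both cases, the relabeling mirrors the single-cluster guides earlier on this page. The following recap is a sketch that runs against the current workload cluster.

      # Revision-labeled namespaces: overwrite the old revision label with the gloo revision.
      kubectl --context ${CLUSTER_CONTEXT} label namespace <namespace> istio.io/rev=gloo --overwrite

      # Injection-labeled namespaces: add the gloo revision label, then remove the injection label.
      kubectl --context ${CLUSTER_CONTEXT} label namespace <namespace> istio.io/rev=gloo --overwrite
      kubectl --context ${CLUSTER_CONTEXT} label namespace <namespace> istio-injection-

      # Restart the workloads so that the managed control plane injects them.
      kubectl --context ${CLUSTER_CONTEXT} rollout restart deployment -n <namespace>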

  6. For each ingress or egress gateway that the gateway lifecycle manager created, create Helm releases to deploy new Istio gateways to the gloo revision.

    1. For ingress gateways: Create a new ingress gateway Helm release for the gloo control plane revision. Note that if you maintain your own services to expose the gateways, you can disable the load balancer services that are defined by default in the gateway Helm release by including the --set service.type=None flag in this command. Then, you can switch from the old to the new gateways by updating the load balancer services to point to the new gateways.

        helm install istio-ingressgateway istio/gateway \
        --kube-context ${CLUSTER_CONTEXT} \
        --version ${ISTIO_VERSION} \
        --namespace istio-ingress \
        --create-namespace \
        --set "revision=gloo"
        
    2. Verify that the gateways are successfully deployed. In the output, the name of istiod includes the gloo revision, indicating that the gateways are included in the Gloo-revisioned data plane.

        istioctl --context ${CLUSTER_CONTEXT} proxy-status | grep gateway
        

      Example output:

        NAME                                                  CLUSTER    ...     ISTIOD                           VERSION
      istio-eastwestgateway-bdc4fd65f-ftmz9.istio-eastwest  cluster1    ...     istiod-gloo-6495985689-rkwwd     1.25.2-solo
      istio-ingressgateway-bdc4fd65f-ftmz9.istio-ingress    cluster1    ...     istiod-gloo-6495985689-rkwwd     1.25.2-solo
        
  7. Verify that Istio now routes traffic requests to apps in your mesh through the new gateway that you deployed. For example, if you deployed the Bookinfo sample app, you can send a curl request to the product page.

      kubectl --context ${CLUSTER_CONTEXT} port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
    curl -v http://localhost:8080/productpage
      
  8. Optional: If you previously installed the Istio CNI pods with a Helm chart, uninstall the release and delete the secret stored by Helm.

      helm uninstall <cni_release> -n istio-system
    kubectl delete secret "sh.helm.release.v1.istio-cni.v1" -n istio-system
      
  9. Apply the CRDs for the Kubernetes Gateway API to your cluster, which are required to create components such as waypoint proxies for L7 traffic policies, gateways with the Gateway resource, and more.

      kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml --context ${CLUSTER_CONTEXT}
      
  10. Create an east-west gateway in the istio-eastwest namespace. An east-west gateway facilitates traffic between services in each cluster in your multicluster mesh.

    • You can use the following istioctl command to quickly create the east-west gateway.
        kubectl create namespace istio-eastwest --context ${CLUSTER_CONTEXT}
      istioctl multicluster expose --namespace istio-eastwest --context ${CLUSTER_CONTEXT}
        
    • To take a look at the Gateway resource that this command creates, you can include the --generate flag in the command.
        kubectl create namespace istio-eastwest --context ${CLUSTER_CONTEXT}
      istioctl multicluster expose --namespace istio-eastwest --context ${CLUSTER_CONTEXT} --generate
        
      In this example output, the istio-eastwest gatewayClassName is used, which is included by default when you install Istio in ambient mode.
        apiVersion: gateway.networking.k8s.io/v1
      kind: Gateway
      metadata:
        labels:
          istio.io/expose-istiod: "15012"
          topology.istio.io/network: "<cluster_network_name>"
        name: istio-eastwest
        namespace: istio-eastwest
      spec:
        gatewayClassName: istio-eastwest
        listeners:
        - name: cross-network
          port: 15008
          protocol: HBONE
          tls:
            mode: Passthrough
        - name: xds-tls
          port: 15012
          protocol: TLS
          tls:
            mode: Passthrough
        
  11. Verify that the east-west gateway is successfully deployed.

      kubectl get pods -n istio-eastwest --context $CLUSTER_CONTEXT
      
  12. If you have Istio installations in multiple clusters that the GatewayLifecycleManager and IstioLifecycleManager managed, be sure to repeat steps 3 to 11 in each cluster before you continue, and reset the $CLUSTER_NAME and $CLUSTER_CONTEXT environment variables to each workload cluster's name and context as you go. The later Delete previous resources steps remove the GatewayLifecycleManager and IstioLifecycleManager resources from the management cluster, which uninstalls the old Istio installations from every workload cluster in your multicluster setup.

Link clusters to enable cross-cluster service discovery and allow traffic to be routed through east-west gateways across clusters.

  1. Verify that the contexts for the clusters that you want to include in the multicluster mesh are listed in your kubeconfig file.

      kubectl config get-contexts
      
    • In the output, note the names of the cluster contexts, which you use in the next step to link the clusters.
    • If you have multiple kubeconfig files, you can generate a merged kubeconfig file by running the following command.
        KUBECONFIG=<kubeconfig_file1>.yaml:<file2>.yaml:<file3>.yaml kubectl config view --flatten
        
  2. Using the names of the cluster contexts, link the clusters so that they can communicate. Note that you can either link the clusters bi-directionally or asymmetrically. In a standard bi-directional setup, services in any of the linked clusters can send requests to and receive requests from the services in any of the other linked clusters. In an asymmetrical setup, you allow one cluster to send requests to another cluster, but the other cluster cannot send requests back to the first cluster.
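
     For example, a bi-directional link between two clusters might look like the following sketch. This assumes the multicluster link subcommand that ships with the Solo distribution of istioctl, the same distribution that you used for istioctl multicluster expose earlier; check istioctl multicluster link --help for the flags that control asymmetric linking.

      istioctl multicluster link --contexts=<cluster1_context>,<cluster2_context>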

Delete previous resources

  1. Now that your multicluster mesh is set up, delete the GatewayLifecycleManager and IstioLifecycleManager managed installations. The steps vary based on whether you created the resources directly, or used the istioInstallations section of the gloo-platform Helm chart.
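
     If you created the resources directly, the cleanup might look like the following sketch. Run it against the cluster where the lifecycle manager resources live, typically your management cluster, and replace the placeholder names with the names from your setup. If you used the istioInstallations section of the gloo-platform Helm chart, you typically remove that section in a Helm upgrade instead.

      kubectl get istiolifecyclemanagers,gatewaylifecyclemanagers -A
      kubectl delete gatewaylifecyclemanager <gateway_lifecycle_manager> -n gloo-mesh
      kubectl delete istiolifecyclemanager <istio_lifecycle_manager> -n gloo-mesh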

  2. Send another request to your apps to verify that traffic is still flowing.

      kubectl --context ${CLUSTER_CONTEXT} port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
    curl -v http://localhost:8080/productpage
      

The migration of your service mesh is now complete!

Next

  • Launch the Gloo UI to review the Istio insights that were captured for your service mesh setup. Gloo Mesh comes with an insights engine that automatically analyzes your Istio setups for health issues. These issues are displayed in the UI along with recommendations to harden your Istio setups. The insights give you a checklist to address issues that might otherwise be hard to detect across your environment. For more information, see Insights.
  • When it’s time to upgrade your service mesh, you can perform a safe in-place upgrade by using the Gloo Operator.