Migrate to Gloo-managed service meshes
Switch from your existing sidecar installations to Gloo-managed service meshes.
Overview
If you have existing Istio installations and want to switch to using the Gloo Operator for service mesh management, you can use one of the following guides:
- Revisioned Helm: You installed Istio with Helm. To add namespaces to the service mesh, you use revision labels such as `istio.io/rev=1-25`.
- Revisionless Helm: You installed Istio with Helm. To add namespaces to the service mesh, you use the sidecar injection label, `istio-injection=enabled`.
- Istio lifecycle manager: You might have installed Istio and gateways by using Solo's Istio lifecycle manager, such as by using the default settings in the getting started guides, the `istioInstallations` Helm settings in your Gloo Helm chart, or by directly creating IstioLifecycleManager and GatewayLifecycleManager custom resources.
The Gloo Operator uses the `gloo` revision by default to manage Istio installations in your cluster. This revision is used to facilitate the initial migration to the Gloo Operator. However, after migration, in-place upgrades are recommended for further operator-managed changes. For more information, see the Gloo Operator upgrade guide.
Migrate from revisioned Helm installations
If you currently install Istio by using Helm and use revisions to manage your installations, you can migrate from your community Istio revision, such as `1-25`, to the `gloo` revision. The Gloo Operator uses the `gloo` revision by default to manage Istio installations in your cluster.
Save your Istio installation values in environment variables.
If you do not already have a license, decide the level of licensed features that you want, and contact an account representative to obtain the license.
Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions table.
Save each value in an environment variable. If you prefer to specify license keys in a secret instead, see Licensing. Note that the Gloo Operator installs the Solo distribution of Istio by default for the version that you specify, so neither the `-solo` image tag nor the repo key is required.

```shell
export GLOO_MESH_LICENSE_KEY=<license_key>
export ISTIO_VERSION=1.25.2
```
Install or upgrade `istioctl` with the same version of Istio that you saved.

```shell
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=${ISTIO_VERSION} sh -
cd istio-${ISTIO_VERSION}
export PATH=$PWD/bin:$PATH
```
Install the Gloo Operator and deploy a managed istiod control plane.
Install the Gloo Operator to the `gloo-mesh` namespace. This operator deploys and manages your Istio installation. Note that if you already installed Gloo Mesh, you can optionally reference the secret that Gloo Mesh automatically creates for your license in the `--set manager.env.SOLO_ISTIO_LICENSE_KEY_SECRET_REF=gloo-mesh/license-keys` flag instead.

```shell
helm install gloo-operator oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
  --version 0.2.3 \
  -n gloo-mesh \
  --create-namespace \
  --set manager.env.SOLO_ISTIO_LICENSE_KEY=${GLOO_MESH_LICENSE_KEY}
```
Verify that the operator pod is running.
```shell
kubectl get pods -n gloo-mesh -l app.kubernetes.io/name=gloo-operator
```
Example output:
```
gloo-operator-78d58d5c7b-lzbr5   1/1     Running   0          48s
```
Create a ServiceMeshController custom resource to configure an Istio installation. For more information about the configurable fields, see the installation guide.
```shell
kubectl apply -n gloo-mesh -f - <<EOF
apiVersion: operator.gloo.solo.io/v1
kind: ServiceMeshController
metadata:
  name: managed-istio
  labels:
    app.kubernetes.io/name: managed-istio
spec:
  cluster: $CLUSTER_NAME
  dataplaneMode: Sidecar
  version: ${ISTIO_VERSION}
  # Uncomment if you installed the istio-cni
  # onConflict: Force
EOF
```
If you currently install the `istio-cni` plugin by using Helm, you must directly replace the CNI to avoid downtime by setting `onConflict: Force`.

If you set the `installNamespace` to a namespace other than `gloo-system`, `gloo-mesh`, or `istio-system`, you must include the `--set manager.env.WATCH_NAMESPACES=<namespace>` setting.

Verify that the ServiceMeshController is ready. In the `Status` section of the output, make sure that all statuses are `True`, and that the phase is `SUCCEEDED`.

```shell
kubectl describe servicemeshcontroller -n gloo-mesh managed-istio
```
Example output:
```
...
Status:
  Conditions:
    Last Transition Time:  2024-12-27T20:47:01Z
    Message:               Manifests initialized
    Observed Generation:   1
    Reason:                ManifestsInitialized
    Status:                True
    Type:                  Initialized
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               CRDs installed
    Observed Generation:   1
    Reason:                CRDInstalled
    Status:                True
    Type:                  CRDInstalled
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               Deployment succeeded
    Observed Generation:   1
    Reason:                DeploymentSucceeded
    Status:                True
    Type:                  ControlPlaneDeployed
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               Deployment succeeded
    Observed Generation:   1
    Reason:                DeploymentSucceeded
    Status:                True
    Type:                  CNIDeployed
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               Deployment succeeded
    Observed Generation:   1
    Reason:                DeploymentSucceeded
    Status:                True
    Type:                  WebhookDeployed
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               All conditions are met
    Observed Generation:   1
    Reason:                SystemReady
    Status:                True
    Type:                  Ready
  Phase:  SUCCEEDED
Events:   <none>
```
Migrate your Istio-managed workloads to the managed `gloo` control plane.

Get the workload namespaces that you previously labeled with an Istio revision, such as `1-25` in the following example.

```shell
kubectl get namespaces -l istio.io/rev=1-25
```
Overwrite the revision label for each of the workload namespaces with the `gloo` revision label.

```shell
kubectl label namespace <namespace> istio.io/rev=gloo --overwrite
```
Restart the workloads in each labeled namespace so that they are managed by the Gloo Operator Istio installation.
- To restart all deployments in the namespace:

  ```shell
  kubectl rollout restart deployment -n <namespace>
  ```

- To restart individual deployments in the namespace, such as to test a small number of deployments or to stagger the restart process:

  ```shell
  kubectl rollout restart deployment <deployment> -n <namespace>
  ```
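If you have many workload namespaces, the relabel and restart steps above can be batched. The following is a minimal sketch, not from the official docs: the `migrate_namespaces` helper name is an assumption, and it restarts every deployment in each namespace at once, so stagger the restarts instead if your workloads cannot tolerate simultaneous rollouts.

```shell
# Hypothetical helper that batches the two steps above: pass the old
# revision label value (for example, 1-25) as the first argument.
migrate_namespaces() {
  local old_rev=$1
  local ns
  for ns in $(kubectl get namespaces -l "istio.io/rev=${old_rev}" \
      -o jsonpath='{.items[*].metadata.name}'); do
    # Overwrite the revision label, then restart all deployments so that
    # new pods get sidecars from the gloo-revisioned control plane.
    kubectl label namespace "$ns" istio.io/rev=gloo --overwrite
    kubectl rollout restart deployment -n "$ns"
  done
}
# Example: migrate_namespaces 1-25
```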
Verify that the workloads are successfully migrated. In the output, the name of istiod includes the `gloo` revision, indicating that the workload is now part of the Gloo-revisioned service mesh.

```shell
istioctl proxy-status
```
Example output:
```
NAME                                      CLUSTER    ...  ISTIOD                         VERSION
details-v1-7b6df9d8c8-s6kg5.bookinfo      cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
productpage-v1-bb494b7d7-xbtxr.bookinfo   cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
ratings-v1-55b478cfb6-wv2m5.bookinfo      cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
reviews-v1-6dfcc9fc7d-7k6qh.bookinfo      cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
reviews-v2-7dddd799b5-m5n2z.bookinfo      cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
```
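To spot-check output like this programmatically, you can filter the `istioctl proxy-status` listing. This is a hedged sketch: the `all_on_gloo_revision` helper is an assumption, and it only greps for the `istiod-gloo` prefix in each data row.

```shell
# Hypothetical check: read istioctl proxy-status output on stdin and exit
# non-zero if any workload row does not report a gloo-revisioned istiod.
all_on_gloo_revision() {
  awk 'NR > 1 && NF > 0 && $0 !~ /istiod-gloo/ { bad = 1 } END { exit bad }'
}
# Example: istioctl proxy-status | all_on_gloo_revision && echo "all migrated"
```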
Update any existing Istio ingress or egress gateways to the `gloo` revision.

Get the name and namespace of your gateway Helm release.

```shell
helm ls -A
```
Get the current values for the gateway Helm release in your cluster.
```shell
helm get values <gateway_release> -n <namespace> -o yaml > gateway.yaml
```
Upgrade your gateway Helm release.
```shell
helm upgrade -i <gateway_release> istio/gateway \
  --version 1.25.2 \
  --namespace <namespace> \
  --set "revision=gloo" \
  -f gateway.yaml
```
Verify that the gateway is successfully migrated. In the output, the name of istiod includes the `gloo` revision, indicating that the gateway is now included in the Gloo-revisioned data plane.

```shell
istioctl proxy-status | grep gateway
```
Example output:
```
NAME                                                 CLUSTER    ...  ISTIOD                         VERSION
istio-ingressgateway-bdc4fd65f-ftmz9.istio-ingress   cluster1   ...  istiod-gloo-6495985689-rkwwd   1.25.2-solo
```
Verify that Istio still correctly routes traffic requests to apps in your mesh. For example, if you deployed the Bookinfo sample app, you can send a curl request to the product page.
```shell
kubectl port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
curl -v http://localhost:8080/productpage
```
Get the name and namespace of your previous istiod Helm release.
```shell
helm ls -A
```
Uninstall the unmanaged control plane.
```shell
helm uninstall <istiod_release> -n istio-system
```
Optional: If you previously installed the Istio CNI pods with a Helm chart, uninstall the release and delete the secret stored by Helm.
```shell
helm uninstall <cni_release> -n istio-system
kubectl delete secret "sh.helm.release.v1.istio-cni.v1" -n istio-system
```
Send another request to your apps to verify that traffic is still flowing.
```shell
kubectl port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
curl -v http://localhost:8080/productpage
```
The migration of your service mesh is now complete!
Migrate from revisionless Helm installations
If you currently install Istio by using Helm and do not use revisions to manage your installations, such as by labeling namespaces with `istio-injection: enabled`, you can migrate the management of the `MutatingWebhookConfiguration` to the Gloo Operator. The Gloo Operator uses the `gloo` revision by default to manage Istio installations in your cluster.
Save your Istio installation values in environment variables.
If you do not already have a license, decide the level of licensed features that you want, and contact an account representative to obtain the license.
Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions table.
Save each value in an environment variable. If you prefer to specify license keys in a secret instead, see Licensing. Note that the Gloo Operator installs the Solo distribution of Istio by default for the version that you specify, so neither the `-solo` image tag nor the repo key is required.

```shell
export GLOO_MESH_LICENSE_KEY=<license_key>
export ISTIO_VERSION=1.25.2
```
Install or upgrade `istioctl` with the same version of Istio that you saved.

```shell
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=${ISTIO_VERSION} sh -
cd istio-${ISTIO_VERSION}
export PATH=$PWD/bin:$PATH
```
Install the Gloo Operator and deploy a managed istiod control plane.
Install the Gloo Operator to the `gloo-mesh` namespace. This operator deploys and manages your Istio installation. Note that if you already installed Gloo Mesh, you can optionally reference the secret that Gloo Mesh automatically creates for your license in the `--set manager.env.SOLO_ISTIO_LICENSE_KEY_SECRET_REF=gloo-mesh/license-keys` flag instead.

```shell
helm install gloo-operator oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
  --version 0.2.3 \
  -n gloo-mesh \
  --create-namespace \
  --set manager.env.SOLO_ISTIO_LICENSE_KEY=${GLOO_MESH_LICENSE_KEY}
```
Verify that the operator pod is running.
```shell
kubectl get pods -n gloo-mesh -l app.kubernetes.io/name=gloo-operator
```
Example output:
```
gloo-operator-78d58d5c7b-lzbr5   1/1     Running   0          48s
```
Create a ServiceMeshController custom resource to configure an Istio installation. For more information about the configurable fields, see the installation guide.
```shell
kubectl apply -n gloo-mesh -f - <<EOF
apiVersion: operator.gloo.solo.io/v1
kind: ServiceMeshController
metadata:
  name: managed-istio
  labels:
    app.kubernetes.io/name: managed-istio
spec:
  cluster: $CLUSTER_NAME
  dataplaneMode: Sidecar
  version: ${ISTIO_VERSION}
  # Uncomment if you installed the istio-cni
  # onConflict: Force
EOF
```
If you currently install the `istio-cni` plugin by using Helm, you must directly replace the CNI to avoid downtime by setting `onConflict: Force`.

If you set the `installNamespace` to a namespace other than `gloo-system`, `gloo-mesh`, or `istio-system`, you must include the `--set manager.env.WATCH_NAMESPACES=<namespace>` setting.

Describe the ServiceMeshController and note that it cannot take over the `istio-injection: enabled` label until the webhook is deleted.

```shell
kubectl describe ServiceMeshController -n gloo-mesh managed-istio
```
Example output:
```
- lastTransitionTime: "2024-12-12T19:41:52Z"
  message: MutatingWebhookConfiguration istio-sidecar-injector references default
    Istio revision istio-system/istiod; must be deleted before migration
  observedGeneration: 1
  reason: ErrorConflictDetected
  status: "False"
  type: WebhookDeployed
```
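Before you delete anything, you can list the injection webhooks in your cluster together with the revision each one serves. A minimal sketch, assuming the webhooks carry the standard `app=sidecar-injector` and `istio.io/rev` labels that Istio applies by default; the `list_injector_webhooks` helper name is an assumption.

```shell
# Hypothetical helper: show each injection webhook and its Istio revision,
# so you can confirm which one conflicts with the gloo revision.
list_injector_webhooks() {
  kubectl get mutatingwebhookconfigurations -l app=sidecar-injector \
    -o custom-columns='NAME:.metadata.name,REVISION:.metadata.labels.istio\.io/rev'
}
```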
Delete the existing webhook.
```shell
kubectl delete mutatingwebhookconfiguration istio-sidecar-injector -n istio-system
```
Verify that the ServiceMeshController is now healthy. In the `Status` section of the output, make sure that all statuses are `True`, and that the phase is `SUCCEEDED`.

```shell
kubectl describe servicemeshcontroller -n gloo-mesh managed-istio
```
Example output:
```
...
Status:
  Conditions:
    Last Transition Time:  2024-12-27T20:47:01Z
    Message:               Manifests initialized
    Observed Generation:   1
    Reason:                ManifestsInitialized
    Status:                True
    Type:                  Initialized
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               CRDs installed
    Observed Generation:   1
    Reason:                CRDInstalled
    Status:                True
    Type:                  CRDInstalled
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               Deployment succeeded
    Observed Generation:   1
    Reason:                DeploymentSucceeded
    Status:                True
    Type:                  ControlPlaneDeployed
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               Deployment succeeded
    Observed Generation:   1
    Reason:                DeploymentSucceeded
    Status:                True
    Type:                  CNIDeployed
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               Deployment succeeded
    Observed Generation:   1
    Reason:                DeploymentSucceeded
    Status:                True
    Type:                  WebhookDeployed
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               All conditions are met
    Observed Generation:   1
    Reason:                SystemReady
    Status:                True
    Type:                  Ready
  Phase:  SUCCEEDED
Events:   <none>
```
Migrate your Istio-managed workloads to the managed control plane.
Get the workload namespaces that you previously included in the service mesh by using the `istio-injection=enabled` label.

```shell
kubectl get namespaces -l istio-injection=enabled
```
Label each workload namespace with the `gloo` revision label.

```shell
kubectl label namespace <namespace> istio.io/rev=gloo --overwrite
```
Restart your workloads so that they are managed by the Gloo Operator Istio installation.
- To restart all deployments in the namespace:

  ```shell
  kubectl rollout restart deployment -n <namespace>
  ```

- To restart individual deployments in the namespace, such as to test a small number of deployments or to stagger the restart process:

  ```shell
  kubectl rollout restart deployment <deployment> -n <namespace>
  ```
Verify that the workloads are successfully migrated. In the output, the name of istiod includes the `gloo` revision, indicating that the workload is now part of the Gloo-revisioned service mesh.

```shell
istioctl proxy-status
```
Example output:
```
NAME                                      CLUSTER    ...  ISTIOD                         VERSION
details-v1-7b6df9d8c8-s6kg5.bookinfo      cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
productpage-v1-bb494b7d7-xbtxr.bookinfo   cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
ratings-v1-55b478cfb6-wv2m5.bookinfo      cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
reviews-v1-6dfcc9fc7d-7k6qh.bookinfo      cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
reviews-v2-7dddd799b5-m5n2z.bookinfo      cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
```
Remove the `istio-injection=enabled` label from the workload namespaces.

```shell
kubectl label ns <namespace> istio-injection-
```
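Because `kubectl label` accepts several label changes in one invocation, the add-`gloo`-revision and remove-injection-label steps can also be combined per namespace. A hedged sketch, with `relabel_namespace` as an assumed helper name; restart the namespace's workloads afterward as in the earlier step so that pods pick up the new injector.

```shell
# Hypothetical one-call variant: set the gloo revision label and remove the
# istio-injection label in a single kubectl invocation.
relabel_namespace() {
  kubectl label namespace "$1" istio.io/rev=gloo istio-injection- --overwrite
}
# Example: relabel_namespace bookinfo
```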
Migrate any existing Istio ingress or egress gateways to the managed `gloo` control plane.

Get the deployment name of your gateway.

```shell
kubectl get deploy -n <gateway_namespace>
```
Update each Istio gateway by restarting it.
```shell
kubectl rollout restart deploy <gateway_name> -n <namespace>
```
Verify that the gateway is successfully migrated. In the output, the name of istiod includes the `gloo` revision, indicating that the gateway is now included in the Gloo-revisioned data plane.

```shell
istioctl proxy-status | grep gateway
```
Example output:
```
NAME                                                 CLUSTER    ...  ISTIOD                         VERSION
istio-ingressgateway-bdc4fd65f-ftmz9.istio-ingress   cluster1   ...  istiod-gloo-6495985689-rkwwd   1.25.2-solo
```
Verify that Istio still correctly routes traffic requests to apps in your mesh. For example, if you deployed the Bookinfo sample app, you can send a curl request to the product page.
```shell
kubectl port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
curl -v http://localhost:8080/productpage
```
Get the name and namespace of your previous istiod Helm release.
```shell
helm ls -A
```
Uninstall the unmanaged control plane.
```shell
helm uninstall <istiod_release> -n istio-system
```
Optional: If you previously installed the Istio CNI pods with a Helm chart, uninstall the release and delete the secret stored by Helm.
```shell
helm uninstall <cni_release> -n istio-system
kubectl delete secret "sh.helm.release.v1.istio-cni.v1" -n istio-system
```
Send another request to your apps to verify that traffic is still flowing.
```shell
kubectl port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
curl -v http://localhost:8080/productpage
```
The migration of your service mesh is now complete!
Migrate from the Istio lifecycle manager
You might have previously installed Istio and gateways by using Solo's Istio lifecycle manager, such as by using the default settings in the getting started guides, the `istioInstallations` Helm settings in your Gloo Helm chart, or by directly creating IstioLifecycleManager and GatewayLifecycleManager custom resources. You can migrate from the Istio revision that your lifecycle manager currently runs, such as `1-25`, to the revision that the Gloo Operator uses by default to manage Istio installations in your cluster, `gloo`.
Single cluster
Save your Istio installation values in environment variables.
If you do not already have a license, decide the level of licensed features that you want, and contact an account representative to obtain the license.
Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions table.
Save each value in an environment variable. If you prefer to specify license keys in a secret instead, see Licensing. Note that the Gloo Operator installs the Solo distribution of Istio by default for the version that you specify, so neither the `-solo` image tag nor the repo key is required.

```shell
export GLOO_MESH_LICENSE_KEY=<license_key>
export ISTIO_VERSION=1.25.2
```
Install or upgrade `istioctl` with the same version of Istio that you saved.

```shell
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=${ISTIO_VERSION} sh -
cd istio-${ISTIO_VERSION}
export PATH=$PWD/bin:$PATH
```
Install the Gloo Operator and deploy a managed istiod control plane.
Install the Gloo Operator to the `gloo-mesh` namespace. This operator deploys and manages your Istio installation. Note that if you already installed Gloo Mesh, you can optionally reference the secret that Gloo Mesh automatically creates for your license in the `--set manager.env.SOLO_ISTIO_LICENSE_KEY_SECRET_REF=gloo-mesh/license-keys` flag instead.

```shell
helm install gloo-operator oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
  --version 0.2.3 \
  -n gloo-mesh \
  --create-namespace \
  --set manager.env.SOLO_ISTIO_LICENSE_KEY=${GLOO_MESH_LICENSE_KEY}
```
Verify that the operator pod is running.
```shell
kubectl get pods -n gloo-mesh -l app.kubernetes.io/name=gloo-operator
```
Example output:
```
gloo-operator-78d58d5c7b-lzbr5   1/1     Running   0          48s
```
Create a ServiceMeshController custom resource to configure an Istio installation. For more information about the configurable fields, see the installation guide.
```shell
kubectl apply -n gloo-mesh -f - <<EOF
apiVersion: operator.gloo.solo.io/v1
kind: ServiceMeshController
metadata:
  name: managed-istio
  labels:
    app.kubernetes.io/name: managed-istio
spec:
  cluster: $CLUSTER_NAME
  dataplaneMode: Sidecar
  version: ${ISTIO_VERSION}
  # Uncomment if you installed the istio-cni
  # onConflict: Force
EOF
```
If you currently install the `istio-cni` plugin by using Helm, you must directly replace the CNI to avoid downtime by setting `onConflict: Force`.

If you set the `installNamespace` to a namespace other than `gloo-system`, `gloo-mesh`, or `istio-system`, you must include the `--set manager.env.WATCH_NAMESPACES=<namespace>` setting.

Verify that the ServiceMeshController is ready. In the `Status` section of the output, make sure that all statuses are `True`, and that the phase is `SUCCEEDED`.

```shell
kubectl describe servicemeshcontroller -n gloo-mesh managed-istio
```
Example output:
```
...
Status:
  Conditions:
    Last Transition Time:  2024-12-27T20:47:01Z
    Message:               Manifests initialized
    Observed Generation:   1
    Reason:                ManifestsInitialized
    Status:                True
    Type:                  Initialized
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               CRDs installed
    Observed Generation:   1
    Reason:                CRDInstalled
    Status:                True
    Type:                  CRDInstalled
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               Deployment succeeded
    Observed Generation:   1
    Reason:                DeploymentSucceeded
    Status:                True
    Type:                  ControlPlaneDeployed
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               Deployment succeeded
    Observed Generation:   1
    Reason:                DeploymentSucceeded
    Status:                True
    Type:                  CNIDeployed
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               Deployment succeeded
    Observed Generation:   1
    Reason:                DeploymentSucceeded
    Status:                True
    Type:                  WebhookDeployed
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               All conditions are met
    Observed Generation:   1
    Reason:                SystemReady
    Status:                True
    Type:                  Ready
  Phase:  SUCCEEDED
Events:   <none>
```
Migrate your Istio-managed workloads to the managed `gloo` control plane.

Get the workload namespaces that you previously labeled with an Istio revision, such as `1-25` in the following example.

```shell
kubectl get namespaces -l istio.io/rev=1-25
```
Overwrite the revision label for each of the workload namespaces with the `gloo` revision label.

```shell
kubectl label namespace <namespace> istio.io/rev=gloo --overwrite
```
Restart the workloads in each labeled namespace so that they are managed by the Gloo Operator Istio installation.
- To restart all deployments in the namespace:

  ```shell
  kubectl rollout restart deployment -n <namespace>
  ```

- To restart individual deployments in the namespace, such as to test a small number of deployments or to stagger the restart process:

  ```shell
  kubectl rollout restart deployment <deployment> -n <namespace>
  ```
Verify that the workloads are successfully migrated. In the output, the name of istiod includes the `gloo` revision, indicating that the workload is now part of the Gloo-revisioned service mesh.

```shell
istioctl proxy-status
```
Example output:
```
NAME                                      CLUSTER    ...  ISTIOD                         VERSION
details-v1-7b6df9d8c8-s6kg5.bookinfo      cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
productpage-v1-bb494b7d7-xbtxr.bookinfo   cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
ratings-v1-55b478cfb6-wv2m5.bookinfo      cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
reviews-v1-6dfcc9fc7d-7k6qh.bookinfo      cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
reviews-v2-7dddd799b5-m5n2z.bookinfo      cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.25.2-solo
```
For each gateway that the gateway lifecycle manager created, create Helm releases to deploy new Istio gateways to the `gloo` revision.

Create a new ingress gateway Helm release for the `gloo` control plane revision. Note that if you maintain your own services to expose gateways, you can disable the load balancer services that are defined by default in the gateway Helm release by including the `--set service.type=None` flag in this command. Then, you can switch from the old to the new gateways by updating the load balancer services to point to the new gateways.

```shell
helm install istio-ingressgateway istio/gateway \
  --version ${ISTIO_VERSION} \
  --namespace istio-ingress \
  --set "revision=gloo"
```
Verify that the gateway is successfully deployed. In the output, the name of istiod includes the `gloo` revision, indicating that the gateway is included in the Gloo-revisioned data plane.

```shell
istioctl proxy-status | grep gateway
```
Example output:
```
NAME                                                 CLUSTER    ...  ISTIOD                         VERSION
istio-ingressgateway-bdc4fd65f-ftmz9.istio-ingress   cluster1   ...  istiod-gloo-6495985689-rkwwd   1.25.2-solo
```
Verify that Istio now routes traffic requests to apps in your mesh through the new gateway that you deployed. For example, if you deployed the Bookinfo sample app, you can send a curl request to the product page.
```shell
kubectl port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
curl -v http://localhost:8080/productpage
```
Delete the GatewayLifecycleManager and IstioLifecycleManager managed installations. The steps vary based on whether you created the resources directly, or used the `istioInstallations` section of the `gloo-platform` Helm chart.

Optional: If you previously installed the Istio CNI pods with a Helm chart, uninstall the release and delete the secret stored by Helm.
```shell
helm uninstall <cni_release> -n istio-system
kubectl delete secret "sh.helm.release.v1.istio-cni.v1" -n istio-system
```
Send another request to your apps to verify that traffic is still flowing.
```shell
kubectl port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
curl -v http://localhost:8080/productpage
```
The migration of your service mesh is now complete!
Multicluster
Considerations
Before you install a multicluster sidecar mesh, review the following considerations and requirements.
Version and license requirements
- In Gloo Mesh version 2.7 and later, multicluster setups require the Solo distribution of Istio version 1.24.3 or later (`1.24.3-solo`), including the Solo distribution of `istioctl`.
- This feature requires your mesh to be installed with the Solo distribution of Istio and an Enterprise-level license for Gloo Mesh. Contact your account representative to obtain a valid license.
Components
In the following steps, you install the Istio ambient components in each workload cluster to successfully create east-west gateways and establish multicluster peering, even if you plan to use a sidecar mesh. However, sidecar mesh setups continue to use sidecar injection for your workloads. Your workloads are not added to an ambient mesh. For more information about running both ambient and sidecar components in one mesh setup, see Ambient-sidecar interoperability.
Migrate each service mesh
Save your Istio installation values in environment variables.
Set your Enterprise level license for Gloo Mesh as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing.
```shell
export GLOO_MESH_LICENSE_KEY=<enterprise_license_key>
```
Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions. In Gloo Mesh version 2.7 and later, multicluster setups require version 1.24.3 or later.
Save the details for the version of the Solo distribution of Istio that you want to install.
Get the Solo distribution of Istio binary and install `istioctl`, which you use for multicluster linking and gateway commands.

Get the OS and architecture that you use on your machine.

```shell
OS=$(uname | tr '[:upper:]' '[:lower:]' | sed -E 's/darwin/osx/')
ARCH=$(uname -m | sed -E 's/aarch/arm/; s/x86_64/amd64/; s/armv7l/armv7/')
echo $OS
echo $ARCH
```
Download the Solo distribution of Istio binary and install `istioctl`.

```shell
mkdir -p ~/.istioctl/bin
curl -sSL https://storage.googleapis.com/istio-binaries-$REPO_KEY/$ISTIO_IMAGE/istioctl-$ISTIO_IMAGE-$OS-$ARCH.tar.gz | tar xzf - -C ~/.istioctl/bin
chmod +x ~/.istioctl/bin/istioctl
export PATH=${HOME}/.istioctl/bin:${PATH}
```
Verify that the `istioctl` client runs the Solo distribution of Istio that you want to install.

```shell
istioctl version --remote=false
```
Example output:
```
client version: 1.25.2-solo
```
Each cluster in the multicluster setup must have a shared root of trust. This can be achieved by providing a root certificate signed by a PKI provider, or a custom root certificate created for this purpose. The root certificate signs a unique intermediate CA certificate for each cluster.
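For illustration only, the shared root of trust can be sketched with `openssl`: one self-signed root signs an intermediate CA per cluster, and the resulting files back the `cacerts` secret that istiod reads at startup. The file names, subjects, key sizes, and lifetimes below are assumptions for a demo; in production, use certificates issued by your PKI provider instead.

```shell
# Demo root CA (assumption: RSA 2048 and a 10-year lifetime are acceptable).
openssl genrsa -out root-key.pem 2048
openssl req -x509 -new -key root-key.pem -days 3650 \
  -subj "/O=example/CN=Example Root CA" -out root-cert.pem

# Intermediate CA for one cluster, signed by the shared root. The extension
# file marks the intermediate as a CA so it can sign workload certificates.
printf 'basicConstraints=critical,CA:TRUE\n' > ca-ext.cnf
openssl genrsa -out ca-key.pem 2048
openssl req -new -key ca-key.pem -subj "/O=example/CN=cluster1 CA" -out ca.csr
openssl x509 -req -in ca.csr -CA root-cert.pem -CAkey root-key.pem \
  -CAcreateserial -days 3650 -extfile ca-ext.cnf -out ca-cert.pem
cat ca-cert.pem root-cert.pem > cert-chain.pem

# Then store the files in the cacerts secret for that cluster, for example:
# kubectl create secret generic cacerts -n istio-system \
#   --from-file=ca-cert.pem --from-file=ca-key.pem \
#   --from-file=root-cert.pem --from-file=cert-chain.pem
```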
Save the name and kubeconfig context of a workload cluster in the following environment variables. Each time you repeat the steps in this guide, you change these variables to the next workload cluster’s name and context.
```shell
export CLUSTER_NAME=<workload-cluster-name>
export CLUSTER_CONTEXT=<workload-cluster-context>
```
Install the Gloo Operator and deploy a managed istiod control plane.
Install the Gloo Operator to the `gloo-mesh` namespace. This operator deploys and manages your Istio installation. Note that if you already installed Gloo Mesh, you can optionally reference the secret that Gloo Mesh automatically creates for your license in the `--set manager.env.SOLO_ISTIO_LICENSE_KEY_SECRET_REF=gloo-mesh/license-keys` flag instead.

```shell
helm install gloo-operator oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
  --version 0.2.3 \
  -n gloo-mesh \
  --create-namespace \
  --set manager.env.SOLO_ISTIO_LICENSE_KEY=${GLOO_MESH_LICENSE_KEY}
```
Verify that the operator pod is running.
```shell
kubectl get pods -n gloo-mesh --context ${CLUSTER_CONTEXT} -l app.kubernetes.io/name=gloo-operator
```
Example output:
```
gloo-operator-78d58d5c7b-lzbr5   1/1     Running   0          48s
```
Create a ServiceMeshController custom resource to configure an Istio installation. For more information about the configurable fields, see the installation guide.
```shell
kubectl --context ${CLUSTER_CONTEXT} apply -n gloo-mesh -f - <<EOF
apiVersion: operator.gloo.solo.io/v1
kind: ServiceMeshController
metadata:
  name: managed-istio
  labels:
    app.kubernetes.io/name: managed-istio
spec:
  cluster: ${CLUSTER_NAME}
  network: ${CLUSTER_NAME}
  dataplaneMode: Ambient # required for multicluster setups
  installNamespace: istio-system
  version: ${ISTIO_VERSION}
  # Uncomment if you installed the istio-cni
  # onConflict: Force
EOF
```
If you currently install the `istio-cni` plugin by using Helm, you must directly replace the CNI to avoid downtime by setting `onConflict: Force`.

If you set the `installNamespace` to a namespace other than `gloo-system`, `gloo-mesh`, or `istio-system`, you must include the `--set manager.env.WATCH_NAMESPACES=<namespace>` setting.

Verify that the ServiceMeshController is ready. In the `Status` section of the output, make sure that all statuses are `True`, and that the phase is `SUCCEEDED`.

```shell
kubectl --context ${CLUSTER_CONTEXT} describe servicemeshcontroller -n gloo-mesh managed-istio
```
Example output:
```
...
Status:
  Conditions:
    Last Transition Time:  2024-12-27T20:47:01Z
    Message:               Manifests initialized
    Observed Generation:   1
    Reason:                ManifestsInitialized
    Status:                True
    Type:                  Initialized
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               CRDs installed
    Observed Generation:   1
    Reason:                CRDInstalled
    Status:                True
    Type:                  CRDInstalled
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               Deployment succeeded
    Observed Generation:   1
    Reason:                DeploymentSucceeded
    Status:                True
    Type:                  ControlPlaneDeployed
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               Deployment succeeded
    Observed Generation:   1
    Reason:                DeploymentSucceeded
    Status:                True
    Type:                  CNIDeployed
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               Deployment succeeded
    Observed Generation:   1
    Reason:                DeploymentSucceeded
    Status:                True
    Type:                  WebhookDeployed
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               All conditions are met
    Observed Generation:   1
    Reason:                SystemReady
    Status:                True
    Type:                  Ready
  Phase:  SUCCEEDED
Events:   <none>
```
Migrate your Istio-managed workloads to the managed `gloo` control plane. The steps vary based on whether you labeled workload namespaces with revision labels, such as `istio.io/rev=1-25`, or with injection labels, such as `istio-injection=enabled`.

For each ingress or egress gateway that the gateway lifecycle manager created, create Helm releases to deploy new Istio gateways to the `gloo` revision.

For ingress gateways: Create a new ingress gateway Helm release for the `gloo` control plane revision. Note that if you maintain your own services to expose the gateways, you can disable the load balancer services that are defined by default in the gateway Helm release by including the `--set service.type=None` flag in this command. Then, you can switch from the old to the new gateways by updating the load balancer services to point to the new gateways.

```shell
helm install istio-ingressgateway istio/gateway \
  --kube-context ${CLUSTER_CONTEXT} \
  --version ${ISTIO_VERSION} \
  --namespace istio-ingress \
  --create-namespace \
  --set "revision=gloo"
```
Verify that the gateways are successfully deployed. In the output, the name of istiod includes the `gloo` revision, indicating that the gateways are included in the Gloo-revisioned data plane.

```shell
istioctl --context ${CLUSTER_CONTEXT} proxy-status | grep gateway
```
Example output:
```
NAME                                                   CLUSTER    ...  ISTIOD                         VERSION
istio-eastwestgateway-bdc4fd65f-ftmz9.istio-eastwest   cluster1   ...  istiod-gloo-6495985689-rkwwd   1.25.2-solo
istio-ingressgateway-bdc4fd65f-ftmz9.istio-ingress     cluster1   ...  istiod-gloo-6495985689-rkwwd   1.25.2-solo
```
Verify that Istio now routes traffic requests to apps in your mesh through the new gateway that you deployed. For example, if you deployed the Bookinfo sample app, you can send a curl request to the product page.
```shell
kubectl --context ${CLUSTER_CONTEXT} port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
curl -v http://localhost:8080/productpage
```
Optional: If you previously installed the Istio CNI pods with a Helm chart, uninstall the release and delete the secret stored by Helm.
```shell
helm uninstall <cni_release> -n istio-system
kubectl delete secret "sh.helm.release.v1.istio-cni.v1" -n istio-system
```
Apply the CRDs for the Kubernetes Gateway API to your cluster, which are required to create components such as waypoint proxies for L7 traffic policies, gateways with the `Gateway` resource, and more.

```shell
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml --context ${CLUSTER_CONTEXT}
```
Create an east-west gateway in the `istio-eastwest` namespace. An east-west gateway facilitates traffic between services in each cluster in your multicluster mesh.

- You can use the following `istioctl` command to quickly create the east-west gateway.

  ```shell
  kubectl create namespace istio-eastwest --context ${CLUSTER_CONTEXT}
  istioctl multicluster expose --namespace istio-eastwest --context ${CLUSTER_CONTEXT}
  ```

- To take a look at the Gateway resource that this command creates, you can include the `--generate` flag in the command.

  ```shell
  kubectl create namespace istio-eastwest --context ${CLUSTER_CONTEXT}
  istioctl multicluster expose --namespace istio-eastwest --context ${CLUSTER_CONTEXT} --generate
  ```

  In this example output, the `gatewayClassName` that is used, `istio-eastwest`, is included by default when you install Istio in ambient mode.

  ```yaml
  apiVersion: gateway.networking.k8s.io/v1
  kind: Gateway
  metadata:
    labels:
      istio.io/expose-istiod: "15012"
      topology.istio.io/network: "<cluster_network_name>"
    name: istio-eastwest
    namespace: istio-eastwest
  spec:
    gatewayClassName: istio-eastwest
    listeners:
    - name: cross-network
      port: 15008
      protocol: HBONE
      tls:
        mode: Passthrough
    - name: xds-tls
      port: 15012
      protocol: TLS
      tls:
        mode: Passthrough
  ```
Verify that the east-west gateway is successfully deployed.
```shell
kubectl get pods -n istio-eastwest --context $CLUSTER_CONTEXT
```
If you have Istio installations in multiple clusters that the GatewayLifecycleManager and IstioLifecycleManager managed, be sure to repeat steps 3 - 11 in each cluster before you continue. The next step deletes the GatewayLifecycleManager and IstioLifecycleManager resources from the management cluster, which uninstalls the old Istio installations from every workload cluster in your multicluster setup. Be sure to reset the value of the `$CLUSTER_NAME` and `$CLUSTER_CONTEXT` environment variables to the next workload cluster.
Link clusters
Link clusters to enable cross-cluster service discovery and allow traffic to be routed through east-west gateways across clusters.
Verify that the contexts for the clusters that you want to include in the multicluster mesh are listed in your kubeconfig file.
```shell
kubectl config get-contexts
```
- In the output, note the names of the cluster contexts, which you use in the next step to link the clusters.
- If you have multiple kubeconfig files, you can generate a merged kubeconfig file by running the following command.
```shell
KUBECONFIG=<kubeconfig_file1>.yaml:<file2>.yaml:<file3>.yaml kubectl config view --flatten
```
Using the names of the cluster contexts, link the clusters so that they can communicate. Note that you can link the clusters either bi-directionally or asymmetrically. In a standard bi-directional setup, services in any of the linked clusters can send requests to and receive requests from services in any of the other linked clusters. In an asymmetrical setup, you allow one cluster to send requests to another cluster, but the other cluster cannot send requests back to the first cluster.

Note that these `istio-remote` gateways are used for cluster peering only, and do not create deployments or services. Traffic requests between linked clusters are routed through the `istio-eastwest` gateways.
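For reference, a linked peer is represented by a `Gateway` resource of class `istio-remote`, similar to the following sketch. The exact fields are generated for you during linking; the resource name and the peer address placeholder shown here are illustrative assumptions, not output from your cluster.

```yaml
# Illustrative istio-remote Gateway as created during cluster linking.
# It records the peer cluster's east-west gateway address for peering
# only; no deployment or service is created for it.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: istio-remote-peer-cluster1
  namespace: istio-eastwest
spec:
  gatewayClassName: istio-remote
  addresses:
  - type: IPAddress
    value: <peer_address>
```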
For example, you can list the gateways in a cluster to see both the remote and east-west gateways.
```shell
kubectl get gateways -n istio-eastwest --context $CLUSTER_CONTEXT
```

```
NAME                         CLASS            ADDRESS     PROGRAMMED   AGE
istio-eastwest               istio-eastwest   <address>   True         29m
istio-remote-peer-cluster1   istio-remote     <address>   True         16m
...
```
However, if you list the services in the same namespace, only a service for the east-west gateway exists.
```shell
kubectl get svc -n istio-eastwest --context $CLUSTER_CONTEXT
```

```
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                           AGE
istio-eastwest   LoadBalancer   172.20.44.255   <address>     15021:31730/TCP,15008:30632/TCP,15012:32356/TCP   29m
```
Delete previous resources
Now that your multicluster mesh is set up, delete the GatewayLifecycleManager and IstioLifecycleManager managed installations. The steps vary based on whether you created the resources directly, or used the `istioInstallations` section of the `gloo-platform` Helm chart.

Send another request to your apps to verify that traffic is still flowing.
```shell
kubectl --context ${CLUSTER_CONTEXT} port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
curl -v http://localhost:8080/productpage
```
The migration of your service mesh is now complete!
Next
- Launch the Gloo UI to review the Istio insights that were captured for your service mesh setup. Gloo Mesh comes with an insights engine that automatically analyzes your Istio setups for health issues. These issues are displayed in the UI along with recommendations to harden your Istio setups. The insights give you a checklist to address issues that might otherwise be hard to detect across your environment. For more information, see Insights.
- When it’s time to upgrade your service mesh, you can perform a safe in-place upgrade by using the Gloo Operator.