Migrate to Gloo-managed service meshes
Switch from your existing Istio installations to Gloo-managed service meshes.
In Gloo Mesh Enterprise version 2.7, the Gloo operator is an alpha feature. Alpha features are likely to change, are not fully tested, and are not supported for production. For more information, see Gloo feature maturity.
If you have existing Istio installations and want to switch to using the Gloo operator for service mesh management, you can use one of the following guides:
- Revisioned Helm: You installed Istio with Helm. To add namespaces to the service mesh, you use revision labels such as `istio.io/rev=1-24`.
- Revisionless Helm: You installed Istio with Helm. To add namespaces to the service mesh, you use the sidecar injection label `istio-injection=enabled`.
- Istio lifecycle manager: You might have installed Istio and gateways by using Solo's Istio lifecycle manager, such as by using the default settings in the getting started guides, the `istioInstallations` Helm settings in your Gloo Helm chart, or by directly creating IstioLifecycleManager and GatewayLifecycleManager custom resources.
The Gloo operator uses the `gloo` revision by default to manage Istio installations in your cluster. This revision facilitates the initial migration to the Gloo operator. After migration, however, in-place upgrades are recommended for further operator-managed changes. For more information, see the Gloo operator upgrade guide.
Migrate from revisioned Helm installations
If you currently install Istio by using Helm and use revisions to manage your installations, you can migrate from your community Istio revision, such as `1-24`, to the `gloo` revision. The Gloo operator uses the `gloo` revision by default to manage Istio installations in your cluster.
In multicluster setups, repeat the following steps in each cluster to migrate to Gloo operator-managed installations. Before you complete these steps, be sure to target the correct cluster context, either by running `kubectl config use-context <context>` or by adding `--kube-context ${CLUSTER_CONTEXT}` to each Helm command.
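The per-cluster targeting can be sketched as a simple loop. This is a dry run that only prints the per-cluster command; the context names `cluster1-context` and `cluster2-context` are placeholders for your own kubeconfig contexts.

```shell
# Hypothetical kubeconfig contexts for two workload clusters; replace with your own.
for CLUSTER_CONTEXT in cluster1-context cluster2-context; do
  # Dry run: print the command that targets each cluster.
  # Remove the echo to execute it for real.
  echo helm install gloo-operator \
    oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
    --kube-context "${CLUSTER_CONTEXT}" \
    --version 0.1.0-rc.1 -n gloo-mesh --create-namespace
done
```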
Install the Gloo operator and deploy a managed istiod control plane.
Install the Gloo operator to the `gloo-mesh` namespace.

```shell
helm install gloo-operator oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
  --version 0.1.0-rc.1 \
  -n gloo-mesh \
  --create-namespace
```
Verify that the operator pod is running.

```shell
kubectl get pods -n gloo-mesh | grep operator
```

Example output:

```
gloo-operator-78d58d5c7b-lzbr5   1/1     Running   0     48s
```
Create a ServiceMeshController custom resource to configure an Istio installation.
If you currently install the `istio-cni` plugin by using Helm, you must directly replace the CNI to avoid downtime by setting `onConflict: Force`.

```shell
kubectl apply -n gloo-mesh -f - <<EOF
apiVersion: operator.gloo.solo.io/v1
kind: ServiceMeshController
metadata:
  name: managed-istio
  labels:
    app.kubernetes.io/name: managed-istio
spec:
  cluster: $CLUSTER_NAME
  dataplaneMode: Sidecar
  version: 1.24.2
  # Uncomment if you installed the istio-cni
  # onConflict: Force
EOF
```
Verify that the ServiceMeshController is ready. In the `Status` section of the output, make sure that all statuses are `True` and that the phase is `SUCCEEDED`.

```shell
kubectl describe servicemeshcontroller -n gloo-mesh managed-istio
```
Example output:

```
...
Status:
  Conditions:
    Last Transition Time:  2024-12-27T20:47:01Z
    Message:               Manifests initialized
    Observed Generation:   1
    Reason:                ManifestsInitialized
    Status:                True
    Type:                  Initialized
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               CRDs installed
    Observed Generation:   1
    Reason:                CRDInstalled
    Status:                True
    Type:                  CRDInstalled
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               Deployment succeeded
    Observed Generation:   1
    Reason:                DeploymentSucceeded
    Status:                True
    Type:                  ControlPlaneDeployed
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               Deployment succeeded
    Observed Generation:   1
    Reason:                DeploymentSucceeded
    Status:                True
    Type:                  CNIDeployed
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               Deployment succeeded
    Observed Generation:   1
    Reason:                DeploymentSucceeded
    Status:                True
    Type:                  WebhookDeployed
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               All conditions are met
    Observed Generation:   1
    Reason:                SystemReady
    Status:                True
    Type:                  Ready
  Phase:   SUCCEEDED
Events:    <none>
```
Migrate your Istio-managed workloads to the managed `gloo` control plane.

Get the workload namespaces that you previously labeled with an Istio revision, such as `1-24` in the following example.

```shell
kubectl get namespaces -l istio.io/rev=1-24
```
Overwrite the revision label for each of the workload namespaces with the `gloo` revision label.

```shell
kubectl label namespace <namespace> istio.io/rev=gloo --overwrite
```
Restart the workloads in each labeled namespace so that they are managed by the Gloo operator Istio installation.

- To restart all deployments in the namespace:

  ```shell
  kubectl rollout restart deployment -n <namespace>
  ```

- To restart individual deployments in the namespace, such as to test a small number of deployments or to stagger the restart process:

  ```shell
  kubectl rollout restart deployment <deployment> -n <namespace>
  ```
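The relabel-and-restart sequence can also be scripted across every revisioned namespace. This is a minimal dry-run sketch that only prints the commands it would run; the namespace names `bookinfo` and `httpbin` are placeholders.

```shell
# Placeholder namespaces; in practice, list them with:
#   kubectl get namespaces -l istio.io/rev=1-24 -o name
NAMESPACES="bookinfo httpbin"

for ns in $NAMESPACES; do
  # Dry run: print each command; remove the echo prefixes to execute.
  echo kubectl label namespace "$ns" istio.io/rev=gloo --overwrite
  echo kubectl rollout restart deployment -n "$ns"
done
```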
Verify that the workloads are successfully migrated. In the output, the name of istiod includes the `gloo` revision, indicating that the workload is now part of the Gloo-revisioned service mesh.

```shell
istioctl proxy-status
```

Example output:

```
NAME                                       CLUSTER    ...  ISTIOD                         VERSION
details-v1-7b6df9d8c8-s6kg5.bookinfo       cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.24.2-solo
productpage-v1-bb494b7d7-xbtxr.bookinfo    cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.24.2-solo
ratings-v1-55b478cfb6-wv2m5.bookinfo       cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.24.2-solo
reviews-v1-6dfcc9fc7d-7k6qh.bookinfo       cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.24.2-solo
reviews-v2-7dddd799b5-m5n2z.bookinfo       cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.24.2-solo
```
Update any existing Istio gateways to the `gloo` revision.

Get the name and namespace of your gateway Helm release.

```shell
helm ls -A
```
Get the current values for the gateway Helm release in your cluster.

```shell
helm get values <gateway_release> -n <namespace> -o yaml > gateway.yaml
```
Upgrade your gateway Helm release.

```shell
helm upgrade -i <gateway_release> istio/gateway \
  --version 1.24.2 \
  --namespace <namespace> \
  --set "revision=gloo" \
  -f gateway.yaml
```
Verify that the gateway is successfully migrated. In the output, the name of istiod includes the `gloo` revision, indicating that the gateway is now included in the Gloo-revisioned data plane.

```shell
istioctl proxy-status | grep gateway
```

Example output:

```
NAME                                                 CLUSTER    ...  ISTIOD                         VERSION
istio-ingressgateway-bdc4fd65f-ftmz9.istio-ingress   cluster1   ...  istiod-gloo-6495985689-rkwwd   1.24.2-solo
```
Verify that Istio still correctly routes traffic requests to apps in your mesh. For example, if you deployed the Bookinfo sample app, you can send a curl request to the product page.
```shell
kubectl port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
curl -v http://localhost:8080/productpage
```
Get the name and namespace of your previous istiod Helm release.
```shell
helm ls -A
```
Uninstall the unmanaged control plane.
```shell
helm uninstall <istiod_release> -n istio-system
```
Optional: If you previously installed the Istio CNI pods with a Helm chart, uninstall the release and delete the secret stored by Helm.
```shell
helm uninstall <cni_release> -n istio-system
kubectl delete secret "sh.helm.release.v1.istio-cni.v1" -n istio-system
```
Send another request to your apps to verify that traffic is still flowing.
```shell
kubectl port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
curl -v http://localhost:8080/productpage
```
The migration of your service mesh is now complete!
Migrate from revisionless Helm installations
If you currently install Istio by using Helm and do not use revisions to manage your installations, such as by labeling namespaces with `istio-injection=enabled`, you can migrate the management of the `MutatingWebhookConfiguration` to the Gloo operator. The Gloo operator uses the `gloo` revision by default to manage Istio installations in your cluster.
In multicluster setups, repeat the following steps in each cluster to migrate to Gloo operator-managed installations. Before you complete these steps, be sure to target the correct cluster context, either by running `kubectl config use-context <context>` or by adding `--kube-context ${CLUSTER_CONTEXT}` to each Helm command.
Install the Gloo operator and deploy a managed istiod control plane.
Install the Gloo operator to the `gloo-mesh` namespace.

```shell
helm install gloo-operator oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
  --version 0.1.0-rc.1 \
  -n gloo-mesh \
  --create-namespace
```
Verify that the operator pod is running.

```shell
kubectl get pods -n gloo-mesh | grep operator
```

Example output:

```
gloo-operator-78d58d5c7b-lzbr5   1/1     Running   0     48s
```
Create a ServiceMeshController custom resource to configure an Istio installation. For more information about the configurable fields, see the installation guide.
If you currently install the `istio-cni` plugin by using Helm, you must directly replace the CNI to avoid downtime by setting `onConflict: Force`.

```shell
kubectl apply -n gloo-mesh -f - <<EOF
apiVersion: operator.gloo.solo.io/v1
kind: ServiceMeshController
metadata:
  name: managed-istio
  labels:
    app.kubernetes.io/name: managed-istio
spec:
  cluster: $CLUSTER_NAME
  dataplaneMode: Sidecar
  version: 1.24.2
  # Uncomment if you installed the istio-cni
  # onConflict: Force
EOF
```
Describe the ServiceMeshController and note that it cannot take over the `istio-injection=enabled` label until the existing webhook is deleted.

```shell
kubectl describe ServiceMeshController -n gloo-mesh managed-istio
```
Example output:

```
- lastTransitionTime: "2024-12-12T19:41:52Z"
  message: MutatingWebhookConfiguration istio-sidecar-injector references default Istio revision istio-system/istiod; must be deleted before migration
  observedGeneration: 1
  reason: ErrorConflictDetected
  status: "False"
  type: WebhookDeployed
```
Delete the existing webhook.

```shell
kubectl delete mutatingwebhookconfiguration istio-sidecar-injector -n istio-system
```
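Before moving on, you can confirm that the default injector webhook is actually gone. A minimal sketch of that check, operating on a captured webhook list so it runs anywhere; the sample value assumes a hypothetical state in which only a Gloo-managed webhook remains.

```shell
# Sample list as captured from:
#   kubectl get mutatingwebhookconfigurations -o name
# Hypothetical state after deletion: only the Gloo-managed webhook remains.
WEBHOOKS="mutatingwebhookconfiguration.admissionregistration.k8s.io/istio-sidecar-injector-gloo"

# Match lines that end in the default injector name exactly, so that
# revision-suffixed names such as istio-sidecar-injector-gloo do not match.
if echo "$WEBHOOKS" | grep -q 'istio-sidecar-injector$'; then
  MSG="default injector still present; delete it before migrating"
else
  MSG="default injector removed; the operator can take over injection"
fi
echo "$MSG"
```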
Verify that the ServiceMeshController is now healthy. In the `Status` section of the output, make sure that all statuses are `True` and that the phase is `SUCCEEDED`.

```shell
kubectl describe servicemeshcontroller -n gloo-mesh managed-istio
```
Example output:

```
...
Status:
  Conditions:
    Last Transition Time:  2024-12-27T20:47:01Z
    Message:               Manifests initialized
    Observed Generation:   1
    Reason:                ManifestsInitialized
    Status:                True
    Type:                  Initialized
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               CRDs installed
    Observed Generation:   1
    Reason:                CRDInstalled
    Status:                True
    Type:                  CRDInstalled
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               Deployment succeeded
    Observed Generation:   1
    Reason:                DeploymentSucceeded
    Status:                True
    Type:                  ControlPlaneDeployed
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               Deployment succeeded
    Observed Generation:   1
    Reason:                DeploymentSucceeded
    Status:                True
    Type:                  CNIDeployed
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               Deployment succeeded
    Observed Generation:   1
    Reason:                DeploymentSucceeded
    Status:                True
    Type:                  WebhookDeployed
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               All conditions are met
    Observed Generation:   1
    Reason:                SystemReady
    Status:                True
    Type:                  Ready
  Phase:   SUCCEEDED
Events:    <none>
```
Migrate your Istio-managed workloads to the managed control plane.
Get the workload namespaces that you previously included in the service mesh by using the `istio-injection=enabled` label.

```shell
kubectl get namespaces -l istio-injection=enabled
```
Label each workload namespace with the `gloo` revision label.

```shell
kubectl label namespace <namespace> istio.io/rev=gloo --overwrite
```
Restart your workloads so that they are managed by the Gloo operator Istio installation.

- To restart all deployments in the namespace:

  ```shell
  kubectl rollout restart deployment -n <namespace>
  ```

- To restart individual deployments in the namespace, such as to test a small number of deployments or to stagger the restart process:

  ```shell
  kubectl rollout restart deployment <deployment> -n <namespace>
  ```
Verify that the workloads are successfully migrated. In the output, the name of istiod includes the `gloo` revision, indicating that the workload is now part of the Gloo-revisioned service mesh.

```shell
istioctl proxy-status
```

Example output:

```
NAME                                       CLUSTER    ...  ISTIOD                         VERSION
details-v1-7b6df9d8c8-s6kg5.bookinfo       cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.24.2-solo
productpage-v1-bb494b7d7-xbtxr.bookinfo    cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.24.2-solo
ratings-v1-55b478cfb6-wv2m5.bookinfo       cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.24.2-solo
reviews-v1-6dfcc9fc7d-7k6qh.bookinfo       cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.24.2-solo
reviews-v2-7dddd799b5-m5n2z.bookinfo       cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.24.2-solo
```
Remove the `istio-injection=enabled` label from the workload namespaces.

```shell
kubectl label ns <namespace> istio-injection-
```
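Removing the label from many namespaces can be scripted. A dry-run sketch that only prints the commands it would run; the namespace names `bookinfo` and `httpbin` are placeholders.

```shell
# Placeholder namespaces; in practice, list them with:
#   kubectl get namespaces -l istio-injection=enabled -o name
NAMESPACES="bookinfo httpbin"

for ns in $NAMESPACES; do
  # Dry run: print the unlabel command; remove the echo to execute.
  echo kubectl label ns "$ns" istio-injection-
done
```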
Migrate any existing Istio gateways to the managed `gloo` control plane.

Get the deployment name of your gateway.

```shell
kubectl get deploy -n <gateway_namespace>
```
Update each Istio gateway by restarting it.

```shell
kubectl rollout restart deploy <gateway_name> -n <namespace>
```
Verify that the gateway is successfully migrated. In the output, the name of istiod includes the `gloo` revision, indicating that the gateway is now included in the Gloo-revisioned data plane.

```shell
istioctl proxy-status | grep gateway
```

Example output:

```
NAME                                                 CLUSTER    ...  ISTIOD                         VERSION
istio-ingressgateway-bdc4fd65f-ftmz9.istio-ingress   cluster1   ...  istiod-gloo-6495985689-rkwwd   1.24.2-solo
```
Verify that Istio still correctly routes traffic requests to apps in your mesh. For example, if you deployed the Bookinfo sample app, you can send a curl request to the product page.
```shell
kubectl port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
curl -v http://localhost:8080/productpage
```
Get the name and namespace of your previous istiod Helm release.
```shell
helm ls -A
```
Uninstall the unmanaged control plane.
```shell
helm uninstall <istiod_release> -n istio-system
```
Optional: If you previously installed the Istio CNI pods with a Helm chart, uninstall the release and delete the secret stored by Helm.
```shell
helm uninstall <cni_release> -n istio-system
kubectl delete secret "sh.helm.release.v1.istio-cni.v1" -n istio-system
```
Send another request to your apps to verify that traffic is still flowing.
```shell
kubectl port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
curl -v http://localhost:8080/productpage
```
The migration of your service mesh is now complete!
Migrate from the Istio lifecycle manager
You might have previously installed Istio and gateways by using Solo's Istio lifecycle manager, such as by using the default settings in the getting started guides, the `istioInstallations` Helm settings in your Gloo Helm chart, or by directly creating IstioLifecycleManager and GatewayLifecycleManager custom resources. You can migrate from the Istio revision that your lifecycle manager currently runs, such as `1-24`, to `gloo`, the revision that the Gloo operator uses by default to manage Istio installations in your cluster.
Single cluster
Install the Gloo operator and deploy a managed istiod control plane.
Install the Gloo operator to the `gloo-mesh` namespace.

```shell
helm install gloo-operator oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
  --version 0.1.0-rc.1 \
  -n gloo-mesh \
  --create-namespace
```
Verify that the operator pod is running.

```shell
kubectl get pods -n gloo-mesh | grep operator
```

Example output:

```
gloo-operator-78d58d5c7b-lzbr5   1/1     Running   0     48s
```
Create a ServiceMeshController custom resource to configure an Istio installation.
If you currently install the `istio-cni` plugin by using Helm, you must directly replace the CNI to avoid downtime by setting `onConflict: Force`.

```shell
kubectl apply -n gloo-mesh -f - <<EOF
apiVersion: operator.gloo.solo.io/v1
kind: ServiceMeshController
metadata:
  name: managed-istio
  labels:
    app.kubernetes.io/name: managed-istio
spec:
  cluster: $CLUSTER_NAME
  dataplaneMode: Sidecar
  version: 1.24.2
  # Uncomment if you installed the istio-cni
  # onConflict: Force
EOF
```
Verify that the ServiceMeshController is ready. In the `Status` section of the output, make sure that all statuses are `True` and that the phase is `SUCCEEDED`.

```shell
kubectl describe servicemeshcontroller -n gloo-mesh managed-istio
```
Example output:

```
...
Status:
  Conditions:
    Last Transition Time:  2024-12-27T20:47:01Z
    Message:               Manifests initialized
    Observed Generation:   1
    Reason:                ManifestsInitialized
    Status:                True
    Type:                  Initialized
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               CRDs installed
    Observed Generation:   1
    Reason:                CRDInstalled
    Status:                True
    Type:                  CRDInstalled
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               Deployment succeeded
    Observed Generation:   1
    Reason:                DeploymentSucceeded
    Status:                True
    Type:                  ControlPlaneDeployed
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               Deployment succeeded
    Observed Generation:   1
    Reason:                DeploymentSucceeded
    Status:                True
    Type:                  CNIDeployed
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               Deployment succeeded
    Observed Generation:   1
    Reason:                DeploymentSucceeded
    Status:                True
    Type:                  WebhookDeployed
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               All conditions are met
    Observed Generation:   1
    Reason:                SystemReady
    Status:                True
    Type:                  Ready
  Phase:   SUCCEEDED
Events:    <none>
```
Migrate your Istio-managed workloads to the managed `gloo` control plane.

Get the workload namespaces that you previously labeled with an Istio revision, such as `1-24` in the following example.

```shell
kubectl get namespaces -l istio.io/rev=1-24
```
Overwrite the revision label for each of the workload namespaces with the `gloo` revision label.

```shell
kubectl label namespace <namespace> istio.io/rev=gloo --overwrite
```
Restart the workloads in each labeled namespace so that they are managed by the Gloo operator Istio installation.

- To restart all deployments in the namespace:

  ```shell
  kubectl rollout restart deployment -n <namespace>
  ```

- To restart individual deployments in the namespace, such as to test a small number of deployments or to stagger the restart process:

  ```shell
  kubectl rollout restart deployment <deployment> -n <namespace>
  ```
Verify that the workloads are successfully migrated. In the output, the name of istiod includes the `gloo` revision, indicating that the workload is now part of the Gloo-revisioned service mesh.

```shell
istioctl proxy-status
```

Example output:

```
NAME                                       CLUSTER    ...  ISTIOD                         VERSION
details-v1-7b6df9d8c8-s6kg5.bookinfo       cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.24.2-solo
productpage-v1-bb494b7d7-xbtxr.bookinfo    cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.24.2-solo
ratings-v1-55b478cfb6-wv2m5.bookinfo       cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.24.2-solo
reviews-v1-6dfcc9fc7d-7k6qh.bookinfo       cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.24.2-solo
reviews-v2-7dddd799b5-m5n2z.bookinfo       cluster1   ...  istiod-gloo-7c8f6fd4c4-m9k9t   1.24.2-solo
```
For each gateway that the gateway lifecycle manager created, create Helm releases to deploy new Istio gateways to the `gloo` revision.

Create a new ingress gateway Helm release for the `gloo` control plane revision. Note that if you maintain your own services to expose gateways, you can disable the load balancer services that are defined by default in the gateway Helm release by including the `--set service.type=None` flag in this command. Then, you can switch from the old to the new gateways by updating the load balancer services to point to the new gateways.

```shell
helm install istio-ingressgateway istio/gateway \
  --version 1.24.2 \
  --namespace istio-ingress \
  --set "revision=gloo"
```
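If you do maintain your own load balancer services, the cutover can look like the following sketch. This is a hypothetical Service, not part of the guide's commands: the name, ports, and selector labels are assumptions, and you should match them to the labels that your gateway chart actually applies to the new pods.

```yaml
# Hypothetical self-managed load balancer Service for the cutover.
apiVersion: v1
kind: Service
metadata:
  name: ingress-lb            # assumed name for your existing LB service
  namespace: istio-ingress
spec:
  type: LoadBalancer
  selector:
    app: istio-ingressgateway # assumed pod label from the gateway chart
    istio.io/rev: gloo        # select the new gloo-revision gateway pods
  ports:
  - name: http
    port: 80
    targetPort: 8080          # assumed container port of the gateway pod
```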
Verify that the gateway is successfully deployed. In the output, the name of istiod includes the `gloo` revision, indicating that the gateway is included in the Gloo-revisioned data plane.

```shell
istioctl proxy-status | grep gateway
```

Example output:

```
NAME                                                 CLUSTER    ...  ISTIOD                         VERSION
istio-ingressgateway-bdc4fd65f-ftmz9.istio-ingress   cluster1   ...  istiod-gloo-6495985689-rkwwd   1.24.2-solo
```
Verify that Istio now routes traffic requests to apps in your mesh through the new gateway that you deployed. For example, if you deployed the Bookinfo sample app, you can send a curl request to the product page.
```shell
kubectl port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
curl -v http://localhost:8080/productpage
```
Delete the GatewayLifecycleManager and IstioLifecycleManager managed installations. The steps vary based on whether you created the resources directly or used the `istioInstallations` section of the `gloo-platform` Helm chart.

Optional: If you previously installed the Istio CNI pods with a Helm chart, uninstall the release and delete the secret stored by Helm.

```shell
helm uninstall <cni_release> -n istio-system
kubectl delete secret "sh.helm.release.v1.istio-cni.v1" -n istio-system
```
Send another request to your apps to verify that traffic is still flowing.
```shell
kubectl port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
curl -v http://localhost:8080/productpage
```
The migration of your service mesh is now complete!
Multicluster
Save the kubeconfig context of a workload cluster in the following environment variable. Each time you repeat the steps in this guide, change the variable to the next workload cluster's context.

```shell
export CLUSTER_CONTEXT=<workload-cluster-context>
```
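The ServiceMeshController that you create in a later step also references `$CLUSTER_NAME`, so it helps to set both variables on each pass. A small sketch; the values shown are placeholders for your own context and cluster names.

```shell
# Placeholder values; replace per workload cluster on each pass.
export CLUSTER_CONTEXT=cluster1-context   # kubeconfig context of the workload cluster
export CLUSTER_NAME=cluster1              # name that this cluster is registered with
echo "Targeting ${CLUSTER_NAME} via context ${CLUSTER_CONTEXT}"
```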
Install the Gloo operator and deploy a managed istiod control plane.
Install the Gloo operator to the `gloo-mesh` namespace.

```shell
helm install gloo-operator oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
  --kube-context ${CLUSTER_CONTEXT} \
  --version 0.1.0-rc.1 \
  -n gloo-mesh \
  --create-namespace
```
Verify that the operator pod is running.

```shell
kubectl get pods -n gloo-mesh --context ${CLUSTER_CONTEXT} | grep operator
```

Example output:

```
gloo-operator-78d58d5c7b-lzbr5   1/1     Running   0     48s
```
Create a ServiceMeshController custom resource to configure an Istio installation.
If you currently install the `istio-cni` plugin by using Helm, you must directly replace the CNI to avoid downtime by setting `onConflict: Force`.

```shell
kubectl --context ${CLUSTER_CONTEXT} apply -n gloo-mesh -f - <<EOF
apiVersion: operator.gloo.solo.io/v1
kind: ServiceMeshController
metadata:
  name: managed-istio
  labels:
    app.kubernetes.io/name: managed-istio
spec:
  cluster: $CLUSTER_NAME
  dataplaneMode: Sidecar
  version: 1.24.2
  # Uncomment if you installed the istio-cni
  # onConflict: Force
EOF
```
Verify that the ServiceMeshController is ready. In the `Status` section of the output, make sure that all statuses are `True` and that the phase is `SUCCEEDED`.

```shell
kubectl --context ${CLUSTER_CONTEXT} describe servicemeshcontroller -n gloo-mesh managed-istio
```
Example output:

```
...
Status:
  Conditions:
    Last Transition Time:  2024-12-27T20:47:01Z
    Message:               Manifests initialized
    Observed Generation:   1
    Reason:                ManifestsInitialized
    Status:                True
    Type:                  Initialized
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               CRDs installed
    Observed Generation:   1
    Reason:                CRDInstalled
    Status:                True
    Type:                  CRDInstalled
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               Deployment succeeded
    Observed Generation:   1
    Reason:                DeploymentSucceeded
    Status:                True
    Type:                  ControlPlaneDeployed
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               Deployment succeeded
    Observed Generation:   1
    Reason:                DeploymentSucceeded
    Status:                True
    Type:                  CNIDeployed
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               Deployment succeeded
    Observed Generation:   1
    Reason:                DeploymentSucceeded
    Status:                True
    Type:                  WebhookDeployed
    Last Transition Time:  2024-12-27T20:47:02Z
    Message:               All conditions are met
    Observed Generation:   1
    Reason:                SystemReady
    Status:                True
    Type:                  Ready
  Phase:   SUCCEEDED
Events:    <none>
```
Migrate your Istio-managed workloads to the managed `gloo` control plane. The steps vary based on whether you labeled workload namespaces with revision labels, such as `istio.io/rev=1-24`, or with injection labels, such as `istio-injection=enabled`.

For each gateway that the gateway lifecycle manager created, create Helm releases to deploy new Istio gateways to the `gloo` revision.

Create a new east-west gateway Helm release for the `gloo` control plane revision. Note that if you maintain your own services to expose the gateways, you can disable the load balancer services that are defined by default in the gateway Helm release by including the `--set service.type=None` flag in this command. Then, you can switch from the old to the new gateways by updating the load balancer services to point to the new gateways.

```shell
helm install istio-eastwestgateway istio/gateway \
  --kube-context ${CLUSTER_CONTEXT} \
  --version 1.24.2 \
  --namespace istio-eastwest \
  --create-namespace \
  --set "revision=gloo"
```
For ingress gateways: Create a new ingress gateway Helm release for the `gloo` control plane revision. Note that if you maintain your own services to expose the gateways, you can disable the load balancer services that are defined by default in the gateway Helm release by including the `--set service.type=None` flag in this command. Then, you can switch from the old to the new gateways by updating the load balancer services to point to the new gateways.

```shell
helm install istio-ingressgateway istio/gateway \
  --kube-context ${CLUSTER_CONTEXT} \
  --version 1.24.2 \
  --namespace istio-ingress \
  --create-namespace \
  --set "revision=gloo"
```
Verify that the gateways are successfully deployed. In the output, the name of istiod includes the `gloo` revision, indicating that the gateways are included in the Gloo-revisioned data plane.

```shell
istioctl --context ${CLUSTER_CONTEXT} proxy-status | grep gateway
```

Example output:

```
NAME                                                   CLUSTER    ...  ISTIOD                         VERSION
istio-eastwestgateway-bdc4fd65f-ftmz9.istio-eastwest   cluster1   ...  istiod-gloo-6495985689-rkwwd   1.24.2-solo
istio-ingressgateway-bdc4fd65f-ftmz9.istio-ingress     cluster1   ...  istiod-gloo-6495985689-rkwwd   1.24.2-solo
```
Verify that Istio now routes traffic requests to apps in your mesh through the new gateway that you deployed. For example, if you deployed the Bookinfo sample app, you can send a curl request to the product page.
```shell
kubectl --context ${CLUSTER_CONTEXT} port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
curl -v http://localhost:8080/productpage
```
Optional: If you previously installed the Istio CNI pods with a Helm chart, uninstall the release and delete the secret stored by Helm.
```shell
helm uninstall <cni_release> -n istio-system
kubectl delete secret "sh.helm.release.v1.istio-cni.v1" -n istio-system
```
If you have Istio installations in multiple clusters that the GatewayLifecycleManager and IstioLifecycleManager managed, be sure to repeat steps 1 - 6 in each cluster before you continue, resetting the value of the `$CLUSTER_CONTEXT` environment variable to the next workload cluster's context each time. The next step deletes the GatewayLifecycleManager and IstioLifecycleManager resources from the management cluster, which uninstalls the old Istio installations from every workload cluster in your multicluster setup.

Delete the GatewayLifecycleManager and IstioLifecycleManager managed installations. The steps vary based on whether you created the resources directly or used the `istioInstallations` section of the `gloo-platform` Helm chart.

Send another request to your apps to verify that traffic is still flowing.

```shell
kubectl --context ${CLUSTER_CONTEXT} port-forward -n istio-ingress svc/istio-ingressgateway 8080:80
curl -v http://localhost:8080/productpage
```
The migration of your service mesh is now complete!
Next
- Launch the Gloo UI to review the Istio insights that were captured for your service mesh setup. Gloo Mesh Enterprise comes with an insights engine that automatically analyzes your Istio setups for health issues. These issues are displayed in the UI along with recommendations to harden your Istio setups. The insights give you a checklist to address issues that might otherwise be hard to detect across your environment. For more information, see Insights.
- Monitor and observe your Istio environment with Gloo Mesh Enterprise’s built-in telemetry tools.
- When it’s time to upgrade your service mesh, you can perform a safe in-place upgrade by using the Gloo operator.