# Switch from unmanaged to managed Istio installations

Use the Istio lifecycle manager to switch from your existing, unmanaged Istio installations to Gloo-managed Istio installations. The takeover process follows these general steps:

1. Create `IstioLifecycleManager` and `GatewayLifecycleManager` resources in the management cluster that use a different revision than the existing Istio installations in your workload clusters. The `istiod` control planes and Istio gateways for the new installation are deployed to each workload cluster, but the new, managed control planes are not active at deployment time.
2. Test the new control plane and gateways by deploying workloads with a label for the new revision and generating traffic to those workloads.
3. Change the new control planes to be active, and roll out a restart to data plane workloads so that they are managed by the new control planes.
4. Update service selectors, or update internal or external DNS entries, to point to the new gateways.
5. Uninstall the old Istio installations.
## Considerations

Before you follow this takeover process, review the following important considerations.

- Revisions: This process involves creating `IstioLifecycleManager` and `GatewayLifecycleManager` resources that use a different revision than your existing Istio installations. If you do not currently use revisions, no conflict exists between the new installations and the existing installations. If you do currently use revisions, be sure to choose a different revision for the new installations than for your existing installations.
- Gateways: To prevent conflicts, be sure to choose a different name or namespace for the new managed gateways than for your existing gateways. For example, if your existing gateway is named `istio-ingressgateway` and deployed in a namespace such as `istio-gateways`, you can still name the new gateway `istio-ingressgateway`, but you must deploy it in a different namespace, such as `gloo-mesh-gateways`.
- Testing: Always test this process in a representative test environment before you attempt it in a production setup.
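To see which revisions are already in use in a workload cluster, you can check the revision labels on the existing `istiod` pods. A quick check, assuming your existing control plane runs in `istio-system`:

```sh
# List istiod pods and show the value of their istio.io/rev label.
# Pods from a non-revisioned installation show "default" in the label column.
kubectl get pods -n istio-system -l app=istiod -L istio.io/rev --context $REMOTE_CONTEXT1
```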
## Before you begin

1. Set the names of your clusters from your infrastructure provider. If your clusters have different names, specify those names instead.

   ```sh
   export REMOTE_CLUSTER1=<cluster1>
   export REMOTE_CLUSTER2=<cluster2>
   ...
   ```

2. Save the kubeconfig contexts for your clusters. Run `kubectl config get-contexts`, look for your cluster in the `CLUSTER` column, and get the context name in the `NAME` column. Note: Do not use context names with underscores. The context name is used as a SAN specification in the generated certificate that connects workload clusters to the management cluster, and underscores in SANs are not FQDN-compliant. You can rename a context by running `kubectl config rename-context "<oldcontext>" <newcontext>`.

   ```sh
   export MGMT_CONTEXT=<management-cluster-context>
   export REMOTE_CONTEXT1=<remote-cluster1-context>
   export REMOTE_CONTEXT2=<remote-cluster2-context>
   ...
   ```

3. To use a Gloo Mesh hardened image of Istio, you must have a Solo account. Make sure that you can log in to the Support Center. If you cannot, contact your account administrator to get the repo key for the Istio version that you want to install from the "Istio images built by Solo.io" support article.
## Deploy the managed Istio installations

Create `IstioLifecycleManager` and `GatewayLifecycleManager` resources in the management cluster that use a different revision than the existing Istio installations in your workload clusters. The `istiod` control planes and Istio gateways for the new installation are deployed to each workload cluster, but the new, managed control planes are not active at deployment time.
1. Save the Istio version information as environment variables.
   - For `REPO`, use a Gloo Istio repo key that you can get by logging in to the Support Center and reviewing the "Istio images built by Solo.io" support article. For more information, see "Get the Gloo Istio version that you want to use".
   - For `ISTIO_IMAGE`, save the version that you downloaded, such as 1.17.2, and append the `solo` tag, which is required to use many enterprise features. You can optionally append other Gloo Istio tags, as described in "About Gloo Istio". If you downloaded a different version than the following, make sure to specify that version instead.
   - For `REVISION`, specify any name or integer. For example, you can specify the version, such as `1-17-2`. If you currently use a revision for your existing Istio installations, be sure to use a different revision than the existing one.

   ```sh
   export REPO=<repo-key>
   export ISTIO_IMAGE=1.17.2-solo
   export REVISION=1-17-2
   ```
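   A quick sanity check that the variables are set as expected:

   ```sh
   echo "REPO=$REPO ISTIO_IMAGE=$ISTIO_IMAGE REVISION=$REVISION"
   ```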
2. Prepare an `IstioLifecycleManager` resource to manage the `istiod` control planes.
   1. Download the `gm-istiod.yaml` example file.

      ```sh
      curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/takeover/gm-istiod.yaml > gm-istiod.yaml
      ```

      For OpenShift clusters, download the OpenShift example file instead.

      ```sh
      curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/takeover/gm-istiod-openshift.yaml > gm-istiod.yaml
      ```

   2. Update the example file with the environment variables that you previously set for `$REPO`, `$ISTIO_IMAGE`, `$REVISION`, `$REMOTE_CLUSTER1`, and `$REMOTE_CLUSTER2`. Save the updated file as `gm-istiod-values.yaml`. Tip: Instead of updating the file manually, try running a terminal command to substitute values, such as the following command.

      ```sh
      envsubst < gm-istiod.yaml > gm-istiod-values.yaml
      open gm-istiod-values.yaml
      ```

   3. Check the settings in the `IstioLifecycleManager` resource. You can further edit the file to provide your own details.
      - Clusters: Specify the registered cluster names in the `clusters` section. For single-cluster setups, you must edit the file to specify only the name of your cluster (the value of `$CLUSTER_NAME`). For each cluster, `defaultRevision: false` ensures that the Istio operator spec for the control plane installation is NOT active in the cluster.
      - Root namespace: If you do not specify a namespace, the root namespace for the installed Istio resources in workload clusters is set to `istio-system`. If the `istio-system` namespace does not already exist, it is created for you.
      - Trust domain: By default, the `trustDomain` value is automatically set by the installer to the name of each workload cluster. To override the `trustDomain` for each cluster, you can instead specify the override value in the `trustDomain` field, and include the value in the list of cluster names. For example, if you specify `trustDomain: cluster1-trust-override` in the operator spec, you then specify both the cluster name (`cluster1`) and the trust domain (`cluster1-trust-override`) in the list of cluster names. Additionally, because Gloo requires multiple trust domains for east-west routing, the `PILOT_SKIP_VALIDATE_TRUST_DOMAIN` field is set to `"true"` by default.
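      For orientation, such an override might look like the following minimal sketch. The surrounding field layout follows the `istiod-control-plane` example shown later in this guide; the override value `cluster1-trust-override` is illustrative, and your downloaded example file might nest `trustDomain` differently, so treat this as a sketch rather than a copy-paste snippet.

      ```yaml
      apiVersion: admin.gloo.solo.io/v2
      kind: IstioLifecycleManager
      metadata:
        name: istiod-control-plane
        namespace: gloo-mesh
      spec:
        installations:
        - revision: 1-17-2
          clusters:
          # List both the cluster name and the trust domain override
          - name: cluster1
            defaultRevision: false
          - name: cluster1-trust-override
            defaultRevision: false
          istioOperatorSpec:
            meshConfig:
              trustDomain: cluster1-trust-override
      ```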
   4. Apply the `IstioLifecycleManager` resource to your management cluster.

      ```sh
      kubectl apply -f gm-istiod-values.yaml --context $MGMT_CONTEXT
      ```
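      To confirm that the resource was created, you can list resources of that kind in the management cluster:

      ```sh
      kubectl get IstioLifecycleManager -n gloo-mesh --context $MGMT_CONTEXT
      ```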
3. Optional: If you have a multicluster setup, prepare a `GatewayLifecycleManager` custom resource to manage the east-west gateways.
   1. Download the `gm-ew-gateway.yaml` example file.

      ```sh
      curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/gm-ew-gateway.yaml > gm-ew-gateway.yaml
      ```

   2. Update the example file with the environment variables that you previously set for `$REPO`, `$ISTIO_IMAGE`, `$REVISION`, `$REMOTE_CLUSTER1`, and `$REMOTE_CLUSTER2`. Save the updated file as `gm-ew-gateway-values.yaml`. Tip: Instead of updating the file manually, try running a terminal command to substitute values, such as the following command.

      ```sh
      envsubst < gm-ew-gateway.yaml > gm-ew-gateway-values.yaml
      open gm-ew-gateway-values.yaml
      ```

   3. Check the settings in the `GatewayLifecycleManager` resource. You can further edit the file to provide your own details.
      - Clusters: Specify the registered cluster names in the `clusters` section. For each cluster, `activeGateway: true` ensures that the Istio operator spec for the gateway is deployed and actively used by the `istiod` control plane.
      - Gateway name and namespace: The default name for the gateway is set to `istio-eastwestgateway`, and the default namespace for the gateway is set to `gloo-mesh-gateways`. If the `gloo-mesh-gateways` namespace does not already exist, it is created in each workload cluster for you. Note: To prevent conflicts, be sure to choose a different name or namespace than your existing gateway. For example, if your existing gateway is named `istio-eastwestgateway` and deployed in a namespace such as `istio-gateways`, you can still name the new gateway `istio-eastwestgateway`, but you must deploy it in a different namespace, such as `gloo-mesh-gateways`.
   4. Apply the `GatewayLifecycleManager` resource to your management cluster.

      ```sh
      kubectl apply -f gm-ew-gateway-values.yaml --context $MGMT_CONTEXT
      ```
4. Optional: If you also have a Gloo Gateway license, prepare a `GatewayLifecycleManager` custom resource to manage the ingress gateways.
   1. Download the `gm-ingress-gateway.yaml` example file.

      ```sh
      curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/gm-ingress-gateway.yaml > gm-ingress-gateway.yaml
      ```

   2. Update the example file with the environment variables that you previously set for `$REPO`, `$ISTIO_IMAGE`, `$REVISION`, `$REMOTE_CLUSTER1`, and `$REMOTE_CLUSTER2`. Save the updated file as `gm-ingress-gateway-values.yaml`. Tip: Instead of updating the file manually, try running a terminal command to substitute values, such as the following command.

      ```sh
      envsubst < gm-ingress-gateway.yaml > gm-ingress-gateway-values.yaml
      open gm-ingress-gateway-values.yaml
      ```

   3. Check the settings in the `GatewayLifecycleManager` resource. You can further edit the file to provide your own details.
      - Clusters: Specify the registered cluster names in the `clusters` section. For single-cluster setups, you must edit the file to specify only the name of your cluster (the value of `$CLUSTER_NAME`). For each cluster, `activeGateway: true` ensures that the Istio operator spec for the gateway is deployed and actively used by the `istiod` control plane.
      - Gateway name and namespace: The default name for the gateway is set to `istio-ingressgateway`, and the default namespace for the gateway is set to `gloo-mesh-gateways`. If the `gloo-mesh-gateways` namespace does not already exist, it is created in each workload cluster for you. Note: To prevent conflicts, be sure to choose a different name or namespace than your existing gateway. For example, if your existing gateway is named `istio-ingressgateway` and deployed in a namespace such as `istio-gateways`, you can still name the new gateway `istio-ingressgateway`, but you must deploy it in a different namespace, such as `gloo-mesh-gateways`.
   4. Apply the `GatewayLifecycleManager` resource to your management cluster.

      ```sh
      kubectl apply -f gm-ingress-gateway-values.yaml --context $MGMT_CONTEXT
      ```
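      Optionally, confirm that the `GatewayLifecycleManager` resources that you applied exist, assuming they were created in the `gloo-mesh` namespace like the `IstioLifecycleManager`:

      ```sh
      kubectl get GatewayLifecycleManager -n gloo-mesh --context $MGMT_CONTEXT
      ```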
## Verify and test the new managed installations

Verify that the new control plane and gateways are deployed to your workload clusters. Then test them by deploying workloads with a label for the new revision and generating traffic to those workloads.
1. In each workload cluster, verify that the namespaces for your managed Istio installations are created.

   ```sh
   kubectl get ns --context $REMOTE_CONTEXT1
   ```

   For example, the `gm-iop-1-17-2` and `gloo-mesh-gateways` namespaces are created alongside the namespaces that you might already use for your existing Istio installations (such as `istio-system` and `istio-gateways`):

   ```
   NAME                 STATUS   AGE
   default              Active   56m
   gloo-mesh            Active   36m
   gm-iop-1-17-2        Active   91s
   gloo-mesh-gateways   Active   90s
   istio-gateways       Active   50m
   istio-system         Active   50m
   kube-node-lease      Active   57m
   kube-public          Active   57m
   kube-system          Active   57m
   ```
2. In each namespace, verify that the Istio resources for the new revision are successfully installed.

   ```sh
   kubectl get all -n gm-iop-1-17-2 --context $REMOTE_CONTEXT1
   ```

   Example output:

   ```
   NAME                                         READY   STATUS    RESTARTS   AGE
   pod/istio-operator-1-17-2-678fd95cc6-ltbvl   1/1     Running   0          4m12s

   NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
   service/istio-operator-1-17-2   ClusterIP   10.204.15.247   <none>        8383/TCP   4m12s

   NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
   deployment.apps/istio-operator-1-17-2   1/1     1            1           4m12s

   NAME                                               DESIRED   CURRENT   READY   AGE
   replicaset.apps/istio-operator-1-17-2-678fd95cc6   1         1         1       4m12s
   ```

   ```sh
   kubectl get all -n istio-system --context $REMOTE_CONTEXT1
   ```

   Example output. Note that your existing Istio control plane pods might be deployed to this namespace too.

   ```
   NAME                                READY   STATUS    RESTARTS   AGE
   pod/istiod-1-17-2-b65676555-g2vmr   1/1     Running   0          8m57s

   NAME                    TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                                 AGE
   service/istiod-1-17-2   ClusterIP   10.204.6.56   <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP   8m56s

   NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
   deployment.apps/istiod-1-17-2   1/1     1            1           8m57s

   NAME                                      DESIRED   CURRENT   READY   AGE
   replicaset.apps/istiod-1-17-2-b65676555   1         1         1       8m57s

   NAME                                                REFERENCE                  TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
   horizontalpodautoscaler.autoscaling/istiod-1-17-2   Deployment/istiod-1-17-2   1%/80%    1         5         1          8m58s
   ```

   ```sh
   kubectl get all -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
   ```

   Example output. Your output might vary depending on which gateways you installed, and the gateways might take a few minutes to be created.

   ```
   NAME                                                READY   STATUS    RESTARTS   AGE
   pod/istio-eastwestgateway-1-17-2-66f464ff44-qlhfk   1/1     Running   0          2m6s
   pod/istio-ingressgateway-1-17-2-77d5f76bc8-j6qkp    1/1     Running   0          2m18s

   NAME                            TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                      AGE
   service/istio-eastwestgateway   LoadBalancer   10.204.4.172   34.86.225.164    15021:30889/TCP,15443:32489/TCP              2m5s
   service/istio-ingressgateway    LoadBalancer   10.44.4.140    34.150.235.221   15021:31321/TCP,80:32525/TCP,443:31826/TCP   2m16s

   NAME                                           READY   UP-TO-DATE   AVAILABLE   AGE
   deployment.apps/istio-eastwestgateway-1-17-2   1/1     1            1           2m6s
   deployment.apps/istio-ingressgateway-1-17-2    1/1     1            1           2m18s

   NAME                                                      DESIRED   CURRENT   READY   AGE
   replicaset.apps/istio-eastwestgateway-1-17-2-66f464ff44   1         1         1       2m6s
   replicaset.apps/istio-ingressgateway-1-17-2-77d5f76bc8    1         1         1       2m18s

   NAME                                                               REFERENCE                                 TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
   horizontalpodautoscaler.autoscaling/istio-eastwestgateway-1-17-2   Deployment/istio-eastwestgateway-1-17-2   <unknown>/80%   1         5         0          2m7s
   horizontalpodautoscaler.autoscaling/istio-ingressgateway-1-17-2    Deployment/istio-ingressgateway-1-17-2    4%/80%          1         5         1          2m19s
   ```
3. Verify that your workloads and existing gateways still point to the old revision, and that only the new gateway points to the new revision.

   ```sh
   istioctl proxy-status --context $REMOTE_CONTEXT1
   ```

   In this example output, the Bookinfo apps and the existing east-west gateway in `cluster1` still point to the existing Istio installation that uses version `1.16.4`. Only the new east-west gateway points to the managed Istio installation that uses version `1.17.2-solo` and revision `1-17-2`.

   ```
   NAME                                                              CLUSTER    ...   ISTIOD                           VERSION
   details-v1-6758dd9d8d-rh4db.bookinfo                              cluster1   ...   istiod-66d54b865-6b6zt           1.16.4
   istio-eastwestgateway-575b697f9-49v4c.istio-gateways              cluster1   ...   istiod-66d54b865-6b6zt           1.16.4
   istio-eastwestgateway-1-17-2-575b697f9-49v4c.gloo-mesh-gateways   cluster1   ...   istiod-1-17-2-5b7b9df586-95sq6   1.17.2-solo
   productpage-v1-b4cf67f67-s5lsh.bookinfo                           cluster1   ...   istiod-66d54b865-6b6zt           1.16.4
   ratings-v1-f849dc6d-wqdc8.bookinfo                                cluster1   ...   istiod-66d54b865-6b6zt           1.16.4
   reviews-v1-74fb8fdbd8-z8bzc.bookinfo                              cluster1   ...   istiod-66d54b865-6b6zt           1.16.4
   reviews-v2-58d564d4db-g8jzr.bookinfo                              cluster1   ...   istiod-66d54b865-6b6zt           1.16.4
   ```
4. Deploy test workloads with a label for the new revision, such as `istio.io/rev=1-17-2`.
   1. In both workload clusters, create the `petstore` namespace, and label it with the new revision so that apps in the namespace are injected with sidecars for the managed control plane.

      ```sh
      kubectl --context $REMOTE_CONTEXT1 create ns petstore
      kubectl --context $REMOTE_CONTEXT1 label ns petstore istio.io/rev=$REVISION
      ```

      ```sh
      kubectl --context $REMOTE_CONTEXT2 create ns petstore
      kubectl --context $REMOTE_CONTEXT2 label ns petstore istio.io/rev=$REVISION
      ```

   2. In the first workload cluster, deploy the `petstore` app.

      ```sh
      kubectl apply --context $REMOTE_CONTEXT1 -n petstore -f - <<EOF
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        labels:
          app: petstore
        name: petstore
        namespace: petstore
      spec:
        selector:
          matchLabels:
            app: petstore
        replicas: 1
        template:
          metadata:
            labels:
              app: petstore
          spec:
            containers:
            - image: openapitools/openapi-petstore
              name: petstore
              env:
              - name: DISABLE_OAUTH
                value: "1"
              - name: DISABLE_API_KEY
                value: "1"
              ports:
              - containerPort: 8080
                name: http
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: petstore
        namespace: petstore
        labels:
          service: petstore
      spec:
        ports:
        - port: 8080
          protocol: TCP
        selector:
          app: petstore
      EOF
      ```
5. Generate traffic through the new gateways to the test workloads.

   East-west (cross-cluster) routing:
   1. Create a Gloo root trust policy to ensure that services in each workload cluster can communicate securely. The root trust policy sets up the domain and certificates to establish a shared trust model across multiple clusters in your service mesh.

      ```sh
      kubectl apply --context $MGMT_CONTEXT -f - <<EOF
      apiVersion: admin.gloo.solo.io/v2
      kind: RootTrustPolicy
      metadata:
        name: root-trust
        namespace: gloo-mesh
      spec:
        config:
          autoRestartPods: true
          mgmtServerCa:
            generated: {}
      EOF
      ```

   2. Create a Gloo virtual destination for the petstore app.

      ```sh
      kubectl apply --context $REMOTE_CONTEXT1 -n petstore -f- <<EOF
      apiVersion: networking.gloo.solo.io/v2
      kind: VirtualDestination
      metadata:
        name: petstore-vd
        namespace: petstore
      spec:
        hosts:
        # Arbitrary, internal-only hostname assigned to the endpoint
        - petstore.mesh.internal.com
        ports:
        - number: 8080
          protocol: HTTP
          targetPort:
            number: 8080
        services:
        - labels:
            app: petstore
      EOF
      ```

   3. Create a curl pod in the second cluster.

      ```sh
      kubectl run -it -n petstore --context $REMOTE_CONTEXT2 curl \
        --image=curlimages/curl:7.73.0 --rm -- sh
      ```

   4. Send a request to the petstore app's virtual destination hostname.

      ```sh
      curl http://petstore.mesh.internal.com/ -v
      ```

      Example output:

      ```
      *   Trying 45.33.2.79:80...
      * Connected to petstore.mesh.internal.com (45.33.2.79) port 80 (#0)
      > GET / HTTP/1.1
      > Host: petstore.mesh.internal.com
      > User-Agent: curl/7.73.0-DEV
      > Accept: */*
      >
      * Mark bundle as not supporting multiuse
      < HTTP/1.1 200 OK
      < server: envoy
      < date: Fri, 28 Oct 2022 20:11:00 GMT
      < content-type: application/octet-stream,text/html
      < content-length: 134
      < x-envoy-upstream-service-time: 68
      <
      * Connection #0 to host petstore.mesh.internal.com left intact
      <html><head><title>petstore.mesh.internal.com</title></head><body><h1>petstore.mesh.internal.com</h1><p>Coming soon.</p></body></html>
      ```

   5. Exit the temporary pod. The pod deletes itself.

      ```sh
      exit
      ```

   North-south (ingress) routing: If you also have a Gloo Gateway license and deployed an ingress gateway for the new revision, you can test traffic to your workloads by using the ingress gateway.
   1. Save the external address of the ingress gateway for the new revision. Depending on your environment, the load balancer is exposed with an IP address or a hostname.

      ```sh
      export INGRESS_GW_IP=$(kubectl get svc --context $REMOTE_CONTEXT1 -n gloo-mesh-gateways istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
      echo $INGRESS_GW_IP
      ```

      If your load balancer uses a hostname instead:

      ```sh
      export INGRESS_GW_IP=$(kubectl get svc --context $REMOTE_CONTEXT1 -n gloo-mesh-gateways istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
      echo $INGRESS_GW_IP
      ```

   2. Apply a virtual gateway to the ingress gateway for the new revision.

      ```sh
      kubectl apply --context $REMOTE_CONTEXT1 -f- <<EOF
      apiVersion: networking.gloo.solo.io/v2
      kind: VirtualGateway
      metadata:
        name: istio-ingressgateway-$REVISION
        namespace: petstore
      spec:
        listeners:
        - http: {}
          port:
            number: 80
        workloads:
        - selector:
            labels:
              istio: ingressgateway
            cluster: ${REMOTE_CLUSTER1}
            namespace: gloo-mesh-gateways
      EOF
      ```

   3. Apply a route table to allow requests to the `/api/pets` path of the petstore app.

      ```sh
      kubectl apply --context $REMOTE_CONTEXT1 -f- <<EOF
      apiVersion: networking.gloo.solo.io/v2
      kind: RouteTable
      metadata:
        name: petstore-routes
        namespace: petstore
      spec:
        hosts:
        - '*'
        virtualGateways:
        - name: istio-ingressgateway-$REVISION
          cluster: ${REMOTE_CLUSTER1}
          namespace: petstore
        http:
        - name: petstore
          matchers:
          - uri:
              prefix: /api/pets
          forwardTo:
            destinations:
            - ref:
                name: petstore
                namespace: petstore
                cluster: ${REMOTE_CLUSTER1}
              port:
                number: 8080
      EOF
      ```

   4. Test the ingress gateway by sending a request to the petstore service.

      ```sh
      curl http://$INGRESS_GW_IP:80/api/pets
      ```
## Activate the managed installations

After you finish testing, change the new control planes to be active, and roll out a restart to data plane workloads so that they are managed by the new control planes. Then, you can update service selectors or update internal or external DNS entries to point to the new gateways. You can also optionally uninstall the old Istio installations.
1. Switch to the new `istiod` control plane revision by changing `defaultRevision` to `true`.

   ```sh
   kubectl edit IstioLifecycleManager -n gloo-mesh --context $MGMT_CONTEXT istiod-control-plane
   ```

   Example:

   ```yaml
   apiVersion: admin.gloo.solo.io/v2
   kind: IstioLifecycleManager
   metadata:
     name: istiod-control-plane
     namespace: gloo-mesh
   spec:
     installations:
     - revision: 1-17-2
       clusters:
       - name: cluster1
         # Set this field to TRUE
         defaultRevision: true
       - name: cluster2
         # Set this field to TRUE
         defaultRevision: true
       istioOperatorSpec:
         profile: minimal
         ...
   ```
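   If you prefer a non-interactive change, you can flip the same fields with a JSON patch. A sketch, assuming the installation and cluster order shown in the example above:

   ```sh
   kubectl patch IstioLifecycleManager istiod-control-plane -n gloo-mesh --context $MGMT_CONTEXT \
     --type=json \
     -p='[
       {"op": "replace", "path": "/spec/installations/0/clusters/0/defaultRevision", "value": true},
       {"op": "replace", "path": "/spec/installations/0/clusters/1/defaultRevision", "value": true}
     ]'
   ```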
2. In each workload cluster, roll out a restart to your workload apps so that they are managed by the new control planes.
   1. Update any Istio-managed apps by rolling out restarts. The Istio sidecars for each microservice are updated to use the new Istio version. Make sure that you restart only one microservice at a time. For example, in the following commands to update the Bookinfo microservices, 20 seconds elapse between each restart to ensure that the pods have time to start running.

      ```sh
      kubectl rollout restart deployment -n bookinfo details-v1 --context $REMOTE_CONTEXT1
      sleep 20s
      kubectl rollout restart deployment -n bookinfo ratings-v1 --context $REMOTE_CONTEXT1
      sleep 20s
      kubectl rollout restart deployment -n bookinfo productpage-v1 --context $REMOTE_CONTEXT1
      sleep 20s
      kubectl rollout restart deployment -n bookinfo reviews-v1 --context $REMOTE_CONTEXT1
      sleep 20s
      kubectl rollout restart deployment -n bookinfo reviews-v2 --context $REMOTE_CONTEXT1
      sleep 20s
      kubectl rollout restart deployment -n bookinfo reviews-v3 --context $REMOTE_CONTEXT2
      sleep 20s
      kubectl rollout restart deployment -n bookinfo ratings-v1 --context $REMOTE_CONTEXT2
      sleep 20s
      ```
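      As an alternative to fixed `sleep` intervals, you can wait for each rollout to complete before moving on. A sketch, assuming the same Bookinfo deployments as above:

      ```sh
      # Restart each deployment in the first cluster and wait until it is ready.
      for deploy in details-v1 ratings-v1 productpage-v1 reviews-v1 reviews-v2; do
        kubectl rollout restart deployment -n bookinfo $deploy --context $REMOTE_CONTEXT1
        kubectl rollout status deployment -n bookinfo $deploy --context $REMOTE_CONTEXT1 --timeout=120s
      done
      # Repeat for the deployments in the second cluster (reviews-v3, ratings-v1).
      ```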
   2. Verify that your workloads and new gateways point to the new revision.

      ```sh
      istioctl proxy-status --context $REMOTE_CONTEXT1
      ```

      Example output:

      ```
      NAME                                                              CLUSTER    ...   ISTIOD                           VERSION
      details-v1-7b6df9d8c8-s6kg5.bookinfo                              cluster1   ...   istiod-1-17-2-7c8f6fd4c4-m9k9t   1.17.2-solo
      istio-eastwestgateway-1-17-2-bdc4fd65f-ftmz9.gloo-mesh-gateways   cluster1   ...   istiod-1-17-2-6495985689-rkwwd   1.17.2-solo
      productpage-v1-bb494b7d7-xbtxr.bookinfo                           cluster1   ...   istiod-1-17-2-7c8f6fd4c4-m9k9t   1.17.2-solo
      ratings-v1-55b478cfb6-wv2m5.bookinfo                              cluster1   ...   istiod-1-17-2-7c8f6fd4c4-m9k9t   1.17.2-solo
      reviews-v1-6dfcc9fc7d-7k6qh.bookinfo                              cluster1   ...   istiod-1-17-2-7c8f6fd4c4-m9k9t   1.17.2-solo
      reviews-v2-7dddd799b5-m5n2z.bookinfo                              cluster1   ...   istiod-1-17-2-7c8f6fd4c4-m9k9t   1.17.2-solo
      ```
3. If you use your own load balancer services for gateways, update the service selectors to point to the gateways for the new revision. Alternatively, if you use the load balancer services that are deployed by default, update any internal or external DNS entries to point to the new gateway IP addresses.
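   For example, if you manage your own load balancer service, the update might look like the following sketch. The service name here is hypothetical, and the selector labels are an assumption; check the labels on your new gateway pods first (for example, with `kubectl get pods -n gloo-mesh-gateways --show-labels --context $REMOTE_CONTEXT1`).

   ```yaml
   apiVersion: v1
   kind: Service
   metadata:
     name: my-ingress-lb              # hypothetical, user-managed load balancer service
     namespace: gloo-mesh-gateways
   spec:
     type: LoadBalancer
     selector:
       istio: ingressgateway          # assumption: verify against your new gateway pod labels
       revision: 1-17-2
     ports:
     - name: http2
       port: 80
       targetPort: 8080               # standard Istio gateway container port for HTTP
     - name: https
       port: 443
       targetPort: 8443               # standard Istio gateway container port for HTTPS
   ```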
4. Uninstall the old Istio installations from each workload cluster. The uninstallation process varies depending on your original installation method. For example, if you created Istio operators in each workload cluster, the uninstallation process might follow these general steps:
   1. Delete the `IstioOperator` resources for the control plane, ingress gateway, and east-west gateway.

      ```sh
      kubectl delete -n istio-system --context $REMOTE_CONTEXT1 IstioOperator <istiod-control-plane>
      kubectl delete -n istio-system --context $REMOTE_CONTEXT1 IstioOperator <ingress-gateway>
      kubectl delete -n istio-system --context $REMOTE_CONTEXT1 IstioOperator <east-west-gateway>
      ```

      ```sh
      kubectl delete -n istio-system --context $REMOTE_CONTEXT2 IstioOperator <istiod-control-plane>
      kubectl delete -n istio-system --context $REMOTE_CONTEXT2 IstioOperator <ingress-gateway>
      kubectl delete -n istio-system --context $REMOTE_CONTEXT2 IstioOperator <east-west-gateway>
      ```

   2. Uninstall the control plane for the old revision. If you did not use revisions, specify `default` in the revision flag.

      ```sh
      istioctl uninstall --context $REMOTE_CONTEXT1 --revision <revision_or_default>
      ```

      ```sh
      istioctl uninstall --context $REMOTE_CONTEXT2 --revision <revision_or_default>
      ```

   3. Delete the operator and the operator's ClusterIP service.

      ```sh
      kubectl delete -n istio-operator --context $REMOTE_CONTEXT1 deploy <istio-operator>
      kubectl delete -n istio-operator --context $REMOTE_CONTEXT1 svc <istio-operator>
      ```

      ```sh
      kubectl delete -n istio-operator --context $REMOTE_CONTEXT2 deploy <istio-operator>
      kubectl delete -n istio-operator --context $REMOTE_CONTEXT2 svc <istio-operator>
      ```

   4. Delete any unused namespaces that were used only by the old Istio installation. WARNING: Do not delete namespaces that the Istio lifecycle manager uses, such as `istio-system`, `gloo-mesh-gateways`, or `gm-iop-1-17-2`. Check the contents of each namespace before you delete it.

      ```sh
      kubectl delete namespace istio-operator --context $REMOTE_CONTEXT1
      kubectl delete namespace istio-ingress --context $REMOTE_CONTEXT1
      kubectl delete namespace istio-eastwest --context $REMOTE_CONTEXT1
      ```

      ```sh
      kubectl delete namespace istio-operator --context $REMOTE_CONTEXT2
      kubectl delete namespace istio-ingress --context $REMOTE_CONTEXT2
      kubectl delete namespace istio-eastwest --context $REMOTE_CONTEXT2
      ```

   5. Repeat these steps for each workload cluster.
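   After the uninstall, you can confirm that only the new revision's control plane remains and that every proxy still points to it:

   ```sh
   kubectl get pods -n istio-system --context $REMOTE_CONTEXT1
   istioctl proxy-status --context $REMOTE_CONTEXT1
   ```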
5. Optional: Delete any workloads that you used for testing, such as the petstore apps.

   ```sh
   kubectl delete ns petstore --context $REMOTE_CONTEXT1
   kubectl delete ns petstore --context $REMOTE_CONTEXT2
   ```
## Next steps

- Now that you have Gloo Mesh Enterprise and Istio installed, you can use Gloo Mesh to manage your Istio service mesh resources. You don't need to directly configure any Istio resources going forward.
- When it's time to upgrade Istio, you can use Gloo Mesh to upgrade Gloo Mesh-managed Istio installations.