Switch from unmanaged to managed Istio installations
Use the Istio lifecycle manager to switch from your existing, unmanaged Istio installations to Gloo-managed Istio installations.
Introduction
If you manually maintain Istio installations, you can migrate those installations to the Gloo Mesh Enterprise Istio and gateway lifecycle management system. Then, you can use Gloo to manage the configuration and maintenance of the Istio installations. By using Gloo-managed installations, you no longer need to manually install and manage the `istiod` control plane and gateways in each workload cluster. Instead, you provide the Istio configuration to Gloo, and Gloo translates this configuration into managed `istiod` control planes and gateways for you in the workload clusters.
Considerations
Before you follow this takeover process, review the following important considerations.
- Revisions:
  - This process involves creating `IstioLifecycleManager` and `GatewayLifecycleManager` resources that use a different revision than your existing Istio installations. If you do not currently use revisions, no conflict exists between the new installations and existing installations. If you do currently use revisions, be sure to choose a different revision for the new installations than your existing installations.
  - If you plan to run multiple revisions of Istio in your cluster and use `discoverySelectors` in each revision to discover the resources in specific namespaces, enable the `glooMgmtServer.extraEnvs.IGNORE_REVISIONS_FOR_VIRTUAL_DESTINATION_TRANSLATION` environment variable on the Gloo management server. For more information, see Multiple Istio revisions in the same cluster.
- Gateways: To prevent conflicts, be sure to choose a different name or namespace for the new managed gateways than your existing gateways. For example, if your existing gateway is named `istio-ingressgateway` and deployed in a namespace such as `istio-gateways`, you can still name the new gateway `istio-ingressgateway`, but you must deploy it in a different namespace, such as `gloo-mesh-gateways`.
- Testing: Always test this process in a representative test environment before you attempt it in a production setup.
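Before you choose a new revision, it can help to check which revisions, if any, your existing installation uses. A minimal sketch, assuming `kubectl` access to one of your workload clusters:

```shell
# List istiod deployments along with any revision labels on them
kubectl get deployments -n istio-system -L istio.io/rev

# List the sidecar injection webhooks; revisioned installations
# typically have one webhook configuration per revision
kubectl get mutatingwebhookconfigurations -l app=sidecar-injector
```

If the `ISTIO.IO/REV` column is empty, your installation does not use revisions, and any new revision name avoids a conflict.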
For FIPS-compliant Solo distributions of Istio 1.17.2 and 1.16.4, you must use the `-patch1` versions of the latest Istio builds published by Solo, such as `1.17.2-patch1-solo-fips` for the Solo distribution of Istio 1.17. These patch versions fix a FIPS-related issue introduced in the upstream Envoy code. In 1.17.3 and later, FIPS compliance is available in the `-fips` tags of regular Solo distributions of Istio, such as `1.17.3-solo-fips`.
Before you begin
Follow the get started or advanced installation guide to install the Gloo Mesh Enterprise components.
Review Supported versions to choose the Solo distribution of Istio that you want to use, and save the version information in the following environment variables.
- `REPO`: The repo key for the Solo distribution of Istio, which you can get by logging in to the Support Center and reviewing the Istio images built by Solo.io support article.
- `ISTIO_IMAGE`: The version that you want to use with the `solo` tag, such as `1.18.7-patch3-solo`. You can optionally append other tags of Solo distributions of Istio as needed.
- `REVISION`: Take the Istio major and minor versions and replace the periods with hyphens, such as `1-18`.

```shell
export REPO=<repo-key>
export ISTIO_IMAGE=1.18.7-patch3-solo
export REVISION=1-18
```
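If you script these steps, you can derive the revision from the image tag instead of typing it by hand. A small sketch, assuming the tag begins with the plain Istio version as in the example above:

```shell
ISTIO_IMAGE=1.18.7-patch3-solo
# Keep only the major.minor part of the version and replace the period with a hyphen
REVISION=$(echo "$ISTIO_IMAGE" | cut -d. -f1,2 | tr . -)
echo "$REVISION"   # 1-18
```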
Install `istioctl`, the Istio CLI tool. Note that the download script reads an `ISTIO_VERSION` environment variable, which is the plain Istio version without any Solo tags, such as `1.18.7`.

```shell
export ISTIO_VERSION=1.18.7
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=${ISTIO_VERSION} sh -
cd istio-$ISTIO_VERSION
export PATH=$PWD/bin:$PATH
```
Multicluster
Use Gloo Mesh Enterprise to deploy and manage Istio installations across multiple clusters. The takeover process follows these general steps:
- Install managed installations with the same settings as your unmanaged Istio installations. You create `IstioLifecycleManager` and `GatewayLifecycleManager` resources in the management cluster that use a different revision than the existing Istio installations in your workload clusters. Gloo then deploys `istiod` control planes and gateways for the new revision to each workload cluster, but the new, managed control planes and gateways are not active at deployment time.
- Test the new control planes and gateways by deploying workloads with a label for the new revision and generating traffic to those workloads.
- Change the new control planes to be active, and roll out a restart to data plane workloads so that the new control planes manage them.
- Update service selectors or update internal and external DNS entries to point to the new gateways.
- Uninstall the old Istio installations.
Deploy
Use Gloo Mesh Enterprise to deploy and manage Istio installations in each workload cluster.
istiod control planes

Prepare an `IstioLifecycleManager` CR to manage `istiod` control planes.

- Download the example file, `istiod.yaml`, which contains a basic `IstioLifecycleManager` configuration for the control plane.
- Update the example file with the environment variables that you previously set. Save the updated file as `istiod-values.yaml`. For example, you can run a terminal command to substitute values:
  ```shell
  envsubst < istiod.yaml > istiod-values.yaml
  ```
- Verify that the configuration is correct. For example, in `spec.installations.clusters`, verify that entries exist for each workload cluster name. You can also further edit the file to replicate the settings in your existing Istio installation. For more information, see the API reference.
  ```shell
  open istiod-values.yaml
  ```
- Apply the `IstioLifecycleManager` resource to your management cluster.
  ```shell
  kubectl apply -f istiod-values.yaml --context $MGMT_CONTEXT
  ```
- In each workload cluster, verify that the `istiod` pod for the new revision has a status of `Running`. Note that these new, managed control planes are not currently the active Istio installations.
  ```shell
  kubectl get pods -n istio-system --context $REMOTE_CONTEXT1
  kubectl get pods -n istio-system --context $REMOTE_CONTEXT2
  ```
  Example output for one cluster:
  ```
  NAME                          READY   STATUS    RESTARTS   AGE
  istiod-1-18-b65676555-g2vmr   1/1     Running   0          1m57s
  istiod-1-17-yt72566r9-8j5tr   1/1     Running   0          23d
  ```
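After the new `istiod` pods are running, you can also confirm that the sidecar injection webhook for the new revision was created. A quick check, assuming the standard Istio revision label:

```shell
# Each revisioned istiod installs a mutating webhook labeled with its revision
kubectl get mutatingwebhookconfigurations -l istio.io/rev=$REVISION --context $REMOTE_CONTEXT1
```

An empty result here usually means the control plane for the new revision did not finish installing.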
East-west gateways
Prepare a `GatewayLifecycleManager` custom resource to manage the east-west gateways.

- Download the example file.
  ```shell
  curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/gm-ew-gateway.yaml > ew-gateway.yaml
  ```
- Update the example file with the environment variables that you previously set. Save the updated file as `ew-gateway-values.yaml`. For example, you can run a terminal command to substitute values:
  ```shell
  envsubst < ew-gateway.yaml > ew-gateway-values.yaml
  ```
- Verify that the configuration is correct. You can also further edit the file to replicate the settings in your existing Istio gateway installation. For more information, see the API reference.
  ```shell
  open ew-gateway-values.yaml
  ```
- Apply the `GatewayLifecycleManager` CR to your management cluster.
  ```shell
  kubectl apply -f ew-gateway-values.yaml --context $MGMT_CONTEXT
  ```
- In each workload cluster, verify that the east-west gateway pod for the new revision is running in the `gloo-mesh-gateways` namespace.
  ```shell
  kubectl get pods -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
  kubectl get pods -n gloo-mesh-gateways --context $REMOTE_CONTEXT2
  ```
  Example output for one cluster:
  ```
  NAME                                     READY   STATUS    RESTARTS   AGE
  istio-eastwestgateway-665d46686f-nhh52   1/1     Running   0          106s
  ```
- In each workload cluster, verify that the load balancer service for the new revision has an external address.
  ```shell
  kubectl get svc -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
  kubectl get svc -n gloo-mesh-gateways --context $REMOTE_CONTEXT2
  ```
  Example output for one cluster:
  ```
  NAME                    TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                                                                      AGE
  istio-eastwestgateway   LoadBalancer   10.96.252.49   <externalip>   15021:32378/TCP,80:30315/TCP,443:32186/TCP,31400:30313/TCP,15443:31632/TCP   2m2s
  ```
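Because the service's port list includes Istio's standard status port 15021, you can optionally probe gateway readiness through the load balancer. A sketch, assuming the external address is reachable from your machine:

```shell
# Look up the external address of the east-west gateway service
EW_ADDRESS=$(kubectl get svc -n gloo-mesh-gateways istio-eastwestgateway --context $REMOTE_CONTEXT1 \
  -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")

# /healthz/ready on the status port returns 200 when the gateway is ready
curl -s -o /dev/null -w "%{http_code}\n" "http://${EW_ADDRESS}:15021/healthz/ready"
```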
AWS clusters: For the Elastic Load Balancer (ELB) instance that is automatically created for you to back the ingress gateway service, verify that the health check shows a healthy state. Gloo Mesh Core configures the ingress gateway to listen on HTTPS port 15443. However, when the ELB is created, the first port that is defined in the Kubernetes service manifest is used to perform the health check. This port might be different from the port that Gloo configures. For your ELB health check to pass, you might need to configure the load balancer to run the health check on port 15443.
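The health check port change described in this note can be applied with the AWS CLI. A sketch for an NLB target group, assuming you have already looked up the target group ARN for the gateway service (the ARN below is a placeholder):

```shell
# Point the target group's health check at the gateway's HTTPS port
aws elbv2 modify-target-group \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/example/abc123 \
  --health-check-port 15443
```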
Optional: Ingress gateways
If you want to allow traffic from outside the cluster to enter your mesh, create a `GatewayLifecycleManager` resource to deploy and manage an ingress gateway. The ingress gateway allows you to specify basic routing rules for how to match and forward incoming traffic to a workload in the mesh. However, to also apply policies, such as rate limits, external authentication, or a Web Application Firewall to the gateway, you must have a Gloo Mesh Gateway license. For more information about Gloo Mesh Gateway, see the docs. If you want a service mesh-only environment without ingress, you can skip this step.

- Download the example file.
  ```shell
  curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/gm-ingress-gateway.yaml > ingress-gateway.yaml
  ```
- Update the example file with the environment variables that you previously set. Save the updated file as `ingress-gateway-values.yaml`. For example, you can run a terminal command to substitute values:
  ```shell
  envsubst < ingress-gateway.yaml > ingress-gateway-values.yaml
  ```
- Verify that the configuration is correct. You can also further edit the file to replicate the settings in your existing Istio gateway installation. For more information, see the API reference.
  ```shell
  open ingress-gateway-values.yaml
  ```
  - You can add cloud provider-specific load balancer annotations to the `istioOperatorSpec.components.ingressGateways.k8s` section, such as the following AWS annotations:
    ```yaml
    ...
    k8s:
      service:
        ...
      serviceAnnotations:
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
        service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
        service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
        service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
        service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:<cert>"
        service.beta.kubernetes.io/aws-load-balancer-type: external
    ```
- Apply the `GatewayLifecycleManager` CR to your management cluster.
  ```shell
  kubectl apply -f ingress-gateway-values.yaml --context $MGMT_CONTEXT
  ```
- In each workload cluster, verify that the ingress gateway pod for the new revision is running in the `gloo-mesh-gateways` namespace.
  ```shell
  kubectl get pods -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
  kubectl get pods -n gloo-mesh-gateways --context $REMOTE_CONTEXT2
  ```
  Example output for one cluster:
  ```
  NAME                                    READY   STATUS    RESTARTS   AGE
  istio-ingressgateway-665d46686f-nhh52   1/1     Running   0          106s
  ```
- In each workload cluster, verify that the load balancer service for the new revision has an external address.
  ```shell
  kubectl get svc -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
  kubectl get svc -n gloo-mesh-gateways --context $REMOTE_CONTEXT2
  ```
  Example output for one cluster:
  ```
  NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                                                                      AGE
  istio-ingressgateway   LoadBalancer   10.96.252.49   <externalip>   15021:32378/TCP,80:30315/TCP,443:32186/TCP,31400:30313/TCP,15443:31632/TCP   2m2s
  ```
AWS clusters: For the Elastic Load Balancer (ELB) instance that is automatically created for you to back the ingress gateway service, verify that the health check shows a healthy state. Gloo Mesh Core configures the ingress gateway to listen on HTTPS port 15443. However, when the ELB is created, the first port that is defined in the Kubernetes service manifest is used to perform the health check. This port might be different from the port that Gloo configures. For your ELB health check to pass, you might need to configure the load balancer to run the health check on port 15443.
Optional: Egress gateways
Prepare a `GatewayLifecycleManager` resource to deploy and manage egress gateways.

- Download the example file, `egress-gateway.yaml`, which contains a basic `GatewayLifecycleManager` configuration for an egress gateway.
  ```shell
  curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/gm-egress-gateway.yaml > egress-gateway.yaml
  ```
- Update the example file with the environment variables that you previously set. Save the updated file as `egress-gateway-values.yaml`. For example, you can run a terminal command to substitute values:
  ```shell
  envsubst < egress-gateway.yaml > egress-gateway-values.yaml
  ```
- Verify that the configuration is correct. For example, in `spec.installations.clusters`, verify that entries exist for each workload cluster name. You can also further edit the file to replicate the settings in your existing Istio installation. For more information, see the API reference.
  ```shell
  open egress-gateway-values.yaml
  ```
- Apply the `GatewayLifecycleManager` resource to your management cluster.
  ```shell
  kubectl apply -f egress-gateway-values.yaml --context $MGMT_CONTEXT
  ```
- In each workload cluster, verify that the egress gateway pod for the new revision is running in the `gloo-mesh-gateways` namespace.
  ```shell
  kubectl get pods -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
  kubectl get pods -n gloo-mesh-gateways --context $REMOTE_CONTEXT2
  ```
  Example output for one cluster:
  ```
  NAME                                   READY   STATUS    RESTARTS   AGE
  istio-egressgateway-665d46686f-nhh52   1/1     Running   0          106s
  ```
- In each workload cluster, verify that the load balancer service for the new revision has an external address.
  ```shell
  kubectl get svc -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
  kubectl get svc -n gloo-mesh-gateways --context $REMOTE_CONTEXT2
  ```
  Example output for one cluster:
  ```
  NAME                  TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                                                                      AGE
  istio-egressgateway   LoadBalancer   10.96.252.49   <externalip>   15021:32378/TCP,80:30315/TCP,443:32186/TCP,31400:30313/TCP,15443:31632/TCP   2m2s
  ```
AWS clusters: For the Elastic Load Balancer (ELB) instance that is automatically created for you to back the ingress gateway service, verify that the health check shows a healthy state. Gloo Mesh Core configures the ingress gateway to listen on HTTPS port 15443. However, when the ELB is created, the first port that is defined in the Kubernetes service manifest is used to perform the health check. This port might be different from the port that Gloo configures. For your ELB health check to pass, you might need to configure the load balancer to run the health check on port 15443.
Test
Test the new Istio installation by deploying the Istio sample app, Bookinfo, and updating its sidecars from the old revision to the new revision.

- Create the `bookinfo` namespace in one workload cluster.
  ```shell
  kubectl create ns bookinfo --context $REMOTE_CONTEXT1
  ```
- Label the namespace for Istio injection with the old revision so that the old revision's control plane manages the services.
  ```shell
  kubectl label ns bookinfo istio.io/rev=<old_revision> --context $REMOTE_CONTEXT1
  ```
  If you did not previously use revision labels for your apps, you can instead run `kubectl label ns bookinfo istio-injection=enabled --context $REMOTE_CONTEXT1`.
- Deploy the Bookinfo app to your workload cluster.
  ```shell
  # deploy bookinfo app components for all versions less than v3
  kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.18.7/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app,version notin (v3)' --context $REMOTE_CONTEXT1
  # deploy an updated product page with extra container utilities such as 'curl' and 'netcat'
  kubectl -n bookinfo apply -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/policy-demo/productpage-with-curl.yaml --context $REMOTE_CONTEXT1
  # deploy all bookinfo service accounts
  kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.18.7/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account' --context $REMOTE_CONTEXT1
  ```
- Verify that the Bookinfo app deployed successfully.
  ```shell
  kubectl get pods -n bookinfo --context $REMOTE_CONTEXT1
  ```
- Verify that Bookinfo still points to the old revision.
  ```shell
  istioctl proxy-status --context $REMOTE_CONTEXT1 | grep "\.bookinfo "
  ```
  In this example output, the Bookinfo apps in `cluster1` still point to the existing Istio installation that uses version 1.17.8.
  ```
  NAME                                      CLUSTER    ...   ISTIOD                   VERSION
  details-v1-6758dd9d8d-rh4db.bookinfo      cluster1   ...   istiod-66d54b865-6b6zt   1.17.8
  productpage-v1-b4cf67f67-s5lsh.bookinfo   cluster1   ...   istiod-66d54b865-6b6zt   1.17.8
  ratings-v1-f849dc6d-wqdc8.bookinfo        cluster1   ...   istiod-66d54b865-6b6zt   1.17.8
  reviews-v1-74fb8fdbd8-z8bzc.bookinfo      cluster1   ...   istiod-66d54b865-6b6zt   1.17.8
  reviews-v2-58d564d4db-g8jzr.bookinfo      cluster1   ...   istiod-66d54b865-6b6zt   1.17.8
  ```
- Prepare any testing to confirm that the Bookinfo apps are still available when you update the apps' sidecars to use the new revision. For example, if you have an ingress gateway load balancer in your environment, you might follow these steps to ensure you can access Bookinfo:
  - Apply Istio gateway and virtual service resources to expose the Bookinfo app.
    ```shell
    kubectl --context $REMOTE_CONTEXT1 -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.18.7/samples/bookinfo/networking/bookinfo-gateway.yaml
    ```
  - Get the address for your existing ingress gateway, and send a request to the productpage service.
    ```shell
    curl http://<gateway_address>:80/productpage
    ```
- Test the transition to the new installation by changing the label on the `bookinfo` namespace to use the new revision.
  ```shell
  kubectl label ns bookinfo istio.io/rev=$REVISION --overwrite --context $REMOTE_CONTEXT1
  ```
  If you did not previously use revision labels for your apps, you can instead run `kubectl label ns bookinfo istio-injection- --context $REMOTE_CONTEXT1` and `kubectl label ns bookinfo istio.io/rev=$REVISION --context $REMOTE_CONTEXT1`.
- Update Bookinfo by rolling out restarts to each of the microservices. The Istio sidecars for each microservice update to use the new Istio version. Make sure that you restart only one microservice at a time. For example, in the following commands, 20 seconds elapse between each restart to ensure that the pods have time to start running.
  ```shell
  kubectl rollout restart deployment -n bookinfo details-v1 --context $REMOTE_CONTEXT1
  sleep 20s
  kubectl rollout restart deployment -n bookinfo ratings-v1 --context $REMOTE_CONTEXT1
  sleep 20s
  kubectl rollout restart deployment -n bookinfo productpage-v1 --context $REMOTE_CONTEXT1
  sleep 20s
  kubectl rollout restart deployment -n bookinfo reviews-v1 --context $REMOTE_CONTEXT1
  sleep 20s
  kubectl rollout restart deployment -n bookinfo reviews-v2 --context $REMOTE_CONTEXT1
  ```
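Instead of fixed sleeps, you can wait for each rollout to complete before restarting the next microservice. A sketch of the same sequence that uses `kubectl rollout status` to block until the restarted pods are ready:

```shell
for deploy in details-v1 ratings-v1 productpage-v1 reviews-v1 reviews-v2; do
  kubectl rollout restart deployment -n bookinfo "$deploy" --context $REMOTE_CONTEXT1
  # Wait until the new pods for this deployment are ready before moving on
  kubectl rollout status deployment -n bookinfo "$deploy" --context $REMOTE_CONTEXT1 --timeout=120s
done
```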
- Verify that the Bookinfo pods now use the new revision.
  ```shell
  istioctl proxy-status --context $REMOTE_CONTEXT1 | grep "\.bookinfo "
  ```
- If you exposed Bookinfo, verify that the `productpage` for Bookinfo is now reachable after the upgrade through the new ingress gateway.
  - Save the external address of the ingress gateway that runs the new revision.
    ```shell
    export INGRESS_GW_ADDRESS=$(kubectl get svc --context $REMOTE_CONTEXT1 -n gloo-mesh-gateways istio-ingressgateway -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
    echo $INGRESS_GW_ADDRESS
    ```
  - Send a request to the productpage service.
    ```shell
    open http://$INGRESS_GW_ADDRESS/productpage
    ```
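If you prefer a scriptable check over opening a browser, you can assert on the HTTP status code instead. A sketch, assuming `INGRESS_GW_ADDRESS` is set as shown above:

```shell
# A 200 response confirms the productpage is reachable through the new gateway
code=$(curl -s -o /dev/null -w "%{http_code}" "http://${INGRESS_GW_ADDRESS}/productpage")
if [ "$code" = "200" ]; then
  echo "productpage reachable through the new gateway"
else
  echo "unexpected status: $code" >&2
fi
```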
Activate
After you finish testing, change the new control planes to be active, and roll out a restart to data plane workloads so that the new control planes manage them. You can also optionally uninstall the old Istio installations.

- In your `IstioLifecycleManager` resource, switch to the new `istiod` control plane revision by changing `defaultRevision` to `true`.
  ```shell
  kubectl edit IstioLifecycleManager -n gloo-mesh --context $MGMT_CONTEXT istiod-control-plane
  ```
  Example:
  ```yaml
  apiVersion: admin.gloo.solo.io/v2
  kind: IstioLifecycleManager
  metadata:
    name: istiod-control-plane
    namespace: gloo-mesh
  spec:
    installations:
      - revision: 1-18
        clusters:
          - name: cluster1
            # Set this field to TRUE
            defaultRevision: true
          - name: cluster2
            # Set this field to TRUE
            defaultRevision: true
        istioOperatorSpec:
          profile: minimal
          ...
  ```
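If you script the activation, a non-interactive alternative to `kubectl edit` is a JSON patch. A sketch, assuming the installation and cluster list positions shown in the example above (the array indices `0` and `1` are assumptions about your resource's layout):

```shell
kubectl patch istiolifecyclemanager istiod-control-plane -n gloo-mesh --context $MGMT_CONTEXT \
  --type json -p '[
    {"op": "replace", "path": "/spec/installations/0/clusters/0/defaultRevision", "value": true},
    {"op": "replace", "path": "/spec/installations/0/clusters/1/defaultRevision", "value": true}
  ]'
```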
- In each workload cluster, roll out a restart to your workload apps so that the new control planes manage them.
  - Change the label on any Istio-managed namespaces to use the new revision.
    ```shell
    kubectl label ns <namespace> istio.io/rev=$REVISION --overwrite --context $REMOTE_CONTEXT1
    kubectl label ns <namespace> istio.io/rev=$REVISION --overwrite --context $REMOTE_CONTEXT2
    ```
    If you did not previously use revision labels for your apps, you can instead run `kubectl label ns <namespace> istio-injection- --context $REMOTE_CONTEXT1` and `kubectl label ns <namespace> istio.io/rev=$REVISION --context $REMOTE_CONTEXT1`.
  - Update any Istio-managed apps by rolling out restarts. The Istio sidecars for each microservice update to use the new Istio version. Make sure that you restart only one microservice at a time.
  - Verify that your workloads and new gateways point to the new revision.
    ```shell
    istioctl proxy-status --context $REMOTE_CONTEXT1
    istioctl proxy-status --context $REMOTE_CONTEXT2
    ```
If you use any gateways, you can either update the service selectors for your load balancers to point to the new revision, or update your DNS entries to point to the addresses of the new gateways.
Uninstall the old Istio installations. The uninstallation process varies depending on your original installation method. For more information, see the Istio documentation.
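The uninstallation step depends on how the old installations were created. For `istioctl`-based installations, the revision flag removes only the old control plane and leaves the new one in place. A hedged sketch, assuming the old installations used revision `<old_revision>`:

```shell
# Removes only the control plane of the specified revision; run once per workload cluster
istioctl uninstall --revision <old_revision> --context $REMOTE_CONTEXT1
istioctl uninstall --revision <old_revision> --context $REMOTE_CONTEXT2
```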
Single cluster
Use Gloo Mesh Enterprise to take over the Istio installation in one cluster.
The takeover process follows these general steps:
- Install a managed installation with the same settings as your unmanaged Istio installation. You create `IstioLifecycleManager` and `GatewayLifecycleManager` resources in the management cluster that use a different revision than the existing Istio installation. Gloo then deploys an `istiod` control plane and gateway for the new revision, but the new, managed control plane and gateway are not active at deployment time.
- Test the new control plane by deploying workloads with a label for the new revision and generating traffic to those workloads.
- Change the new control plane to be active, and roll out a restart to data plane workloads so that the new control plane manages them.
- Update service selectors or update internal and external DNS entries for your Istio gateways.
- Uninstall the old Istio installation.
Deploy
Use Gloo Mesh Enterprise to deploy a managed installation that replicates the settings from your unmanaged Istio installation.
istiod control plane

Prepare an `IstioLifecycleManager` CR to manage the `istiod` control plane.

- Download the example file, `istiod.yaml`, which contains a basic `IstioLifecycleManager` configuration for the control plane.
  ```shell
  curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh-core/istio-install/managed/single/takeover-istiod.yaml > istiod.yaml
  ```
- Update the example file with the environment variables that you previously set. Save the updated file as `istiod-values.yaml`. For example, you can run a terminal command to substitute values:
  ```shell
  envsubst < istiod.yaml > istiod-values.yaml
  ```
- Verify that the configuration is correct. You can also further edit the file to provide your own details. For more information, see the API reference.
  ```shell
  open istiod-values.yaml
  ```
- Apply the `IstioLifecycleManager` resource to your cluster.
  ```shell
  kubectl apply -f istiod-values.yaml
  ```
- Verify that the `istiod` pod for the new revision has a status of `Running`. Note that this new, managed control plane is not currently the active Istio installation.
  ```shell
  kubectl get pods -n istio-system
  ```
  Example output:
  ```
  NAME                          READY   STATUS    RESTARTS   AGE
  istiod-1-18-b65676555-g2vmr   1/1     Running   0          1m57s
  istiod-1-17-yt72566r9-8j5tr   1/1     Running   0          23d
  ```
Optional: Ingress gateway
If you want to allow traffic from outside the cluster to enter your mesh, create a `GatewayLifecycleManager` resource to deploy and manage an ingress gateway. The ingress gateway allows you to specify basic routing rules for how to match and forward incoming traffic to a workload in the mesh. However, to also apply policies, such as rate limits, external authentication, or a Web Application Firewall to the gateway, you must have a Gloo Mesh Gateway license. For more information about Gloo Mesh Gateway, see the docs. If you want a service mesh-only environment without ingress, you can skip this step.

- Download the example file.
  ```shell
  curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/single-cluster/gm-ingress-gateway.yaml > ingress-gateway.yaml
  ```
- Update the example file with the environment variables that you previously set. Save the updated file as `ingress-gateway-values.yaml`. For example, you can run a terminal command to substitute values:
  ```shell
  envsubst < ingress-gateway.yaml > ingress-gateway-values.yaml
  ```
- Verify that the configuration is correct. You can also further edit the file to replicate the settings in your existing Istio installation. For more information, see the API reference.
  ```shell
  open ingress-gateway-values.yaml
  ```
  - You can add cloud provider-specific load balancer annotations to the `istioOperatorSpec.components.ingressGateways.k8s` section, such as the following AWS annotations:
    ```yaml
    ...
    k8s:
      service:
        ...
      serviceAnnotations:
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
        service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
        service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
        service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
        service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:<cert>"
        service.beta.kubernetes.io/aws-load-balancer-type: external
    ```
- Apply the `GatewayLifecycleManager` CR to your cluster.
  ```shell
  kubectl apply -f ingress-gateway-values.yaml
  ```
- In the `gloo-mesh-gateways` namespace, verify that the ingress gateway pod for the new revision is running and that the load balancer service has an external address.
  ```shell
  kubectl get pods -n gloo-mesh-gateways
  kubectl get svc -n gloo-mesh-gateways
  ```
  Example output:
  ```
  NAME                                    READY   STATUS    RESTARTS   AGE
  istio-ingressgateway-665d46686f-nhh52   1/1     Running   0          106s

  NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                                                                      AGE
  istio-ingressgateway   LoadBalancer   10.96.252.49   <externalip>   15021:32378/TCP,80:30315/TCP,443:32186/TCP,31400:30313/TCP,15443:31632/TCP   2m2s
  ```
  AWS clusters: For the Elastic Load Balancer (ELB) instance that is automatically created for you to back the ingress gateway service, verify that the health check shows a healthy state. Gloo Mesh Core configures the ingress gateway to listen on HTTPS port 15443. However, when the ELB is created, the first port that is defined in the Kubernetes service manifest is used to perform the health check. This port might be different from the port that Gloo configures. For your ELB health check to pass, you might need to configure the load balancer to run the health check on port 15443.
- Optional for OpenShift: Expose the ingress gateway by using an OpenShift route.
  ```shell
  oc -n gloo-mesh-gateways expose svc istio-ingressgateway --port=http2
  ```
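After you expose the service, you can confirm the hostname that OpenShift assigned to the route. A quick check:

```shell
# Print the externally reachable host for the new route
oc get route -n gloo-mesh-gateways istio-ingressgateway -o jsonpath='{.spec.host}'
```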
Optional: Egress gateway
Prepare a `GatewayLifecycleManager` resource to deploy and manage an egress gateway.

- Download the example file, `egress-gateway.yaml`, which contains a basic `GatewayLifecycleManager` configuration for an egress gateway.
  ```shell
  curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/single-cluster/gm-egress-gateway.yaml > egress-gateway.yaml
  ```
- Update the example file with the environment variables that you previously set. Save the updated file as `egress-gateway-values.yaml`. For example, you can run a terminal command to substitute values:
  ```shell
  envsubst < egress-gateway.yaml > egress-gateway-values.yaml
  ```
- Verify that the configuration is correct. You can also further edit the file to replicate the settings in your existing Istio installation. For more information, see the API reference.
  ```shell
  open egress-gateway-values.yaml
  ```
- Apply the `GatewayLifecycleManager` resource to your cluster.
  ```shell
  kubectl apply -f egress-gateway-values.yaml
  ```
- In the `gloo-mesh-gateways` namespace, verify that the egress gateway pod for the new revision is running and that the load balancer service has an external address.
  ```shell
  kubectl get pods -n gloo-mesh-gateways
  kubectl get svc -n gloo-mesh-gateways
  ```
  Example output:
  ```
  NAME                                   READY   STATUS    RESTARTS   AGE
  istio-egressgateway-665d46686f-nhh52   1/1     Running   0          106s

  NAME                  TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                                                                      AGE
  istio-egressgateway   LoadBalancer   10.96.252.49   <externalip>   15021:32378/TCP,80:30315/TCP,443:32186/TCP,31400:30313/TCP,15443:31632/TCP   2m2s
  ```
AWS clusters: For the Elastic Load Balancer (ELB) instance that is automatically created for you to back the ingress gateway service, verify that the health check shows a healthy state. Gloo Mesh Core configures the ingress gateway to listen on HTTPS port 15443. However, when the ELB is created, the first port that is defined in the Kubernetes service manifest is used to perform the health check. This port might be different from the port that Gloo configures. For your ELB health check to pass, you might need to configure the load balancer to run the health check on port 15443.
Test
Test the new managed installation by deploying the Istio sample app, Bookinfo, and updating its sidecars from the old revision to the new revision.

- Create the `bookinfo` namespace.
  ```shell
  kubectl create ns bookinfo
  ```
- Label the namespace for Istio injection with the old revision so that the old revision's control plane manages the services.
  ```shell
  kubectl label ns bookinfo istio.io/rev=<old_revision>
  ```
  If you did not previously use revision labels for your apps, you can instead run `kubectl label ns bookinfo istio-injection=enabled`.
- Deploy the Bookinfo app.
  ```shell
  # deploy bookinfo app components for all versions less than v3
  kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.18.7/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app,version notin (v3)'
  # deploy an updated product page with extra container utilities such as 'curl' and 'netcat'
  kubectl -n bookinfo apply -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/policy-demo/productpage-with-curl.yaml
  # deploy all bookinfo service accounts
  kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.18.7/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account'
  ```
- Verify that the Bookinfo app deployed successfully.
  ```shell
  kubectl get pods -n bookinfo
  kubectl get svc -n bookinfo
  ```
- Verify that Bookinfo still points to the old revision.
  ```shell
  istioctl proxy-status | grep "\.bookinfo "
  ```
  In this example output, the Bookinfo apps still point to the existing Istio installation that uses version 1.17.8.
  ```
  NAME                                      CLUSTER    ...   ISTIOD                   VERSION
  details-v1-6758dd9d8d-rh4db.bookinfo      cluster1   ...   istiod-66d54b865-6b6zt   1.17.8
  productpage-v1-b4cf67f67-s5lsh.bookinfo   cluster1   ...   istiod-66d54b865-6b6zt   1.17.8
  ratings-v1-f849dc6d-wqdc8.bookinfo        cluster1   ...   istiod-66d54b865-6b6zt   1.17.8
  reviews-v1-74fb8fdbd8-z8bzc.bookinfo      cluster1   ...   istiod-66d54b865-6b6zt   1.17.8
  reviews-v2-58d564d4db-g8jzr.bookinfo      cluster1   ...   istiod-66d54b865-6b6zt   1.17.8
  ```
Prepare any testing to confirm that the Bookinfo apps are still available when you update the apps’ sidecars to use the new revision. For example, if you have an ingress gateway load balancer in your environment, you might follow these steps to ensure you can access Bookinfo:
- Apply Istio gateway and virtual service resources to expose the Bookinfo app.
  ```shell
  kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.18.7/samples/bookinfo/networking/bookinfo-gateway.yaml
  ```
- Get the address for your existing ingress gateway, and send a request to the productpage service.
  ```shell
  curl http://<gateway_address>:80/productpage
  ```
Test the transition to the new installation on Bookinfo by changing the label on the `bookinfo` namespace to use the new revision.

```shell
kubectl label ns bookinfo istio.io/rev=$REVISION --overwrite
```

If you did not previously use revision labels for your apps, you can instead run `kubectl label ns bookinfo istio-injection-` and `kubectl label ns bookinfo istio.io/rev=$REVISION`.

Update Bookinfo by rolling out restarts to each of the microservices. The Istio sidecars for each microservice update to use the new Istio version. Make sure that you only restart one microservice at a time. For example, in the following commands, 20 seconds elapse between each restart to ensure that the pods have time to start running.
```shell
kubectl rollout restart deployment -n bookinfo details-v1
sleep 20s
kubectl rollout restart deployment -n bookinfo ratings-v1
sleep 20s
kubectl rollout restart deployment -n bookinfo productpage-v1
sleep 20s
kubectl rollout restart deployment -n bookinfo reviews-v1
sleep 20s
kubectl rollout restart deployment -n bookinfo reviews-v2
```
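As an alternative to fixed `sleep` intervals, the same rolling restart can be sketched as a loop that waits for each deployment to become ready before restarting the next one (deployment names assume the sample app above):

```shell
# Restart the Bookinfo deployments one at a time, waiting for readiness
# instead of sleeping for a fixed interval.
for deploy in details-v1 ratings-v1 productpage-v1 reviews-v1 reviews-v2; do
  kubectl rollout restart deployment -n bookinfo "$deploy"
  kubectl rollout status deployment -n bookinfo "$deploy" --timeout=120s
done
```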
Verify that the Bookinfo pods now use the new revision.

```shell
istioctl proxy-status | grep "\.bookinfo "
```
If you exposed Bookinfo, verify that the `productpage` for Bookinfo is now reachable after the upgrade through the new ingress gateway.
- Save the external address of the ingress gateway that runs the new revision.
  ```shell
  export INGRESS_GW_ADDRESS=$(kubectl get svc -n gloo-mesh-gateways istio-ingressgateway -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
  echo $INGRESS_GW_ADDRESS
  ```
- Send a request to the productpage service.
  ```shell
  open http://$INGRESS_GW_ADDRESS/productpage
  ```
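If you prefer a scripted check over opening a browser, the same request can be sketched with `curl` (assumes `INGRESS_GW_ADDRESS` was exported in the previous step):

```shell
# Expect HTTP 200 from the productpage through the new ingress gateway.
status=$(curl -s -o /dev/null -w "%{http_code}" "http://$INGRESS_GW_ADDRESS/productpage")
echo "productpage returned HTTP $status"
```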
Activate
After you finish testing, change the new control plane to be active, and roll out a restart to data plane workloads so that the new control plane manages them. You can also optionally uninstall the old Istio installation.
In your `IstioLifecycleManager` resource, switch to the new `istiod` control plane revision by changing `defaultRevision` to `true`.

```shell
kubectl edit IstioLifecycleManager -n gloo-mesh istiod-control-plane
```
Example:
```yaml
apiVersion: admin.gloo.solo.io/v2
kind: IstioLifecycleManager
metadata:
  name: istiod-control-plane
  namespace: gloo-mesh
spec:
  installations:
    - revision: 1-18
      clusters:
        - name: cluster1
          # Set this field to TRUE
          defaultRevision: true
      istioOperatorSpec:
        profile: minimal
        ...
```
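If you prefer a non-interactive change over `kubectl edit`, the same field can be set with a JSON patch (a sketch; the `0` indexes assume the installation and cluster entries shown in the example are the first in their lists):

```shell
# Set defaultRevision=true for the first cluster of the first installation.
kubectl patch istiolifecyclemanager istiod-control-plane -n gloo-mesh \
  --type=json \
  -p '[{"op":"replace","path":"/spec/installations/0/clusters/0/defaultRevision","value":true}]'
```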
Roll out a restart to your workload apps so that the new control plane manages them.
- Change the label on any Istio-managed namespaces to use the new revision.
  ```shell
  kubectl label ns <namespace> istio.io/rev=$REVISION --overwrite
  ```
  If you did not previously use revision labels for your apps, you can instead run `kubectl label ns [namespace] istio-injection-` and `kubectl label ns [namespace] istio.io/rev=$REVISION`.
- Update any Istio-managed apps by rolling out restarts. The Istio sidecars for each microservice update to use the new Istio version. Make sure that you only restart one microservice at a time.
- Verify that your workloads point to the new revision.
  ```shell
  istioctl proxy-status
  ```
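The relabel-and-restart steps can be sketched as a loop over your Istio-managed namespaces (the namespace list here is an assumption for your environment; the loop waits for each deployment before moving to the next so that only one restarts at a time):

```shell
# Switch each namespace to the new revision, then restart its
# deployments one at a time, waiting for readiness in between.
for ns in bookinfo; do   # replace with your Istio-managed namespaces
  kubectl label ns "$ns" istio.io/rev=$REVISION --overwrite
  for deploy in $(kubectl get deployments -n "$ns" -o name); do
    kubectl rollout restart -n "$ns" "$deploy"
    kubectl rollout status -n "$ns" "$deploy" --timeout=120s
  done
done
```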
If you use any gateways, you can either update the service selectors for your load balancers to point to the new revision, or update your DNS entries to point to the addresses of the new gateways.
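For example, repointing an existing load balancer Service might look like the following sketch; the Service name, namespace, and the pod labels of the new managed gateway are assumptions, so check the labels on your own gateway pods first:

```shell
# Inspect the labels on the new managed gateway pods first:
#   kubectl get pods -n gloo-mesh-gateways --show-labels
# Then patch the existing Service's selector to match (labels assumed here).
kubectl patch svc istio-ingressgateway -n istio-gateways --type=merge \
  -p '{"spec":{"selector":{"istio":"ingressgateway","revision":"1-18"}}}'
```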
Uninstall the old Istio installation. The uninstallation process varies depending on your original installation method. For more information, see the Istio documentation.
Next steps
When it’s time to upgrade to a new version or change Istio settings, you can use Gloo Mesh Enterprise to upgrade your managed installations.