Switch from unmanaged to managed gateways
Use the Istio lifecycle manager to switch from your existing, unmanaged Istio gateway installation to a Gloo-managed Istio gateway installation. The takeover process follows these general steps:
- Create IstioLifecycleManager and GatewayLifecycleManager resources in your cluster that use a different revision than the existing Istio installations. The istiod control plane and Istio ingress gateway for the new installation are deployed, but are not active at deployment time.
- Test the new control plane and gateway by deploying workloads with a label for the new revision and generating traffic to those workloads.
- Change the new control plane to be active, and roll out a restart to data plane workloads so that they are managed by the new control plane.
- Update load balancer service selectors or internal and external DNS entries to point to the new ingress gateway.
- Uninstall the old Istio installations.
Considerations
Before you follow this takeover process, review the following important considerations.
- Revisions: This process involves creating IstioLifecycleManager and GatewayLifecycleManager resources that use a different revision than your existing Istio installations. If you do not currently use revisions, no conflict exists between the new installations and the existing installations. If you do currently use revisions, be sure to choose a different revision for the new installations than your existing installations. To check which revision, if any, your existing installation uses, see the example command after this list.
- Gateways: To prevent conflicts, be sure to choose a different name or namespace for the new managed gateways than your existing gateways. For example, if your existing gateway is named istio-ingressgateway and deployed in a namespace such as istio-gateways, you can still name the new gateway istio-ingressgateway, but you must deploy it in a different namespace, such as gloo-mesh-gateways.
- Testing: Always test this process in a representative test environment before you attempt it in a production setup.
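For example, to check whether your existing control plane already uses a revision, you can inspect the revision label on the istiod pods. This sketch assumes that the existing control plane runs in the istio-system namespace; a value of default or an empty value in the revision column typically indicates a non-revisioned installation.

kubectl get pods -n istio-system -l app=istiod -L istio.io/rev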
If you also use Gloo Mesh Enterprise alongside Gloo Gateway, follow the steps in the Gloo Mesh documentation instead. The Gloo Mesh guide shows you how to upgrade your workload sidecars along with your control planes and gateways.
Before you begin
- Install Gloo Gateway. When you run the Helm install command, include --set istioInstallations.enabled=false to ensure that the default managed gateway proxy is not created automatically. For reference, see the example command after this list.
- Save the names of your clusters from your infrastructure provider as environment variables.

export CLUSTER_NAME=<cluster-name>

- To use a Solo Istio image, you must have a Solo account. Make sure that you can log in to the Support Center. If not, contact your account administrator to get the repo key for the Istio version that you want to install from the Istio images built by Solo.io support article.
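For reference, a minimal sketch of a Gloo Gateway Helm installation with the default managed gateway proxy disabled. The release name, chart reference, namespace, and values file here are placeholders that depend on how you install Gloo Gateway; the important part for this guide is the istioInstallations.enabled=false setting.

helm upgrade --install gloo-platform gloo-platform/gloo-platform \
  --namespace gloo-mesh \
  --values gloo-gateway-values.yaml \
  --set istioInstallations.enabled=false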
Deploy the managed gateway installations
Create IstioLifecycleManager and GatewayLifecycleManager resources that use a different revision than your existing Istio installation. The istiod control plane and ingress gateway for the new installation are deployed, but are not active at deployment time.
- Save the Istio version information as environment variables.
  - For REPO, use a Solo Istio repo key that you can get by logging in to the Support Center and reviewing the Istio images built by Solo.io support article. For more information, see Get the Solo Istio version that you want to use.
  - For ISTIO_IMAGE, save the version that you downloaded, such as 1.19.3, and append the solo tag, which is required to use many enterprise features. You can optionally append other Solo Istio tags, as described in About Solo Istio. If you downloaded a different version than the following, make sure to specify that version instead.
  - For REVISION, specify any name or integer. For example, you can specify the version, such as 1-19-3. If you currently use a revision for your existing Istio installations, be sure to use a different revision than the existing one.

export REPO=<repo-key>
export ISTIO_IMAGE=1.19.3-solo
export REVISION=1-19-3
- Prepare an IstioLifecycleManager resource to manage the istiod control plane.
  - Download the gm-istiod.yaml example file. For OpenShift clusters, download the gm-istiod-openshift.yaml variant instead.

curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/takeover/gm-istiod.yaml > gm-istiod.yaml

curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/takeover/gm-istiod-openshift.yaml > gm-istiod.yaml

  - Update the example file with the environment variables that you previously set. Save the updated file as gm-istiod-values.yaml.
    - Tip: Instead of updating the file manually, try running a terminal command to substitute values, such as the following command.

envsubst < gm-istiod.yaml > gm-istiod-values.yaml
open gm-istiod-values.yaml
  - Check the settings in the IstioLifecycleManager resource. You can also further edit the file to replicate the settings in your existing Istio installation. For a trimmed sketch of these settings, see the example at the end of this step.
    - Root namespace: If you do not specify a namespace, the root namespace for the installed Istio resources in workload clusters is set to istio-system. If the istio-system namespace does not already exist, it is created for you.
    - Trust domain: By default, the trustDomain value is automatically set by the installer to the name of each workload cluster. To override the trustDomain for each cluster, you can instead specify the override value in the trustDomain field, and include the value in the list of cluster names. For example, if you specify trustDomain: cluster1-trust-override in the operator spec, you then specify the cluster name (cluster1) and the trust domain (cluster1-trust-override) in the list of cluster names. Additionally, because Gloo requires multiple trust domains for east-west routing, the PILOT_SKIP_VALIDATE_TRUST_DOMAIN field is set to "true" by default.
  - Apply the IstioLifecycleManager resource to your cluster.

kubectl apply -f gm-istiod-values.yaml
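For reference, a trimmed sketch of the kind of settings that this step describes. The structure follows the IstioLifecycleManager example shown later in this guide; the cluster name, trust domain, and operator settings are illustrative, and the downloaded gm-istiod.yaml file remains the source of truth.

apiVersion: admin.gloo.solo.io/v2
kind: IstioLifecycleManager
metadata:
  name: istiod-control-plane
  namespace: gloo-mesh
spec:
  installations:
    - revision: 1-19-3
      clusters:
        - name: cluster1
          # The new control plane is installed but not yet active
          defaultRevision: false
      istioOperatorSpec:
        profile: minimal
        meshConfig:
          # Optional trust domain override, as described above
          trustDomain: cluster1-trust-override
        ...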
- Prepare a GatewayLifecycleManager custom resource to manage the ingress gateway proxy.
  - Download the gm-ingress-gateway.yaml example file.

curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/gm-ingress-gateway.yaml > gm-ingress-gateway.yaml

  - Update the example file with the environment variables that you previously set. Save the updated file as gm-ingress-gateway-values.yaml.
    - Tip: Instead of updating the file manually, try running a terminal command to substitute values, such as the following command.

envsubst < gm-ingress-gateway.yaml > gm-ingress-gateway-values.yaml
open gm-ingress-gateway-values.yaml
  - Check the settings in the GatewayLifecycleManager resource. You can also further edit the file to replicate the settings in your existing Istio installation. For a trimmed sketch of these settings, see the example at the end of this step.
    - Gateway name and namespace: The default name for the gateway is set to istio-ingressgateway, and the default namespace for the gateway is set to gloo-mesh-gateways. If the gloo-mesh-gateways namespace does not already exist, it is created in each workload cluster for you. Note: To prevent conflicts, be sure to choose a different name or namespace than your existing gateway. For example, if your existing gateway is named istio-ingressgateway and deployed in a namespace such as istio-gateways, you can still name the new gateway istio-ingressgateway, but you must deploy it in a different namespace, such as gloo-mesh-gateways.
  - Apply the GatewayLifecycleManager resource to your cluster.

kubectl apply -f gm-ingress-gateway-values.yaml
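For reference, a trimmed sketch of where the gateway name and namespace appear in the GatewayLifecycleManager resource. Only the fields discussed in this step are shown and the values are illustrative; other installation fields, such as the revision, are omitted, and the downloaded gm-ingress-gateway.yaml file remains the source of truth.

apiVersion: admin.gloo.solo.io/v2
kind: GatewayLifecycleManager
metadata:
  name: istio-ingressgateway
  namespace: gloo-mesh
spec:
  installations:
    - clusters:
        - name: cluster1
      istioOperatorSpec:
        profile: empty
        components:
          ingressGateways:
            # The gateway name and namespace described above
            - name: istio-ingressgateway
              namespace: gloo-mesh-gateways
              enabled: true
        ...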
Verify the new managed installations
Verify that the new control plane and gateway are deployed to your cluster.
- Verify that the namespaces for your managed Istio installations are created.

kubectl get ns

For example, the gm-iop-1-19-3 and gloo-mesh-gateways namespaces are created alongside the namespaces you might already use for your existing Istio installations (such as istio-system and istio-gateways):

NAME                 STATUS   AGE
default              Active   56m
gloo-mesh            Active   36m
gm-iop-1-19-3        Active   91s
gloo-mesh-gateways   Active   90s
istio-gateways       Active   50m
istio-system         Active   50m
kube-node-lease      Active   57m
kube-public          Active   57m
kube-system          Active   57m
- In each namespace, verify that the Istio resources for the new revision are successfully installed.

kubectl get all -n gm-iop-1-19-3

Example output:

NAME                                         READY   STATUS    RESTARTS   AGE
pod/istio-operator-1-19-3-678fd95cc6-ltbvl   1/1     Running   0          4m12s

NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/istio-operator-1-19-3   ClusterIP   10.204.15.247   <none>        8383/TCP   4m12s

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/istio-operator-1-19-3   1/1     1            1           4m12s

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/istio-operator-1-19-3-678fd95cc6   1         1         1       4m12s

kubectl get all -n istio-system

Example output: Note that your existing Istio control plane pods might be deployed to this namespace too.

NAME                                READY   STATUS    RESTARTS   AGE
pod/istiod-1-19-3-b65676555-g2vmr   1/1     Running   0          8m57s

NAME                    TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                                 AGE
service/istiod-1-19-3   ClusterIP   10.204.6.56   <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP   8m56s

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/istiod-1-19-3   1/1     1            1           8m57s

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/istiod-1-19-3-b65676555   1         1         1       8m57s

NAME                                                REFERENCE                  TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/istiod-1-19-3   Deployment/istiod-1-19-3   1%/80%    1         5         1          8m58s

kubectl get all -n gloo-mesh-gateways

Example output: Your output might vary depending on which gateways you installed. Note that the gateways might take a few minutes to be created.

NAME                                               READY   STATUS    RESTARTS   AGE
pod/istio-ingressgateway-1-19-3-77d5f76bc8-j6qkp   1/1     Running   0          2m18s

NAME                           TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                                      AGE
service/istio-ingressgateway   LoadBalancer   10.44.4.140   34.150.235.221   15021:31321/TCP,80:32525/TCP,443:31826/TCP   2m16s

NAME                                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/istio-ingressgateway-1-19-3   1/1     1            1           2m18s

NAME                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/istio-ingressgateway-1-19-3-77d5f76bc8   1         1         1       2m18s

NAME                                                              REFERENCE                                TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/istio-ingressgateway-1-19-3   Deployment/istio-ingressgateway-1-19-3   4%/80%    1         5         1          2m19s
Test the new managed installations
Test the new Istio installation by deploying the Istio sample app, Bookinfo, and updating its sidecars from the old revision to the new.
- Create the bookinfo namespace.

kubectl create ns bookinfo
- Label the namespace for Istio injection with the old revision so that the services are managed by the old revision's control plane.

kubectl label ns bookinfo istio.io/rev=<old_revision>

- Deploy the Bookinfo app.

# deploy bookinfo application components for all versions less than v3
kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.19.3/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app,version notin (v3)'
# deploy an updated product page with extra container utilities such as 'curl' and 'netcat'
kubectl -n bookinfo apply -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/policy-demo/productpage-with-curl.yaml
# deploy all bookinfo service accounts
kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.19.3/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account'

- Verify that the Bookinfo app is deployed successfully.

kubectl get pods -n bookinfo
kubectl get svc -n bookinfo
- Verify that your workloads and existing gateways still point to the old revision, and only the new gateway points to the new revision.

istioctl proxy-status

In this example output, the Bookinfo apps and existing ingress gateway still point to the existing Istio installation that uses version 1.18.3. Only the new ingress gateway points to the managed Istio installation that uses version 1.19.3-solo and revision 1-19-3.

NAME                                                             CLUSTER    ...   ISTIOD                           VERSION
details-v1-6758dd9d8d-rh4db.bookinfo                             cluster1   ...   istiod-66d54b865-6b6zt           1.18.3
istio-ingressgateway-575b697f9-49v4c.istio-gateways              cluster1   ...   istiod-66d54b865-6b6zt           1.18.3
istio-ingressgateway-1-19-3-575b697f9-49v4c.gloo-mesh-gateways   cluster1   ...   istiod-1-19-3-5b7b9df586-95sq6   1.19.3-solo
productpage-v1-b4cf67f67-s5lsh.bookinfo                          cluster1   ...   istiod-66d54b865-6b6zt           1.18.3
ratings-v1-f849dc6d-wqdc8.bookinfo                               cluster1   ...   istiod-66d54b865-6b6zt           1.18.3
reviews-v1-74fb8fdbd8-z8bzc.bookinfo                             cluster1   ...   istiod-66d54b865-6b6zt           1.18.3
reviews-v2-58d564d4db-g8jzr.bookinfo                             cluster1   ...   istiod-66d54b865-6b6zt           1.18.3
- Generate traffic through the old ingress gateway to Bookinfo.
  - Apply a virtual gateway to the ingress gateway for the old revision. For example, if your gateway named istio-ingressgateway exists in the istio-ingress namespace, your virtual gateway might look like the following:

kubectl apply -f- <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: VirtualGateway
metadata:
  name: old-vg
  namespace: bookinfo
spec:
  listeners:
    - http: {}
      port:
        number: 80
  workloads:
    - selector:
        labels:
          istio: ingressgateway
        namespace: istio-ingress
EOF
  - Apply a route table to allow requests to the Bookinfo services.

kubectl apply -f- <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
  name: bookinfo
  namespace: bookinfo
spec:
  hosts:
    - '*'
  # Selects the virtual gateway you previously created
  virtualGateways:
    - name: old-vg
      namespace: bookinfo
  http:
    # Route for the main productpage app
    - name: productpage
      matchers:
        - uri:
            prefix: /productpage
      forwardTo:
        destinations:
          - ref:
              name: productpage
              namespace: bookinfo
              cluster: $CLUSTER_NAME
            port:
              number: 9080
    # Routes all /reviews requests to the reviews-v1 or reviews-v2 apps
    - name: reviews
      labels:
        route: reviews
      matchers:
        - uri:
            prefix: /reviews
      forwardTo:
        destinations:
          - ref:
              name: reviews
              namespace: bookinfo
              cluster: $CLUSTER_NAME
            port:
              number: 9080
    # Routes all /ratings requests to the ratings-v1 app
    - name: ratings-ingress
      labels:
        route: ratings
      matchers:
        - uri:
            prefix: /ratings
      forwardTo:
        destinations:
          - ref:
              name: ratings
              namespace: bookinfo
              cluster: $CLUSTER_NAME
            port:
              number: 9080
EOF
  - Get the external address of the ingress gateway for the old revision. For example, if your gateway named istio-ingressgateway exists in the istio-ingress namespace, you might run a command similar to the following. Depending on your cloud provider, the address is returned as an IP address or a hostname.

kubectl get svc -n istio-ingress istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

kubectl get svc -n istio-ingress istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
  - Test the old ingress gateway by sending a request to the productpage service.

curl http://<old_gateway_address>:80/productpage
- Test the transition to the new installation on Bookinfo by changing the label on the bookinfo namespace to use the new revision.

kubectl label ns bookinfo istio.io/rev=$REVISION --overwrite

If you did not previously use revision labels for your apps, you can instead run kubectl label ns bookinfo istio-injection- and kubectl label ns bookinfo istio.io/rev=$REVISION.
- Update Bookinfo by rolling out restarts to each of the microservices. The Istio sidecars for each microservice are updated to use the new Istio version. Make sure that you only restart one microservice at a time. For example, in the following commands, 20 seconds elapse between each restart to ensure that the pods have time to start running.

kubectl rollout restart deployment -n bookinfo details-v1
sleep 20s
kubectl rollout restart deployment -n bookinfo ratings-v1
sleep 20s
kubectl rollout restart deployment -n bookinfo productpage-v1
sleep 20s
kubectl rollout restart deployment -n bookinfo reviews-v1
sleep 20s
kubectl rollout restart deployment -n bookinfo reviews-v2
- Verify that the Bookinfo pods now use the new revision.

istioctl proxy-status | grep "\.bookinfo "
- Verify that the productpage for Bookinfo is still reachable after the upgrade.
  1. Apply a virtual gateway to the ingress gateway for the new revision.

kubectl apply -f- <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: VirtualGateway
metadata:
  name: istio-ingressgateway
  namespace: bookinfo
spec:
  listeners:
    - http: {}
      port:
        number: 80
  workloads:
    - selector:
        labels:
          istio: ingressgateway
        namespace: gloo-mesh-gateways
EOF
  2. Apply a route table to allow requests to the Bookinfo services.

kubectl apply -f- <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
  name: bookinfo
  namespace: bookinfo
spec:
  hosts:
    - '*'
  # Selects the virtual gateway you previously created
  virtualGateways:
    - name: istio-ingressgateway
      namespace: bookinfo
  http:
    # Route for the main productpage app
    - name: productpage
      matchers:
        - uri:
            prefix: /productpage
      forwardTo:
        destinations:
          - ref:
              name: productpage
              namespace: bookinfo
              cluster: $CLUSTER_NAME
            port:
              number: 9080
    # Routes all /reviews requests to the reviews-v1 or reviews-v2 apps
    - name: reviews
      labels:
        route: reviews
      matchers:
        - uri:
            prefix: /reviews
      forwardTo:
        destinations:
          - ref:
              name: reviews
              namespace: bookinfo
              cluster: $CLUSTER_NAME
            port:
              number: 9080
    # Routes all /ratings requests to the ratings-v1 app
    - name: ratings-ingress
      labels:
        route: ratings
      matchers:
        - uri:
            prefix: /ratings
      forwardTo:
        destinations:
          - ref:
              name: ratings
              namespace: bookinfo
              cluster: $CLUSTER_NAME
            port:
              number: 9080
EOF
  3. Save the external address of the ingress gateway for the new revision. Depending on your cloud provider, the address is returned as an IP address or a hostname.

export INGRESS_GW_IP=$(kubectl get svc -n gloo-mesh-gateways istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $INGRESS_GW_IP

export INGRESS_GW_IP=$(kubectl get svc -n gloo-mesh-gateways istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo $INGRESS_GW_IP
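  4. To confirm that Bookinfo is still reachable, send a request to the productpage service through the new gateway address, similar to the earlier test against the old gateway.

curl http://$INGRESS_GW_IP:80/productpage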
Activate the managed installations
After you finish testing, change the new control plane to be active, and roll out a restart to data plane workloads so that they are managed by the new control plane. Then, you can update service selectors or internal/external DNS entries to point to the new ingress gateway. You can also optionally uninstall the old Istio installations.
- In your IstioLifecycleManager resource, switch to the new istiod control plane revision by changing defaultRevision to true.

kubectl edit IstioLifecycleManager -n gloo-mesh istiod-control-plane

Example:

apiVersion: admin.gloo.solo.io/v2
kind: IstioLifecycleManager
metadata:
  name: istiod-control-plane
  namespace: gloo-mesh
spec:
  installations:
    - revision: 1-19-3
      clusters:
        - name: cluster1
          # Set this field to TRUE
          defaultRevision: true
      istioOperatorSpec:
        profile: minimal
        ...
- Roll out a restart to your workload apps so that they are managed by the new control plane. For an end-to-end example of this step, see the sketch after this list.
  - Change the label on any Istio-managed namespaces to use the new revision.

kubectl label ns <namespace> istio.io/rev=$REVISION --overwrite

If you did not previously use revision labels for your apps, you can instead run kubectl label ns <namespace> istio-injection- and kubectl label ns <namespace> istio.io/rev=$REVISION.
  - Update any Istio-managed apps by rolling out restarts. The Istio sidecars for each microservice are updated to use the new Istio version. Make sure that you only restart one microservice at a time.
  - Verify that your workloads and new gateways point to the new revision.

istioctl proxy-status
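For example, for a hypothetical namespace named my-app with two deployments, the relabel, restarts, and follow-up check might look like the following. Adjust the namespace and deployment names to match your own workloads.

kubectl label ns my-app istio.io/rev=$REVISION --overwrite
kubectl rollout restart deployment -n my-app app-v1
sleep 20s
kubectl rollout restart deployment -n my-app app-v2
sleep 20s
istioctl proxy-status | grep "\.my-app "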
- If you use your own load balancer services for the gateway, update the service selectors to point to the gateway for the new revision. Alternatively, if you use the load balancer service that is deployed by default, update any internal or external DNS entries to point to the new gateway IP address.
- Uninstall the old Istio installation. The uninstallation process varies depending on your original installation method. For more information, see the Istio documentation. For an istioctl-based example, see the sketch after this list.
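For example, if the old control plane was originally installed with istioctl and uses a revision, the cleanup might look like the following sketch. If your old installation was created another way, such as with Helm or an operator, or does not use a revision, your commands differ, so confirm the approach in the Istio documentation for your installation method. The old gateway namespace name here is illustrative.

# Remove the old revisioned control plane
istioctl uninstall --revision <old_revision>

# Optionally clean up the old ingress gateway resources, for example by deleting their namespace
kubectl delete namespace istio-gateways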
Next steps
When it's time to upgrade Istio, you can use Gloo Gateway to upgrade Gloo-managed gateways.