Switch from unmanaged to managed gateways
Use the Istio lifecycle manager to switch from your existing, unmanaged Istio gateways to Gloo-managed Istio gateways.
Introduction
If you manually maintain Istio gateways, you can migrate those gateways to the Gloo Mesh Gateway Istio and gateway lifecycle management system. Then, you can use Gloo to manage the configuration and maintenance of the Istio gateways. By using Gloo-managed gateways, you no longer need to manually install and manage the istiod control plane and gateways in each workload cluster. Instead, you provide the Istio configuration to Gloo, and Gloo translates this configuration into managed istiod control planes and gateways for you in the workload clusters.
Considerations
Before you follow this takeover process, review the following important considerations.
- Revisions:
  - This process involves creating IstioLifecycleManager and GatewayLifecycleManager resources that use a different revision than your existing Istio installations. If you do not currently use revisions, no conflict exists between the new installations and the existing installations. If you do currently use revisions, be sure to choose a different revision for the new installations than for your existing installations.
  - If you plan to run multiple revisions of Istio in your cluster and use discoverySelectors in each revision to discover the resources in specific namespaces, enable the glooMgmtServer.extraEnvs.IGNORE_REVISIONS_FOR_VIRTUAL_DESTINATION_TRANSLATION environment variable on the Gloo management server. For more information, see Multiple Istio revisions in the same cluster.
- Gateways: To prevent conflicts, be sure to choose a different name or namespace for the new managed gateways than for your existing gateways. For example, if your existing gateway is named istio-ingressgateway and deployed in a namespace such as istio-gateways, you can still name the new gateway istio-ingressgateway, but you must deploy it in a different namespace, such as gloo-mesh-gateways.
- Testing: Always test this process in a representative test environment before attempting it in a production setup.
- Workload sidecars: If you also use Gloo Mesh Enterprise alongside Gloo Mesh Gateway, follow the steps in the Gloo Mesh Enterprise documentation instead. The Gloo Mesh guide shows you how to upgrade your workload sidecars along with your control planes and gateways.
Istio 1.22 is supported only as patch version 1.22.1-patch0 and later. Do not use patch versions 1.22.0 and 1.22.1, which contain bugs that impact several Gloo Mesh Gateway routing features that rely on virtual destinations. Additionally, in Istio 1.22.0-1.22.3, the ISTIO_DELTA_XDS environment variable must be set to false. For more information, see this upstream Istio issue. Note that this issue is resolved in Istio 1.22.4.
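If you must run one of the 1.22 patch versions affected by the ISTIO_DELTA_XDS issue, you can set this variable on istiod through the istioOperatorSpec section of your IstioLifecycleManager. The following is a minimal sketch only, assuming the standard IstioOperator field path for pilot environment variables; verify the field path against the IstioOperator API for your Istio version.

```yaml
# Sketch: embed this in the istioOperatorSpec of your IstioLifecycleManager (illustrative)
istioOperatorSpec:
  components:
    pilot:
      k8s:
        env:
          - name: ISTIO_DELTA_XDS
            value: "false"
```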
Istio 1.20 is supported only as patch version 1.20.1-patch1
and later. Do not use patch versions 1.20.0 and 1.20.1, which contain bugs that impact several Gloo Mesh Gateway features that rely on Istio ServiceEntries.
If you have multiple external services that use the same host and plan to use Istio 1.20, 1.21, or 1.22, you must use patch versions 1.20.7, 1.21.3, or 1.22.1-patch0 or later to ensure that the Istio service entry that is created for those external services is correct.
Before you begin
Follow the get started or advanced installation guide to install the Gloo Mesh Gateway components.
Review Supported versions to choose the Solo distribution of Istio that you want to use, and save the version information in the following environment variables.
- REPO: The repo key for the Solo distribution of Istio that you can get by logging in to the Support Center and reviewing the Istio images built by Solo.io support article.
- ISTIO_IMAGE: The version that you want to use with the solo tag, such as 1.24.1-patch1-solo. You can optionally append other tags of Solo distributions of Istio as needed.
- REVISION: Take the Istio major and minor versions and replace the periods with hyphens, such as 1-24.
```sh
export REPO=<repo-key>
export ISTIO_IMAGE=1.24.1-patch1-solo
export REVISION=1-24
```
Install istioctl, the Istio CLI tool. Set ISTIO_VERSION to the underlying Istio version of the image that you chose, such as 1.24.1.

```sh
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=${ISTIO_VERSION} sh -
cd istio-$ISTIO_VERSION
export PATH=$PWD/bin:$PATH
```
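To confirm that the CLI is on your PATH and matches the Istio minor version that you plan to install, you can check the client version.

```sh
# Print only the local client version; the control plane check is not needed yet
istioctl version --remote=false
```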
Multicluster
Use Gloo Mesh Gateway to deploy and manage Istio gateways across multiple clusters. The takeover process follows these general steps:
- Install managed gateways with the same settings as your unmanaged Istio installations. You create IstioLifecycleManager and GatewayLifecycleManager resources in the management cluster that use a different revision than the existing Istio installations in your workload clusters. Gloo then deploys istiod control planes and gateways for the new revision to each workload cluster, but the new, managed control planes and gateways are not active at deployment time.
- Test the new control planes and gateways by deploying workloads with a label for the new revision and generating traffic to those workloads.
- Change the new control planes to be active, and roll out a restart to data plane workloads so that the new control planes manage them.
- Update service selectors or update internal/external DNS entries to point to the new gateways.
- Uninstall the old Istio installations.
Deploy
Use Gloo Mesh Gateway to deploy and manage Istio gateways in each workload cluster.
istiod control planes
Prepare an IstioLifecycleManager
CR to manage istiod
control planes.
Download the example file, istiod.yaml, which contains a basic IstioLifecycleManager configuration for the control plane.

Update the example file with the environment variables that you previously set. Save the updated file as istiod-values.yaml.
- For example, you can run a terminal command to substitute values:
envsubst < istiod.yaml > istiod-values.yaml
Verify that the configuration is correct. For example, in spec.installations.clusters, verify that entries exist for each workload cluster name. You can also further edit the file to replicate the settings in your existing Istio installation. For more information, see the API reference; a minimal reference sketch also follows these steps.

open istiod-values.yaml
Apply the IstioLifecycleManager resource to your management cluster.

kubectl apply -f istiod-values.yaml --context $MGMT_CONTEXT
In each workload cluster, verify that the istiod pod for the new revision has a status of Running. Note that these new, managed control planes are not currently the active Istio installations.

```sh
kubectl get pods -n istio-system --context $REMOTE_CONTEXT1
kubectl get pods -n istio-system --context $REMOTE_CONTEXT2
```
Example output for one cluster:
```
NAME                          READY   STATUS    RESTARTS   AGE
istiod-1-24-b65676555-g2vmr   1/1     Running   0          1m57s
istiod-1-23-yt72566r9-8j5tr   1/1     Running   0          23d
```
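For reference, a minimal IstioLifecycleManager for this takeover scenario might look like the following sketch. The hub, tag, profile, and namespace values are illustrative assumptions; keep defaultRevision set to false at this stage so that your existing installation stays active, and compare the downloaded example and the API reference for the exact settings.

```yaml
apiVersion: admin.gloo.solo.io/v2
kind: IstioLifecycleManager
metadata:
  name: istiod-control-plane
  namespace: gloo-mesh
spec:
  installations:
    - revision: 1-24
      clusters:
        - name: cluster1
          # Keep false during the takeover so the existing installation remains the default
          defaultRevision: false
        - name: cluster2
          defaultRevision: false
      istioOperatorSpec:
        profile: minimal
        namespace: istio-system
        # Solo distribution of Istio image details (illustrative)
        hub: ${REPO}
        tag: ${ISTIO_IMAGE}
```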
East-west gateways
Prepare a GatewayLifecycleManager
custom resource to manage the east-west gateways.
Download the example file.
curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/gm-ew-gateway.yaml > ew-gateway.yaml
Update the example file with the environment variables that you previously set. Save the updated file as ew-gateway-values.yaml.
- For example, you can run a terminal command to substitute values:
envsubst < ew-gateway.yaml > ew-gateway-values.yaml
Verify that the configuration is correct. You can also further edit the file to replicate the settings in your existing Istio gateway installation. For more information, see the API reference; a reference sketch of this resource also follows these steps.
open ew-gateway-values.yaml
Apply the GatewayLifecycleManager CR to your management cluster.

kubectl apply -f ew-gateway-values.yaml --context $MGMT_CONTEXT
In each workload cluster, verify that the east-west gateway pod for the new revision is running in the gloo-mesh-gateways namespace.

```sh
kubectl get pods -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
kubectl get pods -n gloo-mesh-gateways --context $REMOTE_CONTEXT2
```
Example output for one cluster:
```
NAME                                     READY   STATUS    RESTARTS   AGE
istio-eastwestgateway-665d46686f-nhh52   1/1     Running   0          106s
```
In each workload cluster, verify that the load balancer service for the new revision has an external address.
```sh
kubectl get svc -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
kubectl get svc -n gloo-mesh-gateways --context $REMOTE_CONTEXT2
```
Example output for one cluster:
```
NAME                    TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                                                                       AGE
istio-eastwestgateway   LoadBalancer   10.96.252.49   <externalip>   15021:32378/TCP,80:30315/TCP,443:32186/TCP,31400:30313/TCP,15443:31632/TCP   2m2s
```
AWS clusters: For the Elastic Load Balancer (ELB) instance that is automatically created for you to back the ingress gateway service, verify that the health check shows a healthy state. Gloo Mesh Core configures the ingress gateway to listen on HTTPS port 15443. However, when the ELB is created, the first port that is defined in the Kubernetes service manifest is used to perform the health check. This port might be different from the port that Gloo configures. For your ELB health check to pass, you might need to configure the load balancer to run the health check on port 15443.
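For reference, a GatewayLifecycleManager for the east-west gateway that you deployed in this section might look like the following sketch. The field names follow the GatewayLifecycleManager API, but the hub, tag, profile, and label values shown here are assumptions for illustration; keep activeGateway set to false at this stage so that the existing gateway keeps serving traffic, and check the downloaded example and the API reference for the exact settings.

```yaml
apiVersion: admin.gloo.solo.io/v2
kind: GatewayLifecycleManager
metadata:
  name: istio-eastwestgateway
  namespace: gloo-mesh
spec:
  installations:
    - gatewayRevision: 1-24
      clusters:
        - name: cluster1
          # Keep false during the takeover so the existing gateway stays active
          activeGateway: false
        - name: cluster2
          activeGateway: false
      istioOperatorSpec:
        profile: empty
        hub: ${REPO}        # illustrative
        tag: ${ISTIO_IMAGE} # illustrative
        components:
          ingressGateways:
            - name: istio-eastwestgateway
              namespace: gloo-mesh-gateways
              enabled: true
              label:
                istio: eastwestgateway
```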
Ingress gateways
Download the example file.
curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/gm-ingress-gateway.yaml > ingress-gateway.yaml
Update the example file with the environment variables that you previously set. Save the updated file as ingress-gateway-values.yaml.
- For example, you can run a terminal command to substitute values:
envsubst < ingress-gateway.yaml > ingress-gateway-values.yaml
Verify that the configuration is correct. You can also further edit the file to replicate the settings in your existing Istio gateway installation. For more information, see the API reference.
open ingress-gateway-values.yaml
- You can add cloud provider-specific load balancer annotations to the istioOperatorSpec.components.ingressGateways.k8s section, such as the following AWS annotations:

```yaml
...
      k8s:
        service:
          ...
        serviceAnnotations:
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
          service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
          service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
          service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:<cert>"
          service.beta.kubernetes.io/aws-load-balancer-type: external
```
Apply the GatewayLifecycleManager CR to your management cluster.

kubectl apply -f ingress-gateway-values.yaml --context $MGMT_CONTEXT
In each workload cluster, verify that the ingress gateway pod for the new revision is running in the gloo-mesh-gateways namespace.

```sh
kubectl get pods -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
kubectl get pods -n gloo-mesh-gateways --context $REMOTE_CONTEXT2
```
Example output for one cluster:
```
NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-665d46686f-nhh52   1/1     Running   0          106s
```
In each workload cluster, verify that the load balancer service for the new revision has an external address.
```sh
kubectl get svc -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
kubectl get svc -n gloo-mesh-gateways --context $REMOTE_CONTEXT2
```
Example output for one cluster:
```
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                                                                       AGE
istio-ingressgateway   LoadBalancer   10.96.252.49   <externalip>   15021:32378/TCP,80:30315/TCP,443:32186/TCP,31400:30313/TCP,15443:31632/TCP   2m2s
```
AWS clusters: For the Elastic Load Balancer (ELB) instance that is automatically created for you to back the ingress gateway service, verify that the health check shows a healthy state. Gloo Mesh Core configures the ingress gateway to listen on HTTPS port 15443. However, when the ELB is created, the first port that is defined in the Kubernetes service manifest is used to perform the health check. This port might be different from the port that Gloo configures. For your ELB health check to pass, you might need to configure the load balancer to run the health check on port 15443.
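If the health check fails, one option is to set the health check port through a load balancer annotation on the gateway service, alongside the other serviceAnnotations shown earlier. The annotation below is from the AWS Load Balancer Controller and is shown as an assumption; confirm which annotation your controller version supports before you rely on it.

```yaml
...
      k8s:
        serviceAnnotations:
          # Run the load balancer health check against port 15443, as described above (illustrative)
          service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "15443"
```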
Test
Test the new Istio installation by deploying the Istio sample app, Bookinfo, and updating its sidecars from the old revision to the new.
Create the bookinfo namespace in one workload cluster.

kubectl create ns bookinfo --context $REMOTE_CONTEXT1
Label the namespaces for Istio injection with the old revision so that the old revision’s control plane manages the services.
kubectl label ns bookinfo istio.io/rev=<old_revision> --context $REMOTE_CONTEXT1
If you did not previously use revision labels for your apps, you can instead run kubectl label ns bookinfo istio-injection=enabled --context $REMOTE_CONTEXT1.

Deploy the Bookinfo app to your workload cluster.
```sh
# deploy bookinfo app components for all versions less than v3
kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.24.1/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app,version notin (v3)' --context $REMOTE_CONTEXT1

# deploy an updated product page with extra container utilities such as 'curl' and 'netcat'
kubectl -n bookinfo apply -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/policy-demo/productpage-with-curl.yaml --context $REMOTE_CONTEXT1

# deploy all bookinfo service accounts
kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.24.1/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account' --context $REMOTE_CONTEXT1
```
Verify that the Bookinfo app deployed successfully.
kubectl get pods -n bookinfo --context $REMOTE_CONTEXT1
Verify that Bookinfo still points to the old revision.
istioctl proxy-status --context $REMOTE_CONTEXT1 | grep "\.bookinfo "
In this example output, the Bookinfo apps in cluster1 still point to the existing Istio installation that uses version 1.22.5.

```
NAME                                      CLUSTER    ...   ISTIOD                   VERSION
details-v1-6758dd9d8d-rh4db.bookinfo      cluster1   ...   istiod-66d54b865-6b6zt   1.22.5
productpage-v1-b4cf67f67-s5lsh.bookinfo   cluster1   ...   istiod-66d54b865-6b6zt   1.22.5
ratings-v1-f849dc6d-wqdc8.bookinfo        cluster1   ...   istiod-66d54b865-6b6zt   1.22.5
reviews-v1-74fb8fdbd8-z8bzc.bookinfo      cluster1   ...   istiod-66d54b865-6b6zt   1.22.5
reviews-v2-58d564d4db-g8jzr.bookinfo      cluster1   ...   istiod-66d54b865-6b6zt   1.22.5
```
Generate traffic through the old ingress gateway to Bookinfo.
- Apply a virtual gateway to the ingress gateway for the old revision. For example, if your gateway named istio-ingressgateway exists in the istio-ingress namespace, your virtual gateway might look like the following:

```sh
kubectl apply --context $REMOTE_CONTEXT1 -f- <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: VirtualGateway
metadata:
  name: old-vg
  namespace: bookinfo
spec:
  listeners:
    - http: {}
      port:
        number: 80
  workloads:
    - selector:
        labels:
          istio: ingressgateway
        namespace: istio-ingress
EOF
```
- Apply a route table to allow requests to the Bookinfo services.
```sh
kubectl apply --context $REMOTE_CONTEXT1 -f- <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
  name: bookinfo
  namespace: bookinfo
spec:
  hosts:
    - '*'
  # Selects the virtual gateway you previously created
  virtualGateways:
    - name: old-vg
      namespace: bookinfo
  http:
    # Route for the main productpage app
    - name: productpage
      matchers:
        - uri:
            prefix: /productpage
      forwardTo:
        destinations:
          - ref:
              name: productpage
              namespace: bookinfo
              cluster: $CLUSTER_NAME
            port:
              number: 9080
    # Routes all /reviews requests to the reviews-v1 or reviews-v2 apps
    - name: reviews
      labels:
        route: reviews
      matchers:
        - uri:
            prefix: /reviews
      forwardTo:
        destinations:
          - ref:
              name: reviews
              namespace: bookinfo
              cluster: $CLUSTER_NAME
            port:
              number: 9080
    # Routes all /ratings requests to the ratings-v1 app
    - name: ratings-ingress
      labels:
        route: ratings
      matchers:
        - uri:
            prefix: /ratings
      forwardTo:
        destinations:
          - ref:
              name: ratings
              namespace: bookinfo
              cluster: $CLUSTER_NAME
            port:
              number: 9080
EOF
```
- Get the external address of the ingress gateway for the old revision. For example, if your gateway named istio-ingressgateway exists in the istio-ingress namespace, you might run a command similar to the following:

kubectl get svc --context $REMOTE_CONTEXT1 -n istio-ingress istio-ingressgateway -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}"
- Test the old ingress gateway by sending a request to the productpage service.
curl http://<old_gateway_address>:80/productpage
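If you only want to confirm that the route works, you can check the HTTP response code instead of printing the full page. A quick sketch:

```sh
# Expect a 200 response from the productpage route through the old gateway
curl -s -o /dev/null -w "%{http_code}\n" http://<old_gateway_address>:80/productpage
```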
Test the transition to the new installation on Bookinfo by changing the label on the bookinfo namespace to use the new revision.

kubectl label ns bookinfo istio.io/rev=$REVISION --overwrite --context $REMOTE_CONTEXT1
If you did not previously use revision labels for your apps, you can instead run kubectl label ns bookinfo istio-injection- --context $REMOTE_CONTEXT1 and kubectl label ns bookinfo istio.io/rev=$REVISION --context $REMOTE_CONTEXT1.

Update Bookinfo by rolling out restarts to each of the microservices. The Istio sidecars for each microservice update to use the new Istio version. Make sure that you only restart one microservice at a time. For example, in the following commands, 20 seconds elapse between each restart to ensure that the pods have time to start running.
```sh
kubectl rollout restart deployment -n bookinfo details-v1 --context $REMOTE_CONTEXT1
sleep 20s
kubectl rollout restart deployment -n bookinfo ratings-v1 --context $REMOTE_CONTEXT1
sleep 20s
kubectl rollout restart deployment -n bookinfo productpage-v1 --context $REMOTE_CONTEXT1
sleep 20s
kubectl rollout restart deployment -n bookinfo reviews-v1 --context $REMOTE_CONTEXT1
sleep 20s
kubectl rollout restart deployment -n bookinfo reviews-v2 --context $REMOTE_CONTEXT1
```
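Instead of fixed sleep intervals, you can wait for each rollout to finish before you restart the next deployment. A minimal sketch for one of the deployments; the timeout value is an arbitrary example:

```sh
kubectl rollout restart deployment -n bookinfo details-v1 --context $REMOTE_CONTEXT1
# Block until the restarted pods are ready, or fail after the timeout
kubectl rollout status deployment -n bookinfo details-v1 --context $REMOTE_CONTEXT1 --timeout=120s
```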
Verify that the Bookinfo pods now use the new revision.
istioctl proxy-status --context $REMOTE_CONTEXT1 | grep "\.bookinfo "
Verify that the productpage for Bookinfo is still reachable after the upgrade.
- Apply a virtual gateway to the ingress gateway for the new revision.

```sh
kubectl apply --context $REMOTE_CONTEXT1 -f- <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: VirtualGateway
metadata:
  name: istio-ingressgateway
  namespace: bookinfo
spec:
  listeners:
    - http: {}
      port:
        number: 80
  workloads:
    - selector:
        labels:
          istio: ingressgateway
        namespace: gloo-mesh-gateways
EOF
```
- Apply a route table to allow requests to the Bookinfo services.
```sh
kubectl apply --context $REMOTE_CONTEXT1 -f- <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
  name: bookinfo
  namespace: bookinfo
spec:
  hosts:
    - '*'
  # Selects the virtual gateway you previously created
  virtualGateways:
    - name: istio-ingressgateway
      namespace: bookinfo
  http:
    # Route for the main productpage app
    - name: productpage
      matchers:
        - uri:
            prefix: /productpage
      forwardTo:
        destinations:
          - ref:
              name: productpage
              namespace: bookinfo
              cluster: $CLUSTER_NAME
            port:
              number: 9080
    # Routes all /reviews requests to the reviews-v1 or reviews-v2 apps
    - name: reviews
      labels:
        route: reviews
      matchers:
        - uri:
            prefix: /reviews
      forwardTo:
        destinations:
          - ref:
              name: reviews
              namespace: bookinfo
              cluster: $CLUSTER_NAME
            port:
              number: 9080
    # Routes all /ratings requests to the ratings-v1 app
    - name: ratings-ingress
      labels:
        route: ratings
      matchers:
        - uri:
            prefix: /ratings
      forwardTo:
        destinations:
          - ref:
              name: ratings
              namespace: bookinfo
              cluster: $CLUSTER_NAME
            port:
              number: 9080
EOF
```
- Save the external address of the ingress gateway for the new revision.
```sh
export INGRESS_GW_ADDRESS=$(kubectl get svc --context $REMOTE_CONTEXT1 -n gloo-mesh-gateways istio-ingressgateway -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
echo $INGRESS_GW_ADDRESS
```
- Verify that you can access Bookinfo through the new gateway.
open http://$INGRESS_GW_ADDRESS/productpage
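If you prefer the command line over a browser, a quick check against the new gateway; the grep pattern is only illustrative:

```sh
# Expect a 200 response and the Bookinfo product page title
curl -s -o /dev/null -w "%{http_code}\n" http://$INGRESS_GW_ADDRESS/productpage
curl -s http://$INGRESS_GW_ADDRESS/productpage | grep -o "<title>.*</title>"
```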
Activate
After you finish testing, change the new control planes to be active, and roll out a restart to data plane workloads so that the new control planes manage them. You can also optionally uninstall the old Istio installations.
In your IstioLifecycleManager resource, switch to the new istiod control plane revision by changing defaultRevision to true.

kubectl edit IstioLifecycleManager -n gloo-mesh --context $MGMT_CONTEXT istiod-control-plane
Example:
```yaml
apiVersion: admin.gloo.solo.io/v2
kind: IstioLifecycleManager
metadata:
  name: istiod-control-plane
  namespace: gloo-mesh
spec:
  installations:
    - revision: 1-24
      clusters:
        - name: cluster1
          # Set this field to TRUE
          defaultRevision: true
        - name: cluster2
          # Set this field to TRUE
          defaultRevision: true
      istioOperatorSpec:
        profile: minimal
        ...
```
In each workload cluster, roll out a restart to your workload apps so that the new control planes manage them.
- Change the label on any Istio-managed namespaces to use the new revision.
```sh
kubectl label ns <namespace> istio.io/rev=$REVISION --overwrite --context $REMOTE_CONTEXT1
kubectl label ns <namespace> istio.io/rev=$REVISION --overwrite --context $REMOTE_CONTEXT2
```

If you did not previously use revision labels for your apps, you can instead run kubectl label ns <namespace> istio-injection- --context $REMOTE_CONTEXT1 and kubectl label ns <namespace> istio.io/rev=$REVISION --context $REMOTE_CONTEXT1.
- Update any Istio-managed apps by rolling out restarts. The Istio sidecars for each microservice update to use the new Istio version. Make sure that you only restart one microservice at a time.
- Verify that your workloads and new gateways point to the new revision.
```sh
istioctl proxy-status --context $REMOTE_CONTEXT1
istioctl proxy-status --context $REMOTE_CONTEXT2
```
If you use your own load balancer services for the gateways, update the service selectors to point to the gateways for the new revision. Alternatively, if you use the load balancer services that are deployed by default, update any internal or external DNS entries to point to the new gateway IP addresses.
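For example, if you run your own LoadBalancer service for ingress, you might repoint its selector at the new revision's gateway pods. The service name, namespace, and label values below are hypothetical; check the labels that the managed gateway pods actually carry and use those instead.

```sh
# Inspect the labels on the new managed gateway pods first
kubectl get pods -n gloo-mesh-gateways --show-labels --context $REMOTE_CONTEXT1

# Hypothetical example: patch a self-managed service named my-ingress-lb to select those pods
kubectl patch svc my-ingress-lb -n my-namespace --context $REMOTE_CONTEXT1 \
  --type merge -p '{"spec":{"selector":{"istio":"ingressgateway","revision":"1-24"}}}'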
Uninstall the old Istio installations. The uninstallation process varies depending on your original installation method. For more information, see the Istio documentation.
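For example, if the old installation was created with istioctl and uses revisions, you might remove just that revision after confirming that no workloads still point to it. This is a sketch only; adjust it for your own installation method and revision name.

```sh
# Confirm that nothing still uses the old revision, then remove it (illustrative revision name)
istioctl proxy-status --context $REMOTE_CONTEXT1
istioctl uninstall --revision <old_revision> --context $REMOTE_CONTEXT1
```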
Single cluster
Use Gloo Mesh Gateway to take over the Istio gateway in one cluster.
The takeover process follows these general steps:
- Install a managed gateway with the same settings as your unmanaged Istio installation. You create IstioLifecycleManager and GatewayLifecycleManager resources in your cluster that use a different revision than the existing Istio installation. Gloo then deploys an istiod control plane and gateway for the new revision, but the new, managed control plane and gateway are not active at deployment time.
- Test the new control plane by deploying workloads with a label for the new revision and generating traffic to those workloads.
- Change the new control plane to be active, and roll out a restart to data plane workloads so that the new control plane manages them.
- Update service selectors or update internal/external DNS entries for your Istio gateways.
- Uninstall the old Istio installation.
Deploy
Use Gloo Mesh Gateway to deploy a managed gateway that replicates the settings from your unmanaged Istio installation.
istiod control plane
Prepare an IstioLifecycleManager
CR to manage the istiod
control plane.
Download the example file, istiod.yaml, which contains a basic IstioLifecycleManager configuration for the control plane.

curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh-core/istio-install/managed/single/takeover-istiod.yaml > istiod.yaml
Update the example file with the environment variables that you previously set. Save the updated file as istiod-values.yaml.
- For example, you can run a terminal command to substitute values:
envsubst < istiod.yaml > istiod-values.yaml
Verify that the configuration is correct. You can also further edit the file to provide your own details. For more information, see the API reference.
open istiod-values.yaml
Apply the IstioLifecycleManager resource to your cluster.

kubectl apply -f istiod-values.yaml
Verify that the istiod pod for the new revision has a status of Running. Note that this new, managed control plane is not currently the active Istio installation.

kubectl get pods -n istio-system
Example output:
```
NAME                          READY   STATUS    RESTARTS   AGE
istiod-1-24-b65676555-g2vmr   1/1     Running   0          1m57s
istiod-1-23-yt72566r9-8j5tr   1/1     Running   0          23d
```
Ingress gateway
Download the example file.
curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/single-cluster/gm-ingress-gateway.yaml > ingress-gateway.yaml
Update the example file with the environment variables that you previously set. Save the updated file as ingress-gateway-values.yaml.
- For example, you can run a terminal command to substitute values:
envsubst < ingress-gateway.yaml > ingress-gateway-values.yaml
Verify that the configuration is correct. You can also further edit the file to replicate the settings in your existing Istio installation. For more information, see the API reference.
open ingress-gateway-values.yaml
- You can add cloud provider-specific load balancer annotations to the istioOperatorSpec.components.ingressGateways.k8s section, such as the following AWS annotations:

```yaml
...
      k8s:
        service:
          ...
        serviceAnnotations:
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
          service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
          service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
          service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:<cert>"
          service.beta.kubernetes.io/aws-load-balancer-type: external
```
Apply the GatewayLifecycleManager CR to your cluster.

kubectl apply -f ingress-gateway-values.yaml
In the gloo-mesh-gateways namespace, verify that the ingress gateway pod for the new revision is running and that the load balancer service has an external address.

```sh
kubectl get pods -n gloo-mesh-gateways
kubectl get svc -n gloo-mesh-gateways
```
Example output:
```
NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-665d46686f-nhh52   1/1     Running   0          106s

NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                                                                       AGE
istio-ingressgateway   LoadBalancer   10.96.252.49   <externalip>   15021:32378/TCP,80:30315/TCP,443:32186/TCP,31400:30313/TCP,15443:31632/TCP   2m2s
```
AWS clusters: For the Elastic Load Balancer (ELB) instance that is automatically created for you to back the ingress gateway service, verify that the health check shows a healthy state. Gloo Mesh Core configures the ingress gateway to listen on HTTPS port 15443. However, when the ELB is created, the first port that is defined in the Kubernetes service manifest is used to perform the health check. This port might be different from the port that Gloo configures. For your ELB health check to pass, you might need to configure the load balancer to run the health check on port 15443.

Optional for OpenShift: Expose the ingress gateway by using an OpenShift route.
oc -n gloo-mesh-gateways expose svc istio-ingressgateway --port=http2
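To confirm the route and get its host afterward, you can query it with oc. This assumes the default behavior of oc expose, which names the route after the service:

```sh
# Print the externally reachable host of the generated route
oc -n gloo-mesh-gateways get route istio-ingressgateway -o jsonpath='{.spec.host}'
```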
Test
Test the new managed gateway by deploying the Istio sample app, Bookinfo, and updating its sidecars from the old revision to the new.
Create the bookinfo namespace.

kubectl create ns bookinfo
Label the namespace for Istio injection with the old revision so that the old revision’s control plane manages the services.
kubectl label ns bookinfo istio.io/rev=<old_revision>
If you did not previously use revision labels for your apps, you can instead run kubectl label ns bookinfo istio-injection=enabled.

Deploy the Bookinfo app.
```sh
# deploy bookinfo app components for all versions less than v3
kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.24.1/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app,version notin (v3)'

# deploy an updated product page with extra container utilities such as 'curl' and 'netcat'
kubectl -n bookinfo apply -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/policy-demo/productpage-with-curl.yaml

# deploy all bookinfo service accounts
kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.24.1/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account'
```
Verify that the Bookinfo app deployed successfully.
```sh
kubectl get pods -n bookinfo
kubectl get svc -n bookinfo
```
Verify that Bookinfo still points to the old revision.
istioctl proxy-status | grep "\.bookinfo "
In this example output, the Bookinfo apps still point to the existing Istio installation that uses version 1.22.5.
```
NAME                                      CLUSTER    ...   ISTIOD                   VERSION
details-v1-6758dd9d8d-rh4db.bookinfo      cluster1   ...   istiod-66d54b865-6b6zt   1.22.5
productpage-v1-b4cf67f67-s5lsh.bookinfo   cluster1   ...   istiod-66d54b865-6b6zt   1.22.5
ratings-v1-f849dc6d-wqdc8.bookinfo        cluster1   ...   istiod-66d54b865-6b6zt   1.22.5
reviews-v1-74fb8fdbd8-z8bzc.bookinfo      cluster1   ...   istiod-66d54b865-6b6zt   1.22.5
reviews-v2-58d564d4db-g8jzr.bookinfo      cluster1   ...   istiod-66d54b865-6b6zt   1.22.5
```
Generate traffic through the old ingress gateway to Bookinfo.
- Apply a virtual gateway to the ingress gateway for the old revision. For example, if your gateway named istio-ingressgateway exists in the istio-ingress namespace, your virtual gateway might look like the following:

```sh
kubectl apply -f- <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: VirtualGateway
metadata:
  name: old-vg
  namespace: bookinfo
spec:
  listeners:
    - http: {}
      port:
        number: 80
  workloads:
    - selector:
        labels:
          istio: ingressgateway
        namespace: istio-ingress
EOF
```
- Apply a route table to allow requests to the Bookinfo services.
```sh
kubectl apply -f- <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
  name: bookinfo
  namespace: bookinfo
spec:
  hosts:
    - '*'
  # Selects the virtual gateway you previously created
  virtualGateways:
    - name: old-vg
      namespace: bookinfo
  http:
    # Route for the main productpage app
    - name: productpage
      matchers:
        - uri:
            prefix: /productpage
      forwardTo:
        destinations:
          - ref:
              name: productpage
              namespace: bookinfo
              cluster: $CLUSTER_NAME
            port:
              number: 9080
    # Routes all /reviews requests to the reviews-v1 or reviews-v2 apps
    - name: reviews
      labels:
        route: reviews
      matchers:
        - uri:
            prefix: /reviews
      forwardTo:
        destinations:
          - ref:
              name: reviews
              namespace: bookinfo
              cluster: $CLUSTER_NAME
            port:
              number: 9080
    # Routes all /ratings requests to the ratings-v1 app
    - name: ratings-ingress
      labels:
        route: ratings
      matchers:
        - uri:
            prefix: /ratings
      forwardTo:
        destinations:
          - ref:
              name: ratings
              namespace: bookinfo
              cluster: $CLUSTER_NAME
            port:
              number: 9080
EOF
```
- Get the external address of the ingress gateway for the old revision. For example, if your gateway named istio-ingressgateway exists in the istio-ingress namespace, you might run a command similar to the following:

kubectl get svc -n istio-ingress istio-ingressgateway -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}"
- Test the old ingress gateway by sending a request to the productpage service.
curl http://<old_gateway_address>:80/productpage
Test the transition to the new installation on Bookinfo by changing the label on the bookinfo namespace to use the new revision.

kubectl label ns bookinfo istio.io/rev=$REVISION --overwrite
If you did not previously use revision labels for your apps, you can instead run kubectl label ns bookinfo istio-injection- and kubectl label ns bookinfo istio.io/rev=$REVISION.

Update Bookinfo by rolling out restarts to each of the microservices. The Istio sidecars for each microservice update to use the new Istio version. Make sure that you only restart one microservice at a time. For example, in the following commands, 20 seconds elapse between each restart to ensure that the pods have time to start running.
```sh
kubectl rollout restart deployment -n bookinfo details-v1
sleep 20s
kubectl rollout restart deployment -n bookinfo ratings-v1
sleep 20s
kubectl rollout restart deployment -n bookinfo productpage-v1
sleep 20s
kubectl rollout restart deployment -n bookinfo reviews-v1
sleep 20s
kubectl rollout restart deployment -n bookinfo reviews-v2
```
Verify that the Bookinfo pods now use the new revision.
istioctl proxy-status | grep "\.bookinfo "
Verify that the productpage for Bookinfo is still reachable after the upgrade.
- Apply a virtual gateway to the ingress gateway for the new revision.

```sh
kubectl apply -f- <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: VirtualGateway
metadata:
  name: istio-ingressgateway
  namespace: bookinfo
spec:
  listeners:
    - http: {}
      port:
        number: 80
  workloads:
    - selector:
        labels:
          istio: ingressgateway
        namespace: gloo-mesh-gateways
EOF
```
- Apply a route table to allow requests to the Bookinfo services.
```sh
kubectl apply -f- <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
  name: bookinfo
  namespace: bookinfo
spec:
  hosts:
    - '*'
  # Selects the virtual gateway you previously created
  virtualGateways:
    - name: istio-ingressgateway
      namespace: bookinfo
  http:
    # Route for the main productpage app
    - name: productpage
      matchers:
        - uri:
            prefix: /productpage
      forwardTo:
        destinations:
          - ref:
              name: productpage
              namespace: bookinfo
              cluster: $CLUSTER_NAME
            port:
              number: 9080
    # Routes all /reviews requests to the reviews-v1 or reviews-v2 apps
    - name: reviews
      labels:
        route: reviews
      matchers:
        - uri:
            prefix: /reviews
      forwardTo:
        destinations:
          - ref:
              name: reviews
              namespace: bookinfo
              cluster: $CLUSTER_NAME
            port:
              number: 9080
    # Routes all /ratings requests to the ratings-v1 app
    - name: ratings-ingress
      labels:
        route: ratings
      matchers:
        - uri:
            prefix: /ratings
      forwardTo:
        destinations:
          - ref:
              name: ratings
              namespace: bookinfo
              cluster: $CLUSTER_NAME
            port:
              number: 9080
EOF
```
- Save the external address of the ingress gateway for the new revision.
```sh
export INGRESS_GW_ADDRESS=$(kubectl get svc -n gloo-mesh-gateways istio-ingressgateway -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
echo $INGRESS_GW_ADDRESS
```
- Verify that you can access Bookinfo through the new gateway.
open http://$INGRESS_GW_ADDRESS/productpage
Activate
After you finish testing, change the new control plane to be active, and roll out a restart to data plane workloads so that the new control plane manages them. You can also optionally uninstall the old Istio installation.
In your IstioLifecycleManager resource, switch to the new istiod control plane revision by changing defaultRevision to true.

kubectl edit IstioLifecycleManager -n gloo-mesh istiod-control-plane
Example:
```yaml
apiVersion: admin.gloo.solo.io/v2
kind: IstioLifecycleManager
metadata:
  name: istiod-control-plane
  namespace: gloo-mesh
spec:
  installations:
    - revision: 1-24
      clusters:
        - name: cluster1
          # Set this field to TRUE
          defaultRevision: true
      istioOperatorSpec:
        profile: minimal
        ...
```
Roll out a restart to your workload apps so that the new control plane manages them.
- Change the label on any Istio-managed namespaces to use the new revision.
kubectl label ns <namespace> istio.io/rev=$REVISION --overwrite
If you did not previously use revision labels for your apps, you can instead run kubectl label ns <namespace> istio-injection- and kubectl label ns <namespace> istio.io/rev=$REVISION.
- Update any Istio-managed apps by rolling out restarts. The Istio sidecars for each microservice update to use the new Istio version. Make sure that you only restart one microservice at a time.
- Verify that your workloads point to the new revision.
istioctl proxy-status
If you use your own load balancer services for the gateway, update the service selectors to point to the gateway for the new revision. Alternatively, if you use the load balancer service that is deployed by default, update any internal or external DNS entries to point to the new gateway IP address.
Uninstall the old Istio installation. The uninstallation process varies depending on your original installation method. For more information, see the Istio documentation.
Next steps
When it’s time to upgrade to a new version or change Istio settings, you can use Gloo Mesh Gateway to upgrade your managed gateways.