Switch from unmanaged to managed gateways

Use the Istio lifecycle manager to switch from your existing, unmanaged Istio gateway installations to Gloo-managed Istio gateway installations. The takeover process follows these general steps:

  1. Create IstioLifecycleManager and GatewayLifecycleManager resources in the management cluster that use a different revision than the existing Istio installations in your workload clusters. The istiod control planes and Istio gateways for the new installation are deployed to each workload cluster, but the new, managed control planes are not active at deployment time.
  2. Test the new control plane and gateways by deploying workloads with a label for the new revision and generating traffic to those workloads.
  3. Change the new control planes to be active, and roll out a restart to data plane workloads so that they are managed by the new control planes.
  4. Update load balancer service selectors or internal/external DNS entries to point to the new gateways.
  5. Uninstall the old Istio installations.

Considerations

Before you follow this takeover process, review the following important considerations.

If you also use Gloo Mesh Enterprise alongside Gloo Gateway, follow the steps in the Gloo Mesh documentation instead. The Gloo Mesh guide shows you how to upgrade your workload sidecars along with your control planes and gateways.

Before you begin

  1. Save the names of your clusters from your infrastructure provider as environment variables. For multicluster setups, save the name of each workload cluster.
    export CLUSTER_NAME=<cluster-name>
    
  2. To use a Gloo Mesh hardened image of Istio, you must have a Solo account. Make sure that you can log in to the Support Center; if you cannot, contact your account administrator. Then, get the repo key for the Istio version that you want to install from the Istio images built by Solo.io support article.

Deploy the managed Istio gateway installations

Create IstioLifecycleManager and GatewayLifecycleManager resources in the management cluster that use a different revision than the existing Istio installations in your workload clusters. The istiod control planes and Istio gateways for the new installation are deployed to each workload cluster, but the new, managed control planes are not active at deployment time.

  1. Save the Istio version information as environment variables.

    • For REPO, use a Gloo Istio repo key that you can get by logging in to the Support Center and reviewing the Istio images built by Solo.io support article. For more information, see Get the Gloo Istio version that you want to use.
    • For ISTIO_IMAGE, save the version that you downloaded, such as 1.18.2, and append the solo tag, which is required to use many enterprise features. You can optionally append other Gloo Istio tags, as described in About Gloo Istio. If you downloaded a different version than the following, make sure to specify that version instead.
    • For REVISION, specify any name or integer. For example, you can specify the version, such as 1-18-2. If you currently use a revision for your existing Istio installations, be sure to use a different revision than the existing one.
    export REPO=<repo-key>
    export ISTIO_IMAGE=1.18.2-solo
    export REVISION=1-18-2
    
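As an optional sanity check (not part of the official steps), you can verify that all of the variables are exported before you continue. The values in this sketch are placeholders; use your real repo key and cluster name.

```shell
# Placeholder values for illustration only
export REPO=example-repo-key
export ISTIO_IMAGE=1.18.2-solo
export REVISION=1-18-2
export CLUSTER_NAME=cluster1

# Fail fast if any required variable is empty
for v in REPO ISTIO_IMAGE REVISION CLUSTER_NAME; do
  [ -n "$(printenv "$v")" ] || { echo "error: $v is not set" >&2; exit 1; }
done
echo "all variables set"
```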
  2. Prepare an IstioLifecycleManager resource to manage istiod control planes.

    1. Download the gm-istiod.yaml example file.
      curl -L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/takeover/gm-istiod.yaml > gm-istiod.yaml
      
      For OpenShift clusters, download the OpenShift example file instead.
      curl -L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/takeover/gm-istiod-openshift.yaml > gm-istiod.yaml
      
    2. Update the example file with the environment variables that you previously set for $REPO, $ISTIO_IMAGE, $REVISION, and $CLUSTER_NAME. Save the updated file as gm-istiod-values.yaml.
      • Tip: Instead of updating the file manually, try running a terminal command to substitute values, such as the following command.
        envsubst < gm-istiod.yaml > gm-istiod-values.yaml
        open gm-istiod-values.yaml
        
    3. Check the settings in the IstioLifecycleManager resource. You can further edit the file to provide your own details.
      • Clusters: Specify the registered cluster names in the clusters section. For multicluster setups, you must edit the file to specify the name of your workload clusters. For each cluster, defaultRevision: false ensures that the Istio operator spec for the control plane installation is NOT active in the cluster.
      • Root namespace: If you do not specify a namespace, the root namespace for the installed Istio resources in workload clusters is set to istio-system. If the istio-system namespace does not already exist, it is created for you.
      • Trust domain: By default, the trustDomain value is automatically set by the installer to the name of each workload cluster. To override the trustDomain for each cluster, you can instead specify the override value in the trustDomain field, and include the value in the list of cluster names. For example, if you specify trustDomain: cluster1-trust-override in the operator spec, you then specify the cluster name (cluster1) and the trust domain (cluster1-trust-override) in the list of cluster names. Additionally, because Gloo requires multiple trust domains for east-west routing, the PILOT_SKIP_VALIDATE_TRUST_DOMAIN field is set to "true" by default.
    4. Apply the IstioLifecycleManager resource to your management cluster.
      kubectl apply -f gm-istiod-values.yaml
      
  3. Prepare a GatewayLifecycleManager custom resource to manage the ingress gateway proxies.

    1. Download the gm-ingress-gateway.yaml example file.
      curl -L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/gm-ingress-gateway.yaml > gm-ingress-gateway.yaml
      
    2. Update the example file with the environment variables that you previously set for $REPO, $ISTIO_IMAGE, $REVISION, and $CLUSTER_NAME. Save the updated file as gm-ingress-gateway-values.yaml.
      • Tip: Instead of updating the file manually, try running a terminal command to substitute values, such as the following command.
        envsubst < gm-ingress-gateway.yaml > gm-ingress-gateway-values.yaml
        open gm-ingress-gateway-values.yaml
        
    3. Check the settings in the GatewayLifecycleManager resource. You can further edit the file to provide your own details.
      • Clusters: Specify the registered cluster names in the clusters section. For multicluster setups, you must edit the file to specify the name of your workload clusters. For each cluster, activeGateway: true ensures that the Istio operator spec for the gateway is deployed and actively used by the istiod control plane.
      • Gateway name and namespace: The default name for the gateway is set to istio-ingressgateway, and the default namespace for the gateway is set to gloo-mesh-gateways. If the gloo-mesh-gateways namespace does not already exist, it is created in each workload cluster for you. Note: To prevent conflicts, be sure to choose a different name or namespace than your existing gateway. For example, if your existing gateway is named istio-ingressgateway and deployed in a namespace such as istio-gateways, you can still name the new gateway istio-ingressgateway, but you must deploy it in a different namespace, such as gloo-mesh-gateways.
    4. Apply the GatewayLifecycleManager resource to your management cluster.
      kubectl apply -f gm-ingress-gateway-values.yaml
      
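For orientation, the downloaded resource has roughly the following shape. This is a sketch, not the exact file contents: the revision, cluster name, and hub values are illustrative, and the downloaded gm-ingress-gateway.yaml file remains the source of truth for field names and values in your Gloo version.

```yaml
apiVersion: admin.gloo.solo.io/v2
kind: GatewayLifecycleManager
metadata:
  name: istio-ingressgateway
  namespace: gloo-mesh
spec:
  installations:
    - gatewayRevision: 1-18-2
      clusters:
        - name: cluster1
          # Unlike the istiod control plane, the new gateway is active at deployment
          activeGateway: true
      istioOperatorSpec:
        profile: empty
        hub: example-repo-key    # illustrative; use your Solo repo key
        tag: 1.18.2-solo
        components:
          ingressGateways:
            - name: istio-ingressgateway
              namespace: gloo-mesh-gateways
              enabled: true
              label:
                istio: ingressgateway
```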
  4. Optional: If you have a multicluster setup, prepare a GatewayLifecycleManager custom resource to manage the east-west gateways.

    1. Download the gm-ew-gateway.yaml example file.
      curl -L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/gm-ew-gateway.yaml > gm-ew-gateway.yaml
      
    2. Update the example file with the environment variables that you previously set for $REPO, $ISTIO_IMAGE, and $REVISION, and replace the $REMOTE_CLUSTER1 and $REMOTE_CLUSTER2 variables with your workload cluster names. Save the updated file as gm-ew-gateway-values.yaml.
      • Tip: Instead of updating the file manually, try running a terminal command to substitute values, such as the following command.
        envsubst < gm-ew-gateway.yaml > gm-ew-gateway-values.yaml
        open gm-ew-gateway-values.yaml
        
    3. Check the settings in the GatewayLifecycleManager resource. You can further edit the file to provide your own details.
      • Clusters: Specify the registered cluster names in the clusters section. For each cluster, activeGateway: true ensures that the Istio operator spec for the gateway is deployed and actively used by the istiod control plane.
      • Gateway name and namespace: The default name for the gateway is set to istio-eastwestgateway, and the default namespace for the gateway is set to gloo-mesh-gateways. If the gloo-mesh-gateways namespace does not already exist, it is created in each workload cluster for you. Note: To prevent conflicts, be sure to choose a different name or namespace than your existing gateway. For example, if your existing gateway is named istio-eastwestgateway and deployed in a namespace such as istio-gateways, you can still name the new gateway istio-eastwestgateway, but you must deploy it in a different namespace, such as gloo-mesh-gateways.
    4. Apply the GatewayLifecycleManager resource to your management cluster.
      kubectl apply -f gm-ew-gateway-values.yaml
      

Verify and test the new managed installations

Verify that the new control plane and gateways are deployed to your workload clusters. Then test them by deploying workloads and generating traffic through the new gateways to those workloads.

  1. In each workload cluster, verify that the namespaces for your managed Istio installations are created.

    1. Operator revision namespace:
      kubectl get all -n gm-iop-1-18-2
      

      Example output:

      NAME                                       READY   STATUS    RESTARTS   AGE
      pod/istio-operator-1-18-2-678fd95cc6-ltbvl   1/1     Running   0          4m12s
      
      NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
      service/istio-operator-1-18-2   ClusterIP   10.204.15.247   <none>        8383/TCP   4m12s
      
      NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
      deployment.apps/istio-operator-1-18-2   1/1     1            1           4m12s
      
      NAME                                             DESIRED   CURRENT   READY   AGE
      replicaset.apps/istio-operator-1-18-2-678fd95cc6   1         1         1       4m12s
      
    2. Istiod namespace: Note that your existing Istio control plane pods might be deployed to this namespace too.
      kubectl get all -n istio-system
      

      Example output:

      NAME                              READY   STATUS    RESTARTS   AGE
      pod/istiod-1-18-2-b65676555-g2vmr   1/1     Running   0          8m57s
      
      NAME                  TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                                 AGE
      service/istiod-1-18-2   ClusterIP   10.204.6.56   <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP   8m56s
      
      NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
      deployment.apps/istiod-1-18-2   1/1     1            1           8m57s
      
      NAME                                    DESIRED   CURRENT   READY   AGE
      replicaset.apps/istiod-1-18-2-b65676555   1         1         1       8m57s
      
      NAME                                              REFERENCE                     TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
      horizontalpodautoscaler.autoscaling/istiod-1-18-2   Deployment/istiod-1-18-2   1%/80%    1         5         1          8m58s
      
    3. Gateway namespace: Note that the gateways might take a few minutes to be created.
      kubectl get all -n gloo-mesh-gateways
      

      Example output:

      NAME                                              READY   STATUS    RESTARTS   AGE
      pod/istio-ingressgateway-1-18-2-77d5f76bc8-j6qkp    1/1     Running   0          2m18s
      
      NAME                                      TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                      AGE
      service/istio-ingressgateway              LoadBalancer   10.44.4.140    34.150.235.221   15021:31321/TCP,80:32525/TCP,443:31826/TCP   2m16s
      
      NAME                                         READY   UP-TO-DATE   AVAILABLE   AGE
      deployment.apps/istio-ingressgateway-1-18-2    1/1     1            1           2m18s
      
      NAME                                                    DESIRED   CURRENT   READY   AGE
      replicaset.apps/istio-ingressgateway-1-18-2-77d5f76bc8    1         1         1       2m18s
      
      NAME                                                             REFERENCE                                    TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
      horizontalpodautoscaler.autoscaling/istio-ingressgateway-1-18-2    Deployment/istio-ingressgateway-1-18-2    4%/80%          1         5         1          2m19s
      
  2. Verify that your existing gateways still point to the old revision, and only the new gateway points to the new revision.

    istioctl proxy-status
    

    In this example output, the existing ingress gateway in cluster1 still points to the existing Istio installation that uses version 1.17.4. Only the new ingress gateway points to the managed Istio installation that uses version 1.18.2-solo and revision 1-18-2.

    NAME                                                             CLUSTER    ...  ISTIOD                           VERSION
    istio-ingressgateway-575b697f9-49v4c.istio-gateways              cluster1  ...  istiod-66d54b865-6b6zt           1.17.4
    istio-ingressgateway-1-18-2-575b697f9-49v4c.gloo-mesh-gateways     cluster1  ...  istiod-1-18-2-5b7b9df586-95sq6     1.18.2-solo
    
  3. Deploy test workloads to test the new revision.

    1. Create the petstore namespace in each workload cluster.
      kubectl create ns petstore
      
    2. In the first workload cluster, deploy the petstore app.
      kubectl apply -n petstore -f - <<EOF
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        labels:
          app: petstore
        name: petstore
        namespace: petstore
      spec:
        selector:
          matchLabels:
            app: petstore
        replicas: 1
        template:
          metadata:
            labels:
              app: petstore
          spec:
            containers:
            - image: openapitools/openapi-petstore
              name: petstore
              env:
                - name: DISABLE_OAUTH
                  value: "1"
                - name: DISABLE_API_KEY
                  value: "1"
              ports:
              - containerPort: 8080
                name: http
      

      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: petstore
        namespace: petstore
        labels:
          service: petstore
      spec:
        ports:
        - port: 8080
          protocol: TCP
        selector:
          app: petstore
      EOF
      
  4. Test traffic to your workloads by using the ingress gateway for the new revision.

    1. Save the external address of the ingress gateway for the new revision. Depending on your cloud provider, the load balancer is assigned an IP address or a hostname.
      For clouds that assign an IP address, run:
      export INGRESS_GW_IP=$(kubectl get svc -n gloo-mesh-gateways istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
      echo $INGRESS_GW_IP
      
      For clouds, such as AWS, that assign a hostname, run:
      export INGRESS_GW_IP=$(kubectl get svc -n gloo-mesh-gateways istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
      echo $INGRESS_GW_IP
      
    2. Apply a virtual gateway to the ingress gateway for the new revision.
      kubectl apply -f- <<EOF
      apiVersion: networking.gloo.solo.io/v2
      kind: VirtualGateway
      metadata:
        name: istio-ingressgateway-$REVISION
        namespace: petstore
      spec:
        listeners:
        - http: {}
          port:
            number: 80
        workloads:
        - selector:
            labels:
              istio: ingressgateway
            namespace: gloo-mesh-gateways
      EOF
      
    3. Apply a route table to allow requests to the /api/pets path of the petstore app.
      kubectl apply -f- <<EOF
      apiVersion: networking.gloo.solo.io/v2
      kind: RouteTable
      metadata:
        name: petstore-routes
        namespace: petstore
      spec:
        hosts:
          - '*'
        virtualGateways:
          - name: istio-ingressgateway-$REVISION
            namespace: petstore
        http:
          - name: petstore
            matchers:
            - uri:
                prefix: /api/pets
            forwardTo:
              destinations:
                - ref:
                    name: petstore
                    namespace: petstore
                    cluster: $CLUSTER_NAME
                  port:
                    number: 8080
      EOF
      
    4. Test the ingress gateway by sending a request to the petstore service.
      curl http://$INGRESS_GW_IP:80/api/pets
      
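In step 1 of this procedure, which export command succeeds depends on whether your load balancer exposes an IP address or a hostname. You can combine the two variants into one fallback: take the IP if it is set, otherwise use the hostname. The pattern is shown here with simulated, hypothetical values in place of live kubectl output:

```shell
# Simulated jsonpath results (hypothetical; on AWS, .ip is empty and .hostname is set)
ip=""
hostname="abc123.elb.example.com"

# Use the IP when present, otherwise fall back to the hostname
INGRESS_GW_IP="${ip:-$hostname}"
echo "$INGRESS_GW_IP"
# → abc123.elb.example.com
```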

Activate the managed installations

After you finish testing, change the new control planes to be active. Then, update service selectors or internal/external DNS entries to point to the new gateways. Optionally, uninstall the old Istio installations.

  1. Switch to the new istiod control plane revision by changing defaultRevision to true.

    kubectl edit IstioLifecycleManager -n gloo-mesh istiod-control-plane
    

    Example:

    apiVersion: admin.gloo.solo.io/v2
    kind: IstioLifecycleManager
    metadata:
      name: istiod-control-plane
      namespace: gloo-mesh
    spec:
      installations:
        - revision: 1-18-2
          clusters:
          - name: cluster1
            # Set this field to TRUE
            defaultRevision: true
          - name: cluster2
            # Set this field to TRUE
            defaultRevision: true
          istioOperatorSpec:
            profile: minimal
            ...
    
  2. If you use your own load balancer services for gateways, update the service selectors to point to the gateways for the new revision. Alternatively, if you use the load balancer services that are deployed by default, update any internal or external DNS entries to point to the new gateway IP addresses.
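For example, if you manage your own LoadBalancer service, the switch can be as small as a selector change that targets the new gateway pods. The following is a hypothetical sketch: the Service name, ports, and label values are illustrative, so verify the actual labels on your new gateway pods, such as with kubectl get pods -n gloo-mesh-gateways --show-labels.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-gateway-lb            # hypothetical, user-managed Service
  namespace: gloo-mesh-gateways
spec:
  type: LoadBalancer
  # Point the selector at the new revision's gateway pods
  selector:
    istio: ingressgateway
    istio.io/rev: 1-18-2         # illustrative revision label; confirm on your pods
  ports:
    - name: http2
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8443
```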

  3. Uninstall the old Istio installations from each workload cluster. The uninstallation process varies depending on your original installation method. For example, if you used the Istio Helm charts to deploy the istiod control plane and gateways, the uninstallation process might follow these general steps.

    1. Find the name of your old Istio Helm chart release in the namespace of your existing gateway, such as istio-gateways.
      helm ls -n istio-gateways
      
    2. Delete the Helm release for the old ingress gateway, such as istio-ingressgateway.
      helm delete istio-ingressgateway -n istio-gateways
      
    3. Find the name of your old Istio Helm chart release in the istio-system namespace, such as istiod.
      helm ls -n istio-system
      
    4. Delete the Helm release for the old istiod control plane.
      helm delete istiod -n istio-system
      
  4. Optional: Delete any workloads that you used for testing, such as the petstore apps.

    kubectl delete ns petstore
    

Next steps

When it's time to upgrade Istio, you can use Gloo Gateway to upgrade Gloo-managed gateways.