Switch from unmanaged to managed Istio installations

Use the Istio lifecycle manager to switch from your existing, unmanaged Istio installations to Gloo-managed Istio installations. The takeover process follows these general steps:

  1. Create IstioLifecycleManager and GatewayLifecycleManager resources in the management cluster that use a different revision than the existing Istio installations in your workload clusters. The istiod control planes and Istio gateways for the new installation are deployed to each workload cluster, but the new, managed control planes are not active at deployment time.
  2. Test the new control plane and gateways by deploying workloads with a label for the new revision and generating traffic to those workloads.
  3. Change the new control planes to be active, and roll out a restart to data plane workloads so that they are managed by the new control planes.
  4. Update service selectors or update internal/external DNS entries to point to the new gateways.
  5. Uninstall the old Istio installations.
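Before you choose a revision for the new installation, it can help to note the revision of your existing installations. The following commands are a sketch, assuming that your existing istiod deployments run in the istio-system namespace:

```shell
# List the istiod deployments and their revision labels in each workload
# cluster so that you can pick a new revision that does not collide.
kubectl get deployments -n istio-system -L istio.io/rev --context "$REMOTE_CONTEXT1"
kubectl get deployments -n istio-system -L istio.io/rev --context "$REMOTE_CONTEXT2"
```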

Considerations

Before you follow this takeover process, review the following important considerations.

Before you begin

  1. Install Gloo Mesh Enterprise without any Istio installations.

  2. Set the names of your clusters from your infrastructure provider. If your clusters have different names, specify those names instead.

    export REMOTE_CLUSTER1=<cluster1>
    export REMOTE_CLUSTER2=<cluster2>
    ...
    
  3. Save the kubeconfig contexts for your clusters. Run kubectl config get-contexts, look for your cluster in the CLUSTER column, and get the context name in the NAME column. Note: Do not use context names with underscores. The context name is used as a SAN specification in the generated certificate that connects workload clusters to the management cluster, and underscores in SAN are not FQDN compliant. You can rename a context by running kubectl config rename-context "<oldcontext>" <newcontext>.
    export MGMT_CONTEXT=<management-cluster-context>
    export REMOTE_CONTEXT1=<remote-cluster1-context>
    export REMOTE_CONTEXT2=<remote-cluster2-context>
    ...
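Because the context names end up in the certificate SAN, a quick check for underscores can save a failed registration later. This is a minimal sketch over the variables you just exported:

```shell
# Warn about any saved context name that contains an underscore, which is
# not FQDN-compliant for the SAN in the generated certificate.
for ctx in "$MGMT_CONTEXT" "$REMOTE_CONTEXT1" "$REMOTE_CONTEXT2"; do
  case "$ctx" in
    *_*) echo "rename context '$ctx': underscores are not FQDN compliant" ;;
    *)   echo "context '$ctx' is OK" ;;
  esac
done
```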
    
  4. To use a Solo distribution of Istio, you must have a Solo account. Make sure that you can log in to the Support Center. If you cannot, contact your account administrator. Then, get the repo key for the Istio version that you want to install from the Istio images built by Solo.io support article.

Deploy the managed Istio installations

Create IstioLifecycleManager and GatewayLifecycleManager resources in the management cluster that use a different revision than the existing Istio installations in your workload clusters. The istiod control planes and Istio gateways for the new installation are deployed to each workload cluster, but the new, managed control planes are not active at deployment time.

  1. Save the Istio version information as environment variables.

    • For REPO, use a Solo repo key that you can get by logging in to the Support Center and reviewing the Istio images built by Solo.io support article. For more information, see Get the Solo distribution of Istio that you want to use.
    • For ISTIO_IMAGE, save the version that you downloaded, such as 1.20.2, and append the solo tag, which is required to use many enterprise features. You can optionally append other tags for the Solo distribution of Istio, as described in About the Solo distribution of Istio. If you downloaded a different version than the following, make sure to specify that version instead.
    • For REVISION, specify any name or integer. For example, you can specify the version, such as 1-20. If you currently use a revision for your existing Istio installations, be sure to use a different revision than the existing one.
    export REPO=<repo-key>
    export ISTIO_IMAGE=1.20.2-solo
    export REVISION=1-20
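The revision becomes part of resource names and labels, such as istiod-1-20, so it must be a valid Kubernetes label value. A minimal sanity check, assuming you restrict revisions to lowercase letters, digits, and hyphens:

```shell
# Verify that the revision name uses only lowercase letters, digits, and
# hyphens, which keeps derived resource names and labels valid.
REVISION=1-20   # the value that you exported in the previous step
case "$REVISION" in
  *[!a-z0-9-]*) echo "invalid revision name: $REVISION" ;;
  *)            echo "revision name OK: $REVISION" ;;
esac
```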
    
  2. Prepare an IstioLifecycleManager resource to manage istiod control planes.

    1. Download the gm-istiod.yaml example file. Note: If your workload clusters run OpenShift, skip this download and complete the following OpenShift steps instead.
      curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/takeover/gm-istiod.yaml > gm-istiod.yaml
      
      1. Elevate the permissions of the following service accounts that will be created. These permissions allow the Istio sidecars to use a user ID that is normally restricted by OpenShift. For more information, see the Istio on OpenShift documentation.
        oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-system --context $REMOTE_CONTEXT1
        oc adm policy add-scc-to-group anyuid system:serviceaccounts:gloo-mesh-gateways --context $REMOTE_CONTEXT1
        # Update revision as needed
        oc adm policy add-scc-to-group anyuid system:serviceaccounts:gm-iop-1-20 --context $REMOTE_CONTEXT1
        
        oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-system --context $REMOTE_CONTEXT2
        oc adm policy add-scc-to-group anyuid system:serviceaccounts:gloo-mesh-gateways --context $REMOTE_CONTEXT2
        # Update revision as needed
        oc adm policy add-scc-to-group anyuid system:serviceaccounts:gm-iop-1-20 --context $REMOTE_CONTEXT2
        
      2. Create the gloo-mesh-gateways project, and create a NetworkAttachmentDefinition custom resource for the project.
        kubectl create ns gloo-mesh-gateways --context $REMOTE_CONTEXT1
        cat <<EOF | oc --context $REMOTE_CONTEXT1 -n gloo-mesh-gateways create -f -
        apiVersion: "k8s.cni.cncf.io/v1"
        kind: NetworkAttachmentDefinition
        metadata:
          name: istio-cni
        EOF
        
        kubectl create ns gloo-mesh-gateways --context $REMOTE_CONTEXT2
        cat <<EOF | oc --context $REMOTE_CONTEXT2 -n gloo-mesh-gateways create -f -
        apiVersion: "k8s.cni.cncf.io/v1"
        kind: NetworkAttachmentDefinition
        metadata:
          name: istio-cni
        EOF
        
      3. Download the gm-istiod-openshift.yaml example file, and save it as gm-istiod.yaml.
        curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/takeover/gm-istiod-openshift.yaml > gm-istiod.yaml
        
    2. Update the example file with the environment variables that you previously set for $REPO, $ISTIO_IMAGE, $REVISION, $REMOTE_CLUSTER1, and $REMOTE_CLUSTER2. Save the updated file as gm-istiod-values.yaml.
      • Tip: Instead of updating the file manually, try running a terminal command to substitute values, such as the following command.
        envsubst < gm-istiod.yaml > gm-istiod-values.yaml
        open gm-istiod-values.yaml
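As a quick local illustration of what the substitution does (a sketch; the /tmp path and demo content are arbitrary):

```shell
# envsubst replaces ${VAR} references in the template with the values
# currently exported in your shell. Single quotes keep the template literal.
export REVISION=1-20
printf 'revision: ${REVISION}\n' > /tmp/envsubst-demo.yaml
envsubst < /tmp/envsubst-demo.yaml
# prints: revision: 1-20
```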
        
    3. Check the settings in the IstioLifecycleManager resource. You can further edit the file to provide your own details.
      • Clusters: Specify the registered cluster names in the clusters section. For single-cluster setups, you must edit the file to specify only the name of your cluster (value of $CLUSTER_NAME). For each cluster, defaultRevision: false ensures that the Istio operator spec for the control plane installation is NOT active in the cluster.
      • Root namespace: If you do not specify a namespace, the root namespace for the installed Istio resources in workload clusters is set to istio-system. If the istio-system namespace does not already exist, it is created for you.
      • Trust domain: By default, the trustDomain value is automatically set by the installer to the name of each workload cluster. To override the trustDomain for each cluster, you can instead specify the override value in the trustDomain field, and include the value in the list of cluster names. For example, if you specify trustDomain: cluster1-trust-override in the operator spec, you then specify the cluster name (cluster1) and the trust domain (cluster1-trust-override) in the list of cluster names. Additionally, because Gloo requires multiple trust domains for east-west routing, the PILOT_SKIP_VALIDATE_TRUST_DOMAIN field is set to "true" by default.
    4. Apply the IstioLifecycleManager resource to your management cluster.
      kubectl apply -f gm-istiod-values.yaml --context $MGMT_CONTEXT
      
  3. Optional: If you have a multicluster setup, prepare a GatewayLifecycleManager custom resource to manage the east-west gateways.

    1. Download the gm-ew-gateway.yaml example file.
      curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/gm-ew-gateway.yaml > gm-ew-gateway.yaml
      
    2. Update the example file with the environment variables that you previously set for $REPO, $ISTIO_IMAGE, $REVISION, $REMOTE_CLUSTER1, and $REMOTE_CLUSTER2. Save the updated file as gm-ew-gateway-values.yaml.
      • Tip: Instead of updating the file manually, try running a terminal command to substitute values, such as the following command.
        envsubst < gm-ew-gateway.yaml > gm-ew-gateway-values.yaml
        open gm-ew-gateway-values.yaml
        
    3. Check the settings in the GatewayLifecycleManager resource. You can further edit the file to provide your own details.
      • Clusters: Specify the registered cluster names in the clusters section. For each cluster, activeGateway: true ensures that the Istio operator spec for the gateway is deployed and actively used by the istiod control plane.
      • Gateway name and namespace: The default name for the gateway is set to istio-eastwestgateway, and the default namespace for the gateway is set to gloo-mesh-gateways. If the gloo-mesh-gateways namespace does not already exist, it is created in each workload cluster for you. Note: To prevent conflicts, be sure to choose a different name or namespace than your existing gateway. For example, if your existing gateway is named istio-eastwestgateway and deployed in a namespace such as istio-gateways, you can still name the new gateway istio-eastwestgateway, but you must deploy it in a different namespace, such as gloo-mesh-gateways.
    4. Apply the GatewayLifecycleManager resource to your management cluster.
      kubectl apply -f gm-ew-gateway-values.yaml --context $MGMT_CONTEXT
      

Verify the new managed installations

Verify that the new control plane and gateways are deployed to your workload clusters.

  1. In each workload cluster, verify that the namespaces for your managed Istio installations are created.

    kubectl get ns --context $REMOTE_CONTEXT1
    

    For example, the gm-iop-1-20 and gloo-mesh-gateways namespaces are created alongside the namespaces you might already use for your existing Istio installations (such as istio-system and istio-gateways):

    NAME               STATUS   AGE
    default            Active   56m
    gloo-mesh          Active   36m
    gm-iop-1-20        Active   91s
    gloo-mesh-gateways Active   90s
    istio-gateways     Active   50m
    istio-system       Active   50m
    kube-node-lease    Active   57m
    kube-public        Active   57m
    kube-system        Active   57m
    
  2. In each namespace, verify that the Istio resources for the new revision are successfully installed.

    kubectl get all -n gm-iop-1-20 --context $REMOTE_CONTEXT1
    

    Example output:

    NAME                                        READY   STATUS    RESTARTS   AGE
    pod/istio-operator-1-20-678fd95cc6-ltbvl     1/1     Running   0          4m12s
    
    NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    service/istio-operator-1-20     ClusterIP   10.204.15.247   <none>        8383/TCP   4m12s
    
    NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/istio-operator-1-20     1/1     1            1           4m12s
    
    NAME                                               DESIRED   CURRENT   READY   AGE
    replicaset.apps/istio-operator-1-20-678fd95cc6     1         1         1       4m12s
    
    kubectl get all -n istio-system --context $REMOTE_CONTEXT1
    

    Example output: Note that your existing Istio control plane pods might be deployed to this namespace too.

    NAME                                READY   STATUS    RESTARTS   AGE
    pod/istiod-1-20-b65676555-g2vmr     1/1     Running   0          8m57s
    
    NAME                    TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                                 AGE
    service/istiod-1-20     ClusterIP   10.204.6.56   <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP   8m56s
    
    NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/istiod-1-20     1/1     1            1           8m57s
    
    NAME                                      DESIRED   CURRENT   READY   AGE
    replicaset.apps/istiod-1-20-b65676555     1         1         1       8m57s
    
    NAME                                                REFERENCE                  TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
    horizontalpodautoscaler.autoscaling/istiod-1-20     Deployment/istiod-1-20     1%/80%    1         5         1          8m58s
    
    kubectl get all -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
    

    Example output: Your output might vary depending on which gateways you installed. Note that the gateways might take a few minutes to be created.

    NAME                                                READY   STATUS    RESTARTS   AGE
    pod/istio-eastwestgateway-1-20-66f464ff44-qlhfk     1/1     Running   0          2m6s
    
    NAME                             TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                      AGE
    service/istio-eastwestgateway    LoadBalancer   10.204.4.172   34.86.225.164    15021:30889/TCP,15443:32489/TCP              2m5s
    
    NAME                                           READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/istio-eastwestgateway-1-20     1/1     1            1           2m6s
    
    NAME                                                      DESIRED   CURRENT   READY   AGE
    replicaset.apps/istio-eastwestgateway-1-20-66f464ff44     1         1         1       2m6s
    
    NAME                                                               REFERENCE                                 TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
    horizontalpodautoscaler.autoscaling/istio-eastwestgateway-1-20     Deployment/istio-eastwestgateway-1-20     <unknown>/80%   1         5         0          2m7s
    

Test the new managed installations

Test the new Istio installation by deploying the Istio sample app, Bookinfo, and updating its sidecars from the old revision to the new.

  1. Create the bookinfo namespace in both workload clusters.

    kubectl create ns bookinfo --context $REMOTE_CONTEXT1
    kubectl create ns bookinfo --context $REMOTE_CONTEXT2
    
  2. Label the namespaces for Istio injection with the old revision so that the services are managed by the old revision's control plane.

    kubectl label ns bookinfo istio.io/rev=<old_revision> --context $REMOTE_CONTEXT1
    kubectl label ns bookinfo istio.io/rev=<old_revision> --context $REMOTE_CONTEXT2
    
    If you did not previously use revision labels for your apps, you can instead run kubectl label ns bookinfo istio-injection=enabled --context $REMOTE_CONTEXT1 and kubectl label ns bookinfo istio-injection=enabled --context $REMOTE_CONTEXT2.
  3. Deploy Bookinfo with the details, productpage, ratings, reviews-v1, and reviews-v2 services in cluster1.

    # deploy bookinfo application components for all versions less than v3
    kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.20.2/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app,version notin (v3)' --context $REMOTE_CONTEXT1
    # deploy an updated product page with extra container utilities such as 'curl' and 'netcat'
    kubectl -n bookinfo apply -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/policy-demo/productpage-with-curl.yaml --context $REMOTE_CONTEXT1
    # deploy all bookinfo service accounts
    kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.20.2/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account' --context $REMOTE_CONTEXT1
    
  4. Deploy Bookinfo with the ratings and reviews-v3 services in cluster2.

    # deploy reviews and ratings services
    kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.20.2/samples/bookinfo/platform/kube/bookinfo.yaml -l 'service in (reviews)' --context $REMOTE_CONTEXT2
    # deploy reviews-v3
    kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.20.2/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app in (reviews),version in (v3)' --context $REMOTE_CONTEXT2
    # deploy ratings
    kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.20.2/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app in (ratings)' --context $REMOTE_CONTEXT2
    # deploy reviews and ratings service accounts
    kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.20.2/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account in (reviews, ratings)' --context $REMOTE_CONTEXT2
    
  5. Verify that the Bookinfo app is deployed successfully.

    kubectl get pods -n bookinfo --context $REMOTE_CONTEXT1
    kubectl get pods -n bookinfo --context $REMOTE_CONTEXT2
    
  6. Verify that your workloads and existing gateways still point to the old revision, and only the new gateway points to the new revision.

    istioctl proxy-status --context $REMOTE_CONTEXT1
    istioctl proxy-status --context $REMOTE_CONTEXT2
    

    In this example output, the Bookinfo apps and existing east-west gateway in cluster1 still point to the existing Istio installation that uses version 1.19.5. Only the new east-west gateway points to the managed Istio installation that uses version 1.20.2-solo and revision 1-20.

    NAME                                                              CLUSTER   ...  ISTIOD                           VERSION
    details-v1-6758dd9d8d-rh4db.bookinfo                              cluster1  ...  istiod-66d54b865-6b6zt           1.19.5
    istio-eastwestgateway-575b697f9-49v4c.istio-gateways              cluster1  ...  istiod-66d54b865-6b6zt           1.19.5
    istio-eastwestgateway-1-20-575b697f9-49v4c.gloo-mesh-gateways     cluster1  ...  istiod-1-20-5b7b9df586-95sq6     1.20.2-solo
    productpage-v1-b4cf67f67-s5lsh.bookinfo                           cluster1  ...  istiod-66d54b865-6b6zt           1.19.5
    ratings-v1-f849dc6d-wqdc8.bookinfo                                cluster1  ...  istiod-66d54b865-6b6zt           1.19.5
    reviews-v1-74fb8fdbd8-z8bzc.bookinfo                              cluster1  ...  istiod-66d54b865-6b6zt           1.19.5
    reviews-v2-58d564d4db-g8jzr.bookinfo                              cluster1  ...  istiod-66d54b865-6b6zt           1.19.5
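To spot-check at scale, you can count how many proxies still report the old control-plane version. This is a sketch; the version string 1.19.5 and the version appearing in the last column are taken from the example output above:

```shell
# Count proxies whose reported Istio version (last column) is still the old
# one; anything nonzero means workloads have not yet moved to the new revision.
istioctl proxy-status --context "$REMOTE_CONTEXT1" \
  | awk 'NR > 1 && $NF == "1.19.5" { n++ } END { print n+0, "proxies still on the old control plane" }'
```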
    
  7. Generate traffic through the old gateways to Bookinfo.

    1. Create a Gloo root trust policy to ensure that services in each workload cluster can communicate securely. The root trust policy sets up the domain and certificates to establish a shared trust model across multiple clusters in your service mesh.

      kubectl apply --context $MGMT_CONTEXT -f - <<EOF
      apiVersion: admin.gloo.solo.io/v2
      kind: RootTrustPolicy
      metadata:
        name: root-trust
        namespace: gloo-mesh
      spec:
        config:
          autoRestartPods: true
          mgmtServerCa:
            generated: {}
      EOF
      
    2. Create a Gloo virtual destination for the reviews app.

      kubectl apply --context $REMOTE_CONTEXT1 -n bookinfo -f- <<EOF
      apiVersion: networking.gloo.solo.io/v2
      kind: VirtualDestination
      metadata:
        name: reviews-vd
        namespace: bookinfo
      spec:
        hosts:
        # Arbitrary, internal-only hostname assigned to the endpoint
        - reviews.mesh.internal.com
        ports:
        - number: 8080
          protocol: HTTP
          targetPort:
            number: 9080
        services:
          - labels:
              app: reviews
      EOF
      
    3. Create a curl pod in the second cluster.

      kubectl run -it -n bookinfo --context $REMOTE_CONTEXT2 curl \
        --image=curlimages/curl:7.73.0 --rm -- sh
      
    4. Send a request to the reviews app's virtual destination hostname.

      curl http://reviews.mesh.internal.com/ -v
      

      Example output:

      *   Trying 45.33.2.79:80...
      * Connected to reviews.mesh.internal.com (45.33.2.79) port 80 (#0)
      > GET / HTTP/1.1
      > Host: reviews.mesh.internal.com
      > User-Agent: curl/7.73.0-DEV
      > Accept: */*
      > 
      * Mark bundle as not supporting multiuse
      < HTTP/1.1 200 OK
      < server: envoy
      < date: Fri, 28 Oct 2022 20:11:00 GMT
      < content-type: application/octet-stream,text/html
      < content-length: 134
      < x-envoy-upstream-service-time: 68
      < 
      * Connection #0 to host reviews.mesh.internal.com left intact
      <html><head><title>reviews.mesh.internal.com</title></head><body><h1>reviews.mesh.internal.com</h1><p>Coming soon.</p></body></html>
      
    5. Exit the temporary pod. The pod deletes itself.

      exit
      
  8. Change the label on the bookinfo namespace to use the new revision.

    kubectl label ns bookinfo istio.io/rev=$REVISION --overwrite --context $REMOTE_CONTEXT1
    kubectl label ns bookinfo istio.io/rev=$REVISION --overwrite --context $REMOTE_CONTEXT2
    
    If you did not previously use revision labels for your apps, you can instead run kubectl label ns bookinfo istio-injection- --context $REMOTE_CONTEXT1 and kubectl label ns bookinfo istio.io/rev=$REVISION --context $REMOTE_CONTEXT1, and then repeat the commands for $REMOTE_CONTEXT2.
  9. Update Bookinfo by rolling out restarts to each of the microservices. The Istio sidecars for each microservice are updated to use the new Istio version. Make sure that you only restart one microservice at a time. For example, in the following commands, 20 seconds elapse between each restart to ensure that the pods have time to start running.

    kubectl rollout restart deployment -n bookinfo details-v1 --context $REMOTE_CONTEXT1
    sleep 20s
    kubectl rollout restart deployment -n bookinfo ratings-v1 --context $REMOTE_CONTEXT1
    sleep 20s
    kubectl rollout restart deployment -n bookinfo productpage-v1 --context $REMOTE_CONTEXT1
    sleep 20s
    kubectl rollout restart deployment -n bookinfo reviews-v1 --context $REMOTE_CONTEXT1
    sleep 20s
    kubectl rollout restart deployment -n bookinfo reviews-v2 --context $REMOTE_CONTEXT1
    sleep 20s
    kubectl rollout restart deployment -n bookinfo reviews-v3 --context $REMOTE_CONTEXT2
    sleep 20s
    kubectl rollout restart deployment -n bookinfo ratings-v1 --context $REMOTE_CONTEXT2
    sleep 20s
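Instead of fixed sleeps, you can wait for each rollout to finish before restarting the next microservice. This is a sketch that uses the same deployment names as the commands above:

```shell
# Restart one deployment at a time, waiting for each rollout to complete.
for d in details-v1 ratings-v1 productpage-v1 reviews-v1 reviews-v2; do
  kubectl rollout restart deployment -n bookinfo "$d" --context "$REMOTE_CONTEXT1"
  kubectl rollout status deployment -n bookinfo "$d" --context "$REMOTE_CONTEXT1" --timeout=120s
done
# Repeat for the deployments in the second cluster.
for d in reviews-v3 ratings-v1; do
  kubectl rollout restart deployment -n bookinfo "$d" --context "$REMOTE_CONTEXT2"
  kubectl rollout status deployment -n bookinfo "$d" --context "$REMOTE_CONTEXT2" --timeout=120s
done
```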
    
  10. Verify that the Bookinfo pods now use the new revision.

    istioctl proxy-status --context $REMOTE_CONTEXT1 | grep "\.bookinfo "
    istioctl proxy-status --context $REMOTE_CONTEXT2 | grep "\.bookinfo "
    

Activate the managed installations

After you finish testing, change the new control planes to be active, and roll out a restart to data plane workloads so that they are managed by the new control planes. Then, update service selectors or internal and external DNS entries to point to the new gateways. Optionally, you can uninstall the old Istio installations.

  1. In your IstioLifecycleManager resource, switch to the new istiod control plane revision by changing defaultRevision to true.

    kubectl edit IstioLifecycleManager -n gloo-mesh --context $MGMT_CONTEXT istiod-control-plane
    

    Example:

    apiVersion: admin.gloo.solo.io/v2
    kind: IstioLifecycleManager
    metadata:
      name: istiod-control-plane
      namespace: gloo-mesh
    spec:
      installations:
        - revision: 1-20
          clusters:
          - name: cluster1
            # Set this field to TRUE
            defaultRevision: true
          - name: cluster2
            # Set this field to TRUE
            defaultRevision: true
          istioOperatorSpec:
            profile: minimal
            ...
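If you prefer a non-interactive change, a JSON patch can flip the same fields. This is a sketch; the installations index 0 and the ordering of the two clusters are assumptions based on the example above, so verify the paths against your own resource first:

```shell
# Set defaultRevision: true for both clusters in the first installations entry.
kubectl patch istiolifecyclemanager istiod-control-plane -n gloo-mesh \
  --context "$MGMT_CONTEXT" --type=json \
  -p='[{"op":"replace","path":"/spec/installations/0/clusters/0/defaultRevision","value":true},{"op":"replace","path":"/spec/installations/0/clusters/1/defaultRevision","value":true}]'
```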
    
  2. In each workload cluster, roll out a restart to your workload apps so that they are managed by the new control planes.

    1. Change the label on any Istio-managed namespaces to use the new revision.
      kubectl label ns <namespace> istio.io/rev=$REVISION --overwrite --context $REMOTE_CONTEXT1
      kubectl label ns <namespace> istio.io/rev=$REVISION --overwrite --context $REMOTE_CONTEXT2
      
      If you did not previously use revision labels for your apps, you can instead run kubectl label ns <namespace> istio-injection- --context $REMOTE_CONTEXT1 and kubectl label ns <namespace> istio.io/rev=$REVISION --context $REMOTE_CONTEXT1, and then repeat the commands for each additional workload cluster context.
    2. Update any Istio-managed apps by rolling out restarts. The Istio sidecars for each microservice are updated to use the new Istio version. Make sure that you only restart one microservice at a time.
    3. Verify that your workloads and new gateways point to the new revision.
      istioctl proxy-status --context $REMOTE_CONTEXT1
      istioctl proxy-status --context $REMOTE_CONTEXT2
      
  3. If you use your own load balancer services for gateways, update the service selectors to point to the gateways for the new revision. Alternatively, if you use the load balancer services that are deployed by default, update any internal or external DNS entries to point to the new gateway IP addresses.

  4. Uninstall the old Istio installation. The uninstallation process varies depending on your original installation method. For more information, see the Istio documentation.

Next steps