You can use this guide to upgrade the Gloo version of your Gloo components, such as the management server and agents, or to apply changes to the components’ configuration settings.

Considerations

Consider the following rules before you plan your Gloo Mesh Enterprise upgrade.

Testing upgrades

During the upgrade, the data plane continues to run, but you might not be able to modify the configurations through the management plane. Because zero downtime is not guaranteed, try testing the upgrade in a staging environment before upgrading your production environment.

Patch and minor versions

Patch version upgrades:
You can skip patch versions within the same minor release. For example, you can upgrade from version 2.3.0 to 2.3.23 directly, and skip the patch versions in between.

Minor version upgrades:

  • Always upgrade to the latest patch version of the target minor release. For example, if you want to upgrade from version 2.2.9 to 2.3.x, and 2.3.23 is the latest patch version, upgrade to that version and skip any previous patch versions for that minor release. Do not upgrade to a lower patch version, such as 2.3.0, 2.3.1, and so on.
  • Do not skip minor versions during your upgrade. Upgrade minor release versions one at a time. For example, if you want to upgrade from 2.3.x to 2.5.x, you must first upgrade to the latest patch version of the 2.4 minor release. After you upgrade to 2.4.x, you can then plan your upgrade to the latest patch version of the 2.5.x release.

Multicluster only: Version skew policy for management and remote clusters

Plan to always upgrade your Gloo management server and agents to the same target version. Always upgrade the Gloo management server first. Then, roll out the upgrade to the Gloo agents in your workload clusters. During this upgrade process, your management server and agents can be one minor version apart.

For example, let’s say you want to upgrade from 2.2.9 to 2.3.x. Start by upgrading your management server to the latest patch version of the 2.3.x minor release. Your management server and agent are still compliant as they are one minor version apart. Then, roll out the 2.3.x minor release upgrade to the agents in your workload clusters.

If you plan to upgrade across more than one minor release, you must perform one minor release upgrade at a time. For example, to upgrade your management server and agent from 2.3.x to 2.5.x, you upgrade your management server to the latest patch version of the 2.4 minor release first. Your management server and agent are compliant because they are one minor version apart. Then, you upgrade your agents to the 2.4 minor release. After you verify the 2.4 upgrade, use the same approach to upgrade the management server and agents from 2.4 to the target 2.5 minor release.

If both your management server and agent run the same minor version, the agent can run any patch version that is equal to or lower than the management server’s patch version.

Consider the following example version skew scenarios:

Supported? | Management server version | Agent version | Requirement
✅         | 2.4.4                     | 2.4.2         | The management server and agents run the same minor version. The agent patch version is equal to or lower than the management server's.
❌         | 2.4.4                     | 2.4.5         | The agent runs the same minor version as the server, but has a patch version greater than the server's.
✅         | 2.4.4                     | 2.3.4         | The agent runs a minor version no greater than n-1 behind the server.
❌         | 2.4.4                     | 2.2.9         | The agent runs a minor version that is greater than n-1 behind the server.
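The skew rules above can be sketched as a quick shell check. This is an illustrative helper, not part of Gloo: the hardcoded versions stand in for values you would read from meshctl version output in practice, and the check assumes both components run the same major version.

```shell
# Illustrative version skew check (assumes the same major version).
server="2.4.4"   # management server version
agent="2.3.4"    # agent version

server_minor=$(echo "$server" | cut -d. -f2)
agent_minor=$(echo "$agent" | cut -d. -f2)
server_patch=$(echo "$server" | cut -d. -f3)
agent_patch=$(echo "$agent" | cut -d. -f3)

skew=$((server_minor - agent_minor))
status="supported"
# The agent must not be ahead of the server, and at most one minor behind.
if [ "$skew" -lt 0 ] || [ "$skew" -gt 1 ]; then
  status="unsupported"
# Within the same minor, the agent patch must not exceed the server patch.
elif [ "$skew" -eq 0 ] && [ "$agent_patch" -gt "$server_patch" ]; then
  status="unsupported"
fi
echo "$status"
```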

Step 1: Prepare to upgrade

  1. Check that your underlying Kubernetes platform and Istio service mesh run supported versions for the Gloo Mesh Enterprise version that you want to upgrade to.

    1. Review the supported versions.
    2. Compare the supported version against the versions of Kubernetes and Istio that you run in your clusters.
    3. If necessary, upgrade Istio or Kubernetes to a version that is supported by the Gloo Mesh Enterprise version that you want to upgrade to.
  2. Set the Gloo Mesh Enterprise version that you want to upgrade to as an environment variable. The latest version is used as an example. You can find other versions in the changelog documentation. Append -fips for a FIPS-compliant image, such as 2.3.23-fips. Do not include v before the version number.

    Important: Do not upgrade Gloo Mesh to version 2.3.14, which contains a bug that causes the Gloo agent to have stale service discovery data. This bug is fixed in the 2.3.15 release.

      export UPGRADE_VERSION=2.3.23
      

Step 2: Upgrade the meshctl CLI

Upgrade the meshctl CLI to the version of Gloo Mesh Enterprise you want to upgrade to.

  1. Re-install meshctl to the upgrade version.

      curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v$UPGRADE_VERSION sh -
      
  2. Verify that the client version matches the version you installed.

      meshctl version
      

    Example output:

      {
        "client": {
          "version": "2.3.23"
        },
      

Step 3: Upgrade Gloo Mesh Enterprise

Upgrade your Gloo Mesh Enterprise installation. The steps differ based on whether you run Gloo Mesh Enterprise in a single-cluster or multicluster environment.

Single cluster

  1. Update the gloo-platform Helm repo.

      helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
    helm repo update
      
  2. Apply the Gloo custom resource definitions (CRDs) for the upgrade version.

      helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
        --namespace gloo-mesh \
        --version $UPGRADE_VERSION \
        --set installEnterpriseCrds=true
      
  3. Get the Helm values files for your current version.

      helm get values gloo-platform -o yaml -n gloo-mesh > gloo-single.yaml
    open gloo-single.yaml
      
  4. Compare your current Helm chart values with the version that you want to upgrade to. You can get a values file for the upgrade version with the helm show values command.

      helm show values gloo-platform/gloo-platform --version $UPGRADE_VERSION > all-values.yaml
      
  5. Make any changes that you want, such as modifications required for breaking changes or to enable new features, by editing your gloo-single.yaml Helm values file or preparing the --set flags. If you do not want to use certain settings, comment them out.
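    For example, to spot settings that were added or renamed in the upgrade version, you can diff your current values against the defaults that you saved in the previous step:

```shell
# Show line-by-line differences between your current release values and
# the upgrade version's default values (both files were created in the
# previous steps).
diff -u gloo-single.yaml all-values.yaml | less
```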

  6. Upgrade the Gloo Mesh Enterprise Helm installation.

      helm upgrade gloo-platform gloo-platform/gloo-platform \
        --namespace gloo-mesh \
        -f gloo-single.yaml \
        --version $UPGRADE_VERSION
      
  7. Confirm that Gloo components, such as the gloo-mesh-mgmt-server, run the version that you upgraded to.

      meshctl version
      

    Example output:

       "server": [
       {
         "Namespace": "gloo-mesh",
         "components": [
           {
             "componentName": "gloo-mesh-mgmt-server",
             "images": [
                {
                 "name": "gloo-mesh-mgmt-server",
                 "domain": "gcr.io",
                 "path": "gloo-mesh-mgmt-server",
                 "version": "2.3.23"
               }
             ]
           },
       

Multicluster

  1. Update the gloo-platform Helm repo.

      helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
    helm repo update
      
  2. Get the Helm values files for your current version.

    1. Get your current values for the management cluster.
        helm get values gloo-platform -n gloo-mesh -o yaml --kube-context $MGMT_CONTEXT > mgmt-plane.yaml
      open mgmt-plane.yaml
        
    2. Get your current values for the workload clusters.
        helm get values gloo-platform -n gloo-mesh -o yaml --kube-context $REMOTE_CONTEXT > data-plane.yaml
      open data-plane.yaml
        
    3. Optional: If you maintain a separate gloo-agent-addons Helm release, get the values for that Helm release too, and delete the first line that contains USER-SUPPLIED VALUES:.
        helm get values gloo-agent-addons -n gloo-mesh-addons -o yaml --kube-context $REMOTE_CONTEXT > gloo-agent-addons.yaml
      open gloo-agent-addons.yaml
        
  3. Compare your current Helm chart values with the version that you want to upgrade to. You can get a values file for the upgrade version with the helm show values command.

      helm show values gloo-platform/gloo-platform --version $UPGRADE_VERSION > all-values.yaml
      
  4. Make any changes that you want, such as modifications required for breaking changes or to enable new features, by editing your mgmt-plane.yaml and data-plane.yaml Helm values files or preparing the --set flags. If you do not want to use certain settings, comment them out.

  5. Upgrade the Gloo Mesh Enterprise Helm releases in your management cluster.

    1. Apply the Gloo custom resource definitions (CRDs) for the upgrade version in the management cluster.
        helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
          --kube-context $MGMT_CONTEXT \
          --namespace gloo-mesh \
          --version $UPGRADE_VERSION \
          --set installEnterpriseCrds=true
        
    2. Upgrade your Helm release in the management cluster. Make sure to include your Helm values when you upgrade either as a configuration file in the --values flag or with --set flags. Otherwise, any previous custom values that you set might be overwritten.
        helm upgrade gloo-platform gloo-platform/gloo-platform \
          --kube-context $MGMT_CONTEXT \
          --namespace gloo-mesh \
          -f mgmt-plane.yaml \
          --version $UPGRADE_VERSION
        
    3. Confirm that the management plane components, such as the gloo-mesh-mgmt-server, run the version that you upgraded to.
        meshctl version --kubecontext $MGMT_CONTEXT
        
      Example output:
            "server": [
            {
              "Namespace": "gloo-mesh",
              "components": [
                {
                  "componentName": "gloo-mesh-mgmt-server",
                  "images": [
                     {
                      "name": "gloo-mesh-mgmt-server",
                      "domain": "gcr.io",
                      "path": "gloo-mesh-mgmt-server",
                      "version": "2.3.23"
                    }
                  ]
                },
            
  6. Upgrade the Gloo Mesh Enterprise Helm releases in your workload clusters. Repeat these steps for each workload cluster, and be sure to update the cluster context each time.

    1. Apply the Gloo custom resource definitions (CRDs) for the upgrade version in each workload cluster.

        helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
          --kube-context $REMOTE_CONTEXT \
          --namespace=gloo-mesh \
          --version=$UPGRADE_VERSION \
          --set installEnterpriseCrds=true
        
    2. Upgrade your Helm release in each workload cluster. Make sure to include your Helm values when you upgrade either as a configuration file in the --values flag or with --set flags. Otherwise, any previous custom values that you set might be overwritten.

        helm upgrade gloo-platform gloo-platform/gloo-platform \
          --kube-context $REMOTE_CONTEXT \
          --namespace gloo-mesh \
          -f data-plane.yaml \
          --version $UPGRADE_VERSION
        
    3. Optional: If you maintain a separate gloo-agent-addons Helm release, upgrade that Helm release in each workload cluster too. Be sure to update the cluster context for each workload cluster that you repeat this command for.

        helm upgrade gloo-agent-addons gloo-platform/gloo-platform \
         --kube-context $REMOTE_CONTEXT \
         --namespace gloo-mesh-addons \
         -f gloo-agent-addons.yaml \
         --version $UPGRADE_VERSION
        
    4. Confirm that the data plane components, such as the gloo-mesh-agent, run the version that you upgraded to.

        meshctl version --kubecontext $REMOTE_CONTEXT
        

      Example output:

            {
                  "componentName": "gloo-mesh-agent",
                  "images": [
                    {
                      "name": "gloo-mesh-agent",
                      "domain": "gcr.io",
                      "path": "gloo-mesh/gloo-mesh-agent",
                      "version": "2.3.23"
                    }
                  ]
                },
            

    5. Repeat these steps for each workload cluster, and be sure to update the cluster context each time.

  7. Check that the Gloo management and agent components are connected.

      meshctl check --kubecontext $MGMT_CONTEXT
      

Update your Gloo license

Before your Gloo license expires, you can update the license by performing a Helm upgrade. If you use Gloo Mesh along with other Gloo products such as Gloo Gateway and Gloo Network, you can also update those licenses.

For example, if you notice that your Gloo control plane deployments are in a crash loop, your Gloo license might be expired. You can check the logs for one of the deployments, such as the management server, to look for an error message similar to the following:

  meshctl logs mgmt --kubecontext ${MGMT_CONTEXT}
  
  {"level":"fatal","ts":1628879186.1552186,"logger":"gloo-mesh-mgmt-server","caller":"cmd/main.go:24","msg":"License is invalid or expired, crashing - license expired", ...
  

To update your license key in your Gloo installation:

  1. Get a new Gloo license key by contacting your account representative. If you use Gloo Mesh along with other Gloo products such as Gloo Gateway and Gloo Network, make sure to ask for up-to-date license keys for all your products.

  2. Save the new license key as an environment variable.

  3. Perform a regular upgrade of your Gloo installation. During the upgrade, either update the license value in your Helm values file, or provide your new license key in a --set flag in the helm upgrade command. For example, to update your Gloo Mesh license key, either change the value of the licensing.glooMeshLicenseKey setting in your Helm values file, or supply the --set licensing.glooMeshLicenseKey=$GLOO_MESH_LICENSE_KEY flag when you upgrade.
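    A sketch of the license refresh with the --set flag might look like the following. The values file name matches the earlier single-cluster steps, and the license key is a placeholder; adjust the file, release name, and contexts for your setup.

```shell
# Save the new license key (placeholder value; use the key from your
# account representative).
export GLOO_MESH_LICENSE_KEY=<new-license-key>

# Re-run your regular upgrade, overriding only the license setting.
helm upgrade gloo-platform gloo-platform/gloo-platform \
  --namespace gloo-mesh \
  -f gloo-single.yaml \
  --version $UPGRADE_VERSION \
  --set licensing.glooMeshLicenseKey=$GLOO_MESH_LICENSE_KEY
```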

  4. Optional: If your license expired and the management server pods are in a crash loop, restart the management server pods. If you updated the license before expiration, skip this step.

      kubectl rollout restart -n gloo-mesh deployment/gloo-mesh-mgmt-server --context ${MGMT_CONTEXT}
      
  5. Verify that your license check is now valid, and no errors are reported.

      meshctl check --kubecontext ${MGMT_CONTEXT}
      

    Example output:

      🟢 License status
    
    INFO  gloo-gateway enterprise license expiration is 25 Aug 24 10:38 CDT
    INFO  Valid GraphQL license module found
      

Upgrade the Cilium CNI

To upgrade the Cilium CNI in your clusters, such as to update the version of the Solo distribution of the Cilium image or to change a setting, you can follow the upgrade guide in the Cilium documentation.

  1. To ensure your Helm values are not overwritten, save your current Helm values for the Cilium CNI installation.

      helm get values cilium -n kube-system -o yaml > solo-cilium.yaml
      
  2. Follow the upgrade guide in the Cilium documentation. In the helm install and helm upgrade commands, be sure to pass in your Helm values by using the -f solo-cilium.yaml flag.
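    For example, the upgrade command might look like the following sketch. The cilium/cilium chart reference and the target version placeholder are assumptions; use the chart source and version from the Cilium upgrade guide and the Solo distribution that you want.

```shell
# Upgrade the Cilium CNI, passing in the values you saved in step 1 so
# that your current settings are not overwritten.
helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --version <target-cilium-version> \
  -f solo-cilium.yaml
```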

Upgrade managed Istio within your gloo-platform Helm chart

If you manage Istio installations in the istioInstallations section of your Gloo Platform Helm chart, you can apply updates to your Istio installations in one of the following ways:

Revisioned canary upgrades (recommended)

In a canary upgrade, you install another Istio installation (canary) alongside your active installation. Each installation is revisioned so that you can easily identify and verify the separate settings and resources for each installation. Note that during a canary upgrade, the validating admission webhook is enabled only for the canary installation to prevent issues that occur when multiple webhooks are enabled.

Perform a canary upgrade when you change one of the following fields:

  • istioOperatorSpec.tag minor version
  • istioOperatorSpec.hub repository, such as switching to the repository for the minor version of the Solo distribution of Istio that you want to upgrade to
  • components, profile, values, or namespace in the istioOperatorSpec

To perform a canary upgrade:

  1. OpenShift only: Elevate the permissions of the service account that will be created for the new revision’s operator project. This permission allows the Istio sidecar to make use of a user ID that is normally restricted by OpenShift. Replace the revision with the revision you plan to use.

      oc adm policy add-scc-to-group anyuid system:serviceaccounts:gm-iop-1-18 --context $REMOTE_CONTEXT1
    oc adm policy add-scc-to-group anyuid system:serviceaccounts:gm-iop-1-18 --context $REMOTE_CONTEXT2
      
  2. Follow the steps in this guide to perform a regular upgrade of your Gloo Mesh installation. When you edit the istioInstallations.controlPlane and istioInstallations.eastWestGateways sections of your Helm values file, add another installation entry for the canary revision, and leave the entry for your current installation as-is. For the canary revision, be sure to set defaultRevision and activeGateway to false so that only the existing revisions continue to run.

    For example, you might add the following installation entries for the Istio control plane and east-west gateway alongside your existing entries. If you have a Gloo Gateway license, you might also have entries for the ingress gateway proxy in the northSouthGateways section too.

      
    istioInstallations:
      controlPlane:
        enabled: true
        installations:
          # EXISTING revision
          - clusters:
              - defaultRevision: true # Keep this field set to TRUE
                name: cluster1
                trustDomain: ""
            istioOperatorSpec:
              hub: $REPO
              tag: 1.17.8-solo
              profile: minimal
              namespace: istio-system
              ...
            revision: 1-17
          # NEW revision
          - clusters:
              - defaultRevision: false # Set this field to FALSE
                name: cluster1
                trustDomain: ""
            istioOperatorSpec:
              hub: $REPO
              tag: 1.18.3-solo
              profile: minimal
              namespace: istio-system
              ...
            revision: 1-18
      eastWestGateways:
        - enabled: true
          installations:
            # EXISTING revision
            - clusters:
                - activeGateway: true # Keep this field set to TRUE
                  name: cluster1
                  trustDomain: ""
              gatewayRevision: 1-17
              istioOperatorSpec:
                hub: $REPO
                tag: 1.17.8-solo
                profile: empty
                namespace: gloo-mesh-gateways
                ...
            # NEW revision
            - clusters:
                - activeGateway: false # Set this field to FALSE
                  name: cluster1
                  trustDomain: ""
              gatewayRevision: 1-18
              istioOperatorSpec:
                hub: $REPO
                tag: 1.18.3-solo
                profile: empty
                namespace: gloo-mesh-gateways
                ...
          name: istio-eastwestgateway
      enabled: true
      
    • Updating the minor version of Istio? In your canary revision section, be sure to update both the repo key in the hub field, and the Istio version in the tag field. You can get the repo key for the Istio version that you want to install from the Istio images built by Solo.io support article.
    • For most use cases, you can set the revision and the gatewayRevision to the same version. However, gateway installations can point to any istiod control plane revision by using the controlPlaneRevision field. For simplicity, if you do not specify controlPlaneRevision, the gateway installation uses a control plane with the same revision as itself.
    • For FIPS-compliant Solo distributions of Istio 1.17.2 and 1.16.4, you must use the -patch1 versions of the latest Istio builds published by Solo, such as 1.17.2-patch1-solo-fips for Solo distribution of Istio 1.17. These patch versions fix a FIPS-related issue introduced in the upstream Envoy code. In 1.17.3 and later, FIPS compliance is available in the -fips tags of regular Solo distributions of Istio, such as 1.17.3-solo-fips.
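    For example, a hypothetical sketch of a gateway installation pinned to an older istiod revision via the controlPlaneRevision field might look like the following. The field placement and values here are illustrative assumptions; check the gloo-platform Helm chart reference for the exact schema.

```yaml
eastWestGateways:
  - enabled: true
    installations:
      - clusters:
          - activeGateway: true
            name: cluster1
        gatewayRevision: 1-18
        # Hypothetical: run the 1-18 gateway against the 1-17 istiod.
        # If omitted, the gateway uses the control plane of its own revision.
        controlPlaneRevision: 1-17
        istioOperatorSpec:
          hub: $REPO
          tag: 1.18.3-solo
          profile: empty
          namespace: gloo-mesh-gateways
```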
  3. After you apply the Helm upgrade with your updated values file, verify that Istio resources for the canary installation are created. For example, if you updated the Istio minor version to 1-18, verify that resources are created in the gm-iop-1-18 namespace, and that resources for 1-18 are created alongside the existing resources for the previous version in the istio-system and gloo-mesh-gateways namespaces. Note that the gateway load balancers for the canary revision contain the revision in the name, such as istio-eastwestgateway-1-18.

      kubectl get all -n gm-iop-1-18 --context $REMOTE_CONTEXT
    kubectl get all -n istio-system --context $REMOTE_CONTEXT
    kubectl get all -n gloo-mesh-gateways --context $REMOTE_CONTEXT
      
  4. After performing any necessary testing, switch to the new Istio control plane and gateway revisions.

    1. Get your Helm values file. Change the release name as needed.
        helm get values gloo-platform -n gloo-mesh -o yaml > mgmt-plane.yaml
      open mgmt-plane.yaml
        
    2. Change defaultRevision and activeGateway to false for the old revision and to true for the new revision.
      • New load balancers are created for the canary gateways. To instead change the control plane revision in use by the existing gateway load balancers, you can set the istio.io/rev label on the gateway deployment, which triggers a rolling restart.
        
      istioInstallations:
        controlPlane:
          enabled: true
          installations:
            # EXISTING revision
            - clusters:
                - defaultRevision: false # Set this field to FALSE
                  name: cluster1
                  trustDomain: ""
              istioOperatorSpec:
                hub: $REPO
                tag: 1.17.8-solo
                profile: minimal
                namespace: istio-system
                ...
              revision: 1-17
            # NEW revision
            - clusters:
                - defaultRevision: true # Set this field to TRUE
                  name: cluster1
                  trustDomain: ""
              istioOperatorSpec:
                hub: $REPO
                tag: 1.18.3-solo
                profile: minimal
                namespace: istio-system
                ...
              revision: 1-18
        eastWestGateways:
          - enabled: true
            installations:
              # EXISTING revision
              - clusters:
                  - activeGateway: false # Set this field to FALSE
                    name: cluster1
                    trustDomain: ""
                gatewayRevision: 1-17
                istioOperatorSpec:
                  hub: $REPO
                  tag: 1.17.8-solo
                  profile: empty
                  namespace: gloo-mesh-gateways
                  ...
              # NEW revision
              - clusters:
                  - activeGateway: true # Set this field to TRUE
                    name: cluster1
                    trustDomain: ""
                gatewayRevision: 1-18
                istioOperatorSpec:
                  hub: $REPO
                  tag: 1.18.3-solo
                  profile: empty
                  namespace: gloo-mesh-gateways
                  ...
            name: istio-eastwestgateway
        enabled: true
        
    3. Upgrade your Helm release. Change the release name as needed.
        helm upgrade gloo-platform gloo-platform/gloo-platform \
         --namespace gloo-mesh \
         -f mgmt-plane.yaml \
         --version $UPGRADE_VERSION
        
  5. After your Helm upgrade completes, verify that the active gateways for the new revision are created. The active gateways do not have the revision appended to their names. Gateways for the inactive revision that you previously ran also still exist in the namespace, in case a rollback is required.

      kubectl get all -n gloo-mesh-gateways
      

    Example output, in which the active gateway (istio-eastwestgateway) for the new revision and inactive gateway (such as istio-eastwestgateway-1-17) for the old revision are created:

      NAME                         TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                                                      AGE
    istio-eastwestgateway          LoadBalancer   10.44.4.140   34.150.235.221   15021:31321/TCP,80:32525/TCP,443:31826/TCP                   48s
    istio-eastwestgateway-1-17     LoadBalancer   10.56.15.36   34.145.163.61    15021:31936/TCP,80:30196/TCP,443:32286/TCP,15443:31851/TCP   45s
      
  6. Upgrade your apps’ Istio sidecars.

    1. For any namespace where you run Istio-managed apps, change the label to use the new revision. For example, you might update the bookinfo namespace for the canary revision 1-18. If you did not previously use revision labels for your apps, you can upgrade your application’s sidecars by running kubectl label ns bookinfo istio-injection- and kubectl label ns bookinfo istio.io/rev=<revision>.
      • Single cluster setup:
          kubectl label ns bookinfo --overwrite istio.io/rev=1-18
          
      • Multicluster setup:
          kubectl label ns bookinfo --overwrite istio.io/rev=1-18 --context $REMOTE_CONTEXT1
        kubectl label ns bookinfo --overwrite istio.io/rev=1-18 --context $REMOTE_CONTEXT2
          
    2. Update any Istio-managed apps by rolling out restarts. The Istio sidecars for each microservice are updated to use the new Istio version. Make sure that you only restart one microservice at a time. For example, in the following commands to update the Bookinfo microservices, 20 seconds elapse between each restart to ensure that the pods have time to start running.
      • Single cluster setup:
          kubectl rollout restart deployment -n bookinfo details-v1
        sleep 20s
        kubectl rollout restart deployment -n bookinfo productpage-v1
        sleep 20s
        kubectl rollout restart deployment -n bookinfo reviews-v1
        sleep 20s
        kubectl rollout restart deployment -n bookinfo reviews-v2
        sleep 20s
        kubectl rollout restart deployment -n bookinfo reviews-v3
        sleep 20s
        kubectl rollout restart deployment -n bookinfo ratings-v1
        sleep 20s
          
      • Multicluster setup:
          kubectl rollout restart deployment -n bookinfo details-v1 --context $REMOTE_CONTEXT1
        sleep 20s
        kubectl rollout restart deployment -n bookinfo ratings-v1 --context $REMOTE_CONTEXT1
        sleep 20s
        kubectl rollout restart deployment -n bookinfo productpage-v1 --context $REMOTE_CONTEXT1
        sleep 20s
        kubectl rollout restart deployment -n bookinfo reviews-v1 --context $REMOTE_CONTEXT1
        sleep 20s
        kubectl rollout restart deployment -n bookinfo reviews-v2 --context $REMOTE_CONTEXT1
        sleep 20s
        kubectl rollout restart deployment -n bookinfo reviews-v3 --context $REMOTE_CONTEXT2
        sleep 20s
        kubectl rollout restart deployment -n bookinfo ratings-v1 --context $REMOTE_CONTEXT2
        sleep 20s
          
    3. Verify that your workloads and new gateways point to the new revision.
      • Single cluster setup:
          istioctl proxy-status
          
      • Multicluster setup:
          istioctl proxy-status --context $REMOTE_CONTEXT1
          
      Example output:
        NAME                                                              CLUSTER     ...     ISTIOD                         VERSION
      details-v1-7b6df9d8c8-s6kg5.bookinfo                              cluster1    ...     istiod-1-18-7c8f6fd4c4-m9k9t     1.18.3-solo
      istio-eastwestgateway-1-18-bdc4fd65f-ftmz9.gloo-mesh-gateways     cluster1    ...     istiod-1-18-6495985689-rkwwd     1.18.3-solo
      productpage-v1-bb494b7d7-xbtxr.bookinfo                           cluster1    ...     istiod-1-18-7c8f6fd4c4-m9k9t     1.18.3-solo
      ratings-v1-55b478cfb6-wv2m5.bookinfo                              cluster1    ...     istiod-1-18-7c8f6fd4c4-m9k9t     1.18.3-solo
      reviews-v1-6dfcc9fc7d-7k6qh.bookinfo                              cluster1    ...     istiod-1-18-7c8f6fd4c4-m9k9t     1.18.3-solo
      reviews-v2-7dddd799b5-m5n2z.bookinfo                              cluster1    ...     istiod-1-18-7c8f6fd4c4-m9k9t     1.18.3-solo
        
  7. To uninstall the previous installations, or if you need to uninstall the canary installations, you can edit your Helm values file to remove the revision entries from the istioInstallations.controlPlane.installations and istioInstallations.northSouthGateways.installations lists. Then, upgrade your Gloo Mesh Helm release with your updated values file.

In-place upgrades

In an in-place upgrade, Gloo upgrades your existing control plane or gateway installations. In-place upgrades are triggered when you change one of the following fields:

  • Patch version in the tag field of the istioOperatorSpec
    • In-place upgrades are not supported for downgrading the patch version.
    • In-place upgrades are not supported if you do not already specify a tag value, such as if you want to switch from the auto setting to a specific version. This is because you must also specify hub and revision values, which require a canary upgrade.
  • meshConfig values in the istioOperatorSpec

To trigger an in-place upgrade:

  1. Follow the steps in this guide to perform a regular upgrade of your Gloo Mesh installation and include your Istio changes in your Helm values file. For example, in a single-cluster setup, you might edit your Helm values file to update the patch version of Istio in the istioInstallations.controlPlane.installations.istioOperatorSpec.tag and istioInstallations.northSouthGateways.installations.istioOperatorSpec.tag fields. After you apply the updates in your Helm upgrade of the gloo-platform chart, Gloo starts an in-place upgrade of the Istio control plane and gateways.
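    For example, an in-place patch bump in a single-cluster values file might look like the following sketch. The revision and version values are illustrative; only the patch portion of the tag changes.

```yaml
istioInstallations:
  controlPlane:
    installations:
      - istioOperatorSpec:
          tag: 1.18.4-solo   # patch bump from 1.18.3-solo; same minor, same revision
        revision: 1-18
  northSouthGateways:
    - installations:
        - istioOperatorSpec:
            tag: 1.18.4-solo   # patch bump from 1.18.3-solo
          gatewayRevision: 1-18
```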

  2. After your Helm upgrade completes, restart your gateway pods in each workload cluster. For example, you might use the following commands to roll out a restart of the istio-eastwestgateway-1-18 and istio-ingressgateway-1-18 deployments.

      kubectl rollout restart -n gloo-mesh-gateways deployment/istio-eastwestgateway-1-18 --context $REMOTE_CONTEXT1
    kubectl rollout restart -n gloo-mesh-gateways deployment/istio-ingressgateway-1-18 --context $REMOTE_CONTEXT1
      
      kubectl rollout restart -n gloo-mesh-gateways deployment/istio-eastwestgateway-1-18 --context $REMOTE_CONTEXT2
    kubectl rollout restart -n gloo-mesh-gateways deployment/istio-ingressgateway-1-18 --context $REMOTE_CONTEXT2
      
  3. Verify that your Istio resources are updated.

      kubectl get all -n gm-iop-1-18 --context $REMOTE_CONTEXT1
    kubectl get all -n istio-system --context $REMOTE_CONTEXT1
    kubectl get all -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
      

Testing only: Manually replacing the GatewayLifecycleManager CR

If you manage Istio through your main Gloo Platform Helm chart in testing or demo setups, you can quickly upgrade your Istio service mesh and gateway configurations by manually deleting the IstioLifecycleManager and GatewayLifecycleManager CRs, and upgrading your Gloo Mesh installation with your updated gateway values in your Helm values file. Note that you can also use this method to clear your managed Istio configurations if a canary upgrade becomes stuck.

  1. Get the name of your IstioLifecycleManager resource. Typically, this resource is named gloo-platform.

      kubectl get IstioLifecycleManager -A --context $MGMT_CONTEXT
      
  2. Delete the resource.

      kubectl delete IstioLifecycleManager gloo-platform -n gloo-mesh --context $MGMT_CONTEXT
      
  3. Verify that your istiod control plane is removed.

      kubectl get all -n istio-system --context $REMOTE_CONTEXT1
    kubectl get all -n istio-system --context $REMOTE_CONTEXT2
      
  4. Optional: If you also need to make changes to your gateways, clear those configurations.

    1. Get the name of your GatewayLifecycleManager resource. Typically, this resource is named istio-eastwestgateway. You might also have an istio-ingressgateway resource, such as if you use Gloo Gateway.
        kubectl get GatewayLifecycleManager -A --context $MGMT_CONTEXT
        
    2. Delete the resource.
        kubectl delete GatewayLifecycleManager istio-eastwestgateway -n gloo-mesh --context $MGMT_CONTEXT
        
        kubectl delete GatewayLifecycleManager istio-ingressgateway -n gloo-mesh --context $MGMT_CONTEXT
        
    3. Verify that your gateway proxy is removed.
        kubectl get svc -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
      kubectl get svc -n gloo-mesh-gateways --context $REMOTE_CONTEXT2
        
  5. Follow the steps in this guide to perform a regular upgrade of your Gloo Mesh installation and include your Istio changes in your Helm values file. After you apply the updates in your Helm upgrade of the gloo-platform chart, Gloo re-creates the istiod control plane, and if applicable, the Istio gateways.

  6. After your Helm upgrade completes, verify that your Istio resources are re-created.

      # Change the revision as needed
    kubectl get all -n gm-iop-1-18 --context $REMOTE_CONTEXT1
    kubectl get all -n istio-system --context $REMOTE_CONTEXT1
    kubectl get all -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
      
      # Change the revision as needed
    kubectl get all -n gm-iop-1-18 --context $REMOTE_CONTEXT2
    kubectl get all -n istio-system --context $REMOTE_CONTEXT2
    kubectl get all -n gloo-mesh-gateways --context $REMOTE_CONTEXT2