Upgrade

Upgrade minor and patch versions for the Gloo Gateway management server, agent, and gateway proxy.

During the upgrade, the data plane continues to run, but you might not be able to modify the configurations through the management plane. Because zero downtime is not guaranteed, test the upgrade in a staging environment before you upgrade your production environment.

Upgrade to version 2.3

Upgrade Gloo Gateway to version 2.3 for the first time by migrating your Helm charts to the improved gloo-platform Helm chart.

In Gloo Gateway 2.3 and later, the gloo-mesh-enterprise, gloo-mesh-agent, and other included Helm charts are considered legacy. If you installed Gloo Gateway by using these legacy Helm charts, or if you used meshctl version 2.2 or earlier to install Gloo Gateway, you can migrate your existing installation to the new gloo-platform Helm chart by using the meshctl migrate helm command. During the migration, your installation version is also upgraded to 2.3.

To migrate your Helm charts and upgrade to 2.3, see Migrate to the gloo-platform Helm chart. As part of the upgrade process included in the migration, note the following breaking changes in version 2.3.

Although it is possible to upgrade your Gloo Gateway version to 2.3 without migrating your legacy Helm charts to the new chart, it is not recommended. All guides in the 2.3 documentation use Helm values from the gloo-platform chart. If you must upgrade your legacy Helm charts to 2.3 without migrating, see the legacy upgrade guide.

Upgrade Helm values or the 2.3 patch version

After you upgrade your Gloo Gateway installation to version 2.3, follow these steps to modify Helm values or upgrade your 2.3 patch version.

The upgrade steps in this section are intended for installations that use the gloo-platform Helm chart, which is available in Gloo Platform 2.3 and later. If you installed Gloo Gateway by using the legacy gloo-mesh-enterprise, gloo-mesh-agent, and other included Helm charts, or if you used meshctl version 2.2 or earlier to install Gloo Gateway, migrate your legacy installation to the new gloo-platform Helm chart first. Then, you can upgrade your gloo-platform Helm chart installation by using this section.

Before you begin

Do not upgrade Gloo Gateway to version 2.3.14, which contains a bug that causes the Gloo agent to have stale service discovery data. This bug is fixed in the 2.3.15 release.

  1. Review the changelog. Focus especially on any Breaking Changes that might require a different upgrade procedure.

  2. Check that your underlying Kubernetes platform runs a supported version for the Gloo version that you want to upgrade to.

    1. Review the supported versions.
    2. Compare the supported version against the version of Kubernetes that you run in your clusters.
    3. If necessary, upgrade Kubernetes. Consult your cluster infrastructure provider.
  3. Set the Gloo Gateway version that you want to upgrade to as an environment variable. The latest version is used as an example. You can find other versions in the Changelog documentation. Append '-fips' for a FIPS-compliant image, such as '2.3.22-fips'. Do not include 'v' before the version number.

    export UPGRADE_VERSION=2.3.22
    

Looking to update certain Helm chart values but not the version? Skip to step 2.

Step 1: Upgrade Gloo CRDs

  1. Update the Helm repository for Gloo Platform.

    helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
    helm repo update
    
  2. Apply the Gloo custom resource definitions (CRDs) for the target version by upgrading your gloo-platform-crds Helm release.

    helm upgrade gloo-platform-crds gloo-platform/gloo-platform-crds \
       --namespace=gloo-mesh \
       --version=$UPGRADE_VERSION
    
    1. Upgrade your gloo-platform-crds Helm release in the management cluster.

      helm upgrade gloo-platform-crds gloo-platform/gloo-platform-crds \
         --kube-context $MGMT_CONTEXT \
         --namespace=gloo-mesh \
         --version=$UPGRADE_VERSION
      
    2. Upgrade your gloo-platform-crds Helm release in each workload cluster. Remember to change the context for each workload cluster that you upgrade.

      helm upgrade gloo-platform-crds gloo-platform/gloo-platform-crds \
         --kube-context $REMOTE_CONTEXT \
         --namespace=gloo-mesh \
         --version=$UPGRADE_VERSION
      

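To avoid a failed upgrade partway through, you can confirm that the target version exists in the Helm repository added in step 1 above; a quick check using the UPGRADE_VERSION variable that you set earlier:

```shell
# List matching chart versions; an empty result means the version is not in the repo
helm search repo gloo-platform/gloo-platform --versions | grep "$UPGRADE_VERSION"
```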
Step 2: Get your Helm chart values

As part of the upgrade, you can update or reuse the Helm chart values for your Gloo management server, agent, and any add-ons that you might have such as rate limiting and external authentication services.

  1. Get the Helm values file for your current version.

    1. Get your current values. Note that if you migrated from the legacy Helm charts, your Helm release might be named gloo-mgmt or gloo-mesh-enterprise instead.
      helm get values gloo-platform -n gloo-mesh > gloo-gateway-single.yaml
      open gloo-gateway-single.yaml
      
    2. Delete the first line that contains USER-SUPPLIED VALUES:, and save the file.
    3. Optional: If you maintain a separate gloo-agent-addons Helm release, get the values for that Helm release too, and delete the first line that contains USER-SUPPLIED VALUES:.
      helm get values gloo-agent-addons -n gloo-mesh-addons > gloo-agent-addons.yaml
      open gloo-agent-addons.yaml
      
    1. Get your current values for the management cluster. Note that if you migrated from the legacy Helm charts, your Helm release might be named gloo-mgmt or gloo-mesh-enterprise instead.
      helm get values gloo-platform -n gloo-mesh --kube-context $MGMT_CONTEXT > mgmt-server.yaml
      open mgmt-server.yaml
      
    2. Delete the first line that contains USER-SUPPLIED VALUES:, and save the file.
    3. Get your current values for the workload clusters. Note that if you migrated from the legacy Helm charts, your Helm release might be named gloo-agent or gloo-mesh-agent instead.
      helm get values gloo-platform -n gloo-mesh --kube-context $REMOTE_CONTEXT > agent.yaml
      open agent.yaml
      
    4. Delete the first line that contains USER-SUPPLIED VALUES:, and save the file.
    5. Optional: If you maintain a separate gloo-agent-addons Helm release, get the values for that Helm release too, and delete the first line that contains USER-SUPPLIED VALUES:.
      helm get values gloo-agent-addons -n gloo-mesh-addons --kube-context $REMOTE_CONTEXT > gloo-agent-addons.yaml
      open gloo-agent-addons.yaml
      

  2. Compare your current Helm chart values with the version that you want to upgrade to. You can get a values file for the upgrade version with the helm show values command.

    helm show values gloo-platform/gloo-platform --version $UPGRADE_VERSION > all-values.yaml
    
  3. Review the changelog for any Helm Changes that might require modifications to your Helm chart. For example, take note of the following changes in version 2.3 that you might need to address before upgrading.

    • If you used your own Prometheus instance to scrape metrics in your cluster instead of using the built-in Prometheus, enter the Prometheus URL that your instance is exposed on, such as http://kube-prometheus-stack-prometheus.monitoring:9090, in the common.prometheusUrl field. You can get this value from the --web.external-url field in your Prometheus Helm values file or by selecting Status > Command-Line-Flags from the Prometheus UI. Do not use the FQDN for the Prometheus URL.
    • If you use the AWS Lambda integration in a multicluster setup, be sure that all CloudProvider and any CloudResource resources are created in the gloo-mesh namespace of the management cluster. If you must move any resources, be sure to also update the references to these resources in the route tables for Lambda functions.
    • The legacy pipeline is deprecated and is planned to be removed in Gloo Gateway version 2.4. For a highly available and scalable telemetry solution that is decoupled from the Gloo agent and management server core functionality, migrate to the Gloo OpenTelemetry pipeline. See Gloo OpenTelemetry pipeline for more information.
  4. Edit the Helm values file or prepare the --set flags to make any changes that you want. If you do not want to use certain settings, comment them out.

Updating values in the istioInstallations section? See Upgrade managed gateway proxies for special instructions.
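Deleting the USER-SUPPLIED VALUES: header in the steps above can also be scripted instead of edited by hand. A minimal sketch: pipe the helm get values output through sed. The printf line simulates that output so the example can be tried without a cluster; in practice, replace it with your helm get values command.

```shell
# In practice: helm get values gloo-platform -n gloo-mesh | sed '/^USER-SUPPLIED VALUES:/d' > gloo-gateway-single.yaml
# Simulated here with printf so the pipeline runs standalone:
printf 'USER-SUPPLIED VALUES:\nglooMgmtServer:\n  enabled: true\n' \
  | sed '/^USER-SUPPLIED VALUES:/d'
```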
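To spot new or renamed settings, you can also diff your current values file against the defaults for the upgrade version from step 2. Note that diff exits non-zero when the files differ, so the || true guard keeps scripts from aborting; the filenames assume the single-cluster example in step 1.

```shell
# Compare current values (gloo-gateway-single.yaml) against the new chart defaults (all-values.yaml)
diff -u gloo-gateway-single.yaml all-values.yaml || true
```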

Step 3: Upgrade and verify the Helm installation

  1. Upgrade the Gloo Gateway Helm installation. Make sure to include your Helm values when you upgrade either as a configuration file in the --values flag or with --set flags. Otherwise, any previous custom values that you set might be overwritten. In single cluster setups, this might mean that your Gloo agent and ingress gateways are removed.

    1. Upgrade your Helm release. Change the release name as needed.
      helm upgrade gloo-platform gloo-platform/gloo-platform \
         --namespace gloo-mesh \
         -f gloo-gateway-single.yaml \
         --version $UPGRADE_VERSION
      
    2. Optional: If you migrated from the legacy Helm charts and maintained a separate gloo-agent-addons Helm release during the migration, upgrade that Helm release too.
      helm upgrade gloo-agent-addons gloo-platform/gloo-platform \
         --namespace gloo-mesh-addons \
         -f gloo-agent-addons.yaml \
         --version $UPGRADE_VERSION
      

    In multicluster setups, you must always upgrade the Gloo management server before upgrading the Gloo agent to avoid unexpected behavior. Note that only n-1 minor version skew is supported between the management server and the agent. For more information, see the Skew policy.

    1. Upgrade your Helm release in the management cluster. Change the release name as needed.
      helm upgrade gloo-platform gloo-platform/gloo-platform \
         --kube-context $MGMT_CONTEXT \
         --namespace gloo-mesh \
         -f mgmt-server.yaml \
         --version $UPGRADE_VERSION
      
    2. Upgrade your Helm release in each workload cluster. Change the release name as needed. Be sure to update the cluster context for each workload cluster that you repeat this command for.
      helm upgrade gloo-platform gloo-platform/gloo-platform \
         --kube-context $REMOTE_CONTEXT \
         --namespace gloo-mesh \
         -f agent.yaml \
         --version $UPGRADE_VERSION
      
    3. Optional: If you migrated from the legacy Helm charts and maintained a separate gloo-agent-addons Helm release during the migration, upgrade that Helm release in each workload cluster too. Be sure to update the cluster context for each workload cluster that you repeat this command for.
      helm upgrade gloo-agent-addons gloo-platform/gloo-platform \
         --kube-context $REMOTE_CONTEXT \
         --namespace gloo-mesh-addons \
         -f gloo-agent-addons.yaml \
         --version $UPGRADE_VERSION
      

  2. Optional: Check that the Gloo management and agent resources are connected.

    meshctl check
    
  3. Confirm that the server components such as gloo-mesh-mgmt-server run the version that you upgraded to.

    meshctl version
    

    Example output:

       "server": [
       {
         "Namespace": "gloo-mesh",
         "components": [
           {
             "componentName": "gloo-mesh-mgmt-server",
             "images": [
                {
                 "name": "gloo-mesh-mgmt-server",
                 "domain": "gcr.io",
                 "path": "gloo-mesh-mgmt-server",
                 "version": "2.3.22"
               }
             ]
           },
       

  4. Multicluster setups only: Confirm that the agent components such as gloo-mesh-agent run the version that you upgraded to.

    meshctl version --kubecontext ${REMOTE_CONTEXT}
    

    Example output:

       {
             "componentName": "gloo-mesh-agent",
             "images": [
               {
                 "name": "gloo-mesh-agent",
                 "domain": "gcr.io",
                 "path": "gloo-mesh/gloo-mesh-agent",
                 "version": "2.3.22"
               }
             ]
           },
       
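If you prefer not to read through the full JSON, you can filter the meshctl version output down to just the image versions. This assumes jq is installed; the echo line demos the filter against a sample of the JSON shape shown in the example output above.

```shell
# In practice: meshctl version | jq -r '.. | objects | select(has("version")) | .version'
# Demo against a sample of the output shape:
echo '{"server":[{"components":[{"images":[{"name":"gloo-mesh-mgmt-server","version":"2.3.22"}]}]}]}' \
  | jq -r '.. | objects | select(has("version")) | .version'
# → 2.3.22
```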

Next steps

Now that you have upgraded Gloo, you must upgrade your meshctl CLI to the matching version. Depending on the Gloo version support, you might also want to upgrade Kubernetes in your clusters.

  1. Upgrade the meshctl CLI to the same version as your Gloo installation.
  2. Optional: If the new version of Gloo supports a more recent version of Kubernetes, you can upgrade Kubernetes on your cluster. For more information, consult your cluster infrastructure provider.
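For example, the meshctl CLI can typically be reinstalled at the matching version with the Solo.io installer script. The URL and environment variable name here follow the standard Solo.io installation instructions; verify them against the current meshctl install documentation.

```shell
# Reinstall meshctl at the version you upgraded to (note the 'v' prefix here, unlike the Helm chart version)
curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.3.22 sh -
```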

Upgrade managed gateway proxies

During step 2 of the Helm upgrade process, you might make changes to the istioInstallations section of your Helm values file to update your Istio control plane and gateway proxies. Depending on the type of change, you apply updates to the installations in one of the following ways:

Istio version 1.17 does not support the Gloo legacy metrics pipeline. If you run the legacy metrics pipeline, before you upgrade or deploy gateway proxies with Istio 1.17, be sure that you set up the Gloo OpenTelemetry (OTel) pipeline instead in your new or existing Gloo Gateway installation.

Canary upgrades (recommended)

In a canary upgrade, you install another Istio installation (canary) alongside your active installation. Note that during a canary upgrade, the validating admissions webhook is enabled only for the canary installation to prevent issues that occur when multiple webhooks are enabled.

Perform a canary upgrade when you change one of the following fields:

To perform a canary upgrade:

  1. Follow the steps in this guide to perform a regular upgrade of your Gloo Gateway installation. When you edit the istioInstallations.controlPlane and istioInstallations.northSouthGateways sections of your Helm values file, add another installation entry for the canary revision, and leave the entry for your current installation as-is. For the canary revision, be sure to set defaultRevision and activeGateway to false so that only the existing revisions continue to run.

    For example, you might add the following installation entries for the Istio control plane and ingress gateway alongside your existing entries. If you have a Gloo Gateway license, you might also have entries for the ingress gateway proxy in the northSouthGateways section.

    istioInstallations:
        controlPlane:
            enabled: true
            installations:
                # EXISTING revision
                - clusters:
                      # Keep this field set to TRUE
                    - defaultRevision: true
                      name: cluster1
                      trustDomain: ""
                  istioOperatorSpec:
                    hub: $REPO
                    tag: 1.17.4-solo
                    profile: minimal
                    namespace: istio-system
                    ...
                  revision: 1-17-4
                # NEW revision
                - clusters:
                      # Set this field to FALSE
                    - defaultRevision: false
                      name: cluster1
                      trustDomain: ""
                  istioOperatorSpec:
                    hub: $REPO
                    tag: 1.18.2-solo
                    profile: minimal
                    namespace: istio-system
                    ...
                  revision: 1-18-2
        eastWestGateways: null
        enabled: true
        northSouthGateways:
            - enabled: true
              installations:
                # EXISTING revision
                - clusters:
                      # Keep this field set to TRUE
                    - activeGateway: true
                      name: cluster1
                      trustDomain: ""
                  gatewayRevision: 1-17-4
                  istioOperatorSpec:
                    hub: $REPO
                    tag: 1.17.4-solo
                    profile: empty
                    namespace: gloo-mesh-gateways
                    ...
                # NEW revision
                - clusters:
                      # Set this field to FALSE
                    - activeGateway: false
                      name: cluster1
                      trustDomain: ""
                  gatewayRevision: 1-18-2
                  istioOperatorSpec:
                    hub: $REPO
                    tag: 1.18.2-solo
                    profile: empty
                    namespace: gloo-mesh-gateways
                    ...
              name: istio-ingressgateway
    

    Updating the minor version of Istio? In your canary revision section, be sure to update both the repo key in the hub field, and the Istio version in the tag field. You can get the repo key for the Istio version that you want to install from the 'Istio images built by Solo.io' support article.

    For most use cases, you can set the revision and the gatewayRevision to the same version. However, gateway installations can point to any istiod control plane revision by using the controlPlaneRevision field. For simplicity, if you do not specify controlPlaneRevision, the gateway installation uses a control plane with the same revision as itself.

  2. After you apply the Helm upgrade with your updated values file, verify that Istio resources for the canary installation are created. For example, if you updated the Istio minor version to 1-18-2, verify that resources are created in the gm-iop-1-18-2 namespace, and that resources for 1-18-2 are created alongside the existing resources for the previous version in the istio-system and gloo-mesh-gateways namespaces. Note that the gateway load balancers for the canary revision contain the revision in the name, such as istio-ingressgateway-1-18-2.

    kubectl get all -n gm-iop-1-18-2
    kubectl get all -n istio-system
    kubectl get all -n gloo-mesh-gateways
    

    Running into issues or seeing a stuck canary upgrade? In testing environments, you can clear your configuration by manually replacing the GatewayLifecycleManager CR.

  3. After performing any necessary testing, switch to the new Istio control plane and ingress gateway revisions.

    1. Get your Helm values file. Change the release name as needed.
      helm get values gloo-platform -n gloo-mesh > gloo-gateway-single.yaml
      open gloo-gateway-single.yaml
      
    2. Change defaultRevision and activeGateway to false for the old revision and to true for the new revision.
      New load balancers are created for the canary gateways. To instead change the control plane revision in use by the existing gateway load balancers, you can set the istio.io/rev label on the gateway deployment, which triggers a rolling restart.
      istioInstallations:
          controlPlane:
              enabled: true
              installations:
                  # OLD revision
                  - clusters:
                        # Set this field to FALSE
                      - defaultRevision: false
                        name: cluster1
                        trustDomain: ""
                    istioOperatorSpec:
                      hub: $REPO
                      tag: 1.17.4-solo
                      profile: minimal
                      namespace: istio-system
                      ...
                    revision: 1-17-4
                  # NEW revision
                  - clusters:
                        # Set this field to TRUE
                      - defaultRevision: true
                        name: cluster1
                        trustDomain: ""
                    istioOperatorSpec:
                      hub: $REPO
                      tag: 1.18.2-solo
                      profile: minimal
                      namespace: istio-system
                      ...
                    revision: 1-18-2
          eastWestGateways: null
          enabled: true
          northSouthGateways:
              - enabled: true
                installations:
                  # OLD revision
                  - clusters:
                        # Set this field to FALSE
                      - activeGateway: false
                        name: cluster1
                        trustDomain: ""
                    gatewayRevision: 1-17-4
                    istioOperatorSpec:
                      hub: $REPO
                      tag: 1.17.4-solo
                      profile: empty
                      namespace: gloo-mesh-gateways
                      ...
                  # NEW revision
                  - clusters:
                        # Set this field to TRUE
                      - activeGateway: true
                        name: cluster1
                        trustDomain: ""
                    gatewayRevision: 1-18-2
                    istioOperatorSpec:
                      hub: $REPO
                      tag: 1.18.2-solo
                      profile: empty
                      namespace: gloo-mesh-gateways
                      ...
                name: istio-ingressgateway
      
    3. Upgrade your Helm release. Change the release name as needed.
      helm upgrade gloo-platform gloo-platform/gloo-platform \
         --namespace gloo-mesh \
         -f gloo-gateway-single.yaml \
         --version $UPGRADE_VERSION
      
  4. After your Helm upgrade completes, verify that the active gateways for the new revision are created. The active gateways do not have the revision appended to their names. Gateways for the previously active revision also remain in the namespace, in case a rollback is required.

    kubectl get all -n gloo-mesh-gateways
    

    Example output, in which the active gateway (istio-ingressgateway) for the new revision and inactive gateway (such as istio-ingressgateway-1-17-4) for the old revision are created:

    NAME                          TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                                                       AGE
    istio-ingressgateway          LoadBalancer   10.44.4.140   34.150.235.221   15021:31321/TCP,80:32525/TCP,443:31826/TCP                    48s
    istio-ingressgateway-1-17-4   LoadBalancer   10.56.15.36   34.145.163.61    15021:31936/TCP,80:30196/TCP,443:32286/TCP,15443:31851/TCP   45s
    
  5. To uninstall the previous installations, or if you need to uninstall the canary installations, you can edit your Helm values file to remove the revision entries from the istioInstallations.controlPlane.installations and istioInstallations.northSouthGateways.installations lists. Then, upgrade your Gloo Gateway Helm release with your updated values file.

  6. If you also use Gloo Mesh Enterprise alongside Gloo Gateway, see step 6 in the Gloo Mesh upgrade documentation to upgrade your workloads’ Istio sidecars.
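As an alternative to switching traffic to new canary load balancers, step 3 mentions pointing the existing gateway load balancers at the new control plane revision with the istio.io/rev label. A sketch, assuming revision 1-18-2 and the default gateway deployment name; setting the label on the pod template is what triggers the rolling restart:

```shell
# Relabel the gateway pod template to use the 1-18-2 control plane revision
kubectl patch deployment istio-ingressgateway -n gloo-mesh-gateways --type merge \
  -p '{"spec":{"template":{"metadata":{"labels":{"istio.io/rev":"1-18-2"}}}}}'
```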

In-place upgrades

In an in-place upgrade, Gloo Gateway upgrades your existing control plane or gateway installations. In-place upgrades are triggered when you apply changes to one of the following fields in the istioInstallations section:

To update the patch version of Istio or update meshConfig values of the Istio installations:

  1. Follow the steps in this guide to perform a regular upgrade of your Gloo Gateway installation and include your Istio changes in your Helm values file. For example, in a single-cluster setup, you might edit your Helm values file to update the patch version of Istio in the istioInstallations.controlPlane.installations.istioOperatorSpec.tag and istioInstallations.northSouthGateways.installations.istioOperatorSpec.tag fields. After you apply the updates in your Helm upgrade of the gloo-platform chart, Gloo starts an in-place upgrade of the istiod control plane and the ingress gateway proxy.

  2. After your Helm upgrade completes, restart your gateway proxy pods. For example, you might use the following command to roll out a restart of the istio-ingressgateway-1-18-2 deployment.

    kubectl rollout restart -n gloo-mesh-gateways deployment/istio-ingressgateway-1-18-2
    
  3. Verify that your Istio resources are updated.

    kubectl get all -n gm-iop-1-18-2
    kubectl get all -n istio-system
    kubectl get all -n gloo-mesh-gateways
    
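After the restart in step 2, you can wait for the rollout to complete before verifying resources. The deployment name and revision are assumed from the example above.

```shell
# Block until the restarted gateway deployment is fully rolled out, or time out after two minutes
kubectl rollout status -n gloo-mesh-gateways deployment/istio-ingressgateway-1-18-2 --timeout=120s
```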

Testing only: Manually replacing the GatewayLifecycleManager CR

In testing or demo setups, you can quickly upgrade your managed gateway proxies by manually deleting the GatewayLifecycleManager CR, and upgrading your Gloo Gateway installation with your updated gateway values in your Helm values file. Note that you can also use this method to clear your managed gateway configurations if a canary upgrade becomes stuck.

This method is supported only for testing scenarios, because your ingress gateway proxies are temporarily removed in this process.

  1. Get the name of your GatewayLifecycleManager resource. Typically, this resource is named istio-ingressgateway.

    kubectl get GatewayLifecycleManager -A
    
  2. Delete the resource.

    kubectl delete GatewayLifecycleManager istio-ingressgateway -n gloo-mesh
    
  3. Verify that your gateway proxy is removed. It might take a few minutes for the service to be deleted.

    kubectl get all -n gloo-mesh-gateways
    
  4. Optional: If you also need to make changes to your Istio control plane, clear the istiod configuration.

    1. Get the name of your IstioLifecycleManager resource. Typically, this resource is named gloo-platform.
      kubectl get IstioLifecycleManager -A
      
    2. Delete the resource.
      kubectl delete IstioLifecycleManager gloo-platform -n gloo-mesh
      
    3. Verify that your istiod control plane is removed.
      kubectl get all -n istio-system
      
  5. Follow the steps in this guide to perform a regular upgrade of your Gloo Gateway installation and include your Istio changes in your Helm values file. After you apply the updates in your Helm upgrade of the gloo-platform chart, Gloo re-creates the ingress gateway proxy, and if applicable, the istiod control plane.

  6. After your Helm upgrade completes, verify that your Istio resources are re-created.

    # Change the revision as needed
    kubectl get all -n gm-iop-1-18-2
    kubectl get all -n istio-system
    kubectl get all -n gloo-mesh-gateways
    

Update your Gloo license

Before your Gloo license expires, you can update the license by performing a Helm upgrade. If you use Gloo Gateway along with other Gloo products such as Gloo Mesh and Gloo Network, you can also update those licenses.

For example, if you notice that your Gloo control plane deployments are in a crash loop, your Gloo license might be expired. You can check the logs for one of the deployments, such as the management server, to look for an error message similar to the following:

meshctl logs mgmt
{"level":"fatal","ts":1628879186.1552186,"logger":"gloo-mesh-mgmt-server","caller":"cmd/main.go:24","msg":"License is invalid or expired, crashing - license expired", ...

To update your license key in your Gloo installation:

  1. Get a new Gloo license key by contacting your account representative. If you use Gloo Gateway along with other Gloo products such as Gloo Mesh and Gloo Network, make sure to ask for up-to-date license keys for all your products.
  2. Save the new license key as an environment variable.
    export GLOO_GATEWAY_LICENSE_KEY=<new-key-string>
    
    export GLOO_MESH_LICENSE_KEY=<new-key-string>
    
    export GLOO_NETWORK_LICENSE_KEY=<new-key-string>
    
  3. Perform a regular upgrade of your Gloo installation. During the upgrade, either update the license value in your Helm values file, or provide your new license key in a --set flag in the helm upgrade command. For example, to update your Gloo Gateway license key, either change the value of the licensing.glooGatewayLicenseKey setting in your Helm values file, or supply the --set licensing.glooGatewayLicenseKey=$GLOO_GATEWAY_LICENSE_KEY flag when you upgrade.
  4. Optional: If your license expired and the management server pods are in a crash loop, restart the management server pods. If you updated the license before expiration, skip this step.
    kubectl rollout restart -n gloo-mesh deployment/gloo-mesh-mgmt-server
    
  5. Verify that your license check is now valid, and no errors are reported.
    meshctl check
    

    Example output:

    🟢 License status
    
    INFO  gloo-gateway enterprise license expiration is 25 Aug 24 10:38 CDT
    INFO  Valid GraphQL license module found
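For reference, the --set approach from step 3 might look like the following, assuming the release name and values file from the single-cluster upgrade steps earlier in this guide:

```shell
# Upgrade in place, overriding only the Gloo Gateway license key
helm upgrade gloo-platform gloo-platform/gloo-platform \
   --namespace gloo-mesh \
   -f gloo-gateway-single.yaml \
   --version $UPGRADE_VERSION \
   --set licensing.glooGatewayLicenseKey=$GLOO_GATEWAY_LICENSE_KEY
```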