You can use this guide to upgrade the version of your Gloo components, such as the management server and agents, or to apply changes to the components’ configuration settings.

Considerations

Consider the following rules before you plan your Gloo Network upgrade.

Testing upgrades

During the upgrade, the data plane continues to run, but you might not be able to modify configurations through the management plane. Because zero downtime is not guaranteed, test the upgrade in a staging environment before you upgrade your production environment.

Patch and minor versions

Patch version upgrades:
You can skip patch versions within the same minor release. For example, you can upgrade from version 2.5.0 to 2.5.6 directly, and skip the patch versions in between.

Minor version upgrades:

  • Always upgrade to the latest patch version of the target minor release. For example, if you want to upgrade from version 2.4.15 to 2.5.x, and 2.5.6 is the latest patch version, upgrade to that version and skip any previous patch versions for that minor release. Do not upgrade to a lower patch version, such as 2.5.0, 2.5.1, and so on.
  • Do not skip minor versions during your upgrade. Upgrade minor release versions one at a time. For example, if you want to upgrade from 2.3.x to 2.5.x, you must first upgrade to the latest patch version of the 2.4 minor release. After you upgrade to 2.4.x, you can then plan your upgrade to the latest patch version of the 2.5.x release.

Multicluster only: Version skew policy for management and remote clusters

Plan to always upgrade your Gloo management server and agents to the same target version. Always upgrade the Gloo management server first. Then, roll out the upgrade to the Gloo agents in your workload clusters. During this upgrade process, your management server and agents can be one minor version apart.

For example, let’s say you want to upgrade from 2.4.15 to 2.5.x. Start by upgrading your management server to the latest patch version of the 2.5 minor release. Your management server and agent are still compliant as they are one minor version apart. Then, roll out the 2.5 minor release upgrade to the agents in your workload clusters.

If you plan to upgrade across more than one minor release, perform one minor release upgrade at a time. For example, to upgrade your management server and agent from 2.3.x to 2.5.x, first upgrade your management server to the latest patch version of the 2.4 minor release. Your management server and agent are compliant because they are one minor version apart. Then, upgrade your agents to the 2.4 minor release. After you verify the 2.4 upgrade, use the same approach to upgrade the management server and agents from 2.4 to the target 2.5 minor release.

If both your management server and agent run the same minor version, the agent can run any patch version that is equal to or lower than the management server’s patch version.

Consider the following example version skew scenarios:

| Supported? | Management server version | Agent version | Requirement |
|------------|---------------------------|---------------|-------------|
| ✅ | 2.4.4 | 2.4.2 | The management server and agents run the same minor version. The agent patch version is equal to or lower than the management server’s patch version. |
| ❌ | 2.4.4 | 2.4.5 | The agent runs the same minor version as the server, but has a patch version greater than the server’s. |
| ✅ | 2.4.4 | 2.3.4 | The agent runs a minor version no greater than n-1 behind the server. |
| ❌ | 2.4.4 | 2.2.9 | The agent runs a minor version that is more than n-1 behind the server. |
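
To see which versions your management server and agents currently run, you can use the meshctl version command that is also used later in this guide. For example:

      meshctl version --kubecontext $MGMT_CONTEXT

The output lists the image versions of the Gloo components in the cluster that the context points to. Repeat the command with each workload cluster context to check the agent versions.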

Breaking changes when upgrading from the previous version

For a summary of the main changes in the release, review the release notes.

Upstream Prometheus upgrade

Gloo Network includes a built-in Prometheus server to help monitor the health of your Gloo components. This release of Gloo upgrades the Prometheus community Helm chart from version 19.7.2 to 25.11.0. As part of this upgrade, upstream Prometheus changed the selector labels for the deployment, which requires recreating the deployment. To help with this process, the Gloo Helm chart includes a pre-upgrade hook that automatically recreates the Prometheus deployment during a Helm upgrade. This breaking change impacts upgrades from previous versions to version 2.4.10, 2.5.1, or 2.6.0 and later.

If you do not want the redeployment to happen automatically, you can disable this process by setting the prometheus.skipAutoMigration Helm value to true. For example, you might need to skip the automatic migration if you use Argo CD, because Argo CD converts Helm pre-upgrade hooks to Argo PreSync hooks, which can cause issues. To ensure that the Prometheus server is deployed with the right version, follow these steps:

  1. Confirm that you have an existing Prometheus deployment at the old Helm chart version, chart: prometheus-19.7.2.
      kubectl get deploy -n gloo-mesh prometheus-server -o yaml | grep chart
      
  2. Delete the Prometheus deployment. Note that while Prometheus is deleted, you cannot observe Gloo performance metrics.
      kubectl delete deploy -n gloo-mesh prometheus-server
      
  3. In your Helm values file, set the prometheus.skipAutoMigration field to true.
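      For example, add the following snippet to your Helm values file:

      prometheus:
        skipAutoMigration: true
      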
  4. Continue with the Helm upgrade of Gloo Network. The upgrade recreates the Prometheus server deployment at the new version.

Prometheus annotations removed

In Gloo version 2.5.0, the prometheus.io/port: "<port_number>" annotation was removed from the Gloo management server and agent. However, the prometheus.io/scrape: true annotation is still present. If you have another Prometheus instance that runs in your cluster, and it is not set up with custom scraping jobs for the Gloo management server and agent, the instance automatically scrapes all ports on the management server and agent pods. This can lead to error messages in the management server and agent logs. Note that this issue is resolved in version 2.5.2. To resolve this issue in Gloo version 2.5.0 or 2.5.1, see Run another Prometheus instance alongside the built-in one.
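
To check which Prometheus annotations are currently set on these workloads, you can inspect the pod annotations. The app=gloo-mesh-mgmt-server label in this example is an assumption; adjust it to match the labels in your installation.

      kubectl get pods -n gloo-mesh -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[*].metadata.annotations}'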

Step 1: Prepare to upgrade

  1. Review changes from the previous version.

  2. Check that your underlying Kubernetes platform and Istio service mesh run supported versions for the Gloo Network version that you want to upgrade to.

    1. Review the supported versions.
    2. Compare the supported version against the versions of Kubernetes and Istio that you run in your clusters.
    3. If necessary, upgrade Istio or Kubernetes to a version that is supported by the Gloo Network version that you want to upgrade to.
  3. Set the Gloo Network version that you want to upgrade to as an environment variable. The latest version is used as an example. Append -fips for a FIPS-compliant image, such as 2.5.6-fips. Do not include v before the version number.

      export UPGRADE_VERSION=2.5.6
      

Step 2: Upgrade the meshctl CLI

Upgrade the meshctl CLI to the version of Gloo Network you want to upgrade to.

  1. Re-install meshctl to the upgrade version.

      curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v$UPGRADE_VERSION sh -
      
  2. Verify that the client version matches the version you installed.

      meshctl version
      

    Example output:

      {
        "client": {
          "version": "2.5.6"
        },
      

Step 3: Upgrade Gloo Network

Upgrade your Gloo Network installation. The steps differ based on whether you run Gloo Network in a single-cluster or multicluster environment.

Single cluster

  1. Update the gloo-platform Helm repo.

      helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
      helm repo update
      
  2. Apply the Gloo custom resource definitions (CRDs) for the upgrade version.

      helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
        --namespace gloo-mesh \
        --version $UPGRADE_VERSION \
        --set installEnterpriseCrds=false
      
  3. Get the Helm values file for your current version.

      helm get values gloo-platform -o yaml -n gloo-mesh > gloo-single.yaml
      open gloo-single.yaml
      
  4. Compare your current Helm chart values with the version that you want to upgrade to. You can get a values file for the upgrade version with the helm show values command.

      helm show values gloo-platform/gloo-platform --version $UPGRADE_VERSION > all-values.yaml
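
      To preview what changed between chart versions, you can also diff the default values of your current version against those of the upgrade version. For example, assuming <current_version> is the version that you currently run:

      helm show values gloo-platform/gloo-platform --version <current_version> > current-default-values.yaml
      diff current-default-values.yaml all-values.yaml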
      
  5. Make any changes that you want, such as modifications required for breaking changes or to enable new features, by editing your gloo-single.yaml Helm values file or preparing --set flags. If you do not want to use certain settings, comment them out.

  6. Optional: If you plan to increase the number of I/O threads in Redis, scale down the Gloo management server to 0 replicas.

      kubectl scale deployment gloo-mesh-mgmt-server --replicas=0 -n gloo-mesh --context $MGMT_CONTEXT
      
  7. Upgrade the Gloo Network Helm installation.

      helm upgrade gloo-platform gloo-platform/gloo-platform \
        --namespace gloo-mesh \
        -f gloo-single.yaml \
        --version $UPGRADE_VERSION
      
  8. Confirm that Gloo components, such as the gloo-mesh-mgmt-server, run the version that you upgraded to.

      meshctl version
      

    Example output:

       "server": [
       {
         "Namespace": "gloo-mesh",
         "components": [
           {
             "componentName": "gloo-mesh-mgmt-server",
             "images": [
                {
                 "name": "gloo-mesh-mgmt-server",
                 "domain": "gcr.io",
                 "path": "gloo-mesh-mgmt-server",
                 "version": "2.5.6"
               }
             ]
           },
       

Multicluster
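
The multicluster steps use the $MGMT_CONTEXT and $REMOTE_CONTEXT environment variables for the kubeconfig contexts of your management and workload clusters. If you did not already set them, you can do so now. The context names are placeholders; replace them with your own.

      export MGMT_CONTEXT=<management-cluster-context>
      export REMOTE_CONTEXT=<workload-cluster-context>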

  1. Update the gloo-platform Helm repo.

      helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
      helm repo update
      
  2. Get the Helm values files for your current version.

    1. Get your current values for the management cluster.
        helm get values gloo-platform -n gloo-mesh -o yaml --kube-context $MGMT_CONTEXT > mgmt-plane.yaml
        open mgmt-plane.yaml
        
    2. Get your current values for the workload clusters.
        helm get values gloo-platform -n gloo-mesh -o yaml --kube-context $REMOTE_CONTEXT > data-plane.yaml
        open data-plane.yaml
        
  3. Compare your current Helm chart values with the version that you want to upgrade to. You can get a values file for the upgrade version with the helm show values command.

      helm show values gloo-platform/gloo-platform --version $UPGRADE_VERSION > all-values.yaml
      
  4. Make any changes that you want, such as modifications required for breaking changes or to enable new features, by editing your mgmt-plane.yaml and data-plane.yaml Helm values files or preparing the --set flags. If you do not want to use certain settings, comment them out.

  5. Optional: If you plan to increase the number of I/O threads in Redis, scale down the Gloo management server to 0 replicas.

      kubectl scale deployment gloo-mesh-mgmt-server --replicas=0 -n gloo-mesh --context $MGMT_CONTEXT
      
  6. Upgrade the Gloo Network Helm releases in your management cluster.

    1. Apply the Gloo custom resource definitions (CRDs) for the upgrade version in the management cluster.
        helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
          --kube-context $MGMT_CONTEXT \
          --namespace gloo-mesh \
          --version $UPGRADE_VERSION \
          --set installEnterpriseCrds=false
        
    2. Upgrade your Helm release in the management cluster. Make sure to include your Helm values when you upgrade, either in a configuration file with the --values flag or with --set flags. Otherwise, any previous custom values that you set might be overwritten.
        helm upgrade gloo-platform gloo-platform/gloo-platform \
          --kube-context $MGMT_CONTEXT \
          --namespace gloo-mesh \
          -f mgmt-plane.yaml \
          --version $UPGRADE_VERSION
        
    3. Confirm that the management plane components, such as the gloo-mesh-mgmt-server, run the version that you upgraded to.
        meshctl version --kubecontext $MGMT_CONTEXT
        
      Example output:

            "server": [
              {
                "Namespace": "gloo-mesh",
                "components": [
                  {
                    "componentName": "gloo-mesh-mgmt-server",
                    "images": [
                      {
                        "name": "gloo-mesh-mgmt-server",
                        "domain": "gcr.io",
                        "path": "gloo-mesh-mgmt-server",
                        "version": "2.5.6"
                      }
                    ]
                  },
            
  7. Upgrade the Gloo Network Helm releases in your workload clusters. Repeat these steps for each workload cluster, and be sure to update the cluster context each time.

    1. Apply the Gloo custom resource definitions (CRDs) for the upgrade version in each workload cluster.

        helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
          --kube-context $REMOTE_CONTEXT \
          --namespace gloo-mesh \
          --version $UPGRADE_VERSION \
          --set installEnterpriseCrds=false
        
    2. Upgrade your Helm release in each workload cluster. Make sure to include your Helm values when you upgrade, either in a configuration file with the --values flag or with --set flags. Otherwise, any previous custom values that you set might be overwritten.

        helm upgrade gloo-platform gloo-platform/gloo-platform \
          --kube-context $REMOTE_CONTEXT \
          --namespace gloo-mesh \
          -f data-plane.yaml \
          --version $UPGRADE_VERSION
        
    3. Confirm that the data plane components, such as the gloo-mesh-agent, run the version that you upgraded to.

        meshctl version --kubecontext $REMOTE_CONTEXT
        

      Example output:

            {
              "componentName": "gloo-mesh-agent",
              "images": [
                {
                  "name": "gloo-mesh-agent",
                  "domain": "gcr.io",
                  "path": "gloo-mesh/gloo-mesh-agent",
                  "version": "2.5.6"
                }
              ]
            },
            

  8. Check that the Gloo management and agent components are connected.

      meshctl check --kubecontext $MGMT_CONTEXT
      

Update your Gloo license

Before your Gloo license expires, you can update the license by patching the license key secret. If you use Gloo Mesh along with other Gloo products such as Gloo Gateway, you can also update those licenses.

For example, if you notice that your Gloo management plane deployments are in a crash loop, your Gloo license might be expired. You can check the status of your license with the meshctl license check command.

  • To pass in a license key directly, encode the key in base64 and pass it in the --key flag. For example, to check your Gloo Mesh Enterprise license key, you can run the following command:
      meshctl license check --key $(echo ${GLOO_MESH_LICENSE_KEY} | base64 -w0) --context ${MGMT_CONTEXT}
      
  • If you store your license keys in a Kubernetes secret, you can pass the secret YAML file in the --secrets-file flag instead.
      meshctl license check --secrets-file license-keys.yaml --context ${MGMT_CONTEXT}
      

Example output for an expired license:

  WARNING  Your gloo-mesh license expired on 2024-01-24 19:30:53 +0100 CET. To get a new license, contact Support.
  ERROR  License is expired. For more info, see https://docs.solo.io/gloo-mesh-enterprise/latest/setup/prepare/licensing/#update-licenses
  

To update your license key in your Gloo installation:

  1. Get a new Gloo license key by contacting your account representative. If you use Gloo Mesh along with other Gloo products such as Gloo Gateway, make sure to ask for up-to-date license keys for all your products.

  2. Save the new license key as an environment variable.
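
      For example (the value is a placeholder; use the key that you received):

      export GLOO_MESH_LICENSE_KEY=<new-license-key>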

  3. Update the license secret to use the new Gloo license key.
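
      For example, if your license keys are stored in a secret named license-keys in the gloo-mesh namespace, you can patch the gloo-mesh-license-key entry as follows. The secret name is an assumption; check your installation for the actual name.

      kubectl patch secret license-keys -n gloo-mesh --context ${MGMT_CONTEXT} \
        --type merge \
        -p "{\"stringData\":{\"gloo-mesh-license-key\":\"${GLOO_MESH_LICENSE_KEY}\"}}"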

  4. Optional: If your license expired and the management server pods are in a crash loop, restart the management server pods. If you updated the license before expiration, skip this step.

      kubectl rollout restart -n gloo-mesh deployment/gloo-mesh-mgmt-server --context ${MGMT_CONTEXT}
      
  5. Verify that your license check is now valid, and no errors are reported.

    • To pass in a license key directly, encode the key in base64 and pass it in the --key flag. For example, to check your Gloo Mesh Enterprise license key, you can run the following command:
        meshctl license check --key $(echo ${GLOO_MESH_LICENSE_KEY} | base64 -w0) --context ${MGMT_CONTEXT}
        
    • If you store your license keys in a Kubernetes secret, you can pass the secret YAML file in the --secrets-file flag instead.
        meshctl license check --secrets-file license-keys.yaml --context ${MGMT_CONTEXT}
        

    Example output:

      INFO  License key gloo-mesh-license-key for product gloo-mesh is valid. Expires at 08 Oct 24 12:31 CEST
      SUCCESS  Licenses are valid
      

Upgrade the Cilium CNI

To upgrade the Cilium CNI in your clusters, such as to update the version of the Solo distribution of the Cilium image or to change a setting, you can follow the upgrade guide in the Cilium documentation.

  1. To ensure your Helm values are not overwritten, save your current Helm values for the Cilium CNI installation.

      helm get values cilium -n kube-system -o yaml > solo-cilium.yaml
      
  2. Follow the upgrade guide in the Cilium documentation. In the helm install and helm upgrade commands, be sure to pass in your Helm values by using the -f solo-cilium.yaml flag.
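
      For example, an upgrade command might look like the following. This is a minimal sketch: the chart repository, release name, and target version are assumptions, so follow the Cilium upgrade guide for the exact command for your environment.

      helm upgrade cilium cilium/cilium \
        --namespace kube-system \
        --version <target-cilium-version> \
        -f solo-cilium.yaml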