1.8.0+ Upgrade Notice

In this guide we will describe the necessary steps to upgrade your Gloo Edge or Gloo Edge Enterprise deployments to their 1.8 versions. We recommend that you follow these steps only after you have followed our guide to upgrade to 1.7.

This upgrade guide also assumes that Gloo Edge was installed via helm or with glooctl version 1.7.0+ (i.e., Gloo Edge is a helm release named “gloo”, which you can confirm exists by running helm ls --all-namespaces).
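
For example (assuming the default gloo-system install namespace):

helm ls --all-namespaces
# expect a release named "gloo" in the gloo-system namespace, deployed from a gloo-1.7.x or gloo-ee-1.7.x chart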

Also, please make sure to check out our general configuration recommendations to avoid downtime during upgrades.

Breaking Changes

Open Source

New VirtualHostOption and RouteOption CRDs

In v1.8.0, we introduced two new custom resource definitions (CRDs): VirtualHostOption and RouteOption. These CRDs are automatically applied to your cluster when performing a helm install operation. However, they are not applied when performing an upgrade. This is a deliberate design choice on the part of the Helm maintainers, given the risk associated with changing CRDs. Given this limitation, we need to apply the new CRDs to the cluster before running helm upgrade.

Installing the new VirtualHostOption and RouteOption CRDs

You can add the new CRDs to your cluster in two ways. The first is to supply URLs that point to the CRD templates in the public Gloo Edge GitHub repository:

Open Source:

kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo/v1.8.0/install/helm/gloo/crds/gateway.solo.io_v1_VirtualHostOption.yaml
kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo/v1.8.0/install/helm/gloo/crds/gateway.solo.io_v1_RouteOption.yaml
helm repo update
helm upgrade -n gloo-system gloo gloo/gloo --version=1.8.0

Enterprise:

kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo/v1.8.0/install/helm/gloo/crds/gateway.solo.io_v1_VirtualHostOption.yaml
kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo/v1.8.0/install/helm/gloo/crds/gateway.solo.io_v1_RouteOption.yaml
helm repo update
helm upgrade -n gloo-system gloo glooe/gloo-ee --version=1.8.0

The second option involves using the templates that are shipped in the Gloo Edge and Gloo Edge Enterprise charts.

Open Source:

helm repo update
helm pull gloo/gloo --version 1.8.0 --untar
kubectl apply -f gloo/crds/gateway.solo.io_v1_VirtualHostOption.yaml
kubectl apply -f gloo/crds/gateway.solo.io_v1_RouteOption.yaml

Enterprise:

helm repo update
helm pull glooe/gloo-ee --version 1.8.0 --untar
kubectl apply -f gloo-ee/charts/gloo/crds/gateway.solo.io_v1_VirtualHostOption.yaml
kubectl apply -f gloo-ee/charts/gloo/crds/gateway.solo.io_v1_RouteOption.yaml

You can verify that the new CRDs have been successfully applied by running the following commands:

kubectl get crds virtualhostoptions.gateway.solo.io
kubectl get crds routeoptions.gateway.solo.io
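
If the CRDs were applied, each command prints the CRD name and its creation timestamp, similar to the following (timestamp illustrative):

NAME                                 CREATED AT
virtualhostoptions.gateway.solo.io   2021-06-01T00:00:00Z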

CRDs with Validation Schemas

Gloo Edge CRDs now include an OpenAPI v3.0 validation schema with structural schema constraints (https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#specifying-a-structural-schema). Previously, custom resources could be defined with YAML that contained either snake_case or camelCase fields. For example, the following definition was valid:

apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
  name: default-productpage-9080
  namespace: gloo-system
spec:
  kube:
    selector:
      app: productpage
    service_name: productpage
    service_namespace: default
    service_port: 9080

The validation schemas require that fields be defined using camelCase. The previous definition should be converted to:

apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
  name: default-productpage-9080
  namespace: gloo-system
spec:
  kube:
    selector:
      app: productpage
    serviceName: productpage
    serviceNamespace: default
    servicePort: 9080

If you do not make this change, you will see the following type of error: ValidationError(Upstream.spec.kube): unknown field "service_name" in io.solo.gloo.v1.Upstream.spec.kube
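
Before upgrading, you can check whether any stored resources still use the old snake_case names; a minimal sketch for the Upstream fields shown above (extend the pattern for any other fields you use):

kubectl get upstreams -n gloo-system -o yaml | grep -nE 'service_(name|namespace|port)'
# any matches point at resources that need to be converted to camelCase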

CRD field updates

In v1.8.8, validationServerGrpcMaxSize was replaced by validationServerGrpcMaxSizeBytes in the Settings CRD. If the new Settings CRD is not applied before upgrading, you will see the following type of error:

Error: UPGRADE FAILED: error validating "": error validating data: ValidationError(Settings.spec.gateway.validation): unknown field "validationServerGrpcMaxSizeBytes" in io.solo.gloo.v1.Settings.spec.gateway.validation

In addition, any new fields added to our CRDs (e.g., fields added between 1.8.0 and 1.8.8) are only accepted once they appear in the validation schemas, so you should re-apply the CRDs whenever you upgrade to a newer patch version.

To apply the new CRDs:

helm pull gloo/gloo --version $GLOO_VERSION --untar
kubectl apply -f gloo/crds
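
For example, when moving to the 1.8.8 patch release (substitute the version you are targeting):

export GLOO_VERSION=1.8.8
helm pull gloo/gloo --version $GLOO_VERSION --untar
kubectl apply -f gloo/crds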

Enterprise Gloo Edge

Gloo Federation

In 1.8.0, the Gloo Federation helm chart was made a subchart of the Gloo Edge Enterprise helm chart. This introduces a few changes to how Gloo Federation is installed. Gloo Federation no longer lives in its own namespace, but is installed to the same namespace as the rest of Gloo Edge, which is gloo-system by default. In addition, Gloo Federation is now installed by default when running glooctl install gateway enterprise or when installing the Gloo Edge Enterprise helm chart.
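
To see where the Gloo Federation components now live after an upgrade, list the deployments in the install namespace (the gloo-fed deployment names below are what we would expect, but may differ slightly between releases):

kubectl get deployments -n gloo-system
# look for gloo-fed and gloo-fed-console alongside gloo, discovery, and gateway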

To disable the Gloo Federation install, set the gloo-fed.enabled=false helm value.

With glooctl:

echo "gloo-fed:
  enabled: false" > values.yaml
glooctl install gateway enterprise --values values.yaml --license-key=<LICENSE_KEY>

With helm:

helm install gloo glooe/gloo-ee --namespace gloo-system --set gloo-fed.enabled=false --set license_key=<LICENSE_KEY>
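
To confirm that Gloo Federation was skipped, a quick illustrative check:

kubectl get deployments -n gloo-system | grep gloo-fed
# no output is expected when Gloo Federation is disabled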

Enterprise UI

Prior to Gloo Edge Enterprise v1.8.9, the Enterprise UI was only available if Gloo Federation was enabled. Starting in v1.8.9, the UI is included by default for all Gloo Edge Enterprise installations, even when Gloo Federation is disabled.

Note that if you have Gloo Federation enabled, the UI does not show any data until you register one or more clusters. If Gloo Federation is disabled, the UI shows the installed Gloo Edge instance automatically without cluster registration.
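
For example, to register a cluster with Gloo Federation, a minimal sketch (the cluster name and kubeconfig context below are placeholders for your own):

glooctl cluster register --cluster-name remote-cluster --remote-context remote-cluster-context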

Verify upgrade

To verify that your upgrade was successful, let’s first check the version:

glooctl version

You should see the expected version for all the server components.
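
The output looks similar to the following (illustrative; the exact component list depends on your installation):

Client: {"version":"1.8.0"}
Server: {"type":"Gateway","kubernetes":{"containers":[{"Tag":"1.8.0","Name":"gloo","Registry":"quay.io/solo-io"}, ...]}}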

Let’s also check that your Gloo Edge installation is healthy by running:

glooctl check

If everything went well, you should see the following output:

Checking deployments... OK
Checking pods... OK
Checking upstreams... OK
Checking upstream groups... OK
Checking auth configs... OK
Checking rate limit configs... OK
Checking VirtualHostOptions... OK
Checking RouteOptions... OK
Checking secrets... OK
Checking virtual services... OK
Checking gateways... OK
Checking proxies... OK
No problems detected.