1.0.0+ Upgrade Notice
We have officially released Gloo Edge 1.0.0! This major version bump comes with a number of breaking changes that you will have to keep in mind as you upgrade from Gloo Edge 0.x to 1.0.0+. While the easiest upgrade process is to start with a totally fresh installation of Gloo Edge, many users are running older versions of Gloo Edge in production; this guide is for those users, who would like to upgrade to the latest and greatest Gloo Edge without any downtime.
Breaking Changes From 0.x to 1.0.0
Breaking Changes Commonly Requiring Manual Action
These breaking changes are common pain points that most users will have to address, and which we specifically would like to draw attention to. For a complete list of all breaking changes, see all breaking changes.
- Route objects (in Virtual Hosts) have had their `matcher` field (see here in 0.21.1) changed to `matchers` (see here in 1.0.0) to support an array of multiple matchers. https://github.com/solo-io/gloo/pull/1353
- Upstreams have been flattened, entirely removing the `UpstreamSpec` proto message (see here in 0.21.1) and moving all the associated fields into the top-level Upstream (see here in 1.0.0). https://github.com/solo-io/gloo/pull/1697
- The suffix `v2` has been dropped from the `gateway-v2` and `gateway-proxy-v2` deployments, and the Gateway CRD has had the `.v2` dropped from its name and API group; the `gateway-resource-reader` RBAC Role has had the corresponding rule on the `gateway.solo.io.v2` API group removed. https://github.com/solo-io/gloo/pull/1666
- The route config `auto_host_rewrite` is no longer implicitly set on virtual services that reference static upstreams. Virtual services referencing static upstreams must now manually set `auto_host_rewrite` to true to preserve the old behavior. https://github.com/solo-io/gloo/pull/1341
- By default, Gloo Edge now only propagates config from Virtual Services in the same namespace as the referencing Gateway (compare in 0.21.1 to 1.2.0). Note that this is only a breaking change if `virtual_services` and `virtual_service_selector` are omitted on the Gateway. Configuration to enable `virtual_service_selector` to reference virtual services outside the referencing Gateway's namespace was added here in Gloo Edge 1.2.0.
- Function Discovery Service (FDS) now defaults to whitelist mode rather than blacklist mode; see the FDS config docs.
- All instances of `...plugins` in our API have been renamed to `options` (e.g., `virtualHostPlugins` -> `options`, `routePlugins` -> `options`).
- The ExtAuth secret API has been updated to use strongly-typed configuration. OAuth and ApiKey secrets are no longer configured in the opaque `extensions` block; the same configuration now lives at the top level in the `api_key` and `oauth` blocks. (https://github.com/solo-io/gloo/issues/1171)
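To make the Upstream flattening concrete, here is a hypothetical before/after for a static upstream (the resource name and host are illustrative; this is a sketch of the change described above, not copied from the release):

```yaml
# 0.21.1: type-specific fields nested under spec.upstreamSpec
apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
  name: my-static-upstream
  namespace: gloo-system
spec:
  upstreamSpec:
    static:
      hosts:
      - addr: httpbin.org
        port: 80
---
# 1.0.0: UpstreamSpec is gone; the same fields sit at the top level of spec
apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
  name: my-static-upstream
  namespace: gloo-system
spec:
  static:
    hosts:
    - addr: httpbin.org
      port: 80
```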
All Breaking Changes
Example Upgrade Process
You should also read our usual upgrade guide (found here) and our upgrade FAQ (found here), both of which contain useful tips for performing Gloo Edge upgrades in general.
In this section, we will walk through the process of upgrading a very simple Gloo Edge installation (running in minikube) from 0.21.1 to 1.0.0 without any downtime. While this will not cover everyone’s use case, it will be useful to see how to resolve the most common breakages. We will be routing to an instance of httpbin running in our cluster. Skip to the bottom for all the commands collected in one place.
This guide will assume that you are running Gloo Edge in the gloo-system namespace.
We can see that we are running 0.21.1:
~ > glooctl version
Client: {"version":"0.21.1"}
Server: {"type":"Gateway","kubernetes":{"containers":[{"Tag":"0.21.1","Name":"discovery","Registry":"quay.io/solo-io"},{"Tag":"0.21.1","Name":"gloo-envoy-wrapper","Registry":"quay.io/solo-io"},{"Tag":"0.21.1","Name":"gateway","Registry":"quay.io/solo-io"},{"Tag":"0.21.1","Name":"gloo","Registry":"quay.io/solo-io"}],"namespace":"gloo-system"}}
And we can successfully curl httpbin through Envoy:
~ > curl -s $(glooctl proxy url)/status/418 # https://httpstatuses.com/418
-=[ teapot ]=-
_...._
.' _ _ `.
| ."` ^ `". _,
\_;`"---"`|//
| ;/
\_ _/
`"""`
Installing 1.0.0 to a New Namespace
Now we start the upgrade process. Before we begin, we may want to dump the current Gloo Edge state to a file.
~ > glooctl debug yaml > gloo-state-backup.yaml
You'll want to save a copy of your pre-1.0.0 `glooctl` somewhere locally, then upgrade the binary.
~ > cp $(which glooctl) ./glooctl-v0.21.1
~ > glooctl upgrade --release=v1.0.0
downloading glooctl-darwin-amd64 from release tag v1.0.0
successfully downloaded and installed glooctl version v1.0.0 to /usr/local/bin/glooctl
Create a new namespace for the 1.0.0 installation:
~ > kubectl create ns gloo-system-1-0-0
namespace/gloo-system-1-0-0 created
Create a Helm values overrides file to use during the installation:
echo "settings:
# explicitly setting watch namespaces will prevent the 1.0.0 installation from seeing old resources
# this assumes that all of your pre-1.0.0 upstreams have been written in gloo-system
watchNamespaces:
- gloo-system-1-0-0
- default
global:
glooRbac:
nameSuffix: 1-0-0-installation
" > 1.0.0-upgrade-values.yaml
And use it when installing to the new namespace:
~ > glooctl -n gloo-system-1-0-0 install gateway --values 1.0.0-upgrade-values.yaml # ignore the version warning - we are in the middle of resolving it :)
----------
WARNING: glooctl@v1.0.0 has a different major version than the following server containers: discovery@v0.21.1, gloo-envoy-wrapper@v0.21.1, gateway@v0.21.1, gloo@v0.21.1
Consider running:
./glooctl-1.0.0 upgrade --release=v0.21.1
----------
Starting Gloo Edge installation...
Installing CRDs...
Preparing namespace and other pre-install tasks...
Installing...
Gloo Edge was successfully installed!
Re-create your Virtual Services in the new namespace. You will have to edit your Virtual Services to accommodate the `matcher` -> `matchers` change. An example diff of the Virtual Service in the snippet above is:
~ > diff -u 0.x-vs.yaml 1.0-compliant-vs.yaml
--- 0.x-vs.yaml
+++ 1.0-compliant-vs.yaml
@@ -8,8 +8,8 @@
domains:
- '*'
routes:
- - matcher:
- prefix: /
+ - matchers:
+ - prefix: /
routeAction:
single:
upstream:
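Assembled, the 1.0-compliant Virtual Service from the diff might look roughly like this (the metadata and the upstream reference are assumptions based on the httpbin example in this guide; your discovered upstream name will differ):

```yaml
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: httpbin
  namespace: gloo-system-1-0-0
spec:
  virtualHost:
    domains:
    - '*'
    routes:
    - matchers:        # 1.0.0: an array of matchers replaces the single matcher
      - prefix: /
      routeAction:
        single:
          upstream:
            name: default-httpbin-8000   # assumed discovered-upstream name
            namespace: gloo-system-1-0-0
```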
Let's run `glooctl check` to be sure that our new installation is viable:
~ > glooctl -n gloo-system-1-0-0 check
Checking deployments... OK
Checking pods... OK
Checking upstreams... OK
Checking upstream groups... OK
Checking secrets... OK
Checking virtual services... OK
Checking gateways... OK
Checking proxies... OK
No problems detected.
You should now be able to direct traffic to the new deployment:
~ > curl -s $(glooctl proxy url -n gloo-system-1-0-0)/status/418
-=[ teapot ]=-
_...._
.' _ _ `.
| ."` ^ `". _,
\_;`"---"`|//
| ;/
\_ _/
`"""`
Let's verify that the old installation continues to work. You'll have to use your pre-1.0.0 copy of the `glooctl` binary:
~ > curl -s $(./glooctl-v0.21.1 proxy url -n gloo-system)/status/418
-=[ teapot ]=-
_...._
.' _ _ `.
| ."` ^ `". _,
\_;`"---"`|//
| ;/
\_ _/
`"""`
Now you may tear down the old, pre-1.0.0 namespace at your convenience. Note that once it is torn down, you may change your new installation’s watch namespaces to be whatever you would like, as all pre-1.0.0 resources should be deleted by that point.
Congratulations! You’ve just performed a major Gloo Edge upgrade without incurring any downtime.
Tearing Down Pre-1.0.0 Installation
If you run `glooctl uninstall -n gloo-system --all` to attempt to clear out all resources, including the cluster-scoped ones, you will also delete the resources from your new 1.0.0 installation, and Gloo Edge may experience downtime.
You may delete the deprecated v2 Gateway CRD:
~ > kubectl delete crd gateways.gateway.solo.io.v2
customresourcedefinition.apiextensions.k8s.io "gateways.gateway.solo.io.v2" deleted
And then run the base uninstall, using the saved `glooctl` binary:
~ > ./glooctl-v0.21.1 uninstall -n gloo-system
Uninstalling Gloo Edge...
Removing gloo, installation ID fuUIUbgiVrUAGur42069
Removing Gloo Edge system components from namespace gloo-system...
Gloo Edge was successfully uninstalled.
Removing Cluster-Scoped RBAC
Older versions of `glooctl uninstall` will not remove cluster-scoped RBAC. Since we set `global.glooRbac.nameSuffix` in our 1.0.0 installation values file, you may remove any ClusterRole or ClusterRoleBinding that DOES have the label `app=gloo` and whose name does NOT have the suffix we set in that Helm value.
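The selection rule above can be sketched as a small shell filter; in practice you would pipe in the output of `kubectl get clusterrole,clusterrolebinding -l app=gloo -o name` and pass the survivors to `kubectl delete` (the function name and sample input below are illustrative, and assume the suffix is appended to resource names with a leading dash):

```shell
# Print only the RBAC object names belonging to the OLD installation,
# i.e. those WITHOUT the suffix we set via global.glooRbac.nameSuffix.
filter_old_rbac() {
  grep -v -- '-1-0-0-installation$'
}

# Illustrative input; in practice pipe in the kubectl output instead.
printf '%s\n' \
  'clusterrole.rbac.authorization.k8s.io/gloo-role' \
  'clusterrole.rbac.authorization.k8s.io/gloo-role-1-0-0-installation' \
  | filter_old_rbac
# prints only: clusterrole.rbac.authorization.k8s.io/gloo-role
```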
Helm Compatibility
There are several points to consider about Helm compatibility if you are upgrading Gloo Edge (open source or enterprise) across the 0.x to 1.x boundary:
- Helm 2 IS NOT compatible with the Open Source Gloo Edge chart in Gloo Edge versions v1.2.0 through v1.2.2.
- However, Helm 2 IS compatible with all stable versions of the Gloo Edge Enterprise chart.
- `glooctl` prior to v1.2.0 cannot be used to install open source Gloo Edge v1.2.0 and later or Gloo Edge Enterprise v1.2.0 and later.
Gloo Edge Enterprise
Gloo Edge Enterprise also had a 1.x.y major release! If you are a Gloo Edge Enterprise user, please consult both this section and the rest of the document (covering open source features), as the entire document will be relevant to you.
Enterprise Versioning
You may notice that Gloo Edge Enterprise jumped directly from 0.x to 1.2.0. This is because we keep the major/minor versions of Open Source Gloo Edge and Gloo Edge Enterprise in sync, and Open Source Gloo Edge has progressed to 1.2.x.
Enterprise Breaking Changes
Since Enterprise is a superset of Open Source Gloo Edge, you should also consult the comprehensive list of all breaking changes within Open Source Gloo Edge if you are upgrading Enterprise from 0.x to >=1.2.0.
In addition to the open source breaking changes, Gloo Edge Enterprise 1.2.0 also includes the following breaking changes:
- Removed some deprecated APIs:
  - `weighted_destination_plugins` on `WeightedDestinations`; prefer `weighted_destination_options`
  - `gateway_proxy_name` on `Gateway`; prefer `proxy_names`
  - `role_arns` on `UpstreamSpec`; prefer `role_arn`
- ExtAuth's `VhostExtension` and `RouteExtension`, among other minor removals. Prefer configuring Gloo Edge Enterprise ExtAuth using AuthConfig Custom Resources, and configure Virtual Services via `ExtAuthExtension` to either reference these `AuthConfig`s or reference your own custom auth implementation using `CustomAuth`. (https://github.com/solo-io/gloo/issues/1171)
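As a rough sketch of the new shape (all names here are illustrative and the specs are abbreviated; consult the Enterprise ExtAuth docs for real provider settings), auth settings move into an AuthConfig Custom Resource that Virtual Services then reference:

```yaml
apiVersion: enterprise.gloo.solo.io/v1
kind: AuthConfig
metadata:
  name: my-oauth
  namespace: gloo-system
spec:
  configs:
  - oauth: {}  # provider-specific OAuth settings (client ID, issuer URL, etc.) go here
---
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: secured
  namespace: gloo-system
spec:
  virtualHost:
    domains: ['*']
    options:
      extauth:
        configRef:       # replaces the removed VhostExtension
          name: my-oauth
          namespace: gloo-system
```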