Release notes
Review summaries of the main changes in the Gloo 2.6 release.
Make sure that you review the breaking changes 🔥 that were introduced in this release and the impact that they have on your current environment.
Introduction
The release notes include important installation changes and known issues. They also highlight ways that you can take advantage of new features or enhancements to improve your product usage.
For more information, see the following related resources:
- Upgrade guide: Steps to upgrade from the previous minor version to the current version.
- Version reference: Information about Solo’s version support.
🔥 Breaking changes
Review details about the following breaking changes. The severity is intended as a guide to help you assess how much attention to pay to this area during the upgrade, but can vary depending on your environment.
🚨 High
Review severe changes that can impact production and require manual intervention.
- **Updates to the Istio lifecycle manager**: The Istio lifecycle manager now uses an Istio lifecycle agent that deploys and manages the Istio installations in your clusters, instead of the Gloo agent. Additionally, the Istio lifecycle agent now translates input `IstioOperator` configuration into Istio Helm chart values, instead of deploying installations based on the `IstioOperator` configuration directly. If you use the Istio lifecycle manager in version 2.5, your Istio installations are unchanged when you upgrade to 2.6. The new translation system is only implemented when you make a change to your `IstioLifecycleManager` or `GatewayLifecycleManager` resources. Additionally, you cannot use the lifecycle manager alongside unmanaged Istio service meshes.
- **Removal of `memory_ballast` extension from OTel collector**: The `memory_ballast` extension is deprecated and no longer effective for OTel collector agents. If you use this extension to manage garbage collection, you must update your collector configuration.
🟠 Medium
Review changes that might impact production and require manual intervention, but possibly not until the next version is released.
- **New Helm defaults for Solo image repositories**: The default Helm values for the Solo images changed from `docker.io` to `gcr.io`. Make sure to update any internal tooling, such as in air-gapped environments.
ℹ️ Low
Review informational updates that you might want to implement but that are unlikely to materially impact production.
- **Upstream Prometheus Helm chart upgrade**: The Prometheus Helm chart version is upgraded to a newer version, which requires the Prometheus deployment to be re-created. Gloo Mesh Core uses a Helm pre-upgrade hook to re-create the deployment, which can cause issues in automated environments, such as Argo CD.
Updates to the Istio lifecycle manager
In version 2.6 and later of Gloo Mesh Core, the Istio lifecycle manager is automatically updated with the following improvements.
Istio lifecycle agent
The Gloo agent that runs on each registered workload cluster now leverages a feature called the Istio lifecycle agent. This Istio lifecycle agent now deploys and manages the Istio installations in your clusters, instead of the Gloo agent directly.
Values translation
The Istio lifecycle agent now translates the `IstioOperator` configuration in your `IstioLifecycleManager` and `GatewayLifecycleManager` resources into Istio Helm chart values to deploy the installations, instead of deploying installations based on the `IstioOperator` configuration directly. The agent writes the translated values into internal `ClusterIstioInstallation` resources, which represent the state of the Istio installations in each workload cluster.
Important gateway translation differences
By default, upstream Istio translates `IstioOperator` configuration into values for the deprecated `gateways/istio-ingress` and `gateways/istio-egress` Helm charts. The lifecycle agent in Gloo Mesh Core translates configuration into the supported `gateway` Helm chart instead. If you use `GatewayLifecycleManager` CRs in version 2.6 and later, note the following differences that are implemented so that your `IstioOperator` configuration can be translated into supported `gateway` Helm chart values:
- The same chart is used for both ingress and egress gateways. The only difference is the service type, which defaults to `ClusterIP` for egress gateways.
- The `gateway` chart always uses injection of the `gateway` template, which means that:
  - Deployments always use an injection template, regardless of the `IstioOperator` setting.
  - Specifying an injection template other than `gateway` results in an error.
  - The gateway’s namespace, such as `gloo-mesh-gateways`, must not have annotations to disable injection.
  - The `hub` and `tag` values in the `IstioOperator` determine only the Solo Helm chart to use. The image version in the deployment is always `auto`, and is injected based on which Istio control plane provides sidecar injection.
- The default target ports for the gateway service are now `80` and `443`, as opposed to the default values of `8080` and `8443` in the deprecated Helm charts.
- Review the required changes for specifying Helm `values` settings directly.
  - Do not specify `values.gateways` settings. These settings were inherently defined for the deprecated charts only, and are not compatible with the currently supported `gateway` chart. The only exception is the `values.global.imagePullSecrets` field, which is supported.
  - To specify `values` settings from the currently supported `gateway` chart, use the `unvalidatedValues` field. This is necessary because these `values` settings are not currently supported by upstream Istio as part of the `IstioOperator`. For example, the gateway chart supports a `kind` setting that determines whether the gateway is created as a deployment or a daemonset. To deploy a gateway as a daemonset, you can specify the `kind` setting in an `unvalidatedValues` section of your `GatewayLifecycleManager`:
    ```yaml
    istioOperatorSpec:
      components:
        ...
      unvalidatedValues:
        gateway:
          kind: DaemonSet
    ```
- The following `k8s` settings that you might set in `components.ingressGateways.k8s`, for example, are either unsupported or supported only for certain values:
  - `ownerName` is not applicable because it is used for an `istio-operator`-specific annotation.
  - `strategy` (such as to define rolling update settings) is not available in the `gateway` chart, but can be specified in an overlay, as in the sketch after this list.
  - `env` only supports explicit values. To specify `valueFrom` or `envFrom`, you must use an overlay.
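For instance, the following sketch uses the upstream `IstioOperator` `k8s.overlays` mechanism to set a rolling update strategy, which the `gateway` chart does not expose directly. The gateway name and patch values are illustrative only; adjust them for your own gateway deployment.
```yaml
istioOperatorSpec:
  components:
    ingressGateways:
      # istio-ingressgateway is an illustrative gateway name.
      - name: istio-ingressgateway
        enabled: true
        k8s:
          overlays:
            # Patch the rendered Deployment, because strategy is not a
            # direct setting in the gateway chart.
            - kind: Deployment
              name: istio-ingressgateway
              patches:
                - path: spec.strategy.rollingUpdate.maxSurge
                  value: "100%"
```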
Upgrading to 2.6
- If you use the Istio lifecycle manager in Gloo Mesh Core version 2.5, your Istio installations are unchanged when you upgrade Gloo Mesh Core to 2.6. The new translation system is only implemented when you make a change to your `IstioLifecycleManager` or `GatewayLifecycleManager` resources. To preview any changes to how your gateway values might be translated, you can run a canary upgrade to compare two Istio installations.
- Because of the updates to the Istio lifecycle manager in version 2.6, you currently cannot use the lifecycle manager alongside unmanaged Istio service meshes that you install by using Helm, `istioctl`, or an `IstioOperator`. To use the Istio lifecycle manager, remove any existing Istio installations, and create managed Istio installations by following the steps in this guide. Note that this limitation will be addressed in future releases.
Removal of `memory_ballast` extension from the OTel collector
The `memory_ballast` extension is deprecated and no longer effective for the OTel collector agents. To ensure that your OTel collector instances continue to perform garbage collection correctly:
- If you do not currently customize the `memory_ballast` extension, you can safely remove it from your OTel collector configuration. No further steps are necessary.
- If you do customize the `memory_ballast` extension, you can instead control garbage collection with a soft memory limit by setting the `GOMEMLIMIT` environment variable to the recommended value of 80-90% of the total memory, as in the sketch after this list.
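The following is a minimal sketch of how such a values override might look, assuming your collector is deployed through Helm values that expose an `extraEnvs`-style field for the collector container. The `telemetryCollector` and `extraEnvs` field names are assumptions; check your collector chart's reference for the exact names.
```yaml
# Sketch only: telemetryCollector and extraEnvs are assumed field names.
# Set GOMEMLIMIT to roughly 80-90% of the container memory limit,
# for example 1638MiB is about 80% of a 2Gi limit.
telemetryCollector:
  resources:
    limits:
      memory: 2Gi
  extraEnvs:
    - name: GOMEMLIMIT
      value: "1638MiB"
```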
New Helm defaults for Solo image repositories
The default Helm values for the Solo images changed from `docker.io` to `gcr.io`. For example, the Redis image changed from `docker.io/redis:7.2.4-alpine` to `gcr.io/gloo-mesh/redis:7.2.4-alpine`. If your internal tooling is set up to pull the images from `docker.io`, such as in air-gapped environments, you must update that tooling to pull the images from the Google Cloud Registry repository.
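For illustration, a hypothetical Helm values override for an internal mirror of the new registry might look like the following. The `image` subfields are assumptions; verify the exact override fields in the Helm reference for each component.
```yaml
# Hypothetical sketch: image subfield names are assumptions; check the
# values reference for each component before you use them.
redis:
  deployment:
    image:
      registry: registry.example.com/gloo-mesh-mirror # your internal mirror
      repository: redis
      tag: 7.2.4-alpine
```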
For an overview of the images that are used in an air-gapped environment, see the Install in air-gapped environments docs in the Gloo Mesh Enterprise documentation.
Upstream Prometheus upgrade
Gloo Mesh Core includes a built-in Prometheus server to help monitor the health of your Gloo components. This release of Gloo upgrades the Prometheus community Helm chart from version 19.7.2 to 25.11.0. As part of this upgrade, upstream Prometheus changed the selector labels for the deployment, which requires recreating the deployment. To help with this process, the Gloo Helm chart includes a pre-upgrade hook that automatically recreates the Prometheus deployment during a Helm upgrade. This breaking change impacts upgrades from previous versions to version 2.4.10, 2.5.1, or 2.6.0 and later.
If you do not want the redeployment to happen automatically, you can disable this process by setting the `prometheus.skipAutoMigration` Helm value to `true`. For example, you might use Argo CD, which converts Helm pre-upgrade hooks to Argo `PreSync` hooks and causes issues. To ensure that the Prometheus server is deployed with the right version, follow these steps:
1. Confirm that you have an existing deployment of Prometheus at the old Helm chart version of `chart: prometheus-19.7.2`.
   ```sh
   kubectl get deploy -n gloo-mesh prometheus-server -o yaml | grep chart
   ```
2. Delete the Prometheus deployment. Note that while Prometheus is deleted, you cannot observe Gloo performance metrics.
   ```sh
   kubectl delete deploy -n gloo-mesh prometheus-server
   ```
3. In your Helm values file, set the `prometheus.skipAutoMigration` field to `true`, as in the example after these steps.
4. Continue with the Helm upgrade of Gloo Mesh Core. The upgrade recreates the Prometheus server deployment at the new version.
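For reference, the Helm values snippet for step 3 looks like this:
```yaml
prometheus:
  # Skip the pre-upgrade hook that automatically recreates the
  # Prometheus deployment during a Helm upgrade.
  skipAutoMigration: true
```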
⚒️ Installation changes
In addition to comparing differences across versions in the changelog, review the following installation changes from the previous minor version to version 2.6.
Safe mode enabled by default
Starting in version 2.6.0, safe mode is enabled on the Gloo management server by default to ensure that the server translates input snapshots only if all input snapshots are present in Redis or its local memory. This way, translation only occurs based on a complete translation context that includes all workload clusters.
Enabling safe mode resolves a race condition that was identified in versions 2.5.3, 2.4.11, and earlier, which could be triggered during simultaneous restarts of the management plane and Redis, including an upgrade to a newer Gloo Mesh Enterprise version. If hit, this failure mode could lead to partial translations on the Gloo management server, which could result in Istio resources being temporarily deleted from the output snapshots that are sent to the Gloo agents.
To learn more about safe mode, see Safe mode.
Enabling safe mode requires the Gloo management server to be scaled down to 0 replicas. Make sure to follow the upgrade guide to safely upgrade your Gloo Mesh Core installation.
New default values for Gloo UI auth sessions
Some of the default Helm values changed for configuring the Gloo UI auth session storage:
- `glooUi.auth.oidc.session.backend`: The default value changed from `""` (empty) to `cookie` to ensure that auth sessions are stored in browser cookies by default.
- `glooUi.auth.oidc.session.redis.host`: The default value changed from `""` (empty) to `gloo-mesh-redis.gloo-mesh:6379` to ensure that a valid Redis host is set when `glooUi.auth.oidc.session.backend` is changed to `redis`.
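These new defaults correspond to the following Helm values:
```yaml
glooUi:
  auth:
    oidc:
      session:
        # New default: store auth sessions in browser cookies.
        backend: cookie
        redis:
          # New default: used only when backend is set to redis.
          host: gloo-mesh-redis.gloo-mesh:6379
```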
To learn how to set up Gloo UI auth session storage, see Store UI sessions.
New container settings for Redis
In 2.6, the default container settings for the built-in Redis changed. To apply these settings, Redis must be restarted during the upgrade. Make sure that safe mode is enabled before you proceed with the upgrade so that translation of input snapshots halts until the input snapshots of all connected Gloo agents are re-populated in Redis. In 2.6.0, safe mode is enabled by default.
To learn more about safe mode, see Safe mode.
Redis installation for snapshots and insights
In version 2.5 and earlier, you configured the details for the backing Redis instance for different Gloo Mesh Core components in the Helm chart section for each component. When components had to share Redis instances, this approach could lead to misconfiguration. For example, the Gloo external auth service and portal server must share the same configuration details to use the same Redis instance, but these Redis details were configured separately.
Now, you can configure the details for the backing Redis instance consistently across components by using the new `redisStore` section of the Helm chart. The `redisStore` section also organizes Redis into four main use cases, which you can configure to use separate Redis instances. All the components that read or write data for a particular use case automatically get the same Redis details. This way, you can consistently configure Redis.
The previous way of configuring Redis is still supported, so you do not have to update your configuration right away. However, you might want to update to `redisStore` for simplicity, as well as to use the new use case-driven approach that lets you configure Redis for `snapshot` and `insights`.
Optional steps to migrate to the `redisStore` configuration:
The following example shows how your configuration might look when you use the built-in Redis options for the control plane (`redis`), external auth service, developer portal, and rate limiter. For other configuration options, such as using shared instances or bringing your own external Redis instance like Amazon ElastiCache, see the updated Redis documentation.
1. Review your existing Helm values file to find your current Redis values.
   ```yaml
   redis:
     deployment:
       # Enable the creation of the local gloo-mesh-redis deployment and service.
       enabled: true
   ```
2. Remove your existing Redis values, and enable the `redisStore` options instead. The following example removes the default `redis` option and enables two built-in instances for the `gloo-redis-snapshot` and `gloo-redis-insights` deployments.
   ```yaml
   redis:
     deployment:
       # Disable the creation of the legacy Redis deployment and service,
       # so that you can use the redisStore configuration path instead.
       enabled: false
   redisStore:
     insights:
       # Enable the creation of the local Redis deployment and service.
       deployment:
         enabled: true
     snapshot:
       # Enable the creation of the local Redis deployment and service.
       deployment:
         enabled: true
   ```
3. Continue with the upgrade.
🌟 New features
Review the following new features that are introduced in version 2.6 and that you can enable in your environment.
Ambient service mesh support with the Istio lifecycle manager
Ambient mesh is now generally available in Gloo Mesh Core version 2.6. To try it out, see [Install Istio in ambient mode](/gloo-mesh-core/latest/istio/ambient/install/).
Delimiters in JWT token claims
As of version 2.6.6, you can configure custom delimiters when you extract claims from JWT tokens. This way, you can append the claim information in a header in a different format than the default comma-delimited format. For example steps, see Extract claims to headers.
Istio 1.21, 1.22, and 1.23 support
You can now use Istio 1.21 and 1.22 in version 2.6.0 and later, and Istio 1.23 in version 2.6.3 and later. Note that you must upgrade the Gloo management server to version 2.6 before you start upgrading your Istio version. To find the image repositories for the Solo distribution of Istio, see the Istio images built by Solo support article. Istio versions 1.16 and 1.17 are no longer supported in 2.6.
For more information about supported Istio versions, see the Supported Solo distributions of Istio.
Kubernetes 1.29 and 1.30 support
Starting in version 2.6.0, Gloo Mesh Core can now run on Kubernetes 1.29 and 1.30. Kubernetes versions 1.22 and 1.23 are no longer supported. For more information about supported Kubernetes and Istio versions, see the version support matrix.
I/O threads for Redis
A new Helm value, `redis.deployment.ioThreads`, was introduced to specify the number of I/O threads to use for the built-in Redis instance. Redis is mostly single-threaded; however, some operations, such as `UNLINK` or slow I/O accesses, can be performed on side threads. Increasing the number of side threads can help improve and maximize the performance of Redis, as these operations can run in parallel.
The default and minimum valid value for this setting is 1. If you plan to increase the number of I/O side threads, make sure that you also change the CPU requests and CPU limits for the Redis pod. Set the CPU requests and limits to the same number that you use for the I/O side threads plus 1. That way, you can ensure that each side thread has an available CPU core, and that an additional CPU core is left for the main Redis thread. For example, if you want to set I/O threads to 2, make sure to add 3 CPU cores to the resource requests and limits for the Redis pod. You can find further recommendations regarding I/O threads in this Redis configuration example.
If you set I/O threads, the Redis pod must be restarted during the upgrade so that the changes can be applied. During the restart, the input snapshots from all connected Gloo agents are removed from the Redis cache. If you also update settings in the Gloo management server that require the management server pod to restart, the management server’s local memory is cleared and all Gloo agents are disconnected. Although the Gloo agents attempt to reconnect to send their input snapshots and re-populate the Redis cache, some agents might take longer to connect or fail to connect at all. To ensure that the Gloo management server halts translation until the input snapshots of all workload cluster agents are present in Redis, it is recommended to enable safe mode on the management server alongside updating the I/O threads for the Redis pod. For more information, see Safe mode. Note that in version 2.6.0 and later, safe mode is enabled by default.
To update I/O side threads in Redis as part of your Gloo Mesh Enterprise upgrade:
1. Scale down the number of Gloo management server pods to 0.
   ```sh
   kubectl scale deployment gloo-mesh-mgmt-server --replicas=0 -n gloo-mesh
   ```
2. Upgrade Gloo Mesh Enterprise and use the following settings in your Helm values file for the management server. Make sure to also increase the number of CPU cores to one core per thread, and add an additional CPU core for the main Redis thread. The following example also enables safe mode on the Gloo management server to ensure that translation is done with the complete context of all workload clusters.
   ```yaml
   glooMgmtServer:
     safeMode: true
   redis:
     deployment:
       ioThreads: 2
       resources:
         requests:
           cpu: 3
         limits:
           cpu: 3
   ```
3. Scale the Gloo management server back up to the number of desired replicas. The following example uses 1 replica.
   ```sh
   kubectl scale deployment gloo-mesh-mgmt-server --replicas=1 -n gloo-mesh
   ```
Persistence mode for built-in Redis
To back up data for the Gloo Mesh Core control plane components like the management server, you can use a Solo-provided built-in Redis instance that is installed in the cluster. By default, this built-in Redis instance does not persist data, so the data is lost during pod restarts. Now, you can also configure the built-in Redis to use persistent storage. This way, data persists across Redis restarts, such as after an upgrade. By persisting the data, you can reduce delays in the relay process that otherwise would happen after a restart. For more information, see Persistent storage.
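As a minimal sketch, assuming the Helm chart exposes a `persistence` block under the Redis deployment (the field names here are assumptions; see the Persistent storage docs for the supported values), enabling persistence might look like this:
```yaml
# Sketch only: persistence field names are assumptions; check the
# Persistent storage documentation for the exact Helm values.
redis:
  deployment:
    persistence:
      enabled: true
      storageClassName: standard # an illustrative storage class
      size: 1Gi
```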
🔄 Feature changes
Review the following changes that might impact how you use certain features in your Gloo environment.
Improved error logging
The Gloo management server translates your Gloo custom resources into many underlying Istio resources. When the management server cannot translate a resource, it returns debug logs that vary in severity from errors to warnings or info.
In this release, the management server logs are improved in the following ways:
- All translation errors are now logged at the debug level. This way, the management server logs are not cluttered by errors that do not impact the management server’s health.
- Fixed a bug that caused many duplicate error logs. Now, you have fewer logs to sift through.
For example, you might have a service that does not select any existing workloads. This scenario might be intentional, such as if you use a CI/CD tool like Argo CD to deploy your environment in phases. Translation does not complete until you update the service’s selector or create the workload. Previously, the translation error would show up many times in the management server logs, even though the situation is intentional and the management server is healthy and can translate other objects. Now, the translation error is logged less verbosely at the debug level.
You can still review translation errors in the following ways:
- Translation errors and warnings are shown in the statuses of Gloo custom resources. For example, if a policy fails to apply to a route, you can review the warning in the policy and the route table statuses.
- In the management server, enable debug logging by setting the `--verbose=true` flag. Example command:
  ```sh
  kubectl patch deploy -n gloo-mesh gloo-mesh-mgmt-server --type "json" -p '[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--verbose=true"}]'
  ```
Istio resources not generated on non-mesh clusters
A fix was added that prevents Istio resources from being written to the management cluster when the management cluster is not also registered as a workload cluster. Previously, if you had an Istio installation in your management cluster that was not managed by Gloo Mesh Core and a Gloo agent was deployed to that cluster at the same time, the agent discovered these Istio resources and included them in the snapshot that was sent to the management server. The agent also wrote Istio resources to that cluster, which might have interfered with existing Istio resources in that cluster.
Previously generated resources are not automatically removed during the upgrade. You must manually clean up these resources in the management cluster. You can use the following script to remove these resources.
```sh
#!/bin/bash
# Clean up Gloo-generated Istio resources in the given namespace.
namespace=$1
if [ -z "$namespace" ]; then
  echo "Usage: cleanup.sh <namespace>"
  exit 1
fi
kubectl delete authorizationpolicies.security.istio.io -n "$namespace" -l "reconciler.mesh.gloo.solo.io/name=translator"
kubectl delete certificaterequests.internal.gloo.solo.io -n "$namespace" -l "reconciler.mesh.gloo.solo.io/name=translator"
kubectl delete clusteristioinstallations.internal.gloo.solo.io -n "$namespace" -l "reconciler.mesh.gloo.solo.io/name=translator"
kubectl delete destinationrules.networking.istio.io -n "$namespace" -l "reconciler.mesh.gloo.solo.io/name=translator"
kubectl delete discoveredcnis.internal.gloo.solo.io -n "$namespace" -l "reconciler.mesh.gloo.solo.io/name=translator"
kubectl delete discoveredgateways.internal.gloo.solo.io -n "$namespace" -l "reconciler.mesh.gloo.solo.io/name=translator"
kubectl delete envoyfilters.networking.istio.io -n "$namespace" -l "reconciler.mesh.gloo.solo.io/name=translator"
kubectl delete gateways.networking.istio.io -n "$namespace" -l "reconciler.mesh.gloo.solo.io/name=translator"
kubectl delete issuedcertificates.internal.gloo.solo.io -n "$namespace" -l "reconciler.mesh.gloo.solo.io/name=translator"
kubectl delete istiooperators.install.istio.io -n "$namespace" -l "reconciler.mesh.gloo.solo.io/name=translator"
kubectl delete meshes.internal.gloo.solo.io -n "$namespace" -l "reconciler.mesh.gloo.solo.io/name=translator"
kubectl delete peerauthentications.security.istio.io -n "$namespace" -l "reconciler.mesh.gloo.solo.io/name=translator"
kubectl delete podbouncedirectives.internal.gloo.solo.io -n "$namespace" -l "reconciler.mesh.gloo.solo.io/name=translator"
kubectl delete portalconfigs.internal.gloo.solo.io -n "$namespace" -l "reconciler.mesh.gloo.solo.io/name=translator"
kubectl delete proxyconfigs.networking.istio.io -n "$namespace" -l "reconciler.mesh.gloo.solo.io/name=translator"
kubectl delete requestauthentications.security.istio.io -n "$namespace" -l "reconciler.mesh.gloo.solo.io/name=translator"
kubectl delete serviceentries.networking.istio.io -n "$namespace" -l "reconciler.mesh.gloo.solo.io/name=translator"
kubectl delete sidecars.networking.istio.io -n "$namespace" -l "reconciler.mesh.gloo.solo.io/name=translator"
kubectl delete spireregistrationentries.internal.gloo.solo.io -n "$namespace" -l "reconciler.mesh.gloo.solo.io/name=translator"
kubectl delete telemetries.telemetry.istio.io -n "$namespace" -l "reconciler.mesh.gloo.solo.io/name=translator"
kubectl delete virtualservices.networking.istio.io -n "$namespace" -l "reconciler.mesh.gloo.solo.io/name=translator"
kubectl delete wasmplugins.extensions.istio.io -n "$namespace" -l "reconciler.mesh.gloo.solo.io/name=translator"
kubectl delete workloadentries.networking.istio.io -n "$namespace" -l "reconciler.mesh.gloo.solo.io/name=translator"
kubectl delete workloadgroups.networking.istio.io -n "$namespace" -l "reconciler.mesh.gloo.solo.io/name=translator"
kubectl delete xdsconfigs.internal.gloo.solo.io -n "$namespace" -l "reconciler.mesh.gloo.solo.io/name=translator"
```
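For example, if the generated resources were written to the `gloo-mesh` namespace (an illustrative namespace; use the namespace where the resources were written in your cluster), save the script as `cleanup.sh` and run:
```sh
bash cleanup.sh gloo-mesh
```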
🚧 Known issues
The Solo team fixes bugs, delivers new features, and makes changes on a regular basis as described in the changelog. Some issues, however, might impact many users for common use cases. These known issues are as follows:
**Cluster names**: Do not use underscores (`_`) in the names of your clusters or in the `kubeconfig` context for your clusters.

**Istio**:
- Due to a lack of support for the Istio CNI and iptables for the Istio proxy, you cannot run Istio (and therefore Gloo Mesh Core) on AWS Fargate. For more information, see the Amazon EKS issue.
- In Gloo Mesh Core version 2.6, ambient mode requires the Solo distribution of Istio version 1.22.3 or later (`1.22.3-solo`).
- In Istio 1.22.0-1.22.3, the `ISTIO_DELTA_XDS` environment variable must be set to `false`; see the sketch after this list. For more information, see this upstream Istio issue. Note that this issue is resolved in Istio 1.22.4.
- If you plan to upgrade to Istio 1.21, you must upgrade the Gloo management server to version 2.6 first. For more information, see the 2.6 release notes.
- Istio 1.20 is supported only as patch version `1.20.1-patch1` and later. Do not use patch versions 1.20.0 and 1.20.1, which contain bugs that impact several Gloo Mesh Core features that rely on Istio `ServiceEntry` resources.
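One way to apply the `ISTIO_DELTA_XDS` workaround is through the standard Istio `meshConfig` proxy metadata, as in the following sketch. Where this block lives depends on how you install Istio, for example in the `istioOperatorSpec` of your lifecycle manager resources or in your Istio Helm values.
```yaml
# Sketch: disable delta xDS for Istio 1.22.0-1.22.3 through proxy metadata.
# The placement of this meshConfig block depends on your installation method.
meshConfig:
  defaultConfig:
    proxyMetadata:
      ISTIO_DELTA_XDS: "false"
```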
**meshctl**: The `meshctl` CLI version 2.6.0 has a known issue with the `install` command that prevents you from using `--set` options. This issue was resolved in version 2.6.1, and the 2.6.0 `meshctl` binaries were updated to point to the 2.6.1 `meshctl` binaries instead. Because of this change, you might see version 2.6.1 when you run `meshctl version`, even if you installed the 2.6.0 `meshctl` binaries.
**OTel pipeline**: FIPS-compliant builds are not currently supported for the OTel collector agent image.