Release notes
Review summaries of the main changes in the Gloo 2.11 release.
Make sure that you review the breaking changes đĨ that were introduced in this release and the impact that they have on your current environment.
Introduction
The release notes include important installation changes and known issues. They also highlight ways that you can take advantage of new features or enhancements to improve your product usage.
For more information, see the following related resources:
- Upgrade guide: Steps to upgrade from the previous minor version to the current version.
- Version reference: Information about Solo’s version support.
đĨ Breaking changes
Review details about the following breaking changes. The severity is intended as a guide to help you assess how much attention to pay to this area during the upgrade, but can vary depending on your environment.
đ¨ High
Review severe changes that can impact production and require manual intervention.
- No high-severity changes are currently reported.
đ Medium
Review changes that might impact production and require manual intervention, though possibly not until the next version is released.
- No medium-severity changes are currently reported.
âšī¸ Low
Review informational updates that you might want to implement but that are unlikely to materially impact production.
- No low-severity changes are currently reported.
đ§ New known issues
Review new known issues and how to mitigate them.
- `ingress-use-waypoint` in flat networks: If you use a flat network setup for your multicluster ambient mesh, and want to apply the `istio.io/ingress-use-waypoint=true` label to a service that is exposed globally across clusters in the mesh, you must maintain namespace sameness for the service instances. In other words, you must maintain identical manifests, including labels, for each service instance in the same namespace of each cluster. If one of the service instances in any cluster has the `istio.io/ingress-use-waypoint=true` label, the global service uses the waypoint for ingress traffic.
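For example, the following sketch shows a service manifest that carries the label. The service, namespace, and port values are illustrative; the key point is that you apply the same manifest in the same namespace of every cluster.

```yaml
# Apply this same manifest in the same namespace of each cluster
# to maintain namespace sameness. Names and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: myapp-ns
  labels:
    istio.io/ingress-use-waypoint: "true"
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
```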
đ New features
Review the following new features that are introduced in version 2.11 and that you can enable in your environment.
Istio 1.28 support
You can now run Gloo Mesh (OSS APIs) with Istio 1.28. Istio 1.23 is no longer supported. For more information, see the version support matrix, and the Solo distribution of Istio changelog for 1.28.
New features in the Solo distribution of Istio 1.28 include the following.
Segments for multitenancy (alpha)
You can now create segments, which enable flexible multicluster multitenancy by logically partitioning meshes. By grouping clusters in your multicluster mesh into logical segments, you can provide apps in each segment with a segment-wide, custom domain suffix, and isolate service discovery to the segment. For more information, check out Multitenancy with segments.
Note that this feature is in the alpha state in the Solo distribution of Istio 1.28. Alpha features are likely to change, are not fully tested, and are not supported for production. For more information, see Solo feature maturity.
NodePort resolution for cross-cluster traffic (alpha)
In each cluster, you create an east-west gateway, which is implemented as a ztunnel that facilitates traffic between services across clusters in your multicluster mesh. In the Solo distribution of Istio 1.28 and later, you can use either LoadBalancer or NodePort addresses to resolve cross-cluster traffic requests through this gateway. Note that the NodePort method is considered alpha in Istio version 1.28.
LoadBalancer: In the standard LoadBalancer peering method, cross-cluster traffic through the east-west gateway resolves to its LoadBalancer address.
NodePort (alpha): If you prefer to use direct pod-to-pod traffic across clusters, you can annotate the east-west and peering gateways so that cross-cluster traffic resolves to NodePort addresses. This method allows you to avoid LoadBalancer services to reduce cross-cluster traffic costs. Review the following considerations:
- Note that the gateways must still be created with stable IP addresses, which are required for xDS communication with the istiod control plane in each cluster. NodePort peering is used for data-plane communication, in that requests to services resolve to the NodePort instead of the LoadBalancer address. Also, the east-west gateway must have the `topology.istio.io/cluster` label.
- If a node in a target cluster becomes inaccessible, such as during a restart or replacement, a delay can occur before the client cluster becomes aware of the new east-west gateway NodePort. In this case, you might see a connection error when trying to send cross-cluster traffic to an east-west gateway that is no longer accepting connections.
- Only nodes where an east-west gateway pod is provisioned are considered targets for traffic.
- Like LoadBalancer gateways, NodePort gateways support traffic from Envoy-based ingress gateways, waypoints, and sidecars.
- This feature is in an alpha state. Alpha features are likely to change, are not fully tested, and are not supported for production. For more information, see Solo feature maturity.
The steps in the following guides to create the gateways include options for either the LoadBalancer or NodePort method. A status condition on each east-west and remote peering gateway indicates which data plane service type is in use.
For more information, see the [multicluster mesh installation guides](/gloo-mesh/latest/ambient/setup/multicluster/default). Note that this feature is not applicable in flat network setups.
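As a minimal sketch, an east-west gateway with the required `topology.istio.io/cluster` label might look like the following. The gateway name, namespace, cluster name, and listener details are assumptions for illustration; see the installation guides for the exact manifests.

```yaml
# Hedged sketch of an east-west gateway that carries the required
# topology.istio.io/cluster label. The gateway class name, cluster
# name, and listener configuration are illustrative assumptions.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: istio-eastwest
  namespace: istio-system
  labels:
    topology.istio.io/cluster: cluster1
spec:
  gatewayClassName: istio-eastwest
  listeners:
    - name: cross-network
      port: 15008
      protocol: HBONE
      tls:
        mode: Passthrough
```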
ECS services integration (alpha)
Extend the mesh to include workloads that run in an ECS cluster by using the `istioctl ecs add-service` command. This command automatically bootstraps existing ECS services with a ztunnel sidecar container, which uses IAM roles to authenticate with your Istio installation. The workloads in ECS can then use the ztunnel to communicate with the in-mesh services in your cluster. To get started, check out [Add ECS services to the mesh](/gloo-mesh/latest/ambient/sample-apps/ecs-integration/).
Note that this feature is in the alpha state in the Solo distribution of Istio 1.28. Alpha features are likely to change, are not fully tested, and are not supported for production. For more information, see Solo feature maturity.
Drain local or remote clusters (alpha)
In the Solo distribution of Istio version 1.28 and later, the solo.io/draining-weight annotation was introduced. This annotation allows you to set a draining weight on an east-west or remote peering gateway that indicates how much traffic you want to accept for a given cluster. You can use this annotation to prevent connections to a cluster that you want to perform maintenance on, or in cases where you want to test service failover scenarios without impacting other services in the mesh.
For more information, see Drain clusters in the mesh.
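As a hedged sketch, the annotation can be merged into the metadata of an east-west or remote peering gateway. The value format here is an assumption (a weight of `0` to illustrate draining all new traffic during maintenance); see the drain guide for the supported values.

```yaml
# Fragment to merge into an east-west or remote peering gateway.
# The value "0" is an illustrative assumption for draining all
# new cross-cluster traffic away from this cluster.
metadata:
  annotations:
    solo.io/draining-weight: "0"
```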
Migrating from multicluster community Istio
If you previously used the multicluster feature in community Istio and now want to migrate to multicluster peering in the Solo distribution of Istio, you can use the `DISABLE_LEGACY_MULTICLUSTER` environment variable, introduced in the Solo distribution of Istio version 1.28, to disable the community multicluster mechanisms. Multicluster in community Istio uses remote secrets that contain kubeconfigs to watch resources on remote clusters. This system is incompatible with the decentralized, push-based model for peering in the Solo distribution of Istio. The variable causes istiod to ignore remote secrets so that it does not attempt to set up Kubernetes clients to connect to them.
- For fresh multicluster mesh installations with the Solo distribution of Istio, use this environment variable in your istiod settings. This setting serves as a recommended safety measure to prevent any use of remote secrets.
- If you want to initiate a multicluster migration from community Istio, contact a Solo account representative. An account representative can help you set up two revisions of Istio that each select a different set of namespaces, and set the `DISABLE_LEGACY_MULTICLUSTER` variable on the revision that uses the Solo distribution of Istio for multicluster peering.
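For a fresh installation, the variable might be set in your istiod Helm values like the following sketch. The `pilot.env` path mirrors how other istiod environment variables are commonly set in istiod Helm charts, but confirm the exact path for your chart version.

```yaml
# istiod Helm values fragment (sketch): disable community Istio's
# remote-secret-based multicluster mechanisms. The pilot.env path
# is an assumption based on common istiod chart conventions.
pilot:
  env:
    DISABLE_LEGACY_MULTICLUSTER: "true"
```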
Additional Helm settings for scrape overrides
New Helm settings are added for scrape intervals and timeouts in the OpenTelemetry (OTel) gateway and collector. These settings can be useful for tuning your telemetry pipeline in large environments.
For example, you might want to override the following Prometheus values.
```yaml
prometheus:
  server:
    global:
      scrape_interval: 60s
      scrape_timeout: 50s
```
To easily override these values, you can use the following new Helm settings in your Gloo Mesh installation.
- Telemetry gateway:
  - `telemetryGatewayCustomization.otlp.max_recv_msg_size_mib`
  - `telemetryGatewayCustomization.prometheusScrapeInterval`
  - `telemetryGatewayCustomization.prometheusScrapeTimeout`
- Telemetry collector:
  - `telemetryCollectorCustomization.prometheusScrapeInterval`
  - `telemetryCollectorCustomization.prometheusScrapeTimeout`
  - `telemetryCollectorCustomization.otlp.max_recv_msg_size_mib`
  - `telemetryCollectorCustomization.otlpExporterTimeout`
  - `telemetryCollectorCustomization.otlpExporterRetry.enabled`
  - `telemetryCollectorCustomization.otlpExporterRetry.initial_interval`
  - `telemetryCollectorCustomization.otlpExporterRetry.max_interval`
  - `telemetryCollectorCustomization.otlpExporterRetry.max_elapsed_time`
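For example, the Prometheus override above might be expressed with the new settings in your Gloo Mesh Helm values like the following sketch. The interval and timeout values are illustrative.

```yaml
# Gloo Mesh Helm values fragment (sketch): override OTel scrape
# settings for both the gateway and the collector. The 60s/50s
# values are illustrative, matching the Prometheus example above.
telemetryGatewayCustomization:
  prometheusScrapeInterval: 60s
  prometheusScrapeTimeout: 50s
telemetryCollectorCustomization:
  prometheusScrapeInterval: 60s
  prometheusScrapeTimeout: 50s
```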
OTel collector sharding
By default, the telemetry collector runs as a daemon set in your Gloo Mesh environment. In some organizations, security or architecture restrictions might prevent you from running the collector pod on every node in the cluster. In this case, you might want to shard the telemetry collector as a stateful set instead. This method allows the collector to continually process a high volume of metrics, without requiring the collector pod to deploy as a daemon set.
To shard the telemetry collector, follow the Upgrade guide and add the following configuration to your Helm values file:
```yaml
telemetryCollector:
  enabled: true
  mode: statefulset
  replicaCount: 2
telemetryCollectorCustomization:
  sharding:
    enabled: true
```
đ Feature changes
Review the following changes that might impact how you use certain features in your Gloo environment.
Deprecation of the built-in Jaeger instance
The built-in Jaeger instance, which was previously provided in the Gloo UI for testing or demo purposes, is deprecated in Gloo Mesh version 2.11 and later, and will be removed in future versions. Instead, you can create your own custom tracing pipeline in the Gloo telemetry pipeline to forward traces to a tracing platform that is managed by your organization and hardened for production. Alternatively, you can send production traces to a SaaS backend. Use the steps in Bring your own Jaeger instance as a guide to integrate your own tracing platform.
No Istio 1.28 support in Istio lifecycle manager
The Istio lifecycle manager (ILM) is deprecated and is planned to be removed in Gloo Mesh (OSS APIs) 2.12. You cannot use the ILM to install Istio 1.28. However, you can continue to use the ILM to install the latest patch updates for Istio 1.27 or earlier. Note that during installation or upgrade with the ILM, a deprecation notice is shown.
If you have not done so yet, change the way that you manage Istio by using either Helm or the new Gloo Operator. Check out the guides for installing ambient or sidecar meshes, or for migration steps, see Migrate to the Gloo Operator from the Istio lifecycle manager.
Deprecation of solo.io/service-scope=global-only
By labeling a service or namespace with solo.io/service-scope, you make a service available across clusters throughout a multicluster mesh setup. In the Solo distribution of Istio 1.27 and earlier, supported values included global to make services available across all peered clusters through a global hostname, and global-only to ensure that traffic requests to the service are always routed to the global hostname and never to the service’s in-cluster local hostname.
In the Solo distribution of Istio 1.28 and later, the global-only value for solo.io/service-scope is deprecated. Instead, you can use the cluster, segment, or global values for the solo.io/service-scope label, and replicate the same functionality of all traffic routing through the service’s global hostname by using the new solo.io/service-takeover=true label.
For more information about the updated solo.io/service-scope label, see Global vs segment scope with solo.io/service-scope. For more information about the new solo.io/service-takeover label, see Local traffic takeover with solo.io/service-takeover.
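As a hedged sketch, a service that replicates the deprecated `global-only` behavior might be labeled as follows. The service name, namespace, and port are illustrative.

```yaml
# Sketch: keep the service globally reachable (solo.io/service-scope)
# and force all traffic through the global hostname instead of the
# local in-cluster hostname (solo.io/service-takeover). Names are
# illustrative.
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: myapp-ns
  labels:
    solo.io/service-scope: global
    solo.io/service-takeover: "true"
spec:
  selector:
    app: myapp
  ports:
    - port: 80
```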
đī¸ Removed features
No features were removed.
đ§ Known issues
The Solo team fixes bugs, delivers new features, and makes changes on a regular basis as described in the changelog. Some issues, however, might impact many users for common use cases. These known issues are as follows:
- Cluster names: Do not use underscores (`_`) in the names of your clusters or in the `kubeconfig` context for your clusters.
- Istio:
  - Patch versions 1.26.0 and 1.26.1 of the Solo distribution of Istio lack support for FIPS-tagged images and ztunnel outlier detection. When upgrading or installing 1.26, be sure to use patch version `1.26.1-patch0` or later only.
  - In the Solo distribution of Istio 1.25 and later, you can access enterprise-level features by passing your Solo license in the `license.value` or `license.secretRef` field of the Solo distribution of the istiod Helm chart. The Solo istiod Helm chart is strongly recommended due to the included safeguards, default settings, and upgrade handling that ensure a reliable and secure Istio deployment. Though it is not recommended, you can pass your license key in the open source istiod Helm chart by using the `--set pilot.env.SOLO_LICENSE_KEY` flag.
  - Multicluster setups require the Solo distribution of Istio version 1.24.3 or later (`1.24.3-solo`), including the Solo distribution of `istioctl`.
  - Due to a lack of support for the Istio CNI and iptables for the Istio proxy, you cannot run Istio (and therefore Gloo Mesh (OSS APIs)) on AWS Fargate. For more information, see the Amazon EKS issue.
- OTel pipeline: FIPS-compliant builds are not currently supported for the OTel collector agent image.
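For reference, passing the license through the Solo istiod Helm chart might look like the following sketch. Only the `license.value` and `license.secretRef` field names come from these notes; the secret reference shape is an assumption, so use one of the two fields and confirm the structure for your chart version.

```yaml
# Solo distribution istiod Helm values fragment (sketch): provide
# the Solo license directly, or reference an existing secret.
# Use one of the two fields, not both; the secretRef shape is an
# illustrative assumption.
license:
  value: "<your-solo-license-key>"
  # secretRef:
  #   name: solo-license
```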