This release note describes what’s different between Solo builds of Istio versions 1.27.0 and 1.28.0.

General Changes

Solo Flavor Changes

  • Improved istioctl multicluster check with the following enhancements:

    • Check the gloo.solo.io/PeeringSucceeded remote gateway condition for installations below 1.27
    • Find stale endpoints from remote clusters that point to Pods that no longer exist
    • Check that the environment variables on istiod do not conflict with multi-cluster functionality
  • Added the ability to mark a cluster as draining.

    • The annotation, solo.io/draining-weight: <value between 0 and 100>, specifies the percentage of traffic to drain
    • The annotation can be set on:
      1. the istio-eastwest gateway to prevent inbound traffic from remote clusters
      2. the istio-remote gateway to prevent outbound traffic to the remote cluster
    • If a weight is set on both the istio-eastwest and istio-remote gateways for a given cluster, the maximum weight is applied
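
    As a sketch of the annotations described above (gateway names, namespace, and the gateway class are illustrative assumptions; only the annotation key and value range come from this note):

    ```yaml
    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: istio-eastwest
      namespace: istio-system
      annotations:
        # Drain 75% of the traffic that would otherwise flow through this gateway.
        solo.io/draining-weight: "75"
    spec:
      gatewayClassName: istio-eastwest  # assumed class name; rest of the spec unchanged
    ```

    Setting the same annotation on the istio-remote gateway instead drains outbound traffic to that remote cluster; if both are set for a cluster, the maximum weight wins.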
  • Improved the solo.io/service-scope label semantics to control service visibility within the mesh. You can apply this label to namespaces or individual services, with labels on individual services taking precedence over the namespace label. Service takeover is now separated from scope and has been extracted to a dedicated solo.io/service-takeover: true|false (default: false) label. The takeover label only has effect within the same segment, even if scope is set to global. You can choose between the following scope values:

    • cluster to limit service visibility to apps within the same cluster. This can be useful to opt out individual services from being globally available if the entire namespace’s scope is set to global.
    • segment to limit service visibility to apps within the same segment.
    • global to make services available across all peered clusters.
    • global-only has been deprecated in favor of using solo.io/service-takeover for service takeover use cases, but will still work.
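
    The label semantics above might look like the following sketch (namespace and service names are illustrative; spec fields are elided):

    ```yaml
    # Namespace-level scope: every Service here becomes globally available...
    apiVersion: v1
    kind: Namespace
    metadata:
      name: bookinfo
      labels:
        solo.io/service-scope: global
    ---
    # ...except this one, which opts out by overriding the namespace label.
    apiVersion: v1
    kind: Service
    metadata:
      name: ratings
      namespace: bookinfo
      labels:
        solo.io/service-scope: cluster
    ---
    # Takeover is now a separate label (default: false) and only takes
    # effect within the same segment, even when scope is global.
    apiVersion: v1
    kind: Service
    metadata:
      name: reviews
      namespace: bookinfo
      labels:
        solo.io/service-takeover: "true"
    ```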
  • Added a new CRD, Segment, which allows clusters to declare a custom domain suffix and address individual groups of clusters.
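
    This release note does not show the Segment schema, so the following is a purely hypothetical sketch; the API group, version, and field names are assumptions, not the actual API:

    ```yaml
    apiVersion: peering.solo.io/v1alpha1  # hypothetical group/version
    kind: Segment
    metadata:
      name: east-region
    spec:
      # Hypothetical field: the custom domain suffix used to address this
      # group of clusters.
      domainSuffix: east.mesh.internal
    ```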

  • Added an environment variable to istiod, DISABLE_LEGACY_MULTICLUSTER, to disable legacy OSS multicluster discovery mechanisms that use remote secrets. OSS multicluster uses remote secrets (containing kubeconfigs) to watch resources on remote clusters, which is fundamentally incompatible with peering’s decentralized, push-based model. This variable ensures istiod ignores remote secrets and doesn’t attempt to set up Kubernetes clients to connect to them. When migrating from legacy OSS multicluster to peering, set this on the new revision of the control plane. For installations without remote secrets, this serves as a recommended safety measure.
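
    Setting the variable on the new control-plane revision might look like the following sketch (how you set istiod environment variables depends on your install method; this assumes patching the istiod Deployment directly, with all other fields elided):

    ```yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: istiod
      namespace: istio-system
    spec:
      template:
        spec:
          containers:
            - name: discovery
              env:
                - name: DISABLE_LEGACY_MULTICLUSTER
                  value: "true"  # ignore remote secrets entirely
    ```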

  • Added multi-cluster and cross-account support to ECS platform discovery.

  • Added support for peering multiple clusters using NodePort istio-eastwest gateway services:

    • You can enable this feature with the following annotations:
      1. peering.solo.io/data-plane-service-type: NodePort on the istio-eastwest gateway resource, propagated to the corresponding Kubernetes service
      2. peering.solo.io/preferred-data-plane-service-type: NodePort on the istio-remote gateway resource
    • When peering via NodePort is enabled, only nodes where an istio-eastwest gateway pod is provisioned are considered targets for traffic.
    • Like LoadBalancer gateways, NodePort gateways support traffic from Envoy-based ingress gateways, waypoints, and sidecars.
    • A new gateway status condition indicates which data plane service type is currently being used for peering.
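
    The NodePort annotations described above might be applied like the following sketch (resource names, namespace, and gateway classes are illustrative assumptions; only the annotation keys and values come from this note):

    ```yaml
    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: istio-eastwest
      namespace: istio-system
      annotations:
        # Propagated to the gateway's Kubernetes Service.
        peering.solo.io/data-plane-service-type: NodePort
    spec:
      gatewayClassName: istio-eastwest  # assumed class name; rest of spec unchanged
    ---
    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: istio-remote
      namespace: istio-system
      annotations:
        peering.solo.io/preferred-data-plane-service-type: NodePort
    spec:
      gatewayClassName: istio-remote  # assumed class name; rest of spec unchanged
    ```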

  • Fixed a race condition that occasionally missed global Services when peering from remote clusters.

  • Fixed changes to Service resources not being propagated to their associated global services.

  • Fixed an issue where external modifications of auto-generated resources for multi-cluster peering were not reverted. Such resources are now restored to their original state.

  • Fixed an issue where a service only existing in a remote cluster would not have an L7 policy properly applied if the remote service did not properly declare an L7 protocol via the port name or appProtocol field.

  • Fixed an issue where Services with solo.io/service-scope=global-only were not reachable via standard Kubernetes service hostnames, regardless of whether the local cluster had a copy of the Service or not.

  • Fixed a race condition where gateway status updates were conflicting with gateway updates. Retries are now attempted when an error occurs.

  • Fixed an istiod crash when supplying a license to a FIPS-compliant version using a license secret.

  • Fixed an issue where deleting a remote service did not delete the corresponding ServiceEntry in the local cluster.

  • Fixed protocol being reset to TCP on global services after an istiod restart. When using waypoints, this could suddenly stop HTTPRoutes or other L7 policies from applying.

  • Fixed an issue where “to-workload” traffic through a waypoint would fail when peering is enabled (regardless of actual multi-cluster installation or traffic).

  • Fixed traffic routing through the local cluster’s waypoint proxy when a service exists only on a remote cluster, but the same waypoint is also deployed remotely. Traffic now correctly routes to remote waypoint proxies when the service only exists remotely.

  • Fixed traffic breaking when a waypoint from the local cluster was deleted while one still existed in the remote cluster.

  • Fixed an issue with connections to istio-eastwest gateways from Envoy proxies (sidecar, waypoint, ingress) where the outer HBONE connection used port 15008 rather than the HBONE port specified in the istio-remote gateway. This presented a problem when specifying NodePort istio-eastwest gateways.

  • Fixed missing gateway reconciliation for service-type changes.

  • Fixed invalid ServiceEntry generation when the service port is not named.

  • Fixed locality information not being propagated for peered multi-cluster resources when the istio-remote Gateway’s topology.istio.io/subzone was specified.

  • Fixed ambient workloads attempting to send HBONE to plaintext workloads on other clusters when using a flat-network multicluster setup.

  • Fixed an issue with flat networking where traffic would not traverse remote-only waypoints.

  • Fixed an issue with peering with a flat network for the case when the istio.io/use-waypoint label was set on a namespace in the local cluster that had services in both the local and remote cluster(s). In these cases, the waypoint specified by the remote cluster incorrectly took precedence over what was specified in the local cluster.

  • Fixed an issue with peering with a flat network for the case when the istio.io/use-waypoint label was NOT set on the local cluster for a service or namespace, but remote clusters did have the label. In such cases, no waypoint was used at all. Now, if the local cluster doesn’t set the label, the information from the remote cluster is used. To intentionally skip using remote waypoints from the local cluster, set istio.io/use-waypoint: none.
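
    The opt-out described above might look like the following sketch (the namespace name is illustrative):

    ```yaml
    # Explicitly skip waypoints, including any advertised by remote clusters.
    apiVersion: v1
    kind: Namespace
    metadata:
      name: bookinfo
      labels:
        istio.io/use-waypoint: none
    ```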

  • Fixed an issue with flat-network peering where stale endpoints were not properly cleaned up from remote clusters.

  • Fixed an issue where an incorrect trust domain was set on peered flat-networking workloads.

  • Fixed a rare issue where peered service ports or other settings would get stuck in an incorrect state when the Service is created both locally and in a peered cluster in a nearly identical timeframe.

  • Fixed sidecars and gateways not respecting load balancing settings or performing locality load balancing when sending traffic to a waypoint.

  • Fixed local istio-eastwest gateways not being translated into NetworkGateways when the cluster name matched the network ID.

  • Removed incorrect UnsupportedProtocol warning from istio-eastwest Gateway resources.

FIPS Flavor Changes

No changes in this section.