Helm
Use Helm to deploy an ambient service mesh to your cluster.
For more information about the components that are installed in these steps, see the ambient components overview.
Considerations
Before you install an ambient mesh, review the following considerations and requirements.
Version requirements
Review the following known Istio version requirements and restrictions.
- Patch versions 1.26.0 and 1.26.1 of the Solo distribution of Istio lack support for FIPS-tagged images and ztunnel outlier detection. When upgrading or installing 1.26, be sure to use patch version `1.26.1-patch0` and later only.
- In the Solo distribution of Istio 1.25 and later, you can access enterprise-level features by passing your Solo license in the `license.value` or `license.secretRef` field of the Solo distribution of the istiod Helm chart. The Solo istiod Helm chart is strongly recommended because its safeguards, default settings, and upgrade handling ensure a reliable and secure Istio deployment. Though it is not recommended, you can pass your license key in the open source istiod Helm chart by using the `--set pilot.env.SOLO_LICENSE_KEY` field.
- Multicluster setups require the Solo distribution of Istio version 1.24.3 or later (`1.24.3-solo`), including the Solo distribution of `istioctl`.
- Because the Istio CNI and iptables for the Istio proxy are not supported on AWS Fargate, you cannot run Istio (and therefore Gloo Mesh (OSS APIs)) on AWS Fargate. For more information, see the Amazon EKS issue.
Single-cluster and multicluster meshes
This guide shows you how to install an ambient mesh in one cluster. If you later decide to upgrade to a multicluster mesh with an Enterprise-level license, you can link your existing ambient meshes across clusters.
Alternatively, if you prefer to start with a multicluster mesh immediately, check out Install and connect new ambient meshes in the multicluster guide instead.
Platform requirements
The steps in the following sections have options for deploying an ambient mesh to either Kubernetes or OpenShift clusters.
If you use OpenShift clusters, complete the following steps before you begin:
- Set `routingViaHost: true` in the Cluster Network Operator to enable OVN-Kubernetes local gateway mode, as shown in the example below. For more information, see the Platform-specific prerequisites in the ambient mesh docs.
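The following patch is a minimal sketch of that setting, assuming the standard `gatewayConfig` field path in the OVN-Kubernetes configuration; verify the path against your OpenShift version before applying it.

```sh
# Hedged example: enable OVN-Kubernetes local gateway mode by setting
# routingViaHost: true in the Cluster Network Operator configuration.
oc patch network.operator.openshift.io/cluster --type=merge -p \
  '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"routingViaHost":true}}}}}'
```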
The commands for OpenShift in the following steps contain these required settings:
- Your Helm settings must include `global.platform=openshift` for Istio 1.24 and later. If you instead install Istio 1.23 or earlier, you must use `profile=openshift` instead of the `global.platform` setting.
- Install the `istio-cni` and `ztunnel` Helm releases in the `kube-system` namespace, instead of the `istio-system` namespace. For an illustration of where these settings fit, see the sketch after this list.
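As a quick illustration only, the following hypothetical command shows where the OpenShift-specific settings appear; the release and chart names are placeholders, and the real commands are in the steps that follow.

```sh
# Illustrative placeholder: <release> and <chart> are hypothetical names.
# For the istio-cni and ztunnel releases on OpenShift, use --namespace kube-system.
helm upgrade --install <release> <chart> \
  --namespace kube-system \
  --set global.platform=openshift
```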
Revision and canary upgrade limitations
The upgrade guides in this documentation show you how to perform in-place upgrades for your Istio components, which is the recommended upgrade strategy.
Step 1: Set up tools
If you do not already have a license, decide the level of licensed features that you want, and contact an account representative to obtain the license.
Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions table.
Decide on the specific tag of the Solo distribution of Istio image, such as `-solo`, `-solo-fips`, `-solo-distroless`, or `-solo-fips-distroless`, that you want for your environment.

Save the details for the version of the Solo distribution of Istio that you want to install.
- Save the Solo distribution of Istio patch version and tag.

  ```sh
  export ISTIO_VERSION=1.28.0
  # Change the tags as needed
  export ISTIO_IMAGE=${ISTIO_VERSION}-solo
  ```

- Save the image and Helm repository information for the Solo distribution of Istio.
  - Istio version 1.29 and later:

    ```sh
    export REPO=us-docker.pkg.dev/soloio-img/istio
    export HELM_REPO=us-docker.pkg.dev/soloio-img/istio-helm
    ```

  - Istio version 1.28 and earlier: Save the repo key for the minor version of the Solo distribution of Istio that you want to install. This is the 12-character hash at the end of the repo URL `us-docker.pkg.dev/gloo-mesh/istio-<repo-key>`, which you can find in the Istio images built by Solo.io support article.

    ```sh
    # 12-character hash at the end of the minor version repo URL
    export REPO_KEY=<repo_key>
    export REPO=us-docker.pkg.dev/gloo-mesh/istio-${REPO_KEY}
    export HELM_REPO=us-docker.pkg.dev/gloo-mesh/istio-helm-${REPO_KEY}
    ```
- Set your license key as an environment variable. If you prefer to specify license keys in a secret instead, see Licensing.

  ```sh
  export SOLO_LICENSE_KEY=<license_key>
  ```
Install or upgrade `istioctl` with the same version of Istio that you saved.

```sh
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=${ISTIO_VERSION} sh -
cd istio-${ISTIO_VERSION}
export PATH=$PWD/bin:$PATH
```

Check the platform-specific prerequisites for ambient to determine whether you must make any changes to your environment before you install an ambient mesh.
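To confirm that the downloaded client is on your path at the expected version, you can run the following check; the `--remote=false` flag prints only the client version without contacting a cluster.

```sh
# Print the istioctl client version only
istioctl version --remote=false
```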
Step 2: Install CRDs
Apply the CRDs for the Kubernetes Gateway API to your cluster, which are required to create components such as waypoint proxies for L7 traffic policies, gateways with the `Gateway` resource, and more.

```sh
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.0/standard-install.yaml
```

Install the `base` chart, which contains the CRDs and cluster roles required to set up Istio.
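The exact command for the `base` chart is not shown here; the following is a minimal sketch, assuming the Solo Helm charts are published as OCI artifacts under the `HELM_REPO` path that you exported earlier. Confirm the chart location and flags against the Solo documentation before running it.

```sh
# Hedged sketch: install the base chart from the assumed OCI registry path.
# HELM_REPO and ISTIO_IMAGE are the variables exported in Step 1.
helm upgrade --install istio-base oci://${HELM_REPO}/base \
  --namespace istio-system \
  --create-namespace \
  --version ${ISTIO_IMAGE}
```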
Step 3: Deploy the ambient control plane
Create the `istiod` control plane in your cluster.

Install the Istio CNI node agent daemonset. Note that although the CNI is included in this section, it is technically not part of the control plane or data plane.
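The following is a minimal sketch of both installs under the same assumptions as the `base` chart above: OCI charts under `${HELM_REPO}`, images from `${REPO}`, and the `cni` chart name taken from the upstream Istio Helm charts. On OpenShift, remember to add `--set global.platform=openshift` and to install the CNI in the `kube-system` namespace.

```sh
# Hedged sketch: deploy istiod with the ambient profile and your Solo license.
helm upgrade --install istiod oci://${HELM_REPO}/istiod \
  --namespace istio-system \
  --version ${ISTIO_IMAGE} \
  --set profile=ambient \
  --set global.hub=${REPO} \
  --set global.tag=${ISTIO_IMAGE} \
  --set license.value=${SOLO_LICENSE_KEY} \
  --wait

# Hedged sketch: deploy the Istio CNI node agent daemonset.
# On OpenShift, use --namespace kube-system instead.
helm upgrade --install istio-cni oci://${HELM_REPO}/cni \
  --namespace istio-system \
  --version ${ISTIO_IMAGE} \
  --set profile=ambient \
  --set global.hub=${REPO} \
  --set global.tag=${ISTIO_IMAGE} \
  --wait
```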
Verify that the components of the Istio ambient control plane are successfully installed. Because the Istio CNI is deployed as a daemon set, the number of CNI pods equals the number of nodes in your cluster. Note that it might take a few seconds for the pods to become available.
```sh
kubectl get pods -A | grep istio
```

Example output:

```
istio-system   istiod-85c4dfd97f-mncj5   1/1     Running   0          40s
istio-system   istio-cni-node-pr5rl      1/1     Running   0          9s
istio-system   istio-cni-node-pvmx2      1/1     Running   0          9s
istio-system   istio-cni-node-6q26l      1/1     Running   0          9s
```
Step 4: Deploy the ambient data plane
Install the ztunnel daemonset.
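As with the charts above, this is a minimal sketch that assumes OCI charts under `${HELM_REPO}` and the top-level `hub` and `tag` values from the upstream ztunnel chart; on OpenShift, install into `kube-system` instead.

```sh
# Hedged sketch: deploy the ztunnel node proxy daemonset.
# On OpenShift, use --namespace kube-system instead.
helm upgrade --install ztunnel oci://${HELM_REPO}/ztunnel \
  --namespace istio-system \
  --version ${ISTIO_IMAGE} \
  --set hub=${REPO} \
  --set tag=${ISTIO_IMAGE} \
  --wait
```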
Verify that the ztunnel pods are successfully installed. Because the ztunnel is deployed as a daemon set, the number of pods equals the number of nodes in your cluster. Note that it might take a few seconds for the pods to become available.
```sh
kubectl get pods -A | grep ztunnel
```

Example output:

```
ztunnel-tvtzn   1/1   Running   0   7s
ztunnel-vtpjm   1/1   Running   0   4s
ztunnel-hllxg   1/1   Running   0   4s
```