Install Istio in ambient mode
Use the Istio Lifecycle Manager to deploy ambient service meshes to your Gloo Mesh Core clusters.
Considerations
Before you install Istio in ambient mode, review the following considerations and requirements.
Gloo 2.6 lifecycle manager changes
The Solo Istio Lifecycle Manager helps you manage your Istio installations and upgrades more easily. In Gloo Mesh Core version 2.6 and later, the Istio lifecycle manager feature is changed so that an Istio lifecycle agent, instead of the Gloo agent, deploys and manages the Istio installations in your clusters. Additionally, the Istio lifecycle agent now translates the input IstioOperator configuration into Istio Helm chart values to deploy the installations, instead of deploying installations based on the IstioOperator configuration directly.
Because of the updates to the Istio lifecycle manager in version 2.6, you currently cannot use the lifecycle manager alongside unmanaged Istio service meshes that you install by using Helm, istioctl, or an IstioOperator within the same environment. To use the Istio lifecycle manager, remove any existing Istio installations, and create managed Istio installations by following the steps in this guide. Note that this limitation will be addressed in future releases. For more information, see the Istio lifecycle manager overview.
Requirements
- In Gloo Mesh Core version 2.6, ambient mode requires the Solo distribution of Istio version 1.22.3 or later (1.22.3-solo).
- In Istio 1.22.0-1.22.3, the ISTIO_DELTA_XDS environment variable must be set to false, as shown in the sketch after this list. For more information, see this upstream Istio issue. Note that this issue is resolved in Istio 1.22.4.
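If you install one of the affected 1.22.0-1.22.3 images, one common way to set a proxy environment variable such as ISTIO_DELTA_XDS for the Envoy-based proxies in the mesh is through the proxyMetadata section of the mesh configuration. The following is a minimal sketch, assuming you merge it into the istioOperatorSpec of the IstioLifecycleManager that you create later in this guide; confirm the exact placement for your setup against the referenced upstream issue.

# Minimal sketch (assumption): set ISTIO_DELTA_XDS=false through the default
# proxy metadata in the mesh config. Merge this into the istioOperatorSpec of
# your IstioLifecycleManager installation.
istioOperatorSpec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        ISTIO_DELTA_XDS: "false"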
Single-cluster limitation
Currently, Istio in ambient mode is supported only for single clusters. Ambient mode is not supported in a multicluster environment in which apps in different clusters communicate through east-west routing as part of a single service mesh. However, you can still use your management cluster to deploy separate ambient service meshes to multiple, individual workload clusters.
Revision and canary upgrade limitations
Currently, revisions are not supported for ambient installations. This limitation is planned to be addressed in a future release. To ensure that you can install and upgrade your ambient service mesh smoothly, do not specify a named revision in the spec.installations.revision field. If necessary, you can explicitly set spec.installations.revision to "".
Because revisions are not supported yet, canary upgrades are also not supported. Instead, you can use the lifecycle manager to perform only in-place upgrades of ambient Istio installations.
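For example, an in-place upgrade typically amounts to updating the image tag in the existing IstioLifecycleManager resource and letting the lifecycle agent roll out the change. The following is a minimal sketch, assuming the istiod-control-plane resource that you create later in this guide; the 1.22.4-solo tag and the lowercase resource name used with kubectl are assumptions for illustration.

# Minimal sketch (assumptions noted above): edit the existing resource and
# change only the image tag, then save to trigger an in-place upgrade.
kubectl edit istiolifecyclemanager istiod-control-plane -n gloo-mesh --context $MGMT_CONTEXT
#   istioOperatorSpec:
#     tag: 1.22.4-solo    # example placeholder for the new Solo Istio patch version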
Prepare your environment
Install Gloo Mesh Core by following the multicluster getting started guide or the Helm setup guide. Do not install Istio as part of your setup.
Set the names of your clusters from your infrastructure provider. If your clusters have different names, specify those names instead.
export MGMT_CLUSTER=mgmt
export REMOTE_CLUSTER=cluster1
Save the kubeconfig contexts for your clusters. Run kubectl config get-contexts, look for your cluster in the CLUSTER column, and get the context name in the NAME column. Note: Do not use context names with underscores. The generated certificate that connects workload clusters to the management cluster uses the context name as a SAN specification, and underscores in SANs are not FQDN-compliant. You can rename a context by running kubectl config rename-context "<oldcontext>" <newcontext>.

export MGMT_CONTEXT=<management-cluster-context>
export REMOTE_CONTEXT=<remote-cluster-context>
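To confirm that the contexts resolve to the clusters that you expect, you can run a quick check against each one.

# Each context should return the nodes of the intended cluster.
kubectl --context $MGMT_CONTEXT get nodes
kubectl --context $REMOTE_CONTEXT get nodes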
Set environment variables for the Solo distribution of Istio repository and image version that you want to install. You can find these values in the Istio images built by Solo.io support article.
export REPO=<repo-key>
export ISTIO_IMAGE=1.22.3-solo
If other Istio installations already exist in your clusters, you must first uninstall them. You currently cannot use the lifecycle manager alongside, or to take over, unmanaged Istio installations that you created by using Helm, istioctl, or an IstioOperator.

If you use Google Kubernetes Engine (GKE) clusters, create the following ResourceQuota in the istio-system namespace of the workload cluster. For more information about this requirement, see the community Istio documentation.

kubectl --context $REMOTE_CONTEXT create namespace istio-system
kubectl --context $REMOTE_CONTEXT -n istio-system apply -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gcp-critical-pods
  namespace: istio-system
spec:
  hard:
    pods: 1000
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values:
      - system-node-critical
EOF
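To confirm that the quota was created as expected, you can retrieve it and review the scope selector.

# Optional check: the quota must exist in istio-system and be scoped to
# system-node-critical pods.
kubectl --context $REMOTE_CONTEXT -n istio-system get resourcequota gcp-critical-pods -o yaml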
Deploy the Istio ambient control plane
Create an IstioLifecycleManager resource in your management cluster to deploy an istiod control plane in ambient mode to the workload cluster.
- Do not specify a named revision in the spec.installations.revision field, which is currently unsupported.
- If you have multiple workload clusters, you can add more entries to the spec.installations.clusters list. However, note that ambient service meshes are currently supported only for single clusters. If you specify multiple clusters in the list, you can deploy an ambient mesh to each cluster, but the service meshes remain separate: apps in the service mesh of one cluster cannot communicate with apps in the service mesh of another cluster through east-west routing.

kubectl apply --context $MGMT_CONTEXT -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: IstioLifecycleManager
metadata:
  name: istiod-control-plane
  namespace: gloo-mesh
spec:
  installations:
  - clusters:
    - name: $REMOTE_CLUSTER
    istioOperatorSpec:
      hub: $REPO
      tag: $ISTIO_IMAGE
      profile: ambient
      components:
        cni:
          namespace: istio-system
          enabled: true
      values:
        ztunnel:
          env:
            L7_ENABLED: true
EOF
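To check that the resource was accepted and the installation is in progress, you can list the lifecycle manager resources on the management cluster. The lowercase resource names in the following commands are assumptions based on the CRD kind; adjust them if your cluster reports different names.

# Review the IstioLifecycleManager and its reported state.
kubectl get istiolifecyclemanagers -n gloo-mesh --context $MGMT_CONTEXT
kubectl describe istiolifecyclemanager istiod-control-plane -n gloo-mesh --context $MGMT_CONTEXT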
Verify that the components of the Istio ambient mesh and the Istio CNI pods are successfully installed. Because ztunnel and the Istio CNI node agent are deployed as daemon sets, the number of ztunnel pods and the number of CNI pods each equal the number of nodes in your cluster. Note that it might take a few seconds for the pods to become available.
kubectl get pods --context $REMOTE_CONTEXT -A | grep istio
Example output:
istio-cni-node-6q26l     1/1   Running   0   1m43s
istio-cni-node-7gg8k     1/1   Running   0   1m43s
istio-cni-node-lcrcd     1/1   Running   0   1m43s
istiod-d765ff7cf-46dbm   1/1   Running   0   2m4s
ztunnel-648wc            1/1   Running   0   2m8s
ztunnel-6rhp5            1/1   Running   0   2m8s
ztunnel-hllxg            1/1   Running   0   2m8s
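To double-check the daemon set math, you can compare the desired and ready counts against the number of nodes. This assumes the default setup in which both daemon sets run in the istio-system namespace.

# DESIRED and READY for both daemon sets should match the node count.
kubectl get daemonset -n istio-system --context $REMOTE_CONTEXT
kubectl get nodes --context $REMOTE_CONTEXT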
To send requests to sample apps from outside your Gloo Mesh Core setup, you can use the following gateway lifecycle manager resource to also deploy an Istio ingress gateway. Note that in an ambient service mesh, gateways do not require an ambient profile.
kubectl --context $MGMT_CONTEXT -n gloo-mesh apply -f - <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: GatewayLifecycleManager
metadata:
  name: istio-ingressgateway
  namespace: gloo-mesh
spec:
  installations:
  - clusters:
    - name: $REMOTE_CLUSTER
    istioOperatorSpec:
      hub: $REPO
      tag: $ISTIO_IMAGE
      profile: empty
      components:
        ingressGateways:
        - enabled: true
          k8s:
            service:
              ports:
              # Port for health checks on path /healthz/ready.
              # For AWS ELBs, must be listed as the first port
              - name: status-port
                port: 15021
                targetPort: 15021
              - name: http2
                port: 80
                targetPort: 8080
              - name: https
                port: 443
                targetPort: 8443
              - name: tls
                port: 15443
                targetPort: 15443
              selector:
                istio: ingressgateway
              type: LoadBalancer
          label:
            app: istio-ingressgateway
            istio: ingressgateway
          name: istio-ingressgateway
          namespace: gloo-mesh-gateways
EOF
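As with the control plane, you can optionally check the gateway installation from the management cluster. The lowercase resource name is an assumption based on the CRD kind.

# Optional check on the management cluster for the gateway installation.
kubectl get gatewaylifecyclemanagers -n gloo-mesh --context $MGMT_CONTEXT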
In the workload cluster, verify that the ingress gateway pod has a status of Running and that the load balancer service has an external address.

kubectl get pods,svc -n gloo-mesh-gateways --context $REMOTE_CONTEXT
Example output:
NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-665d46686f-nhh52   1/1     Running   0          106s

NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                                                                      AGE
istio-ingressgateway   LoadBalancer   10.96.252.49   <externalip>   15021:32378/TCP,80:30315/TCP,443:32186/TCP,31400:30313/TCP,15443:31632/TCP   2m2s
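To reuse the gateway address in later steps, you can save it in an environment variable. The INGRESS_GW_ADDRESS name is just an example; note that some cloud providers populate a hostname instead of an IP address in the load balancer status.

# Save the external address of the ingress gateway for later requests.
# If your provider returns a hostname, change .ip to .hostname in the jsonpath.
export INGRESS_GW_ADDRESS=$(kubectl get svc istio-ingressgateway -n gloo-mesh-gateways \
  --context $REMOTE_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $INGRESS_GW_ADDRESS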