Review the following information about the Istio control plane setup in this guide:

Single cluster

Install a sidecar mesh in a single cluster environment.

Step 1: Prepare the cluster environment

Set up the following tools and environment variables.

  1. Set environment variables for the Solo distribution of Istio that you want to install. You can find these values in the Istio images built by Solo.io support article. For more information, see the Solo distribution of Istio overview.

    # Solo distribution of Istio patch version
    # in the format 1.x.x, with no tags
    export ISTIO_VERSION=1.24.2
    # Repo key for the minor version of the Solo distribution of Istio
    # This is the 12-character hash at the end of the repo URL: 'us-docker.pkg.dev/gloo-mesh/istio-<repo-key>'
    export REPO_KEY=<repo_key>
    
    # Solo distribution of Istio patch version and Solo tag
    # Optionally append other Solo tags as needed
    export ISTIO_IMAGE=${ISTIO_VERSION}-solo
    # Solo distribution of Istio image repo
    export REPO=us-docker.pkg.dev/gloo-mesh/istio-${REPO_KEY}

    Be sure to review the known Istio version restrictions.

  2. Install istioctl, the Istio CLI tool.

    curl -L https://istio.io/downloadIstio | ISTIO_VERSION=${ISTIO_VERSION} sh -
    cd istio-${ISTIO_VERSION}
    export PATH=$PWD/bin:$PATH
  3. Add and update the Helm repository for Istio.

    helm repo add istio https://istio-release.storage.googleapis.com/charts
    helm repo update

Step 2: Install CRDs

Deploy the Istio CRDs and a sidecar control plane to your cluster.

  1. Save the name and kubeconfig context of a workload cluster in the following environment variables. Each time you repeat the steps in this guide, you change these variables to the next workload cluster’s name and context.

    export CLUSTER_NAME=<cluster-name>
    export CLUSTER_CONTEXT=<cluster-context>
  2. Install the Istio CRDs.

    helm upgrade --install istio-base istio/base \
    -n istio-system \
    --create-namespace \
    --kube-context ${CLUSTER_CONTEXT} \
    --version ${ISTIO_VERSION} \
    --set defaultRevision=main
  3. Create the istio-config namespace. This namespace serves as the administrative root namespace for Istio configuration.

    kubectl create namespace istio-config --context ${CLUSTER_CONTEXT}
  4. OpenShift only: Install the CNI plug-in, which is required for using Istio in OpenShift.

    helm install istio-cni istio/cni \
    --namespace kube-system \
    --kube-context ${CLUSTER_CONTEXT} \
    --version ${ISTIO_VERSION} \
    --set cni.cniBinDir=/var/lib/cni/bin \
    --set cni.cniConfDir=/etc/cni/multus/net.d \
    --set cni.cniConfFileName="istio-cni.conf" \
    --set cni.chained=false \
    --set cni.privileged=true \
    --set global.platform=openshift

Step 3: Install the Istio control plane

  1. Prepare a Helm values file for the istiod control plane. You can further edit the file to provide your own details for production-level settings.

    1. Download an example file, istiod.yaml, and update the environment variables with the values that you previously set. The provided Helm values files are configured with production-level settings; however, depending on your environment, you might need to edit settings to achieve specific Istio functionality.

      curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh-enterprise/istio-install/manual-helm/istiod-1.24+.yaml > istiod.yaml
      envsubst < istiod.yaml > istiod-values.yaml
      open istiod-values.yaml
    2. Optional: Trust domain validation is disabled by default in the profile that you downloaded in the previous step. If you have a multicluster mesh setup and you want to enable trust domain validation, add all the clusters that are part of your mesh to the meshConfig.trustDomainAliases field, excluding the cluster that you are currently preparing for the istiod installation. For example, say that you have three clusters that belong to your mesh: cluster1, cluster2, and cluster3. When you install istiod in cluster1, you set the following values for your trust domain:

      ...
      meshConfig:
        trustDomain: cluster1
        trustDomainAliases: ["cluster2","cluster3"]

      Then, when you move on to install istiod in cluster2, you set trustDomain: cluster2 and trustDomainAliases: ["cluster1","cluster3"]. You repeat this step for all the clusters that belong to your service mesh. Note that as you add or delete clusters from your service mesh, you must make sure that you update the trustDomainAliases field for all of the clusters.

  2. Create the istiod control plane in your cluster.
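
    For example, a minimal installation might look like the following. This sketch assumes the istio/istiod chart from the Helm repository that you added earlier and the istiod-values.yaml file that you prepared, which is expected to configure the Solo image registry and the main revision; adjust the release name and settings for your environment.

    # Minimal sketch: install istiod with the values file that you prepared.
    # The release name is an assumption; the revision and Solo images are
    # expected to come from istiod-values.yaml.
    helm upgrade --install istiod istio/istiod \
    -n istio-system \
    --kube-context ${CLUSTER_CONTEXT} \
    --version ${ISTIO_VERSION} \
    -f istiod-values.yaml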

  3. After the installation is complete, verify that the Istio control plane pods are running.

    kubectl get pods -n istio-system --context ${CLUSTER_CONTEXT}

    Example output:

    NAME                          READY   STATUS    RESTARTS   AGE
    istiod-main-bb86b959f-msrg7   1/1     Running   0          2m45s
    istiod-main-bb86b959f-w29cm   1/1     Running   0          3m

Multicluster

Install a multicluster sidecar service mesh with Helm by using Solo.io’s multicluster peering capability.

Considerations

Before you install a multicluster sidecar mesh, review the following considerations and requirements.

Version and license requirements

  • In Gloo Mesh version 2.7 and later, multicluster setups require the Solo distribution of Istio version 1.24.3 or later (1.24.3-solo), including the Solo distribution of istioctl. This distribution requires a Gloo Mesh Enterprise license. If you do not have one, contact an account representative.

Components

In the following steps, you install the Istio ambient components in each workload cluster to successfully create east-west gateways and establish multicluster peering, even if you plan to use a sidecar mesh. However, sidecar mesh setups continue to use sidecar injection for your workloads. Your workloads are not added to an ambient mesh.

Revision and canary upgrade limitations

The upgrade guides in this documentation show you how to perform in-place upgrades for your Istio components, which is the recommended upgrade strategy.

Step 1: Prepare the cluster environments

  1. Set your Gloo Mesh Enterprise license key as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing.

    export GLOO_MESH_LICENSE_KEY=<license_key>
  2. Set environment variables for the Solo distribution of Istio that you want to install. You can find these values in the Ambient section of the Istio images built by Solo.io support article.

    # Solo distribution of Istio patch version
    # in the format 1.x.x, with no tags
    export ISTIO_VERSION=1.24.3
    # Repo key for the minor version of the Solo distribution of Istio
    # This is the 12-character hash at the end of the repo URL: 'us-docker.pkg.dev/gloo-mesh/istio-<repo-key>'
    export REPO_KEY=<repo_key>
    
    # Solo distribution of Istio patch version and Solo tag
    # Optionally append other Solo tags as needed
    export ISTIO_IMAGE=${ISTIO_VERSION}-solo
    # Solo distribution of Istio patch version and tag,
    # image repo, Helm repo, and binary repo
    export REPO=us-docker.pkg.dev/gloo-mesh/istio-${REPO_KEY}
    export HELM_REPO=us-docker.pkg.dev/gloo-mesh/istio-helm-${REPO_KEY}
    export BINARY_REPO=https://console.cloud.google.com/storage/browser/istio-binaries-${REPO_KEY}/${ISTIO_IMAGE}
  3. Download the Solo distribution of Istio binary and install istioctl, which you use for multicluster linking and gateway commands.

    1. Navigate to the storage repository for the Solo distribution of Istio binaries.
      open ${BINARY_REPO}
    2. Download the tar.gz file for your system, such as istio-1.24.3-solo-osx-amd64.tar.gz.
    3. Extract the downloaded tar.gz file.
    4. Navigate to the package directory and add the istioctl client to your system’s PATH.
      cd istio-${ISTIO_IMAGE}
      export PATH=$PWD/bin:$PATH
    5. Verify that the istioctl client runs the Solo distribution of Istio that you want to install.
      istioctl version --remote=false
      Example output:
      client version: 1.24.3-solo
  4. If you use Google Kubernetes Engine (GKE) clusters, create the following ResourceQuota in the istio-system namespace of each cluster. For more information about this requirement, see the community Istio documentation.
    kubectl create namespace istio-system
    kubectl -n istio-system apply -f - <<EOF
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: gcp-critical-pods
      namespace: istio-system
    spec:
      hard:
        pods: 1000
      scopeSelector:
        matchExpressions:
        - operator: In
          scopeName: PriorityClass
          values:
          - system-node-critical
    EOF

Step 2: Create a shared root of trust

Each cluster in the multicluster setup must share a common root of trust. You can achieve this by providing a root certificate that is signed by a PKI provider, or by creating a custom root certificate for this purpose. The root certificate signs a unique intermediate CA certificate for each cluster.
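
For example, if you already generated a root certificate and a per-cluster intermediate CA certificate, one common approach that follows the community Istio plug-in CA convention is to provide them to istiod as a cacerts secret in each cluster before you install the control plane. The file names and context placeholder in this sketch are assumptions; substitute your own certificate files and cluster contexts.

# Create the namespace if it does not already exist in the cluster.
kubectl create namespace istio-system --context <cluster_context>
# The cacerts secret holds this cluster's intermediate CA cert and key,
# the shared root cert, and the certificate chain.
kubectl create secret generic cacerts -n istio-system \
--context <cluster_context> \
--from-file=ca-cert.pem \
--from-file=ca-key.pem \
--from-file=root-cert.pem \
--from-file=cert-chain.pem

Repeat this for each cluster, by using that cluster's own intermediate CA certificate and kubeconfig context.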

Step 3: Install CRDs

  1. Save the name and kubeconfig context of a cluster where you want to install Istio in the following environment variables. Each time you repeat the steps in this guide, you change these variables to the next workload cluster’s name and context.

    export CLUSTER_NAME=<cluster-name>
    export CLUSTER_CONTEXT=<cluster-context>
  2. Install the base chart, which contains the CRDs and cluster roles required to set up Istio.
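
    For example, a minimal installation might look like the following. This sketch assumes that the base chart for the Solo distribution of Istio is available in the Helm registry that you set in the HELM_REPO variable and that you use a revision named main; verify the chart location and settings against the values reference for your version.

    helm upgrade --install istio-base oci://${HELM_REPO}/base \
    --namespace istio-system \
    --create-namespace \
    --kube-context ${CLUSTER_CONTEXT} \
    --version ${ISTIO_IMAGE} \
    --set defaultRevision=main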

  3. Apply the CRDs for the Kubernetes Gateway API to your cluster, which are required to create components such as waypoint proxies for L7 traffic policies, gateways with the Gateway resource, and more.

    kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml --context ${CLUSTER_CONTEXT}

Step 4: Deploy the Istio control plane

  1. Create the istiod control plane in your cluster.
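
    For example, a minimal installation might look like the following. This sketch assumes that the istiod chart is pulled from the Solo Helm registry in the HELM_REPO variable; the Solo-specific keys, such as license.value, are assumptions, so verify the exact chart location and settings against the values reference for your version. Setting global.network to the cluster name is what the istio-system namespace label in the data plane steps refers to.

    helm upgrade --install istiod oci://${HELM_REPO}/istiod \
    --namespace istio-system \
    --kube-context ${CLUSTER_CONTEXT} \
    --version ${ISTIO_IMAGE} \
    --set global.hub=${REPO} \
    --set global.tag=${ISTIO_IMAGE} \
    --set profile=ambient \
    --set revision=main \
    --set global.multiCluster.clusterName=${CLUSTER_NAME} \
    --set global.network=${CLUSTER_NAME} \
    --set license.value=${GLOO_MESH_LICENSE_KEY}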

  2. Install the Istio CNI node agent daemonset. Note that although the CNI is included in this section, it is technically not part of the control plane or data plane.
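
    For example, a minimal installation might look like the following. This sketch assumes the cni chart from the same Solo Helm registry and the ambient profile; adjust the settings, such as the CNI paths for OpenShift, to match your platform.

    helm upgrade --install istio-cni oci://${HELM_REPO}/cni \
    --namespace istio-system \
    --kube-context ${CLUSTER_CONTEXT} \
    --version ${ISTIO_IMAGE} \
    --set global.hub=${REPO} \
    --set global.tag=${ISTIO_IMAGE} \
    --set profile=ambient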

  3. Verify that the components of the Istio control plane are successfully installed. Because the Istio CNI is deployed as a daemon set, the number of CNI pods equals the number of nodes in your cluster. Note that it might take a few seconds for the pods to become available.

    kubectl get pods -A --context ${CLUSTER_CONTEXT} | grep istio

    Example output:

    istio-system   istiod-main-85c4dfd97f-mncj5   1/1     Running   0          40s
    istio-system   istio-cni-node-pr5rl           1/1     Running   0          9s
    istio-system   istio-cni-node-pvmx2           1/1     Running   0          9s
    istio-system   istio-cni-node-6q26l           1/1     Running   0          9s

Step 5: Deploy the Istio data plane

  1. Install the ztunnel daemonset. Note that this component is required to successfully peer clusters together.
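
    For example, a minimal installation might look like the following. This sketch assumes the ztunnel chart from the same Solo Helm registry; the hub, tag, and multiCluster.clusterName keys are assumptions, so verify them against the values reference for your version.

    helm upgrade --install ztunnel oci://${HELM_REPO}/ztunnel \
    --namespace istio-system \
    --kube-context ${CLUSTER_CONTEXT} \
    --version ${ISTIO_IMAGE} \
    --set hub=${REPO} \
    --set tag=${ISTIO_IMAGE} \
    --set multiCluster.clusterName=${CLUSTER_NAME}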

  2. Verify that the ztunnel pods are successfully installed. Because the ztunnel is deployed as a daemon set, the number of pods equals the number of nodes in your cluster. Note that it might take a few seconds for the pods to become available.

    kubectl get pods -A --context ${CLUSTER_CONTEXT} | grep ztunnel

    Example output:

    ztunnel-tvtzn             1/1     Running   0          7s
    ztunnel-vtpjm             1/1     Running   0          4s
    ztunnel-hllxg             1/1     Running   0          4s
  3. Label the istio-system namespace with the cluster’s network name, which you previously set to your cluster’s name in the global.network field of the istiod installation. The Istio control plane uses this label internally to group pods that exist in the same L3 network.

    kubectl label namespace istio-system --context ${CLUSTER_CONTEXT} topology.istio.io/network=${CLUSTER_NAME}
  4. Create an east-west gateway in the istio-eastwest namespace. An east-west gateway facilitates traffic between services in each cluster in your multicluster mesh.

    • You can use the following istioctl command to quickly create the east-west gateway.
      kubectl create namespace istio-eastwest --context ${CLUSTER_CONTEXT}
      istioctl multicluster expose --namespace istio-eastwest --context ${CLUSTER_CONTEXT}
    • To take a look at the Gateway resource that this command creates, you can include the --generate flag in the command.
      kubectl create namespace istio-eastwest --context ${CLUSTER_CONTEXT}
      istioctl multicluster expose --namespace istio-eastwest --context ${CLUSTER_CONTEXT} --generate
      In this example output, the gatewayClassName that is used, istio-eastwest, is included by default when you install Istio in ambient mode.
      apiVersion: gateway.networking.k8s.io/v1
      kind: Gateway
      metadata:
        labels:
          istio.io/expose-istiod: "15012"
          topology.istio.io/network: "<cluster_network_name>"
        name: istio-eastwest
        namespace: istio-eastwest
      spec:
        gatewayClassName: istio-eastwest
        listeners:
        - name: cross-network
          port: 15008
          protocol: HBONE
          tls:
            mode: Passthrough
        - name: xds-tls
          port: 15012
          protocol: TLS
          tls:
            mode: Passthrough

Step 6: Repeat steps 3 - 5 for each cluster

For each cluster that you want to include in the multicluster mesh setup, repeat steps 3 - 5 to install the CRDs, control plane, and data plane in each cluster. Remember to change the cluster name and context variables each time you repeat the steps.

export CLUSTER_NAME=<cluster-name>
export CLUSTER_CONTEXT=<cluster-context>

Step 7: Link clusters

Linking clusters enables cross-cluster service discovery and allows traffic to be routed through east-west gateways across clusters.

  1. Verify that the contexts for the clusters that you want to include in the multicluster mesh are listed in your kubeconfig file.

    kubectl config get-contexts
    • In the output, note the names of the cluster contexts, which you use in the next step to link the clusters.
    • If you have multiple kubeconfig files, you can generate a merged kubeconfig file by running the following command.
      KUBECONFIG=<kubeconfig_file1>.yaml:<file2>.yaml:<file3>.yaml kubectl config view --flatten
  2. Using the names of the cluster contexts, link the clusters so that they can communicate. Note that you can link the clusters either bi-directionally or asymmetrically. In a standard bi-directional setup, services in any of the linked clusters can send requests to and receive requests from services in any of the other linked clusters. In an asymmetric setup, one cluster can send requests to another cluster, but the other cluster cannot send requests back to the first cluster.
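
    For example, to link two clusters bi-directionally, you might run a command similar to the following. This sketch assumes the istioctl multicluster link subcommand from the Solo distribution of istioctl that you downloaded earlier; the context names are placeholders.

    # Link the clusters through the east-west gateways in the istio-eastwest namespace.
    # Replace the placeholder contexts with the context names from your kubeconfig.
    istioctl multicluster link --namespace istio-eastwest \
    --contexts=<cluster1_context>,<cluster2_context>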

Next

Add apps to the service mesh.