# Manually deploy Istio
Use Istio Helm charts to configure and deploy an Istio control plane and gateways in each workload cluster. The deployments are created by using Helm to facilitate future version upgrades. For example, you can fork Istio's existing Helm chart to add it to your existing CI/CD workflow.
For more information about manually deploying Istio, review the following:
- This installation guide installs production-level Gloo Istio, a hardened Istio enterprise image. For more information, see About Gloo Istio.
- For information about the namespaces that are used in this guide and other deployment recommendations, see Best practices for Istio in prod.
- The east-west gateways in this architecture allow services in one mesh to route cross-cluster traffic to services in the other mesh. If you install Istio into only one cluster for a single-cluster Gloo Mesh setup, the east-west gateway deployment is not required.
- For more information about using Istio Helm charts, see the Istio documentation.
- For more information about the example resource files that are provided in the following steps, see the GitHub repository for Gloo Mesh Use Cases.
## Before you begin
- Set the names of your clusters from your infrastructure provider. If your clusters have different names, specify those names instead.

  ```sh
  export REMOTE_CLUSTER1=cluster1
  export REMOTE_CLUSTER2=cluster2
  ...
  ```
- Save the kubeconfig contexts for your clusters. Run `kubectl config get-contexts`, look for your cluster in the `CLUSTER` column, and get the context name in the `NAME` column. Note: Do not use context names with underscores. The context name is used as a SAN specification in the generated certificate that connects workload clusters to the management cluster, and underscores in SANs are not FQDN-compliant. You can rename a context by running `kubectl config rename-context "<oldcontext>" <newcontext>`.

  ```sh
  export MGMT_CONTEXT=<management-cluster-context>
  export REMOTE_CONTEXT1=<remote-cluster1-context>
  export REMOTE_CONTEXT2=<remote-cluster2-context>
  ...
  ```
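As a quick sanity check for the underscore restriction, you might scan a context name before exporting it. A minimal sketch; the helper function and example context names are hypothetical, not part of Gloo Mesh:

```sh
# Hypothetical helper: flag context names that contain underscores,
# which are not FQDN-compliant when used in certificate SANs.
check_context_name() {
  case "$1" in
    *_*) echo "rename needed: $1 -> $(printf '%s' "$1" | tr '_' '-')" ;;
    *)   echo "ok: $1" ;;
  esac
}

check_context_name "gke_my-project_us-east1_cluster1"
check_context_name "cluster1-context"   # ok: cluster1-context
```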
- Install `helm`, the Kubernetes package manager.
- To use a Gloo Mesh hardened image of Istio, you must have a Solo account. Log in to Support Center and get the repo key for the Istio version that you want to install from the Istio images built by Solo.io support article. If you do not have a Solo account or have trouble logging in, contact your account administrator.
- Istio version 1.17 does not support the Gloo legacy metrics pipeline. If you run the legacy metrics pipeline, before you upgrade or install Istio with version 1.17, be sure that you [set up the Gloo OpenTelemetry (OTel) pipeline](https://docs.solo.io/gloo-mesh-enterprise/main/observability/pipeline/setup/) instead in your new or existing Gloo Mesh installation.
## Step 1: Deploy Istio control planes
Deploy an Istio control plane in each workload cluster. The provided Helm values files are configured with production-level settings; however, depending on your environment, you might need to edit settings to achieve specific Istio functionality.
Note that the values file includes a `revision` label that matches the Istio version of the resource to facilitate canary-based upgrades. This revision label helps you upgrade the version of the Istio control plane more easily, as documented in the Istio upgrade guide.
- Save the Istio version information as environment variables.
  - For `REPO`, use a Gloo Istio repo key that you can get by logging in to the Support Center and reviewing the Istio images built by Solo.io support article. For more information, see Get the Gloo Istio version that you want to use.
  - For `ISTIO_IMAGE`, save the version that you downloaded, such as 1.17.2, and append the `solo` tag, which is required to use many enterprise features. You can optionally append other Gloo Istio tags, as described in About Gloo Istio. If you downloaded a different version than the following, make sure to specify that version instead.
  - For `REVISION`, take the Istio version number and replace the periods with hyphens, such as `1-17-2`.

  ```sh
  export REPO=<repo-key>
  export ISTIO_IMAGE=1.17.2-solo
  export REVISION=1-17-2
  ```
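Rather than typing the revision by hand, you can derive it from the image version. A small sketch, under the assumption that the tag always follows the first hyphen in the image version:

```sh
export ISTIO_IMAGE=1.17.2-solo
# Strip the tag (everything after the first '-'), then swap dots for hyphens.
export REVISION=$(printf '%s' "${ISTIO_IMAGE%%-*}" | tr '.' '-')
echo "$REVISION"   # 1-17-2
```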
- Install `istioctl`, the Istio CLI tool. Download the same version that you want to use for Istio in your clusters, such as 1.17.2, and verify that the version is supported for the Kubernetes or OpenShift version of your workload clusters. To check your installed version, run `istioctl version`.
- Create the `istio-config` namespace. This namespace serves as the administrative root namespace for Istio configuration. For more information, see Plan Istio namespaces.

  ```sh
  kubectl create namespace istio-config --context $REMOTE_CONTEXT1
  kubectl create namespace istio-config --context $REMOTE_CONTEXT2
  ```
- Add and update the Helm repository for Istio.

  ```sh
  helm repo add istio https://istio-release.storage.googleapis.com/charts
  helm repo update
  ```
- Install the Istio CRDs in each cluster.

  ```sh
  helm upgrade --install istio-base istio/base \
    -n istio-system \
    --version ${ISTIO_IMAGE} \
    --kube-context ${REMOTE_CONTEXT1} \
    --create-namespace
  helm upgrade --install istio-base istio/base \
    -n istio-system \
    --version ${ISTIO_IMAGE} \
    --kube-context ${REMOTE_CONTEXT2} \
    --create-namespace
  ```
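The same chart is installed once per cluster, so if you manage many workload clusters you can collapse the repetition into a loop. A sketch with `echo` prefixed so the commands dry-run instead of executing; the context values shown are examples:

```sh
REMOTE_CONTEXT1=cluster1-context   # example values; use your own contexts
REMOTE_CONTEXT2=cluster2-context
ISTIO_IMAGE=1.17.2-solo

for ctx in "$REMOTE_CONTEXT1" "$REMOTE_CONTEXT2"; do
  # Remove the leading 'echo' to actually run the install.
  echo helm upgrade --install istio-base istio/base \
    -n istio-system \
    --version "${ISTIO_IMAGE}" \
    --kube-context "$ctx" \
    --create-namespace
done
```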
- OpenShift only: Deploy the Istio CNI plug-in, and elevate the `istio-system` service account permissions. For more information about using Istio on OpenShift, see the Istio documentation for OpenShift installation.
  - Install the CNI plug-in in each cluster, which is required for using Istio in OpenShift.

    ```sh
    helm install istio-cni istio/cni \
      --namespace kube-system \
      --kube-context ${REMOTE_CONTEXT1} \
      --version ${ISTIO_IMAGE} \
      --set cni.cniBinDir=/var/lib/cni/bin \
      --set cni.cniConfDir=/etc/cni/multus/net.d \
      --set cni.cniConfFileName="istio-cni.conf" \
      --set cni.chained=false \
      --set cni.privileged=true
    helm install istio-cni istio/cni \
      --namespace kube-system \
      --kube-context ${REMOTE_CONTEXT2} \
      --version ${ISTIO_IMAGE} \
      --set cni.cniBinDir=/var/lib/cni/bin \
      --set cni.cniConfDir=/etc/cni/multus/net.d \
      --set cni.cniConfFileName="istio-cni.conf" \
      --set cni.chained=false \
      --set cni.privileged=true
    ```
  - Elevate the permissions of the `istio-system` service account that will be created. This permission allows the Istio sidecars to make use of a user ID that is normally restricted by OpenShift.

    ```sh
    oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-system --context $REMOTE_CONTEXT1
    oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-config --context $REMOTE_CONTEXT1
    oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-ingress --context $REMOTE_CONTEXT1
    oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-eastwest --context $REMOTE_CONTEXT1
    oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-system --context $REMOTE_CONTEXT2
    oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-config --context $REMOTE_CONTEXT2
    oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-ingress --context $REMOTE_CONTEXT2
    oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-eastwest --context $REMOTE_CONTEXT2
    ```
- Prepare a Helm values file for the `istiod` control plane. You can further edit the file to provide your own details for production-level settings.
  - Download an example file, `istiod.yaml`, and update the environment variables with the values that you previously set.

    ```sh
    curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/manual-helm/istiod.yaml > istiod.yaml
    envsubst < istiod.yaml > istiod-values.yaml
    ```
  - Optional: Trust domain validation is disabled by default in the profile that you downloaded in the previous step. If you have a multicluster mesh setup and you want to enable trust domain validation, add all the clusters that are part of your mesh in the `meshConfig.trustDomainAliases` field, excluding the cluster that you currently prepare for the istiod installation. For example, let's say you have 3 clusters that belong to your mesh: `cluster1`, `cluster2`, and `cluster3`. When you install istiod in `cluster1`, you set the following values for your trust domain:

    ```yaml
    ...
    meshConfig:
      trustDomain: cluster1
      trustDomainAliases: ["cluster2","cluster3"]
    ```

    Then, when you move on to install istiod in `cluster2`, you set `trustDomain: cluster2` and `trustDomainAliases: ["cluster1","cluster3"]`. You repeat this step for all the clusters that belong to your service mesh. Note that as you add or delete clusters from your service mesh, you must make sure that you update the `trustDomainAliases` field for all of the clusters.
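Because each cluster's `trustDomainAliases` is simply the set of all mesh clusters minus the cluster itself, you can sketch the per-cluster values programmatically before editing the files. The cluster names below are the example ones from this step:

```sh
CLUSTERS="cluster1 cluster2 cluster3"

# For each cluster, build the alias list from all of its peers.
for self in $CLUSTERS; do
  aliases=""
  for peer in $CLUSTERS; do
    [ "$peer" = "$self" ] && continue
    aliases="${aliases:+$aliases,}\"$peer\""
  done
  echo "$self -> trustDomain: $self, trustDomainAliases: [$aliases]"
done
# cluster1 -> trustDomain: cluster1, trustDomainAliases: ["cluster2","cluster3"]
```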
- Create the `istiod` control plane in your clusters.

  ```sh
  helm upgrade --install istiod-${REVISION} istio/istiod \
    --version ${ISTIO_IMAGE} \
    --namespace istio-system \
    --kube-context ${REMOTE_CONTEXT1} \
    --wait \
    -f istiod-values.yaml
  helm upgrade --install istiod-${REVISION} istio/istiod \
    --version ${ISTIO_IMAGE} \
    --namespace istio-system \
    --kube-context ${REMOTE_CONTEXT2} \
    --wait \
    -f istiod-values.yaml
  ```
- After the installation is complete, verify that the Istio control plane pods are running.

  ```sh
  kubectl get pods -n istio-system --context $REMOTE_CONTEXT1
  kubectl get pods -n istio-system --context $REMOTE_CONTEXT2
  ```

  Example output for 2 replicas in `cluster1`:

  ```
  NAME                            READY   STATUS    RESTARTS   AGE
  istiod-1-17-2-7b96cb895-4nzv9   1/1     Running   0          30s
  istiod-1-17-2-7b96cb895-r7l8k   1/1     Running   0          30s
  ```
## Step 2: Deploy Istio east-west gateways
If you have a multicluster Gloo Mesh setup, deploy an Istio east-west gateway into each workload cluster. An east-west gateway lets services in one mesh communicate with services in another.
- Prepare a Helm values file for the Istio east-west gateway. This sample command downloads an example file, `eastwest-gateway.yaml`, and updates the environment variables with the values that you previously set. You can further edit the file to provide your own details for production-level settings.

  ```sh
  curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/manual-helm/eastwest-gateway.yaml > eastwest-gateway.yaml
  envsubst < eastwest-gateway.yaml > eastwest-gateway-values.yaml
  ```
- Create the east-west gateway in each cluster.

  ```sh
  helm upgrade --install istio-eastwestgateway-${REVISION} istio/gateway \
    --version ${ISTIO_IMAGE} \
    --create-namespace \
    --namespace istio-eastwest \
    --kube-context ${REMOTE_CONTEXT1} \
    --wait \
    -f eastwest-gateway-values.yaml
  helm upgrade --install istio-eastwestgateway-${REVISION} istio/gateway \
    --version ${ISTIO_IMAGE} \
    --create-namespace \
    --namespace istio-eastwest \
    --kube-context ${REMOTE_CONTEXT2} \
    --wait \
    -f eastwest-gateway-values.yaml
  ```
- Verify that the east-west gateway pods are running and the load balancer service is assigned an external address.

  ```sh
  kubectl get pods -n istio-eastwest --context $REMOTE_CONTEXT1
  kubectl get svc -n istio-eastwest --context $REMOTE_CONTEXT1
  kubectl get pods -n istio-eastwest --context $REMOTE_CONTEXT2
  kubectl get svc -n istio-eastwest --context $REMOTE_CONTEXT2
  ```

  Example output for `cluster1`:

  ```
  NAME                                            READY   STATUS    RESTARTS   AGE
  istio-eastwestgateway-1-17-2-7f6f8f7fc7-ncrzq   1/1     Running   0          11s
  istio-eastwestgateway-1-17-2-7f6f8f7fc7-ncrzq   1/1     Running   0          48s

  NAME                           TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                                                                                                      AGE
  istio-eastwestgateway-1-17-2   LoadBalancer   10.96.166.166   <externalip>   15021:32343/TCP,80:31685/TCP,443:30877/TCP,31400:31030/TCP,15443:31507/TCP,15012:30668/TCP,15017:30812/TCP   13s
  ```
AWS clusters only: For the Elastic Load Balancer (ELB) instance that is automatically created for you to back the east-west gateway service, verify that the health check shows a healthy state. Gloo Mesh configures the east-west gateway to listen on HTTPS port 15443. However, when the ELB is created, the first port that is defined in the Kubernetes service manifest is used to perform the health check. This port might be different from the port that Gloo Mesh configures. For your ELB health check to pass, you might need to configure the load balancer to run the health check on port 15443.
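Depending on your cluster's cloud controller version, you might also be able to pin the health-check port declaratively on the gateway service instead of editing the ELB by hand, by using the in-tree AWS load balancer service annotation. A sketch only; verify that your controller honors this annotation before relying on it:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: istio-eastwestgateway-1-17-2
  namespace: istio-eastwest
  annotations:
    # Ask the AWS cloud controller to health-check the gateway's HTTPS port.
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "15443"
```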
## Step 3 (optional): Deploy Istio ingress gateways
If you have a Gloo Gateway license, deploy an Istio ingress gateway to allow incoming traffic requests to your Istio-managed apps.
- Prepare a Helm values file for the Istio ingress gateway. This sample command downloads an example file, `ingress-gateway.yaml`, and updates the environment variables with the values that you previously set. You can further edit the file to provide your own details for production-level settings.

  ```sh
  curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/manual-helm/ingress-gateway.yaml > ingress-gateway.yaml
  envsubst < ingress-gateway.yaml > ingress-gateway-values.yaml
  ```
- Create the ingress gateway in each cluster.

  ```sh
  helm upgrade --install istio-ingressgateway-${REVISION} istio/gateway \
    --version ${ISTIO_IMAGE} \
    --create-namespace \
    --namespace istio-ingress \
    --kube-context ${REMOTE_CONTEXT1} \
    --wait \
    -f ingress-gateway-values.yaml
  helm upgrade --install istio-ingressgateway-${REVISION} istio/gateway \
    --version ${ISTIO_IMAGE} \
    --create-namespace \
    --namespace istio-ingress \
    --kube-context ${REMOTE_CONTEXT2} \
    --wait \
    -f ingress-gateway-values.yaml
  ```
- Verify that the ingress gateway pods are running and the load balancer service is assigned an external address.

  ```sh
  kubectl get pods -n istio-ingress --context $REMOTE_CONTEXT1
  kubectl get svc -n istio-ingress --context $REMOTE_CONTEXT1
  kubectl get pods -n istio-ingress --context $REMOTE_CONTEXT2
  kubectl get svc -n istio-ingress --context $REMOTE_CONTEXT2
  ```

  Example output for `cluster1`:

  ```
  NAME                                           READY   STATUS    RESTARTS   AGE
  istio-ingressgateway-1-17-2-665d46686f-nhh52   1/1     Running   0          106s
  istio-ingressgateway-1-17-2-665d46686f-tlp5j   1/1     Running   0          2m1s

  NAME                          TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                                                                      AGE
  istio-ingressgateway-1-17-2   LoadBalancer   10.96.252.49   <externalip>   15021:32378/TCP,80:30315/TCP,443:32186/TCP,31400:30313/TCP,15443:31632/TCP   2m2s
  ```
AWS clusters only: For the Elastic Load Balancer (ELB) instance that is automatically created for you to back the ingress gateway service, verify that the health check shows a healthy state. Gloo Mesh configures the ingress gateway to listen on HTTPS port 15443. However, when the ELB is created, the first port that is defined in the Kubernetes service manifest is used to perform the health check. This port might be different from the port that Gloo Mesh configures. For your ELB health check to pass, you might need to configure the load balancer to run the health check on port 15443.
- Optional for OpenShift: Expose the load balancer by using an OpenShift route.

  ```sh
  oc -n istio-ingress expose svc istio-ingressgateway-1-17-2 --port=http2 --context $REMOTE_CONTEXT1
  oc -n istio-ingress expose svc istio-ingressgateway-1-17-2 --port=http2 --context $REMOTE_CONTEXT2
  ```
## Step 4: Deploy workloads
Now that Istio is up and running on all your workload clusters, you can create service namespaces for your teams to run app workloads in.
- OpenShift only: In each workload project, create a NetworkAttachmentDefinition and elevate the service account.
  - Create a NetworkAttachmentDefinition custom resource for each project where you want to deploy workloads, such as the `bookinfo` project.

    ```sh
    cat <<EOF | oc -n bookinfo --context $REMOTE_CONTEXT1 create -f -
    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: istio-cni
    EOF
    cat <<EOF | oc -n bookinfo --context $REMOTE_CONTEXT2 create -f -
    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: istio-cni
    EOF
    ```
  - Elevate the permissions of the service account in each project where you want to deploy workloads, such as the `bookinfo` project. This permission allows the Istio sidecars to make use of a user ID that is normally restricted by OpenShift.

    ```sh
    oc adm policy add-scc-to-group anyuid system:serviceaccounts:bookinfo --context $REMOTE_CONTEXT1
    oc adm policy add-scc-to-group anyuid system:serviceaccounts:bookinfo --context $REMOTE_CONTEXT2
    ```
- For any workload namespace, such as `bookinfo`, label the namespace with the revision so that Istio sidecars are deployed to your app pods.

  ```sh
  kubectl label ns bookinfo istio.io/rev=$REVISION --overwrite --context $REMOTE_CONTEXT1
  ```
- Deploy apps and services to your workload namespaces. For example, you might start out with the Bookinfo sample application for multicluster or single cluster environments. Those steps guide you through creating workspaces for your workloads, deploying Bookinfo across workload clusters, and using ingress and east-west gateways to shift traffic across clusters.
## Next steps
- If you haven't already, install Gloo Mesh Enterprise so that Gloo Mesh can manage your Istio service mesh resources. You don't need to directly configure any Istio resources going forward.
- Review how Gloo Mesh custom resources are automatically translated into Istio resources.
- Try out the Policies for steps to secure, observe, and control network traffic.