Deploy Istio in production

Deploy an Istio operator, and use an IstioOperator resource to configure and deploy an Istio control plane in each workload cluster. Installations that use the Istio operator are recommended because the operator facilitates Istio control plane upgrades through the canary deployment model.

For more information about deploying Istio in production, review the following architecture.

Figure: Production-level IstioOperator deployment architecture

Note that the east-west gateways in this architecture allow services in one mesh to route cross-cluster traffic to services in the other mesh. If you install Istio into only one cluster for a single-cluster Gloo Mesh setup, the east-west gateway deployment is not required.

Before you begin

Throughout this guide, you use example configuration files that have pre-filled values. You can update some of the values, but unexpected behaviors might occur. For example, if you change the default istio-ingressgateway name, you cannot also use Kubernetes horizontal pod autoscaling. For more information, see the Troubleshooting docs.

Step 1: Deploy the Istio operator

Start by creating an Istio operator in the istio-operator namespace of your workload cluster. The Istio operator deployment translates an IstioOperator resource that you create into an Istio control plane in your cluster. For more information, see the Istio operator documentation.

Note that the following deployment is created by using Helm to facilitate future version upgrades. For example, you can fork Istio's existing Helm chart to add it to your existing CI/CD workflow.

  1. Install istioctl, the Istio CLI tool. Be sure to download a version of Istio that is supported for the version of your workload clusters. To check your installed version, run istioctl version.

    Download the same version that you want to use for Istio in your clusters, such as 1.16.2.

  2. Navigate to the Istio directory.

    cd istio-<version>
    
  3. Save the Istio version information as environment variables.

    • For REPO, use a Gloo Istio repo key, which you can get by logging in to the Support Center and reviewing the "Istio images built by Solo.io" support article. For more information, see Get the Gloo Istio version that you want to use.
    • For ISTIO_IMAGE, save the version that you downloaded, such as 1.16.2, and append the solo tag, which is required to use many enterprise features. You can optionally append other Gloo Istio tags, as described in About Gloo Istio. If you downloaded a different version than the following, make sure to specify that version instead.
    • For REVISION, take the Istio major and minor version numbers and replace the period with a hyphen, such as 1-16.
    export REPO=<repo-key>
    export ISTIO_IMAGE=1.16.2-solo
    export REVISION=1-16
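If you script these exports, the revision label can also be derived from the image tag instead of typed by hand. A minimal sketch, using the example values from this step:

```shell
# Derive the revision label (1-16) from the Istio image tag (1.16.2-solo):
# keep only the major.minor version, then replace the period with a hyphen.
ISTIO_IMAGE=1.16.2-solo
REVISION=$(echo "$ISTIO_IMAGE" | cut -d. -f1-2 | tr . -)
echo "$REVISION"
```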
    
  4. Save the name of the workload cluster that you want to install Istio into as the CLUSTER_NAME environment variable, so that you can reuse this same variable for all your workload clusters.

    export CLUSTER_NAME=$REMOTE_CLUSTER1
    
  5. Switch to your workload cluster's context.

    kubectl config use-context $REMOTE_CONTEXT1
    
  6. Create the following Istio namespaces. For more information, see Plan Istio namespaces.

    kubectl create namespace istio-system
    kubectl create namespace istio-operator
    kubectl create namespace istio-ingress
    kubectl create namespace istio-eastwest
    kubectl create namespace istio-config
    
  7. Create the Istio operator. Steps vary by Istio version.

    Istio 1.12 and later: Create the Istio operator in your cluster directly with Helm.

    helm install istio-operator manifests/charts/istio-operator \
    --set watchedNamespaces="istio-system\,istio-ingress\,istio-eastwest" \
    --set hub="$REPO" \
    --set tag="$ISTIO_IMAGE" \
    --set revision="$REVISION" \
    -n istio-operator
    
    Istio 1.11 and earlier: Direct Helm chart installation cannot be used due to a namespace ownership bug. Instead, render the chart with helm template and apply the output.
    1. Create a Helm template with the following settings, and save the template as operator.yaml.
      helm template istio-operator-$REVISION manifests/charts/istio-operator \
      --set operatorNamespace=istio-operator \
      --set watchedNamespaces="istio-system\,istio-ingress\,istio-eastwest" \
      --set hub="$REPO" \
      --set global.hub="gcr.io/istio-release" \
      --set tag="$ISTIO_IMAGE" \
      --set revision="$REVISION" \
      --set enableCRDTemplates=true \
      --include-crds \
      -n istio-operator > operator.yaml
      
    2. Optional: View the operator resource configurations.
      cat operator.yaml
      
    3. Create the Istio operator in your cluster.
      kubectl apply -f operator.yaml
      

  8. Verify that the operator resources are deployed.

    kubectl get all -n istio-operator
    

    Example output:

    NAME                                         READY   STATUS    RESTARTS   AGE
    pod/istio-operator-1-16-2-647b5df446-zkvpm   1/1     Running   4          2m57s
    
    NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    service/istio-operator-1-16-2   ClusterIP   10.0.100.100    <none>        8383/TCP   2m57s
    
    NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/istio-operator-1-16-2   1/1     1            1           2m57s
    
    NAME                                               DESIRED   CURRENT   READY   AGE
    replicaset.apps/istio-operator-1-16-2-647b5df446   1         1         1       2m57s
    

Step 2: Use the Istio operator to create the Istio control plane

Create an IstioOperator resource to configure and deploy the Istio control plane in your cluster. The provided IstioOperator resources are configured with production-level settings; however, depending on your environment, you might need to edit settings to achieve specific Istio functionality.

Note that the resource includes a revision label that matches the Istio version of the resource to facilitate canary-based upgrades. This revision label helps you upgrade the version of the Istio control plane more easily, as documented in the Istio upgrade guide.
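To illustrate where that revision label and the image details live, a trimmed IstioOperator sketch might look like the following. The field values here are assumptions for illustration only; use the downloaded example files for the actual production-level settings.

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istiod-control-plane    # hypothetical name
  namespace: istio-system
spec:
  hub: ${REPO}                  # your Gloo Istio repo key
  tag: ${ISTIO_IMAGE}           # for example, 1.16.2-solo
  revision: ${REVISION}         # for example, 1-16; the label used for canary upgrades
  components:
    pilot:
      k8s:
        replicaCount: 2         # illustrative production-style setting
```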

For more information about the content of the provided IstioOperator examples, see the following sections.

Creating an IstioOperator resource on Kubernetes

  1. Prepare an IstioOperator resource file for the istiod control plane. This sample command downloads an example file, istiod-kubernetes.yaml, and updates the environment variables with the values that you previously set. You can further edit the file to provide your own details for production-level settings.

    curl -L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/1.16/istiod-kubernetes.yaml > istiod-kubernetes.yaml
    envsubst < istiod-kubernetes.yaml > istiod-kubernetes-values.yaml
    
  2. Create the istiod control plane in your cluster.

    kubectl apply -f istiod-kubernetes-values.yaml
    
  3. After the installation is complete, verify that the Istio control plane pods are running.

    kubectl get pods -n istio-system
    

    Example output for 2 replicas:

    NAME                            READY   STATUS    RESTARTS   AGE
    istiod-1-16-2-7b96cb895-4nzv9   1/1     Running   0          30s
    istiod-1-16-2-7b96cb895-r7l8k   1/1     Running   0          30s
    

Creating an IstioOperator resource on OpenShift

For more information about using Istio on OpenShift, see the Istio documentation for OpenShift installation.

  1. Elevate the permissions of the istio-system and istio-operator service accounts that will be created. These permissions allow the Istio sidecars to make use of a user ID that is normally restricted by OpenShift.

    oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-system
    oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-operator
    
  2. Prepare an IstioOperator resource file for the istiod control plane. This sample command downloads an example file, istiod-openshift.yaml, and updates the environment variables with the values that you previously set. You can further edit the file to provide your own details for production-level settings.

    curl -L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/1.16/istiod-openshift.yaml > istiod-openshift.yaml
    envsubst < istiod-openshift.yaml > istiod-openshift-values.yaml
    
  3. Create the istiod control plane in your cluster.

    kubectl apply -f istiod-openshift-values.yaml
    
  4. After the installation is complete, verify that the Istio control plane pods are running.

    oc get pods -n istio-system
    

    Example output for 2 replicas:

    NAME                            READY   STATUS    RESTARTS   AGE
    istiod-1-16-2-7b96cb895-4nzv9   1/1     Running   0          30s
    istiod-1-16-2-7b96cb895-r7l8k   1/1     Running   0          30s
    
  5. Create a NetworkAttachmentDefinition custom resource for the project where you want to deploy workloads, such as the default project. A NetworkAttachmentDefinition is required in each OpenShift project where Istio must create workloads.

    cat <<EOF | oc -n default create -f -
    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: istio-cni
    EOF
    
  6. Elevate the permissions of the service account in the project where you want to deploy workloads, such as the default project. This permission allows the Istio sidecars to make use of a user ID that is normally restricted by OpenShift.

    oc adm policy add-scc-to-group anyuid system:serviceaccounts:default
    

Step 3 (optional): Deploy Istio ingress gateway

If you have a Gloo Gateway license, deploy an Istio ingress gateway to allow incoming traffic requests to your Istio-managed apps. The following steps include using Istio gateway injection to deploy the gateway to an istio-ingress namespace.
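With gateway injection, the gateway is an ordinary Deployment whose pods the control plane injects with the gateway proxy. A trimmed sketch of the pattern follows; the names and labels are illustrative of Istio's gateway injection mechanism, not the exact contents of the downloaded example file.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-ingressgateway
  namespace: istio-ingress
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  template:
    metadata:
      annotations:
        inject.istio.io/templates: gateway   # use the gateway injection template
      labels:
        istio: ingressgateway
        sidecar.istio.io/inject: "true"      # request injection by the control plane
        istio.io/rev: 1-16                   # must match your $REVISION
    spec:
      containers:
      - name: istio-proxy
        image: auto                          # placeholder that injection replaces
```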

  1. Prepare an IstioOperator resource file for the Istio ingress gateway. This sample command downloads an example file, ingress-gateway.yaml, and updates the environment variables with the values that you previously set. You can further edit the file to provide your own details for production-level settings.

    curl -L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/1.16/ingress-gateway.yaml > ingress-gateway.yaml
    envsubst < ingress-gateway.yaml > ingress-gateway-values.yaml
    
  2. Create the ingress gateway in your cluster.

    kubectl apply -f ingress-gateway-values.yaml
    
  3. Verify that the ingress gateway pods are running and the load balancer service is assigned an external address.

    kubectl get pods -n istio-ingress
    kubectl get svc -n istio-ingress
    

    Example output:

    NAME                                    READY   STATUS    RESTARTS   AGE
    istio-ingressgateway-665d46686f-nhh52   1/1     Running   0          106s
    istio-ingressgateway-665d46686f-tlp5j   1/1     Running   0          2m1s
    NAME                          TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                                                      AGE
    istio-ingressgateway          LoadBalancer   10.96.252.49    <externalip>  15021:32378/TCP,80:30315/TCP,443:32186/TCP,31400:30313/TCP,15443:31632/TCP                                   2m2s
    

    AWS clusters only: For the Elastic Load Balancer (ELB) instance that is automatically created for you to back the ingress gateway service, verify that the health check shows a healthy state. Gloo Mesh configures the ingress gateway to listen on HTTPS port 15443. However, when the ELB is created, the first port that is defined in the Kubernetes service manifest is used to perform the health check. This port might be different from the port that Gloo Mesh configures. For your ELB health check to pass, you might need to configure the load balancer to run the health check on port 15443.
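One way to influence the health check port is through a service annotation. The following is a sketch only, assuming your cluster runs a load balancer controller that honors this annotation; verify the annotation name and behavior against your controller's documentation before relying on it.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-ingress
  annotations:
    # Assumption: honored by your AWS load balancer controller version.
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "15443"
```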

  4. Optional for OpenShift: Expose the istio-ingressgateway load balancer by using an OpenShift route.

    oc --context $REMOTE_CONTEXT1 -n istio-ingress expose svc istio-ingressgateway --port=http2
    

Step 4 (optional): Deploy Istio east-west gateway

If you have a multicluster Gloo Mesh setup, deploy an Istio east-west gateway into each cluster in addition to the ingress gateway. An east-west gateway lets services in one mesh communicate with services in another. The following steps include using Istio gateway injection to deploy the gateway to an istio-eastwest namespace.
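For reference, cross-cluster traffic in a multicluster mesh is typically exposed through a TLS passthrough Gateway on port 15443, along the following lines. This is a sketch of the standard Istio multicluster pattern; the Gloo-provided configuration in your cluster may differ.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: cross-network-gateway
  namespace: istio-eastwest
spec:
  selector:
    istio: eastwestgateway        # matches the east-west gateway pods
  servers:
  - port:
      number: 15443
      name: tls
      protocol: TLS
    tls:
      mode: AUTO_PASSTHROUGH      # mTLS traffic passes through to the target service
    hosts:
    - "*.local"
```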

  1. Prepare an IstioOperator resource file for the Istio east-west gateway. This sample command downloads an example file, eastwest-gateway.yaml, and updates the environment variables with the values that you previously set. You can further edit the file to provide your own details for production-level settings.

    curl -L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/1.16/eastwest-gateway.yaml > eastwest-gateway.yaml
    envsubst < eastwest-gateway.yaml > eastwest-gateway-values.yaml
    
  2. Create the east-west gateway in your cluster.

    kubectl apply -f eastwest-gateway-values.yaml
    
  3. Verify that the east-west gateway pods are running and the load balancer service is assigned an external address.

    kubectl get pods -n istio-eastwest
    kubectl get svc -n istio-eastwest
    

    Example output:

    NAME                                     READY   STATUS    RESTARTS   AGE
    istio-eastwestgateway-7f6f8f7fc7-ncrzq   1/1     Running   0          48s
    istio-eastwestgateway-7f6f8f7fc7-vl2x8   1/1     Running   0          48s
    NAME                          TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                                                      AGE
    istio-eastwestgateway         LoadBalancer   10.96.166.166   <externalip>  15021:32343/TCP,80:31685/TCP,443:30877/TCP,31400:31030/TCP,15443:31507/TCP,15012:30668/TCP,15017:30812/TCP   13s
    

    AWS clusters only: For the Elastic Load Balancer (ELB) instance that is automatically created for you to back the east-west gateway service, verify that the health check shows a healthy state. Gloo Mesh configures the east-west gateway to listen on HTTPS port 15443. However, when the ELB is created, the first port that is defined in the Kubernetes service manifest is used to perform the health check. This port might be different from the port that Gloo Mesh configures. For your ELB health check to pass, you might need to configure the load balancer to run the health check on port 15443.

Step 5: Repeat for each workload cluster

Repeat these steps for any other clusters that you registered with Gloo. Remember to change the $CLUSTER_NAME variable for each workload cluster, and switch to that cluster's context.
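The per-cluster repetition can be scripted. A sketch, assuming two workload clusters that follow this guide's $REMOTE_CLUSTER*/$REMOTE_CONTEXT* naming convention; the kubectl line is commented out so the loop stays illustrative:

```shell
# Hypothetical cluster names for illustration.
REMOTE_CLUSTER1=cluster1
REMOTE_CLUSTER2=cluster2

for i in 1 2; do
  # Look up $REMOTE_CLUSTER1, $REMOTE_CLUSTER2, ... by index.
  CLUSTER_NAME=$(eval echo "\$REMOTE_CLUSTER$i")
  echo "Installing Istio on $CLUSTER_NAME"
  # kubectl config use-context "$(eval echo "\$REMOTE_CONTEXT$i")"
  # ...repeat Steps 1 through 4 for this cluster...
done
```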

Step 6: Deploy workloads

Now that Istio is up and running on all your workload clusters, you can create service namespaces for your teams to run app workloads in. For example, you might start with the Bookinfo sample application by following steps 4 and 5 in the getting started guide. Those steps guide you through creating workspaces for your workloads, deploying Bookinfo across workload clusters, and using ingress and east-west gateways to shift traffic across clusters.

For any service namespace, be sure to label the namespace with the revision so that Istio sidecars are deployed to your app pods: kubectl label ns <namespace> istio.io/rev=$REVISION.

Next steps