Deploy Istio in production

Deploy an Istio operator, and use an IstioOperator resource to configure and deploy an Istio control plane in each remote cluster. Installations that use the Istio operator are recommended because the operator facilitates Istio control plane upgrades through the canary deployment model.

This installation guide installs production-level Gloo Mesh Istio, a hardened Istio enterprise image. For more information, see About Gloo Mesh Istio. For more information about the example resource files that are provided in the following steps, see the GitHub repository for Gloo Mesh Use Cases.

Figure of a production-level IstioOperator deployment architecture

Note that the east-west gateways in this architecture allow services in one mesh to route cross-cluster traffic to services in the other mesh. If you install Istio into only one cluster for a single-cluster Gloo Mesh setup, the east-west gateway deployment is not required.
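For reference, cross-cluster routing through an east-west gateway typically relies on a Gateway resource that passes mutual TLS traffic through on port 15443. The following sketch follows the upstream Istio multicluster pattern; the resource name is illustrative, and the gateway configuration that the example files in this guide apply might differ in details.

```yaml
# Illustrative only: an east-west Gateway that passes mTLS traffic
# through to in-mesh services (upstream Istio multicluster pattern).
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cross-network-gateway   # hypothetical name
  namespace: istio-eastwest
spec:
  selector:
    istio: eastwestgateway
  servers:
  - port:
      number: 15443
      name: tls
      protocol: TLS
    tls:
      mode: AUTO_PASSTHROUGH   # SNI-based passthrough; the gateway does not terminate TLS
    hosts:
    - "*.local"
```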

Before you begin

Step 1: Deploy the Istio operator

Start by creating an Istio operator in the istio-operator namespace of your remote cluster. The Istio operator deployment translates an IstioOperator resource that you create into an Istio control plane in your cluster. For more information, see the Istio operator documentation.

Note that the following deployment is created by using Helm to facilitate future version upgrades. For example, you can fork Istio's existing Helm chart to add it to your existing CI/CD workflow.

  1. Save the name of the remote cluster in which you want to install Istio as the environment variable CLUSTER_NAME, so that you can reuse this variable for each of your remote clusters.

    export CLUSTER_NAME=$REMOTE_CLUSTER1
    
  2. Switch to your remote cluster's context.

    kubectl config use-context $REMOTE_CONTEXT1
    
  3. Create the following Istio namespaces. For more information, see Plan Istio namespaces.

    kubectl create namespace istio-system 
    kubectl create namespace istio-ingress
    kubectl create namespace istio-eastwest
    kubectl create namespace istio-config
    
  4. Install istioctl, the Istio CLI tool. Download the same version that you want to use for Istio in your clusters, such as 1.12.1, and be sure that this version is supported for the Kubernetes version of your remote clusters.

  5. Navigate to the Istio directory.

    cd istio-<version>
    
  6. Save the Istio version as environment variables.

    • For REPO, use a Gloo Mesh Istio repo key that you can get by logging in to the Support Center and reviewing the Istio images built by Solo.io support article. Or, for Istio 1.11 or earlier, you can use gcr.io/istio-enterprise. For more information, see Get the Gloo Mesh Istio version that you want to use.
    • For ISTIO_VERSION, save the version that you downloaded, such as 1.12.1, and append the solo tag, which is required to use many Gloo Mesh Enterprise features such as Gloo Mesh Gateway. You can optionally append other Gloo Mesh Istio tags, as described in About Gloo Mesh Istio. If you downloaded a different version than the following, make sure to specify that version instead.
    • For REVISION, take the Istio version number and replace the periods with hyphens, such as 1.12.1 to 1-12-1.
    export REPO=<repo-key>
    export ISTIO_VERSION=1.12.1-solo
    export REVISION=1-12-1
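Rather than typing the revision by hand, you can derive REVISION from ISTIO_VERSION. A minimal sketch, assuming the version string ends in the solo tag:

```shell
# Strip the -solo tag and replace periods with hyphens: 1.12.1-solo -> 1-12-1
export ISTIO_VERSION=1.12.1-solo
export REVISION=$(printf '%s' "${ISTIO_VERSION%-solo}" | tr '.' '-')
echo $REVISION   # 1-12-1
```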
    
  7. Create a Helm template with the following settings, and save the template as operator.yaml. Direct Helm chart installation cannot currently be used due to a namespace ownership bug.

    TEMPLATE=$(helm template istio-operator-$REVISION manifests/charts/istio-operator \
      --set operatorNamespace=istio-operator \
      --set watchedNamespaces="istio-system\,istio-ingress\,istio-eastwest" \
      --set global.hub="docker.io/istio" \
      --set global.tag="$ISTIO_VERSION" \
      --set revision="$REVISION")
    
    echo "$TEMPLATE" > operator.yaml
    
  8. Optional: View the operator resource configurations.

    cat operator.yaml
    
  9. Create the Istio operator in your cluster.

    kubectl apply -f operator.yaml
    
  10. Verify that the operator resources are deployed.

    kubectl get all -n istio-operator
    

    Example output:

    NAME                                         READY   STATUS    RESTARTS   AGE
    pod/istio-operator-1-12-1-647b5df446-zkvpm   1/1     Running   4          2m57s
    
    NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    service/istio-operator-1-12-1   ClusterIP   10.0.100.100    <none>        8383/TCP   2m57s
    
    NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/istio-operator-1-12-1   1/1     1            1           2m57s
    
    NAME                                               DESIRED   CURRENT   READY   AGE
    replicaset.apps/istio-operator-1-12-1-647b5df446   1         1         1       2m57s
    

Step 2: Use the Istio operator to create the Istio control plane

Create an IstioOperator resource to configure and deploy the Istio control plane in your cluster. The provided IstioOperator resources are configured with production-level settings; however, depending on your environment, you might need to edit settings to achieve specific Istio functionality.

Note that the resource includes a revision label that matches the Istio version of the resource to facilitate canary-based upgrades. This revision label helps you upgrade the version of the Istio control plane more easily, as documented in the Istio upgrade guide.
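As an illustration only (not the full production file from the example repository), the revision appears as a top-level field in the IstioOperator spec, and the values that you parameterized earlier map in roughly as follows:

```yaml
# Illustrative fragment; see the downloaded istiod example file for
# the complete production-level settings.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istiod-control-plane   # hypothetical name
  namespace: istio-system
spec:
  revision: 1-12-1      # $REVISION: labels this control plane for canary upgrades
  hub: $REPO            # Gloo Mesh Istio image repository
  tag: 1.12.1-solo      # $ISTIO_VERSION
```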

For more information about the content of the provided IstioOperator examples, check these resources:

Creating an IstioOperator resource on Kubernetes

  1. Prepare an IstioOperator resource file for the istiod control plane. This sample command downloads an example file, istiod-kubernetes.yaml. You can further edit the file to provide your own details for production-level settings.

    curl -L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/istio-install/1.12/istiod-kubernetes.yaml > istiod-kubernetes.yaml
    
  2. Update the IstioOperator resource file with the Istio version environment variables that you previously set for $CLUSTER_NAME, $REPO, $ISTIO_VERSION, and $REVISION.

    envsubst < istiod-kubernetes.yaml > istiod-kubernetes-values.yaml
    
  3. Create the istiod control plane in your cluster.

    istioctl install -y -f istiod-kubernetes-values.yaml
    
  4. After the installation is complete, verify that the Istio control plane pods are running.

    kubectl get pods -n istio-system
    

    Example output for 2 replicas:

    NAME                            READY   STATUS    RESTARTS   AGE
    istiod-1-12-1-7b96cb895-4nzv9   1/1     Running   0          30s
    istiod-1-12-1-7b96cb895-r7l8k   1/1     Running   0          30s
    

Creating an IstioOperator resource on OpenShift

For more information about using Istio on OpenShift, see the Istio documentation for OpenShift installation.

  1. Elevate the permissions of the istio-system and istio-operator service accounts that will be created. These permissions allow the Istio sidecars to make use of a user ID that is normally restricted by OpenShift.

    oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-system
    oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-operator
    
  2. Prepare an IstioOperator resource file for the istiod control plane. This sample command downloads an example file, istiod-openshift.yaml. You can further edit the file to provide your own details for production-level settings.

    curl -L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/istio-install/1.12/istiod-openshift.yaml > istiod-openshift.yaml
    
  3. Update the IstioOperator resource file with the Istio version environment variables that you previously set for $CLUSTER_NAME, $REPO, $ISTIO_VERSION, and $REVISION.

    envsubst < istiod-openshift.yaml > istiod-openshift-values.yaml
    
  4. Create the istiod control plane in your cluster.

    istioctl install -y -f istiod-openshift-values.yaml
    
  5. After the installation is complete, verify that the Istio control plane pods are running.

    oc get pods -n istio-system
    

    Example output for 2 replicas:

    NAME                            READY   STATUS    RESTARTS   AGE
    istiod-1-12-1-7b96cb895-4nzv9   1/1     Running   0          30s
    istiod-1-12-1-7b96cb895-r7l8k   1/1     Running   0          30s
    
  6. Create a NetworkAttachmentDefinition custom resource for the project where you want to deploy workloads, such as the default project. In each OpenShift project where Istio must create workloads, a NetworkAttachmentDefinition is required.

    cat <<EOF | oc -n default create -f -
    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: istio-cni
    EOF
    
  7. Elevate the permissions of the service account in the project where you want to deploy workloads, such as the default project. This permission allows the Istio sidecars to make use of a user ID that is normally restricted by OpenShift.

    oc adm policy add-scc-to-group anyuid system:serviceaccounts:default
    

Step 3: Deploy Istio ingress gateway

The recommended gateway architecture for a production-level setup includes creating your own load balancer service that is not managed by Istio. In this setup, you can run multiple versions of Istio ingress gateways in a blue/green deployment model, and use the same load balancer to expose the multiple gateways. However, each Istio gateway uses its own IstioOperator configuration file to ensure that each gateway can be upgraded independently.
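To make the blue/green model concrete, the unmanaged load balancer is a plain Kubernetes Service whose selector omits the revision label, so it matches gateway pods from every installed revision. The following is an illustrative sketch, assuming that the gateway pods carry the istio: ingressgateway label; the downloaded ingress-gateway-lb.yaml example may differ in details.

```yaml
# Illustrative sketch of a revision-agnostic load balancer Service.
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-ingress
spec:
  type: LoadBalancer
  # No istio.io/rev label in the selector: pods from both the old and
  # new gateway revisions receive traffic during a blue/green rollout.
  selector:
    istio: ingressgateway
  ports:
  - name: http2
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8443
```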

  1. Copy the Istio revision configmap, which has a name such as istio-1-12-1, from the istio-system namespace to the istio-ingress and istio-eastwest namespaces. This step is required due to a bug in which gateways rely on the configmap but cannot access it from another namespace.
    CM_DATA=$(kubectl get configmap istio-$REVISION -n istio-system -o jsonpath={.data})
    cat <<EOF | kubectl apply -f -
    {
        "apiVersion": "v1",
        "data": $CM_DATA,
        "kind": "ConfigMap",
        "metadata": {
            "labels": {
                "istio.io/rev": "${REVISION}"
            },
            "name": "istio-${REVISION}",
            "namespace": "istio-ingress"
        }
    }
    EOF

    cat <<EOF | kubectl apply -f -
    {
        "apiVersion": "v1",
        "data": $CM_DATA,
        "kind": "ConfigMap",
        "metadata": {
            "labels": {
                "istio.io/rev": "${REVISION}"
            },
            "name": "istio-${REVISION}",
            "namespace": "istio-eastwest"
        }
    }
    EOF
  2. Prepare an IstioOperator resource file for the Istio ingress gateway and a service account for the gateway deployment. This sample command downloads an example file, ingress-gateway.yaml, and updates the environment variables with the values that you previously set. You can further edit the file to provide your own details for production-level settings.

    curl -L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/istio-install/1.12/ingress-gateway.yaml > ingress-gateway.yaml
    envsubst < ingress-gateway.yaml > ingress-gateway-values.yaml
    
  3. Create the ingress gateway in your cluster.

    kubectl apply -f ingress-gateway-values.yaml
    
  4. Prepare a load balancer service to expose the ingress gateway. This sample command downloads an example file, ingress-gateway-lb.yaml, and updates the environment variables with the values that you previously set. You can further edit the file to provide your own details for production-level settings.

    curl -L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/istio-install/1.12/ingress-gateway-lb.yaml > ingress-gateway-lb.yaml
    envsubst < ingress-gateway-lb.yaml > ingress-gateway-lb-values.yaml
    
  5. Create the load balancer for the ingress gateway in your cluster.

    kubectl apply -f ingress-gateway-lb-values.yaml
    
  6. Verify that the ingress gateway pods are running and that the load balancer service is assigned an external address.

    kubectl get pods -n istio-ingress
    kubectl get svc -n istio-ingress
    

    Example output:

    NAME                                           READY   STATUS    RESTARTS   AGE
    istio-ingressgateway-1-12-1-665d46686f-nhh52   1/1     Running   0          106s
    istio-ingressgateway-1-12-1-665d46686f-tlp5j   1/1     Running   0          2m1s
    NAME                          TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                                                      AGE
    istio-ingressgateway          LoadBalancer   10.96.252.49    <externalip>  15021:32378/TCP,80:30315/TCP,443:32186/TCP,31400:30313/TCP,15443:31632/TCP                                   2m2s
    istio-ingressgateway-1-12-1   ClusterIP      10.96.109.253   <none>        15021/TCP,80/TCP,443/TCP,31400/TCP,15443/TCP 
    
  7. Optional for OpenShift: Expose the istio-ingressgateway load balancer by using an OpenShift route.

    oc -n istio-ingress expose svc istio-ingressgateway --port=http2
    

Step 4 (optional): Deploy Istio east-west gateway

If you have a multicluster Gloo Mesh setup, deploy an Istio east-west gateway into each cluster in addition to the ingress gateway. An east-west gateway lets services in one mesh communicate with services in another. Then, create a VirtualMesh for each east-west gateway. This way, the service meshes can identify each other by their east-west gateways.

  1. Prepare an IstioOperator resource file for the Istio east-west gateway and a service account for the gateway deployment. This sample command downloads an example file, eastwest-gateway.yaml, and updates the environment variables with the values that you previously set. You can further edit the file to provide your own details for production-level settings.

    curl -L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/istio-install/1.12/eastwest-gateway.yaml > eastwest-gateway.yaml
    envsubst < eastwest-gateway.yaml > eastwest-gateway-values.yaml
    
  2. Create the east-west gateway in your cluster.

    kubectl apply -f eastwest-gateway-values.yaml
    
  3. Prepare a load balancer service to expose the east-west gateway. This sample command downloads an example file, eastwest-gateway-lb.yaml, and updates the environment variables with the values that you previously set. You can further edit the file to provide your own details for production-level settings.

    curl -L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/istio-install/1.12/eastwest-gateway-lb.yaml > eastwest-gateway-lb.yaml
    envsubst < eastwest-gateway-lb.yaml > eastwest-gateway-lb-values.yaml
    
  4. Create the load balancer for the east-west gateway in your cluster.

    kubectl apply -f eastwest-gateway-lb-values.yaml
    
  5. Verify that the east-west gateway pods are running and that the load balancer service is assigned an external address.

    kubectl get pods -n istio-eastwest
    kubectl get svc -n istio-eastwest
    

    Example output:

    NAME                                           READY   STATUS    RESTARTS   AGE
    istio-eastwestgateway-1-12-1-7f6f8f7fc7-ncrzq   1/1     Running   0          48s
    istio-eastwestgateway-1-12-1-7f6f8f7fc7-b8v4x   1/1     Running   0          48s
    NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                                                      AGE
    istio-eastwestgateway           LoadBalancer   10.96.166.166   <externalip>  15021:32343/TCP,80:31685/TCP,443:30877/TCP,31400:31030/TCP,15443:31507/TCP,15012:30668/TCP,15017:30812/TCP   13s
    istio-eastwestgateway-1-12-1    ClusterIP      10.28.7.135     <none>        15021/TCP,15443/TCP                                                                                          13s
    
  6. Create a VirtualMesh resource on the management cluster that targets the east-west gateway in each cluster.

    
    cat << EOF | kubectl --context $MGMT_CONTEXT apply -f -
    apiVersion: networking.mesh.gloo.solo.io/v1
    kind: VirtualMesh
    metadata:
      name: virtual-mesh
      namespace: gloo-mesh
    spec:
      mtlsConfig:
        # Note: Do NOT set autoRestartPods to true in production! Control pod restarts in another way, such as with a rolling update.
        autoRestartPods: false
        shared:
          rootCertificateAuthority:
            generated: {}
      federation:
        ingressGatewaySelectors:
          - portName: tls
            destinationSelectors:
            - kubeServiceMatcher:
                clusters:
                - ${REMOTE_CLUSTER1}
                - ${REMOTE_CLUSTER2}
                labels:
                  istio: eastwestgateway
                namespaces:
                - istio-eastwest
      globalAccessPolicy: ENABLED
      meshes:
      - name: istiod-istio-system-${REMOTE_CLUSTER1}
        namespace: gloo-mesh
      - name: istiod-istio-system-${REMOTE_CLUSTER2}
        namespace: gloo-mesh
    EOF
    

Step 5: Deploy workloads

Now that Istio is up and running, you can create service namespaces for your teams to run app workloads in. For example, you might start out with the Bookinfo sample application by using the following steps. For any service namespace, be sure to label the namespace with the revision so that Istio sidecars are deployed to your app pods: kubectl label ns <namespace> istio.io/rev=$REVISION.

  1. Create a bookinfo namespace.

    kubectl create namespace bookinfo
    
  2. Label the namespace with the revision for the version of Istio that runs in your cluster.

    kubectl label ns bookinfo istio.io/rev=$REVISION
    
  3. Install Bookinfo in the bookinfo namespace.

    kubectl apply -n bookinfo -f samples/bookinfo/platform/kube/bookinfo.yaml
    kubectl apply -n bookinfo -f samples/bookinfo/networking/bookinfo-gateway.yaml
    
  4. Verify that the pods are running.

    kubectl get pods -n bookinfo
    
  5. Scale Bookinfo to 2 replicas.

    kubectl scale -n bookinfo --replicas=2 deployment/details-v1 deployment/ratings-v1 deployment/productpage-v1 deployment/reviews-v1 deployment/reviews-v2 deployment/reviews-v3
    
  6. Get the address of the ingress gateway.

    
    # For load balancers that are assigned an IP address:
    CLUSTER_1_INGRESS_ADDRESS=$(kubectl get svc -n istio-ingress istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    echo http://$CLUSTER_1_INGRESS_ADDRESS/productpage

    # For load balancers that are assigned a hostname, such as in AWS environments:
    CLUSTER_1_INGRESS_ADDRESS=$(kubectl get svc -n istio-ingress istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    echo http://$CLUSTER_1_INGRESS_ADDRESS/productpage
    

  7. Navigate to http://$CLUSTER_1_INGRESS_ADDRESS/productpage in a web browser to verify that the productpage for Bookinfo is reachable.

    open http://$CLUSTER_1_INGRESS_ADDRESS/productpage
    

Next steps