Manually deploy gateway proxies

Use Istio Helm charts to configure and deploy an Istio control plane and gateways in each workload cluster. Creating the deployments with Helm facilitates future version upgrades; for example, you can fork Istio's existing Helm chart to add it to your existing CI/CD workflow.

If you also use Gloo Mesh Enterprise alongside Gloo Gateway, follow the steps to install Istio in the Gloo Mesh documentation instead. That guide shows you how to customize your service mesh installation and how to install sidecars with your control plane and gateways.

Before you begin

  1. Install Gloo Gateway in a single or multicluster setup. For more information, see sample deployment patterns.

  2. Save the names of your clusters from your infrastructure provider as environment variables.

    # Single-cluster setup
    export CLUSTER_NAME=<cluster-name>
    
    # Multicluster setup
    export MGMT_CLUSTER=mgmt
    export REMOTE_CLUSTER1=cluster1
    export REMOTE_CLUSTER2=cluster2
    

  3. Save the kubeconfig contexts for your clusters as environment variables. Run kubectl config get-contexts, look for your cluster in the CLUSTER column, and get the context name in the NAME column.

    # Single-cluster setup
    export MGMT_CONTEXT=<cluster-context>
    kubectl config use-context $MGMT_CONTEXT
    
    # Multicluster setup
    export MGMT_CONTEXT=<management-cluster-context>
    export REMOTE_CONTEXT1=<remote-cluster1-context>
    export REMOTE_CONTEXT2=<remote-cluster2-context>
    

  4. Install helm, the Kubernetes package manager.

  5. To use a Gloo Mesh hardened image of Istio, you must have a Solo account. Log in to the Support Center and, in the Istio images built by Solo.io support article, get the repo key for the Istio version that you want to install. If you do not have a Solo account or have trouble logging in, contact your account administrator.

  6. Istio version 1.17 does not support the Gloo legacy metrics pipeline. If you run the legacy metrics pipeline, be sure that you set up the Gloo OpenTelemetry (OTel) pipeline (https://docs.solo.io/gloo-gateway/main/observability/pipeline/setup/) in your new or existing Gloo Gateway installation before you upgrade to or deploy gateway proxies with Istio 1.17.

Step 1: Deploy Istio control planes

Deploy an Istio control plane in each workload cluster. The provided Helm values files are configured with production-level settings; however, depending on your environment, you might need to edit settings to achieve specific Istio functionality.

Note that the values file includes a revision label that matches the Istio version of the resource. The revision label supports canary-based upgrades, which make it easier to upgrade the version of the Istio control plane, as documented in the Istio upgrade guide.
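
For example, after a revisioned control plane is running, you can label a workload namespace with the matching revision so that newly started pods receive sidecars from that control plane. A minimal sketch, assuming a hypothetical my-app namespace:

    # Enroll the my-app namespace with the 1-17-2 revision. Pods that
    # start after this point get sidecars from that control plane.
    kubectl label namespace my-app istio.io/rev=1-17-2 --overwrite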

  1. Multicluster setups only: Save the name of the workload cluster where you want to install Istio as the environment variable CLUSTER_NAME so that you can reuse this variable for each workload cluster. Then, switch to that workload cluster's context.

    export CLUSTER_NAME=$REMOTE_CLUSTER1
    kubectl config use-context $REMOTE_CONTEXT1
    
  2. Save the Istio version information as environment variables.

    • For REPO, use a Gloo Istio repo key that you can get by logging in to the Support Center and reviewing the Istio images built by Solo.io support article. For more information, see Get the Gloo Istio version that you want to use.
    • For ISTIO_VERSION, save the Istio version that you want to install, such as 1.17.2. This version is also used as the Helm chart version in later steps. If you downloaded a different version, make sure to specify that version instead.
    • For ISTIO_IMAGE, append the solo tag to the Istio version, which is required to use many enterprise features. You can optionally append other Gloo Istio tags, as described in About Gloo Istio.
    • For REVISION, take the Istio version number and replace the periods with hyphens, such as 1-17-2.
    export REPO=<repo-key>
    export ISTIO_VERSION=1.17.2
    export ISTIO_IMAGE=${ISTIO_VERSION}-solo
    export REVISION=1-17-2
    
  3. Install istioctl, the Istio CLI tool. Download the same version that you want to use for Istio in your clusters, such as 1.17.2, and verify that the version is supported for the Kubernetes or OpenShift version of your workload clusters. To check your installed version, run istioctl version.
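
    For example, you can use Istio's installation script to download the release and add istioctl to your PATH. A sketch, assuming a Linux or macOS workstation and the ISTIO_VERSION variable from step 2:

    # Download the Istio release that matches your target version.
    curl -L https://istio.io/downloadIstio | ISTIO_VERSION=${ISTIO_VERSION} sh -
    # Add istioctl to your PATH for the current session.
    export PATH=$PWD/istio-${ISTIO_VERSION}/bin:$PATH
    # Confirm the client version.
    istioctl version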

  4. Create the following namespaces. In this setup, the istio-config namespace serves as the administrative root namespace for Istio configuration.

    kubectl create namespace istio-system
    kubectl create namespace istio-config
    kubectl create namespace gloo-mesh-gateways
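
    The istiod values file that you download in a later step typically points Istio at the istio-config root namespace. The following excerpt is illustrative only; your downloaded values file might set this differently:

    # istiod Helm values excerpt (illustrative): read cluster-wide
    # Istio configuration from the istio-config root namespace.
    meshConfig:
      rootNamespace: istio-config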
    
  5. Add and update the Helm repository for Istio.

    helm repo add istio https://istio-release.storage.googleapis.com/charts
    helm repo update
    
  6. Install the Istio CRDs in each cluster.

    helm upgrade --install istio-base istio/base \
      -n istio-system \
      --version ${ISTIO_VERSION}
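
    To verify that the CRDs were installed before you continue, you can list them. All Istio CRD names end in istio.io:

    kubectl get crds | grep 'istio.io'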
    
  7. OpenShift only: Deploy the Istio CNI plug-in, and elevate the istio-system service account permissions. For more information about using Istio on OpenShift, see the Istio documentation for OpenShift installation.

    1. Install the CNI plug-in in each cluster, which is required for using Istio in OpenShift.
      helm install istio-cni istio/cni \
      --namespace kube-system \
      --version ${ISTIO_VERSION} \
      --set cni.cniBinDir=/var/lib/cni/bin \
      --set cni.cniConfDir=/etc/cni/multus/net.d \
      --set cni.cniConfFileName="istio-cni.conf" \
      --set cni.chained=false \
      --set cni.privileged=true
      
    2. Elevate the permissions of the following service accounts. This permission allows the Istio sidecars to make use of a user ID that is normally restricted by OpenShift.
      oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-system
      oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-config
      oc adm policy add-scc-to-group anyuid system:serviceaccounts:gloo-mesh-gateways
      
    3. Create a NetworkAttachmentDefinition custom resource for the gloo-mesh-gateways project.
      cat <<EOF | oc -n gloo-mesh-gateways create -f -
      apiVersion: "k8s.cni.cncf.io/v1"
      kind: NetworkAttachmentDefinition
      metadata:
        name: istio-cni
      EOF
      
  8. Prepare a Helm values file for the istiod control plane. These sample commands download an example file, istiod.yaml, and replace its environment variables with the values that you previously set. You can further edit the file to provide your own details for production-level settings.

    curl -L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/manual-helm/istiod.yaml > istiod.yaml
    envsubst < istiod.yaml > istiod-values.yaml
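
    To spot-check the substitution, review istiod-values.yaml. The following excerpt is illustrative only, showing values that envsubst fills in from the step 2 exports; the downloaded file contains more settings:

    revision: 1-17-2      # from ${REVISION}
    global:
      hub: <repo-key>     # from ${REPO}
      tag: 1.17.2-solo    # from ${ISTIO_IMAGE}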
    
  9. Create the istiod control plane in your clusters.

    helm upgrade --install istiod-${REVISION} istio/istiod \
      --version ${ISTIO_VERSION} \
      --namespace istio-system \
      --wait \
      -f istiod-values.yaml
    
  10. After the installation is complete, verify that the Istio control plane pods are running.

    kubectl get pods -n istio-system
    

    Example output for 2 replicas:

    NAME                            READY   STATUS    RESTARTS   AGE
    istiod-1-17-2-7b96cb895-4nzv9   1/1     Running   0          30s
    istiod-1-17-2-7b96cb895-r7l8k   1/1     Running   0          30s
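
    To script this check, such as in a CI/CD pipeline, you can instead wait for the pods to become ready. A sketch, assuming the default app=istiod pod label:

    kubectl wait pods -n istio-system -l app=istiod \
      --for=condition=Ready --timeout=90s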
    

Step 2: Deploy Istio ingress gateway

Deploy an Istio ingress gateway to allow incoming traffic requests to your Istio-managed apps.

  1. Prepare a Helm values file for the Istio ingress gateway. These sample commands download an example file, ingress-gateway.yaml, and replace its environment variables with the values that you previously set. You can further edit the file to provide your own details for production-level settings.

    curl -L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/manual-helm/ingress-gateway.yaml > ingress-gateway.yaml
    envsubst < ingress-gateway.yaml > ingress-gateway-values.yaml
    
  2. Create the ingress gateway in each cluster.

    helm upgrade --install istio-ingressgateway-${REVISION} istio/gateway \
      --version ${ISTIO_VERSION} \
      --namespace gloo-mesh-gateways \
      --wait \
      -f ingress-gateway-values.yaml
    
  3. Verify that the ingress gateway pods are running and the load balancer service is assigned an external address.

    kubectl get pods -n gloo-mesh-gateways
    kubectl get svc -n gloo-mesh-gateways
    

    Example output:

    NAME                                           READY   STATUS    RESTARTS   AGE
    istio-ingressgateway-1-17-2-665d46686f-nhh52   1/1     Running   0          106s
    istio-ingressgateway-1-17-2-665d46686f-tlp5j   1/1     Running   0          2m1s
    
    NAME                          TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                                                                       AGE
    istio-ingressgateway-1-17-2   LoadBalancer   10.96.252.49   <externalip>   15021:32378/TCP,80:30315/TCP,443:32186/TCP,31400:30313/TCP,15443:31632/TCP   2m2s
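
    To reuse the external address in later steps, you can save it in an environment variable. A sketch, assuming that your load balancer reports an IP address; some providers, such as AWS, report a hostname instead, in which case use the hostname field in the JSONPath:

    export INGRESS_GW_IP=$(kubectl get svc -n gloo-mesh-gateways istio-ingressgateway-${REVISION} \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    echo $INGRESS_GW_IP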
    

    AWS clusters only: For the Elastic Load Balancer (ELB) instance that is automatically created for you to back the ingress gateway service, verify that the health check shows a healthy state. Gloo Mesh configures the ingress gateway to listen on HTTPS port 15443. However, when the ELB is created, the first port that is defined in the Kubernetes service manifest is used to perform the health check. This port might be different from the port that Gloo Mesh configures. For your ELB health check to pass, you might need to configure the load balancer to run the health check on port 15443.
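
    For example, if your cluster uses the AWS Load Balancer Controller or a Network Load Balancer, an annotation on the gateway service can pin the health check port. This is a sketch only; verify the supported annotations in your load balancer controller's documentation:

    kubectl annotate svc -n gloo-mesh-gateways istio-ingressgateway-${REVISION} \
      "service.beta.kubernetes.io/aws-load-balancer-healthcheck-port=15443" --overwrite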

  4. Optional for OpenShift: Expose the load balancer by using an OpenShift route.

    oc -n gloo-mesh-gateways expose svc istio-ingressgateway-1-17-2 --port=http2
    

Step 3 (multicluster only): Deploy Istio east-west gateway

If you have a multicluster Gloo Gateway setup, deploy an Istio east-west gateway into each cluster in addition to the ingress gateway. In Gloo Gateway, the east-west gateways allow the ingress gateway in one cluster to route incoming traffic requests to services in another cluster.

  1. Prepare a Helm values file for the Istio east-west gateway. These sample commands download an example file, eastwest-gateway.yaml, and replace its environment variables with the values that you previously set. You can further edit the file to provide your own details for production-level settings.

    curl -L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/manual-helm/eastwest-gateway.yaml > eastwest-gateway.yaml
    envsubst < eastwest-gateway.yaml > eastwest-gateway-values.yaml
    
  2. Create the east-west gateway in each cluster.

    helm upgrade --install istio-eastwestgateway-${REVISION} istio/gateway \
      --version ${ISTIO_VERSION} \
      --namespace gloo-mesh-gateways \
      --wait \
      -f eastwest-gateway-values.yaml
    
  3. Verify that the east-west gateway pods are running and the load balancer service is assigned an external address.

    kubectl get pods -n gloo-mesh-gateways
    kubectl get svc -n gloo-mesh-gateways
    

    Example output:

    NAME                                            READY   STATUS    RESTARTS   AGE
    istio-eastwestgateway-1-17-2-7f6f8f7fc7-ncrzq   1/1     Running   0          11s
    istio-eastwestgateway-1-17-2-7f6f8f7fc7-sl4tg   1/1     Running   0          48s
    
    NAME                           TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                                                                                                 AGE
    istio-eastwestgateway-1-17-2   LoadBalancer   10.96.166.166   <externalip>   15021:32343/TCP,80:31685/TCP,443:30877/TCP,31400:31030/TCP,15443:31507/TCP,15012:30668/TCP,15017:30812/TCP   13s
    

    AWS clusters only: For the Elastic Load Balancer (ELB) instance that is automatically created for you to back the east-west gateway service, verify that the health check shows a healthy state. Gloo Mesh configures the east-west gateway to listen on HTTPS port 15443. However, when the ELB is created, the first port that is defined in the Kubernetes service manifest is used to perform the health check. This port might be different from the port that Gloo Mesh configures. For your ELB health check to pass, you might need to configure the load balancer to run the health check on port 15443.

Step 4 (multicluster only): Repeat for each workload cluster

Repeat these steps for any other clusters that you registered with Gloo. Remember to change the $CLUSTER_NAME variable for each workload cluster, and switch to that cluster's context.
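
For example, before you repeat the steps for your second workload cluster:

    export CLUSTER_NAME=$REMOTE_CLUSTER2
    kubectl config use-context $REMOTE_CONTEXT2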

Next steps

Now that the gateway proxies are installed, check out the following resources to explore Gloo Gateway capabilities: