Optional: Install a Solo distribution of the Cilium CNI

Before you install Gloo Mesh Enterprise, use a Solo distribution of Cilium to install the Cilium CNI in your clusters.

Use Gloo Mesh Enterprise with Solo distributions of Cilium images to provide connectivity, security, and observability for containerized workloads through a Cilium-based container network interface (CNI) plug-in that leverages the Linux kernel technology eBPF. The Solo distribution of Cilium is a hardened Cilium enterprise image that continues to receive security patches for Common Vulnerabilities and Exposures (CVEs) and other security fixes.

For more information about the benefits of using a Solo distribution of Cilium in conjunction with a service mesh managed by Gloo Mesh Enterprise, see About Cilium in Gloo Mesh.

The steps to install a CNI by using a Solo distribution of Cilium vary depending on the way you create your cluster. For example, installing the CNI in a kind cluster is different from installing the CNI in a GKE cluster. Make sure to follow the instructions for your cloud environment in the Cilium documentation.

  1. Install the following CLI tools. Example installation commands follow the list.

    • kubectl, the Kubernetes command line tool. Download the kubectl version that is within one minor version of the Kubernetes cluster you plan to use with Gloo Mesh Enterprise.
    • helm, the Kubernetes package manager.
    • cilium, the Cilium command line tool.
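
    For example, on a Linux amd64 workstation, you might install the tools as follows. This is a minimal sketch based on each project's standard install instructions; adjust the OS and architecture for your machine.

    # kubectl: download the latest stable release (pin a version instead if your cluster requires it)
    curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
    chmod +x kubectl && sudo mv kubectl /usr/local/bin/

    # helm: run the official install script
    curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

    # cilium CLI: download and extract the latest stable release
    CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
    curl -L --fail --remote-name-all "https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-amd64.tar.gz"
    sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
    rm cilium-linux-amd64.tar.gz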
  2. Create or use clusters that meet the Cilium requirements. For example, to try out the Cilium CNI in Google Kubernetes Engine (GKE) clusters, your clusters must be created with specific node taints.

    1. Open the Cilium documentation and find the cloud provider that you want to use to create your clusters.

    2. Follow the steps of your cloud provider to create clusters that meet the Cilium requirements.

      • For a multicluster setup, you need at least two clusters. One cluster is set up as the Gloo Mesh control plane where the management components are installed. The other cluster is registered as your data plane and runs your Kubernetes workloads and Istio service mesh. You can optionally add more workload clusters to your setup. The instructions in this guide assume one management cluster and two workload clusters.
      • The cluster name must be lowercase and alphanumeric, can include hyphens (-) but no other special characters, and must begin with a letter (not a number).
      • The instructions in the Cilium documentation might create a cluster with insufficient CPU and memory resources for Gloo Mesh. Make sure to use a machine type with at least 2 vCPUs and 8 GB of memory.

      Example to create a cluster in GKE:

      export NAME="$(whoami)-$RANDOM"
      export ZONE=us-west2-a
      gcloud container clusters create "${NAME}" \
      --node-taints node.cilium.io/agent-not-ready=true:NoExecute \
      --zone "${ZONE}" \
      --machine-type e2-standard-2
      gcloud container clusters get-credentials "${NAME}" --zone "${ZONE}"
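
      To confirm that the agent-not-ready taint was applied, you can inspect the node taints. This quick check assumes that kubectl now points at the new cluster:

      kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'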
      
  3. Create environment variables for the following Cilium image details.

    • SOLO_CILIUM_REPO: The repo key for the Solo distribution of Cilium images. To get this key, log in to the Support Center and review the "Cilium images built by Solo.io" support article.
    • CILIUM_VERSION: The Cilium version that you want to install, such as 1.14.2.
    export SOLO_CILIUM_REPO=<cilium_repo_key>
    export CILIUM_VERSION=1.14.2
    
  4. Add and update the Cilium Helm repo.

    helm repo add cilium https://helm.cilium.io/
    helm repo update
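
    Optionally, confirm that the chart version you set in CILIUM_VERSION is available in the repo:

    helm search repo cilium/cilium --versions | head -n 5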
    
  5. Install the CNI by using a Solo distribution of Cilium in your cluster.

    Depending on the cloud provider you use, you might need to add provider-specific Helm values to this command, as suggested in the Cilium documentation.

    Generic example using --set flags (add cloud provider-specific values where indicated):

    helm install cilium cilium/cilium \
    --namespace kube-system \
    --version ${CILIUM_VERSION} \
    --set hubble.enabled=true \
    --set hubble.metrics.enabled="{dns:destinationContext=pod;sourceContext=pod,drop:destinationContext=pod;sourceContext=pod,tcp:destinationContext=pod;sourceContext=pod,flow:destinationContext=pod;sourceContext=pod,port-distribution:destinationContext=pod;sourceContext=pod}" \
    --set image.repository=${SOLO_CILIUM_REPO}/cilium \
    --set image.useDigest=false \
    --set operator.image.repository=${SOLO_CILIUM_REPO}/operator \
    --set operator.image.useDigest=false \
    --set operator.prometheus.enabled=true \
    --set prometheus.enabled=true
    # Add any cloud provider-specific Helm values as additional --set flags
    


    Example for GKE using --set flags:

    NATIVE_CIDR="$(gcloud container clusters describe "${NAME}" --zone "${ZONE}" --format 'value(clusterIpv4Cidr)')"
    echo $NATIVE_CIDR
    
    helm install cilium cilium/cilium \
    --namespace kube-system \
    --version ${CILIUM_VERSION} \
    --set hubble.enabled=true \
    --set hubble.metrics.enabled="{dns:destinationContext=pod;sourceContext=pod,drop:destinationContext=pod;sourceContext=pod,tcp:destinationContext=pod;sourceContext=pod,flow:destinationContext=pod;sourceContext=pod,port-distribution:destinationContext=pod;sourceContext=pod}" \
    --set image.repository=${SOLO_CILIUM_REPO}/cilium \
    --set image.useDigest=false \
    --set operator.image.repository=${SOLO_CILIUM_REPO}/operator \
    --set operator.image.useDigest=false \
    --set operator.prometheus.enabled=true \
    --set prometheus.enabled=true \
    --set nodeinit.enabled=true \
    --set nodeinit.reconfigureKubelet=true \
    --set nodeinit.removeCbrBridge=true \
    --set cni.binPath=/home/kubernetes/bin \
    --set gke.enabled=true \
    --set ipam.mode=kubernetes \
    --set ipv4NativeRoutingCIDR=${NATIVE_CIDR}
    

    Generic example using a Helm values file (add any cloud provider-specific values):

    helm upgrade -i -n kube-system cilium cilium/cilium --version ${CILIUM_VERSION} -f - <<EOF
    hubble:
      enabled: true
      metrics:
        enabled:
        - dns:destinationContext=pod;sourceContext=pod
        - drop:destinationContext=pod;sourceContext=pod
        - tcp:destinationContext=pod;sourceContext=pod
        - flow:destinationContext=pod;sourceContext=pod
        - port-distribution:destinationContext=pod;sourceContext=pod
    image:
      repository: ${SOLO_CILIUM_REPO}/cilium
      useDigest: false
    operator:
      replicas: 1
      prometheus:
        enabled: true
      image:
        repository: ${SOLO_CILIUM_REPO}/operator
        useDigest: false
    prometheus:
      enabled: true
    # Add cloud provider-specific Helm values here
    EOF
    


    Example for GKE using a Helm values file:

    NATIVE_CIDR="$(gcloud container clusters describe "${NAME}" --zone "${ZONE}" --format 'value(clusterIpv4Cidr)')"
    echo $NATIVE_CIDR
    
    helm upgrade -i -n kube-system cilium cilium/cilium --version ${CILIUM_VERSION} -f - <<EOF
    hubble:
      enabled: true
      metrics:
        enabled:
        - dns:destinationContext=pod;sourceContext=pod
        - drop:destinationContext=pod;sourceContext=pod
        - tcp:destinationContext=pod;sourceContext=pod
        - flow:destinationContext=pod;sourceContext=pod
        - port-distribution:destinationContext=pod;sourceContext=pod
    image:
      repository: ${SOLO_CILIUM_REPO}/cilium
      useDigest: false
    operator:
      replicas: 1
      prometheus:
        enabled: true
      image:
        repository: ${SOLO_CILIUM_REPO}/operator
        useDigest: false
    prometheus:
      enabled: true
    nodeinit:
      enabled: true
      reconfigureKubelet: true
      removeCbrBridge: true
    cni:
      binPath: /home/kubernetes/bin
    gke:
      enabled: true
    ipam:
      mode: kubernetes
    ipv4NativeRoutingCIDR: ${NATIVE_CIDR}
    EOF
    

    Example output:

    NAME: cilium
    LAST DEPLOYED: Fri Sep 16 10:31:52 2022
    NAMESPACE: kube-system
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    You have successfully installed Cilium with Hubble.
    
    Your release version is 1.14.2.
    
    For any further help, visit https://docs.cilium.io/en/v1.14/gettinghelp
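
    Optionally, double-check that the Solo image repo took effect by reviewing the values that you supplied to the Helm release:

    helm get values cilium -n kube-system | grep repository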
    
  6. Verify that the Cilium CNI is successfully installed. Because the Cilium agent is deployed as a daemon set, the number of cilium pods equals the number of nodes in your cluster. The same applies to the cilium-node-init pods, if you enabled node init.

    kubectl get pods -n kube-system | grep cilium
    

    Example output:

    cilium-gbqgq                                                  1/1     Running             0          48s
    cilium-j9n5x                                                  1/1     Running             0          48s
    cilium-node-init-c7rxb                                        1/1     Running             0          48s
    cilium-node-init-pnblb                                        1/1     Running             0          48s
    cilium-node-init-wdtjm                                        1/1     Running             0          48s
    cilium-operator-69dd4567b5-2gjgg                              1/1     Running             0          47s
    cilium-operator-69dd4567b5-ww6wp                              1/1     Running             0          47s
    cilium-smp9c                                                  1/1     Running             0          48s
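
    To confirm that the daemon set is fully scheduled, you can also compare the node count to the agent pod count; the two numbers should match. This check assumes the default k8s-app=cilium label on the agent pods.

    kubectl get nodes --no-headers | wc -l
    kubectl get pods -n kube-system -l k8s-app=cilium --no-headers | wc -l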
    
  7. Check the status of the Cilium installation.

    cilium status --wait
    

    Example output:

       /¯¯\
    /¯¯\__/¯¯\    Cilium:         OK
    \__/¯¯\__/    Operator:       OK
    /¯¯\__/¯¯\    Hubble:         disabled
    \__/¯¯\__/    ClusterMesh:    disabled
       \__/
    
    Deployment        cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
    DaemonSet         cilium             Desired: 3, Ready: 3/3, Available: 3/3
    Containers:       cilium             Running: 3
                      cilium-operator    Running: 2
    Cluster Pods:     10/10 managed by Cilium
    Image versions    cilium             ${SOLO_CILIUM_REPO}/cilium:v1.14.2@sha256:...: 3
                      cilium-operator    ${SOLO_CILIUM_REPO}/operator-generic:v1.14.2@sha256:...: 2
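
    Optionally, run the Cilium CLI's built-in connectivity test to validate the installation end to end. Note that the test deploys temporary workloads into the cluster.

    cilium connectivity test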
    
  8. Repeat steps 5 - 7 to install the CNI in each cluster that you want to use in your Gloo Mesh Enterprise environment.
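
    For example, point kubectl at the next cluster before you re-run the installation. The context name is a placeholder; list your contexts with kubectl config get-contexts.

    kubectl config use-context <workload_cluster_context>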

  9. Continue with Install Gloo Mesh to install the Gloo Mesh Enterprise components in your clusters.