About the Solo distribution of Cilium

Use Gloo Network for Cilium to provide connectivity, security, and observability for containerized workloads with a Cilium-based container network interface (CNI) plug-in that leverages the Linux kernel technology eBPF. The Solo distribution of Cilium is a hardened Cilium enterprise image, which maintains support for security patches to address Common Vulnerabilities and Exposures (CVEs) and other security fixes.

Keep in mind that Gloo Network for Cilium offers security patching support only for Solo distributions of Cilium, not for community Cilium. Solo distributions of Cilium support the same patch versions as community Cilium; you can review community Cilium patch versions in the Cilium release documentation. To get the backported Cilium fixes, you must run the latest Gloo Network patch version.

To download the Solo distribution of a Cilium image, you must be a registered user and be able to log in to the Solo Support Center. Open the Cilium images built by Solo.io support article. When prompted, log in to the Support Center with your Solo account credentials.

The following versions of Gloo Network are supported with the compatible Solo distributions of Cilium. Later versions of the open source project that are released after a Gloo Network version might also work, but are not tested as part of that Gloo Network release.

Gloo Network | Release date | Supported Solo distributions of Cilium versions tested by Solo
2.5          | 09 Jan 2024  | Cilium 1.12 - 1.14 on Kubernetes 1.22 - 1.28

Before you begin

You can follow this guide to customize settings for an advanced Gloo Network for Cilium installation. To learn more about the benefits and architecture, see About.

  1. Install the following command-line (CLI) tools.

    • kubectl, the Kubernetes command line tool. Download the kubectl version that is within one minor version of the Kubernetes clusters you plan to use.
    • meshctl, the Solo command line tool.
        curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.5.0 sh -
        
    • helm, the Kubernetes package manager.
    • cilium, the Cilium command line tool.
  2. Create or use Kubernetes clusters that meet the Cilium requirements. For example, to try out the Cilium CNI in Google Kubernetes Engine (GKE) clusters, your clusters must be created with specific node taints. If you plan to run Gloo Network for Cilium and Istio in an EKS environment, see Considerations for running Cilium and Istio on EKS.

    1. Open the Cilium documentation and find the cloud provider that you want to use to create your clusters.

    2. Follow the steps of your cloud provider to create one or more clusters that meet the Cilium requirements.

      • For a multicluster setup, you need at least two clusters. One cluster is set up as the Gloo Mesh control plane where the management components are installed. The other cluster is registered as your data plane and runs your Kubernetes workloads. You can optionally add more workload clusters to your setup. The instructions in this guide assume one management cluster and two workload clusters.
      • The instructions in the Cilium documentation might create a cluster with insufficient CPU and memory resources for Gloo Network. Make sure that you use a machine type with at least 2 vCPUs and 8 GB of memory.
      • The cluster name must be lowercase and alphanumeric, can include hyphens (-) but no other special characters, and must begin with a letter (not a number).

      Example to create a cluster in GKE:

        export NAME="$(whoami)-$RANDOM"                                                        
      gcloud container clusters create "${NAME}" \
      --node-taints node.cilium.io/agent-not-ready=true:NoExecute \
      --zone us-west2-a \
      --machine-type e2-standard-2
      gcloud container clusters get-credentials "${NAME}" --zone us-west2-a
        
  3. Create environment variables for the following details.

    • GLOO_NETWORK_LICENSE_KEY: Your Gloo Network for Cilium license key. If you do not have one, contact an account representative.
    • GLOO_VERSION: The Gloo Network version. This example uses the latest version. Append -fips for a FIPS-compliant image. Do not include v before the version number.
    • SOLO_CILIUM_REPO: A repo key for the Solo distribution of Cilium that you can get by logging in to the Support Center and reviewing the Cilium images built by Solo.io support article.
    • CILIUM_VERSION: The Cilium version that you want to install. This example uses the latest version.
      export GLOO_NETWORK_LICENSE_KEY=<license_key>
    export GLOO_VERSION=2.5.0
    export SOLO_CILIUM_REPO=<cilium_repo_key>
    export CILIUM_VERSION=1.14.2
      
  4. If you plan to run a Prometheus server alongside the built-in Prometheus server in Gloo Network, make sure to review Run another Prometheus instance alongside the built-in one before you proceed.

Install the Solo distribution of the Cilium CNI

  1. Add and update the Cilium Helm repo.

      helm repo add cilium https://helm.cilium.io/
    helm repo update
      
  2. Install the Cilium CNI in your cluster, including the following flags for compatibility with Gloo Network.
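    For example, a minimal install might look like the following sketch. It uses the standard Cilium Helm chart values for overriding the image registries and enabling Hubble metrics; check the Cilium images built by Solo.io support article for the exact flags that Solo recommends for your platform, and add any cloud-specific settings (such as the GKE node-init options) that your environment needs.

      # Sketch only: point the chart at the Solo distribution registry and enable Hubble.
      helm install cilium cilium/cilium \
        --namespace kube-system \
        --version ${CILIUM_VERSION} \
        --set image.repository=${SOLO_CILIUM_REPO}/cilium \
        --set image.tag=v${CILIUM_VERSION} \
        --set image.useDigest=false \
        --set operator.image.repository=${SOLO_CILIUM_REPO}/operator \
        --set operator.image.tag=v${CILIUM_VERSION} \
        --set operator.image.useDigest=false \
        --set hubble.enabled=true \
        --set hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}"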

    Example output:

      NAME: cilium
    LAST DEPLOYED: Fri Sep 16 10:31:52 2022
    NAMESPACE: kube-system
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    You have successfully installed Cilium with Hubble.
    
    Your release version is 1.14.2.
    
    For any further help, visit https://docs.cilium.io/en/v1.12/gettinghelp
      
  3. Verify that the Cilium CNI is successfully installed. Because the Cilium agent is deployed as a daemon set, the number of Cilium and Cilium node init pods equals the number of nodes in your cluster.

      kubectl get pods -n kube-system | grep cilium
      

    Example output:

      cilium-gbqgq                                                  1/1     Running             0          48s
    cilium-j9n5x                                                  1/1     Running             0          48s
    cilium-node-init-c7rxb                                        1/1     Running             0          48s
    cilium-node-init-pnblb                                        1/1     Running             0          48s
    cilium-node-init-wdtjm                                        1/1     Running             0          48s
    cilium-operator-69dd4567b5-2gjgg                              1/1     Running             0          47s
    cilium-operator-69dd4567b5-ww6wp                              1/1     Running             0          47s
    cilium-smp9c                                                  1/1     Running             0          48s
      
  4. Check the status of the Cilium CNI installation.

      cilium status --wait
      

    Example output:

      
        /¯¯\
     /¯¯\__/¯¯\    Cilium:             OK
     \__/¯¯\__/    Operator:           OK
     /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
     \__/¯¯\__/    Hubble Relay:       disabled
        \__/       ClusterMesh:        disabled
    
    Deployment             cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
    DaemonSet              cilium             Desired: 4, Ready: 4/4, Available: 4/4
    Containers:            cilium-operator    Running: 2
                           cilium             Running: 4
    Cluster Pods:          3/3 managed by Cilium
    Helm chart version:    1.14.2
    Image versions         cilium             ${SOLO_CILIUM_REPO}/cilium:v1.14.2: 4
                           cilium-operator    ${SOLO_CILIUM_REPO}/operator-generic:v1.14.2: 2
      
  5. Repeat steps 2 - 4 to install the Cilium CNI in each cluster that you want to use in your Gloo Network environment.

Install Gloo Network for Cilium

Single cluster

Install the Gloo Network for Cilium components in your cluster and verify that Gloo Network can discover the Cilium CNI.

  1. Save the name of your cluster as an environment variable.

      export CLUSTER_NAME=<cluster_name>
      
  2. Add and update the Helm repository for Gloo Network.

      helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
    helm repo update
      
  3. Prepare a Helm values file to provide your customizations. To get started, you can use the minimum settings in the following profile as a basis. These settings enable all components that are required for a single-cluster Gloo Network installation.

      curl https://storage.googleapis.com/gloo-platform/helm-profiles/$GLOO_VERSION/gloo-core-single-cluster.yaml > gloo-network-single.yaml
    open gloo-network-single.yaml
      
  4. Edit the file to provide your own details, such as the following optional settings. You can see all possible fields for the Helm chart that you can set by running helm show values gloo-platform/gloo-platform --version v2.5.0 > all-values.yaml. You can also see these fields in the Helm values documentation.

    Field | Description
    glooMgmtServer.resources.limits | Add resource limits for the gloo-mesh-mgmt-server pod, such as cpu: 1000m and memory: 1Gi.
    glooMgmtServer.serviceOverrides.metadata.annotations | Add annotations for the management server load balancer as needed, such as AWS-specific load balancer annotations. For more information, see Deployment and service overrides.
    glooUi.auth | Set up OIDC authorization for the Gloo UI. For more information, see UI authentication.
    prometheus.enabled | Enable or disable the default Prometheus instance. Prometheus is required to scrape metrics from specific workloads to visualize workload communication in the Gloo UI Graph.
    redis | Disable the default Redis deployment and provide your own backing database as needed. For more information, see Backing databases.
    telemetryCollectorCustomization | Customize the telemetry collector pipelines:
    • Cilium logs: To collect Cilium pod logs and view them in the Gloo UI, set telemetryCollectorCustomization.pipelines.logs/cilium_flows.enabled to true. For more information, see Add Cilium flow logs.
    • Cilium insights: To generate insights that analyze your Cilium setup’s health, set telemetryCollectorCustomization.pipelines.metrics/cilium.enabled to true. This pipeline uses the default filter/cilium processor that collects all Cilium and Hubble metrics. You can optionally customize the Cilium metrics collection to reduce the number of metrics that are collected.
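    For example, the resource limits and Cilium telemetry pipelines from the table could be added to gloo-network-single.yaml along these lines (a sketch; the sizes shown are placeholders to adjust for your environment):

      # Excerpt of gloo-network-single.yaml with a few optional overrides.
      glooMgmtServer:
        resources:
          limits:
            cpu: 1000m
            memory: 1Gi
      telemetryCollectorCustomization:
        pipelines:
          logs/cilium_flows:
            enabled: true
          metrics/cilium:
            enabled: true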
  5. Install the Gloo CRDs.

      helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
      --namespace=gloo-mesh \
      --create-namespace \
      --version=$GLOO_VERSION \
      --set installEnterpriseCrds=false
      
  6. Use the customizations in your Helm values file to install the Gloo Network components in your cluster.

      helm upgrade -i gloo-platform gloo-platform/gloo-platform \
        -n gloo-mesh \
        --version $GLOO_VERSION \
        --values gloo-network-single.yaml \
        --set common.cluster=$CLUSTER_NAME \
        --set licensing.glooNetworkLicenseKey=$GLOO_NETWORK_LICENSE_KEY \
        --set telemetryCollectorCustomization.pipelines.logs/cilium_flows.enabled=true \
        --set telemetryCollectorCustomization.pipelines.metrics/cilium.enabled=true
      
  7. Verify that Gloo Network installed correctly. This check might take a few seconds while it confirms that:

    • Your Gloo Network product license is valid and current.
    • The Gloo CRDs installed at the correct version.
    • The Gloo pods are running and healthy.
    • The Gloo agent is running and connected to the management server.
      meshctl check
      

    Example output:

      🟢 License status
    
     INFO  gloo-network enterprise license expiration is 17 Oct 24 13:24 CDT
     INFO  No GraphQL license module found for any product
    
    🟢 CRD version check
    
    
    🟢 Gloo Platform deployment status
    
    Namespace | Name                           | Ready | Status 
    gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
    gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
    gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
    gloo-mesh | prometheus-server              | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-collector-agent | 4/4   | Healthy
    
    🟢 Mgmt server connectivity to workload agents
    
    Cluster | Registered | Connected Pod                                   
    test    | true       | gloo-mesh/gloo-mesh-mgmt-server-558cddbbd7-rf2hv
    
    Connected Pod                                    | Clusters
    gloo-mesh/gloo-mesh-mgmt-server-558cddbbd7-rf2hv | 1  
      

Multicluster

In a multicluster setup, you deploy the Gloo Network control plane into a dedicated management cluster, and the Gloo data plane into one or more workload clusters.

Control plane

Deploy the Gloo Network control plane into a dedicated management cluster.

  1. Save the name and kubeconfig context for your management cluster in environment variables.

      export MGMT_CLUSTER=<management-cluster-name>
    export MGMT_CONTEXT=<management-cluster-context>
      
  2. Prepare a Helm values file to provide your customizations. To get started, you can use the minimum settings in the following profile as a basis. These settings enable all components that are required for a Gloo Network control plane installation.

      curl https://storage.googleapis.com/gloo-platform/helm-profiles/$GLOO_VERSION/gloo-core-mgmt.yaml > control-plane.yaml
    open control-plane.yaml
      
  3. Decide how you want to manage the root and intermediate certificates that the Gloo management server and agents use to secure their relay connection. For more information, see the relay certificate options.

  4. Edit the file to provide your own details, such as the following optional settings. You can see all possible fields for the Helm chart that you can set by running helm show values gloo-platform/gloo-platform --version v2.5.0 > all-values.yaml. You can also see these fields in the Helm values documentation.

    Field | Description
    glooMgmtServer.resources.limits | Add resource limits for the gloo-mesh-mgmt-server pod, such as cpu: 1000m and memory: 1Gi.
    glooMgmtServer.serviceOverrides.metadata.annotations | Add annotations for the management server load balancer as needed, such as AWS-specific load balancer annotations. For more information, see Deployment and service overrides.
    glooUi.auth | Set up OIDC authorization for the Gloo UI. For more information, see UI authentication.
    prometheus.enabled | Enable or disable the default Prometheus instance. Prometheus is required to scrape metrics from specific workloads to visualize workload communication in the Gloo UI Graph.
    redis | Disable the default Redis deployment and provide your own backing database as needed. For more information, see Backing databases.
    telemetryCollectorCustomization | Customize the telemetry collector pipelines:
    • Cilium logs: To collect Cilium pod logs and view them in the Gloo UI, set telemetryCollectorCustomization.pipelines.logs/cilium_flows.enabled to true. For more information, see Add Cilium flow logs.
    • Cilium insights: To generate insights that analyze your Cilium setup’s health, set telemetryCollectorCustomization.pipelines.metrics/cilium.enabled to true. This pipeline uses the default filter/cilium processor that collects all Cilium and Hubble metrics. You can optionally customize the Cilium metrics collection to reduce the number of metrics that are collected.
  5. Install the Gloo CRDs.

      helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
      --namespace=gloo-mesh \
      --create-namespace \
      --kube-context $MGMT_CONTEXT \
      --version=$GLOO_VERSION \
      --set installEnterpriseCrds=false
      
  6. Use the customizations in your Helm values file to install the Gloo Network control plane components in your management cluster.

      helm upgrade -i gloo-platform gloo-platform/gloo-platform \
        --kube-context $MGMT_CONTEXT \
        -n gloo-mesh \
        --version $GLOO_VERSION \
        --values control-plane.yaml \
        --set common.cluster=$MGMT_CLUSTER \
        --set licensing.glooNetworkLicenseKey=$GLOO_NETWORK_LICENSE_KEY \
        --set telemetryCollectorCustomization.pipelines.logs/cilium_flows.enabled=true \
        --set telemetryCollectorCustomization.pipelines.metrics/cilium.enabled=true
      

    Note: For quick testing, you can create an insecure connection between the management server and workload agents by including the --set common.insecure=true and --set glooMgmtServer.insecure=true flags.

  7. Verify that the control plane pods have a status of Running.

      kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
      

    Example output:

      NAME                                      READY   STATUS    RESTARTS   AGE
    gloo-mesh-mgmt-server-56c495796b-cx687    1/1     Running   0          30s
    gloo-mesh-redis-8455d49c86-f8qhw          1/1     Running   0          30s
    gloo-mesh-ui-65b6b6df5f-bf4vp             3/3     Running   0          30s
    gloo-telemetry-collector-agent-7rzfb      1/1     Running   0          30s
    gloo-telemetry-gateway-6547f479d5-r4zm6   1/1     Running   0          30s
    prometheus-server-57cd8c74d4-2bc7f        2/2     Running   0          30s
      
  8. Save the external address and port that your cloud provider assigned to the Gloo OpenTelemetry (OTel) gateway service. The OTel collector agents in each workload cluster send metrics to this address.
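    For example, with an IP-based load balancer you might capture the address in the TELEMETRY_GATEWAY_ADDRESS variable that is used later during workload cluster registration. This is a sketch: the jsonpath assumes an ip field (use hostname for DNS-based load balancers such as on AWS), and the otlp port name is an assumption that you should verify against your gloo-telemetry-gateway service.

      # Sketch only: adjust the jsonpath and port name for your cloud provider.
      export TELEMETRY_GATEWAY_IP=$(kubectl get svc gloo-telemetry-gateway -n gloo-mesh --context $MGMT_CONTEXT \
        -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
      export TELEMETRY_GATEWAY_PORT=$(kubectl get svc gloo-telemetry-gateway -n gloo-mesh --context $MGMT_CONTEXT \
        -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
      export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
      echo $TELEMETRY_GATEWAY_ADDRESS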

  9. Save the external address and port that your cloud provider assigned to the gloo-mesh-mgmt-server service. The Gloo agent (gloo-mesh-agent) in each workload cluster connects to this address over a secure connection.
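    A similar sketch works for the management server address, which you pass to the agent Helm installation later as MGMT_SERVER_NETWORKING_ADDRESS. Again, the jsonpath and the grpc port name are assumptions to verify against your gloo-mesh-mgmt-server service.

      # Sketch only: adjust the jsonpath and port name for your cloud provider.
      export MGMT_SERVER_NETWORKING_IP=$(kubectl get svc gloo-mesh-mgmt-server -n gloo-mesh --context $MGMT_CONTEXT \
        -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
      export MGMT_SERVER_NETWORKING_PORT=$(kubectl get svc gloo-mesh-mgmt-server -n gloo-mesh --context $MGMT_CONTEXT \
        -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
      export MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_IP}:${MGMT_SERVER_NETWORKING_PORT}
      echo $MGMT_SERVER_NETWORKING_ADDRESS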

Install the data plane

Register each workload cluster with the Gloo Network control plane by deploying Gloo data plane components. A deployment named gloo-mesh-agent runs the Gloo agent in each workload cluster.

  1. For the workload cluster that you want to register with Gloo Network, set the following environment variables. You update these variables each time you follow these steps to register another workload cluster.

      export REMOTE_CLUSTER=<workload_cluster_name>
    export REMOTE_CONTEXT=<workload_cluster_context>
      
  2. In the management cluster, create a KubernetesCluster resource to represent the workload cluster and store relevant data, such as the workload cluster’s local domain.

      kubectl apply --context $MGMT_CONTEXT -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: KubernetesCluster
    metadata:
       name: ${REMOTE_CLUSTER}
       namespace: gloo-mesh
    spec:
       clusterDomain: cluster.local
    EOF
      
  3. Prepare a Helm values file to provide your customizations. To get started, you can use the minimum settings in the following profile as a basis. These settings enable all components that are required to install the Gloo data plane components in the workload cluster.

      curl https://storage.googleapis.com/gloo-platform/helm-profiles/$GLOO_VERSION/gloo-core-agent.yaml > data-plane.yaml
    open data-plane.yaml
      
  4. Depending on the method you chose to secure the relay connection, prepare the Helm values for the data plane installation. For more information, see the relay certificate options.

  5. Edit the file to provide your own details, such as the following optional settings. You can see all possible fields for the Helm chart that you can set by running helm show values gloo-platform/gloo-platform --version v2.5.0 > all-values.yaml. You can also see these fields in the Helm values documentation.

    Field | Description
    glooAgent.resources.limits | Add resource limits for the gloo-mesh-agent pod, such as cpu: 500m and memory: 512Mi.
    telemetryCollectorCustomization | Customize the telemetry collector pipelines:
    • Cilium logs: To collect Cilium pod logs and view them in the Gloo UI, set telemetryCollectorCustomization.pipelines.logs/cilium_flows.enabled to true. For more information, see Add Cilium flow logs.
    • Cilium insights: To generate insights that analyze your Cilium setup’s health, set telemetryCollectorCustomization.pipelines.metrics/cilium.enabled to true. This pipeline uses the default filter/cilium processor that collects all Cilium and Hubble metrics. You can optionally customize the Cilium metrics collection to reduce the number of metrics that are collected.
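    For example, the agent resource limits from the table could be added to data-plane.yaml along these lines (a sketch; the sizes shown are placeholders):

      # Excerpt of data-plane.yaml with an optional override.
      glooAgent:
        resources:
          limits:
            cpu: 500m
            memory: 512Mi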
  6. Install the Gloo CRDs.

      helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
      --namespace=gloo-mesh \
      --create-namespace \
      --kube-context $REMOTE_CONTEXT \
      --version=$GLOO_VERSION \
      --set installEnterpriseCrds=false
      
  7. Use the customizations in your Helm values file to install the Gloo Network data plane components in your workload cluster.

      helm upgrade -i gloo-platform gloo-platform/gloo-platform \
        --kube-context $REMOTE_CONTEXT \
        -n gloo-mesh \
        --version $GLOO_VERSION \
        --values data-plane.yaml \
        --set common.cluster=$REMOTE_CLUSTER \
        --set glooAgent.relay.serverAddress=$MGMT_SERVER_NETWORKING_ADDRESS \
        --set telemetryCollector.config.exporters.otlp.endpoint=$TELEMETRY_GATEWAY_ADDRESS \
        --set telemetryCollectorCustomization.pipelines.logs/cilium_flows.enabled=true \
        --set telemetryCollectorCustomization.pipelines.metrics/cilium.enabled=true
      
  8. Verify that the Gloo data plane components are healthy. If not, try debugging the agent.

      meshctl check --kubecontext $REMOTE_CONTEXT
      

    Example output:

      🟢 CRD Version check
    
    🟢 Gloo Platform Deployment Status
    
    Namespace        | Name                           | Ready | Status 
    gloo-mesh        | gloo-mesh-agent                | 1/1   | Healthy
    gloo-mesh-addons | ext-auth-service               | 1/1   | Healthy
    gloo-mesh-addons | rate-limiter                   | 1/1   | Healthy
    gloo-mesh-addons | redis                          | 1/1   | Healthy
    gloo-mesh        | gloo-telemetry-collector-agent | 3/3   | Healthy
      
  9. Repeat steps 1 - 8 to register each workload cluster with Gloo.

  10. Verify that your multicluster Gloo Network setup installed correctly. Note that this check might take a few seconds while it confirms that:

    • Your Gloo Network product license is valid and current.
    • The Gloo CRDs installed at the correct version.
    • The Gloo pods are running and healthy.
    • The Gloo agent is running and connected to the management server.
      meshctl check --kubecontext $MGMT_CONTEXT
      

    Example output:

      🟢 License status
    
     INFO  gloo-network enterprise license expiration is 25 Aug 24 10:38 CDT
     INFO  No GraphQL license module found for any product
    
    🟢 CRD version check
    
    🟢 Gloo Platform deployment status
    
    Namespace | Name                           | Ready | Status 
    gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
    gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
    gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-gateway         | 1/1   | Healthy
    gloo-mesh | prometheus-server              | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
    
    🟢 Mgmt server connectivity to workload agents
    
    Cluster  | Registered | Connected Pod                                   
    cluster1 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
    cluster2 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
      

Next steps

Now that you have Gloo Network for Cilium up and running, check out the following guides to expand your Cilium capabilities.

  • Insights: Continue exploring insights to review your setup’s health and security posture.
  • Gloo Network: When it’s time to upgrade Gloo Network, see the upgrade guide.
  • Istio: To also deploy Istio in your environment, check out Gloo Mesh Core, which installs and manages your service mesh for you through service mesh lifecycle management.