You can follow this guide to customize settings for an advanced Gloo Network installation. To learn more about the benefits and architecture, see About.

About the Solo distribution of Cilium

Use Gloo Network for Cilium to provide connectivity, security, and observability for containerized workloads with a Cilium-based container network interface (CNI) plug-in that leverages the Linux kernel technology eBPF.

The Solo distribution of Cilium is a hardened Cilium enterprise image that is maintained with security patches to address Common Vulnerabilities and Exposures (CVEs) and other security fixes.

Keep in mind that Gloo Network offers security patching support only for Solo distributions of Cilium, not for community Cilium versions. Solo distributions of Cilium support the same patch versions as community Cilium, which you can review in the Cilium release documentation. To get the backported Cilium fixes, you must run the latest Gloo Network patch version.

To download the Solo distribution of a Cilium image, you must be a registered user and be able to log in to the Solo Support Center. Open the Cilium images built by Solo.io support article. When prompted, log in to the Support Center with your Solo account credentials.

The following Gloo Network versions are supported with the compatible Solo distributions of Cilium. Later versions of the open source project that are released after Gloo Network might also work, but are not tested as part of the Gloo Network release.

Gloo Network | Release date | Supported Solo distributions of Cilium versions tested by Solo
2.5 | 09 Jan 2024 | Cilium 1.12 - 1.14 on Kubernetes 1.22 - 1.28

Before you begin

  1. Install the following command-line interface (CLI) tools.

    • kubectl, the Kubernetes command line tool. Download the kubectl version that is within one minor version of the Kubernetes clusters you plan to use.
    • meshctl, the Solo command line tool.
        curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.5.12 sh -
      export PATH=$HOME/.gloo-mesh/bin:$PATH
        
    • helm, the Kubernetes package manager.
    • cilium, the Cilium command line tool.
  2. Create environment variables for the following details.

    • GLOO_NETWORK_LICENSE_KEY: Set your Gloo Network for Cilium license key as an environment variable. If you do not have one, contact an account representative.
    • GLOO_VERSION: Set the Gloo Network version. This example uses the latest version. Append -fips for a FIPS-compliant image. Do not include v before the version number.
    • SOLO_CILIUM_REPO: A repo key for the Solo distribution of Cilium that you can get by logging in to the Support Center and reviewing the Cilium images built by Solo.io support article.
    • CILIUM_VERSION: The Cilium version that you want to install. This example uses the latest version.
      export GLOO_NETWORK_LICENSE_KEY=<license_key>
    export GLOO_VERSION=2.5.12
    export SOLO_CILIUM_REPO=<cilium_repo_key>
    export CILIUM_VERSION=1.14.2
      
  3. Optional: If you plan to run Istio with sidecar injection and the Cilium CNI in tunneling mode (VXLAN or GENEVE) on an Amazon EKS cluster, see Considerations for running Cilium and Istio on EKS.

Install the Solo distribution of the Cilium CNI

  1. Create or use Kubernetes clusters that meet the Cilium requirements. For example, to try out the Cilium CNI in Google Kubernetes Engine (GKE) clusters, your clusters must be created with specific node taints.

    1. Open the Cilium documentation and find the cloud provider that you want to use to create your clusters.

    2. Follow the steps of your cloud provider to create one or more clusters that meet the Cilium requirements.

      • The instructions in the Cilium documentation might create a cluster with insufficient CPU and memory resources for Gloo Network. Make sure that you use a machine type with at least 2 vCPUs and 8 GB of memory.
      • The cluster name must be lowercase alphanumeric, can include hyphens (-) but no other special characters, and must begin with a letter (not a number).
      • Multicluster setups only: For a multicluster setup, you need at least two clusters. One cluster is set up as the Gloo management plane where the management components are installed. The other cluster is registered as your data plane and runs your Kubernetes workloads and Istio service mesh. You can optionally add more workload clusters to your setup. The instructions in this guide assume one management cluster and two workload clusters.

      Example to create a cluster in GKE:

        export NAME="$(whoami)-$RANDOM"                                                        
      gcloud container clusters create "${NAME}" \
      --node-taints node.cilium.io/agent-not-ready=true:NoExecute \
      --zone us-west2-a \
      --machine-type e2-standard-2
      gcloud container clusters get-credentials "${NAME}" --zone us-west2-a
        
  2. Add and update the Cilium Helm repo.

      helm repo add cilium https://helm.cilium.io/
    helm repo update
      
  3. Install the CNI by using a Solo distribution of Cilium in your cluster. Be sure to include the following settings for compatibility with Gloo Network, and the necessary settings for your infrastructure provider.
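
    For example, the following command is a minimal sketch of a Helm install that points the Cilium agent and operator images at the Solo repo. The image.repository, image.tag, and useDigest flags are standard Cilium Helm chart values; the Hubble setting and any provider-specific values (such as IPAM or routing settings for your cloud) are assumptions that you must adapt. Refer to the Cilium images built by Solo.io support article for the exact image settings for your version.

      helm install cilium cilium/cilium \
        --namespace kube-system \
        --version ${CILIUM_VERSION} \
        --set hubble.enabled=true \
        --set image.repository=${SOLO_CILIUM_REPO}/cilium \
        --set image.tag=v${CILIUM_VERSION} \
        --set image.useDigest=false \
        --set operator.image.repository=${SOLO_CILIUM_REPO}/operator \
        --set operator.image.tag=v${CILIUM_VERSION} \
        --set operator.image.useDigest=false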

    Example output:

      NAME: cilium
    LAST DEPLOYED: Fri Sep 16 10:31:52 2022
    NAMESPACE: kube-system
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    You have successfully installed Cilium with Hubble.
    
    Your release version is 1.14.2.
    
    For any further help, visit https://docs.cilium.io/en/v1.12/gettinghelp
      
  4. Verify that the Cilium CNI is successfully installed. Because the Cilium agent is deployed as a daemon set, the number of Cilium and Cilium node init pods equals the number of nodes in your cluster.

      kubectl get pods -n kube-system | grep cilium
      

    Example output:

      cilium-gbqgq                                                  1/1     Running             0          48s
    cilium-j9n5x                                                  1/1     Running             0          48s
    cilium-node-init-c7rxb                                        1/1     Running             0          48s
    cilium-node-init-pnblb                                        1/1     Running             0          48s
    cilium-node-init-wdtjm                                        1/1     Running             0          48s
    cilium-operator-69dd4567b5-2gjgg                              1/1     Running             0          47s
    cilium-operator-69dd4567b5-ww6wp                              1/1     Running             0          47s
    cilium-smp9c                                                  1/1     Running             0          48s
      
  5. Check the status of the Cilium installation.

      cilium status --wait
      

    Example output:

      
         /¯¯\
     /¯¯\__/¯¯\    Cilium:             OK
     \__/¯¯\__/    Operator:           OK
     /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
     \__/¯¯\__/    Hubble Relay:       disabled
        \__/       ClusterMesh:        disabled
    
    Deployment             cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
    DaemonSet              cilium             Desired: 4, Ready: 4/4, Available: 4/4
    Containers:            cilium-operator    Running: 2
                           cilium             Running: 4
    Cluster Pods:          3/3 managed by Cilium
    Helm chart version:    1.14.2
    Image versions         cilium             ${SOLO_CILIUM_REPO}/cilium:v1.14.2: 4
                           cilium-operator    ${SOLO_CILIUM_REPO}/operator-generic:v1.14.2: 2
      
  6. Multicluster setups only: Repeat steps 3 - 5 to install the CNI in each cluster that you want to use in your Gloo Network environment.

Install Gloo Network for Cilium

Deploy the Gloo Network components into one cluster or across a multicluster environment.

Single cluster

Install the Gloo Network for Cilium components in your cluster and verify that Gloo Network can discover the Cilium CNI.

  1. Save the name of your cluster as an environment variable.

      export CLUSTER_NAME=<cluster_name>
      
  2. Add and update the Helm repository for Gloo.

      helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
    helm repo update
      
  3. Install the Gloo CRDs.

      helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
       --namespace=gloo-mesh \
       --create-namespace \
       --version=$GLOO_VERSION \
       --set installEnterpriseCrds=false
      
  4. Prepare a Helm values file to provide your customizations. To get started, you can use the minimum settings in the following profile as a basis. These settings enable all components that are required for a single-cluster Gloo Network installation.

      curl https://storage.googleapis.com/gloo-platform/helm-profiles/$GLOO_VERSION/gloo-core-single-cluster.yaml > gloo-network-single.yaml
    open gloo-network-single.yaml
      
  5. Decide how you want to secure the relay connection between the Gloo management server and agent. In test and POC environments, you can use Gloo self-signed certificates to secure the connection. If you plan to use Gloo Network in production, it is recommended to bring your own certificates instead. For more information, see Setup options.

  6. Edit the file to provide your own details for settings that are recommended for production deployments, such as the following settings.

    Field | Description
    glooAgent.resources.limits | Add resource limits for the gloo-mesh-agent pod, such as cpu: 500m and memory: 512Mi.
    glooMgmtServer.resources.limits | Add resource limits for the gloo-mesh-mgmt-server pod, such as cpu: 1000m and memory: 1Gi.
    glooMgmtServer.safeMode and glooMgmtServer.safeStartWindow | Configure how you want the Gloo management server to handle translation after a Redis restart. For available options, see Redis safe mode options.
    glooMgmtServer.serviceOverrides.metadata.annotations | Add annotations for the management server load balancer as needed, such as AWS-specific load balancer annotations. For more information, see Deployment and service overrides.
    glooUi.auth | Set up OIDC authorization for the Gloo UI. For more information, see UI authentication.
    prometheus.enabled | Disable the default Prometheus instance as needed to provide your own. Otherwise, you can keep the default Prometheus server enabled, and deploy a production-level server to scrape metrics from the server. For more information on each option, see Best practices for collecting metrics in production.
    redis | Disable the default Redis deployment and provide your own backing database as needed. For more information, see Backing databases.
    Cilium observability | Configure the following settings:
    • Cilium metrics: Set glooNetwork.agent.enabled to true to install the Gloo Network-specific agent, which collects additional metrics when Cilium is installed.
    • Cilium logs: To collect Cilium pod logs and view them in the Gloo UI, set telemetryCollectorCustomization.pipelines.logs/cilium_flows.enabled to true. For more information, see Add Cilium flow logs.
    • Cilium insights: To generate insights that analyze your Cilium setup’s health, set telemetryCollectorCustomization.pipelines.metrics/cilium.enabled to true. This pipeline uses the default filter/cilium processor that collects all Cilium and Hubble metrics. Note that if you customize the Cilium metrics collection to reduce the number of metrics that are collected, not all Cilium insights can be generated.
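
    For example, the following snippet is a minimal sketch of overrides that you might add to gloo-network-single.yaml, based on the fields in this table. The resource limit values are illustrative assumptions only; adjust them for your environment.

      # Illustrative overrides only; adjust for your environment
      glooMgmtServer:
        resources:
          limits:
            cpu: 1000m
            memory: 1Gi
      glooAgent:
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
      # Cilium observability
      glooNetwork:
        agent:
          enabled: true
      telemetryCollectorCustomization:
        pipelines:
          logs/cilium_flows:
            enabled: true
          metrics/cilium:
            enabled: true
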
  7. Use the customizations in your Helm values file and the following recommended Cilium settings to install the Gloo Network components in your cluster.

      helm upgrade -i gloo-platform gloo-platform/gloo-platform \
        -n gloo-mesh \
        --version $GLOO_VERSION \
        --values gloo-network-single.yaml \
        --set common.cluster=$CLUSTER_NAME \
        --set licensing.glooNetworkLicenseKey=$GLOO_NETWORK_LICENSE_KEY \
        --set telemetryCollectorCustomization.pipelines.logs/cilium_flows.enabled=true \
        --set telemetryCollectorCustomization.pipelines.metrics/cilium.enabled=true
      
  8. Verify that your Gloo Network setup is correctly installed. If not, try debugging the relay connection. Note that this check might take a few seconds to verify that:

    • Your Gloo product license is valid and current.
    • The Gloo CRDs are installed at the correct version.
    • The management plane pods in the management cluster are running and healthy.
    • The Gloo agent is running and connected to the management server.
      meshctl check
      

    Example output:

      🟢 License status
    
    INFO  gloo-network enterprise license expiration is 25 Aug 24 10:38 CDT
    
    🟢 CRD version check
    
    🟢 Gloo deployment status
    
    Namespace | Name                           | Ready | Status
    gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
    gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
    gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
    gloo-mesh | prometheus-server              | 1/1   | Healthy
    
    🟢 Mgmt server connectivity to workload agents
    
    Cluster | Registered | Connected Pod                                   
    test    | true       | gloo-mesh/gloo-mesh-mgmt-server-558cddbbd7-rf2hv
    
    Connected Pod                                    | Clusters
    gloo-mesh/gloo-mesh-mgmt-server-558cddbbd7-rf2hv | 1  
      

Multicluster

In a multicluster setup, you deploy the Gloo management plane into a dedicated management cluster, and the Gloo data plane into one or more workload clusters.

Management plane

Deploy the Gloo management plane into a dedicated management cluster.

  1. Save the name and kubeconfig context for your management cluster in environment variables.

      export MGMT_CLUSTER=<management-cluster-name>
    export MGMT_CONTEXT=<management-cluster-context>
      
  2. Add and update the Helm repository for Gloo.

      helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
    helm repo update
      
  3. Install the Gloo CRDs.

      helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
       --namespace=gloo-mesh \
       --create-namespace \
       --version=$GLOO_VERSION \
       --set installEnterpriseCrds=false \
       --kube-context $MGMT_CONTEXT
      
  4. Prepare a Helm values file to provide your customizations. To get started, you can use the minimum settings in the following profile as a basis. These settings enable all components that are required for a Gloo management plane installation.
      curl https://storage.googleapis.com/gloo-platform/helm-profiles/$GLOO_VERSION/gloo-core-mgmt.yaml > mgmt-plane.yaml
    open mgmt-plane.yaml
      
  5. Decide how you want to secure the relay connection between the Gloo management server and agents. In test and POC environments, you can use self-signed certificates to secure the connection. If you plan to use Gloo Network in production, it is recommended to bring your own certificates instead. For more information, see Setup options.

  6. Edit the file to provide your own details for settings that are recommended for production deployments, such as the following settings.

    Field | Description
    glooMgmtServer.resources.limits | Add resource limits for the gloo-mesh-mgmt-server pod, such as cpu: 1000m and memory: 1Gi.
    glooMgmtServer.safeMode and glooMgmtServer.safeStartWindow | Configure how you want the Gloo management server to handle translation after a Redis restart. For available options, see Redis safe mode options.
    glooMgmtServer.serviceOverrides.metadata.annotations | Add annotations for the management server load balancer as needed, such as AWS-specific load balancer annotations. For more information, see Deployment and service overrides.
    glooUi.auth | Set up OIDC authorization for the Gloo UI. For more information, see UI authentication.
    prometheus.enabled | Disable the default Prometheus instance as needed to provide your own. Otherwise, you can keep the default Prometheus server enabled, and deploy a production-level server to scrape metrics from the server. For more information on each option, see Best practices for collecting metrics in production.
    redis | Disable the default Redis deployment and provide your own backing database as needed. For more information, see Backing databases.
    glooNetwork.agent.enabled | Set to true to install the Gloo Network-specific agent, which collects additional metrics when Cilium is installed.
    telemetryCollectorCustomization | Configure the following settings:
    • Cilium logs: To collect Cilium pod logs and view them in the Gloo UI, set telemetryCollectorCustomization.pipelines.logs/cilium_flows.enabled to true. For more information, see Add Cilium flow logs.
    • Cilium insights: To generate insights that analyze your Cilium setup’s health, set telemetryCollectorCustomization.pipelines.metrics/cilium.enabled to true. This pipeline uses the default filter/cilium processor that collects all Cilium and Hubble metrics. You can optionally customize the Cilium metrics collection to reduce the number of metrics that are collected.
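
    For example, the following snippet is a minimal sketch of overrides that you might add to mgmt-plane.yaml, based on the fields in this table. The resource limits, safe mode value, and AWS load balancer annotation are illustrative assumptions; adjust or omit them for your environment.

      # Illustrative overrides only; adjust for your environment
      glooMgmtServer:
        resources:
          limits:
            cpu: 1000m
            memory: 1Gi
        # One possible choice; see Redis safe mode options for alternatives
        safeMode: true
        serviceOverrides:
          metadata:
            annotations:
              # Example annotation; assumes the AWS Load Balancer Controller
              service.beta.kubernetes.io/aws-load-balancer-type: "external"
      glooNetwork:
        agent:
          enabled: true
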
  7. Use the customizations in your Helm values file to install the Gloo management plane components in your management cluster.

      helm upgrade -i gloo-platform gloo-platform/gloo-platform \
        --kube-context $MGMT_CONTEXT \
        -n gloo-mesh \
        --version $GLOO_VERSION \
        --values mgmt-plane.yaml \
        --set common.cluster=$MGMT_CLUSTER \
        --set licensing.glooNetworkLicenseKey=$GLOO_NETWORK_LICENSE_KEY \
        --set glooNetwork.agent.enabled=true \
        --set telemetryCollectorCustomization.pipelines.logs/cilium_flows.enabled=true \
        --set telemetryCollectorCustomization.pipelines.metrics/cilium.enabled=true
      
  8. Verify that the management plane pods have a status of Running.

      kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
      

    Example output:

      NAME                                      READY   STATUS    RESTARTS   AGE
    gloo-mesh-mgmt-server-56c495796b-cx687    1/1     Running   0          30s
    gloo-mesh-redis-8455d49c86-f8qhw          1/1     Running   0          30s
    gloo-mesh-ui-65b6b6df5f-bf4vp             3/3     Running   0          30s
    gloo-telemetry-collector-agent-7rzfb      1/1     Running   0          30s
    gloo-telemetry-gateway-6547f479d5-r4zm6   1/1     Running   0          30s
    prometheus-server-57cd8c74d4-2bc7f        2/2     Running   0          30s
      
  9. Save the external address and port that your cloud provider assigned to the gloo-mesh-mgmt-server service. The gloo-mesh-agent deployment in each workload cluster accesses this address via a secure connection.

      export MGMT_SERVER_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
    export MGMT_SERVER_NETWORKING_PORT=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
    export MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
    echo $MGMT_SERVER_NETWORKING_ADDRESS
      

  10. Save the external address and port that your cloud provider assigned to the Gloo OpenTelemetry (OTel) gateway service. The OTel collector agents in each workload cluster send metrics to this address.

      export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
    export TELEMETRY_GATEWAY_PORT=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
    export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
    echo $TELEMETRY_GATEWAY_ADDRESS
      

Data plane

Register each workload cluster with the Gloo management plane by deploying Gloo data plane components. A deployment named gloo-mesh-agent runs the Gloo agent in each workload cluster.

  1. For the workload cluster that you want to register with Gloo, set the following environment variables. You update these variables each time you follow these steps to register another workload cluster.

      export REMOTE_CLUSTER=<workload_cluster_name>
    export REMOTE_CONTEXT=<workload_cluster_context>
      
  2. In the management cluster, create a KubernetesCluster resource to represent the workload cluster and store relevant data, such as the workload cluster’s local domain.

      kubectl apply --context $MGMT_CONTEXT -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: KubernetesCluster
    metadata:
       name: ${REMOTE_CLUSTER}
       namespace: gloo-mesh
    spec:
       clusterDomain: cluster.local
    EOF
      
  3. In your workload cluster, apply the Gloo CRDs.

      helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
       --namespace=gloo-mesh \
       --create-namespace \
       --version=$GLOO_VERSION \
       --set installEnterpriseCrds=false \
       --kube-context $REMOTE_CONTEXT
      
  4. Prepare a Helm values file to provide your customizations. To get started, you can use the minimum settings in the following profile as a basis. These settings enable all components that are required for a Gloo data plane installation.
      curl https://storage.googleapis.com/gloo-platform/helm-profiles/$GLOO_VERSION/gloo-core-agent.yaml > data-plane.yaml
    open data-plane.yaml
      
  5. Depending on the method you chose to secure the relay connection, prepare the Helm values for the data plane installation. For more information, see the Setup options.

  6. Edit the file to provide your own details for settings that are recommended for production deployments, such as the following settings.

    Field | Description
    glooAgent.resources.limits | Add resource limits for the gloo-mesh-agent pod, such as cpu: 500m and memory: 512Mi.
    Cilium observability | Configure the following settings:
    • Cilium metrics: Set glooNetwork.agent.enabled to true to install the Gloo Network-specific agent, which collects additional metrics when Cilium is installed.
    • Cilium logs: To collect Cilium pod logs and view them in the Gloo UI, set telemetryCollectorCustomization.pipelines.logs/cilium_flows.enabled to true. For more information, see Add Cilium flow logs.
    • Cilium insights: To generate insights that analyze your Cilium setup’s health, set telemetryCollectorCustomization.pipelines.metrics/cilium.enabled to true. This pipeline uses the default filter/cilium processor that collects all Cilium and Hubble metrics. Note that if you customize the Cilium metrics collection to reduce the number of metrics that are collected, not all Cilium insights can be generated.
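
    For example, the following snippet is a minimal sketch that sets the agent resource limits in data-plane.yaml. The limit values are illustrative assumptions only; the Cilium observability settings from this table are passed as --set flags in the next step.

      # Illustrative overrides only; adjust for your environment
      glooAgent:
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
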
  7. Use the customizations in your Helm values file to install the Gloo data plane components in your workload cluster.

      helm upgrade -i gloo-platform gloo-platform/gloo-platform \
        --kube-context $REMOTE_CONTEXT \
        -n gloo-mesh \
        --version $GLOO_VERSION \
        --values data-plane.yaml \
        --set common.cluster=$REMOTE_CLUSTER \
        --set glooAgent.relay.serverAddress=$MGMT_SERVER_NETWORKING_ADDRESS \
        --set glooNetwork.agent.enabled=true \
        --set telemetryCollector.config.exporters.otlp.endpoint=$TELEMETRY_GATEWAY_ADDRESS \
        --set telemetryCollectorCustomization.pipelines.logs/cilium_flows.enabled=true \
        --set telemetryCollectorCustomization.pipelines.metrics/cilium.enabled=true
      
  8. Verify that the Gloo data plane component pods are running. If not, try debugging the agent.

      meshctl check --kubecontext $REMOTE_CONTEXT
      

    Example output:

      🟢 Gloo deployment status
    
    Namespace | Name                           | Ready | Status
    gloo-mesh | gloo-mesh-agent                | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
      
  9. Repeat steps 1 - 8 to register each workload cluster with Gloo.

  10. Verify that your Gloo Network setup is correctly installed. If not, try debugging the relay connection. Note that this check might take a few seconds to verify that:

    • Your Gloo product licenses are valid and current.
    • The Gloo CRDs are installed at the correct version.
    • The management plane pods in the management cluster are running and healthy.
    • The agents in the workload clusters are successfully identified by the management server.
      meshctl check --kubecontext $MGMT_CONTEXT
      

    Example output:

      🟢 License status
    
    INFO  gloo-network enterprise license expiration is 25 Aug 24 10:38 CDT
    
    🟢 CRD version check
    
    🟢 Gloo deployment status
    
    Namespace | Name                           | Ready | Status
    gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
    gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
    gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
    gloo-mesh | gloo-telemetry-gateway         | 1/1   | Healthy
    gloo-mesh | prometheus-server              | 1/1   | Healthy
    
    🟢 Mgmt server connectivity to workload agents
    
    Cluster  | Registered | Connected Pod                                   
    cluster1 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
    cluster2 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
    
    Connected Pod                                    | Clusters
    gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6 | 2  
      

Next steps

Now that you have Gloo Network for Cilium up and running, check out some of the following resources to learn more about Gloo Network and expand your Cilium capabilities.

Cilium:

Gloo Network:

  • Explore insights to review and improve your setup’s health and security posture.
  • When it’s time to upgrade Gloo Network, see the upgrade guide.

Istio: To also deploy Istio in your environment, check out Gloo Mesh Core or Gloo Mesh Enterprise, which use service mesh lifecycle management to quickly install and manage your service mesh for you.

Help and support: