# Install Gloo Network

Install Gloo Network, including the Cilium CNI and the Gloo Platform management components, with Helm in a Kubernetes cluster. Review the Gloo Network architecture for more information.
## Before you begin
- Install the following CLI tools:
  - `kubectl`, the Kubernetes command line tool. Download the `kubectl` version that is within one minor version of the Kubernetes cluster that you plan to use with Gloo Network.
  - `helm`, the Kubernetes package manager.
  - `cilium`, the Cilium command line tool.
  - `meshctl`, the Gloo command line tool for bootstrapping Gloo Platform, registering clusters, describing configured resources, and more.

    ```shell
    curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.4.1 sh -
    export PATH=$HOME/.gloo-mesh/bin:$PATH
    ```
- Create or use a cluster that meets the Cilium requirements. For example, to try out the Cilium CNI in a Google Kubernetes Engine (GKE) cluster, you must create the cluster with specific node taints.

  **Note**: The cluster name must be lowercase, alphanumeric with no special characters other than a hyphen (-), and must begin with a letter (not a number).

  - Open the Cilium documentation and find the cloud provider that you want to use to create your cluster.
  - Follow the steps of your cloud provider to create a cluster that meets the Cilium requirements.

    The instructions in the Cilium documentation might create a cluster with insufficient CPU and memory resources for Gloo Platform and Gloo Network. Make sure that you use a machine type with at least 2 vCPUs and 8 GB of memory.

    Example to create a cluster in GKE:

    ```shell
    export NAME="$(whoami)-$RANDOM"
    gcloud container clusters create "${NAME}" \
      --node-taints node.cilium.io/agent-not-ready=true:NoExecute \
      --zone us-west2-a \
      --machine-type e2-standard-2
    gcloud container clusters get-credentials "${NAME}" --zone us-west2-a
    ```
- Create environment variables for the following details:
  - For `GLOO_NETWORK_LICENSE_KEY`, use the Gloo Network license key that you got from your Solo account representative. If you do not have a key yet, you can get a trial license by contacting an account representative.
  - For `SOLO_CILIUM_REPO`, use a Solo.io Cilium image repo key that you can get by logging in to the Support Center and reviewing the "Cilium images built by Solo.io" support article.
  - For `CILIUM_VERSION`, save the Cilium version that you want to install, such as `1.13.6`.
  - For `CLUSTER_NAME`, save the name of your cluster.
  - For `GLOO_VERSION`, set the Gloo Platform version that you want to install, such as `2.4.1`.

  ```shell
  export GLOO_NETWORK_LICENSE_KEY=<license_key>
  export SOLO_CILIUM_REPO=<cilium_repo_key>
  export CILIUM_VERSION=1.13.6
  export CLUSTER_NAME=<cluster_name>
  export GLOO_VERSION=2.4.1
  ```
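As a quick sanity check for the cluster name requirement noted above, you can validate a candidate name with a short shell test. This is just an illustrative sketch; the name `demo-cluster-1` is a made-up example, not a value from this guide.

```shell
# Check that a candidate cluster name is lowercase, alphanumeric plus
# hyphens, and begins with a letter (demo-cluster-1 is an illustrative name).
NAME="demo-cluster-1"
if printf '%s' "$NAME" | grep -Eq '^[a-z][a-z0-9-]*$'; then
  echo "valid cluster name"
else
  echo "invalid cluster name"
fi
```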
## Install the Gloo Network Cilium CNI
The steps to install the Gloo Network Cilium CNI vary depending on the way you create your cluster. For example, installing the CNI in a kind cluster is different from installing the CNI in a GKE cluster. Make sure to follow the instructions for your cloud environment in the Cilium documentation.
- Add and update the Cilium Helm repo.

  ```shell
  helm repo add cilium https://helm.cilium.io/
  helm repo update
  ```
- Install the Gloo Network Cilium CNI in your cluster.

  Depending on the cloud provider that you use, you must update this command to add the additional Helm values that the Cilium documentation suggests for that provider.

  Generic example (add cloud provider-specific values in `--set` flags below):

  ```shell
  helm install cilium cilium/cilium \
    --namespace kube-system \
    --version $CILIUM_VERSION \
    --set hubble.enabled=true \
    --set hubble.metrics.enabled="{dns:destinationContext=pod;sourceContext=pod,drop:destinationContext=pod;sourceContext=pod,tcp:destinationContext=pod;sourceContext=pod,flow:destinationContext=pod;sourceContext=pod,port-distribution:destinationContext=pod;sourceContext=pod}" \
    --set image.repository=${SOLO_CILIUM_REPO}/cilium \
    --set operator.image.repository=${SOLO_CILIUM_REPO}/operator \
    --set operator.prometheus.enabled=true \
    --set prometheus.enabled=true \
    # ADD CLOUD PROVIDER-SPECIFIC HELM VALUES HERE
  ```
  Example for GKE:

  ```shell
  NATIVE_CIDR="$(gcloud container clusters describe "${NAME}" --zone "${ZONE}" --format 'value(clusterIpv4Cidr)')"
  echo $NATIVE_CIDR

  helm install cilium cilium/cilium \
    --namespace kube-system \
    --version $CILIUM_VERSION \
    --set hubble.enabled=true \
    --set hubble.metrics.enabled="{dns:destinationContext=pod;sourceContext=pod,drop:destinationContext=pod;sourceContext=pod,tcp:destinationContext=pod;sourceContext=pod,flow:destinationContext=pod;sourceContext=pod,port-distribution:destinationContext=pod;sourceContext=pod}" \
    --set image.repository=${SOLO_CILIUM_REPO}/cilium \
    --set operator.image.repository=${SOLO_CILIUM_REPO}/operator \
    --set operator.prometheus.enabled=true \
    --set prometheus.enabled=true \
    --set nodeinit.enabled=true \
    --set nodeinit.reconfigureKubelet=true \
    --set nodeinit.removeCbrBridge=true \
    --set cni.binPath=/home/kubernetes/bin \
    --set gke.enabled=true \
    --set ipam.mode=kubernetes \
    --set ipv4NativeRoutingCIDR=$NATIVE_CIDR
  ```
  Example output:

  ```
  NAME: cilium
  LAST DEPLOYED: Fri Sep 16 10:31:52 2022
  NAMESPACE: kube-system
  STATUS: deployed
  REVISION: 1
  TEST SUITE: None
  NOTES: You have successfully installed Cilium with Hubble.

  Your release version is 1.13.6.

  For any further help, visit https://docs.cilium.io/en/v1.12/gettinghelp
  ```
- Verify that the Gloo Network Cilium CNI is successfully installed. Because the Cilium agent is deployed as a daemon set, the number of Cilium and Cilium node init pods equals the number of nodes in your cluster.

  ```shell
  kubectl get pods -n kube-system | grep cilium
  ```

  Example output:

  ```
  cilium-gbqgq                       1/1     Running   0          48s
  cilium-j9n5x                       1/1     Running   0          48s
  cilium-node-init-c7rxb             1/1     Running   0          48s
  cilium-node-init-pnblb             1/1     Running   0          48s
  cilium-node-init-wdtjm             1/1     Running   0          48s
  cilium-operator-69dd4567b5-2gjgg   1/1     Running   0          47s
  cilium-operator-69dd4567b5-ww6wp   1/1     Running   0          47s
  cilium-smp9c                       1/1     Running   0          48s
  ```
- Check the status of the Cilium installation.

  ```shell
  cilium status --wait
  ```

  Example output:

  ```
     /¯¯\
  /¯¯\__/¯¯\    Cilium:          OK
  \__/¯¯\__/    Operator:        OK
  /¯¯\__/¯¯\    Hubble:          disabled
  \__/¯¯\__/    ClusterMesh:     disabled
     \__/

  Deployment        cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
  DaemonSet         cilium             Desired: 3, Ready: 3/3, Available: 3/3
  Containers:       cilium             Running: 3
                    cilium-operator    Running: 2
  Cluster Pods:     10/10 managed by Cilium
  Image versions    cilium             ${SOLO_CILIUM_REPO}/cilium:v1.12.1@sha256:...: 3
                    cilium-operator    ${SOLO_CILIUM_REPO}/operator-generic:v1.12.1@sha256:...: 2
  ```
## Install Gloo Network

Install the Gloo Network components in your cluster. The following steps install the necessary Gloo components, such as the Gloo management server, agent, and Prometheus server, and then configure the agent for Gloo Network.
Installing with Argo CD on a GKE Dataplane V2 cluster? Add the following exclusions to your `argocd-cm` configmap to ensure that the `CiliumIdentity` for the Gloo OpenTelemetry metrics pipeline is not managed by Argo.

```yaml
resource.exclusions: |
  - apiGroups:
    - cilium.io
    kinds:
    - CiliumIdentity
    clusters:
    - "*"
```
- Install the Gloo Platform management components in your cluster.

  ```shell
  meshctl install --profiles gloo-mesh-single,gloo-network \
    --set common.cluster=$CLUSTER_NAME \
    --set istioInstallations.enabled=false \
    --set glooMgmtServer.createGlobalWorkspace=true \
    --version $GLOO_VERSION \
    --set licensing.glooNetworkLicenseKey=$GLOO_NETWORK_LICENSE_KEY
  ```
- Verify that Gloo Network is successfully installed. This check might take a few seconds to verify that:
  - Your Gloo Network product license is valid and current.
  - The Gloo Platform CRDs are installed at the correct version.
  - The Gloo Network pods are running and healthy.
  - The Gloo agent is running and connected to the management server.

  ```shell
  meshctl check
  ```

  Example output:

  ```
  🟢 License status

  INFO  gloo-network enterprise license expiration is 25 Aug 23 10:38 CDT
  INFO  No GraphQL license module found for any product

  🟢 CRD version check

  🟢 Gloo Platform deployment status

  Namespace | Name                           | Ready | Status
  gloo-mesh | gloo-mesh-agent                | 1/1   | Healthy
  gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
  gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
  gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
  gloo-mesh | gloo-telemetry-gateway         | 1/1   | Healthy
  gloo-mesh | prometheus-server              | 1/1   | Healthy
  gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy

  🟢 Mgmt server connectivity to workload agents

  Cluster  | Registered | Connected Pod
  cluster1 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
  ```
- Open the Gloo UI.
  - Port-forward the `gloo-mesh-ui` service on port 8090.

    ```shell
    kubectl port-forward -n gloo-mesh svc/gloo-mesh-ui 8090:8090
    ```

  - Open your browser and connect to the Gloo UI at http://localhost:8090.
- In the Cluster panel on the Overview page, verify that the Gloo Network Cilium version was discovered and is displayed in the card for your cluster.
## What's next?

Now that you have installed Gloo Network, you can choose from the following options:
- Use Gloo network policies to control what services in your cluster can communicate with each other.
- Enable additional Cilium metrics and network flow logs to monitor network traffic in your cluster.
- Import the Cilium Grafana dashboard to monitor the health of your Cilium CNI.
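As a preview of the network policy option, the following is a minimal sketch of a standard `CiliumNetworkPolicy` that restricts service-to-service traffic. The `app=frontend` and `app=backend` labels, the policy name, and the `default` namespace are hypothetical examples for illustration, not values from this guide; Gloo network policies provide their own API on top of Cilium, so refer to the Gloo network policy documentation for the exact resource to use.

```yaml
# Illustrative Cilium policy: allow only pods labeled app=frontend to reach
# pods labeled app=backend on TCP port 8080; all other ingress is denied.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-backend   # hypothetical name
  namespace: default                # hypothetical namespace
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
```

You can apply a policy like this with `kubectl apply -f <file>.yaml` and then confirm it is enforced by checking that traffic from an unlabeled pod to the backend is dropped.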