Get started

Learn how to install Gloo Network in your Kubernetes cluster and use Gloo Network access policies to control traffic between your microservices.

Before you begin

  1. Install the following CLI tools:

    • kubectl, the Kubernetes command line tool. Download the kubectl version that is within one minor version of the Kubernetes cluster you plan to use with Gloo Network.
    • helm, the Kubernetes package manager.
    • cilium, the Cilium command line tool.
    • meshctl, the Gloo command line tool for bootstrapping Gloo Platform, registering clusters, describing configured resources, and more.
      curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.4.1 sh -
      export PATH=$HOME/.gloo-mesh/bin:$PATH
      
  2. Create or use a cluster that meets the Cilium requirements. For example, to try out the Cilium CNI in a Google Kubernetes Engine (GKE) cluster, the cluster must be created with specific node taints. Note: The cluster name must be lowercase, begin with a letter, and contain only alphanumeric characters and hyphens (-).

    1. Open the Cilium documentation and find the cloud provider that you want to use to create your cluster.

    2. Follow the steps of your cloud provider to create a cluster that meets the Cilium requirements.

      The instructions in the Cilium documentation might create a cluster with insufficient CPU and memory resources for Gloo Platform and Gloo Network. Make sure that you use a machine type with at least 2 vCPUs and 8 GB of memory.

      Example to create a cluster in GKE:

      export NAME="$(whoami)-$RANDOM"                                                        
      gcloud container clusters create "${NAME}" \
      --node-taints node.cilium.io/agent-not-ready=true:NoExecute \
      --zone us-west2-a \
      --machine-type e2-standard-2
      gcloud container clusters get-credentials "${NAME}" --zone us-west2-a
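
    You can check a candidate cluster name against the naming rules in the note above before running any cloud provider commands. This is a small local sketch written for this guide; `valid_cluster_name` is a hypothetical helper, not part of any CLI, and the regex encodes the stated constraints (lowercase, starts with a letter, only letters, digits, and hyphens).

    ```shell
    # Return success only if the name is lowercase, starts with a letter,
    # and contains only letters, digits, and hyphens.
    valid_cluster_name() {
      printf '%s\n' "$1" | grep -Eq '^[a-z][a-z0-9-]*$'
    }

    valid_cluster_name "gloo-demo-1" && echo "gloo-demo-1: ok"
    valid_cluster_name "1-bad-name"  || echo "1-bad-name: invalid"
    valid_cluster_name "Bad_Name"    || echo "Bad_Name: invalid"
    ```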
      
  3. Create environment variables for the following details:

    • For GLOO_NETWORK_LICENSE_KEY, use your Gloo Network license key that you got from your Solo account representative. If you do not have a key yet, you can get a trial license by contacting an account representative.
    • For SOLO_CILIUM_REPO, use a Solo.io Cilium image repo key that you can get by logging in to the Support Center and reviewing the Cilium images built by Solo.io support article.
    • For CILIUM_VERSION, save the Cilium version that you want to install, such as 1.13.6.
    • For CLUSTER_NAME, save the name of your cluster.
    • For GLOO_VERSION, set the Gloo Platform version that you want to install, such as 2.4.1.
    export GLOO_NETWORK_LICENSE_KEY=<license_key>
    export SOLO_CILIUM_REPO=<cilium_repo_key>
    export CILIUM_VERSION=1.13.6
    export CLUSTER_NAME=<cluster_name>
    export GLOO_VERSION=2.4.1
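
    As a sanity check before you continue, you can confirm that none of these variables are empty. A minimal bash sketch; `check_vars` is a hypothetical helper written for this guide, and the variable names match the exports above.

    ```shell
    # Print every listed variable that is empty or unset; return non-zero if any is.
    check_vars() {
      local missing=0 v
      for v in "$@"; do
        if [ -z "${!v}" ]; then
          echo "Missing required variable: $v" >&2
          missing=1
        fi
      done
      return "$missing"
    }

    check_vars GLOO_NETWORK_LICENSE_KEY SOLO_CILIUM_REPO CILIUM_VERSION CLUSTER_NAME GLOO_VERSION \
      && echo "All variables set" \
      || echo "Set the missing variables before you continue."
    ```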
    

Install the Gloo Network Cilium CNI

The steps to install the Gloo Network Cilium CNI vary depending on the way you create your cluster. For example, installing the CNI in a kind cluster is different from installing the CNI in a GKE cluster. Make sure to follow the instructions for your cloud environment in the Cilium documentation.

  1. Add and update the Cilium Helm repo.

    helm repo add cilium https://helm.cilium.io/
    helm repo update
    
  2. Install the Gloo Network Cilium CNI in your cluster.

    Depending on your cloud provider, you might need to add more Helm values to this command, as suggested in the Cilium documentation.

    Generic example (add cloud provider-specific values in --set flags below):

    helm install cilium cilium/cilium \
    --namespace kube-system \
    --version $CILIUM_VERSION \
    --set hubble.enabled=true \
    --set hubble.metrics.enabled="{dns:destinationContext=pod;sourceContext=pod,drop:destinationContext=pod;sourceContext=pod,tcp:destinationContext=pod;sourceContext=pod,flow:destinationContext=pod;sourceContext=pod,port-distribution:destinationContext=pod;sourceContext=pod}" \
    --set image.repository=${SOLO_CILIUM_REPO}/cilium \
    --set operator.image.repository=${SOLO_CILIUM_REPO}/operator \
    --set operator.prometheus.enabled=true \
    --set prometheus.enabled=true
    # Add your cloud provider-specific Helm values here as extra --set flags.
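
    If you prefer, the same settings can be kept in a Helm values file instead of repeated --set flags, which is easier to diff and reuse across clusters. This is a sketch of the flags shown above translated to YAML; the file name cilium-values.yaml is arbitrary, and you still substitute your Solo.io Cilium repo key and append your cloud provider-specific values.

    ```yaml
    # cilium-values.yaml -- equivalent of the --set flags above
    hubble:
      enabled: true
      metrics:
        enabled:
          - dns:destinationContext=pod;sourceContext=pod
          - drop:destinationContext=pod;sourceContext=pod
          - tcp:destinationContext=pod;sourceContext=pod
          - flow:destinationContext=pod;sourceContext=pod
          - port-distribution:destinationContext=pod;sourceContext=pod
    image:
      repository: <cilium_repo_key>/cilium
    operator:
      image:
        repository: <cilium_repo_key>/operator
      prometheus:
        enabled: true
    prometheus:
      enabled: true
    # Add cloud provider-specific values here.
    ```

    You would then install with helm install cilium cilium/cilium --namespace kube-system --version $CILIUM_VERSION --values cilium-values.yaml.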
    

    Example for GKE:

    NATIVE_CIDR="$(gcloud container clusters describe "${NAME}" --zone us-west2-a --format 'value(clusterIpv4Cidr)')"
    echo $NATIVE_CIDR
       
    helm install cilium cilium/cilium \
    --namespace kube-system \
    --version $CILIUM_VERSION \
    --set hubble.enabled=true \
    --set hubble.metrics.enabled="{dns:destinationContext=pod;sourceContext=pod,drop:destinationContext=pod;sourceContext=pod,tcp:destinationContext=pod;sourceContext=pod,flow:destinationContext=pod;sourceContext=pod,port-distribution:destinationContext=pod;sourceContext=pod}" \
    --set image.repository=${SOLO_CILIUM_REPO}/cilium \
    --set operator.image.repository=${SOLO_CILIUM_REPO}/operator \
    --set operator.prometheus.enabled=true \
    --set prometheus.enabled=true \
    --set nodeinit.enabled=true \
    --set nodeinit.reconfigureKubelet=true \
    --set nodeinit.removeCbrBridge=true \
    --set cni.binPath=/home/kubernetes/bin \
    --set gke.enabled=true \
    --set ipam.mode=kubernetes \
    --set ipv4NativeRoutingCIDR=$NATIVE_CIDR
    

    Example output:

    NAME: cilium
    LAST DEPLOYED: Fri Sep 16 10:31:52 2022
    NAMESPACE: kube-system
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    You have successfully installed Cilium with Hubble.
    
    Your release version is 1.13.6.
    
    For any further help, visit https://docs.cilium.io/en/v1.13/gettinghelp
    
  3. Verify that the Gloo Network Cilium CNI is successfully installed. Because the Cilium agent is deployed as a DaemonSet, the number of cilium and cilium-node-init pods equals the number of nodes in your cluster.

    kubectl get pods -n kube-system | grep cilium
    

    Example output:

    cilium-gbqgq                                                  1/1     Running             0          48s
    cilium-j9n5x                                                  1/1     Running             0          48s
    cilium-node-init-c7rxb                                        1/1     Running             0          48s
    cilium-node-init-pnblb                                        1/1     Running             0          48s
    cilium-node-init-wdtjm                                        1/1     Running             0          48s
    cilium-operator-69dd4567b5-2gjgg                              1/1     Running             0          47s
    cilium-operator-69dd4567b5-ww6wp                              1/1     Running             0          47s
    cilium-smp9c                                                  1/1     Running             0          48s
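
    If you want to script this verification, you can count the agent pods and compare the result to your node count. DaemonSet pod names carry a five-character suffix (cilium-xxxxx), which distinguishes them from the cilium-operator-* and cilium-node-init-* pods. This sketch runs against sample lines from the output above; `count_cilium_agents` is a hypothetical helper, and against a live cluster you would pipe kubectl get pods -n kube-system --no-headers into it instead of the here-document.

    ```shell
    # Count Cilium agent pods: names look like cilium-xxxxx (five-character
    # DaemonSet suffix), which excludes cilium-operator-* and cilium-node-init-*.
    count_cilium_agents() {
      grep -Ec '^cilium-[a-z0-9]{5}[[:space:]]'
    }

    # Sample lines from the example output above; prints 2 for this sample.
    count_cilium_agents <<'EOF'
    cilium-gbqgq                       1/1   Running   0   48s
    cilium-node-init-c7rxb             1/1   Running   0   48s
    cilium-operator-69dd4567b5-2gjgg   1/1   Running   0   47s
    cilium-smp9c                       1/1   Running   0   48s
    EOF
    ```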
    
  4. Check the status of the Cilium installation.

    cilium status --wait
    

    Example output:

       /¯¯\
    /¯¯\__/¯¯\    Cilium:         OK
    \__/¯¯\__/    Operator:       OK
    /¯¯\__/¯¯\    Hubble:         disabled
    \__/¯¯\__/    ClusterMesh:    disabled
       \__/
    
    Deployment        cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
    DaemonSet         cilium             Desired: 3, Ready: 3/3, Available: 3/3
    Containers:       cilium             Running: 3
                      cilium-operator    Running: 2
    Cluster Pods:     10/10 managed by Cilium
    Image versions    cilium             ${SOLO_CILIUM_REPO}/cilium:v1.13.6@sha256:...: 3
                      cilium-operator    ${SOLO_CILIUM_REPO}/operator-generic:v1.13.6@sha256:...: 2
    

Install Gloo Network

Install the Gloo Network components in your cluster and verify that Gloo Network can discover the Cilium CNI.

  1. Install Gloo Network in your cluster.

    meshctl install --profiles gloo-mesh-single,gloo-network \
      --set common.cluster=$CLUSTER_NAME \
      --set istioInstallations.enabled=false \
      --set glooMgmtServer.createGlobalWorkspace=true \
      --version $GLOO_VERSION \
      --set licensing.glooNetworkLicenseKey=$GLOO_NETWORK_LICENSE_KEY
    
  2. Verify that Gloo Network is successfully installed. This check might take a few seconds while it verifies that:

    • Your Gloo Network product license is valid and current.
    • The Gloo Platform CRDs are installed at the correct version.
    • The Gloo Network pods are running and healthy.
    • The Gloo agent is running and connected to the management server.
    meshctl check
    

    Example output:

    🟢 License status
    
     INFO  gloo-network enterprise license expiration is 25 Aug 23 10:38 CDT
     INFO  No GraphQL license module found for any product
    
    🟢 CRD version check
    
    🟢 Gloo Platform deployment status
    
    Namespace | Name                           | Ready | Status 
    gloo-mesh | gloo-mesh-agent                | 1/1   | Healthy
    gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
    gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
    gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-gateway         | 1/1   | Healthy
    gloo-mesh | prometheus-server              | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
    
    🟢 Mgmt server connectivity to workload agents
    
    Cluster  | Registered | Connected Pod                                   
    cluster1 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
    
  3. Open the Gloo UI.

    1. Port-forward the gloo-mesh-ui service on 8090.
      kubectl port-forward -n gloo-mesh svc/gloo-mesh-ui 8090:8090
      
    2. Open your browser and connect to the Gloo UI at http://localhost:8090.
  4. In the Cluster panel on the Overview page, verify that the Gloo Network Cilium version was discovered and is displayed in the card for your cluster.

    Gloo UI cluster panel

Deploy Bookinfo and monitor network traffic

In this Getting Started tutorial, the Bookinfo app is used to show Cilium network traffic metrics in the Gloo UI. For simplicity, the app is installed without Istio sidecars.

  1. Deploy the Bookinfo app in your cluster.

    1. Create a namespace for the Bookinfo app.

      kubectl create ns bookinfo
      
    2. Deploy the Bookinfo app.

      kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.15.3/samples/bookinfo/platform/kube/bookinfo.yaml
      
    3. Verify that the Bookinfo pods are running.

      kubectl get pods -n bookinfo
      

      Example output:

      NAME                              READY   STATUS    RESTARTS   AGE
      details-v1-7d88846999-hk2gm       1/1     Running   0          53s
      productpage-v1-7795568889-9q5ls   1/1     Running   0          51s
      ratings-v1-754f9c4975-mnwln       1/1     Running   0          52s
      reviews-v1-55b668fc65-p5b82       1/1     Running   0          52s
      reviews-v2-858f99c99-58czx        1/1     Running   0          52s
      reviews-v3-7886dd86b9-qp8vg       1/1     Running   0          51s
      
  2. Generate some network traffic by sending requests to the product page app. The product page app sends requests to other microservices.

    Create a temporary curl pod in the bookinfo namespace so that you can test the app setup. This method works on any Kubernetes version. On Kubernetes 1.23 or later, an ephemeral debug container might be simpler, as shown in the alternative that follows these steps.

    1. Create the curl pod.

      kubectl run -it -n bookinfo curl \
        --image=curlimages/curl:7.73.0 --rm  -- sh
      
    2. Send a request to the product page app. You can repeat this request a few times.

      curl http://productpage:9080/productpage -v
      
    3. Exit the temporary pod. The pod deletes itself.

      exit
      

    Alternatively, on Kubernetes 1.23 or later, use the kubectl debug command to create an ephemeral curl container in an existing pod. This way, the curl container inherits any permissions from the app that you want to test. If you don't run Kubernetes 1.23 or later, deploy a separate curl pod as shown in the previous steps.

    kubectl -n bookinfo debug -i pods/$(kubectl get pod -l app=reviews -n bookinfo -o jsonpath='{.items[0].metadata.name}') --image=curlimages/curl -- curl -v http://productpage:9080/productpage
    

  3. View network traffic information in the Gloo UI.

    1. From the left-hand navigation, open the Graph.
    2. Open the Layout settings and toggle Cilium: SHOW.
    3. View the network graph for the Bookinfo app. The graph is automatically generated based on which apps talk to each other. The more requests you send to the product page, the more connections you see in the graph.

       Gloo UI graph

Restrict access with Gloo Network access policies

You can further protect the apps in your cluster by applying access policies. With access policies, you can specify what microservices are allowed to talk to each other.

  1. Create a temporary curl client in your cluster and verify that you can access the reviews app.

    1. Create the curl pod.

      kubectl run -it -n bookinfo curl \
        --image=curlimages/curl:7.73.0 --rm  -- sh
      
    2. Send a request to the reviews app.

      curl http://reviews:9080/reviews/1 -v
      

      Example output:

      < HTTP/1.1 200 OK
      < X-Powered-By: Servlet/3.1
      < Content-Type: application/json
      < Date: Mon, 19 Sep 2022 14:05:16 GMT
      < Content-Language: en-US
      < Content-Length: 358
      < 
      * Connection #0 to host reviews left intact
      {"id": "1","podname": "reviews-v1-55b668fc65-p5b82","clustername": "null","reviews": [{  "reviewer": "Reviewer1",  "text": "An extremely entertaining play by Shakespeare. The slapstick humour is refreshing!"},{  "reviewer": "Reviewer2",  "text": "Absolutely fun and entertaining. The play lacks thematic depth when compared to other plays by Shakespeare."}]}
      
    3. Exit the temporary pod. The pod deletes itself.

      exit
      
  2. Add a curl container inside the product page app and verify that you can access the reviews app.

    1. Create the curl container.

      kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/policy-demo/productpage-with-curl.yaml
      
    2. From the product page pod, send a request to verify that you can access the reviews app.

      kubectl exec $(kubectl get pod -l app=productpage -n bookinfo -o jsonpath='{.items[0].metadata.name}') -n bookinfo -c curl -- curl -v http://reviews:9080/reviews/1
      
  3. Create a Gloo Network access policy that allows only the product page to access the reviews app. After you create this access policy, all other clients, including the temporary curl pod, are not allowed to access the reviews app anymore.

    kubectl apply -f - << EOF
    apiVersion: security.policy.gloo.solo.io/v2
    kind: AccessPolicy
    metadata:
      name: reviews-access
      namespace: bookinfo
    spec:
      applyToDestinations:
      - port:
          number: 9080
        selector:
          labels:
            app: reviews
      config:
        authn:
          tlsMode: STRICT
        authz:
          allowedClients:
          - serviceAccountSelector:
              labels:
                account: productpage
          allowedPaths:
          - /reviews*
    EOF
    
  4. Re-create the temporary curl client in your cluster and verify that you cannot access the reviews app anymore.

    1. Create the curl pod.

      kubectl run -it -n bookinfo curl \
        --image=curlimages/curl:7.73.0 --rm  -- sh
      
    2. Send a request to the reviews app. No response is returned, because the request packets are dropped before they reach the reviews app.

      curl http://reviews:9080/reviews/1 -v
      
    3. Exit the temporary pod. The pod deletes itself.

      exit
      
  5. Send another request from the product page pod and verify that you can still access the reviews app.

    kubectl exec $(kubectl get pod -l app=productpage -n bookinfo -o jsonpath='{.items[0].metadata.name}') -n bookinfo -c curl -- curl -v http://reviews:9080/reviews/1
    

Congratulations! You successfully installed Gloo Network, monitored network traffic, and applied Gloo Network access policies to restrict access in your cluster.

What's next