Gloo Mesh Core deploys alongside your Istio installation in a single-cluster environment, and gives you instant insights into your Istio service mesh through a custom dashboard. You can follow this guide to quickly get started with Gloo Mesh Core. To learn more about the benefits and architecture, see About. To customize your installation with Helm instead, see the advanced installation guide.

Before you begin

  1. Install the following command-line interface (CLI) tools.

    • kubectl, the Kubernetes command line tool. Download the kubectl version that is within one minor version of the Kubernetes clusters you plan to use.
    • meshctl, the Solo command line tool.
        curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.6.0-beta2 sh -
        export PATH=$HOME/.gloo-mesh/bin:$PATH
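
    To confirm that both CLIs are installed, you can check the client versions.

      kubectl version --client
      meshctl version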
        
  2. Create or use an existing Kubernetes cluster, and save the cluster name in an environment variable.

      export CLUSTER_NAME=<cluster_name>
      
  3. Set your Gloo Mesh Core license key as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing. To check your license’s validity, you can run meshctl license check --key $(echo ${GLOO_MESH_CORE_LICENSE_KEY} | base64 -w0).

      export GLOO_MESH_CORE_LICENSE_KEY=<license_key>
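
    For example, to check the validity of the key that you just exported:

      meshctl license check --key $(echo ${GLOO_MESH_CORE_LICENSE_KEY} | base64 -w0)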
      
  4. Create a YAML file with the following values to configure the TLS connection between the Gloo management server and agent. The following example uses My token as the relay identity token value, but you can use any string value. The Gloo agent presents the relay token when it establishes its initial connection to the Gloo management server. Initial trust is established only when the relay identity token that the agent presents matches the relay token that the management server expects; the agent and management server then proceed to establish a simple TLS connection. In a simple TLS setup, only the management server presents a certificate to authenticate its identity. The identity of the agent is not verified.

      cat > values.yaml <<EOF
    glooMgmtServer:
      extraEnvs:
        RELAY_DISABLE_CLIENT_CERTIFICATE_AUTHENTICATION:
          value: "true"
        RELAY_TOKEN:
          value: "My token"
    glooAgent:
      extraEnvs:
        RELAY_DISABLE_SERVER_CERTIFICATE_VALIDATION:
          value: "true"
        RELAY_TOKEN:
          value: "My token"
    EOF
      

Install Gloo Mesh Core

Install all Gloo Mesh Core components in the same cluster as your Istio service mesh.

  1. Install Gloo Mesh Core in your cluster. The install command uses a basic profile to create the gloo-mesh namespace and install the Gloo control and data plane components. For more information, check out the CLI install profiles.
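
    The following sketch shows the documented meshctl install pattern; the profile name (gloo-core-single-cluster) and the licensing.glooMeshCoreLicenseKey value are assumptions to verify against the CLI install profiles reference for your version.

      meshctl install --profiles gloo-core-single-cluster \
        --set common.cluster=$CLUSTER_NAME \
        --set licensing.glooMeshCoreLicenseKey=$GLOO_MESH_CORE_LICENSE_KEY \
        --chart-values-file values.yaml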

  2. Verify that your Gloo Mesh Core setup is correctly installed. If the check fails, try debugging the relay connection. Note that this check might take a few seconds to verify that:

    • Your Gloo product license is valid and current.
    • The Gloo CRDs are installed at the correct version.
    • The management plane pods in the management cluster are running and healthy.
    • The Gloo agent is running and connected to the management server.
      meshctl check
      

    Example output:

    🟢 License status
    
    INFO  gloo-mesh-core enterprise license expiration is 25 Aug 24 10:38 CDT
    INFO  No GraphQL license module found for any product
    
    🟢 CRD version check
    
    🟢 Gloo deployment status
    
    Namespace | Name                           | Ready | Status
    gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
    gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
    gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
    gloo-mesh | prometheus-server              | 1/1   | Healthy
    
    🟢 Mgmt server connectivity to workload agents
    
    Cluster | Registered | Connected Pod                                   
    test    | true       | gloo-mesh/gloo-mesh-mgmt-server-558cddbbd7-rf2hv
    
    Connected Pod                                    | Clusters
    gloo-mesh/gloo-mesh-mgmt-server-558cddbbd7-rf2hv | 1  
      

Deploy Istio

Check whether an Istio control plane already exists in your cluster. If the following command returns istiod and other pods in the istio-system namespace, you already have a service mesh and can skip ahead to deploying a sample app.

  kubectl get pods -n istio-system
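
If no Istio control plane is found, install Istio before you continue. As a minimal sketch, you might install upstream Istio with istioctl and its demo profile; the Solo distribution of Istio is an alternative option.

  istioctl install --set profile=demo -y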
  

Deploy a sample app

To analyze your service mesh with Gloo Mesh Core, be sure to include your services in the mesh.

  • If you already deployed apps that you want to include in the mesh, you can run the following command to label the service namespace for Istio sidecar injection.
      kubectl label ns <namespace> istio-injection=enabled
      
  • If you don’t have any apps yet, you can deploy Bookinfo, the Istio sample app.
    1. Create the bookinfo namespace and label it for Istio injection so that the services become part of the service mesh.
        kubectl create ns bookinfo
        kubectl label ns bookinfo istio-injection=enabled
        
    2. Deploy the Bookinfo app.
        # deploy bookinfo application components for all versions less than v3
        kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.20.1/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app,version notin (v3)'
        # deploy an updated product page with extra container utilities such as 'curl' and 'netcat'
        kubectl -n bookinfo apply -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/policy-demo/productpage-with-curl.yaml
        # deploy all bookinfo service accounts
        kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.20.1/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account'
        
    3. Verify that the Bookinfo app deployed successfully.
        kubectl get pods -n bookinfo
        kubectl get svc -n bookinfo
        

Explore the UI

Use the Gloo UI to evaluate the health and efficiency of your service mesh. You can review the analysis and insights for your service mesh, such as recommendations to harden your Istio environment and steps to implement them.

Launch the dashboard

  1. Open the Gloo UI. The Gloo UI is served from the gloo-mesh-ui service on port 8090. You can connect by using the meshctl or kubectl CLIs.
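
    For example, with meshctl:

      meshctl dashboard

    With kubectl, port-forward the gloo-mesh-ui service on 8090, and then open http://localhost:8090 in your browser.

      kubectl port-forward -n gloo-mesh svc/gloo-mesh-ui 8090:8090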

  2. Review your Dashboard for an at-a-glance overview of your Gloo Mesh Core environment. Environment insights, health, status, inventories, security, and more are summarized in the following cards:

    • Analysis and Insights: Gloo Mesh Core recommendations for how to improve your Istio setup, and if installed, your Cilium setup.
    • Gloo, Istio, and Cilium health: A status check of the Gloo Mesh Core, Istio, and if installed, Cilium installations in your cluster.
    • Certificates Expiry: Validity timelines for your root and intermediate Istio certificates.
    • Cluster Services: Inventory of services in your Gloo Mesh Core setup, and whether those services are in a service mesh or not.
    • Istio FIPS: FIPS compliance checks for the istiod control plane and Istio data plane workloads.
    • Zero Trust: Number of service mesh workloads that receive only mutual TLS (mTLS)-encrypted traffic, and number of external services that are accessed from the mesh.


    Figure: Gloo UI dashboard

  3. If you installed the Cilium CNI, click the Cilium tab in the Gloo, Istio, and Cilium health card. Verify that your Solo distribution of Cilium was discovered.

    Figure: Cilium installation health card in the Gloo UI

Check insights

Review the insights for your environment. Gloo Mesh Core comes with an insights engine that automatically analyzes your Istio and Cilium setups for health issues. Then, Gloo shares these issues along with recommendations to harden your Istio and Cilium setups. The insights give you a checklist to address issues that might otherwise be hard to detect across your environment.

  1. On the Analysis and Insights card of the dashboard, you can quickly see a summary of the insights for your environment, including how many insights are available at each severity level, and the type of insight.

    Figure: Insights and analysis card

  2. View the list of insights by clicking the Details button, or go to the Insights page.

  3. On the Insights page, you can view recommendations to harden your Istio and, if installed, Cilium setups, along with steps to implement them in your environment. Gloo Mesh Core analyzes your setup and returns individual insights that contain information about errors and warnings in your environment, best practices that you can use to improve your configuration and security, and more.

    Figure: Insights page

  4. On an insight that you want to resolve, click Details. The details modal shows more data about the insight, such as the time when it was last observed in your environment, and if applicable, the extended settings or configuration that the insight applies to.

    Figure: Example insight

  5. Click the Target YAML tab to see the resource file that the insight references, and click the View Resolution Steps tab to see guidance such as steps for fixing warnings and errors in your resource configuration or recommendations for improving your security and setup.

Optional: Apply a Cilium network policy

If you installed the Solo distribution of the Cilium CNI, deploy a demo app to visualize Cilium network traffic in the Gloo UI, and try out a Cilium network policy to secure and control traffic flows between app microservices.

  1. Deploy the Cilium Star Wars demo app in your cluster.

    1. Create a namespace for the demo app, and include the starwars services in your service mesh.

        kubectl create ns starwars
        kubectl label ns starwars istio-injection=enabled
        
    2. Deploy the demo app, which includes tiefighter, xwing, and deathstar pods, and a deathstar service. The tiefighter and deathstar pods have the org=empire label, and the xwing pod has the org=alliance label.

        kubectl -n starwars apply -f https://raw.githubusercontent.com/cilium/cilium/$CILIUM_VERSION/examples/minikube/http-sw-app.yaml
        
    3. Verify that the demo pods and service are running.

        kubectl get pods,svc -n starwars
        

      Example output:

      NAME                             READY   STATUS    RESTARTS   AGE
      pod/deathstar-6fb5694d48-5hmds   1/1     Running   0          107s
      pod/deathstar-6fb5694d48-fhf65   1/1     Running   0          107s
      pod/tiefighter                   1/1     Running   0          107s
      pod/xwing                        1/1     Running   0          107s
      
      NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
      service/deathstar    ClusterIP   10.96.110.8   <none>        80/TCP    107s
        
  2. Generate some network traffic by sending requests from the xwing and tiefighter pods to the deathstar service.

      kubectl exec xwing -n starwars -- curl -s -XPOST deathstar.starwars.svc.cluster.local/v1/request-landing
      kubectl exec tiefighter -n starwars -- curl -s -XPOST deathstar.starwars.svc.cluster.local/v1/request-landing
      

    Example output for both commands:

      Ship landed
      Ship landed
      
  3. View network traffic information in the Gloo UI.

    1. Open the Gloo UI.

      • meshctl:
          meshctl dashboard
          
      • kubectl:
        1. Port-forward the gloo-mesh-ui service on 8090.
            kubectl port-forward -n gloo-mesh svc/gloo-mesh-ui 8090:8090
            
        2. Open your browser and connect to http://localhost:8090.
    2. From the left-hand navigation, click Observability > Graph.

    3. View the network graph for the Star Wars app. The graph is automatically generated based on which apps talk to each other.

      Figure: Gloo UI network graph for the Star Wars app

  4. Create a Cilium network policy that allows only apps that have the org=empire label to access the deathstar app. After you create this access policy, only the tiefighter pod can access the deathstar app.

      kubectl apply -f - << EOF
    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "rule1"
      namespace: starwars
    spec:
      description: "L3-L4 policy to restrict deathstar access to empire ships only"
      endpointSelector:
        matchLabels:
          org: empire
          class: deathstar
      ingress:
      - fromEndpoints:
        - matchLabels:
            org: empire
        toPorts:
        - ports:
          - port: "80"
            protocol: TCP
    EOF
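
    Optionally verify that the policy was created. The cnp short name refers to the CiliumNetworkPolicy resource.

      kubectl get cnp rule1 -n starwars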
      
  5. Send another request from the tiefighter pod to the deathstar service.

      kubectl exec tiefighter -n starwars -- curl -s -XPOST deathstar.starwars.svc.cluster.local/v1/request-landing
      

    The request succeeds, because requests from apps with the org=empire label are permitted. Example output:

      Ship landed
      
  6. Send another request from the xwing pod to the deathstar service.

      kubectl exec xwing -n starwars -- curl -s -XPOST deathstar.starwars.svc.cluster.local/v1/request-landing
      

    This request hangs, because requests from apps without the org=empire label are not permitted. No Layer 7 HTTP response code is returned, because the request is dropped at Layer 3. You can press ctrl+c to stop the curl request, or wait for it to time out.
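
    To bound the wait instead of canceling manually, you can add curl's --max-time flag, which gives up after the specified number of seconds.

      kubectl exec xwing -n starwars -- curl -s --max-time 5 -XPOST deathstar.starwars.svc.cluster.local/v1/request-landing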

  7. You can also check the metrics to verify that the policy allowed or blocked requests.

    1. Open the Prometheus expression browser.
      • meshctl: For more information, see the CLI documentation.
          meshctl proxy prometheus
          
      • kubectl:
        1. Port-forward the prometheus-server deployment on 9091.
            kubectl -n gloo-mesh port-forward deploy/prometheus-server 9091
            
        2. Open your browser and connect to localhost:9091/.
    2. In the Expression search bar, paste the following query and click Execute.
        rate(hubble_drop_total{destination_workload_id=~"deathstar.+"}[5m])
        
    3. Verify that you can see requests from the xwing pod to the deathstar service that were dropped because of the network policy.
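
    4. Optional: Run the same query against the Prometheus HTTP API from the command line. This sketch assumes that the port-forward to localhost:9091 from the previous steps is still active.

        curl -s 'http://localhost:9091/api/v1/query' --data-urlencode 'query=rate(hubble_drop_total{destination_workload_id=~"deathstar.+"}[5m])'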

Next steps

Now that you have Gloo Mesh Core and Istio up and running, check out some of the following resources to learn more about Gloo Mesh Core and expand your service mesh capabilities.

Gloo Mesh Core:

  • Customize your Gloo Mesh Core installation with a Helm-based setup.
  • Explore insights to review and improve your setup’s health and security posture.
  • When it’s time to upgrade Gloo Mesh Core, see the upgrade guide.

Cleanup

  1. If you installed the Solo distribution of the Cilium CNI, remove the demo app resources and namespace.

      kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/$CILIUM_VERSION/examples/minikube/http-sw-app.yaml -n starwars
      kubectl delete cnp rule1 -n starwars
      kubectl delete ns starwars

  2. If you no longer need this quick-start Gloo Mesh Core environment, you can follow the steps in the uninstall guide.