Set up Gloo Gateway

Start by setting up Gloo Gateway as an API gateway for your cluster.

In the following steps, you install the Gloo Gateway management plane and gateway proxy. When you install the Gloo Gateway management plane, a deployment named gloo-mesh-mgmt-server is created to translate and implement Gloo configurations that you create in other guides, such as controlling traffic with Gloo policies.

The Gloo Gateway management plane also installs an ingress gateway proxy in your cluster named istio-ingressgateway. You can use the Gloo Gateway management plane to fully manage the lifecycle of this gateway proxy.

The instructions in this guide assume that you want to install the Gloo Gateway management plane and data plane in one cluster. To set up the management plane in a dedicated management cluster and the data plane in workload clusters instead, see the Multicluster setup.

  1. Create or use an existing Kubernetes or OpenShift cluster, and save the cluster name in an environment variable. Note: The cluster name must consist of lowercase alphanumeric characters and hyphens (-) only, and must begin with a letter, not a number.

    export CLUSTER_NAME=<cluster_name>
    
  2. Install meshctl, the Gloo command line tool for bootstrapping Gloo Platform, registering clusters, describing configured resources, and more. Be sure to download version 2.5.4, which uses the latest Gloo Gateway installation values.

    curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.5.4 sh -
    export PATH=$HOME/.gloo-mesh/bin:$PATH
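
    To confirm that the CLI is installed and available on your PATH, you can check the client version, for example:

    meshctl version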
    
  3. Install Gloo Gateway in your cluster. This command uses a basic profile to install the management plane components, such as the management server and Prometheus server, and the data plane components, such as the agent, gateway proxy, rate limit server, and external auth server, in your cluster. If you do not have a license key, contact an account representative.

    meshctl install --profiles gloo-gateway-demo \
      --set common.cluster=$CLUSTER_NAME \
      --set licensing.glooGatewayLicenseKey=$GLOO_GATEWAY_LICENSE_KEY
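
    This command assumes that your license key is stored in the GLOO_GATEWAY_LICENSE_KEY environment variable. If you have not set it yet, you can export it first, for example:

    export GLOO_GATEWAY_LICENSE_KEY=<license_key>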
    

    Need to specify more settings? You can create a values.yaml file and include it in your installation command with the --chart-values-file values.yaml flag. For example, you might include cloud provider-specific annotations for the ingress gateway, such as in the following file. A combined install command is shown after the example.

    cat >values.yaml <<EOF
    istioInstallations:
      northSouthGateways:
        - enabled: true
          name: istio-ingressgateway
          installations:
            - clusters:
              - activeGateway: true
                name: $CLUSTER_NAME
              gatewayRevision: auto
              istioOperatorSpec:
                components:
                  ingressGateways:
                  - enabled: true
                    k8s:
                      serviceAnnotations:
                        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
                        service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
                        service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
                        service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
                        service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:<cert>"
                        service.beta.kubernetes.io/aws-load-balancer-type: external
    EOF
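
    With the values file in place, the combined install command might look like the following, reusing the flags from the earlier command:

    meshctl install --profiles gloo-gateway-demo \
      --set common.cluster=$CLUSTER_NAME \
      --set licensing.glooGatewayLicenseKey=$GLOO_GATEWAY_LICENSE_KEY \
      --chart-values-file values.yaml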
    
    For OpenShift clusters, complete the following steps instead of the meshctl install command shown above.

    1. Elevate the permissions for the gloo-mesh service account to mount volumes on the hosts where the telemetry collector agents run. In Gloo Mesh Gateway version 2.4, a new cilium-run volume was added to the Gloo telemetry pipeline configuration to collect Cilium flow logs. For more information about this change, see the 2.4 release notes.
      oc adm policy add-scc-to-group hostmount-anyuid system:serviceaccounts:gloo-mesh
      
    2. Elevate the permissions of the following service accounts that will be created. These permissions allow the ingress gateway proxy to make use of a user ID that is normally restricted by OpenShift. For more information, see the Istio on OpenShift documentation.
      oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-system
      oc adm policy add-scc-to-group anyuid system:serviceaccounts:gloo-mesh-gateways
      oc adm policy add-scc-to-group anyuid system:serviceaccounts:gloo-mesh-addons
      oc adm policy add-scc-to-group anyuid system:serviceaccounts:gm-iop-1-20
      
    3. Create the gloo-mesh-gateways and gloo-mesh-addons projects, and create NetworkAttachmentDefinition custom resources for the projects.
      kubectl create ns gloo-mesh-gateways
      kubectl create ns gloo-mesh-addons
      
      cat <<EOF | oc -n gloo-mesh-gateways create -f -
      apiVersion: "k8s.cni.cncf.io/v1"
      kind: NetworkAttachmentDefinition
      metadata:
        name: istio-cni
      EOF
      
      cat <<EOF | oc -n gloo-mesh-addons create -f -
      apiVersion: "k8s.cni.cncf.io/v1"
      kind: NetworkAttachmentDefinition
      metadata:
        name: istio-cni
      EOF
      
    4. Install Gloo Gateway.
      meshctl install --profiles gloo-gateway-demo-openshift \
        --set common.cluster=$CLUSTER_NAME \
        --set licensing.glooGatewayLicenseKey=$GLOO_GATEWAY_LICENSE_KEY
      

    Note: In OpenShift 4.11 and later, you might see warnings for pods and containers that violate the OpenShift PodSecurity "restricted:v1.24" profile, due to the elevated permissions required by Istio. You can ignore these warnings. For more information, see this article.

  4. Verify that Gloo Gateway is correctly installed. This check might take a few seconds to verify that:

    • Your Gloo Platform product licenses are valid and current.
    • The Gloo Platform CRDs are installed at the correct version.
    • The Gloo Gateway pods are running and healthy.
    • The Gloo agent is running and connected to the management server.
    meshctl check
    

    Example output:

    🟢 License status
    
     INFO  gloo-gateway enterprise license expiration is 25 Aug 24 10:38 CDT
     INFO  Valid GraphQL license module found
    
    🟢 CRD version check
    
    🟢 Gloo deployment status
    
    Namespace        | Name                           | Ready | Status 
    gloo-mesh        | gloo-mesh-agent                | 1/1   | Healthy
    gloo-mesh        | gloo-mesh-mgmt-server          | 1/1   | Healthy
    gloo-mesh        | gloo-mesh-redis                | 1/1   | Healthy
    gloo-mesh        | gloo-mesh-ui                   | 1/1   | Healthy
    gloo-mesh        | prometheus-server              | 1/1   | Healthy
    gloo-mesh-addons | ext-auth-service               | 1/1   | Healthy
    gloo-mesh-addons | rate-limiter                   | 1/1   | Healthy
    gloo-mesh-addons | redis                          | 1/1   | Healthy
    gloo-mesh        | gloo-telemetry-collector-agent | 3/3   | Healthy
    
    🟢 Mgmt server connectivity to workload agents
    
    Cluster  | Registered | Connected Pod                                   
    cluster1 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
    
  5. Verify that the gateway proxy service is created and assigned an external IP address. It might take a few minutes for the load balancer to deploy.

    kubectl get svc -n gloo-mesh-gateways
    

    Example output:

    NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                                                      AGE
    istio-ingressgateway   LoadBalancer   10.XX.XXX.XXX   35.XXX.XXX.XXX   15021:30826/TCP,80:31257/TCP,443:30673/TCP,15443:30789/TCP   48s
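
    Optionally, you can save the gateway address in an environment variable for later use. The variable name in this sketch is just an example, and the command assumes that your load balancer exposes an IP address; if your cloud provider assigns a hostname instead, read the hostname field of the same jsonpath.

    export INGRESS_GW_IP=$(kubectl get svc istio-ingressgateway -n gloo-mesh-gateways -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    echo $INGRESS_GW_IP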
    
  6. Optional: Check out the workspace and workspace settings that were created for you. Workspaces help you organize team resources in your cluster and isolate Kubernetes and Gloo Gateway resources. Because the default workspace is used for demonstration purposes, it does not isolate any resources; instead, it includes all Kubernetes and Gloo Gateway resources in your cluster in the workspace.

    kubectl get workspace $CLUSTER_NAME -n gloo-mesh -o yaml
    
    kubectl get workspacesettings default -n gloo-mesh -o yaml
    
  7. Optional for OpenShift: Expose the ingress gateway by using an OpenShift route.

    oc -n gloo-mesh-gateways expose svc istio-ingressgateway --port=http2
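
    To verify that the route was created and to see its assigned host, you can run, for example:

    oc -n gloo-mesh-gateways get route istio-ingressgateway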
    

Next

Deploy sample apps to try out the routing capabilities and traffic policies in Gloo Gateway.

Understand what happened

Find out more information about the Gloo Gateway environment that you set up in this guide.

Gloo Gateway installation: This quick start guide used meshctl to install a minimum deployment of Gloo Gateway for testing purposes, and some optional components are not installed. To learn more about production-level installation options, including advanced configuration options available in the Gloo Gateway Helm chart, see the Setup guide.

Management server and agent: When you installed the Gloo Gateway management plane, a deployment named gloo-mesh-mgmt-server was created to translate and implement your Gloo configurations. Because the glooAgent.enabled: true setting is included in the gloo-gateway install profile, the cluster was also registered to be managed by Gloo. The deployment named gloo-mesh-agent was created to run the Gloo agent as part of the Gloo Gateway data plane.
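
A quick way to see both components in your cluster is to list their deployments, using the deployment names described above, for example:

    kubectl get deployments -n gloo-mesh gloo-mesh-mgmt-server gloo-mesh-agent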

Relay architecture: In a multicluster setup, the Gloo agent discovers Gloo and Kubernetes resources, such as deployments and services, and sends snapshots of them to the management server for translation and implementation. However, in a single cluster setup, your resources are written directly to the cluster without relay. For more information about relay server-agent communication, see the relay architecture page.

Gateway proxy installation: The gateway proxy installation profiles in this getting started guide were provided within the Gloo Gateway installation Helm chart. However, Gloo Gateway can discover gateway deployments regardless of their installation options. To manually install gateway proxies, see the advanced configuration guides.

Gloo workspace: Gloo workspaces let you organize team resources across Kubernetes namespaces and clusters. In this example, a single workspace is created for everything. Later, as your teams grow, you can create a workspace for each team, to enforce service isolation, set up federation, and even share resources by importing and exporting. You can also change the default workspace by following the Workspace setup guide.