About AWS Elastic Load Balancers (ELBs)

Gloo Gateway is an application (L7) proxy based on Envoy and the Kubernetes Gateway API that can act both as a secure edge router and as a developer-friendly Kubernetes ingress/egress (north-south traffic) gateway. Pairing Gloo Gateway with an AWS Elastic Load Balancer (ELB) brings several benefits, including better cross-availability-zone failover and deeper integration with AWS services such as AWS Certificate Manager, the AWS CLI and CloudFormation, and Route 53 (DNS).

AWS provides the following types of ELBs:

  • Network Load Balancer (NLB): An optimized L4 TCP/UDP load balancer that can handle very high throughput (millions of requests per second) while maintaining low latency. This load balancer also has deep integration with other AWS services like Route 53 (DNS).
  • Application Load Balancer (ALB): An L7 HTTP-only load balancer that is focused on providing HTTP request routing capabilities.

AWS NLB vs. ALB

In general, it is recommended to pair a Gloo Gateway proxy with an AWS NLB. Because the NLB operates at L4, application (L7) processing stays with the gateway proxy, which gives you more flexibility than an AWS ALB. For example, you can configure the NLB for TLS passthrough and terminate TLS traffic on the gateway proxy. Alternatively, you can terminate TLS at the NLB and configure the NLB with a certificate that secures the connection from the NLB to the gateway proxy.
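
For example, if the gateway proxy is exposed through a Kubernetes Service of type LoadBalancer, the AWS Load Balancer Controller provisions an NLB based on Service annotations. The following commands are a minimal sketch for illustration only: the Service name gloo-proxy-http is a placeholder for your gateway proxy Service, and the certificate ARN must be replaced with a certificate from AWS Certificate Manager. Omit the ssl-cert and ssl-ports annotations if you want TLS passthrough to the gateway proxy instead of TLS termination at the NLB.

    # Provision an internet-facing NLB for the gateway proxy Service (placeholder name)
    kubectl annotate service gloo-proxy-http -n gloo-system \
      service.beta.kubernetes.io/aws-load-balancer-type="external" \
      service.beta.kubernetes.io/aws-load-balancer-scheme="internet-facing" \
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type="instance"

    # Optional: terminate TLS at the NLB with an AWS Certificate Manager certificate
    kubectl annotate service gloo-proxy-http -n gloo-system \
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert="arn:aws:acm:<region>:<account-id>:certificate/<certificate-id>" \
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports="443"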

ALBs, on the other hand, are useful if you want to apply AWS WAF policies. Because TLS traffic is terminated at the ALB, you are responsible for securing the connection from the ALB to the Gloo Gateway proxy.
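
For example, the AWS Load Balancer Controller can associate an AWS WAFv2 web ACL with the ALB through an Ingress annotation. The following command is a sketch only: it targets the alb Ingress in the gloo-system namespace that you create later in this guide, and the web ACL ARN is a placeholder.

    kubectl annotate ingress alb -n gloo-system \
      alb.ingress.kubernetes.io/wafv2-acl-arn="arn:aws:wafv2:<region>:<account-id>:regional/webacl/<name>/<id>"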

About this guide

In this guide you explore how to expose the Gloo Gateway proxy with an AWS ALB.

Before you begin

  1. Create or use an existing AWS account.
  2. Follow the Get started guide to install Gloo Gateway and deploy the httpbin sample app. You do not need to set up a Gateway, because you create a custom Gateway as part of this guide.
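  3. Optionally, verify that the Gloo Gateway control plane and the httpbin sample app are up and running. This check is a sketch that assumes the default namespaces from the Get started guide, gloo-system and httpbin.

    kubectl get pods -n gloo-system
    kubectl get pods -n httpbin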

Step 1: Deploy the AWS Load Balancer Controller

  1. Save the name and region of your AWS EKS cluster and your AWS account ID in environment variables.

    export CLUSTER_NAME="<cluster-name>"
    export REGION="<region>"
    export AWS_ACCOUNT_ID="<aws-account-ID>"
    export IAM_POLICY_NAME=AWSLoadBalancerControllerIAMPolicyNew
    export IAM_SA=aws-load-balancer-controller
      
  2. Create an AWS IAM policy and bind it to a Kubernetes service account.

    # Set up an IAM OIDC provider for a cluster to enable IAM roles for pods
    eksctl utils associate-iam-oidc-provider \
     --region ${REGION} \
     --cluster ${CLUSTER_NAME} \
     --approve
    
    # Fetch the IAM policy that is required for the Kubernetes service account
    curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.3/docs/install/iam_policy.json
    
    # Create the IAM policy
    aws iam create-policy \
     --policy-name ${IAM_POLICY_NAME} \
     --policy-document file://iam-policy.json
    
    # Create the Kubernetes service account
    eksctl create iamserviceaccount \
     --cluster=${CLUSTER_NAME} \
     --namespace=kube-system \
     --name=${IAM_SA} \
     --attach-policy-arn=arn:aws:iam::${AWS_ACCOUNT_ID}:policy/${IAM_POLICY_NAME} \
     --override-existing-serviceaccounts \
     --approve \
     --region ${REGION}
      
  3. Verify that the service account is created in your cluster.

      kubectl -n kube-system get sa aws-load-balancer-controller -o yaml
      
  4. Deploy the AWS Load Balancer Controller.

    kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller/crds?ref=master"
    
    helm repo add eks https://aws.github.io/eks-charts
    helm repo update
    helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
      -n kube-system \
      --set clusterName=${CLUSTER_NAME} \
      --set serviceAccount.create=false \
      --set serviceAccount.name=${IAM_SA}
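
  5. Verify that the AWS Load Balancer Controller is up and running. This check is a sketch; the Helm release from the previous step typically creates a deployment named aws-load-balancer-controller in the kube-system namespace.

    kubectl get deployment -n kube-system aws-load-balancer-controller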
      

Step 2: Deploy your gateway proxy

  1. Create a Gateway resource with an HTTP listener.

    kubectl apply -n gloo-system -f- <<EOF
    kind: Gateway
    apiVersion: gateway.networking.k8s.io/v1
    metadata:
      name: alb
    spec:
      gatewayClassName: gloo-gateway
      listeners:
      - protocol: HTTP
        port: 8080
        name: http
        allowedRoutes:
          namespaces:
            from: All
    EOF
      
  2. Create an HttpListenerOption resource to set up the health check path for the gateway proxy. This step is required to ensure that the ALB health checks pass.

    kubectl apply -f- <<EOF
    apiVersion: gateway.solo.io/v1
    kind: HttpListenerOption
    metadata:
      name: alb-healthcheck
      namespace: gloo-system
    spec:
      targetRefs:
      - group: gateway.networking.k8s.io
        kind: Gateway
        name: alb
      options:
        healthCheck:
          path: "/healthz"
    EOF
      
  3. Use an Ingress resource to create your ALB. Make sure to include the health check path that you set up earlier.

    kubectl apply -f- <<EOF
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      namespace: gloo-system
      name: alb
      annotations:
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/target-type: instance
        alb.ingress.kubernetes.io/healthcheck-protocol: HTTP #--HTTPS by default
        alb.ingress.kubernetes.io/healthcheck-path: "/healthz"
    spec:
      ingressClassName: alb
      rules:
        - http:
            paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: gloo-proxy-alb
                  port:
                    number: 8080
    EOF
      
  4. Review the load balancer in the AWS EC2 dashboard.

    1. Go to the AWS EC2 dashboard.
    2. Go to Load Balancing > Load Balancers. Find and open the ALB that was created for you.
    3. On the Resource map tab, verify that the load balancer points to targets in your cluster.
  5. Create an HTTPRoute resource for the httpbin app and attach it to the alb Gateway. The gateway proxy does not open the listener port until at least one route is attached, so this step is required for the AWS ELB health checks to pass.

    kubectl apply -f- <<EOF
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: httpbin-alb
      namespace: httpbin
      labels:
        example: httpbin-route
    spec:
      parentRefs:
        - name: alb
          namespace: gloo-system
      hostnames:
        - "albtest.com"
      rules:
        - backendRefs:
            - name: httpbin
              port: 8000
    EOF
      
  6. Go back to the AWS EC2 console and verify that the AWS ELB health checks now pass.
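  7. Optionally, verify the health check path from within the cluster. The following commands are a sketch: they port-forward the gloo-proxy-alb service that exposes the alb gateway proxy and send a request to the /healthz path that you configured in the HttpListenerOption. Because the listener opens only after a route is attached, run this check after you create the HTTPRoute. A 200 HTTP response code indicates that the proxy reports itself as healthy.

    # In one terminal, forward the gateway proxy port
    kubectl -n gloo-system port-forward svc/gloo-proxy-alb 8080:8080
    # In another terminal, send a request to the health check path
    curl -i http://localhost:8080/healthz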

Test the ALB

  1. From the AWS EC2 console, get the DNS name that was assigned to your ALB and save it as an environment variable. Alternatively, you can look up the DNS name with kubectl, as shown after these steps.

      export INGRESS_GW_ADDRESS=<alb-dns-name>
      
  2. Send a request to the httpbin app. Verify that you get back a 200 HTTP response code.

      curl -vik http://$INGRESS_GW_ADDRESS:80/headers -H "host: albtest.com:80"
      

    Example output:

      ...
    < HTTP/1.1 200 OK
    HTTP/1.1 200 OK
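
As an alternative to the EC2 console, you can read the DNS name from the status of the alb Ingress that you created earlier. The following commands are a sketch that assumes the Ingress still exists in the gloo-system namespace.

    export INGRESS_GW_ADDRESS=$(kubectl get ingress alb -n gloo-system -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    echo $INGRESS_GW_ADDRESS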
      

Cleanup

You can optionally remove the resources that you set up as part of this guide.

    kubectl delete ingress alb -n gloo-system
    kubectl delete httproute httpbin-alb -n httpbin
    kubectl delete gateway alb -n gloo-system
    kubectl delete httplisteneroption alb-healthcheck -n gloo-system
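
If you also want to remove the AWS Load Balancer Controller and the IAM resources that you created in Step 1, the following commands are a sketch that reuses the environment variables from that step.

    helm uninstall aws-load-balancer-controller -n kube-system
    eksctl delete iamserviceaccount --cluster ${CLUSTER_NAME} --namespace kube-system --name ${IAM_SA} --region ${REGION}
    aws iam delete-policy --policy-arn arn:aws:iam::${AWS_ACCOUNT_ID}:policy/${IAM_POLICY_NAME}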