AWS Lambda with EKS ServiceAccounts

AWS offers the ability to associate Kubernetes Service Accounts with IAM Roles. This AWS article explains the feature in more detail.
Gloo Gateway supports discovering and invoking AWS Lambdas using these projected Service Accounts.

The following resources are involved in this setup:

- An EKS cluster with an associated OpenID Connect (OIDC) provider
- An IAM Policy that grants the Lambda actions used in this guide
- An IAM Role that trusts the projected Kubernetes ServiceAccounts and has the policy attached
- The Gloo Gateway discovery and gateway-proxy ServiceAccounts, annotated with the Role ARN

There are many different ways of creating these objects, including the AWS Management Console and the AWS CLI.

Configuring an EKS cluster to use an IAM role

Step 1: Associate an OpenID Provider with your EKS cluster

The first step is to associate an OpenID Provider with your EKS cluster. A full tutorial can be found in the AWS docs.

Once the cluster exists and is configured properly, return here for the rest of the tutorial. The ServiceAccount token webhook is enabled by default in all EKS clusters, even though it does not appear as a visible workload in the cluster.
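
If you manage your cluster with eksctl, the association can also be done with a single command. This is a minimal sketch; the cluster name is a placeholder.

# Associate an IAM OIDC provider with the cluster (placeholder cluster name).
eksctl utils associate-iam-oidc-provider --cluster <CLUSTER_NAME> --approve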

Step 2: Create an IAM Policy

Create a new IAM Policy that grants the following four Lambda actions, which this tutorial needs in order to discover and invoke functions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "lambda:ListFunctions",
                "lambda:InvokeFunction",
                "lambda:GetFunction",
                "lambda:InvokeAsync"
            ],
            "Resource": "*"
        }
    ]
}
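
With the AWS CLI, creating the policy might look like the following. This is a sketch; the policy name and the local file name (which contains the JSON document above) are placeholders.

# Create the policy from the JSON document above, saved locally as lambda-policy.json.
aws iam create-policy \
  --policy-name gloo-lambda-policy \
  --policy-document file://lambda-policy.json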

Step 3: Create an IAM Role

Create an IAM Role and attach the policy that you created in step 2.
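
If you are working from the CLI, attaching the policy to the role might look like this. The role and policy names are placeholders; substitute the ones you created.

# Attach the policy from step 2 to the role (placeholder names).
aws iam attach-role-policy \
  --role-name gloo-lambda-role \
  --policy-arn arn:aws:iam::<ACCOUNT ID>:policy/gloo-lambda-policy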

Then, use the AWS CLI to modify the role's trust policy so that the web identities (the projected ServiceAccounts) can assume the role. To find the OIDC provider ID to use in your policy, see the AWS documentation. The following JSON payload shows an example trust relationship:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT ID>:oidc-provider/oidc.eks.<REGION>.amazonaws.com/id/<OIDC-ID>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.<REGION>.amazonaws.com/id/<OIDC-ID>:sub": [
            "system:serviceaccount:gloo-system:discovery",
            "system:serviceaccount:gloo-system:gateway-proxy"
          ]
        }
      }
    }
  ]
}
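
For example, you can look up the cluster's OIDC issuer (the trailing ID of the issuer URL is the <OIDC-ID> used above) and then apply the trust policy. This is a sketch; the cluster name, role name, and local file name are placeholders.

# Print the OIDC issuer URL for the cluster.
aws eks describe-cluster --name <CLUSTER_NAME> \
  --query "cluster.identity.oidc.issuer" --output text

# Apply the trust relationship above, saved locally as trust-policy.json.
aws iam update-assume-role-policy \
  --role-name gloo-lambda-role \
  --policy-document file://trust-policy.json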

Step 4: Take note of the ARNs

After creating the role, set the following environment variables; they are used for the remainder of this tutorial:

 export REGION=<region> # The region in which the lambdas are located.
 export AWS_ROLE_ARN=<role-arn> # The Role ARN of the Role created above.
 export SECONDARY_AWS_ROLE_ARN=<secondary-role-arn> # (Optional): A secondary Role ARN with Lambda access.

The Role ARN has the form arn:aws:iam::<AWS ACCOUNT ID>:role/<ROLE NAME>. For more information about ARNs, see https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html
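
If you prefer to fetch the ARN rather than construct it, the following sketch retrieves it with the AWS CLI; the role name is a placeholder.

# Print the ARN of the role created above.
aws iam get-role --role-name gloo-lambda-role --query Role.Arn --output text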

Deploying Gloo Gateway

For the purposes of this tutorial we will install open source Gloo Gateway, but Gloo Gateway Enterprise works exactly the same way, with slightly different Helm values as shown in the second example below.


Open Source:

helm install gloo gloo/gloo \
 --namespace gloo-system --create-namespace --values - <<EOF
settings:
  aws:
    enableServiceAccountCredentials: true
    stsCredentialsRegion: ${REGION}
gateway:
  proxyServiceAccount:
    extraAnnotations:
      eks.amazonaws.com/role-arn: ${AWS_ROLE_ARN}
discovery:
  serviceAccount:
    extraAnnotations:
      eks.amazonaws.com/role-arn: ${AWS_ROLE_ARN}
EOF

Enterprise:

helm install gloo glooe/gloo-ee \
 --namespace gloo-system --create-namespace --set-string license_key=YOUR_LICENSE_KEY --values - <<EOF
gloo:
  settings:
    aws:
      enableServiceAccountCredentials: true
      stsCredentialsRegion: ${REGION}
  gateway:
    proxyServiceAccount:
      extraAnnotations:
        eks.amazonaws.com/role-arn: ${AWS_ROLE_ARN}
  discovery:
    serviceAccount:
      extraAnnotations:
        eks.amazonaws.com/role-arn: ${AWS_ROLE_ARN}
EOF

Once the Helm installation has finished, check that all of the deployments have rolled out successfully before moving on:

kubectl rollout status deployment -n gloo-system gateway-proxy
kubectl rollout status deployment -n gloo-system gloo
kubectl rollout status deployment -n gloo-system gateway
kubectl rollout status deployment -n gloo-system discovery
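
You can also verify that the ServiceAccounts picked up the eks.amazonaws.com/role-arn annotation set by the Helm values. This is an optional sanity check:

# Print the annotations on the two ServiceAccounts that assume the role.
kubectl get serviceaccount -n gloo-system gateway-proxy -o jsonpath='{.metadata.annotations}'
kubectl get serviceaccount -n gloo-system discovery -o jsonpath='{.metadata.annotations}'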

Routing to your Lambda(s)

Now that Gloo Gateway is running with the ServiceAccount credentials configured, create the Gloo Gateway configuration that enables routing to your AWS Lambdas.

First, create an Upstream CR:

kubectl apply -f - <<EOF
apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
  name: lambda
  namespace: gloo-system
spec:
  aws: 
    region: ${REGION}
    roleArn: ${AWS_ROLE_ARN}
EOF

Since Function Discovery (FDS) is enabled, Gloo Gateway discovers all available Lambdas using the ServiceAccount credentials. The Lambda we will use for this demo is called uppercase; it is a very simple function that uppercases any text in the request body.
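
If you want to confirm which functions the role can see independently of Gloo Gateway, you can list them directly with the AWS CLI. This check is optional and assumes your local credentials have the same Lambda access as the role.

# List the Lambda functions visible in the target region.
aws lambda list-functions --region ${REGION}

To see what Gloo Gateway itself discovered, inspect the Upstream: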

kubectl get us -n gloo-system lambda -oyaml

apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
  name: lambda
  namespace: gloo-system
spec:
  aws:
    lambdaFunctions:
    # ...
    - lambdaFunctionName: uppercase
      logicalName: uppercase
      qualifier: $LATEST
    # ...
    region: us-east-1
status:
  reportedBy: gloo
  state: 1

Once the Upstream has been accepted, create the Virtual Service:

kubectl apply -f - <<EOF
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: default
  namespace: gloo-system
spec:
  virtualHost:
    domains:
    - '*'
    routes:
    - matchers:
      - prefix: /lambda
      routeAction:
        single:
          destinationSpec:
            aws:
              logicalName: uppercase
          upstream:
            name: lambda
            namespace: gloo-system
EOF

Now we can try our route. The very first request will take slightly longer, because the STS credential request must be performed in-band. Each subsequent request will be much quicker, because the credentials are cached.

curl -v $(glooctl proxy url)/lambda --data '"abc"' --request POST -H"content-type: application/json"
Note: Unnecessary use of -X or --request, POST is already inferred.
*   Trying 3.129.77.154...
* TCP_NODELAY set
* Connected to <redacted> port 80 (#0)
> POST /lambda HTTP/1.1
> Host: <redacted>
> User-Agent: curl/7.64.1
> Accept: */*
> content-type: application/json
> Content-Length: 5
>
* upload completely sent off: 5 out of 5 bytes
< HTTP/1.1 200 OK
< date: Wed, 05 Aug 2020 17:59:58 GMT
< content-type: application/json
< content-length: 5
< x-amzn-requestid: e5cc4545-2989-4105-a4b2-49707d654bce
< x-amzn-remapped-content-length: 0
< x-amz-executed-version: 1
< x-amzn-trace-id: root=1-5f2af39e-5b3e38488ffeb5ec541107d4;sampled=0
< x-envoy-upstream-service-time: 53
< server: envoy
<
* Connection #0 to host <redacted> left intact
"ABC"* Closing connection 0

We can also optionally override the role ARN used to authenticate our Lambda requests by adding it to the Upstream like so:

kubectl apply -f - << EOF
apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
  name: lambda
  namespace: gloo-system
spec:
  aws:
    region: ${REGION}
    roleArn: ${SECONDARY_AWS_ROLE_ARN}
EOF

If you want to assume this role directly via the web identity token, rather than chaining from the primary role, you can set envoy.reloadable_features.aws_lambda.sts_chaining to 0.

Now we can try our route again. Everything should just work; note that this request may take as long as the initial request did, since the credentials for this role ARN have not been cached yet.

curl -v $(glooctl proxy url)/lambda --data '"abc"' --request POST -H"content-type: application/json"
Note: Unnecessary use of -X or --request, POST is already inferred.
*   Trying 3.129.77.154...
* TCP_NODELAY set
* Connected to <redacted> port 80 (#0)
> POST /lambda HTTP/1.1
> Host: <redacted>
> User-Agent: curl/7.64.1
> Accept: */*
> content-type: application/json
> Content-Length: 5
>
* upload completely sent off: 5 out of 5 bytes
< HTTP/1.1 200 OK
< date: Wed, 05 Aug 2020 17:59:58 GMT
< content-type: application/json
< content-length: 5
< x-amzn-requestid: e5cc4545-2989-4105-a4b2-49707d654bce
< x-amzn-remapped-content-length: 0
< x-amz-executed-version: 1
< x-amzn-trace-id: root=1-5f2af39e-5b3e38488ffeb5ec541107d4;sampled=0
< x-envoy-upstream-service-time: 53
< server: envoy
<
* Connection #0 to host <redacted> left intact
"ABC"* Closing connection 0

Preparing for Lambda cold starts

When you invoke a new function in AWS Lambda, you might notice significant latency, or a cold start, as Lambda downloads your code and prepares the execution environment. The latency can vary from under 100 ms to more than 1 second. The chances of a cold start increase if you write the function in a programming language that takes a long time to start up a VM, such as Java. For more information, see the AWS blog.

Keep cold start latency in mind as you set the timeout values on your Virtual Services. If you do not, you might see 500-level server error responses. The following options example for a route on a VirtualService allows for a total 35-second timeout window (the default is 15 seconds), within which up to three retries with 10-second per-try timeouts are attempted.

      options:
        timeout: 35s  # default value is 15s
        retries:
          retryOn: '5xx'
          numRetries: 3
          perTryTimeout: '10s'
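
For context, here is how those options might be placed on the route of the VirtualService created earlier. This is a sketch that reuses the route, Upstream, and function names from the previous examples.

kubectl apply -f - <<EOF
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: default
  namespace: gloo-system
spec:
  virtualHost:
    domains:
    - '*'
    routes:
    - matchers:
      - prefix: /lambda
      # Route-level timeout and retry settings to absorb Lambda cold starts.
      options:
        timeout: 35s
        retries:
          retryOn: '5xx'
          numRetries: 3
          perTryTimeout: '10s'
      routeAction:
        single:
          destinationSpec:
            aws:
              logicalName: uppercase
          upstream:
            name: lambda
            namespace: gloo-system
EOF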

For more information about controlling timeout and retry settings, see the API documentation.