Monetization/Usage Graph

Gloo Portal can be configured to store usage data in a Postgres database.

This data can be used to monetize APIs and to generate a usage graph in the Admin Portal UI.

Prerequisites

  1. Gloo Portal installed on a Kubernetes cluster alongside Gloo Edge version 1.8.0 or later (a quick way to check the Edge version is shown below this list).

  2. A Portal with at least one API Product, for testing.
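
To confirm the Gloo Edge version, one option is to inspect the image tag on the gloo deployment. This is only a sketch: it assumes the default deployment name gloo and that Edge is installed in the gloo-portal namespace, as in the rest of this guide.

# Print the gloo container image; the tag should be 1.8.0 or later
kubectl -n gloo-portal get deploy gloo -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'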

Set up a Postgres database

First, let’s create a ConfigMap to define the environment variables we’ll need to set up Postgres:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  namespace: gloo-portal
  labels:
    app: postgres
data: # these env values define the default DB, user, and password for our postgres instance
  POSTGRES_DB: postgres-db
  POSTGRES_USER: postgres-user
  POSTGRES_PASSWORD: postgres-password

EOF

Next we’ll create the PersistentVolume to allocate the backing storage for the database:

cat <<EOF | kubectl apply -f -
kind: PersistentVolume
apiVersion: v1
metadata:
   name: postgres-pv-volume
   namespace: gloo-portal
   labels:
      type: local
      app: postgres
spec:
   storageClassName: manual
   capacity:
      storage: 5Gi # storage needs may vary depending on traffic volume
   accessModes:
      - ReadWriteMany
   hostPath:
      path: "/mnt/data"

EOF

and the PersistentVolumeClaim for that volume:

cat <<EOF | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  namespace: gloo-portal
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

EOF

Then we’ll deploy Postgres itself:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: gloo-portal
spec:
  selector:
    matchLabels:
      app: "postgres"
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13.2 # any supported version of postgres should work
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config # mount the configmap from earlier as env
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgresdb
      volumes:
        - name: postgresdb
          persistentVolumeClaim:
            claimName: postgres-pv-claim

EOF

and expose it as a service:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: gloo-portal
  labels:
    app: postgres
spec:
  type: NodePort
  ports:
    - port: 5432
  selector:
    app: postgres

EOF
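
Before creating the schema, it’s worth confirming that the Postgres pod is running and the service exists; both carry the app=postgres label used above:

# List the Postgres pod and service created above
kubectl -n gloo-portal get pods,svc -l app=postgres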

Now that we have a Postgres instance, we’ll need to create a requests table. The schema must have the following columns:

   Column    |           Type           | Collation | Nullable |               Default
-------------+--------------------------+-----------+----------+--------------------------------------
 id          | bigint                   |           | not null | nextval('requests_id_seq'::regclass)
 user_id     | text                     |           | not null |
 route       | text                     |           | not null |
 api_product | text                     |           | not null |
 environment | text                     |           | not null |
 status      | integer                  |           | not null |
 request_ts  | timestamp with time zone |           | not null |
 method      | text                     |           | not null |
 request_id  | text                     |           | not null |
Indexes:
    "requests_pkey" PRIMARY KEY, btree (id)

We can load this schema by running psql in our pod, entering the password we assigned to POSTGRES_PASSWORD earlier when prompted:

kubectl -n gloo-portal exec -it deploy/postgres -- psql -U postgres-user -d postgres-db --password

and running the following SQL, taken from a pg_dump of the desired schema:

--
-- PostgreSQL database dump
--

-- Dumped from database version 13.2 (Debian 13.2-1.pgdg100+1)
-- Dumped by pg_dump version 13.2 (Debian 13.2-1.pgdg100+1)

-- <settings omitted>

--
-- Name: requests; Type: TABLE; Schema: public; Owner: postgres-user
--

CREATE TABLE public.requests (
    id bigint NOT NULL,
    user_id text NOT NULL,
    route text NOT NULL,
    api_product text NOT NULL,
    environment text NOT NULL,
    status integer NOT NULL,
    request_ts timestamp with time zone NOT NULL,
    method text NOT NULL,
    request_id text NOT NULL
);


ALTER TABLE public.requests OWNER TO "postgres-user";

--
-- Name: requests_id_seq; Type: SEQUENCE; Schema: public; Owner: postgres-user
--

CREATE SEQUENCE public.requests_id_seq
   AS integer
   START WITH 1
   INCREMENT BY 1
   NO MINVALUE
   NO MAXVALUE
   CACHE 1;


ALTER TABLE public.requests_id_seq OWNER TO "postgres-user";

--
-- Name: requests_id_seq; Type: SEQUENCE OWNED BY; Schema: public; Owner: postgres-user
--

ALTER SEQUENCE public.requests_id_seq OWNED BY public.requests.id;


--
-- Name: requests id; Type: DEFAULT; Schema: public; Owner: postgres-user
--

ALTER TABLE ONLY public.requests ALTER COLUMN id SET DEFAULT nextval('public.requests_id_seq'::regclass);


--
-- Name: requests requests_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres-user
--

ALTER TABLE ONLY public.requests
   ADD CONSTRAINT requests_pkey PRIMARY KEY (id);


--
-- PostgreSQL database dump complete
--

We can confirm the new table has been created as desired by executing \d requests:

postgres-db=# \d requests
                                       Table "public.requests"
   Column    |           Type           | Collation | Nullable |               Default
-------------+--------------------------+-----------+----------+--------------------------------------
 id          | bigint                   |           | not null | nextval('requests_id_seq'::regclass)
 user_id     | text                     |           | not null |
 route       | text                     |           | not null |
 api_product | text                     |           | not null |
 environment | text                     |           | not null |
 status      | integer                  |           | not null |
 request_ts  | timestamp with time zone |           | not null |
 method      | text                     |           | not null |
 request_id  | text                     |           | not null |
Indexes:
    "requests_pkey" PRIMARY KEY, btree (id)

Add monetization config

Gloo Edge and Gloo Portal both need configuration in order to connect to the database.

We’ll accomplish this by applying a ConfigMap and a Secret. These need to be applied in the namespace(s) where Edge and Portal are running; in this guide, both run in the gloo-portal namespace.

The ConfigMap can be applied like so:

cat <<EOF | kubectl apply -n gloo-portal -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: monetization-config
data:
  storage-type: "postgres"
  config.yaml: | # Edge and Portal will mount this volume and read this field as a YAML file
    secretpath: /etc/monetization/secret
    host: postgres.gloo-portal.svc.cluster.local
    db: postgres-db
    port: 5432

EOF

and we store the DB credentials in a Secret so as not to expose sensitive information:

cat <<EOF | kubectl apply -n gloo-portal -f -
apiVersion: v1
kind: Secret
metadata:
   name: monetization-secret
type: kubernetes.io/basic-auth
stringData:
   username: postgres-user
   password: postgres-password

EOF

Upgrade glooe with monetization values

A number of Helm values need to be added in order to support monetization.

The ConfigMap and Secret we just applied need to be added as volumes and mounted into the extauth deployment, an environment variable needs to be set to enable monetization, and access logging needs to be configured to send request data to the extauth service.

The following YAML contains the requisite values and can be written to a file like so:

cat << EOF > glooe-monetization-values.yaml
create_license_secret: false
global:
   extensions:
      extAuth:
         deployment:
            extraVolume: # specify the monetization config and secret as volumes for the extauth deployment
               - name: monetization-config
                 configMap:
                    name: monetization-config
               - name: monetization-secret
                 secret:
                    secretName: monetization-secret
            extraVolumeMount: # add the volumeMounts for the monetization config and secret volumes
               - name: monetization-config
                 mountPath: /etc/monetization/storage-config
                 readOnly: true
               - name: monetization-secret
                 mountPath: /etc/monetization/secret
                 readOnly: true
            customEnv: # set the extauth env flag to enable monetization
               - name: MONETIZATION_ENABLED
                 value: "true"
gloo:
   gatewayProxies:
      gatewayProxy:
         envoyStaticClusters:
            - name: extauth # we use the extauth server as an access log service to enable monetization
              connect_timeout: 5.000s
              type: STRICT_DNS
              typed_extension_protocol_options:
                 envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
                    "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
                    # Explicitly require HTTP/2
                    explicit_http_config:
                       http2_protocol_options: { }
              lb_policy: ROUND_ROBIN
              load_assignment:
                 cluster_name: extauth
                 endpoints:
                    - lb_endpoints:
                         - endpoint:
                              address:
                                 socket_address: # address needs to route to the correct namespace where extauth is running
                                    address: extauth.gloo-portal.svc.cluster.local 
                                    port_value: 8083
         gatewaySettings:
            accessLoggingService:
               accessLog: # point access logs to the extauth cluster
                  - grpcService:
                       logName: "monetization-log"
                       staticClusterName: "extauth"
EOF

We can then apply these values to our Edge deployment with a Helm upgrade:

helm upgrade -n gloo-portal glooe glooe/gloo-ee --values=glooe-monetization-values.yaml
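
Once the upgrade finishes, we can check that the extauth deployment picked up the monetization flag. This assumes the default deployment name extauth, matching the service address used in the static cluster above:

# Confirm MONETIZATION_ENABLED=true appears in the extauth container env
kubectl -n gloo-portal get deploy extauth -o jsonpath='{.spec.template.spec.containers[0].env}{"\n"}'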

At this point, requests we make to the gateway should result in rows being written to our database.

To observe this, we can make a request to our gateway with curl:

(The INGRESS_HOST and INGRESS_PORT environment variables used below are set in the Getting Started guide.)

curl "http://${INGRESS_HOST}:${INGRESS_PORT}/api/pets" -H "Host: api.example.com"
[{"id":1,"name":"Dog","status":"available"},{"id":2,"name":"Cat","status":"pending"}]

Now we should have a row in the requests table in our Postgres database; we can check with psql, again entering the password we specified earlier:

kubectl exec -it -n gloo-portal deploy/postgres -- psql -U postgres-user -d postgres-db --password -c 'select * from requests;'
 id | user_id |   route   |           api_product           |         environment         | status |          request_ts           | method |              request_id
----+---------+-----------+---------------------------------+-----------------------------+--------+-------------------------------+--------+--------------------------------------
  1 |         | /api/pets | external-api-product.dev-portal | test-environment.dev-portal |    200 | 2021-06-28 21:10:00.998876+00 | GET    | c7493fdb-f642-43eb-b291-8e3f6784567a
(1 row)

Note: user_id will be populated for authenticated requests.
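
Because the usage data is plain SQL, it can also be queried directly for billing. The query below is only an illustration (it is not something Portal requires); it aggregates request counts per user and API Product over the last 30 days:

-- Illustrative billing query: request counts per user and API Product, last 30 days
SELECT user_id, api_product, count(*) AS request_count
FROM requests
WHERE request_ts > now() - interval '30 days'
GROUP BY user_id, api_product
ORDER BY request_count DESC;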

Upgrade gloo-portal with monetization values

We’ll want to add the following Helm values to our gloo-portal deployment in order to expose the Admin Server on port 32002 and enable its MonetizationData endpoint:

We append to the existing values file so as not to overwrite unrelated values. Alternatively, a second values file can be written and passed to Helm in addition to the original.

cat << EOF >> gloo-values.yaml
adminDashboard:
   service:
      type: NodePort
      httpNodePort: 32002
monetization:
   enabled: true
   configMapName: monetization-config
   secretName: monetization-secret
EOF
helm upgrade gloo-portal gloo-portal/gloo-portal -n gloo-portal --values gloo-values.yaml

We can verify that this was successful by checking that the gloo-portal-admin-server pods have restarted:

$ kubectl get -n gloo-portal pods
NAME                                        READY   STATUS    RESTARTS   AGE
...
gloo-portal-admin-server-7fb78c4d59-jjfm8   3/3     Running   0          10s
...

And we can test the GetData endpoint using grpcurl:

grpcurl -plaintext -d '{"lookback":600,"groupings":{"user":true}}' localhost:32002 admin.portal.gloo.solo.io.MonetizationDataApi/GetData

We should see data for the single request we made in the previous section:

(Note: avgLatency is not yet implemented.)

{
   "start":"2021-06-28T21:03:57.095188100Z",
   "end":"2021-06-28T21:13:57.095188100Z",
   "dataGroup":[
      {
         "label":"test-basic-api-key-013a3086-f0d5-a923-2310-dd9e98afa44e",
         "data":[
            {
               "timestamp":"2021-06-28T21:10:00.998876Z",
               "requestCount":"1"
            }
         ]
      }
   ],
   "totalRequests":"1",
   "avgLatency":"0s"
}

Usage graph

We can visualize the data by opening the Admin Dashboard in a browser and navigating to the “API Usage” tab.

(Screenshot: Admin Dashboard API Usage tab)

Note: this screenshot reflects more, and different, test data than we’ve generated in this guide.