Hello World
In this guide, we will introduce Gloo Gateway’s Upstream and Virtual Service concepts by accomplishing the following tasks:
- Deploy a REST service to Kubernetes using the Pet Store sample application
- Observe that Gloo Gateway’s Discovery system finds the Pet Store service and creates an Upstream Custom Resource (CR) for it
- Create a Virtual Service and add routes sending traffic to specific paths on the Pet Store Upstream based on incoming web requests
- Verify Gloo Gateway correctly configures Envoy to route to the Upstream
- Test the routes by submitting web requests using curl
If there are no routes configured, Envoy will not be listening on the gateway port.
Preparing the Environment
To follow along in this guide, you will need to fulfill a few prerequisites.
Prerequisite Software
Your local system should have kubectl and glooctl installed, and you should have access to a Kubernetes deployment to install Gloo Gateway.
- kubectl
- glooctl
- Kubernetes v1.11.3+ deployed somewhere. Minikube is a great way to get a cluster up quickly.
Install Gloo Gateway and glooctl
The linked guide walks you through the process of installing glooctl locally and installing Gloo Gateway on Kubernetes to the default gloo-system namespace.
Once you have completed the installation of glooctl and Gloo Gateway, you are ready to deploy an example application and configure routing.
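If you have not done that yet, the overall flow looks roughly like the following sketch. The install script URL and the glooctl install gateway and glooctl check commands come from the Gloo documentation, but versions and options change over time, so treat the linked guide as authoritative.
curl -sL https://run.solo.io/gloo/install | sh
glooctl install gateway
glooctl check
The last command reports on the health of the Gloo Gateway components; all checks should pass before you continue.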
Example Application Setup
On your Kubernetes installation, you will deploy the Pet Store Application and validate that it is operational.
Deploy the Pet Store Application
Let’s deploy the Pet Store Application on Kubernetes using a YAML file hosted on GitHub. The deployment will stand up the Pet Store container and expose the Pet Store API through a Kubernetes service.
kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo/v1.13.x/example/petstore/petstore.yaml
deployment.extensions/petstore created
service/petstore created
Verify the Pet Store Application
Now let’s verify that the pod running the Pet Store application launched successfully:
kubectl -n default get pods
NAME READY STATUS RESTARTS AGE
petstore-####-#### 1/1 Running 0 30s
If the pod is not yet running, run the kubectl -n default get pods -w command and wait until it is. Then enter Ctrl-C to break out of the wait loop.
Let’s verify that the petstore service has been created as well.
kubectl -n default get svc petstore
NAME       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
petstore   ClusterIP   10.XX.XX.XX   <none>        8080/TCP   1m
Note that the service does not have an external IP address. It is only accessible within the Kubernetes cluster.
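Because the service is only reachable inside the cluster, one way to sanity-check it directly, bypassing Gloo Gateway entirely, is kubectl port-forward. The /api/pets path below is the endpoint the Pet Store application serves, as we will see when its functions are discovered later in this guide. Run the port-forward in a separate terminal:
kubectl -n default port-forward svc/petstore 8080:8080
curl localhost:8080/api/pets
This should return the same JSON list of pets that we will later retrieve through the gateway.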
Verify the Upstream for the Pet Store Application
The Gloo Gateway discovery services watch for new services added to the Kubernetes cluster. When the petstore service was created, Gloo Gateway automatically created an Upstream for the petstore service. If everything deployed properly, the Upstream STATUS should be Accepted.
Let’s verify this by using the glooctl command line tool:
glooctl get upstreams
+--------------------------------+------------+----------+------------------------------+
| UPSTREAM | TYPE | STATUS | DETAILS |
+--------------------------------+------------+----------+------------------------------+
| default-kubernetes-443 | Kubernetes | Pending | svc name: kubernetes |
| | | | svc namespace: default |
| | | | port: 8443 |
| | | | |
| default-petstore-8080 | Kubernetes | Accepted | svc name: petstore |
| | | | svc namespace: default |
| | | | port: 8080 |
| | | | |
| gloo-system-gateway-proxy-8080 | Kubernetes | Accepted | svc name: gateway-proxy |
| | | | svc namespace: gloo-system |
| | | | port: 8080 |
| | | | |
| gloo-system-gloo-9977 | Kubernetes | Accepted | svc name: gloo |
| | | | svc namespace: gloo-system |
| | | | port: 9977 |
| | | | |
+--------------------------------+------------+----------+------------------------------+
This command lists all the Upstreams Gloo Gateway has discovered, each written to an Upstream CR. The Upstream we want to see is default-petstore-8080.
The Upstream was created in the gloo-system namespace rather than default because it was created by the discovery service. Upstreams and Virtual Services do not need to live in the gloo-system namespace to be processed by Gloo Gateway.
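Since each discovered Upstream is stored as a Custom Resource, you can also list them with kubectl, assuming the Gloo CRDs were installed along with Gloo Gateway:
kubectl -n gloo-system get upstreams
This should show the same default-petstore-8080 entry that glooctl reported above.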
Investigate the YAML of the Upstream
You can view more information about the properties of a particular Upstream by specifying the output type as kube-yaml. Let’s take a closer look at the Upstream that Gloo Gateway’s Discovery service created:
glooctl get upstream default-petstore-8080 --output kube-yaml
apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
  labels:
    app: petstore
    discovered_by: kubernetesplugin
  name: default-petstore-8080
  namespace: gloo-system
spec:
  discoveryMetadata: {}
  kube:
    selector:
      app: petstore
    serviceName: petstore
    serviceNamespace: default
    servicePort: 8080
status:
  statuses:
    gloo-system:
      reportedBy: gloo
      state: 1
By default, the Upstream that discovery creates is rather simple: it represents a specific Kubernetes service. However, the petstore application is a Swagger (OpenAPI) service. Gloo Gateway can discover this Swagger spec, but its function discovery features are turned off by default to improve performance. To enable the Function Discovery Service (FDS) on our petstore, we need to label the namespace.
kubectl label namespace default discovery.solo.io/function_discovery=enabled
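You can confirm that the label was applied by inspecting the namespace:
kubectl get namespace default --show-labels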
Now Gloo Gateway’s function discovery will discover the Swagger spec, and FDS populates our Upstream with the REST endpoints the application implements.
glooctl get upstream default-petstore-8080
+-----------------------+------------+----------+-------------------------+
| UPSTREAM | TYPE | STATUS | DETAILS |
+-----------------------+------------+----------+-------------------------+
| default-petstore-8080 | Kubernetes | Accepted | svc name: petstore |
| | | | svc namespace: default |
| | | | port: 8080 |
| | | | REST service: |
| | | | functions: |
| | | | - addPet |
| | | | - deletePet |
| | | | - findPetById |
| | | | - findPets |
| | | | |
+-----------------------+------------+----------+-------------------------+
glooctl get upstream default-petstore-8080 --output kube-yaml
apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
  labels:
    discovered_by: kubernetesplugin
    service: petstore
  name: default-petstore-8080
  namespace: gloo-system
spec:
  discoveryMetadata: {}
  kube:
    selector:
      app: petstore
    serviceName: petstore
    serviceNamespace: default
    servicePort: 8080
    serviceSpec:
      rest:
        swaggerInfo:
          url: http://petstore.default.svc.cluster.local:8080/swagger.json
        transformations:
          addPet:
            body:
              text: '{"id": {{ default(id, "") }},"name": "{{ default(name, "")}}","tag":
                "{{ default(tag, "")}}"}'
            headers:
              :method:
                text: POST
              :path:
                text: /api/pets
              content-type:
                text: application/json
          deletePet:
            headers:
              :method:
                text: DELETE
              :path:
                text: /api/pets/{{ default(id, "") }}
              content-type:
                text: application/json
          findPetById:
            body: {}
            headers:
              :method:
                text: GET
              :path:
                text: /api/pets/{{ default(id, "") }}
              content-length:
                text: "0"
              content-type: {}
              transfer-encoding: {}
          findPets:
            body: {}
            headers:
              :method:
                text: GET
              :path:
                text: /api/pets?tags={{default(tags, "")}}&limit={{default(limit,
                  "")}}
              content-length:
                text: "0"
              content-type: {}
              transfer-encoding: {}
status:
  statuses:
    gloo-system:
      reportedBy: gloo
      state: 1
The application endpoints were discovered by Gloo Gateway’s Function Discovery (FDS) service. This was possible because the petstore application implements OpenAPI (specifically, discovering a Swagger JSON document at petstore/swagger.json).
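If you are curious, you can fetch that same document yourself by port-forwarding to the petstore service as before and requesting the path shown in the swaggerInfo URL of the Upstream above:
kubectl -n default port-forward svc/petstore 8080:8080
curl localhost:8080/swagger.json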
Configuring Routing
We have confirmed that the Pet Store application was deployed successfully and that the Function Discovery service on Gloo Gateway automatically added an Upstream entry with all the published application endpoints of the Pet Store application. Now let’s configure some routing rules on the default Virtual Service and test them to ensure we get a valid response.
Add a Routing Rule
Even though the Upstream has been created, Gloo Gateway will not route traffic to it until we add some routing rules on a Virtual Service. Let’s now use glooctl to create a basic route for this Upstream with the --prefix-rewrite flag to rewrite the path on incoming requests to match the path our petstore application expects.
glooctl add route \
--path-exact /all-pets \
--dest-name default-petstore-8080 \
--prefix-rewrite /api/pets
If using Git Bash on Windows, the above will not work; Git Bash interprets the route parameters as Unix file paths and mangles them. Adding MSYS_NO_PATHCONV=1 to the start of the above command should allow it to execute correctly.
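For example:
MSYS_NO_PATHCONV=1 glooctl add route \
  --path-exact /all-pets \
  --dest-name default-petstore-8080 \
  --prefix-rewrite /api/pets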
We did not specify a Virtual Service, so the route is added to the default Virtual Service. If a default Virtual Service does not exist, glooctl will create one.
+-----------------+--------------+---------+------+---------+-----------------+---------------------------+
| VIRTUAL SERVICE | DISPLAY NAME | DOMAINS | SSL | STATUS | LISTENERPLUGINS | ROUTES |
+-----------------+--------------+---------+------+---------+-----------------+---------------------------+
| default | | * | none | Pending | | /all-pets -> gloo-system. |
| | | | | | | .default-petstore-8080 |
+-----------------+--------------+---------+------+---------+-----------------+---------------------------+
The initial STATUS of the default Virtual Service will be Pending. After a few seconds it should change to Accepted. Let’s verify that by retrieving the default Virtual Service with glooctl.
glooctl get virtualservice default
+-----------------+--------------+---------+------+----------+-----------------+---------------------------+
| VIRTUAL SERVICE | DISPLAY NAME | DOMAINS | SSL | STATUS | LISTENERPLUGINS | ROUTES |
+-----------------+--------------+---------+------+----------+-----------------+---------------------------+
| default | | * | none | Accepted | | /all-pets -> gloo-system. |
| | | | | | | .default-petstore-8080 |
+-----------------+--------------+---------+------+----------+-----------------+---------------------------+
Verify Virtual Service Creation
Let’s verify that a Virtual Service was created with that route.
Routes are associated with Virtual Services in Gloo Gateway. When we created the route in the previous step, we didn’t provide a Virtual Service, so Gloo Gateway created a Virtual Service called default and added the route.
With glooctl, we can see that the default Virtual Service was created with our route:
glooctl get virtualservice default --output kube-yaml
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: default
  namespace: gloo-system
  ownerReferences: []
status:
  statuses:
    gloo-system:
      reportedBy: gloo
      state: Accepted
      subresourceStatuses:
        '*v1.Proxy.gateway-proxy_gloo-system':
          reportedBy: gloo
          state: Accepted
spec:
  virtualHost:
    domains:
    - '*'
    routes:
    - matchers:
      - exact: /all-pets
      options:
        prefixRewrite: /api/pets
      routeAction:
        single:
          upstream:
            name: default-petstore-8080
            namespace: gloo-system
When a Virtual Service is created, Gloo Gateway immediately updates the proxy configuration. Since the status of this Virtual Service is Accepted, we know this route is now active.
At this point we have a Virtual Service with a routing rule sending traffic on the path /all-pets to the Upstream default-petstore-8080 at a path of /api/pets.
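To verify that Gloo Gateway configured Envoy with this route (the fourth task from the introduction), you can inspect the live Envoy configuration. Recent glooctl versions provide a dump command for this; if yours does, the route should appear in the output:
glooctl proxy dump | grep all-pets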
Test the Route Rule
Let’s test the route rule by retrieving the URL of Gloo Gateway, and sending a web request to the /all-pets path of the URL using curl.
curl $(glooctl proxy url --name gateway-proxy)/all-pets
[{"id":1,"name":"Dog","status":"available"},{"id":2,"name":"Cat","status":"pending"}]
If you test locally by using minikube, the load balancer service that exposes the gateway proxy is not assigned an external IP address or hostname and remains in a <pending> state. Because of that, the glooctl proxy url command returns an error similar to Error: load balancer ingress not found on service gateway-proxy curl: (3) URL using bad/illegal format or missing URL. To open a connection to the gateway proxy service, run minikube tunnel.
The proxy has now been configured to route requests to the /api/pets REST endpoint on the Pet Store application in Kubernetes.
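As a quick experiment before moving on, you could add a second route using the same flags we used earlier, this time rewriting to a single pet by ID (the /api/pets/{id} endpoint discovered above). The /one-pet path is just an arbitrary choice for this sketch:
glooctl add route \
  --path-exact /one-pet \
  --dest-name default-petstore-8080 \
  --prefix-rewrite /api/pets/1
curl $(glooctl proxy url --name gateway-proxy)/one-pet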
Next Steps
Congratulations! You’ve successfully set up your first routing rule. That’s just the tip of the iceberg though. In the next sections, we’ll take a closer look at more HTTP routing capabilities, including customizing the matching rules, route destination types, and request processing features.
To learn more about the concepts behind Upstreams and Virtual Services, check out the Concepts page.
If you’re ready to dive deeper into routing, the next logical step is trying out different matching rules starting with Path Matching.