Set up a demo for testing
Set up a demo environment to try out Gloo Mesh Enterprise features, such as traffic and security policies.
Before you begin
Make sure that you have the following tools installed on your local system.
- Cluster provider CLI, such as `kind`, `k3d`, `gcloud`, or your preferred cloud provider's CLI.
- `istioctl`, the Istio command line tool. The resources in this guide use Istio version 1.13.4. To check your installed version, run `istioctl version`.
- `jq`, to parse JSON output to get values that can be used in subsequent commands, environment variables, or other contexts.
- `kubectl`, the Kubernetes command line tool. Download the `kubectl` version that is within one minor version of the Kubernetes clusters you plan to use with Gloo Mesh.
- `meshctl`, the Gloo Mesh command line tool for bootstrapping Gloo Mesh, registering clusters, describing configured resources, and more.
- `openssl`: Make sure that you have the OpenSSL version of `openssl`, not LibreSSL. The `openssl` version must be at least 1.1.
  1. Check the `openssl` version that is installed. If you see LibreSSL in the output, continue to the next step.
     ```
     openssl version
     ```
  2. Install the OpenSSL version (not LibreSSL). For example, you might use Homebrew.
     ```
     brew install openssl
     ```
  3. Review the output of the OpenSSL installation for the path of the binary file. You can choose to export the binary to your path, or call the entire path whenever the following steps use an `openssl` command. For example, `openssl` might be installed along the following path: `/usr/local/opt/openssl@3/bin/`. To run commands, you can prefix them with this path so that your terminal uses this installed version of OpenSSL, and not the default LibreSSL.
     ```
     /usr/local/opt/openssl@3/bin/openssl req -new -newkey rsa:4096 -x509 -sha256 -days 3650...
     ```
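Before you continue, you can confirm which `openssl` your shell resolves to with a quick check. A minimal sketch (the warning message is illustrative):

```shell
# Print the openssl version; warn if it is LibreSSL, which does not
# support all the flags that the certificate steps in this guide use.
if openssl version | grep -q "LibreSSL"; then
  echo "LibreSSL detected: install OpenSSL and use its full path in later steps" >&2
else
  openssl version
fi
```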
Step 1: Create your clusters
Use Kubernetes or OpenShift clusters in your cloud provider, or create local clusters with a project such as Kind or K3s.
- Create three clusters: one cluster to run the management components, and two clusters to run your workloads. For example, you might have three clusters that are named `mgmt-cluster`, `cluster-1`, and `cluster-2`.
- Save the cluster names and Kubernetes contexts as the following environment variables.
  ```
  export MGMT_CLUSTER=mgmt-cluster
  export REMOTE_CLUSTER1=cluster-1
  export REMOTE_CLUSTER2=cluster-2
  export MGMT_CONTEXT=mgmt-cluster
  export REMOTE_CONTEXT1=cluster-1
  export REMOTE_CONTEXT2=cluster-2
  ```
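Note that if you create the clusters locally with `kind`, the kubeconfig context names are prefixed with `kind-`, so the context variables differ from the cluster names. A sketch, assuming clusters created with `kind create cluster --name <name>`:

```shell
export MGMT_CLUSTER=mgmt-cluster
export REMOTE_CLUSTER1=cluster-1
export REMOTE_CLUSTER2=cluster-2
# kind names each kubeconfig context kind-<cluster-name>
export MGMT_CONTEXT=kind-${MGMT_CLUSTER}
export REMOTE_CONTEXT1=kind-${REMOTE_CLUSTER1}
export REMOTE_CONTEXT2=kind-${REMOTE_CLUSTER2}
echo "$MGMT_CONTEXT $REMOTE_CONTEXT1 $REMOTE_CONTEXT2"
```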
Step 2: Configure the locality labels for the nodes
Gloo Mesh uses Kubernetes labels on the nodes in your clusters to indicate locality for the services that run on the nodes. For more information, see the Kubernetes topology and Istio locality documentation.
- Cloud: Typically, your cloud provider sets the Kubernetes `region` and `zone` labels for each node automatically. Depending on the level of availability that you want, you might have clusters in the same region, but different zones. Or, each cluster might be in a different region, with nodes spread across zones.
- On-premises: Depending on how you set up your cluster, you likely must set the `region` and `zone` labels for each node yourself. Additionally, consider setting a `subzone` label to specify nodes on the same rack or other more granular setups.
Verify that your nodes have locality labels
Verify that your nodes have at least `region` and `zone` labels. If so, and you do not want to update the labels, you can skip the remaining steps.

```
kubectl get nodes --context $REMOTE_CONTEXT1 -o jsonpath='{.items[*].metadata.labels}'
kubectl get nodes --context $REMOTE_CONTEXT2 -o jsonpath='{.items[*].metadata.labels}'
```

Example output with `region` and `zone` labels:

```
..."topology.kubernetes.io/region":"us-east","topology.kubernetes.io/zone":"us-east-2"
```
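The raw label map can be hard to scan. Because `jq` is already a prerequisite, you can print one `node region zone` line per node instead. In the following sketch, the here-doc-style sample stands in for real `kubectl get nodes -o json` output so that the shape of the result is visible; in your clusters, pipe the `kubectl` command into the same `jq` filter.

```shell
# Sample of `kubectl get nodes -o json` output (one node shown)
sample='{"items":[{"metadata":{"name":"node-1","labels":{"topology.kubernetes.io/region":"us-east","topology.kubernetes.io/zone":"us-east-2"}}}]}'

# Print name, region, and zone per node; "missing" flags unlabeled nodes
echo "$sample" | jq -r '.items[] | [.metadata.name,
  (.metadata.labels["topology.kubernetes.io/region"] // "missing"),
  (.metadata.labels["topology.kubernetes.io/zone"] // "missing")] | @tsv'
```

To check a live cluster, replace the sample with `kubectl get nodes --context $REMOTE_CONTEXT1 -o json`.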
Add locality labels to your nodes
If your nodes do not already have region and zone labels, you must add the labels. Depending on your cluster setup, you might add the same `region` label to each node, but a separate `zone` label per node. The values are not validated against your underlying infrastructure provider. The following example shows how you might label multizone clusters in two different regions, but you can adapt the steps for your actual setup.
- Label all the nodes in each cluster for the region. If your nodes have incorrect region labels, include the `--overwrite` flag in the command.
  ```
  kubectl label nodes --all --context $REMOTE_CONTEXT1 topology.kubernetes.io/region=us-east
  kubectl label nodes --all --context $REMOTE_CONTEXT2 topology.kubernetes.io/region=us-south
  ```
- List the nodes in each cluster. Note the name for each node.
  ```
  kubectl get nodes --context $REMOTE_CONTEXT1
  kubectl get nodes --context $REMOTE_CONTEXT2
  ```
- Label each node in each cluster for the zone, using zones within each cluster's region. If your nodes have incorrect zone labels, include the `--overwrite` flag in the command.
  ```
  kubectl label node <cluster-1_node-1> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-1
  kubectl label node <cluster-1_node-2> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-2
  kubectl label node <cluster-1_node-3> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-3
  kubectl label node <cluster-2_node-1> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-south-1
  kubectl label node <cluster-2_node-2> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-south-2
  kubectl label node <cluster-2_node-3> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-south-3
  ```
Step 3: Install Gloo Mesh Enterprise
The following example installs Gloo Mesh Enterprise for testing purposes. For complete installation instructions, see Install Gloo Mesh.
- Set the Gloo Mesh Enterprise version as an environment variable.
  ```
  export GLOO_MESH_VERSION=2.0.7
  ```
- Install the `meshctl` CLI, such as with the following example command for Linux or macOS.
  ```
  curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v$GLOO_MESH_VERSION sh -
  ```
- Install the Gloo Mesh management components. If you need to get a Gloo Mesh license key, contact an account representative. To unlock advanced north-south traffic management features, use a Gloo Mesh Gateway license key. For more information, see Gateway features.
  ```
  meshctl install --license $GLOO_MESH_LICENSE_KEY --kubecontext $MGMT_CONTEXT
  ```
- Save the external address and port that your cloud provider assigned to the `gloo-mesh-mgmt-server` load balancer service. The `gloo-mesh-agent` relay agent in each cluster accesses this address via a secure connection.
  - Get the external IP address and port details of the service.
    ```
    kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT
    ```
  - Save the address and port as an environment variable, either manually:
    ```
    export MGMT_SERVER_NETWORKING_ADDRESS=<IP:9900>
    ```
    Or derive them from the service. If your load balancer exposes an IP address:
    ```
    MGMT_SERVER_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    MGMT_SERVER_NETWORKING_PORT=$(kubectl -n gloo-mesh get service gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
    MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
    echo $MGMT_SERVER_NETWORKING_ADDRESS
    ```
    If your load balancer exposes a hostname instead, such as with AWS load balancers:
    ```
    MGMT_SERVER_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    MGMT_SERVER_NETWORKING_PORT=$(kubectl -n gloo-mesh get service gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
    MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
    echo $MGMT_SERVER_NETWORKING_ADDRESS
    ```
- Optional: To test rate limit or external auth policies, create the following Helm chart values file.
  ```
  cat << EOF > values.yaml
  rate-limiter:
    enabled: true
  ext-auth-service:
    enabled: true
  EOF
  ```
- Register each workload cluster. To test rate limit or external auth policies, include the `--gloo-mesh-agent-chart-values=values.yaml` option.
  ```
  meshctl cluster register \
    --kubecontext=$MGMT_CONTEXT \
    --remote-context=$REMOTE_CONTEXT1 \
    --relay-server-address $MGMT_SERVER_NETWORKING_ADDRESS \
    --version $GLOO_MESH_VERSION \
    --gloo-mesh-agent-chart-values=values.yaml \
    $REMOTE_CLUSTER1

  meshctl cluster register \
    --kubecontext=$MGMT_CONTEXT \
    --remote-context=$REMOTE_CONTEXT2 \
    --relay-server-address $MGMT_SERVER_NETWORKING_ADDRESS \
    --version $GLOO_MESH_VERSION \
    --gloo-mesh-agent-chart-values=values.yaml \
    $REMOTE_CLUSTER2
  ```
- Optional: Verify the registration.
Step 4: Set up TLS secrets for HTTPS traffic
To test HTTPS traffic, you must create TLS secrets for the ingress gateway. The following steps use self-signed certificates for testing environments only. This setup includes two separate domain hostnames, `www.example.com` in secret `gw-ssl-1-secret` and `www.test.com` in secret `gw-ssl-2-secret`. For more information, see Secure gateways.

Want to use different hosts? You can substitute any hosts that you want, or use wildcards (`*`) so that the setup works for any host. However, throughout the rest of the documentation, make sure to update the networking resources to match the host, such as in route tables and virtual gateways.
- Generate a root key and certificate to use for both your sample hostnames.
  ```
  openssl req -new -newkey rsa:4096 -x509 -sha256 -days 3650 -nodes -out gateway-root.crt -keyout gateway-root.key -subj /CN=root
  ```
- Create the following two configuration files.
  ```
  cat > gw-ssl-1.conf <<EOF
  [req]
  req_extensions = v3_req
  distinguished_name = req_distinguished_name
  [req_distinguished_name]
  [ v3_req ]
  basicConstraints = CA:FALSE
  keyUsage = nonRepudiation, digitalSignature, keyEncipherment
  extendedKeyUsage = clientAuth, serverAuth
  subjectAltName = @alt_names
  [alt_names]
  DNS = www.example.com
  EOF
  ```
  ```
  cat > gw-ssl-2.conf <<EOF
  [req]
  req_extensions = v3_req
  distinguished_name = req_distinguished_name
  [req_distinguished_name]
  [ v3_req ]
  basicConstraints = CA:FALSE
  keyUsage = nonRepudiation, digitalSignature, keyEncipherment
  extendedKeyUsage = clientAuth, serverAuth
  subjectAltName = @alt_names
  [alt_names]
  DNS = www.test.com
  EOF
  ```
- Create the TLS keys and certificates for each hostname.
  ```
  openssl genrsa -out gw-ssl-1.key 2048
  openssl req -new -key gw-ssl-1.key -out gw-ssl-1.csr -subj /CN=www.example.com -config gw-ssl-1.conf
  openssl x509 -req -days 3650 -CA gateway-root.crt -CAkey gateway-root.key -set_serial 0 -in gw-ssl-1.csr -out gw-ssl-1.crt -extensions v3_req -extfile gw-ssl-1.conf
  ```
  ```
  openssl genrsa -out gw-ssl-2.key 2048
  openssl req -new -key gw-ssl-2.key -out gw-ssl-2.csr -subj /CN=www.test.com -config gw-ssl-2.conf
  openssl x509 -req -days 3650 -CA gateway-root.crt -CAkey gateway-root.key -set_serial 0 -in gw-ssl-2.csr -out gw-ssl-2.crt -extensions v3_req -extfile gw-ssl-2.conf
  ```
- Create the TLS secrets for each hostname in each workload cluster. The secrets must be created in the same namespace as the ingress gateway.
  ```
  kubectl --context ${REMOTE_CONTEXT1} -n istio-system create secret generic gw-ssl-1-secret \
    --from-file=tls.key=gw-ssl-1.key \
    --from-file=tls.crt=gw-ssl-1.crt
  kubectl --context ${REMOTE_CONTEXT1} -n istio-system create secret generic gw-ssl-2-secret \
    --from-file=tls.key=gw-ssl-2.key \
    --from-file=tls.crt=gw-ssl-2.crt
  ```
  ```
  kubectl --context ${REMOTE_CONTEXT2} -n istio-system create secret generic gw-ssl-1-secret \
    --from-file=tls.key=gw-ssl-1.key \
    --from-file=tls.crt=gw-ssl-1.crt
  kubectl --context ${REMOTE_CONTEXT2} -n istio-system create secret generic gw-ssl-2-secret \
    --from-file=tls.key=gw-ssl-2.key \
    --from-file=tls.crt=gw-ssl-2.crt
  ```
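To sanity-check your OpenSSL setup before creating the real secrets, you can run a condensed, self-contained version of the certificate steps above. The `demo-*` file names and the 1-day validity are arbitrary choices for this check, and the files can be deleted afterward.

```shell
# Throwaway root CA
openssl req -new -newkey rsa:2048 -x509 -sha256 -days 1 -nodes \
  -out demo-root.crt -keyout demo-root.key -subj /CN=root

# Minimal request config with a SAN, mirroring the gw-ssl-*.conf files
cat > demo-ssl.conf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req]
subjectAltName = DNS:www.example.com
EOF

# Leaf key, CSR, and certificate signed by the throwaway root
openssl genrsa -out demo-ssl.key 2048
openssl req -new -key demo-ssl.key -out demo-ssl.csr -subj /CN=www.example.com -config demo-ssl.conf
openssl x509 -req -days 1 -CA demo-root.crt -CAkey demo-root.key -set_serial 0 \
  -in demo-ssl.csr -out demo-ssl.crt -extensions v3_req -extfile demo-ssl.conf

# The chain should verify and the SAN should match the hostname
openssl verify -CAfile demo-root.crt demo-ssl.crt
openssl x509 -in demo-ssl.crt -noout -checkhost www.example.com
```

If the last two commands do not report `OK` and a hostname match, your `openssl` is likely LibreSSL or outdated; see the prerequisites.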
Step 5: Install Istio and sample apps
The following steps install a simple profile of Istio. For production-level steps, see Install Istio.
You install two sample apps in your demo setup: Bookinfo and httpbin. These sample apps are used throughout the documentation to help test connectivity, such as in the policy guides.
- Complete Step 3 of the Getting Started guide to install Istio in each workload cluster. The steps vary depending on whether your platform is Kubernetes or OpenShift.
- Create the `bookinfo` namespace in each workload cluster.
  ```
  kubectl create ns bookinfo --context $REMOTE_CONTEXT1
  kubectl create ns bookinfo --context $REMOTE_CONTEXT2
  ```
- Complete Step 2: Deploy Bookinfo across Clusters in the Multicluster federation and isolation with Bookinfo guide.
- Create an `httpbin` app.
  ```
  kubectl --context $REMOTE_CONTEXT1 create ns httpbin
  kubectl --context $REMOTE_CONTEXT1 label namespace httpbin istio-injection=enabled
  kubectl --context $REMOTE_CONTEXT1 -n httpbin apply -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/policy-demo/httpbin.yaml
  ```
- Get the IP addresses for the Istio east-west and ingress gateways, as well as the HTTP and HTTPS addresses for ingress.
  ```
  kubectl get svc -A --context ${REMOTE_CONTEXT1} | grep gateway
  ```
  In the following example output, the addresses are as follows:
  - istio-eastwestgateway: 35.xxx.xx.81
  - istio-ingressgateway: 34.xxx.x.179
  - HTTP istio-ingressgateway: 34.xxx.x.179:80
  - HTTPS istio-ingressgateway: 34.xxx.x.179:443
  ```
  istio-system   istio-eastwestgateway   LoadBalancer   10.xx.x.xxx   35.xxx.xx.81   15021:32746/TCP,15443:31161/TCP
  istio-system   istio-ingressgateway    LoadBalancer   10.xx.x.xxx   34.xxx.x.179   15021:30194/TCP,80:30727/TCP,443:32488/TCP,15443:31475/TCP
  ```
- Set the Istio gateway addresses as environment variables.
  ```
  export EASTWEST_GW_IP=<istio-eastwestgateway-IP>
  export INGRESS_GW_IP=<istio-ingressgateway-IP>
  export INGRESS_GW_HTTP=<istio-ingressgateway-IP>:80
  export INGRESS_GW_HTTPS=<istio-ingressgateway-IP>:443
  echo $EASTWEST_GW_IP
  echo $INGRESS_GW_IP
  echo $INGRESS_GW_HTTP
  echo $INGRESS_GW_HTTPS
  ```
Optional Step 6: Install Keycloak
You might want to test how to restrict access to your applications to authenticated users, such as with external auth or JWT policies. You can install Keycloak in your cluster as an OpenID Connect (OIDC) provider.
The following steps install Keycloak in your cluster, and configure two user credentials as follows.
- Username: `user1`, password: `password`, email: `user1@example.com`
- Username: `user2`, password: `password`, email: `user2@solo.io`
Install and configure Keycloak:
- Create a namespace for your Keycloak deployment.
  ```
  kubectl --context ${MGMT_CONTEXT} create namespace keycloak
  ```
- Create the Keycloak deployment.
  ```
  kubectl --context ${MGMT_CONTEXT} -n keycloak apply -f https://raw.githubusercontent.com/solo-io/workshops/gloo-mesh-demo/gloo-mesh-2-0/data/steps/deploy-keycloak/keycloak.yaml
  ```
- Wait for the Keycloak rollout to finish.
  ```
  kubectl --context ${MGMT_CONTEXT} -n keycloak rollout status deploy/keycloak
  ```
- Set the Keycloak endpoint details from the load balancer service.
  ```
  export ENDPOINT_KEYCLOAK=$(kubectl --context ${MGMT_CONTEXT} -n keycloak get service keycloak -o jsonpath='{.status.loadBalancer.ingress[0].*}'):8080
  export HOST_KEYCLOAK=$(echo ${ENDPOINT_KEYCLOAK} | cut -d: -f1)
  export PORT_KEYCLOAK=$(echo ${ENDPOINT_KEYCLOAK} | cut -d: -f2)
  export KEYCLOAK_URL=http://${ENDPOINT_KEYCLOAK}/auth
  echo $KEYCLOAK_URL
  ```
- Set the Keycloak admin token. If you see a parsing error, try running the `curl` command by itself. Your network might be blocking the requests, which might require updating your security settings so that the request can be processed.
  ```
  export KEYCLOAK_TOKEN=$(curl -d "client_id=admin-cli" -d "username=admin" -d "password=admin" -d "grant_type=password" "$KEYCLOAK_URL/realms/master/protocol/openid-connect/token" | jq -r .access_token)
  echo $KEYCLOAK_TOKEN
  ```
- Use the admin token to configure Keycloak with the two users for testing purposes. If you get a `401 Unauthorized` error, rerun the previous command to refresh the token and try again.
  ```
  # Create an initial access token to register the client
  read -r client token <<<$(curl -H "Authorization: Bearer ${KEYCLOAK_TOKEN}" -X POST -H "Content-Type: application/json" -d '{"expiration": 0, "count": 1}' $KEYCLOAK_URL/admin/realms/master/clients-initial-access | jq -r '[.id, .token] | @tsv')
  export KEYCLOAK_CLIENT=${client}

  # Register the client
  read -r id secret <<<$(curl -X POST -d "{ \"clientId\": \"${KEYCLOAK_CLIENT}\" }" -H "Content-Type:application/json" -H "Authorization: bearer ${token}" ${KEYCLOAK_URL}/realms/master/clients-registrations/default | jq -r '[.id, .secret] | @tsv')
  export KEYCLOAK_SECRET=${secret}

  # Add allowed redirect URIs, using the HTTPS ingress address that you saved earlier
  curl -H "Authorization: Bearer ${KEYCLOAK_TOKEN}" -X PUT -H "Content-Type: application/json" -d '{"serviceAccountsEnabled": true, "directAccessGrantsEnabled": true, "authorizationServicesEnabled": true, "redirectUris": ["'https://${INGRESS_GW_HTTPS}'/callback"]}' $KEYCLOAK_URL/admin/realms/master/clients/${id}

  # Add the group attribute in the JWT token returned by Keycloak
  curl -H "Authorization: Bearer ${KEYCLOAK_TOKEN}" -X POST -H "Content-Type: application/json" -d '{"name": "group", "protocol": "openid-connect", "protocolMapper": "oidc-usermodel-attribute-mapper", "config": {"claim.name": "group", "jsonType.label": "String", "user.attribute": "group", "id.token.claim": "true", "access.token.claim": "true"}}' $KEYCLOAK_URL/admin/realms/master/clients/${id}/protocol-mappers/models

  # Create the first user
  curl -H "Authorization: Bearer ${KEYCLOAK_TOKEN}" -X POST -H "Content-Type: application/json" -d '{"username": "user1", "email": "user1@example.com", "enabled": true, "attributes": {"group": "users"}, "credentials": [{"type": "password", "value": "password", "temporary": false}]}' $KEYCLOAK_URL/admin/realms/master/users

  # Create the second user
  curl -H "Authorization: Bearer ${KEYCLOAK_TOKEN}" -X POST -H "Content-Type: application/json" -d '{"username": "user2", "email": "user2@solo.io", "enabled": true, "attributes": {"group": "users"}, "credentials": [{"type": "password", "value": "password", "temporary": false}]}' $KEYCLOAK_URL/admin/realms/master/users
  ```
What's next?
Now that you have Gloo Mesh Enterprise up and running, check out some of the following resources to learn more about Gloo Mesh or try other Gloo Mesh features.
- Browse Gloo Mesh traffic control and security policies to try out some of Gloo Mesh Enterprise's features.
- Check out the setup guide for advanced installation and cluster registration options.
- Talk to an expert to get advice or build out a proof of concept.
- Join the #gloo-mesh channel in the Solo.io community Slack.
- Try out one of the Gloo Mesh workshops.