Register workload clusters
After you install the Gloo Mesh management components, register clusters so that Gloo Mesh can identify and manage their service meshes.

When you installed Gloo Mesh Enterprise in the management cluster, a deployment named gloo-mesh-mgmt-server was created to run the relay server. The relay server is exposed by the gloo-mesh-mgmt-server LoadBalancer service. When you register workload clusters to be managed by Gloo Mesh Enterprise, a deployment named gloo-mesh-agent is created on each workload cluster to run a relay agent. Each relay agent is exposed by a gloo-mesh-agent ClusterIP service, from which all communication is outbound to the relay server on the management cluster. For more information about relay server-agent communication, see the relay architecture page. Cluster registration also creates a KubernetesCluster custom resource on the management cluster to represent the workload cluster and store relevant data, such as the workload cluster's local domain ("cluster.local").
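As an optional sanity check, you can inspect the two relay services yourself, using the context variables that you set in the following section. Note that the gloo-mesh-agent service exists only after you register the workload cluster.

kubectl get svc gloo-mesh-mgmt-server -n gloo-mesh --context $MGMT_CONTEXT   # TYPE: LoadBalancer
kubectl get svc gloo-mesh-agent -n gloo-mesh --context $REMOTE_CONTEXT       # TYPE: ClusterIP, after registration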
Before you begin

- Create or choose one or more workload clusters to register with Gloo Mesh. Note: The cluster name cannot include underscores (_).

- Set the names of your clusters from your infrastructure provider.

export MGMT_CLUSTER=<management_cluster_name>
export REMOTE_CLUSTER=<remote_cluster_name>

- Save the kubeconfig contexts for your clusters. Run kubectl config get-contexts, look for your cluster in the CLUSTER column, and get the context name in the NAME column.

export MGMT_CONTEXT=<management-cluster-context>
export REMOTE_CONTEXT=<remote-cluster-context>

- Production installations: Review Best practices for production to prepare your optional security measures. For example, if you provided your own certificates during Gloo Mesh installation, you can use these certificates during cluster registration too.

- To customize registration in detail, such as for production environments, register clusters with Helm. For quick registration, such as for testing environments, you can register clusters with meshctl.
Registering with Helm

Customize your cluster registration by using the gloo-mesh-agent Helm chart.

1. In the management cluster, create a KubernetesCluster resource to represent the workload cluster and store relevant data, such as the workload cluster's local domain.
   - The metadata.name must match the name of the workload cluster that you specify in the gloo-mesh-agent Helm chart in subsequent steps.
   - The spec.clusterDomain must match the local cluster domain of the Kubernetes cluster.
   - You can optionally give your cluster a label, such as env: prod, region: us-east, or another selector. Your workspaces can use the label to automatically add the cluster to the workspace.

kubectl apply --context $MGMT_CONTEXT -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: KubernetesCluster
metadata:
  name: ${REMOTE_CLUSTER}
  namespace: gloo-mesh
  labels:
    env: prod
spec:
  clusterDomain: cluster.local
EOF
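   To confirm that the resource was created with the labels that you expect, you can optionally check it from the management cluster:

kubectl get kubernetescluster ${REMOTE_CLUSTER} -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.metadata.labels}'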
2. Get the Gloo Mesh Enterprise version that the gloo-mesh-mgmt-server runs in the management cluster. The gloo-mesh-agent must run the same version.

meshctl version --kubecontext $MGMT_CONTEXT

   Example output:

{
  "server": [
    {
      "Namespace": "gloo-mesh",
      "components": [
        {
          "componentName": "gloo-mesh-mgmt-server",
          "images": [
            {
              "name": "gloo-mesh-mgmt-server",
              "domain": "gcr.io",
              "path": "gloo-mesh/gloo-mesh-mgmt-server",
              "version": "2.1.0-beta8"
            }
          ]
        },
...
3. Save the version as an environment variable.

export GLOO_MESH_VERSION=<version>
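   If you have jq installed, you can instead extract the version directly, assuming the JSON structure shown in the example output of the previous step:

export GLOO_MESH_VERSION=$(meshctl version --kubecontext $MGMT_CONTEXT | jq -r '.server[0].components[] | select(.componentName == "gloo-mesh-mgmt-server") | .images[0].version')
echo $GLOO_MESH_VERSION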
4. In the management cluster, find the external address and port that your cloud provider assigned to the gloo-mesh-mgmt-server load balancer service. The gloo-mesh-agent relay agent in each cluster accesses this address via a secure connection. Use the first set of commands if your load balancer is exposed with an IP address, or the second set if it is exposed with a hostname, such as on AWS.

MGMT_SERVER_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
MGMT_SERVER_NETWORKING_PORT=$(kubectl -n gloo-mesh get service gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
echo $MGMT_SERVER_NETWORKING_ADDRESS

MGMT_SERVER_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
MGMT_SERVER_NETWORKING_PORT=$(kubectl -n gloo-mesh get service gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
echo $MGMT_SERVER_NETWORKING_ADDRESS
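   Optionally, you can verify that the relay port is reachable from outside the cluster, assuming a tool such as nc (netcat) is available on your workstation:

nc -zv ${MGMT_SERVER_NETWORKING_DOMAIN} ${MGMT_SERVER_NETWORKING_PORT}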
5. Create the gloo-mesh namespace in your workload cluster.

kubectl create ns gloo-mesh --context $REMOTE_CONTEXT
6. Default certificates only: If you installed Gloo Mesh by using the default self-signed certificates, you must copy the root CA certificate to a secret in the workload cluster so that the relay agent will trust the TLS certificate from the relay server. You must also copy the bootstrap token that is used for initial communication to the workload cluster. This token is used only to validate initial communication between the relay agent and server. After the gRPC connection is established, the relay server issues a client certificate to the relay agent to establish a mutually authenticated TLS session.

   1. Get the value of the root CA certificate from the management cluster and create a secret in the workload cluster.

kubectl get secret relay-root-tls-secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context $REMOTE_CONTEXT --from-file ca.crt=ca.crt
rm ca.crt

   2. Get the bootstrap token from the management cluster and create a secret in the workload cluster.

kubectl get secret relay-identity-token-secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.token}' | base64 -d > token
kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context $REMOTE_CONTEXT --from-file token=token
rm token
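   To confirm that both secrets exist in the workload cluster before you continue, you can run:

kubectl get secrets relay-root-tls-secret relay-identity-token-secret -n gloo-mesh --context $REMOTE_CONTEXT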
7. Add and update the Helm repository for the Gloo Mesh Enterprise relay agent.

helm repo add gloo-mesh-agent https://storage.googleapis.com/gloo-mesh-enterprise/gloo-mesh-agent
helm repo update
8. Prepare a Helm values file, either with production-level settings or with the chart defaults. For a minimal sketch of the two required values, see the example after this step.

   Production-level settings: Edit the values-data-plane.yaml values file to provide your own details for settings that are recommended for production-level deployments, including FIPS-compliant images, custom certificates, and disabling rate limiting and external authentication in the gloo-mesh namespace. For more information about these settings, see Best practices for production and the agent Helm values documentation.

   1. Download the sample values file from GitHub to your local workstation.

curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/helm-install/2.1/values-data-plane.yaml > values-data-plane.yaml

   2. Update the Helm values file with the environment variables that you previously set for $REMOTE_CLUSTER and $MGMT_SERVER_NETWORKING_ADDRESS.

envsubst < values-data-plane.yaml > values-data-plane-env.yaml

   3. Provide your own details for settings that are recommended for production deployments, including custom certificates, disabling rate limiting and external authentication in the gloo-mesh namespace, and more. If you do not want to use these settings, you must comment them out.

   Default settings:

   1. Save the default Helm values. For more information, review the Gloo Mesh Enterprise agent Helm values documentation.

helm show values gloo-mesh-agent/gloo-mesh-agent --version $GLOO_MESH_VERSION > values-data-plane-env.yaml

   2. Edit the file to provide the required details.
      - For cluster, specify $REMOTE_CLUSTER.
      - For relay.serverAddress, specify $MGMT_SERVER_NETWORKING_ADDRESS.
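   Whichever option you choose, the deployment in the next step needs at least the cluster name and the relay server address. A minimal sketch of values-data-plane-env.yaml, with placeholders for your own values:

cluster: <remote_cluster_name>                          # your $REMOTE_CLUSTER
relay:
  serverAddress: <load_balancer_address>:<grpc_port>    # your $MGMT_SERVER_NETWORKING_ADDRESS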
9. Deploy the relay agent to the workload cluster.

helm install gloo-mesh-agent gloo-mesh-agent/gloo-mesh-agent \
  --namespace gloo-mesh \
  --kube-context=$REMOTE_CONTEXT \
  --version $GLOO_MESH_VERSION \
  --values values-data-plane-env.yaml

   Note: If you installed the Gloo Mesh management plane in insecure mode by including the --set insecure=true flag in the install command, include the --set insecure=true flag in each helm install gloo-mesh-agent command.
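   To verify that the agent deployment finished rolling out before you continue, you can run:

kubectl rollout status deploy/gloo-mesh-agent -n gloo-mesh --context $REMOTE_CONTEXT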
10. Repeat steps 1 and 5 - 9 to register each workload cluster with Gloo Mesh. Remember to change the variables for each cluster name and context.

export REMOTE_CLUSTER=<remote_cluster_name>
export REMOTE_CONTEXT=<remote-cluster-context>
Registering with meshctl

You can use the meshctl CLI tool to register your workload clusters.

1. Register the workload cluster. The meshctl command completes the following:
   - Creates the gloo-mesh namespace
   - Copies the root CA certificate to the workload cluster
   - Copies the bootstrap token to the workload cluster
   - Installs the relay agent in the workload cluster
   - Creates the KubernetesCluster custom resource in the management cluster

meshctl cluster register \
  --kubecontext=$MGMT_CONTEXT \
  --remote-context=$REMOTE_CONTEXT \
  --version $GLOO_MESH_VERSION \
  $REMOTE_CLUSTER

   Note: If you installed the Gloo Mesh management plane in insecure mode by running meshctl install --set insecure=true, include the --relay-server-insecure=true flag in each meshctl cluster register command.

   Example output:

Registering cluster
Copying root CA relay-root-tls-secret.gloo-mesh to remote cluster from management cluster
Copying bootstrap token relay-identity-token-secret.gloo-mesh to remote cluster from management cluster
Installing relay agent in the remote cluster
Finished installing chart 'gloo-mesh-agent' as release gloo-mesh:gloo-mesh-agent
Creating remote.cluster KubernetesCluster CRD in management cluster
Waiting for relay agent to have a client certificate
Checking...
Checking...
Removing bootstrap token
Done registering cluster!

2. Repeat this step to register each workload cluster with Gloo Mesh. Remember to change the variables for each cluster name and context.

export REMOTE_CLUSTER=<remote_cluster_name>
export REMOTE_CONTEXT=<remote-cluster-context>
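If you have many workload clusters, you can script the registration. The following sketch assumes a list of hypothetical name:context pairs and reuses the same flags as above:

for pair in cluster-1:kind-cluster-1 cluster-2:kind-cluster-2; do
  REMOTE_CLUSTER=${pair%%:*}     # the part before the colon
  REMOTE_CONTEXT=${pair##*:}     # the part after the colon
  meshctl cluster register \
    --kubecontext=$MGMT_CONTEXT \
    --remote-context=$REMOTE_CONTEXT \
    --version $GLOO_MESH_VERSION \
    $REMOTE_CLUSTER
done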
Verifying the registration

After you register a workload cluster, verify that the relay agent is successfully deployed and that the management cluster identified the workload cluster.

1. Verify that the relay agent pod has a status of Running.

kubectl get pods -n gloo-mesh --context $REMOTE_CONTEXT

   Example output:

NAME                               READY   STATUS    RESTARTS   AGE
gloo-mesh-agent-64fc8cc9c5-v7b97   1/1     Running   0          25m

2. Verify that each workload cluster is successfully registered with Gloo Mesh.

kubectl get kubernetescluster -n gloo-mesh --context $MGMT_CONTEXT

   Example output:

NAME        AGE
cluster-1   27s
cluster-2   23s
Verify the relay connection

1. Check that the relay connection between the management server and workload agents is healthy. For a command-line alternative to the browser check, see the curl example after these substeps.
   1. Forward port 9091 of the gloo-mesh-mgmt-server pod to your localhost.

kubectl port-forward -n gloo-mesh --context $MGMT_CONTEXT deploy/gloo-mesh-mgmt-server 9091

   2. In your browser, connect to http://localhost:9091/metrics.
   3. In the metrics UI, look for the following lines. If the values are 1, the agents in the workload clusters are successfully registered with the management server. If the values are 0, the agents are not successfully connected. A value of 1 for the warmed metrics indicates that the management server can push configuration to the agents.

relay_pull_clients_connected{cluster="cluster-1"} 1
relay_pull_clients_connected{cluster="cluster-2"} 1
relay_push_clients_connected{cluster="cluster-1"} 1
relay_push_clients_connected{cluster="cluster-2"} 1
relay_push_clients_warmed{cluster="cluster-1"} 1
relay_push_clients_warmed{cluster="cluster-2"} 1

   4. Take snapshots in case you want to refer to the logs later, such as to open a Support issue.

curl localhost:9091/snapshots/input -o input_snapshot.json
curl localhost:9091/snapshots/output -o output_snapshot.json
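   If you prefer the command line, you can check the same metrics with curl while the port-forward from the first substep is still running:

curl -s localhost:9091/metrics | grep relay_pull_clients
curl -s localhost:9091/metrics | grep relay_push_clients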
2. Check that the Gloo Mesh management services are running.
   1. Send a gRPC request to the Gloo Mesh management server.

kubectl get secret --context $MGMT_CONTEXT -n gloo-mesh relay-root-tls-secret -o json | jq -r '.data["ca.crt"]' | base64 -d > ca.crt
grpcurl -authority gloo-mesh-mgmt-server.gloo-mesh --cacert=./ca.crt $MGMT_SERVER_NETWORKING_ADDRESS list

   2. Verify that the following services are listed.

envoy.service.accesslog.v3.AccessLogService
envoy.service.metrics.v2.MetricsService
envoy.service.metrics.v3.MetricsService
grpc.reflection.v1alpha.ServerReflection
relay.multicluster.skv2.solo.io.RelayCertificateService
relay.multicluster.skv2.solo.io.RelayPullServer
relay.multicluster.skv2.solo.io.RelayPushServer
3. Check the logs on the gloo-mesh-mgmt-server pod on the management cluster for communication from the workload cluster.

kubectl -n gloo-mesh --context $MGMT_CONTEXT logs deployment/gloo-mesh-mgmt-server | grep $REMOTE_CLUSTER

   Example output:

{"level":"debug","ts":1616160185.5505846,"logger":"pull-resource-deltas","msg":"recieved request for delta: response_nonce:\"1\"","metadata":{":authority":["gloo-mesh-mgmt-server.gloo-mesh.svc.cluster.local:11100"],"content-type":["application/grpc"],"user-agent":["grpc-go/1.34.0"],"x-cluster-id":["remote.cluster"]},"peer":"10.244.0.17:40074"}

   To increase the verbosity of the logs, you can patch the management server deployment.

kubectl patch deploy -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT --type "json" -p '[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--verbose=true"}]'
Optional: Setting up rate limiting and external authentication

To enable mTLS with rate limiting and external authentication, you must add an injection directive for those components. Although you can enable an injection directive on the gloo-mesh namespace, this directive makes the management plane components dependent on the functionality of Istio's mutating webhook, which can be a fragile coupling and is not recommended as a best practice. In production setups, install the Gloo Mesh Enterprise chart with only the rate limiting and external authentication services enabled to the gloo-mesh-addons namespace, and label the gloo-mesh-addons namespace for Istio injection.

1. Create the gloo-mesh-addons namespace.

kubectl create ns gloo-mesh-addons --context $REMOTE_CONTEXT

2. In a values.yaml file, enable rate limiting and external authentication, and disable the relay agent.

rate-limiter:
  enabled: true
ext-auth-service:
  enabled: true
glooMeshAgent:
  enabled: false

3. Create a gloo-mesh-addons release from the Gloo Mesh agent Helm chart to install only rate limiting and external authentication in the gloo-mesh-addons namespace.

helm install gloo-mesh-agent-addons gloo-mesh-agent/gloo-mesh-agent \
  --namespace gloo-mesh-addons \
  --kube-context=$REMOTE_CONTEXT \
  --values values.yaml

4. Label the gloo-mesh-addons namespace for Istio injection.

kubectl --context $REMOTE_CONTEXT label ns gloo-mesh-addons istio-injection=enabled --overwrite
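   To confirm that the label was applied, you can run:

kubectl get ns gloo-mesh-addons --context $REMOTE_CONTEXT --show-labels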
5. Verify that the rate limiting and external authentication components are successfully installed. The 2/2 READY count indicates that an Istio sidecar was injected alongside each component.

kubectl get pods -n gloo-mesh-addons --context $REMOTE_CONTEXT

   Example output:

NAME                                READY   STATUS    RESTARTS   AGE
rate-limit-3d62244cdb-fcrvd         2/2     Running   0          4m2s
ext-auth-service-3d62244cdb-fcrvd   2/2     Running   0          4m2s
Next, you can check out the guide for rate limiting to use this feature.
Optional: Configure the locality labels for the nodes

Gloo Mesh uses Kubernetes labels on the nodes in your clusters to indicate locality for the services that run on the nodes. For more information, see the Kubernetes topology and Istio locality documentation.

- Cloud: Typically, your cloud provider sets the Kubernetes region and zone labels for each node automatically. Depending on the level of availability that you want, you might have clusters in the same region, but different zones. Or, each cluster might be in a different region, with nodes spread across zones.
- On-premises: Depending on how you set up your cluster, you likely must set the region and zone labels for each node yourself. Additionally, consider setting a subzone label to specify nodes on the same rack or other more granular setups.
Verify that your nodes have locality labels

Verify that your nodes have at least region and zone labels. If so, and you do not want to update the labels, you can skip the remaining steps.

kubectl get nodes --context $REMOTE_CONTEXT1 -o jsonpath='{.items[*].metadata.labels}'
kubectl get nodes --context $REMOTE_CONTEXT2 -o jsonpath='{.items[*].metadata.labels}'

Example output with region and zone labels:

..."topology.kubernetes.io/region":"us-east","topology.kubernetes.io/zone":"us-east-2"
Add locality labels to your nodes

If your nodes do not already have region and zone labels, you must add the labels. Depending on your cluster setup, you might add the same region label to each node, but a separate zone label per node. The values are not validated against your underlying infrastructure provider. The following example shows how you might label multizone clusters in two different regions, but you can adapt the steps for your actual setup.

1. Label all the nodes in each cluster for the region. If your nodes have incorrect region labels, include the --overwrite flag in the command.

kubectl label nodes --all --context $REMOTE_CONTEXT1 topology.kubernetes.io/region=us-east
kubectl label nodes --all --context $REMOTE_CONTEXT2 topology.kubernetes.io/region=us-west

2. List the nodes in each cluster. Note the name for each node.

kubectl get nodes --context $REMOTE_CONTEXT1
kubectl get nodes --context $REMOTE_CONTEXT2

3. Label each node in each cluster for the zone. If your nodes have incorrect zone labels, include the --overwrite flag in the command.

kubectl label node <cluster-1_node-1> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-1
kubectl label node <cluster-1_node-2> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-2
kubectl label node <cluster-1_node-3> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-3
kubectl label node <cluster-2_node-1> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-1
kubectl label node <cluster-2_node-2> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-2
kubectl label node <cluster-2_node-3> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-3
Next Steps
The Gloo Mesh Enterprise management plane and the workload clusters in the data plane can now communicate over mTLS to continuously discover and configure your service meshes and workloads.
Now that you have Gloo Mesh Enterprise up and running, check out some of the following resources to learn more about Gloo Mesh or try other Gloo Mesh features.
- Install Istio into each workload cluster.
- Configure workspaces to create boundaries for your teams’ resources.
- Review how Gloo Mesh custom resources are automatically translated into Istio resources.
- Browse the complete set of Gloo Mesh guides to try out some of Gloo Mesh Enterprise's features.
- Talk to an expert to get advice or build out a proof of concept.
- Join the #gloo-mesh channel in the Solo.io community Slack.
- Try out one of the Gloo Mesh workshops.