Install Gloo
Install the Gloo Platform management components in one cluster, and register workload clusters with Gloo Mesh.
Your Gloo setup consists of a management plane and a data plane.
- Management plane: For production use cases, install the Gloo management components in a dedicated management cluster.
- Data plane: Set up one or more workload clusters that run service meshes, which are then registered with and managed by the management cluster.
Before you begin
1. Add your Gloo Mesh Enterprise license that you got from your Solo account representative. If you do not have a key yet, you can get a trial license by contacting an account representative. If you prefer to specify license keys in a secret instead, see Prepare to install.
   ```sh
   export GLOO_MESH_LICENSE_KEY=<license_key>
   ```
2. Install the following CLI tools:
   - `kubectl`, the Kubernetes command line tool. Download the `kubectl` version that is within one minor version of the Kubernetes clusters you plan to use.
   - `meshctl`, the Gloo command line tool for bootstrapping Gloo Platform, registering clusters, describing configured resources, and more. Be sure to download version 2.2.5, which uses the latest Gloo Mesh installation values.
     ```sh
     curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.2.5 sh -
     export PATH=$HOME/.gloo-mesh/bin:$PATH
     ```
3. Create or use existing Kubernetes clusters. For a multicluster setup, you need at least two clusters. One cluster is set up as the Gloo management plane where Gloo Platform components are installed. The other cluster runs your Kubernetes workloads and service meshes. You can optionally add more workload clusters to your setup. The instructions in this guide assume one management cluster and two remote workload clusters. Note: The cluster name cannot include underscores (`_`).
4. Set the names of your clusters from your infrastructure provider. If your clusters have different names, specify those names instead.
   ```sh
   export MGMT_CLUSTER=mgmt-cluster
   export REMOTE_CLUSTER1=cluster-1
   export REMOTE_CLUSTER2=cluster-2
   ```
5. Save the kubeconfig contexts for your clusters. Run `kubectl config get-contexts`, look for your cluster in the `CLUSTER` column, and get the context name in the `NAME` column. Note: Do not use context names with underscores. The context name is used as a SAN specification in the generated certificate that connects workload clusters to the management cluster, and underscores in SANs are not FQDN compliant. You can rename a context by running `kubectl config rename-context "<oldcontext>" <newcontext>`.
   ```sh
   export MGMT_CONTEXT=<management-cluster-context>
   export REMOTE_CONTEXT1=<remote-cluster-1-context>
   export REMOTE_CONTEXT2=<remote-cluster-2-context>
   ```
6. Set the Gloo Mesh Enterprise version. The latest version is used as an example. You can find other versions in the Changelog documentation. Append `-fips` for a FIPS-compliant image, such as `2.2.5-fips`. Do not include `v` before the version number. Note: Gloo Platform version 2.2.5 is not compatible with previous 1.x releases and custom resources such as VirtualMesh or TrafficPolicy.
   ```sh
   export GLOO_VERSION=2.2.5
   ```
7. Decide how to install. To customize your installation in detail, such as for production environments, install with Helm. For quick installations, such as for testing environments, you can install with `meshctl`.
Install with Helm
Customize your Gloo setup by installing with the Gloo Platform Helm chart.
Install the management components
1. Production installations: Review Best practices for production to prepare your optional security measures. For example, before you begin your Gloo installation, you can provide your own certificates and set up secure access to the Gloo UI.
2. Install `helm`, the Kubernetes package manager.
3. Add and update the Helm repositories for Gloo Platform.
   ```sh
   helm repo add gloo-mesh-enterprise https://storage.googleapis.com/gloo-mesh-enterprise/gloo-mesh-enterprise
   helm repo add gloo-mesh-agent https://storage.googleapis.com/gloo-mesh-enterprise/gloo-mesh-agent
   helm repo update
   ```
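   Optionally, you can confirm that the chart version you plan to install is now available in the repositories. This is a quick sanity check with a standard Helm command, not a required step.
   ```sh
   helm search repo gloo-mesh-enterprise/gloo-mesh-enterprise --versions | head
   ```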
4. Install the Gloo management Helm chart, using either default settings or production-level settings.

   Default settings: Use the default settings for the Helm installation, including default certificates.
   1. Optional: Check out the default Helm values. For more information, review the Gloo management Helm values documentation.
      ```sh
      helm show values gloo-mesh-enterprise/gloo-mesh-enterprise --version $GLOO_VERSION > values-mgmt-plane-env.yaml
      open values-mgmt-plane-env.yaml
      ```
   2. Install the Gloo management Helm chart in the `gloo-mesh` namespace.
      ```sh
      helm install gloo-mgmt gloo-mesh-enterprise/gloo-mesh-enterprise \
        --namespace gloo-mesh \
        --create-namespace \
        --kube-context $MGMT_CONTEXT \
        --version $GLOO_VERSION \
        --set glooMeshLicenseKey=$GLOO_MESH_LICENSE_KEY \
        --set global.cluster=$MGMT_CLUSTER
      ```
   Production settings: You can edit the `values-mgmt-plane.yaml` values file to provide your own details for settings that are recommended for production-level deployments, including FIPS-compliant images, custom certificates, and OIDC authorization for the Gloo UI. Additionally, this values file includes a `glooMeshMgmtServer.serviceOverrides` section, which applies the recommended Amazon Web Services (AWS) annotations for modifying the deployed load balancer service. For more information about these settings, see Best practices for production and the Helm values documentation for each component.
   1. Download the sample values file from GitHub to your local workstation.
      ```sh
      curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/helm-install/2.2/values-mgmt-plane.yaml > values-mgmt-plane.yaml
      ```
   2. Update the Helm values file with the environment variables that you previously set for `$MGMT_CLUSTER`, `$GLOO_MESH_LICENSE_KEY`, and `$GLOO_VERSION`. Save the updated file as `values-mgmt-plane-env.yaml`.
      Tip: Instead of updating the file manually, try running a terminal command to substitute values, such as the following command.
      ```sh
      envsubst '${MGMT_CLUSTER},${GLOO_MESH_LICENSE_KEY},${GLOO_VERSION}' < values-mgmt-plane.yaml > values-mgmt-plane-env.yaml
      open values-mgmt-plane-env.yaml
      ```
   3. Edit the file to provide your own details for settings that are recommended for production deployments, such as the following. A trimmed sketch of a customized file follows this list.
      - Provide your custom certificates in the `glooMeshMgmtServer.relay` section. Otherwise, you can enable the default Gloo CA relay certificates.
      - Optionally set up OIDC authorization for the Gloo UI in the `glooMeshUi.auth` section. OIDC is disabled by default.
      - For OpenShift clusters, set all instances of `floatingUserId` to `true`.
      - Review the other Helm value settings for changes that you might want to make. For example, you might use Gloo Mesh with other Gloo products such as Gloo Gateway and provide the `glooGatewayLicenseKey`.
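      For orientation, the following is a trimmed, illustrative sketch of what the customized file might look like. Only keys named in this guide are shown; the downloaded sample file is the source of truth, and any sub-keys not listed here are placeholders.
      ```yaml
      # Illustrative sketch only -- start from the downloaded values-mgmt-plane.yaml.
      # Sub-keys that are not named in this guide are intentionally left as placeholders.
      glooMeshLicenseKey: ${GLOO_MESH_LICENSE_KEY}   # substituted by envsubst
      global:
        cluster: ${MGMT_CLUSTER}
      glooMeshMgmtServer:
        floatingUserId: true        # OpenShift clusters only
        relay: {}                   # reference your custom relay certificates here
        serviceOverrides: {}        # AWS load balancer annotations (pre-populated in the sample file)
      glooMeshUi:
        floatingUserId: true        # OpenShift clusters only
        auth: {}                    # optional OIDC settings; disabled by default
      ```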
   4. Install the Gloo management Helm chart in the `gloo-mesh` namespace, including the customizations in your Helm values file.
      ```sh
      helm install gloo-mgmt gloo-mesh-enterprise/gloo-mesh-enterprise \
        --namespace gloo-mesh \
        --create-namespace \
        --kube-context $MGMT_CONTEXT \
        --version $GLOO_VERSION \
        --values values-mgmt-plane-env.yaml
      ```
5. Verify that the Gloo component pods have a status of `Running`.
   ```sh
   kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
   ```
   Example output:
   ```
   NAME                                    READY   STATUS    RESTARTS   AGE
   gloo-mesh-mgmt-server-7cdcbcbd4-4s8wp   1/1     Running   0          30s
   gloo-mesh-redis-794d79b7df-r2rtp        1/1     Running   0          30s
   gloo-mesh-ui-748fd66f5c-lftcx           3/3     Running   0          30s
   prometheus-server-647b488bb-vg7t5       2/2     Running   0          30s
   ```
6. Save the external address and port that were assigned by your cloud provider to the `gloo-mesh-mgmt-server` load balancer service. The `gloo-mesh-agent` relay agent in each cluster accesses this address via a secure connection.
   If your cloud provider assigns an IP address to the load balancer:
   ```sh
   MGMT_SERVER_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
   MGMT_SERVER_NETWORKING_PORT=$(kubectl -n gloo-mesh get service gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
   MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
   echo $MGMT_SERVER_NETWORKING_ADDRESS
   ```
   If your cloud provider assigns a hostname to the load balancer, such as on AWS:
   ```sh
   MGMT_SERVER_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
   MGMT_SERVER_NETWORKING_PORT=$(kubectl -n gloo-mesh get service gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
   MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
   echo $MGMT_SERVER_NETWORKING_ADDRESS
   ```
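   If you are not sure which of the two applies, you can inspect the service first; the EXTERNAL-IP column shows whichever address type your provider assigned:
   ```sh
   kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT
   ```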
7. Create a workspace that selects all clusters and namespaces by default. Gloo workspaces let you organize team resources across Kubernetes namespaces and clusters. In this example, you create a single workspace for everything. For more complex setups, such as creating a workspace for each team to enforce service isolation, set up federation, and even share resources by importing and exporting, see Organize team resources with workspaces.
   ```sh
   kubectl apply --context $MGMT_CONTEXT -f- <<EOF
   apiVersion: admin.gloo.solo.io/v2
   kind: Workspace
   metadata:
     name: $MGMT_CLUSTER
     namespace: gloo-mesh
   spec:
     workloadClusters:
       - name: '*'
         namespaces:
           - name: '*'
   EOF
   ```
8. Create a WorkspaceSettings resource for the workspace that configures federation across clusters and selects the Istio east-west gateway.
   ```sh
   kubectl apply --context $MGMT_CONTEXT -f- <<EOF
   apiVersion: admin.gloo.solo.io/v2
   kind: WorkspaceSettings
   metadata:
     name: $MGMT_CLUSTER
     namespace: gloo-mesh
   spec:
     options:
       serviceIsolation:
         enabled: false
       federation:
         enabled: false
         serviceSelector:
         - {}
       eastWestGateways:
       - selector:
           labels:
             istio: eastwestgateway
   EOF
   ```
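   Optionally, you can confirm that both resources were created. This check assumes the default lowercase plural resource names for these custom resources:
   ```sh
   kubectl get workspaces -n gloo-mesh --context $MGMT_CONTEXT
   kubectl get workspacesettings -n gloo-mesh --context $MGMT_CONTEXT
   ```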
Register workload clusters
Register each workload cluster with the management server by deploying the relay agent.
1. For the workload cluster that you want to register with Gloo, set the following environment variables. You update these variables each time you register another workload cluster.
   ```sh
   export REMOTE_CLUSTER=$REMOTE_CLUSTER1
   export REMOTE_CONTEXT=$REMOTE_CONTEXT1
   ```
2. Create a `KubernetesCluster` resource in the management cluster to represent the workload cluster and store relevant data, such as the workload cluster's local domain.
   - The `metadata.name` must match the name of the workload cluster that you specify in the `gloo-mesh-agent` Helm chart in subsequent steps.
   - The `spec.clusterDomain` must match the local cluster domain of the Kubernetes cluster.
   - You can optionally give your cluster a label, such as `env: prod`, `region: us-east`, or another selector. Your workspaces can use the label to automatically add the cluster to the workspace.
   ```sh
   kubectl apply --context $MGMT_CONTEXT -f- <<EOF
   apiVersion: admin.gloo.solo.io/v2
   kind: KubernetesCluster
   metadata:
     name: ${REMOTE_CLUSTER}
     namespace: gloo-mesh
     labels:
       env: prod
   spec:
     clusterDomain: cluster.local
   EOF
   ```
3. Install the Gloo agent Helm chart, using either default settings or production-level settings.

   Default settings: Use the default settings for the Helm installation, including default certificates.
   1. If you used default settings to install the management components, including default certificates, you must copy the root CA certificate to a secret in the workload cluster so that the relay agent trusts the TLS certificate from the relay server. You must also copy the bootstrap token used for initial communication to the workload cluster. This token is used only to validate initial communication between the relay agent and server. After the gRPC connection is established, the relay server issues a client certificate to the relay agent to establish a mutually authenticated TLS session. If the `gloo-mesh` namespace does not yet exist in the workload cluster, create it first, such as with `kubectl create namespace gloo-mesh --context $REMOTE_CONTEXT`.
      - Get the value of the root CA certificate from the management cluster and create a secret in the workload cluster.
        ```sh
        kubectl get secret relay-root-tls-secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
        kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context $REMOTE_CONTEXT --from-file ca.crt=ca.crt
        rm ca.crt
        ```
      - Get the bootstrap token from the management cluster and create a secret in the workload cluster.
        ```sh
        kubectl get secret relay-identity-token-secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.token}' | base64 -d > token
        kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context $REMOTE_CONTEXT --from-file token=token
        rm token
        ```
   2. Optional: Check out the default Helm values. For more information, review the Gloo agent Helm values documentation.
      ```sh
      helm show values gloo-mesh-agent/gloo-mesh-agent --version $GLOO_VERSION > values-data-plane-env.yaml
      open values-data-plane-env.yaml
      ```
   3. Deploy the relay agent to the workload cluster.
      ```sh
      helm install gloo-agent gloo-mesh-agent/gloo-mesh-agent \
        --namespace gloo-mesh \
        --create-namespace \
        --kube-context $REMOTE_CONTEXT \
        --version $GLOO_VERSION \
        --set cluster=$REMOTE_CLUSTER \
        --set relay.serverAddress=$MGMT_SERVER_NETWORKING_ADDRESS
      ```
   Production settings: You can edit the `values-data-plane.yaml` values file to provide your own details for settings that are recommended for production-level deployments, including FIPS-compliant images, custom certificates, and disabling rate limiting and external authentication in the `gloo-mesh` namespace. For more information about these settings, see Best practices for production and the agent Helm values documentation.
   1. Download the sample values file from GitHub to your local workstation.
      ```sh
      curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/helm-install/2.2/values-data-plane.yaml > values-data-plane.yaml
      ```
   2. Update the Helm values file with the environment variables that you previously set for `$REMOTE_CLUSTER`, `$MGMT_SERVER_NETWORKING_ADDRESS`, and `$GLOO_VERSION`. Save the updated file as `values-data-plane-env.yaml`.
      Tip: Instead of updating the file manually, try running a terminal command to substitute values, such as the following command.
      ```sh
      envsubst < values-data-plane.yaml > values-data-plane-env.yaml
      open values-data-plane-env.yaml
      ```
   3. Edit the file to provide your own details for settings that are recommended for production deployments, such as the following settings.
      - Provide references to your custom certificates in the `relay` section. Otherwise, you can enable the default Gloo CA relay certificates.
      - For OpenShift clusters, set `floatingUserId` to `true`.
      - Review the other Helm value settings for changes that you might want to make. For example, you might use deployment overrides to provide settings such as node selectors.
   4. Deploy the relay agent to the workload cluster. A trimmed sketch of a customized values file follows this step.
      ```sh
      helm install gloo-agent gloo-mesh-agent/gloo-mesh-agent \
        --namespace gloo-mesh \
        --create-namespace \
        --kube-context $REMOTE_CONTEXT \
        --version $GLOO_VERSION \
        --values values-data-plane-env.yaml
      ```
4. Verify that the relay agent pod has a status of `Running`. If not, try debugging the agent.
   ```sh
   kubectl get pods -n gloo-mesh --context $REMOTE_CONTEXT
   ```
   Example output:
   ```
   NAME                               READY   STATUS    RESTARTS   AGE
   gloo-mesh-agent-64fc8cc9c5-v7b97   1/1     Running   0          30s
   ```
5. Repeat steps 1 - 4 to register each workload cluster with Gloo.
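   For example, to register the second workload cluster, reset the environment variables from step 1 to point at that cluster before you run through the steps again:
   ```sh
   export REMOTE_CLUSTER=$REMOTE_CLUSTER2
   export REMOTE_CONTEXT=$REMOTE_CONTEXT2
   ```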
6. Verify that each workload cluster is successfully registered with Gloo.
   ```sh
   kubectl get kubernetescluster -n gloo-mesh --context $MGMT_CONTEXT
   ```
   Example output:
   ```
   NAME        AGE
   cluster-1   27s
   cluster-2   23s
   ```
7. Verify that the workload clusters are successfully identified by the management plane. If not, try debugging the relay connection. Note that this check might take a few seconds to ensure that the expected relay agents are now running and are connected to the relay server in the management cluster.
   ```sh
   meshctl check --kubecontext $MGMT_CONTEXT
   ```
   Example output:
   ```
   Checking Gloo Mesh Management Cluster Installation
   🟢 Gloo Mgmt Server Deployment Status
   🟢 Gloo Mgmt Server Connectivity to Agents
   +-----------+------------+--------------------------------------------------+
   |  CLUSTER  | REGISTERED |                  CONNECTED POD                   |
   +-----------+------------+--------------------------------------------------+
   | cluster-1 | true       | gloo-mesh/gloo-mesh-mgmt-server-676f4b9945-2pngd |
   +-----------+------------+--------------------------------------------------+
   | cluster-2 | true       | gloo-mesh/gloo-mesh-mgmt-server-676f4b9945-2pngd |
   +-----------+------------+--------------------------------------------------+
   ```
Install with meshctl
Quickly install Gloo by using `meshctl`, such as for testing purposes.
Install the management components
Start by installing the Gloo management components in your management cluster.
1. Install the Gloo management components in the management cluster. `meshctl install` creates a self-signed certificate authority for mTLS if you do not supply your own certificates. If you prefer to set up Gloo without secure communication for quick demonstrations, include the `--set insecure=true` flag. Note that using the default self-signed CAs or using insecure mode is not suitable for production environments.
   For most Kubernetes clusters:
   ```sh
   meshctl install --namespace gloo-mesh \
     --kubecontext $MGMT_CONTEXT \
     --license $GLOO_MESH_LICENSE_KEY \
     --version $GLOO_VERSION \
     --set global.cluster=$MGMT_CLUSTER
   ```
   For OpenShift clusters, also set the `floatingUserId` values:
   ```sh
   meshctl install --namespace gloo-mesh \
     --kubecontext $MGMT_CONTEXT \
     --license $GLOO_MESH_LICENSE_KEY \
     --version $GLOO_VERSION \
     --set global.cluster=$MGMT_CLUSTER \
     --set glooMeshMgmtServer.floatingUserId=true \
     --set glooMeshUi.floatingUserId=true \
     --set glooMeshRedis.floatingUserId=true
   ```
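   For a quick, non-production demo without secure relay communication, you might add the insecure flag that is mentioned above, for example:
   ```sh
   meshctl install --namespace gloo-mesh \
     --kubecontext $MGMT_CONTEXT \
     --license $GLOO_MESH_LICENSE_KEY \
     --version $GLOO_VERSION \
     --set global.cluster=$MGMT_CLUSTER \
     --set insecure=true
   ```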
2. Verify that the Gloo component pods have a status of `Running`.
   ```sh
   kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
   ```
   Example output:
   ```
   NAME                                    READY   STATUS    RESTARTS   AGE
   gloo-mesh-mgmt-server-7cdcbcbd4-4s8wp   1/1     Running   0          30s
   gloo-mesh-redis-794d79b7df-r2rtp        1/1     Running   0          30s
   gloo-mesh-ui-748fd66f5c-lftcx           3/3     Running   0          30s
   prometheus-server-647b488bb-vg7t5       2/2     Running   0          30s
   ```
3. Create a workspace that selects all clusters and namespaces by default. Gloo workspaces let you organize team resources across Kubernetes namespaces and clusters. In this example, you create a single workspace for everything. For more complex setups, such as creating a workspace for each team to enforce service isolation, set up federation, and even share resources by importing and exporting, see Organize team resources with workspaces.
   ```sh
   kubectl apply --context $MGMT_CONTEXT -f- <<EOF
   apiVersion: admin.gloo.solo.io/v2
   kind: Workspace
   metadata:
     name: $MGMT_CLUSTER
     namespace: gloo-mesh
   spec:
     workloadClusters:
       - name: '*'
         namespaces:
           - name: '*'
   EOF
   ```
4. Create a WorkspaceSettings resource for the workspace that configures federation across clusters and selects the Istio east-west gateway.
   ```sh
   kubectl apply --context $MGMT_CONTEXT -f- <<EOF
   apiVersion: admin.gloo.solo.io/v2
   kind: WorkspaceSettings
   metadata:
     name: $MGMT_CLUSTER
     namespace: gloo-mesh
   spec:
     options:
       serviceIsolation:
         enabled: false
       federation:
         enabled: false
         serviceSelector:
         - {}
       eastWestGateways:
       - selector:
           labels:
             istio: eastwestgateway
   EOF
   ```
Register workload clusters
Register each workload cluster with the management server by deploying the relay agent.
1. Register the workload clusters. The `meshctl` command completes the following:
   - Creates the `gloo-mesh` namespace
   - Copies the root CA certificate to the workload cluster
   - Copies the bootstrap token to the workload cluster
   - Installs the relay agent in the workload cluster
   - Creates the KubernetesCluster resource in the management cluster

   For most Kubernetes clusters:
   ```sh
   meshctl cluster register $REMOTE_CLUSTER1 \
     --kubecontext $MGMT_CONTEXT \
     --remote-context $REMOTE_CONTEXT1 \
     --version $GLOO_VERSION

   meshctl cluster register $REMOTE_CLUSTER2 \
     --kubecontext $MGMT_CONTEXT \
     --remote-context $REMOTE_CONTEXT2 \
     --version $GLOO_VERSION
   ```
   For OpenShift clusters, also set the `floatingUserId` value:
   ```sh
   meshctl cluster register $REMOTE_CLUSTER1 \
     --kubecontext $MGMT_CONTEXT \
     --remote-context $REMOTE_CONTEXT1 \
     --version $GLOO_VERSION \
     --set glooMeshAgent.floatingUserId=true

   meshctl cluster register $REMOTE_CLUSTER2 \
     --kubecontext $MGMT_CONTEXT \
     --remote-context $REMOTE_CONTEXT2 \
     --version $GLOO_VERSION \
     --set glooMeshAgent.floatingUserId=true
   ```
   If you installed the Gloo management plane in insecure mode, include the `--relay-server-insecure=true` flag in this command, as shown in the example after this step.
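   For example, an insecure-mode registration of the first workload cluster might look like the following:
   ```sh
   meshctl cluster register $REMOTE_CLUSTER1 \
     --kubecontext $MGMT_CONTEXT \
     --remote-context $REMOTE_CONTEXT1 \
     --version $GLOO_VERSION \
     --relay-server-insecure=true
   ```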
2. Verify that the relay agent pod has a status of `Running`. If not, try debugging the agent.
   ```sh
   kubectl get pods -n gloo-mesh --context $REMOTE_CONTEXT1
   kubectl get pods -n gloo-mesh --context $REMOTE_CONTEXT2
   ```
   Example output for `cluster-1`:
   ```
   NAME                               READY   STATUS    RESTARTS   AGE
   gloo-mesh-agent-64fc8cc9c5-v7b97   1/1     Running   0          30s
   ```
3. Verify that each workload cluster is successfully registered with Gloo.
   ```sh
   kubectl get kubernetescluster -n gloo-mesh --context $MGMT_CONTEXT
   ```
   Example output:
   ```
   NAME        AGE
   cluster-1   27s
   cluster-2   23s
   ```
4. Verify that the workload clusters are successfully identified by the management plane. If not, try debugging the relay connection. Note that this check might take a few seconds to ensure that the expected relay agents are now running and are connected to the relay server in the management cluster.
   ```sh
   meshctl check --kubecontext $MGMT_CONTEXT
   ```
   Example output:
   ```
   Checking Gloo Mesh Management Cluster Installation
   🟢 Gloo Mgmt Server Deployment Status
   🟢 Gloo Mgmt Server Connectivity to Agents
   +-----------+------------+--------------------------------------------------+
   |  CLUSTER  | REGISTERED |                  CONNECTED POD                   |
   +-----------+------------+--------------------------------------------------+
   | cluster-1 | true       | gloo-mesh/gloo-mesh-mgmt-server-676f4b9945-2pngd |
   +-----------+------------+--------------------------------------------------+
   | cluster-2 | true       | gloo-mesh/gloo-mesh-mgmt-server-676f4b9945-2pngd |
   +-----------+------------+--------------------------------------------------+
   ```
Optional: Set up rate limiting and external authentication
To enable mTLS with rate limiting and external authentication, you must add an injection directive for those components. Although you can enable an injection directive on the `gloo-mesh` namespace, this directive makes the management plane components dependent on the functionality of Istio's mutating webhook, which can be a fragile coupling and is not recommended as a best practice. In production setups, install the Gloo agent Helm chart with just the rate limiting and external authentication services enabled in the `gloo-mesh-addons` namespace, and label the `gloo-mesh-addons` namespace for Istio injection.
Want to modify the default deployment values for the external auth or rate limiting services, such as to set resource requests and limits? See Modify external auth or rate limiting subcharts.
1. Add and update the Helm repositories for the Gloo agent.
   ```sh
   helm repo add gloo-mesh-agent https://storage.googleapis.com/gloo-mesh-enterprise/gloo-mesh-agent
   helm repo update
   ```
2. Create a `gloo-agent-addons` release from the Gloo agent Helm chart to install only rate limiting and external authentication in the `gloo-mesh-addons` namespace.
   ```sh
   helm install gloo-agent-addons gloo-mesh-agent/gloo-mesh-agent \
     --namespace gloo-mesh-addons \
     --create-namespace \
     --kube-context=$REMOTE_CONTEXT \
     --version $GLOO_VERSION \
     --set rate-limiter.enabled=true \
     --set ext-auth-service.enabled=true \
     --set glooMeshAgent.enabled=false
   ```
3. Label the `gloo-mesh-addons` namespace for Istio injection.
   ```sh
   kubectl --context $REMOTE_CONTEXT label ns gloo-mesh-addons istio-injection=enabled --overwrite
   ```
4. Verify that the rate limiting and external authentication components are successfully installed.
   ```sh
   kubectl get pods -n gloo-mesh-addons --context $REMOTE_CONTEXT
   ```
   Example output:
   ```
   NAME                                READY   STATUS    RESTARTS   AGE
   rate-limit-3d62244cdb-fcrvd         2/2     Running   0          4m2s
   ext-auth-service-3d62244cdb-fcrvd   2/2     Running   0          4m2s
   ```
Next, you can check out the guides for external auth and rate limiting policies.
Optional: Configure the locality labels for the nodes
Gloo Mesh uses Kubernetes labels on the nodes in your clusters to indicate locality for the services that run on the nodes. For more information, see the Kubernetes topology and Istio locality documentation.
- Cloud: Typically, your cloud provider sets the Kubernetes `region` and `zone` labels for each node automatically. Depending on the level of availability that you want, you might have clusters in the same region but different zones. Or, each cluster might be in a different region, with nodes spread across zones.
- On-premises: Depending on how you set up your cluster, you likely must set the `region` and `zone` labels for each node yourself. Additionally, consider setting a `subzone` label to specify nodes on the same rack or other more granular setups.
Verify that your nodes have locality labels
Verify that your nodes have at least `region` and `zone` labels. If so, and you do not want to update the labels, you can skip the remaining steps.
```sh
kubectl get nodes --context $REMOTE_CONTEXT1 -o jsonpath='{.items[*].metadata.labels}'
kubectl get nodes --context $REMOTE_CONTEXT2 -o jsonpath='{.items[*].metadata.labels}'
```
Example output with `region` and `zone` labels:
```
..."topology.kubernetes.io/region":"us-east","topology.kubernetes.io/zone":"us-east-2"
```
Add locality labels to your nodes
If your nodes do not already have region and zone labels, you must add the labels. Depending on your cluster setup, you might add the same `region` label to each node, but a separate `zone` label per node. The values are not validated against your underlying infrastructure provider. The following example shows how you might label multizone clusters in two different regions, but you can adapt the steps for your actual setup.
1. Label all the nodes in each cluster for the region. If your nodes have incorrect region labels, include the `--overwrite` flag in the command.
   ```sh
   kubectl label nodes --all --context $REMOTE_CONTEXT1 topology.kubernetes.io/region=us-east
   kubectl label nodes --all --context $REMOTE_CONTEXT2 topology.kubernetes.io/region=us-west
   ```
2. List the nodes in each cluster. Note the name for each node.
   ```sh
   kubectl get nodes --context $REMOTE_CONTEXT1
   kubectl get nodes --context $REMOTE_CONTEXT2
   ```
3. Label each node in each cluster for the zone. If your nodes have incorrect zone labels, include the `--overwrite` flag in the command.
   ```sh
   kubectl label node <cluster-1_node-1> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-1
   kubectl label node <cluster-1_node-2> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-2
   kubectl label node <cluster-1_node-3> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-3

   kubectl label node <cluster-2_node-1> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-1
   kubectl label node <cluster-2_node-2> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-2
   kubectl label node <cluster-2_node-3> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-3
   ```
Next steps
Now that the Gloo management components are installed and workload clusters are registered, check out the following resources to explore Gloo Mesh capabilities:
- Install Istio into each workload cluster.
- Organize team resources with workspaces.
- Deploy sample apps in your cluster to follow the guides in the documentation.
- Review how Gloo Mesh custom resources are automatically translated into Istio resources.
- Talk to an expert to get advice or build out a proof of concept.
- Join the #gloo-mesh channel in the Solo.io community Slack.
- Try out one of the Gloo Mesh workshops.