Set up Gloo Mesh (Gloo Platform APIs)
Quickly set up Gloo Mesh (Gloo Platform APIs) in three clusters.
Gloo Mesh (Gloo Platform APIs) is a multicluster and multimesh management plane that is based on hardened, open-source projects like Envoy and Istio. With Gloo Mesh, you can unify the configuration, operation, and visibility of service-to-service connectivity across your distributed applications. These apps can run in different virtual machines (VMs) or Kubernetes clusters on premises or in various cloud providers, and even in different service meshes.
You can follow this guide to quickly get started with Gloo Mesh (Gloo Platform APIs). To learn more about the benefits and architecture, see About. To customize your installation with Helm instead, see the advanced installation guide.
The following figure depicts the multi-mesh architecture created by this quick-start guide.
Before you begin
Install the following command-line (CLI) tools.
- `helm`, the Kubernetes package manager.
- `kubectl`, the Kubernetes command line tool. Download the `kubectl` version that is within one minor version of the Kubernetes clusters you plan to use.
- `meshctl`, the Solo command line tool.
```sh
curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.11.0-beta0 sh -
export PATH=$HOME/.gloo-mesh/bin:$PATH
```
Create two or more Kubernetes clusters, or use existing ones. The instructions in this guide assume one management cluster and two workload clusters.
- The cluster name must be a lowercase alphanumeric string that begins with a letter (not a number) and contains no special characters other than hyphens (-), to follow the Kubernetes DNS label standard.
1. Set the names of your clusters from your infrastructure provider. If your clusters have different names, specify those names instead.
```sh
export MGMT_CLUSTER=mgmt
export REMOTE_CLUSTER1=cluster1
export REMOTE_CLUSTER2=cluster2
```
2. Save the kubeconfig contexts for your clusters. Run `kubectl config get-contexts`, look for your cluster in the `CLUSTER` column, and get the context name in the `NAME` column. Note: Do not use context names with underscores. The generated certificate that connects workload clusters to the management cluster uses the context name as a SAN specification, and underscores in SANs are not FQDN compliant. You can rename a context by running `kubectl config rename-context "<oldcontext>" <newcontext>`.
```sh
export MGMT_CONTEXT=<management-cluster-context>
export REMOTE_CONTEXT1=<remote-cluster1-context>
export REMOTE_CONTEXT2=<remote-cluster2-context>
```
3. Set your Gloo Mesh (Gloo Platform APIs) license key as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing. To check your license's validity, you can run `meshctl license check --key $(echo ${GLOO_MESH_LICENSE_KEY} | base64 -w0)`.
```sh
export GLOO_MESH_LICENSE_KEY=<license_key>
```
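As a quick sanity check of the cluster-name rule above, you can approximate the Kubernetes DNS label standard with a regular expression. The `check_cluster_name` helper below is hypothetical, not part of Gloo, and the pattern is a simplification of the full standard:

```shell
#!/bin/sh
# Hypothetical helper: approximates the cluster-name rule described above
# (lowercase alphanumeric plus hyphens, beginning with a letter).
check_cluster_name() {
  if printf '%s' "$1" | grep -Eq '^[a-z][a-z0-9-]*$'; then
    echo "valid"
  else
    echo "invalid"
  fi
}

check_cluster_name mgmt        # prints "valid"
check_cluster_name Cluster_1   # prints "invalid" (uppercase and underscore)
```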
Install the Gloo management plane
Deploy the Gloo management plane into a dedicated management cluster.
1. Install the Gloo management plane in your management cluster. This command uses a basic profile to create a `gloo-mesh` namespace and install the management plane components, such as the management server and Prometheus server, in your management cluster.
2. Verify that the management plane pods have a status of `Running`.
```sh
kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
```
Example output:
```
NAME                                      READY   STATUS    RESTARTS   AGE
gloo-mesh-mgmt-server-56c495796b-cx687    1/1     Running   0          30s
gloo-mesh-redis-8455d49c86-f8qhw          1/1     Running   0          30s
gloo-mesh-ui-65b6b6df5f-bf4vp             3/3     Running   0          30s
gloo-telemetry-collector-agent-7rzfb      1/1     Running   0          30s
gloo-telemetry-collector-agent-mf5rw      1/1     Running   0          30s
gloo-telemetry-gateway-6547f479d5-r4zm6   1/1     Running   0          30s
prometheus-server-57cd8c74d4-2bc7f        2/2     Running   0          30s
```
3. Save the external address and port that your cloud provider assigned to the Gloo OpenTelemetry (OTel) gateway service. The OTel collector agents in each workload cluster send metrics to this address.
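Assuming your cloud provider exposes the telemetry gateway through a LoadBalancer IP, a sketch like the following can capture that address. The service name `gloo-telemetry-gateway` (shown in the pod listing above) and the `otlp` port name are assumptions; confirm them with `kubectl get svc -n gloo-mesh`:

```shell
# Sketch only: capture the external address of the telemetry gateway.
# Service and port names are assumptions; verify them in your cluster.
TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway \
  --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
TELEMETRY_GATEWAY_PORT=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway \
  --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
echo $TELEMETRY_GATEWAY_ADDRESS
```

If your provider assigns a hostname instead of an IP, use the `.hostname` field of the load balancer ingress in the first JSONPath expression.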
Create a workspace that selects all clusters and namespaces by default, and workspace settings that enable communication across clusters. Gloo workspaces let you organize team resources across Kubernetes namespaces and clusters. In this example, you create a global workspace that imports and exports all resources and namespaces, and a workspace settings resource in the `gloo-mesh-config` namespace. Later, as your teams grow, you can create a workspace for each team, to enforce service isolation, set up federation, and even share resources by importing and exporting.
```sh
kubectl apply --context $MGMT_CONTEXT -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: Workspace
metadata:
  name: $MGMT_CLUSTER
  namespace: gloo-mesh
spec:
  workloadClusters:
  - name: '*'
    namespaces:
    - name: '*'
---
apiVersion: v1
kind: Namespace
metadata:
  name: gloo-mesh-config
---
apiVersion: admin.gloo.solo.io/v2
kind: WorkspaceSettings
metadata:
  name: $MGMT_CLUSTER
  namespace: gloo-mesh-config
spec:
  options:
    serviceIsolation:
      enabled: false
    federation:
      enabled: false
      serviceSelector:
      - {}
    eastWestGateways:
    - selector:
        labels:
          istio: eastwestgateway
EOF
```
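For contrast with the global workspace that this guide creates, a narrower per-team workspace might look like the following sketch. The `web-team` name and `web` namespace are hypothetical, purely for illustration:

```yaml
# Illustration only: a workspace scoped to one hypothetical team's namespace.
apiVersion: admin.gloo.solo.io/v2
kind: Workspace
metadata:
  name: web-team
  namespace: gloo-mesh
spec:
  workloadClusters:
  - name: '*'
    namespaces:
    - name: web
```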
Install the Gloo data plane
Register each workload cluster with the Gloo management plane by deploying Gloo data plane components. A deployment named gloo-mesh-agent runs the Gloo agent in each workload cluster.
Register both workload clusters with the management server. These commands use basic profiles to install the Gloo agent, rate limit server, and external auth server in each workload cluster.
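The registration commands are typically run with `meshctl`; a sketch is below. The flag names (`--kubecontext`, `--remote-context`, `--telemetry-server-address`) are assumptions to verify against `meshctl cluster register --help`, and the telemetry server address is the OTel gateway address that you saved earlier.

```
# Sketch only: register each workload cluster with the management plane.
# Flag names are assumptions; confirm with `meshctl cluster register --help`.
meshctl cluster register $REMOTE_CLUSTER1 \
  --kubecontext $MGMT_CONTEXT \
  --remote-context $REMOTE_CONTEXT1 \
  --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS

meshctl cluster register $REMOTE_CLUSTER2 \
  --kubecontext $MGMT_CONTEXT \
  --remote-context $REMOTE_CONTEXT2 \
  --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS
```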
1. Verify that the Gloo data plane components in each workload cluster are healthy. If not, try debugging the agent.
```sh
kubectl get pods -n gloo-mesh --context $REMOTE_CONTEXT1
kubectl get pods -n gloo-mesh --context $REMOTE_CONTEXT2
```
Example output:
```
NAME                                   READY   STATUS    RESTARTS   AGE
extauth-585565448c-smz4j               1/1     Running   0          90s
gloo-mesh-agent-8ffc775c4-tk2z5        2/2     Running   0          90s
gloo-telemetry-collector-agent-g8p7x   1/1     Running   0          90s
gloo-telemetry-collector-agent-mp2wd   1/1     Running   0          90s
rate-limit-8db4b7b5d-hq7g8             1/1     Running   0          90s
```
2. Verify that your Gloo Mesh (Gloo Platform APIs) setup is correctly installed. If not, try debugging the relay connection. Note that this check might take a few seconds to verify that:
- Your Gloo product licenses are valid and current.
- The Gloo CRDs are installed at the correct version.
- The management plane pods in the management cluster are running and healthy.
- The agents in the workload clusters are successfully identified by the management server.
- Any Istio installation versions are compatible with the installed Gloo version.
```sh
meshctl check --kubecontext $MGMT_CONTEXT
```
Example output:
```
🟢 License status
INFO  gloo-mesh enterprise license expiration is 25 Aug 24 10:38 CDT
🟢 CRD version check
🟢 Gloo deployment status
Namespace | Name                           | Ready | Status
gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
gloo-mesh | gloo-telemetry-gateway         | 1/1   | Healthy
gloo-mesh | prometheus-server              | 1/1   | Healthy
gloo-mesh | gloo-telemetry-collector-agent | 2/2   | Healthy
🟢 Mgmt server connectivity to workload agents
Cluster  | Registered | Connected Pod
cluster1 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
cluster2 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
Connected Pod                                    | Clusters
gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6 | 2
🟢 Istio compatibility check
All Istio versions found are compatible
```
Deploy Istio
Use the Solo distribution of Istio to install a service mesh. For more information, check out Solo distributions of Istio. Note that this guide installs a minimal service mesh setup to get you started. For a more advanced Istio sidecar installation, check out Install Istio service meshes with Helm.
1. Install the Gloo Operator to the `gloo-mesh` namespace of each cluster. This operator deploys and manages your Istio installations.
```sh
for context in ${REMOTE_CONTEXT1} ${REMOTE_CONTEXT2}; do
  helm install gloo-operator oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
    --version 0.4.0 \
    -n gloo-mesh \
    --create-namespace \
    --kube-context ${context} \
    --set manager.env.SOLO_ISTIO_LICENSE_KEY=${GLOO_MESH_LICENSE_KEY}
done
```
2. Verify that the operator pods are running.
```sh
for context in ${REMOTE_CONTEXT1} ${REMOTE_CONTEXT2}; do
  kubectl get pods -n gloo-mesh --context ${context} -l app.kubernetes.io/name=gloo-operator
done
```
3. Create a ServiceMeshController custom resource to configure an Istio installation. For a description of each configurable field, see the ServiceMeshController reference. If you need to set more advanced Istio configuration, you can also create a gloo-extensions-config configmap.
```sh
function apply_smc() {
  context=${1:?context}
  cluster=${2:?cluster}
  kubectl apply -n gloo-mesh --context ${context} -f - <<EOF
apiVersion: operator.gloo.solo.io/v1
kind: ServiceMeshController
metadata:
  name: managed-istio
  labels:
    app.kubernetes.io/name: managed-istio
spec:
  # required for multicluster setups
  cluster: ${cluster}
  dataplaneMode: Sidecar
  installNamespace: istio-system
  version: 1.27.0
EOF
}
apply_smc ${REMOTE_CONTEXT1} ${REMOTE_CLUSTER1}
apply_smc ${REMOTE_CONTEXT2} ${REMOTE_CLUSTER2}
```
Note that the operator detects your cloud provider and cluster platform, and configures the necessary settings for that platform for you. For example, if you create an ambient mesh in an OpenShift cluster, no OpenShift-specific settings are required in the ServiceMeshController, because the operator automatically sets the appropriate settings for OpenShift and your specific cloud provider.
If you set the `installNamespace` to a namespace other than `gloo-system`, `gloo-mesh`, or `istio-system`, you must include the `--set manager.env.WATCH_NAMESPACES=<namespace>` setting.
4. Verify that the istiod control plane and Istio CNI pods are running in each cluster.
```sh
kubectl get pods -n istio-system --context ${REMOTE_CONTEXT1}
kubectl get pods -n istio-system --context ${REMOTE_CONTEXT2}
```
Example output for one cluster:
```
NAME                          READY   STATUS    RESTARTS   AGE
istio-cni-node-6s5nk          1/1     Running   0          2m53s
istio-cni-node-blpz4          1/1     Running   0          2m53s
istiod-gloo-bb86b959f-msrg7   1/1     Running   0          2m45s
istiod-gloo-bb86b959f-w29cm   1/1     Running   0          3m
```
Verify that Gloo Mesh (Gloo Platform APIs) successfully discovered the Istio service meshes. Gloo Mesh (Gloo Platform APIs) creates internal `mesh` resources to represent the state of the Istio service mesh.
```sh
kubectl get mesh -n gloo-mesh --context ${REMOTE_CONTEXT1}
kubectl get mesh -n gloo-mesh --context ${REMOTE_CONTEXT2}
```
Next
Deploy sample apps to try out the routing capabilities and traffic policies in Gloo Mesh (Gloo Platform APIs).
Understand what happened
Find out more information about the Gloo Mesh environment that you set up in this guide.
Gloo Mesh installation: This quick start guide used meshctl to install a minimum deployment of Gloo Mesh (Gloo Platform APIs) for testing purposes, and some optional components are not installed. For example, self-signed certificates are used to secure communication between the management and workload clusters. To learn more about production-level installation options, including advanced configuration options available in the Gloo Mesh (Gloo Platform APIs) Helm chart, see the Setup guide.
Relay architecture: When you installed the Gloo management plane in the management cluster, a deployment named gloo-mesh-mgmt-server was created to translate and implement your Gloo configurations and act as the relay server. When you registered the workload clusters to be managed by the management plane, a deployment named gloo-mesh-agent was created on each workload cluster to run a relay agent. All communication is outbound from the relay agents on the workload clusters to the relay server on the management cluster. For more information about server-agent communication, see the relay architecture page. Additionally, default, self-signed certificates were used to secure communication between the control and data planes. For more information about the certificate architecture, see Default Gloo Mesh-managed certificates.
Workload cluster registration: Cluster registration creates a KubernetesCluster custom resource on the management cluster to represent the workload cluster and store relevant data, such as the workload cluster’s local domain (“cluster.local”). To learn more about cluster registration and how to register clusters with Helm rather than meshctl, review the cluster registration guide.
Istio installation: The Istio service meshes in this getting started guide were installed by using the Gloo Operator. However, Gloo Mesh can discover Istio service meshes regardless of their installation options. For more information, check out the Gloo Mesh (Gloo Platform APIs) guides for installing and managing Istio.
Gloo workspace: Gloo workspaces let you organize team resources across Kubernetes namespaces and clusters. In this example, a single workspace is created for everything. Later, as your teams grow, you can create a workspace for each team, to enforce service isolation, set up federation, and even share resources by importing and exporting. You can also change the default workspace by following the Workspace setup guide.