Set up Gloo Mesh
Start by setting up Gloo Mesh Enterprise in three clusters. This quick-start guide creates a multi-mesh architecture: one management cluster and two workload clusters that each run an Istio service mesh.
Before you begin
- Install meshctl, the Gloo command line tool for bootstrapping Gloo Platform, registering clusters, describing configured resources, and more. Be sure to download version 2.4.1, which uses the latest Gloo Mesh installation values.
curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.4.1 sh -
export PATH=$HOME/.gloo-mesh/bin:$PATH
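To confirm that the CLI is installed and available on your PATH, you can optionally check its version.
meshctl version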
- Save the names of three clusters as environment variables. In this guide, the cluster names mgmt, cluster1, and cluster2 are used. The mgmt cluster serves as the management cluster, and cluster1 and cluster2 serve as the workload clusters in this setup. If your clusters have different names, specify those names instead. Note: The cluster name must be alphanumeric with no special characters except a hyphen (-), lowercase, and begin with a letter (not a number).
export MGMT_CLUSTER=mgmt
export REMOTE_CLUSTER1=cluster1
export REMOTE_CLUSTER2=cluster2
- Save the kubeconfig contexts for your clusters. Run kubectl config get-contexts, look for your cluster in the CLUSTER column, and get the context name in the NAME column. Note: Do not use context names with underscores. The context name is used as a SAN specification in the generated certificate that connects workload clusters to the management cluster, and underscores in SANs are not FQDN-compliant. You can rename a context by running kubectl config rename-context "<oldcontext>" <newcontext>. A worked example with sample context names follows the commands below.
export MGMT_CONTEXT=<management-cluster-context>
export REMOTE_CONTEXT1=<remote-cluster1-context>
export REMOTE_CONTEXT2=<remote-cluster2-context>
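For example, if you list your contexts and see names like the hypothetical ones below, export those names. The kind-* values here are placeholders only; use the context names from your own kubeconfig.
# List the available context names.
kubectl config get-contexts -o name

# Hypothetical example values; replace with the names from your own kubeconfig.
export MGMT_CONTEXT=kind-mgmt
export REMOTE_CONTEXT1=kind-cluster1
export REMOTE_CONTEXT2=kind-cluster2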
- Set your Gloo Mesh license key as an environment variable. If you do not have one, contact an account representative.
export GLOO_MESH_LICENSE_KEY=<gloo-mesh-license-key>
Install Gloo Platform control plane in the management cluster
- Install the Gloo Platform control plane in your management cluster. This command uses a basic profile to create a gloo-mesh namespace and install the control plane components, such as the management server and Prometheus server, in your management cluster.
meshctl install --profiles mgmt-server \
  --kubecontext $MGMT_CONTEXT \
  --set common.cluster=$MGMT_CLUSTER \
  --set licensing.glooMeshLicenseKey=$GLOO_MESH_LICENSE_KEY
Note: On OpenShift, run the following command instead, which uses the mgmt-server-openshift profile. If you need to use OpenShift routes instead of load balancer service types, follow the OpenShift steps in the full multicluster setup guide instead. In OpenShift 4.11 and later, you might see warnings for the pods and containers that violate the OpenShift PodSecurity "restricted:v1.24" profile, due to the elevated permissions required by Istio. You can ignore these warnings. For more info, see this article.
meshctl install --profiles mgmt-server-openshift \
  --kubecontext $MGMT_CONTEXT \
  --set common.cluster=$MGMT_CLUSTER \
  --set licensing.glooMeshLicenseKey=$GLOO_MESH_LICENSE_KEY
- Verify that the control plane pods are running.
kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
Example output:
NAME                                      READY   STATUS    RESTARTS   AGE
gloo-mesh-mgmt-server-56c495796b-cx687    1/1     Running   0          30s
gloo-mesh-redis-8455d49c86-f8qhw          1/1     Running   0          30s
gloo-mesh-ui-65b6b6df5f-bf4vp             3/3     Running   0          30s
gloo-telemetry-gateway-6547f479d5-r4zm6   1/1     Running   0          30s
prometheus-server-57cd8c74d4-2bc7f        2/2     Running   0          30s
- Save the external address and port that were assigned by your cloud provider to the Gloo OpenTelemetry (OTel) gateway load balancer service. The OTel collector agents in each workload cluster send metrics to this address. If your cloud provider assigns an IP address to the load balancer, run the first set of commands; if it assigns a hostname (such as on AWS), run the second set.
export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export TELEMETRY_GATEWAY_PORT=$(kubectl -n gloo-mesh get service gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
echo $TELEMETRY_GATEWAY_ADDRESS

export TELEMETRY_GATEWAY_HOSTNAME=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
export TELEMETRY_GATEWAY_PORT=$(kubectl -n gloo-mesh get service gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_HOSTNAME}:${TELEMETRY_GATEWAY_PORT}
echo $TELEMETRY_GATEWAY_ADDRESS
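Optionally, you can do a quick connectivity check against this address before you register the workload clusters. This is a minimal sketch that assumes the netcat (nc) utility is installed locally; it only confirms that the TCP port is reachable.
# Quick TCP reachability check of the OTel gateway load balancer (requires netcat).
nc -vz ${TELEMETRY_GATEWAY_IP:-$TELEMETRY_GATEWAY_HOSTNAME} $TELEMETRY_GATEWAY_PORT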
- Create a workspace that selects all clusters and namespaces by default, and workspace settings that enable communication across clusters. Gloo workspaces let you organize team resources across Kubernetes namespaces and clusters. In this example, you create a global workspace that imports and exports all resources and namespaces, and a workspace settings resource in the gloo-mesh-config namespace. Later, as your teams grow, you can create a workspace for each team, to enforce service isolation, set up federation, and even share resources by importing and exporting.
kubectl apply --context $MGMT_CONTEXT -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: Workspace
metadata:
  name: $MGMT_CLUSTER
  namespace: gloo-mesh
spec:
  workloadClusters:
  - name: '*'
    namespaces:
    - name: '*'
---
apiVersion: v1
kind: Namespace
metadata:
  name: gloo-mesh-config
---
apiVersion: admin.gloo.solo.io/v2
kind: WorkspaceSettings
metadata:
  name: $MGMT_CLUSTER
  namespace: gloo-mesh-config
spec:
  options:
    serviceIsolation:
      enabled: false
    federation:
      enabled: false
      serviceSelector:
      - {}
    eastWestGateways:
    - selector:
        labels:
          istio: eastwestgateway
EOF
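To confirm that the workspace resources were created, you can list them in the management cluster. This is an optional check that is not part of the original steps.
kubectl get workspaces -n gloo-mesh --context $MGMT_CONTEXT
kubectl get workspacesettings -n gloo-mesh-config --context $MGMT_CONTEXT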
Register workload clusters
- Prepare the gloo-mesh-addons namespace in each workload cluster.
kubectl create ns gloo-mesh-addons --context $REMOTE_CONTEXT1
kubectl create ns gloo-mesh-addons --context $REMOTE_CONTEXT2
OpenShift: In each workload cluster, run the following commands instead to:
- Elevate the permissions of the gloo-mesh-addons service account that will be created
- Create the gloo-mesh-addons project
- Create a NetworkAttachmentDefinition custom resource for the project
These permissions allow the Istio sidecars to make use of a user ID that is normally restricted by OpenShift. For more information, see the Istio on OpenShift documentation.
oc adm policy add-scc-to-group anyuid system:serviceaccounts:gloo-mesh-addons --context $REMOTE_CONTEXT1
oc adm policy add-scc-to-group anyuid system:serviceaccounts:gloo-mesh-addons --context $REMOTE_CONTEXT2
kubectl create ns gloo-mesh-addons --context $REMOTE_CONTEXT1
kubectl create ns gloo-mesh-addons --context $REMOTE_CONTEXT2
cat <<EOF | oc -n gloo-mesh-addons create --context $REMOTE_CONTEXT1 -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: istio-cni
EOF
cat <<EOF | oc -n gloo-mesh-addons create --context $REMOTE_CONTEXT2 -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: istio-cni
EOF
- Register both workload clusters with the management server. These commands use basic profiles to install the Gloo agent, rate limit server, and external auth server in each workload cluster.
meshctl cluster register $REMOTE_CLUSTER1 \
  --kubecontext $MGMT_CONTEXT \
  --remote-context $REMOTE_CONTEXT1 \
  --profiles agent,ratelimit,extauth \
  --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS
meshctl cluster register $REMOTE_CLUSTER2 \
  --kubecontext $MGMT_CONTEXT \
  --remote-context $REMOTE_CONTEXT2 \
  --profiles agent,ratelimit,extauth \
  --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS
OpenShift: Use the agent-openshift profile instead. These commands also pin the agent version with the --version flag; because $GLOO_VERSION is not set earlier in this guide, export it first to match your installation, for example export GLOO_VERSION=2.4.1. Note: In OpenShift 4.11 and later, you might see warnings for the pods and containers that violate the OpenShift PodSecurity "restricted:v1.24" profile, due to the elevated permissions required by Istio. You can ignore these warnings. For more info, see this article.
meshctl cluster register $REMOTE_CLUSTER1 \
  --kubecontext $MGMT_CONTEXT \
  --remote-context $REMOTE_CONTEXT1 \
  --profiles agent-openshift,ratelimit,extauth \
  --version $GLOO_VERSION \
  --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS
meshctl cluster register $REMOTE_CLUSTER2 \
  --kubecontext $MGMT_CONTEXT \
  --remote-context $REMOTE_CONTEXT2 \
  --profiles agent-openshift,ratelimit,extauth \
  --version $GLOO_VERSION \
  --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS
- Verify that the Gloo data plane components are healthy.
meshctl check --kubecontext $REMOTE_CONTEXT1
meshctl check --kubecontext $REMOTE_CONTEXT2
Example output:
🟢 CRD Version check

🟢 Gloo Platform Deployment Status
Namespace        | Name                           | Ready | Status
gloo-mesh        | gloo-mesh-agent                | 1/1   | Healthy
gloo-mesh-addons | ext-auth-service               | 1/1   | Healthy
gloo-mesh-addons | rate-limiter                   | 1/1   | Healthy
gloo-mesh-addons | redis                          | 1/1   | Healthy
gloo-mesh        | gloo-telemetry-collector-agent | 3/3   | Healthy
- Verify that your Gloo Mesh setup is correctly installed. This check might take a few seconds to verify that:
- Your Gloo Platform product licenses are valid and current.
- The Gloo Platform CRDs are installed at the correct version.
- The control plane pods in the management cluster are running and healthy.
- The agents in the workload clusters are successfully identified by the control plane.
meshctl check --kubecontext $MGMT_CONTEXT
Example output:
🟢 License status
 INFO  gloo-mesh enterprise license expiration is 25 Aug 23 10:38 CDT
 INFO  No GraphQL license module found for any product

🟢 CRD version check

🟢 Gloo Platform deployment status
Namespace | Name                           | Ready | Status
gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
gloo-mesh | gloo-telemetry-gateway         | 1/1   | Healthy
gloo-mesh | prometheus-server              | 1/1   | Healthy
gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy

🟢 Mgmt server connectivity to workload agents
Cluster  | Registered | Connected Pod
cluster1 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
cluster2 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
Install managed Istio
- Create the istiod control planes in your workload clusters.
kubectl apply --context $MGMT_CONTEXT -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: IstioLifecycleManager
metadata:
  name: istiod-control-plane
  namespace: gloo-mesh
spec:
  installations:
  - clusters:
    - defaultRevision: true
      name: $REMOTE_CLUSTER1
    - defaultRevision: true
      name: $REMOTE_CLUSTER2
    istioOperatorSpec:
      components:
        pilot:
          k8s:
            env:
            - name: PILOT_ENABLE_K8S_SELECT_WORKLOAD_ENTRIES
              value: "false"
            - name: PILOT_SKIP_VALIDATE_TRUST_DOMAIN
              value: "true"
      meshConfig:
        accessLogFile: /dev/stdout
        defaultConfig:
          holdApplicationUntilProxyStarts: true
          proxyMetadata:
            ISTIO_META_DNS_AUTO_ALLOCATE: "true"
            ISTIO_META_DNS_CAPTURE: "true"
        outboundTrafficPolicy:
          mode: ALLOW_ANY
        rootNamespace: istio-system
      namespace: istio-system
      profile: minimal
    revision: auto
EOF
OpenShift: On OpenShift workload clusters, complete the following steps instead.
- Elevate the permissions of the following service accounts that will be created. These permissions allow the Istio sidecars to make use of a user ID that is normally restricted by OpenShift. For more information, see the Istio on OpenShift documentation.
oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-system --context $REMOTE_CONTEXT1
oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-system --context $REMOTE_CONTEXT2
oc adm policy add-scc-to-group anyuid system:serviceaccounts:gloo-mesh-gateways --context $REMOTE_CONTEXT1
oc adm policy add-scc-to-group anyuid system:serviceaccounts:gloo-mesh-gateways --context $REMOTE_CONTEXT2
oc adm policy add-scc-to-group anyuid system:serviceaccounts:gm-iop-1-18-2 --context $REMOTE_CONTEXT1
oc adm policy add-scc-to-group anyuid system:serviceaccounts:gm-iop-1-18-2 --context $REMOTE_CONTEXT2
- Create the gloo-mesh-gateways project, and create a NetworkAttachmentDefinition custom resource for the project.
kubectl create ns gloo-mesh-gateways --context $REMOTE_CONTEXT1
cat <<EOF | oc --context $REMOTE_CONTEXT1 -n gloo-mesh-gateways create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: istio-cni
EOF
kubectl create ns gloo-mesh-gateways --context $REMOTE_CONTEXT2
cat <<EOF | oc --context $REMOTE_CONTEXT2 -n gloo-mesh-gateways create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: istio-cni
EOF
- Create an IstioLifecycleManager custom resource to manage the istiod control planes.
kubectl apply --context $MGMT_CONTEXT -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: IstioLifecycleManager
metadata:
  name: istiod-control-plane
  namespace: gloo-mesh
spec:
  installations:
  - clusters:
    - defaultRevision: true
      name: $REMOTE_CLUSTER1
    - defaultRevision: true
      name: $REMOTE_CLUSTER2
    istioOperatorSpec:
      components:
        cni:
          enabled: true
          namespace: kube-system
          k8s:
            overlays:
            - kind: DaemonSet
              name: istio-cni-node
              patches:
              - path: spec.template.spec.containers[0].securityContext.privileged
                value: true
        pilot:
          k8s:
            env:
            - name: PILOT_ENABLE_K8S_SELECT_WORKLOAD_ENTRIES
              value: "false"
            - name: PILOT_SKIP_VALIDATE_TRUST_DOMAIN
              value: "true"
      meshConfig:
        accessLogFile: /dev/stdout
        defaultConfig:
          holdApplicationUntilProxyStarts: true
          envoyMetricsService:
            address: gloo-mesh-agent.gloo-mesh:9977
          envoyAccessLogService:
            address: gloo-mesh-agent.gloo-mesh:9977
          proxyMetadata:
            ISTIO_META_DNS_CAPTURE: "true"
            ISTIO_META_DNS_AUTO_ALLOCATE: "true"
        outboundTrafficPolicy:
          mode: ALLOW_ANY
        rootNamespace: istio-system
      namespace: istio-system
      profile: openshift
      values:
        cni:
          cniBinDir: /var/lib/cni/bin
          cniConfDir: /etc/cni/multus/net.d
          chained: false
          cniConfFileName: "istio-cni.conf"
          excludeNamespaces:
          - istio-system
          - kube-system
          logLevel: info
        sidecarInjectorWebhook:
          injectedAnnotations:
            k8s.v1.cni.cncf.io/networks: istio-cni
    revision: auto
EOF
- Create the Istio gateways in your workload clusters. The commands vary based on which Gloo Platform licenses you use.
If you have only a Gloo Mesh license, create a GatewayLifecycleManager custom resource to manage the east-west gateways.
kubectl apply --context $MGMT_CONTEXT -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: GatewayLifecycleManager
metadata:
  name: istio-eastwestgateway
  namespace: gloo-mesh
spec:
  installations:
  - clusters:
    - activeGateway: true
      name: $REMOTE_CLUSTER1
    - activeGateway: true
      name: $REMOTE_CLUSTER2
    gatewayRevision: auto
    istioOperatorSpec:
      components:
        ingressGateways:
        - enabled: true
          k8s:
            env:
            - name: ISTIO_META_ROUTER_MODE
              value: "sni-dnat"
            service:
              ports:
              - port: 15021
                targetPort: 15021
                name: status-port
              - port: 15443
                targetPort: 15443
                name: tls
              selector:
                istio: eastwestgateway
              type: LoadBalancer
          label:
            istio: eastwestgateway
            app: istio-eastwestgateway
          name: istio-eastwestgateway
          namespace: gloo-mesh-gateways
      namespace: istio-system
      profile: empty
EOF
If you have both Gloo Mesh and Gloo Gateway licenses, create two GatewayLifecycleManager custom resources to manage the east-west and ingress gateways. Note that you can optionally add cloud provider-specific annotations for the ingress gateway load balancer; for example, you can uncomment the AWS serviceAnnotations in the istio-ingressgateway gateway lifecycle manager.
kubectl apply --context $MGMT_CONTEXT -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: GatewayLifecycleManager
metadata:
  name: istio-eastwestgateway
  namespace: gloo-mesh
spec:
  installations:
  - clusters:
    - activeGateway: true
      name: $REMOTE_CLUSTER1
    - activeGateway: true
      name: $REMOTE_CLUSTER2
    gatewayRevision: auto
    istioOperatorSpec:
      components:
        ingressGateways:
        - enabled: true
          k8s:
            env:
            - name: ISTIO_META_ROUTER_MODE
              value: "sni-dnat"
            service:
              ports:
              - port: 15021
                targetPort: 15021
                name: status-port
              - port: 15443
                targetPort: 15443
                name: tls
              selector:
                istio: eastwestgateway
              type: LoadBalancer
          label:
            istio: eastwestgateway
            app: istio-eastwestgateway
          name: istio-eastwestgateway
          namespace: gloo-mesh-gateways
      namespace: istio-system
      profile: empty
---
apiVersion: admin.gloo.solo.io/v2
kind: GatewayLifecycleManager
metadata:
  name: istio-ingressgateway
  namespace: gloo-mesh
spec:
  installations:
  - clusters:
    - activeGateway: true
      name: $REMOTE_CLUSTER1
    - activeGateway: true
      name: $REMOTE_CLUSTER2
    gatewayRevision: auto
    istioOperatorSpec:
      components:
        ingressGateways:
        - enabled: true
          k8s:
            service:
              ports:
              - name: status-port
                port: 15021
                targetPort: 15021
              - name: http2
                port: 80
                targetPort: 8080
              - name: https
                port: 443
                targetPort: 8443
              - name: tls
                port: 15443
                targetPort: 15443
              selector:
                istio: ingressgateway
              type: LoadBalancer
            #serviceAnnotations:
            #  service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
            #  service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
            #  service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
            #  service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
            #  service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:<cert>"
            #  service.beta.kubernetes.io/aws-load-balancer-type: external
          label:
            istio: ingressgateway
            app: istio-ingressgateway
          name: istio-ingressgateway
          namespace: gloo-mesh-gateways
      namespace: istio-system
      profile: empty
EOF
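Optionally, check that the gateway load balancer services receive external addresses from your cloud provider. This supplemental check is not part of the original steps, and the load balancers can take a few minutes to provision.
kubectl get svc -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
kubectl get svc -n gloo-mesh-gateways --context $REMOTE_CONTEXT2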
- Verify that the namespaces for your Istio installations are created in each workload cluster.
kubectl get ns --context $REMOTE_CONTEXT1
kubectl get ns --context $REMOTE_CONTEXT2
For example, the gm-iop-1-18-2, gloo-mesh-gateways, and istio-system namespaces are created:
NAME                 STATUS   AGE
default              Active   56m
gloo-mesh            Active   36m
gloo-mesh-addons     Active   36m
gm-iop-1-18-2        Active   91s
gloo-mesh-gateways   Active   90s
istio-system         Active   91s
...
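You can also confirm that the istiod control planes and gateways are running in each workload cluster. Pod names and revision suffixes vary by installation, so treat this as an optional sanity check.
kubectl get pods -n istio-system --context $REMOTE_CONTEXT1
kubectl get pods -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
kubectl get pods -n istio-system --context $REMOTE_CONTEXT2
kubectl get pods -n gloo-mesh-gateways --context $REMOTE_CONTEXT2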
- Verify that Gloo Mesh successfully discovered the Istio service meshes. Gloo creates internal mesh resources to represent the state of the Istio service mesh.
kubectl get mesh -n gloo-mesh --context $REMOTE_CONTEXT1
kubectl get mesh -n gloo-mesh --context $REMOTE_CONTEXT2
Next
Deploy sample apps to try out the routing capabilities and traffic policies in Gloo Mesh.
Understand what happened
Find out more information about the Gloo Mesh environment that you set up in this guide.
Gloo Mesh installation: This quick start guide used meshctl to install a minimum deployment of Gloo Mesh Enterprise for testing purposes, so some optional components are not installed, and self-signed certificates are used to secure communication between the management and workload clusters. To learn more about production-level installation options, including advanced configuration options available in the Gloo Mesh Enterprise Helm chart, see the Setup guide.
Relay architecture: When you installed the Gloo Mesh control plane in the management cluster, a deployment named gloo-mesh-mgmt-server was created to translate and implement your Gloo configurations and act as the relay server. When you registered the workload clusters to be managed by the control plane, a deployment named gloo-mesh-agent was created on each workload cluster to run a relay agent. All communication is outbound from the relay agents on the workload clusters to the relay server on the management cluster. For more information about server-agent communication, see the relay architecture page. Additionally, default, self-signed certificates were used to secure communication between the control and data planes. For more information about the certificate architecture, see Default Gloo Mesh-managed certificates.
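If you want to inspect the relay components yourself, one way (a sketch that assumes the default deployment names and the gloo-mesh namespace used in this guide) is to look at the server and agent deployments and skim the agent logs, where connection details typically appear at startup.
# Relay server in the management cluster and relay agent in a workload cluster.
kubectl get deploy gloo-mesh-mgmt-server -n gloo-mesh --context $MGMT_CONTEXT
kubectl get deploy gloo-mesh-agent -n gloo-mesh --context $REMOTE_CONTEXT1

# Skim the agent logs for relay connection details.
kubectl logs deploy/gloo-mesh-agent -n gloo-mesh --context $REMOTE_CONTEXT1 | head -n 50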
Workload cluster registration: Cluster registration creates a KubernetesCluster custom resource on the management cluster to represent the workload cluster and store relevant data, such as the workload cluster's local domain ("cluster.local"). To learn more about cluster registration and how to register clusters with Helm rather than meshctl, review the cluster registration guide.
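For example, you can list and inspect the KubernetesCluster resources on the management cluster. This is an optional sketch that assumes the gloo-mesh namespace used throughout this guide.
kubectl get kubernetescluster -n gloo-mesh --context $MGMT_CONTEXT
kubectl get kubernetescluster $REMOTE_CLUSTER1 -n gloo-mesh --context $MGMT_CONTEXT -o yaml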
Istio installation: In this getting started guide, Istio was installed by providing IstioLifecycleManager and GatewayLifecycleManager custom resources, which Gloo uses to manage the installations. However, Gloo Mesh can discover Istio service meshes regardless of their installation options. To manually install Istio, see the advanced configuration guides.
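To review the managed installations that this guide created, you can inspect the lifecycle manager resources on the management cluster. This optional check uses the resource names from the earlier steps.
kubectl get istiolifecyclemanager istiod-control-plane -n gloo-mesh --context $MGMT_CONTEXT -o yaml
kubectl get gatewaylifecyclemanager -n gloo-mesh --context $MGMT_CONTEXT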
Gloo workspace: Gloo workspaces let you organize team resources across Kubernetes namespaces and clusters. In this example, a single workspace is created for everything. Later, as your teams grow, you can create a workspace for each team, to enforce service isolation, set up federation, and even share resources by importing and exporting. You can also change the default workspace by following the Workspace setup guide.
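For illustration, a per-team workspace might select only that team's namespaces across clusters. The following is a hedged sketch: the web-team name and namespace are hypothetical, and you would typically pair the workspace with its own WorkspaceSettings resource (as in the global example earlier) to control isolation, federation, and import/export.
kubectl apply --context $MGMT_CONTEXT -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: Workspace
metadata:
  # Hypothetical team workspace; the name and selected namespace are examples only.
  name: web-team
  namespace: gloo-mesh
spec:
  workloadClusters:
  - name: '*'
    namespaces:
    - name: web-team
EOF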