Alpha: External workloads (VMs)
Onboard external workloads, such as virtual or bare metal machines, to an Istio service mesh in your Gloo Mesh Enterprise environment.
Onboarding external workloads to Gloo Mesh Enterprise is an alpha feature. As of 2.4.0, this feature is tested for onboarding VMs that run in Google Cloud Platform (GCP) and Amazon Web Services (AWS). Alpha features are likely to change, are not fully tested, and are not supported for production. For more information, see Gloo feature maturity.
About
As you build your Gloo Mesh Enterprise environment, you might want to add a virtual machine to your setup. For example, you might run an app or service in a VM that must communicate with services in the Istio service mesh that runs in your Kubernetes cluster.
To onboard your VM into the service mesh, you deploy three agents to your VM: an Istio sidecar agent, a SPIRE agent, and an OpenTelemetry (OTel) collector agent.
Istio: By adding an Istio sidecar agent to your VM, you can achieve fully bi-directional communication between apps in your cluster's service mesh and apps in the VM. Because all communication between services in the workload cluster and services in the VM goes through the workload cluster's east-west gateway, the communication is mTLS-secured. Additionally, you get the added benefit of applying Gloo resources to the apps on your VM, such as Gloo traffic policies.
SPIRE: To securely identify a virtual machine, Gloo Mesh Enterprise uses SPIRE, a production-ready implementation of the SPIFFE APIs. SPIRE issues SPIFFE Verifiable Identity Documents (SVIDs) to workloads and verifies the SVIDs of other workloads. In this guide, you deploy a SPIRE server to your workload cluster and a SPIRE agent to the VM. To onboard the VM into your Gloo setup, the SPIRE agent must authenticate and verify itself when it first connects to the server, a process known as node attestation. During node attestation, the agent and server together verify the identity of the node that the agent is deployed to. This process ensures that the workload cluster and your VM can securely connect to each other.
OTel: To ensure that metrics can be collected from the VM in the same way that they are collected from workload clusters, you deploy an OTel collector agent to the VM. The collector sends metrics through the workload cluster's east-west gateway to the OTel gateway on the management cluster. Additionally, as part of this guide, you enable the collector agents to gather metadata about the compute instances that they are deployed to. This compute instance metadata helps you better visualize your Gloo Mesh Enterprise setup across your cloud provider infrastructure network. For more information about metrics collection in Gloo Platform, see Set up the Gloo OTel pipeline.
Before you begin
Install CLI tools
Install the following CLI tools.
- `helm`, the Kubernetes package manager.
- `kubectl`, the Kubernetes command line tool. Download the `kubectl` version that is within one minor version of the Kubernetes clusters you plan to use.
- `istioctl`, the Istio command line tool. Important: Versions 1.17.4 through 1.18.7-patch3 are supported in Gloo Mesh Enterprise for onboarding VMs.
- `meshctl`, the Gloo command line tool for bootstrapping Gloo Mesh Enterprise, registering clusters, describing configured resources, and more. Be sure to download version 2.4.16, which uses the latest Gloo Mesh installation values.

      curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.4.16 sh -
      export PATH=$HOME/.gloo-mesh/bin:$PATH
- Install any additional CLI tools that your infrastructure platform requires, such as the `gcloud` CLI for GCP or the `aws` CLI for AWS.
Overview
This guide walks you through the following general steps:
- Create compute resources: In your cloud provider, create two Kubernetes clusters and a VM on the same network subnet. Additionally, configure IAM permissions for the SPIRE server you later deploy to the workload cluster.
- Set up the management cluster: Deploy the Gloo management plane to the management cluster. The installation settings enable the onboarding of external workloads to the service mesh.
- Set up the workload cluster: Set up your workload cluster to communicate with the management cluster and your VM, including deploying test apps, deploying Istio, and registering the workload cluster with the Gloo management plane. The registration settings include deploying a SPIRE server to the workload cluster, and using PostgreSQL as the default datastore for the SPIRE server.
- Onboard the VM: Onboard the VM to your Gloo Mesh Enterprise environment by generating a bootstrap bundle that installs the Istio sidecar, OpenTelemetry (OTel) collector, and SPIRE agents on the VM.
- Test connectivity: Verify that the onboarding process was successful by testing the bi-directional connection between the apps that run in the service mesh in your workload cluster and a test app on your VM.
- Launch the UI (optional): To visualize the connection to your VM in your Gloo Mesh Enterprise setup, you can launch the Gloo UI.
For more information about onboarding VMs to a service mesh, see the high-level architectural overview of Istio’s virtual machine integration.
Step 1: Create compute resources
In your cloud provider, create the virtual machine, management cluster, and at least one workload cluster, and configure IAM permissions for the SPIRE server.
Step 2: Set up the management cluster
Install the Gloo Platform management plane in the management cluster. Note that the installation settings included in these steps are tailored for basic setups. For more information about advanced settings, see the setup documentation.
Set your Gloo Mesh Enterprise license key as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing. To check your license's validity, you can run `meshctl license check --key $(echo ${GLOO_MESH_LICENSE_KEY} | base64 -w0)`.

    export GLOO_MESH_LICENSE_KEY=<license_key>
If you plan to deploy an ingress gateway to manage ingress traffic to mesh workloads and you want to apply policies, such as rate limits, external authentication, or Web Application Firewalls to that gateway, you must also have a Gloo Mesh Gateway license. Without a Gloo Mesh Gateway license, you can only set up simple routing rules to match and forward traffic to mesh workloads. Save the Gloo Mesh Gateway license key as an additional environment variable.
export GLOO_MESH_GATEWAY_LICENSE_KEY=<license-key>
Save the following settings in a `mgmt-server.yaml` Helm values file.
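A minimal sketch of what such a values file can contain follows. The key names are assumptions based on the `gloo-platform` Helm chart; confirm them against the Helm reference for your version before you use them.

```yaml
# Hypothetical mgmt-server.yaml sketch; verify key names against the
# gloo-platform Helm chart reference for your version.
licensing:
  licenseKey: <license_key>     # or reference a license secret instead
common:
  cluster: mgmt-cluster         # assumed name of your management cluster
glooMgmtServer:
  enabled: true
  serviceType: LoadBalancer     # expose the management server to workload clusters
glooUi:
  enabled: true
telemetryGateway:
  enabled: true
  service:
    type: LoadBalancer          # expose the OTel gateway for remote collectors
featureGates:
  ExternalWorkloads: true       # assumed gate that enables VM onboarding
```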
Install the Gloo Platform management plane in your management cluster. This command uses a basic profile to create a `gloo-mesh` namespace, install the management plane components in your management cluster, such as the management server, and enable the ability to add external workloads to your Gloo environment.

    meshctl install \
      --kubecontext ${MGMT_CONTEXT} \
      --version 2.4.16 \
      --profiles mgmt-server \
      --chart-values-file mgmt-server.yaml \
      --set enabledExperimentalApi="{externalworkloads.networking.gloo.solo.io/v2alpha1,spireregistrationentries.internal.gloo.solo.io/v2alpha1}"
Verify that the management plane pods are running.
kubectl get pods -n gloo-mesh --context ${MGMT_CONTEXT}
Save the external address and port that were assigned by your cloud provider to the Gloo OpenTelemetry (OTel) gateway load balancer service. The OTel collector agents in the workload cluster and on the VM send metrics to this address.
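For example, assuming the gateway service keeps its default name `gloo-telemetry-gateway` and exposes an `otlp` port, you can capture the address like this:

```bash
# Look up the external address and OTLP port of the OTel gateway service.
TELEMETRY_GATEWAY_IP=$(kubectl get svc gloo-telemetry-gateway -n gloo-mesh --context ${MGMT_CONTEXT} \
  -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
TELEMETRY_GATEWAY_PORT=$(kubectl get svc gloo-telemetry-gateway -n gloo-mesh --context ${MGMT_CONTEXT} \
  -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
echo ${TELEMETRY_GATEWAY_ADDRESS}
```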
Create a workspace that includes relevant namespaces in all clusters, and create workspace settings that enable communication through the east-west gateway. If you do not want to include the `gloo-mesh` namespace in this `demo` workspace, create a separate `admin` workspace for the `gloo-mesh` namespace, and use the workspace import-export functionality to import the `gloo-telemetry-collector` Kubernetes service in the `gloo-mesh` namespace to the application workspace. For more information, see Organize team resources with workspaces.

    kubectl apply --context ${MGMT_CONTEXT} -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: Workspace
    metadata:
      name: demo
      namespace: gloo-mesh
    spec:
      workloadClusters:
      - name: '*'
        namespaces:
        - name: sleep
        - name: httpbin
        - name: vm-config
        - name: istio-eastwest
        - name: gloo-mesh
    ---
    apiVersion: admin.gloo.solo.io/v2
    kind: WorkspaceSettings
    metadata:
      name: demo
      namespace: gloo-mesh
    spec:
      options:
        eastWestGateways:
        - selector:
            labels:
              istio: eastwestgateway
    EOF
Decide on the type of certificates for the Istio installations you deploy to your workload cluster in the next section.
- Default Istio certificates: To use the default self-signed certificates, no further steps are required. When you deploy Istio in the next section, the Istio components use the self-signed certificates by default.
- Custom Istio certificates: To set up your own certificates to deploy Istio, complete the following example steps. These steps show how to use Istio’s certificate generator tool to quickly generate self-signed root and intermediate CA certificates and keys. For more information and advanced certificate setup options, see Istio certificates.
Create the `istio-system` namespace on the workload cluster, so that the certificate secret can be created in that namespace.

    kubectl create namespace istio-system --context ${REMOTE_CONTEXT}
Navigate to the Istio certs directory.
cd istio-${ISTIO_VERSION}/tools/certs
Generate a self-signed root CA certificate and private key.
    make -f Makefile.selfsigned.mk \
      ROOTCA_CN="Solo Root CA" \
      ROOTCA_ORG=Istio \
      root-ca
Store the root CA certificate, private key, and certificate chain in a Kubernetes secret on the management cluster.

    cp root-cert.pem ca-cert.pem
    cp root-key.pem ca-key.pem
    cp root-cert.pem cert-chain.pem
    kubectl --context ${MGMT_CONTEXT} -n gloo-mesh create secret generic my-root-trust-policy.gloo-mesh \
      --from-file=./ca-cert.pem \
      --from-file=./ca-key.pem \
      --from-file=./cert-chain.pem \
      --from-file=./root-cert.pem
Create the root trust policy in the management cluster and reference the root CA Kubernetes secret in the `spec.config.mgmtServerCa.secretRef` section. You can optionally customize the number of days the root and derived intermediate CA certificates are valid for by specifying the `ttlDays` in the `mgmtServerCa` (root CA) and `intermediateCertOptions` (intermediate CA) sections of your root trust policy.
The following example policy sets up the root CA with the credentials that you stored in the `my-root-trust-policy.gloo-mesh` secret. The root CA certificate is valid for 730 days. The intermediate CA certificates that are automatically created by Gloo are valid for 1 day.

    kubectl apply --context ${MGMT_CONTEXT} -f- << EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: RootTrustPolicy
    metadata:
      name: root-trust-policy
      namespace: gloo-mesh
    spec:
      config:
        intermediateCertOptions:
          secretRotationGracePeriodRatio: 0.1
          ttlDays: 1
        mgmtServerCa:
          generated:
            ttlDays: 730
          secretRef:
            name: my-root-trust-policy.gloo-mesh
            namespace: gloo-mesh
    EOF
Verify that the `cacerts` Kubernetes secret was created in the `istio-system` namespace on the workload cluster. This secret represents the intermediate CA and is used by the Istio control plane, istiod, to issue leaf certificates to the workloads in your service mesh.

    kubectl get secret cacerts -n istio-system --context ${REMOTE_CONTEXT}
Verify the certificate chain for the intermediate CA. Because the intermediate CA was derived from the root CA, the root CA must be listed as the `root-cert` in the `cacerts` Kubernetes secret.
- Get the root CA certificate from the `my-root-trust-policy.gloo-mesh` secret on the management cluster.

      kubectl get secret my-root-trust-policy.gloo-mesh -n gloo-mesh -o jsonpath='{.data.ca-cert\.pem}' --context ${MGMT_CONTEXT} | base64 --decode

- In each workload cluster, get the root CA certificate that is listed as the `root-cert` in the `cacerts` Kubernetes secret.

      kubectl get secret cacerts -n istio-system --context ${REMOTE_CONTEXT} -o jsonpath='{.data.root-cert\.pem}' | base64 --decode

- Verify that the root CA certificate that is listed in the intermediate CA secret matches the root CA certificate that was created by the Gloo root trust policy, as in the sketch after this list.
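For example, you can compare the two values directly with `diff`; no differences printed (followed by the confirmation message) means the chains match.

```bash
# Compare the root CA from the root trust policy secret with the root-cert
# in the cacerts secret; diff prints nothing when the certificates match.
diff \
  <(kubectl get secret my-root-trust-policy.gloo-mesh -n gloo-mesh --context ${MGMT_CONTEXT} \
      -o jsonpath='{.data.ca-cert\.pem}' | base64 --decode) \
  <(kubectl get secret cacerts -n istio-system --context ${REMOTE_CONTEXT} \
      -o jsonpath='{.data.root-cert\.pem}' | base64 --decode) \
  && echo "Root CA certificates match"
```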
Step 3: Set up the workload cluster
Set up your workload cluster to communicate with the management cluster and your VM, including deploying test apps, deploying Istio, and registering the workload cluster with the Gloo management plane.
Deploy test apps
To verify connectivity from the VM to services that run in the workload cluster, deploy the `sleep` and `httpbin` sample applications to your workload cluster.
Create the `sleep` app, and label the `sleep` namespace for Istio sidecar injection.

    kubectl --context ${REMOTE_CONTEXT} create namespace sleep
    kubectl --context ${REMOTE_CONTEXT} label ns sleep istio-injection=enabled
    kubectl --context ${REMOTE_CONTEXT} apply -n sleep -f https://raw.githubusercontent.com/istio/istio/1.18.7/samples/sleep/sleep.yaml
Create the `httpbin` app, and label the `httpbin` namespace for Istio sidecar injection.

    kubectl --context ${REMOTE_CONTEXT} create namespace httpbin
    kubectl --context ${REMOTE_CONTEXT} label ns httpbin istio-injection=enabled
    kubectl --context ${REMOTE_CONTEXT} apply -n httpbin -f https://raw.githubusercontent.com/istio/istio/1.18.7/samples/httpbin/httpbin.yaml
Get the IP address of the `httpbin` pod.

    kubectl --context ${REMOTE_CONTEXT} get pods -n httpbin -o wide
Log in to your VM, and curl the pod IP address. This ensures that the VM and workload cluster can reach each other on the same network subnet before you deploy Istio sidecars to either.

    curl -s <httpbin_pod_ip>:80

Note: If the curl is unsuccessful, verify that the security rules that you created for the VM allow outbound traffic and permit TCP traffic on ports 22, 80, and 5000, as in the example after this note.
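For example, on GCP a firewall rule like the following can permit this traffic. The rule name, network, and source range are placeholders; adapt them to the subnet that your VM and clusters share (on AWS, the equivalent is a security group rule):

```bash
# Hypothetical GCP firewall rule for the onboarding ports.
gcloud compute firewall-rules create vm-onboarding-allow \
  --network=<network-name> \
  --source-ranges=<subnet-cidr> \
  --allow=tcp:22,tcp:80,tcp:5000
```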
Deploy Istio
Set up the Istio control plane and gateways in the workload cluster. Note that the installation settings included in these steps are tailored for basic setups.
Create the following Istio namespaces, and label the `istio-system` namespace with `topology.istio.io/network=${REMOTE_CLUSTER}`.

    kubectl --context ${REMOTE_CONTEXT} create namespace istio-ingress
    kubectl --context ${REMOTE_CONTEXT} create namespace istio-eastwest
    kubectl --context ${REMOTE_CONTEXT} create namespace istio-system
    kubectl --context ${REMOTE_CONTEXT} label namespace istio-system topology.istio.io/network=${REMOTE_CLUSTER}
Download the IstioOperator resource, which contains Istio installation settings that are required for onboarding a VM to the service mesh.
Update the file with your workload cluster name, and install the control plane and gateways, for example with `istioctl` as sketched below.
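Assuming you saved the downloaded resource as `istio-operator.yaml` (the filename is an assumption), the installation can look like this:

```bash
# Install the Istio control plane and gateways from the operator file.
istioctl install --context ${REMOTE_CONTEXT} -f istio-operator.yaml
```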
Verify that the Istio pods are healthy.
kubectl --context ${REMOTE_CONTEXT} get pods -n istio-system
Verify that the load balancer service for the east-west gateway is created. Note that it might take a few minutes for an external IP address to be assigned. The LoadBalancer service that exposes the east-west gateway allows the VM to access the cluster’s service mesh and be onboarded into the mesh. After the VM is onboarded, all traffic sent to and from the VM goes through the east-west gateway.
kubectl --context ${REMOTE_CONTEXT} get svc -n istio-eastwest
Roll out a restart to the test apps that you deployed prior to the Istio installation, so that they are redeployed with an Istio sidecar.

    kubectl --context ${REMOTE_CONTEXT} rollout restart deploy/sleep -n sleep
    kubectl --context ${REMOTE_CONTEXT} rollout restart deploy/httpbin -n httpbin
Enable telemetry for all workloads in the workload cluster.

    kubectl apply --context ${REMOTE_CONTEXT} -f - <<EOF
    apiVersion: telemetry.istio.io/v1alpha1
    kind: Telemetry
    metadata:
      name: demo
      namespace: istio-system
    spec:
      # no selector specified, applies to all workloads
      metrics:
      - providers:
        - name: prometheus
    EOF
Register the workload cluster
Register the workload cluster with the Gloo management server. Note that the registration settings included in these steps are tailored for basic setups. For more information about advanced settings, see the setup documentation.
Save the settings that are required to deploy SPIRE and PostgreSQL when you register your workload cluster. The steps vary based on the certificates you used to deploy Istio.
- Default Istio certificates: If you used the default Istio certificates and did not set up and manage your own certificates to deploy Istio, you can use the default certificates to also secure the SPIRE server.
  - Copy the Istio root CA secret and use it as the certificate authority for the SPIRE server.

        kubectl --context ${REMOTE_CONTEXT} create ns gloo-mesh
        kubectl --context ${REMOTE_CONTEXT} get secret istio-ca-secret -n istio-system -o yaml | \
          grep -v '^\s*namespace:\s' | \
          sed 's/istio-ca-secret/spire-ca/' | \
          kubectl --context ${REMOTE_CONTEXT} apply -n gloo-mesh -f -

  - Save the following settings for SPIRE and PostgreSQL in an `agent.yaml` Helm values file, as in the sketch after this list. Note that if you want to use a MySQL database instead, you can change the `databaseType` to `mysql`. For more information, see the SPIRE docs.
- Custom Istio certificates: If you set up your own certificates to deploy Istio, you can also use these certificates to secure the SPIRE server.
  - Be sure that you created a `RootTrustPolicy` as part of the custom Istio certificate setup process that you followed. The root trust policy allows the Gloo agent to automatically create a `spire-ca` secret, which contains the Istio root CA for the SPIRE server to use as its certificate authority.
  - Save the following settings for SPIRE and PostgreSQL in an `agent.yaml` Helm values file, as in the sketch after this list.
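As a hypothetical sketch of what the SPIRE and PostgreSQL sections of `agent.yaml` can look like (the key names are assumptions based on the `gloo-platform` chart; confirm them against the setup documentation):

```yaml
# Hypothetical agent.yaml sketch; verify key names against the
# gloo-platform Helm chart reference for your version.
glooSpireServer:
  enabled: true              # deploy a SPIRE server to the workload cluster
  controller:
    verbose: true
postgresql:
  enabled: true              # default datastore for the SPIRE server
  global:
    postgresql:
      auth:
        database: spire
        username: spire
        password: <password> # placeholder; choose your own credentials
featureGates:
  ExternalWorkloads: true    # assumed gate that enables VM onboarding
```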
Register the workload cluster with the management server. This command uses basic profiles to install the Gloo agent, rate limit server, and external auth server, as well as the values file to install the SPIRE and PostgreSQL deployments.

    meshctl cluster register ${REMOTE_CLUSTER} \
      --kubecontext ${MGMT_CONTEXT} \
      --remote-context ${REMOTE_CONTEXT} \
      --version 2.4.16 \
      --profiles agent,ratelimit,extauth \
      --telemetry-server-address ${TELEMETRY_GATEWAY_ADDRESS} \
      --gloo-mesh-agent-chart-values agent.yaml \
      --set enabledExperimentalApi="{externalworkloads.networking.gloo.solo.io/v2alpha1,spireregistrationentries.internal.gloo.solo.io/v2alpha1}"
Verify that the Gloo data plane components are healthy, and that your Gloo Mesh Enterprise setup is correctly installed.
    meshctl check --kubecontext ${REMOTE_CONTEXT}
    meshctl check --kubecontext ${MGMT_CONTEXT}
Depending on your cloud provider, annotate the necessary Kubernetes service accounts in your workload cluster with the IAM service account that you created earlier, as in the example below.
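For example, on a GKE cluster that uses Workload Identity, the annotation can look like the following. The `gloo-spire-server` service account name and the IAM service account are assumptions; substitute the accounts from your own setup (on EKS, you would use the `eks.amazonaws.com/role-arn` annotation instead):

```bash
# Hypothetical binding of the SPIRE server's Kubernetes service account
# to the IAM service account that you created in step 1.
kubectl annotate serviceaccount gloo-spire-server -n gloo-mesh --context ${REMOTE_CONTEXT} \
  iam.gke.io/gcp-service-account=<iam-sa-name>@<project-id>.iam.gserviceaccount.com
```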
Verify that the pods in your workload cluster are healthy.
kubectl --context ${REMOTE_CONTEXT} get pods -n gloo-mesh
Example output:
    NAME                                   READY   STATUS    RESTARTS   AGE
    gloo-mesh-agent-d689d4544-g8fzn        1/1     Running   0          4m29s
    gloo-spire-server-cd88fb77d-jk7mr      2/2     Running   0          53s
    gloo-telemetry-collector-agent-7jzl4   1/1     Running   0          4m29s
    gloo-telemetry-collector-agent-86ktk   1/1     Running   0          4m28s
    gloo-telemetry-collector-agent-l8c99   1/1     Running   0          4m29s
    gloo-telemetry-collector-agent-pkh2v   1/1     Running   0          4m28s
    gloo-telemetry-collector-agent-pmqrh   1/1     Running   0          4m29s
    gloo-telemetry-collector-agent-wnq7d   1/1     Running   0          4m28s
    postgresql-0                           1/1     Running   0          4m28s
Step 4: Onboard the VM
Onboard the VM to your Gloo Mesh Enterprise setup by installing the Istio sidecar, OpenTelemetry (OTel) collector, and SPIRE agents on the VM.
On the workload cluster, create a namespace for the VM configuration that you set up in subsequent steps.
kubectl --context ${REMOTE_CONTEXT} create namespace vm-config
Save the following `ExternalWorkload` Gloo resource in a file named `externalworkload.yaml` to create an identity for apps that run on the VM. The example resource provisions an identity in the `vm-config` namespace of the workload cluster, for services that listen on port 5000 and that run on a VM of the specified identity selector. For more information and available options, see the API reference documentation.
This example creates an identity for only the test app that you create on the VM in subsequent steps, which you select by specifying port `5000`. If you run multiple apps on your VM that you want to include in the service mesh, you can specify multiple ports to select each app. Then, when you create a virtual destination for the test app in subsequent steps, you can create additional virtual destinations for each of your other apps.
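As a hypothetical sketch of the shape this resource can take, based on the names used throughout this guide (the `vm-ext-workload` name, the `app: vm-ext-workload` label, and port `5000`); the `identitySelector` contents depend on your cloud provider, so check the API reference for the exact fields:

```yaml
# Hypothetical externalworkload.yaml sketch; confirm all fields against the
# ExternalWorkload API reference for your Gloo version.
apiVersion: networking.gloo.solo.io/v2alpha1
kind: ExternalWorkload
metadata:
  name: vm-ext-workload
  namespace: vm-config
  labels:
    app: vm-ext-workload          # label that the virtual destination selects later
spec:
  connectedClusters:
    # Assumed mapping of your workload cluster name to the VM config namespace
    <workload-cluster-name>: vm-config
  identitySelector: {}            # provider-specific (GCP or AWS) identity selection
  ports:
  - name: http
    number: 5000                  # port that the test app on the VM listens on
```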
Create the `ExternalWorkload` resource in the workload cluster.

    kubectl apply --context ${REMOTE_CONTEXT} -f externalworkload.yaml
Confirm that a `WorkloadGroup` Istio resource is created in your VM configuration namespace. This resource summarizes all workloads on the VM that you selected by port number in the `ExternalWorkload`.

    kubectl --context ${REMOTE_CONTEXT} get workloadgroup -n vm-config
Example output:
    NAME              AGE
    vm-ext-workload   10s
In the management cluster, use `meshctl` to generate the bootstrap bundle that is required to install the Istio sidecar, OTel collector, and SPIRE agents on the VM. The generated bundle, `bootstrap.tar.gz`, is created based on the `ExternalWorkload` configuration that you previously applied in your workload cluster. For more information about the options for this command, see the CLI reference.
The output `/tmp/bootstrap.tar.gz` archive on your local machine contains the `cluster.env`, `hosts`, `mesh.yaml`, `root-cert.pem`, and `spire-agent.conf` files.

    INFO ✅ Bootstrap bundle generated: /tmp/bootstrap.tar.gz
On the VM, create the `gloo-mesh-config` directory, and navigate to it.

    mkdir gloo-mesh-config
    cd gloo-mesh-config
From your local machine, copy the bootstrap bundle and the bootstrap script to the `~/gloo-mesh-config/` directory on the VM.

    scp /tmp/bootstrap.tar.gz <username>@<instance_address>:~/gloo-mesh-config/
    scp bootstrap.sh <username>@<instance_address>:~/gloo-mesh-config/
Get the URLs for the packages to install the Istio sidecar, OTel collector, and SPIRE agents on your VM, and save them as environment variables on the VM.
- Log in to the Support Center and review the Solo packages for external workload integration support article.
- Save the Istio package URL.
  - For the same Istio version that you previously downloaded, open the link for the cloud storage bucket that has Solo distributions of Istio.
  - In the storage bucket, open the `-solo` directory for the version you want to use, such as `1.18.7-patch3-solo/`.
  - Depending on your VM image type, open either the `deb/` or `rpm/` directory.
  - On either the `istio-sidecar` or `istio-sidecar-arm64` binary package, click the menu button, and click Copy Public URL.
  - On your VM, save the URL as an environment variable.

        export ISTIO_URL=<istio_package_url>
- Save the OTel package URL.
  - Open the link for the Solo cloud storage bucket.
  - In the storage bucket, open the directory for the Gloo version that you use.
  - Open the `otel/` directory.
  - On the package for your VM type, click the menu button, and click Copy Public URL.
  - On your VM, save the URL as an environment variable.

        export OTEL_URL=<otel_package_url>
- Save the SPIRE package URL.
  - Open the link for the Solo cloud storage bucket.
  - In the storage bucket, open the directory for the Gloo version that you use.
  - Open the `spire/` directory.
  - On the package for your VM type, click the menu button, and click Copy Public URL.
  - On your VM, save the URL as an environment variable.

        export SPIRE_URL=<spire_package_url>
On the VM, install the Istio sidecar, OTel collector, and SPIRE agents.

    sudo ./bootstrap.sh --istio-pkg ${ISTIO_URL} --otel-pkg ${OTEL_URL} --spire-pkg ${SPIRE_URL} -b bootstrap.tar.gz --install --start
Step 5: Test connectivity
Verify that the onboarding process was successful by testing the bi-directional connection between the apps that run in the service mesh in your workload cluster and a test app on your VM.
On your VM, deploy a simple HTTP server that listens on port 5000. This app is represented by the `ExternalWorkload` identity that you previously created.

    nohup python3 -m http.server 5000 &
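Optionally, confirm that the test server responds locally on the VM before you test access through the mesh:

```bash
# Quick local check; the Python server returns a directory listing.
curl -s localhost:5000 | head
```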
From the VM, curl the httpbin service in your workload cluster. Note that this curl command uses the app’s cluster DNS name instead of its pod IP address, because the VM is now connected to the Istio service mesh that runs in your workload cluster.
curl -s httpbin.httpbin:8000 -v
Example output:
    * Trying 10.XX.XXX.XXX:8000...
    * Connected to httpbin.httpbin (10.XX.XXX.XXX) port 8000 (#0)
    > GET / HTTP/1.1
    > Host: httpbin.httpbin:8000
    > User-Agent: curl/7.74.0
    > Accept: */*
    >
    * Mark bundle as not supporting multiuse
    < HTTP/1.1 200 OK
    < server: envoy
    < date: Mon, 10 Jul 2023 18:00:32 GMT
    < content-type: text/html; charset=utf-8
    < content-length: 9593
    < access-control-allow-origin: *
    < access-control-allow-credentials: true
    < x-envoy-upstream-service-time: 4
    <
    <!DOCTYPE html>
    ...
In the workload cluster, create a Gloo `VirtualDestination` resource so that apps in the service mesh can also access the HTTP server test app that runs on the VM through the `testapp.vd` hostname. Note that if you selected multiple apps that run on the VM in your `ExternalWorkload` resource, you can create a virtual destination for each app by using the app's port that you specified.

    kubectl --context ${REMOTE_CONTEXT} apply -f - <<EOF
    apiVersion: networking.gloo.solo.io/v2
    kind: VirtualDestination
    metadata:
      labels:
        app: testapp
      name: testapp-vd
      namespace: vm-config
    spec:
      externalWorkloads:
      - labels:
          # Label that you gave to the ExternalWorkload resource
          app: vm-ext-workload
      hosts:
      # Hostname to use for the app
      - testapp.vd
      ports:
      # Port that you specified in the ExternalWorkload resource to select the test app
      - number: 5000
        protocol: HTTP
        targetPort:
          name: http
    EOF
From the `sleep` app in your workload cluster, curl the HTTP server on the VM by using the `testapp.vd` hostname.

    kubectl --context ${REMOTE_CONTEXT} exec deploy/sleep -n sleep -- curl -s testapp.vd:5000 -v
Example output:
    * Trying 244.XXX.XXX.XX:5000...
    * Connected to testapp.vd (244.XXX.XXX.XX) port 5000 (#0)
    > GET / HTTP/1.1
    > Host: testapp.vd:5000
    > User-Agent: curl/8.1.2
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < server: envoy
    < date: Mon, 10 Jul 2023 18:00:50 GMT
    < content-type: text/html; charset=utf-8
    < content-length: 600
    < x-envoy-upstream-service-time: 3
    <
    { [600 bytes data]
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
    <html>
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    <title>Directory listing for /</title>
    </head>
    <body>
    <h1>Directory listing for /</h1>
    <hr>
    <ul>
    <li><a href=".bash_history">.bash_history</a></li>
    <li><a href=".bash_logout">.bash_logout</a></li>
    <li><a href=".bashrc">.bashrc</a></li>
    <li><a href=".profile">.profile</a></li>
    <li><a href=".ssh/">.ssh/</a></li>
    <li><a href="gm-config/">gm-config/</a></li>
    <li><a href="nohup.out">nohup.out</a></li>
    </ul>
    <hr>
    </body>
    </html>
    * Connection #0 to host testapp.vd left intact
Step 6 (optional): Launch the UI
To visualize the connection to your VM in your Gloo Mesh Enterprise setup, you can launch the Gloo UI.
- Access the Gloo UI.
meshctl dashboard --kubecontext ${MGMT_CONTEXT}
- Click the Graph tab to open the network visualization graph for your Gloo Mesh Enterprise setup.
- From the footer toolbar, click Layout Settings.
- Toggle Group By to `INFRA` to review the clusters, virtual machines, and Kubernetes namespaces that your app nodes are organized in. This view also shows details for the cloud provider infrastructure, such as the VPCs and subnets that your resources are deployed to.
- Verify that you see your VM connection to your workload cluster. In this example graph, the VM instance connects to the `httpbin` app in the workload cluster.
- You can also see more information about the VM instance by clicking its icon, which opens the details pane for the connection. In this example details pane, the title `helloworld -> httpbin` indicates that an external workload named `helloworld`, which represents apps on the VM, connects to the `httpbin` app in the workload cluster.
Congratulations! Your VM is now registered with Gloo Mesh Enterprise. You can now create Gloo resources for the workloads that you run on the VM, such as Gloo traffic policies. For example, if you selected multiple apps in your `ExternalWorkload` resource and want to apply a policy to all of those apps, you can use the label on the `ExternalWorkload` in the policy selector. Or, for policies that apply to destinations, you can select only the virtual destination for one of the apps. For more information, see Policy enforcement.
Uninstall
On the VM:
- Remove the Istio sidecar, OTel collector, and SPIRE agents.

      sudo ./bootstrap.sh --uninstall

- Remove the bootstrap script and bundle, the agent packages, and the test app data.

      cd ..
      rm -r gloo-mesh-config
On the workload cluster:
- Delete the `vm-config`, `sleep`, and `httpbin` namespaces.

      kubectl --context ${REMOTE_CONTEXT} delete namespace vm-config
      kubectl --context ${REMOTE_CONTEXT} delete namespace sleep
      kubectl --context ${REMOTE_CONTEXT} delete namespace httpbin
- Delete the `Gateway` and `VirtualService` resources from the `istio-eastwest` namespace.

      kubectl --context ${REMOTE_CONTEXT} delete Gateway istiod-gateway -n istio-eastwest
      kubectl --context ${REMOTE_CONTEXT} delete Gateway spire-gateway -n istio-eastwest
      kubectl --context ${REMOTE_CONTEXT} delete VirtualService istiod-vs -n istio-eastwest
      kubectl --context ${REMOTE_CONTEXT} delete VirtualService spire-vs -n istio-eastwest
Continue to use your Gloo Mesh Enterprise setup, or uninstall it.
- To continue to use your Gloo Mesh Enterprise setup, you can optionally remove the SPIRE and PostgreSQL servers by following the upgrade guide to remove their settings from the Helm values for your workload cluster.
- To uninstall your Gloo Mesh Enterprise setup:
  - Uninstall the Istio service mesh.

        istioctl uninstall --context ${REMOTE_CONTEXT} --purge

  - Remove the Istio namespaces.

        kubectl delete ns --context ${REMOTE_CONTEXT} istio-system
        kubectl delete ns --context ${REMOTE_CONTEXT} istio-ingress
        kubectl delete ns --context ${REMOTE_CONTEXT} istio-eastwest

  - Follow the steps in Uninstall Gloo Mesh Enterprise to deregister the workload cluster and uninstall the Gloo management plane.