Alpha: Onboard an external workload to the service mesh

Onboarding external workloads to Gloo Mesh is an alpha feature. As of 2.4.0, this feature is tested for onboarding VMs that run in Google Cloud Platform (GCP) and Amazon Web Services (AWS). Alpha features are likely to change, are not fully tested, and are not supported for production. For more information, see Gloo feature maturity.

Onboard external workloads, such as virtual machines, to an Istio service mesh in your Gloo Mesh environment.

About

As you build your Gloo Mesh environment, you might want to add a virtual machine to your setup. For example, you might run an app or service in a VM that must communicate with services in the Istio service mesh that runs in your Kubernetes cluster.

To onboard your VM into the service mesh, you deploy three agents to your VM: an Istio sidecar agent, a SPIRE agent, and an OpenTelemetry (OTel) collector agent.

Istio
By adding an Istio sidecar agent to your VM, you enable fully bi-directional communication between apps in your cluster's service mesh and apps on the VM. Because all communication between services in the workload cluster and services on the VM goes through the workload cluster's east-west gateway, the communication is mTLS-secured. You also gain the benefit of applying Gloo resources, such as Gloo traffic policies, to the apps on your VM.

SPIRE
To securely identify a virtual machine, Gloo Mesh uses SPIRE, a production-ready implementation of the SPIFFE APIs. SPIRE issues SPIFFE Verifiable Identity Documents (SVIDs) to workloads and verifies the SVIDs of other workloads. In this guide, you deploy a SPIRE server to your workload cluster and a SPIRE agent to the VM. To onboard the VM into your Gloo setup, the SPIRE agent must authenticate and verify itself when it first connects to the server, which is known as node attestation. During node attestation, the agent and server together verify the identity of the node that the agent is deployed to. This process ensures that the workload cluster and your VM can securely connect to each other.
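
For illustration, an SVID identifies a workload by a SPIFFE ID, a URI that is scoped to a trust domain. The following shape is only a sketch; the actual IDs in your setup depend on your trust domain and on the ExternalWorkload resource that you create later in this guide.

    # Illustrative only: general shape of a SPIFFE ID for a mesh workload
    spiffe://<trust-domain>/ns/<namespace>/sa/<service-account>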

OTel
To ensure that metrics can be collected from the VM in the same way that they are collected from workload clusters, you deploy an OTel collector agent to the VM. The collector sends metrics through the workload cluster's east-west gateway to the OTel gateway on the management cluster. Additionally, as part of this guide, you enable the collector agents to gather metadata about the compute instances that they are deployed to. This compute instance metadata helps you better visualize your Gloo Mesh setup across your cloud provider infrastructure network. For more information about metrics collection in Gloo Platform, see Set up the Gloo OTel pipeline.
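
For reference, the collector that runs on the VM follows the standard OpenTelemetry collector configuration format. The following snippet is only an illustrative sketch of the key piece, an OTLP exporter that targets the telemetry gateway address that you save later in this guide; the configuration that the onboarding process generates for you differs.

    # Illustrative sketch only; the real collector config is generated during onboarding
    exporters:
      otlp:
        endpoint: "<telemetry-gateway-address>"   # for example, <ip-or-hostname>:4317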

Before you begin

Install CLI tools

Install the following CLI tools, which are used throughout this guide:

  • meshctl, the Gloo command line tool, for installing Gloo Platform, registering clusters, and generating the VM bootstrap bundle. Download the version that matches your Gloo Platform installation, such as 2.4.1.
  • kubectl, the Kubernetes command line tool.
  • istioctl, the Istio command line tool, for the Istio version that you install in the workload cluster.
  • helm, the Kubernetes package manager, which is required for the AWS Load Balancer Controller installation.
  • The CLI tools for your cloud provider, such as gcloud for GCP, or aws and eksctl for AWS.
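
To confirm that the tools are installed and on your PATH, you can check their versions. These commands are only a quick sanity check; the exact output differs by version.

    meshctl version
    istioctl version --remote=false
    kubectl version --client
    helm version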

Overview

This guide walks you through the following general steps:

  1. Create compute resources: In your cloud provider, create two Kubernetes clusters and a VM on the same network subnet. Additionally, configure IAM permissions for the SPIRE server you later deploy to the workload cluster.
  2. Set up the management cluster: Deploy the Gloo management plane to the management cluster. The installation settings enable the onboarding of external workloads to the service mesh.
  3. Set up the workload cluster: Set up your workload cluster to communicate with the management cluster and your VM, including deploying test apps, deploying Istio, and registering the workload cluster with the Gloo management plane. The registration settings include deploying a SPIRE server to the workload cluster, and using PostgreSQL as the default datastore for the SPIRE server.
  4. Onboard the VM: Onboard the VM to your Gloo Mesh environment by generating a bootstrap bundle that installs the Istio sidecar, OpenTelemetry (OTel) collector, and SPIRE agents on the VM.
  5. Test connectivity: Verify that the onboarding process was successful by testing the bi-directional connection between the apps that run in the service mesh in your workload cluster and a test app on your VM.
  6. Launch the UI (optional): To visualize the connection to your VM in your Gloo Mesh setup, you can launch the Gloo UI.

For more information about onboarding VMs to a service mesh, see the high-level architectural overview of Istio’s virtual machine integration.

Step 1: Create compute resources

In your cloud provider, create the virtual machine, management cluster, and at least one workload cluster, and configure IAM permissions for the SPIRE server.

  1. Save your GCP project ID and the VPC name in environment variables.

    export PROJECT=<gcp_project_id>
    export VPC=<gcp_network_of_the_vm>
    
  2. Create or use existing Google Kubernetes Engine (GKE) clusters.

    1. Create or use at least two clusters. In subsequent steps, you deploy the Gloo management plane to one cluster, and the Gloo data plane and an Istio service mesh to the other cluster.
      • Enable Workload Identity for the workload cluster. Workload Identity allows the Kubernetes service account for the SPIRE server in your workload cluster to act as a GCP IAM service account. The SPIRE server pod can then automatically authenticate as the IAM service account when accessing your VM.
      • Note: The cluster name must be lowercase and alphanumeric, can include hyphens (-) but no other special characters, and must begin with a letter (not a number).
      • For more information, see the System requirements.
    2. Set the names of your clusters from your infrastructure provider.
      export MGMT_CLUSTER=<mgmt-cluster-name>
      export REMOTE_CLUSTER=<workload-cluster-name>
      
    3. Save the kubeconfig contexts for your clusters. Run kubectl config get-contexts, look for your cluster in the CLUSTER column, and get the context name in the NAME column. Note: Do not use context names with underscores. The context name is used as a SAN specification in the generated certificate that connects workload clusters to the management cluster, and underscores in SAN are not FQDN compliant. You can rename a context by running kubectl config rename-context "<oldcontext>" <newcontext>.
      export MGMT_CONTEXT=<management-cluster-context>
      export REMOTE_CONTEXT=<workload-cluster-context>
      
  3. Create a virtual machine to connect to your service mesh.

    1. Create a virtual machine that meets the following requirements.
      • Currently, Debian and RPM images are supported.
      • The VM must be connected to the same subnet as your workload cluster.
    2. For testing connectivity, allow all outbound traffic, and in the associated VPC firewall rules for your VM, permit inbound TCP traffic on ports 22 (SSH access), 80 (HTTP traffic), and 5000 (sample app). Example commands:
      gcloud compute --project=$PROJECT firewall-rules create $VPC-allow-ssh --direction=INGRESS --priority=1000 --network=$VPC --action=ALLOW --rules=tcp:22 --source-ranges=0.0.0.0/0 --target-tags=http-server
      gcloud compute --project=$PROJECT firewall-rules create $VPC-allow-http --direction=INGRESS --priority=1000 --network=$VPC --action=ALLOW --rules=tcp:80 --source-ranges=0.0.0.0/0 --target-tags=http-server
      gcloud compute --project=$PROJECT firewall-rules create $VPC-allow-test-app --direction=INGRESS --priority=1000 --network=$VPC --action=ALLOW --rules=tcp:5000 --source-ranges=0.0.0.0/0 --target-tags=http-server
      
    3. Save the VM instance name in an environment variable.
      export VM_NAME=<vm_instance_name>
      
  4. Create an IAM service account in GCP for the SPIRE server in the workload cluster, and grant IAM permissions so that the server can perform node attestation of the VM instance.

    1. Create an IAM service account in GCP named spireserver. Note that GCP service account IDs must use only lowercase letters, numbers, and hyphens.
      gcloud iam service-accounts create spireserver --project $PROJECT
      
    2. Create an IAM role that grants permission to describe the VM instances in your project.
      gcloud iam roles create SpireComputeViewer --project $PROJECT \
        --title "SPIRE compute viewer" \
        --description "read compute.instances.get" \
        --permissions compute.instances.get,iam.serviceAccounts.getAccessToken
      
    3. Bind the role to the SPIRE GCP IAM service account.
      gcloud projects add-iam-policy-binding $PROJECT \
        --role "projects/$PROJECT/roles/SpireComputeViewer" \
        --member "serviceAccount:spireserver@$PROJECT.iam.gserviceaccount.com"
      
    4. Allow the SPIRE Kubernetes service account to use workload identity, which allows it to act as the GCP IAM service account and describe VMs.
      gcloud iam service-accounts add-iam-policy-binding spireserver@$PROJECT.iam.gserviceaccount.com \
        --project $PROJECT \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:$PROJECT.svc.id.goog[gloo-mesh/gloo-spire-server]"
      
  5. Create an IAM service account in GCP for the OTel collector in the workload cluster, and grant IAM permissions so that the collector can access metadata about the compute instances that the workload cluster is deployed to. Later in this guide, this compute instance metadata helps you better visualize your Gloo Mesh setup across your GCP network.

    1. Create an IAM service account in GCP named otelcollector.
      gcloud iam service-accounts create otelcollector --project $PROJECT
      
    2. Create an IAM role that grants permission to describe the VM instances in your project.
      gcloud iam roles create OTelComputeViewer \
        --project $PROJECT \
        --title "OTel compute viewer" \
        --permissions compute.instances.get,iam.serviceAccounts.getAccessToken
      
    3. Bind the role to the OTel GCP IAM service account.
      gcloud iam service-accounts add-iam-policy-binding otelcollector@$PROJECT.iam.gserviceaccount.com \
        --project $PROJECT \
        --role "projects/$PROJECT/roles/OTelComputeViewer" \
        --member "serviceAccount:$PROJECT.svc.id.goog[gloo-mesh/gloo-telemetry-collector]"
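
    Before you continue, you can optionally double-check the GCP IAM setup. The following commands are only sanity checks and assume the service account names used in this guide.

      # Confirm that the IAM service accounts exist
      gcloud iam service-accounts list --project $PROJECT
      # Confirm the Workload Identity binding on the SPIRE service account
      gcloud iam service-accounts get-iam-policy spireserver@$PROJECT.iam.gserviceaccount.com --project $PROJECT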
      
  1. Create or use existing Amazon Elastic Kubernetes Service (EKS) clusters.

    1. Create or use at least two clusters. In subsequent steps, you deploy the Gloo management plane to one cluster, and the Gloo data plane and an Istio service mesh to the other cluster.
      • Note: The cluster name must be lowercase and alphanumeric, can include hyphens (-) but no other special characters, and must begin with a letter (not a number).
      • For more information, see the System requirements.
    2. Set the names of your clusters from your infrastructure provider.
      export MGMT_CLUSTER=<mgmt-cluster-name>
      export REMOTE_CLUSTER=<workload-cluster-name>
      
    3. Save the kubeconfig contexts for your clusters. Run kubectl config get-contexts, look for your cluster in the CLUSTER column, and get the context name in the NAME column. Note: Do not use context names with underscores. The context name is used as a SAN specification in the generated certificate that connects workload clusters to the management cluster, and underscores in SAN are not FQDN compliant. You can rename a context by running kubectl config rename-context "<oldcontext>" <newcontext>.
      export MGMT_CONTEXT=<management-cluster-context>
      export REMOTE_CONTEXT=<workload-cluster-context>
      
    4. Associate an IAM OIDC provider with the workload cluster. The OIDC provider allows the Kubernetes service account for the SPIRE server in your workload cluster to act as an AWS IAM service account. The SPIRE server pod can then automatically authenticate as the IAM service account when accessing your VM.
  2. Install the required AWS add-ons in your workload cluster.

    1. Save your AWS account ID in an environment variable.
      export ACCOUNT="$(aws sts get-caller-identity --query Account --output text)"
      
    2. Install the EBS CSI driver required by the persistent volume that the PostgreSQL database uses.
      eksctl create iamserviceaccount \
        --name ebs-csi-controller-sa \
        --namespace kube-system \
        --cluster "$REMOTE_CLUSTER" \
        --role-name AmazonEKS_EBS_CSI_DriverRole-$REMOTE_CLUSTER \
        --role-only \
        --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
        --approve \
        --override-existing-serviceaccounts
      
      eksctl create addon \
        --name aws-ebs-csi-driver \
        --cluster "$REMOTE_CLUSTER" \
        --service-account-role-arn "arn:aws:iam::$ACCOUNT:role/AmazonEKS_EBS_CSI_DriverRole-$REMOTE_CLUSTER" \
        --force
      
    3. Install the Load Balancer Controller required to expose load balancer services.
      curl -o /tmp/aws_lb_iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.7/docs/install/iam_policy.json
      
      aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file:///tmp/aws_lb_iam_policy.json
      
      eksctl create iamserviceaccount \
        --name=aws-load-balancer-controller \
        --cluster="$REMOTE_CLUSTER" \
        --namespace=kube-system \
        --role-name AmazonEKSLoadBalancerControllerRole \
        --attach-policy-arn="arn:aws:iam::$ACCOUNT:policy/AWSLoadBalancerControllerIAMPolicy" \
        --approve
      
      helm repo add eks https://aws.github.io/eks-charts
      helm repo update
      
      helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller \
        --kube-context "$REMOTE_CONTEXT" \
        -n kube-system \
        --set clusterName="$REMOTE_CLUSTER" \
        --set serviceAccount.create=false \
        --set serviceAccount.name=aws-load-balancer-controller
      
  3. Create an EKS security group for the VM.

    1. Get the ID of the AWS VPC that the workload cluster is attached to.
    2. Create a security group for that VPC.
      aws ec2 create-security-group --description "VM test" --group-name <name> --vpc-id <eks_cluster_VPC_ID>
      
    3. From the output, save the security group ID in an environment variable.
      export VM_SG_ID=<security group ID>
      
    4. For testing connectivity, allow all outbound traffic, and permit TCP traffic on ports 22 (SSH access), 80 (HTTP traffic), and 5000 (sample app).
      aws ec2 authorize-security-group-ingress --group-id $VM_SG_ID --protocol tcp --port 22 --cidr 0.0.0.0/0
      aws ec2 authorize-security-group-ingress --group-id $VM_SG_ID --protocol tcp --port 80 --cidr 0.0.0.0/0
      aws ec2 authorize-security-group-ingress --group-id $VM_SG_ID --protocol tcp --port 5000 --cidr 0.0.0.0/0
      
  4. Create the VM.

    1. Create an EC2 instance that meets the following requirements:

      • Currently, Debian and RPM images are supported.
      • The VM must be connected to the same subnet as your workload cluster.

      Example command:

      # Create a key pair for SSH
      aws ec2 create-key-pair --key-name vmtest --query 'KeyMaterial' --output text > ~/vmtest.pem
      
      # Create the EC2 instance
      aws ec2 run-instances --image-id ami-03f65b8614a860c29  --instance-type t3.micro --count 1 --security-group-ids $VM_SG_ID --associate-public-ip-address --key-name vmtest --subnet-id <workload_cluster_subnet>
      
    2. Update the security group for the EKS cluster nodegroup to permit traffic from the VM's security group.

  5. Create IAM permissions for the SPIRE server service account so that the server can perform node attestation of the VM instance.

    1. Save the following policy configuration files.
      cat >spire-server-delegate.json <<EOF
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "SpireServerDelegate",
            "Effect": "Allow",
            "Action": [
              "ec2:DescribeInstances",
              "iam:GetInstanceProfile"
            ],
            "Resource": "*"
          }
        ]
      }
      EOF
      
      cat >spire-trust.json <<EOF
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
              "AWS": "arn:aws:iam::$ACCOUNT:root"
            },
            "Action": "sts:AssumeRole",
            "Condition": {}
          }
        ]
      }
      EOF
      
      cat >spire-server.json <<EOF
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "SpireServer",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::$ACCOUNT:role/SpireServerDelegate"
          }
        ]
      }
      EOF
      
    2. Create an IAM role that allows the SPIRE server to describe the EC2 instances in your account.
      aws iam create-policy --policy-name SpireServerDelegate --policy-document file://spire-server-delegate.json
      aws iam create-role --role-name SpireServerDelegate --assume-role-policy-document file://spire-trust.json
      aws iam attach-role-policy --policy-arn arn:aws:iam::$ACCOUNT:policy/SpireServerDelegate --role-name SpireServerDelegate
      
    3. Create an IAM policy to bind the role to the SPIRE service account that you create in subsequent steps.
      aws iam create-policy --policy-name SpireServer --policy-document file://spire-server.json
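
    Before you continue, you can optionally double-check the AWS IAM setup. The following commands are only sanity checks and assume the role and policy names used in this guide.

      # Confirm the delegate role and its attached policy
      aws iam get-role --role-name SpireServerDelegate --query Role.Arn --output text
      aws iam list-attached-role-policies --role-name SpireServerDelegate --output text
      # Confirm the policy that you later attach to the SPIRE service account
      aws iam get-policy --policy-arn arn:aws:iam::$ACCOUNT:policy/SpireServer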
      

Step 2: Set up the management cluster

Install the Gloo Platform control plane in the management cluster. Note that the installation settings included in these steps are tailored for basic setups. For more information about advanced settings, see the setup documentation.

  1. Set your Gloo Mesh license key as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing.

    export GLOO_MESH_LICENSE_KEY=<license_key>
    
  2. Save the following settings in a mgmt-server.yaml Helm values file.

    cat >mgmt-server.yaml <<EOF
    common:
      cluster: $MGMT_CLUSTER
    glooMgmtServer:
      extraEnvs:
        # Enable onboarding external workloads
        FEATURE_ENABLE_EXTERNAL_WORKLOADS:
          value: "true"
      serviceOverrides:
        metadata:
          annotations:
            # Instruct GKE to create a backend service-based 
            # external passthrough NLB
            cloud.google.com/l4-rbs: enabled
    licensing:
      glooMeshLicenseKey: $GLOO_MESH_LICENSE_KEY
    EOF
    
    cat >mgmt-server.yaml <<EOF
    common:
      cluster: $MGMT_CLUSTER
    glooMgmtServer:
      extraEnvs:
        # Enable onboarding external workloads
        FEATURE_ENABLE_EXTERNAL_WORKLOADS:
          value: "true"
      serviceOverrides:
        metadata:
          annotations:
            # AWS-specific annotations for load balancers
            service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
            service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
            service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
            service.beta.kubernetes.io/aws-load-balancer-type: external
    licensing:
      glooMeshLicenseKey: $GLOO_MESH_LICENSE_KEY
    EOF
    

  3. Install the Gloo Platform control plane in your management cluster. This command uses a basic profile to create a gloo-mesh namespace, install the control plane components in your management cluster, such as the management server, and enable the ability to add external workloads to your Gloo environment.

    meshctl install \
      --kubecontext $MGMT_CONTEXT \
      --version 2.4.1 \
      --profiles mgmt-server \
      --chart-values-file mgmt-server.yaml
    
  4. Verify that the control plane pods are running.

    kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
    
  5. Save the external address and port that were assigned by your cloud provider to the Gloo OpenTelemetry (OTel) gateway load balancer service. The OTel collector agents in the workload cluster and on the VM send the metrics that they collect to this address.

    export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    export TELEMETRY_GATEWAY_PORT=$(kubectl -n gloo-mesh get service gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
    export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
    echo $TELEMETRY_GATEWAY_ADDRESS
    
    export TELEMETRY_GATEWAY_HOSTNAME=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    export TELEMETRY_GATEWAY_PORT=$(kubectl -n gloo-mesh get service gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
    export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_HOSTNAME}:${TELEMETRY_GATEWAY_PORT}
    echo $TELEMETRY_GATEWAY_ADDRESS
    

  6. Create a workspace that includes all namespaces in all clusters, and create workspace settings that enable communication through the east-west gateway. For more information, see Organize team resources with workspaces.

    kubectl apply --context $MGMT_CONTEXT -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: Workspace
    metadata:
      name: $MGMT_CLUSTER
      namespace: gloo-mesh
    spec:
      workloadClusters:
        - name: '*'
          namespaces:
            - name: sleep
            - name: httpbin
            - name: vm-config
            - name: istio-eastwest
    ---
    apiVersion: admin.gloo.solo.io/v2
    kind: WorkspaceSettings
    metadata:
      name: $MGMT_CLUSTER
      namespace: gloo-mesh
    spec:
      options:
        eastWestGateways:
        - selector:
            labels:
              istio: eastwestgateway
    EOF
    

  7. Decide on the type of certificates for the Istio installations you deploy to your workload cluster in the next section.

    • Default Istio certificates: To use the default self-signed certificates, no further steps are required. When you deploy Istio in the next section, the Istio components use the self-signed certificates by default.
    • Custom Istio certificates: To set up your own certificates to deploy Istio, complete the following example steps. These steps show how to use Istio's certificate generator tool to quickly generate self-signed root and intermediate CA certificates and keys. For more information and advanced certificate setup options, see Istio certificates.
      1. Create the istio-system namespace on the workload cluster, so that the certificate secret can be created in that namespace.

        kubectl create namespace istio-system --context $REMOTE_CONTEXT
        
      2. Navigate to the Istio certs directory.

        cd istio-$ISTIO_VERSION/tools/certs
        
      3. Generate a self-signed root CA certificate and private key.

        make -f Makefile.selfsigned.mk \
        ROOTCA_CN="Solo Root CA" \
        ROOTCA_ORG=Istio \
        root-ca
        
      4. Store the root CA certificate, private key and certificate chain in a Kubernetes secret on the management cluster.

        cp root-cert.pem ca-cert.pem
        cp root-key.pem ca-key.pem
        cp root-cert.pem cert-chain.pem
                 
        kubectl --context $MGMT_CONTEXT -n gloo-mesh create secret generic my-root-trust-policy.gloo-mesh \
        --from-file=./ca-cert.pem \
        --from-file=./ca-key.pem \
        --from-file=./cert-chain.pem \
        --from-file=./root-cert.pem
        
      5. Create the root trust policy in the management cluster and reference the root CA Kubernetes secret in the spec.config.mgmtServerCa.secretRef section. You can optionally customize how many days the root and derived intermediate CA certificates are valid by setting ttlDays in the mgmtServerCa (root CA) and intermediateCertOptions (intermediate CA) sections of your root trust policy.

        The following example policy sets up the root CA with the credentials that you stored in the my-root-trust-policy.gloo-mesh secret. The root CA certificate is valid for 730 days. The intermediate CA certificates that are automatically created by Gloo Mesh are valid for 1 day.

        kubectl apply --context $MGMT_CONTEXT -f- << EOF 
        apiVersion: admin.gloo.solo.io/v2
        kind: RootTrustPolicy
        metadata:
          name: root-trust-policy
          namespace: gloo-mesh
        spec:
          config:
            autoRestartPods: true
            intermediateCertOptions:
              secretRotationGracePeriodRatio: 0.1
              ttlDays: 1
            mgmtServerCa: 
              generated:
                ttlDays: 730
              secretRef:
                 name: my-root-trust-policy.gloo-mesh
                 namespace: gloo-mesh
        EOF
        
      6. Verify that the cacerts Kubernetes secret was created in the istio-system namespace on the workload cluster. This secret represents the intermediate CA and is used by the Istio control plane istiod to issue leaf certificates to the workloads in your service mesh.

        kubectl get secret cacerts -n istio-system --context $REMOTE_CONTEXT
        
      7. Verify the certificate chain for the intermediate CA. Because the intermediate CA was derived from the root CA, the root CA must be listed as the root-cert in the cacerts Kubernetes secret.

        1. Get the root CA certificate from the my-root-trust-policy.gloo-mesh secret on the management cluster.
          kubectl get secret my-root-trust-policy.gloo-mesh -n gloo-mesh -o jsonpath='{.data.ca-cert\.pem}' --context $MGMT_CONTEXT| base64 --decode
          
        2. In each workload cluster, get the root CA certificate that is listed as the root-cert in the cacerts Kubernetes secret.
          kubectl get secret cacerts -n istio-system --context $REMOTE_CONTEXT -o jsonpath='{.data.root-cert\.pem}' | base64 --decode
          
        3. Verify that the root CA certificate that is listed in the intermediate CA secret matches the root CA certificate that was created by the Gloo Mesh root trust policy. You can compare the two values that you retrieved in the previous two steps, as shown in the example that follows.
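
          For example, one way to compare the two certificates is to print a SHA-256 fingerprint of each and check that the fingerprints match. This sketch assumes that openssl is installed on your local machine.

          kubectl get secret my-root-trust-policy.gloo-mesh -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.ca-cert\.pem}' | base64 --decode | openssl x509 -noout -fingerprint -sha256
          kubectl get secret cacerts -n istio-system --context $REMOTE_CONTEXT -o jsonpath='{.data.root-cert\.pem}' | base64 --decode | openssl x509 -noout -fingerprint -sha256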

Step 3: Set up the workload cluster

Set up your workload cluster to communicate with the management cluster and your VM, including deploying test apps, deploying Istio, and registering the workload cluster with the Gloo management plane.

Deploy test apps

To verify connectivity from the VM to services that run in the workload cluster, deploy the sleep and httpbin sample applications to your workload cluster.

  1. Create the sleep app, and label the sleep namespace for Istio sidecar injection.

    kubectl --context $REMOTE_CONTEXT create namespace sleep
    kubectl --context $REMOTE_CONTEXT label ns sleep istio-injection=enabled
    kubectl --context $REMOTE_CONTEXT apply -n sleep -f https://raw.githubusercontent.com/istio/istio/1.18.2/samples/sleep/sleep.yaml
    
  2. Create the httpbin app, and label the httpbin namespace for Istio sidecar injection.

    kubectl --context $REMOTE_CONTEXT create namespace httpbin
    kubectl --context $REMOTE_CONTEXT label ns httpbin istio-injection=enabled
    kubectl --context $REMOTE_CONTEXT apply -n httpbin -f https://raw.githubusercontent.com/istio/istio/1.18.2/samples/httpbin/httpbin.yaml
    
  3. Get the IP address of the httpbin pod.

    kubectl --context $REMOTE_CONTEXT get pods -n httpbin -o wide
    
  4. Log in to your VM, and curl the pod IP address. This verifies that the VM and the workload cluster can reach each other on the same network subnet before you deploy Istio sidecars to either.

    curl -s <httpbin_pod_ip>:80
    

    Note: If the curl is unsuccessful, verify that the security rules that you created for the VM allow outbound traffic and permit TCP traffic on ports 22, 80, and 5000.

Deploy Istio

Set up the Istio control plane and gateways in the workload cluster. Note that the installation settings included in these steps are tailored for basic setups.

  1. Create the following Istio namespaces, and label the istio-system namespace with topology.istio.io/network=$REMOTE_CLUSTER.

    kubectl --context $REMOTE_CONTEXT create namespace istio-ingress
    kubectl --context $REMOTE_CONTEXT create namespace istio-eastwest
    kubectl --context $REMOTE_CONTEXT create namespace istio-system
    kubectl --context $REMOTE_CONTEXT label namespace istio-system topology.istio.io/network=$REMOTE_CLUSTER
    
  2. Download the IstioOperator resource, which contains Istio installation settings that are required for onboarding a VM to the service mesh.

    curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/vm-agent/gcp.yaml > gcp.yaml
    
    curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/vm-agent/aws.yaml > aws.yaml
    

  3. Update the file with your workload cluster name, and install the control plane and gateways.

    export CLUSTER_NAME=$REMOTE_CLUSTER
    envsubst < gcp.yaml | istioctl install --context $REMOTE_CONTEXT -y -f -
    
    export CLUSTER_NAME=$REMOTE_CLUSTER
    envsubst < aws.yaml | istioctl install --context $REMOTE_CONTEXT -y -f -
    

  4. Verify that the Istio pods are healthy.

    kubectl --context $REMOTE_CONTEXT get pods -n istio-system
    
  5. Verify that the load balancer service for the east-west gateway is created. Note that it might take a few minutes for an external IP address to be assigned. The LoadBalancer service that exposes the east-west gateway allows the VM to access the cluster's service mesh and be onboarded into the mesh. After the VM is onboarded, all traffic sent to and from the VM goes through the east-west gateway.

    kubectl --context $REMOTE_CONTEXT get svc -n istio-eastwest
    
  6. Perform a rollout restart of the test apps that you deployed prior to the Istio installation, so that they are redeployed with an Istio sidecar.

    kubectl --context $REMOTE_CONTEXT rollout restart deploy/sleep -n sleep
    kubectl --context $REMOTE_CONTEXT rollout restart deploy/httpbin -n httpbin
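
    To confirm that the restarted apps now run with sidecars, you can check that each pod reports two containers and that its proxy is synced with the Istio control plane. These checks are optional.

      kubectl --context $REMOTE_CONTEXT get pods -n sleep
      kubectl --context $REMOTE_CONTEXT get pods -n httpbin
      # Each pod shows 2/2 containers when the sidecar is injected
      istioctl --context $REMOTE_CONTEXT proxy-status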
    

Register the workload cluster

Register the workload cluster with the Gloo management server. Note that the registration settings included in these steps are tailored for basic setups. For more information about advanced settings, see the setup documentation.

  1. Save the settings that are required to deploy SPIRE and PostgreSQL when you register your workload cluster. The steps vary based on the certificates you used to deploy Istio.

    • Default Istio certificates: If you used the default Istio certificates and did not set up and manage your own certificates to deploy Istio, you can use the default certificates to also secure the SPIRE server.
      1. Copy the Istio root CA secret into the gloo-mesh namespace so that the SPIRE server can use it as its certificate authority.
        kubectl --context $REMOTE_CONTEXT create ns gloo-mesh
        kubectl --context $REMOTE_CONTEXT get secret istio-ca-secret -n istio-system -o yaml | \
          grep -v '^\s*namespace:\s' | sed 's/istio-ca-secret/spire-ca/' | \
          kubectl --context $REMOTE_CONTEXT apply -n gloo-mesh -f -
        
      2. Save the following settings for SPIRE and PostgreSQL in an agent.yaml Helm values file. Note that if you want to use a MySQL database instead, you can change the databaseType to mysql. For more information, see the SPIRE docs.
        cat >agent.yaml <<EOF
        demo:
          manageAddonNamespace: true
        glooSpireServer:
          controller:
            verbose: true
          enabled: true
          plugins:
            datastore:
              databaseType: postgres
              connectionString: "dbname=spire user=spire password=gloomesh host=postgresql sslmode=disable"
            nodeAttestor:
              gcp:
                allowedProjectIds: 
                  - "$PROJECT"
                allowedLabelKeys:
                  - app
                enabled: true
            upstreamAuthority:
              disk:
                certFilePath: /run/spire/certs/ca-cert.pem
                bundleFilePath: /run/spire/certs/ca-cert.pem
        postgresql:
          enabled: true
          global:
            postgresql:
              auth:
                database: spire
                password: gloomesh
                username: spire
        EOF
        
        cat >agent.yaml <<EOF
        demo:
          manageAddonNamespace: true
        glooSpireServer:
          controller:
            verbose: true
          enabled: true
          plugins:
            nodeAttestor:
              aws:
                assumeRole: "SpireServerDelegate"
                enabled: true
            upstreamAuthority:
              disk:
                certFilePath: /run/spire/certs/ca-cert.pem
                bundleFilePath: /run/spire/certs/ca-cert.pem
        postgresql:
          enabled: true
          global:
            postgresql:
              auth:
                database: spire
                password: gloomesh
                username: spire
        EOF
        
    • Custom Istio certificates: If you set up your own certificates to deploy Istio, you can also use these certificates to secure the SPIRE server.
      1. Be sure that you created a RootTrustPolicy as part of the custom Istio certificate setup process that you followed. The root trust policy allows the Gloo agent to automatically create a spire-ca secret, which contains the Istio root CA for the SPIRE server to use as its certificate authority.
      2. Save the following settings for SPIRE and PostgreSQL in an agent.yaml Helm values file.
        cat >agent.yaml <<EOF
        demo:
          manageAddonNamespace: true
        glooSpireServer:
          controller:
            verbose: true
          enabled: true
          plugins:
            nodeAttestor:
              gcp:
                allowedProjectIds: 
                  - "$PROJECT"
                allowedLabelKeys:
                  - app
                enabled: true
        postgresql:
          enabled: true
          global:
            postgresql:
              auth:
                database: spire
                password: gloomesh
                username: spire
        EOF
        
        cat >agent.yaml <<EOF
        demo:
          manageAddonNamespace: true
        glooSpireServer:
          controller:
            verbose: true
          enabled: true
          plugins:
            nodeAttestor:
              aws:
                assumeRole: "SpireServerDelegate"
                enabled: true
        postgresql:
          enabled: true
          global:
            postgresql:
              auth:
                database: spire
                password: gloomesh
                username: spire
        EOF
        
  2. Register the workload cluster with the management server. This command uses basic profiles to install the Gloo agent, rate limit server, and external auth server, as well as the values file to install the SPIRE and PostgreSQL deployments.

    meshctl cluster register $REMOTE_CLUSTER \
      --kubecontext $MGMT_CONTEXT \
      --remote-context $REMOTE_CONTEXT \
      --version 2.4.1 \
      --profiles agent,ratelimit,extauth \
      --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS \
      --gloo-mesh-agent-chart-values agent.yaml
    
  3. Verify that the Gloo data plane components are healthy, and that your Gloo Mesh setup is correctly installed.

    meshctl check --kubecontext $REMOTE_CONTEXT
    meshctl check --kubecontext $MGMT_CONTEXT
    
  4. Depending on your cloud provider, annotate the necessary Kubernetes service accounts in your workload cluster with the IAM permissions that you created earlier.

    1. Annotate the Kubernetes service account for the SPIRE server with its GCP IAM permissions.
      kubectl annotate serviceaccount gloo-spire-server \
        --context $REMOTE_CONTEXT -n gloo-mesh \
        iam.gke.io/gcp-service-account=spireserver@$PROJECT.iam.gserviceaccount.com
      
    2. Restart the SPIRE deployment to apply the change.
      kubectl --context $REMOTE_CONTEXT rollout restart deploy/gloo-spire-server -n gloo-mesh
      
    3. Annotate the Kubernetes service account for the OTel collector with its GCP IAM permissions.
      kubectl annotate serviceaccount gloo-telemetry-collector \
        --context $REMOTE_CONTEXT -n gloo-mesh \
        iam.gke.io/gcp-service-account=otelcollector@$PROJECT.iam.gserviceaccount.com
      
    4. Restart the OTel collector daemonset to apply the change.
      kubectl --context $REMOTE_CONTEXT rollout restart daemonset/gloo-telemetry-collector-agent -n gloo-mesh
      

    Annotate the Kubernetes service account for the SPIRE server with its AWS IAM role. First, create an EKS IAM service account in the workload cluster, and attach the policy for the SPIRE server's IAM role that you previously created.

      eksctl create iamserviceaccount \
        --name gloo-spire-server --namespace gloo-mesh \
        --cluster $REMOTE_CLUSTER --role-name SpireServer \
        --attach-policy-arn arn:aws:iam::$ACCOUNT:policy/SpireServer \
        --override-existing-serviceaccounts --approve

    1. Verify that the IAM role is associated with the SPIRE server's service account.

      aws iam list-attached-role-policies --role-name SpireServer --query AttachedPolicies[].PolicyArn --output text
      

      Example output:

      arn:aws:iam::802411188784:policy/SpireServer
      
    2. Verify that the SPIRE Kubernetes service account is annotated with the IAM role.

      kubectl --context $REMOTE_CONTEXT describe sa gloo-spire-server -n gloo-mesh
      

      Example output:

      Name:                gloo-spire-server
      Namespace:           gloo-mesh
      Labels:              app.kubernetes.io/managed-by=eksctl
      Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::802411188784:role/SpireServer
      Image pull secrets:  <none>
      Mountable secrets:   <none>
      Tokens:              <none>
      Events:              <none>
      
    3. Restart the SPIRE deployment to apply the change.

      kubectl --context $REMOTE_CONTEXT rollout restart deploy/gloo-spire-server -n gloo-mesh
      

  5. Verify that the pods in your workload cluster are healthy.

    kubectl --context $REMOTE_CONTEXT get pods -n gloo-mesh
    

    Example output:

    NAME                                   READY   STATUS    RESTARTS   AGE
    gloo-mesh-agent-d689d4544-g8fzn        1/1     Running   0          4m29s
    gloo-spire-server-cd88fb77d-jk7mr      2/2     Running   0          53s
    gloo-telemetry-collector-agent-7jzl4   1/1     Running   0          4m29s
    gloo-telemetry-collector-agent-86ktk   1/1     Running   0          4m28s
    gloo-telemetry-collector-agent-l8c99   1/1     Running   0          4m29s
    gloo-telemetry-collector-agent-pkh2v   1/1     Running   0          4m28s
    gloo-telemetry-collector-agent-pmqrh   1/1     Running   0          4m29s
    gloo-telemetry-collector-agent-wnq7d   1/1     Running   0          4m28s
    postgresql-0                           1/1     Running   0          4m28s
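
    If the gloo-spire-server pod is not healthy, you can optionally inspect its logs for errors, such as problems connecting to the PostgreSQL datastore. The exact log output varies by version.

      kubectl --context $REMOTE_CONTEXT logs deploy/gloo-spire-server -n gloo-mesh --all-containers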
    

Step 4: Onboard the VM

Onboard the VM to your Gloo Mesh setup by installing the Istio sidecar, OpenTelemetry (OTel) collector, and SPIRE agents on the VM.

  1. On the workload cluster, create a namespace for the VM configuration that you set up in subsequent steps.

    kubectl --context $REMOTE_CONTEXT create namespace vm-config
    
  2. Save the following ExternalWorkload Gloo resource to create an identity for apps that run on the VM. The following example resource provisions an identity in the vm-config namespace of the workload cluster, for services that listen on port 5000 and that run on a VM of the specified identity selector. For more information and available options, see the API reference documentation.

    This example creates an identity for only the test app that you create on the VM in subsequent steps, which you select by specifying port 5000. If you run multiple apps on your VM that you want to include in the service mesh, you can specify multiple ports to select each app. Then, when you create a virtual destination for the test app in subsequent steps, you can create additional virtual destinations for each of your other apps.

    cat >externalworkload.yaml <<EOF
    apiVersion: networking.gloo.solo.io/v2alpha1
    kind: ExternalWorkload
    metadata:
      labels:
        # Label to use later when you create a virtual destination for the test app
        app: vm-ext-workload
        version: v1
      name: vm-ext-workload
      namespace: vm-config
    spec:
      connectedClusters:
        # Map of workload cluster name to VM configuration namespace
        $REMOTE_CLUSTER: vm-config
      identitySelector:
        gcp:
          # VM instance name
          - name: $VM_NAME
      # Port for each app to include
      ports:
        # Test app port
        - name: http
          number: 5000
    EOF
    
    cat >externalworkload.yaml <<EOF
    apiVersion: networking.gloo.solo.io/v2alpha1
    kind: ExternalWorkload
    metadata:
      labels:
        # Label to use later when you create a virtual destination for the test app
        app: vm-ext-workload
        version: v1
      name: vm-ext-workload
      namespace: vm-config
    spec:
      connectedClusters:
        # Map of workload cluster name to VM configuration namespace
        $REMOTE_CLUSTER: vm-config
      identitySelector:
        aws:
          # ID of the security group for the VM
          - securityGroupId: $VM_SG_ID
      # Port for each app to include
      ports:
        # Test app port
        - name: http
          number: 5000
    EOF
    

  3. Create the ExternalWorkload resource in the workload cluster.

    kubectl apply --context $REMOTE_CONTEXT -f externalworkload.yaml
    
  4. Confirm that a WorkloadGroup Istio resource is created in your VM configuration namespace. This resource summarizes all workloads on the VM that you selected by port number in the ExternalWorkload.

    kubectl --context $REMOTE_CONTEXT get workloadgroup -n vm-config
    

    Example output:

    NAME               AGE
    vm-ext-workload    10s
    
  5. In the management cluster, use meshctl to generate the bootstrap bundle that is required to install the Istio sidecar, OTel collector, and SPIRE agents on the VM. The generated bundle, bootstrap.tar.gz, is created based on the ExternalWorkload configuration that you previously applied in your workload cluster. For more information about the options for this command, see the CLI reference.

    meshctl x external-workload generate-bootstrap-bundle \
       --kubecontext $REMOTE_CONTEXT \
       --cluster $REMOTE_CLUSTER \
       --cluster-gw-svc istio-eastwest/istio-eastwestgateway \
       --attestor gcp \
       -f externalworkload.yaml \
       -o /tmp/bootstrap.tar.gz
    
    meshctl x external-workload generate-bootstrap-bundle \
       --kubecontext $REMOTE_CONTEXT \
       --cluster $REMOTE_CLUSTER \
       --cluster-gw-svc istio-eastwest/istio-eastwestgateway \
       --attestor aws \
       -f externalworkload.yaml \
       -o /tmp/bootstrap.tar.gz
    

    The generated /tmp/bootstrap.tar.gz bundle on your local machine contains the cluster.env, hosts, mesh.yaml, root-cert.pem, and spire-agent.conf files.

    INFO  ✅ Bootstrap bundle generated: /tmp/bootstrap.tar.gz
    
  6. Download the package to install the Istio sidecar on your VM.

    1. For the Istio version that you downloaded, get the link for the Solo Istio cloud storage bucket by logging in to the Support Center and reviewing the Istio packages built by Solo.io support article. Versions 1.18.0 through 1.18.2, and 1.17.4 through 1.17.5 are supported.
    2. In the cloud storage bucket, download the Istio package for your VM image type. For example, you might download one of: istio-sidecar.deb, istio-sidecar-arm64.deb, istio-sidecar.rpm, or istio-sidecar-arm64.rpm.
  7. Download the packages for your VM image type that contain the OTel collector agent, the SPIRE agent, and the bootstrap script that installs the agents. Note that if you used a different version of Gloo Platform than 2.4.1, change the links to use that version instead.

    curl -LO https://storage.googleapis.com/gloo-platform/vm/2.4.1/otel/gloo-otel-collector.deb
    curl -LO https://storage.googleapis.com/gloo-platform/vm/2.4.1/spire/gloo-spire-agent.deb
    curl -LO https://storage.googleapis.com/gloo-platform/vm/2.4.1/bootstrap.sh
    
    curl -LO https://storage.googleapis.com/gloo-platform/vm/2.4.1/otel/gloo-otel-collector-arm64.deb
    curl -LO https://storage.googleapis.com/gloo-platform/vm/2.4.1/spire/gloo-spire-agent-arm64.deb
    curl -LO https://storage.googleapis.com/gloo-platform/vm/2.4.1/bootstrap.sh
    
    curl -LO https://storage.googleapis.com/gloo-platform/vm/2.4.1/otel/gloo-otel-collector.rpm
    curl -LO https://storage.googleapis.com/gloo-platform/vm/2.4.1/spire/gloo-spire-agent.rpm
    curl -LO https://storage.googleapis.com/gloo-platform/vm/2.4.1/bootstrap.sh
    
    curl -LO https://storage.googleapis.com/gloo-platform/vm/2.4.1/otel/gloo-otel-collector-arm64.rpm
    curl -LO https://storage.googleapis.com/gloo-platform/vm/2.4.1/spire/gloo-spire-agent-arm64.rpm
    curl -LO https://storage.googleapis.com/gloo-platform/vm/2.4.1/bootstrap.sh
    

  8. On the VM, create the gloo-mesh-config directory, and navigate to the directory.

    mkdir gloo-mesh-config
    cd gloo-mesh-config
    
  9. From your local machine, copy the bootstrap bundle, the bootstrap script, and the Istio sidecar, OTel collector, and SPIRE agent packages to the ~/gloo-mesh-config/ directory on the VM. For example, you might run the following commands:

    scp /tmp/bootstrap.tar.gz <username>@<instance_address>:~/gloo-mesh-config/
    scp bootstrap.sh <username>@<instance_address>:~/gloo-mesh-config/
    scp istio-sidecar.deb <username>@<instance_address>:~/gloo-mesh-config/
    scp gloo-otel-collector.deb <username>@<instance_address>:~/gloo-mesh-config/
    scp gloo-spire-agent.deb <username>@<instance_address>:~/gloo-mesh-config/
    
    scp /tmp/bootstrap.tar.gz <username>@<instance_address>:~/gloo-mesh-config/
    scp bootstrap.sh <username>@<instance_address>:~/gloo-mesh-config/
    scp istio-sidecar-arm64.deb <username>@<instance_address>:~/gloo-mesh-config/
    scp gloo-otel-collector-arm64.deb <username>@<instance_address>:~/gloo-mesh-config/
    scp gloo-spire-agent-arm64.deb <username>@<instance_address>:~/gloo-mesh-config/
    
    scp /tmp/bootstrap.tar.gz <username>@<instance_address>:~/gloo-mesh-config/
    scp bootstrap.sh <username>@<instance_address>:~/gloo-mesh-config/
    scp istio-sidecar.rpm <username>@<instance_address>:~/gloo-mesh-config/
    scp gloo-otel-collector.rpm <username>@<instance_address>:~/gloo-mesh-config/
    scp gloo-spire-agent.rpm <username>@<instance_address>:~/gloo-mesh-config/
    
    scp /tmp/bootstrap.tar.gz <username>@<instance_address>:~/gloo-mesh-config/
    scp bootstrap.sh <username>@<instance_address>:~/gloo-mesh-config/
    scp istio-sidecar-arm64.rpm <username>@<instance_address>:~/gloo-mesh-config/
    scp gloo-otel-collector-arm64.rpm <username>@<instance_address>:~/gloo-mesh-config/
    scp gloo-spire-agent-arm64.rpm <username>@<instance_address>:~/gloo-mesh-config/
    

  10. On the VM, install the Istio sidecar, OTel collector, and SPIRE agents.

    sudo ./bootstrap.sh --istio-pkg istio-sidecar.deb --otel-pkg gloo-otel-collector.deb --spire-pkg gloo-spire-agent.deb -b bootstrap.tar.gz --install --start
    
    sudo ./bootstrap.sh --istio-pkg istio-sidecar.rpm --otel-pkg gloo-otel-collector.rpm --spire-pkg gloo-spire-agent.rpm -b bootstrap.tar.gz --install --start
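
    After the agents start, the VM connects to the mesh through the east-west gateway. As an optional check from the workload cluster, you can confirm that Istio created a WorkloadEntry for the VM in the vm-config namespace. This check assumes that your Istio installation uses workload auto-registration; if no WorkloadEntry appears, you can still verify connectivity in the next step.

      kubectl --context $REMOTE_CONTEXT get workloadentry -n vm-config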
    

Step 5: Test connectivity

Verify that the onboarding process was successful by testing the bi-directional connection between the apps that run in the service mesh in your workload cluster and a test app on your VM.

  1. On your VM, deploy a simple HTTP server that listens on port 5000. This app is represented by the ExternalWorkload identity that you previously created.

    nohup python3 -m http.server 5000 &
    
  2. From the VM, curl the httpbin service in your workload cluster. Note that this curl command uses the app's cluster DNS name instead of its pod IP address, because the VM is now connected to the Istio service mesh that runs in your workload cluster.

    curl -s httpbin.httpbin:8000 -v
    

    Example output:

    *   Trying 10.XX.XXX.XXX:8000...
    * Connected to httpbin.httpbin (10.XX.XXX.XXX) port 8000 (#0)
    > GET / HTTP/1.1
    > Host: httpbin.httpbin:8000
    > User-Agent: curl/7.74.0
    > Accept: */*
    > 
    * Mark bundle as not supporting multiuse
    < HTTP/1.1 200 OK
    < server: envoy
    < date: Mon, 10 Jul 2023 18:00:32 GMT
    < content-type: text/html; charset=utf-8
    < content-length: 9593
    < access-control-allow-origin: *
    < access-control-allow-credentials: true
    < x-envoy-upstream-service-time: 4
    < 
    <!DOCTYPE html>
    ...
    
  3. In the workload cluster, create a Gloo VirtualDestination resource so that apps in the service mesh can also access the HTTP server test app that runs on the VM through the testapp.vd hostname. Note that if you selected multiple apps that run on the VM in your ExternalWorkload resource, you can create a virtual destination for each app by using the app's port that you specified.

    kubectl --context $REMOTE_CONTEXT apply -f - <<EOF
    apiVersion: networking.gloo.solo.io/v2
    kind: VirtualDestination
    metadata:
      labels:
        app: testapp
      name: testapp-vd
      namespace: vm-config
    spec:
      externalWorkloads:
      - labels:
          # Label that you gave to the ExternalWorkload resource
          app: vm-ext-workload
      hosts:
      # Hostname to use for the app
      - testapp.vd
      ports:
      # Port that you specified in the ExternalWorkload resource to select the test app
      - number: 5000
        protocol: HTTP
        targetPort:
          name: http
    EOF
    
  4. From the sleep app in your workload cluster, curl the HTTP server on the VM by using the testapp.vd hostname.

    kubectl --context $REMOTE_CONTEXT exec deploy/sleep -n sleep -- curl -s testapp.vd:5000 -v
    

    Example output:

    *   Trying 244.XXX.XXX.XX:5000...
    * Connected to testapp.vd (244.XXX.XXX.XX) port 5000 (#0)
    > GET / HTTP/1.1
    > Host: testapp.vd:5000
    > User-Agent: curl/8.1.2
    > Accept: */*
    > 
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
    <html>
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    <title>Directory listing for /</title>
    </head>
    <body>
    <h1>Directory listing for /</h1>
    <hr>
    <ul>
    <li><a href=".bash_history">.bash_history</a></li>
    <li><a href=".bash_logout">.bash_logout</a></li>
    <li><a href=".bashrc">.bashrc</a></li>
    <li><a href=".profile">.profile</a></li>
    <li><a href=".ssh/">.ssh/</a></li>
    <li><a href="gm-config/">gm-config/</a></li>
    <li><a href="nohup.out">nohup.out</a></li>
    </ul>
    <hr>
    </body>
    </html>
    < HTTP/1.1 200 OK
    < server: envoy
    < date: Mon, 10 Jul 2023 18:00:50 GMT
    < content-type: text/html; charset=utf-8
    < content-length: 600
    < x-envoy-upstream-service-time: 3
    < 
    { [600 bytes data]
    * Connection #0 to host testapp.vd left intact
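
    As another optional check, you can confirm that the sleep app's Envoy sidecar learned about the testapp.vd destination. This sketch assumes the sleep sample's app=sleep label.

      SLEEP_POD=$(kubectl --context $REMOTE_CONTEXT get pod -n sleep -l app=sleep -o jsonpath='{.items[0].metadata.name}')
      istioctl --context $REMOTE_CONTEXT proxy-config clusters $SLEEP_POD.sleep | grep testapp.vd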
    

Step 6 (optional): Launch the UI

To visualize the connection to your VM in your Gloo Mesh setup, you can launch the Gloo UI.

  1. Access the Gloo UI.
    meshctl dashboard --kubecontext $MGMT_CONTEXT
    
  2. Click the Graph tab to open the network visualization graph for your Gloo Mesh setup.
  3. From the footer toolbar, click Layout Settings.
  4. Toggle Group By to INFRA to review the clusters, virtual machines, and Kubernetes namespaces that your app nodes are organized in. This view also shows details for the cloud provider infrastructure, such as the VPCs and subnets that your resources are deployed to.
  5. Verify that you see your VM connection to your workload cluster. In this example graph, the VM instance connects to the httpbin app in the workload cluster. [Figure: Graph for the workload cluster and VM connection]
  6. You can also see more information about the VM instance by clicking on its icon, which opens the details pane for the connection. In this example details pane, the title helloworld -> httpbin demonstrates that an external workload named helloworld, which represents apps on the VM, connects to the httpbin app in the workload cluster. [Figure: Details pane for the VM]

Congratulations! Your VM is now registered with Gloo Mesh. You can now create Gloo resources for the workloads that you run on the VM, such as Gloo traffic policies. For example, if you selected multiple apps in your ExternalWorkload resource and want to apply a policy to all of those apps, you can use the label on the ExternalWorkload in the policy selector. Or, for policies that apply to destinations, you can select only the virtual destination for one of the apps. For more information, see Policy enforcement.

Uninstall

  1. On the VM:

    1. Remove the Istio sidecar, OTel collector, and SPIRE agents.
      sudo ./bootstrap.sh --uninstall
      
    2. Remove the bootstrap script and bundle, the agent packages, and the test app data.
      cd ..
      rm -r gloo-mesh-config
      
  2. On the workload cluster:

    1. Delete the vm-config, sleep, and httpbin namespaces.
      kubectl --context $REMOTE_CONTEXT delete namespace vm-config
      kubectl --context $REMOTE_CONTEXT delete namespace sleep
      kubectl --context $REMOTE_CONTEXT delete namespace httpbin
      
    2. Delete the Gateway and VirtualService resources from the istio-eastwest namespace.
      kubectl --context $REMOTE_CONTEXT delete Gateway istiod-gateway -n istio-eastwest
      kubectl --context $REMOTE_CONTEXT delete Gateway spire-gateway -n istio-eastwest
      kubectl --context $REMOTE_CONTEXT delete VirtualService istiod-vs -n istio-eastwest
      kubectl --context $REMOTE_CONTEXT delete VirtualService spire-vs -n istio-eastwest
      
  3. Continue to use your Gloo Mesh setup, or uninstall it.

    • To continue to use your Gloo Mesh setup, you can optionally remove the SPIRE and PostgreSQL servers by following the upgrade guide to remove their settings from the Helm values for your workload cluster.
    • To uninstall your Gloo Mesh setup:
      1. Uninstall the Istio service mesh.
        istioctl uninstall --context $REMOTE_CONTEXT --purge
        
      2. Remove the Istio namespaces.
        kubectl delete ns --context $REMOTE_CONTEXT istio-system
        kubectl delete ns --context $REMOTE_CONTEXT istio-ingress
        kubectl delete ns --context $REMOTE_CONTEXT istio-eastwest
        
      3. Follow the steps in Uninstall Gloo Mesh to deregister the workload cluster and uninstall the Gloo management plane.
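
      Optionally, you can also clean up the cloud provider resources that you created in Step 1. The following commands are examples for the names used in this guide; adjust them if you used different names, and detach any policies from roles that eksctl created before you delete those policies.

        # GCP example cleanup
        gcloud iam service-accounts delete spireserver@$PROJECT.iam.gserviceaccount.com --project $PROJECT --quiet
        gcloud iam service-accounts delete otelcollector@$PROJECT.iam.gserviceaccount.com --project $PROJECT --quiet
        gcloud iam roles delete SpireComputeViewer --project $PROJECT
        gcloud iam roles delete OTelComputeViewer --project $PROJECT

        # AWS example cleanup
        aws iam detach-role-policy --role-name SpireServerDelegate --policy-arn arn:aws:iam::$ACCOUNT:policy/SpireServerDelegate
        aws iam delete-role --role-name SpireServerDelegate
        aws iam delete-policy --policy-arn arn:aws:iam::$ACCOUNT:policy/SpireServerDelegate
        aws iam delete-policy --policy-arn arn:aws:iam::$ACCOUNT:policy/SpireServer
        aws ec2 delete-security-group --group-id $VM_SG_ID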