This guide walks you through the following general steps:

  1. Create compute resources: Create two Kubernetes clusters and a virtual machine. Additionally, configure IAM permissions for the SPIRE server you later deploy to the workload cluster.
  2. Set up the management cluster: Deploy the Gloo management plane to the management cluster. The installation settings enable the onboarding of external workloads to the service mesh.
  3. Set up the workload cluster: Set up your workload cluster to communicate with the management cluster and your VM, including deploying test apps, deploying Istio, and registering the workload cluster with the Gloo management plane. The registration settings include deploying a SPIRE server to the workload cluster, and using PostgreSQL as the default datastore for the SPIRE server.
  4. Onboard the VM: Onboard the VM to your Gloo Mesh environment by generating a bootstrap bundle that installs the Istio sidecar, OpenTelemetry (OTel) collector, and SPIRE agents on the VM.
  5. Test connectivity: Verify that the onboarding process was successful by testing the bi-directional connection between the apps that run in the service mesh in your workload cluster and a test app on your VM.
  6. Launch the UI (optional): To visualize the connection to your VM in your Gloo Mesh setup, you can launch the Gloo UI.

For more information about onboarding VMs to a service mesh, see the high-level architectural overview of Istio’s virtual machine integration.

Before you begin

Network considerations

Depending on whether your VM and cluster share a network, you have two options for onboarding a VM:

  • Use a Kubernetes cluster and VM that are created in the same VPC network. In this network setup, Istio workloads on the cluster are able to communicate directly with Istio workloads on the VM.
  • Use a Kubernetes cluster and VM that are created in different VPC networks. When your VM is in a different network, you must specify the following flags when you run the meshctl external-workload onboard CLI command in step 4 of this guide:
    • --network: When you create a Kubernetes cluster, it is associated with a network tag. Workloads in the cluster inherit this tag by default, and Istio uses this tag to identify workloads as “local” to the cluster. When you onboard a VM that is in the same VPC subnet as the cluster, the workloads on the VM inherit this network tag too. However, if the VM is in a different subnet than the cluster, you must onboard the VM with a different network tag. Istio uses this VM network tag to identify workloads as “remote”, and can then use the east-west gateway to send traffic requests to Istio services in the VM (“remote”) network. Because this tag is internal to Istio, it can be arbitrary, but you might want to use the same name as the VM’s actual infrastructure network for clarity.
    • --external-ip (optional): If your VM must accept inbound connections, specify the external IP address of the VM in the --external-ip flag. All outbound communication from the cluster to the VM goes through the cluster’s east-west gateway to this IP address.
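
    For example, for a VM in a different VPC network, you add flags like the following to the meshctl external-workload onboard command in step 4. The vm-network tag is arbitrary and illustrative; replace the IP placeholder with your VM's public address.

      --network vm-network \
      --external-ip <vm_public_ip>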

Install CLI tools

Install the following CLI tools.

  • helm, the Kubernetes package manager.
  • kubectl, the Kubernetes command line tool. Download the kubectl version that is within one minor version of the Kubernetes clusters you plan to use.
  • istioctl, the Istio command line tool. Important: Versions 1.17.4 through 1.23.2-patch1 are supported in Gloo Mesh Enterprise for onboarding VMs.
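    For example, the following sketch downloads Istio 1.23.2, a version in the supported range, with the upstream installer and adds istioctl to your PATH. Later steps in this guide reference this istio-${ISTIO_VERSION} directory.
      export ISTIO_VERSION=1.23.2
      curl -L https://istio.io/downloadIstio | ISTIO_VERSION=${ISTIO_VERSION} sh -
      export PATH=$PWD/istio-${ISTIO_VERSION}/bin:$PATH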
  • meshctl, the Gloo command line tool for bootstrapping Gloo Mesh Enterprise, registering clusters, describing configured resources, and more. Be sure to download version 2.7.0-beta1, which uses the latest Gloo Mesh installation values.
      curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.7.0-beta1 sh -
    export PATH=$HOME/.gloo-mesh/bin:$PATH
      

Step 1: Create compute resources

Create the virtual machine, management cluster, and at least one workload cluster, and configure IAM permissions for the SPIRE server.

  1. Create or use existing Amazon Elastic Kubernetes Service (EKS) clusters.

    1. Create or use at least two clusters. In subsequent steps, you deploy the Gloo management plane to one cluster, and the Gloo data plane and an Istio service mesh to the other cluster.
      • Note: Cluster names must be lowercase and alphanumeric, can include hyphens (-) as the only special character, and must begin with a letter, not a number.
      • For more information, see the System requirements.
    2. Set the names of your clusters from your infrastructure provider.
        export MGMT_CLUSTER=<mgmt-cluster-name>
      export REMOTE_CLUSTER=<workload-cluster-name>
        
    3. Save the kubeconfig contexts for your clusters. Run kubectl config get-contexts, look for your cluster in the CLUSTER column, and get the context name in the NAME column. Note: Do not use context names with underscores. The generated certificate that connects workload clusters to the management cluster uses the context name as a SAN specification, and underscores in SAN are not FQDN compliant. You can rename a context by running kubectl config rename-context "<oldcontext>" <newcontext>.
        export MGMT_CONTEXT=<management-cluster-context>
      export REMOTE_CONTEXT=<workload-cluster-context>
        
    4. Create an IAM OIDC provider for the workload cluster. The OIDC provider allows the Kubernetes service account for the SPIRE server in your workload cluster to act as an AWS IAM service account. The SPIRE server pod can then automatically authenticate as the IAM service account when accessing your VM.
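      For example, you can use eksctl to create the OIDC provider for the workload cluster:
        eksctl utils associate-iam-oidc-provider --cluster ${REMOTE_CLUSTER} --approve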
  2. Install the required AWS add-ons in your workload cluster.

    1. Save your AWS account ID in an environment variable.
        export ACCOUNT="$(aws sts get-caller-identity --query Account --output text)"
        
    2. Install the EBS CSI driver required by the persistent volume that the PostgreSQL database uses.
        eksctl create iamserviceaccount \
        --name ebs-csi-controller-sa \
        --namespace kube-system \
        --cluster "${REMOTE_CLUSTER}" \
        --role-name AmazonEKS_EBS_CSI_DriverRole-${REMOTE_CLUSTER} \
        --role-only \
        --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
        --approve \
        --override-existing-serviceaccounts
      
      eksctl create addon \
        --name aws-ebs-csi-driver \
        --cluster "${REMOTE_CLUSTER}" \
        --service-account-role-arn "arn:aws:iam::${ACCOUNT}:role/AmazonEKS_EBS_CSI_DriverRole-${REMOTE_CLUSTER}" \
        --force
        
    3. Install the Load Balancer Controller required to expose load balancer services.
        curl -o /tmp/aws_lb_iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.7/docs/install/iam_policy.json
      
      aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file:///tmp/aws_lb_iam_policy.json
      
      eksctl create iamserviceaccount \
        --name=aws-load-balancer-controller \
        --cluster="${REMOTE_CLUSTER}" \
        --namespace=kube-system \
        --role-name AmazonEKSLoadBalancerControllerRole \
        --attach-policy-arn="arn:aws:iam::${ACCOUNT}:policy/AWSLoadBalancerControllerIAMPolicy" \
        --approve
      
      helm repo add eks https://aws.github.io/eks-charts
      helm repo update
      
      helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller \
        --kube-context "${REMOTE_CONTEXT}" \
        -n kube-system \
        --set clusterName="${REMOTE_CLUSTER}" \
        --set serviceAccount.create=false \
        --set serviceAccount.name=aws-load-balancer-controller
        
  3. Create an EKS security group for the VM.

    1. Get the ID of the AWS VPC that the VM is attached to.
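      For example, if your VM shares the workload cluster's VPC, look up the cluster's VPC ID:
        aws eks describe-cluster --name ${REMOTE_CLUSTER} --query 'cluster.resourcesVpcConfig.vpcId' --output text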
    2. Create a security group for that VPC.
        aws ec2 create-security-group --description "VM test" --group-name <name> --vpc-id <eks_cluster_VPC_ID>
        
    3. From the output, save the security group ID in an environment variable.
        export VM_SG_ID=<security group ID>
        
    4. For testing connectivity, allow all outbound traffic (the default for new security groups), and permit inbound TCP traffic on ports 22 (SSH access) and 5000 (sample app). Additionally, if your cluster exists on the same network as the VM, ensure that the cluster’s security group rules allow inbound traffic from VMs.
        aws ec2 authorize-security-group-ingress --group-id ${VM_SG_ID} --protocol tcp --port 22 --cidr 0.0.0.0/0
      aws ec2 authorize-security-group-ingress --group-id ${VM_SG_ID} --protocol tcp --port 5000 --cidr 0.0.0.0/0
        
  4. Create the VM.

    1. Create an EC2 instance that uses either a Debian-based or RPM-based image. Example command:
        # Create a key pair for SSH
      aws ec2 create-key-pair --key-name vmtest --query 'KeyMaterial' --output text > ~/vmtest.pem
      
      # Create the EC2 instance. Note that the instance is not required to be on the same subnet as the cluster.
      aws ec2 run-instances --image-id ami-03f65b8614a860c29  --instance-type t3.micro --count 1 --security-group-ids ${VM_SG_ID} --associate-public-ip-address --key-name vmtest --subnet-id <workload_cluster_subnet>
        
    2. Update the security group for the EKS cluster nodegroup to permit traffic from the VM’s security group.
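      For example, assuming you save the nodegroup's security group ID in a NODE_SG_ID environment variable (a placeholder), the following rule allows all traffic from the VM's security group:
        aws ec2 authorize-security-group-ingress --group-id ${NODE_SG_ID} --protocol -1 --source-group ${VM_SG_ID}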
  5. Create IAM permissions for the SPIRE server service account so that the server can perform node attestation of the VM instance.

    1. Save the following policy configuration files.
        cat >spire-server-delegate.json <<EOF
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "SpireServerDelegate",
            "Effect": "Allow",
            "Action": [
              "ec2:DescribeInstances",
              "iam:GetInstanceProfile"
            ],
            "Resource": "*"
          }
        ]
      }
      EOF
      
      cat >spire-trust.json <<EOF
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
              "AWS": "arn:aws:iam::${ACCOUNT}:root"
            },
            "Action": "sts:AssumeRole",
            "Condition": {}
          }
        ]
      }
      EOF
      
      cat >spire-server.json <<EOF
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "SpireServer",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::${ACCOUNT}:role/SpireServerDelegate"
          }
        ]
      }
      EOF
        
    2. Create an IAM role that allows the SPIRE server to describe the EC2 instances in your account.
        aws iam create-policy --policy-name SpireServerDelegate --policy-document file://spire-server-delegate.json
      aws iam create-role --role-name SpireServerDelegate --assume-role-policy-document file://spire-trust.json
      aws iam attach-role-policy --policy-arn arn:aws:iam::${ACCOUNT}:policy/SpireServerDelegate --role-name SpireServerDelegate
        
    3. Create an IAM policy to bind the role to the SPIRE service account that you create in subsequent steps.
        aws iam create-policy --policy-name SpireServer --policy-document file://spire-server.json
        

Step 2: Set up the management cluster

Install the Gloo management plane in the management cluster. Note that the installation settings included in these steps are tailored for basic setups. For more information about advanced settings, see the setup documentation.

  1. Set your Gloo Mesh Enterprise license key as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing. To check your license’s validity, you can run meshctl license check --key $(echo ${GLOO_MESH_LICENSE_KEY} | base64 -w0).

      export GLOO_MESH_LICENSE_KEY=<license_key>
      
  2. Save the following settings in a mgmt-plane.yaml Helm values file.

      cat >mgmt-plane.yaml <<EOF
    common:
      cluster: ${MGMT_CLUSTER}
    glooMgmtServer:
      serviceOverrides:
        metadata:
          annotations:
            # AWS-specific annotations for load balancers
            service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
            service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
            service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
            service.beta.kubernetes.io/aws-load-balancer-type: external
    licensing:
      glooMeshLicenseKey: ${GLOO_MESH_LICENSE_KEY}
    EOF
      
  3. Install the Gloo management plane in your management cluster. This command uses a basic profile to create the gloo-mesh namespace, install the management plane components, such as the management server, in your management cluster, and enable the ability to add external workloads to your Gloo environment.

      meshctl install \
      --kubecontext ${MGMT_CONTEXT} \
      --version 2.7.0-beta1 \
      --profiles mgmt-server \
      --chart-values-file mgmt-plane.yaml \
      --set featureGates.ExternalWorkloads=true
      
  4. Verify that the management plane pods are running.

      kubectl get pods -n gloo-mesh --context ${MGMT_CONTEXT}
      
  5. Save the external address and port that were assigned by AWS to the Gloo OpenTelemetry (OTel) gateway load balancer service. The OTel collector agents that collect logs in the workload cluster and in the VM send metrics to this address.

      export TELEMETRY_GATEWAY_HOSTNAME=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context ${MGMT_CONTEXT} -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    export TELEMETRY_GATEWAY_PORT=$(kubectl -n gloo-mesh get service gloo-telemetry-gateway --context ${MGMT_CONTEXT} -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
    export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_HOSTNAME}:${TELEMETRY_GATEWAY_PORT}
    echo $TELEMETRY_GATEWAY_ADDRESS
      
  6. Create a workspace that includes relevant namespaces in all clusters, and create workspace settings that enable communication through the east-west gateway. If you do not want to include the gloo-mesh namespace in this demo workspace, create a separate admin workspace for the gloo-mesh namespace, and use the workspace import-export functionality to import the gloo-telemetry-collector Kubernetes service in the gloo-mesh namespace to the application workspace. For more information, see Organize team resources with workspaces.

      kubectl apply --context ${MGMT_CONTEXT} -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: Workspace
    metadata:
      name: demo
      namespace: gloo-mesh
    spec:
      workloadClusters:
        - name: '*'
          namespaces:
            - name: sleep
            - name: httpbin
            - name: vm-config
            - name: istio-eastwest
            - name: gloo-mesh
    ---
    apiVersion: admin.gloo.solo.io/v2
    kind: WorkspaceSettings
    metadata:
      name: demo
      namespace: gloo-mesh
    spec:
      options:
        eastWestGateways:
        - selector:
            labels:
              istio: eastwestgateway
    EOF
      
  7. Decide on the type of certificates for the Istio installations you deploy to your workload cluster in the next section.

    • Default Istio certificates: To use the default self-signed certificates, no further steps are required. When you deploy Istio in the next section, the Istio components use the self-signed certificates by default.
    • Custom Istio certificates: To set up your own certificates to deploy Istio, complete the following example steps. These steps show how to use Istio’s certificate generator tool to quickly generate self-signed root and intermediate CA certificates and keys. For more information and advanced certificate setup options, see Istio certificates.
    1. Create the istio-system namespace on the workload cluster, so that the certificate secret can be created in that namespace.

        kubectl create namespace istio-system --context ${REMOTE_CONTEXT}
        
    2. Navigate to the Istio certs directory.

        cd istio-${ISTIO_VERSION}/tools/certs
        
    3. Generate a self-signed root CA certificate and private key.

        make -f Makefile.selfsigned.mk \
      ROOTCA_CN="Solo Root CA" \
      ROOTCA_ORG=Istio \
      root-ca
        
    4. Store the root CA certificate, private key, and certificate chain in a Kubernetes secret on the management cluster.

        cp root-cert.pem ca-cert.pem
      cp root-key.pem ca-key.pem
      cp root-cert.pem cert-chain.pem
      
      kubectl --context ${MGMT_CONTEXT} -n gloo-mesh create secret generic my-root-trust-policy.gloo-mesh \
      --from-file=./ca-cert.pem \
      --from-file=./ca-key.pem \
      --from-file=./cert-chain.pem \
      --from-file=./root-cert.pem
        
    5. Create the root trust policy in the management cluster and reference the root CA Kubernetes secret in the spec.config.mgmtServerCa.secretRef section. You can optionally customize the number of days that the root and derived intermediate CA certificates are valid by specifying ttlDays in the mgmtServerCa (root CA) and intermediateCertOptions (intermediate CA) sections of your root trust policy.

      The following example policy sets up the root CA with the credentials that you stored in the my-root-trust-policy.gloo-mesh secret. The root CA certificate is valid for 730 days. The intermediate CA certificates that are automatically created by Gloo Mesh are valid for 1 day.

        kubectl apply --context ${MGMT_CONTEXT} -f- << EOF
      apiVersion: admin.gloo.solo.io/v2
      kind: RootTrustPolicy
      metadata:
        name: root-trust-policy
        namespace: gloo-mesh
      spec:
        config:
          intermediateCertOptions:
            secretRotationGracePeriodRatio: 0.1
            ttlDays: 1
          mgmtServerCa:
            generated:
              ttlDays: 730
            secretRef:
               name: my-root-trust-policy.gloo-mesh
               namespace: gloo-mesh
      EOF
        
    6. Verify that the cacerts Kubernetes secret was created in the istio-system namespace on the workload cluster. This secret represents the intermediate CA and is used by the Istio control plane istiod to issue leaf certificates to the workloads in your service mesh.

        kubectl get secret cacerts -n istio-system --context ${REMOTE_CONTEXT}
        
    7. Verify the certificate chain for the intermediate CA. Because the intermediate CA was derived from the root CA, the root CA must be listed as the root-cert in the cacerts Kubernetes secret.

      1. Get the root CA certificate from the my-root-trust-policy.gloo-mesh secret on the management cluster.
          kubectl get secret my-root-trust-policy.gloo-mesh -n gloo-mesh -o jsonpath='{.data.ca-cert\.pem}' --context ${MGMT_CONTEXT}| base64 --decode
          
      2. In each workload cluster, get the root CA certificate that is listed as the root-cert in the cacerts Kubernetes secret.
          kubectl get secret cacerts -n istio-system --context ${REMOTE_CONTEXT} -o jsonpath='{.data.root-cert\.pem}' | base64 --decode
          
      3. Verify that the root CA certificate that is listed in the intermediate CA secret (step 7.2) matches the root CA certificate that was created by the Gloo root trust policy (step 7.1).
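
        For example, compare the two certificates directly; no output from diff means that they match.
          diff \
            <(kubectl get secret my-root-trust-policy.gloo-mesh -n gloo-mesh --context ${MGMT_CONTEXT} -o jsonpath='{.data.ca-cert\.pem}' | base64 --decode) \
            <(kubectl get secret cacerts -n istio-system --context ${REMOTE_CONTEXT} -o jsonpath='{.data.root-cert\.pem}' | base64 --decode)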

Step 3: Set up the workload cluster

Set up your workload cluster to communicate with the management cluster and your VM, including deploying test apps, deploying Istio, and registering the workload cluster with the Gloo management plane.

Deploy test apps

To verify connectivity from the external workload to services that run in the workload cluster, deploy the sleep and httpbin sample applications to your workload cluster.

  1. Create the sleep app, and label the sleep namespace for Istio sidecar injection.

      kubectl --context ${REMOTE_CONTEXT} create namespace sleep
    kubectl --context ${REMOTE_CONTEXT} label ns sleep istio-injection=enabled
    kubectl --context ${REMOTE_CONTEXT} apply -n sleep -f https://raw.githubusercontent.com/istio/istio/1.23.2/samples/sleep/sleep.yaml
      
  2. Create the httpbin app, and label the httpbin namespace for Istio sidecar injection.

      kubectl --context ${REMOTE_CONTEXT} create namespace httpbin
    kubectl --context ${REMOTE_CONTEXT} label ns httpbin istio-injection=enabled
    kubectl --context ${REMOTE_CONTEXT} apply -n httpbin -f https://raw.githubusercontent.com/istio/istio/1.23.2/samples/httpbin/httpbin.yaml
      
  3. Get the IP address of the httpbin pod.

      kubectl --context ${REMOTE_CONTEXT} get pods -n httpbin -o wide
      
  4. Log in to your VM, and curl the pod IP address. This verifies that the VM and workload cluster can reach each other before you deploy Istio sidecars to either.

      curl -s <httpbin_pod_ip>:80
      

    Note: If the curl is unsuccessful, verify that the security rules that you created for the VM allow outbound traffic and permit TCP traffic on ports 22 and 5000.

Deploy Istio

Set up the Istio control plane and gateways in the workload cluster. Note that the installation settings included in these steps are tailored for basic setups.

  1. Create the following Istio namespaces, and label the istio-system namespace with topology.istio.io/network=${REMOTE_CLUSTER}.

      kubectl --context ${REMOTE_CONTEXT} create namespace istio-eastwest
    kubectl --context ${REMOTE_CONTEXT} create namespace istio-system
    kubectl --context ${REMOTE_CONTEXT} label namespace istio-system topology.istio.io/network=${REMOTE_CLUSTER}
      
  2. Download the IstioOperator resource, which contains the Istio installation settings that are required for onboarding a VM to the service mesh.

      curl -sL https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/vm-agent/aws.yaml > aws.yaml
      
  3. Update the file with your workload cluster name, and install the control plane and gateways.

      export CLUSTER_NAME=${REMOTE_CLUSTER}
    envsubst < aws.yaml | istioctl install --context ${REMOTE_CONTEXT} -y -f -
      
  4. Verify that the Istio pods are healthy.

      kubectl --context ${REMOTE_CONTEXT} get pods -n istio-system
      
  5. Verify that the load balancer service for the east-west gateway is created. Note that it might take a few minutes for an external IP address to be assigned. When your VM and workload cluster are in different VPC networks, the LoadBalancer service that exposes the east-west gateway allows the VM to access the cluster’s service mesh and be onboarded into the mesh. After the VM is onboarded, all traffic sent to and from the VM goes through the east-west gateway.

      kubectl --context ${REMOTE_CONTEXT} get svc -n istio-eastwest
      
  6. Roll out a restart of the test apps that you deployed before the Istio installation, so that they are redeployed with an Istio sidecar.

      kubectl --context ${REMOTE_CONTEXT} rollout restart deploy/sleep -n sleep
    kubectl --context ${REMOTE_CONTEXT} rollout restart deploy/httpbin -n httpbin
      
  7. Enable telemetry for all workloads in the workload cluster.

      kubectl apply --context ${REMOTE_CONTEXT} -f - <<EOF
    apiVersion: telemetry.istio.io/v1alpha1
    kind: Telemetry
    metadata:
      name: demo
      namespace: istio-system
    spec:
      # no selector specified, applies to all workloads
      metrics:
      - providers:
        - name: prometheus
    EOF
      

Register the workload cluster

Register the workload cluster with the Gloo management server. Note that the registration settings included in these steps are tailored for basic setups. For more information about advanced settings, see the setup documentation.

  1. Save the settings that are required to deploy SPIRE and PostgreSQL when you register your workload cluster in a data-plane.yaml Helm values file. The settings vary based on the certificates that you used to deploy Istio; a minimal sketch for the default certificate setup follows.
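
     For example, the following values file is a minimal sketch for the default Istio certificate setup. It enables the SPIRE server with PostgreSQL as its datastore; the database credentials shown are illustrative placeholders, and the full set of supported values is in the setup documentation.

       cat >data-plane.yaml <<EOF
     common:
       cluster: ${REMOTE_CLUSTER}
     glooSpireServer:
       enabled: true
       server:
         trustDomain: cluster.local
     postgresql:
       enabled: true
       global:
         postgresql:
           auth:
             database: spire
             password: <db_password>
             username: spire
     EOF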

  2. Register the workload cluster with the management server. This command uses basic profiles to install the Gloo agent, rate limit server, and external auth server, as well as the data-plane.yaml values file to install the SPIRE and PostgreSQL deployments.

      meshctl cluster register ${REMOTE_CLUSTER} \
      --kubecontext ${MGMT_CONTEXT} \
      --remote-context ${REMOTE_CONTEXT} \
      --version 2.7.0-beta1 \
      --profiles agent,ratelimit,extauth \
      --telemetry-server-address ${TELEMETRY_GATEWAY_ADDRESS} \
      --gloo-mesh-agent-chart-values data-plane.yaml \
      --set featureGates.ExternalWorkloads=true
      
  3. Verify that the Gloo data plane components are healthy, and that your Gloo Mesh setup is correctly installed.

      meshctl check --kubecontext ${REMOTE_CONTEXT}
    meshctl check --kubecontext ${MGMT_CONTEXT}
      
  4. Annotate the Kubernetes service account for the SPIRE server with its AWS IAM role.

    1. In the workload cluster, create an EKS IAM service account, and attach the policy for the SPIRE server’s IAM role that you previously created.

        eksctl create iamserviceaccount \
        --name gloo-spire-server --namespace gloo-mesh \
        --cluster ${REMOTE_CLUSTER} --role-name SpireServer \
        --attach-policy-arn arn:aws:iam::${ACCOUNT}:policy/SpireServer \
        --override-existing-serviceaccounts --approve
        
    2. Verify that the IAM role is associated with the SPIRE server’s service account.

        aws iam list-attached-role-policies --role-name SpireServer --query AttachedPolicies[].PolicyArn --output text
        

      Example output:

        arn:aws:iam::802411188784:policy/SpireServer
        
    3. Verify that the SPIRE Kubernetes service account is annotated with the IAM role.

        kubectl --context ${REMOTE_CONTEXT} describe sa gloo-spire-server -n gloo-mesh
        

      Example output:

        Name:                gloo-spire-server
      Namespace:           gloo-mesh
      Labels:              app.kubernetes.io/managed-by=eksctl
      Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::802411188784:role/SpireServer
      Image pull secrets:  <none>
      Mountable secrets:   <none>
      Tokens:              <none>
      Events:              <none>
        
    4. Restart the SPIRE deployment to apply the change.

        kubectl --context ${REMOTE_CONTEXT} rollout restart deploy/gloo-spire-server -n gloo-mesh
        
  5. Verify that the pods in your workload cluster are healthy.

      kubectl --context ${REMOTE_CONTEXT} get pods -n gloo-mesh
      

    Example output:

      NAME                                   READY   STATUS    RESTARTS   AGE
    gloo-mesh-agent-d689d4544-g8fzn        1/1     Running   0          4m29s
    gloo-spire-server-cd88fb77d-jk7mr      2/2     Running   0          53s
    gloo-telemetry-collector-agent-7jzl4   1/1     Running   0          4m29s
    gloo-telemetry-collector-agent-86ktk   1/1     Running   0          4m28s
    gloo-telemetry-collector-agent-l8c99   1/1     Running   0          4m29s
    gloo-telemetry-collector-agent-pkh2v   1/1     Running   0          4m28s
    gloo-telemetry-collector-agent-pmqrh   1/1     Running   0          4m29s
    gloo-telemetry-collector-agent-wnq7d   1/1     Running   0          4m28s
    postgresql-0                           1/1     Running   0          4m28s
      

Step 4: Onboard the VM

Onboard the VM to your Gloo Mesh setup by installing the Istio sidecar, OpenTelemetry (OTel) collector, and SPIRE agents on the VM.

  1. On the workload cluster, create a namespace for the external workload configuration that you set up in subsequent steps.

      kubectl --context ${REMOTE_CONTEXT} create namespace vm-config
      
  2. Save the following ExternalWorkload Gloo resource to create an identity for apps that run on the VM. The following example resource provisions an identity in the vm-config namespace of the workload cluster, for services that listen on port 5000 and run on a VM that matches the specified identity selector. For more information and available options, see the API reference documentation.

      cat >externalworkload.yaml <<EOF
    apiVersion: networking.gloo.solo.io/v2alpha1
    kind: ExternalWorkload
    metadata:
      labels:
        # Label to use later when you create a virtual destination for the test app
        app: vm-ext-workload
        version: v1
      name: vm-ext-workload
      namespace: vm-config
    spec:
      connectedClusters:
        # Map of workload cluster name to VM configuration namespace
        ${REMOTE_CLUSTER}: vm-config
      identitySelector:
        aws:
          # ID of the security group for the VM
          - securityGroupId: ${VM_SG_ID}
      # Port for each app to include
      ports:
        # Test app port
        - name: http
          number: 5000
    EOF
      
  3. Create the ExternalWorkload resource in the workload cluster.

      kubectl apply --context ${REMOTE_CONTEXT} -f externalworkload.yaml
      
  4. Confirm that a WorkloadGroup Istio resource is created in your VM configuration namespace. This resource summarizes all workloads on the VM that you selected by port number in the ExternalWorkload.

      kubectl --context ${REMOTE_CONTEXT} get workloadgroup -n vm-config
      

    Example output:

      NAME               AGE
    vm-ext-workload    10s
      
  5. Get the address of the east-west gateway in the workload cluster.

      kubectl get svc --context ${REMOTE_CONTEXT} -n istio-eastwest istio-eastwestgateway -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}"
      
  6. On your VM, save the following environment variables. Later in this section, you use these variables in a command to connect the VM to the workload cluster through the cluster’s east-west gateway.

      export EW_GW_ADDR=<eastwest_gateway_address>
    export REMOTE_CLUSTER=<workload_cluster_name>
      
  7. Get the URLs for the packages to install the Istio sidecar agent and Gloo workload agent on your VM, and save them as environment variables on the VM.

    1. Log in to the Support Center and review the Solo packages for external workload integration support article.
    2. Save the package URL for the Istio sidecar agent.
      1. For the same Istio version that you previously downloaded, open the link for the cloud storage bucket of the Solo distribution of Istio.
      2. In the storage bucket, open the -solo directory for the version you want to use, such as 1.23.2-patch1-solo/.
      3. Depending on your VM image type, open either the deb/ or rpm/ directory.
      4. On either the istio-sidecar or istio-sidecar-arm64 binary package, click the menu button, and click Copy Public URL.
      5. On your VM, save the URL as an environment variable.
          export ISTIO_URL=<istio_package_url>
          
    3. Save the package URL for the Gloo workload agent.
      1. Open the link for the Solo cloud storage bucket.
      2. In the storage bucket, open the directory for the Gloo Mesh version that you use.
      3. On the package for your VM type, click the menu button, and click Copy Public URL.
      4. On your VM, save the URL as an environment variable.
          export GLOO_AGENT_URL=<gloo_agent_package_url>
          
  8. On your VM, download meshctl. Be sure to download the same version that you use on your local machine, such as 2.7.0-beta1 (latest).

      curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.7.0-beta1 sh -
    export PATH=$HOME/.gloo-mesh/bin:$PATH
      
  9. On your VM, use meshctl to install the Istio sidecar agent and Gloo workload agent. For more information about the options for this command, see the CLI reference. Because the command runs with nohup, the onboarding process continues even if your SSH session is accidentally terminated.

      nohup meshctl external-workload onboard --install \
    --attestor aws \
    --cluster ${REMOTE_CLUSTER} \
    --gateway-addr ${EW_GW_ADDR} \
    --gateway istio-eastwest/istio-eastwestgateway \
    --trust-domain cluster.local \
    -i ${ISTIO_URL} -g ${GLOO_AGENT_URL} \
    --ext-workload vm-config/vm-ext-workload \
    > /tmp/onboard.log 2>&1 </dev/null &
      
  10. Monitor the progress of the onboarding by checking the log file.

      tail -f /tmp/onboard.log
      

Step 5: Test connectivity

Verify that the onboarding process was successful by testing the bi-directional connection between the apps that run in the service mesh in your workload cluster and a test app on your VM.

  1. On your VM, deploy a simple HTTP server that listens on port 5000. This app is represented by the ExternalWorkload identity that you previously created.

      nohup python3 -m http.server 5000 &
      
  2. From the VM, curl the httpbin service in your workload cluster. Note that this curl command uses the app’s cluster DNS name instead of its pod IP address, because the VM is now connected to the Istio service mesh that runs in your workload cluster.

      curl -s httpbin.httpbin:8000 -v
      

    Example output:

      *   Trying 10.XX.XXX.XXX:8000...
    * Connected to httpbin.httpbin (10.XX.XXX.XXX) port 8000 (#0)
    > GET / HTTP/1.1
    > Host: httpbin.httpbin:8000
    > User-Agent: curl/7.74.0
    > Accept: */*
    >
    * Mark bundle as not supporting multiuse
    < HTTP/1.1 200 OK
    < server: envoy
    < date: Mon, 10 Jul 2023 18:00:32 GMT
    < content-type: text/html; charset=utf-8
    < content-length: 9593
    < access-control-allow-origin: *
    < access-control-allow-credentials: true
    < x-envoy-upstream-service-time: 4
    <
    <!DOCTYPE html>
    ...
      
  3. In the workload cluster, create a Gloo VirtualDestination resource so that apps in the service mesh can also access the HTTP server test app that runs on the VM through the testapp.vd hostname. Note that if you selected multiple apps that run on the VM in your ExternalWorkload resource, you can create a virtual destination for each app by using the app’s port that you specified.

      kubectl --context ${REMOTE_CONTEXT} apply -f - <<EOF
    apiVersion: networking.gloo.solo.io/v2
    kind: VirtualDestination
    metadata:
      labels:
        app: testapp
      name: testapp-vd
      namespace: vm-config
    spec:
      externalWorkloads:
      - labels:
          # Label that you gave to the ExternalWorkload resource
          app: vm-ext-workload
      hosts:
      # Hostname to use for the app
      - testapp.vd
      ports:
      # Port that you specified in the ExternalWorkload resource to select the test app
      - number: 5000
        protocol: HTTP
        targetPort:
          name: http
    EOF
      
  4. From the sleep app in your workload cluster, curl the HTTP server on the VM by using the testapp.vd hostname.

      kubectl --context ${REMOTE_CONTEXT} exec deploy/sleep -n sleep -- curl -s testapp.vd:5000 -v
      

    Example output:

      *   Trying 244.XXX.XXX.XX:5000...
    * Connected to testapp.vd (244.XXX.XXX.XX) port 5000 (#0)
    > GET / HTTP/1.1
    > Host: testapp.vd:5000
    > User-Agent: curl/8.1.2
    > Accept: */*
    >
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
    <html>
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    <title>Directory listing for /</title>
    </head>
    <body>
    <h1>Directory listing for /</h1>
    <hr>
    <ul>
    <li><a href=".bash_history">.bash_history</a></li>
    <li><a href=".bash_logout">.bash_logout</a></li>
    <li><a href=".bashrc">.bashrc</a></li>
    <li><a href=".profile">.profile</a></li>
    <li><a href=".ssh/">.ssh/</a></li>
    <li><a href="gm-config/">gm-config/</a></li>
    <li><a href="nohup.out">nohup.out</a></li>
    </ul>
    <hr>
    </body>
    </html>
    < HTTP/1.1 200 OK
    < server: envoy
    < date: Mon, 10 Jul 2023 18:00:50 GMT
    < content-type: text/html; charset=utf-8
    < content-length: 600
    < x-envoy-upstream-service-time: 3
    <
    { [600 bytes data]
    * Connection #0 to host testapp.vd left intact
      

Congratulations! Your VM is now registered with Gloo Mesh. You can now create Gloo resources for the workloads that you run on the VM, such as Gloo traffic policies. For example, if you selected multiple apps in your ExternalWorkload resource and want to apply a policy to all of those apps, you can use the label on the ExternalWorkload in the policy selector. Or, for policies that apply to destinations, you can select only the virtual destination for one of the apps. For more information, see Policy enforcement.
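
For example, the following sketch applies retries to the test app by selecting its virtual destination. It assumes the RetryTimeoutPolicy API at resilience.policy.gloo.solo.io/v2; check the policy reference for the exact schema in your Gloo version.

      kubectl --context ${REMOTE_CONTEXT} apply -f - <<EOF
    apiVersion: resilience.policy.gloo.solo.io/v2
    kind: RetryTimeoutPolicy
    metadata:
      name: testapp-retry
      namespace: vm-config
    spec:
      applyToDestinations:
      # Select the virtual destination for the test app by its label
      - kind: VIRTUAL_DESTINATION
        selector:
          labels:
            app: testapp
      config:
        retries:
          attempts: 3
          perTryTimeout: 2s
    EOF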

Uninstall

  1. On the VM:

    1. Remove the Istio sidecar, OTel collector, and SPIRE agents.
        sudo ./bootstrap.sh --uninstall
        
    2. Remove the bootstrap script and bundle, the agent packages, and the test app data.
        cd ..
      rm -r gloo-mesh-config
        
  2. On the workload cluster:

    1. Delete the vm-config, sleep, and httpbin namespaces.
        kubectl --context ${REMOTE_CONTEXT} delete namespace vm-config
      kubectl --context ${REMOTE_CONTEXT} delete namespace sleep
      kubectl --context ${REMOTE_CONTEXT} delete namespace httpbin
        
    2. Delete the Gateway and VirtualService resources from the istio-eastwest namespace.
        kubectl --context ${REMOTE_CONTEXT} delete Gateway istiod-gateway -n istio-eastwest
      kubectl --context ${REMOTE_CONTEXT} delete Gateway spire-gateway -n istio-eastwest
      kubectl --context ${REMOTE_CONTEXT} delete VirtualService istiod-vs -n istio-eastwest
      kubectl --context ${REMOTE_CONTEXT} delete VirtualService spire-vs -n istio-eastwest
        
  3. Continue to use your Gloo Mesh setup, or uninstall it.

    • To continue to use your Gloo Mesh setup, you can optionally remove the SPIRE and PostgreSQL servers by following the upgrade guide to remove their settings from the Helm values for your workload cluster.
    • To uninstall your Gloo Mesh setup:
      1. Uninstall the Istio service mesh.
          istioctl uninstall --context ${REMOTE_CONTEXT} --purge
          
      2. Remove the Istio namespaces.
          kubectl delete ns --context ${REMOTE_CONTEXT} istio-system
        kubectl delete ns --context ${REMOTE_CONTEXT} istio-eastwest
          
      3. Follow the steps in Uninstall Gloo Mesh to deregister the workload cluster and uninstall the Gloo management plane.