About

As you build your ambient mesh, you might want to add a workload that runs on an external machine to your cluster environment. For example, you might run an app or service in a virtual machine (VM) that must communicate with services in the Istio ambient mesh that runs in your Kubernetes cluster.

To extend the mesh to workloads running on VMs, you use the istioctl bootstrap command to generate a bootstrap token, and then deploy a ztunnel instance on the VM that uses that token to join your mesh. The workloads on your VM can then communicate with in-mesh services in your cluster through the ztunnel.

Before you begin

  1. Set up an ambient mesh in a single or multicluster setup.

  2. Deploy the bookinfo sample app.

  3. If you have not already, get the Solo distribution of Istio binary and install istioctl, which you use for the bootstrap command in this guide.

    1. Save the details for the version of the Solo distribution of Istio that you want to install.

      • Istio 1.29 and later:
          export ISTIO_VERSION=1.29.0
        export ISTIO_IMAGE=${ISTIO_VERSION}-solo
        export REPO=us-docker.pkg.dev/soloio-img/istio
        export HELM_REPO=us-docker.pkg.dev/soloio-img/istio-helm
          
      • Istio 1.28 and earlier: Save the repo key for the minor version of the Solo distribution of Istio. This is the 12-character hash at the end of the repo URL us-docker.pkg.dev/gloo-mesh/istio-<repo-key>, which you can find in the Istio images built by Solo.io support article.
          export ISTIO_VERSION=1.28.1
        export ISTIO_IMAGE=${ISTIO_VERSION}-solo
        # 12-character hash at the end of the repo URL
        export REPO_KEY=<repo_key>
        export REPO=us-docker.pkg.dev/gloo-mesh/istio-${REPO_KEY}
        export HELM_REPO=us-docker.pkg.dev/gloo-mesh/istio-helm-${REPO_KEY}
          
    2. Download the Solo distribution of Istio binary and install istioctl. This script automatically detects your OS and architecture, downloads the appropriate Solo distribution of Istio binary, and verifies the installation.

        bash <(curl -sSfL https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/install-istioctl.sh)
      export PATH=${HOME}/.istioctl/bin:${PATH}
        
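    3. To confirm that the istioctl binary is installed and on your PATH, you can check the client version. The `--remote=false` flag prints only the local client version, so no cluster connection is required.

```shell
istioctl version --remote=false
```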
  4. If you haven’t already, create an east-west gateway to facilitate traffic between the VM, istiod, and other workloads within the mesh.

    • Solo distribution of istioctl: For more information about this command, see the CLI reference.
        kubectl create namespace istio-eastwest
      istioctl multicluster expose --namespace istio-eastwest --generate > ew-gateway.yaml
      kubectl apply -f ew-gateway.yaml
        
    • Helm: For more information about the peering chart, see the Helm values reference. For recommendations on customizing the east-west gateway for resiliency and availability with the Helm chart, see the best practices for multicluster peering.
        helm upgrade -i peering-eastwest oci://${HELM_REPO}/peering \
        --version ${ISTIO_IMAGE} \
        --namespace istio-eastwest \
        --create-namespace \
        -f - <<EOF
      eastwest:
        create: true
        cluster: ${CLUSTER_NAME}
        # The network that the istio-system namespace is labeled with.
        # In prod environments, network and cluster are likely not the same value.
        network: ${CLUSTER_NAME}
        deployment: {}
      EOF
        
  5. Verify that the east-west gateway is successfully deployed.

      kubectl get pods -n istio-eastwest
      

    Example output:

      NAME                              READY   STATUS    RESTARTS   AGE
    istio-eastwest-5d4f757664-6hw7b   1/1     Running   0          9s
      
  6. Install Docker on the VM, which you use to run ztunnel as a container alongside your application.

Onboard a VM to the ambient mesh

  1. If you haven’t already, update your istiod installation to add the REQUIRE_3P_TOKEN="false" environment variable, which is required so that the ztunnel that you deploy to the VM in later steps can connect to istiod. In a multicluster mesh setup, set this environment variable on the istiod installation in the cluster that you want to connect the VM to.
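    For example, if you installed istiod with the Solo distribution Helm charts, a values override similar to the following might set the environment variable. This is a sketch only: the release name, namespace, and chart reference are assumptions, so adjust them to match your actual istiod installation, and check the chart's values reference for the exact key.

```shell
# Sketch only: adjust the release name, namespace, and chart reference
# to match your istiod installation.
helm upgrade istiod oci://${HELM_REPO}/istiod \
  --namespace istio-system \
  --version ${ISTIO_IMAGE} \
  --reuse-values \
  --set-string pilot.env.REQUIRE_3P_TOKEN=false
```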

  2. Save the name of the VM, namespace, and service account for later use. You can choose any names. The service account represents the VM in the cluster so that the Istio control plane manages the VM the same way as any other pod in the mesh. If you later want to apply Istio resources to your VM workload, you can use this service account and namespace in the configuration.

      export VM_NAME=vm-example
    export VM_NAMESPACE=vm-ns
    export VM_SERVICE_ACCOUNT=vm-sa
      
  3. In your cluster, generate an Istio bootstrap configuration.

    • This command creates a bootstrap token that includes the certificates and metadata that the VM needs to join the ambient mesh. ztunnel uses this token to authenticate with istiod.
    • For more information about this command, run istioctl bootstrap --help or see the CLI reference.
      kubectl create namespace ${VM_NAMESPACE}
    kubectl label namespace ${VM_NAMESPACE} istio.io/dataplane-mode=ambient
    kubectl --namespace ${VM_NAMESPACE} create serviceaccount ${VM_SERVICE_ACCOUNT}
    istioctl bootstrap --namespace ${VM_NAMESPACE} --service-account ${VM_SERVICE_ACCOUNT}
      
  4. Log in to your VM, such as by using SSH.

  5. On your VM, copy and save the bootstrap token that you generated as an environment variable.

      export BOOTSTRAP_TOKEN=<generated_token>
      
  6. Start a ztunnel instance on the VM. ztunnel is a lightweight data plane component that enables the VM to participate in the ambient mesh. The following command pulls the ztunnel container image and starts it with the configuration that it needs to connect to the mesh. Before you run the command, set the REPO and ISTIO_IMAGE environment variables on the VM to the same values that you set earlier.

      docker run -d -e BOOTSTRAP_TOKEN=${BOOTSTRAP_TOKEN} --network=host ${REPO}/ztunnel:${ISTIO_IMAGE}-distroless
      
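    To check that the ztunnel container started and stays healthy, you can inspect it with standard Docker commands. The container ID is a placeholder that you copy from the `docker ps` output.

```shell
# Find the ztunnel container ID.
docker ps
# Review the ztunnel logs to confirm that it connected to istiod.
docker logs <container_id>
```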
  7. Test connectivity from the VM to services in the mesh, such as the productpage service in the bookinfo namespace. For example, the following curl commands test connectivity by using productpage’s Kubernetes DNS name and mesh-internal DNS name. A successful response from each command indicates that the VM joined the mesh and can communicate with other in-mesh services.

      export ALL_PROXY=socks5h://127.0.0.1:15080
    curl productpage.bookinfo:9080
    curl productpage.bookinfo.mesh.internal:9080
      

Route traffic to the VM

To route traffic from other mesh workloads to the VM, create a ServiceEntry and WorkloadEntry to represent the VM within the mesh.

  1. Save the following routing details in environment variables.

      # The port that clients in the mesh will route to
    CLIENT_PORT=80
    # The port ztunnel will proxy inbound requests to on the VM
    APPLICATION_PORT=8080
    # Replace with the VM's private IP address
    VM_PRIVATE_IP=<private_IP>
    NETWORK=$(kubectl get namespace istio-system -o jsonpath='{.metadata.labels.topology\.istio\.io/network}')
      
  2. Create a ServiceEntry and WorkloadEntry for the VM. After you apply these resources, the ambient mesh routes traffic from Kubernetes pods to the VM at the address ${VM_NAME}.${VM_NAMESPACE}.svc.cluster.local, which resolves to vm-example.vm-ns.svc.cluster.local in this example.

      kubectl apply -n ${VM_NAMESPACE} -f - << EOF
    apiVersion: networking.istio.io/v1beta1
    kind: ServiceEntry
    metadata:
      name: ${VM_NAME}
      namespace: ${VM_NAMESPACE}
    spec:
      hosts:
      - ${VM_NAME}.${VM_NAMESPACE}.svc.cluster.local
      ports:
      - number: ${CLIENT_PORT}
        name: http
        protocol: HTTP
        targetPort: ${APPLICATION_PORT} 
      resolution: STATIC 
      workloadSelector: 
        labels:
          app: ${VM_NAME}
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: WorkloadEntry
    metadata:
      name: ${VM_NAME}
      namespace: ${VM_NAMESPACE}
      labels:
        app: ${VM_NAME}
      annotations:
        ambient.istio.io/redirection: enabled
    spec:
      address: ${VM_PRIVATE_IP}
      serviceAccount: ${VM_SERVICE_ACCOUNT}
      network: ${NETWORK}
    EOF
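
  3. To verify the routing, you can send a request from an in-mesh pod to the VM's service address. The following command is a sketch that runs a temporary curl pod in the bookinfo namespace, assuming that namespace is labeled for ambient mode; substitute any in-mesh namespace and image that you prefer.

```shell
# Sketch only: runs a one-off curl pod in an in-mesh namespace.
# The curlimages/curl entrypoint is curl, so the trailing flags are curl arguments.
kubectl run curl-test --rm -it --restart=Never \
  --namespace bookinfo \
  --image=curlimages/curl -- \
  -s http://${VM_NAME}.${VM_NAMESPACE}.svc.cluster.local:${CLIENT_PORT}
```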