Add VMs to the mesh (alpha)
Onboard workloads that run in a virtual machine to your ambient mesh.
About
As you build your ambient mesh, you might want to add a workload that runs on an external machine to your cluster environment. For example, you might run an app or service in a virtual machine (VM) that must communicate with services in the Istio ambient mesh that runs in your Kubernetes cluster.
To extend the mesh to include workloads running on VMs, you leverage the istioctl bootstrap command to generate a bootstrap token, and deploy a ztunnel instance on the VM that uses that token to onboard to your mesh. Then, the workloads on your VM can communicate with in-mesh services in your cluster via the ztunnel.
VM integration into an ambient mesh is an alpha feature. Alpha features are likely to change, are not fully tested, and are not supported for production. For more information, see Solo feature maturity.
Before you begin
Set up an ambient mesh in a single or multicluster setup.
If you have not yet set up an ambient mesh, be sure to include the `REQUIRE_3P_TOKEN="false"` environment variable in istiod when you follow either of these guides to install an ambient mesh. For details, see the first step in the next section.

Deploy the `bookinfo` sample app.

If you have not already, get the Solo distribution of Istio binary and install `istioctl`, which you use for the bootstrap command in this guide.

Save the Solo distribution of Istio version that you installed.
- Istio 1.29 and later:

```
export ISTIO_VERSION=1.28.1
export ISTIO_IMAGE=${ISTIO_VERSION}-solo
export REPO=us-docker.pkg.dev/soloio-img/istio
export HELM_REPO=us-docker.pkg.dev/soloio-img/istio-helm
```

- Istio 1.28 and earlier: Save the repo key for the minor version of the Solo distribution of Istio. This is the 12-character hash at the end of the repo URL `us-docker.pkg.dev/gloo-mesh/istio-<repo-key>`, which you can find in the Istio images built by Solo.io support article.

```
export ISTIO_VERSION=1.28.1
export ISTIO_IMAGE=${ISTIO_VERSION}-solo
# 12-character hash at the end of the repo URL
export REPO_KEY=<repo_key>
export REPO=us-docker.pkg.dev/gloo-mesh/istio-${REPO_KEY}
export HELM_REPO=us-docker.pkg.dev/gloo-mesh/istio-helm-${REPO_KEY}
```
Download the Solo distribution of Istio binary and install `istioctl`. This script automatically detects your OS and architecture, downloads the appropriate Solo distribution of Istio binary, and verifies the installation.

```
bash <(curl -sSfL https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/install-istioctl.sh)
export PATH=${HOME}/.istioctl/bin:${PATH}
```
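To confirm that `istioctl` is installed and on your PATH, you can check the client version. The `--remote=false` flag skips contacting any cluster, so this is safe to run before a cluster is configured.

```
istioctl version --remote=false
```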
If you haven’t already, create an east-west gateway to facilitate traffic between the VM, istiod, and other workloads within the mesh.
- Solo distribution of `istioctl`: For more information about this command, see the CLI reference.

```
kubectl create namespace istio-eastwest
istioctl multicluster expose --namespace istio-eastwest --generate > ew-gateway.yaml
kubectl apply -f ew-gateway.yaml
```

- Helm: For more information about the `peering` chart, see the Helm values reference. For recommendations on customizing the east-west gateway for resiliency and availability with the Helm chart, see the best practices for multicluster peering.

```
helm upgrade -i peering-eastwest oci://${HELM_REPO}/peering \
  --version ${ISTIO_IMAGE} \
  --namespace istio-eastwest \
  --create-namespace \
  -f - <<EOF
eastwest:
  create: true
  cluster: ${CLUSTER_NAME}
  # The network that the istio-system namespace is labeled with.
  # In prod environments, network and cluster are likely not the same value.
  network: ${CLUSTER_NAME}
  deployment: {}
EOF
```
Verify that the east-west gateway is successfully deployed.
```
kubectl get pods -n istio-eastwest
```

Example output:

```
NAME                              READY   STATUS    RESTARTS   AGE
istio-eastwest-5d4f757664-6hw7b   1/1     Running   0          9s
```

Install `docker` on the VM, which is used to run ztunnel as a container alongside the application.
Onboard a VM to the ambient mesh
If you haven’t already, update your istiod installation to add the `REQUIRE_3P_TOKEN="false"` environment variable on istiod, which is required for the ztunnel that you deploy to the VM in later steps to connect to istiod. In a multicluster mesh setup, enable this environment variable on the istiod installation in the cluster that you want to connect the VM to.

Save the name of the VM, namespace, and service account for use later. These can be named whatever you prefer. The service account represents the VM in the cluster so that the Istio control plane manages it the same way as any other pod in the mesh. If you later want to apply Istio resources to your VM workload, you can use this service account and namespace in the configuration.
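One way to apply the `REQUIRE_3P_TOKEN="false"` setting described above is through istiod's Helm values. The following is a minimal sketch that assumes istiod was installed as a Helm release named `istiod` in the `istio-system` namespace; the release name, chart location, and version are assumptions, so adjust them to match your installation.

```
# Sketch only: set REQUIRE_3P_TOKEN=false on istiod via the istiod Helm chart.
# Release name, chart source, and version below are assumptions.
helm upgrade istiod oci://${HELM_REPO}/istiod \
  --version ${ISTIO_IMAGE} \
  --namespace istio-system \
  --reuse-values \
  --set-string pilot.env.REQUIRE_3P_TOKEN=false
```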
```
export VM_NAME=vm-example
export VM_NAMESPACE=vm-ns
export VM_SERVICE_ACCOUNT=vm-sa
```

In your cluster, generate an Istio bootstrap configuration.
- This command creates a bootstrap token that includes the necessary certificates and metadata for the VM to join the ambient mesh. ztunnel will use this token to authenticate with istiod.
- For more information about this command, run `istioctl bootstrap --help` or see the CLI reference.
```
kubectl create namespace ${VM_NAMESPACE}
kubectl label namespace ${VM_NAMESPACE} istio.io/dataplane-mode=ambient
kubectl --namespace ${VM_NAMESPACE} create serviceaccount ${VM_SERVICE_ACCOUNT}
istioctl bootstrap --namespace ${VM_NAMESPACE} --service-account ${VM_SERVICE_ACCOUNT}
```

Log in to your VM, such as by using SSH.
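If you ran the bootstrap command on your workstation rather than on the VM, you need to get the generated token onto the VM. A minimal sketch, assuming the token is written to stdout and using hypothetical SSH user and host names:

```
# Save the bootstrap output locally, then copy it to the VM.
# The user and host names are hypothetical placeholders.
istioctl bootstrap --namespace ${VM_NAMESPACE} --service-account ${VM_SERVICE_ACCOUNT} > vm-bootstrap.token
scp vm-bootstrap.token user@vm.example.com:/tmp/vm-bootstrap.token
```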
On your VM, copy and save the bootstrap token that you generated as an environment variable.
```
export BOOTSTRAP_TOKEN=<generated_token>
```

Start a ztunnel instance on the VM. ztunnel is a lightweight data plane component that enables the VM to participate in the ambient mesh. This command pulls the ztunnel container image and starts it with the necessary configuration to connect to the mesh.
```
docker run -d -e BOOTSTRAP_TOKEN=${BOOTSTRAP_TOKEN} --network=host us-docker.pkg.dev/gloo-mesh/istio-${REPO_KEY}/ztunnel:1.28.1-solo-distroless
```

Test connectivity from the VM to services in the mesh, such as to the `productpage` service in the `bookinfo` namespace. For example, the following curl commands test connectivity by using productpage’s Kubernetes DNS name and mesh-internal DNS name. Two `200 OK` responses indicate that the VM has successfully joined the mesh and can communicate with other in-mesh services.

```
export ALL_PROXY=socks5h://127.0.0.1:15080
curl productpage.bookinfo:9080
curl productpage.bookinfo.mesh.internal:9080
```
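The `ALL_PROXY` variable routes the curl requests through ztunnel's local SOCKS5 listener (port 15080 here), which is how traffic from the VM enters the mesh. Because the export applies to your whole shell session, you may prefer to scope it to a single command or unset it after testing; a minimal sketch:

```
# Scope the proxy to one request instead of exporting it session-wide:
ALL_PROXY=socks5h://127.0.0.1:15080 curl -s -o /dev/null -w "%{http_code}\n" productpage.bookinfo:9080
# Or clear it when you finish testing so unrelated traffic is not proxied:
unset ALL_PROXY
```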
Routing traffic to the VM
To route traffic from other mesh workloads to the VM, create a ServiceEntry and WorkloadEntry to represent the VM within the mesh.
Note that public IP addresses are not supported. To allow inbound traffic, the VM’s private IP address must be reachable by other mesh workloads.
Save the following routing details in environment variables.
```
# The port that clients in the mesh will route to
CLIENT_PORT=80
# The port ztunnel will proxy inbound requests to on the VM
APPLICATION_PORT=8080
# Replace with the VM's private IP address
VM_PRIVATE_IP=<private_IP>
NETWORK=$(kubectl get namespace istio-system -o jsonpath='{.metadata.labels.topology\.istio\.io\/network}')
```

Create a ServiceEntry and WorkloadEntry to represent the VM within the mesh. After you apply these resources, the ambient mesh routes traffic from Kubernetes pods to the VM via the address `vm-example.vm-ns.svc.cluster.local`.

```
kubectl apply -n ${VM_NAMESPACE} -f - << EOF
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: ${VM_NAME}
  namespace: ${VM_NAMESPACE}
spec:
  hosts:
  - ${VM_NAME}.${VM_NAMESPACE}.svc.cluster.local
  ports:
  - number: ${CLIENT_PORT}
    name: http
    protocol: HTTP
    targetPort: ${APPLICATION_PORT}
  resolution: STATIC
  workloadSelector:
    labels:
      app: ${VM_NAME}
---
apiVersion: networking.istio.io/v1beta1
kind: WorkloadEntry
metadata:
  name: ${VM_NAME}
  namespace: ${VM_NAMESPACE}
  labels:
    app: ${VM_NAME}
  annotations:
    ambient.istio.io/redirection: enabled
spec:
  address: ${VM_PRIVATE_IP}
  serviceAccount: ${VM_SERVICE_ACCOUNT}
  network: ${NETWORK}
EOF
```
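To verify the routing, you can send a request to the VM-backed hostname from any pod in the mesh. This sketch assumes a pod with curl available, such as Istio's curl sample app; substitute any in-mesh workload you have deployed.

```
# Hypothetical: exec into an in-mesh pod named "curl" and request the VM service.
kubectl exec deploy/curl -- curl -s -o /dev/null -w "%{http_code}\n" \
  http://${VM_NAME}.${VM_NAMESPACE}.svc.cluster.local:${CLIENT_PORT}
```

A `200` status code indicates that traffic from the cluster reached the application listening on `APPLICATION_PORT` on the VM.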