Add ECS services to the mesh
Onboard workloads that run in Amazon ECS to your ambient mesh.
About
As you build your ambient mesh, you might want to add a workload or service that is external to your cluster environment. For example, you might run a task or service in Amazon Elastic Container Service (ECS) that must communicate with services in the Istio ambient mesh of your Kubernetes cluster. Additionally, you might need to secure the connections between ECS applications with mTLS, as well as centrally manage routing and policies by using Istio APIs.
In the Solo distribution of Istio version 1.28 and later, you can extend the mesh to include workloads running in an ECS cluster by using the `istioctl ecs add-service` command. This command automatically bootstraps existing ECS services with a ztunnel sidecar container, which uses IAM roles to authenticate with your Istio installation. The workloads in ECS can then use the ztunnel to communicate with the in-mesh services in your Kubernetes cluster, as well as securely communicate over mTLS with other ECS workloads.
The following diagram shows the EKS and ECS architecture, in which EKS services and ECS tasks can communicate through mTLS-secured connections in your ambient mesh. The steps in this guide enable bi-directional communication, both from and to your ECS services.


Version and license requirements
This feature requires your mesh to be installed with the Solo distribution of Istio and an Enterprise-level license for Gloo Mesh (OSS APIs). Contact your account representative to obtain a valid license.

ECS integration into an ambient mesh is an alpha feature. Alpha features are likely to change, are not fully tested, and are not supported for production. For more information, see Solo feature maturity.
Platform considerations
The east-west gateway of your ambient mesh must be accessible to all ECS tasks and services that are integrated into the mesh. The easiest way to accomplish this is to run your ambient mesh in an Amazon Elastic Kubernetes Service (EKS) cluster, and run ECS tasks in the same Virtual Private Cloud (VPC) as the EKS nodes. However, the cluster can be hosted anywhere as long as the east-west gateway is accessible.
Step 1: Set up tools
Install the following CLI tools.
- `aws`, the AWS command line tool.
- `eksctl`, the CLI tool for creating and managing EKS clusters.
- `kubectl`, the Kubernetes command line tool. Download the `kubectl` version that is within one minor version of the EKS cluster you plan to use.
- `helm`, the Kubernetes package manager.
- `jq`, a command-line JSON processor.
- `envsubst`, a command-line tool for substituting environment variables in files. This tool is typically available via the `gettext` package, such as through `brew install gettext` on macOS or `apt-get install gettext-base` on Ubuntu.
Clone the `gloo-mesh-use-cases` repository, which contains scripts and manifests for this guide, and navigate to the `ecs` directory.

```shell
git clone https://github.com/solo-io/gloo-mesh-use-cases.git
cd gloo-mesh-use-cases/gloo-mesh/ecs
```
Step 2: Set up the EKS cluster with Istio
Create IAM roles for Istio. Then, either create a new EKS cluster and deploy an Istio ambient mesh to the cluster, or update an existing EKS cluster where a Solo distribution of Istio ambient mesh runs. The ambient mesh setup must include an east-west gateway to facilitate traffic between the EKS and ECS clusters.
Save the following details in environment variables.
- `AWS_ACCOUNT`: The ID of the AWS account that you want to use.
- `AWS_REGION`: The AWS region that you want to use. Currently, the EKS cluster and ECS cluster must exist in the same region for istiod to discover tasks and services in the ECS cluster.
- `CLUSTER_NAME`: A name for the EKS cluster. If you already created a cluster where you installed the Solo distribution of Istio ambient mesh, you can use that cluster name. Otherwise, you create an EKS cluster in the following steps. The ECS cluster name is also derived from this name, such as `ecs-${CLUSTER_NAME}`. The cluster name must start with a letter or number and can contain only letters, numbers, hyphens, and underscores. Periods and other special characters are not supported.
- `ECS_DOMAIN`: A custom domain suffix for your ECS resources, such as `example.ecs` or `myapp.local`. This domain is used to generate consistent hostnames for ECS services within the mesh in the format `<service>.<ecs_cluster>.<ECS_domain>`.
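To make the hostname format concrete, the following sketch composes an in-mesh hostname from the sample values that this guide uses. The `echo-service` name is a placeholder for illustration.

```shell
# Sketch: how the in-mesh hostname <service>.<ecs_cluster>.<ECS_domain>
# is composed from the sample values in this guide.
# "echo-service" is a placeholder service name.
CLUSTER_NAME=demo
ECS_DOMAIN=example.ecs
SERVICE=echo-service
HOSTNAME_IN_MESH="${SERVICE}.ecs-${CLUSTER_NAME}.${ECS_DOMAIN}"
echo "$HOSTNAME_IN_MESH"   # echo-service.ecs-demo.example.ecs
```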
```shell
export AWS_ACCOUNT=<ID>
export AWS_REGION=us-east-1
export CLUSTER_NAME=demo
export ECS_DOMAIN=example.ecs
```

Create IAM permissions to read from the ECS API, which allow istiod to perform automatic discovery of ECS services and tasks. This guide assumes that you create the EKS and ECS clusters in the same account, but automatic discovery can be enabled even if istiod runs in a different AWS account than your ECS resources.
The following script checks whether the required IAM roles and policies already exist in your AWS account. If they do not exist, the script creates the following resources, which are automatically associated with the `istio-system/istiod` Kubernetes service account when you create the EKS cluster:
- `istiod` IAM role with permissions to read from the ECS API, to be assumed by EKS pods
- `istiod-ecs` IAM role that provides read-only access to the ECS API
- `ecs-read-only` IAM policy attached to the `istiod-ecs` role
- `istiod-${AWS_ACCOUNT}` IAM policy that allows the `istiod` role to assume the `istiod-ecs` role
```shell
scripts/build/iam-istiod.sh
```

Example output:
```
Starting creation of Istiod IAM roles and policies...
Creating IAM role 'istiod'...
Successfully created istiod role.
Waiting for IAM role to propagate...
Creating IAM role 'istiod-ecs'...
Successfully created istiod-ecs role.
Creating IAM policy 'ecs-read-only'...
Successfully created ecs-read-only policy.
Successfully attached ecs-read-only policy to istiod-ecs role.
Creating IAM policy 'istiod-123456789012'...
Successfully created istiod-123456789012 policy.
Successfully attached istiod-123456789012 policy to istiod role.
Istiod IAM setup completed successfully.
Created resources:
- IAM Role: istiod
- IAM Role: istiod-ecs
- IAM Policy: ecs-read-only (attached to istiod-ecs)
- IAM Policy: istiod-123456789012 (attached to istiod)
```

Create or update an existing EKS cluster with the pod identity association that links the `istio-system/istiod` Kubernetes service account with the `istiod` IAM role. If you follow the steps to create an EKS cluster, you also deploy an ambient mesh into the cluster.
Step 3: Deploy tasks to an ECS cluster and enable service discovery
Next, set up your ECS tasks in an ECS cluster. This involves creating an IAM role for task execution, creating service accounts for the tasks, deploying shell and echo service tasks to an ECS cluster, and tagging the ECS cluster for Istio service discovery. For more information about the task resource strategy, see Service accounts and task roles.
Create an IAM role for ECS tasks. The following script checks if the ECS task role already exists in your AWS account. If the role does not exist, the script:
- Creates an IAM policy with the necessary permissions.
- Creates an IAM role.
- Assigns the permissions to the IAM role.
- Exports the ARNs of the roles as environment variables for use in subsequent steps.
```shell
source scripts/build/iam-task.sh
```

Example output:
```
Creating task role...
TASK_ROLE_ARN exported: arn:aws:iam::012345678912:role/ecs/ambient/eks-ecs-task-role
Creating task policy...
Task role is ready.
```

For the ECS cluster that you will create, create a namespace, label the namespace for ambient mesh inclusion, create a service account, and annotate the service account with the task role ARN. The namespace is used to store configuration objects related to ECS workloads, such as Istio WorkloadEntries and ServiceEntries.
```shell
kubectl create ns ecs-${CLUSTER_NAME}
kubectl label namespace ecs-${CLUSTER_NAME} istio.io/dataplane-mode=ambient
kubectl create sa ecs-demo-sa -n ecs-${CLUSTER_NAME}
kubectl -n ecs-${CLUSTER_NAME} annotate sa ecs-demo-sa ecs.solo.io/role-arn=$(echo $TASK_ROLE_ARN | sed 's/\/ecs\/ambient//')
export ECS_SERVICE_ACCOUNT_NAME=ecs-demo-sa
```

Deploy ECS tasks and services. This script performs the following actions:
- Registers ECS task definitions for a shell task and an echo service, injecting the task role ARN and CloudWatch logging configuration
- Discovers the VPC, subnets, and security groups from your EKS cluster
- Creates an ECS security group if it does not already exist
- Creates an ECS cluster named `ecs-${CLUSTER_NAME}`
- Creates a CloudWatch log group for ECS task logs
- Deploys two ECS services: a shell task to initiate test calls to services in the EKS ambient mesh, and an echo service to receive test calls from services in the EKS ambient mesh
- Authorizes ingress on the EKS security group to allow traffic from ECS (for demo purposes)
```shell
scripts/build/ecs-tasks.sh
```

Example output:
```
Registering task definition for shell-task.json...
Task definition shell-task.json registered successfully.
Registering task definition for echo-task.json...
Task definition echo-task.json registered successfully.
All task definitions registered successfully.
ecs_vpc_id: vpc-07116fe0105d74ae7
Private Subnet IDs: subnet-043fee32952089cb7,subnet-0d114b43f59a03a3b
Security Group IDs: sg-02026b180ffc4e89a
CloudWatch log group '/ecs/ecs-demo' created successfully.
ECS services script is completed.
Ingress authorized for EKS security group sg-01c8f4c9d9b44d80e (for ECS demo purposes).
```

The definition for the shell task sets the environment variable `ALL_PROXY=socks5h://127.0.0.1:15080`. This configuration ensures that all traffic is routed through the local SOCKS5 proxy on port 15080. As a result, all communication from the ECS task is redirected through the ztunnel container. Although not required, this setting is recommended so that traffic is managed by the ambient mesh in the same way as for other in-mesh services.

To allow Istio to discover resources in the ECS cluster, tag the cluster with `ecs.solo.io/discovery-enabled: true`. ECS cluster discovery is opt-in, and each cluster must be individually tagged.

```shell
aws ecs tag-resource --resource-arn arn:aws:ecs:${AWS_REGION}:${AWS_ACCOUNT}:cluster/ecs-${CLUSTER_NAME} --tags 'key=ecs.solo.io/discovery-enabled,value=true'
```

To verify that Istio automatically discovered all ECS tasks and services in the cluster, check that Istio ServiceEntry resources are created to represent the ECS services, and WorkloadEntry resources to represent the ECS tasks.
```shell
kubectl get serviceentry -n istio-system
kubectl get workloadentry -n istio-system
```

Example output:
```
NAME                                                        HOSTS                                   LOCATION   RESOLUTION   AGE
ecs-service-802411188784-us-east-2-cluster1-curl            ["curl.cluster1.example.ecs"]                      STATIC       4m53s
ecs-service-802411188784-us-east-2-cluster1-echo            ["echo.cluster1.example.ecs"]                      STATIC       4m53s
ecs-service-802411188784-us-east-2-cluster2-curl            ["curl.cluster2.example.ecs"]                      STATIC       4m54s
ecs-service-802411188784-us-east-2-cluster2-echo            ["echo.cluster2.example.ecs"]                      STATIC       4m54s
ecs-service-802411188784-us-east-2-ecs-demo-echo-service    ["echo-service.ecs-demo.example.ecs"]              STATIC       45s
ecs-service-802411188784-us-east-2-ecs-demo-shell-task      ["shell-task.ecs-demo.example.ecs"]                STATIC       45s

NAME                                                                        AGE     ADDRESS
ecs-task-802411188784-us-east-2-cluster1-a79e11098d0c43a6bf2e8a098cff42d1   4m53s   192.168.139.70
ecs-task-802411188784-us-east-2-cluster1-b55390a97d204b8882f4d1f822e3270f   4m53s   192.168.131.71
ecs-task-802411188784-us-east-2-cluster2-1017acaf0b2e4eec8e862d124bb8ca3f   4m54s   192.168.179.66
ecs-task-802411188784-us-east-2-cluster2-9324a8b6b4be4ee18220ac840b506dc9   4m53s   192.168.115.29
ecs-task-802411188784-us-east-2-ecs-demo-7835e66a80d64fd7b80fc710a8450e83   28s     192.168.119.164
ecs-task-802411188784-us-east-2-ecs-demo-c218fde98cfb490f95ede8a98471053c   19s     192.168.117.166
```
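As a quick local sanity check of the service account annotation earlier in this step, the `sed` expression strips the `/ecs/ambient` path segment from the task role ARN. The account ID and role name below are sample values.

```shell
# Demonstrates the role-ARN rewrite used in the ecs.solo.io/role-arn
# annotation: the /ecs/ambient path segment is removed from the ARN.
# The account ID and role name are sample values.
TASK_ROLE_ARN="arn:aws:iam::123456789012:role/ecs/ambient/eks-ecs-task-role"
ROLE_ARN=$(echo "$TASK_ROLE_ARN" | sed 's/\/ecs\/ambient//')
echo "$ROLE_ARN"   # arn:aws:iam::123456789012:role/eks-ecs-task-role
```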
Step 4: Add ECS services to the ambient mesh
At this point, services in the EKS cluster can connect to the ECS services by using their generated ServiceEntry hostnames. However, all connections are in plain text, and the ECS tasks and services are unable to reach other service hostnames in the mesh. To fully include the ECS services in the mesh, you add a ztunnel sidecar to each ECS service. This ztunnel handles routing based on hostnames and secures all traffic with mTLS.
Add each ECS service to the ambient mesh by running the `istioctl ecs add-service` command. This command updates the task definition of the ECS service with the ztunnel container, configures ztunnel to bootstrap the connection to istiod, and redeploys the service with a ztunnel sidecar that runs alongside the application.

```shell
istioctl ecs add-service shell-task --cluster ecs-${CLUSTER_NAME} --external --namespace ecs-${CLUSTER_NAME}
istioctl ecs add-service echo-service --cluster ecs-${CLUSTER_NAME} --external --namespace ecs-${CLUSTER_NAME}
```

Review the following flags that you can specify in the `istioctl ecs add-service` command.

| Option | Required? | Default | Description |
|---|---|---|---|
| `<ECS_service_name>` | ✅ | | The name of the ECS service. |
| `cluster` | ✅ | | The name of the ECS cluster. |
| `external` | | `false` | Set to `true` to include the ECS service in the mesh as an external workload. This option can be useful when the ECS service is in a different network than Istio, so that all requests are proxied through the east-west gateway in Kubernetes. |
| `hostname` | | `<service>.<ecs_cluster>.<ECS_domain>` | The DNS hostname that you want to expose the ECS service on in the mesh. Although the default format is `<service>.<ecs_cluster>.<ECS_domain>`, you can choose any hostname. You can set multiple ECS services to the same hostname to load balance requests between the services automatically. |
| `namespace` | | `istio-system` | The Kubernetes namespace to associate the workload with. In this namespace, an Istio ServiceEntry is created for the ECS service, and a WorkloadEntry is created for the ECS task. |
| `ports` | | `http:80:8080` | Port configuration for the service as a forward slash-separated list of `protocol:port[:targetPort]` pairs, such as `http:80:8080/tcp:9090`. |
| `profile` | | | The AWS CLI profile name. Defaults to the default `aws` CLI profile. |
| `service-account` | | `default` | The Kubernetes service account in the `namespace` that the ECS service runs with. The service account is associated with the task execution role of the ECS service. For more information, see Service accounts. |

Example output:
```
• Generating a bootstrap token for ecs-demo/default...
• Fetched Istiod Root Cert
• Fetched Istio network (demo)
• Fetching Istiod URL...
• Service "istio-eastwest" provides Istiod access on port 15012
• Fetching Istiod URL (https://a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6-123456789.us-east-1.elb.amazonaws.com:15012)
• Workload is authorized to run as role "arn:aws:iam::123456789012:role/ecs/ambient/eks-ecs-task-role"
• Marking this workload as external to the network (pass --internal to override)
• Created task definition arn:aws:ecs:us-east-1:123456789012:task-definition/shell-task-definition:2
• Successfully enrolled service "shell-task" (arn:aws:ecs:us-east-1:123456789012:service/ecs-demo/shell-task) to the mesh
• Generating a bootstrap token for ecs-demo/default...
• Fetched Istiod Root Cert
• Fetched Istio network (demo)
• Fetching Istiod URL...
• Service "istio-eastwest" provides Istiod access on port 15012
• Fetching Istiod URL (https://a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6-123456789.us-east-1.elb.amazonaws.com:15012)
• Workload is authorized to run as role "arn:aws:iam::123456789012:role/ecs/ambient/eks-ecs-task-role"
• Marking this workload as external to the network (pass --internal to override)
• Created task definition arn:aws:ecs:us-east-1:123456789012:task-definition/echo-service-definition:2
• Successfully enrolled service "echo-service" (arn:aws:ecs:us-east-1:123456789012:service/ecs-demo/echo-service) to the mesh
```

Verify that new Istio ServiceEntry and WorkloadEntry resources are created in the `ecs-${CLUSTER_NAME}` namespace to represent the redeployed ECS services and tasks. Note that it might take a few minutes for all resources to be created.

```shell
kubectl get serviceentry -n ecs-${CLUSTER_NAME}
kubectl get workloadentry -n ecs-${CLUSTER_NAME}
```

Example output:
```
NAME                                                        HOSTS                                   LOCATION   RESOLUTION   AGE
ecs-service-123456789012-us-east-1-ecs-demo-echo-service    ["echo-service.ecs-demo.example.ecs"]              STATIC       3m19s
ecs-service-123456789012-us-east-1-ecs-demo-shell-task      ["shell-task.ecs-demo.example.ecs"]                STATIC       3m21s

NAME                                                                        AGE     ADDRESS
ecs-task-123456789012-us-east-1-ecs-demo-a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6   2m36s   192.168.1.10
ecs-task-123456789012-us-east-1-ecs-demo-z9y8x7w6v5u4t3s2r1q0p9o8n7m6l5k4   73s     192.168.1.20
```

Verify that the ztunnel workloads are created for each task. Note that the `PROTOCOL` for each is `HBONE`, which indicates that the workloads can now communicate with other services in the mesh by using mTLS.

```shell
istioctl ztunnel-config workloads | grep ecs-${CLUSTER_NAME}
```

Example output:
```
ecs-demo   ecs-task-123456789012-us-east-1-ecs-demo-a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6   192.168.1.10   None   HBONE
ecs-demo   ecs-task-123456789012-us-east-1-ecs-demo-z9y8x7w6v5u4t3s2r1q0p9o8n7m6l5k4   192.168.1.20   None   HBONE
```
Other applications in the mesh can now reach the ECS services by sending requests to their in-mesh hostnames.
Step 5: Test connectivity
Verify that all traffic between ECS and EKS workloads is secured with mTLS, ensuring that it is encrypted, verified, and routed through the Istio ztunnel.
EKS to ECS
After the ECS echo service redeploys, applications in the EKS ambient mesh can reach it by calling its in-mesh hostname, `http://echo-service.ecs-${CLUSTER_NAME}.${ECS_DOMAIN}:8080`. All traffic is encrypted with mTLS to the ztunnel container, which then proxies requests to the application container. You can test this connectivity by making HTTP requests from an example EKS shell pod to the echo service running in ECS.
Label the default namespace in the EKS cluster for ambient mesh inclusion.
```shell
kubectl label namespace default istio.io/dataplane-mode=ambient
```

Deploy shell and echo applications to the default namespace.
```shell
kubectl apply -f manifests/eks-echo.yaml
kubectl apply -f manifests/eks-shell.yaml
```

Send a curl request from the EKS shell pod to the ECS echo service. Because you use the ECS service's in-mesh hostname, the request is routed via the ztunnel.
```shell
kubectl exec -it $(kubectl get pods -l app=eks-shell -o jsonpath="{.items[0].metadata.name}") -- curl echo-service.ecs-${CLUSTER_NAME}.${ECS_DOMAIN}:8080
```

Example output:
```
ServiceVersion=
ServicePort=8080
Host=echo-service.ecs-demo.example.ecs:8080
URL=/
Method=GET
Proto=HTTP/1.1
IP=192.168.1.15
RequestHeader=Accept:*/*
RequestHeader=User-Agent:curl/8.10.1
Hostname=ip-192-168-1-15.us-east-2.compute.internal
```

Optional: Review the ztunnel logs for the ECS echo service. The following command tails the CloudWatch logs for the ztunnel container and filters for connection logs that show the mTLS-secured request from the EKS shell service.
```shell
aws logs tail /ecs/ecs-${CLUSTER_NAME} --since 10m --log-stream-name-prefix "ztunnel/ztunnel/" --format short | grep "connection complete"
```

Example output:
```
2025-11-24T18:48:42 2025-11-24T18:48:42.288125Z info access connection complete src.addr=192.168.175.14:35720 src.workload="eks-shell-6b7bb8b475-rt72t" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/default" dst.addr=192.168.120.172:15008 dst.hbone_addr=192.168.120.172:8080 dst.service="echo-service.ecs-demo.example.ecs" dst.workload="ecs-task-802411188784-us-east-2-ecs-demo-z9y8x7w6v5u4t3s2r1q0p9o8n7m6l5k4" dst.namespace="ecs-demo" dst.identity="spiffe://cluster.local/ns/ecs-demo/sa/default" direction="inbound" bytes_sent=377 bytes_recv=116 duration="4ms"
```
ECS to EKS
Test connectivity from the ECS shell service to the EKS echo service by using a test script. This script uses the aws ecs execute-command functionality to run a curl command from the ECS shell task to the EKS echo service.
For this demo, the security group settings are relaxed to simplify testing and ensure connectivity between ECS and EKS workloads. In a production environment, be sure to configure security group settings to restrict access based on specific CIDR ranges, protocols, and ports. Limiting access helps to maintain security and prevent unwanted traffic between ECS and EKS environments.
If you have not already, download the AWS Session Manager plugin, which is required for the `aws ecs execute-command` functionality that the test script uses to run commands on ECS containers. For example, you can use the following sample command to install the plugin with Homebrew.

```shell
brew install --cask session-manager-plugin
```

Run the test script, in which the ECS shell task calls the EKS echo service.
```shell
scripts/test/call-from-ecs.sh
```

Example output:
```
Connecting to ECS cluster: ecs-demo
Using Task ID: a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6
-----
Testing connectivity from ECS to EKS
Running command: curl eks-echo.default:8080
-----
ServiceVersion=
ServicePort=8080
Host=eks-echo.default:8080
URL=/
Method=GET
Proto=HTTP/1.1
IP=192.168.1.25
RequestHeader=Accept:*/*
RequestHeader=User-Agent:curl/8.17.0
Hostname=eks-echo-a1b2c3d4-x5y6z
Test completed successfully.
```
Step 6: Control access with authorization policies
Test access control by applying Layer 4 authorization policies to both the EKS and ECS workloads in the ambient mesh.
L4 authorization for EKS workloads
Apply a deny-all L4 authorization policy to block all traffic to the EKS echo service.
```shell
kubectl apply -f manifests/eks-l4-deny.yaml
```

Test that ECS to EKS communication is now blocked by the L4 policy.
```shell
scripts/test/call-from-ecs.sh
```

The request fails with a connection error or timeout, indicating that the L4 policy is blocking the traffic.
Verify that EKS to ECS communication still succeeds, because no policy is blocking it.
```shell
kubectl exec -it $(kubectl get pods -l app=eks-shell -o jsonpath="{.items[0].metadata.name}") -- curl echo-service.ecs-${CLUSTER_NAME}.${ECS_DOMAIN}:8080
```

Example output:
```
ServiceVersion=
ServicePort=8080
Host=echo-service.ecs-demo.example.ecs:8080
URL=/
Method=GET
Proto=HTTP/1.1
IP=192.168.1.20
RequestHeader=Accept:*/*
RequestHeader=User-Agent:curl/8.17.0
Hostname=ip-192-168-1-20.us-east-2.compute.internal
```
L4 authorization for ECS workloads
Remove the EKS authorization policy to restore traffic flow.
```shell
kubectl delete -n default authorizationpolicies eks-echo-deny
```

Apply a deny-all L4 authorization policy to the ECS namespace to block all traffic to ECS services.
```shell
kubectl apply -n ecs-${CLUSTER_NAME} -f manifests/ecs-l4-deny.yaml
```

Verify that ECS to EKS communication is still allowed, because no L4 policy applies to EKS workloads.
```shell
scripts/test/call-from-ecs.sh
```

Example output:
```
Connecting to ECS cluster: ecs-demo
Using Task ID: a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6
-----
Testing connectivity from ECS to EKS
Running command: curl eks-echo.default:8080
-----
ServiceVersion=
ServicePort=8080
Host=eks-echo.default:8080
URL=/
Method=GET
Proto=HTTP/1.1
IP=192.168.1.25
RequestHeader=Accept:*/*
RequestHeader=User-Agent:curl/8.17.0
Hostname=eks-echo-a1b2c3d4-x5y6z
Test completed successfully.
```

Verify that EKS to ECS communication is now blocked by the L4 policy applied to ECS workloads.
```shell
kubectl exec -it $(kubectl get pods -l app=eks-shell -o jsonpath="{.items[0].metadata.name}") -- curl echo-service.ecs-${CLUSTER_NAME}.${ECS_DOMAIN}:8080
```

The request fails with a connection error or timeout, indicating that the L4 policy is blocking traffic to the ECS service.
Clean up the authorization policy.
```shell
kubectl delete -n ecs-${CLUSTER_NAME} authorizationpolicies ecs-echo-deny
```
Further configuration
Service accounts and task roles
By default, the task execution role of an ECS task is associated with the `istio-system/default` service account in Kubernetes. This association is used as part of the automatic IAM authentication that the ztunnel sidecar performs with istiod. A one-to-one relationship is required between the task execution role and the service account.
Because you can reuse the same task execution role across different ECS services and tasks, associating all of your ECS resources with the same service account can be a valid deployment strategy. However, if you want to integrate ECS services from multiple AWS accounts, or simply want different ECS services to use different task execution roles, you must create additional service accounts and link the ECS services to them by using AWS resource tags.
For example, if you want to add an ECS service called `demo-svc` in the ECS cluster `example-ecs-cluster` with a service account called `example-sa`, you can create the service account and add the service by using the following commands.

```shell
kubectl create serviceaccount example-sa -n istio-system
istioctl ecs add-service demo-svc --cluster example-ecs-cluster --service-account example-sa
```
Updating service configuration
After you integrate an ECS service, you might need to modify its routing and configuration. Most options from the `istioctl ecs add-service` command can be overridden by applying the following AWS resource tags to the ECS service and then redeploying the ECS service.
| Option | Default | Description |
|---|---|---|
| `ecs.solo.io/hostname` | `<service>.<ecs_cluster>.<ECS_domain>` | The DNS hostname that you want to expose the ECS service on in the mesh. Although the default format is `<service>.<ecs_cluster>.<ECS_domain>`, you can choose any hostname. You can set multiple ECS services to the same hostname to load balance requests between the services automatically. |
| `ecs.solo.io/ports` | `http:80:8080` | Port configuration for the service as a forward slash-separated list of `protocol:port[:targetPort]` pairs, such as `http:80:8080/tcp:9090`. |
| `ecs.solo.io/namespace` | `istio-system` | The Kubernetes namespace to associate the workload with. In this namespace, an Istio ServiceEntry is created for the ECS service, and a WorkloadEntry is created for the ECS task. |
| `ecs.solo.io/service-account` | `default` | The Kubernetes service account in the namespace that the ECS service runs with. The service account is associated with the task execution role of the ECS service. For more information, see Service accounts. |
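To make the `ecs.solo.io/ports` format concrete, the following sketch (not the actual mesh implementation) splits a sample value into its `protocol:port[:targetPort]` entries. Defaulting the target port to the service port when it is omitted is an assumption made here for illustration only.

```shell
# Sketch: split a sample ecs.solo.io/ports value into its entries.
# When targetPort is omitted, this illustration falls back to the
# service port; the actual mesh behavior may differ.
PORTS="http:80:8080/tcp:9090"
echo "$PORTS" | tr '/' '\n' | while IFS=: read -r proto port target; do
  echo "protocol=$proto port=$port targetPort=${target:-$port}"
done
```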
Example to update the service account that the ECS service is associated with:
```shell
aws ecs tag-resource --resource-arn arn:aws:ecs:us-east-1:123456789012:service/example-ecs-cluster/demo-svc --tags 'key=ecs.solo.io/service-account,value=example-sa'
```
Be sure to trigger a redeployment of the ECS service’s tasks after making any changes to the AWS resource tags.
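One way to force such a redeployment is the standard AWS CLI `--force-new-deployment` flag on `aws ecs update-service`; the cluster and service names below are the sample values from this section.

```shell
# Trigger a new deployment so that updated resource tags take effect.
# Cluster and service names are the sample values from this section.
aws ecs update-service \
  --cluster example-ecs-cluster \
  --service demo-svc \
  --force-new-deployment
```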
Cleanup
Remove the resources that you created in this guide by running the cleanup scripts. Each script provides detailed output about the resources being deleted.
Delete the ECS resources. This script performs the following cleanup actions:
- Scales down the `shell-task` and `echo-service` ECS services to zero tasks
- Deletes both ECS services from the `ecs-${CLUSTER_NAME}` cluster
- Deregisters all task definitions tagged with `environment=ecs-demo`
- Deletes the `ecs-demo-sg` security group
- Deletes the `ecs-${CLUSTER_NAME}` ECS cluster
- Deletes the `/ecs/ecs-${CLUSTER_NAME}` CloudWatch log group

```shell
scripts/cleanup/ecs-cluster.sh
```
Delete the IAM roles and policies. This script performs the following cleanup actions:
- Detaches and deletes the `eks-ecs-task-policy` from the `eks-ecs-task-role`
- Deletes the `eks-ecs-task-role` IAM role
- Detaches and deletes the `istiod-${AWS_ACCOUNT}` policy from the `istiod` role
- Deletes the `istiod` IAM role
- Detaches and deletes the `ecs-read-only` policy from the `istiod-ecs` role
- Deletes the `istiod-ecs` IAM role

```shell
scripts/cleanup/iam.sh
```
Delete the EKS cluster. This script performs the following cleanup actions:
- Deletes the `${CLUSTER_NAME}` EKS cluster
- Removes all associated node groups
- Deletes the VPC and networking resources created by `eksctl`
- Removes the pod identity associations

```shell
scripts/cleanup/eks-cluster.sh
```

This step can take several minutes to complete as `eksctl` removes all cluster resources.