Install with Argo CD
Use Argo Continuous Delivery (Argo CD) to automate the deployment and management of Gloo Mesh (Gloo Platform APIs) and Istio in your cluster.
Argo CD is a declarative, Kubernetes-native continuous deployment tool that can read and pull code from Git repositories and deploy it to your cluster. Because of that, you can integrate Argo CD into your GitOps pipeline to automate the deployment and synchronization of your apps.
In this guide, you learn how to use Argo CD applications to deploy the following components:
- Gloo Platform CRDs
- Gloo Mesh (Gloo Platform APIs)
- Istio control plane istiod
- Istio gateways
This guide assumes a single cluster setup for Gloo Mesh (Gloo Platform APIs) and Istio. If you want to use Argo CD in a multicluster setup, you must configure your applications to deploy resources in either the management or workload clusters.
Before you begin
Create or use an existing Kubernetes or OpenShift cluster, and save the cluster name in an environment variable. Note: The cluster name must be alphanumeric with no special characters except a hyphen (-), lowercase, and begin with a letter (not a number) to follow the Kubernetes DNS label standard.
```sh
export CLUSTER_NAME=<cluster_name>
```
Save your Gloo Mesh (Gloo Platform APIs) license in an environment variable. If you do not have a license key, contact an account representative.
```sh
export GLOO_MESH_LICENSE_KEY=<license-key>
```
Save the Gloo Mesh (Gloo Platform APIs) version that you want to install in an environment variable. The latest version is used as an example. You can find other versions in the Changelog documentation. Append `-fips` for a FIPS-compliant image, such as `2.9.4-fips`. Do not include `v` before the version number.
```sh
export GLOO_MESH_VERSION=2.9.4
```
Set environment variables for the Solo distribution of Istio that you want to install.
- `REPO`: The repo key for the Solo distribution of Istio that you can get by logging in to the Support Center and reviewing the Istio images built by Solo.io support article.
- `ISTIO_VERSION`: The version of Istio that you want to install, such as `1.26.7`.
- `ISTIO_IMAGE`: The Solo distribution of Istio patch version and `-solo` tag. You can optionally append other Solo tags as needed.
```sh
export REPO=<repo-key>
export ISTIO_VERSION=1.26.7
export ISTIO_IMAGE=${ISTIO_VERSION}-solo
```
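Before you continue, you can sanity-check that the cluster name you exported satisfies the DNS label rules described above. This is a minimal sketch; the `is_valid_cluster_name` helper is illustrative, not part of any Gloo tooling:

```shell
# Sketch: verify that a cluster name follows the rules stated above:
# lowercase alphanumerics and hyphens only, begins with a letter,
# does not end with a hyphen, and is at most 63 characters long.
is_valid_cluster_name() {
  printf '%s' "$1" | grep -Eq '^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$'
}

is_valid_cluster_name "my-cluster-1" && echo "valid"     # → valid
is_valid_cluster_name "My_Cluster" || echo "invalid"     # → invalid
```

Running the check against `$CLUSTER_NAME` before installing avoids Helm values that Kubernetes later rejects.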
- Patch versions 1.26.0 and 1.26.1 of the Solo distribution of Istio lack support for FIPS-tagged images and ztunnel outlier detection. When upgrading or installing 1.26, be sure to use patch version `1.26.1-patch0` and later only.
- In the Solo distribution of Istio 1.25 and later, you can access enterprise-level features by passing your Solo license in the `license.value` or `license.secretRef` field of the Solo distribution of the istiod Helm chart. The Solo istiod Helm chart is strongly recommended due to the included safeguards, default settings, and upgrade handling to ensure a reliable and secure Istio deployment. Though it is not recommended, you can pass your license key in the open source istiod Helm chart by using the `--set pilot.env.SOLO_LICENSE_KEY` field.
- Istio patch versions 1.25.1 and 1.24.4 contain an upstream certificate rotation bug in which requests with more than one trusted root certificate cannot be validated. If you use Gloo Mesh (Gloo Platform APIs) to manage root certificate rotation and use Istio 1.25 or 1.24, be sure to use 1.25.2 or 1.24.5 and later only.
- Istio 1.22 is supported only as patch version `1.22.1-patch0` and later. Do not use patch versions 1.22.0 and 1.22.1, which contain bugs that impact several Gloo Mesh (Gloo Platform APIs) routing features that rely on virtual destinations. Additionally, in Istio 1.22.0-1.22.3, the `ISTIO_DELTA_XDS` environment variable must be set to `false`. For more information, see this upstream Istio issue. Note that this issue is resolved in Istio 1.22.4.
- If you have multiple external services that use the same host and plan to use Istio 1.22, you must use patch version `1.22.1-patch0` or later to ensure that the Istio service entry that is created for those external services is correct.
- Due to a lack of support for the Istio CNI and iptables for the Istio proxy, you cannot run Istio (and therefore Gloo Mesh (Gloo Platform APIs)) on AWS Fargate. For more information, see the Amazon EKS issue.
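If you script your installs, the 1.22 caveat above can be guarded explicitly. A small sketch, assuming plain upstream patch version strings (not Solo `-patch` tags); the `needs_delta_xds_off` helper is made up for illustration:

```shell
# Illustration only: return success (0) when the given Istio patch version
# requires ISTIO_DELTA_XDS=false, per the note above (affects 1.22.0
# through 1.22.3; fixed in 1.22.4).
needs_delta_xds_off() {
  case "$1" in
    1.22.0|1.22.1|1.22.2|1.22.3) return 0 ;;
    *) return 1 ;;
  esac
}

if needs_delta_xds_off "${ISTIO_VERSION:-1.26.7}"; then
  echo "set ISTIO_DELTA_XDS=false in your istiod values"
else
  echo "no ISTIO_DELTA_XDS override needed"
fi
```

A guard like this makes the version-specific workaround self-documenting instead of tribal knowledge in your pipeline.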
Install Argo CD
Create the Argo CD namespace in your cluster.
```sh
kubectl create namespace argocd
```
Deploy Argo CD by using the non-HA YAML manifests.
```sh
until kubectl apply -k https://github.com/solo-io/gitops-library.git/argocd/deploy/default/ > /dev/null 2>&1; do sleep 2; done
```
Verify that the Argo CD pods are up and running.
```sh
kubectl get pods -n argocd
```
Example output:
```
NAME                                                READY   STATUS    RESTARTS   AGE
argocd-application-controller-0                     1/1     Running   0          46s
argocd-applicationset-controller-6d8f595ffd-jhplp   1/1     Running   0          48s
argocd-dex-server-64d4c94598-bcdzb                  1/1     Running   0          48s
argocd-notifications-controller-f6998b6c-pbwfc      1/1     Running   0          47s
argocd-redis-b5d6bf5f5-4mj2x                        1/1     Running   0          47s
argocd-repo-server-5bc5469bbc-qhh4s                 1/1     Running   0          47s
argocd-server-d985cbf9b-s66lv                       2/2     Running   0          46s
```
Update the default Argo CD password for the admin user to `solo.io`.
```sh
# bcrypt(password)=$2a$10$79yaoOg9dL5MO8pn8hGqtO4xQDejSEVNWAGQR268JHLdrCw6UCYmy
# password: solo.io
kubectl -n argocd patch secret argocd-secret \
  -p '{"stringData": { "admin.password": "$2a$10$79yaoOg9dL5MO8pn8hGqtO4xQDejSEVNWAGQR268JHLdrCw6UCYmy", "admin.passwordMtime": "'$(date +%FT%T%Z)'" }}'
```
Port-forward the Argo CD server on port 9999.
```sh
kubectl port-forward svc/argocd-server -n argocd 9999:443
```
Open the Argo CD UI and log in as the admin user with the password `solo.io`.


Install Gloo Mesh (Gloo Platform APIs)
Use Argo CD applications to deploy the Gloo Platform CRD and Gloo Mesh (Gloo Platform APIs) Helm charts in your cluster.
Create an Argo CD application to install the Gloo Platform CRD Helm chart.
```sh
kubectl apply -f- <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gloo-platform-crds
  namespace: argocd
spec:
  destination:
    namespace: gloo-mesh
    server: https://kubernetes.default.svc
  project: default
  source:
    chart: gloo-platform-crds
    repoURL: https://storage.googleapis.com/gloo-platform/helm-charts
    targetRevision: ${GLOO_MESH_VERSION}
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
    retry:
      limit: 2
      backoff:
        duration: 5s
        maxDuration: 3m0s
        factor: 2
EOF
```
Create another application to install the Gloo Mesh (Gloo Platform APIs) Helm chart. The following application prepopulates a set of Helm values to install Gloo Mesh (Gloo Platform APIs) components, and enable the Gloo telemetry pipeline and the built-in Prometheus server. To customize these settings, see the Helm reference.
```sh
kubectl apply -f- <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gloo-platform-helm
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: gloo-mesh
  project: default
  source:
    chart: gloo-platform
    helm:
      skipCrds: true
      values: |
        licensing:
          licenseKey: ${GLOO_MESH_LICENSE_KEY}
        common:
          cluster: ${CLUSTER_NAME}
        glooMgmtServer:
          enabled: true
          serviceType: ClusterIP
          registerCluster: true
          createGlobalWorkspace: true
          ports:
            healthcheck: 8091
        prometheus:
          enabled: true
        redis:
          deployment:
            enabled: true
        telemetryGateway:
          enabled: true
          service:
            type: LoadBalancer
        telemetryCollector:
          enabled: true
          config:
            exporters:
              otlp:
                endpoint: gloo-telemetry-gateway.gloo-mesh:4317
        glooUi:
          enabled: true
          serviceType: ClusterIP
        glooAgent:
          enabled: true
          relay:
            serverAddress: gloo-mesh-mgmt-server:9900
        glooInsightsEngine:
          enabled: true
    repoURL: https://storage.googleapis.com/gloo-platform/helm-charts
    targetRevision: ${GLOO_MESH_VERSION}
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
EOF
```
Verify that the Gloo Mesh (Gloo Platform APIs) components are installed and in a healthy state.
```sh
kubectl get pods -n gloo-mesh
```
Example output:
```
NAME                                      READY   STATUS    RESTARTS   AGE
gloo-mesh-agent-6497df4cf9-htqw4          1/1     Running   0          27s
gloo-mesh-mgmt-server-6d5546757f-6fzxd    1/1     Running   0          27s
gloo-mesh-redis-7c797d595d-lf9dr          1/1     Running   0          27s
gloo-mesh-ui-7567bcd54f-6tvjt             2/3     Running   0          27s
gloo-telemetry-collector-agent-8jvh2      1/1     Running   0          27s
gloo-telemetry-collector-agent-x2brj      1/1     Running   0          27s
gloo-telemetry-gateway-689cb78547-sqqgg   1/1     Running   0          27s
prometheus-server-946c89d8f-zx5sf         1/2     Running   0          27s
```
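The `gloo-platform-crds` Application above configures a `retry` stanza (`limit: 2`, `duration: 5s`, `factor: 2`, `maxDuration: 3m0s`). The delays this implies can be sketched in plain shell; this is an illustration of the backoff arithmetic, not Argo CD code:

```shell
# Sketch: compute the retry delays implied by the syncPolicy.retry stanza
# (base duration 5s, multiplied by factor 2 per retry, capped at 3m = 180s,
# for at most 2 retries).
limit=2; delay=5; factor=2; max=180
schedule=""
i=1
while [ "$i" -le "$limit" ]; do
  [ "$delay" -gt "$max" ] && delay="$max"
  schedule="${schedule}retry ${i}: wait ${delay}s
"
  delay=$((delay * factor))
  i=$((i + 1))
done
printf '%s' "$schedule"
# retry 1: wait 5s
# retry 2: wait 10s
```

In other words, a failed sync is retried after 5 seconds and then once more after 10 seconds before Argo CD reports the operation as failed.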
Install Istio
With Gloo Mesh (Gloo Platform APIs) installed in your environment, you can now install Istio by using the Istio Helm chart directly.
Create an Argo CD application to deploy the Istio base Helm chart to your cluster. This chart installs the CRDs that are necessary to deploy Istio.
```sh
kubectl apply -f- <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: istio-base
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
  annotations:
    argocd.argoproj.io/sync-wave: "-3"
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: istio-system
  project: default
  source:
    chart: base
    repoURL: https://istio-release.storage.googleapis.com/charts
    targetRevision: main
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
EOF
```
Create another application to deploy the Istio control plane istiod.
```sh
kubectl apply -f- <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: istiod
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: istio-system
  project: default
  source:
    chart: istiod
    repoURL: https://istio-release.storage.googleapis.com/charts
    targetRevision: main
    helm:
      values: |
        revision: main
        global:
          meshID: mesh1
          multiCluster:
            clusterName: ${CLUSTER_NAME}
          network: network1
          hub: ${REPO}
          tag: ${ISTIO_IMAGE}
        meshConfig:
          trustDomain: ${CLUSTER_NAME}
          accessLogFile: /dev/stdout
          accessLogEncoding: JSON
          enableAutoMtls: true
          defaultConfig:
            # Wait for the istio-proxy to start before starting application pods
            holdApplicationUntilProxyStarts: true
            envoyAccessLogService:
              address: gloo-mesh-agent.gloo-mesh:9977
            proxyMetadata:
              ISTIO_META_DNS_CAPTURE: "true"
              ISTIO_META_DNS_AUTO_ALLOCATE: "true"
          outboundTrafficPolicy:
            mode: ALLOW_ANY
          rootNamespace: istio-system
        pilot:
          env:
            PILOT_ENABLE_K8S_SELECT_WORKLOAD_ENTRIES: "false"
            PILOT_SKIP_VALIDATE_TRUST_DOMAIN: "true"
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    #automated: {}
  ignoreDifferences:
  - group: '*'
    kind: '*'
    managedFieldsManagers:
    - argocd-application-controller
EOF
```
Optional: Create another application to deploy an Istio east-west gateway.
```sh
kubectl apply -f- <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: istio-eastwestgateway
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
  annotations:
    argocd.argoproj.io/sync-wave: "-1"
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: istio-eastwest
  project: default
  source:
    chart: gateway
    repoURL: https://istio-release.storage.googleapis.com/charts
    targetRevision: main
    helm:
      values: |
        # Name allows overriding the release name. Generally this should not be set
        name: "istio-eastwestgateway"
        # revision declares which revision this gateway is a part of
        revision: "main"
        replicaCount: 1
        service:
          # Type of service. Set to "None" to disable the service entirely
          type: LoadBalancer
          ports:
          # Port for health checks on path /healthz/ready.
          # For AWS ELBs, this port must be listed first.
          - port: 15021
            targetPort: 15021
            name: status-port
          # Port for multicluster mTLS passthrough; required for Gloo Mesh east/west routing
          - port: 15443
            targetPort: 15443
            # Gloo Mesh looks for this default name 'tls' on a gateway
            name: tls
          # Port required for VM onboarding
          #- port: 15012
          #  targetPort: 15012
          #  # Required for VM onboarding discovery address
          #  name: tls-istiod
          annotations:
            # AWS NLB Annotation
            service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
          loadBalancerIP: ""
          loadBalancerSourceRanges: []
          externalTrafficPolicy: ""
        # Pod environment variables
        env: {}
        annotations:
          proxy.istio.io/config: '{ "holdApplicationUntilProxyStarts": true }'
        # Labels to apply to all resources
        labels:
          # Set a unique label for the gateway so that virtual gateways
          # can select this workload.
          app: istio-eastwestgateway
          istio: eastwestgateway
          revision: main
          # Matches spec.values.global.network in the istiod deployment
          topology.istio.io/network: ${CLUSTER_NAME}
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
EOF
```
Optional: Create another application to deploy the Istio ingress gateway.
```sh
kubectl apply -f- <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: istio-ingressgateway
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
  annotations:
    argocd.argoproj.io/sync-wave: "-1"
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: istio-ingress
  project: default
  source:
    chart: gateway
    repoURL: https://istio-release.storage.googleapis.com/charts
    targetRevision: ${ISTIO_VERSION}
    helm:
      values: |
        # Name allows overriding the release name. Generally this should not be set
        name: "istio-ingressgateway"
        # revision declares which revision this gateway is a part of
        revision: "main"
        replicaCount: 1
        service:
          # Type of service. Set to "None" to disable the service entirely
          type: LoadBalancer
          ports:
          - name: http2
            port: 80
            protocol: TCP
            targetPort: 80
          - name: https
            port: 443
            protocol: TCP
            targetPort: 443
          annotations:
            # AWS NLB Annotation
            service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
          loadBalancerIP: ""
          loadBalancerSourceRanges: []
          externalTrafficPolicy: ""
        # Pod environment variables
        env: {}
        annotations:
          proxy.istio.io/config: '{ "holdApplicationUntilProxyStarts": true }'
        # Labels to apply to all resources
        labels:
          istio.io/rev: main
          istio: ingressgateway
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
EOF
```
Verify that the Istio pods are up and running.
```sh
kubectl get pods -n istio-system
kubectl get pods -n istio-ingress
kubectl get pods -n istio-eastwest
```
Example output:
```
NAME                           READY   STATUS    RESTARTS   AGE
istiod-main-64ff8d9c9c-sl62w   1/1     Running   0          72s

NAME                                         READY   STATUS    RESTARTS   AGE
istio-ingressgateway-main-674cbfc747-bjm64   1/1     Running   0          65s

NAME                                          READY   STATUS    RESTARTS   AGE
istio-eastwestgateway-main-7895666dc8-hm6gp   1/1     Running   0          29s
```
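The `argocd.argoproj.io/sync-wave` annotations on these Applications control relative ordering when the Applications are themselves managed by a parent app-of-apps Application: lower waves sync first. A quick sketch of the resulting order, with `istiod` at the default wave `0` because it carries no annotation:

```shell
# Sort the Applications above by their sync-wave values; lower waves
# are applied first, so the CRDs in istio-base always land before the
# charts that depend on them.
order=$(printf '%s\n' \
  'istio-base -3' \
  'istio-eastwestgateway -1' \
  'istio-ingressgateway -1' \
  'istiod 0' | sort -k2,2n)
printf '%s\n' "$order"
# istio-base -3
# istio-eastwestgateway -1
# istio-ingressgateway -1
# istiod 0
```

Applications with the same wave (the two gateways here) sync in parallel.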
Congratulations! You successfully used Argo CD to deploy Gloo Mesh (Gloo Platform APIs) and Istio in your cluster.
Test the resilience of your setup
Managing deployments with Argo CD allows you to declare the desired state of your components in a version-controlled source of truth, such as Git, and to automatically sync changes to your environments whenever the source of truth changes. This approach significantly reduces the risk of configuration drift between your environments, and also helps to detect discrepancies between the desired state in Git and the actual state in your cluster so that self-healing mechanisms can kick in.
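Conceptually, the drift detection described above is a comparison of the desired manifests rendered from Git against the live state in the cluster; Argo CD marks an Application `OutOfSync` when they differ. A toy, cluster-free illustration (the real comparison is a structured diff of Kubernetes objects, not a string comparison):

```shell
# Toy illustration of drift detection: compare desired vs. live state.
desired='replicas: 1'   # what the manifest in Git declares
live='replicas: 0'      # what is currently running in the cluster
if [ "$desired" = "$live" ]; then
  status=Synced
else
  status=OutOfSync
fi
echo "$status"
# → OutOfSync
```

With `selfHeal: true`, an `OutOfSync` result triggers a re-apply of the desired state rather than just a report.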
Review the deployments that were created when you installed Gloo Mesh (Gloo Platform APIs) with Argo CD.
```sh
kubectl get deployments -n gloo-mesh
```
Example output:
```
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
gloo-mesh-agent          1/1     1            1           3h11m
gloo-mesh-mgmt-server    1/1     1            1           3h11m
gloo-mesh-redis          1/1     1            1           3h11m
gloo-mesh-ui             1/1     1            1           3h11m
gloo-telemetry-gateway   1/1     1            1           3h11m
prometheus-server        1/1     1            1           3h11m
```
Simulate a chaos scenario in which all of your deployments in the `gloo-mesh` namespace are deleted. Without Argo CD, deleting a deployment permanently deletes all of the pods that the deployment manages. However, when your deployments are monitored and managed by Argo CD, and you enabled the `selfHeal: true` and `prune: true` options in your Argo CD application, Argo CD automatically detects that the actual state of your deployment does not match the desired state in Git, and kicks off its self-healing mechanism.
```sh
kubectl delete deployments --all -n gloo-mesh
```
If you use self-signed TLS certificates for the relay connection between the Gloo management server and agent, you must also remove the secrets in the `gloo-mesh` namespace, because the certificates are automatically rotated during a redeploy or upgrade of the management server and agent. To delete the secrets, run `kubectl delete secrets --all -n gloo-mesh`.

Verify that Argo CD automatically recreated all of the deployments in the `gloo-mesh` namespace.
```sh
kubectl get deployments -n gloo-mesh
```
Example output:
```
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
gloo-mesh-agent          1/1     1            1           5m
gloo-mesh-mgmt-server    1/1     1            1           5m
gloo-mesh-redis          1/1     1            1           5m
gloo-mesh-ui             1/1     1            1           5m
gloo-telemetry-gateway   1/1     1            1           5m
prometheus-server        1/1     1            1           5m
```
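If you automate this chaos test, you can wait for recovery with the same `until ... sleep` pattern that this guide uses to install Argo CD. A cluster-independent sketch with a stubbed readiness probe; in a real script, the stub would be replaced by a `kubectl` check of your choosing:

```shell
# Generic poll-until helper in the style of the guide's "until ... sleep" loops.
wait_until() {
  attempts=$1; shift
  n=0
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge "$attempts" ] && return 1   # give up after N attempts
    sleep 1
  done
}

# Stub probe for illustration: reports ready on the third call. In practice,
# swap in a real check against the gloo-mesh namespace (hypothetical example):
#   kubectl get deployments -n gloo-mesh ...
calls=0
check_ready() {
  calls=$((calls + 1))
  [ "$calls" -ge 3 ]
}

wait_until 10 check_ready && echo "ready after $calls checks"
# → ready after 3 checks
```

The helper returns nonzero on timeout, so it composes cleanly with `set -e` pipelines.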
Next steps
Now that you have Gloo Mesh (Gloo Platform APIs) and Istio up and running, check out some of the following resources to learn more about Gloo Mesh and expand your service mesh capabilities.
Gloo Mesh (Gloo Platform APIs):
- Enable insights to review and improve your setup’s health and security posture.
- Apply Gloo policies to manage the security and resiliency of your service mesh environment.
- Organize team resources with workspaces.
- When it’s time to upgrade Gloo Mesh (Gloo Platform APIs), see the upgrade guide.
Istio: Now that you have Gloo Mesh (Gloo Platform APIs) and Istio installed, you can use Gloo to manage your Istio service mesh resources. You don’t need to directly configure any Istio resources going forward.
- Find out more about hardened Istio `n-4` version support built into Solo distributions of Istio.
- Review how Gloo Mesh (Gloo Platform APIs) custom resources are automatically translated into Istio resources.
- Monitor and observe your Istio environment with Gloo Mesh (Gloo Platform APIs)’s built-in telemetry tools.
- When it’s time to upgrade Istio, check out Upgrade managed service meshes.
Help and support:
- Talk to an expert to get advice or build out a proof of concept.
- Join the #gloo-mesh channel in the Solo.io community slack.
- Try out one of the Gloo workshops.
Cleanup
You can optionally remove the resources that you created as part of this guide.
```sh
kubectl delete applications istiod istio-base istio-ingressgateway istio-eastwestgateway -n argocd
kubectl delete applications gloo-platform-helm gloo-platform-crds -n argocd
kubectl delete applications istio-lifecyclemanager-deployments -n argocd
kubectl delete -k https://github.com/solo-io/gitops-library.git/argocd/deploy/default/
kubectl delete namespace argocd gloo-mesh
```