Install with Argo CD
Use Argo Continuous Delivery (Argo CD) to automate the deployment and management of Gloo Mesh Enterprise and Istio in your cluster.
Argo CD is a declarative, Kubernetes-native continuous deployment tool that can read and pull code from Git repositories and deploy it to your cluster. Because of that, you can integrate Argo CD into your GitOps pipeline to automate the deployment and synchronization of your apps.
In this guide, you learn how to use Argo CD applications to deploy the following components:
- Gloo Platform CRDs
- Gloo Mesh Enterprise
- Istio control plane istiod
- Istio gateways
This guide assumes a single cluster setup for Gloo Mesh Enterprise and Istio. If you want to use Argo CD in a multicluster setup, you must configure your applications to deploy resources in either the management or workload clusters.
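In a multicluster setup, each Argo CD application targets a specific cluster through its destination field. After you register a workload cluster with Argo CD, you can point an application at that cluster's API server instead of the local one. The following fragment is a sketch only; the server URL is a placeholder for your registered workload cluster.

```yaml
spec:
  destination:
    # Placeholder: the API server address of a workload cluster
    # that you previously registered with Argo CD
    server: https://<workload-cluster-api-server>
    namespace: gloo-mesh
```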
Before you begin
Create or use an existing Kubernetes or OpenShift cluster, and save the cluster name in an environment variable. Note: The cluster name must be alphanumeric with no special characters except a hyphen (-), lowercase, and begin with a letter (not a number).
export CLUSTER_NAME=<cluster_name>
Save your Gloo Mesh Enterprise license in an environment variable. If you do not have a license key, contact an account representative.
export GLOO_MESH_LICENSE_KEY=<license-key>
Save the Gloo Mesh Enterprise version that you want to install in an environment variable. The latest version is used as an example. You can find other versions in the Changelog documentation. Append '-fips' for a FIPS-compliant image, such as '2.7.0-fips'. Do not include 'v' before the version number.
export GLOO_MESH_VERSION=2.7.0
Set environment variables for the Solo distribution of Istio that you want to install.
- REPO: The repo key for the Solo distribution of Istio, which you can get by logging in to the Support Center and reviewing the Istio images built by Solo.io support article.
- ISTIO_VERSION: The version of Istio that you want to install, such as 1.24.2.
- ISTIO_IMAGE: The Solo distribution of Istio patch version and -solo tag. You can optionally append other Solo tags as needed.
export REPO=<repo-key>
export ISTIO_VERSION=1.24.2
export ISTIO_IMAGE=${ISTIO_VERSION}-solo
Istio 1.22 is supported only as patch version 1.22.1-patch0 and later. Do not use patch versions 1.22.0 and 1.22.1, which contain bugs that impact several Gloo Mesh Enterprise routing features that rely on virtual destinations. Additionally, in Istio 1.22.0-1.22.3, the ISTIO_DELTA_XDS environment variable must be set to false. For more information, see this upstream Istio issue. Note that this issue is resolved in Istio 1.22.4.
Istio 1.20 is supported only as patch version 1.20.1-patch1 and later. Do not use patch versions 1.20.0 and 1.20.1, which contain bugs that impact several Gloo Mesh Enterprise features that rely on Istio ServiceEntries.
If you have multiple external services that use the same host and plan to use Istio 1.20, 1.21, or 1.22, you must use patch versions 1.20.7, 1.21.3, or 1.22.1-patch0 or later to ensure that the Istio service entry that is created for those external services is correct.
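If you must run an Istio 1.22 patch version that is affected by the delta xDS issue, you can set the ISTIO_DELTA_XDS environment variable through the pilot values of the istiod Helm chart. This is a sketch of the relevant values fragment only, not a complete values file.

```yaml
pilot:
  env:
    # Required only for Istio 1.22.0-1.22.3; resolved in 1.22.4
    ISTIO_DELTA_XDS: "false"
```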
Install Argo CD
Create the Argo CD namespace in your cluster.
kubectl create namespace argocd
Deploy Argo CD by using the non-HA YAML manifests.
until kubectl apply -k https://github.com/solo-io/gitops-library.git/argocd/deploy/default/ > /dev/null 2>&1; do sleep 2; done
Verify that the Argo CD pods are up and running.
kubectl get pods -n argocd
Example output:
NAME                                                READY   STATUS    RESTARTS   AGE
argocd-application-controller-0                     1/1     Running   0          46s
argocd-applicationset-controller-6d8f595ffd-jhplp   1/1     Running   0          48s
argocd-dex-server-64d4c94598-bcdzb                  1/1     Running   0          48s
argocd-notifications-controller-f6998b6c-pbwfc      1/1     Running   0          47s
argocd-redis-b5d6bf5f5-4mj2x                        1/1     Running   0          47s
argocd-repo-server-5bc5469bbc-qhh4s                 1/1     Running   0          47s
argocd-server-d985cbf9b-s66lv                       2/2     Running   0          46s
Update the default Argo CD password for the admin user to solo.io.
# bcrypt(password)=$2a$10$79yaoOg9dL5MO8pn8hGqtO4xQDejSEVNWAGQR268JHLdrCw6UCYmy
# password: solo.io
kubectl -n argocd patch secret argocd-secret \
  -p '{"stringData": {
    "admin.password": "$2a$10$79yaoOg9dL5MO8pn8hGqtO4xQDejSEVNWAGQR268JHLdrCw6UCYmy",
    "admin.passwordMtime": "'$(date +%FT%T%Z)'"
  }}'
Port-forward the Argo CD server on port 9999.
kubectl port-forward svc/argocd-server -n argocd 9999:443
Open the Argo CD UI in your browser at https://localhost:9999, and log in as the admin user with the password solo.io.
Install Gloo Mesh Enterprise
Use Argo CD applications to deploy the Gloo Platform CRD and Gloo Mesh Enterprise Helm charts in your cluster.
Create an Argo CD application to install the Gloo Platform CRD Helm chart.
kubectl apply -f- <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gloo-platform-crds
  namespace: argocd
spec:
  destination:
    namespace: gloo-mesh
    server: https://kubernetes.default.svc
  project: default
  source:
    chart: gloo-platform-crds
    repoURL: https://storage.googleapis.com/gloo-platform/helm-charts
    targetRevision: ${GLOO_MESH_VERSION}
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
    retry:
      limit: 2
      backoff:
        duration: 5s
        maxDuration: 3m0s
        factor: 2
EOF
Create another application to install the Gloo Mesh Enterprise Helm chart. The following application prepopulates a set of Helm values to install the Gloo Mesh Enterprise components and to enable the Gloo telemetry pipeline and the built-in Prometheus server. To customize these settings, see the Helm reference.
kubectl apply -f- <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gloo-platform-helm
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: gloo-mesh
  project: default
  source:
    chart: gloo-platform
    helm:
      skipCrds: true
      values: |
        licensing:
          licenseKey: ${GLOO_MESH_LICENSE_KEY}
        common:
          cluster: ${CLUSTER_NAME}
        glooMgmtServer:
          enabled: true
          serviceType: ClusterIP
          registerCluster: true
          createGlobalWorkspace: true
          ports:
            healthcheck: 8091
        prometheus:
          enabled: true
        redis:
          deployment:
            enabled: true
        telemetryGateway:
          enabled: true
          service:
            type: LoadBalancer
        telemetryCollector:
          enabled: true
          config:
            exporters:
              otlp:
                endpoint: gloo-telemetry-gateway.gloo-mesh:4317
        glooUi:
          enabled: true
          serviceType: ClusterIP
        glooAgent:
          enabled: true
          relay:
            serverAddress: gloo-mesh-mgmt-server:9900
        glooInsightsEngine:
          enabled: true
    repoURL: https://storage.googleapis.com/gloo-platform/helm-charts
    targetRevision: ${GLOO_MESH_VERSION}
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
EOF
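Inline values keep this guide self-contained, but in a GitOps workflow you might prefer to store the Helm values in Git. Argo CD 2.6 and later supports multiple application sources, so a chart from a Helm repository can reference a values file in a separate Git repository. The following fragment is a sketch; the Git repoURL and values file path are hypothetical placeholders for your own repository.

```yaml
spec:
  sources:
  # The Gloo Platform Helm chart
  - chart: gloo-platform
    repoURL: https://storage.googleapis.com/gloo-platform/helm-charts
    targetRevision: ${GLOO_MESH_VERSION}
    helm:
      skipCrds: true
      valueFiles:
      # $values resolves to the Git source below
      - $values/gloo/values.yaml
  # Git repository that holds the values file, referenced as $values
  - repoURL: https://github.com/<your-org>/<your-gitops-repo>.git
    targetRevision: main
    ref: values
```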
Verify that the Gloo Mesh Enterprise components are installed and in a healthy state.
kubectl get pods -n gloo-mesh
Example output:
NAME                                      READY   STATUS    RESTARTS   AGE
gloo-mesh-agent-6497df4cf9-htqw4          1/1     Running   0          27s
gloo-mesh-mgmt-server-6d5546757f-6fzxd    1/1     Running   0          27s
gloo-mesh-redis-7c797d595d-lf9dr          1/1     Running   0          27s
gloo-mesh-ui-7567bcd54f-6tvjt             2/3     Running   0          27s
gloo-telemetry-collector-agent-8jvh2      1/1     Running   0          27s
gloo-telemetry-collector-agent-x2brj      1/1     Running   0          27s
gloo-telemetry-gateway-689cb78547-sqqgg   1/1     Running   0          27s
prometheus-server-946c89d8f-zx5sf         1/2     Running   0          27s
Install Istio
With Gloo Mesh Enterprise installed in your environment, you can now install Istio by using the Istio Helm chart directly.
Create an Argo CD application to deploy the Istio base Helm chart to your cluster. This chart installs the CRDs that are necessary to deploy Istio.
kubectl apply -f- <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: istio-base
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
  annotations:
    argocd.argoproj.io/sync-wave: "-3"
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: istio-system
  project: default
  source:
    chart: base
    repoURL: https://istio-release.storage.googleapis.com/charts
    targetRevision: ${ISTIO_VERSION}
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
EOF
Create another application to deploy the Istio control plane istiod.
kubectl apply -f- <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: istiod
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: istio-system
  project: default
  source:
    chart: istiod
    repoURL: https://istio-release.storage.googleapis.com/charts
    targetRevision: ${ISTIO_VERSION}
    helm:
      values: |
        revision: main
        global:
          meshID: mesh1
          multiCluster:
            clusterName: ${CLUSTER_NAME}
          network: network1
          hub: ${REPO}
          tag: ${ISTIO_IMAGE}
        meshConfig:
          trustDomain: ${CLUSTER_NAME}
          accessLogFile: /dev/stdout
          accessLogEncoding: JSON
          enableAutoMtls: true
          defaultConfig:
            # Wait for the istio-proxy to start before starting application pods
            holdApplicationUntilProxyStarts: true
            envoyAccessLogService:
              address: gloo-mesh-agent.gloo-mesh:9977
            proxyMetadata:
              ISTIO_META_DNS_CAPTURE: "true"
              ISTIO_META_DNS_AUTO_ALLOCATE: "true"
          outboundTrafficPolicy:
            mode: ALLOW_ANY
          rootNamespace: istio-system
        pilot:
          env:
            PILOT_ENABLE_K8S_SELECT_WORKLOAD_ENTRIES: "false"
            PILOT_SKIP_VALIDATE_TRUST_DOMAIN: "true"
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
  ignoreDifferences:
  - group: '*'
    kind: '*'
    managedFieldsManagers:
    - argocd-application-controller
EOF
Optional: Create another application to deploy an Istio east-west gateway.
kubectl apply -f- <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: istio-eastwestgateway
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
  annotations:
    argocd.argoproj.io/sync-wave: "-1"
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: istio-eastwest
  project: default
  source:
    chart: gateway
    repoURL: https://istio-release.storage.googleapis.com/charts
    targetRevision: ${ISTIO_VERSION}
    helm:
      values: |
        # Name allows overriding the release name. Generally this should not be set
        name: "istio-eastwestgateway"
        # revision declares which revision this gateway is a part of
        revision: "main"
        replicaCount: 1
        service:
          # Type of service. Set to "None" to disable the service entirely
          type: LoadBalancer
          ports:
          # Port for health checks on path /healthz/ready.
          # For AWS ELBs, this port must be listed first.
          - port: 15021
            targetPort: 15021
            name: status-port
          # Port for multicluster mTLS passthrough; required for Gloo Mesh east/west routing
          - port: 15443
            targetPort: 15443
            # Gloo Mesh looks for this default name 'tls' on a gateway
            name: tls
          # Port required for VM onboarding
          #- port: 15012
          #  targetPort: 15012
          #  # Required for VM onboarding discovery address
          #  name: tls-istiod
          annotations:
            # AWS NLB Annotation
            service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
          loadBalancerIP: ""
          loadBalancerSourceRanges: []
          externalTrafficPolicy: ""
        # Pod environment variables
        env: {}
        annotations:
          proxy.istio.io/config: '{ "holdApplicationUntilProxyStarts": true }'
        # Labels to apply to all resources
        labels:
          # Set a unique label for the gateway so that virtual gateways
          # can select this workload.
          app: istio-eastwestgateway
          istio: eastwestgateway
          revision: main
          # Matches spec.values.global.network in the istiod deployment
          topology.istio.io/network: ${CLUSTER_NAME}
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
EOF
Optional: Create another application to deploy the Istio ingress gateway.
kubectl apply -f- <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: istio-ingressgateway
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
  annotations:
    argocd.argoproj.io/sync-wave: "-1"
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: istio-ingress
  project: default
  source:
    chart: gateway
    repoURL: https://istio-release.storage.googleapis.com/charts
    targetRevision: ${ISTIO_VERSION}
    helm:
      values: |
        # Name allows overriding the release name. Generally this should not be set
        name: "istio-ingressgateway"
        # revision declares which revision this gateway is a part of
        revision: "main"
        replicaCount: 1
        service:
          # Type of service. Set to "None" to disable the service entirely
          type: LoadBalancer
          ports:
          - name: http2
            port: 80
            protocol: TCP
            targetPort: 80
          - name: https
            port: 443
            protocol: TCP
            targetPort: 443
          annotations:
            # AWS NLB Annotation
            service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
          loadBalancerIP: ""
          loadBalancerSourceRanges: []
          externalTrafficPolicy: ""
        # Pod environment variables
        env: {}
        annotations:
          proxy.istio.io/config: '{ "holdApplicationUntilProxyStarts": true }'
        # Labels to apply to all resources
        labels:
          istio.io/rev: main
          istio: ingressgateway
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
EOF
Verify that the Istio pods are up and running.
kubectl get pods -n istio-system
kubectl get pods -n istio-ingress
kubectl get pods -n istio-eastwest
Example output:
NAME                           READY   STATUS    RESTARTS   AGE
istiod-main-64ff8d9c9c-sl62w   1/1     Running   0          72s

NAME                                         READY   STATUS    RESTARTS   AGE
istio-ingressgateway-main-674cbfc747-bjm64   1/1     Running   0          65s

NAME                                          READY   STATUS    RESTARTS   AGE
istio-eastwestgateway-main-7895666dc8-hm6gp   1/1     Running   0          29s
Congratulations! You successfully used Argo CD to deploy Gloo Mesh Enterprise and Istio in your cluster.
Test the resilience of your setup
Managing deployments with Argo CD allows you to declare the desired state of your components in a version-controlled source of truth, such as Git, and to automatically sync changes to your environments whenever the source of truth changes. This approach significantly reduces the risk of configuration drift between your environments, and also helps detect discrepancies between the desired state in Git and the actual state in your cluster so that self-healing mechanisms can kick in.
Review the deployments that were created when you installed Gloo Mesh Enterprise with Argo CD.
kubectl get deployments -n gloo-mesh
Example output:
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
gloo-mesh-agent          1/1     1            1           3h11m
gloo-mesh-mgmt-server    1/1     1            1           3h11m
gloo-mesh-redis          1/1     1            1           3h11m
gloo-mesh-ui             1/1     1            1           3h11m
gloo-telemetry-gateway   1/1     1            1           3h11m
prometheus-server        1/1     1            1           3h11m
Simulate a chaos scenario where all of your deployments in the gloo-mesh namespace are deleted. Without Argo CD, deleting a deployment permanently deletes all of the pods that the deployment manages. However, when your deployments are monitored and managed by Argo CD, and you enabled the selfHeal: true and prune: true options in your Argo CD application, Argo CD automatically detects that the actual state of your deployments does not match the desired state in Git, and kicks off its self-healing mechanism.
kubectl delete deployments --all -n gloo-mesh
If you use self-signed TLS certificates for the relay connection between the Gloo management server and agent, you must also remove the secrets in the gloo-mesh namespace, because the certificates are automatically rotated during a redeploy or upgrade of the management server and agent. To delete the secrets, run kubectl delete secrets --all -n gloo-mesh.
Verify that Argo CD automatically recreated all of the deployments in the gloo-mesh namespace.
kubectl get deployments -n gloo-mesh
Example output:
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
gloo-mesh-agent          1/1     1            1           5m
gloo-mesh-mgmt-server    1/1     1            1           5m
gloo-mesh-redis          1/1     1            1           5m
gloo-mesh-ui             1/1     1            1           5m
gloo-telemetry-gateway   1/1     1            1           5m
prometheus-server        1/1     1            1           5m
Next steps
Now that you have Gloo Mesh Enterprise and Istio up and running, check out some of the following resources to learn more about Gloo Mesh and expand your service mesh capabilities.
Gloo Mesh Enterprise:
- Enable insights to review and improve your setup’s health and security posture.
- Apply Gloo policies to manage the security and resiliency of your service mesh environment.
- Organize team resources with workspaces.
- When it’s time to upgrade Gloo Mesh Enterprise, see the upgrade guide.
Istio: Now that you have Gloo Mesh Enterprise and Istio installed, you can use Gloo to manage your Istio service mesh resources. You don’t need to directly configure any Istio resources going forward.
- Find out more about hardened Istio n-4 version support built into Solo distributions of Istio.
- Review how Gloo Mesh Enterprise custom resources are automatically translated into Istio resources.
- Monitor and observe your Istio environment with Gloo Mesh Enterprise’s built-in telemetry tools.
- When it’s time to upgrade Istio, check out the Istio upgrade guide.
Help and support:
- Talk to an expert to get advice or build out a proof of concept.
- Join the #gloo-mesh channel in the Solo.io community slack.
- Try out one of the Gloo workshops.
Cleanup
You can optionally remove the resources that you created as part of this guide.
kubectl delete applications istiod istio-base istio-ingressgateway istio-eastwestgateway -n argocd
kubectl delete applications gloo-platform-helm gloo-platform-crds -n argocd
kubectl delete -k https://github.com/solo-io/gitops-library.git/argocd/deploy/default/
kubectl delete namespace argocd gloo-mesh