Argo CD is a declarative, Kubernetes-native continuous deployment tool that can pull code and configuration from Git repositories and deploy them to your cluster. You can therefore integrate Argo CD into your GitOps pipeline to automate the deployment and synchronization of your apps.

In this guide, you learn how to use Argo CD applications to deploy the following components:

  • Gloo Platform CRDs
  • Gloo Mesh Enterprise
  • Istio control plane istiod
  • Istio gateways

Before you begin

  1. Create or use an existing Kubernetes or OpenShift cluster, and save the cluster name in an environment variable. Note: The cluster name must be alphanumeric with no special characters except a hyphen (-), lowercase, and begin with a letter (not a number).

      export CLUSTER_NAME=<cluster_name>
      
  2. Save your Gloo Mesh Enterprise license in an environment variable. If you do not have a license key, contact an account representative.

      export GLOO_MESH_LICENSE_KEY=<license-key>
      
  3. Save the Gloo Mesh Enterprise version that you want to install in an environment variable. The latest version is used as an example. You can find other versions in the Changelog documentation. Append ‘-fips’ for a FIPS-compliant image, such as ‘2.4.16-fips’. Do not include v before the version number.

      export GLOO_MESH_VERSION=2.4.16
      
  4. Review Supported versions to choose the Solo distribution of Istio that you want to use, and save the version information in the following environment variables.

    • REPO: The repo key for the Solo distribution of Istio that you can get by logging in to the Support Center and reviewing the Istio images built by Solo.io support article.
    • ISTIO_IMAGE: The version that you want to use with the solo tag, such as 1.18.7-patch3-solo. You can optionally append other tags of Solo distributions of Istio as needed.
    • REVISION: Take the Istio major and minor versions and replace the periods with hyphens, such as 1-18.
    • ISTIO_VERSION: The version of Istio that you want to install, such as 1.18.7-patch3.

      export REPO=<repo-key>
      export ISTIO_IMAGE=1.18.7-patch3-solo
      export REVISION=1-18
      export ISTIO_VERSION=1.18.7-patch3
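
    Optionally, verify that all of the environment variables from this section are set before you continue. The following loop is only a quick sanity check; the variable names match the ones that you exported in the previous steps.

      # quick sanity check: warn about any required variable that is empty or unset
      for var in CLUSTER_NAME GLOO_MESH_LICENSE_KEY GLOO_MESH_VERSION REPO ISTIO_IMAGE REVISION ISTIO_VERSION; do
        [[ -n "${!var}" ]] || echo "WARNING: $var is not set"
      done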
      

Install Argo CD

  1. Create the Argo CD namespace in your cluster.

      kubectl create namespace argocd
      
  2. Deploy Argo CD by using the non-HA YAML manifests.

      # retry until all of the Argo CD manifests apply successfully
      until kubectl apply -k https://github.com/solo-io/gitops-library.git/argocd/deploy/default/ > /dev/null 2>&1; do sleep 2; done
      
  3. Verify that the Argo CD pods are up and running.

      kubectl get pods -n argocd
      

    Example output:

    NAME                                                READY   STATUS    RESTARTS   AGE
    argocd-application-controller-0                     1/1     Running   0          46s
    argocd-applicationset-controller-6d8f595ffd-jhplp   1/1     Running   0          48s
    argocd-dex-server-64d4c94598-bcdzb                  1/1     Running   0          48s
    argocd-notifications-controller-f6998b6c-pbwfc      1/1     Running   0          47s
    argocd-redis-b5d6bf5f5-4mj2x                        1/1     Running   0          47s
    argocd-repo-server-5bc5469bbc-qhh4s                 1/1     Running   0          47s
    argocd-server-d985cbf9b-s66lv                       2/2     Running   0          46s
      
  4. Update the default Argo CD password for the admin user to solo.io.

    # bcrypt(password)=$2a$10$79yaoOg9dL5MO8pn8hGqtO4xQDejSEVNWAGQR268JHLdrCw6UCYmy
    # password: solo.io
    kubectl -n argocd patch secret argocd-secret \
      -p '{"stringData": {
        "admin.password": "$2a$10$79yaoOg9dL5MO8pn8hGqtO4xQDejSEVNWAGQR268JHLdrCw6UCYmy",
        "admin.passwordMtime": "'$(date +%FT%T%Z)'"
      }}'
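
    The bcrypt hash in this patch corresponds to the password solo.io, as the comments note. To use a different password, you can generate your own bcrypt hash, such as with the Apache htpasswd utility, and substitute it for the admin.password value.

      # example: generate a bcrypt hash for your own password (requires the htpasswd utility)
      htpasswd -nbBC 10 "" <your-password> | tr -d ':\n'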
      
  5. Port-forward the Argo CD server on port 9999.

      kubectl port-forward svc/argocd-server -n argocd 9999:443 
      
  6. Open the Argo CD UI in your browser by navigating to https://localhost:9999.

  7. Log in as the admin user with the password solo.io.
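
    Alternatively, if you have the argocd CLI installed, you can log in from the command line through the port-forwarded address. This step is optional; the rest of this guide uses only kubectl.

      # optional: log in with the Argo CD CLI through the port-forward
      argocd login localhost:9999 --username admin --password solo.io --insecure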

Argo CD welcome screen

Install Gloo Mesh Enterprise

Use Argo CD applications to deploy the Gloo Platform CRD and Gloo Mesh Enterprise Helm charts in your cluster.

  1. Create an Argo CD application to install the Gloo Platform CRD Helm chart.

    kubectl apply -f- <<EOF
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: gloo-platform-crds
      namespace: argocd
    spec:
      destination:
        namespace: gloo-mesh
        server: https://kubernetes.default.svc
      project: default
      source:
        chart: gloo-platform-crds
        repoURL: https://storage.googleapis.com/gloo-platform/helm-charts
        targetRevision: ${GLOO_MESH_VERSION}
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
        - CreateNamespace=true 
        retry:
          limit: 2
          backoff:
            duration: 5s
            maxDuration: 3m0s
            factor: 2
    EOF
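
    After the application syncs, you can optionally confirm that the Gloo custom resource definitions now exist in your cluster. The grep pattern below assumes the default solo.io API groups.

      # optional: list the custom resource definitions that the gloo-platform-crds application created
      kubectl get crds | grep solo.io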
      
  2. Create another application to install the Gloo Mesh Enterprise Helm chart. The following application prepopulates a set of Helm values that install the Gloo Mesh Enterprise components and enable the Gloo telemetry pipeline and the built-in Prometheus server. To customize these settings, see the Helm reference.

    kubectl apply -f- <<EOF
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: gloo-platform-helm
      namespace: argocd
      finalizers:
      - resources-finalizer.argocd.argoproj.io
    spec:
      destination:
        server: https://kubernetes.default.svc
        namespace: gloo-mesh
      project: default
      source:
        chart: gloo-platform
        helm:
          skipCrds: true
          values: |
            licensing:
              licenseKey: ${GLOO_MESH_LICENSE_KEY}
            common:
              cluster: ${CLUSTER_NAME}
            glooMgmtServer:
              enabled: true
              serviceType: ClusterIP
              registerCluster: true
              createGlobalWorkspace: true
              ports:
                healthcheck: 8091
            prometheus:
              enabled: true
            redis:
              deployment:
                enabled: true
            telemetryGateway:
              enabled: true
              service:
                type: LoadBalancer
            telemetryCollector:
              enabled: true
              config:
                exporters:
                  otlp:
                    endpoint: gloo-telemetry-gateway.gloo-mesh:4317
            glooUi:
              enabled: true
              serviceType: ClusterIP
            glooAgent:
              enabled: true
              relay:
                serverAddress: gloo-mesh-mgmt-server:9900
        repoURL: https://storage.googleapis.com/gloo-platform/helm-charts
        targetRevision: ${GLOO_MESH_VERSION}
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
    EOF
      
  3. Verify that the Gloo Mesh Enterprise components are installed and in a healthy state.

      kubectl get pods -n gloo-mesh
      

    Example output:

    NAME                                      READY   STATUS    RESTARTS   AGE
    gloo-mesh-agent-6497df4cf9-htqw4          1/1     Running   0          27s
    gloo-mesh-mgmt-server-6d5546757f-6fzxd    1/1     Running   0          27s
    gloo-mesh-redis-7c797d595d-lf9dr          1/1     Running   0          27s
    gloo-mesh-ui-7567bcd54f-6tvjt             2/3     Running   0          27s
    gloo-telemetry-collector-agent-8jvh2      1/1     Running   0          27s
    gloo-telemetry-collector-agent-x2brj      1/1     Running   0          27s
    gloo-telemetry-gateway-689cb78547-sqqgg   1/1     Running   0          27s
    prometheus-server-946c89d8f-zx5sf         1/2     Running   0          27s
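
    Because these components are managed by Argo CD applications, you can also check the sync and health status of the applications themselves.

      # check the sync and health status of the Argo CD applications
      kubectl get applications -n argocd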
      

Install Istio

With Gloo Mesh Enterprise installed in your environment, you can now install Istio. You can choose between a managed Istio installation, which uses the Gloo Mesh Istio lifecycle manager resource to set up Istio in your cluster, and an unmanaged Istio installation, which uses the Istio Helm charts directly.
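
The exact Argo CD applications that you create depend on the installation option that you choose. As an illustration only, the following sketch shows what an application for an unmanaged istiod installation might look like. It assumes the upstream Istio Helm chart repository and reuses the REPO, ISTIO_IMAGE, REVISION, and ISTIO_VERSION environment variables that you set earlier; create a matching application for the istio-base chart first, and adjust the chart version and values to your environment.

# Illustrative sketch only: an Argo CD application for an unmanaged istiod installation
# that uses the upstream Istio Helm charts. The chart repository URL and Helm values
# are assumptions; adjust them to match your setup.
kubectl apply -f- <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: istiod
  namespace: argocd
spec:
  destination:
    namespace: istio-system
    server: https://kubernetes.default.svc
  project: default
  source:
    chart: istiod
    repoURL: https://istio-release.storage.googleapis.com/charts
    # targetRevision must be a chart version that is published in repoURL
    targetRevision: ${ISTIO_VERSION}
    helm:
      values: |
        revision: ${REVISION}
        global:
          hub: ${REPO}
          tag: ${ISTIO_IMAGE}
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
EOF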

Congratulations! You successfully used Argo CD to deploy Gloo Mesh Enterprise and Istio in your cluster.

Test the resilience of your setup

Managing deployments with Argo CD allows you to declare the desired state of your components in a version-controlled source of truth, such as Git, and to automatically sync changes to your environments whenever the source of truth changes. This approach significantly reduces the risk of configuration drift between your environments, and it also helps detect discrepancies between the desired state in Git and the actual state in your cluster so that self-healing mechanisms can kick in.

  1. Review the deployments that were created when you installed Gloo Mesh Enterprise with Argo CD.

      kubectl get deployments -n gloo-mesh
      

    Example output:

    NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
    gloo-mesh-agent          1/1     1            1           3h11m
    gloo-mesh-mgmt-server    1/1     1            1           3h11m
    gloo-mesh-redis          1/1     1            1           3h11m
    gloo-mesh-ui             1/1     1            1           3h11m
    gloo-telemetry-gateway   1/1     1            1           3h11m
    prometheus-server        1/1     1            1           3h11m
      
  2. Simulate a chaos scenario in which all of your deployments in the gloo-mesh namespace are deleted. Without Argo CD, deleting a deployment permanently deletes all of the pods that the deployment manages. However, when your deployments are monitored and managed by Argo CD, and you enabled the selfHeal: true and prune: true options in your Argo CD application, Argo CD automatically detects that the actual state of your deployments does not match the desired state in Git and kicks off its self-healing mechanism.

      kubectl delete deployments --all -n gloo-mesh  
      
  3. Verify that Argo CD automatically recreated all of the deployments in the gloo-mesh namespace.

      kubectl get deployments -n gloo-mesh
      

    Example output:

    NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
    gloo-mesh-agent          1/1     1            1           5m
    gloo-mesh-mgmt-server    1/1     1            1           5m
    gloo-mesh-redis          1/1     1            1           5m
    gloo-mesh-ui             1/1     1            1           5m
    gloo-telemetry-gateway   1/1     1            1           5m
    prometheus-server        1/1     1            1           5m
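
    To review the self-heal activity that Argo CD performed, you can also inspect an application directly, including its sync status and recent events.

      # inspect the sync status, health, and recent sync events of the application
      kubectl describe application gloo-platform-helm -n argocd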
      

Next steps

Now that you have Gloo Mesh Enterprise and Istio up and running, check out some of the following resources to learn more about Gloo Mesh and expand your service mesh capabilities.

Gloo Mesh Enterprise:

Istio: Now that you have Gloo Mesh Enterprise and Istio installed, you can use Gloo to manage your Istio service mesh resources. You don’t need to directly configure any Istio resources going forward.

Help and support:

Cleanup

You can optionally remove the resources that you created as part of this guide.

kubectl delete applications istiod istio-base istio-ingressgateway istio-eastwestgateway -n argocd
kubectl delete applications gloo-platform-helm gloo-platform-crds -n argocd
kubectl delete applications istio-lifecyclemanager-deployments -n argocd
kubectl delete -k https://github.com/solo-io/gitops-library.git/argocd/deploy/default/
kubectl delete namespace argocd gloo-mesh