Install with Argo CD

Use Argo Continuous Delivery (Argo CD) to automate the deployment and management of Gloo Mesh Enterprise and Istio in your cluster. Argo CD is a declarative, Kubernetes-native continuous deployment tool that can read and pull code from Git repositories and deploy it to your cluster. Because of that, you can integrate Argo CD into your GitOps pipeline to automate the deployment and synchronization of your apps.

In this guide, you learn how to use Argo CD applications to deploy the following components: the Gloo Platform CRD and Gloo Mesh Enterprise Helm charts, and Istio.

This guide assumes a single cluster setup for Gloo Mesh Enterprise and Istio. If you want to use Argo CD in a multicluster setup, you must configure your applications to deploy resources in either the management or workload clusters.
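For example, in a multicluster setup, each Argo CD application can target a different cluster through its destination field. The following sketch is an assumption, not part of this guide's setup: it installs the Gloo Platform CRDs in a workload cluster that is already registered with Argo CD under the hypothetical name cluster-1.

```yaml
# Hypothetical application that deploys to a registered workload cluster
# instead of the local cluster that hosts Argo CD
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gloo-platform-crds-cluster-1
  namespace: argocd
spec:
  project: default
  source:
    chart: gloo-platform-crds
    repoURL: https://storage.googleapis.com/gloo-platform/helm-charts
    targetRevision: 2.5.0
  destination:
    # Reference the cluster by the name that you registered with Argo CD,
    # or set the server field to the cluster's API server URL instead
    name: cluster-1
    namespace: gloo-mesh
```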

Before you begin

  1. Create or use an existing Kubernetes or OpenShift cluster, and save the cluster name in an environment variable. Note: The cluster name must be alphanumeric with no special characters except a hyphen (-), lowercase, and begin with a letter (not a number).

    export CLUSTER_NAME=<cluster_name>
    
  2. Save your Gloo Mesh Enterprise license in an environment variable. If you do not have a license key, contact an account representative.

    export GLOO_MESH_LICENSE_KEY=<license-key>
    
  3. Save the Gloo Mesh Enterprise version that you want to install in an environment variable. The latest version is used as an example. You can find other versions in the Changelog documentation. Append "-fips" for a FIPS-compliant image, such as "2.5.0-fips". Do not include "v" before the version number.

    export GLOO_MESH_VERSION=2.5.0
    
  4. Save the Istio version information as environment variables.

    • For REPO, use a Solo repo key that you can get by logging in to the Support Center and reviewing the Istio images built by Solo.io support article. For more information, see Get the Solo distribution of Istio that you want to use.
    • For ISTIO_VERSION, enter the version of Istio that you want to install, such as 1.20.2.
    • For ISTIO_IMAGE, append the solo tag to the Istio version that you want to install. The solo tag is required to use many enterprise features. You can optionally append other tags, as described in About the Solo distribution of Istio. If you downloaded a different version than the example, make sure to specify that version instead. Note: The Istio lifecycle manager is supported only for Istio versions 1.15.4 or later.
    • For REVISION, take the Istio major and minor version numbers and replace the period with a hyphen, such as 1-20. Note: For testing environments only, you can deploy a revisionless installation. Revisionless installations permit in-place upgrades, which are quicker than the canary-based upgrades that are required for revisioned installations. To omit a revision, skip setting the revision environment variable. Then in subsequent steps, you edit the sample files that you download to remove the revision and gatewayRevision fields. Note that if you deploy multiple Istio installations in the same cluster, only one installation can be revisionless.
    export REPO=<repo-key>
    export ISTIO_VERSION=1.20.2
    export ISTIO_IMAGE=$ISTIO_VERSION-solo
    export REVISION=1-20
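Before you continue, you can sanity-check these values. The following sketch is an assumption based on the notes above: the regex encodes the stated cluster name rules, and the revision is derived by joining the major and minor version numbers with a hyphen.

```shell
# Example values; replace with your own exports from the previous steps
CLUSTER_NAME=${CLUSTER_NAME:-my-cluster}
ISTIO_VERSION=${ISTIO_VERSION:-1.20.2}

# Hypothetical helper: lowercase alphanumeric plus hyphens, starting with a letter
valid_cluster_name() {
  printf '%s' "$1" | grep -Eq '^[a-z][a-z0-9-]*$'
}

valid_cluster_name "$CLUSTER_NAME" \
  && echo "cluster name OK: $CLUSTER_NAME" \
  || echo "invalid cluster name: $CLUSTER_NAME" >&2

# Derive the revision from the Istio version, for example 1.20.2 -> 1-20
REVISION=$(printf '%s' "$ISTIO_VERSION" | awk -F. '{print $1"-"$2}')
echo "REVISION=$REVISION"
```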
    

Install Argo CD

  1. Create the Argo CD namespace in your cluster.

    kubectl create namespace argocd
    
  2. Deploy Argo CD by using the non-HA YAML manifests.

    until kubectl apply -k https://github.com/solo-io/gitops-library.git/argocd/deploy/default/ > /dev/null 2>&1; do sleep 2; done
    
  3. Verify that the Argo CD pods are up and running.

    kubectl get pods -n argocd
    

    Example output:

    NAME                                                READY   STATUS    RESTARTS   AGE
    argocd-application-controller-0                     1/1     Running   0          46s
    argocd-applicationset-controller-6d8f595ffd-jhplp   1/1     Running   0          48s
    argocd-dex-server-64d4c94598-bcdzb                  1/1     Running   0          48s
    argocd-notifications-controller-f6998b6c-pbwfc      1/1     Running   0          47s
    argocd-redis-b5d6bf5f5-4mj2x                        1/1     Running   0          47s
    argocd-repo-server-5bc5469bbc-qhh4s                 1/1     Running   0          47s
    argocd-server-d985cbf9b-s66lv                       2/2     Running   0          46s
    
  4. Update the default Argo CD password for the admin user to solo.io.

    # bcrypt(password)=$2a$10$79yaoOg9dL5MO8pn8hGqtO4xQDejSEVNWAGQR268JHLdrCw6UCYmy
    # password: solo.io
    kubectl -n argocd patch secret argocd-secret \
      -p '{"stringData": {
        "admin.password": "$2a$10$79yaoOg9dL5MO8pn8hGqtO4xQDejSEVNWAGQR268JHLdrCw6UCYmy",
        "admin.passwordMtime": "'$(date +%FT%T%Z)'"
      }}'
    
  5. Port-forward the Argo CD server on port 9999.

    kubectl port-forward svc/argocd-server -n argocd 9999:443 
    
  6. Open the Argo CD UI in your browser at https://localhost:9999.

  7. Log in as the admin user with the password solo.io.

Argo CD welcome screen

Install Gloo Mesh Enterprise

Use Argo CD applications to deploy the Gloo Platform CRD and Gloo Mesh Enterprise Helm charts in your cluster.

  1. Create an Argo CD application to install the Gloo Platform CRD Helm chart.

    kubectl apply -f- <<EOF
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: gloo-platform-crds
      namespace: argocd
    spec:
      destination:
        namespace: gloo-mesh
        server: https://kubernetes.default.svc
      project: default
      source:
        chart: gloo-platform-crds
        repoURL: https://storage.googleapis.com/gloo-platform/helm-charts
        targetRevision: ${GLOO_MESH_VERSION}
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
        - CreateNamespace=true 
        retry:
          limit: 2
          backoff:
            duration: 5s
            maxDuration: 3m0s
            factor: 2
    EOF
    
  2. Create another application to install the Gloo Mesh Enterprise Helm chart. The following application prepopulates a set of Helm values to install the Gloo Mesh Enterprise components and to enable the Gloo telemetry pipeline and the built-in Prometheus server. To customize these settings, see the Helm reference.

    kubectl apply -f- <<EOF
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: gloo-platform-helm
      namespace: argocd
      finalizers:
      - resources-finalizer.argocd.argoproj.io
    spec:
      destination:
        server: https://kubernetes.default.svc
        namespace: gloo-mesh
      project: default
      source:
        chart: gloo-platform
        helm:
          skipCrds: true
          values: |
            licensing:
              licenseKey: ${GLOO_MESH_LICENSE_KEY}
            common:
              cluster: ${CLUSTER_NAME}
            glooMgmtServer:
              enabled: true
              serviceType: ClusterIP
              registerCluster: true
              createGlobalWorkspace: true
              ports:
                healthcheck: 8091
            prometheus:
              enabled: true
            redis:
              deployment:
                enabled: true
            telemetryGateway:
              enabled: true
              service:
                type: LoadBalancer
            telemetryCollector:
              enabled: true
              config:
                exporters:
                  otlp:
                    endpoint: gloo-telemetry-gateway.gloo-mesh:4317
            glooUi:
              enabled: true
              serviceType: ClusterIP
            glooAgent:
              enabled: true
              relay:
                serverAddress: gloo-mesh-mgmt-server:9900
        repoURL: https://storage.googleapis.com/gloo-platform/helm-charts
        targetRevision: ${GLOO_MESH_VERSION}
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
    EOF
    
  3. Verify that the Gloo Mesh Enterprise components are installed and in a healthy state.

    kubectl get pods -n gloo-mesh
    

    Example output:

    NAME                                      READY   STATUS    RESTARTS   AGE
    gloo-mesh-agent-6497df4cf9-htqw4          1/1     Running   0          27s
    gloo-mesh-mgmt-server-6d5546757f-6fzxd    1/1     Running   0          27s
    gloo-mesh-redis-7c797d595d-lf9dr          1/1     Running   0          27s
    gloo-mesh-ui-7567bcd54f-6tvjt             2/3     Running   0          27s
    gloo-telemetry-collector-agent-8jvh2      1/1     Running   0          27s
    gloo-telemetry-collector-agent-x2brj      1/1     Running   0          27s
    gloo-telemetry-gateway-689cb78547-sqqgg   1/1     Running   0          27s
    prometheus-server-946c89d8f-zx5sf         1/2     Running   0          27s
    

Install Istio

With Gloo Mesh Enterprise installed in your environment, you can now install Istio. You can choose between a managed Istio installation, which uses the Gloo Mesh Istio lifecycle manager resource to set up Istio in your cluster, and an unmanaged installation, which uses the Istio Helm chart directly.

The Istio and Gateway lifecycle managers automate the deployment and management of Istio resources across your clusters. Because these resources must be customized to your cluster environment and the Istio version that you want to use, it is good practice to first deploy these resources in your cluster directly before you automate this process with Argo CD.

  1. Create an Istio lifecycle manager resource that installs the Istio control plane istiod.

    kubectl apply -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: IstioLifecycleManager
    metadata:
      name: istiod-control-plane
      namespace: gloo-mesh
      annotations:
        argocd.argoproj.io/sync-wave: "-8"
    spec:
      installations:
        # The revision for this installation, such as 1-20
      - revision: ${REVISION}
        # List all workload clusters to install Istio into
        clusters:
        - name: ${CLUSTER_NAME}
          # If set to true, the spec for this revision is applied in the cluster
          defaultRevision: true
        # When set to true, the lifecycle manager allows you to perform in-place upgrades by skipping checks that are required for canary upgrades
        skipUpgradeValidation: true
        istioOperatorSpec:
          # Only the control plane components are installed
          # (https://istio.io/latest/docs/setup/additional-setup/config-profiles/)
          profile: minimal
          # Repository for the Solo distribution of Istio images
          # You get the repo key from your Solo Account Representative.
          hub: ${REPO}
          # The version of the Solo distribution of Istio
          # Include any tags, such as 1.20.2-solo
          tag: ${ISTIO_IMAGE}
          namespace: istio-system
          # Mesh configuration
          meshConfig:
            # Enable access logging only if needed.
            accessLogFile: /dev/stdout
            # Encoding for the proxy access log (TEXT or JSON). Default value is TEXT.
            accessLogEncoding: JSON
            # Enable span tracing only if needed.
            enableTracing: true
            defaultConfig:
              # Wait for the istio-proxy to start before starting application pods
              holdApplicationUntilProxyStarts: true
              proxyMetadata:
                # Enable Istio agent to handle DNS requests for known hosts
                # Unknown hosts are automatically resolved using upstream DNS servers
                # in resolv.conf (for proxy-dns)
                ISTIO_META_DNS_CAPTURE: "true"
                # Enable automatic address allocation (for proxy-dns)
                ISTIO_META_DNS_AUTO_ALLOCATE: "true"
            # Set the default behavior of the sidecar for handling outbound traffic
            # from the application
            outboundTrafficPolicy:
              mode: ALLOW_ANY
            # The administrative root namespace for Istio configuration
            rootNamespace: istio-system
          # Global mesh and multicluster values
          values:
            global:
              meshID: gloo-mesh
              network: ${CLUSTER_NAME}
              multiCluster:
                clusterName: ${CLUSTER_NAME}
          # Istio components
          components:
            pilot:
              k8s:
                env:
                # Disable selecting workload entries for local service routing.
                # Required for Gloo VirtualDestination functionality.
                - name: PILOT_ENABLE_K8S_SELECT_WORKLOAD_ENTRIES
                  value: "false"
    EOF
    
  2. Verify that the istiod control plane is installed successfully.

    kubectl get pods -n istio-system
    

    Example output:

    NAME                           READY   STATUS    RESTARTS   AGE
    istiod-1-20-54546db668-s7xd6   1/1     Running   0          17s
    
  3. Create the namespace for the Istio ingress gateway that you deploy in a later step.

    kubectl apply -f- <<EOF
    apiVersion: v1
    kind: Namespace
    metadata:
      name: gloo-mesh-gateways
      annotations:
        argocd.argoproj.io/sync-wave: "-10"
    EOF
    
  4. Create the load balancer service that exposes the ingress gateway. Separating the service from the gateway configuration is a good practice so that you manage the service lifecycle separately from the gateway. For example, in canary deployments you can easily switch between versions by updating the revision selector in the service.

    kubectl apply -f- <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: istio-ingressgateway
        istio: ingressgateway
      annotations:
        # The following annotation is for the default AWS in-tree cloud controller
        service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
        # Uncomment the following annotations if you use the AWS Load Balancer controller instead
        #service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
        #service.beta.kubernetes.io/aws-load-balancer-type: "external"
        #service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
        argocd.argoproj.io/sync-wave: "-9"
      name: istio-ingressgateway
      namespace: gloo-mesh-gateways
    spec:
      ports:
      # Port for health checks on path /healthz/ready.
      # For AWS ELBs, this port must be listed first.
      - name: status-port
        port: 15021
        targetPort: 15021
      # Main HTTP ingress port
      - name: http2
        port: 80
        protocol: TCP
        targetPort: 8080
      # Main HTTPS ingress port
      - name: https
        port: 443
        protocol: TCP
        targetPort: 8443
      - name: tls
        port: 15443
        targetPort: 15443
      selector:
        app: istio-ingressgateway
        istio: ingressgateway
        revision: $REVISION
      type: LoadBalancer   
    EOF
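As an example of the canary switch described in the previous step, directing traffic to a new gateway revision is a one-line change to the service's selector. The revision values here are hypothetical:

```yaml
# In the istio-ingressgateway service, update only the revision selector
# to shift traffic from the old gateway deployment to the new canary revision
selector:
  app: istio-ingressgateway
  istio: ingressgateway
  revision: 1-21   # previously 1-20
```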
    
  5. Create a Gateway lifecycle manager resource to deploy the ingress gateway.

    kubectl apply -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: GatewayLifecycleManager
    metadata:
      name: istio-ingressgateway
      namespace: gloo-mesh
      annotations:
        argocd.argoproj.io/sync-wave: "-7"
    spec:
      installations:
        # The revision for this installation, such as 1-20
      - gatewayRevision: ${REVISION}
        # List all workload clusters to install Istio into
        clusters:
        - name: ${CLUSTER_NAME}
          activeGateway: true
        istioOperatorSpec:
          # No control plane components are installed
          profile: empty
          # Repository for the Solo distribution of Istio images
          # You get the repo key from your Solo Account Representative.
          hub: ${REPO}
          # The version of the Solo distribution of Istio
          # Include any tags, such as <major>.<minor>.<patch>-solo
          tag: ${ISTIO_IMAGE}
          values:
            gateways:
              istio-ingressgateway:
                customService: true
          components:
            ingressGateways:
              - name: istio-ingressgateway
                namespace: gloo-mesh-gateways
                enabled: true
                label:
                  istio: ingressgateway
                  app: istio-ingressgateway
    EOF
    
  6. Verify that the ingress gateway is deployed successfully.

    kubectl get pods -n gloo-mesh-gateways 
    

    Example output:

    NAME                                        READY   STATUS    RESTARTS   AGE
    istio-ingressgateway-1-20-bcdd7867b-6pksl   1/1     Running   0          95s
    
  7. Now that you have your Istio and Gateway lifecycle manager resource configurations in place, automate the deployment of these resources with Argo CD.

    1. Upload the YAML files for the Istio and Gateway lifecycle managers, the gateway namespace, and the load balancer service to a GitHub repo. The YAML files already have the argocd.argoproj.io/sync-wave annotations that instruct Argo CD to deploy these resources in order, from the lowest sync wave to the highest.

      Note: When you store your YAML files in a GitHub repo, they cannot contain environment variables, such as $REVISION. Make sure to replace all of the environment variables in all YAML files with their actual values before you upload the files to your GitHub repo.

    2. Get the URL of the GitHub repo where you stored your resources, such as https://github.com/myorg/argo/istio.

    3. Create an Argo CD application that deploys the Istio resources with Argo.

      kubectl apply -f - <<EOF
      apiVersion: argoproj.io/v1alpha1
      kind: Application
      metadata:
        name: istio-lifecyclemanager-deployments
        namespace: argocd
        finalizers:
        - resources-finalizer.argocd.argoproj.io
      spec:
        project: default
        source:
          repoURL: $GH_REPO # Example: https://github.com/myorg/argo
          path: $GH_REPO_PATH # Example: /istio
          targetRevision: HEAD
          directory:
            recurse: true
        destination:
          server: https://kubernetes.default.svc
        syncPolicy:
          automated:
            prune: true
            selfHeal: true
          retry:
            limit: 5
            backoff:
              duration: 5s
              factor: 2
              maxDuration: 3m0s
      EOF
      
    4. Make sure your Argo CD application has a sync status of Synced.

      kubectl get application istio-lifecyclemanager-deployments -n argocd
      

      Example output:

      NAME                                 SYNC STATUS   HEALTH STATUS
      istio-lifecyclemanager-deployments   Synced       Healthy
      

Install Istio by using the Istio Helm chart and Argo CD

  1. Create an Argo CD application to deploy the Istio Helm chart to your cluster.

    kubectl apply -f- <<EOF
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: istio-base
      namespace: argocd
      finalizers:
      - resources-finalizer.argocd.argoproj.io
      annotations:
        argocd.argoproj.io/sync-wave: "-3"
    spec:
      destination:
        server: https://kubernetes.default.svc
        namespace: istio-system
      project: default
      source:
        chart: base
        repoURL: https://istio-release.storage.googleapis.com/charts
        targetRevision: ${ISTIO_VERSION}
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
        - CreateNamespace=true
    EOF
    
  2. Create another application to deploy the Istio control plane istiod.

    kubectl apply -f- <<EOF
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: istiod
      namespace: argocd
      finalizers:
      - resources-finalizer.argocd.argoproj.io
    spec:
      destination:
        server: https://kubernetes.default.svc
        namespace: istio-system
      project: default
      source:
        chart: istiod
        repoURL: https://istio-release.storage.googleapis.com/charts
        targetRevision: ${ISTIO_VERSION}
        helm:
          values: |
            revision: ${REVISION}
            global:
              meshID: mesh1
              multiCluster:
                clusterName: ${CLUSTER_NAME}
              network: network1
              hub: ${REPO}
              tag: ${ISTIO_IMAGE}
            meshConfig:
              trustDomain: ${CLUSTER_NAME}
              accessLogFile: /dev/stdout
              accessLogEncoding: JSON
              enableAutoMtls: true
              defaultConfig:
                # Wait for the istio-proxy to start before starting application pods
                holdApplicationUntilProxyStarts: true
                envoyAccessLogService:
                  address: gloo-mesh-agent.gloo-mesh:9977
                proxyMetadata:
                  ISTIO_META_DNS_CAPTURE: "true"
                  ISTIO_META_DNS_AUTO_ALLOCATE: "true"
              outboundTrafficPolicy:
                mode: ALLOW_ANY
              rootNamespace: istio-system
            pilot:
              env:
                PILOT_ENABLE_K8S_SELECT_WORKLOAD_ENTRIES: "false"
                PILOT_SKIP_VALIDATE_TRUST_DOMAIN: "true"
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
      ignoreDifferences:
      - group: '*'
        kind: '*'
        managedFieldsManagers:
        - argocd-application-controller
    EOF
    
  3. Optional: Create another application to deploy an Istio east-west gateway.

    kubectl apply -f- <<EOF
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: istio-eastwestgateway
      namespace: argocd
      finalizers:
      - resources-finalizer.argocd.argoproj.io
      annotations:
        argocd.argoproj.io/sync-wave: "-1"
    spec:
      destination:
        server: https://kubernetes.default.svc
        namespace: istio-eastwest
      project: default
      source:
        chart: gateway
        repoURL: https://istio-release.storage.googleapis.com/charts
        targetRevision: ${ISTIO_VERSION}
        helm:
          values: |
            # Name allows overriding the release name. Generally this should not be set
            name: "istio-eastwestgateway-${REVISION}"
            # revision declares which revision this gateway is a part of
            revision: "${REVISION}"
            replicaCount: 1
            env:
              # 'sni-dnat' enables AUTO_PASSTHROUGH mode for east-west communication through the gateway.
              # The default value ('standard') does not set up a passthrough cluster.
              # Required for multi-cluster communication and to preserve SNI.
              ISTIO_META_ROUTER_MODE: "sni-dnat"
            service:
              # Type of service. Set to "None" to disable the service entirely
              type: LoadBalancer
              ports:
                # Port for health checks on path /healthz/ready.
                # For AWS ELBs, this port must be listed first.
                - port: 15021
                  targetPort: 15021
                  name: status-port
                # Port for multicluster mTLS passthrough; required for Gloo Mesh east/west routing
                - port: 15443
                  targetPort: 15443
                  # Gloo Mesh looks for this default name 'tls' on a gateway
                  name: tls
                # Port required for VM onboarding
                #- port: 15012
                  #targetPort: 15012
                  # Required for VM onboarding discovery address
                  #name: tls-istiod
              annotations:
                # AWS NLB Annotation
                service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
              loadBalancerIP: ""
              loadBalancerSourceRanges: []
              externalTrafficPolicy: ""
            annotations:
              proxy.istio.io/config: '{ "holdApplicationUntilProxyStarts": true }'
            # Labels to apply to all resources
            labels:
              # Set a unique label for the gateway so that virtual gateways
              # can select this workload.
              app: istio-eastwestgateway-${REVISION}
              istio: eastwestgateway
              revision: ${REVISION}
              # Matches spec.values.global.network in the istiod deployment
              topology.istio.io/network: network1
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
        - CreateNamespace=true
    EOF
    
  4. Optional: Create another application to deploy the Istio ingress gateway.

    kubectl apply -f- <<EOF
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: istio-ingressgateway
      namespace: argocd
      finalizers:
      - resources-finalizer.argocd.argoproj.io
      annotations:
        argocd.argoproj.io/sync-wave: "-1"
    spec:
      destination:
        server: https://kubernetes.default.svc
        namespace: istio-ingress
      project: default
      source:
        chart: gateway
        repoURL: https://istio-release.storage.googleapis.com/charts
        targetRevision: ${ISTIO_VERSION}
        helm:
          values: |
            # Name allows overriding the release name. Generally this should not be set
            name: "istio-ingressgateway-${REVISION}"
            # revision declares which revision this gateway is a part of
            revision: "${REVISION}"
            replicaCount: 1
            service:
              # Type of service. Set to "None" to disable the service entirely
              type: LoadBalancer
              ports:
              - name: http2
                port: 80
                protocol: TCP
                targetPort: 80
              - name: https
                port: 443
                protocol: TCP
                targetPort: 443
              annotations:
                # AWS NLB Annotation
                service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
              loadBalancerIP: ""
              loadBalancerSourceRanges: []
              externalTrafficPolicy: ""
            # Pod environment variables
            env: {}
            annotations:
              proxy.istio.io/config: '{ "holdApplicationUntilProxyStarts": true }'
            # Labels to apply to all resources
            labels:
              istio.io/rev: ${REVISION}
              istio: ingressgateway
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
        - CreateNamespace=true
    EOF
    
  5. Verify that the Istio pods are up and running.

    kubectl get pods -n istio-system
    kubectl get pods -n istio-ingress
    kubectl get pods -n istio-eastwest
    

    Example output:

    NAME                           READY   STATUS    RESTARTS   AGE
    istiod-1-20-64ff8d9c9c-sl62w   1/1     Running   0          72s
    NAME                                         READY   STATUS    RESTARTS   AGE
    istio-ingressgateway-1-20-674cbfc747-bjm64   1/1     Running   0          65s
    NAME                                          READY   STATUS    RESTARTS   AGE
    istio-eastwestgateway-1-20-7895666dc8-hm6gp   1/1     Running   0          29s
    

Congratulations! You successfully used Argo CD to deploy Gloo Mesh Enterprise and Istio in your cluster.

Test the resilience of your setup

Managing deployments with Argo CD allows you to declare the desired state of your components in a version-controlled source of truth, such as Git, and to automatically sync changes to your environments whenever the source of truth changes. This approach significantly reduces the risk of configuration drift between your environments, and it also helps detect discrepancies between the desired state in Git and the actual state in your cluster so that self-healing mechanisms can kick in.

  1. Review the deployments that were created when you installed Gloo Mesh Enterprise with Argo CD.

    kubectl get deployments -n gloo-mesh
    

    Example output:

    NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
    gloo-mesh-agent          1/1     1            1           3h11m
    gloo-mesh-mgmt-server    1/1     1            1           3h11m
    gloo-mesh-redis          1/1     1            1           3h11m
    gloo-mesh-ui             1/1     1            1           3h11m
    gloo-telemetry-gateway   1/1     1            1           3h11m
    prometheus-server        1/1     1            1           3h11m
    
  2. Simulate a chaos scenario where all of your deployments in the gloo-mesh namespace are deleted. Without Argo CD, deleting a deployment permanently deletes all of the pods that the deployment manages. However, when your deployments are monitored and managed by Argo CD, and you enabled the selfHeal: true and prune: true options in your Argo CD application, Argo automatically detects that the actual state of your deployment does not match the desired state in Git, and kicks off its self-healing mechanism.

    kubectl delete deployments --all -n gloo-mesh  
    

    If you use self-signed TLS certificates for the relay connection between the Gloo management server and agent, you must also remove the secrets in the gloo-mesh namespace, because the certificates are automatically rotated during a redeploy or upgrade of the management server and agent. To delete the secrets, run kubectl delete secrets --all -n gloo-mesh.

  3. Verify that Argo CD automatically recreated all of the deployments in the gloo-mesh namespace.

    kubectl get deployments -n gloo-mesh
    

    Example output:

    NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
    gloo-mesh-agent          1/1     1            1           5m
    gloo-mesh-mgmt-server    1/1     1            1           5m
    gloo-mesh-redis          1/1     1            1           5m
    gloo-mesh-ui             1/1     1            1           5m
    gloo-telemetry-gateway   1/1     1            1           5m
    prometheus-server        1/1     1            1           5m
    

Next

Now that you have Gloo Mesh Enterprise and Istio up and running, you can check out the following links to explore the capabilities of Gloo Mesh Enterprise:

Cleanup

You can optionally remove the resources that you created as part of this guide.

kubectl delete applications istiod istio-base istio-ingressgateway istio-eastwestgateway -n argocd
kubectl delete applications gloo-platform-helm gloo-platform-crds -n argocd
kubectl delete applications istio-lifecyclemanager-deployments -n argocd
kubectl delete -k https://github.com/solo-io/gitops-library.git/argocd/deploy/default/
kubectl delete namespace argocd gloo-mesh