Use Argo Continuous Delivery (Argo CD) to automate the deployment and management of Gloo Mesh Enterprise and Istio in your cluster.
Argo CD is a declarative, Kubernetes-native continuous delivery tool that pulls configuration from Git repositories and deploys it to your cluster. Because of that, you can integrate Argo CD into your GitOps pipeline to automate the deployment and synchronization of your apps.
In this guide, you learn how to use Argo CD applications to deploy the following components:
Gloo Platform CRDs
Gloo Mesh Enterprise
Istio control plane istiod
Istio gateways
info
This guide assumes a single cluster setup for Gloo Mesh Enterprise and Istio. If you want to use Argo CD in a multicluster setup, you must configure your applications to deploy resources in either the management or workload clusters.
Create or use an existing Kubernetes or OpenShift cluster, and save the cluster name in an environment variable. Note: The cluster name must be alphanumeric with no special characters except a hyphen (-), lowercase, and begin with a letter (not a number).
export CLUSTER_NAME=<cluster_name>
Save your Gloo Mesh Enterprise license in an environment variable. If you do not have a license key, contact an account representative.
export GLOO_MESH_LICENSE_KEY=<license-key>
Save the Gloo Mesh Enterprise version that you want to install in an environment variable. The latest version is used as an example. You can find other versions in the Changelog documentation. To use a FIPS-compliant image, append -fips to the version, such as 2.4.16-fips. Do not include a v before the version number.
export GLOO_MESH_VERSION=2.4.16
Review Supported versions to choose the Solo distribution of Istio that you want to use, and save the version information in the following environment variables.
REPO: The repo key for the Solo distribution of Istio. You can get this key by logging in to the Support Center and reviewing the "Istio images built by Solo.io" support article.
ISTIO_IMAGE: The version that you want to use with the solo tag, such as 1.18.7-patch3-solo. You can optionally append other tags of Solo distributions of Istio as needed.
REVISION: Take the Istio major and minor versions and replace the periods with hyphens, such as 1-18.
info
For testing environments only, you can deploy a revisionless installation. Revisionless installations permit in-place upgrades, which are quicker than the canary-based upgrades that revisioned installations require. To omit a revision, do not set a revision environment variable. Then in the following sections, you edit the sample IstioLifecycleManager and GatewayLifecycleManager files that you download to remove the revision and gatewayRevision fields. Note that if you deploy multiple Istio installations in the same cluster, only one installation can be revisionless.
ISTIO_VERSION: The version of Istio that you want to install, such as 1.18.7-patch3.
For FIPS-compliant Solo distributions of Istio 1.17.2 and 1.16.4, you must use the -patch1 versions of the latest Istio builds published by Solo, such as 1.17.2-patch1-solo-fips for Solo distribution of Istio 1.17. These patch versions fix a FIPS-related issue introduced in the upstream Envoy code. In 1.17.3 and later, FIPS compliance is available in the -fips tags of regular Solo distributions of Istio, such as 1.17.3-solo-fips.
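Together, these variables might be set as follows. The repo key is a placeholder that you replace with the value from the Support Center, and the patch versions shown are examples only.

```shell
# Example values only: replace the repo key and versions with your own.
export REPO="<repo-key>"                # from the Support Center article
export ISTIO_IMAGE=1.18.7-patch3-solo   # image tag with the -solo suffix
export REVISION=1-18                    # major and minor version, hyphenated
export ISTIO_VERSION=1.18.7-patch3      # plain Istio version, no -solo suffix
```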
Create another application to install the Gloo Mesh Enterprise Helm chart. The following application prepopulates a set of Helm values to install Gloo Mesh Enterprise components, and enable the Gloo telemetry pipeline and the built-in Prometheus server. To customize these settings, see the Helm reference.
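A sketch of such an application follows. The chart name, repo URL, and Helm values here are assumptions based on a typical single-cluster Gloo Platform setup; verify them against the Helm reference for your version before you use them.

```yaml
# Sketch only: confirm the chart name, repoURL, and values keys
# against the Gloo Mesh Enterprise Helm reference for your version.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gloo-platform-helm
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: gloo-mesh
  project: default
  source:
    chart: gloo-platform
    repoURL: https://storage.googleapis.com/gloo-platform/helm-charts
    targetRevision: ${GLOO_MESH_VERSION}
    helm:
      # CRDs are installed by a separate application
      skipCrds: true
      values: |
        licensing:
          licenseKey: ${GLOO_MESH_LICENSE_KEY}
        common:
          cluster: ${CLUSTER_NAME}
        glooMgmtServer:
          enabled: true
        glooUi:
          enabled: true
        glooAgent:
          enabled: true
          relay:
            serverAddress: gloo-mesh-mgmt-server:9900
        # Gloo telemetry pipeline and built-in Prometheus server
        telemetryCollector:
          enabled: true
        prometheus:
          enabled: true
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
```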
With Gloo Mesh Enterprise installed in your environment, you can now install Istio. You can choose between a managed installation, which uses the Gloo Mesh Istio lifecycle manager resources to set up Istio in your cluster, and an unmanaged installation, which uses the Istio Helm chart directly.
The Istio and Gateway lifecycle managers automate the deployment and management of Istio resources across your clusters. Because these resources must be customized to your cluster environment and the Istio version that you want to use, it is good practice to first deploy these resources in your cluster directly before you automate this process with Argo CD.
Create an Istio lifecycle manager resource that installs the Istio control plane istiod.
kubectl apply -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: IstioLifecycleManager
metadata:
  name: istiod-control-plane
  namespace: gloo-mesh
  annotations:
    argocd.argoproj.io/sync-wave: "-8"
spec:
  installations:
    # The revision for this installation, such as 1-18
  - revision: ${REVISION}
    # List all workload clusters to install Istio into
    clusters:
    - name: ${CLUSTER_NAME}
      # If set to true, the spec for this revision is applied in the cluster
      defaultRevision: true
    # When set to true, the lifecycle manager allows you to perform in-place
    # upgrades by skipping checks that are required for canary upgrades
    skipUpgradeValidation: true
    istioOperatorSpec:
      # Only the control plane components are installed
      # (https://istio.io/latest/docs/setup/additional-setup/config-profiles/)
      profile: minimal
      # Repository for the Solo distribution of Istio images
      # You get the repo key from your Solo Account Representative.
      hub: ${REPO}
      # The version of the Solo distribution of Istio
      # Include any tags, such as 1.18.7-patch3-solo
      tag: ${ISTIO_IMAGE}
      namespace: istio-system
      # Mesh configuration
      meshConfig:
        # Enable access logging only if you use it
        accessLogFile: /dev/stdout
        # Encoding for the proxy access log (TEXT or JSON). The default is TEXT.
        accessLogEncoding: JSON
        # Enable span tracing only if you use it
        enableTracing: true
        defaultConfig:
          # Wait for the istio-proxy to start before starting application pods
          holdApplicationUntilProxyStarts: true
          proxyMetadata:
            # Enable the Istio agent to handle DNS requests for known hosts
            # Unknown hosts are automatically resolved using upstream DNS servers
            # in resolv.conf (for proxy-dns)
            ISTIO_META_DNS_CAPTURE: "true"
            # Enable automatic address allocation (for proxy-dns)
            ISTIO_META_DNS_AUTO_ALLOCATE: "true"
        # Set the default behavior of the sidecar for handling outbound traffic
        # from the application
        outboundTrafficPolicy:
          mode: ALLOW_ANY
        # The administrative root namespace for Istio configuration
        rootNamespace: istio-system
      # Traffic management
      values:
        global:
          meshID: gloo-mesh
          network: ${CLUSTER_NAME}
          multiCluster:
            clusterName: ${CLUSTER_NAME}
      # Traffic management
      components:
        pilot:
          k8s:
            env:
            # Disable selecting workload entries for local service routing.
            # Required for Gloo VirtualDestination functionality.
            - name: PILOT_ENABLE_K8S_SELECT_WORKLOAD_ENTRIES
              value: "false"
EOF
Verify that the istiod pods have a status of Running.
kubectl get pods -n istio-system
Example output:
NAME                          READY   STATUS    RESTARTS   AGE
istiod-1-18-b65676555-g2vmr   1/1     Running   0          47s
Create the namespace for the Istio ingress gateway that you deploy in a later step.
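Because you later upload these files to a Git repo for Argo CD, you can define the namespace as a manifest. The sync-wave value of "-10" is an assumption that simply orders the namespace before the load balancer service ("-9") and the gateway ("-7") that are created in it.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: gloo-mesh-gateways
  annotations:
    # Assumed sync wave: create the namespace before the service ("-9")
    # and the gateway ("-7") that are deployed into it.
    argocd.argoproj.io/sync-wave: "-10"
```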
Create the load balancer service that exposes the ingress gateway. Separating the service from the gateway configuration is a good practice so that you manage the service lifecycle separately from the gateway. For example, in canary deployments you can easily switch between versions by updating the revision selector in the service.
kubectl apply -f- <<EOF
apiVersion: v1
kind: Service
metadata:
  labels:
    app: istio-ingressgateway
    istio: ingressgateway
  annotations:
    # Uncomment if using the default AWS Cloud in-tree controller
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # Uncomment if using the default AWS LB controller
    #service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    #service.beta.kubernetes.io/aws-load-balancer-type: "external"
    #service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    argocd.argoproj.io/sync-wave: "-9"
  name: istio-ingressgateway
  namespace: gloo-mesh-gateways
spec:
  ports:
  # Port for health checks on path /healthz/ready.
  # For AWS ELBs, this port must be listed first.
  - name: status-port
    port: 15021
    targetPort: 15021
  # Main HTTP ingress port
  - name: http2
    port: 80
    protocol: TCP
    targetPort: 8080
  # Main HTTPS ingress port
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  - name: tls
    port: 15443
    targetPort: 15443
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
    revision: $REVISION
  type: LoadBalancer
EOF
Create a Gateway lifecycle manager resource to deploy the ingress gateway.
kubectl apply -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: GatewayLifecycleManager
metadata:
  name: istio-ingressgateway
  namespace: gloo-mesh
  annotations:
    argocd.argoproj.io/sync-wave: "-7"
spec:
  installations:
    # The revision for this installation, such as 1-18
  - gatewayRevision: ${REVISION}
    # List all workload clusters to install Istio into
    clusters:
    - name: ${CLUSTER_NAME}
      activeGateway: true
    istioOperatorSpec:
      # No control plane components are installed
      profile: empty
      # Repository for the Solo distribution of Istio images
      # You get the repo key from your Solo Account Representative.
      hub: ${REPO}
      # The version of the Solo distribution of Istio
      # Include any tags, such as <major>.<minor>.<patch>-solo
      tag: ${ISTIO_IMAGE}
      values:
        gateways:
          istio-ingressgateway:
            customService: true
      components:
        ingressGateways:
        - name: istio-ingressgateway
          namespace: gloo-mesh-gateways
          enabled: true
          label:
            istio: ingressgateway
            app: istio-ingressgateway
EOF
Verify that the ingress gateway is deployed successfully.
kubectl get pods -n gloo-mesh-gateways
Example output:
NAME                                        READY   STATUS    RESTARTS   AGE
istio-ingressgateway-1-18-bcdd7867b-6pksl   1/1     Running   0          95s
Now that you have your Istio and Gateway lifecycle manager resource configurations in place, automate the deployment of these resources with Argo CD.
Upload the YAML files for the Istio and Gateway lifecycle managers, the gateway namespace, and the load balancer service to a GitHub repo. The YAML files already have the argocd.argoproj.io/sync-wave annotation that instructs Argo CD to deploy these resources in order from least to greatest. Note: When you store your YAML files in a GitHub repo, they cannot contain environment variables, such as $REVISION. Make sure that you replace all of the environment variables in all YAML files with their actual values before you upload the files to your GitHub repo.
Get the URL of the GitHub repo where you stored your resources, such as https://github.com/myorg/argo/istio.
Create an Argo CD application that deploys the Istio resources with Argo.
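A minimal sketch of this application follows. The repo URL, path, and application name are placeholders; point them at the repo and directory where you uploaded the lifecycle manager, namespace, and service YAML files.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  # Placeholder name for illustration
  name: istio-lifecycle-managers
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: gloo-mesh
  project: default
  source:
    # Placeholder repo URL and path: use your own repo and directory
    repoURL: https://github.com/myorg/argo
    path: istio
    targetRevision: HEAD
    directory:
      recurse: true
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
```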
Optional: Create another application to deploy an Istio east-west gateway.
kubectl apply -f- <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: istio-eastwestgateway
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
  annotations:
    argocd.argoproj.io/sync-wave: "-1"
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: istio-eastwest
  project: default
  source:
    chart: gateway
    repoURL: https://istio-release.storage.googleapis.com/charts
    targetRevision: ${ISTIO_VERSION}
    helm:
      values: |
        # Name allows overriding the release name. Generally this should not be set
        name: "istio-eastwestgateway-${REVISION}"
        # revision declares which revision this gateway is a part of
        revision: "${REVISION}"
        replicaCount: 1
        service:
          # Type of service. Set to "None" to disable the service entirely
          type: LoadBalancer
          ports:
          # Port for health checks on path /healthz/ready.
          # For AWS ELBs, this port must be listed first.
          - port: 15021
            targetPort: 15021
            name: status-port
          # Port for multicluster mTLS passthrough; required for Gloo Mesh east/west routing
          - port: 15443
            targetPort: 15443
            # Gloo Mesh looks for this default name 'tls' on a gateway
            name: tls
          # Port required for VM onboarding
          #- port: 15012
          #  targetPort: 15012
          #  # Required for VM onboarding discovery address
          #  name: tls-istiod
          annotations:
            # AWS NLB Annotation
            service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
          loadBalancerIP: ""
          loadBalancerSourceRanges: []
          externalTrafficPolicy: ""
        # Pod environment variables
        env: {}
        annotations:
          proxy.istio.io/config: '{ "holdApplicationUntilProxyStarts": true }'
        # Labels to apply to all resources
        labels:
          # Set a unique label for the gateway so that virtual gateways
          # can select this workload.
          app: istio-eastwestgateway-${REVISION}
          istio: eastwestgateway
          revision: ${REVISION}
          # Matches spec.values.global.network in the istiod deployment
          topology.istio.io/network: ${CLUSTER_NAME}
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
EOF
Optional: Create another application to deploy the Istio ingress gateway.
kubectl apply -f- <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: istio-ingressgateway
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
  annotations:
    argocd.argoproj.io/sync-wave: "-1"
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: istio-ingress
  project: default
  source:
    chart: gateway
    repoURL: https://istio-release.storage.googleapis.com/charts
    targetRevision: ${ISTIO_VERSION}
    helm:
      values: |
        # Name allows overriding the release name. Generally this should not be set
        name: "istio-ingressgateway-${REVISION}"
        # revision declares which revision this gateway is a part of
        revision: "${REVISION}"
        replicaCount: 1
        service:
          # Type of service. Set to "None" to disable the service entirely
          type: LoadBalancer
          ports:
          - name: http2
            port: 80
            protocol: TCP
            targetPort: 80
          - name: https
            port: 443
            protocol: TCP
            targetPort: 443
          annotations:
            # AWS NLB Annotation
            service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
          loadBalancerIP: ""
          loadBalancerSourceRanges: []
          externalTrafficPolicy: ""
        # Pod environment variables
        env: {}
        annotations:
          proxy.istio.io/config: '{ "holdApplicationUntilProxyStarts": true }'
        # Labels to apply to all resources
        labels:
          istio.io/rev: ${REVISION}
          istio: ingressgateway
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
EOF
Verify that the Istio pods are up and running.
kubectl get pods -n istio-system
kubectl get pods -n istio-ingress
kubectl get pods -n istio-eastwest
Example output:
NAME                           READY   STATUS    RESTARTS   AGE
istiod-1-18-64ff8d9c9c-sl62w   1/1     Running   0          72s
NAME                                         READY   STATUS    RESTARTS   AGE
istio-ingressgateway-1-18-674cbfc747-bjm64   1/1     Running   0          65s
NAME                                          READY   STATUS    RESTARTS   AGE
istio-eastwestgateway-1-18-7895666dc8-hm6gp   1/1     Running   0          29s
Congratulations! You successfully used Argo CD to deploy Gloo Mesh Enterprise and Istio in your cluster.
Managing deployments with Argo CD allows you to declare the desired state of your components in a version-controlled source of truth, such as Git, and to automatically sync changes to your environments whenever the source of truth changes. This approach not only significantly reduces the risk of configuration drift between your environments, but also helps to detect discrepancies between the desired state in Git and the actual state in your cluster so that self-healing mechanisms can kick in.
Review the deployments that were created when you installed Gloo Mesh Enterprise with Argo CD.
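For example, you can list the deployments with the following command, which requires access to the cluster where Gloo Mesh Enterprise is installed.

```shell
# List the Gloo Mesh Enterprise deployments that Argo CD created.
kubectl get deployments -n gloo-mesh
```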
Simulate a chaos scenario where all of your deployments in the gloo-mesh namespace are deleted. Without Argo CD, deleting a deployment permanently deletes all of the pods that the deployment manages. However, when your deployments are monitored and managed by Argo CD, and you enabled the selfHeal: true and prune: true options in your Argo CD application, Argo automatically detects that the actual state of your deployment does not match the desired state in Git, and kicks off its self-healing mechanism.
kubectl delete deployments --all -n gloo-mesh
info
If you use self-signed TLS certificates for the relay connection between the Gloo management server and agent, you must also remove the secrets in the gloo-mesh namespace, because the certificates are automatically rotated during a redeploy or upgrade of the management server and agent. To delete the secrets, run kubectl delete secrets --all -n gloo-mesh.
Verify that Argo CD automatically recreated all of the deployments in the gloo-mesh namespace.
Now that you have Gloo Mesh Enterprise and Istio up and running, check out some of the following resources to learn more about Gloo Mesh and expand your service mesh capabilities.
Gloo Mesh Enterprise:
Apply Gloo policies to manage the security and resiliency of your service mesh environment.
When it’s time to upgrade Gloo Mesh Enterprise, see the upgrade guide.
Istio: Now that you have Gloo Mesh Enterprise and Istio installed, you can use Gloo to manage your Istio service mesh resources. You don’t need to directly configure any Istio resources going forward.