Migrate to the gloo-platform Helm chart

Migrate from the legacy Helm charts to the improved gloo-platform Helm chart.

In Gloo Gateway 2.3 and later, the gloo-mesh-enterprise, gloo-mesh-agent, and other included Helm charts are considered legacy. If you installed Gloo Gateway by using these legacy Helm charts, or if you used meshctl version 2.2 or earlier to install Gloo Gateway, you can migrate your existing installation to the new gloo-platform Helm chart by using the meshctl migrate helm command.

gloo-platform chart overview

In Gloo Platform version 2.3 and later, all Gloo Platform components are available in a single Helm chart, gloo-platform. Additionally, the custom resource definitions (CRDs) that are required by Gloo Platform controllers are maintained by the gloo-platform-crds Helm chart.

When you migrate to the gloo-platform chart, you unlock key benefits, such as managing the configuration for all Gloo Platform components in a single Helm release.

Within the gloo-platform chart, you can find the configuration options for all components in the following sections. To review these sections before you migrate, you can run helm show values gloo-platform/gloo-platform --version v2.3.22.

Component section: Description
common: Common values shared across components. When applicable, these can be overridden in specific components.
demo: Demo-specific features that improve quick setups. Do not use in production.
experimental: Experimental features for Gloo Platform. Disabled by default. Do not use in production.
extAuthService: Configuration for the Gloo external authentication service.
glooAgent: Configuration for the Gloo agent.
glooMgmtServer: Configuration for the Gloo management server.
glooPortalServer: Configuration for the Gloo Platform Portal server deployment.
redis: Configuration for the default Redis instance.
glooUi: Configuration for the Gloo UI.
glooNetwork: Gloo Network configuration options.
istioInstallations: Configuration for deploying managed Istio control plane and gateway installations by using the Istio lifecycle manager.
legacyMetricsPipeline: Deprecated: Configuration for the legacy metrics pipeline, which uses Gloo agents to propagate metrics to the management server. This pipeline is deprecated and is planned to be unsupported in Gloo Gateway version 2.4. Use the telemetryCollector and telemetryGateway options instead.
licensing: Gloo Platform product licenses.
telemetryCollector: Configuration for the Gloo Platform Telemetry Collector. See the OpenTelemetry Helm chart for the complete set of values.
telemetryCollectorCustomization: Optional customization for the Gloo Platform Telemetry Collector.
telemetryGateway: Configuration for the Gloo Platform Telemetry Gateway. See the OpenTelemetry Helm chart for the complete set of values.
telemetryGatewayCustomization: Optional customization for the Gloo Platform Telemetry Gateway.
prometheus: Helm values for configuring Prometheus. See the Prometheus Helm chart for the complete set of values.
rateLimiter: Configuration for the Gloo rate limiting service.
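To illustrate how these sections fit together, the following minimal values file sketch enables a few common components for a single-cluster setup. The specific field names shown within each section, such as the license key name, are illustrative assumptions; verify them against the output of helm show values for your chart version.

```yaml
# Illustrative gloo-platform values sketch (single cluster).
# Field names inside each section are assumptions to verify against
# `helm show values gloo-platform/gloo-platform --version v2.3.22`.
licensing:
  glooGatewayLicenseKey: ${GLOO_GATEWAY_LICENSE_KEY}   # placeholder
common:
  cluster: ${CLUSTER_NAME}   # placeholder cluster name
glooMgmtServer:
  enabled: true
glooAgent:
  enabled: true
glooUi:
  enabled: true
telemetryCollector:
  enabled: true
```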

Migration process

To migrate your existing Gloo Gateway installation to the new Helm chart, you can use the meshctl migrate helm command. This command generates a Helm install command that includes a values file for the new gloo-platform-crds chart, and one or more Helm upgrade commands that include values files for the new gloo-platform chart, based on the values of your existing Helm releases.

For example, you might have a single-cluster Gloo Gateway setup. The management server, agent, and gateway proxy components are maintained by the legacy gloo-mesh-enterprise Helm chart, with the release named gloo-mgmt. When you run meshctl migrate helm, the command generates the Helm commands and values files for this setup based on your existing gloo-mgmt release.

After you run the generated commands, you can verify that your Gloo Gateway installation is running as expected. If you need to make changes to the environment, you can roll back the Helm upgrade and make adjustments.

Before you begin

The gloo-platform Helm chart is available in Gloo version 2.3 and later only. If your legacy setup runs version 2.2 or earlier, you must upgrade to version 2.3 as part of this migration. Prepare for the migration in the same way that you might prepare for a typical Gloo Gateway version update.

  1. Review the changelog for the version that you want to upgrade to. Focus especially on any Breaking Changes that might require a different upgrade procedure.

    • AWS Lambda integration: In 2.3, the Gloo custom resources for the AWS Lambda integration are changed in the following breaking ways. If you used the AWS Lambda integration in 2.2, create copies of your existing CloudProvider, CloudResources, and any Lambda RouteTable resources and update the copies to the new format, which you apply during step 4 of the migration. To see the new format, check out the Lambda integration documentation.
      • The CloudProvider and CloudResources CRs are moved from the networking.gloo.solo.io/v2 API group to the infrastructure.gloo.solo.io/v2 API group. You must delete the old cloudproviders.networking.gloo.solo.io and cloudresources.networking.gloo.solo.io CRDs before you upgrade your installation.
      • The logicalName field is removed from the CloudResources CR.
      • Routes for Lambda functions in RouteTable CRs now use a new destination type, awsLambda.
      • The WRAP_AS_API_GATEWAY, UNWRAP_AS_API_GATEWAY, and UNWRAP_AS_ALB transformation settings are removed from the RouteTable CR, as the transformation functionality from these settings is now included in the REQUEST_DEFAULT and RESPONSE_DEFAULT transformations.
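      As a sketch of the API group change, the resource header of your copied CloudProvider changes as follows. The metadata values are placeholders for your own resources:

```yaml
# Outdated format (2.2 and earlier): this API group is removed.
apiVersion: networking.gloo.solo.io/v2
kind: CloudProvider
metadata:
  name: my-provider      # placeholder
  namespace: gloo-mesh   # placeholder
---
# New format (2.3 and later): re-create your copies in this API group.
apiVersion: infrastructure.gloo.solo.io/v2
kind: CloudProvider
metadata:
  name: my-provider
  namespace: gloo-mesh
```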
  2. Check that your underlying Kubernetes platform runs a supported version for the Gloo version.

    1. Review the supported versions.
    2. Compare the supported version against the version of Kubernetes that you run in your clusters.
    3. If necessary, upgrade Kubernetes. Consult your cluster infrastructure provider.

Migrate to the gloo-platform chart

  1. Upgrade the meshctl CLI to the version that you want to upgrade to during the migration. For example, if you currently use version 2.2, upgrade your CLI to 2.3.22, the latest patch version of 2.3. During the migration, your Gloo Gateway installation is upgraded to this version.

    curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.3.22 sh -
    export PATH=$HOME/.gloo-mesh/bin:$PATH
    
  2. Run the migration command.

    meshctl migrate helm
    
  3. From the command output, copy and paste the following generated commands:

    • helm repo add gloo-platform ...: Run this command to add the Helm repository for Gloo Platform.
    • helm upgrade --install gloo-platform-crds ...: Run this command to create the gloo-platform-crds Helm release. This release applies the Gloo Platform CRDs for the Gloo version you are upgrading to. In future upgrades, you apply CRDs for target versions by upgrading this Helm release.
    • helm upgrade --install gloo-agent-addons ...: If the migration generated this command, run it to upgrade your add-ons release. In the future, you continue to upgrade this Helm release separately from your main installation release.
    • Do not yet run the upgrade command for your main installation release, such as the helm upgrade --install gloo-mesh-enterprise ... or gloo-mgmt command.
  4. AWS Lambda integration: If you created copies of your existing Gloo Lambda-related CRs and updated them to the new format:

    1. Get the existing CloudProvider and CloudResources resources in the outdated format. Do not get any Lambda route tables, because the API group for route tables did not change, and your route table updates are applied in-place.
      kubectl get CloudProvider -A
      kubectl get CloudResources -A
      
    2. Delete the outdated CloudProvider and CloudResources resources.
      kubectl delete CloudProvider <name> -n <namespace>
      kubectl delete CloudResources <name> -n <namespace>
      
    3. Apply your copies of the CloudProvider, CloudResources, and Lambda RouteTable resources in the updated format to your cluster.
    4. Verify that the updated resources are applied.
      kubectl get CloudProvider -A
      kubectl get CloudResources -A
      kubectl get RouteTable -A
      
  5. Remove CRDs that are deprecated in 2.3.

    kubectl delete crd apischemas.apimanagement.gloo.solo.io
    kubectl delete crd cloudproviders.networking.gloo.solo.io
    kubectl delete crd cloudresources.networking.gloo.solo.io
    
  6. Optional for Istio and gateway lifecycle management: If you manually created IstioLifecycleManager and GatewayLifecycleManager CRs to deploy and manage your gateway proxies, you can either continue to manage your gateway proxies in the CRs separately from your Helm release, or move your gateway proxy management into your Helm release.

    To manage your gateways by using the CRs, no changes to the migration process are required. In the future, you continue to upgrade your gateway proxies through CRs separately from your main Helm installation.

    To move gateway proxy management into your main Helm installation release, you perform a canary deployment by using your main installation Helm chart. Including gateway management in your Helm release simplifies future upgrades.

    1. Edit the generated values file for the main installation, such as by running open gloo-mgmt-gloo-mesh-2.3.22-values.yaml.

    2. Add the following istioInstallations section to the values file. You'll add the configuration for each installation in subsequent steps.

      istioInstallations:
          enabled: true
          controlPlane:
              enabled: true
              installations:
                  ...
          eastWestGateways: null
          northSouthGateways:
              - enabled: true
                name: istio-ingressgateway
                installations:
                    ...
      
    3. Get the current configuration in your IstioLifecycleManager and GatewayLifecycleManager CRs.

      • Control plane:
        kubectl get IstioLifecycleManager -n gloo-mesh istiod-control-plane -o yaml
        
      • Ingress gateway:
        kubectl get GatewayLifecycleManager -n gloo-mesh istio-ingressgateway -o yaml
        
      • East-west gateway (multicluster only):
        kubectl get GatewayLifecycleManager -n gloo-mesh istio-eastwestgateway -o yaml
        
    4. For each CR, copy all configuration after the installations field, and paste it into the corresponding istioInstallations.controlPlane.installations and istioInstallations.northSouthGateways.installations sections of your values file. If you have a multicluster setup, add your configuration to the eastWestGateways section too.

    5. Make the following changes to the values file.

      • Change the value of revision and gatewayRevision to a different value than your existing revision. For example, you might use ${REVISION}-MIGRATION.
      • Set defaultRevision and activeGateway to false. When you use this values file in the Helm upgrade for your Gloo Gateway installation, this setting ensures that only the Istio control plane and gateway revision that is managed by your existing CRs continues to run.

        Example configuration, which uses placeholder values that you replace with your own details:
      istioInstallations:
          enabled: true
          controlPlane:
              enabled: true
              installations:
                    # Different revision than your current installations
                  - revision: ${REVISION}-MIGRATION
                    clusters:
                      - name: $CLUSTER_NAME
                        # Set to false for this canary installation
                        defaultRevision: false
                    istioOperatorSpec:
                      profile: minimal
                      hub: $REPO
                      tag: $ISTIO_IMAGE
                      namespace: istio-system
                      ...
          eastWestGateways: null
          northSouthGateways:
              - enabled: true
                name: istio-ingressgateway
                installations:
                    # Different revision than your current installations
                  - gatewayRevision: ${REVISION}-MIGRATION
                    clusters:
                      - name: $CLUSTER_NAME
                        # Set to false for this canary installation
                        activeGateway: false
                    istioOperatorSpec:
                      profile: empty
                      hub: $REPO
                      tag: $ISTIO_IMAGE
                      namespace: gloo-mesh-gateways
                      ...
      
    6. Save and close the file.

    7. Proceed to the next step to migrate your main installation, in which you use your updated values file.

    8. After you complete all of the steps in this migration guide, continue with step 2 in the gateway proxy upgrade guide to verify your canary Helm-managed revision, set the canary revision to the default version, and uninstall your old revision by deleting the CRs.

  7. Migrate your Gloo Gateway values to the gloo-platform chart by upgrading your existing release.

    1. Check out the contents of the generated values file for the main installation, such as by running cat gloo-mgmt-gloo-mesh-2.3.22-values.yaml.

    2. Copy and paste the generated helm upgrade --install gloo-mgmt ... command to map the values from the legacy chart fields to the new fields in the gloo-platform chart. Your release might have a different name.

    3. Verify that the Helm release now lists gloo-platform-2.3.22 as the base Helm chart.

      helm list -A
      

      Example output:

      NAME          NAMESPACE      REVISION	    UPDATED                               	STATUS      CHART                     	APP VERSION
      gloo-mgmt     gloo-mesh      1       	    2023-02-08 13:25:57.937179 -0500 -0500	deployed    gloo-platform-2.3.22  
      
    4. Verify your Gloo Gateway components are running.

      meshctl check
      

      Example output:

      ...
      🟢 Gloo deployment status
      
      Namespace        | Name                  | Ready | Status 
      gloo-mesh        | gloo-mesh-agent       | 1/1   | Healthy
      gloo-mesh        | gloo-mesh-mgmt-server | 1/1   | Healthy
      gloo-mesh        | gloo-mesh-redis       | 1/1   | Healthy
      gloo-mesh        | gloo-mesh-ui          | 1/1   | Healthy
      gloo-mesh        | prometheus-server     | 1/1   | Healthy
      gloo-mesh-addons | ext-auth-service      | 1/1   | Healthy
      gloo-mesh-addons | rate-limiter          | 1/1   | Healthy
      gloo-mesh-addons | redis                 | 1/1   | Healthy
      
      🟢 Mgmt server connectivity to workload agents
      
      Cluster  | Registered | Connected Pod                                   
      cluster1 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
      
  8. Multicluster setups only: Migrate each workload cluster.

    1. Switch to the context for a workload cluster. Remember to change the context for each workload cluster that you upgrade.
      kubectl config use-context $REMOTE_CONTEXT
      
    2. Repeat steps 2 - 6 for each workload cluster.
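    For reference, the generated values file for a workload cluster typically enables the agent and disables the management server. The following sketch uses placeholder values, and the relay field names are assumptions to verify against your generated file:

```yaml
# Illustrative gloo-platform values sketch for a workload cluster.
# All values shown are placeholders for your own details.
common:
  cluster: ${REMOTE_CLUSTER_NAME}   # placeholder
glooMgmtServer:
  enabled: false
glooAgent:
  enabled: true
  relay:
    # Address of the management server's relay endpoint; 9900 is the
    # default relay port. Verify against your generated values file.
    serverAddress: ${MGMT_SERVER_ADDRESS}:9900
```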
  9. Important metrics updates: Depending on whether you run the Gloo OpenTelemetry (OTel) pipeline or legacy metrics pipeline, you might need to make changes to your installation.

    If you set up the Gloo OpenTelemetry (OTel) pipeline prior to migration, your installation continues to use the OTel pipeline after the migration process.

    However, due to a service name change for telemetry components, you must update the telemetry gateway IP address that the collector agents send metrics to.

    1. Save the new telemetry gateway IP address in an environment variable. In multicluster setups, run these commands in your management cluster.
      export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
      export TELEMETRY_GATEWAY_PORT=$(kubectl -n gloo-mesh get service gloo-telemetry-gateway -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
      export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
      echo $TELEMETRY_GATEWAY_ADDRESS
      
    2. Repeat the upgrade command that you used in step 6, and include the --set telemetryCollector.config.exporters.otlp.endpoint=$TELEMETRY_GATEWAY_ADDRESS flag in the command. In multicluster setups, you repeat the upgrade command, including the flag, for each workload cluster.
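    If you prefer to persist this setting in your values file instead of passing the --set flag on every upgrade, the equivalent values entry looks like the following sketch, which assumes the exporter name otlp from the flag above:

```yaml
# Equivalent of:
#   --set telemetryCollector.config.exporters.otlp.endpoint=$TELEMETRY_GATEWAY_ADDRESS
telemetryCollector:
  enabled: true
  config:
    exporters:
      otlp:
        endpoint: ${TELEMETRY_GATEWAY_ADDRESS}   # value from the previous step
```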

    If you did not previously set up the Gloo OpenTelemetry (OTel) pipeline and instead use the legacy metrics pipeline, your installation continues to use the legacy metrics pipeline after the migration process.

    However, the legacy pipeline is deprecated and is planned to be removed in Gloo Gateway version 2.4. Additionally, Istio version 1.17 does not support the legacy metrics pipeline. Before you upgrade or deploy gateway proxies with Istio 1.17, be sure to set up the Gloo OpenTelemetry (OTel) pipeline instead. For a highly available and scalable telemetry solution that is decoupled from the Gloo agent and management server core functionality, migrate to the Gloo OpenTelemetry pipeline.