Single cluster

  1. Get your current installation Helm values, and save them in a file.

      helm get values gloo-platform -n gloo-mesh -o yaml > gloo-single.yaml
      open gloo-single.yaml
      
  2. Add the following configuration to your Helm values file to enable logs for Solo Enterprise for Istio components. Enabling the management server is required to gather logs for the Solo Enterprise for Istio components, but the management server does not affect other aspects of your setup.

      
      glooMgmtServer:
        enabled: true
      telemetryCollectorCustomization:
        pipelines:
          logs/ui:
            enabled: true
      
  3. Upgrade your installation by using your updated values file.

      helm upgrade gloo-platform gloo-platform/gloo-platform \
        --namespace gloo-mesh \
        --values gloo-single.yaml \
        --version ${MGMT_VERSION}
      
  4. Verify that your settings were added to the telemetry collector configmap.

      kubectl get configmap gloo-telemetry-collector-config -n gloo-mesh -o yaml
      
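      To narrow the output, you can pipe the configmap through grep. A quick sketch, assuming the `logs/ui` key appears verbatim in the rendered configuration:

      ```shell
      # Show only the logs/ui pipeline entry (plus two lines of context)
      # from the rendered telemetry collector configuration.
      kubectl get configmap gloo-telemetry-collector-config -n gloo-mesh -o yaml \
        | grep -A 2 'logs/ui'
      ```

      If the command returns no output, the Helm upgrade did not apply your pipeline settings.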
  5. Perform a rollout restart of the telemetry collector daemon set to force your configmap changes to be applied to the telemetry collector agent pods.

      kubectl rollout restart -n gloo-mesh daemonset/gloo-telemetry-collector-agent
      
  6. Open the Gloo UI. The Gloo UI is served from the gloo-mesh-ui service on port 8090. You can connect by using the meshctl or kubectl CLIs.

    • meshctl: For more information, see the CLI documentation.
        meshctl dashboard
        
    • kubectl:
      1. Port-forward the gloo-mesh-ui service on 8090.
          kubectl port-forward -n gloo-mesh svc/gloo-mesh-ui 8090:8090
          
      2. Open your browser and connect to http://localhost:8090.
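      Before opening the browser, you can optionally confirm that the port-forward is serving. A small sketch; the exact HTTP status code returned for the root path is an assumption and can vary:

      ```shell
      # Probe the forwarded port and print only the HTTP status code.
      curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8090
      ```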
  7. In the navigation pane, click Logs. Select the cluster, component, pod, and container for which you want to see the logs.

Multicluster

  1. Get the Helm values files for your current version.

    1. Get your current values for the management plane.
        helm get values gloo-platform -n gloo-mesh -o yaml --kube-context ${context1} > mgmt-plane.yaml
        open mgmt-plane.yaml
        
    2. Get your current values for the data plane.
        helm get values gloo-platform -n gloo-mesh -o yaml --kube-context ${context2} > data-plane.yaml
        open data-plane.yaml
        
  2. In the Helm values for the management plane, add the following configuration to enable logs for Solo Enterprise for Istio components.

      
      telemetryCollectorCustomization:
        pipelines:
          logs/ui:
            enabled: true
      
  3. In the Helm values file for the data plane, add the following configuration to enable logs for Solo Enterprise for Istio components. Logs are automatically sent to the telemetry gateway in the management cluster.

      
      telemetryCollectorCustomization:
        pipelines:
          logs/ui:
            enabled: true
      
  4. Upgrade the management plane release.

      helm upgrade gloo-platform gloo-platform/gloo-platform \
        --kube-context ${context1} \
        --namespace gloo-mesh \
        -f mgmt-plane.yaml \
        --version ${MGMT_VERSION}
      
  5. Verify that your settings are applied in the management cluster.

    1. Verify that your settings were added to the telemetry gateway configmap.

        kubectl get configmap gloo-telemetry-gateway-config -n gloo-mesh -o yaml --context ${context1}
        
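        As in the single-cluster setup, you can filter the configmap for the new pipeline entry. A quick sketch, assuming the `logs/ui` key appears verbatim in the rendered configuration:

        ```shell
        # Show only the logs/ui pipeline entry from the telemetry gateway
        # configuration in the management cluster.
        kubectl get configmap gloo-telemetry-gateway-config -n gloo-mesh -o yaml \
          --context ${context1} | grep -A 2 'logs/ui'
        ```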
    2. Perform a rollout restart of the telemetry gateway deployment to force your configmap changes to be applied to the telemetry gateway pod.

        kubectl rollout restart -n gloo-mesh deployment/gloo-telemetry-gateway --context ${context1}
        
  6. Upgrade the data plane release.

      helm upgrade gloo-platform gloo-platform/gloo-platform \
        --kube-context ${context2} \
        --namespace gloo-mesh \
        -f data-plane.yaml \
        --version ${MGMT_VERSION}
      
  7. Verify that your settings are applied in the workload cluster.

    1. Verify that your settings were added to the telemetry collector configmap.

        kubectl get configmap gloo-telemetry-collector-config -n gloo-mesh -o yaml --context ${context2}
        
    2. Perform a rollout restart of the telemetry collector daemon set to force your configmap changes to be applied to the telemetry collector agent pods.

        kubectl rollout restart -n gloo-mesh daemonset/gloo-telemetry-collector-agent --context ${context2}
        
  8. Open the Gloo UI. The Gloo UI is served from the gloo-mesh-ui service on port 8090 in the cluster where the management plane is deployed. You can connect by using the meshctl or kubectl CLIs.

    • meshctl: For more information, see the CLI documentation.
        meshctl dashboard --kube-context ${context1}
        
    • kubectl:
      1. Port-forward the gloo-mesh-ui service on 8090.
          kubectl port-forward -n gloo-mesh --context ${context1} svc/gloo-mesh-ui 8090:8090
          
      2. Open your browser and connect to http://localhost:8090.
  9. In the navigation pane, click Logs. Select the cluster, component, pod, and container for which you want to see the logs.