Add support information
Collect valuable information for Solo to review and troubleshoot your support request.
Environment
- Get the Gloo Mesh Enterprise version that you run in the management cluster.
meshctl --kubecontext $MGMT_CONTEXT version
- Get the version of Kubernetes that you run in each of your clusters.
kubectl --context $MGMT_CONTEXT version -o yaml
kubectl --context $REMOTE_CONTEXT version -o yaml
- Get the Istio version that you run in each of your workload clusters.
istioctl --context $REMOTE_CONTEXT version
- List the infrastructure provider that hosts your environment, such as AWS or GCP.
Setup
Share the installation method that you used to install Gloo Mesh Enterprise, such as Helm, meshctl, or Argo CD, and the configuration that you used during the installation.
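If you installed with Helm, one way to capture the installation configuration is to export the values that were used for the release. The following is a minimal sketch that assumes a hypothetical release name of gloo-platform in the gloo-mesh namespace; adjust both to match your setup.
helm --kube-context $MGMT_CONTEXT get values gloo-platform --namespace gloo-mesh > gloo-mesh-helm-values.yaml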
Issue
Provide a detailed description of the issue. Make sure to include the following information:
- If reproducible, steps to reproduce the issue.
- Number of workload clusters (see the example commands after this list).
- Federated trust details, such as self-signed certificates or certificates provided by a certificate management provider.
- High-level diagram of the clusters showing where Gloo Mesh Enterprise components and applications are running.
- If you use workspaces, details of how the workspaces are configured.
- Details of the Istio installation and configuration.
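For example, one way to list the registered workload clusters and the configured workspaces is to query the corresponding Gloo custom resources in the management cluster. This sketch assumes the default gloo-mesh namespace and that these resource types are available in your Gloo Mesh Enterprise version.
kubectl --context $MGMT_CONTEXT get kubernetesclusters -n gloo-mesh
kubectl --context $MGMT_CONTEXT get workspaces -n gloo-mesh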
Describe the impact of the issue. For example, the issue might block an update or a demo, or cause the loss of data or an entire system.
Export the relevant configuration files that are related to the issue.
- Gloo resources: Use the following script to dump all Gloo Mesh Enterprise custom resources into a file. Attach the gloo-mesh-configuration.yaml file to your support request.
for n in $(kubectl --context <cluster context> get crds | grep solo.io | awk '{print $1}'); do kubectl --context <cluster context> get $n --all-namespaces -o yaml >> gloo-mesh-configuration.yaml; echo "---" >> gloo-mesh-configuration.yaml; done
- Istio resources: Use the following command to create an Istio bug report that you can attach to your support request. Run it against all the impacted workload clusters.
istioctl --context $REMOTE_CONTEXT bug-report --istio-namespace <istio-control-plane-namespace>
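For example, if the Istio control plane runs in the default istio-system namespace:
istioctl --context $REMOTE_CONTEXT bug-report --istio-namespace istio-system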
Product-specific details
Gloo Mesh Enterprise
Capture the output of the meshctl --kubecontext $MGMT_CONTEXT check command.
Typically, the command output indicates any license errors or details of unhealthy agents, such as in the following example.
🟢 License status
 INFO  gloo-mesh Trial license expiration is 02 Nov 32 10:32 GMT
 INFO  gloo-network Trial license expiration is 22 Aug 33 09:53 GMT
 INFO  gloo-gateway Trial license expiration is 11 Nov 32 21:33 GMT
 INFO  Valid GraphQL license module found
🟢 CRD version check
🟢 Gloo Platform deployment status
Namespace | Name                           | Ready | Status
gloo-mesh | gloo-mesh-mgmt-server          | 0/0   | Healthy
gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
gloo-mesh | gloo-telemetry-gateway         | 1/1   | Healthy
gloo-mesh | prometheus-server              | 1/1   | Healthy
gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
🔴 Mgmt server connectivity to workload agents
 ERROR  cluster east-mesh is registered but not connected
 ERROR  cluster west-mesh is registered but not connected
Cluster   | Registered | Connected Pod
east-mesh | true       | Not connected
west-mesh | true       | Not connected
Connected Pod | Clusters
Not connected | 2
 ERROR  Encountered failed checks.
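If the check reports clusters that are registered but not connected, the logs of the Gloo agent on the affected workload clusters are often useful to include. The following is a sketch that assumes the agent runs as the gloo-mesh-agent deployment in the default gloo-mesh namespace.
kubectl --context $REMOTE_CONTEXT logs deploy/gloo-mesh-agent -n gloo-mesh > gloo-mesh-agent.log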
Collect the logs for the components that are impacted by the issue, such as gloo-mesh-mgmt-server, gloo-telemetry-gateway, gloo-mesh-redis, or gloo-mesh-ui. The components vary depending on your Gloo Mesh Enterprise setup and can be found by default in the gloo-mesh namespace.
Example: Follow the steps below to get the logs for the mgmt-server controller pod.
1. Optional: Set the log level to debug. Run the following command in a non-production environment only. Changing the log level requires management plane components to restart.
kubectl --context $MGMT_CONTEXT patch -n <gloo-namespace> deploy/gloo-mesh-mgmt-server --type "json" -p '[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--verbose=true"}]'
2. Capture the logs while you reproduce the issue.
kubectl --context $MGMT_CONTEXT logs -f deploy/gloo-mesh-mgmt-server -n <gloo-namespace> > gloo.log
3. Optional: After you capture the logs, reset the log level to info. Run the following command in a non-production environment only. Changing the log level requires management plane components to restart.
kubectl --context $MGMT_CONTEXT patch -n <gloo-namespace> deploy/gloo-mesh-mgmt-server --type "json" -p '[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--verbose=false"}]'
Repeat these steps for all the Gloo Mesh Enterprise components in the management cluster.
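For example, to collect logs for other components that the check output lists in the gloo-mesh namespace, you might run commands along these lines; the deployment names are assumptions based on a default installation.
kubectl --context $MGMT_CONTEXT logs deploy/gloo-telemetry-gateway -n gloo-mesh > gloo-telemetry-gateway.log
kubectl --context $MGMT_CONTEXT logs deploy/gloo-mesh-redis -n gloo-mesh > gloo-mesh-redis.log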
Get the full debug report for each cluster that is impacted by the issue. Each of the following commands creates a tgz file that you can attach to your support ticket. Depending on the size of your cluster, each command might take a few minutes to complete.
meshctl --kubecontext $MGMT_CONTEXT debug report
meshctl --kubecontext $REMOTE_CONTEXT debug report
Istio
- If you used Helm to install Istio in the workload clusters, get the Helm configuration that you used during your installation.
helm --kube-context $REMOTE_CONTEXT get all <release-name> --namespace <istio-system-namespace> > istio_installation_helm.yaml
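If you are not sure of the release name or namespace, you can list the Helm releases in the workload cluster first, for example assuming the istio-system namespace:
helm --kube-context $REMOTE_CONTEXT list --namespace istio-system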