
Troubleshoot TiDB in Kubernetes

This document describes some common issues and solutions when you use a TiDB cluster in Kubernetes.

Use the diagnostic mode

When a Pod is in the CrashLoopBackOff state, the containers in the Pod exit and restart repeatedly. As a result, you cannot use kubectl exec or tkctl debug normally, which makes it inconvenient to diagnose issues.

To solve this problem, TiDB in Kubernetes provides the Pod diagnostic mode for the PD, TiKV, and TiDB components. In this mode, the containers in the Pod pause right after starting instead of entering the crash loop, so you can use kubectl exec or tkctl debug to connect to the Pod containers for diagnosis.

To use the diagnostic mode for troubleshooting:

  1. Add an annotation to the Pod to be diagnosed:

    kubectl annotate pod <pod-name> -n <namespace> runmode=debug

    The next time the container in the Pod is restarted, it detects this annotation and enters the diagnostic mode.

  2. Wait for the Pod to enter the Running state.

    watch kubectl get pod <pod-name> -n <namespace>
  3. Start the diagnosis.

    Here's an example of using kubectl exec to get into the container for diagnosis:

    kubectl exec -it <pod-name> -n <namespace> -- /bin/sh
  4. After finishing the diagnosis and resolving the problem, delete the Pod.

    kubectl delete pod <pod-name> -n <namespace>

    After the Pod is rebuilt, it automatically returns to the normal mode.

Recover the cluster after accidental deletion

TiDB Operator uses PV (Persistent Volume) and PVC (Persistent Volume Claim) to store persistent data. If you accidentally delete a cluster using helm delete, the PV/PVC objects and data are still retained to ensure data safety.

To restore the cluster at this time, use the helm install command to create a cluster with the same name. The retained PV/PVC and data are reused.

helm install pingcap/tidb-cluster -n <release-name> --namespace=<namespace> --version=<chart_version> -f values.yaml
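To confirm that the original data is still present before reinstalling, you can optionally list the retained PV/PVC objects. This is a quick check; the label selector below follows the labeling convention used elsewhere in this document and might differ in your deployment:

kubectl get pvc -n <namespace> -l app.kubernetes.io/instance=<release-name>
kubectl get pv -l app.kubernetes.io/instance=<release-name>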

Pod is not created normally

After creating a cluster using helm install, if the Pod is not created, you can diagnose it using the following commands:

kubectl get tidbclusters -n <namespace>
kubectl get statefulsets -n <namespace>
kubectl describe statefulsets -n <namespace> <release-name>-pd
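If the StatefulSets exist but their Pods are missing, the Kubernetes events and the TiDB Operator controller logs usually reveal the reason. A minimal sketch of such a check follows; the Operator Pod name and its namespace are placeholders that depend on how TiDB Operator was installed:

kubectl get events -n <namespace> --sort-by=.lastTimestamp
kubectl logs -n <operator-namespace> <tidb-controller-manager-pod-name>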

Network connection failure between Pods

In a TiDB cluster, you can access most Pods by using the Pod's domain name (allocated by the Headless Service). The exception is that when TiDB Operator collects cluster information or issues control commands, it accesses the PD (Placement Driver) cluster using the service name of the PD service.
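For reference, a Pod exposed through a Headless Service is addressable inside the cluster as <pod-name>.<headless-service-name>.<namespace>.svc. For example, assuming the default naming convention used by TiDB Operator, the first PD Pod is typically reachable at a domain of the following form:

<release-name>-pd-0.<release-name>-pd-peer.<namespace>.svc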

When you find network connection issues between Pods in the logs or monitoring metrics, or you suspect that the network connection between Pods might be abnormal based on the symptoms, follow this process to diagnose and narrow down the problem:

  1. Confirm that the endpoints of the Service and Headless Service are normal:

    kubectl -n <namespace> get endpoints <release-name>-pd
    kubectl -n <namespace> get endpoints <release-name>-tidb
    kubectl -n <namespace> get endpoints <release-name>-pd-peer
    kubectl -n <namespace> get endpoints <release-name>-tikv-peer
    kubectl -n <namespace> get endpoints <release-name>-tidb-peer

    The ENDPOINTS field in the output of the above commands should be a comma-separated list of IP:port pairs that point to the backing Pods. If the field is empty or incorrect, check the health of the Pods and whether kube-controller-manager is working properly.
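    For reference, healthy output looks similar to the following; the IP addresses are illustrative only:

    NAME                ENDPOINTS                                            AGE
    <release-name>-pd   10.233.69.5:2379,10.233.70.7:2379,10.233.71.3:2379   1d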

  2. Enter the Pod's Network Namespace to diagnose network problems:

    tkctl debug -n <namespace> <pod-name>

    After the remote shell is started, use the dig command to diagnose the DNS resolution. If the DNS resolution is abnormal, refer to Debugging DNS Resolution for troubleshooting.

    dig <HOSTNAME>

    Use the ping command to diagnose the connection with the destination IP (the ClusterIP resolved using dig):

    ping <TARGET_IP>
    • If the ping check fails, refer to Debugging Kubernetes Networking for troubleshooting.

    • If the ping check succeeds, continue to check whether the target port is open by using telnet:

      telnet <target_ip> <target_port>

      If the telnet check fails, check whether the port corresponding to the Pod is correctly exposed and whether the applied port is correctly configured:

      # Checks whether the ports are consistent.
      kubectl -n <namespace> get po <pod-name> -ojson | jq '.spec.containers[].ports[].containerPort'

      # Checks whether the application is correctly configured to serve the specified port.
      # The default port of PD is 2379 when not configured.
      kubectl -n <namespace> exec -it <pod-name> -- cat /etc/pd/pd.toml | grep client-urls

      # The default port of TiKV is 20160 when not configured.
      kubectl -n <namespace> exec -it <pod-name> -- cat /etc/tikv/tikv.toml | grep addr

      # The default port of TiDB is 4000 when not configured.
      kubectl -n <namespace> exec -it <pod-name> -- cat /etc/tidb/tidb.toml | grep port

The Pod is in the Pending state

The Pending state of a Pod is usually caused by conditions of insufficient resources, such as:

  • The StorageClass of the PVC used by the PD, TiKV, or Monitor Pods does not exist, or the available PVs are insufficient.
  • No node in the Kubernetes cluster can satisfy the CPU or memory resources requested by the Pod.
  • The number of TiKV or PD replicas and the number of nodes in the cluster do not satisfy the high availability scheduling policy of tidb-scheduler.

You can check the specific reason for Pending by using the kubectl describe pod command:

kubectl describe po -n <namespace> <pod-name>
  • If the CPU or memory resources are insufficient, you can lower the CPU or memory resources requested by the corresponding component for scheduling, or add a new Kubernetes node.

  • If the StorageClass of the PVC cannot be found, change storageClassName in the values.yaml file to the name of a StorageClass available in the cluster, run helm upgrade, and then delete the StatefulSet and the corresponding PVCs. Run the following command to get the StorageClasses available in the cluster:

    kubectl get storageclass
  • If a StorageClass exists in the cluster but the available PV is insufficient, you need to add PV resources correspondingly. For Local PV, you can expand it by referring to Local PV Configuration.

  • tidb-scheduler has a high availability scheduling policy for TiKV and PD. For the same TiDB cluster, if there are N replicas of TiKV or PD, then at most M=(N-1)/2 PD Pods (if N<3, then M=1) and at most M=ceil(N/3) TiKV Pods (if N<3, then M=1; ceil means rounding up) can be scheduled on each node. If a Pod's state becomes Pending because the high availability scheduling policy is not satisfied, you need to add more nodes to the cluster.
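    For example, with N=3 replicas of PD and N=3 replicas of TiKV, at most (3-1)/2 = 1 PD Pod and ceil(3/3) = 1 TiKV Pod can be scheduled on each node, so at least 3 schedulable nodes are required before all of these Pods can leave the Pending state.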

The Pod is in the CrashLoopBackOff state

A Pod in the CrashLoopBackOff state means that the container in the Pod repeatedly aborts, in a loop of aborting, being restarted by kubelet, and aborting again. There are many potential causes of CrashLoopBackOff. In this case, the most effective way to locate the cause is to view the log of the Pod container:

kubectl -n <namespace> logs -f <pod-name>

If the log fails to help diagnose the problem, you can add the -p parameter to output the logs from the previous instance of the container:

kubectl -n <namespace> logs -p <pod-name>

After checking the error messages in the log, you can refer to Cannot start tidb-server, Cannot start tikv-server, and Cannot start pd-server for further troubleshooting.

When the "cluster id mismatch" message appears in the TiKV Pod log, it means that the TiKV Pod might have used old data from other or previous TiKV Pod. If the data on the local disk remain uncleared when you configure local storage in the cluster, or the data is not recycled by the local volume provisioner due to a forced deletion of PV, an error might occur.

If you have confirmed that the TiKV Pod should join the cluster as a new node and that the data on the PV should be deleted, you can delete the TiKV Pod and the corresponding PVC. The TiKV Pod is then automatically re-created and binds a new PV. When configuring local storage, delete the old data on the machine's local disks so that Kubernetes does not reuse it. In cluster operation and maintenance, manage PVs with the local volume provisioner and do not delete them forcibly. You can manage the PV lifecycle by creating and deleting PVCs and by setting reclaimPolicy for the PV.
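To see which reclaim policy your PVs currently use before making changes, a quick check (standard kubectl output customization, not specific to TiDB Operator) is:

kubectl get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy,CLAIM:.spec.claimRef.name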

In addition, TiKV might also fail to start when ulimit is insufficient. In this case, you can modify the /etc/security/limits.conf file of the Kubernetes node to increase the ulimit:

root soft nofile 1000000
root hard nofile 1000000
root soft core unlimited
root soft stack 10240
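To verify that the new limit has taken effect for TiKV, you can check the open file limit from inside the TiKV container after it restarts. This is a minimal sketch; the container name tikv matches the default used by TiDB Operator:

kubectl -n <namespace> exec <tikv-pod-name> -c tikv -- sh -c 'ulimit -n'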

If you cannot confirm the cause from the log and ulimit is also a normal value, troubleshoot it further by using the diagnostic mode.

Unable to access the TiDB service

If you cannot access the TiDB service, first check whether the TiDB service is deployed successfully using the following method:

  1. Check whether all components of the cluster are up and the status of each component is Running.

    kubectl get po -n <namespace>
  2. Check the log of TiDB components to see whether errors are reported.

    kubectl logs -f <tidb-pod-name> -n <namespace> -c tidb

If the cluster is successfully deployed, check the network using the following steps:

  1. If you cannot access the TiDB service using NodePort, try to access the TiDB service using the service domain name or clusterIP on the node. If the service domain name or clusterIP works, the network within the Kubernetes cluster is normal. Then the possible issues are as follows:

    • Network failure exists between the client and the node.
    • Check whether the externalTrafficPolicy attribute of the TiDB service is Local. If it is Local, the client must access the service through the IP of the node where a TiDB Pod is located.
  2. If you still cannot access the TiDB service using the service domain name or clusterIP, connect to the backend of the TiDB service directly using <PodIP>:4000 (see the connection example after this list). If the PodIP works, the problem lies in the connection between the service domain name and the PodIP or between the clusterIP and the PodIP. Check the following items:

    • Check whether the DNS service works well.

      kubectl get po -n kube-system -l k8s-app=kube-dns
      dig <tidb-service-domain>
    • Check whether kube-proxy on each node is working.

      kubectl get po -n kube-system -l k8s-app=kube-proxy
    • Check whether the TiDB service rule is correct in the iptables rules.

      iptables-save -t nat | grep <clusterIP>
    • Check whether the corresponding endpoint is correct.

  3. If you cannot access the TiDB service even using the PodIP, the problem is at the Pod-level network.
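The following is a minimal sketch of testing the connection to a TiDB Pod directly by its PodIP, assuming a MySQL client is available and the root user can log in; the label selector matches the convention used elsewhere in this document:

kubectl get po -n <namespace> -l app.kubernetes.io/component=tidb -o wide
mysql -h <PodIP> -P 4000 -u root -p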

TiKV Store is in Tombstone status abnormally

Normally, when a TiKV Pod is in a healthy state (Running), the corresponding TiKV store is also in a healthy state (UP). However, concurrent scale-in or scale-out on TiKV components might cause part of TiKV stores to fall into the Tombstone state abnormally. When this happens, try the following steps to fix it:

  1. View the state of the TiKV store:

    kubectl get -n <namespace> tidbcluster <release-name> -ojson | jq '.status.tikv.stores'
  2. View the state of the TiKV Pod:

    kubectl get -n <namespace> po -l app.kubernetes.io/component=tikv
  3. Compare the state of the TiKV store with that of the Pod. If the store corresponding to a TiKV Pod is in the Offline state, it means the store is being taken offline abnormally. You can use the following commands to cancel the offline process and perform recovery operations:

    1. Open the connection to the PD service:

      kubectl port-forward -n <namespace> svc/<cluster-name>-pd <local-port>:2379 &>/tmp/portforward-pd.log &
    2. Bring online the corresponding store:

      curl -X POST http://127.0.0.1:2379/pd/api/v1/store/<store-id>/state?state=Up
  4. If the TiKV store that corresponds to a Pod and has the latest lastHeartbeatTime is in the Tombstone state, the offline process is complete. In this case, you need to re-create the Pod and bind it to a new PV to perform the recovery by taking the following steps:

    1. Set the reclaimPolicy value of the PV corresponding to the store to Delete:

      kubectl patch $(kubectl get pv -l app.kubernetes.io/instance=<release-name>,tidb.pingcap.com/store-id=<store-id> -o name) -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
    2. Remove the PVC used by the Pod:

      kubectl delete -n <namespace> pvc tikv-<pod-name> --wait=false
    3. Remove the Pod, and wait for it to be re-created:

      kubectl delete -n <namespace> pod <pod-name>

    After the Pod is re-created, a new store is registered in the TiKV cluster. Then the recovery is completed.
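If the port-forward opened in the earlier step is still running, you can also query PD directly to confirm that the new store has registered and is in the Up state. This uses the stores endpoint of the PD HTTP API:

curl http://127.0.0.1:2379/pd/api/v1/stores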

Long queries are abnormally interrupted in TiDB

Load balancers often set an idle connection timeout. If no data is sent over a connection for a specific period of time, the load balancer closes the connection.

If a long query is interrupted when you use TiDB, check the middleware program between the client and the TiDB server. If the idle timeout is not long enough for your query, try to set the timeout to a larger value. If you cannot reset it, enable the tcp-keep-alive option in TiDB.

In Linux, the keepalive probe packet is sent every 7,200 seconds by default. To shorten the interval, configure sysctls via the podSecurityContext field.

  • If --allowed-unsafe-sysctls=net.* can be configured for kubelet in the Kubernetes cluster, configure this parameter for kubelet and configure TiDB in the following way:

    tidb:
      ...
      podSecurityContext:
        sysctls:
        - name: net.ipv4.tcp_keepalive_time
          value: "300"
  • If --allowed-unsafe-sysctls=net.* cannot be configured for kubelet, configure TiDB in the following way:

    tidb:
      annotations:
        tidb.pingcap.com/sysctl-init: "true"
      podSecurityContext:
        sysctls:
        - name: net.ipv4.tcp_keepalive_time
          value: "300"
      ...
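In either case, after the setting takes effect and the TiDB Pods are re-created, you can verify the kernel parameter from inside the TiDB container. This is a quick check; the container name tidb matches the default used by TiDB Operator:

kubectl -n <namespace> exec <tidb-pod-name> -c tidb -- cat /proc/sys/net/ipv4/tcp_keepalive_time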