Troubleshoot TiDB in Kubernetes
This document describes some common issues and solutions when you use a TiDB cluster in Kubernetes.
Use the diagnostic mode
When a Pod is in the `CrashLoopBackOff` state, the containers in the Pod quit continually. As a result, you cannot use `kubectl exec` or `tkctl debug` normally, which makes it inconvenient to diagnose issues.

To solve this problem, TiDB in Kubernetes provides the Pod diagnostic mode for the PD, TiKV, and TiDB components. In this mode, the containers in the Pod hang directly after starting instead of crashing repeatedly, so you can use `kubectl exec` or `tkctl debug` to connect to the Pod containers for diagnosis.
To use the diagnostic mode for troubleshooting:
1. Add an annotation to the Pod to be diagnosed:

    ```shell
    kubectl annotate pod <pod-name> -n <namespace> runmode=debug
    ```

    The next time the container in the Pod is restarted, it detects this annotation and enters the diagnostic mode (see the sketch after these steps for a quick way to confirm that the annotation is in place).

2. Wait for the Pod to enter the `Running` state.

    ```shell
    watch kubectl get pod <pod-name> -n <namespace>
    ```

3. Start the diagnosis.

    Here is an example of using `kubectl exec` to get into the container for diagnosis:

    ```shell
    kubectl exec -it <pod-name> -n <namespace> -- /bin/sh
    ```

4. After finishing the diagnosis and resolving the problem, delete the Pod.

    ```shell
    kubectl delete pod <pod-name> -n <namespace>
    ```

    After the Pod is rebuilt, it automatically returns to the normal mode.
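If you want to confirm that the annotation has been applied before the container restarts, you can print the Pod's annotations. This is a minimal sketch; `runmode=debug` is the annotation added in step 1 above:

```shell
# The output should contain runmode=debug for a Pod in diagnostic mode.
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.metadata.annotations}'
```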
Recover the cluster after accidental deletion
TiDB Operator uses PV (Persistent Volume) and PVC (Persistent Volume Claim) to store persistent data. If you accidentally delete a cluster using `helm delete`, the PV/PVC objects and data are still retained to ensure data safety.

To restore the cluster at this time, use the `helm install` command to create a cluster with the same name. The retained PV/PVC objects and data are reused:

```shell
helm install pingcap/tidb-cluster -n <release-name> --namespace=<namespace> --version=<chart_version> -f values.yaml
```
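Before running `helm install`, you can confirm that the PV/PVC objects of the deleted cluster are still present. This is a quick sketch; the `app.kubernetes.io/instance` label is the one applied to the cluster's resources (it is also used later in this document when patching PVs):

```shell
# The PVCs of the deleted cluster should still be listed here.
kubectl get pvc -n <namespace> -l app.kubernetes.io/instance=<release-name>

# The corresponding PVs should still exist (PVs are cluster-scoped).
kubectl get pv -l app.kubernetes.io/instance=<release-name>
```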
Pod is not created normally
After creating a cluster using `helm install`, if the Pod is not created, you can diagnose it using the following commands:

```shell
kubectl get tidbclusters -n <namespace>
kubectl get statefulsets -n <namespace>
kubectl describe statefulsets -n <namespace> <release-name>-pd
```
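If the StatefulSets themselves are missing, the problem is often in TiDB Operator rather than in the cluster objects. The following is a sketch for checking the controller log; the `tidb-admin` namespace and the `tidb-controller-manager` deployment name are assumptions based on a default TiDB Operator installation, so adjust them to your deployment:

```shell
# Look for reconcile errors that mention <release-name>.
kubectl -n tidb-admin logs -f deployment/tidb-controller-manager
```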
Network connection failure between Pods
In a TiDB cluster, you can access most Pods by using the Pod's domain name (allocated by the Headless Service). The exception is that when TiDB Operator collects cluster information or issues control commands, it accesses the PD (Placement Driver) cluster using the `service-name` of the PD service.
When you find network connection issues between Pods in the logs or monitoring metrics, or when the symptoms suggest that the network connection between Pods might be abnormal, you can use the following process to diagnose and narrow down the problem:
1. Confirm that the endpoints of the Service and Headless Service are normal:

    ```shell
    kubectl -n <namespace> get endpoints <release-name>-pd
    kubectl -n <namespace> get endpoints <release-name>-tidb
    kubectl -n <namespace> get endpoints <release-name>-pd-peer
    kubectl -n <namespace> get endpoints <release-name>-tikv-peer
    kubectl -n <namespace> get endpoints <release-name>-tidb-peer
    ```

    The `ENDPOINTS` field shown by the above commands should be a comma-separated list of `cluster_ip:port`. If the field is empty or incorrect, check the health of the Pod and whether `kube-controller-manager` is working properly. (You can also cross-check the addresses against the Pod IPs; see the sketch after these steps.)

2. Enter the Pod's Network Namespace to diagnose network problems:

    ```shell
    tkctl debug -n <namespace> <pod-name>
    ```

    After the remote shell is started, use the `dig` command to diagnose the DNS resolution. If the DNS resolution is abnormal, refer to Debugging DNS Resolution for troubleshooting.

    ```shell
    dig <HOSTNAME>
    ```

    Use the `ping` command to diagnose the connection with the destination IP (the ClusterIP resolved using `dig`):

    ```shell
    ping <TARGET_IP>
    ```

    If the `ping` check fails, refer to Debugging Kubernetes Networking for troubleshooting.

    If the `ping` check succeeds, continue to check whether the target port is open by using `telnet`:

    ```shell
    telnet <target_ip> <target_port>
    ```

    If the `telnet` check fails, check whether the port corresponding to the Pod is correctly exposed and whether the application is configured to serve the expected port:

    ```shell
    # Checks whether the ports are consistent.
    kubectl -n <namespace> get po <pod-name> -ojson | jq '.spec.containers[].ports[].containerPort'

    # Checks whether the application is correctly configured to serve the specified port.
    # The default client port of PD is 2379 when not configured.
    kubectl -n <namespace> exec <pod-name> -- cat /etc/pd/pd.toml | grep client-urls

    # The default port of TiKV is 20160 when not configured.
    kubectl -n <namespace> exec <pod-name> -- cat /etc/tikv/tikv.toml | grep addr

    # The default port of TiDB is 4000 when not configured.
    kubectl -n <namespace> exec <pod-name> -- cat /etc/tidb/tidb.toml | grep port
    ```
The Pod is in the Pending state
The `Pending` state of a Pod is usually caused by insufficient resources, such as:

- The `StorageClass` of the PVC used by the PD, TiKV, or Monitor Pod does not exist, or the available PVs are insufficient.
- No node in the Kubernetes cluster can satisfy the CPU or memory resources requested by the Pod.
- The number of TiKV or PD replicas and the number of nodes in the cluster do not satisfy the high availability scheduling policy of tidb-scheduler.
You can check the specific reason for `Pending` by using the `kubectl describe pod` command:

```shell
kubectl describe po -n <namespace> <pod-name>
```
If the CPU or memory resources are insufficient, you can lower the CPU or memory resources requested by the corresponding component for scheduling, or add a new Kubernetes node.
If the `StorageClass` of the PVC cannot be found, change `storageClassName` in the `values.yaml` file to the name of a `StorageClass` available in the cluster, run `helm upgrade`, and then delete the StatefulSet and the corresponding PVCs. You can get the `StorageClass` available in the cluster by running the following command:

```shell
kubectl get storageclass
```

If a `StorageClass` exists in the cluster but the available PVs are insufficient, you need to add PV resources correspondingly. For Local PV, you can expand it by referring to Local PV Configuration.

tidb-scheduler has a high availability scheduling policy for TiKV and PD. For the same TiDB cluster, if there are N replicas of TiKV or PD, then at most `M=(N-1)/2` PD Pods (if N<3, then M=1) and at most `M=ceil(N/3)` TiKV Pods (if N<3, then M=1; `ceil` means rounding up) can be scheduled to each node. If the Pod's state becomes `Pending` because the high availability scheduling policy is not satisfied, you need to add more nodes to the cluster.
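As a worked example of the policy above: with 3 PD replicas, at most (3-1)/2 = 1 PD Pod can be scheduled to each node, so at least 3 schedulable nodes are needed; with 4 TiKV replicas, at most ceil(4/3) = 2 TiKV Pods can be scheduled to each node, so at least 2 schedulable nodes are needed. The following sketch is a rough way to count schedulable nodes; taints and affinity rules can further restrict scheduling:

```shell
# Count nodes that are not marked SchedulingDisabled.
kubectl get nodes --no-headers | grep -cv SchedulingDisabled
```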
The Pod is in the CrashLoopBackOff state
A Pod in the `CrashLoopBackOff` state means that the container in the Pod repeatedly aborts, in a loop of abort, restart by `kubelet`, and abort again. There are many potential causes of `CrashLoopBackOff`. In this case, the most effective way to locate the cause is to view the log of the Pod's container:

```shell
kubectl -n <namespace> logs -f <pod-name>
```
If the log fails to help diagnose the problem, you can add the `-p` parameter to output the log from the last time the container was started:

```shell
kubectl -n <namespace> logs -p <pod-name>
```
After checking the error messages in the log, you can refer to Cannot start `tidb-server`, Cannot start `tikv-server`, and Cannot start `pd-server` for further troubleshooting.
When the "cluster id mismatch" message appears in the TiKV Pod log, it means that the TiKV Pod might have used old data that belongs to another TiKV Pod or to a previous incarnation of this Pod. This error might occur if the data on the local disk is not cleared when you configure local storage in the cluster, or if the data is not recycled by the local volume provisioner because a PV was forcibly deleted.
If you have confirmed that the TiKV Pod should join the cluster as a new node and that the data on the PV should be deleted, you can delete the TiKV Pod and the corresponding PVC. The TiKV Pod is then automatically rebuilt and binds a new PV. When configuring local storage, delete the old data in the local storage on the machine to avoid Kubernetes reusing old data. In cluster operation and maintenance, manage PVs using the local volume provisioner and do not delete them forcibly. You can manage the lifecycle of a PV by creating and deleting PVCs, and by setting `reclaimPolicy` for the PV.
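For reference, setting the reclaim policy of a PV is a single `kubectl patch` call. This is a sketch; look up the PV name first, for example with `kubectl get pv`:

```shell
# Use Retain to keep the data after the PVC is deleted, or Delete to let the
# provisioner clean up and recycle the volume.
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```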
In addition, TiKV might also fail to start when `ulimit` is insufficient. In this case, you can modify the `/etc/security/limits.conf` file of the Kubernetes node to increase the `ulimit`:

```
root        soft        nofile        1000000
root        hard        nofile        1000000
root        soft        core          unlimited
root        soft        stack         10240
```
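To check whether the limits actually apply to the running TiKV process, you can read them from inside the container. This is a sketch; the container name `tikv` is an assumption based on the default naming used by TiDB Operator:

```shell
# "Max open files" shows the limit that the TiKV process (PID 1) runs with.
kubectl -n <namespace> exec <tikv-pod-name> -c tikv -- cat /proc/1/limits
```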
If you cannot confirm the cause from the log and `ulimit` is also a normal value, troubleshoot it further by using the diagnostic mode.
Unable to access the TiDB service
If you cannot access the TiDB service, first check whether the TiDB service is deployed successfully using the following method:
1. Check whether all components of the cluster are up and the status of each component is `Running`.

    ```shell
    kubectl get po -n <namespace>
    ```

2. Check the log of the TiDB component to see whether errors are reported.

    ```shell
    kubectl logs -f <tidb-pod-name> -n <namespace> -c tidb
    ```
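If the components look healthy, you can also rule out the SQL layer itself by connecting to TiDB through a temporary port-forward. This is a minimal sketch and assumes a MySQL client is installed on your local machine:

```shell
# Forward the TiDB service port to the local machine.
kubectl port-forward -n <namespace> svc/<release-name>-tidb 4000:4000 &

# If this connection works, TiDB is serving requests and the problem is more
# likely in the network path (NodePort, load balancer, or DNS) in front of it.
mysql -h 127.0.0.1 -P 4000 -u root
```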
If the cluster is successfully deployed, check the network using the following steps:
1. If you cannot access the TiDB service using `NodePort`, try to access the TiDB service using the service domain name or `clusterIP` on the node. If the `serviceName` or `clusterIP` works, the network within the Kubernetes cluster is normal. Then the possible issues are as follows:

    - A network failure exists between the client and the node.
    - Check whether the `externalTrafficPolicy` attribute of the TiDB service is `Local`. If it is `Local`, the client must access the service using the IP of the node where the TiDB Pod is located. (See the sketch after this list for a quick way to check this attribute.)
2. If you still cannot access the TiDB service using the service domain name or `clusterIP`, connect using `<PodIP>:4000` on the TiDB service backend. If the `PodIP` works, you can confirm that the problem is in the connection between the service domain name and the `PodIP`, or between the `clusterIP` and the `PodIP`. Check the following items:

    - Check whether the DNS service works well:

        ```shell
        kubectl get po -n kube-system -l k8s-app=kube-dns
        dig <tidb-service-domain>
        ```

    - Check whether `kube-proxy` on each node is working:

        ```shell
        kubectl get po -n kube-system -l k8s-app=kube-proxy
        ```

    - Check whether the TiDB service rule is correct in the `iptables` rules:

        ```shell
        iptables-save -t nat | grep <clusterIP>
        ```

    - Check whether the corresponding endpoint is correct.
3. If you cannot access the TiDB service even using `PodIP`, the problem is at the Pod network level. Check the following items:

    - Check whether the relevant route rules on the node are correct.
    - Check whether the network plugin service works well. (See the sketch after this list for a quick check.)
    - Refer to the network connection failure between Pods section.
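The following sketch covers two of the quick checks mentioned in the steps above. The label pattern for the network plugin is an assumption, because plugin names and labels differ between clusters; replace it with the plugin your cluster uses:

```shell
# Check the externalTrafficPolicy attribute of the TiDB service.
kubectl -n <namespace> get svc <release-name>-tidb -o jsonpath='{.spec.externalTrafficPolicy}'

# Check whether the network plugin Pods are healthy.
kubectl -n kube-system get po -o wide | grep -E 'calico|flannel|cilium|weave'
```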
TiKV Store is in Tombstone status abnormally
Normally, when a TiKV Pod is in a healthy state (`Running`), the corresponding TiKV store is also in a healthy state (`UP`). However, concurrent scale-in or scale-out operations on the TiKV component might cause some TiKV stores to fall into the `Tombstone` state abnormally. When this happens, try the following steps to fix it:
1. View the state of the TiKV store:

    ```shell
    kubectl get -n <namespace> tidbcluster <release-name> -ojson | jq '.status.tikv.stores'
    ```

2. View the state of the TiKV Pod:

    ```shell
    kubectl get -n <namespace> po -l app.kubernetes.io/component=tikv
    ```

3. Compare the state of the TiKV store with that of the Pod. If the store corresponding to a TiKV Pod is in the `Offline` state, it means the store is being taken offline abnormally. You can use the following commands to cancel the offline process and perform recovery operations:

    1. Open the connection to the PD service:

        ```shell
        kubectl port-forward -n <namespace> svc/<cluster-name>-pd <local-port>:2379 &>/tmp/portforward-pd.log &
        ```

    2. Bring the corresponding store online:

        ```shell
        curl -X POST http://127.0.0.1:<local-port>/pd/api/v1/store/<store-id>/state?state=Up
        ```
4. If the TiKV store with the latest `lastHeartbeatTime` that corresponds to a Pod is in the `Tombstone` state, it means that the offline process is completed. In this case, you need to re-create the Pod and bind it with a new PV to perform recovery by taking the following steps:

    1. Set the `reclaimPolicy` value of the PV corresponding to the store to `Delete`:

        ```shell
        kubectl patch $(kubectl get pv -l app.kubernetes.io/instance=<release-name>,tidb.pingcap.com/store-id=<store-id> -o name) -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
        ```

    2. Remove the PVC used by the Pod:

        ```shell
        kubectl delete -n <namespace> pvc tikv-<pod-name> --wait=false
        ```

    3. Remove the Pod, and wait for it to be re-created:

        ```shell
        kubectl delete -n <namespace> pod <pod-name>
        ```
After the Pod is re-created, a new store is registered in the TiKV cluster. Then the recovery is completed.
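To verify the result, you can re-run the store query from step 1. A sketch:

```shell
# The re-created Pod should now correspond to a store in the "Up" state with a new store ID.
kubectl get -n <namespace> tidbcluster <release-name> -ojson | jq '.status.tikv.stores'
```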
Long queries are abnormally interrupted in TiDB
Load balancers often set an idle connection timeout. If no data is sent over a connection for a specific period of time, the load balancer closes the connection.
If a long query is interrupted when you use TiDB, check the middleware program between the client and the TiDB server.
If the idle timeout is not long enough for your query, try to set the timeout to a larger value. If you cannot reset it, enable the `tcp-keep-alive` option in TiDB.
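For reference, the following is a minimal sketch of enabling the option in `values.yaml`, assuming your chart version exposes the TiDB configuration as raw TOML under `tidb.config`; run `helm upgrade` afterwards for the change to take effect:

```yaml
tidb:
  config: |
    [performance]
    tcp-keep-alive = true
```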
In Linux, the keepalive probe packet is sent every 7,200 seconds by default. To shorten the interval, configure `sysctls` via the `podSecurityContext` field.
- If `--allowed-unsafe-sysctls=net.*` can be configured for kubelet in the Kubernetes cluster, configure this parameter for kubelet and configure TiDB in the following way:

    ```yaml
    tidb:
      ...
      podSecurityContext:
        sysctls:
        - name: net.ipv4.tcp_keepalive_time
          value: "300"
    ```

- If `--allowed-unsafe-sysctls=net.*` cannot be configured for kubelet, configure TiDB in the following way:

    ```yaml
    tidb:
      annotations:
        tidb.pingcap.com/sysctl-init: "true"
      podSecurityContext:
        sysctls:
        - name: net.ipv4.tcp_keepalive_time
          value: "300"
      ...
    ```
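After the TiDB Pod is recreated with either configuration above, you can confirm that the sysctl has been applied by reading the value from inside the container. A sketch; the container name `tidb` matches the one used in the log command earlier in this document:

```shell
# Should print 300 if the sysctl is applied to the Pod's network namespace.
kubectl -n <namespace> exec <tidb-pod-name> -c tidb -- cat /proc/sys/net/ipv4/tcp_keepalive_time
```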