Troubleshoot High Disk I/O Usage in TiDB

This document introduces how to locate and address the issue of high disk I/O usage in TiDB.

Check the current I/O metrics

If TiDB's response still slows down after you have ruled out CPU bottlenecks and bottlenecks caused by transaction conflicts, you need to check I/O metrics to help determine the current system bottleneck.

Locate I/O issues from the monitor

The quickest way to locate I/O issues is to view the overall I/O status on the monitor, such as the Grafana dashboard deployed by default by TiDB Ansible and TiUP. The dashboard panels related to I/O include Overview, Node_exporter, and Disk-Performance.

The first type of monitoring panels

In Overview > System Info > IO Util, you can see the I/O status of each machine in the cluster. This metric is similar to util in the Linux iostat tool. A higher percentage indicates higher disk I/O usage:

  • If there is only one machine with high I/O usage in the monitor, currently there might be read and write hotspots on this machine.
  • If the I/O usage of most machines in the monitor is high, the cluster now has high I/O loads.

For the first situation above (only one machine with high I/O usage), you can further check I/O metrics in the Disk-Performance dashboard, such as Disk Latency and Disk Load, to determine whether any anomaly exists. If necessary, use the fio tool to check the disk.

The second type of monitoring panels

The main storage component of the TiDB cluster is TiKV. One TiKV instance contains two RocksDB instances: one for storing Raft logs, located in data/raft, and the other for storing real data, located in data/db.

In TiKV-Details > Raft IO, you can see the metrics related to disk writes of these two instances:

  • Append log duration: This metric indicates the response time of writes into the RocksDB instance that stores Raft logs. The .99 response time should be within 50 ms.
  • Apply log duration: This metric indicates the response time of writes into the RocksDB instance that stores real data. The .99 response time should be within 100 ms.

These two metrics also have corresponding per-server monitoring panels to help you view write hotspots.

The third type of monitoring panels

In TiKV-Details > Storage, there are monitoring metrics related to storage:

  • Storage command total: Indicates the number of different commands received.
  • Storage async write duration: Includes monitoring metrics such as disk sync duration, which might be related to Raft I/O. If you encounter an abnormal situation, check the working statuses of related components by checking logs.

Other panels

In addition, some other panel metrics can help you determine whether the bottleneck is I/O, and you can try adjusting some parameters. By checking the durations of prewrite/commit/raw-put (the last one is for raw key-value clusters only) in TiKV gRPC duration, you can confirm that the bottleneck is indeed slow TiKV writes. The common situations of slow TiKV writes are as follows:

  • append log is slow. In TiKV Grafana, the Raft I/O and append log duration metrics are relatively high, which is often due to slow disk writes. You can check the value of WAL Sync Duration max in RocksDB-raft to determine whether slow disk writes are the cause of the slow append log. Otherwise, you might need to report a bug.

  • The raftstore thread is busy. In TiKV Grafana, Raft Propose/propose wait duration is significantly higher than append log duration. Check the following aspects for troubleshooting:

    • Whether the value of store-pool-size in [raftstore] is too small. It is recommended to set this value within the range [1, 5] and not too large (see the configuration sketch after this list).
    • Whether the CPU resource of the machine is insufficient.
  • apply log is slow. In TiKV Grafana, the Raft I/O and apply log duration metrics are relatively high, which usually occurs along with a relatively high Raft Propose/apply wait duration. The possible causes are as follows:

    • The value of apply-pool-size in [raftstore] is too small. It is recommended to set this value within the range [1, 5] and not too large. In this case, the value of Thread CPU/apply cpu is also relatively high.
    • Insufficient CPU resources on the machine.
    • A write hotspot exists on a single Region (currently, the solution to this issue is still on the way). In this case, the CPU usage of a single apply thread is high (which you can view by modifying the Grafana expression and appending by (instance, name)).
    • Writes into RocksDB are slow, and the RocksDB kv/max write duration is high. A single Raft log might contain multiple key-value pairs (KVs). KVs are written to RocksDB in batches of 128, so one apply log might involve multiple RocksDB writes.
    • For other causes, report them as bugs.
  • raft commit log is slow. In TiKV Grafana, the Raft I/O and commit log duration (only available in Grafana 4.x) metrics are relatively high. Each Region corresponds to an independent Raft group, and Raft has a flow control mechanism similar to the TCP sliding window. To adjust the size of the sliding window, set the [raftstore] raft-max-inflight-msgs parameter. If there is a write hotspot and commit log duration is high, you can properly increase this parameter, for example, to 1024.
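
For reference, the raftstore parameters mentioned in the list above are set in the [raftstore] section of the TiKV configuration file. The following is only a sketch to illustrate the syntax; the specific values are assumptions that you need to tune against your own workload and hardware:

    [raftstore]
    # Threads that write Raft logs (append log); keep within [1, 5].
    store-pool-size = 2
    # Threads that apply committed Raft logs; keep within [1, 5].
    apply-pool-size = 2
    # Raft sliding window size; consider a larger value such as 1024
    # when there is a write hotspot and commit log duration is high.
    raft-max-inflight-msgs = 1024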

Locate I/O issues from logs

  • If the client reports errors such as server is busy, especially raftstore is busy, the errors might be related to I/O issues.

    You can check the monitoring panel (Grafana -> TiKV -> errors) to confirm the specific cause of the busy error. server is busy is part of TiKV's flow control mechanism; it is how TiKV informs tidb/ti-client that the current pressure on TiKV is too high and that the client should retry later.

  • Write stall appears in TiKV RocksDB logs.

    It might be that too many level-0 SST files cause the write stall. To address the issue, you can add the [rocksdb] max-sub-compactions = 2 (or 3) parameter to speed up the compaction of level-0 SST files. This parameter means that a compaction task from level-0 to level-1 can be divided into max-sub-compactions subtasks for multi-threaded concurrent execution.

    If the disk's I/O capability fails to keep up with the writes, it is recommended to scale up the disk. If the throughput of the disk reaches its upper limit (for example, the throughput of a SATA SSD is much lower than that of an NVMe SSD) and causes a write stall, but the CPU resources are relatively sufficient, you can try using a compression algorithm with a higher compression ratio to relieve the pressure on the disk, that is, use CPU resources to make up for disk resources.

    For example, when the pressure of default cf compaction is relatively high, you can change the [rocksdb.defaultcf] compression-per-level parameter from ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"] to ["no", "no", "zstd", "zstd", "zstd", "zstd", "zstd"].
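
    For reference, the two RocksDB adjustments above can be combined in the TiKV configuration file as shown in the following sketch. Whether these values suit your cluster depends on your disk and CPU capacity, so treat them as assumptions to validate rather than recommended defaults:

        [rocksdb]
        # Split a level-0 to level-1 compaction into concurrent subtasks.
        max-sub-compactions = 2

        [rocksdb.defaultcf]
        # Use zstd from level 2 onward to trade CPU for disk throughput.
        compression-per-level = ["no", "no", "zstd", "zstd", "zstd", "zstd", "zstd"]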

I/O issues found in alerts

The cluster deployment tools (TiDB Ansible and TiUP) deploy alerting components with the cluster by default, which come with built-in alert items and thresholds. The following alert items are related to I/O:

  • TiKV_write_stall
  • TiKV_raft_log_lag
  • TiKV_async_request_snapshot_duration_seconds
  • TiKV_async_request_write_duration_seconds
  • TiKV_raft_append_log_duration_secs
  • TiKV_raft_apply_log_duration_secs

Handle I/O issues

  • When an I/O hotspot issue is confirmed to occur, you need to refer to Handle TiDB Hotspot Issues to eliminate the I/O hotspots.
  • When it is confirmed that the overall I/O performance has become the bottleneck, and the I/O performance is going to keep falling behind the application's demands, you can take advantage of the distributed database's scaling capability and increase the number of TiKV nodes to achieve greater overall I/O throughput.
  • Adjust some of the parameters as described above, and use computing/memory resources to make up for disk storage resources.