SSD Fails Due to I/O Errors


Summary

There are cases in which you may receive an alert or alarm similar to the one shown below:

  • Carrier dbox-A123-LEFT-37-SSD Failure - 2022-01-31 10:10:30 UTC

  • SSD dbox-A123-RIGHT-37-SSD-1 serial: XYZ - 2022-01-31 10:10:30 UTC

If this happens, the Carrier or SSD may be experiencing a transient I/O error and may recover automatically.

This article will help you troubleshoot this scenario and find the applicable solution.

Solution

A common reason for SSD failures is I/O errors on the device.

These errors are logged in /var/log/messages or dmesg on the DNodes.
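
To check whether a DNode has already logged these errors, you can search the kernel log directly. The commands below are a minimal sketch, assuming shell access to the DNode and the log format shown in the example further down; the controller name patterns are only illustrative:

# Search the persistent kernel log for NVMe request errors and block-layer I/O errors
grep -E 'nvme[0-9]+: (request error|controller is down)|blk_update_request: I/O error' /var/log/messages | tail -n 50

# The same search against the in-memory kernel ring buffer
dmesg | grep -E 'nvme[0-9]+: (request error|controller is down)|blk_update_request: I/O error' | tail -n 50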

An example of this can be seen here:

2021-08-11T14:19:13.767242+00:00 dn-120 kernel: nvme nvme6: controller is down; will reset: CSTS=0x3, PCI_STATUS=0x10
2021-08-11T14:19:13.925607+00:00 dn-120 kernel: nvme nvme6: request error status=0x7
2021-08-11T14:19:13.925758+00:00 dn-120 kernel: nvme nvme6: request error status=0x7
2021-08-11T14:19:13.925795+00:00 dn-120 kernel: block/blk-core.c[2888]: blk_update_request: 4813 callbacks suppressed
2021-08-11T14:19:13.925899+00:00 dn-120 kernel: nvme nvme6: request error status=0x7
2021-08-11T14:19:13.925941+00:00 dn-120 kernel: blk_update_request: I/O error, dev nvme6n1, sector 8573039808
2021-08-11T14:19:13.926060+00:00 dn-120 kernel: nvme nvme6: request error status=0x7
2021-08-11T14:19:13.926096+00:00 dn-120 kernel: blk_update_request: I/O error, dev nvme6n1, sector 19660119280
2021-08-11T14:19:13.926126+00:00 dn-120 kernel: block/blk-core.c[2889]: blk_update_request: 3000 callbacks suppressed
2021-08-11T14:19:13.926237+00:00 dn-120 kernel: nvme nvme6: request error status=0x7
2021-08-11T14:19:13.926278+00:00 dn-120 kernel: blk_update_request: I/O error, dev nvme6n1, sector 14039794912
2021-08-11T14:19:13.926317+00:00 dn-120 kernel: ------------[ cut here ]------------
2021-08-11T14:19:13.926364+00:00 dn-120 kernel: blk_update_request: I/O error, dev nvme6n1, sector 20708238912
2021-08-11T14:19:13.926395+00:00 dn-120 kernel: WARNING: CPU: 35 PID: 0 at block/blk-core.c:2890 blk_update_request+0x2d4/0x3b0
2021-08-11T14:19:13.926505+00:00 dn-120 kernel: nvme nvme6: request error status=0x7
2021-08-11T14:19:13.926627+00:00 dn-120 kernel: nvme nvme6: request error status=0x7
2021-08-11T14:19:13.926742+00:00 dn-120 kernel: nvme nvme6: request error status=0x7
2021-08-11T14:19:13.926780+00:00 dn-120 kernel: blk_update_request: I/O error, dev nvme6n1, sector 9602612664
2021-08-11T14:19:13.926815+00:00 dn-120 kernel: blk_update_request: I/O error, dev nvme6n1, sector 3062231009
2021-08-11T14:19:13.926848+00:00 dn-120 kernel: blk_update_request: I/O error, dev nvme6n1, sector 15275315600
2021-08-11T14:19:13.926957+00:00 dn-120 kernel: nvme nvme6: request error status=0x7
2021-08-11T14:19:13.927008+00:00 dn-120 kernel: blk_update_request: I/O error, dev nvme6n1, sector 19391333360
2021-08-11T14:19:13.927128+00:00 dn-120 kernel: nvme nvme6: request error status=0x7
2021-08-11T14:19:13.927166+00:00 dn-120 kernel: blk_update_request: I/O error, dev nvme6n1, sector 22832703760
2021-08-11T14:19:13.927196+00:00 dn-120 kernel: blk_update_request: I/O error, dev nvme6n1, sector 14196598928
2021-08-11T14:19:13.927230+00:00 dn-120 kernel: Modules linked in: mst_pciconf(OE) xt_conntrack ipt_MASQUERADE nf_nat_masquerade_ipv4 nf_conntrack_netlink nfnetlink xt_addrtype iptable_filter iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack libcrc32c br_netfilter bridge nvmet_rdma(OE) nvmet(OE) nvme_rdma(OE) nvme_fabrics(OE) overlay(T) nvme(OE) nvme_core(OE) 8021q garp mrp stp llc bonding rdma_ucm(OE) ib_ucm(OE) rdma_cm(OE) iw_cm(OE) ib_ipoib(OE) ib_cm(OE) ib_umad(OE) mlx5_fpga_tools(OE) mlx4_en(OE) mlx4_ib(OE) mlx4_core(OE) iTCO_wdt iTCO_vendor_support mxm_wmi sb_edac intel_powerclamp coretemp intel_rapl iosf_mbi kvm_intel kvm irqbypass crc32_pclmul ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd pcspkr mgag200 i2c_algo_bit ttm drm_kms_helper syscopyarea sysfillrect sysimgblt
2021-08-11T14:19:13.928474+00:00 dn-120 kernel: fb_sys_fops i2c_i801 drm joydev sg mei_me lpc_ich drm_panel_orientation_quirks mei ioatdma dca ipmi_si ipmi_devintf ipmi_msghandler wmi acpi_power_meter pcc_cpufreq ip_tables ext4 mbcache jbd2 sr_mod cdrom mlx5_ib(OE) ib_uverbs(OE) sd_mod crc_t10dif crct10dif_generic ib_core(OE) ahci uas mlx5_core(OE) libahci vfio_mdev(OE) vfio_iommu_type1 e1000e crct10dif_pclmul crct10dif_common vfio crc32c_intel libata mdev(OE) mlxfw(OE) devlink usb_storage mlx_compat(OE) ptp pps_core sunrpc dm_mirror dm_region_hash dm_log dm_mod [last unloaded: mst_pci]
2021-08-11T14:19:13.929399+00:00 dn-120 kernel: CPU: 35 PID: 0 Comm: swapper/35 Kdump: loaded Tainted: G W OE ------------ T 3.10.0-1062.12.1.el7.vastos.8.x86_64 #1
2021-08-11T14:19:13.929442+00:00 dn-120 kernel: Hardware name: Viking Enterprise Solutions NSS2560/NSS-HW2EC, BIOS V11.03 01/31/2019
2021-08-11T14:19:13.929475+00:00 dn-120 kernel: Call Trace:
2021-08-11T14:19:13.929508+00:00 dn-120 kernel: <IRQ> [<ffffffff82b7bc73>] dump_stack+0x19/0x1b
2021-08-11T14:19:13.929541+00:00 dn-120 kernel: [<ffffffff8249b958>] __warn+0xd8/0x100
2021-08-11T14:19:13.929581+00:00 dn-120 kernel: [<ffffffff8249ba9d>] warn_slowpath_null+0x1d/0x20
2021-08-11T14:19:13.929619+00:00 dn-120 kernel: [<ffffffff82751d44>] blk_update_request+0x2d4/0x3b0
2021-08-11T14:19:13.929650+00:00 dn-120 kernel: [<ffffffff8275bcba>] blk_mq_end_request+0x1a/0x70
2021-08-11T14:19:13.929681+00:00 dn-120 kernel: [<ffffffffc0600df8>] nvme_complete_rq_ext+0x158/0x250 [nvme_core]
2021-08-11T14:19:13.929713+00:00 dn-120 kernel: [<ffffffffc065443f>] nvme_pci_complete_rq+0x1ef/0x380 [nvme]
2021-08-11T14:19:13.929744+00:00 dn-120 kernel: [<ffffffff8255819d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xe0
2021-08-11T14:19:13.929777+00:00 dn-120 kernel: [<ffffffff8275a239>] __blk_mq_complete_request_remote+0x19/0x20
2021-08-11T14:19:13.929804+00:00 dn-120 kernel: [<ffffffff825166d3>] flush_smp_call_function_queue+0x63/0x130
2021-08-11T14:19:13.929832+00:00 dn-120 kernel: [<ffffffff82516dd3>] generic_smp_call_function_single_interrupt+0x13/0x30
2021-08-11T14:19:13.929868+00:00 dn-120 kernel: [<ffffffff8245949d>] smp_call_function_single_interrupt+0x2d/0x40
2021-08-11T14:19:13.929905+00:00 dn-120 kernel: [<ffffffff82b911aa>] call_function_single_interrupt+0x16a/0x170
2021-08-11T14:19:13.929948+00:00 dn-120 kernel: <EOI> [<ffffffff829c2297>] ? cpuidle_enter_state+0x57/0xd0
2021-08-11T14:19:13.929998+00:00 dn-120 kernel: [<ffffffff829c23ee>] cpuidle_idle_call+0xde/0x230
2021-08-11T14:19:13.930046+00:00 dn-120 kernel: [<ffffffff82437c7e>] arch_cpu_idle+0xe/0xc0
2021-08-11T14:19:13.930085+00:00 dn-120 kernel: [<ffffffff825015ba>] cpu_startup_entry+0x14a/0x1e0
2021-08-11T14:19:13.930122+00:00 dn-120 kernel: [<ffffffff8245a0f7>] start_secondary+0x1f7/0x270
2021-08-11T14:19:13.930171+00:00 dn-120 kernel: [<ffffffff824000d5>] start_cpu+0x5/0x14
2021-08-11T14:19:13.930206+00:00 dn-120 kernel: ---[ end trace e17b39827690e4fe ]---
2021-08-11T14:19:14.762180+00:00 dn-120 kernel: nvme nvme6: Starting init: timeout=29422513487 jiffies=29422422987 cap=140459053055 HZ=1000
2021-08-11T14:19:15.570184+00:00 dn-120 kernel: nvme nvme6: Finished init: 29422423795
2021-08-11T14:19:15.597818+00:00 dn-120 kernel: nvme nvme6: Shutdown timeout set to 10 seconds
2021-08-11T14:19:15.598323+00:00 dn-120 kernel: nvme nvme6: Could not set queue count (16390)
2021-08-11T14:19:15.598475+00:00 dn-120 kernel: nvme nvme6: IO queues not created
2021-08-11T14:19:15.598532+00:00 dn-120 kernel: nvme6n1: detected capacity change from 15362991415296 to 0
2021-08-11T14:19:15.598659+00:00 dn-120 kernel: nvme nvme6: removing ns ffff95ec98dfb700 id 0x1 device nvme6n1
2021-08-11T14:19:15.620118+00:00 dn-120 kernel: nvme nvme6: removed ns ffff95ec98dfb700
2021-08-11T14:19:15.620215+00:00 dn-120 kernel: nvme nvme6: device reset done, reset_count=2


In the example above, the drive hit multiple I/O errors while trying to update specific sectors:

2021-08-11T14:19:13.926780+00:00 dn-120 kernel: blk_update_request: I/O error, dev nvme6n1, sector 9602612664
2021-08-11T14:19:13.926815+00:00 dn-120 kernel: blk_update_request: I/O error, dev nvme6n1, sector 3062231009
2021-08-11T14:19:13.926848+00:00 dn-120 kernel: blk_update_request: I/O error, dev nvme6n1, sector 15275315600
2021-08-11T14:19:13.926957+00:00 dn-120 kernel: nvme nvme6: request error status=0x7
2021-08-11T14:19:13.927008+00:00 dn-120 kernel: blk_update_request: I/O error, dev nvme6n1, sector 19391333360
2021-08-11T14:19:13.927128+00:00 dn-120 kernel: nvme nvme6: request error status=0x7
2021-08-11T14:19:13.927166+00:00 dn-120 kernel: blk_update_request: I/O error, dev nvme6n1, sector 22832703760
2021-08-11T14:19:13.927196+00:00 dn-120 kernel: blk_update_request: I/O error, dev nvme6n1, sector 14196598928
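
To gauge how widespread the errors are, you can count the logged I/O errors per NVMe device. This is a minimal sketch, assuming standard GNU tools on the DNode and the log format shown above:

# Count blk_update_request I/O errors per block device
grep 'blk_update_request: I/O error' /var/log/messages | grep -o 'dev [a-z0-9]*' | sort | uniq -c | sort -rn

If a single device dominates the output, as nvme6n1 does in this example, the errors are most likely specific to that drive.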

For TLC systems:

In Carrier systems, the system initiates a teardown flow, which means the entire Carrier is reset, causing all of the SSDs in the Carrier to fail and then recover:

2021-12-20T05:34:52.252537+00:00 dn-108 kernel: nvme nvme69: I/O 268 QID 8 (op: 2) timeout, aborting
2021-12-20T05:34:52.257298+00:00 dn-108 kernel: nvme nvme69: Abort status: 0x0
2021-12-20T05:34:52.257348+00:00 dn-108 kernel: plx-link-0000:97:00.0: prepping for delete - nvme idx 0 [] (refs=8)
2021-12-20T05:34:52.257384+00:00 dn-108 kernel: nvmet: [nvmet_bringdown_callback:351] <ssnqn dnode-116-ssds-0, nsid 1346549721, dev /vast/dev/nvme66n1, disk nvme66n1> queueing work for teardown
2021-12-20T05:34:52.257523+00:00 dn-108 kernel: nvme nvme66: state 5: 1 teardown callbacks called
2021-12-20T05:34:52.257578+00:00 dn-108 kernel: nvmet: [nvmet_ns_bdev_teardown:605] <ssnqn dnode-116-ssds-0, nsid 1346549721, dev /vast/dev/nvme66n1, disk nvme66n1> calling to disable for bdev teardown
2021-12-20T05:34:52.257609+00:00 dn-108 kernel: nvmet: freeing nvmet_ns_weakref: ffff8f61f52b6b90
2021-12-20T05:34:52.257642+00:00 dn-108 kernel: plx-link-0000:97:00.0: prepping for delete - nvme idx 1 [] (refs=8)
2021-12-20T05:34:52.257686+00:00 dn-108 kernel: nvmet: [nvmet_bringdown_callback:351] <ssnqn dnode-116-ssds-0, nsid 1105963266, dev /vast/dev/nvme67n1, disk nvme67n1> queueing work for teardown
2021-12-20T05:34:52.257794+00:00 dn-108 kernel: nvme nvme67: state 5: 1 teardown callbacks called
2021-12-20T05:34:52.257829+00:00 dn-108 kernel: plx-link-0000:97:00.0: prepping for delete - nvme idx 2 [] (refs=8)
2021-12-20T05:34:52.257853+00:00 dn-108 kernel: nvmet: [nvmet_bringdown_callback:351] <ssnqn dnode-116-ssds-0, nsid 1292741802, dev /vast/dev/nvme68n1, disk nvme68n1> queueing work for teardown
2021-12-20T05:34:52.257969+00:00 dn-108 kernel: nvme nvme68: state 5: 1 teardown callbacks called
2021-12-20T05:34:52.258000+00:00 dn-108 kernel: plx-link-0000:97:00.0: prepping for delete - nvme idx 3 [] (refs=8)
2021-12-20T05:34:52.258068+00:00 dn-108 kernel: nvmet: [nvmet_bringdown_callback:351] <ssnqn dnode-116-ssds-0, nsid 1535269047, dev /vast/dev/nvme69n1, disk nvme69n1> queueing work for teardown
2021-12-20T05:34:52.258173+00:00 dn-108 kernel: nvme nvme69: state 5: 1 teardown callbacks called
2021-12-20T05:34:52.258214+00:00 dn-108 kernel: plx-link-0000:97:00.0: prepping for delete - nvme idx 4 [] (refs=8)
2021-12-20T05:34:52.258235+00:00 dn-108 kernel: nvmet: [nvmet_bringdown_callback:351] <ssnqn dnode-116-ssds-0, nsid 570355791, dev /vast/dev/nvme70n1, disk nvme70n1> queueing work for teardown
2021-12-20T05:34:52.258410+00:00 dn-108 kernel: nvme nvme70: state 5: 1 teardown callbacks called
2021-12-20T05:34:52.258442+00:00 dn-108 kernel: nvmet: [nvmet_ns_bdev_teardown:605] <ssnqn dnode-116-ssds-0, nsid 1105963266, dev /vast/dev/nvme67n1, disk nvme67n1> calling to disable for bdev teardown
2021-12-20T05:34:52.258463+00:00 dn-108 kernel: nvmet: freeing nvmet_ns_weakref: ffff8f61f52b6750
2021-12-20T05:34:52.258515+00:00 dn-108 kernel: plx-link-0000:97:00.0: removing nvme idx 0 [] (refs=8)
2021-12-20T05:34:52.258644+00:00 dn-108 kernel: nvme nvme66: removing pci device
2021-12-20T05:34:52.258752+00:00 dn-108 kernel: nvme nvme66: disabling device, shutdown=0
2021-12-20T05:34:52.258805+00:00 dn-108 kernel: nvmet: [nvmet_ns_bdev_teardown:605] <ssnqn dnode-116-ssds-0, nsid 1292741802, dev /vast/dev/nvme68n1, disk nvme68n1> calling to disable for bdev teardown
2021-12-20T05:34:52.258828+00:00 dn-108 kernel: nvmet: freeing nvmet_ns_weakref: ffff8f59c59d2870
2021-12-20T05:34:52.258862+00:00 dn-108 kernel: nvmet: [nvmet_ns_bdev_teardown:605] <ssnqn dnode-116-ssds-0, nsid 1535269047, dev /vast/dev/nvme69n1, disk nvme69n1> calling to disable for bdev teardown
2021-12-20T05:34:52.258911+00:00 dn-108 kernel: nvmet: freeing nvmet_ns_weakref: ffff8f62f4b8c590
2021-12-20T05:34:52.258954+00:00 dn-108 kernel: nvmet: [nvmet_ns_bdev_teardown:605] <ssnqn dnode-116-ssds-0, nsid 570355791, dev /vast/dev/nvme70n1, disk nvme70n1> calling to disable for bdev teardown
2021-12-20T05:34:52.258992+00:00 dn-108 kernel: nvmet: freeing nvmet_ns_weakref: ffff8f631dcb4af0


The same flow should appear on both DNodes, since the Carrier is attached to both.
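
To verify whether the Carrier's SSDs reattached after the teardown, you can list the NVMe devices on each DNode. This is a minimal sketch, assuming the nvme-cli utility is installed on the DNodes:

# List the NVMe controllers and namespaces visible to this DNode
nvme list

# Alternatively, check for the NVMe block devices directly
lsblk -d -o NAME,SIZE,MODEL | grep nvme

Run the same check on both DNodes; since the Carrier is attached to both, the devices should reappear on each of them.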

If the Carrier does not recover after the teardown flow and its disks do not reattach to any DNode, this indicates a physical SSD issue, and the device will likely require replacement.

For QLC systems:

The following example shows a case where the drive was reset by the kernel and the I/O queues could not be created after the reset:

2021-08-11T14:19:15.598323+00:00 dn-120 kernel: nvme nvme6: Could not set queue count (16390)
2021-08-11T14:19:15.598475+00:00 dn-120 kernel: nvme nvme6: IO queues not created


When this happens, the kernel removes the namespace and does not recreate the device file:

2021-08-11T14:19:15.598532+00:00 dn-120 kernel: nvme6n1: detected capacity change from 15362991415296 to 0
2021-08-11T14:19:15.598659+00:00 dn-120 kernel: nvme nvme6: removing ns ffff95ec98dfb700 id 0x1 device nvme6n1
2021-08-11T14:19:15.620118+00:00 dn-120 kernel: nvme nvme6: removed ns ffff95ec98dfb700
2021-08-11T14:19:15.620215+00:00 dn-120 kernel: nvme nvme6: device reset done, reset_count=2
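
To confirm whether the kernel recreated the device file after the reset, check for the namespace block device and review the controller's most recent messages. This is a minimal sketch; nvme6 is only an example controller name:

# The namespace device file should exist again if the drive recovered
ls -l /dev/nvme6 /dev/nvme6n1

# Review the latest kernel messages for this controller
dmesg | grep 'nvme nvme6:' | tail -n 20

If /dev/nvme6n1 is still missing, continue with the recovery steps below.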

Recovery Steps

If the device file is not visible to the OS, the device can sometimes be recovered by power cycling the slot where it resides. Please work with the VAST Data Customer Success Team to complete this step.

If the slot power cycle does not revive the SSD, contact the VAST Data Customer Success Team to process an RMA for the device.
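
When opening the case, it helps to attach the device's identity and the kernel messages surrounding the failure. This is a minimal sketch of the kind of information to collect, assuming nvme-cli is available and using nvme6 as an example; if the device file is already gone, the serial number from the original alarm can be provided instead:

# Device identity and health, while the device is still visible
nvme list
nvme smart-log /dev/nvme6

# Kernel messages for the failing controller
grep 'nvme nvme6:' /var/log/messages | tail -n 100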