It is 2010 and RAID5 still works …

Some years ago (2007, 2008), when I cared a little more about things like RAID and RAID recovery, I read an article on ZDNet by Robin Harris that made the case that disk capacity increases, coupled with an almost invariant URE (Unrecoverable Read Error) rate, meant that RAID5 would be dead in 2009. A follow-on article appeared recently, also by Robin Harris, that extends the same logic and claims that RAID6 will stop working in 2019.

The crux of the argument is this. As disk drives have become larger and larger (approximately doubling in capacity every two years), the URE rate has not improved at the same pace. The URE rate measures the frequency of occurrence of an Unrecoverable Read Error and is typically expressed in errors per bit read. For example, a URE rate of 1E-14 (10 ^ -14) implies that, statistically, an unrecoverable read error would occur once in every 1E14 bits read (1E14 bits = 1.25E13 bytes, or approximately 12.5TB).

Further, Robin considers a RAID array (RAID5 or RAID6) that is running normally when a drive suffers a catastrophic failure, prompting a reconstruction from parity. In that scenario, it is perfectly conceivable that while reading the (N-1) surviving data drives and the parity stripe in order to rebuild the failed drive, a single URE may occur. That URE would render the RAID5 volume failed (RAID6 can absorb one such error during a single-drive rebuild, which is why its deadline is pushed out to 2019).

The argument is that as disk capacities grow and the URE rate does not improve at the same rate, the probability of a RAID5 rebuild failure increases over time. Statistically, he shows that by 2009 disk capacities would have grown enough to make RAID5 impractical for any array of meaningful size.
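To make the statistics concrete, here is a minimal sketch of that calculation (mine, not from the articles; the 1E-14 URE rate, the 7-drive array, and the assumption that a rebuild must read every bit on the surviving drives are all illustrative):

```python
import math

URE_RATE = 1e-14  # errors per bit read; a common consumer SATA spec

def rebuild_failure_probability(drive_tb, surviving_drives):
    """P(at least one URE) = 1 - (1 - p)^bits_read, computed stably."""
    bits_read = drive_tb * 1e12 * 8 * surviving_drives
    return -math.expm1(bits_read * math.log1p(-URE_RATE))

# A 7-drive RAID5 set: 6 surviving drives must be read in full to rebuild.
for drive_tb in (0.5, 1.0, 2.0):
    p = rebuild_failure_probability(drive_tb, surviving_drives=6)
    print(f"{drive_tb} TB drives: {p:.0%} chance of a URE during rebuild")
```

With those assumptions, the rebuild-failure probability climbs from roughly 21% at 0.5TB drives to over 60% at 2TB drives, which is the heart of Robin's point.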

So, in 2007 he wrote:

RAID 5 protects against a single disk failure. You can recover all your data if a single disk breaks. The problem: once a disk breaks, there is another increasingly common failure lurking. And in 2009 it is highly certain it will find you.

and in 2009, he wrote:

SATA RAID 6 will stop being reliable sooner unless drive vendors get their game on. More good news: one of them already has.

The logic proposed is accurate but, IMHO, incomplete. One important aspect that the analysis fails to point out is something that RAID vendors have already been doing for many years now.

[Image of an early disk drive; courtesy of http://www.computer-history.info]

When disk drives looked like this (picture at right), the predominant failure mode was catastrophic failure: drives either worked or they didn’t. At some level, that reflected the fact that the Drive Permanent Failure (DPF) frequency was significantly higher than the URE frequency, so the only observed failure mode was catastrophic failure.

As drives got bigger, and certainly in 1988 when Patterson and others first proposed the notion of RAID, it made perfect sense to wait for a DPF and then begin drive reconstruction. The probability of a URE was so low (given the drive capacities of the day) that all you had to worry about was the rebuild time, and the degraded performance during the rebuild (as I/Os may have to be satisfied through block reconstruction).

But that isn’t how most RAID controllers today deal with drive UREs and drive failures. On the contrary, for some time now, RAID controllers (at least the recent ones I’ve read about) have used better methods to decide when to perform the rebuild.

[Image: A 5400 RPM SATA drive]

Consider this alternative, which I know to be used by at least a couple of array vendors. When a drive in a RAID volume reports a URE, the array controller increments a per-drive error count and satisfies the I/O by rebuilding the block from parity. It then rewrites the block on the disk that reported the URE (potentially with verify), and if the sector is bad, the drive microcode will remap it and all will be well.
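A rough sketch of that read path (my own illustration in Python; the names, the counter, and the XOR reconstruction are assumptions, not any vendor's actual firmware):

```python
from functools import reduce

class UnrecoverableReadError(Exception):
    """Raised when a sector read fails the drive's ECC (a URE)."""

def xor_blocks(blocks):
    # RAID5: a missing block is the XOR of its peer blocks plus parity.
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def read_block(drives, idx, lba, ure_counts):
    """Service a read; on a URE, reconstruct, rewrite, and count the error."""
    try:
        return drives[idx].read(lba)
    except UnrecoverableReadError:
        # Rebuild just this block from the surviving drives plus parity.
        data = xor_blocks(d.read(lba) for i, d in enumerate(drives) if i != idx)
        # Rewrite in place; the drive microcode remaps the sector if it is bad.
        drives[idx].write(lba, data)
        ure_counts[idx] = ure_counts.get(idx, 0) + 1
        return data
```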

When the counter exceeds some threshold, and with the disk that reported the URE still in a usable condition, the RAID controller will begin the RAID5 recovery. Robin is correct that RAID recovery after a DPF will become less and less dependable as drive capacities grow. But with improvements in the integration of SMART and the significant improvements in the predictability of drive failures, the frequency of RAID5 and RAID6 reconstruction failures is dramatically lower than predicted in the referenced articles, because these reconstructions are triggered on URE, not on DPF.
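The important difference is that the suspect drive is still readable during this proactive rebuild, so most blocks can be copied directly, and a URE encountered anywhere can itself be repaired from parity. A minimal sketch of the trigger, with an assumed threshold and a hypothetical SMART check:

```python
URE_THRESHOLD = 10  # assumed per-drive limit; real controllers tune this

def maybe_start_rebuild(volume, idx, ure_counts, smart):
    """Begin a copy-based rebuild while the suspect drive still works."""
    if ure_counts.get(idx, 0) > URE_THRESHOLD or smart.predicts_failure(idx):
        spare = volume.allocate_hot_spare()
        # Copy directly from the suspect drive where possible, falling back
        # to parity reconstruction only for blocks it can no longer read.
        volume.rebuild(source=idx, target=spare)
```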

Look at the specifications for the RAID controller you use.

When is RAID recovery initiated? Upon the occurrence of an Unrecoverable Read Error (URE), or only upon a Drive Permanent Failure (DPF)?

Several have proposed that ZFS with multiple copies is the way to go. While it addresses the issue, I submit to you that it does so at the wrong level of the stack. Mirroring at the block level, with the option to have multiple mirrors, is the correct (IMHO) solution. Disk block error recovery should not be handled in the file system.