
Iomega StorCenter IX4 Data Recovery Case Study: RAID-10 Failure

Iomega StorCenter IX4 NAS device data recovery

The client in this RAID data recovery case had a four-drive RAID-10 array that had failed when two of its four hard drives died. The RAID-10 array had lived inside an Iomega StorCenter IX4 NAS device. With half of the drives in the array offline, the contents of the NAS device were inaccessible. In order to recover the AutoCAD project files from their RAID-10 array, the client sent their failed Iomega StorCenter NAS to the NAS data recovery specialists in Gillware’s Madison, Wisconsin data recovery lab.

RAID Level: RAID-10
Drive Model: Seagate ST32000542AS (x4)
Total Capacity: 4 TB
Data Loss Situation: Two hard drives died
Type of Data Recovered: AutoCAD project files
Binary Read: 100%
Gillware Data Recovery Case Rating: 10


When your NAS device combines multiple hard drives into a redundant array of independent disks (or RAID), you have a lot of options to choose from, each with its own strengths and weaknesses. The Iomega StorCenter IX4 NAS supports three storage configurations: RAID-5, RAID-10, and JBOD (which is not technically a RAID). Different RAID levels offer different benefits; some prioritize performance over redundancy and data safety.

For instance, multiple hard drives in a RAID-0 offer increased speed, since you can access data from multiple drives simultaneously. However, if even a single hard drive goes offline, the entire array crashes. Other RAID levels, like RAID-1, prioritize the safety of your data over performance by “mirroring” the contents of one drive onto another in case one drive crashes. RAID levels such as RAID-5 and RAID-6 make varying degrees of compromise between performance and redundancy.

You can also stack RAID levels on top of each other, creating a nested RAID array such as RAID-10, which combines RAID-1 mirroring with RAID-0 striping.

How RAID-10 Works

To create a RAID-10 array, the RAID controller first takes an even number of hard drives and divvies them up into pairs. Each pair of drives forms a RAID-1 array of mirrored “twins”. Then, the mirrored pairs are put together and striped, akin to how the RAID controller breaks up the data you write to a RAID-0 array. Essentially, a RAID-10 array treats each mirrored pair as a single hard drive in a RAID-0 array.

The end result of a RAID-10 array is an array that combines the fault tolerance of RAID-1 with the performance boosts of RAID-0.
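The pairing-then-striping described above can be sketched in a few lines of Python. This is a toy illustration only; real controllers stripe at the block level with configurable stripe sizes (often 64 KB or more), and the 4-byte stripe unit and sample data here are invented for readability.

```python
STRIPE = 4  # bytes per stripe unit (toy value; real arrays use e.g. 64 KB)

def raid10_write(data: bytes, num_pairs: int):
    """Split data into stripe units, round-robin them across mirrored
    pairs (as in RAID-0), and copy each unit onto both drives of its
    pair (as in RAID-1)."""
    # drives[p][0] and drives[p][1] are the mirrored "twins" of pair p
    drives = [[bytearray(), bytearray()] for _ in range(num_pairs)]
    units = [data[i:i + STRIPE] for i in range(0, len(data), STRIPE)]
    for i, unit in enumerate(units):
        pair = drives[i % num_pairs]  # striping across pairs
        pair[0] += unit               # mirroring within the pair
        pair[1] += unit
    return drives

drives = raid10_write(b"AUTOCAD PROJECT!", num_pairs=2)
# Pair 0 holds stripe units 0 and 2; pair 1 holds units 1 and 3,
# and the two twins within each pair hold identical data.
```

Because every stripe unit lives on both twins of its pair, losing one drive costs no data; losing both twins of the same pair loses every other stripe unit.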

How RAID-10 Failure Happens

RAID-10 failure probability

As you add mirrored pairs to a RAID-10 array, the chance that two simultaneous drive failures will take the array down decreases.

But no storage device and no level of RAID array is immune to failure, and RAID-10 is no exception. A RAID-0 array fails when any one of its drives (out of however many you string together) fails. Likewise, a RAID-10 array fails when one mirrored pair of drives falls offline, which requires both drives in the same pair to fail. That seems unlikely, but as everyone at Gillware can attest, it happens. A lot.

The more hard drives (and more mirrored pairs) you have in your RAID-10 array, the lower the chance that any two hard drive failures will land within the same mirrored pair. For example, with eight hard drives forming four mirrored pairs, there are 28 possible combinations of two failed drives, four of which result in failure (roughly a 14.3% chance of failure).

Drop down to four hard drives and you end up with only six combinations, two of which result in failure (a 33.3% chance of failure). The client in this RAID file recovery case had four hard drives in their RAID-10 array, the minimum number of drives needed to build a RAID-10. Half of the hard drives in the client's array had crashed, and the client turned to Gillware's professional hard drive data recovery services.
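The percentages above follow from simple counting: out of all the ways two drives can fail, only the combinations where both failures hit the same mirrored pair are fatal. A short sketch of that arithmetic (function name is our own):

```python
from math import comb

def raid10_failure_chance(num_pairs: int) -> float:
    """Chance that two simultaneous drive failures land in the same
    mirrored pair, taking the whole RAID-10 array down."""
    drives = 2 * num_pairs
    total = comb(drives, 2)  # ways any two of the drives can fail
    fatal = num_pairs        # ways both failures hit one mirrored pair
    return fatal / total

print(f"{raid10_failure_chance(2):.1%}")  # 4 drives  -> 33.3%
print(f"{raid10_failure_chance(4):.1%}")  # 8 drives  -> 14.3%
```

This assumes any two drives are equally likely to be the ones that fail; in practice, drives from the same batch under the same workload can fail in correlated ways, which makes the same-pair scenario more common than the raw odds suggest.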

Iomega StorCenter IX4 Data Recovery

Our engineers took a look at the four drives pulled from the client's Iomega StorCenter in our lab's cleanroom. When it comes to rebuilding a RAID-10 array, we only need one hard drive from each mirrored pair—half of the user's total hard drives. The tricky part is finding which mirrored pair has failed and determining which of its two drives is in better shape.
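Once you have a healthy image from each mirrored pair, reassembly is the striping process run in reverse. A toy sketch under the same assumptions as before (a 4-byte stripe unit and invented sample data; real recovery work operates on block-level disk images):

```python
def raid10_read(survivors, length: int, stripe: int = 4) -> bytes:
    """Reassemble striped data given one surviving drive image per
    mirrored pair (survivors[p] is either twin of pair p, since the
    twins are identical)."""
    num_pairs = len(survivors)
    out = bytearray()
    i = 0  # stripe unit index across the whole array
    while len(out) < length:
        pair = i % num_pairs              # which pair holds this unit
        offset = (i // num_pairs) * stripe  # where on that drive
        out += survivors[pair][offset:offset + stripe]
        i += 1
    return bytes(out[:length])

# One surviving drive image per pair of a two-pair (four-drive) array:
# pair 0 held stripe units 0, 2, ...; pair 1 held units 1, 3, ...
survivors = [b"AUTOPROJ", b"CAD ECT!"]
print(raid10_read(survivors, 16))  # b'AUTOCAD PROJECT!'
```

This is why only one drive per pair needs to be imaged: either twin carries the pair's full share of the stripes.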

One of the four hard drives in the array was what we in the industry call “powder”. When a hard drive’s read/write heads crash onto its platters and the drive keeps running, the heads gouge out the coating of the platters where all of the user’s data lives. The heads keep moving while the user tries to issue read and write commands, which can scrape off massive amounts of coating. All of the scraped-off material becomes dust, which coats the innards of the hard drive. When our engineers encounter extremely severe rotational scoring, the word “powder” does a fairly good job of representing the state of the client’s data.

Fortunately, the other hard drive in the same mirrored pair was in much better shape; it had failed far less severely. In fact, with the help of our fault-tolerant imaging tools, our engineers could read 100% of its platters' contents and make a complete disk image of the drive. With that image and images from the healthy mirrored pair, our RAID-10 data recovery specialists could piece together 100% of the client's data. We rated this Iomega StorCenter IX4 recovery case a perfect 10 on our case rating scale.

Will Ascenzo
Will is the lead blogger, copywriter, and copy editor for Gillware Data Recovery and Forensics, and a staunch advocate against the abuse of innocent semicolons.