RAID Data Recovery

Whether you’re a small business owner with a small network attached storage (NAS) device or part of a corporate IT team taking care of a massive storage area network (SAN) device, you’re bound to experience a RAID crash eventually.

Although RAID systems are implemented for capacity and data redundancy reasons, no system is 100% crash proof. Especially if this is your first experience with a NAS, SAN, or server crash, you might not know what to do or where to go next.

When you can’t get that data back on your own, you can rely on the experts at Gillware for help. For especially mission-critical business data, our expedited emergency RAID data recovery services can get your business back up and running in no time at all.

Not a day goes by without Gillware performing multiple RAID recoveries. Our RAID recovery experts are familiar with every level of RAID array, and our engineers have successfully solved thousands of RAID recovery cases since 2004. Our RAID experts have more than 10,000 hours of data recovery experience in our lab. Using advanced engineering techniques, we can recover data from RAIDs large and small, from three- or four-drive NAS devices to SAN devices with dozens of hard drives.

Are you ready for Gillware to take care of your RAID data recovery needs?

Our R&D Director Greg Andrzejewski, a University of Michigan alum, has been with us for over 8 years. He is the lead designer of our data recovery software HOMBRE and has thousands of successful RAID data recovery cases under his belt in our lab.
A NAS RAID belonging to a small business owner that our RAID data recovery engineers were able to assist with

RAID Data Recovery Process

Free Phone Consultation

First, you'll speak to one of our RAID experts, who will gather information about the type of array you've lost data from. They will then give you an accurate estimate of the likelihood of success, the amount of time the recovery will take, and, of course, the recovery cost.

Free Inbound Shipping by UPS Overnight

If you decide to move forward with your 100% free RAID evaluation and are located in the continental US, our associate will provide you with a complimentary UPS Overnight shipping label. You will typically only need to send us the drives, not whole servers or miscellaneous RAID controller equipment.

Free Evaluation and Data Recovery Work

RAID data recovery engineers at Gillware will make forensic write-blocked image copies of all the drives, possibly making temporary repairs to individual drives first to achieve an optimal read of each disk. We never alter the data on individual drives during a RAID recovery effort. As we image the drives with our HOMBRE platform, we build a massive relational database containing millions of pieces of metadata: every file header and file definition for thousands of file types across more than ten file systems goes into this database.

With the knowledge of millions of pieces of evidence, our engineers will determine what RAID level exists, which combination of drives is optimal to use in the recovery effort, what the stripe size is, what the striping pattern is, and ultimately what order the drives are in.

Once they've figured out the optimal physical array, they must locate the logical units and file systems that contain your data. Each and every LUN, partition, and file system will be inspected. Thousands of files will be tested automatically as part of our process to prove our results are positive before you pay us anything.

Truly the Best Option for RAID Recovery Services

It is not often you are able to hire a team of advanced computer scientists to perform many days of forensic consulting services and have it be financially risk-free! It is not free for us to have human experts with computer science degrees spend days or weeks on data recovery cases. The only way we can work this way is if we have amazing RAID recovery success rates, and we do!

As part of our financially risk-free process, we do all the work before charging you, and we only send you a bill after we’ve successfully recovered your important data.

RAID Levels Explained

If you want to prove you are an expert in RAID data recovery, you must demonstrate full knowledge of how all these RAID levels function normally. If you don’t have a 100% understanding of the ins and outs of RAID levels, you’ll have very little luck recovering your data.

Single level RAID 0

Single level RAID 0 is a RAID configuration without redundancy. This lack of redundancy makes RAID recovery for RAID 0 arrays a common occurrence. If any drive in a RAID 0 array fails, the whole array fails and the system will not be able to read or write files. Data from individual disks can be recovered, but it is useless unless it can be reconstructed properly. Since every RAID 0 implementation has unique configuration parameters, RAID 0 reconstruction is a very difficult task. At Gillware, we have technicians who specialize in RAID 0 reconstructions and a wide variety of tools to accomplish the task. We also have the ability to develop custom programs to reconstruct RAID 0 arrays of odd configurations.

Learn more about RAID 0 recovery

JBOD (“Just a Bunch of Disks”) can be similar to RAID 0, but without striping. Like RAID 0, the entire logical volume will die if one drive falls offline.

RAID 0 reconstruction prices depend on the stripe width of the array (number of drives being striped) and the total capacity of the array. Please call us at 877-624-7206 Ext. 1 for a quote.

RAID 0 arrays are built for performance. Instead of reading/writing one file to one disk, RAID 0 arrays read/write one file to multiple disks in parallel. The process of breaking up a file into pieces and writing it to multiple disks is called striping. The following is a simple example illustrating how RAID 0 works.

Say we have a text file that consists of four characters “ABCD” and a RAID 0 array of two disks (stripe width of two) with a stripe/block size of four bits (equal to one half of a character).

Here’s a data representation of our text file:

Text in file:       A          B          C          D
ASCII code (hex):   41         42         43         44
Raw binary data:    0100 0001  0100 0010  0100 0011  0100 0100

Here’s what happens when we save the file:

The first block of four bits (first half of letter A) is written to disk 1 and the second block of four bits (second half of letter A) is written to disk 2. This pattern will repeat until all bits are written.

          DISK 1   DISK 2
Letter A  0100     0001
Letter B  0100     0010
Letter C  0100     0011
Letter D  0100     0100

As you can see, half of the file is on disk 1 and the other half is on disk 2. Notice that the file is not split down the middle (down the middle would have “AB” on disk 1 and “CD” on disk 2). Half of each character is stored on each disk.
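
To make the striping mechanics concrete, here is a minimal Python sketch (purely illustrative; the helper names to_bits and stripe are our own invention, not Gillware tooling) that splits "ABCD" into 4-bit blocks and deals them out to two simulated disks, reproducing the table above.

def to_bits(text):
    # 8 bits per ASCII character
    return "".join(format(ord(c), "08b") for c in text)

def stripe(bits, num_disks=2, stripe_size=4):
    # Deal consecutive 4-bit blocks to the disks in round-robin order
    disks = ["" for _ in range(num_disks)]
    for block_num, start in enumerate(range(0, len(bits), stripe_size)):
        disks[block_num % num_disks] += bits[start:start + stripe_size]
    return disks

disk1, disk2 = stripe(to_bits("ABCD"))
print(disk1)  # 0100010001000100 -- reads back as "DD" on its own
print(disk2)  # 0001001000110100 -- reads back as DC2 and "4" on its own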

Now consider what happens when a disk fails and the data from each disk is recovered.

DISK 1 (read on its own)     DISK 2 (read on its own)
0100 0100  = "D"             0001 0010  = DC2 (Device Control 2)
0100 0100  = "D"             0011 0100  = "4"

Disk 1 by itself contains the text “DD” and disk 2 by itself contains a special character called “Device Control 2” and the number 4. Obviously, the data from each disk is garbage unless it’s reconstructed in the proper order.

The array must be reconstructed properly by assembling alternating blocks from disk 1 and disk 2 like this:

DISK 1   0100
DISK 2   0001   together: 0100 0001 = A
DISK 1   0100
DISK 2   0010   together: 0100 0010 = B
DISK 1   0100
DISK 2   0011   together: 0100 0011 = C
DISK 1   0100
DISK 2   0100   together: 0100 0100 = D

Reassembled in this alternating order, the blocks once again spell out the original string "ABCD".
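
The same reassembly can be sketched in a few lines of Python (illustrative only; a real recovery also requires discovering the stripe size and drive order described earlier): interleave alternating 4-bit blocks from the two disk images and decode the result as 8-bit characters.

disk1 = "0100010001000100"  # raw bits read from disk 1 ("DD" by itself)
disk2 = "0001001000110100"  # raw bits read from disk 2 (DC2 and "4" by itself)

def reassemble(disks, stripe_size=4):
    # Take block 0 from disk 1, block 0 from disk 2, block 1 from disk 1, and so on
    blocks_per_disk = len(disks[0]) // stripe_size
    bits = ""
    for i in range(blocks_per_disk):
        for disk in disks:
            bits += disk[i * stripe_size:(i + 1) * stripe_size]
    # Decode the reassembled bit stream as 8-bit ASCII characters
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

print(reassemble([disk1, disk2]))  # -> "ABCD"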

Single level RAID 1

Single level RAID 1 is the simplest RAID configuration requiring just two disks of equal size. When data is written to a RAID 1 array, it is written to one disk and simultaneously copied or “mirrored” to a redundant disk. If one drive of a RAID 1 array fails, the system will continue to operate using the redundant disk.

RAID mirrors are common candidates for data recovery because often both drives die simultaneously, or the RAID controller was not properly set up for failure notification.

Learn more about RAID 1 recovery

RAID 1 arrays are built for security. Instead of writing data to 1 disk, RAID 1 arrays write the same data to two disks. The following is a simple example illustrating how RAID 1 works.

Say we have a text file that consists of four characters "ABCD" and a RAID 1 array of two disks (a mirrored pair).

Here’s a data representation of our text file:

Text in file:       A          B          C          D
ASCII code (hex):   41         42         43         44
Raw binary data:    0100 0001  0100 0010  0100 0011  0100 0100

Here’s what happens when we save the file:

Every block of data is written to disk 1 and an identical copy is simultaneously written ("mirrored") to disk 2. This repeats until the entire file exists on both disks.

          DISK 1      DISK 2
Letter A  0100 0001   0100 0001
Letter B  0100 0010   0100 0010
Letter C  0100 0011   0100 0011
Letter D  0100 0100   0100 0100

As you can see, a RAID 1 system is very simple. Recovering data from a simple RAID 1 system is exactly the same as recovering data from a single disk.

Single level RAID 5

Single level RAID 5 is a RAID configuration that combines striping with redundancy. The striping portion of RAID 5 is very similar to that of RAID 0, but the redundancy portion is quite different from RAID 1. RAID 5 systems create redundancy by calculating parity blocks and distributing these parity blocks among all disks in the array. A minimum of three disks is required for a RAID 5 system. The maximum number of disks is limited by the RAID controller. RAID 5 systems are very popular since they have the performance benefits of striping with the added security of redundancy. Even better, the storage efficiency (the ratio of the RAID system capacity to the total capacity of all individual disks) is much higher than that of RAID 1 (which is 50%); an n-disk RAID 5 array has an efficiency of (n - 1)/n.

Commonly, when we receive RAID cases at Gillware, multiple drives have failed, which has destroyed the redundancy.

Learn more about RAID 5 data recovery

Before getting into the details of how data is stored in a RAID 5 system, let’s take a look at parity and how exactly redundant storage efficiency can exceed 50%. Calculating parity is nothing more than applying the XOR binary operator to the data stored on the disks. XOR stands for “exclusive OR” meaning that the output will equal 1 if and only if the two bits being XOR’d are different. The following is a truth table for the XOR function that illustrates this clearly.

A   B   A XOR B
0   0   0
0   1   1
1   0   1
1   1   0

The XOR function has a very unique property that lends itself to efficient data redundancy. If XOR is applied twice in a row, it negates itself. So if we have A and XOR it twice with B, we get A as the result:

A XOR B XOR B = A
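
You can verify this property with whole bytes just as easily as with single bits; here is a tiny Python check (illustrative only):

a, b = 0b01001001, 0b01000100   # the bytes for "I" and "D"
parity = a ^ b                  # XOR the two bytes together
assert parity ^ b == a          # XORing with b a second time restores a
print(format(parity, "08b"))    # 00001101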

The following example will demonstrate how to get 80% redundant storage efficiency out of an array of 5 disks. For 80% efficiency, we must store real data on 4 disks and use only 1 disk for redundancy. Let’s store the string “RAID” on our 4 non-redundant disks.

Here’s how our string is represented in binary:

Text in file:       R          A          I          D
ASCII code (hex):   52         41         49         44
Raw binary data:    0101 0010  0100 0001  0100 1001  0100 0100

Here is our string in our disk array at 80% efficiency with the redundancy portion (parity) taking just 20%. The data on Disk E is the parity and is calculated by applying XOR to all the other data like this: E = A XOR B XOR C XOR D

DISK A      DISK B      DISK C      DISK D      DISK E
0101 0010   0100 0001   0100 1001   0100 0100   0001 1110
R           A           I           D           PARITY

(Disks A through D hold the real data; Disk E holds the parity.)

Now say that Disk C fails and we’re left with the data from Disk A, B, D, and E. We can rebuild the data from Disk C by applying XOR to the data on the remaining disks. Since the data on Disk E = A XOR B XOR C XOR D, applying XOR to all remaining data reduces to

(A XOR B XOR C XOR D) XOR A XOR B XOR D

Because XOR is commutative and associative, we can rearrange this to

(A XOR A) XOR (B XOR B) XOR (D XOR D) XOR C

Since XOR is applied twice in a row to the data from A, B, and D, all we are left with at the end is C. This is, of course, the data from our failed disk representing the letter "I". The following chart illustrates the calculation:

DISK A      DISK B      DISK D      DISK E      A XOR B XOR D XOR E = C
0101 0010   0100 0001   0100 0100   0001 1110   0100 1001
R           A           D           PARITY      I
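
The same calculation fits in a short Python sketch (illustrative only; the byte values are taken from the chart above):

disk_a, disk_b, disk_d = 0b01010010, 0b01000001, 0b01000100   # "R", "A", "D"
disk_e = 0b00011110                           # parity, originally written as A XOR B XOR C XOR D

rebuilt_c = disk_a ^ disk_b ^ disk_d ^ disk_e # the repeated XORs cancel A, B, and D
print(format(rebuilt_c, "08b"), chr(rebuilt_c))   # 01001001 I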

RAID 5 by Example

RAID 5 arrays are built for performance and redundancy. Instead of reading/writing 1 file to 1 disk, RAID 5 arrays read/write 1 file to multiple disks in parallel. They also calculate a parity block when writing so that data can be recovered if a disk fails. This XOR parity is critical when attempting a RAID recovery.  The following is a simple example illustrating how RAID 5 works.

Say we have a text file that consists of three characters “ABC” and a RAID 5 array of three disks with a stripe/block size of four bits (equal to one half of a character).

Here’s a data representation of our text file:

Text in file:       A          B          C
ASCII code (hex):   41         42         43
Raw binary data:    0100 0001  0100 0010  0100 0011

Here’s what happens when we save the file:

The first block of four bits (first half of letter A) is written to Disk 1 and the second block of four bits (second half of letter A) is written to Disk 2. Disk 3 is parity for Block 0. Next, the third block (first half of letter B) is written to Disk 1 and the fourth block (second half of letter B) is written to Disk 3. Disk 2 is parity for Block 1. Finally, the fifth block (first half of letter C) is written to Disk 2 and the sixth block (second half of letter C) is written to Disk 3. Disk 1 is parity for Block 2. Notice how the parity is on a different disk for every block. The order of the parity is called the parity order, parity map, or parity rotation. In this example, the parity order is backwards (Disk 3, Disk 2, Disk 1).

         DISK 1          DISK 2          DISK 3
Block 0  0100 (A, hi)    0001 (A, lo)    0101 (parity)
Block 1  0100 (B, hi)    0110 (parity)   0010 (B, lo)
Block 2  0111 (parity)   0100 (C, hi)    0011 (C, lo)

*Blocks marked "parity" hold the parity calculation for that block.
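
To make the rotation concrete, here is a small Python sketch (a simplified model, not a real RAID controller; the function raid5_layout is hypothetical) that lays out the "ABC" example with the parity rotating backwards across Disk 3, Disk 2, Disk 1:

def raid5_layout(blocks, num_disks=3):
    # blocks: the data stripes in write order; each row holds two data blocks plus one parity block
    rows = []
    data_per_row = num_disks - 1
    for row_num in range(len(blocks) // data_per_row):
        data = blocks[row_num * data_per_row:(row_num + 1) * data_per_row]
        parity = format(int(data[0], 2) ^ int(data[1], 2), "04b")
        parity_disk = (num_disks - 1) - (row_num % num_disks)   # Disk 3, then Disk 2, then Disk 1
        row = list(data)
        row.insert(parity_disk, parity)
        rows.append(row)
    return rows

stripes = ["0100", "0001", "0100", "0010", "0100", "0011"]      # "ABC" split into 4-bit stripes
for block_num, row in enumerate(raid5_layout(stripes)):
    print("Block", block_num, row)
# Block 0 ['0100', '0001', '0101']
# Block 1 ['0100', '0110', '0010']
# Block 2 ['0111', '0100', '0011']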

Now consider what happens when Disk 3 fails and all we have left is Disk 1 and Disk 2. Continue reading about RAID Arrays at one of our informational pages listed below.

DISK 1 (raw):  0100  0100  0111
DISK 2 (raw):  0001  0110  0100

Read back-to-back as 8-bit characters: 0100 0100 = "D", 0111 0001 = "q", 0110 0100 = "d"

As you can see, the raw data from Disk 1 and Disk 2 represents the string “Dqd” which is complete garbage. Our data should be the string “ABC”. To recover the data from the two remaining disks, we must properly reassemble the stripes and use the XOR operator to recalculate the lost data from Disk 3.

The following chart illustrates how to properly reassemble the RAID 5 array:

Disk 1 Block 0                           0100
Disk 2 Block 0                           0001   ->  0100 0001 = A
Disk 1 Block 1                           0100
(Disk 1 Block 1) xor (Disk 2 Block 1)    0010   ->  0100 0010 = B
Disk 2 Block 2                           0100
(Disk 1 Block 2) xor (Disk 2 Block 2)    0011   ->  0100 0011 = C

As you can see, the RAID 5 reconstruction process is quite complex. The reconstruction must be done exactly right or the resulting data will be garbage.
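
For the curious, the degraded-mode recovery just described fits in a short Python sketch (again purely illustrative, using the block values from the charts above; the helper xor4 is our own):

disk1 = ["0100", "0100", "0111"]   # Block 0 data, Block 1 data, Block 2 parity
disk2 = ["0001", "0110", "0100"]   # Block 0 data, Block 1 parity, Block 2 data

def xor4(a, b):
    # XOR two 4-bit blocks given as bit strings
    return format(int(a, 2) ^ int(b, 2), "04b")

# Block 0: the failed Disk 3 only held parity, so both data halves survive untouched.
# Block 1: Disk 3 held the second half of "B"; rebuild it as data XOR parity.
# Block 2: Disk 3 held the second half of "C"; rebuild it the same way.
bits = (disk1[0] + disk2[0]
        + disk1[1] + xor4(disk1[1], disk2[1])
        + disk2[2] + xor4(disk1[2], disk2[2]))

print("".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8)))   # -> "ABC"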

Multi-level RAID 0+1

Multi-level RAID 0+1 combines the performance enhancements of RAID 0 with the redundancy of RAID 1. A RAID 0+1 array is constructed by taking a RAID 0 array consisting of two or more disks and mirroring the entire array to a different array consisting of an equal number of disks. This creates a RAID 1 Super-Array consisting of two RAID 0 Sub-Arrays. RAID 0+1 is often referred to as a mirror of stripes because it operates by creating a mirror of a stripe set. Some people might refer to this configuration as RAID 1+0, but this does not follow most industry standards. The standard multi-level RAID naming convention is to list the Sub-Array first and the Super-Array second. The following chart illustrates a RAID 0+1 setup for an array of six disks. To see the difference between RAID 0+1 and RAID 1+0, check out the RAID 1+0 tab.

RAID 0 Sub-Arrays (stripes)
RAID 1 Super-Array (mirror):
  Sub-Array A: Disk 1   Disk 2   Disk 3
  Sub-Array B: Disk 4   Disk 5   Disk 6

When data is inserted into a RAID 0+1 array, it is first striped into one RAID 0 Sub-Array and then the entire Sub-Array is mirrored. Using our chart from above as an example, data would first be striped into RAID 0 Sub-Array A and then mirrored to Sub-Array B. It is important to note that if any disk from a RAID 0 Sub-Array is lost, the entire Sub-Array is lost (because there is no redundancy in RAID 0). For example, if Disk 4 fails, Sub-Array B as a whole fails. This is because the RAID controller views each Sub-Array as a single logical unit. The system will continue to operate without Sub-Array B but it is now reduced to a single level RAID 0. If any drive in Sub-Array A fails at this point, the entire system will fail.

The RAID 0+1 reconstruction process is nearly identical to the single level RAID 0 reconstruction process with the added complications of determining disk layout. See the RAID 0 tab for a detailed explanation.

Multi-Level RAID 1+0 (RAID 10)

Multi-Level RAID 1+0 (RAID 10) combines the redundancy of RAID 1 with the performance enhancements of RAID 0. A RAID 1+0 array is constructed by taking two or more RAID 1 Sub-Arrays and applying RAID 0 striping across these arrays to create a RAID 0 Super-Array. RAID 1+0 is often referred to as a stripe of mirrors because it operates by striping data across multiple mirrored sets. Some people might refer to this configuration as a RAID 0+1 but this does not follow most industry standards. The standard multi-level RAID naming convention is to list the Sub-Array first and the Super-Array second. The following chart illustrates a RAID 1+0 setup for an array of six disks. To see the difference between RAID 1+0 and RAID 0+1, see the RAID 0+1 tab.

Learn more about RAID 10 data recovery

RAID 1 Sub-Arrays (mirrors)
RAID 0 Super-Array (stripes):
  Sub-Array A: Disk 1   Disk 2
  Sub-Array B: Disk 3   Disk 4
  Sub-Array C: Disk 5   Disk 6

When data is inserted into a RAID 1+0 array, it is first split into stripes and each stripe is written to one disk of each RAID 1 Sub-Array. The stripes are then mirrored individually to the other disk in the Sub-Array. If any disk from a RAID 1 Sub-Array fails, the system will continue to operate normally since the Sub-Arrays are redundant. The system will even continue to operate normally if a single disk from each Sub-Array fails. When a disk failure occurs in a RAID 1+0 system, the system is reduced to a state in between RAID 0 and RAID 1. For example, if Disk 4 fails, the stripes mirrored on Disks 1 and 2 and on Disks 5 and 6 are still redundant, but the stripes on Disk 3 no longer have a redundant copy.

See the RAID 0 tab for a detailed explanation.

RAID 2

Single level RAID 2 systems are very rare. Our engineers were hard-pressed to recall the last time we saw one in our lab. We are only discussing it here for the sake of completeness. RAID 2 stripes data across many disks at the bit level and calculates redundancy bits that get stored on dedicated disks. The data on the redundancy disks is calculated using a Hamming Error Correction Code (ECC). When data is read, the ECC is also read and the system is able to correct for errors that occurred in writing. RAID 2 is not used in practice today since modern drives have ECC built into each sector on the disk.

RAID 3

Single level RAID 3 is similar to RAID 5 in that it stripes data across many disks and calculates parity. But instead of distributing parity across many disks, RAID 3 uses a dedicated parity disk. RAID 3 also stripes data on the byte level.

RAID 4

Single level RAID 4 is similar to RAID 5 in that it stripes data across many disks and calculates parity. But instead of distributing parity across many disks, RAID 4 uses a dedicated parity disk. RAID 4 differs from RAID 3 in that it stripes data on the block level.

RAID 6

Single level RAID 6 is similar to RAID 5 in that it stripes data across many disks and distributes parity throughout the disks. But instead of storing one set of parity data per block, RAID 6 stores two sets of parity data per block. As a result of this additional parity, RAID 6 requires a minimum of four disks to implement, compared to three for RAID 5. This "dual distributed parity" allows for the simultaneous loss of two drives in the system. Most of the time when we perform data recovery on a RAID 6 array, there have been three or more drive failures, usually due to electrical surges or improper server shutdowns.

Learn more about RAID 6 recovery

Other multi-level RAID types

Multi-level RAID systems are very complicated. Reconstruction and data recovery of multi-level RAID systems is also very complicated. Gillware is very proud of its technical staff and their RAID analysis and reconstruction skills. If you’ve ever tried to reassemble a complicated multi-level RAID system, you know that the tools available to do so are very limited. At Gillware, we analyze your system in detail to determine its parameters and if necessary, write custom reconstruction software specific to your system. Gillware looks forward to helping you with your RAID recovery needs.

Multi-Level RAID Naming Convention

Multi-level RAID naming conventions are confusing. The standard multi-level RAID naming convention that is most commonly used is to list the Sub-Array first and the Super-Array second. For example, if you have three RAID 1 Sub-Arrays made up of two disks each and put them together to make a RAID 0 Super-Array, you are implementing a RAID 1+0 system of six disks. Alternatively, if you have two RAID 0 Sub-Arrays of three disks each and put them together to make a RAID 1 Super-Array, you are implementing RAID 0+1. See the RAID 1+0 and RAID 0+1 tabs for more details.

RAID 5+0

RAID 5+0 is a RAID 0 Super-Array consisting of RAID 5 Sub-Arrays. Data is striped across the RAID 0 Super-Array and then striped again, with a parity calculation, within each RAID 5 Sub-Array. Because the Sub-Arrays are striped together, if either RAID 5 Sub-Array fails completely, you'll need RAID recovery services.

RAID 5 Sub-Arrays (stripes w/ parity)
RAID 0 Super-Array (stripes):
  Sub-Array A: Disk 1   Disk 2   Disk 3
  Sub-Array B: Disk 4   Disk 5   Disk 6

RAID 0+5

RAID 0+5 is a RAID 5 Super-Array consisting of RAID 0 Sub-Arrays. Data is striped with parity into the RAID 5 array and then the data and parity are striped several more times into several RAID 0 arrays.

RAID 0 Sub-Arrays (stripes)
RAID 5 Super-Array (stripes w/ parity):
  Sub-Array A: Disk 1   Disk 2
  Sub-Array B: Disk 3   Disk 4
  Sub-Array C: Disk 5   Disk 6

RAID 3+0

RAID 3+0 is a RAID 0 Super-Array consisting of RAID 3 Sub-Arrays. Data is striped into the RAID 0 array and then striped several more times with parity into several RAID 3 arrays, each with its own dedicated parity disk.

RAID 3 Sub-Arrays (stripes w/ dedicated parity)
RAID 0 Super-Array (stripes):
  Sub-Array A: Disk 1   Disk 2   Disk 3
  Sub-Array B: Disk 4   Disk 5   Disk 6

RAID 0+3

RAID 0+3 is a RAID 3 Super-Array consisting of RAID 0 Sub-Arrays. Data is striped with dedicated parity into the RAID 3 array and then the data and parity are striped several more times into several RAID 0 arrays. With RAID 0+3, one of the RAID 0 Sub-Arrays stores all of the parity calculations.

RAID 0 Sub-Arrays (stripes)
RAID 3 Super-Array (stripes w/ dedicated parity):
  Sub-Array A: Disk 1   Disk 2
  Sub-Array B: Disk 3   Disk 4
  Sub-Array C: Disk 5   Disk 6

RAID 5+1

RAID 5+1 is a RAID 1 Super-Array consisting of RAID 5 Sub-Arrays. Data is striped w/ parity into one RAID 5 Sub-Array and then the entire Sub-Array is mirrored. RAID 5+1 is called a "belt and suspenders" configuration because of its extremely high fault tolerance. In the six-disk RAID 5+1 example shown below, it is possible to recover all data even after 4 out of 6 disks fail simultaneously! All that is needed to recover the data is two stripes. This means that as long as the four failed disks are NOT sets of matching stripes (disks 1, 2, 4, and 5, for example), data can be recovered. As a result of this high fault tolerance, storage efficiency is very low and cost is very high. This extreme level of redundancy makes these arrays a rare candidate for data recovery efforts, but mass environmental disasters like fires or floods can and do happen.

RAID 5 Sub-Arrays (stripes w/ parity)
RAID 1 Super-Array (mirrors):
  Sub-Array A: Disk 1   Disk 2   Disk 3
  Sub-Array B: Disk 4   Disk 5   Disk 6

RAID 1+5

RAID 1+5 is a RAID 5 Super-Array consisting of RAID 1 Sub-Arrays. Data is striped w/ parity into the RAID 5 Super-Array, and then each stripe is mirrored within its RAID 1 Sub-Array. RAID 1+5 is called a "belt and suspenders" configuration because of its extremely high fault tolerance. In the six-disk RAID 1+5 example shown below, it is possible to recover all data even after 4 out of 6 disks fail simultaneously! All that is needed to recover the data is two stripes. This means that as long as the four failed disks are NOT sets of matching stripes (disks 1, 2, 3, and 4, for example), data can be recovered. As a result of this high fault tolerance, storage efficiency is very low and cost is very high.

RAID 1 Sub-Arrays (mirrors)
RAID 5 Super-Array (stripes w/ parity):
  Sub-Array A: Disk 1   Disk 2
  Sub-Array B: Disk 3   Disk 4
  Sub-Array C: Disk 5   Disk 6