Talk:RAID/Archive 2


Redundant Array of Independent DIMMs

How about a new page (with appropriate disambiguation) for memory RAID? Not much information out there on it, but it is available and fairly easy to configure on, for instance, a Dell PowerEdge:

http://support.dell.com/support/edocs/systems/pe6850/en/it/h2147c60.htm#wp1081442

(Scroll down to Memory RAID Support.)

The open question being: is R.A.I.D. (with the D now standing for DIMM, just to make matters confusing) the correct acronym?

--64.139.53.163 19:11, 2 March 2006 (UTC)

Is this technique limited to DIMMs specifically, or can it be implemented with memory modules in general? I propose RAIMM. Then you could have a RAIMM for your RIMM RAM, or a Redundant Array of Inexpensive RAM. It being RAIR to use inexpensive RIMM, let's just call it RAR. Marketing will be on that like a tiger. Is there room for ROM? That wouldn't be a ROAR, but a ROAM. I've obviously neglected REM for too long; kindly leave the room. --Markzero 15:31, 27 April 2006 (UTC)


"Seeing as how the VP is such a VIP, shouldn't we keep the PC on the QT? Because if it leaks to the VC, you could end up an MIA, and then we'd all be put on KP."
--Baylink 17:46, 5 September 2006 (UTC)

Matrix RAID

I don't get this sentence:

"Currently, most (all?) of the other cheap RAID BIOS products only allow one disk to participate in a single array."

Surely that should be the other way round, no? "... allow a disk to participate in only a single array"?

It depends on the intention of the author as to what the word "only" describes (as well as which term gets emphasized by its singularity). As quoted, the sentence means that there are several different allowances (i.e. RAID cards have different ways of utilizing disks inside of arrays), but that the RAID controllers allow only one of these methods to be used. In the alternate phrasing (i.e. "... allow a disk to participate in only a single array"), the "only" describes the array, so the semantics lean more toward the RAID controllers allowing only one array, as opposed to a disk being shared in multiple arrays. In fact, the sentence could be changed to "... RAID BIOS products only allow one disk to participate in only a single array", which gets across both meanings; however, the repeated word would actually detract from readability. Since the semantics of the sentence vary slightly with the placement of "only", I'd say "don't fix what ain't broke". (The only placement that would really change things is using "only" to describe the disk, as in "... allow only a disk to participate in a single array". There it sounds as though the controllers allow a disk, as opposed to other forms of media, to participate in an array; clearly that would be a case where the sentence meaning differs from what the author intended.)
Pyth007 19:22, 13 April 2006 (UTC)

Not restriping disks?

If I have

A, B, C disks: A and B are for data, C for parity, something like RAID 4.

A and B are full, and so is C. I add new disks D and E, and I do not wish to "resize A, B".

Can I not simply start writing to D and E, and modify the parity on C so that C = parity(A, B, D, E)? If the parity is XOR, then simply XORing C with the new data on D and E should suffice... (you may have to maintain a pointer as to how much of C is old parity and how much is new, etc., as data comes in on D and E).

In other words, one may not need to "restripe" everything? Is this done somewhere? It would imply treating the "disks added" as a bunch across which there is a stripe.

-alokdube@hotpop.com
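
A minimal sketch of the fold-in idea above (Python, byte-sized "blocks", RAID 4-style XOR parity; the names are illustrative, not from any real driver):

    from functools import reduce

    def xor_blocks(*blocks):
        # Byte-wise XOR across equal-length blocks.
        return bytes(reduce(lambda x, y: x ^ y, vals) for vals in zip(*blocks))

    A = bytes([0x11, 0x22])
    B = bytes([0x33, 0x44])
    C = xor_blocks(A, B)                 # old parity: C = A xor B

    D = bytes([0x55, 0x66])              # newly added disks, written fresh
    E = bytes([0x77, 0x00])
    C = xor_blocks(C, D, E)              # fold new data in: C = A xor B xor D xor E

    assert C == xor_blocks(A, B, D, E)   # same as a full recompute; A and B untouched

Because XOR is associative, folding D and E into the existing parity gives the same result as recomputing it from scratch, which is why no restripe of A and B would be needed.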

>>>>> With present CPU speeds, software RAID can be faster than hardware RAID, though at the cost of using CPU power which might be best used for other tasks

This is totally unsubstantiated. I am unaware of a hardware RAID solution that doesn't operate at the maximum hardware I/O speeds. Provide a reference to substantiate this claim.

See: http://www.redhat.com/docs/manuals/linux/RHL-9-Manual/custom-guide/s1-raid-approaches.html

"With today's fast CPUs, Software RAID performance can excel against Hardware RAID."

>>>>>

--> Wouldn't matter, would it? As long as one can "plug off the BOD and put it elsewhere" and not worry, it should not matter whether it is software or hardware. However, in both cases "not restriping" is a benefit.

X-RAID

This writing is definitely NOT up to the quality required to post to the article (I am extremely tired right now, which does not help my writing ability); could someone please proofread it and post it to the main article (or at least provide some suggestions on reworking it when I'm more awake)? Thanks! 70.92.160.23 05:24, 24 March 2006 (UTC)

Infrant Technologies' X-RAID is a RAID method that allows for the dynamic expansion of existing disk volumes without losing any data already present on the volume. It is currently limited to a maximum of 4 drives per RAID volume, but certain OEM products utilising custom Infrant boards have room for up to 7 drives (although X-RAID mode will still only work with a maximum of 4 drives per volume; work is being done to increase that maximum). X-RAID utilises a proprietary extension to regular Linux volume management and runs using the Ext3 filesystem. This means that X-RAID volumes can be mounted by a standard Linux installation, even when connected to a non-Infrant device. When two drives are installed in an X-RAID device, the X-RAID system runs in a redundant RAID 1 mode; with three or more drives, the device runs in a mode similar to RAID 5. X-RAID allows for the automatic expansion of volumes by replacing all disks in a volume with larger-sized disks. Once all disks are replaced, the volume automatically expands to fill the newly available space. It also allows for starting with only one disk and adding disks on-the-fly without losing any of the data already existing on the volume. Technical details about the X-RAID system are currently unavailable due to a pending U.S. patent filing.

Merge proposal

Since spanned volumes are, AFAIK, basically Windows NT's built-in software RAID concatenation/JBOD, I don't see any reason to have a dedicated article, any more than we have dedicated articles for striped volumes and mirrored volumes. Of course JBOD/concatenation is not RAID, but it's discussed here, as it probably should be, because this is the most suitable article. Having said that, we probably should add a bit more about Windows NT's built-in software RAID/dynamic disks, since it's not really discussed here... Nil Einne 20:16, 29 March 2006 (UTC)

I've had another thought. Alternatively, we could very briefly mention Windows NT dynamic disks in the article and perhaps under RAID 0, RAID 1 and JBOD and make a dynamic disks article to discuss them in detail (since this article is getting rather big) Nil Einne 20:20, 29 March 2006 (UTC)

I don't think it should be merged. This article is already pretty big. Definitely make it a prominent link (or a section, with a "main article" link and brief descriptions here), but there is too much information in the RAID levels article to jam it all in here. Kaldosh 08:39, 18 March 2007 (UTC)

This article should not be merged with spanned volume. While some forms of non-fault-tolerant RAID might relate to spanned volumes, a spanned volume is not fault tolerant. Do not merge the articles.

I agree. It's also a popular search term (RAID <whatever>), and trying to dig info about RAID specs out of another article would be annoying.
~ender 2006-04-10 02:32:AM MST
Agreed, please keep them separate. -Fadookie Talk 21:03, 23 April 2006 (UTC)

RAID 0 typo?

"for example, if a 120 GB disk is striped together with a 100 GB disk, the size of the array will be 200 GB"

Should that be 220 GB? Or is there some kind of storage space loss that wasn't posted? A 10% loss seems like a lot of space to lose. Ghostalker 23:37, 3 April 2006 (UTC)

No, it's correct. The reason for the storage space loss was posted; the sentence before the example reads: "A RAID 0 can be created with disks of differing sizes, but the storage space added to the array by each disk is limited to the size of the smallest disk". It's just a design decision for performance. The last 20 GB would not be striped. Almost all hardware RAID implementations do this. The Linux software RAID 0 driver will use the remaining space (with three disks you'd end up with striping across three disks for the size of the smallest disk, then striping across two disks up to the size of the middle disk, and then just normal storage up to the end of the largest disk). Not sure what other Unix-like OSes do.
Avernar 18:55, 9 April 2006 (UTC)

It is correct that a 120 GB drive and a 100 GB drive in RAID 0 will yield 200 GB. @Avernar - I am not aware of any Linux software RAID drivers being able to use the remaining space of the larger drive in RAID 0. Please document or link to the correct product and version/implementation. Thanks. As far as I know, Windows XP cannot use the extra space on the larger drive in any way. Freedomlinux 20:21, 25 August 2006 (UTC)

It's not done automagically by the Linux software RAID drivers in the kernel, but you can configure this manually. Like I have done: 120+120+200: 3*100MiB /boot RAID1 + 3*1GiB swap + 3*119GiB / RAID5 (238GiB) + remaining 80GiB windoze. Amedee 12:43, 18 October 2006 (UTC)
It's in the raid0.c file. Took me a little bit to find an article about it. Go to RAID0 Implementation Under Linux. Search for "stripe zones". Avernar 08:24, 28 October 2006 (UTC)
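
For the curious, a rough sketch (Python, with made-up function names) contrasting the two capacity policies discussed above, the common "smallest disk times disk count" rule versus md raid0's stripe zones:

    def capacity_common(sizes_gb):
        # Typical hardware RAID 0: each disk contributes only as much
        # as the smallest disk, so [120, 100] -> 200.
        return min(sizes_gb) * len(sizes_gb)

    def capacity_stripe_zones(sizes_gb):
        # Linux md raid0 "stripe zones": keep striping across however
        # many disks still have free space, so nothing is wasted.
        total, prev = 0, 0
        for i, size in enumerate(sorted(sizes_gb)):
            total += (size - prev) * (len(sizes_gb) - i)
            prev = size
        return total

    print(capacity_common([120, 100]))             # 200
    print(capacity_stripe_zones([120, 100]))       # 220
    print(capacity_stripe_zones([100, 150, 200]))  # 450: 3-way, then 2-way, then plain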

Independent vs Inexpensive: Beginning of article, gross error

"In Raid 0 there is no fault tolerance; therefore, each disk in the system is independent and there is no increase in performance."

Excuse me? There is no increase in performance? And clearly this is not a typo. People editing should at least know what they're talking about; this is pollution!

Also, this whole paragraph about Independent vs Inexpensive is not consistent at all. It talks about data being independent (loosely defining the meaning of the usage) and claims that RAID 0, 1, 2 fit the definition while 4, 5 (where the author says "stripping and stripping + parity", as if RAID 0 weren't striped!) would not fit the "independence" definition.

Contrast the logic: the first part of the paragraph loosely says RAID 1 and 2 are fault tolerant, and therefore called independent, yet later we have, "In Raid 0 there is no fault tolerance; therefore, each disk in the system is independent".

This paragraph seems like a mess to me, I would flush it.

Independent?!

Sorry, but WHAT THE HELL ARE YOU ON ABOUT???!

RAID arrays are entirely dependent on every disk in them whilst they are functioning - no matter what RAID level is being used. If the array were to fail, then the disks would be independent, but they are certainly NOT while the array is functioning. The correct definition of the term RAID is most definitely Redundant Array of INEXPENSIVE Disks. Period.

I'm no expert (I came to this page to learn about it), but a quick internet search shows that 'independent' is a very common interpretation of the 'I'. Also, that section in the article made sense to me: in RAID 1 and 2, multiple independent copies of the data are stored: one can be lost without affecting the other; they are independent. The other RAID levels treat the hard drives as one continuous medium, and so cannot logically be separated. Then again, I really have no clue about RAID... BlankAxolotl 01:22, 26 April 2006 (UTC)
The drives are independent in the sense that each drive has its own motor, its own low-level controller, and is generally separate from the rest of the drives (unfortunately they do often share a common power supply, though). Yes, all drives are used by the RAID controller, but they are not dependent on each other. Plugwash 15:46, 26 April 2006 (UTC)
Then how does this line from the article fit in: "This is because these RAID configurations use two or more disks acting as one; there is nothing independent about RAID levels 0, 4 and 5."? (edit of original post:) Actually, here is where I think the confusion is: we can talk about independence for either access time or for redundancy of data. If we are talking about access time, we mean that different disks can be accessed independently, at the same time. If we are talking about redundancy, we mean that the data on one disk is independent of the data on the others: one disk can fail, and the data on the other disks still makes sense. (In RAID 0, 4, 5, losing one disk makes the data on the other disks no longer make sense: we might have only portions of certain files, as I understand from the article.) The article seems to alternate between these two definitions. I think that whole section is confusing and needs to be reworked, but it is not my place to do that. Maybe just remove it and say 'some call it independent, others inexpensive'. BlankAxolotl 18:51, 26 April 2006 (UTC)
This section of the talk page is definitely redundant :-) (I'm not the one who wrote it, but I wrote the one just above ("independent blabla gross error"). Read this. The quick version is this: the section I'm referring to in the article is crap, like I said above, I would flush it.
Ya, at first I thought it made sense, but then I realized that it had only confused me, so I had to correct myself. Anyway, since the people who actually know about RAID aren't doing anything, I went ahead and deleted that section. (You seem to know about RAID. You should change things you see are wrong! People can always change it back.) It wasn't an interesting section anyway (I skipped it when I first read the article). BlankAxolotl 16:21, 27 April 2006 (UTC)
I'm not sure how the opening of this thread can get away with this, nor how the article can continue to report the acronym as "inexpensive" disks. Does that mean I can only "RAID" inexpensive disks? What if I want to "RAID" costly brand new 750 GB drives? Or what if I want to RAID (in software, for example) very costly 2 TB hardware RAIDs? "Independent" is correct, and the article should be changed to reflect this. "Independent" refers to the disk being able to be recognized by the system alone, normally, but some controller is going to use that otherwise independent drive in an array. Thus, Redundant Array of Independent Disks. It's an array... of disks... which would otherwise be independent. So it's correct.
The history of the term RAID goes back to Berkeley (where it was created) - they intended for it to be "inexpensive" (even in the acronym), but modern usage has changed RAID so that it's overwhelmingly "independent". The wiki opening should probably reflect this change.
Correct: in the original article (1988), RAID drives were inexpensive, as compared to SLEDs (Single Large Expensive Drives). Even your costly brand new 750 GiB drives are inexpensive compared to a SLED. Amedee 12:53, 18 October 2006 (UTC)
A real argument could be that RAID 0 isn't redundant, but we all understand that at some point RAID becomes a verb and a noun, so we can accept RAID 0 as a RAID level.
jclauzet - Jun. 30, 18:51:13 UTC
The opening sentence reports both meanings and identifies which came first. A few paragraphs below there's some more on the topic. RAID 0 isn't redundant, which is noted at the beginning of the RAID 0 section. Aluvus 19:23, 30 June 2006 (UTC)
Sigh, I got more caught up in the thread than the facts. You're right. The thread poster was so passionate I got caught up. It's not hard to read about RAID's Berkeley roots on the web. I'm not the right person to add it to the history section, but it seems a good idea.

Benchmarks

Can anyone find benchmarks to support the claims that RAID 0 is faster? Some authors say it's not.

I can't supply benchmarks, but the basic principle of RAID 0 allows for dramatically increased performance (up to N * 100%, where N is the number of disks in the array) both reading and writing, because each disk in the striped set can do its own read and write operations simultaneously. So if you have 3 disks striped together, you can theoretically read 3 times the data in one time segment as a single disk. It's up to the controller to sequence the data correctly so that it's consistent and correct to the system performing the I/O. — KieferSkunk (talk) — 23:01, 2 March 2007 (UTC)
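
A trivial way to state that ceiling (Python, with assumed ideal numbers; a model, not a benchmark):

    def ideal_raid0_read_seconds(file_mb, n_disks, disk_mb_s):
        # Ideal sequential read: the file is split into n_disks stripes
        # read in parallel, so time shrinks by a factor of n_disks.
        return file_mb / (n_disks * disk_mb_s)

    print(ideal_raid0_read_seconds(600, 1, 50))  # 12.0 s on one 50 MB/s disk
    print(ideal_raid0_read_seconds(600, 3, 50))  # 4.0 s on three disks (the N * 100% ceiling)

Real arrays fall short of this because of controller overhead, stripe-size effects, and non-sequential access patterns, which is probably why published benchmarks disagree.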

New diagrams are Wrong

Sorry, Fadookie, but I think your nice new diagrams are slightly wrong/incomplete. (Note: I am no expert on RAID; I came here to learn about it.) The RAID 0 and 1 diagrams seem good.

However, the RAID 3 diagram is wrong because it needs to have the 'B' blocks, like in the old text diagrams. So A10, A11, A12 become B1, B2, B3. Additionally, it would be nice if the 'P' sections would somehow indicate which parity data they hold (as they do in the old text diagrams). I would also change the wording of the accompanying text to "A simultaneous request for any B block would have to wait." (It's better to wait 'til the diagrams are there, in case they change the numbering.)

The RAID 4 diagrams are wrong because they do not emphasize how the parity block for the 'A' blocks is on a separate disk from the 'A' blocks (and the same for the other letters). Also, I do not know much about RAID, but the old text diagrams also seem to indicate that the A parity block has parity covering all three other disks. If the A blocks are meant to be contiguous, it also implies that even contiguous data is split up among the disks. As I say, I don't know how it works; safest is to do it just like in the original text diagrams (where A is split up).

The RAID 5 and 6 diagrams again don't show which parity block is for which data, and do not split up the A blocks (and other letters) like in the text diagrams. I would again assume the text diagrams are right, and do just like them.

I haven't really looked at the rest of your diagrams. RAID 1.5 also looks wrong, but I didn't look carefully. RAID 1+0 and 10 seem OK, but better double-check. I have no clue about the double-parity one.

Although the diagrams are a little bit wrong, they do look very nice!

BlankAxolotl 00:36, 26 April 2006 (UTC)

I didn't make the diagrams, I just grabbed them off Wikimedia Commons. I think they were made by someone on the German Wikipedia.
How about the ones here? Talk:RAID#Rework_Diagrams.3F
-Fadookie Talk 11:34, 27 May 2006 (UTC)
I quite like the look of the diagrams currently on the page. I can probably produce a set of diagrams matching that visual style but conforming to the information in the ASCII art versions, if we're settled that the ASCII renderings are correct. I'm not familiar enough with the ins and outs of some of the RAID levels to make corrections. Aluvus 10:01, 28 May 2006 (UTC)
Done for some. They still need a little retouching, but I think they communicate things a lot more clearly than the ASCII versions. I have added them in; revert if you object. Aluvus 04:20, 8 June 2006 (UTC)
They look really good (better than the original!), and are now correct I think. Maybe the thumbs are a little too small, but I don't think you have control over that. Great! BlankAxolotl 02:03, 16 June 2006 (UTC)

Introduction: Too Long

I think the introduction paragraphs need to be trimmed down, moving the extra information to appropriate sections. The first two paragraphs alone are enough to introduce the topic, the rest would flow better in appropriate sections rather than prominently at the top. What do you think? DanDanRevolutiontalk 15:13, 4 May 2006 (UTC)

Confusion about confusion

From the 0+1 section,

"A RAID 0+1 (also called RAID 01, though it shouldn't be confused with RAID 10)"

The likely confusion is with RAID 1, not RAID 10, surely?

This is probably the case, though I'm not the author. -DanDanRevolution 19:05, 23 May 2006 (UTC)

While it is possible that users could mistake RAID 01 for a longhand version of RAID 1, RAID 01 is also VERY easily confused with RAID 10.

Example: assuming you have 10 hard drives, RAID 01 means making 2 RAID 0 arrays, then mirroring the RAID 0 arrays with RAID 1.

   You end up with 2 RAID 0 arrays, each of 5 disks, then mirror the two sets of 5 together with RAID 1 to get a RAID 01 volume

RAID 10 is making 5 RAID 1 arrays, then striping them together with RAID 0.

   You end up with 5 RAID 1 arrays, each containing 2 drives. Then, the 5 RAID 1 arrays are striped with RAID 0 to yield a single RAID 10 volume with 10 disks.

I am sorry if this is not exactly clear, but I am tired and the concept is also slightly confusing. See http://www.pcguide.com/ref/hdd/perf/raid/levels/mult.htm
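
One way to make the nesting order concrete is to write the two layouts down as plain data (Python; a hypothetical notation for illustration, not any vendor's):

    drives = [f"d{i}" for i in range(10)]

    # RAID 0+1: stripe first, then mirror the two stripe sets.
    raid01 = ("mirror", [("stripe", drives[:5]),
                         ("stripe", drives[5:])])

    # RAID 10: mirror pairs first, then stripe across the five mirrors.
    raid10 = ("stripe", [("mirror", drives[i:i + 2])
                         for i in range(0, 10, 2)])

    print(raid01)  # one mirror of two 5-disk stripes
    print(raid10)  # one stripe over five 2-disk mirrors

The practical difference shows up on failure: losing one disk in RAID 0+1 takes out an entire stripe set, while in RAID 10 it only degrades a single mirror pair.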

Good Article nomination has failed

The Good article nomination for RAID/Archive 2 has failed, for the following reason:

It violates the "well written" requirement outlined in WP:WIAGA. It's currently disorganized. The introduction is too long, the images have conceptual errors, and there is an outstanding {{cleanup}} tag. DanDanRevolution 05:19, 22 May 2006 (UTC)

What is "Advanced Data Guarding" ?

ADG links to Advanced Data Guarding which redirects to RAID. Does Wikipedia:Redirect require us to mention it at the beginning of the RAID article?

http://searchstorage.techtarget.com/sDefinition/0,,sid5_gci1027532,00.html claims that RAID 6, and Advanced Data Guarding (RAID_ADG), and diagonal-parity RAID are the same thing.

But I know that RAID 6 and diagonal-parity RAID, while very, very similar, are not exactly the same. RAID 6 calculates the Q block from the same row of blocks from which it calculates the P parity block (but with a different function). Diagonal parity calculates its second parity per row using the same function (XOR) that the row-parity P block is calculated with, but using a "diagonal" set of blocks.

So ... what is Advanced Data Guarding? Is it exactly the same as RAID 6, but with a catchier name? If not, how is it different? -- User:DavidCary --70.189.73.224 03:14, 30 May 2006 (UTC)
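
To make the distinction concrete, here is a toy sketch (Python, single-byte "blocks"; illustrative only, not any vendor's ADG code). RAID 6 computes P as the XOR of a row and Q as a different, Reed-Solomon-style function of the same row; diagonal parity instead reuses XOR over a diagonal set of blocks from different rows:

    def gf_mul(a, b):
        # Multiplication in GF(2^8) with polynomial 0x11d, as used by
        # typical RAID 6 Q-parity implementations.
        r = 0
        while b:
            if b & 1:
                r ^= a
            a = (a << 1) ^ (0x11d if a & 0x80 else 0)
            b >>= 1
        return r

    row = [0x10, 0x20, 0x30]          # data blocks D0, D1, D2 in one row

    P = row[0] ^ row[1] ^ row[2]      # P parity: plain XOR of the row

    Q, coeff = 0, 1
    for d in row:                     # Q parity: sum of g^i * Di over the SAME row
        Q ^= gf_mul(coeff, d)
        coeff = gf_mul(coeff, 2)      # next power of the generator g = 2

    # Diagonal parity would instead be a plain XOR like P, but taken over
    # a "diagonal" selection of blocks drawn from DIFFERENT rows.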

assorted comments:

-- i think it would be helpful to establish up front that some raid terminology has entered common usage in ways that may not be strictly correct, and to then use this idea as a structural element throughout the rest of the article. that is, for each topic, make a point of treating both the correct and the pop/marketing meaning of each term. that opens up the question of what is "correct." for starters, let me suggest "as specified in the standards documents" or "as empirically demonstrable".

an example of something "empirically demonstrable" would be the idea that not all "hardware raid" implementations are implemented in hardware to the same degree. some simple mirroring or striping controllers implement all logic in hardware; some sophisticated controllers run embedded raid software on a general-purpose cpu, sometimes with a coprocessor for parity. there may not be standards defining this stuff, but many raid controllers have diagnostics that will tell you what's inside so you can check this.

-- some empirical data regarding performance and reliability of various raid levels might be of interest as well. calculations are very handy things, but the proof is in the pudding.

-- my original understanding of the term JBOD was that it meant just that: Just a Bunch Of Drives, working independently. no concatenation, no nuttin! at that time folks were referring to concatenations as "cats" and jbods as jbods. since then i've seen the term jbod used to refer to cats, which i consider incorrect and confusing. this is another area where discussing the strict and popular usage of a term would be helpful.

-- contrary to the article and in agreement with a previous comment, AFAIK raid 0 does not provide redundancy but does provide a performance benefit (all else being equal), which is the advantage it has over a concatenation. performance and reliability considerations are sometimes independent of each other.

-- as one would expect, the article focuses mostly on theory. it's impossible to explore all the implementation considerations and all the permutations in a concise article (if at all). however, i think it would be worth noting that generally implementation specifics have a lot to do with actual performance, and that in real life "all else" is rarely equal. (go ahead and stripe all the disks you want; put a slow interconnect between the disks and your hosts, and you won't see the benefit, etc.)

-- there is enough confusion about terminology in the field that we need to be very careful about checking definitions when communicating, especially when starting work with a new team, or kicking off a new client/vendor relationship.

-- i'd like to see some attention to the trade-offs involved in selecting raid configurations and how changing hardware economics change those trade-offs over time.

for example, in some cases application requirements that might have called for raid 5 on a dedicated controller a few years ago can now be satisfied by software raid 1, because disks are bigger and cheaper and cpu cycles are cheaper too. there are also considerations of how the app behaves. offloading raid work from the cpu(s) makes a lot of sense when the cpu(s) have things they can do in the meantime. if you have a host that's dedicated to a single i/o-bound application that is just going to sit and wait for disk reads to complete before getting on with it, you might as well use software raid, as the cycles it uses are effectively free. (yes, i know, that was very broad, but i'm just trying to illustrate the type of subject matter i had in mind.)

-- i'm curious about the write algorithms used in various raid 1 implementations. theoretically, raid 1 should be fast on reads (if a round-robin or race algorithm is used) and slow on writes, as writes on both disks need to complete before the transaction is complete. one can imagine optimizations to the write, but they ain't free. i wonder what's actually being used, especially in controllers that provide hardware assistance. i've been asked about this a few times and don't know the answer. maybe someone reading this does?

- ef
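
On that last question, the naive baseline at least is easy to model (Python; assumed behaviour only, since real controllers layer write-back caches and smarter scheduling on top of it):

    import itertools

    class Disk(dict):
        def write(self, block, data): self[block] = data
        def read(self, block): return self[block]

    class Mirror:
        def __init__(self, a, b):
            self.disks = [a, b]
            self._turn = itertools.cycle([0, 1])

        def write(self, block, data):
            # A write completes only when BOTH copies are committed,
            # so write latency is that of the slower disk.
            for d in self.disks:
                d.write(block, data)

        def read(self, block):
            # Either copy is valid, so reads can round-robin (or race).
            return self.disks[next(self._turn)].read(block)

    m = Mirror(Disk(), Disk())
    m.write(0, "data")
    print(m.read(0), m.read(0))  # served alternately by each disk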

RAID parity blocks

I have placed tags to merge in the section RAID parity blocks from the article Parity bit. I think this should be included on the same page as RAID, as it pertains only to RAID, and should only be referenced from the Parity bit article for further reading. This is my username; I decided to make one. The7thone1188 21:28, 3 June 2006 (UTC)

I agree. --DanDanRevolution 18:50, 4 June 2006 (UTC)
I concur. It's an implementation of parity specific to RAID, and it would greatly increase users' comprehension of how RAID's implementation of parity works. Dr1819 11:56, 10 June 2006 (UTC)

OK then; it's a week later, and I have just moved the section, removed the banners, and removed all the old links to the parity bit page. I put it at the beginning of the RAID Levels section, because that seemed logical. I've done my part, so I will unwatch the page. Please move the section if you think there is a better or more logical place for it. Thanks! The7thone1188 03:40, 11 June 2006 (UTC)

Z RAID

Conceptually, it's called Z RAID, as it's the end-all, be-all of RAIDs. Ideally, Z RAID would be able to use a nearly unlimited number of drives of varying manufacturers, speeds, and interface connections, whether locally grouped or distributed, and would include automated detection and addition to the volume, as well as automated load balancing and throughput optimization, while using a triple-parity mechanism similar to the dual-parity mechanism used by RAID 6.

This section of the Talk page is intended to foster serious consideration of the principal shortcomings of nearly all current RAID designs, and to stimulate industry interest and innovation in coming up with a solution that can work for all applications.

Criteria:

1. 32-bit addressing would be required to support a virtually unlimited number of drives.

2. Storage database (similar to NTFS' MAT) required to support drives being from varying manufacturers, of differing sizes, and throughput speeds and access latencies.

3. Associated software management required to work in conjunction with the HD controller to analyze size, throughput, and latencies to provide for automated load balancing and throughput optimization.

4. It would be nice if you could use any network shared resource, including excess server or workstation space on the network.

Future benefits - This approach, if perfected, could mean the end of operating systems residing on either the workstation or the server. Instead, a single, multi-user OS could reside on all the computers, with enough fault tolerance that you could remove a third of the computers without loss of data or program/OS code - and the OS would automatically rebuild its self-image to re-establish the requisite redundancy on the smaller footprint, prompting the administrator to "Add storage - capacity currently too small to support maximum level of parity."

Comments, anyone? Dr1819 15:02, 10 June 2006 (UTC)

Sounds cool, but I think it sounds like it's a bit too hard to implement... A hardware RAID controller needs disks that are all physically the same, because that's most efficient. Something like Google's cache server filesystem/network was built from the ground up with a need for high fault tolerance and multi-read/multi-write of large files. A web-based OS would be best designed around something like that, assuming it's big and bloated (because of cool features), instead of some Damn Small Linux release, or similar. Basically, it's a cool idea, but there are already better solutions to the problem. =D The7thone1188 03:48, 11 June 2006 (UTC)
better solutions to the problem -- please don't taunt me. Tell me what these solutions are.
a bit too hard to implement -- the first law of engineering: If someone does it, it must be possible.
It sounds like you want a fault-tolerant distributed file system, aka Distributed fault tolerant file systems and Distributed parallel fault tolerant file systems. Examples include MogileFS or Gfarm file system or Hadoop Distributed File System. Coincidentally (?), your suggestion of the term "Z RAID" sounds very similar to one of those systems, "zFS".
--70.189.77.59 03:30, 26 October 2006 (UTC)