Note that in the first round of tests only striping was tested; there was not time to test mirroring, mirroring+striping, or RAID-5. In the latest round we tested only RAID-5 and striping (RAID-0), figuring that these are the extremes of reliability versus performance, and that other choices such as RAID-1 (a.k.a. "mirroring") and RAID-1+0 (a.k.a. "mirroring+striping") would probably fall somewhere in between. We have summarized these latest results, compared them against the previous tests, and drawn some conclusions.
However, I still don't trust RAID-5 under vinum -- it has a long and colorful history of surprisingly negative interactions with software it should have nothing to do with, such as softupdates -- and I have not yet had a chance to test vinum under failure-mode conditions (where at least one disk of a RAID set has failed). In cases where I care about reliability and where I want or need RAID-5, I'll probably be forced to choose an alternative that almost certainly will not perform as well.
Moral: More spindles are better
Moral: Fast spindles are better
Moral: More RAM cache is better
In my experience, mail servers tend to have a random I/O pattern, split almost exactly 50/50 between reads and writes: a message comes in and is written to the user's mailbox, then the message is read once and almost certainly never touched again, except possibly to be deleted. Keep in mind that your absolute number one killer is I/O latency, especially with regard to synchronous meta-data updates.
If you look at the random read and random write charts in the second test, and you visually average those results, you'll notice that vinum with the five 7200RPM Quantum Viking II drives is notably faster than vinum with the four 10kRPM IBM 9LZX drives. The implication is that having more spindles tends to be more important than having faster spindles, although ideally you should have both more spindles and they should all be as fast as is feasible.
However, if you have to economize, the place to spend the money is on getting more spindles, even if they have to be a little slower. Of course, all the spindles should be identical (at least in performance characteristics, if not size), otherwise the whole system is likely to be yoked down to the performance characteristics of the least powerful drive, and you will have wasted any money you spent on buying anything better.
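To see why raw spindle speed alone doesn't settle the question, here is a back-of-envelope service-time model. This is only a sketch: the seek times and the per-drive arithmetic are my assumptions, not figures from the tests.

```python
# Back-of-envelope random-IOPS estimate for the two arrays compared above.
# Seek times below are rough assumptions, not measured values.

def random_iops_per_drive(rpm, avg_seek_ms):
    """Approximate random IOPS for one drive: one seek plus half a rotation."""
    half_rotation_ms = (60_000 / rpm) / 2
    return 1000 / (avg_seek_ms + half_rotation_ms)

# Assumed figures: 7200 RPM Viking II ~8.5 ms seek, 10k RPM 9LZX ~6.5 ms seek.
vikings = 5 * random_iops_per_drive(7200, 8.5)
ibms = 4 * random_iops_per_drive(10_000, 6.5)

print(f"5 x 7200 RPM drives: ~{vikings:.0f} IOPS")
print(f"4 x 10k RPM drives:  ~{ibms:.0f} IOPS")
```

Notably, this naive model puts the four faster drives slightly ahead, yet the measured results favored the five slower spindles. The difference is what the model ignores: a fifth spindle is a fifth head that can be servicing a request concurrently, which matters under heavy multi-process load.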
Of course, if you have an existing system which needs to be upgraded, think about adding more spindles first. It's usually relatively easy to add more spindles to a machine, but faster spindles may not always be available.
Take care not to add too many spindles to a single channel on a single controller -- for absolute minimum latency, a good rule of thumb is four HDAs (Hard Disk Assemblies, or what I've been calling "spindles") per controller channel, although I usually find that for the kinds of applications I see, five or six HDAs per channel per controller is not too excessive.
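The rule of thumb above can be sanity-checked with simple bandwidth arithmetic. Both figures in this sketch are illustrative assumptions, not numbers from the tests.

```python
# How many drives a single channel can feed at full streaming speed.
# Both figures below are illustrative assumptions.

CHANNEL_MB_S = 80          # e.g. an Ultra2 Wide SCSI channel
DRIVE_SUSTAINED_MB_S = 20  # assumed sustained transfer rate of one drive

max_streaming_drives = CHANNEL_MB_S // DRIVE_SUSTAINED_MB_S
print(max_streaming_drives)
```

A purely random workload rarely sustains those transfer rates, which is why five or six HDAs per channel can still be acceptable in practice for the application mix described here.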
If you have external RAID devices with their own built-in controller(s), this limit applies internally to each device as well as to the total number of devices and the number of channels and host adaptors they are connected to.
Of course, all of this assumes SCSI or FibreChannel disk devices, which are known to perform reasonably well under heavy multi-user/multi-processing load (i.e., hundreds of processes would ideally like to be simultaneously reading and writing user mailboxes). I won't even bother to attempt to test IDE/ATAPI drives in this kind of application mix, since we already know that they perform very poorly for these kinds of loads. Perhaps there are RAID devices which communicate to the host via SCSI or FibreChannel but which use IDE/ATAPI drives internally, and which can be made to perform reasonably well in this kind of application mix and at a decent price, but I have not yet personally seen them.
The average mail message size and the average mailbox size will have a HUGE impact on what would be the best overall choice of stripe size, which you ideally want to be at least as large as the average I/O operation (but probably not more than 256KB, where experience shows that you tend to start getting diminishing returns). The result is that each access can (on average) be completed by a single head in a single operation, so that you maximize the total number of simultaneous different operations you can have ongoing at any one point in time, and you minimize overall average latency.
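The sizing rule above (a stripe at least as large as the average I/O, capped around 256KB) can be sketched as a trivial helper. Rounding up to a power of two is my assumption about what a real configuration would use; the cap comes from the diminishing-returns observation above.

```python
def choose_stripe_kb(avg_io_kb, cap_kb=256):
    """Smallest power-of-two stripe size (in KB) covering the average I/O,
    capped at cap_kb per the diminishing-returns note."""
    size = 1
    while size < avg_io_kb:
        size *= 2
    return min(size, cap_kb)

print(choose_stripe_kb(6))    # small average message size
print(choose_stripe_kb(300))  # large I/Os hit the cap
```

With a stripe chosen this way, the average access lands entirely on one drive, which is exactly the property described above: each operation ties up a single head, maximizing the number of operations in flight.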