So after troubleshooting this for about a week, I was finally able to create a RAID10 device by installing the system, copying the md modules onto a floppy, and loading the raid10 module during the install.

Now the problem is that I can't get it to show up in anaconda. It detects the other arrays (RAID0 and RAID1) fine, but the RAID10 array won't show up. Looking through the logs (Alt-F3), I see the following warning:

WARNING: raid level RAID10 not supported, skipping md10.

I'm starting to hate the installer more and more. Why won't it let me install on this device, even though it's working perfectly from the shell? Why am I the only one having this problem? Is nobody out there using md-based RAID10?

Russ
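For anyone trying to reproduce the working-from-the-shell part, creating the RAID10 md array manually looks roughly like this once the module is loaded. This is a sketch, not the exact commands from the original post, and the partition names are assumptions; substitute your own disks:

```shell
# Load the raid10 personality first (the installer kernel does not do this itself)
modprobe raid10

# Create a 4-disk RAID10 array; the partitions below are placeholders
mdadm --create /dev/md10 --level=10 --raid-devices=4 \
    /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

# Confirm the array assembled and is syncing
cat /proc/mdstat
```

Anaconda's refusal is in its partitioning code, not in the kernel, which is why the array works fine at the shell yet is skipped by the installer.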
> -----Original Message-----
> From: centos-bounces at centos.org
> [mailto:centos-bounces at centos.org] On Behalf Of Ruslan Sivak
> Sent: Monday, May 07, 2007 12:53 PM
> To: CentOS mailing list
> Subject: [CentOS] Anaconda doesn't support raid10
>
> <snip>
>
> I'm starting to hate the installer more and more. Why won't it let me
> install on this device, even though it's working perfectly from the
> shell? Why am I the only one having this problem? Is nobody out there
> using md based raid10?

Most people install the OS on a 2-disk RAID1, then create a separate RAID10 for data storage. Anaconda was never designed to create RAID5/RAID10 during install.

-Ross
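The layout Ross describes, OS on RAID1 and data on a post-install RAID10, can be sketched like this. All device names, labels, and mount points below are assumptions for illustration:

```shell
# Anaconda installs the OS onto a 2-disk RAID1 (e.g. /dev/md0).
# After first boot, build the data RAID10 by hand; disks are placeholders.
mdadm --create /dev/md1 --level=10 --raid-devices=4 \
    /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

# Filesystem and a label so fstab doesn't depend on the device name
mkfs.ext3 -L data /dev/md1

# Record the array so it reassembles at boot
mdadm --detail --scan >> /etc/mdadm.conf
echo "LABEL=data  /data  ext3  defaults  1 2" >> /etc/fstab
```

This sidesteps the installer limitation entirely, since anaconda only ever has to deal with the RAID1.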
Ross S. W. Walker wrote:
>> -----Original Message-----
>> From: centos-bounces at centos.org
>> [mailto:centos-bounces at centos.org] On Behalf Of Ruslan Sivak
>> Sent: Thursday, May 10, 2007 4:31 PM
>> To: CentOS mailing list
>> Subject: Re: [CentOS] Re: Anaconda doesn't support raid10
>>
>> Ross S. W. Walker wrote:
>>>> I can get all that data, but can I actually test it somehow?
>>>> Does linux know anything about NCQ, or is everything abstracted
>>>> to the controller?
>>>
>>> Good question, not knowing the answer I did a quick google and this
>>> came to the top:
>>>
>>> http://linux-ata.org/driver-status.html
>>>
>>> another good one,
>>>
>>> http://blog.kovyrin.net/2006/08/11/turn-on-ncq-on-ich-linux/
>>>
>>> Looks like support wasn't added until 2.6.18 and isn't widely
>>> supported until 2.6.19 and 2.6.20.
>>>
>>> -Ross
>>
>> Ross,
>>
>> Thank you for the links. Looks like my controller doesn't support
>> NCQ :-(. I have the SIL 3114 based card. Doesn't look like there
>> are any cheap alternatives on the PCI bus, but I think I can live
>> with the performance of this system.
>
> How did it go creating the interleaved LVs?
>
> -Ross

It worked... I think. I already had the LVM partitions set up, but when I booted into the install and went to the shell, I couldn't see them (although the installer saw them). I had to run raidstart on all my md devices and then scan with something like vgscan and lvscan, and then the devices showed up. So I deleted them and re-added them, per your instructions, and when I printed out the config, it looks like they were striping. I was then able to install on it.

So I don't think the reboot step is necessary; you just need to pre-create the config manually in the shell first.

Thanks for your help.

Russ
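The activate-then-recreate sequence Russ describes can be sketched as follows. The volume group and LV names are assumptions, and `raidstart` is the old raidtools command of the era; `mdadm --assemble --scan` does the same job on current systems:

```shell
# From the installer's shell (Alt-F2): bring up the md devices first
raidstart /dev/md0            # or: mdadm --assemble --scan
vgscan                        # rescan for volume groups on the new PVs
vgchange -ay                  # activate all VGs
lvscan                        # the LVs should now be listed

# Re-create the LV striped (interleaved) across two PVs; names assumed
lvcreate -i 2 -I 64 -L 100G -n lv_data vg_data

# Verify the stripe count and stripe size took effect
lvs -o lv_name,stripes,stripe_size vg_data
```

Here `-i 2` sets the number of stripes (PVs to interleave across) and `-I 64` the stripe size in KB, which is what makes the printed config show striping.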
Feizhou wrote:
>> Thank you for the links. Looks like my controller doesn't support
>> NCQ :-(. I have the SIL 3114 based card. Doesn't look like there
>> are any cheap alternatives on the PCI bus, but I think I can live
>> with the performance of this system.
>
> How much did your si3114 cost? You can get a si3124 card for under
> 100USD.

I think it was around $25 shipped.
> Message: 67
> Date: Fri, 11 May 2007 11:40:25 +0800
> From: Feizhou <feizhou at graffiti.net>
> Subject: Re: [CentOS] Re: Anaconda doesn't support raid10
> To: CentOS mailing list <centos at centos.org>
> Message-ID: <4643E5A9.3030302 at graffiti.net>

Feizhou wrote:
<snip>
> The SCSI drive:
> Spindle Speed              15000 rpm
> Average latency            2.0 msec
> Random read seek time      3.50 msec
> Random write seek time     4.0 msec
>
> The SATA drive:
> Spindle Speed              7200 rpm
> Native Command Queuing     Y
> Average latency            4.16 msec
> Random read seek time      <8.5 msec
> Random write seek time     <10.0 msec
> Maximum interface transfer rate  300 Mbytes/sec
>
> Compare to a 10K scsi drive:
> Spindle Speed              10,000 rpm
> Sustained data transfer rate     80 Mbytes/sec
> Average latency            3.0 msec
> Random read seek time      4.9 msec
> Random write seek time     5.4 msec
> Maximum interface transfer rate  320 Mbytes/sec

The above specifications are about performance. If maximum reliability is the goal, look at the MTBF in the specifications. If the design engineers have done their job, and the manufacturing engineers maintain high quality control, the result should be a quality component.

As has been pointed out in this thread, RAID is *not* a substitute for backups. RAID is intended to keep the box up and running. Valuable data should always be stored off site, on removable drives, or via the WAN.

Lanny
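The practical gap between these drives is clearer if you add rotational latency and random read seek into a rough average access time. A quick back-of-the-envelope using the numbers quoted above:

```shell
# Average access time ~ average latency + random read seek time (msec)
awk 'BEGIN {
    printf "15K SCSI:  %.1f ms\n", 2.0  + 3.5   # best case
    printf "10K SCSI:  %.1f ms\n", 3.0  + 4.9
    printf "7.2K SATA: %.1f ms\n", 4.16 + 8.5   # SATA seek is an upper bound
}'
```

By this crude measure the 15K SCSI drive services a random I/O in well under half the time of the 7.2K SATA drive, which is exactly the workload (small random reads/writes) where NCQ on the SATA side helps most.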
> Message: 31
> Date: Tue, 15 May 2007 11:25:42 +0800
> From: Feizhou <feizhou at graffiti.net>
> Subject: Re: [CentOS] Re: Anaconda doesn't support raid10
> Message-ID: <46492836.3010808 at graffiti.net>

<snip>
> the reliability factor has been proven to be the same across the board.
> You must have missed the thread on the Google report on various drives
> that they use

Feizhou:

Yes, I did not see that. If the MTBFs are the same, then the performance specs you provided are what one needs to go with when choosing components. If Google has posted failure rates for the drives they use, that would be more meaningful than the MTBF published by the drive manufacturer for that particular drive model.

Questions: Is it your belief that all PATA/SATA/SCSI drives made by one manufacturer have the same MTBF? (I find that hard to believe; however, that might be the case.) Also, is it your belief that these drives, made by different manufacturers, all have the same MTBF?

> Who says? You? I like to have an online offsite RAID backup server. Is
> there an ONE TRUE WAY OF BACKUP?

No, obviously there is no one true way of backup. Everyone's needs are different. Your online off-site RAID backup server is one way. Backups *must* be off site, in the event of a catastrophic problem.

Thank you for sharing your time and knowledge with the list!

Lanny