I am new to CentOS, so please bear with me...

Trying to install on a new SuperMicro dual Xeon with an Adaptec 2010S RAID SCSI card and 5 x Hitachi 73GB 10K RPM drives.

I tell Disk Druid to auto-partition and get an error that there is no place to install to.

I just downloaded the CentOS 4.0 ISOs yesterday.

Any suggestions?

Thanks,
Michael Weisman
mike@theaddoctors.com
Michael Weisman wrote:
> I am new to CentOS, so please bear with me...
>
> Trying to install on a new SuperMicro dual Xeon with an Adaptec 2010S RAID
> SCSI card and 5 x Hitachi 73GB 10K RPM drives.
>
> I tell Disk Druid to auto-partition and get an error that there is no
> place to install to.
>
> I just downloaded the CentOS 4.0 ISOs yesterday.
>
> Any suggestions?

I could be mistaken, but I don't think that controller is very well supported. Adaptec's site shows drivers for RH9 and SuSE 8, but it also says "*minimally tested*". Probably not the ringing endorsement I'd want for a RAID controller on a production box.

Back in the day (TM), the Mylex and DPT (which is now also part of Adaptec) cards seemed to enjoy better Linux support than Adaptec cards. But given the bang for the buck with SATA drives and the 3ware cards, I haven't entertained a SCSI RAID card for some time. So perhaps my information is a bit out of date.

Good luck!

Cheers,

C
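One quick way to narrow down a "no place to install to" error is to check from the installer itself whether the kernel ever saw a logical drive from the card. A minimal sketch using stock commands available in the anaconda shell; the driver name in the comment is an assumption about the 2010S (an I2O-family card), not something confirmed in this thread:

    # During the install, switch to the shell on tty2 (Alt-F2), then:
    cat /proc/scsi/scsi    # does the kernel see a logical drive from the controller?
    lsmod                  # is any driver for the card loaded at all?
                           # (the 2010S is believed to use dpt_i2o; verify in dmesg)
    dmesg | less           # look for the controller being probed, or probe errors

If no driver loaded and no logical drive appears, Disk Druid has nothing to partition, which would produce exactly the error described.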
Thanks for your comments, I suspect that you are quite right about the card. I do remember the Mylex card, too.

I was not aware that this card was not well supported. I'll have to use a different card when the time comes. This machine is an internal system, though, not our main production server.

As far as the SATA, etc., I just finished living a nightmare with Adaptec and 3ware SATA RAID, which is why I went back to SCSI. It's tried and true, and (until now) has never let me down in a production environment.

Michael Weisman

On Tuesday 15 March 2005 12:16, Chris Mauritz wrote:
> Michael Weisman wrote:
> > I am new to CentOS, so please bear with me...
> >
> > Trying to install on a new SuperMicro dual Xeon with an Adaptec 2010S
> > RAID SCSI card and 5 x Hitachi 73GB 10K RPM drives.
> >
> > I tell Disk Druid to auto-partition and get an error that there is no
> > place to install to.
> >
> > I just downloaded the CentOS 4.0 ISOs yesterday.
> >
> > Any suggestions?
>
> I could be mistaken, but I don't think that controller is very well
> supported. Adaptec's site shows drivers for RH9 and SuSE 8, but it also
> says "*minimally tested*". Probably not the ringing endorsement I'd
> want for a RAID controller on a production box.
>
> Back in the day (TM), the Mylex and DPT (which is now also part of
> Adaptec) cards seemed to enjoy better Linux support than Adaptec cards.
> But given the bang for the buck with SATA drives and the 3ware cards, I
> haven't entertained a SCSI RAID card for some time. So perhaps my
> information is a bit out of date.
>
> Good luck!
>
> Cheers,
>
> C
Michael Weisman wrote:
> As far as the SATA, etc., I just finished living a nightmare with Adaptec and
> 3ware SATA RAID, which is why I went back to SCSI. It's tried and true, and
> (until now) has never let me down in a production environment.

Just out of curiosity, could you please offer more details regarding your experience with the 3ware SATA card? Which model, what type of RAID array, and which operating system?

Anything related to this: https://bugzilla.redhat.com/beta/show_bug.cgi?id=121434 ?

Unfortunately, sometimes SCSI isn't an option =/
On Tue, 2005-03-15 at 12:16 -0500, Chris Mauritz wrote:
> Michael Weisman wrote:
> > I am new to CentOS, so please bear with me...
> >
> > Trying to install on a new SuperMicro dual Xeon with an Adaptec 2010S
> > RAID SCSI card and 5 x Hitachi 73GB 10K RPM drives.
> >
> > I tell Disk Druid to auto-partition and get an error that there is no
> > place to install to.
> >
> > I just downloaded the CentOS 4.0 ISOs yesterday.
> >
> > Any suggestions?
>
> I could be mistaken, but I don't think that controller is very well
> supported. Adaptec's site shows drivers for RH9 and SuSE 8, but it also
> says "*minimally tested*". Probably not the ringing endorsement I'd
> want for a RAID controller on a production box.

Yes, these Adaptecs (and several others) are currently not working very well under Linux in general. This is something I've seen with the Adaptec 29160 personally, and heard about with several other Adaptecs. Right now, I am not certain whether anyone at the kernel level is even examining the issue with the modules; it certainly doesn't seem anyone at Adaptec is.

The really funny thing is that a lot of "enterprise Linux" (in general, not just RHEL) hardware vendors are slapping these controllers into their products these days.

--
.O.  Sam Hart, sam@progeny.com
..O  Progeny Linux Systems, Inc
OOO  <http://www.progeny.com/>
Sam Hart wrote:
> On Tue, 2005-03-15 at 12:16 -0500, Chris Mauritz wrote:
> > I could be mistaken, but I don't think that controller is very well
> > supported. Adaptec's site shows drivers for RH9 and SuSE 8, but it also
> > says "*minimally tested*". Probably not the ringing endorsement I'd
> > want for a RAID controller on a production box.
>
> Yes, these Adaptecs (and several others) are currently not working very
> well under Linux in general. This is something I've seen with the Adaptec
> 29160 personally, and heard about with several other Adaptecs.

I have one of these servers on order right now and may have to cancel it if we can't get CentOS to run on it. What SCSI RAID controller do you recommend?

--Ajay
Avtar Gill wrote:
> <SNIP Michael's part, sorry Michael>
>
> Just out of curiosity, could you please offer more details regarding
> your experience with the 3ware SATA card? Which model, what type of
> RAID array, and which operating system?

I'd also like to note that Linux support for a particular card may not guarantee that it will function well.

I found that my 3ware 9500S (SATA) would crash randomly if I put it in a motherboard with a Via chipset. After discussing this with a colleague, I found that this was a known problem when combining these two types of hardware. 3ware cards do not seem to be well tested on Via chipsets. This forced my server upgrade to a motherboard with an Intel chipset, and I've experienced no crashes since.

The drivers for the Via chipsets in MS Windows come with many bugfixes to compensate for their hardware. I remember corresponding with Vojtech Pavlik on the Linux kernel team a few years ago about some issues with the Via chipsets. It turned out the issues I asked him about always had a fix waiting in the queue for the next kernel release. They kept finding new issues to fix with Via.

So I've found it best to avoid motherboards with that chipset, due to the issues with the 3ware card and also the older chipset bugs. So, if something doesn't work, it may not merely be an issue with the distro or the kernel.

Hope this helps,
Shawn M. Jones
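If you're not sure which chipset a board uses before pairing it with a RAID card, lspci will usually tell you. A quick sketch (output wording varies by pciutils version and hardware):

    # Identify the host bridge / chipset vendor
    lspci | grep -i "host bridge"
    # ...and confirm how the 3ware card itself enumerated on the bus
    lspci | grep -i 3ware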
Shawn M. Jones wrote:
> Avtar Gill wrote:
> > <SNIP Michael's part, sorry Michael>
> >
> > Just out of curiosity, could you please offer more details regarding
> > your experience with the 3ware SATA card? Which model, what type of
> > RAID array, and which operating system?
>
> I'd also like to note that Linux support for a particular card may not
> guarantee that it will function well.
>
> I found that my 3ware 9500S (SATA) would crash randomly if I put it in
> a motherboard with a Via chipset. After discussing this with a
> colleague, I found that this was a known problem when combining these
> two types of hardware. 3ware cards do not seem to be well tested on
> Via chipsets. This forced my server upgrade to a motherboard with an
> Intel chipset, and I've experienced no crashes since.

I'd like to second this experience. We've moved all of our whiteboxes to Intel boards w/Intel chips, and the problem went away...

J
I've been asked to elaborate on the issues that I had with the RAID cards. This was all with RH 9 and MySQL 4.1.x. The server is generic, with a Supermicro X5DPE motherboard, 4GB RAM, and 4 x Maxtor 250GB SATA drives.

The first server had the 3ware 8506 series card. Shortly after it arrived, I got the error that the array was degraded. So I tried a rebuild. No luck. I replaced the drive and then did a rebuild. Worked... for 2 days, then went south again. I returned that server to the manufacturer and they sent a new one with the Adaptec SATA RAID card.

The machine started just fine. Then I upgraded the database and Perl (just like last time). Everything was fine for about a week to 10 days. In the middle of a huge upload, I saw that error about a degraded array, again. I had to wait for the database load to finish (which took another 10 days, but that's another story). After the db load finished, I used the CLI to start a rebuild. Everything looked fine and I let the job run overnight. The next day, there were tons of errors on the screen (which, of course, I did not capture), but the end result is that a second drive had failed according to the messages, and that was that - everything was gone.

michael weisman
mike@theaddoctors.com

On Tuesday 15 March 2005 16:41, Jonathan wrote:
> <SNIP Shawn's 3ware/Via report>
>
> I'd like to second this experience. We've moved all of our whiteboxes
> to Intel boards w/Intel chips, and the problem went away...
>
> J
Michael Weisman wrote:
> Thanks for your comments, I suspect that you are quite right about the card.
> I do remember the Mylex card, too.

Those things were bulletproof. Unfortunately, the maintainer of the driver passed away some time ago. I'm not sure who's in charge of maintaining it these days.

> I was not aware that this card was not well supported. I'll have to use a
> different card when the time comes. This machine is an internal system,
> though, not our main production server.

Yeah, I don't think I'd use it for anything even remotely important with those comments on Adaptec's own site.

> As far as the SATA, etc., I just finished living a nightmare with Adaptec and
> 3ware SATA RAID, which is why I went back to SCSI. It's tried and true, and
> (until now) has never let me down in a production environment.

I suspect you may have some other hardware problem that's unrelated to the 3ware card (unless it was a defective card). I have beaten the crap out of these in production systems with multi-terabyte arrays with nary a problem.

Best regards,

Chris
Hi,

I just installed CentOS 3.3 (will try 4 later). I have 10 x 250GB hard disks which I built into a RAID-5 config.

/proc/mdstat:

    Personalities : [raid5]
    read_ahead 1024 sectors
    Event: 2
    md0 : active raid5 hdl1[9] hdk1[8] hdj1[7] hdi1[6] hdh1[5] hdg1[4] hdf1[3] hde1[2] hdd1[1] hdb1[0]
          -2088962752 blocks level 5, 64k chunk, algorithm 0 [10/10] [UUUUUUUUUU]
          [>....................]  resync =  2.8% (6954048/245111616) finish=387.2min speed=10249K/sec
    unused devices: <none>

/etc/raidtab:

    raiddev /dev/md0
        raid-level              5
        nr-raid-disks           10
        chunk-size              64k
        persistent-superblock   1
        nr-spare-disks          0
        device                  /dev/hdb1
        raid-disk               0
        device                  /dev/hdd1
        raid-disk               1
        device                  /dev/hde1
        raid-disk               2
        device                  /dev/hdf1
        raid-disk               3
        device                  /dev/hdg1
        raid-disk               4
        device                  /dev/hdh1
        raid-disk               5
        device                  /dev/hdi1
        raid-disk               6
        device                  /dev/hdj1
        raid-disk               7
        device                  /dev/hdk1
        raid-disk               8
        device                  /dev/hdl1
        raid-disk               9

df output:

    /dev/md0              57601604     32828  54642732   1% /volume1

/dev/md0 shows as 50GB when I did a df. Is there a size limitation to the biggest software RAID on a default kernel installation?

Thanks in advance.

Regards.
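For what it's worth, the negative block count in that mdstat output is consistent with the array size wrapping past a signed 32-bit counter, which also fits the 2TB limit mentioned downthread. A back-of-the-envelope check, assuming the mdstat figures are 1K blocks (shell arithmetic here is assumed to be 64-bit, as it is in modern bash):

    # 9 data disks (10 members minus 1 parity) x 245111616 KB per member:
    echo $(( 9 * 245111616 ))                 # 2206004544 KB, ~2.05 TiB, > 2^31
    # Interpreted as a signed 32-bit value, that count wraps around:
    echo $(( 9 * 245111616 - 4294967296 ))    # -2088962752, the figure in mdstat

So the array really is about 2.05TB, just over what the 2.4 kernel's 32-bit block accounting can represent, and the tools downstream (including df) report garbage sizes.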
On Wed, 16 Mar 2005 14:38:18 +0800, Ho Chaw Ming <chawming@pacific.net.sg> wrote:
> Is there a size limitation to the biggest software raid on a default kernel
> installation?

Others can chime in because I'm not 100% sure -- but I believe the limit on a 2.4-based kernel is 2 terabytes. That's not just with software RAID either; that's just the max size the kernel can address.

I would suggest splitting your array up into two 5-disk arrays. That's what I had to do with a hardware-based RAID setup similar to yours.

-Ryan
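A sketch of what that split might look like in /etc/raidtab, reusing the poster's own device names; the grouping of members is arbitrary, and note that re-running mkraid on the new layout is destructive, so anything on the current array must be backed up first:

    # Hypothetical split into two 5-disk RAID-5 arrays,
    # each ~1TB usable and safely under the 2.4 kernel's 2TB limit
    raiddev /dev/md0
        raid-level              5
        nr-raid-disks           5
        chunk-size              64k
        persistent-superblock   1
        nr-spare-disks          0
        device                  /dev/hdb1
        raid-disk               0
        device                  /dev/hdd1
        raid-disk               1
        device                  /dev/hde1
        raid-disk               2
        device                  /dev/hdf1
        raid-disk               3
        device                  /dev/hdg1
        raid-disk               4

    raiddev /dev/md1
        raid-level              5
        nr-raid-disks           5
        chunk-size              64k
        persistent-superblock   1
        nr-spare-disks          0
        device                  /dev/hdh1
        raid-disk               0
        device                  /dev/hdi1
        raid-disk               1
        device                  /dev/hdj1
        raid-disk               2
        device                  /dev/hdk1
        raid-disk               3
        device                  /dev/hdl1
        raid-disk               4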
Is there any way to patch it to > 2TB? Or does installing a 2.6.x kernel solve the problem?

Regards

-----Original Message-----
From: centos-bounces@caosity.org [mailto:centos-bounces@caosity.org] On Behalf Of Ryan Lane
Sent: 16 March 2005 21:01
To: CentOS discussion and information list
Subject: Re: [Centos] What's the biggest software raid partition?

On Wed, 16 Mar 2005 14:38:18 +0800, Ho Chaw Ming <chawming@pacific.net.sg> wrote:
> Is there a size limitation to the biggest software raid on a default
> kernel installation?

Others can chime in because I'm not 100% sure -- but I believe the limit on a 2.4-based kernel is 2 terabytes. That's not just with software RAID either; that's just the max size the kernel can address.

I would suggest splitting your array up into two 5-disk arrays. That's what I had to do with a hardware-based RAID setup similar to yours.

-Ryan
On Wed, 2005-03-16 at 08:06, Ho Chaw Ming wrote:
> Is there any way to patch it to > 2TB? Or does installing a 2.6.x kernel
> solve the problem?
>
> Regards

I cannot speak directly as far as RAID goes, but I know in 2.6 kernels the maximum file system size has been increased to 16TB, up from the 2TB of the 2.4 series.

--
Kurt Bechstein          | Unique Systems, Inc.
System Administrator    | 1446 S. Reynolds Rd. #313
Phone: (419) 861-3331   | Maumee, OH 43537
Email: kurt@uniqsys.com | http://www.uniqsys.com
Kurt Bechstein wrote:
> On Wed, 2005-03-16 at 08:06, Ho Chaw Ming wrote:
> > Is there any way to patch it to > 2TB? Or does installing a 2.6.x kernel
> > solve the problem?
>
> I cannot speak directly as far as RAID goes, but I know in 2.6 kernels
> the maximum file system size has been increased to 16TB, up from the 2TB
> of the 2.4 series.

That is certainly true. I'm scratching my head trying to figure out the failure mode the original poster is experiencing, though. From his description, it seems the array was built and then incorrectly reported itself as a ~50gig partition. That's really, really odd.

An easy solution for that poster (barring an upgrade to a 2.6 kernel) would be to remove a couple of disks from the array and then give it another try. If it were me, I'd probably bounce CentOS 3.x and pave over the system with CentOS 4.0. There are a number of other improvements in addition to the ability to access partitions greater than 2TB.

Cheers,

C
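If the poster does go the CentOS 4 route, mdadm replaced the old raidtools there, so the raidtab approach above no longer applies. A minimal sketch reusing his device names, under the assumption that a 2.6 kernel lifts the 2TB block-device limit; note that recreating the array destroys the existing data:

    # Recreate the 10-disk RAID-5 array with mdadm (destructive!)
    mdadm --create /dev/md0 --level=5 --chunk=64 --raid-devices=10 \
        /dev/hdb1 /dev/hdd1 /dev/hde1 /dev/hdf1 /dev/hdg1 \
        /dev/hdh1 /dev/hdi1 /dev/hdj1 /dev/hdk1 /dev/hdl1

    cat /proc/mdstat     # watch the resync; the block count should now be positive
    mkfs.ext3 /dev/md0   # then make the filesystem and check df again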