Guys,

I was handed a cheap OEM server with a 120 GB SSD and 10 x 4 TB SATA disks
for the backup data, to build a backup server. It's built around an Asus
Z87-A that unfortunately seems to have problems with anything Linux.

Anyway, BackupPC is my preferred backup solution, so I went ahead and
installed another favourite, CentOS 6.4 - and failed.

The RAID controller is a Highpoint RocketRAID 2740, and the suggestion is
to load its driver before starting Anaconda by switching to a console with
"ctrl-alt-f2" - at which point Anaconda freezes.

I've come as far as installing Fedora 19 and having it see all the hard
drives, but it refuses to create any partition bigger than approx. 16 TB
with ext4.

I've never had to deal with RAID arrays this big before and am a bit
stumped. Any hints as to where to start reading up, as well as hints on how
to proceed (another motherboard, ditto RAID controller?), would be greatly
appreciated.

Thanks.

-- 
BW, Sorin
-----------------------------------------------------------
# Sorin Srbu, Sysadmin
# Uppsala University
# Dept of Medicinal Chemistry
# Div of Org Pharm Chem
# Box 574
# SE-75123 Uppsala
# Sweden
# Phone: +46 (0)18-4714482
# Visit: BMC, Husargatan 3, D5:512b
# Web: http://www.orgfarm.uu.se
-----------------------------------------------------------
> -----Original Message-----
> From: Reindl Harald [mailto:h.reindl at thelounge.net]
> Sent: den 4 november 2013 13:48
> To: CentOS mailing list; Sorin Srbu
> Subject: Re: [CentOS] [OT] Building a new backup server
>
> Am 04.11.2013 13:44, schrieb Sorin Srbu:
> > I've come so far as installing Fedora 19 and having it see all the
> > hard-drives, but it refuses to create any partition bigger than
> > approx. 16 TB with ext4.
>
> This is nothing new, and Google will tell you easily:
> https://www.google.at/search?q=ext4+16TB
>
> https://ext4.wiki.kernel.org/index.php/Ext4_Howto
>
> "The code to create file systems bigger than 16 TiB is, at the time of
> writing this article, not in any stable release of e2fsprogs. It will be
> in future releases."

Ah, thanks. I had previously read that ext4's maximum filesystem size is in
the exabyte range, but missed the part about the tools... Let's see if I
can't do the custom thing you link to.

Thanks again!

-- 
//Sorin
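[For reference: once a new enough e2fsprogs is in place (built from git as
the ext4 wiki describes, or from a distribution that ships a recent
version), the 64bit feature is what lifts the 16 TiB ceiling. A rough
sketch only - the device name /dev/sdb is an assumption, not Sorin's actual
layout:]

    # Assumption: the RAID array appears as /dev/sdb and a recent e2fsprogs
    # (with 64bit support in mke2fs) is installed.

    # One GPT partition spanning the array; MBR tops out at 2 TiB anyway.
    parted -s /dev/sdb mklabel gpt mkpart primary 0% 100%

    # The 64bit feature allows ext4 filesystems larger than 16 TiB.
    mkfs.ext4 -O 64bit /dev/sdb1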
On 04.11.2013 12:44, Sorin Srbu wrote:
> Anyway, BackupPC is my preferred backup-solution, so I went ahead to
> install another favourite, CentOS 6.4 - and failed.
>
> The RAID controller is a Highpoint RocketRAID 2740, and the suggestion
> is to load its driver before starting Anaconda by way of "ctrl-alt-f2",
> at which point Anaconda freezes.

Hi Sorin,

Please check this page; if you have the driver from the manufacturer, it
shows you how to load it:
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/sect-Driver_updates-Use_a_boot_option_to_specify_a_driver_update_disk-ppc.html

> I've come so far as installing Fedora 19 and having it see all the
> hard-drives, but it refuses to create any partition bigger than
> approx. 16 TB with ext4.

Yes, Red Hat puts in this artificial limit. They say they do not support
ext4 volumes larger than that and recommend XFS instead, which is what I
recommend as well.

> I've never had to deal with RAID arrays this big before and am a bit
> stumped.
>
> Any hints as to where to start reading up, as well as hints on how to
> proceed (another motherboard, ditto RAID controller?), would be greatly
> appreciated.

Just a thought - I maintain a CentOS desktop-oriented remix and have an ISO
with the kernel from elrepo.org (kernel-ml):
http://li.nux.ro/download/ISO/Stella6.4_x86_64.1_kernel-ml.iso

It's not tested much, but the kernel might be new enough to support the
RAID card. If you can install it, you could keep using it; "changing" it to
CentOS is trivial.

HTH
Lucian

-- 
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro
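[For reference: the Red Hat page linked above boils down to passing the
"dd" option to the installer instead of loading the driver from a console
after it has started. A sketch under the assumption that Highpoint's
RHEL/CentOS 6 driver disk image has been copied to a USB stick or CD:]

    # At the installer's boot prompt (or after pressing Tab on the boot
    # menu entry), append the driver-disk option so Anaconda asks for a
    # driver update disk before it starts probing hardware:
    linux dd

    # Anaconda then prompts "Do you have a driver disk?" - answer yes and
    # point it at the medium holding the RocketRAID 2740 driver disk image.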
Sorin Srbu wrote:
> Guys,
>
> I was thrown a cheap OEM-server with a 120 GB SSD and 10 x 4 TB
> SATA-disks for the data-backup to build a backup server. It's built
> around an Asus Z87-A that seems to have problems with anything Linux,
> unfortunately.
>
> Anyway, BackupPC is my preferred backup-solution, so I went ahead to
> install another favourite, CentOS 6.4 - and failed.
>
> The RAID controller is a Highpoint RocketRAID 2740, and the suggestion
> is to load its driver before starting Anaconda by way of "ctrl-alt-f2",
> at which point Anaconda freezes.
>
> I've come so far as installing Fedora 19 and having it see all the
> hard-drives, but it refuses to create any partition bigger than approx.
> 16 TB with ext4.
>
> I've never had to deal with RAID arrays this big before and am a bit
> stumped.
>
> Any hints as to where to start reading up, as well as hints on how to
> proceed (another motherboard, ditto RAID controller?), would be greatly
> appreciated.

Several.

First, see if CentOS supports that card. The alternative is to go to
Highpoint's website and look for the driver. You *might* need to get the
source and build it - I had to do that a few months ago, on an old 2260 (I
think it is) card, and had to hack the source - they're *not* good about
updates. If you're lucky, they'll have a current driver or source.

Second, on our HBRs (that's a technical term - Honkin' Big RAIDs... <g>),
we use ext4 and RAID 6. Also, for about two years now I keep finding
reports saying that although ext4 supports gigantic filesystems, the tools
aren't there yet. The upshot is that I make several volumes and partition
them into 14 TB-16 TB filesystems. Besides, if you have a problem with a
truly humongous RAID, the rebuild will finish sometime around next
summer....

mark
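[For reference: the "several smaller filesystems" approach mark describes
is often done with LVM on top of the array. A sketch only - device name,
volume group name and sizes below are illustrative, not his actual setup:]

    # Assumption: the hardware RAID 6 array appears as /dev/sdb.
    pvcreate /dev/sdb
    vgcreate backupvg /dev/sdb

    # Carve it into ~14 TB logical volumes, each safely under the ext4
    # tooling limit, and put an ext4 filesystem on each.
    lvcreate -L 14T -n backup01 backupvg
    lvcreate -L 14T -n backup02 backupvg
    mkfs.ext4 /dev/backupvg/backup01
    mkfs.ext4 /dev/backupvg/backup02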
On 11/4/2013 4:44 AM, Sorin Srbu wrote:
> I've come so far as installing Fedora 19 and having it see all the
> hard-drives, but it refuses to create any partition bigger than approx.
> 16 TB with ext4.
>
> I've never had to deal with RAID arrays this big before and am a bit
> stumped.

Use XFS for large file systems.

-- 
john r pierce                                      37N 122W
somewhere on the middle of the left coast
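[For reference: on CentOS 6 this is a couple of commands, since xfsprogs is
in the base repositories. A sketch, assuming the array is already
partitioned as /dev/sdb1 (GPT, since the volume is larger than 2 TiB) and
the mount point is a placeholder:]

    yum install -y xfsprogs

    # XFS handles multi-terabyte volumes without the 16 TiB ext4 headache.
    mkfs.xfs -L backup /dev/sdb1

    mkdir -p /srv/backup
    mount -t xfs /dev/sdb1 /srv/backup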
Hey, Les,

Thanks for changing the subject to OT.

Les Mikesell wrote:
> On Tue, Nov 5, 2013 at 1:28 PM, <m.roth at 5-cent.us> wrote:
>>
>> As I noted, we make sure rsync uses hard links... but we have a good
>> number of individual people and projects who *each* have a good number
>> of terabytes of data and generated data. Some of our 2TB drives are
>> over 90% full, and then there's the honkin' huge RAID, and at least one
>> 14TB partition is over 9TB full....
>
> If you have database dumps or big text files that aren't compressed,
> backuppc could be a big win. I think it is the only thing that can keep
> a compressed copy on the server side and work directly with a stock
> rsync and uncompressed files on the target hosts (and it can cache the
> block-checksums so it doesn't have to uncompress and recompute them
> every run). While it is 'just a perl script' it's not quite what you
> expect from simple scripting...

We have a *bunch* of DBs: Oracle, MySQL, PostgreSQL, all with about a
week's worth of dumps from every night, and then backups of those to the
backup servers. I can't imagine how they'd be a win - I don't remember off
the top of my head whether they're compressed or not. A *lot* of our data
is not huge text files - lots and lots of pure data files, output from
things like Matlab, R, and some local programs, like the one for modeling
protein folding.

mark
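[For context: the "rsync uses hard links" scheme mark mentions typically
looks something like the following - a sketch with made-up host names and
paths, not his actual setup. Unchanged files are hard-linked against the
previous snapshot, so each daily tree only costs the space of what changed:]

    # Assumption: daily snapshots live under /srv/backup/<host>/<date>.
    TODAY=$(date +%F)
    YESTERDAY=$(date -d yesterday +%F)

    # Files identical to yesterday's copy are hard-linked, not re-copied.
    rsync -a --delete \
          --link-dest=/srv/backup/host1/$YESTERDAY \
          host1:/home/ /srv/backup/host1/$TODAY/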