similar to: OT: RAID5, RAID50 and RAID60 performance??

Displaying 20 results from an estimated 4000 matches similar to: "OT: RAID5, RAID50 and RAID60 performance??"

2008 May 22
5
RAID5 or RAID50 for database?
We have a DELL 6800 server with 12 internal disks in it. The O.S. is CentOS 4.6 and the SCSI controller card is a PERC 4e/di. We plan to configure 4 disks (5,8,9,10) as RAID5 or RAID50. This logical volume will be used as a file system to store database backup files. Can anyone tell me which one performs better? Thanks.
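A note on this one: RAID50 is a stripe across two or more RAID5 spans, so it generally needs at least six disks; with only the four disks listed, plain RAID5 is the realistic choice on that controller. As a rough sketch of the structural difference, using Linux mdadm rather than the PERC firmware and placeholder device names:
  # RAID5: one parity group across the four disks
  mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
  # RAID50: a RAID0 stripe over two RAID5 sets, hence the six-disk minimum
  mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
  mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sde /dev/sdf /dev/sdg
  mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/md1 /dev/md2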
2019 Jan 30
3
C7, mdadm issues
On 30/01/19 16:49, Simon Matter wrote: >> On 01/30/19 03:45, Alessandro Baggi wrote: >>> On 29/01/19 20:42, mark wrote: >>>> Alessandro Baggi wrote: >>>>> On 29/01/19 18:47, mark wrote: >>>>>> Alessandro Baggi wrote: >>>>>>> On 29/01/19 15:03, mark wrote: >>>>>>>
2011 Jan 11
3
RAID configuration suggestion???
We have several DELL R900s with PERC 6/E adapters in them, running Red Hat Linux. Each R900 has two PERC 6/E adapters and at least two MD1000s connected to it. Configuration 1: PERC 6/E -- two MD1000, PERC 6/E -- empty. Configuration 2: PERC 6/E -- MD1000, PERC 6/E -- MD1000. Normally the first MD1000 is for the database and the second MD1000 is for nightly backup. My
2009 Aug 20
1
what is RAID "background initialization" ??
We have a DELL server (CentOS 4.X) with an MD1000 connected to it. One RAID5 array (4 internal disks) had a bad hard disk and I replaced it. I saw /var/log/messages has the following entries: ======================================= Aug 18 15:33:20 host1 Server Administrator: Storage Service EventID: 2049 Array disk removed: Array Disk 0:11 Controller 1, Connector 0 Aug 18 15:34:09 host1 Server Administrator:
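For threads like this: background initialization is the controller rebuilding parity on the replaced member while the virtual disk stays online. If Dell OpenManage (OMSA) is installed, which the "Server Administrator" log lines suggest, its state can be checked from the shell; the controller number below matches the log and may differ on other boxes:
  omreport storage vdisk controller=1   # virtual disk state and progress (background initialization shows up here)
  omreport storage pdisk controller=1   # physical disk states, including the replaced member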
2008 Mar 26
3
HW experience
Hi, we would like to set up a small Lustre instance. For the OSTs we are planning to use standard Dell PE1950 servers (2x QuadCore + 16 GB RAM), and for the disks a JBOD (MD1000) driven by the PE1950's internal RAID controller (RAID-6). Any experience (good or bad) with such a config? Thanks, Martin
2011 Jun 08
3
what is difference between "slow initialize" and "patrol read" on RAID?
We have a DELL server with an MD1000 disk array attached. The O.S. is CentOS 5.5. Recently, every time an MD1000 "patrol read" starts I get "media error" messages in the /var/log/messages file. I used an MD1000 "slow initialize" to initialize the "bad disk" and got NO errors. After the "slow initialize" finished, I manually started a "patrol read". I continue to get
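Both operations here are controller-level: patrol read scans all media in the background looking for latent errors, while a slow (full) initialize writes the whole virtual disk, giving the drives a chance to reallocate weak sectors, which is roughly why the errors disappear until patrol read touches them again. If the LSI MegaCli utility works against this controller (an assumption; binary name and adapter numbers are placeholders), the patrol read state and per-disk media error counters can be inspected with:
  MegaCli64 -AdpPR -Info -aALL                     # patrol read mode, state and schedule
  MegaCli64 -PDList -aALL | grep -i 'media error'  # media error count per physical disk
  MegaCli64 -AdpPR -Start -aALL                    # kick off a patrol read manually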
2009 Aug 05
3
RAID[56] with arbitrary numbers of "parity" stripes.
We discussed using the top bits of the chunk type field to store a number of redundant disks -- so instead of RAID5, RAID6, etc., we end up with a single 'RAID56' flag, and the amount of redundancy is stored elsewhere. This attempts it, but I hate it and don't really want to do it. The type field is designed as a bitmask, and _used_ as a bitmask in a number of
2009 May 25
1
raid5 or raid6 level cluster
Hello, is there any way to create a raid6 or raid5 level glusterfs installation? From the docs I understood that I can do a raid1-based glusterfs installation or raid0 (striping data to all servers) and a raid10-based solution, but the raid10-based solution is not cost effective because it needs too many servers. Do you have a plan to keep one or two servers as parity for the whole glusterfs system
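For what it's worth, later GlusterFS releases added dispersed (erasure-coded) volumes, which is the closest thing to a raid5/raid6-style layout across servers; the original replicate/stripe/distribute translators have no parity option. A hedged sketch with hypothetical brick paths, assuming a GlusterFS version with disperse support:
  # 3 bricks, any 1 may fail (roughly raid5-like); higher redundancy counts give a raid6-like layout
  gluster volume create parityvol disperse 3 redundancy 1 \
      server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/b3
  gluster volume start parityvol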
2009 Jun 26
2
2TB partition limitation on X86_64 version??
We have a DELL server with the CentOS 5.3 x86_64 version on it. This server also has a couple of MD1000s connected to it. We configured an MD1000 as one hardware volume of 2990GB. I tried to use "fdisk" to partition this 2990GB volume and "fdisk" can only see 2000GB. Does a 64-bit O.S. still have a 2TB limitation on file systems? Is there another tool that can partition disks larger
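The 2TB ceiling here is the MSDOS (MBR) partition table that fdisk writes, not the 64-bit kernel or the filesystem. A GPT disk label removes it; a minimal sketch with parted, assuming /dev/sdb stands in for the 2990GB volume (exact mkpart syntax varies a little between parted versions):
  parted /dev/sdb mklabel gpt
  parted /dev/sdb mkpart primary 0% 100%
  mkfs.ext3 /dev/sdb1
Putting the filesystem or LVM directly on the unpartitioned device also works and sidesteps the label entirely.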
2011 Nov 23
2
stripe alignment consideration for btrfs on RAID5
Hiya, is there any recommendation out there on setting up a btrfs FS on top of hardware or software raid5 or raid6 wrt stripe/stride alignment? From mkfs.btrfs, it doesn't look like there's much that can be adjusted that would help, and what I'm asking might not even make sense for btrfs, but I thought I'd just ask. Thanks, Stephane
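There's no direct answer quoted here, but for comparison this is how stripe alignment is usually spelled out for ext4/XFS on the same kind of array; the numbers assume a hypothetical 6-disk RAID5 (5 data disks) with a 64K chunk and 4K filesystem blocks:
  # stride = chunk / block = 64K / 4K = 16; stripe-width = stride * data disks = 16 * 5 = 80
  mkfs.ext4 -E stride=16,stripe-width=80 /dev/md0
  # XFS spells the same thing as stripe unit / stripe width
  mkfs.xfs -d su=64k,sw=5 /dev/md0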
2017 Jan 21
1
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On Sat, January 21, 2017 12:16 am, Keith Keller wrote: > On 2017-01-20, Valeri Galtsev <galtsev at kicp.uchicago.edu> wrote: >> >> Hm, not certain what process you describe. Most of my controllers are >> 3ware and LSI. I just pull the failed drive (and I know the failed physical >> drive >> number), put a good one in its place, and the rebuild starts right away. > > I
2007 May 01
2
Raid5 issues
So when I couldn't get the raid10 to work, I decided to do raid5. Everything installed and looked good. I left it overnight to rebuild the array, and when I came in this morning, everything was frozen. Upon reboot, it said that 2 of the 4 devices for the raid5 array failed. Luckily, I didn't have any data on it, but how do I know that the same thing won't happen when I have
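A hedged first step for a failure like this, assuming Linux software RAID and placeholder device names, is to read what the array and the disks themselves recorded before recreating anything:
  cat /proc/mdstat              # overall array and member state
  mdadm --detail /dev/md0       # which members are marked failed/removed, event counts
  mdadm --examine /dev/sdb1     # per-member superblock; repeat for each member partition
  smartctl -a /dev/sdb          # whether the underlying drive is reporting errors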
2010 Mar 26
23
RAID10
Hi All, I am looking at ZFS and I get that they call it RAIDZ which is similar to RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data protection? So if I have 8 x 1.5TB drives, wouldn't I: - mirror drive 1 and 5 - mirror drive 2 and 6 - mirror drive 3 and 7 - mirror drive 4 and 8 Then stripe 1,2,3,4 Then stripe 5,6,7,8 How does one do this with ZFS?
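ZFS builds the RAID 10 shape as a pool of mirror vdevs; writes are striped across the mirrors automatically, so there is no separate stripe step. A minimal sketch, with the disk names below standing in for the eight 1.5TB drives:
  zpool create tank mirror disk1 disk5 mirror disk2 disk6 mirror disk3 disk7 mirror disk4 disk8
  zpool status tank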
2019 Jan 31
0
C7, mdadm issues
> On 30/01/19 16:49, Simon Matter wrote: >>> On 01/30/19 03:45, Alessandro Baggi wrote: >>>> On 29/01/19 20:42, mark wrote: >>>>> Alessandro Baggi wrote: >>>>>> On 29/01/19 18:47, mark wrote: >>>>>>> Alessandro Baggi wrote: >>>>>>>> On 29/01/19 15:03, mark wrote:
2017 Jan 20
4
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On Fri, January 20, 2017 5:16 pm, Joseph L. Casale wrote: >> This is why, before configuring and installing everything, you may want to >> attach drives one at a time, and upon boot take note of which physical >> drive number the controller has for that drive, and definitely label it >> so >> you will know which drive to pull when a drive failure is reported. > >
2019 Jan 30
4
C7, mdadm issues
On 01/30/19 03:45, Alessandro Baggi wrote: > On 29/01/19 20:42, mark wrote: >> Alessandro Baggi wrote: >>> On 29/01/19 18:47, mark wrote: >>>> Alessandro Baggi wrote: >>>>> On 29/01/19 15:03, mark wrote: >>>>> >>>>>> I've no idea what happened, but the box I was working on last week
2009 Aug 06
10
RAID[56] status
If we've abandoned the idea of putting the number of redundant blocks into the top bits of the type bitmask (and I hope we have), then we're fairly much there. Current code is at: git://, http://git.infradead.org/users/dwmw2/btrfs-raid56.git git://, http://git.infradead.org/users/dwmw2/btrfs-progs-raid56.git We have recovery working, as well as both full-stripe writes
2012 May 23
1
pvcreate limitations on big disks?
OK folks, I'm back at it again. Instead of taking my J4400 (24 x 1T disks) and making a big RAID60 out of it, which Linux cannot make a filesystem on, I created 4 x RAID6, each of which is 3.64T. I then do: sfdisk /dev/sd{b,c,d,e} <<EOF ,,8e EOF to make a big LVM partition on each one. But then when I do: pvcreate /dev/sd{b,c,d,e}1 and then pvdisplay, it shows each one as
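The snag here is the MSDOS label that sfdisk writes, which tops out at about 2TB per partition, so a 3.64T RAID6 cannot be described by a single ,,8e entry. One hedged way around it, reusing the same device names, is to skip the partition table and hand LVM the whole devices (a GPT label via parted with an lvm-flagged partition would work as well):
  pvcreate /dev/sd{b,c,d,e}
  vgcreate bigvg /dev/sd{b,c,d,e}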
2006 Nov 21
3
RAID benchmarks
We (a small college with about 3000 active accounts) are currently in the process of moving from UW IMAP running on linux to dovecot running on a cluster of 3 or 4 new faster Linux machines. (Initially using perdition to split the load.) As we are building and designing the system, I'm attempting to take (or find) benchmarks everywhere I can in order to make informed decisions and so
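Since the point of the thread is collecting comparable numbers, one hedged way to do that across candidate arrays is a small-block random-I/O fio run that loosely resembles a maildir workload; the directory, sizes and job count below are guesses to be tuned to the real mail store:
  fio --name=mailish --directory=/srv/test --rw=randrw --rwmixread=70 \
      --bs=4k --size=2g --numjobs=8 --ioengine=libaio --direct=1 --group_reporting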
2013 Apr 11
6
RAID 6 - opinions
I'm setting up this huge RAID 6 box. I've always thought of hot spares, but I'm reading things that are comparing RAID 5 with a hot spare to RAID 6, implying that the latter doesn't need one. I *certainly* have enough drives to spare in this RAID box: 42 of 'em, so two questions: should I assign one or more hot spares, and, if so, how many? mark
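For contrast with the hardware box being described, this is how the same trade-off looks with Linux md, where a hot spare is just an extra member that the array grabs automatically on failure; device names and counts are placeholders:
  # 10 active members in RAID6 plus 2 hot spares
  mdadm --create /dev/md0 --level=6 --raid-devices=10 --spare-devices=2 /dev/sd[b-m]
  # or add a spare to an existing, healthy array later
  mdadm --add /dev/md0 /dev/sdn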