Displaying 20 results from an estimated 9000 matches similar to: "OT: What's wrong with RAID5"
2012 Nov 13
1
mdX and mismatch_cnt when building an array
CentOS 6.3, x86_64.
I have noticed when building a new software RAID-6 array on CentOS 6.3
that the mismatch_cnt grows monotonically while the array is building:
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md11 : active raid6 sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
3904890880 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
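Once the initial resync finishes, the counter can be re-checked with an explicit scrub; a minimal sketch, assuming the md11 array from the output above and a current sysfs md interface:
# cat /sys/block/md11/md/mismatch_cnt
# echo check > /sys/block/md11/md/sync_action
# cat /proc/mdstat
# cat /sys/block/md11/md/mismatch_cnt
The "check" action is a read-only scrub and "repair" rewrites parity where mismatches are found; the value reported while the array is still building its parity is generally not meaningful.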
2010 Jul 20
16
zfs raidz1 and traditional raid 5 performance comparison
Hi,
For ZFS raidz1, I know that for random I/O the IOPS of a raidz1 vdev equal those of one physical disk. Since raidz1 is like RAID5, does RAID5 have the same performance characteristic, i.e. random IOPS equal to one physical disk's IOPS?
Regards
Victor
--
This message posted from opensolaris.org
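For reference, a minimal sketch of the raidz1 layout in question (pool and disk names are placeholders):
# zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0
# zpool status tank
Because every disk in the vdev takes part in each logical I/O, the usual rule of thumb is that a raidz1 vdev delivers roughly the random IOPS of a single member disk, which is why pools with heavy random I/O are often built from several smaller vdevs or mirrors rather than one wide vdev.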
2010 Mar 26
23
RAID10
Hi All,
I am looking at ZFS and I get that they call it RAIDZ which is similar to RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data protection?
So if I have 8 x 1.5tb drives, wouldn't I:
- mirror drive 1 and 5
- mirror drive 2 and 6
- mirror drive 3 and 7
- mirror drive 4 and 8
Then stripe 1,2,3,4
Then stripe 5,6,7,8
How does one do this with ZFS?
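A minimal sketch of that layout in ZFS, using placeholder device names; the striping across the mirror pairs happens automatically, so there is no separate stripe step:
# zpool create tank mirror disk1 disk5 mirror disk2 disk6 \
    mirror disk3 disk7 mirror disk4 disk8
# zpool status tank
Each "mirror a b" group becomes a top-level vdev, and ZFS dynamically stripes writes across all top-level vdevs, which gives the RAID10-style behaviour being asked about.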
2009 May 25
1
raid5 or raid6 level cluster
Hello,
Is there any way to create a RAID6- or RAID5-level GlusterFS installation?
From the docs I understood that I can do a RAID1-style GlusterFS installation or
RAID0 (striping data across all servers) and a RAID10-based solution, but the RAID10-
based solution is not cost effective because it needs too many servers.
Do you have a plan to keep one or two servers as parity for the whole
GlusterFS system?
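Later GlusterFS releases grew dispersed (erasure-coded) volumes, which spread parity-like redundancy across servers instead of keeping full replicas; a rough sketch of that syntax, with placeholder volume and brick names:
# gluster volume create ecvol disperse 6 redundancy 2 \
    server{1..6}:/export/brick1
# gluster volume start ecvol
With disperse 6 redundancy 2, any two of the six bricks can fail and the volume stays available, at a storage overhead closer to RAID6 than to full replication.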
2009 Aug 18
2
OT: RAID5, RAID50 and RAID60 performance??
We have several Dell servers with MD1000 enclosures connected to them. The servers will run CentOS 5.x x86_64. My questions are:
1. Which configuration have better performance RAID5, RAID50 or RAID60?
2. how much performance difference?
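As a rough way to measure the difference on the actual hardware, a sketch with fio, assuming it is installed and /mnt/test sits on the array under test:
# fio --name=raidtest --directory=/mnt/test --size=4g --rw=randrw \
      --bs=4k --direct=1 --ioengine=libaio --iodepth=32 \
      --runtime=60 --time_based --group_reporting
Running the same job against RAID5, RAID50 and RAID60 layouts (and again with --rw=read or --rw=write for sequential numbers) gives directly comparable IOPS and bandwidth figures.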
2019 Jan 22
2
C7 and mdadm
A user's system had a hard drive failure over the weekend. Linux RAID 6. I
identified the drive, brought the system down (8 drives, and I didn't know
the s/n of the bad one; why it was there in the box, rather than where I
started looking...). Brought it up, RAID not working. I finally found that
I had to do an mdadm --stop /dev/md0, then I could do an assemble, then I
could add the new
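A rough sketch of that sequence, with placeholder device names (the replacement disk must be partitioned to match before it is added):
# mdadm --stop /dev/md0
# mdadm --assemble /dev/md0 /dev/sd[a-h]1      (or: mdadm --assemble --scan)
# mdadm --manage /dev/md0 --add /dev/sdX1
# cat /proc/mdstat
After the --add, md starts rebuilding onto the new disk automatically and /proc/mdstat shows the recovery progress.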
2007 Jun 29
2
poor read performance
I am seeing what seems to be a notable limit on read performance of an
ext3 filesystem. If anyone could offer some insight it would be helpful.
Background:
12 x 500G SATA disks in a Hardware RAID enclosure connected via 2Gb/s FC
to a 4 x 2.6 Ghz system with 4GB ram running RHEL4.5. Initially the
enclosure was configured RAID5 10+1 parity, although I've also tried
RAID 50 and currently RAID 0.
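One thing worth ruling out before blaming the RAID level is block-device readahead, which often limits large sequential reads more than the array layout does; a rough sketch (device and file names are placeholders, readahead is counted in 512-byte sectors):
# blockdev --getra /dev/sdb
# blockdev --setra 8192 /dev/sdb
# dd if=/mnt/array/bigfile of=/dev/null bs=1M
Re-running the dd read test after raising readahead (8192 sectors = 4 MiB here) gives a quick indication of whether the bottleneck is readahead or the array itself.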
2009 Aug 20
1
what is RAID "background initialization" ??
We have a Dell server (CentOS 4.x) with an MD1000 connected to it. One of the RAID5 arrays (4 internal disks) had a bad hard disk and I replaced it. I saw the following entries in /var/log/messages:
=======================================
Aug 18 15:33:20 host1 Server Administrator: Storage Service EventID: 2049 Array disk removed: Array Disk 0:11 Controller 1, Connector 0
Aug 18 15:34:09 host1 Server Administrator:
2013 Aug 16
3
4 vol raid5 segfault on device delete
I have a 4 device volume with raid5 - trying to remove one of the
devices (plenty of free space) and I get an almost immediate segfault.
Scrub shows no errors, repair show space cache invalid but nothing
else (I remounted with clear cache to be safe). Lots of corrupt on
bdev (for 3 out of 4 drives), but I have no file access issues that I
know of. Thanks!
Output below:
2008 Apr 11
2
question on RAID performance
Hi all,
I was wondering what experiences are out there with using RAID-X for
performance increases. I do use RAID-1 (2 disks) but am interested in
attempts to gain higher R/W performance. Do RAID-5 and the like give
noticeable performance increases?
A significant help for me was using ccache for compiling programs. That was
a real performance increase.
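For anyone wanting to try the same thing, a minimal sketch of the ccache setup (package name and masquerade path assume a stock CentOS/RHEL install; otherwise setting CC="ccache gcc" works too):
# yum install ccache
$ export PATH=/usr/lib64/ccache:$PATH
$ ccache -s
The masquerade directory puts the ccache-wrapped gcc/g++ first in the PATH, and ccache -s shows hit/miss statistics so the benefit can actually be measured.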
Thanks for any suggestions/opinions.
2012 Sep 16
12
Setting up XEN domU causes RAID5 to fail?
This may be a coincidence or not, but I'm building a new XEN system for
myself for work purposes.
I support several different versions of a software that cannot be installed
at the same time, so I decided I wanted to setup a XEN domU for each.
I had 5 spare 500GB drives so I put them in my system and partitioned them
so I have a RAID1 boot, a RAID5 root and a RAID5 images.
I got
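A rough sketch of that layout with mdadm, assuming five identically partitioned disks (device names are placeholders):
# mdadm --create /dev/md0 --level=1 --raid-devices=5 /dev/sd[abcde]1
# mdadm --create /dev/md1 --level=5 --raid-devices=5 /dev/sd[abcde]2
# mdadm --create /dev/md2 --level=5 --raid-devices=5 /dev/sd[abcde]3
md0 becomes the RAID1 /boot, md1 the RAID5 root and md2 the RAID5 images volume; the RAID5 arrays will spend a while on their initial resync after creation.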
2015 Feb 16
4
Centos 7.0 and mismatched swap file
On Mon, Feb 16, 2015 at 6:47 AM, Eliezer Croitoru <eliezer at ngtech.co.il> wrote:
> I am unsure I understand what you wrote.
> "XFS will create multiple AG's across all of those
> devices,"
> Are you comparing md linear/concat to md raid0? and that the upper level XFS
> will run on top them?
Yes to the first question, I'm not understanding the second
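A rough sketch of the two layouts under discussion, with placeholder devices (create one or the other, not both):
# mdadm --create /dev/md0 --level=linear --raid-devices=4 /dev/sd[bcde]1
# mdadm --create /dev/md0 --level=0 --chunk=512 --raid-devices=4 /dev/sd[bcde]1
# mkfs.xfs /dev/md0
On the linear/concat array XFS ends up with whole allocation groups sitting on individual member disks, while on the RAID0 array every AG is striped across all members; which behaves better depends on the workload.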
2013 Feb 18
1
RAID5/6 Implementation - Understanding first
Chris and team, hats off for getting RAID5/6 to at least experimental status. I have been following your work for a year now, and waiting for these days.
I am trying to get my head wrapped around the architecture of BTRFS before I jump in and start recommending code changes to the branch.
What I am trying to understand is the comments in the GIT commit which state:
Read/modify/write is done after the
2007 Jun 19
38
ZFS Scalability/performance
Hello,
I'm quite interested in ZFS, like everybody else I suppose, and am about
to install FBSD with ZFS.
On that note, i have a different first question to start with. I
personally am a Linux fanboy, and would love to see/use ZFS on linux. I
assume that I can use those ZFS disks later with any os that can
work/recognizes ZFS correct? e.g. I can install/setup ZFS in FBSD, and
later use
2015 Feb 18
5
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi,
I just replaced Slackware64 14.1 running on my office's HP Proliant
Microserver with a fresh installation of CentOS 7.
The server has 4 x 250 GB disks.
Every disk is configured like this:
* 200 MB /dev/sdX1 for /boot
* 4 GB /dev/sdX2 for swap
* 248 GB /dev/sdX3 for /
There are supposed to be no spare devices.
/boot and swap are all supposed to be assembled in RAID level 1 across
2009 Oct 09
6
disk I/O problems and Solutions
Hey folks,
CentOS / PostgreSQL shop over here.
I'm hitting 3 of my favorite lists with this, so here's hoping that
the BCC trick is the right way to do it :-)
We've just discovered thanks to a new Munin plugin
http://blogs.amd.co.at/robe/2008/12/graphing-linux-disk-io-statistics-with-munin.html
that our production DB is completely maxing out in I/O for about a 3
hour stretch from
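While the window is open, iostat (from sysstat) gives a quick device-level confirmation of what Munin is graphing; a minimal sketch:
# iostat -dxk 5
Watching the %util and await columns for the database volume over a few intervals shows whether the device is genuinely pegged and how long requests are queuing.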
2008 Feb 01
2
RAID Hot Spare
I've googled this question without finding a great deal of information.
Monday I'm rebuilding a Linux server at work. Instead of purchasing 3
drives for this system I purchased 4, with the intent of creating a hot spare.
Here is my usual setup, which I'll do again but with a hot spare for each
partition.
Create /dev/md0 mount point /boot RAID1 3 drives with 1 hot spare
Create two more raid setups
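A rough sketch of one such array with the spare included, using placeholder partitions:
# mdadm --create /dev/md0 --level=1 --raid-devices=3 --spare-devices=1 /dev/sd[abcd]1
# mdadm --detail /dev/md0
The fourth partition is listed as a spare in the --detail output, and md will pull it in automatically if any of the three active members fails.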
2009 Sep 08
4
Can ZFS simply concatenate LUNs (eg no RAID0)?
Hi,
I do have a disk array that is providing striped LUNs to my Solaris box. Hence I'd like to simply concat those LUNs without adding another layer of striping.
Is this possibile with ZFS?
As far as I understood, if I use
zpool create myPool lun-1 lun-2 ... lun-n
I will get a RAID0 striping where each data block is split across all "n" LUNs.
If that's
2013 Jun 26
1
some feedbacks seen on btrfs
First off, thanks for an awesome file system, it is working well for
my purposes of compressing a filesystem on a small VPS. Woot!
I thought I'd call out a few things (in the hopes of spurring
improvements) I'd seen about btrfs (in case they weren't common
knowledge...):
2009 Jan 12
1
ZFS size is different ?
Hi all,
I have 2 questions about ZFS.
1. I created a snapshot in my pool1/data1 and used zfs send/recv to copy it to pool2/data2, but I found that the USED shown by zfs list is different:
NAME USED AVAIL REFER MOUNTPOINT
pool2/data2 160G 1.44T 159G /pool2/data2
pool1/data 176G 638G 175G /pool1/data1
It keeps about 30,000,000 files.
The content of p_pool/p1 and backup/p_backup
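A rough sketch of breaking the USED figures down further to see where the 16 GB difference comes from (dataset names as given in the message; the usedby* properties need a reasonably recent zfs version):
# zfs get used,referenced,usedbysnapshots,usedbychildren pool1/data1 pool2/data2
# zfs list -t snapshot -r pool1 pool2
Space held by snapshots on the source dataset, plus differing compression or copies settings between the pools, commonly accounts for this kind of gap after a send/receive.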