Displaying 20 results from an estimated 287 matches for "raid6".
2013 Nov 24
3
The state of btrfs RAID6 as of kernel 3.13-rc1
Hi
What is the general state of btrfs RAID6 as of kernel 3.13-rc1 and the
latest btrfs tools?
More specifically:
- Is it able to correct errors during scrubs?
- Is it able to transparently handle disk failures without downtime?
- Is it possible to convert btrfs RAID10 to RAID6 without recreating the fs?
- Is it possible to add/remove drives...
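On the conversion question, btrfs does profile changes through balance filters, so RAID10 to RAID6 would in principle be a single online rebalance. A sketch, assuming the filesystem is mounted at /mnt (and given how young raid56 is in 3.13, not something to aim at production data):
# btrfs balance start -dconvert=raid6 -mconvert=raid6 /mnt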
2013 May 23
11
raid6: rmw writes all the time?
Hi all,
we got a new test system here and I just also tested btrfs raid6 on
that. Write performance is slightly lower than hw-raid (LSI megasas) and
md-raid6, but it would probably be much better than either of the two if
it didn't read all the data during the writes. Is this a known issue? This
is with linux-3.9.2.
Thanks,
Bernd
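For what it's worth, the read-modify-write penalty only disappears when writes cover full stripes. One way to sanity-check, as a sketch assuming btrfs's 64KiB stripe element and 4 data disks (full stripe = 256KiB):
# dd if=/dev/zero of=/mnt/btrfs/stripe-test bs=256k count=4096 oflag=direct
If iostat still shows reads during a purely sequential, stripe-aligned write, the reads come from somewhere other than ordinary RMW.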
2009 May 25
1
raid5 or raid6 level cluster
Hello,
Is there any way to create a raid6 or raid5 level glusterfs installation?
From the docs I understood that I can do a raid1-based glusterfs installation or
raid0 (striping data to all servers) and a raid10-based solution, but the raid10-
based solution is not cost effective because it needs too many servers.
Do you have a plan for keep...
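GlusterFS had no parity-based volume type at the time; the closest analogue, erasure-coded "disperse" volumes, arrived later (GlusterFS 3.6). A sketch of that eventual syntax, with hypothetical host and brick names:
# gluster volume create testvol disperse 6 redundancy 2 \
      server{1..6}:/bricks/brick1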
2007 Nov 15
0
md raid6 recommended?
I notice that raid6 is recommended in the manual. eg. "...RAID6 is a must."
http://manual.lustre.org/manual/LustreManual16_HTML/DynamicHTML-11-1.html
which I found a bit surprising given that in Dec '06 Peter Braam said
on this list "Some of our customers experienced data corruption in the
R...
2013 Dec 10
2
gentoo linux, problem starting VMs when cache=none
...t-glib-0.1.7,
virt-manager 0.10.0-r1
when i set on virtual machine "cache=none" in the disk-menu, the machine
faults to start with:
<<
Error starting the domain: internal error: process exited while
connecting to the monitor: qemu-system-x86_64: -drive
file=/raid6/virtual/debian-kaspersky.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,aio=native:
could not open disk image /raid6/virtual/debian-kaspersky.img: Invalid
argument
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 100, in
cb...
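cache=none makes qemu open the image with O_DIRECT, and an underlying filesystem (or loop/FUSE layer) that rejects O_DIRECT fails the open with EINVAL, which libvirt reports as "Invalid argument". A quick check, assuming the image really lives under /raid6/virtual:
# dd if=/dev/zero of=/raid6/virtual/directio-test bs=4k count=1 oflag=direct
# rm /raid6/virtual/directio-test
If the dd fails the same way, the problem is the storage stack, not libvirt.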
2005 Oct 20
1
RAID6 in production?
Is anyone using RAID6 in production? In moving from hardware RAID on my dual
3ware 7500-8 based systems to md, I decided I'd like to go with RAID6
(since md is less tolerant of marginal drives than is 3ware). I did some
benchmarking and was getting decent speeds with a 128KiB chunksize.
So the next step was fai...
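For reference, a minimal sketch of creating such an array with an explicit chunk size (device names and count are assumptions; mdadm takes --chunk in KiB):
# mdadm --create /dev/md0 --level=6 --raid-devices=8 --chunk=128 /dev/sd[b-i]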
2013 Jun 11
1
cluster.min-free-disk working?
...ssed something here?
Thanks!
/jon
***************gluster volume info************************
Volume Name: glusterKumiko
Type: Distribute
Volume ID: 8f639d0f-9099-46b4-b597-244d89def5bd
Status: Started
Number of Bricks: 4
Transport-type: tcp,rdma
Bricks:
Brick1: kumiko01:/mnt/raid6
Brick2: kumiko02:/mnt/raid6
Brick3: kumiko03:/mnt/raid6
Brick4: kumiko04:/mnt/raid6
Options Reconfigured:
cluster.min-free-disk: 20%
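For anyone retracing this: the option is set per volume, and it only influences where the distribute translator places new files; it does not migrate existing data.
# gluster volume set glusterKumiko cluster.min-free-disk 20%
# gluster volume info glusterKumiko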
2009 Aug 06
10
RAID[56] status
...ay
the merge:
- Better rebuild support -- if we lose a disk and add a replacement,
we want to recreate only the contents of that disk, rather than
allocating a new chunk elsewhere and then rewriting _everything_.
- Support for more than 2 redundant blocks per stripe (RAID[789] or
RAID6[³⁴⁵] or whatever we'll call it).
- RAID[56789]0 support.
- Clean up the discard support to do the right thing.
--
David Woodhouse Open Source Technology Centre
David.Woodhouse@intel.com Intel Corporation
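For reference, the algebra behind "more than 2 redundant blocks": the in-kernel RAID6 code (per H. Peter Anvin's raid6 paper in the kernel tree) computes two syndromes over GF(2^8) with generator g,
  P = d_0 ⊕ d_1 ⊕ … ⊕ d_(n-1)
  Q = g^0·d_0 ⊕ g^1·d_1 ⊕ … ⊕ g^(n-1)·d_(n-1)
and the usual k-redundancy generalization replaces them with syndromes S_j = ⊕_i (g^j)^i·d_i for j = 0 … k-1, a Vandermonde code that recovers from any k lost blocks as long as the disk count stays below the order of g (255). That is the textbook construction, not necessarily what btrfs would implement.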
2012 Nov 13
1
mdX and mismatch_cnt when building an array
CentOS 6.3, x86_64.
I have noticed when building a new software RAID-6 array on CentOS 6.3
that the mismatch_cnt grows monotonically while the array is building:
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md11 : active raid6 sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
3904890880 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
[==================>..] resync = 90.2% (880765600/976222720) finish=44.6min speed=35653K/sec
# cat /sys/block/md11/md/mismatc...
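For what it's worth, a growing mismatch_cnt during the initial resync is expected behaviour: md is computing and writing parity for the first time, so the counter only becomes meaningful after a check run on a completed array. A sketch using the md sysfs interface:
# echo check > /sys/block/md11/md/sync_action
# cat /sys/block/md11/md/mismatch_cnt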
2007 Jun 14
0
(no subject)
...od: missing operand after `b'
Special files require major and minor device numbers.
Try `/bin/mknod --help' for more information.
raid5: automatically using best checksumming function: pIII_sse
pIII_sse : 1352.800 MB/sec
raid5: using function: pIII_sse (1352.800 MB/sec)
raid6: int32x1 635 MB/s
raid6: int32x2 821 MB/s
raid6: int32x4 686 MB/s
raid6: int32x8 557 MB/s
raid6: mmxx1 1709 MB/s
raid6: mmxx2 1978 MB/s
raid6: sse1x1 1044 MB/s
raid6: sse1x2 1156 MB/s
raid6: sse2x1 1982 MB/s
raid6: sse2x2 2062 MB/s
raid6: using algorithm ss...
2016 Dec 12
2
raid 6 on centos 7
I have 6 SATA HDDs of 2 TB each. I want to install CentOS 7 on them in RAID 6 mode.
How can I do it?
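Anaconda's manual partitioning can build the md RAID6 interactively, and the same layout can be scripted in a kickstart. A sketch, with disk names and sizes as assumptions (keeping /boot on RAID1, which the bootloader copes with better):
part raid.01 --size=512 --ondisk=sda
part raid.02 --size=512 --ondisk=sdb
raid /boot --device=boot --level=RAID1 raid.01 raid.02
part raid.11 --size=1 --grow --ondisk=sda
part raid.12 --size=1 --grow --ondisk=sdb
part raid.13 --size=1 --grow --ondisk=sdc
part raid.14 --size=1 --grow --ondisk=sdd
part raid.15 --size=1 --grow --ondisk=sde
part raid.16 --size=1 --grow --ondisk=sdf
raid / --device=root --level=RAID6 --fstype=xfs raid.11 raid.12 raid.13 raid.14 raid.15 raid.16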
2009 Aug 05
3
RAID[56] with arbitrary numbers of "parity" stripes.
We discussed using the top bits of the chunk type field to store a
number of redundant disks -- so instead of RAID5, RAID6, etc., we end up
with a single 'RAID56' flag, and the amount of redundancy is stored
elsewhere.
This attempts it, but I hate it and don't really want to do it. The type
field is designed as a bitmask, and _used_ as a bitmask in a number of
places -- I think it's...
2017 Feb 17
3
RAID questions
...probably not restartable, you quite likely
> will lose the whole volume)
Doesn't mdraid support changing RAID levels? I think it will even do it
reasonably safely (though still better not to have a power failure!). I
have a vague memory of adding a drive to a RAID5 and converting it to a
RAID6 but I could be misremembering.
--keith
--
kkeller at wombat.san-francisco.ca.us
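For the record, mdraid does support this online. A RAID5-to-RAID6 conversion like the one remembered above looks roughly as follows (device names assumed; the backup file covers the critical section of the reshape):
# mdadm --add /dev/md0 /dev/sde
# mdadm --grow /dev/md0 --level=6 --raid-devices=5 --backup-file=/root/md0-reshape.bak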
2009 Sep 24
4
mdadm size issues
Hi,
I am trying to create a 10 drive raid6 array. OS is Centos 5.3 (64 Bit)
All 10 drives are 2T in size.
device sd{a,b,c,d,e,f} are on my motherboard
device sd{i,j,k,l} are on a pci express areca card (relevant lspci info below)
#lspci
06:0e.0 RAID bus controller: Areca Technology Corp. ARC-1210 4-Port PCI-Express to SATA RAID Controll...
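One likely culprit on CentOS 5 with 2T members: mdadm there defaults to the old 0.90 superblock, which caps each component device at 2TiB. Forcing a 1.x superblock avoids the limit; a sketch with the device set above:
# mdadm --create /dev/md0 --level=6 --raid-devices=10 --metadata=1.2 /dev/sd[a-f] /dev/sd[i-l]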
2016 Oct 25
3
"Shortcut" for creating a software RAID 60?
Hello all,
Testing stuff virtually over here before taking it to the physical servers.
I found a shortcut for creating a software RAID 10 ("--level=10") device in
CentOS 6.
Looking at the below, I don't see anything about a shortcut for RAID 60.
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/htm
l/Storage_Administration_Guide/s1-raid-levels.html
Is RAID
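There is no --level=60 shortcut in md; RAID 60 is built by hand as a RAID 0 over two (or more) RAID 6 arrays. A sketch with assumed device names:
# mdadm --create /dev/md1 --level=6 --raid-devices=6 /dev/sd[b-g]
# mdadm --create /dev/md2 --level=6 --raid-devices=6 /dev/sd[h-m]
# mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md1 /dev/md2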
2011 Apr 12
17
40TB File System Recommendations
Hello All
I have a brand spanking new 40TB Hardware Raid6 array to play around
with. I am looking for recommendations for which filesystem to use. I am
trying not to break this up into multiple file systems as we are going
to use it for backups. Other factors are performance and reliability.
CentOS 5.6
array is /dev/sdb
So here is what I have tried s...
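Whichever filesystem wins, aligning it to the RAID geometry matters at this size. A sketch for XFS, where the stripe unit and width are assumptions to replace with the controller's real values:
# mkfs.xfs -d su=256k,sw=14 /dev/sdb
# mount -o inode64 /dev/sdb /backup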
2009 Mar 31
2
DomU console appears to hang when starting
...raid1 personality registered for level 1
[ 0.747314] xor: automatically using best checksumming function: generic_sse
[ 0.765850] generic_sse: 3034.000 MB/sec
[ 0.765855] xor: using function: generic_sse (3034.000 MB/sec)
[ 0.766563] async_tx: api initialized (async)
[ 0.833914] raid6: int64x1 1456 MB/s
[ 0.901926] raid6: int64x2 2047 MB/s
[ 0.969941] raid6: int64x4 2176 MB/s
[ 1.037972] raid6: int64x8 1912 MB/s
[ 1.105987] raid6: sse2x1 2151 MB/s
[ 1.174003] raid6: sse2x2 3157 MB/s
[ 1.242040] raid6: sse2x4 3031 MB/s
[ 1.242045] raid6: usin...
2014 Aug 28
2
Random Disk I/O Tests
I have two openvz servers running Centos 6.x both with 32GB of RAM.
One is an Intel Xeon E3-1230 quad core with two 4TB 7200 SATA drives
in software RAID1. The other is an old HP DL380 dual quad core with 8
750GB 2.5" SATA drives in hardware RAID6. I want to figure out which
one has better random I/O performance to host a busy container. The
DL380 currently has one failed drive in the RAID6 array until I get
down to replace it, will that degrade performance? Is there an easy
way to test disk I/O? On a plain Gigabyte file copy the softwar...
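fio is the usual tool for this kind of comparison; a sketch of a random-read run (path, size and runtime are assumptions to adjust):
# fio --name=randread --filename=/vz/fio.test --size=8g \
      --rw=randread --bs=4k --direct=1 --ioengine=libaio \
      --iodepth=32 --runtime=60 --time_based
And yes, a degraded RAID6 will hurt random reads: anything that lands on the missing drive's chunks has to be reconstructed from parity on the fly.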
2013 Dec 10
0
Re: gentoo linux, problem starting VMs when cache=none
...cache=none" in the disk-menu, the
> >>machine faults to start with:
> >>
> >><<
> >>Error starting the domain: internal error: process exited while
> >>connecting to the monitor: qemu-system-x86_64:
> >>-drive file=/raid6/virtual/debian-kaspersky.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,aio=native:
> >>could not open disk image /raid6/virtual/debian-kaspersky.img:
> >>Invalid argument
> >>
> >>
> >>Traceback (most recent call last):
> >> File "...
2014 Jun 11
2
Re: libguestfs supermin error
...05f]
> [ 0.243614] pci 0000:00:00.0: PCI bridge to [bus 01]
> [ 0.244871] pci 0000:00:00.0: bridge window [io 0x10000-0x10fff]
> [ 0.246514] pci 0000:00:00.0: bridge window [mem 0xc0100000-0xc01fffff]
> [ 0.274019] bio: create slab <bio-0> at 0
> [ 0.341414] raid6: altivecx1 1458 MB/s
> [ 0.409673] raid6: altivecx2 1815 MB/s
> [ 0.477900] raid6: altivecx4 2740 MB/s
> [ 0.546168] raid6: altivecx8 2824 MB/s
> [ 0.615756] raid6: int64x1 454 MB/s
> [ 0.683978] raid6: int64x2 828 MB/s
> [ 0.752252] raid6: int64x4 12...