similar to: RAID[56] status

Displaying 20 results from an estimated 1000 matches similar to: "RAID[56] status"

2009 Aug 05
3
RAID[56] with arbitrary numbers of "parity" stripes.
We discussed using the top bits of the chunk type field to store the number of redundant disks -- so instead of RAID5, RAID6, etc., we end up with a single 'RAID56' flag, and the amount of redundancy is stored elsewhere. This attempts it, but I hate it and don't really want to do it. The type field is designed as a bitmask, and _used_ as a bitmask in a number of
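
A minimal sketch of the encoding the post describes. The flag bit, mask, and shift values here are hypothetical, not the real btrfs chunk type bits:

#include <stdint.h>

#define CHUNK_TYPE_RAID56      (1ULL << 7)  /* hypothetical flag bit */
#define CHUNK_REDUNDANCY_SHIFT 56           /* hypothetical: top byte */
#define CHUNK_REDUNDANCY_MASK  (0xffULL << CHUNK_REDUNDANCY_SHIFT)

/* Pack a redundancy count (number of parity stripes) into the
 * top bits of the chunk type field. */
static inline uint64_t chunk_type_set_redundancy(uint64_t type, unsigned n)
{
        type &= ~CHUNK_REDUNDANCY_MASK;
        return type | CHUNK_TYPE_RAID56
                    | ((uint64_t)n << CHUNK_REDUNDANCY_SHIFT);
}

static inline unsigned chunk_type_redundancy(uint64_t type)
{
        return (type & CHUNK_REDUNDANCY_MASK) >> CHUNK_REDUNDANCY_SHIFT;
}

This also illustrates the objection in the post: once the top bits hold an integer, the field is no longer a pure bitmask, so generic flag tests can misfire on it.
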
2013 Aug 14
23
[RFC] btrfs-progs: fix sparse checking and warnings
Hi gang, I was a little surprised to see that patch go by recently which fixed an endian bug. I went to see how sparse checking looked and it was.. broken. I got it going again in my Fedora environment. Most of the patches are just cleanups, but there *were* three real bugs lurking in all that sparse warning spam. So I maintain that it's worth our time to keep it going and fix
2012 Jan 11
12
[PATCH 00/11] Btrfs: some patches for 3.3
The biggest one is a fix for fstrim, and there's a fix for the on-disk free space cache. Others are small fixes and cleanups. The last three were sent weeks ago. The patchset is also available in this repo: git://repo.or.cz/linux-btrfs-devel.git for-chris Note there's a small conflict with Al Viro's vfs changes. Li Zefan (11): Btrfs: add pinned extents to
2011 May 02
5
[PATCH v3 0/3] btrfs: quasi-round-robin for chunk allocation
In a multi device setup, the chunk allocator currently always allocates chunks on the devices in the same order. This leads to a very uneven distribution, especially with RAID1 or RAID10 and an uneven number of devices. This patch always sorts the devices before allocating, and allocates the stripes on the devices with the most available space, as long as there is enough space available. In a low
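
A minimal sketch of the allocation order described above, using a hypothetical device struct; the real allocator in fs/btrfs/volumes.c tracks much more state:

#include <stdlib.h>
#include <stdint.h>

struct dev {
        int id;
        uint64_t avail;   /* unallocated bytes on this device */
};

/* Sort descending by available space so stripes land on the
 * emptiest devices first. */
static int cmp_avail(const void *a, const void *b)
{
        const struct dev *da = a, *db = b;
        if (da->avail > db->avail) return -1;
        if (da->avail < db->avail) return 1;
        return 0;
}

/* Take the first num_stripes devices after sorting; each must
 * still have at least stripe_size bytes free. Returns how many
 * stripes could be placed. */
static int pick_devices(struct dev *devs, int ndevs,
                        int num_stripes, uint64_t stripe_size)
{
        int i;

        qsort(devs, ndevs, sizeof(*devs), cmp_avail);
        for (i = 0; i < num_stripes && i < ndevs; i++)
                if (devs[i].avail < stripe_size)
                        break;
        return i;
}
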
2011 Apr 12
3
[PATCH v2 0/3] btrfs: quasi-round-robin for chunk allocation
In a multi device setup, the chunk allocator currently always allocates chunks on the devices in the same order. This leads to a very uneven distribution, especially with RAID1 or RAID10 and an uneven number of devices. This patch always sorts the devices before allocating, and allocates the stripes on the devices with the most available space, as long as there is enough space available. In a low
2011 Mar 01
5
btrfs wishlist
Hi all Having managed ZFS for about two years, I want to post a wishlist. INCLUDED IN ZFS - Mirror existing single-drive filesystem, as in 'zfs attach' - RAIDz-stuff - single and hopefully multiple-parity RAID configuration with block-level checksumming - Background scrub/fsck - Pool-like management with multiple RAIDs/mirrors (VDEVs) - Autogrow as in ZFS autoexpand NOT
2015 Dec 24
4
[PATCH] btrfs: Fix logical to physical block address mapping
The current btrfs support did not handle multiple stripes stored in chunk items, so it skipped the physical addresses that were needed to do the mapping. Besides, the chunk tree may contain DEV_ITEM keys which store information on all of the underlying block devices, so we must skip them instead of finishing the lookup. The bug was reproduced with btrfs-progs v4.2.2. Cc: Gene Cumm <gene.cumm
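
A minimal sketch of the logical-to-physical mapping for a striped chunk, with hypothetical field names (the real code walks CHUNK_ITEM entries in the chunk tree):

#include <stdint.h>

struct chunk {
        uint64_t logical;        /* logical start of the chunk */
        uint64_t stripe_len;     /* bytes per stripe, e.g. 64K */
        int      num_stripes;    /* stripes stored in this chunk item */
        uint64_t dev_offset[16]; /* physical start on each device */
};

/* Map a logical address inside the chunk to a physical address
 * on one of its devices. Returns the stripe (device) index. */
static int map_block(const struct chunk *c, uint64_t logical,
                     uint64_t *physical)
{
        uint64_t off = logical - c->logical;
        uint64_t stripe_nr = off / c->stripe_len;
        int index = stripe_nr % c->num_stripes;

        *physical = c->dev_offset[index]
                  + (stripe_nr / c->num_stripes) * c->stripe_len
                  + off % c->stripe_len;
        return index;
}

The bug described above roughly amounts to ignoring all but one stripe, so any address past the first stripe maps to the wrong physical location.
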
2013 May 23
11
raid6: rmw writes all the time?
Hi all, we got a new test system here and I just also tested btrfs raid6 on that. Write performance is slightly lower than hw-raid (LSI megasas) and md-raid6, but it would probably be much better than either of the two if it wouldn't read all the time during the writes. Is this a known issue? This is with linux-3.9.2. Thanks, Bernd
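
For context, RAID6 only avoids the read-modify-write (rmw) path when a write covers a full stripe; for anything smaller, the old data and parity must be read back to recompute P and Q. A minimal sketch of that decision, assuming a simple layout of data_disks * stripe_len bytes per full stripe:

#include <stdint.h>
#include <stdbool.h>

/* True if [offset, offset+len) covers whole stripes, so parity
 * can be computed from the new data alone and no reads are
 * needed before writing. */
static bool is_full_stripe_write(uint64_t offset, uint64_t len,
                                 int data_disks, uint64_t stripe_len)
{
        uint64_t full = (uint64_t)data_disks * stripe_len;

        return (offset % full == 0) && (len % full == 0);
}

If the workload in the post was large sequential writes, seeing reads anyway suggests the implementation took the rmw path even for aligned full-stripe writes.
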
2017 Feb 17
3
RAID questions
On 2017-02-15, John R Pierce <pierce at hogranch.com> wrote: > On 2/14/2017 4:48 PM, tdukes at palmettoshopper.com wrote: > >> 3 - Can additional drive(s) be added later with a change in RAID level >> without current data loss? > > Only some systems support that sort of restriping, and it's a dangerous > activity (if the power fails or system crashes midway through
2013 Oct 16
3
trivial cleanups
Hi gang, Here's some trivial cleanups that I've built up while reading through the code. They've been run through xfstests -g quick. - z
2016 Oct 25
3
"Shortcut" for creating a software RAID 60?
Hello all, Testing stuff virtually over here before taking it to the physical servers. I found a shortcut for creating a software RAID 10 ("--level=10") device in CentOS 6. Looking at the below, I don't see anything about a shortcut for RAID 60. https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/s1-raid-levels.html Is RAID
2012 Mar 20
13
[PATCH 0 of 3 v2] PV-GRUB: add support for ext4 and btrfs
Hi, The following patches add support for ext4 and btrfs to PV-GRUB. These patches are taken nearly verbatim from those provided by Fedora and Gentoo. We've been using these patches for the PV-GRUB images available in EC2 for some time now with no problems. Changes from v1: - Makefile has been changed to check the exit code from patch - The btrfs patch has been rebased to apply
2011 Mar 08
6
[PATCH v1 0/6] btrfs: scrub
This series adds an initial implementation of scrub. It is quite straightforward. Userspace issues an ioctl for each device in the fs. For each device, it enumerates the allocated device chunks. For each chunk, the contained extents are enumerated and the data checksums fetched. The extents are read sequentially and the checksums verified. If an error occurs (checksum or EIO), a good copy
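
A minimal sketch of the control flow the series describes, with hypothetical helper names standing in for the real kernel interfaces:

#include <stdint.h>

struct chunk;  /* an allocated device chunk */
struct extent; /* an extent with a stored checksum */

/* Hypothetical helpers: enumerate chunks and extents, read data,
 * fetch the stored checksum, and repair from a good mirror copy. */
extern struct chunk  *next_chunk(int devid, struct chunk *prev);
extern struct extent *next_extent(struct chunk *c, struct extent *prev);
extern int      read_extent(struct extent *e, void *buf); /* 0 or -EIO */
extern uint32_t stored_csum(struct extent *e);
extern uint32_t compute_csum(const void *buf, struct extent *e);
extern void     rewrite_from_good_copy(struct extent *e);

static void scrub_device(int devid, void *buf)
{
        struct chunk *c = NULL;

        while ((c = next_chunk(devid, c))) {
                struct extent *e = NULL;

                while ((e = next_extent(c, e))) {
                        /* Read sequentially and verify; on EIO or a
                         * checksum mismatch, repair from a good copy. */
                        if (read_extent(e, buf) != 0 ||
                            compute_csum(buf, e) != stored_csum(e))
                                rewrite_from_good_copy(e);
                }
        }
}
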
2009 Sep 24
4
mdadm size issues
Hi, I am trying to create a 10 drive raid6 array. OS is CentOS 5.3 (64 Bit) All 10 drives are 2T in size. Devices sd{a,b,c,d,e,f} are on my motherboard; devices sd{i,j,k,l} are on a PCI Express Areca card (relevant lspci info below) #lspci 06:0e.0 RAID bus controller: Areca Technology Corp. ARC-1210 4-Port PCI-Express to SATA RAID Controller The controller is set to JBOD the drives. All
2010 Jan 04
0
[RFC 03/12 RESEND PATCH] Btrfs: Reorder __btrfs_map_block to make code more efficient.
Allocate the multi structure only after we know the correct size, and do not do unneeded steps when we are only returning the length. Signed-off-by: jim owens <jowens@hp.com> --- fs/btrfs/volumes.c | 65 +++++++++++++++++++-------------------------------- 1 files changed, 24 insertions(+), 41 deletions(-) diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c index 5af76fc..e6599ef 100644 ---
2011 Apr 12
17
40TB File System Recommendations
Hello All I have a brand spanking new 40TB Hardware Raid6 array to play around with. I am looking for recommendations for which filesystem to use. I am trying not to break this up into multiple file systems as we are going to use it for backups. Other factors are performance and reliability. CentOS 5.6 array is /dev/sdb So here is what I have tried so far reiserfs is limited to 16TB ext4
2013 Nov 24
3
The state of btrfs RAID6 as of kernel 3.13-rc1
Hi What is the general state of btrfs RAID6 as of kernel 3.13-rc1 and the latest btrfs tools? More specifically: - Is it able to correct errors during scrubs? - Is it able to transparently handle disk failures without downtime? - Is it possible to convert btrfs RAID10 to RAID6 without recreating the fs? - Is it possible to add/remove drives to/from a RAID6 array? Regards, Hans-Kristian
2011 Nov 23
2
stripe alignment consideration for btrfs on RAID5
Hiya, is there any recommendation out there to setup a btrfs FS on top of hardware or software raid5 or raid6 wrt stripe/stride alignment? From mkfs.btrfs, it doesn't look like there's much that can be adjusted that would help, and what I'm asking might not even make sense for btrfs but I thought I'd just ask. Thanks, Stephane
2016 Oct 26
1
"Shortcut" for creating a software RAID 60?
On 10/25/2016 11:54 AM, Gordon Messmer wrote: > If you built a RAID0 array of RAID6 arrays, then you'd fail a disk by > marking it failed and removing it from whichever RAID6 array it was a > member of, in the same fashion as you'd remove it from any other array > type. FWIW, what I've done in the past is build the raid 6's with mdraid, then use LVM to stripe them
2013 Apr 11
6
RAID 6 - opinions
I'm setting up this huge RAID 6 box. I've always thought of hot spares, but I'm reading things that are comparing RAID 5 with a hot spare to RAID 6, implying that the latter doesn't need one. I *certainly* have enough drives to spare in this RAID box: 42 of 'em, so two questions: should I assign one or more hot spares, and, if so, how many? mark