Displaying 20 results from an estimated 7000 matches similar to: "stripe alignment consideration for btrfs on RAID5"
2011 Sep 06
3
btrfs-delalloc - threaded?
Hi all.
I was doing some testing with writing out data to a BTRFS filesystem
with the compress-force option. With 1 program running, I saw
btrfs-delalloc taking about 1 CPU worth of time, much as could be
expected. I then started up 2 programs at the same time, writing data
to the BTRFS volume. btrfs-delalloc still only used 1 CPU worth of
time. Is btrfs-delalloc threaded, to where it can use
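A minimal way to reproduce this kind of test, assuming a scratch device and mount point (/dev/sdX and /mnt/test are placeholders), might look like:
mkfs.btrfs /dev/sdX
mount -o compress-force=zlib /dev/sdX /mnt/test
# two concurrent writers, to see whether the compression work spreads across CPUs
dd if=/dev/zero of=/mnt/test/f1 bs=1M count=4096 &
dd if=/dev/zero of=/mnt/test/f2 bs=1M count=4096 &
wait
top -H     # watch how many btrfs-delalloc threads accumulate CPU time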
2009 May 25
1
raid5 or raid6 level cluster
Hello,
Is there any way to create a raid6 or raid5 level glusterfs installation?
From the docs I understood that I can do a raid1-based glusterfs installation or
raid0 (striping data to all servers) and a raid10-based solution, but the raid10-based
solution is not cost effective because it needs too many servers.
Do you have a plan to keep one or two servers as parity for the whole
glusterfs system
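For reference, later GlusterFS releases added dispersed volumes, which provide RAID5/6-like erasure coding across servers. A hedged sketch, with hypothetical server names and brick paths, on a version that supports the disperse volume type:
gluster volume create dispvol disperse 6 redundancy 2 server{1..6}:/data/brick
gluster volume start dispvol
Here four bricks' worth of capacity holds data and two hold redundancy, roughly analogous to RAID6 across six servers.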
2009 Aug 18
2
OT: RAID5, RAID50 and RAID60 performance??
We have several DELL servers with MD1000 enclosures connected to them. The servers will run CentOS 5.x x86_64. My questions are:
1. Which configuration has better performance: RAID5, RAID50 or RAID60?
2. How much is the performance difference?
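One way to answer this empirically is to run the same benchmark against each layout; a rough sketch using fio (mount point and sizes are placeholders, and fio is assumed to be installed):
fio --name=seqwrite --directory=/mnt/raidtest --rw=write --bs=1M \
    --size=8G --numjobs=4 --ioengine=libaio --direct=1 --group_reporting
# repeat with --rw=randwrite --bs=8k to expose the small-write parity penalty of RAID5/50/60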
2009 Aug 05
3
RAID[56] with arbitrary numbers of "parity" stripes.
We discussed using the top bits of the chunk type field to store the
number of redundant disks -- so instead of RAID5, RAID6, etc., we end up
with a single 'RAID56' flag, and the amount of redundancy is stored
elsewhere.
This attempts it, but I hate it and don't really want to do it. The type
field is designed as a bitmask, and _used_ as a bitmask in a number of
2017 Feb 17
3
RAID questions
On 2017-02-15, John R Pierce <pierce at hogranch.com> wrote:
> On 2/14/2017 4:48 PM, tdukes at palmettoshopper.com wrote:
>
>> 3 - Can additional drive(s) be added later with a change in RAID level
>> without current data loss?
>
> Only some systems support that sort of restriping, and it's a dangerous
> activity (if the power fails or system crashes midway through
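With Linux md this kind of reshape is possible, but it should be done with a backup file and current backups; a cautious sketch with placeholder device names:
mdadm --add /dev/md0 /dev/sde1                      # new disk joins as a spare
mdadm --grow /dev/md0 --level=6 --raid-devices=5 \
      --backup-file=/root/md0-reshape.backup        # reshape 4-disk RAID5 into 5-disk RAID6
cat /proc/mdstat                                    # watch reshape progress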
2013 Jun 16
1
btrfs balance resume + raid5/6
Greetings!
I'm testing raid6, and recently added two drives.
I haven't been able to properly resume a balance operation: the number of
total chunks is always too low.
It seems that the balance starts and pauses properly, but always resumes
with ~7 chunks.
Here's an example:
vendikar tim # uname -r
3.10.0-031000rc4-generic
vendikar tim # btrfs fi sho
Label:
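For reference, the pause/resume cycle being tested looks roughly like this (the mount point is a placeholder):
btrfs balance start /mnt/pool &      # full rebalance after adding the two devices
btrfs balance pause /mnt/pool
btrfs balance resume /mnt/pool
btrfs balance status /mnt/pool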
2007 May 01
2
Raid5 issues
So when I couldn't get the raid10 to work, I decided to do raid5.
Everything installed and looked good. I left it overnight to rebuild
the array, and when I came in this morning, everything was frozen. Upon
reboot, it said that 2 of the 4 devices for the raid5 array failed.
Luckily, I didn't have any data on it, but how do I know that the same
thing won't happen when I have
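When md marks several members of a RAID5 as failed after a crash, the usual first step is to compare event counters before forcing anything; a cautious sketch with placeholder device names:
cat /proc/mdstat
mdadm --examine /dev/sd[b-e]1 | grep -E 'Events|State'
# only if the event counts are close, and the data is expendable or backed up:
mdadm --assemble --force /dev/md0 /dev/sd[b-e]1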
2012 Sep 16
12
Setting up XEN domU causes RAID5 to fail?
This may be a coincidence or not, but I'm building a new XEN system for
myself for work purposes.
I support several different versions of a software that cannot be installed
at the same time, so I decided I wanted to setup a XEN domU for each.
I had 5 spare 500GB drives so I put them in my system and partitioned them
so I have a RAID1 boot, a RAID5 root and a RAID5 images.
I got
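The layout described above could be created along these lines (hypothetical partition names; each disk carries a small boot partition plus two larger ones):
mdadm --create /dev/md0 --level=1 --raid-devices=5 /dev/sd[b-f]1   # /boot, RAID1
mdadm --create /dev/md1 --level=5 --raid-devices=5 /dev/sd[b-f]2   # root, RAID5
mdadm --create /dev/md2 --level=5 --raid-devices=5 /dev/sd[b-f]3   # images, RAID5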
2014 Mar 08
2
Re: questions regarding file-system optimization for software-RAID array
Andreas,
why is it relevant only in case of RAID5 or RAID6?
regards,
Martin
On Fri, Mar 7, 2014 at 5:57 PM, Andreas Dilger <adilger@dilger.ca> wrote:
> Note that stride and stripe width only make sense for RAID-5/6 arrays.
> For RAID-1 it doesn't really matter.
>
> Cheers, Andreas
>
>> On Mar 6, 2014, at 13:46, Martin T <m4rtntns@gmail.com> wrote:
>>
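As a concrete, hypothetical example of why this only matters for parity RAID: on a 4-disk RAID5 with a 64 KiB chunk and 4 KiB filesystem blocks there are 3 data disks per stripe, so stride = 64/4 = 16 blocks and stripe-width = 16 * 3 = 48 blocks:
mkfs.ext4 -b 4096 -E stride=16,stripe-width=48 /dev/md0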
2011 May 14
0
data alignment for SSD: Stripe size or sector size given with -s?
Hi!
> [ANNOUNCE] Btrfs v0.9
> [...]
> * Stripe size parameter to mkfs.btrfs (-s size_in_bytes). Extents will
> be aligned to the stripe size for performance.
> [...]
http://fixunix.com/kernel/258991-[announce]-btrfs-v0-9-a.html
versus
> -s, --sectorsize size
> Specify the sectorsize, the minimum block allocation.
(man mkfs.btrfs with btrfs-tools
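On current btrfs-progs the -s option only sets the sector size; alignment for SSDs is normally handled when partitioning. A hedged sketch with a placeholder device name:
parted -s -a optimal /dev/sdX mklabel gpt mkpart primary 1MiB 100%
mkfs.btrfs -s 4096 /dev/sdX1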
2013 May 23
11
raid6: rmw writes all the time?
Hi all,
we got a new test system here and I just also tested btrfs raid6 on
that. Write performance is slightly lower than hw-raid (LSI megasas) and
md-raid6, but it probably would be much better than either of the two if
it wouldn't read all the time during the writes. Is this a known issue? This
is with linux-3.9.2.
Thanks,
Bernd
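A quick way to confirm whether the writes really trigger read-modify-write is to watch per-device read traffic while issuing large, stripe-sized writes; a rough sketch with placeholder devices and paths:
iostat -xm 1 /dev/sd[b-g] &
dd if=/dev/zero of=/mnt/r6/bigfile bs=256K count=40960 oflag=direct
# rkB/s staying near zero during the dd suggests full-stripe writes;
# sustained reads on the member disks point at RMW.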
2009 Sep 24
5
OT: What's wrong with RAID5
Hi all,
Sorry for the OT.
I've got an IBM N3300-A10 NAS. It runs Data Ontap 7.2.5.1.
The problem is, from the docs it says that it only supports either
RAID-DP or RAID4.
What I want to achieve is maximum storage capacity, so I changed it from
RAID-DP to RAID4, but with RAID4 the maximum number of disks in a RAID
group decreases from 14 to 7. In the end, whether using RAID-DP or RAID4, the
capacity is the same (14 disks yield 12 data disks either way: two parity
disks in a single RAID-DP group, or one parity disk in each of two 7-disk
RAID4 groups).
2011 Sep 27
2
high CPU usage and low perf
Hiya,
Recently, a btrfs file system of mine started to behave very poorly, with
some btrfs kernel tasks taking 100% of CPU time.
# btrfs fi show /dev/sdb
Label: none uuid: b3ce8b16-970e-4ba8-b9d2-4c7de270d0f1
Total devices 3 FS bytes used 4.25TB
devid 2 size 2.73TB used 1.52TB path /dev/sdc
devid 1 size 2.70TB used 1.49TB path /dev/sda4
devid 3 size
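To see where those kernel threads spend their time, something along these lines may help (hedged; perf must be available for the running kernel):
top -H -p $(pgrep -d, btrfs)   # per-thread CPU usage of the btrfs workers
perf top -g                    # which kernel functions dominate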
2009 Aug 06
10
RAID[56] status
If we've abandoned the idea of putting the number of redundant blocks
into the top bits of the type bitmask (and I hope we have), then we're
fairly much there. Current code is at:
git://, http://git.infradead.org/users/dwmw2/btrfs-raid56.git
git://, http://git.infradead.org/users/dwmw2/btrfs-progs-raid56.git
We have recovery working, as well as both full-stripe writes
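To try the code, the trees above can be cloned over http using the URLs quoted in the message:
git clone http://git.infradead.org/users/dwmw2/btrfs-raid56.git
git clone http://git.infradead.org/users/dwmw2/btrfs-progs-raid56.git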
2013 Apr 11
6
RAID 6 - opinions
I'm setting up this huge RAID 6 box. I've always thought of hot spares,
but I'm reading things that are comparing RAID 5 with a hot spare to RAID
6, implying that the latter doesn't need one. I *certainly* have enough
drives to spare in this RAID box: 42 of 'em, so two questions: should I
assign one or more hot spares, and, if so, how many?
mark
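If the box is Linux md rather than a hardware controller, spares can simply be declared at creation time; a hedged sketch with placeholder device names (40 active members plus 2 hot spares out of the 42):
mdadm --create /dev/md0 --level=6 --raid-devices=40 --spare-devices=2 \
      /dev/sd[b-z]1 /dev/sda[a-q]1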
2010 Mar 26
23
RAID10
Hi All,
I am looking at ZFS and I get that they call it RAIDZ, which is similar to RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data protection?
So if I have 8 x 1.5TB drives, wouldn't I:
- mirror drive 1 and 5
- mirror drive 2 and 6
- mirror drive 3 and 7
- mirror drive 4 and 8
Then stripe 1,2,3,4
Then stripe 5,6,7,8
How does one do this with ZFS?
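In ZFS the striping across mirrors is implicit: a pool built from several mirror vdevs behaves like RAID 10. A hedged sketch with placeholder disk names:
zpool create tank mirror disk1 disk5 mirror disk2 disk6 \
                  mirror disk3 disk7 mirror disk4 disk8
zpool status tank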
2013 Mar 15
0
[PATCH] btrfs-progs: mkfs: add missing raid5/6 description
Signed-off-by: Matias Bjørling <m@bjorling.me>
---
man/mkfs.btrfs.8.in | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/man/mkfs.btrfs.8.in b/man/mkfs.btrfs.8.in
index 41163e0..db8c57c 100644
--- a/man/mkfs.btrfs.8.in
+++ b/man/mkfs.btrfs.8.in
@@ -37,7 +37,7 @@ mkfs.btrfs uses all the available storage for the filesystem.
.TP
\fB\-d\fR, \fB\-\-data
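Once the tools and man page know about the new profiles, usage looks roughly like this (a sketch; the examples use three and four placeholder devices):
mkfs.btrfs -d raid5 -m raid5 /dev/sdb /dev/sdc /dev/sdd
mkfs.btrfs -d raid6 -m raid6 /dev/sdb /dev/sdc /dev/sdd /dev/sde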
2013 Feb 18
1
RAID5/6 Implementation - Understanding first
Chris and team, hats off on getting RAID5/6 to at least experimental status. I have been following your work for a year now and waiting for this day.
I am trying to get my head wrapped around the BTRFS architecture before I jump in and start recommending code changes to the branch.
What I am trying to understand is the comments in the GIT commit which state:
Read/modify/write is done after the
2011 Jun 29
33
Re: Mis-Design of Btrfs?
On 06/27/2011 07:46 AM, NeilBrown wrote:
> On Thu, 23 Jun 2011 12:53:37 +0200 Nico Schottelius
> <nico-lkml-20110623@schottelius.org> wrote:
>
>> Good morning devs,
>>
>> I'm wondering whether the raid and volume management built into btrfs is
>> actually a sane idea or not.
>> Currently we do have md/device-mapper support for raid
>>
2010 Mar 25
3
RAID 5 setup?
Can anyone provide a tutorial or advice on how to configure a software RAID 5 from the command-line (since I did not install Gnome)?
I have 8 x 1.5TB drives.
-Jason
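A minimal command-line sketch for those 8 drives, assuming Linux md and placeholder device names:
mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[b-i]1
mkfs.ext3 /dev/md0
mdadm --detail --scan >> /etc/mdadm.conf
mount /dev/md0 /mnt/raid5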