Displaying 20 results from an estimated 10000 matches similar to: "RAID5/6 Implementation - Understanding first"
2013 May 23
11
raid6: rmw writes all the time?
Hi all,
we got a new test system here, and I also tested btrfs raid6 on
it. Write performance is slightly lower than hw-raid (LSI megasas) and
md-raid6, but it probably would be much better than either of them if
it wouldn't read all the time during the writes. Is this a known issue?
This is with linux-3.9.2.
Thanks,
Bernd
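One way to confirm the behavior Bernd describes is to watch for reads while issuing aligned, full-stripe-sized writes. A minimal sketch, assuming a hypothetical 6-disk raid6 (4 data disks) with a 64 KiB stripe unit; the device names, geometry, and mount point are made up, not taken from the post:

```shell
# Full-stripe write size for an assumed geometry: 4 data disks x 64 KiB.
data_disks=4
stripe_unit_kib=64
full_stripe_kib=$((data_disks * stripe_unit_kib))
echo "full stripe: ${full_stripe_kib} KiB"
# To check for read-modify-write, run a pure write workload and watch r/s:
#   iostat -x 1 /dev/sd[b-g]     # nonzero reads during a pure write = RMW
#   dd if=/dev/zero of=/mnt/test bs=${full_stripe_kib}K count=4096 oflag=direct
```

Writes that are multiples of the full stripe (and aligned to it) should not need the preceding reads.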
2011 Nov 23
2
stripe alignment consideration for btrfs on RAID5
Hiya,
is there any recommendation out there to setup a btrfs FS on top
of hardware or software raid5 or raid6 wrt stripe/stride alignment?
From mkfs.btrfs, it doesn't look like there's much that can be
adjusted that would help, and what I'm asking might not even
make sense for btrfs but I thought I'd just ask.
Thanks,
Stephane
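For md-raid underneath btrfs, the alignment knobs mostly live at the md layer; mkfs.btrfs itself exposes little beyond sector and node size. A sketch with assumed values (4-disk RAID5, 64 KiB chunk; devices are hypothetical):

```shell
# Full-stripe width for an assumed 4-disk RAID5 with 64 KiB chunks.
chunk_kib=64
disks=4
data_disks=$((disks - 1))
stripe_width_kib=$((chunk_kib * data_disks))
echo "stripe width: ${stripe_width_kib} KiB"
# mdadm --create /dev/md0 --level=5 --raid-devices=$disks \
#       --chunk=$chunk_kib /dev/sd[b-e]
# mkfs.btrfs /dev/md0    # no stride/stripe-width options to pass
```

Keeping partitions aligned to the full stripe width is about all that can be done from above the RAID layer.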
2005 Oct 31
4
Best mkfs.ext2 performance options on RAID5 in CentOS 4.2
I can't seem to get the read and write performance better than
approximately 40MB/s on an ext2 file system. IMO, this is horrible
performance for a 6-drive, hardware RAID 5 array. Please have a look at
what I'm doing and let me know if anybody has any suggestions on how to
improve the performance...
System specs:
-----------------
2 x 2.8GHz Xeons
6GB RAM
1 3ware 9500S-12
2 x 6-drive,
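ext2/ext3 can at least be told the RAID stride at mkfs time. The chunk size isn't given in the snippet, so the numbers below are illustrative: a 64 KiB hardware stripe, 4 KiB blocks, and 5 data disks on the 6-drive RAID5 (device name hypothetical):

```shell
# stride = chunk size / block size; stripe width spans the data disks.
chunk_kib=64
block_kib=4
data_disks=5
stride=$((chunk_kib / block_kib))
stripe_width=$((stride * data_disks))
echo "stride=$stride stripe-width=$stripe_width"
# mkfs.ext2 -b 4096 -E stride=$stride /dev/sda1
```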
2004 Jul 14
3
ext3 performance with hardware RAID5
I'm setting up a new fileserver. It has two RAID controllers, a PERC 3/DI
providing mirrored system disks and a PERC 3/DC providing a 1TB RAID5 volume
consisting of eight 144GB U160 drives. This will serve NFS, Samba and sftp
clients for about 200 users.
The logical drive was created with the following settings:
RAID = 5
stripe size = 32kb
write policy = wrback
read policy =
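The 32 KiB stripe size given above can be passed to ext3 when the filesystem is made. A sketch assuming 4 KiB filesystem blocks and 7 data disks on the 8-drive RAID5 (the device name is hypothetical):

```shell
# With a 32 KiB stripe size, 4 KiB blocks, and 7 data disks:
stripe_kib=32
block_kib=4
data_disks=7
stride=$((stripe_kib / block_kib))
stripe_width=$((stride * data_disks))
echo "stride=$stride stripe-width=$stripe_width"
# mke2fs -j -b 4096 -E stride=$stride /dev/sdb1
```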
2011 Jun 29
14
[PATCH v4 0/6] btrfs: generic readahead interface
This series introduces a generic readahead interface for btrfs trees.
The intention is to use it to speed up scrub in a first run, but balance
is another hot candidate. In general, every tree walk could be accompanied
by a readahead. Deletion of large files comes to mind, where the fetching
of the csums takes most of the time.
Also the initial build-ups of free-space-caches and
2008 Jun 22
6
ZFS-Performance: Raid-Z vs. Raid5/6 vs. mirrored
Hi list,
as this matter pops up every now and then in posts on this list I just
want to clarify that the real performance of RaidZ (in its current
implementation) is NOT something that follows from raidz-style
data-efficient redundancy or the copy-on-write design used in ZFS.
In an M-way mirrored setup of N disks you get the write performance of
the worst disk and a read performance that is
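A rough IOPS model behind the mirror-vs-raidz comparison, with assumed per-disk numbers (illustrative only): for random reads, every side of a mirror can serve a different request, while a raidz vdev behaves roughly like a single disk because every read touches all of its members:

```shell
# Assumed: 150 random-read IOPS per disk, 6 disks, single vdev.
disk_iops=150
disks=6
mirror_read_iops=$((disk_iops * disks))   # any mirror side can serve a read
raidz_read_iops=$disk_iops                # whole raidz vdev ~ one disk
echo "mirror=$mirror_read_iops raidz=$raidz_read_iops"
# zpool create tank mirror c0d0 c0d1 mirror c0d2 c0d3 mirror c0d4 c0d5
# zpool create tank raidz2 c0d0 c0d1 c0d2 c0d3 c0d4 c0d5
```

Striping several smaller raidz vdevs, rather than one wide one, is the usual way to claw back random-read IOPS.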
2006 Apr 14
1
Ext3 and 3ware RAID5
I run a decent amount of 3ware hardware, all under centos-4. There seems
to be some sort of fundamental disagreement between ext3 and 3ware's
hardware RAID5 mode that trashes write performance. As a representative
example, one current setup is 2 9550SX-12 boards in hardware RAID5 mode
(256KB stripe size) with a software RAID0 stripe on top (also 256KB
chunks). bonnie++ results look
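A common mitigation for this kind of layered setup is matching the filesystem to the stripe geometry and raising readahead. With the 256 KiB chunks described, and assuming 11 data disks per 12-disk RAID5 unit (the ext3 stride and readahead values below are illustrative):

```shell
# Hardware full stripe per RAID5 unit: 256 KiB chunk x 11 data disks.
hw_chunk_kib=256
data_disks=11
full_stripe_kib=$((hw_chunk_kib * data_disks))
echo "hw full stripe: ${full_stripe_kib} KiB"
# ext3 stride for 4 KiB blocks: 256 / 4 = 64
# mkfs.ext3 -E stride=64 /dev/md0
# blockdev --setra 16384 /dev/md0   # larger readahead often helps here
```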
2008 Jan 09
3
WaitExten and Macros
I am trying to use WaitExten in a Macro, and I am finding that the extension which is pressed gets matched in the calling context rather than in the Macro.
How do you do a WaitExten in a Macro?
Tony Plack
2009 Aug 06
10
RAID[56] status
If we've abandoned the idea of putting the number of redundant blocks
into the top bits of the type bitmask (and I hope we have), then we're
fairly much there. Current code is at:
git://git.infradead.org/users/dwmw2/btrfs-raid56.git (http://git.infradead.org/users/dwmw2/btrfs-raid56.git)
git://git.infradead.org/users/dwmw2/btrfs-progs-raid56.git (http://git.infradead.org/users/dwmw2/btrfs-progs-raid56.git)
We have recovery working, as well as both full-stripe writes
2017 Jun 02
2
more recent perl version?
On Jun 2, 2017, at 5:05 AM, hw <hw at gc-24.de> wrote:
>
> Warren Young wrote:
>>
>> There are various options. We use mod_fcgid + Plack here.
> I need to look into that when I have time.
I wonder if it wouldn't have been faster to just backport the app to Perl 5.16? How hard could it be? It's not like Perl 5.16 is a hopelessly lame and incapable language.
The
2005 Aug 29
2
RAID5 - MBR and default grub config on CentOS 4
greetings,
i just know i have read several times on this list and in other places that
on a RAID5 array that the MBR should be placed on the /boot partition.
is this truly correct and if so, why?
my experience with putting the boot loader on the /boot partition makes
me doubt that it will even work and boot that way...
comments please?
regards,
- rh
--
Robert Hanson
Abba Communications
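For what it's worth, the usual practice is the reverse of what the poster heard: legacy GRUB cannot read RAID5, so /boot goes on a small RAID1 and GRUB's stage1 goes into the MBR of every member disk so that any of them can boot. A sketch with hypothetical two-disk devices, with the commands echoed rather than run:

```shell
# Install GRUB to each member disk's MBR so any surviving disk can boot.
for disk in /dev/sda /dev/sdb; do
  echo "grub-install $disk"   # drop the echo to run for real, as root
done
```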
2009 Jul 03
1
new RAID5 array: 3x500GB with XFS
Hello all,
Yesterday, after some typos, I sent my ext3 RAID5 array to the
void...
I want to recreate it now, but I read on
http://wiki.centos.org/HowTos/Disk_Optimization
that you can optimize the filesystem on top of the RAID.
Will this wiki article be exactly the same for XFS?
Is it worth the trouble to also create an LVM volume on the RAID array?
Regards,
Coert
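The ext3-oriented steps in that wiki article translate to XFS: mkfs.xfs takes the RAID geometry directly via su (stripe unit) and sw (stripe width in data disks). A sketch assuming a 64 KiB md chunk on the 3-disk RAID5 (chunk size is an assumption, not from the post):

```shell
# su = md chunk size, sw = number of data disks (disks - 1 for RAID5).
chunk_kib=64
disks=3
sw=$((disks - 1))
echo "su=${chunk_kib}k,sw=${sw}"
# mkfs.xfs -d su=${chunk_kib}k,sw=${sw} /dev/md0
# (Recent mkfs.xfs usually autodetects md geometry anyway.)
```

LVM in between is a convenience question, not a performance one; if used, keep its extents aligned to the same stripe width.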
2008 Mar 10
1
Shared Extension
I am working on a project that requires shared extensions. Where a shared line tracks the status of a line/trunk, a shared extension would treat a series of channels as the same "extension".
The users would like to add destination channels on the fly, to provide roaming extensions, but maintaining fixed channels as well.
If a call comes in on an extension, the system needs to honor the
2011 Jun 10
6
[PATCH v2 0/6] btrfs: generic readahead interface
This series introduces a generic readahead interface for btrfs trees.
The intention is to use it to speed up scrub in a first run, but balance
is another hot candidate. In general, every tree walk could be accompanied
by a readahead. Deletion of large files comes to mind, where the fetching
of the csums takes most of the time.
Also the initial build-ups of free-space-caches and
2017 May 24
3
more recent perl version?
On May 24, 2017, at 9:38 AM, hw <hw at gc-24.de> wrote:
>
> Warren Young schrieb:
>> On May 24, 2017, at 7:05 AM, hw <hw at gc-24.de> wrote:
>>> apache uses mod_perl
>>
>> mod_perl was dropped from Apache in 2.4, and Red Hat followed suit with RHEL 7.
>
> What is it using instead?
There are various options. We use mod_fcgid + Plack here.
And
2011 Apr 12
3
[PATCH v2 0/3] btrfs: quasi-round-robin for chunk allocation
In a multi device setup, the chunk allocator currently always allocates
chunks on the devices in the same order. This leads to a very uneven
distribution, especially with RAID1 or RAID10 and an uneven number of
devices.
This patch always sorts the devices before allocating, and allocates the
stripes on the devices with the most available space, as long as there
is enough space available. In a low
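The policy the patch describes (sort devices by available space, then allocate stripes to the emptiest ones) can be illustrated with made-up device/free-space pairs:

```shell
# Pick the two devices with the most free space for a RAID1 chunk.
printf '%s\n' 'sda 500' 'sdb 300' 'sdc 400' \
  | sort -k2,2 -rn | head -n2
```

After each allocation the ordering can change, which is what spreads chunks evenly across an uneven number of devices.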
2007 Jul 07
17
Raid-Z expansion
Apologies for the blank message (if it came through).
I have heard here and there that there might be in development a plan
to make it such that a raid-z can grow its "raid-z'ness" to
accommodate a new disk added to it.
Example:
I have 4Disks in a raid-z[12] configuration. I am uncomfortably low on
space, and would like to add a 5th disk. The idea is to pop in disk 5
and have
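In capacity terms this is what the poster is after; at the time, the only supported growth path was adding a whole new vdev rather than widening the existing one. Disk size and device names below are assumed for illustration:

```shell
# Usable capacity of raidz1 = (disks - 1) x disk size.
disk_tb=1
disks=4
usable_tb=$(( (disks - 1) * disk_tb ))
echo "usable: ${usable_tb} TB of $((disks * disk_tb)) TB raw"
# zpool create tank raidz c0d0 c0d1 c0d2 c0d3
# zpool add tank raidz c1d0 c1d1 c1d2 c1d3   # grows the pool, not the vdev
```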
2017 May 24
2
more recent perl version?
On May 24, 2017, at 7:05 AM, hw <hw at gc-24.de> wrote:
> apache uses mod_perl
mod_perl was dropped from Apache in 2.4, and Red Hat followed suit with RHEL 7.
> But there is a package 'rh-perl524-mod_perl'.
That must be someone's backport. As someone who migrated a mod_perl based app off of mod_perl several years ago, I recommend that you do not use it, unless you have old
2011 May 02
5
[PATCH v3 0/3] btrfs: quasi-round-robin for chunk allocation
In a multi device setup, the chunk allocator currently always allocates
chunks on the devices in the same order. This leads to a very uneven
distribution, especially with RAID1 or RAID10 and an uneven number of
devices.
This patch always sorts the devices before allocating, and allocates the
stripes on the devices with the most available space, as long as there
is enough space available. In a low
2012 Jan 11
12
[PATCH 00/11] Btrfs: some patches for 3.3
The biggest one is a fix for fstrim, and there's a fix for on-disk
free space cache. Others are small fixes and cleanups.
The last three have been sent weeks ago.
The patchset is also available in this repo:
git://repo.or.cz/linux-btrfs-devel.git for-chris
Note there's a small conflict with Al Viro's vfs changes.
Li Zefan (11):
Btrfs: add pinned extents to