Displaying 20 results from an estimated 9000 matches similar to: "mke2fs stride and LVM"
2007 Jun 05
1
Calculating stride values?
All,
I have a question about calculating the value for the -E stride option
to mke2fs.
The mke2fs man page says
stride=stripe-size
Configure the filesystem for a RAID array with stripe-size filesystem blocks per stripe.
So stride = stripe size / block size.
The size of a stripe is the RAID chunk size * the number of drives in the RAID.
My question: are parity disks
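For reference, the convention in current mke2fs man pages is that stride covers one chunk and stripe-width covers only the data-bearing disks, with parity disks excluded. A worked sketch for a hypothetical 4-disk RAID-5:
$ # hypothetical 4-disk RAID-5 (3 data + 1 parity), 64 KiB chunk, 4 KiB blocks
$ # stride       = chunk / block size  = 64 KiB / 4 KiB = 16
$ # stripe-width = stride * data disks = 16 * 3 = 48 (parity disk not counted)
$ mke2fs -b 4096 -E stride=16,stripe-width=48 /dev/md0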
2007 May 27
1
dealing with mke2fs -T option
Hi,
I am not sure whether I am using the mke2fs options the right way.
I formatted two different disks, one with
$ mke2fs -b 4096 -E stride=16 -m 1 -T news /dev/sdd
and the other with
$ mke2fs -b 4096 -E stride=16 -m 1 -T largefile4 /dev/sde
sdd is supposed to get files between 8k and 16k.
sde will handle files with a fixed size of 32 MB.
Then I tried this :
$ dd if=/dev/zero of=/mount-sdx/file bs=4k
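The -T type mainly selects an inode_ratio from /etc/mke2fs.conf; on typical installs, news allocates one inode per 4 KiB of space and largefile4 one per 4 MiB, though distributions can override these defaults. One way to check the effect after formatting:
$ dumpe2fs -h /dev/sdd | grep -i 'inode count'   # -T news: many inodes for small files
$ dumpe2fs -h /dev/sde | grep -i 'inode count'   # -T largefile4: far fewer inodes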
2014 Mar 06
2
questions regarding file-system optimization for software-RAID array
Hi,
I created a RAID1 array of two physical HDDs with a chunk size of 64KiB
under Debian "wheezy" using mdadm. As a next step, I would like to create
an ext3 (or ext4) filesystem on this RAID1 array using the mke2fs utility.
According to RAID-related tutorials, I should create the file-system like
this:
# mkfs.ext3 -v -L myarray -m 0.5 -b 4096 -E stride=16,stripe-width=32
/dev/md0
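A hedged note on those numbers: stride=16 matches a 64 KiB chunk with 4 KiB blocks, but stripe-width=32 assumes two data-bearing disks, which a two-disk RAID1 does not have (each disk holds a full copy of the data). On striped levels the chunk size can be read back from the array:
$ mdadm --detail /dev/md0 | grep -i 'chunk size'   # reported for striped RAID levels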
2014 Mar 08
2
Re: questions regarding file-system optimization for software-RAID array
Andreas,
why is it relevant only in case of RAID5 or RAID6?
regards,
Martin
On Fri, Mar 7, 2014 at 5:57 PM, Andreas Dilger <adilger@dilger.ca> wrote:
> Note that stride and stripe width only make sense for RAID-5/6 arrays.
> For RAID-1 it doesn't really matter.
>
> Cheers, Andreas
>
>> On Mar 6, 2014, at 13:46, Martin T <m4rtntns@gmail.com> wrote:
>>
2014 Mar 07
0
Re: questions regarding file-system optimization for software-RAID array
Note that stride and stripe width only make sense for RAID-5/6 arrays.
For RAID-1 it doesn't really matter.
Cheers, Andreas
> On Mar 6, 2014, at 13:46, Martin T <m4rtntns@gmail.com> wrote:
>
> Hi,
>
> I created a RAID1 array of two physical HDDs with a chunk size of 64KiB under Debian "wheezy" using mdadm. As a next step, I would like to create an ext3 (or
2014 Mar 08
0
Re: questions regarding file-system optimization for software-RAID array
The stripe and stride options do two things:
- shift block and inode bitmaps in each group to be on different disks
- align the block allocation to the stripe and stride boundaries to
avoid read-modify-write in RAID
The first one is irrelevant if the flex_bg option is used, since it already packs
the bitmaps together and achieves the same effect.
The second is meaningless for RAID-1 since
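A sketch of the read-modify-write being avoided, with all numbers hypothetical: on a 3+1 RAID-5 with 64 KiB chunks, a full stripe is 3 x 64 KiB = 192 KiB, so writes issued in aligned 192 KiB units let md compute parity from the new data alone, while misaligned writes force reads of the old data and parity they partially overlap:
$ # aligned full-stripe writes: no reads needed to regenerate parity
$ dd if=/dev/zero of=/mnt/md/test bs=192k count=64 oflag=direct
$ # watch the member disks while writing: read-modify-write shows up as read traffic
$ iostat -x sda sdb sdc sdd 1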
2004 Jul 14
3
ext3 performance with hardware RAID5
I'm setting up a new fileserver. It has two RAID controllers, a PERC 3/DI
providing mirrored system disks and a PERC 3/DC providing a 1TB RAID5 volume
consisting of eight 144GB U160 drives. This will serve NFS, Samba and sftp
clients for about 200 users.
The logical drive was created with the following settings:
RAID = 5
stripe size = 32kb
write policy = wrback
read policy =
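Given those settings, a hedged format sketch with a current mke2fs, assuming the 32 KiB "stripe size" is the per-disk element and one drive's worth of capacity holds parity; the device path is hypothetical:
$ # stride       = 32 KiB / 4 KiB = 8
$ # stripe-width = 8 * 7 data disks = 56
$ mke2fs -j -b 4096 -E stride=8,stripe-width=56 /dev/sdb1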
2006 Jan 25
1
EXT3: failed to claim external journal device.
We are having problems remounting an ext3 filesystem using an external
journal device. The filesystem in question was working fine until the
server was rebooted.
This is what we see on dmesg when trying to mount:
EXT3: failed to claim external journal device.
The external journal lives on an LVM2 logical volume, and it seems to be
accessible (we can run dumpe2fs and see filesystem information).
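One common cause (the preview does not confirm it here) is that the UUID recorded for the journal device no longer matches what the filesystem's superblock expects. A hedged recovery sketch, with device paths hypothetical:
$ dumpe2fs -h /dev/vg0/data | grep -i journal        # compare the Journal UUID fields
$ tune2fs -f -O ^has_journal /dev/vg0/data           # detach the stale journal reference
$ tune2fs -J device=/dev/vg0/journal /dev/vg0/data   # re-attach the external journal
$ e2fsck -f /dev/vg0/data                            # then re-check before mounting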
2012 Aug 31
1
[PATCH V1] NEW API:ext:mke2fs
New API mke2fs for full configuration of the filesystem.
Signed-off-by: Wanlong Gao <gaowanlong at cn.fujitsu.com>
---
daemon/ext2.c | 452 +++++++++++++++++++++++++++++++++++++++++
generator/generator_actions.ml | 18 ++
gobject/Makefile.inc | 6 +-
src/MAX_PROC_NR | 2 +-
4 files changed, 475 insertions(+), 3 deletions(-)
diff --git
2010 Oct 01
2
Format details for a raid partition....
So I have been playing with a RAID 10 f2 (2 disks, far layout)
setup... thanks for all of the advice. Now I am playing with the format and
want to make sure I have it set up the best that I can. My RAID was built
using the raid10 option with 2 disks and layout=far, chunk size
512. I read all of the docs I could find about formatting, stride, and
stripe size, and this is what I came up
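For a two-disk far-layout RAID10 the arithmetic is debatable, since each stripe's worth of data ultimately occupies one disk's capacity. A sketch under the assumption of a 512 KiB chunk and 4 KiB blocks:
$ # stride = 512 KiB / 4 KiB = 128; stripe-width = 128 * (2 disks / 2 copies) = 128
$ # treating RAID10 as having one data disk's worth per stripe is an
$ # interpretation, not a settled convention
$ mke2fs -b 4096 -E stride=128,stripe-width=128 /dev/md0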
2011 May 31
6
[PATCH 1/4] febootstrap: Look for insmod.static, mke2fs in /sbin
---
configure.ac | 8 +++++---
1 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/configure.ac b/configure.ac
index da03c9f..7606bca 100644
--- a/configure.ac
+++ b/configure.ac
@@ -68,7 +68,8 @@ dnl For ArchLinux handler.
AC_CHECK_PROG(PACMAN,[pacman],[pacman],[no])
dnl Required programs, libraries.
-AC_PATH_PROG([INSMODSTATIC],[insmod.static],[no])
2001 Oct 19
2
using non-default blocks per group
I've been playing with the -g switch to mke2fs, which controls the blocks
per group. This switch isn't documented. I've been using it to balance
metadata traffic on RAID 0.
This seems to work fine in ext2. Will I have any problems with ext3 where
the blocks per group is not a power of 2?
-jwb
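(The -g switch has since been documented in mke2fs(8).) A sketch of the balancing idea, with the array hypothetical: the default for 4 KiB blocks is 32768 blocks per group, so every group's bitmaps can land at the same offset within a RAID-0 stripe pattern; a slightly smaller value, still a multiple of 8 as mke2fs requires, staggers that metadata across the members:
$ mke2fs -b 4096 -g 32760 /dev/md0   # hypothetical RAID-0 device; 32760 = 8 * 4095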
2008 Feb 11
2
[LLVMdev] "make check" failures: leaq in fold-mul-lohi.ll, stride-nine-with-base-reg.ll, stride-reuse.ll
I'm seeing the following failures with "make check" (x86-32 linux):
FAIL: test/CodeGen/X86/fold-mul-lohi.ll
Failed with exit(1) at line 2
while running: llvm-as < test/CodeGen/X86/fold-mul-lohi.ll | llc -march=x86-64 | not grep lea
leaq B, %rsi
leaq A, %r8
leaq P, %rsi
child process exited abnormally
FAIL:
2012 Oct 04
1
[PATCH] gallium/nouveau: use pre-calculated stride for resource_get_handle
Fixes FDO#55294.
---
src/gallium/drivers/nv30/nv30_miptree.c | 3 +--
src/gallium/drivers/nv50/nv50_miptree.c | 3 +--
2 files changed, 2 insertions(+), 4 deletions(-)
diff --git a/src/gallium/drivers/nv30/nv30_miptree.c b/src/gallium/drivers/nv30/nv30_miptree.c
index 5a9a63b..9700fa8 100644
--- a/src/gallium/drivers/nv30/nv30_miptree.c
+++ b/src/gallium/drivers/nv30/nv30_miptree.c
@@ -56,8
2008 Feb 12
0
[LLVMdev] "make check" failures: leaq in fold-mul-lohi.ll, stride-nine-with-base-reg.ll, stride-reuse.ll
Fixed. Thanks.
Evan
On Feb 11, 2008, at 2:35 AM, Duncan Sands wrote:
> I'm seeing the following failures with "make check" (x86-32 linux):
>
> FAIL: test/CodeGen/X86/fold-mul-lohi.ll
> Failed with exit(1) at line 2
> while running: llvm-as < test/CodeGen/X86/fold-mul-lohi.ll | llc -
> march=x86-64 | not grep lea
> leaq B, %rsi
> leaq
2008 Feb 12
2
[LLVMdev] "make check" failures: leaq in fold-mul-lohi.ll, stride-nine-with-base-reg.ll, stride-reuse.ll
Hi Evan,
In -relocation-model=static mode, those tests are now getting
code like this
leaq A, %rsi
movss %xmm0, (%rsi,%rdx,4)
instead of this:
movss %xmm0, A(,%rdx,4)
This is specifically what these tests were written to catch :-).
Running them with -relocation-model=pic is hiding the real bug.
Dan
On Feb 11, 2008, at 11:22 PM, Evan Cheng wrote:
> Fixed.
2008 Feb 12
0
[LLVMdev] "make check" failures: leaq in fold-mul-lohi.ll, stride-nine-with-base-reg.ll, stride-reuse.ll
Fixed. However, I wonder if we are doing the right / smart codegen for
the static case. The AMD64 ABI document seems to indicate RIP-relative
addressing should be used even in this case (see page 38). You know
more about Linux addressing modes than I do. Please check.
Thanks,
Evan
On Feb 12, 2008, at 10:10 AM, Dan Gohman wrote:
> Hi Evan,
>
> In -relocation-model=static mode, those
2011 Mar 15
1
Using stride on non-RAID
Hello,
I understand the need for a proper stride setting when formatting a filesystem on a RAID device. However, is there any problem in using a stride setting when formatting a filesystem on a regular non-RAID, non-SSD, just plain-vanilla-single-disk block device? I'm sure there isn't any benefit to it, but I'm curious if there is any harm.
The reason I ask is I'm looking at
2003 Jul 30
2
accidental mke2fs
I know there is no straightforward way to recover deleted files on
an ext3 file system, but is there any way to recover from an
accidental mke2fs?
--
---------------------------------------------------------------
Paul Raines email: raines@nmr.mgh.harvard.edu
MGH/MIT/HMS Athinoula A. Martinos Center for Biomedical Imaging
149 (2301) 13th Street tel:(617)-724-2369
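There is no built-in undo. A hedged salvage sketch, worth attempting only before anything new is written to the disk, since mke2fs overwrites metadata but leaves most old data blocks in place; the device path is hypothetical:
$ mke2fs -n /dev/sdb1   # -n is a dry run: prints the layout without writing anything
$ photorec /dev/sdb1    # content carving (testdisk package); recovers data, not names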
2011 Oct 23
2
ssd quandry
On a CentOS 6 64-bit system, I added a couple of prototype SAS SSDs on an HP
P411 RAID controller (I believe this is a rebranded LSI MegaRAID with HP
firmware) and am trying to format them for the best random IO performance
with something like PostgreSQL.
So, I used the RAID command tool to build a RAID0 with the 2 SAS SSDs
# hpacucli ctrl slot=1 logicaldrive 3 show detail
Smart Array P410 in Slot 1
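For the eventual format, a hedged sketch assuming the controller reports a 256 KiB strip size per drive and 4 KiB filesystem blocks are used (the real strip size would come from the hpacucli output not shown here, and the device path is hypothetical):
$ # stride = 256 KiB / 4 KiB = 64; stripe-width = 64 * 2 drives = 128 (RAID0: all data)
$ mke2fs -t ext4 -b 4096 -E stride=64,stripe-width=128 /dev/sdb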