Displaying 5 results from an estimated 5 matches for "ioscheduler".
2005 Oct 31
4
Best mkfs.ext2 performance options on RAID5 in CentOS 4.2
...y. Please have a look at
what I'm doing and let me know if anybody has any suggestions on how to
improve the performance...
System specs:
-----------------
2 x 2.8GHz Xeons
6GB RAM
1 3ware 9500S-12
2 x 6-drive, RAID 5 arrays with a stripe size of 256KB. Each array is
2.3TB after formatting.
ioscheduler set to use the deadline scheduler.
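The deadline scheduler mentioned above can be selected per device at runtime through sysfs; a minimal sketch, assuming the 3ware array shows up as /dev/sda (adjust the device name for your system, and run as root):

```shell
# Writing a scheduler name to this sysfs file switches it at runtime;
# reading the file back shows the active scheduler in square brackets.
echo deadline > /sys/block/sda/queue/scheduler
cat /sys/block/sda/queue/scheduler
```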
mkfs.ext2 options used:
------------------------
mkfs.ext2 -b 4096 -L /d01 -m 1 -O sparse_super,dir_index -R stride=64 -T
largefile /dev/sda1
I'm using a stride size of 64 since the ext2 block size is 4KB and the
array stripe size is 256KB (256/4 = 64).
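The stride arithmetic can be checked with a one-liner; the 256KB stripe and 4KB block sizes come from the post, the variable names are mine:

```shell
# Stripe size (KB) divided by filesystem block size (KB) gives the
# stride in filesystem blocks: 256 / 4 = 64.
stripe_kb=256
block_kb=4
stride=$((stripe_kb / block_kb))
echo "stride=$stride"   # prints stride=64
```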
Out...
2012 Jun 01
4
[PATCH v3] virtio_blk: unlock vblk->lock during kick
...usr=101.04 minf=19757720.00
cpu sys=740.39 majf=24809.00 ctx=845290443.66 usr=37.25 minf=19349958.33
cpu sys=723.63 majf=27597.33 ctx=850199927.33 usr=35.35 minf=19092343.00
FIO config file:
[global]
exec_prerun="echo 3 > /proc/sys/vm/drop_caches"
group_reporting
norandommap
ioscheduler=noop
thread
bs=512
size=4MB
direct=1
filename=/dev/vdb
numjobs=256
ioengine=aio
iodepth=64
loops=3
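As a sanity check on the config above, the aggregate data volume fio will move follows from size, numjobs, and loops (my arithmetic, not stated in the post):

```shell
# Each of 256 jobs transfers 4MB per loop, repeated for 3 loops:
size_mb=4
numjobs=256
loops=3
total_mb=$((size_mb * numjobs * loops))
echo "total=${total_mb}MB"   # prints total=3072MB
```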
Signed-off-by: Stefan Hajnoczi <stefanha at linux.vnet.ibm.com>
---
Other block drivers (cciss, rbd, nbd) use spin_unlock_irq() so I followed that.
To me this seems wrong: blk_run_queue() uses...
2010 Aug 06
0
Re: PATCH 3/6 - direct-io: do not merge logically non-contiguous requests
...ught me to this patch (reverting fixes the
issue).
Therefore I'd like to come back to that suggested "that way it only
affects btrfs" solution.
What happens on my system is that all direct I/O requests from
userspace are broken up in 4k bios and then re-merged
by the ioscheduler before reaching the device driver.
Eventually that means +30% CPU cost for 64k requests, probably much more
for larger request sizes; throughput is only affected if there is no
CPU left to spare for this additional overhead.
A blktrace log is probably the best way to explain this in detail:
(sequential 64...
2011 May 03
8
Is it possible for the ext4/btrfs file system to pass some context related info to low level block driver?
Currently, some new storage devices have the ability to do performance optimizations according to the type of data payload - say, file system metadata, time-stamps, sequential write in some granularity, random write and so on.
For example, the latest eMMC 4.5 device can support the so-called 'Context Management' and 'Data Tag Mechanism' features. By receiving