Displaying 20 results from an estimated 500 matches similar to: "[RFC] Early look at btrfs directIO read code"
2008 Jan 31
1
simulating directio on zfs?
The big problem that I have with non-directio is that buffering delays program execution. When reading/writing files that are many times larger than RAM without directio, it is very apparent that system response drops through the floor; it can take several minutes for an ssh login to prompt for a password. This is true both for UFS and ZFS.
Repeat the exercise with directio on UFS and there is no
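For reference, on UFS direct I/O can be toggled per file with the directio(3C) advisory call (or globally with the forcedirectio mount option). A minimal sketch, assuming a Solaris system; ZFS at this time does not honor the advisory, which is what motivates the question above:

#include <sys/types.h>
#include <sys/fcntl.h>
#include <fcntl.h>
#include <unistd.h>

/* Ask UFS to bypass the page cache for this file. directio(3C)
 * is advisory only; filesystems that don't support it ignore it. */
int open_direct(const char *path)
{
        int fd = open(path, O_RDWR);

        if (fd < 0)
                return -1;
        if (directio(fd, DIRECTIO_ON) < 0) {
                close(fd);
                return -1;
        }
        return fd;
}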
2013 Jan 31
4
[RFC][PATCH 2/2] Btrfs: implement unlocked dio write
This idea comes from ext4. With this patch, dio writes can proceed in parallel,
which improves performance.
We needn't worry about the race between dio write and truncate, because
truncate must wait until all the dio writes end.
And we also needn't worry about the race between dio write and punch hole,
because we have the extent lock to protect our operation.
I ran fio to test the
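To see what "parallel" buys here, a small userspace test (hypothetical, not from the patch; the file name is illustrative, link with -lpthread): two threads issue O_DIRECT writes to disjoint ranges of one file, which with the unlocked dio write no longer serialize on the inode mutex.

#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

#define BLK 4096

static int fd;

/* each thread writes one aligned block at its own disjoint offset;
 * the buffer contents don't matter for the test */
static void *writer(void *arg)
{
        long idx = (long)arg;
        void *buf;

        if (posix_memalign(&buf, BLK, BLK))
                return NULL;
        pwrite(fd, buf, BLK, idx * BLK);
        free(buf);
        return NULL;
}

int main(void)
{
        pthread_t t[2];
        long i;

        fd = open("testfile", O_RDWR | O_CREAT | O_DIRECT, 0644);
        if (fd < 0)
                return 1;
        for (i = 0; i < 2; i++)
                pthread_create(&t[i], NULL, writer, (void *)i);
        for (i = 0; i < 2; i++)
                pthread_join(t[i], NULL);
        close(fd);
        return 0;
}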
2009 Apr 20
6
simulating directio on zfs?
I had to let this go and get on with testing DB2 on Solaris. I had to
abandon zfs on local discs in x64 Solaris 10 5/08.
The situation was that:
* DB2 buffer pools occupied up to 90% of 32GB RAM on each host
* DB2 cached the entire database in its buffer pools
o having the file system repeat this was not helpful
* running high-load DB2 tests for 2 weeks showed 100%
2013 Oct 25
0
[PATCH] Btrfs: return an error from btrfs_wait_ordered_range
I noticed that if the free space cache has an error writing out its data it
won't actually error out; it will just carry on. This is because it doesn't
check the return value of btrfs_wait_ordered_range, which didn't actually return
anything. So fix this in order to keep us from making the free space cache look
valid when it really isn't. Thanks,
Signed-off-by:
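The shape of the fix, reduced to a standalone miniature (names and values here are illustrative, not the btrfs code):

#include <stdio.h>

/* Before: the wait step returned void, so a failed flush left the
 * cache looking valid. After: it returns an error the caller checks. */
static int io_error = 1;                /* pretend a write failed */

static int wait_ordered_range(void)     /* was: void */
{
        return io_error ? -5 /* -EIO */ : 0;
}

static int write_cache(void)
{
        int ret = wait_ordered_range();

        if (ret) {
                printf("flush failed (%d); cache left invalid\n", ret);
                return ret;
        }
        printf("cache written and marked valid\n");
        return 0;
}

int main(void)
{
        return write_cache() ? 1 : 0;
}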
2004 Jul 08
0
directio for ext3 file system
Hi,
Does anybody know whether the ext3 file system
supports direct I/O? If so, how do you enable it?
I went through the man page of mount, and it did
not mention such an option.
my system is running :
Red Hat Enterprise Linux AS release 3 (Taroon Update
2)
Kernel 2.4.21-15.ELsmp on an i686
Thanks much!!!
David.
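For the record, ext3 enables direct I/O per file descriptor through the O_DIRECT open(2) flag rather than a mount option; buffer, offset, and length must be aligned. A minimal example (the 4096-byte alignment is an assumption; 2.4 kernels generally want filesystem-block alignment):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
        void *buf;
        ssize_t n;
        int fd = open("datafile", O_RDONLY | O_DIRECT);

        if (fd < 0)
                return 1;
        if (posix_memalign(&buf, 4096, 4096))   /* aligned buffer */
                return 1;
        n = read(fd, buf, 4096);        /* bypasses the page cache */
        free(buf);
        close(fd);
        return n < 0;
}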
2010 May 12
0
[PATCH 2/4] direct-io: add a hook for the fs to provide its own submit_bio function V3
V1->V2:
-Changed dio_end_io to EXPORT_SYMBOL_GPL
-Removed the own_submit blockdev dio helper
-Removed the boundary change
V2->V3
-Made it so we keep track of the current logical offset in the file for which we
have a BIO set up, so we can pass it into the submit_io hook.
Because BTRFS can do RAID and such, we need our own submit hook so we can set up
the bios in the correct fashion,
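The hook pattern, sketched as a standalone miniature (types and names here are illustrative stand-ins, not the kernel's): the generic dio code hands the assembled bio plus the logical file offset it covers to a filesystem-provided submit function, so a filesystem like btrfs can remap or clone it before submission.

#include <stdio.h>

struct bio { int nr; };  /* stand-in for the kernel's struct bio */

typedef void (submit_io_t)(struct bio *bio, long long file_offset);

static void generic_dio_submit(struct bio *bio, long long off,
                               submit_io_t *hook)
{
        if (hook)
                hook(bio, off);  /* fs wants control of submission */
        else
                printf("default submit, bio %d\n", bio->nr);
}

static void btrfs_like_submit(struct bio *bio, long long off)
{
        printf("fs hook: bio %d at logical offset %lld\n",
               bio->nr, off);
}

int main(void)
{
        struct bio b = { 1 };

        generic_dio_submit(&b, 65536, btrfs_like_submit);
        return 0;
}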
2013 Jul 25
0
[PATCH V8 21/33] ocfs2: add support for read_iter and write_iter
Signed-off-by: Dave Kleikamp <dave.kleikamp at oracle.com>
Acked-by: Joel Becker <jlbec at evilplan.org>
Cc: Zach Brown <zab at zabbo.net>
Cc: Mark Fasheh <mfasheh at suse.com>
Cc: ocfs2-devel at oss.oracle.com
---
fs/ocfs2/aops.h | 2 +-
fs/ocfs2/file.c | 55 ++++++++++++++++++++++----------------------------
fs/ocfs2/ocfs2_trace.h | 6 +++---
3 files
2010 Nov 02
2
[RFC][PATCH] direct-io: btrfs: avoid splitting dio requests for non-btrfs filesystems
Hi,
this is about an issue newer kernels show: they split direct I/O requests
into 4k pieces, only to merge them again in the block device layer afterwards.
If anyone is interested in running their own tests, just use a simple command like
dd if=/mnt/test/test-dd1 of=/dev/null iflag=direct bs=64k count=1
in combination with blktrace.
The following patch is more a proposal for discussion than a solution, well
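A C equivalent of the dd line above, handy under blktrace: a single 64k O_DIRECT read, so you can watch whether it reaches the block layer as one request or as sixteen 4k pieces (the path and alignment follow the dd example; the file must already exist and hold at least 64k):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
        void *buf;
        ssize_t n;
        int fd = open("/mnt/test/test-dd1", O_RDONLY | O_DIRECT);

        if (fd < 0)
                return 1;
        if (posix_memalign(&buf, 4096, 65536))  /* aligned 64k buffer */
                return 1;
        n = read(fd, buf, 65536);       /* one dio read, as in the dd test */
        free(buf);
        close(fd);
        return n != 65536;
}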
2010 Apr 15
1
[PATCH] ocfs2: avoid direct write if we fall back to buffered v2
when we fall back to buffered write from direct write, we call
__generic_file_aio_write, but that will end up doing a direct write
even though we are only prepared to do a buffered write, because the file
has the O_DIRECT flag set. This is a fix for
https://bugzilla.novell.com/show_bug.cgi?id=591039
revised with Joel's comments.
---
fs/ocfs2/file.c | 23 ++++++++++++-----------
1 files changed, 12
2013 Jan 09
0
[PATCH V5 19/30] ocfs2: add support for read_iter, write_iter, and direct_IO_bvec
From: Zach Brown <zab@zabbo.net>
ocfs2's .aio_read and .aio_write methods are changed to take an
iov_iter and pass it to generic functions. Wrappers are made to pack
the iovecs into iters and call these new functions.
Signed-off-by: Dave Kleikamp <dave.kleikamp@oracle.com>
Cc: Zach Brown <zab@zabbo.net>
Cc: Mark Fasheh <mfasheh@suse.com>
Cc: Joel Becker
2011 Oct 26
1
Re: ceph on btrfs [was Re: ceph on non-btrfs file systems]
2011/10/26 Sage Weil <sage@newdream.net>:
> On Wed, 26 Oct 2011, Christian Brunner wrote:
>> >> > Christian, have you tweaked those settings in your ceph.conf? It would be
>> >> > something like 'journal dio = false'. If not, can you verify that
>> >> > directio shows true when the journal is initialized from your osd log?
2005 Dec 21
4
ZFS, COW, write(2), directIO...
Hi ZFS Team,
I have a couple of questions...
Assume that the maximum slab size that ZFS supports is x. (I am assuming
there is a maximum.) An application does a (single) write(2) for 2x
bytes. Does ZFS/COW guarantee that either all the 2x bytes are
persistent or none at all? Consider a case where there is a panic after
x bytes has gone to disk and the change propagated to the uber block. Do
2023 Feb 16
0
[RFC PATCH v1 07/12] vsock/virtio: MGS_ZEROCOPY flag support
On Mon, Feb 06, 2023 at 07:00:35AM +0000, Arseniy Krasnov wrote:
>This adds the main logic of MSG_ZEROCOPY flag processing for packet
>creation. When this flag is set and the user's iov iterator is suitable for
>zerocopy transmission, call 'get_user_pages()' and add the returned
>pages to the newly created skb.
>
>Signed-off-by: Arseniy Krasnov <AVKrasnov at sberdevices.ru>
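The userspace side of MSG_ZEROCOPY, as it already exists for TCP/UDP since Linux 4.14 and as this series proposes for AF_VSOCK; a hedged sketch with error handling trimmed (the function name is illustrative):

#include <errno.h>
#include <sys/socket.h>

int send_zerocopy(int fd, const char *buf, size_t len)
{
        int one = 1;
        char ctrl[128];
        struct msghdr msg = { .msg_control = ctrl,
                              .msg_controllen = sizeof(ctrl) };

        /* opt in once per socket */
        if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one)))
                return -1;
        if (send(fd, buf, len, MSG_ZEROCOPY) < 0)
                return -1;
        /* the completion arrives on the error queue; the buffer must
         * not be reused before it is reaped */
        while (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0 && errno == EAGAIN)
                ;       /* real code would poll() for POLLERR instead */
        return 0;
}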
2010 May 07
6
[PATCH 1/5] fs: allow short direct-io reads to be completed via buffered IO V2
V1->V2: Check to see if our current ppos is >= i_size after a short DIO read,
just in case it was actually a short read and we need to just return.
This is similar to what already happens in the write case. If we have a short
read while doing O_DIRECT, instead of just returning, fall through and try to
read the rest via buffered IO. BTRFS needs this because if we encounter a
compressed or
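A userspace analog of that fallback, purely to illustrate the logic (the patch does this inside the kernel's read path; this helper is hypothetical): read with O_DIRECT, and on a short read before EOF, clear O_DIRECT with fcntl() and finish the request buffered.

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

ssize_t read_with_fallback(int fd, void *buf, size_t len, off_t pos)
{
        ssize_t n = pread(fd, buf, len, pos);

        if (n >= 0 && (size_t)n < len) {
                int fl = fcntl(fd, F_GETFL);
                ssize_t m;

                /* fall back to buffered IO for the remainder */
                fcntl(fd, F_SETFL, fl & ~O_DIRECT);
                m = pread(fd, (char *)buf + n, len - n, pos + n);
                if (m > 0)
                        n += m;
                fcntl(fd, F_SETFL, fl);         /* restore O_DIRECT */
        }
        return n;
}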
2013 Dec 02
2
latest sources don't include "drop_cache" option
Was there some reason that patch got dropped?
Otherwise rsync eats up all the buffer memory.
Note -- I tried directio -- didn't work due to alignment
issues -- buffers have to be aligned to sectors.
The kernel, if I remember correctly, has been on again/off again
on requiring alignment on directio -- because most of the drivers
and devices do require it for directio to work, at.
"dd"
2005 Sep 22
0
io provider and files in a forcedirectio mounted filesystem
The following script is used as a first attempt to discover I/O patterns in a
database setup:
#------------------------------------------------------------------
#pragma D option dynvarsize=128m

dtrace:::BEGIN
{
}

/* flag this thread while it is inside a kernel async I/O request */
pid$target::kaio:entry
{
        self->doit = 1;
}

pid$target::_aiodone:return
{
        self->doit = 0;
}

/* trace block I/O issued under kaio, or by oracle itself */
io:::start
/self->doit || execname == "oracle"/
{
2011 Apr 08
0
[PATCH] Btrfs: check for duplicate iov_base's when doing dio reads
Apparently it is ok to submit a read to an IDE device with the same target page
for different offsets. This is what Windows does under qemu. The problem is that
under DIO we expect them to be different buffers for checksumming reasons, and
so this sort of thing will result in checksum errors, when in reality the file
is fine. So when reading, check to make sure that all iov bases are different,
and
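The check itself reduces to something like this userspace miniature (the kernel patch operates on the dio's iovecs before submission; the function name is illustrative):

#include <stdbool.h>
#include <sys/uio.h>

/* Return false if any two iovec entries share a base address --
 * the condition that breaks per-buffer checksum verification. */
bool iov_bases_unique(const struct iovec *iov, int cnt)
{
        int i, j;

        for (i = 0; i < cnt; i++)
                for (j = i + 1; j < cnt; j++)
                        if (iov[i].iov_base == iov[j].iov_base)
                                return false;
        return true;
}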
2010 May 12
0
[PATCH 1/4] fs: allow short direct-io reads to be completed via buffered IO V2
V1->V2: Check to see if our current ppos is >= i_size after a short DIO read,
just in case it was actually a short read and we need to just return.
This is similar to what already happens in the write case. If we have a short
read while doing O_DIRECT, instead of just returning, fall through and try to
read the rest via buffered IO. BTRFS needs this because if we encounter a
compressed or
2010 May 06
1
[PATCH 1/3] fs: allow short direct-io reads to be completed via buffered IO V2
V1->V2: Check to see if our current ppos is >= i_size after a short DIO read,
just in case it was actually a short read and we need to just return.
This is similar to what already happens in the write case. If we have a short
read while doing O_DIRECT, instead of just returning, fall through and try to
read the rest via buffered IO. BTRFS needs this because if we encounter a
compressed or
2008 Mar 04
0
Device-mapper-multipath not working correctly with GNBD devices
Hi all,
I am trying to configure a failover multipath between 2 GNBD devices.
I have a 4-node Red Hat Cluster Suite (RCS) cluster. 3 of them are used for
running services, 1 of them for central storage. In the future I am going to
introduce another machine for central storage. The 2 storage machines are
going to share/export the same disk. The idea is not to have a single point
of failure