Displaying 12 results from an estimated 12 matches for "ocfs2_dio_end_io".
2010 Nov 19
5
[PATCH 1/1] Ocfs2: Teach 'coherency=full' O_DIRECT writes to correctly up_read i_alloc_sem.
The former logic of ocfs2_file_aio_write() was a bit tricky about unlocking the rw_lock
and i_alloc_sem: it used some private bits in struct 'iocb' to communicate with
ocfs2_dio_end_io(). That worked before we introduced the patch supporting the
'coherency=full,buffered' option, since rw_lock and i_alloc_sem were never
both acquired at the same time, no matter whether we were doing buffered or direct IO.
This patch tries to teach ocfs2_dio_end_io to fully understand the behavior of...
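As a reading aid, here is a minimal userspace sketch of the mechanism that description refers to: the submitting path records which locks it holds as bit flags in a pointer-sized private field, and the direct-I/O completion handler inspects those bits to know what it must release. The names and helpers below are illustrative stand-ins, not the actual ocfs2 code.

/* Illustrative sketch (not ocfs2 itself): lock state handed from the
 * submitter to the dio completion handler via bit flags in a private
 * pointer-sized field, in the spirit of iocb->private. */
#include <stdio.h>

enum dio_lock_bits {            /* hypothetical bit positions */
	DIO_RW_LOCK = 0,        /* rw_lock is held; end_io must drop it */
	DIO_RW_LOCK_LEVEL,      /* 0 = read-locked, 1 = write-locked */
	DIO_ALLOC_SEM,          /* i_alloc_sem is held; end_io must up_read it */
};

struct fake_iocb {
	unsigned long private;  /* stands in for iocb->private */
};

static void set_lock_bit(struct fake_iocb *iocb, int bit)
{
	iocb->private |= 1UL << bit;
}

static int test_and_clear_lock_bit(struct fake_iocb *iocb, int bit)
{
	int was_set = !!(iocb->private & (1UL << bit));

	iocb->private &= ~(1UL << bit);
	return was_set;
}

/* submit side: what a write path would record before issuing direct I/O */
static void submit_side(struct fake_iocb *iocb, int write_locked, int holds_sem)
{
	set_lock_bit(iocb, DIO_RW_LOCK);
	if (write_locked)
		set_lock_bit(iocb, DIO_RW_LOCK_LEVEL);
	if (holds_sem)
		set_lock_bit(iocb, DIO_ALLOC_SEM);
}

/* completion side: what a dio_end_io-style handler would release */
static void end_io_side(struct fake_iocb *iocb)
{
	if (test_and_clear_lock_bit(iocb, DIO_ALLOC_SEM))
		printf("end_io: up_read(i_alloc_sem)\n");
	if (test_and_clear_lock_bit(iocb, DIO_RW_LOCK))
		printf("end_io: drop rw_lock (%s)\n",
		       test_and_clear_lock_bit(iocb, DIO_RW_LOCK_LEVEL) ?
		       "write" : "read");
}

int main(void)
{
	struct fake_iocb iocb = { 0 };

	submit_side(&iocb, 0 /* read level */, 1 /* holds i_alloc_sem */);
	end_io_side(&iocb);
	return 0;
}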
2013 Oct 21
1
Kernel BUG in ocfs2_get_clusters_nocache
...[<ffffffffa026eb3a>] ocfs2_direct_IO_get_blocks+0x5a/0x160 [ocfs2]
[Fri Oct 18 10:52:28 2013] [<ffffffff811c87c1>] ? inode_dio_done+0x31/0x40
[Fri Oct 18 10:52:28 2013] [<ffffffff811ea90c>] do_blockdev_direct_IO+0xdfc/0x1fb0
[Fri Oct 18 10:52:28 2013] [<ffffffffa026eae0>] ? ocfs2_dio_end_io+0x110/0x110 [ocfs2]
[Fri Oct 18 10:52:28 2013] [<ffffffff811ebb15>] __blockdev_direct_IO+0x55/0x60
[Fri Oct 18 10:52:28 2013] [<ffffffffa026eae0>] ? ocfs2_dio_end_io+0x110/0x110 [ocfs2]
[Fri Oct 18 10:52:28 2013] [<ffffffffa026e9d0>] ? ocfs2_direct_IO+0x80/0x80 [ocfs2]
[Fri...
2013 Jul 25
0
[PATCH V8 21/33] ocfs2: add support for read_iter and write_iter
...0644
--- a/fs/ocfs2/aops.h
+++ b/fs/ocfs2/aops.h
@@ -74,7 +74,7 @@ static inline void ocfs2_iocb_set_rw_locked(struct kiocb *iocb, int level)
/*
* Using a named enum representing lock types in terms of #N bit stored in
* iocb->private, which is going to be used for communication between
- * ocfs2_dio_end_io() and ocfs2_file_aio_write/read().
+ * ocfs2_dio_end_io() and ocfs2_file_write/read_iter().
*/
enum ocfs2_iocb_lock_bits {
OCFS2_IOCB_RW_LOCK = 0,
diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
index 41000f2..d2d203b 100644
--- a/fs/ocfs2/file.c
+++ b/fs/ocfs2/file.c
@@ -2220,15 +2220,13 @@ ou...
2013 Jan 09
0
[PATCH V5 19/30] ocfs2: add support for read_iter, write_iter, and direct_IO_bvec
...0644
--- a/fs/ocfs2/aops.h
+++ b/fs/ocfs2/aops.h
@@ -72,7 +72,7 @@ static inline void ocfs2_iocb_set_rw_locked(struct kiocb *iocb, int level)
/*
* Using a named enum representing lock types in terms of #N bit stored in
* iocb->private, which is going to be used for communication between
- * ocfs2_dio_end_io() and ocfs2_file_aio_write/read().
+ * ocfs2_dio_end_io() and ocfs2_file_write/read_iter().
*/
enum ocfs2_iocb_lock_bits {
OCFS2_IOCB_RW_LOCK = 0,
diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
index 37d313e..94fc309 100644
--- a/fs/ocfs2/file.c
+++ b/fs/ocfs2/file.c
@@ -2219,15 +2219,13 @@ ou...
2012 Jun 27
4
[V4]fix ocfs2 aio/dio writing process hang
V4 changes:
add Acked-by: Joel Becker <jlbec at evilplan.org>
V3 changes:
- add Cc: stable at vger.kernel.org in the patch header to align with stable rules
- add Acked-by: Jeff Moyer <jmoyer at redhat.com>
V2 changes:
- update the patch header of the first patch to make it more clear.
This patch series fixes an ocfs2 aio/dio write hang.
The call trace looks like
2011 Jun 24
10
[PATCH 0/9] remove i_alloc_sem V2
i_alloc_sem has always been a bit of an odd "lock". It's the only remaining
rw_semaphore that can be released by a different thread than the one that
locked it, and its use case in the core direct I/O code is more like a
counter, given that the writers already have external serialization.
This series removes it in favour of a simpler counter scheme, thus getting
rid
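A quick sketch of what such a counter scheme looks like in general (a userspace illustration under assumed names, not the patch series itself): each in-flight direct I/O bumps a per-inode count, the completion path drops it, and a path that must exclude direct I/O simply waits for the count to drain to zero, much as the inode_dio_done() frame in the trace above suggests.

/* Sketch of a DIO in-flight counter replacing a rw_semaphore-as-counter.
 * Build with: cc dio_count.c -lpthread */
#include <pthread.h>
#include <stdio.h>

struct fake_inode {
	pthread_mutex_t lock;
	pthread_cond_t  drained;
	int             dio_count;   /* stands in for a per-inode i_dio_count */
};

static void dio_begin(struct fake_inode *inode)
{
	pthread_mutex_lock(&inode->lock);
	inode->dio_count++;
	pthread_mutex_unlock(&inode->lock);
}

/* completion side: drop the count and wake anyone waiting for it to drain */
static void dio_done(struct fake_inode *inode)
{
	pthread_mutex_lock(&inode->lock);
	if (--inode->dio_count == 0)
		pthread_cond_broadcast(&inode->drained);
	pthread_mutex_unlock(&inode->lock);
}

/* truncate-style path: wait until no direct I/O is in flight */
static void dio_wait(struct fake_inode *inode)
{
	pthread_mutex_lock(&inode->lock);
	while (inode->dio_count > 0)
		pthread_cond_wait(&inode->drained, &inode->lock);
	pthread_mutex_unlock(&inode->lock);
}

int main(void)
{
	struct fake_inode inode = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.drained = PTHREAD_COND_INITIALIZER,
		.dio_count = 0,
	};

	dio_begin(&inode);   /* a direct write goes in flight */
	dio_done(&inode);    /* its completion drops the count */
	dio_wait(&inode);    /* a truncate sees the count at zero and proceeds */
	printf("no direct I/O in flight\n");
	return 0;
}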
2010 Apr 15
1
[PATCH] ocfs2: avoid direct write if we fall back to buffered v2
...el's comments.
---
fs/ocfs2/file.c | 23 ++++++++++++-----------
1 files changed, 12 insertions(+), 11 deletions(-)
diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
index de059f4..0240de7 100644
--- a/fs/ocfs2/file.c
+++ b/fs/ocfs2/file.c
@@ -1973,18 +1973,18 @@ relock:
/* communicate with ocfs2_dio_end_io */
ocfs2_iocb_set_rw_locked(iocb, rw_level);
- if (direct_io) {
- ret = generic_segment_checks(iov, &nr_segs, &ocount,
- VERIFY_READ);
- if (ret)
- goto out_dio;
+ ret = generic_segment_checks(iov, &nr_segs, &ocount,
+ VERIFY_READ);
+ if (ret)
+ goto out_di...
2009 Jul 13
1
[PATCH 1/1] adds mlogs to aops.c
...ret = -EIO;
goto bail;
}
@@ -618,6 +654,7 @@ static int ocfs2_direct_IO_get_blocks(struct inode *inode, sector_t iblock,
contig_blocks = max_blocks;
bh_result->b_size = contig_blocks << blocksize_bits;
bail:
+ mlog_exit(ret);
return ret;
}
@@ -638,6 +675,8 @@ static void ocfs2_dio_end_io(struct kiocb *iocb,
/* this io's submitter should not have unlocked this before we could */
BUG_ON(!ocfs2_iocb_is_rw_locked(iocb));
+ mlog(0, "(0x%p, %lld, %ld, 0x%p)\n", iocb, offset, (long)bytes, private);
+
ocfs2_iocb_clear_rw_locked(iocb);
level = ocfs2_iocb_rw_locked_...
2009 Jul 21
1
(no subject)
...ret = -EIO;
goto bail;
}
@@ -618,6 +658,7 @@ static int ocfs2_direct_IO_get_blocks(struct inode *inode, sector_t iblock,
contig_blocks = max_blocks;
bh_result->b_size = contig_blocks << blocksize_bits;
bail:
+ mlog_exit(ret);
return ret;
}
@@ -638,6 +679,8 @@ static void ocfs2_dio_end_io(struct kiocb *iocb,
/* this io's submitter should not have unlocked this before we could */
BUG_ON(!ocfs2_iocb_is_rw_locked(iocb));
+ mlog(0, "(0x%p, %lld, %ld, 0x%p)\n", iocb, offset, (long)bytes, private);
+
ocfs2_iocb_clear_rw_locked(iocb);
level = ocfs2_iocb_rw_locked_...
2009 Jul 21
1
[PATCH 1/1] ocfs2: adds mlogs to aops.c -V2
...ret = -EIO;
goto bail;
}
@@ -618,6 +658,7 @@ static int ocfs2_direct_IO_get_blocks(struct inode *inode, sector_t iblock,
contig_blocks = max_blocks;
bh_result->b_size = contig_blocks << blocksize_bits;
bail:
+ mlog_exit(ret);
return ret;
}
@@ -638,6 +679,8 @@ static void ocfs2_dio_end_io(struct kiocb *iocb,
/* this io's submitter should not have unlocked this before we could */
BUG_ON(!ocfs2_iocb_is_rw_locked(iocb));
+ mlog(0, "(0x%p, %lld, %ld, 0x%p)\n", iocb, offset, (long)bytes, private);
+
ocfs2_iocb_clear_rw_locked(iocb);
level = ocfs2_iocb_rw_locked_...
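For readers unfamiliar with ocfs2's mlog facility, the hunks quoted in the three entries above only add entry/exit-style tracing. A rough userspace stand-in (assumed macro shapes, not the real mlog implementation) of that kind of instrumentation:

#include <stdio.h>

/* hypothetical stand-ins for mlog()/mlog_exit(): trace a point of interest
 * and a function's return value, tagged with the function name */
#define mlog(fmt, ...) \
	fprintf(stderr, "%s: " fmt "\n", __func__, ##__VA_ARGS__)
#define mlog_exit(ret) \
	fprintf(stderr, "%s: exit, ret = %d\n", __func__, (int)(ret))

static int get_blocks_stub(long long iblock, unsigned long max_blocks)
{
	int ret = 0;

	mlog("(iblock %lld, max_blocks %lu)", iblock, max_blocks);
	/* ... the real work would happen here ... */
	mlog_exit(ret);
	return ret;
}

int main(void)
{
	return get_blocks_stub(0, 16);
}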
2008 Jun 04
1
OCFS2 and direct-io writes
I am looking at the possibility of using OCFS2 with an existing application that
requires very high throughput for read and write file access.
Files are created by a single writer (process) and can be read by multiple readers,
possibly while the file is being written. 100+ different files may be written
simultaneously, and can be read by 1000+ readers.
I am currently using XFS on a local filesystem,
2010 May 07
6
[PATCH 1/5] fs: allow short direct-io reads to be completed via buffered IO V2
V1->V2: Check whether our current ppos is >= i_size after a short DIO read, in
case the read came up short simply because we hit end of file and just need to return.
This is similar to what already happens in the write case. If we have a short
read while doing O_DIRECT, instead of just returning, fall through and try to
read the rest via buffered IO. BTRFS needs this because if we encounter a
compressed or
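To make the fallback concrete, here is a userspace analogy (an assumption-laden sketch, not the kernel change itself): issue the O_DIRECT read first, and if it comes back short while we are not yet at end of file, finish the request through the ordinary buffered path.

/* Short O_DIRECT read completed via buffered I/O (userspace illustration). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const size_t want = 1 << 20;        /* 1 MiB request */
	char *buf;
	ssize_t direct_bytes, buffered_bytes = 0;
	struct stat st;
	int dfd, bfd;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}

	/* O_DIRECT needs an aligned buffer; 4096 covers typical block sizes */
	if (posix_memalign((void **)&buf, 4096, want))
		return 1;

	dfd = open(argv[1], O_RDONLY | O_DIRECT);
	bfd = open(argv[1], O_RDONLY);
	if (dfd < 0 || bfd < 0 || fstat(bfd, &st) < 0)
		return 1;

	direct_bytes = read(dfd, buf, want);
	if (direct_bytes < 0)
		direct_bytes = 0;           /* treat a failed direct read as "nothing done" */

	/* short direct read and not at EOF: finish the request via buffered I/O */
	if ((size_t)direct_bytes < want && direct_bytes < st.st_size) {
		if (lseek(bfd, direct_bytes, SEEK_SET) >= 0)
			buffered_bytes = read(bfd, buf + direct_bytes,
					      want - direct_bytes);
		if (buffered_bytes < 0)
			buffered_bytes = 0;
	}

	printf("direct: %zd bytes, buffered fallback: %zd bytes\n",
	       direct_bytes, buffered_bytes);
	free(buf);
	close(dfd);
	close(bfd);
	return 0;
}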