search for: is_sync

Displaying 8 results from an estimated 12 matches for "is_sync".

2010 Feb 03
0
[PATCH] ocfs2: Add parenthesis to wrap the check for O_DIRECT.
....b2ca980 100644
--- a/fs/ocfs2/file.c
+++ b/fs/ocfs2/file.c
@@ -2013,8 +2013,8 @@ out_dio:
 	/* buffered aio wouldn't have proper lock coverage today */
 	BUG_ON(ret == -EIOCBQUEUED && !(file->f_flags & O_DIRECT));
-	if ((file->f_flags & O_DSYNC && !direct_io) || IS_SYNC(inode) ||
-	    (file->f_flags & O_DIRECT && has_refcount)) {
+	if (((file->f_flags & O_DSYNC) && !direct_io) || IS_SYNC(inode) ||
+	    ((file->f_flags & O_DIRECT) && has_refcount)) {
 		ret = filemap_fdatawrite_range(file->f_mapping, pos,...
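For context, the change above is about readability rather than behavior: in C, '&' binds tighter than '&&', so both spellings group the same way, and the added parentheses simply make that grouping explicit (and quiet warnings such as gcc's -Wparentheses). A minimal standalone sketch, with made-up flag values:

#include <stdio.h>

/* Flag values are made up for illustration; they are not the real
 * O_DSYNC/O_DIRECT definitions. */
#define O_DSYNC_DEMO  0x1000
#define O_DIRECT_DEMO 0x4000

int main(void)
{
	int f_flags = O_DSYNC_DEMO;	/* O_DSYNC set */
	int direct_io = 0;

	/* '&' binds tighter than '&&', so both forms parse identically. */
	int without_parens = f_flags & O_DSYNC_DEMO && !direct_io;
	int with_parens = (f_flags & O_DSYNC_DEMO) && !direct_io;

	printf("%d %d\n", without_parens, with_parens);	/* prints: 1 1 */
	return 0;
}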
2018 Feb 26
2
v4.16-rc2: virtio-block + ext4 lockdep splats / sleeping from invalid context
...direct-io.c
> +++ b/fs/direct-io.c
> @@ -1274,8 +1274,7 @@ do_blockdev_direct_IO(struct kiocb *iocb, struct inode *inode,
>  	 */
>  	if (dio->is_async && iov_iter_rw(iter) == WRITE) {
>  		retval = 0;
> -		if ((iocb->ki_filp->f_flags & O_DSYNC) ||
> -		    IS_SYNC(iocb->ki_filp->f_mapping->host))
> +		if (iocb->ki_flags & IOCB_DSYNC)
>  			retval = dio_set_defer_completion(dio);
>  		else if (!dio->inode->i_sb->s_dio_done_wq) {
>  			/*
> --
> 2.13.6
>
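The rationale for the one-line test: when a kiocb is set up, the kernel folds both O_DSYNC and IS_SYNC() into a single per-I/O bit, IOCB_DSYNC, so later code need not re-derive the condition. A simplified user-space model of that folding (the flag values and struct layout are stand-ins, not the real kernel types):

#include <stdio.h>

/* Stand-in flag bits; values are illustrative only. */
#define O_DSYNC_DEMO 0x1	/* file opened with O_DSYNC */
#define S_SYNC_DEMO  0x2	/* inode marked sync (IS_SYNC()) */
#define IOCB_DSYNC   0x1	/* per-I/O "must sync" bit */

struct file_demo { unsigned f_flags; unsigned i_flags; };

/* Derive the per-I/O flags once, in the spirit of the kernel's
 * iocb_flags() helper. */
static unsigned iocb_flags_demo(const struct file_demo *f)
{
	unsigned flags = 0;

	if ((f->f_flags & O_DSYNC_DEMO) || (f->i_flags & S_SYNC_DEMO))
		flags |= IOCB_DSYNC;
	return flags;
}

int main(void)
{
	struct file_demo f = { .f_flags = 0, .i_flags = S_SYNC_DEMO };

	/* One bit test replaces the two-clause check in the old code. */
	printf("dsync=%d\n", !!(iocb_flags_demo(&f) & IOCB_DSYNC));	/* 1 */
	return 0;
}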
2001 Jul 30
1
ext3-2.4-0.9.5
...inning or NVRAM) has not been demonstrated.

- The holding of i_sem over the parent is a severe scalability limitation
  with synchronous metadata operations. Better to have:

	void *opaque;
	down(&parent->i_sem);
	file->f_op->op(&opaque, args...);
	up(&parent->i_sem);
	if (IS_SYNC(inode))
		inode->i_op->wait_on_stuff(opaque);
-
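The point of the snippet above is lock scope: kick off the metadata operation while holding the parent's i_sem, but wait for its synchronous completion only after dropping the lock, so other operations on the parent are not serialized behind the device. A user-space sketch of the same pattern (all names are illustrative stand-ins, not kernel API):

#include <pthread.h>
#include <stdio.h>

struct op_handle { int ticket; };	/* stands in for "void *opaque" */

static pthread_mutex_t parent_sem = PTHREAD_MUTEX_INITIALIZER;
static int inode_is_sync = 1;		/* stands in for IS_SYNC(inode) */

static void start_op(struct op_handle *h)	{ h->ticket = 42; }
static void wait_on_stuff(struct op_handle *h)
{
	printf("waiting on ticket %d with the lock dropped\n", h->ticket);
}

int main(void)
{
	struct op_handle h;

	pthread_mutex_lock(&parent_sem);	/* down(&parent->i_sem) */
	start_op(&h);				/* file->f_op->op(&opaque, ...) */
	pthread_mutex_unlock(&parent_sem);	/* up(&parent->i_sem) */

	if (inode_is_sync)			/* IS_SYNC(inode) */
		wait_on_stuff(&h);		/* wait outside the lock */
	return 0;
}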
2012 Apr 20
1
[PATCH] multiqueue: a hodge podge of things
...any waiters. Called under ctx->lock.
  */
-static void freed_request(struct request_queue *q, unsigned int flags)
+static void freed_request(struct blk_queue_ctx *ctx, unsigned int flags)
 {
-	struct request_list *rl = &q->rq;
+	struct request_list *rl = &ctx->rl;
 	int sync = rw_is_sync(flags);

 	rl->count[sync]--;
 	if (flags & REQ_ELVPRIV)
 		rl->elvpriv--;

-	__freed_request(q, sync);
+	__freed_request(ctx, sync);

 	if (unlikely(rl->starved[sync ^ 1]))
-		__freed_request(q, sync ^ 1);
+		__freed_request(ctx, sync ^ 1);
 }

 /*
  * Determine if elevator data s...
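For readers outside the block layer: rw_is_sync() historically treated every read as synchronous and a write as synchronous only when REQ_SYNC was set, and the request list keeps separate counters per bucket so that freeing a request can unstarve waiters in the opposite bucket. A simplified user-space model of that accounting (flag values and names are stand-ins):

#include <stdbool.h>
#include <stdio.h>

#define REQ_WRITE_DEMO 0x1
#define REQ_SYNC_DEMO  0x2

struct request_list_demo {
	int  count[2];		/* [0] = async, [1] = sync */
	bool starved[2];
};

/* Reads count as sync; writes only when REQ_SYNC is set. */
static int rw_is_sync_demo(unsigned flags)
{
	return !(flags & REQ_WRITE_DEMO) || (flags & REQ_SYNC_DEMO);
}

static void freed_request_demo(struct request_list_demo *rl, unsigned flags)
{
	int sync = rw_is_sync_demo(flags);

	rl->count[sync]--;
	if (rl->starved[sync ^ 1])
		printf("wake waiters on the %s bucket\n",
		       (sync ^ 1) ? "sync" : "async");
}

int main(void)
{
	struct request_list_demo rl = { .count = { 1, 1 },
					.starved = { true, false } };

	freed_request_demo(&rl, REQ_WRITE_DEMO | REQ_SYNC_DEMO); /* sync write */
	return 0;
}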
2018 Feb 23
2
v4.16-rc2: virtio-block + ext4 lockdep splats / sleeping from invalid context
Hi all,

While fuzzing arm64/v4.16-rc2 with syzkaller, I simultaneously hit a
number of splats in the block layer:

* inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-R} usage in
  jbd2_trans_will_send_data_barrier

* BUG: sleeping function called from invalid context at mm/mempool.c:320

* WARNING: CPU: 0 PID: 0 at block/blk.h:297 generic_make_request_checks+0x670/0x750

... I've included the
2009 Aug 24
0
[PATCH] Btrfs: proper metadata -ENOSPC handling
...se_path(root, path);
+	ret = btrfs_extend_transaction(trans, root, 1);
+	if (ret)
+		goto out;
+
 	ret = btrfs_lookup_file_extent(trans, root, path, inode->i_ino,
 				       search_start, -1);
 	if (ret < 0)
@@ -1080,6 +1084,10 @@ out_nolock:
 	if ((file->f_flags & O_SYNC) || IS_SYNC(inode)) {
 		trans = btrfs_start_transaction(root, 1);
+		if (IS_ERR(trans)) {
+			err = PTR_ERR(trans);
+			goto fail;
+		}
 		ret = btrfs_log_dentry_safe(trans, root, file->f_dentry);
 		if (ret == 0) {
@@ -1092,6 +1100,7 @@ out_nolock:
 	btrfs_commit_transaction(trans, ro...
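The error handling added here uses the kernel's ERR_PTR convention: pointer-returning functions encode a small negative errno in the otherwise-invalid top of the address space, and callers must check IS_ERR() before dereferencing. A simplified user-space copy of those helpers, assuming the usual linux/err.h definitions; start_transaction_demo() is a hypothetical stand-in for btrfs_start_transaction():

#include <errno.h>
#include <stdio.h>

#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)    { return (void *)error; }
static inline long  PTR_ERR(const void *p) { return (long)p; }
static inline int   IS_ERR(const void *p)
{
	return (unsigned long)p >= (unsigned long)-MAX_ERRNO;
}

/* Hypothetical stand-in for btrfs_start_transaction(). */
static void *start_transaction_demo(int fail)
{
	static int handle;

	return fail ? ERR_PTR(-ENOMEM) : &handle;
}

int main(void)
{
	void *trans = start_transaction_demo(1);

	if (IS_ERR(trans)) {			/* the check the patch adds */
		long err = PTR_ERR(trans);

		printf("start_transaction failed: %ld\n", err);	/* -12 */
		return 1;
	}
	return 0;
}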
2010 May 07
6
[PATCH 1/5] fs: allow short direct-io reads to be completed via buffered IO V2
V1->V2: Check to see if our current ppos is >= i_size after a short DIO
read, just in case it was actually a short read and we need to just
return. This is similar to what already happens in the write case.

If we have a short read while doing O_DIRECT, instead of just returning,
fall through and try to read the rest via buffered IO. BTRFS needs this
because if we encounter a compressed or
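A sketch of the fallback described above: if the direct read returns fewer bytes than requested and we are not yet at i_size, retry the remainder through the buffered path instead of returning early. This is a user-space model under assumed names, not the actual patch:

#include <stdio.h>
#include <string.h>

/* Stand-ins: a DIO read that comes up short, and a buffered read that
 * can always finish the remainder. */
static long direct_read_demo(char *buf, long len)
{
	long done = len / 2;

	memset(buf, 'D', done);
	return done;
}

static long buffered_read_demo(char *buf, long len)
{
	memset(buf, 'B', len);
	return len;
}

static long read_with_fallback(char *buf, long len, long pos, long i_size)
{
	long done = direct_read_demo(buf, len);

	/* Short DIO read and not at EOF: fall through to buffered IO. */
	if (done < len && pos + done < i_size)
		done += buffered_read_demo(buf + done, len - done);
	return done;
}

int main(void)
{
	char buf[16];

	printf("read %ld bytes\n",
	       read_with_fallback(buf, (long)sizeof(buf), 0, 64));
	return 0;
}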
2008 Nov 12
15
[PATCH][RFC][12+2][v3] A expanded CFQ scheduler for cgroups
This patchset expands the traditional CFQ scheduler to support cgroups,
and improves on the previous version. The improvements are as follows:

* Modularizing our new CFQ scheduler. The expanded CFQ scheduler is
  registered/unregistered as a new I/O elevator scheduler called
  "cfq-cgroups". By this, the traditional CFQ scheduler, which does not
  handle cgroups, and our new CFQ