search for: blkdev_issue_flush

Displaying 17 results from an estimated 17 matches for "blkdev_issue_flush".

2011 Jan 26
0
[PATCH 2/3] jbd2: Remove barrier feature conditional flag (or: always issue flushes)
...xt4_sync_file(struct file *file, int datasync) * the journal device.) */ if (ext4_should_writeback_data(inode) && - (journal->j_fs_dev != journal->j_dev) && - (journal->j_flags & JBD2_BARRIER)) + (journal->j_fs_dev != journal->j_dev)) blkdev_issue_flush(inode->i_sb->s_bdev, GFP_KERNEL, NULL); ret = jbd2_log_wait_commit(journal, commit_tid); - } else if (journal->j_flags & JBD2_BARRIER) + } else blkdev_issue_flush(inode->i_sb->s_bdev, GFP_KERNEL, NULL); return ret; } diff --git a/fs/ext4/super.c b/fs/ext4/super.c i...
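For context, a rough reconstruction of the ext4_sync_file() tail that the diff above produces, assuming the 2011-era three-argument blkdev_issue_flush() and the surrounding jbd2_log_start_commit() branch that the snippet truncates; this is a sketch of the post-patch shape, not the literal committed code:

    if (jbd2_log_start_commit(journal, commit_tid)) {
            /*
             * External journal: the data device's write cache must be
             * flushed as well, and after this patch that no longer
             * depends on the JBD2_BARRIER flag.
             */
            if (ext4_should_writeback_data(inode) &&
                (journal->j_fs_dev != journal->j_dev))
                    blkdev_issue_flush(inode->i_sb->s_bdev, GFP_KERNEL, NULL);
            ret = jbd2_log_wait_commit(journal, commit_tid);
    } else
            blkdev_issue_flush(inode->i_sb->s_bdev, GFP_KERNEL, NULL);
    return ret;
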
2018 Feb 23
2
v4.16-rc2: virtio-block + ext4 lockdep splats / sleeping from invalid context
...[ 162.585279] dump_backtrace+0x0/0x1c8 [ 162.587583] show_stack+0x14/0x20 [ 162.589047] dump_stack+0xac/0xe4 [ 162.592035] ___might_sleep+0x164/0x238 [ 162.594830] __might_sleep+0x50/0x88 [ 162.597012] mempool_alloc+0xc0/0x198 [ 162.600633] bio_alloc_bioset+0x144/0x250 [ 162.602983] blkdev_issue_flush+0x48/0xc8 [ 162.606134] ext4_sync_file+0x220/0x330 [ 162.607870] vfs_fsync_range+0x48/0xc0 [ 162.611694] dio_complete+0x1fc/0x220 [ 162.613369] dio_bio_end_aio+0xf0/0x130 [ 162.617040] bio_endio+0xe8/0xf8 [ 162.618583] blk_update_request+0x80/0x2e8 [ 162.619841] blk_mq_end_request+0x2...
2018 Feb 26
0
v4.16-rc2: virtio-block + ext4 lockdep splats / sleeping from invalid context
...0x1c8 > [ 162.587583] show_stack+0x14/0x20 > [ 162.589047] dump_stack+0xac/0xe4 > [ 162.592035] ___might_sleep+0x164/0x238 > [ 162.594830] __might_sleep+0x50/0x88 > [ 162.597012] mempool_alloc+0xc0/0x198 > [ 162.600633] bio_alloc_bioset+0x144/0x250 > [ 162.602983] blkdev_issue_flush+0x48/0xc8 > [ 162.606134] ext4_sync_file+0x220/0x330 > [ 162.607870] vfs_fsync_range+0x48/0xc0 > [ 162.611694] dio_complete+0x1fc/0x220 > [ 162.613369] dio_bio_end_aio+0xf0/0x130 > [ 162.617040] bio_endio+0xe8/0xf8 > [ 162.618583] blk_update_request+0x80/0x2e8 > [...
2023 Aug 06
0
[PATCH v4] virtio_pmem: add the missing REQ_OP_WRITE for flush bio
...ush+0x17/0x40 > > > ? pmem_submit_bio+0x370/0x390 > > > ? __submit_bio+0xbc/0x190 > > > ? submit_bio_noacct_nocheck+0x14d/0x370 > > > ? submit_bio_noacct+0x1ef/0x520 > > > ? submit_bio+0x55/0x60 > > > ? submit_bio_wait+0x5a/0xc0 > > > ? blkdev_issue_flush+0x44/0x60 > > > > > > The root cause is that submit_bio_noacct() needs bio_op() to be either > > > WRITE or ZONE_APPEND for a flush bio, and async_pmem_flush() doesn't assign > > > REQ_OP_WRITE when allocating the flush bio, so submit_bio_noacct() just fails > > &g...
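The fix described above boils down to tagging the flush bio with REQ_OP_WRITE in addition to REQ_PREFLUSH. A minimal sketch of that pattern, assuming the post-v5.18 bio_alloc() signature; the variable names and the end_io handler here are illustrative, not the actual virtio_pmem code:

    /*
     * An empty bio marked REQ_PREFLUSH asks the device to flush its
     * volatile write cache.  Without REQ_OP_WRITE, submit_bio_noacct()
     * rejects it because bio_op() is neither WRITE nor ZONE_APPEND.
     */
    struct bio *flush_bio = bio_alloc(bdev, 0,
                                      REQ_OP_WRITE | REQ_PREFLUSH,
                                      GFP_KERNEL);

    flush_bio->bi_end_io = my_flush_end_io;   /* hypothetical completion */
    submit_bio(flush_bio);
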
2006 Oct 20
2
the worst scenario of ext3 after abnormal powerdown
Hi, I have seen and heard of many cases of ext3 being corrupted after an abnormal powerdown (e.g. all the files in one directory going missing). Yes, a UPS should help, but I wonder what kind of worst-case scenario ext3 will present after a powerdown. Messed-up metadata has been seen in many cases; for example, the indirect block of one inode contains garbage, which causes the automatic fsck to fail, and the user has
2016 Aug 16
3
hfsplus on C7
It would seem kmod-hfsplus does not have a module for my kernel uname -r 3.10.0-327.28.2.el7.x86_64 yum provides "*/hfsplus.ko" Loaded plugins: fastestmirror, langpacks Loading mirror speeds from cached hostfile * base: mirror.tzulo.com * elrepo: repos.ord.lax-noc.com * extras: mirrors.usinternet.com * updates: mirror.nexcess.net kmod-hfsplus-0.0-1.el7.elrepo.x86_64 : hfsplus
2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...dm_make_request+0x128/0x1a0 [dm_mod] [ 8280.189498] [<ffffffff9671348d>] io_schedule_timeout+0xad/0x130 [ 8280.189502] [<ffffffff967145ad>] wait_for_completion_io+0xfd/0x140 [ 8280.189507] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20 [ 8280.189513] [<ffffffff9631e574>] blkdev_issue_flush+0xb4/0x110 [ 8280.189546] [<ffffffffc04984b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [ 8280.189588] [<ffffffffc0480c40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [ 8280.189593] [<ffffffff9624f0e7>] do_fsync+0x67/0xb0 [ 8280.189597] [<ffffffff9672076f>] ? system_call_after_swapg...
2018 May 30
1
[ovirt-users] Re: Gluster problems, cluster performance issues
...8/0x1a0 [dm_mod] > [ 8280.189498] [<ffffffff9671348d>] io_schedule_timeout+0xad/0x130 > [ 8280.189502] [<ffffffff967145ad>] wait_for_completion_io+0xfd/0x140 > [ 8280.189507] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20 > [ 8280.189513] [<ffffffff9631e574>] blkdev_issue_flush+0xb4/0x110 > [ 8280.189546] [<ffffffffc04984b9>] xfs_blkdev_issue_flush+0x19/0x20 > [xfs] > [ 8280.189588] [<ffffffffc0480c40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] > [ 8280.189593] [<ffffffff9624f0e7>] do_fsync+0x67/0xb0 > [ 8280.189597] [<ffffffff9672076f>]...
2018 Jun 01
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...>> [ 8280.189498] [<ffffffff9671348d>] io_schedule_timeout+0xad/0x130 >> [ 8280.189502] [<ffffffff967145ad>] wait_for_completion_io+0xfd/0x140 >> [ 8280.189507] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20 >> [ 8280.189513] [<ffffffff9631e574>] blkdev_issue_flush+0xb4/0x110 >> [ 8280.189546] [<ffffffffc04984b9>] xfs_blkdev_issue_flush+0x19/0x20 >> [xfs] >> [ 8280.189588] [<ffffffffc0480c40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] >> [ 8280.189593] [<ffffffff9624f0e7>] do_fsync+0x67/0xb0 >> [ 8280.189597] [<ff...
2018 May 30
2
[ovirt-users] Re: Gluster problems, cluster performance issues
...1279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 >>> [11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140 >>> [11279.495010] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >>> [11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 >>> [11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 >>> [xfs] >>> [11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] >>> [11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >>> [11...
2011 Aug 15
6
[patch] xen-blkback: sync I/O after backend disconnected
When the backend disconnects, sync I/O requests to the disk. Signed-off-by: Joe Jin <joe.jin@oracle.com> Cc: Jens Axboe <jaxboe@fusionio.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Ian Campbell <Ian.Campbell@eu.citrix.com> --- drivers/block/xen-blkback/xenbus.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/drivers/block/xen-blkback/xenbus.c
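The snippet stops before the 8-line body, but the idea it states is to flush the backing device when the frontend goes away. A purely illustrative sketch of that idea against the 2011-era block API; the helper name and its placement in the disconnect path are assumptions, not the submitted patch:

    /*
     * Illustrative only: before tearing down the backend, make sure
     * completed writes have left the disk's volatile cache.  xen-blkback
     * reaches its backing block device through blkif->vbd.bdev.
     */
    static void xen_vbd_sync(struct xen_vbd *vbd)
    {
            if (vbd->bdev)
                    blkdev_issue_flush(vbd->bdev, GFP_KERNEL, NULL);
    }
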
2018 May 30
1
[ovirt-users] Re: Gluster problems, cluster performance issues
...8/0x1a0 [dm_mod] > [11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 > [11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140 > [11279.495010] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 > [11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 > [11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 > [xfs] > [11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] > [11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 > [11279.495090] [<ffffffffb992076f>]...
2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...fffb991348d>] io_schedule_timeout+0xad/0x130 >>>>> [11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140 >>>>> [11279.495010] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >>>>> [11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 >>>>> [11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 >>>>> [xfs] >>>>> [11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] >>>>> [11279.495086] [<ffffffffb944f0e7>] do_...
2012 Apr 20
1
[PATCH] multiqueue: a hodge podge of things
...k_queue_tag *); static inline struct request *blk_map_queue_find_tag(struct blk_queue_tag *bqt, int tag) { if (unlikely(bqt == NULL || tag >= bqt->real_max_depth)) return NULL; return bqt->tag_index[tag]; } #define BLKDEV_DISCARD_SECURE 0x01 /* secure discard */ extern int blkdev_issue_flush(struct block_device *, gfp_t, sector_t *); extern int blkdev_issue_discard(struct block_device *bdev, sector_t sector, sector_t nr_sects, gfp_t gfp_mask, unsigned long flags); extern int blkdev_issue_zeroout(struct block_device *bdev, sector_t sector, sector_t nr_sects, gfp_t gfp_mask); static...
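For reference, a minimal caller of the prototype shown in the snippet above (the 2012-era three-argument form); most callers passed NULL for the final error_sector argument, since few drivers ever filled it in:

    /*
     * Flush the volatile write cache of bdev and wait for the flush to
     * complete.  Returns 0 on success or a negative errno.
     */
    static int sync_device_cache(struct block_device *bdev)
    {
            return blkdev_issue_flush(bdev, GFP_KERNEL, NULL);
    }
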