search for: submit_bio

Displaying 20 results from an estimated 141 matches for "submit_bio".

2018 Feb 07
1
Adjust type of rw in submit_bio from int to unsigned long
Hi, I am a developer working on CentOS. Recently I have been porting one of my block drivers from CentOS 6.x to CentOS 7.x. In the newest kernel (kernel-3.10.0-693.17.1.el7) I found an issue with submit_bio()'s first argument: void submit_bio(int rw, struct bio *bio). The type of bi_rw in struct bio is unsigned long, and the number of enum rq_flags_bits entries also exceeds 32, so it would be better to use unsigned long for rw in submit_bio(), like: void submit_bio(unsigned long rw, struct bio *bio)
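For reference, a minimal sketch of the prototype change being proposed, with the prototypes exactly as quoted in the post (purely illustrative):

    /* prototype in that kernel: rw is a plain int */
    void submit_bio(int rw, struct bio *bio);

    /* proposed: widen rw so it can carry all of the request flag bits */
    void submit_bio(unsigned long rw, struct bio *bio);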
2006 Jun 09
1
RHEL 4 U2 / OCFS 1.2.1 weekly crash?
Hello, I have two nodes running the 2.6.9-22.0.2.ELsmp kernel and the OCFS2 1.2.1 RPMs. About once a week, one of the nodes crashes itself (self-fencing) and I get a full vmcore on my netdump server. The netdump log file shows the shared filesystem LUN (/dev/dm-6) did not respond within 12000ms. I have not changed the default heartbeat values in /etc/sysconfig/o2cb. There was no other IO
2010 May 12
0
[PATCH 2/4] direct-io: add a hook for the fs to provide its own submit_bio function V3
...s it into the submit_io hook. Because BTRFS can do RAID and such, we need our own submit hook so we can set up the bios in the correct fashion and handle checksum errors properly. So there are a few changes here: 1) The submit_io hook. This is straightforward, just call this instead of submit_bio. 2) Allow the fs to return -ENOTBLK for reads. Usually this has only worked for writes, since writes can fall back onto buffered IO. But BTRFS needs the option of falling back on buffered IO if it encounters a compressed extent, since we need to read the entire extent in and decompress it. So if...
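A rough sketch of the idea behind such a hook, with hypothetical names (the actual patch defines the exact signature and plumbing):

    /* Hypothetical shape: if the filesystem supplies a submit hook,
     * direct-io calls it instead of submit_bio(), so e.g. btrfs can
     * set up checksumming/RAID on the bio first. */
    typedef void (dio_submit_t)(struct bio *bio, struct inode *inode,
                                loff_t file_offset);

    static void dio_bio_submit(struct dio *dio, struct bio *bio)
    {
        if (dio->submit_io)
            dio->submit_io(bio, dio->inode, dio->logical_offset);
        else
            submit_bio(dio->rw, bio);
    }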
2007 Jul 29
1
6 node cluster with unexplained reboots
We just installed a new cluster with 6 HP DL380g5, dual single-port Qlogic 24xx HBAs connected via two HP 4/16 Storageworks switches to a 3Par S400. We are using the 3Par recommended config for the Qlogic driver and device-mapper-multipath, giving us 4 paths to the SAN. We do see some SCSI errors where DM-MP is failing a path after getting a 0x2000 error from the SAN controller, but the path gets put
2008 Jul 14
1
Node fence on RHEL4 machine running 1.2.8-2
...do allocating bios for read Jul 14 05:55:59 node1 Index 4: took 0 ms to do bio alloc read Jul 14 05:55:59 node1 Heartbeat thread (13) printing last 24 blocking operations (cur = 0): Jul 14 05:55:59 node1 Index 5: took 0 ms to do bio add page read Jul 14 05:55:59 node1 Index 10: took 0 ms to do submit_bio for write Jul 14 05:55:59 node1 Index 11: took 0 ms to do checking slots Jul 14 05:55:59 node1 Index 6: took 0 ms to do submit_bio for read Jul 14 05:55:59 node1 Heartbeat thread stuck at msleep, stuffing current time into that blocker (index 0) Jul 14 05:55:59 node1 Index 12: took 0 ms to do w...
2006 Nov 03
2
Newbie questions -- is OCFS2 what I even want?
Dear Sirs and Madams, I run a small visual effects production company, Hammerhead Productions. We'd like to have an easily extensible inexpensive relatively high-performance storage network using open-source components. I was hoping that OCFS2 would be that system. I have a half-dozen 2 TB fileservers I'd like the rest of the network to see as a single 12 TB disk, with the aggregate
2003 Dec 10
0
VFS: brelse: Trying to free free buffer
...96/160] __find_get_block+0x60/0xa0 Dec 11 04:03:56 fendrian kernel: [__getblk_slow+24/224] __getblk_slow+0x18/0xe0 Dec 11 04:03:56 fendrian kernel: [__getblk+42/48] __getblk+0x2a/0x30 Dec 11 04:03:56 fendrian kernel: [ext3_getblk+123/528] ext3_getblk+0x7b/0x210 Dec 11 04:03:56 fendrian kernel: [submit_bio+61/112] submit_bio+0x3d/0x70 Dec 11 04:03:56 fendrian kernel: [ll_rw_block+88/128] ll_rw_block+0x58/0x80 Dec 11 04:03:56 fendrian kernel: [ext3_find_entry+288/976] ext3_find_entry+0x120/0x3d0 Dec 11 04:03:56 fendrian kernel: [ext3_lookup+41/160] ext3_lookup+0x29/0xa0 Dec 11 04:03:56 fendrian ker...
2023 Aug 06
0
[PATCH v4] virtio_pmem: add the missing REQ_OP_WRITE for flush bio
...; From: Hou Tao <houtao1 at huawei.com> > > > > > > When doing mkfs.xfs on a pmem device, the following warning was > > > reported: > > > > > > ------------[ cut here ]------------ > > > WARNING: CPU: 2 PID: 384 at block/blk-core.c:751 submit_bio_noacct > > > Modules linked in: > > > CPU: 2 PID: 384 Comm: mkfs.xfs Not tainted 6.4.0-rc7+ #154 > > > Hardware name: QEMU Standard PC (i440FX + PIIX, 1996) > > > RIP: 0010:submit_bio_noacct+0x340/0x520 > > > ...... > > > Call Trace: >...
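An illustrative one-liner of the fix the patch title describes (not the exact patch hunk): the empty flush bio carries only the REQ_PREFLUSH flag without an explicit write op, so it needs to be marked as a write as well:

    /* flush bio: make the write op explicit instead of flag-only */
    bio->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;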
2018 Nov 06
2
[PATCH 0/1] vhost: add vhost_blk driver
..., my implementation is not that different, > because the whole thing has only about twice the LOC of vhost/test.c. > > I posted my numbers (see the 16 queues case in the quoted text above); > IOPS goes from ~100k to 1.2M and almost reaches the physical > limitation of the backend. > > submit_bio() is a bit faster, but can't be used for disk images placed > on a file system. I have that submit_bio implementation too. > > Storage industry is shifting away from SCSI, which has a scaling > problem. I know little about storage. For scaling, do you mean the SCSI protocol itself? If...
2019 May 18
2
[Qemu-devel] [PATCH v9 2/7] virtio-pmem: Add virtio pmem driver
...1] R13: ffff8880b916b600 R14: ffff888341800000 R15: ffffea00006f4840 [ 2504.243602] write_pmem+0x61/0x90 [ 2504.244002] pmem_do_bvec+0x178/0x2c0 [ 2504.244469] ? chksum_update+0xe/0x20 [ 2504.244908] pmem_make_request+0xf7/0x270 [ 2504.245509] generic_make_request+0x199/0x3f0 [ 2504.246179] ? submit_bio+0x67/0x130 [ 2504.246710] submit_bio+0x67/0x130 [ 2504.247117] ext4_io_submit+0x44/0x50 [ 2504.247556] ext4_writepages+0x621/0xe80 [ 2504.248028] ? 0xffffffff81000000 [ 2504.248418] ? do_writepages+0x46/0xd0 [ 2504.248880] ? ext4_mark_inode_dirty+0x1d0/0x1d0 [ 2504.249417] do_writepages+0x46...
2015 Sep 18
3
[RFC PATCH 0/2] virtio nvme
...tween backend driver queue resources for real NVMe hardware (eg: > target_core_nvme), but since it would still be doing close to the same > amount of software emulation for both backend driver cases, I wouldn't > expect there to be much performance advantage over just using normal > submit_bio(). > > --nab >
2014 Sep 11
3
blk-mq crash under KVM in multiqueue block code (with virtio-blk and ext4)
...__blk_mq_alloc_request+0x2c/0x1e8) [ 66.437547] [<00000000003eef82>] blk_mq_map_request+0xc2/0x208 [ 66.437549] [<00000000003ef860>] blk_sq_make_request+0xac/0x350 [ 66.437721] [<00000000003e2d6c>] generic_make_request+0xc4/0xfc [ 66.437723] [<00000000003e2e56>] submit_bio+0xb2/0x1a8 [ 66.438373] [<000000000031e8aa>] ext4_io_submit+0x52/0x80 [ 66.438375] [<000000000031ccfa>] ext4_writepages+0x7c6/0xd0c [ 66.438378] [<00000000002aea20>] __writeback_single_inode+0x54/0x274 [ 66.438379] [<00000000002b0134>] writeback_sb_inodes+0x28c/0...
2019 Apr 11
4
[PATCH v5 1/6] libnvdimm: nd_region flush callback support
...> + if (!child) > + return -ENOMEM; > + bio_copy_dev(child, bio); > + child->bi_opf = REQ_PREFLUSH; > + child->bi_iter.bi_sector = -1; > + bio_chain(child, bio); > + submit_bio(child); I understand how this works, but it's a bit too "magical" for my taste. I would prefer that all flush implementations take an optional 'bio' argument rather than rely on the make_request implementation to stash the bio away on a driver specific list. > + } e...
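A hedged sketch of the alternative the reviewer is asking for (illustrative signature only, not the final API): each flush implementation receives the bio directly and decides what to do with it, rather than the make_request path stashing it on a driver-private list:

    /* region-level flush callback that is handed the parent bio;
     * a driver that needs it (e.g. to chain a child preflush bio)
     * can use it, others simply ignore the argument */
    int (*flush)(struct nd_region *nd_region, struct bio *bio);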
2016 Apr 13
3
Bug#820862: xen-hypervisor-4.4-amd64: Xen VM on Jessie freezes often with INFO: task jbd2/xvda2-8:111 blocked for more than 120 seconds
...it_on_page_bit+0x7f/0x90 [ 1680.060231] [<ffffffff810a7e90>] ? autoremove_wake_function+0x30/0x30 [ 1680.060246] [<ffffffff8114a46d>] ? pagevec_lookup_tag+0x1d/0x30 [ 1680.060254] [<ffffffff8113cfc0>] ? filemap_fdatawait_range+0xd0/0x160 [ 1680.060260] [<ffffffff8127d941>] ? submit_bio+0x71/0x150 [ 1680.060266] [<ffffffff81279118>] ? bio_alloc_bioset+0x198/0x290 [ 1680.060275] [<ffffffffa001172c>] ? jbd2_journal_commit_transaction+0xa5c/0x1950 [jbd2] [ 1680.060283] [<ffffffff8100331e>] ? xen_end_context_switch+0xe/0x20 [ 1680.060292] [<ffffffff810912f6>] ?...