search for: blk_mq_end_request

Displaying 7 results from an estimated 11 matches for "blk_mq_end_request".

2018 Feb 23
2
v4.16-rc2: virtio-block + ext4 lockdep splats / sleeping from invalid context
...ake_request_checks+0x670/0x750 ... I've included the full splats at the end of the mail. These all happen in the context of the virtio block IRQ handler, so I wonder if this calls something that doesn't expect to be called from IRQ context. Is it valid to call blk_mq_complete_request() or blk_mq_end_request() from an IRQ handler? Syzkaller came up with a minimized reproducer, but it's a bit wacky (the fcntl and bpf calls should have no practical effect), and I haven't managed to come up with a C reproducer. Any ideas? Thanks, Mark. Syzkaller reproducer: # {Threaded:true Collide:true Repea...
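For context, the completion path under discussion looks roughly like the following condensed sketch of virtio-blk's vring callback (a simplification of the upstream driver around v4.16, not the exact source):

	/* vring interrupt callback -- runs in hard-IRQ context */
	static void virtblk_done(struct virtqueue *vq)
	{
		struct virtio_blk *vblk = vq->vdev->priv;
		int qid = vq->index;
		struct virtblk_req *vbr;
		unsigned long flags;
		unsigned int len;

		spin_lock_irqsave(&vblk->vqs[qid].lock, flags);
		while ((vbr = virtqueue_get_buf(vq, &len)) != NULL)
			/* everything reachable from here runs in IRQ context */
			blk_mq_complete_request(blk_mq_rq_from_pdu(vbr));
		spin_unlock_irqrestore(&vblk->vqs[qid].lock, flags);
	}

Anything invoked from this path, such as a filesystem's bio end_io callback, must therefore be safe to run in interrupt context, which is what the lockdep splats in the report are complaining about.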
2018 Feb 26
0
v4.16-rc2: virtio-block + ext4 lockdep splats / sleeping from invalid context
...> ... I've included the full splats at the end of the mail. > > These all happen in the context of the virtio block IRQ handler, so I > wonder if this calls something that doesn't expect to be called from IRQ > context. Is it valid to call blk_mq_complete_request() or > blk_mq_end_request() from an IRQ handler? No, it's likely a bug in detecting whether IO completion should be deferred to a workqueue or not. Does the attached patch fix the problem? I don't see exactly this being triggered by the syzkaller reproducer, but it's close enough :) Honza > Syzkaller came up with...
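The deferral logic being discussed follows a common kernel pattern, sketched below in generic form (illustrative only, not Jan's attached patch; my_dio, dio_may_sleep() and friends are hypothetical names):

	/* bio completion handler -- may be invoked from a hard-IRQ handler */
	static void my_end_io(struct bio *bio)
	{
		struct my_dio *dio = bio->bi_private;

		if (dio_may_sleep(dio)) {
			/* work that can block (page-cache invalidation,
			 * journalling, ...) must be punted to process context */
			INIT_WORK(&dio->work, my_complete_work);
			queue_work(dio->wq, &dio->work);
		} else {
			/* nothing here sleeps, so finishing inline is fine */
			my_complete_inline(dio);
		}
	}

The bug class described here is getting the dio_may_sleep()-style decision wrong, so that work which can sleep ends up running inline in IRQ context -- matching the "sleeping from invalid context" splats in the report.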
2018 Feb 26
2
v4.16-rc2: virtio-block + ext4 lockdep splats / sleeping from invalid context
...ded the full splats at the end of the mail. > > > > These all happen in the context of the virtio block IRQ handler, so I > > wonder if this calls something that doesn't expect to be called from IRQ > > context. Is it valid to call blk_mq_complete_request() or > > blk_mq_end_request() from an IRQ handler? > > No, it's likely a bug in detecting whether IO completion should be deferred > to a workqueue or not. Does the attached patch fix the problem? I don't see > exactly this being triggered by the syzkaller reproducer, but it's close enough :) > > Honza...
2018 Feb 26
0
v4.16-rc2: virtio-block + ext4 lockdep splats / sleeping from invalid context
...e end of the mail. > > > > > > These all happen in the context of the virtio block IRQ handler, so I > > > wonder if this calls something that doesn't expect to be called from IRQ > > > context. Is it valid to call blk_mq_complete_request() or > > > blk_mq_end_request() from an IRQ handler? > > > > No, it's likely a bug in detecting whether IO completion should be deferred > > to a workqueue or not. Does the attached patch fix the problem? I don't see > > exactly this being triggered by the syzkaller reproducer, but it's close enough :) >...
2019 Dec 12
4
[PATCH] virtio-blk: remove VIRTIO_BLK_F_SCSI support
...ruct scatterlist *data_sg, bool have_data)
 {
@@ -216,13 +136,6 @@ static inline void virtblk_request_done(struct request *req)
 			req->special_vec.bv_offset);
 	}
 
-	switch (req_op(req)) {
-	case REQ_OP_SCSI_IN:
-	case REQ_OP_SCSI_OUT:
-		virtblk_scsi_request_done(req);
-		break;
-	}
-
 	blk_mq_end_request(req, virtblk_result(vbr));
 }
 
@@ -299,10 +212,6 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
 		type = VIRTIO_BLK_T_WRITE_ZEROES;
 		unmap = !(req->cmd_flags & REQ_NOUNMAP);
 		break;
-	case REQ_OP_SCSI_IN:
-	case REQ_OP_SCSI_OUT:
-		type = VIRTIO_BLK_T_SCSI_CMD;
-	b...
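With the SCSI cases gone, the completion path collapses to a single call; condensed from the diff above, the remaining function is roughly:

	static inline void virtblk_request_done(struct request *req)
	{
		struct virtblk_req *vbr = blk_mq_rq_to_pdu(req);

		/* ... special_vec cleanup elided, as in the excerpt ... */

		blk_mq_end_request(req, virtblk_result(vbr));
	}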
2017 Jan 28
6
make SCSI passthrough support optional
Hi all, this series builds on my previous changes in Jens' for-4.11/rq-refactor branch that split out the BLOCK_PC fields from struct request into a new struct scsi_request, and makes support for struct scsi_request and the SCSI passthrough ioctls optional. It is now only enabled by drivers that need it. In addition, I've made SCSI passthrough support in the virtio_blk driver an optional
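The convention this series establishes, as far as the excerpt describes it: a driver that keeps SCSI passthrough embeds struct scsi_request at the start of its per-request PDU so that scsi_req() can recover it. A minimal sketch, assuming the post-series layout (the struct name is illustrative, not from a specific driver):

	#include <scsi/scsi_request.h>	/* struct scsi_request, scsi_req() */
	#include <scsi/scsi_cmnd.h>	/* SCSI_SENSE_BUFFERSIZE */

	struct my_passthrough_req {
		struct scsi_request sreq;	/* must be first: scsi_req(rq)
						 * is blk_mq_rq_to_pdu(rq) */
		u8 sense[SCSI_SENSE_BUFFERSIZE];
		/* driver-private fields follow */
	};

Drivers that don't need the passthrough ioctls simply omit this and the associated Kconfig option, which is the "only enabled by drivers that need it" part of the series.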
2015 Sep 10
6
[RFC PATCH 0/2] virtio nvme
Hi all, These 2 patches add virtio-nvme to the kernel and QEMU, basically modified from the virtio-blk and nvme code. As the title says, this is a request for your comments. Play it in QEMU with:

	-drive file=disk.img,format=raw,if=none,id=D22 \
	-device virtio-nvme-pci,drive=D22,serial=1234,num_queues=4

The goal is to have a full NVMe stack from VM guest (virtio-nvme) to host (vhost_nvme) to LIO NVMe-over-fabrics