Displaying 20 results from an estimated 53 matches for "queue_num".
2011 Nov 11
1
[RFC] kvm tools: Implement multiple VQ for virtio-net
...+ pthread_cond_t io_cond[VIRTIO_NET_NUM_QUEUES];
+ int rx_vq_num;
+ int tx_vq_num;
+ int vq_num;
int tap_fd;
char tap_name[IFNAMSIZ];
@@ -78,17 +76,22 @@ static void *virtio_net_rx_thread(void *p)
struct net_dev *ndev = p;
u16 out, in;
u16 head;
- int len;
+ int len, queue_num;
+
+ mutex_lock(&ndev->mutex);
+ queue_num = ndev->rx_vq_num * 2;
+ ndev->tx_vq_num++;
+ mutex_unlock(&ndev->mutex);
kvm = ndev->kvm;
- vq = &ndev->vqs[VIRTIO_NET_RX_QUEUE];
+ vq = &ndev->vqs[queue_num];
while (1) {
- mutex_lock(&ndev->io_rx_lo...
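In the hunk above, each RX thread claims a virtqueue index under the device
mutex, with RX queues at even indices and TX queues at odd ones (the excerpt
shows the RX path incrementing tx_vq_num; whether that is intended or a slip,
the sketch below assumes each path advances its own counter). A minimal
userspace sketch of that claim-an-index pattern, with illustrative names
rather than the RFC's exact code:

    #include <pthread.h>
    #include <stdio.h>

    struct net_dev {
        pthread_mutex_t mutex;
        int rx_vq_num;   /* next RX queue index to hand out */
        int tx_vq_num;   /* next TX queue index to hand out */
    };

    /* Each RX thread claims the next even virtqueue (0, 2, 4, ...). */
    static int claim_rx_queue(struct net_dev *ndev)
    {
        int queue_num;

        pthread_mutex_lock(&ndev->mutex);
        queue_num = ndev->rx_vq_num * 2;      /* RX vqs at even slots */
        ndev->rx_vq_num++;
        pthread_mutex_unlock(&ndev->mutex);
        return queue_num;
    }

    /* Each TX thread claims the next odd virtqueue (1, 3, 5, ...). */
    static int claim_tx_queue(struct net_dev *ndev)
    {
        int queue_num;

        pthread_mutex_lock(&ndev->mutex);
        queue_num = ndev->tx_vq_num * 2 + 1;  /* TX vqs at odd slots */
        ndev->tx_vq_num++;
        pthread_mutex_unlock(&ndev->mutex);
        return queue_num;
    }

    int main(void)
    {
        struct net_dev ndev = { .mutex = PTHREAD_MUTEX_INITIALIZER };

        printf("rx0 -> vq %d\n", claim_rx_queue(&ndev));  /* vq 0 */
        printf("tx0 -> vq %d\n", claim_tx_queue(&ndev));  /* vq 1 */
        printf("rx1 -> vq %d\n", claim_rx_queue(&ndev));  /* vq 2 */
        return 0;
    }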
2013 Mar 23
10
[PATCH V7 0/5] virtio-scsi multiqueue
This series implements virtio-scsi queue steering, which gives
performance improvements of up to 50% (measured both with QEMU and
tcm_vhost backends).
This version is rebased on Rusty's virtio ring rework patches, which
have already gone into virtio-next today.
We hope this can go into virtio-next together with the virtio ring
rework patches.
V7: respin to fix the patch-apply error
V6: rework
2013 Feb 12
6
[PATCH v3 0/5] virtio-scsi multiqueue
This series implements virtio-scsi queue steering, which gives
performance improvements of up to 50% (measured both with QEMU and
tcm_vhost backends). The patches build on top of the new virtio APIs
at http://permalink.gmane.org/gmane.linux.kernel.virtualization/18431;
the new APIs simplify the locking of the virtio-scsi driver nicely,
so it makes sense to require them as a prerequisite.
2013 Mar 19
6
[PATCH V5 0/5] virtio-scsi multiqueue
This series implements virtio-scsi queue steering, which gives
performance improvements of up to 50% (measured both with QEMU and
tcm_vhost backends).
This version is rebased on Rusty's virtio ring rework patches.
We hope this can go into virtio-next together with the virtio ring
rework patches.
V5: improve the grammar of 1/5 (Paolo)
move the dropping of sg_elems to 'virtio-scsi: use
2013 Mar 11
7
[PATCH V4 0/5] virtio-scsi multiqueue
This series implements virtio-scsi queue steering, which gives
performance improvements of up to 50% (measured both with QEMU and
tcm_vhost backends).
This version is rebased on Rusty's virtio ring rework patches.
We hope this can go into virtio-next together with the virtio ring
rework patches.
V4: rebase on virtio ring rework patches (Rusty's pending-rebases branch)
V3 can be found
2018 Jun 07
2
[PATCH v6] virtio_blk: add DISCARD and WRITE ZEROES commands support
...o discard and write zeroes commands.
>
> I don't see this in the patch...
Yeah, doing nothing with DISCARD/WRITE ZEROES means there is no need to OR BLK_T_OUT again.
>
> > @@ -225,6 +260,7 @@ static blk_status_t virtio_queue_rq(struct
> blk_mq_hw_ctx *hctx,
> > int qid = hctx->queue_num;
> > int err;
> > bool notify = false;
> > + bool unmap = false;
> > u32 type;
> >
> > BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);
> > @@ -237,6 +273,13 @@ static blk_status_t virtio_queue_rq(struct
> blk_mq_hw_ctx *hctx,
> > ...
2013 Mar 20
7
[PATCH V6 0/5] virtio-scsi multiqueue
This series implements virtio-scsi queue steering, which gives
performance improvements of up to 50% (measured both with QEMU and
tcm_vhost backends).
This version is rebased on Rusty's virtio ring rework patches, which
have already gone into virtio-next today.
We hope this can go into virtio-next together with the virtio ring
rework patches.
V6: rework "redo allocation of target data"
2012 Apr 07
0
[PATCH 05/14] kvm tools: Add virtio-mmio support
...ice *vdev;
+ u32 vq;
+};
+
+struct virtio_mmio_hdr {
+ char magic[4];
+ u32 version;
+ u32 device_id;
+ u32 vendor_id;
+ u32 host_features;
+ u32 host_features_sel;
+ u32 reserved_1[2];
+ u32 guest_features;
+ u32 guest_features_sel;
+ u32 guest_page_size;
+ u32 reserved_2;
+ u32 queue_sel;
+ u32 queue_num_max;
+ u32 queue_num;
+ u32 queue_align;
+ u32 queue_pfn;
+ u32 reserved_3[3];
+ u32 queue_notify;
+ u32 reserved_4[3];
+ u32 interrupt_state;
+ u32 interrupt_ack;
+ u32 reserved_5[2];
+ u32 status;
+} __attribute__((packed));
+
+struct virtio_mmio {
+ u32 addr;
+ void *dev;
+ struct kvm *kvm;...
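The register block above follows the legacy (version 1) virtio-mmio layout:
the guest selects a queue, reads the device's maximum ring size, then
programs the size, alignment, and page frame number. A guest-side sketch of
that handshake, reusing the struct virtio_mmio_hdr from the patch above (the
function and its parameters are illustrative, not from the patch):

    /* Assumes the struct virtio_mmio_hdr definition shown above,
     * mapped at the device's MMIO base address. */
    static void setup_queue(volatile struct virtio_mmio_hdr *hdr,
                            u32 index, u32 pfn, u32 align)
    {
        u32 max;

        hdr->queue_sel = index;      /* pick the virtqueue to configure */
        max = hdr->queue_num_max;    /* device-reported ring size limit */
        if (max == 0)
            return;                  /* queue not implemented */
        hdr->queue_num = max;        /* ring size the guest will use */
        hdr->queue_align = align;    /* used-ring alignment, e.g. 4096 */
        hdr->queue_pfn = pfn;        /* guest page frame of the vring */
    }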
2018 Jun 11
1
[PATCH v6] virtio_blk: add DISCARD and WRITE ZEROES commands support
...'t see this in the patch...
> > Yeah, doing nothing with DISCARD/WRITE ZEROES means no need to OR BLK_T_OUT
> again.
> > >
> > > > @@ -225,6 +260,7 @@ static blk_status_t virtio_queue_rq(struct
> > > blk_mq_hw_ctx *hctx,
> > > > int qid = hctx->queue_num;
> > > > int err;
> > > > bool notify = false;
> > > > + bool unmap = false;
> > > > u32 type;
> > > >
> > > > BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);
> > > > @@ -237,6 +273,13 @@ static...
2018 Jun 06
10
[PATCH v6] virtio_blk: add DISCARD and WRITE ZEROES commands support
...RQF_SPECIAL_PAYLOAD) {
+ kfree(page_address(req->special_vec.bv_page) +
+ req->special_vec.bv_offset);
+ }
+
switch (req_op(req)) {
case REQ_OP_SCSI_IN:
case REQ_OP_SCSI_OUT:
@@ -225,6 +260,7 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
int qid = hctx->queue_num;
int err;
bool notify = false;
+ bool unmap = false;
u32 type;
BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);
@@ -237,6 +273,13 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
case REQ_OP_FLUSH:
type = VIRTIO_BLK_T_FLUSH;
break;
+ case REQ_OP_DISCA...
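The switch is cut off right at the new cases, but the command types it needs,
VIRTIO_BLK_T_DISCARD and VIRTIO_BLK_T_WRITE_ZEROES, are defined by the virtio
spec, and the unmap flag added above is only meaningful for write zeroes. A
sketch of how the new cases plausibly read (the exact body is an assumption,
not quoted from the patch):

    /* Sketch only: plausible continuation of the truncated switch. */
    case REQ_OP_DISCARD:
        type = VIRTIO_BLK_T_DISCARD;
        break;
    case REQ_OP_WRITE_ZEROES:
        type = VIRTIO_BLK_T_WRITE_ZEROES;
        /* REQ_NOUNMAP asks for allocated (non-deallocating) zeroing,
         * so only request unmap when it is absent. */
        unmap = !(req->cmd_flags & REQ_NOUNMAP);
        break;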
2012 Aug 28
11
[PATCH 0/5] Multiqueue virtio-scsi
Hi all,
this series adds multiqueue support to the virtio-scsi driver, based
on Jason Wang's work on virtio-net. It uses a simple queue steering
algorithm that expects one queue per CPU. LUNs in the same target always
use the same queue (so that commands are not reordered); queue switching
occurs when the request being queued is the only one for the target.
Also based on Jason's
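A compact sketch of the steering rule described above: every LUN of a target
shares the target's queue, and the queue may only change when the incoming
request is the only one in flight for that target (struct and field names
are illustrative, not the driver's exact code):

    struct target_state {
        spinlock_t lock;
        struct virtio_scsi_vq *req_vq;  /* queue all LUNs of this target use */
        unsigned int reqs;              /* requests in flight for the target */
    };

    static struct virtio_scsi_vq *
    pick_vq(struct virtio_scsi *vscsi, struct target_state *tgt)
    {
        struct virtio_scsi_vq *vq;
        unsigned long flags;

        spin_lock_irqsave(&tgt->lock, flags);
        /* Switching queues is safe only when nothing else is in flight
         * for this target; that keeps its commands ordered. */
        if (tgt->reqs == 0)
            tgt->req_vq = &vscsi->req_vqs[smp_processor_id() %
                                          vscsi->num_queues];
        tgt->reqs++;
        vq = tgt->req_vq;
        spin_unlock_irqrestore(&tgt->lock, flags);
        return vq;
    }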