search for: inflight

Displaying 20 results from an estimated 112 matches for "inflight".

2013 Mar 11
4
[PATCH] tcm_vhost: Wait for pending requests in vhost_scsi_flush()
This patch makes vhost_scsi_flush() wait for all the pending requests issued before the flush operation to be finished. Changes in v4: - Introduce vhost_scsi_inflight - Drop array to track flush Changes in v3: - Rebase - Drop 'tcm_vhost: Wait for pending requests in vhost_scsi_clear_endpoint()' in this series, we already did that in 'tcm_vhost: Use vq->private_data to indicate if the endpoint is setup' Changes in v2: - Increase/Decrease...
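The scheme described above amounts to counting outstanding commands and having the flush path sleep until that count drains. A minimal, hypothetical userspace sketch of the idea in C follows; the names (struct inflight, inflight_get/put/flush) are illustrative rather than the driver's actual API, and the real patch additionally limits the wait to requests issued before the flush began.

#include <pthread.h>

/* Hypothetical stand-in for the per-device in-flight bookkeeping. */
struct inflight {
        pthread_mutex_t lock;
        pthread_cond_t  drained;
        unsigned int    count;     /* commands issued but not yet completed */
};

static struct inflight scsi_inflight = {
        .lock    = PTHREAD_MUTEX_INITIALIZER,
        .drained = PTHREAD_COND_INITIALIZER,
        .count   = 0,
};

/* Called when a command is set up (cf. "Increase ... in allocate_cmd"). */
static void inflight_get(struct inflight *inf)
{
        pthread_mutex_lock(&inf->lock);
        inf->count++;
        pthread_mutex_unlock(&inf->lock);
}

/* Called when a command completes and is freed (cf. "Decrease ... in free_cmd"). */
static void inflight_put(struct inflight *inf)
{
        pthread_mutex_lock(&inf->lock);
        if (--inf->count == 0)
                pthread_cond_broadcast(&inf->drained);
        pthread_mutex_unlock(&inf->lock);
}

/* Flush: block until every counted command has been put back. */
static void inflight_flush(struct inflight *inf)
{
        pthread_mutex_lock(&inf->lock);
        while (inf->count != 0)
                pthread_cond_wait(&inf->drained, &inf->lock);
        pthread_mutex_unlock(&inf->lock);
}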
2014 Aug 25
2
help? looking for limits on in-flight write operations for virtio-blk
...ro to a file can cause the memory consumption to go up by 200MB--with dozens of VMs this can add up enough to trigger the OOM killer. It looks like the rbd driver in qemu allocates a number of buffers for each request, one of which is the full amount of data to read/write. Monitoring the "inflight" numbers in the guest I've seen it go as high as 184. I'm trying to figure out if there are any limits on how high the inflight numbers can go, but I'm not having much luck. I was hopeful when I saw qemu calling virtio_add_queue() with a queue size, but the queue size was 128...
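On a Linux guest the per-disk "inflight" counters referred to here are normally read from sysfs: /sys/block/<dev>/inflight holds two integers, the reads and writes currently in flight. A small C sketch, assuming a virtio disk that shows up as vda (the device name is just an example):

#include <stdio.h>

/* Print the guest-side in-flight read/write counts for one block device. */
int main(void)
{
        unsigned long reads, writes;
        FILE *f = fopen("/sys/block/vda/inflight", "r");

        if (!f) {
                perror("fopen /sys/block/vda/inflight");
                return 1;
        }
        if (fscanf(f, "%lu %lu", &reads, &writes) == 2)
                printf("in-flight: %lu reads, %lu writes\n", reads, writes);
        fclose(f);
        return 0;
}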
2013 Apr 27
2
[PATCH v6 0/2] tcm_vhost flush
Changes in v6: - Allow device specific fields per vq - Track cmd per vq - Do not track evt - Switch to static array for inflight allocation, completely get rid of the pain to handle inflight allocation failure. Asias He (2): vhost: Allow device specific fields per vq tcm_vhost: Wait for pending requests in vhost_scsi_flush() drivers/vhost/net.c | 60 +++++++++++-------- drivers/vhost/tcm_vhost.c | 145 +++++++...
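The "static array" item describes keeping a fixed pair of in-flight trackers per virtqueue and flipping between them, so the flush path never has to allocate memory and therefore cannot fail. A hypothetical sketch of the flip, loosely modeled on that changelog entry (struct and field names are illustrative, and locking plus the actual wait are elided):

/* Two statically embedded in-flight slots per queue; nothing is allocated on flush. */
struct inflight_slot {
        unsigned int count;               /* requests charged to this generation */
};

struct queue_state {
        struct inflight_slot slots[2];    /* static array: no allocation-failure path */
        int active;                       /* slot that new requests charge to */
};

/* New requests are charged to the currently active slot and remember it,
 * so completion can decrement the same slot later. */
static struct inflight_slot *charge_request(struct queue_state *q)
{
        struct inflight_slot *s = &q->slots[q->active];
        s->count++;
        return s;
}

/* Flush: flip to the other slot, then have the caller wait for the old
 * slot's count to reach zero (the wait itself is not shown here). */
static struct inflight_slot *start_flush(struct queue_state *q)
{
        struct inflight_slot *old = &q->slots[q->active];
        q->active ^= 1;
        q->slots[q->active].count = 0;
        return old;
}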
2014 Aug 26
1
help? looking for limits on in-flight write operations for virtio-blk
On 08/26/2014 04:34 AM, Stefan Hajnoczi wrote: > On Mon, Aug 25, 2014 at 8:42 PM, Chris Friesen > <chris.friesen at windriver.com> wrote: >> I'm trying to figure out if there are any limits on how high the inflight >> numbers can go, but I'm not having much luck. >> >> I was hopeful when I saw qemu calling virtio_add_queue() with a queue size, >> but the queue size was 128 which didn't match the inflight numbers I was >> seeing, and after changing the queue size down to 1...
2012 Oct 10
7
[PATCH 0 of 7] Miscellaneous updates
Clearing out my local queue of changes before applying others'.
2014 Aug 26
0
help? looking for limits on in-flight write operations for virtio-blk
...e memory consumption to go up > by 200MB--with dozens of VMs this can add up enough to trigger the OOM > killer. > > It looks like the rbd driver in qemu allocates a number of buffers for each > request, one of which is the full amount of data to read/write. Monitoring > the "inflight" numbers in the guest I've seen it go as high as 184. > > I'm trying to figure out if there are any limits on how high the inflight > numbers can go, but I'm not having much luck. > > I was hopeful when I saw qemu calling virtio_add_queue() with a queue size, > bu...
2020 Apr 08
1
CentOS 7 and USB 3.1
...23:55 devgeis kernel: xhci_hcd 0000:b3:00.0: ERROR Transfer event for unknown stream ring slot 4 ep 3 Apr 8 13:23:55 devgeis kernel: xhci_hcd 0000:b3:00.0: @00000020155b24c0 00000000 00000000 1a001000 04048001 Apr 8 13:23:55 devgeis kernel: sd 7:0:0:0: [sdd] tag#3 uas_eh_abort_handler 0 uas-tag 4 inflight: CMD OUT Apr 8 13:23:55 devgeis kernel: sd 7:0:0:0: [sdd] tag#3 CDB: Write(10) 2a 00 35 b8 db ff 00 04 00 00 Apr 8 13:23:55 devgeis kernel: xhci_hcd 0000:b3:00.0: ERROR Transfer event for unknown stream ring slot 4 ep 3 Apr 8 13:23:55 devgeis kernel: xhci_hcd 0000:b3:00.0: @00000020155b24e0 0000...
2013 Feb 15
0
[PATCH 1/4] xen/arm: trap guest WFI
...regs.h> #include <asm/cpregs.h> @@ -781,6 +782,11 @@ asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs) case HSR_EC_DATA_ABORT_GUEST: do_trap_data_abort_guest(regs, hsr.dabt); break; + case HSR_EC_WFI_WFE: + if ( list_empty(&current->arch.vgic.inflight_irqs) ) + do_sched_op_compat(SCHEDOP_block, 0); + regs->pc += hsr.len ? 4 : 2; + break; default: printk("Hypervisor Trap. HSR=0x%x EC=0x%x IL=%x Syndrome=%"PRIx32"\n", hsr.bits, hsr.ec, hsr.len, hsr.iss); diff --git a/xen/arch/arm...
2008 May 30
2
[PATCH 1/3] virtio: VIRTIO_F_NOTIFY_ON_EMPTY to force callback on empty
virtio allows drivers to suppress callbacks (ie. interrupts) for efficiency (no locking, it's just an optimization). There's a similar mechanism for the host to suppress notifications coming from the guest: in that case, we ignore the suppression if the ring is completely full. It turns out that life is simpler if the host similarly ignores callback suppression when the ring is
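In ring terms: the guest requests interrupt suppression by setting a flag in the avail ring, and the host honours it except in the cases this mail describes. A hedged sketch of the host-side decision in C; the flag value matches the legacy virtio ring header, but the helper itself is hypothetical:

#include <stdbool.h>
#include <stdint.h>

#define VRING_AVAIL_F_NO_INTERRUPT 1   /* guest: "don't bother interrupting me" */

/* Should the host inject a callback (interrupt) after consuming buffers?
 * notify_on_empty is whether VIRTIO_F_NOTIFY_ON_EMPTY (feature bit 24) was
 * negotiated. */
static bool should_notify_guest(uint16_t avail_flags, bool ring_now_empty,
                                bool notify_on_empty)
{
        /* If the ring was just emptied and NOTIFY_ON_EMPTY was negotiated,
         * ignore the guest's suppression request and notify anyway. */
        if (ring_now_empty && notify_on_empty)
                return true;

        /* Otherwise honour the suppression flag the guest set in avail->flags. */
        return !(avail_flags & VRING_AVAIL_F_NO_INTERRUPT);
}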
2006 Apr 25
3
Freebsd Stable 6.x ipsec slower than with 4.9
Hello List, I have two dualcore Athlon 64 4800+ systems. Initially I was running 4.9 on both of them and was able to get 54mbits thru direct connected realtek 10/100 cards as measured by nttcp. I put stable on one of the systems and now can only get 37mbits as measured by nttcp when going thru an ipsec tunnel. Eliminating the tunnel I get 94mbit/sec. Ideas as to why this is happening? Also
2013 Mar 22
4
[PATCH V2 0/3] tcm_vhost pending requests flush
Changes in v2: - Increase/Decrease inflight requests in vhost_scsi_{allocate,free}_cmd and tcm_vhost_{allocate,free}_evt Asias He (3): tcm_vhost: Wait for pending requests in vhost_scsi_flush() tcm_vhost: Wait for pending requests in vhost_scsi_clear_endpoint() tcm_vhost: Fix tv_cmd leak in vhost_scsi_handle_vq drivers/vhost/tcm_v...
2010 Dec 09
2
servers blocked on ocfs2
...15:11 parmenides kernel: lockres: M000000000000000000000b6f931666, owner=1, state=0 Dec 4 09:15:11 parmenides kernel: last used: 0, refcnt: 4, on purge list: no Dec 4 09:15:11 parmenides kernel: on dirty list: no, on reco list: no, migrating pending: no Dec 4 09:15:11 parmenides kernel: inflight locks: 0, asts reserved: 0 Dec 4 09:15:11 parmenides kernel: refmap nodes: [ 0 ], inflight=0 Dec 4 09:15:11 parmenides kernel: granted queue: Dec 4 09:15:11 parmenides kernel: type=5, conv=-1, node=1, cookie=1:6, ref=2, ast=(empty=y,pend=n), bast=(empty=y,pend=n), pending=(conv=n,lock=...
2013 May 06
2
[PATCH v2] xen/gic: EOI irqs on the right pcpu
...>status &= ~IRQ_INPROGRESS; - GICC[GICC_DIR] = virq; + /* Assume only one pcpu needs to EOI the irq */ + cpu = cpumask_first(&p->eoimask); + cpumask_clear(&p->eoimask); + eoi = 1; } list_del_init(&p->inflight); spin_unlock_irq(&v->arch.vgic.lock); + if ( eoi ) { + /* this is not racy because we can't receive another irq of the + * same type until we EOI it. */ + if ( cpu == smp_processor_id() ) + gic_irq_eoi((void*)virq)...
2013 May 02
5
[PATCH 0/3] vhost-scsi: file renames
This reorgs the files a bit, renaming tcm_vhost to vhost_scsi as that's how userspace refers to it. While at it, cleanup some leftovers from when it was a staging driver. Signed-off-by: Michael S. Tsirkin <mst at redhat.com> Michael S. Tsirkin (3): vhost: src file renames tcm_vhost: header split up vhost_scsi: module rename drivers/vhost/Kconfig | 10 ++-