Displaying 20 results from an estimated 112 matches for "inflights".
2013 Mar 11
4
[PATCH] tcm_vhost: Wait for pending requests in vhost_scsi_flush()
This patch makes vhost_scsi_flush() wait for all the pending requests
issued before the flush operation to be finished.
Changes in v4:
- Introduce vhost_scsi_inflight
- Drop array to track flush
Changes in v3:
- Rebase
- Drop 'tcm_vhost: Wait for pending requests in
vhost_scsi_clear_endpoint()' in this series, we already did that in
'tcm_vhost: Use vq->private_data to indicate
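The description above is truncated, but the mechanism it introduces (vhost_scsi_inflight) boils down to reference-counting outstanding requests and letting the flush path wait for that count to drain. A minimal sketch of the pattern, with hypothetical helper names rather than the exact code from the patch:

#include <linux/kref.h>
#include <linux/completion.h>

struct vhost_scsi_inflight {
        struct completion comp; /* fires when the last request is done */
        struct kref kref;       /* one reference per outstanding request */
};

static void vhost_scsi_init_inflight(struct vhost_scsi_inflight *inflight)
{
        kref_init(&inflight->kref); /* initial reference held by the vq */
        init_completion(&inflight->comp);
}

static void vhost_scsi_done_inflight(struct kref *kref)
{
        struct vhost_scsi_inflight *inflight =
                container_of(kref, struct vhost_scsi_inflight, kref);

        complete(&inflight->comp);
}

/* A request takes a reference when it is issued ... */
static void vhost_scsi_get_inflight(struct vhost_scsi_inflight *inflight)
{
        kref_get(&inflight->kref);
}

/* ... and drops it when it completes. */
static void vhost_scsi_put_inflight(struct vhost_scsi_inflight *inflight)
{
        kref_put(&inflight->kref, vhost_scsi_done_inflight);
}

/* Flush: drop the initial reference and wait for the stragglers. */
static void vhost_scsi_wait_inflight(struct vhost_scsi_inflight *inflight)
{
        kref_put(&inflight->kref, vhost_scsi_done_inflight);
        wait_for_completion(&inflight->comp);
}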
2014 Aug 25
2
help? looking for limits on in-flight write operations for virtio-blk
Hi,
I'm trying to figure out what controls the number of in-flight virtio
block operations when running linux in qemu on top of a linux host.
The problem is that we're trying to run as many VMs as possible, using
ceph/rbd for the rootfs. We've tripped over the fact that the memory
consumption of qemu can spike noticeably when doing I/O (something as
simple as "dd" from
2013 Apr 27
2
[PATCH v6 0/2] tcm_vhost flush
Changes in v6:
- Allow device specific fields per vq
- Track cmd per vq
- Do not track evt
- Switch to static array for inflight allocation, completely getting rid of the
pain of handling inflight allocation failure.
Asias He (2):
vhost: Allow device specific fields per vq
tcm_vhost: Wait for pending requests in vhost_scsi_flush()
drivers/vhost/net.c | 60 +++++++++++--------
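The "static array" item above refers to embedding two inflight trackers in each virtqueue and flipping between them on flush, so the flush path never has to allocate anything and therefore cannot fail. Building on the sketch shown earlier, the per-vq state might look roughly like this (field and function names are illustrative, not taken verbatim from the series):

/* Device-specific per-vq state, enabled by the
 * "vhost: Allow device specific fields per vq" patch. */
struct vhost_scsi_virtqueue {
        struct vhost_virtqueue vq;
        struct vhost_scsi_inflight inflights[2]; /* static, never allocated */
        int inflight_idx;                        /* slot taking new requests */
};

/* On flush (callers serialize with the vq mutex): re-arm the idle slot,
 * point new requests at it, and return the old slot so the caller can
 * wait for it to drain. */
static struct vhost_scsi_inflight *
vhost_scsi_flip_inflight(struct vhost_scsi_virtqueue *svq)
{
        struct vhost_scsi_inflight *old = &svq->inflights[svq->inflight_idx];

        svq->inflight_idx ^= 1;
        vhost_scsi_init_inflight(&svq->inflights[svq->inflight_idx]);
        return old;
}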
2014 Aug 26
1
help? looking for limits on in-flight write operations for virtio-blk
On 08/26/2014 04:34 AM, Stefan Hajnoczi wrote:
> On Mon, Aug 25, 2014 at 8:42 PM, Chris Friesen
> <chris.friesen at windriver.com> wrote:
>> I'm trying to figure out if there are any limits on how high the inflight
>> numbers can go, but I'm not having much luck.
>>
>> I was hopeful when I saw qemu calling virtio_add_queue() with a queue size,
>>
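The virtio_add_queue() call referenced above is where the per-queue descriptor count is fixed, and that count is what caps how many requests a guest can have in flight on that queue. A rough sketch of such a registration in QEMU device code (the size of 128 and the handler name are assumptions for illustration, not details quoted from this thread):

#include "hw/virtio/virtio.h"

static void my_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
{
    /* pop requests off the virtqueue and submit them to the backend */
}

static void my_blk_create_queue(VirtIODevice *vdev)
{
    /* A 128-entry virtqueue can have at most 128 descriptors outstanding,
     * so no more than 128 requests are in flight on this queue (fewer if
     * each request chains several descriptors). */
    virtio_add_queue(vdev, 128, my_blk_handle_output);
}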
2012 Oct 10
7
[PATCH 0 of 7] Miscellaneous updates
Clearing out my local queue of changes before applying others'.
2014 Aug 26
0
help? looking for limits on in-flight write operations for virtio-blk
On Mon, Aug 25, 2014 at 8:42 PM, Chris Friesen
<chris.friesen at windriver.com> wrote:
> I'm trying to figure out what controls the number of in-flight virtio block
> operations when running linux in qemu on top of a linux host.
>
> The problem is that we're trying to run as many VMs as possible, using
> ceph/rbd for the rootfs. We've tripped over the fact that the
2020 Apr 08
1
CentOS 7 and USB 3.1
I am getting these errors on my machine for an external USB connection.
Apr 8 13:23:55 devgeis kernel: xhci_hcd 0000:b3:00.0: ERROR Transfer event
for unknown stream ring slot 4 ep 3
Apr 8 13:23:55 devgeis kernel: xhci_hcd 0000:b3:00.0: @00000020155b24c0
00000000 00000000 1a001000 04048001
Apr 8 13:23:55 devgeis kernel: sd 7:0:0:0: [sdd] tag#3
uas_eh_abort_handler 0 uas-tag 4 inflight: CMD OUT
2013 Feb 15
0
[PATCH 1/4] xen/arm: trap guest WFI
Trap guest WFI, block the guest VCPU unless it has pending interrupts.
Wake the guest vcpu when a new interrupt for it arrives.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
xen/arch/arm/domain_build.c | 2 +-
xen/arch/arm/traps.c | 6 ++++++
xen/arch/arm/vgic.c | 4 +++-
3 files changed, 10 insertions(+), 2 deletions(-)
diff --git
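In outline, the trap handler blocks the vcpu only when nothing is pending, and the interrupt-injection path wakes it again. A very rough sketch of that logic (the predicate and injection helper names here are illustrative, not the actual Xen functions from the patch):

/* Called when a guest executes WFI and the hypervisor traps it. */
static void handle_guest_wfi(struct vcpu *v)
{
    if ( vcpu_has_pending_irq(v) )  /* illustrative predicate */
        return;                     /* let the guest keep running */

    vcpu_block(v);                  /* sleep until woken */
}

/* The vGIC injection path wakes the vcpu when a new IRQ targets it. */
static void inject_irq_to_vcpu(struct vcpu *v, unsigned int irq)
{
    queue_virtual_irq(v, irq);      /* illustrative: update the vGIC state */
    vcpu_unblock(v);
}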
2008 May 30
2
[PATCH 1/3] virtio: VIRTIO_F_NOTIFY_ON_EMPTY to force callback on empty
virtio allows drivers to suppress callbacks (ie. interrupts) for
efficiency (no locking, it's just an optimization).
There's a similar mechanism for the host to suppress notifications
coming from the guest: in that case, we ignore the suppression if the
ring is completely full.
It turns out that life is simpler if the host similarly ignores
callback suppression when the ring is
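The excerpt is cut off, but the point being made is symmetrical: just as a guest ignores the host's notification suppression when the ring is completely full, a host advertising VIRTIO_F_NOTIFY_ON_EMPTY ignores the guest's callback suppression when the ring becomes empty. A small sketch of that host-side decision (names invented for illustration, not real QEMU/vhost helpers):

#include <stdbool.h>

static bool should_notify_guest(bool guest_suppressed_callbacks,
                                bool notify_on_empty_negotiated,
                                unsigned int buffers_still_in_ring)
{
    /* With VIRTIO_F_NOTIFY_ON_EMPTY, suppression is ignored when the
     * ring has just gone empty. */
    if (notify_on_empty_negotiated && buffers_still_in_ring == 0)
        return true;

    return !guest_suppressed_callbacks;
}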
2006 Apr 25
3
Freebsd Stable 6.x ipsec slower than with 4.9
Hello List,
I have two dualcore Athlon 64 4800+ systems. Initially I was running 4.9
on both of them and was able to get 54mbits thru direct connected realtek
10/100 cards as measured by nttcp.
I put stable on one of the systems and now can only get 37mbits as measured
by nttcp when going thru an ipsec tunnel.
Eliminating the tunnel I get 94mbit/sec.
Ideas as to why this is happening?
Also
2013 Mar 22
4
[PATCH V2 0/3] tcm_vhost pending requests flush
Changes in v2:
- Increase/Decrease inflight requests in
vhost_scsi_{allocate,free}_cmd and tcm_vhost_{allocate,free}_evt
Asias He (3):
tcm_vhost: Wait for pending requests in vhost_scsi_flush()
tcm_vhost: Wait for pending requests in vhost_scsi_clear_endpoint()
tcm_vhost: Fix tv_cmd leak in vhost_scsi_handle_vq
drivers/vhost/tcm_vhost.c | 131
2010 Dec 09
2
servers blocked on ocfs2
Hi,
we have recently started to use ocfs2 on some RHEL 5.5 servers (ocfs2-1.4.7)
Some days ago, two servers sharing an ocfs2 filesystem, and running quite a
few virtual services, stalled, in what seems to be an ocfs2 issue. These are
the lines in their messages files:
=====node heraclito (0)========================================
Dec 4 09:15:06 heraclito kernel: o2net: connection to node parmenides
2013 May 06
2
[PATCH v2] xen/gic: EOI irqs on the right pcpu
We need to write the irq number to GICC_DIR on the physical cpu that
previously received the interrupt, but currently we are doing it on the
pcpu that received the maintenance interrupt. As a consequence if a
vcpu is migrated to a different pcpu, the irq is going to be EOI'ed on
the wrong pcpu.
This covers the case where dom0 vcpu0 is running on pcpu1 for example
(you can test this
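One way to read the fix being described: the deactivating GICC_DIR write is CPU-private, so if the interrupt was taken on a different pcpu than the one handling the maintenance interrupt, the write has to be shipped over to that pcpu. A rough sketch of the idea (write_gicc_dir() is a hypothetical helper, not the actual patch code):

static void deactivate_on_this_cpu(void *info)
{
    unsigned int irq = (unsigned int)(uintptr_t)info;

    write_gicc_dir(irq); /* hypothetical: write irq to this pcpu's GICC_DIR */
}

static void eoi_on_owner_pcpu(unsigned int irq, unsigned int owner_pcpu)
{
    if ( owner_pcpu == smp_processor_id() )
        deactivate_on_this_cpu((void *)(uintptr_t)irq);
    else
        on_selected_cpus(cpumask_of(owner_pcpu),
                         deactivate_on_this_cpu,
                         (void *)(uintptr_t)irq, 1 /* wait */);
}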
2013 May 02
5
[PATCH 0/3] vhost-scsi: file renames
This reorgs the files a bit, renaming tcm_vhost to
vhost_scsi as that's how userspace refers to it.
While at it, clean up some leftovers from when it was a
staging driver.
Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
Michael S. Tsirkin (3):
vhost: src file renames
tcm_vhost: header split up
vhost_scsi: module rename
drivers/vhost/Kconfig | 10 ++-