similar to: [RFC] Virtual Machine Device Queues (VMDq) support on KVM

Displaying 16 results from an estimated 10000 matches similar to: "[RFC] Virtual Machine Device Queues (VMDq) support on KVM"

2013 Jul 31
29
[PATCH 0/9] tools: remove or disable old/useless/unused/unmaintained stuff
Depends on "autoconf: regenerate configure scripts with 4.4 version". This series removes some of the really old deadwood from the tools build and makes some other things which are on their way out configurable at build time, with a default depending on how far down the slope I judge them to be. * nuke in-tree copy of libaio * nuke obsolete tools: xsview, miniterm, lomount & sv *
2015 Dec 14
1
[RFC PATCH 2/2] block: virtio-blk: support multi virt queues per virtio-blk device
On 18/06/2014 06:04, Ming Lei wrote: > For virtio-blk, I don't think it is always better to take more queues, and > we need to leverage below things in host side: > > - host storage top performance, generally it reaches that with more > than 1 job with libaio (suppose it is N, so basically we can use N > iothreads per device in qemu to try to get top performance) > > -
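The "N jobs with libaio" figure in this snippet is easy to probe from userspace. A minimal sketch, compiled with -laio, assuming a hypothetical device path and queue depth: submit several reads through one io_context at once and reap them together.

#define _GNU_SOURCE
#include <libaio.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define QD 4                  /* assumed queue depth ("N jobs") */
#define BS 4096

int main(void)
{
    io_context_t ctx = 0;
    struct iocb cbs[QD], *cbp[QD];
    struct io_event evs[QD];
    void *bufs[QD];
    int fd = open("/dev/vda", O_RDONLY | O_DIRECT);  /* hypothetical device */

    if (fd < 0 || io_setup(QD, &ctx) < 0)
        return 1;
    for (int i = 0; i < QD; i++) {
        if (posix_memalign(&bufs[i], 4096, BS))      /* O_DIRECT needs alignment */
            return 1;
        io_prep_pread(&cbs[i], fd, bufs[i], BS, (long long)i * BS);
        cbp[i] = &cbs[i];
    }
    if (io_submit(ctx, QD, cbp) != QD)               /* queue all reads at once */
        return 1;
    if (io_getevents(ctx, QD, QD, evs, NULL) != QD)  /* wait for all completions */
        return 1;
    printf("completed %d concurrent reads\n", QD);
    io_destroy(ctx);
    close(fd);
    return 0;
}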
2019 Jun 20
1
Re: [libnbd PATCH 3/8] pread: Reject server SR read response with no data chunks
On Mon, Jun 17, 2019 at 07:07:53PM -0500, Eric Blake wrote: > The NBD spec requires that a server doing structured reads must not > succeed unless it covers the entire buffer with reply chunks. In the > general case, this requires a lot of bookkeeping to check whether > offsets were non-overlapping and sufficient, and we'd rather defer > such checking to an optional callback
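The bookkeeping the post alludes to reduces to checking that the reply chunks tile the requested range exactly. A minimal sketch of that check, using an illustrative chunk struct rather than libnbd's internal types, and assuming the chunks arrive already sorted:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct chunk { uint64_t offset; uint32_t length; };

/* Returns true iff the chunks exactly tile [req_offset, req_offset+req_len).
 * A full checker would also handle out-of-order arrival, which is the
 * bookkeeping burden the post describes. */
static bool
chunks_cover_buffer(const struct chunk *c, size_t n,
                    uint64_t req_offset, uint32_t req_len)
{
    uint64_t next = req_offset;            /* next byte we expect covered */
    for (size_t i = 0; i < n; i++) {
        if (c[i].offset != next)           /* gap or overlap */
            return false;
        next = c[i].offset + c[i].length;
    }
    return next == req_offset + req_len;   /* whole buffer accounted for */
}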
2019 Jun 10
2
[nbdkit PATCH] crypto: Tweak handling of SEND_MORE
In the recent commit 3842a080 to add SEND_MORE support, I blindly implemented the tls code as: if (SEND_MORE) { cork; send } else { send; uncork } because it showed improvements for my test case of aio-parallel-load from libnbd. That test, however, sticks to 64k I/O requests. With further investigation, I've learned that even though gnutls corking works great for smaller
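The corking pattern spelled out above maps onto two real GnuTLS calls, gnutls_record_cork() and gnutls_record_uncork(). A sketch of the shape described; the wrapper function and the SEND_MORE-style "more" hint are assumptions, not nbdkit's actual send path:

#include <stdbool.h>
#include <sys/types.h>
#include <gnutls/gnutls.h>

static ssize_t
tls_send(gnutls_session_t session, const void *buf, size_t len, bool more)
{
    if (more)
        gnutls_record_cork(session);       /* batch this with what follows */
    ssize_t r = gnutls_record_send(session, buf, len);
    if (!more)
        gnutls_record_uncork(session, 0);  /* flush anything batched so far */
    return r;
}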
2010 Jan 28
31
[PATCH 0 of 4] aio event fd support to blktap2
Get blktap2 running on pvops. This mainly adds eventfd support to the userland code. Based on some prior cleanup to tapdisk-queue and the server object. We had most of that in XenServer for a while, so I kept it stacked. 1. Clean up IPC and AIO init in tapdisk-server. [I think tapdisk-ipc in blktap2 is basically obsolete. Pending a later patch to remove it?] 2. Split tapdisk-queue into
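The eventfd plumbing this series describes follows a standard libaio idiom: attach an eventfd to each iocb with io_set_eventfd() so a poll()-based server loop is woken on completion. A sketch under those assumptions; the function and buffer size are illustrative, not tapdisk code:

#define _GNU_SOURCE
#include <libaio.h>
#include <sys/eventfd.h>
#include <poll.h>
#include <stdint.h>
#include <unistd.h>

/* Submit one read whose completion signals an eventfd, then reap it
 * from a poll()-style loop. */
static int
wait_one_completion(io_context_t ctx, int fd, void *buf)
{
    int efd = eventfd(0, EFD_NONBLOCK);
    struct iocb cb, *cbp = &cb;
    struct io_event ev;
    uint64_t n;

    io_prep_pread(&cb, fd, buf, 4096, 0);
    io_set_eventfd(&cb, efd);                /* completion bumps efd */
    if (efd < 0 || io_submit(ctx, 1, &cbp) != 1)
        return -1;

    struct pollfd pfd = { .fd = efd, .events = POLLIN };
    poll(&pfd, 1, -1);                       /* the server loop sleeps here */
    read(efd, &n, sizeof n);                 /* n = completions now ready */
    return io_getevents(ctx, 1, 1, &ev, NULL);  /* reap without blocking */
}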
2006 Dec 01
1
[PATCH] Ensure blktap reports I/O errors back to guest
There are a number of flaws in the blktap userspace daemon when dealing with I/O errors. - The backends which use AIO check the io_events.res member to determine if an I/O error occurred. Which is good. But when calling the callback to signal completion of the I/O, they pass the io_events.res2 member. Now this seems fine at first glance[1]: "res is the usual result of an I/O
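The distinction the patch draws is easy to get wrong: res holds the byte count or a negative errno, and a short transfer must also be treated as a failure before success is reported to the guest. A sketch of the check being argued for; the function name is illustrative, not blktap's:

#include <libaio.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

static int
complete_request(const struct io_event *ev, size_t expected)
{
    long res = (long)ev->res;
    if (res < 0) {                         /* negative errno: real I/O error */
        fprintf(stderr, "aio failed: %s\n", strerror(-res));
        return -1;
    }
    if ((size_t)res != expected) {         /* short transfer: also an error */
        fprintf(stderr, "short I/O: %ld of %zu bytes\n", res, expected);
        return -1;
    }
    return 0;                              /* only now report success */
}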
2012 Jul 13
9
[PATCH RESEND 0/5] Add vhost-blk support
Hi folks, [I am resending to fix the broken thread in the previous one.] This patchset adds vhost-blk support. vhost-blk is an in-kernel virtio-blk device accelerator. Compared to the userspace virtio-blk implementation, vhost-blk gives about 5% to 15% performance improvement. Asias He (5): aio: Export symbols and struct kiocb_batch for in kernel aio usage eventfd: Export symbol
2019 Jun 29
19
[libnbd PATCH 0/6] new APIs: aio_in_flight, aio_FOO_notify
I still need to wire in the use of *_notify functions into nbdkit to prove whether it makes the code any faster or easier to maintain, but at least the added example shows one good use case for the new API. Eric Blake (6): api: Add nbd_aio_in_flight generator: Allow DEAD state actions to run generator: Allow Int64 in callbacks states: Prepare for aio notify callback api: Add new
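One use of the new nbd_aio_in_flight call is driving a simple event loop: poll until no commands remain outstanding. A sketch against the current libnbd signatures (which postdate this 2019 thread), compiled with -lnbd, with the URI and buffer size assumed:

#include <libnbd.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    struct nbd_handle *nbd = nbd_create();
    char buf[512];

    if (!nbd || nbd_connect_uri(nbd, "nbd://localhost") == -1) {
        fprintf(stderr, "%s\n", nbd_get_error());
        exit(EXIT_FAILURE);
    }
    if (nbd_aio_pread(nbd, buf, sizeof buf, 0,
                      NBD_NULL_COMPLETION, 0) == -1) {
        fprintf(stderr, "%s\n", nbd_get_error());
        exit(EXIT_FAILURE);
    }
    while (nbd_aio_in_flight(nbd) > 0)   /* commands still outstanding */
        nbd_poll(nbd, -1);               /* run one poll iteration */
    nbd_shutdown(nbd, 0);
    nbd_close(nbd);
    return 0;
}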
2012 Jul 12
6
[PATCH 0/5] Add vhost-blk support
Hi folks, This patchset adds vhost-blk support. vhost-blk is an in-kernel virtio-blk device accelerator. Compared to the userspace virtio-blk implementation, vhost-blk gives about 5% to 15% performance improvement. Asias He (5): aio: Export symbols and struct kiocb_batch for in kernel aio usage eventfd: Export symbol eventfd_file_create() vhost: Make vhost a separate module vhost-net: Use
2015 Apr 24
5
[virtio-dev] Zerocopy VM-to-VM networking using virtio-net
On Fri, Apr 24, 2015 at 9:12 AM, Luke Gorrie <luke at snabb.co> wrote: > - How fast would the new design likely be? This proposal eliminates two things in the path: 1. Compared to vhost_net, it bypasses the host tun driver and network stack, replacing it with direct vhost_net <-> vhost_net data transfer. At this level it's compared to vhost-user, but it's not programmable
2010 Sep 23
1
OpenVPN tunnel and one-way audio - Do I still need a SIP proxy? (bruce bruce)
> I don't think it's an endpoint issue. I think the SIP packet headers get > over-written by the tunnel (openvpn) protocol. I'd be rather astonished if OpenVPN itself were responsible for this. As far as I know, OpenVPN doesn't do higher-level-protocol rewriting of any sort. It just provides the "bit pipe" through the tunnel. I'd suggest several other
2010 Jul 02
1
VMDq SR-IOV
Hello, Is there a HowTo or tutorial about the configuration of VMDq or SR-IOV with Xen? Thanks in advance. Best Regards, -- Houssem MEDHIOUB Research Engineer TELECOM & Management SudParis
2009 May 26
1
VMDQ / netchannel2 support?
Hello, Did 3.4 come with VMDq, SR-IOV, etc. (netchannel2) support included? If so, is there any documentation available on how to make use of it? Also, these techniques seem like a good way to improve network performance on hardware that supports it, but for folks who want to use iptables and other network management utilities from dom0, will it even be possible anymore since the NICs seem to
2008 Oct 14
0
Intel VMDq support in Xen
Is Intel VMDq supported in Xen 3.3 or 3.4? And if not, when will it be?
2012 Oct 11
0
How to use the VMDq of Intel NIC?
Hi, all: Does Xen 4.1 & kernel 3.4 support Intel VMDq? How can I make our domU benefit from this? -- 高永超 (flex), Operations Engineer, Douban Inc.
2014 Feb 13
4
Slow Samba transfer
Hi, this is my first post here, please be lenient. My problem should be a FAQ and, in fact, I found a lot of references googling around, but nothing could really solve my problem, so here I am. I have a Samba server: a very basic wheezy amd64 installation on a small VIRTUAL server (Xen). The only fancy thing is direct access to a couple of RAID1 (mirror) arrays where data is stored. I normally access