On 13/06/2014 11:26, Vincent JARDIN wrote:
>> Markus especially referred to parts *outside* QEMU: the server, the
>> uio driver, etc. These out-of-tree, non-packaged parts of ivshmem
>> are one of the reasons why Red Hat has disabled ivshmem in RHEL7.
>
> You made the right choices, these out-of-tree packages are not required.
> You can use QEMU's ivshmem without any of the out-of-tree packages. The
> out-of-tree packages are just some examples of using ivshmem.

Fine; however, Red Hat would also need a way to test the ivshmem code,
with proper quality assurance (which also benefits upstream, of
course). With ivshmem this is not possible without the out-of-tree
packages.

Disabling all the unwanted devices is a lot of work, and a thankless
one too (you only get complaints, in fact!). But we prefer to ship only
what we know we can test, support and improve. We do not want
customers' bug reports to languish because they are using code that
cannot really be fixed.

Note that we do take community contributions into account when choosing
which new code can be supported. For example, most of the work on VMDK
images was done by Fam when he was a student, libiscsi is mostly the
work of Peter Lieven, and so on; both of them are supported in RHEL.
These people did/do a great job, and we were happy to embrace those
features!

Now, putting my QEMU hat back on...

>> He also listed many others. Basically, for parts of QEMU that are not
>> of high quality, we either fix them (this is for example what we did
>> for qcow2) or disable them. Not just ivshmem suffered this fate; so
>> did, for example, many network cards, sound cards and SCSI storage
>> adapters.
>
> I and David (cc) are working on making it better based on the issues
> that are found.
>
>> Now, vhost-user is in the process of being merged for 2.1. Compared
>> to the DPDK solution:
>
> Now, you cannot compare vhost-user to DPDK/ivshmem; both should exist
> because they have different scope and use cases. It is like comparing
> two different models of IPC:
> - vhost-user -> networking use case specific

Not necessarily. First and foremost, vhost-user defines an API for
communication between QEMU and the host, including:

* file descriptor passing for the shared memory file
* mapping offsets in shared memory to physical memory addresses in the
  guests
* passing dirty memory information back and forth, so that migration is
  not prevented
* sending interrupts to a device
* setting up ring buffers in the shared memory

None of these is virtio-specific, except the last (and even then, you
could repurpose the messages to pass the address of the whole shared
memory area instead of the vrings only). A sketch of what these
messages look like on the wire is at the end of this mail.

Yes, the only front-end for vhost-user right now is a network device.
But it is possible to connect vhost-scsi to vhost-user as well, it is
possible to develop a vhost-serial as well, and it is possible to use
only the RPC and develop arbitrary shared-memory-based tools on top of
this API. It's just that no one has done it yet.

Also, vhost-user is documented! See here:
https://lists.gnu.org/archive/html/qemu-devel/2014-03/msg00581.html

The only part of ivshmem that vhost doesn't include is the n-way
inter-guest doorbell. This is the part that requires a server and a uio
driver. vhost only supports host->guest and guest->host doorbells.

>> * it doesn't require hugetlbfs (which only enabled shared memory by
>> chance in older QEMU releases; that was never documented)
>
> ivshmem does not require hugetlbfs. It is optional.
>
>> * it doesn't require the kernel driver from the DPDK sample
>
> ivshmem does not require the DPDK kernel driver. See memnic's PMD:
> http://dpdk.org/browse/memnic/tree/pmd/pmd_memnic.c

You're right, I was confusing memnic with the vhost example in DPDK.
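To make the "not networking-specific" point concrete: every vhost-user
message is a fixed header plus a payload, sent over a Unix domain
socket, with file descriptors (for example the shared memory file)
attached as SCM_RIGHTS ancillary data. A rough sketch in C follows; the
field names are from memory, so treat them as assumptions and check the
specification linked above.

    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Header and payload of one vhost-user message (a sketch, not the
     * authoritative layout). */
    typedef struct VhostUserMsg {
        uint32_t request;          /* e.g. a "set memory table" request */
        uint32_t flags;            /* version and reply bits */
        uint32_t size;             /* size of the payload that follows */
        union {
            uint64_t u64;
            struct {
                uint64_t guest_phys_addr; /* where the shared memory */
                uint64_t memory_size;     /* lands in guest physical */
                uint64_t userspace_addr;  /* address space           */
            } region;
        } payload;
    } VhostUserMsg;

    /* Receive one message; a passed file descriptor, if any, arrives
     * as SCM_RIGHTS ancillary data alongside the header. */
    static int vhost_user_recv(int sock, VhostUserMsg *msg, int *fd)
    {
        char control[CMSG_SPACE(sizeof(int))];
        struct iovec iov = { .iov_base = msg, .iov_len = sizeof(*msg) };
        struct msghdr mh = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = control, .msg_controllen = sizeof(control),
        };
        struct cmsghdr *cmsg;

        if (recvmsg(sock, &mh, 0) < 0) {
            return -1;
        }
        *fd = -1;
        for (cmsg = CMSG_FIRSTHDR(&mh); cmsg;
             cmsg = CMSG_NXTHDR(&mh, cmsg)) {
            if (cmsg->cmsg_level == SOL_SOCKET &&
                cmsg->cmsg_type == SCM_RIGHTS) {
                memcpy(fd, CMSG_DATA(cmsg), sizeof(int));
            }
        }
        return 0;
    }

Nothing in there knows about networking; a vhost-scsi back-end or a
generic shared-memory tool would speak exactly the same messages.

Paolo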
> Fine; however, Red Hat would also need a way to test the ivshmem code,
> with proper quality assurance (which also benefits upstream, of
> course). With ivshmem this is not possible without the out-of-tree
> packages.

You did not reply to my question: how can we get the list of things
that are/will be disabled by Red Hat?

About Red Hat's QA, I do not care. About QEMU's QA, I do care ;) I
guess we can combine both. What about something like:
  tests/virtio-net-test.c   # qtest_add_func() is a nop
but for ivshmem:
  tests/ivshmem-test.c
?

Would it have any value? If not, what do you use at Red Hat to test
QEMU?

>> Now, you cannot compare vhost-user to DPDK/ivshmem; both should exist
>> because they have different scope and use cases. It is like comparing
>> two different models of IPC:

Let me repeat this use case, which you had removed, because vhost-user
does not solve it yet:

>> - ivshmem -> framework to be generic to have shared memory for many
>>   use cases (HPC, in-memory-database, a network too like memnic).

>> - vhost-user -> networking use case specific
>
> Not necessarily. First and foremost, vhost-user defines an API for
> communication between QEMU and the host, including:
> * file descriptor passing for the shared memory file
> * mapping offsets in shared memory to physical memory addresses in the
>   guests
> * passing dirty memory information back and forth, so that migration
>   is not prevented
> * sending interrupts to a device
> * setting up ring buffers in the shared memory

Yes, I do agree that it is promising. And of course some tests are
here, for some of the bullets you are listing (not all yet):
https://lists.gnu.org/archive/html/qemu-devel/2014-03/msg00584.html

> Also, vhost-user is documented! See here:
> https://lists.gnu.org/archive/html/qemu-devel/2014-03/msg00581.html

As I told you, we'll send a contribution with ivshmem's documentation.

> The only part of ivshmem that vhost doesn't include is the n-way
> inter-guest doorbell. This is the part that requires a server and a
> uio driver. vhost only supports host->guest and guest->host doorbells.

Agreed, both will need it: vhost and ivshmem each require a doorbell
for VM2VM, but then we'll have a security issue to be managed by QEMU
for both vhost and ivshmem. I'll be pleased to contribute to it for
ivshmem in another thread than this one.

>> ivshmem does not require the DPDK kernel driver. See memnic's PMD:
>> http://dpdk.org/browse/memnic/tree/pmd/pmd_memnic.c
>
> You're right, I was confusing memnic with the vhost example in DPDK.

Definitely, it proves a lack of documentation. You're welcome. Olivier
did explain it:

>> ivshmem does not require hugetlbfs. It is optional.
>>
>>> * it doesn't require ivshmem (it does require shared memory, which
>>> will also be added to 2.1)
>
> Right, hugetlbfs is not required. POSIX shared memory or tmpfs can be
> used instead. For instance, to use /dev/shm/foobar:
>
>   qemu-system-x86_64 -enable-kvm -cpu host [...] \
>     -device ivshmem,size=16,shm=foobar
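To show how little is needed on the host side of that example, here is
a minimal sketch of a peer process; the object name "foobar" and the
16 MB size simply mirror the command line above.

    /* Map the same POSIX shared memory object that ivshmem exposes to
     * the guest as a PCI BAR. On Linux, compile with -lrt. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t size = 16 << 20;   /* 16 MB, matching size=16 above */
        int fd = shm_open("/foobar", O_CREAT | O_RDWR, 0600);

        if (fd < 0 || ftruncate(fd, size) < 0) {
            perror("shm_open/ftruncate");
            return EXIT_FAILURE;
        }

        char *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return EXIT_FAILURE;
        }

        /* Anything written here is visible to the guest through the
         * ivshmem BAR, and vice versa. */
        p[0] = 42;
        return EXIT_SUCCESS;
    }

No uio driver, no server, no DPDK kernel module involved.

Best regards,
Vincent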
On 13/06/2014 15:41, Vincent JARDIN wrote:
>> Fine; however, Red Hat would also need a way to test the ivshmem
>> code, with proper quality assurance (which also benefits upstream, of
>> course). With ivshmem this is not possible without the out-of-tree
>> packages.
>
> You did not reply to my question: how can we get the list of things
> that are/will be disabled by Red Hat?

I don't know exactly what the answer is, and this is probably not the
right list to discuss it. I guess there are partnership programs with
Red Hat whose details I don't know, but these are more for management
folks and not really for developers.

ivshmem in particular was disabled even in the RHEL7 beta, so you could
have found out about this in December and opened a bug about it in
Bugzilla.

> I guess we can combine both. What about something like:
>   tests/virtio-net-test.c   # qtest_add_func() is a nop
> but for ivshmem:
>   tests/ivshmem-test.c
> ?
>
> Would it have any value?

The first things to do are:

1) try to understand whether there is any value in a simplified shared
memory device with no interrupts (and thus no eventfd or uio
dependencies, not even optionally). You are not using interrupts
because DPDK only does polling and basically reserves a core for the
NIC code. If so, this would be a very simple device, just a hundred or
so lines of code. We could get this into upstream, and it would likely
be enabled in RHEL too.

2) if not, get the server and the uio driver merged into the QEMU tree,
and document the protocol in docs/specs/ivshmem_device_spec.txt. It
doesn't matter whether the code comes from the Nahanni repository or
from your own implementation. Also start fixing bugs such as the ones
that Markus reported (removing all exit() invocations).

Writing testcases using the qtest framework would also be useful (a
minimal skeleton is sketched at the end of this mail), but first of all
it is important to make ivshmem easier to use.

> If not, what do you use at Red Hat to test QEMU?

We do integration testing using autotest/virt-test (QEMU and KVM
developers use it for upstream too), and also some manual functional
tests. Contributing ivshmem tests to virt-test would also be helpful in
demonstrating your interest in maintaining ivshmem. The repository and
documentation are at https://github.com/autotest/virt-test/ (a bit
Fedora-centric).

> Let me repeat this use case, which you had removed, because vhost-user
> does not solve it yet:
>
>>> - ivshmem -> framework to be generic to have shared memory for many
>>>   use cases (HPC, in-memory-database, a network too like memnic).

Right, ivshmem is better for guest-to-guest. vhost-user is not
restricted to networking, but it is indeed more focused on
guest-to-host. ivshmem is usable for guest-to-host too, but I would
still prefer some "hybrid" that uses vhost-like messages to pass the
shared memory file descriptors to the external program.
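Regarding the qtest idea, a minimal tests/ivshmem-test.c could start as
small as the sketch below, modeled on the existing nop tests such as
virtio-net-test.c; it only checks that QEMU starts with the device
instantiated, so treat the details as assumptions to refine.

    #include <glib.h>
    #include "libqtest.h"

    /* Nothing to verify yet: starting QEMU with the device already
     * exercises its instantiation. */
    static void nop(void)
    {
    }

    int main(int argc, char **argv)
    {
        int ret;

        g_test_init(&argc, &argv, NULL);
        qtest_add_func("/ivshmem/nop", nop);

        qtest_start("-device ivshmem,size=1,shm=ivshmem-test");
        ret = g_test_run();
        qtest_end();

        return ret;
    }

From there one could map the BAR, poke the registers and check the
semantics of the doorbell, which is exactly the kind of coverage
ivshmem is missing today.

Paolo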