Displaying 20 results from an estimated 30000 matches similar to: "Does libvirt (will) support qemu's userspace vhost scsi and blk"
2018 Oct 30
0
Fw: Re: [SPDK] VM boot failed sometimes if using vhost-user-blk with spdk
Forwarded to the CentOS mailing list.
-----Original Messages-----
From: wuzhouhui <wuzhouhui14 at mails.ucas.ac.cn>
Sent Time: 2018-10-30 14:06:00 (Tuesday)
To: "storage performance development kit" <spdk at lists.01.org>
Cc: centos at centos.org, qemu-discuss at nongnu.org
Subject: Re: [SPDK] VM boot failed sometimes if using vhost-user-blk with spdk
I enable debug of
2019 Oct 09
0
Re: [libvirt] Add support for vhost-user-scsi-pci/vhost-user-blk-pci
Sorry for the late reply, and thanks Jano for pointing out elsewhere
that this didn't receive a response.
On 8/12/19 5:56 AM, Li Feng wrote:
> Hi Guys,
>
> I want to add vhost-user-scsi-pci/vhost-user-blk-pci support
> to libvirt.
>
> The usage in qemu is like this:
>
> Vhost-SCSI
> -chardev socket,id=char0,path=/var/tmp/vhost.0
> -device
2019 Oct 14
0
Re: [libvirt] Add support for vhost-user-scsi-pci/vhost-user-blk-pci
On 10/14/19 3:12 AM, Li Feng wrote:
> Hi Cole & Michal,
>
> I'm sorry for my late response; I just ended my journey today.
> Thanks for your response; your suggestions are very helpful to me.
>
> I have added Michal to this mail; Michal helped me review my initial patchset.
> (https://www.spinics.net/linux/fedora/libvir/msg191339.html)
>
Whoops, I missed that posting, I
2019 Oct 15
1
Re: [libvirt] Add support for vhost-user-scsi-pci/vhost-user-blk-pci
Cole Robinson <crobinso@redhat.com> wrote on Tue, 15 Oct 2019 at 1:48 AM:
>
> On 10/14/19 3:12 AM, Li Feng wrote:
> > Hi Cole & Michal,
> >
> > I'm sorry for my late response; I just ended my journey today.
> > Thanks for your response; your suggestions are very helpful to me.
> >
> > I have added Michal to this mail; Michal helped me review my initial patchset.
2019 Oct 14
2
Re: [libvirt] Add support for vhost-user-scsi-pci/vhost-user-blk-pci
Hi Cole & Michal,
I'm sorry for my late response; I just ended my journey today.
Thanks for your response; your suggestions are very helpful to me.
I have added Michal to this mail; Michal helped me review my initial patchset.
(https://www.spinics.net/linux/fedora/libvir/msg191339.html)
The main concern about this feature is the XML design.
My original XML design exposes too many qemu-specific details.
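For illustration, a disk-style mapping that hides the chardev/device split behind
libvirt's existing disk model could look roughly like the sketch below; the element
and attribute names (type='vhostuser', the unix source, mode='client') are only an
assumed shape for discussion, not a confirmed libvirt schema:

  <disk type='vhostuser' device='disk'>
    <driver name='virtio'/>
    <source type='unix' path='/var/tmp/vhost.1' mode='client'/>
    <target dev='vda' bus='virtio'/>
  </disk>

With a shape like this only the socket path is qemu-facing; the -chardev/-device
pairing stays an implementation detail of the driver.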
2019 Aug 12
2
Add support for vhost-user-scsi-pci/vhost-user-blk-pci
Hi Guys,
I want to add vhost-user-scsi-pci/vhost-user-blk-pci support
to libvirt.
The usage in qemu is like this:
Vhost-SCSI
-chardev socket,id=char0,path=/var/tmp/vhost.0
-device vhost-user-scsi-pci,id=scsi0,chardev=char0
Vhost-BLK
-chardev socket,id=char1,path=/var/tmp/vhost.1
-device vhost-user-blk-pci,id=blk0,chardev=char1
What type should I add for libvirt?
Type1:
<hostdev
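Rounding out the command line quoted above: a vhost-user device also needs the
guest RAM to live in memory that the backend process can map, so a shareable
memory backend must be added. A minimal sketch for the vhost-user-blk case,
with illustrative ids and sizes and the same socket path as above:

  qemu-system-x86_64 -machine q35,accel=kvm -smp 2 -m 4G \
      -object memory-backend-memfd,id=mem0,size=4G,share=on \
      -numa node,memdev=mem0 \
      -chardev socket,id=char1,path=/var/tmp/vhost.1 \
      -device vhost-user-blk-pci,id=blk0,chardev=char1

The backend listening on /var/tmp/vhost.1 (an SPDK vhost target, for example)
has to be up before the guest is started.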
2012 Oct 10
0
[PATCH] vhost-blk: Add vhost-blk support v3
vhost-blk is an in-kernel virtio-blk device accelerator.
Due to the lack of a proper in-kernel AIO interface, this version converts
the guest's I/O requests to bios and uses submit_bio() to submit I/O directly.
So this version only supports raw block devices as the guest's disk image,
e.g. /dev/sda, /dev/ram0. We can add file-based image support to
vhost-blk once we have an in-kernel AIO interface. There are
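As a quick way to get such a raw backing device for testing, a throw-away ram
disk can be created on the host with the brd module (the size is given in KiB
and is purely illustrative; how the device is then wired up to vhost-blk depends
on the matching qemu-side patches, which are not shown here):

  # host: create a 1 GiB /dev/ram0
  sudo modprobe brd rd_nr=1 rd_size=1048576
  # for comparison, the same device as an ordinary raw virtio-blk disk:
  #   -drive file=/dev/ram0,format=raw,if=virtio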
2012 Dec 02
3
[PATCH] vhost-blk: Add vhost-blk support v6
vhost-blk is an in-kernel virtio-blk device accelerator.
Due to the lack of a proper in-kernel AIO interface, this version converts
the guest's I/O requests to bios and uses submit_bio() to submit I/O directly.
So this version only supports raw block devices as the guest's disk image,
e.g. /dev/sda, /dev/ram0. We can add file-based image support to
vhost-blk once we have an in-kernel AIO interface. There are
2014 Jul 01
0
[PATCH v3 0/2] block: virtio-blk: support multi vq per virtio-blk
On 2014-06-30 19:36, Ming Lei wrote:
> Hi Jens and Rusty,
>
> On Thu, Jun 26, 2014 at 8:04 PM, Ming Lei <ming.lei at canonical.com> wrote:
>> On Thu, Jun 26, 2014 at 5:41 PM, Ming Lei <ming.lei at canonical.com> wrote:
>>> Hi,
>>>
>>> These patches try to support multiple virtual queues (multi-vq) in one
>>> virtio-blk device, and map
2018 Feb 26
4
How to update modules in iniramfs fastly
> -----Original Messages-----
> From: "Steven Tardy" <sjt5atra at gmail.com>
> Sent Time: 2018-02-26 10:48:48 (Monday)
> To: "CentOS mailing list" <centos at centos.org>
> Cc:
> Subject: Re: [CentOS] How to update modules in iniramfs fastly
>
> On Sun, Feb 25, 2018 at 8:29 PM wuzhouhui <wuzhouhui14 at mails.ucas.ac.cn>
> wrote:
2014 Jul 01
2
[PATCH v3 0/2] block: virtio-blk: support multi vq per virtio-blk
Hi Jens and Rusty,
On Thu, Jun 26, 2014 at 8:04 PM, Ming Lei <ming.lei at canonical.com> wrote:
> On Thu, Jun 26, 2014 at 5:41 PM, Ming Lei <ming.lei at canonical.com> wrote:
>> Hi,
>>
>> These patches try to support multiple virtual queues (multi-vq) in one
>> virtio-blk device, and map each virtual queue (vq) to a blk-mq
>> hardware queue.
>>
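From the guest-facing side the queue count is just a per-device property, and
each virtio queue then appears as one blk-mq hardware queue inside the guest.
A rough sketch with an illustrative queue count (the num-queues property needs
a qemu new enough to expose it):

  qemu-system-x86_64 ... \
      -drive file=disk.img,format=raw,if=none,id=d0 \
      -device virtio-blk-pci,drive=d0,num-queues=4

  # inside the guest: one directory per blk-mq hardware queue
  ls /sys/block/vda/mq/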
2017 Oct 18
2
Null deference panic in CentOS-6.5
Fine, it seems that upgrading the kernel is the only effective solution.
> On 18 Oct 2017, at 10:00 PM, Stephen John Smoogen <smooge at gmail.com> wrote:
>
> On 18 October 2017 at 04:50, wuzhouhui <wuzhouhui14 at mails.ucas.ac.cn> wrote:
>> I googled this issue and found that many people have encountered it, but most of
>> them just said "the newer kernel doesn't
2019 Apr 28
2
Who is responsible to load NIC driver when boot up
> -----Original Messages-----
> From: "Steven Tardy" <sjt5atra at gmail.com>
> Sent Time: 2019-04-28 13:02:18 (Sunday)
> To: "CentOS mailing list" <centos at centos.org>
> Cc:
> Subject: Re: [CentOS] Who is responsible to load NIC driver when boot up
>
> On Sat, Apr 27, 2019 at 11:44 PM wuzhouhui <wuzhouhui14 at mails.ucas.ac.cn>
>
2012 Nov 19
1
[PATCH] vhost-blk: Add vhost-blk support v5
vhost-blk is an in-kernel virtio-blk device accelerator.
Due to the lack of a proper in-kernel AIO interface, this version converts
the guest's I/O requests to bios and uses submit_bio() to submit I/O directly.
So this version only supports raw block devices as the guest's disk image,
e.g. /dev/sda, /dev/ram0. We can add file-based image support to
vhost-blk once we have an in-kernel AIO interface. There are
2012 Oct 15
2
[PATCH 1/1] vhost-blk: Add vhost-blk support v4
vhost-blk is an in-kernel virtio-blk device accelerator.
Due to the lack of a proper in-kernel AIO interface, this version converts
the guest's I/O requests to bios and uses submit_bio() to submit I/O directly.
So this version only supports raw block devices as the guest's disk image,
e.g. /dev/sda, /dev/ram0. We can add file-based image support to
vhost-blk once we have an in-kernel AIO interface. There are