Displaying 20 results from an estimated 224 matches for "lio".
2015 Sep 18
3
[RFC PATCH 0/2] virtio nvme
...> > > > At first glance it seems like the virtio_nvme guest driver is just
> > > > > another block driver like virtio_blk, so I'm not clear why a
> > > > > virtio-nvme device makes sense.
> > > >
> > > > I think the future "LIO NVMe target" only speaks NVMe protocol.
> > > >
> > > > Nick(CCed), could you correct me if I'm wrong?
> > > >
> > > > For SCSI stack, we have:
> > > > virtio-scsi(guest)
> > > > tcm_vhost(or vhost_scsi, host)
> &...
2015 Sep 17
2
[RFC PATCH 0/2] virtio nvme
...>
> <SNIP>
>
> > >
> > > At first glance it seems like the virtio_nvme guest driver is just
> > > another block driver like virtio_blk, so I'm not clear why a
> > > virtio-nvme device makes sense.
> >
> > I think the future "LIO NVMe target" only speaks NVMe protocol.
> >
> > Nick(CCed), could you correct me if I'm wrong?
> >
> > For SCSI stack, we have:
> > virtio-scsi(guest)
> > tcm_vhost(or vhost_scsi, host)
> > LIO-scsi-target
> >
> > For NVMe stack, we&...
2015 Sep 10
5
[RFC PATCH 0/2] virtio nvme
...r comments.
> >
> > Play it in Qemu with:
> > -drive file=disk.img,format=raw,if=none,id=D22 \
> > -device virtio-nvme-pci,drive=D22,serial=1234,num_queues=4
> >
> > The goal is to have a full NVMe stack from VM guest(virtio-nvme)
> > to host(vhost_nvme) to LIO NVMe-over-fabrics target.
>
> Why is a virtio-nvme guest device needed? I guess there must either
> be NVMe-only features that you want to pass through, or you think the
> performance will be significantly better than virtio-blk/virtio-scsi?
It simply passes through NVMe commands.
R...
2013 Jan 11
4
count combined occurrences of categories
..., 'fr')
au1 <- c('deb', 'art', 'deb', 'seb', 'deb', 'deb', 'mar', 'mar', 'joy', 'joy')
au2 <- c('art', 'deb', 'mar', 'deb', 'joy', 'mar', 'art', 'lio', 'nem', 'mar')
au3 <- c('mar', 'lio', 'joy', 'mar', 'art', 'lio', 'nem', 'art', 'deb', 'tat')
tutu <- data.frame(cbind(nam, au1, au2, au3))
thanks,
David
2010 Jan 28
31
[PATCH 0 of 4] aio event fd support to blktap2
Get blktap2 running on pvops.
This mainly adds eventfd support to the userland code. Based on some
prior cleanup to tapdisk-queue and the server object. We had most of
that in XenServer for a while, so I kept it stacked.
1. Clean up IPC and AIO init in tapdisk-server.
[I think tapdisk-ipc in blktap2 is basically obsolete.
Pending a later patch to remove it?]
2. Split tapdisk-queue into
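The series above wires Linux AIO completions to an eventfd so the tapdisk server can pick them up from its ordinary event loop. As a rough, self-contained illustration of that pattern (a minimal sketch using libaio and eventfd directly; it is not the tapdisk-queue code, and the file argument and buffer size are arbitrary):

/* Minimal sketch: tie a Linux AIO read to an eventfd so a poll()-style
 * event loop can observe completion as plain fd readiness.
 * Build with: cc -o aio_eventfd aio_eventfd.c -laio */
#include <libaio.h>
#include <sys/eventfd.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    int efd = eventfd(0, 0);               /* completions are counted here */
    io_context_t ctx = 0;
    if (fd < 0 || efd < 0 || io_setup(8, &ctx) < 0) {
        fprintf(stderr, "setup failed\n");
        return 1;
    }

    static char buf[4096];
    struct iocb cb, *cbs[1] = { &cb };
    io_prep_pread(&cb, fd, buf, sizeof(buf), 0);
    io_set_eventfd(&cb, efd);              /* tag the request with the eventfd */
    if (io_submit(ctx, 1, cbs) != 1) {
        fprintf(stderr, "io_submit failed\n");
        return 1;
    }

    /* In a real server efd would be registered with the main poll loop;
     * here we simply block on it until the kernel counts a completion. */
    uint64_t ncomplete;
    if (read(efd, &ncomplete, sizeof(ncomplete)) == sizeof(ncomplete)) {
        struct io_event ev;
        io_getevents(ctx, 1, 1, &ev, NULL);
        printf("%llu completion(s), res=%lld bytes\n",
               (unsigned long long)ncomplete, (long long)ev.res);
    }

    io_destroy(ctx);
    close(efd);
    close(fd);
    return 0;
}

The appeal of the eventfd is that AIO completion becomes visible as readiness on an ordinary file descriptor, so it can sit in the same poll()/select() loop as the IPC sockets instead of needing a separate signal- or thread-based notification path.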
2015 Sep 18
0
[RFC PATCH 0/2] virtio nvme
...2015-09-17 at 17:55 -0700, Nicholas A. Bellinger wrote:
> > On Thu, 2015-09-17 at 16:31 -0700, Ming Lin wrote:
> > > On Wed, 2015-09-16 at 23:10 -0700, Nicholas A. Bellinger wrote:
> > > > Hi Ming & Co,
<SNIP>
> > > > > I think the future "LIO NVMe target" only speaks NVMe protocol.
> > > > >
> > > > > Nick(CCed), could you correct me if I'm wrong?
> > > > >
> > > > > For SCSI stack, we have:
> > > > > virtio-scsi(guest)
> > > > > tcm_vho...
2015 Sep 17
1
[RFC PATCH 0/2] virtio nvme
...omments.
>
> <SNIP>
>
>> >
>> > At first glance it seems like the virtio_nvme guest driver is just
>> > another block driver like virtio_blk, so I'm not clear why a
>> > virtio-nvme device makes sense.
>>
>> I think the future "LIO NVMe target" only speaks NVMe protocol.
>>
>> Nick(CCed), could you correct me if I'm wrong?
>>
>> For SCSI stack, we have:
>> virtio-scsi(guest)
>> tcm_vhost(or vhost_scsi, host)
>> LIO-scsi-target
>>
>> For NVMe stack, we'll have...
2015 Sep 23
3
[RFC PATCH 0/2] virtio nvme
...s A. Bellinger wrote:
> > > On Thu, 2015-09-17 at 16:31 -0700, Ming Lin wrote:
> > > > On Wed, 2015-09-16 at 23:10 -0700, Nicholas A. Bellinger wrote:
> > > > > Hi Ming & Co,
>
> <SNIP>
>
> > > > > > I think the future "LIO NVMe target" only speaks NVMe protocol.
> > > > > >
> > > > > > Nick(CCed), could you correct me if I'm wrong?
> > > > > >
> > > > > > For SCSI stack, we have:
> > > > > > virtio-scsi(guest)
> &g...
2018 Mar 08
0
fuse vs libgfapi LIO performances comparison: how to make tests?
Dear support, I need to export a gluster volume with LIO for a
virtualization system. At the moment I have a very basic test
configuration: 2x HP 380 G7 (2x Intel X5670 (six-core @ 2.93 GHz), 72 GB
RAM, HD RAID10 6x SAS 10k rpm, LAN Intel X540-T2 10GbE), directly
interconnected. The Gluster configuration is replica 2. The OS is Fedora 27.
For my tests I used dd a...
2015 Sep 18
0
[RFC PATCH 0/2] virtio nvme
...> > > >
> > > > At first glance it seems like the virtio_nvme guest driver is just
> > > > another block driver like virtio_blk, so I'm not clear why a
> > > > virtio-nvme device makes sense.
> > >
> > > I think the future "LIO NVMe target" only speaks NVMe protocol.
> > >
> > > Nick(CCed), could you correct me if I'm wrong?
> > >
> > > For SCSI stack, we have:
> > > virtio-scsi(guest)
> > > tcm_vhost(or vhost_scsi, host)
> > > LIO-scsi-target
>...
2020 Sep 22
1
[PATCH 4/8] vhost scsi: fix cmd completion race
..._work.
> If the last put happens a little later then we could race where
> vhost_scsi_complete_cmd_work does vhost_signal, the guest runs and sends
> more IO, and vhost_scsi_handle_vq runs but does not find any free cmds.
>
> This patch has us delay completing the cmd until the last lio core ref
> is dropped. We then know that once we signal to the guest that the cmd
> is completed, any new command it queues will find a free cmd.
It seems weird to me to see a reference to LIO in the description of a
vhost patch? Since this driver supports more backends than LIO, sho...
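For readers skimming the thread, the fix described above is an ordering rule: the completion must not be signalled to the guest while any reference to the command is still held, otherwise the guest can submit a new request before the slot is actually reusable. A minimal userspace sketch of that "complete only on the last put" idea, using hypothetical names (cmd_get/cmd_put/cmd_release) and C11 atomics standing in for the kernel's refcounting; it is not the vhost-scsi/LIO code itself:

/* Illustrative sketch only: the completion signal fires from the *last*
 * put, so by the time the submitter learns the command finished, its
 * slot really is free to reuse. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct cmd {
    atomic_int refs;     /* stands in for the command's last core reference */
    bool slot_free;      /* models the per-queue free command slot */
};

static void cmd_release(struct cmd *c)
{
    c->slot_free = true;                 /* slot is genuinely reusable now */
    printf("signal completion to submitter\n");
}

static void cmd_get(struct cmd *c)
{
    atomic_fetch_add(&c->refs, 1);
}

static void cmd_put(struct cmd *c)
{
    /* Complete the command only when the final reference goes away. */
    if (atomic_fetch_sub(&c->refs, 1) == 1)
        cmd_release(c);
}

int main(void)
{
    struct cmd c;
    c.slot_free = false;
    atomic_init(&c.refs, 1);             /* reference held by the submit path */

    cmd_get(&c);                         /* backend / target core still busy */
    cmd_put(&c);                         /* completion work runs: no signal yet */
    cmd_put(&c);                         /* last put: only now is completion signalled */
    return 0;
}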
2015 Sep 17
0
[RFC PATCH 0/2] virtio nvme
...; As title said, request for your comments.
<SNIP>
> >
> > At first glance it seems like the virtio_nvme guest driver is just
> > another block driver like virtio_blk, so I'm not clear why a
> > virtio-nvme device makes sense.
>
> I think the future "LIO NVMe target" only speaks NVMe protocol.
>
> Nick(CCed), could you correct me if I'm wrong?
>
> For SCSI stack, we have:
> virtio-scsi(guest)
> tcm_vhost(or vhost_scsi, host)
> LIO-scsi-target
>
> For NVMe stack, we'll have similar components:
> virtio-n...
2020 Jun 25
3
R 4.0.0 rebuild status
...nge the R-rpm-macros
package.
Probably it should be enough to change the /usr/lib/rpm/R-deps.R script to add
Requires: R(ABI)=4.0
I suggest continuing this as is and then implementing that change in Rawhide and
bringing it back to Fedora 32 as new updates are issued.
What do you think?
--
José Abílio
2020 Jul 11
2
R 4.0.0 rebuild status
...sary to be pushed to stable.
Does anyone have any objection to this being pushed to stable?
If I do not hear anything by then, I will push the update Monday night (Western
Europe time zone, and yes, it is summer and the days are longer) in
time for the daily batch update.
Regards,
--
José Abílio
2020 Aug 11
2
R2spec woes
On Tue, 11 Aug 2020 at 02:35, Elliott Sales de Andrade
<quantum.analyst at gmail.com> wrote:
>
> Hi José,
>
> On Mon, 10 Aug 2020 at 11:20, José Abílio Matos <jamatos at fc.up.pt> wrote:
> >
> > I tried R2spec to create the spec files necessary to have Rcpparmadillo.
> >
> > I noticed t...