search for: datera

Displaying 8 results from an estimated 11 matches for "datera".

2015 Sep 23
3
[RFC PATCH 0/2] virtio nvme
On Fri, 2015-09-18 at 14:09 -0700, Nicholas A. Bellinger wrote: > On Fri, 2015-09-18 at 11:12 -0700, Ming Lin wrote: > > On Thu, 2015-09-17 at 17:55 -0700, Nicholas A. Bellinger wrote: > > > On Thu, 2015-09-17 at 16:31 -0700, Ming Lin wrote: > > > > On Wed, 2015-09-16 at 23:10 -0700, Nicholas A. Bellinger wrote: > > > > > Hi Ming & Co, > >
2015 Sep 27
0
[RFC PATCH 0/2] virtio nvme
...'---------------' > > Looks fine. Btw, after chatting with Dr. Hannes this week at SDC, here are his original rts-megasas -v6 patches from Feb 2013. Note they are standalone patches that require a sufficiently old LIO + QEMU to actually build + function. https://github.com/Datera/rts-megasas/blob/master/rts_megasas-qemu-v6.patch https://github.com/Datera/rts-megasas/blob/master/rts_megasas-fabric-v6.patch For grokking purposes, they demonstrate the principal design for a host kernel-level driver, along with the megasas firmware interface (MFI) specific emulation magic that...
2015 Dec 01
2
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
> What do you think about virtio-nvme+vhost-nvme? What would be the advantage over virtio-blk? Multiqueue is not supported by QEMU but it's already supported by Linux (commit 6a27b656fc). To me, the advantage of NVMe is that it provides more than decent performance on unmodified Windows guests, and thanks to your vendor extension can be used on Linux as well with speeds comparable to
2015 Dec 02
0
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
...er that wants to support Windows guests > (together with e.g. a fast SAS emulated controller to replace virtio-scsi, > and emulated igb or ixgbe to replace virtio-net). The vhost-nvme patches take their approach from rts-megasas, which could possibly serve as a fast emulated SAS controller. https://github.com/Datera/rts-megasas > > Which features are supported by NVMe and not virtio-blk? Rob (CCed), would you share whether Google uses any NVMe-specific features? Thanks.
2020 Jan 04
0
CentOS 7 as a Fibre Channel SAN Target
...ave some suggestions. I've googled this particular issue and haven't really found any good results on how to resolve it. I installed targetcli via yum on CentOS 7 and this is the version in the repo: # targetcli targetcli shell version 2.1.fb49 Copyright 2011-2013 by Datera, Inc and others. For help on commands, type 'help'. I downloaded the .zip of the targetcli-fb version and tried it from the scripts folder as well, with the same results. Python: # python --version Python 2.7.5 Libs: Installed Packages python-configshell...
2019 Jan 11
1
CentOS 7 as a Fibre Channel SAN Target
For quite some time I've been using FreeNAS to provide services as a NAS over ethernet and SAN over Fibre Channel to CentOS 7 servers each using their own export, not sharing the same one. It's time for me to replace my hardware and I have a new R720XD that I'd like to use in the same capacity but configure CentOS 7 as a Fibre Channel target rather than use FreeNAS any further. I'm doing
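
As a rough sketch of what the targetcli side of such an FC target setup might look like (not taken from the thread; it assumes a QLogic HBA whose ports have been switched to target mode, e.g. via the qla2xxx qlini_mode=disabled module option, and the backing device and WWNs below are placeholders):

  # targetcli
  /> /backstores/block create name=disk1 dev=/dev/sdX
  /> /qla2xxx create naa.5001438001234567
  /> /qla2xxx/naa.5001438001234567/luns create /backstores/block/disk1
  /> /qla2xxx/naa.5001438001234567/acls create naa.5001438009876543
  /> saveconfig

Here naa.5001438001234567 stands in for the local HBA port's WWN and naa.5001438009876543 for the initiator's WWN; the ACL controls which initiator is allowed to see the exported LUN.
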
2018 Jun 12
8
[PATCH 0/3] Use sbitmap instead of percpu_ida
Removing the percpu_ida code deletes over 400 lines. It's not as spectacular as deleting an entire architecture, but it's still a worthy reduction in lines of code. Untested due to lack of hardware and to not understanding how to set up a target platform. Changes from v1: - Fixed bugs pointed out by Jens in iscsit_wait_for_tag() - Abstracted out tag freeing as requested by Bart
2018 May 15
6
[PATCH 0/2] Use sbitmap instead of percpu_ida
From: Matthew Wilcox <mawilcox at microsoft.com> This is a pretty rough-and-ready conversion of the target drivers from using percpu_ida to sbitmap. It compiles; I don't have a target setup, so it's completely untested. I haven't tried to do anything particularly clever here, so it's possible that, for example, the wait queue in iscsi_target_util could be more clever, like
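
Not from the patches themselves, but as a minimal sketch of the sbitmap_queue calls that take over from percpu_ida in a conversion like this (the struct and helper names are made up for illustration):

  #include <linux/gfp.h>
  #include <linux/numa.h>
  #include <linux/sbitmap.h>

  /* Hypothetical per-session tag pool; stands in for the old percpu_ida. */
  struct sess_tag_pool {
          struct sbitmap_queue tags;
  };

  static int sess_tag_pool_init(struct sess_tag_pool *p, unsigned int depth)
  {
          /* shift = -1 lets sbitmap pick a sensible word layout */
          return sbitmap_queue_init_node(&p->tags, depth, -1, false,
                                         GFP_KERNEL, NUMA_NO_NODE);
  }

  static int sess_tag_get(struct sess_tag_pool *p, unsigned int *cpu)
  {
          /* returns a free tag, or -1 if the pool is currently exhausted */
          return sbitmap_queue_get(&p->tags, cpu);
  }

  static void sess_tag_put(struct sess_tag_pool *p, unsigned int tag,
                           unsigned int cpu)
  {
          sbitmap_queue_clear(&p->tags, tag, cpu);
  }

Unlike percpu_ida_alloc(), sbitmap_queue_get() never sleeps, which is presumably why the series needs an explicit wait such as the iscsit_wait_for_tag() mentioned above when the pool runs dry.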