search for: backstores

Displaying 19 results from an estimated 19 matches for "backstores".

2017 Nov 09
0
[Gluster-devel] Poor performance of block-store with RDMA
Hi Kalever! First of all, I really appreciate your test results for block-store (https://github.com/pkalever/iozone_results_gluster/tree/master/block-store) :-) My teammate and I tested block-store (glfs backstore with tcmu-runner), but we have run into a performance problem. We tested some cases with one server that has an RDMA volume and one client connected to the same RDMA network. two
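For reference, a minimal sketch of how an RDMA-transport Gluster volume of the kind being tested is typically created; the host and brick names below are placeholders, not taken from the thread:

    # Create and start a Gluster volume using the RDMA transport
    # (hypothetical host/brick paths; needs working RDMA hardware and drivers)
    gluster volume create rdmavol transport rdma server1:/bricks/brick1
    gluster volume start rdmavol
    gluster volume info rdmavol   # should report Transport-type: rdma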
2017 Jun 26
1
mirror block devices
Hi folks, I have to migrate a set of iscsi backstores to a new target over the network. To reduce downtime I would like to mirror the active volumes first, then stop the initiators, and finally do an incremental sync. The backstores are between 256 GByte and 1 TByte each; in total it is about 8 TByte. Of course I have found the --copy-devices patch...
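A hedged sketch of that workflow, assuming an rsync built with the out-of-tree --copy-devices patch the poster mentions; the device paths and hostname are placeholders:

    # Initial bulk copy while the initiators are still running
    rsync --copy-devices -v /dev/vg0/lun1 root@newtarget:/dev/vg0/lun1
    # ...stop the initiators, then run a final incremental pass
    # before switching over to the new target
    rsync --copy-devices --inplace -v /dev/vg0/lun1 root@newtarget:/dev/vg0/lun1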
2017 Sep 13
3
glusterfs expose iSCSI
...-shared-persistent-storage-in-docker-container/ but when I install tcmu-runner, it doesn't work. I set it up on CentOS7 and installed tcmu-runner by rpm. When I run targetcli, it does not show user:glfs and user:gcow:

/> ls
o- / ...................................................... [...]
  o- backstores ........................................... [...]
  | o- block ............................... [Storage Objects: 0]
  | o- fileio .............................. [Storage Objects: 0]
  | o- pscsi ............................... [Storage Objects: 0]
  | o- ramdisk ............................. [Stora...
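For context, once tcmu-runner's glfs handler is loaded, targetcli gains a user:glfs backstore that is created roughly as follows; this is a sketch with hypothetical volume, host, and file names, and the exact cfgstring form may differ between versions:

    # Requires the tcmu-runner service running with its glfs handler installed
    /> /backstores/user:glfs create block0 2G vol1@gluster-host/block0
    /> ls /backstores/user:glfs

If user:glfs still does not appear, a likely first check is whether the tcmu-runner service is actually running and whether its glfs handler shared object was packaged in the rpm.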
2018 Oct 20
1
how to use LV(container) created by lxc-create?
hi everyone, how can I use an LV created by lxc in a libvirt guest domain? You know, you lxc-create with an LV as the backstore for a container (ubuntu in my case) and now you want to give it to libvirt. That, I thought, would be a very common case, but I failed to find a good howto on how one does: lxc-create(lvm) => let libvirt take control over it. Or is that not how you would do it at all? many thanks, L.
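One way to do the handover is to attach the LV to an existing guest as a raw block device with virsh; a minimal sketch, where the guest name and the VG/LV path from lxc-create -B lvm are placeholders:

    # Persistently attach the container's LV to the guest as a virtio disk
    virsh attach-disk ubuntu-guest /dev/lxc/ubuntu-container vda --targetbus virtio --config

Note that lxc-create puts a bare root filesystem on the LV, not a partitioned bootable disk, so the guest will typically need direct kernel boot or the filesystem repackaged before it can boot from that disk.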
2016 Feb 05
2
safest way to mount iscsi loopback..
.. what is? fellow centosians. How do you mount your loopback targets? I'm trying an lvm backstore; I was hoping to do it by uuid, but it's exposed more than once, and how the kernel would decide which device to use I don't know. thanks
2016 Feb 11
2
safest way to mount iscsi loopback..
On 2/11/2016 5:14 AM, lejeczek wrote:
> nobody does use iscsi loopback over an lvm?

I'm not sure what 'iscsi loopback' even means. iSCSI is used to mount a virtual block device hosted on another system (initiator mode) or to share a virtual block device (target mode), while loopback is used to mount a local file as a device, such as an .iso image of an optical disc. can you
2017 Sep 13
0
glusterfs expose iSCSI
...tainer environment or is it just in a non-container CentOS environment?

> I set it up on CentOS7 and installed tcmu-runner by rpm. When I run targetcli,
> it does not show user:glfs and user:gcow
>
> /> ls
> o- / ...................................................... [...]
>   o- backstores ........................................... [...]
>   | o- block ............................... [Storage Objects: 0]
>   | o- fileio .............................. [Storage Objects: 0]
>   | o- pscsi ............................... [Storage Objects: 0]
>   | o- ramdisk ...................
2016 Jul 27
2
ext4 error when testing virtio-scsi & vhost-scsi
...er.
>>
>> Also tried creating one file in /tmp, used as fileio; that also reproduces it,
>> so no real device is involved.
>>
>> like:
>> cd /tmp
>> dd if=/dev/zero of=test bs=1M count=1024; sync;
>> targetcli
>> (targetcli) /> cd backstores/fileio
>> (targetcli) /> create name=file_backend file_or_dev=/tmp/test size=1G
>> (targetcli) /> cd /vhost
>> (targetcli) /> create wwn=naa.60014052cc816bf4
>> (targetcli) /> cd naa.60014052cc816bf4/tpgt1/luns
>> (targetcli) /> create /backstores/fileio...
2016 Feb 11
0
safest way to mount iscsi loopback..
Does nobody use iscsi loopback over an lvm?

On 05/02/16 17:36, lejeczek wrote:
> .. what is?
> fellow centosians.
>
> how do you mount your loopback targets?
> I'm trying an lvm backstore; I was hoping to do it by
> uuid, but it's exposed more than once and how the kernel
> would decide which device to use I don't know.
>
> thanks
2016 Feb 15
0
[Bulk] Re: safest way to mount iscsi loopback..
...) or to share a virtual block device
> (target mode), while loopback is used to mount a local
> file as a device, such as an .iso image of an optical disc.
>
> can you explain in a little more detail what you're trying
> to do?

Whatever devices you have in your backstores (the LIO implementation is the default one, I believe), and then naturally in your targets, etc. - on the same local system you can loop them back: LIO presents them again to the kernel as local scsi devices. I'm thinking maybe multipath should be involved/deployed here? I can mount such a loo...
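For what it's worth, a hedged sketch of both halves of that idea: exposing an existing backstore through LIO's loopback fabric, then letting multipath collapse the duplicate appearances (the backstore name and WWN are placeholders):

    # Present an existing backstore to the local kernel as a SCSI device
    /> /loopback create
    /> /loopback/naa.5001405xxxxxxxxx/luns create /backstores/block/mylv

    # Then coalesce the multiple device nodes into a single multipath map
    mpathconf --enable    # CentOS/RHEL helper that writes a default /etc/multipath.conf
    multipath -ll         # list the resulting maps and their paths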
2014 May 01
0
Any information on targetcli (fcoe) for centos 6? Close to end of my rope
.../
# targetcli
targetcli shell version 2.0rc1.fb16
Copyright 2011 by RisingTide Systems LLC and others.
For help on commands, type 'help'.

/> ls
o- / ................................................................................. [...]
  o- backstores ...................................................................... [...]
  | o- block ........................................................... [2 Storage Objects]
  | | o- vol_grp1-logical_vol1 ......
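As a reference point, a sketch of how a block Storage Object like the one shown above gets created and wired to an FCoE target in this targetcli generation; the WWN and LV path are placeholders, and option spelling varies a little between targetcli releases:

    /> /backstores/block create name=vol_grp1-logical_vol1 dev=/dev/vol_grp1/logical_vol1
    /> /tcm_fc create 20:00:00:11:22:33:44:55       # WWN of the local FCoE interface
    /> /tcm_fc/20:00:00:11:22:33:44:55/luns create /backstores/block/vol_grp1-logical_vol1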
2023 Oct 03
0
maptools, rgdal, rgeos and rgrass7 retiring Monday, October 16
The legacy R spatial infrastructure packages maptools, rgdal and rgeos will be archived by CRAN on Monday, October 16, 2023; rgrass7 has already been replaced by rgrass and will be archived with the retiring packages. The choice of date matches the previously announced archiving during October 2023, and the specific date matches the release schedule of Bioconductor 3.18 (some Bioconductor
2012 Sep 07
12
[PATCH 0/5] vhost-scsi: Add support for host virtualized target
...nab/qemu-kvm.git vhost-scsi-for-1.3 Note the code is cut against yesterday's QEMU head, and despite the name of the tree it is based upon mainline qemu.org git code; it has thus far been running overnight with > 100K IOPs small-block 4k workloads using v3.6-rc2+ based target code with RAMDISK_DR backstores. Other than some minor fuzz between jumping from QEMU 1.2.0 -> 1.2.50, this series is functionally identical to what's been posted for vhost-scsi RFC-v3 to qemu-devel. Please consider applying these patches for an initial vhost-scsi merge into QEMU 1.3.0-rc code, or let us know what else y...
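For context, a hedged sketch of the command-line shape used to attach such a vhost-scsi target to a QEMU guest; the WWPN is a placeholder and the exact device properties may differ between QEMU versions of that era:

    # Assumes a vhost-scsi target already configured in the host's LIO target subsystem
    qemu-system-x86_64 -machine accel=kvm -m 2048 \
        -device vhost-scsi-pci,wwpn=naa.600140554cf3a18e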
2012 Feb 03
6
Spectacularly disappointing disk throughput
Greetings! I've got a FreeBSD-based (FreeNAS) appliance running as an HVM DomU. Dom0 is Debian Squeeze on an AMD990 chipset system with IOMMU enabled. The DomU sees six physical drives: one of them is a USB stick that I've passed through in its entirety as a block device. The other five are SATA drives attached to a controller that I've handed to the DomU with PCI
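The setup described translates roughly into the following Xen commands; the guest name, device node, and PCI address are placeholders:

    # Hypothetical: pass the USB stick through as a block device,
    # and hand the SATA controller to the domU via PCI passthrough
    xm block-attach freenas phy:/dev/sdX xvda w
    xm pci-attach freenas 0000:03:00.0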
2012 Aug 10
1
virtio-scsi <-> vhost multi lun/adapter performance results with 3.6-rc0
...hz w/ 32x threads + 32 GB of DDR3 1600MHz memory - host kernel:
*) Using 3.6-rc0 from target-pending/for-linus
*) qemu vhost-scsi from nab's qemu-kvm.git/vhost-scsi on k.o
*) Set QEMU vCPU process affinity to dedicated cpus based on 'info cpus' (as recommended by Stefan)

target backstores + vhost configuration from rtsadmin/targetcli shell:

/> ls backstores/rd_mcp/
o- rd_mcp ................................................. [32 Storage Objects]
  o- ramdisk0 .............................................. [ramdisk activated]
  o- ramdisk1 ............................................
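A hedged sketch of how those ramdisk backstores would have been created in the rtsadmin/targetcli shell of that period; the sizes are placeholders and the exact rd_mcp create syntax varied across releases:

    /> cd /backstores/rd_mcp
    /backstores/rd_mcp> create name=ramdisk0 size=64MB
    /backstores/rd_mcp> create name=ramdisk1 size=64MB
    # ...repeated up to ramdisk31 for the 32 Storage Objects listed above
    /backstores/rd_mcp> ls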