search for: backstor

Displaying 19 results from an estimated 19 matches for "backstor".

2017 Nov 09
0
[Gluster-devel] Poor performance of block-store with RDMA
Hi Kalever! First of all, I really appreciate your test results for block-store (https://github.com/pkalever/iozone_results_gluster/tree/master/block-store) :-) My teammate and I tested block-store (glfs backstore with tcmu-runner), but we have run into a performance problem. We tested some cases with one server that has an RDMA volume and one client connected to the same RDMA network. The two machines have the same environment, listed below. - Distro : CentOS 6.9 - Kernel : 4.12.9 - GlusterFS : 3.10.5 - tcmu-runner...
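For reference, the setup described above can be sketched roughly as follows. This is a hedged sketch, not the poster's actual commands: the volume name, hostname, brick path, and size are hypothetical, and the cfgstring assumes the tcmu-runner glfs handler's volume@host/file convention.

```shell
# Create a GlusterFS volume with RDMA transport (hypothetical host/brick names)
gluster volume create rdmavol transport rdma server1:/bricks/b1
gluster volume start rdmavol

# Expose a file on that volume as a tcmu-runner-backed LIO backstore;
# the glfs handler's cfgstring format is volume@hostname/filename
targetcli /backstores/user:glfs create name=block0 size=10G \
    cfgstring=rdmavol@server1/block0
```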
2017 Jun 26
1
mirror block devices
Hi folks, I have to migrate a set of iSCSI backstores to a new target via the network. To reduce downtime I would like to mirror the active volumes first, next stop the initiators, and then do a final incremental sync. The backstores each have a size between 256 GByte and 1 TByte; in toto it's about 8 TByte. Of course I have found the --copy-devices pat...
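Assuming the truncated reference above is to rsync's --copy-devices patch (shipped in the rsync-patches tree, not in stock rsync), the two-pass migration could be sketched like this; the device paths and hostname are hypothetical:

```shell
# Pass 1: mirror the live backstore while the initiators are still running
# (requires an rsync built with the --copy-devices patch on both ends)
rsync --copy-devices --inplace /dev/vg0/lun1 root@newtarget:/dev/vg0/lun1

# Stop the initiators, then run the same command again for the final
# incremental pass; --inplace lets rsync's delta algorithm rewrite
# only the blocks that changed since pass 1
rsync --copy-devices --inplace /dev/vg0/lun1 root@newtarget:/dev/vg0/lun1
```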
2017 Sep 13
3
glusterfs expose iSCSI
...-shared-persistent-storage-in-docker-container/ but when I install tcmu-runner it doesn't work. I set up on CentOS 7 and installed tcmu-runner by rpm. When I run targetcli, it doesn't show user:glfs and user:gcow:

/> ls
o- / .......................................... [...]
  o- backstores ............................... [...]
  | o- block .................. [Storage Objects: 0]
  | o- fileio ................. [Storage Objects: 0]
  | o- pscsi .................. [Storage Objects: 0]
  | o- ramdisk ................ [Sto...
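One likely cause, offered as a guess: targetcli only lists user:* backstores when the matching tcmu-runner handler is loaded, so a stopped daemon or missing handler plugin would produce exactly this tree. A few hedged checks, assuming the CentOS 7 rpm layout:

```shell
# Is the tcmu-runner daemon actually running?
systemctl status tcmu-runner

# Were the handler plugins installed? The rpm usually places them
# under /usr/lib64/tcmu-runner/ (look for handler_glfs.so, handler_qcow.so)
ls /usr/lib64/tcmu-runner/

# Start the daemon, then re-run targetcli so it re-reads the handlers
systemctl start tcmu-runner && targetcli ls /backstores
```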
2018 Oct 20
1
how to use LV(container) created by lxc-create?
hi everyone, how can I use an LV created by lxc in a libvirt guest domain? You know, you lxc-create with an LV as the backstore for a container (ubuntu in my case), and now you want to give it to libvirt. That, I thought, would be a very common case, but I failed to find a good howto on how one does: lxc-create(lvm) => let libvirt take control over it. Or is that not how you would do it at all? many thanks, L.
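A minimal sketch of that hand-over, with hypothetical VG, LV, and guest names; virsh attach-disk simply passes the LV to the guest as a raw block device (whether the container filesystem inside it is directly bootable by the guest is a separate question):

```shell
# Create the container rootfs on an LVM backstore
lxc-create -n ubu -t ubuntu -B lvm --vgname vg0 --lvname ubu-root --fssize 10G

# Later, hand the same LV to an existing libvirt guest as a block disk
virsh attach-disk guest1 /dev/vg0/ubu-root vdb --targetbus virtio --persistent
```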
2016 Feb 05
2
safest way to mount iscsi loopback..
.. what is? fellow centosians. How do you mount your loopback targets? I'm trying an LVM backstore; I was hoping to do it by UUID, but it's exposed more than once, and I don't know how the kernel would decide which device to use. thanks
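When the same backstore surfaces as more than one SCSI device, one hedged approach (echoed later in this thread) is to mount via a stable by-id path, or to let multipathd collapse the duplicate paths into a single mapper device; the mount point and wwid below are placeholders:

```shell
# Inspect the stable names; duplicate paths to one LUN share a wwn/scsi id
ls -l /dev/disk/by-id/

# Or let multipathd merge the paths and mount the resulting device
multipath -ll
mount /dev/mapper/<wwid> /mnt/loopback
```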
2016 Feb 11
2
safest way to mount iscsi loopback..
On 2/11/2016 5:14 AM, lejeczek wrote: > nobody does use iscsi loopback over an lvm? I'm not sure what 'iscsi loopback' even means. iSCSI is used to mount a virtual block device hosted on another system (initiator mode) or to share a virtual block device (target mode), while loopback is used to mount a local file as a device, such as an .iso image of an optical disc. can you
2017 Sep 13
0
glusterfs expose iSCSI
...tainer environment or is it just in a non-container centos environment ?

> I set up on CentOS 7 and installed tcmu-runner by rpm. When I run targetcli,
> it doesn't show user:glfs and user:gcow
>
> /> ls
> o- / .......................................... [...]
>   o- backstores ............................... [...]
>   | o- block .................. [Storage Objects: 0]
>   | o- fileio ................. [Storage Objects: 0]
>   | o- pscsi .................. [Storage Objects: 0]
>   | o- ramdisk .................
2016 Jul 27
2
ext4 error when testing virtio-scsi & vhost-scsi
...in a very long time, I suspect the problem is with >>> your SCSI code, and not with ext4. >>> >> >> Do you know what's the possible reason for this error? >> >> Have tried 4.7-rc2; the same issue exists. >> It can be reproduced with fileio and iblock as backstore. >> It is easier to trigger in qemu with this sequence: >> qemu -> mount -> dd xx -> umount -> mount -> rm xx, then the error may >> happen, no need to reboot. >> >> ramdisk cannot cause the error just because it only does malloc and memcpy, >> while not going...
2016 Feb 11
0
safest way to mount iscsi loopback..
nobody does use iscsi loopback over an lvm? On 05/02/16 17:36, lejeczek wrote: > .. what is? > fellow centosians. > > How do you mount your loopback targets? > I'm trying an LVM backstore; I was hoping to do it by > UUID, but it's exposed more than once, and I don't know > how the kernel would decide which device to use. > > thanks
2016 Feb 15
0
[Bulk] Re: safest way to mount iscsi loopback..
...) or to share a virtual block device > (target mode), while loopback is used to mount a local > file as a device, such as an .iso image of an optical disc. > > can you explain in a little more detail what you're trying > to do ? > > Whatever devices you have in your backstores (the LIO implementation is the default one, I believe), and then naturally in your targets, etc. - on the same local system you can loop them back: LIO presents them again to the kernel as local SCSI devices. I'm thinking maybe multipath should be involved/deployed here? I can mount such a l...
2014 May 01
0
Any information on targetcli (fcoe) for centos 6? Close to end of my rope
.../ # targetcli
targetcli shell version 2.0rc1.fb16
Copyright 2011 by RisingTide Systems LLC and others.
For help on commands, type 'help'.

/> ls
o- / .......................................... [...]
  o- backstores ............................... [...]
  | o- block .................. [2 Storage Objects]
  | | o- vol_grp1-logical_vol1 ....................
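The vol_grp1-logical_vol1 object shown above would have been registered with something along these lines; this is a sketch only (targetcli syntax varies slightly between 2.x shells), and the VG/LV device path is inferred from the object name rather than stated in the post:

```shell
# Register an LVM logical volume as a LIO block backstore
targetcli /backstores/block create name=vol_grp1-logical_vol1 \
    dev=/dev/vol_grp1/logical_vol1
```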
2023 Oct 03
0
maptools, rgdal, rgeos and rgrass7 retiring Monday, October 16
...n https://cran.r-project.org/src/contrib/Archive. maptools, rgdal and rgeos also retain their R-forge repositories, which may be used to retrieve functions for adding to other packages. A snapshot of Windows and macOS binary packages may be found on https://github.com/r-spatial/evolution/tree/main/backstore. Please raise questions by replying to this post, or as issues on https://github.com/r-spatial/evolution. -- Roger Bivand Emeritus Professor Norwegian School of Economics Postboks 3490 Ytre Sandviken, 5045 Bergen, Norway Roger.Bivand at nhh.no
2012 Sep 07
12
[PATCH 0/5] vhost-scsi: Add support for host virtualized target
...nab/qemu-kvm.git vhost-scsi-for-1.3 Note the code is cut against yesterday's QEMU head, and despite the name of the tree is based upon mainline qemu.org git code + has thus far been running overnight with > 100K IOPs small block 4k workloads using v3.6-rc2+ based target code with RAMDISK_DR backstores. Other than some minor fuzz between jumping from QEMU 1.2.0 -> 1.2.50, this series is functionally identical to what's been posted for vhost-scsi RFC-v3 to qemu-devel. Please consider applying these patches for an initial vhost-scsi merge into QEMU 1.3.0-rc code, or let us know what else...
2012 Feb 03
6
Spectacularly disappointing disk throughput
Greetings! I've got a FreeBSD-based (FreeNAS) appliance running as an HVM DomU. Dom0 is Debian Squeeze on an AMD990 chipset system with IOMMU enabled. The DomU sees six physical drives: one of them is a USB stick that I've passed through in its entirety as a block device. The other five are SATA drives attached to a controller that I've handed to the DomU with PCI
2012 Aug 10
1
virtio-scsi <-> vhost multi lun/adapter performance results with 3.6-rc0
...hz w/ 32x threads + 32 GB of DDR3 1600Mhz memory - host kernel: *) Using 3.6-rc0 from target-pending/for-linus *) qemu vhost-scsi from nab's qemu-kvm.git/vhost-scsi on k.o *) Set QEMU vCPU process affinity to dedicated cpus based on 'info cpus' (as recommended by Stefan) target backstores + vhost configuration from rtsadmin/targetcli shell:

/> ls backstores/rd_mcp/
o- rd_mcp ..................... [32 Storage Objects]
  o- ramdisk0 .................. [ramdisk activated]
  o- ramdisk1 ..........................................
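The 32 ramdisk objects listed above could be created in a loop like the following; a sketch only, since the older rtsadmin shell spelled the plugin rd_mcp while newer targetcli-fb calls it /backstores/ramdisk, and the per-object size is hypothetical (the post does not state it):

```shell
# Create 32 ramdisk backstores, ramdisk0..ramdisk31
for i in $(seq 0 31); do
    targetcli /backstores/ramdisk create name=ramdisk$i size=1G
done
targetcli ls /backstores/ramdisk
```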