similar to: Bug#691808: xcp-storage-managers: Another wrong binary path + wrong parameter in storage managers backend

Displaying 20 results from an estimated 400 matches similar to: "Bug#691808: xcp-storage-managers: Another wrong binary path + wrong parameter in storage managers backend"

2013 Jul 04
1
Failed to create SR with lvmoiscsi on xcp1.6 [opterr=Logical Volume partition creation error [opterr=error is 5]]
Hello Experts, When I try to create an SR with LVM over iSCSI, it always fails. I list my xe command and debug info: [root@xcp16 log]# xe sr-create host-uuid=a226200e-f7ff-4dee-b679-e5f114d1e465 content-type=user name-label=shared_disk_sr shared=true device-config:target=192.168.1.2 device-config:targetIQN=iqn.2013-07.example:shareddisk device-config:SCSIid=1IET_00010001 type=lvmoiscsi The SR is
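A hedged sketch for this class of failure: probe the target before sr-create to confirm the LUN and its SCSIid, and check the LUN for stale partition/LVM metadata, which can produce "Logical Volume partition creation error". The target and IQN below are the poster's values; the dd device path is an assumption.

  # Probe the target; the output (often returned as an error block) lists
  # the LUNs and their valid SCSIid values
  xe sr-probe type=lvmoiscsi \
    device-config:target=192.168.1.2 \
    device-config:targetIQN=iqn.2013-07.example:shareddisk
  # If the LUN carries leftover metadata, clear its first megabytes
  # (destructive; only on a LUN you intend to wipe)
  dd if=/dev/zero of=/dev/disk/by-id/scsi-1IET_00010001 bs=1M count=10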
2012 Jun 01
2
installation and configuration documentation for XCP
I've installed XCP 1.5-beta. I'm a little confused as to what has happened. Everything so far seems to work. However, I need more information on what was done to my hard disk during the installation and how the file system was set up. In particular, I was investigating how to create a new logical volume to hold my ISO files, to use as my ISO storage (SR). I notice (see below with
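One way an ISO SR is commonly carved out of free space in the XCP volume group, sketched with assumed names (the VG_XenStorage-<uuid> name comes from the installer; check vgs first):

  lvcreate -L 20G -n iso_store VG_XenStorage-<uuid>
  mkfs.ext3 /dev/VG_XenStorage-<uuid>/iso_store
  mkdir -p /mnt/iso_store
  mount /dev/VG_XenStorage-<uuid>/iso_store /mnt/iso_store
  xe sr-create name-label="Local ISOs" type=iso content-type=iso \
    device-config:location=/mnt/iso_store device-config:legacy_mode=true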
2011 Jul 22
4
VM backup problem
Hi, I use the following steps for LV backup: lvcreate -L 5G -s -n lv_snapshot /dev/VG_XenStorage-7b010600-3920-5526-b3ec-6f7b0f610f3c/VHD-a2db885c-9ad0-46c3-b2c3-a30cb71d83f8 ("lv_snapshot created"). This command worked properly. Then I issue the kpartx command: kpartx -av
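The full backup sequence the poster is working toward, as a hedged sketch (mount point and mapper name are assumptions; if the LV holds VHD-format data rather than a raw disk image, kpartx will find no partitions and vhd-util is needed instead):

  VG=/dev/VG_XenStorage-7b010600-3920-5526-b3ec-6f7b0f610f3c
  lvcreate -L 5G -s -n lv_snapshot $VG/VHD-a2db885c-9ad0-46c3-b2c3-a30cb71d83f8
  kpartx -av $VG/lv_snapshot                    # map partitions inside the snapshot
  mount /dev/mapper/lv_snapshot1 /mnt/backup    # assumed mapper name
  tar -czf /backup/vm-backup.tar.gz -C /mnt/backup .
  umount /mnt/backup
  kpartx -dv $VG/lv_snapshot
  lvremove -f $VG/lv_snapshot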
2008 Aug 17
2
mirroring with LVM?
I'm pulling my hair out trying to set up a mirrored logical volume. lvconvert tells me I don't have enough free space, even though I have hundreds of gigabytes free on both physical volumes. Command: lvconvert -m1 /dev/vg1/iscsi_deeds_data Insufficient suitable allocatable extents for logical volume: 10240 more required Any ideas? Thanks! Gordon Here's the output from the
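A likely cause, sketched: with the default on-disk mirror log, -m1 needs a small extent on a device separate from both mirror legs, so two PVs are not enough even with plenty of free space. Two common workarounds (LV name from the post):

  pvs -o pv_name,vg_name,pv_free                             # confirm free extents per PV
  lvconvert -m1 --mirrorlog core /dev/vg1/iscsi_deeds_data   # in-memory log; full resync after reboot
  lvconvert -m1 --alloc anywhere /dev/vg1/iscsi_deeds_data   # or let LVM co-locate log and data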
2017 Jan 11
2
HSM
Hmm, don't you just love changing terminology! I've been using HSM systems at work since '99. BTW, DMAPI is the Data Management API, which was a common(ish) extension used by, amongst others, SGI and IBM. Back to lvmcache. It looks interesting. I'd earlier dismissed LVM since it is block-oriented, not file-oriented. Probably because my mental image is of files migrating to
2017 Jan 11
0
HSM
ZFS also does some fun things here if you want to build an SSD & spinning disk array - http://zfsonlinux.org/ On 11 January 2017 at 11:56, J Martin Rushton < martinrushton56 at btinternet.com> wrote: > Hmm, don't you just love changing terminology! I've been using HSM > systems at work since '99. BTW, DMAPI is the Data Management API which > was a common(ish)
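A minimal sketch of such a pool, with assumed device names: mirrored spinning disks plus SSD partitions for the intent log (ZIL/SLOG) and the read cache (L2ARC):

  zpool create tank mirror /dev/sda /dev/sdb \
    log /dev/nvme0n1p1 \
    cache /dev/nvme0n1p2
  zpool status tank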
2018 Dec 08
1
Weird problems with CentOS 7.6 1810 installer
> > > > Command line options: rd.debug rd.udev.debug systemd.log_level=debug That will be incredibly verbose and slows things down a lot, so on the off chance there's a race, you might get different results. But if not, the log should contain something useful. I like the hypothesis about mdadm metadata version 0.9, however that's still really common on RHEL and CentOS. It
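For the mdadm-metadata hypothesis, the superblock version of an existing array member can be checked directly (device name assumed):

  mdadm --examine /dev/sda1 | grep -i 'version'   # 0.90 is the old superblock format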
2012 Apr 17
0
failure("Storage_access failed with: SR_BACKEND_FAILURE_47: [ ; The SR is not available
Hello, Please excuse my complete Xen ignorance, but I am hoping someone will be able to help me out here. I recently built my first Xen box using two 1TB drives in a RAID configuration, 2TB total. I installed Xen and I am able to access it via XenCenter. However, I am trying to use some of that 2TB as a local datastore, but for the life of me I cannot get it mounted; I keep on getting this error:
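A hedged sketch of creating a local LVM SR on the unused space, assuming it is exposed as a separate block device (the device name is a placeholder, and sr-create wipes it, so verify with fdisk -l first):

  HOST_UUID=$(xe host-list --minimal)
  xe sr-create host-uuid="$HOST_UUID" name-label="Local storage 2" \
    type=lvm content-type=user device-config:device=/dev/sdb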
2017 Oct 10
1
ZFS with SSD ZIL vs XFS
I've had good results with using SSD as LVM cache for gluster bricks (http://man7.org/linux/man-pages/man7/lvmcache.7.html). I still use XFS on bricks. On Tue, Oct 10, 2017 at 12:27 PM, Jeff Darcy <jeff at pl.atyp.us> wrote: > On Tue, Oct 10, 2017, at 11:19 AM, Gandalf Corvotempesta wrote: > > Anyone made some performance comparison between XFS and ZFS with ZIL > > on
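A minimal lvmcache attach for an existing brick LV, with assumed VG/LV/device names (see the lvmcache(7) page linked above); the cache can be added to a live LV without reformatting:

  vgextend vg_bricks /dev/nvme0n1              # add the SSD to the brick VG
  lvcreate -L 100G -n brick_cache vg_bricks /dev/nvme0n1
  lvcreate -L 1G -n brick_cache_meta vg_bricks /dev/nvme0n1
  lvconvert --type cache-pool --poolmetadata vg_bricks/brick_cache_meta vg_bricks/brick_cache
  lvconvert --type cache --cachepool vg_bricks/brick_cache vg_bricks/brick_lv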
2011 Aug 23
0
xe vm-export fails on debian squeeze
Hi. Running xe vm-export on any VM results in the following error: root@squeeze:~# xe vm-export vm=ddf27dce-5851-2ed8-8fc3-8d7dfd64dbef filename=test.vhd Error code: SR_BACKEND_FAILURE_46 Error parameters: , The VDI is not available [opterr=Command ['/usr/sbin/vhd-util', 'scan', '-f', '-c', '-m',
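Running the scan by hand, with the options visible in the truncated error, can show which VHD the storage backend trips over; the match pattern and VG name here are assumptions:

  vhd-util scan -f -c -m 'VHD-*' -l VG_XenStorage-<sr-uuid> -p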
2017 Oct 31
3
BoF - Gluster for VM store use case
During Gluster Summit, we discussed gluster volumes as storage for VM images - feedback on the use case and upcoming features that may benefit this use case. Some of the points discussed: * Need to ensure there are no issues when expanding a gluster volume when sharding is turned on. * Throttling feature for self-heal, rebalance process could be useful for this use case. * Erasure coded volumes with
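For reference, sharding is a per-volume option; the settings below are a commonly suggested starting point for VM image workloads, not values from the thread (volume name assumed):

  gluster volume set vmstore features.shard on
  gluster volume set vmstore features.shard-block-size 64MB
  gluster volume set vmstore cluster.shd-max-threads 1   # throttle self-heal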
2017 Jan 11
0
HSM
HSM also stands for "Hardware security module" Maybe lvmcache would be interesting for you? HSM is more popularly known as "tiering". Cheers, Andrew On 11 January 2017 at 11:15, J Martin Rushton < martinrushton56 at btinternet.com> wrote: > I think there may be some confusion here. By HSM I was referring to > Hierarchical Storage Management, whereby there are
2017 Feb 10
0
Huge directory tree: Get files to sync via tools like sysdig
On Fri, 10 Feb 2017 12:38:32 +1300 Henri Shustak <henri.shustak at gmail.com> wrote: > As Ben mentioned, ZFS snapshots is one possible approach. Another > approach is to have a faster storage system. I have seen considerable > speed improvements with rsync on similar data sets by, say, upgrading > the storage subsystem. Another possibility could be to use lvm and lvmcache to
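If a change list can be produced (via sysdig, inotify, or snapshot diffing), rsync can be pointed at it and skip the full tree walk; the paths below are hypothetical, and entries in the list must be relative to the source directory:

  rsync -a --files-from=/tmp/changed_files.txt /data/ backup-host:/data/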
2017 Nov 01
0
BoF - Gluster for VM store use case
----- Original Message ----- > From: "Sahina Bose" <sabose at redhat.com> > To: gluster-users at gluster.org > Cc: "Gluster Devel" <gluster-devel at gluster.org> > Sent: Tuesday, October 31, 2017 11:46:57 AM > Subject: [Gluster-users] BoF - Gluster for VM store use case > > During Gluster Summit, we discussed gluster volumes as storage for VM
2013 Jul 22
1
Bug#717573: XCP: Cannot connect to iSCSI target
Package: xcp-storage-managers Version: 0.1.1-3 Severity: normal XCP cannot use iSCSI block devices. It fails with "iscsiadm: No active sessions". I ran the following commands trying to diagnose the problem: first introducing an SR and then creating a PBD with the needed configuration for the iSCSI block device. These steps work fine, but when I tried to plug the PBD I got the error: root at
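A manual session check along the lines the reporter attempted, with placeholder portal and IQN values; pbd-plug fails this way when open-iscsi cannot establish a session, so checking by hand isolates the layer at fault:

  iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260
  iscsiadm -m node -T iqn.2013-07.example:target0 -p 192.168.1.10:3260 --login
  iscsiadm -m session        # should now list an active session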
2018 Mar 07
4
gluster for home directories?
Hi, We are looking into replacing our current storage solution and are evaluating gluster for this purpose. Our current solution uses a SAN with two servers attached that serve samba and NFS 4. Clients connect to those servers using NFS or SMB. All users' home directories live on this server. I would like some insight into who else is using gluster for home directories for about 500
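A plain replicated starting point for such a setup, sketched with assumed server and brick names (replica 3 sidesteps the split-brain arbitration issues of replica 2):

  gluster volume create homes replica 3 \
    srv1:/bricks/homes srv2:/bricks/homes srv3:/bricks/homes
  gluster volume start homes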
2018 Mar 08
0
gluster for home directories?
Hi Rik, Nice clarity and detail in the description. Thanks! inline... On Wed, Mar 7, 2018 at 8:29 PM, Rik Theys <Rik.Theys at esat.kuleuven.be> wrote: > Hi, > > We are looking into replacing our current storage solution and are > evaluating gluster for this purpose. Our current solution uses a SAN > with two servers attached that serve samba and NFS 4. Clients connect to
2013 Dec 23
0
Re: Connect libvirt to iSCSI target
On 12/22/2013 10:09 AM, Marco wrote: > On 2013-12-21 John Ferlan wrote: > >> On 12/17/2013 07:13 PM, Marco wrote: >>> Hi! >>> >>> I'm new to libvirt and face problems connecting to an iSCSI target. >>> What I intend to do is to connect libvirt (I tried virt-manager and >>> virsh) to an iSCSI target and then boot from the LUNs which
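For context, libvirt attaches to an iSCSI target through a storage pool definition; once the pool starts, each LUN appears as a volume that can be assigned to a VM as a disk. The host and IQN below are placeholders:

  # iscsi-pool.xml
  <pool type='iscsi'>
    <name>iscsi-target</name>
    <source>
      <host name='192.168.1.2'/>
      <device path='iqn.2013-12.example:storage'/>
    </source>
    <target>
      <path>/dev/disk/by-path</path>
    </target>
  </pool>

  virsh pool-define iscsi-pool.xml
  virsh pool-start iscsi-target
  virsh vol-list iscsi-target   # the LUNs show up here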
2011 Dec 29
4
BalloonWorkerThread issue
Hello List, Merry Christmas to all!! Basically I'm trying to boot a Windows 2008R2 DC HVM with 90GB static max memory and 32GB static min. The node config is Dell M610 with X5660 and 96GB RAM, and it's running XCP 1.1. Many times the node crashes while booting the HVM. Sometimes I get success. I have attached the HVM boot log of a successful start. Many times the node hangs as soon as the
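The memory shape described can be set explicitly before boot; the uuid is a placeholder and the sizes are the poster's, converted to bytes as xe expects. Narrowing the gap between dynamic min and max is a plausible first experiment when ballooning is suspected:

  xe vm-memory-limits-set uuid=<vm-uuid> \
    static-min=$((32*1024*1024*1024)) dynamic-min=$((32*1024*1024*1024)) \
    dynamic-max=$((90*1024*1024*1024)) static-max=$((90*1024*1024*1024))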