2017 Aug 02
0
Re: Libvirt fails on network disk with ISCSI protocol
On Wed, Aug 02, 2017 at 05:47:31PM +0300, Fred Rolland wrote:
>Hi,
>
>I am working on oVirt, and I am trying to run a VM with a network disk with
>ISCSI protocol ( the storage is on a Cinder server).
>
>Here is the disk XML I use:
>
> <disk device="disk" snapshot="no" type="network">
> <address bus="0"
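The snippet above is cut off; for reference, a complete network-disk element of this shape might look like the following sketch (portal address, IQN, and target device are hypothetical — Cinder supplies the real values):

```xml
<disk device="disk" snapshot="no" type="network">
  <driver name="qemu" type="raw"/>
  <!-- hypothetical portal and IQN -->
  <source protocol="iscsi" name="iqn.2017-08.org.example:volume-0001/1">
    <host name="192.0.2.10" port="3260"/>
  </source>
  <target dev="vda" bus="virtio"/>
</disk>
```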
2010 Jun 16
1
how to match the ID of a LUN in a storage pool with the GUID on the target server
I've configured a libvirt storage pool using an iscsi target from a Sun
7310 storage appliance and am using the LUNs in this target as volumes
for my KVM guests. The setup is very similar to what Daniel covered in
a recent blog posting:
http://berrange.com/posts/2010/05/05/provisioning-kvm-virtual-machines-on-iscsi-the-hard-way-part-2-of-2/
It works great, but I can't figure out how
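One way to correlate the two, sketched here with a hypothetical pool name and device node, is to compare the LUN paths libvirt reports against the SCSI identifiers udev extracts from the same devices:

```
# Volumes in the iSCSI pool; the by-path names encode target IQN and LUN number
virsh vol-list --pool iscsi-pool --details

# Per-session view from open-iscsi, including the attached SCSI devices
iscsiadm -m session -P 3

# Print the unit serial / GUID the appliance assigned to a given LUN
/lib/udev/scsi_id --whitelisted --device=/dev/sdb
```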
2008 Jun 25
6
dm-multipath use
Are folks in the CentOS community successfully using device-mapper-multipath?
I am looking to deploy it for error handling on our iSCSI setup, but there
seems to be little traffic about this package on the CentOS forums, as far
as I can tell, and there seem to be a number of small issues based on my
reading of the dm-multipath developer lists and related resources.
-geoff
Geoff Galitz
Blankenheim
2013 Dec 22
2
Re: Connect libvirt to iSCSI target
On 2013-12-21 John Ferlan wrote:
> On 12/17/2013 07:13 PM, Marco wrote:
> > Hi!
> >
> > I'm new to libvirt and face problems connecting to an iSCSI target.
> > What I intend to do is to connect libvirt (I tried virt-manager and
> > virsh) to an iSCSI target and then boot from the LUNs which contain
> > the VMs.
> >
> > I followed the
2011 Sep 09
3
CentOS5 with Dell Broadcom iSCSI Offload, does it work ?
Hi all,
After finding multiple answers to this question via Google, I still haven't
been able to make it work on my servers. Does anybody have iSCSI offload
working on a Dell server with Broadcom NICs?
My environment: I'm running CentOS 5.6 CR on a Dell PowerEdge R710 with a
Broadcom Corporation NetXtreme II BCM5709 connecting to an EMC CX4-120 SAN,
via 2x Cisco 2960G-24TC-L switches. It's working
2008 Oct 15
29
HELP! SNV_97,98,99 zfs with iscsitadm and VMWare!
I'm not sure if this is a problem with the iscsitarget or zfs. I'd greatly appreciate it if it gets moved to the proper list.
Well, I'm just about out of ideas on what might be wrong..
Quick history:
I installed OS 2008.05 when it was SNV_86 to try out ZFS with VMWare. Found out that multiluns were being treated as multipaths, so I waited till SNV_94 came out to
2013 Dec 18
3
Connect libvirt to iSCSI target
Hi!
I'm new to libvirt and face problems connecting to an iSCSI target.
What I intend to do is to connect libvirt (I tried virt-manager and
virsh) to an iSCSI target and then boot from the LUNs which contain
the VMs.
I followed the documentation¹ but got stuck at section 12.1.5.4.3.
1)
virsh pool-define-as \
--name foo \
--type iscsi
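The command above is truncated; a complete definition of an iscsi pool typically also names the portal, the target IQN, and a target path (the portal address and IQN below are hypothetical):

```
virsh pool-define-as \
    --name foo \
    --type iscsi \
    --source-host 192.0.2.10 \
    --source-dev iqn.2013-12.org.example:target0 \
    --target /dev/disk/by-path
virsh pool-start foo
virsh vol-list foo
```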
2013 Jul 04
1
Failed to create SR with lvmoiscsi on xcp1.6[ [opterr=Logical Volume partition creation error [opterr=error is 5]]
Hello Experts,
When I try to create an SR with LVM over iSCSI, it always fails. Here are my xe command and debug info:
[root@xcp16 log]# xe sr-create host-uuid=a226200e-f7ff-4dee-b679-e5f114d1e465 content-type=user name-label=shared_disk_sr shared=true device-config:target=192.168.1.2 device-config:targetIQN=iqn.2013-07.example:shareddisk device-config:SCSIid=1IET_00010001 type=lvmoiscsi
The SR is
2020 Jun 25
1
virsh edit does not work when <initiator> and <auth> is used in config
Hello,
I am having problem when using: "virsh edit <vm_name>"
my VM has network iscsi disk defined:
<disk type='network' device='disk'>
<driver name='qemu' type='raw'/>
<source protocol='iscsi'
name='iqn.1992-08.com.netapp:5481.60080e50001ff2000000000051aee24d/0'>
<host
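For context, a complete disk of this shape with both <initiator> and <auth> might look like the sketch below (portal address, initiator IQN, username, and secret usage are hypothetical; element placement follows recent libvirt schemas):

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='iscsi'
          name='iqn.1992-08.com.netapp:5481.60080e50001ff2000000000051aee24d/0'>
    <host name='192.0.2.20' port='3260'/>
    <!-- hypothetical initiator IQN -->
    <initiator>
      <iqn name='iqn.2020-06.org.example:initiator01'/>
    </initiator>
    <!-- CHAP credentials via a pre-defined libvirt secret -->
    <auth username='myuser'>
      <secret type='iscsi' usage='libvirtiscsi'/>
    </auth>
  </source>
  <target dev='sda' bus='scsi'/>
</disk>
```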
2009 Jan 19
2
Error on xm create: VmError: (38, ''Function not implemented'')
Hi everyone,
I generated my own DomU guest using openSUSE's yast dirinstall and stored it
on a separate iscsi target. I created it as a DomU on my Domain0
(2.6.25.16-0.1-xen kernel) successfully, using the following configuration:
name = "vm01-opensuse11-base-LAMP"
memory = 256
kernel = "/boot/vmlinuz-xen"
ramdisk = "/boot/initrd-xen"
root =
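The configuration is cut off at root =; a typical file of this shape continues roughly as follows (everything past the ramdisk line is a hypothetical continuation, not the poster's actual values):

```
name    = "vm01-opensuse11-base-LAMP"
memory  = 256
kernel  = "/boot/vmlinuz-xen"
ramdisk = "/boot/initrd-xen"
# --- hypothetical continuation from here on ---
root    = "/dev/xvda1 ro"
disk    = [ "phy:/dev/disk/by-path/ip-192.0.2.5:3260-iscsi-iqn.2009-01.org.example:vm01-lun-0,xvda,w" ]
vif     = [ "bridge=br0" ]
```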
2009 Jan 19
1
iscsi of a SAN on a DomU
Hi,
i have a debian Etch x86_64 with a xen 3.1 on a kernel 2.6.18-xen.
I have some DomU with Debian Etch.
I installed open-iscsi, configure /etc/iscsi/iscsid.conf:
---
node.active_cnx = 1
node.startup = automatic
#node.session.auth.username = dima
#node.session.auth.password = aloha
node.session.timeo.replacement_timeout = 120
node.session.err_timeo.abort_timeout = 10
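With iscsid.conf in place, the usual next step inside the DomU is discovery and login; the portal address and target IQN below are hypothetical:

```
# Discover targets advertised by the SAN's portal
iscsiadm -m discovery -t sendtargets -p 192.0.2.30:3260

# Log in to one discovered target (node.startup = automatic will
# re-establish the session on subsequent boots)
iscsiadm -m node -T iqn.2009-01.org.example:san0 -p 192.0.2.30:3260 --login
```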
2016 Apr 11
4
Problems with scsi-target-utils when hosted on dom0 centos 7 xen box
Hello
We were attempting to use scsi-target-utils, hosted on a xen dom0 vm
using localhost, and running into some problems. I was not able to
reproduce this on a centos 7.2 server using the default kernel.
(From dmesg)
Apr 4 11:18:42 funk kernel: [ 596.511204] connection2:0: detected
conn error (1022)
Apr 4 11:18:42 funk kernel: connection2:0: ping timeout of 5 secs
expired, recv
2010 Mar 17
1
Pool, iSCSI and guest start
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Hi,
A former Xen user and a newbie to the kvm/qemu/libvirt stuff, I'm giving it a try
on my network ;-)
I need to run a VM with iSCSI target attached.
I did it this way :
1) Creation of iscsi pool (equa.xml) :
<pool type="iscsi">
<name>equalog</name>
<source>
<host name="10.10.0.1"/>
<device
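The pool definition is cut off at <device; a complete equa.xml of this shape would look roughly like the following (the target IQN is hypothetical):

```xml
<pool type="iscsi">
  <name>equalog</name>
  <source>
    <host name="10.10.0.1"/>
    <!-- hypothetical EqualLogic target IQN -->
    <device path="iqn.2010-03.com.example:equalog-target"/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
```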
2018 Jan 10
1
Whether libvirt can support all backing chain layer are iscsi network disk type
Hi,
For a backing chain in which all the images are iscsi network disks, such as
iscsi://ip/iqn../0 (base image) <- iscsi://ip/iqn../1 (active image):
currently the 'qemu-img info --backing-chain' command displays the correct
backing file info, but after starting a guest with the active image in the
backing chain, the related <backingStore> element is not included in the
dumpxml output.
So,
2012 Nov 19
1
how to make the volume's format to qcow2 when creating volume
hi,all
the following are files of pool and volume.
storage pool is based on logical(LVM) and iscsi,now I create volume
specified the format to "qcow2"
*pool.xml*
<pool type='logical'>
<name>pool_190</name>
<source>
<device
path='/dev/disk/by-path/ip-192.168.0.190:3260-iscsi-iqn.2012-11.com.cloudking:server.target1-lun-1'/>
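For comparison, a volume definition requesting qcow2 looks like the sketch below (name and size are hypothetical); note, though, that logical (LVM) pools store volumes as raw logical volumes, so the qcow2 format request is generally only honored by directory-backed pools:

```xml
<volume>
  <name>vol_qcow2</name>
  <capacity unit='G'>10</capacity>
  <target>
    <!-- honored in dir-type pools; logical pools create raw LVs -->
    <format type='qcow2'/>
  </target>
</volume>
```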
2011 Feb 28
2
can't disconnec iSCSI targets, please help
Hi,
I'm trying to disconnect some iSCSI targets, but can't seem to.
[root@localhost ~]# iscsiadm -m session
tcp: [1] 192.168.2.202:3260,1 iqn.2011.01.22.freenas.nvr:500gb
tcp: [3] 192.168.2.200:3260,1 iqn.2011-2.za.co.securehosting:RAID.thin3.vg0.1tba
tcp: [4] 192.168.2.202:3260,1 iqn.2011.01.22.freenas.nvr:extent0
tcp: [5] 192.168.2.200:3260,1
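For reference, sessions like these are normally closed with iscsiadm's logout operations (shown here against one of the IQNs from the session list above):

```
# Log out of a single target/portal
iscsiadm -m node -T iqn.2011.01.22.freenas.nvr:500gb -p 192.168.2.202:3260 --logout

# Or tear down every session at once
iscsiadm -m node --logoutall=all
```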
2018 Mar 29
2
Using alias under disk in XML
I've been trying to follow the information found here [1] in order to
provide an alias for RBD disks I'm defining, however it does not appear
to be working and I wanted to see if I was doing something wrong.
I define the alias like so (using 'virsh edit'):
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'
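One common pitfall here: libvirt only preserves user-supplied aliases whose name starts with the "ua-" prefix; anything else is treated as auto-generated and dropped when the XML is redefined. A minimal sketch (file path and alias name are hypothetical):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
  <!-- user aliases must carry the ua- prefix to survive virsh edit -->
  <alias name='ua-mydatadisk'/>
</disk>
```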
2014 Jul 04
2
iSCSI initiator iqn
Hi,
I could not find any option to set iSCSI initiator iqn while using
guestfish, although the underlying qemu command has this option.
It appears that each time guestfish tries to connect to an iSCSI LUN, a
randomly generated initiator iqn is used. This prevents guestfish from
connecting to the iSCSI target in our environment, as the target only
allows incoming connections based on the preconfigured
2012 Dec 11
4
Configuring Xen + DRBD + Corosync + Pacemaker
Hi everyone,
I need some help setting up my failover configuration.
My goal is to have a redundant system using Xen + DRBD + Corosync +
Pacemaker.
On Xen I will have one virtual machine. When this computer's network goes
down, I will do a live migration to the second computer.
The first thing I will need is a crossover cable, won't I? Is it
really necessary? Ok, I did it. eth0
2013 Apr 24
7
[PATCH] hotplug/Linux: add iscsi block hotplug script
This hotplug script has been tested with IET and NetBSD iSCSI targets,
without authentication.
This hotplug script will only work with PV guests not using pygrub.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Ian Jackson <ian.jackson@citrix.com>
---
Changes due to 4.3 release freeze:
* We can no longer provide a