Displaying 20 results from an estimated 30000 matches similar to: "iscsi w/ libvirt"
2010 May 05
2
FYI: Notes on setting up KVM guests using iSCSI
DV suggested that we document some libvirt setups using shared storage. I'm
not a fan of NFS, so I wrote some blog posts on how to use iSCSI in the
context of libvirt + KVM.
There is of course more than one way to do things, so I've outlined a couple
of different options. One completely manual command line approach using
tgtadm on the iSCSI server:
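The manual approach boils down to creating a target with tgtadm and binding a
backing device to it as a LUN; a minimal sketch (the IQN and the backing device
path are placeholders, not taken from the posts):

# create target id 1 with a placeholder IQN
tgtadm --lld iscsi --op new --mode target --tid 1 \
    --targetname iqn.2010-05.com.example:guest-disks
# attach a backing block device as LUN 1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
    --backing-store /dev/vg_guests/lv_guest1
# allow initiators to connect (here: any address)
tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL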
2011 Sep 13
1
libvirt does not recognize all devices in iscsi and mpath pools in a predictable manner
Hi,
I'm using libvirt 0.8.3 on Fedora 14 (as I wrote earlier, I'm having some
trouble updating to the newest version), and I'm having problems getting iscsi
and mpath storage pools to work in a usable and consistent manner.
I have two storage pools defined on the host machine, one for the raw iscsi
devices and one for those same iscsi devices device-mapped by multipath. They
look
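For reference, an iscsi pool and an mpath pool of the kind described are
typically defined along these lines (host, IQN and pool names below are
placeholders, not the poster's actual configuration):

# define the pools from stdin (or save the XML to a file first)
virsh pool-define /dev/stdin <<'EOF'
<pool type='iscsi'>
  <name>iscsi-pool</name>
  <source>
    <host name='san.example.com'/>
    <device path='iqn.2011-09.com.example:target0'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
EOF

virsh pool-define /dev/stdin <<'EOF'
<pool type='mpath'>
  <name>mpath-pool</name>
  <target>
    <path>/dev/mapper</path>
  </target>
</pool>
EOF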
2010 Jun 16
1
how to match the ID of a LUN in a storage pool with the GUID on the target server
I've configured a libvirt storage pool using an iscsi target from a Sun
7310 storage appliance and am using the LUNs in this target as volumes
for my KVM guests. The setup is very similar to what Daniel covered in
a recent blog posting:
http://berrange.com/posts/2010/05/05/provisioning-kvm-virtual-machines-on-iscsi-the-hard-way-part-2-of-2/
It works great, but I can't figure out how
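One way to correlate the two (a sketch, not necessarily what the original
poster ended up doing) is to list the pool's volumes and compare the SCSI WWIDs
that udev reports, since the appliance GUID normally shows up in the WWID:

# volumes in the pool and their device paths
virsh vol-list iscsi-pool
# SCSI identifier (WWID) of a given device; scsi_id lives in /sbin or
# /lib/udev depending on the distribution (older versions use
# "scsi_id -g -u -s /block/sdc" instead)
scsi_id --whitelisted --device=/dev/sdc
# the same identifiers also appear as /dev/disk/by-id symlinks
ls -l /dev/disk/by-id/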
2010 Jan 16
3
How to force iscsi to see the new LUN size
Hi,
I increased the size of one of the LUNs and on CentOS 5.4 if I restart
iscsi (`service iscsi restart`) I'll see the new size but this will
disconnect all other LUNs.
I'm hoping that there is iscsiadm or some other command that will force
iscsi to rediscover the LUNs but I can't seem to be able to come up with
one.
Resize2fs says that there is nothing to be done. I'm
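For what it's worth, open-iscsi can usually pick up a resized LUN without
restarting the whole service; a sketch, with sdX standing in for the affected
device:

# rescan all logged-in sessions for changed LUN sizes
iscsiadm -m session --rescan
# then ask the kernel to re-read the capacity of the specific device
echo 1 > /sys/block/sdX/device/rescan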
2011 Aug 21
1
Multipath w/ iscsi
I have several CentOS 6 boxes that mount iscsi based luns and use mpath.
They all had problems shutting down as a result of unused maps not getting
flushed as the system halted.
After examining the init scripts I found that netfs, iscsi and multipathd were all
in the correct order, but mpath failed to flush these maps and the system waited indefinitely.
In the meantime I hacked this by adding a `/sbin/multipath
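The multipath flag that flushes unused maps is -F; the workaround described
presumably amounts to running something like this late in the shutdown
sequence (the exact script and placement are assumptions):

# flush all unused multipath device maps
/sbin/multipath -F
# or flush one specific map by name
/sbin/multipath -f mpathb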
2013 Dec 18
3
Connect libvirt to iSCSI target
Hi!
I'm new to libvirt and face problems connecting to an iSCSI target.
What I intend to do is to connect libvirt (I tried virt-manager and
virsh) to an iSCSI target and then boot from the LUNs which contain
the VMs.
I followed the documentation¹ but got stuck at section 12.1.5.4.3.
1)
virsh pool-define-as \
--name foo \
--type iscsi
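A complete iscsi pool definition also needs a source host, a source device
(the target IQN) and a target path before the pool can be started; a sketch
with placeholder values, not the poster's actual target:

virsh pool-define-as \
    --name foo \
    --type iscsi \
    --source-host iscsi.example.com \
    --source-dev iqn.2013-12.com.example:target0 \
    --target /dev/disk/by-path
virsh pool-start foo
virsh vol-list foo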
2009 Sep 07
3
iSCSI domU - introducing more stability
Hi there,
during peak load on some running domU, I noticed random iSCSI "Reported LUNs data has changed" errors which forced me to shut down the respective domU, log back in to the target and run fsck before starting the domU again.
This occurred on a 16-core machine with only about 14 domUs running. Spare memory has been occupied by dom0 (about 40G). Each domU has its own iSCSI target.
2013 Dec 22
2
Re: Connect libvirt to iSCSI target
On 2013-12-21 John Ferlan wrote:
> On 12/17/2013 07:13 PM, Marco wrote:
> > Hi!
> >
> > I'm new to libvirt and face problems connecting to an iSCSI target.
> > What I intend to do is to connect libvirt (I tried virt-manager and
> > virsh) to an iSCSI target and then boot from the LUNs which contain
> > the VMs.
> >
> > I followed the
2013 Jan 20
10
iscsi on xen
I wonder if someone can point me in the right direction. I have two Dell
servers; I set up iSCSI so that I have four 2 TB hard drives, and I used LVM
to create one big partition and shared it using iSCSI. How should I go about
assigning sections of the iSCSI storage for virtual hard drives? Should I
export the whole 8TB as one iSCSI LUN and then use LVM to create smaller
virtual disks? Or should I
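Both layouts can work. A sketch of the single-big-LUN variant, where LVM runs
on the initiator side and each guest gets its own logical volume (device path
and names are placeholders):

# on the Xen host, after logging in to the one large LUN
pvcreate /dev/sdb
vgcreate vg_guests /dev/sdb
# one logical volume per virtual disk
lvcreate -L 20G -n guest1-disk0 vg_guests

The usual caveat with this layout is that only one host should activate and
modify the volume group at a time unless clustered LVM is in use.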
2011 Feb 02
1
iSCSI storage pool questions
Hi All,
I've been trying to figure out the best way of using an iSCSI SAN with KVM and thanks to a helpful post by Tom Georgoulias that I found on this list (https://www.redhat.com/archives/libvirt-users/2010-May/msg00008.html), it appears I have a solution.
What I'm wondering is the following:
1) If I use an iSCSI LUN as the storage pool (instead of creating an LVM VG from this iSCSI
2007 Sep 21
2
Image file or partitions over an iSCSI?
Dear all,
I will soon set up an Infortrend iSCSI SAN device with a RAID 6 volume and
a Dell PE1950 (8 VMs) with a QLogic iSCSI HBA (the server will be duplicated
at a later time for redundancy).
My priorities are:
1) VMs on the iSCSI SAN, so that they can also be booted by another server if
the first one fails
2) performance
3) live migration is not mandatory (i.e. I can tolerate shutting
down
2018 Jan 10
1
Whether libvirt supports backing chains in which every layer is an iscsi network disk
Hi,
For a backing chain in which all images are iscsi network disks, such as
iscsi://ip/iqn../0 (base image) <- iscsi://ip/iqn../1 (active image):
currently the 'qemu-img info --backing-chain' command displays the correct
backing file info, but after starting the guest with the active image of the
chain, the related <backingStore> element is not included in
dumpxml.
So,
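As a point of comparison, the chain can also be inspected directly with
qemu-img using the iscsi:// URL syntax (the address and IQN below are
placeholders):

# inspect the chain qemu sees
qemu-img info --backing-chain \
    iscsi://192.0.2.10/iqn.2018-01.com.example:target/1
# compare with what libvirt reports for the running guest
virsh dumpxml guest1 | grep -A3 backingStore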
2017 Aug 02
2
Libvirt fails on network disk with ISCSI protocol
Hi,
I am working on oVirt, and I am trying to run a VM with a network disk with
iSCSI protocol (the storage is on a Cinder server).
Here is the disk XML I use:
<disk device="disk" snapshot="no" type="network">
  <address bus="0" controller="0" target="0" type="drive" unit="0" />
2011 Sep 14
1
KVM CO 5.6 VM guest crashes running iSCSI
Hi All,
I'm running a KVM host on CentOS 5.6 x64; all of my guests are CentOS 5.6
x64 as well. I create / run VMs via libvirt.
Here are the packages I have:
# rpm -qa | egrep "kvm|virt"
kvm-83-224.el5.centos
python-virtinst-0.400.3-11.el5
kvm-qemu-img-83-224.el5.centos
kmod-kvm-83-224.el5.centos
libvirt-python-0.8.2-15.el5
etherboot-zroms-kvm-5.4.4-13.el5.centos
libvirt-0.8.2-15.el5
2012 Sep 28
2
iscsi confusion
I am confused because I would have expected a 1-to-1 mapping: if you create an iscsi target on some system, you would have to specify which LUN it connects to. But that is not the case...
I read the man pages for sbdadm, stmfadm, itadm, and iscsiadm. I read some online examples where you first run "sbdadm create-lu", which gives you a GUID for a specific device in the system, and then
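For orientation, the COMSTAR pieces really are separate objects: the logical
unit, the view that exposes it, and the target that initiators log in to. A
rough sketch of the usual sequence, with a placeholder backing device and GUID:

# register a backing device as a logical unit; this prints its GUID
sbdadm create-lu /dev/zvol/rdsk/tank/vol1
# create an iSCSI target (an IQN is generated if none is supplied)
itadm create-target
# make the logical unit visible to initiators by adding a view for its GUID
stmfadm add-view 600144f0xxxxxxxxxxxxxxxxxxxxxxxx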
2008 Aug 04
23
Xen and iSCSI - options and questions
Hello,
I have a small Xen farm of 8 dom0 servers with 64 virtual machines running para-virtualized and this has been working great. Unfortunately, I've hit a limit: my iSCSI hardware supports only 512 concurrent connections and so I'm pretty much at the limit. (Wish I would have seen that problem sooner!)
Of course, 87% of those connections are idle-- but necessary because I
2014 Feb 25
1
libvirt iSCSI target discovery
Hi,
Is it possible to discover iSCSI targets using the libvirt API?
OR
is it possible to get results similar to those of the commands below using
the libvirt API?
iscsiadm --mode discovery --type sendtargets --portal server1.example.com
sudo iscsiadm -m discovery -t st -p 192.168.0.10
Regards
Sijo
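libvirt does expose target discovery through the virConnectFindStoragePoolSources
API; from virsh that is roughly the following sketch, reusing the host names
from the question:

# equivalent of a sendtargets discovery against a portal
virsh find-storage-pool-sources-as iscsi server1.example.com
# or with an explicit source-spec XML
virsh find-storage-pool-sources iscsi /dev/stdin <<'EOF'
<source>
  <host name='192.168.0.10'/>
</source>
EOF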
2006 Nov 09
7
xen, iscsi and resilience to short network outages
Hi. Here is the short version:
If dom0 experiences a short (< 120 second) network outage the guests
whose disks are on iSCSI LUNs get (seemingly) unrecoverable IO errors.
Is it possible to make Xen more resilient to such problems?
And now the full version:
We're testing Xen on iSCSI LUNs. The hardware/software configuration is:
* Dom0 and guest OS: SLES10 x86_64
* iSCSI LUN on
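One setting that usually decides how a short outage is handled is open-iscsi's
replacement timeout, which controls how long the initiator queues I/O before
failing it up the stack; a sketch with illustrative values and a placeholder
IQN/portal:

# /etc/iscsi/iscsid.conf: queue I/O for up to 10 minutes during an outage
node.session.timeo.replacement_timeout = 600
# apply the same setting to an already-discovered node record
iscsiadm -m node -T iqn.2006-11.com.example:target0 -p 192.0.2.5 \
    -o update -n node.session.timeo.replacement_timeout -v 600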
2017 Nov 08
2
Does libvirt-sanlock support network disk?
Hello,
As we know, libvirt's sanlock integration supports file-based storage. I wonder if it
supports network storage.
I tried iSCSI, but found it didn't generate any resource file:
Versions: qemu-2.10 libvirt-3.9 sanlock-3.5
1. Set configuration:
qemu.conf:
lock_manager = "sanlock"
qemu-sanlock.conf:
auto_disk_leases = 1
disk_lease_dir = "/var/lib/libvirt/sanlock"
host_id =
2009 Mar 11
6
Export ZFS via ISCSI to Linux - Is it stable for production use now?
Hello,
I want to set up an OpenSolaris box as a centralized storage server, using
ZFS as the underlying FS, on RAID 10 SATA disks.
I will export the storage blocks using iSCSI to RHEL 5 (fewer than 10
clients, and I will format the partition as ext3)
I want to ask...
1. Is this setup suitable for mission critical use now?
2. Can I use LVM with this setup?
Currently we are using NFS as the