Displaying 20 results from an estimated 3000 matches similar to: "libvirt and shared storage SAN Fiber Channel"
2008 Feb 20
1
Issue with dom0 xen ballooning
Hi all,
I'm having an issue with my CentOS 5.1 / Xen installation.
I have several dom0s running 2.6.18-53.1.6.el5xen (x86_64).
On the dom0s where the load is high (more than 70% of total system memory
consumed by dom0 and domU) we get a lot of "memory squeeze" messages.
The result is that the domU appears to be blocked (no network, no disk
access, etc.).
Looking for a solution in xen
2008 Jan 14
1
Problem with CentOS 5.1 / Xen / Migration : domU time
Hi all,
I'm having an issue with my CentOS 5.1 / Xen installation.
I have two dom0s running 2.6.18-53.1.4.el5xen (x86_64).
These two dom0s are connected to a SAN.
They host multiple domUs, such as CentOS 4.5 (2.6.9-55.0.9.ELxenU)
and CentOS 5.1 (2.6.18-53.1.4.el5xen).
Both dom0s run ntpd to synchronize dom0 time.
When I start a virtual machine on the first dom0 all seems good, the
2014 Dec 04
3
xen-c6 fails to boot
Thanks all for the advice.
It seems there is an issue with Dracut booting from these hosts when LVM is used.
dracut: Scanning devices sda2 for LVM logical volumes VolGroup/lv_swap VolGroup/lv_root
dracut: inactive '/dev/VolGroup/lv_swap' [1.94 GiB] inherit
dracut: inactive '/dev/VolGroup/lv_root' [230.69 GiB] inherit
dracut: PARTIAL MODE. Incomplete logical volumes will be
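When dracut stops at this point it usually drops to an emergency shell, from which the volume group can sometimes be activated by hand. A hypothetical recovery sketch (assuming the group really is intact and only activation failed; `VolGroup` is taken from the log lines above):

```shell
# From the dracut emergency shell: rescan and activate the LVs manually.
lvm vgscan
lvm vgchange -ay VolGroup    # activate lv_root and lv_swap
ls /dev/VolGroup/            # check the device nodes now exist
exit                         # let dracut retry mounting the root filesystem
```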
2008 Mar 03
3
LVM and kickstarts ?
Hey,
Can anyone tell me why option 1 works and option 2 fails? I know I
need swap and such, but in troubleshooting this issue I trimmed
down my config.
It fails when trying to format my logical volume, because the mount point
does not exist (/dev/volgroup/logvol).
It seems that with option 2, the partitions are created and LVM is setup
correctly. However the volgroup / logvolume was not
2022 Jan 09
1
rd.lvm.lv on CentOS Stream 9 (first-boot failure)
I've installed a CentOS Stream 9 system from a kickstart file that
specified (among other things) several logical volumes:
logvol / --fstype="ext4" --size=10240 --name=lv_root --vgname=VolGroup
logvol /var --fstype="ext4" --size=4096 --name=lv_var --vgname=VolGroup
logvol swap --fstype="swap" --size=2048 --name=lv_swap --vgname=VolGroup
When that system rebooted,
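For logvol lines like those above, the dracut arguments on the kernel command line would name each LV that must be activated in early boot. A hypothetical sketch of how this could be wired up in the same kickstart (the `bootloader --append` usage is an assumption, not taken from the original post):

```shell
# Both early-boot LVs live in VolGroup, so dracut needs one rd.lvm.lv
# argument per logical volume it should activate:
#   rd.lvm.lv=VolGroup/lv_root rd.lvm.lv=VolGroup/lv_swap
# In the kickstart this could be appended via the bootloader command:
bootloader --append="rd.lvm.lv=VolGroup/lv_root rd.lvm.lv=VolGroup/lv_swap"
```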
2010 Nov 12
4
Opinion on best way to use network storage
I need the community's opinion on the best way to use my storage SAN to host
xen images. The SAN itself is running iSCSI and NFS. My goal is to keep
all my xen images on the SAN device, and to be able to easily move images
from one host to another as needed while minimizing storage requirements and
maximizing performance.
What I see are my options:
1) Export a directory through NFS.
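Option 1 might look like the following minimal sketch (the server name, paths, and export options are assumptions for illustration): the filer exports one image directory, and every dom0 mounts it at the same path so guest configs stay identical across hosts.

```shell
# /etc/exports on the storage server (hypothetical hosts/paths):
/export/xen-images  dom0-a(rw,no_root_squash,sync)  dom0-b(rw,no_root_squash,sync)

# On each dom0, an /etc/fstab line mounting the shared image directory:
# san:/export/xen-images  /var/lib/xen/images  nfs  defaults,hard,intr  0 0
```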
2022 Jan 10
1
rd.lvm.lv on CentOS Stream 9 (first-boot failure)
On 1/9/22 15:37, Gordon Messmer wrote:
> 1: The system also includes a volume group named "BackupGroup" and
> that group activates on boot (post-dracut). Why are those LVs
> activated when rd.lvm.lv is specified?
As far as I can tell, this is because in the dracut boot process, the
device backing VolGroup is activated, but the device backing BackupGroup
is not. As a
2013 May 02
4
Kickstart and volume group with a dash in the name
Hi,
I'm trying to setup the provisioning of new OpenStack hypervisors with
cinder volumes on them. The problem is that kickstart doesn't allow
dashes in volume group names?
I tried this:
volgroup cinder-volumes --pesize=4096 pv.02
and this:
volgroup cinder--volumes --pesize=4096 pv.02
but in both cases I end up with a volume group named "cindervolumes" on
the system. Any
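One conceivable workaround (not from the original thread): let kickstart create the group under the dash-stripped name, then rename it in %post with LVM's vgrename. This is only safe because nothing mounted by the kickstart refers to this volume group by name.

```shell
%post
# Hypothetical workaround: kickstart created "cindervolumes" (dash stripped);
# rename it to the name cinder expects. No filesystem in the kickstart
# references this data VG, so the rename does not break fstab or grub.
vgrename cindervolumes cinder-volumes
%end
```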
2012 Jan 30
2
One of my servers won't boot today
Hi All,
One of my servers upon a restart today comes up with an error
checking filesystems:
fsck.ext3: No such file or directory while trying to open /dev/VolGroup-1/LogVol00.
/dev/VolGroup-1/LogVol00: The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then
2014 Aug 07
1
kickstart - don't wipe data
Hi,
I am struggling with kickstart.
What I want to achieve is a reinstall, but some data partitions should
survive the install, i.e. they should not be formatted.
With a single disk this works, here is the relevant part from the
kickstart file (I shortened the name of the volume group)
...
zerombr
clearpart --none --initlabel
part /boot --fstype="xfs" --label=boot --onpart=vda1
part
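The usual kickstart idiom for this (a hypothetical continuation of the fragment above; the mount point and partition are assumptions) is to attach the surviving partition with --onpart and suppress formatting with --noformat:

```shell
# Reuse an existing data partition without formatting it:
# --onpart binds the mount point to the existing partition, and
# --noformat leaves its contents untouched during the reinstall.
part /data --onpart=vda3 --noformat
```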
2007 Oct 13
1
Problem creating volgroups with kickstart installations (on xen)
I'm testing doing kickstart installations on Xen VMs. This is
the first time I'm trying out kickstart at all, so I rather think I'm
doing something wrong in the kickstart configuration than it is
a Xen issue.
I use a modified kickstart file from an earlier manual installation
with a very basic filesystem setup. It fails with
"SystemError: vgcreate failed for VolGroup00".
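One possible cause on a brand-new Xen virtual disk is stale or absent partition metadata, which the kickstart can clear before the LVM commands run. A hedged sketch (the drive name xvda and sizes are assumptions, not from the original post):

```shell
# Initialize the disk label and wipe old metadata so vgcreate
# starts from a clean physical volume:
zerombr
clearpart --all --initlabel --drives=xvda
part pv.01 --size=1 --grow --ondisk=xvda
volgroup VolGroup00 pv.01
```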
2009 Aug 22
6
Fw: Re: my bootlog
Fasiha Ashraf
--- On Sat, 22/8/09, Fasiha Ashraf <feehapk@yahoo.co.in> wrote:
From: Fasiha Ashraf <feehapk@yahoo.co.in>
Subject: Re: [Xen-users] my bootlog
To: "Boris Derzhavets" <bderzhavets@yahoo.com>
Date: Saturday, 22 August, 2009, 11:12 AM
Please check what's wrong here
grub.conf
title Fedora (2.6.30-rc6-tip)
root (hd0,6)
kernel
2009 Aug 25
4
Creating a Fedora 11 DomU
Hi All,
I just tried to create a DomU using the following method... (I normally use the Xen install kernel when I create CentOS domUs, but when I tried that with the PXE image, it had the same effect as below). This is the first time I have created a Fedora domU in ages... years, even.
virt-install \
--paravirt \
--name demo \
--ram 500 \
2016 Jan 14
5
CentOS 6 Virt SIG Xen 4.6 packages available in centos-virt-xen-testing
As mentioned yesterday, Xen 4.6 packages are now available for
testing. These also include an update to libvirt 1.3.0, in line with
what's available for CentOS 7. Please test, particularly the upgrade
if you can, and report any problems here.
To upgrade:
yum update --enablerepo=centos-virt-xen-testing
To install from scratch:
* Install centos-release-xen from centos-extras
yum install
2013 Mar 09
1
kickstart %pre vda/sda troubles
hi,
The problem: for kvm/qemu disks are /dev/vdx devices when using the
virtio driver. For VMware, drives are /dev/sdx devices. For HP
servers, /dev/cciss/whatever (sorry, no ProLiant with an array
controller handy to check).
In order to have just one kickstart script to maintain, I am trying to
use the %pre section, but I'm getting a bit stuck. This is what I have:
%pre
if [ -b /dev/sda ]
then
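The snippet cuts off here, but the pattern it starts is well known: detect the disk in %pre, write the disk-dependent lines to a file, and pull that file into the main section with %include. A hypothetical completion (device names and sizes are assumptions):

```shell
%pre
# Detect which block device exists and emit matching partition lines.
if [ -b /dev/sda ]; then
    disk=sda
elif [ -b /dev/vda ]; then
    disk=vda
else
    disk=cciss/c0d0
fi
cat > /tmp/diskpart.ks <<EOF
clearpart --all --drives=$disk
part /boot --fstype=ext4 --size=512 --ondisk=$disk
part pv.01 --size=1 --grow --ondisk=$disk
EOF
%end

# ...and in the main kickstart section:
# %include /tmp/diskpart.ks
```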
2006 Mar 08
12
AW: Problem booting domU
Hello,
Can you check the following entries:
Old:
disk = ['phy:vm_volumes/root.dhcp1,sda1,w',
'phy:vm_volumes/var.dhcp1,sda2,w',
'phy:vm_volumes/swap.dhcp1,sda3,w']
New:
disk = ['phy:/vm_volumes/root.dhcp1,sda1,w',
'phy:/vm_volumes/var.dhcp1,sda2,w',
2019 May 08
5
kickstart compat C7 -> C8
Hi all,
I still use the following kickstart partition scheme for C7 installations (via virt-install):
Briefly, fixed size for /root and /boot, and the rest is filled up for /srv.
The same kickstart (apart from C7 using vda where F29 uses sda) doesn't work with Fedora 29 (EL8).
I get a "device is too small for new format" error. Any hints?
part /RESCUE --fstype="ext4"
2010 Oct 14
12
best practices in using shared storage for XEN Virtual Machines and auto-failover?
Hi all,
Can anyone pleas tell me what would be best practice to use shared
storage with virtual machines, especially when it involved high
availability / automated failover between 2 XEN servers?
i.e. if I set up 2x identical XEN servers, each with say 16GB RAM, 4x
1GB NICs, etc. Then I need the Xen domUs to auto-failover between
the 2 servers if either goes down (hardware
2008 Apr 28
1
Kickstart syntax for CentOS upgrade
I'd like to automate the upgrade from CentOS 4.6 to 5.1 as much as
possible. Since upgrades per se are not really recommended, I'm
planning to do a kickstart installation. However, I want to leave
one of the existing partitions (/scratch) untouched during the
installation. Here is my current layout (LogVol00 is swap so not
shown in the df output below):
# df -hl
2011 Feb 09
3
High Availability and Storage Cluster
Dear mailing list members,
There was an earlier thread about clusters, but I have a specific task.
There are two servers with CentOS 5.5 installed.
The servers run Zabbix (a monitoring system for traffic,
using MySQL), a wiki, and RT (all using Apache).
If one server becomes unavailable, these services need to be started
on the other server, with the data replicated.
Can I use