Displaying 20 results from an estimated 8000 matches similar to: "Problems with a logical pool creation"
2015 Apr 01
1
can't mount an LVM volume in CentOS 5.10
I have a degraded raid array (originally raid-10, now only two drives)
that contains an LVM volume. I can see in the appended text that the
Xen domains are there but I don't see how to mount them. No doubt this
is just ignorance on my part but I wonder if anyone would care to
direct me? I want to be able to retrieve dom-0 and one of the dom-Us
to do data recovery, the others are of
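A rough sketch of the usual sequence for getting at LVs on a degraded array
(the array, VG and LV names here are placeholders, not taken from the post):
  mdadm --assemble --run /dev/md0 /dev/sdb1 /dev/sdc1    # start the array even though it is degraded
  vgscan                                                 # rescan for volume groups on the assembled array
  vgchange -ay                                           # activate all logical volumes found
  lvscan                                                 # list the dom-0/dom-U LVs and their device paths
  mount -o ro /dev/VolGroup00/domU_root /mnt/recovery    # mount a guest LV read-only for recovery
If an LV holds a whole guest disk image rather than a bare filesystem,
kpartx -av on that LV exposes its partitions under /dev/mapper first.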
2017 Mar 18
1
Centos-6.8 fsck and lvms
I have a CentOS-6.8 system which has a suspected HDD failure. I have
booted it into rescue mode from a CentOS-6.5 minimal install CD in
order to run fsck -c on it. The system hosts several VMs. I have
activated the LVs associated with these VMs using pvscan -s ; vgscan ;
vgchange -ay. An lvscan shows the LVs as ACTIVE. None are mounted.
When I try to run fsck on any of them I see the
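For what it's worth, a sketch of checking active LVs (the VG/LV names are
placeholders); if an LV contains a whole guest disk image rather than a bare
filesystem, its partitions have to be mapped with kpartx before fsck will work:
  fsck -c /dev/vg_guests/lv_web            # check an LV that holds a filesystem directly
  kpartx -av /dev/vg_guests/lv_db          # map partitions inside an LV holding a full disk image
  fsck -c /dev/mapper/vg_guests-lv_db1     # check a mapped guest partition (exact mapper name may differ)
  kpartx -d /dev/vg_guests/lv_db           # remove the mappings when done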
2012 Jun 19
1
Basic shared storage + KVM
Hi,
I am trying to set up shared iSCSI storage to serve 6 KVM hypervisors
running CentOS 6.2.
I exported an LVM volume over iSCSI and configured virt-manager to see the
iSCSI space as LVM storage (a single storage pool).
I can create space on this LVM storage pool directly from virt-manager
and I am already running a couple of sample VMs that do migrate from
one hypervisor to the other.
This configuration
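The same kind of pool can also be defined from the shell with virsh; a minimal
sketch, assuming a target at 192.0.2.10 and a made-up IQN (the definition has
to be repeated on every hypervisor for migration to keep working):
  virsh pool-define-as shared-iscsi iscsi --source-host 192.0.2.10 \
        --source-dev iqn.2012-06.com.example:storage.lun1 --target /dev/disk/by-path
  virsh pool-start shared-iscsi
  virsh pool-autostart shared-iscsi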
2008 Jun 19
3
lvm with iscsi devices on boot
Hi All,
My CentOS 5.1 server is using iSCSI attached disks connecting
to a dual controller storage array. I have also configured multipathd
to manage the multiple paths. Everything works well, and on
boot the dev nodes are automatically created in /dev/mapper.
On these devices, I have created logical volumes using lvm2.
My problem is that lvm does not recognize these iscsi/multipath
volumes on
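A common workaround on CentOS 5 (only a sketch; the VG name and mount point
are made up) is to keep such filesystems out of the early boot path and
activate the VG once iscsi and multipathd are up, for example from rc.local:
  # /etc/fstab
  /dev/vg_san/lv_data  /data  ext3  _netdev,noauto  0 0
  # appended to /etc/rc.d/rc.local
  vgchange -ay vg_san      # activate the iSCSI/multipath-backed VG after the network storage is up
  mount /data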
2010 May 31
1
Working example of logical storage pool and volume creation?
Hi all,
Does anyone have a working example of creation of a logical storage pool
and volume?
I'm hitting a wall getting logical volumes to work on RHEL 6 beta.
There's a single drive I'm trying to set up (sdc) as a libvirt-managed
logical storage pool, but all volume creation on it fails.
Here's what I'm finding so far:
Prior to any storage pool work, only the host
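For comparison, a minimal sequence that normally works for a libvirt logical
pool on a spare disk (the pool name here is made up; sdc is from the post):
  virsh pool-define-as guests_lvm logical --source-dev /dev/sdc \
        --source-name guests_lvm --target /dev/guests_lvm
  virsh pool-build guests_lvm                  # runs pvcreate/vgcreate on /dev/sdc
  virsh pool-start guests_lvm
  virsh pool-autostart guests_lvm
  virsh vol-create-as guests_lvm guest01 10G   # creates an LV named guest01 in the VG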
2017 Sep 30
2
LVM not activating on reboot
Hi
I've recently rebuilt my home server using CentOS 7, and transplanted
over the main storage disks.
It's a 3-disk RAID 5 with an LVM volume group (vg03) on it.
Activating and mounting works fine:
# vgscan
  Reading volume groups from cache.
  Found volume group "vg03" using metadata type lvm2
# vgchange -ay
  1 logical volume(s) in volume group "vg03" now
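When a transplanted md+LVM stack only comes up after a manual vgchange, the
usual suspects are the array not being assembled at boot and a stale
initramfs; a sketch of the typical checks (not necessarily the fix here):
  mdadm --detail --scan >> /etc/mdadm.conf    # record the RAID5 array for boot-time assembly
  dracut -f                                   # rebuild the initramfs with the current mdadm/lvm config
It is also worth checking auto_activation_volume_list in /etc/lvm/lvm.conf,
which can prevent vg03 from being auto-activated if it is set.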
2018 May 24
2
[PATCH v2] daemon: Move lvmetad to early in the appliance boot process.
When the daemon starts up it creates a fresh (empty) LVM configuration
and starts up lvmetad (which depends on the LVM configuration).
However this appears to cause problems: Some types of PV seem to
require lvmetad and don't work without it
(https://bugzilla.redhat.com/show_bug.cgi?id=1581810). If we don't
start lvmetad earlier, the device nodes are not created.
Therefore move the
2020 Jan 06
4
can't boot after volume rename
I renamed my volume group with vgrename, however I didn't complete the other
steps, mainly updating fstab and the initramfs. Once I booted, I was dropped
into the dracut shell. From here I can see the newly renamed VG and I can run
lvm lvscan as well as activate it with lvm vgchange -ay.
However I can't figure out what to do next. I'm assuming I need to
regenerate the initramfs and then boot to change
2017 Apr 23
0
Proper way to remove a qemu-nbd-mounted volume using lvm
I either haven't searched for the right thing or the web doesn't contain
the answer.
I have used the following to mount an image and now I need to know the
proper way to reverse the process.
qemu-nbd -c /dev/nbd0 <qcow2 image using lvm>
vgscan --cache (had to use --cache to get the qemu-nbd volume to
be recognized, lvmetad is running)
vgchange -ay
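The reverse is essentially the same steps backwards; a short sketch, with the
guest's VG name as a placeholder:
  umount /mnt/guest            # unmount anything mounted from the guest's LVs
  vgchange -an vg_guest        # deactivate the guest's volume group
  qemu-nbd -d /dev/nbd0        # disconnect the NBD device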
2015 Jan 12
0
Re: Resizing lvm fails with fedora20
On 11.01.2015 02:57, Alex Regan wrote:
> Hi,
> I'm trying to resize a 15GB LVM root partition on a fedora20 server with
> a fedora20 guest and I'm having a problem. Is this supported on fedora20?
>
> I recall having a similar problem (maybe even exact same problem) all
> the way back in fedora16 or fedora17, but hoped/thought it would be
> fixed by now?
>
> #
2012 Nov 06
2
disk device lvm iscsi questions ....
Hi,
I have an iSCSI storage array which I attached to a new CentOS 6.3 server. I
added logical volumes as usual; the block devices (sdb & sdc) showed up in
dmesg, and I can mount and access the stored files.
Now we did a firmware update on that storage (while it was
unmounted/detached from the fileserver), and after rebooting the storage
and reattaching the iSCSI nodes I get new devices. (sdd &
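Since LVM identifies PVs by UUID rather than by device name, a rescan is
normally enough to pick the VG up again on the new sd* nodes (the VG/LV names
below are placeholders):
  pvscan                                   # rediscover the PVs on the new device nodes
  vgscan
  vgchange -ay vg_iscsi                    # reactivate the volume group
  mount /dev/vg_iscsi/lv_data /srv/data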
2013 Mar 23
2
"Can't find root device" with lvm root after moving drive on CentOS 6.3
I have an 8-core SuperMicro Xeon server with CentOS 6.3. The OS is
installed on a 120 GB SSD connected by SATA, the machine also contains
an Areca SAS controller with 24 drives connected. The motherboard is a
SuperMicro X9DA7.
When I installed the OS, I used the default options, which creates an
LVM volume group to contain / and /home, and keeps /boot and /boot/efi
outside the volume group.
2010 Mar 23
2
[PATCH] Remove initrd patching from oc-boot
Dracut includes what was being patched in
Signed-off-by: Mike Burns <mburns at redhat.com>
---
scripts/ovirt-config-boot | 47 ---------------------------------------------
1 files changed, 0 insertions(+), 47 deletions(-)
diff --git a/scripts/ovirt-config-boot b/scripts/ovirt-config-boot
index d13dad2..28d1572 100755
--- a/scripts/ovirt-config-boot
+++ b/scripts/ovirt-config-boot
@@
2015 Jan 11
2
Resizing lvm fails with fedora20
Hi,
I'm trying to resize a 15GB LVM root partition on a fedora20 server with
a fedora20 guest and I'm having a problem. Is this supported on fedora20?
I recall having a similar problem (maybe even exact same problem) all
the way back in fedora16 or fedora17, but hoped/thought it would be
fixed by now?
# virt-df -h test1-011015.img
Filesystem Size Used
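The usual libguestfs route for this is virt-resize onto a new, larger image;
a sketch, with the output file, sizes and the guest's VG/LV names assumed:
  truncate -s 25G test1-resized.img              # create a larger, sparse output image
  virt-resize --expand /dev/sda2 \
              --lv-expand /dev/fedora/root \
              test1-011015.img test1-resized.img # grow the PV partition and the root LV into it
  virt-df -h test1-resized.img                   # confirm the new sizes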
2011 Sep 17
0
XCP autostart and LVM disabled
With XCP how do we autostart a VM when a host boots up? In Xen I would have
made a symbolic link to the VM's config in /etc/xen/auto/
Also, does anyone know why LVM is disabled in /etc/rc.d/rc.sysinit in XCP
1.1 beta? I uncommented it, and when my host boots it does a vgchange -ay now
and the LVs get mounted from fstab. I can't think of a reason why this
isn't the default
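On XCP/XenServer, auto-start is normally controlled through other-config keys
rather than /etc/xen/auto; roughly (the UUIDs are placeholders):
  xe pool-param-set uuid=<pool-uuid> other-config:auto_poweron=true
  xe vm-param-set   uuid=<vm-uuid>   other-config:auto_poweron=true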
2018 May 24
0
Re: [PATCH v2] daemon: Move lvmetad to early in the appliance boot process.
On Thursday, 24 May 2018 16:01:22 CEST Richard W.M. Jones wrote:
> When the daemon starts up it creates a fresh (empty) LVM configuration
> and starts up lvmetad (which depends on the LVM configuration).
>
> However this appears to cause problems: Some types of PV seem to
> require lvmetad and don't work without it
> (https://bugzilla.redhat.com/show_bug.cgi?id=1581810). If
2020 Jan 07
0
can't boot after volume rename
Get a CentOS install media, boot from it and select Troubleshooting. Then mount your root LV, boot LV, /proc, /sys, /dev & /run (the last 4 with the "bind" mount option). Then chroot into the root LV's mount point, change the grub menu and run "dracut -f --regenerate-all".
The last step is to reboot and test.
Best Regards,
Strahil Nikolov
On Monday, 6 January 2020,
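As a rough sketch of those steps from the rescue environment (the device and
VG names are placeholders for whatever vgrename produced):
  mount /dev/mapper/newvg-root /mnt/sysimage
  mount /dev/sda1 /mnt/sysimage/boot                   # or the boot LV, if /boot lives on LVM
  for d in proc sys dev run; do mount --bind /$d /mnt/sysimage/$d; done
  chroot /mnt/sysimage
  vi /etc/fstab /etc/default/grub                      # replace the old VG name with the new one
  grub2-mkconfig -o /boot/grub2/grub.cfg
  dracut -f --regenerate-all
  exit                                                 # leave the chroot, then reboot and test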
2011 Jan 08
4
LiveCD System recovery - Mounting LVM?
Hi,
I am trying to recover data from my old system which had LVM. The disk had
two partitions - /dev/sda1 (boot, Linux) and /dev/sda2 (Linux LVM). I had
taken a backup of both partitions using dd.
Now I am booting off a CentOS live CD for system restore. I recreated the
partitions like on the previous system using fdisk and then used dd to dump all the
data onto it. I would like to mount sda2 as LVM, but I
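Once the partitions are back in place, activating and mounting the LVM side
typically looks like this (the VG/LV names are the old CentOS defaults, so
they may differ):
  pvscan                                    # /dev/sda2 should show up as a PV
  vgscan
  vgchange -ay                              # activate all volume groups that were found
  lvscan                                    # list the LVs and their device paths
  mount /dev/VolGroup00/LogVol00 /mnt/old-root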
2015 Nov 12
2
How to fix an incorrect storage pool?
I've created my storage pool incorrectly. I'm using LVM and I have a volume
group called vms-lvm.
When I look at it in virt-manager I see that the volumes it contains are
home, root and swap, so when I created the storage pool in virt-manager I
must have specified something incorrectly.
Unfortunately I can't find a way to correct this. If I try to destroy
(stop) the storage pool I
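From the command line the usual way to replace a mis-defined pool is to stop
it, undefine it and define it again against the right VG; a sketch reusing the
vms-lvm name from the post (destroying a pool does not touch the LVs in it):
  virsh pool-destroy vms-lvm          # stop the pool definition, not the data
  virsh pool-undefine vms-lvm         # remove the wrong definition
  virsh pool-define-as vms-lvm logical --source-name vms-lvm --target /dev/vms-lvm
  virsh pool-start vms-lvm
  virsh pool-autostart vms-lvm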
2011 Jul 22
0
Strange problem with LVM, device-mapper, and software RAID...
Running on an up-to-date CentOS 5.6 x86_64 machine:
[heller at ravel ~]$ uname -a
Linux ravel.60villagedrive 2.6.18-238.19.1.el5 #1 SMP Fri Jul 15 07:31:24 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
with a TYAN Computer Corp S4881 motherboard, which has an nVidia 4-channel
SATA controller. It also has a Marvell Technology Group Ltd.
88SX7042 PCI-e 4-port SATA-II (rev 02).
This machine has a 120G