Displaying 20 results from an estimated 10000 matches similar to: "Problem with (C)LVM and XEN"
2010 Jul 20 (2 replies): LVM issue
Hi. We use AoE disks for some of our systems. A 15.65TB filesystem we have is currently full, so I extended the LV by a further 4TB, but resize4fs could not handle a filesystem over 16TB (CentOS 5.5). I then reduced the LV by the same amount and attempted to create a new LV, but I get this error message in the process:
lvcreate -v -ndata2 -L2T -t aoe
Test mode: Metadata will NOT be updated.
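The -t flag runs lvcreate in test mode, which is why it reports that metadata will not be updated. For reference, a hedged sketch of the sequence described, assuming the VG is named aoe (as in the command above) and the original LV is named data (hypothetical):

lvextend -L +4T /dev/aoe/data      # grow the LV by 4TB
resize4fs /dev/aoe/data            # fails: EL5-era ext resize tools stop at 16TB
lvreduce -L -4T /dev/aoe/data      # safe only because the filesystem was never grown
lvcreate -n data2 -L 2T aoe        # carve out a second 2TB LV; omit -t to apply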
2011 Feb 17 (0 replies): Can't create mirrored LVM: Insufficient suitable allocatable extents for logical volume: 2560 more required
I'm trying to set up an LVM mirror on 2 iSCSI targets, but can't.
I have added both /dev/sda and /dev/sdb as PVs to the LVM-RAID VG, and both
have 500GB of space.
[root@HP-DL360 by-path]# pvscan
PV /dev/cciss/c0d0p2 VG LVM lvm2 [136.59 GB / 2.69 GB free]
PV /dev/sda VG LVM-RAID lvm2 [500.00 GB / 490.00 GB free]
PV /dev/sdb VG LVM-RAID lvm2 [502.70 GB /
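An -m 1 mirror needs a full copy of the data on each leg plus a small mirror log, so a 500GB mirror cannot fit when one PV has only 490GB free; 2560 extents at the default 4MiB size is about 10GB, exactly the shortfall on /dev/sda. A hedged sketch of two ways around it (LV name hypothetical):

lvcreate -m 1 -L 480G -n mirror_lv LVM-RAID /dev/sda /dev/sdb   # size the mirror to the smaller leg
lvcreate -m 1 -L 485G --mirrorlog core -n mirror_lv LVM-RAID    # in-memory log, no log device needed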
2015 Nov 24 (0 replies): LVM - how to change an LV from linear to striped? Is it possible?
Hi All.
I am currently trying to change a logical volume from linear to striped
because I would like better write throughput, and I would like to
perform this change "live", without stopping access to the LV.
I have found two interesting examples:
http://community.hpe.com/t5/System-Administration/Need-to-move-the-data-from-Linear-LV-to-stripped-LV-on-RHEL-5-7/td-p/6134323
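In-place restriping of a plain linear LV was not supported on the LVM releases of that era; the usual options were to build a new striped LV and copy the data across, or on newer lvm2 to step through the raid takeover conversions described in lvmraid(7). A hedged sketch of the copy approach (all names and sizes hypothetical; the source must be quiesced, so it is not fully "live"):

lvcreate -i 2 -I 64 -L 100G -n data_striped vg /dev/sdb /dev/sdc   # 2 stripes, 64KiB stripe size
dd if=/dev/vg/data_linear of=/dev/vg/data_striped bs=4M            # copy with the LV unmounted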
2010 Jul 01 (0 replies): GNBD/LVM problem
Hello all:
I'm having a strange problem with GNBD and LVM on two fully updated
CentOS 5.5 x86_64 systems.
On node1, I have exported a gnbd volume.
lvcreate -L 500M -n mirrortest_lv01 mirrorvg
gnbd_serv
gnbd_export -d /dev/mirrorvg/mirrortest_lv01 -e node1_lv01
On node2 I have imported the volume:
gnbd_import -i node1
Next, on node2 I attempt to create a mirrored LV with the
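The message truncates at the mirror step; a hedged sketch of what that step typically looks like on node2, assuming the imported device shows up as /dev/gnbd/node1_lv01 (the device path is an assumption):

pvcreate /dev/gnbd/node1_lv01                    # make the imported device a PV
vgextend mirrorvg /dev/gnbd/node1_lv01           # add it alongside the local PV
lvcreate -m 1 -L 400M -n mirror_lv01 mirrorvg    # mirror across the local and gnbd legs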
2011 Feb 25 (3 replies): can't create large LVM, even though pvscan shows enough space left
I'm trying to create a 500GB LV on a 500GB physical volume, but can't:
[root@francois-pc ~]# pvscan
PV /dev/sdd VG freenas lvm2 [500.00 GB / 500.00 GB free]
PV /dev/sdc VG thecus lvm2 [1010.00 GB / 910.00 GB free]
PV /dev/mapper/ddf1_RAIDp2 VG VolGroup00 lvm2 [931.25 GB / 0 free]
Total: 3 [2.38 TB] / in use: 3 [2.38 TB]
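A PV advertised as 500.00 GB rarely yields exactly 500GB of allocatable extents once PV metadata and GB/GiB rounding are taken into account, so an exact -L 500G request can fall one extent short even though pvscan shows the space as free. Requesting extents instead of bytes sidesteps this; a minimal sketch (LV name hypothetical):

lvcreate -l 100%FREE -n data freenas   # allocate every free extent in the VG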
2017 Nov 07 (0 replies): Re: using LVM thin pool LVs as a storage for libvirt guest
Please don't use LVM thin for VMs. In our hosting in Russia we have
100-150 VPSes on each node with an LVM thin pool on SSD, and we see locks,
slowdowns and other bad things because of COW. After we switched to
qcow2 files on a plain ext4 filesystem on SSD, we are happy =).
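For reference, the alternative described is plain qcow2 files on a local filesystem; a hedged sketch of creating one (path and size hypothetical):

qemu-img create -f qcow2 -o preallocation=metadata /var/lib/libvirt/images/vm1.qcow2 100G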
2017-11-04 23:21 GMT+03:00 Jan Hutař <jhutar@redhat.com>:
> Hello,
> as usual, I'm a few years behind trends so I have learned
2017 Nov 07 (1 reply): Re: using LVM thin pool LVs as a storage for libvirt guest
Do you have some comparison of IO performance on a thin pool vs. a qcow2
file on a filesystem?
In my case each VM would have its own thin volume. I just want to
overcommit disk-space.
Regards,
Jan
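The overcommit itself is simple on the LVM side; a minimal sketch, with all names and sizes hypothetical:

lvcreate -L 200G --thinpool tpool vg0     # 200G of real backing space
lvcreate -V 1T --thin -n vm1 vg0/tpool    # 1TB thin volume; consumes pool space only as written
lvcreate -V 1T --thin -n vm2 vg0/tpool
lvs vg0/tpool                             # watch Data% so the pool never fills up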
On 2017-11-07 13:16 +0300, Vasiliy Tolstov wrote:
>Please don't use LVM thin for VMs. In our hosting in Russia we have
>100-150 VPSes on each node with an LVM thin pool on SSD, and we see locks,
>slowdowns
2017 Apr 10 (0 replies): lvm cache + qemu-kvm stops working after about 20GB of writes
Adding Paolo and Miroslav.
On Sat, Apr 8, 2017 at 4:49 PM, Richard Landsman - Rimote <richard@rimote.nl> wrote:
> Hello,
>
> I would really appreciate some help/guidance with this problem. First of
> all sorry for the long message. I would file a bug, but do not know if it
> is my fault, dm-cache, qemu or (probably) a combination of both. And I can
> imagine some of
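For context, an lvmcache stack like the one described is typically assembled along these lines (device names and sizes hypothetical):

lvcreate --type cache-pool -L 50G -n cpool vg /dev/nvme0n1   # cache pool on the SSD
lvconvert --type cache --cachepool vg/cpool vg/vm_disk       # attach it to the origin LV
lvchange --cachemode writeback vg/vm_disk                    # faster, but unsafe if the SSD dies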
2017 Nov 04 (3 replies): using LVM thin pool LVs as a storage for libvirt guest
Hello,
as usual, I'm a few years behind trends, so I have learned about LVM thin
volumes only recently. I especially like that volumes can be "sparse" -
that you can have a 1TB thin volume on a 250GB VG/thin pool.
Is it somehow possible to use that with libvirt?
I have found this post from 2014:
https://www.redhat.com/archives/libvirt-users/2014-August/msg00010.html
which says
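Even where libvirt's storage pool XML does not drive thin pools directly, a thin LV can be handed to a guest as an ordinary block device; a hedged sketch (domain, VG and LV names hypothetical):

lvcreate -V 1T --thin -n guest1 vg/tpool    # sparse 1TB volume on a smaller pool
virsh attach-disk guest1-domain /dev/vg/guest1 vdb --persistent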
2012 Jul 09 (6 replies): 3.5.0-rc6: btrfs and LVM snapshots -> wrong device name in /proc/mounts
Hi,
using btrfs with LVM snapshots seems to be confusing /proc/mounts:
after mounting a snapshot of an original filesystem, the device name of the
original filesystem is overwritten with that of the snapshot in /proc/mounts.
Steps to reproduce:
arnd@kallisto:/mnt$ sudo mount /dev/vg0/original /mnt/original
[ 107.041432] device fsid 5c3e8ca2-da56-4ade-9fef-103a6a8a70c2 devid 1 transid 4
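The steps truncate here; a hedged reconstruction of the rest of the reproduction, following the post's naming where shown (snapshot name hypothetical):

sudo lvcreate -s -L 1G -n snapshot /dev/vg0/original
sudo mount /dev/vg0/snapshot /mnt/snapshot
grep original /proc/mounts    # reportedly lists the snapshot's device for both mounts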
2017 Apr 20 (2 replies): lvm cache + qemu-kvm stops working after about 20GB of writes
Hello everyone,
Has anybody had the chance to test out this setup and reproduce the problem?
I assumed it would be something that's used often these days, and a
solution would benefit a lot of users. If I can be of any assistance,
please contact me.
2015 Dec 02 (0 replies): lvm snapshot
In journalctl I found:
modprobe: FATAL: Module dm-snapshot not found
...
Can't process LV root_snap: snapshot target support missing from kernel
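A hedged first check when the snapshot target is missing (the module loads as dm_snapshot; a FATAL from modprobe often means the running kernel no longer matches the installed module tree, e.g. after an update without a reboot):

lsmod | grep dm_snapshot    # is the module already loaded?
modprobe dm_snapshot        # try loading it; reboot into the matching kernel if this fails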
Quoting Tru Huynh <tru@centos.org>:
> On Wed, Dec 02, 2015 at 08:53:39PM +0100, Axel Glienke wrote:
>> Creating snapshot:
>>
>> [root@lvmtest ~]# lvcreate -L5G -s -n root_snap /dev/centos/root
>> Reducing
2015 Dec 02 (3 replies): lvm snapshot
On Wed, Dec 02, 2015 at 08:53:39PM +0100, Axel Glienke wrote:
> Creating snapshot:
>
> [root@lvmtest ~]# lvcreate -L5G -s -n root_snap /dev/centos/root
> Reducing COW size 5,00 GiB down to maximum usable size 2,94 GiB.
> Logical volume "root_snap" created.
> [root at lvmtest ~]# lvs
> LV VG Attr LSize Pool Origin Data% Meta% Move Log
2014 Aug 04 (0 replies): Re: libvirt and lvm thin pool
On 08/02/2014 04:24 PM, Vasiliy Tolstov wrote:
> Hi all. I'm using libvirt 1.2.6.
> I want to use LVM storage for my virtual machines,
> but I also want to use a newer lvm2 feature - thin pools. How can I do
> that in libvirt? If libvirt can't create one via the pool XML, is it
> possible (and how) to use this setup under libvirt?
>
The 'Thin Pool' is avoided by libvirt, but volumes
2015 Dec 02 (0 replies): lvm snapshot
Creating snapshot:
[root@lvmtest ~]# lvcreate -L5G -s -n root_snap /dev/centos/root
Reducing COW size 5,00 GiB down to maximum usable size 2,94 GiB.
Logical volume "root_snap" created.
[root@lvmtest ~]# lvs
LV        VG     Attr       LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root      centos owi-aos--- 2,93g
root_snap centos swi-a-s--- 2,94g
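The "Reducing COW size" message is expected: a snapshot's COW area never needs to exceed the origin's size plus a little metadata, so lvcreate caps the request. Sizing by percentage of the origin avoids over-asking; a minimal sketch:

lvcreate -s -l 100%ORIGIN -n root_snap /dev/centos/root
lvs -o lv_name,origin,data_percent centos    # monitor how full the snapshot is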
2007 Jul 23 (2 replies): GFS/LVM/RAID1 recovery question
I have a (CentOS4.5) cluster in which the servers mount a GFS partition
which is an LVM2 logical volume created as a mirror of two iSCSI-
connected drives (with a third for the log). The LV was created using a
command along the lines of:
lvcreate -m 1 ... /dev/sdb /dev/sdc /dev/sdd
where sd[bc] are the mirrored (iSCSI) PVs in the VG and sdd is the log.
I have this working and can write data
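On the recovery side, the usual tool when a mirror leg fails is lvconvert; a hedged sketch (VG and LV names hypothetical, PVs as in the post):

lvconvert --repair vg/gfs_lv                          # drop the failed leg, keep the LV usable
lvconvert -m 1 vg/gfs_lv /dev/sdb /dev/sdc /dev/sdd   # re-add the mirror once a PV is replaced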
2008 Jul 17 (2 replies): lvm errors after replacing drive in raid 10 array
I thought I'd test replacing a failed drive in a 4-drive raid 10 array on
a CentOS 5.2 box before it goes online and before a drive really fails.
I 'mdadm failed, removed', powered off, replaced the drive, partitioned with
sfdisk -d /dev/sda | sfdisk /dev/sdb, and finally 'mdadm add'ed.
Everything seems fine until I try to create a snapshot lv. (Creating a
snapshot lv
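Spelled out, the replacement procedure described is roughly the following (partition and array names are assumptions):

mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # retire the old drive
sfdisk -d /dev/sda | sfdisk /dev/sdb                 # clone the partition table to the new drive
mdadm /dev/md0 --add /dev/sdb1                       # re-add and let the array resync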
2009 Jun 05 (1 reply): DRBD+GFS - Logical Volume problem
Hi list.
I am working with DRBD (plus GFS, which brings in the DLM). The GFS setup
needs a clvmd configuration, so after synchronising my (two) /dev/drbd0
block devices, I start the clvmd service and try to create a clustered
logical volume. I get this:
On "alice":
[root@alice ~]# pvcreate /dev/drbd0
Physical volume "/dev/drbd0" successfully created
[root@alice ~]# vgcreate
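The output truncates at the vgcreate; a hedged sketch of the clustered VG step (VG and LV names hypothetical):

vgcreate -cy drbdvg /dev/drbd0    # -cy marks the VG clustered so clvmd coordinates access
lvcreate -L 10G -n gfs_lv drbdvg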
2006 Jun 07 (14 replies): HA Xen on 2 servers!! No NFS, special hardware, DRBD or iSCSI...
I've been brainstorming...
I want to create a 2-node HA active/active cluster (In other words I want to run a handful of
DomUs on one node and a handful on another). In the event of a failure I want all DomUs to fail
over to the other node and start working immediately. I want absolutely no
single-points-of-failure. I want to do it with free software and no special hardware. I want
2014 Oct 05 (0 replies): lvcreate error
Hello,
I am unable to create a new logical volume; I receive the following
error when using lvcreate:
# lvcreate -L 1g -n system3_root hm
device-mapper: resume ioctl on failed: Invalid argument
Unable to resume hm-system3_root (253:7)
Failed to activate new LV.
# pvs
PV VG Fmt Attr PSize PFree
/dev/sda4 hm lvm2 a-- 998.00g 864.75g
# vgs
VG #PV #LV #SN Attr VSize
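When lvcreate fails at the activation step like this, a hedged debugging checklist (the mapping name comes from the error above):

dmesg | tail                              # kernel-side device-mapper detail
dmsetup info hm-system3_root              # was a half-created mapping left behind?
dmsetup remove hm-system3_root            # clear it, then retry
lvcreate -vvvv -L 1g -n system3_root hm   # verbose retry to see the failing ioctl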