Displaying 20 results from an estimated 20000 matches similar to: "ext3 on an logical volume - snapshot using how?"
2017 Nov 04
3
using LVM thin pool LVs as a storage for libvirt guest
Hello,
as usual, I'm a few years behind the trends, so I have only recently learned
about LVM thin volumes, and I especially like that volumes can be "sparse"
- that you can have a 1TB thin volume on a 250GB VG/thin pool.
Is it somehow possible to use that with libvirt?
I have found this post from 2014:
https://www.redhat.com/archives/libvirt-users/2014-August/msg00010.html
which says
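A minimal sketch of the setup being asked about, assuming a volume group named vg0 (all names and sizes are illustrative): the thin LV can be larger than the pool backing it, and the resulting block device can then be handed to a guest like any other LV.
pvcreate /dev/sdX
vgcreate vg0 /dev/sdX
lvcreate --type thin-pool -L 250G -n pool0 vg0    # the real space
lvcreate --thin -V 1T -n vm-disk1 vg0/pool0       # sparse 1TB thin volume
# attach the thin LV to a guest as a plain block device, e.g.:
virsh attach-disk guest1 /dev/vg0/vm-disk1 vdb --persistent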
2011 Feb 25
3
can't create large LVM, even though pvscan shows enough space left
I'm trying to create a 500GB LV on a 500GB physical volume, but can't:
[root at francois-pc ~]# pvscan
PV /dev/sdd VG freenas lvm2 [500.00 GB / 500.00 GB free]
PV /dev/sdc VG thecus lvm2 [1010.00 GB / 910.00 GB free]
PV /dev/mapper/ddf1_RAIDp2 VG VolGroup00 lvm2 [931.25 GB / 0 free]
Total: 3 [2.38 TB] / in use: 3 [2.38 TB]
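One common reason a size given in GB fails even though pvscan shows that much free is GB/GiB rounding plus LVM metadata overhead: the VG has slightly fewer extents than the nominal device size. A hedged sketch of allocating by extents instead of by size (VG name taken from the pvscan output above, LV name illustrative):
vgdisplay freenas | grep Free           # how many extents are really available
lvcreate -l 100%FREE -n backup freenas  # allocate exactly what is actually there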
2008 Aug 31
2
LVM and hotswap (USB/iSCSI) devices?
Hi list,
I'm having one of those 'I'm stupid' problems with LVM on CentOS 5.2.
I've been working with traditional partitions until now, but I've
finally been sold on the theoretical benefits of LVM; for now, though,
I only have a huge pile of broken filesystems to show for my efforts.
My scenario:
I attach a disk, either over USB or iSCSI.
I create a PV on this
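A sketch of the usual sequence for a hot-attached disk, plus the step that is easy to forget before unplugging (device, VG and mount names are illustrative):
pvcreate /dev/sdX
vgcreate usbvg /dev/sdX
lvcreate -L 100G -n data usbvg
mkfs.ext3 /dev/usbvg/data
mount /dev/usbvg/data /mnt/data
# before detaching the disk: unmount and deactivate the VG, otherwise the
# filesystem and the LVM metadata can end up inconsistent
umount /mnt/data
vgchange -an usbvg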
2006 Feb 09
1
Mount LVM
Hi guys,
sorry if this is trivial, but I have been googling for a couple of days and
have already compromised a test disk trying to figure this out, so I thought
it was time to ask for some advice. I have a disk that comes from a clean
and working CentOS 4.2 install, and I am trying to use an external USB-to-IDE
converter to mount it on another workstation running CentOS 4.2. I am
trying this to better
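A sketch of bringing up a foreign LVM disk attached over USB, assuming the default CentOS 4 names (if the workstation's own VG is also called VolGroup00, the imported one has to be renamed by UUID with vgrename first):
vgscan                         # find VGs on the newly attached PV
vgchange -ay VolGroup00        # activate its logical volumes
lvs                            # list them
mount /dev/VolGroup00/LogVol00 /mnt/olddisk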
2015 Dec 02
3
lvm snapshot
On Wed, Dec 02, 2015 at 08:53:39PM +0100, Axel Glienke wrote:
> Creating snapshot:
>
> [root at lvmtest ~]# lvcreate -L5G -s -n root_snap /dev/centos/root
> Reducing COW size 5,00 GiB down to maximum usable size 2,94 GiB.
> Logical volume "root_snap" created.
> [root at lvmtest ~]# lvs
> LV VG Attr LSize Pool Origin Data% Meta% Move Log
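For context, a sketch of the full snapshot lifecycle the quoted commands belong to (LV names follow the post; the -L value is only an upper bound, and LVM shrinks the COW area to what the VG can hold, as the quoted message shows):
lvcreate -L5G -s -n root_snap /dev/centos/root   # create the snapshot
lvs                                              # Data% shows how full the COW area is
lvconvert --merge /dev/centos/root_snap          # roll the origin back to the snapshot
# or simply discard the snapshot when it is no longer needed:
lvremove /dev/centos/root_snap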
2010 May 28
1
Multi-partition domain not recognizing extra partitions
Hello,
I have been having an annoying problem with a multi-partition domain. This
domain has a separate partition for : /, /home, /usr, /var, /tmp and /data.
I created it using xen-tools and the custom partitioning scheme option. The
partitions are inside LVM.
When I boot the domain, it mounts the / and swap partitions but none of the
others. Therefore it doesn't work... I get the
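A hedged troubleshooting sketch for this kind of problem, run inside the guest (device names are illustrative): the usual causes are either that the extra block devices are not exported to the domU at all, or that the guest's /etc/fstab never lists them.
cat /proc/partitions      # are the extra xvd*/sd* devices visible in the guest?
grep -v '^#' /etc/fstab   # are /home, /usr, /var, /tmp and /data listed?
mount -a                  # mount whatever fstab knows about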
2015 Dec 02
4
lvm snapshot
Hello
after creating an LVM snapshot and rebooting, all logical volumes are
missing; only swap is present.
lvcreate -L 5000M -s -n centos_h1-root_snap /dev/mapper/centos_h1-root
lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
centos_h1-root_snap centos_h1 swi-a-s--- 4,88g root 0,00
home
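A short sketch for checking whether the volumes are really gone after the reboot or merely inactive (VG name taken from the post):
lvs -a -o lv_name,vg_name,lv_attr   # an inactive LV shows '-' instead of 'a' in the attr state column
vgchange -ay centos_h1              # try to activate everything in the VG
lvscan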
2010 Feb 15
3
My first type/provider - does nothing...
Hi list,
I tried to write my first type and provider that should create logical
volumes. It seems like I'm missing something, as I get nothing when I use
it: no errors and no logical volume :-(
type/logicalvolume.rb:
=================
Puppet::Type.newtype(:logicalvolume) do
  @doc = "Manage logical volumes"
  ensurable
  newparam(:lvname) do
    desc "The logical
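For reference, a provider for a type like this ultimately just wraps the LVM command line; a hedged sketch of the calls it would need to make (all names are illustrative):
lvcreate -n mylv -L 10G myvg        # create
lvs --noheadings -o lv_name myvg    # exists? check
lvremove -f myvg/mylv               # destroy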
2011 Feb 17
0
Can't create mirrored LVM: Insufficient suitable allocatable extents for logical volume : 2560 more required
I'm trying to set up an LVM mirror on 2 iSCSI targets, but can't.
I have added both /dev/sda & /dev/sdb to the LVM-RAID VG, and both
have 500GB of space.
[root at HP-DL360 by-path]# pvscan
PV /dev/cciss/c0d0p2 VG LVM lvm2 [136.59 GB / 2.69 GB free]
PV /dev/sda VG LVM-RAID lvm2 [500.00 GB / 490.00 GB free]
PV /dev/sdb VG LVM-RAID lvm2 [502.70 GB /
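A common way around this error, sketched with the VG name from the pvscan output: a classic LVM mirror needs extra extents for a mirror log on a device that is not one of the legs, so either keep the log in memory or leave room for it (LV name and sizes are illustrative).
lvcreate -m1 --mirrorlog core -L 450G -n mirrorlv LVM-RAID   # in-memory mirror log
# or, keeping an on-disk log, request a percentage instead of an absolute size:
lvcreate -m1 -l 80%FREE -n mirrorlv LVM-RAID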
2006 Dec 06
5
LVM & volume groups
Can anybody tell me if it makes a difference if domUs have separate LVM
volume groups?
For instance, the Xen User Manual
( http://tx.downloads.xensource.com/downloads/docs/user/#SECTION03330000000000000000) says, when setting up a domU's disks with LVM, to do a
vgcreate vg /dev/sda10
Should each domU have its own volume group, or can all the domUs share
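There is no technical requirement for one VG per domU; a single shared VG with one LV per guest is a common layout. A minimal sketch (all names are illustrative):
vgcreate xenvg /dev/sda10
lvcreate -L 10G -n domu1-disk xenvg
lvcreate -L 10G -n domu2-disk xenvg
# each guest's config then points at its own LV, e.g. phy:/dev/xenvg/domu1-disk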
2020 Mar 24
2
Building a NFS server with a mix of HDD and SSD (for caching)
Hi list,
I'm building an NFS server on top of CentOS 8.
It has 8 x 8 TB HDDs and 2 x 500GB SSDs.
The spinning drives are in a RAID-6 array. They are 4K sector size.
The SSDs are in RAID-1 array and with a 512bytes sector size.
I want to use the SSDs as a cache using dm-cache. So here's what I've done
so far:
/dev/sdb ==> SSD raid1 array
/dev/sdd ==> spinning raid6 array
I've
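A hedged sketch of the lvmcache setup being described, with both arrays in one VG (VG/LV names and sizes are illustrative; the 512-byte vs 4K sector mismatch between the two arrays is worth checking before committing data):
vgcreate data /dev/sdd /dev/sdb
lvcreate -L 20T -n export data /dev/sdd               # the big LV on the HDD array
lvcreate --type cache-pool -L 400G -n cache0 data /dev/sdb
lvconvert --type cache --cachepool data/cache0 data/export
lvs -a -o +cache_mode data                            # verify the cache is attached and in the expected mode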
2023 May 19
3
[libguestfs PATCH 0/3] test "/dev/mapper/VG-LV" with "--key"
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2168506
This small set covers the new /dev/mapper/VG-LV "--key" ID format in the
libguestfs LUKS-on-LVM inspection test.
Thanks,
Laszlo
Laszlo Ersek (3):
update common submodule
LUKS-on-LVM inspection test: rename VGs and LVs
LUKS-on-LVM inspection test: test /dev/mapper/VG-LV translation
common
2009 Jun 05
1
DRBD+GFS - Logical Volume problem
Hi list.
I am dealing with DRBD (plus GFS, which needs DLM locking). A GFS setup needs a
CLVMD configuration. So, after synchronizing my (two) /dev/drbd0 block
devices, I start the clvmd service and try to create a clustered
logical volume. I get this:
On "alice":
[root at alice ~]# pvcreate /dev/drbd0
Physical volume "/dev/drbd0" successfully created
[root at alice ~]# vgcreate
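A sketch of where this is usually heading, assuming a two-node cluster with clvmd running on both nodes (VG/LV names and the size are illustrative):
pvcreate /dev/drbd0
vgcreate -cy drbdvg /dev/drbd0    # -cy marks the VG as clustered, managed via clvmd
lvcreate -L 100G -n gfslv drbdvg
# the GFS filesystem then goes on top of /dev/drbdvg/gfslv, using lock_dlm locking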
2023 May 19
3
[guestfs-tools PATCH 0/3] test "/dev/mapper/VG-LV" with "--key"
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2168506
This small set covers the new /dev/mapper/VG-LV "--key" ID format in the
LUKS-on-LVM virt-inspector test.
Thanks,
Laszlo
Laszlo Ersek (3):
update common submodule
inspector: rename VGs and LVs in LUKS-on-LVM test
inspector: test /dev/mapper/VG-LV translation in LUKS-on-LVM test
common
2007 Jan 04
2
Freeing pv space for snapshots
After upgrading my HD, I am now wishing I had left some space for doing
snapshots. Is there a way to free up some space so I can get some free
PE?
Right now I have this:
# vgdisplay
--- Volume group ---
VG Name VolGroup00
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 7
VG Access read/write
VG Status resizable
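If no PE is free, the usual way to get some back is to shrink a filesystem and then its LV, in that order. A hedged sketch for an ext3 LV (LV name and sizes are illustrative, and the filesystem must be unmounted first):
umount /home
e2fsck -f /dev/VolGroup00/LogVol01
resize2fs /dev/VolGroup00/LogVol01 180G   # safer to shrink the FS a bit below the target LV size
lvreduce -L 180G /dev/VolGroup00/LogVol01
resize2fs /dev/VolGroup00/LogVol01        # grow the FS back to fill the reduced LV
mount /home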
2010 May 31
1
Working example of logical storage pool and volume creation?
Hi all,
Does anyone have a working example of creation of a logical storage pool
and volume?
I'm hitting a wall getting logical volumes to work on RHEL 6 beta.
There's a single drive (sdc) that I'm trying to set up as a libvirt-managed
logical storage pool, but all volume creation on it fails.
Here's what I'm finding so far:
Prior to any storage pool work, only the host
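For comparison, a minimal working sequence for a libvirt "logical" pool on a dedicated disk (pool and volume names are illustrative):
virsh pool-define-as lvmpool logical --source-dev /dev/sdc --target /dev/lvmpool
virsh pool-build lvmpool          # runs pvcreate/vgcreate on the device
virsh pool-start lvmpool
virsh vol-create-as lvmpool vol1 10G
virsh vol-list lvmpool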
2008 Jul 17
2
lvm errors after replacing drive in raid 10 array
I thought I'd test replacing a failed drive in a 4-drive RAID 10 array on
a CentOS 5.2 box before it goes online and before a drive really fails.
I 'mdadm fail'ed and removed the drive, powered off, replaced it, partitioned it with
sfdisk -d /dev/sda | sfdisk /dev/sdb, and finally 'mdadm add'ed it.
Everything seems fine until I try to create a snapshot lv. (Creating a
snapshot lv
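The procedure described, sketched end to end (device and partition names are illustrative):
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# power off, swap the drive, power on, then copy the partition table:
sfdisk -d /dev/sda | sfdisk /dev/sdb
mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat          # let the rebuild complete before doing further LVM work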
2017 Apr 08
2
lvm cache + qemu-kvm stops working after about 20GB of writes
Hello,
I would really appreciate some help/guidance with this problem. First of
all, sorry for the long message. I would file a bug, but I do not know if
it is my fault, dm-cache, qemu, or (probably) a combination of them. And
I can imagine some of you have this setup up and running without
problems (or maybe you think it works, just like I did, but it does not):
PROBLEM
LVM cache writeback
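A few read-only commands that are useful when a cached LV starts misbehaving, sketched with a hypothetical vg/lv pair:
lvs -a -o +cache_mode,data_percent vg   # cache mode and how full the cache pool is
dmsetup status vg-lv                    # raw dm-cache counters (dirty blocks, hits/misses)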
2011 May 25
1
Hook script to preserve one partition untouched during install
This hook script tries to address the fact that a RHEV-H installation
will format all the storage devices available in the machine in order to
create HostVG and AppVG with all the available space. It may be the case
that RHEV-H needs to respect and co-exist with a proposed partitioning
scheme, not getting all the storage space for HostVG and AppVG volume
groups.
The proposed solution adds the
2007 Jul 23
2
GFS/LVM/RAID1 recovery question
I have a (CentOS4.5) cluster in which the servers mount a GFS partition
which is an LVM2 logical volume created as a mirror of two iSCSI-
connected drives (with a third for the log). The LV was created using a
command along the lines of:
lvcreate -m 1 ... /dev/sdb /dev/sdc /dev/sdd
where sd[bc] are the mirrored (iSCSI) PVs in the VG and sdd is the log.
I have this working and can write data
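For the recovery side of the question, a hedged sketch of one common path when a mirror leg (one of the iSCSI PVs) disappears, using a hypothetical VG/LV name:
vgreduce --removemissing vgname    # drop the missing PV from the VG
lvconvert --repair vgname/gfslv    # re-mirror once a replacement PV has been added back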