similar to: LVM thinpool snapshots broken in 6.5?

Displaying 20 results from an estimated 1100 matches similar to: "LVM thinpool snapshots broken in 6.5?"

2014 Jul 22
0
CentOS7+kickstart+thinpool = error/exception
Hi, I'm trying to create a kickstart file that uses a thinly provisioned LVM volume as root, but I've run into trouble. I installed a system manually using this option and this is the anaconda file produced: part /boot --fstype="xfs" --ondisk=vda --size=500 part pv.10 --fstype="lvmpv" --ondisk=vda --size=7691 volgroup centos_centos7 --pesize=4096 pv.10 logvol
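
The truncated logvol line above is where the thin-provisioning options would go. A minimal sketch of the relevant kickstart stanza, reusing the VG name from the snippet; the pool name pool00 and the sizes are placeholders:

    part /boot --fstype="xfs" --ondisk=vda --size=500
    part pv.10 --fstype="lvmpv" --ondisk=vda --size=7691
    volgroup centos_centos7 --pesize=4096 pv.10
    # the thin pool itself has no mount point, hence "none"
    logvol none --vgname=centos_centos7 --name=pool00 --thinpool --size=6000 --grow
    # the root LV is carved out of that pool
    logvol / --vgname=centos_centos7 --name=root --thin --poolname=pool00 --fstype=xfs --size=5000
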
2017 Nov 04
3
using LVM thin pool LVs as a storage for libvirt guest
Hello, as usual, I'm a few years behind trends, so I have learned about LVM thin volumes only recently, and I especially like that volumes can be "sparse" - that you can have a 1TB thin volume on a 250GB VG/thin pool. Is it somehow possible to use that with libvirt? I have found this post from 2014: https://www.redhat.com/archives/libvirt-users/2014-August/msg00010.html which says
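
A rough sketch of the usual workaround: create the thin LV with LVM and hand it to libvirt as a plain block device rather than through a storage pool. The VG, pool, and guest names are made up:

    # 1 TB virtual size backed by a 200 GB thin pool
    lvcreate --type thin-pool -L 200G -n tpool vg0
    lvcreate -V 1T --thin -n guest1-disk vg0/tpool
    # attach it to the guest as an ordinary block device
    virsh attach-disk guest1 /dev/vg0/guest1-disk vdb --persistent
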
2009 Sep 10
3
zfs send of a cloned zvol
Hi, I have a question. Let's say I have a zvol named vol1 which is a clone of a snapshot of another zvol (its origin property is tank/myvol@mysnap). If I send this zvol to a different zpool through a zfs send, does it send the origin too? That is, does an automatic promotion happen, or do I end up with a broken zvol? Best regards. Maurilio.
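
For reference, a sketch of the usual way to preserve the clone relationship on the receiving side: send the origin snapshot first, then the clone as an incremental from that origin. The @xfer snapshot and the pool name otherpool are examples:

    zfs send tank/myvol@mysnap | zfs recv otherpool/myvol
    zfs snapshot tank/vol1@xfer
    # incremental from the clone's origin snapshot
    zfs send -i tank/myvol@mysnap tank/vol1@xfer | zfs recv otherpool/vol1
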
2013 Sep 08
2
LVM Thin Volumes & Storage Pools
Hi, Is it possible to create a storage pool based on an LVM thin pool? I read a recent bugzilla, but the problem there was that the storage pool became unusable AFTER creating a thin pool, which is a different case. Thanks, Jorge
2016 Dec 05
0
Huge write amplification with thin provisioned logical volumes
Hi, I've noticed a huge write amplification problem with thinly provisioned logical volumes and I wondered if anyone can explain why it happens and if and how it can be fixed. The behavior is the same on CentOS 6.8 and CentOS 7.2. I have an NVMe card (Intel DC P3600, 2 TB) on which I create a thinly provisioned logical volume: pvcreate /dev/nvme0n1 vgcreate vgg /dev/nvme0n1 lvcreate
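
One knob that often matters here is the thin-pool chunk size, since a small random write can dirty (and, with zeroing enabled, pre-zero) a whole chunk. A sketch continuing the truncated commands above; the names pool/thinvol, the sizes, and the 64k chunk size are assumptions:

    pvcreate /dev/nvme0n1
    vgcreate vgg /dev/nvme0n1
    # small chunks, and no zeroing of newly provisioned chunks
    lvcreate --type thin-pool -L 1.5T --chunksize 64k --zero n -n pool vgg
    lvcreate -V 1T --thin -n thinvol vgg/pool
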
2015 Jun 26
1
LVM hatred, was Re: /boot on a separate partition?
On Fri, Jun 26, 2015 at 10:51 AM, Gordon Messmer <gordon.messmer at gmail.com> wrote: >> , or alternatively making the LVs >> redundant after install is a single command (each) and you can choose >> whether it should be mere mirroring or some MD managed RAID level (modulo >> the LVM RAID MD monitoring issue). > > > I hadn't realized that. That's an
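
The single command being referred to is presumably along these lines (VG/LV names are examples):

    # convert an existing linear LV to a two-way mirror after install
    lvconvert --type raid1 -m 1 vg0/root
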
2020 Jan 21
2
qemu hook: event for source host too
Hello, this is my first time posting on this mailing list. I wanted to suggest an addition to the qemu hook. I will explain it through my own use case. I use shared LVM storage as a volume pool between my nodes. I use lvmlockd in sanlock mode to protect against both LVM metadata corruption and concurrent volume mounting. When I run a VM on a node, I activate the desired LV with an exclusive lock
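
A rough sketch of what such a hook can look like, assuming one LV per guest named after the guest and a VG called shared_vg; the hook path and argument order are the standard libvirt ones, the rest is illustrative:

    #!/bin/sh
    # /etc/libvirt/hooks/qemu -- called as: <guest> <operation> <sub-op> ...
    GUEST=$1; OP=$2
    case "$OP" in
        prepare) lvchange -aey shared_vg/"$GUEST" ;;   # exclusive activation via lvmlockd/sanlock
        release) lvchange -an  shared_vg/"$GUEST" ;;   # drop the lock once the guest is gone
    esac
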
2009 Dec 04
30
ZFS send | verify | receive
If there were a "zfs send" datastream saved someplace, is there a way to verify the integrity of that datastream without doing a "zfs receive" and occupying all that disk space? I am aware that "zfs send" is not a backup solution, due to its vulnerability to even a single bit error, its lack of granularity, and other reasons. However ... There is an attraction to "zfs send" as an augmentation to the
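
One low-tech approach, assuming you can spare a pass over the stream: checksum it as it is written and recheck the saved file later (paths are examples). This catches bit-rot in the stored copy, though it says nothing about whether the stream was valid to begin with:

    zfs send tank/fs@snap | tee /backup/fs_snap.zfs | sha256sum
    # later: recompute and compare, no zfs receive and no extra pool space needed
    sha256sum /backup/fs_snap.zfs
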
2008 Jun 12
3
Detach specific partition LVM of XEN
Hi... I have had a problem when trying to detach one specific LVM partition from Xen, so I have been trying xm destroy <domain>, lvchange -an <lvm_partition>, lvremove -f... but I haven't had success. I even restarted the server with init 1 and nothing... I have seen two specific processes started, xenwatch and xenbus, but I am not sure if these processes have some action over
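
A sketch of the usual order of operations, assuming the LV is still attached to a running domain; the domain name, LV path, and device id are placeholders:

    xm block-list mydomU                      # note the DevId of the attached disk
    xm block-detach mydomU 51712              # release the backend before touching LVM
    lvchange -an /dev/vg0/mydomU_disk
    lvremove /dev/vg0/mydomU_disk
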
2006 May 23
19
LVM2 snapshots and XEN = problem :(
Hello guys. Does anyone use lvm2 backends for domU storage? I do, and I wanted to use lvm's snapshot feature to make backups of domUs in the background, but I got the following problem. When I create a snapshot LV and copy data from it to backup storage, it works perfectly. Then I do umount and then lvremove. lvremove asks me if I really want to remove the volume and then just hangs forever.
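
For context, the backup cycle being described is roughly this (names and sizes are examples); the dmsetup line at the end is a common first check when lvremove hangs on a snapshot:

    lvcreate -s -L 2G -n domu1_snap /dev/vg0/domu1_disk
    mount -o ro /dev/vg0/domu1_snap /mnt/backup
    rsync -a /mnt/backup/ /srv/backups/domu1/
    umount /mnt/backup
    lvremove -f /dev/vg0/domu1_snap
    # if it hangs, look for leftover device-mapper entries
    dmsetup info -c | grep domu1_snap
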
2017 Apr 08
2
lvm cache + qemu-kvm stops working after about 20GB of writes
Hello, I would really appreciate some help/guidance with this problem. First of all, sorry for the long message. I would file a bug, but do not know if it is my fault, dm-cache, qemu, or (probably) a combination of both. And I can imagine some of you have this setup up and running without problems (or maybe you think it works, just like I did, but it does not): PROBLEM: LVM cache writeback
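
The setup in question is roughly the following (VG/LV names and the SSD device are placeholders); the last line is the escape hatch if the cache misbehaves, since it flushes and detaches the cache without touching the origin LV:

    lvcreate --type cache-pool -L 20G -n cpool vg0 /dev/sdb
    lvconvert --type cache --cachepool vg0/cpool --cachemode writeback vg0/vmstore
    # detach (and flush) the cache, keeping the data on the origin LV
    lvconvert --splitcache vg0/vmstore
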
2010 Sep 11
5
vgrename, lvrename
Hi, I want to rename some volume groups and logical volumes. I was not surprised when it would not let me rename active volumes. So I booted up the system using the CentOS 5.5 LiveCD, but the LiveCD makes the logical volumes browsable using Nautilus, so they are still active and I can't rename them. Tried: /usr/sbin/lvchange -a n VolGroup00/LogVol00 but it still says: LV
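
From the LiveCD the sequence is roughly this, assuming everything Nautilus auto-mounted has been unmounted first; the new names are examples, and /etc/fstab plus the bootloader/initrd must be updated to match afterwards:

    umount /media/*                      # whatever the LiveCD auto-mounted
    vgchange -an VolGroup00
    vgrename VolGroup00 vg_system
    lvrename vg_system LogVol00 lv_root
    vgchange -ay vg_system
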
2020 Jan 22
2
Re: qemu hook: event for source host too
I could launch `lvchange -asy` on the source host manually, but the aim of hooks is to automatically execute such commands and avoid human errors. On 22 January 2020 at 09:18:54 GMT+01:00, Michal Privoznik <mprivozn@redhat.com> wrote: >On 1/21/20 9:10 AM, Guy Godfroy wrote: >> Hello, this is my first time posting on this mailing list. >> >> I wanted to suggest a
2010 Apr 27
7
Mapping inode numbers to file names
Let's suppose you rename a file or directory. /tank/widgets/a/rel2049_773.13-4/somefile.txt becomes /tank/widgets/b/foogoo_release_1.9/README. Let's suppose you are now working on widget B, and you want to look at the past zfs snapshot of README, but you don't remember where it came from. That is, you don't know the previous name or location where that
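
Since a rename within the same dataset keeps the object (inode) number, one brute-force answer is to look the number up in the live filesystem and search the snapshot for it; the snapshot name mysnap is an example:

    INUM=$(ls -i /tank/widgets/b/foogoo_release_1.9/README | awk '{print $1}')
    find /tank/widgets/.zfs/snapshot/mysnap -inum "$INUM"
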
2017 Jul 06
2
logical volume is unreadable
On 06.07.2017 15:35, Robert Nichols wrote: > On 07/06/2017 04:43 AM, Volker wrote: >> Hi all, >> >> one of my LVs has become completely inaccessible. Every read access >> results in a buffer io error: >> >> Buffer I/O error on dev dm-13, logical block 0, async page read >> >> this goes for every block in the LV. A ddrescue failed on every single
2017 Dec 11
2
active/active failover
Dear all, I'm rather new to glusterfs but have some experience running larger Lustre and BeeGFS installations. These filesystems provide active/active failover. Now, I discovered that I can also do this in glusterfs, although I didn't find detailed documentation about it. (I'm using glusterfs 3.10.8.) So my question is: can I really use glusterfs to do failover in the way described
2010 Oct 04
1
Mounting an lvm
I converted a system disk from a VirtualBox VM and added it to the config of a qemu VM. All seems well until I try to mount it. The virtual machine shows data for the disk image using commands like: pvs, lvs, lvdisplay xena-1, but there is no /dev/xena-1/root to be mounted. I also cannot seem to figure out whether the LVM-related modules are available for the virtual machine kernel. Has anyone
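
A sketch of the usual checklist inside the guest, assuming the VG really is named xena-1; dm-mod only matters if device-mapper was built as a module:

    modprobe dm-mod
    vgscan
    vgchange -ay xena-1
    mount /dev/xena-1/root /mnt
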
2015 Mar 17
3
unable to recover software raid1 install
Hello All, on a CentOS 5 system installed with software raid I'm getting: raid1: raid set md127 active with 2 out of 2 mirrors md: ... autorun DONE md: Autodetecting RAID arrays md: autorun ... md: autorun DONE trying to resume from /dev/md1 creating root device mounting root device mounting root filesystem ext3-fs: unable to read superblock mount:
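
A common fix from rescue mode (after chrooting into the installed system) is to assemble the array under the name the initrd expects and regenerate mdadm.conf and the initrd; the member devices here are assumptions:

    mdadm --assemble /dev/md1 /dev/sda1 /dev/sdb1
    mdadm --examine --scan > /etc/mdadm.conf
    mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)
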
2008 Mar 03
3
LVM and kickstarts ?
Hey, can anyone tell me why option 1 works and option 2 fails? I know I need swap and such; however, in troubleshooting this issue I trimmed down my config. It fails on trying to format my logical volume, because the mount point does not exist (/dev/volgroup/logvol). It seems that with option 2, the partitions are created and LVM is set up correctly. However the volgroup / logvolume was not
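
For comparison, a minimal layout that formats cleanly on this vintage of kickstart, using the names from the error path above; sizes and filesystem types are placeholders:

    part /boot --fstype ext3 --size=200
    part pv.01 --size=1 --grow
    volgroup volgroup pv.01
    logvol swap --vgname=volgroup --name=swaplv --fstype=swap --size=2048
    logvol /    --vgname=volgroup --name=logvol --fstype=ext3 --size=1 --grow
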
2017 Dec 11
0
active/active failover
Hi Stefan, I think what you propose will work, though you should test it thoroughly. I think more generally, "the GlusterFS way" would be to use 2-way replication instead of a distributed volume; then you can lose one of your servers without an outage, and re-synchronize when it comes back up. Chances are, if you weren't using the SAN volumes, you could have purchased two servers
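
The replicated layout being suggested is, in sketch form (volume, server, and brick names are examples); note that plain replica 2 is prone to split-brain, which is why an arbiter brick or replica 3 is often recommended:

    gluster volume create gv0 replica 2 server1:/bricks/b1 server2:/bricks/b1
    gluster volume start gv0
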