search for: lvchanged

Displaying 20 results from an estimated 50 matches for "lvchanged".

2020 Jan 21
2
qemu hook: event for source host too
Hello, this is my first time posting on this mailing list. I wanted to suggest an addition to the qemu hook. I will explain it through my own use case. I use a shared LVM storage as a volume pool between my nodes. I use lvmlockd in sanlock mode to protect against both LVM metadata corruption and concurrent volume mounting. When I run a VM on a node, I activate the desired LV with an exclusive lock
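For illustration, a minimal /etc/libvirt/hooks/qemu sketch along these lines could take the exclusive lock before the guest starts and drop it after it stops; the VG name shared_vg is hypothetical and the LV is assumed to be named after the guest:

    #!/bin/sh
    # Illustrative sketch only: activate the guest's LV exclusively on start, release on stop.
    guest="$1"; op="$2"
    case "$op" in
      prepare) lvchange -aey "shared_vg/$guest" ;;   # exclusive activation via lvmlockd/sanlock
      release) lvchange -an  "shared_vg/$guest" ;;   # deactivate, releasing the lock
    esac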
2006 Jul 27
1
Bug: lvchange delayed until re-boot. System lock up experienced.
Did a search for LVM at the CentOS bugzilla. Nothing seems to match this scenario. If no one contradicts me, I'll also post this in the bug reporting system. Wanted to a) get confirmation, if possible, before bugging it and b) warn other souls who may be adventurous too! Summary: failings in LVM and the kernel(?) seem to make a "freeze" possible. 1) lvchange --permission=r seems to
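For reference, the commands in question look like this (vg00/data is a made-up LV; the report above suggests the permission change may not take effect until the LV is deactivated and reactivated):

    lvchange --permission r  vg00/data    # mark the LV read-only
    lvchange --permission rw vg00/data    # revert to read-write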
2008 Jun 12
3
Detach specific partition LVM of XEN
Hi... I have had a problem when trying to detach one specific LVM partition from Xen, so I have been trying xm destroy <domain>, lvchange -an <lvm_partition>, lvremove -f.... So I haven't had success. I even restarted the server with init 1 and nothing... I have seen two specific processes started, xenwatch and xenbus, but I am not sure whether these processes have some action over
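One hedged sequence that is sometimes suggested in this situation (domain and LV names below are placeholders) is to detach the block device from the domU first and only then deactivate the LV:

    xm block-list mydomu                  # find the device ID of the attached disk
    xm block-detach mydomu <devid>        # detach it from the domU
    lvchange -an vg_xen/mydomu_disk       # the LV can usually be deactivated once nothing uses it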
2020 Jan 22
2
Re: qemu hook: event for source host too
I could launch `lvchange -asy` on the source host manually, but the aim of hooks is to automatically execute such commands and avoid human errors. On 22 January 2020 09:18:54 GMT+01:00, Michal Privoznik <mprivozn@redhat.com> wrote: >On 1/21/20 9:10 AM, Guy Godfroy wrote: >> Hello, this is my first time posting on this mailing list. >> >> I wanted to suggest a
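For context, the two activation modes being discussed look like this with lvmlockd (vg/lv is a placeholder):

    lvchange -aey vg/lv    # exclusive activation on the host that runs the VM
    lvchange -asy vg/lv    # shared activation, e.g. on both hosts during a live migration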
2020 Jan 22
0
Re: qemu hook: event for source host too
On 1/21/20 9:10 AM, Guy Godfroy wrote: > Hello, this is my first time posting on this mailing list. > > I wanted to suggest a addition to the qemu hook. I will explain it > through my own use case. > > I use a shared LVM storage as a volume pool between my nodes. I use > lvmlockd in sanlock mode to protect both LVM metadata corruption and > concurrent volume mounting.
2010 Sep 11
5
vgrename, lvrename
Hi, I want to rename some volume groups and logical volumes. I was not surprised when it would not let me rename active volumes. So I booted up the system using the CentOS 5.5 LiveCD, but the LiveCD makes the logical volumes browsable using Nautilus, so they are still active and I can't rename them. Tried: /usr/sbin/lvchange -a n VolGroup00/LogVol00 but it still says: LV
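A rough sketch of the usual order of operations, assuming nothing has the volumes mounted any more (e.g. after unmounting them from the LiveCD desktop):

    lvchange -an VolGroup00/LogVol00          # deactivate the LV
    lvrename VolGroup00 LogVol00 LogVolNew    # lvrename VG old-name new-name
    vgchange -an VolGroup00                   # the whole VG must be inactive to rename it
    vgrename VolGroup00 vg_new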
2017 Dec 11
2
active/active failover
Dear all, I'm rather new to glusterfs but have some experience running larger Lustre and BeeGFS installations. These filesystems provide active/active failover. Now, I discovered that I can also do this in glusterfs, although I didn't find detailed documentation about it. (I'm using glusterfs 3.10.8) So my question is: can I really use glusterfs to do failover in the way described
2014 Jan 09
1
LVM thinpool snapshots broken in 6.5?
Hi, I just installed a CentOS 6.5 system with the intention of using thinly provisioned snapshots. I created the volume group, a thinpool and then a logical volume. All of that works fine, but when I create a snapshot "mysnap", the snapshot volume gets displayed in the "lvs" output with the correct information, but apparently no device nodes are created under
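Thin snapshots are created with the activation-skip flag set by default, which is one plausible explanation for the missing device nodes; a sketch of the relevant options (vg/mysnap and vg/thinlv are placeholders):

    lvchange -ay -K vg/mysnap              # -K ignores the activation-skip flag so the node gets created
    lvcreate -s -kn -n mysnap vg/thinlv    # or create the snapshot without the skip flag in the first place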
2011 Oct 27
1
delete lvm problem: exited with non-zero status 5 and signal 0
Hi, I use libvirt-python to manage my virtual machines. When I delete a volume using vol.delete(0), sometimes it reports the following error: libvirtError: internal error '/sbin/lvremove -f /dev/vg.vmms/lvm-v097222.sqa.cm4' exited with non-zero status 5 and signal 0: Can't remove open logical volume
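A hedged way to see what still holds the volume open before retrying the delete (the LV path is taken from the error above):

    lvdisplay /dev/vg.vmms/lvm-v097222.sqa.cm4 | grep '# open'   # a non-zero open count means something still uses it
    fuser -vm /dev/vg.vmms/lvm-v097222.sqa.cm4                   # which processes, if the LV is mounted somewhere
    lvchange -an /dev/vg.vmms/lvm-v097222.sqa.cm4                # deactivate, then retry lvremove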
2017 Jul 06
2
logical volume is unreadable
On 06.07.2017 15:35, Robert Nichols wrote: > On 07/06/2017 04:43 AM, Volker wrote: >> Hi all, >> >> one of my LVs has become completely inaccessible. Every read access >> results in a buffer I/O error: >> >> Buffer I/O error on dev dm-13, logical block 0, async page read >> >> this goes for every block in the LV. A ddrescue failed on every single
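A couple of non-destructive checks that are sometimes useful here (the LV and disk names are placeholders):

    lvs -a -o +devices vg/badlv    # which physical volume(s) back the failing LV
    smartctl -a /dev/sdX           # check the underlying disk for media errors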
2017 Dec 11
0
active/active failover
Hi Stefan, I think what you propose will work, though you should test it thoroughly. I think more generally, "the GlusterFS way" would be to use 2-way replication instead of a distributed volume; then you can lose one of your servers without an outage and re-synchronize when it comes back up. Chances are that if you weren't using the SAN volumes, you could have purchased two servers
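For illustration, a two-way replicated volume is created roughly like this (hostnames and brick paths are made up):

    gluster volume create gv0 replica 2 server1:/bricks/b1 server2:/bricks/b1
    gluster volume start gv0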
2017 Dec 12
1
active/active failover
Hi Alex, thank you for the quick reply! Yes, I'm aware that using "plain" hardware with replication is more what GlusterFS is for. I cannot talk about prices here in detail, but for me it more or less evens out. Moreover, I have more SAN that I'd rather re-use (because of Lustre) than buy new hardware. I'll test more to understand what precisely "replace-brick"
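For reference, the command being tested has roughly this shape (volume, host and brick names are placeholders):

    gluster volume replace-brick gv0 oldhost:/bricks/b1 newhost:/bricks/b1 commit force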
2018 Jul 30
2
Issues booting centos7 [dracut is failing to enable centos/root, centos/swap LVs]
Hello, I'm having a strange problem booting a new CentOS 7 installation. Below is some background on this. [I have attached the tech details at the bottom of this message] I started a new CentOS 7 installation on a VM; so far all good, the OS boots fine. Then I decided to increase the VM disk size (initially 10G) to 13G. Powered off the VM, increased the vhd via the hypervisor, booted from CentOS
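Two hedged ways to get the LVs activated at boot in a situation like this (the VG/LV names follow the subject line):

    # from the dracut emergency shell
    lvm vgchange -ay centos
    # or list the LVs explicitly on the kernel command line
    rd.lvm.lv=centos/root rd.lvm.lv=centos/swap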
2008 Mar 03
3
LVM and kickstarts ?
Hey, can anyone tell me why option 1 works and option 2 fails? I know I need swap and such; however, in troubleshooting this issue I trimmed down my config. It fails on trying to format my logical volume, because the mount point does not exist (/dev/volgroup/logvol). It seems that with option 2, the partitions are created and LVM is set up correctly. However, the volgroup / logvolume was not
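A minimal kickstart partitioning sketch matching the names mentioned above (sizes are arbitrary, swap omitted as in the trimmed config):

    part /boot --fstype=ext3 --size=200
    part pv.01 --size=1 --grow
    volgroup volgroup pv.01
    logvol / --vgname=volgroup --name=logvol --fstype=ext3 --size=1 --grow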
2012 Jul 10
1
can NOT delete LV (in use) problem...
We have CentOS 5.6 on a Dell server. I created a VG and LV on one SSD disk. After a couple of weeks I decided to delete it. I unmounted the file system but could not delete the LV. It says "in use". I tried the following but it still does NOT work: # lvchange -an /dev/VG0-SSD910/LV01-SSD910 LV VG0-SSD910/LV01-SSD910 in use: not deactivating # kpartx -d /dev/VG0-SSD910/LV01-SSD910 # lvchange -an
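A hedged way to check what is still holding the device open before retrying (device-mapper doubles the dashes in names like VG0-SSD910/LV01-SSD910):

    dmsetup info -c | grep SSD910       # the Open column shows whether something still holds the device
    ls -l /sys/block/dm-*/holders/      # a leftover kpartx partition mapping stacked on the LV would show up here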
2008 Mar 05
1
LVM: how do I change the UUID of a LV?
I know how to change the UUID of Physical Volumes and Volume Groups, but when I try to do the same for a Logical Volume, lvchange complains that "--uuid" is not an option. Here is how I've been changing the others (note that "--uuid" does not appear in the man pages for pvchange and vgchange for lvm2-2.02.26-3.el5): pvchange --uuid {pv dev} vgchange --uuid {vg name} Any
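There is indeed no lvchange --uuid; one unsupported workaround that is sometimes mentioned is to edit the LV's id in a metadata backup and restore it (myvg is a placeholder, and the VG's LVs should be inactive):

    vgcfgbackup -f /tmp/myvg.cfg myvg     # dump the VG metadata as text
    # edit the LV's "id = ..." line in /tmp/myvg.cfg
    vgcfgrestore -f /tmp/myvg.cfg myvg    # write the edited metadata back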
2010 Oct 04
1
Mounting an lvm
I converted a system disk from a VirtualBox VM and added it to the config of a qemu VM. All seems well until I try to mount it. The virtual machine shows data for the disk image using commands like: pvs, lvs, lvdisplay xena-1, but there is no /dev/xena-1/root to be mounted. I also cannot seem to figure out whether the LVM-related modules are available for the virtual machine kernel. Has anyone
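If the VG is visible but its device nodes are missing, activating it is usually all that is needed (names follow the post):

    vgchange -ay xena-1            # creates the /dev/xena-1/* nodes
    mount /dev/xena-1/root /mnt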
2006 May 23
19
LVM2 snapshots and XEN = problem :(
Hello guys, does anyone use lvm2 backends for domU storage? I do, and I wanted to use lvm's snapshot feature to make backups of domUs in the background, but I got the following problem. When I create a snapshot LV and copy data from it to backup storage, it works perfectly. Then I do umount and then lvremove. lvremove asks me if I really want to remove the volume and then just hangs forever.
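For context, the backup workflow being described looks roughly like this (VG, LV, and paths are placeholders):

    lvcreate -s -L 2G -n domu1-snap /dev/vg0/domu1   # snapshot of the running domU's disk
    mount -o ro /dev/vg0/domu1-snap /mnt/backup
    rsync -a /mnt/backup/ /backup/domu1/
    umount /mnt/backup
    lvremove -f /dev/vg0/domu1-snap                  # -f skips the prompt; this is the step reported to hang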
2020 Jan 22
2
Re: qemu hook: event for source host too
That's right, I also need that second hook event. For your information, for now I manage locks manually or via Ansible. To make the hook manage locks, I still need to find a secure way to run LVM commands from a non-root account, but that is another problem. On 22 January 2020 10:24:53 GMT+01:00, Michal Privoznik <mprivozn@redhat.com> wrote: >On 1/22/20 9:23 AM, Guy Godfroy
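One common, hedged approach to the non-root problem is a narrow sudoers entry for just the needed commands (the user name and VG are hypothetical; keep argument wildcards as tight as possible):

    # /etc/sudoers.d/lvm-hook
    hookuser ALL=(root) NOPASSWD: /usr/sbin/lvchange -aey shared_vg/*, /usr/sbin/lvchange -asy shared_vg/*, /usr/sbin/lvchange -an shared_vg/*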
2015 Mar 17
3
unable to recover software raid1 install
Hello all, on a CentOS 5 system installed with software RAID I'm getting: raid1: raid set md127 active with 2 out of 2 mirrors md: .... autorun DONE md: Autodetecting RAID arrays md: autorun ..... md: autorun DONE Trying to resume from /dev/md1 creating root device mounting root device mounting root filesystem ext3-fs: unable to read superblock mount:
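A few non-destructive checks that are often tried from a rescue environment in this situation (device names are guesses based on the messages above):

    mdadm --assemble --scan                   # reassemble the arrays
    mdadm --detail /dev/md1                   # confirm the array is clean and both mirrors are present
    dumpe2fs /dev/md1 | grep -i superblock    # list backup superblock locations
    e2fsck -b 32768 /dev/md1                  # fsck using a backup superblock (location depends on block size)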