search for: lvchang

Displaying 20 results from an estimated 50 matches for "lvchang".

2020 Jan 21
2
qemu hook: event for source host too
...qemu hook. I will explain it through my own use case. I use a shared LVM storage as a volume pool between my nodes. I use lvmlockd in sanlock mode to protect against both LVM metadata corruption and concurrent volume mounting. When I run a VM on a node, I activate the desired LV with an exclusive lock (lvchange -aey). When I stop the VM, I deactivate the LV, effectively releasing the exclusive lock (lvchange -an). When I migrate a VM (both live and offline), the LV has to be activated on both source and target nodes, so I have to use a shared lock (lvchange -asy). That's why I need a hook event o...
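A minimal sketch of the activation sequence described above; the VG/LV pair vmvg/vm01 is a placeholder:
lvchange -aey vmvg/vm01    # exclusive activation before starting the VM
lvchange -an vmvg/vm01     # deactivate after the VM stops, releasing the lock
lvchange -asy vmvg/vm01    # shared activation on source and target during migration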
2006 Jul 27
1
Bug: lvchange delayed until re-boot. System lock up experienced.
...ch this scenario. If no one contradicts me, I'll also post this in the bug reporting system. I wanted to a) get confirmation, if possible, before filing the bug, and b) warn other souls that may be adventurous too! Summary: failings in LVM and the kernel(?) seem to make a "freeze" possible. 1) lvchange --permission=r seems not to take effect until reboot, even though lvdisplay says it has taken effect; 2) prior to reboot, mount will mount an LV that was set read-only, give no warning that it is read-only, and mount it r/w. The opposite also occurs: if the LV was ro and it is changed to...
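A hedged sketch of the sequence under discussion, assuming a placeholder LV vg0/data that can be unmounted first; lvchange --refresh reloads the device-mapper table so the permission change is not deferred:
umount /mnt/data
lvchange --permission r vg0/data
lvchange --refresh vg0/data
lvdisplay vg0/data | grep 'LV Write Access'   # should now report "read only"
mount -o ro /dev/vg0/data /mnt/data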
2008 Jun 12
3
Detach specific partition LVM of XEN
Hi... I have had a problem when trying to detach one specific LVM partition from Xen. I have tried xm destroy <domain>, lvchange -an <lvm_partition>, lvremove -f... with no success. I even restarted the server with init 1, and nothing. I have seen two processes running, xenwatch and xenbus, but I am not sure whether these processes act on the LVM partitions used by Xen. I need to know how can I...
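One hedged way to check what still holds such an LV open before deactivating it; the domain and VG/LV names are placeholders:
xm destroy mydomU                    # make sure the guest is really gone
dmsetup info /dev/vg_xen/lv_domU     # an "Open count" above 0 means something still holds the device
fuser -vm /dev/vg_xen/lv_domU        # list any processes with the device open
lvchange -an vg_xen/lv_domU && lvremove vg_xen/lv_domU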
2020 Jan 22
2
Re: qemu hook: event for source host too
I could launch `lvchange -asy` on the source host manually, but the aim of hooks is to automatically execute such commands and avoid human errors. On 22 January 2020 at 09:18:54 GMT+01:00, Michal Privoznik <mprivozn@redhat.com> wrote: >On 1/21/20 9:10 AM, Guy Godfroy wrote: >> Hello, this is my first time...
2020 Jan 22
0
Re: qemu hook: event for source host too
...ugh my own use case. > > I use a shared LVM storage as a volume pool between my nodes. I use > lvmlockd in sanlock mode to protect against both LVM metadata corruption and > concurrent volume mounting. > > When I run a VM on a node, I activate the desired LV with an exclusive lock > (lvchange -aey). When I stop the VM, I deactivate the LV, effectively > releasing the exclusive lock (lvchange -an). > > When I migrate a VM (both live and offline), the LV has to be activated > on both source and target nodes, so I have to use a shared lock > (lvchange -asy). That's...
2010 Sep 11
5
vgrename, lvrename
...volume groups and logical volumes. I was not surprised when it would not let me rename active volumes. So I booted up the system using the CentOS 5.5 LiveCD, but the LiveCD makes the logical volumes browsable using Nautilus, so they are still active and I can't rename them. Tried: /usr/sbin/lvchange -a n VolGroup00/LogVol00 but it still says: LV VolGroup00/LogVol00 in use: not deactivating Did some googling and found out that other folks have had problems with mkinitrd, but I haven't gotten that far yet. Made a wild guess and killed my nautilus process, a lot of stuff disappeared f...
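One hedged sequence from a rescue environment, reusing the names above; the new names newlv/newvg are placeholders, and the key point is that nothing may have the LVs mounted or open:
umount /dev/VolGroup00/LogVol00    # undo whatever the desktop auto-mounted
vgchange -an VolGroup00            # deactivate the whole group
lvrename VolGroup00 LogVol00 newlv
vgrename VolGroup00 newvg
vgchange -ay newvg
# remember to update /etc/fstab and grub.conf and to rebuild the initrd (mkinitrd) before rebooting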
2017 Dec 11
2
active/active failover
...t/pollux/brick # failover: let's artificially fail one server by killing one glusterfsd: [root at qlogin] systemctl status glusterd [root at qlogin] kill -9 <pid/of/glusterfsd/running/brick/castor> # unmount brick [root at qlogin] umount /glust/castor/ # deactivate LV [root at qlogin] lvchange -a n vgosb06vd05/castor ### now do the failover: # activate same storage on other server: [root at gluster2] lvchange -a y vgosb06vd05/castor # mount on other server [root at gluster2] mount /dev/mapper/vgosb06vd05-castor /glust/castor # now move the "failed" brick to the other...
2014 Jan 09
1
LVM thinpool snapshots broken in 6.5?
Hi, I just installed a CentOS 6.5 system with the intention of using thinly provisioned snapshots. I created the volume group, a thin pool, and then a logical volume. All of that works fine, but when I create a snapshot "mysnap", the snapshot volume is displayed in the "lvs" output with the correct information, yet apparently no device nodes are created under
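One likely explanation, hedged and not confirmed in this excerpt: newer lvm2 releases set the "activation skip" flag on thin snapshots, so they get no /dev node until -K is passed. A minimal sketch with placeholder names vg0/mythinlv:
lvcreate -s --name mysnap vg0/mythinlv       # snapshot of a thin LV
lvs -o lv_name,lv_attr vg0                   # a "k" in the attr column means skip-activation is set
lvchange -ay -K vg0/mysnap                   # -K / --ignoreactivationskip activates it anyway
lvcreate -s -kn --name mysnap vg0/mythinlv   # or create the snapshot without the flag in the first place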
2011 Oct 27
1
delete lvm problem: exited with non-zero status 5 and signal 0
...nal error '/sbin/lvremove -f /dev/vg.vmms/lvm-v097222.sqa.cm4' exited with non-zero status 5 and signal 0: Can't remove open logical volume "lvm-v097222.sqa.cm4". Then I go to the host and run the command "sudo lvchange -a n /dev/vg.vmms/lvm-v097222.sqa.cm4", and after that the volume can be deleted with vol.delete(0). But this does not happen often, just once in nearly 8. I know the delete function virStorageBackendLogicalDeleteVol looks like this: { { const char *cmdargv[] = { LVREMOVE, "-f"...
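A hedged manual check for the intermittent "in use" case, reusing the paths from the message above:
dmsetup info /dev/vg.vmms/lvm-v097222.sqa.cm4    # an "Open count" above 0 means something still holds the LV
lvchange -an /dev/vg.vmms/lvm-v097222.sqa.cm4    # deactivate, as in the manual workaround above
lvremove -f /dev/vg.vmms/lvm-v097222.sqa.cm4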
2017 Jul 06
2
logical volume is unreadable
On 06.07.2017 15:35, Robert Nichols wrote: > On 07/06/2017 04:43 AM, Volker wrote: >> Hi all, >> >> one of my LVs has become completely inaccessible. Every read access >> results in a buffer I/O error: >> >> Buffer I/O error on dev dm-13, logical block 0, async page read >> >> this goes for every block in the LV. A ddrescue failed on every single
2017 Dec 11
0
active/active failover
...'s artificially fail one server by killing one glusterfsd: > [root at qlogin] systemctl status glusterd > [root at qlogin] kill -9 <pid/of/glusterfsd/running/brick/castor> > > # unmount brick > [root at qlogin] umount /glust/castor/ > > # deactivate LV > [root at qlogin] lvchange -a n vgosb06vd05/castor > > > ### now do the failover: > > # activate same storage on other server: > [root at gluster2] lvchange -a y vgosb06vd05/castor > > # mount on other server > [root at gluster2] mount /dev/mapper/vgosb06vd05-castor /glust/castor > > # now...
2017 Dec 12
1
active/active failover
...artificially fail one server by killing one glusterfsd: > [root at qlogin] systemctl status glusterd > [root at qlogin] kill -9 <pid/of/glusterfsd/running/brick/castor> > > # unmount brick > [root at qlogin] umount /glust/castor/ > > # deactivate LV > [root at qlogin] lvchange -a n vgosb06vd05/castor > > > ### now do the failover: > > # activate same storage on other server: > [root at gluster2] lvchange -a y vgosb06vd05/castor > > # mount on other server > [root at gluster2] mount /dev/mapper/vgosb06vd05-castor /glust/castor > >...
2018 Jul 30
2
Issues booting centos7 [dracut is failing to enable centos/root, centos/swap LVs]
..., booting continues to the initrd and that's where the problem starts... - it looks like dracut has issues locating/enabling /dev/mapper/centos-root (lv) and as a result it cannot boot to the 'real' root fs (/). - while in the dracut shell, I execute the following command sequence: 1. lvm lvchange -ay /dev/centos/root 2. lvm lvchange -ay /dev/centos/swap 3. ln -s /dev/mapper/centos-root /dev/root 4. exit ...and the OS boots fine... so it looks like the PV/VG/LVs are detected properly while in the initrd, but somehow dracut has difficulties enabling the root and swap LVs? - While in o...
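A hedged sketch of the usual follow-up once the system is up, assuming the stock CentOS 7 grub2/dracut layout; the rd.lvm.lv= arguments tell dracut which LVs to activate at boot:
grep rd.lvm.lv /proc/cmdline                            # should list rd.lvm.lv=centos/root rd.lvm.lv=centos/swap
vi /etc/default/grub                                    # add them to GRUB_CMDLINE_LINUX if missing
grub2-mkconfig -o /boot/grub2/grub.cfg
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)   # rebuild the initramfs with the LVM settings baked in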
2008 Mar 03
3
LVM and kickstarts ?
...on trying to format my logical volume, because the mount point does not exist (/dev/volgroup/logvol). It seems that with option 2 the partitions are created and LVM is set up correctly; however, the volgroup / logical volume was not made active, so my /dev/volgroup/logvol did not exist. Running `lvm lvchange -a -y pathname` from within the shell after anaconda failed made the volgroup / logvol active, which allowed the format command to complete. Option 1: zerombr yes clearpart --all --initlabel part /boot --fstype ext3 --size=100 part pv.os --size=10000 --grow --maxsize=10000 --asprimary volg...
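One hedged workaround, not taken from the thread itself: activate any pre-existing volume group from a kickstart %pre script so anaconda can find and format the logical volume. The VG name "volgroup" matches the example above, and %end may be unnecessary on older anaconda releases:
%pre
lvm vgscan
lvm vgchange -a y volgroup
%end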
2012 Jul 10
1
can NOT delete LV (in use) problem...
We have CentOS 5.6 on a Dell server. I created a VG and an LV on one SSD disk. After a couple of weeks I decided to delete it. I unmounted the file system but cannot delete the LV; it says "in use". I tried the following, but it still does NOT work: # lvchange -an /dev/VG0-SSD910/LV01-SSD910 LV VG0-SSD910/LV01-SSD910 in use: not deactivating # kpartx -d /dev/VG0-SSD910/LV01-SSD910 # lvchange -an /dev/VG0-SSD910/LV01-SSD910 LV VG0-SSD910/LV01-SSD910 in use: not deactivating Does anyone have an idea how to delete it? Thanks.
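A hedged sketch for tracking down what keeps the LV open, using the device names from the post above:
dmsetup info /dev/VG0-SSD910/LV01-SSD910   # an "Open count" above 0 means a holder exists
dmsetup ls --tree                          # look for mappings stacked on top of the LV (e.g. leftover kpartx partitions)
lsof /dev/VG0-SSD910/LV01-SSD910           # any process holding the device node open?
ls /sys/block/dm-*/holders/                # kernel view of devices stacked on each dm device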
2008 Mar 05
1
LVM: how do I change the UUID of a LV?
I know how to change the UUID of Physical Volumes and Volume Groups, but when I try to do the same for a Logical Volume, lvchange complains that "--uuid" is not an option. Here is how I've been changing the others (note that "--uuid" does not appear in the man pages for pvchange and vgchange for lvm2-2.02.26-3.el5): pvchange --uuid {pv dev} vgchange --uuid {vg name} Any suggestions? I'm pretty...
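One hedged possibility, not mentioned in the excerpt above: lvchange has no --uuid option, but the LV's id lives in the VG metadata, which vgcfgbackup can dump to a text file for editing. This is risky, should only be tried on a deactivated VG with good backups, and the names below are placeholders:
vgchange -an vg0
vgcfgbackup -f /tmp/vg0.cfg vg0    # dump the VG metadata as editable text
# edit the "id = ..." line inside the logical_volumes { myvol { ... } } section of /tmp/vg0.cfg
vgcfgrestore -f /tmp/vg0.cfg vg0
vgchange -ay vg0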
2010 Oct 04
1
Mounting an lvm
I converted a system disk from a VirtualBox VM and added it to the config of a qemu VM. All seems well until I try to mount it. The virtual machine shows data for the disk image using commands like: pvs lvs lvdisplay xena-1 but there is no /dev/xena-1/root to be mounted. I also cannot seem to figure out whether the LVM-related modules are available for the virtual machine kernel. Has anyone
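A hedged sketch of the usual activation steps inside the guest, using the VG name from the post; note that device-mapper doubles hyphens, so the LV shows up under /dev/mapper as xena--1-root:
vgscan --mknodes       # rescan devices and recreate missing /dev entries
vgchange -ay xena-1    # activate every LV in the group so /dev/xena-1/root appears
ls /dev/mapper/        # expect something like xena--1-root
mount /dev/xena-1/root /mnt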
2006 May 23
19
LVM2 snapshots and XEN = problem :(
Hello guys, does anyone use LVM2 backends for domU storage? I do, and I wanted to use LVM's snapshot feature to make backups of domUs in the background, but I ran into the following problem. When I create a snapshot LV and copy data from it to backup storage, it works perfectly. Then I umount and run lvremove. lvremove asks me if I really want to remove the volume and then just hangs forever.
2020 Jan 22
2
Re: qemu hook: event for source host too
...make hook manage locks, I still need to find out a secure way to run LVM commands from a non-root account, but this is another problem. On 22 January 2020 at 10:24:53 GMT+01:00, Michal Privoznik <mprivozn@redhat.com> wrote: >On 1/22/20 9:23 AM, Guy Godfroy wrote: >> I could launch `lvchange -asy` on the source host manually, but the >aim >> of hooks is to automatically execute such commands and avoid human >errors. > >Agreed. However, you would need two hooks actually. One that is called >on the source when the migration is started, and the other that is >c...
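A hedged sketch of one way to let a non-root hook user run only the needed lvchange invocations via sudo; the user name "hookuser" and the VG pattern "vmvg/*" are hypothetical, and sudoers wildcard matching is permissive, so such a rule should be reviewed carefully:
# /etc/sudoers.d/libvirt-hook (edit with visudo -f)
hookuser ALL=(root) NOPASSWD: /usr/sbin/lvchange -aey vmvg/*, /usr/sbin/lvchange -asy vmvg/*, /usr/sbin/lvchange -an vmvg/*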
2015 Mar 17
3
unable to recover software raid1 install
Hello all, on a CentOS 5 system installed with software RAID I'm getting: raid1: raid set md127 active with 2 out of 2 mirrors md:.... autorun DONE md: Autodetecting RAID arrays md: autorun..... md: autorun DONE trying to resume from /dev/md1 creating root device mounting root device mounting root filesystem ext3-fs: unable to read superblock mount :