
Displaying 20 results from an estimated 1000 matches similar to: "LVM: how do I change the UUID of a LV?"

2012 Aug 30
2
[PATCH v2] daemon: collect list of called external commands
guestfsd calls many different tools. Keeping track of all of them is error prone. This patch introduces a new helper macro to put the command string into its own ELF section: GUESTFSD_EXT_CMD(C_variable, command_name); This syntax still makes it possible to grep for the used command names. The actual usage of the collected list could be like this: objcopy -j .guestfsd_ext_cmds -O binary
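A rough sketch of how the collected section might be pulled back out of the built binary, assuming the command strings end up NUL-separated and the daemon binary is called guestfsd (both assumptions, not taken from the patch):

    # extract the .guestfsd_ext_cmds section from the daemon binary
    objcopy -j .guestfsd_ext_cmds -O binary guestfsd ext_cmds.bin
    # split the NUL-separated strings into one external command per line
    tr '\0' '\n' < ext_cmds.bin | sort -u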
2020 Jan 21
2
qemu hook: event for source host too
Hello, this is my first time posting on this mailing list. I wanted to suggest an addition to the qemu hook. I will explain it through my own use case. I use a shared LVM storage as a volume pool between my nodes. I use lvmlockd in sanlock mode to protect against both LVM metadata corruption and concurrent volume mounting. When I run a VM on a node, I activate the desired LV with an exclusive lock
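As an illustration of that activation step, a destination-side libvirt hook could take the exclusive lock before the guest starts; the VG/LV naming below is a placeholder and only sketches the idea under lvmlockd/sanlock:

    #!/bin/sh
    # /etc/libvirt/hooks/qemu is invoked as: qemu <guest> <operation> <sub-operation> ...
    guest="$1"; op="$2"
    case "$op" in
      prepare) lvchange -aey "vg_shared/$guest" ;;   # exclusive activation before start
      release) lvchange -an  "vg_shared/$guest" ;;   # deactivate after shutdown
    esac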
2012 Aug 30
1
[PATCH] collect list of called external commands
guestfsd calls many different tools. Keeping track of all of them is error prone. This patch introduces a new helper macro to put the command string into its own ELF section: GUESTFS_EXT_CMD(C_variable, command_name); This syntax still makes it possible to grep for the used command names. The actual usage of the collected list could be like this: objcopy -j .guestfs_ext_cmds -O binary
2008 Jun 05
3
vsftpd and active mode connections causes FTP session to hang
I've encountered an odd error state that I haven't been able to resolve yet. I have a customer that, for whatever reason, wants to use active mode occasionally for FTP xfers. What they have noticed is that after you switch to active and issue a command (they do 'ls', I've done other things like 'put' and 'get', etc.), the connection hangs. If you wait a
2020 Jan 22
2
Re: qemu hook: event for source host too
I could launch `lvchange -asy` on the source host manually, but the aim of hooks is to automatically execute such commands and avoid human errors. On 22 January 2020 at 09:18:54 GMT+01:00, Michal Privoznik <mprivozn@redhat.com> wrote: >On 1/21/20 9:10 AM, Guy Godfroy wrote: >> Hello, this is my first time posting on this mailing list. >> >> I wanted to suggest a
2017 Apr 23
0
Proper way to remove a qemu-nbd-mounted volume using lvm
I either haven't searched for the right thing or the web doesn't contain the answer. I have used the following to mount an image and now I need to know the proper way to reverse the process. qemu-nbd -c /dev/nbd0 <qcow2 image using lvm> vgscan --cache (had to use --cache to get the qemu-nbd volume to be recognized, lvmetad is running) vgchange -ay
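For reference, the reverse sequence usually looks roughly like this; the volume group name is a placeholder for whatever vgscan reported:

    # deactivate the guest's volume group so nothing holds the nbd device open
    vgchange -an <guest_vg>
    # then detach the qcow2 image from the network block device
    qemu-nbd -d /dev/nbd0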
2008 Jun 12
3
Detach specific partition LVM of XEN
Hi... I have had a problem when trying to detach one specific LVM partition from Xen. I have tried xm destroy <domain>, lvchange -an <lvm_partition>, lvremove -f.... but I haven't had success. I even restarted the server with init 1 and nothing... I have seen two specific processes running, xenwatch and xenbus, but I am not sure if these processes have some action over
2008 Apr 09
3
Interface bonding?
I'm trying to bond a few interfaces together in the hope of getting increased throughput, and I'm using a Cisco Catalyst 2900 as the switch. I've tried using modes 0, 5, and 6 with nothing special on the switch, and mode 4 with some ports "trunked" together (I have a feeling that the "trunking" that the 2900 does is not 802.3ad, as it disabled the ports it saw as
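For comparison, an 802.3ad bond on the Linux side can be sketched with modern iproute2 as below; the switch ports still need a matching LACP port-channel, which the 2900-style "trunking" may not provide:

    # create an LACP bond and enslave two NICs (interfaces must be down first)
    ip link add bond0 type bond mode 802.3ad miimon 100
    ip link set eth0 down; ip link set eth0 master bond0
    ip link set eth1 down; ip link set eth1 master bond0
    ip link set bond0 up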
2012 Jul 10
1
can NOT delete LV (in use) problem...
We have CentOS 5.6 on a Dell server. I created a VG and an LV on one SSD disk. After a couple of weeks I decided to delete it. I unmounted the file system but cannot delete the LV. It says "in use". I tried the following but it still does NOT work: # lvchange -an /dev/VG0-SSD910/LV01-SSD910 LV VG0-SSD910/LV01-SSD910 in use: not deactivating # kpartx -d /dev/VG0-SSD910/LV01-SSD910 # lvchange -an
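One hedged way to see what is still holding the LV before retrying the deactivation (device names follow the message above):

    # show device-mapper targets and their open counts
    dmsetup info -c
    # any entry under holders/ (e.g. a leftover kpartx mapping) keeps the LV "in use"
    ls /sys/block/dm-*/holders/
    # once nothing holds it open, deactivation should succeed
    lvchange -an /dev/VG0-SSD910/LV01-SSD910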
2017 Jul 27
0
[PATCH v2] daemon: Remove GUESTFSD_EXT_CMD.
GUESTFSD_EXT_CMD was used by OpenSUSE to track which external commands are run by the daemon and package those commands into the appliance. It is no longer used by recent SUSE builds, so remove it. Thanks: Pino Toscano, Olaf Hering. --- daemon/9p.c | 3 +- daemon/available.c | 7 +-- daemon/base64.c | 6 +-- daemon/blkid.c | 10 ++---
2010 Oct 04
1
Mounting an lvm
I converted a system disk from a VirtualBox VM and added it to the config of a qemu VM. All seems well until I try to mount it. The virtual machine shows data for the disk image using commands like pvs, lvs, and lvdisplay xena-1, but there is no /dev/xena-1/root to be mounted. I also cannot seem to figure out whether the LVM-related modules are available for the virtual machine kernel. Has anyone
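Assuming the volume group really is called xena-1 as lvdisplay suggests, the step that is usually missing is activating it so the device nodes appear:

    vgscan                       # rescan for volume groups on the attached disk
    vgchange -ay xena-1          # activate its LVs, creating /dev/xena-1/*
    mount /dev/xena-1/root /mnt  # then the root LV can be mounted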
2017 Jul 24
0
[PATCH 2/2] daemon: Replace GUESTFSD_EXT_CMD with --print-external-commands.
GUESTFSD_EXT_CMD is used by OpenSUSE to track which external commands are run by the daemon and package those commands into the appliance. However, because this uses linker trickery, it won't work from OCaml code. Replace it with a [nearly] standard C mechanism. Files still have to declare the external commands they will use, e.g.: DECLARE_EXTERNAL_COMMANDS ("btrfs",
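Presumably the packaging side would then query the daemon directly rather than reading an ELF section, along these lines (a sketch only, based on the option named in the subject):

    # list every external command the daemon declares, e.g. for appliance packaging checks
    guestfsd --print-external-commands | sort -u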
2015 Mar 18
0
unable to recover software raid1 install
On Tue, 2015-03-17 at 23:28 +0100, johan.vermeulen7 at telenet.be wrote: > > on a Centos5 system installed with software raid I'm getting: > > raid1: raid set md127 active with 2 out of 2 mirrors > > md:.... autorun DONE > > md: Autodetecting RAID arrays > > md: autorun..... > > md: autorun DONE > > trying to resume from /dev/md1 Hi
2017 Dec 11
2
active/active failover
Dear all, I'm rather new to glusterfs but have some experience running larger Lustre and BeeGFS installations. These filesystems provide active/active failover. Now, I discovered that I can also do this in glusterfs, although I didn't find detailed documentation about it. (I'm using glusterfs 3.10.8) So my question is: can I really use glusterfs to do failover in the way described
2010 Sep 11
5
vgrename, lvrename
Hi, I want to rename some volume groups and logical volumes. I was not surprised when it would not let me rename active volumes. So I booted up the system using the CentOS 5.5 LiveCD, but the LiveCD makes the logical volumes browsable using Nautilus, so they are still active and I can't rename them. Tried: /usr/sbin/lvchange -a n VolGroup00/LogVol00 but it still says: LV
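A hedged outline of the usual order of operations from the LiveCD; the new names are placeholders, and the key point is that nothing (including the desktop automounter) may have the LVs mounted before deactivating:

    umount /dev/VolGroup00/LogVol00   # repeat for each LV the LiveCD auto-mounted
    vgchange -an VolGroup00           # deactivate the whole group
    vgrename VolGroup00 vg_new
    lvrename vg_new LogVol00 lv_root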
2011 Oct 27
1
delete lvm problem: exited with non-zero status 5 and signal 0
hi, I use libvirt-python to manage my virtual machines. When I delete a volume using vol.delete(0), sometimes it notifies me that the following error has occurred: libvirtError: internal error '/sbin/lvremove -f /dev/vg.vmms/lvm-v097222.sqa.cm4' exited with non-zero status 5 and signal 0: Can't remove open logical volume
2017 Dec 14
2
Accessing crashed disk
On 13/12/17 21:42, Leon Fauster wrote: > On 13.12.2017 at 22:31, martin.wagner at mailbit.io wrote: > >> I have a CentOS server that crashed; it would no longer boot. I thought it was the disk with the OS that was the problem, so I bought a new one and did a fresh install, and now the computer is again up and running. But I'm having problems with accessing the old failed disk. I
2015 Mar 17
3
unable to recover software raid1 install
Hello All, on a Centos5 system installed with software raid I'm getting: raid1: raid set md127 active with 2 out of 2 mirrors md:.... autorun DONE md: Autodetecting RAID arrays md: autorun..... md: autorun DONE trying to resume from /dev/md1 creating root device mounting root device mounting root filesystem ext3-fs: unable to read superblock mount :
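If this has to be examined from a rescue environment, a cautious first pass might look like the following; the member device names are guesses and need to match the actual array:

    mdadm --examine /dev/sda1 /dev/sdb1   # inspect the RAID superblocks on the members
    mdadm --assemble --scan               # assemble explicitly instead of relying on autodetect
    file -s /dev/md1                      # check what the md device carries before mounting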
2017 Dec 11
0
active/active failover
Hi Stefan, I think what you propose will work, though you should test it thoroughly. I think, more generally, "the GlusterFS way" would be to use 2-way replication instead of a distributed volume; then you can lose one of your servers without an outage and re-synchronize when it comes back up. Chances are, if you weren't using the SAN volumes, you could have purchased two servers
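For reference, a 2-way replicated volume is created roughly like this; the hosts and brick paths are placeholders, and with only two replicas an arbiter or quorum setting is usually advisable to avoid split-brain:

    gluster volume create gv0 replica 2 server1:/bricks/gv0 server2:/bricks/gv0
    gluster volume start gv0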
2017 Dec 12
1
active/active failover
Hi Alex, Thank you for the quick reply! Yes, I'm aware that using "plain" hardware with replication is more what GlusterFS is for. I cannot talk about prices here in detail, but for me it more or less evens out. Moreover, I have more SAN that I'd rather re-use (because of Lustre) than buy new hardware. I'll test more to understand what precisely "replace-brick"