Displaying 20 results from an estimated 1000 matches similar to: "qemu hook: event for source host too"
2020 Jan 22
2
Re: qemu hook: event for source host too
I could launch `lvchange -asy` on the source host manually, but the aim of hooks is to automatically execute such commands and avoid human errors.
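For illustration, a minimal sketch of what such a hook could look like (the guest name in $1 and the operation in $2 follow the documented qemu hook calling convention; the one-LV-per-guest naming below is my assumption, not something from this thread):

  #!/bin/bash
  # /etc/libvirt/hooks/qemu -- sketch only
  GUEST="$1"; OP="$2"
  LV="/dev/shared_vg/${GUEST}"          # hypothetical one-LV-per-guest naming
  case "$OP" in
      prepare) lvchange -aey "$LV" ;;   # take an exclusive activation before the guest starts
      release) lvchange -an  "$LV" ;;   # drop the activation once the guest is gone
  esac

The missing piece this thread asks about is an equivalent call on the source host during migration.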
On 22 January 2020 at 09:18:54 GMT+01:00, Michal Privoznik <mprivozn@redhat.com> wrote:
>On 1/21/20 9:10 AM, Guy Godfroy wrote:
>> Hello, this is my first time posting on this mailing list.
>>
>> I wanted to suggest a
2020 Jan 22
2
Re: qemu hook: event for source host too
That's right, I also need that second hook event.
For your information, for now I manage locks manually or via Ansible. To make the hook manage locks, I still need to find out a secure way to run LVM commands from a non-root account, but this is another problem.
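One possible (untested) way to let an unprivileged hook account run just those LVM commands would be a narrow sudoers rule; the account name and VG path below are placeholders, not anything from this thread:

  # /etc/sudoers.d/libvirt-lvm  (sketch)
  # allow only the exact activation commands the hook needs
  libvirt-hook ALL=(root) NOPASSWD: /usr/sbin/lvchange -asy /dev/shared_vg/*, \
      /usr/sbin/lvchange -an /dev/shared_vg/*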
On 22 January 2020 at 10:24:53 GMT+01:00, Michal Privoznik <mprivozn@redhat.com> wrote:
>On 1/22/20 9:23 AM, Guy Godfroy
2020 Jan 22
0
Re: qemu hook: event for source host too
On 1/21/20 9:10 AM, Guy Godfroy wrote:
> Hello, this is my first time posting on this mailing list.
>
> I wanted to suggest an addition to the qemu hook. I will explain it
> through my own use case.
>
> I use a shared LVM storage as a volume pool between my nodes. I use
> lvmlockd in sanlock mode to protect against both LVM metadata corruption and
> concurrent volume mounting.
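For readers unfamiliar with that setup, the relevant configuration is roughly the following (a sketch; the VG name and device are examples):

  # /etc/lvm/lvm.conf on every node
  global {
      use_lvmlockd = 1
  }
  # the shared VG is created with the sanlock lock type, e.g.:
  #   vgcreate --shared --lock-type sanlock shared_vg /dev/mapper/shared_lun
  # and the lockspace is started on each node with:
  #   vgchange --lock-start shared_vg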
2020 Jan 23
0
Re: qemu hook: event for source host too
So, how likely is it that we could get this feature (two new events for
the qemu hook)?
On 22/01/2020 at 10:56, Guy Godfroy wrote:
> That's right, I also need that second hook event.
>
> For your information, for now I manage locks manually or via Ansible.
> To make the hook manage locks, I still need to find out a secure way to
> run LVM commands from a non-root account, but
2017 Dec 11
2
active/active failover
Dear all,
I'm rather new to glusterfs but have some experience running larger Lustre and BeeGFS installations. These filesystems provide active/active failover. Now, I discovered that I can also do this in glusterfs, although I didn't find detailed documentation about it. (I'm using glusterfs 3.10.8)
So my question is: can I really use glusterfs to do failover in the way described
2017 Dec 11
0
active/active failover
Hi Stefan,
I think what you propose will work, though you should test it thoroughly.
I think more generally, "the GlusterFS way" would be to use 2-way
replication instead of a distributed volume; then you can lose one of your
servers without outage. And re-synchronize when it comes back up.
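For reference, the suggested 2-way replicated layout would be created roughly like this (a sketch; host names and brick paths are made up):

  gluster volume create gv0 replica 2 server1:/bricks/brick1 server2:/bricks/brick1
  gluster volume start gv0
  # with replica 2, either server can go down and the volume stays available;
  # a third replica or an arbiter avoids split-brain, but that is beyond this sketch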
Chances are, if you weren't using the SAN volumes, you could have purchased
two servers
2017 Dec 12
1
active/active failover
Hi Alex,
Thank you for the quick reply!
Yes, I'm aware that using "plain" hardware with replication is more what GlusterFS is for. I cannot talk about prices here in detail, but for me it more or less evens out. Moreover, I have more SAN storage that I'd rather re-use (because of Lustre) than buy new hardware. I'll test more to understand what precisely "replace-brick"
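"replace-brick" moves one brick of a volume to a new server; a hedged example of the usual invocation (names are placeholders):

  # move the brick to the new server, then let self-heal copy the data over
  gluster volume replace-brick gv0 oldserver:/bricks/brick1 newserver:/bricks/brick1 commit force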
2020 Jan 22
0
Re: qemu hook: event for source host too
On 1/22/20 9:23 AM, Guy Godfroy wrote:
> I could launch `lvchange -asy` on the source host manually, but the aim
> of hooks is to automatically execute such commands and avoid human errors.
Agreed. However, you would actually need two hooks. One that is called
on the source when the migration is started, and the other that is
called on the destination when the migration is finished (so
2020 Jan 24
3
Re: qemu hook: event for source host too
On 1/24/20 4:34 PM, Guy Godfroy wrote:
> I don't really understand what new hook this would be.
Libvirt's migration happens in phases [1]. The last one is 'Confirm'
where the domain is either killed (because it's running successfully on the
destination) or resumed (because there was an error).
If you make a lock shared at the beginning of the migration, but
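To make the idea concrete, a sketch of how a source-side hook could use such events; note that the event names below ('migrate_begin', 'migrate_confirm') are hypothetical, they are precisely what this thread is asking libvirt to add:

  # fragment of /etc/libvirt/hooks/qemu on the source host (hypothetical events)
  GUEST="$1"; LV="/dev/shared_vg/${GUEST}"   # hypothetical naming, as in the earlier sketch
  case "$2" in
      migrate_begin)   lvchange -asy "$LV" ;;  # request shared activation so the destination can activate too
                                               # (whether a live downgrade works depends on lvmlockd)
      migrate_confirm) lvchange -an  "$LV" ;;  # migration confirmed, release the source's activation
  esac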
2020 Jan 24
2
Re: qemu hook: event for source host too
On 1/23/20 1:43 PM, Guy Godfroy wrote:
> So, how likely is it that we could get this feature (two new events for
> the qemu hook)?
I've started writing it, but then I realized we might need a third hook -
in the confirm phase - which would be run on the source when qemu switches
control over to the destination, or when the migration failed. And this is
what I need to figure out, how to
2008 Jun 12
3
Detach specific partition LVM of XEN
Hi...
I have had a problem when trying to detach one specific LVM partition
from Xen. I have tried xm destroy <domain>, lvchange -an
<lvm_partition>, lvremove -f..., but I haven't had success. I even restarted the
server with init 1, and nothing... I have seen two specific processes
started, xenwatch and xenbus, but I am not sure if these processes have
some action over
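When lvremove refuses because the volume is open, it can help to first check what still holds it; a few generic checks (a sketch; device names are examples):

  dmsetup info -c /dev/vg0/guest_disk   # an Open count above 0 means something still uses the LV
  fuser -vm /dev/vg0/guest_disk         # list processes using a filesystem on the device, if any
  xm list                               # confirm the domU that used the LV is really gone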
2011 Sep 09
17
High Number of VMs
Hi, I'm curious about how you guys deal with big virtualization
installations. To date we have only dealt with a small number of VMs
(~10) on not too big hardware (2x quad Xeons + 16GB RAM). As I'm the
"storage guy" I find it quite convenient to present one LUN per VM
to the dom0s, which makes live migration possible but without the cluster
file system or cLVM
2001 Jun 02
1
No subject
Hi Guus,
Because I read in the mailing list a discussion about setting port forwarding for TCP/UDP to 655, I guessed that it might be related to my problem.
Aey :)
2010 Sep 11
5
vgrename, lvrename
Hi,
I want to rename some volume groups and logical volumes.
I was not surprised when it would not let me rename active volumes.
So I booted up the system using the CentOS 5.5 LiveCD,
but the LiveCD makes the logical volumes browsable using Nautilus,
so they are still active and I can't rename them.
Tried:
/usr/sbin/lvchange -a n VolGroup00/LogVol00
but it still says:
LV
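What usually works is to unmount whatever the LiveCD auto-mounted and deactivate the whole VG before renaming (a sketch; the new names are just examples):

  umount /dev/VolGroup00/LogVol00      # undo whatever Nautilus/automount grabbed
  vgchange -an VolGroup00              # deactivate every LV in the group
  vgrename VolGroup00 vg_system
  lvrename vg_system LogVol00 lv_root
  vgchange -ay vg_system               # reactivate (remember to update fstab/grub/initrd to the new names)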
2011 Oct 27
1
delete lvm problem: exited with non-zero status 5 and signal 0
hi,
I use libvirt-python to manage my virtual machines. When I delete a
volume using vol.delete(0), sometimes it reports that the following error has occurred:
libvirtError: internal error '/sbin/lvremove
-f /dev/vg.vmms/lvm-v097222.sqa.cm4' exited with
non-zero status 5 and signal 0: Can't remove open
logical volume
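A few checks that can explain that error (a sketch; the pool name below just mirrors the VG in the error message and is an assumption):

  lvdisplay /dev/vg.vmms/lvm-v097222.sqa.cm4            # '# open' > 0 means a guest or process still holds it
  virsh domblklist <domain>                             # see whether some domain still has the volume attached
  virsh vol-delete lvm-v097222.sqa.cm4 --pool vg.vmms   # retry once nothing holds the LV open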
2020 Jun 19
1
Re: qemu hook: event for source host too
2011 Jul 25
1
Wanted: method for qemu hook script to know if called for migration
Hi,
"Before a QEMU guest is started, the qemu hook script is called"
(http://libvirt.org/hooks.html)...
In writing a simple bash script to prevent a VM on shared storage from being
started on one host if it's already running on another, it's easy enough to
check if virsh on the other host reports the particular VM as "shut off"
there, and go ahead if that's the
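A minimal sketch of that kind of check inside /etc/libvirt/hooks/qemu (the peer host name and passwordless SSH are assumptions; and, as the poster notes, the hook currently cannot tell a migration apart from a plain start, which is exactly the limitation being discussed):

  #!/bin/bash
  GUEST="$1"; OP="$2"
  PEER="otherhost"                      # hypothetical second host
  if [ "$OP" = "prepare" ]; then
      STATE=$(ssh "$PEER" virsh domstate "$GUEST" 2>/dev/null)
      if [ "$STATE" = "running" ]; then
          echo "refusing to start $GUEST: already running on $PEER" >&2
          exit 1                        # a failing hook makes libvirt abort the start
      fi
  fi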
2018 Jul 30
2
Issues booting centos7 [dracut is failing to enable centos/root, centos/swap LVs]
Hello,
I'm having a strange problem booting a new CentOS 7 installation. Below is some
background on this. [I have attached the tech details at the bottom of this
message]
I started a new CentOS7 installation on a VM, so far all good, o/s boots
fine. Then I decided to increase VM disk size (initially was 10G) to 13G.
Powered off the VM, increased the vhd via the hypervisor, booted from
CentOS
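Two things that are usually worth checking in this situation (a sketch, not a diagnosis of this particular report):

  # from the dracut emergency shell, see whether the LVs can be activated by hand:
  lvm vgscan
  lvm vgchange -ay centos
  # and on the kernel command line, dracut only activates the LVs it is told about, e.g.:
  #   rd.lvm.lv=centos/root rd.lvm.lv=centos/swap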
2008 Mar 05
1
LVM: how do I change the UUID of a LV?
I know how to change the UUID of Physical Volumes and Volume Groups, but
when I try to do the same for a Logical Volume, lvchange complains that
"--uuid" is not an option. Here is how I've been changing the others
(note that "--uuid" does not appear in the man pages for pvchange and
vgchange for lvm2-2.02.26-3.el5):
pvchange --uuid {pv dev}
vgchange --uuid {vg name}
Any
2008 Mar 03
3
LVM and kickstarts ?
Hey,
Can anyone tell me why option 1 works and option 2 fails? I know I
need swap and such; however, in troubleshooting this issue I trimmed
down my config.
It fails on trying to format my logical volume, because the mount point
does not exist (/dev/volgroup/logvol).
It seems that with option 2, the partitions are created and LVM is set up
correctly. However the volgroup / logvolume was not
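For comparison, the LVM part of a kickstart that formats a logical volume normally looks roughly like this (a sketch; sizes are examples, and the VG/LV names just mirror the /dev/volgroup/logvol path above, not the poster's actual options 1/2):

  part /boot --fstype=ext3 --size=200
  part pv.01 --size=1 --grow
  volgroup volgroup pv.01
  logvol /    --vgname=volgroup --name=logvol --fstype=ext3 --size=1 --grow
  logvol swap --vgname=volgroup --name=swap   --size=2048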