similar to: Connections Driver VirtualBox

Displaying 20 results from an estimated 4000 matches similar to: "Connections Driver VirtualBox"

2018 Oct 15
3
snapshots with virsh in a pacemaker cluster
Hi, I have a two-node cluster with virtual guests as resources. I'd like to snapshot the guests once a night and thought I had a procedure, but I realize that things in a cluster are a bit more complicated than expected :-)) I will shut down the guests to have a clean snapshot. I can shut down the guests via pacemaker. But then the first problem arises: when I issue a "virsh
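A minimal sketch of the intended procedure, assuming crmsh, a VirtualDomain resource named vm_guest1 managing a qcow2-backed domain guest1 (all names hypothetical):

    crm resource stop vm_guest1                      # clean shutdown via the cluster
    virsh snapshot-create-as guest1 nightly-backup   # internal qcow2 snapshot while the domain is off
    crm resource start vm_guest1                     # hand the guest back to pacemaker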
2018 Sep 14
2
Re: live migration and config
14.09.2018 15:43, Jiri Denemark wrote: > On Thu, Sep 13, 2018 at 19:37:00 +0400, Dmitry Melekhov wrote: >> >> 13.09.2018 18:57, Jiri Denemark wrote: >>> On Thu, Sep 13, 2018 at 18:38:57 +0400, Dmitry Melekhov wrote: >>>> 13.09.2018 17:47, Jiri Denemark wrote: >>>>> On Thu, Sep 13, 2018 at 10:35:09 +0400, Dmitry Melekhov wrote:
2019 Sep 18
1
Live-Migration not possible: error: operation failed: guest CPU doesn't match specification
Hi, I have a two-node HA cluster with pacemaker, corosync, libvirt and KVM. Recently I configured a new VirtualDomain which runs fine, but live migration does not succeed. This is the error: VirtualDomain(vm_snipanalysis)[14322]: 2019/09/18_16:56:54 ERROR: snipanalysis: live migration to ha-idg-2 failed: 1 Sep 18 16:56:54 [6970] ha-idg-1 lrmd: notice: operation_finished:
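One common cause is a domain CPU definition the target host cannot satisfy. A sketch for checking compatibility before migrating (domain name taken from the log above, the rest illustrative):

    # on the source host: extract the domain's CPU definition
    virsh dumpxml snipanalysis | sed -n '/<cpu/,/<\/cpu>/p' > cpu.xml
    # copy cpu.xml to ha-idg-2, then check compatibility there:
    virsh cpu-compare cpu.xml

If the result is incompatible, switching the domain to <cpu mode='host-model'/> or to a custom model both hosts share is the usual remedy.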
2011 Jun 18
1
virsh slow on execution
Hi. I've set up a Debian6+Xen4+libvirt0.9 server. VMs are going to be managed by Pacemaker with the VirtualDomain primitive. As suggested on some mailing list, I've set export VIRSH_DEFAULT_CONNECT_URI="xen:///" in root's .bashrc, but when I run virsh commands, it takes up to 10s to get the result! Is this normal? At some point today I had issues connecting to libvirt, and in this
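For reference, pinning virsh to the Xen driver so it does not probe other hypervisor drivers on each invocation (one plausible cause of the delay) looks like this:

    # in root's ~/.bashrc
    export VIRSH_DEFAULT_CONNECT_URI="xen:///"
    # one-off equivalent, handy for timing the difference:
    time virsh -c xen:/// list --all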
2018 Dec 04
3
concurrent migration of several domains rarely fails
Hi, I have a two-node cluster with several domains as resources. During testing I tried several times to migrate some domains concurrently. Usually it succeeded, but rarely it failed. I found one clue in the log: Dec 03 16:03:02 ha-idg-1 libvirtd[3252]: 2018-12-03 15:03:02.758+0000: 3252: error : virKeepAliveTimerInternal:143 : internal error: connection closed due to keepalive timeout The
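The limits behind that keepalive error are tunable; a sketch, assuming stock file locations and that both cluster nodes receive the same change:

    # /etc/libvirt/libvirtd.conf
    keepalive_interval = 5    # seconds between keepalive probes
    keepalive_count = 20      # unanswered probes tolerated (default is 5)

    systemctl restart libvirtd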
2014 Feb 12
3
Right way to do SAN-based shared storage?
I'm trying to set up SAN-based shared storage in KVM, the key word being "shared" across multiple KVM servers, for a) live migration and b) clustering purposes. But it's surprisingly sparsely documented. For starters, what type of pool should I be using?
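One common answer is a logical (LVM) pool on the shared LUN; a sketch with hypothetical device and pool names. Note that plain LVM metadata is not cluster-aware, so pool changes must be coordinated across hosts:

    virsh pool-define-as sanpool logical \
        --source-dev /dev/mapper/mpatha --target /dev/sanpool
    virsh pool-start sanpool
    virsh pool-autostart sanpool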
2018 Oct 15
0
Re: snapshots with virsh in a pacemaker cluster
Pacemaker always knows where its resources are running. Query it, stop the domain, then use the queried location as the host to which to issue the snapshot? Cheers, Peter On Mon, 15 Oct 2018, 20:36 Lentes, Bernd, <bernd.lentes@helmholtz-muenchen.de> wrote: > Hi, > > I have a two-node cluster with virtual guests as resources. > I'd like to snapshot the guests once in the
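A sketch of that query-then-snapshot flow, assuming crmsh plus crm_resource and hypothetical resource/domain names:

    host=$(crm_resource --resource vm_guest1 --locate | awk '{print $NF}')
    crm resource stop vm_guest1
    ssh "$host" virsh snapshot-create-as guest1 nightly
    crm resource start vm_guest1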
2006 Oct 23
1
problems with authentication
Hi, I have a problem. I'm using fetchmail to get my mail from the server to a local server, but my mail username is "user at virtualdomain" and I can't create a system user containing "@" on CentOS. So I want to know how I can tell dovecot that the user "user at virtualdomain" is associated with the system user "user.virtualdomain".
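The usual answer is to keep such users virtual instead of mapping them to system accounts; a sketch of a passwd-file backend in dovecot 1.x era syntax (paths and credentials illustrative):

    # /etc/dovecot/dovecot.conf
    auth default {
      passdb passwd-file {
        args = /etc/dovecot/passwd
      }
      userdb passwd-file {
        args = /etc/dovecot/passwd
      }
    }
    # /etc/dovecot/passwd -- one virtual user per line:
    # user@virtualdomain:{PLAIN}secret:5000:5000::/var/mail/virtualdomain/user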
2014 Feb 12
0
Re: Right way to do SAN-based shared storage?
On Wed, 12 Feb 2014 21:51:53 +0100 urgrue <urgrue@bulbous.org> wrote: > I'm trying to set up SAN-based shared storage in KVM, the key word being > "shared" across multiple KVM servers, for a) live migration and b) > clustering purposes. But it's surprisingly sparsely documented. For > starters, what type of pool should I be using? It's indeed not documented
2018 Dec 07
3
Re: concurrent migration of several domains rarely fails
On 12/6/18 10:12 AM, Lentes, Bernd wrote: > >> Hi, >> >> I have a two-node cluster with several domains as resources. During testing I >> tried several times to migrate some domains concurrently. >> Usually it succeeded, but rarely it failed. I found one clue in the log: >> >> Dec 03 16:03:02 ha-idg-1 libvirtd[3252]: 2018-12-03 15:03:02.758+0000: 3252:
2015 Jan 18
2
sendmail not invoking dovecot-lda
Hi dovecot mailing list - Configuration: FreeBSD-9.3, sendmail -d0.1 == sendmail-8.14.9, dovecot --version == dovecot-2.2.15 # =================================================================== # I'm trying to get sendmail to invoke dovecot.m4 ( dovecot-lda ) to # deliver emails to dovecot's virtual users ( /etc/dovecot/passwd ) # or mysql/postgresql virtual users #
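A sketch of the sendmail.mc glue for the dovecot.m4 approach mentioned above; the macro names follow the dovecot.m4 published on the Dovecot wiki and the path is a FreeBSD assumption, so verify both against your copy:

    dnl sendmail.mc -- assumes dovecot.m4 is installed under cf/mailer
    define(`DOVECOT_MAILER_PATH', `/usr/local/libexec/dovecot/dovecot-lda')dnl
    MAILER(`dovecot')dnl
    define(`confLOCAL_MAILER', `dovecot')dnl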
2011 Sep 29
1
CentOS 6: corosync and pacemaker won't stop (patch)
Hi, I cannot 'halt' my CentOS 6 servers while corosync+pacemaker are running. I believe the runlevel priorities used to stop corosync and pacemaker are in the wrong order and create the infinite "Waiting for corosync services to unload..." loop. This is my first time with this cluster technology, but apparently pacemaker has to be stopped /before/ corosync. Applying the following
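The fix amounts to giving pacemaker an earlier stop priority than corosync in the init scripts' chkconfig headers, then re-registering the service; values illustrative:

    # /etc/init.d/corosync  -- start first, stop last
    # chkconfig: - 91 95
    # /etc/init.d/pacemaker -- start after corosync, stop before it
    # chkconfig: - 92 94

    chkconfig --del pacemaker && chkconfig --add pacemaker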
2008 Nov 18
1
[Patch 3/3] ocfs2-tools: Fix compilation of Pacemaker glue for ocfs2_controld
Fix compilation of the Pacemaker glue for ocfs2_controld when the underlying Pacemaker installation supports both the Heartbeat and OpenAIS stacks. Signed-off-by: Andrew Beekhof <abeekhof at suse.de> --- upstream/ocfs2_controld/pacemaker.c 2008-09-11 16:51:11.000000000 +0200 +++ dev/ocfs2_controld/pacemaker.c 2008-10-23 13:14:56.000000000 +0200 @@ -20,8 +20,16 @@ #include
2016 Mar 10
0
different uuids, but still "Attempt to migrate guest to same host" error
Background: ---------- I'm trying to debug a two-node pacemaker/corosync cluster where I want to be able to do live migration of KVM/qemu VMs. Storage is backed by dual-primary DRBD (yes, fencing is in place). When moving the VM between nodes via 'pcs resource move RES NODENAME', the live migration fails, although pacemaker will shut down the VM and restart it on the other node.
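Libvirt decides "same host" from the host UUID, not the domain UUID, so nodes cloned from one image are a likely culprit. A sketch of checking and, if needed, overriding it per node:

    virsh sysinfo | grep -i uuid      # or: dmidecode -s system-uuid
    # if both nodes report the same value, set a distinct host_uuid in
    # /etc/libvirt/libvirtd.conf on each node and restart libvirtd:
    # host_uuid = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"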
2018 Dec 06
0
Re: concurrent migration of several domains rarely fails
> Hi, > > I have a two-node cluster with several domains as resources. During testing I > tried several times to migrate some domains concurrently. > Usually it succeeded, but rarely it failed. I found one clue in the log: > > Dec 03 16:03:02 ha-idg-1 libvirtd[3252]: 2018-12-03 15:03:02.758+0000: 3252: > error : virKeepAliveTimerInternal:143 : internal error: connection
2018 Sep 14
0
Re: live migration and config
On Fri, Sep 14, 2018 at 16:00:43 +0400, Dmitry Melekhov wrote: > 14.09.2018 15:43, Jiri Denemark wrote: > > On Thu, Sep 13, 2018 at 19:37:00 +0400, Dmitry Melekhov wrote: > >> > >> 13.09.2018 18:57, Jiri Denemark wrote: > >>> On Thu, Sep 13, 2018 at 18:38:57 +0400, Dmitry Melekhov wrote: > >>>> 13.09.2018 17:47, Jiri Denemark wrote: >
2020 Dec 02
1
Problem upgrading from 8.0 to 8.1
Hi, I want to upgrade from 8.0 to 8.1 but yum update does not work; the system stays at 8.0. The cause is probably that I compiled DRBD myself because it was not available for 8.0 when I set up the machine. Now yum update says: Error: Problem 1: cannot install the best update candidate for package drbd-pacemaker-9.11.0-1.el8.x86_64 - nothing provides pacemaker needed by
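A common way out is to exclude the self-built packages from the transaction until a rebuild against the 8.1 pacemaker exists; a sketch:

    yum update --exclude='drbd*'
    # or remove the local builds first and reinstall rebuilt ones afterwards:
    # rpm -e drbd-pacemaker drbd-utils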
2012 Nov 02
3
lctl ping of Pacemaker IP
Greetings! I am working with Lustre-2.1.2 on RHEL 6.2. First I configured it using the standard defaults over TCP/IP. Everything worked very nicely using a real, static --mgsnode=a.b.c.x value, which was the actual IP of the MGS/MDS system1 node. I am now trying to integrate it with Pacemaker-1.1.7. I believe I have most of the set-up completed, with one particular exception. The "lctl
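For reference, pinging a LNET NID behind a pacemaker-managed floating IP would look like this, with a.b.c.y standing in for the virtual address:

    lctl ping a.b.c.y@tcp0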
2009 Jun 15
1
Is Pacemaker integration ready to go?
I have seen many references online to being able to use OCFS2 with Pacemaker, but the documentation I have been able to find is very sparse. I have kernel 2.6.29 and the latest DLM, Pacemaker (using openais) and OCFS2-Tools from git (as of June 13). I was able to build ocfs2_controld.pcmk (with some minor changes to the makefile for my install). I noticed the OCF version of o2cb is not
2016 Nov 25
1
Pacemaker bugs?
Hi! I think I stumbled on at least two bugs in the CentOS 7.2 pacemaker package, though I'm not quite sure if or where to report them. I'm using the following package to set up a 2-node active/passive cluster: [root at clnode1 ~]# rpm -q pacemaker pacemaker-1.1.13-10.el7_2.4.x86_64 The installation is up-to-date on both nodes as of the
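For context, the standard CentOS 7 / pcs 0.9 two-node setup such packages are exercised with (node and cluster names hypothetical):

    pcs cluster auth clnode1 clnode2
    pcs cluster setup --name mycluster clnode1 clnode2
    pcs cluster start --all
    pcs property set no-quorum-policy=ignore   # common for two-node clusters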