similar to: Re: clustered lvm volume activation

Displaying 20 results from an estimated 5000 matches similar to: "Re: clustered lvm volume activation"

2020 Feb 14
0
Re: clustered lvm volume activation
On Fri, Feb 14, 2020 at 11:28:40 +0100, Timm Wunderlich wrote: > dear all, > > i have a problem with clustered lvm storage and the "logical" storage > driver, perhaps someone has a suggestion. > > i have a clustered lvm (lvmlockd, not clvm) with 4 hosts running. > > when i start the storage on a host it activates all volumes in the whole > volume group. then
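A minimal sketch of per-LV activation under lvmlockd, as an alternative to activating the whole VG when the pool starts (VG and LV names here are examples, not from the thread):

    vgchange --lock-start vg_shared        # start the VG's lockspace on this host
    lvchange -aey vg_shared/vm01-disk      # activate a single LV with an exclusive lock
    # ... use the volume ...
    lvchange -an vg_shared/vm01-disk       # deactivate and release the lock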
2020 Jan 21
2
qemu hook: event for source host too
Hello, this is my first time posting on this mailing list. I wanted to suggest an addition to the qemu hook. I will explain it through my own use case. I use a shared LVM storage as a volume pool between my nodes. I use lvmlockd in sanlock mode to protect against both LVM metadata corruption and concurrent volume mounting. When I run a VM on a node, I activate the desired LV with exclusive lock
2020 Jan 22
2
Re: qemu hook: event for source host too
I could launch `lvchange -asy` on the source host manually, but the aim of hooks is to automatically execute such commands and avoid human errors. On 22 January 2020 at 09:18:54 GMT+01:00, Michal Privoznik <mprivozn@redhat.com> wrote: >On 1/21/20 9:10 AM, Guy Godfroy wrote: >> Hello, this is my first time posting on this mailing list. >> >> I wanted to suggest a
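A minimal sketch of the kind of /etc/libvirt/hooks/qemu script the posts above are discussing, assuming LV names can be derived from the guest name (a hypothetical convention). It only covers events libvirt already delivers on the local host; the missing source-host event during migration is exactly what the thread asks for:

    #!/bin/sh
    # libvirt calls this hook with: $1 = guest name, $2 = operation,
    # $3 = sub-operation; the guest XML arrives on stdin.
    guest="$1" op="$2"
    lv="vg_shared/${guest}-disk"            # hypothetical naming convention
    case "$op" in
        prepare) lvchange -aey "$lv" ;;     # take an exclusive lock before the VM starts
        release) lvchange -an  "$lv" ;;     # drop the lock once the VM is gone
    esac
    exit 0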
2020 Jan 22
0
Re: qemu hook: event for source host too
On 1/21/20 9:10 AM, Guy Godfroy wrote: > Hello, this is my first time posting on this mailing list. > > I wanted to suggest an addition to the qemu hook. I will explain it > through my own use case. > > I use a shared LVM storage as a volume pool between my nodes. I use > lvmlockd in sanlock mode to protect against both LVM metadata corruption and > concurrent volume mounting.
2007 Nov 13
2
lvm over nbd?
I have a system with a large LVM VG partition. I was wondering if there is a way I could share the partition using nbd and have the nbd-client access the LVM as if it were local. SYSTEM A: /dev/sda3 is an LVM partition and is assigned to VG volgroup1. I want to share /dev/sda3 via nbd-server. SYSTEM B: receives A's /dev/sda3 as /dev/nbd0. I want to access it as VG volgroup1. I am
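A rough sketch of the setup being asked about, using the old positional nbd syntax of that era (hostname and port are examples); without cluster-aware locking, the VG must only ever be active on one host at a time:

    # SYSTEM A: export the PV over NBD
    nbd-server 2000 /dev/sda3
    # SYSTEM B: attach it and scan for LVM metadata
    nbd-client systema 2000 /dev/nbd0
    pvscan                     # the PV should now show up on /dev/nbd0
    vgchange -ay volgroup1     # activate the VG locally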
2008 Nov 06
0
SAN (Shared LUN) with CLVM
Hi@all System: 2 server nodes connected to a SAN storage (shared LUNs). The shared storage holds a volume group (lvm2) with all my HVM guests. Live migration works nicely. But snapshots of logical volumes are only usable when I first deactivate the logical volume on the second node - otherwise metadata errors come up and the snapshot is broken/inconsistent. Both Xen 3.3 hosts run Ubuntu 8.04. can
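A hedged sketch of the workaround described above (host and LV names are examples): release the LV on the other node before snapshotting it.

    ssh node2 'lvchange -an vg_guests/guest01'           # deactivate on the second node
    lvcreate -s -L 5G -n guest01-snap vg_guests/guest01  # snapshot on this node
    # ... back up the snapshot ...
    lvremove -f vg_guests/guest01-snap                   # then drop it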
2012 Aug 02
1
XEN HA Cluster with LVM fencing and live migration ? The right way ?
Hi, I am trying to build a rock solid XEN high-availability cluster. The platform is SLES 11 SP1 running on two HP DL585s, both connected through HBA fiber channel to the SAN (HP EVA). XEN is running smoothly and I'm even amazed by the live migration performance (this is the first time I have had the chance to try it in such a nice environment). XEN apart, the SLES heartbeat cluster is
2009 Feb 25
2
1/2 OFF-TOPIC: How to use CLVM (on top AoE vblades) instead just plain LVM for Xen based VMs on Debian 5.0?
Guys, I have set up my hard disk with 3 partitions: 1- 256MB on /boot; 2- 2GB on / for my dom0 (Debian 5.0) (eth0 default bridge for guests LAN); 3- 498GB exported with vblade-persist to my network (eth1 for the AoE protocol). On dom0 hypervisor01: vblade-persist setup 0 0 eth1 /dev/sda3 vblade-persist start all How do I create a CLVM VG with /dev/etherd/e0.0 on each of my dom0s? Including the
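A minimal sketch of creating a clustered VG on the exported AoE device, assuming cman/clvmd are already running on every dom0 (the VG name is an example):

    pvcreate /dev/etherd/e0.0
    vgcreate -c y vg_xen /dev/etherd/e0.0   # -c y marks the VG as clustered
    # on the other dom0s, pick up the new metadata:
    pvscan && vgscan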
2010 Mar 08
4
Error with clvm
Hi, I get this error when I try to start clvm (Debian Lenny). This is a clvm version with openais: # /etc/init.d/clvm restart Deactivating VG ::. Stopping Cluster LVM Daemon: clvm. Starting Cluster LVM Daemon: clvmCLVMD[86475770]: Mar 8 11:25:27 CLVMD started CLVMD[86475770]: Mar 8 11:25:27 Our local node id is -1062730132 CLVMD[86475770]: Mar 8 11:25:27 Add_internal_client, fd = 7
2012 Nov 04
3
Problem with CLVM (really openais)
I'm desperately looking for more ideas on how to debug what's going on with our CLVM cluster. Background: 4 node "cluster"-- machines are Dell blades with Dell M6220/M6348 switches. Sole purpose of the Cluster Suite tools is to use CLVM against an iSCSI storage array. Machines are running CentOS 5.8 with the Xen kernels. These blades host various VMs for a project. The iSCSI
2008 Apr 18
1
help--dom0 network goes unpingable when xend starts (fwd)
I am posting the message below again because it did not go through last night. Help! Steve Timm -- ------------------------------------------------------------------ Steven C. Timm, Ph.D (630) 840-8525 timm@fnal.gov http://home.fnal.gov/~timm/ Fermilab Computing Division, Scientific Computing Facilities, Grid Facilities Department, FermiGrid Services Group, Assistant Group Leader.
2012 Feb 23
2
lockmanager for use with clvm
Hi, I am setting up a cluster of KVM hypervisors managed with libvirt. The storage pool is on iSCSI with clvm. To prevent a VM from being started on more than one hypervisor, I want to use a lock manager with libvirt. I could only find sanlock as a lock manager, but AFAIK sanlock will not work in my setup as I don't have a shared filesystem. I have dlm running for clvm. Are there lockmanager
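For reference, a hedged sketch of enabling libvirt's "lockd" plugin (virtlockd), which appeared in libvirt releases newer than this thread likely had available; its file-based lockspace directory must itself be visible to all hypervisors (NFS, GFS2, ...), which is the same constraint the post runs into with sanlock:

    echo 'lock_manager = "lockd"' >> /etc/libvirt/qemu.conf
    echo 'file_lockspace_dir = "/var/lib/libvirt/lockd/files"' >> /etc/libvirt/qemu-lockd.conf
    service libvirtd restart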
2011 Sep 09
17
High Number of VMs
Hi, I'm curious about how you guys deal with big virtualization installations. To date we have only dealt with a small number of VMs (~10) on not too big hardware (2x quad Xeons + 16GB RAM). As I'm the "storage guy" I find it quite convenient to present one LUN per VM to the dom0s, which makes live migration possible but without the cluster file system or cLVM
2011 Oct 13
1
pvresize on a cLVM
Hi, I'm needing to expand a LUN on my EMC CX4-120 SAN (well, I have already done it). On this LUN I had a PV of a cLVM VG. Now I need to run pvresize on it. Has anybody done this on a cLVM PV? I'm trying to rescan the devices, but I can't "see" the new size. And, googling on it I can only find RHEL5.2 responses. Thanks in advance,
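A hedged sketch of the usual rescan-then-resize sequence (device and map names are examples; the multipath step only applies if multipath is in use and the installed multipath-tools supports online resize):

    echo 1 > /sys/block/sdb/device/rescan    # repeat for every path to the LUN
    multipathd -k'resize map mpath0'         # refresh the multipath map, if applicable
    pvresize /dev/mapper/mpath0              # or the plain /dev/sdX device
    pvs -o pv_name,pv_size                   # verify the PV picked up the new size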
2009 Apr 01
3
installing DomU with two network bridges via virt-install
I have a Xen DomU configuration that was made in the days before libvirt and virt-install. In this configuration I have: vif = [ 'mac=00:16:3e:05:06:01, bridge=xenbr0', 'mac=00:16:3e:05:06:0a, bridge=xenbr1' ] and then in xend-config.sxp I define (network-script my-network-bridge) where my-network-bridge is in the scripts directory and looks like this:
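A hedged sketch of the equivalent virt-install invocation, reusing the MACs and bridges from the old vif line (guest name, disk path, sizes and install URL are placeholders):

    virt-install --name mydomu --ram 2048 --vcpus 2 --paravirt \
        --disk path=/dev/vg_xen/mydomu-disk \
        --network bridge=xenbr0,mac=00:16:3e:05:06:01 \
        --network bridge=xenbr1,mac=00:16:3e:05:06:0a \
        --location http://mirror.example.org/centos/5/os/x86_64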
2010 Jan 06
2
changing behavior of xendomains stop
I'm running xen 3.1.2 as bastardized by RedHat on a RedHat clone operating system. I'm using the xendomains script as it came out of the box to start my domUs at boot and stop them at shutdown. There are three problems right now: 1) left to its own devices, service xendomains stop attempts to do "xm save" on each of the domUs. That takes quite a
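A hedged sketch of the usual way to change that behaviour in /etc/sysconfig/xendomains: an empty XENDOMAINS_SAVE makes the init script shut guests down instead of saving them (the shutdown values shown are the stock defaults):

    XENDOMAINS_SAVE=""
    XENDOMAINS_SHUTDOWN="--halt --wait"
    XENDOMAINS_SHUTDOWN_ALL="--all --halt --wait"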
2007 Aug 16
1
xen 3.1/ RHEL5 vs. ethtool
I have the xensource 3.1.0 x86_64 tarball installed over a RHEL5 clone distribution. The "ethtool" utility only returns the following information: [root@fermigrid5 etc]# ethtool eth0 Settings for eth0: Link detected: yes [root@fermigrid5 etc]# Since I have no vanilla-installed rhel5 machines with which to compare, I am not sure if I am dealing with a bug in the ethtool (whose
2017 Sep 15
0
Changes to 'ADJCALLSTACK*' and 'callseq_*' between LLVM v4.0 and v5.0
Hi Martin, The CALLSEQ_START pseudo was changed in r302527; the commit message contains details on the changes. However, CALLSEQ_END was not modified. If you made changes to ADJCALLSTACKUP to add an additional argument, that may result in an error. Thanks, --Serge 2017-09-15 19:09 GMT+07:00 Martin J. O'Riordan via llvm-dev < llvm-dev at lists.llvm.org>: > Hi LLVM-Devs, > > I have managed
2017 Sep 19
1
Changes to 'ADJCALLSTACK*' and 'callseq_*' between LLVM v4.0 and v5.0
Hi Serge, Thanks for your help. I have looked at the change log, and so far as I can tell, my implementation is pretty much identical to all of the in-tree targets, but I’m missing something and can’t see what it is. I have simplified my TD description to just: def MyCallseqStart : SDNode<"ISD::CALLSEQ_START", SDCallSeqStart<[SDTCisVT<0, i32>,
2010 Jan 20
1
Clock skew on domU, no ntpd
Setup: RedHat/Centos/Sci. Linux 5 update 3, Dom0: kernel-xen-2.6.18-164.10.1.el5xen 64-bit (on Dell Poweredge 2950 dual quad-core). DomU: kernel-xen-2.6.18-164.10.1.el5xen 32-bit, 1 vcpu, 6 domUs per dom0. We have also seen the same problem with 2.6.18-164.9.1 and 2.6.18-164.6.1 kernels on this branch. Symptom: On 32-bit domU only (we have never seen 64-bit domU be affected), we observe
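One commonly cited first check on these el5 Xen kernels is whether the domU follows dom0's wallclock or keeps its own (the sysctl below is assumed to exist on this kernel branch; 0 means the guest follows dom0, 1 means it keeps its own clock and is then expected to run ntpd itself):

    cat /proc/sys/xen/independent_wallclock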