similar to: AoE LVM2 DRBD Xen Setup

Displaying 20 results from an estimated 800 matches similar to: "AoE LVM2 DRBD Xen Setup"

2009 Feb 25
2
1/2 OFF-TOPIC: How to use CLVM (on top of AoE vblades) instead of just plain LVM for Xen-based VMs on Debian 5.0?
Guys, I have set up my hard disk with 3 partitions: 1- 256MB on /boot; 2- 2GB on / for my dom0 (Debian 5.0) (eth0 default bridge for guests LAN); 3- 498GB exported with vblade-persist to my network (eth1 for the AoE protocol). On dom0 hypervisor01: vblade-persist setup 0 0 eth1 /dev/sda3 vblade-persist start all How to create a CLVM VG with /dev/etherd/e0.0 on each of my dom0s? Including the
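A minimal sketch of the clustered-VG step being asked about, assuming clvmd is already running on every dom0 and the exported disk shows up as /dev/etherd/e0.0 (the VG name xenvg is made up):

    # on any one dom0, once the AoE device is visible:
    pvcreate /dev/etherd/e0.0
    vgcreate --clustered y xenvg /dev/etherd/e0.0   # -cy marks the VG as clustered
    # the other dom0s then pick it up via clvmd:
    vgscan

The clustered flag makes clvmd coordinate metadata locking, so lvcreate/lvremove issued from any node stays consistent across the cluster.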
2011 Sep 09
17
High Number of VMs
Hi, I'm curious about how you guys deal with big virtualization installations. To this date we have only dealt with a small number of VMs (~10) on not-too-big hardware (2x quad Xeons + 16GB RAM). As I'm the "storage guy" I find it quite convenient to present to the dom0s one LUN per VM, which makes live migration possible but without the cluster file system or cLVM
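For reference, handing one LUN straight through to a guest usually amounts to a single line in the domU config; the by-path device name below is a hypothetical example:

    # /etc/xen/domu1.cfg (excerpt)
    disk = [ 'phy:/dev/disk/by-path/ip-192.168.0.100:3260-iscsi-iqn.example:storage-lun-5,xvda,w' ]

Using stable by-path (or by-id) names rather than /dev/sdX keeps the config valid on every dom0 the guest might migrate to.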
2010 Jan 14
8
XCP - GFS - iSCSI
Hi everyone! I have 2 hosts + 1 iSCSI device. I want to create a shared storage repository that both hosts use together. I won't use NFS. Prepared SR: xe sr-create host-uuid=xxx content-type=user name-label=NAS1 shared=true type=iscsi device-config:target=xxxx device-config:targetIQN=xxxx The hosts see the iSCSI device: scsi4 : iSCSI Initiator over TCP/IP scsi 4:0:0:0: Direct-Access NAS
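One detail worth checking in the command above: the shared LVM-over-iSCSI SR type in XCP is usually lvmoiscsi, not iscsi. A hedged sketch of the common invocation (all UUIDs, IPs and IQNs are placeholders):

    xe sr-create host-uuid=<host-uuid> content-type=user name-label=NAS1 \
       shared=true type=lvmoiscsi \
       device-config:target=<target-ip> device-config:targetIQN=<iqn> \
       device-config:SCSIid=<scsi-id>

The SCSIid parameter is typically required for lvmoiscsi; running the command without it should print the candidate IDs to choose from.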
2008 Jan 15
6
live migration breaking...aoe issue?
I am trying to get a proof-of-concept type setup going... I have a storage box and 2 Xen servers... I am using file-based disks that live on the AoE device... I can run the VM from either host without issue... when I run the live migration, the domain leaves the xm list on host 1 and shows up on host 2 (however there is a pause for pings of about 2 minutes?)... after it is on host 2, I can xm console
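For reference, the migration itself is a one-liner; the two-minute ping pause described above often lines up with switches not learning the guest's MAC at its new port until ARP caches expire (a hedged guess, not a confirmed diagnosis):

    # push the running guest from host1 to host2 (names assumed)
    xm migrate --live myvm host2
    # from inside the guest, a gratuitous ARP can shorten the pause
    # (iputils arping syntax; interface and address are examples):
    arping -c 3 -U -I eth0 192.168.0.50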
2008 Oct 10
4
xenconsole: Could not open tty `/dev/pts/5': No such file or directory
Hi, I'm running a two-dom0-node cluster for HA, with Xen domUs running on top; currently it handles 5 domUs. The issue is that there's a domU that can only run on one node; it's not possible to start it on the other one. I get a 'b' status and can't access the console. Other domUs can start on either node and I can access their console without
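One low-cost check on the refusing node, assuming the usual cause of this message: xenconsole allocates a pseudo-terminal in dom0, so /dev/pts has to be mounted there:

    # in dom0 on the failing node:
    mount | grep devpts || mount -t devpts devpts /dev/pts
    # standard fstab line to make it persistent:
    # none  /dev/pts  devpts  gid=5,mode=620  0  0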
2008 Nov 20
27
lenny amd64 and xen.
I've installed Debian lenny amd64, which is frozen now. I've installed the kernel for Xen support but it doesn't start. It says "you need to load kernel first" but I've installed all the packages concerning Xen, also the packages related to the kernel. Perhaps lenny doesn't support Xen anymore? Any solution?
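"you need to load kernel first" is GRUB complaining that the selected boot stanza has no kernel line. With Xen the hypervisor takes the kernel slot and Linux is loaded as a module; a sketch for GRUB legacy on lenny (exact filenames depend on the installed packages, adjust to what is in /boot):

    title  Xen 3.2 / Debian lenny
    root   (hd0,0)
    kernel /boot/xen-3.2-1-amd64.gz
    module /boot/vmlinuz-2.6.26-1-xen-amd64 root=/dev/sda2 ro console=tty0
    module /boot/initrd.img-2.6.26-1-xen-amd64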
2009 Apr 15
32
cLVM on Debian/Lenny
Hi - Is there someone around who successfully got cLVM working on Debian/Lenny? I was wondering if I was the only one facing problems with it... Thanks in anticipation, -- Olivier Le Cam Département des Technologies de l'Information et de la Communication CRDP de l'académie de Versailles
2010 Feb 27
17
XEN and clustering?
Hi. I'm using Xen on a RHEL cluster, and I have strange problems. I gave raw volumes from storage to Xen virtual machines. With Windows, I have a problem that nodes don't see the volume as the same one... for example: clusternode1# clusvcadm -d vm:winxp clusternode1# dd if=/dev/mapper/winxp of=/node1winxp clusternode2# dd if=/dev/mapper/winxp of=/node2winxp clusternode3# dd
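A lighter-weight way to confirm whether the nodes really see different data, comparing a checksum of the start of the volume instead of copying whole images:

    # run on each cluster node; identical output means identical on-disk data
    dd if=/dev/mapper/winxp bs=1M count=100 2>/dev/null | md5sum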
2009 Jun 18
2
dahdi and overlapdial problem
Hi there, we have a problem with dahdi and overlapdial. We are running an E1 in Germany and are in need of overlapdial. The E1 is connected to a Sangoma A101. As soon as overlapdial is set to "yes" we have problems with incoming audio on the dahdi channels. When set to "no" all audio is fine. Basically we can choose between being able to receive calls or to place calls
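For context, overlapdial is set per channel block in chan_dahdi.conf; a minimal sketch of the relevant section (signalling values assumed for a German E1 running as PRI CPE):

    ; /etc/asterisk/chan_dahdi.conf (excerpt)
    [channels]
    switchtype = euroisdn
    signalling = pri_cpe
    overlapdial = yes    ; the setting that reportedly breaks incoming audio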
2010 Mar 08
4
Error with clvm
Hi, I get this error when I try to start clvm (Debian lenny). This is a clvm version built with openais: # /etc/init.d/clvm restart Deactivating VG ::. Stopping Cluster LVM Daemon: clvm. Starting Cluster LVM Daemon: clvm CLVMD[86475770]: Mar 8 11:25:27 CLVMD started CLVMD[86475770]: Mar 8 11:25:27 Our local node id is -1062730132 CLVMD[86475770]: Mar 8 11:25:27 Add_internal_client, fd = 7
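clvmd needs a running cluster stack before it starts; a sketch of the start order (init script names assumed, they differ between the cman and openais stacks):

    # the negative node id above is often just the node's IPv4 address
    # printed as a signed 32-bit integer, not itself an error
    /etc/init.d/openais start   # or: /etc/init.d/cman start
    /etc/init.d/clvm restart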
2010 Oct 14
12
best practices in using shared storage for XEN Virtual Machines and auto-failover?
Hi all, Can anyone please tell me what would be best practice for using shared storage with virtual machines, especially when it involves high availability / automated failover between 2 Xen servers? i.e. if I set up 2x identical Xen servers, each with say 16GB RAM, 4x 1GB NICs, etc. Then I need the Xen domUs to auto-failover between the 2 servers if either goes down (hardware
2012 Feb 23
2
lockmanager for use with clvm
Hi, I am setting up a cluster of KVM hypervisors managed with libvirt. The storage pool is on iSCSI with clvm. To prevent a VM from being started on more than one hypervisor, I want to use a lock manager with libvirt. I could only find sanlock as a lock manager, but AFAIK sanlock will not work in my setup as I don't have a shared filesystem. I have dlm running for clvm. Are there lock manager
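For what it's worth, libvirt's other lock driver is lockd (backed by the virtlockd daemon); enabling it is one line in qemu.conf, though its default file-based lockspace has the same shared-storage requirement, so treat this as a pointer rather than a verified fix:

    # /etc/libvirt/qemu.conf
    lock_manager = "lockd"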
2008 Jan 25
8
Processor architecture
Hi, How can I tell whether the processor architecture is x86 or x86_64? arch returns the architecture of the OS installed on the machine. But what happens when I install an x86 dom0 on an x86_64 machine? Then arch and #uname -a as well as #xm info all show x86, but the machine is really x86_64. Could anybody tell me how I can find out that this is an x86_64 machine? Is
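A 32-bit dom0 will report i686 from uname even on 64-bit hardware; the CPU's own capability is visible in /proc/cpuinfo via the lm (long mode) flag. A minimal check:

    grep -qw lm /proc/cpuinfo && echo "CPU is x86_64-capable" || echo "CPU is 32-bit only"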
2018 Jul 19
1
Re: [PATCH 2/3] New API: lvm_scan, deprecate vgscan (RHBZ#1602353).
On Wednesday, 18 July 2018 15:37:24 CEST Richard W.M. Jones wrote: > The old vgscan API literally ran vgscan. When we switched to using > lvmetad (in commit dd162d2cd56a2ecf4bcd40a7f463940eaac875b8) this > stopped working because lvmetad now ignores plain vgscan commands > without the --cache option. > > We documented that vgscan would rescan PVs, VGs and LVs, but without >
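For readers hitting the behaviour described in the quoted patch mail: with lvmetad enabled, a plain vgscan is effectively ignored and the cache variant must be used:

    vgscan --cache   # rescan devices and repopulate lvmetad's metadata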
2007 Nov 13
2
lvm over nbd?
I have a system with a large LVM VG partition. I was wondering if there is a way I could share the partition using nbd and have the nbd-client access the LVM as if it were local. SYSTEM A: /dev/sda3 is an LVM partition and is assigned to VG volgroup1. I want to share /dev/sda3 via nbd-server. SYSTEM B: receives A's /dev/sda3 as /dev/nbd0. I want to access it as VG volgroup1. I am
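A sketch of the setup being described, using the classic nbd-server/nbd-client invocations (IP and port are made up); the one hard rule is to never activate the VG on both systems at the same time:

    # SYSTEM A: export the raw partition
    nbd-server 2000 /dev/sda3

    # SYSTEM B: attach it and let LVM find the VG on it
    nbd-client 192.168.1.10 2000 /dev/nbd0
    vgscan                      # older LVM builds may also need nbd added to
    vgchange -ay volgroup1      # the "types" list in /etc/lvm/lvm.conf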
2007 Mar 22
6
Xen and SAN : snapshot XOR live-migration ?
Please tell me if I am wrong: Xen needs LVM to perform domU snapshots, and snapshots must be performed by dom0. Moreover, an LVM volume group should not be used by more than one kernel at the same time. So if we use SAN storage, a volume group should be activated on only one server and deactivated on the others. But if we do that, it should not be possible to perform live migration of
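The usual way out of this exclusive-or, assuming cLVM is an option, is cluster-aware activation: the VG stays visible everywhere while individual LVs are locked to one node at a time (VG and LV names below are made up):

    # with clvmd running on each dom0:
    lvchange -aey /dev/vg_san/domu1   # exclusive activation, cluster-wide lock
    lvchange -aln /dev/vg_san/domu1   # deactivate locally, e.g. around migration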
2018 Jul 18
5
[PATCH 0/3] New API: lvm_scan, deprecate vgscan (RHBZ#1602353).
[This email is either empty or too large to be displayed at this time]
2010 Nov 12
4
Opinion on best way to use network storage
I need the community's opinion on the best way to use my storage SAN to host Xen images. The SAN itself is running iSCSI and NFS. My goal is to keep all my Xen images on the SAN device, and to be able to easily move images from one host to another as needed while minimizing storage requirements and maximizing performance. Here are my options as I see them: 1) Export a directory through NFS.
2007 May 04
6
Xen console on Bladecenter remote KVM
Hello list, when using Xen on our BladeCenter with the Advanced Management Module everything works fine, but as soon as I switch the KVM to another blade and then switch back to the Xen machine, VGA output is unavailable (due to a wrong resolution or Hz frequency? It works if I don't switch away). The machine itself works, only the VGA output is gone after switching the KVM away from the box.
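A hedged workaround worth trying: pin the hypervisor to a plain text console so the management module's KVM renegotiation always has a mode it can restore (GRUB legacy syntax, filename assumed):

    kernel /boot/xen.gz vga=text-80x25,keep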
2010 Jun 14
49
iSCSI and LVM
Hi Everyone, I am going to get a storage server which will be connected to my Xen hosts via iSCSI/Ethernet. I wish to use LVM for the DomU disks. The storage server will have a RAID10 array, and 2 Xen hosts will connect to it (each will have a 50% share of the RAID10 array, space-wise). What is the best way to go about this? Should I: a) Split the RAID10 array into 2 partitions on the
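A sketch of the initiator side of a setup like this, assuming open-iscsi on each Xen host and made-up addresses/IQNs; each host then carves DomU disks out of its own VG:

    # discover and log in to the target
    iscsiadm -m discovery -t sendtargets -p 192.168.0.100
    iscsiadm -m node -T iqn.2010-06.example:raid10 -p 192.168.0.100 --login
    # the new block device (say /dev/sdb) becomes a PV
    pvcreate /dev/sdb
    vgcreate vg_domu /dev/sdb
    lvcreate -L 20G -n domu1-disk vg_domu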