similar to: Xen SAN Questions

Displaying 20 results from an estimated 5000 matches similar to: "Xen SAN Questions"

2008 Jul 31
6
drbd 8 primary/primary and xen migration on RHEL 5
Greetings. I've reviewed the list archives, particularly the posts from Zakk on this subject, and found results similar to his. DRBD provides a block-drbd script, but with full virtualization, at least on RHEL 5, this does not work; by the time the block script runs, qemu-dm has already been started. Instead I've been mulling the possibility of keeping the drbd
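For context, the xend-style disk specification that the block-drbd script handles looks roughly like this (a sketch; the resource name `r0` and device name are placeholders, not from the post):

```
# xend domU config fragment (sketch): the 'drbd:' prefix invokes the
# block-drbd hotplug script, which promotes the resource to primary
# before handing the device to the guest. With HVM guests on RHEL 5,
# qemu-dm starts before this hook fires -- the failure described above.
disk = [ 'drbd:r0,xvda,w' ]
```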
2009 Jul 18
1
GlusterFS & XenServer Baremetal
Hello, What is, in your view, the best GlusterFS scenario for using XenServer (I'm not talking about Xen on a Linux host but bare-metal XenServer) for a web farm (Apache/Tomcat)? I was thinking of using ZFS as the filesystem for the different nodes. The objectives/needs: * A storage cluster with capacity equal to at least one node (assuming all nodes are the same). * Being able to lose/take down any
2010 Mar 27
1
DRBD,GFS2 and GNBD without all clustered cman stuff
Hi all, what I want to achieve: 1) two storage servers replicating a partition with DRBD 2) exporting the drbd device via GNBD from the primary server 3) importing the GNBD on some nodes and mounting it with GFS2. Assuming no logical error in the points above, this is the situation: Server 1: LogVol09, DRBD configured as /dev/drbd0, replicated to Server 2. DRBD seems to work
2008 Nov 21
4
two dovecot server using the same file system
Hi all. I want to use two servers with Dovecot sharing a common file system via DRBD, so I have several questions. If one server writes a mail to the file system, it will use its own hostname as part of the mail identification; the second server will use its own name. Each server will generate its own mail numbers. When an IMAP or POP user consults the mails, there could be confusion. Am I correct? Is there
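For a setup like this, Dovecot's settings for serving one mail store from shared storage disable assumptions about local caching and locking. A minimal dovecot.conf sketch (values are illustrative, not a tested cluster configuration):

```
# dovecot.conf fragment (sketch) for a maildir shared between two
# servers over replicated/clustered storage.
mail_location = maildir:~/Maildir

# Avoid mmap assumptions that break when another host writes the
# same files; force index changes to be flushed to disk.
mmap_disable = yes
mail_fsync = always
```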
2008 Apr 23
10
WinXP CD-ROM problems
I'm running Xen on RHEL5. I've got two problems with CD-ROMs. In my .hvm file, I've got the following line: disk = [ 'phy:/dev/VG_Guests/WinXP-001,ioemu:hda,w', 'file:/opt/xen_stuff/winxp.iso,hdc:cdrom,w', 'phy:/dev/scd0,hdd:cdrom,r' ] The first CD-ROM (the one linking to the file winxp.iso) appears to WinXP to
2008 Feb 13
17
Xen 3.2 is not loading on FC8 - Error: Kernel panic - Attempted to kill init
Hi all, I compiled and installed the Xen 3.2 source on FC8. Compilation and installation completed with no errors. However, when I try to boot Xen I get an error: Kernel panic - Attempted to kill init! This is my grub configuration: # grub.conf generated by anaconda # # Note that you do not have to rerun grub after making changes to this file # NOTICE: You have a /boot partition. This means
2008 Jan 31
5
Exporting a VM
Hi everyone! I have a doubt: on XenExpress there's a command to export and import a VM; I think it's like this: xe vm-export uuid=0099... filename=/path and xe vm-import filename=/path ... I was wondering if there is something like this on Xen 3.0.3. If not, how can I make a complete backup of my VM to install it on another computer (is it possible?). Thanks for your time and
2011 Feb 27
1
Recover botched drbd gfs2 setup.
Hi. The short story... Rush job, never done clustered file systems before, and the vlan didn't support multicast. Thus I ended up with drbd working OK between the two servers but cman / gfs2 not working, so what was meant to be a drbd primary/primary cluster ran as primary/secondary until the vlan could be fixed, with gfs mounted on only one server. I got the single server
2010 Mar 24
10
how to synch multiple servers?
Is there a way to sync multiple servers at once, so that when one is changed, Samba updates all the other servers automatically?
2006 Jun 07
14
HA Xen on 2 servers!! No NFS, special hardware, DRBD or iSCSI...
I've been brainstorming... I want to create a 2-node HA active/active cluster (in other words, I want to run a handful of DomUs on one node and a handful on the other). In the event of a failure I want all DomUs to fail over to the other node and start working immediately. I want absolutely no single points of failure. I want to do it with free software and no special hardware. I want
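The piece a DRBD-based active/active design needs underneath a cluster filesystem is a dual-primary resource, declared roughly like this (a drbd.conf sketch; hostnames, disks and addresses are placeholders):

```
# drbd.conf fragment (sketch): both nodes may hold the resource
# primary at once, so a cluster FS (GFS/OCFS2) must sit on top.
resource r0 {
  protocol C;              # synchronous replication
  net {
    allow-two-primaries;   # enable primary/primary operation
  }
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sda5;
    address   192.168.1.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sda5;
    address   192.168.1.2:7788;
    meta-disk internal;
  }
}
```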
2008 Jan 28
4
can''t boot from cdrom
Hi All, following is my winxp hvm config: kernel = "/usr/lib/xen/boot/hvmloader" builder = 'hvm' memory = 192 name = "winxp" vcpus = 2 disk = [ 'file:/media/sda6/usr/xenimgs/winxp.img,ioemu:hda,w' ] device_model = '/usr/lib/xen/bin/qemu-dm.debug' cdrom='/dev/hda' sdl=1 boot='d' and
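One likely issue in a config like the above: with HVM guests the CD-ROM is normally declared inside the disk list rather than via a separate `cdrom=` line pointing at a host hard disk. A hedged sketch (the image path is the poster's; the host CD device name is an assumption):

```
# HVM config sketch: expose the host CD drive as hdc and boot from it.
disk = [ 'file:/media/sda6/usr/xenimgs/winxp.img,ioemu:hda,w',
         'phy:/dev/cdrom,hdc:cdrom,r' ]
boot = 'd'   # 'd' = boot from CD-ROM
```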
2008 Oct 16
1
GPLPV 0.9.10 & 0.9.11.pre17/18 Network Issues
Hello, I have been testing James' GPLPV drivers and found excellent performance when using iperf, but have been having issues when trying to download a file from a shared folder on my Windows 2003 Enterprise HVM to any other system, whether it is Linux or Windows. Basically my initial iperf tests were showing 937Mbits/sec down and 345Mbits/sec up, but when I try to copy a 2GB file
2006 Oct 12
5
AoE LVM2 DRBD Xen Setup
Hello everybody, I am in the process of setting up a really cool Xen server farm. Backend storage will be an LVMed AoE device on top of DRBD. The goal is to have the backend storage completely redundant. Picture (ASCII diagram, flattened in the archive): two RAID arrays, each backing one side of DRBD1 <----> DRBD2; the replicated device is exported over AoE into a global LVM VG shared by Dom0a, Dom0b and Dom0c.
2012 Feb 09
2
XL toolstack and drbd
Hej folks, I'm messing around with DRBD once again, but with a new Xen 4.1.2 installation using the XL toolstack instead of the xend daemon. However, after getting my DRBD installed and a device operational, trying to create a domU using the drbd block device doesn't want to work: xl create -c test.cfg Parsing config file test.cfg Unknown disk type: drbd My config has: disk
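The "Unknown disk type: drbd" error fits the fact that the `drbd:` disk type was handled by xend's hotplug scripts, which xl does not run. A common workaround sketch (assuming the resource, here hypothetically named `myres`, is already primary on this node) is to reference the DRBD device directly:

```
# Promote the resource first, outside the domU config:
#   drbdadm primary myres
# Then point xl at the block device as a plain phy: disk:
disk = [ 'phy:/dev/drbd0,xvda,w' ]
```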
2012 Jan 10
3
Clustering solutions - mail, www, storage.
Hi all. I am currently working for a hosting provider in an environment of 100+ Linux hosts. We have www and mail HA solutions; for storage we mainly use NFS at the moment. We are also using DRBD, Heartbeat and Corosync. I am now gathering info to build a cluster with: - two virtualization nodes (active master and passive slave); - two storage nodes (for VM files) used by the mentioned virtualization nodes
2009 Oct 26
6
LVM over Xen + Network
Hi, We are planning to use LVM over a network of 3 hardware machines (500 GB disk each). Each hardware machine will host 2-3 domUs. Can we store these domUs as Logical Volumes spread across the network of these 3 machines? Can one domU exceed the 500 GB physical drive size and store, say, 1 TB of data across the networked Physical Volumes? Has anyone done this before? Thanks and regards,
2009 Aug 27
8
cannot boot PV guest
This is my install profile F11.install: name="FC11-G1S2" memory=500 disk = [ 'phy:/dev/sda3,0,w' ] vif = [ 'bridge=eth0' ] vfb = [ 'type=vnc,vncunused=1' ] kernel = "/etc/xen/vm/vmlinuz.1" ramdisk = "/etc/xen/vm/initrd.img.1" vcpus=1 on_reboot = 'restart' on_crash = 'restart'
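One suspect in a profile like the above is the disk line: `0` as the second field is unusual, since PV guests normally get a named virtual device such as `xvda` that the guest kernel uses to find its root. A hedged sketch of that one line (device name is an assumption; the rest of the profile is unchanged):

```
# PV guest config sketch: give the virtual disk a proper device name
# so the guest kernel can locate its root device.
disk = [ 'phy:/dev/sda3,xvda,w' ]
```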
2008 Aug 23
7
Bridge Networking stops working at random times
Supermicro X7DWN+, XEON 5410, CentOS 5.2, Xen 3.2.1. At what look like random times, network traffic over the Xen bridge stops working; the only way I have found to fix it is a reboot. Sometimes this takes 10 minutes, other times the box may be up for 10 days. This happened with the default Xen that comes with RedHat EL 5.2 as well as with a default install of Fedora 8. Any ideas? ><> Nathan Stratton
2008 Sep 24
5
Bug#500047: xen-utils-3.0.3-1: domU reboot fails when using DRBD as vbd
Package: xen-utils-3.0.3-1 Version: 3.0.3-0-4 Severity: normal Rebooting from inside domU hangs in initrd: Begin: Waiting for root file system... ... Root file system is not available because underlying DRBD device got deactivated during reboot: $ cat /proc/drbd version: 8.0.13 (api:86/proto:86) GIT-hash: ee3ad77563d2e87171a3da17cc002ddfd1677dbe build by phil at fat-tyre, 2008-08-04
2006 May 14
16
lustre cluster file system and xen 3
Hi, I am setting up a Xen 3 environment that has a file backend and 2 application servers, with live migration between the 2 application servers. (ASCII diagram, flattened in the archive: app 1 and app 2 both connect to a shared file backend.) I am planning on using the Lustre cluster file system on the file backend. Are there