Hi,

I'm in the process of planning a SAN-based two-Dom0 redundant solution. I haven't got the equipment yet to do any testing.

What I would like to achieve: have a set of failover DomUs. Normally Dom0_0 would run DomU_0, DomU_2, ... and Dom0_1 would run DomU_1, DomU_3, ... These domains need access to some data, which could be common to some of them (e.g. a webserver and a fileserver). If I keep that data on the SAN on a CLVM LV formatted as GFS, I can access it from one DomU on each Dom0, so two DomUs in total (or will Xen allow me to export an LV as a partition to more than one DomU?). This is more of a problem in the failover case, when all DomUs are running on one Dom0.

Any ideas on this are welcome.

Thanks in advance.

Geza
Gémes Géza wrote:
> I'm in the process of planning a SAN-based two-Dom0 redundant solution.
> [...]
> If I keep that data on the SAN on a CLVM LV formatted as GFS, I can
> access it from one DomU on each Dom0, so two DomUs in total (or will
> Xen allow me to export an LV as a partition to more than one DomU?).

You can configure the devices with "w!" instead of "w" if you want to use the same backend in multiple domUs.

--
Christopher G. Stach II
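For illustration, disk lines along these lines (hypothetical LV path and guest device names, untested) would attach the same backend LV to two domUs:

# in the config of the first domU (e.g. the webserver)
disk = [ 'phy:/dev/vg_san/shared_data,sdb1,w!' ]

# in the config of the second domU (e.g. the fileserver)
disk = [ 'phy:/dev/vg_san/shared_data,sdb1,w!' ]

Note that "w!" only tells the block backend to allow the sharing; it does nothing to coordinate concurrent writes, so the data on that device still needs a cluster-aware filesystem such as GFS (or some other guarantee of exclusive access) to stay consistent.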
jimmypierre.rouen.france
2006-Apr-26 16:59 UTC
[Xen-users] Error Launching a SuSe 10 image
Suse76:/imagesxen # xm create /etc/xen/suse10 -c
Using config file "/etc/xen/suse10".
Started domain suse10
Linux version 2.6.13-15.8-xen (geeko@buildhost) (gcc version 4.0.2 20050901 (prerelease) (SUSE Linux)) #1 SMP Tue Feb 7 11:07:24 UTC 2006
BIOS-provided physical RAM map:
 Xen: 0000000000000000 - 0000000010000000 (usable)
0MB HIGHMEM available.
264MB LOWMEM available.
ACPI in unprivileged domain disabled
IRQ lockup detection disabled
Built 1 zonelists
Kernel command line: root=/dev/sda1 ro 5
Initializing CPU#0
PID hash table entries: 2048 (order: 11, 32768 bytes)
Xen reported: 1007.282 MHz processor.
Dentry cache hash table entries: 65536 (order: 6, 262144 bytes)
Inode-cache hash table entries: 32768 (order: 5, 131072 bytes)
Software IO TLB disabled
vmalloc area: d1000000-fb7fe000, maxmem 34000000
Memory: 249472k/270336k available (2362k kernel code, 12380k reserved, 829k data, 180k init, 0k highmem)
Checking if this processor honours the WP bit even in supervisor mode... Ok.
Security Framework v1.0.0 initialized
SELinux: Disabled at boot.
Mount-cache hash table entries: 512
CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line)
CPU: L2 Cache: 256K (64 bytes/line)
Enabling fast FPU save and restore... done.
Checking 'hlt' instruction... disabled
checking if image is initramfs... it is
Freeing initrd memory: 4270k freed
Brought up 1 CPUs
NET: Registered protocol family 16
Brought up 1 CPUs
ACPI: Subsystem revision 20050408
ACPI: Interpreter disabled.
Linux Plug and Play Support v0.97 (c) Adam Belay
pnp: PnP ACPI: disabled
xen_mem: Initialising balloon driver.
PCI: Using ACPI for IRQ routing
PCI: If a device doesn't work, try "pci=routeirq". If it helps, post a report
PCI: System does not support PCI
PCI: System does not support PCI
TC classifier action (bugs to netdev@vger.kernel.org cc hadi@cyberus.ca)
Grant table initialized
audit: initializing netlink socket (disabled)
audit(1146070099.926:1): initialized
Total HugeTLB memory allocated, 0
VFS: Disk quotas dquot_6.5.1
Dquot-cache hash table entries: 1024 (order 0, 4096 bytes)
Initializing Cryptographic API
vesafb: abort, cannot ioremap video memory 0x0 @ 0x0
vesafb: probe of vesafb.0 failed with error -5
PNP: No PS/2 controller found. Probing ports directly.
i8042.c: No controller found.
io scheduler noop registered
io scheduler anticipatory registered
io scheduler deadline registered
io scheduler cfq registered
RAMDISK driver initialized: 16 RAM disks of 64000K size 1024 blocksize
loop: loaded (max 8 devices)
Xen virtual console successfully installed as tty1
Event-channel device installed.
Successfully initialized TPM backend driver.
Registering block device major 8
netfront: Initialising virtual ethernet driver.
xen_tpm_fr: Initialising the vTPM driver.
mice: PS/2 mouse device common for all mice
md: md driver 0.90.2 MAX_MD_DEVS=256, MD_SB_DISKS=27
md: bitmap version 3.38
NET: Registered protocol family 2
IP route cache hash table entries: 4096 (order: 2, 16384 bytes)
TCP established hash table entries: 16384 (order: 5, 131072 bytes)
TCP bind hash table entries: 16384 (order: 5, 131072 bytes)
TCP: Hash tables configured (established 16384 bind 16384)
TCP reno registered
NET: Registered protocol family 1
NET: Registered protocol family 8
NET: Registered protocol family 20
Freeing unused kernel memory: 180k freed
Starting udev
Creating devices
Loading ide-disk
Uniform Multi-Platform E-IDE driver Revision: 7.00alpha2
ide: Assuming 50MHz system bus speed for PIO modes; override with idebus=xx
Loading sd_mod
SCSI subsystem initialized
register_blkdev: cannot get major 8 for sd
Loading via82cxxx
Loading aic7xxx
Loading processor
Loading thermal
FATAL: Error inserting thermal (/lib/modules/2.6.13-15.8-xen/kernel/drivers/acpi/thermal.ko): No such device
Loading fan
FATAL: Error inserting fan (/lib/modules/2.6.13-15.8-xen/kernel/drivers/acpi/fan.ko): No such device
Loading reiserfs
Waiting for device /dev/sda1 to appear: ok
rootfs: major=8 minor=1 devn=2049
Mounting root /dev/sda1
mount: No such device
Kernel panic - not syncing: Attempted to kill init!

_________________________

The config file:

suse76:/imagesxen # more /etc/xen/suse10
kernel = "/boot/vmlinuz-2.6.13-15.8-xen"
ramdisk = "/boot/initrd-2.6.13-15.8-xen"
memory = 256
name = "suse10"
nics = 1
vif = [ 'bridge=xenbr0' ]
disk = [ 'file:/imagesxen/suse10.img,sda1,w' ]
root = "/dev/sda1 ro"
extra = "5"
suse76:/imagesxen #

Thanks for your help,

Jimmy
Hi,

why not give every domU its own SAN device? There is no need for CLVM and GFS in that case. You should just think about how to get consistent naming of the SAN devices, e.g. with multipathd or that SCSI persistent-names package (I can't recall the name right now; if you need it, I can look it up again).

What is the benefit of GFS? I did a GFS cluster recently, and it is definitely NOT trivial to set up and keep running, and a single mistake is guaranteed to CRASH all nodes in your GFS cluster. We tested that rather a lot ...

Regards,
Schlomo

PS: I didn't try GFS+XEN ...

On Wed, 26 Apr 2006, Christopher G. Stach II wrote:
> Gémes Géza wrote:
> > [...]
>
> You can configure the devices with "w!" instead of "w" if you want to
> use the same backend in multiple domUs.
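To illustrate the one-LUN-per-domU idea (a sketch only, with a made-up WWID, alias and paths): give each LUN a stable name on every Dom0 via multipath, then hand that device straight to one domU.

# /etc/multipath.conf fragment, identical on both Dom0s
multipaths {
    multipath {
        wwid  3600a0b80000f1a2b0000000012345678
        alias domu0_disk
    }
}

# in the config of DomU_0 (the same line works on whichever Dom0 runs it)
disk = [ 'phy:/dev/mapper/domu0_disk,sda1,w' ]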
On Wednesday 26 April 2006 2:38 pm, Schlomo Schapiro wrote:
> why not give every domU a SAN device ? no need for CLVM and GFS in this

That should work, but it is tied to the partitioning capabilities of the SAN device. I think most FC and iSCSI arrays can export several LUNs, but it's very inflexible; usually they can't be resized, and sometimes you can't even merge two contiguous deleted LUNs. This is the job of LVM.

The LVM layer is really thin and efficient, and it's also easy to set up from the command line. The only downside is that to make any modification to a volume group, ALL hosts that use the volume group have to disconnect from it and reconnect after the changes have been made, even those that don't use any of the LVs that changed. For this reason, in a cluster environment it's almost mandatory to use CLVM. It's the very same package, just configured to propagate any changes using the cluster-managing infrastructure of GFS. That part can be complex and error-prone, but it only affects the ability to make modifications to the volume group.

In short: if there won't be many changes to the volumes, and a short downtime is acceptable for any changes, then simple LVM is the way to go.

--
Javier
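A minimal sketch of that plain-LVM approach, with made-up device and volume names and assuming the shared SAN LUN appears as /dev/sdb on both Dom0s:

# on one Dom0: put the LUN under LVM and carve one LV per domU
pvcreate /dev/sdb
vgcreate vg_san /dev/sdb
lvcreate -L 10G -n domu0_disk vg_san

# on the other Dom0 (and again after any VG change), re-read the metadata
vgscan
vgchange -ay vg_san

# in the domU config
disk = [ 'phy:/dev/vg_san/domu0_disk,sda1,w' ]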
Schlomo Schapiro wrote:
> why not give every domU a SAN device ? no need for CLVM and GFS in this
> case. You should just think about how to get consistent naming of the
> SAN devices, e.g. with multipathd or that SCSI persistent-names package.

You get consistent naming with CLVM.

> What is the benefit of GFS ? I did a GFS cluster recently, and it is
> definitely NOT trivial to set up and keep running, and a single mistake
> is guaranteed to CRASH all nodes in your GFS cluster. We tested that
> rather a lot ...

The benefit is that multiple hosts can write safely to the same filesystem.

It's not that difficult to set up if you read all of the documentation and actually understand it before you begin. Pick your lock manager, find out what needs to be shared and with whom, write your cluster configuration, and start up the machines. If you have problems with joining or leaving the cluster, or with fencing, it's all pretty well documented. It's not simple, but it's not as complicated as everyone makes it sound.

Crash or fence? A single mistake will probably only prevent your cluster from starting, or at worst you would get an undesired cluster partition, which is easily detected and easily fixed.

--
Christopher G. Stach II
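For reference, once the cluster infrastructure (cman, fenced, clvmd) is running on both Dom0s, creating and mounting a GFS volume on a CLVM LV looks roughly like this; a sketch with hypothetical cluster and volume names, not a tested recipe:

# one journal per node that will mount the filesystem
gfs_mkfs -p lock_dlm -t xencluster:shared_data -j 2 /dev/vg_san/shared_data

# on each node that needs the data
mount -t gfs /dev/vg_san/shared_data /srv/shared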
Hi,

> You get consistent naming with CLVM.

Big overkill. Other tools will give you consistent names without the cluster overhead.

> The benefit is that multiple hosts can write safely to the same filesystem.

Which is not what you need for XEN. Here you need multiple hosts accessing the same disk space, not necessarily in the form of a filesystem. As long as you can guarantee from "above" (e.g. your XEN management software) that each piece of data will be accessed by only one host at a time, you don't need a cluster FS to enforce this for you.

> It's not that difficult to set up if you read all of the documentation
> and actually understand it before you begin. [...] It's not simple,
> but it's not as complicated as everyone makes it sound.

As I said, I did it, and you are assuming things about me without justification.

> Crash or fence? A single mistake will probably only prevent your
> cluster from starting, or at worst you would get an undesired cluster
> partition, which is easily detected and easily fixed.

No, that part worked well. But messing with the IPs and network interfaces will lock up the entire cluster, for example removing a secondary IP from the interface that was used for the DLM traffic. And of course losing quorum :-)

In any case I didn't want to put GFS down, but rather point out a simpler way to achieve a similar goal.

--
Regards,
Schlomo
On Wed, Apr 26, 2006 at 02:50:33PM -0500, Javier Guerra wrote:
> On Wednesday 26 April 2006 2:38 pm, Schlomo Schapiro wrote:
> > why not give every domU a SAN device ? no need for CLVM and GFS in this
>
> that should work, but it's tied to the partitioning capabilities of the SAN
> device. i think most FC and iSCSI arrays can export several LUNs, but it's
> very inflexible; usually can't be resized, sometimes can't even merge two
> contiguous deleted LUNs. this is the job of LVM.

All good FC and iSCSI arrays let you resize LUNs... :)

Example of a good iSCSI SAN: http://www.equallogic.com

--
Pasi
Hi,

let me add a remark:

> Which is not what you need for XEN. Here you need multiple hosts accessing
> the same disk space, not necessarily in the form of a filesystem. As long as
> you can guarantee from "above" (e.g. your XEN management software) that
> each piece of data will be accessed by only one host at a time, you don't
> need a cluster FS to enforce this for you.

I have found that it would be very valuable to have access to the same filesystem from several hosts at the same time: think of backup/restore! If the backup software can reach the filesystems of the xenUs directly, there is no need to install a backup client in each xenU; it would be enough for the backup host to have (read) access to all filesystems. That would save management work.

--
Mit freundlichen Gruessen / With best regards

Reiner Dassing
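A sketch of how that could look with the shared GFS volume discussed above; paths are made up, and it assumes the backup host is itself a member of the cluster with a GFS journal allocated for it:

# on the backup host, mount the shared filesystem read-only
mount -t gfs -o ro /dev/vg_san/shared_data /mnt/backup_view

# then back it up from there instead of from inside each xenU, e.g.
tar czf /backup/shared_data.tar.gz -C /mnt/backup_view .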