Displaying 20 results from an estimated 7000 matches similar to: "Xen VMs on GFS"
2009 Apr 29
3
GFS and Small Files
Hi all,
We are running CentOS 5.2 64bit as our file server.
Currently, we use GFS (with CLVM underneath it) as our filesystem
(for our multiple 2TB SAN volume exports), since we plan to add more
file servers (serving the same content) later on.
The issue we are facing at the moment is that a command
such as 'ls' gives a very slow response (e.g. 3-4 minutes for the
output of ls
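A common first step for slow directory listings on GFS is mounting with noatime/nodiratime, so a plain 'ls' does not dirty inodes and generate extra lock traffic. A minimal sketch, with the device and mount point names assumed:
mount -o remount,noatime,nodiratime /dev/vg_san/lv_export /export
# and persist it in /etc/fstab:
# /dev/vg_san/lv_export  /export  gfs  defaults,noatime,nodiratime  0 0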
2006 Oct 12
5
AoE LVM2 DRBD Xen Setup
Hello everybody,
I am in the process of setting up a really cool Xen server farm. Backend
storage will be an LVMed AoE device on top of DRBD.
The goal is to have the backend storage completely redundant.
Picture:

  |RAID|            |RAID|
  |DRBD1| <-----> |DRBD2|
        \          /
         |  VMAC  |
         |  AoE   |
      |global LVM VG|
       /     |     \
  |Dom0a| |Dom0b| |Dom0c|
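A rough sketch of how a stack like this is usually assembled, bottom-up; the resource, interface, and VG names here are assumptions:
# On the active DRBD node: export the replicated device over AoE (shelf 0, slot 0)
vblade 0 0 eth1 /dev/drbd0
# On each dom0: load the AoE driver, discover the device, and build the VG
modprobe aoe
aoe-discover
pvcreate /dev/etherd/e0.0
vgcreate xenvg /dev/etherd/e0.0
lvcreate -L 10G -n vm01-disk xenvg   # one LV per guest; concurrent
                                     # activation from several dom0s needs CLVM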
2008 May 29
3
GFS
Hello:
I am planning to implement GFS for my university as a summer project. I have
10 servers, each with SAN disks attached. I will be reading and writing many
files for professors' research projects. Each file can be anywhere from 1KB
to 120GB (fluid dynamics research images). The 10 servers will be using NIC
bonding (1Gb network). So, would GFS be ideal for this? I have been reading
a lot
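One sizing detail worth knowing up front: GFS needs one journal per node that will mount the filesystem, fixed at mkfs time. A minimal sketch, with the cluster, filesystem, and device names assumed:
gfs_mkfs -p lock_dlm -t mycluster:research -j 10 /dev/vg_san/lv_research
mount -t gfs /dev/vg_san/lv_research /srv/research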
2009 Feb 25
2
1/2 OFF-TOPIC: How to use CLVM (on top of AoE vblades) instead of just plain LVM for Xen-based VMs on Debian 5.0?
Guys,
I have set up my hard disk with 3 partitions:
1- 256MB on /boot;
2- 2GB on / for my dom0 (Debian 5.0) (eth0 is the default bridge for the guests' LAN);
3- 498GB exported with vblade-persist to my network (eth1 for the AoE
protocol).
On dom0 hypervisor01:
vblade-persist setup 0 0 eth1 /dev/sda3
vblade-persist start all
How do I create a CLVM VG with /dev/etherd/e0.0 on each of my dom0s?
Including the
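The usual CLVM sequence looks roughly like this, assuming cman and clvmd are installed on every dom0 (if the lvmconf helper is not available, set locking_type = 3 in /etc/lvm/lvm.conf by hand):
# On each dom0: switch LVM to cluster-wide locking, then start clvmd
lvmconf --enable-cluster
/etc/init.d/clvmd start
# On one dom0 only: create the PV and a clustered VG on the AoE device
pvcreate /dev/etherd/e0.0
vgcreate -c y vg_guests /dev/etherd/e0.0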
2008 Aug 21
1
Shared Storage Options
Hello all.
I would like to canvass some opinions on options for shared storage in a Xen cluster. So
far I've experimented with using iSCSI and CLVM, with mixed success.
The primary concern I have with both of these options is that there seems to be no obvious
way to ensure exclusive access to the LUN/device for the VM I want to run. On a couple of
occasions during my playing I've
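With CLVM, one way to get that exclusivity is exclusive LV activation: the cluster lock manager then refuses to activate the same LV on another node. A sketch, with the VG/LV names assumed:
lvchange -aey /dev/vg_guests/vm01-disk   # exclusive activation on this node
lvchange -an  /dev/vg_guests/vm01-disk   # release it before migrating the VM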
2010 Jan 14
8
XCP - GFS - ISCSI
Hi everyone!
I have 2 hosts + 1 iSCSI device.
I want to create a shared storage repository that both hosts can use together. I
won't use NFS.
Prepared SR:
xe sr-create host-uuid=xxx content-type=user name-label=NAS1
shared=true type=iscsi
device-config:target=xxxx device-config:targetIQN=xxxx
The hosts see the iSCSI device:
scsi4 : iSCSI Initiator over TCP/IP
scsi 4:0:0:0: Direct-Access NAS
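For a shared LVM-over-iSCSI SR, XCP normally wants type=lvmoiscsi plus the LUN's SCSIid; a sketch with placeholder target values:
xe sr-probe type=lvmoiscsi device-config:target=10.0.0.10 device-config:targetIQN=iqn.2010-01.com.example:nas1
# sr-probe lists the LUNs and their SCSIids; then:
xe sr-create host-uuid=xxx content-type=user name-label=NAS1 shared=true type=lvmoiscsi device-config:target=10.0.0.10 device-config:targetIQN=iqn.2010-01.com.example:nas1 device-config:SCSIid=<id from sr-probe>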
2010 Aug 10
1
GFS/GFS2 on CentOS
Hi all,
If you have had experience hosting GFS/GFS2 on CentOS machines, could
you share your general impressions? Was it reliable? Fast? Any
issues or concerns?
Also, how feasible is it to start it on just one machine and then grow
it out if necessary?
Thanks.
Boris.
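On the growth question: GFS2 can start with a single journal and gain more as nodes are added. A minimal sketch, with cluster and device names assumed:
mkfs.gfs2 -p lock_dlm -t mycluster:data -j 1 /dev/vg0/data
mount -t gfs2 /dev/vg0/data /data
gfs2_jadd -j 1 /data    # later: add one journal for each new node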
2005 Sep 14
3
CentOS + GFS + EtherDrive
I am considering building a pair of storage servers that will be using
CentOS and GFS to share the storage from a Coraid (SATA+RAID) EtherDrive
shelf. Has anyone else tried such a setup?
Is GFS stable enough to use in a production environment?
There is a build of GFS 6.1 at http://rpm.karan.org/el4/csgfs/. Has anyone
used this? Is it stable?
Will I run into any problems if I use CentOS 3.5
2008 Feb 09
13
locking and gfs
Hi there, I run Samba as a PDC and tried to make this PDC highly available with
Red Hat Cluster Suite and GFS. I experienced the following problem while
doing this:
If I set the option locking = no in smb.conf, it takes about 4 minutes to
copy a file of 1GB size. If I set locking = yes, it takes about 1 hour. I'm
not sure if locking = no turns off all locking options. At least
I need locking
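For reference, the locking-related smb.conf parameters are per-share; a sketch of the ones that usually matter on a clustered filesystem (values illustrative, not a recommendation):
[share]
   path = /gfs/pdc-share
   locking = yes          # byte-range locks, needed for correctness
   posix locking = yes    # map SMB locks onto fcntl locks so GFS sees them
   oplocks = no           # client-side caching is unsafe across cluster nodes
   kernel oplocks = no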
2009 Apr 15
32
cLVM on Debian/Lenny
Hi -
Has anyone around here successfully got cLVM on Debian/Lenny
working? I was wondering if I was the only one facing problems with it...
Thanks in anticipation,
--
Olivier Le Cam
Département des Technologies de l'Information et de la Communication
CRDP de l'académie de Versailles
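On Lenny, the pieces that have to line up are the clvm package, cluster locking in lvm.conf, and a running cluster manager before clvmd starts; a sketch (package and init script names assumed):
apt-get install clvm
# /etc/lvm/lvm.conf:  locking_type = 3
/etc/init.d/cman start    # or whichever cluster manager provides the DLM
/etc/init.d/clvmd start
vgchange -cy myvg          # mark an existing VG as clustered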
2006 Feb 13
1
Managing multiple Dom0's
Are there people currently managing multiple Dom0's and making
significant use of migration?
I'm looking at a potential setup of 16 or so physical systems with
perhaps 64 or so virtual systems (DomU's; I suppose Dom0 is also a
virtual system if I understand things correctly, but that's not what I
mean here).
It seems to me that making heavy use of migration for
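For heavy migration use, each dom0 needs the relocation server enabled; a sketch of the moving parts (hostnames assumed):
# /etc/xen/xend-config.sxp on every dom0:
#   (xend-relocation-server yes)
#   (xend-relocation-port 8002)
xm migrate --live vm01 dom0-02.example.org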
2009 Jan 27
20
Xen SAN Questions
Hello Everyone,
I recently had a question that got no responses about GFS+DRBD clusters for Xen VM storage, but after some consideration (and a lot of Googling) I have a couple of new questions.
Basically, what we have here are two servers that will each have a RAID-5 array of 5 x 320GB SATA drives. I want to have these as usable file systems on both servers (as they will both be
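For GFS mounted on both nodes at once, DRBD has to run dual-primary; a sketch of the relevant drbd.conf fragment (resource name assumed, DRBD 8.x syntax):
resource r0 {
  net {
    allow-two-primaries;      # both nodes may hold the device primary
  }
  startup {
    become-primary-on both;   # promote both sides at boot
  }
}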
2008 Jan 15
6
live migration breaking...aoe issue?
I am trying to get a proof-of-concept type setup going. I have a
storage box and 2 Xen servers. I am using file-based disks that live
on the AoE device. I can run the VM from either host without
issue. When I run the live migration, the domain leaves the xm list
on host 1 and shows up on host 2 (however, there is a pause for pings
of about 2 minutes). After it is on host 2, I can xm console
2010 Feb 27
17
XEN and clustering?
Hi.
I'm using Xen on a RHEL cluster, and I have strange problems. I gave raw
volumes from storage to the Xen virtual machines. With Windows, I have a
problem that the nodes don't see the volume as the same one... for example:
clusternode1# clusvcadm -d vm:winxp
clusternode1# dd if=/dev/mapper/winxp of=/node1winxp
clusternode2# dd if=/dev/mapper/winxp of=/node2winxp
clusternode3# dd
2008 Jun 01
1
TCP/IP Settings
Hi everyone,
I am currently converting my AoE/GFS setup over to GlusterFS. So far I have been very impressed with Gluster and am loving its ease of use. I wrestled with GFS for days, and still do too much IMO. ;)
I had a question about TCP/IP settings. I know InfiniBand interconnects would give us the best performance, but we currently do not have the budget to upgrade everything, so I
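On plain gigabit, the usual starting points are larger socket buffers and, if the switch supports them, jumbo frames; a sketch with illustrative (not tuned) values:
# /etc/sysctl.conf
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# jumbo frames, only if every device in the path supports them:
ifconfig eth1 mtu 9000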
2006 Apr 25
14
Xen Partition Performance
Hello,
I'm setting up a Xen system, and I have different choices for
creating the domU's partitions: raw partition, LVM, files.
I've done some tests with hdparm and they all seem to be the same.
Can anyone please share with me what the best method is?
Best regards,
Luis
2007 Mar 10
0
Questions about pass through block devices
What happens to the block layer if the block request queue gets very,
very long?
Imagine there are many DomUs and they are all working against VERY
SLOW disks.
I wish I understood the entire chain better, and I'm sorry to ask
such a vague question.
I understand this is a weird question, because the performance
implications of this scenario are horrific. The reason I ask is that
2010 Mar 24
3
mounting gfs partition hangs
Hi,
I have configured two machines for testing GFS filesystems. They are
attached to an iSCSI device, and the CentOS version is:
CentOS release 5.4 (Final)
Linux node1.fib.upc.es 2.6.18-164.el5 #1 SMP Thu Sep 3 03:33:56 EDT 2009
i686 i686 i386 GNU/Linux
The problem is that if I try to mount a GFS partition, it hangs.
[root at node2 ~]# cman_tool status
Version: 6.2.0
Config Version: 29
Cluster Name:
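A hanging GFS mount is very often a fencing problem: the mount blocks until the fence domain is clean. A sketch of what to check on both nodes:
cman_tool nodes    # are both nodes cluster members?
group_tool         # fence/dlm/gfs groups should not be stuck joining
fence_tool ls      # is this node actually in the fence domain?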
2007 Oct 06
1
GFS-kernel module - Version Magic Error
Are the RPMs for the latest GFS kernel module,
GFS-kernel-2.6.9-72.2.0.8, for use with kernel version 2.6.9-55.0.9.EL,
available? I tried to compile the source RPMs available from the Red
Hat site, but the modules can't be loaded because of an invalid module
format arising from version magic issues. The syslog shows:
node0 kernel: gfs: version magic '2.6.9-55.0.9.ELsmp SMP 686 REGPARM
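The vermagic baked into the module can be checked directly against the running kernel; a sketch:
uname -r                        # e.g. 2.6.9-55.0.9.ELsmp
modinfo gfs | grep vermagic     # must match the running kernel exactly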
2008 Jun 09
1
Slow gfs performance
HI,
Sorry for repeating same mail ,while composing that mail i mistakenly typed
the send button.I am facing problem with my gfs and below is the running
setup.
My setup
Two node Cluster[only to create shared gfs file system] with manual fencing
running on centos 4 update 5 for oracle apps.
Shared gfs partition are mounted on both the node[active-active]
Whenever i type df -h command it
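df is slow on GFS because statfs has to gather cluster-wide counts; newer GFS builds have a tunable that serves it from cached values instead. A sketch, with the mount point assumed (the tunable only exists on more recent GFS releases):
gfs_tool settune /oracle_apps statfs_fast 1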