Displaying 9 results from an estimated 9 matches for "sharedroot".
2007 Nov 08
0
Diskless Sharedroot Cluster
Hi all!
We have just released new updates in the yum channel of the open-sharedroot
software for CentOS4 and CentOS5.
With the open-sharedroot software package, you can build an NFS- or GFS-based
diskless sharedroot cluster on CentOS4 and CentOS5. It also contains a
toolset to clone or back up entire diskless sharedroot clusters.
Open Sharedroot homepage:
http://open-sharedr...
2008 Jul 03
0
Last and final official release candidate of the com.oonics open shared root cluster installation DVD is available (RC4)
...open shared root cluster with the use of
anaconda, the well-known installation software provided by Red Hat. After the
installation, the open shared root cluster can easily be scaled up to more
than a hundred cluster nodes.
You can now download the open shared root installation DVD from
www.open-sharedroot.org.
We are very interested in feedback. Please either file a bug or feature
request, or post to the mailing list (see www.open-sharedroot.org).
More details can be found here:
http://open-sharedroot.org/news-archive/availability-of-rc4-of-the-com-oonics-version-of-anaconda
Note: The download isos are b...
2008 Mar 20
0
First official release candidate of the com.oonics open shared root cluster installation DVD is available (RC3)
...open shared root cluster with the use of
anaconda, the well-known installation software provided by Red Hat. After the
installation, the open shared root cluster can easily be scaled up to more
than a hundred cluster nodes.
You can now download the open shared root installation DVD from
www.open-sharedroot.org.
We are very interested in feedback. Please either file a bug or feature
request, or post to the mailing list (see www.open-sharedroot.org).
More details can be found here:
http://www.open-sharedroot.org/news-archive/availability-of-first-beta-of-the-com-oonics-version-of-anaconda.
Note: The downloa...
2009 Apr 29
3
GFS and Small Files
Hi all,
We are running CentOS 5.2 64bit as our file server.
Currently, we use GFS (with CLVM underneath it) as our filesystem
(for our multiple 2TB SAN volume exports), since we plan to add more
file servers (serving the same contents) later on.
The issue we are facing at the moment is that commands
such as 'ls' give a very slow response (e.g. 3-4 minutes for the
outputs of ls
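A common explanation for slow 'ls' on GFS (offered here as a sketch, not a confirmed diagnosis of this poster's setup) is that every stat() call must take a cluster-wide lock, so a long-format or color-aliased listing pays that cost once per file. Comparing a readdir-only listing against a stat-heavy one on a directory of many small files illustrates the difference:

```shell
# Sketch: compare a readdir-only listing with a per-file stat() listing.
# On GFS each stat() acquires a cluster lock, so the gap between these
# two commands is typically far larger than on a local filesystem.
dir=$(mktemp -d)
for i in $(seq 1 1000); do : > "$dir/file$i"; done
time ls -f "$dir" > /dev/null   # unsorted readdir only, no per-file stat
time ls -l "$dir" > /dev/null   # stat()s every entry
rm -rf "$dir"
```

On a local filesystem both finish in milliseconds; the point is that on GFS the second form is the one that stalls for minutes, and unaliasing 'ls' (or using 'ls -f') sidesteps it.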
2010 Mar 05
0
outstanding patches
I've got two old patches that will probably end up getting lost in the
list; can someone ack?
[PATCH node] Enables ability to have a common shared root
[PATCH node] prevent hostvg and sharedroot from accepting same drive input
2008 Jun 09
1
Slow gfs performance
Hi,
Sorry for repeating the same mail; while composing it I mistakenly hit
the send button. I am facing a problem with my GFS, and below is the running
setup.
My setup
Two-node cluster [only to create a shared GFS file system] with manual fencing,
running on CentOS 4 update 5 for Oracle Apps.
Shared GFS partitions are mounted on both nodes [active-active].
Whenever I type the df -h command it
2008 Jul 30
5
slow NFS speed
We upgraded from a 10/100 Mb/s link to a bond of two 100/1000 NICs. We notice
NFS speeds of around 70-80 Mb/sec, which is slow, especially with
bonding. I was wondering if we need to tune anything special in the
network and NFS settings. Does anyone have any experience with this?
TIA
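For threads like this, the usual first checks (a hedged sketch, not a confirmed fix for this poster's setup) are the bonding mode and the NFS transfer sizes. A CentOS 4/5-era /etc/modprobe.conf fragment for a two-slave gigabit bond might look like:

```shell
# Hypothetical /etc/modprobe.conf fragment (CentOS 4/5 era).
# mode=4 (802.3ad) requires switch support; note that a single NFS TCP
# stream still rides one slave, so one client won't exceed one link's speed.
alias bond0 bonding
options bond0 mode=4 miimon=100

# Larger NFS read/write sizes over TCP are the other common knob,
# e.g. an /etc/fstab line (values are illustrative):
# server:/export  /mnt/nfs  nfs  tcp,rsize=32768,wsize=32768  0 0
```

Whether bonding helps at all depends on the hash policy and the number of client/server IP pairs; with a single client, 70-80 Mb/sec may simply be one link's effective throughput.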
2008 Jul 15
4
Bonding and Xen
Has anyone implemented this successfully?
I am asking because we are implementing Xen on our test lab machines,
which hold up to three 3Com- and Intel-based 10/100 Mb/s NICs.
These servers are meant to replace MS messaging and intranet web servers,
which handle up to 5000 hits per day and thousands of mails, and the
Dom0 probably could not handle this kind of setup with only one 100 Mb/s
2008 Jan 02
4
Xen, GFS, GNBD and DRBD?
Hi all,
We're looking at deploying a small Xen cluster to run some of our
smaller applications. I'm curious to get the list's opinions and advice
on what's needed.
The plan at the moment is to have two or three servers running as the
Xen dom0 hosts and two servers running as storage servers. As we're
trying to do this on a small scale, there is no means to hook the