Displaying 20 results from an estimated 2000 matches similar to: "Share GFS2 partition freezes when new node add in the cluster"
2006 Jan 26
0
Xen Testing (Jan 21st SRC tarball) with RHCS
Hi all,
I've posted to the linux-cluster list about an issue I'm having with
RHCS and Xen; it seems all my DomU's can't join the cluster. The answer I
got was this
----------------------------
Ah, you're running Xen!
There's a bug in cman_tool in that it only looks at the first 16 interfaces
and Xen seems to stuff loads more in there.
The attached
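A quick way to check whether a dom0 has gone past that 16-interface limit; a rough sketch, the vif naming is just the usual Xen convention:
# every running domU adds vifX.Y devices in dom0, so the total climbs fast
ip -o link show | wc -l          # total interfaces cman_tool would have to scan
ip -o link show | grep -c vif    # how many of them are Xen vifs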
2011 Nov 25
0
Failed to start a "virtual machine " service on RHCS in CentOS 6
Hi, all:
I have two physical machines as KVM hosts (clusterA.RHCS and clusterB.RHCS) and an iSCSI target set up with GFS.
All I want is an HA cluster that can migrate all the virtual machines on a node to the other one when that node runs into some error state.
So I created a cluster "cluster" using RHCS, added the two hosts into the cluster, and created a fence device.
for every virtual
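Once the guests are defined as <vm> services in cluster.conf, day-to-day control normally goes through rgmanager's tools; a minimal sketch, where the guest name guest01 is hypothetical and the host names are the ones from the post:
clusvcadm -e vm:guest01 -m clusterA.RHCS   # start the VM service on node A
clusvcadm -M vm:guest01 -m clusterB.RHCS   # live-migrate it to node B
clustat                                    # show where each VM/service is running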
2011 Dec 29
0
ocfs2 with RHCS and GNBD on RHEL?
Does anyone have OCFS2 running with the "Red Hat Cluster Suite" on RHEL?
I'm trying to create a more or less completely fault tolerant solution with two storage servers syncing storage with dual-primary DRBD and offering it up via multipath to nodes for OCFS2.
I was able to successfully multipath a dual-primary DRBD based GFS2 volume in this manner using RHCS and GNBD. But switched
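For the dual-primary piece specifically, the usual sequence looks roughly like this; a sketch only, with a hypothetical resource name r0 whose net section has allow-two-primaries set:
drbdadm create-md r0     # once per node, before the first start
drbdadm up r0            # attach and connect the resource on both servers
drbdadm primary r0       # run on BOTH storage servers for dual-primary
cat /proc/drbd           # both sides should report Primary/Primary once up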
2007 Jul 09
0
Cannot Start CLVMD on second node of cluster
Hi,
I'm trying to configure Red Hat GFS but have problems starting CLVMD on
the second node of a 3-node cluster. I can start ccsd, cman, and fenced
successfully, and clvmd on any other node the first time.
-------------------------
CentOS 4.4
Kernel: 2.6.9-42.0.3.ELsmp
[root@server2 ~]# cman_tool nodes
Node  Votes Exp Sts  Name
   1    1    3   M   server3
   2    1    3   M   server4
   3
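For reference, the start order that has to succeed on a CentOS 4 / RHCS node before clvmd will come up, plus the quorum check; only a sketch of the usual sequence:
service ccsd start
service cman start
service fenced start
cman_tool status         # confirm the node joined and the cluster has quorum
service clvmd start      # clvmd needs a quorate cluster before it will start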
2011 Jan 11
1
libvirt and shared storage SAN Fiber Channel
Hi,
I'm looking for information about using libvirt with SAN (Fibre Channel)
based storage, and after googling for a while I don't see anything about it.
I'm sending this email in order to get advice about libvirt and shared storage.
We have been using here, for 3 years now, 8 Linux CentOS servers connected to a
Hitachi FC SAN (multipath devices).
Each server runs Xen dom0 and uses SAN LUNs to store virtual machine
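One common pattern for this is an LVM volume group on the multipathed LUN, exposed to libvirt as a "logical" storage pool; a sketch with hypothetical device, pool and volume names:
virsh pool-define-as guests logical --source-dev /dev/mapper/mpath0 --target /dev/guests
virsh pool-build guests          # creates the volume group on the source device
virsh pool-start guests
virsh pool-autostart guests
virsh vol-create-as guests vm01-disk 20G   # one logical volume per guest disk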
2009 Sep 16
3
Dependency problem between cman and openais with last cman update
Hi. Today I updated my cluster installation to version
cman-2.0.115-1 through yum update. When I start the cman service,
it fails. If I execute cman_tool debug I get the following error:
[CMAN ] CMAN 2.0.115 (built Sep 16 2009 12:28:10) started
aisexec: symbol lookup error:
/usr/libexec/lcrso/service_cman.lcrso: undefined symbol:
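That undefined-symbol error is typical of cman and openais getting out of step during an update; a quick sanity check, as a sketch:
rpm -q cman openais      # the installed openais must match what cman-2.0.115 expects
yum update openais       # pull openais forward to the matching errata level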
2010 Nov 11
0
Reminder Special issue of the JSS on Graphical User Interfaces for R
Special issue of the Journal of Statistical Software on
Graphical User Interfaces for R
Editors: Pedro Valero-Mora and Rubén Ledesma
Over its almost 15 years of existence, R has managed to gain an
ever-increasing percentage of academic and professional statisticians,
but the spread of its use among novice and occasional users of
statistics has not progressed at the same pace. Among the
2007 Jan 15
1
RHCS on CentOS4 - 2 node cluster problem
Hello fellows,
I have a problem with a 2-node RHCS cluster (CentOS 4) where node 1
failed and node 2 became active. That already happened last year and, due
to the holidays, the customer didn't notice it. The cluster is just a
failover setup for Apache and has no shared storage.
The customer has now seen the situation and tried to fix it by rebooting node 1,
which then failed to come back up. As
2011 Apr 06
3
FTP server for registered and anonymous users
Friends, I have an FTP server working well with user authentication,
but I want to add a folder of general information that everyone can
read without having to log in, that is, visible to both registered users
and guests.
--
Fidel Dominguez-Valero
Linux User: 433411
Website: http://www.valerofix.ryanhost.net
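The post does not say which FTP daemon is in use; assuming vsftpd, a minimal sketch of the usual way to get a read-only anonymous area alongside authenticated users (paths are examples only):
cat >> /etc/vsftpd/vsftpd.conf <<'EOF'
local_enable=YES
anonymous_enable=YES
anon_root=/var/ftp/pub
anon_upload_enable=NO
anon_mkdir_write_enable=NO
EOF
service vsftpd restart
# registered users keep logging in as before; anonymous logins land in
# /var/ftp/pub and cannot write anything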
2013 Dec 23
2
update 4.0 to 4.1
Which procedure should I use to upgrade from 4.0 to 4.1?
Best regards,
Eduardo Buso
Tel: (11) 973123297 - 44877825
Windows, Linux and Novell Network Administrator
Technician in Microcomputer, Printer, Monitor and UPS Maintenance.
Data Processing Technician
Electronics Technician
2010 Feb 26
0
Announce: Special issue of the JSS on Graphical User Interfaces for R
Announce
Special issue of the Journal of Statistical Software on
Graphical User Interfaces for R
Editors: Pedro Valero-Mora and Ruben Ledesma
Since its original paper by Gentleman and Ihaka was published, R has
managed to gain an ever-increasing percentage of academic and
professional statisticians, but the spread of its use among novice and
occasional users of statistics has not
2011 Jun 08
2
Looking for gfs2-kmod SRPM
I'm searching for the SRPM corresponding to this installed RPM.
% yum list | grep gfs2
gfs2-kmod-debuginfo.x86_64 1.92-1.1.el5_2.2
It is missing from:
http://msync.centos.org/centos-5/5/os/SRPMS/
What I need from the SRPM are the patches. I'm working through
some issues using the source code, and the patches in the RedHat
SRPM
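If the SRPM turns up on another mirror, the patches can be pulled out without a full rebuild; a sketch, where the exact filename is whatever the mirror actually carries:
rpm2cpio gfs2-kmod-1.92-1.1.el5_2.2.src.rpm | cpio -idmv
ls *.patch        # the patches unpack next to the upstream tarball and spec file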
2009 Mar 20
1
Centos 5.2 ,5.3 and GFS2
Hello,
I am going to build a new Xen cluster using GFS2 (with Conga, ...).
I know that GFS2 is production-ready as of RHEL 5.3.
Do you know when CentOS 5.3 will be ready?
Can I install my GFS2 FS with CentOS 5.2 and then "simply" upgrade to
5.3 without reinstalling?
Tx
2013 May 03
1
sanlockd, virtlock and GFS2
Hi,
I'm trying to put in place a KVM cluster (using clvm and gfs2), but I'm
running into some issues with either sanlock or virtlockd. All virtual
machines are handled via the cluster (in /etc/cluster/cluster.conf) but I
want some kind of locking to be in place as an extra safety measure.
Sanlock
=======
At first I tried sanlock, but it seems if one node goes down
unexpectedly,
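For the virtlockd side, the setup usually boils down to two settings plus a shared lockspace directory; a sketch with a hypothetical path on the GFS2 mount, to be checked against the libvirt version shipped with CentOS 6.4:
# /etc/libvirt/qemu.conf:        lock_manager = "lockd"
# /etc/libvirt/qemu-lockd.conf:  file_lockspace_dir = "/gfs2/libvirt-lockd"
mkdir -p /gfs2/libvirt-lockd     # must be on storage every host can see
service libvirtd restart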
2007 Apr 03
2
Corrupt inodes on shared disk...
I am having problems when using a Dell PowerVault MD3000 with multipath
from a Dell PowerEdge 1950. I have 2 cables connected and mount the
partition on the DAS Array. I am using RHEL 4.4 with RHCS and a two-node
cluster. Only one node is "Active" at a time; it mounts
the partition, and if there is an issue RHCS will fence the device
and then the other node will mount the
2010 Mar 27
1
DRBD,GFS2 and GNBD without all clustered cman stuff
Hi all,
What I want to achieve:
1) have two storage servers replicating a partition with DRBD
2) export the DRBD device from the primary server via GNBD, with GFS2 on it
3) import the GNBD on some nodes and mount it with GFS2
Assuming there is no logical error in the points above, this is the
situation:
Server 1: LogVol09, DRBD configured as /dev/drbd0, replicated to Server 2.
DRBD seems to work
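A sketch of the step-1 resource definition this describes; hostnames, IP addresses and the volume-group path are examples, while LogVol09 and /dev/drbd0 come from the post:
cat > /etc/drbd.d/r0.res <<'EOF'
resource r0 {
    protocol C;
    on server1 {
        device    /dev/drbd0;
        disk      /dev/VolGroup00/LogVol09;
        address   192.168.0.1:7788;
        meta-disk internal;
    }
    on server2 {
        device    /dev/drbd0;
        disk      /dev/VolGroup00/LogVol09;
        address   192.168.0.2:7788;
        meta-disk internal;
    }
}
EOF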
2010 Dec 14
1
Samba slowness serving SAN-based GFS2 filesystems
Ok,
I'm experiencing slowness serving SAN-based GFS2 filesystems (of a specific
SAN configuration).
Here's my layout:
I have a server cluster.
OS= RHEL 5.4 (both nodes...)
kernel= 2.6.18-194.11.3.el5
Samba= samba-3.0.33-3.14.el5
* On this cluster are 6 clustered GFS2 filesystems.
* 4 of these volumes belong to one huge LUN (1.8 TB), spanning 8 disks. The
2 remaining volumes are 1
2013 Mar 21
1
GFS2 hangs after one node going down
Hi guys,
my goal is to create a reliable virtualization environment using CentOS
6.4 and KVM; I have three nodes and a clustered GFS2.
The environment is up and working, but I'm worried about its reliability: if
I turn the network interface down on one node to simulate a crash (for
example on the node "node6.blade"):
1) GFS2 hangs (processes go in D state) until node6.blade get
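The hang is the expected behaviour until the failed node has actually been fenced; a few commands that usually show whether fencing completed on the CentOS 6 cman stack, with the node name taken from the post:
fence_tool ls            # shows the fence domain and whether a victim is pending
group_tool ls            # dlm/gfs2 groups stay blocked while fencing is pending
fence_node node6.blade   # fence the dead node by hand if the agent never fired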
2009 Feb 21
1
GFS2/OCFS2 scalability
Andreas Dilger wrote:
> On Feb 20, 2009 20:23 +0300, Kirill Kuvaldin wrote:
>> I'm evaluating different cluster file systems that can work in a large
>> clustered environment, e.g. hundreds of nodes connected to a SAN over
>> FC.
>>
>> So far I looked at OCFS2 and GFS2, they both worked nearly the same
>> in terms of performance, but since I ran my
2013 Aug 21
2
Dovecot tuning for GFS2
Hello,
I'm deploying a new email cluster using Dovecot over GFS2. Currently I'm
using Courier over GFS.
At the moment I'm testing Dovecot with these parameters:
mmap_disable = yes
mail_fsync = always
mail_nfs_storage = yes
mail_nfs_index = yes
lock_method = fcntl
Are they correct?
Red Hat GFS supports mmap, so is it better to enable it or leave it disabled?
The documentation suggests the
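As a point of comparison, a hedged sketch of the kind of fragment often used for Dovecot 2.x on a cluster filesystem; the index path is hypothetical, and the mail_nfs_* settings are aimed at NFS cache flushing, so they may not apply on GFS2:
cat > /etc/dovecot/conf.d/99-gfs2.conf <<'EOF'
mmap_disable = yes
mail_fsync = always
lock_method = fcntl
# keep indexes on fast local disk instead of the shared GFS2 volume
mail_location = maildir:~/Maildir:INDEX=/var/lib/dovecot/indexes/%u
EOF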