similar to: CentOS 5.2, 5.3 and GFS2

Displaying 20 results from an estimated 4000 matches similar to: "CentOS 5.2, 5.3 and GFS2"

2011 Jun 11
2
mmap in GFS2 on rhel 6.1
Hello list, we are continuing our tests using Dovecot on a RHEL 6.1 cluster backend with GFS2; we are also using Dovecot as a director for user-to-node persistence. Everything was fine until we started stress testing the solution with imaptest: we hit many deadlocks, cluster filesystem corruptions and hangs, especially on the index filesystem. We have configured the backend as if it were on an NFS-like setup
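
The "NFS-like setup" mentioned above usually means settings along the following lines; this is only a sketch based on Dovecot's documented shared-storage options, not the poster's actual configuration:

  # dovecot.conf fragment for shared-storage ("NFS-like") operation
  mmap_disable = yes        # do not mmap index files on the shared filesystem
  mail_fsync = always       # fsync aggressively so other nodes see consistent data
  mail_nfs_storage = yes    # flush caches around mail file access on shared storage
  mail_nfs_index = yes      # same cache flushing for index files on shared storage
  lock_method = fcntl       # fcntl locks, which GFS2 propagates cluster-wide via DLM
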
2014 Mar 10
1
gfs2 and quotas - system crash
I have tried sending this before, but it did not appear to get through. Hello, when using GFS2 with quotas on a SAN that provides storage to two clustered systems running CentOS 6.5, one of the systems can crash. The crash appears to be triggered when a user tries to add something to a SAN disk after they have exceeded their quota on that disk. Sometimes a stack trace is produced in
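
For context, GFS2 quota enforcement is switched on with a mount option. A minimal fstab sketch follows; the device, mount point and the use of the generic quota tools are assumptions, not details from the poster's setup:

  # /etc/fstab: enable quota enforcement (not just accounting) on the GFS2 volume
  /dev/mapper/sanvg-datalv  /data  gfs2  defaults,quota=on  0 0

  # on CentOS 6.x, usage and limits can then be inspected with the standard quota
  # tools (older gfs2-utils shipped a separate gfs2_quota command instead)
  repquota /data
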
2011 Jun 08
2
Looking for gfs2-kmod SRPM
I'm searching for the SRPM corresponding to this installed RPM.
  % yum list | grep gfs2
  gfs2-kmod-debuginfo.x86_64    1.92-1.1.el5_2.2
It is missing from: http://msync.centos.org/centos-5/5/os/SRPMS/ What I need from the SRPM are the patches. I'm working through some issues using the source code, and the patches in the RedHat SRPM
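
If the SRPM can be located elsewhere (e.g. the Red Hat FTP/RHN channels), the patches can be pulled out without doing a rebuild; the filename below is inferred from the RPM version shown and may differ:

  # unpack the source RPM in place: this drops the spec file, the tarball and the
  # individual *.patch files into the current directory
  rpm2cpio gfs2-kmod-1.92-1.1.el5_2.2.src.rpm | cpio -idmv
  ls *.patch
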
2010 Mar 27
1
DRBD,GFS2 and GNBD without all clustered cman stuff
Hi all, where I want to arrive:
1) two storage servers replicating a partition with DRBD
2) exporting the DRBD device via GNBD from the primary server, with GFS2 on it
3) importing the GNBD on some nodes and mounting it with GFS2
Assuming no logical errors in the points above, this is the situation: Server 1: LogVol09, DRBD configured as /dev/drbd0, replicated to Server 2. DRBD seems to work
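
A rough sketch of that layout is below. Resource names, hostnames and addresses are invented, and note that GFS2 itself still needs a cluster lock manager even if the GNBD side is kept simple:

  # /etc/drbd.conf resource on both storage servers (DRBD 8.x syntax, placeholder values)
  resource r0 {
    protocol C;
    on server1 {
      device    /dev/drbd0;
      disk      /dev/VolGroup00/LogVol09;
      address   192.168.0.1:7788;
      meta-disk internal;
    }
    on server2 {
      device    /dev/drbd0;
      disk      /dev/VolGroup00/LogVol09;
      address   192.168.0.2:7788;
      meta-disk internal;
    }
  }

  # on the DRBD primary: export the replicated device over GNBD
  gnbd_export -v -e drbd0 -d /dev/drbd0

  # on each importing node: bring the device in and mount the GFS2 filesystem
  gnbd_import -v -i server1
  mount -t gfs2 /dev/gnbd/drbd0 /mnt/shared
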
2011 Feb 27
1
Recover botched drbd gfs2 setup
Hi. The short story... Rush job, never done clustered file systems before, and the VLAN didn't support multicast. Thus I ended up with DRBD working OK between the two servers but cman/GFS2 not working, so what was meant to be a DRBD primary/primary cluster became a primary/secondary cluster until the VLAN could be fixed, with GFS2 mounted on only the one server. I got the single server
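
For the record, going from primary/secondary to primary/primary once the VLAN/cman problem is solved is roughly the following; the resource name and mount point are placeholders:

  # in the DRBD resource definition (8.x syntax): allow both nodes to be primary
  net {
    allow-two-primaries;
  }

  # then, with cman/fencing actually working on both nodes:
  drbdadm adjust r0      # apply the changed configuration
  drbdadm primary r0     # promote the second node as well
  mount -t gfs2 /dev/drbd0 /srv/data
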
2008 Dec 03
2
does anyone have experience with clusters?
Hi all, I want to start experimenting with clusters, and I would like to use normal desktop-grade hardware for this. I have some extra PC components lying around, enough to build 3 - 4 moderate desktops with a PIV / C2D CPU and 512MB - 1GB RAM each. All the machines should have at least a 100Mb NIC, but I can add a gigabit NIC to the machines that don't have one if need be. I have used
2009 Aug 03
1
CTDB+GFS2+CMAN. clean_start="0" or clean_start="1"?
Hi everybody, I have tested CTDB+GFS2+CMAN under Debian. It works well, but I do not understand some points. It is possible to run CTDB by defining it under the services section in cluster.conf, but starting it on the second node shuts down the process on the first one. My CTDB configuration implies 2 active-active nodes. Does CTDB care whether the node starts with clean_start="0" or
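
For reference, clean_start is an attribute of the fence_daemon tag in cluster.conf; the delay values below are arbitrary examples:

  <!-- clean_start="0" (the default) lets fenced fence any nodes it cannot account
       for at startup; clean_start="1" skips that startup fencing -->
  <fence_daemon clean_start="0" post_join_delay="20" post_fail_delay="0"/>
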
2009 Dec 22
1
conga and "virsh nodeinfo"
Hi folks, I have run into a confusing problem. My initial problem is: Conga does not offer "Add a virtual machine service". So I googled and found a RedHat advisory on that: http://rhn.redhat.com/errata/RHBA-2009-1623.html which points to updates that should fix this. I checked on my cluster, but the relevant packages are current (and even if ALL packages are current it does not work).
2013 Mar 21
1
GFS2 hangs after one node going down
Hi guys, my goal is to create a reliable virtualization environment using CentOS 6.4 and KVM. I have three nodes and a clustered GFS2. The environment is up and working, but I'm worried about reliability: if I turn the network interface down on one node to simulate a crash (for example on the node "node6.blade"): 1) GFS2 hangs (processes go into D state) until node6.blade get
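
This blocking is expected behaviour: GFS2 and DLM freeze until the failed node has been fenced successfully. While debugging, the fence state can be checked and a node fenced by hand; the node name below is simply the one from the poster's example:

  fence_tool ls              # show the fence domain and any pending fencing
  fence_node node6.blade     # manually fence the failed node so GFS2 can recover
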
2013 Aug 05
1
Corrupted mboxes with v2.2.4, posix_fallocate and GFS2
Hi, on a clustered Dovecot server installation that was recently moved from a shared GPFS filesystem to GFS2, occasional corruptions in the users' INBOXes started appearing, where a new incoming message would be appended directly after a block of NUL bytes, and be scanned by dovecot as being glued to the preceding message. I traced this to the file extension operation performed in
2010 Nov 15
3
Local node indexes in a cluster backend with GFS2
Hi, these days I'm testing a dovecot setup using LVS, director and a clustered email backend with two nodes running RHEL5 and GFS2. On the two nodes of the email backend I configured the mail location this way:
  mail_location = sdbox:/var/vmail/%d/%3n/%n/sdbox:INDEX=/var/indexes/%d/%3n/%n
/var/vmail is a clustered GFS2 filesystem shared by node1 and node2; /var/indexes is a local
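
Local indexes work in this layout because the director pins each user to one backend node. A minimal sketch of the director side, with invented hostnames and only the two settings that do the pinning:

  # on the director tier (Dovecot 2.x), pointing at the two GFS2 backend nodes
  director_servers      = director1.example.com
  director_mail_servers = node1.example.com node2.example.com
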
2011 May 05
5
Dovecot imaptest on RHEL4/GFS1, RHEL6/GFS2, NFS and local storage results
We have done some benchmarking tests using dovecot 2.0.12 to find the best shared filesystem for hosting many users; here I share the results with you. Notice the bad performance of all the shared filesystems compared with local storage. Is there any specific optimization/tuning in dovecot for using GFS2 on RHEL6? We have configured the director to make the user mailbox persistent on a node, we will
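
For reproducibility, a typical imaptest invocation for this kind of benchmark looks like the following; the host, credentials, sample mbox and load figures are placeholders, not the poster's actual parameters:

  imaptest host=10.0.0.10 port=143 user=testuser%d pass=secret \
           mbox=dovecot-crlf clients=50 msgs=1000 secs=300
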
2012 Jul 26
1
using ip address on bonded channels in a cluster
I'm creating a firewall HA cluster. The proof of concept for the basic firewall cluster is OK: I can bring up the cluster, start the iptables firewall, and move all of this around with no problem. I'm using Conga to do all of this configuration on CentOS 6.3 servers. To extend the "HA" part of this, I'd like to use bonded channels instead of plain old NICs. The firewall uses
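
Bonding on CentOS 6 is configured with the usual sysconfig files; a minimal active-backup sketch with example addresses:

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  IPADDR=192.168.1.10
  NETMASK=255.255.255.0
  ONBOOT=yes
  BOOTPROTO=none
  BONDING_OPTS="mode=active-backup miimon=100"

  # /etc/sysconfig/network-scripts/ifcfg-eth0 (and the same for eth1)
  DEVICE=eth0
  MASTER=bond0
  SLAVE=yes
  ONBOOT=yes
  BOOTPROTO=none
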
2009 Feb 21
1
GFS2/OCFS2 scalability
Andreas Dilger wrote:
> On Feb 20, 2009 20:23 +0300, Kirill Kuvaldin wrote:
>> I'm evaluating different cluster file systems that can work with large
>> clustered environment, e.g. hundreds of nodes connected to a SAN over
>> FC.
>>
>> So far I looked at OCFS2 and GFS2, they both worked nearly the same
>> in terms of performance, but since I ran my
2013 Aug 21
2
Dovecot tuning for GFS2
Hello, I'm deploying a new email cluster using Dovecot over GFS2. Currently I'm using Courier over GFS, and I'm testing Dovecot with these parameters:
  mmap_disable = yes
  mail_fsync = always
  mail_nfs_storage = yes
  mail_nfs_index = yes
  lock_method = fcntl
Are they correct? RedHat GFS supports mmap, so is it better to enable it or leave it disabled? The documentation suggests the
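
One commonly cited starting point for GFS2 is below; this is a hedged sketch, not an authoritative recommendation. The mail_nfs_* options are NFS-specific cache-flush workarounds, so on a cluster filesystem they are generally left off:

  mmap_disable = yes       # conservative default, even though GFS2 does support mmap
  mail_fsync = always
  mail_nfs_storage = no    # NFS-only workaround, not needed on GFS2
  mail_nfs_index = no      # likewise
  lock_method = fcntl      # GFS2 propagates fcntl locks cluster-wide
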
2010 Dec 14
1
Samba slowness serving SAN-based GFS2 filesystems
Ok, I'm experiencing slowness serving SAN-based GFS2 filesystems (of a specific SAN configuration). Here's my layout: I have a server cluster.
  OS     = RHEL 5.4 (both nodes...)
  kernel = 2.6.18-194.11.3.el5
  Samba  = samba-3.0.33-3.14.el5
* On this cluster are 6 GFS2 clustered filesystems.
* 4 of these volumes belong to one huge LUN (1.8 TB), spanning 8 disks. The other 2 remaining volumes are 1
2010 Aug 10
1
GFS/GFS2 on CentOS
Hi all, if you have had experience hosting GFS/GFS2 on CentOS machines, could you share your general impression of it? Was it reliable? Fast? Any issues or concerns? Also, how feasible is it to start it on just one machine and then grow it out if necessary? Thanks. Boris.
2018 Apr 05
1
GFS2 writes extremely slow
Hello all, we are facing extremely slow GFS2 writes on Red Hat 7 64-bit. The backend is a 16 Gbps FC SAN, so no issues there. I have scoured the entire (anyway, most of the) Internet and arrived at the following settings:
  mmap_disable = yes
  mail_fsync = always
  mail_nfs_storage = yes
  mail_nfs_index = yes
  lock_method = fcntl
Did a systemctl restart dovecot and did not find any major
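
Independent of the Dovecot settings, the GFS2 mount options are usually worth checking for a mail workload. A hedged fstab sketch with an invented device and mount point:

  # noatime/nodiratime avoid a cluster-wide lock and journal write for every read
  /dev/mapper/mailvg-maillv  /var/vmail  gfs2  noatime,nodiratime  0 0
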
2005 Dec 23
1
GFS2, OCFS2, and FUSE cause xenU to oops.
I really need to share a filesystem and I'd rather not have to export it from one domU to another, so I tried mounting it with GFS2 and then OCFS2. Both caused the xenU kernel to oops just as the mount was attempted. I assumed that a FUSE-based solution would be a little less problematic (if only because it doesn't require kernel patches) but it also caused an oops right when
2007 Apr 05
7
Problems using GFS2 and clustered dovecot
I am trying to use dovecot. I've got a GFS2 shared volume on two servers with dovecot running on both. On one server at a time, it works. The test I am trying is to attach two mail programs (MUA) via IMAPS (Thunderbird and Evolution as it happens). I've attached one mail program to each IMAPS server. I am trying to move emails around in one program (from folder to folder), and then