similar to: GlusterFs - Any new progress reports?

Displaying 20 results from an estimated 9000 matches similar to: "GlusterFs - Any new progress reports?"

2009 Jun 05
2
Dovecot + DRBD/GFS mailstore
Hi guys, I'm looking at the possibility of running a pair of servers with Dovecot LDA/imap/pop3, using internal drives with DRBD and GFS (or another clustered FS) for the mail storage and ext3 for the root drive. I'm currently using maildrop for delivery and Dovecot imap/pop3 with the stores over NFS. I'm looking for better performance while still keeping the HA element I have now with
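A minimal sketch of the Dovecot side of such a setup, assuming the maildirs live on a shared GFS mount (paths invented, untested):
    # dovecot.conf (1.x) -- illustrative only
    mail_location = maildir:/srv/gfs/mail/%u
    # mmap tends to behave badly on cluster/remote filesystems,
    # so these are the settings usually suggested for them
    mmap_disable = yes
    lock_method = fcntl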
2009 Oct 20
1
HA Dovecot Config?
Hi! I'm currently running Dovecot 1.1.8 on a HP server using CentOS 5.2 as my IMAP server, receiving mail from postfix, and also using squirrelmail as the frontend. As I look at upgrading the mail server, I'd like to change to a higher availability configuration (where the server can fail and I don't have to reconfig my imap users). For the SMTP that's easy, because I can
2009 Nov 23
5
[OT] DRBD
Hello all, has someone worked with DRBD (http://www.drbd.org) for HA of mail storage? If so, does it have stability issues? Comments and experiences are appreciated :) Thanks, Rodolfo.
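For reference, a bare-bones DRBD 8.x resource of the kind typically used under a mail store; hostnames, devices and addresses below are invented:
    # /etc/drbd.conf -- illustrative sketch only
    resource r0 {
      protocol C;                   # fully synchronous replication
      on mail-a {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.10.1:7788;
        meta-disk internal;
      }
      on mail-b {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.10.2:7788;
        meta-disk internal;
      }
    }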
2011 Jan 18
2
dovecot Digest, Vol 93, Issue 41
> From: Stan Hoeppner <stan at hardwarefreak.com> > Subject: Re: [Dovecot] SSD drives are really fast running Dovecot > > > Yes. Go with a cluster filesystem such as OCFS or GFS2 and an inexpensive SAN > storage unit that supports mixed SSD and spinning storage such as the Nexsan > SATABoy with 2GB cache: http://www.nexsan.com/sataboy.php I can't speak for
2009 Jun 11
6
NAS Storage server question
Hello all, At our office I have a server running 3 Xen domains (mail server, etc.). I want to make this setup more redundant. There are a few howtos on the combination of Xen, DRBD, and Heartbeat; that is probably the best way. Another option I am looking at is a piece of shared storage: a machine running CentOS with a large software RAID 5 array. What is the best means of sharing the storage?
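If the shared-storage box is the route taken, the simplest way to publish it is a plain NFS export (subnet and path below are placeholders); iSCSI is the usual block-level alternative, but several clients writing to one iSCSI LUN still need a cluster filesystem on top:
    # /etc/exports on the CentOS storage server -- example only
    /srv/xen    192.168.10.0/24(rw,sync,no_root_squash)
    # apply with: exportfs -ra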
2009 Jun 24
3
Unexplained reboots in DRBD82 + OCFS2 setup
We're trying to set up a dual-primary DRBD environment with a shared disk running either OCFS2 or GFS. The environment is CentOS 5.3 with DRBD82 (we also tried DRBD83 from testing). Setting up a single-primary disk and running bonnie++ on it works. Setting up a dual-primary disk, mounting it on only one node (ext3) and running bonnie++ works. When setting up ocfs2 on the /dev/drbd0
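For comparison, OCFS2 on a dual-primary DRBD device needs the o2cb cluster stack configured identically on both nodes before the mount; node names and IPs below are placeholders (the name fields must match each node's hostname). Note that o2cb deliberately self-fences (reboots) a node whose disk heartbeat stalls, which is a common source of "unexplained" reboots:
    # /etc/ocfs2/cluster.conf -- example layout only
    cluster:
            node_count = 2
            name = ocfs2
    node:
            ip_port = 7777
            ip_address = 192.168.1.1
            number = 0
            name = node1
            cluster = ocfs2
    node:
            ip_port = 7777
            ip_address = 192.168.1.2
            number = 1
            name = node2
            cluster = ocfs2
    # mkfs.ocfs2 -N 2 /dev/drbd0                              (once, from one node)
    # service o2cb online && mount -t ocfs2 /dev/drbd0 /data  (on each node)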
2009 Feb 13
4
Running Xen over NFSv3
Is anyone running Xen on NFSv3 successfully? What are some of your pain points/success points? Have you tried an alternative with better results, such as DRBD or a clustered file system like GFS? TIV, Matt
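For what it's worth, a file-backed domU on an NFS mount needs nothing special in the guest config; the path below assumes the share is already mounted on the dom0 and is purely illustrative:
    # /etc/xen/vm1.cfg fragment -- hypothetical image path
    disk = [ 'file:/mnt/nfs/xen/vm1.img,xvda,w' ]
    # 'tap:aio:' instead of 'file:' is often recommended to avoid loopback caching issues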
2009 Nov 16
9
Dovecot and SATA Backend
Hi all, I plan to run a dovecot IMAPS and POPS service on our network. We handle about 3,000 mailboxes. I thought first of buying a top-notch server (8 cores and 16 GB RAM) with an EqualLogic iSCSI SAN (SAS 15K) as the storage backend. On second thought (and after a comprehensive read of Dovecot's features), I saw in http://wiki.dovecot.org/MailLocation that index files can be created on a separate local
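The index split the poster refers to is done with the :INDEX parameter of mail_location; a sketch with invented paths:
    # dovecot.conf -- maildirs on the SAN-backed mount, indexes on fast local disk
    mail_location = maildir:/srv/mail/%d/%n:INDEX=/var/dovecot-indexes/%d/%n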
2006 May 14
16
lustre clustre file system and xen 3
Hi, I am setting up a Xen 3 environment that has a file backend and 2 application servers with live migration between the 2 application servers.
 ---------    ---------
 | app 1 |    | app 2 |
 ---------    ---------
      \          /
       \        /
        \      /
   ----------------
   | file backend |
   ----------------
I am planning on using the Lustre cluster file system on the file backend. Are there
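On the Lustre side the two application servers would simply mount the filesystem as clients; the MGS host and fsname below are invented:
    # on each application server (Lustre client) -- illustrative only
    mount -t lustre mgs01@tcp0:/xenfs /mnt/xenfs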
2010 Mar 24
10
how to synch multiple servers?
Is there a way to sync multiple servers at once, so that when one is changed, Samba updates all the other servers automatically?
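Samba itself does not replicate file data between servers; a common low-tech approach is a periodic one-way rsync from a designated master (hostnames and paths below are placeholders), while true active-active setups usually mean a cluster filesystem plus CTDB:
    # run from cron on the master -- example only
    rsync -az --delete /srv/share/ server2:/srv/share/
    rsync -az /etc/samba/smb.conf server2:/etc/samba/smb.conf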
2010 Apr 30
5
Mount drbd/gfs logical volume from domU
Hi list, I set up a DRBD/GFS logical volume on 2 Xen Dom0s; it works as primary/primary so both DomUs will be able to write to it at the same time. But I don't know how to mount it from my domUs; I can see it with fdisk -l. The partition is /dev/xvdb1. Should I install GFS on the domUs and mount it on each as a GFS partition? [root@p3x0501 ~]# fdisk -l Disk /dev/xvda: 5368 MB, 5368709120
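GFS is a cluster filesystem, so the usual answer is yes: each domU needs the GFS tools and has to join the same cluster (cman) before it can mount /dev/xvdb1. A CentOS 5 sketch, assuming a working cluster.conf shared by both domUs:
    # inside each domU -- illustrative only
    yum install gfs-utils kmod-gfs cman
    service cman start
    mount -t gfs /dev/xvdb1 /mnt/shared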
2008 Nov 14
10
Shared volume: Software-ISCSI or GFS or OCFS2?
Hello list, I want to use shared volumes between several VMs and definitely don't want to use NFS or Samba! So I have three options: 1. simulated (software) iSCSI 2. GFS 3. OCFS2 What do you suggest and why? Kind regards, Florian
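Worth noting that the options are not quite parallel: iSCSI only exports a block device, so if several VMs mount the same LUN read-write you still need GFS or OCFS2 on top of it. A minimal software-iSCSI target with scsi-target-utils, IQN and backing device invented:
    # /etc/tgt/targets.conf -- example only
    <target iqn.2008-11.example.local:shared>
        backing-store /dev/vg0/shared
    </target>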
2009 Jan 27
20
Xen SAN Questions
Hello Everyone, I recently had a question that got no responses about GFS+DRBD clusters for Xen VM storage, but after some consideration (and a lot of Googling) I have a couple of new questions. Basically what we have here are two servers that will each have a RAID-5 array filled up with 5 x 320GB SATA drives, I want to have these as useable file systems on both servers (as they will both be
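The piece that lets both servers use the DRBD device at once is dual-primary mode; the fragment below (DRBD 8.x, resource name invented) shows the relevant options, and it is only safe with a cluster filesystem such as GFS on top:
    # drbd.conf fragment -- illustrative only
    resource r0 {
      net     { allow-two-primaries; }
      startup { become-primary-on both; }
    }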
2005 Nov 10
1
Can OCFS and GFS co-exist on the same RHEL 3 RAC node??
Dear Experts, One of our clients wants to use OCFS for their 9i RAC (2 node) on RHEL3 and also use GFS on the nodes. Would there be any issues using OCFS for Oracle 9i RAC and using GFS for failing over things like print services on the same node? Has this been tested? I know these are competing products and OCFS 2.0 is not an option now for the client. They need GFS for their custom apps. Any ideas
2005 Mar 10
2
GFS
Hi, Can anyone shed light on Linux's Global File System in RHEL 4 please? It looks like Oracle's OCFS. Is there any relationship between them? If they are different things, has anyone done any performance comparison please? Thanks and regards. Han
2010 Dec 02
4
Indexes.
Hello people! I have huge problems with IO wait because Dovecot, configured to use maildir, sits on OCFS2 1.4. Now I have a question about OCFS2: each disk action is really heavy because it has no index. Now I am thinking about what can be done to help my system use the disk less. Looking for index settings etc. in Dovecot I found this; it disables the index file on disk and leaves it in RAM, or it
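The setting being described is the :INDEX part of mail_location; INDEX=MEMORY keeps the indexes off OCFS2 entirely, though pointing them at a local (non-cluster) disk is usually the better fix for I/O wait. Paths below are hypothetical:
    # dovecot.conf -- indexes kept only in RAM (rebuilt each session)
    mail_location = maildir:~/Maildir:INDEX=MEMORY
    # usually better: indexes on a local disk
    #mail_location = maildir:~/Maildir:INDEX=/var/dovecot-indexes/%u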
2008 Sep 07
3
Hard system restart when DRBD connection fails while in use
Hi all, I have two nodes (A+B) running a DRBD file system (using OCFS2) on /shared. If I start, say, an FTP file transfer to my DRBD /shared directory on node A, then reboot node B (the other machine in the primary-primary DRBD configuration) while the transfer is in progress, node A stops at about the same time that DRBD notices the connection with node B has been lost (hence crippling both
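A usual suspect here is o2cb self-fencing: when the peer disappears, the surviving node's disk heartbeat can stall long enough that it panics itself. The timeout lives in /etc/sysconfig/o2cb; the value below is only an example and would need tuning together with the DRBD timeouts:
    # /etc/sysconfig/o2cb -- example only; dead time is roughly (threshold - 1) * 2 seconds
    O2CB_HEARTBEAT_THRESHOLD=61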
2008 Jan 02
4
Xen, GFS, GNBD and DRBD?
Hi all, We're looking at deploying a small Xen cluster to run some of our smaller applications. I'm curious to get the list's opinions and advice on what's needed. The plan at the moment is to have two or three servers running as the Xen dom0 hosts and two servers running as storage servers. As we're trying to do this on a small scale, there is no means to hook the