similar to: NAS Storage server question

Displaying 19 results from an estimated 10000 matches similar to: "NAS Storage server question"

2009 Jun 24
3
Unexplained reboots in DRBD82 + OCFS2 setup
We're trying to set up a dual-primary DRBD environment with a shared disk running either OCFS2 or GFS. The environment is CentOS 5.3 with DRBD82 (but we also tried DRBD83 from testing). Setting up a single-primary disk and running bonnie++ on it works. Setting up a dual-primary disk, mounting it on only one node (ext3), and running bonnie++ also works. When setting up ocfs2 on the /dev/drbd0
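For context, a dual-primary DRBD 8.x resource of the kind this thread describes is typically declared along these lines (a minimal sketch; the resource name, hostnames, devices and addresses are hypothetical):

    resource r0 {
      protocol C;                        # dual-primary requires synchronous replication
      net {
        allow-two-primaries;             # let both nodes hold the Primary role
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;        # split-brain policy matters in dual-primary
      }
      on node1 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   192.168.0.1:7788;
        meta-disk internal;
      }
      on node2 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   192.168.0.2:7788;
        meta-disk internal;
      }
    }

Unexplained reboots in a stack like this are often not kernel crashes but O2CB fencing: OCFS2 deliberately self-resets a node that loses the cluster heartbeat, so that is worth ruling out first.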
2009 Jul 22
3
DRBD very slow....
Hello all, we have a new setup with Xen on CentOS 5.3. I run DRBD on LVM volumes to mirror data between the two servers. Both servers are 1U NEC rack mounts with 8GB RAM and 2x mirrored 1TB Seagate SATA drives. One is a dual-core Xeon, the other a quad-core Xeon. I have a gigabit crossover link between the two with an MTU of 9000 on each end. I currently have 6 DRBD devices mirroring across that
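Threads like this one usually end up in the net/syncer sections of drbd.conf. A sketch of the DRBD 8.x knobs most often adjusted for throughput (the values here are illustrative, not recommendations):

    resource r0 {
      syncer {
        rate 100M;              # throttles background resync, not live writes
        al-extents 3389;        # bigger activity log = fewer metadata updates
      }
      net {
        max-buffers    8000;
        max-epoch-size 8000;
        sndbuf-size    512k;    # TCP send buffer for the replication link
      }
    }

If protocol C is in use, every write blocks until the peer acknowledges it, so replication-link latency often dominates long before raw disk speed does.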
2008 Jan 02
4
Xen, GFS, GNBD and DRBD?
Hi all, We're looking at deploying a small Xen cluster to run some of our smaller applications. I'm curious to get the list's opinions and advice on what's needed. The plan at the moment is to have two or three servers running as the Xen dom0 hosts and two servers running as storage servers. As we're trying to do this on a small scale, there is no means to hook the
2009 Jun 05
2
Dovecot + DRBD/GFS mailstore
Hi guys, I'm looking at the possibility of running a pair of servers with Dovecot LDA/imap/pop3 using internal drives with DRBD and GFS (or other clustered FS) for the mail storage and ext3 for the root drive. I'm currently using maildrop for delivery and Dovecot imap/pop3 with the stores over NFS. I'm looking for better performance but still keeping the HA element I have now with
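When the mail store sits on a clustered filesystem such as GFS, the Dovecot (1.x era) settings that usually come up are along these lines (a sketch with hypothetical paths; the Dovecot wiki has the authoritative list):

    # dovecot.conf
    mail_location = maildir:~/Maildir
    mmap_disable = yes      # mmap and clustered filesystems mix poorly
    lock_method = fcntl     # locking must stay coherent across both nodes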
2008 May 29
3
GFS
Hello: I am planning to implement GFS for my university as a summer project. I have 10 servers, each with SAN disks attached. I will be reading and writing many files for professors' research projects. Each file can be anywhere from 1k to 120GB (fluid dynamics research images). The 10 servers will be using NIC bonding (1 Gbit per network). So, would GFS be ideal for this? I have been reading a lot
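The NIC-bonding side of a setup like this is standard CentOS configuration; a minimal sketch with hypothetical device names and addresses:

    # /etc/modprobe.conf
    alias bond0 bonding
    options bond0 mode=802.3ad miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    BOOTPROTO=none
    ONBOOT=yes
    IPADDR=10.0.0.11
    NETMASK=255.255.255.0

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (eth1 alike)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none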
2009 Aug 20
1
drbd xen question
Hello all, I am running DRBD protocol A to a secondary machine to have 'backups' of my Xen domUs. Is it necessary to change the Xen domain configs to use /dev/drbd* instead of the LVM volume that DRBD mirrors, and which the Xen domU runs off? Regards, Coert
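The usual answer in threads like this: writes made directly to the backing LVM volume bypass DRBD entirely and never reach the secondary, so the domU config does need to point at the DRBD device. A hypothetical before/after in the domU config file:

    # before: domU writes bypass DRBD replication
    disk = [ 'phy:/dev/vg0/domu1,xvda,w' ]

    # after: domU writes go through DRBD and are mirrored
    disk = [ 'phy:/dev/drbd0,xvda,w' ]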
2012 Jan 10
3
Clustering solutions - mail, www, storage.
Hi all. I am currently working for a hosting provider in an environment of 100+ Linux hosts. We have HA solutions for www and mail; for storage we mainly use NFS at the moment. We are also using DRBD, Heartbeat, and Corosync. I am now gathering info to build a cluster with: - two virtualization nodes (active master and passive slave); - two storage nodes (for VM files) used by the mentioned virtualization nodes
2007 Nov 30
2
How to manage images/partitions for xen DomUs?
Hello, I am trying to figure out the best way to configure a small cluster with Xen and High Availability (Heartbeat). I have two servers and a few virtual machines. What I need is to ensure that the images or partitions of the machines are mirrored between the two nodes (maybe with DRBD?). But that is not all - I also need to enlarge the disks (because of growing databases) of the virtual
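Enlarging a DRBD-mirrored LVM disk, as asked here, has a fixed order of operations; a sketch assuming internal DRBD metadata and hypothetical names:

    # 1. grow the backing LV on *both* nodes
    lvextend -L +10G /dev/vg0/domu1

    # 2. tell DRBD to adopt the new size (run on the primary)
    drbdadm resize r0

    # 3. grow the filesystem inside the domU, e.g. for ext3
    resize2fs /dev/xvda1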
2010 Feb 08
7
Can I use direct attached storage as a shared filesystem in Xen
I have a quad-core server on which I want to run 4 virtual servers. On this server I have a 1/2-terabyte RAID 1 array, split between the 4 guests, that holds their operating systems. I also have 10 terabytes of RAID 5 internal storage running on a 3ware 9690a card. I want to share this storage between the servers without partitioning it. Is this possible?
2010 Oct 18
2
Intel DP55WG CentOS 5.5 support?
Hello all, I have looked around on the HCL and on other hardware sites. Do any of you have experience with CentOS 5.5 64-bit on these motherboards? Regards, Coert Waagmeester
2008 Nov 14
10
Shared volume: Software-ISCSI or GFS or OCFS2?
Hello list, I want to use shared volumes between several VMs and definitely don't want to use NFS or Samba! So I have three options: 1. simulated (software) iSCSI 2. GFS 3. OCFS2 What do you suggest and why? Kind regards, Florian
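For the OCFS2 option, cluster membership lives in /etc/ocfs2/cluster.conf on every node; a two-node sketch (node names and addresses hypothetical; note the file is whitespace-sensitive):

    cluster:
            node_count = 2
            name = ocfs2

    node:
            ip_port = 7777
            ip_address = 192.168.0.1
            number = 0
            name = node1
            cluster = ocfs2

    node:
            ip_port = 7777
            ip_address = 192.168.0.2
            number = 1
            name = node2
            cluster = ocfs2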
2010 Sep 08
3
LARTC and CentOS question
Hello all, Got myself the Linux Advanced Routing & Traffic Control guide http://lartc.org/howto/ None of the commands in the guide survive reboots. Could someone point me in the right direction to CentOS/Red Hat specific documentation on the whole /etc/sysconfig/network* setup? Kind regards, Coert Waagmeester
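On CentOS/Red Hat, the reboot-safe equivalents of ad-hoc "ip route" and "ip rule" commands are per-interface files under /etc/sysconfig/network-scripts/, read at ifup time; a sketch with hypothetical addresses:

    # /etc/sysconfig/network-scripts/route-eth0
    10.10.0.0/16 via 192.168.1.254 dev eth0

    # /etc/sysconfig/network-scripts/rule-eth0 (policy routing)
    from 192.168.1.10 table 100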
2009 Nov 23
5
[OT] DRBD
Hello all, Has anyone worked with DRBD (http://www.drbd.org) for HA of mail storage? If so, does it have stability issues? Comments and experiences are appreciated :) Thanks, Rodolfo.
2010 Feb 17
3
GlusterFS - Any new progress reports?
GlusterFS always strikes me as being "the solution" (one day...). It's had a lot of growing pains, but a few on the list have had success using it already. Given that some time has gone by since I last asked - has anyone got more recent experience with it, and how has it worked out, with particular emphasis on Dovecot maildir storage? How has version 3 worked out for
2009 Nov 16
9
Dovecot and SATA Backend
Hi all, I plan to run a Dovecot IMAPS and POP3S service on our network. We handle about 3,000 mailboxes. I first thought of buying a top-notch server (8 cores and 16 GB RAM) with an EqualLogic iSCSI SAN (15K SAS) for the storage backend. On second thought (and after a comprehensive read of Dovecot's features), I saw in http://wiki.dovecot.org/MailLocation that index files can be created on a separate local
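The wiki page cited describes exactly this split; the relevant dovecot.conf line looks roughly like the following (the index path is hypothetical):

    # maildirs stay on the SAN, indexes go to fast local disk
    mail_location = maildir:~/Maildir:INDEX=/var/indexes/%u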
2006 Jun 07
14
HA Xen on 2 servers!! No NFS, special hardware, DRBD or iSCSI...
I've been brainstorming... I want to create a 2-node HA active/active cluster (in other words, I want to run a handful of DomUs on one node and a handful on another). In the event of a failure I want all DomUs to fail over to the other node and start working immediately. I want absolutely no single points of failure. I want to do it with free software and no special hardware. I want
2009 Jan 27
20
Xen SAN Questions
Hello everyone, I recently had a question that got no responses about GFS+DRBD clusters for Xen VM storage, but after some consideration (and a lot of Googling) I have a couple of new questions. Basically what we have here are two servers that will each have a RAID-5 array filled with 5 x 320GB SATA drives. I want to have these as usable file systems on both servers (as they will both be
2013 Jun 11
1
custom permission for single user deep in tree where he has no access
Hello all, Got Samba with AD integration and extended ACLs up and running. Here is what I am trying to do. share1 in smb.conf:

    [share1]
    comment = share1
    path = /mnt/data/share1
    public = no
    writable = yes
    printable = no
    valid users = @DOMAIN+group1

user1 and user2 are members of group1; user3 is not. user1 creates
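What the poster appears to want is normally done with POSIX ACLs underneath Samba: give the one user rights on the deep directory plus traverse-only access on each parent. A sketch with hypothetical paths (user3 would also have to pass the share-level "valid users" check):

    # user3 gets full rights on one deep subdirectory only
    setfacl -m u:user3:rwx /mnt/data/share1/projects/secret

    # traverse-only (--x) on the parents so user3 can reach it
    setfacl -m u:user3:x /mnt/data/share1 /mnt/data/share1/projects

    # make new files below inherit the grant
    setfacl -d -m u:user3:rwx /mnt/data/share1/projects/secret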
2010 Jul 19
2
redundant networked secure file system recommendation
Hi all, We are currently running an NFS-based, server-centric setup. I would like to set up something where I can easily have more than one redundant server, with security/authentication (this part seemed a little flaky with NFS, at least several years ago), and with the capability to easily add/remove servers as necessary, take redundant servers down for maintenance, etc. The total volume we expect to run