search for: baarf

Displaying 7 results from an estimated 7 matches for "baarf".

2012 Jun 14
4
RAID options for Gluster
I think this discussion has probably come up here already, but I couldn't find much in the archives. Would you be able to comment on or correct whatever might look wrong? What options do people think are most adequate to use with Gluster in terms of the RAID underneath, with a good balance between cost, usable space and performance? I have thought about two main options, with their pros and cons. No RAID (individual
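The two options being weighed usually look something like the following on the command line. This is a minimal sketch, not anything from the thread: the hostnames (gfs1, gfs2) and brick paths are hypothetical, and the second option is assumed to be RAID under a single large brick per server.

    # Option A: no RAID underneath, one brick per disk, Gluster replica 2 provides redundancy
    gluster volume create vol0 replica 2 gfs1:/bricks/disk1 gfs2:/bricks/disk1
    # Option B: RAID (hardware or md) underneath, one large brick per server, plain distribute
    gluster volume create vol0 gfs1:/bricks/raid6 gfs2:/bricks/raid6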
2009 Dec 02
7
Slightly OT: FakeRaid or Software Raid
I have had great luck with nvidia fakeraid on RAID1, but I see there are preferences for software raid. I have very little hands-on experience with full Linux software RAID, and that was about 14 years ago. I am trying to determine which to use on a rebuild in a "standard" CentOS/Xen environment. It seems to me that FakeRaid is/can be completely taken care of in dom0 by dmraid, whereas with
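If software RAID wins out, the usual CentOS route is mdadm rather than dmraid. A minimal RAID1 sketch with hypothetical device names, not taken from the thread:

    # create a two-disk mirror; /dev/sda1 and /dev/sdb1 are placeholders
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    # watch the initial resync
    cat /proc/mdstat
    # persist the array definition so it assembles at boot
    mdadm --detail --scan >> /etc/mdadm.conf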
2008 Mar 18
3
capacity
Hi, I am planning to deploy an Asterisk system to supply 4,000-6,000 students with voicemail capabilities. The system will be set up with non-DIDs, route incoming calls to voicemail, then send an email notification. Anyone with some ideas on how I should go about spec'ing the server for this use? - Eve Ellen
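For what it's worth, the "route to voicemail, then send an email" part is just dialplan plus a voicemail.conf entry with an email address. The extension, context, password and address below are made up for illustration:

    ; extensions.conf: send an incoming call straight to voicemail
    exten => 1000,1,Voicemail(1000@students)
    exten => 1000,2,Hangup()

    ; voicemail.conf: mailbox with an email notification address
    [students]
    1000 => 1234,Example Student,student@example.edu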
2008 Nov 06
2
Painfully slow NetApp with database
Hello, we have a long-running problem with NetApp filers. When we connect a server to the filer, sequential read performance is ~70MB/s. But once we run a database on the server, sequential read performance drops to ~11MB/s. That's happening with two servers: one is running Oracle, the other MySQL. During the speed tests the database load is very light (less than 1MB/s of reads and writes). During the tests NetApp
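Not from the original post, but one way to make the ~70MB/s vs ~11MB/s comparison repeatable is a direct-I/O sequential read against the NFS mount, once with the database idle and once under load; the mount point and test file below are placeholders:

    # sequential read of a large file on the filer; iflag=direct avoids the client page cache
    dd if=/mnt/netapp/testfile of=/dev/null bs=1M count=4096 iflag=direct
    # watch per-device throughput and wait times while the test runs
    iostat -x 5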
2012 Feb 14
4
Exorbitant cost to achieve redundancy??
I'm trying to justify a GlusterFS storage system for my technology development group, and I want to get some clarification on something that I can't seem to figure out architecture-wise... My storage system will be rather large: a significant fraction of a petabyte, and it will require scaling in size for at least one decade. From what I understand, GlusterFS achieves redundancy through
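The replication overhead itself is simple arithmetic: with an N-way replica volume, usable space is raw space divided by N, before counting any RAID under the bricks. With made-up numbers, not figures from the thread:

    raw capacity = 720 TB
    replica 2                              -> 720 / 2            = 360 TB usable (2.0x raw per usable TB)
    replica 3                              -> 720 / 3            = 240 TB usable (3.0x raw per usable TB)
    RAID6 (10+2) bricks with replica 2     -> 720 * 10/12 / 2    = 300 TB usable (2.4x raw per usable TB)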
2014 Oct 09
3
dovecot replication (active-active) - server specs
Hello, I have some questions about the new dovecot replication and the mdbox format. My company currently has 3 old dovecot 2.0.x fileservers/backends with ca. 120k mailboxes and ca. 6 TB of data used. They are synchronised via drbd/corosync. Each fileserver/backend has ca. 40k mailboxes in Maildir format. Our MX server delivers ca. 30 GB of new mail per day. Two IMAP proxy servers get the
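For orientation only (none of this comes from the poster's setup), dsync-based active-active replication in dovecot 2.2 boils down to enabling the notify/replication plugins, running the replicator and aggregator services, and pointing each backend at its partner; the hostname, user and paths below are placeholders, and a doveadm listener plus password are also needed for the TCP link:

    # 10-replication.conf (sketch)
    mail_plugins = $mail_plugins notify replication
    service replicator {
      process_min_avail = 1
    }
    service aggregator {
      fifo_listener replication-notify-fifo {
        user = vmail
      }
      unix_listener replication-notify {
        user = vmail
      }
    }
    plugin {
      mail_replica = tcp:mailbackend2.example.com
    }
    # mdbox instead of Maildir
    mail_location = mdbox:~/mdbox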
2009 Apr 27
23
Raidz vdev size... again.
Hi, I'm new to the list so please bear with me. This isn't an OpenSolaris-related problem, but I hope it's still the right list to post to. I'm in the process of moving a backup server to zfs-based storage, but I don't want to spend too many drives on parity (the 16 drives are attached to a 3ware raid controller, so I could also just use raid6 there). I
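Purely to illustrate the parity trade-off being weighed here (d0..d15 stand in for the real 16 device names): one wide raidz2 gives up 2 of 16 drives to parity, while two 8-drive raidz2 vdevs give up 4 but deliver roughly twice the random IOPS and shorter resilvers:

    # one wide raidz2 vdev: 14 data + 2 parity drives
    zpool create backup raidz2 d0 d1 d2 d3 d4 d5 d6 d7 d8 d9 d10 d11 d12 d13 d14 d15
    # two 8-drive raidz2 vdevs: 12 data + 4 parity drives, more vdevs means more random IOPS
    zpool create backup raidz2 d0 d1 d2 d3 d4 d5 d6 d7 raidz2 d8 d9 d10 d11 d12 d13 d14 d15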