similar to: I need storage server advice

Displaying 20 results from an estimated 10000 matches similar to: "I need storage server advice"

2006 Sep 28
13
jbod questions
Folks, We are in the process of purchasing new SANs that our mail server runs on (JES3). We have moved our mailstores to ZFS and continue to have checksum errors -- they are corrected, but this improves on the UFS inode errors that required system shutdown and fsck. So, I am recommending that we buy small JBODs, do raidz2, and let ZFS handle the RAID across these boxes. As we need more
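If that route is taken, the pool creation itself is simple; a minimal sketch, assuming six disks in one JBOD with Solaris-style device names (placeholders, not the poster's actual layout):

    # one double-parity raidz2 vdev across the JBOD's disks;
    # ZFS then owns both redundancy and end-to-end checksums
    zpool create mailpool raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
    # confirm layout and health
    zpool status mailpool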
2011 Feb 25
3
can't create large LVM, even though pvscan shows enough space left
I'm trying to create a 500GB LV on a 500GB physical volume, but can't:

[root at francois-pc ~]# pvscan
  PV /dev/sdd                 VG freenas     lvm2 [500.00 GB / 500.00 GB free]
  PV /dev/sdc                 VG thecus      lvm2 [1010.00 GB / 910.00 GB free]
  PV /dev/mapper/ddf1_RAIDp2  VG VolGroup00  lvm2 [931.25 GB / 0    free]
  Total: 3 [2.38 TB] / in use: 3 [2.38 TB]
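The likely culprit is unit rounding: asking lvcreate for "500G" by size can exceed the PV's usable physical extents even when pvscan reports 500.00 GB free. A sketch of the usual workaround, using the freenas VG from the output above:

    # allocate by extents instead of bytes, so the LV takes exactly
    # the free space the VG actually has
    lvcreate -l 100%FREE -n data freenas
    # or check the free-extent count first and size explicitly
    vgdisplay freenas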
2015 Jan 07
5
reboot - is there a timeout on filesystem flush?
> On Jan 6, 2015, at 5:50 PM, Les Mikesell <lesmikesell at gmail.com> wrote:
>
> On Tue, Jan 6, 2015 at 6:37 PM, Gary Greene <ggreene at minervanetworks.com> wrote:
>>
>> Almost every controller and drive out there now lies about what is and isn't flushed to disk, making it nigh on impossible for the kernel to reliably know 100% of the time that the
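A quick way to take the drive's volatile cache out of the equation when testing that claim; a minimal sketch, assuming a plain SATA disk at /dev/sda (behavior behind hardware RAID controllers varies):

    # query the drive's volatile write-cache setting
    hdparm -W /dev/sda
    # turn it off, so acknowledged writes are really on the platter
    hdparm -W 0 /dev/sda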
2010 Apr 07
53
ZFS RaidZ recommendation
I have been searching this forum and just about every ZFS document I can find trying to find the answer to my questions. But I believe the answer I am looking for is not going to be documented and is probably best learned from experience. This is my first time playing around with OpenSolaris and ZFS. I am in the midst of replacing my home-based file server. This server hosts all of my media
2012 Apr 05
1
Better to use a single large storage server or multiple smaller for mdbox?
I'm trying to improve the setup of our Dovecot/Exim mail servers to handle increasingly huge accounts (everybody treats their mailbox like Gmail's seemingly infinite storage and keeps everything forever) by changing from Maildir to mdbox, and to take advantage of offloading older emails to alternative networked storage nodes. The question now is whether having a
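For the offload part specifically, mdbox has native support for a secondary storage path; a hedged sketch for dovecot.conf (the paths and the 180-day cutoff are made up for illustration):

    # primary mdbox storage local, older mail on networked ALT storage
    mail_location = mdbox:~/mdbox:ALT=/srv/altstorage/%d/%n
    # then move old mail over periodically, e.g.:
    #   doveadm altmove -u user@example.com savedbefore 180d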
2010 Dec 23
31
SAS/short stroking vs. SSDs for ZIL
Hi, while following the discussion about which SSD to use as a ZIL drive, I stumbled across this article, which discusses short stroking to increase IOPS on SAS and SATA drives: http://www.tomshardware.com/reviews/short-stroking-hdd,2157.html Now I am wondering whether a mirror of such 15k SAS drives would be a good-enough fit for a ZIL on a zpool that is mainly used for file
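Whichever device wins, attaching it as a log vdev is the same operation; a minimal sketch, assuming the two 15k SAS drives show up as c2t0d0 and c2t1d0 (hypothetical names):

    # add a mirrored slog so a single log-device failure can't
    # lose in-flight synchronous writes
    zpool add tank log mirror c2t0d0 c2t1d0
    # then watch how the log device absorbs sync traffic
    zpool iostat -v tank 5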
2009 Dec 04
2
measuring iops on linux - numbers make sense?
Hello, When approaching hosting providers for services, the first question many of them asked us was about the amount of IOPS the disk system should support. While we stress-tested our service, we recorded between 4000 and 6000 "merged io operations per second" as seen in "iostat -x" and collectd (varies between the different components of the system, we have a few such
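One caveat worth noting: merged operations count requests before the elevator coalesces them, so they can overstate what the disks must actually sustain. A sketch of reading the post-merge numbers instead:

    # r/s + w/s = IOPS actually issued to the device after merging;
    # rrqm/s + wrqm/s were merged away and never hit the disk
    iostat -x 5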
2010 Jan 06
16
8-15 TB storage: any recommendations?
Hello everyone, This is not directly related to CentOS but still: we are trying to set up some storage servers to run under Linux - most likely CentOS. The storage volume would be in the range specified: 8-15 TB. Any recommendations as far as hardware? Thanks. Boris.
2011 Nov 08
9
Performance-Tuning
Hi, I have > 11 TB of heavily used mail storage, saved as maildir on ext3 on an HP EVA. I always wanted to measure how various changes affect performance (switching to ext4, switching to mdbox), but I never had enough time to do that. At the moment I *need* more speed; we have too much I/O wait on the system and I have already used all the other performance and tuning tricks (separated cache,
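If the ext4 switch makes the list, an in-place conversion is possible; a hedged sketch (device name hypothetical, backups assumed, and only files written after the change get extent-based allocation):

    # on the unmounted filesystem
    tune2fs -O extents,uninit_bg,dir_index /dev/mapper/eva-mail
    e2fsck -fD /dev/mapper/eva-mail
    # remount as ext4; noatime cuts one metadata write per mail read
    mount -t ext4 -o noatime /dev/mapper/eva-mail /var/mail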
2015 Nov 18
13
OT: Replacing Venerable NAS
I have an original-label Infrant (now NetGear) ReadyNAS storage appliance that's been running for 8+ years. Except for replacing its power supply, it has not skipped a beat in all this time. I use it primarily as a backup device (via NFS) for a couple of Linux machines, (via SMB) for a couple of Windows PCs, and (via FTP) for web sites at my hosting provider. SMART+ reporting shows
2009 Aug 26
26
Xen and I/O Intensive Loads
Hi, folks, I'm attempting to run an e-mail server on Xen. The e-mail system is Novell GroupWise, and it serves about 250 users. The disk volume for the e-mail is on my SAN, and I've attached the FC LUN to my Xen host, then used the "phy:/dev..." method to forward the disk through to the domU. I'm running into an issue with high I/O wait on the box (~250%)
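For reference, the relevant domU config line looks roughly like this (the LUN path is a placeholder); a common companion tweak is to stop dom0's elevator from second-guessing the SAN:

    # domU config: forward the FC LUN as a raw block device
    disk = [ 'phy:/dev/sdb,xvda,w' ]
    # in dom0, a simpler scheduler often helps for passed-through LUNs:
    #   echo noop > /sys/block/sdb/queue/scheduler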
2012 Feb 03
6
Spectacularly disappointing disk throughput
Greetings! I've got a FreeBSD-based (FreeNAS) appliance running as an HVM DomU. Dom0 is Debian Squeeze on an AMD990 chipset system with IOMMU enabled. The DomU sees six physical drives: one of them is a USB stick that I've passed through in its entirety as a block device. The other five are SATA drives attached to a controller that I've handed to the DomU with PCI
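A sketch of what such a passthrough setup typically looks like in the domU config (PCI address and device names hypothetical):

    pci  = [ '03:00.0' ]                 # the whole SATA controller
    disk = [ 'phy:/dev/sdg,hda,w' ]      # the USB stick as one block device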
2019 Jul 19
3
CentOS 8 partitioning for reliability
I was just given a Dell R720xd with 160 GB memory and 12x 900 GB drives that I plan to deploy as my home mail/file/backup server to replace an aging Supermicro server running CentOS 7. Yeah, it's gross overkill for that and I expect to tuck most of the drives away for spares. How should I RAID and partition this beast for maximum reliability? My current C7 system is using 1 TB of 2 TB
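One reliability-first sketch using Linux software RAID rather than the PERC (device names hypothetical; four drives kept as shelf spares, as the poster suggests):

    # mirror the OS, RAID-6 the data: two drives can fail per data array
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md1 --level=6 --raid-devices=6 /dev/sd[c-h]1
    mkfs.xfs /dev/md1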
2012 Jul 04
8
Howto add another disk storage
Hi all, what is the best strategy to add more storage to an existing virtual mail system? Move some domains to the new storage and create symlinks? Switch to dovecot hashing? But in that case, what is the easiest way to migrate? Thanks for any suggestions or tips!
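If the hashing route is chosen, Dovecot can fan users out across mount points by the first letter of the username; a hedged sketch for dovecot.conf (path hypothetical):

    # %1u = first letter of the username, so mail lands in
    # /srv/mail/a/alice, /srv/mail/b/bob, ... and each letter
    # directory can be a different mount
    mail_location = maildir:/srv/mail/%1u/%u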
2011 Jun 01
11
SATA disk perf question
I figure this group will know better than any other I have contact with: is 700-800 IOPS reasonable for a 7200 RPM SATA drive (1 TB Sun-badged Seagate ST31000N in a J4400)? I have a resilver running and am seeing about 700-800 writes/sec on the hot spare as it resilvers. There is no other I/O activity on this box, as this is a remote replication target for production data. I have a the
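Back-of-envelope: a 7200 RPM spindle manages roughly 100-150 random IOPS (about 8.3 ms per revolution plus seek time), so 700-800 writes/sec suggests the resilver is issuing largely sequential, coalesced writes, which is plausible for a hot spare being filled. To watch it live:

    # per-vdev breakdown of the resilver traffic (pool name assumed)
    zpool iostat -v tank 5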
2006 Jan 31
2
OT - Linux NAS for Windows Environment
I'm looking to find a Linux NAS solution for a Windows network. Looking into FreeNAS and OpenFiler. Anyone using these or any other solution? Any comments/suggestions? Thanks, Ed
2010 Jul 19
6
Performance advantages of spool with 2x raidz2 vdevs vs. single vdev
Hi guys, I am about to reshape my data pool and am wondering what performance difference I can expect from the new config vs. the old. The old config is a pool with a single 8-disk raidz2 vdev. The new config is two 7-disk raidz2 vdevs in a single pool. I understand it should be better, with higher I/O throughput and better read/write rates, but I'm interested to hear the science
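The science, in short: ZFS stripes across top-level vdevs, and a raidz2 vdev delivers roughly one disk's worth of random IOPS, so two vdevs should roughly double random IOPS even though two more disks go to parity. A sketch of the new shape (device names hypothetical):

    # two top-level 7-disk raidz2 vdevs; writes stripe across both
    zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0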
2011 Aug 17
6
mail spool filesystem
Hi! I'm about to migrate a system with 5000 accounts (~500GB) from "postfix/courier-imap/maildrop/mysql" to new hardware with "postfix/dovecot/dovecot/mysql". I'll make a separate partition (RAID 1) for the mail spool (/var/spool/vmail) and want to know what type of filesystem to use on it to increase performance. I read that XFS is a good choice, but is not
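If XFS is chosen, a hedged sketch of the spool setup (device name hypothetical; the options are the commonly cited ones for many-small-file mail workloads):

    mkfs.xfs /dev/md1
    # noatime avoids an inode write per message read; a larger
    # log buffer helps maildir's metadata churn
    mount -t xfs -o noatime,logbsize=256k /dev/md1 /var/spool/vmail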
2010 Jan 16
95
Best 1.5TB drives for consumer RAID?
Which consumer-priced 1.5TB drives do people currently recommend? I had zero read/write/checksum errors so far in 2 years with my trusty old Western Digital WD7500AAKS drives, but now I want to upgrade to a new set of drives that are big, reliable and cheap. As of Jan 2010 it seems the price sweet spot is the 1.5TB drives. As I had a lot of success with Western Digital drives I thought I would
2012 Aug 01
1
Windows DomU with SSDs
Hi Everyone, We are thinking of venturing into the world of hosting Windows DomUs on our Xen infrastructure. As Windows generally requires a lot more IOPS than Linux does, we are trying to do everything we can to improve performance. While using SSDs would solve the IOPS problem, SSDs suffer from limited write cycles. So, we have the idea of using Flashcache from Facebook to use a single SSD as
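Flashcache's userland tool makes that layering straightforward; a sketch, assuming the SSD is /dev/sdb and the backing RAID LUN is /dev/sdc (write-back gives the IOPS win, at the cost of dirty data living on the SSD):

    # create a write-back cache device over the backing store
    flashcache_create -p back wincache /dev/sdb /dev/sdc
    # the DomU then gets the mapped device, e.g. in its config:
    #   disk = [ 'phy:/dev/mapper/wincache,hda,w' ]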