
Displaying 20 results from an estimated 7000 matches similar to: "New Dovecot server buildout review"

2020 Nov 12 (2 replies)
ssacli start rebuild?
> On Nov 11, 2020, at 5:38 PM, Warren Young <warren at etr-usa.com> wrote:
>
> On Nov 11, 2020, at 2:01 PM, hw <hw at gc-24.de> wrote:
>>
>> I have yet to see software RAID that doesn't kill the performance.
>
> When was the last time you tried it?
>
> Why would you expect that a modern 8-core Intel CPU would impede I/O in any measurable way as
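(As an aside on the CPU-overhead question raised above: a quick way to see whether an md array is actually CPU-bound is to put a known load on it and watch the md kernel threads. A minimal sketch, assuming fio is installed and /dev/md0 is a scratch array whose contents can be overwritten:)

    # sequential-write load against the md device; O_DIRECT bypasses the page cache
    fio --name=mdtest --filename=/dev/md0 --rw=write --bs=1M --size=4G \
        --direct=1 --ioengine=libaio --iodepth=16

    # in another terminal: CPU actually consumed by the md kernel threads
    top -b -n 1 | grep md0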
2005 Jul 16 (0 replies)
Asterisk International Carrier Buildout - Create our own International networks for BEST pricing!
Asterisk Users, I am reposting to the Asterisk-Users list what I saw on the Asterisk-Biz list by Mr. Jeff Grammer, GOD BLESS HIM! I am in the Hamptons today, trying to woo a client, and he took a look at my Level3 Partner pricing and laughed, as his rates were better than mine!!! To top that off, I am colocated within the Level3 datacenter; he has no computers, and gets better rates. So,
2011 May 27 (1 reply)
dsync: Invalid mailbox first_recent_uid
For the life of me I can't get dsync to work. Please help! Remote server runs dovecot out of /usr/local/dovecot2. Everything makes sense until this line:
dsync-local(djonas at vitalwerks.com): Error: Invalid mailbox input from worker server: Invalid mailbox first_recent_uid
The local uid is 8989 and the remote uid is 89. I added "first_valid_uid = 89" to the local conf but to
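(For context on the setting mentioned above: Dovecot's first_valid_uid restricts which system (Unix) UIDs may be served mail; it is unrelated to the per-mailbox IMAP UID state, such as first_recent_uid, that dsync is complaining about. A minimal sketch of the setting, assuming a stock dovecot.conf:)

    # dovecot.conf
    # lowest Unix UID Dovecot will serve mail for; not an IMAP message UID
    first_valid_uid = 89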
2020 Nov 11 (5 replies)
ssacli start rebuild?
On Wed, 2020-11-11 at 11:34 +0100, Thomas Bendler wrote:
> On Wed, 11 Nov 2020 at 07:28, hw <hw at gc-24.de> wrote:
>
> > [...]
> > With this experience, these controllers are now deprecated. RAID controllers
> > that can't rebuild an array after a disk has failed and has been replaced
> > are virtually useless.
> > [...]
> >
2003 Dec 01 (0 replies)
No subject
2.4.18 kernel) using 3 x 60GB WD 7200 IDE drives on a 7500-4 controller I could get peak I/O of 452 MBytes/sec, and a sustainable I/O rate of over 100 MBytes/sec. That is not exactly a 'dunno' performance situation. These tests were done using dbench and RAID5. Let's get that right: 100 MBytes/sec == 800 Mbits/sec, which is eight times 100 Mbits/sec (the bottleneck if you use
2017 Sep 09 (0 replies)
cyrus spool on btrfs?
Mark Haney wrote:
> On 09/08/2017 01:31 PM, hw wrote:
>> Mark Haney wrote:
>>
>> I/O is not heavy in that sense, that's why I said that's not the application.
>> There is I/O which, as tests have shown, benefits greatly from low latency, which
>> is where the idea to use SSDs for the relevant data has arisen from. This I/O
>> only involves a small amount of
2020 Nov 11 (0 replies)
ssacli start rebuild?
On Nov 11, 2020, at 2:01 PM, hw <hw at gc-24.de> wrote:
>
> I have yet to see software RAID that doesn't kill the performance.

When was the last time you tried it?

Why would you expect that a modern 8-core Intel CPU would impede I/O in any measurable way as compared to the outdated single-core 32-bit RISC CPU typically found on hardware RAID cards? These are the same CPUs,
2020 Nov 14 (0 replies)
ssacli start rebuild?
> On Wed, 2020-11-11 at 16:38 -0700, Warren Young wrote:
>> On Nov 11, 2020, at 2:01 PM, hw <hw at gc-24.de> wrote:
>> > I have yet to see software RAID that doesn't kill the performance.
>>
>> When was the last time you tried it?
>
> I'm currently using it, and the performance sucks. Perhaps it's
> not the software itself or the CPU but
2011 Sep 19 (1 reply)
mdadm and drive identification?
I have a server that has 16 drives in it. They are connected to a 3ware 9650SE-16ML SATA RAID card. I have the card set to export all the drives as JBOD because I prefer Linux to do the reporting of drive and RAID health. I'm using mdadm to create a RAID6 with a hot spare. Doing this, I can take the disks and put them on a completely different SATA controller/computer and still have the RAID
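(A sketch of the commands this kind of setup typically involves, with hypothetical device names; 16 disks could be split into 15 active members plus one hot spare:)

    # create a RAID6 across 15 members with one hot spare (sdb..sdq = 16 disks)
    mdadm --create /dev/md0 --level=6 --raid-devices=15 --spare-devices=1 /dev/sd[b-q]

    # tie md members back to physical drives via persistent IDs and serial numbers
    mdadm --detail /dev/md0
    ls -l /dev/disk/by-id/ | grep -w sdb
    smartctl -i /dev/sdb | grep -i serial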
2004 Dec 28 (0 replies)
RAID Card for CentOS 3 compatibility
Hi everybody, I need to set up a couple of new servers with RAID 1 and another with RAID 5; which card will have built-in support (no need for drivers)? It can be for ATA or SATA drives (any preference on ATA or SATA?). I'm looking for a PCI card, or a motherboard with a built-in controller. I've been looking at:
1. "3ware 64-bit PCI to SATA RAID Controller Card, Model "Escalade
2020 Nov 15 (1 reply)
ssacli start rebuild?
On Sat, 2020-11-14 at 18:55 +0100, Simon Matter wrote:
> > On Wed, 2020-11-11 at 16:38 -0700, Warren Young wrote:
> > > On Nov 11, 2020, at 2:01 PM, hw <hw at gc-24.de> wrote:
> > > > I have yet to see software RAID that doesn't kill the performance.
> > >
> > > When was the last time you tried it?
> >
> > I'm currently using
2008 Dec 12 (2 replies)
OT: Need some riser card advice...
Fellow server-builders out there, this is for you. :) I was trying to build a cheap JBOD-type storage solution running CentOS. I ended up snagging a Supermicro SC826TQ-R800LPB 2U case (12 drive slots) and a Supermicro X7DBE-O motherboard. Unfortunately, without thinking, I snagged a 3ware 9650SE-12ML SATA RAID card, which is a full-height card and thus does not fit in my case. I have a few
2018 Apr 04 (0 replies)
JBOD / ZFS / Flash backed
Based on your message, it sounds like your total usable capacity requirement is under 1 TB. With a modern SSD, you'll get something like 40k theoretical IOPS at a 4k I/O size. You don't mention budget. What is your budget? You mention "4MB operations"; where is that requirement coming from?
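(Back-of-the-envelope check on that figure, as a sketch: 40k IOPS at a 4 KiB I/O size is roughly 156 MiB/s of sustained throughput.)

    echo "$(( 40000 * 4 / 1024 )) MiB/s"    # 40,000 IOPS x 4 KiB ~= 156 MiB/s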
2017 Sep 08 (3 replies)
cyrus spool on btrfs?
On 09/08/2017 01:31 PM, hw wrote:
> Mark Haney wrote:
>
> I/O is not heavy in that sense, that's why I said that's not the application.
> There is I/O which, as tests have shown, benefits greatly from low latency, which
> is where the idea to use SSDs for the relevant data has arisen from. This I/O
> only involves a small amount of data and is not sustained
2013 Mar 18 (0 replies)
Re: zfs-discuss Digest, Vol 89, Issue 12
You could always use 40-gigabit between the two storage systems, which would speed things dramatically, or back-to-back 56-gigabit IB.
----------------------------------------
From: zfs-discuss-request@opensolaris.org
Sent: Monday, March 18, 2013 11:01 PM
To: zfs-discuss@opensolaris.org
Subject: zfs-discuss Digest, Vol 89, Issue 12
Send zfs-discuss mailing list submissions to
2008 Jun 06 (1 reply)
3Ware 9690SA
I successfully installed CentOS on a 3ware 9650SE controller. Due to compatibility issues with my motherboard, I replaced it with a 9690SA. Now the system won't boot (although, interestingly enough, it finds the boot menu fine; it just won't boot past a certain point in the bootup phase). I thought I would reinstall, but anaconda doesn't find the RAID array. 3ware does have
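(The snippet cuts off here, but the usual workaround when anaconda does not recognize a controller is to load the vendor's driver disk at install time; a sketch, assuming 3ware supplies a driver image for that CentOS release:)

    # at the installer's boot: prompt, tell anaconda to ask for a driver disk
    linux dd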
2020 Sep 10 (3 replies)
Btrfs RAID-10 performance
I cannot verify it, but I think that even a JBOD is propagated as a virtual device. If you create a JBOD from 3 different disks, low-level parameters may differ. And probably old firmware is the reason we used RAID-0 two or three years ago. Thank you for the ideas. Kind regards, Milo
On 10.09.2020 at 16:15, Scott Q. wrote:
> Actually there is, filesystems like ZFS/BTRFS prefer to see
2009 Dec 04 (2 replies)
measuring iops on linux - numbers make sense?
Hello, when approaching hosting providers for services, the first question many of them asked us was about the number of IOPS the disk system should support. While we stress-tested our service, we recorded between 4000 and 6000 "merged I/O operations per second" as seen in "iostat -x" and collectd (this varies between the different components of the system; we have a few such
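(One caveat worth noting for that kind of measurement, sketched with sysstat's iostat: the "merged" counters are not the same thing as the IOPS the device actually services.)

    iostat -x 1
    # r/s, w/s       : read/write requests completed per second (the usual "IOPS" figure)
    # rrqm/s, wrqm/s : requests merged per second before being issued to the device;
    #                  summing these instead of r/s + w/s can misstate the real IOPS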
2004 Dec 27 (1 reply)
GRUB bootloader
I have RAID'd a copy of my system to another hard drive for backup purposes, without deleting the MBR. When I boot with GRUB (CentOS 3.3) it now lists multiple instances to boot, but in fact there is only one complete system. Is there a way to delete the additional instances from the GRUB boot menu? Thanks, Beth
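(On CentOS 3.x this is GRUB legacy, so each boot-menu entry is just a "title" stanza in the config file: remove the unwanted stanza(s) and adjust "default=" if it pointed at one of them. A sketch with hypothetical kernel and device names:)

    # /boot/grub/grub.conf -- each "title" block is one entry on the boot menu
    default=0
    timeout=10
    title CentOS (2.4.21-20.EL)
            root (hd0,0)
            kernel /vmlinuz-2.4.21-20.EL ro root=/dev/hda2
            initrd /initrd-2.4.21-20.EL.img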
2018 Apr 09 (0 replies)
JBOD / ZFS / Flash backed
Yes, the flash-backed RAID cards use a super-capacitor to back up the flash cache. You have a choice of flash module sizes to include on the card. The card supports RAID modes as well as JBOD. I do not know if Gluster can make use of a battery-backed, flash-based cache when the disks are presented by the RAID card as JBOD. The hardware vendor asked, "Do you know if Gluster makes use of