similar to: dovecot replication (active-active) - server specs

Displaying 20 results from an estimated 10000 matches similar to: "dovecot replication (active-active) - server specs"

2017 May 02
2
migrate Maildir to mdbox
Hi, I replaced the HDDs in my home server and reinstalled the OS (Ubuntu 17.04). Since I had some time, I reviewed my Dovecot configuration and changed a couple of things, including the mailbox format. Now I would like to migrate my e-mail (about 20G -- I use this server as an e-mail archive). The Maildir is available on the (old) hard disk. The new Dovecot is running. What should I use? Would mbsync work?
2017 May 02
2
migrate Maildir to mdbox
Silly question? Which is preferred: Maildir or mbox (directory vs. flat file)? How would you do this when migrating from an old server to a new one? Thx -Mike > On May 2, 2017, at 1:58 AM, Aki Tuomi <aki.tuomi at dovecot.fi> wrote: > > Assuming your maildir path is /path/to/mail/Maildir You could do it like this: > > mail_home=/path/to/mail >
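For the migration question in the two entries above, a minimal sketch of a Maildir-to-mdbox conversion with doveadm (assuming a single user `mike` and the paths from Aki's reply; user name and target path are illustrative, and this needs a live Dovecot 2.x install):

```shell
# One-shot migration: read from the old Maildir, write into an mdbox
# store. -o overrides mail_location just for this run.
doveadm -o mail_location=maildir:/path/to/mail/Maildir \
    backup -u mike mdbox:/path/to/mail/mdbox

# Sanity-check the result before pointing clients at the new store:
doveadm -o mail_location=mdbox:/path/to/mail/mdbox mailbox list -u mike
```

Running `doveadm backup` again is safe; it makes the destination an exact mirror of the source, so it can be repeated until the final cutover.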
2010 Sep 26
5
Need to pick your brain for recommendation on using 2.5" or 3.5" HDDs for Asterisk server...
Hi Everyone, I am stuck between two identical systems (2U Twin2, 4 nodes, SuperMicro) that have the exact same specs except for HDDs. These nodes will all either have Asterisk installed with CentOS or have Asterisk installed in a virtual environment. Option 1: 12 x 3.5" HDD (3 HDDs per node) Option 2: 24 x 2.5" HDD (6 HDDs per node) Both options come to the same price.
2018 May 16
2
dovecot + cephfs - sdbox vs mdbox
I'm sending this message to both the dovecot and ceph-users MLs, so please don't mind if something seems too obvious to you. Hi, I have a question for both the dovecot and ceph lists, and below I'll explain what's going on. Regarding the dbox format (https://wiki2.dovecot.org/MailboxFormat/dbox): when using sdbox, a new file is stored for each email message. When using mdbox, multiple
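The sdbox/mdbox difference being discussed shows up directly in `mail_location`. A sketch of the two layouts (paths and rotate size are illustrative, not from the thread):

```
# sdbox: one file per message, like Maildir but with Dovecot's own index
mail_location = sdbox:~/sdbox

# mdbox: multiple messages per m.* container file under storage/;
# new files are started once the current one reaches mdbox_rotate_size
mail_location = mdbox:~/mdbox
mdbox_rotate_size = 10M
```

On a distributed filesystem like CephFS, the trade-off is fewer, larger files (mdbox) versus simpler per-message recovery (sdbox).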
2006 Oct 24
1
Help request...recovering LVM on centos 4.2
I installed a CentOS 4.x system using an LVM install across four HDDs. It is my first install using LVM. The system had a power failure and stopped booting up. A new trainee simply took out the HDDs and restarted the file server on fresh HDDs. Now the problem is that the four HDDs have data, but the order of the HDDs (of the install: 1st primary, 2nd primary, etc.) is unknown. Earlier we used to boot
2018 May 16
1
[ceph-users] dovecot + cephfs - sdbox vs mdbox
Thanks Jack. That's good to know. It is definitely something to consider. In a distributed storage scenario we might build a dedicated pool for that and tune the pool as more capacity or performance is needed. Regards, Webert Lima DevOps Engineer at MAV Tecnologia *Belo Horizonte - Brasil* *IRC NICK - WebertRLZ* On Wed, May 16, 2018 at 4:45 PM Jack <ceph at jack.fr.eu.org> wrote:
2013 Jan 16
3
Max hard disks supported by XCP 1.6
Hello, I would like to use nas4free under XCP 1.6. I installed it under full HVM using "other install media". Now I am attaching 4 HDDs as external disks. The VM sees at most two HDDs, I suppose because of BIOS support (1 boot + 1 CD-ROM + 2 HDDs = 4 devices). I need to use more disks; is it possible? If not, that seems like a serious limitation to me. Mario
2020 Sep 19
1
storage for mailserver
On 9/17/20 4:25 PM, Phil Perry wrote: > On 17/09/2020 13:35, Michael Schumacher wrote: >> Hello Phil, >> >> Wednesday, September 16, 2020, 7:40:24 PM, you wrote: >> >> PP> You can achieve this with a hybrid RAID1 by mixing SSDs and HDDs, and >> PP> marking the HDD members as --write-mostly, meaning most of the reads >> PP> will come from the
2010 Jun 08
21
My future plan
My future plan currently looks like this for my VPS hosting solution, so any feedback would be appreciated:

Each Node:
Dell R210
Intel X3430 Quad Core
8GB RAM
Intel PT 1Gbps Server Dual Port NIC using Linux "bonding"
Small pair of HDDs for OS (probably in RAID1)
Each node will run about 10-15 customer guests.

Storage Server:
Some Intel Quad Core chip
2GB RAM (maybe more?)
LSI
2012 Jan 28
1
maildir vs mdbox
Hi, I am planning on running a test between maildir and mdbox to see which is a better fit for my use case, and I'm just looking for general advice/recommendations. I will post any results I obtain here. Important question: I have multiple users hitting the same email account at the same time (either via Thunderbird or with custom webmail apps). Can that be a problem with mdbox? I remember
2010 Jan 30
1
Multiple RAID support in CentOS?
Hello, I was wondering if someone could help me. I'm putting together a server for personal use; I want to virtualize a few servers (mail, web, ssh) and use it as a NAS. But I have a question: can I use multiple RAID arrays with the following HW: Intel Xeon Quad Core X3430, ASUS P7F-M LGA 1156 - LSI MegaRAID (integrated) - HighPoint RocketRAID 2640x1, 2 Hitachi 500GB HDDs, 4 Hitachi 1TB HDDs
2020 Sep 17
2
storage for mailserver
Hello Phil, Wednesday, September 16, 2020, 7:40:24 PM, you wrote: PP> You can achieve this with a hybrid RAID1 by mixing SSDs and HDDs, and PP> marking the HDD members as --write-mostly, meaning most of the reads PP> will come from the faster SSDs retaining much of the speed advantage, PP> but you have the redundancy of both SSDs and HDDs in the array. PP> Read performance is
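The hybrid RAID1 Phil describes can be sketched with mdadm as follows (device names are illustrative, the commands need root, and the array name is an assumption):

```shell
# SSD as a normal member, HDD flagged --write-mostly: writes go to both
# devices, but reads are served from the SSD whenever it is available.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/nvme0n1p1 --write-mostly /dev/sda1

# An existing member of a running array can be flagged later via sysfs:
echo writemostly > /sys/block/md0/md/dev-sda1/state
```

This keeps full redundancy across both device types while retaining most of the SSD's read performance.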
2011 Nov 08
6
Couple of questions about ZFS on laptops
Hello all, I am thinking about a new laptop. I see that there are a number of higher-performance models (incidentally, they are also marketed as "gamer" ones) which offer two SATA 2.5" bays and an SD flash card slot. Vendors usually position the two-HDD bay part as either "get lots of capacity with RAID0 over two HDDs, or get some capacity and some performance by mixing one
2004 Jan 24
2
memdisk fails with 4 hdds
Hi everybody. Hardware: several computers with Asus P/I-P55T2P4S motherboards, Pentium MMX and K6-2/3 CPUs, 128 to 384 MB RAM, Award 4.51PG BIOS, hard disks from 8 GB to 123.5 GB, Realtek 8139D based NICs with PXE boot PROMs. PXELINUX 2.06 / MEMDISK 2.06: fails to boot Compaq PC-DOS 3.31. PXELINUX 2.08: succeeds in booting Compaq PC-DOS 3.31 with 0, 1, 2 and 3 HDDs installed in
2014 Aug 14
2
Trying Dovecot Replication with dsync
Hi, I have a failover cluster for a mail server with: Ubuntu 12.04 + DRBD (for block replication) + ext4 filesystem + Dovecot 2.0.19-2 with mdbox. It works fine with ~50k accounts. My cluster design: http://adminlinux.com.br/cluster_design.txt I plan to test Dovecot replication with dsync to build an active/active cluster with load balancing. Can anyone direct me to some literature? A tutorial
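The dsync-based replication being asked about is configured in Dovecot 2.x roughly as follows (a sketch of the core pieces only; hostname, port and password are placeholders, and a real active/active setup needs matching config on both nodes):

```
# Enable the replication plugins
mail_plugins = $mail_plugins notify replication

service replicator {
  process_min_avail = 1
}

# The peer to replicate to (tcp: uses the doveadm service)
plugin {
  mail_replica = tcp:mail2.example.com:12345
}

# Listener the peer connects to
service doveadm {
  inet_listener {
    port = 12345
  }
}
doveadm_password = secret
```

With this in place, the replicator process keeps mailboxes synchronized between the two nodes via dsync, replacing the block-level DRBD approach with mail-level replication.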
2007 Mar 17
3
Hellllp Pl: Centos 4.4 Default LVM install boot/recovery problem
Hello all, Nil experience with LVM. I have a default CentOS 4.4 install, updated until a week ago, with three HDDs. The system's not booting up since my staff pulled out the plug due to a short circuit nearby. The machine is a PIII 550 MHz with 3 HDDs: 40 GB, 120 GB (actually bigger, but my BIOS only detects up to 120 GB) & 20 GB, about half filled with data, & the backup server taken out in
2007 Jun 16
2
Extremely broken BIOS detected
Hello, Not sure where to start looking, so I'll start with the program that threw the message. Not sure what information I need to provide or the preferred formatting. I did a BIOS update and now get an error message I never saw before. The system seems to boot OK, run programs OK, and access devices OK (SATA DVD, IDE HDD, SATA HDD, network, video, audio, USB, IEEE 1394). But I don't think Linux
2007 Nov 29
1
RAID, LVM, extra disks...
Hi, This is my current config:

/dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot
/dev/md1 -> 36 GB  -> sda2 + sdd2 -> forms VolGroup00 with md2
/dev/md2 -> 18 GB  -> sdb1 + sde1 -> forms VolGroup00 with md1

sda, sdd -> 36 GB 10k SCSI HDDs
sdb, sde -> 18 GB 10k SCSI HDDs

I have added 2 36 GB 10k SCSI drives; they are detected as sdc and sdf. What should I do if I
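Following the existing pattern (RAID1 pairs feeding VolGroup00), the usual way to fold in the new sdc/sdf pair is a third mirror plus an LVM extend. A sketch, assuming each new disk carries a single full-size partition of type fd (the logical-volume name in the last step is hypothetical):

```shell
# Mirror the two new 36 GB disks
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdf1

# Hand the new array to LVM and grow the existing volume group
pvcreate /dev/md3
vgextend VolGroup00 /dev/md3

# Then grow a logical volume and its filesystem as needed, e.g.:
# lvextend -L +30G /dev/VolGroup00/LogVol00
# resize2fs /dev/VolGroup00/LogVol00
```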
2006 Mar 16
3
LSI Logic controller status
Hello, Recently we bought HP BL35 blades with LSI Logic SAS controllers. The mpt* drivers work fine, but I would like a way to view status from the command line, to monitor e.g. whether one of the HDDs has failed. I found mptutil on the LSI Logic site, but it just shows me the configuration in quite cryptic form. Thanks, Mindaugas
2008 Jun 04
2
panic on `zfs export` with UNAVAIL disks
hi list, initial situation: SunOS alusol 5.11 snv_86 i86pc i386 i86pc, SunOS Release 5.11 Version snv_86 64-bit, 3 USB HDDs on 1 USB hub.

zpool status:
 state: ONLINE
        NAME          STATE     READ WRITE CKSUM
        usbpool       ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c7t0d0p0  ONLINE       0     0     0
            c8t0d0p0