similar to: Best practices for mailbox network file storage?

Displaying 20 results from an estimated 6000 matches similar to: "Best practices for mailbox network file storage?"

2019 Feb 15
2
Please Recommend Affordable and Reliable Cloud Storage for 50 TB of Data
OP - Backblaze Personal. Maybe $1/month more than your budget. Unlimited IO and backup storage, assuming you only need redundancy. https://www.backblaze.com/cloud-backup.html Still going to take a while on the initial upload. (Sounds almost like AWS Snowball is what you need, but too costly.) Regards, R. S. Tyler Schroder Redcoded.com Cyber Intelligence > On Feb 15, 2019, at 10:37
2011 Dec 31
1
problem with missing bricks
Gluster-user folks, I'm trying to use gluster in a way that may be considered an unusual use case for gluster. Feel free to let me know if you think what I'm doing is dumb. It just feels very comfortable doing this with gluster. I have been using gluster in other, more orthodox configurations, for several years. I have a single system with 45 inexpensive sata drives - it's a
2011 May 08
4
Building a Back Blaze style POD
Hi All, I am about to embark on a project that deals with allowing information archival, over time and seeing change over time as well. I can explain it a lot better, but I would certainly talk your ear off. I really don't have a lot of money to throw at the initial concept, but I have some. This device will host all of the operations for the first few months until I can afford to build a
2019 Feb 15
0
Please Recommend Affordable and Reliable Cloud Storage for 50 TB of Data
On 15.02.2019 18:10, (RS) Tyler Schroder wrote: > OP - Backblaze Personal. May be like $1/extra per month than your budget. Unlimited IO and backup storage assuming you only need redundancy. > > https://www.backblaze.com/cloud-backup.html would you really back up into a system that has closed connectivity? I'd prefer connecting the way I want: e.g. SFTP, SSHFS, HTTPS, ... and not it is
2010 Oct 14
3
best practices in using shared storage for XEN Virtual Machines and auto-failover?
Hi all, Can anyone please tell me what would be best practice for using shared storage with virtual machines, especially when it involves high availability / automated failover between 2 XEN servers? i.e. if I setup 2x identical XEN servers, each with say 16GB RAM, 4x 1GB NIC's, etc. Then I need the xen domU's to auto failover between the 2 servers if either goes down (hardware failure /
2013 Aug 21
2
Dovecot tuning for GFS2
Hello, I'm deploying a new email cluster using Dovecot over GFS2. Currently I'm using Courier over GFS. I'm testing Dovecot with these parameters: mmap_disable = yes mail_fsync = always mail_nfs_storage = yes mail_nfs_index = yes lock_method = fcntl Are they correct? RedHat GFS supports mmap, so is it better to enable it or leave it disabled? The documentation suggests the
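The parameters quoted above can be laid out as a dovecot.conf fragment. A minimal sketch, assuming Dovecot 2.x: since GFS2 has coherent fcntl locking and supports mmap, mmap_disable could plausibly stay off, and the mail_nfs_* settings are specific to NFS backends, so a GFS2-only deployment would omit them. This is an interpretation of the poster's question, not a confirmed recommendation from the thread.

```
# Sketch of a dovecot.conf fragment for mailboxes on GFS2 (assumption: Dovecot 2.x).
# GFS2 supports mmap with cluster-coherent caching, so mmap can stay enabled.
mmap_disable = no
# Always fsync, so other cluster nodes see a consistent mailbox state.
mail_fsync = always
# fcntl locks are propagated cluster-wide by GFS2's lock manager.
lock_method = fcntl
# mail_nfs_storage / mail_nfs_index apply to NFS only; omitted for GFS2.
```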
2012 Jan 25
6
Can anyone talk infrastructure with me?
Hi All, I started a 501c3 (not-for-profit) organization back in February 2011 to deal with information archival. A long vision here, I won't bore you with the details (if you really want to know, e-mail me privately) but the gist is I need to build an infrastructure to accommodate about 2PB of data that is database stuff, stored video, crawl data, static data sets, etc. Right now in my testing of
2015 May 25
3
Shared inbox?
I'm running dovecot 2.2.16 on my FreeBSD mail server. I've read information on the wiki about setting up shared mailboxes, but I want to do something that isn't really covered by the instructions I was reading there. My son (now 7 years old) has an account on the system, but doesn't use it directly. But, for things he's interested in like Minecraft, and/or the local zoo, we
2019 Feb 15
2
Please Recommend Affordable and Reliable Cloud Storage for 50 TB of Data
On Feb 15, 2019, at 7:56 AM, Yan Li <elliot.li.tech at gmail.com> wrote: > > G Suite Business tier. Buy five users and you get unlimited Google Drive > storage. That's $50/month. So, you're already 12x higher than his budget, and it'll be going up 20% in early April. On top of that, there's certainly a transfer rate limit. I couldn't find a reliable source saying what that
2010 Apr 08
1
ZFS monitoring - best practices?
We're starting to grow our ZFS environment and really need to start standardizing our monitoring procedures. OS tools are great for spot troubleshooting and sar can be used for some trending, but we'd really like to tie this into an SNMP based system that can generate graphs for us (via RRD or other). Whether or not we do this via our standard enterprise monitoring tool or
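The polling side of what the poster describes can be sketched in a few lines: scrape `zpool list -Hp` (machine-readable, exact byte counts) and emit name=value pairs that an SNMP extension (e.g. net-snmp's `extend`) or an RRD updater could consume. The field list and this overall approach are assumptions for illustration, not the thread's actual setup.

```python
#!/usr/bin/env python3
"""Minimal sketch: poll `zpool list -Hp` and print per-pool metrics."""
import subprocess

FIELDS = ("name", "size", "alloc", "free", "health")

def parse_zpool_list(text):
    """Parse `zpool list -Hp -o name,size,alloc,free,health` output
    (tab-separated, one pool per line) into a list of dicts."""
    pools = []
    for line in text.strip().splitlines():
        pool = dict(zip(FIELDS, line.split("\t")))
        for key in ("size", "alloc", "free"):  # -p prints exact byte counts
            pool[key] = int(pool[key])
        pools.append(pool)
    return pools

def poll():
    """Run zpool and return parsed pool stats."""
    out = subprocess.run(
        ["zpool", "list", "-Hp", "-o", ",".join(FIELDS)],
        capture_output=True, text=True, check=True).stdout
    return parse_zpool_list(out)

if __name__ == "__main__":
    for pool in poll():
        print("{name} used={alloc} free={free} health={health}".format(**pool))
```

A cron job feeding this into `rrdtool update`, or wiring it behind `extend` in snmpd.conf, would cover both the RRD and SNMP routes mentioned above.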
2014 Feb 12
3
Right way to do SAN-based shared storage?
I'm trying to set up SAN-based shared storage in KVM, key word being "shared" across multiple KVM servers for a) live migration and b) clustering purposes. But it's surprisingly sparsely documented. For starters, what type of pool should I be using?
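One common answer to the pool-type question is an iSCSI-backed pool, where each LUN on the target appears as a libvirt volume that can be attached to a guest as a raw disk; defining the same pool on every KVM host is what makes live migration possible. A hypothetical sketch (the target IQN and portal address are placeholders, not from the thread):

```xml
<!-- Hypothetical libvirt iSCSI pool definition; host and IQN are placeholders.
     Define the identical pool on every KVM host that should see the SAN. -->
<pool type="iscsi">
  <name>san-pool</name>
  <source>
    <host name="192.0.2.10"/>
    <device path="iqn.2014-02.example.com:storage.target1"/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
```

An LVM-based (`logical`) pool over a shared LUN is the other frequently cited option, with the caveat that clustered use needs care around metadata changes.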
2012 Mar 06
3
Samba to share NFSv4 + ACL mounted filesystems on NetApp storage
Hi, We are running into a problem with a Samba setup and would like to know if a current fix or workaround is at all possible. Our setup is a NetApp filer serving NFS v4 that is mounted by Solaris and Linux servers. On those servers we are using Samba to create shares of those NFSv4 mounted filesystems. We are migrating to this NFSv4 setup from an existing Solaris NFSv3+Posix ACL setup that also
2011 May 05
5
Dovecot imaptest on RHEL4/GFS1, RHEL6/GFS2, NFS and local storage results
We have done some benchmarking tests using dovecot 2.0.12 to find the best shared filesystem for hosting many users; here I share with you the results. Notice the bad performance of all the shared filesystems against the local storage. Is there any specific optimization/tuning on dovecot for using GFS2 on RHEL6? We have configured the director to make the user mailbox persistent on a node, we will
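The director setup the poster mentions, pinning each user's mailbox to one backend node, boils down to a couple of settings on the proxy tier. A sketch only, with placeholder IPs (assumption: Dovecot 2.0-era director directives):

```
# Sketch of Dovecot 2.0 director settings; all IPs are placeholders.
# director_servers lists the director/proxy ring, director_mail_servers
# the backend mailbox nodes; users are consistently hashed so the same
# mailbox is always served from one node, avoiding cross-node locking.
director_servers = 192.0.2.1 192.0.2.2
director_mail_servers = 192.0.2.11 192.0.2.12
director_user_expire = 15 min
```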
2010 Jul 02
4
mainboard recommendations
Hi, As it has been brought to my attention (by several parties), my current MB does not support VT-d. Since I don't feel like purchasing a brand new system, I have searched for replacement MB-s that would fit into my existing uATX case, support my C2D CPU, and support at least 4x2GB ram. I have narrowed it down to the following four models: SUPERMICRO MBD-C2SBM-Q-O (Q35 chipset)
2023 Apr 15
1
Transcode lossy to further reduced lossy to stream over Icecast
Opus or AAC will give you comparable results at reasonable bitrates (~128k). Though, I would suggest finding a way to get more storage. You could upload to Backblaze B2 or AWS S3 for pennies, if your current host won't let you upgrade. On Sat, Apr 15, 2023 at 3:36 PM D.T. <ohnonot-github at posteo.de> wrote: > Situation: > > - remote virtual server with very little
2006 Nov 09
8
XEN sound emulation locks device exclusively
Hi there! My problem is the following: I am experimenting with virtualization with XEN. I am running WinXP in an HVM guest with a Debian Gnu/Linux (Etch) dom0. I am also using audio emulation (sb16). This way, sounds from WinXP work fine. And here comes the problem: The HVM guest completely reserves the audio device; other software cannot use it. (mplayer says this: alsa-init: using ALSA
2011 Jan 13
6
Best Cluster Storage
Hi Everyone, I wish to create a Postfix/Dovecot active-active cluster (each node will run Postfix *and* Dovecot), which will obviously have to use central storage. I'm looking for ideas to see what's the best out there. All of this will be running on multiple Xen hosts, however I don't think that matters as long as I make sure that the cluster nodes are on different physical
2023 Apr 16
1
Transcode lossy to further reduced lossy to stream over Icecast
I created some test samples and transcoded to FDK AAC and libopus at fairly low bitrates - I cannot recreate what bothered me about Opus & noisy music previously. It also seems I cannot tease ffmpeg into encoding FDK's AAC with VBR. As it stands, Opus clearly wins in this scenario.* Q: Is it possible to stream in variable bitrate? * ffmpeg -i "$track" -vn -ac 2 -c:a libfdk_aac
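On the VBR question: ffmpeg's libopus encoder treats `-b:a` as a VBR target and is variable-bitrate by default (`-vbr on`); `-vbr constrained` caps the rate, which can be friendlier for streaming. A sketch mirroring the shape of the poster's command (`"$track"` is their placeholder; the bitrate is an illustrative choice):

```shell
# Encode to Opus in VBR; -vbr on is libopus's default, shown explicitly.
ffmpeg -i "$track" -vn -ac 2 -c:a libopus -b:a 96k -vbr on out.opus
```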
2011 May 06
2
single storage server
I have a single storage server which exports /data to a number of clients. Is it ok to access the data on the storage server directly (ie not via glusterfs mount)? (I know this causes problems when there are multiple servers.) This would simplify some configurations. Nick
2011 Jun 09
1
NFS problem
Hi, I got the same problem as Juergen, My volume is a simple replicated volume with 2 host and GlusterFS 3.2.0 Volume Name: poolsave Type: Replicate Status: Started Number of Bricks: 2 Transport-type: tcp Bricks: Brick1: ylal2950:/soft/gluster-data Brick2: ylal2960:/soft/gluster-data Options Reconfigured: diagnostics.brick-log-level: DEBUG network.ping-timeout: 20 performance.cache-size: 512MB