similar to: Server migration

Displaying 20 results from an estimated 30000 matches similar to: "Server migration"

2018 May 15
4
end-to-end encryption
Hi to all, I was looking at protonmail.com. Is it possible to implement end-to-end encryption with dovecot, where server-side there is no private key to decrypt messages? If I understood properly, on protonmail the private key is encrypted with the user's password, so that only the user is able to decrypt the mailbox. Anything similar?
2017 Feb 15
4
Upgrade from 1.2 to 2.2
Hi, I have a production server running Debian Squeeze with Dovecot 1.2. I would like to upgrade everything to Jessie, running 2.2. Last time I did something similar, but from Lenny to Squeeze, the whole dovecot installation broke. Any suggestion on how to upgrade everything? Can I test our current configuration with a newer dovecot version to be sure that everything would be converted properly
2017 Sep 08
2
GlusterFS as virtual machine storage
2017-09-08 13:21 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>: > Gandalf, isn't possible server hard-crash too much? I mean if reboot > reliably kills the VM, there is no doubt network crash or poweroff > will as well. IIUC, the only way to keep I/O running is to gracefully exit glusterfsd. killall should send signal 15 (SIGTERM) to the process, maybe a bug in signal
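For what it's worth, the 128+signal exit-status convention makes the difference between signal 15 and signal 9 easy to check with a stand-in process; a minimal sketch using `sleep` in place of glusterfsd (assumption: glusterfsd receives signals like any ordinary process):

```shell
# SIGTERM (signal 15, what plain killall sends) asks for a graceful exit
sleep 60 &
pid=$!
kill -TERM "$pid"
wait "$pid" 2>/dev/null
term_status=$?    # 128 + 15 = 143 when killed by SIGTERM

# SIGKILL (signal 9, killall -9) cannot be caught, so no cleanup code ever runs
sleep 60 &
pid=$!
kill -KILL "$pid"
wait "$pid" 2>/dev/null
kill_status=$?    # 128 + 9 = 137 when killed by SIGKILL

echo "$term_status $kill_status"   # → 143 137
```

Whether glusterfsd's SIGTERM handler flushes outstanding I/O before exiting is exactly the open question in this thread.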
2017 Oct 12
1
gluster status
How can I show the current state of a gluster cluster, like status, replicas down, what is going on and so on? Something like /proc/mdstat for RAID, where I can see which disks are down, if the raid is rebuilding, checking, .... Anything similar in gluster?
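For reference, the gluster CLI does expose most of this; a sketch assuming a replicated volume named `gv0` (placeholder name):

```shell
# overall brick and process state for one volume (or "all")
gluster volume status gv0

# which entries are out of sync and pending self-heal -- the closest
# thing to /proc/mdstat for a replicated volume
gluster volume heal gv0 info

# membership state of the peers in the cluster
gluster peer status
```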
2016 Oct 24
2
Server migration
2016-10-24 14:47 GMT+02:00 Michael Seevogel <ms at ddnetservice.de>: > If your server OS supports newer Dovecot versions then I would highly > suggest you to upgrade to Dovecot 2.2.xx (or at least to the latest 2.1) and > set up Dovecot's replication[1] feature. Are you talking about the new server or the older one that I have to replace? The new server has to be installed from
2017 Sep 08
2
GlusterFS as virtual machine storage
2017-09-08 13:07 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>: > OK, so killall seems to be ok after several attempts i.e. iops do not stop > on VM. Reboot caused I/O errors after maybe 20 seconds since issuing the > command. I will check the servers console during reboot to see if the VM > errors appear just after the power cycle and will try to crash the VM after >
2016 Oct 24
5
Server migration
Hi, I have to migrate, online, a dovecot 1.2.15 installation to a new server. Which is the best way to accomplish this? I have 2 possibilities: 1) migrate from the very old server to a newer server with the same dovecot version 2) migrate from the very old server to a new server with the latest dovecot version. Can I simply use rsync to sync everything and, when the sync is quick, move the mailbox from the old
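A sketch of how option 1 usually goes with rsync, assuming maildir storage under /var/vmail and a reachable host named newserver (both placeholder values):

```shell
# pre-seed while dovecot is still running on the old server;
# repeat until a full pass finishes quickly
rsync -aH /var/vmail/ newserver:/var/vmail/

# then stop dovecot on the old server, do one final pass with --delete
# so the copy is exact, and start dovecot on the new server
rsync -aH --delete /var/vmail/ newserver:/var/vmail/
```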
2017 Dec 15
3
IMAP proxy
I'm migrating an old server to another old server (same dovecot version on both servers). The migration itself is straightforward: stop dovecot on the old server, migrate everything via rsync, start dovecot on the new server. There is only one step left: change the DNS configuration, pointing from the old server to the newer one. As most of the domains are not managed by me and some other domains
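If changing DNS for every domain is the sticking point, one option is to keep the old IP answering as a Dovecot proxy that forwards all logins to the new box until the records catch up; a minimal sketch using a static passdb, where the backend address is a placeholder:

```
# on the old server, after the mailboxes have moved
# (192.0.2.10 is a placeholder for the new server's address)
passdb {
  driver = static
  args = proxy=y host=192.0.2.10 nopassword=y
}
```

With nopassword=y the proxy forwards the client's credentials and lets the backend do the actual authentication.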
2016 Oct 27
4
Server migration
On 27 Oct 2016, at 15:29, Tanstaafl <tanstaafl at libertytrek.org> wrote: > > On 10/26/2016 2:38 AM, Gandalf Corvotempesta > <gandalf.corvotempesta at gmail.com> wrote: >> This is much easier than dovecot replication as I can start immediately with >> no need to upgrade the old server >> >> my only question is: how to manage the email received on the
2016 Oct 26
4
Server migration
On 26 Oct 2016 8:30 AM, "Aki Tuomi" <aki.tuomi at dovecot.fi> wrote: > I would recommend using same major release with replication. > > If you are using maildir++ format, it should be enough to copy all the > maildir files over and start dovecot on the new server. > This is much easier than dovecot replication as I can start immediately with no need to upgrade the
2017 Jun 06
2
Rebalance + VM corruption - current status and request for feedback
Hi Mahdi, Did you get a chance to verify this fix again? If this fix works for you, is it OK if we move this bug to CLOSED state and revert the rebalance-cli warning patch? -Krutika On Mon, May 29, 2017 at 6:51 PM, Mahdi Adnan <mahdi.adnan at outlook.com> wrote: > Hello, > > > Yes, I forgot to upgrade the client as well. > > I did the upgrade and created a new volume,
2016 Oct 29
2
Dovecot Proxy and Director
Hi, just a simple question: by using a director and a proxy, would I be able to totally hide the pop3/imap server IP addresses from outside? I'm asking this because I would like to hide the real server IPs for security reasons (DDoS and so on). The proxy would be placed on servers with high bandwidth while the pop3/imap dovecot servers are placed in a small datacenter that would go down
2016 Nov 17
1
Dovecot proxy
Hi to all, I have some *production* pop3/imap servers that I would like to move behind a proxy. Some questions: 1. Keeping the same original hostname on the proxy (for example mail.mydomain.tld) and changing the hostname on the imap server, does this cause any trouble, like the MUA redownloading all the messages? Is dovecot (running on the imap server) happy seeing the hostname change? What about
2017 Sep 08
4
GlusterFS as virtual machine storage
Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O after a few minutes. SIGTERM on the other hand causes a crash, but this time it is not a read-only remount, but around 10 IOPS tops and 2 IOPS on average. -ps On Fri, Sep 8, 2017 at 1:56 PM, Diego Remolina <dijuremo at gmail.com> wrote: > I currently only have a Windows 2012 R2 server VM in testing on top of > the gluster storage,
2012 Apr 27
1
geo-replication and rsync
Hi, can someone tell me the difference between geo-replication and plain rsync? At which frequency are files replicated with geo-replication?
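For context: geo-replication is not a fixed-schedule copy like a cron'd rsync; it syncs incrementally and continuously as the master volume changes, rather than re-scanning the whole tree on each run. Session management goes through the gluster CLI; a sketch with master volume and slave names as placeholder values:

```shell
# start replicating master volume gv0 to a volume on a remote slave host
gluster volume geo-replication gv0 slavehost::gv0-slave start

# check whether the session is active and how far syncing has progressed
gluster volume geo-replication gv0 slavehost::gv0-slave status
```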
2017 Jun 29
4
How to shutdown a node properly ?
The init.d/systemd scripts don't kill gluster automatically on reboot/shutdown? On 29 Jun 2017 5:16 PM, "Ravishankar N" <ravishankar at redhat.com> wrote: > On 06/29/2017 08:31 PM, Renaud Fortier wrote: > > Hi, > > Every time I shut down a node, I lose access (from clients) to the volumes > for 42 seconds (network.ping-timeout). Is there a special way to
2017 Sep 08
3
GlusterFS as virtual machine storage
2017-09-08 13:44 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>: > I did not test SIGKILL because I suppose if graceful exit is bad, SIGKILL > will be as well. This assumption might be wrong. So I will test it. It would > be interesting to see client to work in case of crash (SIGKILL) and not in > case of graceful exit of glusterfsd. Exactly. If this happens, probably there
2017 Oct 04
2
data corruption - any update?
On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran <nbalacha at redhat.com> wrote: > > > On 3 October 2017 at 13:27, Gandalf Corvotempesta < > gandalf.corvotempesta at gmail.com> wrote: > >> Any update about multiple bugs regarding data corruptions with >> sharding enabled ? >> >> Is 3.12.1 ready to be used in production? >> > >
2017 Sep 08
0
GlusterFS as virtual machine storage
2017-09-08 14:11 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>: > Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O after a few > minutes. SIGTERM on the other hand causes a crash, but this time it is > not a read-only remount, but around 10 IOPS tops and 2 IOPS on average. > -ps So, it seems to be resilient to server crashes but not to server shutdown :)
2017 Jun 29
0
How to shutdown a node properly ?
On Thu, Jun 29, 2017 at 12:41 PM, Gandalf Corvotempesta < gandalf.corvotempesta at gmail.com> wrote: > The init.d/systemd scripts don't kill gluster automatically on > reboot/shutdown? > > Sounds less like an issue with how it's shut down and more like an issue with how it's mounted, perhaps. My gluster fuse mounts seem to handle any one node being shut down just fine as long as