similar to: How to archive mails on different server

Displaying 20 results from an estimated 500 matches similar to: "How to archive mails on different server"

2015 Feb 17
2
Help with archive server
Hi, I want to build a system where mail older than 6 months is moved to an archive server, to keep my mailbox clean and fast. Is there any way to achieve this using any pre-built binary shipped with Dovecot? Thanks and Regards Joy
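One possible starting point, sketched here with doveadm (shipped with Dovecot 2.x); the user, the Archive mailbox name and the 26-week cutoff are only illustrative, and the Archive mailbox is assumed to already exist:

    # Move INBOX messages saved more than ~6 months ago into an Archive mailbox (names are placeholders)
    doveadm move -u joy@example.com Archive mailbox INBOX savedbefore 26w

Run from a nightly cron job this keeps the live INBOX small; putting the Archive namespace itself on a separate server (e.g. via imapc or dsync) is a separate configuration step not shown here.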
2023 Mar 20
1
Read-only / archive mode for IMAP mailboxes?
On 20.03.23 at 18:26, Brendan Braybrook wrote: > check out the imap acl support: > https://doc.dovecot.org/configuration_manual/acl/ > > On 2023-03-20 10:12, R???? P??????? wrote: >> Hello! >> >> We are currently exploring email archiving solutions. Is there a way >> to use an >> IMAP mailbox in read-only / archive mode? The requirement is that
2014 Oct 13
1
delete/archive old mail
Hi everybody, I store users' mail in the old mbox format. I have many scripts to manage users that work fine with mbox. I use the very old Expire_mail.pl script to delete mail older than NN days for selected users (nightly cron job). It still works fine with my CentOS dovecot-2.0.16. Now I want to move the mail to a sort of archive folder instead of simply deleting it from the inbox for some
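As a hedged sketch of what could replace Expire_mail.pl for the move case: doveadm understands mbox as well, so a nightly cron loop over the selected users could look roughly like this (the user list, folder name and 30-day cutoff are placeholders):

    #!/bin/sh
    # Illustrative only: move old INBOX mail into an Archive folder instead of deleting it
    for u in user1 user2 user3; do
        doveadm move -u "$u" Archive mailbox INBOX savedbefore 30d
    done

The Archive folder may need to exist (or be auto-created) in each account before the move.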
2023 Mar 20
1
Read-only / archive mode for IMAP mailboxes?
check out the imap acl support: https://doc.dovecot.org/configuration_manual/acl/ On 2023-03-20 10:12, R???? P??????? wrote: > Hello! > > We are currently exploring email archiving solutions. Is there a way to use an > IMAP mailbox in read-only / archive mode? The requirement is that deliveries of > new emails should be possible (via SMTP/LMTP), but no messages should be deleted
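For reference, a minimal sketch of the ACL approach suggested above, assuming the acl plugin is enabled and a global ACL file is used; the file path is arbitrary, and whether LMTP deliveries are affected by ACLs at all depends on the setup, so the exact rights to grant need testing:

    # dovecot.conf (sketch)
    mail_plugins = $mail_plugins acl
    plugin {
      acl = vfile:/etc/dovecot/dovecot-acl
    }

    # /etc/dovecot/dovecot-acl - read-only for the owner, 'p' (post) kept so deliveries still work
    * owner lrsp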
2013 Jul 12
1
getting quota error when accessing private namespace
Hi all, I have run into a problem which I cannot find a solution for. I have created an additional private namespace with the following settings in dovecot.conf:

namespace {
  disabled = no
  hidden = no
  ignore_on_failure = no
  inbox = no
  list = children
  location = maildir:/var/vmail/archives/%Ln/Maildir
  mailbox "archived mails" {
    auto = subscribe
    driver =
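One possible cause (sketched, not verified against this exact setup) is that the archive namespace gets counted against the user's normal quota root; Dovecot's quota roots accept an ns= option that scopes a root to a single namespace, which allows giving the archive namespace its own limit. The 'archives/' prefix and the limit below are assumptions:

    plugin {
      quota  = maildir:User quota
      quota2 = maildir:Archive quota:ns=archives/
      quota2_rule = *:storage=50G   # illustrative limit for the archive namespace only
    }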
2020 Oct 07
3
Version controlled (git) Maildir generated by Dovecot
Thank you Vitalii. Could you please tell me / do you know whether those dovecot* files also have to be backed up / archived? Kind regards, Adam > ---------- Original e-mail ---------- > From: Vitalii <vnagara at yandex.com> > To: Adam <adam.ranek at seznam.cz> > Date: 7. 10. 2020 12:04:11 > Subject: Re: Version controlled (git) Maildir generated by Dovecot > My 5 cents:
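As a hedged rule of thumb (worth verifying against the Dovecot documentation for the version in use): the dovecot.index* cache/index files are rebuildable and can usually be left out of a backup or git history, while dovecot-uidlist and dovecot-keywords carry the IMAP UIDs and keyword mappings and are worth keeping. A .gitignore along these lines is one way to express that, assuming a plain Maildir tree is being versioned:

    # Rebuildable Dovecot index/cache files - safe to exclude (assumption, verify per version)
    dovecot.index
    dovecot.index.log
    dovecot.index.log.2
    dovecot.index.cache
    # Keep dovecot-uidlist and dovecot-keywords: they preserve IMAP UIDs and keywords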
2013 Feb 28
4
Disallow Deletion from Trash Folder
Hello: I've been tasked with trying to find a way to keep users from ever "permanently" deleting emails. The users are running Thunderbird and are using the "Archive" option for when emails are deleted. However, they are still able to delete emails from the Archive folders... I'm wondering if there's any way that I can configure Dovecot to make sure that
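A hedged sketch of how the ACL plugin could express "no expunging from Archive" while leaving everything else alone; the folder names, the global ACL file location and the exact rights to keep are assumptions to be tested against Thunderbird's behaviour:

    # /etc/dovecot/dovecot-acl (global ACL file, sketch)
    Archive    owner lrwstipk    # no 'e' (expunge), so messages can't be permanently removed
    Archive/*  owner lrwstipk
    *          owner lrwstipekxa # full rights everywhere else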
2014 Jan 02
1
How to remove Dovecot (LMTP) information from Email header
Hello All, I want to remove the Dovecot (LMTP) information from the email header. Please help me. I am using Dovecot 2.0.9 with Exim. Received: from XX.XXblue.co.uk by XX.XXblue.co.uk (Dovecot) with LMTP id XIuTJkJFxVLKTwAAG2fxGQ for <anant.saraswat at techblue.co.uk>; Thu, 02 Jan 2014 10:59:28 +0000 Received: from [210.7.64.2] (helo=[192.168.100.71]) by solo.techblue.co.uk with esmtp (Exim
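For what it's worth, Dovecot 2.0 has (to my knowledge) no switch for this, but newer releases added a setting that drops the LMTP Received header at delivery time; this is only an option if upgrading is possible:

    # dovecot.conf - available in Dovecot 2.3+, not in 2.0.9
    lmtp_add_received_header = no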
2024 Feb 18
1
Graceful shutdown doesn't stop all Gluster processes
Well, you prepare the host for shutdown, right? So why don't you set up systemd to start the container and shut it down before the bricks? Best Regards, Strahil Nikolov On Friday, 16 February 2024 at 18:48:36 GMT+2, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote: Hi Strahil, Yes, we mount the fuse to the physical host and then use bind mount to
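A hedged sketch of the systemd ordering Strahil describes: if the container unit is ordered After= the gluster daemon and the fuse mount, systemd stops it before them on shutdown. The unit and mount names below are placeholders:

    # /etc/systemd/system/my-container.service.d/order.conf (sketch)
    [Unit]
    # Started after glusterd and the fuse mount, therefore stopped before them on shutdown
    After=glusterd.service mnt-datavol.mount
    Requires=mnt-datavol.mount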
2024 Feb 05
1
Challenges with Replicated Gluster volume after stopping Gluster on any node.
Hi, normally, when we shut down or reboot one of the (server) nodes, we call the "stop-all-gluster-processes.sh" script. But I think you did that, right? Best regards, Hubert On Mon, 5 Feb 2024 at 13:35, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote: > Hello Everyone, > > We have a replicated Gluster volume with three nodes, and we face a >
2024 Feb 16
1
Graceful shutdown doesn't stop all Gluster processes
Hi Strahil, Yes, we mount the fuse to the physical host and then use bind mount to provide access to the container. The same physical host also runs the gluster server. Therefore, when we stop gluster using 'stop-all-gluster-processes.sh' on the physical host, it kills the fuse mount and impacts containers accessing this volume via bind. Thanks, Anant ________________________________
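For context, the layout described above would look roughly like this on the physical host (hostnames, volume and paths are placeholders):

    # Fuse-mount the volume on the host, then bind-mount it into the container's path
    mount -t glusterfs gluster1:/data-vol /mnt/datavol
    mount --bind /mnt/datavol /var/lib/containers/app/data

Killing the host's glusterfs fuse process (which stop-all-gluster-processes.sh does) makes both the fuse mount and the bind mount unusable, which is the impact described here.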
2024 Feb 16
2
Graceful shutdown doesn't stop all Gluster processes
Okay, I understand. Yes, it would be beneficial to include an option for skipping the client processes. This way, we could utilize the 'stop-all-gluster-processes.sh' script with that option to stop the gluster server process while retaining the fuse mounts. ________________________________ From: Aravinda <aravinda at kadalu.tech> Sent: 16 February 2024 12:36 PM To: Anant Saraswat
2024 Feb 16
1
Graceful shutdown doesn't stop all Gluster processes
Hi Anant, Do you use the fuse client in the container? Wouldn't it be more reasonable to mount the fuse and then use bind mount to provide access to the container? Best Regards, Strahil Nikolov On Fri, Feb 16, 2024 at 15:02, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote: Okay, I understand. Yes, it would be beneficial to include an option for skipping the client
2024 Feb 26
1
Graceful shutdown doesn't stop all Gluster processes
Hi Strahil, In our setup, the Gluster brick comes from an iSCSI SAN storage and is then used as a brick on the Gluster server. To extend the brick, we stop the Gluster server, extend the logical volume (LV) on the SAN server, resize it on the host, mount the brick with the extended size, and finally start the Gluster server. Please let me know if this process can be optimized, I will be happy to
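Assuming the brick sits on LVM on top of the iSCSI LUN and uses XFS (neither is stated explicitly above), the expansion can usually be done online, roughly along these lines; all device, VG/LV and brick paths are placeholders:

    # Grow the LUN on the SAN first, then on the gluster host:
    iscsiadm -m session --rescan            # pick up the larger LUN size
    pvresize /dev/mapper/san-lun            # grow the physical volume
    lvextend -L +100G /dev/vg_gluster/brick1
    xfs_growfs /gluster/brick1              # grows XFS while the brick stays mounted

With this, neither the brick process nor glusterd needs to be stopped, which matches the point made elsewhere in the thread that expansion is an online operation.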
2024 Feb 05
1
Challenges with Replicated Gluster volume after stopping Gluster on any node.
Hello Everyone, We have a replicated Gluster volume with three nodes, and we face a strange issue whenever we need to restart one of the nodes in this cluster. As per my understanding, if we shut down one node, the Gluster mount should smoothly connect to another remaining Gluster server and shouldn't create any issues. In our setup, when we stop Gluster on any of the nodes, we mostly get
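One hedged thing to check for client-side trouble when a node goes down: whether the fuse clients can fall back to the remaining nodes, and how long network.ping-timeout makes them wait when a node disappears uncleanly. Hostnames and the volume name are placeholders:

    # /etc/fstab sketch - let the client fetch its volfile from the other nodes too
    node1:/repl-vol  /mnt/repl-vol  glusterfs  defaults,_netdev,backup-volfile-servers=node2:node3  0 0

    # Check the ping timeout (clients can stall up to this long, default 42s, if a node dies without a clean shutdown)
    gluster volume get repl-vol network.ping-timeout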
2014 Jan 27
1
I am unable to find my windows share in ~/.gvfs
Hello All, I am facing a strange issue. I use Eclipse; I was using Ubuntu earlier and am now trying CentOS. On Ubuntu I can find my share in ~/.gvfs, but now I am unable to locate the same in CentOS (release 6.5 Final). I have also checked /var/run/, and I don't have /run on my system. So can someone please help me with how I can get it in CentOS? Actually I can access shares by using
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
Don't forget to test with the georep key. I think it was /var/lib/glusterd/geo-replication/secret.pem Best Regards, Strahil Nikolov On Saturday, 27 January 2024 at 07:24:07 GMT+2, Strahil Nikolov <hunter86_bg at yahoo.com> wrote: Hi Anant, I would first start by checking if you can ssh from all masters to the slave node. If you haven't set up a dedicated user for the
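A quick way to test that path from each master, using the key mentioned above (the slave hostname is a placeholder and root is assumed, since no dedicated georep user was set up):

    ssh -i /var/lib/glusterd/geo-replication/secret.pem root@slave-node 'echo ssh-ok'
    gluster volume geo-replication <master-vol> root@slave-node::<slave-vol> status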
2024 Feb 16
1
Graceful shutdown doesn't stop all Gluster processes
No. If the script is used to update the GlusterFS packages in the node, then we need to stop the client processes as well (Fuse client is `glusterfs` process. `ps ax | grep glusterfs`). The default behaviour can't be changed, but the script can be enhanced by adding a new option `--skip-clients` so that it can skip stopping the client processes. -- Aravinda Kadalu Technologies
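Purely as an illustration of the proposed enhancement (the option does not exist in the script as of this thread), the skip could be as small as guarding the client kill with a flag; the pkill patterns are simplified stand-ins for what the real script does:

    # Hypothetical --skip-clients handling inside stop-all-gluster-processes.sh
    SKIP_CLIENTS=no
    [ "$1" = "--skip-clients" ] && SKIP_CLIENTS=yes

    pkill -x glusterd      # management daemon
    pkill -x glusterfsd    # brick processes

    if [ "$SKIP_CLIENTS" = "no" ]; then
        pkill -x glusterfs # fuse client mounts - skipped when --skip-clients is given
    fi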
2024 Feb 24
1
Graceful shutdown doesn't stop all Gluster processes
Hi Anant, why would you need to shut down a brick to expand it? This is an online operation. Best Regards, Strahil Nikolov
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
Hi Anant, I would first start by checking if you can ssh from all masters to the slave node. If you haven't set up a dedicated user for the session, then gluster is using root. Best Regards, Strahil Nikolov On Friday, 26 January 2024 at 18:07:59 GMT+2, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote: Hi All, I have run the following commands on master3,