similar to: imap-login: Authenticate PLAIN failed: Unsupported authentication mechanism - with Evolution

Displaying 20 results from an estimated 10000 matches similar to: "imap-login: Authenticate PLAIN failed: Unsupported authentication mechanism - with Evolution"

2015 Jun 25
0
imap-login: Authenticate PLAIN failed: Unsupported authentication mechanism - with Evolution
On Thursday, 25.06.2015 at 11:35 +0100, lejeczek wrote: > I wonder if you know if Evolution works with dovecot TLS? Of course. I use dovecot+Evolution just fine. You only need to enable the PLAIN and/or LOGIN auth mechanisms in your config, or the other ones supported by Evolution. And TLS doesn't matter in this case. As long as Evolution has compiled it in and dovecot has compiled it in, then
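For reference, a minimal sketch of the settings the reply refers to, assuming a stock conf.d layout (the file name is an assumption):

    # /etc/dovecot/conf.d/10-auth.conf (assumed location; a minimal sketch)
    # Offer the mechanisms Evolution can use; with disable_plaintext_auth=yes,
    # PLAIN/LOGIN are only accepted on TLS-protected connections.
    auth_mechanisms = plain login
    disable_plaintext_auth = yes

Verify the live values with: doveconf auth_mechanisms disable_plaintext_auth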
2015 Jun 22
2
a temporary failure
On Monday, 22.06.2015 at 10:05 +0100, lejeczek wrote: > > > I wonder could there be some kind of collision between > user/passdbs, even though I do not configure anything but > ldap, when I do: passdb { driver = pam } passdb { args = /etc/dovecot/ldap-passdb-my.domain.conf driver = ldap } Check your whole dovecot config. You have an active PAM passdb lookup and an
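A minimal sketch of the fix the reply implies, keeping only the LDAP passdb so PAM is never consulted (the args path is the one quoted in the thread):

    # Remove the `passdb { driver = pam }` block entirely, leaving only:
    passdb {
      driver = ldap
      args = /etc/dovecot/ldap-passdb-my.domain.conf
    }

Then confirm with `doveconf -n | grep -A 2 passdb` that a single passdb remains.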
2015 Aug 08
3
backing up email / saving maildir on external hard drives
Dear Christian, Thanks for your feedback. The HDD will not accept files larger than 4GB (as it's in FAT format). It's a new external HDD. Thinking of the best format (one that would work with Mac, Win and Linux) seems like a challenge. What's your view on NTFS? And why not exFAT? Thanks Kevin On Saturday, August 8, 2015, Christian Kivalo <ml+dovecot at valo.at> wrote: > > > On 08.
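If exFAT is the pick for cross-OS use, a hedged sketch (assumes exFAT tooling is installed and /dev/sdX1 is the external drive's partition; reformatting is destructive, so verify the device first):

    lsblk                                         # confirm which device is the HDD
    sudo mkfs.exfat /dev/sdX1                     # DESTRUCTIVE: wipes the partition
    sudo mount /dev/sdX1 /mnt
    rsync -r -t ~/Maildir/ /mnt/maildir-backup/   # exFAT keeps no Unix owners/modes

Note that exFAT (like FAT) stores no Unix ownership or permissions, so a maildir copied there loses that metadata; copying a tarball instead preserves it.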
2019 Jul 04
2
solr vs fts
On Thursday, 04.07.2019 at 12:27 +0300, Aki Tuomi via dovecot wrote: > On 4.7.2019 12.22, Maciej Milaszewski IQ PL via dovecot wrote: > > Hi > > So you're advised to use solr or something else? > > > > Using any FTS is advisable; currently suitable ones would be SOLR or > Xapian (see https://github.com/grosjo/fts-xapian) > Hi Aki, I didn't yet
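For context, a minimal sketch of wiring Dovecot to Solr via the fts_solr plugin (the URL and core name are assumptions for a local Solr instance):

    mail_plugins = $mail_plugins fts fts_solr
    plugin {
      fts = solr
      fts_solr = url=http://localhost:8983/solr/dovecot/
    }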
2017 Sep 01
2
peer rejected but connected
Logs from the newly added node helped me in the RCA of the issue. The info file on node 10.5.6.17 contains an additional property, "tier-enabled", which is not present in the info file on the other 3 nodes; hence, when a gluster peer probe call is made, the cksum is compared in order to maintain consistency across the cluster. In this case, as the files differ, the cksums differ, causing state in
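A hedged sketch of the comparison and the commonly reported workaround (VOLNAME is a placeholder; back up anything under /var/lib/glusterd before editing it):

    # On every node, check for the extra property:
    sudo grep -n tier-enabled /var/lib/glusterd/vols/VOLNAME/info

    # Reported workaround: make the info files agree (remove the stray
    # "tier-enabled=0" line on the odd node, or add it on the others),
    # then restart glusterd so the cksum is recomputed:
    sudo systemctl restart glusterd
    gluster peer status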
2023 Apr 19
2
bash test ?
On 19/04/2023 08:04, wwp wrote: > Hello lejeczek, > > > On Wed, 19 Apr 2023 07:50:29 +0200 lejeczek via CentOS <centos at centos.org> wrote: > >> Hi guys. >> >> I cannot wrap my head around this: >> >> -> $ unset _Val; test -z ${_Val}; echo $? >> 0 >> -> $ unset _Val; test -n ${_Val}; echo $? >> 0 >> -> $
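The trap here is word splitting: with _Val unset, ${_Val} expands to nothing, so test receives a single argument ("-z" or "-n"), and a one-argument test is true whenever that string is non-empty. Quoting the expansion restores the intended two-argument form:

    unset _Val
    test -z ${_Val}; echo $?     # 0: runs `test -z`, a one-arg test on "-z"
    test -n ${_Val}; echo $?     # 0: runs `test -n`, a one-arg test on "-n"
    test -n "${_Val}"; echo $?   # 1: two args; the empty string is not non-empty
    test -z "${_Val}"; echo $?   # 0: two args; correctly reports the empty string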
2017 Sep 13
3
one brick one volume process dies?
On 13/09/17 06:21, Gaurav Yadav wrote: > Please provide the output of gluster volume info, gluster > volume status and gluster peer status. > > Apart from the above info, please provide glusterd logs and > cmd_history.log. > > Thanks > Gaurav > > On Tue, Sep 12, 2017 at 2:22 PM, lejeczek > <peljasz at yahoo.co.uk> wrote:
2017 Sep 13
0
one brick one volume process dies?
Please send me the logs as well, i.e. glusterd logs and cmd_history.log. On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz at yahoo.co.uk> wrote: > > > On 13/09/17 06:21, Gaurav Yadav wrote: > >> Please provide the output of gluster volume info, gluster volume status >> and gluster peer status. >> >> Apart from the above info, please provide glusterd logs,
2018 May 02
1
unable to remove ACLs
On 01/05/18 23:59, Vijay Bellur wrote: > > > On Tue, May 1, 2018 at 5:46 AM, lejeczek > <peljasz at yahoo.co.uk> wrote: > > hi guys > > I have a simple case of: > $ setfacl -b > not working! > I copy a folder outside of autofs mounted gluster vol, > to a regular fs and removing acl works as
2017 Sep 13
2
one brick one volume process dies?
Additionally, the brick log file of that same brick would be required. Please check whether the brick process went down or crashed. Doing a "volume start force" should resolve the issue. On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav <gyadav at redhat.com> wrote: > Please send me the logs as well, i.e. glusterd logs and cmd_history.log. > > > On Wed, Sep 13, 2017 at 1:45 PM, lejeczek
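A hedged sketch of those checks and the suggested recovery (VOLNAME and the brick log name are placeholders; brick logs sit under /var/log/glusterfs/bricks/ by default):

    gluster volume status VOLNAME                  # is the brick's PID listed?
    ps aux | grep glusterfsd                       # brick processes on this node
    less /var/log/glusterfs/bricks/BRICK-PATH.log  # look for a crash/shutdown
    gluster volume start VOLNAME force             # respawns missing brick processes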
2018 Oct 09
5
mount points @install time
hi everyone, is there a way to add custom mount points at installation time? And if there is, would you say /usr should/could go onto a separate partition? many thanks, L.
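For the kickstart route, a minimal sketch of custom mount points (sizes are placeholder assumptions):

    part /boot --fstype=xfs --size=1024
    part /     --fstype=xfs --size=20480
    part /usr  --fstype=xfs --size=10240
    part swap  --size=4096

On the /usr question: systemd-based distros expect /usr to be available at boot, so a separate /usr only works because the initramfs (dracut) mounts it early; many would keep /usr on the root filesystem to avoid that dependency.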
2017 Sep 04
2
heal info OK but statistics not working
1) One peer, out of four, got separated from the network, from the rest of the cluster.
2) That peer (while it was unavailable) got detached with the "gluster peer detach" command, which succeeded, so the cluster now comprises three peers.
3) The self-heal daemon (for some reason) does not start (even with an attempt to restart glusterd) on the peer which probed that fourth peer.
4) fourth
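A hedged sketch of checking and restarting the self-heal daemon, which the "statistics" subcommand depends on (VOLNAME is a placeholder):

    gluster volume status VOLNAME           # "Self-heal Daemon" should show online
    gluster volume start VOLNAME force      # respawns a missing shd without downtime
    gluster volume heal VOLNAME statistics  # retry once shd is back up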
2015 Jun 19
3
how do I conceptualize system & virtual users?
I guess this would be a common case; I am hoping for some final clarification. A few Linux boxes share an LDAP (multi-master) backend that PAM/SSSD uses to authenticate users, and these LDAPs are also used by Samba; users start @ uid 1000. The boxes are in the same DNS and Samba domains. Do I treat these users as system or virtual users from the postfix/dovecot perspective? If it can be a
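Either model can work; a hedged sketch of the system-user approach, where Dovecot leans on the same PAM/SSSD and NSS plumbing the boxes already use:

    passdb {
      driver = pam      # SSSD's PAM stack does the LDAP bind
    }
    userdb {
      driver = passwd   # NSS (sss) supplies uid/gid/home
    }
    mail_location = maildir:~/Maildir

The virtual-user alternative would point passdb/userdb at LDAP directly and pin ownership with mail_uid/mail_gid instead of per-user system accounts.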
2017 Aug 29
3
peer rejected but connected
hi fellas, same old same in log of the probing peer I see: ...
[2017-08-29 13:36:16.882196] I [MSGID: 106493] [glusterd-handler.c:3020:__glusterd_handle_probe_query] 0-glusterd: Responded to priv.xx.xx.priv.xx.xx.x, op_ret: 0, op_errno: 0, ret: 0
[2017-08-29 13:36:16.904961] I [MSGID: 106490] [glusterd-handler.c:2606:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid:
2015 Jun 19
1
how do I conceptualize system & virtual users?
On 19/06/15 15:13, Mauricio Tavares wrote: > On Jun 19, 2015 9:08 AM, "lejeczek" <peljasz at yahoo.co.uk> wrote: >> I guess this would be a common case, I am hoping for some final > clarification. >> a few Linux boxes share an ldap (multi-master) backend that PAM/SSSD uses to > authenticate users, and these LDAPs are also used by Samba, users start > @ uid
2017 Sep 28
1
one brick one volume process dies?
On 13/09/17 20:47, Ben Werthmann wrote: > These symptoms appear to be the same as I've recorded in > this post: > > http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html > > On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee > <atin.mukherjee83 at gmail.com> wrote: > > Additionally the
2018 May 01
2
unable to remove ACLs
hi guys I have a simple case of: $ setfacl -b not working! I copy a folder outside of the autofs-mounted gluster vol to a regular fs, and removing the ACL works as expected. Inside the mounted gluster vol I seem to be able to modify/remove ACLs for users, groups and masks, but that one simple, important thing does not work. It is also not a case of default ACLs being enforced from the parent, for I
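A hedged reproduction sketch; one thing worth ruling out is whether the volume is mounted with ACL support at all, since a glusterfs fuse mount needs the acl mount option (paths are placeholders):

    getfacl /mnt/glustervol/folder     # entries that refuse to clear
    setfacl -b /mnt/glustervol/folder  # strip all ACL entries
    getfacl /mnt/glustervol/folder     # re-check

    grep glustervol /proc/mounts       # is "acl" among the mount options?
    # e.g. mount -t glusterfs -o acl server:/VOLNAME /mnt/glustervol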
2017 Aug 30
0
peer rejected but connected
Could you please send me the "info" file, which is placed in the "/var/lib/glusterd/vols/<vol-name>" directory, from all the nodes, along with glusterd logs and command history. Thanks Gaurav On Tue, Aug 29, 2017 at 7:13 PM, lejeczek <peljasz at yahoo.co.uk> wrote: > hi fellas, > same old same > in log of the probing peer I see: > ... > 2017-08-29
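A hedged per-node one-liner to collect exactly what's asked for (VOLNAME is a placeholder; paths follow the standard glusterd layout):

    sudo tar czf gluster-debug-$(hostname).tar.gz \
        /var/lib/glusterd/vols/VOLNAME/info \
        /var/log/glusterfs/glusterd.log \
        /var/log/glusterfs/cmd_history.log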
2017 Jul 24
2
vol status detail - times out?
hi fellas, would you know what the problem could be with: "vol status detail" always timing out? After I did the above I had to restart glusterd on the peer on which the command was issued. I run 3.8.14. Everything seems to work OK. many thanks L.
2016 Nov 04
2
mailing list mail from @yahoo addresses
[extracted from "Re: [CentOS] dnf and failing epel" message chain.] > From: lejeczek peljasz at yahoo.co.uk > Date: Fri Nov 4 13:39:40 UTC 2016 >> Date: Friday, November 04, 2016 08:51:07 -0400 >> From: Jonathan Billings <billings at negate.org> >> >>> On Fri, Nov 04, 2016 at 12:30:02PM +0000, lejeczek wrote: >>> >>> ps. I