similar to: GSSAPI authentication behind HA servers

Displaying 20 results from an estimated 200 matches similar to: "GSSAPI authentication behind HA servers"

2006 Nov 14
1
Question regarding bounce messages on quota full. [solved]
On 11/14/06, Mustafa A. Hashmi <mahashmi at gmail.com> wrote: In the following section:
> auth default {
>   mechanisms = plain
>   socket listen {
>     master {
>       path = /var/run/dovecot-auth-master
>       mode = 0600
>       user = vmail # User running Dovecot LDA
>       #group = mail # Or alternatively mode 0660 + LDA user in this group
>
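The block quoted above is dovecot 1.0's auth master socket, which the deliver LDA uses for userdb lookups; completed with its closing braces it looks roughly like this (a sketch, not the poster's full config):

  auth default {
    mechanisms = plain
    # ... passdb/userdb sections omitted ...
    socket listen {
      master {
        # socket that deliver connects to for user lookups
        path = /var/run/dovecot-auth-master
        mode = 0600
        user = vmail   # user running the Dovecot LDA
        #group = mail  # or mode 0660 + LDA user in this group
      }
    }
  }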
2019 May 02
4
Aw: Re: very high traffic without any load
2006 Jun 27
3
quota not working on delivery
Hi, I am using the Debian dovecot packages and dovecot's deliver LDA called from postfix, all version 1.0.beta8. My problem is with quota support; I am using Maildirs. The relevant configuration is included at the bottom of this email. Quota support seems to work with IMAP. However, even when IMAP complains about an account being over quota, the deliver LDA still happily writes new mail to the (correct)
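With 1.0-era configs the usual catch is that deliver only enforces quota when the plugin is enabled for the LDA as well, not just for imap; a sketch of the relevant bits (the 10240 kB limit is an arbitrary example, not the poster's value):

  protocol lda {
    # deliver must load the quota plugin too, not only the imap process
    mail_plugins = quota
  }

  plugin {
    # maildir quota backend, storage limit in kilobytes
    quota = maildir:storage=10240
  }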
2006 Oct 20
1
Question regarding bounce messages on quota full.
Hi all, Using dovecot's LDA (debian backports package: 1.0rc2), users who have exceeded their quota see incoming messages get bounced. This works fine for us; however, the sender is sent a message as follows: -- Start bounce text ERROR This is the Postfix program at host foo.domain.com I'm sorry to have to inform you that your message could not be delivered to one
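If the goal is to soften that wording, Postfix (2.3 and later) can replace the built-in bounce text via bounce_template_file; a rough sketch, with the file name and wording as illustrative placeholders rather than the poster's setup:

  # main.cf
  bounce_template_file = /etc/postfix/bounce.cf

  # /etc/postfix/bounce.cf
  failure_template = <<EOF
  Charset: us-ascii
  From: MAILER-DAEMON (Mail Delivery System)
  Subject: Undelivered Mail Returned to Sender
  Postmaster-Subject: Postmaster Copy: Undelivered Mail

  This is the mail system at host $myhostname.

  Your message could not be delivered; the recipient's mailbox is over quota.
  EOF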
2011 Jan 17
2
ping_pong using o2cb and cman
I was testing ocfs2 on a 2-node cluster setup; ocfs2-tools version is 1.6.3, ocfs2 kernel version is 2.6.36. Using cman on 2 nodes:
node02 dw # ping_pong -rwm /data/test.dat 3
data increment = 2
14 locks/sec
node01 dw # ping_pong -rw /data/test.dat 3
data increment = 2
10 locks/sec
node02 dw # ping_pong -r /data/test.dat 3
1980 locks/sec
Using cman on 1 node:
node02 dw #
2017 Jul 19
2
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
[Adding gluster-users] On Wed, Jul 19, 2017 at 2:52 PM, yayo (j) <jaganz at gmail.com> wrote:
> Hi all,
> We have an oVirt cluster, hyperconverged with hosted engine, on 3 fully replicated nodes. This cluster has 2 gluster volumes:
> - data: volume for the Data (Master) Domain (for VMs)
> - engine: volume for the hosted_storage Domain (for the hosted engine)
2019 May 01
4
very high traffic without any load
Hi everyone, I am new to using tinc and am currently trying to set up a full IPv6 mesh between 4 servers of mine. Setting it up went smoothly and all of the tinc clients connect properly. Routing through the network works fine as well. There is, however, a large amount of management traffic, which I assume should not be the case. Here is a quick snapshot using "tinc -n netname top"
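For comparison, a 4-node full mesh normally needs only a tinc.conf plus one host file per node; a minimal sketch with placeholder names, netname and addresses (not the poster's configuration):

  # /etc/tinc/netname/tinc.conf on node1
  Name = node1
  Mode = router
  ConnectTo = node2
  ConnectTo = node3
  ConnectTo = node4

  # /etc/tinc/netname/hosts/node1 (copied to every peer)
  Address = node1.example.org
  Subnet = fd12:3456:789a:1::/64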
2018 Feb 08
1
How to fix an out-of-sync node?
I have a setup with 3 nodes running GlusterFS:
gluster volume create myBrick replica 3 node01:/mnt/data/myBrick node02:/mnt/data/myBrick node03:/mnt/data/myBrick
Unfortunately node1 seemed to stop syncing with the other nodes, but this was undetected for weeks! When I noticed it, I did a "service glusterd restart" on node1, hoping the three nodes would sync again. But this did not
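Restarting glusterd by itself does not resynchronise data; for a replica 3 volume like this the usual check-and-heal sequence is roughly the following (a sketch, assuming the volume name myBrick from the create command above):

  # what each brick still thinks needs healing
  gluster volume heal myBrick info

  # trigger a full self-heal across all bricks
  gluster volume heal myBrick full

  # entries listed here are in split-brain and need manual resolution
  gluster volume heal myBrick info split-brain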
2017 Jul 19
0
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
On 07/19/2017 08:02 PM, Sahina Bose wrote:
> [Adding gluster-users]
> On Wed, Jul 19, 2017 at 2:52 PM, yayo (j) <jaganz at gmail.com> wrote:
>> Hi all,
>> We have an oVirt cluster, hyperconverged with hosted engine, on 3 fully replicated nodes. This cluster has 2 gluster volumes:
>> - data: volume for
2017 Jul 20
2
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
Hi, thank you for the answer and sorry for the delay. 2017-07-19 16:55 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:
> 1. What does the glustershd.log say on all 3 nodes when you run the command? Does it complain about these files?
No, glustershd.log is clean; there is no extra log after the command on all 3 nodes.
> 2. Are these 12 files also present in the 3rd data brick?
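The same state the UI reports can be pulled straight from gluster (a sketch; "engine" is the volume name from the thread and the log path is the usual default location):

  # per-brick list of entries still pending heal
  gluster volume heal engine info

  # check whether the self-heal daemon logged (dis)connects to the bricks
  grep -iE 'connected|disconnected' /var/log/glusterfs/glustershd.log | tail -20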
2006 Apr 13
1
Prototyping for basejail distribution
Hi, I attach 2 files in this email: the first is a Makefile and the second is jail.conf. To demonstrate my idea I put together a "Pseudo Prototyping"; to test it you need to:
1 - Create the dir /usr/local/basejail
2 - Copy the Makefile to /usr/local/basejail
3 - Copy jail.conf to /etc
4 - The initial basejail is precompiled and distributed on CD1; to simulate the basejail it is necessary a
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
On 07/20/2017 02:20 PM, yayo (j) wrote:
> Hi, thank you for the answer and sorry for the delay.
> 2017-07-19 16:55 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:
>> 1. What does the glustershd.log say on all 3 nodes when you run the command? Does it complain about these files?
2019 May 03
3
Aw: Re: very high traffic without any load
2012 Jun 05
2
Anti DDOS rules
Hi, how can I tell shorewall to block any IP address if it generates X requests within Y seconds? I want to filter SYN, ICMP and HTTP GET floods etc. Is it possible to have a minimum local-level deterrence against DDoS attacks at the firewall level? -- Azfar Hashmi, Cloudways (azfar.hashmi@cloudways.com, www.cloudways.com)
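Shorewall's rules file has a RATE LIMIT column for per-source throttling, which covers part of this; a rough sketch (the numbers are arbitrary examples, not recommendations):

  # /etc/shorewall/rules
  #ACTION  SOURCE  DEST  PROTO  DPORT  SPORT  ORIGDEST  RATE
  # at most 10 new HTTP connections per source IP per minute, burst 20
  ACCEPT   net     $FW   tcp    80     -      -         s:10/min:20
  # echo-requests limited to 5/sec overall
  ACCEPT   net     $FW   icmp   8      -      -         5/sec:10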
2017 Jul 20
3
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
2017-07-20 11:34 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:
> Could you check if the self-heal daemon on all nodes is connected to the 3 bricks? You will need to check the glustershd.log for that.
> If it is not connected, try restarting the shd using `gluster volume start engine force`, then launch the heal command like you did earlier and see if heals
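Spelled out as commands, the suggestion above is roughly the following (a sketch, using the volume name from the thread):

  # is the Self-heal Daemon shown as online on every node?
  gluster volume status engine

  # 'start force' brings up any daemons/bricks that are down without restarting running ones
  gluster volume start engine force

  # trigger a heal, then re-check what is still pending
  gluster volume heal engine
  gluster volume heal engine info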
2008 Jun 07
56
Unable to create more than 1 VM
Hi, I have already set up a VM that can access the network using NAT mode. The problem I have is that I'd like to create another VM that also has access to the network. The problem I get is that when one VM is started, the other one will refuse to start. Actually it starts, but when I want to "xm console" into it I get the following error message: "xenconsole: Could not
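If the second config was cloned from the first, a shared name, disk image or MAC address will stop one of the guests from starting; each guest needs its own. A sketch of a second xm-style config (all values are placeholders, not taken from the thread):

  # /etc/xen/vm2.cfg -- second guest with its own name, disk and MAC
  name   = "vm2"
  memory = 512
  disk   = [ 'file:/var/lib/xen/images/vm2.img,xvda,w' ]
  vif    = [ 'mac=00:16:3e:00:00:02' ]

  # started with: xm create /etc/xen/vm2.cfg -c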
2014 Sep 16
2
1.2.7 and 1.2.8 fail to start container: libvirt_lxc[4904]: segfault at 0 ip ...error 4 in libc-2.17.so[
Hi all, CentOS 7, 3.10.0-123.6.3.el7.x86_64, libvirt 1.2.7 and libvirt 1.2.8 built from source with ./configure --prefix=/usr; make && make install. LXC with a direct network fails to start:
Sep 16 19:19:38 node01 kernel: device br502 entered promiscuous mode
Sep 16 19:19:39 node01 kernel: device br502 left promiscuous mode
Sep 16 19:19:39 node01 avahi-daemon[1532]: Withdrawing workstation
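For reference only (it does not by itself explain the libvirt_lxc segfault), a "direct" network interface in an LXC domain's XML is declared roughly like this; the device name and mode are placeholders:

  <!-- fragment of the <devices> section of the container's domain XML -->
  <interface type='direct'>
    <source dev='eth0' mode='bridge'/>
  </interface>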
2017 Jul 24
3
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
Hi, the UI was refreshed but the problem still remains ... No specific error; I only see these entries, but I've read that this kind of message is harmless:
2017-07-24 15:53:59,823+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler2) [b7590c4] START, GlusterServersListVDSCommand(HostName = node01.localdomain.local,
2013 Oct 11
2
Ruby and Rails Sophisticated CMS
Hi, I am looking for a sophisticated Ruby on Rails CMS. Refinery looks good to me, except that its content model is simple and page-based. I also looked into Locomotive; the problem with Locomotive is that it has no SQL support. Could someone recommend one, please? Thanks
2017 Jul 22
3
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
On 07/21/2017 11:41 PM, yayo (j) wrote:
> Hi, sorry for following up again, but checking the oVirt interface I found that oVirt reports the "engine" volume as an "arbiter" configuration and the "data" volume as a fully replicated volume. Check these screenshots:
This is probably some refresh bug in the UI, Sahina might be able to tell
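Independent of the oVirt UI, gluster itself reports whether a volume is plain replica 3 or arbiter; a quick check (the brick counts in the comments illustrate the two notations):

  gluster volume info engine | grep -E 'Type|Number of Bricks'
  # plain replica 3 volume:  Number of Bricks: 1 x 3 = 3
  # arbiter volume:          Number of Bricks: 1 x (2 + 1) = 3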