search for: 41ff

Displaying 14 results from an estimated 14 matches for "41ff".

2009 Apr 17
7
Dovecot broken with newer OpenSSL
...ilt with the newer release of OpenSSL. The IMAP client doesn't matter. For the time being I have gone back to .13 linked against older OpenSSL. In the logs I see messages like the following... dovecot: Apr 16 23:12:18 Info: imap-login: Disconnected (no auth attempts): rip=2001:470:b01e:3:216:41ff:fe17:6933, lip=2001:470:1d:8c::2, TLS handshaking: Disconnected -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean.
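A quick way to test the TLS handshake outside Dovecot is OpenSSL's own client (hostname and ports are placeholders, assuming IMAPS on 993 and STARTTLS on 143):

    # openssl s_client -connect imap.example.com:993
    # openssl s_client -connect imap.example.com:143 -starttls imap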
2013 Aug 18
4
[Bug 2016] SCTP Support
...---------- CC| |openssh at ml.breakpoint.cc --- Comment #5 from openssh at ml.breakpoint.cc --- The link-local address (fe80::) requires the interface which should be used to reach the address. On Linux you use the % operator for this: |bin/ssh -M fe80::111:41ff:fec1:1a81%br0 -p 65535 -z |Last login: Sun Aug 18 16:16:56 2013 from ip6-localhost where I use br0 to reach fe80::111:41ff:fec1:1a81. This is not SCTP-specific. -- You are receiving this mail because: You are watching the assignee of the bug.
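The %interface zone-id suffix is standard for link-local IPv6 addresses and is not specific to ssh; for example (interface name taken from the quoted command):

    # ping6 fe80::111:41ff:fec1:1a81%br0
    # ssh -p 65535 fe80::111:41ff:fec1:1a81%br0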
2017 Jul 07
2
I/O error for one folder within the mountpoint
...t/gluster-applicatif/brick <gfid:e3b5ef36-a635-4e0e-bd97-d204a1f8e7ed> <gfid:f8030467-b7a3-4744-a945-ff0b532e9401> <gfid:def47b0b-b77e-4f0e-a402-b83c0f2d354b> <gfid:46f76502-b1d5-43af-8c42-3d833e86eb44> <gfid:d27a71d2-6d53-413d-b88c-33edea202cc2> <gfid:7e7f02b2-3f2d-41ff-9cad-cd3b5a1e506a> Status: Connected Number of entries: 6 Brick ipvr8.xxx:/mnt/gluster-applicatif/brick <gfid:47ddf66f-a5e9-4490-8cd7-88e8b812cdbd> <gfid:8057d06e-5323-47ff-8168-d983c4a82475> <gfid:5b2ea4e4-ce84-4f07-bd66-5a0e17edb2b0> <gfid:baedf8a2-1a3f-4219-86a1-c19f51f0...
2017 Jul 07
0
I/O error for one folder within the mountpoint
...; <gfid:e3b5ef36-a635-4e0e-bd97-d204a1f8e7ed> > <gfid:f8030467-b7a3-4744-a945-ff0b532e9401> > <gfid:def47b0b-b77e-4f0e-a402-b83c0f2d354b> > <gfid:46f76502-b1d5-43af-8c42-3d833e86eb44> > <gfid:d27a71d2-6d53-413d-b88c-33edea202cc2> > <gfid:7e7f02b2-3f2d-41ff-9cad-cd3b5a1e506a> > Status: Connected > Number of entries: 6 > > Brick ipvr8.xxx:/mnt/gluster-applicatif/brick > <gfid:47ddf66f-a5e9-4490-8cd7-88e8b812cdbd> > <gfid:8057d06e-5323-47ff-8168-d983c4a82475> > <gfid:5b2ea4e4-ce84-4f07-bd66-5a0e17edb2b0> > &l...
2017 Jul 07
2
I/O error for one folder within the mountpoint
...a635-4e0e-bd97-d204a1f8e7ed> >> <gfid:f8030467-b7a3-4744-a945-ff0b532e9401> >> <gfid:def47b0b-b77e-4f0e-a402-b83c0f2d354b> >> <gfid:46f76502-b1d5-43af-8c42-3d833e86eb44> >> <gfid:d27a71d2-6d53-413d-b88c-33edea202cc2> >> <gfid:7e7f02b2-3f2d-41ff-9cad-cd3b5a1e506a> >> Status: Connected >> Number of entries: 6 >> >> Brick ipvr8.xxx:/mnt/gluster-applicatif/brick >> <gfid:47ddf66f-a5e9-4490-8cd7-88e8b812cdbd> >> <gfid:8057d06e-5323-47ff-8168-d983c4a82475> >> <gfid:5b2ea4e4-ce84-4f07-b...
2017 Jul 07
0
I/O error for one folder within the mountpoint
...1f8e7ed> >>> <gfid:f8030467-b7a3-4744-a945-ff0b532e9401> >>> <gfid:def47b0b-b77e-4f0e-a402-b83c0f2d354b> >>> <gfid:46f76502-b1d5-43af-8c42-3d833e86eb44> >>> <gfid:d27a71d2-6d53-413d-b88c-33edea202cc2> >>> <gfid:7e7f02b2-3f2d-41ff-9cad-cd3b5a1e506a> >>> Status: Connected >>> Number of entries: 6 >>> >>> Brick ipvr8.xxx:/mnt/gluster-applicatif/brick >>> <gfid:47ddf66f-a5e9-4490-8cd7-88e8b812cdbd> >>> <gfid:8057d06e-5323-47ff-8168-d983c4a82475> >>> &...
2007 Mar 06
0
Re: asterisk-users Digest, Vol 32, Issue 21
...1 Date: Tue, 6 Mar 2007 20:02:07 +0100 From: Olle E Johansson <oej@edvina.net> Subject: [asterisk-users] Building a new voicemail system... Testers needed! To: Asterisk Non-Commercial Discussion Users Mailing List - <asterisk-users@lists.digium.com> Message-ID: <A8C949D0-6208-41FF-85BD-E8BDDA6BFCF5@edvina.net> Content-Type: text/plain; charset=US-ASCII; delsp=yes; format=flowed Friends in the Asterisk community, One thing I avoided working with for a long time is the Asterisk voicemail code. One module in Asterisk I've constantly been naming as one of the worst pa...
2017 Jul 07
0
I/O error for one folder within the mountpoint
On 07/07/2017 01:23 PM, Florian Leleu wrote: > > Hello everyone, > > first time on the ML, so excuse me if I'm not following the rules well; > I'll improve if I get comments. > > We have one volume "applicatif" on three nodes (2 and 1 arbiter); each > of the following commands was run on node ipvr8.xxx: > > # gluster volume info applicatif > > Volume
2017 Jul 07
2
I/O error for one folder within the mountpoint
Hello everyone, first time on the ML, so excuse me if I'm not following the rules well; I'll improve if I get comments. We have one volume "applicatif" on three nodes (2 and 1 arbiter); each of the following commands was run on node ipvr8.xxx: # gluster volume info applicatif Volume Name: applicatif Type: Replicate Volume ID: ac222863-9210-4354-9636-2c822b332504 Status: Started
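The gfid lists quoted in the earlier replies in this thread look like the output of GlusterFS's self-heal status query; on the same node that would be something along the lines of:

    # gluster volume heal applicatif info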
2005 Jan 04
5
Shorewall and ChilliSpot
Has anybody on this list managed to get ChilliSpot and Shorewall to work together? I have managed to get it to work with the supplied firewall script, but if I wanted to do my firewall like that I would not be using Shorewall. At any rate, I am having all kinds of trouble translating the supplied rules into something that Shorewall would understand. If anybody has already done it I would love to see the
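For reference, translating raw firewall rules into Shorewall normally means entries in /etc/shorewall/rules; a minimal sketch for ChilliSpot's captive-portal login port (the zone names and UAM port 3990 are assumptions, not from the thread):

    #ACTION   SOURCE   DEST   PROTO   DEST PORT(S)
    ACCEPT    loc      $FW    tcp     3990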
2005 Apr 10
28
dumb, dumb question
I'm very new to Shorewall. My setup is an IP gateway (CentOS 4 + Shorewall) with 3 NIC cards. Shorewall works great on the firewall machine. BIND also works (local net machines get IPs fine). Under Firestarter, all works great. With Shorewall, the loc machines cannot route past the firewall. They can connect to the firewall, but not past it. Exactly what information should I post to get
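The usual cause of "loc can reach the firewall but nothing beyond it" is a missing loc-to-net policy plus IP forwarding; a minimal sketch, assuming the stock zone names, in /etc/shorewall/policy:

    #SOURCE   DEST   POLICY
    loc       net    ACCEPT
    net       all    DROP
    all       all    REJECT

together with IP_FORWARDING=On in shorewall.conf (and a masq entry if the loc network uses private addresses).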
2017 Oct 26
0
not healing one file
Hey Richard, Could you share the following information, please? 1. gluster volume info <volname> 2. getfattr output of that file from all the bricks: getfattr -d -e hex -m . <brickpath/filepath> 3. glustershd & glfsheal logs Regards, Karthik On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try the recently released health
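Step 2 instantiated with a hypothetical brick path and file (both placeholders):

    # getfattr -d -e hex -m . /bricks/brick1/home/user/file.txt

Here -d dumps the attribute values, -e hex prints them hex-encoded, and -m . matches every attribute name.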
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it diagnoses any issues in the setup. Currently you may have to run it on all three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster > Summit in Prague, will be checking this and respond next
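If this is the gluster-health-report project (an assumption; the post does not name the tool), the per-machine invocation would simply be:

    # gluster-health-report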
2017 Oct 26
2
not healing one file
...common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed metadata selfheal on 87f708cb-a9ce-4ac3-a938-8c56f419fb98. sources=0 [2] sinks=1 [2017-10-25 10:40:36.402123] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on 277a8cbc-df28-41ff-9ba9-d3453e3918e3. sources=0 [2] sinks=1 [2017-10-25 10:40:36.402999] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-home-replicate-0: performing metadata selfheal on 277a8cbc-df28-41ff-9ba9-d3453e3918e3 [2017-10-25 10:40:36.405250] I [MSGID: 108026] [afr-self-heal-c...
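On a brick, a gfid from these logs can usually be mapped back to a path through its hard link under the brick's .glusterfs directory (brick path assumed; this works for regular files):

    # find /bricks/brick1 -samefile /bricks/brick1/.glusterfs/27/7a/277a8cbc-df28-41ff-9ba9-d3453e3918e3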