Displaying 20 results from an estimated 30000 matches similar to: "SSL hangs? Try this"
2016 Jul 04
2
Build regressions/improvements in v4.7-rc6
On Mon, Jul 4, 2016 at 10:12 AM, Geert Uytterhoeven
<geert at linux-m68k.org> wrote:
> JFYI, when comparing v4.7-rc6[1] to v4.7-rc5[3], the summaries are:
> - build errors: +3/-2
+ /home/kisskb/slave/src/drivers/vhost/vhost.c: error: call to
'__compiletime_assert_844' declared with attribute error: BUILD_BUG_ON
failed: __alignof__ *vq->avail > VRING_AVAIL_ALIGN_SIZE:
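The failing check above is a compile-time assertion: the build is rejected
whenever the virtio available ring would need stricter alignment than the
2-byte VRING_AVAIL_ALIGN_SIZE. A minimal, self-contained C sketch of how a
sizeof-based check of this kind fires (this is not the kernel's BUILD_BUG_ON
implementation, and the struct only mimics the layout of struct vring_avail
for illustration):

#include <stdint.h>

#define VRING_AVAIL_ALIGN_SIZE 2          /* virtio spec: avail ring is 2-byte aligned */
#define MY_BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

struct vring_avail_like {                 /* illustrative stand-in for struct vring_avail */
    uint16_t flags;
    uint16_t idx;
    uint16_t ring[];
};

static inline void check_avail_alignment(void)
{
    /* The char array gets a negative size, and the build breaks, if the
     * ABI ever gives this struct alignment greater than 2 bytes. */
    MY_BUILD_BUG_ON(__alignof__(struct vring_avail_like) > VRING_AVAIL_ALIGN_SIZE);
}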
2006 Aug 04
1
RC5 Outlook POP3 problem
We have been using dovecot v1.0 alpha3 for almost 1 year for our mail
hosting. It was running great until more accounts were added to the hosting.
I have been trying to upgrade it to v1.0 rc5, and we have found that RC5
has a problem with Outlook (Express) POP accounts. We are getting this error
with POP3 in Outlook Express:
Your server has unexpectedly terminated the connection. Possible causes for
2008 May 30
3
v1.1.rc6 released won't compile
I got a compile error with rc6; rc5 works fine.
amd64:dovecot-1.1.rc6# uname -a
FreeBSD amd64.objtech.com 7.0-RELEASE-p1 FreeBSD 7.0-RELEASE-p1 #3: Fri
Apr 18 02:18:13 EDT 2008
./configure \
--prefix=/opt1/dovecot \
--localstatedir=/var \
--without-shadow \
--without-cyrus-sasl2 \
--without-pop3d \
--without-gssapi \
--disable-ipv6 \
--disable-debug \
--with-ioloop=kqueue \
--with-ssl=openssl
2020 Mar 23
2
[10.0.0 Release] Release Candidate 5 is here
On Sun, Mar 22, 2020 at 9:05 PM Andrew Kelley <andrew at ziglang.org> wrote:
>
> On 3/19/20 9:51 AM, Hans Wennborg via llvm-dev wrote:
> > Release Candidate 5 was just tagged as llvmorg-10.0.0-rc5 on the
> > release branch at 35627038123.
> >
> > Source code and docs are available at
> > https://prereleases.llvm.org/10.0.0/#rc5 and
> >
2012 Sep 17
2
'umount' of multi-device volume hangs until the device is physically unplugged
I'm currently playing around with native btrfs multi-device support in
systemd. There might be a few "hotplug issues" to solve; here is the
first one:
A mounted (otherwise unused) multi-device volume (USB multi-slot card
reader), hangs at:
$ umount /mnt
with (fedora) kernel
3.6.0-0.rc5.git0.1.fc18.x86_64
Any idea what to look for or what to try?
Thanks,
Kay
2008 Sep 19
2
Dropping Phone Calls
Hi All,
I'm currently having trouble with dropped phone calls. The following error
message is always in the log. This is a Grandstream GXP-2000 Firmware
1.1.6.16 . The Asterisk box is currently 1.4.22-rc5. The problem has been
occurring on other versions also.
[Sep 19 15:48:02] WARNING[13657]: chan_sip.c:1958 retrans_pkt: Maximum
retries exceeded on transmission 8acaea6dc4c6e9b5 at
2007 Apr 18
2
[RFC, PATCH 9/24] i386 Vmi smp support
SMP bootstrapping support. Just as in the physical platform model,
the BSP is responsible for initializing the AP state prior to execution.
The dependence on lots of processor state information is a design choice
of our implementation. Conceivably, this could be a hypercall that
awakens the same start-of-day state on APs as on the BSP.
It is likely the AP startup and the start-of-day model will
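The excerpt above contrasts two possible shapes for the AP wake-up interface.
A purely hypothetical C sketch of the two signatures (none of these names are
part of the VMI interface; they only illustrate the trade-off described
above):

#include <stdint.h>

/* Design A: the BSP hands the hypervisor an explicit description of the
 * AP's initial processor state, i.e. the "dependence on lots of processor
 * state information" mentioned above. All names and fields are invented
 * for illustration. */
struct ap_start_state {
    uint32_t cr0, cr3, cr4;       /* control registers */
    uint32_t eip, esp;            /* initial instruction and stack pointers */
    uint64_t gdt_base, idt_base;  /* descriptor table bases */
};
int hv_start_ap_with_state(int apic_id, const struct ap_start_state *state);

/* Design B: a single hypercall that simply awakens the AP with the same
 * start-of-day state the BSP itself received. */
int hv_wake_ap(int apic_id);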
2016 Aug 04
3
[PATCH v2 1/2] firstboot: rename systemd and sysvinit
Currently we install a systemd service named firstboot.service and a
SysV service named virt-sysprep-firstboot. On systems where systemd is
the init system and runs with SysV compatibility, the different
names make systemd treat them as different services, and thus it tries to
run the firstboot script runner twice.
Rename both the systemd service and the SysV one to guestfs-firstboot:
the new
2015 Jun 01
1
GlusterFS 3.7 - slow/poor performances
Dear all,
I have a crash-test cluster where I've tested the new version of GlusterFS (v3.7) before upgrading my HPC cluster in production.
But all my tests show very, very low performance.
For my benchmarks, as you can read below, I run some actions (untar, du, find, tar, rm) on the Linux kernel sources, dropping caches, each on distributed, replicated, distributed-replicated, single (single
2007 Apr 18
2
2.6.19-rc5-mm2: paravirt X86_PAE=y compile error
On Thu, 16 Nov 2006 00:16:26 +0100
Adrian Bunk <bunk@stusta.de> wrote:
> Paravirt breaks CONFIG_X86_PAE=y compilation:
>
> <-- snip -->
>
> ...
> CC init/main.o
> In file included from include2/asm/pgtable.h:245,
> from
> /home/bunk/linux/kernel-2.6/linux-2.6.19-rc5-mm2/include/linux/mm.h:40,
> from
>
2006 Aug 02
1
1.0 RC5 released
http://dovecot.org/releases/dovecot-1.0.rc5.tar.gz
http://dovecot.org/releases/dovecot-1.0.rc5.tar.gz.sig
Hopefully this is the final mbox bugfix. Nothing else changed since
rc4.
2008 May 04
1
v1.1.rc5 released
http://dovecot.org/releases/1.1/rc/dovecot-1.1.rc5.tar.gz
http://dovecot.org/releases/1.1/rc/dovecot-1.1.rc5.tar.gz.sig
I've finally gone through all my unread mails and fixed most of the
reported bugs. There are only a couple left that I can't reproduce.
Please report if you still have any problems with rc5, even if you
already have reported the same bug before.
v1.1.rc6 will
2019 Jun 19
1
dev_pagemap related cleanups v2
On Wed, Jun 19, 2019 at 09:46:23AM -0700, Dan Williams wrote:
> On Wed, Jun 19, 2019 at 9:37 AM Jason Gunthorpe <jgg at ziepe.ca> wrote:
> >
> > On Wed, Jun 19, 2019 at 11:40:32AM +0200, Christoph Hellwig wrote:
> > > On Tue, Jun 18, 2019 at 12:47:10PM -0700, Dan Williams wrote:
> > > > > Git tree:
> > > > >
> > > > >
2019 Jun 19
1
dev_pagemap related cleanups v2
On Wed, Jun 19, 2019 at 11:40:32AM +0200, Christoph Hellwig wrote:
> On Tue, Jun 18, 2019 at 12:47:10PM -0700, Dan Williams wrote:
> > > Git tree:
> > >
> > > git://git.infradead.org/users/hch/misc.git hmm-devmem-cleanup.2
> > >
> > > Gitweb:
> > >
> > >
2015 Jun 02
2
GlusterFS 3.7 - slow/poor performances
Hi Geoffrey,
Since you say it happens on all types of volumes,
let's do the following:
1) Create a dist-repl volume
2) Set the options etc you need.
3) enable gluster volume profile using "gluster volume profile <volname>
start"
4) run the workload
5) give output of "gluster volume profile <volname> info"
Repeat the steps above on new and old