Displaying 20 results from an estimated 1100 matches similar to: "Problem about dovecot Panic"
2012 Oct 26
1
Dovecot stops to work - anvil problem
Hi all,
we have a problem with anvil: it seems that under high load
Dovecot stops working. Sometimes a dovecot
reload is sufficient, but sometimes we have to restart it.
These are the lines related to anvil in the dovecot.log:
[root@secchia ~]# grep anvil /var/log/dovecot.log | more
Oct 26 11:13:55 anvil: Error: net_accept() failed: Too many open files
Oct 26
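The "net_accept() failed: Too many open files" error typically means the anvil process hit its file-descriptor limit. A hedged sketch of the usual Dovecot 2.x remedy (the value below is purely illustrative, not a recommendation for this site):

```
# /etc/dovecot/dovecot.conf -- illustrative value only
service anvil {
  # anvil keeps one connection per login/imap process, so its
  # client_limit must exceed the sum of those process limits
  client_limit = 5000
}
```

The operating-system limit (ulimit -n, or the init system's equivalent) may also need raising, since Dovecot cannot exceed what the OS grants the process.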
2007 Sep 16
3
PLOGI errors
Hello,
today we made some tests with failed drives on a zpool.
(SNV60, 2xHBA, 4xJBOD connected through 2 Brocade 2800)
In the log we found hundreds of the following errors:
Sep 16 12:04:23 svrt12 fp: [ID 517869 kern.info] NOTICE: fp(0): PLOGI to 11dca failed state=Timeout, reason=Hardware Error
Sep 16 12:04:23 svrt12 fctl: [ID 517869 kern.warning] WARNING: fp(0)::PLOGI to 11dca failed. state=c
2005 Aug 31
8
problem with OCFS label
I used this command to create volume label on OCFS:
mkfs.ocfs -F -b 128 -L data13 -m /oradata/data13 -u oracle -g dba -p 0775 /dev/emcpowerp1
emcpowerp is composed of /dev/sdad and /dev/sdk. It seems the above command created the same labels for /dev/emcpowerp1, /dev/sdad1 and /dev/sdk1.
But when I tried to mount this ocfs filesystem by label, it gave me the following error.
# mount -L data13
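Mount-by-label is ambiguous here because /dev/sdad1 and /dev/sdk1 are just extra paths to the same storage as /dev/emcpowerp1, so all three carry the label data13. A sketch of the usual workaround, reusing the paths from the post:

```
# Mounting the PowerPath pseudo-device directly avoids the
# duplicate-label ambiguity that "mount -L data13" runs into:
mount -t ocfs /dev/emcpowerp1 /oradata/data13
```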
2005 Sep 22
1
LPFC support in Xen ?
Hi,
Afraid I don't have a very new system to try out Xen. It's an ia32 PIII
machine with 1GB memory (that's good, I believe).
When I try to build the initrd image for my Xen kernel, I get error
messages complaining about missing modules, so am trying to compile them on
my own.
I could find support for the remaining three, but am left with lpfcdd.ko.
Couldn't find an appropriate
2020 Oct 17
10
[RFC] treewide: cleanup unreachable breaks
From: Tom Rix <trix at redhat.com>
This is an upcoming change to clean up a new warning treewide.
I am wondering whether the change should be one mega patch (see below),
one normal patch per file (about 100 patches), or somewhere in between
by collecting early acks.
clang has a number of useful, new warnings see
2007 Jul 13
28
ZFS and powerpath
How much fun can you have with a simple thing like powerpath?
Here's the story: I have a (remote) system with access to a couple
of EMC LUNs. Originally, I set it up with mpxio and created a simple
zpool containing the two LUNs.
It's now been reconfigured to use powerpath instead of mpxio.
My problem is that I can't import the pool. I get:
pool: ######
id:
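When the device paths under a pool change (mpxio to powerpath), a common workaround is to point `zpool import` at a directory containing only the new device nodes. A hedged sketch with hypothetical names (the pool name and emcpower path stand in for this post's redacted values):

```
# Collect the PowerPath pseudo-devices in one directory and have
# zpool scan only that, instead of the full /dev/dsk:
mkdir /tmp/powerdevs
ln -s /dev/dsk/emcpower0a /tmp/powerdevs/
zpool import -d /tmp/powerdevs mypool
```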
2010 Mar 26
0
CEEA-2010:0156 CentOS 5 i386 kmod-lpfc Update
CentOS Errata and Enhancement Advisory 2010:0156
Upstream details at : http://rhn.redhat.com/errata/RHEA-2010-0156.html
The following updated files have been uploaded and are currently
syncing to the mirrors: ( md5sum Filename )
i386:
b26004db2b4479e8229913b2783139cf kmod-lpfc-PAE-rhel5u4-8.2.0.63.1p-1.4.el5_4.i686.rpm
6c7c0ab2ea8f812f6ec0c26a746a04a5
2010 Mar 26
0
CEEA-2010:0156 CentOS 5 x86_64 kmod-lpfc Update
CentOS Errata and Enhancement Advisory 2010:0156
Upstream details at : http://rhn.redhat.com/errata/RHEA-2010-0156.html
The following updated files have been uploaded and are currently
syncing to the mirrors: ( md5sum Filename )
x86_64:
787036f097504e710495f522bfe48661 kmod-lpfc-rhel5u4-8.2.0.63.1p-1.4.el5_4.x86_64.rpm
11899040b4af01ec087e0fa82d580c87
2019 Feb 22
0
CentOS-announce Digest, Vol 168, Issue 5
Send CentOS-announce mailing list submissions to
centos-announce at centos.org
To subscribe or unsubscribe via the World Wide Web, visit
https://lists.centos.org/mailman/listinfo/centos-announce
or, via email, send a message with subject or body 'help' to
centos-announce-request at centos.org
You can reach the person managing the list at
centos-announce-owner at centos.org
When
2014 Oct 31
0
CEEA-2014:1760 CentOS 7 lpfc Enhancement Update
CentOS Errata and Enhancement Advisory 2014:1760
Upstream details at : https://rhn.redhat.com/errata/RHEA-2014-1760.html
The following updated files have been uploaded and are currently
syncing to the mirrors: ( sha256sum Filename )
x86_64:
34076a399db40f55bd0d2b86619e9799eab69d3d9e7662773914af9c2166b518 kmod-lpfc-10.2.8021.0-1.el7_0.x86_64.rpm
Source:
2016 Sep 29
0
CEEA-2016:1975 CentOS 7 lpfc Enhancement Update
CentOS Errata and Enhancement Advisory 2016:1975
Upstream details at : https://rhn.redhat.com/errata/RHEA-2016-1975.html
The following updated files have been uploaded and are currently
syncing to the mirrors: ( sha256sum Filename )
x86_64:
5496783be83e9686b32017d2cbe2deb59bd8f35917adf89c101f51aaa051b24b kmod-lpfc-11.1.0.2-1.el7_2.x86_64.rpm
Source:
2019 Feb 21
0
CEBA-2019:0389 CentOS 7 kmod-redhat-lpfc BugFix Update
CentOS Errata and Bugfix Advisory 2019:0389
Upstream details at : https://access.redhat.com/errata/RHBA-2019:0389
The following updated files have been uploaded and are currently
syncing to the mirrors: ( sha256sum Filename )
x86_64:
9c633b01f1493d863026cd57f41094520d215aa420429701f4a4f3aafb949832 kmod-redhat-lpfc-12.0.0.5_dup7.6-1.el7_6.x86_64.rpm
Source:
2006 Jul 26
9
zfs questions from Sun customer
Please reply to david.curtis at sun.com
******** Background / configuration **************
zpool will not create a storage pool on fibre channel storage. I'm
attached to an IBM SVC using the IBMsdd driver. I have no problem using
SVM metadevices and UFS on these devices.
List steps to reproduce the problem(if applicable):
Build Solaris 10 Update 2 server
Attach to an external
2012 Oct 28
3
[PATCH 00/16] treewide: Convert dev_printk(KERN_<LEVEL> to dev_<level>(
dev_<level> calls create smaller objects than dev_printk(KERN_<LEVEL> calls.
Convert the non-debug calls to this form.
Joe Perches (16):
tile: Convert dev_printk(KERN_<LEVEL> to dev_<level>(
ata: Convert dev_printk(KERN_<LEVEL> to dev_<level>(
drivers: base: Convert dev_printk(KERN_<LEVEL> to dev_<level>(
block: Convert dev_printk(KERN_<LEVEL> to
2007 Aug 08
0
pcifront (CONFIG_XEN_PCIDEV_FRONTEND=m) support in RHEL 4.5 x86 Dom U
Dear All,
The production server supports Intel Virtualization Technology. Processor is
an Intel Xeon 1.86 GHz Quad Core. 8 GB DDR2 memory.
There is also an Emulex LightPulse Fiber Channel HBA adapter.
The host operating system (Dom 0) is RHEL 5 x86 with Xen Virtualization
technology. Dom 0 kernel is 2.6.18-8.el5xen. I have recompiled the Dom 0
kernel so that pciback
2016 Sep 30
0
CentOS-announce Digest, Vol 139, Issue 8
Send CentOS-announce mailing list submissions to
centos-announce at centos.org
To subscribe or unsubscribe via the World Wide Web, visit
https://lists.centos.org/mailman/listinfo/centos-announce
or, via email, send a message with subject or body 'help' to
centos-announce-request at centos.org
You can reach the person managing the list at
centos-announce-owner at centos.org
When
2006 Aug 16
2
RedHat Node Panic Weekly
See earlier post - May 10th "Node Panic"
Can anyone tell me what might be happening here? I have a 3 node
cluster running under RH AS 4 (2.6.9-34.ELsmp) with ocfs2 v.
1.2.1. I've upgraded to 1.2.1 as suggested in the previous post,
but one or more of my nodes continues to panic weekly:
Aug 16 15:29:02 linux96 kernel: (6670,2):ocfs2_extend_file:787 ERROR: bug expression:
2020 Oct 17
0
[RFC] treewide: cleanup unreachable breaks
On Sat, 2020-10-17 at 09:09 -0700, trix at redhat.com wrote:
> From: Tom Rix <trix at redhat.com>
>
> This is a upcoming change to clean up a new warning treewide.
> I am wondering if the change could be one mega patch (see below) or
> normal patch per file about 100 patches or somewhere half way by collecting
> early acks.
>
> clang has a number of useful, new