Displaying 20 results from an estimated 1000 matches similar to: "kernel: nfs: RPC call returned error 88"
2007 Mar 22
5
netapp/maildir/dovecot performance
We are seeing some poor performance recently that is focused around
users with large mailboxes (100,000 message /INBOX, 80,000 message
subfolders, etc).
The performance problem manifests as very high system% utilization -
basically iowait for NFS.
There are two imap servers with plenty of horsepower/memory/etc. They
are connected to a 3050c cluster via gig-e. Here are the mount
options:
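The mount options themselves were cut off above; for reference, a typical /etc/fstab entry for a maildir store served from a NetApp filer might look like the following. The hostname, export path, and transfer sizes are assumptions for illustration, not the poster's actual settings:

```
# Hypothetical /etc/fstab entry; "filer" and /vol/mail are placeholders.
# hard,tcp are the usual safe defaults for a mail store; rsize/wsize
# should be tuned to the network rather than copied blindly.
filer:/vol/mail  /var/mail  nfs  rw,hard,intr,tcp,nfsvers=3,rsize=32768,wsize=32768,noatime  0  0
```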
2010 Sep 14
5
IOwaits over NFS
Hello.
We have a number of Xen 3.4.2 boxes which have constant iowait at around
10% with spikes up to 100% when accessing data over NFS. We have been
unable to nail down the issue. Any advice?
System info:
release : 2.6.18-194.3.1.el5xen
version : #1 SMP Thu May 13 13:49:53 EDT 2010
machine : x86_64
nr_cpus : 16
nr_nodes
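When chasing iowait like this, it helps to separate NFS wait from local disk wait. A minimal sketch, assuming a Linux dom0 with /proc mounted, that reports the cumulative iowait share since boot; iostat and nfsstat then give the per-device and per-RPC breakdown:

```shell
#!/bin/sh
# Report the cumulative iowait share from /proc/stat.
# First line layout: cpu user nice system idle iowait irq softirq ...
awk '/^cpu /{
    total = 0
    for (i = 2; i <= NF; i++) total += $i
    printf "iowait: %.1f%%\n", 100 * $6 / total
}' /proc/stat
# For the per-mount view, compare with:
#   iostat -x 5        (local device utilisation)
#   nfsstat -c         (client RPC counts and retransmits)
```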
2016 Jul 14
2
Weird behaviour opening pdf files (and maybe others)
Hi,
I had to review a samba setup recently where people experienced strange
things.
Basically they moved from a Solaris environment on physical machines
(locally hosted) running an old version of Samba (2.x or even 1.x) to a Red
Hat virtualized environment running 3.6 (remotely hosted).
The link between clients and server is really good (2x100Mb/s fiber) and
browsing shares and opening office
2006 Mar 24
1
Oracle control file on OCFS2 without datavolume, nointr
We had someone from Oracle support move one of the database control files on
an OCFS2 volume that's not mounted with datavolume,nointr (he moved it
to /opt/oracle). At that time I didn't realize the problem.
We'll move the control file to a volume that's mounted with datavolume,nointr
options.
The question is: how bad is it?
We are experiencing some serious problems,
2009 Oct 27
1
/etc/rc.local and /etc/fstab
Upon system boot, is it ok to mount OCFS2 mounts from /etc/rc.local
rather than /etc/fstab ?
Are there any downsides to using rc.local that you are aware of?
Example /etc/rc.local script:
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
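The usual alternative to rc.local is to keep the mount in /etc/fstab with the _netdev option, which makes the netfs init script defer the mount until networking (and therefore the o2cb cluster stack) is available. A sketch, with a hypothetical device and mount point:

```
# Hypothetical /etc/fstab line; _netdev defers the mount until
# the network is up, so the o2cb cluster stack can start first.
/dev/sdb1  /u01/ocfs2  ocfs2  _netdev,defaults  0  0
```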
2013 Jun 04
1
Unable to set the o2cb heartbeat to global
Hi,
I have set the heartbeat mode to global, but when I do a mkfs and mount and
then check the mount, it says I am in local mode. Even
/sys/kernel/config/cluster/ocfs2/heartbeat/mode says local. I am running
CentOS with 3.x kernel, with ocfs2-tools-1.6.4-1118.
mkfs -t ocfs2 -b 4K -C 1M -N 16 --cluster-stack=o2cb /dev/sdb
mount -t ocfs2 /dev/sdb /mnt -o
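For what it's worth, global heartbeat is only configurable with ocfs2-tools 1.8 and later; the 1.6.4 tools mentioned above predate it. Under the 1.8 tools the setup would look roughly like this; the cluster name, node, IP, and device are placeholders, not values from the message:

```
# Sketch assuming ocfs2-tools 1.8+; "mycluster" and /dev/sdb are placeholders.
o2cb add-cluster mycluster
o2cb add-node mycluster node1 --ip 192.168.1.1
o2cb add-heartbeat mycluster /dev/sdb
o2cb heartbeat-mode mycluster global
mkfs.ocfs2 --cluster-stack=o2cb --cluster-name=mycluster --global-heartbeat /dev/sdb
```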
2006 Jun 26
0
[klibc 14/43] Remove in-kernel nfsroot code
The in-kernel nfsroot code is obsoleted by kinit. Remove it; it
causes conflicts.
Signed-off-by: H. Peter Anvin <hpa at zytor.com>
---
commit 161e1dc16ec1129b30b634a2a8dcbbd1937800c5
tree c30da837d746fe65d8a13ccf6f27bd381948edb4
parent 018604e070e143657abcf0cb256a1e2dda205d97
author H. Peter Anvin <hpa at zytor.com> Sat, 20 May 2006 16:24:05 -0700
committer H. Peter Anvin <hpa at
2010 Dec 19
1
receive error while mounting linux partition using ocfs2
hi,
mount -t ocfs2 -o datavolume,nointr -L "oracrsfile" /u02
when I mount the Linux partition using the above command I receive the following error
mount.ocfs2: Invalid argument while mounting /dev/sdd1 on /u02. Check
'dmesg' for more information on this error.
thanks in advance
zeeshan
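On mainline kernels a common cause of "Invalid argument" here is that the datavolume and nointr options are recognized only by the OCFS2 builds shipped in Oracle's kernels, or that the o2cb stack is not online; dmesg records the real reason. A few hedged checks, using the device and mount point from the message:

```
# dmesg records why mount.ocfs2 returned EINVAL
dmesg | tail -n 20
# is the cluster stack up?
service o2cb status
# if this kernel's OCFS2 lacks datavolume/nointr, try without them:
# mount -t ocfs2 -L "oracrsfile" /u02
```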
2006 Aug 21
4
RC7: its issues or mine?
Background: I'm new to dovecot (although with many years Washington IMAP
behind me). We're considering migrating from Washington IMAP to dovecot
on the main service here, and have just started trying dovecot, using RC7.
Washington IMAP has the usual(-ish) "/var/spool/mail" shared area for the
INBOX (trad. UNIX "From " format); a user's folders default to being
2006 Oct 18
2
Corrupted index cache file dovecot.index.cache: invalid record size
Hi,
Our dovecot setup consists of two hosts running dovecot-1.0.beta9 with
Maildir/indices stored on NFS (noac,actimeo=0 used).
I am seeing these messages at times - but no real problems on the client
side. Is this something to worry about?
dovecot: Oct 17 10:33:31 Error: IMAP(user): Corrupted index cache file
mailstore/user/Maildir/.mail.incoming/dovecot.index.cache: invalid record
size
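The cache file is purely a cache: Dovecot normally discards and rebuilds a corrupted dovecot.index.cache on its own, so occasional messages like this are mostly harmless. If one file keeps recurring in the logs it can also be removed by hand (path taken from the error line above):

```
# Safe to delete; Dovecot regenerates the cache on the next access.
rm mailstore/user/Maildir/.mail.incoming/dovecot.index.cache
```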
2007 Mar 16
2
re: o2hb_do_disk_heartbeat:963 ERROR: Device "sdb1" another node is heartbeating in our slot!
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Folks,
I'm trying to wrap my head around something that happened in our environment.
Basically, we noticed the error in /var/log/messages with no other errors.
"Mar 16 13:38:02 dbo3 kernel: (3712,3):o2hb_do_disk_heartbeat:963 ERROR: Device "sdb1": another node is
heartbeating in our slot!"
Usually there are a
2007 Sep 24
0
Oracle Clusterware fails while running root.sh
I am installing Oracle Clusterware 10.2.0.1 on Linux 4.5 ES on OCFS2 and ASM. My OCR and OSS files are on OCFS2. I got the following error while running it. Please let me know what went wrong.
--------------------------------------------------------------------------------------------------------------------
[root@linux2 crs]# ./root.sh
WARNING: directory '/u01/app/oracle/product' is not
2014 Nov 08
2
Master Works, Slave Does Not
On Nov 8, 2014, at 1:52 PM, Steve Read <sd_read at hotmail.com> wrote:
> I have made changes but it is still the same. That is when the server gets the lowbat/noAC signal it does shut down as expected but the slave does not.
>
> Perhaps it makes sense for me to list the present settings:
>
> On the Master:
>
> nut.conf
> MODE=netserver
>
>
2007 Aug 01
2
Mount options and NFS: just checking...
Greetings -
I'm now in the last couple of weeks before going live with Dovecot
(v1.0.3) on our revamped IMAP service. I'd like to double-check
about the best mount options to use; could someone advise, please?
I have three separate directory trees for the message store, the
control files and the index files. These are arranged as follows:
Message Store
Mounted over NFS from
2005 May 21
5
copying large files over NFS locks up machine on -testing from Thursday
I've locked up my dom0 a couple of times this morning copying a 3GB
file from local disk to an NFS mount (neither xend nor guests running).
I don't encounter this problem on the stock CentOS 4 kernel. The
machine is a PowerEdge 2850 with 2 e1000 cards - the one in use is
connected to a PowerConnect 2216 10/100 switch and has negotiated
100Mbit. I'll check if the stock
2009 May 07
1
df & du - that old chestnut
Afternoon,
We have an ocfs2 release 1.4 filesystem shared between two nodes
(RHEL5). The filesystem in question is used exclusively for Oracle RMAN
backups.
A df -h shows the following:
[root at imsthdb07 ~]# df -h /data/orabackup
Filesystem Size Used Avail
Use% Mounted on
/dev/mapper/eva_mpio_myserver07_08_oracle_bkup0 250G
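When df and du disagree on a backup volume like this, the usual suspect is space still held by deleted-but-open files (for example an RMAN channel keeping a handle on a removed backup piece): df counts blocks allocated at the filesystem level, while du only sums the files it can reach in the directory tree. A quick check, using the mount point from the message:

```
df -h /data/orabackup          # blocks allocated in the filesystem
du -sh /data/orabackup         # files reachable from the directory tree
lsof +L1 /data/orabackup       # open files whose link count is zero
```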
2009 Dec 18
1
Maildir on NFS - attribute caching question
Hi Timo,
We've been running Dovecot with Maildir on NFS for quite a while - since
back in the 1.0 days I believe. I'm somewhat new here. Anyway...
The Wiki article on NFS states that 1.1 and newer will flush attribute
caches if necessary with mail_nfs_storage=yes. We're running 1.2.8 with
that set, as well as mail_nfs_index=yes, mmap_disable=yes and
fsync_disable=no. We have a pool
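For reference, the settings named above as they would appear in a v1.2 dovecot.conf; the commented dotlock line is an extra assumption often suggested for NFS setups, not something the poster mentions:

```
# Settings from the message above, dovecot.conf (v1.2) syntax:
mmap_disable = yes
mail_nfs_storage = yes
mail_nfs_index = yes
fsync_disable = no
# assumption, commonly suggested on NFS:
#dotlock_use_excl = yes
```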
2014 Feb 19
1
Problems with Windows on KVM machine
Hello. We've started a virtualisation project and got stuck at one point.
Currently we are using the following:
Intel 2312WPQJR as a node
Intel R2312GL4GS as a storage with Intel Infiniband 2 ports controller
Infiniband Mellanox SwitchX IS5023 for commutation.
The nodes run CentOS 6.5 with built-in Infiniband package (Linux v0002
2.6.32-431.el6.x86_64), the storage - CentOS 6.4 also built-in
2014 Nov 09
0
Master Works, Slave Does Not
I would like to verify my understanding keeping in mind I have a one master and one slave computer.
When the master gets the low batt/noAC signal it initiates a broadcast packet(s) on port 3493.
1) Is this correct?
Then the slave computer must have this port open and it listens on this port for the shut-down command.
2) Is this correct?
On the slave if I run the following:
steve at MyDesktop:~$
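One detail worth checking against the questions above: NUT slaves are not woken by a broadcast. Each slave's upsmon polls upsd on the master over TCP 3493, and shuts down when the forced-shutdown (FSD) flag appears there. A hypothetical upsmon.conf line for the slave; the UPS name, hostname, and credentials are invented for illustration:

```
# Hypothetical /etc/nut/upsmon.conf on the slave.
# MONITOR <ups>@<host> <powervalue> <user> <password> <type>
MONITOR myups@masterhost 1 upsmon_slave secretpass slave
```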
2006 Oct 16
1
indexes?
Picture: A set of very similar UN*X IMAP servers all NFS-mounting their
INBOX area (traditional Unix format) from a common "/var/spool/mail"
area; activity for any given user ought to be within one box although this
cannot be 100% guaranteed. There is the risk of multiple simultaneous
access (e.g. simultaneous LDA/delivers; simultaneous LDA/deliver and
user-driven IMAP update; etc.).