Displaying 6 results from an estimated 6 matches for "bs3".
2012 Jun 19
1
"Too many levels of symbolic links" with glusterfs automounting
I set up a Gluster 3.3 volume for another sysadmin and he has added it
to his cluster via automount. It seems to work initially, but after some
time (days) he now regularly sees this warning:
"Too many levels of symbolic links"
df: `/share/gl': Too many levels of symbolic links
when he tries to traverse the mounted filesystems.
I've been using gluster with static mounts
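For reference, an autofs setup for a GlusterFS volume usually looks something like the sketch below; the map paths, volume name, and server hostname here are illustrative placeholders, not taken from the poster's configuration:

```
# /etc/auto.master -- hand the /share directory to the auto.share map
/share  /etc/auto.share

# /etc/auto.share -- mount volume "gl" with the native client;
# "server1" is a placeholder for one of the Gluster servers
gl  -fstype=glusterfs  server1:/gl
```

One plausible cause of the ELOOP ("Too many levels of symbolic links") symptom with automounted FUSE filesystems is the glusterfs client process dying or the mount expiring while the autofs mountpoint is still referenced, leaving a dangling mount behind; checking whether the glusterfs process is still alive when the error appears would help narrow that down.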
2012 Jul 26
2
kernel parameters for improving gluster writes on millions of small writes (long)
...deal with it at the OS level.
The gluster volume is running over IPoIB on QDR IB and looks like this:
Volume Name: gl
Type: Distribute
Volume ID: 21f480f7-fc5a-4fd8-a084-3964634a9332
Status: Started
Number of Bricks: 8
Transport-type: tcp,rdma
Bricks:
Brick1: bs2:/raid1
Brick2: bs2:/raid2
Brick3: bs3:/raid1
Brick4: bs3:/raid2
Brick5: bs4:/raid1
Brick6: bs4:/raid2
Brick7: bs1:/raid1
Brick8: bs1:/raid2
Options Reconfigured:
performance.write-behind-window-size: 1024MB
performance.flush-behind: on
performance.cache-size: 268435456
nfs.disable: on
performance.io-cache: on
performance.quick-read: on...
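On the OS side, the usual knobs for absorbing large bursts of small writes are the VM dirty-page writeback thresholds. A sketch of the relevant sysctls follows; the values are illustrative starting points, not the poster's settings, and should be tuned to the machine's RAM:

```
# /etc/sysctl.conf -- let the page cache absorb more dirty data
# before writeback kicks in (example values only)
vm.dirty_background_ratio = 10    # start background writeback at 10% of RAM
vm.dirty_ratio = 40               # throttle writers only past 40% of RAM
vm.dirty_expire_centisecs = 3000  # dirty pages may age 30s before writeback
```

Apply with `sysctl -p`; raising these lets many small writes coalesce in the page cache before hitting the bricks, at the cost of more data at risk on a crash.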
2012 Jun 20
2
How Fatal? "Server and Client lk-version numbers are not same, reopening the fds"
...s to
all the clients and servers. The gluster volume was created using the
IPoIB network and numbers:
Volume Name: gl
Type: Distribute
Volume ID: 21f480f7-fc5a-4fd8-a084-3964634a9332
Status: Started
Number of Bricks: 8
Transport-type: tcp,rdma
Bricks:
Brick1: bs2:/raid1
Brick2: bs2:/raid2
Brick3: bs3:/raid1
Brick4: bs3:/raid2
Brick5: bs4:/raid1
Brick6: bs4:/raid2
Brick7: bs1:/raid1
Brick8: bs1:/raid2
Options Reconfigured:
nfs.disable: on
performance.io-cache: on
performance.quick-read: on
performance.io-thread-count: 64
auth.allow: 10.2.*.*,10.1.*.*
using 3.3 servers on Scientific Linux 6.2 and 3.3.0qa4...
2012 Aug 23
1
Stale NFS file handle
Hi, I'm a bit curious about error messages of the type "remote operation
failed: Stale NFS file handle". All clients using the file system use the
Gluster Native Client, so why should a stale NFS file handle be reported?
Regards,
/jon
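One likely explanation (an inference, not stated in the thread): the "NFS" wording comes from the C library's message for errno ESTALE, which Gluster returns for any stale file handle regardless of the access protocol, so native-client users see it too. A small check of that mapping, with Python used purely for illustration:

```python
import errno
import os

# ESTALE is a generic errno (116 on Linux); its libc message has
# historically mentioned "NFS" even though the error itself is not
# NFS-specific -- older glibc prints "Stale NFS file handle",
# newer glibc just "Stale file handle".
print(errno.ESTALE)
print(os.strerror(errno.ESTALE))
```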
2013 Dec 10
4
Structure needs cleaning on some files
Hi All,
When reading some files we get this error:
md5sum: /path/to/file.xml: Structure needs cleaning
in /var/log/glusterfs/mnt-sharedfs.log we see these errors:
[2013-12-10 08:07:32.256910] W
[client-rpc-fops.c:526:client3_3_stat_cbk] 1-testvolume-client-0: remote
operation failed: No such file or directory
[2013-12-10 08:07:32.257436] W
[client-rpc-fops.c:526:client3_3_stat_cbk]
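For context (my reading, not from the thread itself): "Structure needs cleaning" is the libc message for errno EUCLEAN, which the brick's local filesystem (typically XFS or ext4) raises when it detects on-disk corruption; Gluster merely passes it through, so the brick filesystems are the place to look. The errno mapping can be confirmed like this, with Python used only for illustration:

```python
import errno
import os

# EUCLEAN (117 on Linux) is the errno behind "Structure needs cleaning";
# it originates in the backing filesystem, not in Gluster itself.
print(errno.EUCLEAN)
print(os.strerror(errno.EUCLEAN))
```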
2003 Dec 01
0
No subject
...rom matavnet.hu (mail.matavnet.hu [195.228.240.10]) by
lists.samba.org (Postfix) with SMTP id 8592E50AC for
<samba@lists.samba.org>; Fri, 3 Aug 2001 05:48:07 -0700 (PDT)
Received: (qmail 2773 invoked from network); 3 Aug 2001 14:52:39 +0200
Received: from line-69-226.dial.matav.net (HELO bs3.bluesyst.hu)
(root@145.236.69.226) by mail.matavnet.hu with SMTP; 3 Aug 2001
14:52:40 +0200
Received: from bluesystem.hu (brain.bluesyst.hu [192.168.1.16]) by
bs3.bluesyst.hu (8.9.3/8.8.7) with ESMTP id OAA27705 for
<samba@lists.samba.org>; Fri, 3 Aug 2001 14:52:58 +0200
Message-ID: &...