Displaying 9 results from an estimated 9 matches for "fuse_graph_setup".
2011 Jun 06
2
Gluster 3.2.0 and ucarp not working
..._cbk] 0-atlas-client-1:
Connected to 192.168.3.233:24009, attached to re
mote volume '/atlas'.
[2011-06-06 02:33:54.230116] I [afr-common.c:2514:afr_notify]
0-atlas-replicate-0: Subvolume 'atlas-client-1' came back up; going
online.
[2011-06-06 02:33:54.237541] I [fuse-bridge.c:3316:fuse_graph_setup]
0-fuse: switched to graph 0
[2011-06-06 02:33:54.237801] I [fuse-bridge.c:2897:fuse_init]
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13
kernel 7.13
[2011-06-06 02:33:54.238757] I [afr-common.c:836:afr_fresh_lookup_cbk]
0-atlas-replicate-0: added root inode
[2011-06-06 02:33:...
2012 Jan 25
0
remote operation failed: Stale NFS file handle
...ing about the issue. For what it is worth, the clients and
servers are the same systems. Any ideas? I saw a few other people
reporting the same issue with earlier releases of Gluster, but did not
see a solution or further troubleshooting steps.
[2012-01-24 20:57:22.765190] I [fuse-bridge.c:3339:fuse_graph_setup]
0-fuse: switched to graph 0
[2012-01-24 20:57:22.765314] I [fuse-bridge.c:2927:fuse_init]
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13
kernel 7.13
[2012-01-24 20:57:22.765710] I
[afr-common.c:1520:afr_set_root_inode_on_first_lookup]
0-openfire-replicate-0: added root i...
2013 Mar 28
1
Glusterfs gives up with endpoint not connected
...nnected to 192.168.130.199:24009, attached to remote
volume '/raid5hs/glusterfs/export'.
[2013-03-28 10:57:44.806876] I [afr-common.c:2514:afr_notify]
0-sambavol-replicate-0: Subvolume 'sambavol-client-0' came back up; going
online.
[2013-03-28 10:57:44.811557] I [fuse-bridge.c:3316:fuse_graph_setup] 0-fuse:
switched to graph 0
[2013-03-28 10:57:44.811773] I [fuse-bridge.c:2897:fuse_init]
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel
7.10
[2013-03-28 10:57:44.812139] I [afr-common.c:836:afr_fresh_lookup_cbk]
0-sambavol-replicate-0: added root inode
[2013-03-28 10:...
2012 Jan 04
0
FUSE init failed
...client-0: Using Program GlusterFS 3.2.5, Num (1298437),
Version (310)
[2012-01-04 20:06:45.319931] I
[client-handshake.c:913:client_setvolume_cbk] 0-test-volume-client-0:
Connected to 10.141.0.1:24010, attached to remote volume '/local'.
[2012-01-04 20:06:45.327269] I [fuse-bridge.c:3339:fuse_graph_setup]
0-fuse: switched to graph 0
[2012-01-04 20:06:45.327438] E [fuse-bridge.c:2930:fuse_init]
0-glusterfs-fuse: FUSE init failed (Invalid argument)
[2012-01-04 20:06:45.328275] I
[afr-common.c:1520:afr_set_root_inode_on_first_lookup]
0-test-volume-replicate-0: added root inode
[2012-01-04 20:06:45...
2012 Jun 22
1
Fedora 17 GlusterFS 3.3.0 problems
...t-6: Connected to 10.59.0.17:24010, attached to remote volume
'/export'.
[2012-06-21 19:24:39.662931] I [client-handshake.c:1445:client_setvolume_cbk]
0-share-client-6: Server and Client lk-version numbers are not same, reopening
the fds
[2012-06-21 19:24:39.668028] I [fuse-bridge.c:4193:fuse_graph_setup] 0-fuse:
switched to graph 0
[2012-06-21 19:24:39.668234] I
[client-handshake.c:453:client_set_lk_version_cbk] 0-share-client-6: Server lk
version = 1
[2012-06-21 19:24:39.668287] I [fuse-bridge.c:3376:fuse_init] 0-glusterfs-fuse:
FUSE inited with protocol versions: glusterfs 7.13 kernel 7.18
[...
2012 May 29
2
When is self-healing triggered?
...:863:client_setvolume_cbk] 0-client-7: Connected to 194.14.241.42:10001, attached to remote volume 'brick1'.
[2012-05-22 17:06:06.133410] I [afr-common.c:2552:afr_notify] 0-replicate-3: Subvolume 'client-7' came back up; going online.
[2012-05-22 17:06:06.138600] I [fuse-bridge.c:3316:fuse_graph_setup] 0-fuse: switched to graph 0
[2012-05-22 17:06:06.138805] I [fuse-bridge.c:2897:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel 7.16
[2012-05-22 17:06:06.139359] I [afr-common.c:819:afr_fresh_lookup_cbk] 0-replicate-0: added root inode
[2012-05-22 17:06:06.1407...
2013 Nov 23
1
Maildir issue.
We brought up a test cluster to investigate GlusterFS.
Using the Quick Start instructions, we brought up a 2-server, 1-brick
replicated setup and mounted it from a third box with the FUSE mount
(all ver 3.4.1)
# gluster volume info
Volume Name: mailtest
Type: Replicate
Volume ID: 9e412774-b8c9-4135-b7fb-bc0dd298d06a
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
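The Quick Start steps the poster followed can be sketched roughly as below. The hostnames and brick paths are illustrative assumptions (the actual brick list is truncated in the excerpt); only the volume name "mailtest" and the replica-2 layout come from the `gluster volume info` output above.

```shell
# On server1: join the two servers into a trusted pool
# (hostnames server1/server2 are assumptions)
gluster peer probe server2

# Create a replica-2 volume; brick paths are illustrative
gluster volume create mailtest replica 2 \
    server1:/export/brick1 server2:/export/brick1
gluster volume start mailtest

# On the third box: native FUSE mount of the volume
mount -t glusterfs server1:/mailtest /mnt/mailtest
```

With `replica 2`, every file is written to both bricks, which matches the "Number of Bricks: 1 x 2 = 2" line in the volume info.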
2012 Jun 07
2
Performance optimization tips Gluster 3.3? (small files / directory listings)
Hi,
I'm using Gluster 3.3.0-1.el6.x86_64, on two storage nodes, replicated mode
(fs1, fs2)
Node specs: CentOS 6.2, Intel Quad Core 2.8GHz, 4GB RAM, 3ware RAID, 2x500GB
SATA 7200rpm (RAID1 for OS), 6x1TB SATA 7200rpm (RAID10 for /data), 1Gbit
network
I've mounted the data partition to web1, a dual quad-core 2.8GHz with 8GB RAM,
using glusterfs. (Also tried an NFS -> Gluster mount.)
We have 50Gb of
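The two mount variants the poster mentions can be sketched as follows. The hostname fs1, volume name data, and mount point are assumptions taken loosely from the post; the NFS options reflect that Gluster 3.3's built-in NFS server speaks NFSv3 over TCP only.

```shell
# Native FUSE mount on web1 (hostname/volume/mount point assumed)
mount -t glusterfs fs1:/data /var/www/data

# Alternative tried in the post: NFS -> Gluster mount.
# Gluster's built-in NFS server requires NFSv3 over TCP.
mount -t nfs -o vers=3,tcp fs1:/data /var/www/data
```

For small-file and directory-listing workloads, the NFS client's attribute caching often makes listings faster than the FUSE client, at the cost of weaker cache coherency between web nodes.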
2011 Jul 11
0
Instability when using RDMA transport
...:19:52.253304] E [client-handshake.c:1163:client_query_portmap_cbk] 0-gluster-vol01-client-1: failed to get the port number for remote subvolume
[2011-07-11 10:19:52.253626] I [client.c:1883:client_rpc_notify] 0-gluster-vol01-client-1: disconnected
[2011-07-11 10:19:52.256681] I [fuse-bridge.c:3316:fuse_graph_setup] 0-fuse: switched to graph 0
[2011-07-11 10:19:52.256774] I [fuse-bridge.c:2897:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel 7.10
[2011-07-11 10:19:55.503338] I [rpc-clnt.c:1531:rpc_clnt_reconfig] 0-gluster-vol01-client-0: changing port to 24009 (from 0)
[2...