Displaying 3 results from an estimated 3 matches for "fuse_release".
2008 Aug 01 (1 reply)
file descriptor in bad state
I've just set up a simple Gluster storage system on CentOS 5.2 x64 with Gluster
1.3.10.
I have three storage bricks and one client.
Every time I run iozone across this setup, I seem to get a bad file
descriptor around the 4k mark.
Any thoughts why? I'm sure more info is wanted; I'm just not sure what else
to include at this point.
thanks
[root at green gluster]# cat
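The failure described above can be reproduced with a minimal iozone run against the mount point. A sketch follows; the mount path, file name, and size limit are illustrative, not taken from the original report:

```shell
# Hypothetical reproduction: iozone automatic mode, capped at a small
# maximum file size, writing to a file on the GlusterFS mount.
# -a  run the full automatic test suite
# -g  maximum file size to test
# -f  path of the temporary test file (example path, adjust to your mount)
iozone -a -g 16m -f /mnt/glusterfs/iozone.tmp
```

Watching the client log (and the brick logs) during the run should show the operation that returns EBADF.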
2008 Dec 10 (3 replies)
AFR healing problem after returning one node.
I've got a configuration which, in short, combines AFR and
unify: the servers export n[1-3]-brick[12] and n[1-3]-ns, and the client has
this cluster configuration:
volume afr-ns
type cluster/afr
subvolumes n1-ns n2-ns n3-ns
option data-self-heal on
option metadata-self-heal on
option entry-self-heal on
end-volume
volume afr1
type cluster/afr
subvolumes n1-brick2
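For comparison, a complete AFR volume definition in a client volfile has this shape; the volume and subvolume names below are placeholders, not the poster's actual bricks (whose list is truncated above):

```
volume afr-example
  type cluster/afr
  # replicate across two example subvolumes
  subvolumes server1-brick server2-brick
  # enable self-heal for file data, metadata, and directory entries
  option data-self-heal on
  option metadata-self-heal on
  option entry-self-heal on
end-volume
```

The `afr-ns` volume shown above follows exactly this pattern with three namespace subvolumes.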
2008 Dec 09 (1 reply)
File uploaded to webDAV server on GlusterFS AFR - ends up without xattr!
..._entry_cbk] glusterfs-fuse:
62: (34) /tmp2/12/04/0000000412 => -1 (2)
2008-12-09 14:53:09 D [fuse-bridge.c:1701:fuse_flush] glusterfs-fuse:
63: FLUSH 0x1eeadf0
2008-12-09 14:53:09 D [fuse-bridge.c:939:fuse_err_cbk] glusterfs-fuse:
63: (16) ERR => 0
2008-12-09 14:53:09 D [fuse-bridge.c:1728:fuse_release] glusterfs-fuse:
64: CLOSE 0x1eeadf0
2008-12-09 14:53:09 D [fuse-bridge.c:939:fuse_err_cbk] glusterfs-fuse:
64: (17) ERR => 0
2008-12-09 14:53:15 D [inode.c:367:__active_inode] fuse/inode:
activating inode(589837), lru=1/1024
2008-12-09 14:53:15 D [inode.c:321:__destroy_inode] fuse/inode: des...
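The missing-xattr symptom can be checked directly on a brick's backend filesystem with `getfattr` from the attr package. A sketch, assuming the file from the log lives under an example export path; `trusted.*` attributes are only visible to root:

```shell
# On a server node, dump the extended attributes of the backend file.
# /export/brick is an example export prefix; the rest of the path is
# the file mentioned in the log above.
# -d        dump all matching attributes
# -m trusted  match the trusted.* namespace (AFR changelogs live here)
# -e hex    print values in hex
getfattr -d -m trusted -e hex /export/brick/tmp2/12/04/0000000412
```

If AFR's changelog xattrs (e.g. `trusted.afr.*`) are absent on every brick, the write path never set them, which matches the behaviour described in the subject.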