Displaying 8 results from an estimated 8 matches for "fuse_fd_cbk".
2011 Aug 21
2
Fixing split brain
Hi
Consider the typical split brain situation: reading from the file gets EIO,
and the logs say:
[2011-08-21 13:38:54.607590] W [afr-open.c:168:afr_open]
0-gfs-replicate-0: failed to open as split brain seen, returning EIO
[2011-08-21 13:38:54.607895] W [fuse-bridge.c:585:fuse_fd_cbk]
0-glusterfs-fuse: 1371456: OPEN()
/manu/netbsd/usr/src/gnu/dist/groff/doc/Makefile.sub => -1
(Input/output error)
On the backend I have two versions, one with size 0, the other with a
decent size. I removed the one with size zero, ran ls -l on the client
to trigger self heal: the l...
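Spelled out as commands, the fix described there is roughly the following sketch; the brick root and client mount point are assumptions, and the file path is taken from the OPEN() log line above:

# on the server that holds the zero-size copy, remove it from the backend brick
# (/export/brick1 is a hypothetical brick root)
rm /export/brick1/manu/netbsd/usr/src/gnu/dist/groff/doc/Makefile.sub
# on releases that keep a gfid hard link under the brick's .glusterfs directory,
# that link has to be removed as well
# then stat the file from a client mount so AFR self-heals it from the good copy
ls -l /mnt/gfs/manu/netbsd/usr/src/gnu/dist/groff/doc/Makefile.sub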
2009 Jun 11
2
Issue with files on glusterfs becoming unreadable.
elbert@host1:~$ dpkg -l | grep glusterfs
ii  glusterfs-client   1.3.8-0pre2   GlusterFS fuse client
ii  glusterfs-server   1.3.8-0pre2   GlusterFS fuse server
ii  libglusterfs0      1.3.8-0pre2   GlusterFS libraries and translator modules
I have 2 hosts set up to use AFR with
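A first check that usually helps with this kind of report is to compare the two backend copies of an unreadable file directly on each server; the export path below is hypothetical:

# run on both AFR servers, against the backend export rather than the mount
ls -l /data/export/path/to/unreadable-file
md5sum /data/export/path/to/unreadable-file
# dump whatever extended attributes AFR keeps on this release
getfattr -d -m . -e hex /data/export/path/to/unreadable-file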
2013 Jan 07
0
accessing a file on one node reports split brain, while it's normal on another node
...afr_self_heal_completion_cbk]
0-gfs1-replicate-5: background data gfid self-heal failed on
/XMTEXT/gfs1_000/000/000/095
[2013-01-07 09:57:38.193937] W [afr-open.c:168:afr_open] 0-gfs1-replicate-5:
failed to open as split brain seen, returning EIO
[2013-01-07 09:57:38.194033] W [fuse-bridge.c:693:fuse_fd_cbk]
0-glusterfs-fuse: 3162527: OPEN() /XMTEXT/gfs1_000/000/000/095 => -1
(Input/output error)
[2013-01-07 10:08:12.569821] W
[afr-common.c:931:afr_detect_self_heal_by_lookup_status] 0-gfs1-replicate-5:
split brain detected during lookup of /XMTEXT/gfs1_000/000/000/095.
[2013-01-07 10:08:12.569891...
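To see which copy the changelog blames, the AFR extended attributes of the affected file can be dumped on each brick; the backend path below is an assumption, the file path comes from the log:

# run on every brick of gfs1-replicate-5 (/export/gfs1 is a hypothetical brick path)
getfattr -d -m . -e hex /export/gfs1/XMTEXT/gfs1_000/000/000/095
# non-zero trusted.afr.gfs1-client-* pending counters on both copies, each
# blaming the other, is the pattern AFR reports as split brain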
2008 Aug 01
1
file descriptor in bad state
I've just set up a simple gluster storage system on CentOS 5.2 x64 with gluster
1.3.10
I have three storage bricks and one client
Every time I run iozone across this setup, I seem to get a bad file
descriptor around the 4k mark.
Any thoughts why? I'm sure more info is wanted, I'm just not sure what else
to include at this point.
thanks
[root@green gluster]# cat
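A hypothetical reproduction run, assuming iozone's automatic mode against a file that lives on the gluster mount (mount point and file name are made up):

# sweep record and file sizes on the gluster mount and note where EBADF appears
iozone -a -g 16m -f /mnt/gluster/iozone.tmp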
2023 May 25
1
vfs_shadow_copy2 cannot read/find snapshots
...nied] 0-posix-acl-autoload:
> client: -, gfid: 08ee40ea-8f84-4240-a6b1-e56e0d393016,
> req(uid:1001103,gid:1000513,perm:4,ngrps:20),
> ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
> [Keine Berechtigung]
> [2023-05-24 12:30:09.764234 +0000] W [fuse-bridge.c:1642:fuse_fd_cbk]
> 0-glusterfs-fuse: 201346: OPENDIR() /admin/projects/.snaps => -1
> (Keine Berechtigung)
Does it make a difference if you re-mount the volume without the 'acl'
option? Otherwise, can you try listing the snapshot contents from
the server as a user (and not the super user) who in reali...
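In command form, these suggestions amount to roughly the sketch below; the server, volume name and mount point are made up, and the /admin/projects path comes from the log. The sudo listing can just as well be run against a mount made on the server itself:

# remount the volume without the acl mount option
umount /mnt/shares
mount -t glusterfs gluster1:/shares /mnt/shares
# then list the snapshot directory as the affected, non-root user
sudo -u projectuser ls -l /mnt/shares/admin/projects/.snaps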
2008 Oct 02
0
FW: Why does glusterfs not automatically fix these kinds of problems?
...ter doesn't like when the underlying filesystem is acted on
>directly.
>
>At 09:22 AM 10/1/2008, Will Rouesnel wrote:
> >Simple unify configuration with bricks running on the same system
> >(multiple hard disks):
> >
> >2008-10-02 02:11:14 E [fuse-bridge.c:715:fuse_fd_cbk]
> >glusterfs-fuse: 1534391: (12) /home/will/documents/University/PhD
> >2008/Experiments/Silver nanocube rapid polyol
> >synthesis/Photos/P8130384.JPG => -1 (5)
> >2008-10-02 02:11:14 E [unify.c:882:unify_open] unify:
> >/home/will/documents/University/PhD 2008/Ex...
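One way to check whether the unify namespace and the storage bricks have drifted apart (a common effect of touching the backend directly) is to compare them by hand; the export paths below are hypothetical, the file path is the one from the log:

# the path that returns EIO, relative to the volume root (from the log above)
F='home/will/documents/University/PhD 2008/Experiments/Silver nanocube rapid polyol synthesis/Photos/P8130384.JPG'
ls -l "/export/ns/$F"                          # what the namespace brick thinks exists
ls -l "/export/disk1/$F" "/export/disk2/$F"    # where the file actually lives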
2011 Aug 01
1
[Gluster 3.2.1] Replication issues on a two-brick volume
Hello,
I installed GlusterFS one month ago, and replication has many issues:
First of all, our infrastructure: two storage arrays of 8 TB in replication
mode... We have our backup files on these arrays, so 6 TB of data.
I want to replicate the data to the second storage array, so I use this
command:
# gluster volume rebalance REP_SVG migrate-data start
And gluster starts to replicate, in 2 weeks
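For what it's worth, the rebalance migrate-data operation moves files between distribute subvolumes; on 3.2 the usual way to push data onto a newly added replica brick was to trigger self-heal by crawling the volume from a client, roughly as below (the mount point is hypothetical):

# crawl the client mount so every file is looked up and self-healed
find /mnt/rep_svg -noleaf -print0 | xargs --null stat > /dev/null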
2008 Dec 10
3
AFR healing problem after one node returns.
I've got a configuration which, in short, includes a combination of AFRs and
unify: the servers export n[1-3]-brick[12] and n[1-3]-ns, and the client has
this cluster configuration:
volume afr-ns
type cluster/afr
subvolumes n1-ns n2-ns n3-ns
option data-self-heal on
option metadata-self-heal on
option entry-self-heal on
end-volume
volume afr1
type cluster/afr
subvolumes n1-brick2
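With AFR of that generation, healing happens on access, so once the missing node is back the usual approach was to force a crawl that reads every file from a client mount; a rough sketch, with a hypothetical mount point:

# read one byte of every file so AFR compares the copies and heals them
find /mnt/glusterfs -type f -exec head -c 1 '{}' \; > /dev/null
# a recursive listing also covers directory entries and metadata
ls -lR /mnt/glusterfs > /dev/null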