Displaying 6 results from an estimated 6 matches for "fuse_lookup".
2008 Dec 09 · 1 reply · File uploaded to webDAV server on GlusterFS AFR - ends up without xattr!
...lient_protocol_reconnect]
client2: breaking reconnect chain
2008-12-09 14:53:09 D [fuse-bridge.c:384:fuse_entry_cbk] glusterfs-fuse:
2: (34) / => 1
2008-12-09 14:53:09 W [fuse-bridge.c:398:fuse_entry_cbk] glusterfs-fuse:
2: (34) / => 1 Rehashing 0/0
2008-12-09 14:53:09 D [fuse-bridge.c:521:fuse_lookup] glusterfs-fuse: 3:
LOOKUP /tmp2
2008-12-09 14:53:09 D [fuse-bridge.c:384:fuse_entry_cbk] glusterfs-fuse:
3: (34) /tmp2 => 589835
2008-12-09 14:53:09 D [inode.c:577:__create_inode] fuse/inode: create
inode(589835)
2008-12-09 14:53:09 D [inode.c:367:__active_inode] fuse/inode:
activating inod...
2010 Jan 07 · 2 replies · Random directory/files gets unavailable after sometime
Hello,
I am using glusterfs v3.0.0 and am having problems with random directories/files.
They work fine for some time (hours) and then suddenly become unavailable:
# ls -lh
ls: cannot access MyDir: No such file or directory
total 107M
d????????? ? ? ? ? ? MyDir
( long dir list, intentionally hidden )
In the logs I get a lot of messages like these:
[2010-01-07
2019 Sep 11 · 0 replies · [PATCH v5 0/4] virtio-fs: shared file system for virtual machines
...contain session state. It is not
easily possible to bring back the state held in memory after the device
has been reset.
The following areas of the FUSE protocol are stateful and need special
attention:
* FUSE_INIT - this is pretty easy, we must re-negotiate the same
settings as before.
* FUSE_LOOKUP -> fuse_inode (inode_map)
The session contains a set of inode numbers that have been looked up
using FUSE_LOOKUP. They are ephemeral in the current virtiofsd
implementation and vary across device reset. Therefore we are unable
to restore the same inode numbers upon restore.
Th...
2019 Sep 12 · 0 replies · [PATCH v5 0/4] virtio-fs: shared file system for virtual machines
...ry after the device
> > has been reset.
> >
> > The following areas of the FUSE protocol are stateful and need special
> > attention:
> >
> > * FUSE_INIT - this is pretty easy, we must re-negotiate the same
> > settings as before.
> >
> > * FUSE_LOOKUP -> fuse_inode (inode_map)
> >
> > The session contains a set of inode numbers that have been looked up
> > using FUSE_LOOKUP. They are ephemeral in the current virtiofsd
> > implementation and vary across device reset. Therefore we are unable
> > to re...
2013 Jun 17 · 0 replies · gluster client timeouts / found conflict
...ass authentication to allow clients to connect from ports
above 1024. I'm mentioning this in case it is relevant for the
connection/timeout/conflict problem we're experiencing.
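For context on the unprivileged-ports point above (a hedged sketch, not taken from the original post, and the volume name "vol0" is an assumption): in GlusterFS of that era the usual knobs for accepting client connections from ports above 1024 were the `server.allow-insecure` volume option plus the corresponding glusterd setting:

```
# per-volume option (volume name "vol0" is illustrative)
gluster volume set vol0 server.allow-insecure on

# and in /etc/glusterfs/glusterd.vol:
option rpc-auth-allow-insecure on
```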
Excerpt from the included file which might be the most relevant parts:
[2013-06-14 15:55:54] T [fuse-bridge.c:596:fuse_lookup] glusterfs-fuse:
3642552: LOOKUP /369/60702093
[2013-06-14 15:55:54] T [dht-layout.c:306:dht_disk_layout_merge]
distribute: merged to layout: 1610612730 - 1700091214 (type 0) from dn-083-1
[2013-06-14 15:55:54] T [dht-layout.c:306:dht_disk_layout_merge]
distribute: merged to layout: 894784850 - 984...
2008 Dec 10 · 3 replies · AFR healing problem after returning one node.
I've got a configuration which, in short, combines AFR and unify - the servers export n[1-3]-brick[12] and n[1-3]-ns, and the client has this cluster configuration:
volume afr-ns
type cluster/afr
subvolumes n1-ns n2-ns n3-ns
option data-self-heal on
option metadata-self-heal on
option entry-self-heal on
end-volume
volume afr1
type cluster/afr
subvolumes n1-brick2
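The excerpt cuts off before the unify volume that ties the AFR sets together. As a sketch only (the volume name, scheduler choice, and the assumption of three data AFRs are illustrative, not from the original post), a legacy GlusterFS cluster/unify volume over such subvolumes would look roughly like:

```
volume unify0
  type cluster/unify
  option namespace afr-ns     # the replicated namespace volume from above
  option scheduler rr         # assumption: round-robin file placement
  subvolumes afr1 afr2 afr3   # assumption: three AFR data volumes
end-volume
```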