search for: fuse_open

Displaying 4 results from an estimated 4 matches for "fuse_open".

2019 Sep 11
0
[PATCH v5 0/4] virtio-fs: shared file system for virtual machines
...current virtiofsd implementation and vary across device reset. Therefore we are unable to restore the same inode numbers upon restore. The solution is persistent inode numbers in virtiofsd. This is also needed to make open_by_handle_at(2) work and probably for live migration. * FUSE_OPEN -> fh (fd_map) The session contains FUSE file handles for open files. There is currently no way of re-opening a file so that a specific fh is returned. A mechanism to do so probably isn't necessary if the driver can update the fh to the new one produced by the device for al...
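For readers unfamiliar with the fd_map mentioned above, here is a minimal sketch of the idea, not virtiofsd's actual code: the daemon hands out a FUSE file handle (fh) in each FUSE_OPEN reply and keeps a table mapping that fh to the backing host file descriptor. All identifiers (fd_map, fd_map_add, fd_map_lookup) are made up for illustration; the sketch also shows why such handles are not stable across a device reset, since the counter simply restarts from zero.

/* Hypothetical sketch of an fh -> host fd table; not virtiofsd code. */
#include <stdint.h>
#include <stdlib.h>

struct fd_map_entry {
    uint64_t fh;   /* handle returned to the guest in the FUSE_OPEN reply */
    int      fd;   /* host file descriptor backing this handle */
};

struct fd_map {
    struct fd_map_entry *entries;
    size_t count, cap;
    uint64_t next_fh;   /* restarts from 0 after a device reset */
};

/* Allocate a new fh for an already-open host fd and remember the mapping. */
static uint64_t fd_map_add(struct fd_map *m, int fd)
{
    if (m->count == m->cap) {
        size_t new_cap = m->cap ? m->cap * 2 : 16;
        struct fd_map_entry *e = realloc(m->entries, new_cap * sizeof(*e));
        if (!e)
            abort();   /* error handling kept trivial for the sketch */
        m->entries = e;
        m->cap = new_cap;
    }
    uint64_t fh = m->next_fh++;
    m->entries[m->count++] = (struct fd_map_entry){ .fh = fh, .fd = fd };
    return fh;
}

/* Look up the host fd for a handle sent by the guest; -1 if unknown. */
static int fd_map_lookup(const struct fd_map *m, uint64_t fh)
{
    for (size_t i = 0; i < m->count; i++)
        if (m->entries[i].fh == fh)
            return m->entries[i].fd;
    return -1;
}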
2019 Sep 12
0
[PATCH v5 0/4] virtio-fs: shared file system for virtual machines
...Therefore we are unable > > to restore the same inode numbers upon restore. > > > > The solution is persistent inode numbers in virtiofsd. This is also > > needed to make open_by_handle_at(2) work and probably for live > > migration. > > > > * FUSE_OPEN -> fh (fd_map) > > > > The session contains FUSE file handles for open files. There is > > currently no way of re-opening a file so that a specific fh is > > returned. A mechanism to do so probably isn't necessary if the > > driver can update the f...
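The persistent-handle mechanism the thread refers to is open_by_handle_at(2). Below is a minimal, self-contained sketch (not virtiofsd code; using "/" as the mount fd and the thin error handling are simplifications) of saving a handle with name_to_handle_at(2) and later re-opening the same inode from it, which is the kind of operation that would let the same file be found again after a restore. Note that open_by_handle_at(2) requires CAP_DAC_READ_SEARCH, so it is only usable by a suitably privileged daemon.

/* Sketch only: persistent file handles via name_to_handle_at(2) /
 * open_by_handle_at(2). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <path>\n", argv[0]);
        return 1;
    }

    /* Obtain a persistent handle: it identifies the inode independently
     * of any open fd, so it can be saved and reused later. */
    struct file_handle *fhp = malloc(sizeof(*fhp) + MAX_HANDLE_SZ);
    fhp->handle_bytes = MAX_HANDLE_SZ;
    int mount_id;
    if (name_to_handle_at(AT_FDCWD, argv[1], fhp, &mount_id, 0) == -1) {
        perror("name_to_handle_at");
        return 1;
    }

    /* Later (e.g. after a restore), open the same inode again from the
     * saved handle. mount_fd must be a fd on the filesystem containing
     * the file; "/" is used here purely for illustration. */
    int mount_fd = open("/", O_RDONLY | O_DIRECTORY);
    int fd = open_by_handle_at(mount_fd, fhp, O_RDONLY);
    if (fd == -1) {
        perror("open_by_handle_at");   /* needs CAP_DAC_READ_SEARCH */
        return 1;
    }
    printf("re-opened %s as fd %d\n", argv[1], fd);
    close(fd);
    close(mount_fd);
    free(fhp);
    return 0;
}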
2008 Aug 01
1
file descriptor in bad state
I've just set up a simple gluster storage system on CentOS 5.2 x64 with gluster 1.3.10. I have three storage bricks and one client. Every time I run iozone across this setup, I seem to get a bad file descriptor around the 4k mark. Any thoughts why? I'm sure more info is wanted; I'm just not sure what else to include at this point. Thanks. [root at green gluster]# cat
2008 Dec 10
3
AFR healing problem after returning one node.
I've got a configuration which, put simply, combines AFR and unify - the servers export n[1-3]-brick[12] and n[1-3]-ns, and the client has the following cluster configuration:
volume afr-ns
  type cluster/afr
  subvolumes n1-ns n2-ns n3-ns
  option data-self-heal on
  option metadata-self-heal on
  option entry-self-heal on
end-volume
volume afr1
  type cluster/afr
  subvolumes n1-brick2
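The excerpt above is cut off before the unify volume that ties the AFR subvolumes together. Purely as a sketch of the usual GlusterFS 1.x client layout, and assuming subvolume names like afr1/afr2/afr3 and a round-robin scheduler (none of which appear in the original post), the missing piece would look roughly like:

# Hypothetical sketch; names and scheduler are assumptions, not taken
# from the original post.
volume unify
  type cluster/unify
  option namespace afr-ns      # the replicated namespace volume above
  option scheduler rr          # round-robin file placement (assumed)
  subvolumes afr1 afr2 afr3
end-volume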