Displaying 6 results from an estimated 6 matches for "fuse_readv_cbk".
2008 Aug 01
1
file descriptor in bad state
I've just set up a simple Gluster storage system on CentOS 5.2 x64 with Gluster
1.3.10.
I have three storage bricks and one client.
Every time I run iozone across this setup, I seem to get a bad file
descriptor around the 4k mark.
Any thoughts why? I'm sure more info is wanted; I'm just not sure what else
to include at this point.
thanks
[root at green gluster]# cat
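The thread does not include the exact iozone command line; a minimal reproduction along the following lines (the mount point and sizes are assumptions, not taken from the report) would at least drive reads through fuse_readv_cbk on the client:

    # Hypothetical reproduction; /mnt/gluster stands in for the client's FUSE mount.
    # -a runs iozone's automatic mode, -g caps the maximum file size at 1 GB,
    # -i 0 -i 1 restricts it to the write/rewrite and read/reread tests,
    # -f places the test file on the Gluster mount.
    iozone -a -g 1G -i 0 -i 1 -f /mnt/gluster/iozone.tmp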
2013 Oct 01
1
Gluster on ZFS: cannot open empty files
...og, and they are (I have substituted <volume-name> for the
volume's actual name):
[2013-10-01 19:32:52.125149] W [page.c:991:__ioc_page_error]
0-<volume-name>-io-cache: page error for page = 0x7f944ba2e4a0 & waitq =
0x7f944ba2ba10
[2013-10-01 19:32:52.125180] W [fuse-bridge.c:2049:fuse_readv_cbk]
0-glusterfs-fuse: 2278: READ => -1 (Operation not permitted)
Any ideas about how I can fix this?
Thanks,
Anand
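The page error is raised inside the io-cache translator before FUSE returns "Operation not permitted", so a hedged first check (not something suggested in the excerpt itself) is whether the empty files become readable with that translator turned off; <volume-name> is the same placeholder used in the log above:

    # Hedged diagnostic sketch, not a fix from the thread: toggle io-cache and retry the read.
    gluster volume set <volume-name> performance.io-cache off
    # ...and re-enable it afterwards if it makes no difference:
    gluster volume set <volume-name> performance.io-cache on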
2023 Aug 12
2
Rebuilding a failed cluster
...03:50:49.834859 +0000] E [MSGID: 122066]
[ec-common.c:1301:ec_prepare_update_cbk] 0-gv-disperse-0: Unable to get
config xattr. FOP : 'FXATTROP' failed on gfid
076a511d-3721-4231-ba3b-5c4cbdbd7f5d. Parent FOP: READ [No data available]
[2023-08-12 03:50:49.834930 +0000] W [fuse-bridge.c:2994:fuse_readv_cbk]
0-glusterfs-fuse: 39: READ => -1 gfid=076a511d-3721-4231-ba3b-5c4cbdbd7f5d
fd=0x7fbc9c001a98 (No data available)
So obviously I need to copy over more from the original cluster. If
I force the three nodes and the volume to have the same UUIDs, will that be
enough?
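For reference, the identifiers being asked about live in glusterd's working directory and as an extended attribute on each brick root; the following is a sketch of where to look, assuming the stock /var/lib/glusterd layout (the brick path is hypothetical):

    # Local node's peer UUID:
    cat /var/lib/glusterd/glusterd.info
    # One file per remote peer, named by that peer's UUID:
    ls /var/lib/glusterd/peers/
    # Volume UUID, stored as an xattr on every brick root (/data/brick1/gv is a placeholder):
    getfattr -n trusted.glusterfs.volume-id -e hex /data/brick1/gv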
2019 Nov 28
1
Stale File Handle Errors During Heavy Writes
>>>
>>> [2019-11-26 22:41:33.565776] E [MSGID: 109040] [dht-helper.c:1336:dht_migration_complete_check_task] 3-scratch-dht: 24d53a0e-c28d-41e0-9dbc-a75e823a3c7d: failed to lookup the file on scratch-dht? [Stale file handle]
>>> [2019-11-26 22:41:33.565853] W [fuse-bridge.c:2827:fuse_readv_cbk] 0-glusterfs-fuse: 33112038: READ => -1 gfid=147040e2-a6b8-4f54-8490-f0f3df29ee50 fd=0x7f95d8d0b3f8 (Stale file handle)
>>>
>>> I've seen some bugs or other threads referencing similar issues, but couldn't really discern a solution from them.
>>>
>>> I...
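The failing call path runs through DHT's migration-complete check, which is exercised while files are being migrated between bricks; a hedged way to see whether a rebalance or remove-brick was in flight when the stale-handle errors appeared (not a suggestion from the thread, and <volname> is a placeholder):

    # Was DHT migrating files at the time?
    gluster volume rebalance <volname> status
    gluster volume status <volname>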
2008 Dec 10
3
AFR healing problem after returning one node.
I've got a configuration that combines AFR and unify: the servers export
n[1-3]-brick[12] and n[1-3]-ns, and the client has this cluster
configuration:
volume afr-ns
type cluster/afr
subvolumes n1-ns n2-ns n3-ns
option data-self-heal on
option metadata-self-heal on
option entry-self-heal on
end-volume
volume afr1
type cluster/afr
subvolumes n1-brick2
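The excerpt cuts off after afr1's first subvolume; purely to illustrate the volfile shape being described, an AFR pair in the same style would look like the block below (n2-brick2 and the self-heal options are hypothetical, not taken from the poster's actual file):

    # Illustrative sketch only; n2-brick2 is a guessed second subvolume.
    volume afr1
      type cluster/afr
      subvolumes n1-brick2 n2-brick2
      option data-self-heal on
      option metadata-self-heal on
      option entry-self-heal on
    end-volume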
2018 Feb 26
1
Problems with write-behind with large files on Gluster 3.8.4
...to rpc-transport (scratch-client-0)
[2018-02-22 18:07:45.228767] W [rpc-clnt.c:1694:rpc_clnt_submit] 2-scratch-client-0: failed to submit rpc-request (XID: 0x549a0 Program: GlusterFS 3.3, ProgVers: 330, Proc: 13) to rpc-transport (scratch-client-0)
[2018-02-22 18:07:45.228803] W [fuse-bridge.c:2228:fuse_readv_cbk] 0-glusterfs-fuse: 790331: READ => -1 gfid=3a86afe5-7392-49a5-b60a-e0c93b050c01 fd=0x2b2f23f023b4 (Transport endpoint is not connected)
[2018-02-22 18:07:45.228826] W [rpc-clnt.c:1694:rpc_clnt_submit] 2-scratch-client-0: failed to submit rpc-request (XID: 0x549a1 Program: GlusterFS 3.3, ProgVers...
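Since the subject points at write-behind interacting badly with large files, a hedged experiment (not drawn from the excerpt) is to retest with the translator disabled, or with a wider flush window; <volname> is a placeholder and the window size is an arbitrary example:

    # Hedged experiment, not advice from the thread:
    gluster volume set <volname> performance.write-behind off
    # ...or keep it enabled but enlarge the write-behind window:
    gluster volume set <volname> performance.write-behind-window-size 4MB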