search for: fuse_err_cbk

Displaying 18 results from an estimated 18 matches for "fuse_err_cbk".

2008 Aug 01
1
file descriptor in bad state
I've just set up a simple Gluster storage system on CentOS 5.2 x64 with Gluster 1.3.10. I have three storage bricks and one client. Every time I run iozone across this setup, I seem to get a bad file descriptor around the 4k mark. Any thoughts why? I'm sure more info is wanted; I'm just not sure what else to include at this point. Thanks.
[root at green gluster]# cat
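For context, a typical iozone run against a FUSE mount looks something like the sketch below; the post doesn't include the exact invocation, so the test selection, file sizes, and mount path here are illustrative assumptions.

    # Hypothetical reproduction sketch -- the original invocation is not shown in the post.
    # -a: automatic mode; -i 0 / -i 1: write and read tests only;
    # -n/-g: minimum/maximum file sizes; -f: test file on the Gluster FUSE mount.
    iozone -a -i 0 -i 1 -n 64m -g 4g -f /mnt/gluster/iozone.tmp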
2013 Apr 30
1
3.3.1 distributed-striped-replicated volume
I'm hitting the 'cannot find stripe size' bug:
[2013-04-29 17:42:24.508332] E [stripe-helpers.c:268:stripe_ctx_handle] 0-gv0-stripe-0: Failed to get stripe-size
[2013-04-29 17:42:24.513013] W [fuse-bridge.c:968:fuse_err_cbk] 0-glusterfs-fuse: 867: FSYNC() ERR => -1 (Invalid argument)
Is there a fix for this in 3.3.1, or do we need to move to git HEAD to make this work?
M.
--
Michael Brown, Systems Consultant | `One of the main causes of the fall of the Roman Empire was that, lacking zero,...
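The stripe translator records its geometry in extended attributes on the backing files, so one way to see what the error is complaining about is to dump the trusted.* xattrs directly on a brick. A minimal sketch; the brick path is a placeholder and the exact stripe xattr key name varies by version:

    # Run on a brick server against the backing file, not the FUSE mount.
    # Look for stripe-size / stripe-index keys among the trusted.* attributes.
    getfattr -d -m trusted -e hex /export/brick1/path/to/file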
2018 Feb 05
2
Fwd: Troubleshooting glusterfs
...h] 48-gv0-dht: no subvolume for hash (value) = 122440868
[2018-02-04 07:41:16.244261] W [fuse-bridge.c:2398:fuse_writev_cbk] 0-glusterfs-fuse: 3615890: WRITE => -1 gfid=c73ca10f-e83e-42a9-9b0a-1de4e12c6798 fd=0x7ffa3802a5f0 (?????? ?????/??????)
[2018-02-04 07:41:16.254503] W [fuse-bridge.c:1377:fuse_err_cbk] 0-glusterfs-fuse: 3615891: FLUSH() ERR => -1 (?????? ?????/??????)
The message "W [MSGID: 109011] [dht-layout.c:186:dht_layout_search] 48-gv0-dht: no subvolume for hash (value) = 122440868" repeated 81 times between [2018-02-04 07:41:16.189349] and [2018-02-04 07:41:16.254480]
[2018-0...
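The "no subvolume for hash" warnings usually point at an incomplete DHT layout, for example after bricks were added. A hedged first step, assuming the volume really is gv0 as the log prefix suggests, is a fix-layout rebalance:

    # Recompute the DHT layout without migrating data, then watch progress.
    gluster volume rebalance gv0 fix-layout start
    gluster volume rebalance gv0 status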
2018 Feb 05
0
Fwd: Troubleshooting glusterfs
> ...me for hash (value) = 122440868
> [2018-02-04 07:41:16.244261] W [fuse-bridge.c:2398:fuse_writev_cbk] 0-glusterfs-fuse: 3615890: WRITE => -1 gfid=c73ca10f-e83e-42a9-9b0a-1de4e12c6798 fd=0x7ffa3802a5f0 (?????? ?????/??????)
> [2018-02-04 07:41:16.254503] W [fuse-bridge.c:1377:fuse_err_cbk] 0-glusterfs-fuse: 3615891: FLUSH() ERR => -1 (?????? ?????/??????)
> The message "W [MSGID: 109011] [dht-layout.c:186:dht_layout_search] 48-gv0-dht: no subvolume for hash (value) = 122440868" repeated 81 times between [2018-02-04 07:41:16.189349] and [2018-02-04 07:4...
2018 Feb 05
2
Fwd: Troubleshooting glusterfs
>> ...(value) = 122440868
>> [2018-02-04 07:41:16.244261] W [fuse-bridge.c:2398:fuse_writev_cbk] 0-glusterfs-fuse: 3615890: WRITE => -1 gfid=c73ca10f-e83e-42a9-9b0a-1de4e12c6798 fd=0x7ffa3802a5f0 (?????? ?????/??????)
>> [2018-02-04 07:41:16.254503] W [fuse-bridge.c:1377:fuse_err_cbk] 0-glusterfs-fuse: 3615891: FLUSH() ERR => -1 (?????? ?????/??????)
>> The message "W [MSGID: 109011] [dht-layout.c:186:dht_layout_search] 48-gv0-dht: no subvolume for hash (value) = 122440868" repeated 81 times between [2018-02-04 07:41:16.189349] and...
2018 Feb 07
0
Fwd: Troubleshooting glusterfs
>>> ...0868
>>> [2018-02-04 07:41:16.244261] W [fuse-bridge.c:2398:fuse_writev_cbk] 0-glusterfs-fuse: 3615890: WRITE => -1 gfid=c73ca10f-e83e-42a9-9b0a-1de4e12c6798 fd=0x7ffa3802a5f0 (?????? ?????/??????)
>>> [2018-02-04 07:41:16.254503] W [fuse-bridge.c:1377:fuse_err_cbk] 0-glusterfs-fuse: 3615891: FLUSH() ERR => -1 (?????? ?????/??????)
>>> The message "W [MSGID: 109011] [dht-layout.c:186:dht_layout_search] 48-gv0-dht: no subvolume for hash (value) = 122440868" repeated 81 times between [2018-02-04 07:41...
2017 Jul 18
2
Sporadic Bus error on mmap() on FUSE mount
...e in the mount log:
[2017-07-18 08:30:22.470770] E [MSGID: 108008] [afr-transaction.c:2629:afr_write_txn_refresh_done] 0-flow-replicate-0: Failing FALLOCATE on gfid 6a675cdd-2ea1-473f-8765-2a4c935a22ad: split-brain observed. [Input/output error]
[2017-07-18 08:30:22.470843] W [fuse-bridge.c:1291:fuse_err_cbk] 0-glusterfs-fuse: 56589: FALLOCATE() ERR => -1 (Input/output error)
I'm not sure about the current state of mmap() on FUSE and Gluster, but it's strange that it works only on certain mounts of the same volume.
version: glusterfs 3.10.3
[root at dc1]# gluster volume info flow
Volume Name: flo...
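Since the log explicitly reports split-brain on a gfid, the usual next step on 3.10 is the heal CLI. A sketch, assuming the volume name flow from the log prefix; the resolution policy shown is only an example, not a recommendation for this particular file:

    # List entries the self-heal daemon considers split-brained.
    gluster volume heal flow info split-brain

    # Example resolution by policy (latest-mtime picked purely for illustration).
    gluster volume heal flow split-brain latest-mtime gfid:6a675cdd-2ea1-473f-8765-2a4c935a22ad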
2017 Jul 18
0
Sporadic Bus error on mmap() on FUSE mount
> [2017-07-18 08:30:22.470770] E [MSGID: 108008] [afr-transaction.c:2629:afr_write_txn_refresh_done] 0-flow-replicate-0: Failing FALLOCATE on gfid 6a675cdd-2ea1-473f-8765-2a4c935a22ad: split-brain observed. [Input/output error]
> [2017-07-18 08:30:22.470843] W [fuse-bridge.c:1291:fuse_err_cbk] 0-glusterfs-fuse: 56589: FALLOCATE() ERR => -1 (Input/output error)
>
> I'm not sure about the current state of mmap() on FUSE and Gluster, but it's strange that it works only on certain mounts of the same volume.
This can be caused when a mmap()'d region is not written. For...
2017 Jul 18
1
Sporadic Bus error on mmap() on FUSE mount
>> ...8:30:22.470770] E [MSGID: 108008] [afr-transaction.c:2629:afr_write_txn_refresh_done] 0-flow-replicate-0: Failing FALLOCATE on gfid 6a675cdd-2ea1-473f-8765-2a4c935a22ad: split-brain observed. [Input/output error]
>> [2017-07-18 08:30:22.470843] W [fuse-bridge.c:1291:fuse_err_cbk] 0-glusterfs-fuse: 56589: FALLOCATE() ERR => -1 (Input/output error)
>>
>> I'm not sure about the current state of mmap() on FUSE and Gluster, but it's strange that it works only on certain mounts of the same volume.
> This can be caused when a mmap()'d region...
2023 Feb 14
1
failed to close Bad file descriptor on file creation after using setfattr to test latency?
> ...864dbaf8-1012-4b79-afe4-89de0ace2628 [Input/output error]
> [2023-02-14 16:23:45.648770] E [MSGID: 122077] [ec-generic.c:204:ec_flush] 0-stor-disperse-0: Failing FLUSH on 864dbaf8-1012-4b79-afe4-89de0ace2628 [Bad file descriptor]
> [2023-02-14 16:23:45.649022] W [fuse-bridge.c:1945:fuse_err_cbk] 0-glusterfs-fuse: 99: FLUSH() ERR => -1 (Bad file descriptor)
> [2023-02-14 16:23:45.648996] E [MSGID: 122077] [ec-generic.c:204:ec_flush] 0-stor-disperse-0: Failing FLUSH on 864dbaf8-1012-4b79-afe4-89de0ace2628 [Bad file descriptor]"
Gluster volume info sans hostname/...
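As an aside, per-operation latency can also be sampled without touching xattrs at all, via the profiling CLI. A minimal sketch, assuming the volume is named stor as the 0-stor-disperse-0 prefix suggests:

    # Per-FOP latency statistics, collected server-side.
    gluster volume profile stor start
    gluster volume profile stor info
    gluster volume profile stor stop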
2017 Dec 21
1
seeding my georeplication
...ed -1 error: Stale file handle [Stale file handle]
[2017-12-21 16:36:37.173374] D [MSGID: 0] [gfid-access.c:390:ga_heal_cbk] 0-stack-trace: stack-address: 0x7fe39ebd7498, gfid-access-autoload returned -1 error: Stale file handle [Stale file handle]
[2017-12-21 16:36:37.173405] W [fuse-bridge.c:1291:fuse_err_cbk] 0-glusterfs-fuse: 57862: SETXATTR() /path => -1 (Stale file handle)
I notice that in slave-upgrade.sh the .glusterfs contents on each brick are deleted and the volume restarted before gsync-sync-gfid is run. I have a good working backup at the moment, and deleting the .glusterfs folder worries me...
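ESTALE from gfid-access generally means the gfid being healed doesn't resolve on the backend, so comparing the trusted.gfid xattr across bricks is a cheap sanity check. A sketch; the brick path is a placeholder:

    # Run on each brick; the 16-byte hex values must match across copies.
    getfattr -n trusted.gfid -e hex /data/brick1/path/to/file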
2009 Mar 18
0
glusterfs bdb backend problem
...rfs.log
2009-03-18 09:31:21 E [fuse-bridge.c:539:fuse_attr_cbk] glusterfs-fuse: 13: UTIMENS() /a => -1 (Operation not permitted)
2009-03-18 09:32:17 E [fuse-bridge.c:1606:fuse_writev_cbk] glusterfs-fuse: 17: WRITE => -1 (File descriptor in bad state)
2009-03-18 09:32:17 E [fuse-bridge.c:924:fuse_err_cbk] glusterfs-fuse: 18: FLUSH() ERR => -1 (File descriptor in bad state)
-------------------------------------------------------------------------------------------
My glusterfs server log:
[root at orion31 glusterfs]# tail glusterfsd.log -n 500
.........................
2009-03-18 09:30:26 W [xl...
2019 Aug 23
2
plenty of vacuuuming processes
...:fuse_entry_cbk] 0-glusterfs-fuse: 755327: LOOKUP() /path/desktop.ini => -1 (Keine Berechtigung)
[2019-08-23 09:23:39.537803] W [fuse-bridge.c:939:fuse_entry_cbk] 0-glusterfs-fuse: 755330: LOOKUP() /path/desktop.ini => -1 (Keine Berechtigung)
[2019-08-23 09:23:39.538232] W [fuse-bridge.c:1823:fuse_err_cbk] 0-glusterfs-fuse: 755331: ACCESS() /path => -1 (Keine Berechtigung)
This is /var/log/samba/log.smbd:
[2019/08/23 11:29:56.943764, 10, pid=1246, effective(101776, 513), real(101776, 0)] ../lib/util/util.c:514(dump_data)
[0000] 00 00 02 00 00 00 00 2F 00 68 00 69 00 6C 00 64 ......./ .h...
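(Keine Berechtigung is the German locale's rendering of "Permission denied", i.e. EACCES.) When Samba on a Gluster FUSE mount trips over Windows permissions, one common check is whether the client was mounted with POSIX ACL support. A sketch with placeholder server, volume, and path names:

    # Remount the FUSE client with ACL support, then inspect the ACLs Samba sees.
    mount -t glusterfs -o acl server1:/gv0 /mnt/gv0
    getfacl /mnt/gv0/path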
2017 Jun 23
2
seeding my georeplication
I have a ~600TB distributed Gluster volume that I want to start using geo-replication on. The current volume is on six 100TB bricks on two servers. My plan is:
1) copy each of the bricks to new arrays on the servers locally
2) move the new arrays to the new servers
3) create the volume on the new servers using the arrays
4) fix the layout on the new volume
5) start geo-replication (which should be
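For reference, once the seeded slave volume exists, the geo-replication session itself is created with the standard flow below. This is only a sketch of that flow, not the seeding procedure being asked about, and the volume and host names are placeholders:

    # On a master node: generate and distribute the pem keys, then create,
    # start, and check the session.
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status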
2019 Aug 23
2
plenty of vacuuuming processes
Hi, I have a ctdb cluster with 3 nodes and 3 glusterfs (version 6) nodes up and running. I observe plenty of these situations: a connected Windows 10 client doesn't react anymore. I use folder redirections.
- smbstatus shows some (auth in progress) processes.
- In the logs of a ctdb node I get: Aug 23 10:12:29 ctdb-1 ctdbd[2167]: Ending traverse on DB locking.tdb (id 568831), records
2008 Dec 09
1
File uploaded to webDAV server on GlusterFS AFR - ends up without xattr!
...usterfs-fuse: 62: LOOKUP /tmp2/12/04/0000000412
2008-12-09 14:53:09 D [fuse-bridge.c:464:fuse_entry_cbk] glusterfs-fuse: 62: (34) /tmp2/12/04/0000000412 => -1 (2)
2008-12-09 14:53:09 D [fuse-bridge.c:1701:fuse_flush] glusterfs-fuse: 63: FLUSH 0x1eeadf0
2008-12-09 14:53:09 D [fuse-bridge.c:939:fuse_err_cbk] glusterfs-fuse: 63: (16) ERR => 0
2008-12-09 14:53:09 D [fuse-bridge.c:1728:fuse_release] glusterfs-fuse: 64: CLOSE 0x1eeadf0
2008-12-09 14:53:09 D [fuse-bridge.c:939:fuse_err_cbk] glusterfs-fuse: 64: (17) ERR => 0
2008-12-09 14:53:15 D [inode.c:367:__active_inode] fuse/inode: activating...
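To confirm whether the replicated copies really lack their xattrs, dump the trusted.* attributes on each brick's copy of the file. A sketch; the brick prefix is a placeholder, and the exact AFR changelog key names differ across these old releases:

    # Run on each AFR brick against the backend file.
    getfattr -d -m trusted -e hex /export/brick/tmp2/12/04/0000000412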
2008 Dec 10
3
AFR healing problem after returning one node.
I've got a configuration which, put simply, combines afrs and unify - servers export n[1-3]-brick[12] and n[1-3]-ns, and the client got this cluster configuration:
volume afr-ns
  type cluster/afr
  subvolumes n1-ns n2-ns n3-ns
  option data-self-heal on
  option metadata-self-heal on
  option entry-self-heal on
end-volume
volume afr1
  type cluster/afr
  subvolumes n1-brick2
2018 Feb 26
1
Problems with write-behind with large files on Gluster 3.8.4
...5443b Program: GlusterFS 3.3, ProgVers: 330, Proc: 13) to rpc-transport (scratch-client-1)
[2018-02-22 18:07:47.148271] W [MSGID: 103037] [rdma.c:3016:gf_rdma_submit_request] 2-rpc-transport/rdma: sending request to peer (172.17.2.255:49154) failed
[2018-02-22 18:07:47.148281] W [fuse-bridge.c:1291:fuse_err_cbk] 0-glusterfs-fuse: 790447: FLUSH() ERR => -1 (Transport endpoint is not connected)
[2018-02-22 18:07:47.148279] W [MSGID: 114031] [client-rpc-fops.c:855:client3_3_writev_cbk] 2-scratch-client-1: remote operation failed [Transport endpoint is not connected]
[2018-02-22 18:07:47.148310] W [MSGID:...
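A common way to test whether write-behind is implicated, assuming the volume is named scratch as the 2-scratch-client-1 prefix suggests, is to disable it (or shrink its aggregation window) and rerun the workload:

    # Toggle write-behind off for the volume.
    gluster volume set scratch performance.write-behind off
    # Alternatively, reduce the write-behind window instead of disabling it:
    # gluster volume set scratch performance.write-behind-window-size 1MB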