search for: 0x7e25

Displaying 12 results from an estimated 12 matches for "0x7e25".

2017 Nov 13
2
snapshot mount fails in 3.12
...13 08:46:02.300928] I [fuse-bridge.c:5833:fini] 0-fuse: Unmounting '/mnt/temp'. [2017-11-13 08:46:02.308875] I [fuse-bridge.c:5838:fini] 0-fuse: Closing fuse connection to '/mnt/temp'. [2017-11-13 08:46:02.308987] W [glusterfsd.c:1347:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7e25) [0x7feb50f70e25] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x55a5dbb91365] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x55a5dbb9118b] ) 0-: received signum (15), shutting down Google was no help so far. Since the mount worked before the upgrade I'm puzzled. What am I missing...
2017 Nov 13
0
snapshot mount fails in 3.12
...se-bridge.c:5833:fini] 0-fuse: > Unmounting '/mnt/temp'. > [2017-11-13 08:46:02.308875] I [fuse-bridge.c:5838:fini] 0-fuse: Closing > fuse connection to '/mnt/temp'. > [2017-11-13 08:46:02.308987] W [glusterfsd.c:1347:cleanup_and_exit] > (-->/lib64/libpthread.so.0(+0x7e25) [0x7feb50f70e25] > -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x55a5dbb91365] > -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x55a5dbb9118b] ) 0-: > received signum (15), shutting down > > Google was no help so far. Since the mount worked before the upgrade I'm ...
2018 Apr 09
2
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
...resume] 0-glusterfs-fuse: 3: GETATTR 1 (00000000-0000-0000-0000-000000000001) resolution failed [2018-04-09 05:08:13.937258] I [fuse-bridge.c:5093:fuse_thread_proc] 0-fuse: initating unmount of /n [2018-04-09 05:08:13.938043] W [glusterfsd.c:1393:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7e25) [0x7fb80b05ae25] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x560b52471675] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x560b5247149b] ) 0-: received signum (15), shutting down [2018-04-09 05:08:13.938086] I [fuse-bridge.c:5855:fini] 0-fuse: Unmounting '/n'. [2018-04-0...
2017 Oct 24
2
brick is down but gluster volume status says it's fine
...f entries: 3 > > Brick gluster0:/export/brick7/digitalcorpora > /.trashcan > /DigitalCorpora/hello2.txt > /DigitalCorpora > Status: Connected > Number of entries: 3 > > [2017-10-24 17:18:48.288505] W [glusterfsd.c:1360:cleanup_and_exit] > (-->/lib64/libpthread.so.0(+0x7e25) [0x7f6f83c9de25] > -->/usr/sbin/glusterfsd(glusterfs_sigwaiter+0xe5) [0x55a148eeb135] > -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x55a148eeaf5b] ) 0-: > received signum (15), shutting down > [2017-10-24 17:18:59.270384] I [MSGID: 100030] [glusterfsd.c:2503:main] > 0-/us...
2018 Apr 09
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
...use: 3: GETATTR 1 (00000000-0000-0000-0000-000000000001) > resolution failed > [2018-04-09 05:08:13.937258] I [fuse-bridge.c:5093:fuse_thread_proc] > 0-fuse: initating unmount of /n > [2018-04-09 05:08:13.938043] W [glusterfsd.c:1393:cleanup_and_exit] > (-->/lib64/libpthread.so.0(+0x7e25) [0x7fb80b05ae25] > -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x560b52471675] > -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x560b5247149b] ) 0-: > received signum (15), shutting down > [2018-04-09 05:08:13.938086] I [fuse-bridge.c:5855:fini] 0-fuse: > Unmounting ...
2017 Oct 24
0
brick is down but gluster volume status says it's fine
...ster0:/export/brick7/digitalcorpora >> /.trashcan >> /DigitalCorpora/hello2.txt >> /DigitalCorpora >> Status: Connected >> Number of entries: 3 >> >> [2017-10-24 17:18:48.288505] W [glusterfsd.c:1360:cleanup_and_exit] >> (-->/lib64/libpthread.so.0(+0x7e25) [0x7f6f83c9de25] >> -->/usr/sbin/glusterfsd(glusterfs_sigwaiter+0xe5) [0x55a148eeb135] >> -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x55a148eeaf5b] ) 0-: >> received signum (15), shutting down >> [2017-10-24 17:18:59.270384] I [MSGID: 100030] [glusterfsd.c:2503:...
2018 Apr 11
3
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
...0000000-0000-0000-0000-000000000001) > resolution failed > [2018-04-09 05:08:13.937258] I [fuse-bridge.c:5093:fuse_thread_proc] > 0-fuse: initating unmount of /n > [2018-04-09 05:08:13.938043] W [glusterfsd.c:1393:cleanup_and_exit] > (-->/lib64/libpthread.so.0(+0x7e25) [0x7fb80b05ae25] > -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x560b52471675] > -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x560b5247149b] ) 0-: > received signum (15), shutting down > [2018-04-09 05:08:13.938086] I [fuse-bridge.c:5855:fini] 0-fuse: ...
2018 Apr 11
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
...00-000000000001) >> resolution failed >> [2018-04-09 05:08:13.937258] I [fuse-bridge.c:5093:fuse_thread_proc] >> 0-fuse: initating unmount of /n >> [2018-04-09 05:08:13.938043] W [glusterfsd.c:1393:cleanup_and_exit] >> (-->/lib64/libpthread.so.0(+0x7e25) [0x7fb80b05ae25] >> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x560b52471675] >> -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x560b5247149b] ) 0-: >> received signum (15), shutting down >> [2018-04-09 05:08:13.938086] I [fuse-bridge.c:5855:f...
2018 May 08
1
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
...d > [2018-04-09 05:08:13.937258] I > [fuse-bridge.c:5093:fuse_thread_proc] > 0-fuse: initating unmount of /n > [2018-04-09 05:08:13.938043] W > [glusterfsd.c:1393:cleanup_and_exit] > (-->/lib64/libpthread.so.0(+0x7e25) [0x7fb80b05ae25] > -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) > [0x560b52471675] > -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) > [0x560b5247149b] ) 0-: > received signum (15), shutting down > [2018-...
2017 Oct 26
0
not healing one file
Hey Richard, Could you share the following information please? 1. gluster volume info <volname> 2. getfattr output of that file from all the bricks getfattr -d -e hex -m . <brickpath/filepath> 3. glustershd & glfsheal logs Regards, Karthik On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try the recently released health
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool, and see if it diagnoses any issues in your setup. Currently you may have to run it on all three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster > Summit in Prague, will be checking this and respond next
2017 Oct 26
2
not healing one file
...ter_home/brick'. [2017-10-25 10:13:31.362882] I [MSGID: 114047] [client-handshake.c:1227:client_setvolume_cbk] 0-home-client-2: Server and Client lk-version numbers are not same, reopening the fds [2017-10-25 10:13:31.363339] W [glusterfsd.c:1360:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7e25) [0x7ff8e2a95e25] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x56178e01a135] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x56178e019f5b] ) 0-: received signum (15), shutting down [2017-10-25 10:13:31.369043] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-home...