search for: mgmt_cbk_spec

Displaying 8 results from an estimated 8 matches for "mgmt_cbk_spec".

2017 Jul 30
1
Lose gnfs connection during test
...inuing
[2017-07-30 19:26:18.493551] I [glusterfsd-mgmt.c:1620:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2017-07-30 19:26:18.545959] I [glusterfsd-mgmt.c:1620:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2017-07-30 19:42:29.704707] I [glusterfsd-mgmt.c:54:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2017-07-30 19:42:30.072282] I [glusterfsd-mgmt.c:54:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2017-07-30 19:42:30.269784] I [glusterfsd-mgmt.c:1620:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2017-07-30 19:42:30.315577] I [glusterfsd-mgmt.c:16...
2017 Sep 08
1
pausing scrub crashed scrub daemon on nodes
...elow message in some of the nodes. Also, I can see that the scrub daemon is not showing in volume status for some nodes. Error msg type 1 --
[2017-09-01 10:04:45.840248] I [bit-rot.c:1683:notify] 0-glustervol-bit-rot-0: BitRot scrub ondemand called
[2017-09-01 10:05:05.094948] I [glusterfsd-mgmt.c:52:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2017-09-01 10:05:06.401792] I [glusterfsd-mgmt.c:52:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2017-09-01 10:05:07.544524] I [MSGID: 118035] [bit-rot-scrub.c:1297:br_scrubber_scale_up] 0-glustervol-bit-rot-0: Scaling up scrubbers [0 => 36]
[2017-09-01 10:05:07.552...
2018 Jan 18
0
issues after botched update
...:174726-home-client-0-0-0
[2018-01-18 08:38:41.790071] I [MSGID: 101055] [client_t.c:443:gf_client_unref] 0-home-server: Shutting down connection gluster00.cluster.local.local-21590-2018/01/18-08:38:41:174726-home-client-0-0-0
[2018-01-18 08:38:56.298125] I [glusterfsd-mgmt.c:52:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2018-01-18 08:38:56.384120] I [glusterfsd-mgmt.c:52:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2018-01-18 08:38:56.394284] I [glusterfsd-mgmt.c:1821:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2018-01-18 08:38:56.450621]...
2012 Sep 10
1
A problem with gluster 3.3.0 and Sun Grid Engine
Hi, We have run into a serious problem on our Sun Grid Engine cluster with glusterfs 3.3.0. Could somebody help me? Based on my understanding, if a folder is removed and recreated on another client node, a program that tries to create a new file under that folder fails very often. We partially fixed this problem by running "ls" on the folder before doing anything in our command; however, Sun Grid Engine
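A minimal sketch of the workaround described in that thread: force a fresh directory lookup before creating files, so the FUSE client does not keep using a stale handle for a directory that was removed and recreated on another node. The path and file name below are hypothetical placeholders, not taken from the original post.

    #!/bin/sh
    # Hypothetical shared directory on the GlusterFS mount; another client may
    # have removed and recreated it since this client last looked it up.
    JOB_DIR=/mnt/gluster/jobs/current

    # Listing the directory first makes the client re-resolve it instead of
    # reusing the cached entry for the old, deleted directory.
    ls "$JOB_DIR" > /dev/null 2>&1

    # File creation now targets the recreated directory and is far less likely to fail.
    touch "$JOB_DIR/output.$$"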
2018 Jan 14
0
Cannot write data to a volume when quota limits its capacity and the volume is mounted on the server itself, on arm64 (aarch64) architecture
...ion = 1
[2018-02-02 11:23:36.297606] I [fuse-bridge.c:4083:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel 7.23
[2018-02-02 11:23:36.297662] I [fuse-bridge.c:4768:fuse_graph_sync] 0-fuse: switched to graph 0
[2018-02-02 11:23:39.223315] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2018-02-02 11:23:39.449492] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2018-02-02 11:23:39.516999] I [glusterfsd-mgmt.c:1600:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2018-02-02 11:23:39.542305] I [glusterfsd-mgmt.c:1600:...
2017 Oct 26
0
not healing one file
Hey Richard, Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks: getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards, Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
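The three items requested above map directly onto shell commands; a sketch, with <volname> and <brickpath/filepath> left as placeholders and the log locations assumed to be the usual defaults under /var/log/glusterfs:

    # 1. Volume configuration
    gluster volume info <volname>

    # 2. Extended attributes of the unhealed file, run on every brick that holds a copy
    getfattr -d -e hex -m . <brickpath/filepath>

    # 3. Self-heal daemon and heal-info logs (assumed default locations)
    less /var/log/glusterfs/glustershd.log
    less /var/log/glusterfs/glfsheal-<volname>.log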
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it diagnoses any issues in your setup. Currently you may have to run it on all three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster Summit in Prague, will be checking this and respond next
2017 Oct 26
2
not healing one file
...08:57:07.003170] W [MSGID: 114031] [client-rpc-fops.c:2928:client3_3_lookup_cbk] 0-home-client-2: remote operation failed. Path: <gfid:c2c7765a-17d9-49be-b7d7-042047a2186a> (c2c7765a-17d9-49be-b7d7-042047a2186a) [No such file or directory]
[2017-10-23 09:18:41.875890] I [glusterfsd-mgmt.c:52:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2017-10-23 09:18:41.879945] I [glusterfsd-mgmt.c:1789:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2017-10-23 09:24:00.923818] I [glusterfsd-mgmt.c:52:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2017-10-23 09:24:00.977037] I [glusterfsd-mgmt.c:52:mg...