search for: mgmt_getspec_cbk

Displaying 20 results from an estimated 42 matches for "mgmt_getspec_cbk".

2017 Nov 13
2
snapshot mount fails in 3.12
...in the release notes for snapshot mounting? I recently upgraded from 3.10 to 3.12 on CentOS (using centos-release-gluster312). The upgrade worked flawlessly. The volume works fine too. But mounting a snapshot fails with these two error messages: [2017-11-13 08:46:02.300719] E [glusterfsd-mgmt.c:1796:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server [2017-11-13 08:46:02.300744] E [glusterfsd-mgmt.c:1932:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:snaps) Up to the mount step, everything works as before: # gluster snapshot create test home no-timestamp # gluster snaps...
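For reference, the usual snapshot mount sequence that the truncated commands above appear to follow is sketched below; the activate step, the /snaps/<snapname>/<volname> volfile path, and the mount point are standard-usage assumptions, not taken from the post:

    # gluster snapshot create test home no-timestamp
    # gluster snapshot activate test
    # mount -t glusterfs <server>:/snaps/test/home /mnt/snap-test

The (key:snaps) in the second error is the volfile id the client asked glusterd to serve.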
2017 Jul 30
1
Lose gnfs connection during test
...log/messages: Jul 30 18:53:02 localhost_10 kernel: nfs: server 10.147.4.99 not responding, still trying Here is the error message in nfs.log for gluster: 19:26:18.440498] I [rpc-drc.c:689:rpcsvc_drc_init] 0-rpc-service: DRC is turned OFF [2017-07-30 19:26:18.450180] I [glusterfsd-mgmt.c:1620:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing [2017-07-30 19:26:18.493551] I [glusterfsd-mgmt.c:1620:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing [2017-07-30 19:26:18.545959] I [glusterfsd-mgmt.c:1620:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing [2017-07-30...
2017 Nov 13
0
snapshot mount fails in 3.12
...shot mounting? > I recently upgraded from 3.10 to 3.12 on CentOS (using > centos-release-gluster312). The upgrade worked flawlessly. The volume > works fine too. But mounting a snapshot fails with these two error > messages: > > [2017-11-13 08:46:02.300719] E [glusterfsd-mgmt.c:1796:mgmt_getspec_cbk] > 0-glusterfs: failed to get the 'volume file' from server > [2017-11-13 08:46:02.300744] E [glusterfsd-mgmt.c:1932:mgmt_getspec_cbk] > 0-mgmt: failed to fetch volume file (key:snaps) > > Up to the mounting everything works as before: > # gluster snapshot create test home...
2011 Oct 25
1
problems with gluster 3.2.4
Hi, we have 4 test machines (gluster01 to gluster04). I've created a replicated volume across the 4 machines. Then on the client machine I've executed: mount -t glusterfs gluster01:/volume01 /mnt/gluster and everything works OK. The main problem occurs on every client machine when I do: umount /mnt/gluster and then mount -t glusterfs gluster01:/volume01 /mnt/gluster The client
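A minimal sketch of the setup described above, with the brick paths and replica count assumed for illustration (the post does not show the create command):

    # gluster volume create volume01 replica 4 gluster01:/export/brick gluster02:/export/brick gluster03:/export/brick gluster04:/export/brick
    # gluster volume start volume01
    # mount -t glusterfs gluster01:/volume01 /mnt/gluster
    # umount /mnt/gluster
    # mount -t glusterfs gluster01:/volume01 /mnt/gluster    <- the remount is where the problem is reported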
2023 Mar 14
1
can't set up geo-replication: can't fetch slave details
...poll.c:669:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=1}] [2023-03-14 19:13:48.912759 +0000] I [MSGID: 101190] [event-epoll.c:669:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=0}] [2023-03-14 19:13:48.914529 +0000] E [glusterfsd-mgmt.c:2137:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server [2023-03-14 19:13:48.914549 +0000] E [glusterfsd-mgmt.c:2338:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:ansible) [2023-03-14 19:13:48.914739 +0000] W [glusterfsd.c:1432:cleanup_and_exit] (-->/lib/x86_64-linux-gnu...
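For context, a geo-replication session is normally created and started with commands along these lines; the master volume, slave host, and slave volume names are placeholders, since only the volfile key (ansible) is visible in the truncated log:

    # gluster volume geo-replication <mastervol> <slavehost>::<slavevol> create push-pem
    # gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start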
2018 May 08
1
mount failing client to gluster cluster.
...in/glusterfs --volfile-server=glusterp1.graywitch.co.nz --volfile-id=/gv0/kvm01/images/ /var/lib/libvirt/images) [2018-05-08 03:33:48.996244] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1 [2018-05-08 03:33:48.998694] E [glusterfsd-mgmt.c:1590:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server [2018-05-08 03:33:48.998721] E [glusterfsd-mgmt.c:1690:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:/gv0/kvm01/images/) [2018-05-08 03:33:48.998891] W [glusterfsd.c:1327:cleanup_and_exit] (-->/usr/lib/x86_64-linux-gn...
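The failing mount asks glusterd for the volfile key /gv0/kvm01/images/, i.e. a subdirectory of gv0. As a baseline check, a plain whole-volume mount would look roughly like this (mount point kept from the log, everything else unchanged); mounting a subdirectory instead, if that was the intent, depends on the GlusterFS release and extra configuration not shown here:

    # mount -t glusterfs glusterp1.graywitch.co.nz:/gv0 /var/lib/libvirt/images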
2017 Dec 29
1
cannot mount with glusterfs-fuse after NFS-Ganesha enabled
...20:volume_end] 0-parser: "type" not specified for volume gv0-ganesha [2017-12-28 08:15:30.132756] E [MSGID: 100026] [glusterfsd.c:2265:glusterfs_process_volfp] 0-: failed to construct the graph [2017-12-28 08:15:30.133054] E [graph.c:982:glusterfs_graph_destroy] (-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x3c1) [0x7f0d81334fb1] -->/usr/sbin/glusterfs(glusterfs_process_volfp+0x149) [0x7f0d8132f519] -->/lib64/libglusterfs.so.0(glusterfs_graph_destroy+0x84) [0x7f0d80e6edc4] ) 0-graph: invalid argument: graph [Invalid argument] [2017-12-28 08:15:30.133119] W [glusterfsd.c:1288:cleanup_and_exit] (...
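The parser error above indicates that the generated client volfile contains a volume block named gv0-ganesha with no type line. A well-formed volfile block generally has this shape; the translator and option placeholders below are purely illustrative:

    volume gv0-ganesha
        type <category>/<translator>
        option <key> <value>
        subvolumes <child-volume-name>
    end-volume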
2013 Mar 14
1
glusterfs 3.3 self-heal daemon crash and can't be started
...ash: 2013-03-14 16:33:50 configuration details: argp 1 backtrace 1 dlfcn 1 fdatasync 1 libpthread 1 llistxattr 1 setfsid 1 spinlock 1 epoll.h 1 xattr.h 1 st_atim.tv_nsec 1 package-string: glusterfs 3.3.0 /lib64/libc.so.6[0x38d0a32920] /lib64/libc.so.6(memcpy+0x309)[0x38d0a88da9] /usr/sbin/glusterfs(mgmt_getspec_cbk+0x398)[0x40c888] /usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5)[0x38d1a0f4d5] /usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x120)[0x38d1a0fcd0] /usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x28)[0x38d1a0aeb8] /usr/lib64/glusterfs/3.3.0/rpc-transport/socket.so(socket_event_poll_in+0x34)[0x7f1d47b...
2023 Mar 21
1
can't set up geo-replication: can't fetch slave details
...l_worker] 0-epoll: Started thread with index [{index=1}] > [2023-03-14 19:13:48.912759 +0000] I [MSGID: 101190] [event-epoll.c:669:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=0}] > [2023-03-14 19:13:48.914529 +0000] E [glusterfsd-mgmt.c:2137:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server > [2023-03-14 19:13:48.914549 +0000] E [glusterfsd-mgmt.c:2338:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:ansible) > [2023-03-14 19:13:48.914739 +0000] W [glusterfsd.c:1432:cleanup_and_exit]...
2017 Sep 08
1
pausing scrub crashed scrub daemon on nodes
...-glustervol-bit-rot-0: SCRUB TUNABLES:: [Frequency: biweekly, Throttle: lazy] [2017-09-01 10:05:07.552942] I [MSGID: 118038] [bit-rot-scrub.c:948:br_fsscan_schedule] 0-glustervol-bit-rot-0: Scrubbing is scheduled to run at 2017-09-15 10:05:07 [2017-09-01 10:05:07.553457] I [glusterfsd-mgmt.c:1778:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing [2017-09-01 10:05:20.953815] I [bit-rot.c:1683:notify] 0-glustervol-bit-rot-0: BitRot scrub ondemand called [2017-09-01 10:05:20.953845] I [MSGID: 118038] [bit-rot-scrub.c:1085:br_fsscan_ondemand] 0-glustervol-bit-rot-0: Ondemand Scrubbing scheduled t...
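The scrub state changes in this log are driven by the bitrot CLI. Assuming the volume name from the log prefix (0-glustervol-bit-rot-0) is glustervol, the ondemand/pause/resume sequence described in the subject would look roughly like:

    # gluster volume bitrot glustervol scrub ondemand
    # gluster volume bitrot glustervol scrub pause
    # gluster volume bitrot glustervol scrub resume
    # gluster volume bitrot glustervol scrub status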
2017 Dec 26
0
trying to mount gluster volume
...20:volume_end] 0-parser: "type" not specified for volume gv0-ganesha [2017-12-26 07:49:38.612216] E [MSGID: 100026] [glusterfsd.c:2265:glusterfs_process_volfp] 0-: failed to construct the graph [2017-12-26 07:49:38.612509] E [graph.c:982:glusterfs_graph_destroy] (-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x3c1) [0x7f30339f7fb1] -->/usr/sbin/glusterfs(glusterfs_process_volfp+0x149) [0x7f30339f2519] -->/lib64/libglusterfs.so.0(glusterfs_graph_destroy+0x84) [0x7f3033531dc4] ) 0-graph: invalid argument: graph [Invalid argument] [2017-12-26 07:49:38.612573] W [glusterfsd.c:1288:cleanup_and_exit] (...
2017 Nov 16
0
Missing files on one of the bricks
...usted.gfid=0x9612ecd2106d42f295ebfef495c1d8ab # gluster volume heal data01 Launching heal operation to perform index self heal on volume data01 has been successful Use heal info commands to check status # cat /var/log/glusterfs/glustershd.log [2017-11-12 08:39:01.907287] I [glusterfsd-mgmt.c:1789:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing [2017-11-15 08:18:02.084766] I [MSGID: 100011] [glusterfsd.c:1414:reincarnate] 0-glusterfsd: Fetching the volume file from server... [2017-11-15 08:18:02.085718] I [glusterfsd-mgmt.c:1789:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing...
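Alongside the full heal launched above, per-file status is usually checked with the heal info variants, for example:

    # gluster volume heal data01 info
    # gluster volume heal data01 info split-brain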
2018 Jan 15
2
Using the host name of the volume, its related commands can become very slow
...terfs version 3.7.20 (args: /usr/sbin/glusterfs --volfile-server=localhost --volfile-id=test /data/gluster/test) [2018-02-03 13:53:22.810249] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1 [2018-02-03 13:53:22.811289] E [glusterfsd-mgmt.c:1590:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server [2018-02-03 13:53:22.811323] E [glusterfsd-mgmt.c:1690:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:test) [2018-02-03 13:53:22.811847] W [glusterfsd.c:1251:cleanup_and_exit] (-->/lib64/libgfrpc.so.0(rpc_clnt_handle_r...
2017 Nov 16
2
Missing files on one of the bricks
On 11/16/2017 04:12 PM, Nithya Balachandran wrote: > > > On 15 November 2017 at 19:57, Frederic Harmignies > <frederic.harmignies at elementai.com > <mailto:frederic.harmignies at elementai.com>> wrote: > > Hello, we have 2x files that are missing from one of the bricks. > No idea how to fix this. > > Details: > > # gluster volume
2018 Jan 05
0
Another VM crashed
...ransport-type: tcp Bricks: Brick1: srvpve2g:/data/brick2/brick Brick2: srvpve3g:/data/brick2/brick Brick3: srvpve1g:/data/brick2/brick (arbiter) Options Reconfigured: nfs.disable: on performance.readdir-ahead: on transport.address-family: inet [2017-12-31 05:25:01.724213] I [glusterfsd-mgmt.c:1600:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing [2018-01-02 07:56:32.763516] I [MSGID: 115036] [server.c:548:server_rpc_notify] 0-datastore1-server: disconnecting connection from srvpve4-1652-2017/05/03-19:41:30:493103-datastore1-client-1-0-2 [2018-01-02 07:56:32.763554] I [MSGID: 101055] [client_t...
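The brick list shown (two data bricks plus one arbiter) matches a replica 3 arbiter 1 layout; under that assumption, the volume would have been created roughly like this, with the brick paths taken from the output above:

    # gluster volume create datastore1 replica 3 arbiter 1 srvpve2g:/data/brick2/brick srvpve3g:/data/brick2/brick srvpve1g:/data/brick2/brick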
2018 Jan 16
0
Using the host name of the volume, its related commands can become very slow
...(args: /usr/sbin/glusterfs --volfile-server=localhost --volfile-id=test > /data/gluster/test) > [2018-02-03 13:53:22.810249] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] > 0-epoll: Started thread with index 1 > [2018-02-03 13:53:22.811289] E [glusterfsd-mgmt.c:1590:mgmt_getspec_cbk] > 0-glusterfs: failed to get the 'volume file' from server > [2018-02-03 13:53:22.811323] E [glusterfsd-mgmt.c:1690:mgmt_getspec_cbk] > 0-mgmt: failed to fetch volume file (key:test) > [2018-02-03 13:53:22.811847] W [glusterfsd.c:1251:cleanup_and_exit] > (-->/lib64/libgfr...
2017 Aug 07
2
Slow write times to gluster disk
...071767c7] /usr/lib64/libglusterfs.so.0(xlator_init+0x52)[0x3889622a82] /usr/lib64/libglusterfs.so.0(glusterfs_graph_init+0x31)[0x3889669aa1] /usr/lib64/libglusterfs.so.0(glusterfs_graph_activate+0x57)[0x3889669bd7] /usr/sbin/glusterfs(glusterfs_process_volfp+0xed)[0x405c0d] /usr/sbin/glusterfs(mgmt_getspec_cbk+0x312)[0x40dbd2] /usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5)[0x3889e0f7b5] /usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x1a1)[0x3889e10891] /usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x28)[0x3889e0bbd8] /usr/lib64/glusterfs/3.7.11/rpc-transport/socket.so(+0x94cd)[0x7f8d088e04cd] /usr/...
2018 Jan 18
0
issues after botched update
...:41:174726-home-client-0-0-0 [2018-01-18 08:38:56.298125] I [glusterfsd-mgmt.c:52:mgmt_cbk_spec] 0-mgmt: Volume file changed [2018-01-18 08:38:56.384120] I [glusterfsd-mgmt.c:52:mgmt_cbk_spec] 0-mgmt: Volume file changed [2018-01-18 08:38:56.394284] I [glusterfsd-mgmt.c:1821:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing [2018-01-18 08:38:56.450621] I [glusterfsd-mgmt.c:1821:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing [2018-01-18 08:39:17.606237] E [MSGID: 113107] [posix.c:1150:posix_seek] 0-home-posix: seek failed on fd 639 le...
2017 Aug 08
0
Slow write times to gluster disk
...xlator_init+0x52)[0x3889622a82] > > /usr/lib64/libglusterfs.so.0(glusterfs_graph_init+0x31)[0x3889669aa1] > > /usr/lib64/libglusterfs.so.0(glusterfs_graph_activate+0x57)[0x3889669bd7] > > /usr/sbin/glusterfs(glusterfs_process_volfp+0xed)[0x405c0d] > > /usr/sbin/glusterfs(mgmt_getspec_cbk+0x312)[0x40dbd2] > > /usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5)[0x3889e0f7b5] > > /usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x1a1)[0x3889e10891] > > /usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x28)[0x3889e0bbd8] > > /usr/lib64/glusterfs/3.7.11/rpc-transport/s...
2013 Sep 19
0
dht_layout_dir_mismatch
...] W [socket.c:514:__socket_rwv] 0-glusterfs: readv failed (No data available) [2013-09-18 22:04:32.671484] W [socket.c:1962:__socket_proto_state_machine] 0-glusterfs: reading from socket failed. Error (No data available), peer (127.0.0.1:24007) [2013-09-18 22:04:42.977516] I [glusterfsd-mgmt.c:1583:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing [2013-09-18 22:14:25.221279] I [dht-layout.c:745:dht_layout_dir_mismatch] 0-USER-HOME-dht: subvol: USER-HOME-client-2; inode layout - 2147483646 - 3221225468; disk layout - 0 - 1073741822 [2013-09-18 22:14:25.221338] I [dht-common.c:623:dht_revalidate_...