Displaying 17 results from an estimated 17 matches for "106004".
2017 Jun 21
2
Gluster failure due to "0-management: Lock not released for <volumename>"
...17-06-21
16:03:03.202284. timeout = 600 for 192.168.150.52:$
[2017-06-21 16:13:13.326519] E [rpc-clnt.c:200:call_bail] 0-management:
bailing out frame type(Peer mgmt) op(--(2)) xid = 0x105 sent = 2017-06-21
16:03:03.204555. timeout = 600 for 192.168.150.53:$
[2017-06-21 16:18:34.456522] I [MSGID: 106004]
[glusterd-handler.c:5888:__glusterd_peer_rpc_notify] 0-management: Peer
<gfsnode2> (<e1e1caa5-9842-40d8-8492-a82b079879a3>), in state <Peer in
Cluste$
[2017-06-21 16:18:34.456619] W
[glusterd-locks.c:675:glusterd_mgmt_v3_unlock]
(-->/usr/lib/x86_64-linux-gnu/glusterfs/3.10.3/xla...
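The snippet above shows glusterd bailing out of a peer-management RPC after the 600-second frame timeout and then warning that a management (mgmt_v3) lock was never released. A first diagnostic pass for this situation might look like the following sketch; the log path assumes a standard distro package layout, and the glusterd restart is the commonly suggested workaround for a stale lock rather than a fix confirmed in this thread:

```shell
# Check cluster membership and volume/brick health from each node
gluster peer status
gluster volume status

# Look for call_bail and mgmt_v3 lock messages in the management log
# (default location on most packaged installs)
tail -n 200 /var/log/glusterfs/glusterd.log | grep -E 'call_bail|mgmt_v3'

# A stale management lock is usually cleared by restarting glusterd on
# the node still holding it. glusterd is only the management daemon;
# the brick (glusterfsd) processes keep serving I/O during the restart.
systemctl restart glusterd
```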
2017 Jun 22
0
Gluster failure due to "0-management: Lock not released for <volumename>"
...meout = 600 for 192.168.150.52:$
>
> [2017-06-21 16:13:13.326519] E [rpc-clnt.c:200:call_bail] 0-management:
> bailing out frame type(Peer mgmt) op(--(2)) xid = 0x105 sent = 2017-06-21
> 16:03:03.204555. timeout = 600 for 192.168.150.53:$
>
> [2017-06-21 16:18:34.456522] I [MSGID: 106004] [glusterd-handler.c:5888:__glusterd_peer_rpc_notify]
> 0-management: Peer <gfsnode2> (<e1e1caa5-9842-40d8-8492-a82b079879a3>),
> in state <Peer in Cluste$
>
> [2017-06-21 16:18:34.456619] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-...
2017 Jun 27
2
Gluster failure due to "0-management: Lock not released for <volumename>"
...150.52:$
>>
>> [2017-06-21 16:13:13.326519] E [rpc-clnt.c:200:call_bail] 0-management:
>> bailing out frame type(Peer mgmt) op(--(2)) xid = 0x105 sent = 2017-06-21
>> 16:03:03.204555. timeout = 600 for 192.168.150.53:$
>>
>> [2017-06-21 16:18:34.456522] I [MSGID: 106004]
>> [glusterd-handler.c:5888:__glusterd_peer_rpc_notify] 0-management: Peer
>> <gfsnode2> (<e1e1caa5-9842-40d8-8492-a82b079879a3>), in state <Peer in
>> Cluste$
>>
>> [2017-06-21 16:18:34.456619] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock]
>>...
2017 Jun 29
0
Gluster failure due to "0-management: Lock not released for <volumename>"
...17-06-21 16:03:03.202284. timeout = 600 for 192.168.150.52:$
[2017-06-21 16:13:13.326519] E [rpc-clnt.c:200:call_bail] 0-management: bailing out frame type(Peer mgmt) op(--(2)) xid = 0x105 sent = 2017-06-21 16:03:03.204555. timeout = 600 for 192.168.150.53:$
[2017-06-21 16:18:34.456522] I [MSGID: 106004] [glusterd-handler.c:5888:__glusterd_peer_rpc_notify] 0-management: Peer <gfsnode2> (<e1e1caa5-9842-40d8-8492-a82b079879a3>), in state <Peer in Cluste$
[2017-06-21 16:18:34.456619] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.10.3/xla...
2017 Jun 30
3
Gluster failure due to "0-management: Lock not released for <volumename>"
...meout = 600 for 192.168.150.52:$
>
> [2017-06-21 16:13:13.326519] E [rpc-clnt.c:200:call_bail] 0-management:
> bailing out frame type(Peer mgmt) op(--(2)) xid = 0x105 sent = 2017-06-21
> 16:03:03.204555. timeout = 600 for 192.168.150.53:$
>
> [2017-06-21 16:18:34.456522] I [MSGID: 106004]
> [glusterd-handler.c:5888:__glusterd_peer_rpc_notify] 0-management: Peer
> <gfsnode2> (<e1e1caa5-9842-40d8-8492-a82b079879a3>), in state <Peer in
> Cluste$
>
> [2017-06-21 16:18:34.456619] W
> [glusterd-locks.c:675:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86...
2017 Jul 04
0
Gluster failure due to "0-management: Lock not released for <volumename>"
...17-06-21 16:03:03.202284. timeout = 600 for 192.168.150.52:$
[2017-06-21 16:13:13.326519] E [rpc-clnt.c:200:call_bail] 0-management: bailing out frame type(Peer mgmt) op(--(2)) xid = 0x105 sent = 2017-06-21 16:03:03.204555. timeout = 600 for 192.168.150.53:$
[2017-06-21 16:18:34.456522] I [MSGID: 106004] [glusterd-handler.c:5888:__glusterd_peer_rpc_notify] 0-management: Peer <gfsnode2> (<e1e1caa5-9842-40d8-8492-a82b079879a3>), in state <Peer in Cluste$
[2017-06-21 16:18:34.456619] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.10.3/xla...
2017 Jul 05
1
Gluster failure due to "0-management: Lock not released for <volumename>"
...meout = 600 for 192.168.150.52:$
>
> [2017-06-21 16:13:13.326519] E [rpc-clnt.c:200:call_bail] 0-management:
> bailing out frame type(Peer mgmt) op(--(2)) xid = 0x105 sent = 2017-06-21
> 16:03:03.204555. timeout = 600 for 192.168.150.53:$
>
> [2017-06-21 16:18:34.456522] I [MSGID: 106004] [glusterd-handler.c:5888:__glusterd_peer_rpc_notify]
> 0-management: Peer <gfsnode2> (<e1e1caa5-9842-40d8-8492-a82b079879a3>),
> in state <Peer in Cluste$
>
> [2017-06-21 16:18:34.456619] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-...
2017 Jul 04
0
Gluster failure due to "0-management: Lock not released for <volumename>"
...17-06-21 16:03:03.202284. timeout = 600 for 192.168.150.52:$
[2017-06-21 16:13:13.326519] E [rpc-clnt.c:200:call_bail] 0-management: bailing out frame type(Peer mgmt) op(--(2)) xid = 0x105 sent = 2017-06-21 16:03:03.204555. timeout = 600 for 192.168.150.53:$
[2017-06-21 16:18:34.456522] I [MSGID: 106004] [glusterd-handler.c:5888:__glusterd_peer_rpc_notify] 0-management: Peer <gfsnode2> (<e1e1caa5-9842-40d8-8492-a82b079879a3>), in state <Peer in Cluste$
[2017-06-21 16:18:34.456619] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.10.3/xla...
2017 May 29
1
Failure while upgrading gluster to 3.10.1
...>>>>> [glusterd-handshake.c:2091:__glusterd_peer_dump_version_cbk]
>>>>>>>>>>>>>>> 0-management: Error through RPC layer, retry again later
>>>>>>>>>>>>>>> [2017-05-17 06:48:35.611944] I [MSGID: 106004]
>>>>>>>>>>>>>>> [glusterd-handler.c:5201:__glusterd_peer_rpc_notify]
>>>>>>>>>>>>>>> 0-management: Peer <192.168.0.7> (<5ec54b4f-f60c-48c6-9e55-95f2bb58f633>),
>>>>>>>>>...
2017 Jul 03
2
Failure while upgrading gluster to 3.10.1
...__glusterd_peer_dump_version_cbk]
>>>>>>>>>>>>>>>>>>>>> 0-management: Error through RPC layer, retry again later
>>>>>>>>>>>>>>>>>>>>> [2017-05-17 06:48:35.611944] I [MSGID: 106004]
>>>>>>>>>>>>>>>>>>>>> [glusterd-handler.c:5201:__glusterd_peer_rpc_notify]
>>>>>>>>>>>>>>>>>>>>> 0-management: Peer <192.168.0.7> (<5ec54b4f-f60c-48c6-9e55-95f2bb5...
2017 Dec 20
2
Upgrading from Gluster 3.8 to 3.12
...[0x7f75fdc12e5c]
> -->/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0x27a08)
> [0x7f75fdc1ca08]
> -->/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0xd07fa)
> [0x7f75fdcc57fa] ) 0-management: Lock for vol shchst01-sto not held
> [2017-12-20 05:02:44.667795] I [MSGID: 106004]
> [glusterd-handler.c:5219:__glusterd_peer_rpc_notify] 0-management: Peer
> <shchhv01-sto> (<f6205edb-a0ea-4247-9594-c4cdc0d05816>), in state <Peer
> Rejected>, has disconnected from glusterd.
> [2017-12-20 05:02:44.667948] W [MSGID: 106118]
> [glusterd-handler.c:5...
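The `<Peer Rejected>` state in the snippet above typically means the rejected node's volume configuration checksum disagrees with the rest of the cluster after the 3.8 to 3.12 upgrade. The upstream troubleshooting guidance for recovering a rejected peer is roughly the sketch below; back up `/var/lib/glusterd` first, and note that the hostname is taken from this thread only as an illustration of "a known-good peer":

```shell
# On the rejected node: stop the management daemon
systemctl stop glusterd

# Wipe the local cluster state but keep glusterd.info (the node's UUID)
cd /var/lib/glusterd
find . -mindepth 1 ! -name glusterd.info -delete

# Restart and re-probe a healthy peer so the volume config is re-fetched
systemctl start glusterd
gluster peer probe shchhv01-sto   # any known-good node in the cluster
systemctl restart glusterd
```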
2017 Dec 20
0
Upgrading from Gluster 3.8 to 3.12
...mt/glusterd.so(+0x1de5c)
[0x7f75fdc12e5c]
-->/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0x27a08)
[0x7f75fdc1ca08]
-->/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0xd07fa)
[0x7f75fdcc57fa] ) 0-management: Lock for vol shchst01-sto not held
[2017-12-20 05:02:44.667795] I [MSGID: 106004]
[glusterd-handler.c:5219:__glusterd_peer_rpc_notify] 0-management: Peer
<shchhv01-sto> (<f6205edb-a0ea-4247-9594-c4cdc0d05816>), in state <Peer
Rejected>, has disconnected from glusterd.
[2017-12-20 05:02:44.667948] W [MSGID: 106118]
[glusterd-handler.c:5241:__glusterd_peer_rpc_n...
2017 Dec 20
0
Upgrading from Gluster 3.8 to 3.12
...;> -->/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0x27a08)
>> [0x7f75fdc1ca08]
>> -->/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0xd07fa)
>> [0x7f75fdcc57fa] ) 0-management: Lock for vol shchst01-sto not held
>> [2017-12-20 05:02:44.667795] I [MSGID: 106004]
>> [glusterd-handler.c:5219:__glusterd_peer_rpc_notify] 0-management: Peer
>> <shchhv01-sto> (<f6205edb-a0ea-4247-9594-c4cdc0d05816>), in state <Peer
>> Rejected>, has disconnected from glusterd.
>> [2017-12-20 05:02:44.667948] W [MSGID: 106118]
>> [...
2017 Dec 19
2
Upgrading from Gluster 3.8 to 3.12
I have not done the upgrade yet. Since this is a production cluster I
need to make sure it stays up, or schedule some downtime if it
doesn't. Thanks.
On Tue, Dec 19, 2017 at 10:11 AM, Atin Mukherjee <amukherj at redhat.com> wrote:
>
>
> On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com>
> wrote:
>>
>> Hi,
>>
2017 Jul 03
0
Failure while upgrading gluster to 3.10.1
...rd_peer_dump_version_cbk] 0-management:
>>>>>>>>>>>>>>>>>>>>>> Error through RPC layer, retry again later
>>>>>>>>>>>>>>>>>>>>>> [2017-05-17 06:48:35.611944] I [MSGID: 106004]
>>>>>>>>>>>>>>>>>>>>>> [glusterd-handler.c:5201:__glusterd_peer_rpc_notify] 0-management: Peer
>>>>>>>>>>>>>>>>>>>>>> <192.168.0.7> (<5ec54b4f-f60c-48c6-9e55...
2017 Oct 24
0
trying to add a 3rd peer
Are you sure that all node names can be resolved on all of the other nodes?
You need to use the names Gluster already knows - check them with "gluster peer status" or "gluster pool list".
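The two commands mentioned above can be run on any node in the trusted pool; exact output formatting varies by Gluster version:

```shell
# Lists each remote peer with the hostname glusterd recorded at probe time
gluster peer status

# Compact one-line-per-node view of the pool, including the local node
gluster pool list
```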
Regards,
Bartosz
> Message written by Ludwig Gamache <ludwig at elementai.com> on 24.10.2017 at 03:13:
>
> All,
>
> I am trying to add a third peer to my gluster
2017 Oct 24
2
trying to add a 3rd peer
All,
I am trying to add a third peer to my gluster install. The first 2 nodes
have been running for many months and have gluster 3.10.3-1.
I recently installed the 3rd node with gluster 3.10.6-1. I was able to start
the gluster daemon on it. After that, I tried to add the peer from one of the 2
existing servers (gluster peer probe IPADDRESS).
That first peer started the communication with the 3rd peer. At