Displaying 20 results from an estimated 46 matches for "106499".
2018 Feb 19 (2 replies): Upgrade from 3.8.15 to 3.12.5
..._add_resp] 0-glusterd: Responded to found1.ssd.org (0), ret: 0, op_ret: -1
[2018-02-19 05:54:49.896254] E [MSGID: 106062] [glusterd-utils.c:9983:glusterd_max_opversion_use_rsp_dict] 0-management: Maximum supported op-version not set in destination dictionary
[2018-02-19 05:57:21.353136] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume VMData
[2018-02-19 05:57:21.358345] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume VMData2
[2018-02-19 05...
2018 Feb 19 (0 replies): Upgrade from 3.8.15 to 3.12.5
...rd: Responded to found1.ssd.org (0), ret: 0, op_ret: -1
> [2018-02-19 05:54:49.896254] E [MSGID: 106062] [glusterd-utils.c:9983:
> glusterd_max_opversion_use_rsp_dict] 0-management: Maximum supported
> op-version not set in destination dictionary
> [2018-02-19 05:57:21.353136] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume]
> 0-management: Received status volume req for volume VMData
> [2018-02-19 05:57:21.358345] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume]
> 0-management: Received status volume req for volume VMData2
&...
2018 Jan 11 (3 replies): Creating cluster replica on 2 nodes 2 bricks each.
...event_handler] 0-transport: disconnecting now
[2018-01-11 16:03:40.485256] I [cli-rpc-ops.c:2244:gf_cli_set_volume_cbk] 0-cli: Received resp to set
[2018-01-11 16:03:40.485497] I [input.c:31:cli_batch] 0-: Exiting with: 0
-- etc-glusterfs-glusterd.vol.log --
[2018-01-10 14:59:23.676814] I [MSGID: 106499] [glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management: Received status volume req for volume scratch
[2018-01-10 15:00:29.516071] I [MSGID: 106488] [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: Received get vol req
[2018-01-10 15:01:09.872082] I [MSGID:...
2018 Jan 12 (0 replies): Creating cluster replica on 2 nodes 2 bricks each.
..._event_handler]
0-transport: disconnecting now
[2018-01-11 16:03:40.485256] I [cli-rpc-ops.c:2244:gf_cli_set_volume_cbk]
0-cli: Received resp to set
[2018-01-11 16:03:40.485497] I [input.c:31:cli_batch] 0-: Exiting with: 0
-- etc-glusterfs-glusterd.vol.log --
[2018-01-10 14:59:23.676814] I [MSGID: 106499]
[glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management:
Received status volume req for volume scratch
[2018-01-10 15:00:29.516071] I [MSGID: 106488]
[glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management:
Received get vol req
[2018-01-10 15:01:09.872082] I [MSGID:...
2017 Jul 20 (2 replies): glusterd-locks.c:572:glusterd_mgmt_v3_lock
...f00f]
-->/usr/lib64/glusterfs/3.8.12/xlator/mgmt/glusterd.so(+0x2ba25)
[0x7f373f250a25]
-->/usr/lib64/glusterfs/3.8.12/xlator/mgmt/glusterd.so(+0xd048f)
[0x7f373f2f548f] ) 0-management: Lock for vm-images-repo held by
2c6f154f-efe3-4479-addc-b2021aa9d5df
[2017-07-19 15:07:43.128242] I [MSGID: 106499]
[glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management:
Received status volume req for volume vm-images-repo
[2017-07-19 15:07:43.130244] E [MSGID: 106119]
[glusterd-op-sm.c:3782:glusterd_op_ac_lock] 0-management: Unable to
acquire lock for vm-images-repo
[2017-07-19 15:07:43.13032...
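The excerpt above shows the typical signature of management-lock contention: glusterd keeps one lock per volume, records the UUID of the peer holding it, and rejects further lock requests for that volume until it is released. A rough illustrative model of that behavior (this is not glusterd's actual code, just a sketch of the semantics the log lines imply):

```python
import uuid

# One entry per volume, recording which peer (by UUID) holds the lock.
mgmt_v3_locks = {}

def mgmt_v3_lock(volume, peer_uuid):
    """Try to take the per-volume management lock; fail if already held."""
    holder = mgmt_v3_locks.get(volume)
    if holder is not None:
        # corresponds to: "Lock for <volume> held by <uuid>" followed by
        # "Unable to acquire lock for <volume>"
        return False, holder
    mgmt_v3_locks[volume] = peer_uuid
    return True, peer_uuid

def mgmt_v3_unlock(volume):
    # release, e.g. once the volume operation completes
    mgmt_v3_locks.pop(volume, None)

peer_a = uuid.uuid4()
peer_b = uuid.uuid4()

ok, _ = mgmt_v3_lock("vm-images-repo", peer_a)         # acquired by peer_a
busy, holder = mgmt_v3_lock("vm-images-repo", peer_b)  # rejected: held by peer_a
mgmt_v3_unlock("vm-images-repo")                       # now free again
```

A status request that arrives while another peer's operation still holds the lock fails exactly this way, which is why the `E [MSGID: 106119] ... Unable to acquire lock` line directly follows the `I [MSGID: 106499]` status request in the log.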
2018 Jan 15 (1 reply): Creating cluster replica on 2 nodes 2 bricks each.
...onnecting now
> [2018-01-11 16:03:40.485256] I [cli-rpc-ops.c:2244:gf_cli_set_volume_cbk]
> 0-cli: Received resp to set
> [2018-01-11 16:03:40.485497] I [input.c:31:cli_batch] 0-: Exiting with: 0
>
> -- etc-glusterfs-glusterd.vol.log --
>
> [2018-01-10 14:59:23.676814] I [MSGID: 106499]
> [glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management:
> Received status volume req for volume scratch
> [2018-01-10 15:00:29.516071] I [MSGID: 106488]
> [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management:
> Received get vol req
> [2018-01-...
2017 Jul 20 (2 replies): glusterd-locks.c:572:glusterd_mgmt_v3_lock
...r/mgmt/glusterd.so(+0x2ba25)
> [0x7f373f250a25]
> -->/usr/lib64/glusterfs/3.8.12/xlator/mgmt/glusterd.so(+0xd048f)
> [0x7f373f2f548f] ) 0-management: Lock for vm-images-repo held by
> 2c6f154f-efe3-4479-addc-b2021aa9d5df
> [2017-07-19 15:07:43.128242] I [MSGID: 106499]
> [glusterd-handler.c:4349:__glusterd_handle_status_volume]
> 0-management: Received status volume req for volume vm-images-repo
> [2017-07-19 15:07:43.130244] E [MSGID: 106119]
> [glusterd-op-sm.c:3782:glusterd_op_ac_lock] 0-management: Unable
> to acquire lock...
2017 Jul 20 (2 replies): glusterd-locks.c:572:glusterd_mgmt_v3_lock
...[0x7f373f250a25]
>> -->/usr/lib64/glusterfs/3.8.12/xlator/mgmt/glusterd.so(+0xd048f)
>> [0x7f373f2f548f] ) 0-management: Lock for vm-images-repo held
>> by 2c6f154f-efe3-4479-addc-b2021aa9d5df
>> [2017-07-19 15:07:43.128242] I [MSGID: 106499]
>> [glusterd-handler.c:4349:__glusterd_handle_status_volume]
>> 0-management: Received status volume req for volume
>> vm-images-repo
>> [2017-07-19 15:07:43.130244] E [MSGID: 106119]
>> [glusterd-op-sm.c:3782:glusterd_op_ac_loc...
2017 Jul 20 (0 replies): glusterd-locks.c:572:glusterd_mgmt_v3_lock
...64/glusterfs/3.8.12/xlator/mgmt/glusterd.so(+0x2ba25)
> [0x7f373f250a25] -->/usr/lib64/glusterfs/3.8.12/xlator/mgmt/glusterd.so(+0xd048f)
> [0x7f373f2f548f] ) 0-management: Lock for vm-images-repo held by
> 2c6f154f-efe3-4479-addc-b2021aa9d5df
> [2017-07-19 15:07:43.128242] I [MSGID: 106499] [glusterd-handler.c:4349:__glusterd_handle_status_volume]
> 0-management: Received status volume req for volume vm-images-repo
> [2017-07-19 15:07:43.130244] E [MSGID: 106119] [glusterd-op-sm.c:3782:glusterd_op_ac_lock]
> 0-management: Unable to acquire lock for vm-images-repo
> [2017-...
2017 Jul 20 (0 replies): glusterd-locks.c:572:glusterd_mgmt_v3_lock
...t; /xlator/mgmt/glusterd.so(+0x2ba25) [0x7f373f250a25]
>> -->/usr/lib64/glusterfs/3.8.12/xlator/mgmt/glusterd.so(+0xd048f)
>> [0x7f373f2f548f] ) 0-management: Lock for vm-images-repo held by
>> 2c6f154f-efe3-4479-addc-b2021aa9d5df
>> [2017-07-19 15:07:43.128242] I [MSGID: 106499]
>> [glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management:
>> Received status volume req for volume vm-images-repo
>> [2017-07-19 15:07:43.130244] E [MSGID: 106119]
>> [glusterd-op-sm.c:3782:glusterd_op_ac_lock] 0-management: Unable to
>> acquire lock...
2017 Jul 26 (2 replies): glusterd-locks.c:572:glusterd_mgmt_v3_lock
...glusterd.so(+0x2ba25) [0x7f373f250a25]
>>> -->/usr/lib64/glusterfs/3.8.12/xlator/mgmt/glusterd.so(+0xd048f)
>>> [0x7f373f2f548f] ) 0-management: Lock for vm-images-repo held by
>>> 2c6f154f-efe3-4479-addc-b2021aa9d5df
>>> [2017-07-19 15:07:43.128242] I [MSGID: 106499]
>>> [glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management:
>>> Received status volume req for volume vm-images-repo
>>> [2017-07-19 15:07:43.130244] E [MSGID: 106119]
>>> [glusterd-op-sm.c:3782:glusterd_op_ac_lock] 0-management: Unable to
>&g...
2017 Jul 26 (0 replies): glusterd-locks.c:572:glusterd_mgmt_v3_lock
...50a25]
>>> -->/usr/lib64/glusterfs/3.8.12/xlator/mgmt/glusterd.so(+0xd048f)
>>> [0x7f373f2f548f] ) 0-management: Lock for vm-images-repo
>>> held by 2c6f154f-efe3-4479-addc-b2021aa9d5df
>>> [2017-07-19 15:07:43.128242] I [MSGID: 106499]
>>> [glusterd-handler.c:4349:__glusterd_handle_status_volume]
>>> 0-management: Received status volume req for volume
>>> vm-images-repo
>>> [2017-07-19 15:07:43.130244] E [MSGID: 106119]
>>> [glusterd-op-sm.c:378...
2018 Jan 11 (0 replies): Creating cluster replica on 2 nodes 2 bricks each.
Hi Jose,
Gluster is working as expected. The Distribute-replicated type just means
that there are now 2 replica sets and files will be distributed across
them.
A volume of type Replicate (1xn where n is the number of bricks in the
replica set) indicates there is no distribution (all files on the
volume will be present on all the bricks in the volume).
A volume of type Distributed-Replicate
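The distinction described above follows directly from how the bricks are laid out at creation time. A minimal sketch in gluster CLI terms (the hostnames and brick paths are illustrative, loosely based on the gluster01 naming used elsewhere in the thread):

```shell
# Pure Replicate (1 x 2): one replica set, every file present on both bricks
gluster volume create scratch replica 2 \
    gluster01:/data/brick1/scratch gluster02:/data/brick1/scratch

# Adding a second pair of bricks turns it into Distributed-Replicate (2 x 2):
# files are distributed across the two replica sets, each set holding a copy
gluster volume add-brick scratch replica 2 \
    gluster01:/data/brick2/scratch gluster02:/data/brick2/scratch
```

This is why adding a second brick pair to an existing 1x2 replica volume makes `gluster volume info` report the type as Distributed-Replicate: nothing is wrong, the volume simply now has two replica sets to distribute across.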
2017 Jul 27 (0 replies): glusterd-locks.c:572:glusterd_mgmt_v3_lock
...-->/usr/lib64/glusterfs/3.8.12/xlator/mgmt/glusterd.so(+0xd048f)
>>>> [0x7f373f2f548f] ) 0-management: Lock for
>>>> vm-images-repo held by 2c6f154f-efe3-4479-addc-b2021aa9d5df
>>>> [2017-07-19 15:07:43.128242] I [MSGID: 106499]
>>>> [glusterd-handler.c:4349:__glusterd_handle_status_volume]
>>>> 0-management: Received status volume req for volume
>>>> vm-images-repo
>>>> [2017-07-19 15:07:43.130244] E [MSGID: 106119]
>>&g...
2017 Jun 21 (2 replies): Gluster failure due to "0-management: Lock not released for <volumename>"
...106118]
[glusterd-handler.c:5913:__glusterd_peer_rpc_notify] 0-management: Lock not
released for teravolume
[2017-06-21 16:18:45.628552] I [MSGID: 106163]
[glusterd-handshake.c:1309:__glusterd_mgmt_hndsk_versions_ack] 0-management:
using the op-version 31000
[2017-06-21 16:23:40.607173] I [MSGID: 106499]
[glusterd-handler.c:4363:__glusterd_handle_status_volume] 0-management:
Received status volume req for volume teravolume
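When chasing a message ID like 106499 across these logs, filtering on the bracketed MSGID tag is usually enough. A small self-contained sketch (the sample lines are copied from the excerpt above; `/var/log/glusterfs/glusterd.log` is the usual default path, but it may differ by distribution):

```shell
# Filter glusterd log lines by MSGID; sample log embedded for illustration.
log=$(mktemp)
cat > "$log" <<'EOF'
[2017-06-21 16:18:45.628552] I [MSGID: 106163] [glusterd-handshake.c:1309:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 31000
[2017-06-21 16:23:40.607173] I [MSGID: 106499] [glusterd-handler.c:4363:__glusterd_handle_status_volume] 0-management: Received status volume req for volume teravolume
EOF
# -F treats the pattern as a fixed string, so the brackets need no escaping
matches=$(grep -F '[MSGID: 106499]' "$log")
printf '%s\n' "$matches"
rm -f "$log"
# on a live node, the equivalent would be:
#   grep -F '[MSGID: 106499]' /var/log/glusterfs/glusterd.log
```

Note that MSGID 106499 lines are informational (level `I`, a status request was received); the errors worth chasing are usually the `E`-level lines immediately around them.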
2018 Mar 06 (0 replies): Fixing a rejected peer
On Tue, Mar 6, 2018 at 6:00 AM, Jamie Lawrence <jlawrence at squaretrade.com>
wrote:
> Hello,
>
> So I'm seeing a rejected peer with 3.12.6. This is with a replica 3 volume.
>
> It actually began as the same problem with a different peer, (call it)
> gluster-2, which I noticed when I couldn't create a new volume. I compared
> /var/lib/glusterd between them, and
2018 Jan 10 (2 replies): Creating cluster replica on 2 nodes 2 bricks each.
Hi Nithya
This is what I have so far: I have peered both cluster nodes together as a replica, from nodes 1A and 1B. Now when I try to add it, I get the error that it is already part of a volume, and when I run `gluster volume info` I see that it has switched to distributed-replica.
Thanks
Jose
[root at gluster01 ~]# gluster volume status
Status of volume: scratch
Gluster process
2018 Mar 06 (4 replies): Fixing a rejected peer
Hello,
So I'm seeing a rejected peer with 3.12.6. This is with a replica 3 volume.
It actually began as the same problem with a different peer, (call it) gluster-2, which I noticed when I couldn't create a new volume. I compared /var/lib/glusterd between them, and found that somehow the options in one of the vols differed. (I suspect this was due to attempting to create the volume via the
2017 Jun 22 (0 replies): Gluster failure due to "0-management: Lock not released for <volumename>"
...__glusterd_peer_rpc_notify]
> 0-management: Lock not released for teravolume
>
> [2017-06-21 16:18:45.628552] I [MSGID: 106163]
> [glusterd-handshake.c:1309:__glusterd_mgmt_hndsk_versions_ack]
> 0-management: using the op-version 31000
>
> [2017-06-21 16:23:40.607173] I [MSGID: 106499] [glusterd-handler.c:4363:__glusterd_handle_status_volume]
> 0-management: Received status volume req for volume teravolume
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/g...
2017 Jun 27 (2 replies): Gluster failure due to "0-management: Lock not released for <volumename>"
...0-management: Lock
>> not released for teravolume
>>
>> [2017-06-21 16:18:45.628552] I [MSGID: 106163]
>> [glusterd-handshake.c:1309:__glusterd_mgmt_hndsk_versions_ack]
>> 0-management: using the op-version 31000
>>
>> [2017-06-21 16:23:40.607173] I [MSGID: 106499]
>> [glusterd-handler.c:4363:__glusterd_handle_status_volume] 0-management:
>> Received status volume req for volume teravolume
>>