Displaying 15 results from an estimated 15 matches for "virtnod".
2017 Jul 20 · 2 · glusterd-locks.c:572:glusterd_mgmt_v3_lock
...nagement: Unable to
acquire lock for vm-images-repo
[2017-07-19 15:07:43.130320] E [MSGID: 106376]
[glusterd-op-sm.c:7775:glusterd_op_sm] 0-management: handler returned: -1
[2017-07-19 15:07:43.130665] E [MSGID: 106116]
[glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Locking
failed on virtnode-0-1-gluster. Please check log file for details.
[2017-07-19 15:07:43.131293] E [MSGID: 106116]
[glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Locking
failed on virtnode-0-2-gluster. Please check log file for details.
[2017-07-19 15:07:43.131360] E [MSGID: 106151]
[glusterd-syncop.c:...
2017 Jul 20 · 2 · glusterd-locks.c:572:glusterd_mgmt_v3_lock
...> [2017-07-19 15:07:43.130320] E [MSGID: 106376]
> [glusterd-op-sm.c:7775:glusterd_op_sm] 0-management: handler
> returned: -1
> [2017-07-19 15:07:43.130665] E [MSGID: 106116]
> [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management:
> Locking failed on virtnode-0-1-gluster. Please check log file for
> details.
> [2017-07-19 15:07:43.131293] E [MSGID: 106116]
> [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management:
> Locking failed on virtnode-0-2-gluster. Please check log file for
> details.
> [2017-07-19...
2017 Jul 20 · 2 · glusterd-locks.c:572:glusterd_mgmt_v3_lock
...SGID: 106376]
>> [glusterd-op-sm.c:7775:glusterd_op_sm] 0-management: handler
>> returned: -1
>> [2017-07-19 15:07:43.130665] E [MSGID: 106116]
>> [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management:
>> Locking failed on virtnode-0-1-gluster. Please check log file
>> for details.
>> [2017-07-19 15:07:43.131293] E [MSGID: 106116]
>> [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management:
>> Locking failed on virtnode-0-2-gluster. Please check log file
>>...
2017 Jul 20 · 0 · glusterd-locks.c:572:glusterd_mgmt_v3_lock
...re lock for vm-images-repo
> [2017-07-19 15:07:43.130320] E [MSGID: 106376] [glusterd-op-sm.c:7775:glusterd_op_sm]
> 0-management: handler returned: -1
> [2017-07-19 15:07:43.130665] E [MSGID: 106116]
> [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Locking
> failed on virtnode-0-1-gluster. Please check log file for details.
> [2017-07-19 15:07:43.131293] E [MSGID: 106116]
> [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Locking
> failed on virtnode-0-2-gluster. Please check log file for details.
> [2017-07-19 15:07:43.131360] E [MSGID: 106151]...
2017 Jul 20 · 0 · glusterd-locks.c:572:glusterd_mgmt_v3_lock
...s-repo
>> [2017-07-19 15:07:43.130320] E [MSGID: 106376]
>> [glusterd-op-sm.c:7775:glusterd_op_sm] 0-management: handler returned: -1
>> [2017-07-19 15:07:43.130665] E [MSGID: 106116]
>> [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Locking
>> failed on virtnode-0-1-gluster. Please check log file for details.
>> [2017-07-19 15:07:43.131293] E [MSGID: 106116]
>> [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Locking
>> failed on virtnode-0-2-gluster. Please check log file for details.
>> [2017-07-19 15:07:43.131360] E...
2017 Jul 26 · 2 · glusterd-locks.c:572:glusterd_mgmt_v3_lock
...5:07:43.130320] E [MSGID: 106376]
>>> [glusterd-op-sm.c:7775:glusterd_op_sm] 0-management: handler returned:
>>> -1
>>> [2017-07-19 15:07:43.130665] E [MSGID: 106116]
>>> [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Locking
>>> failed on virtnode-0-1-gluster. Please check log file for details.
>>> [2017-07-19 15:07:43.131293] E [MSGID: 106116]
>>> [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Locking
>>> failed on virtnode-0-2-gluster. Please check log file for details.
>>> [2017-07-19 15...
2017 Jul 26 · 0 · glusterd-locks.c:572:glusterd_mgmt_v3_lock
...>>> [glusterd-op-sm.c:7775:glusterd_op_sm] 0-management: handler
>>> returned: -1
>>> [2017-07-19 15:07:43.130665] E [MSGID: 106116]
>>> [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors]
>>> 0-management: Locking failed on virtnode-0-1-gluster. Please
>>> check log file for details.
>>> [2017-07-19 15:07:43.131293] E [MSGID: 106116]
>>> [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors]
>>> 0-management: Locking failed on virtnode-0-2-gluster. Please
>>>...
2017 Jul 27 · 0 · glusterd-locks.c:572:glusterd_mgmt_v3_lock
...:7775:glusterd_op_sm] 0-management:
>>>> handler returned: -1
>>>> [2017-07-19 15:07:43.130665] E [MSGID: 106116]
>>>> [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors]
>>>> 0-management: Locking failed on virtnode-0-1-gluster.
>>>> Please check log file for details.
>>>> [2017-07-19 15:07:43.131293] E [MSGID: 106116]
>>>> [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors]
>>>> 0-management: Locking failed on virtnod...
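The lock-failure snippets above all repeat the same `gd_mgmt_v3_collate_errors` pattern, differing only in the peer name. As a minimal sketch (not part of any result above; the function name and sample lines are illustrative, taken from the quoted log text), one could extract the set of peers that failed to take the mgmt_v3 lock like this:

```python
import re

# Matches the "Locking failed on <peer>" message seen in the
# gd_mgmt_v3_collate_errors lines quoted in the results above.
LOCK_FAIL = re.compile(r"Locking failed on (\S+)\. Please check log file")

def failed_peers(log_lines):
    """Return the set of peer names that reported a mgmt_v3 lock failure."""
    peers = set()
    for line in log_lines:
        m = LOCK_FAIL.search(line)
        if m:
            peers.add(m.group(1))
    return peers

sample = [
    "[2017-07-19 15:07:43.130665] E [MSGID: 106116] "
    "[glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: "
    "Locking failed on virtnode-0-1-gluster. Please check log file for details.",
    "[2017-07-19 15:07:43.131293] E [MSGID: 106116] "
    "[glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: "
    "Locking failed on virtnode-0-2-gluster. Please check log file for details.",
]
print(sorted(failed_peers(sample)))
# → ['virtnode-0-1-gluster', 'virtnode-0-2-gluster']
```

Knowing which peers refused the lock narrows down where the stale lock (here, on volume vm-images-repo) is being held.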
2017 Jun 29 · 2 · afr-self-heald.c:479:afr_shd_index_sweep
...manually kill a brick process on a non critical
> volume, after that into the log I see:
>
> [2017-06-29 07:03:50.074388] I [MSGID: 100030]
> [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfsd: Started running
> /usr/sbin/glusterfsd version 3.8.12 (args: /usr/sbin/glusterfsd -s
> virtnode-0-1-gluster --volfile-id
> iso-images-repo.virtnode-0-1-gluster.data-glusterfs-brick1b-iso-images-repo
> -p
> /var/lib/glusterd/vols/iso-images-repo/run/virtnode-0-1-gluster-data-glusterfs-brick1b-iso-images-repo.pid
> -S /var/run/gluster/c779852c21e2a91eaabbdda3b9127262.socket
>...
2017 Jun 29 · 0 · afr-self-heald.c:479:afr_shd_index_sweep
...0'.
Today I've tried to manually kill a brick process on a non critical
volume, after that into the log I see:
[2017-06-29 07:03:50.074388] I [MSGID: 100030] [glusterfsd.c:2454:main]
0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version
3.8.12 (args: /usr/sbin/glusterfsd -s virtnode-0-1-gluster --volfile-id
iso-images-repo.virtnode-0-1-gluster.data-glusterfs-brick1b-iso-images-repo
-p
/var/lib/glusterd/vols/iso-images-repo/run/virtnode-0-1-gluster-data-glusterfs-brick1b-iso-images-repo.pid
-S /var/run/gluster/c779852c21e2a91eaabbdda3b9127262.socket --brick-name
/data/glusterf...
2017 Jun 29 · 2 · afr-self-heald.c:479:afr_shd_index_sweep
...>> critical volume, after that into the log I see:
>>
>> [2017-06-29 07:03:50.074388] I [MSGID: 100030]
>> [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfsd: Started running
>> /usr/sbin/glusterfsd version 3.8.12 (args: /usr/sbin/glusterfsd
>> -s virtnode-0-1-gluster --volfile-id
>> iso-images-repo.virtnode-0-1-gluster.data-glusterfs-brick1b-iso-images-repo
>> -p
>> /var/lib/glusterd/vols/iso-images-repo/run/virtnode-0-1-gluster-data-glusterfs-brick1b-iso-images-repo.pid
>> -S /var/run/gluster/c779852c21e2a91...
2017 Jun 29 · 0 · afr-self-heald.c:479:afr_shd_index_sweep
...tried to manually kill a brick process on a non critical
> volume, after that into the log I see:
>
> [2017-06-29 07:03:50.074388] I [MSGID: 100030] [glusterfsd.c:2454:main]
> 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.8.12
> (args: /usr/sbin/glusterfsd -s virtnode-0-1-gluster --volfile-id
> iso-images-repo.virtnode-0-1-gluster.data-glusterfs-brick1b-iso-images-repo
> -p /var/lib/glusterd/vols/iso-images-repo/run/virtnode-0-1-
> gluster-data-glusterfs-brick1b-iso-images-repo.pid -S /var/run/gluster/
> c779852c21e2a91eaabbdda3b9127262.socket --bri...
2017 Jun 29 · 0 · afr-self-heald.c:479:afr_shd_index_sweep
...ll a brick process on a non critical
>> volume, after that into the log I see:
>>
>> [2017-06-29 07:03:50.074388] I [MSGID: 100030] [glusterfsd.c:2454:main]
>> 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.8.12
>> (args: /usr/sbin/glusterfsd -s virtnode-0-1-gluster --volfile-id
>> iso-images-repo.virtnode-0-1-gluster.data-glusterfs-brick1b-iso-images-repo
>> -p /var/lib/glusterd/vols/iso-images-repo/run/virtnode-0-1-glus
>> ter-data-glusterfs-brick1b-iso-images-repo.pid -S
>> /var/run/gluster/c779852c21e2a91eaabbdda3b91272...
2017 Jun 29 · 1 · afr-self-heald.c:479:afr_shd_index_sweep
...hat into the log I see:
>>>
>>> [2017-06-29 07:03:50.074388] I [MSGID: 100030]
>>> [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfsd: Started
>>> running /usr/sbin/glusterfsd version 3.8.12 (args:
>>> /usr/sbin/glusterfsd -s virtnode-0-1-gluster --volfile-id
>>> iso-images-repo.virtnode-0-1-gluster.data-glusterfs-brick1b-iso-images-repo
>>> -p
>>> /var/lib/glusterd/vols/iso-images-repo/run/virtnode-0-1-gluster-data-glusterfs-brick1b-iso-images-repo.pid
>>> -S...
2017 Jun 28 · 2 · afr-self-heald.c:479:afr_shd_index_sweep
On Wed, Jun 28, 2017 at 9:45 PM, Ravishankar N <ravishankar at redhat.com>
wrote:
> On 06/28/2017 06:52 PM, Paolo Margara wrote:
>
>> Hi list,
>>
>> yesterday I noted the following lines into the glustershd.log log file:
>>
>> [2017-06-28 11:53:05.000890] W [MSGID: 108034]
>> [afr-self-heald.c:479:afr_shd_index_sweep]
>>
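All of the quoted entries share the common glusterfs log-line layout: `[timestamp] LEVEL [MSGID: n] [file:line:function] component: message`. As a sketch for pulling structured fields out of such lines (the regex and the sample component/message are assumptions inferred from the snippets above, not a definitive grammar of the format):

```python
import re

# One glusterfs log line, e.g.
# [2017-06-28 11:53:05.000890] W [MSGID: 108034] [afr-self-heald.c:479:afr_shd_index_sweep] ...
LOG_RE = re.compile(
    r"\[(?P<ts>[^\]]+)\]\s+"         # timestamp
    r"(?P<level>[EWIDT])\s+"         # severity letter (E=error, W=warning, I=info)
    r"\[MSGID: (?P<msgid>\d+)\]\s+"  # numeric message identifier
    r"\[(?P<src>[^\]]+)\]\s+"        # file:line:function
    r"(?P<rest>.*)"                  # component and message text
)

def parse(line):
    """Parse one log line into a dict of fields, or None if it doesn't match."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else None

rec = parse("[2017-06-28 11:53:05.000890] W [MSGID: 108034] "
            "[afr-self-heald.c:479:afr_shd_index_sweep] 0-shd: msg")
print(rec["level"], rec["msgid"], rec["src"])
# → W 108034 afr-self-heald.c:479:afr_shd_index_sweep
```

Grouping archive hits by `msgid` or `src` this way is how the clusters above (106116 lock failures vs. the 108034 index-sweep warning) can be separated automatically.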