Displaying 20 results from an estimated 39 matches for "106118".
2023 Jun 01
3
Using glusterfs for virtual machines with qcow2 images
....so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b
* Errors on gluster1.linova.de:
glusterd.log:[2023-05-31 23:56:00.032251 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
glusterd.log:[2023-06-01 02:22:04.133274 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
glusterd.log:[2023-06-01 02:44:00.046099 +00...
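The "glusterd.log:" prefixes above suggest the poster grepped the glusterd log for these entries; a minimal sketch of the same search, assuming the default /var/log/glusterfs location referenced elsewhere in these threads:

    # Find lock-acquisition failures (MSGID 106118) in the glusterd log.
    # The log path is an assumption; adjust it for your distribution.
    grep 'MSGID: 106118' /var/log/glusterfs/glusterd.log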
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
....so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b
* Errors on gluster1.linova.de:
glusterd.log:[2023-05-31 23:56:00.032251 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
glusterd.log:[2023-06-01 02:22:04.133274 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
glusterd.log:[2023-06-01 02:44:00.046099 +00...
2023 Jun 01
1
Using glusterfs for virtual machines with qcow2 images
...d.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b
* Errors on gluster1.linova.de:
glusterd.log:[2023-05-31 23:56:00.032251 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
glusterd.log:[2023-06-01 02:22:04.133274 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
glusterd.log:[2023-06-01 02:44:00.046099 +0000...
2023 Jun 01
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qcow2 images
...7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b
>
> * Errors on gluster1.linova.de:
>
> glusterd.log:[2023-05-31 23:56:00.032251 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
> glusterd.log:[2023-06-01 02:22:04.133274 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
> glusterd.log:[2023-06-01 02:44:00.04...
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
...> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by
> a410159b-12db-4cf7-bad5-c5c817679d1b
>
> * Errors on gluster1.linova.de:
>
> glusterd.log:[2023-05-31 23:56:00.032251 +0000] E [MSGID: 106118]
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire
> lock for gfs_vms
> glusterd.log:[2023-06-01 02:22:04.133274 +0000] E [MSGID: 106118]
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire
> lock for gfs_vms
> glusterd.log:[20...
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qcow2 images
...86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)
> [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by
> a410159b-12db-4cf7-bad5-c5c817679d1b
> >
> > * Errors on gluster1.linova.de:
> >
> > glusterd.log:[2023-05-31 23:56:00.032251 +0000] E [MSGID: 106118]
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire
> lock for gfs_vms
> > glusterd.log:[2023-06-01 02:22:04.133274 +0000] E [MSGID: 106118]
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire
> lock for gfs_vms
> > gluste...
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qcow2 images
...s/10.1/xlator/mgmt/glusterd.so(+0xcc525)
>> [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by
>> a410159b-12db-4cf7-bad5-c5c817679d1b
>> >
>> > * Errors on gluster1.linova.de:
>> >
>> > glusterd.log:[2023-05-31 23:56:00.032251 +0000] E [MSGID: 106118]
>> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire
>> lock for gfs_vms
>> > glusterd.log:[2023-06-01 02:22:04.133274 +0000] E [MSGID: 106118]
>> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire
>> lock for gfs_...
2023 Jun 05
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qcow2 images
.../glusterd.so(+0xcc525)
> [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by
> a410159b-12db-4cf7-bad5-c5c817679d1b
> >
> > * Errors on gluster1.linova.de:
> >
> > glusterd.log:[2023-05-31 23:56:00.032251 +0000] E [MSGID: 106118]
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to
> acquire lock for gfs_vms
> > glusterd.log:[2023-06-01 02:22:04.133274 +0000] E [MSGID: 106118]
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to
> acquir...
2018 May 02
3
Healing : No space left on device
...tor/mgmt/glusterd.so(+0x22549)
[0x7f0047ae2549]
-->/usr/lib64/glusterfs/3.12.9/xlator/mgmt/glusterd.so(+0x2bdf0)
[0x7f0047aebdf0]
-->/usr/lib64/glusterfs/3.12.9/xlator/mgmt/glusterd.so(+0xd8371)
[0x7f0047b98371] ) 0-management: Lock for vol thedude not held
The message "W [MSGID: 106118]
[glusterd-handler.c:6342:__glusterd_peer_rpc_notify] 0-management: Lock
not released for rom" repeated 3 times between [2018-05-02
09:45:57.262321] and [2018-05-02 09:46:06.267804]
[2018-05-02 09:46:06.267826] W [MSGID: 106118]
[glusterd-handler.c:6342:__glusterd_peer_rpc_notify] 0-ma...
2017 Jun 21
2
Gluster failure due to "0-management: Lock not released for <volumename>"
...rt-type: tcp
Bricks:
Brick1: gfsnode1:/media/brick1
Brick2: gfsnode2:/media/brick1
Brick3: gfsnode3:/media/brick1
Brick4: gfsnode1:/media/brick2
Brick5: gfsnode2:/media/brick2
Brick6: gfsnode3:/media/brick2
Options Reconfigured:
nfs.disable: on
[2017-06-21 16:02:52.376709] W [MSGID: 106118]
[glusterd-handler.c:5913:__glusterd_peer_rpc_notify] 0-management: Lock not
released for teravolume
[2017-06-21 16:03:03.429032] I [MSGID: 106163]
[glusterd-handshake.c:1309:__glusterd_mgmt_hndsk_versions_ack] 0-management:
using the op-version 31000
[2017-06-21 16:13:13.326478] E [rpc-clnt.c:20...
2017 Nov 07
0
Gluster clients can't see directories that exist or are created within a mounted volume, but can enter them.
...o(+0x22e5a) [0x7f5047169e5a] -->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0x2cdc8) [0x7f5047173dc8] -->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0xe372a) [0x7f504722a72a] ) 0-management: Lock for vol dev_static not held
glusterd.log:[2017-11-05 22:37:06.934806] W [MSGID: 106118] [glusterd-handler.c:6309:__glusterd_peer_rpc_notify] 0-management: Lock not released for dev_static
glusterd.log:[2017-11-05 22:39:49.924472] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0x22e5a) [0x7fde97921e5a] -->/usr/lib64/glus...
2017 Nov 06
2
Gluster clients can't see directories that exist or are created within a mounted volume, but can enter them.
Do the users have permission to see/interact with the directories, in
addition to the files?
On Mon, Nov 6, 2017 at 1:55 PM, Nithya Balachandran <nbalacha at redhat.com>
wrote:
> Hi,
>
> Please provide the gluster volume info. Do you see any errors in the
> client mount log file (/var/log/glusterfs/var-lib-mountedgluster.log)?
>
>
> Thanks,
> Nithya
>
> On 6
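The reply above asks for the volume configuration and the client mount log; a minimal sketch of collecting both, reusing the log path cited in the message (the filename is derived from the mount point and will differ per setup):

    # Print the volume configuration requested in the reply.
    gluster volume info
    # Inspect the client mount log named after the mount point, per the reply.
    tail -n 100 /var/log/glusterfs/var-lib-mountedgluster.log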
2017 Jun 22
0
Gluster failure due to "0-management: Lock not released for <volumename>"
...a/brick1
>
> Brick3: gfsnode3:/media/brick1
>
> Brick4: gfsnode1:/media/brick2
>
> Brick5: gfsnode2:/media/brick2
>
> Brick6: gfsnode3:/media/brick2
>
> Options Reconfigured:
>
> nfs.disable: on
>
>
>
>
>
> [2017-06-21 16:02:52.376709] W [MSGID: 106118] [glusterd-handler.c:5913:__glusterd_peer_rpc_notify]
> 0-management: Lock not released for teravolume
>
> [2017-06-21 16:03:03.429032] I [MSGID: 106163]
> [glusterd-handshake.c:1309:__glusterd_mgmt_hndsk_versions_ack]
> 0-management: using the op-version 31000
>
> [2017-06-21...
2017 Jun 27
2
Gluster failure due to "0-management: Lock not released for <volumename>"
...> Brick4: gfsnode1:/media/brick2
>>
>> Brick5: gfsnode2:/media/brick2
>>
>> Brick6: gfsnode3:/media/brick2
>>
>> Options Reconfigured:
>>
>> nfs.disable: on
>>
>>
>>
>>
>>
>> [2017-06-21 16:02:52.376709] W [MSGID: 106118]
>> [glusterd-handler.c:5913:__glusterd_peer_rpc_notify] 0-management: Lock
>> not released for teravolume
>>
>> [2017-06-21 16:03:03.429032] I [MSGID: 106163]
>> [glusterd-handshake.c:1309:__glusterd_mgmt_hndsk_versions_ack]
>> 0-management: using the op-version...
2018 May 02
0
Healing : No space left on device
...> [0x7f0047ae2549]
> -->/usr/lib64/glusterfs/3.12.9/xlator/mgmt/glusterd.so(+0x2bdf0)
> [0x7f0047aebdf0]
> -->/usr/lib64/glusterfs/3.12.9/xlator/mgmt/glusterd.so(+0xd8371)
> [0x7f0047b98371] ) 0-management: Lock for vol thedude not held
> The message "W [MSGID: 106118]
> [glusterd-handler.c:6342:__glusterd_peer_rpc_notify] 0-management: Lock
> not released for rom" repeated 3 times between [2018-05-02
> 09:45:57.262321] and [2018-05-02 09:46:06.267804]
> [2018-05-02 09:46:06.267826] W [MSGID: 106118]
> [glusterd-handler.c:6342:__glust...
2017 Jun 29
0
Gluster failure due to "0-management: Lock not released for <volumename>"
...rt-type: tcp
Bricks:
Brick1: gfsnode1:/media/brick1
Brick2: gfsnode2:/media/brick1
Brick3: gfsnode3:/media/brick1
Brick4: gfsnode1:/media/brick2
Brick5: gfsnode2:/media/brick2
Brick6: gfsnode3:/media/brick2
Options Reconfigured:
nfs.disable: on
[2017-06-21 16:02:52.376709] W [MSGID: 106118] [glusterd-handler.c:5913:__glusterd_peer_rpc_notify] 0-management: Lock not released for teravolume
[2017-06-21 16:03:03.429032] I [MSGID: 106163] [glusterd-handshake.c:1309:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 31000
[2017-06-21 16:13:13.326478] E [rpc-clnt.c:20...
2017 Jun 30
3
Gluster failure due to "0-management: Lock not released for <volumename>"
...a/brick1
>
> Brick3: gfsnode3:/media/brick1
>
> Brick4: gfsnode1:/media/brick2
>
> Brick5: gfsnode2:/media/brick2
>
> Brick6: gfsnode3:/media/brick2
>
> Options Reconfigured:
>
> nfs.disable: on
>
>
>
>
>
> [2017-06-21 16:02:52.376709] W [MSGID: 106118]
> [glusterd-handler.c:5913:__glusterd_peer_rpc_notify] 0-management: Lock not
> released for teravolume
>
> [2017-06-21 16:03:03.429032] I [MSGID: 106163]
> [glusterd-handshake.c:1309:__glusterd_mgmt_hndsk_versions_ack]
> 0-management: using the op-version 31000
>
> [2017-0...
2017 Dec 15
3
Production Volume will not start
...ing failed on nsgtpcfs02.corp.nsgdv.com. Please check log file for details.
[2017-12-15 18:56:17.965184] E [MSGID: 106116] [glusterd-mgmt.c:124:gd_mgmt_v3_collate_errors] 0-management: Unlocking failed on tpc-arbiter1-100617. Please check log file for details.
[2017-12-15 18:56:17.965277] E [MSGID: 106118] [glusterd-mgmt.c:2087:glusterd_mgmt_v3_release_peer_locks] 0-management: Unlock failed on peers
[2017-12-15 18:56:17.965372] W [glusterd-locks.c:843:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.12.3/xlator/mgmt/glusterd.so(+0xe5631) [0x7f48e44a1631] -->/usr/lib64/glusterfs/3.12.3/xlat...
2017 Dec 18
0
Production Volume will not start
...nsgdv.com. Please check log file for details.
>
> [2017-12-15 18:56:17.965184] E [MSGID: 106116]
> [glusterd-mgmt.c:124:gd_mgmt_v3_collate_errors] 0-management: Unlocking
> failed on tpc-arbiter1-100617. Please check log file for details.
>
> [2017-12-15 18:56:17.965277] E [MSGID: 106118] [glusterd-mgmt.c:2087:
> glusterd_mgmt_v3_release_peer_locks] 0-management: Unlock failed on peers
>
> [2017-12-15 18:56:17.965372] W [glusterd-locks.c:843:glusterd_mgmt_v3_unlock]
> (-->/usr/lib64/glusterfs/3.12.3/xlator/mgmt/glusterd.so(+0xe5631)
> [0x7f48e44a1631] -->/usr/l...
2017 Jul 04
0
Gluster failure due to "0-management: Lock not released for <volumename>"
...rt-type: tcp
Bricks:
Brick1: gfsnode1:/media/brick1
Brick2: gfsnode2:/media/brick1
Brick3: gfsnode3:/media/brick1
Brick4: gfsnode1:/media/brick2
Brick5: gfsnode2:/media/brick2
Brick6: gfsnode3:/media/brick2
Options Reconfigured:
nfs.disable: on
[2017-06-21 16:02:52.376709] W [MSGID: 106118] [glusterd-handler.c:5913:__glusterd_peer_rpc_notify] 0-management: Lock not released for teravolume
[2017-06-21 16:03:03.429032] I [MSGID: 106163] [glusterd-handshake.c:1309:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 31000
[2017-06-21 16:13:13.326478] E [rpc-clnt.c:20...
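Common to the excerpts above is a glusterd management (mgmt_v3) lock that was acquired but never released. These locks are held in glusterd's memory, so restarting the daemon on the node holding the lock releases it; a sketch, assuming systemd and the default glusterd service name:

    # The "Lock for <vol> held by <uuid>" lines name the holding peer's UUID;
    # match it against the UUIDs shown by:
    gluster peer status
    # Restarting glusterd on that peer drops its in-memory mgmt_v3 locks.
    # This is a generic recovery step, not advice taken from these threads.
    systemctl restart glusterd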