Displaying 20 results from an estimated 571 matches for "mgmt".
2017 Jun 20
2
trash can feature, crashed???
...y volumes:
gluster volume set date01 features.trash on
I also limited the max file size to 500MB:
gluster volume set data01 features.trash-max-filesize 500MB
3 hours after I enabled this, this specific gluster volume went down:
[2017-06-16 16:08:14.410905] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped
[2017-06-16 16:08:14.412027] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is stopped
[2017-06-16 16:08:14.412217] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management...
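Before digging into the crash itself, it is worth confirming what was actually set; a minimal sketch, assuming the volume really is named data01 and a Gluster release that ships gluster volume get:
# show the effective values of the trash options on the volume
gluster volume get data01 features.trash
gluster volume get data01 features.trash-max-filesize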
2017 Jun 20
0
trash can feature, crashed???
...e volume name. Is it data01 or date01?
> I also limited the max file size to 500MB:
> gluster volume set data01 features.trash-max-filesize 500MB
> 3 hours after I enabled this, this specific gluster volume went down:
> [2017-06-16 16:08:14.410905] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-
> management: quotad already stopped
> [2017-06-16 16:08:14.412027] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-
> management: quotad service is stopped
> [2017-06-16 16:08:14.412217] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_pro...
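The name mismatch matters because gluster volume set against a name that does not exist simply errors out, so listing the defined volumes settles which command actually took effect; a minimal sketch:
# list every volume glusterd knows about
gluster volume list
# or show the full option set for one volume
gluster volume info data01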
2023 Jun 01
3
Using glusterfs for virtual machines with qcow2 images
...3.linova.de:/glusterfs/sde1enc/brick
Status: Connected
Number of entries: 0
root@gluster1:~#
These are the warnings and errors I've found in the logs on our three
servers...
* Warnings on gluster1.linova.de:
glusterd.log:[2023-05-31 23:56:00.032233 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244...
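For VM-image workloads like this, a commonly recommended starting point is the stock virt option group shipped with Gluster, which applies the caching, locking, and quorum settings generally advised for qcow2 images; a minimal sketch, with the volume name gv_vms purely illustrative:
# apply the profile shipped in /var/lib/glusterd/groups/virt
gluster volume set gv_vms group virt
# then re-check heal state, as in the output quoted above
gluster volume heal gv_vms info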
2017 Jul 30
1
Lose gnfs connection during test
...28.873 /var/log/messages: Jul 30 18:53:02 localhost_10 kernel: nfs: server 10.147.4.99 not responding, still trying
Here is the error message in nfs.log for gluster:
[2017-07-30 19:26:18.440498] I [rpc-drc.c:689:rpcsvc_drc_init] 0-rpc-service: DRC is turned OFF
[2017-07-30 19:26:18.450180] I [glusterfsd-mgmt.c:1620:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2017-07-30 19:26:18.493551] I [glusterfsd-mgmt.c:1620:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2017-07-30 19:26:18.545959] I [glusterfsd-mgmt.c:1620:mgmt_getspec_cbk] 0-glusterfs: No change in volfile...
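When gnfs goes quiet like this it helps to separate a dead NFS service from a network problem; a minimal sketch, using the 10.147.4.99 address from the report and a purely illustrative volume name gv0:
# is the Gluster NFS server process online and listening?
gluster volume status gv0 nfs
# does the server still answer MOUNT requests from the client side?
showmount -e 10.147.4.99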
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
...working as expected.
I'm using gluster 3.8.12 on CentOS 7.3; the only relevant information
I found in the log file (etc-glusterfs-glusterd.vol.log) of my
three nodes is the following:
* node1, at the moment the issue begins:
[2017-07-19 15:07:43.130203] W [glusterd-locks.c:572:glusterd_mgmt_v3_lock] (-->/usr/lib64/glusterfs/3.8.12/xlator/mgmt/glusterd.so(+0x3a00f) [0x7f373f25f00f] -->/usr/lib64/glusterfs/3.8.12/xlator/mgmt/glusterd.so(+0x2ba25) [0x7f373f250a25] -->/usr/lib64/glusterfs/3.8.12/xlator/mgmt/glusterd.so(+0xd048f) [0x7f373f2f548f] ) 0-management: Lock for vm-images...
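A mgmt_v3 lock that is never released typically blocks all further volume operations until glusterd is restarted on the node still holding it; a minimal sketch, not a definitive fix:
# confirm which peers are connected and in sync
gluster peer status
# restart only the management daemon; brick processes stay up
systemctl restart glusterd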
2017 Nov 07
2
Enabling Halo sets volume RO
...2.1/xlator/nfs/server.so: cannot open shared object file: No such file or directory
[2017-11-07 22:20:15.232176] I [MSGID: 106600] [glusterd-nfs-svc.c:163:glusterd_nfssvc_reconfigure] 0-management: nfs/server.so xlator is not installed
[2017-11-07 22:20:15.235481] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped
[2017-11-07 22:20:15.235512] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: quotad service is stopped
[2017-11-07 22:20:15.235572] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management...
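For context, Halo replication is driven by a small set of volume options, and one plausible cause of a read-only volume is quorum enforcement when too few replicas fall inside the latency cap; a rough sketch of the knobs involved, with the volume name halo_vol purely illustrative:
# enable halo and bound which replicas count as "local" (latency in ms)
gluster volume set halo_vol cluster.halo-enabled yes
gluster volume set halo_vol cluster.halo-max-latency 10
# if fewer than this many replicas qualify, writes can be refused
gluster volume set halo_vol cluster.halo-min-replicas 2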
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
...CentOS 7.3, the only relevant
> information that I found on the log file
> (etc-glusterfs-glusterd.vol.log) of my three nodes are the following:
>
> * node1, at the moment the issue begins:
>
> [2017-07-19 15:07:43.130203] W
> [glusterd-locks.c:572:glusterd_mgmt_v3_lock]
> (-->/usr/lib64/glusterfs/3.8.12/xlator/mgmt/glusterd.so(+0x3a00f)
> [0x7f373f25f00f]
> -->/usr/lib64/glusterfs/3.8.12/xlator/mgmt/glusterd.so(+0x2ba25)
> [0x7f373f250a25]
> -->/usr/lib64/glusterfs/3.8.12/xlator/mgmt/glusterd.so(+0xd048f)
>...
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
...3.linova.de:/glusterfs/sde1enc/brick
Status: Connected
Number of entries: 0
root@gluster1:~#
These are the warnings and errors I've found in the logs on our three
servers...
* Warnings on gluster1.linova.de:
glusterd.log:[2023-05-31 23:56:00.032233 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244...
2023 Jun 01
1
Using glusterfs for virtual machines with qcow2 images
...r3.linova.de:/glusterfs/sde1enc/brick
Status: Connected
Number of entries: 0
root@gluster1:~#
These are the warnings and errors I've found in the logs on our three
servers...
* Warnings on gluster1.linova.de:
glusterd.log:[2023-05-31 23:56:00.032233 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244...
2023 Jun 01
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qcow2 images
...Connected
> Number of entries: 0
>
> root@gluster1:~#
>
> These are the warnings and errors I've found in the logs on our three
> servers...
>
> * Warnings on gluster1.linova.de:
>
> glusterd.log:[2023-05-31 23:56:00.032233 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244...
2017 Dec 15
3
Production Volume will not start
...nd we either have to wait or restart Gluster services. In the glusterd.log, it shows the following:
[2017-12-15 18:00:12.423478] I [glusterd-utils.c:5926:glusterd_brick_start] 0-management: starting a fresh brick process for brick /exp/b1/gv0
[2017-12-15 18:03:12.673885] I [glusterd-locks.c:729:gd_mgmt_v3_unlock_timer_cbk] 0-management: In gd_mgmt_v3_unlock_timer_cbk
[2017-12-15 18:06:34.304868] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0
[2017-12-15 18:06:34.306603] E [MSGID: 106301] [glusterd-syncop.c:1353:g...
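Given the three-minute gap between the brick start and the unlock timer callback, the first thing to establish is whether the brick process ever came up; a minimal sketch using the gv0 name from the log:
# check whether the brick for /exp/b1/gv0 shows as online
gluster volume status gv0
# respawn any brick that stayed down without touching the others
gluster volume start gv0 force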
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
...cted
> Number of entries: 0
>
> root@gluster1:~#
>
> These are the warnings and errors I've found in the logs on our three
> servers...
>
> * Warnings on gluster1.linova.de:
>
> glusterd.log:[2023-05-31 23:56:00.032233 +0000] W
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> [0x7f9b8d19eedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2)
> [0x7f9b8d245ad2]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd....
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
...I found in the log file
>> (etc-glusterfs-glusterd.vol.log) of my three nodes is the
>> following:
>>
>> * node1, at the moment the issue begins:
>>
>> [2017-07-19 15:07:43.130203] W
>> [glusterd-locks.c:572:glusterd_mgmt_v3_lock]
>> (-->/usr/lib64/glusterfs/3.8.12/xlator/mgmt/glusterd.so(+0x3a00f)
>> [0x7f373f25f00f]
>> -->/usr/lib64/glusterfs/3.8.12/xlator/mgmt/glusterd.so(+0x2ba25)
>> [0x7f373f250a25]
>> -->/usr/lib64/glusterfs/3.8.12/...
2017 Jul 20
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
...m using gluster 3.8.12 on CentOS 7.3; the only relevant information
> I found in the log file (etc-glusterfs-glusterd.vol.log) of my three
> nodes is the following:
>
> * node1, at the moment the issue begins:
>
> [2017-07-19 15:07:43.130203] W [glusterd-locks.c:572:glusterd_mgmt_v3_lock]
> (-->/usr/lib64/glusterfs/3.8.12/xlator/mgmt/glusterd.so(+0x3a00f)
> [0x7f373f25f00f] -->/usr/lib64/glusterfs/3.8.12/xlator/mgmt/glusterd.so(+0x2ba25)
> [0x7f373f250a25] -->/usr/lib64/glusterfs/3.8.12/xlator/mgmt/glusterd.so(+0xd048f)
> [0x7f373f2f548f] ) 0-management...
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qcow2 images
...> root@gluster1:~#
> >
> > These are the warnings and errors I've found in the logs on our three
> > servers...
> >
> > * Warnings on gluster1.linova.de:
> >
> > glusterd.log:[2023-05-31 23:56:00.032233 +0000] W
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
> [0x7f9b8d19eedf]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2)
> [0x7f9b8d245ad2]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd....
2017 Jul 20
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
...n CentOS 7.3; the only relevant information
>> I found in the log file (etc-glusterfs-glusterd.vol.log) of my
>> three nodes is the following:
>>
>> * node1, at the moment the issue begins:
>>
>> [2017-07-19 15:07:43.130203] W [glusterd-locks.c:572:glusterd_mgmt_v3_lock]
>> (-->/usr/lib64/glusterfs/3.8.12/xlator/mgmt/glusterd.so(+0x3a00f)
>> [0x7f373f25f00f] -->/usr/lib64/glusterfs/3.8.12
>> /xlator/mgmt/glusterd.so(+0x2ba25) [0x7f373f250a25]
>> -->/usr/lib64/glusterfs/3.8.12/xlator/mgmt/glusterd.so(+0xd048f)
>> [0x7f...
2016 Jun 08
0
unable to connect with virt-v2v to VMWare
Hi,
I am trying to import and convert some VMware guests from a VMware cluster
with vCenter version 6 to a KVM (oVirt) host. The KVM node (RHEL 7.2) has
virt-v2v 1.28.1, though I've also tried using Fedora 23, which has 1.32.4.
The details are:
vCenter server: nssesxi-mgmt
Datacenter name: North Sutton Street
esxi server which runs the VM: nssesxi-mgmt04
folder name: Systems
VM name: wvm2
cluster name: nssesxi
Unfortunately, spaces were put in the name of the datacenter, so I escape
them with %20
I authenticate against an AD domain (ARDA) with user 'cam'
So...
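Assembled from the details above, the vpx connection URL would look roughly like this; a sketch only: whether the Systems folder belongs in the path depends on where the datacenter sits in the vCenter inventory, and the AD user may need the domain-qualified form (e.g. ARDA%5Ccam):
# convert the guest from vCenter to local qcow2-style storage
virt-v2v \
  -ic 'vpx://cam@nssesxi-mgmt/North%20Sutton%20Street/nssesxi/nssesxi-mgmt04?no_verify=1' \
  wvm2 \
  -o local -os /var/tmp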
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qcow2 images
...> >
>> > These are the warnings and errors I've found in the logs on our three
>> > servers...
>> >
>> > * Warnings on gluster1.linova.de:
>> >
>> > glusterd.log:[2023-05-31 23:56:00.032233 +0000] W
>> [glusterd-locks.c:545:glusterd_mgmt_v3_lock]
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)
>> [0x7f9b8d19eedf]
>> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2)
>> [0x7f9b8d245ad2]
>> -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/x...
2017 Jul 26
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
...elevant information
>>> I found in the log file (etc-glusterfs-glusterd.vol.log) of my
>>> three nodes is the following:
>>>
>>> * node1, at the moment the issue begins:
>>>
>>> [2017-07-19 15:07:43.130203] W [glusterd-locks.c:572:glusterd_mgmt_v3_lock]
>>> (-->/usr/lib64/glusterfs/3.8.12/xlator/mgmt/glusterd.so(+0x3a00f)
>>> [0x7f373f25f00f] -->/usr/lib64/glusterfs/3.8.12
>>> /xlator/mgmt/glusterd.so(+0x2ba25) [0x7f373f250a25]
>>> -->/usr/lib64/glusterfs/3.8.12/xlator/mgmt/glusterd.so(+0xd048f...
2017 Dec 18
0
Production Volume will not start
...rt Gluster services. In the glusterd.log, it shows the following:
>
>
>
> [2017-12-15 18:00:12.423478] I [glusterd-utils.c:5926:glusterd_brick_start]
> 0-management: starting a fresh brick process for brick /exp/b1/gv0
>
> [2017-12-15 18:03:12.673885] I [glusterd-locks.c:729:gd_mgmt_v3_unlock_timer_cbk]
> 0-management: In gd_mgmt_v3_unlock_timer_cbk
>
> [2017-12-15 18:06:34.304868] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume]
> 0-management: Received status volume req for volume gv0
>
> [2017-12-15 18:06:34.306603] E [MSGID: 106...