Displaying 20 results from an estimated 82 matches for "101190".
2018 Apr 09 · 2 · volume start: gv01: failed: Quorum not met. Volume operation not allowed.
...address-family' is
deprecated, preferred is 'transport.address-family', continuing with
correction
[2018-04-09 05:08:13.728297] W [socket.c:3216:socket_connect]
0-glusterfs: Error disabling sockopt IPV6_V6ONLY: "Protocol not available"
[2018-04-09 05:08:13.729025] I [MSGID: 101190]
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2018-04-09 05:08:13.737757] I [MSGID: 101190]
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 2
[2018-04-09 05:08:13.738114] I [MSGID: 101190]
[event-epoll.c:613:event_disp...
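
A minimal first check for this kind of "Quorum not met" failure (a sketch, assuming the volume is gv01 as in the subject and that server-side quorum is what is tripping) might be:

    # how many peers does glusterd currently see as connected?
    gluster peer status
    # which quorum-related options are in effect on the volume?
    gluster volume get gv01 all | grep quorum
    # once a majority of peers is back up, retry the start
    gluster volume start gv01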
2018 Apr 09 · 0 · volume start: gv01: failed: Quorum not met. Volume operation not allowed.
...> deprecated, preferred is 'transport.address-family', continuing with
> correction
> [2018-04-09 05:08:13.728297] W [socket.c:3216:socket_connect]
> 0-glusterfs: Error disabling sockopt IPV6_V6ONLY: "Protocol not available"
> [2018-04-09 05:08:13.729025] I [MSGID: 101190]
> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 1
> [2018-04-09 05:08:13.737757] I [MSGID: 101190]
> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 2
> [2018-04-09 05:08:13.738114] I [MSGID: 101190]
>...
2023 Mar 14 · 1 · can't set up geo-replication: can't fetch slave details
...--volfile-server glusterX --volfile-id
ansible -l /var/log/glusterfs/geo-replication/gverify-slavemnt.log
/tmp/gverify.sh.txIgka}]
[2023-03-14 19:13:48.905883 +0000] I [glusterfsd.c:2421:daemonize] 0-
glusterfs: Pid of current running process is 3466942
[2023-03-14 19:13:48.912723 +0000] I [MSGID: 101190] [event-
epoll.c:669:event_dispatch_epoll_worker] 0-epoll: Started thread with
index [{index=1}]
[2023-03-14 19:13:48.912759 +0000] I [MSGID: 101190] [event-
epoll.c:669:event_dispatch_epoll_worker] 0-epoll: Started thread with
index [{index=0}]
[2023-03-14 19:13:48.914529 +0000] E [glusterfsd-
m...
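
The gverify-slavemnt.log mount in this snippet is run internally by the geo-replication create step; a hedged reproduction, with "ansible" taken from the --volfile-id above and secondaryhost/secondaryvol as placeholder names, would be:

    # push-pem distributes the keys; gverify.sh then test-mounts the secondary volume
    gluster volume geo-replication ansible secondaryhost::secondaryvol create push-pem
    # if it fails, confirm the secondary volume exists and is started
    gluster volume info secondaryvol
    # and read the mount log it leaves behind
    less /var/log/glusterfs/geo-replication/gverify-slavemnt.log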
2018 Apr 11 · 3 · volume start: gv01: failed: Quorum not met. Volume operation not allowed.
...ed is 'transport.address-family', continuing with
> correction
> [2018-04-09 05:08:13.728297] W [socket.c:3216:socket_connect]
> 0-glusterfs: Error disabling sockopt IPV6_V6ONLY: "Protocol not
> available"
> [2018-04-09 05:08:13.729025] I [MSGID: 101190]
> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 1
> [2018-04-09 05:08:13.737757] I [MSGID: 101190]
> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 2
> [2018-04-09 05:08:13.738114...
2018 Jan 23 · 2 · Understanding client logs
...9 10:20:39.752079] I [MSGID: 100030] [glusterfsd.c:2460:main] 0-/usr/bin/glusterfs: Started running /usr/bin/glusterfs version 3.10.1 (args: /usr/bin/glusterfs --negative-timeout=60 --volfile-server=192.168.67.31 --volfile-id=/interbull-interbull /interbull)
[2017-11-09 10:20:39.763902] I [MSGID: 101190] [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2017-11-09 10:20:39.768738] I [afr.c:94:fix_quorum_options] 0-interbull-interbull-replicate-0: reindeer: incoming qtype = none
[2017-11-09 10:20:39.768756] I [afr.c:116:fix_quorum_options] 0-interbull-interbull-r...
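
The (args: ...) line above is the FUSE client as spawned by mount.glusterfs; reconstructed from those arguments, the equivalent mount invocation (illustrative only) would be roughly:

    # negative-timeout=60 is passed through to the client as --negative-timeout=60
    mount -t glusterfs -o negative-timeout=60 192.168.67.31:/interbull-interbull /interbull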
2017 Jul 20 · 1 · Error while mounting gluster volume
...lid argument]
[1970-01-02 10:54:04.420140] W [socket.c:3095:socket_connect] 0-: failed to
register the event
[1970-01-02 10:54:04.420406] E [glusterfsd-mgmt.c:1818:mgmt_rpc_notify]
0-glusterfsd-mgmt: failed to connect with remote-host: 128.224.95.140
(Success)
[1970-01-02 10:54:04.420422] I [MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[1970-01-02 10:54:04.420429] I [glusterfsd-mgmt.c:1824:mgmt_rpc_notify]
0-glusterfsd-mgmt: Exhausted all volfile servers
[1970-01-02 10:54:04.420480] E [MSGID: 101063]
[event-epoll.c:550:event_dispatch_epoll_handl...
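
"Exhausted all volfile servers" here means no management daemon answered on the host given with --volfile-server; port 24007 is glusterd's management port, so a basic reachability check (host taken from the log) could be:

    ping -c 3 128.224.95.140
    # is anything answering on the glusterd port? (OpenBSD-style nc shown)
    nc -vz 128.224.95.140 24007
    # on the server itself
    systemctl status glusterd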
2018 Apr 11 · 0 · volume start: gv01: failed: Quorum not met. Volume operation not allowed.
....address-family', continuing with
>> correction
>> [2018-04-09 05:08:13.728297] W [socket.c:3216:socket_connect]
>> 0-glusterfs: Error disabling sockopt IPV6_V6ONLY: "Protocol not
>> available"
>> [2018-04-09 05:08:13.729025] I [MSGID: 101190]
>> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
>> thread
>> with index 1
>> [2018-04-09 05:08:13.737757] I [MSGID: 101190]
>> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
>> thread
>> with index...
2018 Jan 23 · 0 · Understanding client logs
...[MSGID: 100030] [glusterfsd.c:2460:main]
> 0-/usr/bin/glusterfs: Started running /usr/bin/glusterfs version 3.10.1
> (args: /usr/bin/glusterfs --negative-timeout=60 --volfile-server=192.168.67.31
> --volfile-id=/interbull-interbull /interbull)
> [2017-11-09 10:20:39.763902] I [MSGID: 101190] [event-epoll.c:629:event_dispatch_epoll_worker]
> 0-epoll: Started thread with index 1
> [2017-11-09 10:20:39.768738] I [afr.c:94:fix_quorum_options]
> 0-interbull-interbull-replicate-0: reindeer: incoming qtype = none
> [2017-11-09 10:20:39.768756] I [afr.c:116:fix_quorum_options]
>...
2018 Feb 24 · 0 · Failed heal volume
When I try to run a heal on the volume I get these log errors and 3221 files not
healing:
[2018-02-24 15:32:00.915219] W [socket.c:3216:socket_connect] 0-glusterfs:
Error disabling sockopt IPV6_V6ONLY: "Protocollo non disponibile" (Italian for "Protocol not available")
[2018-02-24 15:32:00.915854] I [MSGID: 101190]
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2018-02-24 15:32:01.925714] E [MSGID: 114058]
[client-handshake.c:1571:client_query_portmap_cbk]
0-datastore_temp-client-1: failed to get the port number for remote
subvolume. Please run 'gluster volume statu...
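
The truncated hint in the last line is asking for 'gluster volume status'; for the datastore_temp volume named in the message (client-1 suggests one brick process is not serving a port) the usual sequence would be something like:

    # every brick should show Online Y and a TCP port here
    gluster volume status datastore_temp
    # entries still pending heal
    gluster volume heal datastore_temp info
    # once the missing brick is back, kick off the heal again
    gluster volume heal datastore_temp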
2018 Feb 13 · 2 · Failed to get quota limits
...ing /usr/sbin/glusterfs version 3.10.7 (args: /usr/sbin/glusterfs --volfile-server localhost --volfile-id myvolume -l /var/log/glusterfs/quota-mount-myvolume.log -p /var/run/gluster/myvolume_quota_list.pid --client-pid -5 /var/run/gluster/myvolume_quota_list/)
[2018-02-13 08:16:09.940432] I [MSGID: 101190] [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2018-02-13 08:16:09.940491] E [socket.c:2327:socket_connect_finish] 0-glusterfs: connection to ::1:24007 failed (Connection refused); disconnecting socket
[2018-02-13 08:16:09.940519] I [glusterfsd-mgmt.c:2134:mg...
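
The "Connection refused" to ::1:24007 above means nothing is listening on the local management port, so the auxiliary mount spawned by the quota command cannot fetch its volfile; a short check (volume name from the snippet) might be:

    systemctl status glusterd
    # confirm glusterd is listening on 24007
    ss -ltn | grep 24007
    gluster volume quota myvolume list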
2023 Mar 21 · 1 · can't set up geo-replication: can't fetch slave details
...-volfile-
> id ansible -l /var/log/glusterfs/geo-replication/gverify-slavemnt.log
> /tmp/gverify.sh.txIgka}]
> [2023-03-14 19:13:48.905883 +0000] I [glusterfsd.c:2421:daemonize] 0-
> glusterfs: Pid of current running process is 3466942
> [2023-03-14 19:13:48.912723 +0000] I [MSGID: 101190] [event-
> epoll.c:669:event_dispatch_epoll_worker] 0-epoll: Started thread with
> index [{index=1}]
> [2023-03-14 19:13:48.912759 +0000] I [MSGID: 101190] [event-
> epoll.c:669:event_dispatch_epoll_worker] 0-epoll: Started thread with
> index [{index=0}]
> [2023-03-14 19:13:48....
2018 Feb 13 · 0 · Failed to get quota limits
...ersion 3.10.7
> (args: /usr/sbin/glusterfs --volfile-server localhost --volfile-id myvolume
> -l /var/log/glusterfs/quota-mount-myvolume.log -p
> /var/run/gluster/myvolume_quota_list.pid --client-pid -5
> /var/run/gluster/myvolume_quota_list/)
> [2018-02-13 08:16:09.940432] I [MSGID: 101190]
> [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with
> index 1
> [2018-02-13 08:16:09.940491] E [socket.c:2327:socket_connect_finish]
> 0-glusterfs: connection to ::1:24007 failed (Connection refused);
> disconnecting socket
> [2018-02-13 08:16:09.940519...
2024 Jan 26 · 1 · Gluster communication via TLS client problem
...glusterfs}, {version=10.5},
{cmdlinestr=/usr/sbin/glusterfs --process-name fuse
--volfile-server=c01.gluster --volfile-id=/gv1 /mnt}]
[2024-01-26 09:30:06.677184 +0000] I [glusterfsd.c:2447:daemonize]
0-glusterfs: Pid of current running process is 931
[2024-01-26 09:30:06.685814 +0000] I [MSGID: 101190]
[event-epoll.c:667:event_dispatch_epoll_worker] 0-epoll: Started thread
with index [{index=1}]
[2024-01-26 09:30:06.686116 +0000] I [MSGID: 101190]
[event-epoll.c:667:event_dispatch_epoll_worker] 0-epoll: Started thread
with index [{index=0}]
[2024-01-26 09:30:06.690443 +0000] I
[glusterfsd-m...
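
For anyone comparing against a working TLS setup: gluster reads its certificates from fixed paths, management-path encryption is switched on by a touch-file, and I/O-path encryption is a per-volume option (volume name gv1 taken from the mount above). A quick inventory, as a sketch:

    # certificate, key and CA bundle expected on every node
    ls -l /etc/ssl/glusterfs.pem /etc/ssl/glusterfs.key /etc/ssl/glusterfs.ca
    # presence of this file enables TLS on the management path
    ls -l /var/lib/glusterd/secure-access
    # I/O-path TLS is per volume
    gluster volume get gv1 client.ssl
    gluster volume get gv1 server.ssl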
2017 Nov 13 · 2 · snapshot mount fails in 3.12
...id=snaps --subdir-mount=/test/home /mnt/temp)
[2017-11-13 08:46:02.292629] W [MSGID: 101002]
[options.c:995:xl_opt_validate] 0-glusterfs: option 'address-family' is
deprecated, preferred is 'transport.address-family', continuing with
correction
[2017-11-13 08:46:02.298278] I [MSGID: 101190]
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2017-11-13 08:46:02.300719] E [glusterfsd-mgmt.c:1796:mgmt_getspec_cbk]
0-glusterfs: failed to get the 'volume file' from server
[2017-11-13 08:46:02.300744] E [glusterfsd-mgmt.c:1932:mgmt_getspec_cbk]
0-...
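
This 3.12 mount is a subdir mount of a volume apparently named snaps; "failed to get the 'volume file'" usually means the server does not recognise the requested volfile-id. A hedged way to reproduce and narrow it down (server1 is a placeholder hostname):

    # subdir mounts in 3.12+ put volume and subdirectory in the device path
    mount -t glusterfs server1:/snaps/test/home /mnt/temp
    # confirm a volume of that exact name exists and is started
    gluster volume info snaps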
2017 Dec 29 · 1 · cannot mount with glusterfs-fuse after NFS-Ganesha enabled
...g the log-file option):
[2017-12-28 08:15:30.109110] I [MSGID: 100030] [glusterfsd.c:2412:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.4
(args: /usr/sbin/glusterfs --log-file=log --volfile-server=glnode1
--volfile-id=/gv0 /mnt)
[2017-12-28 08:15:30.128685] I [MSGID: 101190]
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2017-12-28 08:15:30.132438] W [MSGID: 101095]
[xlator.c:198:xlator_dynload] 0-xlator:
/usr/lib64/glusterfs/3.8.4/xlator/features/ganesha.so: cannot open shared
object file: No such file or directory
[2017-12-28 0...
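
The ganesha.so warning is the client trying to load a features/ganesha xlator that is not present on this node; a quick way to see what is actually installed (paths from the log, rpm assumed for an EL-style system):

    ls /usr/lib64/glusterfs/3.8.4/xlator/features/
    rpm -qa | grep -i glusterfs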
2017 Aug 16 · 0 · Is transport=rdma tested with "stripe"?
...log. The last line repeats at every 3 seconds.
[2017-08-16 10:49:00.028789] I [cli.c:759:main] 0-cli: Started running gluster with version 3.10.3
[2017-08-16 10:49:00.032509] I [cli-cmd-volume.c:2320:cli_check_gsync_present] 0-: geo-replication not installed
[2017-08-16 10:49:00.033038] I [MSGID: 101190] [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2017-08-16 10:49:00.033092] I [socket.c:2415:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
[2017-08-16 10:49:03.032434] I [socket.c:2415:socket_event_handler] 0-transport: EPOLLERR - disconnecti...
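
The repeating EPOLLERR in this cli.log generally just means the gluster CLI cannot reach the local glusterd; before looking at rdma or stripe specifics it may be worth confirming the daemon itself:

    systemctl status glusterd
    # if glusterd is up, the CLI answers immediately
    gluster volume info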
2017 Aug 15 · 2 · Is transport=rdma tested with "stripe"?
On Tue, Aug 15, 2017 at 01:04:11PM +0000, Hatazaki, Takao wrote:
> Ji-Hyeon,
>
> You're saying that "stripe=2 transport=rdma" should work. Okay, that
> was the first thing I wanted to know. I'll put together logs later this week.
Note that "stripe" is not tested much and practically unmaintained. We
do not advise you to use it. If you have large files that you
2018 Feb 13 · 2 · Failed to get quota limits
...ocalhost --volfile-id myvolume
>>>>> -l /var/log/glusterfs/quota-mount-myvolume.log -p
>>>>> /var/run/gluster/myvolume_quota_list.pid --client-pid -5
>>>>> /var/run/gluster/myvolume_quota_list/)
>>>>> [2018-02-13 08:16:09.940432] I [MSGID: 101190]
>>>>> [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with
>>>>> index 1
>>>>> [2018-02-13 08:16:09.940491] E [socket.c:2327:socket_connect_finish]
>>>>> 0-glusterfs: connection to ::1:24007 failed (Connection refu...
2018 Feb 13 · 2 · Failed to get quota limits
...>>>>>>> -l /var/log/glusterfs/quota-mount-myvolume.log -p
>>>>>>> /var/run/gluster/myvolume_quota_list.pid --client-pid -5
>>>>>>> /var/run/gluster/myvolume_quota_list/)
>>>>>>> [2018-02-13 08:16:09.940432] I [MSGID: 101190]
>>>>>>> [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with
>>>>>>> index 1
>>>>>>> [2018-02-13 08:16:09.940491] E [socket.c:2327:socket_connect_finish]
>>>>>>> 0-glusterfs: connection to :...
2018 Feb 13 · 0 · Failed to get quota limits
...le-id myvolume
>>>>>> -l /var/log/glusterfs/quota-mount-myvolume.log -p
>>>>>> /var/run/gluster/myvolume_quota_list.pid --client-pid -5
>>>>>> /var/run/gluster/myvolume_quota_list/)
>>>>>> [2018-02-13 08:16:09.940432] I [MSGID: 101190]
>>>>>> [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with
>>>>>> index 1
>>>>>> [2018-02-13 08:16:09.940491] E [socket.c:2327:socket_connect_finish]
>>>>>> 0-glusterfs: connection to ::1:24007 failed...