Try adding routes so the client can connect to all of the nodes.
I am also not sure whether the FUSE mount needs access to all nodes; supposedly it
writes to all of them at the same time unless you have the halo feature enabled.
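
For example, from the client you could check that every node is reachable on the
Gluster ports and add a route to the other VLAN if it is not. This is only a sketch;
the subnet and gateway below are placeholders to adapt to your environment, and the
port numbers are the glusterd port and the brick port from your volume status:

# nc -zv bslrplpgls01 24007
# nc -zv bslrplpgls01 49152
# ip route add 10.35.73.0/24 via 10.35.72.1

Since your volume info shows cluster.halo-enabled: True, you can also confirm that
setting with:

# gluster volume get vol0 cluster.halo-enabled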
On Sun, Oct 28, 2018 at 1:07 AM Oğuz Yarımtepe <oguzyarimtepe at gmail.com>
wrote:
> Two of my nodes are on another VLAN. Should my client have a connection to all
> nodes in replicated mode?
>
> Regards.
>
> On Fri, Oct 26, 2018 at 4:44 AM Poornima Gurusiddaiah <pgurusid at redhat.com>
> wrote:
>
>> Is this a new volume? Has it never been mounted successfully? If so, try
>> changing the firewall settings to allow the Gluster ports, and also check
>> the SELinux settings.
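>>
>> For example, on CentOS with firewalld, something like this should open the
>> standard Gluster ports (a sketch; adjust the brick port range to what
>> "gluster volume status" reports on your nodes):
>>
>> # firewall-cmd --permanent --add-port=24007-24008/tcp
>> # firewall-cmd --permanent --add-port=49152-49156/tcp
>> # firewall-cmd --reload
>>
>> And to see whether SELinux is enforcing:
>>
>> # getenforce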
>>
>> Regards,
>> Poornima
>>
>> On Fri, Oct 26, 2018, 1:26 AM Oğuz Yarımtepe <oguzyarimtepe at gmail.com>
>> wrote:
>>
>>> One more addition:
>>>
>>> # gluster volume info
>>>
>>>
>>> Volume Name: vol0
>>> Type: Replicate
>>> Volume ID: 28384e2b-ea7e-407e-83ae-4d4e69a2cc7e
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 1 x 4 = 4
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: aslrplpgls01:/bricks/brick1/vol0
>>> Brick2: aslrplpgls02:/bricks/brick2/vol0
>>> Brick3: bslrplpgls01:/bricks/brick3/vol0
>>> Brick4: bslrplpgls02:/bricks/brick4/vol0
>>> Options Reconfigured:
>>> cluster.self-heal-daemon: enable
>>> cluster.halo-enabled: True
>>> transport.address-family: inet
>>> nfs.disable: on
>>> performance.client-io-threads: off
>>>
>>> On Thu, Oct 25, 2018 at 10:39 PM Oğuz Yarımtepe <oguzyarimtepe at gmail.com>
>>> wrote:
>>>
>>>> I have a 4-node GlusterFS cluster, installed from the CentOS SIG 4.1 repo.
>>>>
>>>> # gluster peer status
>>>> Number of Peers: 3
>>>>
>>>> Hostname: aslrplpgls02
>>>> Uuid: 0876151a-058e-42ec-91f2-f25f353a0207
>>>> State: Peer in Cluster (Connected)
>>>>
>>>> Hostname: bslrplpgls01
>>>> Uuid: 6d73ed2a-2287-4872-9a8f-64d6e833181f
>>>> State: Peer in Cluster (Connected)
>>>>
>>>> Hostname: bslrplpgls02
>>>> Uuid: 8ab6b61f-f502-44c7-8966-2ab03a6b9f7e
>>>> State: Peer in Cluster (Connected)
>>>>
>>>> # gluster volume status vol0
>>>> Status of volume: vol0
>>>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>>>> ------------------------------------------------------------------------------
>>>> Brick aslrplpgls01:/bricks/brick1/vol0      49152     0          Y       12991
>>>> Brick aslrplpgls02:/bricks/brick2/vol0      49152     0          Y       9344
>>>> Brick bslrplpgls01:/bricks/brick3/vol0      49152     0          Y       61662
>>>> Brick bslrplpgls02:/bricks/brick4/vol0      49152     0          Y       61843
>>>> Self-heal Daemon on localhost               N/A       N/A        Y       13014
>>>> Self-heal Daemon on bslrplpgls02            N/A       N/A        Y       61866
>>>> Self-heal Daemon on bslrplpgls01            N/A       N/A        Y       61685
>>>> Self-heal Daemon on aslrplpgls02            N/A       N/A        Y       9367
>>>>
>>>> Task Status of Volume vol0
>>>> ------------------------------------------------------------------------------
>>>> There are no active volume tasks
>>>>
>>>> This is how the brick filesystem is mounted:
>>>>
>>>> /dev/gluster_vg/gluster_lv /bricks/brick1 xfs defaults 1 2
>>>>
>>>> When I try to mount vol0 on a remote machine, this is what I get:
>>>>
>>>>> [2018-10-25 19:37:23.033302] D [MSGID: 0] [write-behind.c:2396:wb_lookup_cbk] 0-stack-trace: stack-address: 0x7f0d04001038, vol0-write-behind returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
>>>>> [2018-10-25 19:37:23.033329] D [MSGID: 0] [io-cache.c:268:ioc_lookup_cbk] 0-stack-trace: stack-address: 0x7f0d04001038, vol0-io-cache returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
>>>>> [2018-10-25 19:37:23.033356] D [MSGID: 0] [quick-read.c:473:qr_lookup_cbk] 0-stack-trace: stack-address: 0x7f0d04001038, vol0-quick-read returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
>>>>> [2018-10-25 19:37:23.033373] D [MSGID: 0] [md-cache.c:1130:mdc_lookup_cbk] 0-stack-trace: stack-address: 0x7f0d04001038, vol0-md-cache returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
>>>>> [2018-10-25 19:37:23.033389] D [MSGID: 0] [io-stats.c:2278:io_stats_lookup_cbk] 0-stack-trace: stack-address: 0x7f0d04001038, vol0 returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
>>>>> [2018-10-25 19:37:23.033408] W [fuse-resolve.c:132:fuse_resolve_gfid_cbk] 0-fuse: 00000000-0000-0000-0000-000000000001: failed to resolve (Transport endpoint is not connected)
>>>>> [2018-10-25 19:37:23.033426] E [fuse-bridge.c:928:fuse_getattr_resume] 0-glusterfs-fuse: 2: GETATTR 1 (00000000-0000-0000-0000-000000000001) resolution failed
>>>>> [2018-10-25 19:37:23.036511] D [MSGID: 0] [dht-common.c:3468:dht_lookup] 0-vol0-dht: Calling fresh lookup for / on vol0-replicate-0
>>>>> [2018-10-25 19:37:23.037347] D [MSGID: 0] [afr-common.c:3241:afr_discover_do] 0-stack-trace: stack-address: 0x7f0d04001038, vol0-replicate-0 returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
>>>>> [2018-10-25 19:37:23.037375] D [MSGID: 0] [dht-common.c:3020:dht_lookup_cbk] 0-vol0-dht: fresh_lookup returned for / with op_ret -1 [Transport endpoint is not connected]
>>>>> [2018-10-25 19:37:23.037940] D [MSGID: 0] [afr-common.c:3241:afr_discover_do] 0-stack-trace: stack-address: 0x7f0d04001038, vol0-replicate-0 returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
>>>>> [2018-10-25 19:37:23.037963] D [MSGID: 0] [dht-common.c:1378:dht_lookup_dir_cbk] 0-vol0-dht: lookup of / on vol0-replicate-0 returned error [Transport endpoint is not connected]
>>>>> [2018-10-25 19:37:23.037979] E [MSGID: 101046] [dht-common.c:1502:dht_lookup_dir_cbk] 0-vol0-dht: dict is null
>>>>> [2018-10-25 19:37:23.037994] D [MSGID: 0] [dht-common.c:1505:dht_lookup_dir_cbk] 0-stack-trace: stack-address: 0x7f0d04001038, vol0-dht returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
>>>>> [2018-10-25 19:37:23.038010] D [MSGID: 0] [write-behind.c:2396:wb_lookup_cbk] 0-stack-trace: stack-address: 0x7f0d04001038, vol0-write-behind returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
>>>>> [2018-10-25 19:37:23.038028] D [MSGID: 0] [io-cache.c:268:ioc_lookup_cbk] 0-stack-trace: stack-address: 0x7f0d04001038, vol0-io-cache returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
>>>>> [2018-10-25 19:37:23.038045] D [MSGID: 0] [quick-read.c:473:qr_lookup_cbk] 0-stack-trace: stack-address: 0x7f0d04001038, vol0-quick-read returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
>>>>> [2018-10-25 19:37:23.038061] D [MSGID: 0] [md-cache.c:1130:mdc_lookup_cbk] 0-stack-trace: stack-address: 0x7f0d04001038, vol0-md-cache returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
>>>>> [2018-10-25 19:37:23.038078] D [MSGID: 0] [io-stats.c:2278:io_stats_lookup_cbk] 0-stack-trace: stack-address: 0x7f0d04001038, vol0 returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
>>>>> [2018-10-25 19:37:23.038096] W [fuse-resolve.c:132:fuse_resolve_gfid_cbk] 0-fuse: 00000000-0000-0000-0000-000000000001: failed to resolve (Transport endpoint is not connected)
>>>>> [2018-10-25 19:37:23.038110] E [fuse-bridge.c:928:fuse_getattr_resume] 0-glusterfs-fuse: 3: GETATTR 1 (00000000-0000-0000-0000-000000000001) resolution failed
>>>>> [2018-10-25 19:37:23.041169] D [fuse-bridge.c:5087:fuse_thread_proc] 0-glusterfs-fuse: terminating upon getting ENODEV when reading /dev/fuse
>>>>> [2018-10-25 19:37:23.041196] I [fuse-bridge.c:5199:fuse_thread_proc] 0-fuse: initating unmount of /mnt/gluster
>>>>> [2018-10-25 19:37:23.041306] D [logging.c:1795:gf_log_flush_extra_msgs] 0-logging-infra: Log buffer size reduced. About to flush 5 extra log messages
>>>>> [2018-10-25 19:37:23.041331] D [logging.c:1798:gf_log_flush_extra_msgs] 0-logging-infra: Just flushed 5 extra log messages
>>>>> [2018-10-25 19:37:23.041398] W [glusterfsd.c:1514:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7e25) [0x7f0d24e0ae25] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x5594b73edd65] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x5594b73edb8b] ) 0-: received signum (15), shutting down
>>>>> [2018-10-25 19:37:23.041417] D [mgmt-pmap.c:79:rpc_clnt_mgmt_pmap_signout] 0-fsd-mgmt: portmapper signout arguments not given
>>>>> [2018-10-25 19:37:23.041428] I [fuse-bridge.c:5981:fini] 0-fuse: Unmounting '/mnt/gluster'.
>>>>> [2018-10-25 19:37:23.041441] I [fuse-bridge.c:5986:fini] 0-fuse: Closing fuse connection to '/mnt/gluster'.
>>>>
>>>> This is how I added the mount point to fstab:
>>>>
>>>> 10.35.72.138:/vol0 /mnt/gluster glusterfs defaults,_netdev,log-level=DEBUG 0 0
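>>>>
>>>> For reference, that fstab entry corresponds roughly to this manual mount
>>>> command:
>>>>
>>>> # mount -t glusterfs -o defaults,_netdev,log-level=DEBUG 10.35.72.138:/vol0 /mnt/gluster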
>>>>
>>>> Any idea what the problem is? I found some bug entries, but I am not sure
>>>> whether this situation is a bug.
>>>>
>>>>
>>>>
>>>> --
>>>> Oğuz Yarımtepe
>>>> http://about.me/oguzy
>>>>
>>>
>>>
>>> --
>>> Oğuz Yarımtepe
>>> http://about.me/oguzy
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
>
> --
> Oğuz Yarımtepe
> http://about.me/oguzy
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users