Hi,
After restarting the service, glusterd entered the failed state.
[root@master1 ~]# /etc/init.d/glusterd restart
Stopping glusterd: [FAILED]
Starting glusterd: [FAILED]
Note: this behavior only happens over the RDMA network; over Ethernet
there is no issue.
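
If it helps, the failure reason should be recorded in the glusterd log (on a stock RPM install it is usually the path below), and ibv_devinfo from libibverbs-utils, assuming it is installed, can confirm that the RDMA device is visible; this is just a rough checklist:

tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
ibv_devinfo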
Thank you
Atul Yadav
On Tue, Jul 5, 2016 at 11:28 AM, Atin Mukherjee <amukherj at redhat.com> wrote:
>
>
> On Tue, Jul 5, 2016 at 11:01 AM, Atul Yadav <atulyadavtech at gmail.com> wrote:
>
>> Hi All,
>>
>> The glusterfs environment details are given below:-
>>
>> [root@master1 ~]# cat /etc/redhat-release
>> CentOS release 6.7 (Final)
>> [root@master1 ~]# uname -r
>> 2.6.32-642.1.1.el6.x86_64
>> [root@master1 ~]# rpm -qa | grep -i gluster
>> glusterfs-rdma-3.8rc2-1.el6.x86_64
>> glusterfs-api-3.8rc2-1.el6.x86_64
>> glusterfs-3.8rc2-1.el6.x86_64
>> glusterfs-cli-3.8rc2-1.el6.x86_64
>> glusterfs-client-xlators-3.8rc2-1.el6.x86_64
>> glusterfs-server-3.8rc2-1.el6.x86_64
>> glusterfs-fuse-3.8rc2-1.el6.x86_64
>> glusterfs-libs-3.8rc2-1.el6.x86_64
>> [root@master1 ~]#
>>
>> Volume Name: home
>> Type: Replicate
>> Volume ID: 2403ddf9-c2e0-4930-bc94-734772ef099f
>> Status: Stopped
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: rdma
>> Bricks:
>> Brick1: master1-ib.dbt.au:/glusterfs/home/brick1
>> Brick2: master2-ib.dbt.au:/glusterfs/home/brick2
>> Options Reconfigured:
>> network.ping-timeout: 20
>> nfs.disable: on
>> performance.readdir-ahead: on
>> transport.address-family: inet
>> config.transport: rdma
>> cluster.server-quorum-type: server
>> cluster.quorum-type: fixed
>> cluster.quorum-count: 1
>> locks.mandatory-locking: off
>> cluster.enable-shared-storage: disable
>> cluster.server-quorum-ratio: 51%
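>>
>> For reference, the config.transport option shown above can be changed on an existing volume; a rough, untested sketch (the volume reportedly has to be stopped first, as it already is here) to fall back to tcp alongside rdma would be:
>>
>> gluster volume set home config.transport tcp,rdma
>> gluster volume start home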
>>
>> Only my single master node is up, yet the other nodes are still shown as
>> connected:
>> gluster pool list
>> UUID                                  Hostname                State
>> 89ccd72e-cb99-4b52-a2c0-388c99e5c7b3  master2-ib.dbt.au       Connected
>> d2c47fc2-f673-4790-b368-d214a58c59f4  compute01-ib.dbt.au     Connected
>> a5608d66-a3c6-450e-a239-108668083ff2  localhost               Connected
>> [root@master1 ~]#
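>>
>> Peer state can also be cross-checked, for example, with the following (commands only; output omitted):
>>
>> gluster peer status
>> gluster volume status home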
>>
>>
>> Please advise us: is this normal behavior, or is it an issue?
>>
>
> First off, there is no master/slave configuration for a Gluster trusted
> storage pool, i.e. the peer list. Secondly, if master2 and compute01 are
> still reflected as 'Connected' even though they are down, it means that
> localhost here didn't receive disconnect events for some reason. Could you
> restart the glusterd service on this node and check the output of gluster
> pool list again?
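>
> On CentOS 6 that would be roughly:
>
> /etc/init.d/glusterd restart
> gluster pool list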
>
>
>
>>
>> Thank You
>> Atul Yadav
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>
>