Hey Atul,
To get to the actual issue, we'd need adequate information from you. I'd recommend providing the following:
1. A detailed description of the problem statement.
2. The steps you performed on the cluster.
3. Log files from all the nodes in the cluster (see the sketch below for one way to collect them).
You can file a bug for the same if required.
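As a rough sketch of what I mean by (3) -- assuming passwordless SSH, and
substituting your own node hostnames (master1, master2, compute01 here are
only placeholders) -- something like this would gather everything in one go:

    #!/bin/bash
    # Collect the glusterfs logs from every node into one archive
    # that can be attached to the bug report.
    NODES="master1 master2 compute01"   # placeholder hostnames
    OUT="gluster-logs-$(date +%Y%m%d)"
    for node in $NODES; do
        mkdir -p "$OUT/$node"
        # /var/log/glusterfs holds the glusterd, brick and client logs
        scp "$node:/var/log/glusterfs/*.log" "$OUT/$node/" \
            || echo "could not fetch logs from $node"
    done
    tar czf "$OUT.tar.gz" "$OUT"
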
On Thu, Nov 10, 2016 at 11:09 AM, Atul Yadav <atulyadavtech at gmail.com>
wrote:
> Hi Team,
>
> Can I have an update on the issue?
>
> Thank You
> Atul Yadav
>
>
> On Tue, Jul 5, 2016 at 4:56 PM, Atul Yadav <atulyadavtech at gmail.com> wrote:
>
>> Hi Team,
>>
>> Updated:
>> No, gluster commands are not working from the same node (in the below-mentioned
>> event). The df command will hang the server.
>>
>
>>
>> Note: after rebooting the entire cluster, the gluster process works fine.
>> But at the time of the failure (one node forcibly brought down), gluster
>> peer status still shows that node as connected.
>>
>> Looking for a solution to fix this issue.
>>
>> Thank You
>>
>> On Tue, Jul 5, 2016 at 4:48 PM, Atul Yadav <atulyadavtech at gmail.com> wrote:
>>
>>> Hi Team,
>>>
>>> Yes, gluster commands are working from the same node.
>>>
>>> Note: after rebooting the entire cluster, the gluster process works fine.
>>> But at the time of the failure (one node forcibly brought down), gluster
>>> peer status still shows that node as connected.
>>>
>>> Thank You
>>> Atul Yadav
>>>
>>>
>>> On Tue, Jul 5, 2016 at 4:32 PM, Atin Mukherjee <amukherj at redhat.com> wrote:
>>>
>>>> I could see from the log file that the glusterd process is up. Aren't you
>>>> able to run any gluster commands on the same node?
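>>>>
>>>> For instance (just a quick sanity check from that node, not a full
>>>> diagnosis), you could try:
>>>>
>>>>     # Is the glusterd daemon actually running?
>>>>     pgrep -x glusterd
>>>>     # Does the CLI get a response from it?
>>>>     gluster peer status
>>>>     gluster volume info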
>>>>
>>>> ~Atin
>>>>
>>>> On Tue, Jul 5, 2016 at 4:05 PM, Atul Yadav <atulyadavtech at gmail.com> wrote:
>>>>
>>>>> Hi Team,
>>>>>
>>>>> Please go through the attachment
>>>>>
>>>>> Thank you
>>>>> Atul Yadav
>>>>>
>>>>> On Tue, Jul 5, 2016 at 3:55 PM, Samikshan Bairagya <sbairagy at redhat.com> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On 07/05/2016 03:15 PM, Atul Yadav wrote:
>>>>>>
>>>>>>> Hi Atin,
>>>>>>>
>>>>>>> please go through the attachment for log file.
>>>>>>>
>>>>>>> Waiting for your input....
>>>>>>>
>>>>>>>
>>>>>> Hi Atul,
>>>>>>
>>>>>> The glusterd log file can be found here:
>>>>>> /var/log/glusterfs/usr-local-etc-glusterfs-glusterd.vol.log
>>>>>>
>>>>>> Sharing that instead of a RAR archive would make things easier for us.
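>>>>>>
>>>>>> For example, something as simple as this (adjust the line count as you
>>>>>> see fit) gives us a plain-text snippet we can read directly:
>>>>>>
>>>>>>     # Capture the most recent part of the glusterd log as plain text
>>>>>>     tail -n 500 /var/log/glusterfs/usr-local-etc-glusterfs-glusterd.vol.log \
>>>>>>         > glusterd-tail.log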
>>>>>>
>>>>>>
>>>>>>
>>>>>>> Thank you
>>>>>>> Atul Yadav
>>>>>>>
>>>>>>> On Tue, Jul 5, 2016 at 2:43 PM, Atin Mukherjee
<amukherj at redhat.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> Why don't you share the glusterd log file?
>>>>>>>>
>>>>>>>> On Tue, Jul 5, 2016 at 12:53 PM, Atul Yadav <atulyadavtech at gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> After restarting the service, it entered the failed state.
>>>>>>>>> [root at master1 ~]# /etc/init.d/glusterd restart
>>>>>>>>> Stopping glusterd:                                  [FAILED]
>>>>>>>>> Starting glusterd:                                  [FAILED]
>>>>>>>>>
>>>>>>>>> Note: this behavior only happens over the rdma network; with ethernet
>>>>>>>>> there is no issue.
>>>>>>>>>
>>>>>>>>> Thank you
>>>>>>>>> Atul Yadav
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, Jul 5, 2016 at 11:28 AM, Atin Mukherjee <amukherj at redhat.com> wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Tue, Jul 5, 2016 at 11:01 AM, Atul Yadav <atulyadavtech at gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>> Hi All,
>>>>>>>>>>>
>>>>>>>>>>> The glusterfs environment details are given below:
>>>>>>>>>>>
>>>>>>>>>>> [root at master1 ~]# cat /etc/redhat-release
>>>>>>>>>>> CentOS release 6.7 (Final)
>>>>>>>>>>> [root at master1 ~]# uname -r
>>>>>>>>>>> 2.6.32-642.1.1.el6.x86_64
>>>>>>>>>>> [root at master1 ~]# rpm -qa | grep -i gluster
>>>>>>>>>>> glusterfs-rdma-3.8rc2-1.el6.x86_64
>>>>>>>>>>> glusterfs-api-3.8rc2-1.el6.x86_64
>>>>>>>>>>> glusterfs-3.8rc2-1.el6.x86_64
>>>>>>>>>>> glusterfs-cli-3.8rc2-1.el6.x86_64
>>>>>>>>>>> glusterfs-client-xlators-3.8rc2-1.el6.x86_64
>>>>>>>>>>> glusterfs-server-3.8rc2-1.el6.x86_64
>>>>>>>>>>> glusterfs-fuse-3.8rc2-1.el6.x86_64
>>>>>>>>>>> glusterfs-libs-3.8rc2-1.el6.x86_64
>>>>>>>>>>> [root at master1 ~]#
>>>>>>>>>>>
>>>>>>>>>>> Volume Name: home
>>>>>>>>>>> Type: Replicate
>>>>>>>>>>> Volume ID: 2403ddf9-c2e0-4930-bc94-734772ef099f
>>>>>>>>>>> Status: Stopped
>>>>>>>>>>> Number of Bricks: 1 x 2 = 2
>>>>>>>>>>> Transport-type: rdma
>>>>>>>>>>> Bricks:
>>>>>>>>>>> Brick1: master1-ib.dbt.au:/glusterfs/home/brick1
>>>>>>>>>>> Brick2: master2-ib.dbt.au:/glusterfs/home/brick2
>>>>>>>>>>> Options Reconfigured:
>>>>>>>>>>> network.ping-timeout: 20
>>>>>>>>>>> nfs.disable: on
>>>>>>>>>>> performance.readdir-ahead: on
>>>>>>>>>>> transport.address-family: inet
>>>>>>>>>>> config.transport: rdma
>>>>>>>>>>> cluster.server-quorum-type: server
>>>>>>>>>>> cluster.quorum-type: fixed
>>>>>>>>>>> cluster.quorum-count: 1
>>>>>>>>>>> locks.mandatory-locking: off
>>>>>>>>>>> cluster.enable-shared-storage: disable
>>>>>>>>>>> cluster.server-quorum-ratio: 51%
>>>>>>>>>>>
>>>>>>>>>>> Only my single master node is up, but the other nodes are still
>>>>>>>>>>> showing as connected ....
>>>>>>>>>>> gluster pool list
>>>>>>>>>>> UUID                                  Hostname              State
>>>>>>>>>>> 89ccd72e-cb99-4b52-a2c0-388c99e5c7b3  master2-ib.dbt.au     Connected
>>>>>>>>>>> d2c47fc2-f673-4790-b368-d214a58c59f4  compute01-ib.dbt.au   Connected
>>>>>>>>>>> a5608d66-a3c6-450e-a239-108668083ff2  localhost             Connected
>>>>>>>>>>> [root at master1 ~]#
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Please advise us:
>>>>>>>>>>> Is this normal behavior, or is this an issue?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>> First off, we don't have any master/slave configuration mode for the
>>>>>>>>>> gluster trusted storage pool, i.e. the peer list. Secondly, if master2
>>>>>>>>>> and compute01 are still reflected as 'connected' even though they are
>>>>>>>>>> down, it means that localhost here didn't receive the disconnect events
>>>>>>>>>> for some reason. Could you restart the glusterd service on this node and
>>>>>>>>>> check the output of gluster pool list again?
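>>>>>>>>>>
>>>>>>>>>> Roughly along these lines (on your CentOS 6 setup the init script is
>>>>>>>>>> the way to restart it):
>>>>>>>>>>
>>>>>>>>>>     # Restart glusterd on the node showing the stale peer state
>>>>>>>>>>     /etc/init.d/glusterd restart
>>>>>>>>>>     # Re-check how this node now sees its peers
>>>>>>>>>>     gluster pool list
>>>>>>>>>>     gluster peer status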
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> Thank You
>>>>>>>>>>> Atul Yadav
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>
>>>>
>>>
>>
>
--
~ Atin (atinm)