Displaying 10 results from an estimated 10 matches for "pplxgluster04".
2017 Aug 30 | 2 | Gluster status fails
...working
gluster volume status
Another transaction is in progress for atlasglust. Please try again after
sometime.
When I tried from another node, it seemed two nodes had a locking issue:
gluster volume status
Locking failed on pplxgluster01... Please check log file for details.
Locking failed on pplxgluster04... Please check log file for details.
I also noticed that the glusterfsd process is using around 1000% CPU. It
is a decent server with 16 cores and 64 GB RAM.
The Gluster version is 3.11.2-1.
Can you please suggest how to troubleshoot further?
Thanks
Kashif
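A typical way to chase a stuck "Another transaction is in progress" lock (a
sketch only, assuming default log locations and a systemd-managed glusterd;
the exact log file name varies by version) is to check the glusterd log on
the peers named in the error and, once a stale lock is confirmed, restart the
management daemon on the node holding it. Restarting glusterd does not
interrupt brick I/O:

  # on each peer named in the error
  grep -iE 'lock|transaction' /var/log/glusterfs/*glusterd*.log | tail -n 20
  systemctl restart glusterd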
2017 Jun 09 | 2 | Extremely slow du
...me ID: fbf0ebb8-deab-4388-9d8a-f722618a624b
Status: Started
Snapshot Count: 0
Number of Bricks: 5
Transport-type: tcp
Bricks:
Brick1: pplxgluster01.x.y.z:/glusteratlas/brick001/gv0
Brick2: pplxgluster02.x.y.z:/glusteratlas/brick002/gv0
Brick3: pplxgluster03.x.y.z:/glusteratlas/brick003/gv0
Brick4: pplxgluster04.x.y.z:/glusteratlas/brick004/gv0
Brick5: pplxgluster05.x.y.z:/glusteratlas/brick005/gv0
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
auth.allow: x.y.z
I am not using directory quota.
Please let me know if you require more info.
Thanks
Ka...
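For metadata-heavy operations such as du over a distributed volume, the
settings that usually come up (a sketch only, assuming the volume is named
gv0 to match the brick paths above; option availability depends on the
Gluster version) are the readdir and metadata-cache tunables:

  gluster volume set gv0 performance.parallel-readdir on
  gluster volume set gv0 features.cache-invalidation on
  gluster volume set gv0 performance.cache-invalidation on
  gluster volume set gv0 performance.md-cache-timeout 60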
2017 Jun 12 | 2 | Extremely slow du
...pshot Count: 0
>> Number of Bricks: 5
>> Transport-type: tcp
>> Bricks:
>> Brick1: pplxgluster01.x.y.z:/glusteratlas/brick001/gv0
>> Brick2: pplxgluster02.x.y.z:/glusteratlas/brick002/gv0
>> Brick3: pplxgluster03.x.y.z:/glusteratlas/brick003/gv0
>> Brick4: pplxgluster04.x.y.z:/glusteratlas/brick004/gv0
>> Brick5: pplxgluster05.x.y.z:/glusteratlas/brick005/gv0
>> Options Reconfigured:
>> nfs.disable: on
>> performance.readdir-ahead: on
>> transport.address-family: inet
>> auth.allow: x.y.z
>>
>> I am not using directo...
2017 Jun 10 | 0 | Extremely slow du
...> Status: Started
> Snapshot Count: 0
> Number of Bricks: 5
> Transport-type: tcp
> Bricks:
> Brick1: pplxgluster01.x.y.z:/glusteratlas/brick001/gv0
> Brick2: pplxgluster02.x.y.z:/glusteratlas/brick002/gv0
> Brick3: pplxgluster03.x.y.z:/glusteratlas/brick003/gv0
> Brick4: pplxgluster04.x.y.z:/glusteratlas/brick004/gv0
> Brick5: pplxgluster05.x.y.z:/glusteratlas/brick005/gv0
> Options Reconfigured:
> nfs.disable: on
> performance.readdir-ahead: on
> transport.address-family: inet
> auth.allow: x.y.z
>
> I am not using directory quota.
>
> Please let m...
2017 Jun 16 | 0 | Extremely slow du
...Number of Bricks: 5
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: pplxgluster01.x.y.z:/glusteratlas/brick001/gv0
>>> Brick2: pplxgluster02.x.y.z:/glusteratlas/brick002/gv0
>>> Brick3: pplxgluster03.x.y.z:/glusteratlas/brick003/gv0
>>> Brick4: pplxgluster04.x.y.z:/glusteratlas/brick004/gv0
>>> Brick5: pplxgluster05.x.y.z:/glusteratlas/brick005/gv0
>>> Options Reconfigured:
>>> nfs.disable: on
>>> performance.readdir-ahead: on
>>> transport.address-family: inet
>>> auth.allow: x.y.z
>>>
>...
2017 Jun 18 | 1 | Extremely slow du
...>>> Transport-type: tcp
>>>> Bricks:
>>>> Brick1: pplxgluster01.x.y.z:/glusteratlas/brick001/gv0
>>>> Brick2: pplxgluster02.x.y.z:/glusteratlas/brick002/gv0
>>>> Brick3: pplxgluster03.x.y.z:/glusteratlas/brick003/gv0
>>>> Brick4: pplxgluster04.x.y.z:/glusteratlas/brick004/gv0
>>>> Brick5: pplxgluster05.x.y.z:/glusteratlas/brick005/gv0
>>>> Options Reconfigured:
>>>> nfs.disable: on
>>>> performance.readdir-ahead: on
>>>> transport.address-family: inet
>>>> auth.allow...
2017 Aug 31 | 0 | Gluster status fails
...try again
>>> after sometime.
>>>
>>> When I tried from another node, it seemed two nodes had a locking issue
>>>
>>> gluster volume status
>>> Locking failed on pplxgluster01... Please check log file for details.
>>> Locking failed on pplxgluster04... Please check log file for details.
>>>
>>
>> This suggests that there are concurrent gluster CLI operations being
>> performed on the same volume. Are you monitoring the cluster through Nagios,
>> or do you have a script on all the nodes which checks for the volume's...
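One way to confirm that (a sketch, assuming shell access on each peer) is to
watch for overlapping gluster CLI invocations from monitoring or cron while
the error occurs:

  ps -ef | grep '[g]luster volume'
  crontab -l
  grep -r gluster /etc/cron.d/ 2>/dev/null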
2017 Jul 11 | 2 | Extremely slow du
...Bricks:
> Brick1: pplxgluster01.x.y.z:/glusteratlas/brick001/gv0
> Brick2: pplxgluster02.x.y.z:/glusteratlas/brick002/gv0
> Brick3: pplxgluster03.x.y.z:/glusteratlas/brick003/gv0
> Brick4: pplxgluster04.x.y.z:/glusteratlas/brick004/gv0
> Brick5: pplxgluster05.x.y.z:/glusteratlas/brick005/gv0
> Options Reconfigured:
> nfs.disable: on
> performance.readdir-ahead: on
> transport.address-...
2017 Jun 09 | 0 | Extremely slow du
Can you please provide more details about your volume configuration and the
version of Gluster that you are using?
Regards,
Vijay
On Fri, Jun 9, 2017 at 5:35 PM, mohammad kashif <kashif.alig at gmail.com>
wrote:
> Hi
>
> I have just moved our 400 TB HPC storage from Lustre to Gluster. It is
> part of a research institute, and users have files ranging from very small
> to big (a few
2017 Jun 09 | 2 | Extremely slow du
Hi
I have just moved our 400 TB HPC storage from Lustre to Gluster. It is part
of a research institute, and users have files ranging from very small to big
(a few KB to 20 GB). Our setup consists of 5 servers, each with 96 TB of
RAID 6 disk. All servers are connected through 10G Ethernet, but not all
clients are.
Gluster volumes are distributed without any replication. There are
approximately 80 million files in
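For reference, a 5-brick distributed volume matching the brick list quoted
above would be created roughly like this (a reconstruction, not the poster's
actual commands; pure distribute is the default when no replica count is
given):

  gluster peer probe pplxgluster02.x.y.z    # repeat for each remaining node
  gluster volume create gv0 transport tcp \
      pplxgluster01.x.y.z:/glusteratlas/brick001/gv0 \
      pplxgluster02.x.y.z:/glusteratlas/brick002/gv0 \
      pplxgluster03.x.y.z:/glusteratlas/brick003/gv0 \
      pplxgluster04.x.y.z:/glusteratlas/brick004/gv0 \
      pplxgluster05.x.y.z:/glusteratlas/brick005/gv0
  gluster volume start gv0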