Displaying 10 results from an estimated 10 matches for "pplxgluster01".
2017 Aug 30 · 2 · Gluster status fails
...ere sometimes file creation fails. I found
that volume status is not working:
gluster volume status
Another transaction is in progress for atlasglust. Please try again after
sometime.
When I tried from another node, it seemed that two nodes have a locking issue:
gluster volume status
Locking failed on pplxgluster01... Please check log file for details.
Locking failed on pplxgluster04... Please check log file for details.
Also noticed that the glusterfsd process is using around 1000% CPU. It
is a decent server with 16 cores and 64GB RAM.
Gluster version is 3.11.2-1.
Can you please suggest how to troub...
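A minimal troubleshooting sketch for the locking errors above, assuming default glusterd log locations; the volume name is taken from the thread, and none of these steps are quoted from the original replies:

    # On the peers named in the errors (pplxgluster01, pplxgluster04),
    # check glusterd's log for who holds the cluster-wide lock.
    grep -i lock /var/log/glusterfs/glusterd.log | tail -n 50

    # Make sure no other gluster CLI or monitoring job is mid-transaction.
    ps aux | grep '[g]luster volume'

    # If the lock never clears, restarting glusterd on the affected peers is a
    # common remedy; it does not stop the brick (glusterfsd) processes.
    systemctl restart glusterd    # or 'service glusterd restart' on older init systems
    gluster volume status atlasglust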
2017 Jun 09 · 2 · Extremely slow du
..._64
clients are CentOS 6, but I tested with a CentOS 7 client as well and
results didn't change.
gluster volume info
Volume Name: atlasglust
Type: Distribute
Volume ID: fbf0ebb8-deab-4388-9d8a-f722618a624b
Status: Started
Snapshot Count: 0
Number of Bricks: 5
Transport-type: tcp
Bricks:
Brick1: pplxgluster01.x.y.z:/glusteratlas/brick001/gv0
Brick2: pplxgluster02.x.y.z:/glusteratlas/brick002/gv0
Brick3: pplxgluster03.x.y.z:/glusteratlas/brick003/gv0
Brick4: pplxgluster04.x.y.z:/glusteratlas/brick004/gv0
Brick5: pplxgluster05.x.y.z:/glusteratlas/brick005/gv0
Options Reconfigured:
nfs.disable: on
perform...
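On a pure distribute volume like this, du fans every directory listing out to all five bricks, which is usually where the time goes. Below is a hedged sketch of readdir-related volume options often suggested for this pattern; they are not confirmed fixes from this thread, and their availability should be checked against the installed Gluster version before applying:

    # Prefetch directory entries from the bricks in parallel (3.10+).
    gluster volume set atlasglust performance.parallel-readdir on
    gluster volume set atlasglust performance.readdir-ahead on

    # Trim readdir traffic on distribute-only volumes.
    gluster volume set atlasglust cluster.readdir-optimize on

    # Time the same subtree before and after (hypothetical mount path).
    time du -s /mnt/atlasglust/some-large-dir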
2017 Jun 12 · 2 · Extremely slow du
...n't change
>>
>> gluster volume info
>> Volume Name: atlasglust
>> Type: Distribute
>> Volume ID: fbf0ebb8-deab-4388-9d8a-f722618a624b
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 5
>> Transport-type: tcp
>> Bricks:
>> Brick1: pplxgluster01.x.y.z:/glusteratlas/brick001/gv0
>> Brick2: pplxgluster02.x.y.z:/glusteratlas/brick002/gv0
>> Brick3: pplxgluster03.x.y.z:/glusteratlas/brick003/gv0
>> Brick4: pplxgluster04.x.y.z:/glusteratlas/brick004/gv0
>> Brick5: pplxgluster05.x.y.z:/glusteratlas/brick005/gv0
>>...
2017 Jun 10 · 0 · Extremely slow du
...client as well and
> results didn't change
>
> gluster volume info
> Volume Name: atlasglust
> Type: Distribute
> Volume ID: fbf0ebb8-deab-4388-9d8a-f722618a624b
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 5
> Transport-type: tcp
> Bricks:
> Brick1: pplxgluster01.x.y.z:/glusteratlas/brick001/gv0
> Brick2: pplxgluster02.x.y.z:/glusteratlas/brick002/gv0
> Brick3: pplxgluster03.x.y.z:/glusteratlas/brick003/gv0
> Brick4: pplxgluster04.x.y.z:/glusteratlas/brick004/gv0
> Brick5: pplxgluster05.x.y.z:/glusteratlas/brick005/gv0
> Options Reconfigured...
2017 Jun 16 · 0 · Extremely slow du
...ster volume info Volume Name: atlasglust
>>> Type: Distribute
>>> Volume ID: fbf0ebb8-deab-4388-9d8a-f722618a624b
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 5
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: pplxgluster01.x.y.z:/glusteratlas/brick001/gv0
>>> Brick2: pplxgluster02.x.y.z:/glusteratlas/brick002/gv0
>>> Brick3: pplxgluster03.x.y.z:/glusteratlas/brick003/gv0
>>> Brick4: pplxgluster04.x.y.z:/glusteratlas/brick004/gv0
>>> Brick5: pplxgluster05.x.y.z:/glusteratlas/brick0...
2017 Jun 18 · 1 · Extremely slow du
...lasglust
>>>> Type: Distribute
>>>> Volume ID: fbf0ebb8-deab-4388-9d8a-f722618a624b
>>>> Status: Started
>>>> Snapshot Count: 0
>>>> Number of Bricks: 5
>>>> Transport-type: tcp
>>>> Bricks:
>>>> Brick1: pplxgluster01.x.y.z:/glusteratlas/brick001/gv0
>>>> Brick2: pplxgluster02.x.y.z:/glusteratlas/brick002/gv0
>>>> Brick3: pplxgluster03.x.y.z:/glusteratlas/brick003/gv0
>>>> Brick4: pplxgluster04.x.y.z:/glusteratlas/brick004/gv0
>>>> Brick5: pplxgluster05.x.y.z:/glu...
2017 Aug 31 · 0 · Gluster status fails
...olume status
>>> Another transaction is in progress for atlasglust. Please try again
>>> after sometime.
>>>
>>> When I tried from another node, it seemed that two nodes have a locking issue:
>>>
>>> gluster volume status
>>> Locking failed on pplxgluster01... Please check log file for details.
>>> Locking failed on pplxgluster04... Please check log file for details.
>>>
>>
>> This suggests that there are concurrent gluster CLI operations being
>> performed on the same volume. Are you monitoring the cluster through n...
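A quick way to test the concurrent-operations theory above is to look for overlapping gluster commands and timer-driven monitoring on each peer; a rough sketch, where the cron locations are just the usual suspects rather than details from the thread:

    # Any gluster CLI invocations running right now on this peer?
    ps aux | grep '[g]luster volume'

    # Monitoring or backup jobs that poll gluster on a schedule are a common
    # source of overlapping volume transactions.
    grep -ril gluster /etc/cron.d /etc/cron.hourly /var/spool/cron 2>/dev/null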
2017 Jul 11 · 2 · Extremely slow du
...Volume ID: fbf0ebb8-deab-4388-9d8a-f722618a624b
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 5
> Transport-type: tcp
> Bricks:
> Brick1: pplxgluster01.x.y.z:/glusteratlas/brick001/gv0
> Brick2: pplxgluster02.x.y.z:/glusteratlas/brick002/gv0
> Brick3: pplxgluster03.x.y.z:/glusteratlas/brick003/gv0
> Brick4: pplxgluster04.x.y.z:/glusteratlas/brick004/gv0
>...
2017 Jun 09 · 0 · Extremely slow du
Can you please provide more details about your volume configuration and the
version of gluster that you are using?
Regards,
Vijay
On Fri, Jun 9, 2017 at 5:35 PM, mohammad kashif <kashif.alig at gmail.com>
wrote:
> Hi
>
> I have just moved our 400 TB HPC storage from Lustre to Gluster. It is
> part of a research institute, and users have files ranging from very small to big
> (a few
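The details being asked for can be gathered with the standard CLI from any server in the trusted pool, for example:

    glusterfs --version | head -n 1    # installed Gluster version
    gluster volume info atlasglust     # type, brick list, reconfigured options
    gluster volume status atlasglust   # online bricks, ports, PIDs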
2017 Jun 09 · 2 · Extremely slow du
Hi
I have just moved our 400 TB HPC storage from Lustre to Gluster. It is part
of a research institute, and users have files ranging from very small to big (a few
KB to 20 GB). Our setup consists of 5 servers, each with 96 TB of RAID 6 disks.
All servers are connected through 10G Ethernet, but not all clients are.
Gluster volumes are distributed without any replication. There are
approximately 80 million files in
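With an all-distribute layout like this, a useful first check is whether the slowness comes from the bricks themselves or from the Gluster/FUSE path. A rough sketch follows; the brick path is taken from the volume info above, while the mount point and subtree name are assumed:

    # On one server: time a sample subtree directly on the local brick.
    # Note each brick holds only its share of the files, so compare
    # per-file rates rather than raw totals.
    time du -s /glusteratlas/brick001/gv0/some-project

    # On a client: the same subtree through the FUSE mount (assumed path).
    time du -s /mnt/atlasglust/some-project

    # A large gap points at readdir/lookup overhead in the distribute layer
    # rather than at the RAID 6 arrays.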