Displaying 8 results from an estimated 8 matches for "brick005".

2017 Jun 09   2   Extremely slow du
...ricks: 5 Transport-type: tcp Bricks: Brick1: pplxgluster01.x.y.z:/glusteratlas/brick001/gv0 Brick2: pplxgluster02..x.y.z:/glusteratlas/brick002/gv0 Brick3: pplxgluster03.x.y.z:/glusteratlas/brick003/gv0 Brick4: pplxgluster04.x.y.z:/glusteratlas/brick004/gv0 Brick5: pplxgluster05.x.y.z:/glusteratlas/brick005/gv0 Options Reconfigured: nfs.disable: on performance.readdir-ahead: on transport.address-family: inet auth.allow: x.y.z I am not using directory quota. Please let me know if you require some more info Thanks Kashif On Fri, Jun 9, 2017 at 2:34 PM, Vijay Bellur <vbellur at redhat.com> w...
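For context, tuning options often suggested on gluster-users for slow readdir/metadata-heavy workloads (du, ls -R) on a pure-distribute volume like gv0 are sketched below. This is a hedged sketch, not a fix confirmed in this thread: option availability depends on the GlusterFS version (performance.parallel-readdir, for instance, only exists from 3.10 onward), so check each option with "gluster volume set help" before applying anything.

  # Sketch only; assumes the volume name gv0 from the thread and a recent GlusterFS.
  gluster volume set gv0 cluster.readdir-optimize on
  gluster volume set gv0 performance.parallel-readdir on    # needs readdir-ahead on; GlusterFS >= 3.10
  gluster volume set gv0 features.cache-invalidation on
  gluster volume set gv0 performance.md-cache-timeout 600   # 600 is only accepted with cache-invalidation enabled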

2017 Jun 12   2   Extremely slow du
...Brick1: pplxgluster01.x.y.z:/glusteratlas/brick001/gv0 >> Brick2: pplxgluster02..x.y.z:/glusteratlas/brick002/gv0 >> Brick3: pplxgluster03.x.y.z:/glusteratlas/brick003/gv0 >> Brick4: pplxgluster04.x.y.z:/glusteratlas/brick004/gv0 >> Brick5: pplxgluster05.x.y.z:/glusteratlas/brick005/gv0 >> Options Reconfigured: >> nfs.disable: on >> performance.readdir-ahead: on >> transport.address-family: inet >> auth.allow: x.y.z >> >> I am not using directory quota. >> >> Please let me know if you require some more info >> >>...

2017 Jun 10   0   Extremely slow du
...> Bricks: > Brick1: pplxgluster01.x.y.z:/glusteratlas/brick001/gv0 > Brick2: pplxgluster02..x.y.z:/glusteratlas/brick002/gv0 > Brick3: pplxgluster03.x.y.z:/glusteratlas/brick003/gv0 > Brick4: pplxgluster04.x.y.z:/glusteratlas/brick004/gv0 > Brick5: pplxgluster05.x.y.z:/glusteratlas/brick005/gv0 > Options Reconfigured: > nfs.disable: on > performance.readdir-ahead: on > transport.address-family: inet > auth.allow: x.y.z > > I am not using directory quota. > > Please let me know if you require some more info > > Thanks > > Kashif > > >...

2017 Jun 16   0   Extremely slow du
...ster01.x.y.z:/glusteratlas/brick001/gv0 >>> Brick2: pplxgluster02..x.y.z:/glusteratlas/brick002/gv0 >>> Brick3: pplxgluster03.x.y.z:/glusteratlas/brick003/gv0 >>> Brick4: pplxgluster04.x.y.z:/glusteratlas/brick004/gv0 >>> Brick5: pplxgluster05.x.y.z:/glusteratlas/brick005/gv0 >>> Options Reconfigured: >>> nfs.disable: on >>> performance.readdir-ahead: on >>> transport.address-family: inet >>> auth.allow: x.y.z >>> >>> I am not using directory quota. >>> >>> Please let me know if you req...

2017 Jun 18   1   Extremely slow du
...usteratlas/brick001/gv0 >>>> Brick2: pplxgluster02..x.y.z:/glusteratlas/brick002/gv0 >>>> Brick3: pplxgluster03.x.y.z:/glusteratlas/brick003/gv0 >>>> Brick4: pplxgluster04.x.y.z:/glusteratlas/brick004/gv0 >>>> Brick5: pplxgluster05.x.y.z:/glusteratlas/brick005/gv0 >>>> Options Reconfigured: >>>> nfs.disable: on >>>> performance.readdir-ahead: on >>>> transport.address-family: inet >>>> auth.allow: x.y.z >>>> >>>> I am not using directory quota. >>>> >>...

2017 Jul 11   2   Extremely slow du
...Brick2: pplxgluster02..x.y.z:/glusteratlas/brick002/gv0 > Brick3: pplxgluster03.x.y.z:/glusteratlas/brick003/gv0 > Brick4: pplxgluster04.x.y.z:/glusteratlas/brick004/gv0 > Brick5: pplxgluster05.x.y.z:/glusteratlas/brick005/gv0 > Options Reconfigured: > nfs.disable: on > performance.readdir-ahead: on > transport.address-family: inet > auth.allow: x.y.z > > I am not using director...

2017 Jun 09   0   Extremely slow du
Can you please provide more details about your volume configuration and the version of Gluster that you are using? Regards, Vijay On Fri, Jun 9, 2017 at 5:35 PM, mohammad kashif <kashif.alig at gmail.com> wrote: > Hi > > I have just moved our 400 TB HPC storage from Lustre to Gluster. It is > part of a research institute and users have very small files to big files > ( few
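Both pieces of information requested above come from standard Gluster CLI commands, for example:

  gluster --version
  gluster volume info gv0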

2017 Jun 09   2   Extremely slow du
Hi, I have just moved our 400 TB HPC storage from Lustre to Gluster. It is part of a research institute, and users have files ranging from very small to large (a few KB to 20 GB). Our setup consists of 5 servers, each with 96 TB of RAID 6 disks. All servers are connected through 10G Ethernet, but not all clients are. Gluster volumes are distributed without any replication. There are approximately 80 million files in
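A hedged back-of-envelope figure shows why a recursive du is so slow at this scale: du must stat() every file, and on a FUSE-mounted distributed volume each stat is at least one network round trip. Assuming an illustrative (not measured) 1 ms per remote stat:

  80,000,000 files x 1 ms/stat ≈ 80,000 s ≈ 22 hours

for one full traversal, before directory-entry lookups and any self-heal checks are added on top.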