Displaying 8 results from an estimated 8 matches for "f722618a624b".
2017 Jun 09
2
Extremely slow du
...anks for your quick response. I am using gluster 3.8.11 on CentOS 7
servers
glusterfs-3.8.11-1.el7.x86_64
clients are CentOS 6 but I tested with a CentOS 7 client as well and
results didn't change
gluster volume info
Volume Name: atlasglust
Type: Distribute
Volume ID: fbf0ebb8-deab-4388-9d8a-f722618a624b
Status: Started
Snapshot Count: 0
Number of Bricks: 5
Transport-type: tcp
Bricks:
Brick1: pplxgluster01.x.y.z:/glusteratlas/brick001/gv0
Brick2: pplxgluster02.x.y.z:/glusteratlas/brick002/gv0
Brick3: pplxgluster03.x.y.z:/glusteratlas/brick003/gv0
Brick4: pplxgluster04.x.y.z:/glusteratlas/brick004/...
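The volume above is a pure Distribute volume with no replication, so each file lives on exactly one of the five bricks, selected by hashing the file name. A minimal Python sketch of that placement idea (deliberately simplified: the real DHT translator assigns per-directory hash ranges to bricks and uses its own 32-bit hash, not MD5 modulo the brick count; the brick labels below are generic placeholders, not the real brick paths):

```python
import hashlib

# Generic placeholder labels for the five bricks of a Distribute volume.
BRICKS = [f"brick{i:03d}" for i in range(1, 6)]

def brick_for(name: str) -> str:
    """Pick the single brick that would hold this file name (simplified model)."""
    # Hash the name to a 32-bit integer, then map it onto one brick.
    h = int.from_bytes(hashlib.md5(name.encode()).digest()[:4], "big")
    return BRICKS[h % len(BRICKS)]
```

Because every file sits on exactly one brick, a metadata crawl such as `du` has to reach whichever server holds each file, so the walk fans out across all five servers.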
2017 Jun 12
2
Extremely slow du
2017 Jun 10
0
Extremely slow du
2017 Jun 16
0
Extremely slow du
2017 Jun 18
1
Extremely slow du
2017 Jul 11
2
Extremely slow du
2017 Jun 09
0
Extremely slow du
Can you please provide more details about your volume configuration and the
version of gluster that you are using?
Regards,
Vijay
On Fri, Jun 9, 2017 at 5:35 PM, mohammad kashif <kashif.alig at gmail.com>
wrote:
> Hi
>
> I have just moved our 400 TB HPC storage from Lustre to Gluster. It is
> part of a research institute and users have very small files to big files
> ( few
2017 Jun 09
2
Extremely slow du
Hi
I have just moved our 400 TB HPC storage from Lustre to Gluster. It is part
of a research institute and users have very small files to big files (a few
KB to 20 GB). Our setup consists of 5 servers, each with 96 TB of RAID 6
disks. All servers are connected through 10G Ethernet, but not all clients
are. Gluster volumes are distributed without any replication. There are
approximately 80 million files in
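The numbers in this post already explain the symptom: `du` must stat every file, and on a FUSE-mounted Gluster client each stat is at least one network round trip. A back-of-envelope sketch (the per-stat latency below is an assumed illustrative figure, not a measurement from this thread):

```python
# Rough cost model for a serial `du` over the volume described above.
FILES = 80_000_000       # ~80 million files, from the post
RTT_SECONDS = 0.0002     # assumed ~200 us per stat round trip on 10G Ethernet

total_seconds = FILES * RTT_SECONDS
print(f"~{total_seconds / 3600:.1f} hours to stat every file serially")
```

At this scale the linear dependence on file count dominates: even halving the per-stat latency still leaves hours of wall-clock time for a full tree walk.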