2017 Aug 30, 2 replies: Gluster status fails
Hi
I am running a 400 TB, five-node, purely distributed Gluster setup. I am
troubleshooting an issue where file creation sometimes fails. I found
that volume status is not working:
gluster volume status
Another transaction is in progress for atlasglust. Please try again after
sometime.
When I tried from another node, it seems two nodes have a locking issue:
gluster volume status
Locking failed on pplxgluster01... Please check log file for details.
Locking failed on pplxgluster04... Please check log file for details.
Also noticed that glusterfsd...
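A minimal diagnostic sketch for this kind of stale volume lock, assuming a systemd-managed glusterd and the default log location (older releases write to etc-glusterfs-glusterd.vol.log rather than glusterd.log); restarting glusterd on the peers that report the failure is a commonly suggested remedy, not a guaranteed fix, and it leaves the brick (glusterfsd) processes untouched:

# Confirm all peers are connected, from a node where the CLI responds
gluster peer status

# Look for lock-related messages on the nodes that reported the failure
grep -i lock /var/log/glusterfs/glusterd.log | tail -n 20

# If a lock left behind by an interrupted transaction is confirmed,
# restart the management daemon on the affected node to release it
systemctl restart glusterd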
2017 Jun 09, 2 replies: Extremely slow du
Hi Vijay
Thanks for your quick response. I am using Gluster 3.8.11 on CentOS 7
servers (glusterfs-3.8.11-1.el7.x86_64).
Clients are CentOS 6, but I tested with a CentOS 7 client as well and the
results didn't change.
gluster volume info
Volume Name: atlasglust
Type: Distribute
Volume ID: fbf0ebb8-deab-4388-9d8a-f722618a624b
Status: Started
Snapshot Count: 0
Number of Bricks: 5
Transport-type: tcp
Bricks:
Brick1: pplxgluster01.x.y.z:/glusteratlas/brick001/gv0
Brick2: pplxgluster02.x.y.z:/glusteratlas/brick002/gv0
Brick3: pplxgluster03.x.y.z:/glusteratlas...
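For a slow metadata crawl like du over a pure distribute volume, a hedged sketch of the md-cache tunables often suggested on this list follows; the values are illustrative rather than benchmarked, and since some of this invalidation support landed around the 3.8/3.9 timeframe, the option names should be checked against gluster volume set help on 3.8.11 before applying:

# Let clients cache stat/xattr metadata for longer, with server-side
# upcall invalidation to keep the caches coherent across clients
gluster volume set atlasglust features.cache-invalidation on
gluster volume set atlasglust features.cache-invalidation-timeout 600
gluster volume set atlasglust performance.cache-invalidation on
gluster volume set atlasglust performance.md-cache-timeout 600

# Keep more inodes cached on the bricks so repeated lookups stay warm
gluster volume set atlasglust network.inode-lru-limit 90000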
2017 Jun 12, 2 replies: Extremely slow du
...Thanks for your quick response. I am using Gluster 3.8.11 on CentOS 7
>> servers
>> glusterfs-3.8.11-1.el7.x86_64
>>
>> Clients are CentOS 6, but I tested with a CentOS 7 client as well and
>> the results didn't change.
>>
>> gluster volume info
>> Volume Name: atlasglust
>> Type: Distribute
>> Volume ID: fbf0ebb8-deab-4388-9d8a-f722618a624b
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 5
>> Transport-type: tcp
>> Bricks:
>> Brick1: pplxgluster01.x.y.z:/glusteratlas/brick001/gv0
>> Brick2: pplxglus...
2017 Jun 10, 0 replies: Extremely slow du
...e:
> Hi Vijay
>
> Thanks for your quick response. I am using Gluster 3.8.11 on CentOS 7
> servers
> glusterfs-3.8.11-1.el7.x86_64
>
> Clients are CentOS 6, but I tested with a CentOS 7 client as well and
> the results didn't change.
>
> gluster volume info
> Volume Name: atlasglust
> Type: Distribute
> Volume ID: fbf0ebb8-deab-4388-9d8a-f722618a624b
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 5
> Transport-type: tcp
> Bricks:
> Brick1: pplxgluster01.x.y.z:/glusteratlas/brick001/gv0
> Brick2: pplxgluster02.x.y.z:/glusteratlas/brick002/...
2017 Jun 16, 0 replies: Extremely slow du
...se. I am using Gluster 3.8.11 on CentOS 7
>>> servers
>>> glusterfs-3.8.11-1.el7.x86_64
>>>
>>> Clients are CentOS 6, but I tested with a CentOS 7 client as well and
>>> the results didn't change.
>>>
>>> gluster volume info
>>> Volume Name: atlasglust
>>> Type: Distribute
>>> Volume ID: fbf0ebb8-deab-4388-9d8a-f722618a624b
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 5
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: pplxgluster01.x.y.z:/glusteratlas/brick0...
2017 Jun 18, 1 reply: Extremely slow du
...1 on CentOS 7
>>>> servers
>>>> glusterfs-3.8.11-1.el7.x86_64
>>>>
>>>> Clients are CentOS 6, but I tested with a CentOS 7 client as well and
>>>> the results didn't change.
>>>>
>>>> gluster volume info
>>>> Volume Name: atlasglust
>>>> Type: Distribute
>>>> Volume ID: fbf0ebb8-deab-4388-9d8a-f722618a624b
>>>> Status: Started
>>>> Snapshot Count: 0
>>>> Number of Bricks: 5
>>>> Transport-type: tcp
>>>> Bricks:
>>>> Brick1: pplxglus...
2017 Aug 31, 0 replies: Gluster status fails
...m running a 400 TB, five-node, purely distributed Gluster setup. I am
>>> troubleshooting an issue where file creation sometimes fails. I found
>>> that volume status is not working:
>>>
>>> gluster volume status
>>> Another transaction is in progress for atlasglust. Please try again
>>> after sometime.
>>>
>>> When I tried from another node, it seems two nodes have a locking issue:
>>>
>>> gluster volume status
>>> Locking failed on pplxgluster01... Please check log file for details.
>>> Locking fa...
2017 Jul 11, 2 replies: Extremely slow du
...3.8.11 on CentOS 7 servers
> glusterfs-3.8.11-1.el7.x86_64
>
> Clients are CentOS 6, but I tested with a CentOS 7
> client as well and the results didn't change.
>
> gluster volume info
> Volume Name: atlasglust
> Type: Distribute
> Volume ID: fbf0ebb8-deab-4388-9d8a-f722618a624b
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 5
> Transport-type: tcp
>...
2017 Jun 09, 0 replies: Extremely slow du
Can you please provide more details about your volume configuration and the
version of Gluster that you are using?
Regards,
Vijay
On Fri, Jun 9, 2017 at 5:35 PM, mohammad kashif <kashif.alig at gmail.com>
wrote:
> Hi
>
> I have just moved our 400 TB HPC storage from Lustre to Gluster. It is
> part of a research institute, and users have files from very small to big
> ( few
2017 Jun 09, 2 replies: Extremely slow du
Hi
I have just moved our 400 TB HPC storage from Lustre to Gluster. It is part
of a research institute, and users have files ranging from very small to
big (a few KB to 20 GB). Our setup consists of 5 servers, each with 96 TB
of RAID 6 disks. All servers are connected through 10G Ethernet, but not
all clients are. Gluster volumes are distributed without any replication.
There are approximately 80 million files in
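A du over a tree this size has to stat every file, and on a pure distribute volume each uncached lookup is a network round trip, which is why the crawl is so slow. One client-side mitigation is to lengthen the kernel's FUSE attribute and entry caching at mount time. A minimal sketch, assuming the standard FUSE client, a server name taken from the brick list above, and an illustrative mount point, with the trade-off that clients may see metadata that is stale for up to the timeout:

# Mount with 10-minute kernel attribute/entry caches instead of the
# much shorter FUSE defaults, so repeated stats avoid round trips
mount -t glusterfs \
  -o attribute-timeout=600,entry-timeout=600 \
  pplxgluster01.x.y.z:/atlasglust /mnt/atlasglust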