2018 Apr 18
3
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
...k, heal
status takes many minutes to return, glusterfsd uses up tons of CPU (I saw
it spike to 600%). gluster already has massive performance issues for me,
but healing after a 4-hour downtime is on another level of bad perf.
For example, this command took many minutes to run:
gluster volume heal androidpolice_data3 info summary
Brick nexus2:/mnt/nexus2_block4/androidpolice_data3
Status: Connected
Total Number of entries: 91
Number of entries in heal pending: 90
Number of entries in split-brain: 0
Number of entries possibly healing: 1
Brick forge:/mnt/forge_block4/androidpolice_data3
Status: Connected
Total N...
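The subject line of this thread mentions raising performance.cache-size on high-RAM machines. A minimal sketch of what that tuning and a heal re-check might look like, assuming the volume name from the thread; the values here are illustrative placeholders, not recommendations from this thread:

```shell
# Raise the read cache on a high-RAM node (1GB is an example value, not advice
# from this thread):
gluster volume set androidpolice_data3 performance.cache-size 1GB

# Allow more concurrent self-heal threads to chew through the heal backlog
# faster (again, an illustrative value):
gluster volume set androidpolice_data3 cluster.shd-max-threads 4

# Re-check the heal backlog afterwards:
gluster volume heal androidpolice_data3 info summary
```

These commands require a running Gluster cluster, so treat them as a config sketch rather than something to paste in verbatim.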
2018 Apr 10
0
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Hi Vlad,
I actually saw that post already and even asked a question 4 days ago (
https://serverfault.com/questions/517775/glusterfs-direct-i-o-mode#comment1172497_540917).
The accepted answer also seems to go against your suggestion to enable
direct-io-mode, as it says it should be disabled for better performance when
the volume is used just for file accesses.
It'd be great if someone from the Gluster team
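For reference, a sketch of mounting with direct-io-mode explicitly disabled, which is what the accepted serverfault answer suggests for plain file-access workloads. The host and volume name are taken from this thread; the mountpoint is hypothetical:

```shell
# Mount the volume with direct I/O disabled so the kernel page cache is used
# for reads/writes (per the accepted serverfault answer; /mnt/ap_data is a
# made-up mountpoint for illustration):
mount -t glusterfs -o direct-io-mode=disable nexus2:/androidpolice_data3 /mnt/ap_data
```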
2018 Apr 10
2
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Wish I knew, or was able to get, a detailed description of those options myself.
Here is a post on direct-io-mode:
https://serverfault.com/questions/517775/glusterfs-direct-i-o-mode
Same as you, I ran tests on a large volume of files and found that the main
delays are in attribute calls, which is how I ended up with those mount
options to improve performance.
I discovered those options basically by googling this user list with
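The specific mount options aren't listed in this snippet, but since the delays were traced to attribute calls, a plausible sketch is raising the FUSE attribute and entry cache timeouts at mount time. The timeout values and mountpoint below are illustrative assumptions, not the options actually used in the thread:

```shell
# Cache stat/lookup results longer on the client to reduce round-trips for
# attribute calls (600s is an example value; /mnt/ap_data is hypothetical):
mount -t glusterfs \
  -o attribute-timeout=600,entry-timeout=600 \
  nexus2:/androidpolice_data3 /mnt/ap_data
```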