Displaying 20 results from an estimated 100000 matches similar to: "where is performance.cache-size"
2017 Oct 09
0
Peer isolation while healing
Hi,
There is no way to isolate the healing peer. Healing happens from the good
brick to the bad brick.
I guess your replica bricks are on different peers. If you try to isolate
the healing peer, it will stop the healing process itself.
What is the error you are getting while writing? It would be helpful to
debug the issue, if you can provide us the output of the following commands:
gluster
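The command list above is truncated in this digest. As a rough sketch, the
heal-debugging output usually requested on this list looks something like the
following ("myvol" is a placeholder volume name, not from the original message):

    gluster volume info myvol        # volume layout and configured options
    gluster volume status myvol      # which bricks and self-heal daemons are online
    gluster volume heal myvol info   # entries still pending heal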
2017 Oct 09
2
Peer isolation while healing
That makes sense ^_^
Unfortunately I haven't kept the interesting data you need.
Basically I had some write errors on my gluster clients when my
monitoring tool tested mkdir & create files.
The server's load was huge during the healing (CPU at 100%), and the
disk latency increased a lot.
That may be the source of my write errors, we'll know for sure next
time... I'll keep
2017 Oct 11
1
gluster volume + lvm : recommendation or necessity ?
After some extra reading about LVM snapshots & Gluster, I think I can
conclude it may be a bad idea to use them on big storage bricks.
I understood that the maximum LVM metadata size, used to store the snapshot
data, is about 16GB.
So if I have a brick with a volume around 10TB (for example), daily
snapshots, and files changing by ~100GB: the LVM snapshot is useless.
LVM's snapshots don't
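As a rough illustration of that 16GB metadata ceiling, thin pool data and
metadata usage can be inspected with lvs (the volume group name "vg_bricks"
below is a placeholder):

    # Show size and fill level of the thin pool's data and metadata (placeholder VG name).
    lvs -a -o lv_name,lv_size,data_percent,metadata_percent,lv_metadata_size vg_bricks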
2018 Apr 10
0
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Hi Vlad,
I'm using only localhost: mounts.
Can you please explain what effect each option has on the performance issues
shown in my posts?
"negative-timeout=10,attribute-timeout=30,fopen-keep-cache,direct-io-mode=enable,fetch-attempts=5"
From what I remember, direct-io-mode=enable didn't make a difference in my
tests, but I suppose I can try again. The explanations about
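For reference, the options quoted above would normally be combined into a
single /etc/fstab entry along these lines (volume name and mount point are
placeholders; a sketch, not a tested recommendation):

    # Placeholder volume "myvol" mounted from localhost with the options discussed in this thread.
    localhost:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,negative-timeout=10,attribute-timeout=30,fopen-keep-cache,direct-io-mode=enable,fetch-attempts=5  0 0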
2017 Oct 09
3
Peer isolation while healing
Hi everyone,
I've been using gluster for a few months now, on a simple 2-peer
replicated infrastructure, 22TB each.
One of the peers was offline for 10 hours last week (RAID resync
after a disk crash), and while my gluster server was healing bricks, I
could see some write errors on my gluster clients.
I couldn't find a way to isolate my healing peer in the documentation
or
2018 Apr 10
0
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Hi Vlad,
I actually saw that post already and even asked a question 4 days ago (
https://serverfault.com/questions/517775/glusterfs-direct-i-o-mode#comment1172497_540917).
The accepted answer also seems to go against your suggestion to enable
direct-io-mode as it says it should be disabled for better performance when
used just for file accesses.
It'd be great if someone from the Gluster team
2018 Apr 10
2
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
You definitely need mount options in /etc/fstab;
use the ones from here:
http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html
I went with local mounts to achieve performance as well.
Also, the 3.12 or 3.10 branches would be preferable for production.
On Fri, Apr 6, 2018 at 4:12 AM, Artem Russakovskii <archon810 at gmail.com>
wrote:
> Hi again,
>
> I'd like to
2018 Apr 06
0
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Hi again,
I'd like to expand on the performance issues and plead for help. Here's one
case which shows these odd hiccups: https://i.imgur.com/CXBPjTK.gifv.
In this GIF, where I switch back and forth between copy operations on 2
servers, I'm copying a 10GB dir full of .apk and image files.
On server "hive" I'm copying straight from the main disk to an attached
volume
2018 Apr 06
1
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
I restarted rsync, and this has been sitting there for almost a minute,
having barely moved several bytes in that time:
2014/11/545b06baa3d98/com.google.android.apps.inputmethod.zhuyin-2.1.0.79226761-armeabi-v7a-175-minAPI14.apk
6,389,760 45% 18.76kB/s 0:06:50
I straced each of the 3 processes rsync created and saw this (note: every
time there were several seconds of no output, I
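A minimal sketch of that kind of tracing, assuming standard strace flags
(the PID is a placeholder):

    pgrep -a rsync            # list the rsync processes and their PIDs
    strace -tt -T -p <PID>    # -tt: microsecond timestamps, -T: time spent in each syscall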
2018 Apr 18
0
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
On 04/18/2018 10:14 AM, Artem Russakovskii wrote:
> Following up here on a related issue that is very serious for us.
>
> I took down one of the 4 replicate gluster servers for maintenance
> today. There are 2 gluster volumes totaling about 600GB. Not that much
> data. After the server comes back online, it starts auto healing and
> pretty much all operations on gluster freeze for
2018 Apr 10
2
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Wish I knew or was able to get a detailed description of those options myself.
Here is direct-io-mode:
https://serverfault.com/questions/517775/glusterfs-direct-i-o-mode
Same as you, I ran tests on a large volume of files, found that the main
delays are in attribute calls, and ended up with those mount options to
improve performance.
I discovered those options through basically googling this user list with
2018 Apr 18
0
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
On 04/18/2018 10:35 AM, Artem Russakovskii wrote:
> Hi Ravi,
>
> Could you please expand on how these would help?
>
> By forcing full here, we move the logic from the CPU to the network, thus
> decreasing CPU utilization, is that right?
Yes, 'diff' employs the rchecksum FOP, which does a sha256(?) checksum and
can consume CPU. So yes, it is sort of shifting the load from the CPU
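For context, switching the heal algorithm discussed here is a plain
volume-set call along these lines ("myvol" is a placeholder; weigh the
CPU-vs-network trade-off for your own workload):

    # 'full' copies whole files from the good brick instead of computing checksum-based diffs.
    gluster volume set myvol cluster.data-self-heal-algorithm full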
2018 Apr 06
3
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Hi,
I'm trying to squeeze performance out of gluster on four 80GB-RAM, 20-CPU
machines where Gluster runs on attached block storage (Linode) as 4
replicate bricks, and so far everything I've tried results in sub-optimal
performance.
There are many files - mostly images, several million - and many operations
take minutes; copying multiple files (even if they're small) suddenly
freezes up for
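Since the thread subject is performance.cache-size, a sketch of how such an
option is applied and verified ("myvol" and the 4GB value are placeholders,
not a recommendation for this workload):

    gluster volume set myvol performance.cache-size 4GB   # raise the read cache on a high-RAM node
    gluster volume get myvol performance.cache-size       # confirm the value took effect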
2018 Apr 18
0
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Btw, I've now noticed at least 5 variations in toggling binary option
values. Are they all interchangeable, or will using the wrong value not
work in some cases?
yes/no
true/false
True/False
on/off
enable/disable
It's quite a confusing and inconsistent practice, especially given that many
options will accept any value without validation or erroring out.
Sincerely,
Artem
--
Founder, Android
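One way to check what a given option accepts and what it is currently set to
("myvol" and the chosen option are placeholders) is the built-in help and
get commands:

    gluster volume set help | grep -A 2 performance.flush-behind   # option description and default
    gluster volume get myvol performance.flush-behind              # current value on a volume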
2018 Apr 18
1
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
On 04/18/2018 11:59 AM, Artem Russakovskii wrote:
> Btw, I've now noticed at least 5 variations in toggling binary option
> values. Are they all interchangeable, or will using the wrong value
> not work in some cases?
>
> yes/no
> true/false
> True/False
> on/off
> enable/disable
>
> It's quite a confusing/inconsistent practice, especially given that
2018 Apr 18
2
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Hi Ravi,
Could you please expand on how these would help?
By forcing full here, we move the logic from the CPU to the network, thus
decreasing CPU utilization, is that right? This is assuming the CPU and
disk utilization are caused by the diff algorithm and not by lstat and
other calls or something.
> Option: cluster.data-self-heal-algorithm
> Default Value: (null)
> Description: Select between
2018 Apr 18
2
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Thanks for the link. Looking at the status of that doc, it isn't quite
ready yet, and there's no mention of the option.
Does that mean that whatever is ready now in 4.0.1 is incomplete but can be
enabled via granular-entry-heal=on, and that when it is complete, it'll
become the default and the flag will simply go away?
Is there any risk in enabling the option now in 4.0.1?
Sincerely,
Artem
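For reference, granular entry heal is toggled either through the dedicated
heal sub-command or the cluster.granular-entry-heal volume option ("myvol"
is a placeholder; check the release notes for your version before enabling):

    # The sub-command form is meant to be used only when no heals are pending.
    gluster volume heal myvol granular-entry-heal enable
    # Equivalent volume option form:
    gluster volume set myvol cluster.granular-entry-heal on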
2018 Apr 18
3
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Following up here on a related issue that is very serious for us.
I took down one of the 4 replicate gluster servers for maintenance today.
There are 2 gluster volumes totaling about 600GB. Not that much data. After
the server comes back online, it starts auto healing and pretty much all
operations on gluster freeze for many minutes.
For example, I was trying to run an ls -alrt in a folder with 7300
2017 Oct 11
0
gluster volume + lvm : recommendation or necessity ?
Volumes are aggregations of bricks, so I would consider bricks as the
relevant unit here rather than volumes. Taking the constraints from the
blog [1]:
* All bricks should be carved out from independent thinly provisioned
logical volumes (LVs). In other words, no two bricks should share a common
LV. More details about thin provisioning and thin provisioned snapshots
can be found here.
* This thinly
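Following those constraints, carving a brick out of its own thin LV would
look roughly like this (device, VG, names, and sizes are placeholders; a
layout sketch, not sizing advice):

    pvcreate /dev/sdb
    vgcreate vg_bricks /dev/sdb
    # One dedicated thin pool; note the ~16GiB ceiling on pool metadata.
    lvcreate --thinpool vg_bricks/brickpool1 --size 1T --poolmetadatasize 15G --zero n
    # The brick lives on its own thin LV inside that pool.
    lvcreate --thin vg_bricks/brickpool1 --name brick1 --virtualsize 1T
    mkfs.xfs -i size=512 /dev/vg_bricks/brick1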
2017 Oct 11
0
gluster volume + lvm : recommendation or necessity ?
On 10/11/2017 09:50 AM, ML wrote:
> Hi everyone,
>
> I've read in the gluster & redhat documentation that it seems recommended
> to use XFS over LVM before creating & using gluster volumes.
>
> Sources :
> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Formatting_and_Mounting_Bricks.html
>
>