search for: wmark

Displaying 12 results from an estimated 12 matches for "wmark".

2018 Jul 11
1
[PATCH v35 1/5] mm: support to get hints of free page blocks
...e small amount of memory, e.g. 2 free page blocks of "MAX_ORDER - 1". So when other applications happen to do some allocation, they may easily get some from the reserved memory left on the free list. Without that reserved memory, other allocations may push the system's free memory below WMARK[MIN], and kswapd would start to do swapping. This is actually just a small optimization to reduce the probability of causing swapping (nice to have, but not mandatory, because we will allocate free page blocks one by one). > But let me note that I am not really convinced how this (or previous)...
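The reserve discussed in this snippet interacts with the kernel's per-zone watermarks. As a rough illustration (not part of the patch itself), the current watermark values and the tunable that drives them can be inspected on any Linux host:

```shell
# Per-zone watermarks: when free pages in a zone fall below "low",
# kswapd wakes up for background reclaim; below "min", allocations
# enter direct reclaim (or, as noted above, swapping may start).
grep -E 'Node|min|low|high' /proc/zoneinfo | head -n 20

# vm.min_free_kbytes drives the watermark calculation; a larger value
# keeps a bigger reserve on the free lists.
cat /proc/sys/vm/min_free_kbytes
```

Values vary per system; this only shows where the WMARK[MIN] threshold mentioned above comes from.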
2018 Feb 05
0
Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
...85760 cluster.tier-max-mb: 64000 features.ctr-sql-db-wal-autocheckpoint: 2500 cluster.tier-hot-compact-frequency: 86400 cluster.tier-cold-compact-frequency: 86400 performance.readdir-ahead: off cluster.watermark-low: 50 storage.build-pgfid: on performance.rda-request-size: 128KB performance.rda-low-wmark: 4KB cluster.min-free-disk: 5% auto-delete: enable On Sun, Feb 4, 2018 at 9:44 PM, Amar Tumballi <atumball at redhat.com> wrote: > Thanks for the report Artem, > > Looks like the issue is about cache warming up. Specifically, I suspect rsync > doing a 'readdir(), stat(), file...
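The rda-* knobs in the option dump above are set per volume with `gluster volume set`. A hedged sketch of adjusting them, mirroring the values shown (the volume name `gv0` is a placeholder, and note `performance.readdir-ahead` is `off` in this dump):

```shell
# Sketch only: re-enable readdir-ahead and size its prefetch window.
# "gv0" is a placeholder volume name; option values mirror the dump above.
gluster volume set gv0 performance.readdir-ahead on
gluster volume set gv0 performance.rda-request-size 128KB
gluster volume set gv0 performance.rda-low-wmark 4KB
```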
2018 Feb 05
2
Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
Thanks for the report Artem, Looks like the issue is about cache warming up. Specifically, I suspect rsync is doing a 'readdir(), stat(), file operations' loop, whereas when a find or ls is issued, we get a 'readdirp()' request, which contains the stat information along with the entries, and which also makes sure the cache is up-to-date (at the md-cache layer). Note that this is just an off-the memory
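The workaround the thread title describes can be scripted: run a recursive listing (which issues readdirp() and populates md-cache) before the rsync pass. A minimal sketch, assuming MOUNT is the Gluster client mount point:

```shell
# Warm the md-cache before an rsync run: a recursive listing issues
# readdirp(), which carries stat data with each entry, so rsync's later
# per-file stat() calls hit the cache. MOUNT is a placeholder path.
MOUNT="${MOUNT:-/tmp}"
ls -lR "$MOUNT" > /dev/null 2>&1
echo "warmed $MOUNT"
```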
2018 Jul 11
3
[PATCH v35 1/5] mm: support to get hints of free page blocks
On 07/11/2018 05:21 PM, Michal Hocko wrote: > On Tue 10-07-18 18:44:34, Linus Torvalds wrote: > [...] >> That was what I tried to encourage with actually removing the pages >> from the page list. That would be an _incremental_ interface. You can >> remove MAX_ORDER-1 pages one by one (or a hundred at a time), and mark >> them free for ballooning that way. And if you
2019 Dec 28
1
GFS performance under heavy traffic
...>> cluster.use-compound-fops off >> performance.parallel-readdir off >> performance.rda-request-size 131072 >> performance.rda-low-wmark 4096 >> performance.rda-high-wmark 128KB >> performance.rda-cache-limit 10MB >> performance.nl-cache-positive-entry false...
2019 Dec 27
0
GFS performance under heavy traffic
... > cluster.use-compound-fops off > performance.parallel-readdir off > performance.rda-request-size 131072 > performance.rda-low-wmark 4096 > performance.rda-high-wmark 128KB > performance.rda-cache-limit 10MB > performance.nl-cache-positive-entry false...
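The rda-* values quoted in this thread come from a per-volume option dump; they can be read back on a live volume with `volume get`. A sketch (the volume name `gv0` is a placeholder):

```shell
# List only the readdir-ahead watermark and cache tunables for the
# volume "gv0" (placeholder name).
gluster volume get gv0 all | grep -E 'rda-(request-size|low-wmark|high-wmark|cache-limit)'
```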
2018 Jan 18
0
Blocking IO when hot tier promotion daemon runs
...Brick3: pod-sjc1-gluster1:/data/brick1/gv0 Brick4: pod-sjc1-gluster2:/data/brick1/gv0 Brick5: pod-sjc1-gluster1:/data/brick2/gv0 Brick6: pod-sjc1-gluster2:/data/brick2/gv0 Brick7: pod-sjc1-gluster1:/data/brick3/gv0 Brick8: pod-sjc1-gluster2:/data/brick3/gv0 Options Reconfigured: performance.rda-low-wmark: 4KB performance.rda-request-size: 128KB storage.build-pgfid: on cluster.watermark-low: 50 performance.readdir-ahead: off cluster.tier-cold-compact-frequency: 86400 cluster.tier-hot-compact-frequency: 86400 features.ctr-sql-db-wal-autocheckpoint: 2500 cluster.tier-max-mb: 64000 cluster.tier-max-pro...
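The tiering daemon's promotion/demotion behaviour is governed by the cluster watermarks; `cluster.watermark-low: 50` in the dump above is the lower bound. A hedged sketch of setting both bounds (the volume name `gv0` and the value 90 are placeholders, not taken from this thread):

```shell
# Sketch: hot-tier usage below watermark-low => promotions only;
# above watermark-hi => demotions only. "gv0" and 90 are placeholders.
gluster volume set gv0 cluster.watermark-low 50
gluster volume set gv0 cluster.watermark-hi 90
```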
2019 Dec 24
1
GFS performance under heavy traffic
Hi David, On Dec 24, 2019 02:47, David Cunningham <dcunningham at voisonics.com> wrote: > > Hello, > > In testing we found that actually the GFS client having access to all 3 nodes made no difference to performance. Perhaps that's because the 3rd node that wasn't accessible from the client before was the arbiter node? It makes sense, as no data is being generated towards
2018 Feb 27
2
Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
...atures.ctr-sql-db-wal-autocheckpoint: 2500 > cluster.tier-hot-compact-frequency: 86400 > cluster.tier-cold-compact-frequency: 86400 > performance.readdir-ahead: off > cluster.watermark-low: 50 > storage.build-pgfid: on > performance.rda-request-size: 128KB > performance.rda-low-wmark: 4KB > cluster.min-free-disk: 5% > auto-delete: enable > > > On Sun, Feb 4, 2018 at 9:44 PM, Amar Tumballi <atumball at redhat.com> wrote: > >> Thanks for the report Artem, >> >> Looks like the issue is about cache warming up. Specifically, I suspect >>...
2018 Mar 19
3
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Hi, On 03/19/2018 03:42 PM, TomK wrote: > On 3/19/2018 5:42 AM, Ondrej Valousek wrote: > Removing NFS or NFS Ganesha from the equation, not very impressed on my > own setup either. For the writes it's doing, that's a lot of CPU usage > in top. Seems bottlenecked via a single execution core somewhere trying > to facilitate reads / writes to the other bricks. > >
2018 Jan 18
2
Blocking IO when hot tier promotion daemon runs
Hi Tom, The volume info doesn't show the hot bricks. I think you took the volume info output before attaching the hot tier. Can you send the volume info of the current setup where you see this issue? The logs you sent are from a later point in time; the issue is hit earlier than what is available in those logs, so I need logs from an earlier time, and along with the entire tier