Results similar to: "[Bug 9560] New: drop-cache option"

Displaying 20 results from an estimated 600 matches similar to: "[Bug 9560] New: drop-cache option"

2009 Dec 21
3
DO NOT REPLY [Bug 7004] New: Use posix_fadvise to free cached file contents when done
https://bugzilla.samba.org/show_bug.cgi?id=7004 Summary: Use posix_fadvise to free cached file contents when done Product: rsync Version: 3.0.6 Platform: All OS/Version: Linux Status: NEW Severity: enhancement Priority: P3 Component: core AssignedTo: wayned at samba.org ReportedBy: ted at
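The mechanism requested here is the POSIX_FADV_DONTNEED advice. A minimal sketch of the read side, assuming a hypothetical file path and a single sequential pass (not code from the bug report):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical large file; the path is illustrative only. */
    int fd = open("/backup/some-large-file", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char buf[65536];
    while (read(fd, buf, sizeof buf) > 0)
        ;  /* consume the file once, e.g. checksum or copy it */

    /* Tell the kernel the cached pages for this file are no longer
     * needed; offset 0 with length 0 means "the whole file".
     * posix_fadvise returns an error number instead of setting errno. */
    int err = posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
    if (err != 0)
        fprintf(stderr, "posix_fadvise: %s\n", strerror(err));

    close(fd);
    return 0;
}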
2007 May 07
1
Announce: rsync fadvise (cache dropping) patch updated
Hi List, I have updated my rsync fadvise patch which stops rsync from ousting all your other data from cache when running large jobs. I have also written an article about the whole issue. http://insights.oetiker.ch/linux/fadvise.html cheers tobi -- Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15 CH-4600 Olten http://it.oetiker.ch tobi@oetiker.ch ++41 62 213 9902
2007 Apr 22
1
patch to stop rsync from polluting the filesystem cache
Hi List, I am using rsync for hard-link backups. I found that there is a major problem with frequent backups filling up the file system cache with all the data from the files being backed up. The effect is that all the other 'sensible' data in the cache gets thrown out in the process. This is rather unfortunate, as the performance of the system becomes very bad after running rsync. Some
2020 Aug 08
1
Re: [PATCH nbdkit] plugins: file: More standard cache mode names
On Sun, Aug 9, 2020 at 12:28 AM Richard W.M. Jones <rjones@redhat.com> wrote: > > On Sat, Aug 08, 2020 at 01:24:02AM +0300, Nir Soffer wrote: > > The new cache=none mode is misleading since it does not avoid usage of > > the page cache. When using shared storage, we may get stale data from > > the page cache. When writing, we flush after every write which is > >
2010 Nov 04
4
fadvise DONTNEED implementation (or lack thereof)
I've recently been trying to track down the root cause of my server's persistent issue of thrashing horribly after being left inactive. It seems that the issue is likely my nightly backup schedule (using rsync) which traverses my entire 50GB home directory. I was surprised to find that rsync does not use fadvise to notify the kernel of its use-once data usage pattern. It looks like a
2010 Nov 23
1
[RFC PATCH] fadvise support in rsync
Warning for kernel folks: I'm not much of an mm person; let me know if I got anything horribly wrong. Many folks use rsync in their nightly backup jobs. In these applications, speed is of minimal concern and should be sacrificed in order to minimize the effect of rsync on the rest of the machine. When rsync is working on a large directory it can quickly fill the page cache with written data,
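A detail that comes up repeatedly in these threads: POSIX_FADV_DONTNEED cannot discard dirty pages, so written data has to be flushed before the advice has any effect. A sketch of that drop-behind pattern, assuming Linux (sync_file_range is Linux-specific); this is not the actual rsync patch:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/* Flush and drop a byte range we have finished writing, so it stops
 * competing with more useful data in the page cache. */
static void drop_behind(int fd, off_t offset, off_t len)
{
    /* Start writeback for the range and wait for it to finish;
     * only clean pages can be evicted by POSIX_FADV_DONTNEED. */
    if (sync_file_range(fd, offset, len,
                        SYNC_FILE_RANGE_WAIT_BEFORE |
                        SYNC_FILE_RANGE_WRITE |
                        SYNC_FILE_RANGE_WAIT_AFTER) != 0)
        perror("sync_file_range");

    int err = posix_fadvise(fd, offset, len, POSIX_FADV_DONTNEED);
    if (err != 0)
        fprintf(stderr, "posix_fadvise: %s\n", strerror(err));
}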
2020 Aug 07
2
[PATCH nbdkit] plugins: file: More standard cache mode names
The new cache=none mode is misleading since it does not avoid usage of the page cache. When using shared storage, we may get stale data from the page cache. When writing, we flush after every write, which is inefficient and unneeded. Rename the cache modes to: - writeback - write completes when the system call returns and the data has been copied to the page cache. - writethrough - write completes
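In these terms the two modes differ only in whether a flush happens before the write is reported complete. A schematic illustration with hypothetical helpers, not the actual nbdkit file-plugin code:

#include <stdio.h>
#include <unistd.h>

/* writeback: the write "completes" once the data sits in the page cache. */
static ssize_t write_writeback(int fd, const void *buf, size_t len, off_t off)
{
    return pwrite(fd, buf, len, off);
}

/* writethrough: additionally flush to stable storage before reporting
 * completion -- safer, but much slower for many small writes. */
static ssize_t write_writethrough(int fd, const void *buf, size_t len, off_t off)
{
    ssize_t n = pwrite(fd, buf, len, off);
    if (n >= 0 && fdatasync(fd) != 0) {
        perror("fdatasync");
        return -1;
    }
    return n;
}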
2012 Feb 18
4
FADV_DONTNEED support
While going through an old todo list I found that these patches had fallen by the wayside. About a year ago I initiated a discussion[1] with the Linux kernel folks regarding the lack of any usable fadvise support on the kernel side. As a result, I was observing extremely poor performance on my server after backups, as executable pages were being swapped out in favor of data waiting to be flushed
2014 Apr 01
1
BUG dovecot and nginx
we have set up a two-level proxy configuration for our zimbra server: [ dovecot 2.2.12 (imap proxy mode) ] -> [ nginx (imap proxy mode) ] -> [ zimbra imap server ] and it does not work ... after trying a login, the connection just hangs and ends after 30 seconds with a timeout. - if I try again right away in the same dovecot connection, the login goes through without trouble.
2014 Apr 01
1
how to enable debugging in imapc
Hi Net, How can I enable debug messages in the imap-proxy client? I am trying to figure out why the imap-proxy mode does not work towards nginx. Specifically, how can I set conn->client->set.debug in ./src/lib-imap-client/imapc-connection.c cheers tobi -- Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15 CH-4600 Olten, Switzerland www.oetiker.ch tobi at oetiker.ch +41 62 775 9902 *** We are hiring
2012 Aug 10
6
qemu-xen-traditional: NOCACHE or CACHE_WB to open disk images for IDE
Hi list, Recently I was debugging an L2 guest slow-boot issue in a nested virtualization environment (both the L0 and L1 hypervisors are Xen). To boot an L2 Linux guest (RHEL6u2), it needs to wait more than 3 minutes after grub loads. I did some profiling and saw the guest doing disk operations through the int13 BIOS procedure. Even without considering the nested case, I saw there is a bug reporting that a normal VM
2019 May 17
4
[nbdkit PATCH 0/3] Add noparallel filter
Being able to programmatically force nbdkit to be less parallel can be useful during testing. I was less sure about patch 3, but if you like it, I'm inclined to instead squash it into patch 1. This patch is written to apply after my NBD_CMD_CACHE work (since I touched the nocache filter), but can be rearranged if we think this series should go in first while that one undergoes any adjustments
2000 Aug 29
1
SNAP-2000082900
I have been testing the SNAP-2000082900 on solaris ... earlier I wrote that the 'connection dies on exit of x11 forwarded motif application' bug was solved with this release ... unfortunately further testing showed that it just did not occur on the machine I tested. All our other machines still show it ... cheers tobi -- Oetiker, Timelord &
2019 May 16
27
[nbdkit PATCH v2 00/24] implement NBD_CMD_CACHE
Since v1: - rework .can_cache to be tri-state, with default of no advertisement (ripple effect through other patches) - add a lot more patches in order to round out filter support And in the meantime, Rich pushed NBD_CMD_CACHE support into libnbd, so in theory we now have a way to test cache commands through the entire stack. Eric Blake (24): server: Internal hooks for implementing
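For reference, nbdkit plugins advertise cache support through a .can_cache callback returning one of three constants. A minimal sketch based on the documented plugin API; the series above may differ in detail:

#include <nbdkit-plugin.h>

/* Tri-state cache advertisement: NBDKIT_CACHE_NONE (do not advertise),
 * NBDKIT_CACHE_EMULATE (nbdkit emulates NBD_CMD_CACHE by reading and
 * discarding), or NBDKIT_CACHE_NATIVE (the plugin handles caching itself). */
static int
example_can_cache(void *handle)
{
    return NBDKIT_CACHE_EMULATE;
}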
2020 Aug 07
2
Re: [PATCH nbdkit] file: Implement cache=none and fadvise=normal|random|sequential.
On Fri, Aug 07, 2020 at 05:29:24PM +0300, Nir Soffer wrote: > On Fri, Aug 7, 2020 at 5:07 PM Richard W.M. Jones <rjones@redhat.com> wrote: > > These ones? > > https://www.redhat.com/archives/libguestfs/2020-August/msg00078.html > > No, we had a bug where copying an image from glance caused sanlock timeouts > because of the unpredictable page cache flushes. > > We
2015 Aug 04
3
php-R
Dear colleagues: I am trying to run several R scripts through php. For this I am using the following code, but I get: The requested URL was not found on this server. The URL of the page that referred it[1] appears to be wrong or outdated. Please notify the author of that page[1] about the error. This code, which was taken from the internet, indicates the
2020 Aug 07
3
[PATCH nbdkit] file: Implement cache=none and fadvise=normal|random|sequential.
You can use these flags as described in the manual page to optimize access patterns, and to get better behaviour with the page cache in some scenarios. For my testing I used the cachedel and cachestats utilities written by Julius Plenz (https://github.com/Feh/nocache). I started with a 32 GB file of random data on a machine with about 32 GB of RAM. At the beginning of the test I evicted the
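The fadvise= modes map naturally onto the standard posix_fadvise access hints, issued once on the open file. A sketch of that mapping, assumed from the option names rather than copied from the patch:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>

/* Tell the kernel how we intend to read the file: sequential roughly
 * doubles readahead, random disables it, normal restores the default. */
static void apply_fadvise_mode(int fd, const char *mode)
{
    int advice = POSIX_FADV_NORMAL;
    if (strcmp(mode, "sequential") == 0)
        advice = POSIX_FADV_SEQUENTIAL;
    else if (strcmp(mode, "random") == 0)
        advice = POSIX_FADV_RANDOM;

    int err = posix_fadvise(fd, 0, 0, advice);  /* whole file */
    if (err != 0)
        fprintf(stderr, "posix_fadvise: %s\n", strerror(err));
}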
2000 Dec 11
1
OpenSSH 2.3.0p1: Broken pipe / SIGPIPE
Dear OpenSSH gurus! ;-) I recently upgraded from "OpenSSH 2.1.1p4" to "OpenSSH 2.3.0p1" on my Linux 2.2.17 box with OpenSSL 0.9.5a (RedHat 7.0). According to the "ChangeLog", there was a change in SIGPIPE handling: | 20000930 | [...] | - (djm) Ignore SIGPIPEs from serverloop to child. Fixes crashes with | very short lived X connections. Bug report from
2010 Jul 21
1
prediction from a logistic mixed effects model
Hi, Is there any command similar to "predict" which can be used with a logistic random effects model? I have run a random effects model using "lme()", and then used "predict.lme()" with no problems. However, I would also like to run a logistic random effects model, and then also run a predict command on the logistic random effects model. If I use "lme()",
2013 Dec 02
2
latest sources don't include "drop_cache" option
Was there some reason that patch got dropped? Otherwise rsync eats up all the buffer memory. Note -- I tried directio -- it didn't work due to alignment issues -- buffers have to be aligned to sectors. The kernel, if I remember correctly, has been on-again/off-again about requiring alignment on directio -- because most of the drivers and devices require it for directio to work. "dd"
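The alignment constraint mentioned above is exactly why naive O_DIRECT code fails with EINVAL: the buffer address, file offset, and transfer size generally all have to be multiples of the device's logical block size. A minimal sketch, assuming 4096-byte alignment covers the device:

#define _GNU_SOURCE   /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical target file; the path is illustrative only. */
    int fd = open("/tmp/directio-test", O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* O_DIRECT needs an aligned buffer; posix_memalign returns an
     * error number directly instead of setting errno. */
    void *buf;
    int err = posix_memalign(&buf, 4096, 4096);
    if (err != 0) {
        fprintf(stderr, "posix_memalign: %s\n", strerror(err));
        close(fd);
        return 1;
    }
    memset(buf, 0, 4096);

    /* Both the size (4096) and the implicit offset (0) are aligned. */
    if (write(fd, buf, 4096) < 0)
        perror("write");

    free(buf);
    close(fd);
    return 0;
}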