search for: readaheads

Displaying 20 results from an estimated 323 matches for "readaheads".

2010 Jul 28
6
Read ahead / prefetching
Hi, I am trying to educate myself on the prefetching/readahead algorithm for Lustre's reads. For starters I have just two simple questions. 1 - Does Lustre detect a linear or random I/O pattern, or does it always trigger readahead? 2 - If readahead is triggered, how many pages are read in addition to what is necessary? Thanks, Arifa.
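As general background (this is not Lustre's actual code, and all names are hypothetical), readahead engines typically classify a stream as linear by checking whether each read starts where the previous one ended, growing the prefetch window on sequential hits and collapsing it on random access. A minimal C sketch of that idea:

    /* Generic sketch of sequential-vs-random detection for readahead.
     * Not Lustre's implementation; names are hypothetical. */
    #include <sys/types.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct ra_state {
        off_t  prev_end;   /* end offset of the previous read */
        size_t window;     /* bytes to prefetch after the next read */
    };

    static bool is_sequential(const struct ra_state *ra, off_t offset)
    {
        /* A read starting exactly where the last one ended is linear. */
        return offset == ra->prev_end;
    }

    static void ra_update(struct ra_state *ra, off_t offset, size_t len,
                          size_t max_window)
    {
        if (is_sequential(ra, offset)) {
            /* Linear pattern: grow the prefetch window up to a cap. */
            ra->window = ra->window ? ra->window * 2 : 128 * 1024;
            if (ra->window > max_window)
                ra->window = max_window;
        } else {
            /* Random pattern: stop prefetching. */
            ra->window = 0;
        }
        ra->prev_end = offset + len;
        /* The caller would then issue an asynchronous read of ra->window
         * bytes starting at ra->prev_end. */
    }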
2019 Apr 01
1
[PATCH nbdkit v2] Add readahead filter.
Simpler, and including tests. Rich.
2011 Jan 04
16
[PATCH v2 0/5] add new ioctls to do metadata readahead in btrfs
Hi, We have file readahead to do async file reads, but there is no metadata readahead. For a list of files, their metadata is stored in fragmented disk space, and metadata reads are synchronous operations, which greatly reduces the efficiency of readahead. The patches try to add metadata readahead for btrfs. In btrfs, metadata is stored in btree_inode. Ideally, if we could hook the inode to a fd, we could use
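For context, the "file readahead" being contrasted here is the existing readahead(2) interface, which asynchronously populates the page cache with ordinary file data but never touches the btree metadata. A minimal sketch (the file path is only an example):

    /* Sketch of the existing file-data readahead interface that the
     * patch series contrasts with; the file path is only an example. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/mnt/btrfs/some-large-file", O_RDONLY);
        if (fd == -1) {
            perror("open");
            return 1;
        }

        /* Asynchronously pull the first 1 MiB of file data into the page
         * cache; the inode/extent metadata still has to be read synchronously. */
        if (readahead(fd, 0, 1024 * 1024) == -1)
            perror("readahead");

        close(fd);
        return 0;
    }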
2019 Apr 01
1
[PATCH nbdkit] Add readahead filter.
A suggested readahead filter. I've only lightly tested this, but it seems to work fine with qemu-img convert. The commit needs proper tests. Rich.
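For reference, nbdkit filters are stacked on the command line with --filter; a hypothetical invocation layering the readahead filter over the curl plugin (the URL is a placeholder) would look like:

    # Hypothetical example: readahead filter layered over the curl plugin.
    nbdkit --filter=readahead curl url=https://example.com/disk.img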
2006 Jul 10
3
Kernel-utils stupidities (readahead and cpuspeed)
Hi all, I think I've spotted a few stupidities (bugs) in the current version of kernel-utils (kernel-utils-2.4-13.1.80). I'm sure these are all propagated from upstream, but I hope someone could have a quick look to verify this and see if we either can push complaints upwards, or provide local fixes. The kernel-utils package provides several 'kernel-type' functions -
2019 Apr 23
0
[nbdkit PATCH 3/4] filters: Utilize ACQUIRE_LOCK_FOR_CURRENT_SCOPE
Now that cleanup.h is in common code, we can use it in our filters where it makes sense. Many uses of pthread_mutex_unlock() are not function-wide and cover small enough snippets of code that they are easier to read when left as-is; but when the scope is indeed function-wide or spans multiple exit paths, it is nicer to use the macro for automatic unlock. Signed-off-by: Eric Blake
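As background, scope-based unlock macros like this are usually built on the GCC/Clang cleanup attribute; here is a minimal generic sketch of the technique (not necessarily the exact macro from nbdkit's cleanup.h):

    /* Generic sketch of a scope-based mutex guard using the GCC/Clang
     * cleanup attribute; not necessarily nbdkit's exact macro. */
    #include <pthread.h>

    static void unlock_mutex (pthread_mutex_t **ptr)
    {
        if (*ptr)
            pthread_mutex_unlock (*ptr);
    }

    #define ACQUIRE_LOCK_FOR_SCOPE(mutex)                              \
        __attribute__ ((cleanup (unlock_mutex)))                       \
        pthread_mutex_t *_lock_guard = (mutex);                        \
        pthread_mutex_lock (_lock_guard)

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int counter;

    int increment (void)
    {
        ACQUIRE_LOCK_FOR_SCOPE (&lock);
        counter++;
        return counter;   /* unlock_mutex runs automatically on return */
    }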
2019 Sep 20
0
[PATCH v4 07/12] v2v: nbdkit: Add the readahead filter unconditionally if it is available.
The readahead filter is a self-configuring filter that makes sequential reads faster when the plugin is slow (and all of the plugins we use here are always slow). I observed the behaviour of the readahead filter with our qcow2 overlay when converting a guest from a vCenter source. Even when doing random reads, qemu issues 64K reads which happen to also be the minimum request size of the
2009 Jul 07
1
Sysctl on Kernel 2.6.18-128.1.16.el5
Sysctl Values
-------------------------------------------
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_window_scaling = 1
# vm.max-readahead = ?
# vm.min-readahead = ?
# HW Controller Off
# max-readahead = 1024
# min-readahead = 256
# Memory over-commit
# vm.overcommit_memory=2
# Memory to
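If useful, individual values from such a list can be checked or applied at runtime with sysctl (the example reuses a value shown above):

    # Check the current value, then set it at runtime.
    sysctl net.core.rmem_max
    sysctl -w net.core.rmem_max=16777216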
2020 Jun 19
1
Re: [PATCH nbdkit] v2v: Disable readahead for VMware curl sources too (RHBZ#1848862).
On 6/19/20 7:47 AM, Richard W.M. Jones wrote:
> This appears to be the cause of timeouts during the conversion step where VMware VCenter server's Tomcat HTTPS server stops responding to requests (or rather, responds only with 503 errors). The server later recovers and in fact because of the retry filter the conversion usually succeeds, but I found that we can avoid the
2020 Jun 19
2
[PATCH nbdkit] v2v: Disable readahead for VMware curl sources too (RHBZ#1848862).
I'm still testing this fix, so let's hold off the review for the moment. Also it may be better to specifically identify problematic servers rather than disabling this for every curl source; e.g. I suspect that the problem is the Java server used by VCenter, so we might think about only disabling readahead for that single case. Rich.
2020 May 28
2
[PATCH v2v] v2v: -it vddk: Don't use nbdkit readahead filter with VDDK (RHBZ#1832805).
This is the simplest solution to this problem. There are two other possible fixes I considered: Increase the documented limit (see http://libguestfs.org/virt-v2v-input-vmware.1.html#vddk:-esxi-nfc-service-memory-limits). However at the moment we know the current limit works through extensive testing (without readahead), plus I have no idea nor any way to test if larger limits are supported by
2019 Apr 01
1
Readahead in the nbdkit curl plugin
I'm trying to replicate the features of the qemu curl plugin in nbdkit's curl plugin, in order that we can use nbdkit in virt-v2v to access VMware servers. I've implemented everything else so far [not posted yet] except for readahead. To my surprise actually, qemu's curl driver implements readahead itself. I thought it was a curl feature. I'm not completely clear _how_ it
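Conceptually (this is a generic sketch, not qemu's curl driver nor nbdkit's readahead filter, and fetch_from_server() is a hypothetical backend call), client-side readahead means serving reads from a cached window and refilling that window in larger chunks than the client actually asked for:

    /* Generic sketch of client-side readahead over a slow transport.
     * fetch_from_server() is a hypothetical backend call (e.g. one HTTP
     * range request). Assumes len <= RA_WINDOW. */
    #include <stdint.h>
    #include <string.h>

    #define RA_WINDOW (1024 * 1024)     /* read ahead in 1 MiB chunks */

    struct ra_cache {
        uint64_t start;                 /* offset of the cached window */
        size_t   len;                   /* valid bytes in buf */
        char     buf[RA_WINDOW];
    };

    extern int fetch_from_server(uint64_t offset, char *dst, size_t len);

    int ra_pread(struct ra_cache *c, char *dst, size_t len, uint64_t offset)
    {
        /* Cache hit: the whole request lies inside the cached window. */
        if (offset >= c->start && offset + len <= c->start + c->len) {
            memcpy(dst, c->buf + (offset - c->start), len);
            return 0;
        }

        /* Cache miss: fetch a full window starting at the requested offset,
         * so that following sequential reads are served from memory. */
        if (fetch_from_server(offset, c->buf, RA_WINDOW) == -1)
            return -1;
        c->start = offset;
        c->len = RA_WINDOW;

        memcpy(dst, c->buf, len);
        return 0;
    }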
2011 Feb 24
0
No subject
which is a stripe of the gluster storage servers, this is the performance I get (note: use a file size > amount of RAM on client and server systems, 13GB in this case):

4k block size:
111 pir4:/pirstripe% /sb/admin/scripts/nfsSpeedTest -s 13g -y
pir4: Write test (dd): 142.281 MB/s 1138.247 mbps 93.561 seconds
pir4: Read test (dd): 274.321 MB/s 2194.570 mbps 48.527 seconds
testing from 8k -
2020 May 28
0
[PATCH v2v] v2v: -it vddk: Don't use nbdkit readahead filter with VDDK (RHBZ#1832805).
This filter deliberately tries to coalesce reads into larger requests. Unfortunately VMware has low limits on the size of requests it can serve to a VDDK client, and the larger requests would break with errors like this:

nbdkit: vddk[3]: error: [NFC ERROR] NfcFssrvrProcessErrorMsg: received NFC error 5 from server: Failed to allocate the requested 33554456 bytes

We already increase the maximum
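For scale: 33554456 bytes is 24 bytes more than 32 MiB (32 × 1024 × 1024 = 33554432), so the failing allocation presumably corresponds to a coalesced read of roughly 32 MiB plus a small amount of per-request overhead, far above the 64K requests qemu issues on its own.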
2019 Sep 20
4
Re: [PATCH v4 07/12] v2v: nbdkit: Add the readahead filter unconditionally if it is available.
On Fri, Sep 20, 2019 at 10:28:18AM +0100, Richard W.M. Jones wrote:
> The readahead filter is a self-configuring filter that makes sequential reads faster when the plugin is slow (and all of the plugins we use here are always slow). I observed the behaviour of the readahead filter with our qcow2 overlay when converting a guest from a vCenter source. Even when doing
2020 Nov 09
2
vfs readahead && windows server 2016/2019?
Quick question... The docs for vfs readahead mention Windows Vista, but what about modern Windows versions? Windows Server 2016 or Windows Server 2019? Thank you, -- BOB BUCK SENIOR PLATFORM SOFTWARE ENGINEER SKIDMORE, OWINGS & MERRILL 7 WORLD TRADE CENTER 250 GREENWICH STREET NEW YORK, NY 10007 T (212) 298-9624 ROBERT.BUCK at SOM.COM
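For reference, Samba VFS modules of this kind are enabled per share in smb.conf; a minimal hypothetical example (share name and path are placeholders):

    [data]
        path = /srv/data
        vfs objects = readahead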
2010 Mar 02
2
crash when using the cp command to copy files off a striped gluster dir but not when using rsync
Hi, I've got this strange problem where a striped endpoint will crash when I try to use cp to copy files off of it but not when I use rsync to copy files off:

[user at gluster5 user]$ cp -r Python-2.6.4/ ~/tmp/
cp: reading `Python-2.6.4/Lib/lib2to3/tests/data/fixers/myfixes/__init__.py': Software caused connection abort
cp: closing
2020 Jun 19
0
[PATCH nbdkit] v2v: Disable readahead for VMware curl sources too (RHBZ#1848862).
This appears to be the cause of timeouts during the conversion step where VMware VCenter server's Tomcat HTTPS server stops responding to requests (or rather, responds only with 503 errors). The server later recovers and in fact because of the retry filter the conversion usually succeeds, but I found that we can avoid the problem by disabling readahead. --- v2v/nbdkit_sources.ml | 8 ++++----
2014 Nov 23
0
[PATCH 2/3] New API: guestfs_blockdev_setra: Adjust readahead for filesystems and devices.
This adds a binding for 'blockdev --setra', allowing you to adjust the readahead parameter for filesystems and devices.
---
 daemon/blockdev.c    | 30 ++++++++++++++++++++----------
 generator/actions.ml | 14 ++++++++++++++
 2 files changed, 34 insertions(+), 10 deletions(-)

diff --git a/daemon/blockdev.c b/daemon/blockdev.c
index 8a7b1a8..6e8821d 100644
--- a/daemon/blockdev.c
+++
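The command the binding wraps can also be run directly; for example (device name and value are placeholders, and the readahead value is in 512-byte sectors):

    # Query, then set, the readahead value for a block device.
    blockdev --getra /dev/sda
    blockdev --setra 4096 /dev/sda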
2003 Feb 19
2
Win2K/XP, oplocks, and readahead.
Hi! I'm working with Samba backed by a high performance filesystem. From a Windows 2K and Windows XP client I'm trying to achieve very high speed single file throughput over GigE from the Windows client using either open/write or CreateFile/ReadFile APIs. I'd rather not venture into overlapped IO there so that we don't have to recommend that all our customers rewrite their