search for: blk_read

Displaying 20 results from an estimated 49 matches for "blk_read".

2002 Sep 24
3
Samba performance issues
Hi all, We are implementing samba-ldap to act as an NT PDC and are seeing performance problems. We have a 1 GHz, 3 GB RAM, 36 GB box that is running samba-2.2.5 and openldap-2.0.23 under Red Hat 7.3 with kernel 2.4.18-3. Clients are all Win2k SP3. All LDAP requests go to the localhost interface. The box acts as the PDC for the domain and also shares disk space and printers. When we get
2007 Oct 18
1
Vista performance (uggh)
...suggestions on where to go from here? iostat 5 output for the physical devices below. Using a new quad core running Vista client on gigabit - reads at 8 MB/s:

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           2.81   0.00     9.62    73.95    0.00  13.63

Device:    tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
hda      51.10     3033.27       20.84     15136       104
hde      53.91     3028.46       16.03     15112        80
hdg      49.90     2993.19       19.24     14936        96
hdi      47.49     3036.47        6.41...
2008 Mar 28
1
bwlimit on rsync locally
...l. In both cases the block-write speed increases by the same amount. How can I really slow down I/O while using rsync? Any help would be greatly appreciated. Regards, - Reeve

rsync without --bwlimit:

> iostat; rsync -a -r --stats swapfile swapfile.rsync; iostat

Device:    tps  Blk_read/s  Blk_wrtn/s  Blk_read   Blk_wrtn
sda       7.53      103.43      421.27  64549578  262909196

Number of files: 1
Number of files transferred: 1
Total file size: 2147483648 bytes
Total transferred file size: 2147483648 bytes
Literal data: 2147483648 bytes
Matched data: 0 bytes
File li...
2019 May 13
0
[nbdkit PATCH v2 2/2] cache, cow: Reduce use of bounce-buffer
...+  }
 }

+  blknum = offset / blksize;    /* block number */
+  blkoffs = offset % blksize;   /* offset within the block */
+
+  /* Unaligned head */
+  if (blkoffs) {
+    uint64_t n = MIN (blksize - blkoffs, count);
+
+    assert (block);
+    ACQUIRE_LOCK_FOR_CURRENT_SCOPE (&lock);
+    r = blk_read (next_ops, nxdata, blknum, block, err);
+    if (r == -1)
+      return -1;
+
+    memcpy (buf, &block[blkoffs], n);
+
+    buf += n;
+    count -= n;
+    offset += n;
+    blknum++;
+  }
+
+  /* Aligned body */
   /* XXX This breaks up large read requests into smaller ones, which
    * is a p...
2019 May 11
2
[nbdkit PATCH] cache: Reduce use of bounce-buffer
...+    nbdkit_error ("malloc: %m");
+    return -1;
+  }
 }

 /* XXX This breaks up large read requests into smaller ones, which
@@ -258,12 +261,14 @@ cache_pread (struct nbdkit_next_ops *next_ops, void *nxdata,

   {
     ACQUIRE_LOCK_FOR_CURRENT_SCOPE (&lock);
-    r = blk_read (next_ops, nxdata, blknum, block, err);
+    r = blk_read (next_ops, nxdata, blknum,
+                  blkoffs || n < blksize ? block : buf, err);
   }
   if (r == -1)
     return -1;

-  memcpy (buf, &block[blkoffs], n);
+  if (blkoffs || n < blksize)
+    memcpy (buf, &...
2019 May 13
3
[nbdkit PATCH v2 0/2] Bounce buffer cleanups
Based on Rich's review of my v1 that touched only cache.c, I have now tried to bring all three filters with alignment rounding in line with one another. There is definitely room for future improvements once we teach nbdkit to let filters and plugins advertise block sizes, but I'm hoping to get NBD_CMD_CACHE implemented first. Eric Blake (2): blocksize: Process requests in linear order
2018 Dec 28
0
[PATCH nbdkit 5/9] cache: Allow this filter to serve requests in parallel.
...{
   uint64_t blknum, blkoffs, n;
+  int r;

   blknum = offset / BLKSIZE;    /* block number */
   blkoffs = offset % BLKSIZE;   /* offset within the block */
@@ -186,7 +197,10 @@ cache_pread (struct nbdkit_next_ops *next_ops, void *nxdata,
     if (n > count)
       n = count;

-    if (blk_read (next_ops, nxdata, blknum, block, err) == -1) {
+    pthread_mutex_lock (&lock);
+    r = blk_read (next_ops, nxdata, blknum, block, err);
+    pthread_mutex_unlock (&lock);
+    if (r == -1) {
       free (block);
       return -1;
     }
@@ -225,6 +239,7 @@ cache_pwrite (struct nbdkit_nex...
2019 Apr 24
0
[nbdkit PATCH 4/4] filters: Check for mutex failures
..._FOR_CURRENT_SCOPE (&lock);
   r = blk_set_size (size);
-  pthread_mutex_unlock (&lock);
   if (r == -1)
     return -1;
@@ -266,9 +265,10 @@ cache_pread (struct nbdkit_next_ops *next_ops, void *nxdata,
     if (n > count)
       n = count;

-    pthread_mutex_lock (&lock);
-    r = blk_read (next_ops, nxdata, blknum, block, err);
-    pthread_mutex_unlock (&lock);
+    {
+      ACQUIRE_LOCK_FOR_CURRENT_SCOPE (&lock);
+      r = blk_read (next_ops, nxdata, blknum, block, err);
+    }
     if (r == -1)
       return -1;
@@ -316,13 +316,12 @@ cache_pwrite (struct nbdkit_next_ops...
2009 Oct 20
1
ocfs2 - problem with performance
...done
Writing superblock: done
Writing backup superblock: 4 block(s)
Formatting Journals: done
Formatting slot map: done
Writing lost+found: done
mkfs.ocfs2 successful

My Linux box has a load average above 2 (currently 8.47, 6.30, 6.61). iostat shows:

    tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
1107.00     4619.50      166.75      9239       333

Can somebody help me? Thank you, Jan
2018 Feb 01
0
[nbdkit PATCH v2 1/3] backend: Rework internal/filter error return semantics
...*nxdata, void *handle, void *buf, uint32_t count, uint64_t offset)
 {
   uint8_t *block;
+  int r;

   block = malloc (BLKSIZE);
   if (block == NULL) {
@@ -357,9 +360,10 @@ cache_pread (struct nbdkit_next_ops *next_ops, void *nxdata,
     if (n > count)
       n = count;

-    if (blk_read (next_ops, nxdata, blknum, block) == -1) {
+    r = blk_read (next_ops, nxdata, blknum, block);
+    if (r) {
       free (block);
-      return -1;
+      return r;
     }

     memcpy (buf, &block[blkoffs], n);
@@ -379,6 +383,7 @@ cache_pwrite (struct nbdkit_next_ops *next_ops, void *nxdata,...
2019 Apr 01
1
Readahead in the nbdkit curl plugin
I'm trying to replicate the features of the qemu curl plugin in nbdkit's curl plugin, in order that we can use nbdkit in virt-v2v to access VMware servers. I've implemented everything else so far [not posted yet] except for readahead. To my surprise actually, qemu's curl driver implements readahead itself. I thought it was a curl feature. I'm not completely clear _how_ it
2019 May 13
0
Re: [nbdkit PATCH] cache: Reduce use of bounce-buffer
...+    return -1;
> +  }
> }
>
> /* XXX This breaks up large read requests into smaller ones, which
> @@ -258,12 +261,14 @@ cache_pread (struct nbdkit_next_ops *next_ops, void *nxdata,
>
>   {
>     ACQUIRE_LOCK_FOR_CURRENT_SCOPE (&lock);
> -    r = blk_read (next_ops, nxdata, blknum, block, err);
> +    r = blk_read (next_ops, nxdata, blknum,
> +                  blkoffs || n < blksize ? block : buf, err);
>   }
>   if (r == -1)
>     return -1;
>
> -  memcpy (buf, &block[blkoffs], n);
> +  if (blkoffs...
2008 Mar 29
1
Help in troubleshoot cause of high kernel activity
...0256k used, 34952k free, 142100k buffers
Swap: 16777208k total, 66140k used, 16711068k free, 1276564k cached

iostat Snapshot
============

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
          18.96   0.00    25.57     5.16    0.01  50.30

Device:     tps  Blk_read/s  Blk_wrtn/s  Blk_read    Blk_wrtn
sda       54.19       63.31     2460.80  42689802  1659234904
sdb       55.12       76.41     2460.80  51521720  1659234904
md1      315.95      139.72     2442.00  94207644  1646554216
md0        0.01        0.00        0.02...
2019 Apr 24
7
[nbdkit PATCH 0/4] More mutex sanity checking
I do have a question about whether patch 2 is right, or whether I've exposed a bigger problem in the truncate (and possibly other) filter, but the rest seem fairly straightforward. Eric Blake (4): server: Check for pthread lock failures truncate: Factor out reading real_size under mutex plugins: Check for mutex failures filters: Check for mutex failures filters/cache/cache.c
2018 Jan 21
2
Re: [PATCH nbdkit] filters: Add copy-on-write filter.
...w-filter.pod
 	mv $@.t $@
 endif
-endif
diff --git a/filters/cow/cow.c b/filters/cow/cow.c
index 287c94e..2b023af 100644
--- a/filters/cow/cow.c
+++ b/filters/cow/cow.c
@@ -38,20 +38,22 @@
  * takes up no space.
  *
  * We confine all pread/pwrite operations to the filesystem block
- * size.  The blk_read and blk_write functions below always happen on
- * whole filesystem block boundaries.  A smaller-than-block-size
- * pwrite will turn into a read-modify-write of a whole block.  We
- * also assume that the plugin returns the same immutable data for
- * each pread call we make, and optimize on this...
2017 Feb 10
1
dovecot config for 1500 simultaneous connection
...[root at ns1 domains]# iostat
Linux 2.6.32-431.29.2.el6.x86_64 (ns1.bizmailserver.net)  02/10/2017  _x86_64_  (12 CPU)

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           2.67   0.00     0.65     3.43    0.00  93.25

Device:    tps  Blk_read/s  Blk_wrtn/s   Blk_read   Blk_wrtn
sdd      44.95     1094.25      765.10  720884842  504041712
sdc       1.92       32.15        0.03   21178186      21248
sdb      34.71     1377.37      625.54  907398402  412102224
sda      49.88      124.29     2587.32...
2019 Jan 02
1
Re: [PATCH nbdkit v2 1/2] Annotate internal function parameters with attribute((nonnull)).
...the current nbdkit.git master; I'm assuming the patch depends on some of your other patches landing first.

> @@ -51,10 +51,15 @@ extern void blk_free (void);
>  extern int blk_set_size (uint64_t new_size);
>
>  /* Read a single block from the cache or plugin. */
> -extern int blk_read (struct nbdkit_next_ops *next_ops, void *nxdata, uint64_t blknum, uint8_t *block, int *err);
> +extern int blk_read (struct nbdkit_next_ops *next_ops, void *nxdata,
> +                     uint64_t blknum, uint8_t *block, int *err)
> +  __attribute__((__nonnull__ (1, 2, 4, 5)));

nxdata is...
2018 Jan 20
4
[PATCH nbdkit] filters: Add copy-on-write filter.
Eric, you'll probably find the design "interesting" ... It does work, for me at least. Rich.
2018 Jan 28
3
[nbdkit PATCH 0/2] RFC: tweak error handling, add log filter
Here's what I'm currently playing with; I'm not ready to commit anything until I rebase my FUA work on top of this, as I only want to break filter ABI once between releases. Eric Blake (2): backend: Rework internal/filter error return semantics filters: Add log filter TODO | 2 - docs/nbdkit-filter.pod | 84 +++++++-- docs/nbdkit.pod
2004 Apr 20
0
Re: ocfs performance question
...ct DELL. We are in contact with them... as in, they inform us of any issues they have with Linux/OCFS on their hardware. As we do not have this particular hardware in-house, we can only speculate as to what the issue is. Things to look for ==> output of:

iostat 1

Device:   tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
dev8-1   2.00       36.00        1.00        36         1

This is just with 1 mounted OCFS volume and no processes touching it. Notice the 2.00 tps... transfers per sec. If you see something like 37 or so, you are running into a known EL3 issue. (...