similar to: Re: measured throughput variations

Displaying 20 results from an estimated 3000 matches similar to: "Re: measured throughput variations"

2002 Jun 03
1
64 K write access grouped in a single disk access?
Hi Stephen, I would like to know the behavior of ext3 for a write() request used with O_SYNC for 64 K, in terms of disk access method: - is there a chance to have only one disk access (instead of 16 x 4 K accesses, one per mapped page), or at most two in journaled mode? - if this is possible, then how can I force such a grouping of data into a single disk output? NB: I disable cached write
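For reference, a minimal sketch (not from the thread) of the call pattern being asked about: a single write() of one 64 KiB buffer on a descriptor opened with O_SYNC, so the kernel at least receives the request as one unit. The file name is a placeholder, and whether ext3 actually turns this into one or two disk accesses depends on block allocation and the journaling mode.

/* Sketch: one 64 KiB O_SYNC write issued as a single request. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const size_t len = 64 * 1024;          /* 16 x 4 K pages */
    char *buf = malloc(len);
    if (!buf) return 1;
    memset(buf, 0xAB, len);

    /* O_SYNC: write() returns only after data and metadata are on disk. */
    int fd = open("testfile", O_WRONLY | O_CREAT | O_SYNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    if (write(fd, buf, len) != (ssize_t)len)
        perror("write");

    close(fd);
    free(buf);
    return 0;
}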
2002 Jan 09
1
inconsistent file content after killing nfs daemon
Hi Stephen, I use ext3 with kernel 2.4.14. I'm happy to have verified that nfs+ext3 in journal mode doesn't provide atomic writes from the user's point of view. My program writes sequential records of 64KB to a file through an nfs mount point. The blocks of data are initialized with a series of integers: 1, 2, 3 ... I kill the nfsd daemons while two instances of the program are writing their 600
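A sketch, under assumptions, of the kind of test writer described (the thread's actual program is not shown): each 64 KB record is filled with consecutive integers, so a torn or interleaved record is detectable after the daemons are killed. The mount path and the record count are placeholders.

/* Hypothetical reconstruction of the described test writer. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define RECORD_INTS (64 * 1024 / sizeof(int))
#define NRECORDS    100                    /* arbitrary run length */

int main(void)
{
    static int record[RECORD_INTS];
    /* placeholder path for the nfs mount point */
    int fd = open("/mnt/nfs/testfile", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) { perror("open"); return 1; }

    int value = 1;                         /* 1, 2, 3, ... across records */
    for (int rec = 0; rec < NRECORDS; rec++) {
        for (size_t i = 0; i < RECORD_INTS; i++)
            record[i] = value++;
        if (write(fd, record, sizeof record) != (ssize_t)sizeof record) {
            perror("write");
            break;
        }
    }
    close(fd);
    return 0;
}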
2017 Jun 22
0
Slow write times to gluster disk
Hi, Today we experimented with some of the FUSE options that we found in the list. Changing these options had no effect:

gluster volume set test-volume performance.cache-max-file-size 2MB
gluster volume set test-volume performance.cache-refresh-timeout 4
gluster volume set test-volume performance.cache-size 256MB
gluster volume set test-volume performance.write-behind-window-size 4MB
gluster
2008 Jul 07
1
ZFS and Caching - write() syscall with O_SYNC
IHAC using ZFS in production, and he's opening up some files with the O_SYNC flag. This affects subsequent write()'s by providing synchronized I/O file integrity completion. That is, each write(2) will wait for both the file data and file status to be physically updated. Because of this, he's seeing some delays on the file write()'s. This is verified with
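To make that per-call delay visible, a small timing harness (an illustration, not the customer's setup) can time each write() on an O_SYNC descriptor; the file name, write size, and iteration count below are arbitrary.

/* Sketch: time individual write() calls on an O_SYNC descriptor. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    static char buf[8192];
    memset(buf, 0x5A, sizeof buf);

    int fd = open("testfile", O_WRONLY | O_CREAT | O_SYNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    for (int i = 0; i < 100; i++) {
        double t0 = now_sec();
        if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) { perror("write"); break; }
        printf("write %3d: %.3f ms\n", i, (now_sec() - t0) * 1e3);
    }
    close(fd);
    return 0;
}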
2017 Jun 24
0
Slow write times to gluster disk
On Fri, Jun 23, 2017 at 9:10 AM, Pranith Kumar Karampuri < pkarampu at redhat.com> wrote: > > > On Fri, Jun 23, 2017 at 2:23 AM, Pat Haley <phaley at mit.edu> wrote: > >> >> Hi, >> >> Today we experimented with some of the FUSE options that we found in the >> list. >> >> Changing these options had no effect: >> >>
2017 Jun 27
0
Slow write times to gluster disk
On Mon, Jun 26, 2017 at 7:40 PM, Pat Haley <phaley at mit.edu> wrote: > > Hi All, > > Decided to try another test of gluster mounted via FUSE vs gluster > mounted via NFS, this time using the software we run in production (i.e. > our ocean model writing a netCDF file). > > gluster mounted via NFS: the run took 2.3 hr > > gluster mounted via FUSE: the run took
2008 Jul 06
2
Measuring ZFS performance - IOPS and throughput
Can anybody tell me how to measure the raw performance of a new system I'm putting together? I'd like to know what it's capable of in terms of IOPS and raw throughput to the disks. I've seen Richard's raidoptimiser program, but I've only seen results for random read iops performance, and I'm particularly interested in write
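As one crude starting point (a sketch only; dedicated tools such as the raidoptimiser program mentioned above measure this far more carefully), sequential write throughput can be estimated by timing a large write followed by fsync(); the file name and sizes here are placeholders.

/* Rough sequential-write throughput probe. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    static char buf[1 << 20];                /* 1 MiB chunk */
    memset(buf, 0x5A, sizeof buf);

    int fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < 1024; i++)           /* 1024 x 1 MiB = 1 GiB */
        if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) { perror("write"); return 1; }
    fsync(fd);                               /* count only data that reached disk */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.1f MiB/s\n", 1024.0 / sec);
    close(fd);
    return 0;
}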
2017 Jun 20
2
Slow write times to gluster disk
Hi Ben, Sorry this took so long, but we had a real-time forecasting exercise last week and I could only get to this now. Backend Hardware/OS: * Much of the information on our back-end system is included at the top of http://lists.gluster.org/pipermail/gluster-users/2017-April/030529.html * The specific model of the hard disks is Seagate ENTERPRISE CAPACITY V.4 6TB
2017 Jun 23
2
Slow write times to gluster disk
On Fri, Jun 23, 2017 at 2:23 AM, Pat Haley <phaley at mit.edu> wrote: > > Hi, > > Today we experimented with some of the FUSE options that we found in the > list. > > Changing these options had no effect: > > gluster volume set test-volume performance.cache-max-file-size 2MB > gluster volume set test-volume performance.cache-refresh-timeout 4 > gluster
2016 Jan 06
0
[klibc:master] MIPS: Update archfcntl.h
Commit-ID: 3fefc6a404a970a911417d0345618a7e9abfef70
Gitweb: http://git.kernel.org/?p=libs/klibc/klibc.git;a=commit;h=3fefc6a404a970a911417d0345618a7e9abfef70
Author: Ben Hutchings <ben at decadent.org.uk>
AuthorDate: Wed, 6 Jan 2016 00:43:25 +0000
Committer: H. Peter Anvin <hpa at linux.intel.com>
CommitDate: Tue, 5 Jan 2016 17:45:36 -0800

[klibc] MIPS: Update archfcntl.h
2009 Apr 09
8
ZIL SSD performance testing... -IOzone works great, others not so great
Hi folks, I would appreciate it if someone can help me understand some weird results I'm seeing with trying to do performance testing with an SSD offloaded ZIL. I'm attempting to improve my infrastructure's burstable write capacity (ZFS based WebDav servers), and naturally I'm looking at implementing SSD based ZIL devices. I have a test machine with the
2009 Dec 04
2
measuring iops on linux - numbers make sense?
Hello, When approaching hosting providers for services, the first question many of them asked us was about the number of IOPS the disk system should support. While we stress-tested our service, we recorded between 4000 and 6000 "merged io operations per second" as seen in "iostat -x" and collectd (this varies between the different components of the system; we have a few such
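For comparison with iostat, a sketch of where those numbers come from: iostat derives its per-second rates by sampling the kernel's cumulative counters, and on Linux the same counters (including the merged-request ones quoted above) can be read from /proc/diskstats directly. The device name "sda" is a placeholder.

/* Sketch: per-second I/O counts the way iostat computes them. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int sample(const char *dev, unsigned long v[4])
{
    FILE *f = fopen("/proc/diskstats", "r");
    if (!f) return -1;
    char line[512], name[64];
    unsigned long r, rm, w, wm;
    int ok = -1;
    while (fgets(line, sizeof line, f)) {
        /* fields after the name: reads completed, reads merged,
         * sectors read, ms reading, writes completed, writes merged */
        if (sscanf(line, "%*d %*d %63s %lu %lu %*u %*u %lu %lu",
                   name, &r, &rm, &w, &wm) == 5 && strcmp(name, dev) == 0) {
            v[0] = r; v[1] = rm; v[2] = w; v[3] = wm;
            ok = 0;
            break;
        }
    }
    fclose(f);
    return ok;
}

int main(void)
{
    unsigned long a[4], b[4];
    if (sample("sda", a)) return 1;
    sleep(1);
    if (sample("sda", b)) return 1;
    printf("reads/s %lu (merged %lu)  writes/s %lu (merged %lu)\n",
           b[0] - a[0], b[1] - a[1], b[2] - a[2], b[3] - a[3]);
    return 0;
}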
2017 Jun 26
3
Slow write times to gluster disk
Hi All, Decided to try another test of gluster mounted via FUSE vs gluster mounted via NFS, this time using the software we run in production (i.e. our ocean model writing a netCDF file). gluster mounted via NFS: the run took 2.3 hr. gluster mounted via FUSE: the run took 44.2 hr. The only problem with using gluster mounted via NFS is that it does not respect the group write permissions which
2005 Dec 22
2
zpool iostat output gets buffered
I'm trying to write a SLAMD (http://www.slamd.com/) resource monitor that can be used to measure the I/O throughput on a ZFS pool, and in particular to be able to get the read and write rates. In order to do this, I'm basically executing "zpool iostat {interval}" and parsing the output to capture the values in the "bandwidth read" and "bandwidth
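A sketch of the parsing side (the pool name "tank" and the interval are placeholders). Note that this naive popen() approach reproduces exactly the buffering symptom described: the child's stdout becomes block-buffered when attached to a pipe, so lines arrive in bursts rather than once per interval; running the child under a pseudo-terminal is a common workaround.

/* Naive parser for `zpool iostat <pool> <interval>` output. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *p = popen("zpool iostat tank 5", "r");
    if (!p) { perror("popen"); return 1; }

    char line[256];
    while (fgets(line, sizeof line, p)) {
        char name[64], rbw[32], wbw[32];
        /* columns: pool, alloc, free, read ops, write ops, read bw, write bw */
        if (sscanf(line, "%63s %*s %*s %*s %*s %31s %31s", name, rbw, wbw) == 3
            && strcmp(name, "tank") == 0)
            printf("bandwidth read %s, write %s\n", rbw, wbw);
    }
    pclose(p);
    return 0;
}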
2020 Aug 07
2
Re: [PATCH nbdkit] file: Implement cache=none and fadvise=normal|random|sequential.
On Fri, Aug 07, 2020 at 05:29:24PM +0300, Nir Soffer wrote: > On Fri, Aug 7, 2020 at 5:07 PM Richard W.M. Jones <rjones@redhat.com> wrote: > > These ones? > > https://www.redhat.com/archives/libguestfs/2020-August/msg00078.html > > No, we had a bug when copying image from glance caused sanlock timeouts > because of the unpredictable page cache flushes. > > We
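A sketch of the page-cache technique under discussion (not the actual nbdkit patch): write in chunks, flush the just-written range, then posix_fadvise(POSIX_FADV_DONTNEED) to drop it from the cache, so a large copy neither accumulates dirty pages nor triggers big unpredictable flushes. File name and sizes are placeholders; sync_file_range() is Linux-specific.

/* Sketch: copy-style writes that avoid polluting the page cache. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    static char buf[1 << 20];              /* 1 MiB chunk */
    memset(buf, 0, sizeof buf);

    int fd = open("image.out", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    off_t done = 0;
    for (int i = 0; i < 256; i++) {        /* 256 MiB total, as an example */
        if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) { perror("write"); break; }
        /* DONTNEED only discards clean pages, so flush the range first. */
        sync_file_range(fd, done, sizeof buf, SYNC_FILE_RANGE_WAIT_BEFORE |
                        SYNC_FILE_RANGE_WRITE | SYNC_FILE_RANGE_WAIT_AFTER);
        posix_fadvise(fd, done, sizeof buf, POSIX_FADV_DONTNEED);
        done += sizeof buf;
    }
    close(fd);
    return 0;
}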
2002 Aug 21
2
journal tuning
Hello, Is there some document about ext3 performance tuning and choosing the right type and size of journal, other than Red Hat's white paper? From what I read, I understood that for typical operations data=ordered is preferred. For the cases where there are many writes not appending to files, data=journal is the choice. And if I want to get the most performance, or in cases where the program is doing
2006 Oct 31
0
6269165 misleading comments in usr/src/cmd/stat/iostat/iostat.c
Author: petede
Repository: /hg/zfs-crypto/gate
Revision: ef5d30f0abff39691966b9805b895f3c757b5a9f
Log message:
6269165 misleading comments in usr/src/cmd/stat/iostat/iostat.c
6286482 remove the only occurrence of "shit" in OpenSolaris
6231501 Typo in <sys/fem.h>
Contributed by Shawn Walker <binarycrusader at gmail.com>.
Files:
update: usr/src/cmd/stat/iostat/iostat.c
2008 Mar 28
1
bwlimit on rsync locally
Does "bwlimit" option really work on rsync locally? We have one type of harddisk and want to slow down rsync I/O on disk because I don't want the disk head gets too hot. While I'm trying to use --bwlimit option, it looks the rsync speed was slowed down, but iostat is not improved at all. In both case the block written speed is increased by the same amount. How could I really
2015 Nov 13
4
[PATCH 1/4] extlinux: simplification
Merge installation of ldlinux.c32 from ext2_fat_install_file, btrfs_install_file and xfs_install_file into one function ext_install_ldlinux_c32.

Signed-off-by: Nicolas Cornu <nicolac76 at yahoo.fr>
---
extlinux/main.c | 106 +++++++++++++++++++++----------------------------------
1 file changed, 40 insertions(+), 66 deletions(-)

diff --git a/extlinux/main.c b/extlinux/main.c
index
2020 Aug 07
0
Re: [PATCH nbdkit] file: Implement cache=none and fadvise=normal|random|sequential.
On Fri, Aug 7, 2020 at 5:36 PM Richard W.M. Jones <rjones@redhat.com> wrote: > > On Fri, Aug 07, 2020 at 05:29:24PM +0300, Nir Soffer wrote: > > On Fri, Aug 7, 2020 at 5:07 PM Richard W.M. Jones <rjones@redhat.com> wrote: > > > These ones? > > > https://www.redhat.com/archives/libguestfs/2020-August/msg00078.html > > > > No, we had a bug when