Displaying 20 results from an estimated 3000 matches similar to: "iozone results"

2010 Jul 06 · 0 replies · [PATCH 0/6 v6][RFC] jbd[2]: enhance fsync performance when using CFQ
Hi Jeff,
On 07/03/2010 03:58 AM, Jeff Moyer wrote:
> Hi,
>
> Running iozone or fs_mark with fsync enabled, the performance of CFQ is
> far worse than that of deadline for enterprise class storage when dealing
> with file sizes of 8MB or less. I used the following command line as a
> representative test case:
>
> fs_mark -S 1 -D 10000 -N 100000 -d /mnt/test/fs_mark -s
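For reference, the CFQ vs. deadline comparison discussed above is driven by switching the block-layer I/O scheduler at runtime. A minimal sketch, assuming the test disk is /dev/sda (the device name is an assumption, not from the original post):

```shell
# Show the available schedulers; the active one is shown in brackets.
cat /sys/block/sda/queue/scheduler

# Switch to deadline for the next fs_mark run (root required).
echo deadline > /sys/block/sda/queue/scheduler

# Re-run the same fs_mark workload under each scheduler and compare
# the files/sec column in its output.
```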

2017 Oct 10 · 0 replies · small files performance
I just tried setting:
performance.parallel-readdir on
features.cache-invalidation on
features.cache-invalidation-timeout 600
performance.stat-prefetch
performance.cache-invalidation
performance.md-cache-timeout 600
network.inode-lru-limit 50000
performance.cache-invalidation on
and clients could not see their files with ls when accessing via a fuse
mount. The files and directories were there,
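The options listed above are applied per volume with `gluster volume set`. A sketch, assuming a volume named myvol; the volume name, the values supplied where the original list omitted them, and the idea of reverting parallel-readdir first when fuse clients stop listing files, are all assumptions:

```shell
# Apply the small-file tuning options to one volume (volume name assumed).
gluster volume set myvol performance.parallel-readdir on
gluster volume set myvol features.cache-invalidation on
gluster volume set myvol features.cache-invalidation-timeout 600
gluster volume set myvol performance.stat-prefetch on
gluster volume set myvol performance.cache-invalidation on
gluster volume set myvol performance.md-cache-timeout 600
gluster volume set myvol network.inode-lru-limit 50000

# If fuse clients stop seeing files with ls, revert the most likely
# culprit and remount on the client side:
gluster volume reset myvol performance.parallel-readdir
```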

2017 Oct 10 · 2 replies · small files performance
2017-10-10 8:25 GMT+02:00 Karan Sandha <ksandha at redhat.com>:
> Hi Gandalf,
>
> We have several tunings to apply for small files: reducing the time for
> negative lookups, metadata caching, and parallel readdir. Bumping the server
> and client event threads will also help you increase small-file
> performance.
>
> gluster v set <vol-name> group

2007 Nov 19 · 0 replies · Solaris 8/07 Zfs Raidz NFS dies during iozone test on client host
Hi,
Well I have a freshly built system with ZFS raidz.
Intel P4 2.4 Ghz
1GB Ram
Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X Controller
(2) Intel Dual Port 1Gbit nics
I have (5) 300GB disks in a Raidz1 with Zfs.
I've created a couple of FS on this.
/export/downloads
/export/music
/export/musicraw
I've shared these out as well.
First with ZFS 'zfs

2010 May 25 · 0 replies · Magic parameter "-ec" of IOZone to increase the write performance of samba
Hi,
I am measuring the performance of my newly bought NAS with IOZone.
The NAS is of an embedded linux with samba installed. (CPU is Intel Atom)
IOZone reported write performance of over 1GBps while the file
size was less than or equal to 1GB.
Since the NIC is 1Gbps, the maximum speed should be about 125MiBps at
most.
The IOZone figure is too good to be true.
Later I found that if the
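The "-ec" flags are what make iozone account for caching: -e includes flush (fsync/fflush) time in the measurement and -c includes close() time, so buffered writes cannot inflate the result. A sketch of such a run; the mount point, record size, and file size are assumptions:

```shell
# -e: include fsync/fflush in timing, -c: include close() in timing,
# -i 0 -i 1: write/rewrite and read/reread tests only,
# -r 64k: record size, -s 2g: file larger than the NAS's cache.
iozone -e -c -i 0 -i 1 -r 64k -s 2g -f /mnt/nas/iozone.tmp
```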

2011 Jan 08 · 1 reply · how to graph iozone output using OpenOffice?
Hi all,
Can anyone please steer me in the right direction with this one? I've
searched the net, but couldn't find a clear answer.
How do I actually generate graphs from iozone, using OpenOffice? Every
website I've been to simply mentions that iozone can output an xls
file which can be used in MS Excel to generate a 3D graph. But, I
can't see how it's actually done. Can anyone
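One way this is usually done: iozone writes the spreadsheet itself when given -b, and the chart is then built manually in Calc. A sketch, assuming OpenOffice/LibreOffice is installed on the same machine:

```shell
# -a: full automatic mode, -b: write results to an Excel-compatible file.
iozone -a -b results.xls

# Open the file, select one result block (e.g. the writer report),
# then use Insert > Chart and pick a 3D bar type to get the usual
# iozone surface plot.
soffice --calc results.xls
```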

2009 Dec 15 · 1 reply · IOZone: Number of outstanding requests..
Hello:
Sorry for asking an iozone question on this mailing list, but I couldn't
find any mailing list for iozone...
In IOZone, is there a way to configure the number of outstanding requests
the client sends to the server side? Something along the lines of the
IOMeter option "Number of outstanding requests".
Thanks a lot!

2008 Jul 16 · 1 reply · [Fwd: [Fwd: The results of iozone stress on NFS/ZFS and SF X4500 shows the very bad performance in read but good in write]]
Dear ALL,
I have a customer who would like to use a Sun Fire X4500 as the NFS
server for backend services, and would like to see the potential
performance gain compared to their existing systems. However, the output
of the I/O stress test with iozone shows mixed results:
* The read performance degrades sharply (almost down to 1/20, i.e.
from 2,000,000 down to 100,000) when the

2008 Jul 03 · 2 replies · iozone remove_suid oops...
Having done a current checkout, creating a new FS and running iozone
[1] on it results in an oops [2]. remove_suid is called, accessing
offset 14 of a NULL pointer.
Let me know if you'd like me to test any fix, do further debugging or
get more information.
Thanks,
Daniel
--- [1]
# mkfs.btrfs /dev/sda4
# mount /dev/sda4 /mnt
/mnt# iozone -a .
--- [2]
[ 899.118926] BUG: unable to

2009 Apr 09 · 8 replies · ZIL SSD performance testing... -IOzone works great, others not so great
Hi folks,
I would appreciate it if someone could help me understand some weird
results I'm seeing while trying to do performance testing with an
SSD-offloaded ZIL.
I'm attempting to improve my infrastructure's burstable write capacity
(ZFS-based WebDAV servers), and naturally I'm looking at implementing
SSD-based ZIL devices.
I have a test machine with the

2008 Dec 14 · 1 reply · Is that iozone result normal?
A 5-node server and a 1-node client are connected by gigabit Ethernet.
#] iozone -r 32k -r 512k -s 8G
     KB  reclen   write  rewrite    read   reread
8388608      32   10559     9792   62435    62260
8388608     512   63012    63409   63409    63138
It seems the 32k write/rewrite performance is very

2018 Mar 08 · 0 replies · fuse vs libgfapi LIO performances comparison: how to make tests?
Dear support, I need to export a gluster volume with LIO for a
virtualization system. At the moment I have a very basic test
configuration: 2x HP 380 G7 (2x Intel X5670 six-core @ 2.93GHz, 72GB
RAM, RAID10 of 6x 10krpm SAS disks, Intel X540-T2 10GbE LAN), directly
interconnected. The Gluster configuration is replica 2. The OS is Fedora 27.
For my tests I used dd and I found strange results. Apparently the

2017 Sep 28 · 2 replies · Bandwidth and latency requirements
Interesting table, Karan!
Could you please tell us how you did the benchmark? fio, iozone, or
similar?
thanks
Arman.
On Wed, Sep 27, 2017 at 1:20 PM, Karan Sandha <ksandha at redhat.com> wrote:
> Hi Collin,
>
> During our arbiter latency testing for completion of ops we found the
> below results:- an arbiter node in another data centre and both the data
> bricks in the

2017 Oct 27 · 0 replies · Poor gluster performance on large files.
Why don't you set the LSI to passthrough mode and use one brick per HDD?
Regards,
Bartosz
> Message written by Brandon Bates <brandon at brandonbates.com> on 27.10.2017 at 08:47:
>
> Hi gluster users,
> I've spent several months trying to get any kind of high performance out of gluster. The current XFS/samba array is used for video editing and 300-400MB/s for

2017 Sep 29 · 0 replies · Bandwidth and latency requirements
It was a simple emulation of network delay on the server node's port
using the tc tool: tc qdisc add dev <port> root netem delay <time>ms. The files
were created using the dd tool (built into Linux) and mkdir. After the I/O we
verified that there were no pending heals.
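Spelled out, the emulation is a single netem qdisc on the server-facing port; a sketch, assuming the interface is eth0 and a 5 ms one-way delay (both values are assumptions):

```shell
# Add 5 ms of delay to all packets leaving eth0 (root required).
tc qdisc add dev eth0 root netem delay 5ms

# Confirm the qdisc is installed.
tc qdisc show dev eth0

# Remove it after the test.
tc qdisc del dev eth0 root netem
```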
Thanks & Regards
On Thu, Sep 28, 2017 at 2:06 PM, Arman Khalatyan <arm2arm at gmail.com> wrote:
> Interesting

2017 Sep 10 · 1 reply · GlusterFS as virtual machine storage
Hey guys,
I got another "reboot crash" with gfapi and this time libvirt-3.2.1
(from cbs.centos.org). Is there anyone who can audit the libgfapi
usage in libvirt? :-)
WK: I use bonded 2x10Gbps and I get crashes only in heavy I/O
situations (fio). Upgrading the system (apt-get dist-upgrade) was OK, so
this might even be related to the amount of IOPS.
-ps
On Sun, Sep 10, 2017 at 6:37 AM, WK

2017 Oct 12 · 0 replies · Bandwidth and latency requirements
Apologies for the late reply.
Further to this, if my Linux clients are connecting using glusterfs-fuse and
I have my volumes defined like this:
dc1srv1:/gv_fileshare dc2srv1:/gv_fileshare dc1srv2:/gv_fileshare
dc2srv2:/gv_fileshare (replica 2)
How do I ensure that clients in dc1 prefer dc1srv1 and dc1srv2 while
clients in dc2 prefer the dc2 servers?
Is it simply a matter of ordering in

2003 Oct 10 · 2 replies · Actual audio bitrates
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Hi,
I was just measuring the bitrates of a couple of codecs via IAX. I'm getting
much higher numbers than expected, so maybe I'm doing something wrong?
Measured with iptraf, the values displayed are:
codec: measured bitrate (bitrate according to codec definition)
gsm: 52 kbps (13 kbps)
alaw: 154 kbps (?)
speex: 57 kbps (24 kbps)
Seems a little
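Much of the gap between nominal and measured bitrate is per-packet header overhead. A sketch of the arithmetic, assuming 20 ms framing (50 packets/s) and 46 bytes of Ethernet+IP+UDP+IAX headers per packet; the header size and whether iptraf counts one or both directions are assumptions:

```shell
pps=50            # 20 ms voice frames -> 50 packets per second (assumed)
hdr_bytes=46      # 14 Ethernet + 20 IP + 8 UDP + 4 IAX mini-frame (assumed)
payload_bps=13000 # nominal GSM codec rate

# Header overhead in bits per second, then the on-wire total.
overhead_bps=$((pps * hdr_bytes * 8))
total_bps=$((payload_bps + overhead_bps))
echo "on-wire bitrate: ${total_bps} bps"  # header overhead alone exceeds the GSM payload
```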

2017 Sep 08 · 4 replies · GlusterFS as virtual machine storage
Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O after a few
minutes. SIGTERM on the other hand causes a crash, but this time it is
not a read-only remount; instead I see around 10 IOPS tops and 2 IOPS on average.
-ps
On Fri, Sep 8, 2017 at 1:56 PM, Diego Remolina <dijuremo at gmail.com> wrote:
> I currently only have a Windows 2012 R2 server VM in testing on top of
> the gluster storage,

2017 Oct 30 · 0 replies · Poor gluster performance on large files.
Hi Brandon,
Can you please turn OFF client-io-threads, as we have seen performance
degradation with io-threads ON for both sequential and random
reads/writes. Server event threads default to 1 and client event
threads to 2.
Thanks & Regards
On Fri, Oct 27, 2017 at 12:17 PM, Brandon Bates <brandon at brandonbates.com>
wrote:
> Hi gluster users,
> I've spent several