Displaying 20 results from an estimated 100 matches similar to: "Filebench Performance is weird"
2007 Oct 08
16
Fileserver performance tests
Hi all,
I want to replace a bunch of Apple Xserves with Xraids and HFS+ (brr) with Sun x4200s with SAS JBODs and ZFS. The application will be the Helios UB+ fileserver suite.
I installed the latest Solaris 10 on an x4200 with 8 GB of RAM and two Sun SAS controllers, attached two SAS JBODs with 8 SATA HDDs each, and created a ZFS pool as a RAID 10 by doing something like the following:
zpool create
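For sixteen SATA disks split across two JBODs, a RAID 10-style pool pairs one disk from each enclosure into mirrored vdevs; a minimal sketch of the idea, with purely hypothetical device names:

    zpool create tank \
        mirror c2t0d0 c3t0d0 \
        mirror c2t1d0 c3t1d0 \
        mirror c2t2d0 c3t2d0 \
        mirror c2t3d0 c3t3d0 \
        mirror c2t4d0 c3t4d0 \
        mirror c2t5d0 c3t5d0 \
        mirror c2t6d0 c3t6d0 \
        mirror c2t7d0 c3t7d0

ZFS stripes writes across all of the mirror vdevs, which is what gives the RAID 10 behaviour.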
2007 Oct 30
2
[osol-help] Squid Cache on a ZFS file system
On 29/10/2007, Tek Bahadur Limbu <teklimbu at wlink.com.np> wrote:
> I created a ZFS file system like the following with /mypool/cache being
> the partition for the Squid cache:
>
> 18:51:27 root@solaris:~$ zfs list
> NAME           USED  AVAIL  REFER  MOUNTPOINT
> mypool         478M  31.0G  10.0M  /mypool
> mypool/cache   230M  9.78G   230M
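A sketch of how a layout like that might be created; the 10G quota is an assumption, suggested only by the 9.78G AVAIL shown for mypool/cache above:

    zfs create mypool/cache
    zfs set quota=10G mypool/cache    # assumed cap, matching the ~10G of space shown above
    zfs list -r mypool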
2007 Nov 29
10
ZFS write time performance question
Hi,
This is a ZFS performance question with regard to SAN traffic.
We are trying to benchmark ZFS vs. VxFS file systems, and I get the following performance results.
Test Setup:
Solaris 10: 11/06
Dual-port QLogic HBA with SFCSM (for ZFS) and DMP (for VxFS)
Sun Fire v490 server
LSI Raid 3994 on backend
ZFS Record Size: 128KB (default)
VxFS Block Size: 8KB (default)
The only thing
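Since the comparison pits the 128KB ZFS default against VxFS's 8KB blocks, one common tuning step is to match the ZFS recordsize to the benchmark's I/O size. A sketch, with a hypothetical pool/dataset name:

    zfs set recordsize=8k tank/bench    # match the 8KB I/O size used by the benchmark
    zfs get recordsize tank/bench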
2010 Mar 05
17
why L2ARC device is used to store files ?
Greeting All
I have created a pool that consists of a hard disk and an SSD as a cache device:
zpool create hdd c11t0d0p3
zpool add hdd cache c8t0d0p0    # cache device
I ran an OLTP benchmark to emulate a DBMS.
Once I ran the benchmark, the pool started creating the database files on the
SSD cache device.
Can anyone explain why this is happening?
Isn't the L2ARC supposed to absorb the evicted data
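One way to check what the cache device is really doing is to watch per-vdev I/O while the benchmark runs; writes hitting the cache disk are normally L2ARC fill traffic rather than primary file data. A sketch against the pool created above:

    zpool status hdd         # c8t0d0p0 should be listed under a separate "cache" heading
    zpool iostat -v hdd 5    # per-vdev read/write ops and bandwidth, refreshed every 5 seconds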
2006 Nov 03
2
Filebench, X4200 and Sun Storagetek 6140
Hi there
I'm busy with some tests on the above hardware and will post some scores soon.
For those that do _not_ have the above available for tests, I'm open to suggestions on potential configs that I could run for you.
Pop me a mail if you want something specific _or_ you have suggestions concerning filebench (varmail) config setup.
Cheers
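For reference, the varmail personality is usually driven interactively; a minimal sketch, assuming a stock filebench install and a hypothetical target directory:

    filebench> load varmail
    filebench> set $dir=/tank/fbtest    # hypothetical test filesystem
    filebench> set $nfiles=10000
    filebench> run 60                   # run the workload for 60 seconds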
2014 Aug 21
2
[PATCH] vhost: Add polling mode
"Michael S. Tsirkin" <mst at redhat.com> wrote on 20/08/2014 01:57:10 PM:
> > Results:
> >
> > Netperf, 1 vm:
> > The polling patch improved throughput by ~33% (1516 MB/sec -> 2046 MB/sec).
> > Number of exits/sec decreased 6x.
> > The same improvement was shown when I tested with 3 vms running netperf
> > (4086 MB/sec -> 5545
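The figures quoted are netperf TCP stream results between guest and peer; a sketch of an equivalent run, assuming netserver is already listening at a hypothetical address:

    netperf -H 192.168.122.1 -t TCP_STREAM -l 60    # 60-second TCP throughput test against the peer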
2017 Jul 12
1
Hi all
I have set up a distributed GlusterFS volume with 3 servers. The network is
1GbE, and I ran a filebench test from a client.
Refer to this link:
https://s3.amazonaws.com/aws001/guided_trek/Performance_in_a_Gluster_Systemv6F.pdf
The more servers in the Gluster volume, the more throughput should be gained. I have
tested the network; the bandwidth is 117 MB/s, so with 3 servers I should gain
about 300 MB/s (3*117
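A sketch of the kind of volume described, with hypothetical host and brick names. Note that a plain distributed volume places each whole file on a single brick, so one client streaming one file sees roughly one brick's bandwidth; the aggregate only shows up with many files or many clients.

    gluster volume create distvol transport tcp \
        server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1
    gluster volume start distvol
    gluster volume info distvol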
2014 Aug 10
7
[PATCH] vhost: Add polling mode
From: Razya Ladelsky <razya at il.ibm.com>
Date: Thu, 31 Jul 2014 09:47:20 +0300
Subject: [PATCH] vhost: Add polling mode
When vhost is waiting for buffers from the guest driver (e.g., more packets to
send in vhost-net's transmit queue), it normally goes to sleep and waits for the
guest to "kick" it. This kick involves a PIO in the guest, and therefore an exit
(and possibly
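The exit counts mentioned in the results can be observed from the host with perf's KVM support; a sketch, assuming a hypothetical QEMU process ID:

    perf kvm stat record -p 4242    # trace the guest's VM exits; stop with Ctrl-C after the benchmark
    perf kvm stat report            # summarize exits by reason and frequency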
2008 Jul 06
2
Measuring ZFS performance - IOPS and throughput
Can anybody tell me how to measure the raw performance of a new system I'm putting together? I'd like to know what it's capable of in terms of IOPS and raw throughput to the disks.
I've seen Richard's raidoptimiser program, but I've only seen results for random read IOPS performance, and I'm particularly interested in write
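A crude starting point, assuming a hypothetical pool and dataset; note that ZFS caching and compression can make naive dd numbers misleading:

    zpool iostat -v tank 5                                   # per-vdev ops/sec and bandwidth
    dd if=/dev/zero of=/tank/fs/testfile bs=1M count=8192    # rough sequential write test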
2008 Aug 21
3
ZFS handling of many files
Hello,
I have been experimenting with ZFS on a test box, preparing to present it to
management.
One thing I cannot test right now is our real-world application load. We
currently write small files to CIFS shares.
We write about 250,000 files a day, in various sizes (1KB to 500MB). Some
directories get a lot of individual files (sometimes 50,000 or more) in a
single directory.
We spoke to a Sun
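A quick way to approximate the many-small-files-per-directory pattern locally, before involving CIFS at all; the path is hypothetical:

    mkdir -p /tank/share/testdir
    time sh -c 'i=0; while [ $i -lt 50000 ]; do
        dd if=/dev/zero of=/tank/share/testdir/f$i bs=1k count=1 2>/dev/null
        i=$((i+1))
    done'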
2009 Jan 17
2
Comparison between the S-TEC Zeus and the Intel X25-E ??
I'm looking at the newly-orderable (via Sun) STEC Zeus SSDs, and they're
outrageously priced.
http://www.stec-inc.com/product/zeusssd.php
I just looked at the Intel X25-E series, and they look comparable in
performance. At about 20% of the cost.
http://www.intel.com/design/flash/nand/extreme/index.htm
Can anyone enlighten me as to any possible difference between an STEC
2004 Jul 20
8
[Bug 897] scp doesn't clean up forked children when processing multiple files
http://bugzilla.mindrot.org/show_bug.cgi?id=897
Summary: scp doesn't clean up forked children when processing
multiple files
Product: Portable OpenSSH
Version: 3.8p1
Platform: All
OS/Version: All
Status: NEW
Severity: normal
Priority: P2
Component: scp
AssignedTo:
2007 May 14
37
Lots of overhead with ZFS - what am I doing wrong?
I was simply trying to test the bandwidth that Solaris/ZFS (Nevada b63) can deliver from a drive, doing this:
dd if=(raw disk) of=/dev/null gives me around 80 MB/s, while dd if=(file on ZFS) of=/dev/null gives me only 35 MB/s!? I am getting basically the same result whether it is a single ZFS drive, a mirror, or a stripe (I am testing with two Seagate 7200.10 320GB drives hanging off the same interface
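A sketch of the comparison being described, with hypothetical device and file names; watching iostat alongside shows whether the disk itself is the limit:

    dd if=/dev/rdsk/c1t0d0s0 of=/dev/null bs=1M count=4096    # raw device read
    dd if=/tank/testfile of=/dev/null bs=1M count=4096        # read of a file on ZFS
    iostat -xn 5                                              # per-device throughput while the dd runs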
2007 Jan 10
1
Solaris 10 11/06
Now that Solaris 10 11/06 is available, I wanted to post the complete list of ZFS features and bug fixes that were included in that release. I'm also including the necessary patches for anyone wanting to get all the ZFS features and fixes via patches (NOTE: later patch revisions may already be available):
Solaris 10 Update 3 (11/06) Patches
sparc Patches
* 118833-36 SunOS 5.10:
2010 Mar 06
3
Monitoring my disk activity
Recently I have been benchmarking all kinds of stuff on my systems, and one
question I can't intelligently answer is what blocksize I should use in
these tests.
I assume there is something that monitors current disk activity, which I
could run on my production servers to give me some statistics on the I/O
sizes that users are actually issuing on the production server.
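On Solaris, iostat already exposes enough to estimate the average I/O size per device; a sketch:

    iostat -xn 5    # columns include r/s, w/s, kr/s, kw/s per device
                    # average read size  ~ kr/s / r/s
                    # average write size ~ kw/s / w/s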
2012 Jul 19
11
Very slow samba file transfer speed... any ideas ?
Hi,
I have a btrfs volume, shared via Samba.
I have a directory of documents that I want to back up on my server.
Win7 reports a maximum of ~3.10 MB/s transfer speed,
while transferring the same directory to an ext4 Samba share I get 25 MB/s+.
Any ideas?
Is it like that because of how btrfs works and is set up?
Thanks,
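To narrow down whether btrfs or the Samba/network path is the bottleneck, it may help to test each layer separately; a sketch with hypothetical paths and share names:

    dd if=/dev/zero of=/srv/btrfs-share/testfile bs=1M count=2048 conv=fdatasync    # local write to the btrfs volume
    smbclient //server/share -U user -c 'put /tmp/bigfile bigfile'                  # share throughput without Windows in the loop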
2009 Nov 24
9
Best practices for zpools on zfs
Suppose I have a storage server that runs ZFS, presumably providing
file (NFS) and/or block (iSCSI, FC) services to other machines that
are running Solaris. Some of the use will be for LDoms and zones[1],
which would create zpools on top of zfs (fs or zvol). I have concerns
about variable block sizes and the implications for performance.
1.
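For the zvol-backed case, the volume's block size is fixed at creation time, which is where the variable-block-size concern comes in; a sketch with hypothetical names and sizes:

    zfs create -V 20g -o volblocksize=8k tank/ldom1-disk0    # fixed 8K blocks for the guest's disk
    # inside the guest, this device would back an ordinary zpool, e.g.:
    #   zpool create data c0d1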
2013 Jan 07
5
mpt_sas multipath problem?
Greetings,
We're trying out a new JBOD here. Multipath (mpxio) is not working,
and we could use some feedback and/or troubleshooting advice.
The OS is oi151a7, running on an existing server with a 54TB pool
of internal drives. I believe the server hardware is not relevant
to the JBOD issue, although the internal drives do appear to the
OS with multipath device names (despite the fact
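Two quick checks for whether mpxio has claimed the JBOD's paths, assuming a current OpenIndiana/illumos install:

    stmsboot -L         # lists non-STMS to STMS device name mappings when mpxio is active
    mpathadm list lu    # shows multipathed logical units and their operational path counts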