similar to: ZFS SMI vs EFI performance using filebench

Displaying 20 results from an estimated 600 matches similar to: "ZFS SMI vs EFI performance using filebench"

2010 Mar 02
9
Filebench Performance is weird
Greetings All, I am using the Filebench benchmark in "Interactive mode" to test ZFS performance with a randomread workload. My Filebench settings & run results are as follows:
------------------------------------------------------------------------------------------
filebench> set $filesize=5g
filebench> set $dir=/hdd/fs32k
filebench> set $iosize=32k
filebench> set
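For reference, a sketch of how such an interactive session typically continues, assuming the stock randomread personality and a 60-second run (the remaining settings and the results themselves are cut off above):

    filebench> load randomread
    filebench> set $dir=/hdd/fs32k
    filebench> set $filesize=5g
    filebench> set $iosize=32k
    filebench> set $nthreads=1
    filebench> run 60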
2006 Nov 03
2
Filebench, X4200 and Sun Storagetek 6140
Hi there I'm busy with some tests on the above hardware and will post some scores soon. For those that do _not_ have the above available for tests, I'm open to suggestions on potential configs that I could run for you. Pop me a mail if you want something specific _or_ you have suggestions concerning filebench (varmail) config setup. Cheers
2007 Oct 08
16
Fileserver performance tests
Hi all, I want to replace a bunch of Apple Xserves with Xraids and HFS+ (brr) by Sun x4200s with SAS JBODs and ZFS. The application will be the Helios UB+ fileserver suite. I installed the latest Solaris 10 on an x4200 with 8 GB of RAM and two Sun SAS controllers, attached two SAS JBODs with 8 SATA HDDs each, and created a ZFS pool as a RAID 10 by doing something like the following: zpool create
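The zpool command is cut off above; a hypothetical sketch of a RAID-10-style layout (striped mirrors) with placeholder device names, not the poster's actual command:

    zpool create tank \
      mirror c2t0d0 c3t0d0 \
      mirror c2t1d0 c3t1d0 \
      mirror c2t2d0 c3t2d0 \
      mirror c2t3d0 c3t3d0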
2007 Mar 30
0
On disk SMI & EFI label documentation
I'm not sure if this alias is only to discuss the Solaris ZFS implementation or others. I'm writing my own ZFS code from scratch in Java. I'll skip the reasons why I'm doing this in Java, let's just assume I have some. Is there any good documentation on the disk label structures? Right now my code is just reading the ZFS labels and the nvlist data but when
2008 Feb 06
3
x86: clear_IO_APIC_pin() and SMI delivery mode
clear_IO_APIC_pin() ignores entries that are set to delivery mode SMI. While this seems reasonable if the entry was unmasked, I consider it dubious for masked entries. In Linux, such behavior is benign, since when the entry is later used for some normal interrupt the old setting is simply overwritten. In Xen, however, ioapic_guest_write() uses the vector field to determine the previous
2008 Nov 04
0
QUESTIONS from EMC: EFI and SMI Disk Labels
All, my apologies in advance for the wide distribution - it was recommended that I contact these aliases but if there is a more appropriate one, please let me know... I have received the following EFI disk-related questions from the EMC PowerPath team who would like to provide more complete support for EFI disks on Sun platforms... I would appreciate help in answering these questions...
2009 Aug 05
2
?: SMI vs. EFI label and a disk's write cache
For Solaris 10 5/09... There are supposed to be performance improvements if you create a zpool on a full disk, such as one with an EFI label. Does the same apply if the full disk is used with an SMI label, which is required to boot? I am trying to determine the trade-off, if any, of having a single rpool on cXtYd0s2, if I can even do that, and improved performance compared to having two
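For context, a sketch of the two layouts being compared (device names are placeholders): handing ZFS a whole disk gets an EFI label and lets ZFS enable the disk's write cache, while a bootable rpool has to live on a slice of an SMI-labeled disk:

    # whole disk: ZFS writes an EFI label and can enable the write cache
    zpool create tank c1t0d0
    # slice of an SMI-labeled disk (required for booting)
    zpool create rpool c0t0d0s0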
2007 Oct 30
2
[osol-help] Squid Cache on a ZFS file system
On 29/10/2007, Tek Bahadur Limbu <teklimbu at wlink.com.np> wrote:
> I created a ZFS file system like the following with /mypool/cache being
> the partition for the Squid cache:
>
> 18:51:27 root at solaris:~$ zfs list
> NAME           USED  AVAIL  REFER  MOUNTPOINT
> mypool         478M  31.0G  10.0M  /mypool
> mypool/cache   230M  9.78G   230M
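The commands that created these datasets are not shown; a hypothetical recreation, with the 10G quota inferred from the AVAIL column and not confirmed by the excerpt:

    zfs create mypool/cache
    zfs set quota=10G mypool/cache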
2009 Feb 03
1
Cannot Mirror RPOOL, Can't Label Disk to SMI
Dear ZFS experts, I have 2 SATA 500 GB hard drives on my dual-core PC. I have installed OpenSolaris 2008.11 using a Live CD I got from Sun Tech Days in Singapore. Now, using all the guidelines I got here at the Indiana discussion list, I can't attach my second drive to rpool to make them a mirror. Initially I was playing around with a similar configuration in VirtualBox, and it did not succeed. Finally
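A common sequence for this on OpenSolaris-era systems, sketched with placeholder device names (not the poster's actual commands):

    format -e c1t1d0                      # relabel the second disk: choose "label", then SMI (VTOC)
    prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2   # copy the slice table from the first disk
    zpool attach rpool c1t0d0s0 c1t1d0s0  # attach the matching slice as a mirror
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0   # make the new disk bootable (x86)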
2017 Jul 12
1
Hi all
I have set up a distributed glusterfs volume with 3 servers. The network is 1GbE, and I ran a filebench test from a client. Refer to this link: https://s3.amazonaws.com/aws001/guided_trek/Performance_in_a_Gluster_Systemv6F.pdf The more servers in the Gluster volume, the more throughput it should gain. I have tested the network; the bandwidth is 117 MB/s, so when I have 3 servers I should gain about 350 MB/s (3*117
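For reference, a minimal sketch of how such a 3-server distributed volume is typically created and mounted (hostnames and brick paths are placeholders):

    gluster volume create distvol server1:/export/brick1 server2:/export/brick1 server3:/export/brick1
    gluster volume start distvol
    mount -t glusterfs server1:/distvol /mnt/distvol    # on the client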
2014 Aug 20
0
[PATCH] vhost: Add polling mode
On 10/08/14 10:30, Razya Ladelsky wrote:
> From: Razya Ladelsky <razya at il.ibm.com>
> Date: Thu, 31 Jul 2014 09:47:20 +0300
> Subject: [PATCH] vhost: Add polling mode
>
> When vhost is waiting for buffers from the guest driver (e.g., more packets to
> send in vhost-net's transmit queue), it normally goes to sleep and waits for the
> guest to "kick" it.
2014 Aug 21
2
[PATCH] vhost: Add polling mode
"Michael S. Tsirkin" <mst at redhat.com> wrote on 20/08/2014 01:57:10 PM: > > Results: > > > > Netperf, 1 vm: > > The polling patch improved throughput by ~33% (1516 MB/sec -> 2046 MB/sec). > > Number of exits/sec decreased 6x. > > The same improvement was shown when I tested with 3 vms running netperf > > (4086 MB/sec -> 5545
2014 Aug 21
2
[PATCH] vhost: Add polling mode
"Michael S. Tsirkin" <mst at redhat.com> wrote on 20/08/2014 01:57:10 PM: > > Results: > > > > Netperf, 1 vm: > > The polling patch improved throughput by ~33% (1516 MB/sec -> 2046 MB/sec). > > Number of exits/sec decreased 6x. > > The same improvement was shown when I tested with 3 vms running netperf > > (4086 MB/sec -> 5545
2014 Aug 21
0
[PATCH] vhost: Add polling mode
From: Razya Ladelsky
> "Michael S. Tsirkin" <mst at redhat.com> wrote on 20/08/2014 01:57:10 PM:
>
> > > Results:
> > >
> > > Netperf, 1 vm:
> > > The polling patch improved throughput by ~33% (1516 MB/sec -> 2046 MB/sec).
> > > Number of exits/sec decreased 6x.
> > > The same improvement was shown when I tested with 3
2014 Aug 10
0
[PATCH] vhost: Add polling mode
On Sun, Aug 10, 2014 at 11:30:35AM +0300, Razya Ladelsky wrote:
> From: Razya Ladelsky <razya at il.ibm.com>
> Date: Thu, 31 Jul 2014 09:47:20 +0300
> Subject: [PATCH] vhost: Add polling mode
>
> When vhost is waiting for buffers from the guest driver (e.g., more packets to
> send in vhost-net's transmit queue), it normally goes to sleep and waits for the
> guest to
2014 Aug 20
0
[PATCH] vhost: Add polling mode
On Sun, Aug 10, 2014 at 11:30:35AM +0300, Razya Ladelsky wrote:
> From: Razya Ladelsky <razya at il.ibm.com>
> Date: Thu, 31 Jul 2014 09:47:20 +0300
> Subject: [PATCH] vhost: Add polling mode
>
> When vhost is waiting for buffers from the guest driver (e.g., more packets to
> send in vhost-net's transmit queue), it normally goes to sleep and waits for the
> guest to
2008 Nov 17
0
Overhead evaluation of my nfsv3client probe implementation
Hi, Thanks for the comments on my nfsv3client probe implementation! I have made changes accordingly. Webrev: http://cr.opensolaris.org/~danhua/webrev/ To reduce the overhead, I use a local variable to save the XID rather than allocating memory with kmem_zalloc(). Regarding the overhead caused by tsd_get() and tsd_set(), I did an experiment to measure it. In this experiment, I run a dtrace
2014 Aug 10
7
[PATCH] vhost: Add polling mode
From: Razya Ladelsky <razya at il.ibm.com>
Date: Thu, 31 Jul 2014 09:47:20 +0300
Subject: [PATCH] vhost: Add polling mode

When vhost is waiting for buffers from the guest driver (e.g., more packets to send in vhost-net's transmit queue), it normally goes to sleep and waits for the guest to "kick" it. This kick involves a PIO in the guest, and therefore an exit (and possibly
2014 Aug 10
7
[PATCH] vhost: Add polling mode
From: Razya Ladelsky <razya at il.ibm.com>
Date: Thu, 31 Jul 2014 09:47:20 +0300
Subject: [PATCH] vhost: Add polling mode

When vhost is waiting for buffers from the guest driver (e.g., more packets to send in vhost-net's transmit queue), it normally goes to sleep and waits for the guest to "kick" it. This kick involves a PIO in the guest, and therefore an exit (and possibly
2007 Jan 10
1
Solaris 10 11/06
Now that Solaris 10 11/06 is available, I wanted to post the complete list of ZFS features and bug fixes that were included in that release. I'm also including the necessary patches for anyone wanting to get all the ZFS features and fixes via patches (NOTE: later patch revisions may already be available):

Solaris 10 Update 3 (11/06) Patches

sparc Patches
* 118833-36 SunOS 5.10: