similar to: Filebench, X4200 and Sun Storagetek 6140

Displaying 20 results from an estimated 400 matches similar to: "Filebench, X4200 and Sun Storagetek 6140"

2006 Jul 28
20
3510 JBOD ZFS vs 3510 HW RAID
Hi there, Is it fair to compare the two solutions using Solaris 10 U2 and a commercial database (SAP SD scenario)? The cache on the HW RAID helps and the CPU load is lower... but the solution costs more and you _might_ not need the performance of the HW RAID. Has anybody with access to these units run a benchmark comparing the performance and (with the pricelist in hand) come to a conclusion?
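For reference, the two setups being compared come down to giving ZFS the raw JBOD disks or layering it on a LUN exported by the 3510 RAID controller. A minimal sketch, with hypothetical device names (not from the thread):

    # ZFS-managed redundancy on the 3510 JBOD (hypothetical device names)
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

    # ZFS on top of a single LUN exported by the 3510 hardware RAID controller
    zpool create tank c2t0d0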
2008 Feb 15
38
Performance with Sun StorageTek 2540
Under Solaris 10 on a 4-core Sun Ultra 40 with 20GB RAM, I am setting up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives, connected via load-shared 4Gbit FC links. This week I have tried many different configurations, using firmware-managed RAID, ZFS-managed RAID, and with the controller cache enabled or disabled. My objective is to obtain the best single-file write performance.
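As a rough way to compare the configurations on single-file write throughput, a simple timed sequential write can be repeated after each change. A sketch only, assuming a pool mounted at /tank and a hypothetical test file:

    # 32GB sequential write, larger than the 20GB of RAM so the result is not just the ARC
    ptime dd if=/dev/zero of=/tank/bigfile bs=1048576 count=32768
    sync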
2010 Mar 02
9
Filebench Performance is weird
Greetings all, I am using the Filebench benchmark in interactive mode to test ZFS performance with the randomread workload. My Filebench settings & run results are as follows: ------------------------------------------------------------------------------------------ filebench> set $filesize=5g filebench> set $dir=/hdd/fs32k filebench> set $iosize=32k filebench> set
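For context, a typical interactive randomread session continues along these lines; the values shown in the excerpt are the poster's, while the workload load/run steps below are a sketch with assumed thread count and run length:

    filebench> load randomread
    filebench> set $dir=/hdd/fs32k
    filebench> set $filesize=5g
    filebench> set $iosize=32k
    filebench> set $nthreads=1
    filebench> run 60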
2007 Oct 08
16
Fileserver performance tests
Hi all, I want to replace a bunch of Apple Xserves with Xraids and HFS+ (brr) with Sun x4200s with SAS JBODs and ZFS. The application will be the Helios UB+ fileserver suite. I installed the latest Solaris 10 on an x4200 with 8GB of RAM and two Sun SAS controllers, attached two SAS JBODs with 8 SATA HDDs each and created a ZFS pool as a RAID 10 by doing something like the following: zpool create
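The command is cut off above; for illustration only, a RAID-10 layout in ZFS is a stripe of mirrors, with each mirror split across the two JBODs. A sketch with hypothetical device names, not the poster's actual command:

    zpool create tank \
      mirror c2t0d0 c3t0d0 \
      mirror c2t1d0 c3t1d0 \
      mirror c2t2d0 c3t2d0 \
      mirror c2t3d0 c3t3d0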
2009 Apr 23
1
ZFS SMI vs EFI performance using filebench
I have been testing the performance of ZFS vs. UFS using filebench. The setup is a V240, 4GB RAM, 2 CPUs at 1503MHz, 1 320GB _SAN_-attached LUN, and a ZFS mirrored root disk. Our SAN is a top-notch NVRAM-based SAN. There are lots of discussions about using ZFS with SAN-based storage... and it seems ZFS is designed to perform best with dumb disks (JBODs). The tests I ran support this observation... and
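The SMI-vs-EFI difference usually comes down to whether ZFS is handed the whole disk or a slice; a minimal sketch with hypothetical device names:

    # Whole disk: ZFS writes an EFI label and can enable the disk write cache
    zpool create tank c1t0d0

    # Slice of an SMI (VTOC) labeled disk: the SMI label is kept
    zpool create tank c1t0d0s0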
2008 Jan 09
1
Sun Fire X4200 M2 / CentOS 5 APIC issues
I had been running CentOS 5 happily on my Sun Fire X4200 M2 systems; then I upgraded the BIOS and ILOM firmware. Now I'm running into what seems to be a fairly common problem with newer motherboards. I cannot boot unless I use the 'noapic' kernel option. If I try to boot the kernel normally, I get the error: "MP-BIOS bug: 8254 timer not connected to IO-APIC" I can
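For reference, the workaround amounts to appending noapic to the kernel line in GRUB. A sketch of what the relevant /boot/grub/grub.conf stanza might look like; the kernel version and root device here are assumptions:

    title CentOS 5 (noapic workaround)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-53.el5 ro root=/dev/VolGroup00/LogVol00 noapic
        initrd /initrd-2.6.18-53.el5.img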
2006 Jul 07
0
Sun x4200 vs. Xen
Has anyone successfully managed to get Xen working on a Sun x4200? I am trying to boot the Xen kernel after installation from RPM on a clean RHEL ES 4 update 3 installation, and it freezes/hangs with a hard lock on "PCI: Using configuration type 1". Any ideas are much appreciated.
2007 Oct 30
2
[osol-help] Squid Cache on a ZFS file system
On 29/10/2007, Tek Bahadur Limbu <teklimbu at wlink.com.np> wrote: > I created a ZFS file system like the following with /mypool/cache being > the partition for the Squid cache: > > 18:51:27 root at solaris:~$ zfs list > NAME USED AVAIL REFER MOUNTPOINT > mypool 478M 31.0G 10.0M /mypool > mypool/cache 230M 9.78G 230M
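For comparison, a dedicated Squid cache filesystem is often created with a couple of properties tuned for its small-object workload; a sketch only, and the property values below are assumptions, not from the thread:

    zfs create mypool/cache
    zfs set recordsize=8k mypool/cache     # closer to Squid's typical object I/O size
    zfs set atime=off mypool/cache         # avoid a metadata update on every cache hit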
2007 Nov 29
10
ZFS write time performance question
Hi, The question is a ZFS performance question in regard to SAN traffic. We are trying to benchmark ZFS vs. VxFS file systems and I get the following performance results. Test Setup: Solaris 10 11/06, dual-port QLogic HBA with SFCSM (for ZFS) and DMP (for VxFS), Sun Fire V490 server, LSI RAID 3994 on the backend. ZFS Record Size: 128KB (default), VxFS Block Size: 8KB (default). The only thing
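Since the comparison pits ZFS's 128KB default record size against VxFS's 8KB block size, a common step is to align the ZFS recordsize with the I/O size before re-running the benchmark. A sketch with a hypothetical pool/filesystem name:

    # Match the ZFS recordsize to the 8KB block size used on the VxFS side
    # (recordsize only affects files written after the change)
    zfs set recordsize=8k tank/bench
    zfs get recordsize tank/bench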
2009 Apr 15
5
StorageTek 2540 performance radically changed
Today I updated the firmware on my StorageTek 2540 to the latest recommended version and am seeing radically different performance when testing with iozone than I saw in February of 2008. I am using Solaris 10 U5 with all the latest patches. This is the performance achieved (on a 32GB file) in February last year: KB reclen write rewrite read reread 33554432
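For reference, an iozone run over a 32GB file that produces that write/rewrite/read/reread table looks roughly like this; the record size and file path are assumptions, not the poster's exact invocation:

    # Sequential write/rewrite (-i 0) and read/reread (-i 1) on a 32GB file
    iozone -i 0 -i 1 -s 32g -r 128k -f /pool/iozone.tmp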
2006 Nov 07
6
Best Practices recommendation on x4200
Greetings all- I have a new X4200 that I'm getting ready to deploy. It has four 146 GB SAS drives. I'd like to set up the box for maximum redundancy for the data stored on these drives. Unfortunately, it looks like ZFS boot/root aren't really options at this time. The LSI Logic controller in this box only supports either a RAID0 array with all four disks, or a RAID 1
2008 Feb 05
31
ZFS Performance Issue
This may not be a ZFS issue, so please bear with me! I have 4 internal drives that I have striped/mirrored with ZFS and have an application server which is reading/writing to hundreds of thousands of files on it, thousands of files @ a time. If 1 client uses the app server, the transaction (reading/writing to ~80 files) takes about 200 ms. If I have about 80 clients attempting it @ once, it can
2014 Aug 21
2
[PATCH] vhost: Add polling mode
"Michael S. Tsirkin" <mst at redhat.com> wrote on 20/08/2014 01:57:10 PM: > > Results: > > > > Netperf, 1 vm: > > The polling patch improved throughput by ~33% (1516 MB/sec -> 2046 MB/sec). > > Number of exits/sec decreased 6x. > > The same improvement was shown when I tested with 3 vms running netperf > > (4086 MB/sec -> 5545
2007 May 24
9
No zfs_nocacheflush in Solaris 10?
Hi, I'm running SunOS Release 5.10 Version Generic_118855-36 64-bit and in /etc/system I put: set zfs:zfs_nocacheflush = 1 And after rebooting, I get the message: sorry, variable 'zfs_nocacheflush' is not defined in the 'zfs' module So is this variable not available in the Solaris kernel? I'm getting really poor
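One way to confirm whether the tunable exists in a given kernel, before touching /etc/system, is to look the symbol up with mdb; a quick sketch:

    # Print the current value if the symbol exists in the running kernel
    echo 'zfs_nocacheflush/D' | mdb -k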
2017 Jul 12
1
Hi all
I have set up a distributed GlusterFS volume with 3 servers. The network is 1GbE, and I run a filebench test from a client. Refer to this link: https://s3.amazonaws.com/aws001/guided_trek/Performance_in_a_Gluster_Systemv6F.pdf The more servers in the Gluster volume, the more throughput should be gained. I have tested the network; the bandwidth is 117 MB/s, so with 3 servers I should get about 300 MB/s (3*117
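For reference, a 3-server distributed volume of the kind described is created along these lines; the server and brick names are hypothetical:

    # Distributed (not replicated) volume across 3 servers
    gluster volume create distvol server1:/export/brick1 server2:/export/brick1 server3:/export/brick1
    gluster volume start distvol

    # On the client
    mount -t glusterfs server1:/distvol /mnt/distvol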
2003 Nov 04
1
General installation question: mksh error
Here is the last output from the make install command: Installing bin/readonly.so as /usr/local/samba/lib/vfs/readonly.so Installing bin/cap.so as /usr/local/samba/lib/vfs/cap.so Installing bin/CP850.so as /usr/local/samba/lib/charset/CP850.so Installing bin/CP437.so as /usr/local/samba/lib/charset/CP437.so ./install-sh -c bin/libsmbclient.so /usr/local/samba/lib mksh: Fatal error: Cannot load
2008 Apr 08
1
Please help: LDAP configuration _almost_ works.
Red Hat Linux release 7.2 (Enigma) OpenLDAP 2.3.38 Dovecot 1.0.12 SHORT VERSION ----- ------- Here is my dovecot-ldap.conf: hosts = ldap.lrtz dn = cn=varmail,ou=users,dc=lorentz,dc=com dnpass = ********* ldap_version = 3 auth_bind = yes pass_filter = (&(objectClass=inetOrgPerson)(mail=%Lu)) base = ou=users, dc=%Dd scope = onelevel I have tested using the above information with
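One way to check the bind DN and pass_filter outside Dovecot is with ldapsearch, substituting a real address for the %Lu / %Dd variables. A sketch, with a hypothetical address standing in for the variables:

    # Bind as the lookup DN and run the expanded pass_filter by hand
    ldapsearch -x -H ldap://ldap.lrtz \
      -D "cn=varmail,ou=users,dc=lorentz,dc=com" -W \
      -b "ou=users,dc=lorentz,dc=com" -s one \
      "(&(objectClass=inetOrgPerson)(mail=user@lorentz.com))"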
2014 Aug 10
7
[PATCH] vhost: Add polling mode
From: Razya Ladelsky <razya at il.ibm.com> Date: Thu, 31 Jul 2014 09:47:20 +0300 Subject: [PATCH] vhost: Add polling mode When vhost is waiting for buffers from the guest driver (e.g., more packets to send in vhost-net's transmit queue), it normally goes to sleep and waits for the guest to "kick" it. This kick involves a PIO in the guest, and therefore an exit (and possibly