similar to: Measuring ZFS performance - IOPS and throughput

Displaying 20 results from an estimated 5000 matches similar to: "Measuring ZFS performance - IOPS and throughput"

2009 Dec 04
2
Measuring IOPS on Linux - numbers make sense?
Hello, When approaching hosting providers for services, the first question many of them asked us was about the number of IOPS the disk system should support. While we stress-tested our service, we recorded between 4000 and 6000 "merged I/O operations per second" as seen in "iostat -x" and collectd (this varies between the different components of the system; we have a few such
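For reference, a minimal sketch of where those numbers come from in "iostat -x" (column positions vary across sysstat versions; the device name is a placeholder):

    # Extended device stats every 5 seconds; rrqm/s and wrqm/s are the
    # merged read/write requests per second, r/s and w/s the issued IOPS.
    iostat -x 5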
2013 Mar 18
2
Disk IOPS performance scalability
Hi, Seeing a drop-off in IOPS when more vCPUs are added:

3.8.2 kernel / xen-4.2.1 / single domU / LVM backend / 8GB RAM domU / 2GB RAM dom0
dom0_max_vcpus=2 dom0_vcpus_pin

domU  8 cores: fio result 145k IOPS
domU 10 cores: fio result  99k IOPS
domU 12 cores: fio result  89k IOPS
domU 14 cores: fio result  81k IOPS

ioping . -c 3
4096 bytes from . (ext4 /dev/xvda1): request=1 time=0.1 ms
4096 bytes
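For reference, a fio invocation of roughly this shape would produce random-read IOPS numbers like those above (a sketch; every parameter here is an assumption, not the poster's actual job file):

    # Hypothetical 4k random-read job against the domU's block device.
    fio --name=randread --filename=/dev/xvda1 --direct=1 --rw=randread \
        --bs=4k --ioengine=libaio --iodepth=32 --numjobs=4 \
        --runtime=30 --time_based --group_reporting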
2008 Jul 07
8
zfs-discuss Digest, Vol 33, Issue 19
Hello Ross, We're trying to accomplish the same goal over here, i.e. serving multiple VMware images from an NFS server. Could you tell us what kind of NVRAM device you ended up choosing? We bought a Micromemory PCI card but can't get a Solaris driver for it... Thanks Gilberto On 7/6/08 9:54 AM, "zfs-discuss-request at opensolaris.org" <zfs-discuss-request at
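For context, such an NVRAM card would typically be attached as a separate ZFS intent log (slog) to absorb the synchronous writes NFS generates. A minimal sketch, with placeholder pool and device names:

    # Add the NVRAM card as a dedicated log device, then verify.
    zpool add tank log c2t0d0
    zpool status tank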
2007 Nov 29
10
ZFS write time performance question
Hi, This is a ZFS performance question with regard to SAN traffic. We are trying to benchmark ZFS vs. VxFS file systems and I get the following performance results.

Test Setup:
Solaris 10 11/06
Dual-port QLogic HBA with SFCSM (for ZFS) and DMP (for VxFS)
Sun Fire V490 server
LSI RAID 3994 on the backend
ZFS record size: 128 KB (default)
VxFS block size: 8 KB (default)

The only thing
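Note that the ZFS record size, unlike the VxFS block size, can be changed per dataset after creation (it applies only to newly written files). A sketch with a hypothetical dataset name:

    # Inspect and, if desired, match the ZFS record size to the 8 KB
    # VxFS block size for a like-for-like comparison.
    zfs get recordsize tank/fs
    zfs set recordsize=8k tank/fs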
2016 Feb 03
6
Measuring memory bandwidth utilization
I'd like to know what the cause of a particular DB server's slowdown might be. We've ruled out IOPS for the disks (~20%) and raw CPU load (top shows perhaps half of the cores busy), but the system slows to a crawl. We suspect we're simply running out of memory bandwidth but have no way to confirm this suspicion. Is there a way to test for this? Think: iostat but for
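One possible approach on Intel hardware, hedged because event names vary by CPU generation and kernel: read the memory controller's uncore counters with perf, or use Intel's PCM tools if available:

    # Memory-controller read/write traffic, system-wide, for 10 seconds
    # (works where the kernel exposes an uncore_imc PMU; counts are in
    # 64-byte cache lines):
    perf stat -a -e uncore_imc/data_reads/,uncore_imc/data_writes/ sleep 10
    # Alternative: Intel PCM prints per-channel bandwidth each second.
    pcm-memory 1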
2007 Oct 08
16
Fileserver performance tests
Hi all, I want to replace a bunch of Apple Xserves with Xraids and HFS+ (brr) with Sun x4200s with SAS JBODs and ZFS. The application will be the Helios UB+ fileserver suite. I installed the latest Solaris 10 on an x4200 with 8 GB of RAM and two Sun SAS controllers, attached two SAS JBODs with 8 SATA HDDs each, and created a ZFS pool as a RAID 10 by doing something like the following: zpool create
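For reference, a ZFS "RAID 10" is a stripe of mirror vdevs; a minimal sketch of the truncated command above, with placeholder device names:

    # Stripe of four 2-way mirrors across eight disks.
    zpool create tank \
      mirror c1t0d0 c1t1d0 \
      mirror c1t2d0 c1t3d0 \
      mirror c1t4d0 c1t5d0 \
      mirror c1t6d0 c1t7d0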
2010 Feb 12
13
SSD and ZFS
Hi all, just after sending a message to sunmanagers I realized that my question should rather have gone here, so sunmanagers please excuse the double post: I have inherited an X4140 (8 SAS slots) and have just set up the system with Solaris 10 09. I first set up the system on a mirrored pool over the first two disks:

pool: rpool
state: ONLINE
scrub: none requested
config:
NAME
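A common follow-up layout for a box like this is a data pool on the remaining SAS slots with SSDs as slog and L2ARC. A sketch only; the pool and device names are hypothetical:

    # Mirrored data pool, one SSD as intent log, one as read cache.
    zpool create tank mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0
    zpool add tank log c1t6d0
    zpool add tank cache c1t7d0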
2010 Mar 11
1
zpool iostat / how to tell if you're IOPS bound
What is the best way to tell if you're bound by the number of individual operations per second / random I/O? "zpool iostat" has an "operations" column, but this doesn't really tell me if my disks are saturated. Traditional "iostat" doesn't seem to be the greatest place to look when utilizing ZFS. Thanks, Chris
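A sketch of the usual two views, assuming a pool named tank: zpool iostat for per-vdev operations, and Solaris iostat for per-disk queueing and busy time:

    # Per-vdev ops and bandwidth every 5 seconds.
    zpool iostat -v tank 5
    # Per-disk view: sustained %b near 100 with a high actv (queue
    # depth) suggests the disks are IOPS-saturated.
    iostat -xn 5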
2009 Apr 23
1
Unexpectedly poor 10-disk RAID-Z2 performance?
Hail, caesar. I've got a 10-disk RAID-Z2 backed by the 1.5 TB Seagate drives everyone's so fond of. They've all received a firmware upgrade (the sane one, not the one that caused your drives to brick if the internal event log hit the wrong number on boot). They're attached to an ARC-1280ML, a reasonably good SATA controller, which has 1 GB of ECC DDR2 for
2012 Jan 04
9
Stress test zfs
Hi all, I've got Solaris 10 9/10 running on a T3. It's an Oracle box with 128 GB of memory. I've been trying to load test the box with bonnie++. I can seem to get 80 to 90 K writes, but can't seem to get more than a couple K for writes. Any suggestions? Or should I take this to a bonnie++ mailing list? Any help is appreciated. I'm kinda
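For reference, a bonnie++ run on a 128 GB box would normally use a file set around twice RAM so the ARC cannot cache it. A sketch; the path and user here are assumptions:

    # 256 GB working set; -n 0 skips the small-file creation tests.
    bonnie++ -d /tank/bench -s 256g -n 0 -u nobody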
2008 Feb 15
38
Performance with Sun StorageTek 2540
Under Solaris 10 on a 4 core Sun Ultra 40 with 20GB RAM, I am setting up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives and connected via load-shared 4Gbit FC links. This week I have tried many different configurations, using firmware managed RAID, ZFS managed RAID, and with the controller cache enabled or disabled. My objective is to obtain the best single-file write performance.
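A quick single-file sequential-write check of the kind this testing implies, sketched with a hypothetical pool path (128 KB matches the default ZFS recordsize; ~32 GB comfortably exceeds the 20 GB of RAM):

    # Time a ~32 GB sequential write to one file.
    ptime dd if=/dev/zero of=/tank/testfile bs=128k count=250000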
2007 Aug 30
15
ZFS, XFS, and EXT4 compared
I have a lot of people whispering "zfs" in my virtual ear these days, and at the same time I have an irrational attachment to xfs based entirely on its lack of the 32000 subdirectory limit. I'm not afraid of ext4's newness, since really a lot of that stuff has been in Lustre for years. So a-benchmarking I went. Results at the bottom:
2011 Jun 01
11
SATA disk perf question
I figure this group will know better than any other I have contact with: is 700-800 IOPS reasonable for a 7200 RPM SATA drive (1 TB Sun-badged Seagate ST31000N in a J4400)? I have a resilver running and am seeing about 700-800 writes/sec on the hot spare as it resilvers. There is no other I/O activity on this box, as this is a remote replication target for production data. I have a the
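For what it's worth, 700-800 writes/sec is far above the ~75-100 random IOPS a 7200 RPM drive can sustain, so one explanation is that the resilver stream is being coalesced into larger, mostly sequential writes. Comparing kw/s to w/s shows the average write size:

    # Watch the hot spare's write size and the resilver's progress.
    iostat -xn 5
    zpool status tank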
2009 Aug 26
26
Xen and I/O Intensive Loads
Hi, folks, I'm attempting to run an e-mail server on Xen. The e-mail system is Novell GroupWise, and it serves about 250 users. The disk volume for the e-mail is on my SAN, and I've attached the FC LUN to my Xen host, then used the "phy:/dev..." method to forward the disk through to the domU. I'm running into an issue with high I/O wait on the box (~250%)
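For reference, the "phy:" forwarding mentioned above is normally a one-line entry in the domU config; a sketch with placeholder names:

    # Forward the FC LUN to the domU as a raw block device.
    disk = [ 'phy:/dev/mapper/groupwise-lun,xvdb,w' ]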
2010 Jun 17
9
Monitoring filesystem access
When somebody is hammering on the system, I want to be able to detect who's doing it, and hopefully even what they're doing. I can't seem to find any way to do that. Any suggestions? Everything I can find ... iostat, nfsstat, etc ... AFAIK, just shows me performance statistics and so forth. I'm looking for something more granular. Either *who* the
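One option on Solaris is a DTrace one-liner that attributes I/O syscalls to processes and users; a minimal sketch:

    # Count read/write syscalls by executable name and UID until Ctrl-C.
    dtrace -n 'syscall::read:entry,syscall::write:entry { @[execname, uid] = count(); }'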
2010 Jun 18
25
Erratic behavior on 24T zpool
Well, I've searched my brains out and I can't seem to find a reason for this. I'm getting bad to medium performance with my new test storage device. I've got 24 1.5T disks with 2 SSDs configured as a ZIL log device. I'm using the Areca RAID controller, the driver being arcmsr. Quad-core AMD with 16 GB of RAM, OpenSolaris upgraded to snv_134. The zpool
2007 Mar 14
3
I/O bottleneck root-cause identification with DTrace (controller or I/O bus)
DTrace and Performance Teams, I have the following I/O performance-specific questions (I'm already savvy with lockstat and the pre-DTrace utilities for performance analysis, but I need details on pinpointing I/O bottlenecks at the controller or I/O bus): Q.A> Determining I/O saturation bottlenecks (beyond service times and kernel contention). I'm
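A starting point here is the DTrace io provider, which fires for each physical I/O and exposes the target device, letting you group traffic per disk or controller path. A minimal sketch:

    # Per-device I/O counts and byte totals until Ctrl-C.
    dtrace -n 'io:::start { @ops[args[1]->dev_statname] = count(); @bytes[args[1]->dev_statname] = sum(args[0]->b_bcount); }'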
2010 Oct 06
14
Bursty writes - why?
I have a 24 x 1TB system being used as an NFS file server: Seagate SAS disks connected via an LSI 9211-8i SAS controller, disk layout 2 x 11-disk RAIDZ2 + 2 spares. I am using 2 x DDRdrive X1s as the ZIL. When we write anything to it, the writes are always very bursty, like this:

xpool  488K  20.0T  0  0  0  0
xpool  488K  20.0T  0  0  0  0
xpool
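Bursts with this shape are commonly the transaction-group flush: the slog devices absorb the synchronous NFS writes, while the main pool commits a txg at a fixed interval. A sketch, assuming a Solaris-era kernel, to inspect that interval:

    # Transaction-group flush interval in seconds (30 on older builds,
    # 5 on later ones).
    echo "zfs_txg_timeout/D" | mdb -k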
2011 May 13
27
Extremely slow zpool scrub performance
Running a zpool scrub on our production pool is showing a scrub rate of about 400K/s. (When this pool was first set up we saw rates in the MB/s range during a scrub.) Both zpool iostat and iostat -xn show lots of idle disk time, no above-average service times, and no abnormally high busy percentages. Load on the box is 0.59. 8 x 3GHz cores, 32 GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
2009 Jul 10
5
Slow Resilvering Performance
I know this topic has been discussed many times... but what the hell makes zpool resilvering so slow? I'm running OpenSolaris 2009.06. I have had a large number of problematic disks due to a bad production batch, leading me to resilver quite a few times, progressively replacing each disk as it dies (and now preemptively removing disks). My complaint is that resilvering ends up