similar to: SATA disk perf question

Displaying 20 results from an estimated 3000 matches similar to: "SATA disk perf question"

2010 Nov 01
6
Excruciatingly slow resilvering on X4540 (build 134)
Hello, I'm working with someone who replaced a failed 1TB drive (50% utilized) on an X4540 running OS build 134, and I think something must be wrong. Last Tuesday afternoon, zpool status reported: scrub: resilver in progress for 306h0m, 63.87% done, 173h7m to go. Since a week is 168 hours, that put completion at sometime tomorrow night. However, he just reported that zpool status shows:
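The remaining-time estimate quoted above is consistent with the elapsed time and percentage on the same line: remaining ~= elapsed * (100 - done) / done. A quick sanity check with awk, using the figures from that status line:

  awk 'BEGIN { elapsed = 306; done = 63.87; printf("%.1f hours to go\n", elapsed * (100 - done) / done) }'
  # prints roughly 173.1 hours, matching the 173h7m that zpool status reported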
2009 Dec 16
27
zfs hanging during reads
Hi, I hope there's someone here who can possibly provide some assistance. I've had this read problem for the past 2 months now and just can't get to the bottom of it. I have a home snv_111b server with a zfs raid pool (4 x Samsung 750GB SATA drives). The motherboard is an ASUS M2N68-CM (4 SATA ports) with an Athlon LE1620 single-core CPU and 4GB of RAM. I am using it
2007 May 14
37
Lots of overhead with ZFS - what am I doing wrong?
I was simply trying to test the bandwidth that Solaris/ZFS (Nevada b63) can deliver from a drive, and doing this: dd if=(raw disk) of=/dev/null gives me around 80MB/s, while dd if=(file on ZFS) of=/dev/null gives me only 35MB/s!? I am getting basically the same result whether it is a single ZFS drive, a mirror, or a stripe (I am testing with two Seagate 7200.10 320G drives hanging off the same interface
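The post paraphrases the two commands; spelled out, the comparison would look something like the lines below (the device path and file name are placeholders, and a large block size matters for the raw-device read):

  # raw-device read, bypassing ZFS entirely (device path is a placeholder)
  dd if=/dev/rdsk/c0t1d0s0 of=/dev/null bs=1024k count=1024
  # read of a file stored on the ZFS pool (path is a placeholder)
  dd if=/tank/testfile of=/dev/null bs=1024k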
2010 May 18
25
Very serious performance degradation
Hi, I'm running OpenSolaris 2009.06, and I'm facing a serious performance loss with ZFS! It's a raidz1 pool made of 4 x 1TB SATA disks:
  zfs_raid    ONLINE       0     0     0
    raidz1    ONLINE       0     0     0
      c7t2d0  ONLINE       0     0     0
      c7t3d0  ONLINE       0     0     0
      c7t4d0  ONLINE       0     0
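When a pool slows down suddenly, one disk quietly retrying can drag the whole raidz vdev down with it; a hedged first check, using the pool name from the quoted status:

  # cumulative per-device error counters; soft/hard/transport errors point at a struggling disk
  iostat -En
  # per-vdev and per-disk activity, sampled every 5 seconds while the slowdown is happening
  zpool iostat -v zfs_raid 5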
2008 Dec 17
12
disk utilization is over 200%
Hello, I use Brendan's sysperfstat script to see the overall system performance and found that the disk utilization is over 100:
              ------ Utilisation ------   ------ Saturation ------
     Time    %CPU   %Mem    %Disk   %Net    CPU    Mem
 15:51:38   14.52  15.01   200.00  24.42   0.00   0.00   83.53   0.00
 15:51:42   11.37  15.01   200.00  25.48   0.00   0.00   88.43   0.00
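A %Disk value above 100 in that script usually just means utilization is being summed across drives (two disks at 100% busy show up as 200); the per-device view from stock iostat is unambiguous:

  # %b is reported per device and cannot exceed 100; compare it with the aggregated figure above
  iostat -xn 5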
2009 Apr 12
7
Any news on ZFS bug 6535172?
We're running a Cyrus IMAP server on a T2000 under Solaris 10 with about 1 TB of mailboxes on ZFS filesystems. Recently, when under load, we've had incidents where IMAP operations became very slow. The general symptoms are that the number of imapd, pop3d, and lmtpd processes increases, the CPU load average increases, but the ZFS I/O bandwidth decreases. At the same time, ZFS
2010 Feb 12
13
SSD and ZFS
Hi all, just after sending a message to sunmanagers I realized that my question should rather have gone here. So sunmanagers, please excuse the double post: I have inherited an X4140 (8 SAS slots) and have just set up the system with Solaris 10 09. I first set up the system on a mirrored pool over the first two disks:
  pool: rpool
 state: ONLINE
 scrub: none requested
config:
        NAME
2012 Jul 18
4
asterisk 1.8 on Solaris/sparc
I've got the latest Asterisk 1.8 running on a Netra X1 with Solaris 10 u10. The system itself is happy and phone calls (between two parties) seem fine. Unfortunately, when a caller listens to a Playback recording, there seem to be moments of stutter - perhaps 1 second of stutter for every 10 seconds of Playback. The stutter does not occur at a consistent point in the playback file. To
2008 Oct 08
1
Troubleshooting ZFS performance with SIL3124 cards
Hi! I have a problem with ZFS and most likely the SATA PCI-X controllers. I run OpenSolaris 2008.11 snv_98 and my hardware is a Sun Netra X4200 M2 with 3 SIL3124 PCI-X cards (4 eSATA ports each) connected to 3 1U disk chassis, each holding 4 SATA disks manufactured by Seagate, model ES.2 (500 and 750), for a total of 12 disks. Every disk has its own eSATA cable connected to the ports on the PCI-X
2006 Jul 30
6
zfs mount stuck in zil_replay
Hello ZFS, System was rebooted and after reboot server again. System is snv_39, SPARC, T2000.
bash-3.00# ptree
  7     /lib/svc/bin/svc.startd -s
    163   /sbin/sh /lib/svc/method/fs-local
      254   /usr/sbin/zfs mount -a
[...]
bash-3.00# zfs list | wc -l
      46
Using df I can see most file systems are already mounted.
> ::ps!grep zfs
R    254    163      7      7      0 0x4a004000
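To see where a mount stuck in zil_replay is actually sitting, the kernel stacks of the zfs mount threads are the most direct evidence; a sketch with mdb, assuming the PID 254 shown above:

  # run as root; -k attaches mdb to the running kernel, 0t254 is PID 254 in decimal
  echo "0t254::pid2proc | ::walk thread | ::findstack -v" | mdb -k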
2005 Aug 29
14
Oracle 9.2.0.6 on Solaris 10
How can I tell if this is normal behaviour? Oracle imports are horribly slow, an order of magnitude slower than on the same hardware with a slower disk array and Solaris 9. What can I look for to see where the problem lies? The server is 99% idle right now, with one database running. Each sample is about 5 seconds. I've tried setting kernel parameters despite the docs saying that
2007 Feb 13
2
zpool export consumes whole CPU and takes more than 30 minutes to complete
Hi. T2000 1.2GHz 8-core, 32GB RAM, S10U3, zil_disable=1. The command 'zpool export f3-2' has been hung for 30 minutes now and is still going. Nothing else is running on the server. I can see one CPU at 100% in SYS, like:
bash-3.00# mpstat 1
[...]
CPU minf mjf xcal intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0    0   0   67  220  110   20    0    0    0    0
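With one CPU pinned at 100% in SYS, a kernel profile narrows down which code path the export is spending its time in; a sketch using lockstat's profiling mode (sampling length and output size are arbitrary choices):

  # sample kernel call sites for 30 seconds and print the 20 hottest ones
  lockstat -kIW -D 20 sleep 30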
2008 Feb 15
38
Performance with Sun StorageTek 2540
Under Solaris 10 on a 4-core Sun Ultra 40 with 20GB RAM, I am setting up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives, connected via load-shared 4Gbit FC links. This week I have tried many different configurations, using firmware-managed RAID, ZFS-managed RAID, and with the controller cache enabled or disabled. My objective is to obtain the best single-file write performance.
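For ranking configurations on single-file write performance, a large sequential write plus a sync is a simple, repeatable yardstick; a sketch with placeholder path and size (and /dev/zero is only a fair input if compression is off on the target dataset):

  # write an 8 GiB file sequentially, timing until the data has been flushed
  time sh -c 'dd if=/dev/zero of=/tank/bench/testfile bs=1024k count=8192 && sync'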
2006 Nov 03
27
# devices in raidz.
For s10u2, the documentation recommends 3 to 9 devices in raidz. What is the basis for this recommendation? I assume it is performance and not failure resilience, but I am just guessing... [I know, the recommendation was intended for people who know their RAID cold, so it needed no further explanation] thanks... oz -- ozan s. yigit | oz at somanetworks.com | 416 977 1414 x 1540 I have a hard time
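In practice the 3-to-9-device guidance is honored by building several narrower raidz vdevs in one pool rather than a single wide one; a hypothetical layout for twelve disks (device names are placeholders):

  # two 6-disk raidz1 vdevs striped together, instead of one 12-wide raidz1
  zpool create tank \
      raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
      raidz c0t6d0 c0t7d0 c1t0d0 c1t1d0 c1t2d0 c1t3d0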
2010 Apr 07
53
ZFS RaidZ recommendation
I have been searching this forum and just about every ZFS document I can find, trying to find the answer to my questions. But I believe the answer I am looking for is not going to be documented and is probably best learned from experience. This is my first time playing around with OpenSolaris and ZFS. I am in the midst of replacing my home-based file server. This server hosts all of my media
2008 Oct 11
5
questions about replacing a raidz2 vdev disk with a larger one
I'd like to replace/upgrade two 500GB disks in a raidz2 vdev with 1TB disks, but I have some preliminary questions/concerns before trying 'zpool replace dpool ?' Will ZFS permit this replacement? Will ZFS use the extra space in a heterogeneous raidz2 vdev, or is the size limited by the smallest disk in the vdev? Thanks in advance, Vizzini The system is currently running
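The replacement itself is a zpool operation, and the vdev only grows once every member has been upgraded, since raidz2 capacity is bounded by its smallest disk; a hedged sketch of the sequence, using the pool name from the post and placeholder device names:

  # swap one 500GB member for a 1TB disk, then let the resilver finish before touching the next
  zpool replace dpool c1t2d0 c1t6d0
  zpool status dpool
  # on builds that have the autoexpand property, the pool grows on its own after the last
  # replacement; on older builds an export/import (or reboot) picks up the new size
  zpool set autoexpand=on dpool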
2010 Mar 11
1
zpool iostat / how to tell if you're IOPS bound
What is the best way to tell if you're bound by the number of individual operations per second / random I/O? "zpool iostat" has an "operations" column, but this doesn't really tell me if my disks are saturated. Traditional "iostat" doesn't seem to be the greatest place to look when utilizing ZFS. Thanks, Chris
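One reasonable way to answer this from standard tools (a sketch, not the only approach): watch per-disk queue depth and service time while the workload runs; high %b and long asvc_t combined with small average I/O sizes point at an operations-per-second limit rather than a bandwidth one.

  # -x extended stats, -n logical names, -z hide idle devices, 5-second samples;
  # actv is the queue depth, asvc_t the average service time, %b the busy percentage
  iostat -xnz 5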
2009 Jul 10
5
Slow Resilvering Performance
I know this topic has been discussed many times... but what the hell makes zpool resilvering so slow? I'm running OpenSolaris 2009.06. I have had a large number of problematic disks due to a bad production batch, leading me to resilver quite a few times, progressively replacing each disk as it dies (and now preemptively removing disks). My complaint is that resilvering ends up
2010 Oct 17
10
RaidzN blocksize ... or blocksize in general ... and resilver
The default blocksize is 128K. If you are using mirrors, then each block on disk will be 128K whenever possible. But if you're using raidzN with a capacity of M disks (M disks of useful capacity + N disks of redundancy), then the block size on each individual disk will be 128K / M. Right? This is one of the reasons the raidzN resilver code is inefficient. Since you end up waiting for the
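As a concrete instance of that arithmetic (a sketch of the reasoning above, ignoring sector-size rounding and assuming the default 128K recordsize): a raidz2 vdev of six disks has M = 4 data disks, so each disk holds about 128K / 4 = 32K of any given 128K block.

  # the same division in shell arithmetic: bytes of one 128 KiB record per data disk
  echo $((128 * 1024 / 4))    # 32768, i.e. 32 KiB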
2010 Sep 29
10
Resilver making the system unresponsive
This must be resilver day :) I just had a drive failure. The hot spare kicked in, and access to the pool over NFS was effectively zero for about 45 minutes. Currently the pool is still resilvering, but for some reason I can access the file system now. Resilver speed has been beaten to death, I know, but is there a way to avoid this? For example, is more enterprisy hardware less susceptible to