search for: svc_t

Displaying 12 results from an estimated 12 matches for "svc_t".

2006 Jul 01
1
The ZFS Read / Write roundabout
...at my IO rate also seesawed up and down between 31MB/s and 28MB/s, over a 5 second interval... I was not expecting that... Thoughts? Thanks! :) Nathan. Here is the iostat example -
                    extended device statistics
device  r/s  w/s    kr/s  kw/s     wait  actv  svc_t  %w   %b
cmdk0   0.0  201.5  0.0   23908.7  33.0  2.0   173.5  100  100
nfs2    0.0  0.0    0.0   0.0      0.0   0.0   0.0    0    0
                    extended device statistics
device  r/s  w/s    kr/s  kw/s     wait  actv  svc_t  %w   %b
cmdk0   0.0  200.0  0.0   24...
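For context, extended device statistics like the excerpt above come from iostat's extended mode; a minimal sketch of collecting them at the 5-second interval the poster mentions (the flags are standard Solaris iostat usage, not taken from the original message):
    # report extended device statistics (r/s, w/s, kr/s, kw/s, wait, actv, svc_t, %w, %b)
    # every 5 seconds; add -z to skip idle devices, -n for logical device names
    iostat -x 5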
2010 May 18
25
Very serious performance degradation
Hi, I'm running OpenSolaris 2009.06, and I'm facing a serious performance loss with ZFS! It's a raidz1 pool, made of 4 x 1TB SATA disks:
        zfs_raid      ONLINE  0  0  0
          raidz1      ONLINE  0  0  0
            c7t2d0    ONLINE  0  0  0
            c7t3d0    ONLINE  0  0  0
            c7t4d0    ONLINE  0  0
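For readers unfamiliar with the layout, a pool like the one described would typically have been created along these lines; a minimal sketch, with the fourth disk name (c7t5d0) assumed since the excerpt is truncated:
    # create a single-parity raidz pool named zfs_raid from four SATA disks
    # (c7t5d0 is a placeholder; the excerpt only shows three of the four devices)
    zpool create zfs_raid raidz1 c7t2d0 c7t3d0 c7t4d0 c7t5d0
    # verify layout and health
    zpool status zfs_raid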
2007 Nov 17
11
slog tests on read throughput exhaustion (NFS)
...t on devices sd15 and sd16 are never answered. I tried this with both no-cache-flush enabled and off, with negligible difference. Is there any way to force a better balance of reads/writes during heavy writes?
                    extended device statistics
device  r/s  w/s  kr/s  kw/s  wait  actv  svc_t  %w  %b
fd0     0.0  0.0  0.0   0.0   0.0   0.0   0.0    0   0
sd0     0.0  0.0  0.0   0.0   0.0   0.0   0.0    0   0
sd1     0.0  0.0  0.0   0.0   0.0   0.0   0.0    0   0
sd2     0.0  0.0  0.0   0.0   0.0   0.0   0.0    0   0
sd3     0.0  0.0  0.0   0.0   0.0   0.0   0.0...
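The "no-cache-flush" setting the poster refers to is commonly toggled via the zfs_nocacheflush tunable; a minimal sketch of both ways of changing it (this is an assumption about which knob was meant, not the poster's exact procedure):
    # persistent: add this line to /etc/system and reboot
    #   set zfs:zfs_nocacheflush = 1
    # live change on a running kernel (takes effect immediately; use with care)
    echo "zfs_nocacheflush/W0t1" | mdb -kw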
2010 Feb 12
13
SSD and ZFS
Hi all, just after sending a message to sunmanagers I realized that my question should rather have gone here. So sunmanagers, please excuse the double post: I have inherited an X4140 (8 SAS slots) and have just set up the system with Solaris 10 09. I first set up the system on a mirrored pool over the first two disks:
  pool: rpool
 state: ONLINE
 scrub: none requested
config:
        NAME
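A mirrored root pool like the one shown is usually either chosen at install time or built afterwards by attaching the second disk; a minimal sketch, with disk and slice names assumed for illustration:
    # attach the second disk's slice to the existing root pool device to form a mirror
    # (c1t0d0s0 / c1t1d0s0 are placeholder names)
    zpool attach rpool c1t0d0s0 c1t1d0s0
    # on x86 Solaris 10, also install GRUB on the new disk so it is bootable
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0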
2010 Jan 12
3
set zfs:zfs_vdev_max_pending
We have a zpool made of 4 x 512 GB iSCSI LUNs located on a network appliance. We are seeing poor read performance from the ZFS pool. The release of Solaris we are using is Solaris 10 10/09 s10s_u8wos_08a SPARC, and the server itself is a T2000. I was wondering how we can tell if the zfs_vdev_max_pending setting is impeding read performance of the pool? (The pool consists of lots of small files.)
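A common way to inspect and adjust this tunable is through mdb on a live system, or /etc/system for a persistent change; a minimal sketch (the value 10 is only an example, not a recommendation from the thread):
    # read the current per-vdev queue depth from the running kernel
    echo "zfs_vdev_max_pending/D" | mdb -k
    # lower it on the fly, e.g. to 10 outstanding I/Os per vdev
    echo "zfs_vdev_max_pending/W0t10" | mdb -kw
    # persistent alternative: add to /etc/system and reboot
    #   set zfs:zfs_vdev_max_pending = 10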
2008 Jan 10
2
NCQ
fun example that shows NCQ lowers wait and %w, but doesn't have much impact on final speed. [scrubbing, devs reordered for clarity]
                    extended device statistics
device  r/s    w/s  kr/s     kw/s  wait  actv  svc_t  %w  %b
sd2     454.7  0.0  47168.0  0.0   0.0   5.7   12.6   0   74
sd4     440.7  0.0  45825.9  0.0   0.0   5.5   12.4   0   78
sd6     445.7  0.0  46239.2  0.0   0.0   6.6   14.7   0   79
sd7     452.7  0.0  46850.7  0.0   0.0   6.0   13.3   0   79
sd8     460.7  0.0  46947.7  0.0   0.0   5.5...
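For reproducing a with/without-NCQ comparison like this, the SATA framework's queue depth can be capped; a sketch assuming the sata_max_queue_depth tunable is the knob being toggled (the excerpt itself does not say which setting was used):
    # cap the SATA command queue at 1 outstanding command (effectively no NCQ);
    # add to /etc/system and reboot, then repeat the measurement
    #   set sata:sata_max_queue_depth = 1
    zpool scrub tank          # 'tank' is a placeholder pool name
    iostat -x 5               # watch wait, actv, svc_t, %w, %b while scrubbing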
2009 Jan 13
12
OpenSolaris better Than Solaris10u6 with requards to ARECA Raid Card
Under Solaris 10 u6, no matter how I configured my ARECA 1261ML RAID card I got errors on all drives resulting from SCSI timeouts.
yoda:~ # tail -f /var/adm/messages
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Requested Block: 239683776 Error Block: 239683776
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Vendor: Seagate
2009 Apr 15
5
StorageTek 2540 performance radically changed
...0 When using the 64KB record length, the service times are terrible. At first I thought that my drive array must be broken but now it seems like a change in the ZFS caching behavior (i.e. caching gone!):
                    extended device statistics
device  r/s  w/s    kr/s  kw/s     wait  actv  svc_t  %w  %b
sd0     0.0  0.0    0.0   0.0      0.0   0.0   0.0    0   0
sd1     0.0  0.0    0.0   0.0      0.0   0.0   0.0    0   0
sd2     1.3  0.3    6.8   2.0      0.0   0.0   1.7    0   0
sd10    0.0  99.3   0.0   12698.3  0.0   32.2  324.5  0   97
sd11    0.3  105.9  38.4  12753.3  0.0   31.8  299...
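The "64KB record length" refers to the benchmark's I/O size; on the ZFS side the corresponding dataset property is recordsize. A minimal sketch of inspecting and matching it (the dataset name is assumed):
    # show the current record size for a dataset ('tank/data' is a placeholder)
    zfs get recordsize tank/data
    # match the application's 64 KB I/O size, if that is what the workload does
    zfs set recordsize=64k tank/data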
2009 Jul 07
0
[perf-discuss] help diagnosing system hang
...7.97M X25E 59.8M 29.7G 0 588 0 10.3M Still a lot of the same errors on the console though (more often actually...) Output from iostat -zx 10 if it is of interest:
                    extended device statistics
device  r/s  w/s     kr/s  kw/s     wait  actv  svc_t  %w  %b
sd32    0.0  718.7   0.0   1437.4   0.0   0.0   0.0    0   3
sd32    0.0  401.1   0.0   6089.1   0.0   0.8   2.1    0   43
sd32    0.0  1187.5  0.0   12341.7  0.0   0.7   0.6    2   37
sd32    0.0  758.2   0.0   14835.1  0.0   1.7   2.3    4   66
sd32    0.0  403.1   0.0   4606.8   0.0   1.5...
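The leading fragment ("X25E 59.8M 29.7G 0 588 0 10.3M") looks like per-device zpool iostat output for the X25-E device; a minimal sketch of collecting both views side by side (the pool name is assumed):
    # per-vdev ZFS-level statistics every 10 seconds ('tank' is a placeholder)
    zpool iostat -v tank 10
    # device-level statistics, skipping idle devices, as the poster ran
    iostat -zx 10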
2010 Jan 19
5
ZFS/NFS/LDOM performance issues
[Cross-posting to ldoms-discuss] We are occasionally seeing massive time-to-completions for I/O requests on ZFS file systems on a Sun T5220 attached to a Sun StorageTek 2540 and a Sun J4200, and using an SSD drive as a ZIL device. Primary access to this system is via NFS, and with NFS COMMITs blocking until the request has been sent to disk, performance has been deplorable. The NFS server is a
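Using an SSD as a dedicated ZIL device is done by adding it to the pool as a log vdev; a minimal sketch, with pool and device names assumed:
    # add an SSD as a separate intent-log (slog) device
    # ('tank' and 'c2t5d0' are placeholders for the pool and the SSD)
    zpool add tank log c2t5d0
    # confirm the log vdev shows up
    zpool status tank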
2008 Nov 12
5
System deadlock when using mksnap_ffs
I've been playing around with snapshots lately but I've got a problem on one of my servers running 7-STABLE amd64:
FreeBSD paladin 7.1-PRERELEASE FreeBSD 7.1-PRERELEASE #8: Mon Nov 10 20:49:51 GMT 2008 tdb@paladin:/usr/obj/usr/src/sys/PALADIN amd64
I run the mksnap_ffs command to take the snapshot and some time later the system completely freezes up:
paladin# cd /u2/.snap/
paladin#
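For readers unfamiliar with the tool: on FreeBSD of that era mksnap_ffs takes the mount point and the snapshot file to create; a minimal sketch (the snapshot name is assumed, since the truncated transcript above does not show the exact invocation):
    # create a UFS2 snapshot of /u2 inside its .snap directory
    # ('snap-20081112' is a placeholder name)
    mksnap_ffs /u2 /u2/.snap/snap-20081112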
2007 Jan 11
4
Help understanding some benchmark results
G'day, all. So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern. However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now