search for: wsvc_t

Displaying 20 results from an estimated 26 matches for "wsvc_t".

2009 Dec 16
27
zfs hanging during reads
...rying to read from the pool. When the pool is in it's very slow/hung state, you can't do anything with the devices. Here's my devices:
tim at opensolaris:~$ iostat -xnz
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    1.5    2.7   98.1   29.4  0.1  0.0   18.6    8.3   1   2 c0d0
    3.9    0.6  220.9   57.9  0.0  0.0    5.6    1.2   0   0 c6d1
    8.1    0.0  456.8    0.0  0.0  0.0    0.4    0.5   0   0 c3t0d0
   12.8    0.0  717.7    0.0  5.5  0.2  433.1   14.6  18  19 c3t1d0
    4.9...
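Reading output like the above mechanically comes down to scanning one column. A hedged sketch of pulling out the slow device (the sample lines are copied from the snippet; the 100 ms threshold is an arbitrary illustration, not a tuning rule):

```shell
# wsvc_t (column 7 of iostat -xnz) is the average time a request spends
# queued; flag any device where it exceeds an illustrative 100 ms:
printf '%s\n' \
  " 1.5  2.7  98.1 29.4 0.1 0.0  18.6  8.3  1  2 c0d0" \
  "12.8  0.0 717.7  0.0 5.5 0.2 433.1 14.6 18 19 c3t1d0" |
awk '$7 + 0 > 100 { print $11, "wsvc_t =", $7, "ms" }'
```

On the snippet's data this singles out c3t1d0, the same device the wait and %b columns point at.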
2007 May 14
37
Lots of overhead with ZFS - what am I doing wrong?
I was simply trying to test the bandwidth that Solaris/ZFS (Nevada b63) can deliver from a drive. Doing dd if=(raw disk) of=/dev/null gives me around 80MB/s, while dd if=(file on ZFS) of=/dev/null gives me only 35MB/s!? I get basically the same result whether it is a single ZFS drive, a mirror, or a stripe (I am testing with two Seagate 7200.10 320G drives hanging off the same interface
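The comparison in this post is just two sequential reads of the same size. A minimal sketch of the method, with placeholder paths (the device and file names in the comments are assumptions, not the poster's actual paths):

```shell
# Raw-device read, bypassing the filesystem (placeholder path):
#   dd if=/dev/rdsk/c0t0d0s2 of=/dev/null bs=1024k count=1024
# Same-sized read through ZFS (placeholder path):
#   dd if=/tank/bigfile of=/dev/null bs=1024k count=1024
# Runnable demo with a scratch file (this measures page-cache speed,
# not a real disk -- it only shows the shape of the test):
dd if=/dev/zero of=/tmp/bw_test bs=1024k count=16 2>/dev/null
dd if=/tmp/bw_test of=/dev/null bs=1024k 2>&1 | tail -1   # throughput line
rm -f /tmp/bw_test
```

Using a large bs and a file bigger than RAM matters for the real test; otherwise the second dd partly measures the ARC rather than the disk.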
2008 Dec 17
12
disk utilization is over 200%
...of 0+1 = OS disk = 72 GB = d0, 2+3 = apps data disk = 146 GB = d2, SVM soft partition with one UFS file system is active at that time, iostat showed strange output:
 cpu
 us sy wt id
 13  9  0 78
                    extended device statistics
    r/s    w/s   kr/s   kw/s  wait   actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0   0.0  335.0    0.0    0.0   0 100 md/d30
    0.0    0.0    0.0    0.0   0.0  335.0    0.0    0.0   0 100 md/d40
    0.0    0.0    0.0    0.0   0.0  335.0    0.0    0.0   0 100 md/d52
    0.0    0.0    0.0    0.0 334.0    1.0    0.0    0.0 100 100 c3t2d0...
2008 Oct 08
1
Troubleshooting ZFS performance with SIL3124 cards
...om time. If you look at the output from iostat -cxn 1 below, you find that the first sample is okay, but in the second the disks are at 100 %w... and they stay at 100 %w for a few seconds.
 us sy wt id
  0 34  0 66
                    extended device statistics
    r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0     0.0  0.0  0.0    0.0    0.0   0   0 c3t0d0
    0.0    0.0    0.0     0.0  0.0  0.0    0.0    0.0   0   0 c4t0d0
    0.0    0.0    0.0     0.0  0.0  0.0    0.0    0.0   0   0 c0t0d0
    0.0  400.9    0.0 49560.1 14.1  0.5   35.2    1.2  47  48 c5t0d0...
2011 Jun 01
11
SATA disk perf question
I figure this group will know better than any other I have contact with: is 700-800 I/Ops reasonable for a 7200 RPM SATA drive (1 TB Sun badged Seagate ST31000N in a J4400)? I have a resilver running and am seeing about 700-800 writes/sec. on the hot spare as it resilvers. There is no other I/O activity on this box, as this is a remote replication target for production data. I have a the
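For context on whether 700-800 writes/s is plausible, here is a back-of-envelope calculation using typical (assumed) 7200 RPM figures, not measurements from this particular drive:

```shell
# Rough random-I/O ceiling for a 7200 RPM disk. The 8.5 ms average seek
# is an assumed typical value; rotational latency follows from the RPM.
rpm=7200
awk -v rpm="$rpm" 'BEGIN {
    rot  = 1000 / (rpm / 60) / 2   # avg rotational latency, ms (~4.17)
    seek = 8.5                     # assumed average seek time, ms
    printf "random IOPS ceiling: ~%d\n", 1000 / (seek + rot)
}'
```

With roughly 80-100 random IOPS as the classic ceiling for such a drive, sustaining 700-800 writes/s suggests the resilver I/O is largely sequential or aggregated rather than random.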
2007 Feb 13
2
zpool export consumes whole CPU and takes more than 30 minutes to complete
...c2t42d0 ONLINE 0 0 0
 c2t42d1 ONLINE 0 0 0
 c2t42d2 ONLINE 0 0 0
errors: No known data errors
bash-3.00# iostat -xnz 1
[...]
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0   16.2    0.0   35.4  0.0  0.1    0.0    3.7   0   2 c2t42d2
    0.0   31.3    0.0  634.3  0.0  0.1    0.0    4.1   0   3 c2t42d1
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0...
2009 Apr 12
7
Any news on ZFS bug 6535172?
We're running a Cyrus IMAP server on a T2000 under Solaris 10 with about 1 TB of mailboxes on ZFS filesystems. Recently, when under load, we've had incidents where IMAP operations became very slow. The general symptoms are that the number of imapd, pop3d, and lmtpd processes increases, the CPU load average increases, but the ZFS I/O bandwidth decreases. At the same time, ZFS
2006 Jul 30
6
zfs mount stuck in zil_replay
...a10069fad8, 0, 0, ff38e40c, 100)
000002a10069f221 syscall_ap+0x44(2a0, ffbfeca0, 1118bf4, 600013fbc40, 15, 0)
000002a10069f2e1 syscall_trap32+0xcc(52b3c8, ffbfeca0, 100, ff38e40c, 0, 0)
> # iostat -xnz 1
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    3.0  513.0  192.0 2822.1  0.0  2.1    0.0    4.0   0  95 c4t600C0FF00000000009258F3E4C4C5601d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    7.0  598.1  388.6 1832.9  0.0  2.0    0.0    3.4   0  93 c4...
2012 Jul 18
4
asterisk 1.8 on Solaris/sparc
I've got the latest asterisk 1.8 running on a Netra X1 with Solaris 10 u10. The system itself is happy and phone calls (between two parties) seem fine. Unfortunately, when a caller listens to a Playback recording, there seems to be moments of stutter - perhaps 1 second of stutter for every 10 seconds of Playback. The stutter is not consistent at the same point of the playback file. To
2007 Dec 12
0
Degraded zpool won't online disk device, instead resilvers spare
...t=1000000
^Z
[1]+ Stopped  dd if=/dev/rdsk/c5t29d0 of=tst bs=1024 count=1000000
ROOT $ bg
[1]+ dd if=/dev/rdsk/c5t29d0 of=tst bs=1024 count=1000000 &
ROOT $ iostat -xn c5t29d0 1
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    3.8    0.1    4.1    0.0  0.0  0.0    0.0    0.2   0   0 c5t29d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
 5018.1    2.0 5031.1    0.0  0.0  0.8    0.0    0.2   5  80 c5t29d0...
2009 Dec 28
0
[storage-discuss] high read iops - more memory for arc?
...c configured on a 32GB Intel X25-E ssd and slog on another 32GB X25-E ssd.
>
> According to our tester, Oracle writes are extremely slow (high latency).
>
> Below is a snippet of iostat:
>
>     r/s    w/s   Mr/s   Mw/s wait  actv wsvc_t asvc_t  %w   %b device
>     0.0    0.0    0.0    0.0  0.0   0.0    0.0    0.0   0    0 c0
>     0.0    0.0    0.0    0.0  0.0   0.0    0.0    0.0   0    0 c0t0d0
>  4898.3   34.2   23.2    1.4  0.1 385.3    0.0   78.1   0 1246 c1
>...
2005 Aug 29
14
Oracle 9.2.0.6 on Solaris 10
How can I tell if this is normal behaviour? Oracle imports are horribly slow, an order of magnitude slower than on the same hardware with a slower disk array and Solaris 9. What can I look for to see where the problem lies? The server is 99% idle right now, with one database running. Each sample is about 5 seconds. I've tried setting kernel parameters despite the docs saying that
2009 Dec 24
1
high read iops - more memory for arc?
...is fluctuating between 200MB -> 450MB out of 16GB total. We have the l2arc configured on a 32GB Intel X25-E ssd and slog on another 32GB X25-E ssd. According to our tester, Oracle writes are extremely slow (high latency). Below is a snippet of iostat:
    r/s    w/s   Mr/s   Mw/s wait  actv wsvc_t asvc_t  %w   %b device
    0.0    0.0    0.0    0.0  0.0   0.0    0.0    0.0   0    0 c0
    0.0    0.0    0.0    0.0  0.0   0.0    0.0    0.0   0    0 c0t0d0
 4898.3   34.2   23.2    1.4  0.1 385.3    0.0   78.1   0 1246 c1
    0.0    0.8    0.0    0.0  0.0   0.0    0.0   16.0   0    1 c1t0d0
  401.7...
2007 Apr 11
0
raidz2 another resilver problem
...61G 19.3T 592 321 9.43M 460K
thumper-9  761G 19.3T 421 196 8.25M 254K
thumper-9  761G 19.3T 550 201 22.6M 290K
^C
bash-3.00#
But:
bash-3.00# iostat -xnz 1
[removed first output]
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   25.1    0.0  904.7    0.0  0.0  0.1    0.8    5.9   2  15 c4t0d0
   20.1    0.0  711.0    0.0  0.0  0.1    0.0    5.9   0  12 c6t0d0
   23.1    0.0  776.3    0.0  0.0  0.1    1.5    6.2   3  14 c1t0d0
   22.1    0.0  585.6    0.0  0.0  0.1    0.3    6.1   1  14 c6t1d0
   20...
2007 Jul 31
0
controller number mismatch
Hi, I just noticed something interesting ... don't know whether it's relevant or not (two commands run in succession during a 'nightly' run):
$ iostat -xnz 6
[...]
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.3    0.0    0.8  0.0  0.0    0.2    0.2   0   0 c2t0d0
    2.2   79.8  128.7  216.1  0.0  0.1    0.1    1.4   1   6 c1t0d0
    2.0   76.8  118.1  208.6  0.0  0.1    0.1    1.2   1   4 c1t1d0
    1.7   79.0  106.7  216.1  0.0  0.1    0.1    1.1   1   5 c1t2d0...
2006 May 23
1
iostat numbers for ZFS disks, build 39
I updated an i386 system to b39 yesterday, and noticed this when running iostat:
    r/s    w/s   kr/s        kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.5    0.0        10.0  0.0  0.0    0.0    0.5   0   0 c0t0d0
    0.0    0.5    0.0        10.0  0.0  0.0    0.0    0.6   0   0 c0t1d0
    0.0   65.1    0.0 119640001.5  0.0  0.0    0.0    0.3   0   2 c0t2d0
    0.0   65.1    0.0 119640090.2  0.0  0.0    0.0    0.3   0   2 c0...
2005 Nov 20
2
ZFS & small files
...60 892660 176532 0 0 0 0 0 0 0 34 0 0 0 411 122107  985  7 93  0
 0 0 60 886412 170284 2 3 0 0 0 0 0 42 0 0 0 417 111849 1077  7 85  8
 0 0 60 883332 167040 0 0 0 0 0 0 0 15 0 0 0 391    475  354  0  1 99
output from iostat -xcn 5:
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    9.6    0.0   70.9  0.0  0.0    0.0    1.5   0   1 c1t0d0
    0.0   38.4    0.0  559.8  0.0  0.6    0.0   15.3   0   4 c1t0d0
    0.4   41.8    3.2  646.5  0.0  0.5    0.0   11.9   0   4 c1t0d0
    0.0   30.4    0.0  551.9  0.0  0.4    0.0   12.5   0   3 c1t0d0
    0...
2008 Feb 05
31
ZFS Performance Issue
....70M 7.25M
pool1  5.69G  266G   98  267 5.73M 7.32M
pool1  5.69G  266G   92  253 5.76M 7.31M
pool1  5.69G  266G   90  254 5.67M 7.43M
and here is regular iostat:
# iostat -xnz 5
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.2    0.0    0.1  0.0  0.0    0.0    0.3   0   0 c0t0d0
    0.0    0.2    0.0    0.1  0.0  0.0    0.0    0.3   0   0 c0t1d0
   20.4  145.0 1315.8 3714.5  0.0  2.8    0.0   16.8   0  21 c0t2d0
   21.4  143.2 1380.2 3711.3  0.0  4.1    0.0   25.1   0  27 c0t3d0
   23...
2007 Nov 13
4
sd_max_throttle
Hello, we are using a hardware array and its vendor recommends the following setting in /etc/system:
set sd:sd_max_throttle = <value>
or
set ssd:ssd_max_throttle = <value>
Is it possible to monitor *somehow* whether the variable becomes a bottleneck? Or how its value influences I/O traffic? Regards, przemol
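One rough way to watch for this (an assumption offered here, not advice quoted from the thread): if a device's actv column in iostat -xnz sits pinned at the throttle value while wait stays non-zero, the throttle is likely capping the outstanding-command queue. A sketch, demonstrated on a synthetic line (the throttle value 256 and the device's numbers are made up for illustration):

```shell
# Live form on Solaris would be:
#   iostat -xnz 5 | awk -v t=256 '$6 + 0 >= t && $5 + 0 > 0 { print }'
# Demo on one synthetic iostat line (wait is column 5, actv column 6):
echo "0.0 400.9 0.0 49560.1 14.1 256.0 35.2 1.2 47 48 c5t0d0" |
awk -v t=256 '$6 + 0 >= t && $5 + 0 > 0 {
    print $11 ": actv pinned at throttle (" $6 "), wait=" $5
}'
```

If actv never approaches the configured value, the throttle is almost certainly not the limiting factor.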
2010 Feb 12
13
SSD and ZFS
Hi all, just after sending a message to sunmanagers I realized that my question should rather have gone here, so sunmanagers please excuse the double post: I have inherited an X4140 (8 SAS slots) and have just set up the system with Solaris 10 09. I first set up the system on a mirrored pool over the first two disks
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME