2007 Aug 02
3
ZFS, ZIL, vq_max_pending and OSCON
The slides from my ZFS presentation at OSCON (as well as some
additional information) are available at http://www.meangrape.com/
2007/08/oscon-zfs/
Jay Edwards
jay at meangrape.com
http://www.meangrape.com
2006 Sep 07
5
Performance problem of ZFS ( Sol 10U2 )
...blem with ZFS preferring reads to writes.
I also see in 'zpool iostat -v 1' that writes are issued to disk only once every 10 seconds, and then at about 2000 req/s for one second.
Reads are sustained at approx. 800 req/s.
Is there a way to tune this read/write ratio? Is this a known problem?
I tried changing vq_max_pending as suggested by Eric in http://blogs.sun.com/erickustarz/entry/vq_max_pending
but saw no change in this write behaviour.
iostat shows approx. 20-30 ms asvc_t, 0 %w, and approx. 30% busy on all drives, so they do not appear to be saturated. (Before, with UFS, they were at 90% busy, 1% wait.)
System is Sol 10 U2, sun x4200...
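For reference, the vq_max_pending tunable discussed in Eric's blog post was adjusted at the time with mdb; on later builds the same queue depth was exposed as the kernel global zfs_vdev_max_pending. The sketch below assumes that global symbol exists on your build; the symbol name and its default changed across releases, so verify it on your system before writing to the live kernel.

```shell
# Sketch only: inspecting and lowering the per-vdev I/O queue depth on
# an old OpenSolaris/Solaris 10 ZFS build. The symbol name
# zfs_vdev_max_pending is an assumption here; on the earliest builds the
# value lived in the per-vdev vq_max_pending field instead.

# Inspect the current queue depth (decimal):
echo "zfs_vdev_max_pending/D" | mdb -k

# Lower it to 10 outstanding I/Os per vdev (takes effect immediately):
echo "zfs_vdev_max_pending/W0t10" | mdb -kw

# To persist across reboots, the equivalent /etc/system line would be:
#   set zfs:zfs_vdev_max_pending = 10
```

Lowering the queue depth reduces the burst of queued writes each device sees at txg sync time, which is why it was suggested for read-starvation symptoms like the ones above.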
2006 Nov 28
7
Convert Zpool RAID Types
Hello,
Is it possible to non-destructively change RAID types in zpool while
the data remains on-line?
-J
2007 Sep 04
23
I/O freeze after a disk failure
Hi all,
Yesterday we had a drive failure on an FC-AL JBOD with 14 drives.
Suddenly the zpool using that JBOD stopped responding to I/O requests, and we got tons of the following messages in /var/adm/messages:
Sep 3 15:20:10 fb2 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk at g20000004cfd81b9f (sd52):
Sep 3 15:20:10 fb2 SCSI transport failed: reason 'timeout':
2006 Oct 16
11
Configuring a 3510 for ZFS
Hi folks,
A colleague and I are currently involved in a prototyping exercise
to evaluate ZFS against our current filesystem. We are looking at the
best way to arrange the disks in a 3510 storage array.
We have been testing with the 12 disks on the 3510 exported as "nraid"
logical devices. We then configured a single ZFS pool on top of this,
using two raid-z arrays. We are getting