Displaying 10 results from an estimated 10 matches for "zfs_vdev_max_pending".
2010 Jan 12
3
set zfs:zfs_vdev_max_pending
We have a zpool made of four 512 GB iSCSI LUNs located on a network appliance.
We are seeing poor read performance from the zfs pool.
The release of solaris we are using is:
Solaris 10 10/09 s10s_u8wos_08a SPARC
The server itself is a T2000
I was wondering how we can tell whether the zfs_vdev_max_pending setting is impeding read performance of the zfs pool. (The pool consists of lots of small files.)
And if it is impeding read performance, how do we go about finding a new value for this parameter?
Of course I may misunderstand this parameter entirely and would be quite happy for a proper explana...
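[One way to approach this question, as a sketch only (assuming a Solaris 10 system with root access to mdb; "tank" is a placeholder pool name): read the current value of the tunable, then watch per-device queue depth and service times while the small-file read workload runs.]

# print the current value of zfs_vdev_max_pending (decimal)
echo zfs_vdev_max_pending/D | mdb -k

# watch per-device active queue depth (actv) and service times under load
iostat -xn 5

# per-vdev view of the same workload
zpool iostat -v tank 5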
2009 Jan 13
12
OpenSolaris better Than Solaris10u6 with requards to ARECA Raid Card
Under Solaris 10 u6, no matter how I configured my ARECA 1261ML RAID card,
I got errors on all drives resulting from SCSI timeouts.
yoda:~ # tail -f /var/adm/messages
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]
Requested Block: 239683776 Error Block: 239683776
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Vendor: Seagate
2009 Sep 09
0
I/O performance problems on Dell 1850 w/ PERC 4e/Si RAID1 (Sol 10U6)
...g a large file to the system results in 50 to 60 MB worth of transfer, then a stall of five to ten seconds before the transfer continues.
I've seen others complain about ZFS performance on the MegaRAID controllers, and I've tried some of the suggestions in those threads... dialing down zfs_vdev_max_pending (currently 35, I've tried 15 and even 4) and turning off prefetch, all via mdb -kw... it doesn't seem to make a difference.
Any suggestions as to where I can try to figure out what's going on here?
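[For reference, the runtime changes described above would look something like this. A sketch only: the values are illustrative, and mdb -kw writes take effect immediately but do not survive a reboot.]

# drop the per-vdev queue depth from the default of 35 to 4 (decimal)
echo zfs_vdev_max_pending/W0t4 | mdb -kw

# disable file-level prefetch
echo zfs_prefetch_disable/W0t1 | mdb -kw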
2011 May 13
27
Extremely slow zpool scrub performance
Running a zpool scrub on our production pool is showing a scrub rate
of about 400K/s. (When this pool was first set up we saw rates in the
MB/s range during a scrub).
Both zpool iostat and an iostat -Xn show lots of idle disk times, no
above average service times, no abnormally high busy percentages.
Load on the box is 0.59.
8 x 3 GHz CPUs, 32 GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
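[A few non-invasive checks usually worth running while the scrub is in progress, as a sketch; "tank" is a placeholder pool name.]

# scrub progress, rate, and any errors found so far
zpool status -v tank

# per-vdev throughput during the scrub
zpool iostat -v tank 5

# per-disk utilization, queue depth and service times
iostat -xn 5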
2010 Feb 12
13
SSD and ZFS
Hi all,
just after sending a message to sunmanagers I realized that my question
should rather have gone here. So sunmanagers please excuse the double
post:
I have inherited an X4140 (8 SAS slots) and have just set up the system
with Solaris 10 09. I first set up the system on a mirrored pool over
the first two disks:
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME
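[If the question is about how SSDs are typically used alongside such a setup, the general shape would be something like the following. A sketch only: the pool name "data" and all device names are hypothetical, and whether a separate log or cache device helps depends entirely on the workload.]

# mirrored data pool on two of the remaining disks
zpool create data mirror c0t2d0 c0t3d0

# add one SSD as a separate intent log (slog) and another as L2ARC cache
zpool add data log c0t4d0
zpool add data cache c0t5d0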
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
...s led to the computer hanging within a minute
("vmstat 1" shows that free RAM plummets towards the zero
mark).
I've tried preparing the system tunables as well:
:; echo "aok/W 1" | mdb -kw
:; echo "zfs_recover/W 1" | mdb -kw
and sometimes adding:
:; echo zfs_vdev_max_pending/W0t5 | mdb -kw
:; echo zfs_resilver_delay/W0t0 | mdb -kw
:; echo zfs_resilver_min_time_ms/W0t20000 | mdb -kw
:; echo zfs_txg_synctime/W0t1 | mdb -kw
In this case I am not very hesitant to recreate the rpool
and reinstall the OS - it was mostly needed to serve the
separate data pool. However this...
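[The mdb writes above only change the running kernel; the persistent equivalents would go in /etc/system and take effect at the next boot. A sketch using the same illustrative values as the excerpt:]

set aok = 1
set zfs:zfs_recover = 1
set zfs:zfs_vdev_max_pending = 5
set zfs:zfs_resilver_delay = 0
set zfs:zfs_resilver_min_time_ms = 20000
set zfs:zfs_txg_synctime = 1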
2007 May 14
37
Lots of overhead with ZFS - what am I doing wrong?
I was simply trying to test the bandwidth that Solaris/ZFS (Nevada b63) can deliver from a drive, and doing this:
dd if=(raw disk) of=/dev/null gives me around 80 MB/s, while dd if=(file on ZFS) of=/dev/null gives me only 35 MB/s!? I am getting basically the same result whether it is a single ZFS drive, a mirror or a stripe (I am testing with two Seagate 7200.10 320G drives hanging off the same interface
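[For reproducibility, the two tests would look roughly like this; a sketch in which the raw device path, the file path, and the bs/count values are placeholders.]

# raw-device read, bypassing ZFS (the ~80 MB/s case)
dd if=/dev/rdsk/c0t1d0s0 of=/dev/null bs=1024k count=1000

# read of a large file on the ZFS pool (the ~35 MB/s case)
dd if=/tank/testfile of=/dev/null bs=1024k count=1000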
2012 May 30
11
Disk failure chokes all the disks attached to the failing disk HBA
Dear All,
This may not be the correct mailing list, but I'm having a ZFS issue
when a disk is failing.
The system is a supermicro motherboard X8DTH-6F in a 4U chassis
(SC847E1-R1400LPB) and an external SAS2 JBOD (SC847E16-RJBOD1).
It makes a system with a total of 4 backplanes (2x SAS + 2x SAS2), each
of them connected to a different HBA (2x LSI 3081E-R (1068 chip) + 2x
LSI
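[When one failing disk drags down everything behind the same HBA, the usual first step is to identify it from the error counters and fault telemetry. A sketch using standard Solaris commands; nothing here modifies the pool.]

# per-device soft/hard/transport error counters
iostat -En

# FMA fault reports and the underlying error telemetry
fmadm faulty
fmdump -eV | tail -100

# pool-level read/write/checksum error counters for unhealthy pools
zpool status -x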
2009 Oct 10
11
SSD over 10gbe not any faster than 10K SAS over GigE
GigE wasn't giving me the performance I had hoped for, so I sprang for some 10GbE cards. So what am I doing wrong?
My setup is a Dell 2950 without a raid controller, just a SAS6 card. The setup is as such:
mirror rpool (boot) SAS 10K
raidz SSD 467 GB on 3 Samsung 256 MLC SSD (220MB/s each)
to create the raidz I did a simple zpool create raidz SSD c1xxxxx c1xxxxxx c1xxxxx. I have
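[As an aside, the canonical form of that command puts the pool name before the vdev type; presumably the actual invocation did so. A sketch with placeholder device names:]

# three-disk raidz pool named "SSD"
zpool create SSD raidz c1t0d0 c1t1d0 c1t2d0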
2007 Nov 17
11
slog tests on read throughput exhaustion (NFS)
I have historically noticed that in ZFS, whenever there is a heavy
writer to a pool via NFS, the reads can be held back (basically paused).
An example is a RAID10 pool of 6 disks, where writing a directory of files,
including some large ones 100+MB in size, can cause other
clients over NFS to pause for seconds (5-30 or so). This is on B70 bits.
I've gotten used to this behavior over NFS, but
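[One way to see whether the synchronous NFS write load is what starves the readers is to watch the pool while reproducing the stall, then retest with a separate intent log device. A sketch; the pool name "tank" and the slog device name are hypothetical.]

# watch per-vdev activity while the heavy NFS write runs
zpool iostat -v tank 1

# add a dedicated slog device and repeat the test
zpool add tank log c3t0d0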