Charles Wright
2009-Jan-13 15:48 UTC
[zfs-discuss] OpenSolaris better than Solaris10u6 with regard to ARECA Raid Card
Under Solaris 10 u6, no matter how I configured my ARECA 1261ML RAID card, I got errors on all drives resulting from SCSI timeouts.

yoda:~ # tail -f /var/adm/messages
Jan  9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]   Requested Block: 239683776    Error Block: 239683776
Jan  9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]   Vendor: Seagate    Serial Number:
Jan  9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]   Sense Key: Not Ready
Jan  9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]   ASC: 0x4 (LUN is becoming ready), ASCQ: 0x1, FRU: 0x0
Jan  9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci10de,377@17/pci17d3,1261@0/sd@c,0 (sd14):
Jan  9 11:03:47 yoda.asc.edu    Error for Command: write(10)    Error Level: Retryable
Jan  9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]   Requested Block: 239683776    Error Block: 239683776
Jan  9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]   Vendor: Seagate    Serial Number:
Jan  9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]   Sense Key: Not Ready
Jan  9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]   ASC: 0x4 (LUN is becoming ready), ASCQ: 0x1, FRU: 0x0

ZFS eventually would degrade the drives due to the errors. I'm positive that there is nothing wrong with my hardware.

Here is the driver I used under Solaris 10 u6:
ftp://ftp.areca.com.tw/RaidCards/AP_Drivers/Solaris/DRIVER/1.20.00.16-80731/readme.txt

I got these errors whether I configured the drives as JBOD or as pass-through. I turned off NCQ and Tagged Queuing and still got errors.

yoda:~/bin # zpool status
  pool: backup
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: resilver completed after 1h7m with 0 errors on Fri Jan  9 09:57:46 2009
config:

        NAME         STATE     READ WRITE CKSUM
        backup       ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c2t2d0   ONLINE       0     5     0
            c2t3d0   ONLINE       0     1     0
            c2t4d0   ONLINE       0     1     0
            c2t5d0   ONLINE       0     2     0
            c2t6d0   ONLINE       0     2     0
            c2t7d0   ONLINE       0     2     0
            c2t8d0   ONLINE       0     3     0
          raidz1     ONLINE       0     0     0
            c2t9d0   ONLINE       0     2     0
            c2t10d0  ONLINE       0     2     0
            c2t11d0  ONLINE       0     3     0
            c2t12d0  ONLINE       0     3     0
            c2t13d0  ONLINE       0     3     0
            c2t14d0  ONLINE       0     2     0
            c2t15d0  ONLINE       0    51     0

errors: No known data errors

  pool: rpool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c2t0d0s0  ONLINE       0     5     0
            c2t1d0s0  ONLINE       3     2     0

errors: No known data errors

Under OpenSolaris, I don't get the SCSI timeout errors, but I do get error messages like this:

Jan 13 09:30:39 yoda last message repeated 5745 times
Jan 13 09:30:39 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many outstanding commands (255 > 256)
Jan 13 09:30:39 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many outstanding commands (256 > 256)
Jan 13 09:30:49 yoda last message repeated 2938 times
Jan 13 09:30:49 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many outstanding commands (254 > 256)
Jan 13 09:30:49 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many outstanding commands (256 > 256)
Jan 13 09:30:53 yoda last message repeated 231 times
Jan 13 09:30:53 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many outstanding commands (257 > 256)
Jan 13 09:30:53 yoda last message repeated 2 times
Jan 13 09:30:53 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many outstanding commands (256 > 256)
Jan 13 09:31:11 yoda last message repeated 1191 times
Jan 13 09:31:11 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many outstanding commands (255 > 256)
Jan 13 09:31:11 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many outstanding commands (256 > 256)

Fortunately, it looks like zpool status is not affected under OpenSolaris.

root@yoda:~/bin# zpool status
  pool: backup
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        backup       ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c4t2d0   ONLINE       0     0     0
            c4t3d0   ONLINE       0     0     0
            c4t4d0   ONLINE       0     0     0
            c4t5d0   ONLINE       0     0     0
            c4t6d0   ONLINE       0     0     0
            c4t7d0   ONLINE       0     0     0
            c4t8d0   ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c4t9d0   ONLINE       0     0     0
            c4t10d0  ONLINE       0     0     0
            c4t11d0  ONLINE       0     0     0
            c4t12d0  ONLINE       0     0     0
            c4t13d0  ONLINE       0     0     0
            c4t14d0  ONLINE       0     0     0
            c4t15d0  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c4t0d0s0  ONLINE       0     0     0
            c4t1d0s0  ONLINE       0     0     2

errors: No known data errors

This is the version of the ARECA driver that comes with OpenSolaris:

root@yoda:~/bin# pkginfo -l SUNWarcmsr
   PKGINST:  SUNWarcmsr
      NAME:  Areca SAS/SATA RAID driver
  CATEGORY:  system
      ARCH:  i386
   VERSION:  11.11,REV=2008.10.30.20.37
    VENDOR:  Sun Microsystems, Inc.
      DESC:  SAS/SATA RAID driver
   HOTLINE:  Please contact your local service provider
    STATUS:  completely installed

How do I find out who maintains that, and how it compares with the driver I downloaded from Areca directly?

Thanks
--
This message posted from opensolaris.org
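[For readers following the zpool status output above: the "action" text corresponds to commands of roughly this shape. This is only a sketch using the pool name shown above; clearing counters is only worth doing once the underlying hardware/driver cause is understood, since the counters will otherwise simply accumulate again.]

# clear the accumulated READ/WRITE/CKSUM counters on the pool
zpool clear backup
# then re-read all data to see whether the errors recur
zpool scrub backup

A subsequent zpool status then shows whether the scrub completed cleanly.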
t. johnson
2009-Jan-13 21:37 UTC
[zfs-discuss] OpenSolaris better than Solaris10u6 with regard to ARECA Raid Card
Just a hunch, but what kind of drives are you using? Many of the RAID card vendors report that "consumer class" drives are incompatible with their cards because the drives will spend much longer trying to recover from a failure than "enterprise class" drives will. This causes the card to think the drive is bad and report a failure.

I'm not exactly qualified to make that diagnosis, though, so take it with a big lump of salt. I have been somewhat curious to know how ZFS is able to deal with these problems that pop up with hardware RAID cards and consumer drives.
--
This message posted from opensolaris.org
Charles Wright
2009-Jan-13 21:54 UTC
[zfs-discuss] OpenSolaris better than Solaris10u6 with regard to ARECA Raid Card
Thanks for the reply. I've also had issues with consumer-class drives and other RAID cards. The drives I have here (all 16 of them) are Seagate Barracuda ES enterprise hard drives, model number ST3500630NS.

If the problem were with the drives, I would expect the same behavior in both Solaris and OpenSolaris.
--
This message posted from opensolaris.org
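[If it helps to confirm the drive model and firmware revision from the Solaris host itself rather than from the vendor, the per-device inquiry data that iostat reports is one place to look. A sketch; note that with the drives behind the Areca card, the strings reported may reflect the controller's pass-through presentation rather than the bare drive.]

# -E prints per-device error counters plus Vendor/Product/Revision inquiry data;
# -n uses descriptive (cXtYdZ) device names
iostat -En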
zfs user
2009-Jan-14 06:02 UTC
[zfs-discuss] OpenSolaris better than Solaris10u6 with regard to ARECA Raid Card
Charles Wright wrote:
> Under Solaris 10 u6, no matter how I configured my ARECA 1261ML RAID card,
> I got errors on all drives resulting from SCSI timeouts.
[snip litany of errors]

I had similar problems on an 1120 card with 2008.05. I upgraded to 2008.11 and the something*.16 Sun Areca driver/module that I found somewhere, and I also updated the Areca firmware to the latest versions from their site. Not sure which of those fixed my problems.

I believe the person at Sun who has had a key role with regard to the Areca drivers is James C. McPherson: http://blogs.sun.com/jmcp/
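[To compare the driver that is actually loaded against one downloaded from Areca, the revision string in the kernel module list is a quick check. A sketch; the exact format of the revision string varies between builds.]

# list loaded kernel modules and pick out the Areca SAS/SATA RAID driver line
modinfo | grep arcmsr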
Orvar Korvar
2009-Jan-14 11:17 UTC
[zfs-discuss] OpenSolaris better than Solaris10u6 with regard to ARECA Raid Card
I've read about some Areca bug(?) being fixed in SXCE b105?
--
This message posted from opensolaris.org
Johan Hartzenberg
2009-Jan-14 13:18 UTC
[zfs-discuss] OpenSolaris better than Solaris10u6 with regard to ARECA Raid Card
There is an update in build 105, but it pertains only to the RAID management tool.

Issues Resolved:
BUG/RFE: 6776690 <http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6776690>
    Areca raid management util doesn't work on solaris

Files Changed:
update: usr/src/uts/intel/io/scsi/adapters/arcmsr/arcmsr.c
    <http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/intel/io/scsi/adapters/arcmsr/arcmsr.c>
update: usr/src/uts/intel/io/scsi/adapters/arcmsr/arcmsr.h
    <http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/intel/io/scsi/adapters/arcmsr/arcmsr.h>

On Wed, Jan 14, 2009 at 1:17 PM, Orvar Korvar <knatte_fnatte_tjatte@yahoo.com> wrote:
> I've read about some Areca bug(?) being fixed in SXCE b105?

--
Any sufficiently advanced technology is indistinguishable from magic.
   Arthur C. Clarke

My blog: http://initialprogramload.blogspot.com
Charles Wright
2009-Jan-14 16:20 UTC
[zfs-discuss] OpenSolaris better than Solaris10u6 with regard to ARECA Raid Card
Thanks for the info. I'm running the latest firmware for my card: V1.46, with boot ROM version V1.45.

Could you tell me how you have your card configured? Are you using JBOD, RAID, or pass-through? What is your Max SATA mode set to? How many drives do you have attached? What is your ZFS config like?

Thanks.
--
This message posted from opensolaris.org
Charles Wright
2009-Jan-14 16:37 UTC
[zfs-discuss] OpenSolaris better than Solaris10u6 with regard to ARECA Raid Card
Here's an update:

I thought that the error message

    arcmsr0: too many outstanding commands

might be due to a SCSI queue being overrun. The areca driver has

    #define ARCMSR_MAX_OUTSTANDING_CMD 256
    <http://src.opensolaris.org/source/s?defs=ARCMSR_MAX_OUTSTANDING_CMD>

What might be happening is that each RAID set results in a new instance of the areca driver getting loaded, so perhaps the SCSI queue on the card is simply being overrun because each drive gets a queue depth of 256. As such, I tested with sd_max_throttle:16

(16 drives * 16 queued commands = 256)

I verified sd_max_throttle got set via:

root@yoda:~/solaris-install-stuff# echo "sd_max_throttle/D" | mdb -k
sd_max_throttle:
sd_max_throttle:                16

I see that if I run this script to create a bunch of small files, I can make a lot of drives jump to degraded in a hurry.

#!/bin/bash
dir=/backup/homebackup/junk
mkdir -p $dir
cd $dir
date
printf "Creating 10000 1k files in $dir \n"
i=10000
while [ $i -ge 0 ]
do
  j=`expr $i - 1`
  dd if=/dev/zero of=$i count=1 bs=1k &> /dev/null
  i=$j
done
date
i=10000
printf "Deleting 10000 1k files in $dir \n"
while [ $i -ge 0 ]
do
  j=`expr $i - 1`
  rm $i
  i=$j
done
date

Before running the script:

root@yoda:~# zpool status
  pool: backup
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        backup       ONLINE       0     0     0
          raidz1     ONLINE       0     0     2
            c4t2d0   ONLINE       0     0     0
            c4t3d0   ONLINE       0     0     0
            c4t4d0   ONLINE       0     0     0
            c4t5d0   ONLINE       0     0     0
            c4t6d0   ONLINE       0     0     0
            c4t7d0   ONLINE       0     0     0
            c4t8d0   ONLINE       0     0     0
          raidz1     ONLINE       0     0     2
            c4t9d0   ONLINE       0     0     0
            c4t10d0  ONLINE       0     0     0
            c4t11d0  ONLINE       0     0     0
            c4t12d0  ONLINE       0     0     0
            c4t13d0  ONLINE       0     0     0
            c4t14d0  ONLINE       0     0     0
            c4t15d0  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c4t0d0s0  ONLINE       0     0     0
            c4t1d0s0  ONLINE       0     0     0

errors: No known data errors

AFTER running the script:

root@yoda:~/solaris-install-stuff# zpool status -v
  pool: backup
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        backup       DEGRADED     0     0     5
          raidz1     DEGRADED     0     0    14
            c4t2d0   DEGRADED     0     0     0  too many errors
            c4t3d0   ONLINE       0     0     0
            c4t4d0   ONLINE       0     0     0
            c4t5d0   DEGRADED     0     0     0  too many errors
            c4t6d0   ONLINE       0     0     0
            c4t7d0   DEGRADED     0     0     0  too many errors
            c4t8d0   DEGRADED     0     0     0  too many errors
          raidz1     DEGRADED     0     0    12
            c4t9d0   DEGRADED     0     0     0  too many errors
            c4t10d0  DEGRADED     0     0     0  too many errors
            c4t11d0  DEGRADED     0     0     0  too many errors
            c4t12d0  DEGRADED     0     0     0  too many errors
            c4t13d0  DEGRADED     0     0     0  too many errors
            c4t14d0  DEGRADED     0     0     0  too many errors
            c4t15d0  DEGRADED     0     0     1  too many errors

errors: Permanent errors have been detected in the following files:

        backup/homebackup:<0x0>

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c4t0d0s0  ONLINE       0     0     0
            c4t1d0s0  ONLINE       0     0     0

errors: No known data errors

BTW, I called Seagate to check the drive firmware. They confirm that firmware version 3.AEK is the latest for the drives I have. This is the version running on all 16 of my drives.

I'm about out of ideas to try.
--
This message posted from opensolaris.org
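[For anyone reproducing the sd_max_throttle test above: the throttle is normally set through an /etc/system entry followed by a reboot. Charles did not show his exact stanza, but it would look roughly like this; the mdb check above then confirms the value the running kernel actually picked up.]

* limit the number of commands sd queues per target (16 drives x 16 = 256)
set sd:sd_max_throttle=16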
Richard Elling
2009-Jan-14 17:11 UTC
[zfs-discuss] OpenSolaris better than Solaris10u6 with regard to ARECA Raid Card
Charles Wright wrote:
> Here's an update:
>
> I thought that the error message
>     arcmsr0: too many outstanding commands
> might be due to a SCSI queue being overrun

Rather than messing with sd_max_throttle, you might try changing the number of iops ZFS will queue to a vdev. IMHO this is easier to correlate, because the ZFS tunable is closer to the application than an sd tunable. Details at:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Device_I.2FO_Queue_Size_.28I.2FO_Concurrency.29

iostat will show the number of iops queued to the device in the actv column, but for modern hardware this number can fluctuate quite a bit in a 1-second sample period -- which implies that you need lots of load to see it. The problem is that if lots of load makes it fall over, then the load will be automatically reduced -- causing you to chase your tail.

It should be fairly easy to whip up a dtrace script which would quantize the queue depth, though... [need more days in the hour...]
-- richard
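[Along the lines of the dtrace script Richard mentions, a one-liner over the io provider can quantize the per-device count of outstanding I/Os. This is only a rough sketch, not tested against this setup; counts can dip below zero for I/Os that were already in flight when tracing started.]

# track outstanding I/Os per device and build a quantized distribution;
# press Ctrl-C to print the per-device queue-depth histograms
dtrace -n '
io:::start { pend[args[1]->dev_statname]++;
             @q[args[1]->dev_statname] = quantize(pend[args[1]->dev_statname]); }
io:::done  { pend[args[1]->dev_statname]--; }'

The resulting histograms should make it obvious whether any sd instance is ever being asked for anywhere near 256 outstanding commands.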
James C. McPherson
2009-Jan-14 23:21 UTC
[zfs-discuss] OpenSolaris better than Solaris10u6 with regard to ARECA Raid Card
Just to let everybody know, I'm in touch with Charles and we're working on this problem offline. We'll report back to the list when we've got something to talk about.

James

On Wed, 14 Jan 2009 08:37:44 -0800 (PST) Charles Wright <charles@asc.edu> wrote:
> Here's an update:
>
> I thought that the error message
>     arcmsr0: too many outstanding commands
> might be due to a SCSI queue being overrun
>
> The areca driver has
>     #define ARCMSR_MAX_OUTSTANDING_CMD 256
>     <http://src.opensolaris.org/source/s?defs=ARCMSR_MAX_OUTSTANDING_CMD>
>
> What might be happening is that each RAID set results in a new instance of
> the areca driver getting loaded, so perhaps the SCSI queue on the card is
> simply being overrun because each drive gets a queue depth of 256. As such,
> I tested with sd_max_throttle:16
>
> (16 drives * 16 queued commands = 256)
>
> I verified sd_max_throttle got set via:
> root@yoda:~/solaris-install-stuff# echo "sd_max_throttle/D" | mdb -k
> sd_max_throttle:
> sd_max_throttle:                16

James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
Charles Wright
2009-Jan-15 21:37 UTC
[zfs-discuss] OpenSolaris better than Solaris10u6 with regard to ARECA Raid Card
I've tried putting this in /etc/system and rebooting:

set zfs:zfs_vdev_max_pending = 16

Are we sure that number equates to a SCSI command? Perhaps I should set it to 8 and see what happens. (I have 256 SCSI commands I can queue across 16 drives.)

I still got these error messages in the log:

Jan 15 15:29:40 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many outstanding commands (257 > 256)
Jan 15 15:29:40 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many outstanding commands (256 > 256)
Jan 15 15:29:43 yoda last message repeated 73 times

I watched iostat -x a good bit, and usually it (the actv column) is 0.0 or 0.1.

root@yoda:~# iostat -x
                  extended device statistics
device       r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
sd0          0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd1          0.4    2.0   22.3   13.5  0.1  0.0   39.3   1   2
sd2          0.5    2.0   25.6   13.5  0.1  0.0   40.4   2   2
sd3          0.3   21.5   18.7  334.4  0.7  0.1   40.1  13  15
sd4          0.3   21.6   18.9  334.4  0.7  0.1   40.6  13  15
sd5          0.3   21.5   19.2  334.4  0.7  0.1   39.7  12  15
sd6          0.3   21.6   18.6  334.4  0.7  0.2   40.4  13  15
sd7          0.3   21.6   18.7  334.4  0.7  0.1   40.3  12  15
sd8          0.3   21.6   18.7  334.4  0.7  0.2   40.1  13  15
sd9          0.3   21.5   18.5  334.5  0.7  0.1   40.0  12  14
sd10         0.3   21.4   18.9  333.6  0.7  0.1   40.2  12  14
sd11         0.3   21.4   18.9  333.6  0.7  0.1   39.3  12  15
sd12         0.3   21.4   19.4  333.6  0.7  0.2   40.0  13  15
sd13         0.3   21.4   18.9  333.6  0.7  0.1   40.3  13  15
sd14         0.3   21.4   19.0  333.6  0.7  0.1   38.8  12  14
sd15         0.3   21.4   19.1  333.6  0.7  0.1   39.6  12  14
sd16         0.3   21.4   18.7  333.6  0.7  0.1   39.3  12  14
--
This message posted from opensolaris.org
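[As a sanity check that the /etc/system entry actually took effect after the reboot, the same mdb technique Charles used earlier for sd_max_throttle applies to the ZFS tunable as well; a sketch:]

# print the current value of the per-vdev I/O queue depth limit
echo zfs_vdev_max_pending/D | mdb -k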
Richard Elling
2009-Jan-15 22:17 UTC
[zfs-discuss] OpenSolaris better than Solaris10u6 with regard to ARECA Raid Card
Charles Wright wrote:
> I've tried putting this in /etc/system and rebooting:
> set zfs:zfs_vdev_max_pending = 16

You can change this on the fly, without rebooting. See the mdb command at:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Device_I.2FO_Queue_Size_.28I.2FO_Concurrency.29

> Are we sure that number equates to a SCSI command?

Yes, though actually it pertains to all devices used by ZFS, even if they are not SCSI devices.

> Perhaps I should set it to 8 and see what happens.
> (I have 256 SCSI commands I can queue across 16 drives.)
>
> I still got these error messages in the log:
>
> Jan 15 15:29:40 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many outstanding commands (257 > 256)
> Jan 15 15:29:40 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many outstanding commands (256 > 256)
> Jan 15 15:29:43 yoda last message repeated 73 times
>
> I watched iostat -x a good bit, and usually it (the actv column) is 0.0 or 0.1.

iostat -x, without any intervals, shows the average since boot time, which won't be useful. Try "iostat -x 1" to see 1-second samples while your load is going.

> root@yoda:~# iostat -x
>                   extended device statistics
> device       r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
> sd0          0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
> sd1          0.4    2.0   22.3   13.5  0.1  0.0   39.3   1   2
> sd2          0.5    2.0   25.6   13.5  0.1  0.0   40.4   2   2
> sd3          0.3   21.5   18.7  334.4  0.7  0.1   40.1  13  15
> sd4          0.3   21.6   18.9  334.4  0.7  0.1   40.6  13  15
> sd5          0.3   21.5   19.2  334.4  0.7  0.1   39.7  12  15
> sd6          0.3   21.6   18.6  334.4  0.7  0.2   40.4  13  15
> sd7          0.3   21.6   18.7  334.4  0.7  0.1   40.3  12  15
> sd8          0.3   21.6   18.7  334.4  0.7  0.2   40.1  13  15
> sd9          0.3   21.5   18.5  334.5  0.7  0.1   40.0  12  14
> sd10         0.3   21.4   18.9  333.6  0.7  0.1   40.2  12  14
> sd11         0.3   21.4   18.9  333.6  0.7  0.1   39.3  12  15
> sd12         0.3   21.4   19.4  333.6  0.7  0.2   40.0  13  15
> sd13         0.3   21.4   18.9  333.6  0.7  0.1   40.3  13  15
> sd14         0.3   21.4   19.0  333.6  0.7  0.1   38.8  12  14
> sd15         0.3   21.4   19.1  333.6  0.7  0.1   39.6  12  14
> sd16         0.3   21.4   18.7  333.6  0.7  0.1   39.3  12  14

NB: a 40ms average service time (svc_t) is considered very slow for modern disks. You should look at this on the intervals to get a better idea of the svc_t under load. You want to see something more like 10ms, or less, for good performance on HDDs.
-- richard
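[The on-the-fly change Richard refers to is, per the linked Evil Tuning Guide, an mdb write of roughly this form. A sketch only: 0t marks a decimal value, the change takes effect immediately, and it does not persist across a reboot.]

# drop the per-vdev queue depth to 8 on the running kernel, as a test
echo zfs_vdev_max_pending/W0t8 | mdb -kw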
Charles Wright
2009-Jan-16 15:53 UTC
[zfs-discuss] OpenSolaris better than Solaris10u6 with regard to ARECA Raid Card
I tested with zfs_vdev_max_pending=8, hoping it would make the error messages

    arcmsr0: too many outstanding commands (257 > 256)

go away, but it did not. With zfs_vdev_max_pending=8, I would think only 128 commands total should be outstanding (16 drives * 8 = 128). However, I haven't yet been able to corrupt ZFS with it set to 8 (yet...), so it seems to have helped.

I took a log of iostat -x 1 while I was doing a lot of I/O and posted it here:
http://wrights.webhop.org/areca/solaris-info/zfs_vdev_max_pending-tests/8/iostat-8.txt

You can see the number of errors and other info here:
http://wrights.webhop.org/areca/solaris-info/zfs_vdev_max_pending-tests/8/

Information about my system can also be found here:
http://wrights.webhop.org/areca/

Thanks for the suggestion. I'm working with James and Erich, and hopefully they will find something in the driver code.
--
This message posted from opensolaris.org