similar to: solaris 10u8 hangs with message Disconnected command timeout for Target 0

Displaying 20 results from an estimated 1054 matches similar to: "solaris 10u8 hangs with message Disconnected command timeout for Target 0"

2011 Aug 18
0
zfs-discuss Digest, Vol 70, Issue 37
Please check whether you have the latest 'MPT' patch installed on your server. If not, please install the MPT patch; it will fix the issue. Regards, Gowrisankar. On Thu, Aug 18, 2011 at 5:30 PM, <zfs-discuss-request at opensolaris.org> wrote: > Send zfs-discuss mailing list submissions to > zfs-discuss at opensolaris.org > > To subscribe or unsubscribe
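For anyone landing here with the same mpt timeout symptoms, one quick way to see what is already installed on Solaris 10 is to list the patch inventory and the loaded mpt module. This is only a sketch; the grep pattern is a placeholder, so look up the current MPT/SAS patch ID for your release first:

    # list installed patches and search for the mpt driver patch
    showrev -p | grep <mpt-patch-id>
    # confirm which mpt driver module is currently loaded
    modinfo | grep -i mpt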
2010 Jun 01
5
Solaris 10U8, Sun Cluster, and SSD issues.
Hello All, We are currently testing an NFS + Sun Cluster solution with ZFS in our environment. Currently we have 2 HP DL360s, each with a 2-port LSI SAS 9200-8e controller (mpt_sas driver), connected to a Xyratex OneStor SP1224s 24-bay SAS tray. The Xyratex SAS tray has 2 ports on its controller, so each server can connect. We have a zpool of 2x (8+2) drives and 1 hot spare, and also 3 Intel
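For readers trying to reproduce this layout, a minimal sketch of the pool described above (two raidz2 vdevs of 8 data + 2 parity disks each, plus one hot spare), assuming purely hypothetical cXtYdZ device names:

    # hypothetical device names; substitute the actual SAS targets
    zpool create tank \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 \
        raidz2 c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0 c1t15d0 c1t16d0 c1t17d0 c1t18d0 c1t19d0 \
        spare c1t20d0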
2006 Dec 05
15
weird thing with zfs
ok, two weeks ago I noticed that one of the disks in my zpool had problems. I was getting "Corrupt label; wrong magic number" messages, and when I looked in format it did not see that disk... (the last disk) I had that setup running for a few months, and all of a sudden the last disk failed. So I ordered another disk and had it replaced about a week ago; I issued the replace command after the disk
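For reference, the replace-and-resilver sequence alluded to here is normally just the following (pool and device names are placeholders):

    # tell ZFS the disk in that slot has been swapped, then watch the resilver
    zpool replace tank c1t5d0
    zpool status -v tank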
2006 Aug 03
9
System hangs on SCSI error
Hi, Using a ZFS emulated volume, I wasn't expecting to see a system [1] hang caused by a SCSI error. What do you think? The error is not systematic. When it happens, the Solaris/Xen dom0 console keeps displaying the following message and the system hangs. Aug 3 11:11:23 jesma58 scsi: WARNING: /pci@0,0/pci1022,7450@a/pci17c2,10@4/sd@1,0 (sd2): Aug 3 11:11:23
2006 Aug 03
9
System hangs on SCSI error
Hi, Using a ZFS emulated volume, I wasn't expecting to see a system [1] hang caused by a SCSI error. What do you think? The error is not systematic. When it happens, the Solaris/Xen dom0 console keeps displaying the following message and the system hangs. Aug 3 11:11:23 jesma58 scsi: WARNING: /pci@0,0/pci1022,7450@a/pci17c2,10@4/sd@1,0 (sd2): Aug 3 11:11:23
2007 Feb 11
5
Why doesn''t Solaris remove a faulty disk from operation?
Howdy, On one of my Solaris 10 11/06 servers, I am getting numerous errors similar to the following: Feb 11 09:30:23 rx scsi: WARNING: /pci@b,2000/scsi@2,1/sd@2,0 (sd1): Feb 11 09:30:23 rx Error for Command: write(10) Error Level: Retryable Feb 11 09:30:23 rx scsi: Requested Block: 58458343 Error Block: 58458343 Feb 11 09:30:23 rx scsi: Vendor:
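Before deciding whether the drive should be retired, it can help to look at the per-device error counters and any FMA telemetry. A rough sketch of where to start (sd1 is the device named in the messages above):

    # cumulative soft/hard/transport error counts per device
    iostat -En
    # kernel error reports and any diagnosed faults
    fmdump -eV | more
    fmadm faulty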
2006 Mar 12
4
scary boot-time messages triggered by write cache enable?
I just upgraded a server to snv_35. At boot I was greeted with 22 error messages of this form: WARNING: /scsi_vhci/ssd@g20000004cf29b948 (ssd28): Error for Command: read(10) Error Level: Retryable Requested Block: 290 Error Block: 290 Vendor: SEAGATE Serial Number: 0140K0LQ70 Sense Key:
2008 Mar 07
11
'zfs create' hanging
We have a Sun Fire X4500 (Thumper) with 48 750GB SATA drives being used as an NFS server. My original plan was to reinstall Linux on it but after getting it and playing around with zfs I decided to give Solaris a try. I have created over 30 zfs filesystems so far and exported them via NFS and this has been working fine. Well, almost. A couple of weeks ago I discovered clients could no
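For context, the create-and-export pattern being used here is typically nothing more than the following (dataset names are illustrative):

    # create a filesystem and share it over NFS
    zfs create tank/export/data01
    zfs set sharenfs=on tank/export/data01
    zfs list -o name,mountpoint,sharenfs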
2007 Aug 05
17
scrub halts
I've got a 5 x 500GB SATA RAID-Z stack running under build 64a. I have two problems that may or may not be interrelated. 1) zpool scrub stops. If I do a "zpool status" it merrily continues for a while. I can't see any pattern in this behaviour with repeated scrubs. 2) Bad blocks on one disk. This is repeatable, so I'm sending the disk back for replacement. (1)
2007 Mar 26
11
error-message from a nexsan-storage
Hi. I have a Nexsan ATAbeast with 2 RAID controllers, each with 21 disks @ 400 GB. Each RAID controller has five RAID-5 LUNs and one hot spare. The Solaris release is from 2006/11. I have created a single raidz2 tank from the ten LUNs. The RAID controller is connected to a Dell PE 2650 with two QLogic 2310 HBAs; frame size is 1024, speed is 2 Gbit/s. The HBAs are
2009 Nov 11
15
ZFS on JBOD storage, mpt driver issue - server not responding
Server using a Sun StorageTek 8-port external SAS PCIe HBA (mpt driver) connected to an external JBOD array with 12 disks. Here is a link to the exact SAS (Sun) adapter: http://www.sun.com/storage/storage_networking/hba/sas/PCIe.pdf (LSI SAS3801) When running IO-intensive operations (zpool scrub) for a couple of hours, the server locks up with the following repeating
2007 Oct 27
14
X4500 device disconnect problem persists
After applying 125205-07 on two X4500 machines running Sol10U4 and removing "set sata:sata_func_enable = 0x5" from /etc/system to re-enable NCQ, I am again observing drive disconnect error messages. This is in spite of the patch description, which claims multiple fixes in this area: 6587133 repeated DMA command timeouts and device resets on x4500 6538627 x4500 message logs contain
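For anyone following along, the workaround being backed out is this single /etc/system line; deleting or commenting it out and rebooting restores the default SATA framework behaviour with NCQ enabled:

    * /etc/system entry that disables NCQ on the SATA framework
    set sata:sata_func_enable = 0x5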
2008 Aug 12
2
ZFS, SATA, LSI and stability
After having massive problems with a Supermicro X7DBE box using AOC-SAT2-MV8 Marvell controllers and OpenSolaris snv_79 (same as described here: http://sunsolve.sun.com/search/document.do?assetkey=1-66-233341-1), we started over with new hardware and OpenSolaris 2008.05 upgraded to snv_94. We again used a Supermicro X7DBE, but now with two LSI SAS3081E SAS controllers. And guess
2008 Dec 04
11
help diagnosing system hang
Hi all, First, I'll say my intent is not to spam a bunch of lists, but after posting to opensolaris-discuss I had someone communicate with me offline that these lists would possibly be a better place to start. So here we are. For those on all three lists, sorry for the repetition. Second, this message is meant to solicit help in diagnosing the issue described below. Any hints on
2007 Oct 09
7
ZFS 60 second pause times to read 1K
Every day we see pause times of sometimes 60 seconds to read 1K of a file, for local reads as well as NFS, in a test setup. We have an x4500 set up as a single 4*(raidz2 9+2)+2 spare pool and have the filesystems mounted over krb5 (Kerberos v5) NFS and accessed directly. The pool is a 20TB pool. There are three filesystems: backup, test and home. Test has about 20 million files and uses 4TB.
2010 Aug 24
7
SCSI write retry errors on ZIL SSD drives...
I posted a thread on this once long ago[1] -- but we're still fighting with this problem and I wanted to throw it out here again. All of our hardware is from Silicon Mechanics (SuperMicro chassis and motherboards). Up until now, all of the hardware has had a single 24-disk expander / backplane -- but we recently got one of the new SC847-based models with 24 disks up front and 12 in
2010 Jun 21
1
Many checksum errors during resilver.
I've decided to upgrade my home server capacity by replacing the disks in one of my mirror vdevs. The procedure appeared to work out, but during resilver, a couple million checksum errors were logged on the new device. I've read through quite a bit of the archive and searched around a bit, but cannot find anything definitive to ease my mind on whether to proceed. SunOS
2006 Sep 14
1
Kernel panic on "zpool import"
Hi, I just triggered a kernel panic while trying to import a zpool. The disk in the zpool was residing on a Symmetrix and mirrored with SRDF. The host sees both devices though (one writeable device "R1" on one Symmetrix box and one write protected device "R2" on another Symmetrix box). It seems zfs tries to import the write protected device (instead of the writeable one) and
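One possible workaround for this situation is to point zpool import at a directory containing links to only the devices you want scanned, so the write-protected R2 path is never considered. A sketch with illustrative paths and pool name:

    # scan a hand-picked directory of device links instead of all of /dev/dsk
    mkdir /var/tmp/r1devs
    ln -s /dev/dsk/cXtYdZs0 /var/tmp/r1devs/    # link only the writable R1 device
    zpool import -d /var/tmp/r1devs mypool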
2009 Jul 13
7
OpenSolaris 2008.11 - resilver still restarting
Just look at this. I thought all the restarting resilver bugs were fixed, but it looks like something odd is still happening at the start: Status immediately after starting resilver: # zpool status pool: rc-pool state: DEGRADED status: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected. action: Determine
2007 Oct 22
4
Parallel zfs destroy results in No more processes
Running 102 parallel "zfs destroy -r" commands on an X4500 running S10U4 has resulted in "No more processes" errors in existing login shells for several minutes of time, but then fork() calls started working again. However, none of the zfs destroy processes have actually completed yet, which is odd since some of the filesystems are trivially small. After fork() started
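For context, the kind of fan-out that triggers this looks roughly like the sketch below (dataset names are hypothetical; the original run launched around 102 of them). Batching the destroys instead of backgrounding them all at once is the obvious mitigation to test:

    # naive parallel destroy -- this is what can exhaust the process table
    for fs in tank/ds001 tank/ds002 tank/ds003; do
        zfs destroy -r "$fs" &
    done
    wait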