similar to: solaris 10u8 hangs with message Disconnected command timeout for Target 0

Displaying 20 results from an estimated 700 matches similar to: "solaris 10u8 hangs with message Disconnected command timeout for Target 0"

2011 Aug 18
0
zfs-discuss Digest, Vol 70, Issue 37
Please check whether you have the latest 'MPT' patch installed on your server. If not, please install the MPT patch; it will fix the issue. Regards, Gowrisankar. On Thu, Aug 18, 2011 at 5:30 PM, <zfs-discuss-request at opensolaris.org> wrote: > Send zfs-discuss mailing list submissions to > zfs-discuss at opensolaris.org > > To subscribe or unsubscribe
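For readers wanting to act on that advice, a minimal sketch of checking for and installing a patch on Solaris 10 follows; the patch ID is a placeholder, since the reply does not name the specific MPT patch:

# Is the patch already installed? (substitute the real MPT patch ID)
showrev -p | grep <patch-id>
# Install it if missing, then reboot if the patch README requires it:
patchadd /var/tmp/<patch-id>-<rev>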
2007 Oct 27
14
X4500 device disconnect problem persists
After applying 125205-07 on two X4500 machines running Sol10U4 and removing "set sata:sata_func_enable = 0x5" from /etc/system to re-enable NCQ, I am again observing drive disconnect error messages. This is in spite of the patch description, which claims multiple fixes in this area: 6587133 repeated DMA command timeouts and device resets on x4500 6538627 x4500 message logs contain multiple
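For context, the workaround being removed is an /etc/system tunable that disables NCQ in the Marvell SATA framework. Re-checking the patch level and the tunable looks roughly like the sketch below; the tunable line is quoted from the post, the rest is an assumption:

# Confirm the patch is present:
showrev -p | grep 125205
# The NCQ-disabling workaround the poster removed from /etc/system:
# set sata:sata_func_enable = 0x5
# (a reboot is needed after any /etc/system change)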
2008 Aug 12
2
ZFS, SATA, LSI and stability
After having massive problems with a Supermicro X7DBE box using AOC-SAT2-MV8 Marvell controllers and OpenSolaris snv79 (same as described here: http://sunsolve.sun.com/search/document.do?assetkey=1-66-233341-1), we started over with new hardware and OpenSolaris 2008.05 upgraded to snv94. We again used a Supermicro X7DBE, but now with two LSI SAS3081E SAS controllers. And guess what? Now we get
2007 Oct 09
7
ZFS 60 second pause times to read 1K
Every day we see pause times of sometimes 60 seconds to read 1K of a file, for local reads as well as NFS, in a test setup. We have an x4500 set up as a single 4*(raidz2 9+2)+2 spare pool and have the file systems mounted over v5 krb5 NFS and accessed directly. The pool is a 20TB pool and is using . There are three filesystems: backup, test and home. Test has about 20 million files and uses 4TB.
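For reference, a "4*(raidz2 9+2) + 2 spares" layout on an x4500 might be built roughly as below; the pool and disk names are placeholders, not taken from the post:

# Four raidz2 vdevs of 11 drives each (9 data + 2 parity) plus two hot spares.
zpool create tank \
  raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c1t0d0 c1t1d0 c1t2d0 \
  raidz2 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
  raidz2 c2t6d0 c2t7d0 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 c4t0d0 \
  raidz2 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0 c4t7d0 c5t0d0 c5t1d0 c5t2d0 c5t3d0 \
  spare c5t4d0 c5t5d0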
2008 Dec 04
11
help diagnosing system hang
Hi all, First, I'll say my intent is not to spam a bunch of lists, but after posting to opensolaris-discuss I had someone communicate with me offline that these lists would possibly be a better place to start. So here we are. For those on all three lists, sorry for the repetition. Second, this message is meant to solicit help in diagnosing the issue described below. Any hints on
2010 Aug 24
7
SCSI write retry errors on ZIL SSD drives...
I posted a thread on this once long ago[1] -- but we're still fighting with this problem and I wanted to throw it out here again. All of our hardware is from Silicon Mechanics (SuperMicro chassis and motherboards). Up until now, all of the hardware has had a single 24-disk expander / backplane -- but we recently got one of the new SC847-based models with 24 disks up front and 12 in the
2008 Aug 02
13
are these errors dangerous
Hi everyone, I've been running a zfs fileserver for about a month now (on snv_91) and it's all working really well. I'm scrubbing once a week and nothing has come up as a problem yet. I'm a little worried as I've just noticed these messages in /var/adm/messages and I don't know if they're bad or just informational: Aug 2 14:46:06
2009 Jul 13
7
OpenSolaris 2008.11 - resilver still restarting
Just look at this. I thought all the restarting resilver bugs were fixed, but it looks like something odd is still happening at the start: Status immediately after starting resilver: # zpool status pool: rc-pool state: DEGRADED status: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected. action: Determine
2008 Jul 08
0
Disks errors not shown by zpool?
Ok, this is not an OpenSolaris question, but it is a Solaris and ZFS question. I have a pool with three mirrored vdevs. I just got an error message from FMD that a read failed on one of the disks (c1t6d0), with instructions on how to handle the problem and replace the device; so far everything is good. But the zpool still thinks everything is fine. Shouldn't zpool also show
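A sketch of cross-checking what FMA reported against what ZFS sees; the pool name below is a placeholder, since the post does not give one:

fmadm faulty              # faults FMA has diagnosed, with suggested actions
fmdump -eV                # the raw error reports behind the diagnosis
zpool status -v tank      # per-device read/write/checksum error counters in ZFS
iostat -En c1t6d0         # driver-level soft/hard/transport error counters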
2010 Jun 18
6
WD caviar/mpt issues
I know that this has been well-discussed already, but it's been a few months - WD Caviars with mpt/mpt_sas generating lots of retryable read errors, spitting out lots of the beloved "Log info 31080000 received for target" messages, and just generally not working right. (SM 836EL1 and 836TQ chassis - though I have several variations on the theme depending on date of purchase: 836EL2s,
2009 Jul 07
0
[perf-discuss] help diagnosing system hang
Interesting... I wonder what differs between your system and mine. With my dirt-simple stress-test: server1# zpool create X25E c1t15d0 server1# zfs set sharenfs=rw X25E server1# chmod a+w /X25E server2# cd /net/server1/X25E server2# gtar zxf /var/tmp/emacs-22.3.tar.gz and a fully patched X42420 running Solaris 10 U7 I still see these errors: Jul 7 22:35:04 merope Error for Command:
2007 Dec 03
2
Help replacing dual identity disk in ZFS raidz and SVM mirror
Hi, We have a number of 4200s set up using a combination of an SVM 4-way mirror and a ZFS raidz stripe. Each disk (of 4) is divided up like this: / 6GB UFS s0, Swap 8GB s1, /var 6GB UFS s3, Metadb 50MB UFS s4, /data 48GB ZFS s5. For SVM we do a 4-way mirror on /, swap, and /var. So we have 3 SVM mirrors: d0=root (submirrors d10, d20, d30, d40), d1=swap (submirrors d11, d21, d31, d41)
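Replacing one such dual-purpose disk usually means handling the SVM and ZFS halves separately. A rough sketch follows, in which the failed disk (c0t0d0), the healthy neighbour (c0t1d0), the cfgadm attachment point and the pool name are all assumptions, not details from the post:

# Remove the failed disk's state-database replica, then swap the drive:
metadb -d c0t0d0s4
cfgadm -c unconfigure c0::dsk/c0t0d0    # attachment point is hypothetical
cfgadm -c configure c0::dsk/c0t0d0
# Copy the slice table from a surviving disk onto the new one:
prtvtoc /dev/rdsk/c0t1d0s2 | fmthard -s - /dev/rdsk/c0t0d0s2
# Re-enable the SVM pieces, then resilver the ZFS slice:
metadb -a c0t0d0s4
metareplace -e d0 c0t0d0s0
metareplace -e d1 c0t0d0s1   # (repeat for the /var mirror)
zpool replace datapool c0t0d0s5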
2011 Jul 01
1
NUT PSU/IPMI driver using FreeIPMI (was: [Freeipmi-devel] in need of guidance...)
Hi Al, (FYI, I cc'ed the NUT developers list for info) 2011/6/30 Albert Chu <chu11 at llnl.gov> > Hi Arnaud, > > On Tue, 2011-06-28 at 12:19 -0700, Arnaud Quette wrote: > > Hi Al, > > > > 2011/6/28 Albert Chu <chu11 at llnl.gov> > > On Tue, 2011-06-28 at 02:28 -0700, Arnaud Quette wrote: > > (...) > > >
2008 Feb 08
4
List of supported multipath drivers
Where can I find a list of supported multipath drivers for ZFS? Keith McAndrew Senior Systems Engineer Northern California SUN Microsystems - Data Management Group <mailto:Keith.McAndrew at SUN.com> Keith.McAndrew at SUN.com 916 715 8352 Cell
2008 Jan 17
9
ATA UDMA data parity error
Hey all, I'm not sure if this is a ZFS bug or a hardware issue I'm having - any pointers would be great! Following contents include: - high-level info about my system - my first thought to debugging this - stack trace - format output - zpool status output - dmesg output High-Level Info About My System --------------------------------------------- - fresh
2007 Dec 15
4
Is round-robin I/O correct for ZFS?
I'm testing an iSCSI multipath configuration on a T2000 with two disk devices provided by a Netapp filer. Both the T2000 and the Netapp have two ethernet interfaces for iSCSI, going to separate switches on separate private networks. The scsi_vhci devices look like this in `format': 1. c4t60A98000433469764E4A413571444B63d0 <NETAPP-LUN-0.2-50.00GB>
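To see which load-balancing policy scsi_vhci is actually applying to those LUNs, something along these lines should work; the device path is the one quoted above with an assumed s2 slice:

mpathadm list lu
mpathadm show lu /dev/rdsk/c4t60A98000433469764E4A413571444B63d0s2
# The default policy is set in /kernel/drv/scsi_vhci.conf, typically:
#   load-balance="round-robin";
# (changing it requires a reboot)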
2012 Apr 06
6
Seagate Constellation vs. Hitachi Ultrastar
Happy Friday, List! I'm spec'ing out a Thumper-esque solution and having trouble finding my favorite Hitachi Ultrastar 2TB drives at a reasonable post-flood price. The Seagate Constellations seem pretty reasonable given the market circumstances but I don't have any experience with them. Anybody using these in their ZFS systems and have you had good luck? Also, if
2010 Aug 17
4
Narrow escape with FAULTED disks
Nothing like a "heart in mouth" moment to shave years from your life. I rebooted a snv_132 box in perfect health, and it came back up with two FAULTED disks in the same vdisk group. Everything I found in an hour on Google basically said "your data is gone". All 45TB of it. A postmortem with fmadm showed a single disk had failed with a SMART predictive failure. No indication why the
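A rough sketch of the kind of postmortem described above; the pool name is a placeholder:

fmdump                   # one line per diagnosed fault, with time and UUID
fmdump -eV               # underlying error reports (e.g. the SMART predictive failure)
fmadm faulty             # what FMA currently considers faulted
zpool status -x          # which pools ZFS has marked DEGRADED/FAULTED and why
zpool clear <pool>       # only after the disks are confirmed healthy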
2009 Jul 29
0
LVM and ZFS
I'm curious about whether there are any potential problems with using LVM metadevices as ZFS zpool targets. I have a couple of situations where using a device directly with ZFS causes errors on the console about "Bus" and lots of "stalled" I/O. But as soon as I wrap that device inside an LVM metadevice and then use it in the ZFS zpool, things work perfectly fine and smoothly (no
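The "LVM metadevice" here is a Solaris Volume Manager (SVM) metadevice; wrapping a disk that way and handing it to ZFS looks roughly like the sketch below (metadevice, slice and pool names are placeholders):

# Build a simple one-slice concat and give the resulting metadevice to ZFS:
metainit d100 1 1 c2t0d0s0
zpool create testpool /dev/md/dsk/d100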
2007 Apr 03
2
Corrupt inodes on shared disk...
I am having problems when using a Dell PowerVault MD3000 with multipath from a Dell PowerEdge 1950. I have 2 cables connected and mount the partition on the DAS array. I am using RHEL 4.4 with RHCS and a two-node cluster. Only one node is "Active" at a time; it creates a mount to the partition, and if there is an issue RHCS will fence the device and then the other node will mount the