Displaying 20 results from an estimated 200 matches similar to: "Narrow escape with FAULTED disks"

2012 Sep 24
20
cannot replace X with Y: devices have different sector alignment
Well, this is a new one.... illumos/OpenIndiana let me add a device as a hot spare that evidently has a different sector alignment than all of the other drives in the array. So now I'm at the point that I /need/ a hot spare and it doesn't look like I have it. And, worse, the other spares I have are all the same model as said hot spare. Is there anything I can do with this or
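Where that snippet leaves off, the usual next step is to confirm the mismatch before counting on the spare. A minimal sketch, assuming an illumos/OpenIndiana host and hypothetical pool and device names (tank, failed disk c0t3d0, spare c0t9d0):

    # Logical sector alignment of the pool's vdevs: ashift=9 is 512 B, ashift=12 is 4 KiB
    zdb -C tank | grep ashift

    # Sector size the replacement disk reports in its label (hypothetical device)
    prtvtoc /dev/rdsk/c0t9d0s2 | grep 'bytes/sector'

    # The replace that is rejected when the two alignments differ
    zpool replace tank c0t3d0 c0t9d0

If the spare reports 4 KiB sectors and the vdev was built with ashift=9, the replace fails as described; the common workarounds are a spare with matching geometry or rebuilding the vdev at ashift=12.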
2011 Jul 01
1
NUT PSU/IPMI driver using FreeIPMI (was: [Freeipmi-devel] in need of guidance...)
Hi Al, (FYI, I cc'ed the NUT developers list for info) 2011/6/30 Albert Chu <chu11 at llnl.gov> > Hi Arnaud, > > On Tue, 2011-06-28 at 12:19 -0700, Arnaud Quette wrote: > > Hi Al, > > > > 2011/6/28 Albert Chu <chu11 at llnl.gov> > > On Tue, 2011-06-28 at 02:28 -0700, Arnaud Quette wrote: > > (...) > > >
2010 Nov 29
9
Seagate ST32000542AS and ZFS perf
Hi, Does anyone use Seagate ST32000542AS disks with ZFS? I wonder whether the performance is as ugly as it is with the WD Green WD20EARS disks. Thanks, -- Piotr Jasiukajtis | estibi | SCA OS0072 http://estseg.blogspot.com
2011 Aug 16
2
solaris 10u8 hangs with message Disconnected command timeout for Target 0
Hi, My Solaris storage box hangs. I go to the console and there are messages[1] displayed on it. I can't log in on the console, and it seems the I/O is totally blocked. The system is Solaris 10u8 on a Dell R710 with a Dell MD3000 disk array; two HBA cables connect the server and the MD3000. The symptom is random. It would be much appreciated if anyone could help me out. Regards, Ding [1] Aug 16
2009 Nov 17
13
ZFS storage server hardware
Hi, I know (from the zfs-discuss archives and other places [1,2,3,4]) that a lot of people are looking to use zfs as a storage server in the 10-100TB range. I'm in the same boat, but I've found that hardware choice is the biggest issue. I'm struggling to find something which will work nicely under Solaris and which meets my expectations in terms of hardware.
2008 Aug 02
13
are these errors dangerous
Hi everyone, I've been running a zfs fileserver for about a month now (on snv_91) and it's all working really well. I'm scrubbing once a week and nothing has come up as a problem yet. I'm a little worried as I've just noticed these messages in /var/adm/messages and I don't know if they're bad or just informational: Aug 2 14:46:06
2009 Jul 13
2
questions regarding RFE 6334757 and CR 6322205 disk write cache. thanks (case 11356581)
Hello experts, I would like to consult you on some questions regarding RFE 6334757 and CR 6322205 (disk write cache).
==========================================
RFE 6334757 disk write cache should be enabled and should have a tool to switch it on and off
CR 6322205 Enable disk write cache if ZFS owns the disk
==========================================
The customer found, on a SPARC Enterprise T5140,
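For anyone who lands on this thread, the write cache state can be inspected and toggled from format(1M) in expert mode; a minimal sketch, assuming the disk uses the sd driver and is selected interactively:

    format -e
    # at the format> prompt, select the disk in question, then:
    #   format> cache
    #   cache> write_cache
    #   write_cache> display    # show the current setting
    #   write_cache> enable     # or 'disable'

Whether ZFS turns the cache on automatically depends on whether it owns the whole disk, which is what CR 6322205 above is about.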
2007 Oct 27
14
X4500 device disconnect problem persists
After applying 125205-07 on two X4500 machines running Sol10U4 and removing "set sata:sata_func_enable = 0x5" from /etc/system to re-enable NCQ, I am again observing drive disconnect error messages. This is in spite of the patch description, which claims multiple fixes in this area:
6587133 repeated DMA command timeouts and device resets on x4500
6538627 x4500 message logs contain multiple
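If the regression forces a return to the old workaround, it is the same tunable quoted in the message; a minimal sketch (this disables NCQ again and needs a reboot to take effect):

    # /etc/system -- workaround from the thread: restrict the sata module's features
    set sata:sata_func_enable = 0x5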
2008 Aug 12
2
ZFS, SATA, LSI and stability
After having massive problems with a Supermicro X7DBE box using AOC-SAT2-MV8 Marvell controllers and OpenSolaris snv79 (same as described here: http://sunsolve.sun.com/search/document.do?assetkey=1-66-233341-1), we started over with new hardware and OpenSolaris 2008.05 upgraded to snv94. We again used a Supermicro X7DBE, but now with two LSI SAS3081E SAS controllers. And guess what? Now we get
2008 Jan 06
1
How do I blank or overwrite DVD-RW disks in CentOS 5?
I'm trying to use DVD-RW ("minus RW") disks in my LG GSA-4040B drive. I can write a new disk just fine, but I can't find any way to blank or re-use a disk. When I run xcdroast and click on the "Blank CD/DVD+-RW" button, I get "Error while blanking." Here is the last part of the dialog: Using generic SCSI-3/mmc DVD-R(W) driver (mmc_mdvd) Driver flags
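Outside of xcdroast, two command-line routes usually work on CentOS 5; a minimal sketch, assuming the burner is /dev/dvd (substitute the real device node):

    # Quick-blank the DVD-RW with wodim (blank=all does a full, slower blank)
    wodim dev=/dev/dvd blank=fast

    # Or blank/format it with dvd+rw-format from the dvd+rw-tools package
    dvd+rw-format -blank /dev/dvd

growisofs can then burn to the blanked disc directly.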
2011 Nov 25
1
Recovering from kernel panic / reboot cycle importing pool.
Yesterday morning I awoke to alerts from my SAN that one of my OS disks was faulty; FMA said it was a hardware failure. By the time I got to work (1.5 hours after the email), ALL of my pools were in a degraded state, and "tank", my primary pool, had kicked in two hot spares because it was so discombobulated. ------------------- EMAIL ------------------- List of faulty resources:
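When a pool panics the box on import, a common first move is a read-only or recovery-mode import from a live environment so the data can be copied off; a minimal sketch, assuming the pool is named tank and that the ZFS version in use supports these options:

    # Dry-run recovery import: report what would be rolled back, change nothing
    zpool import -nF tank

    # Read-only import, to reach the data without replaying damaged state
    zpool import -o readonly=on -f tank

    # Actual recovery import (discards the last few transaction groups)
    zpool import -F tank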
2015 Apr 09
1
resource busy
How do I find whatever it is that wodim or readom thinks is using /dev/sr0 and kill it? So far, reboot is the only solution I've found that works. I don't like it. I want to be able to use my DVD-burner more than once without rebooting. lsof has not helped. -- Michael hennebry at web.cs.ndsu.NoDak.edu "SCSI is NOT magic. There are *fundamental technical reasons* why it is
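Beyond lsof, fuser may name the holder, and when neither shows anything the drive is often pinned by a polling daemon rather than by a user process; a minimal sketch (the service name is an assumption and varies by release):

    # Any processes with the device open (add -k to kill them once identified)
    fuser -v /dev/sr0
    lsof /dev/sr0

    # If a polling daemon is holding the drive, stopping it may release the node
    service haldaemon stop   # assumption: HAL-era releases; newer ones use udisks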
2010 Nov 23
14
ashift and vdevs
zdb -C shows an ashift value on each vdev in my pool; I was just wondering if it is vdev-specific or pool-wide. Google didn't seem to know. I'm considering a mixed pool with some "advanced format" (4KB sector) drives and some normal 512B sector drives, and was wondering if the ashift can be set per vdev, or only per pool. Theoretically, this would save me some size on
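ashift is recorded per top-level vdev, not per pool, so a mixed pool can carry different values side by side; a minimal sketch to check, assuming a hypothetical pool named tank:

    # One ashift line per top-level vdev in the cached config
    zdb -C tank | grep ashift
    # ashift=9 -> 512 B alignment, ashift=12 -> 4 KiB alignment

On releases of that era the value was picked automatically from what the drive reported at vdev creation time, so 512-byte-emulating 4 KB drives usually ended up with ashift=9.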
2007 Dec 03
2
Help replacing dual identity disk in ZFS raidz and SVM mirror
Hi, We have a number of 4200s set up using a combination of an SVM 4-way mirror and a ZFS raidz stripe. Each disk (of 4) is divided up like this:
/       6GB   UFS  s0
swap    8GB        s1
/var    6GB   UFS  s3
metadb  50MB  UFS  s4
/data   48GB  ZFS  s5
For SVM we do a 4-way mirror on /, swap, and /var, so we have 3 SVM mirrors: d0=root (submirrors d10, d20, d30, d40), d1=swap (submirrors d11, d21, d31, d41)
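The usual sequence for a disk that carries both SVM submirrors and a raidz slice is: relabel the replacement like its peers, re-enable the SVM pieces, then let ZFS resilver its slice. A minimal sketch, assuming hypothetical names (failed disk c1t1d0, healthy peer c1t0d0, pool tank) and the slice layout above:

    # Copy the partition table from a healthy disk to the replacement
    prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2

    # Recreate the state database replicas on s4 (replica count is an assumption)
    metadb -a -c 3 c1t1d0s4

    # Re-enable the failed components in each mirror (d0=root, d1=swap, likewise for /var)
    metareplace -e d0 c1t1d0s0
    metareplace -e d1 c1t1d0s1

    # Let ZFS resilver the raidz slice
    zpool replace tank c1t1d0s5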
2010 Aug 24
7
SCSI write retry errors on ZIL SSD drives...
I posted a thread on this once long ago[1] -- but we're still fighting with this problem and I wanted to throw it out here again. All of our hardware is from Silicon Mechanics (SuperMicro chassis and motherboards). Up until now, all of the hardware has had a single 24-disk expander / backplane -- but we recently got one of the new SC847-based models with 24 disks up front and 12 in the
2010 Apr 07
53
ZFS RaidZ recommendation
I have been searching this forum and just about every ZFS document I can find, trying to find the answer to my questions, but I believe the answer I am looking for is not going to be documented and is probably best learned from experience. This is my first time playing around with OpenSolaris and ZFS. I am in the midst of replacing my home-based file server. This server hosts all of my media
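For the part that is easy to document, the pool creation itself is short; a minimal sketch, assuming six hypothetical disks plus a spare (device names and the raidz2 choice are illustrative, not a recommendation from the thread):

    # 6-disk raidz2 vdev: survives any two disk failures
    zpool create tank raidz2 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0

    # Optional hot spare and a filesystem for the media
    zpool add tank spare c0t7d0
    zfs create tank/media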
2012 Nov 02
6
FreeBSD 9.1 stability/robustness?
I need to build up a few servers and routers, and am wondering how FreeBSD 9.1 is shaping up. Will it be likely to be more stable and robust than 9.0-RELEASE? Are there issues that will have to wait until 9.2-RELEASE to be fixed? Opinions welcome. --Brett Glass
2007 Oct 09
7
ZFS 60 second pause times to read 1K
Every day we see pause times of sometimes 60 seconds to read 1K of a file, for local reads as well as NFS, in a test setup. We have an x4500 set up as a single 4*(raidz2 9+2) + 2 spare pool and have the file system mounted over v5 krb5 NFS and accessed directly. The pool is a 20TB pool and is using . There are three filesystems: backup, test and home. Test has about 20 million files and uses 4TB.
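To narrow down whether the stalls come from the disks or from a higher layer, the simplest measurements are a timed small read next to per-device latency; a minimal sketch, assuming a hypothetical file on the affected filesystem:

    # Time a 1 KB read of a not-recently-cached file
    ptime dd if=/test/somefile of=/dev/null bs=1k count=1

    # Per-device latency during a pause (watch for asvc_t spikes or %b pegged at 100)
    iostat -xnz 5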
2009 Oct 23
7
cryptic vdev name from fmdump
This morning we got a fault management message from one of our production servers stating that a fault in one of our pools had been detected and fixed. Looking into the error using fmdump gives:
fmdump -v -u 90ea244e-1ea9-4bd6-d2be-e4e7a021f006
TIME                 UUID                                 SUNW-MSG-ID
Oct 22 09:29:05.3448 90ea244e-1ea9-4bd6-d2be-e4e7a021f006 FMD-8000-4M Repaired
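The UUID in that output leads back to a pool GUID and a vdev GUID, which zdb can map to a device path; a minimal sketch, assuming a hypothetical pool name:

    # Full fault report, including the GUIDs behind the cryptic vdev name
    fmdump -V -u 90ea244e-1ea9-4bd6-d2be-e4e7a021f006

    # Match the vdev GUID against the cached pool config to find the device path
    zdb -C tank | egrep 'guid|path'

    # And confirm the pool's own view
    zpool status -v tank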
2008 Jul 08
0
Disks errors not shown by zpool?
Ok, this is not an OpenSolaris question, but it is a Solaris and ZFS question. I have a pool with three mirrored vdevs. I just got an error message from FMD that a read failed on one of the disks (c1t6d0), with instructions on how to handle the problem and replace the device; so far everything is good. But the zpool still thinks everything is fine. Shouldn't zpool also show
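ZFS only bumps its error counters for failures it sees on its own I/O, so a read error that the driver retried successfully typically shows up in FMA telemetry but not in zpool status. A minimal sketch of the usual cross-checks, assuming the pool is named tank:

    # What FMA has diagnosed, and the raw error telemetry behind it
    fmadm faulty
    fmdump -eV | tail -40

    # What ZFS itself has counted, per device
    zpool status -v tank

    # Driver-level error counters for the suspect disk
    iostat -En c1t6d0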