I'm trying to set up a raidz pool on 4 disks attached to an Asus P5BV-M motherboard with an Intel ICH7R. The BIOS lets me pick IDE, RAID, or AHCI for the disks. I'm not interested in the motherboard's RAID, and from previous posts it sounded like there were performance advantages to picking AHCI. However, I am getting errors and am unable to create the pool.

Running format tells me:

AVAILABLE DISK SELECTIONS:
       0. c12d1 <DEFAULT cyl 19454 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1f,1/ide@0/cmdk@1,0
       1. c13t0d0 <drive type unknown>
          /pci@0,0/pci1043,819e@1f,2/disk@0,0
       2. c13t1d0 <drive type unknown>
          /pci@0,0/pci1043,819e@1f,2/disk@1,0
       3. c13t2d0 <drive type unknown>
          /pci@0,0/pci1043,819e@1f,2/disk@2,0
       4. c13t3d0 <drive type unknown>
          /pci@0,0/pci1043,819e@1f,2/disk@3,0

The first disk is an IDE disk containing the OS; the other four are for the pool. Then:

# zpool create mypool raidz c13t0d0 c13t1d0 c13t2d0 c13t3d0
cannot label 'c13t0d0': try using fdisk(1M) and then provide a specific slice

When doing this, dmesg says:

Apr 15 17:14:15 fs8 ahci: [ID 296163 kern.warning] WARNING: ahci0: ahci port 0 has task file error
Apr 15 17:14:15 fs8 ahci: [ID 687168 kern.warning] WARNING: ahci0: ahci port 0 is trying to do error recovery
Apr 15 17:14:15 fs8 ahci: [ID 551337 kern.warning] WARNING: ahci0:
Apr 15 17:14:15 fs8 ahci: [ID 693748 kern.warning] WARNING: ahci0: ahci port 0 task_file_status = 0x451
Apr 15 17:14:15 fs8 genunix: [ID 353554 kern.warning] WARNING: Device /pci@0,0/pci1043,819e@1f,2/disk@0,0 failed to power up.

I find reports from 2006 that the ICH7R is well supported, so I'm not sure what the problem is. Any suggestions?
--
This message posted from opensolaris.org
Hi,

are the drives properly configured in cfgadm?

Cheers,
Tonmaus
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of Tonmaus
>
> are the drives properly configured in cfgadm?

I agree. You need to do these:

devfsadm -Cv
cfgadm -al
devfsadm -Cv gave a lot of "removing file" messages, apparently for items that were not relevant. cfgadm -al says, about the disks:

sata0/0::dsk/c13t0d0           disk         connected    configured   ok
sata0/1::dsk/c13t1d0           disk         connected    configured   ok
sata0/2::dsk/c13t2d0           disk         connected    configured   ok
sata0/3::dsk/c13t3d0           disk         connected    configured   ok

I still get the same error message, but I'm guessing now that means I have to create a partition on the device. However, I am still stymied for the time being. fdisk can't open any of the /dev/rdsk/c13t*d0p0 devices. I tried running format, and get this:

AVAILABLE DISK SELECTIONS:
       0. c12d1 <DEFAULT cyl 19454 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1f,1/ide@0/cmdk@1,0
       1. c13t0d0 <drive type unknown>
          /pci@0,0/pci1043,819e@1f,2/disk@0,0
       2. c13t1d0 <drive type unknown>
          /pci@0,0/pci1043,819e@1f,2/disk@1,0
       3. c13t2d0 <drive type unknown>
          /pci@0,0/pci1043,819e@1f,2/disk@2,0
       4. c13t3d0 <drive type unknown>
          /pci@0,0/pci1043,819e@1f,2/disk@3,0
Specify disk (enter its number): 1
Error: can't open disk '/dev/rdsk/c13t0d0p0'.
AVAILABLE DRIVE TYPES:
        0. Auto configure
        1. other
Specify disk type (enter its number): 0
Auto configure failed
No Solaris fdisk partition found.

At this point, I'm not sure whether to run fdisk, format, or something else. I tried fdisk, partition, and label, but got the message "Current Disk Type is not set." I expect this is a problem because of the "drive type unknown" appearing on the drives. I gather from another thread that I need to run fdisk, but I haven't been able to do it.
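[For reference, the usual sequence for putting a Solaris fdisk partition and label on a raw disk — assuming the c13t0d0 device name from this thread, and assuming the device can actually be opened, which is exactly what is failing here — is something like:

    # Sketch only; device name is from this thread, adjust as needed.
    fdisk -B /dev/rdsk/c13t0d0p0    # -B: default single whole-disk Solaris partition, no prompts
    format -e c13t0d0               # then write a label with the "label" subcommand

That said, zpool create normally does all of this itself when handed a whole disk, so needing to do it by hand is already a sign something else is wrong.]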
Your adapter read-outs look quite different from mine. I am on ICH9, snv_133. Maybe that's why. But I thought I should ask on that occasion:

- build?
- do the drives currently support the SATA-2 standard (by model, by jumper settings)?
- could it be that the Areca controller has done something to them partition-wise?

Regards,
Tonmaus
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of Willard Korfhage
>
> devfsadm -Cv gave a lot of "removing file" messages, apparently for
> items that were not relevant.

That's good. If there were no necessary changes, devfsadm would say nothing.

> I still get the same error message, but I'm guessing now that means I
> have to create a partition on the device. However, I am still stymied

There should be no need to create partitions. Something simple like this should work:

zpool create junkfooblah c13t0d0

And if it doesn't work, try "zpool status" just to verify for certain that the device is not already part of any pool.

> for the time being. fdisk can't open any of the /dev/rdsk/c13t*d0p0
> devices. I tried running format, and get this

There may be something weird happening in your system. I can't think of any reason for that behavior, unless you simply have a SATA card that has no proper driver support from OpenSolaris while in AHCI mode.

> Error: can't open disk '/dev/rdsk/c13t0d0p0'.

Yeah. Weird.

> AVAILABLE DRIVE TYPES:
>         0. Auto configure
>         1. other
> Specify disk type (enter its number): 0
> Auto configure failed
> No Solaris fdisk partition found.

Yeah. Weird.
No Areca controller on this machine. It is a different box, and the drives are just plugged into the SATA ports on the motherboard. I'm running build snv_133, too.

The drives are recent: 1.5 TB drives, 3 Western Digital and 1 Seagate, if I recall correctly. They ought to support SATA-2. They are brand new and haven't been used before.

I have the feeling I'm missing some simple, obvious step because I'm still pretty new to OpenSolaris.
> There should be no need to create partitions.
> Something simple like this should work:
> zpool create junkfooblah c13t0d0
>
> And if it doesn't work, try "zpool status" just to verify for certain,
> that device is not already part of any pool.

It is not part of any pool. I get the same "cannot label" message, and dmesg still shows the task file error messages that I mentioned before.

The drives are new, and I don't think they are bad. Likewise, the motherboard is new, although I see the last BIOS release was September 2008, so the design has been out for a while.
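[As an aside, the task_file_status value in those ahci warnings follows the AHCI PxTFD register layout: the low byte is the ATA Status register and the next byte is the ATA Error register, so it can be decoded mechanically. A small sketch — this is my own decoding using the standard ATA bit names, not output from any Solaris tool:

```python
# Decode an ahci task_file_status word into named ATA status/error bits.
# Assumption: PxTFD layout -- bits 7:0 = ATA Status, bits 15:8 = ATA Error.
STATUS_BITS = {7: "BSY", 6: "DRDY", 5: "DF", 4: "DSC", 3: "DRQ", 2: "CORR", 1: "IDX", 0: "ERR"}
ERROR_BITS = {7: "ICRC", 6: "UNC", 5: "MC", 4: "IDNF", 3: "MCR", 2: "ABRT", 1: "NM", 0: "AMNF"}


def decode_task_file(value):
    """Split a task_file_status word into lists of set status/error bit names."""
    status = value & 0xFF          # ATA Status register
    error = (value >> 8) & 0xFF    # ATA Error register
    pick = lambda byte, table: [name for bit, name in table.items() if byte & (1 << bit)]
    return pick(status, STATUS_BITS), pick(error, ERROR_BITS)


if __name__ == "__main__":
    status, error = decode_task_file(0x451)
    print("status:", "|".join(status), "error:", "|".join(error))
```

For the 0x451 seen in these logs, that comes out as status DRDY|DSC|ERR with error ABRT, i.e. the drive rejected a command outright — which points at the drive itself rather than at partitioning.]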
On Fri, Apr 16, 2010 at 11:46:01AM -0700, Willard Korfhage wrote:

> The drives are recent - 1.5TB drives

I'm going to bet this is a 32-bit system, and you're getting screwed by the 1TB limit that applies there.

If so, you will find clues hidden in dmesg from boot time about this, as the drives are probed.

--
Dan.
isainfo -k returns amd64, so I don't think that is the answer.
I solved the mystery: an astounding 7 out of the 10 brand-new disks I was using were bad. I was using 4 at a time, and it wasn't until a good one got into the mix that I realized what was wrong.

FYI, these were Western Digital WD15EADS and Samsung HD154UI. Each brand was mostly bad, with one or two good disks. The bad ones are functional enough that the BIOS can tell what type they are, but I got a lot of errors when I plugged them into a Linux box to check them. The whole thing is bizarre enough that I wonder if they got damaged in shipping or if my machine somehow damaged them.
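[For anyone screening a batch of new drives on a Linux box as described above, a quick non-destructive pass might look like the following sketch. smartctl is from the smartmontools package, and /dev/sdb is a placeholder device name:

    # Sketch only; /dev/sdb is a placeholder -- point these at the drive under test.
    smartctl -H -a /dev/sdb     # SMART health verdict, attributes, and error log
    smartctl -t short /dev/sdb  # start the drive's built-in short self-test
    badblocks -sv /dev/sdb      # read-only surface scan (writes nothing)

A drive that fails the SMART health check or throws read errors in badblocks while still brand new is a straightforward RMA candidate.]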