Displaying 6 results from an estimated 6 matches for "sd12".
2007 Nov 17
11
slog tests on read throughput exhaustion (NFS)
...sd7  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0  0
sd8     0.0  0.0  0.0  0.0  0.0  0.0  0.0  0  0
sd9     0.0  0.0  0.0  0.0  0.0  0.0  0.0  0  0
sd10    0.0  0.0  0.0  0.0  0.0  0.0  0.0  0  0
sd11    0.0  0.0  0.0  0.0  0.0  0.0  0.0  0  0
sd12    0.0  0.0  0.0  0.0  0.0  0.0  0.0  0  0
sd13    0.0  0.0  0.0  0.0  0.0  0.0  0.0  0  0
sd14    0.0  0.0  0.0  0.0  0.0  0.0  0.0  0  0
sd15    0.0  0.0  0.0  0.0  0.0  0.0  0.0  0  0
sd16    0.0  0.0  0.0  0.0  0.0  0.0  0.0  0  0 ...
2009 Apr 15
5
StorageTek 2540 performance radically changed
...0  0.0    0.0   0.0      0.0  0.0   0.0    0.0  0   0
sd1   0.0    0.0   0.0      0.0  0.0   0.0    0.0  0   0
sd2   1.3    0.3   6.8      2.0  0.0   0.0    1.7  0   0
sd10  0.0   99.3   0.0  12698.3  0.0  32.2  324.5  0  97
sd11  0.3  105.9  38.4  12753.3  0.0  31.8  299.9  0  99
sd12  0.0  100.2   0.0  12095.9  0.0  26.4  263.8  0  82
sd13  0.0  102.3   0.0  12959.7  0.0  31.0  303.4  0  94
sd14  0.1   97.2  12.8  12291.8  0.0  30.4  312.0  0  92
sd15  0.0   99.7   0.0  12057.5  0.0  26.0  260.8  0  80
sd16  0.1   98.8  12.8  12634.3  0.0  31.9  322.1  0  96 ...
2006 Oct 05
13
Unbootable system recovery
I have just recently (physically) moved a system with 16 hard drives (for the array) and 1 OS drive; and in doing so, I needed to pull out the 16 drives so that it would be light enough for me to lift.
When I plugged the drives back in, initially, it went into a panic-reboot loop. After doing some digging, I deleted the file /etc/zfs/zpool.cache.
When I try to import the pool using the zpool
2009 Jan 13
12
OpenSolaris Better Than Solaris10u6 with regards to ARECA Raid Card
Under Solaris 10 u6, no matter how I configured my ARECA 1261ML RAID card,
I got errors on all drives that result from SCSI timeout errors.
yoda:~ # tail -f /var/adm/messages
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Requested Block: 239683776 Error Block: 239683776
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Vendor: Seagate
2007 Jan 11
4
Help understanding some benchmark results
G'day, all,
So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern.
However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now
2008 Jan 17
9
ATA UDMA data parity error
...41:19 san sata: [ID 514995 kern.info] SATA Gen1 signaling speed (1.5Gbps)
Jan 17 06:41:19 san sata: [ID 349649 kern.info] Supported queue depth 32
Jan 17 06:41:19 san sata: [ID 349649 kern.info] capacity = 1953525168 sectors
Jan 17 06:41:19 san scsi: [ID 193665 kern.info] sd12 at marvell88sx1: target 0 lun 0
Jan 17 06:41:19 san genunix: [ID 936769 kern.info] sd12 is /pci@0,0/pci10de,376@a/pci1033,125@0/pci11ab,11ab@6/disk@0,0
Jan 17 06:41:19 san genunix: [ID 408114 kern.info] /pci@0,0/pci10de,376@a/pci1033,125@0/pci11ab,11ab@6/disk@0,0 (s...