Displaying 9 results from an estimated 9 matches for "sd11".
2007 Nov 17 | 11 messages | slog tests on read throughput exhaustion (NFS)
...sd6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd10 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd11 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd12 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd13 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd14 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd15 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd16...
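Rows like these are the per-device lines of Solaris extended iostat output; a minimal sketch of how such a report is produced, assuming a stock Solaris iostat, is:

    # extended per-device statistics (r/s, w/s, kr/s, kw/s, wait,
    # actv, svc_t, %w, %b), refreshed every 5 seconds
    iostat -x 5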
2008 Jan 10 | 2 messages | NCQ
...460.7 0.0 46947.7 0.0 0.0 5.5 11.8 0 73
sd3 426.7 0.0 43726.4 0.0 5.6 0.8 14.9 73 79
sd5 424.7 0.0 44456.4 0.0 6.6 0.9 17.7 83 90
sd9 430.7 0.0 44266.5 0.0 5.8 0.8 15.5 78 84
sd10 421.7 0.0 44451.4 0.0 6.3 0.9 17.1 80 87
sd11 421.7 0.0 44196.1 0.0 5.8 0.8 15.8 75 80
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
z           1.06T  3.81T  2.92K      0   360M      0
  raidz1     564G  2.86T  1.51K...
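The capacity/operations/bandwidth table is the summary printed by zpool iostat; assuming the pool name z from the excerpt, a sketch:

    # per-vdev capacity, operations, and bandwidth, every 5 seconds
    zpool iostat -v z 5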
2006 Oct 26 | 2 messages | experiences with zpool errors and glm flipouts
...: Resetting scsi bus, got incorrect phase from (1,0)
scsi: WARNING: /pci@1d,700000/scsi@4 (glm0):
got SCSI bus reset
genunix: NOTICE: glm0: fault detected in device; service still available
genunix: NOTICE: glm0: got SCSI bus reset
scsi: WARNING: /pci@1d,700000/scsi@4/sd@1,0 (sd11):
auto request sense failed (reason=reset)
Eventually I had to drive in to work to reboot the machine, although
the system did not tip over entirely. After a reboot to single-user mode,
the same symptoms recurred, since it seems the resilver kicked off
again... and at a certain stage hit this...
2009 Apr 15 | 5 messages | StorageTek 2540 performance radically changed
...device r/s w/s kr/s kw/s wait actv svc_t %w %b
sd0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd2 1.3 0.3 6.8 2.0 0.0 0.0 1.7 0 0
sd10 0.0 99.3 0.0 12698.3 0.0 32.2 324.5 0 97
sd11 0.3 105.9 38.4 12753.3 0.0 31.8 299.9 0 99
sd12 0.0 100.2 0.0 12095.9 0.0 26.4 263.8 0 82
sd13 0.0 102.3 0.0 12959.7 0.0 31.0 303.4 0 94
sd14 0.1 97.2 12.8 12291.8 0.0 30.4 312.0 0 92
sd15 0.0 99.7 0.0 12057.5 0.0 26.0 260.8 0 80...
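The svc_t figures here are consistent with Little's law (average service time ≈ outstanding I/Os / throughput): for sd10, 32.2 active I/Os divided by 99.3 writes per second gives ≈ 0.324 s, in line with the reported 324.5 ms, so queue depth rather than raw disk speed dominates the latency.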
2008 Aug 12 | 2 messages | ZFS, SATA, LSI and stability
...8.05 upgraded to snv94. We again used a Supermicro X7DBE, but now with two LSI SAS3081E SAS controllers. And guess what? Now we get these error messages in /var/adm/messages:
Aug 11 18:20:52 thumper2 scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci8086,2690@1c/pci1000,3140@0/sd@5,0 (sd11):
Aug 11 18:20:52 thumper2 Error for Command: read(10) Error Level: Retryable
Aug 11 18:20:52 thumper2 scsi: [ID 107833 kern.notice] Requested Block: 1423173120 Error Block: 1423173120
Aug 11 18:20:52 thumper2 scsi: [ID 107833 kern.notice] Vendor: ATA...
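One way to tally such retryable errors per device, assuming a standard Solaris iostat, is its error-summary mode:

    # cumulative soft/hard/transport error counts plus the
    # Vendor/Product strings for every device, sd11 included
    iostat -En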
2006 Oct 05 | 13 messages | Unbootable system recovery
I have just recently (physically) moved a system with 16 hard drives (for the array) and 1 OS drive, and in doing so I needed to pull out the 16 drives so that it would be light enough for me to lift.
When I plugged the drives back in, it initially went into a panic-reboot loop. After doing some digging, I deleted the file /etc/zfs/zpool.cache.
When I try to import the pool using the zpool...
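The recovery being attempted corresponds roughly to the following sequence; the pool name tank is a placeholder, and -f forces an import of a pool that still looks active:

    # remove the stale cache so the boot-time auto-import stops panicking
    rm /etc/zfs/zpool.cache
    # scan attached devices for importable pools, then import by name
    zpool import
    zpool import -f tank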
2009 Jan 13 | 12 messages | OpenSolaris better than Solaris10u6 with regards to ARECA Raid Card
Under Solaris 10 u6, no matter how I configured my ARECA 1261ML RAID card,
I got errors on all drives resulting from SCSI timeouts.
yoda:~ # tail -f /var/adm/messages
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Requested Block: 239683776 Error Block: 239683776
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Vendor: Seagate
2007 Jan 11 | 4 messages | Help understanding some benchmark results
G'day, all,
So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern.
However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now...
2008 Jan 17 | 9 messages | ATA UDMA data parity error
...41:20 san sata: [ID 514995 kern.info] SATA Gen1 signaling speed (1.5Gbps)
Jan 17 06:41:20 san sata: [ID 349649 kern.info] Supported queue depth 32
Jan 17 06:41:20 san sata: [ID 349649 kern.info] capacity = 1953525168 sectors
Jan 17 06:41:20 san scsi: [ID 193665 kern.info] sd11 at marvell88sx2: target 5 lun 0
Jan 17 06:41:20 san genunix: [ID 936769 kern.info] sd11 is /pci@0,0/pci10de,376@a/pci1033,125@0,1/pci11ab,11ab@6/disk@5,0
Jan 17 06:41:20 san genunix: [ID 408114 kern.info] /pci@0,0/pci10de,376@a/pci1033,125@0,1/pci11ab,11ab@6/disk@5,...
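To map an sd instance such as sd11 back to its /pci@... physical path outside of boot messages, two common approaches on a standard Solaris install (the c-t-d name below is a hypothetical example):

    # /dev/dsk names are symlinks whose targets are the physical paths
    ls -l /dev/dsk/c1t5d0s0
    # the kernel's persistent instance-number mapping
    grep '"sd"' /etc/path_to_inst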