search for: sd13

Displaying 8 results from an estimated 8 matches for "sd13".

2007 Nov 17
11
slog tests on read throughput exhaustion (NFS)
...
sd8     0.0   0.0   0.0   0.0   0.0   0.0    0.0   0   0
sd9     0.0   0.0   0.0   0.0   0.0   0.0    0.0   0   0
sd10    0.0   0.0   0.0   0.0   0.0   0.0    0.0   0   0
sd11    0.0   0.0   0.0   0.0   0.0   0.0    0.0   0   0
sd12    0.0   0.0   0.0   0.0   0.0   0.0    0.0   0   0
sd13    0.0   0.0   0.0   0.0   0.0   0.0    0.0   0   0
sd14    0.0   0.0   0.0   0.0   0.0   0.0    0.0   0   0
sd15    0.0   0.0   0.0   0.0   0.0   0.0    0.0   0   0
sd16    0.0   0.0   0.0   0.0   0.0   0.0    0.0   0   0
                 extended device statistics
device   r/s   w/s ...
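The figures above are in the layout of Solaris extended device statistics; a command along the following lines produces a similar rolling report (the 5-second interval is an assumption, the thread does not show the exact invocation):

    # print extended per-device statistics every 5 seconds
    # (interval is an assumption; flags match the "extended device statistics" header above)
    iostat -x 5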
2007 Oct 09
7
ZFS 60 second pause times to read 1K
Every day we see pause times of sometimes 60 seconds to read 1K of a file, for local reads as well as over NFS, in a test setup. We have an x4500 set up as a single 4*(raidz2 9+2)+2 spare pool, with the file systems mounted over v5 krb5 NFS and also accessed directly. The pool is 20TB and is using . There are three filesystems: backup, test and home. Test has about 20 million files and uses 4TB.
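For readers unfamiliar with that shorthand, "4*(raidz2 9+2)+2 spare" describes four raidz2 vdevs of eleven disks each plus two hot spares. A minimal sketch of the layout is below; the pool name "tank" and all cXtYdZ disk names are placeholders, not taken from the thread:

    # one raidz2 vdev of 9 data + 2 parity disks; the thread's pool repeats this
    # vdev four times and adds two hot spares (all names here are hypothetical)
    zpool create tank \
        raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c1t0d0 c1t1d0 c1t2d0 \
        spare c1t3d0 c1t4d0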
2009 Apr 15
5
StorageTek 2540 performance radically changed
...     0.0   0.0    0.0      0.0  0.0   0.0    0.0   0   0
sd2     1.3   0.3    6.8      2.0  0.0   0.0    1.7   0   0
sd10    0.0  99.3    0.0  12698.3  0.0  32.2  324.5   0  97
sd11    0.3 105.9   38.4  12753.3  0.0  31.8  299.9   0  99
sd12    0.0 100.2    0.0  12095.9  0.0  26.4  263.8   0  82
sd13    0.0 102.3    0.0  12959.7  0.0  31.0  303.4   0  94
sd14    0.1  97.2   12.8  12291.8  0.0  30.4  312.0   0  92
sd15    0.0  99.7    0.0  12057.5  0.0  26.0  260.8   0  80
sd16    0.1  98.8   12.8  12634.3  0.0  31.9  322.1   0  96
sd17    0.0  99.0    0.0  12522.2  0.0  30.9  312.0   0  94 ...
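For scale, the quoted write rates work out to roughly 12698.3 KB/s / 1024 ≈ 12.4 MB/s per member disk, with %b between 80 and 99, i.e. the drives were close to saturation during that sample.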
2006 Oct 05
13
Unbootable system recovery
I have just recently (physically) moved a system with 16 hard drives (for the array) and 1 OS drive; in doing so, I needed to pull out the 16 drives so that it would be light enough for me to lift. When I plugged the drives back in, it initially went into a panic-reboot loop. After doing some digging, I deleted the file /etc/zfs/zpool.cache. When I try to import the pool using the zpool
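The recovery path described there (discard the stale cache file, then re-import) generally amounts to something like the sketch below; the pool name "tank" and the -f flag are assumptions, since the excerpt is cut off before the actual command:

    # move the stale cache aside rather than deleting it outright
    mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak
    # list pools visible for import, then import the one in question by name
    zpool import
    zpool import -f tank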
2009 Jan 13
12
OpenSolaris better than Solaris 10 u6 with regards to ARECA Raid Card
Under Solaris 10 u6, no matter how I configured my ARECA 1261ML RAID card, I got errors on all drives resulting from SCSI timeouts.
yoda:~ # tail -f /var/adm/messages
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Requested Block: 239683776 Error Block: 239683776
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Vendor: Seagate
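When messages like those accumulate, per-device error counters give a quick picture of how widespread the timeouts are; the command below is a general Solaris technique, not something quoted in the thread:

    # summarize accumulated soft, hard and transport error counts for every disk
    iostat -En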
2007 Jan 11
4
Help understanding some benchmark results
G'day, all. So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern. However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now
2007 Oct 27
14
X4500 device disconnect problem persists
After applying 125205-07 on two X4500 machines running Sol10U4 and removing "set sata:sata_func_enable = 0x5" from /etc/system to re-enable NCQ, I am again observing drive disconnect error messages. This is in spite of the patch description, which claims multiple fixes in this area:
6587133 repeated DMA command timeouts and device resets on x4500
6538627 x4500 message logs contain multiple
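The /etc/system tuning mentioned there can be checked before and after patching; the grep below is a generic verification step rather than a command quoted from the thread (changes to /etc/system only take effect after a reboot):

    # confirm whether the NCQ workaround is still present
    grep sata_func_enable /etc/system
    # the line the poster removed to re-enable NCQ looked like this:
    # set sata:sata_func_enable = 0x5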
2008 Jan 17
9
ATA UDMA data parity error
...41:19 san sata: [ID 514995 kern.info] SATA Gen1 signaling speed (1.5Gbps)
Jan 17 06:41:19 san sata: [ID 349649 kern.info] Supported queue depth 32
Jan 17 06:41:19 san sata: [ID 349649 kern.info] capacity = 1953525168 sectors
Jan 17 06:41:19 san scsi: [ID 193665 kern.info] sd13 at marvell88sx1: target 1 lun 0
Jan 17 06:41:19 san genunix: [ID 936769 kern.info] sd13 is /pci@0,0/pci10de,376@a/pci1033,125@0/pci11ab,11ab@6/disk@1,0
Jan 17 06:41:19 san genunix: [ID 408114 kern.info] /pci@0,0/pci10de,376@a/pci1033,125@0/pci11ab,11ab@6/disk@1,0 (s...
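To relate an sdN instance from messages like these back to a cXtYdZ name, the instance table and the /dev/dsk symlinks can be cross-referenced; the grep patterns below are assumptions built from the path quoted in the log, not commands taken from the thread:

    # find the physical path bound to sd instance 13
    grep ' 13 "sd"' /etc/path_to_inst
    # find which cXtYdZ link points at that path
    ls -l /dev/dsk | grep 'pci11ab,11ab@6/disk@1,0'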