Displaying 11 results from an estimated 11 matches for "sd10".
2007 Nov 17
11
slog tests on read throughput exhaustion (NFS)
...d5 0.0 118.0 0.0 15099.9 0.0 35.0 296.7 0 100
sd6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd10 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd11 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd12 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd13 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd14 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd15...
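Per-device statistics in this layout are typically produced by the Solaris iostat utility in extended mode; a minimal sketch of the invocation, with the 5-second interval as an illustrative assumption:

  iostat -x 5   # extended device statistics: r/s, w/s, kr/s, kw/s, wait, actv, svc_t, %w, %b per sd device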
2008 May 29
1
>1TB ZFS thin-provisioned partition prevents OpenSolaris from booting.
...ot.
Upon reboot, the console reports the following errors.
WARNING: /scsi_vhci/disk@g0100015c55fb40900002a00483e34c6 (sd9):
disk has 3221225472 blocks, which is too large for a 32-bit kernel
WARNING: /iscsi/disk@0000iqn.1986-03.com.sun%3A02%3Aee2143f2-f5ce-6414-fcda-8035dacfc3730001,0 (sd10):
disk has 3221225472 blocks, which is too large for a 32-bit kernel
And it continues to do this for the other partition I had created.
Ultimately coreadm:default fails badly
and the server is stuck at
svc.startd[7]: Lost repository event due to disconnection.
I am on a Poweredge 2650 with 2xXeon...
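For scale: 3221225472 blocks x 512 bytes per block = 1,649,267,441,664 bytes, i.e. 1.5 TiB, above the roughly 1 TiB (2^31 x 512-byte block) ceiling a 32-bit Solaris kernel can address on a single device, hence the warning. A quick way to confirm which kernel is running, shown as a sketch using a standard Solaris command:

  isainfo -kv   # reports whether the running kernel is 32-bit or 64-bit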
2008 Jan 10
2
NCQ
...452.7 0.0 46850.7 0.0 0.0 6.0 13.3 0 79
sd8 460.7 0.0 46947.7 0.0 0.0 5.5 11.8 0 73
sd3 426.7 0.0 43726.4 0.0 5.6 0.8 14.9 73 79
sd5 424.7 0.0 44456.4 0.0 6.6 0.9 17.7 83 90
sd9 430.7 0.0 44266.5 0.0 5.8 0.8 15.5 78 84
sd10 421.7 0.0 44451.4 0.0 6.3 0.9 17.1 80 87
sd11 421.7 0.0 44196.1 0.0 5.8 0.8 15.8 75 80
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
z 1.06T 3.81...
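The pool-level columns at the end of the excerpt match zpool iostat output; a minimal sketch of the command, assuming the pool is named z as shown and using an illustrative 10-second interval:

  zpool iostat z 10   # capacity (used/avail) plus read/write operations and bandwidth for pool z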
2009 Apr 15
5
StorageTek 2540 performance radically changed
...ching gone!):
extended device statistics
device r/s w/s kr/s kw/s wait actv svc_t %w %b
sd0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd2 1.3 0.3 6.8 2.0 0.0 0.0 1.7 0 0
sd10 0.0 99.3 0.0 12698.3 0.0 32.2 324.5 0 97
sd11 0.3 105.9 38.4 12753.3 0.0 31.8 299.9 0 99
sd12 0.0 100.2 0.0 12095.9 0.0 26.4 263.8 0 82
sd13 0.0 102.3 0.0 12959.7 0.0 31.0 303.4 0 94
sd14 0.1 97.2 12.8 12291.8 0.0 30.4 312.0 0 92...
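Reading the sd10 line as a rough cross-check: 12698.3 kw/s / 99.3 w/s is about 128 KB per write, and by Little's law 32.2 outstanding ops / 0.3245 s average service time is about 99 ops/s, consistent with the reported w/s; svc_t near 300 ms with %b near 100 suggests the devices are effectively saturated.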
2006 Oct 05
13
Unbootable system recovery
I have just recently (physically) moved a system with 16 hard drives (for the array) and one OS drive; to make it light enough to lift, I needed to pull out the 16 drives.
When I plugged the drives back in, the system initially went into a panic-reboot loop. After doing some digging, I deleted the file /etc/zfs/zpool.cache.
When I try to import the pool using the zpool
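The usual sequence after removing /etc/zfs/zpool.cache is to re-import the pool from its on-disk labels; a minimal sketch (the pool name is not given in the excerpt, so "tank" is a placeholder):

  zpool import          # scan attached devices and list pools that can be imported
  zpool import -f tank  # force-import a pool that was not cleanly exported ("tank" is hypothetical)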
2008 Aug 12
2
ZFS, SATA, LSI and stability
...140@0 (mpt1):
Aug 11 17:47:47 thumper2 Log info 0x31123000 received for target 4.
Aug 11 17:47:47 thumper2 scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc
Aug 11 17:47:48 thumper2 scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci8086,2690@1c/pci1000,3140@0/sd@4,0 (sd10):
Aug 11 17:47:48 thumper2 Error for Command: read(10) Error Level: Retryable
Aug 11 17:47:48 thumper2 scsi: [ID 107833 kern.notice] Requested Block: 252165120 Error Block: 252165120
Aug 11 17:47:48 thumper2 scsi: [ID 107833 kern.notice] Vendor: ATA...
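A common follow-up for retryable read errors like this is to check the per-device error counters and the fault-manager error log; a sketch using standard Solaris tools:

  iostat -E    # soft/hard/transport error counts per device (look for the sd10 entry)
  fmdump -eV   # verbose listing of the underlying error reports logged by FMA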
2009 Jan 17
2
Comparison between the S-TEC Zeus and the Intel X25-E ??
I'm looking at the newly-orderable (via Sun) STEC Zeus SSDs, and they're
outrageously priced.
http://www.stec-inc.com/product/zeusssd.php
I just looked at the Intel X25-E series, and they look comparable in
performance, at about 20% of the cost.
http://www.intel.com/design/flash/nand/extreme/index.htm
Can anyone enlighten me as to any possible difference between an STEC
2009 Jan 13
12
OpenSolaris better than Solaris 10u6 with regard to ARECA RAID card
Under Solaris 10 u6, no matter how I configured my ARECA 1261ML RAID card,
I got errors on all drives resulting from SCSI timeouts.
yoda:~ # tail -f /var/adm/messages
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Requested Block: 239683776 Error Block: 239683776
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Vendor: Seagate
2007 Jan 11
4
Help understanding some benchmark results
G'day, all,
So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern.
However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now
2008 Jan 17
9
ATA UDMA data parity error
...41:19 san sata: [ID 514995 kern.info] SATA Gen1 signaling speed (1.5Gbps)
Jan 17 06:41:19 san sata: [ID 349649 kern.info] Supported queue depth 32
Jan 17 06:41:19 san sata: [ID 349649 kern.info] capacity = 1953525168 sectors
Jan 17 06:41:19 san scsi: [ID 193665 kern.info] sd10 at marvell88sx0: target 5 lun 0
Jan 17 06:41:19 san genunix: [ID 936769 kern.info] sd10 is /pci@0,0/pci10de,376@a/pci1033,125@0/pci11ab,11ab@4/disk@5,0
Jan 17 06:41:19 san genunix: [ID 408114 kern.info] /pci@0,0/pci10de,376@a/pci1033,125@0/pci11ab,11ab@4/disk@5,0 (s...
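To tie the sd10 instance back to a disk name, the instance-to-path mapping can be cross-referenced; a sketch using the /devices path reported in the log above:

  grep '"sd"' /etc/path_to_inst | grep ' 10 '   # instance 10 of the sd driver and its physical path
  ls -l /dev/dsk | grep 'disk@5,0'              # which cXtYdZ links point at that path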
2010 Mar 27
16
zpool split problem?
Zpool split is a wonderful feature and it seems to work well,
and the choice of which disk got which name was perfect!
But there seems to be an odd anomaly (at least with b132).
Started with c0t1d0s0 running b132 (root pool is called rpool)
Attached c0t0d0s0 and waited for it to resilver
Rebooted from c0t0d0s0
zpool split rpool spool
Rebooted from c0t0d0s0, both rpool and spool were mounted
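For reference, the sequence described corresponds roughly to these commands (pool and device names taken from the post):

  zpool attach rpool c0t1d0s0 c0t0d0s0   # mirror the root pool onto the second disk
  zpool status rpool                     # watch until the resilver completes
  zpool split rpool spool                # split the mirror, creating the new pool "spool"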