search for: c1t11d0

Displaying 9 results from an estimated 9 matches for "c1t11d0".

2008 Jul 11
3
Linux equivalent of 'format' in solaris
.../sbus@3,0/SUNW,fas@3,8800000/sd@3,0
2. c0t4d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248> /sbus@3,0/SUNW,fas@3,8800000/sd@4,0
3. c1t10d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248> /sbus@3,0/QLGC,isp@0,10000/sd@a,0
4. c1t11d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248> /sbus@3,0/QLGC,isp@0,10000/sd@b,0
5. c1t12d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248> /sbus@3,0/QLGC,isp@0,10000/sd@c,0
Specify disk (enter its number):
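The geometry in the `format` listing above implies each drive's usable size; a minimal sketch, assuming 512-byte sectors (on Linux, `lsblk -b` or `fdisk -l` would report sizes directly, which is the closest equivalent to `format`'s listing):

```shell
# Size implied by the `format` geometry above (512-byte sectors assumed;
# this is plain POSIX shell arithmetic, runnable anywhere).
cyl=7506 hd=19 sec=248               # from the SUN18G entries in the listing
blocks=$((cyl * hd * sec))           # cylinders * heads * sectors per track
bytes=$((blocks * 512))
echo "$blocks blocks, $bytes bytes"  # ~18.1 GB, matching the SUN18G label
```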
2009 Dec 28
0
[storage-discuss] high read iops - more memory for arc?
> ... 0.0 34.0 0.0 84.1 1 100 c1t8d0
> 407.1 0.0 1.9 0.0 0.0 31.2 0.0 76.6 1 100 c1t9d0
> 407.5 0.0 2.0 0.0 0.0 33.2 0.0 81.4 1 100 c1t10d0
> 402.8 0.0 2.0 0.0 0.0 33.5 0.0 83.2 1 100 c1t11d0
> 408.9 0.0 2.0 0.0 0.0 32.8 0.0 80.3 1 100 c1t12d0
> 9.6 10.8 0.1 0.9 0.0 0.4 0.0 20.1 0 17 c1t13d0
> 0.0 22.7 0.0 0.5 0.0 0.5 0.0 22.8 0 33 c1t14d0
...
2009 Dec 24
1
high read iops - more memory for arc?
...31.9 0.0 78.8 1 100 c1t7d0
404.1 0.0 1.9 0.0 0.0 34.0 0.0 84.1 1 100 c1t8d0
407.1 0.0 1.9 0.0 0.0 31.2 0.0 76.6 1 100 c1t9d0
407.5 0.0 2.0 0.0 0.0 33.2 0.0 81.4 1 100 c1t10d0
402.8 0.0 2.0 0.0 0.0 33.5 0.0 83.2 1 100 c1t11d0
408.9 0.0 2.0 0.0 0.0 32.8 0.0 80.3 1 100 c1t12d0
9.6 10.8 0.1 0.9 0.0 0.4 0.0 20.1 0 17 c1t13d0
0.0 22.7 0.0 0.5 0.0 0.5 0.0 22.8 0 33 c1t14d0
Is this an indicator that we need more physical memory? From http://blogs.sun.com/brendan/...
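Those rows are `iostat -xn` output, where column 8 is asvc_t (average service time, ms) and column 10 is %b (percent busy). Disks sitting at %b 100 with asvc_t near 80 ms are saturated on reads. A sketch that flags such disks, using two sample rows copied from the post:

```shell
# iostat -xn columns: r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
# Flag disks that are pegged: %b at 100 and average service time over 50 ms.
awk '$10 == 100 && $8 > 50 { print $11, $8 "ms" }' <<'EOF'
404.1 0.0 1.9 0.0 0.0 34.0 0.0 84.1 1 100 c1t8d0
9.6 10.8 0.1 0.9 0.0 0.4 0.0 20.1 0 17 c1t13d0
EOF
```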
2007 Jan 10
4
[osol-discuss] Re: bare metal ZFS ? How To ?
this is off list on purpose.
> run zpool import, it will search all attached storage and give you a list
> of available pools. then run zpool import poolname, or add a -f if you
> didn't export before the install/upgrade.
assume worst case: someone walks up to you and drops an array on you. They say "its ZFS an' I need that der stuff 'k?" all
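The recovery flow quoted above can be sketched as follows (Solaris/ZFS host assumed; `tank` is a hypothetical pool name). The runnable part just shows how pool names can be pulled from the scan output:

```shell
# Sketch of the flow described in the reply (not run here):
#   zpool import              # scan attached storage for importable pools
#   zpool import tank         # import one by name
#   zpool import -f tank      # force, if the pool was never exported
# Pool names can be scraped from the scan output's "pool:" lines
# (sample output below is illustrative, with a hypothetical pool name):
awk '$1 == "pool:" { print $2 }' <<'EOF'
  pool: tank
    id: 1234567890
 state: ONLINE
EOF
```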
2010 Sep 29
10
Resliver making the system unresponsive
....0G resilvered, 22.42% done
config:

        NAME          STATE     READ WRITE CKSUM
        data01        DEGRADED     0     0     0
          raidz2-0    ONLINE       0     0     0
            c1t8d0    ONLINE       0     0     0
            c1t9d0    ONLINE       0     0     0
            c1t10d0   ONLINE       0     0     0
            c1t11d0   ONLINE       0     0     0
            c1t12d0   ONLINE       0     0     0
            c1t13d0   ONLINE       0     0     0
            c1t14d0   ONLINE       0     0     0
          raidz2-1    DEGRADED     0     0     0
            c1t22d0   ONLINE       0     0     0
            c1t15d0   ONLINE       0     0     0
            c1t16d...
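Resilver progress figures like the "22.42% done" above can be scraped from `zpool status` output for monitoring; a minimal sketch (the sample line below is a condensed, illustrative status line, not verbatim from the thread):

```shell
# Pull the "NN.NN% done" token out of a zpool status line with POSIX awk.
awk 'match($0, /[0-9.]+% done/) { print substr($0, RSTART, RLENGTH) }' <<'EOF'
 scrub: resilver in progress, 22.42% done
EOF
```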
2007 Jul 31
0
controller number mismatch
...0.1 1.1 1 5 c1t5d0
0.5 53.7 32.0 106.7 0.0 0.1 0.1 1.6 0 3 c1t8d0
0.5 51.8 32.0 106.7 0.0 0.1 0.1 1.7 0 3 c1t9d0
0.2 52.5 10.7 106.9 0.0 0.1 0.1 1.3 0 3 c1t10d0
0.3 51.3 21.3 107.5 0.0 0.1 0.1 1.5 0 3 c1t11d0
0.3 52.3 21.3 107.6 0.0 0.1 0.1 1.7 0 3 c1t12d0

$ zpool iostat -v 6
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
zpool         930G   566G     16    114   41.5...
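When controller numbers reported by `iostat` and `zpool` disagree, a first step is to tally which controllers the cXtYdZ device names actually reference; a small portable sketch:

```shell
# Extract the controller prefix (cN) from cXtYdZ device names and
# list the distinct controllers seen. Sample names from the thread.
printf '%s\n' c1t10d0 c1t11d0 c2t13d0 |
  sed 's/^\(c[0-9]*\).*/\1/' |
  sort -u
```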
2010 Aug 19
0
Unable to mount legacy pool in to zone
...ol            ONLINE       0     0     0
  raidz1        ONLINE       0     0     0
    c1t8d0      ONLINE       0     0     0
    c1t9d0      ONLINE       0     0     0
    c1t10d0     ONLINE       0     0     0
    spare       ONLINE       0     0     0
      c1t11d0   ONLINE       0     0     0
      c2t13d0   ONLINE       0     0     0
    c1t12d0     ONLINE       0     0     0
    c1t13d0     ONLINE       0     0     0
  raidz1        ONLINE       0     0     0
    c2t8d0      ONLINE       0     0     0
    c2t9d0...
2007 Jan 09
2
ZFS Hot Spare Behavior
I physically removed a disk (c3t8d0 used by ZFS 'pool01') from a 3310 JBOD connected to a V210 running s10u3 (11/06) and 'zpool status' reported this:
# zpool status
  pool: pool01
 state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the
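After a pull like the one described, the affected devices are the non-ONLINE rows in the `zpool status` config section; a sketch that filters them out (the sample rows below are illustrative, using the pool and disk names from the post, with an assumed UNAVAIL state):

```shell
# Print every config row whose STATE column is not ONLINE,
# skipping the header row.
awk '$2 != "" && $2 != "ONLINE" && $2 != "STATE" { print $1, $2 }' <<'EOF'
NAME      STATE     READ WRITE CKSUM
pool01    DEGRADED     0     0     0
  c3t8d0  UNAVAIL      0     0     0
EOF
```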
2010 Oct 08
74
Performance issues with iSCSI under Linux
Hi! We're trying to pinpoint our performance issues and we could use all the help the community can provide. We're running the latest version of Nexenta on a pretty powerful machine (4x Xeon 7550, 256GB RAM, 12x 100GB Samsung SSDs for the cache, 50GB Samsung SSD for the ZIL, 10GbE on a dedicated switch, 11x pairs of 15K HDDs for the pool). We're connecting a single Linux