search for: sd6

Displaying 14 results from an estimated 14 matches for "sd6".

2008 Jul 28
1
zpool status my_pool shows a pulled disk (c1t6d0) as ONLINE ???
...==== newserver:/# tail -20 /var/adm/messages
Jul 9 14:49:26 sbknwsxapd3 scsi: [ID 107833 kern.notice] Device is gone
Jul 9 14:49:26 sbknwsxapd3 last message repeated 1 time
Jul 9 14:49:26 sbknwsxapd3 scsi: [ID 107833 kern.warning] WARNING: /pci@0/pci@0/pci@2/scsi@0/sd@6,0 (sd6):
Jul 9 14:49:26 sbknwsxapd3 offline or reservation conflict
Jul 9 14:51:01 sbknwsxapd3 scsi: [ID 107833 kern.notice] Device is gone
Jul 9 14:51:01 sbknwsxapd3 last message repeated 1 time
Jul 9 14:51:01 sbknwsxapd3 scsi: [ID 107833 kern.warning] WARNING: /pci@0/pci@0/pci@2/s...
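The subject and log point at the classic symptom: ZFS only notices a pulled disk once it tries to do I/O to it. A minimal diagnostic sketch (the pool name comes from the subject; everything else is an assumption, not from the thread):

    # Sketch: make ZFS re-evaluate a pulled disk that still shows ONLINE.
    zpool status -v my_pool   # current, possibly stale, view of the pool
    cfgadm -al                # what the OS itself thinks of the drive slot
    zpool scrub my_pool       # force I/O to every device in the pool
    zpool status -v my_pool   # the pulled disk should now show UNAVAIL/FAULTED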
2014 Sep 25
3
[5/5] ARM: tegra: jetson-tk1: enable GK20A GPU
..."okay";
> +
> +	vdd-supply = <&vdd_gpu>;
> + };
> +
>  pinmux: pinmux@0,70000868 {
>  	pinctrl-names = "default";
>  	pinctrl-0 = <&state_default>;
> @@ -1505,7 +1511,7 @@
>  		regulator-always-on;
>  	};
>
> -	sd6 {
> +	vdd_gpu: sd6 {
>  		regulator-name = "+VDD_GPU_AP";
>  		regulator-min-microvolt = <650000>;
>  		regulator-max-microvolt = <1200000>;
--
If you put it off long enough, it might go away.
2007 Nov 17
11
slog tests on read throughput exhaustion (NFS)
...d1     0.0    0.0      0.0      0.0   0.0   0.0     0.0   0    0
sd2       0.0    0.0      0.0      0.0   0.0   0.0     0.0   0    0
sd3       0.0    0.0      0.0      0.0   0.0   0.0     0.0   0    0
sd4       0.0    0.0      0.0      0.0   0.0   0.0     0.0   0    0
sd5       0.0  118.0      0.0  15099.9   0.0  35.0   296.7   0  100
sd6       0.0    0.0      0.0      0.0   0.0   0.0     0.0   0    0
sd7       0.0    0.0      0.0      0.0   0.0   0.0     0.0   0    0
sd8       0.0    0.0      0.0      0.0   0.0   0.0     0.0   0    0
sd9       0.0    0.0      0.0      0.0   0.0   0.0     0.0   0    0
sd10      0.0    0.0      0.0      0.0   0.0   0.0     0.0   0    0
sd1...
2014 Sep 25
0
[5/5] ARM: tegra: jetson-tk1: enable GK20A GPU
...<&vdd_gpu>;
>> + };
>> +
>>  pinmux: pinmux@0,70000868 {
>>  	pinctrl-names = "default";
>>  	pinctrl-0 = <&state_default>;
>> @@ -1505,7 @@ -1511,7 @@
>>  		regulator-always-on;
>>  	};
>>
>> -	sd6 {
>> +	vdd_gpu: sd6 {
>>  		regulator-name = "+VDD_GPU_AP";
>>  		regulator-min-microvolt = <650000>;
>>  		regulator-max-microvolt = <1200000>;
>
2014 May 19
0
[PATCH 5/5] ARM: tegra: jetson-tk1: enable GK20A GPU
...0,12 @@
 	};
 };

+	gpu@0,57000000 {
+		status = "okay";
+
+		vdd-supply = <&vdd_gpu>;
+	};
+
 	pinmux: pinmux@0,70000868 {
 		pinctrl-names = "default";
 		pinctrl-0 = <&state_default>;
@@ -1505,7 +1511,7 @@
 		regulator-always-on;
 	};

-	sd6 {
+	vdd_gpu: sd6 {
 		regulator-name = "+VDD_GPU_AP";
 		regulator-min-microvolt = <650000>;
 		regulator-max-microvolt = <1200000>;
--
1.9.2
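Since the same hunk appears three times above, here is a sketch of what it produces once applied (node placement and indentation are assumptions; only the properties shown come from the patch): the existing sd6 regulator node gains a phandle label, vdd_gpu, and the newly enabled GK20A node references it through its vdd-supply property.

    /* Sketch of the applied result, not a verbatim excerpt of the Jetson TK1 DTS. */
    gpu@0,57000000 {
            status = "okay";           /* enable the GK20A node */
            vdd-supply = <&vdd_gpu>;   /* phandle reference to the regulator below */
    };

    vdd_gpu: sd6 {                     /* label added so the node can be referenced */
            regulator-name = "+VDD_GPU_AP";
            regulator-min-microvolt = <650000>;
            regulator-max-microvolt = <1200000>;
    };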
2014 May 19
10
[PATCH 0/5] drm/nouveau: platform devices and GK20A probing
This patch series is the final (?) step towards the initial support of GK20A, allowing it to be probed and used (currently at a very slow speed, and for offscreen rendering only) on the Jetson TK1 and Venice 2 boards. The main piece is the first patch, which adds platform device probing support to Nouveau. There are probably lots of things that need to be discussed about it, e.g.: * The way the
2008 Jan 10
2
NCQ
...impact on final speed.

[scrubbing, devs reordered for clarity]
                    extended device statistics
device    r/s    w/s     kr/s   kw/s  wait  actv  svc_t  %w  %b
sd2     454.7    0.0  47168.0    0.0   0.0   5.7   12.6   0  74
sd4     440.7    0.0  45825.9    0.0   0.0   5.5   12.4   0  78
sd6     445.7    0.0  46239.2    0.0   0.0   6.6   14.7   0  79
sd7     452.7    0.0  46850.7    0.0   0.0   6.0   13.3   0  79
sd8     460.7    0.0  46947.7    0.0   0.0   5.5   11.8   0  73
sd3     426.7    0.0  43726.4    0.0   5.6   0.8   14.9  73  79
sd5     424.7    0.0  44456.4    0.0   6.6   0.9   17.7  83  9...
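Both this excerpt and the one from the slog thread above are Solaris extended device statistics; the column set (r/s, w/s, kr/s, kw/s, wait, actv, svc_t, %w, %b) is what iostat -x prints. A sketch of how to collect a comparable sample (the interval is an assumption; the original posts don't show their exact invocation):

    # Sketch: print extended device statistics every 5 seconds.
    iostat -x 5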
2006 Oct 05
13
Unbootable system recovery
I have just recently (physically) moved a system with 16 hard drives (for the array) and one OS drive; to make it light enough to lift, I had to pull out the 16 drives. When I plugged the drives back in, it initially went into a panic-reboot loop. After some digging, I deleted the file /etc/zfs/zpool.cache. When I try to import the pool using the zpool
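For context, a sketch of the import step the post is cut off in the middle of (the pool name "tank" is a placeholder, not from the thread):

    # Sketch: after removing /etc/zfs/zpool.cache, re-import by scanning devices.
    zpool import          # scan attached devices and list importable pools
    zpool import -f tank  # import by name; -f if the pool looks in use by another host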
2009 Jan 13
12
OpenSolaris better than Solaris 10u6 with regards to ARECA RAID card
Under Solaris 10 u6, no matter how I configured my ARECA 1261ML RAID card, I got errors on all drives resulting from SCSI timeouts.

yoda:~ # tail -f /var/adm/messages
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Requested Block: 239683776 Error Block: 239683776
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Vendor: Seagate
2007 Apr 30
4
B62 AHCI and ZFS
...6769 kern.notice] sd3 is /pci@0,0/pci15d9,8180@1f,2/disk@2,0
Apr 27 09:30:04 weston genunix: [ID 408114 kern.notice] /pci@0,0/pci15d9,8180@1f,2/disk@2,0 (sd3) online
........
Apr 27 09:30:04 weston genunix: [ID 408114 kern.notice] /pci@0,0/pci15d9,8180@1f,2/disk@5,0 (sd6) online
Apr 27 09:30:04 weston unix: [ID 469452 kern.notice] NOTICE: ncrs: 64-bit driver module not found
Apr 27 09:30:04 weston unix: [ID 469452 kern.notice] NOTICE: hpfc: 64-bit driver module not found
Apr 27 09:30:04 weston unix: [ID 469452 kern.notice] NOTICE: adp: 64-bit driver module not fo...
2006 Jul 17
11
ZFS benchmarks w/8 disk raid - Quirky results, any thoughts?
Hi All, I've just built an 8-disk ZFS storage box, and I'm in the testing phase before I put it into production. I've run into some unusual results, and I was hoping the community could offer some suggestions. I've basically made the switch to Solaris on the promises of ZFS alone (yes, I'm that excited about it!), so naturally I'm looking
2007 Jan 11
4
Help understanding some benchmark results
G'day, all. So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern. Naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now
2007 Dec 09
8
zpool kernel panics.
Hi Folks, I've got a 3.9 TB zpool, and it is causing kernel panics on my Solaris 10 280R (SPARC) server. The message I get on panic is this:

panic[cpu1]/thread=2a100a95cc0: zfs: freeing free segment (offset=423713792 size=1024)

This seems to come about when the zpool is being used or being scrubbed - about twice a day at the moment. After the reboot, the scrub seems to have
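The snippet ends before any resolution. For what it's worth, the workaround historically discussed on this list for "freeing free segment" panics was to let ZFS warn instead of panicking on the failed assertion; this is an assumption about where the thread went, not a quote from it, and it can mask real corruption:

    * Sketch of the historical /etc/system workaround (not from this thread).
    * zfs_recover turns some fatal ZFS assertions into warnings; aok lets the
    * kernel continue past failed ASSERTs. Both can hide real damage.
    set zfs:zfs_recover=1
    set aok=1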
2020 Feb 27
2
[PATCH] Update the 5 year logo to 10 year logo
[The snippet for this message is raw git binary-patch data for the replacement logo image; nothing human-readable survives to excerpt.]