search for: sd8

Displaying 11 results from an estimated 11 matches for "sd8".

2007 Nov 17 (11 messages): slog tests on read throughput exhaustion (NFS)
...d3    0.0   0.0    0.0     0.0  0.0  0.0   0.0   0   0
sd4      0.0   0.0    0.0     0.0  0.0  0.0   0.0   0   0
sd5      0.0 118.0    0.0 15099.9  0.0 35.0 296.7   0 100
sd6      0.0   0.0    0.0     0.0  0.0  0.0   0.0   0   0
sd7      0.0   0.0    0.0     0.0  0.0  0.0   0.0   0   0
sd8      0.0   0.0    0.0     0.0  0.0  0.0   0.0   0   0
sd9      0.0   0.0    0.0     0.0  0.0  0.0   0.0   0   0
sd10     0.0   0.0    0.0     0.0  0.0  0.0   0.0   0   0
sd11     0.0   0.0    0.0     0.0  0.0  0.0   0.0   0   0
sd12     0.0   0.0    0.0     0.0  0.0  0.0   0.0   0   0
sd1...
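For context, per-device statistics in this layout come from Solaris iostat -x (columns r/s, w/s, kr/s, kw/s, wait, actv, svc_t, %w, %b). A minimal sketch of how output like the above would typically be captured while a write-heavy NFS test runs, with the device name as a placeholder:

    # Print extended device statistics every second
    iostat -x 1
    # Narrow the stream to one suspect device, e.g. the log device sd5
    iostat -x 1 | grep -w sd5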
2008 Jan 10 (2 messages): NCQ
...e      r/s   w/s    kr/s  kw/s wait actv svc_t  %w  %b
sd2     454.7   0.0 47168.0   0.0  0.0  5.7  12.6   0  74
sd4     440.7   0.0 45825.9   0.0  0.0  5.5  12.4   0  78
sd6     445.7   0.0 46239.2   0.0  0.0  6.6  14.7   0  79
sd7     452.7   0.0 46850.7   0.0  0.0  6.0  13.3   0  79
sd8     460.7   0.0 46947.7   0.0  0.0  5.5  11.8   0  73
sd3     426.7   0.0 43726.4   0.0  5.6  0.8  14.9  73  79
sd5     424.7   0.0 44456.4   0.0  6.6  0.9  17.7  83  90
sd9     430.7   0.0 44266.5   0.0  5.8  0.8  15.5  78  84
sd10    421.7   0.0 44451.4   0.0  6.3  0.9  17.1  80  8...
2010 Apr 24 (6 messages): Extremely slow raidz resilvering
Hello everyone, As one of the steps of improving my ZFS home fileserver (snv_134), I wanted to replace a 1TB disk with a newer one of the same vendor/model/size, because the new one has a 64MB cache vs. 16MB in the previous one. The removed disk will be used for backups, so I thought it's better to have the 64MB-cache disk in the on-line pool than in the backup set sitting off-line all
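A disk swap like the one described is normally driven with zpool replace and watched with zpool status; a minimal sketch, with hypothetical pool and device names:

    # Replace the old 1TB disk with the new one and let ZFS resilver
    # ("tank", c1t2d0 and c1t3d0 are placeholders)
    zpool replace tank c1t2d0 c1t3d0
    # Follow resilver progress
    zpool status tank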
2006 Oct 05 (13 messages): Unbootable system recovery
I have just recently (physically) moved a system with 16 hard drives (for the array) and one OS drive; in doing so, I needed to pull out the 16 drives so that it would be light enough for me to lift. When I plugged the drives back in, it initially went into a panic-reboot loop. After doing some digging, I deleted the file /etc/zfs/zpool.cache. When I try to import the pool using the zpool
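After /etc/zfs/zpool.cache is removed, the pool has to be rediscovered and imported explicitly; a minimal sketch of the usual steps (the pool name is a placeholder):

    # Scan attached devices for importable pools
    zpool import
    # Import the pool by name; -f forces the import if the pool
    # still appears to be in use by another host
    zpool import -f mypool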
2009 Jan 13 (12 messages): OpenSolaris better than Solaris 10u6 with regards to ARECA Raid Card
Under Solaris 10 u6, no matter how I configured my ARECA 1261ML RAID card, I got errors on all drives resulting from SCSI timeouts.
yoda:~ # tail -f /var/adm/messages
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Requested Block: 239683776 Error Block: 239683776
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Vendor: Seagate
2007 Jan 11 (4 messages): Help understanding some benchmark results
G'day, all, So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern. However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now
2008 Jan 17 (9 messages): ATA UDMA data parity error
...dmesg output is missing) -------------------
# dmesg
Thu Jan 17 06:42:37 EST 2008
Jan 17 06:41:19 san sata: [ID 349649 kern.info] Supported queue depth 32
Jan 17 06:41:19 san sata: [ID 349649 kern.info] capacity = 1953525168 sectors
Jan 17 06:41:19 san scsi: [ID 193665 kern.info] sd8 at marvell88sx0: target 4 lun 0
Jan 17 06:41:19 san genunix: [ID 936769 kern.info] sd8 is /pci@0,0/pci10de,376@a/pci1033,125@0/pci11ab,11ab@4/disk@4,0
Jan 17 06:41:19 san genunix: [ID 408114 kern.info] /pci@0,0/pci10de,376@a/pci1033,125@0/pci11ab,11ab@4/disk@4,0 (sd...
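To relate an sd instance such as sd8 in messages like these to its c#t#d# name: /etc/path_to_inst maps physical device paths to driver instances, and the /dev/dsk entries are symlinks to those paths. A rough sketch (the grep patterns are illustrative):

    # List sd driver instances with their physical device paths
    grep '"sd"' /etc/path_to_inst
    # Find the /dev/dsk link that points at the disk@4,0 node
    ls -l /dev/dsk | grep 'disk@4,0'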
2006 Jul 17 (11 messages): ZFS benchmarks w/8 disk raid - Quirky results, any thoughts?
Hi all, I've just built an 8-disk ZFS storage box, and I'm in the testing phase before I put it into production. I've run into some unusual results, and I was hoping the community could offer some suggestions. I've basically made the switch to Solaris on the promise of ZFS alone (yes, I'm that excited about it!), so naturally I'm looking
2008 Jul 25 (18 messages): zfs, raidz, spare and jbod
...rn.warning] WARNING: arcmsr0: tran reset level=1
Jul 25 13:15:00 malene arcmsr: [ID 658202 kern.warning] WARNING: arcmsr0: tran reset level=0
Jul 25 13:15:00 malene scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci8086,25f9@6/pci10b5,8533@0/pci10b5,8533@9/pci17d3,1680@0/sd@1,3 (sd8):
Jul 25 13:15:00 malene offline or reservation conflict
/usr/sbin/zpool status
  pool: ef1
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors. Sufficient replicas exist for the pool to continue functioning in a degraded state.
action: Replac...
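The usual follow-up for a device reported offline in a redundant pool is to bring it back online or replace it, then clear the error state; a minimal sketch using the pool name from the status output above, with placeholder device names:

    # Try to bring the offlined device back (device name is a placeholder)
    zpool online ef1 c3t1d3
    # If the disk is actually dead, replace it (with the spare, for instance)
    zpool replace ef1 c3t1d3 c3t1d7
    # Clear the logged errors once the pool is healthy again
    zpool clear ef1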
2009 Jul 23 (1 message): [PATCH server] changes required for fedora rawhide inclusion.
2008 Jun 30 (4 messages): Rebuild of kernel 2.6.9-67.0.20.EL failure
Hello list. I'm trying to rebuild the 2.6.9-67.0.20.EL kernel, but it fails even without modifications. How did I try it? I created a (non-root) build environment (not a mock), installed the kernel .src.rpm, and ran:
rpmbuild -ba --target=`uname -m` kernel-2.6.spec 2> prep-err.log | tee prep-out.log
The build failed at the end:
Processing files: kernel-xenU-devel-2.6.9-67.0.20.EL Checking
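For reference, the rebuild procedure described above as a rough sketch, assuming the default /usr/src/redhat build tree (the poster actually used a custom non-root build environment):

    # Install the kernel source RPM into the build tree
    rpm -ivh kernel-2.6.9-67.0.20.EL.src.rpm
    cd /usr/src/redhat/SPECS
    # Rebuild for the local architecture, splitting stderr and stdout into logs
    rpmbuild -ba --target=`uname -m` kernel-2.6.spec 2> prep-err.log | tee prep-out.log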