Displaying 4 results from an estimated 4 matches for "snv_68".
2007 Sep 06
0
Zfs with storedge 6130
...n better when I export each disk in my array as a single raid0 x14
then create the zpool :)
# zpool create -f vol0 c2t1d12 c2t1d11 c2t1d10 c2t1d9 c2t1d8 c2t1d7 c2t1d6 \
    c2t1d5 c2t1d4 c2t1d3 c2t1d2 c2t1d1 c2t1d0 spare c2t1d13
>
>> The storedge shelf has 14 FC 72gb disks attached to a solaris snv_68.
>>
>> I was thinking that since I can't export all the disks un-raided out to the
>> solaris system that I would instead:
>>
>> (on the 6130)
>> Create 3 raid5 volumes of 200gb each using the "Sun_ZFS" pool (128k segment
>> size, read ahead enab...
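For comparison, a minimal sketch of an all-ZFS alternative to the 6130 raid5 volumes, letting ZFS provide the redundancy from the same exported disks. The device names follow the c2t1dN pattern quoted above; the split into two 6-disk raidz vdevs plus two spares is only an illustration, not something from the thread:

# zpool create -f vol0 \
      raidz c2t1d0 c2t1d1 c2t1d2 c2t1d3 c2t1d4 c2t1d5 \
      raidz c2t1d6 c2t1d7 c2t1d8 c2t1d9 c2t1d10 c2t1d11 \
      spare c2t1d12 c2t1d13
# zpool status vol0

zpool status then shows the two raidz groups and the spares, so the redundancy lives in ZFS rather than in the 6130's raid5 volumes.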
2007 Oct 12
0
zfs: allocating allocated segment(offset=77984887808 size=66560)
...il.opensolaris.org/pipermail/zfs-discuss/2007-September/042541.html
when I luupgraded a ufs partition (a dvd-b62
install that had been bfu'd to b68) with a b74 dvd,
it booted fine, and I was doing the same thing
that I had done on another machine (/usr can
live on raidz if boot is ufs): a
zfs destroy -r z/snv_68 with lzjb and {usr var opt} partitions.
It crashed with:
Oct 11 14:28:11 nas ^Mpanic[cpu0]/thread=b4b6ee00:
freeing free segment (vdev=1 offset=122842f400 size=10400)
824aabac genunix:vcmn_err+16 (3, f49966e4, 824aab)
824aabcc zfs:zfs_panic_recover+28 (f49966e4, 1, 0, 284)
824aac20 zfs:metaslab_...
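The zfs_panic_recover() frame in that stack points at the workaround commonly discussed on the list for "freeing free segment" / "allocating allocated segment" panics: set the ZFS recover tunables so the inconsistency is logged instead of panicking, then retry the destroy. A minimal sketch only; the tunables are an assumption about this particular build, and the pool name z comes from the command quoted above:

* /etc/system: log these assertions instead of panicking
set zfs:zfs_recover=1
set aok=1

then, after a reboot:

# zfs destroy -r z/snv_68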
2007 Sep 14
9
Possible ZFS Bug - Causes OpenSolaris Crash
...r when running the test app against a Solaris/UFS file system.
Machine 1:
OpenSolaris Community Edition,
snv_72, no BFU (not DEBUG)
SCSI Drives, Fibre Channel
ZFS Pool is six drive stripe set
Machine 2:
OpenSolaris Community Edition
snv_68 with BFU (kernel has DEBUG enabled)
SATA Drives
ZFS Pool is four RAIDZ sets, two disks in each RAIDZ set
(Please forgive me if I have posted in the wrong place. I am new to ZFS and this forum. However, this forum appears to be the best place to get good quality ZFS information. Tha...
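For reference, the two pool layouts described for those machines would have been created roughly as follows; the pool name tank and the device names are placeholders, not taken from the post:

Machine 1 (six-drive stripe, no redundancy):
# zpool create tank c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

Machine 2 (four raidz vdevs of two disks each):
# zpool create tank raidz c2t0d0 c2t1d0 raidz c2t2d0 c2t3d0 \
      raidz c2t4d0 c2t5d0 raidz c2t6d0 c2t7d0

A two-disk raidz gives the same usable space as a mirror, so mirror vdevs would be the more conventional choice for that second layout.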
2008 Feb 05
31
ZFS Performance Issue
This may not be a ZFS issue, so please bear with me!
I have 4 internal drives that I have striped/mirrored with ZFS and have an application server which is reading/writing to hundreds of thousands of files on it, thousands of files at a time.
If 1 client uses the app server, the transaction (reading/writing to ~80 files) takes about 200 ms. If I have about 80 clients attempting it at once, it can
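Striping and mirroring four drives with ZFS normally means two mirror vdevs that the pool stripes across. A minimal sketch with placeholder device and pool names, plus zpool iostat to watch per-vdev load while the 80 clients run:

# zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
# zpool iostat -v tank 5

The iostat output, sampled every 5 seconds, shows whether the read/write load is spread evenly across both mirrors or is bottlenecked on one vdev.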