Displaying 20 results from an estimated 1000 matches similar to: "ZFS snv_b39 and S10U2"
2006 Mar 29
3
ON 20060327 and upcoming solaris 10 U2 / coreutils
So, I noticed that a lot of the fixes discussed here recently,
including the ZFS/NFS interaction bug fixes and the deadlock fix, have
made it into 20060327, which was released this morning. My question is
whether we'll see all these up-to-the-minute bug fixes in the Solaris
10 update that brings ZFS to that product, or if there is a specific
date after which no further updates will make it in to
2006 Mar 30
39
Proposal: ZFS Hot Spare support
As mentioned last night, we've been reviewing a proposal for hot spare
support in ZFS. Below you can find a current draft of the proposed
interfaces. This has not yet been submitted for ARC review, but
comments are welcome. Note that this does not include any enhanced FMA
diagnosis to determine when a device is "faulted". This will come in a
follow-on project, of which some
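The draft interfaces are not reproduced in this excerpt; as a rough sketch (pool and device names invented here), the hot-spare handling that eventually shipped is driven through zpool subcommands along these lines:

# add a disk to the pool as a hot spare
zpool add tank spare c3t0d0

# swap the spare in for a failing device
zpool replace tank c1t0d0 c3t0d0

# detach the failed disk to make the spare permanent,
# or remove an unused spare from the pool
zpool detach tank c1t0d0
zpool remove tank c3t0d0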
2006 Jul 15
2
zvol of files for Oracle?
Hello zfs-discuss,
Which would you recommend for ZFS + Oracle from a performance
standpoint - zvols or just files?
--
Best regards,
Robert mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com
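A minimal sketch of the two alternatives being compared, with a hypothetical pool name and sizes; matching recordsize to the Oracle block size is a commonly cited tuning, not something taken from this thread:

# option 1: a zvol, exposed to Oracle as a block device
zfs create -V 32G tank/oradata_vol
# Oracle would then be pointed at /dev/zvol/rdsk/tank/oradata_vol

# option 2: a plain filesystem holding datafiles; recordsize is often
# matched to the Oracle block size (8K is a typical db_block_size)
zfs create -o recordsize=8k tank/oradata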
2006 Jun 08
7
Wrong reported free space over NFS
NFS server (b39):
bash-3.00# zfs get quota nfs-s5-s8/d5201 nfs-s5-p0/d5110
NAME             PROPERTY  VALUE  SOURCE
nfs-s5-p0/d5110  quota     600G   local
nfs-s5-s8/d5201  quota     600G   local
bash-3.00#
bash-3.00# df -h | egrep "d5201|d5110"
nfs-s5-p0/d5110 600G 527G 73G 88% /nfs-s5-p0/d5110
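For context, a simple way to compare the server-side accounting with what df reports for the datasets shown above (quota, used, and available are standard zfs properties):

# server-side view of the dataset versus the df view exported over NFS
zfs get quota,used,available nfs-s5-p0/d5110
zfs list nfs-s5-p0/d5110
df -h /nfs-s5-p0/d5110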
2008 Nov 17
14
Storage 7000
I'm not sure if this is the right place for the question or not, but I'll
throw it out there anyway. Does anyone know: if you create your pool(s)
with a system running fishworks, can that pool later be imported by a
standard Solaris system? I.e., if for some reason the head running fishworks
were to go away, could I attach the JBOD/disks to a system running
snv/mainline
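Not an answer from this thread, but in general a pool created by one ZFS implementation can be imported by another as long as the pool version is supported; a sketch with a hypothetical pool name:

# on the replacement Solaris host, scan attached disks for importable pools
zpool import

# import by name (or numeric id); -f forces it if the pool was not cleanly exported
zpool import -f mypool

# check which on-disk pool versions this host understands
zpool upgrade -v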
2006 Jan 04
8
Using same ZFS under different kernel versions
I build two zfs filesystems using b29 (from brandz).
I then re-installed solaris express b28, preserving the zfs filesystems.
When I tried to "zpool import" my zfs filesystems I got a kernel panic:
> debugging crash dump vmcore.0 (32-bit) from blackbird
> operating system: 5.11 snv_28 (i86pc)
> panic message:
> ZFS: bad checksum (read on /dev/dsk/c1d0p0 off 24d5e000: zio
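One hedged way to inspect what is actually on disk before retrying the import is to dump the vdev labels with zdb; the device path is the one named in the panic message:

# print the ZFS vdev labels (pool name, guid, version, vdev layout)
zdb -l /dev/dsk/c1d0p0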
2006 Sep 13
16
Comments on a ZFS multiple use of a pool, RFE.
I filed this RFE earlier; since there is no way for non-Sun personnel
to see this RFE for a while, I am posting it here and asking for
feedback from the community.
[Fwd: CR 6470231 Created P5 opensolaris/triage-queue Add an inuse
check that is inforced even if import -f is used.] Inbox
2006 Jul 30
6
zfs mount stuck in zil_replay
Hello ZFS,
System was rebooted and after reboot server again
System is snv_39, SPARC, T2000
bash-3.00# ptree
7 /lib/svc/bin/svc.startd -s
163 /sbin/sh /lib/svc/method/fs-local
254 /usr/sbin/zfs mount -a
[...]
bash-3.00# zfs list|wc -l
46
Using df I can see most file systems are already mounted.
> ::ps!grep zfs
R 254 163 7 7 0 0x4a004000
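A hedged suggestion rather than something from the thread: the kernel stack of the stuck mount (pid 254 in the ptree output above) can be pulled with mdb, assuming the standard Solaris debugger is available:

# show where 'zfs mount -a' (pid 254) is blocked in the kernel
echo "0t254::pid2proc | ::walk thread | ::findstack -v" | mdb -k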
2008 May 27
6
slog devices don't resilver correctly
This past weekend, my holiday was ruined due to a log device
"replacement" gone awry.
I posted all about it here:
http://jmlittle.blogspot.com/2008/05/problem-with-slogs-how-i-lost.html
In a nutshell, a resilver of a single log device with itself (due to
the fact that one can't remove a log device from a pool once defined) caused
ZFS to fully resilver but then attach the log
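For reference, a sketch of the slog operations involved, with hypothetical device names; at the time a log device could be added and replaced but not removed (removal of log devices only arrived in later pool versions):

# add a separate intent-log (slog) device
zpool add tank log c4t0d0

# replace a failing slog with another device (what the poster attempted)
zpool replace tank c4t0d0 c5t0d0

# not possible at the time; later releases added log-device removal
zpool remove tank c5t0d0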
2007 Jun 01
10
SMART
On Solaris x86, does zpool (or anything) support PATA (or SATA) IDE
SMART data? With the Predictive Self Healing feature, I assumed that
Solaris would have at least some SMART support, but what I've googled so
far has been discouraging.
http://prefetch.net/blog/index.php/2006/10/29/solaris-needs-smart-support-please-help/
Bug ID: 4665068 SMART support in IDE driver
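Not part of the original post: the usual route is the third-party smartmontools package, assuming it is installed and the path is adjusted to the actual disk:

# query SMART attributes from an IDE/SATA disk via smartmontools
smartctl -d ata -a /dev/rdsk/c0d0p0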
2006 Jul 26
9
zfs questions from Sun customer
Please reply to david.curtis at sun.com
******** Background / configuration **************
zpool will not create a storage pool on fibre channel storage. I'm
attached to an IBM SVC using the IBMsdd driver. I have no problem using
SVM metadevices and UFS on these devices.
List steps to reproduce the problem(if applicable):
Build Solaris 10 Update 2 server
Attach to an external
2009 Feb 02
8
ZFS core contributor nominations
The time has come to review the current Contributor and Core contributor
grants for ZFS. Since all of the ZFS core contributor grants are set
to expire on 02-24-2009, we need to renew the members who are still
contributing at core contributor levels. We should also add some new
members to both Contributor and Core contributor levels.
First the current list of Core contributors:
Bill
2006 Oct 18
5
ZFS and IBM sdd (vpath)
Hello, I am trying to configure ZFS with IBM sdd. IBM sdd is like powerpath, MPXIO or VxDMP.
Here is the error message when I try to create my pool:
bash-3.00# zpool create tank /dev/dsk/vpath1a
warning: device in use checking failed: No such device
internal error: unexpected error 22 at line 446 of ../common/libzfs_pool.c
bash-3.00# zpool create tank /dev/dsk/vpath1c
cannot open
2005 Nov 16
3
yay for zfs
This zfs looks great!
I really hope this gets put into Solaris soon, since I don't think I could live with Solaris Express on a production machine.
The ease of adding disks and moving directories looks like a lifesaver, especially for me, dealing with digital media that piles up at a couple of gigs a day!
2006 Aug 01
5
ZFS, block device and Xen?
Hi There,
I looked at the ZFS admin guide in an attempt to find a way to leverage ZFS
capabilities (storage pool, mirroring, dynamic striping, etc.) for Xen
domU file systems that are not ZFS. I couldn't find an answer to whether ZFS
could be used purely as a "regular" volume manager to create logical
volumes for UFS or even a Linux ext2fs, with, ideally, the ability to
create
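A minimal sketch of the volume-manager-style usage being asked about, with hypothetical names: carve a zvol out of the pool and put a non-ZFS filesystem on it.

# create an 8 GB zvol to back a Xen domU
zfs create -V 8G tank/domu1-root

# put UFS on it (or hand the block device to the guest and mkfs ext2 there)
newfs /dev/zvol/rdsk/tank/domu1-root

# the device to reference from the domU configuration
ls -l /dev/zvol/dsk/tank/domu1-root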
2007 Apr 15
3
Bitrot and panics
IIRC, uncorrectable bitrot detected by ZFS, even in a nonessential file, used to cause a kernel panic.
Bug ID 4924238 was closed with the claim that bitrot-induced panics are not a bug, but the description did mention an open bug ID 4879357, which suggests that it's considered a bug after all.
Can somebody clarify the intended behavior? For example, if I'm running Solaris in a VM,
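As a hedged aside: with redundancy ZFS repairs bad blocks during a scrub, and without it the affected files are reported rather than silently fixed; the pool name is hypothetical:

# scrub the pool, then list checksum errors and any files with unrecoverable errors
zpool scrub tank
zpool status -v tank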
2006 Nov 03
27
# devices in raidz.
for s10u2, documentation recommends 3 to 9 devices in raidz. what is the
basis for this recommendation? i assume it is performance and not failure
resilience, but i am just guessing... [i know, recommendation was intended
for people who know their raid cold, so it needed no further explanation]
thanks... oz
--
ozan s. yigit | oz at somanetworks.com | 416 977 1414 x 1540
I have a hard time
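A hedged illustration of the usual reading of that guidance (it is generally explained as a performance trade-off, since a raidz vdev behaves roughly like a single disk for small random reads): wider configurations are split into several raidz groups. Device names are hypothetical:

# twelve disks built as two 6-disk raidz vdevs rather than one 12-wide raidz
zpool create tank \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0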
2006 Jul 03
8
[raidz] file not removed: No space left on device
On a system still running nv_30, I've a small RaidZ filled to the brim:
2 3 root at mir pts/9 ~ 78# uname -a
SunOS mir 5.11 snv_30 sun4u sparc SUNW,UltraAX-MP
0 3 root at mir pts/9 ~ 50# zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
mirpool1         33.6G      0   137K  /mirpool1
mirpool1/home    12.3G      0  12.3G  /export/home
mirpool1/install 12.9G
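One commonly suggested workaround, not quoted from this thread: truncating the file first may free its blocks where an outright rm cannot, and destroying an old snapshot (if one exists) also releases space. Paths and snapshot names are hypothetical:

# truncate the file in place to release its blocks, then remove it
cp /dev/null /export/home/bigfile
rm /export/home/bigfile

# or reclaim space by destroying an old snapshot
zfs destroy mirpool1/home@oldsnap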
2005 Dec 14
2
format(1M) quits if there are pools on disks and no zfs module
Hi.
While submitting SDR-0149 on a ZFS bug I encountered a problem with the
format utility. This is a v240 with snv_29 and internal disks. On the s0 slices
there is a zfs pool (which is not imported). I unloaded the zfs modules, then
moved the zfs driver and ran format. Now format quits just because there's no
zfs module (and not even one zfs pool is imported). It shouldn't behave
that
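A hedged sketch of the reproduction described above, using the standard module utilities; the awk filter for finding the zfs module id is my own assumption:

# find the zfs kernel module id, unload it, then run format with no zfs module loaded
ZFSID=$(modinfo | awk '$6 == "zfs" { print $1; exit }')
modunload -i "$ZFSID"
format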
2007 Jun 09
41
zfs reports small st_size for directories?
Why does ZFS report such small directory sizes? For example, take a maildir directory with ten entries:
total 2385
drwx------ 8 17121 vmail 10 Jun 8 23:50 .
drwx--x--x 14 root root 14 May 12 2006 ..
drwx------ 5 17121 vmail 5 May 25 18:16 .Trash
drwx------ 5 17121 staff 6 Jun 9 00:01 .testing
-rw------- 1 17121 staff 0 Jun
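As far as I understand it, ZFS reports a directory's st_size as roughly the number of entries rather than the bytes used by directory blocks (as UFS reports), which is why the sizes above look so small. A quick, hypothetical comparison:

# the size column for a ZFS directory tracks its entry count, not block usage
ls -ld /var/tmp/somedir
ls -a /var/tmp/somedir | wc -l   # entry count (includes . and ..)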