search for: eschrock

Displaying 20 results from an estimated 100 matches for "eschrock".

2007 Apr 15
3
Bitrot and panics
IIRC, uncorrectable bitrot even in a nonessential file detected by ZFS used to cause a kernel panic. Bug ID 4924238 was closed with the claim that bitrot-induced panics are not a bug, but the description did mention an open bug ID 4879357, which suggests that it's considered a bug after all. Can somebody clarify the intended behavior? For example, if I'm running Solaris in a VM,
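A hedged aside for readers of this thread: on builds of that era, the files affected by permanent checksum errors can be listed with zpool status (the pool name tank below is hypothetical).
    # -v lists files hit by permanent (uncorrectable) errors alongside per-vdev counts
    zpool status -v tank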
2009 Feb 02
8
ZFS core contributor nominations
...mbers to both Contributor and Core contributor levels. First the current list of Core contributors: Bill Moore (billm) Cindy Swearingen (cindys) Lori M. Alt (lalt) Mark Shellenbaum (marks) Mark Maybee (maybee) Matthew A. Ahrens (ahrens) Neil V. Perrin (perrin) Jeff Bonwick (bonwick) Eric Schrock (eschrock) Noel Dellofano (ndellofa) Eric Kustarz (goo)* Georgina A. Chua (chua)* Tabriz Holtz (tabriz)* Krister Johansen (johansen)* All of these should be renewed at Core contributor level, except for those with a "*". Those with a "*" are no longer involved with ZFS and we should l...
2008 Nov 17
14
Storage 7000
I'm not sure if this is the right place for the question or not, but I'll throw it out there anyway. Does anyone know, if you create your pool(s) with a system running fishworks, can that pool later be imported by a standard Solaris system? I.e., if for some reason the head running fishworks were to go away, could I attach the JBOD/disks to a system running snv/mainline
2006 Jul 26
9
zfs questions from Sun customer
Please reply to david.curtis at sun.com ******** Background / configuration ************** zpool will not create a storage pool on fibre channel storage. I'm attached to an IBM SVC using the IBMsdd driver. I have no problem using SVM metadevices and UFS on these devices. List steps to reproduce the problem (if applicable): Build Solaris 10 Update 2 server Attach to an external
2006 Oct 18
5
ZFS and IBM sdd (vpath)
Hello, I am trying to configure ZFS with IBM sdd. IBM sdd is like powerpath, MPXIO or VxDMP. Here is the error message when I try to create my pool: bash-3.00# zpool create tank /dev/dsk/vpath1a warning: device in use checking failed: No such device internal error: unexpected error 22 at line 446 of ../common/libzfs_pool.c bash-3.00# zpool create tank /dev/dsk/vpath1c cannot open
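A hedged sketch of the workarounds usually tried when the in-use check trips on multipath pseudo-devices; the vpath names come from the post, and whether either form succeeds depends on the sdd driver exposing a usable label.
    # force past the 'device in use checking failed' warning
    zpool create -f tank /dev/dsk/vpath1c
    # or hand ZFS an explicitly labeled slice rather than the whole pseudo-device
    zpool create tank /dev/dsk/vpath1a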
2007 Jun 01
10
SMART
On Solaris x86, does zpool (or anything) support PATA (or SATA) IDE SMART data? With the Predictive Self Healing feature, I assumed that Solaris would have at least some SMART support, but what I've googled so far has been discouraging. http://prefetch.net/blog/index.php/2006/10/29/solaris-needs-smart-support-please-help/ Bug ID: 4665068 SMART support in IDE driver
2008 May 27
6
slog devices don't resilver correctly
This past weekend, my holiday was ruined due to a log device "replacement" gone awry. I posted all about it here: http://jmlittle.blogspot.com/2008/05/problem-with-slogs-how-i-lost.html In a nutshell, a resilver of a single log device with itself, due to the fact that one can't remove a log device from a pool once defined, caused ZFS to fully resilver but then attach the log
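For context, a hedged sketch of the commands involved (device names c3t0d0/c3t1d0 are hypothetical); at the time a log vdev could be added or replaced but not removed, which is what the post runs into.
    zpool add tank log c3t0d0          # attach a separate intent-log device
    zpool replace tank c3t0d0 c3t1d0   # replace it; removal was not yet supported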
2006 Aug 01
5
ZFS, block device and Xen?
Hi There, I looked at the ZFS admin guide in an attempt to find a way to leverage ZFS capabilities (storage pool, mirroring, dynamic striping, etc.) for Xen domU file systems that are not ZFS. Couldn't find an answer whether ZFS could be used only as a "regular" volume manager to create logical volumes for UFS or even a Linux ext2fs, with, ideally, the ability to create
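A hedged sketch of the usual answer: a ZFS volume (zvol) can back a non-ZFS filesystem or a Xen domU disk; the names below are hypothetical.
    zfs create -V 8G tank/domu1        # carve an 8 GB block device out of the pool
    newfs /dev/zvol/rdsk/tank/domu1    # put UFS (or ext2 from inside the guest) on it
    # the domU can then be pointed at /dev/zvol/dsk/tank/domu1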
2006 May 16
3
ZFS snv_b39 and S10U2
Hello zfs-discuss, Just to be sure - if I create ZFS filesystems on snv_39 and then later I would want just to import that pool on S10U2 - can I safely assume it will just work (I mean nothing new was added to or changed in the on-disk format in the last few snv releases that is not going to be in U2)? I want to put some data on ZFS right now (I have to do it now) and later I want to
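A hedged sketch of how one would check this before committing data, on builds recent enough to have pool versioning (pool name mypool is hypothetical): compare the pool's on-disk version against what the target release supports, then export and import.
    zpool upgrade -v       # on-disk versions this build understands
    zpool upgrade          # pools not at the newest version
    zpool export mypool    # on snv_39
    zpool import mypool    # on S10U2; works only if U2 supports that pool version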
2005 Nov 16
3
yay for zfs
This zfs looks great! I really hope this gets put into Solaris soon since I don't think I could live with Solaris Express on a production machine. The ease of adding disks and moving directories looks like a life saver, especially for me, who deals with trying to store digital media which piles up a couple of gigs a day! This message posted from opensolaris.org
2007 Oct 08
6
zfs boot issue, changing device id
Hi, Given two disks c1t0d0 (DISK A) and c1t1d0 (DISK B)... 1/ Standard install on DISK A. 2/ zfs boot install on DISK B. 3/ I change the boot order and my zfs boot works fine. 4/ I install grub on the MBR of DISK B. 5/ I disconnect and replace DISK A with DISK B. 6/ Reboot, get the grub menu, select Solaris ZFS, and it panics that it cannot mount root path @ device XXX... This is not a ZFS
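A hedged note on step 4/: installing grub on the second disk's MBR is typically done with installgrub (slice name below is hypothetical), but the recorded bootpath (e.g. in /boot/solaris/bootenv.rc on x86) still names the old physical device path, which is the usual cause of the mount-root panic once DISK B moves into DISK A's slot.
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0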
2006 Mar 30
39
Proposal: ZFS Hot Spare support
As mentioned last night, we've been reviewing a proposal for hot spare support in ZFS. Below you can find a current draft of the proposed interfaces. This has not yet been submitted for ARC review, but comments are welcome. Note that this does not include any enhanced FMA diagnosis to determine when a device is "faulted". This will come in a follow-on project, of which some
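For readers searching later: a hedged sketch of the spare interfaces as they eventually shipped (device names hypothetical).
    zpool add tank spare c2t0d0        # associate a hot spare with the pool
    zpool replace tank c1t0d0 c2t0d0   # manually swap the spare in for a failed disk
    zpool remove tank c2t0d0           # dissociate an unused spare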
2006 Jul 03
8
[raidz] file not removed: No space left on device
On a system still running nv_30, I've a small RaidZ filled to the brim: 2 3 root at mir pts/9 ~ 78# uname -a SunOS mir 5.11 snv_30 sun4u sparc SUNW,UltraAX-MP 0 3 root at mir pts/9 ~ 50# zfs list NAME USED AVAIL REFER MOUNTPOINT mirpool1 33.6G 0 137K /mirpool1 mirpool1/home 12.3G 0 12.3G /export/home mirpool1/install 12.9G
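A hedged sketch of the usual workaround when rm itself fails with ENOSPC on a full pool: copy-on-write needs a little free space even to delete, so truncating the file in place first often frees enough to proceed (the path below is hypothetical).
    cp /dev/null /mirpool1/home/bigfile   # truncate the file in place
    rm /mirpool1/home/bigfile             # then the unlink has room to complete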
2006 Mar 29
3
ON 20060327 and upcoming solaris 10 U2 / coreutils
So, I noticed that a lot of the fixes discussed here recently, including the ZFS/NFS interaction bug fixes and the deadlock fix, have made it into 20060327 that was released this morning. My question is whether we'll see all these up-to-the-minute bug fixes in the Solaris 10 update that brings ZFS to that product, or if there is a specific date where no further updates will make it in to
2009 Oct 23
7
cryptic vdev name from fmdump
This morning we got a fault management message from one of our production servers stating that a fault in one of our pools had been detected and fixed. Looking into the error using fmdump gives: fmdump -v -u 90ea244e-1ea9-4bd6-d2be-e4e7a021f006 TIME UUID SUNW-MSG-ID Oct 22 09:29:05.3448 90ea244e-1ea9-4bd6-d2be-e4e7a021f006 FMD-8000-4M Repaired
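A hedged sketch of how to dig past the cryptic vdev name: the verbose fault report and the underlying error telemetry carry the vdev GUID and device path, which can then be matched against zpool status (the UUID is the one from the post).
    fmdump -V -u 90ea244e-1ea9-4bd6-d2be-e4e7a021f006   # full fault report
    fmdump -eV                                          # underlying ZFS ereports
    zpool status -v                                     # map the pool/vdev back to a disk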
2010 Apr 05
3
no hot spare activation?
While testing a zpool with a different storage adapter using my "blkdev" device, I did a test which made a disk unavailable -- all attempts to read from it report EIO. I expected my configuration (which is a 3 disk test, with 2 disks in a RAIDZ and a hot spare) to work where the hot spare would automatically be activated. But I'm finding that ZFS does not behave this way
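A hedged note on what the poster is seeing: spare activation is driven by an FMA diagnosis (the zfs-diagnosis/zfs-retire agents declaring the device faulted), not by EIO alone, so a device that merely returns read errors may never trigger it; the spare can still be swapped in by hand (device names hypothetical).
    zpool status tank                  # spare should show as AVAIL
    zpool replace tank c0t1d0 c0t2d0   # manually activate the spare for the bad disk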
2008 May 20
4
Ways to speed up 'zpool import'?
We're planning to build a ZFS-based Solaris NFS fileserver environment with the backend storage being iSCSI-based, in part because of the possibilities for failover. In exploring things in our test environment, I have noticed that 'zpool import' takes a fairly long time; about 35 to 45 seconds per pool. A pool import time this slow obviously has implications for how fast
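A hedged sketch of the usual levers: import scans every device under /dev/dsk by default, so narrowing the search path or carrying a cachefile to the failover node is what typically shrinks the import time (paths and pool name below are hypothetical).
    zpool import -d /devices/iscsi mypool           # only scan a directory of known devices
    zpool set cachefile=/etc/zfs/nfs.cache mypool   # record the config on the primary node
    zpool import -c /etc/zfs/nfs.cache mypool       # reopen from the copied cachefile on failover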
2008 Mar 20
5
Snapshots silently eating user quota
All, I assume this issue is pretty old given the time ZFS has been around. I have tried searching the list but could not understand how ZFS actually takes snapshot space into account. I have a user walter on whom I try the following ZFS operations bash-3.00# zfs get quota store/catB/home/walter NAME PROPERTY VALUE SOURCE
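A hedged sketch of how to see where the quota is going: with the original quota property, snapshot space is charged against the dataset's quota, and listing the snapshots makes that consumption visible (dataset name from the post).
    zfs get quota,used store/catB/home/walter
    zfs list -t snapshot -r store/catB/home/walter   # per-snapshot USED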
2006 Jun 08
7
Wrong reported free space over NFS
NFS server (b39): bash-3.00# zfs get quota nfs-s5-s8/d5201 nfs-s5-p0/d5110 NAME PROPERTY VALUE SOURCE nfs-s5-p0/d5110 quota 600G local nfs-s5-s8/d5201 quota 600G local bash-3.00# bash-3.00# df -h | egrep "d5201|d5110" nfs-s5-p0/d5110 600G 527G 73G 88% /nfs-s5-p0/d5110
2006 Aug 04
11
Assertion raised during zfs share?
Working to get ZFS to run on a minimal Solaris 10 U2 configuration. In this scenario, ZFS is included in the miniroot which is booted into RAM. When trying to share one of the filesystems, an assertion is raised - see below. If the version of source on OpenSolaris.org matches Solaris 10 U2, then it looks like it's associated with a popen of /usr/sbin/share. Can anyone shed any