Displaying 20 results from an estimated 32 matches for "kustarz".
2009 Feb 02
8
ZFS core contributor nominations
...butor levels.
First the current list of Core contributors:
Bill Moore (billm)
Cindy Swearingen (cindys)
Lori M. Alt (lalt)
Mark Shellenbaum (marks)
Mark Maybee (maybee)
Matthew A. Ahrens (ahrens)
Neil V. Perrin (perrin)
Jeff Bonwick (bonwick)
Eric Schrock (eschrock)
Noel Dellofano (ndellofa)
Eric Kustarz (goo)*
Georgina A. Chua (chua)*
Tabriz Holtz (tabriz)*
Krister Johansen (johansen)*
All of these should be renewed at Core contributor level, except for
those with a "*". Those with a "*" are no longer involved with ZFS and
we should let their grants expire.
I am nominating...
2006 Apr 27
5
Porting ZFS to OSX
Here's some exciting news!
Chris Emura, the Filesystem Development Manager within Apple's CoreOS organization is interested in porting ZFS to OS X. For more information, please e-mail him directly at cemura at apple.com.
Speaking for the zfs team (at Sun), this is great news and we fully support the effort.
my powerbook hungers for ZFS,
eric
This message posted from
2007 Jun 21
2
Bug in "zpool history"
Hi,
I was playing around with NexentaCP and its zfs boot facility. I tried
to figure out what commands to run, so I ran zpool history like
this
# zpool history
2007-06-20.10:19:46 zfs snapshot syspool/rootfs@mysnapshot
2007-06-20.10:20:03 zfs clone syspool/rootfs@mysnapshot syspool/myrootfs
2007-06-20.10:23:21 zfs set bootfs=syspool/myrootfs syspool
As you can see it says I did a
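History lines like the ones above have a fixed shape (a timestamp, then the command), so they are easy to post-process in a script; a minimal sketch, using a copy of the output shown above as sample data rather than a live pool:

```shell
# Sample "zpool history" output, embedded here so the sketch runs
# without a real pool.
zpool_history='2007-06-20.10:19:46 zfs snapshot syspool/rootfs@mysnapshot
2007-06-20.10:20:03 zfs clone syspool/rootfs@mysnapshot syspool/myrootfs
2007-06-20.10:23:21 zfs set bootfs=syspool/myrootfs syspool'

# Drop the first space-separated field (the timestamp); keep the command.
commands=$(printf '%s\n' "$zpool_history" | cut -d' ' -f2-)
printf '%s\n' "$commands"
```

On a live system the same filter can be fed directly from `zpool history <pool>`.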
2007 Sep 21
3
The ZFS-Man.
Hi.
I gave a talk about ZFS during EuroBSDCon 2007, and because it won
the best talk award and some found it funny, here it is:
http://youtube.com/watch?v=o3TGM0T1CvE
a bit better version is here:
http://people.freebsd.org/~pjd/misc/zfs/zfs-man.swf
BTW, inspired by the ZFS demos from the OpenSolaris page, I created a few demos of
ZFS on FreeBSD:
2008 Jul 29
8
questions about ZFS Send/Receive
Hi guys,
we are proposing to a customer a couple of X4500s (24 TB) used as NAS
(i.e. NFS servers).
Both servers will contain the same files and should be accessed by
different clients at the same time (i.e. they should both be active).
So we need to guarantee that both X4500s contain the same files:
We could simply copy the contents onto both X4500s, which is an option
because the "new
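One common way to keep two pools in sync (the usual snapshot-based approach, not necessarily what this truncated thread settled on) is periodic incremental zfs send/receive. The sketch below only constructs the command strings, since it cannot run without real pools; the pool, snapshot, and host names are all made up for illustration:

```shell
# Build (but do not execute) an incremental replication pipeline for
# keeping a second X4500 in sync. All names here are illustrative.
pool="tank"
peer="x4500-b"
prev="daily-1"
curr="daily-2"

# 1. Take a recursive snapshot on the primary.
snapshot_cmd="zfs snapshot -r ${pool}@${curr}"

# 2. Send everything between the two snapshots to the peer over ssh,
#    receiving it into the same pool layout there.
replicate_cmd="zfs send -R -i @${prev} ${pool}@${curr} | ssh ${peer} zfs receive -F ${pool}"

printf '%s\n%s\n' "$snapshot_cmd" "$replicate_cmd"
```

Run the printed commands on the primary host; `-R` sends the whole dataset tree and `-F` forces the receiving side to roll back to the last common snapshot first.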
2008 Jun 24
1
zfs primarycache and secondarycache properties
Moved from PSARC to zfs-code...this discussion is separate from the case.
Eric kustarz wrote:
>
> On Jun 23, 2008, at 1:20 PM, Darren Reed wrote:
>
>> eric kustarz wrote:
>>>
>>> On Jun 23, 2008, at 1:07 PM, Darren Reed wrote:
>>>
>>>> Tim Haley wrote:
>>>>> ....
>>>>> primarycache=all | none | metada...
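The property syntax quoted above (all | none | metadata) is easy to gate in scripts before handing a value to `zfs set`; a minimal sketch, where the dataset name is purely illustrative:

```shell
# Validate a primarycache/secondarycache value against the three
# settings quoted in the case material above.
valid_cache_value() {
    case "$1" in
        all|none|metadata) return 0 ;;
        *) return 1 ;;
    esac
}

# Example: only emit the "zfs set" command for a valid value.
# (The actual set is left commented out; tank/db is made up.)
if valid_cache_value "metadata"; then
    echo "would run: zfs set primarycache=metadata tank/db"
    # zfs set primarycache=metadata tank/db
fi
```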
2007 Jan 23
4
Assertion in arc_change_state
Hi,
My current code is tripping the following assertion:
lib/libzpool/build-kernel/arc.c:736: arc_change_state: Assertion
`new_state->size + to_delta >= new_state->lsize (0x2a60000 >= 0x2a64000)`
failed.
gdb info:
Program terminated with signal 6, Aborted.
#0 0x00002afcd767847b in raise () from /lib/libc.so.6
(gdb) bt
#0 0x00002afcd767847b in raise () from /lib/libc.so.6
#1
2007 May 15
2
Clear corrupted data
Hey,
I'm currently running on Nexenta alpha 6 and I have some corrupted data in a
pool.
The output from sudo zpool status -v data is:
pool: data
> state: ONLINE
> status: One or more devices has experienced an error resulting in data
> corruption. Applications may be affected.
> action: Restore the file in question if possible. Otherwise restore the
> entire
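The usual follow-up to a status report like the one above (the standard sequence, not something recovered from this truncated thread) is to restore or delete the affected files, re-scrub, and clear the error counters. This sketch only prints the commands; "data" is the pool name taken from the output above:

```shell
# After restoring/deleting the files listed by "zpool status -v",
# re-scrub the pool and clear its error counters. Printed only,
# since there is no pool to run against here.
pool="data"
steps="zpool scrub $pool
zpool clear $pool
zpool status -v $pool"
printf '%s\n' "$steps"
```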
2007 Oct 08
16
Fileserver performance tests
Hi all,
I want to replace a bunch of Apple Xserves with Xraids and HFS+ (brr) by Sun x4200 with SAS JBODs and ZFS. The application will be the Helios UB+ fileserver suite.
I installed the latest Solaris 10 on an x4200 with 8 GB of RAM and two Sun SAS controllers, attached two SAS JBODs with 8 SATA HDDs each and created a ZFS pool as a raid 10 by doing something like the following:
zpool create
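A "raid 10" pool in ZFS terms is a stripe of two-way mirrors. The sketch below only builds the vdev argument list for `zpool create` from a flat disk list; the device and pool names are invented for illustration, and nothing is actually created:

```shell
# Turn a flat list of disks into "mirror d1 d2 mirror d3 d4 ..."
# arguments for "zpool create", i.e. a stripe of two-way mirrors.
disks="c2t0d0 c2t1d0 c2t2d0 c2t3d0"
vdevs=""
pair=""
for d in $disks; do
    if [ -z "$pair" ]; then
        pair="$d"                      # remember first disk of the pair
    else
        vdevs="$vdevs mirror $pair $d" # emit a complete mirror pair
        pair=""
    fi
done
vdevs=${vdevs# }                       # strip the leading space

echo "would run: zpool create tank $vdevs"
```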
2006 Jul 31
20
ZFS vs. Apple XRaid
Hello all,
After setting up a Solaris 10 machine with ZFS as the new NFS server,
I'm stumped by some serious performance problems. Here are the
(admittedly long) details (also noted at
http://www.netmeister.org/blog/):
The machine in question is a dual-amd64 box with 2GB RAM and two
broadcom gigabit NICs. The OS is Solaris 10 6/06 and the filesystem
consists of a single zpool stripe
2007 Jul 07
12
ZFS Performance as a function of Disk Slice
First Post!
Sorry, I had to get that out of the way to break the ice...
I was wondering if it makes sense to zone ZFS pools by disk slice, and if it makes a difference with RAIDZ. As I'm sure we're all aware, the end of a drive is half as fast as the beginning (where the zoning stipulates that the physical outside is the beginning and going towards the spindle increases hex
2007 Aug 30
15
ZFS, XFS, and EXT4 compared
I have a lot of people whispering "zfs" in my virtual ear these days,
and at the same time I have an irrational attachment to xfs based
entirely on its lack of the 32000 subdirectory limit. I'm not afraid of
ext4's newness, since really a lot of that stuff has been in Lustre for
years. So a-benchmarking I went. Results at the bottom:
2007 Oct 02
53
Direct I/O ability with zfs?
We are using MySQL, and love the idea of using zfs for this. We are used to using Direct I/O to bypass file system caching (let the DB do this). Does this exist for zfs?
2006 Sep 07
5
Performance problem of ZFS ( Sol 10U2 )
...'zpool iostat -v 1' that writes are issued to disk only once in 10 secs, and then it's 2000 rq in one sec.
Reads are sustained at approx. 800 rq/s.
Is there a way to tune this read/write ratio? Is this a known problem?
I tried to change vq_max_pending as suggested by Eric in http://blogs.sun.com/erickustarz/entry/vq_max_pending
But no change in this write behaviour.
iostat shows approx. 20-30 ms asvc_t, 0%w, and approx. 30% busy on all drives, so these are not saturated it seems (before, with UFS, they had 90% busy, 1% wait).
System is Sol 10 U2, sun x4200, 4GB RAM.
Please if you could give me some hint to real...
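For reference, the vdev queue depth later became tunable from /etc/system; a hedged fragment, assuming a build that exposes the zfs_vdev_max_pending tunable (not all Solaris 10 updates do, and the value 10 is only an example):

```
* /etc/system fragment: cap the per-vdev queue of in-flight I/Os.
* Assumes a ZFS build that exposes zfs_vdev_max_pending; reboot to apply.
set zfs:zfs_vdev_max_pending = 10
```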
2007 Sep 14
9
Possible ZFS Bug - Causes OpenSolaris Crash
I'd like to report the ZFS related crash/bug described below. How do I go about reporting the crash and what additional information is needed?
I'm using my own very simple test app that creates numerous directories and files of randomly generated data. I have run the test app on two machines, both 64 bit.
OpenSolaris crashes a few minutes after starting my test app. The crash has occurred on
2008 May 01
9
ZFS and Linux
Hi All ;
What is the status of ZFS on Linux, and which kernels are supported?
Regards
Mertol
Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +902123352222
Email mertol.ozyoney at
2007 May 29
6
NCQ performance
I've been looking into the performance impact of NCQ. Here's what I
found out:
http://blogs.sun.com/erickustarz/entry/ncq_performance_analysis
Curiously, there's not too much performance data on NCQ available via
a google search ...
enjoy,
eric
2006 Nov 09
16
Some performance questions with ZFS/NFS/DNLC at snv_48
Hello.
We're currently using a Sun Blade1000 (2x750MHz, 1G ram, 2x160MB/s mpt
scsi buses, skge GigE network) as a NFS backend with ZFS for
distribution of free software like Debian (cdimage.debian.org,
ftp.se.debian.org) and have run into some performance issues.
We are running SX snv_48 and have run with a raidz2 with 7x300G for a
while now, just added another 7x300G raidz2 today but
2007 Oct 10
6
server-reboot
Hi.
Just migrated to zfs on opensolaris. I copied data to the server using
rsync and got this message:
Oct 10 17:24:04 zetta ^Mpanic[cpu1]/thread=ffffff0007f1bc80:
Oct 10 17:24:04 zetta genunix: [ID 683410 kern.notice] BAD TRAP:
type=e (#pf Page fault) rp=ffffff0007f1b640 addr=fffffffecd873000
Oct 10 17:24:04 zetta unix: [ID 100000 kern.notice]
Oct 10 17:24:04 zetta unix: [ID 839527 kern.notice]
2005 Nov 20
2
ZFS & small files
First - many, many congrats to team ZFS. Developing/writing a new Unix fs
is a very non-trivial exercise with zero tolerance for developer bugs.
I just loaded build 27a on a w1100z with a single AMD 150 CPU (2Gb RAM) and
a single (for now) SCSI disk drive: FUJITSU MAP3367NP (Revision: 0108)
hooked up to the built-in SCSI controller (the only device on the SCSI
bus).
My initial ZFS test was to