Displaying 20 results from an estimated 25 matches for "bourbonnais".
2006 Sep 28
13
jbod questions
Folks,
We are in the process of purchasing new SANs that our mail server
runs on (JES3). We have moved our mailstores to ZFS and continue to
have checksum errors -- they are corrected, which is an improvement on
the UFS inode errors that required a system shutdown and fsck.
So, I am recommending that we buy small JBODs, do raidz2, and let ZFS
handle the raiding of these boxes. As we need more
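For reference, a minimal sketch of the layout being proposed, with placeholder pool and device names (whatever the new JBODs actually present will differ):
# one double-parity raidz2 vdev across a JBOD's disks
zpool create mail raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
# per-device checksum error counts show up here
zpool status -v mail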
2009 Feb 02
8
ZFS core contributor nominations
The time has come to review the current Contributor and Core contributor
grants for ZFS. Since all of the ZFS core contributors grants are set
to expire on 02-24-2009 we need to renew the members that are still
contributing at core contributor levels. We should also add some new
members to both Contributor and Core contributor levels.
First the current list of Core contributors:
Bill
2006 Sep 26
8
Matching Malloc and Free
I would like to profile heap usage on a per-thread basis in a large application process. To do this I am tracking calls to malloc and free with the attached script. Everything seems to look OK with some simple test programmes; however, when I track the live system, the results suggest that one thread has grown by approximately 1GB, and I would be surprised if this were true because I ran pmap -x on
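A minimal sketch of one way to do this with the pid provider, keying allocation sizes by the returned address so frees can be matched up (run as dtrace -s heap.d -p PID; the script name is a placeholder):

pid$target::malloc:entry
{
/* remember the requested size for this thread's in-flight malloc */
self->size = arg0;
}

pid$target::malloc:return
/self->size/
{
/* key the size by the returned address so free() can find it */
allocated[arg1] = self->size;
/* credit the allocating thread with the new bytes */
@live[tid] = sum(self->size);
self->size = 0;
}

pid$target::free:entry
/allocated[arg0]/
{
/* debit the thread calling free, which may not be the allocator */
@live[tid] = sum(-allocated[arg0]);
allocated[arg0] = 0;
}

One caveat that may be relevant here: the bytes are debited against the freeing thread, so if one thread allocates and another frees, the allocating thread appears to grow without bound.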
2006 Mar 30
8
iostat -xn 5 _does not_ update: how to use DTrace
on Solaris 10
5.10 Generic_118822-23 sun4v sparc SUNW,Sun-Fire-T200
I run
#iostat -xn 5
to monitor the IO statistics on an SF T2000 server. The system also has a heavy IO load, and for some reason iostat does not refresh (no updates at all). It seems like iostat is calling pause() and is stuck there. Also, my HBA driver's interrupt stack trace indicates a lot of swtch(); the overall IOPS
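Two quick ways to see where it is stuck (a sketch; this assumes a single iostat process):
# show the current user stack of the stuck iostat
pstack `pgrep iostat`
# or count which system calls, if any, it is still making
dtrace -n 'syscall:::entry /execname == "iostat"/ { @[probefunc] = count(); }'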
2006 May 08
13
monitoring tcp writes
I'm using the following probe to calculate how many bytes are being written by tcp write calls, by process and in total:
fbt:ip:tcp_output:entry
{
/* msgdsize() returns the number of data bytes in the message */
this->tcpout_size = msgdsize(args[1]);
/* per-process byte count, keyed by executable name */
@tcpout_size[execname] = sum(this->tcpout_size);
/* grand total across all processes */
@tcpout_size["TOTAL_TCP_OUT"] = sum(this->tcpout_size);
}
I run this probe for N seconds.
I suppose that if I get the
2007 Apr 27
2
ARC, mmap, pagecache...
Hi,
I was wondering about the ARC and its interaction with the VM
pagecache... When a file on a ZFS filesystem is mmaped, does the ARC
cache get mapped to the process's virtual memory? Or is there another copy?
-Manoj
2005 Nov 25
28
ZFS and memcntl(..., MC_SYNC, ...)
It wouldn't be proper to start my first post here without congratulations
and thanks to the ZFS team for such an impressive piece of work.
Anyway, on to my query. I've been trying out ZFS, with a particular focus on
reducing latency in a specific application. This application has a fair
amount of random writing going on in the background (which, of course, ZFS
will make
2006 Apr 06
15
A few Newbie questions about RAIDZ
1. I have a 4x18GB drive setup as RAIDZ. Now when thinking about it
in terms of RAID5 I would expect to get (4-1)x18 worth of drive
space, but df -h shows 4x18. Is this a bug, or do I not understand?
2. Once again thinking in RAID5 terms: if I have 4x18GB and 12x9GB
drives and I want to make a RAIDZ of all of them, I would expect the
18GB drives to be treated as 9GB, so the RAIDZ would be 16x9GB. Is
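For what it's worth, the back-of-the-envelope capacity math for both cases (single parity, ignoring metadata overhead):
usable ~= (disks - parity) x smallest drive
1. 4x18GB RAIDZ: (4 - 1) x 18GB = 54GB usable, 72GB raw
2. 4x18GB + 12x9GB in one RAIDZ: all 16 drives treated as 9GB,
   so (16 - 1) x 9GB = 135GB usable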
2006 May 31
12
3510 configuration for ZFS
hi all,
I am hoping to move roughly 1TB of maildir format email to ZFS, but
I am unsure of what the most appropriate disk configuration on a 3510
would be.
based on the desired level of redundancy and usable space, my thought
was to create a pool consisting of 2x RAID-Z vdevs (either double
parity, or single parity with two hot spares). Using 300GB drives
this would give roughly 2.4TB of usable
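A sketch of the double-parity variant of that layout, with placeholder device names (2 vdevs x (6 - 2) x 300GB ~= 2.4TB usable):
zpool create mailpool \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
    raidz2 c2t8d0 c2t9d0 c2t10d0 c2t11d0 c2t12d0 c2t13d0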
2012 Jun 06
24
Occasional storm of xcalls on segkmem_zio_free
So I have this dual 16-core Opteron Dell R715 with 128G of RAM attached
to a SuperMicro disk enclosure with 45 2TB Toshiba SAS drives (via two
LSI 9200 controllers and MPxIO) running OpenIndiana 151a4 and I'm
occasionally seeing a storm of xcalls on one of the 32 VCPUs (>100000
xcalls a second). The machine is pretty much idle, only receiving a
bunch of multicast video streams and
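A one-liner sketch that attributes the cross-calls to kernel stacks, which should show whether segkmem_zio_free is on the hot path:
# count kernel stacks that trigger cross-calls
dtrace -n 'sysinfo:::xcalls { @[stack()] = count(); }'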
2007 Feb 27
16
understanding zfs/thumper "bottlenecks"?
Currently I'm trying to figure out the best ZFS layout for a thumper with respect to read AND write performance.
I did some simple mkfile 512G tests and found that, on average, ~500 MB/s seems to be the maximum one can reach (I tried the initial default setup, all 46 HDDs as R0, etc.).
According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would
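For anyone repeating the test, a sketch of the combination (pool name hypothetical):
mkfile 512g /thumper/testfile &
zpool iostat -v thumper 5    # per-vdev bandwidth every 5 seconds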
2006 Oct 26
3
Re: ZFS hangs systems during copy
> ZFS 11.0 on Solaris release 06/06, hangs systems when
> trying to copy files from my VXFS 4.1 file system.
> any ideas what this problem could be?
What kind of system is that? How much memory is installed?
I'm able to hang an Ultra 60 with 256 MByte of main memory,
simply by writing big files to a ZFS filesystem. The problem
happens with both Solaris 10 6/2006 and Solaris
2008 Feb 15
38
Performance with Sun StorageTek 2540
Under Solaris 10 on a 4 core Sun Ultra 40 with 20GB RAM, I am setting
up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives and
connected via load-shared 4Gbit FC links. This week I have tried many
different configurations, using firmware-managed RAID, ZFS-managed
RAID, and with the controller cache enabled or disabled.
My objective is to obtain the best single-file write performance.
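A sketch of a single-file sequential write test, sized past the 20GB of RAM so the ARC cannot hide the result (the path is a placeholder):
# write 32GB sequentially to one file
/usr/bin/time dd if=/dev/zero of=/tank/bigfile bs=1024k count=32768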
2007 Jan 08
11
NFS and ZFS, a fine combination
Just posted:
http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine
Performance, Availability & Architecture Engineering
Roch Bourbonnais Sun Microsystems, Icnc-Grenoble
Senior Performance Analyst 180, Avenue De L'Europe, 38330,
Montbonnot Saint Martin, France
http://icncweb.france/~rbourbon http://blogs.sun.com/roch
Roch.Bourbonnais at Sun.Com (+33).4.76.18.83.20
2007 May 29
6
NCQ performance
I've been looking into the performance impact of NCQ. Here's what I
found out:
http://blogs.sun.com/erickustarz/entry/ncq_performance_analysis
Curiously, there''s not too much performance data on NCQ available via
a google search ...
enjoy,
eric
2007 Mar 15
20
C'mon ARC, stay small...
Running an mmap-intensive workload on ZFS on a X4500, Solaris 10 11/06
(update 3). All file IO is mmap(file), read memory segment, unmap, close.
Tweaked the arc size down via mdb to 1GB. I used that value because
c_min was also 1GB, and I was not sure if c_max could be larger than
c_min.... Anyway, I set c_max to 1GB.
After a workload run....:
> arc::print -tad
{
. . .
ffffffffc02e29e8
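For the archives, the commonly posted recipe looks roughly like this (a sketch; symbol layout varies by kernel release, and ADDR stands for whatever address the first command prints):
# mdb -kw
> arc::print -a c_max
> ADDR/Z 0x40000000    (write 1GB to c_max as a 64-bit value)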
2008 Jun 30
20
Some basic questions about getting the best performance for database usage
I'm new to OpenSolaris and very new to ZFS. In the past we have always used Linux for our database backends.
So now we are looking for a new database server to give us a big performance boost, and also scalability.
Our current database consists mainly of a huge table containing about 230 million records and a few (relatively) smaller tables (something like 13 million
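One commonly discussed knob for database workloads is matching the dataset recordsize to the database block size; a sketch (8k and the dataset name are assumptions):
# set before loading data; recordsize only affects newly written files
zfs set recordsize=8k tank/db
zfs get recordsize tank/db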
2007 Jun 20
14
Z-Raid performance with Random reads/writes
Given a 1.6TB ZFS Z-Raid consisting of 6 disks,
and a system that does an extreme amount of small (<20K) random reads
(more than twice as many reads as writes):
1) What performance gains, if any, does Z-Raid offer over other RAID or
large filesystem configurations?
2) What hindrance, if any, is Z-Raid to this configuration, given the
complete randomness and size of these accesses?
Would
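As a rough sketch of the usual reasoning (the per-disk IOPS figure is assumed): a small random read from a RAID-Z stripe touches all of its data disks, so each RAID-Z vdev delivers roughly the random-read IOPS of one disk:
one 6-disk raidz vdev, ~150 IOPS/disk:   1 x 150 ~= 150 read IOPS
three 2-way mirrors from the same disks: 3 x 150 ~= 450, and up to
~900, since mirror reads can be served by either side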
2007 May 02
41
gzip compression throttles system?
I just had a quick play with gzip compression on a filesystem and the
result was the machine grinding to a halt while copying some large
(.wav) files to it from another filesystem in the same pool.
The system became very unresponsive, taking several seconds to echo
keystrokes. The box is a maxed out AMD QuadFX, so it should have plenty
of grunt for this.
Comments?
Ian
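For reference, compression is a per-dataset property; a sketch of the knobs involved (the dataset name is a placeholder):
# the default is off; compression=on selects lzjb, which is much
# cheaper in CPU terms than gzip
zfs set compression=gzip pool/wavs
zfs set compression=gzip-1 pool/wavs    # gzip-1..gzip-9 trade ratio for CPU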
2007 Oct 02
53
Direct I/O ability with zfs?
We are using MySQL, and love the idea of using ZFS for this. We are used to using Direct I/O to bypass file system caching (letting the DB do it). Does this exist for ZFS?
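ZFS at the time had no direct I/O mode, but later builds added a per-dataset cache policy that approximates part of it; a sketch, assuming a build with the primarycache property and a hypothetical dataset name:
# keep only metadata in the ARC; let the DB's buffer pool cache data
zfs set primarycache=metadata tank/mysql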