Displaying 5 results from an estimated 5 matches for "segmap".
2006 Oct 31 · 0 replies · 6256083 Need a lightweight file page mapping mechanism to substitute segmap
Author: praks
Repository: /hg/zfs-crypto/gate
Revision: 4c3b7ab574cc73502effa96c11c293e04fd54309
Log message:
6256083 Need a lightweight file page mapping mechanism to substitute segmap
6387639 segkpm segment set to incorrect size for amd64
Files:
create: usr/src/uts/common/vm/vpm.c
create: usr/src/uts/common/vm/vpm.h
update: usr/src/pkgdefs/SUNWhea/prototype_com
update: usr/src/uts/common/Makefile.files
update: usr/src/uts/common/fs/nfs/nfs3_vnops.c
update: usr/src/uts/com...
2010 Apr 28 · 3 replies · Solaris 10 default caching segmap/vpm size
What's the default size of the file system cache for Solaris 10 x86, and can it be tuned?
I read various posts on the subject and it's confusing.
--
This message posted from opensolaris.org
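
For context (not part of the original thread): on Solaris 10 x86 the segmap cache defaults to 64 MB and can be resized with the segmapsize tunable. A minimal /etc/system sketch, assuming a machine where a larger segmap is appropriate (the 256 MB value is hypothetical):

```shell
# /etc/system fragment (hypothetical value): grow the segmap cache
# from the 64 MB x86 default to 256 MB; takes effect after a reboot.
set segmapsize=0x10000000
```

Segmap hit/miss activity can be observed with `kstat -n segmap`, which is the statistic fcachestat (mentioned in the next thread) samples.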
2008 Feb 19 · 1 reply · ZFS and small block random I/O
...all block random I/O on ZFS, or any other documents that
might explain what we're seeing and give us guidance on how to most
effectively deploy ZFS in an environment with heavy small block random
I/O.
The first anomaly: Brendan Gregg's CacheKit Perl script fcachestat shows
the segmap cache is hardly used (occasionally during the IOzone random
read benchmark, while the disks are grabbing 20MB/s in aggregate, the
segmap cache gets 100% hits for 1-3 attempts *every 10 seconds*, while
all other samples are 0% for zero attempts). I don't know the kernel
I/O path as well...
2010 Dec 21 · 5 replies · relationship between ARC and page cache
One thing I've been confused about for a long time is the relationship
between ZFS, the ARC, and the page cache.
We have an application that's a quasi-database. It reads files by
mmap()ing them. (writes are done via write()). We're talking 100TB of
data in files that are 100k->50G in size (the files have headers to tell
the app what segment to map, so mapped chunks
2008 Nov 29 · 75 replies · Slow death-spiral with zfs gzip-9 compression
I am [trying to] perform a test prior to moving my data to Solaris and ZFS. Things are going very poorly. Please suggest what I might do to understand what is going on, file a meaningful bug report, fix it, whatever!
Both to learn what the compression could be, and to induce a heavy load to expose issues, I am running with compress=gzip-9.
I have two machines, both identical 800MHz P3 with
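
For context: the compression setting being tested is a per-dataset ZFS property. A minimal sketch of enabling it, assuming hypothetical pool and dataset names (`tank/test`):

```shell
# Hypothetical names; compression=gzip-9 is the real property value the
# poster uses (maximum gzip effort, hence the heavy CPU load).
zfs create tank/test
zfs set compression=gzip-9 tank/test
zfs get compression tank/test      # verify the setting
zfs get compressratio tank/test    # observe the achieved ratio later
```

These commands only run on a system with ZFS; gzip-9 trades much higher CPU cost per block for a better compression ratio than the default lzjb.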