similar to: mdb ::memstat including zfs buffer details?

Displaying 20 results from an estimated 400 matches similar to: "mdb ::memstat including zfs buffer details?"

2008 Mar 27
3
kernel memory and zfs
We have a 32 GB RAM server running about 14 zones. There are multiple databases, application servers, web servers, and FTP servers running in the various zones. I understand that using ZFS will increase kernel memory usage; however, I am a bit concerned at this point.

root at servername:~/zonecfg # mdb -k
Loading modules: [ unix krtld genunix specfs dtrace uppc pcplusmp ufs md mpt ip indmux ptm
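A minimal way to see where that kernel memory is actually going (a sketch, assuming mdb is available and the zfs module is loaded; on some builds ::memstat breaks ZFS file data out into its own line) is to run the dcmd non-interactively and compare it with the ARC size kstat:

# echo "::memstat" | mdb -k          (page/MB breakdown; look for Kernel and, where present, ZFS File Data)
# kstat -p zfs:0:arcstats:size       (current ARC size in bytes, i.e. the ZFS cache part of "Kernel")

Running both before and after the databases warm up shows whether the growth tracks the ARC or other kernel allocations.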
2012 Jan 03
10
arc_no_grow is set to 1 and never set back to 0
Hello. I have a Solaris 11/11 x86 box (which I migrated from SolEx 11/10 a couple of weeks ago). For no obvious reason (at least to me), after an uptime of 1 to 2 days (observed 3 times now) Solaris sets arc_no_grow to 1 and then never sets it back to 0. The ARC is shrunk to less than 1 GB -- needless to say, performance is terrible. There is not much load on this system. Memory
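When this happens, a quick way to watch the relevant state (a sketch; arc_no_grow is a module-local variable, so whether the symbol resolves this way can vary between builds) is:

# echo "::arc" | mdb -k                              (ARC size plus the c/c_min/c_max targets)
# echo "arc_no_grow/D" | mdb -k                      (prints the flag; 1 means the ARC will not grow)
# kstat -p zfs:0:arcstats:size zfs:0:arcstats:c      (same numbers from kstat, easy to sample in a loop)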
2009 Oct 15
8
sub-optimal ZFS performance
Hello, ZFS is behaving strangely on an OSOL laptop; your thoughts are welcome. I am running OSOL on my laptop, currently b124, and I found that the performance of ZFS is not optimal in all situations. If I check how much space the package cache for pkg(1) uses, it takes a bit longer on this host than on a comparable machine to which I transferred all the data.

user at host:/var/pkg$ time
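A rough way to compare the two hosts (a sketch, not taken from the report) is to time the metadata-heavy walk while sampling the ARC hit/miss counters, which helps separate slow disks from a cache that is not doing its job:

$ time du -s /var/pkg                                      (wall-clock and CPU time for the scan)
$ kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses       (sample before and after, then diff the counters)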
2007 Jan 23
0
Understanding ::memstat in terms of the ARC
Hello all, I have a question. Below are two ::memstat outputs taken about 5 days apart. The interesting thing is that "anonymous" memory shows 2GB, though the two major hogs of that memory (two MySQL instances) claim to be consuming about 6.2GB (checked via pmap). Also, it seems the ARC keeps pushing kernel memory over the 4GB limit I set for the ARC (zfs_arc_max). What I was also,
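One way to line the numbers up (a sketch; the mysqld process name is only an assumption about what the MySQL instances are called) is to compare per-process resident sizes with the system-wide buckets and the ARC size:

# pmap -x $(pgrep mysqld) | grep -i total     (resident vs. virtual totals per MySQL instance)
# echo "::memstat" | mdb -k                   (system-wide Anon / Kernel breakdown)
# kstat -p zfs:0:arcstats:size                (how much of Kernel is ARC)

Keep in mind that pmap totals count shared pages in every process that maps them, so summing them can overstate what ::memstat reports as Anon.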
2009 Jul 09
3
performance troubleshooting
We have a serious performance problem on our server. Here is some data:

> ::memstat
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                    1133252              4426   31%
Anon                      1956988              7644   53%
Exec and libs               31104               121    1%
Page cache
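From the same mdb -k session, the usual next step when the Kernel bucket looks too big (a sketch; which dcmds are available depends on the build) is to break it down further:

> ::kmastat        (per-cache kernel allocator usage; scan the memory-in-use column for the big consumers)
> ::arc            (ARC size and targets, since the ARC is also counted under Kernel)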
2006 Nov 09
16
Some performance questions with ZFS/NFS/DNLC at snv_48
Hello. We're currently using a Sun Blade 1000 (2x750MHz, 1GB RAM, 2x160MB/s mpt SCSI buses, skge GigE network) as an NFS backend with ZFS for distribution of free software like Debian (cdimage.debian.org, ftp.se.debian.org) and have run into some performance issues. We are running SX snv_48 and have been running a raidz2 of 7x300G for a while now, and just added another 7x300G raidz2 today, but
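For the DNLC part of this, checking the hit rate first and only then growing the cache is the usual order (a sketch; the ncsize value is purely illustrative and only takes effect after a reboot):

# kstat -n dnlcstats          (hits vs. misses for directory-name lookups)

and, if the hit rate turns out to be poor, in /etc/system:

* example only: enlarge the directory name lookup cache
set ncsize = 262144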
2010 Apr 02
0
ZFS behavior under limited resources
I am trying to see how ZFS behaves under resource starvation - corner cases in embedded environments. I see some very strange behavior. Any help/explanation would really be appreciated. My current setup is: OpenSolaris 111b (iSCSI seems to be broken in 132 - unable to get multiple connections/multipathing), an iSCSI storage array that is capable of 20 MB/s random writes @ 4k and 70 MB random reads
2007 Apr 19
5
Available free memory.
Hi, can I use DTrace to determine memory status? 1. Total physical memory and used memory. 2. Total swap space and used swap space. I did find a few DTrace scripts, but they had too much in them and I was unable to chop out the unwanted lines of code due to lack of knowledge. It would be very helpful if someone could share a piece of code that serves the purpose mentioned above. Regards, Ramesh. Ramesh
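A minimal sketch that covers both items (assuming an x86 box with 4 KB pages; the 10-second interval and the MB formatting are arbitrary choices), together with the stock commands that already report the same numbers:

# dtrace -qn 'tick-10s { printf("free RAM: %d MB\n", (int)(`freemem * 4096 / 1048576)); }'
# kstat -p unix:0:system_pages:physmem unix:0:system_pages:freemem    (total and free RAM, in pages)
# swap -s                                                             (swap allocated, reserved, used and available)

If only the installed total is needed, prtconf | grep Memory gives it in one line.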
2010 Apr 05
0
Why does ARC grow above hard limit?
I would appreciate it if somebody could clarify a few points. I am doing some random WRITE (100% writes, 100% random) testing and observe that the ARC grows way beyond the "hard" limit during the test. The hard limit is set to 512 MB via /etc/system, yet I see the size going up to 1 GB - how is that happening? mdb's ::memstat reports 1.5 GB used - does this include ARC as well or is
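For reference, this is how such a cap is typically set and then checked (a sketch; the 512 MB value is just the one mentioned above). Note that ::memstat's Kernel bucket does include the ARC, and the ARC can overshoot c_max transiently when dirty data from a heavy write load cannot be evicted fast enough:

/etc/system (takes effect at the next boot):
* cap the ARC at 512 MB
set zfs:zfs_arc_max = 0x20000000

# echo "::arc" | mdb -k                                  (compare size, c and c_max during the test)
# kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max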
2006 Apr 06
4
Why is my kernel eating my memory
Can someone more learned in the ways of dtrace point me at what to look at to help understand why the kernel on one machine is using tons of memory, while another machine doing the same task with the same user load is not. swapinfo for the "afflicted" machine shows:

RAM  _______Total  16384 Mb
RAM    Unusable       73 Mb
RAM      Kernel     9226 Mb
RAM      Locked        2 Mb
RAM        Used
2010 Nov 11
4
[PATCH]: An implementation of HyperV KVP functionality
I am enclosing a patch that implements the KVP (Key Value Pair) functionality for Linux guests on HyperV. This functionality allows the Microsoft management stack to query information from the guest. It is implemented in two parts: (a) a kernel component that communicates with the host, and (b) a user-level daemon that does the data gathering. The attached patch (kvp.patch) implements
2010 Nov 22
1
[PATCH 3/3]: An implementation of HyperV KVP functionality
An implementation of the key/value pair (KVP) feature for Linux on HyperV. In this version of the patch I have addressed all the comments received to date. I have also included the code for the user-level daemon here for reference. Signed-off-by: K. Y. Srinivasan <ksrinivasan at novell.com>
2010 Dec 08
1
[PATCH 1/4] Add a connector Index to support HyperV KVP functionality
2010 Nov 22
2
[PATCH 1/3]: An implementation of HyperV KVP functionality
From: K. Y. Srinivasan <ksrinivasan at novell.com>
Subject: Reserve a connector index for implementing HyperV Key Value Pair (KVP) functionality.
Signed-off-by: K. Y. Srinivasan <ksrinivasan at novell.com>

Index: linux.trees.git/include/linux/connector.h
===================================================================
--- linux.trees.git.orig/include/linux/connector.h	2010-11-15
2007 Mar 15
20
C'mon ARC, stay small...
Running an mmap-intensive workload on ZFS on an X4500, Solaris 10 11/06 (update 3). All file IO is mmap(file), read memory segment, unmap, close. Tweaked the ARC size down via mdb to 1GB. I used that value because c_min was also 1GB, and I was not sure if c_max could be larger than c_min... Anyway, I set c_max to 1GB. After a workload run:

> arc::print -tad
{
. . .
ffffffffc02e29e8
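To see whether the mmap workload keeps pulling the cache back up, reading the targets from the running kernel is enough (a sketch; the member names follow the arc_t of that era, and writes via mdb -kw, as done in the thread, follow the same pattern but are best treated as a last resort):

> arc::print -tad size p c c_min c_max                   (current size and targets, with addresses)
# kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max      (same numbers, easy to sample from a shell loop)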
2006 Jul 20
3
Newbie question-----Downloading a file and sending it to DB
Hi everyone, I'm new to RoR and I'm working on a project that uploads information (key-value pairs) from a file into a database. Here is the situation: somewhere in this file there is a string table such as the following:

"hello" = "hello world"
"sayHIgh" = "Saying hello to the world"

I have parsed the files with a ruby script in such a