Hi,

I am new to DTrace. I have a C++ application with multiple modules, and I want to test for memory leaks across the entire project. I tested some sample C++ programs with DTrace, but all I am getting is a count of news and deletes. What I want is detail on what is causing the memory leaks in the application. Can I get that from DTrace? Otherwise, please suggest how I might approach this, apart from PurifyPlus.

Thanks,
Venkat
max at bruningsystems.com
2008-Nov-21 14:40 UTC
[dtrace-discuss] C++ Applications with Dtrace
Hi Venkat,

venkat wrote:
> I want to test for memory leaks across the entire project. I tested sample C++
> programs with DTrace, but all I am getting is a count of news and deletes. I want
> details of the causes of the memory leaks in the application.

Take a look at http://developers.sun.com/solaris/articles/dtrace_cc.html

max
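For readers following the article, the core of the DTrace approach is to trace the allocator entry points with the pid provider and aggregate by user stack. A minimal sketch (the script and program names are hypothetical; it traces malloc/free, which is what operator new and delete ultimately call, as the stack traces later in this thread show):

/* allocs.d -- count allocations and frees by user stack.
 * Run as:  dtrace -s allocs.d -c ./my_program   (or: -p <pid>)
 */
pid$target::malloc:entry
{
        /* one count per distinct allocating call path */
        @allocs[ustack()] = count();
}

pid$target::free:entry
{
        @frees[ustack()] = count();
}

The aggregations are printed automatically when the target exits. This gives counts by call path, which is what Venkat describes; matching individual allocations to their corresponding frees, to find true leaks, is what the libumem/::findleaks approach in the next reply does far more directly.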
Thank you very much.
On Fri, Nov 21, 2008 at 03:40:04PM +0100, max at bruningsystems.com wrote:
> Take a look at http://developers.sun.com/solaris/articles/dtrace_cc.html

You may also want to use libumem; it's a better tool for memory leak detection. Doing something like:

% LD_PRELOAD=libumem.so \
  UMEM_DEBUG=audit=30,maxverify=0,verbose \
  ./my_program

(the "30" in "audit=30" is how many stack frames to record)

while it's running, you can do:

% mdb -p pid
Loading modules: [ ... ]
> $G
C++ symbol demangling enabled
> ::findleaks -d

That will report all leaked buffers, grouped by size and stack trace. You can do ":c" to continue the process, or "$q" to quit MDB.

Cheers,
- jonathan
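To make that workflow concrete, here is a minimal, deliberately leaky C++ program (entirely hypothetical; the class name mirrors the report Venkat posts below) that can be started under libumem exactly as described and will show up in ::findleaks:

// leak_demo.cc -- toy program with an obvious leak (hypothetical example)
#include <cstdlib>
#include <cstring>
#include <unistd.h>

class TestClass {
public:
    char *ClassName() const {
        // Allocated but never freed: this is what ::findleaks will report.
        char *name = static_cast<char *>(std::malloc(32));
        std::strcpy(name, "TestClass");
        return name;
    }
};

int main() {
    TestClass t;
    for (int i = 0; i < 12; ++i)
        t.ClassName();          // 12 leaked buffers
    pause();                    // keep the process alive so mdb -p / gcore can attach
    return 0;
}

Build it, start it with the LD_PRELOAD/UMEM_DEBUG environment shown above, attach with mdb -p, and ::findleaks -d should report the leaked buffers with a stack trace running through TestClass::ClassName().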
Thanks Jonathan, thank you very much.
Hi,

umem_alloc_1152 leak: 12 buffers, 1152 bytes each, 13824 bytes total
    ADDR   BUFADDR        TIMESTAMP THREAD  CACHE LASTLOG CONTENTS
   cbf40     c8fc0   23a15b6615b810      1  47308       0        0
         libumem.so.1`umem_cache_alloc+0x13c
         libumem.so.1`umem_alloc+0x60
         libumem.so.1`malloc+0x28
         char*TestClass::ClassName() const+8
         main+0xfc
         _start+0x108

With the MDB approach I am able to get memory leaks like the trace above. In char*TestClass::ClassName() const I just allocate memory to one variable and never release it, so it is fine that I can see that leak. But can I find out which variable caused the leak? In my project I have a lot of statements in a single function, so can we get the trace all the way down to the variable?

Also, my project is thread-based: multiple threads run in parallel. Do we follow the same procedure there, or do we need any extra options for threads?

Thanks in advance,
Venkat
Hi,

I am using DTrace to find memory leaks. The delete probe function reports values like:

void operator delete(void*): 0
void operator delete(void*): 8eee0

What does the 0 value mean for the delete probe function? Can anybody help me with this?

Thanks,
Venkat
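One way to see where such values come from is to print the pointer argument passed to operator delete. A sketch for a g++-built binary, where operator delete(void*) has the mangled name _ZdlPv (an assumption about the toolchain; adjust the probe function name for your compiler, or use the demangled name where demangling is supported):

/* delete.d -- print the address handed to operator delete(void*) */
pid$target::_ZdlPv:entry
{
        printf("void operator delete(void*): %x", arg0);
        ustack();
}

An argument of 0 most likely means the program executed delete on a null pointer, which is legal C++ and a no-op for the allocator, so by itself it is not a sign of a problem.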
James Litchfield
2008-Dec-27 04:57 UTC
[dtrace-discuss] S10U4 _ DTrace - Neverending ENOENTs
When I finished with a DTrace script that created some 2 MB aggregations on a 128-CPU 5240, I noted that DTrace seemed hung. Trussing it produced a long stream of:

ioctl(3, DTRACEIOC_BUFSNAP, 0x100111A58) Err#2 ENOENT
ioctl(3, DTRACEIOC_BUFSNAP, 0x100111A58) Err#2 ENOENT
ioctl(3, DTRACEIOC_BUFSNAP, 0x100111A58) Err#2 ENOENT

Out of space? Something else? It seems as if retrying is pretty hopeless, so why not just quit?

Jim Litchfield
Hi,

I am using MDB to find memory leaks, and I am getting a good report from it.

For one process, MDB shows zero memory leaks, but the process size keeps increasing day by day. How can we find the reason for the increasing size? If anybody knows how to track this down, please let me know.

I have another process with the same problem. For that process MDB does show some memory leaks, but the cause of those leaks is memory I allocate to some variables only at process startup; after that I just keep using those variables. So MDB shows leaks only for the variables allocated once at startup, yet the process size keeps increasing.

Thanks in advance,
Venkat
venkat <venki.dammalapati at gmail.com> wrote:
> I am using MDB to find memory leaks, and I am getting a good report from it.

Are you using mdb with libumem for this, or something else?

> For one process, MDB shows zero memory leaks, but the process size keeps
> increasing day by day. How can we find the reason for the increasing size?

If you are using libumem, you could use the ::umausers dcmd to try to find the biggest users of the allocator. However, that only works for allocations of less than 16k. You can tell if you are making allocations greater than 16k by checking for non-zero usage of umem_oversize, as reported by ::umastat.

I recently blogged about finding memory leaks with libumem at http://blogs.sun.com/dlutz/entry/memory_leak_detection_with_libumem but only addressed the case where you actually lose track of memory you allocated, not where you continually grow your memory footprint but still maintain pointers to all of the memory.

HTH,
David
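That last distinction is worth illustrating: a process can grow indefinitely without ::findleaks ever reporting anything, because every byte is still reachable. A contrived C++ sketch of the pattern (all names are hypothetical):

// grow_forever.cc -- no leak in the ::findleaks sense, but the footprint climbs
#include <string>
#include <vector>

static std::vector<std::string> history;      // never pruned

void handle_request(const std::string &msg)
{
    // Every request is appended and stays reachable forever, so nothing is
    // "leaked", yet "memory in use" keeps growing in the allocator caches.
    history.push_back(msg);
}

In this situation ::findleaks stays quiet; the tools that help are ::umastat (watch which cache's "memory in use" keeps growing) and ::umausers (see which allocation stacks own those buffers), which is the procedure discussed next.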
Hi David,

I am using mdb with libumem only:

LD_PRELOAD=libumem.so UMEM_DEBUG=audit=30,maxverify=0,verbose ./myservice

While my service was running I took core dumps with gcore three times, with some time gap in between, and applied the dcmds to each core.

When I ran the ::umastat dcmd I saw that some memory allocations and in-use sizes had increased. Here are the details for one cache across the three cores:

cache                       buf    buf    buf    memory     alloc alloc
name                       size in use  total    in use   succeed  fail

umem_alloc_64                64  24637  25024   3203072    172506     0
umem_alloc_64                64  24637  25088   3211264    417738     0
umem_alloc_64                64  24972  25088   3211264   1215782     0

Is this increase the cause of my problem? Please tell me how I can get the allocation stacks for that cache.

Thanks in advance,
Venkat
Hi Venkat,

Try running "::umausers umem_alloc_64" to see the top users of that cache. The output will be a list of stack traces and their allocation stats.

David

----- Original Message -----
From: venkat <venki.dammalapati at gmail.com>
Date: Friday, January 16, 2009 12:04 pm

> While my service was running I took core dumps with gcore three times, with some
> time gap in between, and applied the dcmds to each core. When I ran ::umastat I
> saw that some memory allocations and in-use sizes had increased.
[snip]
Hi David,

What is the "alloc succeed" column in the ::umastat output? That value keeps increasing. Is that memory occupied by the process? Is that why my process memory usage also keeps increasing?

Can you clarify, please?

Thanks,
Venkat
Hi Venkat,

I believe "alloc succeed" is a count of memory requests that were successful. That memory may have been freed later, so it doesn't necessarily point to the reason for a growing memory footprint. The column to be concerned with is "memory in use".

David

----- Original Message -----
From: venkat <venki.dammalapati at gmail.com>
Date: Friday, January 16, 2009 2:44 pm

> What is the "alloc succeed" column in the ::umastat output? That value keeps
> increasing. Is that memory occupied by the process?
[snip]
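Since "memory in use" is the column to watch, one practical way to apply this is to capture ::umastat from successive gcore snapshots (as Venkat is already doing) and diff them, so the growing cache stands out. A small shell sketch; the service name, output prefixes, and the one-hour interval are all hypothetical:

pid=$(pgrep myservice)                 # assumes a single long-running instance

gcore -o snap1 $pid                    # first snapshot
sleep 3600                             # let the service run for a while
gcore -o snap2 $pid                    # second snapshot

# dump allocator stats from each core and compare
echo ::umastat | mdb snap1.$pid > umastat.1
echo ::umastat | mdb snap2.$pid > umastat.2
diff umastat.1 umastat.2               # the cache whose "memory in use" keeps growing is the suspect

Once a cache stands out, "::umausers <cache_name>" on the newer core shows the stacks responsible, as David describes above.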
Pavesi, Valdemar (NSN - US/Boca Raton)
2009-Jan-16 23:22 UTC
[dtrace-discuss] C++ Applications with Dtrace
Hello,

I have an example of a memory leak. What does the alloc fail = 335 mean?

# mdb -p 1408
Loading modules: [ ld.so.1 libumem.so.1 libc.so.1 libuutil.so.1 ]
> ::findleaks -dv
findleaks: maximum buffers => 14920
findleaks: actual buffers => 14497
findleaks:
findleaks: potential pointers => 316574898
findleaks:          dismissals => 309520985 (97.7%)
findleaks:              misses => 6929221 ( 2.1%)
findleaks:                dups => 110601 ( 0.0%)
findleaks:             follows => 14091 ( 0.0%)
findleaks:
findleaks: elapsed wall time => 54 seconds
findleaks:
   BYTES  LEAKED  VMEM_SEG CALLER
    4096       4  fffffd7ffc539000 MMAP
   16384       1  fffffd7ffe83d000 MMAP
    4096       1  fffffd7ffe812000 MMAP
    8192       1  fffffd7ffd7bc000 MMAP
   24016     397  124a2a0 libstdc++.so.6.0.8`_Znwm+0x1e
------------------------------------------------------------------------
   Total     401 oversized leaks, 9567120 bytes

   CACHE            LEAKED  BUFCTL           CALLER
00000000004cf468         1  000000000050ed20 libstdc++.so.6.0.8`_Znwm+0x1e
00000000004cf468         1  000000000050c000 libstdc++.so.6.0.8`_Znwm+0x1e
00000000004cf468         1  000000000050ea80 libstdc++.so.6.0.8`_Znwm+0x1e
00000000004cf468         1  000000000050c0e0 libstdc++.so.6.0.8`_Znwm+0x1e
00000000004cf468         1  000000000050ee00 libstdc++.so.6.0.8`_Znwm+0x1e
----------------------------------------------------------------------
   Total       5 buffers, 80 bytes

mmap(2) leak: [fffffd7ffc539000, fffffd7ffc53a000), 4096 bytes
mmap(2) leak: [fffffd7ffe83d000, fffffd7ffe841000), 16384 bytes
mmap(2) leak: [fffffd7ffe812000, fffffd7ffe813000), 4096 bytes
mmap(2) leak: [fffffd7ffd7bc000, fffffd7ffd7be000), 8192 bytes
umem_oversize leak: 397 vmem_segs, 24016 bytes each, 9534352 bytes total
    ADDR TYPE    START      END     SIZE  THREAD      TIMESTAMP
 124a2a0 ALLC  1252000  1257dd0    24016       1   56bd6f2a6fe1
         libumem.so.1`vmem_hash_insert+0x90
         libumem.so.1`vmem_seg_alloc+0x1c4
         libumem.so.1`vmem_xalloc+0x50b
         libumem.so.1`vmem_alloc+0x15a
         libumem.so.1`umem_alloc+0x60
         libumem.so.1`malloc+0x2e
         libstdc++.so.6.0.8`_Znwm+0x1e
         libstdc++.so.6.0.8`_Znam+9

> ::umastat
cache                       buf    buf    buf    memory     alloc alloc
name                       size in use  total    in use   succeed  fail
------------------------- ------ ------ ------ --------- --------- -----
umem_magazine_1              16      5    101      4096         6     0
umem_magazine_3              32    356    378     24576       356     0
umem_magazine_7              64     20     84      8192        92     0
umem_magazine_15            128     11     21      4096        11     0
umem_magazine_31            256      0      0         0         0     0
umem_magazine_47            384      0      0         0         0     0
umem_magazine_63            512      0      0         0         0     0
umem_magazine_95            768      0      0         0         0     0
umem_magazine_143          1152      0      0         0         0     0
umem_slab_cache              56    638    650     53248       638     0
umem_bufctl_cache            24      0      0         0         0     0
umem_bufctl_audit_cache     192  15328  15336   3489792     15328     0
umem_alloc_8                  8      0      0         0         0     0
umem_alloc_16                16     79    170      8192   2098631     0
umem_alloc_32                32    267    320     20480       306     0
umem_alloc_48                48   4653   4692    376832      6028     0
umem_alloc_64                64   5554   5568    712704     12642     0
umem_alloc_80                80   2492   2520    286720      5185     0
umem_alloc_96                96    492    512     65536       654     0
umem_alloc_112              112     95    112     16384       103     0
umem_alloc_128              128     38     42      8192        42     0
umem_alloc_160              160     12     21      4096        86     0
umem_alloc_192              192      2     16      4096         2     0
umem_alloc_224              224      5     16      4096       848     0
umem_alloc_256              256      1     12      4096         1     0
umem_alloc_320              320      7   1010    413696    560719     0
umem_alloc_384              384     34     36     16384        41     0
umem_alloc_448              448      5      8      4096        10     0
umem_alloc_512              512      1      7      4096         2     0
umem_alloc_640              640     11     22     16384        16     0
umem_alloc_768              768      2      9      8192       424     0
umem_alloc_896              896      1      4      4096         2     0
umem_alloc_1152            1152     11     20     24576       127     0
umem_alloc_1344            1344      4     40     61440     17179     0
umem_alloc_1600            1600      3      7     12288         5     0
umem_alloc_2048            2048      2      9     20480         6     0
umem_alloc_2688            2688      5      7     20480        10     0
umem_alloc_4096            4096      6      7     57344       335     0
umem_alloc_8192            8192    118    119   1462272       565     0
umem_alloc_12288          12288     20     21    344064       485     0
umem_alloc_16384          16384      1      1     20480         1     0
------------------------- ------ ------ ------ --------- --------- -----
Total [umem_internal]                           3584000     16431     0
Total [umem_default]                            4001792   2704455     0
------------------------- ------ ------ ------ --------- --------- -----

vmem                         memory     memory    memory     alloc alloc
name                         in use      total    import   succeed  fail
------------------------- --------- ---------- --------- --------- -----
sbrk_top                   25309184   25399296         0      3192   335
sbrk_heap                  25309184   25309184  25309184      3192     0
vmem_internal               2965504    2965504   2965504       366     0
vmem_seg                    2875392    2875392   2875392       351     0
vmem_hash                     51200      53248     53248         7     0
vmem_vmem                     46200      55344     36864        15     0
umem_internal               3788864    3792896   3792896       900     0
umem_cache                    42968      57344     57344        41     0
umem_hash                    142336     147456    147456        36     0
umem_log                     131776     135168    135168         3     0
umem_firewall_va                  0          0         0         0     0
umem_firewall                     0          0         0         0     0
umem_oversize              14130869   14413824  14413824      1286     0
umem_memalign                     0          0         0         0     0
umem_default                4001792    4001792   4001792       638     0
------------------------- --------- ---------- --------- --------- -----

-----Original Message-----
From: ext David Lutz
Sent: Friday, January 16, 2009 6:07 PM
Subject: Re: [dtrace-discuss] C++ Applications with Dtrace

> I believe "alloc succeed" is a count of memory requests that were successful.
> That memory may have been freed later, so it doesn't necessarily point to the
> reason for a growing memory footprint. The column to be concerned with is
> "memory in use".
[snip]
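A side note on reading these reports: _Znwm and _Znam are the mangled names of the global operator new(unsigned long) and operator new[](unsigned long) from libstdc++, so every leak attributed to them was allocated with plain new or new[]. They can be demangled on the command line, for example with GNU c++filt (shipped as gc++filt on some Solaris installs; the exact command name is an assumption about your toolchain):

$ echo _Znwm _Znam _ZdlPv | c++filt
operator new(unsigned long) operator new[](unsigned long) operator delete(void*)

Inside mdb, turning on demangling with $G (as mentioned earlier in the thread) gives the same readable names directly in the ::findleaks output.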
Pavesi, Valdemar (NSN - US/Boca Raton)
2009-Jan-16 23:24 UTC
[dtrace-discuss] C++ Applications with Dtrace
Here the memory leaks were fixed. We still have alloc fail = 169.

mdb -p 2103
Loading modules: [ ld.so.1 libumem.so.1 libc.so.1 libuutil.so.1 ]
> ::findleaks -dv
findleaks: maximum buffers => 14499
findleaks: actual buffers => 14080
findleaks:
findleaks: potential pointers => 316492344
findleaks:          dismissals => 309494621 (97.7%)
findleaks:              misses => 6875689 ( 2.1%)
findleaks:                dups => 107963 ( 0.0%)
findleaks:             follows => 14071 ( 0.0%)
findleaks:
findleaks: elapsed wall time => 9 seconds
findleaks:
   BYTES  LEAKED  VMEM_SEG CALLER
    4096       4  fffffd7ffc539000 MMAP
   16384       1  fffffd7ffe83d000 MMAP
    4096       1  fffffd7ffe812000 MMAP
    8192       1  fffffd7ffd7bc000 MMAP
------------------------------------------------------------------------
   Total       4 oversized leaks, 32768 bytes

   CACHE            LEAKED  BUFCTL           CALLER
00000000004cf468         1  000000000050ed20 libstdc++.so.6.0.8`_Znwm+0x1e
00000000004cf468         1  000000000050c000 libstdc++.so.6.0.8`_Znwm+0x1e
00000000004cf468         1  000000000050ea80 libstdc++.so.6.0.8`_Znwm+0x1e
00000000004cf468         1  000000000050c0e0 libstdc++.so.6.0.8`_Znwm+0x1e
00000000004cf468         1  000000000050ee00 libstdc++.so.6.0.8`_Znwm+0x1e
----------------------------------------------------------------------
   Total       5 buffers, 80 bytes

mmap(2) leak: [fffffd7ffc539000, fffffd7ffc53a000), 4096 bytes
mmap(2) leak: [fffffd7ffe83d000, fffffd7ffe841000), 16384 bytes
mmap(2) leak: [fffffd7ffe812000, fffffd7ffe813000), 4096 bytes
mmap(2) leak: [fffffd7ffd7bc000, fffffd7ffd7be000), 8192 bytes

> ::umastat
cache                       buf    buf    buf    memory     alloc alloc
name                       size in use  total    in use   succeed  fail
------------------------- ------ ------ ------ --------- --------- -----
umem_magazine_1              16      5    101      4096         6     0
umem_magazine_3              32    356    378     24576       356     0
umem_magazine_7              64     25     84      8192        85     0
umem_magazine_15            128     11     21      4096        11     0
umem_magazine_31            256      0      0         0         0     0
umem_magazine_47            384      0      0         0         0     0
umem_magazine_63            512      0      0         0         0     0
umem_magazine_95            768      0      0         0         0     0
umem_magazine_143          1152      0      0         0         0     0
umem_slab_cache              56    638    650     53248       638     0
umem_bufctl_cache            24      0      0         0         0     0
umem_bufctl_audit_cache     192  15328  15336   3489792     15328     0
umem_alloc_8                  8      0      0         0         0     0
umem_alloc_16                16     82    170      8192    876682     0
umem_alloc_32                32    267    320     20480       306     0
umem_alloc_48                48   4654   4692    376832      5785     0
umem_alloc_64                64   5555   5568    712704     10210     0
umem_alloc_80                80   2492   2520    286720      3727     0
umem_alloc_96                96    492    512     65536       654     0
umem_alloc_112              112     95    112     16384       103     0
umem_alloc_128              128     38     42      8192        42     0
umem_alloc_160              160     12     21      4096        86     0
umem_alloc_192              192      2     16      4096         2     0
umem_alloc_224              224      5     16      4096       361     0
umem_alloc_256              256      1     12      4096         1     0
umem_alloc_320              320      7   1010    413696    234010     0
umem_alloc_384              384     34     36     16384        41     0
umem_alloc_448              448      5      8      4096        10     0
umem_alloc_512              512      1      7      4096         2     0
umem_alloc_640              640     11     22     16384        16     0
umem_alloc_768              768      2      9      8192       180     0
umem_alloc_896              896      1      4      4096         2     0
umem_alloc_1152            1152     11     20     24576       127     0
umem_alloc_1344            1344      4     40     61440      7175     0
umem_alloc_1600            1600      3      7     12288         5     0
umem_alloc_2048            2048      2      9     20480         6     0
umem_alloc_2688            2688      5      7     20480        10     0
umem_alloc_4096            4096      6      7     57344       335     0
umem_alloc_8192            8192    118    119   1462272       321     0
umem_alloc_12288          12288     19     21    344064       241     0
umem_alloc_16384          16384      1      1     20480         1     0
------------------------- ------ ------ ------ --------- --------- -----
Total [umem_internal]                           3584000     16424     0
Total [umem_default]                            4001792   1140441     0
------------------------- ------ ------ ------ --------- --------- -----

vmem                         memory     memory    memory     alloc alloc
name                         in use      total    import   succeed  fail
------------------------- --------- ---------- --------- --------- -----
sbrk_top                   14344192   14520320         0      2387   169
sbrk_heap                  14344192   14344192  14344192      2387     0
vmem_internal               2371584    2371584   2371584       293     0
vmem_seg                    2285568    2285568   2285568       279     0
vmem_hash                     49152      49152     49152         6     0
vmem_vmem                     46200      55344     36864        15     0
umem_internal               3788864    3792896   3792896       900     0
umem_cache                    42968      57344     57344        41     0
umem_hash                    142336     147456    147456        36     0
umem_log                     131776     135168    135168         3     0
umem_firewall_va                  0          0         0         0     0
umem_firewall                     0          0         0         0     0
umem_oversize               3996133    4042752   4042752       554     0
umem_memalign                     0          0         0         0     0
umem_default                4001792    4001792   4001792       638     0
------------------------- --------- ---------- --------- --------- -----

-----Original Message-----
From: Pavesi, Valdemar (NSN - US/Boca Raton)
Sent: Friday, January 16, 2009 6:22 PM
Subject: RE: [dtrace-discuss] C++ Applications with Dtrace

> Hello, I have an example of a memory leak. What does the alloc fail = 335 mean?
[snip]
If I understand it correctly, the alloc fail for sbrk_top is just an indication that the heap had to be grown, which is different from other failures, which would indicate that we ran out of memory. Have a look at:

http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libumem/common/vmem_sbrk.c

David

----- Original Message -----
From: "Pavesi, Valdemar (NSN - US/Boca Raton)" <valdemar.pavesi at nsn.com>
Date: Friday, January 16, 2009 3:24 pm

> Hello, I have an example of a memory leak. What does the alloc fail = 335 mean?
[snip]
Pavesi, Valdemar (NSN - US/Boca Raton)
2009-Jan-19 13:22 UTC
[dtrace-discuss] C++ Applications with Dtrace
For this case there is no leak, and still alloc fail = 135.

# mdb -p 4846
Loading modules: [ ld.so.1 libumem.so.1 libc.so.1 libuutil.so.1 ]
> ::umastat
cache                       buf    buf    buf    memory     alloc alloc
name                       size in use  total    in use   succeed  fail
------------------------- ------ ------ ------ --------- --------- -----
umem_magazine_1              16     45    202      8192       248     0
umem_magazine_3              32     95    126      8192       173     0
umem_magazine_7              64    150    168     16384       282     0
umem_magazine_15            128     34     42      8192        60     0
umem_magazine_31            256      0     24      8192        10     0
umem_magazine_47            384      0      9      4096         8     0
umem_magazine_63            512      0      7      4096         8     0
umem_magazine_95            768      0      4      4096         3     0
umem_magazine_143          1152     18     27     36864        36     0
umem_slab_cache              56    366    450     36864       568     0
umem_bufctl_cache            24      0      0         0         0     0
umem_bufctl_audit_cache     192   7088   7110   1617920      7860     0
umem_alloc_8                  8      0      0         0         0     0
umem_alloc_16                16    294    425     20480      4653     0
umem_alloc_32                32    641    768     49152    471695     0
umem_alloc_48                48    978   1122     90112    556139     0
umem_alloc_64                64   1624   2944    376832   4695289     0
umem_alloc_80                80    727    756     86016     29302     0
umem_alloc_96                96    290    320     40960     21999     0
umem_alloc_112              112     38     84     12288     15671     0
umem_alloc_128              128     21     63     12288     20729     0
umem_alloc_160              160      8     42      8192     14648     0
umem_alloc_192              192      3    128     32768      6024     0
umem_alloc_224              224      7     32      8192      3500     0
umem_alloc_256              256      2     24      8192      4297     0
umem_alloc_320              320      3     20      8192      8889     0
umem_alloc_384              384     11     27     12288     10014     0
umem_alloc_448              448      0     16      8192     13490     0
umem_alloc_512              512      0     14      8192       680     0
umem_alloc_640              640      8     22     16384      4789     0
umem_alloc_768              768      9     27     24576      2930     0
umem_alloc_896              896      1     20     20480      3735     0
umem_alloc_1152            1152      6     20     24576       145     0
umem_alloc_1344            1344      1      8     12288        10     0
umem_alloc_1600            1600      0      7     12288        31     0
umem_alloc_2048            2048      1      9     20480       128     0
umem_alloc_2688            2688      9     35    102400      4671     0
umem_alloc_4096            4096      1      7     57344      1147     0
umem_alloc_8192            8192     87    111   1363968     56215     0
umem_alloc_12288          12288     30     34    557056       666     0
umem_alloc_16384          16384      2      3     61440         4     0
------------------------- ------ ------ ------ --------- --------- -----
Total [umem_internal]                           1753088      9256     0
Total [umem_default]                            3055616   5951490     0
------------------------- ------ ------ ------ --------- --------- -----

vmem                         memory     memory    memory     alloc alloc
name                         in use      total    import   succeed  fail
------------------------- --------- ---------- --------- --------- -----
sbrk_top                  216223744  216535040         0      2089   135
sbrk_heap                 216223744  216223744 216223744      2089     0
vmem_internal               1372160    1372160   1372160       173     0
vmem_seg                    1310720    1310720   1310720       160     0
vmem_hash                     22528      24576     24576         6     0
vmem_vmem                     46200      55344     36864        15     0
umem_internal               1871936    1875968   1875968       452     0
umem_cache                    42968      57344     57344        41     0
umem_hash                     58880      61440     61440        35     0
umem_log                     131776     135168    135168         3     0
umem_firewall_va                  0          0         0         0     0
umem_firewall                     0          0         0         0     0
umem_oversize             209684987  209784832 209784832       916     0
umem_memalign                     0          0         0         0     0
umem_default                3055616    3055616   3055616       546     0
------------------------- --------- ---------- --------- --------- -----

> ::help umastat
NAME
  umastat - umem allocator stats

SYNOPSIS
  ::umastat

ATTRIBUTES
  Target: proc
  Module: libumem.so.1
  Interface Stability: Unstable

> ::findleaks -dv
findleaks: maximum buffers => 6898
findleaks: actual buffers => 4981
findleaks:
findleaks: potential pointers => 201355793
findleaks:          dismissals => 174141000 (86.4%)
findleaks:              misses => 26203919 (13.0%)
findleaks:                dups => 1005902 ( 0.4%)
findleaks:             follows => 4972 ( 0.0%)
findleaks:
findleaks: elapsed wall time => 7 seconds
findleaks:
   BYTES  LEAKED  VMEM_SEG CALLER
   12288       9  fffffd7f8cdf6000 MMAP
   16384       1  fffffd7fff20d000 MMAP
    4096       1  fffffd7fff1e2000 MMAP
   16384       1  fffffd7ffcffb000 MMAP
    4096       1  fffffd7ffc7f9000 MMAP
    8192       1  fffffd7f8d3ec000 MMAP
    8192       1  fffffd7f8d1de000 MMAP
    4096       1  fffffd7f8d1ce000 MMAP
    8192       1  fffffd7f8cdff000 MMAP
------------------------------------------------------------------------
   Total       9 oversized leaks, 81920 bytes

   CACHE            LEAKED  BUFCTL           CALLER
----------------------------------------------------------------------
   Total       0 buffers, 0 bytes

mmap(2) leak: [fffffd7f8cdf6000, fffffd7f8cdf9000), 12288 bytes
mmap(2) leak: [fffffd7fff20d000, fffffd7fff211000), 16384 bytes
mmap(2) leak: [fffffd7fff1e2000, fffffd7fff1e3000), 4096 bytes
mmap(2) leak: [fffffd7ffcffb000, fffffd7ffcfff000), 16384 bytes
mmap(2) leak: [fffffd7ffc7f9000, fffffd7ffc7fa000), 4096 bytes
mmap(2) leak: [fffffd7f8d3ec000, fffffd7f8d3ee000), 8192 bytes
mmap(2) leak: [fffffd7f8d1de000, fffffd7f8d1e0000), 8192 bytes
mmap(2) leak: [fffffd7f8d1ce000, fffffd7f8d1cf000), 4096 bytes
mmap(2) leak: [fffffd7f8cdff000, fffffd7f8ce01000), 8192 bytes
> ::quit
#

-----Original Message-----
From: ext David Lutz [mailto:David.Lutz at Sun.COM]
Sent: Friday, January 16, 2009 6:38 PM
Subject: Re: RE: [dtrace-discuss] C++ Applications with Dtrace

> If I understand it correctly, the alloc fail for sbrk_top is just an indication
> that the heap had to be grown, which is different from other failures, which
> would indicate that we ran out of memory.
[snip]
"Pavesi, Valdemar (NSN - US/Boca Raton)" <valdemar.pavesi at nsn.com> wrote:> > For this case there is no leak and still alloc fail=135. > >[snip]> > vmem memory memory memory alloc alloc > name in use total import succeed fail > ------------------------- --------- ---------- --------- --------- ----- > sbrk_top 216223744 216535040 0 2089 135[snip] As I mentioned before, I believe that alloc fail for sbrk_top just means that you had to grow your heap to accommodate your memory foot print. Growing the heap is a normal part of memory management. David
Pavesi, Valdemar (NSN - US/Boca Raton)
2009-Jan-19 17:04 UTC
[dtrace-discuss] C++ Applications with Dtrace
Thanks so much.

-----Original Message-----
From: ext David Lutz [mailto:David.Lutz at Sun.COM]
Sent: Monday, January 19, 2009 11:26 AM
Subject: Re: [dtrace-discuss] C++ Applications with Dtrace

> As I mentioned before, I believe that alloc fail for sbrk_top just means that you
> had to grow your heap to accommodate your memory footprint. Growing the heap is a
> normal part of memory management.