Hi Henk,
This one is a slight nuisance on systems with large CPU counts.
When using dtrace(1M), the DTrace subsystem ends up allocating
a pair of 4MB buffers per CPU for its principal buffers, and the
same again if you are using aggregations. That adds up to a lot of
memory to set up and tear down on a system with 256 CPUs.
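Back of the envelope, assuming that 4MB default applies per CPU:
256 CPUs x 4MB x 2 buffers is roughly 2GB for the principal
buffers alone, and the same again if aggregations are in play.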
If you can, try tuning the principal buffer size down using the
'bufsize' tunable. Set it as small as you can without seeing
drops, and adjust from there.
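For example, something like the following (64k is just an
illustration, size it to what your enablings actually need; the
'aggsize' tunable does the same for the aggregation buffers):

  # dtrace -x bufsize=64k -x aggsize=64k -s yourscript.d

or directly in the D script:

  #pragma D option bufsize=64k
  #pragma D option aggsize=64k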
dtrace(1M) needs to be a bit smarter in this situation. I'll
try to remember to log an RFE on this as I don't think one exists.
Jon.
> I have been playing with dtrace for the past week and besides some beginner
> errors I'm doing basically fine (thanks again Adam).
> I just switched to a different system, and all of a sudden it takes about
> 15 seconds for the dtrace script to start and then another 6 seconds to
> terminate.
> System is empty, just rebooted:
> SunOS sbm-5440a 5.10 Generic_139555-07 sun4v sparc SUNW,T5440
>
> There should be plenty of cpu cycles available (system is empty).
> The sparcv9 processor operates at 1164 MHz,
> (sbm-5440a) ~/swat303/solaris: psrinfo -v | grep -c MHz
> 256
>
> Since I am experimenting I am running loads of very short tests, so losing
> 20+ seconds each time gets annoying (patience is not one of my virtues).
>
> Below is some output of the test script, which is attached.
> Any thoughts?
>
> thx
>
> Henk.
>
> (sbm-5440a) ~/swat303/solaris: date ; h.d 5 ; date
> Mon Mar 15 14:15:16 MDT 2010
> * 2010 Mar 15 14:15:31.860 Starting
> * 2010 Mar 15 14:15:32.240 Tick
> * 2010 Mar 15 14:15:33.240 Tick
> * 2010 Mar 15 14:15:34.240 Tick
> * 2010 Mar 15 14:15:35.240 Tick
> * 2010 Mar 15 14:15:36.240 Tick
> *
> * 2010 Mar 15 14:15:37.890 Ending
> *
> Mon Mar 15 14:15:44 MDT 2010
> (sbm-5440a) ~/swat303/solaris:
>