search for: systim

Displaying 17 results from an estimated 20 matches for "systim".

2010 Aug 08
2
Importing arguments for use by functions in a script
...n external file to a script so that they can be used directly by functions within the script? I have a series of interdependent functions. I wish to test the time for processing various datasets. I was initially doing something along the lines of the following (yes, I am new to R): rm(list= ls()) systime1<-system.time(source("seq_imp_fct.R")) systime2<-system.time(source("pattern_fct.R")) systime3<-system.time(source("AAdistribution_fct.R")) # run function systime101<-system.time(seqres<-seq_imp_fct("testprot.txt")) systime102<-system.time...
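
The R pattern above wraps each pipeline stage in system.time() and keeps the deltas. A comparable stage-timing harness in C (a hedged sketch, not from the thread; the stage functions are hypothetical stand-ins for seq_imp_fct() and pattern_fct()) makes the same idea explicit:

#include <stdio.h>
#include <time.h>

/* clock_gettime(CLOCK_MONOTONIC) plays the role of R's system.time() */
static double now_s(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* hypothetical stages; the real thread imports "testprot.txt" etc. */
static void import_sequences(void) { /* stage 1: parse the input file */ }
static void match_patterns(void)   { /* stage 2: run the analysis */ }

int main(void)
{
    double t0 = now_s();
    import_sequences();
    double t1 = now_s();
    match_patterns();
    double t2 = now_s();
    printf("import: %.3f s  match: %.3f s\n", t1 - t0, t2 - t1);
    return 0;
}
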
2009 Jan 07
3
[LLVMdev] LLVM optimization
...nsigned long) ret; } /// stopwatch unsigned long tini; unsigned long tfim; #define getmilisecs(x) (x) #define num_th 100 unsigned long milisecs() { return getmilisecs(tfim-tini);}; unsigned long secs() { return milisecs()/1000;}; const char *spenttime () { static char buffer[64]; unsigned long systime = secs(); unsigned long milisectime = milisecs()%1000; sprintf(buffer,"%02d:%02d:%02d:%03d",systime/3600,(systime%3600)/60,(systime%3600)%60,milisectime); return (const char*) buffer; }; // end of stopwatch int main(int a, char **b) { int i; DWORD iThreadId; HANDLE mainThread[nu...
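
One real bug lurks in that excerpt: the unsigned long values are printed with %02d, which is undefined behavior for mismatched types. A hedged cleanup (a sketch, not the poster's final code; the elapsed-milliseconds argument stands in for the tini/tfim globals):

#include <stdio.h>

/* Format elapsed milliseconds as HH:MM:SS:mmm, using %lu to match the
 * unsigned long arguments (the original used %d, which is UB). */
const char *spenttime(unsigned long elapsed_ms)
{
    static char buffer[64];
    unsigned long s  = elapsed_ms / 1000;
    unsigned long ms = elapsed_ms % 1000;
    snprintf(buffer, sizeof buffer, "%02lu:%02lu:%02lu:%03lu",
             s / 3600, (s % 3600) / 60, s % 60, ms);
    return buffer;
}

int main(void)
{
    printf("%s\n", spenttime(3723456));  /* prints 01:02:03:456 */
    return 0;
}
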
2001 Nov 21
3
Faking system time
...irating licenses and stuff, and the solution is to revert the system date back to some valid thing and then compile. Since I am now running it under WINE (yeah! it *almost* works great), I wonder if it is possible to hack wine so it tells the executed program some different date and time (changing systime in Linux tends to wreak havoc with makefiles and stuff) Can someone point out where the unix-to-windoze sysdate translation is done so I can hack it? TIA
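
Rather than patching Wine's translation layer, one generic approach (what the libfaketime project does) is an LD_PRELOAD shim that intercepts the libc time calls before Wine ever converts them for the Windows program. A minimal hedged sketch; the one-year offset is an arbitrary example, and a complete shim would also cover gettimeofday() and clock_gettime():

/* fake_time.c -- override time(2) via LD_PRELOAD so the process sees
 * a shifted clock.
 * Build: cc -shared -fPIC -o fake_time.so fake_time.c -ldl
 * Use:   LD_PRELOAD=./fake_time.so wine setup.exe */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <time.h>

#define FAKE_OFFSET (-365L * 24 * 3600)   /* shift the clock back one year */

time_t time(time_t *tloc)
{
    /* look up the real libc time() and apply the offset */
    time_t (*real_time)(time_t *) = dlsym(RTLD_NEXT, "time");
    time_t t = real_time(NULL) + FAKE_OFFSET;
    if (tloc)
        *tloc = t;
    return t;
}
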
2018 Jul 26
4
Problem with definition of slist in CFEngine
...t)", "sys.release ................ = $(sys.release)", "sys.resolv ................. = $(sys.resolv)", "sys.statedir ............... = $(sys.statedir)", "sys.sysday ................. = $(sys.sysday)", "sys.systime ................ = $(sys.systime)", "sys.update_policy_path ..... = $(sys.update_policy_path)", "sys.uptime ................. = $(sys.uptime)", "sys.user_data .............. = $(sys.user_data)", "sys.uqhost ..................
2008 Jul 10
6
Xen guests' clock is exactly 2 hours behind dom0 time
Hi list, one and hopefully the last strange thing I've run into is the systime of my guests. Dom0 uses ntp for time synchronisation. I set the time on my guests manually, but after a reboot every machine (Windows Server, XP, FreeBSD, even PV machines like Ubuntu) runs at local time minus 2 hours. /proc/sys/xen/independent_wallclock is set to 0 so actually the time should be...
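
A constant 2-hour offset usually smells like a timezone/UTC mismatch, but for PV guests the usual prerequisite is to detach the guest clock from dom0 so ntpd inside the guest can discipline it. A hedged sketch of flipping the switch the poster mentions:

/* Enable independent wallclock in a Xen PV guest so it stops inheriting
 * dom0's notion of time. Equivalent to:
 *   echo 1 > /proc/sys/xen/independent_wallclock */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/xen/independent_wallclock", "w");
    if (!f) { perror("independent_wallclock"); return 1; }
    fputs("1\n", f);
    fclose(f);
    return 0;
}
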
2012 Mar 28
1
[API reference] confused by CPU time term
Hi everyone, I'm writing a virtual machine monitor based on libvirt. While reading the API reference, I got confused by some of the terms. 1. What is cumulative I/O wait CPU time? The API reference says that VIR_NODE_CPU_STATS_IOWAIT indicates cumulative I/O wait CPU time. I'm confused by this. As far as I know, when a CPU hits an I/O wait, the scheduler will run another task, so,
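
For what it's worth, iowait in this API is the cumulative time the CPU sat idle while I/O was outstanding, reported like the other counters in nanoseconds since boot. A hedged sketch of reading the counters the question refers to (the connection URI is an example; error handling abbreviated):

/* Build: cc this.c $(pkg-config --cflags --libs libvirt) */
#include <libvirt/libvirt.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    virConnectPtr conn = virConnectOpenReadOnly("qemu:///system");
    if (!conn) return 1;

    int nparams = 0;  /* first call just reports how many counters exist */
    if (virNodeGetCPUStats(conn, VIR_NODE_CPU_STATS_ALL_CPUS,
                           NULL, &nparams, 0) == 0 && nparams > 0) {
        virNodeCPUStatsPtr params = calloc(nparams, sizeof(*params));
        /* second call fills field/value pairs: "kernel", "user",
         * "idle", "iowait", ... with cumulative nanoseconds */
        if (virNodeGetCPUStats(conn, VIR_NODE_CPU_STATS_ALL_CPUS,
                               params, &nparams, 0) == 0)
            for (int i = 0; i < nparams; i++)
                printf("%-8s %llu ns\n", params[i].field, params[i].value);
        free(params);
    }
    virConnectClose(conn);
    return 0;
}
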
2009 Jan 08
0
[LLVMdev] LLVMdev Digest, Vol 55, Issue 16
...ine getmilisecs(x) (x) > > #define num_th 100 > > unsigned long milisecs() { return getmilisecs(tfim-tini);}; > > unsigned long secs() { return milisecs()/1000;}; > > const char *spenttime () > > { > > static char buffer[64]; > > unsigned long systime = secs(); > > unsigned long milisectime = milisecs()%1000; > > sprintf(buffer,"%02d:%02d:%02d:%03d",systime/3600,(systime%3600)/60,(systime%3600)%60,milisectime); > > return (const char*) buffer; ...
2018 Jul 26
0
Problem with definition of slist in CFEngine
..."sys.release ................ = $(sys.release)", > "sys.resolv ................. = $(sys.resolv)", > "sys.statedir ............... = $(sys.statedir)", > "sys.sysday ................. = $(sys.sysday)", > "sys.systime ................ = $(sys.systime)", > "sys.update_policy_path ..... = $(sys.update_policy_path)", > "sys.uptime ................. = $(sys.uptime)", > "sys.user_data .............. = $(sys.user_data)", > "sys.uqhost...
2008 Mar 25
0
No subject
Shows that as the MCU increases, the OpenMP extra overhead is amortized and OpenMP becomes as fast as the pthreads implementation. The last chart http://lampiao.lsc.ic.unicamp.br/~piga/gsoc_2008/systime.png shows that both the pthreads and OpenMP overheads decrease as what appears to be a logarithmic function of the MCU size. This was a great experiment, and from what I can conclude, the OpenMP implementation can be as good as the pthreads one. Therefore it may be worth working on an OpenMP implementat...
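
The overhead being amortized here is mostly the cost of spinning up and synchronizing the thread team each time a parallel region is entered. A hedged micro-example (not the GSoC code; N is an arbitrary work size) shows the shape of such a measurement:

/* Time one OpenMP parallel region with omp_get_wtime().
 * Build: cc -fopenmp this.c */
#include <omp.h>
#include <stdio.h>

#define N 10000000

int main(void)
{
    static double a[N];
    double t0 = omp_get_wtime();

    /* each thread handles a contiguous chunk of the loop */
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        a[i] = i * 0.5;

    double t1 = omp_get_wtime();
    printf("elapsed: %.6f s with %d threads\n",
           t1 - t0, omp_get_max_threads());
    return (int)a[N - 1];  /* keep the work from being optimized away */
}
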
2008 Aug 15
1
GSoC - Theora multithread decoder
...rows) as explained in my previous email. But the results were not good. They were equal to the implementation without the pipeline. http://lampiao.lsc.ic.unicamp.br/~piga/gsoc_2008/comparison.png http://lampiao.lsc.ic.unicamp.br/~piga/gsoc_2008/speedup.png http://lampiao.lsc.ic.unicamp.br/~piga/gsoc_2008/systime.png Next I tried some improvements, but without success. I think that the implementation as it is could not be improved further with parallelism. Another approach would be to start decoding the next frame while the current frame is still decoding, but this is very challenging, because most of the fra...
2004 Jun 09
4
how to initialize random seed properly ?
I want to start R processes on multiple processors from a single shell script, and I want all of them to have different random seeds. One way of doing this is sleep 2 # (with 'sleep 1' I often get the same number) ... set.seed(unclass(Sys.time())) Is there a simpler way, without needing to sleep between invoking the different R processes? Ryszard
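
The standard trick is to mix something unique per process, such as the PID, into the seed, so that processes launched within the same second still diverge; in R that would be set.seed() on a combination of Sys.time() and Sys.getpid(). A hedged sketch of the same idea in C:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    /* mix wall-clock time with the process ID so concurrent starts
     * still get distinct seeds; no sleep needed */
    unsigned seed = (unsigned)time(NULL) ^ ((unsigned)getpid() << 16);
    srand(seed);
    printf("seed=%u first draw=%d\n", seed, rand());
    return 0;
}
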
2011 Mar 04
3
Updating hardware clock from cron
Is there a package to do this? Normally the hardware clock is set during shutdown if one is running ntpd. But if a long-running server shuts down unexpectedly, this isn't done, and the hardware clock might be off by a lot when it comes back up. So setting it periodically from a cron job could be useful. What do others do? Adding a one-liner to /etc/cron.daily that invokes
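
Such a cron job typically just resyncs the RTC from the ntp-disciplined system clock, which under the hood is a single ioctl. A hedged sketch, roughly hwclock --systohc in miniature (assumes /dev/rtc0 and an RTC kept in UTC):

#include <fcntl.h>
#include <linux/rtc.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/rtc0", O_WRONLY);
    if (fd < 0) { perror("open /dev/rtc0"); return 1; }

    time_t now = time(NULL);
    struct tm utc;
    gmtime_r(&now, &utc);          /* RTC assumed to run in UTC */

    /* copy the broken-down system time into the RTC's format */
    struct rtc_time rt;
    memset(&rt, 0, sizeof rt);
    rt.tm_sec  = utc.tm_sec;
    rt.tm_min  = utc.tm_min;
    rt.tm_hour = utc.tm_hour;
    rt.tm_mday = utc.tm_mday;
    rt.tm_mon  = utc.tm_mon;
    rt.tm_year = utc.tm_year;

    if (ioctl(fd, RTC_SET_TIME, &rt) < 0) {
        perror("RTC_SET_TIME");
        close(fd);
        return 1;
    }
    close(fd);
    return 0;
}
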
2010 Nov 22
6
[PATCH 2/3]: An Implementation of HyperV KVP functionality
The hv_utils module will be composed of more than one file; rename hv_utils.c to accommodate this without changing the module name. Signed-off-by: K. Y. Srinivasan <ksrinivasan at novell.com>
2006 Nov 03
0
a strange behavior on a small memory system with tun0
...me=32.8 ms --- 172.27.0.1 ping statistics --- 1 packets transmitted, 1 packets received, 0% packet loss round-trip min/avg/max = 32.8/32.8/32.8 ms Sat Jan 1 01:00:15 CET 2000 --> storing crontabs Sat Jan 1 01:00:15 CET 2000 --> restarting cron Sat Jan 1 01:00:15 CET 2000 --> getting systime from server 172.27.0.1 Sat Jan 1 01:00:15 CET 2000 --> creating rsync_key Fri Nov 3 11:47:00 CET 2006 --> creating rsync_exclude Fri Nov 3 11:47:00 CET 2006 --> activating swap Fri Nov 3 11:47:00 CET 2006 --> swap is activated ( 249976 kb)! Fri Nov 3 11:47:00 CET 2006 -->...
2015 Apr 07
18
[PATCH v15 00/15] qspinlock: a 4-byte queue spinlock with PV support
...l. However, the qspinlock performance in a virtual guest should still be comparable with the ticket spinlock at low load and much better at high load. Performance of kernel with qspinlock patch ------------------------------------------ In terms of the performance benefit of this patch, I ran the high_systime workload (which does a lot of fork() and exit()) at various load levels (500, 1000, 1500 and 2000 users) on a 4-socket IvyBridge-EX bare-metal system (60 cores, 120 threads) with the intel_pstate driver and the performance scaling governor. The JPM (jobs/minute) and execution time results were as follows...
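
high_systime is an AIM7-style workload whose signature is fork()/exit() churn across many concurrent users, which contends on kernel locks and shows up almost entirely as system time. A hedged miniature of that kind of stress (worker and iteration counts are arbitrary examples, not the AIM7 parameters):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define WORKERS 8
#define ITERS   2000

int main(void)
{
    for (int w = 0; w < WORKERS; w++) {
        if (fork() == 0) {                 /* worker process */
            for (int i = 0; i < ITERS; i++) {
                pid_t c = fork();          /* child is created... */
                if (c == 0) _exit(0);      /* ...and exits immediately */
                waitpid(c, NULL, 0);
            }
            _exit(0);
        }
    }
    for (int w = 0; w < WORKERS; w++)      /* reap the workers */
        wait(NULL);
    puts("done");
    return 0;
}
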
2015 Apr 24
16
[PATCH v16 00/14] qspinlock: a 4-byte queue spinlock with PV support
...on for bare metal. However, the qspinlock performance in a virtual guest should still be comparable with the ticket spinlock at low load and much better at high load. Native qspinlock patch performance ---------------------------------- In terms of the performance benefit of this patch, I ran the high_systime workload (which does a lot of fork() and exit()) at various load levels (500, 1000, 1500 and 2000 users) on a 4-socket IvyBridge-EX bare-metal system (60 cores, 120 threads) with the intel_pstate driver and the performance scaling governor. The JPM (jobs/minute) and execution time results were as follows...
2012 Nov 13
1
thread taskq / unp_gc() using 100% cpu and stalling unix socket IPC
...When processes are slow, they often block in states like '*unp_l' or 'pipewr' according to top, and overall cpu time spent in kernel grows by factors (and system load too). See [0], [1], [2] for top screenshots when things work badly. Under normal circumstances load is below 5 and systime below 5%. Load during daytime makes this situation more likely to occur. At the same time it does _not_ seem to be triggered by any sudden load peaks (e.g. more network connections etc.), so I guess it's some kind of race condition. Today I had the chance to get a few ddb traces of the 't...