search for: microsec

Displaying 20 results from an estimated 26 matches for "microsec".

2011 Sep 01
4
[PATCH] xen,credit1: Add variable timeslice
...&prv->master_ticker, + NOW() + MILLISECS(prv->tslice_ms)); } init_timer(&spc->ticker, csched_tick, (void *)(unsigned long)cpu, cpu); - set_timer(&spc->ticker, NOW() + MILLISECS(CSCHED_MSECS_PER_TICK)); + set_timer(&spc->ticker, NOW() + MICROSECS(prv->tick_period_us) ); INIT_LIST_HEAD(&spc->runq); spc->runq_sort_last = prv->runq_sort; @@ -1002,7 +1001,7 @@ csched_acct(void* dummy) * for one full accounting period. We allow a domain to earn more * only when the system-wide credit balance is ne...
2013 Nov 13
3
[Patch] credit: Update other parameters when setting tslice_ms
...ce_ms; +} + static int csched_sys_cntl(const struct scheduler *ops, struct xen_sysctl_scheduler_op *sc) @@ -1089,7 +1100,7 @@ csched_sys_cntl(const struct scheduler *ops, || params->ratelimit_us < XEN_SYSCTL_SCHED_RATELIMIT_MIN)) || MICROSECS(params->ratelimit_us) > MILLISECS(params->tslice_ms) ) goto out; - prv->tslice_ms = params->tslice_ms; + __csched_set_tslice(prv, params->tslice_ms); prv->ratelimit_us = params->ratelimit_us; /* FALLTHRU */ case XEN_SYSC...
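Both credit-scheduler excerpts above turn on the same point: the per-CPU tick period is derived from the timeslice, so it must be recomputed whenever tslice_ms changes, which is what a helper like __csched_set_tslice() centralises. As a rough, self-contained illustration of that derivation - not Xen code; the struct layout, the helper name and the CSCHED_TICKS_PER_TSLICE value of 3 are assumptions read off the snippets:

    #include <stdio.h>

    /* Sketch only: recompute the per-CPU tick period (microseconds) whenever
     * the scheduler timeslice (milliseconds) changes.  The struct and the
     * CSCHED_TICKS_PER_TSLICE value are assumptions for illustration. */
    #define CSCHED_TICKS_PER_TSLICE 3

    struct csched_private {              /* stand-in for the real private data */
        unsigned int tslice_ms;          /* timeslice, milliseconds */
        unsigned int tick_period_us;     /* derived tick period, microseconds */
    };

    static void set_tslice(struct csched_private *prv, unsigned int tslice_ms)
    {
        prv->tslice_ms = tslice_ms;
        /* one timeslice is carved into a fixed number of accounting ticks */
        prv->tick_period_us = prv->tslice_ms * 1000 / CSCHED_TICKS_PER_TSLICE;
    }

    int main(void)
    {
        struct csched_private prv;
        set_tslice(&prv, 30);            /* e.g. a 30 ms timeslice */
        printf("tslice=%u ms -> tick period=%u us\n",
               prv.tslice_ms, prv.tick_period_us);
        return 0;
    }

In the patches themselves the derived value is what feeds MICROSECS(prv->tick_period_us) when the per-CPU ticker is armed, as the first hunk shows.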
2008 Dec 17
5
Trouble pulling data from a messy ASCII file...
...5.81 version of Universal Library 10 20081121.145730 when this file was written 10 Windows_XP operating system used operating system used * * radar characteristics 11 WF-100 11 20000000 A/D rate, samples/second 11 7.5 bin width, m 11 800 nominal PRF, Hz 11 0.25 nominal pulse width, microsec 11 0 tuning, volts 11 3.19779 nominal wave length, cm ----------------------------------------------------------------------------------------------- ..the file goes on from there... How would I go about getting this data into some kind of useful format? This is one of about 1000 files I will ne...
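Each header line in the quoted fragment follows the same "tag value description" shape, so one way to start pulling the data into something usable is to split on that pattern. A minimal C sketch follows; the original question concerned R, so this is only an illustration of the layout, and the assumption that every line is an integer tag, one value token and a trailing free-text description may not hold for the whole file:

    #include <stdio.h>

    int main(void)
    {
        /* a few header lines copied from the excerpt above */
        const char *lines[] = {
            "11 0.25 nominal pulse width, microsec",
            "11 800 nominal PRF, Hz",
            "11 3.19779 nominal wave length, cm",
        };
        for (size_t i = 0; i < sizeof lines / sizeof lines[0]; i++) {
            int tag;
            char value[64], desc[128];
            /* %[^\n] keeps the rest of the line as the free-text description */
            if (sscanf(lines[i], "%d %63s %127[^\n]", &tag, value, desc) == 3)
                printf("tag=%d  value=%s  description=%s\n", tag, value, desc);
        }
        return 0;
    }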
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
...dirs per dir : 10 > threads share directories? : N > filename prefix : > filename suffix : > hash file number into dir.? : N > fsync after modify? : N > pause between files (microsec) : 0 > finish all requests? : Y > stonewall? : Y > measure response times? : N > verify read? : Y > verbose? : False > log to st...
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
...0 dirs per dir : 10 threads share directories? : N filename prefix : filename suffix : hash file number into dir.? : N fsync after modify? : N pause between files (microsec) : 0 finish all requests? : Y stonewall? : Y measure response times? : N verify read? : Y verbose? : False log to stderr? : False ...
2018 Oct 03
2
Non-matching linkedid on CDR Records [SEC=UNCLASSIFIED]
...to our PSTN upstream provider as per their requirements. The first record is taken from Asterisk Svr2, the second from Asterisk Svr1 (Svr1 replicates MySQL to Svr2). As you can see, the linkedid records are different (1538531501.18974 vs 1538531488.11368). The difference appears to be a matter of the microsecs it takes to connect the call legs (over a satellite connection), so I could probably 'guess' that these two are the same call; however, for billing purposes this is not accurate enough. Can someone shed some light on why the linkedid is not being shared between IAX channels? Cheers, C...
2017 Jul 11
1
Gluster native mount is really slow compared to nfs
...0 dirs per dir : 10 threads share directories? : N filename prefix : filename suffix : hash file number into dir.? : N fsync after modify? : N pause between files (microsec) : 0 finish all requests? : Y stonewall? : Y measure response times? : N verify read? : Y verbose? : False log to stderr? : False ...
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
...dirs per dir : 10 threads share directories? : N filename prefix : filename suffix : hash file number into dir.? : N fsync after modify? : N pause between files (microsec) : 0 finish all requests? : Y stonewall? : Y measure response times? : N verify read? : Y verbose? : False log to stderr? : False ...
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
...eads share directories? : N >> filename prefix : >> filename suffix : >> hash file number into dir.? : N >> fsync after modify? : N >> pause between files (microsec) : 0 >> finish all requests? : Y >> stonewall? : Y >> measure response times? : N >> verify read? : Y >> verbose? : False...
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
...0 dirs per dir : 10 threads share directories? : N filename prefix : filename suffix : hash file number into dir.? : N fsync after modify? : N pause between files (microsec) : 0 finish all requests? : Y stonewall? : Y measure response times? : N verify read? : Y verbose? : False log to stderr? : False ...
2008 Feb 29
0
[Fwd: [ofa-general] Announcing the release of MVAPICH 1.0]
...s can be obtained by visiting the following URL: http://mvapich.cse.ohio-state.edu/overview/mvapich/features.shtml MVAPICH 1.0 continues to deliver excellent performance. Sample performance numbers include: - with OpenFabrics/Gen2 on EM64T quad-core with PCIe and ConnectX-DDR: - 1.51 microsec one-way latency (4 bytes) - 1404 MB/sec unidirectional bandwidth - 2713 MB/sec bidirectional bandwidth - with PSM on Opteron with Hypertransport and QLogic-SDR: - 1.25 microsec one-way latency (4 bytes) - 953 MB/sec unidirectional bandwidth - 1891 MB...
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
...0 dirs per dir : 10 threads share directories? : N filename prefix : filename suffix : hash file number into dir.? : N fsync after modify? : N pause between files (microsec) : 0 finish all requests? : Y stonewall? : Y measure response times? : N verify read? : Y verbose? : False log to stderr? : False ...
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
...dirs per dir : 10 > threads share directories? : N > filename prefix : > filename suffix : > hash file number into dir.? : N > fsync after modify? : N > pause between files (microsec) : 0 > finish all requests? : Y > stonewall? : Y > measure response times? : N > verify read? : Y > verbose? : False > log to st...
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hi all, One more thing, we have 3 apps servers with the gluster on it, replicated on 3 different gluster nodes. (So the gluster nodes are app servers at the same time). We could actually almost work locally if we wouldn't need to have the same files on the 3 nodes and redundancy :) Initial cluster was created like this: gluster volume create www replica 3 transport tcp
2017 Jul 12
0
Gluster native mount is really slow compared to nfs
...0 dirs per dir : 10 threads share directories? : N filename prefix : filename suffix : hash file number into dir.? : N fsync after modify? : N pause between files (microsec) : 0 finish all requests? : Y stonewall? : Y measure response times? : N verify read? : Y verbose? : False log to stderr? : False ...
2017 Sep 18
0
Confusing lstat() performance
...dirs per dir : 10 threads share directories? : N filename prefix : filename suffix : hash file number into dir.? : N fsync after modify? : N pause between files (microsec) : 0 finish all requests? : Y stonewall? : Y measure response times? : N verify read? : Y verbose? : False log to stderr? : False...
2011 Mar 01
1
OCFS2 shared volume getting slow when you add more nodes
Hello, I have a cluster with two nodes, with SLES10 as base system. First I powered on one node, and the system is working just fine. Then, when a second node was added, the performance came down pretty bad. Any hints or ideas about this behaviour? TIA, M -- Saludos, Mauro Parra-Miranda Consultor Senior Novell - mparra at novell.com openSUSE Developer - mauro at openSUSE.org BB PIN - 22600AE9
2017 Sep 14
5
Confusing lstat() performance
Hi, I have a gluster 3.10 volume with a dir with ~1 million small files in it, say mounted at /mnt/dir with FUSE, and I'm observing something weird: When I list and stat them all using rsync, the lstat() calls that rsync does are incredibly fast (23 microseconds per call on average, definitely faster than a network roundtrip between my 3-machine bricks connected via Ethernet). But when I try to back them up with the `bup` tool (https://github.com/bup/bup), which (at least according to strace) does the same syscalls as rsync to stat all files, it takes...
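One way to sanity-check a per-call figure like the 23 microseconds quoted here is to time a burst of lstat() calls directly. The sketch below is a generic userspace timing loop, not anything taken from the thread; the path is a placeholder, and note that hammering a single path will mostly measure the cached lookup rather than a fresh network round trip:

    #include <stdio.h>
    #include <sys/stat.h>
    #include <time.h>

    int main(void)
    {
        const char *path = "/mnt/dir/somefile";   /* placeholder: a file on the mount under test */
        const int iterations = 100000;
        struct stat st;
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < iterations; i++) {
            if (lstat(path, &st) != 0) {
                perror("lstat");
                return 1;
            }
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double elapsed_us = (t1.tv_sec - t0.tv_sec) * 1e6 +
                            (t1.tv_nsec - t0.tv_nsec) / 1e3;
        printf("average lstat(): %.2f microsec over %d calls\n",
               elapsed_us / iterations, iterations);
        return 0;
    }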
2003 Oct 17
2
--bwlimit not working right
Hello! I can't get the bwlimit option working right. If I set this option above 400 kbyte per sec I still only get 400 kbyte per sec, whatever value I set. I tried this option with a 100 MB file. I use a Debian stable system with rsync version 2.5.6cvs, protocol version 26. Can someone tell me how I can get this working? thx Rene dpkg -l "rsync*" ii rsync 2.5.5-0.1 fast remote
2006 Feb 08
4
DO NOT REPLY [Bug 3491] New: throttle disk IO during filelist/directory parsing
...ntact: rsync-qa@samba.org rsync was bringing our webserver to a crawl during file list generation. Since the job wasn't time critical, I made it less aggressive during this step - a rather trivial change to microsleep between each readdir(). --slow-down=100 will usleep() for 1000usec (microseconds) before each readdir. If I'm not mistaken, with 10k directories that'd be ~10 seconds of sleep. I've seen people try to do this using --bwlimit and/or a loop checking loadavg and sending sigstop/sigcont. re: http://lists.samba.org/archive/rsync/2004-February/008651.html not the best f...
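The mechanism described here - sleep a fixed number of microseconds before each readdir() so the directory scan is less aggressive - boils down to a very small loop. The sketch below is a generic standalone illustration, not the actual --slow-down patch; the 1000 usec pause is simply the value quoted in the report:

    #include <stdio.h>
    #include <dirent.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : ".";
        const unsigned int pause_usec = 1000;     /* the figure quoted in the report */

        DIR *dir = opendir(path);
        if (dir == NULL) {
            perror("opendir");
            return 1;
        }

        struct dirent *entry;
        for (;;) {
            usleep(pause_usec);                   /* back off before every readdir() */
            entry = readdir(dir);
            if (entry == NULL)
                break;
            puts(entry->d_name);
        }
        closedir(dir);
        return 0;
    }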