Displaying 20 results from an estimated 26 matches for "microsecs".
2011 Sep 01
4
[PATCH] xen,credit1: Add variable timeslice
...&prv->master_ticker,
+ NOW() + MILLISECS(prv->tslice_ms));
}
init_timer(&spc->ticker, csched_tick, (void *)(unsigned long)cpu, cpu);
- set_timer(&spc->ticker, NOW() + MILLISECS(CSCHED_MSECS_PER_TICK));
+ set_timer(&spc->ticker, NOW() + MICROSECS(prv->tick_period_us) );
INIT_LIST_HEAD(&spc->runq);
spc->runq_sort_last = prv->runq_sort;
@@ -1002,7 +1001,7 @@ csched_acct(void* dummy)
* for one full accounting period. We allow a domain to earn more
* only when the system-wide credit balance is neg...
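A note on the units in this hunk: Xen timers run on a nanosecond s_time_t base, and MILLISECS()/MICROSECS() simply scale into that base, which is what lets the patch swap the millisecond tick constant for a per-microsecond tick period without changing set_timer()'s units. A standalone sketch of the idea follows; the macro shapes mirror xen/include/xen/time.h, and the tick derivation is hypothetical:

#include <stdio.h>

/* Sketch only: s_time_t and the scaling macros follow the shapes in
 * xen/include/xen/time.h (nanosecond base); not copied from Xen source. */
typedef long long s_time_t;               /* nanoseconds */
#define MILLISECS(ms) ((s_time_t)((ms) * 1000000LL))
#define MICROSECS(us) ((s_time_t)((us) * 1000LL))

int main(void)
{
    unsigned long tslice_ms = 30;                         /* hypothetical timeslice */
    unsigned long tick_period_us = tslice_ms * 1000 / 3;  /* assumed 3 ticks/slice */

    /* Both expand to the same nanosecond base, so the set_timer() call
     * keeps consistent units across the MILLISECS -> MICROSECS change. */
    printf("tslice = %lld ns, tick = %lld ns\n",
           (long long)MILLISECS(tslice_ms),
           (long long)MICROSECS(tick_period_us));
    return 0;
}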
2013 Nov 13
3
[Patch] credit: Update other parameters when setting tslice_ms
...ce_ms;
+}
+
static int
csched_sys_cntl(const struct scheduler *ops,
struct xen_sysctl_scheduler_op *sc)
@@ -1089,7 +1100,7 @@ csched_sys_cntl(const struct scheduler *ops,
|| params->ratelimit_us < XEN_SYSCTL_SCHED_RATELIMIT_MIN))
|| MICROSECS(params->ratelimit_us) > MILLISECS(params->tslice_ms) )
goto out;
- prv->tslice_ms = params->tslice_ms;
+ __csched_set_tslice(prv, params->tslice_ms);
prv->ratelimit_us = params->ratelimit_us;
/* FALLTHRU */
case XEN_SYSCT...
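The point of this hunk: several scheduler parameters are derived from tslice_ms, so assigning prv->tslice_ms directly (as the removed line did) would leave them stale. A minimal sketch of the setter pattern, with hypothetical field names and derivations:

#include <stdio.h>

/* Hypothetical stand-in for Xen's csched_private; real field names differ. */
struct csched_private {
    unsigned int tslice_ms;        /* timeslice, in milliseconds */
    unsigned int ticks_per_tslice; /* accounting ticks per timeslice */
    unsigned int tick_period_us;   /* derived: tick length, in microseconds */
};

/* Update the timeslice and everything derived from it in one place. */
static void __csched_set_tslice(struct csched_private *prv, unsigned int ms)
{
    prv->tslice_ms = ms;
    prv->tick_period_us = ms * 1000 / prv->ticks_per_tslice;
}

int main(void)
{
    struct csched_private prv = { .ticks_per_tslice = 3 };
    __csched_set_tslice(&prv, 30);
    printf("tslice=%u ms, tick=%u us\n", prv.tslice_ms, prv.tick_period_us);
    return 0;
}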
2008 Dec 17
5
Trouble pulling data from a messy ASCII file...
Hi all,
I am a new graduate student who is also new to R. I am OK with the basics,
but the problem I am having right now seems beyond what I can do, so I am
looking for advice. I am trying to pull data from flat ASCII files, but they
do not have a "nice" structure, so a simple "read.table" doesn't work. An
example first half of a data file is below:
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
On Tue, Jul 11, 2017 at 11:39 AM, Jo Goossens <jo.goossens at hosted-power.com>
wrote:
> Hello Joe,
>
> I just did a mount like this (added the bold):
>
> mount -t glusterfs -o *attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache*,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log
>
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hello Joe,

I just did a mount like this (added the bold):

mount -t glusterfs -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www

Results:

root@app1:~/smallfile-master# ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 5000
2018 Oct 03
2
Non-matching linkedid on CDR Records [SEC=UNCLASSIFIED]
...to our PSTN upstream provider as per their requirements.
The first record is taken from Asterisk Svr2, the second from Asterisk Svr1 (Svr1 replicates MySQL to Svr2)
As you can see, the linkedid records are different (1538531501.18974 vs 1538531488.11368)
The difference appears to be a matter of the microsecs it takes to connect the call legs (over a satellite connection), so I could probably 'guess' that these two are the same call; however, for billing purposes this is not accurate enough.
Can someone shed some light on why the linkedid is not being shared between IAX channels?
Cheers,
Ca...
2017 Jul 11
1
Gluster native mount is really slow compared to nfs
Hello Vijay,

What do you mean exactly? What info is missing?

PS: I already found out that for this particular test all the difference is made by negative-timeout=600; when removing it, it's much, much slower again.

Regards
Jo
-----Original message-----
From: Vijay Bellur <vbellur at redhat.com>
Sent: Tue 11-07-2017 18:16
Subject: Re: [Gluster-users] Gluster native mount is
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hello Joe,

I really appreciate your feedback, but I already tried the opcache stuff (to not validate at all). It improves of course then, but not completely somehow. Still quite slow.

I did not try the mount options yet, but I will now!

With nfs (doesn't matter much, built-in version 3 or Ganesha version 4) I can even host the site perfectly fast without these extreme opcache settings.
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
On 07/11/2017 08:14 AM, Jo Goossens wrote:
> RE: [Gluster-users] Gluster native mount is really slow compared to nfs
>
> Hello Joe,
>
> I really appreciate your feedback, but I already tried the opcache
> stuff (to not validate at all). It improves of course then, but not
> completely somehow. Still quite slow.
>
> I did not try the mount options yet, but I will now!
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hello,

Here is the volume info as requested by Soumya:

#gluster volume info www
Volume Name: www
Type: Replicate
Volume ID: 5d64ee36-828a-41fa-adbf-75718b954aff
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.140.41:/gluster/www
Brick2: 192.168.140.42:/gluster/www
Brick3: 192.168.140.43:/gluster/www
Options Reconfigured:
2008 Feb 29
0
[Fwd: [ofa-general] Announcing the release of MVAPICH 1.0]
Per the announcement from the MVAPICH team, I am pleased to let you know
that the MPI-IO support for Lustre has been integrated into the new
release of MVAPICH, version 1.0.
> - Optimized and high-performance ADIO driver for Lustre
> - This MPI-IO support is a contribution from Future Technologies
> Group, Oak Ridge National Laboratory.
>
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
Hello,

Here is a speed test with a new setup we just made with gluster 3.10; there are no other differences, except glusterfs versus nfs. The nfs is about 80 times faster:

root@app1:~/smallfile-master# mount -t glusterfs -o use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www
root@app1:~/smallfile-master# ./smallfile_cli.py --top
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
My standard response to someone needing filesystem performance for www
traffic is generally, "you're doing it wrong".
https://joejulian.name/blog/optimizing-web-performance-with-glusterfs/
That said, you might also look at these mount options:
attribute-timeout, entry-timeout, negative-timeout (set to some large
amount of time), and fopen-keep-cache.
On 07/11/2017 07:48 AM, Jo
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hi all,

One more thing: we have 3 app servers with the gluster on them, replicated on 3 different gluster nodes. (So the gluster nodes are app servers at the same time.) We could actually almost work locally if we didn't need to have the same files on the 3 nodes and redundancy :)

The initial cluster was created like this:

gluster volume create www replica 3 transport tcp
2017 Jul 12
0
Gluster native mount is really slow compared to nfs
Hello,

While there are probably other interesting parameters and options in gluster itself, for us the largest difference in this speed test, and also for our website (real-world performance), was the negative-timeout value during mount. Even a value of only 1 seems to solve so many problems; can anyone knowledgeable explain why this is the case?

This would better be the default, I suppose...?

I'm still
2017 Sep 18
0
Confusing lstat() performance
I did a quick test on one of my lab clusters with no tuning except for quota being enabled:
[root@dell-per730-03 ~]# gluster v info
Volume Name: vmstore
Type: Replicate
Volume ID: 0d2e4c49-334b-47c9-8e72-86a4c040a7bd
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.50.1:/rhgs/brick1/vmstore
Brick2:
2011 Mar 01
1
OCFS2 shared volume getting slow when you add more nodes
Hello,
I have a cluster with two nodes, with SLES10 as the base system. First I
powered on one node, and the system worked just fine. Then, when a second
node was added, performance dropped badly.
Any hints or ideas about this behaviour?
TIA,
M
--
Regards,
Mauro Parra-Miranda
Senior Consultant, Novell - mparra at novell.com
openSUSE Developer - mauro at openSUSE.org
BB PIN - 22600AE9
2017 Sep 14
5
Confusing lstat() performance
Hi,
I have a gluster 3.10 volume with a dir with ~1 million small files in
it, say mounted at /mnt/dir with FUSE, and I'm observing something weird:
When I list and stat them all using rsync, then the lstat() calls that
rsync does are incredibly fast (23 microseconds per call on average,
definitely faster than a network roundtrip between my 3-machine bricks
connected via Ethernet).
But
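(At the quoted 23 microseconds per call, a million lstat() calls finish in about 23 seconds, so they must be served from the client-side cache rather than over the wire. A minimal sketch of timing a single lstat(), illustrative only, not rsync's actual code:)

#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

int main(int argc, char **argv)
{
    struct stat st;
    struct timespec t0, t1;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <path>\n", argv[0]);
        return 1;
    }
    clock_gettime(CLOCK_MONOTONIC, &t0);
    if (lstat(argv[1], &st) != 0)
        perror("lstat");
    clock_gettime(CLOCK_MONOTONIC, &t1);

    /* elapsed wall-clock time for this one call */
    long ns = (t1.tv_sec - t0.tv_sec) * 1000000000L
            + (t1.tv_nsec - t0.tv_nsec);
    printf("%s: %.1f us\n", argv[1], ns / 1000.0);
    return 0;
}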
2003 Oct 17
2
--bwlimit not working right
Hello!
I can't get the --bwlimit option working right.
If I set this option above 400 kbyte per sec I still only get 400 kbyte
per sec, no matter which value I set.
I tried this option with a 100 MB file.
I use a Debian stable system with rsync version 2.5.6cvs, protocol
version 26.
Can someone tell me how I can get this working?
thx
Rene
dpkg -l "rsync*"
ii rsync 2.5.5-0.1 fast remote
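(For background on how such limits usually work: a sleep-between-writes loop that keeps the average rate near the cap. A generic sketch of the idea, not rsync's actual implementation:)

#include <stdio.h>
#include <unistd.h>

/* Generic write throttle (NOT rsync's code): after each chunk, sleep long
 * enough that the average rate stays near limit_kbps. Ignores the time the
 * write itself takes, so the real rate lands slightly under the cap. */
static void throttled_copy(FILE *in, FILE *out, long limit_kbps)
{
    char buf[4096];
    size_t n;

    while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
        fwrite(buf, 1, n, out);
        /* time this chunk "should" take at the limit, in microseconds:
         * n bytes / (limit_kbps * 1000 bytes/sec) * 1e6 */
        useconds_t usec = (useconds_t)((long)n * 1000 / limit_kbps);
        usleep(usec);
    }
}

int main(void)
{
    throttled_copy(stdin, stdout, 400); /* cap at ~400 kbyte/sec */
    return 0;
}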
2006 Feb 08
4
DO NOT REPLY [Bug 3491] New: throttle disk IO during filelist/directory parsing
https://bugzilla.samba.org/show_bug.cgi?id=3491
Summary: throttle disk IO during filelist/directory parsing
Product: rsync
Version: 2.6.4
Platform: All
URL: http://vilius.multiply.com/video/item/10
OS/Version: Linux
Status: NEW
Severity: enhancement
Priority: P3
Component: core