Displaying 20 results from an estimated 21 matches for "improvemnt".
2005 Oct 20
2
New to Wine
I am new to Wine and recently tried to get an application to run under
Wine. I was able to get the application to work by copying a working
Program Files folder to the .wine directory. There were two (really three)
problems.
1) InstallShield failed to work; I gather this is a common problem.
2) The app uses a Rainbow Technologies hardware lock.
3) InstallShield for the hardware lock drivers
2005 Nov 28
2
unreachable trusted domains in enterprise environment
Hi All
We have quite a complex enterprise environment which includes a global
domain and lots of little asteroid domains all trusted by the central
domain. We have (imaginatively) called this central domain ENTERPRISE.
I have configured samba to be an ADS member server successfully, but due
to our network design, many of the asteroid domains' DCs are
unreachable from our regional
2015 Nov 12
2
[PATCH net-next RFC V3 0/3] basic busy polling support for vhost_net
...e increasing in the normalized throughput in some cases.
>> - Throughput of tx was increased (at most 105%) except for the huge
>> write (16384). And we can send more packets in that case (+tpkts
>> increased).
>> - Very minor rx regression in some cases.
>> - Improvement on TCP_RR (at most 16%).
>
>Forgot to mention, the following test results are, in order:
>
>1) Guest TX
>2) Guest RX
>3) TCP_RR
>
>> size/session/+thu%/+normalize%/+tpkts%/+rpkts%/+ioexits%/
>> 64/ 1/ +9%/ -17%/ +5%/ +10%/ -2%
>> 64/ 2/...
2003 Dec 01
0
About experience of wine........
Hi
I've been meaning to write something in praise of Wine, and seeing a
posting about personal experience of Wine is just the excuse I need.
The 20031116.tar.gz version, at least for me, is an improvement over the
previous month's version.
I am amazed at how well it can run Windows programs... well, the ones I need.
So I think it's great and fantastic, and hats off to the developers.
Nice one....................david
2011 Oct 21
2
glm-poisson fitting 400,000 records
Hi,
I am trying to fit a glm-poisson model to 400,000 records. I have tried biglm
and glmulti but I have problems... can it really be the case that 400,000
records are too many?
I am thinking of using random samples of my dataset...
Many thanks,
--
View this message in context: http://r.789695.n4.nabble.com/glm-poisson-fitting-400-000-records-tp3925100p3925100.html
Sent from the R help
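For what it's worth, 400,000 rows is small enough that a GLM fit rarely needs special handling; the trick behind biglm-style packages is to accumulate the sufficient statistics chunk by chunk so only one chunk of rows is in memory at a time. A rough sketch of that idea for a Poisson GLM (log link), here in Python with synthetic data (all names, sizes, and coefficients below are made up for illustration):

```python
import numpy as np

def poisson_irls_chunked(X, y, n_iter=20, chunk=50_000):
    """Fit a Poisson GLM (log link) by IRLS, scanning the rows in chunks
    so only one chunk needs to be resident at a time."""
    p = X.shape[1]
    beta = np.zeros(p)
    for _ in range(n_iter):
        A = np.zeros((p, p))            # accumulates X' W X
        b = np.zeros(p)                 # accumulates X' W z
        for s in range(0, len(y), chunk):
            Xc, yc = X[s:s + chunk], y[s:s + chunk]
            eta = Xc @ beta
            mu = np.exp(eta)                     # Poisson mean
            z = eta + (yc - mu) / mu             # IRLS working response
            A += Xc.T @ (mu[:, None] * Xc)
            b += Xc.T @ (mu * z)
        beta = np.linalg.solve(A, b)
    return beta

# Synthetic check: 400,000 rows, intercept plus 3 covariates.
rng = np.random.default_rng(0)
n = 400_000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
true_beta = np.array([0.5, 0.2, -0.1, 0.3])
y = rng.poisson(np.exp(X @ true_beta)).astype(float)
print(poisson_irls_chunked(X, y))   # estimates close to true_beta
```

In practice, `statsmodels` does this in one call for data of this size: `sm.GLM(y, X, family=sm.families.Poisson()).fit()`.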
2015 Nov 12
5
[PATCH net-next RFC V3 0/3] basic busy polling support for vhost_net
...ess. That's probably why we can still see
some increase in the normalized throughput in some cases.
- Throughput of tx was increased (at most 105%) except for the huge
write (16384). And we can send more packets in that case (+tpkts
increased).
- Very minor rx regression in some cases.
- Improvement on TCP_RR (at most 16%).
size/session/+thu%/+normalize%/+tpkts%/+rpkts%/+ioexits%/
64/ 1/ +9%/ -17%/ +5%/ +10%/ -2%
64/ 2/ +8%/ -18%/ +6%/ +10%/ -1%
64/ 4/ +4%/ -21%/ +6%/ +10%/ -1%
64/ 8/ +9%/ -17%/ +6%/ +9%/ -2%
256/ 1/ +20%/...
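For readers outside the thread: the idea behind these patches is that when the virtqueue looks empty, vhost spins for a bounded window checking for new work instead of sleeping immediately, trading CPU for wakeup latency. A user-space caricature of the same pattern (a sketch of the concept only, not the kernel code; the 200 µs budget is an arbitrary choice):

```python
import select, socket, threading, time

BUSY_POLL_US = 200  # spin budget, analogous to the vhost poll timeout

def recv_busy_poll(sock):
    """Spin for a bounded window before falling back to a blocking wait."""
    sock.setblocking(False)
    deadline = time.monotonic() + BUSY_POLL_US / 1e6
    while time.monotonic() < deadline:          # busy-poll window
        try:
            return sock.recv(4096)
        except BlockingIOError:
            pass                                # nothing yet, keep spinning
    select.select([sock], [], [])               # budget spent: block as usual
    return sock.recv(4096)

a, b = socket.socketpair()
threading.Thread(target=lambda: (time.sleep(0.0001), a.send(b"ping"))).start()
print(recv_busy_poll(b))   # b'ping'
```

The fast path (data arrives inside the window) skips the sleep/wakeup round trip entirely, which is where the request/response gains in the table come from.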
2015 Nov 12
0
[PATCH net-next RFC V3 0/3] basic busy polling support for vhost_net
...still see
> some increase in the normalized throughput in some cases.
> - Throughput of tx was increased (at most 105%) except for the huge
> write (16384). And we can send more packets in that case (+tpkts
> increased).
> - Very minor rx regression in some cases.
> - Improvement on TCP_RR (at most 16%).
Forgot to mention, the following test results are, in order:
1) Guest TX
2) Guest RX
3) TCP_RR
> size/session/+thu%/+normalize%/+tpkts%/+rpkts%/+ioexits%/
> 64/ 1/ +9%/ -17%/ +5%/ +10%/ -2%
> 64/ 2/ +8%/ -18%/ +6%/ +10%/ -1%
>...
2010 Jan 26
0
[LLVMdev] Evaluating LLVM for an application
Yeah, this is the right place, just ask :)
On Tue, Jan 26, 2010 at 4:12 PM, Maurizio De Cecco <jmax at dececco.name> wrote:
> Hello,
>
> I am evaluating the possibility of using LLVM for an application,
> and I am wondering if this is the right place to ask questions.
> I haven't found an LLVM users mailing list ...
>
> Maurizio De Cecco
2010 Jan 26
3
[LLVMdev] Evaluating LLVM for an application
Hello,
I am evaluating the possibility of using LLVM for an application,
and I am wondering if this is the right place to ask questions.
I haven't found an LLVM users mailing list ...
Maurizio De Cecco
2015 Oct 28
11
[Bug 11578] New: Rsync starts with an error when directly connecting a USB drive to a port
https://bugzilla.samba.org/show_bug.cgi?id=11578
Bug ID: 11578
Summary: Rsync starts with an error when directly connecting a
USB drive to a port
Product: rsync
Version: 3.1.1
Hardware: x64
OS: Linux
Status: NEW
Severity: normal
Priority: P5
Component: core
2015 Nov 13
0
[PATCH net-next RFC V3 0/3] basic busy polling support for vhost_net
...normalized throughput in some cases.
>>> - Throughput of tx was increased (at most 105%) except for the huge
>>> write (16384). And we can send more packets in that case (+tpkts
>>> increased).
>>> - Very minor rx regression in some cases.
>>> - Improvement on TCP_RR (at most 16%).
>> Forgot to mention, the following test results are, in order:
>>
>> 1) Guest TX
>> 2) Guest RX
>> 3) TCP_RR
>>
>>> size/session/+thu%/+normalize%/+tpkts%/+rpkts%/+ioexits%/
>>> 64/ 1/ +9%/ -17%/ +5%/ +10%/...
2015 Dec 01
5
[PATCH V2 0/3] basic busy polling support for vhost_net
...re or less. That's probably why we can still see
some increase in the normalized throughput in some cases.
- Throughput of tx was increased (at most 50%) except for the huge
write (16384). And we can send more packets in that case (+tpkts
increased).
- Very minor rx regression in some cases.
- Improvement on TCP_RR (at most 17%).
Guest TX:
size/session/+thu%/+normalize%/+tpkts%/+rpkts%/+ioexits%/
64/ 1/ +18%/ -10%/ +7%/ +11%/ 0%
64/ 2/ +14%/ -13%/ +7%/ +10%/ 0%
64/ 4/ +8%/ -17%/ +7%/ +9%/ 0%
64/ 8/ +11%/ -15%/ +7%/ +10%/ 0%
256/ 1/ +35%/ +9%/ +21%/ +12%/ -11%
256/ 2/ +26%/ +2%/ +20%/ +9%/ -10%
256/...
2016 Feb 26
7
[PATCH V3 0/3] basic busy polling support for vhost_net
...- Netperf 2.6
- Two machines with back-to-back connected ixgbe
- Two guests, each with 1 vcpu and 1 queue
- pin two vhost threads to the same cpu on the host to simulate cpu
contention
Results:
- In this radical case, we can still get at most 14% improvement on
TCP_RR.
- For the guest tx stream, minor improvement, with at most a 5% regression
in the one-byte case. For the guest rx stream, at most a 5% regression was seen.
Guest TX:
size /-+% /
1 /-5.55%/
64 /+1.11%/
256 /+2.33%/
512 /-0.03%/
1024 /+1.14%/
4096 /+0.00%/
16384/+0.00%/
Guest RX:
size /-+% /
1 /-5.11%/
64 /-0.55%/
256 /-2.35%/
512 /-3.39%/
1024 /+6.8% /...
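For context on the tables in these threads: netperf's TCP_RR score is request/response transactions per second over a single connection, so per-message latency (not bandwidth) dominates it. A toy version of the measurement over a local socket pair (illustrative only; real netperf handles partial reads, warmup, and confidence intervals):

```python
import socket, threading, time

REQ_SIZE = 64          # matches the 64-byte rows in the tables
N_TRANS = 1000

def echo_server(conn):
    for _ in range(N_TRANS):
        data = conn.recv(REQ_SIZE)
        conn.sendall(data)          # bounce each request straight back

client, server = socket.socketpair()
t = threading.Thread(target=echo_server, args=(server,))
t.start()

payload = b"x" * REQ_SIZE
start = time.monotonic()
for _ in range(N_TRANS):
    client.sendall(payload)
    client.recv(REQ_SIZE)           # wait for the reply: one transaction
rate = N_TRANS / (time.monotonic() - start)
t.join()
print(f"{rate:.0f} transactions/sec")
```

Because each transaction waits for the previous reply, anything that shaves wakeup latency (like busy polling) shows up directly in this number.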
2012 Dec 07
6
[PATCH net-next v3 0/3] Multiqueue support in virtio-net
...+34%| +1%
1| 50| +27%| 0%
1| 100| +29%| +1%
64| 1| -9%| -13%
64| 20| +31%| 0%
64| 50| +26%| -1%
64| 100| +30%| +1%
256| 1| -8%| -11%
256| 20| +33%| +1%
256| 50| +23%| -3%
256| 100| +29%| +1%
- TCP_CRR shows improvement with multiple sessions of TCP_CRR. We see a
regression for the single-session TCP_CRR test; it looks like TCP_CRR misses
the flow director of both ixgbe and tun, which causes almost all physical
queues on the host to be used.
Guest TX:
size|session|+thu%|+normalize%
1| 1| -6%| 0%
1...
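The flow-director point is easier to see with a toy model: tun-style flow steering hashes a connection's tuple to pick a queue, so one long-lived session sticks to a single queue, while TCP_CRR's stream of brand-new connections (fresh source ports) keeps creating new flows that scatter across queues. A simplified sketch (crc32 stands in for the real hash; everything here is illustrative, not the driver code):

```python
import zlib

N_QUEUES = 4

def pick_queue(src_ip, src_port, dst_ip, dst_port, proto="tcp"):
    """Stable hash of the flow tuple -> queue index (toy flow director)."""
    key = f"{src_ip}:{src_port}>{dst_ip}:{dst_port}/{proto}".encode()
    return zlib.crc32(key) % N_QUEUES

# One long-lived session always lands on the same queue...
q = pick_queue("10.0.0.1", 40000, "10.0.0.2", 80)
assert all(pick_queue("10.0.0.1", 40000, "10.0.0.2", 80) == q
           for _ in range(5))

# ...but CRR-style fresh connections (new source ports) spread out:
queues = {pick_queue("10.0.0.1", p, "10.0.0.2", 80)
          for p in range(40000, 40100)}
print(f"{len(queues)} of {N_QUEUES} queues hit by 100 fresh connections")
```

This is why a single TCP_CRR "session" ends up touching most physical queues: every connect is a new flow with a new hash.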
2012 Jul 06
5
[RFC V3 0/5] Multiqueue support for tap and virtio-net/vhost
..., a multiqueue
virtio-net device can be specified with:
qemu -netdev tap,id=h0,queues=2 -device virtio-net-pci,netdev=h0,queues=2
Performance numbers:
I posted them in the RFC thread for the multiqueue virtio-net driver:
http://www.spinics.net/lists/kvm/msg75386.html
Multiqueue with vhost shows improvement in TCP_RR, and degradation for
small-packet transmission.
Changes from V2:
- split vhost patch from virtio-net
- add the support of queue number negotiation through control virtqueue
- hotplug, set_link and migration support
- bug fixes
Changes from V1:
- rebase to the latest
- fix memory leak in...