similar to: How to improve the throughput

Displaying 20 results from an estimated 300 matches similar to: "How to improve the throughput"

2010 Jan 26
1
TCP throughput?
Dear all, I am testing tinc on Windows and found some interesting behavior. I used the "TCPOnly" parameter on both ends because I wanted to go through NAT. However, with TCPOnly set, ping response time slowed down significantly: about 4 ms over UDP, but around 2000 ms with TCP. I am curious whether this is caused by using TCP. Regards, Masateru
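The slowdown described above is consistent with the well-known TCP-over-TCP problem: when a tunnel carries TCP inside TCP, both layers retransmit and back off independently, which can inflate latency dramatically under any loss. For reference, a minimal sketch of the host configuration the poster describes (file name, address, and layout are assumptions; check the tinc documentation for your version):

```
# hosts/peer -- hypothetical tinc host file for the remote node
Address = peer.example.com
TCPOnly = yes
```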
2007 Apr 03
4
Replacing ERB with Erubis
Hey guys, I've been hearing a lot about Erubis: http://www.kuwata-lab.com/erubis/ Especially about how much faster it is than straight ERB. In their Ruby on Rails support docs: http://www.kuwata-lab.com/erubis/users-guide.05.html#topics-rails They state that with a few added lines to your environment.rb it will replace ERB completely. I'm wondering if anyone has done this in
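For context, the Erubis users guide linked above enables Erubis by requiring its Rails helper from environment.rb. A sketch from memory (the exact require path may differ between Erubis versions, so treat it as an assumption and verify against the linked guide):

```ruby
# config/environment.rb -- line suggested by the Erubis users guide
# (require path assumed; verify against the guide linked above)
require 'erubis/helpers/rails_helper'
```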
2008 Jun 13
2
Rails 2.1: invalid date automatic conversion
Hi all, Rails 2.1 seems to convert the following parameters {'birth(1i)' => '1990', 'birth(2i)' => '2', 'birth(3i)' => '31'} into the date '1990-03-02' automatically. Is it possible to inhibit this conversion? -- makoto kuata
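The rollover behavior described above can be reproduced outside Rails. A minimal Python sketch of the two policies (hypothetical helpers, not Rails code; note that plain day arithmetic lands on March 3 for Feb 31, which may differ slightly from the date Rails produces):

```python
from datetime import date, timedelta

def lenient_date(year, month, day):
    # Roll days past month-end into the next month,
    # roughly like the lenient conversion described above.
    return date(year, month, 1) + timedelta(days=day - 1)

def strict_date(year, month, day):
    # Reject impossible dates instead of rolling them over.
    return date(year, month, day)  # raises ValueError for Feb 31

print(lenient_date(1990, 2, 31))  # 1990-03-03
```

The asker's "inhibit this conversion" amounts to choosing the strict policy and handling the validation error.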
2006 Dec 19
7
Improve the rendering speed of rhtml
I found from the log file that rendering an rhtml template takes too much time when Ruby code is embedded in the rhtml file. In many cases this Ruby code is tied closely to the rhtml files and cannot be decoupled from them, so my question is: is there some way to improve the rendering speed of rhtml when Ruby code is included?
2002 Jun 21
1
Re: measured throughput variations
Hi Stephen, "Stephen C. Tweedie" wrote:
> Hi,
>
> On Thu, Jun 20, 2002 at 05:35:45PM +0200, chacron1 wrote:
> > I redo some test with 2.4.14 ext3 and raw device .
>
> Please try the current ext3, from -ac or ext3 cvs, to make sure you're
> not hitting something that's been fixed since 2.4.14. 2.4.14 is a
> very old, and known buggy kernel.
2004 Jul 02
0
Best throughput routing or least latency routing
Correct me if I am wrong: RIP is a kind of least-hop routing, but is there a way for me to do best-throughput routing or least-latency routing?
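Least-hop and least-latency routing differ only in the per-link weight fed to the shortest-path computation: weight 1 per link gives RIP-style hop counting, while per-link latency gives least-latency paths. A toy Dijkstra sketch with made-up link latencies (illustrative only, not a routing daemon):

```python
import heapq

def shortest_path(graph, src, dst):
    # Dijkstra over per-link weights; graph maps node -> [(neighbor, weight)].
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[dst]

# Two routes from A to D: fewer hops via B, lower latency via C-E.
latency = {
    "A": [("B", 50), ("C", 5)],   # weights are latencies in ms (made up)
    "B": [("D", 50)],
    "C": [("E", 5)],
    "E": [("D", 5)],
    "D": [],
}
hops = {u: [(v, 1) for v, _ in nbrs] for u, nbrs in latency.items()}
print(shortest_path(hops, "A", "D"))     # (['A', 'B', 'D'], 2) -- least hops
print(shortest_path(latency, "A", "D"))  # (['A', 'C', 'E', 'D'], 15) -- least latency
```

Real least-latency routing additionally needs a daemon that measures link delay and feeds it into such a metric, which plain RIP does not do.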
2012 Jan 13
0
iscsi throughput in DomU
Hey guys, we use iSCSI targets as disks for our VMs (DomUs). When testing throughput I get half the throughput compared to tests on the Dom0 itself. For example: I log in to the target on the Xen host (Dom0) itself and get a throughput of 80 MB/s; that's OK. Now I start the VM (DomU) and the same target is attached as xvdb. When doing throughput tests the speed is about
1998 Jul 01
0
Performance/Throughput
I've used SAMBA for quite a long time moving small amounts of data and backing up my PC to an HP9000 and been happy with the performance and reliability. ;-) A friend has a requirement for the movement of large amounts (18GB) of data a day between an HP-UX system and a few NT workstations and we need to know whether we're pushing the technology and/or what kind of load this would put on
2005 Aug 31
0
throughput differences
Hi all, I notice a big throughput difference between a normal user and root on Dom-0. I ran the command scp largefile root@xx.yyy.zzz.sss:~ and got the following results (test is a normal user):

From Dom-0 to Dom-1:
  Login root --> largefile 100% 4096MB 8.5MB/s 08:01
  Login test --> largefile 100% 4096MB 2.9MB/s 23:46  <-- ???
From Dom-1 to Dom-0:
  Login root -->
2008 Jun 23
0
Reg: Throughput b/w domU & dom0
Hi all, I used netperf to measure throughput between dom0 and domU. The throughput for dom0 -> domU was 256.00 Mb/s and for domU -> dom0 was 401.15 Mb/s. There is a huge variation in throughput between dom0 and domU, and to my surprise the throughput for domU -> dom0 is higher. The values I reported are consistent across runs.
2008 Jul 02
0
FW: Reg: Throughput b/w domU & dom0
Hi all, I used netperf to measure throughput between dom0 and domU. The throughput for dom0 -> domU was 256.00 Mb/s and for domU -> dom0 was 401.15 Mb/s.

1) Dom0 -> DomU:

   Recv Socket    Send Socket    Send Message    Elapsed    Throughput
   Size (bytes)   Size (bytes)   Size (bytes)    Time (s)   (Mb/s)
   87380          16384          16384           10.01      231.61
2013 May 27
1
Query on improving throughput
Dear all, we have a small Lustre setup with 7 OSTs on 8 Gb FC; we have kept one OST per FC port. We run Lustre 2.3 on CentOS 6.3. There are 32 clients which access this over FDR IB. We can achieve more than 1.3 GB/s throughput using IOR, without cache, which is roughly 185 MB/s per OST. We wanted to know if this is normal. Should we expect more from an 8 Gb FC port? OSTs are on 8+2 RAID6.
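The per-OST figure above can be sanity-checked with back-of-envelope arithmetic (assuming roughly 800 MB/s of usable payload rate on an 8 Gb FC port after 8b/10b encoding; the link-rate assumption is mine, not from the post):

```python
# Back-of-envelope check of the numbers quoted above.
aggregate_mb_s = 1.3 * 1024                 # ~1.3 GB/s total from IOR, in MB/s
per_ost = aggregate_mb_s / 7                # seven OSTs, one per FC port
fc_line_rate = 800                          # assumed usable MB/s on 8 Gb FC
print(round(per_ost))                       # ~190 MB/s per OST
print(round(per_ost / fc_line_rate * 100))  # ~24% of the assumed FC link rate
```

Under that assumption the FC links are nowhere near saturated, so the bottleneck is more likely the 8+2 RAID6 backend or the client-side I/O pattern than the fabric.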
2007 Sep 28
0
samba-3.0.24 on openbsd: low throughput
Greetings, list. I am serving SMB using the samba-3.0.24 package on an OpenBSD 4.1-release machine and am seeing really low throughput from the server, even when both the server and client are on gigabit Ethernet. The maximum throughput I've been able to attain is ~6 MB/s, which is pretty slow; this top speed is identical on both 100 Mbps and 1 Gbps segments.
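Low SMB throughput on Samba of this era was commonly attacked by tuning the socket options parameter in smb.conf. A hedged sketch (the buffer values are illustrative starting points from old tuning guides, not measured optima, and on modern kernels fixed buffer sizes can hurt more than help):

```
# smb.conf [global] -- illustrative tuning sketch, not measured optima
socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
```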
2006 Jan 30
0
Network throughput slowdown - need help
HELP!!! Over the last few weeks, my Samba throughput has slowed down big-time. I have checked out the hardware; Network Operations has checked out the port, no joy. I have copied the same file over NFS and Samba, and the throughput is about half over Samba vs. NFS. The only error messages I see are the infamous string_to_sid messages that I have seen many questions about, and absolutely no
2011 Feb 10
0
Problem with Memory Throughput Difference between Two Nodes(sockets)
Hi all, I installed xen 4.0.1-rc3 and 2.6.18.8 (dom0) on my machine (Intel Xeon X5650, Westmere: 12 cores, 6 cores per socket, 2 sockets, 12 MB L3, ...). After running the SPEC CPU2006 libquantum benchmark I found that the two nodes have different throughput. I set up 6 VMs on each node and ran the workload in each VM. A VM on node1 got a 1500 sec execution time while a VM on node2 got 1990 sec
2017 Aug 13
0
throughput question
Hi everybody, I have a question about throughput for a GlusterFS volume. I have 3 servers for GlusterFS, each with one brick and 1 GbE for its network; I have made a distributed replica-3 volume with these 3 bricks. The network between the clients and the servers is 1 GbE. Refer to this link: https://s3.amazonaws.com/aws001/guided_trek/Performance_in_a_Gluster_Systemv6F.pdf I have setup
2014 Sep 08
0
QEMU disk migration not using entire connection throughput
Hello everybody, I've been observing that, during disk migration using --copy-storage-inc, the data transfer hardly ever uses the entire available connection bandwidth (in my case 1 Gb/s), and the migration runs at a non-constant speed. On the other hand, at the memory migration step, everything goes perfectly. I am almost sure that this is not a hard disk throughput limitation, because
2005 Jan 11
1
Tool Recommendations for measuring UDP throughput / loss / jitter
Hello all, I'm looking for recommendations on measurement software so we can work with our ISP to determine where packet loss is occurring. Linux and Windows tools are fine; it's just that every time I start searching on Google I run into many possibilities and am not sure what I should be trying. Maybe
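For a quick sense of what such tools measure, here is a minimal loopback-only Python sketch that reports loss and a rough jitter figure (an illustrative stand-in for purpose-built tools like iperf; the jitter here is just the standard deviation of inter-arrival gaps, not the RFC 3550 estimator):

```python
import socket
import statistics
import time

def measure_udp(n_packets=200, port=9999):
    # Loopback-only probe: sends numbered packets to ourselves and
    # returns (loss fraction, inter-arrival jitter in seconds).
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", port))
    rx.settimeout(0.5)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq in range(n_packets):
        tx.sendto(seq.to_bytes(4, "big"), ("127.0.0.1", port))
    seen, arrivals = set(), []
    while len(seen) < n_packets:
        try:
            data, _ = rx.recvfrom(64)
        except socket.timeout:
            break  # remaining packets were lost
        seen.add(int.from_bytes(data, "big"))
        arrivals.append(time.monotonic())
    rx.close()
    tx.close()
    gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
    loss = 1 - len(seen) / n_packets
    jitter = statistics.pstdev(gaps) if gaps else 0.0
    return loss, jitter

if __name__ == "__main__":
    loss, jitter = measure_udp()
    print(f"loss={loss:.1%} jitter={jitter * 1000:.3f} ms")
```

Measuring across an ISP path requires running the receiver side on a remote host, which is exactly what the dedicated tools automate.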
2019 Apr 04
0
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
On Thu, Apr 04, 2019 at 12:58:34PM +0200, Stefano Garzarella wrote:
> This series tries to increase the throughput of virtio-vsock with slight
> changes:
> - patch 1/4: reduces the number of credit update messages sent to the
>   transmitter
> - patch 2/4: allows the host to split packets on multiple buffers,
>   in this way, we can remove the packet