Displaying 20 results from an estimated 400 matches similar to: "Samba Tuning to increase Throughput"
2002 Jun 21
1
Re: measured throughput variations
Hi Stephen,
"Stephen C. Tweedie" wrote:
> Hi,
>
> On Thu, Jun 20, 2002 at 05:35:45PM +0200, chacron1 wrote:
>
> > I redid some tests with 2.4.14 ext3 and a raw device.
>
> Please try the current ext3, from -ac or ext3 CVS, to make sure you're
> not hitting something that's been fixed since 2.4.14. 2.4.14 is a
> very old and known-buggy kernel.
>
2004 Jul 02
0
Best throughput routing or least latency routing
Correct me if I am wrong: RIP is a kind of least-hop routing, but
is there a way for me to have best-throughput routing or
least-latency routing?
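The difference comes down to the edge weight the routing algorithm minimizes: RIP's metric is hop count, while least-latency or best-throughput routing needs per-link metrics. A minimal sketch of that idea (hypothetical topology and numbers of mine, not anything from this thread); note that summing 1/bandwidth only approximates best-throughput, which strictly is a widest-path problem:

    import heapq

    def dijkstra(graph, src, dst, weight):
        """Cheapest src->dst path under an arbitrary per-link weight function."""
        pq = [(0.0, src, [src])]
        seen = set()
        while pq:
            cost, node, path = heapq.heappop(pq)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nbr, link in graph.get(node, {}).items():
                if nbr not in seen:
                    heapq.heappush(pq, (cost + weight(link), nbr, path + [nbr]))
        return None

    # Hypothetical links: latency in ms, bandwidth in Mbit/s.
    graph = {
        "A": {"B": {"lat": 5, "bw": 10}, "C": {"lat": 20, "bw": 100}},
        "B": {"D": {"lat": 5, "bw": 10}},
        "C": {"D": {"lat": 20, "bw": 100}},
    }

    print(dijkstra(graph, "A", "D", lambda l: 1))             # RIP-style hop count
    print(dijkstra(graph, "A", "D", lambda l: l["lat"]))      # least latency: A-B-D
    print(dijkstra(graph, "A", "D", lambda l: 1.0 / l["bw"])) # best throughput: A-C-D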
2012 Jan 13
0
iscsi throughput in DomU
Hey Guys,
when starting a DomU we use iSCSI as disks for our VMs. When testing throughput I get half the throughput compared to tests on the Dom0 itself.
For example:
I log in to the target on the Xen host (Dom0) itself and get a throughput of 80 MB/s. That's OK.
Now I'm starting the VM (DomU) and the same target is now attached as xvdb. When doing throughput tests the speed is about
1998 Jul 01
0
Performance/Throughput
I've used Samba for quite a long time moving small amounts of data and
backing up my PC to an HP9000, and have been happy with the performance and
reliability. ;-)
A friend needs to move a large amount of data (18 GB) a day between an
HP-UX system and a few NT workstations, and we need to know whether we're
pushing the technology and/or what kind of load this would put on
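For scale, a quick back-of-the-envelope calculation (my arithmetic, not figures from the post) shows 18 GB/day is a fairly modest sustained rate:

    # What does 18 GB/day actually demand of the network?
    gb_per_day = 18
    mb_total = gb_per_day * 1024

    sustained = mb_total / (24 * 3600)      # spread over the day:   ~0.21 MB/s
    business_hours = mb_total / (8 * 3600)  # squeezed into 8 hours: ~0.64 MB/s

    print(f"{sustained:.2f} MB/s sustained, {business_hours:.2f} MB/s over 8 h")
    # Even ~0.6 MB/s fits comfortably inside 10 Mb/s (~1.2 MB/s) Ethernet of
    # the era, though bursts and SMB overhead raise the instantaneous peaks.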
2005 Aug 31
0
throughput differences
Hi all,
I notice a big throughput difference between a normal user and root on
Dom-0. I ran the command scp largefile root@xx.yyy.zzz.sss:~ and got the
following result (test is a normal user):
From Dom-0 to Dom-1
Login root --> largefile 100% 4096MB 8.5MB/s 08:01
Login test --> largefile 100% 4096MB 2.9MB/s 23:46 <--???
From Dom-1 to Dom-0
Login root -->
2008 Jun 23
0
Reg: Throughput b/w domU & dom0
Hi all,
I used netperf to measure throughput between dom0 & domU.
The throughput from dom0 -> domU was 256.00 Mb/s;
from domU -> dom0 it was 401.15 Mb/s.
There is a huge variation in throughput between dom0 & domU. To my
surprise, the throughput from domU -> dom0 is the higher one. The values
I reported are consistent across runs.
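For reference, netperf's TCP_STREAM test boils down to timing how many bytes one side can push through a socket. A minimal sketch of the same measurement (hypothetical host/port; it assumes a sink on the far end that drains the connection, and is not a substitute for netperf):

    import socket, time

    def tcp_stream_test(host, port, duration=10.0, msg_size=16384):
        """Push data for `duration` seconds and report Mbit/s, netperf-style."""
        buf = b"\x00" * msg_size
        sent = 0
        with socket.create_connection((host, port)) as sock:
            start = time.monotonic()
            deadline = start + duration
            while time.monotonic() < deadline:
                sent += sock.send(buf)
            elapsed = time.monotonic() - start
        return sent * 8 / elapsed / 1e6  # megabits per second

    # e.g. print(tcp_stream_test("192.168.1.10", 5001))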
2008 Jul 02
0
FW: Reg: Throughput b/w domU & dom0
Hi all,
I used netperf to measure throughput between dom0 & domU.
The throughput from dom0 -> domU was 256.00 Mb/s;
from domU -> dom0 it was 401.15 Mb/s.
1) Dom0 -> DomU:
Recv Socket   Send Socket   Send Message   Elapsed      Throughput
Size bytes    Size bytes    Size bytes     Time secs.   Mb/s
87380         16384         16384          10.01        231.61
2013 May 27
1
Query on improving throughput
Dear All,
We have a small Lustre setup with 7 OSTs on 8 Gb FC. We have kept one
OST per FC port. We have Lustre 2.3 with CentOS 6.3. There are 32 clients
which access this over FDR IB. We can achieve more than 1.3 GB/s
throughput using IOR, without cache, which is roughly 185 MB/s per OST. We
wanted to know if this is normal. Should we expect more from an 8 Gb FC port?
OSTs are on 8+2 RAID6.
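As rough context (back-of-the-envelope figures of mine, not from the thread): an 8GFC port with 8b/10b encoding tops out around 850 MB/s, so 185 MB/s per OST leaves a lot of headroom on the fabric and points at the RAID6 backend instead:

    # Back-of-the-envelope check of the numbers quoted above.
    total_mb_s = 1.3 * 1024                 # ~1331 MB/s aggregate from IOR
    osts = 7
    per_ost = total_mb_s / osts             # ~190 MB/s per OST

    # 8GFC: 8.5 Gbaud line rate, 8b/10b encoding => usable bytes per second.
    fc_port_limit_mb = 8.5e9 * (8 / 10) / 8 / 1e6   # ~850 MB/s theoretical

    print(f"per OST: {per_ost:.0f} MB/s of ~{fc_port_limit_mb:.0f} MB/s FC line rate")
    # Each FC port runs at only ~20-25% utilization, so the likely
    # bottleneck is the 8+2 RAID6 array behind each OST, not the link.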
2007 Sep 28
0
samba-3.0.24 on openbsd: low throughput
greetings list. I am serving SMB using the samba-3.0.24 package on an
OpenBSD 4.1-release machine and am seeing really low throughput from the
server, even when both the server and client are on gigabit ethernet.
The maximum throughput I've been able to attain is ~6 MB/s, which is
pretty slow; this top speed is identical on both 100 Mbps and 1 Gbps
segments.
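One quick sanity check (simple arithmetic, not from the original post): ~6 MB/s is only ~48 Mbit/s, under even the 100 Mbit line rate, and a ceiling that is identical on both segments suggests the bottleneck is not the wire:

    # Express ~6 MB/s as line-rate utilization.
    mbits = 6 * 8                              # 48 Mbit/s
    print(f"{mbits / 100:.0%} of 100 Mbit/s")  # ~48%
    print(f"{mbits / 1000:.0%} of 1 Gbit/s")   # ~5%
    # Same ceiling on both segments => look at disk, CPU, or Samba/socket
    # options rather than the network itself.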
2006 Jan 30
0
Network throughput slowdown - need help
HELP!!!
Over the last few weeks, my Samba throughput has slowed down big-time. I
have checked out the hardware. Network Operations has checked out the port;
no joy.
I have copied the same file over NFS and Samba. The throughput is about
half over Samba vs. NFS. The only error messages I see are the infamous
string_to_sid messages that I have seen many questions about, and absolutely
no
2011 Feb 10
0
Problem with Memory Throughput Difference between Two Nodes(sockets)
Hi all,
I installed xen4.0.1-rc3 & 2.6.18.8 (dom0) on my machine (Intel Xeon X5650,
Westmere, 12 cores, 6 cores per socket, 2 sockets, 12 MB L3, ...).
I figured out after running the SPEC CPU2006 libquantum benchmark that the
two nodes have different throughput.
I set up 6 VMs on each node and ran the workload in each VM.
A VM on node1 got 1500 s exec time while a VM on node2 got 1990 s
2017 Aug 13
0
throughput question
Hi everybody
I have a question about throughput for a GlusterFS volume.
I have 3 servers for GlusterFS, each with one brick and 1GbE for their
network. I have made a distributed replica-3 volume with these 3 bricks. The
network between the clients and the servers is 1GbE.
Refer to this link: https://s3.amazonaws.com/aws001/guided_trek/Performance_in_a_Gluster_Systemv6F.pdf
I have set up
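Worth noting for this topology (standard replica-3 behavior, though the excerpt is cut off): the Gluster client writes every byte to all three bricks itself, so a 1GbE client NIC caps writes at roughly a third of line rate:

    # Rough ceilings for a replica-3 Gluster volume on 1GbE.
    link_mb = 1000 / 8            # 1GbE ~ 125 MB/s raw
    replicas = 3

    write_ceiling = link_mb / replicas  # client sends 3 copies:    ~40 MB/s
    read_ceiling = link_mb              # reads come from one brick: ~125 MB/s

    print(f"write ~{write_ceiling:.0f} MB/s, read ~{read_ceiling:.0f} MB/s")
    # TCP and protocol overhead push real-world numbers somewhat lower.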
2014 Sep 08
0
QEMU disk migration not using entire connection throughput
Hello everybody,
I've been observing that, during disk migration using
--copy-storage-inc, the data transfer hardly ever uses the entire
available connection bandwidth (in my case 1 Gb/s), and the migration
runs at a non-constant speed. On the other hand, at the memory migration
step, everything goes perfectly.
I am almost sure that this is not a hard disk throughput limitation,
because
2005 Jan 11
1
Tool Recommendations for measuring UDP throughput / loss / jitter
Hello all,
I'm looking for recommendations on measurement software we can set up
so we can work with our ISP to determine where packet loss is occurring.
Linux and Windows tools are fine; it's just that every time I start
searching on Google I run into many possibilities and am not sure what I
should be trying. Maybe
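Absent a packaged tool, the core measurement is small enough to sketch: send sequence-numbered, timestamped UDP packets and have the receiver count sequence gaps and compute interarrival jitter with the RFC 3550 smoothing estimator. A minimal receiver of my own (hypothetical port; a matching sender would struct-pack seq + time.time() at a fixed rate, and one-way transit times assume loosely synced clocks):

    import socket, struct, time

    def udp_probe_receiver(port=9000, expected=1000):
        """Count lost packets and estimate RFC 3550 interarrival jitter."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", port))
        last_seq = -1
        lost = 0
        jitter = 0.0
        prev_transit = None
        for _ in range(expected):
            data, _addr = sock.recvfrom(64)
            seq, send_ts = struct.unpack("!Id", data)   # sender: seq + timestamp
            transit = time.time() - send_ts             # needs rough clock sync
            if prev_transit is not None:
                d = abs(transit - prev_transit)
                jitter += (d - jitter) / 16.0           # RFC 3550 smoothing
            prev_transit = transit
            if last_seq >= 0 and seq > last_seq + 1:
                lost += seq - last_seq - 1
            last_seq = seq
        print(f"lost: {lost}, jitter: {jitter * 1000:.2f} ms")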
2019 Apr 04
0
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
On Thu, Apr 04, 2019 at 12:58:34PM +0200, Stefano Garzarella wrote:
> This series tries to increase the throughput of virtio-vsock with slight
> changes:
> - patch 1/4: reduces the number of credit update messages sent to the
> transmitter
> - patch 2/4: allows the host to split packets on multiple buffers,
> in this way, we can remove the packet
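For background, virtio-vsock flow control is credit-based: the receiver advertises its free buffer space, and patch 1/4 reduces chatter by deferring that advertisement until enough space has been freed. A toy model of the idea (illustrative names and threshold, not the kernel code):

    class CreditReceiver:
        """Toy model of deferred credit updates in credit-based flow control."""
        def __init__(self, buf_size=262144, update_fraction=0.5):
            self.free = buf_size
            self.freed_since_update = 0
            # Only advertise new credit after this many bytes are consumed.
            self.threshold = int(buf_size * update_fraction)
            self.updates_sent = 0

        def on_data(self, nbytes):          # packet arrives, buffer fills
            self.free -= nbytes

        def on_consume(self, nbytes):       # application reads, buffer drains
            self.free += nbytes
            self.freed_since_update += nbytes
            if self.freed_since_update >= self.threshold:
                self.send_credit_update()   # one message instead of one per read
                self.freed_since_update = 0

        def send_credit_update(self):
            self.updates_sent += 1          # in the real driver: a control packet

    rx = CreditReceiver()
    for _ in range(64):
        rx.on_data(4096)
        rx.on_consume(4096)
    print(rx.updates_sent)  # 2 updates for 64 reads, instead of 64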
2019 Apr 08
0
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
On 2019/4/4 6:58 PM, Stefano Garzarella wrote:
> This series tries to increase the throughput of virtio-vsock with slight
> changes:
> - patch 1/4: reduces the number of credit update messages sent to the
> transmitter
> - patch 2/4: allows the host to split packets on multiple buffers,
> in this way, we can remove the packet size limit to
>
2019 Apr 08
0
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
On Fri, Apr 05, 2019 at 09:49:17AM +0200, Stefano Garzarella wrote:
> On Thu, Apr 04, 2019 at 02:04:10PM -0400, Michael S. Tsirkin wrote:
> > On Thu, Apr 04, 2019 at 06:47:15PM +0200, Stefano Garzarella wrote:
> > > On Thu, Apr 04, 2019 at 11:52:46AM -0400, Michael S. Tsirkin wrote:
> > > > I simply love it that you have analysed the individual impact of
> >
2019 Apr 08
1
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
On Mon, Apr 08, 2019 at 02:43:28PM +0800, Jason Wang wrote:
> Another thing that may help is to implement sendpage(), which will greatly
> improve the performance.
I can't find documentation for ->sendpage(). Is the idea that you get a
struct page for the payload and can do zero-copy tx? (And can userspace
still write to the page, invalidating checksums in the header?)
Stefan
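The kernel-internal ->sendpage() hook backs zero-copy paths such as sendfile(2); the userspace-visible version of the same idea (hand the kernel the file's pages instead of copying through a buffer) can be sketched with Python's os.sendfile (hypothetical file and socket, and note the caveat Stefan raises: the pages can still change under you):

    import os, socket

    def send_file_zero_copy(sock: socket.socket, path: str) -> int:
        """Transmit a file over a connected socket without copying it through
        userspace; on Linux this rides the kernel's sendfile/sendpage
        machinery, so the pages are read from the page cache directly."""
        with open(path, "rb") as f:
            size = os.fstat(f.fileno()).st_size
            offset = 0
            while offset < size:
                sent = os.sendfile(sock.fileno(), f.fileno(), offset, size - offset)
                if sent == 0:
                    break
                offset += sent
        return offset

    # Caveat from the thread: because nothing is copied, a writer mutating
    # the file concurrently can change the payload (and invalidate any
    # precomputed checksums) after the send has been queued.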
2019 Jul 22
0
[PATCH v4 0/5] vsock/virtio: optimizations to increase the throughput
On Wed, Jul 17, 2019 at 01:30:25PM +0200, Stefano Garzarella wrote:
> This series tries to increase the throughput of virtio-vsock with slight
> changes.
> While I was testing v2 of this series I discovered a huge use of memory,
> so I added patch 1 to mitigate this issue. I put it in this series in order
> to better track the performance trends.
>
> v4:
> - rebased