Displaying 20 results from an estimated 6000 matches similar to: "Throughput problem with Samba 3.3.4 over VPN"
2011 Mar 25
1
Samba Tuning to increase Throughput
Hi All,
I have gone through the threads related to throughput issues on this list and
found a few similar issues, but could not find a solution.
So I am looking for some advice from the group.
I am trying to use Samba to access a USB disk connected to our evaluation
board, which has an Xtensa core running at 400 MHz.
Samba 3.5.x is running on the board. We are getting the throughput below, as
tested with the
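Threads like this one usually start with smb.conf tuning. Below is a minimal
sketch of the commonly suggested options, assuming a stock Samba 3.5.x
configuration; the buffer and AIO sizes are illustrative assumptions to
benchmark on the actual board, not measured recommendations:

    # /etc/samba/smb.conf -- illustrative tuning sketch; all values are assumptions
    [global]
        # Disable Nagle and enlarge socket buffers for bulk transfers
        socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
        # Let the kernel copy file data straight to the socket
        use sendfile = yes
        # Overlap disk and network I/O for requests above this size
        aio read size = 16384
        aio write size = 16384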
2019 Jul 30
1
[PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput
On Tue, Jul 30, 2019 at 11:54:53AM -0400, Michael S. Tsirkin wrote:
> On Tue, Jul 30, 2019 at 05:43:29PM +0200, Stefano Garzarella wrote:
> > This series tries to increase the throughput of virtio-vsock with slight
> > changes.
> > While I was testing the v2 of this series I discovered a huge use of memory,
> > so I added patch 1 to mitigate this issue. I put it in this
2009 Sep 16
3
DomU to DomU throughput issue
Hi
Is there anyone who has successfully resolved the low throughput problem in
guest-to-guest communication?
I am using Xen 3.4.1 with PV OS kernel 2.6.30-rc6-tip on Fedora 11. While
running the netperf benchmark on guests, the throughput results are very low:
DomU to DomU 0.29 Mbps
Why is it so? What could be the problem? Is there any issue with the FC11
platform?
Regards,
Fasiha Ashraf
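Numbers this low were typically reproduced and narrowed down with netperf,
retesting after disabling TX checksum offload in the guest, which was a
commonly suggested workaround for poor PV network throughput in that era.
A sketch, where the interface name and the receiver's IP are assumptions:

    # In the receiving DomU (assumed to be 192.168.1.20):
    netserver
    # In the sending DomU, run a 30-second TCP stream test:
    netperf -H 192.168.1.20 -t TCP_STREAM -l 30
    # Retest with TX checksum offload disabled in the guest:
    ethtool -K eth0 tx off
    netperf -H 192.168.1.20 -t TCP_STREAM -l 30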
2004 Nov 10
5
etherbridge bottleneck
I ran some iperf tests today and it looks like the etherbridge
is the limiting factor on throughput. In the beforetime, I saw great
throughput to the VMs: over 800 Mbps. With the bridge, the numbers
are somewhere in the 400s.
Is this the speed I can expect from the bridge?
Is there some tuning I should try, or another way to get more bandwidth
into the VMs?
This is with xen-2.0 on kernel 2.4.27-xen0.
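A measurement like this is easy to repeat with iperf; a sketch, where the VM
hostname is an assumption:

    # On the VM (server side):
    iperf -s
    # On the peer (client side), run a 30-second TCP test:
    iperf -c vm-hostname -t 30
    # Compare runs with and without the interface enslaved to the bridge.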
2007 Sep 28
0
samba-3.0.24 on openbsd: low throughput
Greetings, list. I am serving SMB using the samba-3.0.24 package on an
OpenBSD 4.1-release machine and am seeing really low throughput from the
server, even when both the server and client are on gigabit Ethernet.
The maximum throughput I've been able to attain is ~6 MBps, which is
pretty slow; this top speed is identical on both 100 Mbps and 1 Gbps
segments.
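One way to time the server itself, independent of any particular client OS, is
smbclient's built-in transfer reporting; the share, user, and file names here
are assumptions:

    # Fetch a large file and discard it; smbclient prints the average rate (KB/s)
    smbclient //server/share -U user -c 'get bigfile /dev/null'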
2017 Apr 20
2
[PATCH net-next v2 2/5] virtio-net: transmit napi
On Thu, Apr 20, 2017 at 2:27 AM, Jason Wang <jasowang at redhat.com> wrote:
>
>
> On 2017?04?19? 04:21, Willem de Bruijn wrote:
>>
>> +static void virtnet_napi_tx_enable(struct virtnet_info *vi,
>> +                                   struct virtqueue *vq,
>> +                                   struct napi_struct *napi)
>> +{
>> +        if
2017 Apr 21
3
[PATCH net-next v2 2/5] virtio-net: transmit napi
>>> Maybe I was wrong, but according to Michael's comment it looks like he
>>> wants to check affinity_hint_set just for speculative tx polling on rx napi
>>> instead of disabling it altogether.
>>>
>>> And I'm not convinced this is really needed: the driver only provides an
>>> affinity hint instead of affinity, so it's
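Whether the affinity hints are actually honored is observable from userspace,
which is what the disagreement turns on. A quick check, with the IRQ number as
an assumption:

    # List virtio interrupts and their per-CPU counts:
    grep virtio /proc/interrupts
    # Compare the driver's hint with the effective affinity for one IRQ (e.g. 25):
    cat /proc/irq/25/affinity_hint
    cat /proc/irq/25/smp_affinity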
2019 Jul 30
7
[PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput
This series tries to increase the throughput of virtio-vsock with slight
changes.
While I was testing the v2 of this series I discovered a huge use of memory,
so I added patch 1 to mitigate this issue. I put it in this series in order
to better track the performance trends.
v5:
- rebased all patches on net-next
- added Stefan's R-b and Michael's A-b
v4:
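The thread's own numbers come from the author's benchmark setup, but raw vsock
throughput can be sketched without special tooling, assuming a socat build with
AF_VSOCK support and pv installed; the port number is an assumption:

    # On the host: listen on vsock port 1234 and watch the incoming byte rate
    socat -u VSOCK-LISTEN:1234 - | pv > /dev/null
    # In the guest: stream zeros to the host (CID 2 is the well-known host CID)
    socat -u /dev/zero VSOCK-CONNECT:2:1234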
2010 Dec 27
2
E1000 eth1 link flakiness - causes??
Have you experienced this? What's going on when this occurs? What do I
need to do to keep it from occurring? Please advise. Thanks.
Dec 4 10:18:17 localhost kernel: e1000: eth1 NIC Link is Down
Dec 4 10:18:19 localhost kernel: e1000: eth1 NIC Link is Up 100 Mbps
Full Duplex, Flow Control: RX/TX
Dec 4 10:18:21 localhost kernel: e1000: eth1 NIC Link is Down
Dec 4 10:18:23 localhost kernel:
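Flapping like this is classically an autonegotiation or cabling problem; the
usual first checks, with the interface name taken from the log:

    # Inspect current link state, negotiated speed, and duplex:
    ethtool eth1
    # As a test, pin the link to 100/full with autoneg off (match the switch port!):
    ethtool -s eth1 speed 100 duplex full autoneg off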
2017 Apr 24
2
[PATCH net-next v2 2/5] virtio-net: transmit napi
On Mon, Apr 24, 2017 at 12:40 PM, Michael S. Tsirkin <mst at redhat.com> wrote:
> On Fri, Apr 21, 2017 at 10:50:12AM -0400, Willem de Bruijn wrote:
>> >>> Maybe I was wrong, but according to Michael's comment it looks like he
>> >>> want
>> >>> check affinity_hint_set just for speculative tx polling on rx napi
>> >>> instead
2017 Jul 23
2
Slow Samba
Hello friends,
I have a gigabit network with a few Windows and CentOS 7 machines, and I
noticed that when copying files via Samba from
Windows to Windows I can copy files at a speed of about 120 MBps (I think this
is the max speed a gigabit network can provide).
But when copying files from:
CentOS to CentOS I get only speeds of about 40 MBps
Windows to CentOS 40 MBps
CentOS to Windows 40 MBps
I
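To tell whether the ~40 MBps ceiling is in Samba or elsewhere, timing a bulk
read through the CIFS layer is a useful data point; the server name, share,
user, and file are assumptions:

    # Mount the share from one CentOS box on the other, then time a large read;
    # dd prints the achieved throughput when it finishes
    mount -t cifs //centos-server/share /mnt -o user=test
    dd if=/mnt/bigfile of=/dev/null bs=1M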
2009 Sep 04
2
Xen & netperf
First, I apologize if this message has been received multiple times.
I'm having problems subscribing to this mailing list:
Hi xen-users,
I am trying to decide whether I should run a game server inside a Xen
domain. My primary reason for wanting to virtualize is because I want
to isolate this environment from the rest of my server. I really like
the idea of isolating the game server
2009 Sep 04
3
bridge throughput problem
I have set up Xen on my Intel quad-core server and am now running different
experiments to measure network throughput in a virtualized environment.
These are some of the results:
Netperf-4.5 results for inter-domain communication:
Sr.No.  Client   Server  Time(sec)  Throughput(Mbps)
1       Guest-1  Dom0
2006 Dec 11
3
VPN As SIP Tunneling?
Hi All
Could a VPN be used to help with SIP tunneling and QoS issues?
State 1:
Two IP networks connected via the public Internet transmitting VoIP traffic,
say a VoIP user and a VoIP termination provider.
Each side can apply QoS to its own part, but if QoS does NOT exist between
them, then call quality will be bad anyhow.
State 2:
Same as above, except a VPN tunnel is set up between the two sides.
Thus
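Whatever the answer, the tunnel only helps if the markings survive it; on
Linux, tagging SIP signaling and RTP with DSCP EF before they enter the tunnel
looks roughly like this (the port numbers are assumptions):

    # Mark SIP signaling and a typical RTP port range as Expedited Forwarding:
    iptables -t mangle -A POSTROUTING -p udp --dport 5060 -j DSCP --set-dscp-class EF
    iptables -t mangle -A POSTROUTING -p udp --dport 10000:20000 -j DSCP --set-dscp-class EF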
2019 Sep 15
2
nfsmount default timeo=7 causes timeouts on 100 Mbps
I can't explain why 700 msecs aren't enough to avoid timeouts on 100
Mbps networks, but my tests verify it, so I'm writing to the list to
request that you increase the default timeo to at least 30, or to 600,
which is the default for `mount -t nfs`.
How to reproduce:
1) Cabling:
server <=> 100 Mbps switch <=> client
Alternatively, one can use a 1000 Mbps switch and
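Until a default changes, the workaround is to pass timeo explicitly; note that
timeo is in tenths of a second, so 600 means 60 s. The server path and
mountpoint are assumptions:

    # Mount with a 60 s timeout instead of the 0.7 s default under discussion:
    mount -t nfs -o timeo=600,retrans=2 server:/export /mnt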
2009 Apr 21
2
tg3 BCM5755 intermittently stops working after upgrade to 5.3.
Dear All,
I have an HP xw4400 with the following Ethernet controller,
as reported by lspci:
Broadcom Corporation NetXtreme BCM5755 Gigabit Ethernet PCI Express (rev 02)
This machine was running CentOS 5.2 without any problem. After
updating the machine with yum update on 8 April, after which it reports
itself as CentOS 5.3, this machine stops communicating
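For an intermittent tg3 outage after a kernel update, the usual data to gather
before filing a bug, with the interface name as an assumption:

    # Driver and firmware versions actually in use after the update:
    ethtool -i eth0
    # Link state and negotiated speed when the failure occurs:
    ethtool eth0
    # Kernel messages from the tg3 driver around the hang:
    dmesg | grep -i tg3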