similar to: SMBFS mounts slow across gigabit connection

Displaying 20 results from an estimated 2000 matches similar to: "SMBFS mounts slow across gigabit connection"

2011 Mar 11
1
UDP Performance tuning
Hi, We are running 5.5 on an HP ProLiant DL360 G6. Kernel version is 2.6.18-194.17.1.el5 (we had also tested with the latest available kernel, 2.6.18-238.1.1.el5.x86_64). We are running some performance tests using the "iperf" utility. We are seeing very bad and inconsistent performance in the UDP tests. The maximum we could get was 440 Mbits/sec, and it varies from 250 to 440
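For reference, a UDP run of the kind described above looks roughly like this with iperf (the hostname and the 1000M bandwidth target are illustrative, not from the post):

    # on the receiving host
    iperf -s -u
    # on the sending host: request 1000 Mbit/s of UDP for 30 seconds
    iperf -c receiver.example.com -u -b 1000M -t 30

iperf's UDP mode also reports loss and jitter on the server side, which helps distinguish a sender-side limit from drops in the network or receive path.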
2016 Jan 07
0
Samba over slow connections
On 07.01.2016 at 11:58, Sébastien Le Ray wrote:
> Hi list (and happy new year),
>
> I'm experiencing some trouble using Samba (4.1.17 Debian version) over
> VPN. Basically we have the following setup:
>
> PC === LAN ===> VPN (WAN) ==== LAN ===> Samba file server
>
> Copying big (say > 1MiB) files from the PC to the Samba file server almost
> always ends up with a
2016 Jan 07
1
Samba over slow connections
On 07/01/2016 12:22, Reindl Harald wrote:
>
> /usr/sbin/ifconfig eth0 txqueuelen 100
> ______________________________________________
>
> ifcfg-eth0:
>
> ETHTOOL_OPTS="-K ${DEVICE} tso on lro off; -G ${DEVICE} rx 128 tx 128"
> ______________________________________________
>
> sysctl.conf:
>
> net.core.rmem_max = 65536
> net.core.wmem_max = 65536
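Applied by hand rather than through ifcfg, the quoted tuning amounts to the following (eth0 substituted for ${DEVICE}; whether such small buffers actually help over a slow VPN link is precisely what the thread goes on to debate):

    ifconfig eth0 txqueuelen 100       # shorten the transmit queue
    ethtool -K eth0 tso on             # keep TCP segmentation offload on
    ethtool -K eth0 lro off            # disable large receive offload
    ethtool -G eth0 rx 128 tx 128      # shrink the NIC ring buffers
    sysctl -w net.core.rmem_max=65536  # cap socket receive buffers
    sysctl -w net.core.wmem_max=65536  # cap socket send buffers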
2006 Dec 30
1
CentOS 4.4 e1000 and wire-speed
Currently I'm running CentOS 4.4 on a Dell PowerEdge 850 with an Intel Pro/1000 quad-port adapter. I seem to be able to achieve only 80% utilization on the adapter, while on the same box running Fedora Core 5 I was able to reach 99% utilization. I am using iSCSI Enterprise Target as my application, and I am using the nullio feature; it just discards any write and sends back random data for
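For context, nullio is configured per-LUN in IET's /etc/ietd.conf (the target name and sector count below are illustrative); it serves dummy data instead of touching any disk, so the test isolates the network and target stack:

    Target iqn.2006-12.com.example:nulltest
        Lun 0 Sectors=10000000,Type=nullio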
2010 Dec 10
1
UDP buffer overflows?
Hi, On one of our Asterisk systems that is quite busy, we are seeing the following from 'netstat -s':

Udp:
    17725210 packets received
    36547 packets to unknown port received.
    44017 packet receive errors
    17101174 packets sent
    RcvbufErrors: 44017  <--- this

When this number increases, we see SIP errors, and in particular Qualify packets are lost, and
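RcvbufErrors counts datagrams dropped because the socket receive buffer was full before the application could drain it. A common mitigation (values illustrative) is to raise the kernel's receive buffer limits and re-check the counter:

    # watch the UDP error counters
    netstat -su
    # raise the ceiling and the default for socket receive buffers
    sysctl -w net.core.rmem_max=8388608
    sysctl -w net.core.rmem_default=1048576

Note that an application may also need to request a larger buffer itself (setsockopt with SO_RCVBUF) to take advantage of a raised ceiling.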
2005 May 13
4
Gigabit Throughput too low
Hi, I was wondering if you ever got better performance out of your Gigabit/IDE/FC2 setup? I am facing a similar situation. I am running FC2 with Samba 3.x. My problem is that I am limited to 10 MBytes per second sustained. I think it's related to pdflush and how its buffers are set up. (I have been doing some research, and before 2.6 kernels bdflush was the method that was used, and
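On 2.6 kernels the pdflush behaviour mentioned here is steered by the vm.dirty_* sysctls; these are the knobs people typically experiment with for sustained-write stalls (values illustrative, not from the post):

    # start background writeback earlier
    sysctl -w vm.dirty_background_ratio=5
    # force writers to flush once dirty pages reach 10% of RAM
    sysctl -w vm.dirty_ratio=10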
2005 May 23
0
problem in speeds [Message from superlinux]
I have been assigned to a network to replace its "Windows server with ISA caching proxy" with a "Debian Linux with Squid proxy"; the "Linux" and "ISA" machines are completely different boxes. I am using a Linux 2.6 kernel since the Linux server has SATA hard disks. The network has a downlink with a penta@net DVB card for the down-link; then it's connected
2010 Mar 16
2
What kernel params to use with KVM hosts??
Hi all, In order to reach maximum performance on my CentOS KVM hosts I have used these params:

- In /etc/grub.conf:
kernel /vmlinuz-2.6.18-164.11.1.el5 ro root=LABEL=/ elevator=deadline quiet

- In sysctl.conf:
# Special network params
net.core.rmem_default = 8388608
net.core.wmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
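For completeness, a sketch of how those settings take effect without a reboot (the elevator can also be switched per-disk at runtime instead of on the kernel command line; sda is an illustrative device):

    sysctl -p                                        # load the sysctl.conf values
    echo deadline > /sys/block/sda/queue/scheduler   # runtime I/O scheduler switch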
2007 Jul 30
0
multiple mounts for an smbfs on the same mount point
Hi all, I'm just wondering why smbmount allows the same share to be mounted several times on the same mount point, while a normal mount operation (with a non-smbfs filesystem) returns a busy mount point error.

[luca@fluca:~]$ mount
...
//server/sys on /mnt/target type smbfs (rw)
[luca@fluca:~]$ smbmount //server/sys /mnt/target/ -o ip=192.168.4.1,guest
[luca@fluca:~]$ mount
...
//server/sys on /mnt/target type
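A defensive guard, assuming one merely wants to avoid stacking mounts while the smbfs behaviour itself goes unexplained, is to test the mount table first; a sketch:

    # mount only if /mnt/target is not already a mount point
    if ! grep -qs ' /mnt/target ' /proc/mounts; then
        smbmount //server/sys /mnt/target/ -o ip=192.168.4.1,guest
    fi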
2004 Sep 23
1
smbfs mounts cause hangs in KDE/GNOME
I'm using Debian Sarge, Samba 3.0.6-3, kernel 2.6.8. I use smbmount //server/share mymountpt -oguest, which connects fine, but it all dies (hangs) if I try to look in that mounted directory with Konqueror in KDE or Nautilus in GNOME. In dmesg on the server computer I get: smb_proc_readdir_long: error=-512, breaking (and the same but with error -13) and: smb_add_request ... Timed Out! This last
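For what it's worth, error -512 is the kernel's -ERESTARTSYS and -13 is -EACCES. A common suggestion from that era (an alternative to smbfs, not a fix this post confirms) was to mount the share with the newer cifs filesystem instead:

    mount -t cifs //server/share mymountpt -o guest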
2012 Apr 17
1
Help needed with NFS issue
I have four NFS servers running on Dell hardware (PE2900) under CentOS 5.7, x86_64. The number of NFS clients is about 170. A few days ago, one of the four, with no apparent changes, stopped responding to NFS requests for two minutes every half an hour (approx). Let's call this "the hang". It has been doing this for four days now. There are no log messages of any kind pertaining
2005 Mar 16
0
SMBFS Performance Oddity
I posted back in January about a huge performance gap I was seeing on a gigabit network when comparing smbfs and smbclient operations. I was advised to try CIFS, which I did. It didn't make much difference, and I gave up and switched to NFS. Recently, however, I noticed something odd. If I move files separately across my smbfs mount, things go MUCH faster. Here's an example. I mount my
2006 Jun 04
4
Maximum samba file transfer speed on gigabit...
OK, so maybe someone can explain this to me. I've been banging my head against the wall on this one for several weeks now, and the powers that be are starting to get a little impatient. What we've got is an old FoxPro application, with the FoxPro .dbf's stored on a Linux fileserver using Samba (Fedora 3 currently, Fedora 5 on the new test server). We're having speed
2011 Sep 09
1
Slow performance - 4 hosts, 10 gigabit Ethernet, Gluster 3.2.3
Hi everyone, I am seeing slower-than-expected performance in Gluster 3.2.3 between 4 hosts connected by 10 gigabit Ethernet. Each host has 4x 300GB SAS 15K drives in RAID10, a 6-core Xeon E5645 @ 2.40GHz and 24GB RAM, running Ubuntu 10.04 64-bit (I have also tested with Scientific Linux 6.1 and Debian Squeeze - same results on those as well). All of the hosts mount the volume using the FUSE
2006 May 06
0
Gigabit Ethernet with multiple VLANs, or Fast Ethernet with two separate cards?
Hello everyone. What's better for Asterisk: having 2 distinct 100Mb network cards in the system, one on the "internet" and one on the "local net", OR having one 1000Mb network card with 2 separate VLANs set up? It's a difficult decision because 2 cards use 2 IRQs etc., but a single 1000Mb card might generate more PCI interrupts and get me into different
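If the single-card route is taken, the two networks described would ride on 802.1Q subinterfaces, roughly like this (the VLAN IDs 10/20 and the addresses are made-up examples):

    modprobe 8021q
    vconfig add eth0 10                                    # "internet" VLAN
    vconfig add eth0 20                                    # "local net" VLAN
    ifconfig eth0.10 203.0.113.2 netmask 255.255.255.0 up
    ifconfig eth0.20 192.168.1.1 netmask 255.255.255.0 up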
2010 Jan 29
2
Lockup using stock r8169 on 4.8 in gigabit mode during heavy transfer on LAN
I have no output in the logs, but /etc/init.d/network restart fixes the issue. I am rsync/scp-ing 10GB of data from a 100Mb full-duplex host to an ASUS M3A78-EM (RTL8111B/C). It locks up around the 1.5GB Tx mark. If I run ethtool -s eth0 autoneg off speed 100 duplex full, it does not happen again. Ideas?
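To make that workaround survive reboots on CentOS, the same settings can go into the interface config, which the network scripts pass to ethtool -s at ifup (a sketch of the conventional mechanism, not something the post states):

    # /etc/sysconfig/network-scripts/ifcfg-eth0
    ETHTOOL_OPTS="autoneg off speed 100 duplex full"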
2010 Jun 21
3
Increasing NFS Performance
Greetings all- I have a CentOS 5 (Final) system that is serving up content to several other hosts via NFS. The amount of data transferred is rather small as most of the files are under 100kb and each export has maybe 100 files that are accessed regularly. I'm finding that as I add more hosts accessing the NFS server, the performance seems to be getting poorer. There are obvious delays when
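Typical first steps when an NFS server degrades as clients are added are to look at the server's RPC statistics and to raise the nfsd thread count (the count of 32 below is an illustrative value for CentOS 5):

    nfsstat -s                                    # check badcalls / retrans figures
    echo 'RPCNFSDCOUNT=32' >> /etc/sysconfig/nfs
    service nfs restart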
2010 Nov 24
1
slow network throughput, how to improve?
I would like some input on this one, please. Two CentOS 5.5 XEN servers with 1Gb NICs, connected to a 1Gb switch, transfer files to each other at about 30MB/s. Both servers have the following setup:

CentOS 5.5 x64
XEN
1Gb NICs
7200rpm SATA HDDs

The hardware configuration can't change; I need to use these servers as they are. They are both used in production
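A useful first step here is separating the network path from the disks: iperf measures raw TCP throughput with no disk I/O, and hdparm gives a rough read ceiling per host (the hostname and device below are illustrative):

    iperf -s                       # on the first server
    iperf -c serverA.example.com   # on the second server
    hdparm -t /dev/sda             # uncached sequential read speed

If iperf shows near-gigabit speed, the 30MB/s ceiling points at the disks or the Xen I/O path rather than the network.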
2014 Apr 29
2
Degraded performance when using GRE over tinc
Hi, In a setup where Open vSwitch is used with GRE tunnels on top of an interface provided by tinc, I'm experiencing significant performance degradation (from 100Mb/s down to 1Mb/s in the worst case) and I'm not sure how to fix it. The manifestation of the problem, from the user's point of view, is that iperf reports ~100Mb/s while rsync reports ~1Mb/s: $ iperf -c 91.224.149.132
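The iperf-fast/rsync-slow pattern over stacked tunnels (GRE inside tinc) often points at MTU and fragmentation trouble, since each encapsulation layer consumes header space. A quick probe (the address is the one from the post; the 1400-byte size and gre0 device name are illustrative):

    ping -M do -s 1400 91.224.149.132   # send with DF set to test the path MTU
    ip link set dev gre0 mtu 1400       # if the probe fails, lower the tunnel MTU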
2019 Aug 04
2
[Bug 1359] New: nft 0.9.1 - table family inet, chain type nat, fails to auto-load modules
https://bugzilla.netfilter.org/show_bug.cgi?id=1359

          Bug ID: 1359
         Summary: nft 0.9.1 - table family inet, chain type nat, fails
                  to auto-load modules
         Product: nftables
         Version: unspecified
        Hardware: x86_64
              OS: other
          Status: NEW
        Severity: normal
        Priority: P5
       Component:
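The configuration the report describes boils down to an inet-family chain of type nat, for which nft 0.9.1 reportedly failed to auto-load the kernel's NAT modules; a minimal sketch of that shape (the table and chain names are made up):

    nft add table inet natdemo
    nft add chain inet natdemo post '{ type nat hook postrouting priority 100 ; }'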