Displaying 20 results from an estimated 800 matches similar to: "TCP Tuning/Apache question (possibly OT)"
2011 Mar 11
1
UDP Performance tuning
Hi,
We are running 5.5 on an HP ProLiant DL360 G6.
Kernel version is 2.6.18-194.17.1.el5 (we have also tested with the latest
available kernel, kernel-2.6.18-238.1.1.el5.x86_64).
We are running some performance tests using the "iperf" utility.
We are seeing very bad and inconsistent performance on the UDP testing.
The maximum we could get, was 440 Mbits/sec, and it varies from 250 to 440
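A test of this kind is usually driven with iperf in UDP mode on both ends; the commands below are a sketch using iperf version 2 flags (the hostname, datagram size, and offered rate are illustrative assumptions, not taken from the post):

```shell
# Receiver: UDP server with an enlarged socket buffer
iperf -s -u -l 1470 -w 4M

# Sender: offer 1000 Mbit/s of UDP for 30 seconds; the server reports loss
iperf -c server.example.com -u -b 1000M -l 1470 -w 4M -t 30
```

If the achieved rate swings between 250 and 440 Mbit/s while the loss figure climbs, a too-small receive buffer (net.core.rmem_max) is a common suspect.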
2010 Dec 10
1
UDP buffer overflows?
Hi,
On one of our asterisk systems that is quite busy, we are seeing the
following from 'netstat -s':
Udp:
17725210 packets received
36547 packets to unknown port received.
44017 packet receive errors
17101174 packets sent
RcvbufErrors: 44017 <--- this
When this number increases, we see SIP errors, and in particular
Qualify packets are lost, and
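RcvbufErrors counts datagrams dropped because a socket's receive buffer was full when they arrived. A quick way to inspect the counter and the current buffer ceiling, a sketch assuming a Linux /proc filesystem (the 16 MB / 8 MB values in the last comment are illustrative, not a recommendation):

```shell
# RcvbufErrors lives in the Udp: counters of /proc/net/snmp
# (first Udp: line = column names, second = values)
grep '^Udp:' /proc/net/snmp
# Ceiling on what an application may request with SO_RCVBUF
cat /proc/sys/net/core/rmem_max
# If the drop counter keeps climbing, raise the limits (needs root), e.g.:
#   sysctl -w net.core.rmem_max=16777216 net.core.rmem_default=8388608
```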
2020 Jan 13
0
UDP buffer adjustment
Hi,
I saw the config below in tinc’s manual:
UDPRcvBuf = bytes (OS default)
Sets the socket receive buffer size for the UDP socket, in bytes. If unset, the default buffer size will be used
by the operating system.
UDPSndBuf = bytes (OS default)
Sets the socket send buffer size for the UDP socket, in bytes. If unset, the default buffer size
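In tinc.conf the two options are plain key = value lines; a minimal sketch, with 1 MiB values chosen purely for illustration (not a recommendation from the manual):

```
UDPRcvBuf = 1048576
UDPSndBuf = 1048576
```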
2010 Mar 16
2
What kernel params to use with KVM hosts??
Hi all,
In order to reach maximum performance on my CentOS KVM hosts I have used these params:
- On /etc/grub.conf:
kernel /vmlinuz-2.6.18-164.11.1.el5 ro root=LABEL=/ elevator=deadline quiet
- On sysctl.conf
# Special network params
net.core.rmem_default = 8388608
net.core.wmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
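Values placed in sysctl.conf only take effect after "sysctl -p" or a reboot; the live values can be read back from the /proc mirror. A minimal check, assuming a Linux /proc filesystem:

```shell
# Print the running values of the four buffer sysctls set above
for name in rmem_default wmem_default rmem_max wmem_max; do
    echo "net.core.$name = $(cat /proc/sys/net/core/$name)"
done
```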
2009 Jul 07
1
Sysctl on Kernel 2.6.18-128.1.16.el5
Sysctl Values
-------------------------------------------
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_window_scaling = 1
# vm.max-readahead = ?
# vm.min-readahead = ?
# HW Controler Off
# max-readahead = 1024
# min-readahead = 256
# Memory over-commit
# vm.overcommit_memory=2
# Memory to
2006 Dec 30
1
CentOS 4.4 e1000 and wire-speed
Currently I'm running CentOS 4.4 on a Dell Poweredge 850 with an Intel
Pro/1000 Quad-port adapter.
I seem to be able to only achieve 80% utilization on the adapter, while
on the same box running Fedora Core 5 I was able to reach 99%
utilization.
I am using iSCSI Enterprise Target as my application and I am using the
nullio feature; it just discards any write and sends back random data
for
2005 May 23
0
problem in speeds [Message from superlinux]
I have been assigned a network where I must replace its "Windows server with ISA
caching proxy" with a "Debian Linux with Squid proxy"; the "Linux"
and "ISA" machines are completely different boxes. I am using a Linux 2.6 kernel
since the Linux server has SATA hard disks.
The network has its downlink through a penta@net DVB card; then
it's connected
2013 Nov 07
0
GlusterFS with NFS client hang up some times
I have the following setup with GlusterFS.
Server: 4
- CPU: Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
- RAM: 32G
- HDD: 1T, 7200 RPM (x 10)
- Network card: 1G x 4 (bonding)
OS: Centos 6.4
- File system: XFS
> Disk /dev/sda: 1997.1 GB, 1997149306880 bytes
> 255 heads, 63 sectors/track, 242806 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
2016 Jan 07
0
Samba over slow connections
On 07.01.2016 at 11:58, Sébastien Le Ray wrote:
> Hi list (and happy new year),
>
> I'm experiencing some trouble using Samba (4.1.17, Debian version) over
> VPN. Basically we have the following setup:
>
> PC === LAN ===> VPN (WAN) ==== LAN ===> Samba file Server
>
> Copying big (say > 1MiB) files from PC to Samba file server almost
> always ends up with a
2016 Jan 07
1
Samba over slow connections
On 07/01/2016 at 12:22, Reindl Harald wrote:
>
> /usr/sbin/ifconfig eth0 txqueuelen 100
> ______________________________________________
>
> ifcfg-eth0:
>
> ETHTOOL_OPTS="-K ${DEVICE} tso on lro off; -G ${DEVICE} rx 128 tx 128"
> ______________________________________________
>
> sysctl.conf:
>
> net.core.rmem_max = 65536
> net.core.wmem_max = 65536
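The quoted ifcfg/ETHTOOL_OPTS settings can also be applied by hand for testing; a sketch using the device name and values from the quote (all of these need root, and iproute2's "ip link" stands in for the deprecated ifconfig call):

```shell
ethtool -K eth0 tso on lro off       # toggle segmentation / large-receive offload
ethtool -G eth0 rx 128 tx 128        # shrink the NIC ring buffers
ip link set dev eth0 txqueuelen 100  # same effect as the ifconfig line above
```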
2012 Apr 17
1
Help needed with NFS issue
I have four NFS servers running on Dell hardware (PE2900) under CentOS
5.7, x86_64. The number of NFS clients is about 170.
A few days ago, one of the four, with no apparent changes, stopped
responding to NFS requests for two minutes every half an hour (approx).
Let's call this "the hang". It has been doing this for four days now.
There are no log messages of any kind pertaining
2017 May 17
0
Improving packets/sec and data rate - v1.0.24
Hi,
Terribly sorry about the duplicated message.
I've completed the upgrade to tinc 1.0.31 but have not seen much of a
performance increase. The change looks to be similar to switching to
aes-256-cbc with sha256 (which are now the defaults, so that makes
sense).
Our tinc.conf is reasonably simple:
Name = $hostname_for_node
Device = /dev/net/tun
PingTimeout = 60
ReplayWindow = 625
2013 Sep 05
0
Windows guest network keeps going down when several Windows guests are running on one KVM host
Hi all:
I have some KVM hosts (RHEL 6.4, 2.6.32-358.0.1.el6.x86_64) and I run several Windows guests on each (more than 10 guests on one host; the guest OSes are Win7-32/Win7-64/Win2k8), but the guest network keeps going down automatically and loses packets. I tried the virtio driver and the e1000 driver, but it didn't help. However, when I run cmd.exe and ping an IP on another subnet, it works.
The host and guest are connected by
2006 Jan 09
4
Problem with habtm and resulting SQL insert
Cheers,
I have a problem with 1.0 and a habtm relationship between User and Article.
I want to save all articles that users read. I have these models:
class User < ActiveRecord::Base
has_and_belongs_to_many :read_articles, :class_name => "Article",
:join_table => "read_articles"
...
end
class Article < ActiveRecord::Base
has_and_belongs_to_many :readers,
2007 Dec 28
7
Xen and networking.
I have a beefy machine
(Intel dual quad-core, 16 GB memory, 2 x GigE)
I have loaded RHEL5.1-xen on the hardware and have created two logical systems:
4 CPUs, 7.5 GB memory, 1 x GigE
Following RHEL guidelines, I have it set up so that eth0->xenbr0 and
eth1->xenbr1
Each of the two RHEL5.1 guests uses one of the interfaces and this is
verified at the
switch by seeing the unique MAC addresses.
2004 Dec 31
1
SMBFS mounts slow across gigabit connection
I'm using Samba & smbfs to make directories on a Linux file server
available across a switched Gigabit network. Unfortunately, when
mounting the shares to another Linux system with smbfs, the performance
is terrible.
To test the setup, I created both a 100 MB and a 650 MB file and transferred
them with ftp, smbclient, and smbfs (mounted share). I also used iperf
to send each file, just out of
2009 Jan 20
6
Apache Server Tuning for Performance
Hi all,
I am facing performance issues with our web servers, which handle
250 concurrent requests properly and then stop
responding when there are more than 250 requests.
The current configuration parameters are as follows :
apachectl -version
Server version: Apache/2.0.52
Server built: Jan 30 2007 09:56:16
Kernel : 2.6.9-55.ELsmp #1 SMP Fri Apr 20 16:36:54 EDT 2007 x86_64
2006 Sep 22
0
Re: samba Digest, Vol 45, Issue 29
You really meant 300 _M_bps as the upper bound according to Enterasys.
My switches are from Enterasys too. After a firmware update I get about
722 Mbps both ways. The client's disk drive (Maxtor 250 GB) can't read or
write faster.
If you don't get transfer rates in the immediate neighbourhood of the
read/write
speeds of your client's disk drives, then your network setup (hardware
2017 May 17
2
Improving packets/sec and data rate - v1.0.24
Hi Jared,
I've seen the same while testing on Digital Ocean; I think it's the context
switching that happens when sending a packet.
I've done some testing with WireGuard and that has much better performance,
but it's still changing quite a lot and only does a subset of what
tinc does, so it's probably not a stable solution.
Martin
On Wed, 17 May 2017 at 18:05 Jared Ledvina <jared at
2020 Apr 28
0
[PATCH 5/5] virtio: Add bounce DMA ops
Hi Srivatsa,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on vhost/linux-next]
[also build test ERROR on xen-tip/linux-next linus/master v5.7-rc3 next-20200428]
[cannot apply to swiotlb/linux-next]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base