similar to: Scaling Asterisk: Dual-Core CPUs not yielding gains at high call volumes

Displaying 20 results from an estimated 6000 matches similar to: "Scaling Asterisk: Dual-Core CPUs not yielding gains at high call volumes"

2012 Jul 11
12
99% iowait on one core of an 8-core processor
Hi All, We have a Xen server with an 8-core processor. I can see 99% iowait, but only on core 0:

02:28:49 AM  CPU  %user  %nice  %sys  %iowait  %irq  %soft  %steal  %idle   intr/s
02:28:54 AM  all   0.00   0.00  0.00    12.65  0.00   0.02    2.24  85.08  1359.88
02:28:54 AM    0   0.00   0.00  0.00    96.21  0.00   0.20    3.19   0.40   847.11
02:28:54 AM
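A minimal first check for this kind of single-core iowait pile-up, assuming sysstat is installed (the IRQ number below is illustrative):

    # per-CPU breakdown; confirms the iowait really is pinned to core 0
    mpstat -P ALL 5 1

    # see which interrupts CPU 0 is servicing (look for blkif/xvd lines on Xen)
    cat /proc/interrupts

    # move a hot IRQ, e.g. IRQ 24, onto CPU 2 (the mask is hex; 4 = CPU 2)
    echo 4 > /proc/irq/24/smp_affinity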
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one. Here is an iostat example from a host within the same cluster, but without the RAID check running:

[root at r2k1 ~]# iostat -xdmc 1 10
Linux 3.10.0-327.13.1.el7.x86_64 (r2k1)  05/27/16  _x86_64_  (32 CPU)
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it. We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs:

2x E5-2650
128 GB RAM
12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA
Dual-port 10 Gb NIC

The drives are configured as one large
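When the weekly raid-check drives %iowait up like this, one standard mitigation is throttling md's sync/check rate (values are in KB/s; the cap below is illustrative, not a recommendation):

    # current limits
    sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max

    # cap the check so production I/O keeps priority
    sysctl -w dev.raid.speed_limit_max=20000

    # watch the check progress
    cat /proc/mdstat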
2016 May 25
1
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
On 2016-05-25 19:13, Kelly Lesperance wrote:
> Hdparm didn't get far:
>
> [root at r1k1 ~]# hdparm -tT /dev/sda
>
> /dev/sda:
>  Timing cached reads: Alarm clock
> [root at r1k1 ~]#

Hi Kelly, Try running 'iostat -xdmc 1'. Look for a single drive that has substantially greater await than ~10 msec. If all the drives except one are taking 6-8 msec, but one is very
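To pick out a slow drive automatically, something like this works as a sketch (field positions assume the el7 sysstat layout, where await is column 10; adjust for other versions):

    # print any device whose await exceeds 20 ms across a 5-sample run
    iostat -xd 1 5 | awk '$1 ~ /^sd/ && $10+0 > 20 { print $1, $10 " ms" }'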
2009 Sep 14
8
10 Node OCFS2 Cluster - Performance
Hi, I am currently running a 10-node OCFS2 cluster (version 1.3.9-0ubuntu1) on Ubuntu Server 8.04 x86_64. Linux n1 2.6.24-24-server #1 SMP Tue Jul 7 19:39:36 UTC 2009 x86_64 GNU/Linux. The cluster is connected to a 1 TB iSCSI device presented by an IBM 3300 storage system, running over a 1 Gb network. Mounted on all nodes: /dev/sdc1 on /cfs1 type ocfs2
2009 Apr 03
35
Xen system hang or freeze
Hi all, This is my first post to the list, I hope someone out there can help! I am running Xen 3.0.3 with a CentOS 5.2-based Dom0 (kernel-xen-2.6.18-92.1.22.el5). Recently I have noticed some complete system lockups on a few different servers. Neither Dom0 nor any of the guests responds to pings; connecting a keyboard and monitor to the system shows only a blank screen. Nothing is written to logs
2008 Jan 15
0
strange sar statistics: total CPU 90% in userspace, but CPU 0 and 1 only 1%
Hi CentOS users, The server (ProLiant DL380 G5, CentOS 4.3) was not responsive anymore (SSH), the fans rotated at full power, and the console was blank. After a restart it seems to be fine. Now I am analyzing sar statistics (nothing special or unusual in the system logs). The hardware monitoring with HP tools didn't work.

# sar -P ALL
Linux 2.6.9-34.ELsmp (cent061)  01/15/2008
12:00:01 AM
2007 Aug 22
5
Slow concurrent actions on the same LVM logical volume
Hi to all! I have problems with concurrent filesystem actions on an OCFS2 filesystem which is mounted by 2 nodes. OS=RH5ES and OCFS2=1.2.6. For example: if I have an LV called testlv mounted on /mnt on both servers, and I run "dd if=/dev/zero of=/mnt/test.a bs=1024 count=1000000" on server 1 while at the same time running "du -hs /mnt/test.a" on the other node, du -hs takes about 5 seconds to execute: 270M
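One way to separate page-cache and cluster-lock effects from raw throughput is to repeat the write with direct I/O (a sketch; the block size is raised so the O_DIRECT alignment requirement holds):

    # bypasses the page cache; if du on the second node now returns quickly,
    # the stall is tied to writeback/lock interaction rather than disk speed
    dd if=/dev/zero of=/mnt/test.a bs=1M count=1000 oflag=direct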
2008 May 08
7
100% iowait in domU with no IO tasks.
Hi. I logged into one of our domUs tonight and saw the following problem:

# iostat -k 5
avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.00   0.00     0.00   100.00    0.00   0.00

Device:  tps  kB_read/s  kB_wrtn/s  kB_read  kB_wrtn
sda1    0.00       0.00       0.00        0        0
sda2    0.00       0.00       0.00        0
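With iowait pegged but zero tps, the usual suspect is a task stuck in uninterruptible (D) sleep rather than real disk traffic. A quick way to look, as root:

    # list processes in D state and what they are waiting on
    ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /D/'

    # dump blocked-task stack traces into the kernel log
    echo w > /proc/sysrq-trigger
    dmesg | tail -50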
2010 Mar 10
3
Logrotate/cron and major I/O contention with KVM.
Is anyone else having major I/O peaks due to logrotate or other jobs running simultaneously across multiple guests? I have one KVM server running CentOS 5.4 with local disk that is seriously suffering as most of the guests rotate their syslog at the same time. Looking at the KVM server I'm seeing:

11:00:01 PM  CPU  %user  %nice  %system  %iowait  %steal  %idle
03:40:01 AM
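A common mitigation is staggering the rotation inside each guest with a random delay (a sketch; the 30-minute bound is arbitrary, and $RANDOM assumes /bin/sh is bash, as on CentOS):

    # prepend to /etc/cron.daily/logrotate in each guest
    sleep $((RANDOM % 1800))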
2020 Jul 03
2
Slow terminal response Centos 7.7 1908
Hey! I have a strange condition on one of the servers that I don't know where to start looking into. I log in to the server via SSH (can't do it any other way), and anything that I type echoes slowly; HTTP sessions time out waiting for screen redraw. So, the server is acting "slow". The server is bare metal, no virtual services, no alarms in the disk RAID. Note: the server was restarted because of a power failure.
2010 Jan 05
4
Software RAID1 Disk I/O
I just installed the CentOS 5.4 64-bit release on a 1.9 GHz CPU with 8 GB of RAM. It has 2 Western Digital 1.5 TB SATA2 drives in RAID1.

[root at server ~]# df -h
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/md2    1.4T  1.4G   1.3T    1%  /
/dev/md0     99M   19M    76M   20%  /boot
tmpfs       4.0G     0   4.0G    0%  /dev/shm
[root at server ~]#

It's barebones
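On a fresh install like this, the first thing to rule out is the initial RAID1 resync still running in the background:

    cat /proc/mdstat    # look for a "resync = ...%" line on md0/md2
    iostat -xdk 5       # confirms whether the mirror is busy with sync I/O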
2020 Jul 03
1
Slow terminal response Centos 7.7 1908
Hi Erick, what was the value of 'si' in top? Best Regards, Strahil Nikolov

On 3 July 2020 at 18:48:30 GMT+03:00, Erick Perez - Quadrian Enterprises <eperez at quadrianweb.com> wrote:
> It was found that the software NIC team created in CentOS was having
> issues due to a failing network cable. The team was going berserk with
> up/down changes.
>
> On Fri, Jul 3,
2010 Dec 09
1
Extremely poor write performance, but read appears to be okay
Hello, I'm writing from the other side of the world from where my systems are, so details are coming in slowly. We have a 6 TB OCFS2 volume across 20 or so nodes, all running OEL 5.4 with ocfs2-1.4.4. The system has worked fairly well for the last 6-8 months. Something has happened over the last few weeks which has driven write performance nearly to a halt. I'm not sure how to proceed, and
2007 Oct 18
1
Vista performance (uggh)
Issue: Vista reads slowly from a Samba server. This appears to pop up periodically here and elsewhere. My smb.conf file has:

[homes]
...
vfs objects = readahead

as suggested elsewhere. Writes are approximately 17-18 MB/s, which is acceptable. Reads are in the 8 MB/s range, which is appallingly slow. Using Linux smbclient and Windows XP clients I can read at 25+ MB/s. I've enabled vfs
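For a read benchmark that takes the Vista client stack out of the picture, smbclient can time a raw transfer from the same share (the share, user, and file names below are placeholders):

    # prints throughput (KiloBytes/sec) after the transfer completes
    smbclient //server/homes -U user -c 'get bigfile /dev/null'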
2016 Mar 10
2
Soft lockups with Xen4CentOS 3.18.25-18.el6.x86_64
I've been running 3.18.25-18.el6.x86_64 + our build of xen 4.4.3-9 on one host for the last couple of weeks and have gotten several soft lockups within the last 24 hours. I am posting here first in case anyone else has experienced the same issue. Here is the first instance:

sched: RT throttling activated
NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [swapper/0:0]
Modules linked in:
2018 May 01
0
Finding performance bottlenecks
Hi, So is it KVM or VMware as the host(s)? I basically have the same setup, i.e. 3 x 1 TB "raid1" nodes and VMs, but 1 Gb networking. I did notice that with VMware using NFS, disk was pretty slow (40% of a single disk), but this was over 1 Gb networking, which was clearly saturated. Hence I am moving to KVM to use glusterfs, hoping for better performance and bonding; it will be interesting to see
2018 Apr 30
3
Finding performance bottlenecks
Hi, I'm trying to set up a 3-node gluster cluster, and am hitting huge performance bottlenecks. The 3 servers are connected over 10 GbE, and glusterfs is set to create a 3-node replica. With a single VM, performance was poor, but I could have lived with it. I tried to stress it by putting copies of a bunch of VMs on the servers and seeing what happened with parallel nodes... network load never broke
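Before blaming gluster itself, it is worth profiling both layers separately (the volume name below is a placeholder, and iperf3 is assumed to be installed):

    # raw network throughput between two of the nodes
    iperf3 -s                 # on server1
    iperf3 -c server1 -P 4    # on server2

    # per-brick latency and op counts from gluster's built-in profiler
    gluster volume profile myvol start
    gluster volume profile myvol info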
2016 Mar 12
1
Soft lockups with Xen4CentOS 3.18.25-18.el6.x86_64
On 03/10/2016 12:05 AM, Sarah Newman wrote:
> On 03/09/2016 08:15 PM, Sarah Newman wrote:
>> I've been running 3.18.25-18.el6.x86_64 + our build of xen 4.4.3-9 on one host for the last couple
>> of weeks and have gotten several soft lockups within the last 24 hours. I am posting here first in
>> case anyone else has experienced the same issue.
>
> Here is mpstat
2011 May 07
7
kswapd taking 100% cpu with no swap on system
Hi All, I have a Xeon server with 16 GB RAM and no swap. I am running a Cassandra server on two nodes in a cluster. When there is high load on the server, kswapd0 kicks in, takes 100% CPU, and makes the machine very slow, so we need to restart our Cassandra server. I have the latest kernel, 2.6.18-238.9.1.el5. Please let me know how I can fix this issue. It's hurting us badly; this is our production server. Any
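With no swap configured, kswapd spinning usually points at zone or NUMA reclaim pressure rather than swapping. Some standard checks, as a sketch (both sysctls exist on el5-era kernels; the values are illustrative):

    # if this is 1 on a NUMA box, the kernel reclaims locally before using remote nodes
    cat /proc/sys/vm/zone_reclaim_mode
    sysctl -w vm.zone_reclaim_mode=0

    # raising the reclaim watermark can stop kswapd from thrashing at the margin
    sysctl -w vm.min_free_kbytes=262144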