similar to: High load/corrupted indexes after moving mailboxes?

Displaying 20 results from an estimated 3000 matches similar to: "High load/corrupted indexes after moving mailboxes?"

2013 Apr 27
1
virt-install creates a snapshot as the volume backend
Greetings All, I was running libvirt-0.9.10 on CentOS 6.3 and it was working perfectly until yesterday, when I decided to update to 6.4, which upgraded libvirt-0.9.10 to libvirt-0.10.2. I have a storage pool of type volume group. Upon upgrading to libvirt-0.10.2, the disk image gets created as a snapshot on the volume group, not as a regular volume. Now every time I create a vm using
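One possible workaround, sketched here and not confirmed by this thread, is to pre-create the logical volume with virsh and hand it to virt-install explicitly (pool, volume, and ISO names below are placeholders):

# Create a 20 GB volume in the existing LVM-backed pool
virsh vol-create-as vg_guests guest01 20G

# Point virt-install at that existing volume instead of letting it allocate one
virt-install --name guest01 --ram 2048 --vcpus 2 \
  --disk vol=vg_guests/guest01 \
  --cdrom /var/lib/libvirt/images/install.iso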
2008 May 08
7
100% iowait in domU with no IO tasks.
Hi. I entered one of our domU tonight and see the following problem:

# iostat -k 5
avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.00   0.00     0.00   100.00    0.00   0.00

Device:  tps  kB_read/s  kB_wrtn/s  kB_read  kB_wrtn
sda1    0.00       0.00       0.00        0        0
sda2    0.00       0.00       0.00        0
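When iowait sits at 100% with no device activity, a common first check (a sketch, not something from the original thread) is to look for tasks stuck in uninterruptible sleep:

# List processes in D (uninterruptible) state; these block on I/O and drive iowait up
ps -eo pid,stat,wchan:32,comm | awk '$2 ~ /D/'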
2013 Nov 13
1
SIP Mass exodus
Hi all, I've been seeing some strangeness lately on my 10.2.1 server. It's gotten to the point that a few times each day, I see masses of SIP clients becoming unreachable. They're not all on the same network, and we don't see any calls drop. In a few seconds, they all come back. I don't think it's a connectivity issue because we don't drop calls, and the endpoints
2008 Mar 29
1
Help in troubleshoot cause of high kernel activity
Hi, I have been experiencing a problem on our dedicated server running CentOS 5 and have been unable to track down the cause. About 6 days ago I noticed a spike in load/CPU utilization, which went from a typical 0.2x-0.3x to 3.x. At the same time, average traffic also went up, and so did the log usage. Prior to this, the server was working fine and there had been no changes to the
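To narrow down where the kernel time is going, a minimal sketch (assuming sysstat is installed and collecting; these commands are not from the original post):

# CPU breakdown every 5 seconds; watch %system vs %iowait
vmstat 5
sar -u 5 6

# Run-queue length and load history from today's sysstat log
sar -q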
2009 Sep 14
8
10 Node OCFS2 Cluster - Performance
Hi, I am currently running a 10-node OCFS2 cluster (version 1.3.9-0ubuntu1) on Ubuntu Server 8.04 x86_64. Linux n1 2.6.24-24-server #1 SMP Tue Jul 7 19:39:36 UTC 2009 x86_64 GNU/Linux. The cluster is connected to a 1 TB iSCSI device presented by an IBM 3300 storage system, running over a 1 Gb network. Mounted on all nodes: /dev/sdc1 on /cfs1 type ocfs2
2013 Oct 20
3
kvm cluster w/ c6
In our development lab, I am installing 4 new servers that I want to use for hosting KVM. Each server will have its own direct-attached RAID. I'd love to be able to 'pool' this storage, but over GigE, I probably shouldn't even try. Most of the VMs will be running CentOS 5 and 6; some of the VMs will be PostgreSQL database dev/test servers, others will be
2012 Jul 11
12
99% iowait on one core in 8 core processor
Hi All, We have a Xen server with an 8-core processor. I can see that there is 99% iowait on core 0 only:

02:28:49 AM  CPU  %user  %nice  %sys  %iowait  %irq  %soft  %steal  %idle  intr/s
02:28:54 AM  all   0.00   0.00  0.00    12.65  0.00   0.02    2.24  85.08  1359.88
02:28:54 AM    0   0.00   0.00  0.00    96.21  0.00   0.20    3.19   0.40   847.11
02:28:54 AM
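One common cause is that all of the block and network interrupts are being serviced by CPU 0. A quick check (a sketch under that assumption, not taken from the thread; IRQ 24 is just an example number):

# See which CPU is servicing the disk/network interrupts
cat /proc/interrupts

# Inspect, and optionally spread, the affinity mask of a given IRQ
cat /proc/irq/24/smp_affinity
echo 2 > /proc/irq/24/smp_affinity   # pin IRQ 24 to CPU 1 instead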
2020 Jul 03
2
Slow terminal response Centos 7.7 1908
Hey! I have a strange condition on one of my servers and I don't know where to start looking. I log in to the server via SSH (can't do it any other way) and anything that I type is slow; HTTP sessions time out waiting for screen redraw. So, the server is acting "slow". The server is bare metal, no virtual services, no alarms on the disk RAID. Note: the server was restarted because of a power failure.
2006 Jun 07
2
SAR
Folks, at what point in iowait should I start to worry about having a bottleneck, or is this something that can't be answered with a single integer? According to sar, after my last reboot to turn off hyperthreading as a test, at one point I saw 4.9% iowait, but one minute later it dropped back to 0.01%, and it rarely even gets to 1.0%, at least from what I remember from yesterday.
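For reference, a minimal way to pull the iowait history sar already collects (paths assume the default sysstat cron setup; NN is the day of the month):

# Live sampling: CPU utilization every 60 seconds, 10 samples
sar -u 60 10

# Historical data for a given day
sar -u -f /var/log/sa/saNN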
2016 May 25
1
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
On 2016-05-25 19:13, Kelly Lesperance wrote:
> Hdparm didn't get far:
>
> [root at r1k1 ~] # hdparm -tT /dev/sda
>
> /dev/sda:
> Timing cached reads: Alarm clock
> [root at r1k1 ~] #

Hi Kelly, Try running 'iostat -xdmc 1'. Look for a single drive that has substantially greater await than ~10 msec. If all the drives except one are taking 6-8 msec, but one is very
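If one drive does stand out, a natural follow-up (a sketch, not part of this reply; the device name is a placeholder) is to check its SMART counters for signs of failure:

# Look for reallocated/pending/uncorrectable sectors on the suspect drive
smartctl -a /dev/sdX | egrep -i 'realloc|pending|uncorrect'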
2013 Aug 21
2
High Load Average on POP/IMAP.
Hi, We have a serious issue on our POP/IMAP servers these days. The load average of the servers spikes up to 400-500 (per uptime) for a particular time period, to be specific mostly around noon and in the evening, but it only lasts a few minutes. We have 2 servers running Dovecot 1.1.20 behind a load balancer; we use keepalived (1.1.13) for load balancing. Server
2007 Oct 18
1
Vista performance (uggh)
Issue: Vista reads slowly from a Samba server. This appears to pop up periodically here and elsewhere. My samba.conf file has:

[homes]
... vfs objects = readahead

as suggested elsewhere. Writes are approximately 17-18 MB/s, which is acceptable. Reads are in the 8 MB/s range, which is appallingly slow. Using Linux smbclient and Windows XP clients I can read at 25+ MB/s. I've enabled vfs
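For context, the readahead VFS module takes a couple of tunables in smb.conf; a sketch of what the share section might look like (the byte values here are illustrative defaults, not taken from the post):

[homes]
    vfs objects = readahead
    # start issuing readahead once a client has read this far into a file
    readahead:offset = 0x80000
    # how much data to ask the kernel to read ahead
    readahead:length = 0x80000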
2010 Sep 14
5
IOwaits over NFS
Hello. We have a number of Xen 3.4.2 boxes which have constant iowait at around 10%, with spikes up to 100%, when accessing data over NFS. We have been unable to nail down the issue. Any advice? System info:
release : 2.6.18-194.3.1.el5xen
version : #1 SMP Thu May 13 13:49:53 EDT 2010
machine : x86_64
nr_cpus : 16
nr_nodes
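A starting point for narrowing this down (a sketch, assuming sysstat and nfs-utils are installed; not from the thread):

# Client-side NFS RPC statistics: a rising 'retrans' count points at the network or server
nfsstat -rc

# Correlate with local device wait times while a spike is happening
iostat -x 5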
2020 Jul 03
1
Slow terminal response Centos 7.7 1908
Hi Erick, what was the value of 'si' in top? Best Regards, Strahil Nikolov

On 3 Jul 2020 18:48:30 GMT+03:00, Erick Perez - Quadrian Enterprises <eperez at quadrianweb.com> wrote:
>It was found that the software NIC team created in CentOS was having
>issues due to a failing network cable. The team was going berserk with
>up/down changes.
>
>
>On Fri, Jul 3,
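Two quick checks relevant here (a sketch; the team device name team0 is an assumption, not something stated in the thread):

# Softirq load per CPU; a flapping team device tends to show up as high %soft
mpstat -P ALL 1

# Link state of the team and its member ports
teamdctl team0 state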
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it. We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs:
2x E5-2650
128 GB RAM
12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA
Dual port 10 GB NIC
The drives are configured as one large
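Since this is an md RAID check, the usual knobs to compare between the CentOS 6.5 and 7.2 hosts (a sketch, not taken from the post) are the check progress and the resync speed limits:

# Current check progress and per-array speed
cat /proc/mdstat

# Kernel-wide throttle for resync/check activity (KB/s)
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max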
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one. Here is an iostat example from a host within the same cluster, but without the RAID check running:

[root at r2k1 ~] # iostat -xdmc 1 10
Linux 3.10.0-327.13.1.el7.x86_64 (r2k1)   05/27/16   _x86_64_   (32 CPU)
2010 Dec 09
1
Extremely poor write performance, but read appears to be okay
Hello, I'm writing from the other side of the world from where my systems are, so details are coming in slowly. We have a 6 TB OCFS2 volume across 20 or so nodes, all running OEL5.4 with ocfs2-1.4.4. The system has worked fairly well for the last 6-8 months. Something has happened over the last few weeks which has driven write performance nearly to a halt. I'm not sure how to proceed, and
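A first pass at localizing a regression like this (a sketch, not from the thread; the device name is a placeholder) is to compare raw device behaviour with what the cluster sees:

# Write latency and utilization on the shared LUN, as seen from one node
iostat -xm 5 /dev/sdX

# Which nodes currently have OCFS2 volumes mounted
mounted.ocfs2 -f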
2007 May 25
9
Scaling Asterisk: Dual-Core CPUs not yielding gains at high call volumes
List users, Using Asterisk in an inbound call center environment has led us to push the limits of vertical scaling. In order to treat each caller fairly and to utilize our agents as efficiently as possible, it is desirable to configure each client as a single queue. As far as I know, Asterisk's queues cannot be distributed across servers, so the size of the largest queue we service
2002 Jul 29
0
Partial BDC functionality ...
I'm investigating deploying multiple Samba servers to remote offices which have slow links to a central LAN with NT4 as the PDC, hoping to provide the following:
1) A local Samba file server in each office for faster access to shares, which can be rsync'ed or backed up across the slow links to the central LAN overnight. So far so good :-)
2) Improve logon speed for users in each office by
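For the file-server part, each office box would typically join the NT4 domain as a member server; a minimal smb.conf sketch (domain name, WINS address, and share path are placeholders, not from the post):

[global]
    workgroup = NTDOMAIN
    security = domain
    password server = *
    wins server = 192.168.0.1

[office-share]
    path = /srv/office
    read only = no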
2011 May 07
7
kswapd taking 100% cpu with no swap on system
Hi All, I have a Xeon server with 16 GB RAM and no swap. I am running a Cassandra server on two nodes in a cluster. When there is high load on the server, kswapd0 kicks in, takes 100% CPU, and makes the machine very slow, and we then need to restart our Cassandra server. I have the latest kernel, 2.6.18-238.9.1.el5. Please let me know how I can fix this issue; it's hurting us badly, as this is our production server. Any
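A commonly suggested check for kswapd spinning on a swapless NUMA box (a sketch, not advice given in this thread) is to look at the reclaim-related sysctls:

# On NUMA hardware, zone_reclaim_mode=1 can keep kswapd busy even with plenty of free RAM
sysctl vm.zone_reclaim_mode vm.swappiness

# Disable per-zone reclaim if it is enabled (persist in /etc/sysctl.conf if it helps)
sysctl -w vm.zone_reclaim_mode=0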