Displaying 20 results from an estimated 3000 matches similar to: "High load average on dom0"
2010 Mar 18
0
Extremely high iowait
Hello,
We have a 5 node OCFS2 volume backed by a Sun (Oracle) StorageTek 2540. Each system is running OEL5.4 and OCFS2 1.4.2, using device-mapper-multipath to load balance over 2 active paths. We are using the default multipath configuration for our SAN. We are observing iowait time between 60% - 90%, sustaining at over 80% as I'm writing this, driving load averages to >25 during an rsync
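For reference, a minimal /etc/multipath.conf sketch for round-robining IO across both active paths; the option values here are illustrative assumptions, not the poster's actual configuration:

  defaults {
      # send IO down every path in the active group in turn
      path_grouping_policy  multibus
      # IOs per path before switching; lower values interleave more aggressively
      rr_min_io             100
  }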
2009 Aug 06
0
High iowait in dom0 after creating a new guest, with file-based domU disk on GFS
Hopefully I summed up the gist of the problem in the subject line. ;)
I have a GFS cluster with ten Xen 3.0 dom0s sharing an iSCSI LUN. There are
on average 8 domUs running on each Xen server. The dom0 on each server is
hard-coded to 2 CPUs and 2GB of RAM with no ballooning, and has 2GB of
partition-based swap.
When creating a new domU on any of the Xen servers, just after the
completion of
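One common way to keep domU creation from flooding dom0 with writes (a sketch, not from this thread; the image path is hypothetical) is to allocate the disk image sparse instead of zero-filling it:

  # create a 10GB sparse image: the seek extends the file without writing data
  dd if=/dev/zero of=/gfs/images/newdomu.img bs=1M count=0 seek=10240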
2009 Nov 17
2
High load averages with latest kernel and USB drives?
I'm having a server report a high load average when backing up Postgres
database files to an external USB drive. This is driving my load balancers all
out of kilter and causing a large volume of network monitor alerts.
I have a 1TB USB drive plugged into a USB2 port that I use to back up the
production drives (which are SCSI). It's working fine, but while doing backups
(hourly) the
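A common mitigation, assuming the hourly job is a plain copy (the paths below are hypothetical), is to run the backup at idle IO priority so it yields to production traffic:

  # -c3 = idle class: the backup only gets disk time when nothing else wants it
  ionice -c3 nice -n 19 rsync -a /var/lib/pgsql/backups/ /mnt/usb-backup/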
2013 Apr 05
0
DRBD + Remus High IO load frozen
Dear all,
I have installed DRBD 8.3.11, compiled from source. However, the backing
block device will freeze if there is high IO load. I use Remus to support high
availability, and checkpointing is controlled by Remus every 400ms.
If I check iostat I see idle CPU dropping sharply with each
checkpoint, and when it reaches 0% idle the local backing device
will freeze and damage the
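A first check while the stall is happening (standard DRBD 8.x interface, nothing Remus-specific): watch whether the replication state or the send/receive counters stop moving:

  # cs: = connection state, ds: = disk states, ns:/nr: = network send/receive counters
  watch -n1 cat /proc/drbd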
2016 Jun 01
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I did some additional testing - I stopped Kafka on the host, and kicked off a disk check, and it ran at the expected speed overnight. I started Kafka this morning, and the RAID check's speed immediately dropped down to ~2000K/sec.
I then enabled the write-back cache on the drives (hdparm -W1 /dev/sd*). The RAID check is now running between 100000K/sec and 200000K/sec, and has been for several
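Worth noting: hdparm write-cache settings are lost on a power cycle, so a fix like this needs to be reapplied at boot. A sketch, with the drive list an assumption:

  # query the current write-cache state of one drive
  hdparm -W /dev/sda
  # re-enable for all twelve drives, e.g. from /etc/rc.d/rc.local
  for d in /dev/sd[a-l]; do hdparm -W1 "$d"; done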
2004 Jun 26
1
OCFS Performance on a Hitachi SAN
I've been reading this group for a while and I've noticed a variety of comments regarding running OCFS on top of path-management packages such as EMC's PowerPath, and it brought to mind a problem I've been having.
I'm currently testing a six-node cluster connected to a Hitachi 9570V SAN storage array, using OCFS 1.0.12. I have six LUNs presented to the hosts using HDLM,
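One generic way to check whether a path-management layer is really balancing (not HDLM-specific): watch the underlying sd devices and see whether one path carries all the traffic:

  # with working load balancing, paired paths should show similar r/s and w/s
  iostat -x 2 5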
2007 Oct 18
1
Vista performance (uggh)
Issue: Vista reads slowly from a samba server. This appears to pop up
periodically here and elsewhere.
My samba.conf file has:
[homes]
...
vfs objects = readahead
As suggested elsewhere.
Writes are approximately 17-18MB/s, which is acceptable. Reads are in
the 8MB/s range, which is appallingly slow. Using Linux smbclient and
Windows XP clients I can read at 25+MB/s. I've enabled vfs
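For context, the relevant smb.conf fragment usually looks something like the sketch below; only 'vfs objects = readahead' is confirmed by the post, and the tuning parameter is an assumption:

  [homes]
      vfs objects = readahead
      # bytes to pre-read once a sequential access pattern is detected
      readahead:length = 1048576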
2020 Jul 03
0
Slow terminal response Centos 7.7 1908
It was found that the software NIC team created in CentOS was having
issues due to a failing network cable. The team was going berserk with
up/down changes.
On Fri, Jul 3, 2020 at 10:12 AM Erick Perez - Quadrian Enterprises <
eperez at quadrianweb.com> wrote:
> Hey!
> I have a strange condition on one of the servers, and I don't know where to
> start looking.
> I log in to the
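For anyone hitting the same symptom, a flapping team is visible from userspace; a sketch assuming the team device is named team0:

  # per-port link state and failure counters
  teamdctl team0 state
  # link transitions also land in the kernel log
  dmesg | grep -i team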
2020 Jul 03
1
Slow terminal response Centos 7.7 1908
Hi Erick,
what was the value of 'si' in top?
Best Regards,
Strahil Nikolov
On 3 July 2020 at 18:48:30 GMT+03:00, Erick Perez - Quadrian Enterprises <eperez at quadrianweb.com> wrote:
>It was found that the software NIC team created in CentOS was having
>issues due to a failing network cable. The team was going berserk with
>up/down changes.
>
>
>On Fri, Jul 3,
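For reference, 'si' in top is softirq time, which a flapping NIC team can drive up. A per-CPU view of the same counter, using standard sysstat tooling:

  # %soft is the per-CPU softirq share that top aggregates as 'si'
  mpstat -P ALL 1 5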
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one.
Here is an iostat example from a host within the same cluster, but without the RAID check running:
[root at r2k1 ~] # iostat -xdmc 1 10
Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 05/27/16 _x86_64_ (32 CPU)
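For readers unfamiliar with the flags, the invocation above breaks down as follows; the columns to watch for saturation are avgqu-sz, await, and %util:

  # -x extended stats, -d per-device, -m MB/s, -c CPU summary; 1-second interval, 10 samples
  iostat -xdmc 1 10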
2020 Jul 03
2
Slow terminal response Centos 7.7 1908
Hey!
I have a strange condition on one of the servers, and I don't know where to
start looking.
I log in to the server via SSH (can't do it any other way) and anything that I
type is slow.
HTTP sessions time out waiting for screen redraw. So, the server is acting
"slow".
The server is bare metal, with no virtual services.
There are no alarms in the disk RAID.
Note: the server was restarted because of a power failure.
2008 May 08
7
100% iowait in domU with no IO tasks.
Hi.
I logged into one of our domUs tonight and saw the following problem:
# iostat -k 5
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.00  100.00    0.00    0.00
Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda1              0.00         0.00         0.00          0          0
sda2              0.00         0.00         0.00          0
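With zero tps on every device, the iowait is probably being charged to stuck requests rather than real traffic; a generic first check (not from this thread) is to look for tasks in uninterruptible sleep:

  # D-state tasks are the ones accumulating iowait; wchan hints where they are stuck
  ps -eo pid,stat,wchan,cmd | awk '$2 ~ /^D/'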
2008 Oct 05
1
io writes very slow when using vmware server
We are struggling with a strange problem.
When we have some VMware guests running (mostly MS Windows clients),
the IO write performance on the host becomes very bad.
The guest OSes do not do anything; just having them started,
sitting at the login prompt, is enough to trigger the problem.
The host has a plentiful 4G of RAM, and all guests fit easily into
that space.
The disk system is a
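Two .vmx tunables are commonly suggested for exactly this symptom on VMware Server; treat them as assumptions to test, applied per guest:

  # stop backing guest RAM with a host file that gets flushed constantly
  mainMem.useNamedFile = "FALSE"
  # disable periodic memory trimming, another source of background writes
  MemTrimRate = "0"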
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it.
We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs:
2x E5-2650
128 GB RAM
12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA
Dual port 10 GB NIC
The drives are configured as one large
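On the MD side, the check rate is bounded by two sysctls; a sketch of inspecting them and raising the floor (the value is illustrative):

  # current bounds, in KB/s per device
  sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
  # guarantee the check at least ~50MB/s even under competing IO
  sysctl -w dev.raid.speed_limit_min=50000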
2011 Oct 27
2
ps locking up
I have a client running a CentOS 6.0 machine with cPanel. The machine is
fully updated with both cPanel (RELEASE) and the OS.
At first, I noticed that after cPanel's dcpumon ran (even once),
applications that depend on ps lock up and iowait jumps to around 50%.
Load averages start out around 20 when this happens and slowly crawl up
into the hundreds. Aside from not being able to run commands
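When ps itself wedges, tracing a fresh invocation usually shows which /proc entry it blocks on; a generic diagnostic sketch:

  # interrupt with Ctrl-C once it stalls, then look at the last open()/read()
  strace -e trace=open,read -o /tmp/ps.trace ps aux
  tail /tmp/ps.trace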
2018 Jan 09
2
Samba 4.7.x IOWAIT / load average
Hi,
I want to report a strange problem with Samba 4.7.x and IO wait / CPU usage.
I am using Linux (Arch Linux, kernel 4.14.2) and encounter a very strange bug with only Samba 4.7.x (from 4.7.0 to the latest, 4.7.4).
With these versions, CPU usage and IO wait are very high with a single CIFS client at a very low transfer rate (about 1MB/s).
I rolled back to 4.6.9 through 4.6.12 and have no problem with those versions.
2018 Jan 10
2
Samba 4.7.x IOWAIT / load average
Hi Jeremy,
What do you need exactly?
Thanks
--
Christophe Yayon
> On 10 Jan 2018, at 01:38, Jeremy Allison <jra at samba.org> wrote:
>
>> On Tue, Jan 09, 2018 at 03:27:17PM +0100, Christophe Yayon via samba wrote:
>> Hi,
>>
>> I want to report a strange problem with Samba 4.7.x and IO wait / CPU usage.
>>
>> I am using Linux (Arch Linux, kernel
2018 Jan 10
0
Samba 4.7.x IOWAIT / load average
Hi,
I found the problem!
It was the parameter "strict sync", which defaults to "yes" on 4.7.x and "no" on 4.6.x and earlier.
When I force "strict sync = no" in my config file, there is no more abnormal load average.
Is it normal for this to default to "yes" on 4.7.x?
Thanks
--
Christophe Yayon
cyayon at nbux.org
On Wed, Jan 10, 2018, at 07:11,
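For anyone else bitten by this: Samba 4.7 changed the default of 'strict sync' from 'no' to 'yes', so client-requested syncs now trigger real fsyncs. The workaround from this thread, as an smb.conf fragment:

  [global]
      # restore pre-4.7 behaviour: do not honour client sync requests with a real fsync
      strict sync = no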
2004 Feb 16
0
nmbd load problem
Hi,
I'm running Samba 3.0 on a Solaris 9 server.
If you look at the output of top you'll see that the nmbd process is
killing this machine:
I've restarted Samba numerous times; each time I stop it the load goes down
immediately, and when I restart it the load goes back up and nmbd starts to take
up all the processor time.
load averages: 21.18, 19.61, 24.65   11:04:20
210 processes: 187
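On Solaris 9 the quickest way to see what the spinning nmbd is doing is to attach truss to it; a sketch assuming the oldest nmbd process is the parent daemon:

  # system-call trace of the running daemon; a tight loop shows up immediately
  truss -p `pgrep -o nmbd`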
2006 Jun 07
2
SAR
Folks,
At what point in iowait should I start to worry about having a
bottleneck, or is this something that can't be answered with a single
number? According to sar, after my last reboot to turn off
hyperthreading as a test, I saw 4.9% iowait at one point, but one
minute later it dropped back to 0.01%, and it rarely even gets to 1.0%, at
least from what I remember of yesterday.
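There is no single magic number; sustained iowait is what matters, not momentary spikes. A sketch for sampling over a window rather than eyeballing single readings:

  # ten one-minute samples; worry when %iowait stays high across the window
  sar -u 60 10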