Displaying 20 results from an estimated 700 matches similar to: "iostat results for multi path disks"
2008 Oct 05
1
io writes very slow when using vmware server
We are struggling with a strange problem.
When we have some VMware guests running (mostly MS Windows clients),
the IO-write performance on the host becomes very bad.
The guest OSes do not do anything; just having them started,
sitting at the login prompt, is enough to trigger the problem.
The host has a plentiful 4G of RAM, and all guests fit easily into
that space.
The disk system is a
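A symptom like this often points at the host's dirty-page writeback tuning; a minimal sketch of what one might check (the sysctl names are standard, the values are only illustrative):
sysctl vm.dirty_ratio vm.dirty_background_ratio   # current writeback thresholds
sysctl -w vm.dirty_background_ratio=5             # illustrative: start background flushing sooner
sysctl -w vm.dirty_ratio=10                       # illustrative: block writers before too much dirty data piles up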
2011 Oct 09
1
Btrfs High IO-Wait
Hi,
I have high IO-wait on the OSDs (Ceph); the OSDs are running a v3.1-rc9
kernel.
I also see high IO rates, around 500 IO/s, reported via iostat.
Device:  rrqm/s  wrqm/s   r/s   w/s  rkB/s  wkB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm  %util
sda        0.00    0.00  0.00  6.80   0.00  62.40     18.35      0.04   5.29     0.00     5.29   5.29   3.60
sdb
2006 Apr 07
0
How to interpret the output of 'iostat -x /dev/sdb1 20 100' ??
Hi,
I'm a newbie to the tool 'iostat' and I've read its
manual several times, but it doesn't help.
I still get confused by the output of 'iostat'; the
manual seems too abstract, or too high-level, for me.
Let's post the output first:
avg-cpu: %user %nice %sys %idle
5.70 0.00 3.15 91.15
Device: rrqm/s wrqm/s r/s w/s
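As a rough guide to the columns above (this assumes sysstat's iostat; the exact field set varies between versions):
iostat -x /dev/sdb1 20 100   # extended stats, 20-second intervals, 100 samples
#  rrqm/s, wrqm/s : read/write requests merged per second before being issued
#  r/s, w/s       : read/write requests completed per second
#  avgqu-sz       : average number of requests outstanding on the device
#  await          : average ms per request, queue time plus service time
#  %util          : share of elapsed time the device was busy (near 100% = saturated)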
2006 Jan 30
0
Help - iSCSI and SAMBA?
Hi All,
I have a client trying to use a Promise Tech iSCSI array to share 2.8TB
via SAMBA. I have CentOS 4.2 with all updates installed on an IBM
server. The installation and setup was pretty straightforward. The
Promise box is using Gigabit Ethernet, and is the only device on that
net (I think they are using a cross-over cable - I didn't set up the
hardware). We're experiencing
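For reference, discovery and login with the modern open-iscsi tools look roughly like this (CentOS 4.2 shipped an older initiator, so treat this as a sketch; the portal address is made up):
iscsiadm -m discovery -t sendtargets -p 192.168.10.2   # ask the Promise array for its targets
iscsiadm -m node -p 192.168.10.2 --login               # log in to the discovered target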
2010 Aug 20
0
awful i/o performance on xen paravirtualized guest
Hi. I'm testing a CentOS 5.4 Xen PV guest on top of a CentOS 5.4 host.
For some reason, the disk performance from the guest is awful. When I do an
import, the IO is fine for a while, then climbs to 100% and stays there most of
the time.
At first I thought it was because I was using file-backed disks, so I deleted
those and changed to LVM, but the situation didn't improve.
Here's an
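For comparison, the two backing styles in a classic xm guest config look like this (paths and device names are illustrative):
# file-backed virtual disk, served through a loop device on the host
disk = [ 'file:/var/lib/xen/images/guest1.img,xvda,w' ]
# LVM logical volume passed through as a physical device
disk = [ 'phy:/dev/VolGroup00/guest1,xvda,w' ]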
2010 Feb 09
3
disk I/O problems with LSI Logic RAID controller
We're having a weird disk I/O problem on a 5.4 server connected to external SAS storage with an LSI Logic MegaRAID SAS 1078.
The server is used as a Samba file server.
Every time we try to copy a large file to the storage-based file system, the disk utilization see-saws: it climbs to 100%, drops to several seconds of inactivity, then climbs back up to 100%, and so forth.
Here is a snip from the iostat
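One way to test whether the see-saw tracks page-cache flushing is to watch the kernel's dirty counters next to iostat (a sketch):
watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'   # dirty pages building up, then flushed in bursts
iostat -x 1                                               # compare the flush bursts against %util spikes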
2016 Jun 01
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I did some additional testing - I stopped Kafka on the host and kicked off a disk check, and it ran at the expected speed overnight. I started Kafka this morning, and the RAID check's speed immediately dropped down to ~2000K/sec.
I then enabled the write-back cache on the drives (hdparm -W1 /dev/sd*). The RAID check is now running between 100000K/sec and 200000K/sec, and has been for several
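The hdparm calls referenced above, for a single drive (note that write-back cache without a battery-backed controller risks data loss on power failure):
hdparm -W /dev/sda    # query the drive's write-cache state
hdparm -W1 /dev/sda   # enable write-back caching
hdparm -W0 /dev/sda   # disable it again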
2015 Sep 17
1
poor performance with dom0 on centos7
On 2015-09-17 09:29, Pasi Kärkkäinen wrote:
>
> Are you using nfs over UDP or TCP ?
>
TCP, but the network can't be the bottleneck; I have tested it with iperf
between bare metal/domUs and the NFS domU and it was perfectly fast...
>
> I don't think.
>
>
> If you used NFS over UDP, try running it over TCP.
No, I use it over TCP...
>
> What does
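For completeness, forcing the transport on the client and verifying what was actually negotiated (server name and options are illustrative):
mount -t nfs -o proto=tcp,rsize=65536,wsize=65536 nfsserver:/export /mnt/nfs
nfsstat -m   # prints the options each NFS mount is really using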
2005 Oct 26
1
which process & file taking up disk IO?
I'm having load problems on a server. The bottleneck appears to be disk IO:
iostat would show ~100 under %util during peak usage.
I'm running things like Clam AntiVirus, POP, Exim, Apache, and MySQL on the server.
Is there a way to check which process and which file is taking up disk IO, or
to see what is being written to the disk?
I'm very puzzled, as the amount of writes is 10 times
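Tools that post-date this 2005 thread answer the question directly; a sketch:
iotop -oPa    # only processes currently doing I/O, per process, accumulated totals
pidstat -d 5  # per-process read/write kB/s every 5 seconds (from sysstat)
echo 1 > /proc/sys/vm/block_dump   # on older kernels: log block writes and their owners to dmesg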
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one.
Here is an iostat example from a host within the same cluster, but without the RAID check running:
[root@r2k1 ~]# iostat -xdmc 1 10
Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 05/27/16 _x86_64_ (32 CPU)
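(For readers unfamiliar with the flags: -x extended statistics, -d device report, -m throughput in MB/s, -c CPU utilization; '1 10' takes ten one-second samples.)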
2017 Feb 17
0
vm running slowly in powerful host
Hi,
I have a VM which has poor performance.
E.g., top needs seconds to refresh its output on the console; same with netstat.
The guest hosts a MySQL DB with a web frontend, and its response is poor too.
I'm looking for the culprit.
Following top in the guest, I get these hints:
memory is free enough, and the system is not swapping.
The system has 8GB RAM and two CPUs.
CPU 0 is struggling with a
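When a single vCPU is pegged inside a guest, a per-CPU breakdown that includes steal time is the usual next step (mpstat comes with sysstat):
mpstat -P ALL 5   # per-CPU %usr/%sys/%iowait/%steal every 5 seconds; high %steal points at the host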
2009 Apr 16
2
Weird performance problem
Hi,
I'm running a CentOS 4 server and I sometimes face a weird problem.
It is a weird performance problem, and here is how I discovered it.
This server runs OpenVZ virtual machines, and one of them is an Asterisk
server for my personal use. The first symptom of the problem was that
the voice quality became flaky. So I logged on to the server to see what
could be eating CPU cycles, when I
2012 Dec 10
8
home directory server performance issues
I'm looking for advice and considerations on how to optimally set up
and deploy an NFS-based home directory server. In particular: (1) how
to determine hardware requirements, and (2) how best to set up and
configure the server. We actually have a system in place, but the
performance is pretty bad: the users often experience a fair amount
of lag (1-5 seconds) when doing anything on their home
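As a baseline for the configuration half of the question, a conventional /etc/exports entry for home directories looks like this (network and path are illustrative):
/export/home  10.0.0.0/24(rw,sync,no_subtree_check)
exportfs -ra   # re-read /etc/exports after editing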
2009 Aug 26
26
Xen and I/O Intensive Loads
Hi, folks,
I'm attempting to run an e-mail server on Xen. The e-mail system is Novell GroupWise, and it serves about 250 users. The disk volume for the e-mail is on my SAN, and I've attached the FC LUN to my Xen host, then used the "phy:/dev..." method to forward the disk through to the domU. I'm running into an issue with high I/O wait on the box (~250%)
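From dom0, xentop gives a live per-domain view including block-device counters, which helps tell whether the wait originates in the domU or the host:
xentop   # watch the VBD_RD / VBD_WR columns for the GroupWise domU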
2010 Dec 09
1
Extremely poor write performance, but read appears to be okay
Hello,
I'm writing from the other side of the world from where my systems are,
so details are coming in slowly. We have a 6TB OCFS2 volume across 20 or
so nodes, all running OEL5.4 with ocfs2-1.4.4. The system has worked
fairly well for the last 6-8 months. Something has happened over the
last few weeks which has brought write performance nearly to a halt.
I'm not sure how to proceed, and
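A quick way to separate raw device write speed from cache and cluster-lock effects is a direct-I/O write test on a single node (path and size are illustrative):
dd if=/dev/zero of=/mnt/ocfs2/ddtest bs=1M count=512 oflag=direct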
2009 Apr 08
2
maxbw minimum
The minimum is set at 1200Kbits/second. However, in testing, if I set
that for a VNIC, the domU gets no traffic at all (maybe the occasional
packet). Is the minimum too low?
If I set a maximum of 2000Kbits/second, I get this from nicstat
(expecting around 250Kbytes/s total):
Time Int rKB/s wKB/s rPk/s wPk/s rAvs wAvs %Util Sat
04:35:38 xvm15_0 146.6 5.32 102.0
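For reference, Crossbow bandwidth caps are set and inspected with dladm; the VNIC name is taken from the nicstat output above, and the value is illustrative:
dladm set-linkprop -p maxbw=2000K xvm15_0   # cap the VNIC at roughly 2 Mbit/s
dladm show-linkprop -p maxbw xvm15_0        # confirm the effective value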
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it.
We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs:
2x E5-2650
128 GB RAM
12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA
Dual port 10 GB NIC
The drives are configured as one large
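The check itself, and the throttles that govern its speed, are driven roughly like this (md0 is assumed):
cat /proc/mdstat                                          # array state and check/resync progress
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max  # kernel-wide rebuild/check throttles
echo check > /sys/block/md0/md/sync_action                # kick off a verification pass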
2009 Nov 20
3
steadily increasing/high loadavg without i/o wait or cpu utilization
Hi all,
I just installed the CentOS 5.4 Xen kernel on an Intel Core i5 machine as dom0.
After some hours of syncing a RAID10 array (8 SATA disks) I noticed a
steadily increasing loadavg. Without appreciable I/O wait or CPU
utilization, I think the loadavg on this system should be much lower. If this
loadavg is normal, I would be grateful if someone could explain why. The
screenshots below show that there is
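Load average counts uninterruptible (D-state) tasks as well as runnable ones, so a climbing load with idle CPUs usually means tasks stuck in disk wait; a sketch for listing them:
ps -eo pid,state,wchan:32,cmd | awk '$2 == "D"'   # tasks blocked in the kernel, and where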
2017 Apr 10
0
lvm cache + qemu-kvm stops working after about 20GB of writes
Adding Paolo and Miroslav.
On Sat, Apr 8, 2017 at 4:49 PM, Richard Landsman - Rimote <richard@rimote.nl> wrote:
> Hello,
>
> I would really appreciate some help/guidance with this problem. First of
> all, sorry for the long message. I would file a bug, but I do not know if it
> is my fault, dm-cache, qemu, or (probably) a combination of both. And I can
> imagine some of
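For context, the lvmcache setup this thread concerns is typically built like this (VG, LV, and device names are illustrative):
lvcreate --type cache-pool -L 20G -n cpool vg0 /dev/nvme0n1   # cache pool on the fast device
lvconvert --type cache --cachepool vg0/cpool vg0/slowlv       # attach the pool to the slow LV
lvs -a vg0                                                    # shows the cached LV and its hidden sub-LVs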