Displaying 20 results from an estimated 31 matches for "rrqm".
2009 Sep 14
8
10 Node OCFS2 Cluster - Performance
Hi,
I am currently running a 10 Node OCFS2 Cluster (version 1.3.9-0ubuntu1)
on Ubuntu Server 8.04 x86_64.
Linux n1 2.6.24-24-server #1 SMP Tue Jul 7 19:39:36 UTC 2009 x86_64
GNU/Linux
The cluster is connected to a 1 TB iSCSI device presented by an IBM
3300 Storage System, running over a 1 Gbit network.
Mounted on all nodes: /dev/sdc1 on /cfs1 type ocfs2
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
...t within the same cluster, but without the RAID check running:
[root@r2k1 ~]# iostat -xdmc 1 10
Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 05/27/16 _x86_64_ (32 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
8.87 0.02 1.28 0.21 0.00 89.62
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd 0.02 0.55 0.15 27.06 0.03 11.40 859.89 1.02 37.40 36.13 37.41 6.86 18.65
sdf 0.02 0.48 0.15 26.99 0.03 11.40 862.17...
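As a sanity check on those numbers (assuming avgrq-sz is reported in 512-byte sectors, as in sysstat): the write size implied by wMB/s and w/s should match avgrq-sz, and for the sdd row above it does:

# Cross-check the sdd row: average write size computed two ways.
awk 'BEGIN {
  wMB=11.40; w=27.06; avgrq=859.89
  printf "MB per write   (wMB/s / w/s): %.3f\n", wMB/w        # ~0.421 MB
  printf "MB per request (avgrq/2048):  %.3f\n", avgrq/2048   # ~0.420 MB
}'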
2016 May 25
1
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
On 2016-05-25 19:13, Kelly Lesperance wrote:
> Hdparm didn't get far:
>
> [root@r1k1 ~]# hdparm -tT /dev/sda
>
> /dev/sda:
> Timing cached reads: Alarm clock
> [root@r1k1 ~]#
Hi Kelly,
Try running 'iostat -xdmc 1'. Look for a single drive that has
substantially greater await than ~10msec. If all the drives
except one are taking 6-8msec, but one is very
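A sketch of how that scan could be automated (the await column number assumes the el7-era sysstat output quoted elsewhere in this thread; positions shift between versions):

# Flag any sd* device whose await exceeds 10 ms over a 10-sample run.
# $10 is the await column in sysstat ~10.x 'iostat -xdmc' output.
iostat -xdmc 1 10 | awk '$1 ~ /^sd/ && $10+0 > 10 {print $1, "await:", $10, "ms"}'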
2016 Jun 01
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
...ithout the RAID check running:
>
>[root@r2k1 ~]# iostat -xdmc 1 10
>Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 05/27/16 _x86_64_ (32 CPU)
>
>avg-cpu: %user %nice %system %iowait %steal %idle
> 8.87 0.02 1.28 0.21 0.00 89.62
>
>Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
>sdd 0.02 0.55 0.15 27.06 0.03 11.40 859.89 1.02 37.40 36.13 37.41 6.86 18.65
>sdf 0.02 0.48 0.15 26.99 0.03 11.40...
2010 Dec 09
1
Extremely poor write performance, but read appears to be okay
...te and associated iostat -xm 5 output. Previously I
was obtaining > 90MB/s:
$ dd if=/dev/zero of=/home/testdump count=1000 bs=1024k
...and associated iostat output:
avg-cpu: %user %nice %system %iowait %steal %idle
0.10 0.00 0.43 12.25 0.00 87.22
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 1.80 0.00 8.40 0.00 0.04 9.71 0.01 0.64 0.05 0.04
sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda2...
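One caveat worth noting with this kind of test: without a flush, dd can report page-cache speed rather than disk speed. A sketch of a more conservative run (same test file as above):

# conv=fdatasync makes dd flush to disk before printing its rate,
# so the figure reflects the device rather than the page cache.
dd if=/dev/zero of=/home/testdump bs=1M count=1000 conv=fdatasync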
2006 Apr 07
0
How to interpret the output of 'iostat -x /dev/sdb1 20 100' ??
...he manual for iostat several times, but it doesn't help.
I still get confused by the output of 'iostat'; the
manual seems too abstract, or too high-level, for me.
Let's post the output first:
avg-cpu: %user %nice %sys %idle
5.70 0.00 3.15 91.15
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
/dev/sdb1 0.60 4.70 12.60 1.50 105.60 49.60 52.80 24.80 11.01 1.54 10.92 8.65 12.20
I'll ask about the rrqm/s, r/s, rsec/s, avgrq-sz,
avgqu-sz, await, svctm and %util in t...
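Several of those fields can be derived from the others, which helps demystify them; a sketch using the /dev/sdb1 row above (avgrq-sz is total sectors moved per request, and %util is the fraction of wall-clock time spent servicing requests, with svctm in ms):

awk 'BEGIN {
  r=12.60; w=1.50; rsec=105.60; wsec=49.60; svctm=8.65
  printf "avgrq-sz = %.2f (reported: 11.01)\n", (rsec+wsec)/(r+w)
  printf "%%util    = %.2f (reported: 12.20)\n", (r+w)*svctm/10
}'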
2008 Oct 05
1
io writes very slow when using vmware server
...lized, and the processor is
spending most of its time (99-100%!) in iowait.
Here is some iostat output from an effectively "idle" machine
running 3 MS Windows guests; all the guest OSes are just sitting there
doing almost nothing, without even an antivirus program installed:
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 33.80 0.60 76.80 2.40 443.20 11.51 110.46 1269.85 12.93 100.04
avg-cpu: %user %nice %system %iowait %steal %idle
0.10 0.00 1.50 75.98 0.00 22.42...
2014 Jun 20
1
iostat results for multi path disks
...iostat on a server that has a LUN from a SAN with multiple paths. I am specifying a device list that just grabs the bits related to the multipath device:
$ iostat -dxkt 1 2 sdf sdg sdh sdi dm-7 dm-8 dm-9
Linux 2.6.18-371.8.1.el5 (db21b.den.sans.org) 06/20/2014
Time: 02:30:23 PM
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sdf 0.66 52.32 3.57 34.54 188.38 347.52 28.13 0.14 3.62 0.87 3.31
sdg 0.66 52.29 3.57 34.56 189.79 347.48 28.18 0.14 3.72 0.87 3.32
sdh...
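If the device list ever needs rebuilding, the kernel exposes which sdX paths sit beneath each dm device; a sketch reusing the dm-7 name from the command above:

ls /sys/block/dm-7/slaves/   # the sdX paths backing this dm device
multipath -ll                # multipathd's view of every map and its paths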
2011 Oct 09
1
Btrfs High IO-Wait
Hi,
I have high IO-wait on the OSDs (ceph); the OSDs are running a v3.1-rc9
kernel.
I also see high IO rates, around 500 IO/s, reported via iostat.
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 0.00 0.00 6.80 0.00 62.40 18.35 0.04 5.29 0.00 5.29 5.29 3.60
sdb 0.00 249.80 0.40 669.60 1.60 4118.40 12.30 8...
2009 Aug 26
26
Xen and I/O Intensive Loads
Hi, folks,
I'm attempting to run an e-mail server on Xen. The e-mail system is Novell GroupWise, and it serves about 250 users. The disk volume for the e-mail is on my SAN, and I've attached the FC LUN to my Xen host, then used the "phy:/dev..." method to forward the disk through to the domU. I'm running into an issue with high I/O wait on the box (~250%)
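For reference, the "phy:" forwarding the post describes is a single disk line in the domU config; a minimal sketch (the LUN path and target device below are placeholders, not taken from the post):

# /etc/xen/<domU>.cfg -- placeholder path and target name
disk = [ 'phy:/dev/mapper/groupwise-lun,xvdb,w' ]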
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it.
We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs:
2x E5-2650
128 GB RAM
12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA
Dual port 10 GB NIC
The drives are configured as one large
2006 Jan 30
0
Help - iSCSI and SAMBA?
...it? Thanks.
-Scott
------------------------------------------------------------------------
This was sent from one of the techs working the problem:
I think I located the problem. Collecting iostat data during the last
lockup yielded the following information.
Time: 03:20:38 PM
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s
avgrq-sz avgqu-sz await svctm %util
sda 0.00 0.00 3.09 0.00 24.74 0.00 12.37 0.00
8.00 0.00 0.00 0.00 0.00
sdb 0.00 0.00 85.57 0.00 684.54 0.00 342.27 0.00
8.00 1.03 12.06 12...
2007 Aug 22
5
Slow concurrent actions on the same LVM logical volume
Hi all!
I have problems with concurrent filesystem actions on an ocfs2
filesystem which is mounted by 2 nodes. OS=RH5ES and OCFS2=1.2.6
For example: if I have an LV called testlv mounted on /mnt on both
servers, and I run "dd if=/dev/zero of=/mnt/test.a bs=1024
count=1000000" on server 1 while running du -hs /mnt/test.a at the
same time, it takes about 5 seconds for du -hs to execute:
270M
2010 Aug 20
0
awful i/o performance on xen paravirtualized guest
...import, the I/O is fine for a while, then climbs to 100% and stays there most of
the time.
At first I thought it was because I was using file-backed disks, so I deleted
those and changed to LVM, but the situation didn't improve.
Here's an iostat output from within the DomU:
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
xvda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
xvda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
xvda2...
2010 Feb 09
3
disk I/O problems with LSI Logic RAID controller
...s 1078.
The server is used as a samba file server.
Every time we try to copy a large file to the storage-based file system, disk utilization see-saws: it climbs to 100%, drops to several seconds of inactivity, then climbs back to 100%, and so forth.
Here is a snip from iostat -kx 1:
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sdb1 0.00 133811.00 0.00 1889.00 0.00 513660.00 543.84 126.24 65.00 0.47 89.40
sdb1 0.00 138.61 0.00 109.90 0.00 29845.54 543.14 2.54 54.32 0.37 4.06
sdb1...
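This row is a good illustration of the merge counters the search term refers to; a quick check on the first sdb1 line above:

# wrqm/s counts queued writes merged into already-queued requests; here
# ~72 queued writes collapse into each issued request, hence the huge avgrq-sz.
awk 'BEGIN {
  wrqm=133811.00; w=1889.00; wkB=513660.00; avgrq=543.84
  printf "queued writes per issued request: %.1f\n", (wrqm+w)/w
  printf "KB per request: %.1f (avgrq-sz/2 = %.1f)\n", wkB/w, avgrq/2
}'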
2015 Sep 17
1
poor performance with dom0 on centos7
...0,0 st
KiB Mem : 1013016 total, 19548 free, 591456 used, 402012 buff/cache
KiB Swap: 1048572 total, 990776 free, 57796 used. 353468 avail Mem
iostat:
avg-cpu: %user %nice %system %iowait %steal %idle
0,00 0,00 0,00 50,00 0,00 50,00
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
xvda 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00
xvdb 0,00 0,00 0,00 0,00 0,00 0,00 0,0...
2009 Nov 20
3
steadily increasing/high loadavg without i/o wait or cpu utilization
...st
Mem: 7805952k total, 809612k used, 6996340k free, 112092k buffers
Swap: 0k total, 0k used, 0k free, 341304k cached
[root@vserver ~]# iostat -d -x sda sdb sdc sdd sde sdf sdg sdh
Linux 2.6.18-164.6.1.el5xen (vserver.zimmermann.com) 20.11.2009
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda 1364,57 0,66 1820,12 0,89 25477,44 12,37 14,00 1,11 0,61 0,41 75,10
sdb 1167,12 0,68 2017,45 0,89 25476,53 12,47 12,63 1,16 0,57 0,39 79,49
sdc...
2010 Jan 05
4
Software RAID1 Disk I/O
...o disk I/O though. I have no idea what's going on.
Here is a bit of iostat -x output.
[root@server ~]# iostat -x
Linux 2.6.18-164.9.1.el5 (server.x.us) 01/05/2010
avg-cpu: %user %nice %system %iowait %steal %idle
0.13 0.10 0.03 38.23 0.00 61.51
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda 0.18 0.33 0.04 0.20 27.77 4.22 132.30 3.70 15239.91 2560.21 61.91
sda1 0.00 0.00 0.00 0.00 0.34 0.00 654.72 0.01 12358.84 3704.24 0.19
sda2...
2017 Apr 08
2
lvm cache + qemu-kvm stops working after about 20GB of writes
...t
40,000 IOPS on 4k random read/write)! But then after a while (and a lot of
random I/O, ca. 10-20 GB) it effectively turns into a writethrough cache,
although there's plenty of space left on the cachedlv.
When working as expected on the KVM host, all writes go to the SSDs:
iostat -x -m 2
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 324.50 0.00 22.00 0.00 14.94 1390.57 1.90 86.39 0.00 86.39 5.32 11.70
sdb 0.00 324.50 0.00 22.00 0.00 14.94 1390.57...
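One way to confirm which mode the cache has actually dropped into is to query device-mapper directly; a sketch, with a placeholder VG name (the post only names the cachedlv LV):

# The dm-cache status line includes dirty-block counts and the active
# feature (writethrough vs. writeback) for the cache target.
dmsetup status vg0-cachedlv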
2009 Apr 16
2
Weird performance problem
...76 113 1 2 93 4
0 0 592 189844 381732 537944 0 0 0 0 33 69 1 1 98 0
0 0 592 189844 381732 537944 0 0 0 0 24 32 0 0 100 0
0 0 592 190340 381732 537944 0 0 0 0 28 42 0 0 100 0
iostat -x 1 (excerpt)
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 171.00 0.00 124.00 0.00 2368.00 0.00 1184.00 19.10 0.14 1.13 0.02 0.20
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00...