similar to: Linux IO Performance Monitor

Displaying 20 results from an estimated 1200 matches similar to: "Linux IO Performance Monitor"

2016 May 27 (2 replies)
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one. Here is an iostat example from a host within the same cluster, but without the RAID check running: [root at r2k1 ~] # iostat -xdmc 1 10 Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 05/27/16 _x86_64_ (32 CPU)
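For reference, the invocation quoted above breaks down like this (flags per iostat(1)):

  iostat -xdmc 1 10   # -x: extended per-device statistics
                      # -d: device utilization report
                      # -m: throughput in MB/s instead of blocks
                      # -c: CPU utilization report
                      # sample every 1 second, print 10 reports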
2008 Oct 05 (1 reply)
io writes very slow when using vmware server
We are struggling with a strange problem. When we have some VMware clients running (mostly MS Windows clients), the IO-write performance on the host becomes very bad. The guest OSes do not do anything; just having them started, sitting at the login prompt, is enough to trigger the problem. The host has 4G of RAM, plenty of space, and all clients fit easily into it. The disksystem is a
2010 Dec 09 (1 reply)
Extremely poor write performance, but read appears to be okay
Hello, I'm writing from the other side of the world from where my systems are, so details are coming in slowly. We have a 6TB OCFS2 volume across 20 or so nodes, all running OEL 5.4 with ocfs2-1.4.4. The system has worked fairly well for the last 6-8 months. Something has happened over the last few weeks which has driven write performance nearly to a halt. I'm not sure how to proceed, and
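A blunt first measurement for "writes have slowed to a halt" on a shared volume is a timed streaming write; a minimal sketch, assuming the OCFS2 volume is mounted somewhere like /ocfs2 (the path is a placeholder):

  # write 1 GB to the shared volume; conv=fsync forces the data to disk
  # before dd reports a rate, so cache effects don't flatter the number
  dd if=/dev/zero of=/ocfs2/ddtest bs=1M count=1024 conv=fsync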
2009 Sep 14 (8 replies)
10 Node OCFS2 Cluster - Performance
Hi, I am currently running a 10-node OCFS2 cluster (version 1.3.9-0ubuntu1) on Ubuntu Server 8.04 x86_64. Linux n1 2.6.24-24-server #1 SMP Tue Jul 7 19:39:36 UTC 2009 x86_64 GNU/Linux The cluster is connected to a 1 TB iSCSI device presented by an IBM 3300 storage system, running over a 1 Gb network. Mounted on all nodes: /dev/sdc1 on /cfs1 type ocfs2
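For a cluster like this, ocfs2-tools can confirm which nodes actually have the volume mounted; a sketch using the device named above:

  mounted.ocfs2 -f /dev/sdc1   # full-detect mode: lists the nodes
                               # currently mounting the device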
2011 Oct 09 (1 reply)
Btrfs High IO-Wait
Hi, I have high IO-wait on the OSDs (Ceph); the OSDs are running a v3.1-rc9 kernel. I also see high IO rates, around 500 IO/s reported via iostat.
Device: rrqm/s wrqm/s r/s  w/s  rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda     0.00   0.00   0.00 6.80 0.00  62.40 18.35    0.04     5.29  0.00    5.29    5.29  3.60
sdb
2016 May 25 (1 reply)
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
On 2016-05-25 19:13, Kelly Lesperance wrote: > Hdparm didn't get far: > > [root at r1k1 ~] # hdparm -tT /dev/sda > > /dev/sda: > Timing cached reads: Alarm clock > [root at r1k1 ~] # Hi Kelly, Try running 'iostat -xdmc 1'. Look for a single drive that has substantially greater await than ~10msec. If all the drives except one are taking 6-8msec, but one is very
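A sketch of the check being suggested, assuming the stock sysstat column layout where await is the 10th field (adjust the field number for your sysstat version):

  # print device name and await (ms) for each sdX line;
  # one drive sitting far above ~10 ms while its peers show 6-8 ms
  # points at a failing or misbehaving disk
  iostat -xdmc 1 | awk '/^sd/ {print $1, $10}'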
2016 Jun 01 (0 replies)
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I did some additional testing - I stopped Kafka on the host and kicked off a disk check, and it ran at the expected speed overnight. I started Kafka this morning, and the RAID check's speed immediately dropped to ~2000K/sec. I then enabled the write-back cache on the drives (hdparm -W1 /dev/sd*). The RAID check is now running between 100000K/sec and 200000K/sec, and has been for several
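The commands involved, for anyone reproducing this (note that hdparm -W toggles the drive's volatile write-back cache, so enabling it trades data safety on power loss for speed):

  hdparm -W1 /dev/sda    # enable write-back cache on one drive
  hdparm -W0 /dev/sda    # disable it again
  cat /proc/mdstat       # watch the md check/resync speed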
2010 Feb 09 (3 replies)
disk I/O problems with LSI Logic RAID controller
We're having a weird disk I/O problem on a 5.4 server connected to external SAS storage with an LSI Logic MegaRAID SAS 1078. The server is used as a Samba file server. Every time we try to copy a large file to the storage-based file system, disk utilization see-saws: it climbs to 100%, drops to several seconds of inactivity, then climbs back to 100%, and so forth. Here is a snip from the iostat
2016 May 25 (6 replies)
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it. We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs:
2x E5-2650
128 GB RAM
12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA
Dual port 10 GB NIC
The drives are configured as one large
2015 Sep 17 (1 reply)
poor performance with dom0 on centos7
On 2015-09-17 09:29, Pasi Kärkkäinen wrote: > > Are you using NFS over UDP or TCP? > TCP, but the network can't be the bottleneck; I have tested it with iperf between bare metal/domUs and the NFS domU and it was perfectly fast... > > I don't think so. > > > If you used NFS over UDP, try running it over TCP. No, I use it over TCP... > > What does
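A hedged sketch of the two checks discussed, with placeholder host and export names (mount options per nfs(5)):

  # force TCP explicitly when mounting the export
  mount -t nfs -o tcp,vers=3 nfsserver:/export /mnt
  # rule out the network: measure raw throughput between the hosts
  iperf -s                     # on the NFS server
  iperf -c nfsserver -t 10     # on the client, 10-second test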
2023 Oct 08 (2 replies)
Could not convert SID S-0-0, error is NT_STATUS_NONE_MAPPED
Hi all, I know this is kind of an old thread, but I've got some new "developments". And some questions too. Let's see... So, like I said before, my file server is clogging my logs with: ../../source3/winbindd/winbindd_getgroups.c:259(winbindd_getgroups_recv) Could not convert sid S-0-0: NT_STATUS_NONE_MAPPED every 2 seconds. Now, I'm using netdata
2007 Aug 22 (5 replies)
Slow concurrent actions on the same LVM logical volume
Hi to all! I have problems with concurrent filesystem actions on an OCFS2 filesystem which is mounted by 2 nodes. OS=RH5ES and OCFS2=1.2.6. For example: if I have an LV called testlv which is mounted on /mnt on both servers, and I do a "dd if=/dev/zero of=/mnt/test.a bs=1024 count=1000000" on server 1 and at the same time a du -hs /mnt/test.a, it takes about 5 seconds for du -hs to execute: 270M
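The reproduction described, as runnable commands (paths are the poster's; timing the second command makes the multi-second stall visible):

  # on server 1: stream ~1 GB onto the shared OCFS2 volume
  dd if=/dev/zero of=/mnt/test.a bs=1024 count=1000000
  # on server 2, while the dd is running:
  time du -hs /mnt/test.a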
2010 Jan 05 (4 replies)
Software RAID1 Disk I/O
I just installed the CentOS 5.4 64-bit release on a 1.9 GHz CPU with 8 GB of RAM. It has 2 Western Digital 1.5 TB SATA2 drives in RAID1. [root at server ~]# df -h
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/md2    1.4T  1.4G  1.3T   1%    /
/dev/md0    99M   19M   76M    20%   /boot
tmpfs       4.0G  0     4.0G   0%    /dev/shm
[root at server ~]# It's barebones
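Standard first checks for an I/O problem on a software RAID1 like this one, using only stock mdadm and procfs:

  cat /proc/mdstat           # array state and any resync in progress
  mdadm --detail /dev/md2    # per-member status of the root array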
2023 Oct 19 (1 reply)
Could not convert SID S-0-0, error is NT_STATUS_NONE_MAPPED
Hi all, In my case I see this happen on the target DC when rsync'ing sysvol from one Samba DC to another, when the target is on Debian Bookworm with both samba 4.17.<many> and 4.18.8. It looks like a different behaviour of rsync that I never saw on Bullseye or before, with many different samba versions over the years. I'm using rsync through ssh with rsync -avAX --delete
2009 Aug 26 (26 replies)
Xen and I/O Intensive Loads
Hi, folks, I'm attempting to run an e-mail server on Xen. The e-mail system is Novell GroupWise, and it serves about 250 users. The disk volume for the e-mail is on my SAN, and I've attached the FC LUN to my Xen host, then used the "phy:/dev..." method to forward the disk through to the domU. I'm running into an issue with high I/O wait on the box (~250%)
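For context, the method mentioned is the classic Xen raw-block passthrough line in the domU config; a sketch with a placeholder device name, since the actual path is elided above:

  # hand the SAN LUN through to the guest as a raw block device
  disk = [ 'phy:/dev/mapper/example-lun,xvda,w' ]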
2009 Nov 20 (3 replies)
steadily increasing/high loadavg without i/o wait or cpu utilization
Hi all, I just installed the CentOS 5.4 Xen kernel on an Intel Core i5 machine as dom0. After some hours of syncing a RAID10 array (8 SATA disks) I noticed a steadily increasing loadavg. Without reasonable I/O wait or CPU utilization, I think the loadavg on this system should be much lower. If this loadavg is normal, I would be grateful if someone could explain why. The screenshots below show that there is
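Linux load average counts uninterruptible (D-state) tasks as well as runnable ones, so load can climb with little visible I/O wait or CPU use; a quick way to check for that, using stock procps tools:

  vmstat 1 5                          # 'b' column = tasks blocked in D state
  ps -eo state,pid,comm | grep '^D'   # list the D-state processes themselves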
2010 Sep 27 (1 reply)
RAID rebuild time and disk utilization....
So I'm in the process of building and testing a RAID setup, and it appeared to take a long time to build. I came across some settings for the minimum rebuild speed, and that helped, but it appears that one of the disks is struggling (100% utilization) vs the other one... I was wondering if anyone else has seen this and, if so, whether there is a solution for it... my 2 disks are 1 Samsung F3 1tb /dev/sdb
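The settings referred to are presumably the md resync speed limits; these are the standard sysctl knobs (values in KB/s):

  sysctl dev.raid.speed_limit_min    # floor the rebuild tries to hold
  sysctl dev.raid.speed_limit_max    # ceiling it may burst to
  # raise the floor so the rebuild isn't starved by competing I/O
  sysctl -w dev.raid.speed_limit_min=50000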
2024 Nov 06 (1 reply)
Status app for mobile
I'll second that Zabbix motion. With Zabbix you can monitor both Proxmox and Samba shares. The problem I'm fighting now is that after trying to upgrade Samba on AIX, I can't get it to join the domain. -- See Ya' Howard Coles From: samba <samba-bounces at lists.samba.org> on behalf of Adam Tauno Williams via samba <samba at lists.samba.org> Date: Wednesday, November 6, 2024 at 12:20 PM
2006 Apr 07 (0 replies)
How to interpret the output of 'iostat -x /dev/sdb1 20 100' ??
Hi, I'm a newbie to the tool 'iostat' and I've read the manual for iostat several times, but it doesn't help. I still get confused by the output of 'iostat'; the manual seems too abstract, or high-level, for me. Let's post the output first:
avg-cpu:  %user  %nice  %sys  %idle
          5.70   0.00   3.15  91.15
Device: rrqm/s wrqm/s r/s w/s
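For the question being asked, the short version of the key extended fields, per iostat(1):

  # await    - average time (ms) a request spends queued plus serviced
  # avgqu-sz - average number of requests in the device queue
  # %util    - share of elapsed time the device was busy doing I/O;
  #            near 100% on a single disk means it is saturated
  iostat -x /dev/sdb1 20 100   # 20-second samples, 100 reports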
2011 May 07 (7 replies)
kswapd taking 100% cpu with no swap on system
Hi All, I have a Xeon server with 16 GB RAM and no swap. I am running a Cassandra server on two nodes in a cluster. When there is high load on the server, kswapd0 kicks in, takes 100% CPU, and makes the machine very slow, and we need to restart our Cassandra server. I have the latest kernel, 2.6.18-238.9.1.el5. Please let me know how I can fix this issue. It's hurting us badly; this is our production server. Any
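Typical knobs to inspect for kswapd spinning on a swapless box (standard vm sysctls; the value shown is only illustrative, not a recommendation from the thread):

  sysctl vm.swappiness        # how aggressively the kernel reclaims pages
  sysctl vm.min_free_kbytes   # the free-memory watermark kswapd defends
  # a too-low watermark on a 16 GB machine can keep kswapd thrashing;
  # raising it is one commonly suggested mitigation
  sysctl -w vm.min_free_kbytes=65536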