similar to: Xen and I/O Intensive Loads

Displaying 20 results from an estimated 1000 matches similar to: "Xen and I/O Intensive Loads"

2011 Oct 09
1
Btrfs High IO-Wait
Hi, I have high IO-wait on the OSDs (Ceph); the OSDs are running a v3.1-rc9 kernel. I also see high IO rates, around 500 IO/s, reported via iostat.
Device: rrqm/s wrqm/s r/s  w/s  rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda     0.00   0.00   0.00 6.80 0.00  62.40 18.35    0.04     5.29  0.00    5.29    5.29  3.60
sdb
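For anyone wanting to reproduce this kind of capture, a minimal sketch (the interval and flags mirror common practice, not necessarily the poster's exact invocation):

    # extended per-device statistics, kB units, 1-second samples
    iostat -x -k 1
    # watch await and %util: high await with modest r/s+w/s, as above,
    # usually points at seek-bound or misbehaving disks, not raw throughput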
2009 Sep 14
8
10 Node OCFS2 Cluster - Performance
Hi, I am currently running a 10-node OCFS2 cluster (version 1.3.9-0ubuntu1) on Ubuntu Server 8.04 x86_64. Linux n1 2.6.24-24-server #1 SMP Tue Jul 7 19:39:36 UTC 2009 x86_64 GNU/Linux. The cluster is connected to a 1 TB iSCSI device presented by an IBM 3300 storage system, running over a 1 Gb network. Mounted on all nodes: /dev/sdc1 on /cfs1 type ocfs2
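A sketch of how such a volume is typically brought up on each node (assuming the o2cb cluster stack is already configured; the device and mount point follow the post):

    # bring the cluster stack online, then mount the shared volume
    service o2cb online
    mount -t ocfs2 /dev/sdc1 /cfs1
    # list which nodes currently have the device mounted
    mounted.ocfs2 -f /dev/sdc1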
2010 Feb 09
3
disk I/O problems with LSI Logic RAID controller
we're having a weird disk I/O problem on a 5.4 server connected to external SAS storage with an LSI Logic MegaRAID SAS 1078. The server is used as a Samba file server. Every time we try to copy a large file to the storage-backed file system, disk utilization see-saws: it climbs to 100%, drops to several seconds of inactivity, then climbs back to 100%, and so forth. Here is a snip from the iostat
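That burst-then-idle pattern is characteristic of dirty pages being flushed in large batches. One hedged experiment (values are illustrative, not a recommendation) is to make writeback start earlier and in smaller chunks:

    # start background writeback sooner and cap the dirty ceiling
    sysctl -w vm.dirty_background_ratio=5
    sysctl -w vm.dirty_ratio=10
    # then repeat the large copy while watching iostat for a smoother %util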
2016 May 25
1
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
On 2016-05-25 19:13, Kelly Lesperance wrote:
> Hdparm didn't get far:
>
> [root@r1k1 ~]# hdparm -tT /dev/sda
>
> /dev/sda:
>  Timing cached reads: Alarm clock
> [root@r1k1 ~]#
Hi Kelly, Try running 'iostat -xdmc 1'. Look for a single drive that has substantially greater await than ~10 msec. If all the drives except one are taking 6-8 msec, but one is very
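Putting that advice into runnable form (the drive letters are illustrative, for a 12-disk host like the one described):

    # per-drive extended stats, MB units, CPU summary, 1-second samples
    iostat -xdmc 1
    # baseline each drive's cached/raw read speed; a single drive timing out
    # (as the "Alarm clock" above suggests) is a strong failure signal
    for d in /dev/sd[a-l]; do hdparm -tT "$d"; done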
2010 Dec 09
1
Extremely poor write performance, but read appears to be okay
Hello, I'm writing from the other side of the world from where my systems are, so details are coming in slowly. We have a 6 TB OCFS2 volume across 20 or so nodes, all running OEL 5.4 with ocfs2-1.4.4. The system has worked fairly well for the last 6-8 months. Something has happened over the last few weeks which has driven write performance nearly to a halt. I'm not sure how to proceed, and
2015 Sep 17
1
poor performance with dom0 on centos7
On 2015-09-17 09:29, Pasi Kärkkäinen wrote:
> Are you using NFS over UDP or TCP?
TCP, but the network can't be the bottleneck; I have tested it with iperf between bare-metal/domUs and the NFS domU and it was perfectly fast...
> I don't think.
> If you used NFS over UDP, try running it over TCP.
No, I use it over TCP...
> What does
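A quick way to verify which transport the client actually negotiated, rather than assuming (standard nfs-utils tooling):

    # show per-mount NFS options as negotiated, including proto=tcp/udp
    nfsstat -m
    # the same information appears in the kernel mount table
    grep ' nfs' /proc/mounts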
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one. Here is an iostat example from a host within the same cluster, but without the RAID check running:
[root@r2k1 ~]# iostat -xdmc 1 10
Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 05/27/16 _x86_64_ (32 CPU)
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it. We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs:
2x E5-2650
128 GB RAM
12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA
Dual-port 10 Gb NIC
The drives are configured as one large
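When a periodic md check drags %iowait up like this, the usual first checks are whether the scan is actually running and how hard it is allowed to push (the sysctl path is the stock md knob; the value is illustrative):

    # is a check/resync in flight, and at what speed?
    cat /proc/mdstat
    # cap the background check so production I/O keeps priority
    echo 10000 > /proc/sys/dev/raid/speed_limit_max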
2014 Jun 20
1
iostat results for multi path disks
Here is a sample of running iostat on a server that has a LUN from a SAN with multiple paths. I am specifying a device list that just grabs the bits related to the multipath device:
$ iostat -dxkt 1 2 sdf sdg sdh sdi dm-7 dm-8 dm-9
Linux 2.6.18-371.8.1.el5 (db21b.den.sans.org) 06/20/2014
Time: 02:30:23 PM
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await
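To know which sdX paths belong to which dm-N device before reading output like this, the multipath tooling prints the mapping (standard device-mapper-multipath commands):

    # show each multipath map with its component paths (sdf, sdg, ...)
    multipath -ll
    # list dm-N minor numbers alongside their map names
    dmsetup ls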
2007 Aug 22
5
Slow concurrent actions on the same LVM logical volume
Hi all! I have problems with concurrent filesystem actions on an OCFS2 filesystem which is mounted by 2 nodes. OS=RH5ES and OCFS2=1.2.6. For example: if I have an LV called testlv which is mounted on /mnt on both servers, and I run "dd if=/dev/zero of=/mnt/test.a bs=1024 count=1000000" on server 1 while running du -hs /mnt/test.a at the same time, it takes about 5 seconds for du -hs to execute: 270M
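The test as described, spelled out (both commands are taken from the post; run them concurrently from the two nodes):

    # node 1: stream ~1 GB into the shared volume
    dd if=/dev/zero of=/mnt/test.a bs=1024 count=1000000
    # node 2, while the dd runs: time the stat of the growing file
    time du -hs /mnt/test.a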
2009 Apr 16
2
Weird performance problem
Hi, I'm running a CentOS 4 server and I sometimes face a weird performance problem; here is how I discovered it. This server runs OpenVZ virtual machines, and one of them is an Asterisk server for my personal use. The first symptom of the problem was that the voice quality became flaky. So I logged on to the server to see what could be eating CPU cycles, when I
2009 Nov 20
3
steadily increasing/high loadavg without i/o wait or cpu utilization
Hi all, I just installed the CentOS 5.4 Xen kernel on an Intel Core i5 machine as dom0. After some hours of syncing a RAID10 array (8 SATA disks) I noticed a steadily increasing loadavg. Without noticeable I/O wait or CPU utilization, I think the loadavg on this system should be much lower. If this loadavg is normal, I would be grateful if someone could explain why. The screenshots below show that there is
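On Linux, the load average counts tasks in uninterruptible sleep (D state) as well as runnable ones, so a climbing load with idle CPUs often means processes are blocked on the RAID resync. A quick way to confirm:

    # list any tasks in uninterruptible (D) state and what they are waiting on
    ps -eo state,pid,wchan:30,cmd | awk '$1 == "D"'
    # and check the resync itself
    cat /proc/mdstat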
2012 Dec 10
8
home directory server performance issues
I'm looking for advice and considerations on how to optimally set up and deploy an NFS-based home directory server. In particular: (1) how to determine hardware requirements, and (2) how to best set up and configure the server. We actually have a system in place, but the performance is pretty bad: the users often experience a fair amount of lag (1-5 seconds) when doing anything on their home
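Before sizing new hardware, it is worth measuring where the existing lag comes from; the standard nfs-utils counters are a cheap start (flags shown are the common ones):

    # on the server: per-operation NFS counters (look for getattr/lookup storms)
    nfsstat -s
    # on a client: RPC retransmissions, which surface as multi-second stalls
    nfsstat -c -r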
2008 Oct 05
1
io writes very slow when using vmware server
We are struggling with a strange problem. When we have some VMware clients running (mostly MS Windows guests), the IO-write performance on the host becomes very bad. The guest OSes do not do anything; just having them started, sitting at the login prompt, is enough to trigger the problem. The host has 4 GB of RAM, plenty, and all clients fit easily into that space. The disk system is a
2017 Apr 08
2
lvm cache + qemu-kvm stops working after about 20GB of writes
Hello, I would really appreciate some help/guidance with this problem. First of all, sorry for the long message. I would file a bug, but I do not know if it is my fault, dm-cache, qemu, or (probably) a combination of them. And I can imagine some of you have this setup up and running without problems (or maybe you think it works, just like I did, but it does not):
PROBLEM: LVM cache writeback
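For context, a minimal sketch of the kind of writeback cache stack being described (volume group, device, and size names are illustrative, not taken from the post):

    # carve cache data and metadata LVs out of an SSD PV in the same VG
    lvcreate -L 20G -n cdata vg0 /dev/sdb
    lvcreate -L 64M -n cmeta vg0 /dev/sdb
    # bind them into a cache pool, then attach it to the VM's volume in writeback mode
    lvconvert --type cache-pool --poolmetadata vg0/cmeta vg0/cdata
    lvconvert --type cache --cachepool vg0/cdata --cachemode writeback vg0/vmdisk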
2010 Jan 05
4
Software RAID1 Disk I/O
I just installed the CentOS 5.4 64-bit release on a 1.9 GHz CPU with 8 GB of RAM. It has 2 Western Digital 1.5 TB SATA2 drives in RAID1.
[root@server ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md2              1.4T  1.4G  1.3T   1% /
/dev/md0               99M   19M   76M  20% /boot
tmpfs                 4.0G     0  4.0G   0% /dev/shm
[root@server ~]#
It's barebones
2006 Jan 30
0
Help - iSCSI and SAMBA?
Hi All, I have a client trying to use a Promise Tech iSCSI array to share 2.8 TB via Samba. I have CentOS 4.2 with all updates installed on an IBM server. The installation and setup were pretty straightforward. The Promise box is using Gigabit Ethernet and is the only device on that net (I think they are using a cross-over cable; I didn't set up the hardware). We're experiencing
2010 Aug 20
0
awful i/o performance on xen paravirtualized guest
Hi. I'm testing a CentOS 5.4 Xen PV guest on top of a CentOS 5.4 host. For some reason, the disk performance from the guest is awful: when I do an import, the IO is fine for a while, then climbs to 100% and stays there most of the time. At first I thought it was because I was using file-backed disks, so I deleted those and changed to LVM, but the situation didn't improve. Here's an
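For reference, the two disk backends the poster switched between look like this in a PV domU config file (paths and names are illustrative):

    # file-backed disk image (what the poster started with)
    disk = [ 'file:/var/lib/xen/images/guest.img,xvda,w' ]
    # LVM-backed phy device (what they switched to)
    disk = [ 'phy:/dev/vg0/guest,xvda,w' ]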
2011 Feb 22
6
how to optimize CentOS XEN dom0?
Hi, I have a problematic CentOS Xen server and hope someone could point me in the right direction to optimize it a bit. The server runs on a Core2Quad 9300 with 8 GB RAM (the max the motherboard can take; 1U chassis) on an Intel motherboard with a 1 TB SATA HDD. dom0 is set to a 512 MB limit, with a few small Xen VMs running:
root@zaxen01:[~]$ xm list
Name                ID
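A common first step is pinning dom0's memory at boot rather than letting it balloon down, matching the 512 MB limit described (the grub line is a sketch for this kind of setup):

    # /boot/grub/menu.lst: fix dom0 memory on the hypervisor line
    kernel /xen.gz dom0_mem=512M
    # confirm from the running system
    xm info | grep -E 'total_memory|free_memory'
    xm list Domain-0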
2015 Jan 29
2
C6 server responding extremely slow on ssh interactive
On 29-01-15 at 00:00, Gordon Messmer wrote:
> On 01/28/2015 12:12 PM, Patrick Bervoets wrote:
>>
>> ARPING 192.168.1.15 from 0.0.0.0 br0
>> Unicast reply from 192.168.1.15 [AC:16:2D:72:67:D4] 0.723ms
>> Sent 1 probes (1 broadcast(s))
>> Received 1 response(s)
>>
>> Thanks anyway
>
> I'm not sure what you mean by "thanks anyway".