Displaying 20 results from an estimated 20000 matches similar to: "slow read IO in domU"
2010 Jan 20
3
Slow fsck on adaptec SAS/SATA raid
I'm trying to do an fsck on an ext3 partition but I'm seeing abysmally slow
disk throughput; monitoring with "dstat" (like vmstat) shows ~1200-1500KB/s
throughput to the disks. Even with 24hrs of fsck-ing I only get ~3% (still in
pass1).
The filesystem is ext3, about 3.7TB, on an x86_64-based system with 4GB RAM;
I'm running "e2fsck -C0 /dev/sda3". e2fsprogs is 1.41.9.
2002 May 24
3
High load on Squid server after change from reiserfs to ext3
We are running Zope behind Squid 2.4Stable6 with squid in acceleration mode.
The squid box (dual Pentium III 1 GHz, RH 7.2, Linux 2.4.9-21smp, 2GB Ram)
has a normal load of 0.2-0.3 during busy hours. From time to time
we see spikes lasting several hours where the load average of the machine
is higher than 1.5, although there are no spikes in CPU
utilization. Also there is no increase in the number
2006 Jun 18
3
Migrate a domU
Hi,
I've 5 domU in one dom0 with kernel 2.6.11.12-xen0.
All domU are in LVM partitions on dom0.
Each domU has one /, one swap and one /home in reiserfs.
I must migrate all domU to another server, but how can I migrate all the
data and all processes without errors?
I can stop each domU, it's not a problem ...
Where can I find docs about migrating a domU to another dom0?
2006 Mar 08
12
AW: Problem booting domU
Hello,
Can you check the following entries:
Old:
disk = ['phy:vm_volumes/root.dhcp1,sda1,w',
'phy:vm_volumes/var.dhcp1,sda2,w',
'phy:vm_volumes/swap.dhcp1,sda3,w']
New:
disk = ['phy:/vm_volumes/root.dhcp1,sda1,w',
'phy:/vm_volumes/var.dhcp1,sda2,w',
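For readers comparing the two stanzas: with the phy: backend, a path without a leading slash is conventionally resolved relative to /dev, so the "Old" and "New" lines name different device nodes. A complete stanza of the same shape, with hypothetical volume names and the /dev prefix spelled out explicitly:

```python
# Hypothetical Xen domU disk stanza; 'phy:vm_volumes/...' resolves to
# /dev/vm_volumes/..., while 'phy:/vm_volumes/...' is taken as an
# absolute path. Spelling out /dev avoids the ambiguity entirely.
disk = ['phy:/dev/vm_volumes/root.dhcp1,sda1,w',
        'phy:/dev/vm_volumes/var.dhcp1,sda2,w',
        'phy:/dev/vm_volumes/swap.dhcp1,sda3,w']
```

Whichever prefix matches where the volumes actually live is the right one to use.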
2011 Nov 07
1
Monitoring IO -- vmstat doesn't match snmp
I made the mistake of looking at disk IO numbers in two different ways --
now I'm confused, because they give inconsistent answers.
First way was using 'vmstat 10'. This gave me:
 r  b    swpd    free   buff   cache  si  so  bi  bo   in  cs us sy id wa st
 2  0 2162944 4071928 162444 4218456   0   0   0 286 1103 528  3  2 95  0  0
 1  0
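One unit mismatch worth ruling out (an assumption about the setup, not a diagnosis of this case): vmstat's bi/bo columns count blocks per second, where procps uses 1024-byte blocks, while many SNMP disk-IO counters report bytes. Converting a sample value makes the two comparable:

```shell
# Convert vmstat's "bo" column (blocks/s, 1 block = 1024 bytes in procps)
# into bytes/s so it can be compared with byte-based SNMP counters.
bo=286   # sample value from the vmstat output above
echo "$bo" | awk '{ printf "%d blocks/s = %d bytes/s\n", $1, $1 * 1024 }'
```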
2005 Sep 07
1
mkinitrd
I've compiled xen without any problems, but now I have to create an initrd file. When I use the command mkinitrd (without any options) I get some errors:
# mkinitrd
Root device: /dev/sda3 (mounted on / as reiserfs)
Module list: ata_piix mptbase mptscsih qla2300 reiserfs
Kernel image: /boot/vmlinuz-2.6.11.12-xen0
Initrd image: /boot/initrd-2.6.11.12-xen0
Shared
2013 Dec 02
3
Moving/merging a filesystem back into /
Hello,
I'm going to be moving a filesystem around, and was planning on using
rsync to do it, so I'd like to get some advice from those more experienced
than I am (both with rsync and with moving filesystems)...
I currently have a system that has a separate /usr on an LVM partition.
I want to merge this back into the / (root) filesystem.
This is a production server (mail server, gentoo linux), so
2002 Dec 11
12
File Systems - Which one to use?
We are looking at implementing a Linux box running samba in the near
future with about 1TB of disk online. The purpose of this box will be
for basic file and printer sharing needs. I am doing research on the
different journaling file systems available in RH 7.3 and up (ext3,
reiserFS, and JFS) and was wondering if anyone has had any real-world
experience with them (mostly reiserFS and JFS) and
2007 Jun 25
1
I/O errors in domU with LVM on DRBD
Hi,
Sorry for the long-winded email. Looking for some answers
to the following.
I am setting up a xen PV domU on top of an LVM-partitioned DRBD
device. Everything was going just fine until I tried to test the
filesystems in the domU.
Here is my setup;
Dom0 OS: CentOS release 5 (Final)
Kernel: 2.6.18-8.1.4.el5.centos.plusxen
Xen: xen-3.0.3-25.0.3.el5
DRBD:
2007 Jun 25
9
xen 3.1 - domU hangs just after "xm create"
What are methods to debug domU when it hangs? Using xen 3.1 compiled from sources I could not manage to launch any domU. For instance, I run something like this:
=====8<=====================
disk = [ 'file:/oradata-act/sles.disk,hda1,w', ',hdc:cdrom,r' ]
kernel = "/boot/vmlinuz-2.6.18-xen"
ramdisk = "/boot/initrd-2.6.18-xen"
cpus =
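When a domU hangs right after "xm create", attaching the console ("xm create -c config") is usually the first debugging step, since it shows the guest kernel's boot output. A minimal PV config of the same shape as the fragment above, with every name and path hypothetical:

```python
# Hypothetical minimal Xen PV domU config (all names/paths invented
# for illustration; Xen config files are plain Python assignments).
name    = "sles-test"
memory  = 512
vcpus   = 1
kernel  = "/boot/vmlinuz-2.6.18-xen"
ramdisk = "/boot/initrd-2.6.18-xen"
disk    = ['file:/oradata-act/sles.disk,hda1,w']
root    = "/dev/hda1 ro"
extra   = "console=xvc0"  # route the guest console so boot output is visible
```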
2007 Sep 13
3
3Ware 9550SX and latency/system responsiveness
Dear list,
I thought I'd just share my experiences with this 3Ware card, and see
if anyone might have any suggestions.
System: Supermicro H8DA8 with 2 x Opteron 250 2.4GHz and 4GB RAM
installed. 9550SX-8LP hosting 4x Seagate ST3250820SV 250GB in a RAID
1 plus 2 hot spare config. The array is properly initialized, write
cache is on, as is queueing (and supported by the drives). StoreSave
2007 Mar 14
3
I/O bottleneck Root cause identification w Dtrace ?? (controller or IO bus)
Dtrace and Performance Teams,
I have the following IO-performance-specific questions (and I'm already
savvy with the lockstat and pre-dtrace
utilities for performance analysis, but in need of details regarding
pinpointing IO bottlenecks at the controller or IO bus):
Q.A> Determining IO saturation bottlenecks (beyond service
times and kernel contention)
I'm
2007 Jun 11
3
domU on gfs
Hey All,
I have a cluster set up and exporting gfs storage; everything is
working OK (as far as I know, anyway). But instead of mounting the gfs
storage I want the xen guest to be installed on the shared gfs storage.
But with my current setup when I install the domU on the gfs storage it
changes it to ext3. Is it possible this way or does the domU have to be
on an ext file system?
2007 Oct 23
3
xfs problems with xen3.1 on domu
Have amd64 xen 3.1 installed on debian etch.
Have all the domu file systems in lvm.
Problem 1
If I have an xfs filesystem, when I mount it on domu I get this error:
Filesystem "sda5": Disabling barriers, not supported by the underlying
device
XFS mounting filesystem sda5
Ending clean XFS mount for filesystem: sda5
If I mount with nobarrier the message goes away.
If I mount under
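For reference, the mount being described looks like this (hypothetical device and mount point; nobarrier simply tells XFS not to issue the write-cache flush barriers the underlying device rejected):

```
# /etc/fstab entry (hypothetical device and mount point)
/dev/sda5  /data  xfs  nobarrier  0  2

# or as a one-off:
#   mount -o nobarrier /dev/sda5 /data
```

Note that disabling barriers trades crash safety for silence unless the backing storage has a battery-backed or otherwise persistent write cache.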
2005 Oct 11
8
More on domU not starting
I get the following warning when running xend start, and a similar
warning sometimes when doing xm commands:
/usr/lib/python/xen/xend/XendNode.py:26: RuntimeWarning: Python C API
version mismatch for module xen.lowlevel.xc: This Python has API version
1012, module xen.lowlevel.xc has version 1011.
import xen.lowlevel.xc
/usr/lib/python/xen/xend/xenstore/xstransact.py:10: RuntimeWarning:
2008 Sep 12
2
Can't see changes in LV Size inside domU (after lvextend on dom0)
Hi guys,
I've seen this post regarding "lvm resize", and it contains all the
information I need to resize a disk inside domU:
http://lists.xensource.com/archives/html/xen-users/2006-11/msg00019.html
>From this post:
"With this I can extend the LVs in the Dom0, and either shutdown the DomU
and
resize2fs from Dom0 or even (tested on test VMs and low-use
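The sequence quoted there can be sketched as follows (VG/LV names are hypothetical; the lvextend part is shown only as comments since it needs a real volume group, while the resize2fs step is exercised on a throwaway file-backed image):

```shell
# dom0-side commands (hypothetical names; run with the domU shut down):
#   lvextend -L +10G /dev/vg0/domU-disk
#   resize2fs /dev/vg0/domU-disk
# The resize2fs step, demonstrated on a file-backed image:
set -e
export PATH="$PATH:/sbin:/usr/sbin"   # e2fsprogs tools often live in sbin
img=/tmp/resize_demo.img
rm -f "$img"
truncate -s 16M "$img"
mke2fs -q -F "$img"        # create an ext2 filesystem in the image
truncate -s 32M "$img"     # grow the backing "device" (stand-in for lvextend)
resize2fs "$img"           # grow the filesystem to fill the new size
```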
2006 Dec 27
5
Problem with ext3 filesystem
Hey,
I have a problem with an ext3 filesystem and don't know how to fix it or
find the cause :(
The Hardware:
Tyan mainboard, AMD Athlon CPU, ARECA ARC-1120 RaidController Raid5 with
400GB Seagate HDs, 756MB RAM, other harddisks for system, network and
avm isdn controller.
Because of the filesystem problems I ran memtest and found one bad memory
module, which I have since replaced.
The
2009 Nov 09
6
Move domU lvm based to another dom0
Hi guys, I need to move an lvm based domU from one dom0 to another dom0.
How do you guys do this?
xm save/restore doesn't have an option to specify an LVM target as the storage.
Thanks
Chris
_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
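One common approach (a sketch, not the only answer): with the domU shut down, stream the LV block-for-block to the other dom0. The real command would be dd piped over ssh against the LV device paths; the block below demonstrates the same pipe pattern on throwaway files so the mechanics are visible and testable.

```shell
# Real-world shape (hypothetical VG/LV and host names; create a same-size
# LV on the target first, and keep the domU shut down during the copy):
#   dd if=/dev/vg0/domU bs=4M | ssh root@other-dom0 'dd of=/dev/vg0/domU bs=4M'
# Same pipe pattern, demonstrated on throwaway files:
set -e
src=/tmp/domu_move_src.img
dst=/tmp/domu_move_dst.img
dd if=/dev/urandom of="$src" bs=1M count=4 2>/dev/null
dd if="$src" bs=1M 2>/dev/null | dd of="$dst" bs=1M 2>/dev/null
cmp "$src" "$dst" && echo "copy verified"
```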
2020 Mar 09
2
[PATCH v1 3/3] virtio-balloon: Switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM
On 08.03.20 05:47, Tyler Sanderson wrote:
> Tested-by: Tyler Sanderson <tysand at google.com>
>
> Test setup: VM with 16 CPU, 64GB RAM. Running Debian 10. We have a 42
> GB file full of random bytes that we continually cat to /dev/null.
> This fills the page cache as the file is read. Meanwhile we trigger
> the balloon to inflate, with a target size of 53 GB. This setup