similar to: Centos + Xen 3.0 + Root on md = bad

Displaying 20 results from an estimated 3000 matches similar to: "Centos + Xen 3.0 + Root on md = bad"

2006 Apr 20
5
Memory bound applications
Hello, we're using several test and development servers as virtual Xen machines running on one host. Unfortunately our main application needs a lot of memory. It's not a problem at first glance, because usually only very few instances are running at the same time. But if I understand Xen correctly, it's only possible to distribute the physical memory of the machine, not the whole
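
A Xen 3.0 domU's memory is carved out of the host's physical RAM at start, but it can be ballooned down at runtime. A minimal sketch, with hypothetical domain name and sizes:

    $ grep memory /etc/xen/dev-vm1    # memory = 512  (MB) in the domU config
    $ xm mem-set dev-vm1 256          # shrink the running domU's allocation
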
2005 Oct 01
2
Problem getting x86_64 dom0 to boot on a FC4 machine
I'm struggling to get Xen to boot on an FC4 Opteron box. I've included the tail of the boot log below [1]. I suspect the problem relates to the software RAID 1 root and boot partitions, and how they interact with the initrd image. The RAID volumes fail to mount. I compiled the Xen snapshot from 23rd September, and the only change I've made is to enable the
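
One hedged thing to try on FC4 is forcing the raid1 module into the initrd and confirming it landed there; the kernel version below is illustrative:

    $ mkinitrd -f --with=raid1 /boot/initrd-2.6.12-xen0.img 2.6.12-xen0
    $ zcat /boot/initrd-2.6.12-xen0.img | cpio -t | grep raid1
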
2006 Feb 24
3
Dom0 lvm/software raid rhel4.1 booting issues.
Basically the issue comes down to my volume groups not being found by this initrd, causing the good old kernel panic.

initrd-2.6.12.6-xen3_12.1_rhel4.1.img

[root@xen01 lvm]# uname -a
Linux xen01.inside.***.com 2.6.9-22.0.2.ELsmp #1 SMP Thu Jan 5 17:13:01 EST 2006 i686 i686 i386 GNU/Linux
[root@xen01 lvm]# cat /etc/redhat-release
Red Hat Enterprise Linux ES release 4 (Nahant Update 2)

Everything
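
A quick sanity check, assuming the RHEL4-era initrd is a gzipped cpio archive, is to confirm the device-mapper module actually made it into the image, and to force it in if not:

    $ zcat initrd-2.6.12.6-xen3_12.1_rhel4.1.img | cpio -t | grep dm-mod
    $ mkinitrd -f --with=dm-mod /boot/initrd-2.6.12.6-xen3.img 2.6.12.6-xen3
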
2005 Aug 10
0
Xen and LVM snapshots on FC4
Hello, I've tried to make Xen and LVM snapshots work on a RHEL3 system partly upgraded to FC4 (using FC4 kernel 2.6.12-1.1387_FC4, xen-2-20050522 installed from RPM). I had a problem where non-snapshot LVM images worked, but as soon as I created a snapshot of one and tried to use it, I would get a crash like this as soon as I booted up a Xen domain:

Freeing unused kernel memory: 160k freed
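
For reference, the sort of snapshot setup being described, with hypothetical VG/LV names (the Xen-patched kernel also needs dm-snapshot support for this to work):

    $ lvcreate -s -L 1G -n fc4-snap /dev/vg0/fc4-base   # copy-on-write snapshot
    # then point the domU's disk config at /dev/vg0/fc4-snap instead of the base LV
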
2011 Nov 24
1
[PATCH] New API: md-stop for stopping MD devices
This API is used to stop an md device. When we want to move a device to another md array, we must first stop the md device that currently contains it.

Signed-off-by: Wanlong Gao <gaowanlong at cn.fujitsu.com>
---
 daemon/md.c                    | 16 ++++++++++++++++
 generator/generator_actions.ml |  9 +++++++++
 regressions/test-mdadm.sh      | 14 ++++++++++++++
 src/MAX_PROC_NR
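
The API wraps what mdadm does from the command line; a sketch of the device move it describes, with hypothetical device names (zeroing the superblock is destructive):

    $ mdadm --stop /dev/md0                 # deactivate the array, releasing its members
    $ mdadm --zero-superblock /dev/sdb1     # wipe the old array metadata
    $ mdadm /dev/md1 --add /dev/sdb1        # the freed device can now join another array
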
2011 Nov 23
2
[PATCH] New API: mdadm-stop for stopping MD devices.
This API is used to stop an md device. When we want to move a device to another md array, we must first stop the md device that currently contains it.

Signed-off-by: Wanlong Gao <gaowanlong at cn.fujitsu.com>
---
 daemon/md.c                    | 16 ++++++++++++++++
 generator/generator_actions.ml |  9 +++++++++
 regressions/test-mdadm.sh      | 14 ++++++++++++++
 src/MAX_PROC_NR
2011 Oct 31
2
libguestfs and md devices
We've recently discovered that libguestfs can't handle guests that use md. There are (at least) two reasons for this. Firstly, the appliance doesn't include mdadm; without it, md devices aren't detected during the boot process. Simply adding mdadm to the appliance package list fixes this. Secondly, md devices referenced in fstab as, e.g., /dev/md0 aren't handled
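
With mdadm in the appliance, the arrays become visible from guestfish; a sketch using the list-md-devices call from the companion patch series (image name hypothetical):

    $ guestfish --ro -a md-guest.img run : list-md-devices
    /dev/md127
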
2006 Aug 10
3
MD raid tools ... did I miss something?
Hi, I have a degraded array /dev/md2:
=====================================================================
$ mdadm -D /dev/md2
/dev/md2:
        Version : 00.90.01
  Creation Time : Thu Oct 6 20:31:57 2005
     Raid Level : raid5
     Array Size : 221953536 (211.67 GiB 227.28 GB)
    Device Size : 110976768 (105.84 GiB 113.64 GB)
   Raid Devices : 3
  Total Devices : 2
Preferred Minor : 2
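
With two of three raid5 members left, the usual recovery is to add a replacement (device name hypothetical) and let reconstruction run:

    $ mdadm /dev/md2 --add /dev/sdd1    # start rebuilding onto the new member
    $ cat /proc/mdstat                  # watch recovery progress
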
2020 Sep 18
4
Drive failed in 4-drive md RAID 10
I got the email that a drive in my 4-drive RAID10 setup failed. What are my options? Drives are WD1000FYPS (Western Digital 1 TB 3.5" SATA).

mdadm.conf:
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/root level=raid10 num-devices=4 UUID=942f512e:2db8dc6c:71667abc:daf408c3

/proc/mdstat:
Personalities : [raid10]
md127 : active raid10 sdf1[2](F)
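
A sketch of the usual replacement cycle for the failed member flagged above (sdf1 comes from the mdstat snippet; the new drive's name may differ):

    $ mdadm /dev/md127 --fail /dev/sdf1 --remove /dev/sdf1   # clear the failed member
    # physically swap the drive and partition it like its peers, then:
    $ mdadm /dev/md127 --add /dev/sdf1                       # rebuild onto the new disk
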
2011 Nov 25
2
[PATCH 0/2] MD device inspection
These patches are rebased on top of current master. In addition, I've made the following changes:

* Fixed whitespace error.
* Functions return -1 on error.
* Added a debug message when guest contains md devices, but nothing was parsed from mdadm.conf.
2011 Dec 01
2
[PATCH 0/2] handle MD devices in fstab
The only change from the previous post is explicitly checking md_map for NULL before the hash_free and lookup.
2010 May 28
2
permanently add md device
Hi all, currently I'm setting up a 5.4 server and trying to create a third RAID device. When I run:

$ mdadm --create /dev/md2 -v --raid-devices=15 --chunk=32 --level=raid6 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq

the device file "md2" is created and the RAID is being configured. But somehow
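
If the array vanishes on reboot, the usual fix is to record it so it is assembled at boot; a sketch, assuming the anaconda-style config path:

    $ mdadm --detail --scan | grep md2 >> /etc/mdadm.conf   # persist the definition
    # rebuild the initrd too if the array must be available at boot
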
2006 Feb 08
1
rc.sysinit problem in domU
When booting a CentOS4 domU, I'm getting some errors from the /etc/rc.d/rc.sysinit script. I've read that some edits are required in this file, but cannot find any specific references. Has anyone seen this before? Is there a workaround?

EXT2-fs warning (device sda1): ext2_fill_super: mounting ext3 filesystem as ext2
VFS: Mounted root (ext2 filesystem) readonly.
Freeing unused kernel
2011 Nov 11
3
[PATCH v2] Add mdadm-create, list-md-devices APIs.
This adds the mdadm-create API for creating RAID devices, and includes various fixes for the other two patches. Rich.
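
A sketch of driving the new calls from guestfish. Upstream the creation API was eventually exposed as md-create, so the name and argument syntax below are assumptions against this particular patch revision:

    $ guestfish -N disk -N disk run \
        : md-create r1 "/dev/sda /dev/sdb" level:raid1 \
        : list-md-devices
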
2006 Aug 04
4
RSS feeds
Hi all, can anybody tell me how to create an RSS feed using Ruby on Rails? How do I go about constructing this RSS file? Can I find an example on any site? regards, Prasad
2010 Feb 28
3
puzzling md error?
This has never happened to me before, and I'm somewhat at a loss. Got an email from the weekly cron job:

/etc/cron.weekly/99-raid-check:
WARNING: mismatch_cnt is not 0 on /dev/md10
WARNING: mismatch_cnt is not 0 on /dev/md11

OK, md10 and md11 are each RAID1s made from 2 x 72GB SCSI drives, on a Dell 2850 or similar dual single-core 3GHz server. These two mds are in
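
The counters live in sysfs and can be inspected and cleared by hand; on RAID1 a nonzero mismatch_cnt is often benign (swap or in-flight writes can cause it). A sketch for md10:

    # cat /sys/block/md10/md/mismatch_cnt           # current mismatch count
    # echo repair > /sys/block/md10/md/sync_action  # rewrite mismatched blocks
    # echo check > /sys/block/md10/md/sync_action   # later, re-run the weekly check by hand
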
2011 Nov 23
8
[PATCH 0/8] Add MD inspection support to libguestfs
This series fixes inspection in the case that fstab contains references to md devices. I've made a few changes since the previous posting, which I've summarised below.

[PATCH 1/8] build: Create an MD variant of the dummy Fedora image
I've double checked that no timestamp is required in the Makefile. The script will not run a second time to build fedora-md2.img.

[PATCH 2/8] build:
2005 Apr 27
23
xen on suse 9.3 and software raid
Has anyone had issues starting xen0 on an md? I have installed it a few times now with and without RAID. Any time I have a RAID1 mirror, Xen panics on boot when trying to mount /. It gets past waiting for /dev/md0 to appear. John
2018 Jan 12
5
[PATCH 1/1] appliance: init: Avoid running degraded md devices
The '--no-degraded' flag in the first mdadm call inhibits the startup of an array unless all expected drives are present, which prevents arrays from starting in a degraded state. The second mdadm call (after LVM is scanned) scans the as-yet-unused devices and attempts to run all arrays it finds, even if they are degraded. Two new tests are added. This fixes rhbz1527852. Here is the boot-benchmark
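
A minimal sketch of the two-phase assembly described above (the surrounding init-script plumbing is assumed, not quoted from the patch):

    $ mdadm -As --no-degraded   # phase 1: assemble only arrays with all members present
    $ vgchange -ay              # LVM scan/activation (arrays may sit under PVs)
    $ mdadm -As --run           # phase 2: run remaining arrays even if degraded
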