similar to: [PATCH] Create an addition MD variant of the dummy Fedora image

Displaying 20 results from an estimated 30000 matches similar to: "[PATCH] Create an addition MD variant of the dummy Fedora image"

2011 Nov 22
3
[PATCH 1/2] Create an MD variant of the dummy Fedora image
This change involves rewriting make-fedora-img.sh in Perl. This allows the flexibility to write mdadm.conf containing whichever UUIDs were randomly generated when the md devices were created.
---
 .gitignore                          |   2 +
 images/Makefile.am                  |  18 +++-
 images/guest-aux/make-fedora-img.pl | 194 +++++++++++++++++++++++++++++++++++
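A minimal sketch of the idea in shell (the patch itself is Perl; device nodes, array names and the mount point below are placeholders): create the arrays, then read back whatever UUIDs mdadm picked and write them into the image's mdadm.conf.

  # create the arrays; mdadm picks random UUIDs for them at this point
  mdadm --create /dev/md/boot --level=1 --raid-devices=2 /dev/loop0 /dev/loop1
  mdadm --create /dev/md/root --level=1 --raid-devices=2 /dev/loop2 /dev/loop3
  # query the generated UUIDs back and record them in the guest's mdadm.conf
  mdadm --detail --brief /dev/md/boot /dev/md/root > /mnt/guest/etc/mdadm.conf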
2012 Jan 23
0
[PATCH] maint: use $var notation rather than ${var} when possible
I noticed some uses of ${srcdir} in shell scripts. That is almost always better written as $srcdir. The patch below converts most such variable references. Here are the few remaining candidates:

  $ git grep -i -E '\$\{[a-zA-Z_0-9]+\}' | grep -v Makefile.in.in
  configure.ac: JAR_INSTALL_DIR=\${prefix}/share/java
  configure.ac: JNI_INSTALL_DIR=\${libdir}
  debian/rules: for TEST in
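For illustration (not part of the patch), the braces only matter when the variable name would otherwise run into the text that follows, which is why $srcdir alone is normally enough:

  # equivalent: the braces add nothing here
  echo "${srcdir}/images"
  echo "$srcdir/images"
  # braces are needed only when the name would be ambiguous
  echo "${file}name"   # expands $file, then appends the literal "name"
  echo "$filename"     # expands a different variable, $filename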
2011 Nov 23
2
[PATCH] New API: mdadm-stop for stopping MD devices.
This API is used to stop an md device. When we want to move a device to another md array, we should first stop the md device that contains it. Signed-off-by: Wanlong Gao <gaowanlong at cn.fujitsu.com>
---
 daemon/md.c                    | 16 ++++++++++++++++
 generator/generator_actions.ml |  9 +++++++++
 regressions/test-mdadm.sh      | 14 ++++++++++++++
 src/MAX_PROC_NR
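For context, a hedged host-side sketch of the operation this API is meant to wrap (device names are placeholders):

  # stop the array so its member devices are released
  mdadm --stop /dev/md0
  # a freed member can then be given to another array
  # (its old md superblock may need clearing first)
  mdadm --add /dev/md1 /dev/sda3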
2011 Nov 24
1
mdadm / Ubuntu 10.04 error
md_create: mdadm: boot: mdadm: boot is not a block device. at /home/rjones/d/libguestfs/images/guest-aux/make-fedora-img.pl line 95.

Looking into this, it appears the old version of mdadm shipped in Ubuntu (mdadm 2.6.7) doesn't support the notion of giving arbitrary names to devices. Thus you have to do:

  mdadm --create /dev/md0 [devices]

We do:

  mdadm --create boot [devices]

which it
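For comparison, a hedged sketch of the two invocation styles (RAID level, flags and device paths are illustrative):

  # accepted by both old and new mdadm: a numeric md device node
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/vda1 /dev/vdb1
  # newer mdadm treats a bare name as /dev/md/boot; mdadm 2.6.7 rejects it
  # because "boot" is not a block device
  mdadm --create boot --level=1 --raid-devices=2 /dev/vda1 /dev/vdb1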
2011 Oct 31
2
libguestfs and md devices
We've recently discovered that libguestfs can't handle guests which use md. There are (at least) 2 reasons for this: Firstly, the appliance doesn't include mdadm. Without this, md devices aren't detected during the boot process. Simply adding mdadm to the appliance package list fixes this. Secondly, md devices referenced in fstab as, e.g. /dev/md0, aren't handled
2011 Nov 24
1
[PATCH] New API: md-stop for stopping MD devices
This API is used to stop an md device. When we want to move a device to another md array, we should first stop the md device that contains it. Signed-off-by: Wanlong Gao <gaowanlong at cn.fujitsu.com>
---
 daemon/md.c                    | 16 ++++++++++++++++
 generator/generator_actions.ml |  9 +++++++++
 regressions/test-mdadm.sh      | 14 ++++++++++++++
 src/MAX_PROC_NR
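Assuming the API landed under this name, usage from guestfish would look roughly like this (image names and the md device path are illustrative):

  guestfish --rw -a disk1.img -a disk2.img run : list-md-devices
  # stop a specific array before reusing one of its member devices
  guestfish --rw -a disk1.img -a disk2.img run : md-stop /dev/md127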
2014 Oct 22
2
[PATCH] tests: rename $SRCDIR to $srcdir
No functional changes to the tests.
---
 tests/guests/Makefile.am                   | 12 ++++++------
 tests/guests/guest-aux/make-debian-img.sh  |  6 +++---
 tests/guests/guest-aux/make-fedora-img.pl  | 10 +++++-----
 tests/guests/guest-aux/make-ubuntu-img.sh  |  4 ++--
 tests/guests/guest-aux/make-windows-img.sh |  6 +++---
 5 files changed, 19 insertions(+), 19 deletions(-)
diff --git
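One plausible way to perform such a mechanical rename (a sketch, not necessarily how this patch was produced):

  git grep -l '\$SRCDIR' tests/guests | xargs sed -i 's/\$SRCDIR/$srcdir/g'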
2011 Nov 11
3
[PATCH v2] Add mdadm-create, list-md-devices APIs.
This adds the mdadm-create API for creating RAID devices, and includes various fixes for the other two patches. Rich.
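A hedged guestfish sketch of what the creation and listing calls look like (the API was later renamed md-create; the exact argument spellings here are assumptions about the final interface, and the disk images are placeholders):

  # build a RAID 1 array named "boot" from two added disks, then list arrays
  guestfish --rw -a disk1.img -a disk2.img run \
    : md-create boot "/dev/sda /dev/sdb" level:raid1 \
    : list-md-devices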
2018 Jan 14
0
[PATCH v2 1/3] appliance: init: Avoid running degraded md devices
The issue:
- raid1 will be in a degraded state if one of its components is a logical volume (LV)
- raid0 will be inoperable entirely (inaccessible from within the appliance) if one of its components is an LV
- raidN: expect the same issue for any RAID level, depending on how many components are inaccessible at the time mdadm runs and on the RAID redundancy
It happens because mdadm is launched prior to lvm
2010 Mar 04
1
removing a md/software raid device
Hello folks, I successfully stopped the software RAID. How can I delete the ones found on scan? I also see them in dmesg.

  [root@extragreen ~]# mdadm --stop --scan ; echo $?
  0
  [root@extragreen ~]# mdadm --examine --scan
  ARRAY /dev/md0 level=raid5 num-devices=4 UUID=89af91cb:802eef21:b2220242:b05806b5
  ARRAY /dev/md0 level=raid6 num-devices=4 UUID=3ecf5270:339a89cf:aeb092ab:4c95c5c3
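The usual answer is to wipe the md superblocks on the former member devices so that scans stop finding the old arrays; a hedged sketch (the member partitions are placeholders and must match what --examine reports):

  # stop anything still assembled
  mdadm --stop --scan
  # erase the md metadata on each former member device
  mdadm --zero-superblock /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
  # verify nothing is reported any more
  mdadm --examine --scan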
2011 Nov 22
2
[PATCH] inspection: Handle MD devices in fstab
This patch fixes inspection when fstab contains md devices specified as /dev/mdN. The appliance creates these devices without reference to the guest's mdadm.conf, so e.g. /dev/md0 in the guest will often be created as /dev/md127 in the appliance. With this patch, we match the UUIDs of detected md devices against the UUIDs specified in mdadm.conf, and map them appropriately when we
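The mapping works because the array UUID stays the same even when the device node name does not; a rough shell illustration (assuming the guest filesystem is mounted under /sysroot, which is an assumption about the appliance layout):

  # the guest's array may have been auto-assembled under a different name
  mdadm --detail /dev/md127 | grep -i uuid
  # the guest's own mdadm.conf ties that UUID to the name the guest expects
  grep '^ARRAY' /sysroot/etc/mdadm.conf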
2018 Jan 12
5
[PATCH 1/1] appliance: init: Avoid running degraded md devices
The '--no-degraded' flag in the first mdadm call inhibits the startup of an array unless all its expected drives are present, which prevents starting arrays in a degraded state. The second mdadm call (after LVM is scanned) scans the devices not yet used and attempts to run all found arrays, even if they are in a degraded state. Two new tests are added. This fixes rhbz1527852. Here is boot-benchmark
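A hedged sketch of the two-pass ordering described above (the exact flags used by the appliance init script may differ):

  # pass 1: assemble only arrays whose members are all present;
  # --no-degraded leaves incomplete arrays (e.g. ones with LV members) unstarted
  mdadm -As --auto=yes --no-degraded
  # LVM activation happens here, exposing LVs that back md members
  lvm vgchange -ay
  # pass 2: try again, now allowing the remaining arrays to run degraded
  mdadm -As --auto=yes --run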
2014 Oct 03
0
[PATCH v3] tests: Introduce test harness for running tests.
We would like to have a more flexible way to run tests, including running them on an installed copy of libguestfs, running them in parallel, and being able to express dependencies and ordering between tests and data files properly. Therefore introduce a test harness (test-harness) program which can run tests either from the locally built copy, or from an installed copy of the tests (in
2011 Nov 23
8
[PATCH 0/8] Add MD inspection support to libguestfs
This series fixes inspection in the case that fstab contains references to md devices. I've made a few changes since the previous posting, which I've summarised below. [PATCH 1/8] build: Create an MD variant of the dummy Fedora image I've double checked that no timestamp is required in the Makefile. The script will not run a second time to build fedora-md2.img. [PATCH 2/8] build:
2020 Sep 18
0
Drive failed in 4-drive md RAID 10
> I got the email that a drive in my 4-drive RAID10 setup failed. What are
> my options?
>
> Drives are WD1000FYPS (Western Digital 1 TB 3.5" SATA).
>
> mdadm.conf:
>
> # mdadm.conf written out by anaconda
> MAILADDR root
> AUTO +imsm +1.x -all
> ARRAY /dev/md/root level=raid10 num-devices=4
> UUID=942f512e:2db8dc6c:71667abc:daf408c3
>
2005 Dec 14
4
Centos + Xen 3.0 + Root on md = bad
Howdy folks, I'm trying to boot Xen 3.0 on a Centos 4(.2) system with root on a software raid1 volume. When booting the Xen kernel, 'raidautorun' in the /init nash script fails to assemble either the root md or the raid5 md containing my LVM volumes. If I move root to /dev/hda1, the system boots fine. raidautorun still fails to assemble the md devices, but mdadm
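A hedged sketch of the usual fallback when raidautorun (kernel autodetect) fails: have the initrd assemble from superblocks with mdadm instead, then activate LVM on top (assuming mdadm and lvm are available in that environment):

  # assemble arrays from their superblocks rather than partition-type autodetect
  mdadm --assemble --scan
  # then activate the volume group that lives on the raid5 array
  lvm vgchange -ay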
2014 Oct 05
0
[PATCH v5 1/7] tests: Introduce test harness for running tests.
We would like to have a more flexible way to run tests, including running them on an installed copy of libguestfs, running them in parallel, and being able to express dependencies and ordering between tests and data files properly. Therefore introduce a test harness (test-harness) program which can run tests either from the locally built copy, or from an installed copy of the tests (in
2020 Sep 18
4
Drive failed in 4-drive md RAID 10
I got the email that a drive in my 4-drive RAID10 setup failed. What are my options?

Drives are WD1000FYPS (Western Digital 1 TB 3.5" SATA).

mdadm.conf:

  # mdadm.conf written out by anaconda
  MAILADDR root
  AUTO +imsm +1.x -all
  ARRAY /dev/md/root level=raid10 num-devices=4 UUID=942f512e:2db8dc6c:71667abc:daf408c3

/proc/mdstat:

  Personalities : [raid10]
  md127 : active raid10 sdf1[2](F)
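The usual sequence is to pull the failed member and add a replacement; a hedged sketch using the names from the mdstat excerpt above (the replacement partition is a placeholder):

  # confirm which member is marked (F)ailed
  mdadm --detail /dev/md127
  # mark it failed (if not already) and remove it from the array
  mdadm /dev/md127 --fail /dev/sdf1 --remove /dev/sdf1
  # after replacing the disk and partitioning it like the others, re-add it
  mdadm /dev/md127 --add /dev/sdf1
  # watch the rebuild progress
  cat /proc/mdstat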
2011 Nov 25
2
[PATCH 0/2] MD device inspection
These patches are rebased on top of current master. In addition, I've made the following changes:
* Fixed whitespace error.
* Functions return -1 on error.
* Added a debug message when the guest contains md devices but nothing was parsed from mdadm.conf.
2015 Aug 06
0
[PATCH v4 01/17] tests: Introduce test harness for running tests.
We would like to have a more flexible way to run tests, including running them on an installed copy of libguestfs, running them in parallel, and being able to express dependencies and ordering between tests and data files properly. Therefore introduce a test harness (test-harness) program which can run tests either from the locally built copy, or from an installed copy of the tests (in