similar to: libguestfs and md devices

Displaying 20 results from an estimated 4000 matches similar to: "libguestfs and md devices"

2011 Oct 08
1
CentOS 6.0 CR mdadm-3.2.2 breaks Intel BIOS RAID
I just upgraded my home KVM server to CentOS 6.0 CR to make use of the latest libvirt, and now the RAID array holding my VM storage is missing. The upgrade to mdadm-3.2.2 appears to be the culprit. This is the output from mdadm when scanning that array:

# mdadm --detail --scan
ARRAY /dev/md0 metadata=imsm UUID=734f79cf:22200a5a:73be2b52:3388006b
ARRAY /dev/md126 metadata=imsm
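If the array is simply not being started after the upgrade, one hedged first recovery attempt (device names here are illustrative) is to clear any half-assembled remnant and let mdadm rescan the on-disk metadata:

    # stop the partially assembled container, then reassemble from scratch
    mdadm --stop /dev/md126
    mdadm --assemble --scan --verbose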
2018 Jan 12
5
[PATCH 1/1] appliance: init: Avoid running degraded md devices
The '--no-degraded' flag in the first mdadm call inhibits the startup of an array unless all expected drives are present, which prevents arrays from starting in a degraded state. The second mdadm call (after LVM is scanned) scans the as-yet-unused devices and attempts to run all arrays it finds, even if they are degraded. Two new tests are added. This fixes rhbz1527852. Here is boot-benchmark
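A minimal sketch of the two-phase assembly described above (the exact flags in the real appliance init script may differ):

    # phase 1: assemble only complete arrays; a missing member keeps an array inactive
    mdadm -v --assemble --scan --no-degraded
    # ... LVM scan happens here ...
    # phase 2: pick up the remaining devices and start arrays even if degraded
    mdadm -v --assemble --scan --run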
2015 Feb 18
5
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi, I just replaced Slackware64 14.1 running on my office's HP Proliant Microserver with a fresh installation of CentOS 7. The server has 4 x 250 GB disks. Every disk is configured like this:

* 200 MB /dev/sdX1 for /boot
* 4 GB /dev/sdX2 for swap
* 248 GB /dev/sdX3 for /

There are supposed to be no spare devices. /boot and swap are all supposed to be assembled in RAID level 1 across
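A hedged sketch of how such arrays might be created by hand, with no spares (device names illustrative; this is not a statement of what the CentOS installer actually does, and a real setup would still need filesystems and a bootloader):

    # RAID1 for /boot across all four first partitions
    mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd{a,b,c,d}1
    # RAID5 for / across all four third partitions
    mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd{a,b,c,d}3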
2012 Jan 20
0
CamelName patch
On 01/19/2012 08:43 PM, Richard W.M. Jones wrote:
> I don't remember this commit coming up for review, although it seems
> to have been pushed upstream:

Well spotted! I was just about to point out that you did review it, when I noticed I'd mixed this one up with a similar one. You reviewed the other one. I have pushed this one accidentally.

> commit
2009 Jul 24
1
virt-v2v
I've attached v2v/STATUS. There's still a bit to do. I'm not yet proposing this for inclusion, just discussion. Apart from the tool itself, I think there's mileage in considering how the functionality of Sys::Guestfs::Lib could be given more structure. I think there's considerable mileage in moving much of Sys::Guestfs::Lib into Sys::Guestfs::GuestOS. I haven't tried
2009 Nov 13
1
guestmount symlink issues
I'm trying to use guestmount to install some kernel modules in a guest:

[mbooth@mbooth linux-2.6 (amit)]$ make modules_install INSTALL_MOD_PATH=~/etch
ln: creating symbolic link `/home/mbooth/etch/lib/modules/2.6.32-rc6/source': No such file or directory
make: *** [_modinst_] Error 1

I think something's screwy with symlinks. In the following, /tmp/source is a symlink, and I
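For reference, a minimal guestmount invocation of the sort implied here (the disk image path is an assumption):

    # mount the guest's filesystems under ~/etch, letting inspection find mountpoints
    mkdir -p ~/etch
    guestmount -a etch.img -i ~/etch
    make modules_install INSTALL_MOD_PATH=~/etch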
2010 Aug 19
1
Proposed new libguestfs file APIS
As part of a new virt-v2v feature, I've been thinking about how to write data to an arbitrary block device in the appliance. I need to be able to write arbitrary chunks of data to specific places on the device. This will need a new API, as guestfs_pread can't open a block device. While I'm at it, I'd like to create a new family of APIs which operate on a file handle: int
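For illustration only: present-day libguestfs has pwrite-device, which writes a chunk of data at an arbitrary offset on a block device; a hedged guestfish sketch of that style of call (image name, data and offset are illustrative):

    # write the string DATA at offset 4096 on the appliance's first disk
    guestfish --rw -a disk.img run : pwrite-device /dev/sda DATA 4096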
2010 Aug 23
1
Proposed new file apis
I've attached a patch to generator.ml for the proposed new file APIs. Note that hread, hpread, hwrite and hpwrite are slightly different to the APIs I proposed previously. I've also added hallocate for good measure. Matt -- Matthew Booth, RHCA, RHCSS Red Hat Engineering, Virtualisation Team GPG ID: D33C3490 GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
2011 Dec 05
0
New release 0.8.5 of virt-v2v and virt-p2v
We just released virt-v2v 0.8.5, which also covers virt-p2v. This is primarily a bugfix release, with a couple of new features thrown in. The major changes are summarised below:

V2V
***
* Default -ic and -oc to qemu:///session or qemu:///system as appropriate depending on root.
* Allow Windows conversions to succeed when firstboot.bat, rhsrvany.exe and rhev-apt.exe aren't available.
*
2012 Jan 13
0
GObject bindings for stylistic review
This is a snapshot of the GObject bindings. This is literally mid-edit, and contains numerous known errors in its output! I'm posting it for review of the OCaml code. Matt -- Matthew Booth, RHCA, RHCSS Red Hat Engineering, Virtualisation Team GPG ID: D33C3490 GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
2012 Jan 17
0
GObject bindings (generated source)
I've attached the generated GObject bindings for direct review. Matt -- Matthew Booth, RHCA, RHCSS Red Hat Engineering, Virtualisation Team GPG ID: D33C3490 GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

-------------- next part --------------
A non-text attachment was scrubbed...
Name: Guestfs-1.0.gir
Type: text/xml
Size: 435504 bytes
Desc: not available
URL:
2010 Feb 08
2
Order of list-devices changes when libguestfs uses virtio
Output from guestfish after upgrading to rawhide libguestfs, compiled with virtio:

><fs> add_drive "/dev/Guests/RHEL52PV32"
><fs> add_cdrom "/var/lib/virt-v2v/transfer.iso"
><fs> launch
><fs> list-devices
/dev/sr0
/dev/vda

Note that the order has swapped. If these aren't consistent we don't have a good way to determine which host
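One hedged workaround, as an idea rather than an official fix, is to identify each device by a property such as its size instead of by its position in the list:

    ><fs> list-devices
    ><fs> blockdev-getsize64 /dev/vda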
2018 Dec 05
3
Accidentally nuked my system - any suggestions ?
On 04/12/2018 at 23:50, Stephen John Smoogen wrote:
> In the rescue mode, recreate the partition table which was on the sdb
> by copying over what is on sda
>
> sfdisk -d /dev/sda | sfdisk /dev/sdb
>
> This will give the kernel enough to know it has things to do on
> rebuilding parts.

Once I made sure I retrieved all my data, I followed your suggestion, and it looks
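For completeness: after the partition table is cloned, the new partitions still have to be re-added to each array; a hedged sketch (array and partition names illustrative):

    # re-add the replacement member and watch the rebuild
    mdadm /dev/md0 --add /dev/sdb1
    watch cat /proc/mdstat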
2010 Oct 19
3
more software raid questions
hi all! back in Aug several of you assisted me in solving a problem where one of my drives had dropped out of (or been kicked out of) the raid1 array. Something vaguely similar appears to have happened just a few minutes ago, upon rebooting after a small update. I received four emails like this, one for /dev/md0, one for /dev/md1, one for /dev/md125 and one for /dev/md126:

Subject: DegradedArray
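A hedged first round of diagnosis for a DegradedArray event (device names illustrative):

    cat /proc/mdstat
    mdadm --detail /dev/md0
    mdadm --examine /dev/sda1    # inspect the superblock of a suspect member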
2009 Aug 18
1
CHROOT_IN and CHROOT_OUT
I hit the following weirdness in guestfish:

><fs> ll /../proc/modules
-r--r--r-- 1 root root 0 Aug 18 10:37 /sysroot/../proc/modules
><fs> cat /../proc/modules
libguestfs: error: open: /../proc/modules: No such file or directory

The underlying reason for this seems to be that ll uses sysroot_path to establish a path before operating on it, whereas cat uses CHROOT_IN and
2012 Dec 20
1
Supporting btrfs subvolumes during inspection
We've currently got a bug in libguestfs which means we can't inspect filesystems in btrfs subvolumes: https://bugzilla.redhat.com/show_bug.cgi?id=824021 This is the default configuration if you select btrfs in F17+. The issue is that it requires an API change to fix, as the return values of inspect_os, inspect_get_filesystems and inspect_get_mountpoints can't express a btrfs
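For illustration, a btrfs subvolume can already be mounted explicitly by passing mount options; a hedged guestfish sketch (device and subvolume name are illustrative):

    ><fs> mount-options subvol=root /dev/sda2 /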
2019 Apr 03
4
New post message
Hello! On my server PC I have CentOS 7 installed (CentOS Linux release 7.6.1810). There are four RAID1 arrays (software RAID):

md124 - /boot/efi
md125 - /boot
md126 - /bd
md127 - /

I have configured booting from both drives, and everything works fine when both drives are connected. But if I disconnect either drive from which the RAID1 is built, the system crashes: there is a partial boot and as a result the
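One hedged piece of the usual setup is making sure each drive's EFI System Partition has its own boot entry; the label and loader path below are assumptions, not a verified fix for this crash:

    # register the second drive's ESP as a fallback boot entry
    efibootmgr -c -d /dev/sdb -p 1 -L "CentOS (disk 2)" -l '\EFI\centos\shimx64.efi'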
2009 Jul 15
1
Pseudo code for v2v
I've attached my initial thoughts on the design for the v2v tool. -- Matthew Booth, RHCA, RHCSS Red Hat Engineering, Virtualisation Team M: +44 (0)7977 267231 GPG ID: D33C3490 GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: v2v-pseudo.txt
URL:
2019 Apr 04
2
RAID1 boot issue
Right, that's my problem: a drive is unplugged while the system is not running, and mdadm will not reassemble the array on boot. Red Hat Bugzilla Bug 1451660 says "Fixed In Version: dracut-033-546.el7", but I have dracut version 033-554.el7 and this bug is not fixed!

> I believe you are hitting this bug:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1451660
>
> That is,
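Until the dracut fix actually works, a hedged manual workaround from the dracut emergency shell (array name illustrative):

    # force the degraded array to start, then let boot continue
    mdadm --run /dev/md127
    exit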
2020 Sep 18
4
Drive failed in 4-drive md RAID 10
I got the email that a drive in my 4-drive RAID10 setup failed. What are my options? Drives are WD1000FYPS (Western Digital 1 TB 3.5" SATA).

mdadm.conf:

# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/root level=raid10 num-devices=4 UUID=942f512e:2db8dc6c:71667abc:daf408c3

/proc/mdstat:

Personalities : [raid10]
md127 : active raid10 sdf1[2](F)
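A hedged outline of replacing the failed member (the names match the mdstat excerpt above, but the steps are illustrative, not specific advice for this array):

    # mark and remove the failed member (it may already be flagged failed)
    mdadm /dev/md127 --fail /dev/sdf1 --remove /dev/sdf1
    # after physically swapping the disk and partitioning it to match:
    mdadm /dev/md127 --add /dev/sdf1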