similar to: recover lvm from pv

Displaying 20 results from an estimated 200 matches similar to: "recover lvm from pv"

2010 Oct 19
3
more software raid questions
hi all! Back in August several of you assisted me in solving a problem where one of my drives had dropped out of (or been kicked out of) the RAID1 array. Something vaguely similar appears to have happened just a few minutes ago, upon rebooting after a small update. I received four emails like this, one for /dev/md0, one for /dev/md1, one for /dev/md125 and one for /dev/md126: Subject: DegradedArray
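A quick way to see which member dropped out of each array is mdadm itself; a minimal sketch, assuming the md device names from the message above:

    # Summary of all arrays; a degraded mirror shows [U_] instead of [UU]
    cat /proc/mdstat
    # Per-array detail; look for "State : clean, degraded" and a "removed" slot
    mdadm --detail /dev/md0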
2018 Dec 05
0
Accidentally nuked my system - any suggestions ?
On 05/12/2018 05:37, Nicolas Kovacs wrote: > On 04/12/2018 at 23:50, Stephen John Smoogen wrote: >> In the rescue mode, recreate the partition table which was on the sdb >> by copying over what is on sda >> >> >> sfdisk -d /dev/sda | sfdisk /dev/sdb >> >> This will give the kernel enough to know it has things to do on >> rebuilding parts. >
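Once the partition table is cloned, the usual next step is to re-add the new partitions so the mirrors resync; a hedged sketch, with illustrative device and array names:

    # Re-add the rebuilt partitions to their arrays
    mdadm /dev/md0 --add /dev/sdb1
    mdadm /dev/md1 --add /dev/sdb2
    # Watch the resync progress
    cat /proc/mdstat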
2018 Jan 12
5
[PATCH 1/1] appliance: init: Avoid running degraded md devices
The '--no-degraded' flag in the first mdadm call inhibits the startup of an array unless all expected drives are present. This prevents arrays from starting in a degraded state. The second mdadm call (after LVM is scanned) will scan the as-yet-unused devices and attempt to run all arrays it finds, even if they are degraded. Two new tests are added. This fixes rhbz1527852. Here is boot-benchmark
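The two-pass assembly the patch describes can be approximated by hand with plain mdadm calls; a sketch of the idea, not the literal appliance init code:

    # Pass 1: assemble only arrays whose expected members are all present
    mdadm --assemble --scan --no-degraded
    # (LVM is scanned between the two passes)
    # Pass 2: retry, this time starting arrays even if they are degraded
    mdadm --assemble --scan --run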
2015 Feb 18
3
CentOS 7: software RAID 5 array with 4 disks and no spares?
On 18/02/2015 09:24, Michael Volz wrote: > Hi Niki, > > md127 apparently only uses 81.95GB per disk. Maybe one of the partitions has the wrong size. What's the output of lsblk?

[root@nestor:~] # lsblk
NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda         8:0    0 232,9G  0 disk
├─sda1      8:1    0   3,9G  0 part
│ └─md126   9:126  0   3,9G  0 raid1 [SWAP]
├─sda2      8:2
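To check whether one member partition is smaller than the others, byte-exact sizes help more than the human-readable ones; a generic sketch:

    # Report sizes in bytes to spot an undersized member
    lsblk -b -o NAME,SIZE,TYPE /dev/sda /dev/sdb /dev/sdc /dev/sdd
    # "Used Dev Size" shows how much of each member md actually uses
    mdadm --detail /dev/md127 | grep -E 'Array Size|Used Dev Size'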
2019 Jan 10
0
Help finishing off Centos 7 RAID install
On 1/10/19 8:35 AM, Simon Matter via CentOS wrote:
> Are you sure?

Yes.

> How is the EFI firmware going to know about the RAID1?

It doesn't specifically. Anaconda will create two EFI boot entries, each referring to one of the mirror components:

# efibootmgr -v
BootCurrent: 0001
Timeout: 1 seconds
BootOrder: 0001,0000
Boot0000* CentOS Linux
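If only one such entry exists, a second one can be created by hand; a hedged sketch, where the disk, partition number, label and loader path are all illustrative:

    # Register the second mirror member's ESP with the firmware
    efibootmgr --create --disk /dev/sdb --part 1 \
        --label "CentOS Linux (disk 2)" --loader '\EFI\centos\shim.efi'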
2019 Jul 23
2
mdadm issue
Just rebuilt a C6 box last week as C7. Four drives, and sda and sdb for root, with RAID-1 and luks encryption. Layout:

lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 931.5G  0 disk
├─sda1   8:1    0   200M  0 part /boot/efi
├─sda2
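To sort out which partition is an md member, a LUKS container or a plain filesystem, the FSTYPE column is usually enough; a generic sketch:

    # linux_raid_member = md component, crypto_LUKS = encrypted container
    lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT
    blkid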
2019 Jan 10
1
Help finishing off Centos 7 RAID install
> It doesn't specifically. Anaconda will create two EFI boot entries,
> each referring to one of the mirror components:
>
> # efibootmgr -v
> BootCurrent: 0001
> Timeout: 1 seconds
> BootOrder: 0001,0000
> Boot0000* CentOS Linux
> HD(1,GPT,534debcc-f3d6-417a-b5d4-10b4ba5c1f7d,0x800,0x5f000)/File(\EFI\CENTOS\SHIM.EFI)
> Boot0001* CentOS Linux
>
2018 Dec 25
0
upgrading 7.5 ==> 7.6
On Fri, Dec 21, 2018 at 06:13:26AM +0100, Simon Matter via CentOS wrote: > > I didn't see any issues with RAID. I think those problems arise only if > you have old RAID devices created with CentOS releases older than 7. Those > were probably created with the 0.9 metadata version, which may be a problem. > Currently the 1.2 metadata version is created and I didn't see any issues with
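Checking which superblock version an existing array was created with is straightforward; a minimal sketch, with illustrative device names:

    # Metadata version of the assembled array (0.90 vs 1.2)
    mdadm --detail /dev/md0 | grep -i version
    # Or examine a member device's superblock directly
    mdadm --examine /dev/sda1 | grep -i version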
2013 Mar 03
4
Strange behavior from software RAID
Somewhere, mdadm is caching information. Here is my /etc/mdadm.conf file:

more /etc/mdadm.conf
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382
ARRAY /dev/md2 level=raid1 num-devices=2
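On CentOS, the "cached" copy usually lives in the initramfs rather than in /etc/mdadm.conf itself; a hedged sketch of refreshing both (dracut assumed):

    # Emit ARRAY lines for the currently assembled arrays;
    # review them before merging into /etc/mdadm.conf
    mdadm --detail --scan
    # Rebuild the initramfs so early boot sees the updated configuration
    dracut -f /boot/initramfs-$(uname -r).img $(uname -r)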
2015 Feb 18
0
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi Niki, md127 apparently only uses 81.95GB per disk. Maybe one of the partitions has the wrong size. What's the output of lsblk? Regards Michael ----- Original Message ----- From: "Niki Kovacs" <info at microlinux.fr> To: "CentOS mailing list" <CentOS at centos.org> Sent: Wednesday, 18 February 2015 08:09:13 Subject: [CentOS] CentOS 7: software RAID 5
2015 Feb 18
5
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi, I just replaced Slackware64 14.1 running on my office's HP Proliant Microserver with a fresh installation of CentOS 7. The server has 4 x 250 GB disks. Every disk is configured like this:
* 200 MB /dev/sdX1 for /boot
* 4 GB /dev/sdX2 for swap
* 248 GB /dev/sdX3 for /
There are supposed to be no spare devices. /boot and swap are all supposed to be assembled in RAID level 1 across
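For reference, a layout like this would typically be created along the following lines; a sketch only, with illustrative device names (the thread's / is a 4-disk RAID 5):

    # /boot and swap as 4-way RAID 1, / as RAID 5 across the four disks
    mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[abcd]1
    mdadm --create /dev/md1 --level=1 --raid-devices=4 /dev/sd[abcd]2
    mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[abcd]3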
2018 Dec 05
3
Accidentally nuked my system - any suggestions ?
On 04/12/2018 at 23:50, Stephen John Smoogen wrote: > In the rescue mode, recreate the partition table which was on the sdb > by copying over what is on sda > > > sfdisk -d /dev/sda | sfdisk /dev/sdb > > This will give the kernel enough to know it has things to do on > rebuilding parts. Once I made sure I retrieved all my data, I followed your suggestion, and it looks
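On a BIOS/MBR machine, a disk overwritten by dd also loses its boot code, so after re-adding the mirror members the boot loader should be reinstalled on the repaired disk; a hedged sketch for CentOS 7:

    # Re-add the cloned partition to its array, then reinstall GRUB
    mdadm /dev/md0 --add /dev/sdb1    # device names illustrative
    grub2-install /dev/sdb            # BIOS/MBR boot only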
2011 Nov 24
1
mdadm / RHEL 6 error
libguestfs: error: md_detail: mdadm: md device /dev/md125 does not appear to be active.
FAIL: test-mdadm.sh
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
virt-df lists disk usage of guests without needing to install any software inside the virtual machine. Supports Linux and Windows. http://et.redhat.com/~rjones/virt-df/
2019 Jul 23
1
mdadm issue
> On 23.07.2019 at 22:39, Gordon Messmer <gordon.messmer at gmail.com> wrote: > > I still don't understand how this relates to md125. I don't see it referenced in mdadm.conf. It sounds like you see it in the output from lsblk, but only because you manually assembled it. Do you expect there to be a luks volume there? To check: cryptsetup isLuks <device>
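The check at the end reports only via its exit status; a small usage sketch:

    # Exit status 0 if the device holds a LUKS header, non-zero otherwise
    cryptsetup isLuks /dev/md125 && echo "LUKS volume" || echo "not LUKS"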
2019 Jul 08
2
Server fails to boot
First some history. This is an Intel MB and processor some 6 years old, initially running CentOS 6. It has 4 x 1TB SATA drives set up as two mdraid 1 mirrors. It has performed really well in a rural setting with frequent power cuts, which the UPS has dealt with: it auto-shuts the server down after a few minutes and then auto-restarts it when power is restored. The clients needed a Windoze server
2019 Apr 03
4
New post message
Hello! On my server PC I have CentOS 7 installed, CentOS Linux release 7.6.1810. There are four RAID1 arrays (software RAID):
md124 - /boot/efi
md125 - /boot
md126 - /bd
md127 - /
I have configured booting from both drives, and everything works fine if both drives are connected. But if I disconnect either drive from which the RAID1 is built, the system crashes: there is a partial boot and as a result the
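Degraded boot can be tested without physically pulling a drive by failing a member in software; a hedged sketch, with illustrative array and partition names:

    # Simulate a missing disk, reboot to test, then restore the member
    mdadm /dev/md127 --fail /dev/sdb4
    # ... reboot and verify the system still comes up ...
    mdadm /dev/md127 --remove /dev/sdb4
    mdadm /dev/md127 --add /dev/sdb4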
2017 Nov 13
1
Shared storage showing 100% used
Hello list, I recently enabled shared storage on a working cluster with nfs-ganesha and am just storing my ganesha.conf file there so that all 4 nodes can access it (baby steps). It was all working great for a couple of weeks until I was alerted that /run/gluster/shared_storage was full, see below. There was no warning; it went from fine to critical overnight.
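A first step when a small shared volume fills overnight is to see what grew; a generic sketch:

    # Overall usage of the shared-storage mount
    df -h /run/gluster/shared_storage
    # Largest entries inside it, sorted by size
    du -sh /run/gluster/shared_storage/* 2>/dev/null | sort -h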
2018 Dec 04
5
Accidentally nuked my system - any suggestions ?
Hi, My workstation is running CentOS 7 on two disks (sda and sdb) in a software RAID 1 setup. It looks like I accidentally nuked it. I wanted to write an installation ISO file to a USB disk, and instead of typing dd if=install.iso of=/dev/sdc I typed /dev/sdb. As soon as I hit <Enter>, the screen froze. I tried a hard reset, but of course, the boot process would stop short very early in
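The classic guard against this mistake is to confirm the target device immediately before writing; a hedged sketch:

    # Identify the USB stick by size, model and transport before writing
    lsblk -o NAME,SIZE,MODEL,TRAN
    # Only then write; status=progress makes the activity visible
    dd if=install.iso of=/dev/sdc bs=4M status=progress oflag=sync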
2018 Dec 04
0
Accidentally nuked my system - any suggestions ?
Nicolas Kovacs wrote: > > My workstation is running CentOS 7 on two disks (sda and sdb) in a > software RAID 1 setup. > > It looks like I accidentally nuked it. I wanted to write an installation > ISO file to a USB disk, and instead of typing dd if=install.iso > of=/dev/sdc I typed /dev/sdb. As soon as I hit <Enter>, the screen froze. > > I tried a hard reset, but
2019 Jul 23
0
mdadm issue
I still don't understand how this relates to md125. I don't see it referenced in mdadm.conf. It sounds like you see it in the output from lsblk, but only because you manually assembled it. Do you expect there to be a luks volume there?