Results similar to: "Excluding block device from mdadm scan at boot"

Displaying 20 results from an estimated 8000 matches similar to: "Excluding block device from mdadm scan at boot"
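The question these results match is about keeping mdadm from assembling a particular block device at boot. One common approach, sketched below with placeholder device patterns and a placeholder array UUID, is to restrict the DEVICE and AUTO lines in /etc/mdadm.conf and then rebuild the initramfs so the boot-time copy of the file matches:

  # /etc/mdadm.conf -- minimal sketch; adjust device patterns and UUID to your system
  DEVICE /dev/sda* /dev/sdb*        # only these members are considered during scans
  AUTO -all                         # do not auto-assemble arrays that are not listed below
  ARRAY /dev/md0 UUID=<array-uuid>

  # the initramfs carries its own copy of mdadm.conf, so regenerate it afterwards
  dracut -f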

2019 Jul 23
2
mdadm issue
Just rebuilt a C6 box last week as C7. Four drives, with sda and sdb for root, using RAID-1 and LUKS encryption. Layout: lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 931.5G 0 disk ├─sda1 8:1 0 200M 0 part /boot/efi ├─sda2
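A quick way to confirm what the rebuilt installer actually created, i.e. which md arrays exist and where the LUKS container sits in the stack (generic commands, no names assumed):

  cat /proc/mdstat                              # arrays the kernel has assembled
  lsblk -o NAME,TYPE,FSTYPE,SIZE,MOUNTPOINT     # shows crypto_LUKS sitting on top of the raid1 device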
2015 Feb 18
0
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi Niki, md127 apparently only uses 81.95GB per disk. Maybe one of the partitions has the wrong size. What's the output of lsblk? Regards Michael ----- Original Message ----- From: "Niki Kovacs" <info at microlinux.fr> To: "CentOS mailing list" <CentOS at centos.org> Sent: Wednesday, 18 February 2015 08:09:13 Subject: [CentOS] CentOS 7: software RAID 5
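To see why the array uses only part of each disk, it helps to compare the partition sizes with what the member superblocks report; the member partition name below (sda3) is an assumption based on the layout described later in the thread, md127 comes from the message itself:

  lsblk -b /dev/sda /dev/sdb /dev/sdc /dev/sdd          # exact partition sizes in bytes
  mdadm --examine /dev/sda3 | grep -i 'dev size'        # what the member superblock thinks is usable
  mdadm --detail /dev/md127 | grep -iE 'array size|dev size'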
2011 Oct 08
1
CentOS 6.0 CR mdadm-3.2.2 breaks Intel BIOS RAID
I just upgraded my home KVM server to CentOS 6.0 CR to make use of the latest libvirt and now my RAID array with my VM storage is missing. It seems that the upgrade to mdadm-3.2.2 is the culprit. This is the output from mdadm when scanning that array, # mdadm --detail --scan ARRAY /dev/md0 metadata=imsm UUID=734f79cf:22200a5a:73be2b52:3388006b ARRAY /dev/md126 metadata=imsm
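When an mdadm upgrade changes how IMSM (Intel BIOS RAID) containers are reported, one hedged recovery path is to compare the new scan output with the existing config and, if they differ, refresh the config and the initramfs. This is a sketch only; overwriting mdadm.conf wholesale drops any MAILADDR or other custom lines:

  mdadm --detail --scan          # what mdadm 3.2.2 detects now (container + member array)
  cat /etc/mdadm.conf            # what the system still expects
  mdadm --detail --scan > /etc/mdadm.conf
  dracut -f                      # rebuild the initramfs with the updated config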
2015 Feb 18
5
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi, I just replaced Slackware64 14.1 running on my office's HP Proliant Microserver with a fresh installation of CentOS 7. The server has 4 x 250 GB disks. Every disk is configured like this: * 200 MB /dev/sdX1 for /boot * 4 GB /dev/sdX2 for swap * 248 GB /dev/sdX3 for / There are supposed to be no spare devices. /boot and swap are all supposed to be assembled in RAID level 1 across
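Assuming root is the RAID 5 array (as the subject suggests) and using the sdX1/sdX2/sdX3 partitioning described above, the intended layout would normally be created along these lines; the md device names are placeholders:

  mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[abcd]1   # /boot, RAID 1
  mdadm --create /dev/md1 --level=1 --raid-devices=4 /dev/sd[abcd]2   # swap, RAID 1
  mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[abcd]3   # /, RAID 5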
2018 Dec 05
0
Accidentally nuked my system - any suggestions ?
On 05/12/2018 05:37, Nicolas Kovacs wrote: > On 04/12/2018 at 23:50, Stephen John Smoogen wrote: >> In the rescue mode, recreate the partition table which was on the sdb >> by copying over what is on sda >> >> >> sfdisk -d /dev/sda | sfdisk /dev/sdb >> >> This will give the kernel enough to know it has things to do on >> rebuilding parts. >
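The sfdisk pipe quoted above works for MBR/DOS disk labels; a cautious variant keeps a backup of the good table first, and GPT disks (an assumption here) need sgdisk instead:

  sfdisk -d /dev/sda > sda.parttable.backup     # keep a copy of the known-good table
  sfdisk -d /dev/sda | sfdisk /dev/sdb          # replicate it onto sdb (MBR)
  sgdisk -R /dev/sdb /dev/sda                   # GPT equivalent: copy sda's table to sdb
  sgdisk -G /dev/sdb                            # give sdb fresh, unique GUIDs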
2015 Feb 18
3
CentOS 7: software RAID 5 array with 4 disks and no spares?
On 18/02/2015 09:24, Michael Volz wrote: > Hi Niki, > > md127 apparently only uses 81.95GB per disk. Maybe one of the partitions has the wrong size. What's the output of lsblk? [root at nestor:~] # lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 232,9G 0 disk ├─sda1 8:1 0 3,9G 0 part │ └─md126 9:126 0 3,9G 0 raid1 [SWAP] ├─sda2 8:2
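To check whether any of the four disks ended up as a spare rather than an active member (the question in the subject), the array itself can be queried; md127 is taken from the output above:

  mdadm --detail /dev/md127      # lists Raid Devices, Spare Devices and each disk's role
  cat /proc/mdstat               # a trailing (S) after a member marks it as a spare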
2018 Dec 25
0
upgrading 7.5 ==> 7.6
On Fri, Dec 21, 2018 at 06:13:26AM +0100, Simon Matter via CentOS wrote: > > I didn't see any issues with RAID. I think those problems arise only if > you have old RAID devices created with CentOS releases older than 7. Those > were probably created with the 0.9 metadata version, which may be a problem. > Currently the 1.2 metadata version is created and I didn't see any issues with
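Checking which superblock version an existing array uses is straightforward; md0 and sda1 below are placeholders:

  mdadm --detail /dev/md0 | grep -i version      # 0.90 for old arrays, 1.2 for current defaults
  mdadm --examine /dev/sda1 | grep -i version    # same information read from a member device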
2020 Nov 15
5
(C8) root on mdraid
Hello everyone. I'm trying to install CentOS 8 with root and swap partitions on software RAID. The plan is: - create md0, RAID level 1, with 2 hard drives: /dev/sda and /dev/sdb, using a Linux rescue CD, - install CentOS 8 in VirtualBox on my laptop, - rsync the CentOS 8 root partition onto /dev/md0p1, - chroot into the CentOS 8 root partition, - configure /etc/mdadm.conf, grub.cfg, initramfs, install
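A sketch of the finishing steps inside the chroot, matching the plan above (BIOS boot assumed; an EFI setup would handle the bootloader differently):

  mdadm --detail --scan > /etc/mdadm.conf            # record the array for early boot
  dracut -f --regenerate-all                         # the initramfs must know how to assemble md0
  grub2-mkconfig -o /boot/grub2/grub.cfg
  grub2-install /dev/sda && grub2-install /dev/sdb   # boot code on both RAID members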
2011 Oct 31
2
libguestfs and md devices
We've recently discovered that libguestfs can't handle guests which use md. There are (at least) 2 reasons for this: Firstly, the appliance doesn't include mdadm. Without this, md devices aren't detected during the boot process. Simply adding mdadm to the appliance package list fixes this. Secondly, md devices referenced in fstab as, e.g. /dev/md0, aren't handled
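One way to sidestep the device-naming part of the problem, regardless of what the appliance supports, is to reference the filesystem by UUID rather than by /dev/md0 in the guest's fstab; the UUID below is a placeholder found with blkid:

  blkid /dev/md0                                   # prints the filesystem UUID
  # /etc/fstab
  UUID=<filesystem-uuid>   /   ext4   defaults   1 1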
2018 Jan 12
5
[PATCH 1/1] appliance: init: Avoid running degraded md devices
The '--no-degraded' flag in the first mdadm call inhibits the startup of an array unless all expected drives are present, which prevents arrays from being started in a degraded state. The second mdadm call (after LVM is scanned) scans the devices not yet in use and attempts to run all arrays it finds, even if they are degraded. Two new tests are added. This fixes rhbz1527852. Here is boot-benchmark
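A paraphrase of the described two-pass logic in plain mdadm invocations (this is not the literal patch):

  mdadm --assemble --scan --no-degraded    # first pass: start only arrays with all members present
  # ... LVM volume groups are scanned and activated here ...
  mdadm --assemble --scan --run            # second pass: start whatever is left, degraded or not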
2019 Apr 03
4
New post message
Hello! On my server PC I have CentOS 7 installed, CentOS Linux release 7.6.1810. There are four RAID1 arrays (software RAID): md124 - /boot/efi, md125 - /boot, md126 - /bd, md127 - /. I have configured booting from both drives, and everything works fine if both drives are connected. But if I disconnect either of the drives the RAID1 is built from, the system crashes: there is a partial boot and as a result the
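Before pulling a drive it is worth confirming that every array is redundant and that the firmware has a boot entry for each disk's EFI system partition; the loader path in the last command is an assumption for a CentOS install:

  cat /proc/mdstat       # all four arrays should show [UU]
  efibootmgr -v          # there should be one boot entry per disk's ESP
  efibootmgr -c -d /dev/sdb -p 1 -L "CentOS (disk 2)" -l '\EFI\centos\shimx64.efi'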
2013 Feb 11
1
mdadm: hot remove failed for /dev/sdg: Device or resource busy
Hello all, I have run into a sticky problem with a failed device in an md array, and I asked about it on the linux raid mailing list, but since the problem may not be md-specific, I am hoping to find some insight here. (If you are on the MD list, and are seeing this twice, I humbly apologize.) The summary is that during a reshape of a raid6 on an up to date CentOS 6.3 box, one disk failed, and
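The usual sequence for a failed member, with placeholder names for the array and disk described above, plus the keyword forms that clear slots the kernel can no longer address:

  mdadm /dev/md0 --fail /dev/sdg      # mark it failed first (if md has not already)
  mdadm /dev/md0 --remove /dev/sdg    # then remove it from the array
  mdadm /dev/md0 --remove failed      # remove all members marked failed
  mdadm /dev/md0 --remove detached    # remove members whose device node is gone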
2015 Mar 18
0
unable to recover software raid1 install
On Tue, 2015-03-17 at 23:28 +0100, johan.vermeulen7 at telenet.be wrote: > > on a CentOS 5 system installed with software RAID I'm getting: > > raid1: raid set md127 active with 2 out of 2 mirrors > > md:.... autorun DONE > > md: Autodetecting RAID arrays > > md: autorun..... > > md : autorun DONE > > trying to resume from /dev/md1 Hi
2017 Apr 10
0
lvm cache + qemu-kvm stops working after about 20GB of writes
Adding Paolo and Miroslav. On Sat, Apr 8, 2017 at 4:49 PM, Richard Landsman - Rimote <richard at rimote.nl > wrote: > Hello, > > I would really appreciate some help/guidance with this problem. First of > all, sorry for the long message. I would file a bug, but do not know if it > is my fault, dm-cache, qemu or (probably) a combination of both. And I can > imagine some of
2016 Oct 19
0
renaming mdadm name
Hi, I have a disk where two of the partitions are part of a RAID1 setup. I'm trying to rename the second RAIDed partition. mdadm -E /dev/sdc4 /dev/sdc4: Magic : a92b4efc Version : 1.2 Feature Map : 0x1 Array UUID : 83d7657b:ebfddcb7:36b0fa14:d29a350c Name : oldname:2 Creation Time : Tue Aug 30 15:25:10 2016 Raid Level : raid1 Raid Devices : 2
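The name stored in a v1.x superblock can be changed at assembly time with --update=name; a minimal sketch, with the new name and the second member device as placeholders (see mdadm(8) for exactly how the new name is supplied):

  mdadm --stop /dev/md2                                                  # the array must be stopped first
  mdadm --assemble /dev/md2 --update=name --name=newname /dev/sdc4 /dev/sdd4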
2015 Mar 07
2
which uuid to specify a raid in fstab
I'm confused about which UUID to use to identify a software RAID in fstab. lsblk -fs shows: md127p1 ext4 c43af789-82aa-49e9-a8ed-acd52b1cdd58 /y --- md127 ext4 39c20575-4257-4fd7-b5c8-8a15757e9e8e --- sdb1 linux_r hostname:0 af77830e-8cfd-9012-62ce-e57105c3bf6c --- sdb --- sdc1 linux_r hostname:0 af77830e-8cfd-9012-62ce-e57105c3bf6c
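For fstab, the UUID to use is the filesystem UUID that lsblk shows next to md127p1; the md member UUID (af77830e-...) identifies the array to mdadm.conf, not a mountable filesystem. Using the values from the output above:

  # /etc/fstab
  UUID=c43af789-82aa-49e9-a8ed-acd52b1cdd58   /y   ext4   defaults   0 2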
2018 Dec 05
3
Accidentally nuked my system - any suggestions ?
On 04/12/2018 at 23:50, Stephen John Smoogen wrote: > In the rescue mode, recreate the partition table which was on the sdb > by copying over what is on sda > > > sfdisk -d /dev/sda | sfdisk /dev/sdb > > This will give the kernel enough to know it has things to do on > rebuilding parts. Once I made sure I retrieved all my data, I followed your suggestion, and it looks
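Once the partition table is back on sdb, the new partitions still have to be re-added to their arrays so the mirrors can resync; the array/partition pairings below are placeholders to be matched against /proc/mdstat:

  mdadm /dev/md0 --add /dev/sdb1
  mdadm /dev/md1 --add /dev/sdb2
  watch cat /proc/mdstat          # follow the resync progress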
2019 Feb 25
0
Problem with mdadm, raid1 and automatically adds any disk to raid
In article <20190225050144.GA5984 at button.barrett.com.au>, Jobst Schmalenbach <jobst at barrett.com.au> wrote: > Hi. > > CENTOS 7.6.1810, fresh install - use this as a base to create/upgrade new/old machines. > > I was trying to set up two disks as a RAID1 array, using these lines > > mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1
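For a mirror, the create command needs --level=1; the quoted line's --level=0 would build a stripe instead. A sketch with the second member device assumed to be /dev/sdc1:

  mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1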
2019 Oct 28
1
NFS shutdown issue
Hi all, I have an odd interaction on a CentOS 7 file server. The basic setup is a minimal 7.x install. I have 4 internal drives (/dev/sd[a-d]) configured in a RAID5 and mounted locally on /data. This is exported via NFS to ~12 workstations which use the exported file systems for /home. I have an external drive connected via USB (/dev/sde) and mounted on /rsnapshot. I use rsnapshot to back up
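The export itself is a single line in /etc/exports; the client network below is an assumption:

  # /etc/exports
  /data    192.168.1.0/24(rw,sync)

  exportfs -ra      # re-read the exports without restarting nfs-server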
2019 Jul 23
0
mdadm issue
I still don't understand how this relates to md125. I don't see it referenced in mdadm.conf. It sounds like you see it in the output from lsblk, but only because you manually assembled it. Do you expect there to be a LUKS volume there?
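To pin down where md125 comes from, it helps to compare what is assembled, what is merely discoverable, and what the config file actually lists, then test whether the device holds a LUKS header:

  mdadm --detail --scan            # arrays currently assembled
  mdadm --examine --scan           # arrays whose members are visible, assembled or not
  grep -i md125 /etc/mdadm.conf    # is it listed in the config at all?
  cryptsetup isLuks /dev/md125 && echo "md125 contains a LUKS header"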