Jobst Schmalenbach
2019-Feb-25 05:01 UTC
[CentOS] Problem with mdadm, raid1 and automatically adds any disk to raid
Hi.

CentOS 7.6.1810, fresh install - I use this as a base to create/upgrade
new/old machines.

I was trying to set up two disks as a RAID1 array, using these lines:

  mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
  mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2
  mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2 /dev/sdb3 /dev/sdc3

Then I did an lsblk and realized that I had used --level=0 instead of
--level=1 (typing mistake). The SIZE was reported as double because I had
created a striped set by mistake when I wanted a mirrored one.

Here my problem starts: I cannot get rid of the /dev/mdX devices no matter
what I do (or try to do).

I tried to delete the mdX devices: I removed the disks by failing them,
then removed each array md0, md1 and md2. I also did

  dd if=/dev/zero of=/dev/sdX bs=512 seek=$(($(blockdev --getsz /dev/sdX)-1024)) count=1024
  dd if=/dev/zero of=/dev/sdX bs=512 count=1024
  mdadm --zero-superblock /dev/sdX

Then I wiped each partition of the drives using fdisk.

Now, every time I start fdisk to set up a new set of partitions, I see the
following in /var/log/messages as soon as I hit "w" in fdisk:

  Feb 25 15:38:32 webber systemd: Started Timer to wait for more drives before activating degraded array md2..
  Feb 25 15:38:32 webber systemd: Started Timer to wait for more drives before activating degraded array md1..
  Feb 25 15:38:32 webber systemd: Started Timer to wait for more drives before activating degraded array md0..
  Feb 25 15:38:32 webber kernel: md/raid1:md0: active with 1 out of 2 mirrors
  Feb 25 15:38:32 webber kernel: md0: detected capacity change from 0 to 5363466240
  Feb 25 15:39:02 webber systemd: Created slice system-mdadm\x2dlast\x2dresort.slice.
  Feb 25 15:39:02 webber systemd: Starting Activate md array md1 even though degraded...
  Feb 25 15:39:02 webber systemd: Starting Activate md array md2 even though degraded...
  Feb 25 15:39:02 webber kernel: md/raid1:md1: active with 0 out of 2 mirrors
  Feb 25 15:39:02 webber kernel: md1: failed to create bitmap (-5)
  Feb 25 15:39:02 webber mdadm: mdadm: failed to start array /dev/md/1: Input/output error
  Feb 25 15:39:02 webber systemd: mdadm-last-resort@md1.service: main process exited, code=exited, status=1/FAILURE

I check /proc/mdstat and sure enough, there it is, trying to assemble an
array I DID NOT TELL IT TO CREATE.

I do NOT WANT this to happen; it creates the same "SHIT" (the incorrect
array) over and over again (systemd frustration).

So I tried to delete them again, wiped them again, killed processes, wiped
disks.

No matter what I do, as soon as I hit "w" in fdisk, systemd tries to
assemble the array again without letting me decide what to do.

Help!

Jobst

--
windoze 98: <n.> useless extension to a minor patch release for 32-bit
extensions and a graphical shell for a 16-bit patch to an 8-bit operating
system originally coded for a 4-bit microprocessor, written by a 2-bit
company that can't stand for 1 bit of competition!

 | |0| |   Jobst Schmalenbach, General Manager
 | | |0|   Barrett & Sales Essentials
 |0|0|0|   +61 3 9533 0000, POBox 277, Caulfield South, 3162, Australia
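[What the poster intended, as a minimal sketch and not taken from the
original mail: the only difference is --level=1, but the accidental RAID0
arrays have to be stopped before their member partitions can be reused.]

  # stop the accidentally created RAID0 (striped) arrays first
  mdadm --stop /dev/md0
  mdadm --stop /dev/md1
  mdadm --stop /dev/md2

  # the intended mirrored setup: RAID1 instead of RAID0
  # (mdadm will warn that the partitions appear to contain an existing
  #  array and ask for confirmation unless the old metadata is cleared first)
  mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
  mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2
  mdadm --create --verbose /dev/md2 --level=1 --raid-devices=2 /dev/sdb3 /dev/sdc3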
Simon Matter
2019-Feb-25 05:50 UTC
[CentOS] Problem with mdadm, raid1 and automatically adds any disk to raid
> Hi.
>
> CENTOS 7.6.1810, fresh install - use this as a base to create/upgrade
> new/old machines.
>
> I was trying to setup two disks as a RAID1 array, using these lines
>
> mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
> mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2
> mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2 /dev/sdb3 /dev/sdc3
>
> then I did a lsblk and realized that I used --level=0 instead of --level=1
> (spelling mistake)
> The SIZE was reported double as I created a striped set by mistake, yet I
> wanted the mirrored.
>
> Here starts my problem, I cannot get rid of the /dev/mdX no matter what I
> do (try to do).
>
> I tried to delete the MDX, I removed the disks by failing them, then
> removing each array md0, md1 and md2.
> I also did
>
> dd if=/dev/zero of=/dev/sdX bs=512 seek=$(($(blockdev --getsz /dev/sdX)-1024)) count=1024

I didn't check but are you really sure you're cleaning up the end of the
drive? Maybe you should clean the end of every partition first because
metadata may be written there.

> dd if=/dev/zero of=/dev/sdX bs=512 count=1024
> mdadm --zero-superblock /dev/sdX
>
> Then I wiped each partition of the drives using fdisk.
>
> Now every time I start fdisk to setup a new set of partitions I see in
> /var/log/messages as soon as I hit "w" in fdisk:
>
> Feb 25 15:38:32 webber systemd: Started Timer to wait for more drives
> before activating degraded array md2..
> Feb 25 15:38:32 webber systemd: Started Timer to wait for more drives
> before activating degraded array md1..
> Feb 25 15:38:32 webber systemd: Started Timer to wait for more drives
> before activating degraded array md0..
> Feb 25 15:38:32 webber kernel: md/raid1:md0: active with 1 out of 2 mirrors
> Feb 25 15:38:32 webber kernel: md0: detected capacity change from 0 to 5363466240
> Feb 25 15:39:02 webber systemd: Created slice system-mdadm\x2dlast\x2dresort.slice.
> Feb 25 15:39:02 webber systemd: Starting Activate md array md1 even though degraded...
> Feb 25 15:39:02 webber systemd: Starting Activate md array md2 even though degraded...
> Feb 25 15:39:02 webber kernel: md/raid1:md1: active with 0 out of 2 mirrors
> Feb 25 15:39:02 webber kernel: md1: failed to create bitmap (-5)
> Feb 25 15:39:02 webber mdadm: mdadm: failed to start array /dev/md/1: Input/output error
> Feb 25 15:39:02 webber systemd: mdadm-last-resort@md1.service: main
> process exited, code=exited, status=1/FAILURE
>
> I check /proc/mdstat and sure enough, there it is, trying to assemble an
> array I DID NOT TELL IT TO CREATE.
>
> I do NOT WANT this to happen, it creates the same "SHIT" (the incorrect
> array) over and over again (systemd frustration).

Noooooo, you're wiping it wrong :-)

> So I tried to delete them again, wiped them again, killed processes, wiped
> disks.
>
> No matter what I do as soon as I hit the "w" in fdisk systemd tries to
> assemble the array again without letting me decide what to do.

<don't try this at home>
Nothing easier than that, just terminate systemd while doing the disk
management and restart it after you're done. BTW, PID is 1.
</don't try this at home>

Seriously, there is certainly some systemd unit you may be able to
deactivate before doing such things. However, I don't know which one it is.
I've been fighting similar crap: on HPE servers, whenever cciss_vol_status
is run through the disk monitoring system and reports hardware RAID status,
systemd scans all partition tables and tries to detect LVM2 devices and
whatever else. The kernel log is just filled with useless scans and I have
no idea how to get rid of it. Nice new systemd world.

Regards,
Simon
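[A sketch of what "clean the end of every partition" could look like,
assuming the six member partitions from the original post; this is not
from the mail itself. Older 0.90/1.0 md metadata sits near the end of the
member device, while the newer 1.2 default sits near its start, so wiping
only the whole-disk device can miss partition-level metadata.]

  for p in /dev/sd{b,c}{1,2,3}; do
      sectors=$(blockdev --getsz "$p")   # partition size in 512-byte sectors
      # zero the last 512 KiB of the partition, where end-of-device metadata lives
      dd if=/dev/zero of="$p" bs=512 seek=$((sectors - 1024)) count=1024
  done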
Jobst Schmalenbach
2019-Feb-25 06:06 UTC
[CentOS] Problem with mdadm, raid1 and automatically adds any disk to raid
On Mon, Feb 25, 2019 at 06:50:11AM +0100, Simon Matter via CentOS (centos at centos.org) wrote:

> > Hi.
> >
> > dd if=/dev/zero of=/dev/sdX bs=512 seek=$(($(blockdev --getsz /dev/sdX)-1024)) count=1024
>
> I didn't check but are you really sure you're cleaning up the end of the
> drive? Maybe you should clean the end of every partition first because
> metadata may be written there.

Mmmmhhh, not sure. I ran fdisk on it, basically re-creating everything
from scratch.

The "trying to re-create the mdX's" happens when I use "w" in fdisk.
As soon as I hit the "w" it starts re-creating the mdX! That's the
annoying part.

[snip]

> > No matter what I do as soon as I hit the "w" in fdisk systemd tries to
> > assemble the array again without letting me decide what to do.
>
> <don't try this at home>

I am not ;-), it's @ work.

Jobst

--
You seem (in my (humble) opinion (which doesn't mean much)) to be (or
possibly could be) more of a Lisp programmer (but I could be (and
probably am) wrong)

 | |0| |   Jobst Schmalenbach, General Manager
 | | |0|   Barrett & Sales Essentials
 |0|0|0|   +61 3 9533 0000, POBox 277, Caulfield South, 3162, Australia
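[Not from the thread: one thing that may help while repartitioning is to
tell mdadm's incremental-assembly machinery not to auto-assemble anything,
then undo it once the disks are clean. A sketch, under the assumption that
/etc/mdadm.conf has no conflicting AUTO line; check "man mdadm.conf"
before relying on it.]

  # disable auto-assembly of any array that is not explicitly listed
  echo 'AUTO -all' >> /etc/mdadm.conf

  # ... repartition with fdisk, wipe signatures ...

  # remove the AUTO line again when finished
  sed -i '/^AUTO -all$/d' /etc/mdadm.conf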
Tony Mountifield
2019-Feb-25 11:23 UTC
[CentOS] Problem with mdadm, raid1 and automatically adds any disk to raid
In article <20190225050144.GA5984 at button.barrett.com.au>,
Jobst Schmalenbach <jobst at barrett.com.au> wrote:

> Hi.
>
> CENTOS 7.6.1810, fresh install - use this as a base to create/upgrade new/old machines.
>
> I was trying to setup two disks as a RAID1 array, using these lines
>
> mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
> mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2
> mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2 /dev/sdb3 /dev/sdc3
>
> then I did a lsblk and realized that I used --level=0 instead of --level=1 (spelling mistake)
> The SIZE was reported double as I created a striped set by mistake, yet I wanted the mirrored.
>
> Here starts my problem, I cannot get rid of the /dev/mdX no matter what I do (try to do).
>
> I tried to delete the MDX, I removed the disks by failing them, then removing each array md0, md1 and md2.
> I also did
>
> dd if=/dev/zero of=/dev/sdX bs=512 seek=$(($(blockdev --getsz /dev/sdX)-1024)) count=1024
> dd if=/dev/zero of=/dev/sdX bs=512 count=1024
> mdadm --zero-superblock /dev/sdX
>
> Then I wiped each partition of the drives using fdisk.

The superblock is a property of each partition, not just of the whole
disk. So I believe you need to do:

  mdadm --zero-superblock /dev/sdb1
  mdadm --zero-superblock /dev/sdb2
  mdadm --zero-superblock /dev/sdb3
  mdadm --zero-superblock /dev/sdc1
  mdadm --zero-superblock /dev/sdc2
  mdadm --zero-superblock /dev/sdc3

Cheers
Tony
--
Tony Mountifield
Work: tony at softins.co.uk - http://www.softins.co.uk
Play: tony at mountifield.org - http://tony.mountifield.org
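[A hedged variant of Tony's suggestion, not from the mail: the same six
commands as a loop, with a check afterwards that no md metadata remains on
any member partition. The arrays must already be stopped (mdadm --stop),
otherwise mdadm cannot open the busy members for writing.]

  for p in /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdc1 /dev/sdc2 /dev/sdc3; do
      mdadm --zero-superblock "$p"
      mdadm --examine "$p"    # should now fail to find an md superblock
  done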
Gordon Messmer
2019-Feb-26 01:24 UTC
[CentOS] Problem with mdadm, raid1 and automatically adds any disk to raid
On 2/24/19 9:01 PM, Jobst Schmalenbach wrote:

> I tried to delete the MDX, I removed the disks by failing them, then removing each array md0, md1 and md2.
> I also did
>
> dd if=/dev/zero of=/dev/sdX bs=512 seek=$(($(blockdev --getsz /dev/sdX)-1024)) count=1024

Clearing the initial sectors doesn't do anything to clear the data in the
partitions. They don't become blank just because you remove them.

Partition your drives, and then use "wipefs -a /dev/sd{b,c}{1,2,3}"

> I do NOT WANT this to happen, it creates the same "SHIT" (the incorrect array) over and over again (systemd frustration).

What makes you think this has *anything* to do with systemd? Bitching
about systemd every time you hit a problem isn't helpful. Don't.
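[A small sketch of the suggested wipefs approach, not from the mail:
running wipefs without -a first only lists the signatures it would erase,
which is a safe way to see what is still sitting on the partitions.]

  wipefs /dev/sdb1                 # list remaining signatures (md raid, LVM, filesystem) without touching them
  wipefs -a /dev/sd{b,c}{1,2,3}    # erase all signatures on every member partition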
Simon Matter
2019-Feb-26 05:54 UTC
[CentOS] Problem with mdadm, raid1 and automatically adds any disk to raid
> On 2/24/19 9:01 PM, Jobst Schmalenbach wrote:
>> I tried to delete the MDX, I removed the disks by failing them, then
>> removing each array md0, md1 and md2.
>> I also did
>>
>> dd if=/dev/zero of=/dev/sdX bs=512 seek=$(($(blockdev --getsz /dev/sdX)-1024)) count=1024
>
> Clearing the initial sectors doesn't do anything to clear the data in
> the partitions. They don't become blank just because you remove them.
>
> Partition your drives, and then use "wipefs -a /dev/sd{b,c}{1,2,3}"
>
>> I do NOT WANT this to happen, it creates the same "SHIT" (the incorrect
>> array) over and over again (systemd frustration).
>
> What makes you think this has *anything* to do with systemd? Bitching
> about systemd every time you hit a problem isn't helpful. Don't.

If it's not systemd, who else does it? Can you elaborate, please?

Regards,
Simon
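[Not from the thread, but one way to answer the question on a CentOS 7
box: the immediate trigger is usually a udev rule shipped with the mdadm
package that runs incremental assembly ("mdadm -I") whenever a block
device carrying an md signature appears, for example right after fdisk
rewrites the partition table. The rule can be located with:]

  # list udev rules that invoke mdadm when block devices appear
  grep -l mdadm /usr/lib/udev/rules.d/*.rules

  # on CentOS 7 this is typically 65-md-incremental.rules
  cat /usr/lib/udev/rules.d/65-md-incremental.rules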
Jobst Schmalenbach
2019-Feb-26 22:41 UTC
[CentOS] Problem with mdadm, raid1 and automatically adds any disk to raid
On Mon, Feb 25, 2019 at 05:24:44PM -0800, Gordon Messmer (gordon.messmer at gmail.com) wrote:

> On 2/24/19 9:01 PM, Jobst Schmalenbach wrote:

[snip]

> What makes you think this has *anything* to do with systemd? Bitching
> about systemd every time you hit a problem isn't helpful. Don't.

Because of this:

  Feb 25 15:38:32 webber systemd: Started Timer to wait for more drives before activating degraded array md2..

--
When you want a computer system that works, just choose Linux;
When you want a computer system that works, just, choose Microsoft.

 | |0| |   Jobst Schmalenbach, General Manager
 | | |0|   Barrett & Sales Essentials
 |0|0|0|   +61 3 9533 0000, POBox 277, Caulfield South, 3162, Australia
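[An aside, not from the thread: the "Timer to wait for more drives before
activating degraded array" message comes from the mdadm-last-resort@ timer
and service units that the mdadm package ships for systemd, so the log
line shows systemd running mdadm's own units rather than systemd deciding
on its own to assemble arrays. If those units exist on the system they can
be inspected with:]

  systemctl cat 'mdadm-last-resort@.timer' 'mdadm-last-resort@.service'
  rpm -qf /usr/lib/systemd/system/mdadm-last-resort@.timer   # should name the mdadm package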
Jobst Schmalenbach
2019-Feb-26 22:59 UTC
[CentOS] Problem with mdadm, raid1 and automatically adds any disk to raid
On Mon, Feb 25, 2019 at 11:23:12AM +0000, Tony Mountifield (tony at softins.co.uk) wrote:

> In article <20190225050144.GA5984 at button.barrett.com.au>,
> Jobst Schmalenbach <jobst at barrett.com.au> wrote:
> > Hi.
> > CENTOS 7.6.1810, fresh install - use this as a base to create/upgrade new/old machines.
> >
> > I was trying to setup two disks as a RAID1 array, using these lines
> >
> > mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
> > mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2
> > mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2 /dev/sdb3 /dev/sdc3
> >
> > then I did a lsblk and realized that I used --level=0 instead of --level=1 (spelling mistake)
>
> So I believe you need to do:
>
> mdadm --zero-superblock /dev/sdb1
> mdadm --zero-superblock /dev/sdb2

I actually deleted the partitions, at first using fdisk and then parted
(I read a few ideas on the internet). From the second try onwards I also
changed the partition sizes and filesystems. I also tried with one disk
missing (either sda or sdb).

Jobst

--
If proof denies faith, and uncertainty denies proof, then uncertainty is
proof of God's existence.

 | |0| |   Jobst Schmalenbach, General Manager
 | | |0|   Barrett & Sales Essentials
 |0|0|0|   +61 3 9533 0000, POBox 277, Caulfield South, 3162, Australia