
Displaying 20 results from an estimated 9000 matches similar to: "Convert "bare partition" to RAID1 / mdadm?"

2019 Feb 25
7
Problem with mdadm, raid1 and automatically adds any disk to raid
Hi. CentOS 7.6.1810, fresh install - I use this as a base to create/upgrade new/old machines. I was trying to set up two disks as a RAID1 array, using these lines:

mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2
mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2
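Note that the commands quoted above pass --level=0 (striping) even though the stated goal is RAID1 (mirroring). For reference, a minimal sketch of what the mirrored equivalent would look like, reusing the device names from the post (the members of md2 are truncated in the excerpt, so only the first two arrays are shown):

  mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
  mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2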
2019 Feb 26
2
Problem with mdadm, raid1 and automatically adds any disk to raid
> On Mon, Feb 25, 2019 at 11:54 PM Simon Matter via CentOS
> <centos at centos.org> wrote:
>
>> > What makes you think this has *anything* to do with systemd? Bitching
>> > about systemd every time you hit a problem isn't helpful. Don't.
>>
>> If it's not systemd, who else does it? Can you elaborate, please?
>
2019 Feb 25
0
Problem with mdadm, raid1 and automatically adds any disk to raid
> Hi.
>
> CentOS 7.6.1810, fresh install - I use this as a base to create/upgrade
> new/old machines.
>
> I was trying to set up two disks as a RAID1 array, using these lines
>
> mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
> mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2
> mdadm
2019 Feb 25
0
Problem with mdadm, raid1 and automatically adds any disk to raid
In article <20190225050144.GA5984 at button.barrett.com.au>,
Jobst Schmalenbach <jobst at barrett.com.au> wrote:
> Hi.
>
> CentOS 7.6.1810, fresh install - I use this as a base to create/upgrade new/old machines.
>
> I was trying to set up two disks as a RAID1 array, using these lines
>
> mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1
2019 Feb 26
0
Problem with mdadm, raid1 and automatically adds any disk to raid
On 2/24/19 9:01 PM, Jobst Schmalenbach wrote:
> I tried to delete the MDX, I removed the disks by failing them, then removing each array md0, md1 and md2.
> I also did
>
> dd if=/dev/zero of=/dev/sdX bs=512 seek=$(($(blockdev --getsz /dev/sdX)-1024)) count=1024

Clearing the initial sectors doesn't do anything to clear the data in the partitions. They don't become blank
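The md superblocks live inside the member partitions (near the start for metadata 1.2, near the end for 0.90/1.0), so zeroing sectors at the edge of the whole disk does not touch them. A minimal sketch of the usual cleanup, assuming the arrays are stopped and using the device names from the thread:

  mdadm --stop /dev/md0
  mdadm --zero-superblock /dev/sdb1   # repeat for every former member partition
  wipefs -a /dev/sdb1                 # alternatively, remove all signatures wipefs knows about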
2019 Feb 26
2
Problem with mdadm, raid1 and automatically adds any disk to raid
> On 2/24/19 9:01 PM, Jobst Schmalenbach wrote:
>> I tried to delete the MDX, I removed the disks by failing them, then
>> removing each array md0, md1 and md2.
>> I also did
>>
>> dd if=/dev/zero of=/dev/sdX bs=512 seek=$(($(blockdev --getsz /dev/sdX)-1024)) count=1024
>
> Clearing the initial sectors doesn't do anything to clear the
2012 Jun 10
1
Centos 6 / Kickstart Using degraded mdadm RAID1
I'm trying to install a bunch of C6 machines involving an initially degraded mdadm RAID1. Anaconda refuses to let me create a RAID1 array with only one member. Based on some reading, it seems that I should be able to use kickstart with %pre scripts to do this (see the sketch below). However, after trying for a couple of hours, it doesn't seem that anaconda will allow it; it just boots the created arrays. At best I end
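For reference, a degraded mirror can be built before anaconda runs by giving mdadm the literal word "missing" as the second member in a %pre script. A minimal sketch; the partition layout and device names here are assumptions, not taken from the thread:

  %pre --log=/tmp/ks-pre.log
  # carve out one partition and build a one-legged mirror; "missing" reserves the second slot
  parted -s /dev/sda mklabel msdos mkpart primary 1MiB 100%
  mdadm --create /dev/md0 --level=1 --raid-devices=2 --run /dev/sda1 missing
  %end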
2012 Jun 19
1
CentOS 6.2 on partitionable mdadm RAID1 (md_d0) - kernel panic with either disk not present
Environment:
CentOS 6.2 amd64 (min. server install)
2 virtual hard disks of 10GB each
Linux KVM

Following the instructions on the CentOS Wiki <http://wiki.centos.org/HowTos/Install_On_Partitionable_RAID1> I installed a min. server in a Linux KVM setup (script shown below):

<script>
#!/bin/bash
nic_mac_addr0=00:07:43:53:2b:bb
kvm \
  -vga std \
  -m 1024 \
  -cpu core2duo \
  -smp 2,cores=2 \
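For context, a partitionable array of the md_d0 flavour named in the subject is created by asking mdadm for a partitioned device; a minimal sketch, assuming two whole-disk virtio members (device names are placeholders):

  mdadm --create /dev/md_d0 --auto=mdp --level=1 --raid-devices=2 /dev/vda /dev/vdb
  # the array can then be partitioned like a disk: /dev/md_d0p1, /dev/md_d0p2, ...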
2016 Mar 12
4
C7 + UEFI + GPT + RAID1
Hi list, I'm new to UEFI and GPT. For several years I've used an MBR partition table. I've installed my system on software RAID1 (mdadm) using md0 (sda1, sdb1) for swap, md1 (sda2, sdb2) for /, and md2 (sda3, sdb3) for /home. From the several how-tos concerning RAID1 installation, I must put each partition on a different md device. I asked some time ago if it's more correct to create the
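One detail worth knowing with UEFI (an addition here, not from the thread): if the EFI System Partition is to be mirrored, metadata version 1.0 is typically used, since it stores the md superblock at the end of the partition so the firmware still sees a plain FAT filesystem. A hedged sketch:

  mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1
  mkfs.vfat /dev/md0        # mounted later as /boot/efi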
2012 Oct 26
4
Can't replace a faulty disk of raid1
Hello, I had a RAID1 btrfs (540GB) on vanilla 3.6.3. A disk failed, and I removed it at power off, plugged in a new one, partitioned it (to 110GB, by mistake), and added it to btrfs. I tried to remove the missing device, and it said "Input/output error" after a while. Subsequent attempts simply gave "Invalid argument". I repartitioned, rebooted the system, and made the partition grow:
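For reference, the usual replacement sequence on a btrfs RAID1 with a physically missing device looks roughly like this sketch (mount point and device names are placeholders; removing the missing device needs enough free space on the remaining members):

  mount -o degraded /dev/sdb1 /mnt
  btrfs device add /dev/sdc1 /mnt        # bring in the new partition first
  btrfs device delete missing /mnt       # then drop the absent one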
2012 Jan 29
2
Advise on recovering 2TB RAID1
Hi all, one drive failed on a software 2TB RAID1. I have removed the failed partition from mdraid and am now ready to replace the failed drive. I want to ask for opinions on whether there is a better way to do this than:

1. Put in the new HDD.
2. Use parted to recreate the same partition scheme.
3. Use mdadm to rebuild the RAID.

Especially #2 is rather tricky. I have to create an exact partition
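Step 2 does not have to be done by hand: the partition table can be cloned from the surviving disk. A minimal sketch, assuming /dev/sda is the healthy member and /dev/sdb the replacement (for GPT disks, sgdisk does the equivalent):

  sfdisk -d /dev/sda | sfdisk /dev/sdb      # clone the MBR partition table
  # GPT variant: sgdisk -R=/dev/sdb /dev/sda && sgdisk -G /dev/sdb
  mdadm /dev/md0 --add /dev/sdb1            # the rebuild then starts automatically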
2019 Apr 09
2
Kernel panic after removing SW RAID1 partitions, setting up ZFS.
> In article <6566355.ijNRhnPfCt at tesla.schoolpathways.com>,
> Benjamin Smith <lists at benjamindsmith.com> wrote:
>> System is CentOS 6 all up to date, previously had two drives in MD RAID
>> configuration.
>>
>> md0: sda1/sdb1, 20 GB, OS / Partition
>> md1: sda2/sdb2, 1 TB, data mounted as /home
>>
>> Installed kmod ZFS via yum,
2012 Jun 07
1
mdadm: failed to write superblock to
Hello, I have a little problem. Our server has a broken RAID.

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[2](F) sdb1[1]
      2096064 blocks [2/1] [_U]
md2 : active raid1 sda3[2](F) sdb3[1]
      1462516672 blocks [2/1] [_U]
md1 : active raid1 sda2[0] sdb2[1]
      524224 blocks [2/2] [UU]

unused devices: <none>

I have removed the partition:
# mdadm --remove
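Given the (F) flags above, sda is the failing member of md0 and md2. The usual recovery sequence, sketched here under the assumption that the whole disk gets swapped and repartitioned:

  mdadm /dev/md0 --remove /dev/sda1
  mdadm /dev/md2 --remove /dev/sda3
  # replace the physical disk, recreate the partitions, then:
  mdadm /dev/md0 --add /dev/sda1
  mdadm /dev/md2 --add /dev/sda3
  watch cat /proc/mdstat        # follow the resync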
2015 Mar 17
3
unable to recover software raid1 install
Hello All, on a CentOS 5 system installed with software RAID I'm getting:

raid1: raid set md127 active with 2 out of 2 mirrors
md: ... autorun DONE
md: Autodetecting RAID arrays
md: autorun ...
md: autorun DONE
trying to resume from /dev/md1
creating root device
mounting root device
mounting root filesystem
ext3-fs : unable to read superblock
mount :
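The md127 name suggests the array was assembled without a matching entry in mdadm.conf, so the initrd never finds the expected /dev/md1. A hedged sketch of one common fix, run from a rescue environment with the root filesystem chrooted (CentOS 5-era tooling):

  mdadm --detail --scan >> /etc/mdadm.conf               # record the arrays under their real UUIDs
  mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)   # rebuild the initrd so it sees them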
2010 Oct 19
3
more software raid questions
Hi all! Back in Aug several of you assisted me in solving a problem where one of my drives had dropped out of (or been kicked out of) the RAID1 array. Something vaguely similar appears to have happened just a few minutes ago, upon rebooting after a small update. I received four emails like this, one each for /dev/md0, /dev/md1, /dev/md125 and /dev/md126:

Subject: DegradedArray
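The md125/md126 devices are usually phantom arrays assembled from stale superblocks that have no entry in mdadm.conf. A quick, hedged way to see what actually got assembled and compare it against the config:

  cat /proc/mdstat
  mdadm --detail /dev/md125         # which members ended up in the unexpected array
  mdadm --examine --scan            # superblocks on disk, to diff against /etc/mdadm.conf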
2019 Feb 26
0
Problem with mdadm, raid1 and automatically adds any disk to raid
On Mon, Feb 25, 2019 at 11:54 PM Simon Matter via CentOS
<centos at centos.org> wrote:
>
>> What makes you think this has *anything* to do with systemd? Bitching
>> about systemd every time you hit a problem isn't helpful. Don't.
>
> If it's not systemd, who else does it? Can you elaborate, please?
>

I'll wager it's the mdadm.service
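Whichever service fires the event, the auto-assembly policy itself belongs to mdadm's udev integration and can be restricted in /etc/mdadm.conf. A hedged sketch (the UUID is a placeholder):

  # assemble only explicitly listed arrays; refuse incremental auto-assembly of anything else
  AUTO -all
  ARRAY /dev/md0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd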
2014 Feb 07
3
Software RAID1 Failure Help
I am running software RAID1 on a somewhat critical server. Today I noticed one drive is giving errors; good thing I had RAID. I planned on upgrading this server in the next month or so. Just wondering if there is an easy way to fix this to avoid rushing the upgrade? Having a single drive is slowing down reads as well, I think. Thanks.

Feb 7 15:28:28 server smartd[2980]: Device: /dev/sdb
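Before deciding, it is worth confirming the failure from both SMART's and mdadm's point of view; a short sketch (the md device name is an assumption):

  smartctl -H /dev/sdb          # overall health verdict
  smartctl -a /dev/sdb          # full attributes and error log
  mdadm --detail /dev/md0       # does mdadm already consider the member faulty?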
2015 Mar 29
1
RAID1 bootloader configuration on CentOS 6.x and 7
Hi, The CentOS wiki sports a page about setting up software RAID1 on CentOS 5.x. There's a section about making both members of the RAID1 bootable by setting up GRUB on both disks. Now I wonder how this should be done on CentOS 6.x and 7. I have two sandbox machines in my office, one running a minimal CentOS 6.6, the other one with a CentOS 7 installation. Correct me if I'm wrong,
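On CentOS 7 with BIOS/MBR boot, the GRUB 2 equivalent of the wiki's CentOS 5 recipe is simply to install the boot loader onto both members (UEFI systems are a different story, since the ESP is firmware-owned). A hedged sketch:

  grub2-install /dev/sda
  grub2-install /dev/sdb
  grub2-mkconfig -o /boot/grub2/grub.cfg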
2019 Jan 30
3
C7, mdadm issues
On 30/01/19 16:49, Simon Matter wrote:
>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>> On 29/01/19 20:42, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 18:47, mark wrote:
>>>>>> Alessandro Baggi wrote:
>>>>>>> On 29/01/19 15:03, mark wrote:
>>>>>>>
2011 Dec 06
4
/dev/sda
We're just using Linux software RAID for the first time - RAID1 - and the other day a drive failed. We have a clone machine to play with, so it's not that critical, but.... I partitioned a replacement drive. On the clone, I marked the RAID partitions on /dev/sda as failed, removed them, and pulled the drive. After several iterations, I waited a minute or two, until all messages had stopped,
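The mark-failed-then-remove step described above is done per array member; a minimal sketch with placeholder names, one pair of commands for each md device that has a member on the dying disk:

  mdadm /dev/md0 --fail /dev/sda1
  mdadm /dev/md0 --remove /dev/sda1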