similar to: RAID handling in UPS Shutdown script

Displaying 20 results from an estimated 9000 matches similar to: "RAID handling in UPS Shutdown script"

2019 Feb 25
0
Problem with mdadm, raid1 and automatically adds any disk to raid
> Hi. CentOS 7.6.1810, fresh install - use this as a base to create/upgrade new/old machines. I was trying to set up two disks as a RAID1 array, using these lines: mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1 mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2 mdadm
2019 Feb 25
7
Problem with mdadm, raid1 and automatically adds any disk to raid
Hi. CentOS 7.6.1810, fresh install - use this as a base to create/upgrade new/old machines. I was trying to set up two disks as a RAID1 array, using these lines: mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1 mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2 mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2
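Worth noting: the commands quoted above pass --level=0 (a stripe), while the stated goal is RAID1 (a mirror). A minimal sketch of the mirrored version, reusing the post's device names:
# create mirrors rather than stripes; sdb/sdc partitions as in the post
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2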
2010 Mar 04
1
removing a md/software raid device
Hello folks, I successfully stopped the software RAID. How can I delete the ones found on scan? I also see them in dmesg. [root@extragreen ~]# mdadm --stop --scan ; echo $? 0 [root@extragreen ~]# mdadm --examine --scan ARRAY /dev/md0 level=raid5 num-devices=4 UUID=89af91cb:802eef21:b2220242:b05806b5 ARRAY /dev/md0 level=raid6 num-devices=4 UUID=3ecf5270:339a89cf:aeb092ab:4c95c5c3 [root
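Stopping an array does not touch the metadata on its members, which is why --examine still finds them. A sketch of the usual cleanup, assuming /dev/sdb1 is one of the former members (check --examine first):
mdadm --examine --scan                 # list which partitions still carry superblocks
mdadm --zero-superblock /dev/sdb1      # repeat for each former member device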
2007 Jul 27
2
Major problem with software raid
Ok, this is the case: I've got two RAID-5 arrays with software RAID, both with three disks. Setup: md0 has hdb2, hdd1 and sda1; md1 has hdb5, hdd3 and sda3. Tonight, the system lost power due to a power spike. The result was a reboot where it attempted to fix the RAID, but it didn't exactly work. I have now booted a live CD and am using utilities there. It seems the checksum value is
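The recovery path usually suggested for a power-loss case like this is a forced assemble; a sketch using the member lists from the post (take a disk image first if the data matters, since --force rewrites event counts):
mdadm --assemble --force /dev/md0 /dev/hdb2 /dev/hdd1 /dev/sda1
mdadm --assemble --force /dev/md1 /dev/hdb5 /dev/hdd3 /dev/sda3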
2008 Nov 26
2
Reassemble software RAID
I have a machine on CentOS 5 with two disks in RAID1 using Linux software RAID. /dev/md0 is a small boot partition, /dev/md1 spans the rest of the disk(s). /dev/md1 is managed by LVM and holds the system partition and several other partitions. I had to take out disk sda from the RAID and low level format it with the tool provided by Samsung. Now I put it back and want to reassemble the array.
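A sketch of the reassembly, assuming sda is re-partitioned to match sdb; md resyncs the added halves automatically:
sfdisk -d /dev/sdb | sfdisk /dev/sda   # copy the MBR partition layout to the blank disk
mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md1 --add /dev/sda2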
2019 Feb 25
0
Problem with mdadm, raid1 and automatically adds any disk to raid
In article <20190225050144.GA5984 at button.barrett.com.au>, Jobst Schmalenbach <jobst at barrett.com.au> wrote: > Hi. CentOS 7.6.1810, fresh install - use this as a base to create/upgrade new/old machines. I was trying to set up two disks as a RAID1 array, using these lines: mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1
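The auto-assembly behaviour this thread complains about can be restricted from /etc/mdadm.conf; a sketch using the AUTO keyword (see mdadm.conf(5) for the exact semantics):
# /etc/mdadm.conf: assemble only arrays explicitly listed below, nothing else
AUTO -all
# then append the real ARRAY lines from the live system:
# mdadm --detail --scan >> /etc/mdadm.conf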
2010 Oct 19
3
more software raid questions
Hi all! Back in Aug several of you assisted me in solving a problem where one of my drives had dropped out of (or been kicked out of) the RAID1 array. Something vaguely similar appears to have happened just a few mins ago, upon rebooting after a small update. I received four emails like this, one for /dev/md0, one for /dev/md1, one for /dev/md125 and one for /dev/md126: Subject: DegradedArray
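Stray md125/md126 names usually mean arrays were assembled without a matching entry in mdadm.conf. A sketch of the first diagnostic steps (the member device is an example):
cat /proc/mdstat                    # see which arrays are degraded
mdadm --detail /dev/md0             # identify the missing member
mdadm /dev/md0 --re-add /dev/sda1   # example; fall back to --add if re-add is refused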
2012 Mar 29
3
RAID-10 vs Nested (RAID-0 on 2x RAID-1s)
Greetings- I'm about to embark on a new installation of CentOS 6 x64 on 4x SATA HDDs. The plan is to use RAID-10 as a nice combination of data security (RAID1) and speed (RAID0). However, I'm finding either a lack of raw information on the topic, or I'm having a mental issue preventing the osmosis of the implementation into my brain. Option #1: My understanding of RAID10 using 4
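md supports RAID10 as a native level, so nesting is not strictly required. A sketch of both layouts (device names illustrative):
# native md RAID10 across four disks
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[abcd]1
# nested alternative: two mirrors, striped together
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/md1 /dev/md2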
2013 Mar 03
4
Strange behavior from software RAID
Somewhere, mdadm is caching information. Here is my /etc/mdadm.conf file: more /etc/mdadm.conf # mdadm.conf written out by anaconda DEVICE partitions MAILADDR root ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382 ARRAY /dev/md2 level=raid1 num-devices=2
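The "cached" copy is normally the mdadm.conf baked into the initrd, not anything mdadm itself stores. A sketch of regenerating both, assuming a CentOS-era system:
mdadm --detail --scan > /etc/mdadm.conf                # rewrite the config from the live arrays
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)   # CentOS 5 and earlier
# dracut -f                                            # CentOS 6 and later use dracut instead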
2008 Apr 17
2
Question about RAID 5 array rebuild with mdadm
I'm using CentOS 4.5 right now, and I had a RAID 5 array stop because two drives became unavailable. After adjusting the cables on several occasions and shutting down and restarting, I was able to see the drives again. This is when I snatched defeat from the jaws of victory. Please, someone with vast knowledge of how RAID 5 with mdadm works, tell me if I have any chance at all
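Threads like this usually converge on comparing event counts before doing anything destructive, then forcing an assemble; a sketch with illustrative member names:
mdadm --examine /dev/sd[bcde]1 | grep -i event   # compare per-member event counts
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1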
2016 Mar 12
4
C7 + UEFI + GPT + RAID1
Hi list, I'm new to UEFI and GPT. For several years I've used an MBR partition table. I've installed my system on software RAID1 (mdadm) using md0 (sda1, sdb1) for swap, md1 (sda2, sdb2) for /, md2 (sda3, sdb3) for /home. According to several how-tos concerning RAID1 installation, I must put each partition on a different md device. I asked a while ago whether it's more correct to create the
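A commonly cited layout for this case: one EFI System Partition per disk plus md mirrors for everything else; 1.0 metadata keeps the superblock at the end of the partition so the firmware still sees plain FAT. A sketch with sgdisk (sizes and device names illustrative; repeat the partitioning on /dev/sdb):
sgdisk -n1:0:+200M -t1:EF00 /dev/sda   # EFI System Partition
sgdisk -n2:0:+4G   -t2:FD00 /dev/sda   # swap mirror member
sgdisk -n3:0:0     -t3:FD00 /dev/sda   # root mirror member
mdadm --create /dev/md0 --level=1 --metadata=1.0 --raid-devices=2 /dev/sda1 /dev/sdb1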
2009 Jul 02
4
Upgrading drives in raid 1
I think I have solved my issue and would like some input from anyone who has done this - pitfalls, errors, or whether I am just wrong. CentOS 5.x, software RAID, 250GB drives. 2 drives in mirror, one spare, all the same size. 2 devices in the mirror: one boot (about 100MB), one that fills the rest of the disk and contains LVM partitions. I was thinking of taking out the spare and adding a 500GB drive. I
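The usual swap-and-grow sequence, sketched with a hypothetical /dev/sdc as the new 500GB disk; LVM only sees the extra space after pvresize:
mdadm /dev/md1 --add /dev/sdc2     # let the new member resync into the mirror
# once every member of md1 sits on a 500GB partition:
mdadm --grow /dev/md1 --size=max
pvresize /dev/md1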
2005 May 21
1
Software RAID CentOS4
Hi, I have a system with two IDE controllers running RAID1. As a test I powered down, removed one drive (hdc), and powered back up. The system came up fine, so I powered down, installed a new drive (hdc), and powered back up. /proc/mdstat indicated RAID1 active with hda only. I thought it would auto-add the new hdc drive... Also, when I removed the new drive and added the original hdc, the swap partitions
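md never auto-adds a blank replacement; the new drive has to be partitioned and its partitions added by hand, roughly like this sketch:
sfdisk -d /dev/hda | sfdisk /dev/hdc   # copy the survivor's partition layout
mdadm /dev/md0 --add /dev/hdc1
mdadm /dev/md1 --add /dev/hdc2         # repeat for each array, including swap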
2009 Dec 20
1
mdadm help
Hey List, So I had a 4 drive software RAID 5 set up consisting of /dev/sdb1, /dev/sdc1, /dev/sdd1 and /dev/sde1. I reinstalled my OS and after the reinstall I made the mistake of re-assembling the array incorrectly by typing "sudo mdadm --assemble /dev/md0 /dev/sdb /dev/sdc /dev/sdd /dev/sde" in a moment of stupidity. Obviously this didn't work and the array wouldn't mount and
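Assuming the superblocks on the partitions survived the whole-disk assemble, the fix is a sketch like this:
mdadm --stop /dev/md0       # release the wrongly assembled array
mdadm --examine /dev/sdb1   # confirm the partition metadata is intact
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1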
2011 Sep 07
1
boot problem after disk change on raid1
Hello, I have two disks, sda and sdb. One of them was broken, so I replaced the broken disk with a working one. I started the server in rescue mode, created the partition table, and added all the partitions to the software RAID. *I have added the partitions to the RAID, and rebooted.* # mdadm /dev/md0 --add /dev/sdb1 # mdadm /dev/md1 --add /dev/sdb2 # mdadm /dev/md2 --add /dev/sdb3 # mdadm
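Adding the partitions re-mirrors the data but not the MBR, so the step most often missing here is installing the bootloader on the replacement disk; a sketch for GRUB legacy (disk numbering is illustrative):
grub                 # from rescue mode or the running system
grub> root (hd1,0)   # the new disk's /boot partition
grub> setup (hd1)    # write GRUB to the new disk's MBR
grub> quit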
2008 Apr 18
1
create raid /dev/md2
Hi, currently I have 2 RAID devices, /dev/md0 and /dev/md1. I have added 2 new disks, fdisked them, and created 2 primary partitions with type fd (Linux raid autodetect). Now I want to create a RAID from them: [root@vmhost1 ~]# mdadm --create --verbose /dev/md2 --level=1 /dev/sdc1 /dev/sdd1 mdadm: error opening /dev/md2: No such file or directory It returns that error; what should I do? Thanks!
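The error means the /dev/md2 node does not exist yet; note also that the quoted command omits --raid-devices. A sketch of two workarounds (md's block major is 9):
mdadm --create --verbose /dev/md2 --auto=yes --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
# or create the device node by hand first:
mknod /dev/md2 b 9 2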
2008 Jul 20
1
moving software RAIDed disks to other machine
I just replaced two md-raided (RAID1) disks with bigger ones and decided to check out how far I get with them when I put them in another machine. The kernel boots and then panics when it wants to mount the root filesystem on the disk. md: Autodetecting RAID arrays md: autorun md: autorun DONE < not sure if this means it was successful or failed, I rather think it failed because it
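If the panic is the initrd on the moved disks lacking RAID/driver support for the new machine, the usual fix is rebuilding it from rescue mode; a sketch for a CentOS 5-era system:
# after booting the install media with 'linux rescue'
chroot /mnt/sysimage
mkinitrd -f /boot/initrd-<installed-kernel>.img <installed-kernel>   # use the installed kernel version, not the rescue kernel's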
2012 Jun 07
1
mdadm: failed to write superblock to
Hello, I have a little problem: our server has a broken RAID. # cat /proc/mdstat Personalities : [raid1] md0 : active raid1 sda1[2](F) sdb1[1] 2096064 blocks [2/1] [_U] md2 : active raid1 sda3[2](F) sdb3[1] 1462516672 blocks [2/1] [_U] md1 : active raid1 sda2[0] sdb2[1] 524224 blocks [2/2] [UU] unused devices: <none> I have removed the partition: # mdadm --remove
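With sda1 and sda3 already flagged (F), the usual sequence is to fail/remove every sda member, swap the disk, and re-add; a sketch:
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
mdadm /dev/md2 --fail /dev/sda3 --remove /dev/sda3
mdadm /dev/md1 --fail /dev/sda2 --remove /dev/sda2   # md1 is still [UU]; fail this half before pulling the disk
# after replacing the disk and recreating its partitions:
mdadm /dev/md0 --add /dev/sda1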
2007 Apr 25
2
Raid 1 newbie question
Hi, I have a RAID 1 CentOS 4.4 setup and now have this /proc/mdstat output: [root@server admin]# cat /proc/mdstat Personalities : [raid1] md2 : active raid1 hdc2[1] hda2[0] 1052160 blocks [2/2] [UU] md1 : active raid1 hda3[0] 77023552 blocks [2/1] [U_] md0 : active raid1 hdc1[1] hda1[0] 104320 blocks [2/2] [UU] What happens with md1? My dmesg output is: [root@
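[U_] with only hda3 listed means the other half of md1 dropped out; re-attaching it triggers a resync. A sketch, assuming the missing member is hdc3:
mdadm --detail /dev/md1          # confirm which member is gone
mdadm /dev/md1 --add /dev/hdc3   # re-attach; watch the resync in /proc/mdstat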
2005 Jan 13
2
Debian Sarge Root Raid + LVM + XEN install guide (LONG)
Hello fellow xenophiles, and happy new year! I've documented the install procedure for a prototype server here, since I found no similar document anywhere on the net. It's a Sarge-based Domain0 on Linux root RAID from scratch, using LVM to store the data for the domU mail server and its mailstore. I humbly submit my notes in the hope that they are useful to some weary traveller.