Displaying 20 results from an estimated 5000 matches similar to: "Turning root partition into a RAID array"
2007 Apr 25
2
Raid 1 newbie question
Hi
I have a RAID 1 CentOS 4.4 setup and now have this /proc/mdstat output:
[root at server admin]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 hdc2[1] hda2[0]
1052160 blocks [2/2] [UU]
md1 : active raid1 hda3[0]
77023552 blocks [2/1] [U_]
md0 : active raid1 hdc1[1] hda1[0]
104320 blocks [2/2] [UU]
What happened to md1?
My dmesg output is:
[root at
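The [2/1] [U_] on md1 means it is running on only one of its two mirrors. A minimal recovery sketch, assuming the missing member is /dev/hdc3 (the counterpart of /dev/hda3 on the second disk; confirm before adding anything):
mdadm --detail /dev/md1            # shows which slot is missing or faulty
mdadm /dev/md1 --add /dev/hdc3     # hypothetical partition name, verify first
cat /proc/mdstat                   # the rebuild progress appears here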
2006 Feb 10
1
question on software raid-1
I have a system that is RAID -1 configured as
/dev/md0 is /dev/hda1 /dev/hdb1
/dev/md1 is /dev/hda3 /dev/hdb3
it seems as though /dev/hda has failed....
I have another disk (identical model) that I can replace hda with.
I know about the commands: fdisk to repartition, then raidhotadd /dev/md0
/dev/hda1
and raidhotadd /dev/md1 /dev/hda3 (to be run after the system boots).
BUT... how do I now get
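For reference, raidhotadd comes from the old raidtools; the same replacement done with mdadm would look roughly like this, assuming the new disk comes up as /dev/hda again and /dev/hdb holds the surviving copies:
sfdisk -d /dev/hdb | sfdisk /dev/hda   # copy the partition layout to the new disk
mdadm /dev/md0 --add /dev/hda1         # re-add the boot mirror
mdadm /dev/md1 --add /dev/hda3         # re-add the root mirror
cat /proc/mdstat                       # watch both arrays resync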
2009 May 08
3
Software RAID resync
I have configured 2x 500 GB SATA HDDs as software RAID1 with three partitions,
md0, md1 and md2, with md2 at 400+ GB.
It has now been almost 36 hours and the status is:
cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdb1[1] hda1[0]
104320 blocks [2/2] [UU]
resync=DELAYED
md1 : active raid1 hdb2[1] hda2[0]
4096448 blocks [2/2] [UU]
resync=DELAYED
md2 : active raid1
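resync=DELAYED normally just means the kernel rebuilds one array per set of member disks at a time, so md0 and md1 wait until the active one (presumably the large md2) finishes. To watch progress and, if acceptable, raise the throttle (the value below is only an example, in KB/s):
cat /proc/mdstat                                  # shows which array is actively resyncing
cat /proc/sys/dev/raid/speed_limit_min            # current minimum resync speed
echo 50000 > /proc/sys/dev/raid/speed_limit_min   # push the rebuild harder, at the cost of I/O latency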
2006 Feb 10
1
4.2 install w/250GB raid arrays won't boot
hi!
RAID 1 arrays: I already have 2 systems running this same RAID1 config.
One system has two 120 GB drives.
One system has one 120 GB and one 200 GB, but with matching RAID partitions.
This system here that is giving me fits right now has one 250 GB and one
200 GB. I tried it with a new 250 GB for the 2nd drive, but got the same
results: it will not boot.
In Disk Druid, when I am configuring the RAID arrays, I always create the
boot partitions
2005 Apr 17
5
Mirrored drives won't boot after installation
I have a P4 motherboard with 2 IDE interfaces. I connect two 40 GB drives as
hda and hdc, install CentOS 4 from a CD-ROM, and partition the drives with two
RAID partitions each plus a swap partition on hda, then make md0 and md1 to
hold /boot and / respectively. The install goes well, everything looks great,
I go to reboot from the drives, and all I get is "grub" but no boot. I have tried
this ten
2016 Aug 11
5
Software RAID and GRUB on CentOS 7
Hi,
When I perform a software RAID 1 or RAID 5 installation on a LAN server
with several hard disks, I wonder if GRUB already gets installed on each
individual MBR, or if I have to do that manually. On CentOS 5.x and 6.x,
this had to be done like this:
# grub
grub> device (hd0) /dev/sda
grub> device (hd1) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> root (hd1,0)
grub>
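On CentOS 7 the bootloader is GRUB2, and the installer normally writes it to only one disk, so the usual advice is to run grub2-install against each MBR yourself. A sketch for the BIOS/MBR case, assuming the disks are /dev/sda and /dev/sdb (UEFI systems handle this differently via the ESP):
grub2-install /dev/sda
grub2-install /dev/sdb
grub2-mkconfig -o /boot/grub2/grub.cfg   # optional: regenerate the config once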
2010 Dec 04
2
Fiddling with software RAID1 : continue working with one of two disks failing?
Hi,
I'm currently experimenting with software RAID1 on a spare PC with two
40 GB hard disks. Normally, on a desktop PC with only one hard disk, I
have a very simple partitioning scheme like this:
/dev/hda1 80 MB /boot ext2
/dev/hda2 1 GB swap
/dev/hda3 39 GB / ext3
Here's what I'd like to do. Partition a second hard disk (say, /dev/hdb)
with three
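A common way to do this is to build the arrays degraded on the new disk first, migrate the data, and only then absorb the original disk; a rough sketch, assuming /dev/hdb has been partitioned to match hda's layout:
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/hdb1   # future /boot mirror
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/hdb2   # future swap mirror
mdadm --create /dev/md2 --level=1 --raid-devices=2 missing /dev/hdb3   # future / mirror
# copy /boot and / onto the md devices, adjust fstab and the bootloader, reboot onto them,
# then pull the original partitions into the mirrors:
mdadm /dev/md0 --add /dev/hda1
mdadm /dev/md1 --add /dev/hda2
mdadm /dev/md2 --add /dev/hda3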
2008 Feb 06
4
Installation problems with large mirrored drives
I am trying to install CentOS 4.6 to a pair of 750GB hard drives. I can
successfully install to either of the drives as a single drive, but when
I try to use both drives and mirror the partitions, I start having
problems. Anaconda crashes as it is trying to format the drives.
This is what I'm trying to create:
/dev/md0: 200MB, /boot
/dev/md1: 2GB, swap
/dev/md2: rest of the
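Outside the installer, the layout being described would be created along these lines (the sda/sdb device names are an assumption):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # 200MB, /boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # 2GB, swap
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3   # rest of the disks, /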
2008 Jan 18
1
Recover lost data from LVM RAID1
Guys,
The other day, while I was working on my old workstation, it froze, and
after a reboot I lost almost all my data unexpectedly.
I have a RAID1 configuration with LVM. 2 IDE HDDs.
md0 .. store /boot (100MB)
--------------------------
/dev/hda2
/dev/hdd1
md1 .. store / (26GB)
/dev/hda3
/dev/hdd2
The only info still left was what I had restored after the
fresh install. It seems that the
2008 Jan 18
1
HowTo Recover Lost Data from LVM RAID1 ?
Guys,
The other day, while I was working on my old workstation, it froze, and
after a reboot I lost almost all my data unexpectedly.
I have a RAID1 configuration with LVM. 2 IDE HDDs.
md0 .. store /boot (100MB)
--------------------------
/dev/hda2
/dev/hdd1
md1 .. store / (26GB)
--------------------------
/dev/hda3
/dev/hdd2
The only info still left was what I had restored after the
fresh
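With LVM sitting on top of md, the usual first step after an incident like this is to assemble the mirrors read-only and let LVM rediscover its volumes before attempting any repair; roughly (the volume group and LV names below are only assumed CentOS defaults):
mdadm --assemble --scan --readonly          # bring up md0/md1 without writing to them
cat /proc/mdstat
pvscan                                      # rediscover physical volumes on the md devices
vgscan
vgchange -ay                                # activate the volume group(s)
lvs                                         # list the logical volumes found
mount -o ro /dev/VolGroup00/LogVol00 /mnt   # inspect read-only before any repair attempt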
2010 Jul 23
5
install on raid1
Hi All,
I'm currently trying to install CentOS 5.4 x86_64 on a RAID 1, so that if one of the 2 disks fails the server will still be available.
I installed GRUB on /dev/sda using the advanced GRUB configuration option during the install.
After the install is done, I boot into Linux rescue mode, chroot into the filesystem and copy GRUB to both drives using:
grub>root (hd0,0)
grub>setup (hd0)
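The second drive normally gets the same treatment so that either disk can boot on its own; a sketch of the usual counterpart step, following the device/root/setup pattern quoted in the CentOS 5.x/6.x example further up:
grub> device (hd1) /dev/sdb
grub> root (hd1,0)
grub> setup (hd1)
grub> quit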
2014 Dec 03
7
DegradedArray message
Received the following message in mail to root:
Message 257:
From root at desk4.localdomain Tue Oct 28 07:25:37 2014
Return-Path: <root at desk4.localdomain>
X-Original-To: root
Delivered-To: root at desk4.localdomain
From: mdadm monitoring <root at desk4.localdomain>
To: root at desk4.localdomain
Subject: DegradedArray event on /dev/md0:desk4
Date: Tue, 28 Oct 2014 07:25:27
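A DegradedArray event only reports that the array is running with a member missing; the usual first checks are something along these lines (the smartctl target is an assumed device name):
cat /proc/mdstat               # see which member is absent or marked (F)
mdadm --detail /dev/md0        # full array state, including failed/removed devices
smartctl -a /dev/sda           # check whether the underlying disk is healthy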
2010 Oct 19
3
more software raid questions
hi all!
Back in August several of you assisted me in solving a problem where one
of my drives had dropped out of (or been kicked out of) the RAID1 array.
Something vaguely similar appears to have happened just a few minutes ago,
upon rebooting after a small update. I received four emails like this,
one for /dev/md0, one for /dev/md1, one for /dev/md125 and one for
/dev/md126:
Subject: DegradedArray
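Names like md125/md126 usually show up when arrays are auto-assembled without a matching entry in /etc/mdadm.conf (or with a foreign homehost), so comparing what is actually running against the config file is a reasonable first step:
mdadm --detail --scan          # ARRAY lines for everything currently assembled
cat /etc/mdadm.conf            # what the system expects to assemble
mdadm --examine /dev/sda1      # per-member metadata; device name is an assumption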
2019 Feb 25
7
Problem with mdadm, raid1 and automatically adds any disk to raid
Hi.
CentOS 7.6.1810, fresh install - I use this as a base to create/upgrade new/old machines.
I was trying to set up two disks as a RAID1 array, using these lines:
mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2
mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2
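Note that the quoted commands use --level=0, which builds striped RAID0 sets rather than mirrors; a RAID1 version of the same first two arrays would look more like this (the third pair would follow the same pattern):
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2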
2016 Mar 12
4
C7 + UEFI + GPT + RAID1
Hi list,
I'm new with UEFI and GPT.
For several years I've used an MBR partition table. I've installed my
system on software RAID1 (mdadm) using md0 (sda1, sdb1) for swap,
md1 (sda2, sdb2) for /, and md2 (sda3, sdb3) for /home. From several how-tos
concerning RAID1 installation, I must put each partition on a different
md device. I asked some time ago whether it's more correct to create the
2007 Jul 27
2
Major problem with software raid
Ok, this is the case:
I've got two raid-5 arrays with software raid, both with three disks.
Setup:
md0 has hdb2, hdd1 and sda1
md1 has hdb5, hdd3 and sda3
Tonight, the system lost power due to a power spike. The result was a reboot
where it attempted to fix the RAID, but it didn't exactly work. I have now
booted a live CD and am using the utilities there.
It seems the checksum value is
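From a live CD, the usual approach after an unclean shutdown is to read the member superblocks first and only reach for --force if the event counters disagree slightly; a cautious sketch using the members listed above:
mdadm --examine /dev/hdb2 /dev/hdd1 /dev/sda1   # md0 members: state, event counts, checksums
mdadm --examine /dev/hdb5 /dev/hdd3 /dev/sda3   # md1 members
mdadm --assemble /dev/md0 /dev/hdb2 /dev/hdd1 /dev/sda1           # try the clean way first
mdadm --assemble --force /dev/md0 /dev/hdb2 /dev/hdd1 /dev/sda1   # only if the clean assemble fails; can hide real damage
cat /proc/mdstat                                # then repeat for md1 with its members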
2019 Apr 09
2
Kernel panic after removing SW RAID1 partitions, setting up ZFS.
System is CentOS 6 all up to date, previously had two drives in MD RAID
configuration.
md0: sda1/sdb1, 20 GB, OS / Partition
md1: sda2/sdb2, 1 TB, data mounted as /home
Installed kmod ZFS via yum, reboot, zpool works fine. Backed up the /home data
2x, then stopped the sd[ab]2 partition with:
mdadm --stop /dev/md1;
mdadm --zero-superblock /dev/sd[ab]1;
Removed /home in /etc/fstab. Used
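For comparison, retiring md1 alone on the layout above would normally touch only its own members (sda2/sdb2) and the places that still reference it; a rough sketch, with the dracut step included because CentOS 6 carries md assembly information in the initramfs:
umount /home                                   # if still mounted
mdadm --stop /dev/md1
mdadm --zero-superblock /dev/sda2 /dev/sdb2    # md1's members per the layout above
# drop the md1 ARRAY line from /etc/mdadm.conf and the /home entry from /etc/fstab,
# then rebuild the initramfs so nothing tries to assemble md1 at boot:
dracut -f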
2004 Dec 02
3
Tbench benchmark numbers seem to be limiting samba performance in the 2.4 and 2.6 kernel.
Hi,
I'm getting horrible performance on my samba server, and I am
unsure of the cause after reading, benchmarking, and tuning.
My server is a K6-500 with 43 MB of RAM, standard x86 hardware. The
OS is Slackware 10.0 with a 2.6.7 kernel; I've had similar problems with the 2.4.26
kernel. I use Samba version 3.0.5. I've listed my partitions below, as well
as the drive models. I have a
2006 Mar 14
2
Help. Failed event on md1
Hi all,
This morning I received this notification from mdadm:
This is an automatically generated mail message from mdadm
running on server-mail.mydomain.kom
A Fail event had been detected on md device /dev/md1.
Faithfully yours, etc.
In /proc/mdstat I see this:
Personalities : [raid1]
md1 : active raid1 sdb2[2](F) sda2[0]
77842880 blocks [2/1] [U_]
md0 : active raid1 sdb1[1] sda1[0]
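With sdb2 flagged (F), the usual cycle is to check the disk, remove the faulty member, then re-add it (or its replacement); something like:
smartctl -a /dev/sdb                # see whether the drive itself is failing
mdadm /dev/md1 --remove /dev/sdb2   # drop the member already marked faulty
mdadm /dev/md1 --add /dev/sdb2      # re-add it, or the replacement partition
cat /proc/mdstat                    # watch the rebuild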
2011 Mar 21
4
mdraid on top of mdraid
Is it possible or will there be any problems with using mdraid on top of mdraid?
Specifically, say,
mdraid 1/5 on top of mdraid multipath.
e.g. 4 storage machines exporting iSCSI targets via two different
physical network switches
then use multipath to create md block devices
then use mdraid on these md block devices
The purpose being the storage array surviving a physical network switch
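A sketch of the idea with entirely invented device names: each storage box's iSCSI LUN appears twice (once per switch path), md's multipath personality collapses each pair into one device, and a normal md mirror then sits across those devices:
mdadm --create /dev/md10 --level=multipath --raid-devices=2 /dev/sdb /dev/sdc   # two paths to box 1
mdadm --create /dev/md11 --level=multipath --raid-devices=2 /dev/sdd /dev/sde   # two paths to box 2
mdadm --create /dev/md20 --level=1 --raid-devices=2 /dev/md10 /dev/md11         # RAID1 across the boxes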