Displaying 20 results from an estimated 3000 matches similar to: "unable to recover software raid1 install"
2015 Mar 18
0
unable to recover software raid1 install
On Tue, 2015-03-17 at 23:28 +0100, johan.vermeulen7 at telenet.be wrote:
>
> On a CentOS 5 system installed with software RAID, I'm getting:
>
> raid1: raid set md127 active with 2 out of 2 mirrors
>
> md:.... autorun DONE
>
> md: Autodetecting RAID arrays
>
> md: autorun.....
>
> md: autorun DONE
>
> trying to resume from /dev/md1
Hi
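A rough sketch of how one might inspect the arrays from a rescue environment in a case like this; /dev/sda2 below is an assumed member device, not taken from the excerpt:
# Check which arrays the kernel assembled and their state
cat /proc/mdstat
# Read the on-disk RAID superblock of a suspected member (device name assumed)
mdadm --examine /dev/sda2
# Try assembling everything that can be found
mdadm --assemble --scan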
2008 Jan 18
1
Recover lost data from LVM RAID1
Guys,
The other day my old workstation froze while I was working on it, and
after a reboot I lost almost all data unexpectedly.
I have a RAID1 configuration with LVM. 2 IDE HDDs.
md0 .. store /boot (100MB)
--------------------------
/dev/hda2
/dev/hdd1
md1 .. store / (26GB)
--------------------------
/dev/hda3
/dev/hdd2
The only info still left was what I restored after the
fresh install. It seems that the
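If the two md arrays themselves still assemble, a sketch of the usual next step for LVM sitting on top of RAID1 might look like this (the volume group name is a placeholder):
# Assemble both mirrors, then look for LVM metadata on top of them
mdadm --assemble --scan
pvscan
vgscan
# Activate whatever volume group is found (name is hypothetical)
vgchange -ay VolGroup00
lvscan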
2008 Jan 18
1
HowTo Recover Lost Data from LVM RAID1 ?
Guys,
The other day my old workstation froze while I was working on it, and
after a reboot I lost almost all data unexpectedly.
I have a RAID1 configuration with LVM. 2 IDE HDDs.
md0 .. store /boot (100MB)
--------------------------
/dev/hda2
/dev/hdd1
md1 .. store / (26GB)
--------------------------
/dev/hda3
/dev/hdd2
The only info still left was what I restored after the
fresh
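Before rewriting anything in a situation like this, one would normally take a read-only look at the existing md metadata; the member devices below simply match the layout quoted above:
# Read the RAID superblocks on each listed member, without modifying them
mdadm --examine /dev/hda2 /dev/hdd1
mdadm --examine /dev/hda3 /dev/hdd2
# Check what filesystem/LVM signatures remain on the arrays
blkid /dev/md0 /dev/md1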
2010 Oct 19
3
more software raid questions
hi all!
Back in Aug several of you assisted me in solving a problem where one
of my drives had dropped out of (or been kicked out of) the RAID1 array.
Something vaguely similar appears to have happened just a few mins ago,
upon rebooting after a small update. I received four emails like this,
one for /dev/md0, one for /dev/md1, one for /dev/md125 and one for
/dev/md126:
Subject: DegradedArray
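For a DegradedArray event, a typical checklist (hedged, since the excerpt doesn't show which member failed) is to confirm what dropped out and re-add it; /dev/sdb1 below is only an example device:
cat /proc/mdstat
mdadm --detail /dev/md0
# If a member was kicked out but the disk is healthy, re-add it
mdadm /dev/md0 --add /dev/sdb1
# Watch the resync progress
watch cat /proc/mdstat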
2009 May 08
1
domU corrupt after server crash, help needed trying to recover domU LVM
Hi all,
One of our Dell servers has failed badly, and one of the domU's has been
corrupted in the process. It boots up to a point and then gives me a kernel
panic:
Loading dm-zero.ko module
Loading dm-snapshot.ko module
Scanning and configuring dmraid supported devices
Scanning logical volumes
Reading all physical volumes. This may take a while...
No volume groups found
Activating
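One possible way to poke at the corrupted domU's disk from the dom0 side, assuming the guest disk is a logical volume (all names below are guesses, not from the excerpt):
# Map the partitions inside the guest's LV (mapping names may differ, e.g. domU_disk1 or domU_diskp1)
kpartx -av /dev/VolGroup00/domU_disk
# Scan for the guest's own volume groups on the mapped partitions
pvscan
vgscan
# Fsck the guest root read-only first (device name is a guess)
e2fsck -n /dev/mapper/domU_disk2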
2008 Jul 20
1
moving software RAIDed disks to other machine
I just replaced two md-raided (RAID1) disks with bigger ones and decided
to check out how far I get with them when I put them in another machine.
The kernel boots and then panics when it wants to mount the root
filesystem on the disk.
md: Autodetecting RAID arrays
md: autorun
md: autorun DONE
< not sure if this means it was successful or failed, I rather think it
failed because it
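When disks are moved to different hardware, the old install's initrd often lacks the new controller's driver; a hedged sketch of the usual checks from a rescue boot (the kernel version is a placeholder):
# See whether the arrays actually assemble under the rescue kernel
mdadm --assemble --scan
cat /proc/mdstat
# If they do, the panic is likely the initrd missing a driver; rebuild it
# from a chroot into the installed system (CentOS 5 era syntax)
chroot /mnt/sysimage
mkinitrd -f /boot/initrd-2.6.18-92.el5.img 2.6.18-92.el5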
2010 Jul 01
1
Superblock Problem
Hi all,
After rebooting my CentOS 5.5 server, I have the following message:
==================================
Red Hat nash version 5.1.19.6 starting
EXT3-fs: unable to read superblock
mount: error mounting /dev/root on /sysroot as ext3: invalid argument
setuproot: moving /root failed: No such file or directory
setuproot: error mounting /proc: No such file or directory
setuproot: error mounting
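A cautious way to follow up an "unable to read superblock" error from rescue media; the array name and backup-superblock offset below are assumptions (the offset depends on the filesystem's block size):
# Confirm the array assembled and which device holds the root filesystem
cat /proc/mdstat
blkid
# List where ext3 kept its backup superblocks
dumpe2fs /dev/md0 | grep -i superblock
# Try fsck against a backup superblock (32768 assumes 4k blocks)
e2fsck -b 32768 /dev/md0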
2007 Apr 22
1
Centos5: RAID1 on root/boot? [+lilo mkinitrd issues]
Hi,
I did a command-line upgrade of a RHL73 server to CentOS 5. It was a
bit of a rocky road, but in the end it was successful.
There's one thing that bugged me. I'm using software RAID1 consisting
of /dev/hd{a,c}. No LVM or anything fancy, a number of /dev/mdX
partitions, including the root (+/boot).
I'd have preferred to continue using lilo as it works more easily with
RAID1
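For GRUB (the CentOS 5 default) rather than lilo, the usual trick is to install the boot loader on both mirror members so either disk can boot; a sketch, with the second disk assumed to be /dev/hdc:
# Install GRUB on the MBR of each RAID1 member
grub-install /dev/hda
grub-install /dev/hdc
# Or, from the grub shell, point the second disk at its own copy of /boot
grub
grub> root (hd1,0)
grub> setup (hd1)
grub> quit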
2013 Mar 03
4
Strange behavior from software RAID
Somewhere, mdadm is caching information. Here is my /etc/mdadm.conf file:
more /etc/mdadm.conf
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382
ARRAY /dev/md2 level=raid1 num-devices=2
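When the running arrays no longer match /etc/mdadm.conf, the usual fix (a hedged sketch, not necessarily what this thread concluded) is to regenerate the ARRAY lines from what the kernel actually sees and rebuild the initramfs, since a copy of mdadm.conf lives in there too:
# Show the arrays as currently assembled
mdadm --detail --scan
# If the output looks right, append it and prune the stale ARRAY lines by hand
mdadm --detail --scan >> /etc/mdadm.conf
# Rebuild the initramfs so the embedded copy matches (CentOS 6/7 syntax)
dracut -f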
2006 Apr 02
2
raid setup
Hi,
I have 2 identical xSeries 346 with 2 identical IBM 72GB SCSI drives. What I
did was install the CentOS 4.2 server CD on the first one, with the HDDs set up
as RAID1 (and RAID0 for swap). Then I took the 2nd HDD from the 1st
server, swapped it with the 1st HDD in the 2nd server, and rebuilt the RAIDs. The
1st server rebuilt the array fine. My problem is the second server: after
rebuilding it and
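A sketch of how the second server's degraded mirror would normally be rebuilt once the swapped disk is in place; device names are assumptions:
# Check which half of each mirror is missing
cat /proc/mdstat
mdadm --detail /dev/md0
# Wipe the stale metadata on the transplanted disk, then re-add it
mdadm --zero-superblock /dev/sdb1
mdadm /dev/md0 --add /dev/sdb1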
2015 Feb 18
5
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi,
I just replaced Slackware64 14.1 running on my office's HP Proliant
Microserver with a fresh installation of CentOS 7.
The server has 4 x 250 GB disks.
Every disk is configured like this:
* 200 MB /dev/sdX1 for /boot
* 4 GB /dev/sdX2 for swap
* 248 GB /dev/sdX3 for /
There are supposed to be no spare devices.
/boot and swap are all supposed to be assembled in RAID level 1 across
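To verify whether the installer quietly turned one disk into a spare, something like the following; the md127 name is a guess at how CentOS 7 numbered the data array:
cat /proc/mdstat
# "Spare Devices" in the detail output shows whether a disk was held back
mdadm --detail /dev/md127
# If a 4-disk RAID5 came up as 3 active + 1 spare, growing it absorbs the spare
mdadm --grow /dev/md127 --raid-devices=4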
2008 Apr 18
1
create raid /dev/md2
Hi, currently I have 2 RAID devices, /dev/md0 and /dev/md1. I have added 2
new disks, fdisked them, and created 2 primary partitions with type fd (Linux
raid autodetect).
Now I want to create a RAID from them:
root at vmhost1 ~]# mdadm --create --verbose /dev/md2 --level=1 /dev/sdc1
/dev/sdd1
mdadm: error opening /dev/md2: No such file or directory
will return that error. What should I do?
Thanks!
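On CentOS 5 the /dev/md2 node may simply not exist yet; two hedged workarounds that era's mdadm supports:
# Let mdadm create the device node itself
mdadm --create --verbose /dev/md2 --auto=yes --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
# Or create the node by hand first (md major number is 9, minor 2 for md2)
mknod /dev/md2 b 9 2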
2017 Apr 08
2
lvm cache + qemu-kvm stops working after about 20GB of writes
Hello,
I would really appreciate some help/guidance with this problem. First of
all, sorry for the long message. I would file a bug, but do not know if
it is my fault, dm-cache, qemu, or (probably) a combination of these. And
I can imagine some of you have this setup up and running without
problems (or maybe you think it works, just like I did, but it does not):
PROBLEM
LVM cache writeback
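For reference, a minimal sketch of the kind of writeback dm-cache setup being described; the VG/LV names and the SSD at /dev/sdb are placeholders, not details from the excerpt:
# Create the cache pool on the fast device (names and sizes are examples)
lvcreate --type cache-pool -L 20G -n vmcache vg0 /dev/sdb
# Attach it, in writeback mode, to the LV backing the VM
lvconvert --type cache --cachemode writeback --cachepool vg0/vmcache vg0/vm_disk
# Detaching the cache again (flushes dirty blocks first)
lvconvert --splitcache vg0/vm_disk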
2011 Oct 31
2
libguestfs and md devices
We've recently discovered that libguestfs can't handle guests which use
md. There are (at least) 2 reasons for this: Firstly, the appliance
doesn't include mdadm. Without this, md devices aren't detected during
the boot process. Simply adding mdadm to the appliance package list
fixes this.
Secondly, md devices referenced in fstab as, e.g. /dev/md0, aren't
handled
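A quick way to see whether a given libguestfs build detects md arrays inside a guest image; the image name is a placeholder, and this assumes a guestfish from after mdadm was added to the appliance:
# Open the image read-only and start the appliance
guestfish --ro -a raidguest.img
><fs> run
# With mdadm in the appliance, filesystems on assembled md devices show up here
><fs> list-filesystems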
2016 Mar 12
4
C7 + UEFI + GPT + RAID1
Hi list,
I'm new to UEFI and GPT.
For several years I've used an MBR partition table. I've installed my
system on software RAID1 (mdadm) using md0 (sda1, sdb1) for swap,
md1 (sda2, sdb2) for /, and md2 (sda3, sdb3) for /home. From several how-tos
concerning RAID1 installation, I must put each partition on a different
md device. I asked some time ago whether it's more correct to create the
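A rough GPT layout sketch for this kind of mirrored setup; the partition sizes and the use of sgdisk are assumptions, and the EFI System Partition itself is usually kept outside the RAID1:
# GPT partitions on the first disk: ESP, then three RAID members
sgdisk -n 1:0:+200M -t 1:ef00 /dev/sda   # EFI System Partition (not in the RAID)
sgdisk -n 2:0:+4G   -t 2:fd00 /dev/sda   # swap mirror member
sgdisk -n 3:0:+50G  -t 3:fd00 /dev/sda   # / mirror member
sgdisk -n 4:0:0     -t 4:fd00 /dev/sda   # /home mirror member
# Repeat on /dev/sdb, then build the mirrors, e.g.
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3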
2007 Apr 08
2
Boot Error My server running Centos Blue Quartz
Greetings,
One of our Servers has a problem.
My server running CentOS BlueQuartz has a bit of a problem booting. P4 3.0 GHz
processor with twin 250 GB HDDs and 1 GB of RAM.
The boot screen is locked at:
md: ...autorun DONE.
Creating root device
VFS: Can't find ext3 filesystem on dev md1
mount: error 22 mounting ext3
mount: error 2 mounting none
Switching to new root
switchroot: mount failed: 22
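A hedged rescue-mode sketch for a failure like this, where the array may have come up but without a usable filesystem on it (device names assumed):
# Boot rescue media, then see what actually assembled
cat /proc/mdstat
mdadm --detail /dev/md1
# Check whether an ext3 signature is still there before any repair attempt
blkid /dev/md1
e2fsck -n /dev/md1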
2007 Mar 29
2
EXT3 fs error on RAID1 device
Hi all.
I have a Dell SC440 running Centos 4.4. It has two 500GB disks in a
RAID1 array using linux software raid (md1 is / and md0 is /boot).
Recently the root file system was remounted read-only for some reason.
The logs don't show anything unusual, presumably the file system was
read-only before anything was logged. Running dmesg showed this error
repeated many times:
EXT3-fs error (device
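Ext3 errors that force a read-only remount are often the filesystem reacting to I/O errors underneath; a cautious sketch of follow-up checks (device names assumed):
# Look for ATA/SCSI errors alongside the EXT3-fs messages
dmesg | grep -i -e ata -e 'i/o error'
# Check both mirror members' SMART state
smartctl -a /dev/sda
smartctl -a /dev/sdb
# Then fsck the array from rescue media, read-only first
e2fsck -n /dev/md1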
2019 Apr 09
2
Kernel panic after removing SW RAID1 partitions, setting up ZFS.
System is CentOS 6 all up to date, previously had two drives in MD RAID
configuration.
md0: sda1/sdb1, 20 GB, OS / Partition
md1: sda2/sdb2, 1 TB, data mounted as /home
Installed kmod ZFS via yum, rebooted, zpool works fine. Backed up the /home data
twice, then stopped the array on the sd[ab]2 partitions with:
mdadm --stop /dev/md1;
mdadm --zero-superblock /dev/sd[ab]1;
Removed /home in /etc/fstab. Used
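Worth noting: md1 was built on sd[ab]2, but the --zero-superblock above names sd[ab]1, which belongs to md0 (the OS array). A sketch of the sequence one would normally expect for retiring only md1, kept hedged since the excerpt may simply be truncated, and with the pool name as a placeholder:
mdadm --stop /dev/md1
# Zero only the members of the array being removed
mdadm --zero-superblock /dev/sda2 /dev/sdb2
# Leave md0 (sda1/sdb1) alone, then hand the freed partitions to ZFS
zpool create home mirror /dev/sda2 /dev/sdb2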
2008 Apr 01
1
RAID1 migration - /dev/md1 is not there
I am trying to convert an existing IDE one-disk system to RAID1 using the
general strategy found here:
http://lists.centos.org/pipermail/centos/2005-March/003813.html
But I am stuck on one thing - when I went to create the second md device with
mdadm,
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hdb2 missing
mdadm: error opening /dev/md1: No such file or directory
And indeed,
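The "No such file or directory" here usually just means the /dev/md1 node was never created; a hedged workaround from that era is to make the node by hand and rerun the same command:
# Create the block device node (md major is 9, minor 1 for md1)
mknod /dev/md1 b 9 1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hdb2 missing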
2019 Feb 25
7
Problem with mdadm, raid1 and automatically adds any disk to raid
Hi.
CENTOS 7.6.1810, fresh install - use this as a base to create/upgrade new/old machines.
I was trying to set up two disks as a RAID1 array, using these lines:
mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2
mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2
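If RAID1 mirrors were the intent, the commands would presumably use --level=1 rather than --level=0; this is an inference from the subject line, not from the excerpt. A sketch:
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2
# Record the result so the array names persist across reboots
mdadm --detail --scan >> /etc/mdadm.conf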