similar to: Help. Failed event on md1

Displaying 20 results from an estimated 4000 matches similar to: "Help. Failed event on md1"

2010 Oct 19
3
more software raid questions
Hi all! Back in August several of you assisted me in solving a problem where one of my drives had dropped out of (or been kicked out of) the RAID1 array. Something vaguely similar appears to have happened just a few minutes ago, upon rebooting after a small update. I received four emails like this, one for /dev/md0, one for /dev/md1, one for /dev/md125 and one for /dev/md126: Subject: DegradedArray
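A minimal sketch of the usual triage for a DegradedArray event, assuming the dropped member turns out to be a partition such as /dev/sdb1 (the device names here are illustrative, not from the thread):

    cat /proc/mdstat                   # [U_] or [_U] marks the degraded arrays
    mdadm --detail /dev/md1            # shows which member is missing or faulty
    mdadm /dev/md1 --re-add /dev/sdb1  # try to re-add the dropped member
    mdadm /dev/md1 --add /dev/sdb1     # if --re-add is refused, a plain --add starts a full resync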
2014 Feb 07
3
Software RAID1 Failure Help
I am running software RAID1 on a somewhat critical server. Today I noticed one drive is giving errors. Good thing I had RAID. I planned on upgrading this server in the next month or so. Just wondering if there is an easy way to fix this to avoid rushing the upgrade? Having a single drive is slowing down reads as well, I think. Thanks. Feb 7 15:28:28 server smartd[2980]: Device: /dev/sdb
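A hedged sketch of how a failing RAID1 member is typically retired so the disk can be replaced without rushing the upgrade, assuming /dev/sdb1 is the bad disk's member of /dev/md0 (partition and array names are assumptions):

    mdadm /dev/md0 --fail /dev/sdb1    # mark the erroring member as faulty
    mdadm /dev/md0 --remove /dev/sdb1  # take it out of the array
    # after physically replacing sdb and partitioning it to match sda:
    mdadm /dev/md0 --add /dev/sdb1     # add the new member and let the mirror resync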
2007 Apr 25
2
Raid 1 newbie question
Hi, I have a RAID 1 CentOS 4.4 setup and now have this /proc/mdstat output: [root@server admin]# cat /proc/mdstat Personalities : [raid1] md2 : active raid1 hdc2[1] hda2[0] 1052160 blocks [2/2] [UU] md1 : active raid1 hda3[0] 77023552 blocks [2/1] [U_] md0 : active raid1 hdc1[1] hda1[0] 104320 blocks [2/2] [UU] What is happening with md1? My dmesg output is: [root@
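In that output, [2/1] [U_] means md1 has only one of its two members. A short sketch of putting the missing half back, assuming the absent partner is /dev/hdc3 (inferred from the hda3/hdc* naming, so treat it as an assumption):

    mdadm --detail /dev/md1            # confirm which member is missing
    mdadm /dev/md1 --add /dev/hdc3     # re-add the missing half of the mirror
    watch cat /proc/mdstat             # follow the resync progress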
2018 Dec 05
3
Accidentally nuked my system - any suggestions ?
On 04/12/2018 at 23:50, Stephen John Smoogen wrote: > In rescue mode, recreate the partition table which was on sdb by copying over what is on sda: > sfdisk -d /dev/sda | sfdisk /dev/sdb > This will give the kernel enough to know it has things to do on rebuilding parts. Once I made sure I had retrieved all my data, I followed your suggestion, and it looks
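For context, a sketch of the full replacement sequence that advice implies, assuming MBR disks and arrays built from sd[ab]1 and sd[ab]2 (the exact partition-to-array mapping is an assumption):

    sfdisk -d /dev/sda | sfdisk /dev/sdb   # copy the partition table from the surviving disk
    mdadm /dev/md0 --add /dev/sdb1         # put the new partitions back into their arrays
    mdadm /dev/md1 --add /dev/sdb2
    cat /proc/mdstat                       # watch the rebuild until both arrays show [UU]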
2006 Apr 02
2
raid setup
Hi, I have 2 identical xSeries 346 servers, each with 2 identical IBM 72GB SCSI drives. What I did was install the CentOS 4.2 server CD on the first one and set the HDDs to RAID1, with RAID0 for swap. Then I took the 2nd HDD from the 1st server, swapped it with the 1st HDD in the 2nd server, and rebuilt the RAIDs. The 1st server rebuilt the array fine. My problem is the second server: after rebuilding it and
2013 Feb 04
3
Questions about software RAID, LVM.
I am planning to increase the disk space on my desktop system. It is running CentOS 5.9 w/Xen. I have two 160 GB 2.5" laptop SATA drives in two slots of a 4-slot hot-swap bay, configured like this: Disk /dev/sda: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End
2009 Jul 02
4
Upgrading drives in raid 1
I think I have solved my issue and would like some input from anyone who has done this, on pitfalls, errors, or whether I am just wrong. CentOS 5.x, software RAID, 250 GB drives: two drives in a mirror, one spare, all the same size. There are two RAID devices on the mirror, one for /boot (about 100MB) and one that fills the rest of the disk and contains LVM partitions. I was thinking of taking out the spare and adding a 500 GB drive. I
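A hedged sketch of one way such a RAID1 + LVM stack is grown onto larger disks, assuming the big md device is /dev/md1 and the new drive's data partition is /dev/sdc2 (all names illustrative):

    mdadm /dev/md1 --add /dev/sdc2         # bring the 500 GB drive's partition in as a member
    mdadm /dev/md1 --fail /dev/sda2        # once synced, retire one old member; repeat for the other disk
    mdadm --grow /dev/md1 --size=max       # when both members are large, grow the array itself
    pvresize /dev/md1                      # let the LVM physical volume see the new space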
2008 Aug 13
1
Boot from degraded sw RAID 1
OK, this is probably long, and your answer will surely make me slap my forehead really hard... please help me understand what is going on. I intend to install CentOS 5.1 afresh over software RAID level 1. The SATA drives are in AHCI mode. I basically follow [1], though I have made some mistakes, as will be explained. AFAIK GRUB does not boot off LVM, so I: 1. Build a 100MB RAID-type partition on each
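Since GRUB legacy (CentOS 5) only gets installed to one disk's boot sector by default, a common companion step is to install it on both members of the /boot mirror so either disk can boot alone; a sketch, with sda/sdb as assumed device names:

    grub
    grub> device (hd0) /dev/sda
    grub> root (hd0,0)                     # the small RAID1 /boot partition
    grub> setup (hd0)
    grub> device (hd0) /dev/sdb            # repeat, telling GRUB to treat the second disk as hd0
    grub> root (hd0,0)
    grub> setup (hd0)
    grub> quit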
2016 Mar 12
4
C7 + UEFI + GPT + RAID1
Hi list, I'm new to UEFI and GPT. For several years I've used an MBR partition table. I've installed my system on software RAID1 (mdadm) using md0 (sda1, sdb1) for swap, md1 (sda2, sdb2) for /, and md2 (sda3, sdb3) for /home. According to several how-tos concerning RAID1 installation, I must put each partition on a different md device. I asked some time ago whether it's more correct to create the
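One approach sometimes used for the UEFI side (an assumption here, not something stated in the thread) is to mirror the EFI System Partition with 1.0 metadata, which keeps the md superblock at the end of the partition so the firmware still sees a plain FAT filesystem:

    mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1
    mkfs.vfat /dev/md0                     # the mirrored ESP; the other arrays can use default metadata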
2006 Mar 02
3
Advice on setting up Raid and LVM
Hi all, I'm setting up CentOS 4.2 on 2 x 80GB SATA drives. The partition scheme is like this: /boot = 300MB / = 9.2GB /home = 70GB swap = 500MB The RAID is RAID 1. md0 = 300MB = /boot md1 = 9.2GB = LVM md2 = 70GB = LVM md3 = 500MB = LVM Now, the confusing part is: 1. When creating VolGroup00, should I include all PVs (md1, md2, md3) and then create the LVs? 2. When setting up RAID 1, should I
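For question 1, a sketch of the single-VG option being weighed, using the md and VolGroup00 names from the post (the LV names and sizes are illustrative):

    pvcreate /dev/md1 /dev/md2 /dev/md3
    vgcreate VolGroup00 /dev/md1 /dev/md2 /dev/md3
    lvcreate -L 9G   -n lv_root VolGroup00
    lvcreate -L 70G  -n lv_home VolGroup00
    lvcreate -L 500M -n lv_swap VolGroup00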
2013 Mar 03
4
Strange behavior from software RAID
Somewhere, mdadm is caching information. Here is my /etc/mdadm.conf file: more /etc/mdadm.conf # mdadm.conf written out by anaconda DEVICE partitions MAILADDR root ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382 ARRAY /dev/md2 level=raid1 num-devices=2
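Stale ARRAY lines like these usually mean /etc/mdadm.conf and the initrd no longer match the arrays that actually exist; a hedged sketch of regenerating both (the mkinitrd/dracut choice depends on the CentOS release):

    mdadm --detail --scan                      # list the UUIDs of the arrays currently assembled
    mdadm --detail --scan > /etc/mdadm.conf    # rewrite the ARRAY lines (re-add DEVICE/MAILADDR as needed)
    mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)   # CentOS 5; on CentOS 6+ use: dracut -f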
2019 Feb 25
7
Problem with mdadm, raid1 and automatically adds any disk to raid
Hi. CentOS 7.6.1810, fresh install - I use this as a base to create/upgrade new/old machines. I was trying to set up two disks as a RAID1 array, using these lines: mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1 mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2 mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2
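Note that the quoted commands pass --level=0, which builds striped RAID0 arrays rather than mirrors; a sketch of what the RAID1 equivalent would look like (the third device pair is an assumption following the pattern above):

    mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2
    mdadm --create --verbose /dev/md2 --level=1 --raid-devices=2 /dev/sdb3 /dev/sdc3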
2019 Feb 26
2
Problem with mdadm, raid1 and automatically adds any disk to raid
> On Mon, Feb 25, 2019 at 11:54 PM Simon Matter via CentOS <centos at centos.org> wrote: >> > What makes you think this has *anything* to do with systemd? Bitching about systemd every time you hit a problem isn't helpful. Don't. >> If it's not systemd, who else does it? Can you elaborate, please?
2013 Jan 04
2
Syslinux 5.00 - Doesn't boot my system / Not passing the kernel options to the kernel?
Hi, I am encountering a problem with Syslinux 5.00 that I cannot really describe, so I created two small videos: Booting with Syslinux 5.00 (1.3 MB): <https://www.dropbox.com/s/b6g8cdf2t9v48c6/boot-syslinux5-fail.mp4> How I fixed the problem by downgrading to Syslinux 4.06, and how booting should look (6.5 MB): <https://www.dropbox.com/s/lt7cpgfm0qvqtba/boot-syslinux5-how-i-fixed-it.mp4>
2012 Jun 07
1
mdadm: failed to write superblock to
Hello, I have a little problem. Our server has a broken RAID. # cat /proc/mdstat Personalities : [raid1] md0 : active raid1 sda1[2](F) sdb1[1] 2096064 blocks [2/1] [_U] md2 : active raid1 sda3[2](F) sdb3[1] 1462516672 blocks [2/1] [_U] md1 : active raid1 sda2[0] sdb2[1] 524224 blocks [2/2] [UU] unused devices: <none> I have removed the partition: # mdadm --remove
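A sketch of the usual cleanup for members marked (F), assuming the truncated command was aimed at the failed sda partitions and that sda will be replaced (device names follow the mdstat above):

    mdadm /dev/md0 --remove /dev/sda1      # drop the members flagged (F)
    mdadm /dev/md2 --remove /dev/sda3
    # after replacing sda and copying the partition table from the good disk:
    sfdisk -d /dev/sdb | sfdisk /dev/sda
    mdadm /dev/md0 --add /dev/sda1
    mdadm /dev/md1 --add /dev/sda2
    mdadm /dev/md2 --add /dev/sda3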
2014 Dec 09
2
DegradedArray message
On Thu, 2014-12-04 at 16:46 -0800, Gordon Messmer wrote: > On 12/04/2014 05:45 AM, David McGuffey wrote: > In practice, however, there's a bunch of information you didn't provide, so some of those steps are wrong. > I'm not sure what dm-0, dm-2 and dm-3 are, but they're indicated in your mdstat. I'm guessing that you made partitions, and then made
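The dm-N nodes mentioned there are device-mapper devices, typically LVM logical volumes or LUKS mappings sitting on top of the md arrays; a quick hedged way to see what they correspond to:

    lsblk                                  # shows how dm devices stack on the md arrays
    ls -l /dev/mapper/                     # symlinks map LV/crypt names to dm-N nodes
    dmsetup ls --tree                      # shows which underlying device each dm target uses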
2014 Jan 24
4
Booting Software RAID
I installed CentOS 6.x 64-bit with the minimal ISO and used two disks in a RAID 1 array. Filesystem Size Used Avail Use% Mounted on /dev/md2 97G 918M 91G 1% / tmpfs 16G 0 16G 0% /dev/shm /dev/md1 485M 54M 407M 12% /boot /dev/md3 3.4T 198M 3.2T 1% /vz Personalities : [raid1] md1 : active raid1 sda1[0] sdb1[1] 511936 blocks super 1.0
2010 Jul 01
1
Superblock Problem
Hi all, After rebooting my CentOS 5.5 server, I get the following message: ================================== Red Hat nash version 5.1.19.6 starting EXT3-fs: unable to read superblock mount: error mounting /dev/root on /sysroot as ext3: invalid argument setuproot: moving /root failed: No such file or directory setuproot: error mounting /proc: No such file or directory setuproot: error mounting
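A hedged sketch of recovering from an unreadable ext3 superblock from the rescue environment, assuming the root filesystem sits on an md array or LVM volume that has to be activated first (the device path below is a placeholder):

    mdadm --assemble --scan                                  # bring up any md arrays first
    dumpe2fs /dev/VolGroup00/LogVol00 | grep -i superblock   # list backup superblock locations
    e2fsck -b 32768 /dev/VolGroup00/LogVol00                 # check using a backup superblock (32768 is common for 4k-block filesystems)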
2018 Dec 04
3
Accidentally nuked my system - any suggestions ?
On 04/12/2018 at 23:10, Gordon Messmer wrote: > The system should boot normally if you disconnect sdb. Have you tried that? Unfortunately that didn't work. The boot process stops here: [OK] Reached target Basic System. Now what? -- Microlinux - Solutions informatiques durables 7, place de l'Église - 30730 Montpezat Site : https://www.microlinux.fr Blog :
2019 Apr 09
2
Kernel panic after removing SW RAID1 partitions, setting up ZFS.
The system is CentOS 6, all up to date, and previously had two drives in an MD RAID configuration: md0: sda1/sdb1, 20 GB, OS / partition; md1: sda2/sdb2, 1 TB, data mounted as /home. Installed the ZFS kmod via yum, rebooted, zpool works fine. Backed up the /home data twice, then stopped the sd[ab]2 partitions with: mdadm --stop /dev/md1; mdadm --zero-superblock /dev/sd[ab]1; Removed /home from /etc/fstab. Used
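Note that the quoted commands zero the superblocks on sd[ab]1, which per the layout above are md0's members, not md1's. A sketch of retiring the data array cleanly, assuming the intent was md1 and its members sd[ab]2:

    umount /home
    mdadm --stop /dev/md1
    mdadm --zero-superblock /dev/sda2 /dev/sdb2   # wipe md metadata on md1's own members
    # remove md1 from /etc/mdadm.conf and /home from /etc/fstab, then rebuild the initramfs:
    dracut -f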