similar to: Software RAID muck up

Displaying 20 results from an estimated 10000 matches similar to: "Software RAID muck up"

2011 Apr 01 (5 replies)
question on software raid
dmesg is not reporting any issues, and /proc/mdstat looks fine:

  md0 : active raid1 sdb1[1] sda1[0]
        X blocks [2/2] [UU]

However, /var/log/messages says:

  smartd[3392]: Device: /dev/sda, 20 offline uncorrectable sectors

The machine is running fine and the RAID array looks good - what is up with smartd? Thanks, Jerry
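
smartd is warning about the physical disk, not the array: offline-uncorrectable sectors are bad sectors the drive itself has logged, which md won't notice until a read actually hits them. A minimal way to investigate (a sketch; smartmontools assumed installed, device name from the post):

  # show the counters smartd is tracking for this drive
  smartctl -A /dev/sda | grep -i -E 'Offline_Uncorrectable|Reallocated'
  # schedule a long surface self-test and check the result later
  smartctl -t long /dev/sda
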
2013 Mar 03 (4 replies)
Strange behavior from software RAID
Somewhere, mdadm is caching information. Here is my /etc/mdadm.conf file:

  more /etc/mdadm.conf
  # mdadm.conf written out by anaconda
  DEVICE partitions
  MAILADDR root
  ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce
  ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382
  ARRAY /dev/md2 level=raid1 num-devices=2 ...
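
A common culprit in this situation is not the file on disk but the copy of mdadm.conf baked into the initramfs at install time. One hedged way to bring both in line with the superblocks actually on the disks:

  # regenerate the ARRAY lines from the on-disk superblocks
  mdadm --examine --scan > /etc/mdadm.conf.new
  # review the new file, then swap it in
  mv /etc/mdadm.conf.new /etc/mdadm.conf
  # rebuild the initramfs so boot-time assembly sees the same config (CentOS 6+)
  dracut -f
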
2005 May 21 (1 reply)
Software RAID CentOS4
Hi, I have a system with two IDE controllers running RAID1. As a test I powered down, removed one drive (hdc), and powered back up. The system came up fine, so I powered down, installed a new drive (hdc), and powered back up. /proc/mdstat indicated RAID1 active with hda only. I thought it would auto-add the new hdc drive... Also, when I removed the new drive and added the original hdc, the swap partitions...
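
md does not auto-add a blank replacement disk; the new drive has to be partitioned and each member added by hand. A sketch, assuming hda is the healthy survivor and the partition numbers match the post's layout:

  # copy the surviving disk's partition table onto the new drive
  sfdisk -d /dev/hda | sfdisk /dev/hdc
  # add each member back; the kernel starts the rebuild on its own
  mdadm /dev/md0 --add /dev/hdc1
  mdadm /dev/md1 --add /dev/hdc2
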
2014 Feb 07 (3 replies)
Software RAID1 Failure Help
I am running software RAID1 on a somewhat critical server. Today I noticed one drive is giving errors. Good thing I had RAID. I planned on upgrading this server in the next month or so. Just wondering if there is an easy way to fix this so I can avoid rushing the upgrade? Having a single drive is slowing down reads as well, I think. Thanks.

  Feb 7 15:28:28 server smartd[2980]: Device: /dev/sdb ...
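
The usual no-rush fix is to swap just the failing drive and let the mirror rebuild. A sketch assuming sda is the healthy disk and sdb1 is an affected member (repeat the mdadm steps for each md device):

  # flag the dying disk's member as failed, then pull it from the array
  mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
  # power down, replace sdb, then clone the partition layout from sda
  sfdisk -d /dev/sda | sfdisk /dev/sdb
  # add the fresh member and watch the resync
  mdadm /dev/md0 --add /dev/sdb1
  watch cat /proc/mdstat
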
2009 May 08 (3 replies)
Software RAID resync
I have configured 2x 500 GB SATA HDDs as software RAID1, with three partitions md0, md1 and md2; md2 is 400+ GB. It has now been almost 36 hours and the status is:

  cat /proc/mdstat
  Personalities : [raid1]
  md0 : active raid1 hdb1[1] hda1[0]
        104320 blocks [2/2] [UU]
        resync=DELAYED
  md1 : active raid1 hdb2[1] hda2[0]
        4096448 blocks [2/2] [UU]
        resync=DELAYED
  md2 : active raid1 ...
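
resync=DELAYED is normal here: md resyncs only one array per set of shared disks at a time, so md0 and md1 wait their turn behind the large md2. If the active resync is crawling, the kernel throttle can be raised; a sketch (values are KB/s per device):

  # inspect the current floor and ceiling
  sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
  # temporarily raise the minimum so the resync gets more bandwidth
  sysctl -w dev.raid.speed_limit_min=50000
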
2018 Dec 05 (3 replies)
Accidentally nuked my system - any suggestions?
On 04/12/2018 at 23:50, Stephen John Smoogen wrote:
> In the rescue mode, recreate the partition table which was on sdb by
> copying over what is on sda:
>
>   sfdisk -d /dev/sda | sfdisk /dev/sdb
>
> This will give the kernel enough to know it has things to do on
> rebuilding parts.
Once I made sure I had retrieved all my data, I followed your suggestion, and it looks...
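
The quoted sfdisk pipe works for MBR disks; on GPT disks the equivalent is to replicate the table with sgdisk and then randomize the GUIDs on the copy so the two disks don't clash. A hedged sketch (note that sgdisk takes the destination first):

  # GPT: copy sda's partition table onto sdb...
  sgdisk -R /dev/sdb /dev/sda
  # ...then give sdb its own disk and partition GUIDs
  sgdisk -G /dev/sdb
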
2007 Apr 25 (2 replies)
Raid 1 newbie question
Hi, I have a RAID 1 CentOS 4.4 setup and now have this /proc/mdstat output:

  [root@server admin]# cat /proc/mdstat
  Personalities : [raid1]
  md2 : active raid1 hdc2[1] hda2[0]
        1052160 blocks [2/2] [UU]
  md1 : active raid1 hda3[0]
        77023552 blocks [2/1] [U_]
  md0 : active raid1 hdc1[1] hda1[0]
        104320 blocks [2/2] [UU]

What happened with md1? My dmesg output is: [root@...
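
[2/1] [U_] means md1 is running on one of its two members with the second slot empty; here hdc3 is the one missing. If that partition is still healthy, a hedged recovery sketch:

  # confirm which slot is listed as removed or faulty
  mdadm --detail /dev/md1
  # add the missing member back and let the mirror resync
  mdadm /dev/md1 --add /dev/hdc3
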
2007 Oct 07 (1 reply)
Replacing failed software RAID drive
CentOS release 4.5. Hi All: First of all, I will admit to being spoiled by my MegaRAID SCSI RAID controllers. When a drive fails on one of them, I just replace the drive and carry on without having to do anything else. I now find myself in the situation where I have a failed drive on a non-MegaRAID controller, specifically an Adaptec 29160 SCSI controller. The system is an Acer G700 with 8...
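
With a plain SCSI HBA like the 29160 the swap is manual: fail and remove the members with mdadm, then tell the kernel about the hardware change. A sketch under the assumption that the kernel supports sysfs SCSI rescans (2.6 and later); the host number and device name are placeholders:

  # after mdadm --fail/--remove, detach the dead disk from the kernel
  echo 1 > /sys/block/sdX/device/delete
  # swap the drive, then rescan the SCSI host it sits on
  echo "- - -" > /sys/class/scsi_host/host0/scan
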
2020 Nov 16 (2 replies)
Intel RST RAID 1, partition tables and UUIDs
On 11/16/2020 01:23 PM, Jonathan Billings wrote:
> On Sun, Nov 15, 2020 at 07:49:09PM -0500, H wrote:
>> I have been having some problems with hardware RAID 1 on the
>> motherboard that I am running CentOS 7 on. After a BIOS upgrade of
>> the system, I lost the RAID 1 setup and was no longer able to boot
>> the system.
> The Intel RST RAID (aka Intel Matrix RAID) is...
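
Intel RST/IMSM arrays are assembled by mdadm on Linux, so their state can be inspected without going through the BIOS. A hedged look at what the option ROM and the on-disk metadata claim:

  # what the platform's RST option ROM supports
  mdadm --detail-platform
  # what the IMSM metadata on a member disk says
  mdadm --examine /dev/sda
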
2020 Sep 18 (4 replies)
Drive failed in 4-drive md RAID 10
I got the email that a drive in my 4-drive RAID10 setup failed. What are my options? Drives are WD1000FYPS (Western Digital 1 TB 3.5" SATA).

mdadm.conf:

  # mdadm.conf written out by anaconda
  MAILADDR root
  AUTO +imsm +1.x -all
  ARRAY /dev/md/root level=raid10 num-devices=4 UUID=942f512e:2db8dc6c:71667abc:daf408c3

/proc/mdstat:

  Personalities : [raid10]
  md127 : active raid10 sdf1[2](F) ...
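
The (F) flag marks sdf1 as the failed member of md127. The standard option is to remove it, put a matching partition on a replacement disk, and add it back; a sketch assuming sde is a healthy member to copy the layout from:

  # drop the failed member from the array
  mdadm /dev/md127 --remove /dev/sdf1
  # replace the drive, then clone the layout from a healthy member
  sfdisk -d /dev/sde | sfdisk /dev/sdf
  # add the new member; RAID10 rebuilds from the surviving mirror half
  mdadm /dev/md127 --add /dev/sdf1
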
2010 Oct 19 (3 replies)
more software raid questions
Hi all! Back in August several of you assisted me in solving a problem where one of my drives had dropped out of (or been kicked out of) the RAID1 array. Something vaguely similar appears to have happened just a few minutes ago, upon rebooting after a small update. I received four emails like this, one each for /dev/md0, /dev/md1, /dev/md125 and /dev/md126:

  Subject: DegradedArray ...
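
Seeing md125/md126 alongside md0/md1 often means a second, stale set of superblocks got assembled under new names at boot. A quick way to see what the kernel actually assembled and which arrays are degraded:

  # every array the kernel currently knows about
  cat /proc/mdstat
  # full state, including failed or removed slots, for each one
  mdadm --detail /dev/md0
  mdadm --detail /dev/md125
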
2009 Nov 02 (5 replies)
info about hdds in raid
How can I tell which HDD to swap when "cat /proc/mdstat" says one HDD of the RAID1 array has died? Do the HDDs have serial numbers that I can see in "reality", and can I get that number from e.g. a command's output? How can I know which HDD to swap in e.g. a RAID1 array? Thank you.
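
The practical trick is to match the kernel device name to the serial number printed on the drive's label before pulling anything. A sketch using smartmontools or hdparm:

  # serial and model as reported by the drive itself
  smartctl -i /dev/sda | grep -i -E 'model|serial'
  # rough equivalent without smartmontools (IDE/SATA)
  hdparm -i /dev/sda | grep -i serial
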
2006 Apr 11 (1 reply)
SATA Raid 5 and losing a drive
Hi Folks - Using CentOS on a server destined to have a dozen SATA drives in it. The server is fine, and RAID 5 is set up on groups of 4 SATA drives. Today we decided to disconnect one SATA drive to simulate a failure. The box trucked on fine... a little too fine. We waited some minutes, but no problem was visible in /proc/mdstat, in /var/log/messages, or on the console. I ran mdadm --monitor...
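
md typically notices a missing member only when I/O actually touches it, which is why a quiet array can look clean after a pulled cable. A hedged sketch for both forcing the issue and getting mail when it happens:

  # force reads across the array so the kernel hits the absent disk
  dd if=/dev/md0 of=/dev/null bs=1M count=1024
  # run the monitor as a daemon and mail events to root
  mdadm --monitor --scan --daemonise --mail=root --delay=300
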
2019 Jan 30 (2 replies)
C7, mdadm issues
Alessandro Baggi wrote:
> On 30/01/19 14:02, mark wrote:
>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>> On 29/01/19 20:42, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 18:47, mark wrote:
>>>>>> Alessandro Baggi wrote:
>>>>>>> On 29/01/19 15:03, mark wrote:
2020 Nov 16 (1 reply)
Intel RST RAID 1, partition tables and UUIDs
The main advantage I know of for BIOS fake-RAID is that the BIOS can boot off either of the two mirrored boot devices; usually if the sata0 device has failed, the BIOS isn't smart enough to boot from sata1. The only other reason is if you're running a MS Windows desktop, which can't do mirroring on its own. On Mon, Nov 16, 2020 at 10:23 AM Jonathan Billings <billings at...
2019 Jan 30 (1 reply)
C7, mdadm issues
Alessandro Baggi wrote:
> On 30/01/19 16:33, mark wrote:
>> Alessandro Baggi wrote:
>>> On 30/01/19 14:02, mark wrote:
>>>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>>>> On 29/01/19 20:42, mark wrote:
>>>>>> Alessandro Baggi wrote:
2015 Feb 18 (5 replies)
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi, I just replaced Slackware64 14.1 running on my office's HP ProLiant MicroServer with a fresh installation of CentOS 7. The server has 4 x 250 GB disks. Every disk is configured like this:

  * 200 MB /dev/sdX1 for /boot
  * 4 GB /dev/sdX2 for swap
  * 248 GB /dev/sdX3 for /

There are supposed to be no spare devices. /boot and swap are all supposed to be assembled in RAID level 1 across...
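
If the installer won't express that layout, it can be built by hand afterwards. A sketch of one reading of the post's scheme, with four active devices and no spares anywhere (device names follow the sdX convention above):

  # /boot and swap as 4-way RAID1, / as RAID5 with no spare
  mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[abcd]1
  mdadm --create /dev/md1 --level=1 --raid-devices=4 /dev/sd[abcd]2
  mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[abcd]3
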
2010 Jun 09 (2 replies)
software raid - better management advice needed
Hi, I've used mdadm for years now to manage software RAIDs. The task of using fdisk to first create partitions on a spare drive sitting on a shelf (RAID 0 where the 1st of my 2 drives failed) is kind of bugging me now. Using fdisk to create the same partition layout on the new drive as on the existing drive, and then using mdadm to finish everything up, is a little tedious. Any...
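
The fdisk-by-hand step is exactly what sfdisk's dump format was meant to replace. A sketch of the whole swap in a few commands, assuming sda survives and partitions 1-3 map to md0-md2 in order:

  # clone the survivor's partition table onto the blank disk
  sfdisk -d /dev/sda | sfdisk /dev/sdb
  # re-add every member in one pass
  for n in 1 2 3; do
      mdadm /dev/md$((n-1)) --add /dev/sdb$n
  done
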
2006 Oct 05 (1 reply)
Cannot re-make a software raid pair
Apologies if you get this twice - the first one didn't seem to make it... Hi Guys, I have just replaced a faulty Max... whoa, wait... this one's a Seagate... IDE hard disk, but I cannot remake the software RAID pair. The currently running disk is hda and I am trying to add back hdg - both are master drives on separate controllers. I have run fdisk on hdg and created the same partition...
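
If the replacement hdg carried md metadata from a previous life, a stale superblock can keep the add from sticking. A hedged sketch (array and partition names assumed):

  # wipe any leftover md superblock on the partition being re-added
  mdadm --zero-superblock /dev/hdg1
  # then join it to the running mirror
  mdadm /dev/md0 --add /dev/hdg1
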
2007 Mar 06 (1 reply)
blocks 256k chunks on RAID 1
Hi, I have a RAID 1 (using mdadm) on CentOS Linux, and in /proc/mdstat I see this:

  md7 : active raid1 sda2[0] sdb2[1]
        26627648 blocks [2/2] [UU]    [-->> it's OK]
  md1 : active raid1 sdb3[1] sda3[0]
        4192896 blocks [2/2] [UU]     [-->> it's OK]
  md2 : active raid1 sda5[0] sdb5[1]
        4192832 blocks [2/2] [UU]     [-->> it's OK]
  md3 : active raid1 sdb6[1] sda6[0]
        4192832 blocks [2/2]...