
Displaying 20 results from an estimated 10000 matches similar to: "Intel RST RAID 1, partition tables and UUIDs"

2020 Nov 16
2
Intel RST RAID 1, partition tables and UUIDs
On 11/16/2020 01:23 PM, Jonathan Billings wrote:
> On Sun, Nov 15, 2020 at 07:49:09PM -0500, H wrote:
>> I have been having some problems with hardware RAID 1 on the
>> motherboard that I am running CentOS 7 on. After a BIOS upgrade of
>> the system, I lost the RAID 1 setup and was no longer able to boot
>> the system.
> The Intel RST RAID (aka Intel Matrix RAID) is
2020 Nov 16
1
Intel RST RAID 1, partition tables and UUIDs
The main advantage I know of for BIOS fake-raid is that the BIOS can boot off either of the two mirrored boot devices; usually, if the sata0 device has failed, the BIOS isn't smart enough to boot from sata1. The only other reason is if you're running an MS Windows desktop, which can't do mirroring on its own.
On Mon, Nov 16, 2020 at 10:23 AM Jonathan Billings <billings at
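With plain md RAID, the usual workaround for that boot limitation is to install the bootloader on both mirror members, so either disk stays bootable. A minimal sketch on CentOS 7 (device names are assumptions):

    grub2-install /dev/sda   # put GRUB 2 on the first mirror member
    grub2-install /dev/sdb   # and on the second, so the BIOS can boot from either
    # On CentOS 5/6 with legacy GRUB, grub-install would be the equivalent.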
2020 Nov 17
0
Intel RST RAID 1, partition tables and UUIDs
On Mon, 2020-11-16 at 18:06 -0500, H wrote:
> On 11/16/2020 01:23 PM, Jonathan Billings wrote:
> > On Sun, Nov 15, 2020 at 07:49:09PM -0500, H wrote:
> > > I have been having some problems with hardware RAID 1 on the
> > > motherboard that I am running CentOS 7 on. After a BIOS upgrade of
> > > the system, I lost the RAID 1 setup and was no longer able to boot
2020 Nov 16
0
Intel RST RAID 1, partition tables and UUIDs
On Sun, Nov 15, 2020 at 07:49:09PM -0500, H wrote:
> I have been having some problems with hardware RAID 1 on the
> motherboard that I am running CentOS 7 on. After a BIOS upgrade of
> the system, I lost the RAID 1 setup and was no longer able to boot
> the system.
The Intel RST RAID (aka Intel Matrix RAID) is also known as a fakeraid. It isn't a hardware RAID, but instead
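mdadm can report whether a platform and its disks carry Intel's IMSM (RST) metadata. A hedged check, assuming /dev/sda is a member disk:

    mdadm --detail-platform   # show the Intel RST capabilities mdadm sees in the firmware
    mdadm --examine /dev/sda  # a member disk will show "imsm" container metadata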
2020 Nov 17
2
Intel RST RAID 1, partition tables and UUIDs
> On Nov 17, 2020, at 1:07 AM, hw <hw at gc-24.de> wrote:
>
> On Mon, 2020-11-16 at 18:06 -0500, H wrote:
>> On 11/16/2020 01:23 PM, Jonathan Billings wrote:
>>> On Sun, Nov 15, 2020 at 07:49:09PM -0500, H wrote:
>>>> I have been having some problems with hardware RAID 1 on the
>>>> motherboard that I am running CentOS 7 on. After a BIOS
2020 Nov 18
0
Intel RST RAID 1, partition tables and UUIDs
On Tue, 2020-11-17 at 08:01 -0600, Valeri Galtsev wrote:
> > On Nov 17, 2020, at 1:07 AM, hw <hw at gc-24.de> wrote:
> [...]
> > If you don't require Centos, you could go for Fedora instead. Fedora has btrfs
> > as default file system now which has software raid built-in, and Fedora can have
> > advantages over Centos.
> >
> > There are
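For illustration, btrfs' built-in mirroring is set up at mkfs time. A minimal sketch, assuming two spare disks /dev/sdb and /dev/sdc (this destroys their contents):

    mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc   # mirror both metadata and data
    btrfs filesystem show                            # verify the two-device filesystem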
2007 Apr 25
2
Raid 1 newbie question
Hi, I have a RAID 1 CentOS 4.4 setup and now have this /proc/mdstat output:

[root at server admin]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 hdc2[1] hda2[0]
      1052160 blocks [2/2] [UU]
md1 : active raid1 hda3[0]
      77023552 blocks [2/1] [U_]
md0 : active raid1 hdc1[1] hda1[0]
      104320 blocks [2/2] [UU]

What happens with md1? My dmesg output is:
[root at
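The [2/1] [U_] state means md1 has only one of its two members; the hdc member has dropped out. Assuming the matching hdc partition (hdc3, per the pattern of the other arrays) is still healthy, a typical recovery looks like:

    mdadm --detail /dev/md1          # confirm which member is missing
    mdadm /dev/md1 --add /dev/hdc3   # re-add it; the mirror starts rebuilding
    watch cat /proc/mdstat           # monitor the resync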
2011 Apr 01
5
question on software raid
dmesg is not reporting any issues, and /proc/mdstat looks fine:

md0 : active raid1 sdb1[1] sda1[0]
      X blocks [2/2] [UU]

However, /var/log/messages says:

smartd[3392]: Device /dev/sda, 20 offline uncorrectable sectors

The machine is running fine and the RAID array looks good; what is up with smartd? Thanks, Jerry
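smartd is reporting drive-level defects that md will not notice until it reads the affected sectors. A hedged way to investigate, using standard smartmontools and the md sysfs interface:

    smartctl -a /dev/sda                        # full SMART report, incl. uncorrectable-sector counts
    smartctl -t long /dev/sda                   # start a long offline self-test
    echo check > /sys/block/md0/md/sync_action  # scrub: md reads every sector, repairing from the mirror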
2005 Jun 25
3
Software RAID muck up
Hi, I hope someone can help. I've just started to play with software RAID on CentOS 3.5 and was trying to simulate a faulty drive by using the -f switch on mdadm to mark the partition (drive) as faulty. I then removed and re-added the drive, which quite happily rebuilt according to /proc/mdstat and the output from the --detail switch of mdadm. After all this mucking around I shut down the
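The fail/remove/re-add cycle described there maps to three mdadm manage-mode calls. A sketch with a hypothetical member partition:

    mdadm /dev/md0 --fail   /dev/hdc1   # mark the member faulty (the -f switch)
    mdadm /dev/md0 --remove /dev/hdc1   # detach it from the array
    mdadm /dev/md0 --add    /dev/hdc1   # re-add it; the mirror rebuilds
    mdadm --detail /dev/md0             # confirm the state afterwards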
2014 Apr 01
2
Adding a new disk to an existing raid 10
Dear all, I'm not used to handling software RAID. I've inherited a server which has RAID 10 set up. One of our disks failed, and it's to be replaced today. My question is: any hint how to add this new disk to the existing RAID array? My first thought is:
- create partitions identical to those on the other disks in the array I'd like to add it to
- add it to the RAID
Though I'm extremely worried of
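That two-step plan is the standard one. A hedged sketch, assuming MBR partition tables, /dev/sda as a surviving member, and /dev/sdd as the replacement (for GPT disks, sgdisk would replace sfdisk):

    sfdisk -d /dev/sda | sfdisk /dev/sdd   # copy the partition layout to the new disk
    mdadm /dev/md0 --add /dev/sdd1         # add the new partition; recovery starts automatically
    cat /proc/mdstat                       # watch the rebuild progress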
2012 Jul 18
1
RAID card selection - JBOD mode / Linux RAID
I don't think this is off topic, since I want to use JBOD mode so that Linux can do the RAID. I'm hopefully going to run this with CentOS 5 and Ubuntu 12.04 on a Sunfire x2250. It's hard to get answers I can trust out of vendors :-) I have a Sun RAID card which I am pretty sure is an LSI OEM. It is a 3Gb/s SAS1 card with 2 external connectors like the one on the right here:
2010 Jul 21
4
Fsck on mdraid array
Something seems to be wrong with my file systems, and I want to fsck everything. But I cannot. The setup consists of 2 HDs, carrying 3 RAID1 (ext3) file systems (boot, /, swap). The OS is up-to-date CentOS 5. So I boot from the CentOS 5.3 DVD in rescue mode, do not mount the file systems, and try to run:

fsck -y /dev/md0
fsck -y /dev/md1
fsck -y /dev/md2

For each try I get an error message:
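A common cause in rescue mode is that the md arrays were never assembled, so the /dev/mdX nodes don't exist yet. A hedged sequence to try before the fsck runs:

    mdadm --assemble --scan   # assemble md0/md1/md2 from their on-disk superblocks
    cat /proc/mdstat          # confirm the arrays are up
    fsck -y /dev/md0          # then fsck the ext3 arrays as before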
2020 Sep 18
4
Drive failed in 4-drive md RAID 10
I got the email that a drive in my 4-drive RAID10 setup failed. What are my options? Drives are WD1000FYPS (Western Digital 1 TB 3.5" SATA).

mdadm.conf:
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/root level=raid10 num-devices=4 UUID=942f512e:2db8dc6c:71667abc:daf408c3

/proc/mdstat:
Personalities : [raid10]
md127 : active raid10 sdf1[2](F)
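The (F) flag marks sdf1 as the failed member. A typical replacement cycle, assuming the new disk takes the same device name after the swap:

    mdadm /dev/md127 --remove /dev/sdf1   # drop the failed member from the array
    # ...physically replace the disk and partition it like the others...
    mdadm /dev/md127 --add /dev/sdf1      # add the new partition; the RAID10 rebuilds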
2013 Mar 03
4
Strange behavior from software RAID
Somewhere, mdadm is caching information. Here is my /etc/mdadm.conf file:

more /etc/mdadm.conf
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382
ARRAY /dev/md2 level=raid1 num-devices=2
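mdadm itself doesn't cache; the usual culprit is a stale copy of mdadm.conf baked into the initrd. A hedged fix on CentOS 6 or later (CentOS 5 would use mkinitrd rather than dracut):

    mdadm --detail --scan > /etc/mdadm.conf   # regenerate ARRAY lines from the live arrays
    dracut -f                                 # rebuild the initramfs so it picks up the new file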
2009 Jan 10
3
Poor RAID performance new Xeon server?
I have just purchased an HP ProLiant ML110 G5 server and installed CentOS 5.2 x86_64 on it. It has the following spec:

Intel(R) Xeon(R) CPU 3065 @ 2.33GHz
4GB ECC memory
4 x 250GB SATA hard disks running at 1.5Gb/s

The onboard RAID controller is enabled, but at the moment I have used mdadm to configure the array.

RAID bus controller: Intel Corporation 82801 SATA RAID Controller

For a simple
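For a first rough comparison of raw-disk versus array throughput, hdparm's read benchmark is a common starting point (results are noisy; device names are assumptions):

    hdparm -tT /dev/sda   # buffered/cached read speed of one member disk
    hdparm -tT /dev/md0   # the same read path through the md array
    cat /proc/mdstat      # make sure no resync is competing for I/O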
2005 May 21
1
Software RAID CentOS4
Hi, I have a system with two IDE controllers running RAID1. As a test I powered down, removed one drive (hdc), and powered back up. The system came up fine, so I powered down, installed a new drive (hdc), and powered back up. /proc/mdstat indicated RAID1 active with hda only. I thought it would auto-add the new hdc drive... Also, when I removed the new drive and added the original hdc back, the swap partitions
2009 Nov 02
5
info about hdds in raid
How can I tell which HDD to swap when "cat /proc/mdstat" says one HDD of the RAID1 array has died? Do the HDDs have serial numbers that I can see in "reality", and can I get that number from e.g. a command's output? How would I know which HDD to swap in e.g. a RAID1 array? Thank you
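Serial numbers are readable in software and printed on the drive label, which makes that mapping possible. A sketch with a hypothetical device name:

    smartctl -i /dev/sdb     # prints the model and serial number of that device
    ls -l /dev/disk/by-id/   # persistent names embed the serial, symlinked to sdX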
2017 Jun 30
2
mdraid doesn't allow creation: device or resource busy
Dear fellow CentOS users, I have never experienced this problem with hard disk management before and cannot explain it to myself on any rational basis. The setup: I have a workstation for testing, running the latest CentOS 7.3 AMD64. I am evaluating oVirt and a storage HA setup as part of my bachelor's thesis. I have already been running a RAID1 (mdraid, lvm2) for the system and some oVirt 4.1 testing.
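When mdadm --create reports "device or resource busy", something typically still claims the disk: a device-mapper/LVM mapping or a leftover superblock. Hedged checks, with an assumed partition name (the wipe is destructive, so only run it after confirming the signature is stale):

    lsblk                      # is the disk already part of an LVM/dm stack?
    dmsetup ls                 # any device-mapper mappings on top of it?
    mdadm --examine /dev/sdb1  # leftover md superblock?
    wipefs -a /dev/sdb1        # clear old signatures if confirmed stale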
2009 May 08
3
Software RAID resync
I have configured 2 x 500GB SATA HDDs as software RAID1 with three partitions, md0, md1 and md2, with md2 at 400+ gigs. Now, almost 36 hours later, the status is:

cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdb1[1] hda1[0]
      104320 blocks [2/2] [UU]
        resync=DELAYED
md1 : active raid1 hdb2[1] hda2[0]
      4096448 blocks [2/2] [UU]
        resync=DELAYED
md2 : active raid1
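resync=DELAYED is expected here: arrays sharing the same physical disks resync one at a time, so md0 and md1 wait while the 400+ GB md2 syncs. Progress and the kernel's speed caps can be checked like this (the raised value is only an example):

    cat /proc/mdstat                                  # which array is syncing, and how fast
    cat /proc/sys/dev/raid/speed_limit_min            # kernel floor for resync throughput (KB/s)
    cat /proc/sys/dev/raid/speed_limit_max            # and the ceiling
    echo 50000 > /proc/sys/dev/raid/speed_limit_min   # temporarily push the resync harder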
2014 Feb 17
1
deleting FakeRaid -> what happens to the partitions/data
Hi, a server has FakeRAID installed and I want to remove it to make it mdadm-driven. If I delete the FakeRAID, including:
- disabling it in the BIOS
- removing the dmraid driver from the initrd
- deleting all metadata from the partitions
- deleting all dmraid packages
is the data still available on the drives, i.e. are the partitions, filesystems and files still OK? I know that FakeRAID controller
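For the metadata-deletion step, dmraid can erase its own on-disk metadata; since that metadata lives outside the partitions (typically at the end of the disk), erasing it normally leaves partitions and filesystems intact. A hedged sketch, device name assumed:

    dmraid -r              # list the RAID sets dmraid currently sees
    dmraid -r -E /dev/sda  # erase the FakeRAID metadata from that disk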