Displaying 20 results from an estimated 7000 matches similar to: "Help with software raid + LVM on Centos 6"

2008 Dec 12
1
Upgrade to new drives in raid, larger
Hi all, As part of my RAID experience, I have yet to have to do this, but was wondering how you guys would attempt it. I have 3 drives in a RAID 1, with one as a hot spare. They are 250 GB, with all the space used by two RAID devices: one with /boot, the other with LVM volumes filling it up. Now, let's say down the road I want to put in 500 GB drives and replace them... yikes. I was thinking of taking out the
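For this kind of swap-and-grow the usual mdadm sequence looks roughly like the sketch below (the member names /dev/sdc2 and /dev/sdd2 and the array /dev/md1 are placeholders, not taken from the post):
mdadm /dev/md1 --fail /dev/sdc2 --remove /dev/sdc2   # retire one old 250 GB member
mdadm /dev/md1 --add /dev/sdd2                       # add the partition on the new 500 GB disk
# wait for the resync (watch /proc/mdstat), repeat for the second disk, then:
mdadm --grow /dev/md1 --size=max                     # let the mirror use the new capacity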
2006 Mar 02
3
Advice on setting up Raid and LVM
Hi all, I'm setting up CentOS 4.2 on 2x 80 GB SATA drives. The partition scheme is like this: /boot = 300MB, / = 9.2GB, /home = 70GB, swap = 500MB. The RAID level is RAID 1: md0 = 300MB = /boot, md1 = 9.2GB = LVM, md2 = 70GB = LVM, md3 = 500MB = LVM. Now, the confusing part is: 1. When creating VolGroup00, should I include all the PVs (md1, md2, md3) and then create the LVs? 2. When setting up RAID 1, should I
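One common answer to question 1 is to put all three md PVs into a single volume group and carve the LVs out of that; a minimal sketch, assuming the sizes from the post and /boot staying on md0 outside LVM:
pvcreate /dev/md1 /dev/md2 /dev/md3
vgcreate VolGroup00 /dev/md1 /dev/md2 /dev/md3
lvcreate -L 9G -n root VolGroup00
lvcreate -L 70G -n home VolGroup00
lvcreate -L 500M -n swap VolGroup00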
2009 Jul 02
4
Upgrading drives in raid 1
I think I have solved my issue and would like some input from anyone who has done this, for pitfalls, errors, or if I am just wrong. CentOS 5.x, software RAID, 250 GB drives. Two drives in the mirror, one spare, all the same size. Two RAID devices on the mirror: one for boot (about 100MB), one that fills the rest of the disk and contains LVM partitions. I was thinking of taking out the spare and adding a 500 GB drive. I
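Once both active mirror members end up on 500 GB partitions, the remaining growth happens on the md and LVM side; a rough sketch (VolGroup00/LogVol00 is a hypothetical name, as is the size):
mdadm --grow /dev/md1 --size=max     # use the extra space on the new members
pvresize /dev/md1                    # tell LVM the PV got bigger
vgdisplay                            # check the new free extents
lvextend -L +200G /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00   # grow the filesystem (ext3/ext4)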
2007 Nov 29
1
RAID, LVM, extra disks...
Hi, This is my current config: /dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot; /dev/md1 -> 36 GB -> sda2 + sdd2 -> forms VolGroup00 with md2; /dev/md2 -> 18 GB -> sdb1 + sde1 -> forms VolGroup00 with md1; sda,sdd -> 36 GB 10k SCSI HDDs; sdb,sde -> 18 GB 10k SCSI HDDs. I have added two 36 GB 10K SCSI drives to it; they are detected as sdc and sdf. What should I do if I
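A plausible way to use the two new disks, assuming each gets one RAID partition (sdc1, sdf1) and the space should feed the existing VolGroup00; the LV name and size are placeholders:
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdf1
pvcreate /dev/md3
vgextend VolGroup00 /dev/md3
lvextend -L +30G /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00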
2019 Feb 25
7
Problem with mdadm, raid1 and automatically adds any disk to raid
Hi. CentOS 7.6.1810, fresh install - I use this as a base to create/upgrade new/old machines. I was trying to set up two disks as a RAID1 array, using these lines: mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1 mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2 mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2
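Note that --level=0 as quoted builds striped (RAID0) arrays; for actual mirrors the create lines would use --level=1, and recording the arrays in mdadm.conf keeps the initramfs from auto-assembling stray disks. A hedged sketch:
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2
mdadm --detail --scan >> /etc/mdadm.conf   # pin the arrays by UUID
dracut -f                                  # rebuild the initramfs with that config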
2009 May 08
3
Software RAID resync
I have configured 2x 500 GB SATA HDDs as software RAID1 with three partitions md0, md1 and md2, with md2 at 400+ GB. It has now been almost 36 hours and the status is: cat /proc/mdstat Personalities : [raid1] md0 : active raid1 hdb1[1] hda1[0] 104320 blocks [2/2] [UU] resync=DELAYED md1 : active raid1 hdb2[1] hda2[0] 4096448 blocks [2/2] [UU] resync=DELAYED md2 : active raid1
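resync=DELAYED is normal here: md resyncs arrays that share the same disks one at a time, so two arrays sit delayed while the large one works. If the active resync is crawling, the speed limits can be raised; a sketch (the value is just an example):
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
echo 50000 > /proc/sys/dev/raid/speed_limit_min   # KB/s per device
watch cat /proc/mdstat                            # the delayed arrays start once the active one finishes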
2016 Mar 12
4
C7 + UEFI + GPT + RAID1
Hi list, I'm new to UEFI and GPT. For several years I've used MBR partition tables. I've installed my system on software RAID1 (mdadm) using md0 (sda1,sdb1) for swap, md1 (sda2,sdb2) for /, and md2 (sda3,sdb3) for /home. From several how-tos concerning RAID1 installation, it seems I must put each partition on a different md device. I asked some time ago whether it is more correct to create the
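One workable GPT layout for this is an EFI system partition on each disk plus RAID partitions for the md devices; a sketch with assumed sizes and partition numbers:
sgdisk -n1:0:+200M -t1:ef00 /dev/sda   # EFI system partition (kept per-disk, not in md)
sgdisk -n2:0:+4G   -t2:fd00 /dev/sda   # swap  -> md0
sgdisk -n3:0:+50G  -t3:fd00 /dev/sda   # /     -> md1
sgdisk -n4:0:0     -t4:fd00 /dev/sda   # /home -> md2
sgdisk /dev/sda -R /dev/sdb            # clone the table to the second disk
sgdisk -G /dev/sdb                     # give the clone new random GUIDs
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3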
2007 Apr 25
2
Raid 1 newbie question
Hi, I have a RAID 1 CentOS 4.4 setup and now have this /proc/mdstat output: [root@server admin]# cat /proc/mdstat Personalities : [raid1] md2 : active raid1 hdc2[1] hda2[0] 1052160 blocks [2/2] [UU] md1 : active raid1 hda3[0] 77023552 blocks [2/1] [U_] md0 : active raid1 hdc1[1] hda1[0] 104320 blocks [2/2] [UU] What is happening with md1? My dmesg output is: [root@
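[2/1] [U_] means md1 has lost its second member; assuming the missing half is the matching partition on hdc, re-adding it would look like this:
mdadm --detail /dev/md1          # confirm which slot is missing
mdadm /dev/md1 --add /dev/hdc3   # hdc3 is an assumption based on the other arrays
cat /proc/mdstat                 # a rebuild should now be in progress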
2007 Sep 04
4
RAID + LVM Addition to CentOS 5 Install
Hi All, I have what I believe to be a pretty basic LVM & RAID setup on my CentOS 5 machine. RAID partitions: /dev/sda1,sdb1 /dev/sda2,sdb2 /dev/sda3,sdb3. During the install I created a RAID 1 volume md0 out of sda1,sdb1 for the boot partition, and then added sda2,sdb2 to a separate RAID 1 volume as well (md1). I then set up md1 as an LVM physical volume for volume group 'system'. I
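If the goal is to bring sda3/sdb3 into the picture as well, a typical continuation would be the following (partition and VG names come from the post, the rest is assumed):
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
pvcreate /dev/md2
vgextend system /dev/md2   # 'system' is the VG named in the post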
2011 Feb 23
2
LVM problem after adding new (md) PV
Hello, I have a weird problem after adding a new PV to an LVM volume group. It seems the error only comes up during boot. Please read the story. I have a couple of 1U machines. They all have two, four or more Fujitsu-Siemens SAS 2.5" disks, which are bound in RAID1 pairs with Linux mdadm. The first pair of disks always has two arrays (md0, md1). The small md0 is used for booting and the rest - md1
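Boot-time-only errors after adding an md PV often come down to the initrd not knowing about the new array; a hedged fix on a CentOS 5-era system would be:
mdadm --detail --scan >> /etc/mdadm.conf               # make sure the new array is listed
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)   # dracut -f on CentOS 6 and later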
2014 Jan 24
4
Booting Software RAID
I installed CentOS 6.x 64-bit with the minimal ISO and used two disks in a RAID 1 array. Filesystem Size Used Avail Use% Mounted on /dev/md2 97G 918M 91G 1% / tmpfs 16G 0 16G 0% /dev/shm /dev/md1 485M 54M 407M 12% /boot /dev/md3 3.4T 198M 3.2T 1% /vz Personalities : [raid1] md1 : active raid1 sda1[0] sdb1[1] 511936 blocks super 1.0
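With /boot on a RAID1 using 1.0 metadata, each disk carries a directly readable copy, so it is worth installing the bootloader on both disks so either one can boot alone; a sketch for the GRUB 0.97 shipped with CentOS 6:
grub-install /dev/sda
grub-install /dev/sdb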
2010 Jul 21
4
Fsck on mdraid array
Something seems to be wrong with my file systems, and I want to fsck everything. But I cannot. The setup consists of two HDDs carrying three RAID1 (ext3) file systems (/boot, /, swap). The OS is up-to-date CentOS 5. So I boot from the CentOS 5.3 DVD in rescue mode, do not mount the file systems, and try to run: fsck -y /dev/md0 fsck -y /dev/md1 fsck -y /dev/md2 For each try I get an error message:
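A likely cause is that the rescue environment never assembled the arrays, so the /dev/md* devices do not exist yet; assembling them first usually gets fsck going (sketch):
mdadm --assemble --scan   # build the arrays from their superblocks
cat /proc/mdstat          # confirm they are active
fsck -y /dev/md0
fsck -y /dev/md1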
2008 Nov 26
2
Reassemble software RAID
I have a machine on CentOS 5 with two disks in RAID1 using Linux software RAID. /dev/md0 is a small boot partition, /dev/md1 spans the rest of the disk(s). /dev/md1 is managed by LVM and holds the system partition and several other partitions. I had to take out disk sda from the RAID and low level format it with the tool provided by Samsung. Now I put it back and want to reassemble the array.
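Since the wiped disk lost its partition table along with everything else, the usual reassembly is to clone the layout from the surviving disk and re-add the members (sketch, assuming the two-partition layout described):
sfdisk -d /dev/sdb | sfdisk /dev/sda   # copy the MBR partition table from the good disk
mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md1 --add /dev/sda2
watch cat /proc/mdstat                 # resync progress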
2010 Dec 04
2
Fiddling with software RAID1 : continue working with one of two disks failing?
Hi, I'm currently experimenting with software RAID1 on a spare PC with two 40 GB hard disks. Normally, on a desktop PC with only one hard disk, I have a very simple partitioning scheme like this: /dev/hda1 = 80 MB /boot ext2, /dev/hda2 = 1 GB swap, /dev/hda3 = 39 GB / ext3. Here's what I'd like to do: partition a second hard disk (say, /dev/hdb) with three
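The classic way to do this migration is to build degraded mirrors on the second disk, move the data over, and only then absorb the first disk; a rough sketch with the device names from the post:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hdb1 missing
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hdb3 missing
# copy /boot and / onto the degraded arrays, fix fstab and the bootloader, reboot onto them,
# then pull the original disk into the mirrors:
mdadm /dev/md0 --add /dev/hda1
mdadm /dev/md2 --add /dev/hda3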
2014 Feb 07
3
Software RAID1 Failure Help
I am running software RAID1 on a somewhat critical server. Today I noticed one drive is giving errors. Good thing I had RAID. I planned on upgrading this server in the next month or so. Just wondering if there is an easy way to fix this and avoid rushing the upgrade? Having a single drive is slowing down reads as well, I think. Thanks. Feb 7 15:28:28 server smartd[2980]: Device: /dev/sdb
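Replacing just the failing disk, without rushing the whole upgrade, is usually the standard member swap (device names assumed; repeat the fail/remove for every md device that sdb belongs to):
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# power down or hot-swap, put the new disk in as /dev/sdb, then:
sfdisk -d /dev/sda | sfdisk /dev/sdb   # clone the partition table from the healthy disk
mdadm /dev/md0 --add /dev/sdb1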
2012 Jun 07
1
mdadm: failed to write superblock to
Hello, I have a little problem. Our server has a broken RAID. # cat /proc/mdstat Personalities : [raid1] md0 : active raid1 sda1[2](F) sdb1[1] 2096064 blocks [2/1] [_U] md2 : active raid1 sda3[2](F) sdb3[1] 1462516672 blocks [2/1] [_U] md1 : active raid1 sda2[0] sdb2[1] 524224 blocks [2/2] [UU] unused devices: <none> I have removed the partition: # mdadm --remove
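Because md1 still lists sda2 as active even though the disk is dying, it has to be failed explicitly before the drive is pulled; members already marked (F) can be removed directly. A sketch:
mdadm /dev/md1 --fail /dev/sda2 --remove /dev/sda2
mdadm /dev/md0 --remove /dev/sda1
mdadm /dev/md2 --remove /dev/sda3
smartctl -a /dev/sda   # confirm the disk itself is at fault before swapping it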
2023 Jan 09
2
RAID1 setup
Hi > Continuing this thread, and focusing on RAID1. > > I got an HPE ProLiant Gen10+ that has hardware RAID support (I can turn > it off if I want). What exact model of RAID controller is this? If it's an S100i SR Gen10 then it's not hardware RAID at all. > > I am planning two groupings of RAID1 (it has 4 bays). > > There is also an internal USB boot port. >
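A quick way to see what the box actually has: the S100i mentioned above is firmware-assisted software RAID, while a Smart Array P-series card would be real hardware RAID and shows up in lspci (the grep pattern is just a convenience):
lspci | grep -iE 'raid|sas|smart array'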
2008 Oct 05
3
Software Raid Expert Needed
Hello all, I have 2 x 250GB SATA disks (sda and sdb). # fdisk -l /dev/sda Disk /dev/sda: 250.0 GB, 250059350016 bytes 255 heads, 63 sectors/track, 30401 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 14939 119997486 fd Linux raid autodetect /dev/sda2 14940 29878
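Before touching anything, it helps to see what md itself thinks of those "Linux raid autodetect" partitions; a sketch:
mdadm --examine /dev/sda1 /dev/sdb1   # superblocks, array UUIDs, member state
mdadm --detail --scan                 # arrays the running kernel knows about
cat /proc/mdstat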
2007 Oct 17
2
Hosed my software RAID/LVM setup somehow
CentOS 5, original kernel (xen and normal) and everything, Linux RAID 1. I rebooted one of my machines after doing some changes to RAID/LVM and now the two RAID partitions that I made changes to are "gone". I cannot boot into the system. On bootup it tells me that the devices md2 and md3 are busy or mounted and drops me to the repair shell. When I run fs check manually it just tells
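"Busy or mounted" at that point often means LVM has already activated volume groups on top of md2/md3; one hedged recovery path from the repair shell (the LV path is hypothetical):
vgchange -an                    # release any VGs holding the arrays open
mdadm --stop /dev/md2 /dev/md3
mdadm --assemble --scan         # reassemble from the superblocks
vgchange -ay
fsck /dev/VolGroup00/LogVol00   # substitute the real LV path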
2010 Nov 18
1
kickstart raid disk partitioning
Hello. A couple of years ago I installed two file servers using kickstart. Each server has two 1TB SATA disks with two software RAID1 partitions, as follows: # cat /proc/mdstat Personalities : [raid1] md1 : active raid1 sdb4[1] sda4[0] 933448704 blocks [2/2] [UU] md0 : active raid1 sdb1[1] sda2[2](F) 40957568 blocks [2/1] [_U] Now the drives are starting to fail and next week
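For reference, a kickstart stanza that produces this kind of two-array layout looks roughly like the following (the sizes and the second mount point are assumptions, not taken from the original file):
part raid.01 --size=40000 --ondisk=sda
part raid.02 --size=40000 --ondisk=sdb
part raid.11 --size=1 --grow --ondisk=sda
part raid.12 --size=1 --grow --ondisk=sdb
raid /     --level=1 --device=md0 raid.01 raid.02
raid /data --level=1 --device=md1 raid.11 raid.12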