Displaying 20 results from an estimated 6000 matches similar to: "Upgrade to new drives in raid, larger"
2009 Jul 02
4
Upgrading drives in raid 1
I think I have solved my issue and would like some input from anyone who has
done this, on pitfalls, errors, or whether I am just wrong.
CentOS 5.x, software RAID, 250 GB drives.
2 drives in mirror, one spare. All same size.
2 devices in the mirror, one boot (about 100MB), one that fills the rest of
disk and contains LVM partitions.
I was thinking of taking out the spare and adding a 500 GB drive.
I
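The post is cut off here. A minimal sketch of the swap being described, assuming the outgoing spare is /dev/sdc and the new 500 GB drive appears as /dev/sdd (both names are guesses, not from the post); the mirror itself only grows once both active members sit on larger partitions:
# mdadm /dev/md1 --remove /dev/sdc2    # drop the old 250 GB spare from the LVM array
# mdadm /dev/md1 --add /dev/sdd2       # add a larger partition from the new disk as the spare
# mdadm --grow /dev/md1 --size=max     # later, once both mirror halves are on big partitions
# pvresize /dev/md1                    # let LVM see the extra space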
2012 Apr 27
1
Help with software raid + LVM on Centos 6
Hi all,
Please excuse the many posts.
Wondering if anyone can help me with the setup.
I have 2 x 2TB disks.
I would like to mirror them.
I would like to create two LVM volumes so that I can snapshot from one to the other.
During the CentOS 6 install, how would I go about this, as it's confusing?
So far I am here;
1) Created the following raid devices;
md0 500MB (use it for /boot)
md1 4000MB (use it
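The snippet is truncated, but outside the installer the same layout can be sketched roughly like this (device, VG and LV names are placeholders, not from the post):
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # for /boot
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # for LVM
# pvcreate /dev/md1
# vgcreate vg_data /dev/md1
# lvcreate -L 500G -n lv_main vg_data
# lvcreate -s -L 50G -n lv_main_snap /dev/vg_data/lv_main   # snapshot of the first LV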
2022 Apr 24
3
Installing mdadm and C7 on new computer
On 04/23/2022 09:19 PM, H wrote:
> On 04/19/2022 09:57 AM, Roberto Ragusa wrote:
>> On 4/18/22 1:27 PM, H wrote:
>>> I have a new computer with 2 x 2TB SSDs where I wanted to install C7 and use mdadm for RAID1 configuration and encrypting the /home partition. On the net I found https://tuxfixer.com/centos-7-installation-with-lvm-raid-1-mirroring/ which I adopted slightly with
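The tuxfixer recipe itself is not reproduced here. As a rough sketch of an encrypted /home on top of a RAID1 device, with all device and mapper names assumed rather than taken from the article:
# mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
# cryptsetup luksFormat /dev/md2          # encrypt the md device, not the raw disks
# cryptsetup luksOpen /dev/md2 home_crypt
# mkfs.xfs /dev/mapper/home_crypt
# mount /dev/mapper/home_crypt /home      # plus matching crypttab/fstab entries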
2019 Feb 25
7
Problem with mdadm, raid1 and automatically adds any disk to raid
Hi.
CENTOS 7.6.1810, fresh install - use this as a base to create/upgrade new/old machines.
I was trying to set up two disks as a RAID1 array, using these lines:
mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2
mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2
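Note that --level=0 as quoted builds a stripe (RAID0), not the RAID1 the subject describes; a mirror would look like this (a corrected sketch, not the commands from the thread):
# mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2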
2016 Mar 12
4
C7 + UEFI + GPT + RAID1
Hi list,
I'm new to UEFI and GPT.
For several years I've used an MBR partition table. I've installed my
system on software RAID1 (mdadm) using md0 (sda1, sdb1) for swap,
md1 (sda2, sdb2) for /, and md2 (sda3, sdb3) for /home. From several how-tos
concerning RAID1 installation, I must put each partition on a different
md device. I asked some time ago whether it's more correct to create the
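On GPT the partitioning step can be sketched with sgdisk, for example (sizes and layout are assumptions, and the EFI System Partition is usually kept outside the RAID or synced by hand):
# sgdisk -n 1:0:+200M -t 1:ef00 /dev/sda             # EFI System Partition
# sgdisk -n 2:0:+2G   -t 2:fd00 /dev/sda             # RAID member, e.g. swap
# sgdisk -n 3:0:0     -t 3:fd00 /dev/sda             # RAID member for / and /home
# sgdisk -R /dev/sdb /dev/sda && sgdisk -G /dev/sdb  # copy the layout to sdb, new GUIDs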
2014 Jan 24
4
Booting Software RAID
I installed CentOS 6.x 64-bit with the minimal ISO and used two disks
in a RAID 1 array.
Filesystem Size Used Avail Use% Mounted on
/dev/md2 97G 918M 91G 1% /
tmpfs 16G 0 16G 0% /dev/shm
/dev/md1 485M 54M 407M 12% /boot
/dev/md3 3.4T 198M 3.2T 1% /vz
Personalities : [raid1]
md1 : active raid1 sda1[0] sdb1[1]
511936 blocks super 1.0
2009 May 08
3
Software RAID resync
I have configured 2 x 500 GB SATA HDDs as software RAID1 with three partitions,
md0, md1 and md2, with md2 being 400+ GB.
It has now been almost 36 hours and the status is:
cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdb1[1] hda1[0]
104320 blocks [2/2] [UU]
resync=DELAYED
md1 : active raid1 hdb2[1] hda2[0]
4096448 blocks [2/2] [UU]
resync=DELAYED
md2 : active raid1
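resync=DELAYED simply means md serializes resyncs that share the same physical disks; md0 and md1 are waiting while another array (presumably the large md2) resyncs first. Progress and the speed limits can be checked roughly like this:
# cat /proc/mdstat                                   # which array is actually resyncing right now
# cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
# echo 50000 > /proc/sys/dev/raid/speed_limit_min    # raise the floor (KB/s) if the resync crawls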
2006 Mar 02
3
Advice on setting up Raid and LVM
Hi all,
I'm setting up CentOS 4.2 on 2 x 80 GB SATA drives.
The partition scheme is like this:
/boot = 300MB
/ = 9.2GB
/home = 70GB
swap = 500MB
The RAID is RAID 1.
md0 = 300MB = /boot
md1 = 9.2GB = LVM
md2 = 70GB = LVM
md3 = 500MB = LVM
Now, the confusing part is:
1. When creating VolGroup00, should I include all PVs (md1, md2, md3)? Then
create the LV.
2. When setting up RAID 1, should I
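For the first question, both a single VG and separate VGs work; one VolGroup00 spanning all three PVs can be sketched like this (LV names and sizes are placeholders):
# pvcreate /dev/md1 /dev/md2 /dev/md3
# vgcreate VolGroup00 /dev/md1 /dev/md2 /dev/md3
# lvcreate -L 8G   -n LogVol00 VolGroup00   # /
# lvcreate -L 65G  -n LogVol01 VolGroup00   # /home
# lvcreate -L 500M -n LogVol02 VolGroup00   # swap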
2008 Nov 26
2
Reassemble software RAID
I have a machine on CentOS 5 with two disks in RAID1 using Linux software
RAID. /dev/md0 is a small boot partition, /dev/md1 spans the rest of the
disk(s). /dev/md1 is managed by LVM and holds the system partition and
several other partitions. I had to take out disk sda from the RAID and
low-level format it with the tool provided by Samsung. Now I put it back and
want to reassemble the array.
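After the low-level format, sda has no partition table or md superblocks left, so a rough re-add sequence (assuming sdb still carries the good copy) is:
# sfdisk -d /dev/sdb | sfdisk /dev/sda   # copy the partition layout back onto sda
# mdadm /dev/md0 --add /dev/sda1
# mdadm /dev/md1 --add /dev/sda2
# cat /proc/mdstat                       # watch the rebuild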
2012 Jun 07
1
mdadm: failed to write superblock to
Hello,
I have a little problem. Our server has a broken RAID.
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[2](F) sdb1[1]
2096064 blocks [2/1] [_U]
md2 : active raid1 sda3[2](F) sdb3[1]
1462516672 blocks [2/1] [_U]
md1 : active raid1 sda2[0] sdb2[1]
524224 blocks [2/2] [UU]
unused devices: <none>
I have removed the partition:
# mdadm --remove
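The quoted command is cut off. A hedged sketch of the usual path for the layout shown (sda failing, sdb healthy) is to drop every sda member, swap the disk, and re-add:
# mdadm /dev/md0 --remove /dev/sda1
# mdadm /dev/md2 --remove /dev/sda3
# mdadm /dev/md1 --fail /dev/sda2 --remove /dev/sda2   # md1 still lists sda2 as active
(replace the physical disk and recreate its partitions, then)
# mdadm /dev/md0 --add /dev/sda1
# mdadm /dev/md1 --add /dev/sda2
# mdadm /dev/md2 --add /dev/sda3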
2014 Feb 07
3
Software RAID1 Failure Help
I am running software RAID1 on a somewhat critical server. Today I
noticed one drive is giving errors. Good thing I had RAID. I planned
on upgrading this server in the next month or so. Just wondering if there
is an easy way to fix this to avoid rushing the upgrade? Having a
single drive is slowing down reads as well, I think.
Thanks.
Feb 7 15:28:28 server smartd[2980]: Device: /dev/sdb
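A sketch of retiring the failing disk without rushing the whole upgrade, with the partition names assumed rather than taken from the post:
# mdadm --detail /dev/md0                              # confirm which member sits on sdb
# mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
(install a replacement disk, partition it to match sda, then)
# mdadm /dev/md0 --add /dev/sdb1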
2011 Feb 23
2
LVM problem after adding new (md) PV
Hello,
I have a weird problem after adding a new PV to an LVM volume group.
It seems the error only comes out at boot time. Please read the story.
I have a couple of 1U machines. They all have two, four or more Fujitsu-Siemens
SAS 2.5" disks, which are bound in RAID1 pairs with Linux mdadm.
The first pair of disks always has two arrays (md0, md1). The small md0 is used
for booting and the rest - md1
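A sketch of adding such a second array as a PV, including the step that tends to surface only at boot time, recording the new array in mdadm.conf (device and VG names assumed):
# mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
# pvcreate /dev/md2
# vgextend vg_data /dev/md2
# mdadm --detail --scan >> /etc/mdadm.conf   # so md2 is assembled early at boot; prune duplicate entries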
2007 Sep 04
4
RAID + LVM Addition to CentOS 5 Install
Hi All,
I have what I believe to be a pretty basic LVM & RAID setup on my
CentOS 5 machine:
Raid Partitions:
/dev/sda1,sdb1
/dev/sda2,sdb2
/dev/sda3,sdb3
During the install I created a RAID 1 volume md0 out of sda1,sdb1 for
the boot partition and then added sda2,sdb2 to a separate RAID 1
volume as well (md1). I then set up md1 as an LVM physical volume for
volume group 'system'. I
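Turning the remaining sda3/sdb3 pair into extra space for the 'system' VG can be sketched as follows (the LV being grown is hypothetical):
# mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
# pvcreate /dev/md2
# vgextend system /dev/md2
# lvextend -L +20G /dev/system/lv_data && resize2fs /dev/system/lv_data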
2014 Jul 16
1
anaconda, kickstart, lvm over raid, logvol --grow, centos7 mystery
I am testing some kickstarts on an ESXi virtual machine with a pair of 16 GB disks.
Partitioning is LVM over RAID.
If I use "logvol --grow" I get "ValueError: not enough free space in volume group".
The only workaround I can find is to add --maxsize=XXX, where XXX is at least 640 MB less than what is available
(10 extents, or 320 MB, per created logical volume).
Following snippet is failing with
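The failing snippet itself is cut off above. As a rough idea of the kind of kickstart block being described, with every size and name a placeholder and the --maxsize workaround from the post included:
part raid.01 --size=500 --ondisk=sda
part raid.02 --size=500 --ondisk=sdb
part raid.11 --size=1 --grow --ondisk=sda
part raid.12 --size=1 --grow --ondisk=sdb
raid /boot --level=1 --device=md0 raid.01 raid.02
raid pv.01 --level=1 --device=md1 raid.11 raid.12
volgroup vg00 pv.01
logvol / --vgname=vg00 --name=root --size=1024 --grow --maxsize=14000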
2010 Jul 21
4
Fsck on mdraid array
Something seems to be wrong with my file systems, and I want to fsck
everything. But I cannot.
The setup consists of 2 HDs, carrying 3 RAID1 (ext3) file systems (/boot,
/, swap). The OS is up-to-date CentOS 5.
So I boot from the CentOS 5.3 DVD in rescue mode, do not mount the file
systems, and try to run
fsck -y /dev/md0
fsck -y /dev/md1
fsck -y /dev/md2
For each try I get an error message:
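One common cause in this situation (an assumption, since the error text is cut off) is that rescue mode never assembled the arrays, so /dev/md0 and friends do not exist yet. A sketch of what to check first:
# cat /proc/mdstat            # are md0/md1/md2 listed at all?
# mdadm --assemble --scan     # assemble every array found in the superblocks
# fsck -y /dev/md0
# fsck -y /dev/md1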
2005 Apr 25
4
Suse 9.3 boot problem
Hi there
I've got SuSE 9.3 on a RAID 1 setup (md0: boot and root, md1:
home).
When I try to get the xen kernel booted, the process
goes up to a certain point and then reboots
I got two problems:
1) Since I used default suse parameters I would assume
all my settings should be OK, so why does it not boot?
2) When booting and getting to the reboot point, it
holds the messages only for one second. How can
2008 Apr 17
2
Question about RAID 5 array rebuild with mdadm
I'm using Centos 4.5 right now, and I had a RAID 5 array stop because
two drives became unavailable. After adjusting the cables on several
occasions and shutting down and restarting, I was able to see the
drives again. This is when I snatched defeat from the jaws of
victory. Please, someone with vast knowledge of how RAID 5 with mdadm
works, tell me if I have any chance at all
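When two members of a RAID5 drop out, the usual last-resort move is a forced assemble rather than re-adding disks one at a time; a sketch, with the member names assumed:
# mdadm --examine /dev/sd[bcd]1   # compare event counts before forcing anything
# mdadm --stop /dev/md0
# mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1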
2007 Apr 25
2
Raid 1 newbie question
Hi
I have a RAID 1 CentOS 4.4 setup and now have this /proc/mdstat output:
[root at server admin]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 hdc2[1] hda2[0]
1052160 blocks [2/2] [UU]
md1 : active raid1 hda3[0]
77023552 blocks [2/1] [U_]
md0 : active raid1 hdc1[1] hda1[0]
104320 blocks [2/2] [UU]
What is happening with md1?
My dmesg output is:
[root at
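md1 is running degraded: only hda3 is present and its partner has dropped out. Assuming hdc3 is the intended second member, re-adding it looks like:
# mdadm /dev/md1 --add /dev/hdc3
# cat /proc/mdstat               # should now show a rebuild onto hdc3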
2008 Oct 05
3
Software Raid Expert Needed
Hello all,
I have 2 x 250 GB SATA disks (sda and sdb).
# fdisk -l /dev/sda
Disk /dev/sda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 14939 119997486 fd Linux raid autodetect
/dev/sda2 14940 29878
2010 Dec 04
2
Fiddling with software RAID1 : continue working with one of two disks failing?
Hi,
I'm currently experimenting with software RAID1 on a spare PC with two
40 GB hard disks. Normally, on a desktop PC with only one hard disk, I
have a very simple partitioning scheme like this:
/dev/hda1 80 MB /boot ext2
/dev/hda2 1 GB swap
/dev/hda3 39 GB / ext3
Here's what I'd like to do. Partition a second hard disk (say, /dev/hdb)
with three
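The plan is cut off here, but the usual trick for mirroring an existing single-disk install (sketched with the device names from the post plus assumptions) is to build each array degraded on the new disk, copy the data, then absorb the old disk:
# mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hdb3 missing   # degraded mirror on the new disk only
(copy / onto /dev/md2, adjust fstab and the bootloader, reboot onto the array, then)
# mdadm /dev/md2 --add /dev/hda3                                         # the old disk becomes the second half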