Results similar to: "RAID + LVM Addition to CentOS 5 Install" (20 shown of an estimated 10000 matches)
2014 Jan 24 (4 messages): Booting Software RAID
I installed CentOS 6.x 64-bit with the minimal ISO and used two disks
in a RAID 1 array.
Filesystem Size Used Avail Use% Mounted on
/dev/md2 97G 918M 91G 1% /
tmpfs 16G 0 16G 0% /dev/shm
/dev/md1 485M 54M 407M 12% /boot
/dev/md3 3.4T 198M 3.2T 1% /vz
Personalities : [raid1]
md1 : active raid1 sda1[0] sdb1[1]
511936 blocks super 1.0
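For /boot on a GRUB-legacy system like CentOS 6, the metadata version shown above matters: 1.0 (and the old 0.90) keep the md superblock at the end of the device, so the bootloader can read the filesystem as if it were a plain partition. A quick way to confirm what an array uses, as a sketch:
# mdadm --detail /dev/md1 | grep -i version
# cat /sys/block/md1/md/metadata_version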
2012 Jun 07 (1 message): mdadm: failed to write superblock to
Hello,
I have a little problem: our server has a broken RAID.
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[2](F) sdb1[1]
2096064 blocks [2/1] [_U]
md2 : active raid1 sda3[2](F) sdb3[1]
1462516672 blocks [2/1] [_U]
md1 : active raid1 sda2[0] sdb2[1]
524224 blocks [2/2] [UU]
unused devices: <none>
I have removed the partition:
# mdadm --remove
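For reference, the usual recovery cycle for a member flagged (F) looks like the sketch below, using the device names from the mdstat output above (verify them against your own layout first):
# mdadm --manage /dev/md0 --fail /dev/sda1      # mark the member failed (already (F) here)
# mdadm --manage /dev/md0 --remove /dev/sda1    # drop it from the array
... replace or retest the disk ...
# mdadm --manage /dev/md0 --add /dev/sda1       # re-add it and let it resync
# watch cat /proc/mdstat                        # follow the rebuild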
2014 Feb 07 (3 messages): Software RAID1 Failure Help
I am running software RAID1 on a somewhat critical server. Today I
noticed one drive is giving errors. Good thing I had RAID. I planned
on upgrading this server in the next month or so. Just wondering if there
is an easy way to fix this and avoid rushing the upgrade? Running on a
single drive is slowing down reads as well, I think.
Thanks.
Feb 7 15:28:28 server smartd[2980]: Device: /dev/sdb
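A common way to swap in a replacement without rushing the upgrade, sketched under the assumptions that the partitioning is MBR, sdb is the failing drive, and the new disk comes back as sdb:
# mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # repeat for every md the disk belongs to
... power down, swap the disk, boot from the good drive ...
# sfdisk -d /dev/sda | sfdisk /dev/sdb   # clone the partition table from the healthy disk
# mdadm /dev/md0 --add /dev/sdb1         # re-add each member and let it resync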
2016 Mar 12 (4 messages): C7 + UEFI + GPT + RAID1
Hi list,
I'm new to UEFI and GPT.
For several years I've used MBR partition tables. I've installed my
system on software RAID1 (mdadm) using md0 (sda1, sdb1) for swap,
md1 (sda2, sdb2) for /, and md2 (sda3, sdb3) for /home. The RAID1
how-tos I've read say I must put each partition on a separate md
device. I asked some time ago whether it's more correct to create the
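The tricky new piece under UEFI is the EFI System Partition: the firmware reads it directly, so it is usually kept outside the md arrays (or mirrored only with metadata 1.0, which the firmware can still read as plain FAT). A hypothetical GPT layout with sgdisk, with sizes assumed rather than taken from this thread:
# sgdisk -n 1:0:+200M -t 1:ef00 /dev/sda   # EFI System Partition
# sgdisk -n 2:0:+4G   -t 2:fd00 /dev/sda   # swap  -> md0
# sgdisk -n 3:0:+50G  -t 3:fd00 /dev/sda   # /     -> md1
# sgdisk -n 4:0:0     -t 4:fd00 /dev/sda   # /home -> md2
# sgdisk -R /dev/sdb /dev/sda && sgdisk -G /dev/sdb   # copy the table to sdb with fresh GUIDs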
2009 Jul 02 (4 messages): Upgrading drives in raid 1
I think I have solved my issue and would like some input from anyone who has
done this: pitfalls, errors, or whether I am just plain wrong.
CentOS 5.x, software RAID, 250 GB drives.
2 drives in the mirror, one spare, all the same size.
2 md devices on the mirror: one for /boot (about 100 MB), one that fills the rest of
the disk and contains the LVM volumes.
I was thinking of taking out the spare and adding a 500 GB drive.
I
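For what it's worth, a hedged sketch of the usual path on CentOS-5-era mdadm (which predates --replace): rotate the larger disks in one at a time, then grow. Device names here are illustrative:
# mdadm /dev/md1 --add /dev/sdc2                       # new 500 GB disk, partitioned and added
# mdadm /dev/md1 --fail /dev/sda2 --remove /dev/sda2   # retire a 250 GB member; wait for resync
... repeat for the other 250 GB disk ...
# mdadm --grow /dev/md1 --size=max                     # expand once every member is large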
2010 Jul 23 (5 messages): install on raid1
Hi All,
I'm currently trying to install CentOS 5.4 x86-64 on a RAID 1, so that if one of the 2 disks fails the server will still be available.
I installed GRUB on /dev/sda using the advanced GRUB configuration option during the install.
After the install is done, I boot into linux rescue mode, chroot into the filesystem and copy GRUB to both drives using:
grub>root (hd0,0)
grub>setup (hd0)
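As pasted, the commands above only reinstall GRUB on the first disk. To make the second drive bootable on its own, the classic GRUB-legacy trick is to temporarily map it as (hd0); a sketch assuming the second disk is /dev/sdb:
grub>device (hd0) /dev/sdb
grub>root (hd0,0)
grub>setup (hd0)
grub>quit
This writes the stage1 loader into sdb's MBR with paths that still resolve if sdb ever has to boot as the first BIOS disk.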
2006 Apr 02 (2 messages): raid setup
Hi,
I have 2 identical xSeries 346 servers with 2 identical IBM 72 GB SCSI drives each. What I
did was install the CentOS 4.2 serverCD on the first one and set the HDDs to
RAID1, with RAID0 for swap. Then I took the 2nd HDD from the 1st
server, swapped it with the 1st HDD in the 2nd server, and rebuilt the RAIDs. The
1st server rebuilt the array fine. My problem is the second server: after
rebuilding it and
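When arrays are cloned by physically swapping members like this, each rebuilt array keeps the identity of whichever superblock seeded it, which is worth checking on both machines before going further. A sketch:
# mdadm --examine /dev/sda1 | grep -E 'UUID|Events'   # per-member superblock identity
# mdadm --detail /dev/md0                             # what the running array believes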
2019 Feb 25 (7 messages): Problem with mdadm, raid1 and automatically adds any disk to raid
Hi.
CentOS 7.6.1810, fresh install; I use this as a base to create/upgrade new/old machines.
I was trying to set up two disks as a RAID1 array, using these lines:
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2
mdadm --create --verbose /dev/md2 --level=1 --raid-devices=2
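On the subject-line problem of mdadm grabbing every disk it sees: auto-assembly is driven by udev plus /etc/mdadm.conf, and one way to restrict it is to whitelist only the intended devices and arrays. A sketch of mdadm.conf, with placeholder UUIDs:
DEVICE /dev/sdb* /dev/sdc*
AUTO -all
ARRAY /dev/md0 UUID=<uuid from mdadm --detail /dev/md0>
ARRAY /dev/md1 UUID=<uuid from mdadm --detail /dev/md1>
followed by a rebuild of the initramfs (dracut -f) so the early-boot copy of the file matches.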
2015 Feb 18 (5 messages): CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi,
I just replaced Slackware64 14.1 running on my office's HP ProLiant
MicroServer with a fresh installation of CentOS 7.
The server has 4 x 250 GB disks.
Every disk is partitioned like this:
* 200 MB /dev/sdX1 for /boot
* 4 GB /dev/sdX2 for swap
* 248 GB /dev/sdX3 for /
There are supposed to be no spare devices.
/boot and swap are all supposed to be assembled in RAID level 1 across
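A sketch of how that layout is assembled by hand, with the RAID1 sets spanning all four disks and the RAID5 created with no spare (device names follow the sdX scheme above):
# mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[abcd]1   # /boot, 4-way mirror
# mdadm --create /dev/md1 --level=1 --raid-devices=4 /dev/sd[abcd]2   # swap
# mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[abcd]3   # /, RAID5
With --raid-devices equal to the number of members listed, mdadm creates the array with no spares.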
2013 Mar 03 (4 messages): Strange behavior from software RAID
Somewhere, mdadm is caching information. Here is my /etc/mdadm.conf file:
more /etc/mdadm.conf
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382
ARRAY /dev/md2 level=raid1 num-devices=2
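The 'cached' assembly data usually lives in two places besides this file: the superblocks on the member disks and the copy of mdadm.conf baked into the initramfs. A sketch for comparing and re-syncing them (the dracut step applies to dracut-based releases such as CentOS 6):
# mdadm --examine --scan                          # compare against the ARRAY lines above
# mdadm --examine --scan > /etc/mdadm.conf.new    # then merge by hand, keeping DEVICE/MAILADDR
# dracut -f                                       # rebuild the initramfs so early boot matches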
2007 Mar 06 (1 message): blocks 256k chunks on RAID 1
Hi, I have a RAID 1 (using mdadm) on CentOS Linux and in /proc/mdstat I
see this:
md7 : active raid1 sda2[0] sdb2[1]
26627648 blocks [2/2] [UU] [-->> it's OK]
md1 : active raid1 sdb3[1] sda3[0]
4192896 blocks [2/2] [UU] [-->> it's OK]
md2 : active raid1 sda5[0] sdb5[1]
4192832 blocks [2/2] [UU] [-->> it's OK]
md3 : active raid1 sdb6[1] sda6[0]
4192832 blocks [2/2]
2019 Mar 12 (1 message): CentOS 7 Installation Problems
I attempted to install CentOS 7 x86_64 on my machine that has the
following hardware:
Motherboard: ASRock X99 Taichi
BIOS:        AMI v P1.40, 08/04/2016
CPU:         Intel Core i7-5820K
RAM:         64 GB (8 x 8 GB DIMM)
Optical:     LG Blu-ray 25 GB / 50 GB burner
Storage:     2 - 120 GB PNY CS1311 SSD, 4 - 4 TB Western Digital
hard drives
             The 2
2010 Jan 05 (4 messages): Software RAID1 Disk I/O
I just installed the CentOS 5.4 64-bit release on a 1.9 GHz CPU with 8 GB of
RAM. It has 2 Western Digital 1.5 TB SATA2 drives in RAID1.
[root at server ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md2 1.4T 1.4G 1.3T 1% /
/dev/md0 99M 19M 76M 20% /boot
tmpfs 4.0G 0 4.0G 0% /dev/shm
[root at server ~]#
It's barebones
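For a first pass at the I/O question, raw sequential reads from the md device and from each member are the usual comparison, as a sketch (hdparm's number is only a rough streaming estimate):
# hdparm -t /dev/md2
# hdparm -t /dev/sda
# hdparm -t /dev/sdb
# iostat -x 5    # per-disk utilization under load, from the sysstat package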
2006 Jan 18 (1 message): 4.2 Lockup on Fujitsu-Siemens Primergy Econel 200
Hello everybody,
I'm looking for some insight into reproducible lockups with this server,
so please bear with the long description that follows:
I installed CentOS 4.0, then yum-updated it, resulting in a fully
updated CentOS 4.2. During the installation and updates there were no cold
boots. That's important because if you cold boot the server (updated
CentOS or not), it gets stuck at the
2010 Jul 01 (1 message): Superblock Problem
Hi all,
After rebooting my CentOS 5.5 server, I get the following message:
==================================
Red Hat nash version 5.1.19.6 starting
EXT3-fs: unable to read superblock
mount: error mounting /dev/root on /sysroot as ext3: invalid argument
setuproot: moving /root failed: No such file or directory
setuproot: error mounting /proc: No such file or directory
setuproot: error mounting
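When EXT3 cannot read the primary superblock, the usual first moves from rescue media are to check that the md device assembled at all and then to try a backup superblock. A sketch, with the root device name assumed:
# cat /proc/mdstat                          # did the root array assemble?
# dumpe2fs /dev/md2 | grep -i superblock    # list backup superblock locations
# e2fsck -b 32768 /dev/md2                  # fsck from a backup (32768 is typical for 4k-block filesystems)
If dumpe2fs cannot read the device either, mke2fs -n /dev/md2 prints where the backups would be without writing anything.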
2018 Jan 12 (5 messages): [PATCH 1/1] appliance: init: Avoid running degraded md devices
The '--no-degraded' flag in the first mdadm call inhibits the startup of an array unless all expected drives are present.
This prevents starting arrays in a degraded state.
The second mdadm call (after LVM is scanned) scans the devices not yet used and attempts to run all found arrays, even if they are in a degraded state.
Two new tests are added.
This fixes rhbz1527852.
Here is boot-benchmark
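A sketch of the two-pass idea the patch describes (the flags are standard mdadm; the exact invocations in the appliance init script may differ):
# pass 1: assemble only arrays that have every expected member present
mdadm --assemble --scan --no-degraded
# ... LVM scan runs here and may expose md members that were hidden so far ...
# pass 2: assemble whatever is left, starting degraded arrays if necessary
mdadm --assemble --scan --run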
2018 May 28 (9 messages): CentOS6: HELP! EFI boot fails after replacing disks...
OK, I wanted to replace the 500G disks in a Dell T20 server with new 2TB
disks. The machine has 4 SATA ports, one used for the optical disk and three
for the hard drives. It is set up with /dev/sda and /dev/sdb, each with three
partitions:
1 -- VFAT (for EFI)
2 -- ext4 (for /boot)
3 -- LVM
/dev/sda2 and /dev/sdb2 are a mirror raid (/dev/md0)
/dev/sda3 and /dev/sdb3 are a mirror raid
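After moving to new disks, the EFI side needs two pieces that md does not cover: the VFAT contents of the ESP and the boot entry in NVRAM. A sketch for rebuilding both on the new sda; the \EFI\redhat path is the usual CentOS 6 location, and the backup path is hypothetical:
# mkfs.vfat /dev/sda1                       # recreate the ESP filesystem
# mount /dev/sda1 /mnt
# cp -a /root/efi-backup/* /mnt/            # restore the EFI files from a saved copy (path hypothetical)
# efibootmgr -c -d /dev/sda -p 1 -L "CentOS" -l '\EFI\redhat\grub.efi'
# efibootmgr -v                             # confirm the new entry and the boot order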
2008 Oct 05 (3 messages): Software Raid Expert Needed
Hello all,
I have 2 x 250 GB SATA disks (sda and sdb).
# fdisk -l /dev/sda
Disk /dev/sda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 14939 119997486 fd Linux raid autodetect
/dev/sda2 14940 29878
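For an identical second disk, the MBR partition table is normally cloned rather than retyped, as in this sketch (it overwrites whatever is on sdb):
# sfdisk -d /dev/sda | sfdisk /dev/sdb   # dump sda's table and write it to sdb
# fdisk -l /dev/sdb                      # verify the copy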
2008 Mar 28 (3 messages): questions on kickstart
I have 2 questions dealing with 2 different kickstart files.
1) My kickstart section for RAID disk setup fails: kickstart reports it
cannot find sda. Why is that? sda is there and works.
clearpart --all --initlabel
part raid.01 --asprimary --bytes-per-inode=4096 --fstype="raid"
--onpart=sda1 --size=20000
part swap --asprimary --bytes-per-inode=4096 --fstype="swap"
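One plausible cause, offered as a guess: clearpart --all --initlabel wipes the very partitions that --onpart=sda1 points at, so by the time the part lines run there is no sda1 left to find. A sketch that lets anaconda create the partitions itself, plus the raid line that ties them together (sizes and fstype are illustrative):
part raid.01 --asprimary --size=20000 --ondisk=sda
part raid.02 --asprimary --size=20000 --ondisk=sdb
raid / --device=md0 --fstype=ext3 --level=RAID1 raid.01 raid.02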
2008 Dec 12 (1 message): Upgrade to new drives in raid, larger
Hi all,
In all my RAID experience, I have yet to have to do this, but I was
wondering how you guys would attempt it.
I have 3 drives in a RAID 1, with one as a hot spare.
They are 250 GB, with all the space used by two RAID devices: one holding
/boot, the other filled up with LVM.
Now, let's say down the road I want to put in 500 GB drives and replace
them... yikes.
I was thinking of taking out the
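Once the md array itself has been grown onto the larger disks (see the add/fail/--grow cycle sketched in the 2009 Jul 02 thread above), the LVM stack on top still has to be told about the new space. A sketch with the volume names assumed:
# pvresize /dev/md1                           # grow the PV to the new md size
# lvextend -L +200G /dev/VolGroup00/LogVol00  # grow a chosen LV (names assumed)
# resize2fs /dev/VolGroup00/LogVol00          # grow the ext3 filesystem, online is fine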