Displaying 20 results from an estimated 30000 matches similar to: "logical volume and drive names after mirroring a centos installation via rsync"
2013 Sep 15
1
grub command line
Hello Everyone
I have a remote CentOS 6.4 server (with KVM access). When I received
the server it was running with LVM on a single disk (sda).
I managed to remove LVM and install RAID 1 on the sda and sdb disks.
The mirroring is working fine; my only issue now is that every time I
reboot the server I get the grub command line and I have to boot
manually using the command
grub> configfile
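A common fix for this symptom on CentOS 6's GRUB legacy, assuming /boot
is the first partition on each disk (an assumption), is to re-embed the
loader on both drives from that same grub shell so the MBR finds a valid
stage2 again:

grub> root (hd0,0)
grub> setup (hd0)
grub> root (hd1,0)
grub> setup (hd1)
grub> quit

Adjust (hdX,Y) to wherever /boot actually lives.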
2020 Nov 15
5
(C8) root on mdraid
Hello everyone.
I'm trying to install CentOS 8 with root and swap partitions on
software raid. The plan is:
- create md0 raid level 1 with 2 hard drives: /dev/sda and /dev/sdb,
using a Linux Rescue CD,
- install CentOS 8 with VirtualBox on my laptop,
- rsync the CentOS 8 root partition onto /dev/md0p1,
- chroot in CentOS 8 root partition,
- configure /etc/mdadm.conf, grub.cfg, initramfs, install
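A minimal sketch of the chroot-and-bootloader step on the rescue CD,
assuming the array is already assembled and using /mnt/sysroot as an
arbitrary mount point (with whole-disk members, metadata 1.0 is needed
so the start of each disk stays readable by the BIOS):

mount /dev/md0p1 /mnt/sysroot
for fs in dev proc sys; do mount --bind /$fs /mnt/sysroot/$fs; done
chroot /mnt/sysroot
mdadm --detail --scan > /etc/mdadm.conf
dracut --force --regenerate-all
grub2-mkconfig -o /boot/grub2/grub.cfg
grub2-install /dev/sda && grub2-install /dev/sdb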
2018 Dec 05
3
Accidentally nuked my system - any suggestions ?
On 04/12/2018 at 23:50, Stephen John Smoogen wrote:
> In the rescue mode, recreate the partition table which was on the sdb
> by copying over what is on sda
>
>
> sfdisk -d /dev/sda | sfdisk /dev/sdb
>
> This will give the kernel enough to know it has work to do on
> rebuilding the array.
Once I made sure I retrieved all my data, I followed your suggestion,
and it looks
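For reference, the rebuild after copying the table over is normally just
re-adding each partition and watching the resync; the md and partition
names here are assumed to mirror sda's layout:

# mdadm /dev/md0 --add /dev/sdb1
# mdadm /dev/md1 --add /dev/sdb2
# cat /proc/mdstat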
2018 Jul 14
2
ssm vs. lvm: moving physical drives and volume group to another system
I did the following test:
###############################################
1.
Computer with CentOS 7.5 installed on hard drive /dev/sda.
Added two hard drives to the computer: /dev/sdb and /dev/sdc.
Created a new logical volume in RAID-1 using Red Hat's System Storage Manager:
ssm create --fstype xfs -r 1 /dev/sdb /dev/sdc /mnt/data
Everything works.
/dev/lvm_pool/lvol001 is mounted to
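On the machine the drives are moved to, the volume group should come up
with the stock LVM tools regardless of whether ssm created it; a sketch,
assuming the pool keeps the name lvm_pool shown above:

vgscan
vgchange -ay lvm_pool
mount /dev/lvm_pool/lvol001 /mnt/data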
2008 Oct 28
1
changing partition priority on install
Greetings.
I am in the process of installing 5.1 64-bit on a server. The server
has two 3ware cards: a 9550SX 12-port and an 8006 2-port, both SATA.
I want the 8006 board to be /dev/sda, and the 9550 to be /dev/sdb. My
plan is to install the os on /dev/sda (8006), and data on /dev/sdb
(9550). Unfortunately, the 9550 comes up as /dev/sda.
The 8006 is installed in slot 3, which is a 100MHz
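On CentOS 5 the probe order can usually be pinned after install in
/etc/modprobe.conf by listing the controller drivers in the desired
order and rebuilding the initrd; a sketch, assuming the stock 3ware
module names:

alias scsi_hostadapter 3w-xxxx    # 8006, probed first -> /dev/sda
alias scsi_hostadapter1 3w-9xxx   # 9550SX, probed second -> /dev/sdb
# then: mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)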
2014 Jan 24
4
Booting Software RAID
I installed CentOS 6.x 64-bit with the minimal ISO and used two disks
in RAID 1 array.
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2         97G  918M   91G   1% /
tmpfs            16G     0   16G   0% /dev/shm
/dev/md1        485M   54M  407M  12% /boot
/dev/md3        3.4T  198M  3.2T   1% /vz
Personalities : [raid1]
md1 : active raid1 sda1[0] sdb1[1]
      511936 blocks super 1.0
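To make either member of a setup like this bootable, a common GRUB
legacy recipe is to install the loader on each disk while remapping it
to (hd0), so the survivor still boots if its partner dies; the partition
numbers are assumptions:

grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)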
2014 Feb 11
1
A puzzle with grub-install
I ran into a problem when using grub-install experimentally
in what is obviously a foolish way,
since I was unable to boot the machine afterwards.
I got round the problem, as I shall explain,
but I'm still interested to know why the problem arose.
Having added a second hard disk to my CentOS-6.5 server,
as an experiment I gave the command
grub-install /dev/sdb
after checking (with fdisk)
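One thing worth inspecting in a puzzle like this is GRUB legacy's device
map, since grub-install takes the meaning of (hdX) from that file rather
than from the BIOS; the mapping below is only an example:

# cat /boot/grub/device.map
(hd0)   /dev/sda
(hd1)   /dev/sdb

If the BIOS enumerates the disks differently from the file, the stage1
written to /dev/sdb can end up pointing at the wrong disk for stage2.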
2007 Oct 07
1
Replacing failed software RAID drive
CentOS release 4.5
Hi All:
First of all I will admit to being spoiled by my MegaRAID SCSI RAID
controllers. When a drive fails on one of them I just replace the
drive and carry on without having to do anything else.
I now find myself in the situation where I have a failed drive on a
non-MegaRAID controller, specifically an Adaptec 29160 SCSI controller.
The system is an Acer G700 with 8
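Without a RAID controller the swap is manual but short; a sketch,
assuming an md0 built from sda1/sdb1 with sdb as the dead member:

# mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
(swap the physical drive)
# sfdisk -d /dev/sda | sfdisk /dev/sdb
# mdadm /dev/md0 --add /dev/sdb1
# watch cat /proc/mdstat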
2007 Apr 13
2
Anaconda can't squeeze out the repomd.xml
Greetings.
There must have been some minor changes to anaconda. I'm getting the error:
"Cannot open repomd.xml file...."
the file seems to be located in the repodata directory...
I'm using the following .cf taken directly from the CentOS 4.4 install:
install
url --url ftp://centos.westmancom.com/5.0/os/i386/
#cdrom
lang en_US.UTF-8
langsupport --default=en_US.UTF-8 en_US.UTF-8
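The usual cause of this error is a url line pointing at a tree whose
repodata doesn't match the installer that was booted; a minimal header
that should pair with CentOS 5 boot media, the mirror URL being only a
placeholder:

install
url --url http://mirror.centos.org/centos/5/os/i386/
lang en_US.UTF-8

The langsupport directive was dropped from kickstart around this
release, so it may need to go as well.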
2011 Sep 07
1
boot problem after disk change on raid1
Hello,
I have two disks, sda and sdb. One of them broke, so I replaced the
broken disk with a working one. I started the server in rescue mode,
created the partition table, and added all the partitions to the
software RAID.
I added the partitions to the RAID and rebooted:
# mdadm /dev/md0 --add /dev/sdb1
# mdadm /dev/md1 --add /dev/sdb2
# mdadm /dev/md2 --add /dev/sdb3
# mdadm
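Before rebooting at this point it is worth confirming the resync has
finished and re-embedding the bootloader on the replacement disk; a
sketch, assuming GRUB legacy and the device names above:

# cat /proc/mdstat
# mdadm --detail /dev/md0
# grub-install /dev/sdb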
2017 Jan 25
2
CentOS 7 install on one RAID 1 [not-so-SOLVED]
Let me see if I can, um, reboot this thread....
I made a RAID 1 of two raw disks, /dev/sda and /dev/sdb, *not* /dev/sdax
/dev/sdbx. Then I installed CentOS 7 on the RAID, with /boot, /, and swap
being partitions on the RAID. My problem is that grub2-install absolutely
and resolutely refuses to install on /dev/sda or /dev/sdb.
I've currently got it up in a half-assed rescue mode, and have
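grub2-install refusing here is expected with raw-disk members: the
physical drives carry no partition table, so GRUB has nowhere safe to
embed core.img. A sketch of the layout that usually works instead,
partitioning first and mirroring the partitions, with the 1 MiB BIOS
boot size as the convention and everything else an assumption:

parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart biosboot 1MiB 2MiB set 1 bios_grub on
parted -s /dev/sda mkpart raid 2MiB 100% set 2 raid on
sgdisk -R /dev/sdb /dev/sda && sgdisk -G /dev/sdb
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2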
2007 May 04
1
CentOS 5 + Nforce 4 SLI Intel SATA problems
I've been having an extremely hard time using the on-board SATA
controller on my nvidia-based board. I have two 160 GB Maxtor SATA
drives attached to the on-board controller, set up as software RAID0.
The problem is the array is extremely unreliable, and I'm getting
constant I/O errors like:
sd 0:0:0:0: SCSI error: return code = 0x00040000
end_request: I/O error, dev sda, sector
2014 Dec 05
3
CentOS 7 install software Raid on large drives error
----- Original Message -----
From: "Mark Milhollan" <mlm at pixelgate.net>
To: "Jeff Boyce" <jboyce at meridianenv.com>
Sent: Thursday, December 04, 2014 7:18 AM
Subject: Re: [CentOS] CentOS 7 install software Raid on large drives error
> On Wed, 3 Dec 2014, Jeff Boyce wrote:
>
>>I am trying to install CentOS 7 into a new Dell Precision 3610. I have
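Drives past 2 TiB force a GPT label, which on BIOS machines in turn
needs a small BIOS boot partition before grub2 will install; a kickstart
fragment along those lines, sizes and device names assumed:

zerombr
clearpart --all --initlabel
part biosboot --fstype=biosboot --size=1
part raid.01 --size=500 --ondisk=sda
part raid.02 --size=500 --ondisk=sdb
raid /boot --level=1 --device=md0 raid.01 raid.02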
2011 Feb 17
0
Can't create mirrored LVM: Insufficient suitable allocatable extents for logical volume : 2560 more required
I'm trying to set up an LVM mirror on 2 iSCSI targets, but can't.
I have added both /dev/sda & /dev/sdb to the LVM-RAID VG, and both
have 500 GB of space.
[root at HP-DL360 by-path]# pvscan
  PV /dev/cciss/c0d0p2  VG LVM       lvm2 [136.59 GB / 2.69 GB free]
  PV /dev/sda           VG LVM-RAID  lvm2 [500.00 GB / 490.00 GB free]
  PV /dev/sdb           VG LVM-RAID  lvm2 [502.70 GB /
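With classic lvcreate -m 1, LVM also wants room for a mirror log, by
default on a device other than the two legs, which is the usual trigger
for this message on a two-PV volume group. Two common workarounds, with
the size and LV name assumed:

lvcreate -m 1 --mirrorlog core -L 450G -n lv_data LVM-RAID
lvcreate -m 1 --alloc anywhere -L 450G -n lv_data LVM-RAID

The first keeps the log in memory; the second lets the on-disk log share
a PV with one of the legs.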
2018 Nov 12
3
extlinux troubles....
1. ext4
mke2fs -b 4096 -m 5 -t ext4 -O^uninit_bg -r 1 -v /dev/sdb1
sdparm --command=sync /dev/sdb
2. 150G
3. gdisk /dev/sdb
x
a
2
w
y
sdparm --command=sync /dev/sdb
4. mount /dev/sdb1 /mnt/sdb1
cd /mnt/sdb1
extlinux -i /mnt/sdb1/boot
umount /mnt/sdb1
sync;sync;sync
sdparm --command=sync /dev/sdb
cat gptmbr.bin >/dev/sdb1
sync;sync;sync
sdparm --command=sync /dev/sdb
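One detail that stands out in step 4: the syslinux documentation has
gptmbr.bin written to the first 440 bytes of the disk, not to the
partition, e.g. (the path is the usual CentOS location, but an
assumption here):

dd bs=440 count=1 conv=notrunc if=/usr/share/syslinux/gptmbr.bin of=/dev/sdb

Writing it over /dev/sdb1 instead overwrites the boot sector that
extlinux -i just installed into that partition.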
2019 Apr 03
2
Kickstart putting /boot on sda2 (anaconda partition enumeration)?
Does anyone know how anaconda partitioning enumerates disk partitions when
specified in kickstart? I quickly browsed through the anaconda installer
source on github but didn't see the relevant bits.
I'm using the CentOS 6.10 anaconda installer.
Somehow I am ending up with my swap partition on sda1, /boot on sda2, and
root on sda3. For $REASONS I want /boot to be partition #1 (sda1)
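anaconda allocates kickstart partitions by its own heuristics (size,
growability, and so on) rather than by the order of the part lines, so
pinning the numbering usually means creating the table in %pre and
claiming it with --onpart; a sketch with hypothetical sizes:

%pre
parted -s /dev/sda mklabel msdos \
  mkpart primary ext4 1MiB 513MiB \
  mkpart primary linux-swap 513MiB 4609MiB \
  mkpart primary ext4 4609MiB 100%
%end
part /boot --fstype=ext4 --onpart=sda1
part swap --onpart=sda2
part / --fstype=ext4 --onpart=sda3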
2007 May 15
5
Make Raid1 2nd disk bootable?
On earlier versions of CentOS, I could boot the install CD in rescue
mode and let it find and mount the installed system on the HD even when it
was just one disk of RAID1 partitions (type=FD). When booting from the
CentOS 5 disk, the attempt to find the system gives a box that says 'You
don't have any Linux partitions'. At the bottom of the screen there is
something that says:
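When the rescue environment gives up like this, the arrays can usually
still be assembled and mounted by hand; a sketch, with the md device
name assumed:

# mdadm --assemble --scan
# mkdir -p /mnt/sysimage
# mount /dev/md0 /mnt/sysimage
# chroot /mnt/sysimage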
2010 Nov 18
1
kickstart raid disk partitioning
Hello.
A couple of years ago I installed two file-servers
using kickstart. The server has two 1TB sata disks
with two software raid1 partitions as follows:
# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb4[1] sda4[0]
      933448704 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda2[2](F)
      40957568 blocks [2/1] [_U]
Now the drives are starting to fail, and next week
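For reference, a layout like the one above comes from kickstart raid
directives along these lines; the sizes and mount points are assumptions
reconstructed from the mdstat output:

part raid.01 --size=40000 --ondisk=sda
part raid.11 --size=40000 --ondisk=sdb
part raid.02 --grow --ondisk=sda
part raid.12 --grow --ondisk=sdb
raid / --level=1 --device=md0 raid.01 raid.11
raid /home --level=1 --device=md1 raid.02 raid.12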
2012 Jan 05
1
Corrupt mbr and disk directory map
We are running CentOS 5.6. All was fine until yesterday. I attempted
to tar a 14KB work file to a USB floppy (/dev/sdb) for transport to
another server. Unfortunately, I keyed in 'tar cvf /dev/sda filename'
instead of 'tar cvf /dev/sdb filename'. /dev/sda is our main
(boot/root/apps) scsi hard drive. I realized my mistake, but it was
too late. The system is still
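How much is recoverable depends on how far tar wrote: a 14 KB archive
starting at sector 0 clobbers the MBR and partition table but probably
stops short of the first partition's data, which usually begins 32 KiB
or more into the disk. With an sfdisk dump on hand the table can be put
back directly; without one, a scanner such as testdisk can often
reconstruct it from filesystem signatures (the backup filename below is
hypothetical):

# sfdisk /dev/sda < sda-table.dump
# testdisk /dev/sda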
2018 Jul 14
3
ssm vs. lvm: moving physical drives and volume group to another system
When I change /etc/fstab from /dev/mapper/lvol001 to
/dev/lvm_pool/lvol001, kernel 3.10.0-514 will boot.
Kernel 3.10.0-862 hangs and will not boot.
On Sat, Jul 14, 2018 at 1:20 PM Mike <1100100 at gmail.com> wrote:
>
> Maybe not a good assumption after all --
>
> I can no longer boot using kernel 3.10.0-514 or 3.10.0-862.
>
> boot.log shows:
>
> Dependency failed for
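For what it's worth, LVM exposes every volume under two equivalent
names, /dev/<vg>/<lv> and /dev/mapper/<vg>-<lv>, so the fstab line for
this volume would normally be one of (mount point and options assumed):

/dev/lvm_pool/lvol001         /mnt/data  xfs  defaults  0 0
/dev/mapper/lvm_pool-lvol001  /mnt/data  xfs  defaults  0 0

The /dev/mapper/lvol001 spelling quoted above matches neither form,
which may be what the newer kernel's stricter device dependency handling
trips over.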