Displaying 20 results from an estimated 40000 matches similar to: "raid 1"
2008 Oct 05
3
Software Raid Expert Needed
Hello all,
I have 2 x 250GB sata disks (sda and sdb).
# fdisk -l /dev/sda
Disk /dev/sda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 14939 119997486 fd Linux raid autodetect
/dev/sda2 14940 29878
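Assuming sdb carries the same fd-type layout and the partitions are not yet in use, a minimal sketch (not from the thread) of building the two mirrors would be:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
cat /proc/mdstat        # confirm both arrays exist and are resyncing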
2014 Dec 05
3
CentOS 7 install software Raid on large drives error
----- Original Message -----
From: "Mark Milhollan" <mlm at pixelgate.net>
To: "Jeff Boyce" <jboyce at meridianenv.com>
Sent: Thursday, December 04, 2014 7:18 AM
Subject: Re: [CentOS] CentOS 7 install software Raid on large drives error
> On Wed, 3 Dec 2014, Jeff Boyce wrote:
>
>>I am trying to install CentOS 7 into a new Dell Precision 3610. I have
2014 Jan 24
4
Booting Software RAID
I installed Centos 6.x 64 bit with the minimal ISO and used two disks
in RAID 1 array.
Filesystem Size Used Avail Use% Mounted on
/dev/md2 97G 918M 91G 1% /
tmpfs 16G 0 16G 0% /dev/shm
/dev/md1 485M 54M 407M 12% /boot
/dev/md3 3.4T 198M 3.2T 1% /vz
Personalities : [raid1]
md1 : active raid1 sda1[0] sdb1[1]
511936 blocks super 1.0
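One detail that matters for booting here is the /boot array's metadata version: 1.0 keeps the superblock at the end of the member, so legacy GRUB sees an ordinary filesystem on sda1/sdb1. A quick hedged check:
mdadm --detail /dev/md1 | grep -i version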
2014 Dec 10
4
CentOS 7 grub.cfg missing on new install
Greetings -
The short story is that I got my new install completed with the partitioning I
wanted, using software RAID, but after a reboot I ended up at a grub
prompt and do not appear to have a grub.cfg file. So here is a little
history of how I got here, because I know that anyone helping me
would ask for this information anyway. So this post is a little
long, but
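From a rescue boot, a hedged sketch of regenerating the missing configuration for a BIOS/MBR install (on UEFI the config lives under /boot/efi/EFI/centos/ instead):
chroot /mnt/sysimage
grub2-install /dev/sda
grub2-install /dev/sdb
grub2-mkconfig -o /boot/grub2/grub.cfg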
2019 Jan 09
7
Help finishing off Centos 7 RAID install
I've just finished installing a new Bacula storage server. Prior to doing the
install I did some research and ended up deciding to do the following
config.
6x4TB drives
/boot/efi efi_fs sda1
/boot/efi_copy efi_fs sdb1
/boot xfs RAID1 sda2 sdb2
VG RAID6 all drives containing
SWAP
/
/home
/var/bacula
Questions:
1) The big problem with this is that it is dependent on sda for
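One hedged way to reduce that dependency is to keep the second ESP in sync and register it with the firmware (the rsync target and boot label below are illustrative, not from the thread):
rsync -a --delete /boot/efi/ /boot/efi_copy/        # mirror the live ESP onto sdb1
efibootmgr -c -d /dev/sdb -p 1 -L "CentOS (sdb)" -l '\EFI\centos\shimx64.efi'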
2011 Dec 07
2
failure converting Linux ESX guest to KVM hypervisor
Hi,
I am experiencing a failure running virt-v2v to convert a Linux guest
on an ESX host to a RedHat KVM hypervisor. The output with the failure
follows. Any help/guidance is appreciated.
[root at storage-024 ~]# virt-v2v -ic esx://<ip address>/?no_verify=1 -op transferimages --bridge br0 dev-03 > /tmp/virt-v2v.output
error from Term::ReadKey::GetTerminalSize(): Unable to get
2016 Mar 12
4
C7 + UEFI + GPT + RAID1
Hi list,
I'm new with UEFI and GPT.
For several years I've used MBR partition table. I've installed my
system on software raid1 (mdadm) using md0(sda1,sdb1) for swap,
md1(sda2, sdb2) for /, md2 (sda3,sdb3) for /home. From several how-tos
concerning raid1 installation, I must put each partition on a different
md device. I asked a while ago whether it is more correct to create the
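For a GPT/UEFI layout the main difference from the MBR scheme above is an unmirrored ESP on each disk in front of the RAID members; a rough parted sketch (sizes are arbitrary assumptions):
parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart ESP fat32 1MiB 513MiB
parted -s /dev/sda set 1 boot on              # marks the EFI System Partition
parted -s /dev/sda mkpart swap 513MiB 8GiB
parted -s /dev/sda mkpart root 8GiB 58GiB
parted -s /dev/sda mkpart home 58GiB 100%
Repeat for sdb, then mirror sda2..4 with sdb2..4 via mdadm as usual.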
2019 Apr 03
2
Kickstart putting /boot on sda2 (anaconda partition enumeration)?
Does anyone know how anaconda partitioning enumerates disk partitions when
specified in kickstart? I quickly browsed through the anaconda installer
source on github but didn't see the relevant bits.
I'm using the centOS 6.10 anaconda installer.
Somehow I am ending up with my swap partition on sda1, /boot on sda2, and
root on sda3. For $REASONS I want /boot to be partition #1 (sda1)
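One hedged workaround (not from the thread) is to create the partitions yourself in %pre and hand them to anaconda with --onpart, so the numbering is fixed before anaconda enumerates anything; offsets below are placeholders:
%pre
parted -s /dev/sda mklabel msdos
parted -s /dev/sda mkpart primary ext2 1MiB 501MiB
parted -s /dev/sda mkpart primary linux-swap 501MiB 4501MiB
parted -s /dev/sda mkpart primary ext2 4501MiB 100%
%end

part /boot --fstype=ext4 --onpart=sda1
part swap --onpart=sda2
part / --fstype=ext4 --onpart=sda3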
2011 Nov 18
1
How can I create raid 1 - Centos 5.7 64 minimal installation
Hello,
I have a server running a CentOS 5.7 64-bit minimal installation. I have
3 separate physical drives:
a 120 GB SSD and 2x 3 TB disks for storage.
My Linux installation is on the SSD, and I want to make a RAID 1 from
these two 3 TB disks to store data, mounted under /mnt/data.
Can you please tell me the path how this is possible?
Thanks for your help!
Best regards,
Here is some output of the commands I
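A hedged sketch, assuming the two 3 TB drives show up as sdb and sdc (over 2 TB, so they need GPT labels rather than MBR) and that /dev/md0 is free:
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart data 1MiB 100%
parted -s /dev/sdc mklabel gpt
parted -s /dev/sdc mkpart data 1MiB 100%
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mkfs.ext3 /dev/md0
mkdir -p /mnt/data
echo "/dev/md0 /mnt/data ext3 defaults 1 2" >> /etc/fstab
mount /mnt/data
mdadm --detail --scan >> /etc/mdadm.conf        # so the array assembles at boot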
2006 Apr 02
2
raid setup
Hi,
I have 2 identical xSeries 346 servers, each with 2 identical IBM 72GB SCSI drives. What I
did was install the CentOS 4.2 server CD on the first one and set the HDDs up as
RAID1, with RAID0 for swap. Then I took the 2nd HDD in the 1st
server, swapped it with the 1st HDD in the 2nd server, and rebuilt the RAIDs. The
1st server rebuilt the array fine. My problem is the second server; after
rebuilding it and
2010 Sep 18
1
Software RAID + LVM + Grub
I'm playing with software RAID and LVM in some virtual machines and
I've run into an issue that I can't find a good answer to in the docs.
I have the following RAID setup:
md0: sda1 and sdb1, RAID 1. This is /boot
md1: sda2 and sdb2, RAID 1. This is a PV for LVM.
VolGroup00, this is the volume group and md1 is the only PV in it.
LogVol00 is swap
LogVol01 is /
LogVol02 is /home
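For reference, a minimal sketch of building that LVM stack on top of md1 (names taken from the excerpt, sizes are placeholders):
pvcreate /dev/md1
vgcreate VolGroup00 /dev/md1
lvcreate -L 2G -n LogVol00 VolGroup00         # swap
lvcreate -L 20G -n LogVol01 VolGroup00        # /
lvcreate -l 100%FREE -n LogVol02 VolGroup00   # /home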
2018 Dec 05
3
Accidentally nuked my system - any suggestions ?
On 04/12/2018 at 23:50, Stephen John Smoogen wrote:
> In the rescue mode, recreate the partition table which was on the sdb
> by copying over what is on sda
>
>
> sfdisk -d /dev/sda | sfdisk /dev/sdb
>
> This will give the kernel enough to know it has things to do on
> rebuilding parts.
Once I made sure I retrieved all my data, I followed your suggestion,
and it looks
2006 Feb 24
3
Dom0 lvm/software raid rhel4.1 booting issues.
Basically the issue comes down to my Volume Groups not being found by
this initrd, causing good ole kernel panic.
initrd-2.6.12.6-xen3_12.1_rhel4.1.img
[root@xen01 lvm]# uname -a
Linux xen01.inside.***.com 2.6.9-22.0.2.ELsmp #1 SMP Thu Jan 5 17:13:01
EST 2006 i686 i686 i386 GNU/Linux
[root@xen01 lvm]# cat /etc/redhat-release
Red Hat Enterprise Linux ES release 4 (Nahant Update 2)
Everything
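A heavily hedged guess at a fix for that era's tooling is rebuilding the xen initrd with the raid and device-mapper modules forced in (the kernel version string is only inferred from the initrd filename above):
mkinitrd -f --preload=raid1 --with=dm-mod /boot/initrd-2.6.12.6-xen3_12.1_rhel4.1.img 2.6.12.6-xen3_12.1_rhel4.1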
2015 May 28
3
Re: Concurrent scanning of same disk
2015-05-27 15:21 GMT+03:00 Richard W.M. Jones <rjones@redhat.com>:
> On Wed, May 27, 2015 at 09:38:38AM +0300, NoxDaFox wrote:
> > * RuntimeError: file receive cancelled by daemon - On r =
> > libguestfsmod.checksums_out (self._o, csumtype, directory, sumsfile)
> > * RuntimeError: hivex_close: do_hivex_close: you must call 'hivex-open'
> > first to
2012 Oct 30
1
SCSI/IDE Devices in Guest
I'm experimenting with how to attach storage to a guest virtual machine (I
sent another message to the list about that, as it's slightly different).
Looking in the XML files for my virtual machine, I see this:
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/vm/myvm/tmpeuiVc9.qcow2'/>
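A disk element normally also carries a <target> line, and that is what decides how the disk is presented to the guest; hedged examples (device names are illustrative):
<target dev='sda' bus='scsi'/>    <!-- exposed to the guest as a SCSI disk -->
<target dev='hda' bus='ide'/>     <!-- or as an IDE disk -->
<target dev='vda' bus='virtio'/>  <!-- or paravirtual, usually fastest under KVM -->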
2010 Jul 23
5
install on raid1
Hi All,
I'm currently trying to install CentOS 5.4 x86_64 on a RAID 1, so that if one of the 2 disks fails the server will still be available.
I installed GRUB on /dev/sda using the advanced grub configuration option during the install.
After the install is done I boot into linux rescue mode, chroot into the filesystem, and copy grub to both drives using:
grub>root (hd0,0)
grub>setup (hd0)
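For the second drive, a common approach (a sketch, not necessarily the poster's exact commands) is to remap sdb as hd0 so its own MBR points at its own copy of /boot:
grub>device (hd0) /dev/sdb
grub>root (hd0,0)
grub>setup (hd0)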
2008 Nov 26
2
Reassemble software RAID
I have a machine on CentOS 5 with two disks in RAID1 using Linux software
RAID. /dev/md0 is a small boot partition, /dev/md1 spans the rest of the
disk(s). /dev/md1 is managed by LVM and holds the system partition and
several other partitions. I had to take out disk sda from the RAID and low
level format it with the tool provided by Samsung. Now I put it back and
want to reassemble the array.
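A hedged outline of the re-add, assuming sda comes back blank and sdb still holds the good layout:
sfdisk -d /dev/sdb | sfdisk /dev/sda     # recreate the partition layout on the wiped disk
mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md1 --add /dev/sda2
cat /proc/mdstat                         # watch the resync progress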
2009 May 18
4
unable to read partition table in log
Hi, recently I noticed the following error in the messages log:
May 18 15:59:52 mail kernel: sdb: assuming drive cache: write through
May 18 15:59:52 mail kernel: sdb : READ CAPACITY failed.
May 18 15:59:52 mail kernel: sdb : status=0, message=00, host=1, driver=00
May 18 15:59:52 mail kernel: sdb : sense not available.
May 18 15:59:52 mail kernel: sdb: assuming Write Enabled
May 18
2008 Aug 13
1
Boot from degraded sw RAID 1
OK, this is probably long, and your answer will surely make me slap my
forehead really hard... please help me understand what goes on.
I intend to install CentOS 5.1 afresh over software RAID level 1. SATA
drives are in AHCI mode.
I basically follow [1], though I have made some mistakes, as will be
explained. AFAIK GRUB does not boot off LVM, so I:
1. Build a 100MB RAID-type partition on each
2007 Nov 29
1
RAID, LVM, extra disks...
Hi,
This is my current config:
/dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot
/dev/md1 -> 36 GB -> sda2 + sdd2 -> form VolGroup00 with md2
/dev/md2 -> 18 GB -> sdb1 + sde1 -> form VolGroup00 with md1
sda,sdd -> 36 GB 10k SCSI HDDs
sdb,sde -> 18 GB 10k SCSI HDDs
I have added two more 36 GB 10K SCSI drives to it; they are detected as sdc and
sdf.
What should I do if I
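If the goal is simply more space in VolGroup00, a hedged sketch (after giving sdc and sdf a single full-size partition each) would be:
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdf1
pvcreate /dev/md3
vgextend VolGroup00 /dev/md3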