Displaying 20 results from an estimated 5000 matches similar to: "Kickstart file for software raid"
2014 Aug 07
1
kickstart - dont wipe data
Hi,
I am struggling with kickstart.
What I want to achieve is a reinstall, but some data partitions should
survive the install, i.e. they should not be formatted.
With a single disk this works; here is the relevant part of the
kickstart file (I shortened the name of the volume group):
...
zerombr
clearpart --none --initlabel
part /boot --fstype="xfs" --label=boot --onpart=vda1
part
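For orientation, the continuation such a file typically needs: the PV and VG are taken over with --noformat and only the root LV is reformatted. The device name (vda2) and the volume names below are placeholders, not the poster's:
part pv.01 --onpart=vda2 --noformat
volgroup vg pv.01 --noformat
logvol / --vgname=vg --name=root --useexisting --fstype="xfs"
logvol /data --vgname=vg --name=data --noformat
With this combination anaconda mounts /data but never runs mkfs on it.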
2008 Apr 28
1
Kickstart syntax for CentOS upgrade
I'd like to automate the upgrade from CentOS 4.6 to 5.1 as much as
possible. Since upgrades per se are not really recommended, I'm
planning to do a kickstart installation. However, I want to leave
one of the existing partitions (/scratch) untouched during the
installation. Here is my current layout (LogVol00 is swap so not
shown in the df output below):
# df -hl
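A hedged sketch of a disk section that would leave /scratch alone while rebuilding the rest; the disk (sda) and the LV names are assumptions:
part /boot --fstype ext3 --onpart=sda1
part pv.01 --onpart=sda2 --noformat
volgroup VolGroup00 pv.01 --noformat
logvol swap --vgname=VolGroup00 --name=LogVol00 --useexisting
logvol / --vgname=VolGroup00 --name=LogVol01 --useexisting --fstype ext3
logvol /scratch --vgname=VolGroup00 --name=LogVol02 --noformat
Per the kickstart docs, --useexisting reuses and reformats an LV, while --noformat reuses it untouched.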
2018 Aug 29
0
Kickstart file for software raid
Sorry - I did not include that I am actually "updating" a system from C6 to
C7 and that it has an existing RAID /dev/md0 and /dev/md1. Hit send too quickly.
Jerry
On Wed, Aug 29, 2018 at 3:52 PM Jerry Geis <jerry.geis at gmail.com> wrote:
> I am using a kickstart file for CentOS 7
>
> raid / --device=md0 --fstype="xfs"
>
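For an update that keeps the existing arrays, the raid command can take the devices over instead of rebuilding them; a sketch, assuming md1 carries data worth keeping:
raid / --device=md0 --fstype="xfs" --useexisting
raid /home --device=md1 --fstype="xfs" --noformat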
2006 Mar 14
8
PXE boot, Kickstart NFS install and %include...
I was just wondering how (or indeed if) people use the %include
directive in Kickstart configuration files when building systems via
NFS. I've been trying to modularise our Kickstart files a little to
make things more readable, having generic defaults and role specific
stuff split out into separate configs.
I've tried this configuration...
[root at archive kickstart]# cat
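The pattern that generally makes this work: %pre runs before the rest of the file is parsed, so it can fetch a role fragment from the NFS share into a local path that a %include then picks up. The host, paths and file names below are purely illustrative:
%include /tmp/role.ks
%pre
# pull the role-specific fragment from the kickstart share,
# then let the %include above resolve against it
mkdir -p /tmp/nfs
mount -t nfs -o nolock nfs.example.com:/export/kickstart /tmp/nfs
cp /tmp/nfs/roles/webserver.cfg /tmp/role.ks
umount /tmp/nfs
%end
(On older anaconda releases the closing %end can be omitted.)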
2012 Apr 12
1
CentOS 6.2 anaconda bug?
I have a kickstart file with the following partitioning directives:
part /boot --fstype ext3 --onpart=sda1
part pv.100000 --onpart=sda2 --noformat
volgroup vol0 pv.100000 --noformat
logvol / --vgname=vol0 --name=lvol1 --useexisting --fstype=ext4
logvol /tmp --vgname=vol0 --name=lvol2 --useexisting --fstype=ext4
logvol swap --vgname=vol0 --name=lvol3 --useexisting
logvol /data --vgname=vol0
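One detail worth noting against the directives above: per the kickstart docs, --useexisting reuses a logical volume but still reformats it, while --noformat reuses it without touching the filesystem. A data-preserving line would look like this (lvol4 is an assumed name):
logvol /data --vgname=vol0 --name=lvol4 --noformat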
2008 Feb 06
4
Installation problems with large mirrored drives
I am trying to install CentOS 4.6 to a pair of 750GB hard drives. I can
successfully install to either of the drives as a single drive, but when
I try to use both drives and mirror the partitions, I start having
problems. Anaconda crashes as it is trying to format the drives.
This is what I'm trying to create:
/dev/md0: 200MB, /boot
/dev/md1: 2GB, swap
/dev/md2: rest of the
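A kickstart rendering of that target layout, with md2 shown as / since the original text is cut short, and the disk names assumed:
part raid.01 --size=200 --asprimary --ondisk=sda
part raid.02 --size=200 --asprimary --ondisk=sdb
part raid.11 --size=2048 --ondisk=sda
part raid.12 --size=2048 --ondisk=sdb
part raid.21 --size=1 --grow --ondisk=sda
part raid.22 --size=1 --grow --ondisk=sdb
raid /boot --device=md0 --level=1 raid.01 raid.02
raid swap --device=md1 --level=1 raid.11 raid.12
raid / --device=md2 --level=1 --fstype ext3 raid.21 raid.22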
2012 Oct 15
2
ext3 partition on LVM lost all data
Hello Gentlemen,
I would like to ask a question about an issue I have with a CentOS 6.3
installation.
On Friday I installed CentOS 6.3 on a server we had previously run 5.4 on. I
had created a KS file to let me connect to the server via VNC, with all
repos and packages preconfigured; I only needed to partition the hard
drive over VNC.
During the partitioning process I selected which
2010 Nov 18
1
kickstart raid disk partitioning
Hello.
A couple of years ago I installed two file-servers
using kickstart. Each server has two 1TB SATA disks
with two software RAID1 partitions, as follows:
# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb4[1] sda4[0]
933448704 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda2[2](F)
40957568 blocks [2/1] [_U]
Now the drives are starting to fail and next week
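For the swap itself, the usual mdadm sequence, reading the member names off the mdstat above (sda looks like the failing disk; note its members are sda2/sda4 while the good disk uses sdb1/sdb4):
# sda2 is already marked failed (F); drop it from md0
mdadm /dev/md0 --remove /dev/sda2
# fail and remove sda's member from md1, then power off and swap the disk
mdadm /dev/md1 --fail /dev/sda4 --remove /dev/sda4
# clone the good disk's partition table onto the new disk...
sfdisk -d /dev/sdb | sfdisk /dev/sda
# ...and let both arrays resync
mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md1 --add /dev/sda4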
2008 Mar 28
3
questions on kickstart
I have 2 questions dealing with 2 different kickstart files.
1) My kickstart section for RAID disk setup: kickstart reports it
cannot find sda. Why is that? sda is there and works.
clearpart --all --initlabel
part raid.01 --asprimary --bytes-per-inode=4096 --fstype="raid"
--onpart=sda1 --size=20000
part swap --asprimary --bytes-per-inode=4096 --fstype="swap"
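One likely cause: clearpart --all wipes the partition tables before the part lines run, so there is no sda1 left for --onpart to find; --onpart also conflicts with --size, which asks anaconda to create a new partition. A form that creates the member fresh (keeping the poster's size) would be:
clearpart --all --initlabel
part raid.01 --asprimary --fstype="raid" --size=20000 --ondisk=sda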
2014 Jul 16
1
anaconda, kickstart, lvm over raid, logvol --grow, centos7 mystery
I am testing some kickstarts on an ESXi virtual machine with a pair of 16GB disks.
Partitioning is LVM over RAID.
If I use "logvol --grow" I get "ValueError: not enough free space in volume group".
The only workaround I can find is to add --maxsize=XXX, where XXX is at least 640MB less than the available space
(10 extents, or 320MB, per created logical volume).
The following snippet fails with
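A sketch of the --maxsize workaround just described; sizes are placeholders and /boot is omitted for brevity, with --maxsize deliberately well under what the VG reports free:
part raid.01 --size=1 --grow --ondisk=sda
part raid.02 --size=1 --grow --ondisk=sdb
raid pv.01 --device=md1 --level=1 raid.01 raid.02
volgroup vg0 pv.01
logvol / --vgname=vg0 --name=root --size=1024 --grow --maxsize=14000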
2007 Jun 07
2
error in kickstart file for raid1 setup
Hello,
I'm trying to do a kickstart install of CentOS 5. I'm pulling it off a
network server and I'm getting an error in the parsing of the file. It
refers to line 31; I'm not going to show the complete file, but here is the
indicated line:
raid swap --fstype swap --level=RAID1 raid.4 raid.7
and the raid lines:
part raid.7 --size=512 --ondisk=hdb
part raid.4 --size=512
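Two things usually account for a parse error on such a line: raid needs a --device, and the CentOS 5 docs give the level as a bare number. A hedged corrected version (md1 is an assumption, as is hda for the second member):
part raid.7 --size=512 --ondisk=hdb
part raid.4 --size=512 --ondisk=hda
raid swap --fstype swap --level=1 --device=md1 raid.4 raid.7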
2019 Apr 03
2
Kickstart putting /boot on sda2 (anaconda partition enumeration)?
Does anyone know how anaconda partitioning enumerates disk partitions when
specified in kickstart? I quickly browsed through the anaconda installer
source on GitHub but didn't see the relevant bits.
I'm using the CentOS 6.10 anaconda installer.
Somehow I am ending up with my swap partition on sda1, /boot on sda2, and
root on sda3. For $REASONS I want /boot to be partition #1 (sda1)
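One much-used workaround is to create the partitions in %pre in a fixed order and hand them to anaconda with --onpart; everything below (sizes, filesystems) is illustrative only:
%pre
# carve the disk explicitly so /boot really lands on sda1
parted -s /dev/sda mklabel msdos \
  mkpart primary ext4 1MiB 513MiB \
  mkpart primary linux-swap 513MiB 4609MiB \
  mkpart primary ext4 4609MiB 100%
%end
part /boot --fstype=ext4 --onpart=sda1
part swap --onpart=sda2
part / --fstype=ext4 --onpart=sda3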
2017 Feb 15
1
Kickstart - part ignore onpart ??
I'm ill, I'm German ...
The script looks OK; it was copied from a slim anaconda installation.
I inserted only the "pre part"
and
part /boot --onpart=/dev/sda1
part / --onpart=/dev/sda2
part swap --onpart=/dev/sda3
As I wrote: jump over to another console and the partitions are there.
Sincerely
Andy
On Wednesday, 15.02.2017 at 11:16 -0800, John R wrote
2011 Jan 09
5
replace x86 with x64 system and reuse existing LVM
I want to replace an existing 32-bit installation with a 64-bit one (CentOS 5).
There's an existing LVM with lots of partitions. Most are used for Xen
guests. The system itself uses only one of them plus a separate /boot
partition that is not on LVM.
What's the best course of action here? Should I do the reinstall with
kickstart, or rather manually, and reuse the existing filesystems? As I
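Kickstart can do this cleanly: take the VG over with --noformat and reformat only the root LV, so the unmentioned Xen guest LVs stay untouched. Names and devices below are placeholders:
part /boot --fstype ext3 --onpart=sda1
part pv.01 --onpart=sda2 --noformat
volgroup vg_system pv.01 --noformat
logvol / --vgname=vg_system --name=root --useexisting --fstype ext3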
2007 Aug 07
4
Will this work? server+centOS5+100users?
I am an experienced MS administrator of W2003 servers & Exchange systems.
I have 5+ years of mid-level UNIX experience, but not in CentOS; I'm grounded in SCO UNIX (the real SCO UNIX).
We want to use CentOS on a recently graveyarded Dell PowerEdge 400SC server.
This is a P4 3.0 GHz with 4GB memory and two 250GB SATA disks.
We want to use this server with CentOS 5 to provide file and print resources to 100
2014 Dec 03
7
DegradedArray message
Received the following message in mail to root:
Message 257:
From root at desk4.localdomain Tue Oct 28 07:25:37 2014
Return-Path: <root at desk4.localdomain>
X-Original-To: root
Delivered-To: root at desk4.localdomain
From: mdadm monitoring <root at desk4.localdomain>
To: root at desk4.localdomain
Subject: DegradedArray event on /dev/md0:desk4
Date: Tue, 28 Oct 2014 07:25:27
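The triage such a mail usually prompts, sketched with assumed member names:
cat /proc/mdstat                 # which member dropped out, and is it resyncing?
mdadm --detail /dev/md0          # array state and the failed/removed slot
smartctl -a /dev/sdb             # health of the suspect disk
mdadm /dev/md0 --add /dev/sdb1   # re-add once the disk checks out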
2016 Mar 12
4
C7 + UEFI + GPT + RAID1
Hi list,
I'm new to UEFI and GPT.
For several years I've used MBR partition tables. I've installed my
system on software RAID1 (mdadm) using md0 (sda1, sdb1) for swap,
md1 (sda2, sdb2) for /, and md2 (sda3, sdb3) for /home. The several how-tos
on RAID1 installation I've read say I must put each partition on a
different md device. I asked some time ago whether it's more correct to create the
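The usual wrinkle with UEFI: the firmware reads the EFI system partition directly, so most how-tos keep one ESP per disk outside the array (or use metadata 1.0) and mirror everything else. A heavily hedged CentOS 7 sketch, all sizes and names assumed:
part /boot/efi --fstype=efi --size=200 --ondisk=sda
part raid.01 --size=512 --ondisk=sda
part raid.02 --size=512 --ondisk=sdb
raid /boot --device=md0 --level=1 --fstype=xfs raid.01 raid.02
part raid.11 --size=1 --grow --ondisk=sda
part raid.12 --size=1 --grow --ondisk=sdb
raid / --device=md1 --level=1 --fstype=xfs raid.11 raid.12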
2010 Oct 19
3
more software raid questions
Hi all!
Back in August several of you assisted me in solving a problem where one
of my drives had dropped out of (or been kicked out of) the RAID1 array.
Something vaguely similar appears to have happened just a few minutes ago,
upon rebooting after a small update. I received four emails like this,
one each for /dev/md0, /dev/md1, /dev/md125 and /dev/md126:
Subject: DegradedArray
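Device names like md125/md126 usually mean arrays were auto-assembled without matching ARRAY lines in mdadm.conf; a hedged starting point, with member names assumed:
mdadm --detail --scan                 # what the kernel actually assembled
mdadm --examine /dev/sda1 /dev/sdb1   # compare member superblocks
# if the arrays are healthy but misnamed, pin them in the config
# (and rebuild the initrd) so assembly is stable across reboots
mdadm --detail --scan > /etc/mdadm.conf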
2009 Apr 28
2
new install and software raid
Is there a reason why, after a software RAID install (from kickstart),
md1 is always unclean? md0 seems fine.
The boot screen says md1 is dirty, and
cat /proc/mdstat shows md1 being rebuilt.
Any ideas?
Jerry
--------------- my kickstart --------------
echo "bootloader --location=mbr --driveorder=$HD1SHORT --append=\"rhgb
quiet\" " >
2006 Mar 02
3
Advice on setting up Raid and LVM
Hi all,
I'm setting up CentOS 4.2 on two 80GB SATA drives.
The partition scheme is like this:
/boot = 300MB
/ = 9.2GB
/home = 70GB
swap = 500MB
The RAID is RAID 1.
md0 = 300MB = /boot
md1 = 9.2GB = LVM
md2 = 70GB = LVM
md3 = 500MB = LVM
Now, the confusing parts are:
1. When creating VolGroup00, should I include all PVs (md1, md2, md3), and then
create the LVs?
2. When setting up RAID 1, should I
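For what it's worth, the layout most how-tos converge on is simpler: one small RAID1 for /boot plus a single big RAID1 PV in one VG, carving /, /home and swap out as LVs. A sketch using the sizes from the post:
part raid.01 --size=300 --asprimary --ondisk=sda
part raid.02 --size=300 --asprimary --ondisk=sdb
raid /boot --device=md0 --level=1 raid.01 raid.02
part raid.11 --size=1 --grow --ondisk=sda
part raid.12 --size=1 --grow --ondisk=sdb
raid pv.01 --device=md1 --level=1 raid.11 raid.12
volgroup VolGroup00 pv.01
logvol / --vgname=VolGroup00 --size=9200 --name=root
logvol swap --vgname=VolGroup00 --size=500 --name=swap
logvol /home --vgname=VolGroup00 --size=1 --grow --name=home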