Similar to: "new install and software raid"

Displaying 20 results from an estimated 5000 matches similar to: "new install and software raid"

2008 Aug 16
0
kickstart and 5.2 x86_64 giving errors.
When I use my kickstart file (which works on 5.1 x86_64) with 5.2, I get the following error. I put my kickstart file at the end. Do I have something incompatible in the file? jerry -------------------------------------- Traceback (most recent call first): File "/usr/lib/anaconda/network.py", line 341, in lookupHostname ret = isys.pumpNetDevice(dev.get('device'),
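The traceback dies in anaconda's hostname lookup, so a fully specified network line is the first thing to check against the 5.2 parser. A minimal sketch, assuming a single eth0 NIC and DHCP (both assumptions, not taken from the original file):

  network --device eth0 --bootproto dhcp --hostname host.example.com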
2010 Nov 18
1
kickstart raid disk partitioning
Hello. A couple of years ago I installed two file-servers using kickstart. Each server has two 1TB SATA disks with two software RAID1 partitions as follows: # cat /proc/mdstat Personalities : [raid1] md1 : active raid1 sdb4[1] sda4[0] 933448704 blocks [2/2] [UU] md0 : active raid1 sdb1[1] sda2[2](F) 40957568 blocks [2/1] [_U] Now the drives are starting to fail, and next week
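For a two-disk layout like the one in that mdstat, the kickstart RAID section usually looks like the sketch below; mount points and sizes are assumptions read off the mdstat output, not the original ks file:

  clearpart --all --initlabel
  part raid.01 --size=40000 --ondisk=sda
  part raid.02 --size=40000 --ondisk=sdb
  part raid.11 --size=1 --grow --ondisk=sda
  part raid.12 --size=1 --grow --ondisk=sdb
  raid /     --fstype ext3 --level=1 --device=md0 raid.01 raid.02
  raid /home --fstype ext3 --level=1 --device=md1 raid.11 raid.12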
2008 Mar 28
3
questions on kickstart
I have two questions dealing with two different kickstart files. 1) My kickstart section does the RAID disk setup, and kickstart reports it cannot find sda. Why is that? sda is there and works. clearpart --all --initlabel part raid.01 --asprimary --bytes-per-inode=4096 --fstype="raid" --onpart=sda1 --size=20000 part swap --asprimary --bytes-per-inode=4096 --fstype="swap"
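One plausible cause: --onpart tells anaconda to reuse an existing partition, while the clearpart --all --initlabel line above it wipes the label first, so sda1 no longer exists when the part line runs. A hedged rework that lets anaconda create the partition itself (only --onpart swapped for --ondisk):

  clearpart --all --initlabel
  part raid.01 --asprimary --bytes-per-inode=4096 --fstype="raid" --ondisk=sda --size=20000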
2008 Aug 21
0
kickstart error on 5.2 exception
Hi, I am trying to get my kickstart file that worked under 5.1 to work under 5.2 CentOS x86_64. This is the error that I get: on the screen it says an exception occurred and gives me the option to save it. This is that file. I don't see anything odd that would cause it to crash. Can anyone help? My kickstart file is in the mix below. Seems to be related to the network; my line seems fine (I think) for
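When a 5.1-era file crashes in network code under 5.2, pinning the install interface on the boot prompt is a common workaround; a sketch, assuming the NIC is eth0 and the ks file is served over HTTP (both assumptions):

  linux ks=http://server/ks.cfg ksdevice=eth0 ip=dhcp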
2014 Jul 16
1
anaconda, kickstart, lvm over raid, logvol --grow, centos7 mystery
I am testing some kickstarts on an ESXi virtual machine with a pair of 16GB disks. Partitioning is LVM over RAID. If I use "logvol --grow" I get "ValueError: not enough free space in volume group". The only workaround I can find is to add --maxsize=XXX where XXX is at least 640MB less than available (10 extents, or 320MB, per created logical volume). The following snippet is failing with
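The described workaround translates to something like this sketch; the volume group name and size figures are illustrative, not from the failing snippet:

  part raid.01 --size=1 --grow --ondisk=sda
  part raid.02 --size=1 --grow --ondisk=sdb
  raid pv.01 --level=1 --device=md0 raid.01 raid.02
  volgroup vg0 pv.01
  # --maxsize set ~640MB below the free space in vg0, per the workaround above
  logvol / --vgname=vg0 --name=root --size=1024 --grow --maxsize=14500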
2019 Jul 09
2
adding uefi to kickstart CentOS 7
I am trying to add an EFI partition to my working kickstart file. bootloader --driveorder=sda --append="rhgb quiet biosdevname=0 net.ifnames=0" clearpart --all --initlabel part / --ondisk=sda --fstype xfs --size=20000 --asprimary part swap --ondisk=sda --size=4000 --asprimary part /boot/efi --ondisk=sda --fstype efi --size=1000 --asprimary part /home --ondisk=sda
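For comparison, the ESP line a stock CentOS 7 UEFI install writes into its own anaconda-ks.cfg looks roughly like this (the fsoptions string is from memory, so treat it as an assumption); note the installer itself must also be booted in UEFI mode, or the efi fstype is refused:

  part /boot/efi --ondisk=sda --fstype="efi" --size=200 --fsoptions="umask=0077,shortname=winnt"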
2007 Apr 25
2
Raid 1 newbie question
Hi, I have a RAID 1 CentOS 4.4 setup and now have this /proc/mdstat output: [root at server admin]# cat /proc/mdstat Personalities : [raid1] md2 : active raid1 hdc2[1] hda2[0] 1052160 blocks [2/2] [UU] md1 : active raid1 hda3[0] 77023552 blocks [2/1] [U_] md0 : active raid1 hdc1[1] hda1[0] 104320 blocks [2/2] [UU] What is happening with md1? My dmesg output is: [root at
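The [2/1] [U_] on md1 means it is running on one mirror half (hda3) with no second member. Given the hda/hdc pairing on md0 and md2, the missing member is presumably hdc3; re-adding it is the usual fix (the partition name is inferred, not confirmed by the excerpt):

  mdadm /dev/md1 --add /dev/hdc3
  watch cat /proc/mdstat   # rebuild progress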
2019 Apr 03
2
Kickstart putting /boot on sda2 (anaconda partition enumeration)?
Does anyone know how anaconda enumerates disk partitions when they are specified in kickstart? I quickly browsed through the anaconda installer source on GitHub but didn't see the relevant bits. I'm using the CentOS 6.10 anaconda installer. Somehow I am ending up with my swap partition on sda1, /boot on sda2, and root on sda3. For $REASONS I want /boot to be partition #1 (sda1)
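One way to pin partition numbers on CentOS 6 is to carve the disk yourself in %pre and hand anaconda the finished partitions via --onpart; a sketch under the assumption that a fixed layout is acceptable (sizes are illustrative):

  part /boot --onpart=sda1 --fstype=ext4
  part swap  --onpart=sda2
  part /     --onpart=sda3 --fstype=ext4

  %pre
  # runs before partitioning is applied, so the --onpart targets exist
  parted -s /dev/sda mklabel msdos mkpart primary ext4 1MiB 501MiB mkpart primary linux-swap 501MiB 4597MiB mkpart primary ext4 4597MiB 100%
  %end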
2007 Jun 07
2
error in kickstart file for raid1 setup
Hello, I'm trying to do a kickstart install of CentOS 5. I'm pulling it off a network server and I'm getting an error in the parsing of the file. It refers to line 31; I'm not going to show the complete file, but here is the indicated line: raid swap --fstype swap --level=RAID1 raid.4 raid.7 and the raid lines: part raid.7 --size=512 --ondisk=hdb part raid.4 --size=512
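The quoted line is missing the --device= argument that the raid command requires, and the RHEL5-era docs spell the level as a bare number; a hedged correction (md1 is an assumed device name):

  raid swap --fstype swap --level=1 --device=md1 raid.4 raid.7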
2010 Apr 06
2
kickstart + domU for static IP
I've set up a local webserver to store kickstart files for domUs. All parameters are respected apart from the network settings; the domU always gets DHCP. Can anyone help unwrap this one? Does one add hostname, ip, netmask and gateway values to the /etc/xen/blah.cfg file? # ---- domU kickstart file ----# url --url http://192.168.1.120/centos/5/os/i386 lang en_US.UTF-8 keyboard uk network
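For a static address, the network line has to carry every setting itself, on one line; a sketch with placeholder values (all addresses and names are illustrative):

  network --device eth0 --bootproto static --ip 192.168.1.50 --netmask 255.255.255.0 --gateway 192.168.1.1 --nameserver 192.168.1.1 --hostname domu1.example.com

If the installer still comes up on DHCP, the domU's kernel boot arguments are the other place install-time networking gets decided, so that side is worth checking too.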
2007 Apr 13
2
Anaconda can't squeeze out the repomd.xml
Greetings. There must be some minor changes to anaconda. I'm getting the error: "Cannot open repomd.xml file...." The file seems to be located in the repodata directory... I'm using the following .cf, taken directly from the CentOS 4.4 install: install url --url ftp://centos.westmancom.com/5.0/os/i386/ #cdrom lang en_US.UTF-8 langsupport --default=en_US.UTF-8 en_US.UTF-8
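A quick sanity check is to fetch the metadata by hand from the URL the ks file names (the repodata path is the standard layout, appended to the URL from the excerpt):

  wget ftp://centos.westmancom.com/5.0/os/i386/repodata/repomd.xml

Separately, CentOS 5 anaconda rejects some CentOS 4-era keywords such as langsupport, so a file taken straight from a 4.4 install needs more than a new URL (an observation about the era, not a diagnosis of this exact error).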
2012 Mar 06
1
kickstart partitioning and cylinder boundary
As I understand it, anaconda uses parted to partition (starting from CentOS 6). Using this as an example (kickstart configuration file): clearpart --all --drives=sda --initlabel part /boot --asprimary --size=200 --fstype=ext2 --ondisk=sda part swap --asprimary --size=16384 --fstype=swap --ondisk=sda part / --asprimary --size=512000 --fstype=ext4 --ondisk=sda part /scratch --asprimary --size=1 --grow
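Printing the resulting table in sectors is the most direct way to see where parted actually put the boundaries; a minimal sketch (align-check assumes the parted version shipped with CentOS 6):

  parted /dev/sda unit s print
  parted /dev/sda align-check optimal 1   # repeat per partition number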
2016 Aug 05
1
CentOS 7 kickstart question
On Thu, August 4, 2016 7:13 pm, Paul Heinlein wrote: > On Thu, 4 Aug 2016, Valeri Galtsev wrote: > >> Dear Experts, >> >> Could somebody point me to a kickstart HOWTO specific to CentOS 7? >> >> On CentOS 7 I am somehow always asked human-intervention questions >> about the drive, which defeats an unattended ks install. >> >> At least one snag I hit
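The interactive drive questions on CentOS 7 often go away once the ks file both initializes blank disk labels and names the target disk explicitly; a hedged sketch (the disk name sda is an assumption):

  zerombr
  ignoredisk --only-use=sda
  clearpart --all --initlabel --drives=sda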
2008 Apr 30
4
kickstart question
I have a couple of lines like: part / --ondisk=sda --fstype ext3 --size=20000 --asprimary part swap --ondisk=sda --size=4000 --asprimary part /home --ondisk=sda --fstype ext3 --size=1 --asprimary --grow in my kickstart file. Is there a way to have one kickstart file that works for both hda and sda? I would like to have one kickstart file that works for either an hda
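The usual trick is to detect the disk in %pre and emit the part lines into a fragment pulled in with %include; a sketch in the layout of that era, with %pre at the end of the ks file (the detection logic assumes exactly one of the two disks is present):

  %include /tmp/part-include

  %pre
  # pick whichever disk node exists
  if [ -b /dev/sda ]; then disk=sda; else disk=hda; fi
  cat > /tmp/part-include <<EOF
  part / --ondisk=$disk --fstype ext3 --size=20000 --asprimary
  part swap --ondisk=$disk --size=4000 --asprimary
  part /home --ondisk=$disk --fstype ext3 --size=1 --asprimary --grow
  EOF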
2006 Mar 14
2
Help. Failed event on md1
Hi all, This morning I received this notification from mdadm: This is an automatically generated mail message from mdadm running on server-mail.mydomain.kom A Fail event had been detected on md device /dev/md1. Faithfully yours, etc. In /proc/mdstat I see this: Personalities : [raid1] md1 : active raid1 sdb2[2](F) sda2[0] 77842880 blocks [2/1] [U_] md0 : active raid1 sdb1[1] sda1[0]
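The (F) on sdb2 marks it faulty within md1. Once the disk itself has been checked, the usual sequence is remove, test, re-add (device names are from the quoted mdstat; re-adding assumes sdb actually proves healthy):

  mdadm /dev/md1 --remove /dev/sdb2
  smartctl -a /dev/sdb          # check the disk before trusting it again
  mdadm /dev/md1 --add /dev/sdb2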
2014 Dec 03
7
DegradedArray message
Received the following message in mail to root: Message 257: >From root at desk4.localdomain Tue Oct 28 07:25:37 2014 Return-Path: <root at desk4.localdomain> X-Original-To: root Delivered-To: root at desk4.localdomain From: mdadm monitoring <root at desk4.localdomain> To: root at desk4.localdomain Subject: DegradedArray event on /dev/md0:desk4 Date: Tue, 28 Oct 2014 07:25:27
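The first step with any DegradedArray mail is to see which member dropped out; a minimal check:

  mdadm --detail /dev/md0
  cat /proc/mdstat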
2009 May 08
3
Software RAID resync
I have configured 2x 500GB SATA HDDs as software RAID1 with three partitions, md0, md1 and md2, with md2 at 400+ GB. Now, almost 36 hours in, the status is: cat /proc/mdstat Personalities : [raid1] md0 : active raid1 hdb1[1] hda1[0] 104320 blocks [2/2] [UU] resync=DELAYED md1 : active raid1 hdb2[1] hda2[0] 4096448 blocks [2/2] [UU] resync=DELAYED md2 : active raid1
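resync=DELAYED itself is normal here: the kernel resyncs one array at a time when arrays share the same physical disks, so md0 and md1 wait for md2. If even the active resync is crawling, the throttle settings are worth a look (the echoed value is illustrative):

  cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
  echo 50000 > /proc/sys/dev/raid/speed_limit_min   # KB/s per device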
2014 Dec 04
2
DegradedArray message
Thanks for all the responses. A little more digging revealed: md0 is made up of two 250GB disks, on which the OS and a very large /var partition reside for a number of virtual machines. md1 is made up of two 2TB disks, on which /home resides. The challenge is that disk 0 of md0 is the problem, and it has a 524M /boot partition outside of the RAID partition. My plan is to back up /home (md1) and at a
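For the disk swap itself, the usual pattern on MBR-era CentOS is to clone the partition table to the replacement and reinstall the bootloader by hand, since /boot here is not mirrored; a heavily hedged sketch that assumes sdb is the surviving md0 member and sda the new blank disk:

  sfdisk -d /dev/sdb | sfdisk /dev/sda   # copy layout from the good disk
  grub-install /dev/sda                  # /boot must be restored onto sda first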
2016 Mar 12
4
C7 + UEFI + GPT + RAID1
Hi list, I'm new to UEFI and GPT. For several years I've used MBR partition tables. I've installed my system on software RAID1 (mdadm), using md0 (sda1, sdb1) for swap, md1 (sda2, sdb2) for /, and md2 (sda3, sdb3) for /home. From several how-tos concerning RAID1 installation, I must put each partition on a different md device. I asked a while ago if it's more correct to create the
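On the ESP question specifically: the firmware has to be able to read /boot/efi as plain FAT, which is why mirrored ESPs use mdadm metadata 1.0 (superblock at the end of the partition). A hedged sketch outside kickstart (device names assumed):

  mdadm --create /dev/md3 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1
  mkfs.vfat /dev/md3    # mounted later as /boot/efi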
2010 Jul 21
4
Fsck on mdraid array
Something seems to be wrong with my file systems, and I want to fsck everything. But I cannot. The setup consists of 2 HDs carrying 3 RAID1 (ext3) file systems (boot, /, swap). OS is up-to-date CentOS 5. So I boot from the CentOS 5.3 DVD in rescue mode, do not mount the file systems, and try to run fsck -y /dev/md0 fsck -y /dev/md1 fsck -y /dev/md2 For each try I get an error message:
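In rescue mode with mounting skipped, the md arrays may simply not be assembled, so /dev/md0 and friends do not exist and fsck fails; assembling first usually clears it (this assumes the superblocks are intact):

  mdadm --assemble --scan
  cat /proc/mdstat       # confirm md0/md1/md2 are up
  fsck -y /dev/md0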