similar to: Anaconda kickstart laying out / randomly; ignoring --ondisk in part command

Displaying 20 results from an estimated 3000 matches similar to: "Anaconda kickstart laying out / randomly; ignoring --ondisk in part command"

2016 Jun 01
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I did some additional testing - I stopped Kafka on the host and kicked off a disk check, and it ran at the expected speed overnight. I started Kafka this morning, and the RAID check's speed immediately dropped to ~2000K/sec. I then enabled the write-back cache on the drives (hdparm -W1 /dev/sd*). The RAID check is now running between 100000K/sec and 200000K/sec, and has been for several
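A minimal sketch, assuming the md array is /dev/md0 and the members are plain SATA drives (device names are not from the original post), of checking whether the drive write cache is the limiting factor:

  hdparm -W /dev/sda                 # query the current write-cache setting
  hdparm -W1 /dev/sda                # enable write-back cache (repeat per member drive)
  cat /proc/mdstat                   # the check line shows the current speed
  cat /sys/block/md0/md/sync_speed   # current resync/check rate in K/sec

Note that enabling the write-back cache on drives without a battery- or flash-backed controller risks data loss on power failure.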
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it. We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs: 2x E5-2650, 128 GB RAM, 12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA, dual-port 10 GB NIC. The drives are configured as one large
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest - we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one. Here is an iostat example from a host within the same cluster, but without the RAID check running:
[root@r2k1 ~]# iostat -xdmc 1 10
Linux 3.10.0-327.13.1.el7.x86_64 (r2k1)  05/27/16  _x86_64_  (32 CPU)
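For reference, a software RAID check on CentOS 7 is throttled by the md speed limits; a hedged sketch of starting a check by hand and loosening the floor (md0 and the value are illustrative, not taken from the thread):

  echo check > /sys/block/md0/md/sync_action     # kick off a check manually
  sysctl dev.raid.speed_limit_min                # current lower bound, K/sec per device
  sysctl -w dev.raid.speed_limit_min=50000       # raise the floor so competing I/O does not starve the check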
2006 Mar 02
0
Discrepancies in Anaconda-ks.cfg after kickstart
I am asking this here because, although it is a kickstart question, it may have something to do with the CentOS install. I took my Anaconda-ks.cfg from my system, turned it into a ks.cfg and did the install. Everything SEEMS ok, but why? First the partitioning information. ks.cfg supplied:
clearpart --all --drives=hda
part /boot --fstype ext3 --size=100 --ondisk=hda
part / --fstype ext3 --start=14
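For comparison, a minimal kickstart partitioning sketch that pins every partition to one disk with --ondisk (disk name and sizes are illustrative, not from the post):

  clearpart --all --drives=sda --initlabel
  part /boot --fstype ext3 --size=100 --ondisk=sda
  part swap --size=2048 --ondisk=sda
  part / --fstype ext3 --size=1 --grow --ondisk=sda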
2008 Apr 17
2
Question about RAID 5 array rebuild with mdadm
I'm using CentOS 4.5 right now, and I had a RAID 5 array stop because two drives became unavailable. After adjusting the cables on several occasions and shutting down and restarting, I was able to see the drives again. This is when I snatched defeat from the jaws of victory. Please, someone with vast knowledge of how RAID 5 with mdadm works, tell me if I have any chance at all
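A hedged sketch of the usual first steps before touching a stopped RAID 5 array (device names are assumptions, and --force should only be used once the event counters look sane, since assembling the wrong members can destroy data):

  mdadm --examine /dev/sd[bcdef]1                  # compare event counters and roles of each member
  mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
  cat /proc/mdstat                                 # confirm the array came up and watch any rebuild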
2005 Oct 11
0
AW: Re: xen 3.0 boot problem
> > Well, I'm using the qla2340 here on several boxes. It works
> > with Xen 2.0 but not with Xen 3.0 as part of SUSE Linux 10.0:
>
> Interesting. If the driver really does work flawlessly in
> Xen 2, then I think the culprit has to be interrupt routing.
>
> Under Xen 3, does /proc/interrupts show you're receiving interrupts?
I cannot boot with
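A quick, hedged way to see whether the HBA's interrupt line is firing at all (the grep pattern assumes the qla2xxx driver name):

  watch -n1 'grep -i qla /proc/interrupts'   # the counters should keep increasing under I/O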
2010 May 28
2
permanently add md device
Hi All, currently I'm setting up a 5.4 server and trying to create a 3rd RAID device. When I run:
$ mdadm --create /dev/md2 -v --raid-devices=15 --chunk=32 --level=raid6 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq
the device file "md2" is created and the RAID is being configured, but somehow
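To make an array like this persist across reboots, the usual approach is to record it in /etc/mdadm.conf; a sketch, assuming the array is /dev/md2 as above:

  mdadm --detail --scan          # prints an ARRAY line with the UUID of each running array
  # append the ARRAY line for /dev/md2 to /etc/mdadm.conf so it is assembled at boot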
2007 Apr 13
2
Anaconda can't squeeze out the repomd.xml
Greetings. There must be some minor changes to anaconda. I'm getting the error: "Cannot open repomd.xml file...." The file seems to be located in the repodata directory... I'm using the following .cfg taken directly from the CentOS 4.4 install:
install
url --url ftp://centos.westmancom.com/5.0/os/i386/
#cdrom
lang en_US.UTF-8
langsupport --default=en_US.UTF-8 en_US.UTF-8
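Before suspecting anaconda, it can help to confirm the mirror actually serves the metadata; a hedged check (URL taken from the post, the repodata path follows the standard yum repository layout):

  wget -O /dev/null ftp://centos.westmancom.com/5.0/os/i386/repodata/repomd.xml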
2019 Jun 14
3
zfs
Hi, folks, testing zfs. I'd created a raidz2 zpool and ran a large backup onto it. Then I pulled one drive (11-drive pool, one hot spare), and it resilvered with the hot spare. zpool status -x shows me:
 state: DEGRADED
status: One or more devices could not be used because the label is missing or invalid. Sufficient replicas exist for the pool to continue functioning in a degraded state.
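Once the hot spare has finished resilvering, the usual follow-up is either to replace the pulled disk or to promote the spare permanently; a sketch with assumed pool and device names:

  zpool status tank                        # identify the FAULTED/UNAVAIL device
  zpool replace tank old-disk new-disk     # resilver onto a replacement; the spare returns to the spare list
  zpool detach tank old-disk               # or: drop the failed device and keep the spare as a permanent member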
2024 Oct 19
2
How many disks can fail before a catastrophic failure occurs?
Hi there. I have 2 servers with this number of disks on each side:
pve01:~# df | grep disco
/dev/sdd   1.0T  9.4G  1015G  1%  /disco1TB-0
/dev/sdh   1.0T  9.3G  1015G  1%  /disco1TB-3
/dev/sde   1.0T  9.5G  1015G  1%  /disco1TB-1
/dev/sdf   1.0T  9.4G  1015G  1%  /disco1TB-2
/dev/sdg   2.0T   19G   2.0T  1%  /disco2TB-1
/dev/sdc   2.0T   19G   2.0T  1%
2005 Jun 17
1
kickstart software raid on sata drives
This is what I have for configuring software RAID on SATA drives in my kickstart config
<snip kickstart.cfg>
clearpart --initlabel --all
part swap --size=2048 --ondisk=sdb
part swap --size=2048 --ondisk=sda
part raid.01 --size=101 --ondisk=sda
part raid.02 --size=101 --ondisk=sdb
part raid.04 --size=1 --grow --ondisk=sdb
part raid.03 --size=1 --grow --ondisk=sda
raid / --fstype ext3
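For reference, a complete RAID1 layout in the same style, including the raid lines that tie the raid.NN partitions into md devices (sizes, labels and disk names are illustrative, not the poster's):

  part raid.01 --size=100 --ondisk=sda
  part raid.02 --size=100 --ondisk=sdb
  part raid.11 --size=1 --grow --ondisk=sda
  part raid.12 --size=1 --grow --ondisk=sdb
  raid /boot --fstype ext3 --level=RAID1 --device=md0 raid.01 raid.02
  raid /     --fstype ext3 --level=RAID1 --device=md1 raid.11 raid.12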
2010 Sep 17
1
multipath troubleshoot
Hi, my storage admin just assigned a LUN (fibre) to my server, then rescanned using:
echo "1" > /sys/class/fc_host/host5/issue_lip
echo "1" > /sys/class/fc_host/host6/issue_lip
I can see the SCSI device using dmesg, but mpath devices are not created for this LUN. Please see below. The last 4 should be active and I think this is the problem.
Kernel:
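A hedged set of follow-up checks when a new LUN shows up in dmesg but no mpath device appears (nothing here is specific to the poster's array):

  multipath -ll                            # list the maps device-mapper currently knows about
  multipath -v2                            # rebuild maps with verbose output; shows why paths are skipped
  grep -A5 blacklist /etc/multipath.conf   # make sure the new LUN's WWID or vendor is not blacklisted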
2011 Nov 22
1
Recovering data from old corrupted file system
I have a corrupted multi-device file system that got corrupted ages ago (as I recall, one of the drives stopped responding, causing btrfs to panic). I am hoping to recover some of the data. For what it's worth, here is the dmesg output from trying to mount the file system on a 3.0 kernel:
device label Media devid 6 transid 816153 /dev/sdq
device label Media devid 7 transid 816153
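A hedged sketch of the usual read-only salvage attempts for a btrfs volume of that era (device and target directory are assumptions; btrfs restore copies files out without mounting):

  mount -o ro,degraded /dev/sdq /mnt        # try a read-only, missing-device-tolerant mount first
  btrfs restore -v /dev/sdq /mnt/recovered  # if mounting fails, pull out whatever is still reachable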
2010 Nov 18
1
kickstart raid disk partitioning
Hello. A couple of years ago I installed two file-servers using kickstart. The server has two 1TB SATA disks with two software RAID1 partitions as follows:
# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb4[1] sda4[0]
      933448704 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda2[2](F)
      40957568 blocks [2/1] [_U]
Now the drives are starting to fail, and next week
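A hedged sketch of the usual one-disk replacement procedure for this kind of two-disk RAID1 layout (device names follow the mdstat output above; double-check them before running anything):

  mdadm /dev/md0 --fail /dev/sda2 --remove /dev/sda2   # drop the failed member
  mdadm /dev/md1 --fail /dev/sda4 --remove /dev/sda4   # needed if the whole sda disk is being replaced
  # after swapping the physical disk:
  sfdisk -d /dev/sdb | sfdisk /dev/sda                 # copy the partition table from the healthy disk
  mdadm /dev/md0 --add /dev/sda2
  mdadm /dev/md1 --add /dev/sda4                       # then watch the rebuild in /proc/mdstat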
2006 Dec 25
2
Kickstart Questions
Hi, we don't run DHCP in the environment where I build our servers; is it required to get a first IP address to reach my kickstart server running on my Xandros Debian laptop? It's been a couple of years since I last did kickstart builds, and I don't have system-config-kickstart running on a machine here in my home lab. I can perhaps turn on DHCP on my Linux laptop, but just wanted to check in
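DHCP is not strictly required; the installer can be given a static address on the boot line. A hedged example with made-up addresses, using the CentOS 4/5-era anaconda boot options:

  linux ks=http://192.168.1.10/ks.cfg ip=192.168.1.50 netmask=255.255.255.0 gateway=192.168.1.1 dns=192.168.1.1 ksdevice=eth0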
2005 Nov 24
1
boot with more than one scsi card
Hi, we've got a server with an 8-port 3ware card and 2 IDE system disks. Now we'd like to replace the IDE disks with SCSI or SATA disks (these are also recognized as SCSI in the kernel), but we can't boot from them. The problems are twofold. First, in the normal case the first SCSI host, scsi0, is the 3ware card, but GRUB only sees the first 8 disks, so if the system disks are sdi and sdj the
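One commonly used workaround, sketched with assumed device names, is to tell legacy GRUB which disk should be treated as hd0 via /boot/grub/device.map and then reinstall GRUB to that disk:

  # /boot/grub/device.map (illustrative)
  (hd0)  /dev/sdi
  (hd1)  /dev/sdj

  grub-install /dev/sdi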
2016 Aug 05
1
CentOS 7 kickstart question
On Thu, August 4, 2016 7:13 pm, Paul Heinlein wrote:
> On Thu, 4 Aug 2016, Valeri Galtsev wrote:
>
>> Dear Experts,
>>
>> Could somebody point me to a kickstart HOWTO specific to CentOS 7?
>>
>> On CentOS 7 I am somehow always asked human-intervention questions
>> about drives, which defeats an unattended ks install.
>>
>> At least one snag I hit
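A hedged CentOS 7 snippet that usually suppresses the interactive disk questions (the disk name is an assumption):

  zerombr
  ignoredisk --only-use=sda
  clearpart --all --initlabel --drives=sda
  autopart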
2016 Aug 20
2
Kickstart issue with UEFI
Hi, I have a test system that booted fine using "Legacy BIOS" mode and, using the following kickstart snippet, configured the disks correctly:
# Clear the Master Boot Record
zerombr
# Partition clearing information
clearpart --all --initlabel
# Disk partitioning information
part raid.01 --fstype="raid" --ondisk=sda --size=500
part raid.02 --fstype="raid" --grow
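For a UEFI install the same layout additionally needs an EFI System Partition; a hedged addition (size and disk are illustrative):

  part /boot/efi --fstype="efi" --ondisk=sda --size=200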
2024 Oct 20
1
How many disks can fail before a catastrophic failure occurs?
If it's replica 2, you can lose up to 1 brick per replica set (distribution group). For example, if you have a volume TEST with a setup like this:
server1:/brick1 server2:/brick1
server1:/brick2 server2:/brick2
you can lose any one brick of the "/brick1" replica pair and any one brick of the "/brick2" replica pair. So if you lose server1:/brick1 and server2:/brick2 -> no data loss will be experienced.
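A hedged illustration of why the brick order matters: in the volume-create command, consecutive bricks form the replica pairs (volume and brick names follow the example above):

  gluster volume create TEST replica 2 \
    server1:/brick1 server2:/brick1 \
    server1:/brick2 server2:/brick2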