similar to: problem building partitionable RAID-1 on Centos-6

Displaying 20 results from an estimated 3000 matches similar to: "problem building partitionable RAID-1 on Centos-6"

2009 Mar 26
4
Installing on partitionable RAID arrays
Hello Since linux 2.6, the md layer has a feature called partitionable arrays. So instead of having two disks, creating an identical partition table on both and then putting those partitions in RAID 1, you take those two disks and put them in one partitionable RAID 1 array (in mdadm terms, "mdp") and create a partition table on the new RAID device. The advantages are quite clear
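For context, a minimal sketch of the setup that message describes, assuming two whole disks /dev/sda and /dev/sdb (device names illustrative, not from the message):

    # create a partitionable ("mdp") RAID1 from the two whole disks
    mdadm --create /dev/md_d0 --auto=mdp --level=1 --raid-devices=2 /dev/sda /dev/sdb
    # then partition the array itself; partitions appear as /dev/md_d0p1, /dev/md_d0p2, ...
    fdisk /dev/md_d0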
2012 Nov 07
2
Install CentOS 6.3 to partitionable mdadm array
Hello all, I'm trying to install CentOS 6.3 to an mdadm partitionable array and not having any luck. The installer only allows me to create one file system per md device, or specify the md device as a LVM physical volume. I don't want to do either, I want to create one md device and create multiple partitions on top of the md device. I thought that perhaps the installer was preventing
2009 Jul 23
2
RAID problem when building new computer
Hi all! I'm building up a new box and plan to use CentOS 5 on it. I've got a pair of SATA 320 GB drives to make a RAID1. I'm trying to follow the "howto" on the CentOS wiki for making a "partitionable RAID" installation. Given that my partition scheme has a separate /boot partition, while the one in the HOWTO apparently does not, I've had to tweak the steps
2010 Jan 20
5
Install On Partitionable RAID1
I have some suggested tweaks and changes to http://wiki.centos.org/HowTos/Install_On_Partitionable_RAID1 1. The user should be instructed to start rescue mode with networking in order to be able to retrieve the patch for mkinitrd. 2. The command to create /etc/mdadm.conf will result in an extra line "spares=1" while the array is still syncing. Adding " | head -1 " to the
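For reference, the mdadm.conf command with the suggested fix applied would look roughly like this (a sketch; the wiki page's exact wording may differ):

    # keep only the first line of output, dropping the transient "spares=1"
    # line that appears while the array is still syncing
    mdadm --detail --scan | head -1 > /etc/mdadm.conf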
2009 Apr 29
4
I'd like to contribute two wiki articles
Hi, I've written two small howtos, and would like to contribute them to the CentOS Wiki. The first one is "How to install CentOS 5 on software partitionable mdadm RAID1", and the second one "How to repair a software mdadm RAID5 with two or more failed disks (if you know that information is still on the disks and readable)". I think that the first one should be somewhere
2011 Apr 03
3
KVM Host Disk Performance
Hello all, I'm having quite an interesting time getting up to speed with KVM/QEMU and the various ways of creating virtual Guest VMs. But disk I/O performance remains a bit of a question mark for me. I'm looking for suggestions and opinions .... This new machine has tons of disk space, lots of CPU cores and loads of RAM, so those are not issues. I currently have several software
2013 Oct 01
2
Partitionable Raid
Hi, After reading the tutorial at http://wiki.centos.org/HowTos/Install_On_Partitionable_RAID1 I have the following question: What should I put instead of splashimage=(hd0,0)/grub/splash.xpm.gz and root (hd0,0) in /etc/grub.conf? Should I leave those lines untouched? If so, how would grub know where to boot from if /dev/sda fails? Or would I need to swap the drives in order to boot
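One common grub-legacy pattern for surviving a failed first disk (a sketch, not necessarily what the wiki page itself recommends; the <version> placeholders and the root device name are assumptions) is a duplicated entry plus a fallback directive:

    default 0
    fallback 1
    title CentOS (boot from first disk)
            root (hd0,0)
            kernel /vmlinuz-<version> ro root=/dev/md_d0p2
            initrd /initrd-<version>.img
    title CentOS (boot from second disk)
            root (hd1,0)
            kernel /vmlinuz-<version> ro root=/dev/md_d0p2
            initrd /initrd-<version>.img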
2012 Jun 19
1
CentOS 6.2 on partitionable mdadm RAID1 (md_d0) - kernel panic with either disk not present
Environment: CentOS 6.2 amd64 (min. server install), 2 virtual hard disks of 10GB each, Linux KVM. Following the instructions on CentOS Wiki <http://wiki.centos.org/HowTos/Install_On_Partitionable_RAID1> I installed a min. server in a Linux KVM setup (script shown below)
<script>
#!/bin/bash
nic_mac_addr0=00:07:43:53:2b:bb
kvm \
  -vga std \
  -m 1024 \
  -cpu core2duo \
  -smp 2,cores=2 \
2007 Nov 02
1
mdadm syntax
Hi All, I am trying to create an MD device. I am using the command: /sbin/mdadm --create --a /dev/md12 --level=1 --run --raid-devices=2 /dev/sda12 /dev/sdb12 to create the device, and to dynamically create the device file if needed. What I want is the device file to be created as /dev/md12, but with the -a flag it creates it as /dev/md<first unused minor number>. I have tried various
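If the intent is simply to get the node created under the name given to --create, one reading of the mdadm man page is that -a/--auto takes an argument, so the command would be something like (a sketch, not a verified fix for this poster's setup):

    # --auto=md asks mdadm to create a non-partitionable array node
    # using the name passed to --create, here /dev/md12
    mdadm --create /dev/md12 --auto=md --level=1 --run --raid-devices=2 /dev/sda12 /dev/sdb12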
2015 Feb 18
3
CentOS 7: software RAID 5 array with 4 disks and no spares?
On 18/02/2015 09:24, Michael Volz wrote: > Hi Niki, > > md127 apparently only uses 81.95GB per disk. Maybe one of the partitions has the wrong size. What's the output of lsblk? [root at nestor:~] # lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda           8:0    0 232,9G  0 disk
├─sda1        8:1    0   3,9G  0 part
│ └─md126     9:126  0   3,9G  0 raid1 [SWAP]
├─sda2        8:2
2015 Feb 18
0
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi Niki, md127 apparently only uses 81.95GB per disk. Maybe one of the partitions has the wrong size. What's the output of lsblk? Regards Michael ----- Original Message ----- From: "Niki Kovacs" <info at microlinux.fr> To: "CentOS mailing list" <CentOS at centos.org> Sent: Wednesday, 18 February 2015 08:09:13 Subject: [CentOS] CentOS 7: software RAID 5
2023 Mar 08
1
Mount removed raid disk back on same machine as original raid
I have a CentOS 7 system with an mdraid array (raid 1). I removed a drive from it a couple of months ago and replaced it with a new drive. Now I want to recover some information from that old drive. I know how to mount the drive, and have done so on another system to confirm that the information I want is there. My question is this: What is going to happen when I try to mount a drive that
2016 Jun 01
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I did some additional testing - I stopped Kafka on the host, and kicked off a disk check, and it ran at the expected speed overnight. I started Kafka this morning, and the raid check's speed immediately dropped down to ~2000K/Sec. I then enabled the write-back cache on the drives (hdparm -W1 /dev/sd*). The raid check is now running between 100000K/Sec and 200000K/Sec, and has been for several
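For anyone reproducing this, the hdparm write-cache switches used above are standard; a quick sketch (device name illustrative):

    hdparm -W /dev/sda     # query the current write-caching setting
    hdparm -W1 /dev/sda    # enable the drive's write-back cache
    hdparm -W0 /dev/sda    # disable it again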
2015 Mar 07
2
which uuid to specify a raid in fstab
I'm confused about which UUID to use to identify a software RAID in fstab. lsblk -fs shows:
md127p1    ext4              c43af789-82aa-49e9-a8ed-acd52b1cdd58 /y
└─md127    ext4              39c20575-4257-4fd7-b5c8-8a15757e9e8e
  ├─sdb1   linux_r hostname:0 af77830e-8cfd-9012-62ce-e57105c3bf6c
  │ └─sdb
  └─sdc1   linux_r hostname:0 af77830e-8cfd-9012-62ce-e57105c3bf6c
2015 Mar 07
0
which uuid to specify a raid in fstab
Assuming your raid group is /dev/md127, you can run: ls -l /dev/disk/by-uuid or blkid /dev/md127 and use the ID both will show for /dev/md127
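Tying this back to the lsblk -fs output above: the fstab line then uses the filesystem UUID blkid reports for whichever device actually carries the filesystem. A sketch for the /y mount, using the md127p1 UUID shown in the question (mount options assumed):

    UUID=c43af789-82aa-49e9-a8ed-acd52b1cdd58  /y  ext4  defaults  0 2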
2015 Mar 07
2
which uuid to specify a raid in fstab
Thanks. On Sat, Mar 7, 2015 at 4:26 PM, Miguel Medalha <miguelmedalha at sapo.pt> wrote: > Assuming your raid group is /dev/md127, you can run: > > ls -l /dev/disk/by-uuid > > or > > blkid /dev/md127 > > and use the ID both will show for /dev/md127
2015 Feb 18
5
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi, I just replaced Slackware64 14.1 running on my office's HP Proliant Microserver with a fresh installation of CentOS 7. The server has 4 x 250 GB disks. Every disk is configured like this:
* 200 MB /dev/sdX1 for /boot
* 4 GB /dev/sdX2 for swap
* 248 GB /dev/sdX3 for /
There are supposed to be no spare devices. /boot and swap are all supposed to be assembled in RAID level 1 across
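A sketch of how that layout would be assembled by hand (device names follow the sdX scheme from the message; the RAID-5 / is inferred from the thread's subject line, not stated in this excerpt):

    mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[abcd]1   # /boot, RAID1
    mdadm --create /dev/md1 --level=1 --raid-devices=4 /dev/sd[abcd]2   # swap, RAID1
    mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[abcd]3   # /, RAID5, no spares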
2005 Jul 21
2
IDE RAID support
Sorry if this has been asked previously. Does CentOS support IDE hardware RAID on HP DL320? Thanks, -j
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it. We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs:
2x E5-2650
128 GB RAM
12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA
Dual port 10 GB NIC
The drives are configured as one large
2009 Jan 21
2
No bootloader with D-I in domU on part. RAID1
Hello. I just tried (as my first attempt in xenning) to set up a lenny amd64 domU on a partitionable Mirror RAID (/dev/md_d0p1 - /dev/md_d0p4), using the Debian installer from "people.debian.org/~joeyh/d-i/images/daily/". The partitions of the md will be represented inside the domU as /dev/xvda1-4 accordingly. The dom0 seems to run fine, it's Xen 3.2.1 on amd64 lenny. The