similar to: Hosed my software RAID/LVM setup somehow

Displaying 20 results from an estimated 20000 matches similar to: "Hosed my software RAID/LVM setup somehow"

2006 Mar 02
3
Advice on setting up Raid and LVM
Hi all, I'm setting up CentOS 4.2 on 2 x 80GB SATA drives. The partition scheme is like this:
/boot = 300MB
/     = 9.2GB
/home = 70GB
swap  = 500MB
The RAID is RAID 1:
md0 = 300MB = /boot
md1 = 9.2GB = LVM
md2 = 70GB  = LVM
md3 = 500MB = LVM
Now, the confusing part is: 1. When creating VolGroup00, should I include all PVs (md1, md2, md3) and then create the LVs? 2. When setting up RAID 1, should I
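A minimal sketch of the layering being asked about, assuming the md arrays already exist (device and VG names from the post; the LV names are hypothetical):

    pvcreate /dev/md1 /dev/md2 /dev/md3              # each array becomes an LVM PV
    vgcreate VolGroup00 /dev/md1 /dev/md2 /dev/md3   # one VG pooling all three
    lvcreate -L 9G  -n root VolGroup00               # then carve LVs out of the pool
    lvcreate -L 70G -n home VolGroup00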
2014 Jan 24
4
Booting Software RAID
I installed CentOS 6.x 64-bit with the minimal ISO and used two disks in a RAID 1 array.
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2         97G  918M   91G   1% /
tmpfs            16G     0   16G   0% /dev/shm
/dev/md1        485M   54M  407M  12% /boot
/dev/md3        3.4T  198M  3.2T   1% /vz
Personalities : [raid1]
md1 : active raid1 sda1[0] sdb1[1]
      511936 blocks super 1.0
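A common follow-up for a two-disk RAID1 install like this is making sure both members are bootable; a sketch assuming CentOS 6's legacy GRUB (disk names from the mdstat output above):

    grub-install /dev/sda   # put the boot loader on both RAID1 members,
    grub-install /dev/sdb   # so the box still boots if either disk dies
    cat /proc/mdstat        # all arrays should show [UU]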
2007 Nov 29
1
RAID, LVM, extra disks...
Hi, this is my current config:
/dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot
/dev/md1 -> 36 GB  -> sda2 + sdd2 -> forms VolGroup00 with md2
/dev/md2 -> 18 GB  -> sdb1 + sde1 -> forms VolGroup00 with md1
sda, sdd -> 36 GB 10k SCSI HDDs
sdb, sde -> 18 GB 10k SCSI HDDs
I have added two more 36 GB 10k SCSI drives; they are detected as sdc and sdf. What should I do if I
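A sketch of the usual way to fold the new pair into the existing volume group (md3 is a hypothetical name; sdc1/sdf1 assume one full-disk partition of type fd on each new drive):

    mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdf1
    pvcreate /dev/md3
    vgextend VolGroup00 /dev/md3   # then grow LVs with lvextend as needed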
2013 Mar 03
4
Strange behavior from software RAID
Somewhere, mdadm is caching information. Here is my /etc/mdadm.conf file:
more /etc/mdadm.conf
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382
ARRAY /dev/md2 level=raid1 num-devices=2
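When stale UUIDs linger like this, the usual fix is to regenerate the ARRAY lines from the running arrays and rebuild the initrd, which carries its own copy of mdadm.conf (a sketch; dracut is the EL6 tool and mkinitrd the EL5 one -- the poster's release is not stated):

    mdadm --detail --scan      # prints current ARRAY lines with live UUIDs
    # replace the stale ARRAY lines in /etc/mdadm.conf with that output, then:
    dracut -f                                              # EL6
    mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)   # EL5 equivalent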
2008 Oct 05
3
Software Raid Expert Needed
Hello all, I have 2 x 250GB SATA disks (sda and sdb).
# fdisk -l /dev/sda
Disk /dev/sda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1       14939   119997486   fd  Linux raid autodetect
/dev/sda2           14940       29878
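For a two-disk mirror built from partitions like these, the usual next steps are cloning the partition table and creating the arrays (a sketch; the md names are hypothetical):

    sfdisk -d /dev/sda | sfdisk /dev/sdb   # copy sda's partition table to sdb
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2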
2008 Nov 26
2
Reassemble software RAID
I have a machine on CentOS 5 with two disks in RAID1 using Linux software RAID. /dev/md0 is a small boot partition; /dev/md1 spans the rest of the disk(s). /dev/md1 is managed by LVM and holds the system partition and several other partitions. I had to take disk sda out of the RAID and low-level format it with the tool provided by Samsung. Now I have put it back and want to reassemble the array.
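Since the low-level format wiped sda's partition table and RAID superblocks, reassembly is a copy-partitions-then-hot-add job; a sketch (the two-partition layout is taken from the post):

    sfdisk -d /dev/sdb | sfdisk /dev/sda   # restore sda's partitions from sdb
    mdadm /dev/md0 --add /dev/sda1         # hot-add each member; the kernel
    mdadm /dev/md1 --add /dev/sda2         # resyncs from the surviving disk
    cat /proc/mdstat                       # watch the rebuild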
2012 Apr 27
1
Help with software raid + LVM on Centos 6
Hi all, please excuse the many posts. Wondering if anyone can help me with the setup. I have 2 x 2TB disks. I would like to mirror them. I would like to create two LVMs so that I can snapshot from one to the other. During the CentOS 6 install, how would I go about this, as it's confusing? So far I am here:
1) Created the following RAID devices:
md0 500MB (use it for /boot)
md1 4000MB (use it
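On the snapshot half of the question, a minimal sketch once one of the big md devices is an LVM PV (vg0 and the LV names are hypothetical):

    pvcreate /dev/md1 && vgcreate vg0 /dev/md1
    lvcreate -L 1.5T -n data vg0                      # leave free space in the VG...
    lvcreate -s -L 100G -n data-snap /dev/vg0/data    # ...for the snapshot to live in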
2005 May 21
1
Software RAID CentOS4
Hi, I have a system with two IDE controllers running RAID1. As a test I powered down, removed one drive (hdc), and powered back up. The system came up fine, so I powered down, installed a new drive (hdc), and powered back up. /proc/mdstat indicated RAID1 active with hda only. I thought it would auto-add the new hdc drive... Also, when I removed the new drive and added the original hdc, the swap partitions
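mdadm does not auto-add a blank replacement disk; the new member has to be partitioned and added by hand. A sketch for the IDE layout described (the partition numbers are assumptions):

    sfdisk -d /dev/hda | sfdisk /dev/hdc   # recreate hdc's partitions from hda
    mdadm /dev/md0 --add /dev/hdc1         # hot-add each partition to its array
    mkswap /dev/hdc2                       # if swap is not on RAID, remake it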
2013 Feb 04
3
Questions about software RAID, LVM.
I am planning to increase the disk space on my desktop system. It is running CentOS 5.9 w/Xen. I have two 160 GB 2.5" laptop SATA drives in two slots of a 4-slot hot-swap bay, configured like this:
Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End
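If the plan is to swap in larger drives one at a time and grow in place, the usual sequence after both members are replaced looks like this (a sketch; the md device and VG/LV names are assumptions):

    mdadm --grow /dev/md1 --size=max        # expand the array into the new space
    pvresize /dev/md1                       # let LVM see the larger PV
    lvextend -L +100G /dev/VolGroup00/LogVol00
    resize2fs /dev/VolGroup00/LogVol00      # grow the filesystem online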
2023 Jan 09
2
RAID1 setup
Hi
> Continuing this thread, and focusing on RAID1.
>
> I got an HPE ProLiant Gen10+ that has hardware RAID support (I can turn
> it off if I want).
What exact model of RAID controller is this? If it's an S100i SR Gen10 then it's not hardware RAID at all.
> I am planning two groupings of RAID1 (it has 4 bays).
>
> There is also an internal USB boot port.
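A quick way to settle the hardware-vs-fake-RAID question from a running Linux system (generic commands, nothing specific to this box):

    lspci | grep -i -E 'raid|sata|sas'   # the controller model shows up here
    cat /proc/mdstat                     # arrays listed here mean the OS,
                                         # not the controller, does the RAID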
2007 Dec 01
2
Looking for Insights
Hi guys, I had a strange problem yesterday and I'm curious as to what everyone thinks. I have a client with a Red Hat Enterprise 2.1 cluster: all quality HP equipment, with an MSA 500 storage array acting as the shared storage between the two nodes in the cluster. This cluster is configured for reliability, not load balancing; all work is handled by one node or the other, not both.
2018 Apr 30
1
Gluster rebalance taking many years
I cannot calculate the number of files normally. Through df -i I got that the approximate number of files is 63694442.
[root at CentOS-73-64-minimal ~]# df -i
Filesystem       Inodes     IUsed      IFree IUse% Mounted on
/dev/md2      131981312  30901030  101080282   24% /
devtmpfs        8192893       435    8192458    1% /dev
tmpfs
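df -i counts used inodes, which is the fast approximation used above; an exact figure requires walking the tree (a sketch; the path is a placeholder):

    df -i /                            # IUsed ~ files + directories on the fs
    find /path/to/data -xdev | wc -l   # exact, but slow with tens of millions of files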
2008 Jul 17
2
lvm errors after replacing drive in raid 10 array
I thought I'd test replacing a failed drive in a 4-drive RAID 10 array on a CentOS 5.2 box before it goes online and before a drive really fails. I 'mdadm --fail'ed and '--remove'd the drive, powered off, replaced the drive, partitioned it with sfdisk -d /dev/sda | sfdisk /dev/sdb, and finally 'mdadm --add'ed it. Everything seems fine until I try to create a snapshot LV. (Creating a snapshot lv
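One thing worth ruling out before creating snapshots on a freshly re-added member: the array may still be resyncing. A sketch of quick pre-checks (standard commands, nothing specific to this box):

    cat /proc/mdstat    # no array should show a recovery still in progress
    pvs; vgs; lvs       # confirm LVM still sees its PV on the rebuilt array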
2007 Dec 18
1
How can I extract the AIC score from a mixed model object produced using lmer?
I am running a series of candidate mixed models using lmer (package lme4) and I'd like to be able to compile a list of the AIC scores for those models so that I can quickly summarize and rank the models by AIC. When I do logistic regression, I can easily generate this kind of list by creating the model objects using glm, and doing:
> md <- c("md1.lr", "md2.lr",
2012 Jul 22
1
btrfs-convert complains that fs is mounted even if it isn't
Hi, I'm trying to run btrfs-convert on a system that has three RAID partitions (boot/md1, swap/md2 and root/md3). When I boot a rescue system from md1 and try to run "btrfs-convert /dev/md3", it complains that /dev/md3 is already mounted, although it definitely is not. The only partition mounted is /dev/md1 because of the rescue system. When I replicate the setup in a
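Before trusting the error, it is worth checking what the rescue environment itself believes is mounted; a stale /etc/mtab is a classic cause of false "already mounted" reports in older tools (a sketch; device name from the post):

    grep md3 /proc/mounts   # the kernel's view of mounts; should print nothing
    cat /etc/mtab           # userspace's view, which a rescue boot can leave stale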
2007 Mar 06
1
blocks 256k chunks on RAID 1
Hi, I have a RAID 1 (using mdadm) on CentOS Linux and in /proc/mdstat I see this:
md7 : active raid1 sda2[0] sdb2[1]
      26627648 blocks [2/2] [UU]   [-->> it's OK]
md1 : active raid1 sdb3[1] sda3[0]
      4192896 blocks [2/2] [UU]   [-->> it's OK]
md2 : active raid1 sda5[0] sdb5[1]
      4192832 blocks [2/2] [UU]   [-->> it's OK]
md3 : active raid1 sdb6[1] sda6[0]
      4192832 blocks [2/2]
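As an aside for readers: chunk size only applies to striped RAID levels, so a RAID1 array like these has none to report; per-array parameters can be inspected with (a sketch):

    mdadm --detail /dev/md7   # level, members, state, and UUID for one array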
2007 Sep 04
4
RAID + LVM Addition to CentOS 5 Install
Hi all, I have what I believe to be a pretty basic LVM & RAID setup on my CentOS 5 machine:
RAID partitions:
/dev/sda1, sdb1
/dev/sda2, sdb2
/dev/sda3, sdb3
During the install I created a RAID 1 volume md0 out of sda1 and sdb1 for the boot partition, and then added sda2 and sdb2 to a separate RAID 1 volume as well (md1). I then set up md1 as an LVM physical volume for volume group 'system'. I
2015 Feb 18
5
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi, I just replaced Slackware64 14.1 running on my office's HP ProLiant MicroServer with a fresh installation of CentOS 7. The server has 4 x 250 GB disks. Every disk is configured like this:
* 200 MB /dev/sdX1 for /boot
* 4 GB /dev/sdX2 for swap
* 248 GB /dev/sdX3 for /
There are supposed to be no spare devices. /boot and swap are all supposed to be assembled in RAID level 1 across
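A sketch of building the root array the poster describes: the third partition of all four disks in RAID5, with no spares (the md device name is hypothetical):

    mdadm --create /dev/md2 --level=5 --raid-devices=4 \
          /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
    mdadm --detail /dev/md2 | grep -i spare   # "Spare Devices : 0" expected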
2019 Mar 12
1
CentOS 7 Installation Problems
I attempted to install CentOS 7 x86_64 on my machine that has the following hardware:
Motherboard: ASRock X99 Taichi
BIOS:        AMI v P1.40  08/04/2016
CPU:         Intel Core I7-5820K
RAM:         64 GB (8 x 8 GB DIMM)
Optical:     LG Blu Ray 25 G / 50 G burner
Storage:     2 - 120GB PNY CS1311 SSD
             4 - 4 TB Western Digital hard drives
The 2
2009 Sep 19
3
How does LVM decide which Physical Volume to write to?
Hi everyone. This isn't specifically a CentOS question, since it could apply to any distro, but I hope someone can answer it anyway. I took the following steps but was puzzled by the outcome of the test at the end:
1. Create a RAID1 array called md3 with two 750GB drives
2. Create a RAID1 array called md9 with two 500GB drives
3. Initialise md3 then md9 as physical volumes (pvcreate)
4.
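To the question in the subject line: with the default allocation policy, LVM generally takes free extents from PVs in the order they sit in the volume group, unless a PV is named explicitly on the command line. A sketch (md3/md9 from the post; vg0 and the LV name are hypothetical):

    pvs -o pv_name,vg_name,pv_free            # free extents per PV
    vgcreate vg0 /dev/md3 /dev/md9
    lvcreate -L 100G -n pinned vg0 /dev/md9   # force this LV onto md9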