similar to: Advice on setting up Raid and LVM

Displaying 20 results from an estimated 5000 matches similar to: "Advice on setting up Raid and LVM"

2007 Nov 29
1
RAID, LVM, extra disks...
Hi, This is my current config: /dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot /dev/md1 -> 36 GB -> sda2 + sdd2 -> form VolGroup00 with md2 /dev/md2 -> 18 GB -> sdb1 + sde1 -> form VolGroup00 with md1 sda,sdd -> 36 GB 10k SCSI HDDs sdb,sde -> 18 GB 10k SCSI HDDs I have added two 36 GB 10K SCSI drives; they are detected as sdc and sdf. What should I do if I
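A minimal sketch of the usual approach for the question this snippet leads into, assuming each new disk (sdc, sdf) gets a single RAID partition and that the existing VolGroup00 holds a logical volume named LogVol00 (the LV name is an assumption):

  # Mirror the two new disks as a fresh md device.
  mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdf1

  # Turn the new array into an LVM physical volume and grow the existing VG.
  pvcreate /dev/md3
  vgextend VolGroup00 /dev/md3

  # Extend a logical volume and its filesystem (LV name is an assumption).
  lvextend -l +100%FREE /dev/VolGroup00/LogVol00
  resize2fs /dev/VolGroup00/LogVol00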
2014 Jan 24
4
Booting Software RAID
I installed CentOS 6.x 64 bit with the minimal ISO and used two disks in a RAID 1 array. Filesystem Size Used Avail Use% Mounted on /dev/md2 97G 918M 91G 1% / tmpfs 16G 0 16G 0% /dev/shm /dev/md1 485M 54M 407M 12% /boot /dev/md3 3.4T 198M 3.2T 1% /vz Personalities : [raid1] md1 : active raid1 sda1[0] sdb1[1] 511936 blocks super 1.0
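For a two-disk RAID1 install like this, the usual follow-up concern is making sure either disk can boot on its own. A minimal sketch, assuming CentOS 6's GRUB legacy and that the /boot mirror (md1) sits on sda1 and sdb1 as the mdstat excerpt suggests:

  # Install the boot loader to the MBR of both members of the /boot mirror
  # so the machine still boots if either disk fails.
  grub-install /dev/sda
  grub-install /dev/sdb

  # Verify the arrays are clean before rebooting.
  cat /proc/mdstat
  mdadm --detail /dev/md1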
2018 Jan 12
5
[PATCH 1/1] appliance: init: Avoid running degraded md devices
The '--no-degraded' flag in the first mdadm call inhibits the startup of an array unless all expected drives are present. This prevents starting arrays in a degraded state. The second mdadm call (after LVM is scanned) will scan the devices not yet used and attempt to run all arrays it finds, even if they are in a degraded state. Two new tests are added. This fixes rhbz1527852. Here is boot-benchmark
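A rough illustration of the two-stage assembly described above, using plain mdadm commands rather than the actual appliance init code:

  # First pass: start only arrays whose members are all present.
  mdadm --assemble --scan --no-degraded

  # LVM is scanned and activated in between, which may expose more devices.
  vgchange -ay

  # Second pass: assemble whatever is left and run it even if degraded.
  mdadm --assemble --scan --run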
2008 Oct 05
3
Software Raid Expert Needed
Hello all, I have 2 x 250GB sata disks (sda and sdb). # fdisk -l /dev/sda Disk /dev/sda: 250.0 GB, 250059350016 bytes 255 heads, 63 sectors/track, 30401 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 14939 119997486 fd Linux raid autodetect /dev/sda2 14940 29878
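Given the "Linux raid autodetect" partitions shown, the usual next step looks roughly like this (a sketch; it assumes sdb should mirror sda's layout and that the partition numbers match):

  # Copy the partition table from sda to sdb so the two disks match.
  sfdisk -d /dev/sda | sfdisk /dev/sdb

  # Mirror each partition pair.
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

  # Record the arrays so they assemble by UUID at boot.
  mdadm --examine --scan >> /etc/mdadm.conf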
2011 Nov 23
8
[PATCH 0/8] Add MD inspection support to libguestfs
This series fixes inspection in the case that fstab contains references to md devices. I've made a few changes since the previous posting, which I've summarised below. [PATCH 1/8] build: Create an MD variant of the dummy Fedora image I've double checked that no timestamp is required in the Makefile. The script will not run a second time to build fedora-md2.img. [PATCH 2/8] build:
2007 Dec 18
1
How can I extract the AIC score from a mixed model object produced using lmer?
I am running a series of candidate mixed models using lmer (package lme4) and I'd like to be able to compile a list of the AIC scores for those models so that I can quickly summarize and rank the models by AIC. When I do logistic regression, I can easily generate this kind of list by creating the model objects using glm, and doing: > md <- c("md1.lr", "md2.lr",
2012 Jul 22
1
btrfs-convert complains that fs is mounted even if it isn't
Hi, I'm trying to run btrfs-convert on a system that has three raid partitions (boot/md1, swap/md2 and root/md3). When I boot a rescue system from md1, and try to run "btrfs-convert /dev/md3", it complains that /dev/md3 is already mounted, although it definitely is not. The only partition mounted is /dev/md1 because of the rescue system. When I replicate the setup in a
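Before retrying the conversion, it is worth confirming what is actually holding /dev/md3. A small checklist using generic tools (nothing btrfs-specific is assumed):

  grep md3 /proc/mounts    # should print nothing if md3 really is not mounted
  swapon -s                # make sure md3 is not active as swap
  fuser -v /dev/md3        # list processes holding the device open
  lsof /dev/md3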
2007 Oct 17
2
Hosed my software RAID/LVM setup somehow
CentOS 5, original kernel (xen and normal) and everything, Linux RAID 1. I rebooted one of my machines after doing some changes to RAID/LVM and now the two RAID partitions that I made changes to are "gone". I cannot boot into the system. On bootup it tells me that the devices md2 and md3 are busy or mounted and drops me to the repair shell. When I run fs check manually it just tells
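From the repair shell, one common way forward is to stop the half-started arrays, reassemble them explicitly and fsck them by device. A sketch; the member partitions are assumptions since the post does not list them:

  mdadm --stop /dev/md2
  mdadm --stop /dev/md3
  mdadm --assemble /dev/md2 /dev/sda3 /dev/sdb3   # member partitions are assumptions
  mdadm --assemble /dev/md3 /dev/sda4 /dev/sdb4
  fsck -y /dev/md2
  fsck -y /dev/md3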
2017 Oct 17
2
Distribute rebalance issues
Hi, I have a rebalance that has failed on one peer twice now. Rebalance logs below (directories anonymised and some irrelevant log lines cut). It looks like it loses connection to the brick, but immediately stops the rebalance on that peer instead of waiting for reconnection - which happens a second or so later. Is this normal behaviour? So far it has been the same server and the same (remote)
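A rebalance that aborts on a brick disconnect does not resume on its own. A minimal sketch of checking the state and restarting it (the volume name is a placeholder):

  # Check the rebalance state on all peers and the rebalance log on the failing one.
  gluster volume rebalance <volname> status
  tail -n 100 /var/log/glusterfs/<volname>-rebalance.log

  # Once the brick is reachable again, the rebalance must be restarted manually.
  gluster volume rebalance <volname> start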
2023 Jan 09
2
RAID1 setup
Hi > Continuing this thread, and focusing on RAID1. > > I got an HPE Proliant gen10+ that has hardware RAID support. (can turn > it off if I want). What exact model of RAID controller is this? If it's an S100i SR Gen10 then it's not hardware RAID at all. > > I am planning two groupings of RAID1 (it has 4 bays). > > There is also an internal USB boot port. >
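A quick way to answer the controller question from the running system (a sketch using generic tools; the S100i SR Gen10 is chipset SATA with software RAID, so it typically shows up as a plain AHCI device rather than a dedicated RAID HBA):

  # Identify the storage controller.
  lspci -nn | grep -iE 'raid|sata|sas'

  # If the individual disks are visible to the OS, you are effectively doing
  # software RAID regardless of what the firmware calls it.
  lsblk -o NAME,SIZE,MODEL,TYPE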
2008 Feb 06
4
Installation problems with large mirrored drives
I am trying to install CentOS 4.6 to a pair of 750GB hard drives. I can successfully install to either of the drives as a single drive, but when I try to use both drives and mirror the partitions, I start having problems. Anaconda crashes as it is trying to format the drives. This is what I'm trying to create: /dev/md0: 200MB, /boot /dev/md1: 2GB, swap /dev/md2: rest of the
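One workaround for installer crashes on a layout like this is to partition the disks and build the mirrors by hand from the installer's shell, then let anaconda reuse the existing md devices. A sketch, with the partition numbers as assumptions:

  sfdisk -d /dev/sda | sfdisk /dev/sdb
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # /boot
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # swap
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3   # /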
2013 Mar 03
4
Strange behavior from software RAID
Somewhere, mdadm is caching information. Here is my /etc/mdadm.conf file: more /etc/mdadm.conf # mdadm.conf written out by anaconda DEVICE partitions MAILADDR root ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382 ARRAY /dev/md2 level=raid1 num-devices=2
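To see where the stale information lives, it helps to compare the kernel's view, the on-disk superblocks and the config file. A sketch:

  cat /proc/mdstat                 # what the kernel is running right now
  mdadm --examine --scan           # ARRAY lines derived from on-disk superblocks
  cat /etc/mdadm.conf              # what the config file claims

  # Rebuild the config from the scan (review it before replacing the original).
  mdadm --examine --scan > /etc/mdadm.conf.new

On many releases the initrd carries its own copy of mdadm.conf, so regenerating the initrd (mkinitrd or dracut, depending on the CentOS version) is often the step that makes the change stick.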
2023 May 19
3
[libguestfs PATCH 0/3] test "/dev/mapper/VG-LV" with "--key"
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2168506 This small set covers the new /dev/mapper/VG-LV "--key" ID format in the libguestfs LUKS-on-LVM inspection test. Thanks, Laszlo Laszlo Ersek (3): update common submodule LUKS-on-LVM inspection test: rename VGs and LVs LUKS-on-LVM inspection test: test /dev/mapper/VG-LV translation common
2023 May 19
3
[guestfs-tools PATCH 0/3] test "/dev/mapper/VG-LV" with "--key"
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2168506 This small set covers the new /dev/mapper/VG-LV "--key" ID format in the LUKS-on-LVM virt-inspector test. Thanks, Laszlo Laszlo Ersek (3): update common submodule inspector: rename VGs and LVs in LUKS-on-LVM test inspector: test /dev/mapper/VG-LV translation in LUKS-on-LVM test common
2010 Nov 04
1
orphan inodes deleted issue
Dear All, My servers are running CentOS 5.5 x86_64 with kernel 2.6.18.194.17.4.el on a Gigabyte motherboard with 2 hard disks (Seagate 500GB). My CentOS boxes are configured with RAID 1; yesterday and today I had the same problem on 2 servers with the same configuration. See the following error messages for details: EXT3-fs: INFO: recovery required on readonly filesystem. EXT3-fs: write access will be enabled during
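When orphan-inode recovery keeps recurring, a reasonable first step from a rescue environment is a forced offline check of the affected filesystem plus a look at the disks' SMART state. A sketch, with the md device name as an assumption:

  mdadm --assemble --scan      # bring the mirrors up in the rescue system
  e2fsck -f -y /dev/md2        # force a full check while the filesystem is unmounted

  # Repeated unclean recoveries on two boxes often point at disk or power issues.
  smartctl -a /dev/sda
  smartctl -a /dev/sdb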
2017 Oct 17
0
Distribute rebalance issues
On 17 October 2017 at 14:48, Stephen Remde <stephen.remde at gaist.co.uk> wrote: > Hi, > > > I have a rebalance that has failed on one peer twice now. Rebalance logs below (directories anonymised and some irrelevant log lines cut). It looks like it loses connection to the brick, but immediately stops the rebalance on that peer instead of waiting for reconnection - which happens
2013 Feb 04
3
Questions about software RAID, LVM.
I am planning to increase the disk space on my desktop system. It is running CentOS 5.9 w/XEN. I have two 160 GB 2.5" laptop SATA drives in two slots of a 4-slot hot swap bay configured like this: Disk /dev/sda: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End
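For a RAID1 + LVM setup, the usual way to gain space is to swap in larger disks one at a time, let the mirror resync, then grow the array, the PV and the LV. A sketch, with the md device, member partitions and VG/LV names all as assumptions:

  # Replace one member at a time and wait for the resync between steps.
  mdadm /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2
  # ...swap the physical disk, partition it larger, then:
  mdadm /dev/md1 --add /dev/sdb2

  # After both members have been replaced and resynced:
  mdadm --grow /dev/md1 --size=max
  pvresize /dev/md1
  lvextend -l +100%FREE /dev/VolGroup00/LogVol00
  resize2fs /dev/VolGroup00/LogVol00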
2023 Jan 08
1
RAID1 setup
Continuing this thread, and focusing on RAID1. I got an HPE Proliant gen10+ that has hardware RAID support. (can turn it off if I want). I am planning two groupings of RAID1 (it has 4 bays). There is also an internal USB boot port. So I am really a newbie in working with RAID. From this thread it sounds like I want /boot and /boot/efi on that USB boot device. Will it work to put / on
2005 May 21
1
Software RAID CentOS4
Hi, I have a system with two IDE controllers running RAID1. As a test I powered down, removed one drive (hdc), and powered back up. System came up fine, so I powered down, installed a new drive (hdc), and powered back up. /proc/mdstat indicated RAID1 active with hda only. I thought it would auto-add the new hdc drive... Also when I removed the new drive and added the original hdc, the swap partitions
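A replacement disk is not re-mirrored automatically; the new drive has to be partitioned and its partitions added to each array by hand. A sketch, with the partition numbers as assumptions:

  # Copy the partition layout from the surviving disk to the new one.
  sfdisk -d /dev/hda | sfdisk /dev/hdc

  # Add the new partitions to their arrays and watch the resync.
  mdadm /dev/md0 --add /dev/hdc1
  mdadm /dev/md1 --add /dev/hdc2
  watch cat /proc/mdstat

  # If swap is not mirrored, recreate and re-enable it on the new disk.
  mkswap /dev/hdc3 && swapon /dev/hdc3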
2018 Apr 30
1
Gluster rebalance taking many years
I cannot calculate the number of files normally. Through df -i I got the approximate number of files: 63694442. [root at CentOS-73-64-minimal ~]# df -i Filesystem Inodes IUsed IFree IUse% Mounted on /dev/md2 131981312 30901030 101080282 24% / devtmpfs 8192893 435 8192458 1% /dev tmpfs