similar to: uuid_fixer?

Displaying 20 results from an estimated 70000 matches similar to: "uuid_fixer?"

2010 Sep 18
1
Software RAID + LVM + Grub
I'm playing with software RAID and LVM in some virtual machines and I've run into an issue that I can't find a good answer to in the docs. I have the following RAID setup: md0: sda1 and sdb1, RAID 1, this is /boot; md1: sda2 and sdb2, RAID 1, this is a PV for LVM. VolGroup00 is the volume group and md1 is the only PV in it. LogVol00 is swap, LogVol01 is /, LogVol02 is /home.
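For reference, a layout like the one described is typically built along these lines; this is only a rough sketch, with the device, volume group, and logical volume names taken from the post and the sizes invented:
    # mirror pairs for /boot and for the LVM physical volume
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    # LVM stacked on top of md1
    pvcreate /dev/md1
    vgcreate VolGroup00 /dev/md1
    lvcreate -L 2G  -n LogVol00 VolGroup00          # swap
    lvcreate -L 20G -n LogVol01 VolGroup00          # /
    lvcreate -l 100%FREE -n LogVol02 VolGroup00     # /home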
2017 Sep 05
0
recover from failed RAID
All, I had a "bad timing" event where I lost 3 drives in a RAID6 array and the structure of all of the LVM pools and nodes was lost. In total, nearly 100TB of storage was scrambled. This array was half of a redundant (replica 2) gluster config (a third node will be added soon for split-brain protection and redundancy during failures), so the data was not lost but just running in a degraded
2015 Dec 23
0
Recovering LVM after crash
I've been trying to recover data from a disk that appears to have been corrupted after a power outage. The original setup was LVM on md RAID 1, which appears to be what is complicating the issue. Apart from /boot, everything was on LVM partitions, so I don't have any backup LVM information. Following various guides online, I've duplicated the raid partition with dd onto a new
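The usual metadata-recovery path in this situation is to recreate the physical volume with its old UUID and restore the volume group configuration from an archived copy; since the poster had no LVM backup available, the following is only the textbook sequence, with the UUID, archive file name, device, and VG name as placeholders:
    # recreate the PV under its original UUID from an archived metadata file
    pvcreate --uuid "<old-pv-uuid>" \
             --restorefile /etc/lvm/archive/<VG>_00001.vg /dev/md0
    # restore the VG metadata and activate it
    vgcfgrestore -f /etc/lvm/archive/<VG>_00001.vg <VG>
    vgchange -ay <VG>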
2011 Feb 03
2
Recovering LVM volumes
Hello all, I have two sets of eIDE hard drives from earlier servers, one CentOS & one Fedora. Both were LVM volumes with three or four physical disks, with ext3 fs. One disk, possibly even the boot one, may be missing from one or both sets, and we do not know the disk order. These were left by an earlier sysadmin who left the company & nobody knows what's what. I need to
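Because LVM records the PV-to-VG mapping on each disk, the original disk order usually does not matter; something like the following would identify what is on each drive (the VG/LV names here are placeholders):
    # scan every block device for LVM labels and show which VG each PV belongs to
    pvscan
    pvs -o pv_name,pv_uuid,vg_name
    # activate whatever was found and mount a logical volume read-only to inspect it
    vgchange -ay
    mount -o ro /dev/<vgname>/<lvname> /mnt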
2010 Jan 09
1
Moving LVM from one machine to another
My CentOS4 machine died (CPU cooler failure, causing CPU to die). In this machine I had 5 Tbyte disks in a RAID5, and LVM structures on that. Now I've moved those 5 disks onto a CentOS5 machine and the RAID array is being rebuilt. However the LVM structures weren't detected at boot time. I was able to "vgscan" and 'vgchange -a y' to bring the volume online and then
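The manual steps named in the post can be made persistent so the array and the volume group come up at boot; a rough sketch, with the VG/LV names assumed rather than taken from the post:
    # record the assembled array so it is started early at boot
    mdadm --detail --scan >> /etc/mdadm.conf
    # re-detect and activate the volume group, then mount a volume
    vgscan
    vgchange -a y
    mount /dev/VolGroup00/LogVol01 /data    # names are hypothetical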
2018 Oct 16
1
C 7 installation annoyances
Gordon Messmer wrote: > On 10/16/18 1:24 PM, mark wrote: > >> Gordon Messmer wrote: >> >>> As best I recall, there's no support in the UI for LVM volumes with >>> RAID level. (And I don't see any such option in the kickstart >>> documentation, which also suggests that it won't be present in the >>> UI.) >>> The supported
2015 Jun 24
4
LVM hatred, was Re: /boot on a separate partition?
Once upon a time, m.roth at 5-cent.us <m.roth at 5-cent.us> said: > Here's a question: all of the arguments you're giving have to do with VMs. > Do you have some for straight-on-the-server, non-VM cases? I've used LVM on servers with hot-swap drives to migrate to new storage without downtime a number of times. Add new drives to the system, configure RAID (software or
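The no-downtime migration being described usually boils down to pvmove; a minimal sketch, assuming vg00 as the volume group, /dev/md1 as the old RAID device, and /dev/md2 as the new one:
    # bring the new RAID device into the volume group
    pvcreate /dev/md2
    vgextend vg00 /dev/md2
    # move all allocated extents off the old device while filesystems stay mounted
    pvmove /dev/md1
    # remove the now-empty old device from the group
    vgreduce vg00 /dev/md1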
2018 Oct 16
1
C 7 installation annoyances
*** This response is my personal opinion and may not reflect that of my employer. *** It seems recent updates to Anaconda and in particular Blivet in the CentOS 7 .iso are restricting things rather than expanding the options. We have a scenario where we are doing a CentOS 6 to CentOS 7 upgrade (replacement) using a backup partition to hold the install image. Over the years, as we added cloud
2015 Jun 24
6
LVM hatred, was Re: /boot on a separate partition?
On 06/23/2015 08:10 PM, Marko Vojinovic wrote: > Ok, you made me curious. Just how dramatic can it be? From where I'm > sitting, a read/write to a disk takes the amount of time it takes, the > hardware has a certain physical speed, regardless of the presence of > LVM. What am I missing? Well, there are best and worst case scenarios. Best case for file-backed VMs is
2018 Oct 16
2
C 7 installation annoyances
Gordon Messmer wrote: > On 10/15/18, mark <m.roth at 5-cent.us> wrote: > >> In the disk partitioner, I can't >> 1) choose to make the LVM with root and swap be on a RAID 1. Is there >> some way to do that, rather than two separate partitions RAIDed? > > As best I recall, there's no support in the UI for LVM volumes with > RAID level. (And I don't
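For what it's worth, what kickstart does document is LVM layered on top of an MD RAID 1, which covers the root-and-swap-on-one-mirror case even if LVM-level RAID itself isn't exposed; roughly like this, with sizes and names as placeholders:
    part raid.01 --size=1 --grow --ondisk=sda
    part raid.02 --size=1 --grow --ondisk=sdb
    raid pv.01 --level=1 --device=md1 raid.01 raid.02
    volgroup vg00 pv.01
    logvol swap --vgname=vg00 --name=swap --size=4096
    logvol /    --vgname=vg00 --name=root --size=1 --grow --fstype=xfs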
2008 Jan 18
1
HowTo Recover Lost Data from LVM RAID1 ?
Guys, the other day while working on my old workstation it froze, and after a reboot I lost almost all data unexpectedly. I have a RAID1 configuration with LVM on 2 IDE HDDs: md0 stores /boot (100MB) on /dev/hda2 and /dev/hdd1; md1 stores / (26GB) on /dev/hda3 and /dev/hdd2. The only info that still remained was what I restored after the fresh
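When data goes missing after a crash on a stack like this, a cautious first step is to assemble the array read-only, activate LVM, and see whether the logical volumes still hold a mountable filesystem; a sketch using the device names from the post, with the VG/LV names invented:
    # assemble the root array without allowing writes
    mdadm --assemble --readonly /dev/md1 /dev/hda3 /dev/hdd2
    # activate the volume group and mount the root LV read-only for inspection
    vgchange -ay
    mount -o ro /dev/VolGroup00/LogVol00 /mnt    # VG/LV names are hypothetical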
2018 Aug 01
0
(EXT) CentOS Digest, Vol 162, Issue 29
Send CentOS mailing list submissions to centos at centos.org -----Original Message----- From: CentOS <centos-bounces at centos.org> On Behalf Of centos-request at centos.org Sent: Tuesday, July 31, 2018 5:30 PM To: centos at centos.org Subject: (EXT) CentOS Digest, Vol 162, Issue 29 Send CentOS mailing list submissions to centos at centos.org To subscribe or unsubscribe via the World
2014 May 10
2
EFI and RAID questions
Hi All; I have a new server we're setting up that supports EFI or Legacy in the BIOS. I am a solid database guy but my SA skills are limited to what I need to get by. 1) I used EFI because I wanted to create a RAID 10 array with 6 4TB drives and apparently I cannot set up GPT partitions via parted in legacy mode (at least that's what I've read - is this true?) 2) I installed the OS
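As far as I know, parted will write a GPT label regardless of whether the firmware boots in legacy or EFI mode; what legacy boot needs on a GPT disk is a small BIOS boot partition for grub. A sketch, with the device name assumed:
    # label the disk GPT and carve out a BIOS boot partition plus a data partition
    parted -s /dev/sda mklabel gpt
    parted -s /dev/sda mkpart biosboot 1MiB 2MiB
    parted -s /dev/sda set 1 bios_grub on
    parted -s /dev/sda mkpart data 2MiB 100%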
2008 Jan 18
1
Recover lost data from LVM RAID1
Guys, the other day while working on my old workstation it froze, and after a reboot I lost almost all data unexpectedly. I have a RAID1 configuration with LVM on 2 IDE HDDs: md0 stores /boot (100MB) on /dev/hda2 and /dev/hdd1; md1 stores / (26GB) on /dev/hda3 and /dev/hdd2. The only info that still remained was what I restored after the fresh install. It seems that the
2016 May 06
0
OT: hardware: MegaCli and initializing a RAID
On Fri, May 6, 2016 4:19 pm, m.roth at 5-cent.us wrote: > Valeri Galtsev wrote: >> >> On Fri, May 6, 2016 1:36 pm, m.roth at 5-cent.us wrote: >>> Got a new box I'm trying to set up. I configured the RAID from the >>> firmware, but "fast initialize" was sitting there at 0% (it's about 43 >>> or 45TB). The first time I tried this, I said
2016 May 06
2
OT: hardware: MegaCli and initializing a RAID
Valeri Galtsev wrote: > > On Fri, May 6, 2016 1:36 pm, m.roth at 5-cent.us wrote: >> Got a new box I'm trying to set up. I configured the RAID from the >> firmware, but "fast initialize" was sitting there at 0% (it's about 43 >> or 45TB). The first time I tried this, I said background, and >> rebooted the system. >> >> And the stupid
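The progress of a slow initialization can usually be watched from the OS side; the exact MegaCli flags vary by version, so treat these as from-memory approximations to be checked against the tool's own help output rather than verified syntax:
    # list the logical drives and their current state
    MegaCli64 -LDInfo -Lall -aAll
    # show progress of a running initialization
    MegaCli64 -LDInit -ShowProg -LAll -aAll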
2019 May 17
2
is 'list_del corruption' fix available in Centos ?
Warren Young wrote: > On May 17, 2019, at 9:53 AM, John Hodrien <J.H.Hodrien at leeds.ac.uk> > wrote: >> On Fri, 17 May 2019, James Szinger wrote: >>> On Fri, May 17, 2019 at 3:17 AM John Hodrien >>> <J.H.Hodrien at leeds.ac.uk> wrote: >>> >>>> RHEL advice would clearly be not to use btrfs. >>> >>> I'm curious, is
2014 Dec 11
0
CentOS 7 grub.cfg missing on new install
On 10/12/14 18:13, Jeff Boyce wrote: > Greetings - > > The short story is that I got my new install completed with the > partitioning I wanted and using software raid, but after a reboot I > ended up with a grub prompt, and do not appear to have a grub.cfg file. > So here is a little history of how I got here, because I know in order > for anyone to help me they would
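On CentOS 7 a missing grub.cfg can normally be regenerated from the rescue environment after chrooting into the installed system; a rough sketch, assuming the installer's rescue mode has mounted the system at /mnt/sysimage and that sda/sdb are the software-RAID members:
    chroot /mnt/sysimage
    # regenerate the grub configuration
    grub2-mkconfig -o /boot/grub2/grub.cfg
    # reinstall the boot loader on each disk of the mirror (BIOS boot assumed)
    grub2-install /dev/sda
    grub2-install /dev/sdb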
2015 Dec 03
2
7.2 kernel panic on boot
Duncan Brown wrote: > On 03/12/2015 17:00, m.roth at 5-cent.us wrote: >> Duncan Brown wrote: >>> On 03/12/2015 14:29, Leon Fauster wrote: >>>> Am 03.12.2015 um 15:06 schrieb Duncan Brown <centos2 at duncb.co.uk>: >>>>> On 03/12/2015 13:54, Jonathan Billings wrote: >>>>>> On Thu, Dec 03, 2015 at 01:44:47PM +0000, Duncan Brown
2013 Apr 11
6
RAID 6 - opinions
I'm setting up this huge RAID 6 box. I've always thought of hot spares, but I'm reading things that are comparing RAID 5 with a hot spare to RAID 6, implying that the latter doesn't need one. I *certainly* have enough drives to spare in this RAID box: 42 of 'em, so two questions: should I assign one or more hot spares, and, if so, how many? mark
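With mdadm the hot-spare decision is just a flag at creation time, and spares can also be added later; a sketch with made-up device names and member counts:
    # RAID 6 over 10 members plus 1 hot spare
    mdadm --create /dev/md0 --level=6 --raid-devices=10 \
          --spare-devices=1 /dev/sd[b-l]1
    # a spare can also be added to an already-running array
    mdadm --add /dev/md0 /dev/sdm1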