similar to: LVM failure after CentOS 7.6 upgrade -- possible corruption

Displaying 20 results from an estimated 3000 matches similar to: "LVM failure after CentOS 7.6 upgrade -- possible corruption"

2018 Dec 05
0
LVM failure after CentOS 7.6 upgrade -- possible corruption
I've started updating systems to CentOS 7.6, and so far I have one failure. This system has two peculiarities which might have triggered the problem. The first is that one of the software RAID arrays on this system is degraded. While troubleshooting the problem, I saw similar error messages mentioned in bug reports indicating that GNU/Linux systems
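A quick way to confirm which md array is degraded, before and after the upgrade (a minimal sketch; /dev/md0 is illustrative):

  # overview of all md arrays and their sync state
  cat /proc/mdstat
  # detailed state of one array; look for "degraded" and missing members
  mdadm --detail /dev/md0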
2018 Dec 05
0
LVM failure after CentOS 7.6 upgrade -- possible corruption
My gut feeling is that this is related to a RAID1 issue I'm seeing with 7.6. See email thread "CentOS 7.6: Software RAID1 fails the only meaningful test". I suggest trying to boot from an earlier kernel. Good luck! Ben S

On Wednesday, December 5, 2018 9:27:22 AM PST Gordon Messmer wrote:
> I've started updating systems to CentOS 7.6, and so far I have one failure.
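On CentOS 7, falling back to the previous kernel can be made persistent with the grub2 tools; a minimal sketch, assuming a BIOS machine with grub.cfg in the default location:

  # list installed kernels and their menu indexes
  grubby --info=ALL | grep -E '^(index|kernel)'
  # boot the previous kernel (index 1) by default from now on
  grub2-set-default 1
  grub2-mkconfig -o /boot/grub2/grub.cfg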
2018 Dec 05
1
LVM failure after CentOS 7.6 upgrade -- possible corruption
On Wed, 5 Dec 2018 at 14:27, Benjamin Smith <lists at benjamindsmith.com> wrote:
> My gut feeling is that this is related to a RAID1 issue I'm seeing with 7.6.
> See email thread "CentOS 7.6: Software RAID1 fails the only meaningful test"

You might want to point out which list you posted it on since it doesn't seem to be this one.

> I suggest trying to
2013 Mar 03
4
Strange behavior from software RAID
Somewhere, mdadm is caching information. Here is my /etc/mdadm.conf file:

more /etc/mdadm.conf
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382
ARRAY /dev/md2 level=raid1 num-devices=2
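If stale array data keeps reappearing, it often lives in the initramfs copy of mdadm.conf rather than in the file itself; a hedged sketch for regenerating both (assumes an EL6-style system with dracut):

  # rebuild ARRAY lines from the superblocks the kernel can actually see
  mdadm --examine --scan > /etc/mdadm.conf.new
  diff /etc/mdadm.conf /etc/mdadm.conf.new   # review before replacing
  # the initramfs embeds mdadm.conf, so rebuild it after updating the file
  dracut -f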
2016 Jul 26
8
[PATCH 0/5] Improve LVM handling in the appliance
Hi, this series improves the way LVM is used in the appliance: in particular, lvmetad can now actually run, and with the correct configuration. It also improves the listing strategies. Thanks, Pino Toscano

Pino Toscano (5):
  daemon: lvm-filter: set also global_filter
  daemon: lvm-filter: start lvmetad better
  daemon: lvm: improve filter for LVs with activationskip flag set
  daemon: lvm: list
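For context, global_filter lives in the devices section of lvm.conf and, unlike plain filter, is also honoured by lvmetad; an illustrative fragment (the patterns here are assumptions, not the patch's actual values):

  devices {
      # accept SCSI-style disks, reject everything else
      global_filter = [ "a|^/dev/sd|", "r|.*|" ]
  }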
2015 Feb 28
9
Looking for a life-save LVM Guru
Dear All, I am in desperate need of LVM data rescue for my server. I have a VG called vg_hosting consisting of 4 PVs, each contained in a separate hard drive (/dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1). The LV lv_home was created to use all the space of the 4 PVs. Right now, the third hard drive is damaged, and therefore the third PV (/dev/sdc1) cannot be accessed anymore. I would like
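With one PV gone, a salvage attempt usually starts with partial activation of the VG; a minimal sketch, assuming an lvm2 recent enough to have --activationmode (2.02.108 or later):

  # bring up what remains of the VG despite the missing /dev/sdc1
  vgchange -ay --activationmode partial vg_hosting
  # then try a read-only mount of whatever device-mapper could assemble
  mount -o ro /dev/vg_hosting/lv_home /mnt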
2013 May 13
7
Remove a materially failed device from a Btrfs "single-raid" using partitions
Hello, I am on Ubuntu Server 13.04 with Linux 3.8. I've created a "single-raid" using /dev/sd{a,b,c,d}{1,3}. One of my hard drives has failed; I mean it's physically dead.

:~$ sudo btrfs filesystem show
Label: none  uuid: 40886f51-8c9b-4be1-8721-83bf5653d2a0
        Total devices 5 FS bytes used 226.90GB
        devid    4 size 37.27GB used 31.01GB path /dev/sdd1
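The usual btrfs recovery path for a physically dead disk is to mount degraded and drop the missing device; a sketch, assuming a surviving member at /dev/sda1, a mount point at /mnt, and enough free space to restripe:

  # mount without the dead disk
  mount -o degraded /dev/sda1 /mnt
  # remove the absent device; btrfs migrates its data to the others
  btrfs device delete missing /mnt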
2015 Feb 28
1
Looking for a life-save LVM Guru
Dear James,

Thank you for being quick to help. Yes, I could see all of them:
# vgs
# lvs
# pvs

Regards, Khem

On Sat, February 28, 2015 7:37 am, James A. Peltier wrote:
> ----- Original Message -----
> | Dear All,
> |
> | I am in desperate need of LVM data rescue for my server.
> | I have a VG called vg_hosting consisting of 4 PVs each contained in a
> | separate
2010 May 21
1
Grub Error 22; no Windows
Hello, I have a GridEngine setup with 5 subnodes and two RAIDs attached. I backed up the OS drive (120 GB) to an external hard drive (500 GB) using ddrescue. The OS drive is partitioned as:

sda1 has the OS and is about 7 GB
sda2 has /var and is about 4 GB
sda3 has swap and is about 1 GB

After backing up, there were 4 KB of errors, but all at the end of the disk, around 118 GB. This used to be
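For reference, a typical two-pass ddrescue run of the kind described (device names illustrative; the log file lets the second pass retry only the areas that failed the first time):

  # pass 1: grab everything readable, skipping bad areas quickly
  ddrescue -f -n /dev/sda /dev/sdb rescue.log
  # pass 2: retry the remaining bad sectors up to three times
  ddrescue -f -r3 /dev/sda /dev/sdb rescue.log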
2015 Feb 18
5
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi, I just replaced Slackware64 14.1 running on my office's HP ProLiant MicroServer with a fresh installation of CentOS 7. The server has 4 x 250 GB disks. Every disk is configured like this:

* 200 MB /dev/sdX1 for /boot
* 4 GB /dev/sdX2 for swap
* 248 GB /dev/sdX3 for /

There are supposed to be no spare devices. /boot and swap are all supposed to be assembled in RAID level 1 across
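Outside the installer, the same layout can be assembled explicitly with no spares; a hedged sketch of the equivalent mdadm commands (md device numbers are assumptions):

  # /boot as RAID1 across all four first partitions
  mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[abcd]1
  # / as RAID5 across the third partitions, explicitly with zero spares
  mdadm --create /dev/md2 --level=5 --raid-devices=4 --spare-devices=0 /dev/sd[abcd]3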
2016 Feb 18
2
CentOS 7, Xeon CPUs, not booting, [SOLVED], bug filed
Paul Heinlein wrote: > On Thu, 18 Feb 2016, m.roth at 5-cent.us wrote: > >> This is happening on anything other than plain vanilla Dell servers. One >> R730, with dual Tesla cards, one R420, with a fibre card for a RAID >> device, it never switches root. All these systems have Xeons, not AMD >> CPUs. >> >> We've had this with every one of the 327
2007 Feb 06
1
Increasing existing partition and LVM size
I have a disk on which CentOS is installed and running. The disk partitions look like this:

Disk /dev/sda: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        1044
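Once the partition backing a PV has been enlarged, the usual sequence is pvresize, lvextend, then a filesystem grow; a sketch assuming /dev/sda2 is the PV and a hypothetical root LV named /dev/VolGroup00/LogVol00 on ext3/ext4:

  # make LVM notice the larger partition
  pvresize /dev/sda2
  # hand all new free space to the root LV, then grow the filesystem
  lvextend -l +100%FREE /dev/VolGroup00/LogVol00
  resize2fs /dev/VolGroup00/LogVol00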
2011 Dec 07
2
failure converting Linux ESX guest to KVM hypervisor
Hi, I am experiencing a failure running virt-v2v to convert a Linux guest on an ESX host to a Red Hat KVM hypervisor. The output with the failure follows. Any help/guidance is appreciated.

[root at storage-024 ~]# virt-v2v -ic esx://<ip address>/?no_verify=1 -op transferimages --bridge br0 dev-03 > /tmp/virt-v2v.output
error from Term::ReadKey::GetTerminalSize(): Unable to get
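That GetTerminalSize() error generally means Term::ReadKey could not probe a console size, often because output is redirected; a commonly suggested workaround (an assumption here, not confirmed by the thread) is to supply the size via the environment and retry:

  # give Term::ReadKey an explicit terminal size, then rerun the conversion
  export LINES=24 COLUMNS=80
  virt-v2v -ic 'esx://<ip address>/?no_verify=1' -op transferimages --bridge br0 dev-03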
2015 Feb 19
3
iostat a partition
Hey guys, I need to use iostat to diagnose a disk latency problem we think we may be having. So if I have this disk partition:

[root at uszmpdblp010la mysql]# df -h /mysql
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/MysqlVG-MysqlVol  9.9G  1.1G  8.4G  11% /mysql

And I want to correlate that to the output of fdisk -l, so that I can feed the disk
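To map a device-mapper volume back to its physical disks, lsblk's inverse tree is usually enough, after which iostat can watch the real device; a minimal sketch (the 5-second interval and /dev/sda are illustrative):

  # show the partitions and disks sitting underneath the LV
  lsblk -s /dev/mapper/MysqlVG-MysqlVol
  # extended per-device stats every 5 seconds; await is the latency column
  iostat -dx 5 /dev/sda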
2012 Jun 01
2
installation and configuration documentation for XCP
I've installed XCP 1.5-beta. I'm a little confused as to what has happened. Everything so far seems to work; however, I need more information on what was done to my hard disk during the installation and how the file system was set up. In particular, I was investigating how to create a new logical volume to hold my ISO files for use as my ISO storage (SR). I notice (see below with
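If the goal is simply a new LV to hold ISO images, the plain LVM steps are short (a sketch with hypothetical VG/LV names; note that on XCP the supported route for an ISO SR is xe sr-create rather than a hand-made volume):

  # carve a 20 GB LV out of free space in the VG and give it a filesystem
  lvcreate -L 20G -n iso_store VG0
  mkfs.ext3 /dev/VG0/iso_store
  mount /dev/VG0/iso_store /mnt/iso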
2009 May 27
2
Problem with OCFS2 on RHEL5.0 while installing CRS 10.2.01
Hi team, I had installed OCFS2 on RHEL 5.0. Everything looks fine, but when I was installing CRS on the node I got an error message that OCFS2 is not supported. Can you please shed some light on this? Please find other info below.

[root at eregtest1 client]# uname -a
Linux eregtest1.admin.abdn.ac.uk 2.6.18-92.el5 #1 SMP Tue Apr 29 13:16:15 EDT 2008 x86_64 x86_64 x86_64 GNU/Linux

I am using RHEL 5.0
2016 Jul 26
5
[PATCH v2 0/4] Improve LVM handling in the appliance
Hi, this series improves the way LVM is used in the appliance: in particular, lvmetad can now actually run, and with the correct configuration. It also improves the listing strategies.

Changes in v2:
- dropped patch #5, will be sent separately
- moved lvmetad startup into its own function (patch #2)

Thanks, Pino Toscano

Pino Toscano (4):
  daemon: lvm-filter: set also global_filter
  daemon: lvm-filter:
2013 Mar 01
1
Reorg of a RAID/LVM system
I have a system with 4 disk drives, two 512 GB and two 1 TB. It looks like this:

CentOS release 5.9 (Final)
Disk /dev/sda: 500.1 GB, 500107862016 bytes
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
Disk /dev/sdc: 500.1 GB, 500107862016 bytes
Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
=================================================================
Disk /dev/sda: 500.1 GB, 500107862016 bytes
2017 Jul 26
5
[PATCH 0/2] daemon: Reimplement handling of lvm.conf and filters.
Simplify how we handle lvm.conf.
2008 Jul 24
1
Help recovering from an LVM issue
Hi people, I just updated a CentOS 5.2 server that is a guest inside a VMware ESX 3.50 server using "yum update". As far as I can tell, only three packages were updated:

Jul 24 16:37:49 Updated: php-common - 5.1.6-20.el5_2.1.i386
Jul 24 16:37:50 Updated: php-cli - 5.1.6-20.el5_2.1.i386
Jul 24 16:37:50 Updated: php - 5.1.6-20.el5_2.1.i386

But when I rebooted the server, one of my
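When a guest stops assembling its LVs after an update, the first salvage step from the install media's rescue mode is usually a manual rescan and activation; a minimal sketch:

  # rescan block devices for PVs/VGs, then activate everything found
  vgscan
  vgchange -ay
  # list LVs with their backing devices to see what came up
  lvs -o lv_name,vg_name,lv_attr,devices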