similar to: "Can't find root device" with lvm root after moving drive on CentOS 6.3

Displaying 20 results from an estimated 1100 matches similar to: ""Can't find root device" with lvm root after moving drive on CentOS 6.3"

2014 Oct 14
2
CentOS 6.4 kernel panic on boot after upgrading kernel to 2.6.32-431.29.2
I'm on a Supermicro server, X9DA7 motherboard, Intel C602 chipset, 2x 2.4GHz Intel Xeon E5-2665 8-core CPU, 96GB RAM, and I'm running CentOS 6.4. I just tried to use yum to upgrade the kernel from 2.6.32-358 to 2.6.32-431.29.2. However, I get a kernel panic on boot. The first kernel panic I got included stuff about acpi, so I tried adding noacpi noapic to the kernel boot parameters,
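Until a panic like this is diagnosed, the usual workaround on CentOS 6 is to boot the previous, known-good kernel by default. A sketch assuming GRUB 0.97 (the CentOS 6 default) and that the 2.6.32-358 kernel is still installed; the entry index used here is an assumption and must be checked against the actual grub.conf:

```shell
# List the boot entries and find the index (0-based) of the old 2.6.32-358 kernel.
grep '^title' /boot/grub/grub.conf
# Assuming the old kernel is the second entry (index 1), make it the default.
sed -i 's/^default=.*/default=1/' /boot/grub/grub.conf
# If the new kernel keeps panicking, it can also be removed outright:
# yum remove kernel-2.6.32-431.29.2.el6
```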
2013 Aug 21
2
fsck.ext4 Failed to optimize directory
I had a rather large ext4 partition on an Areca RAID shut down uncleanly while it was writing. When I mount it again, it recommends fsck, which I do, and I get the following error: Failed to optimize directory ... EXT2 directory corrupted This error shows up every time I run fsck.ext4 on this partition. How can I fix this? The file system seems to work ok otherwise, I can mount it and it
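The "Failed to optimize directory" message comes from e2fsck's directory-optimization pass, which can be invoked explicitly with `-D`. A commonly suggested sequence, assuming the filesystem is unmounted first (device and mount-point names below are placeholders, not from the post):

```shell
# Hypothetical device name; run only on an unmounted filesystem.
umount /mnt/raid
e2fsck -fy /dev/sdb1    # forced full check, fixing what it can automatically
e2fsck -fD /dev/sdb1    # explicitly rebuild and optimize directory structures
```

If the same error persists across repeated runs, a newer e2fsprogs than the one shipped with the distribution may handle the corruption better.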
2011 Jan 08
4
LiveCD System recovery - Mounting LVM?
Hi, I am trying to recover data from my old system which had LVM. The disk had two partitions - /dev/sda1 (boot, Linux) and /dev/sda2 (Linux LVM). I had taken a backup of both partitions using dd. Now I am booting off a CentOS live CD for system restore. I recreated partitions like the previous system using fdisk and then used dd to dump all the data onto it. I would like to mount sda2 as LVM, but I
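The standard way to mount an LVM volume from a live CD is to scan for and activate the volume group first. A minimal sketch; the volume-group and LV names (vg00, lv_root) are placeholders, since the post does not name them:

```shell
vgscan                        # scan block devices for LVM volume groups
vgchange -ay vg00             # activate all logical volumes in the group
lvs                           # list logical volumes to identify the root LV
mkdir -p /mnt/restore
mount /dev/vg00/lv_root /mnt/restore
```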
2013 Mar 24
5
How to make a network interface come up automatically on link up?
I have a recently installed Mellanox VPI interface in my server. This is an InfiniBand interface, which, through the use of adapters, can also do 10GbE over fiber. I have one of the adapter's two ports configured for 10GbE in this way, with a point to point link to a Mac workstation with a Myricom 10GbE card. I've configured this interface on the Linux box (eth2) using
2014 Oct 14
3
Filesystem writes unexpectedly slow (CentOS 6.4)
I have a rather large box (2x8-core Xeon, 96GB RAM) where I have a couple of disk arrays connected on an Areca controller. I just added a new external array, 8 3TB drives in RAID5, and the testing I'm doing right now is on this array, but this seems to be a problem on this machine in general, on all file systems (even, possibly, NFS, but I'm not sure about that one yet). So, if I use
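With 96GB of RAM, buffered writes can look fast until the page cache fills, so a first step is to separate cache effects from real array throughput. A quick dd comparison; the target path is a placeholder to be pointed at the array under test:

```shell
# Quick write-throughput check; set TESTFILE to a path on the array under test
# (the default below is only a placeholder).
TESTFILE=${TESTFILE:-/tmp/ddtest}
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync  # timing includes the final flush
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 oflag=direct    # bypasses the page cache entirely
rm -f "$TESTFILE"
```

A large gap between the two numbers points at caching/writeback behavior rather than the controller or disks.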
2013 Mar 26
1
ext4 deadlock issue
I'm having an occasional problem with a box. It's a Supermicro 16-core Xeon, running CentOS 6.3 with kernel 2.6.32-279.el6.x86_64, 96 gigs of RAM, and an Areca 1882ix-24 RAID controller with 24 disks, 23 in RAID6 plus a hot spare. The RAID is divided into 3 partitions, two of 25 TB plus one for the rest. Lately, I've noticed sporadic hangs on writing to the RAID, which
2013 Apr 26
1
Why is my default DISPLAY suddenly :3.0?
I'm on Fedora 6.3. After a reboot, some proprietary software didn't want to run. I found out that the startup script for said software manually sets DISPLAY to :0.0, which I know is not a good idea, and I can fix. However, this still doesn't explain why my default X DISPLAY is suddenly :3.0. -- Joakim Ziegler - Supervisor de postproducción - Terminal joakim at terminalmx.com
2013 Aug 19
1
LVM RAID0 and SSD discards/TRIM
I'm trying to work out the kinks of a proprietary, old, and clunky application that runs on CentOS. One of its main problems is that it writes image sequences extremely non-linearly and in several passes, using many CPUs, so the sequences get very fragmented. The obvious solution to this seems to be to use SSDs for its output, and some scripts that will pick up and copy out the sequences
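For discards to reach the SSDs through LVM, two pieces are typically involved: letting LVM pass discards down, and trimming free space in the filesystem. A sketch of the usual setup (the mount point is a placeholder):

```shell
# 1. In /etc/lvm/lvm.conf, allow LVM to issue discards to the underlying
#    devices when LVs are removed or reduced:
#        issue_discards = 1
# 2. Trim free space in the filesystem periodically (e.g. from cron),
#    which is generally preferred over the 'discard' mount option:
fstrim -v /mnt/ssd
```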
2011 Apr 29
2
how to access lvm inside lvm
I have a centos 5.6 server that has xen domUs installed on their on logical volumes. These logical volumes contain their own volume groups and again their own logical volumes. I want to access the domU logical volumes and tried this: [root at kr ~]# fdisk -l /dev/VolGroup00/LogVol02 Disk /dev/VolGroup00/LogVol02: 274.8 GB, 274877906944 bytes 255 heads, 63 sectors/track, 33418 cylinders Units =
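fdisk only shows the partition table; to reach the nested volumes, one common approach is to map the partitions inside the domU's LV with kpartx and then activate the nested volume group. A sketch using the device name from the quoted output (the mapped-partition names are what kpartx typically creates, and may differ):

```shell
kpartx -av /dev/VolGroup00/LogVol02   # maps e.g. /dev/mapper/LogVol02p1, p2, ...
vgscan                                # the nested VG should now be detected
vgchange -ay                          # activate the nested logical volumes
# ...mount and copy data, then reverse the steps when done:
# vgchange -an <nested-vg>
# kpartx -dv /dev/VolGroup00/LogVol02
```

Note that if the nested VG has the same name as a host VG, it must be renamed (vgrename by UUID) before it can be activated.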
2018 Jul 19
1
Re: [PATCH 2/3] New API: lvm_scan, deprecate vgscan (RHBZ#1602353).
On Wednesday, 18 July 2018 15:37:24 CEST Richard W.M. Jones wrote: > The old vgscan API literally ran vgscan. When we switched to using > lvmetad (in commit dd162d2cd56a2ecf4bcd40a7f463940eaac875b8) this > stopped working because lvmetad now ignores plain vgscan commands > without the --cache option. > > We documented that vgscan would rescan PVs, VGs and LVs, but without >
2018 Jul 18
5
[PATCH 0/3] New API: lvm_scan, deprecate vgscan (RHBZ#1602353).
[This email is either empty or too large to be displayed at this time]
2010 Nov 12
4
Opinion on best way to use network storage
I need the community's opinion on the best way to use my storage SAN to host xen images. The SAN itself is running iSCSI and NFS. My goal is to keep all my xen images on the SAN device, and to be able to easily move images from one host to another as needed while minimizing storage requirements and maximizing performance. What I see are my options: 1) Export a directory through NFS.
2018 Jul 25
4
[PATCH v2 0/4] New API: lvm_scan, deprecate vgscan (RHBZ#1602353).
v2: - Changes as suggested by Pino in previous review.
2006 Oct 12
5
AoE LVM2 DRBD Xen Setup
Hello everybody, I am in the process of setting up a really cool xen serverfarm. Backend storage will be an LVMed AoE-device on top of DRBD. The goal is to have the backend storage completely redundant. Picture:

|RAID |         |RAID |
|DRBD1| <----> |DRBD2|
      \         /
      |  VMAC  |
      |  AoE   |
  |global LVM VG|
    /     |    \
|Dom0a| |Dom0b| |Dom0c|
2011 Jul 22
4
VM backup problem
Hai, I use following steps for LV backup: lvcreate -L 5G -s -n lv_snapshot /dev/VG_XenStorage-7b010600-3920-5526-b3ec-6f7b0f610f3c/VHD-a2db885c-9ad0-46c3-b2c3-a30cb71d83f8 ("lv_snapshot created"). This command worked properly. Then I issue the kpartx command: kpartx -av
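A full snapshot-backup cycle along the lines the post starts typically also mounts the mapped partition, archives it, and tears the snapshot down afterwards. A sketch with the long VG/VHD names from the post abbreviated as shell variables, and with the kpartx map name being the typical (but not guaranteed) naming:

```shell
VG=VG_XenStorage-7b010600-3920-5526-b3ec-6f7b0f610f3c
VHD=VHD-a2db885c-9ad0-46c3-b2c3-a30cb71d83f8

lvcreate -L 5G -s -n lv_snapshot /dev/$VG/$VHD   # point-in-time snapshot of the guest disk
kpartx -av /dev/$VG/lv_snapshot                  # map partitions inside the snapshot
mount -o ro /dev/mapper/lv_snapshotp1 /mnt/backup
tar czf /backup/vm.tar.gz -C /mnt/backup .       # archive the guest filesystem
umount /mnt/backup
kpartx -dv /dev/$VG/lv_snapshot                  # unmap partitions
lvremove -f /dev/$VG/lv_snapshot                 # drop the snapshot
```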
2016 Jul 26
8
[PATCH 0/5] Improve LVM handling in the appliance
Hi, this series improves the way LVM is used in the appliance: in particular, now lvmetad can eventually run at all, and with the correct configuration. Also improve the listing strategies. Thanks, Pino Toscano (5): daemon: lvm-filter: set also global_filter daemon: lvm-filter: start lvmetad better daemon: lvm: improve filter for LVs with activationskip flag set daemon: lvm: list
2016 Jul 26
5
[PATCH v2 0/4] Improve LVM handling in the appliance
Hi, this series improves the way LVM is used in the appliance: in particular, now lvmetad can eventually run at all, and with the correct configuration. Also improve the listing strategies. Changes in v2: - dropped patch #5, will be sent separately - move lvmetad startup in own function (patch #2) Thanks, Pino Toscano (4): daemon: lvm-filter: set also global_filter daemon: lvm-filter:
2011 Dec 21
1
for a guest accessing host "full disk", how to prevent host vgscan
Hi All. I have a dell system with a H700 raid. Within the hardware RAID config I've created a "virtual disk" which I have assigned to one of my guests. On the host the device is "/dev/sdb", on the guest it's "/dev/vdb". This works fine. Within the guest, we have created lvm PV on /dev/vdb (using the whole disk - no partitions) and created a volume
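The usual way to keep the host's LVM tools from touching a disk that belongs wholly to a guest is a reject rule in the host's lvm.conf device filter. A sketch matching the /dev/sdb device named in the post:

```
# /etc/lvm/lvm.conf on the host: reject the guest's whole-disk PV so the host
# never scans or activates it, and accept everything else.
devices {
    filter = [ "r|^/dev/sdb$|", "a|.*|" ]
}
```

After editing, the host's LVM cache/initramfs may need regenerating so the filter takes effect at boot as well.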
2007 Apr 24
3
Migrating DomUs from file VBD to LVM Backend
Hello, what would be the easiest and fastest way to migrate a file based DomU to a LVM based DomU. Given that the file contains a partition table + partitions. I tried with an dummy approach: dd if=/path/to/file.img of=/dev/volumegroup/logical-volume. That gives me a logical volume that xm create refuses to use.... Reinhard _______________________________________________ Xen-users mailing
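The dd copy itself is reasonable; the likely sticking point is that the LV now holds a whole-disk image with a partition table, so it must be presented to the domU as a disk, not a partition. A sketch using the paths from the post (the domU config line is an illustrative assumption, not quoted from it):

```shell
dd if=/path/to/file.img of=/dev/volumegroup/logical-volume bs=1M
kpartx -av /dev/volumegroup/logical-volume   # verify the partitions copied over
kpartx -dv /dev/volumegroup/logical-volume   # unmap again before starting the guest
# In the domU config, present the whole LV as a disk device, e.g.:
#   disk = [ 'phy:/dev/volumegroup/logical-volume,xvda,w' ]
```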