Displaying 20 results from an estimated 3000 matches similar to: "mount old LVM drive"
2008 Nov 10
1
Autodetecting RAID members upon boot... need to update initrd?
Hello fellow CentOS'ers-
I've got a system running CentOS 5.0. The motherboard has two onboard SATA ports with two drives attached. I installed the system on a RAID1 setup. However, I'd like to add a hotspare disk to the array. Since there are no additional SATA ports, I've installed an additional controller. After partitioning, the additional drive was easily and successfully
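A hot spare can usually be added to a running md array with mdadm; a minimal sketch, assuming the array is /dev/md0 and the new disk on the added controller appeared as /dev/sdc (device names here are guesses, verify with fdisk -l and dmesg first):

```shell
# Copy the partition layout from an existing member to the new disk
# (sda/sdc are example names -- check fdisk -l before running this)
sfdisk -d /dev/sda | sfdisk /dev/sdc

# Adding a partition to an already-complete RAID1 makes it a hot spare
mdadm /dev/md0 --add /dev/sdc1

# The spare shows up with an (S) flag next to it
cat /proc/mdstat
```

Whether the initrd needs updating depends on whether the new controller's driver is already included in it; if the array isn't assembled at boot, rebuilding the initrd with mkinitrd is the usual next step.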
2012 Jun 14
0
Two CentOS installations failed dual boot
Hello everybody,
I installed CentOS 6.2 on a computer with an older version of it in
order to dual boot both of them. I managed to install the new OS on a
physically separate hard drive, and configured grub to make the newly
installed OS the default one. Now the older OS won't boot and this
error message shows: *"error 13: invalid or unsupported executable format"*.
I attached
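GRUB's "Error 13" generally means the file GRUB was told to load is not something it can execute, often because the kernel line points at the wrong disk or partition after a second install. One common workaround is to chainload the old install's own boot sector instead; a hedged grub.conf stanza, assuming the old OS's /boot sits on the first partition of the other disk ((hd1,0) is an assumption, adjust to the real location):

```
title CentOS (old install)
        rootnoverify (hd1,0)
        chainloader +1
```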
2013 Feb 04
3
Questions about software RAID, LVM.
I am planning to increase the disk space on my desktop system. It is
running CentOS 5.9 w/Xen. I have two 160 GB 2.5" laptop SATA drives
in two slots of a 4-slot hot swap bay configured like this:
Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End
2009 Jun 28
1
CentOS 5.3 and NTFS
Aaaaaa, I'm pulling out my hair over here!
I have an external USB drive which I had at work, connected just fine
to my CentOS 5.3 box. I recall there was some jiggery-pokery
involved, but do not recall just what.
So now I'm on my wife's freshly installed CentOS 5.3 laptop trying to
get it going, and I keep getting errors about
FATAL: Module fuse not found.
I saw this message from
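"FATAL: Module fuse not found" usually means the fuse kernel module (and the ntfs-3g userland) never got installed on the fresh box. A sketch, assuming the packages come from a third-party repo such as EPEL or RPMforge, as they did on CentOS 5 (package and device names are the common ones, not confirmed from the post):

```shell
# install FUSE and the NTFS driver (exact package names depend on the repo)
yum install fuse fuse-libs ntfs-3g

# load the kernel module and mount the USB disk
modprobe fuse
mkdir -p /mnt/usb
mount -t ntfs-3g /dev/sdb1 /mnt/usb   # /dev/sdb1 is an example; check dmesg
```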
2013 Sep 15
1
grub command line
Hello Everyone
I have a remote CentOS 6.4 server (with KVM access), when I received
the server it was running with LVM on single disk (sda)
I managed to remove LVM and set up RAID 1 on the sda and sdb disks.
The mirroring is working fine; my only issue now is that every time I
reboot the server I get the grub command line and I have to boot manually
using the command
grub> configfile
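Needing `configfile` by hand at every boot usually means the stage1 in the MBR can no longer find its stage2/grub.conf, which is common after converting a disk from LVM to RAID. Reinstalling GRUB from the grub shell is the usual permanent fix; a sketch, assuming /boot is the first partition on each disk:

```
grub> find /grub/stage1      # confirms which (hdX,Y) holds /boot
grub> root (hd0,0)
grub> setup (hd0)            # rewrite MBR + stage files on the first disk
grub> device (hd0) /dev/sdb  # remap so the second mirror gets its own copy
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
```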
2010 Jul 23
5
install on raid1
Hi All,
I'm currently trying to install CentOS 5.4 x86_64 on a RAID 1, so if one of the 2 disks fails the server will still be available.
I installed grub on /dev/sda using the advanced grub configuration option during the install.
After the install is done I boot in linux rescue mode, chroot the filesystem and copy grub to both drives using:
grub>root (hd0,0)
grub>setup (hd0)
2002 Jun 12
1
ext3+raid 1: Assertion failure in journal_commit_transaction()
We're getting the below errors about once a day on a system we're trying to
set up with RedHat 7.3. This has happened to multiple filesystems on
multiple physical and logical disks (basically we've got 4 drives as 2 sets
of RAID 1 arrays, details below).
Until a week ago, this box was a high-volume IMAP server running RedHat 6.2
with uptimes in the 200-day range, so I don't
2010 Jul 01
1
Superblock Problem
Hi all,
After rebooting my CentOS 5.5 server, I have the following message:
==================================
Red Hat nash version 5.1.19.6 starting
EXT3-fs: unable to read superblock
mount: error mounting /dev/root on /sysroot as ext3: invalid argument
setuproot: moving /root failed: No such file or directory
setuproot: error mounting /proc: No such file or directory
setuproot: error mounting
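"EXT3-fs: unable to read superblock" from nash means the root filesystem's primary superblock can't be read at boot. From rescue media one can usually fsck against a backup copy; a sketch, assuming the root filesystem is /dev/sda2 (the real device may differ, check fdisk -l from the rescue shell):

```shell
# try a backup superblock; 32768 is the usual second copy on a 4k-block
# ext3 filesystem (8193 on 1k-block filesystems)
e2fsck -b 32768 /dev/sda2
```

If the block device itself is missing rather than damaged (for example a driver that never made it into the initrd), the fix is rebuilding the initrd with mkinitrd, not fsck.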
2006 Nov 01
1
e2fsck: Bad magic number in super-block
I posted this to the Fedora-list, but thought I might get some
additional information here as well.
I have a HD that refuses to mount with a 'bad magic number in
super-block'. I'm running Fedora Core 6 x86_64.
[root at moe ~]# fdisk -l /dev/hdc
Disk /dev/hdc: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
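"Bad magic number in super-block" means the primary superblock is unreadable, but ext2/ext3 keeps backup copies at fixed intervals. The recovery can be demonstrated end-to-end on a scratch file image; the final e2fsck invocation is the same one you would aim at the real /dev/hdc partition:

```shell
# build a small 1k-block ext2 image, destroy its primary superblock,
# then repair from the first backup (block 8193 for 1k-block filesystems)
dd if=/dev/zero of=/tmp/demo.img bs=1M count=16 2>/dev/null
mke2fs -q -F -b 1024 /tmp/demo.img
dd if=/dev/zero of=/tmp/demo.img bs=1024 count=2 conv=notrunc 2>/dev/null
e2fsck -p /tmp/demo.img || echo "primary superblock is gone"
e2fsck -y -b 8193 /tmp/demo.img || true  # exit 1 just means errors were fixed
```

For a real partition, `mke2fs -n /dev/hdc1` (the -n makes no changes) prints the backup superblock locations to try, provided it is run with the same block size the filesystem was created with.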
2011 Apr 24
2
Curious fdisk report on large disk
I have a 1.5TB internal disk on my server.
I partitioned this with fdisk,
and CentOS-5.6 runs perfectly on it.
But fdisk gives a very strange report.
Here is the perfectly normal response to mount:
-----------------------------
/dev/sdb10 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sdb2 on /boot type ext3
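fdisk on CentOS 5 only understands MS-DOS disklabels and reports sizes in CHS cylinders, which gets confusing on large disks with many logical partitions (a root on sdb10 implies an extended partition holding several logical ones). parted prints the same layout in unambiguous units; a sketch:

```shell
# print the partition layout in plain units instead of CHS cylinders
parted /dev/sdb unit GB print
```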
2011 Sep 07
1
boot problem after disk change on raid1
Hello,
I have two disks, sda and sdb. One of them was broken, so I replaced the
broken disk with a working one. I started the server in rescue mode,
created the partition table, and added all the partitions to the software
RAID.
*I have added the partitions to the RAID, and rebooted.*
# mdadm /dev/md0 --add /dev/sdb1
# mdadm /dev/md1 --add /dev/sdb2
# mdadm /dev/md2 --add /dev/sdb3
# mdadm
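If the box still won't boot after the resync, the likely missing piece is the boot loader on the replacement disk: mdadm mirrors the partitions, but not the MBR. A sketch, assuming sdb is the new disk:

```shell
# copy the partition table from the surviving disk, if not already done
sfdisk -d /dev/sda | sfdisk /dev/sdb

# re-add the partitions to the arrays as in the original post, then put
# GRUB on the new disk's MBR as well:
grub-install /dev/sdb
```

If grub-install complains about the RAID setup, the interactive grub shell with a `device (hd0) /dev/sdb` remap followed by `root`/`setup` achieves the same thing.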
2006 Sep 28
1
adding a usb drive to an existing raid1 set
It seems like I keep running into a wall.
The present raid array...well let me do an fdisk -l:
----------------------------------------
Disk /dev/hda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/hda1 * 1 13 104391 fd Linux
2008 Oct 18
0
Problem with Mounting iSCSI Shared Storage using OpenFiler
Hi,
I am doing a test setup of 11G RAC, and I am using Jeff Hunter's
document to achieve this.
I have 3 nodes, all running Red Hat Linux 4
htsscsun06-openfiler
htssclin5
htssclin4
I have created the Logical Volumes on htsscsun06-openfiler -
fdisk -l shows
fdisk -l
Disk /dev/hdb: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of
2013 Mar 01
1
Reorg of a RAID/LVM system
I have a system with 4 disk drives, two 500 GB and two 1 TB.
It looks like this:
CentOS release 5.9 (Final)
Disk /dev/sda: 500.1 GB, 500107862016 bytes
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
Disk /dev/sdc: 500.1 GB, 500107862016 bytes
Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
=================================================================
Disk /dev/sda: 500.1 GB, 500107862016 bytes
2012 Jun 19
1
CentOS 6.2 on partitionable mdadm RAID1 (md_d0) - kernel panic with either disk not present
Environment:
CentOS 6.2 amd64 (min. server install)
2 virtual hard disks of 10GB each
Linux KVM
Following the instructions on CentOS Wiki
<http://wiki.centos.org/HowTos/Install_On_Partitionable_RAID1> I
installed a min. server in Linux KVM setup (script shown below)
<script>
#!/bin/bash
nic_mac_addr0=00:07:43:53:2b:bb
kvm \
-vga std \
-m 1024 \
-cpu core2duo \
-smp 2,cores=2 \
2015 Feb 19
3
iostat a partition
Hey guys,
I need to use iostat to diagnose a disk latency problem we think we may be
having.
So if I have this disk partition:
[root at uszmpdblp010la mysql]# df -h /mysql
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/MysqlVG-MysqlVol
9.9G 1.1G 8.4G 11% /mysql
And I want to correlate that to the output of fdisk -l, so that I can feed
the disk
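An LVM volume such as /dev/mapper/MysqlVG-MysqlVol is a device-mapper node, so iostat reports it as dm-N rather than under any name fdisk shows. A sketch of finding the mapping and the backing disk (the names come from the post; dm-3 and sda in the iostat line are only examples):

```shell
# which dm-N is the MySQL volume?  the minor number in major:minor is the N
dmsetup info -c | grep MysqlVol

# which physical disk actually holds the volume?
lvs -o +devices MysqlVG/MysqlVol

# extended stats every 5 seconds for the volume and its backing disk
iostat -x 5 dm-3 sda
```

Watching both lines together separates latency introduced by the LVM/dm layer from latency on the underlying spindle.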
2007 Aug 08
0
Quick query about LVM in 4.5
Howdy,
Does anyone know if anything has changed with the LVM system from CentOS 4.4 to CentOS 4.5?
I'm having kind of a funky issue.
I've mounted LVM partitions manually quite a few times and I've never had this issue before:
--- Logical volume ---
LV Name /dev/VolGroup00/LogVol00
VG Name VolGroup00
LV UUID
2011 Apr 29
2
how to access lvm inside lvm
I have a CentOS 5.6 server that has Xen domUs installed on their own
logical volumes. These logical volumes contain their own volume groups
and again their own logical volumes. I want to access the domU logical
volumes and tried this:
[root at kr ~]# fdisk -l /dev/VolGroup00/LogVol02
Disk /dev/VolGroup00/LogVol02: 274.8 GB, 274877906944 bytes
255 heads, 63 sectors/track, 33418 cylinders
Units =
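fdisk on the LV shows the domU's partition table, but those nested partitions have no device nodes of their own, so LVM can't see the inner physical volumes. kpartx can create the nodes, after which the nested VG becomes visible; a sketch (the nested VG name "GuestVG" is a placeholder -- vgscan will report the real one):

```shell
# create /dev/mapper/LogVol02p1, p2, ... for the partitions inside the LV
kpartx -av /dev/VolGroup00/LogVol02

# scan for the volume group living on those new nodes and activate it
vgscan
vgchange -ay GuestVG            # placeholder name; use what vgscan reported

# mount one of the guest's logical volumes, read-only to be safe
mount -o ro /dev/GuestVG/LogVol00 /mnt

# reverse the whole thing when finished
umount /mnt
vgchange -an GuestVG
kpartx -d /dev/VolGroup00/LogVol02
```

Doing this while the domU is running risks corrupting its filesystems, hence the read-only mount.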
2009 Apr 24
3
extend raid volume - new drive
Hi there, I have a system with the following:
# fdisk -l
Disk /dev/sda: 80.0 GB, 80000000000 bytes
255 heads, 63 sectors/track, 9726 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 9471 75971385 83 Linux
/dev/sda3
2009 Nov 12
1
kernel not booting after update
Hi all,
I'm having a strange problem in which a certain box won't boot
any kernel newer than 2.6.18-53.
I have a kickstart setup that installs a CentOS 5.1 base (which comes
with kernel 2.6.18-53), and then I do a "yum update" to 5.3.
However, when 2.6.18-164 gets installed, the box is rebooted, and it
dumps me in a grub prompt. If I manually enter root, kernel, initrd and
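For reference, the manual boot sequence described looks roughly like this (the kernel version is the 2.6.18-164 from the post; the root device is an assumed example, TAB completion at the grub prompt shows the real names under /boot):

```
grub> root (hd0,0)
grub> kernel /vmlinuz-2.6.18-164.el5 ro root=/dev/VolGroup00/LogVol00
grub> initrd /initrd-2.6.18-164.el5.img
grub> boot
```

When typing this works but the automatic boot doesn't, the usual suspects are a stale default entry in /boot/grub/grub.conf or GRUB's stages installed on a different disk than the one the BIOS boots from; rerunning grub-install against the BIOS boot disk typically fixes the latter.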