Displaying 20 results from an estimated 20000 matches similar to: "enter run level"
2011 Apr 01
5
question on software raid
dmesg is not reporting any issues.
The /proc/mdstat looks fine.
md0 : active raid1 sdb1[1] sda1[0]
X blocks [2/2] [UU]
however /var/log/messages says:
smartd[3392]: Device: /dev/sda, 20 Offline uncorrectable sectors
The machine is running fine and the RAID array looks good - so what
is up with smartd?
Thanks,
Jerry
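A note for anyone landing on this thread: smartd reads the drive's own
SMART counters, which md never looks at, so an array can show [2/2] [UU]
while one member is already failing. A minimal cross-check, assuming the
device name from the log above:

  # Full SMART report for the flagged disk
  smartctl -a /dev/sda
  # Start a long self-test; read the results later with 'smartctl -l selftest /dev/sda'
  smartctl -t long /dev/sda

Offline uncorrectable sectors tend to grow over time; the usual move is to
replace the drive and let md rebuild onto the new member.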
2020 Nov 15
5
(C8) root on mdraid
Hello everyone.
I'm trying to install CentOS 8 with root and swap partitions on
software raid. The plan is:
- create md0 raid level 1 with 2 hard drives: /dev/sda and /dev/sdb,
using a Linux Rescue CD,
- install CentOS 8 with Virtual Box on my laptop,
- rsync the CentOS 8 root partition to /dev/md0p1,
- chroot in CentOS 8 root partition,
- configure /etc/mdadm.conf, grub.cfg, initramfs, install
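A rough sketch of the first step in that plan, run from the rescue shell
(drive names as in the plan; mdadm's default 1.2 metadata assumed):

  # Build the mirror from the two whole disks; partitions on it will
  # appear as /dev/md0p1, /dev/md0p2, ...
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
  # Record the array so the initramfs can assemble it at boot
  mdadm --detail --scan >> /etc/mdadm.conf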
2010 May 21
1
Grub Error 22; no Windows
Hello,
I have a GridEngine setup with 5 subnodes and two RAIDs attached. I backed
up the OS drive - 120GB - to an external hard drive - 500GB - using
ddrescue. The OS drive is partitioned as:
sda1 has the OS and is about 7 GB
sda2 has /var and is about 4 GB
sda3 has swap and is about 1 GB
After backing up, there were 4KB of errors, but all at the end of the disk
around 118GB. This used to be
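For later readers: GRUB legacy's Error 22 means "no such partition", which
commonly appears when the cloned disk's partition table no longer matches
what GRUB recorded. One way to re-point it from a rescue boot (disk and
partition names are assumptions based on the layout above):

  # In the GRUB legacy shell:
  grub> root (hd0,0)    # the partition holding /boot, here sda1
  grub> setup (hd0)     # rewrite stage1 into the MBR of the first disk
  grub> quit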
2017 Aug 18
4
Problem with softwareraid
Hello all,
I have already had a discussion on the software RAID mailing list and I
want to switch to this one :)
I am having a really strange problem with my md0 device running
CentOS 7. After a restart of my server the md0 was gone. Now, after
trying to find the problem, I detected the following:
booting any installed kernel gives me NO md0 device (ls /dev/md*
doesn't give anything). a 'cat
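When an array vanishes like this, a manual assembly attempt usually
narrows the problem down quickly (member names are assumptions):

  # Try to assemble everything the member superblocks describe
  mdadm --assemble --scan
  # Inspect the metadata each member actually carries
  mdadm --examine /dev/sda1 /dev/sdb1
  cat /proc/mdstat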
2009 Dec 31
3
Lost mdadm.conf
Hi,
I lost my mdadm.conf (and /proc/mdstat shows nothing useful) and I'd like
to mount the filesystem again. So I've booted using rescue, but I was
wondering if I can run a command like this safely (i.e. without losing the
data previously stored).
mdadm -C /dev/md0 --level=raid0 --raid-devices=2 /dev/sda1 /dev/sdb1
Where of course the raid devices and the /dev/x are the correct ones
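Worth flagging for anyone finding this later: mdadm -C writes fresh
superblocks, so --assemble is the safer first attempt, and mdadm.conf can
then be regenerated rather than recreated by hand:

  # Assemble from the existing on-disk superblocks
  mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
  # Rebuild a minimal config from what is now running
  mdadm --detail --scan >> /etc/mdadm.conf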
2012 Oct 25
2
fsck.ext4 problem 64bit
Hi All,
Trying to run fsck on a local linux raid partition gave the following.
[root@... /]# fsck.ext4 /dev/md0
e2fsck 1.41.12 (17-May-2010)
/dev/md0 has unsupported feature(s): 64bit
e2fsck: Get a newer version of e2fsck!
Odd, as the server is 64-bit, running the latest kernel and using the
latest "e2fsprogs.x86_64".
Any ideas would be much appreciated.
Cheers Steve
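The stock e2fsprogs 1.41.x predates the ext4 64bit feature (support
arrived in 1.42), so the usual workaround is an e2fsck from a newer
e2fsprogs, for example one built from source (the install path below is
an assumption):

  # Confirm which feature flags the filesystem carries
  dumpe2fs -h /dev/md0 | grep -i features
  # Run a 1.42+ e2fsck that understands the 64bit feature
  /usr/local/sbin/e2fsck -f /dev/md0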
2008 Oct 24
1
e2fsck discrepancies
Hi,
Yesterday I ran e2fsck -n on a mounted file system and got:
/dev/sdb1 contains a file system with errors, check forced.
According to Ted, the lines that followed were not to be trusted because
the file system was mounted. But this error message suggests running a
check with the fs unmounted.
Today we scheduled a downtime and ran the check. It came out completely
clean:
~:
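The rule of thumb this thread arrives at: e2fsck -n against a mounted,
changing filesystem reports phantom errors because the on-disk state is
mid-flight. A check is only meaningful on a quiescent device, roughly:

  umount /dev/sdb1      # or boot rescue media if it's the root fs
  e2fsck -f /dev/sdb1   # full check; add -n to keep it strictly read-only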
2010 Oct 19
3
more software raid questions
hi all!
Back in August several of you assisted me in solving a problem where one
of my drives had dropped out of (or been kicked out of) the RAID1 array.
Something vaguely similar appears to have happened just a few minutes ago,
upon rebooting after a small update. I received four emails like this,
one for /dev/md0, one for /dev/md1, one for /dev/md125 and one for
/dev/md126:
Subject: DegradedArray
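For reference, the usual triage when a DegradedArray mail arrives (device
names here are placeholders):

  cat /proc/mdstat                    # which array lost which member
  mdadm --detail /dev/md0             # per-slot state of the array
  mdadm /dev/md0 --re-add /dev/sdb1   # attempt to return a kicked member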
2011 Feb 14
2
rescheduling sector linux raid ?
Hi List,
What does this mean?
md: syncing RAID array md0
md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
md: using maximum available idle IO bandwidth (but not more than
200000 KB/sec) for reconstruction.
md: using 128k window, over a total of 2096384 blocks.
md: md0: sync done.
RAID1 conf printout:
--- wd:2 rd:2
disk 0, wo:0, o:1, dev:sda2
disk 1, wo:0, o:1, dev:sdb2
sd 0:0:0:0:
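Those lines are the normal md resync log: after a boot or after a member
rejoins, md re-mirrors the array, throttled between the two speed limits
shown. Progress can be watched live:

  watch cat /proc/mdstat              # live resync progress
  cat /sys/block/md0/md/sync_action   # idle / resync / recover / check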
2019 Jan 30
2
C7, mdadm issues
Alessandro Baggi wrote:
> On 30/01/19 14:02, mark wrote:
>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>> On 29/01/19 20:42, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 18:47, mark wrote:
>>>>>> Alessandro Baggi wrote:
>>>>>>> On 29/01/19 15:03, mark wrote:
2019 Jan 30
1
C7, mdadm issues
Alessandro Baggi wrote:
> On 30/01/19 16:33, mark wrote:
>
>> Alessandro Baggi wrote:
>>
>>> On 30/01/19 14:02, mark wrote:
>>>
>>>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>>>
>>>>> On 29/01/19 20:42, mark wrote:
>>>>>
>>>>>> Alessandro Baggi wrote:
2016 Mar 12
4
C7 + UEFI + GPT + RAID1
Hi list,
I'm new with UEFI and GPT.
For several years I've used an MBR partition table. I've installed my
system on software RAID1 (mdadm) using md0 (sda1,sdb1) for swap,
md1 (sda2,sdb2) for /, and md2 (sda3,sdb3) for /home. From several
how-tos concerning RAID1 installation, I must put each partition on a
different md device. I asked a while ago whether it's more correct to create the
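A rough per-disk sketch of the GPT layout such a setup needs; sizes are
assumptions, and the EFI System Partition is typically kept as a plain
partition on each disk, since the firmware cannot read md metadata:

  # Identical layout on each disk (repeat for /dev/sdb)
  sgdisk -n1:0:+200M -t1:ef00 /dev/sda   # EFI System Partition
  sgdisk -n2:0:+2G   -t2:fd00 /dev/sda   # swap  -> md0
  sgdisk -n3:0:+30G  -t3:fd00 /dev/sda   # /     -> md1
  sgdisk -n4:0:0     -t4:fd00 /dev/sda   # /home -> md2
  # Then pair the partitions, e.g.:
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3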
2005 Oct 31
2
ext3 + fs > 2Tbyte
Hi list
this is actually a problem on a Debian system, but I thought you might
be interested to hear of it and perhaps can offer some help.
I have a woody box (dell pe750, dual cpu) running a kernel from
backports.org (debian 'testing' packages built on a 'stable' box).
The kernel version is 2.6.7-1.backports.org.1.
This host is hooked up to an Apple Xserve RAID with a 2.3Tbyte
2009 Oct 25
3
mismatch_cnt after 5.3 -> 5.4 upgrade
Saturday I did an upgrade from 5.3 (original install) to 5.4. Saturday
night, /etc/cron.weekly reported the following:
/etc/cron.weekly/99-raid-check:
WARNING: mismatch_cnt is not 0 on /dev/md0
md0 holds /boot and resides, mirrored, on sda1 and sdb1. md1 holds
an LVM volume containing the remaining filesystems, including swap.
The underlying hardware is just a few months old,
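The weekly raid-check script only reads the counter; a nonzero count on a
mirror is not necessarily data damage (swap and unused blocks are common
causes), and it can be recounted and repaired by hand:

  echo check  > /sys/block/md0/md/sync_action   # recount mismatches
  cat /sys/block/md0/md/mismatch_cnt
  echo repair > /sys/block/md0/md/sync_action   # rewrite inconsistent blocks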
2010 Jul 01
1
Superblock Problem
Hi all,
After rebooting my CentOS 5.5 server, I have the following message:
==================================
Red Hat nash version 5.1.19.6 starting
EXT3-fs: unable to read superblock
mount: error mounting /dev/root on /sysroot as ext3: invalid argument
setuproot: moving /root failed: No such file or directory
setuproot: error mounting /proc: No such file or directory
setuproot: error mounting
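That nash message usually means the root filesystem's primary superblock
cannot be read. From rescue media, a backup copy can often be used
(device name is an assumption; mke2fs -n only prints, it does not format):

  mke2fs -n /dev/sda2         # list where the backup superblocks live
                              # (use the same options as the original mkfs)
  e2fsck -b 32768 /dev/sda2   # repair using a backup superblock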
2006 Jan 19
3
ext3 fs errors 3T fs
Hello,
I looked through the archives a bit and could not find anything relevant;
if you know otherwise, please point me in the right direction.
I have a ~3T ext3 filesystem on Linux software RAID that had been behaving
correctly for some time. Not too long ago it gave the following error when
trying to mount it:
mount: wrong fs type, bad option, bad superblock on /dev/md0,
or too many
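Same family of failure as the other superblock threads here; on md the
first check is whether the array assembled at all, and only then whether a
backup superblock helps:

  cat /proc/mdstat                      # is md0 up and non-degraded?
  dumpe2fs -h /dev/md0 | head           # can the fs metadata be read at all?
  mount -o sb=131072,ro /dev/md0 /mnt   # backup superblock (4k-block fs), read-only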
2008 Jan 18
1
HowTo Recover Lost Data from LVM RAID1 ?
Guys,
The other day, while I was working on my old workstation, it froze, and
after a reboot I unexpectedly lost almost all my data.
I have a RAID1 configuration with LVM on 2 IDE HDDs.
md0 .. store /boot (100MB)
--------------------------
/dev/hda2
/dev/hdd1
md1 .. store / (26GB)
--------------------------
/dev/hda3
/dev/hdd2
The only info that was still left was what I restored after the
fresh
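For a layout like this, recovery attempts usually go bottom-up: assemble
the arrays first, then reactivate LVM on top of them before touching any
filesystem (names follow the layout above):

  mdadm --assemble --scan   # bring up md0 and md1
  pvscan && vgscan          # locate the LVM metadata on md1
  vgchange -ay              # activate the logical volumes
  lvs                       # LVs that can now be fsck'd and mounted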
2007 Sep 04
4
RAID + LVM Addition to CentOS 5 Install
Hi All,
I have what I believe to be a pretty basic LVM & RAID setup on my
CentOS 5 machine:
Raid Partitions:
/dev/sda1,sdb1
/dev/sda2,sdb2
/dev/sda3,sdb3
During the install I created a RAID 1 volume md0 out of sda1,sdb1 for
the boot partition and then added sda2,sdb2 to a separate RAID 1
volume as well (md1). I then setup md1 as a LVM physical volume for
volume group 'system'. I
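For anyone reconstructing this layout, it can be verified after install
with the standard status commands:

  mdadm --detail /dev/md0 /dev/md1   # health of both mirrors
  pvs                                # md1 as the physical volume
  vgs && lvs                         # the 'system' VG and its volumes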
2003 Aug 18
2
another seriously corrupt ext3 -- pesky journal
Hi Ted and all,
I have a couple of questions near the end of this message, but first I have
to describe my problem in some detail.
The power failure on Thursday did something evil to my ext3 file system (box
running RH9+patches, ext3, /dev/md0, raid5 driver, 400GB f/s using 3x200GB
IDE drives and one hot-spare). The f/s got badly corrupted and the
symptoms are very similar to what Eddy described
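The classic last resort for a wrecked ext3 journal, at the cost of
discarding whatever was in it, is to drop the journal, check the
filesystem as plain ext2, and add a fresh journal afterwards (device name
from the message above):

  tune2fs -O ^has_journal /dev/md0   # remove the corrupt journal
  e2fsck -f /dev/md0                 # full check, now journal-less
  tune2fs -j /dev/md0                # create a fresh journal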
2019 Jan 30
4
C7, mdadm issues
On 01/30/19 03:45, Alessandro Baggi wrote:
> On 29/01/19 20:42, mark wrote:
>> Alessandro Baggi wrote:
>>> On 29/01/19 18:47, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 15:03, mark wrote:
>>>>>
>>>>>> I've no idea what happened, but the box I was working on last week