Displaying 20 results from an estimated 2000 matches similar to: "centos raid 1 question"
2006 Dec 27
1
Software RAID1 issue
When a new CentOS-4.4 system is built, the swap partition is always
reversed...
Note md3 below, the raidtab is OK, I have tried various raid commands to
correct.
swapoff -a
raidstop /dev/md3
mkraid /dev/md3 --really-force
swapon -a
And then I get a proper output for /proc/mdstat,
but when I reboot /proc/mdstat again reads as below, with md3 [0] [1]
reversed.
[root]# cat /proc/mdstat
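raidstop and mkraid come from the long-obsolete raidtools package; CentOS 4 ships mdadm. A minimal sketch of the same sequence in mdadm form, printed via a run() wrapper rather than executed so the order can be inspected first (the member partitions hda6/hdc6 are assumptions, not taken from the post):

```shell
# Prints each command instead of running it; drop the echo to apply.
run() { echo "+ $*"; }

run swapoff -a
run mdadm --stop /dev/md3
run mdadm --assemble /dev/md3 /dev/hda6 /dev/hdc6
run mkswap /dev/md3
run swapon -a
```

If the member order still flips at boot, the persistent fix is usually in the array's superblock/raidtab ordering rather than in the assemble command itself.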
2011 Feb 14
2
rescheduling sector linux raid ?
Hi List,
What does this mean?
md: syncing RAID array md0
md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
md: using maximum available idle IO bandwidth (but not more than
200000 KB/sec) for reconstruction.
md: using 128k window, over a total of 2096384 blocks.
md: md0: sync done.
RAID1 conf printout:
--- wd:2 rd:2
disk 0, wo:0, o:1, dev:sda2
disk 1, wo:0, o:1, dev:sdb2
sd 0:0:0:0:
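The "minimum _guaranteed_" 1000 KB/sec and the 200000 KB/sec ceiling in this log are the md resync throttles, tunable via sysctl. A sketch of pinning them in sysctl.conf, shown with the kernel defaults (the same numbers the log prints), not a recommendation:

```
# /etc/sysctl.conf fragment -- md resync throttling
dev.raid.speed_limit_min = 1000      # guaranteed floor, KB/sec per disk
dev.raid.speed_limit_max = 200000    # ceiling when the disks are otherwise idle
```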
2005 Nov 22
1
gentoo as dom0 on xen fails...
I wanted to boot Gentoo on Xen, but it doesn't work.
What can I do?
Booting 'Xen 3.0.0 / Linux 2.6.12.5'
root (hd0,0)
Filesystem type is ext2fs, partition type 0x83
kernel /xen.gz dom0_mem=131072 com1=115200,8n1
[Multiboot-elf, <0x100000:0x641cc:0x27e34>, shtab=0x18c078, entry=0x100000]
module /vmlinuz-2.6.12.5-xen-0 root=/dev/md0
2010 Feb 28
3
puzzling md error ?
this has never happened to me before, and I'm somewhat at a loss. Got an
email from the cron thing...
/etc/cron.weekly/99-raid-check:
WARNING: mismatch_cnt is not 0 on /dev/md10
WARNING: mismatch_cnt is not 0 on /dev/md11
OK, md10 and md11 are each RAID1s made from 2 x 72GB SCSI drives, on a
Dell 2850 or similar dual single-core 3GHz server.
these two md's are in
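The weekly raid-check script reads each array's mismatch_cnt from sysfs after a "check" pass; a non-zero count on a data RAID1 is usually followed by a "repair" pass. A sketch of that logic, using a scratch file in place of the real sysfs node so it runs anywhere (on a live system the path is /sys/block/md10/md/mismatch_cnt):

```shell
# Scratch file stands in for /sys/block/md10/md/mismatch_cnt.
sysfs=/tmp/mismatch_cnt.demo
echo 128 > "$sysfs"            # pretend the check pass found 128 mismatched sectors

count=$(cat "$sysfs")
if [ "$count" -ne 0 ]; then
    # On the real box: echo repair > /sys/block/md10/md/sync_action
    echo "WARNING: mismatch_cnt is $count, a 'repair' pass is needed"
fi
```

After the repair completes, a fresh "check" pass should bring the count back to 0; a count that keeps returning points at the hardware.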
2004 Jul 02
1
Samba NFS Fedora Core 2 and Software Raid -- Ext3 fs got corrupted???
I'm running:
Fedora Core 2 (2.6.6.1-435)
Samba 3.0.3-5
My shared (raid1 mirror) data directory is:
/dev/md3 (hda6,hdc6) mounted as Ext3 to /sites
This is shared to 300 users as an nfs mount point to their Digital Unix
workstations as well as a Samba share to their W2k PC's.
My users just reported a bunch of read-only error messages. It turns out
the file system was corrupted. It said
2007 Aug 21
0
Software RAID1 or Hardware RAID1 with Asterisk (Vidura Senadeera)
>
>
Dear all,
Thanks for the great explanation regarding software/hardware RAID. It
would be better to put these details on the voip-info.org/wiki pages.
Thanks a lot again.
Regs,
Vidura Senadeera.
======================================
Dear All,
> >
> > I would like to get community's feedback with regard to RAID1 ( Software
> or
> > Hardware) implementations with asterisk.
> >
2011 Mar 20
2
task md1_resync:9770 blocked for more than 120 seconds and OOM errors
Hello,
yesterday night I had a problem with
my server, hosted at a provider (strato.de).
I couldn't ssh to it and over the remote serial console
I saw "out of memory" errors (sorry, don't have the text).
Then I had to reinstall CentOS 5.5/64 bit + all my setup (2h work),
because I have a contract with a social network and
they will shut down my little card game if it is not
2007 Aug 21
6
Software RAID1 or Hardware RAID1 with Asterisk
Dear All,
I would like to get the community's feedback on RAID1 (software or
hardware) implementations with Asterisk.
This is my setup
Motherboard with SATA RAID1 support
CentOS 4.4
Asterisk 1.2.19
Libpri/zaptel latest release
2.8 Ghz Intel processor
2 80 GB SATA Hard disks
256 MB RAM
digium PRI/E1 card
Following are the concerns I am having
I'm planning to put this Asterisk
2007 Apr 25
2
Raid 1 newbie question
Hi
I have a Raid 1 centos 4.4 setup and now have this /proc/mdstat output:
[root@server admin]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 hdc2[1] hda2[0]
1052160 blocks [2/2] [UU]
md1 : active raid1 hda3[0]
77023552 blocks [2/1] [U_]
md0 : active raid1 hdc1[1] hda1[0]
104320 blocks [2/2] [UU]
What happens with md1 ?
My dmesg output is:
[root@
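What happened to md1: [2/1] means only one of its two devices is active, and [U_] marks the missing mirror half, so md1 is running degraded on hda3 alone. A small sketch that flags such arrays, run here against the mdstat output quoted above (embedded as a sample file; on a real system you would read /proc/mdstat directly):

```shell
# Flag any md array whose status field ([UU], [U_], ...) contains '_'.
cat > /tmp/mdstat.sample <<'EOF'
Personalities : [raid1]
md2 : active raid1 hdc2[1] hda2[0]
      1052160 blocks [2/2] [UU]
md1 : active raid1 hda3[0]
      77023552 blocks [2/1] [U_]
md0 : active raid1 hdc1[1] hda1[0]
      104320 blocks [2/2] [UU]
EOF
awk '/^md/ { dev = $1 }
     /\[[U_]+\]/ && /_/ { print dev " is degraded" }' /tmp/mdstat.sample
```

The usual remedy is to re-add the missing member with mdadm (here presumably the hdc partition that paired with hda3) and let it resync.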
2010 Jul 01
1
Superblock Problem
Hi all,
After rebooting my CentOS 5.5 server, I have the following message:
==================================
Red Hat nash version 5.1.19.6 starting
EXT3-fs: unable to read superblock
mount: error mounting /dev/root on /sysroot as ext3: invalid argument
setuproot: moving /root failed: No such file or directory
setuproot: error mounting /proc: No such file or directory
setuproot: error mounting
2001 Aug 19
1
Question About Relocating Journal on Other Device
Hi. I'm using e2fsprogs 1.23 on a Roswell system with a patched 2.4.7
kernel (using patch ext3-2.4-0.9.5-247) and am trying to create an ext3
filesystem whose journal is located on another device. The incantation I'm
trying is:
mke2fs -j -J device=/dev/hdc2 /dev/hdc3
I keep getting an error along the lines of "mke2fs: Journal superblock not
found". I've tried creating
2007 Dec 01
2
Looking for Insights
Hi Guys,
I had a strange problem yesterday and I'm curious as to what everyone
thinks.
I have a client with a Red Hat Enterprise 2.1 cluster. All quality HP
equipment with an MSA 500 storage array acting as the shared storage
between the two nodes in the cluster.
This cluster is configured for reliability and not load balancing. All
work is handled by one node or the other not both.
2010 Aug 25
3
What does this warning message (from optim function) mean?
Hi R users,
I am trying to use the optim function to maximize a likelihood function, and
I got the following warning messages.
Could anyone explain to me what message 31 means exactly? Is it a cause for
concern?
Since the value of convergence turns out to be zero, it means that
convergence was successful, right?
So can I assume that the parameter estimates generated thereafter are
reliable MLE
2012 Jul 22
1
btrfs-convert complains that fs is mounted even if it isn't
Hi,
I'm trying to run btrfs-convert on a system that has three raid
partitions (boot/md1, swap/md2 and root/md3). When I boot a rescue
system from md1, and try to run "btrfs-convert /dev/md3", it complains
that /dev/md3 is already mounted, although it definitely is not. The
only partition mounted is /dev/md1 because of the rescue system. When I
replicate the setup in a
2009 Sep 19
3
How does LVM decide which Physical Volume to write to?
Hi everyone.
This isn't specifically a CentOS question, since it could apply to
any distro, but I hope someone can answer it anyway.
I took the following steps but was puzzled by the outcome of the test
at the end:
1. Create a RAID1 array called md3 with two 750GB drives
2. Create a RAID1 array called md9 with two 500GB drives
3. Initialise md3 then md9 as physical volumes (pvcreate)
4.
2007 Aug 27
3
mdadm --create on Centos5?
Is there some new trick to making raid devices on Centos5?
# mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdc1
mdadm: error opening /dev/md3: No such file or directory
I thought that worked on earlier versions. Do I have to do something
udev related first?
--
Les Mikesell
lesmikesell at gmail.com
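On CentOS 5, udev no longer pre-creates the /dev/md* nodes, which is why mdadm reports "No such file or directory"; --auto=yes has mdadm create the node itself. A sketch of the corrected command, printed rather than executed, with the second member written as /dev/sdb1 on the assumption that /dev/sdc1 appearing twice in the quoted command was a paste slip:

```shell
# Printed only; remove the echo and quotes to run it for real.
echo 'mdadm --create /dev/md3 --auto=yes --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1'
```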
2008 Feb 25
2
ext3 errors
I recently set up a new system to run backuppc on centOS 5 with the
archive stored on a raid1 of 750 gig SATA drives created with 3 members
with one specified as "missing". Once a week I add the 3rd partition,
let it sync, then remove it. I've had a similar system working for a
long time using a firewire drive as the 3rd member, so I don't think the
raid setup is the cause
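The weekly add/sync/remove cycle described above can be sketched as follows, printed via a show() wrapper rather than executed; /dev/md0 and /dev/sdc1 are assumptions, not from the post:

```shell
# Prints each command instead of running it; drop the echo to apply.
show() { echo "+ $*"; }

show mdadm /dev/md0 --add /dev/sdc1               # third member joins, resync starts
show mdadm --wait /dev/md0                        # block until the resync finishes
show mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
```

mdadm --wait makes the remove safe to script, since failing the member mid-resync would leave the rotated copy inconsistent.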
2004 Dec 15
17
Kernel panic - not syncing: Attempted to kill init!
I am trying to create an additional domain and have created a
configuration file based on the examples. When I try to boot the
domain, it eventually hits a Kernel panic, as follows:
Red Hat nash version 4.1.18 starting
Mounted /proc filesystem
Mounting sysfs
Creating /dev
Starting udev
Creating root device
Mounting root filesystem
mount: error 6 mounting ext3
mount: error 2 mounting none
2007 Oct 17
2
Hosed my software RAID/LVM setup somehow
CentOS 5 with the original kernels (Xen and normal) and everything, Linux RAID 1.
I rebooted one of my machines after doing some changes to RAID/LVM and now
the two RAID partitions that I made changes to are "gone". I cannot boot
into the system.
On bootup it tells me that the devices md2 and md3 are busy or mounted and
drops me to the repair shell. When I run fs check manually it just tells
2018 Apr 30
1
Gluster rebalance taking many years
I cannot count the number of files the normal way.
From df -i, the approximate number of files is 63694442
[root@CentOS-73-64-minimal ~]# df -i
Filesystem      Inodes    IUsed     IFree IUse% Mounted on
/dev/md2     131981312 30901030 101080282   24% /
devtmpfs       8192893      435   8192458    1% /dev
tmpfs