Displaying 20 results from an estimated 3000 matches similar to: "RAID-10 vs Nested (RAID-0 on 2x RAID-1s)"
2014 Apr 07
3
Software RAID10 - which two disks can fail?
Hi All.
I have a server which uses RAID10 made of 4 partitions for / and boots from
it. It looks like so:
mdadm -D /dev/md1
/dev/md1:
Version : 00.90
Creation Time : Mon Apr 27 09:25:05 2009
Raid Level : raid10
Array Size : 973827968 (928.71 GiB 997.20 GB)
Used Dev Size : 486913984 (464.36 GiB 498.60 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 1
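A hedged aside on the question: in a 4-device raid10 with the default near=2 layout, consecutive raid devices form the mirror pairs, so the array survives losing one disk from each pair but not both disks of the same pair. A minimal sketch for checking this, assuming the array name from the post:
  # Show the layout and the role number of each member disk
  mdadm -D /dev/md1 | grep -E 'Layout|Raid Devices|/dev/'
  # With "Layout : near=2", raid devices 0+1 are one mirror pair and 2+3 the other.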
2012 Feb 06
1
Unknown KERNEL Warning in boot messages
CentOS Community,
Would someone who is familiar with reading boot messages and kernel
errors be able to advise me on what the following errors in dmesg
might mean? They seem to come up randomly towards the end of the
logfile.
------------[ cut here ]------------
WARNING: at arch/x86/kernel/cpu/mtrr/generic.c:467
generic_get_mtrr+0x11e/0x140() (Not tainted)
Hardware name: empty
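Not an answer to the warning itself, but as a hedged first step the MTRR ranges the kernel is complaining about can be inspected from userspace:
  # List the memory type range registers programmed by the BIOS/kernel
  cat /proc/mtrr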
2011 Jul 22
0
Strange problem with LVM, device-mapper, and software RAID...
Running on an up-to-date CentOS 5.6 x86_64 machine:
[heller@ravel ~]$ uname -a
Linux ravel.60villagedrive 2.6.18-238.19.1.el5 #1 SMP Fri Jul 15 07:31:24 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
with a TYAN Computer Corp S4881 motherboard, which has a nVidia 4
channel SATA controller. It also has a Marvell Technology Group Ltd.
88SX7042 PCI-e 4-port SATA-II (rev 02).
This machine has a 120G
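As an aside, for problems in an LVM-on-md stack like this, a first pass is usually to dump the state of each layer; a sketch that assumes nothing about the actual volume or array names:
  cat /proc/mdstat        # md arrays and their member disks
  pvs; vgs; lvs           # LVM physical volumes, volume groups, logical volumes
  dmsetup ls --tree       # how device-mapper stacks the devices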
2012 Sep 04
1
Recover software raid 10 on CentOS 6.3
The setup is four 1TB drives running RAID10. I was using the GNOME Disk
Utility to verify the integrity of the array (which is a 500MB mirrored md0
and the rest a RAID10 md1).
I believe one of the drives is bad, but prior to the system going offline it
showed that 2 drives were detached from the array but healthy. A reboot
results in a panic complaining that not enough mirrors are available.
Is there a
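One common approach in this situation, offered only as a sketch (the /dev/sd?? names are placeholders, not taken from the post): compare the event counters recorded in the members' superblocks, then force-assemble with the freshest ones.
  # Check event counters and array state recorded on each member
  mdadm --examine /dev/sd[abcd]2 | grep -E 'Events|State|Device Role'
  # If the "detached but healthy" members are only slightly behind, a forced
  # assemble may bring the array back (with some risk on the stale members)
  mdadm --assemble --force /dev/md1 /dev/sd[abcd]2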
2002 Mar 02
4
ext3 on Linux software RAID1
Everyone,
We just had a pretty bad crash on one of our production boxes, and the ext2
filesystem on its data partition suffered some major
corruption. Needless to say, I am now looking into converting the
filesystem to ext3 and I have some questions regarding ext3 and Linux
software RAID.
I have read that previously there were some issues running ext3 on a
software raid device
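For reference, the in-place ext2 to ext3 conversion itself is just adding a journal and updating fstab; a minimal sketch, assuming the data partition lives on /dev/md0:
  # Add a journal to the existing ext2 filesystem
  tune2fs -j /dev/md0
  # Then change the filesystem type for that mount from ext2 to ext3 in /etc/fstab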
2013 Jan 04
2
Syslinux 5.00 - Doesn't boot my system / Not passing the kernel options to the kernel?
Hi,
I have encountered a problem with Syslinux 5.00 that I cannot really describe, so I
created two small videos:
Booting with Syslinux 5.00 (1.3 MB):
<https://www.dropbox.com/s/b6g8cdf2t9v48c6/boot-syslinux5-fail.mp4>
How I fixed the problem by downgrading to Syslinux 4.06 and how booting
should look (6.5 MB):
<https://www.dropbox.com/s/lt7cpgfm0qvqtba/boot-syslinux5-how-i-fixed-it.mp4>
2015 May 09
4
Bug#784810: Xen domU tries to access dom0 LVM Volume Group
Package: xen-hypervisor-4.4-amd64
Version: 4.4.1-9
On a fresh installation of Debian Jessie, when I try to start a domU, it
tries to access a dom0 LVM Volume Group:
[...] (I put all the boot at the end)
Begin: Loading essential drivers ... done.
Begin: Running /scripts/init-premount ... done.
Begin: Mounting root file system ... Begin: Running /scripts/local-top
... Begin: Assembling all MD
2007 Apr 02
2
Software RAID 10?
Hello...
I have a server with 4 x SCSI drives and I would like to install CentOS
4 (or 5) onto a software RAID 10 array. Do you know if this is
possible? I noticed that under the CentOS 4.92 beta, RAID 5 is an option
but for some reason RAID 10 is not listed.
There does appear to be a RAID 10 module....
/lib/modules/2.6.9-42.0.8.ELsmp/kernel/drivers/md/raid10.ko
More info I found here:
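As an aside, if the installer will not offer RAID 10 directly, one workaround used at the time was to install onto a plain RAID 1 (or non-RAID) layout and build the RAID 10 array afterwards for the data partitions; a sketch with placeholder device names:
  # Build a 4-member RAID 10 array post-install; /dev/sd[abcd]2 are placeholders
  mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[abcd]2
  mkfs.ext3 /dev/md1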
2011 Jun 24
1
How long should resize2fs take?
Hullo!
First mail, sorry if this is the wrong place for this kind of
question. I realise this is a "piece of string" type question.
tl;dr version: I have a resize2fs shrinking an ext4 filesystem from
~4TB to ~3TB and it's been running for ~2 days. Is this normal?
Strace shows lots of:-
lseek(3, 42978250752, SEEK_SET) = 42978250752
read(3,
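As an aside, not a direct answer on timing, but for future shrinks resize2fs can report its own progress, which makes a multi-day run less opaque; a sketch with a placeholder device name:
  # -p prints progress bars for each pass of an offline (unmounted) resize
  resize2fs -p /dev/mapper/vg-data 3T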
2009 Dec 10
3
raid10, centos 4.x
I just created a 4-drive mdadm --level=raid10 array on a CentOS 4.8-ish system
here, and shortly thereafter remembered I hadn't updated it in a while,
so I ran yum update...
While installing/updating stuff, I got these errors:
Installing: kernel #######################
[14/69]
raid level raid10 (in /proc/mdstat) not recognized
...
Installing: kernel-smp
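That message usually means the initrd tooling does not know about the raid10 personality; one possible workaround on CentOS 4/5 era systems, sketched with a placeholder kernel version, is to rebuild the initrd with the module forced in:
  # Rebuild the initrd for the newly installed kernel, explicitly including raid10
  mkinitrd --with=raid10 -f /boot/initrd-2.6.9-89.EL.img 2.6.9-89.EL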
2014 Dec 04
2
DegradedArray message
Thanks for all the responses. A little more digging revealed:
md0 is made up of two 250G disks on which the OS and a very large /var
partition reside for a number of virtual machines.
md1 is made up of two 2T disks on which /home resides.
The challenge is that disk 0 of md0 is the problem disk, and it has a 524M /boot
partition outside of the RAID partition.
My plan is to back up /home (md1) and at a
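For the md0 half of this, the usual member-replacement sequence, sketched with placeholder device names (sda = failing disk, sdb = its healthy mirror), looks roughly like:
  mdadm /dev/md0 --fail /dev/sda2 --remove /dev/sda2   # drop the failing member
  # after swapping the physical disk, copy the partition table from the good disk
  sfdisk -d /dev/sdb | sfdisk /dev/sda
  mdadm /dev/md0 --add /dev/sda2                        # resync onto the new disk
  # /boot lives outside the array here, so it has to be copied and grub reinstalled separately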
2007 May 07
5
Anaconda doesn't support raid10
So after troubleshooting this for about a week, I was finally able to
create a raid 10 device by installing the system, copying the md modules
onto a floppy, and loading the raid10 module during the install.
Now the problem is that I can't get it to show up in anaconda. It
detects the other arrays (raid0 and raid1) fine, but the raid10 array
won't show up. Looking through the logs
2007 Apr 24
2
setting up CentOS 5 with Raid10
I would like to set up CentOS on 4 SATA hard drives that I want to
configure in RAID10. I read somewhere that RAID10 support is in the
latest kernel, but I can't seem to get anaconda to let me create it. I
only see raid 0, 1, 5, and 6.
Even when I tried to set up raid5 or raid1, it would not let me put the
/boot partition on it, and I thought that this was now possible.
Is it
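A hedged note on the /boot part: installers of that era generally required /boot on a RAID1 array with old-style metadata so the bootloader could read it, with the RAID10 built from the remaining space. A sketch with placeholder partition names:
  # /boot on a small 4-way RAID1 (readable by grub), everything else on RAID10
  mdadm --create /dev/md0 --level=1 --metadata=0.90 --raid-devices=4 /dev/sd[abcd]1
  mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[abcd]2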
2020 Sep 18
4
Drive failed in 4-drive md RAID 10
I got the email that a drive in my 4-drive RAID10 setup failed. What are my
options?
Drives are WD1000FYPS (Western Digital 1 TB 3.5" SATA).
mdadm.conf:
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/root level=raid10 num-devices=4
UUID=942f512e:2db8dc6c:71667abc:daf408c3
/proc/mdstat:
Personalities : [raid10]
md127 : active raid10 sdf1[2](F)
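A common first step, sketched with the device name visible in the mdstat excerpt (the replacement name /dev/sdg1 is a placeholder):
  mdadm --detail /dev/md127   # confirm which slot sdf1 held and the overall array state
  smartctl -a /dev/sdf        # check whether the drive really is dying
  # if it is, hot-remove the failed member and add the replacement; md resyncs automatically
  mdadm /dev/md127 --remove /dev/sdf1
  mdadm /dev/md127 --add /dev/sdg1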
2011 Aug 30
3
OT - small hd recommendation
A little OT - but I've seen a few opinions voiced here by various admins
and I'd like to benefit.
Currently running a single combined server for multiple operations -
fileserver, mailserver, webserver, virtual server, and whatever else
pops up. Current incarnation of the machine, after the last rebuild, is
an AMD Opteron 4180 with a Supermicro MB using ATI SB700 chipset - which
2017 Oct 17
1
lvconvert(split) - raid10 => raid0
hi guys, gals
do you know if conversion from lvm's raid10 to raid0 is
possible?
I'm fiddling with --splitmirrors but it gets me nowhere.
On "takeover" subject man pages says: "..between
striped/raid0 and raid10."" but no details, nowhere I could
find documentation, nor a howto.
many thanks, L.
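For what it is worth, on LVM versions whose lvmraid(7) actually lists this takeover, the invocation would presumably be a direct type change rather than --splitmirrors; this is only a guess at the syntax, not something verified on raid10 (vg/lv is a placeholder):
  lvconvert --type raid0 vg/lv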
2010 Sep 25
3
Raid 10 questions...2 drive
I have been reading lots of stuff, trying to find out if a 2-drive raid10
setup is any better/worse than a normal raid1 setup. I have two 1TB drives
for my data and a separate system drive; I am only interested in doing RAID
on my data...
So I set up my initial test like this:
mdadm -v --create /dev/md0 --chunk 1024 --level=raid10 --raid-devices=2
/dev/sdb1 /dev/sdc1
I have also read
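One hedged aside: with only two drives, a raid10 in the default near layout stores data much like a raid1, while the far layout changes the on-disk placement and tends to give raid0-like sequential reads. A sketch of the same create command with the far layout:
  mdadm -v --create /dev/md0 --chunk 1024 --level=raid10 --layout=f2 --raid-devices=2 /dev/sdb1 /dev/sdc1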
2023 Mar 30
1
Performance: lots of small files, hdd, nvme etc.
Well, you have *way* more files than we do... :)
On 30/03/2023 11:26, Hu Bert wrote:
> Just an observation: is there a performance difference between a sw
> raid10 (10 disks -> one brick) or 5x raid1 (each raid1 a brick)
Err... RAID10 is not 10 disks unless you stripe 5 mirrors of 2 disks.
> with
> the same disks (10TB hdd)? The heal processes on the 5xraid1-scenario
>
2015 May 09
2
Bug#784810: Bug#784810: Xen domU tries to access dom0 LVM Volume Group
On 09/05/2015 13:25, Ian Campbell wrote:
> On Sat, 2015-05-09 at 03:41 +0200, Romain Mourier wrote:
> [...]
>> xen-create-image --hostname=test0 --lvm=raid10 --fs=ext4
>> --bridge=br-lan --dhcp --dist=jessie
> [...]
>> root@hv0:~# xl create /etc/xen/test0.cfg && xl console test0
> What does /etc/xen/test0.cfg contain? I suspect it is reusing the dom0
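For context, the disk stanza xen-create-image writes into test0.cfg normally maps the domU's logical volumes through phy:; a sketch of what such lines look like (the LV names and device ordering are illustrative, not taken from the report, though the VG name matches the --lvm=raid10 option above):
  disk = [ 'phy:/dev/raid10/test0-disk,xvda2,w',
           'phy:/dev/raid10/test0-swap,xvda1,w' ]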
2013 Mar 28
1
question about replacing a drive in raid10
Hi all,
I have a question about replacing a drive in raid10 (and linux kernel 3.8.4).
A bad disk was physically removed from the server. After this a new disk
was added with "btrfs device add /dev/sdg /btrfs" to the raid10 btrfs
FS.
After this the server was rebooted and I mounted the filesystem in
degraded mode. It seems that a previously started balance continued.
At this point I want to
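For the general shape of this procedure: after adding a new device to a degraded btrfs raid10, the missing device still has to be dropped and the data rebuilt onto the new one; a sketch using the mount point from the post:
  # remove the record of the physically absent disk; btrfs then restores the missing copies
  btrfs device delete missing /btrfs
  # optionally rebalance to spread data evenly across the devices
  btrfs balance start /btrfs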