
Displaying 20 results from an estimated 3000 matches similar to: "Raid 10 questions...2 drive"

2010 Oct 13
5
network interface question
Hi, I don't have ifcfg-eth1 in my /etc/sysconfig/network-scripts. But when I do ifconfig eth1 I can see output as below. If I do ifconfig eth12, I don't see anything, which I assume is normal. eth1 Link encap:Ethernet HWaddr 00:24:E8:44:DB:CC BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0
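A quick way to compare what the kernel sees with what network-scripts defines is sketched below; the interface name is taken from the post, and having ethtool installed is an assumption:

    ip link show                                 # interfaces the kernel knows about
    ls /etc/sysconfig/network-scripts/ifcfg-*    # interfaces that have config scripts
    ethtool eth1                                 # driver and link state of the unscripted NIC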
2010 Nov 14
3
RAID Resynch...??
So, still coming up to speed with mdadm, and I noticed this morning one of my servers acting sluggish... so when I looked at the mdadm raid device I saw this: mdadm --detail /dev/md0 /dev/md0: Version : 0.90 Creation Time : Mon Sep 27 22:47:44 2010 Raid Level : raid10 Array Size : 976759808 (931.51 GiB 1000.20 GB) Used Dev Size : 976759808 (931.51 GiB 1000.20 GB) Raid
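For reference, a minimal sketch of how to check whether the sluggishness is a resync and, if needed, throttle it (the speed value is only an example):

    cat /proc/mdstat                                   # shows resync/recovery progress and ETA
    mdadm --detail /dev/md0 | grep -iE 'state|rebuild'
    echo 10000 > /proc/sys/dev/raid/speed_limit_max    # cap the resync rate in KiB/s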
2013 Mar 04
3
RAID MD10
I'm going to rebuild my system at home soon, and was planning to mirror two drives. However, I was just looking up something about RAID, and on Wikipedia I found some information about the Linux MD driver, and "near" and "far" RAID10. Anyone have some opinions about them? mark "or should that be how many opinions do folks have about them?"
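For context, md raid10 can be created directly with a near or far layout even on two drives; a minimal sketch (device names are placeholders):

    mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=2 /dev/sda1 /dev/sdb1   # near: behaves like a classic mirror
    mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/sda1 /dev/sdb1   # far: faster sequential reads, slower writes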
2012 Jul 14
2
bug: raid10 filesystem has suddenly ceased to mount
Hi! The problem is that the BTRFS raid10 filesystem refuses to mount, without any understandable cause. Here is dmesg output: [77847.845540] device label linux-btrfs-raid10 devid 3 transid 45639 /dev/sdc1 [77848.633912] btrfs: allowing degraded mounts [77848.633917] btrfs: enabling auto defrag [77848.633919] btrfs: use lzo compression [77848.633922] btrfs: turning on flush-on-commit [77848.658879]
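A sketch of the usual first steps for a raid10 btrfs that will not assemble (the device node comes from the dmesg excerpt, the mount point is an example):

    btrfs device scan                     # register all member devices with the kernel
    btrfs filesystem show                 # check whether every devid is present
    mount -o degraded /dev/sdc1 /mnt      # matches the 'allowing degraded mounts' line above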
2023 Mar 30
1
Performance: lots of small files, hdd, nvme etc.
Well, you have *way* more files than we do... :) Il 30/03/2023 11:26, Hu Bert ha scritto: > Just an observation: is there a performance difference between a sw > raid10 (10 disks -> one brick) or 5x raid1 (each raid1 a brick) Err... RAID10 is not 10 disks unless you stripe 5 mirrors of 2 disks. > with > the same disks (10TB hdd)? The heal processes on the 5xraid1-scenario >
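For comparison, the two layouts being discussed would be created roughly like this (disk names are placeholders):

    mdadm --create /dev/md0 --level=10 --raid-devices=10 /dev/sd[b-k]1          # one 10-disk raid10 -> one brick
    mdadm --create /dev/md1 --level=1  --raid-devices=2  /dev/sdb1 /dev/sdc1    # one of five 2-disk raid1s -> five bricks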
2014 Apr 07
3
Software RAID10 - which two disks can fail?
Hi All. I have a server which uses RAID10 made of 4 partitions for / and boots from it. It looks like so: mdadm -D /dev/md1 /dev/md1: Version : 00.90 Creation Time : Mon Apr 27 09:25:05 2009 Raid Level : raid10 Array Size : 973827968 (928.71 GiB 997.20 GB) Used Dev Size : 486913984 (464.36 GiB 498.60 GB) Raid Devices : 4 Total Devices : 4 Preferred Minor : 1
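As a rule of thumb for the default near=2 layout, consecutive raid devices form the mirror pairs, so one disk from each pair can fail but not both of a pair; a sketch of how to check (array name taken from the post):

    mdadm --detail /dev/md1    # with near=2, RaidDevice 0/1 and 2/3 are the mirror pairs;
                               # newer mdadm versions also mark them as set-A / set-B
    cat /proc/mdstat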
2009 Dec 10
3
raid10, centos 4.x
I just created a 4 drive mdadm --level=raid10 on a centos 4.8-ish system here, and shortly thereafter remembered I hadn't updated it in a while, so I ran yum update... while installing/updating stuff, got these errors: Installing: kernel ####################### [14/69] raid level raid10 (in /proc/mdstat) not recognized ... Installing: kernel-smp
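A sketch of how the missing personality is usually dealt with on that vintage (the module and initrd handling are assumptions about the setup):

    modprobe raid10                      # load the raid10 personality if it is built as a module
    head -1 /proc/mdstat                 # 'Personalities : [raid10]' should now appear
    mkinitrd --with=raid10 /boot/initrd-$(uname -r).img $(uname -r)   # ensure the new kernel's initrd carries it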
2012 Mar 29
3
RAID-10 vs Nested (RAID-0 on 2x RAID-1s)
Greetings- I'm about to embark on a new installation of Centos 6 x64 on 4x SATA HDDs. The plan is to use RAID-10 as a nice combo between data security (RAID1) and speed (RAID0). However, I'm finding either a lack of raw information on the topic, or I'm having a mental issue preventing the osmosis of the implementation into my brain. Option #1: My understanding of RAID10 using 4
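For reference, the two approaches reduce to something like this (device names are placeholders):

    # nested: two RAID1 pairs striped together with RAID0
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
    mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/md1 /dev/md2
    # native: the md raid10 personality in a single array
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[a-d]1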
2007 May 07
5
Anaconda doesn't support raid10
So after troubleshooting this for about a week, I was finally able to create a raid 10 device by installing the system, copying the md modules onto a floppy, and loading the raid10 module during the install. Now the problem is that I can't get it to show up in anaconda. It detects the other arrays (raid0 and raid1) fine, but the raid10 array won't show up. Looking through the logs
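What the poster describes amounts to making the personality available before anaconda scans for arrays; a rough sketch (the module path on the floppy is an assumption):

    # on the installer's shell console (Alt+F2):
    insmod /tmp/raid10.ko        # or modprobe raid10 if the module is on the install media
    cat /proc/mdstat             # the raid10 personality should now be listed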
2007 Apr 24
2
setting up CentOS 5 with Raid10
I would like to set up CentOS on 4 SATA hard drives that I would like to configure in RAID10. I read somewhere that Raid10 support is in the latest kernel, but I can't seem to get anaconda to let me create it. I only see raid 0, 1, 5, and 6. Even when I tried to set up raid5 or raid1, it would not let me put the /boot partition on it, and I thought that this was now possible. Is it
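A common workaround from that era is to keep /boot on a small RAID1 that the bootloader can read and build the raid10 by hand; a minimal sketch (partition names are examples):

    mdadm --create /dev/md0 --level=1  --raid-devices=4 /dev/sd[a-d]1   # small 4-way mirror for /boot
    mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[a-d]2   # main raid10 for everything else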
2020 Sep 18
4
Drive failed in 4-drive md RAID 10
I got the email that a drive in my 4-drive RAID10 setup failed. What are my options? Drives are WD1000FYPS (Western Digital 1 TB 3.5" SATA). mdadm.conf: # mdadm.conf written out by anaconda MAILADDR root AUTO +imsm +1.x -all ARRAY /dev/md/root level=raid10 num-devices=4 UUID=942f512e:2db8dc6c:71667abc:daf408c3 /proc/mdstat: Personalities : [raid10] md127 : active raid10 sdf1[2](F)
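The usual replacement sequence for an md member looks roughly like this (device names come from the /proc/mdstat excerpt; verify before acting):

    mdadm --detail /dev/md127              # confirm which member is marked faulty
    mdadm /dev/md127 --remove /dev/sdf1    # drop the failed member
    # swap the disk, recreate the partition layout, then:
    mdadm /dev/md127 --add /dev/sdf1       # re-add; the array resyncs on its own
    cat /proc/mdstat                       # watch the rebuild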
2011 Aug 30
3
OT - small hd recommendation
A little OT - but I've seen a few opinions voiced here by various admins and I'd like to benefit. Currently running a single combined server for multiple operations - fileserver, mailserver, webserver, virtual server, and whatever else pops up. Current incarnation of the machine, after the last rebuild, is an AMD Opteron 4180 with a Supermicro MB using ATI SB700 chipset - which
2017 Oct 17
1
lvconvert(split) - raid10 => raid0
hi guys, gals, do you know if conversion from LVM's raid10 to raid0 is possible? I'm fiddling with --splitmirrors but it gets me nowhere. On the "takeover" subject the man page says: "..between striped/raid0 and raid10." but there are no details; I could not find any documentation or a howto. many thanks, L.
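A hedged sketch of the takeover path the man page hints at, assuming the installed lvm2 version supports it (VG/LV names are placeholders):

    lvconvert --type raid0 vg00/lv_data          # attempt the raid10 -> raid0 takeover
    lvs -a -o name,segtype,devices vg00          # check which segment type the LV ended up with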
2012 Feb 06
1
Unknown KERNEL Warning in boot messages
CentOS Community, Would someone who is familiar with reading boot messages and kernel errors be able to assist with advising me on what the following errors might mean in dmesg. It seems to come up randomly towards the end of the logfile. ------------[ cut here ]------------ WARNING: at arch/x86/kernel/cpu/mtrr/generic.c:467 generic_get_mtrr+0x11e/0x140() (Not tainted) Hardware name: empty
2015 May 09
2
Bug#784810: Bug#784810: Xen domU tries to access dom0 LVM Volume group
On 09/05/2015 13:25, Ian Campbell wrote: > On Sat, 2015-05-09 at 03:41 +0200, Romain Mourier wrote: > [...] >> xen-create-image --hostname=test0 --lvm=raid10 --fs=ext4 >> --bridge=br-lan --dhcp --dist=jessie > [...] >> root at hv0:~# xl create /etc/xen/test0.cfg && xl console test0 > What does /etc/xen/test0.cfg contain? I suspect it is reusing the dom0
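For context, the generated config's disk lines decide which volumes the domU opens; with xen-tools and --lvm=raid10 they typically look something like this (the LV names are assumptions based on the command shown):

    disk = [ 'phy:/dev/raid10/test0-disk,xvda2,w',
             'phy:/dev/raid10/test0-swap,xvda1,w' ]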
2013 Mar 28
1
question about replacing a drive in raid10
Hi all, I have a question about replacing a drive in raid10 (and linux kernel 3.8.4). A bad disk was physically removed from the server. After this a new disk was added with "btrfs device add /dev/sdg /btrfs" to the raid10 btrfs FS. After this the server was rebooted and I mounted the filesystem in degraded mode. It seems that a previously started balance continued. At this point I want to
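A sketch of the rest of the usual sequence once the filesystem is mounted degraded (paths come from the post; "delete missing" is the older spelling of today's "remove missing"):

    btrfs device delete missing /btrfs     # drop the physically removed disk from the raid10
    btrfs balance status /btrfs            # check on the balance that resumed after the reboot
    btrfs filesystem show /btrfs           # confirm all remaining devids are healthy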
2013 Oct 04
1
btrfs raid0
How can I verify the read speed of a btrfs raid0 pair in Arch Linux? I assume raid0 means striped activity in parallel, at least similar to raid0 in mdadm. How can I measure the btrfs read speed, since it is copy-on-write, which is not the norm in mdadm raid0? Perhaps I cannot use the same approach in btrfs to determine the performance. Secondly, I see a methodology for raid10 using
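A rough sequential-read check that works the same for btrfs and mdadm arrays (drop caches first so copy-on-write and page-cache effects do not skew the numbers; the file path is an example):

    sync; echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/raid0/testfile of=/dev/null bs=1M count=4096    # end-to-end filesystem read
    hdparm -t /dev/sda /dev/sdb                                # raw speed of the underlying devices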
2017 Feb 03
3
raid 10 not in consistent state?
hi everyone, I've just configured a simple raid10 on a Dell system, but one thing is puzzling to me. I'm seeing this below and I wonder why? There: Consist = No ... /c0/v1 : ====== --------------------------------------------------------------- DG/VD TYPE State Access Consist Cache Cac sCC Size Name --------------------------------------------------------------- 3/1 RAID10 Optl
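On PERC/MegaRAID controllers a freshly created VD commonly shows Consist=No until background initialization or a consistency check completes; the usual storcli checks are sketched below (controller and VD numbers come from the output above, exact syntax may vary by storcli version):

    storcli /c0/v1 show all      # look at the State and initialization/CC progress fields
    storcli /c0/v1 start cc      # start a consistency check if none is running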
2012 Jan 17
8
[RFC][PATCH 1/2] Btrfs: try to allocate new chunks with degenerated profile
If there is no free space, the free space allocator will try to get space from the block group with the degenerated profile. For example, if there is no free space in the RAID1 block groups, the allocator will try to allocate space from the DUP block groups. And besides that, the space reservation has similar behaviour: if there is not enough space in the space cache to reserve, it will reserve
2007 Jun 10
1
mdadm Linux Raid 10: is it 0+1 or 1+0?
The relevance of this question can be found here: http://aput.net/~jheiss/raid10/ I read the mdadm documents but I could not find a positive answer. I even read the raid10 module source but I didn't find the answer there either. Does someone here know it? Thank you!
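One way to see how the copies are placed is to look at the array's layout field; md's raid10 is its own personality rather than a literal nesting, and with the default near=2 layout it behaves like striped mirrors (a sketch, the array name is an example):

    mdadm --detail /dev/md0 | grep -i layout    # e.g. 'Layout : near=2'
    cat /sys/block/md0/md/layout                # raw layout value exposed by the kernel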