similar to: raid10, centos 4.x

Displaying 20 results from an estimated 8000 matches similar to: "raid10, centos 4.x"

2012 Mar 29
3
RAID-10 vs Nested (RAID-0 on 2x RAID-1s)
Greetings- I'm about to embark on a new installation of CentOS 6 x64 on 4x SATA HDDs. The plan is to use RAID-10 as a nice combo between data security (RAID1) and speed (RAID0). However, I'm finding either a lack of raw information on the topic, or I'm having a mental issue preventing the osmosis of the implementation into my brain. Option #1: My understanding of RAID10 using 4
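Both approaches can be built with mdadm. A minimal sketch of md's native raid10 personality, assuming four blank SATA disks with placeholder names (the nested two-pairs-plus-stripe variant is sketched under the RAID 1+0 entry below):

  # md's native RAID10 across all four members; the default near-2 layout
  # mirrors adjacent slots and stripes across the pairs
  mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=4 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
  cat /proc/mdstat        # watch the initial sync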
2014 Apr 07
3
Software RAID10 - which two disks can fail?
Hi All. I have a server which uses RAID10 made of 4 partitions for / and boots from it. It looks like so: mdadm -D /dev/md1 /dev/md1: Version : 00.90 Creation Time : Mon Apr 27 09:25:05 2009 Raid Level : raid10 Array Size : 973827968 (928.71 GiB 997.20 GB) Used Dev Size : 486913984 (464.36 GiB 498.60 GB) Raid Devices : 4 Total Devices : 4 Preferred Minor : 1
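For a 4-device md RAID10 with the default near-2 layout, slots 0+1 form one mirror pair and slots 2+3 the other, so the array survives losing one disk from each pair but not both members of the same pair. A quick way to see which physical disk sits in which slot (a sketch against the array shown above):

  # "Layout" plus the RaidDevice (slot) column tell you which disks are mirrors
  mdadm --detail /dev/md1 | egrep 'Layout|RaidDevice|/dev/sd'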
2007 May 07
5
Anaconda doesn't support raid10
So after troubleshooting this for about a week, I was finally able to create a raid 10 device by installing the system, copying the md modules onto a floppy, and loading the raid10 module during the install. Now the problem is that I can't get it to show up in anaconda. It detects the other arrays (raid0 and raid1) fine, but the raid10 array won't show up. Looking through the logs
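A rough workaround on installers of that era, sketched here on the assumption that the raid10 module is reachable from a driver disk or floppy, is to load the module and assemble (or create) the array from the installer's shell before partitioning, so anaconda only has to recognise an existing device:

  # on tty2 of the installer
  modprobe raid10
  mdadm --assemble --scan        # or mdadm --create ... for a new array
  cat /proc/mdstat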
2020 Sep 18
4
Drive failed in 4-drive md RAID 10
I got the email that a drive in my 4-drive RAID10 setup failed. What are my options? Drives are WD1000FYPS (Western Digital 1 TB 3.5" SATA). mdadm.conf: # mdadm.conf written out by anaconda MAILADDR root AUTO +imsm +1.x -all ARRAY /dev/md/root level=raid10 num-devices=4 UUID=942f512e:2db8dc6c:71667abc:daf408c3 /proc/mdstat: Personalities : [raid10] md127 : active raid10 sdf1[2](F)
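The usual md replacement sequence, as a sketch (sdf1 is the member marked failed in the mdstat excerpt above; the replacement disk name is a placeholder):

  mdadm --manage /dev/md127 --fail /dev/sdf1      # mark it failed, if not already
  mdadm --manage /dev/md127 --remove /dev/sdf1    # pull it out of the array
  # partition the new disk the same way as the others, then:
  mdadm --manage /dev/md127 --add /dev/sdX1
  watch cat /proc/mdstat                          # follow the rebuild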
2007 Apr 24
2
setting up CentOS 5 with Raid10
I would like to set up CentOS on 4 SATA hard drives configured in RAID10. I read somewhere that RAID10 support is in the latest kernel, but I can't seem to get anaconda to let me create it. I only see raid 0, 1, 5, and 6. Even when I tried to set up raid5 or raid1, it would not let me put the /boot partition on it, and I thought that this was now possible. Is it
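What generally works even when the installer offers no RAID10 option is a small RAID1 for /boot (the bootloader can read each RAID1 member as a plain partition) plus a RAID10 for everything else, created manually if need be. A sketch with placeholder device names:

  mdadm --create /dev/md0 --level=1  --raid-devices=4 /dev/sd[abcd]1   # /boot
  mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[abcd]2   # / or LVM PV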
2017 Sep 20
3
xfs not getting it right?
Chris Adams wrote: > Once upon a time, hw <hw at gc-24.de> said: >> xfs is supposed to detect the layout of an md-RAID device when creating the >> file system, but it doesn't seem to do that: >> >> >> # cat /proc/mdstat >> Personalities : [raid1] >> md10 : active raid1 sde[1] sdd[0] >> 499976512 blocks super 1.2 [2/2] [UU] >>
2010 Mar 26
23
RAID10
Hi All, I am looking at ZFS and I get that they call it RAIDZ which is similar to RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data protection? So if I have 8 x 1.5tb drives, wouldn't I: - mirror drive 1 and 5 - mirror drive 2 and 6 - mirror drive 3 and 7 - mirror drive 4 and 8 Then stripe 1,2,3,4 Then stripe 5,6,7,8 How does one do this with ZFS?
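ZFS does this with striped mirrors rather than a named RAID10 level: each mirror group is a vdev and the pool stripes across the vdevs. A sketch of the 8-drive layout described above, with placeholder disk names:

  zpool create tank mirror disk1 disk5 mirror disk2 disk6 \
                    mirror disk3 disk7 mirror disk4 disk8
  zpool status tank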
2009 Aug 30
3
looking for RAID 1+0 setup instructions?
Hi, Can someone please assist me with some software RAID 1+0 setup instructions? I have searched the web, but couldn't find any. I found a lot of RAID 10 setup instructions, but they don't help me. -- Kind Regards Rudi Ahlers CEO, SoftDux Hosting Web: http://www.SoftDux.com Office: 087 805 9573 Cell: 082 554 7532
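A minimal nested RAID 1+0 sketch with mdadm, assuming four blank partitions with placeholder names: two RAID1 pairs, then a RAID0 stripe across them.

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
  mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1
  mdadm --detail --scan >> /etc/mdadm.conf   # so the stack assembles at boot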
2017 Sep 20
4
xfs not getting it right?
Hi, xfs is supposed to detect the layout of an md-RAID device when creating the file system, but it doesn't seem to do that: # cat /proc/mdstat Personalities : [raid1] md10 : active raid1 sde[1] sdd[0] 499976512 blocks super 1.2 [2/2] [UU] bitmap: 0/4 pages [0KB], 65536KB chunk # mkfs.xfs /dev/md10p2 meta-data=/dev/md10p2 isize=512 agcount=4, agsize=30199892 blks
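Two hedged notes: the array above is raid1, which has no stripe, so mkfs.xfs reporting sunit/swidth of 0 would be expected there; and for a genuinely striped md device the geometry can always be passed by hand. A sketch assuming a hypothetical 4-disk raid10 with a 512k chunk:

  # su = chunk size, sw = number of data-bearing stripes (2 for a 4-disk near-2 raid10)
  mkfs.xfs -d su=512k,sw=2 /dev/md0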
2017 Oct 17
1
lvconvert(split) - raid10 => raid0
hi guys, gals, do you know if conversion from lvm's raid10 to raid0 is possible? I'm fiddling with --splitmirrors but it gets me nowhere. On the subject of "takeover", the man page says: "..between striped/raid0 and raid10." but gives no details; I could not find any documentation or a howto anywhere. many thanks, L.
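If the takeover path the man page hints at is available in your lvm2 version, the conversion may be a single step. This is only a sketch of that assumption, with vg/lv as placeholders, not a confirmed recipe:

  lvconvert --type raid0 vg/lv         # or --type striped, depending on what the tools offer
  lvs -a -o name,segtype,devices vg    # check the resulting segment type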
2012 May 06
4
btrfs-raid10 <-> btrfs-raid1 confusion
Greetings, until yesterday I was running a btrfs filesystem across two 2.0 TiB disks in RAID1 mode for both metadata and data without any problems. As space was getting short I wanted to extend the filesystem with two additional drives I had lying around, both of which are 1.0 TiB in size. Knowing little about the btrfs RAID implementation I thought I had to switch to RAID10 mode, which I was told is
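For what it's worth, btrfs raid1 is not limited to two devices: it keeps two copies of every chunk spread across however many devices are in the filesystem, so one common approach with mixed-size drives is to just add them and rebalance while staying on raid1. A hedged sketch with placeholder names and mount point:

  btrfs device add /dev/sdc /dev/sdd /mnt
  btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
  btrfs filesystem df /mnt             # confirm the data/metadata profiles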
2012 Jan 17
8
[RFC][PATCH 1/2] Btrfs: try to allocate new chunks with degenerated profile
If there is no free space, the free space allocator will try to get space from the block group with the degenerated profile. For example, if there is no free space in the RAID1 block groups, the allocator will try to allocate space from the DUP block groups. Besides that, the space reservation behaves similarly: if there is not enough space in the space cache to reserve, it will reserve
2006 Nov 21
3
RAID benchmarks
We (a small college with about 3000 active accounts) are currently in the process of moving from UW IMAP running on linux to dovecot running on a cluster of 3 or 4 new faster Linux machines. (Initially using perdition to split the load.) As we are building and designing the system, I'm attempting to take (or find) benchmarks everywhere I can in order to make informed decisions and so
2013 Jan 04
2
Syslinux 5.00 - Doesn't boot my system / Not passing the kernel options to the kernel?
Hi, I am encountering a problem with Syslinux 5.00 that I cannot really describe, so I created two small videos: Booting with Syslinux 5.00 (1.3 MB): <https://www.dropbox.com/s/b6g8cdf2t9v48c6/boot-syslinux5-fail.mp4> How I fixed the problem by downgrading to Syslinux 4.06, and how booting should look (6.5 MB): <https://www.dropbox.com/s/lt7cpgfm0qvqtba/boot-syslinux5-how-i-fixed-it.mp4>
2019 Jan 30
3
C7, mdadm issues
Il 30/01/19 16:49, Simon Matter ha scritto: >> On 01/30/19 03:45, Alessandro Baggi wrote: >>> Il 29/01/19 20:42, mark ha scritto: >>>> Alessandro Baggi wrote: >>>>> Il 29/01/19 18:47, mark ha scritto: >>>>>> Alessandro Baggi wrote: >>>>>>> Il 29/01/19 15:03, mark ha scritto: >>>>>>>
2011 May 05
1
Converting 1-drive ext4 to 4-drive raid10 btrfs
Hello! I have a 1 TB ext4 drive that's quite full (~50 GB free space, though I could free up another 100 GB or so if necessary) and two empty 0.5 TB drives. Is it possible to get another 1 TB drive and combine the four drives into a btrfs raid10 setup without (if all goes well) losing my data? Regards, Paul -- To unsubscribe from this list: send the line "unsubscribe
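One possible path, assuming btrfs-convert is available and current backups exist; the device names are placeholders and this is a sketch rather than tested advice:

  btrfs-convert /dev/sda1                                    # convert the ext4 fs in place
  mount /dev/sda1 /mnt
  btrfs device add /dev/sdb /dev/sdc /dev/sdd /mnt           # the other three drives
  btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt # spread data as raid10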
2012 Nov 13
1
mdX and mismatch_cnt when building an array
CentOS 6.3, x86_64. I have noticed when building a new software RAID-6 array on CentOS 6.3 that the mismatch_cnt grows monotonically while the array is building: # cat /proc/mdstat Personalities : [raid6] [raid5] [raid4] md11 : active raid6 sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0] 3904890880 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
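The counter is generally only meaningful after an explicit check run on an idle, fully synced array; a sketch of reading it that way:

  echo check > /sys/block/md11/md/sync_action   # start a scrub once the build is done
  cat /proc/mdstat                              # wait for the check to finish
  cat /sys/block/md11/md/mismatch_cnt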
2012 Jul 14
2
bug: raid10 filesystem has suddenly ceased to mount
Hi! The problem is that the BTRFS raid10 filesystem refuses to mount, with no understandable cause. Here is the dmesg output: [77847.845540] device label linux-btrfs-raid10 devid 3 transid 45639 /dev/sdc1 [77848.633912] btrfs: allowing degraded mounts [77848.633917] btrfs: enabling auto defrag [77848.633919] btrfs: use lzo compression [77848.633922] btrfs: turning on flush-on-commit [77848.658879]
2013 Mar 28
1
question about replacing a drive in raid10
Hi all, I have a question about replacing a drive in raid10 (and linux kernel 3.8.4). A bad disk was physically removed from the server. After this a new disk was added with "btrfs device add /dev/sdg /btrfs" to the raid10 btrfs FS. After this the server was rebooted and I mounted the filesystem in degraded mode. It seems that a previously started balance continued. At this point I want to
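After adding the new device, the missing member usually still has to be removed so the balance can restore two copies of everything. A sketch using the mount point from the post:

  btrfs device delete missing /btrfs     # drop the physically removed disk
  btrfs balance start /btrfs             # rebuild redundancy across the remaining devices
  btrfs filesystem show /btrfs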
2014 Dec 04
2
DegradedArray message
Thanks for all the responses. A little more digging revealed: md0 is made up of two 250G disks on which the OS and a very large /var partition reside for a number of virtual machines. md1 is made up of two 2T disks on which /home resides. The challenge is that disk 0 of md0 is the problem, and it has a 524M /boot partition outside of the raid partition. My plan is to back up /home (md1) and at a
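A hypothetical replacement sequence for the failing member of md0, assuming sdX is the new disk and sdY its healthy mirror (only the RAID member partition re-syncs automatically; the non-RAID /boot partition has to be restored by hand):

  sfdisk -d /dev/sdY | sfdisk /dev/sdX       # clone the partition layout onto the new disk
  mdadm --manage /dev/md0 --add /dev/sdX2    # let md rebuild the mirror
  # restore /boot from backup onto the new disk's /boot partition, then
  # reinstall the bootloader, e.g. grub-install /dev/sdX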