similar to: Software RAID1 Drives

Displaying 20 results from an estimated 1100 matches similar to: "Software RAID1 Drives"

2015 Oct 07
1
Software RAID1 Drives
John R Pierce wrote: > On 10/7/2015 3:14 PM, Matt wrote: >> I have 3 4TB WD drives I want to put in a RAID1 array. >> >> Two WD4000FYYZ >> >> and >> >> One WD4000F9YZ >> >> All enterprise class but two are WD Re and one is WD Se. I ordered >> the first two thinking 2 drives in the raid array would be sufficient >> but later
2015 Oct 07
0
Software RAID1 Drives
On 10/7/2015 3:14 PM, Matt wrote: > I have 3 4TB WD drives I want to put in a RAID1 array. > > Two WD4000FYYZ > > and > > One WD4000F9YZ > > All enterprise class but two are WD Re and one is WD Se. I ordered > the first two thinking 2 drives in the raid array would be sufficient > but later decided it's a long drive to the server so I would rather > have 3
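(Not from the thread: a minimal mdadm sketch of the three-way mirror being discussed, assuming the three disks appear as sdb, sdc and sdd with one RAID partition each.)

    # Create a three-member RAID1 mirror in one go:
    mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

    # Or, if a two-disk mirror already exists, add the third disk and grow the array:
    mdadm --manage /dev/md0 --add /dev/sdd1
    mdadm --grow /dev/md0 --raid-devices=3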
2023 Jan 12
2
Upgrading system from non-RAID to RAID1
On 01/11/2023 01:33 PM, H wrote: > On 01/11/2023 02:09 AM, Simon Matter wrote: >> What I usually do is this: "cut" the large disk into several pieces of >> equal size and create individual RAID1 arrays. Then add them as LVM PVs to >> one large VG. The advantage is that with one error on one disk, you won't >> lose redundancy on the whole RAID mirror but only on
2016 Mar 12
4
C7 + UEFI + GPT + RAID1
Hi list, I'm new to UEFI and GPT. For several years I've used MBR partition tables. I've installed my system on software raid1 (mdadm) using md0 (sda1,sdb1) for swap, md1 (sda2,sdb2) for /, and md2 (sda3,sdb3) for /home. From several how-tos concerning raid1 installation, I must put each partition on a different md device. I asked some time ago whether it's more correct to create the
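(Not from the thread: a rough sketch of the layout described above, with assumed device names; on UEFI/GPT an EFI System Partition would be needed in addition to these.)

    # Assumed: two disks, sda and sdb, each carrying three RAID partitions.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # swap
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # /
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3   # /home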
2023 Jan 12
1
Upgrading system from non-RAID to RAID1
> Follow-up question: Is my proposed strategy below correct: > - Make a copy of all existing directories and files on the current disk using clonezilla. > - Install the new M.2 SSDs. > - Partitioning the new SSDs for RAID1 using an external tool. > - Doing a minimal installation of C7 and mdraid. > - If choosing three RAID partitions, one for /boot, one for /boot/efi and the
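(Not from the thread: one commonly cited detail for the /boot/efi partition in the plan above, sketched with hypothetical NVMe device names. Mirroring the EFI System Partition with md generally requires metadata 1.0, which puts the superblock at the end of the partition so the firmware can still read each member as a plain FAT filesystem.)

    mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 \
          /dev/nvme0n1p1 /dev/nvme1n1p1
    mkfs.vfat /dev/md0    # the mirrored ESP, later mounted at /boot/efi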
2023 Jan 12
1
Upgrading system from non-RAID to RAID1
> On 01/11/2023 01:33 PM, H wrote: >> On 01/11/2023 02:09 AM, Simon Matter wrote: >>> What I usually do is this: "cut" the large disk into several pieces of >>> equal size and create individual RAID1 arrays. Then add them as LVM PVs >>> to >>> one large VG. The advantage is that with one error on one disk, you >>> won't >>>
2017 Apr 14
2
Possible bug with latest 7.3 installer, md RAID1, and SATADOM.
I'm seeing a problem that I think may be a bug with the mdraid software on the latest CentOS installer. I have a couple of new supermicro servers and each system has two innodisk 32GB SATADOMs that are experiencing the same issue. I used the latest CentOS-7-x86_64-1611 to install a simple RAID1 for the root onto the two SATADOMs. The install goes just fine but when I boot off the new
2023 Jan 11
2
Upgrading system from non-RAID to RAID1
On 01/11/2023 02:09 AM, Simon Matter wrote: > What I usually do is this: "cut" the large disk into several pieces of > equal size and create individual RAID1 arrays. Then add them as LVM PVs to > one large VG. The advantage is that with one error on one disk, you won't > lose redundancy on the whole RAID mirror but only on a partial segment. > You can even lose another
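(Not from the thread: a small sketch of the approach Simon describes, with hypothetical partition and volume names; each RAID1 segment becomes an LVM PV inside one shared VG.)

    # Two equal-sized RAID partitions per disk, mirrored pairwise:
    mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mdadm --create /dev/md11 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

    # Both mirrors go into one large volume group:
    pvcreate /dev/md10 /dev/md11
    vgcreate vg_data /dev/md10 /dev/md11
    lvcreate -n lv_data -l 100%FREE vg_data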
2012 Jan 29
2
Advise on recovering 2TB RAID1
Hi all, I have one drive that failed on a software 2TB RAID1. I have removed the failed partition from mdraid and am now ready to replace the failed drive. I want to ask for opinions on whether there is a better way to do that than: 1. Put in the new HDD. 2. Use parted to recreate the same partition scheme. 3. Use mdadm to rebuild the RAID. #2 in particular is rather tricky. I have to create an exact partition
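(Not from the thread: the usual shortcut for step 2 is to copy the partition table from the surviving disk rather than recreate it by hand; a sketch, assuming sda survived and sdb is the replacement.)

    # MBR disks: duplicate the partition table with sfdisk
    sfdisk -d /dev/sda | sfdisk /dev/sdb

    # GPT disks: replicate with sgdisk (destination first, source last), then randomize GUIDs
    sgdisk -R /dev/sdb /dev/sda
    sgdisk -G /dev/sdb

    # Step 3: re-add the partition and watch the rebuild
    mdadm --manage /dev/md0 --add /dev/sdb1
    cat /proc/mdstat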
2016 Mar 13
1
C7 + UEFI + GPT + RAID1
Hi Messmer, it seems that anaconda supports partitioned RAID devices. The disk selection sees one mdraid device and permits creating partitions on it. On 13/03/2016 01:04, Gordon Messmer wrote: > On 03/12/2016 08:22 AM, Alessandro Baggi wrote: >> From several how-tos concerning raid1 installation, I must put each >> partition on a different md device. > > Not necessarily. You
2008 Oct 08
5
Resilver hanging?
How can I diagnose why a resilver appears to be hanging at a certain percentage, seemingly doing nothing for quite a while, even though the HDD LED is lit up permanently (no apparent head seeking)? The drives in the pool are WD RAID Editions, thus have TLER and should time out on errors in just seconds. Neither ZFS nor the syslog was reporting any IO errors, so it wasn't the disks.
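(Not from the thread: the standard first checks for a stalled resilver, with 'tank' as a placeholder pool name, to see whether the scan counters and per-device I/O are actually moving.)

    zpool status -v tank      # resilver progress and any per-device errors
    zpool iostat -v tank 5    # per-vdev read/write activity every 5 seconds
    iostat -xn 5              # per-device service times (Solaris syntax)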
2007 Jul 27
1
Hard disk recommendation for a software raid 5 array. Does Linux Software Raid support/interact well with TLER-enabled disks?
Hi people, I am building a cheap remote rsync backup server using a software raid 5 array of 4 500GB disks. What I have available on the market is: 1. HITACHI GST Deskstar T7K500 500GB 7200rpm 16MB cache Serial ATA II-300 2. SEAGATE Barracuda 7200.10 with NCQ 500GB 7200rpm 16MB cache Serial ATA II-300 3. Western Digital 500GB SATAII RAID EDITION Caviar SE16 7200rpm 8.9ms 16MB cache I am
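(Not from the thread: on drives and smartmontools versions that support SCT Error Recovery Control, the TLER setting can be inspected and adjusted from Linux; /dev/sda is a placeholder.)

    # Query the current error-recovery timeout (TLER/ERC), if the drive supports it:
    smartctl -l scterc /dev/sda
    # Set read and write recovery timeouts to 7.0 seconds (units are tenths of a second):
    smartctl -l scterc,70,70 /dev/sda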
2017 Feb 15
3
RAID questions
Hello, Just a couple of questions regarding RAID. Here's the situation. I bought a 4TB drive before I upgraded from 6.8 to 7.3. I'm not so far into this that I can't start over. I wanted disk space to back up 3 other machines. I way overestimated what I needed for full, incremental and image backups with UrBackup. I've used less than 1TB so far. I would like to add an additional drive
2016 Jan 17
10
HDD badblocks
Hi list, I have a notebook with C7 (1511). This notebook has 2 disks (640 GB) and I've configured them with MD at level 1. Some days ago I noticed some critical slowdowns while opening applications. First of all I disabled acpi on the disks. I've checked disks sda and sdb for badblocks 4 consecutive times and I've noticed a strange behaviour. On sdb there are
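(Not from the thread: a typical way to cross-check a suspect mirror member, a read-only surface scan plus the SMART counters; sdb as in the post.)

    badblocks -sv /dev/sdb    # read-only scan with progress output
    smartctl -a /dev/sdb      # check Reallocated_Sector_Ct and Current_Pending_Sector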
2011 Mar 21
4
mdraid on top of mdraid
Is it possible, or will there be any problems, with using mdraid on top of mdraid? Specifically, say mdraid 1/5 on top of mdraid multipath. E.g. 4 storage machines export iSCSI targets via two different physical network switches, then multipath is used to create md block devices, then mdraid is built on these md block devices. The purpose being the storage array surviving a physical network switch
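(Not from the thread: a rough sketch of the layering being asked about, using md's multipath personality with hypothetical iSCSI device names; dm-multipath is the more common choice for the path-failover layer nowadays.)

    # Each iSCSI LUN is visible over two paths (sdc/sdd and sde/sdf):
    mdadm --create /dev/md/mp0 --level=multipath --raid-devices=2 /dev/sdc /dev/sdd
    mdadm --create /dev/md/mp1 --level=multipath --raid-devices=2 /dev/sde /dev/sdf

    # Then mirror across the two multipath devices:
    mdadm --create /dev/md/r1 --level=1 --raid-devices=2 /dev/md/mp0 /dev/md/mp1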
2013 Aug 24
10
Help interpreting RAID1 space allocation
I've created a test volume and copied a bulk of data to it, however the results of the space allocation are confusing at best. I've tried to capture the history of events leading up to the current state. This is all on a Debian Wheezy system using a 3.10.5 kernel package (linux-image-3.10-2-amd64) and btrfs tools v0.20-rc1 (Debian package 0.19+20130315-5). The host uses an
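(Not from the thread: the usual commands for reading btrfs RAID1 allocation, with a placeholder mount point; "allocated" chunk space is reported separately from what is used inside the chunks, which is often the source of the confusion.)

    btrfs filesystem df /mnt/test    # data/metadata chunks: allocated vs. used
    btrfs filesystem show            # raw bytes consumed on each member device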
2013 Apr 22
2
hard drive question - WD red
I see a really good price, and better for quantity, for 3TB drives. Now, I've been down on WD for a couple of years, since I found that they'd protected certain h/d parms from being changed (like TLER). The ones I'm looking at are the Red, which seems to be a new color (at least to me), and one technical review I've read says that they're intended for NAS, etc, and you can
2011 Mar 29
4
VMware vSphere Hypervisor (free ESXi) and mdraid
Can I combine VMWare ESXi (free version) virtualization and CentOS mdraid level 1? Any pointers how to do it? I never used VMWare before. - Jussi -- Jussi Hirvi * Green Spot Topeliuksenkatu 15 C * 00250 Helsinki * Finland Tel. +358 9 493 981 * Mobile +358 40 771 2098 (only sms) jussi.hirvi at greenspot.fi * http://www.greenspot.fi
2012 Feb 29
7
Software RAID1 with CentOS-6.2
Hello, I'm having a problem with software RAID that is driving me crazy. Here are the details: 1. CentOS 6.2 x86_64 install from the minimal iso (via pxeboot). 2. Reasonably good PC hardware (i.e. not budget, but not server grade either) with a pair of 1TB Western Digital SATA3 drives. 3. Drives are plugged into the SATA3 ports on the mainboard (both drives and cables say they can do 6Gb/s). 4.
2011 Apr 12
17
40TB File System Recommendations
Hello All, I have a brand spanking new 40TB Hardware RAID6 array to play around with. I am looking for recommendations for which filesystem to use. I am trying not to break this up into multiple file systems as we are going to use it for backups. Other factors are performance and reliability. CentOS 5.6, array is /dev/sdb. So here is what I have tried so far: reiserfs is limited to 16TB, ext4
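(Not from the thread: XFS is the usual answer for a single >16TB filesystem on CentOS 5/6; a minimal sketch with an assumed mount point.)

    mkfs.xfs /dev/sdb        # add -f to overwrite a previously formatted device
    mkdir -p /backup
    mount /dev/sdb /backup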