similar to: Expanding RAID 10 array, WAS: 40TB File System Recommendations

Displaying 20 results from an estimated 4000 matches similar to: "Expanding RAID 10 array, WAS: 40TB File System Recommendations"

2017 Feb 17
3
RAID questions
On 2017-02-15, John R Pierce <pierce at hogranch.com> wrote: > On 2/14/2017 4:48 PM, tdukes at palmettoshopper.com wrote: >> 3 - Can additional drive(s) be added later with a change in RAID level without current data loss? > Only some systems support that sort of restriping, and it's a dangerous activity (if the power fails or system crashes midway through
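For readers landing here from search: the restriping discussed above is what mdadm exposes as a reshape. A minimal sketch of adding a disk and converting a 2-disk RAID1 into a 3-disk RAID5 (device names are hypothetical; only some level transitions are supported, and keep backups, since a crash mid-reshape can be destructive):

    # Add a third disk as a spare, then reshape in place.
    mdadm --add /dev/md0 /dev/sdd1
    mdadm --grow /dev/md0 --level=5 --raid-devices=3 \
          --backup-file=/root/md0-reshape.backup
    # Watch the reshape progress:
    cat /proc/mdstat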
2017 Feb 15
3
RAID questions
Hello, Just a couple questions regarding RAID. Here's the situation. I bought a 4TB drive before I upgraded from 6.8 to 7.3. I'm not so far into this that I can't start over. I wanted disk space to back up 3 other machines. I way overestimated what I needed for full, incremental and image backups with UrBackup. I've used less than 1TB so far. I would like to add an additional drive
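One common pattern for this situation: build a degraded RAID1 on the new drive, copy the data over, then attach the old drive as the second mirror half. A hedged sketch (device names hypothetical):

    # Create a RAID1 with one member deliberately missing.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdc1
    mkfs.xfs /dev/md0            # or your filesystem of choice
    # ... copy the existing data onto /dev/md0 ...
    # Then add the original drive; md resyncs it automatically.
    mdadm --add /dev/md0 /dev/sdb1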
2007 Dec 17
3
ZFS Roadmap - thoughts on expanding raidz / restriping / defrag
Hey folks, Does anybody know if any of these are on the roadmap for ZFS, or have any idea how long it's likely to be before we see them (we're in no rush - late 2008 would be fine with us, but it would be nice to know they're being worked on)? I've seen many people ask for the ability to expand a raid-z pool by adding devices. I'm wondering if it
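At the time, a raidz vdev could not be widened by adding a disk; the supported way to grow a pool was to add another top-level vdev (or replace every disk with a larger one). A sketch, with Solaris-style device names as placeholders:

    # Grow the pool by adding a second raidz vdev alongside the first.
    zpool add tank raidz c2t0d0 c2t1d0 c2t2d0
    zpool status tank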
2015 Feb 19
2
CentOS 7: software RAID 5 array with 4 disks and no spares?
On 2/18/2015 8:20 PM, Chris Murphy wrote: > On Wed, Feb 18, 2015 at 3:37 PM, Niki Kovacs <info at microlinux.fr> wrote: >> On 18/02/2015 23:12, Chris Murphy wrote: >>> "installer is organized around mount points" is correct, and what gets >>> mounted on mount points? Volumes, not partitions.
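The layout this thread debates can be built by hand: a 4-disk RAID5 with no spares, presented to the installer as LVM volumes. A minimal sketch (partition names hypothetical):

    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
          --spare-devices=0 /dev/sd[bcde]1
    pvcreate /dev/md0
    vgcreate vg_data /dev/md0
    lvcreate -n lv_srv -l 100%FREE vg_data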
2015 Feb 18
2
CentOS 7: software RAID 5 array with 4 disks and no spares?
On 18/02/2015 23:12, Chris Murphy wrote: > "installer is organized around mount points" is correct, and what gets > mounted on mount points? Volumes, not partitions. Says who? -- Microlinux - 100% Linux and free-software IT solutions 7, place de l'Église - 30730 Montpezat Web : http://www.microlinux.fr Mail : info at microlinux.fr Tél. : 04 66 63 10 32
2011 Apr 12
17
40TB File System Recommendations
Hello All, I have a brand spanking new 40TB hardware RAID6 array to play around with. I am looking for recommendations for which filesystem to use. I am trying not to break this up into multiple filesystems, as we are going to use it for backups. Other factors are performance and reliability. CentOS 5.6; the array is /dev/sdb. So here is what I have tried so far: reiserfs is limited to 16TB, ext4
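For arrays past ext4's then-current 16TB limit, XFS was the usual recommendation in this thread; on CentOS 5 it is available from the centosplus/extras repositories. A sketch, assuming the array really is /dev/sdb:

    mkfs.xfs -L backup /dev/sdb
    mkdir -p /backup
    mount /dev/sdb /backup
    df -h /backup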
2023 Jan 12
2
Upgrading system from non-RAID to RAID1
On 01/11/2023 01:33 PM, H wrote: > On 01/11/2023 02:09 AM, Simon Matter wrote: >> What I usually do is this: "cut" the large disk into several pieces of >> equal size and create individual RAID1 arrays. Then add them as LVM PVs to >> one large VG. The advantage is that with one error on one disk, you won't >> lose redundancy on the whole RAID mirror but only on
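Simon's scheme, sketched with hypothetical names: partition both disks into equal slices, mirror slice-for-slice, and pool the mirrors with LVM, so a bad sector costs redundancy on only one segment:

    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    pvcreate /dev/md1 /dev/md2
    vgcreate vg0 /dev/md1 /dev/md2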
2023 Jan 11
2
Upgrading system from non-RAID to RAID1
On 01/11/2023 02:09 AM, Simon Matter wrote: > What I usually do is this: "cut" the large disk into several pieces of > equal size and create individual RAID1 arrays. Then add them as LVM PVs to > one large VG. The advantage is that with one error on one disk, you won't > lose redundancy on the whole RAID mirror but only on a partial segment. > You can even lose another
2011 Mar 21
4
mdraid on top of mdraid
Is it possible, or will there be any problems, with using mdraid on top of mdraid? Specifically, say, mdraid 1/5 on top of mdraid multipath. E.g. 4 storage machines export iSCSI targets via two different physical network switches, then multipath is used to create md block devices, then mdraid is built on those md block devices. The purpose being the storage array surviving a physical network switch
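mdadm does have a legacy multipath personality, so the stacking described is at least expressible; whether it is wise is the thread's question. A sketch (device names hypothetical; dm-multipath is the more usual tool today):

    # Each iSCSI LUN is visible twice, once per switch path.
    mdadm --create /dev/md10 --level=multipath --raid-devices=2 /dev/sdc /dev/sdd
    mdadm --create /dev/md11 --level=multipath --raid-devices=2 /dev/sde /dev/sdf
    # Then mirror across the multipathed devices.
    mdadm --create /dev/md20 --level=1 --raid-devices=2 /dev/md10 /dev/md11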
2010 Jun 28
3
CentOS MD RAID 1 on Openfiler iSCSI
Has anybody tried, or does anybody know, if it is possible to create an MD RAID1 device using networked iSCSI devices like those created using OpenFiler? The idea I'm thinking of here is to use two OpenFiler servers, with physical drives in RAID 1, to create iSCSI virtual devices, and run CentOS guest VMs off the MD RAID 1 device. Since, theoretically, this setup would survive both a single physical drive
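A hedged sketch of the setup being asked about, with placeholder portal addresses: log in to one iSCSI target on each OpenFiler box, then mirror across them with md.

    iscsiadm -m discovery -t sendtargets -p 192.168.1.11
    iscsiadm -m discovery -t sendtargets -p 192.168.1.12
    iscsiadm -m node --login
    # Assuming the two LUNs appeared as sdb and sdc:
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc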
2023 Jan 12
1
Upgrading system from non-RAID to RAID1
> On 01/11/2023 01:33 PM, H wrote: >> On 01/11/2023 02:09 AM, Simon Matter wrote: >>> What I usually do is this: "cut" the large disk into several pieces of >>> equal size and create individual RAID1 arrays. Then add them as LVM PVs to >>> one large VG. The advantage is that with one error on one disk, you won't
2011 Jun 09
4
Possible to use multiple disk to bypass I/O wait?
I'm trying to resolve an I/O problem on a CentOS 5.6 server. The process basically scans through Maildirs, checking for space usage and quota. Because there are a hundred-odd user folders and several tens of thousands of small files, this sends the I/O wait % way high. The server hits a very high load level and stops responding to other requests until the crawl is done. I am wondering if I add
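Independent of adding spindles, the crawl's impact can often be reduced by demoting it to the idle I/O scheduling class so interactive mail traffic wins the disk. A sketch (the crawl script name is hypothetical):

    # CFQ's idle class only gets disk time when nothing else wants it.
    ionice -c3 nice -n 19 /usr/local/bin/maildir-quota-crawl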
2016 Mar 01
10
Any experiences with newer WD Red drives?
Might be slightly OT as it isn't necessarily a CentOS related issue. I've been using WD Reds as mdraid components, which worked pretty well for non-IOPS-intensive workloads. However, the latest C7 server I built ran into problems with them on an Intel C236 board (SuperMicro X11SSH), with tons of "ata bus error write fpdma queued". Googling on it threw up old suggestions to
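The "old suggestions" for fpdma-queued error storms usually amount to disabling or throttling NCQ while cables and the controller are ruled out. A sketch (sdX and the libata port are placeholders):

    # Temporarily reduce the queue depth to 1 (effectively no NCQ):
    echo 1 > /sys/block/sdX/device/queue_depth
    # Or disable NCQ for a given port at boot, on the kernel command line:
    #   libata.force=1.00:noncq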
2017 Jun 30
2
mdraid doesn't allow creation: device or resource busy
Dear fellow CentOS users, I have never experienced this problem with hard disk management before and cannot explain it to myself on any rational basis. The setup: I have a workstation for testing, running the latest CentOS 7.3 AMD64. I am evaluating oVirt and a storage-ha as part of my bachelor's thesis. I have already been running a RAID1 (mdraid, lvm2) for the system and some oVirt 4.1 testing.
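The usual culprits for "device or resource busy" at mdadm --create time are stale md/LVM/dmraid metadata still holding the partitions. A hedged checklist (device names hypothetical):

    lsblk                            # is anything still mounted or mapped?
    dmsetup ls                       # device-mapper holds from LVM/dmraid?
    mdadm --zero-superblock /dev/sdb1
    wipefs -a /dev/sdb1              # clear leftover filesystem/RAID signatures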
2009 Sep 19
3
How does LVM decide which Physical Volume to write to?
Hi everyone. This isn't specifically a CentOS question, since it could apply to any distro, but I hope someone can answer it anyway. I took the following steps but was puzzled by the outcome of the test at the end:
1. Create a RAID1 array called md3 with two 750GB drives
2. Create a RAID1 array called md9 with two 500GB drives
3. Initialise md3 then md9 as physical volumes (pvcreate)
4.
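Short version of the usual answer: with the default "normal" allocation policy, LVM fills PVs in the order they sit in the VG, so writes land on the first PV until it is full unless a PV is named explicitly. A sketch:

    pvs -o pv_name,vg_name,pv_size,pv_free   # see per-PV usage
    # Force an LV onto a specific PV by naming it explicitly:
    lvcreate -n lv_test -L 10G vg0 /dev/md9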
2012 Apr 05
1
Better to use a single large storage server or multiple smaller for mdbox?
I'm trying to improve the setup of our Dovecot/Exim mail servers to handle increasingly huge accounts (everybody assumes infinitely growing storage, like Gmail, and stores everything forever in their email accounts) by changing from Maildir to mdbox, and to take advantage of offloading older emails to alternative networked storage nodes. The question now is whether having a
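The offloading mentioned here is mdbox's ALT storage: point mail_location at a second path on the networked node and move old mail there with doveadm. A sketch with hypothetical paths:

    # In dovecot's 10-mail.conf (paths are placeholders):
    #   mail_location = mdbox:~/mdbox:ALT=/nfs/archive/%u/mdbox
    # Move everything saved more than a year ago to the ALT storage:
    doveadm altmove -A savedbefore 52w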
2020 Jun 18
2
Amd es1000
On 6/18/20 3:47 PM, John Pierce wrote: > On Thu, Jun 18, 2020 at 11:04 AM paride desimone <parided at gmail.com> wrote: >> The trouble is the radeon driver. I've already tried to install the GUI, >> but the system hung on starting the GUI. >> The ES1000 is a shit GPU. > those are just intended to provide a minimal VGA for initial
2009 Nov 19
3
New install RAID 1+0
I have a new server to set up: 4 hard drives, and I had intended it to be hardware RAID, but that's a long story. Does it make sense to set up the first two hard drives with RAID-0 partitions, get through the install, then go back later and create identically sized RAID-0 partitions on the other two drives, and finally create the RAID-1 mirror from the first to the second? Craig
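Rather than layering RAID-0 pairs and mirroring them afterwards (which means re-creating arrays under live data), md can build the equivalent in one step with its native raid10 level. A sketch over four hypothetical partitions:

    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
          /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2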
2011 Aug 23
40
[PATCH 00/21] [RFC] Btrfs: restriper
Hello, This patch series adds an initial implementation of restriper (it's a clever name for a relocation framework) that allows selective profile changing and selective balancing, with some goodies like pausing/resuming and reporting progress to the user. Profile changing is global (per-FS) so far; per-subvolume profiles require some discussion and can be implemented in the future.
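For the archive: this restriper work landed in mainline as the filters on btrfs balance, which is how profile conversion and pause/resume are driven today. A sketch:

    # Convert data and metadata to the raid1 profile, with progress/pause:
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
    btrfs balance status /mnt
    btrfs balance pause /mnt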
2011 Mar 29
4
VMware vSphere Hypervisor (free ESXi) and mdraid
Can I combine VMware ESXi (free version) virtualization and CentOS mdraid level 1? Any pointers on how to do it? I have never used VMware before. - Jussi -- Jussi Hirvi * Green Spot Topeliuksenkatu 15 C * 00250 Helsinki * Finland Tel. +358 9 493 981 * Mobile +358 40 771 2098 (only sms) jussi.hirvi at greenspot.fi * http://www.greenspot.fi
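One workable combination, hedged: free ESXi itself offers no software RAID, so give the CentOS guest two virtual disks backed by different physical datastores and mirror inside the guest. Guest-side sketch:

    # /dev/sdb and /dev/sdc are the two virtual disks (names hypothetical).
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    mkfs.xfs /dev/md0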