similar to: md raid 10

Displaying 20 results from an estimated 5000 matches similar to: "md raid 10"

2010 Jun 28
3
CentOS MD RAID 1 on Openfiler iSCSI
Has anybody tried, or does anybody know, whether it is possible to create an MD RAID1 device using networked iSCSI devices like those created using OpenFiler? The idea I'm thinking of here is to use two OpenFiler servers with physical drives in RAID 1 to create iSCSI virtual devices, and run CentOS guest VMs off the MD RAID 1 device. Since, theoretically, this setup would survive both a single physical drive
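
A minimal sketch of what that setup might look like from the CentOS initiator side, assuming hypothetical target IQNs, portal addresses, and device names:

    # Log in to the two iSCSI targets exported by the OpenFiler servers
    # (target IQNs and portal addresses below are hypothetical).
    iscsiadm -m node -T iqn.2010-06.com.example:filer1.vol0 -p 192.168.1.11 --login
    iscsiadm -m node -T iqn.2010-06.com.example:filer2.vol0 -p 192.168.1.12 --login

    # Assuming the sessions appear as /dev/sdb and /dev/sdc, mirror them:
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

    # A write-intent bitmap keeps resyncs short after a network hiccup:
    mdadm --grow /dev/md0 --bitmap=internal
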
2012 Nov 29
2
Data Cleaning -New user coming from SAS
Hello, this is my first post. I have a large CSV file where I need to fill in the 1st and 2nd columns with a Loan # and Account Name that would be found in a line of text like this: ,,Loan #:,ML-113-07,Account Name:, Quilting Boutique,,,,,,,,,,, I would like to place the Loan # ML-113-07 in the first column and the account name Quilting Boutique in the second column. If possible I would also
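
The thread is about R, but the same fill-in can be sketched in shell with awk (the field positions and file names below are assumptions based on the sample line):

    # Hypothetical sketch: find the values following "Loan #:" and
    # "Account Name:" on each line and copy them into columns 1 and 2.
    awk -F',' '{
        for (i = 1; i <= NF; i++) {
            if ($i == "Loan #:")       loan = $(i + 1)
            if ($i == "Account Name:") name = $(i + 1)
        }
        $1 = loan; $2 = name
        print
    }' OFS=',' input.csv > output.csv
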
2017 Feb 15
3
RAID questions
Hello, Just a couple questions regarding RAID. Here's the situation. I bought a 4TB drive before I upgraded from 6.8 to 7.3. I'm not so far into this that I can't start over. I wanted disk space to back up 3 other machines. I way overestimated what I needed for full, incremental and image backups with UrBackup. I've used less than 1TB so far. I would like to add an additional drive
2023 Jan 12
2
Upgrading system from non-RAID to RAID1
On 01/11/2023 01:33 PM, H wrote: > On 01/11/2023 02:09 AM, Simon Matter wrote: >> What I usually do is this: "cut" the large disk into several pieces of >> equal size and create individual RAID1 arrays. Then add them as LVM PVs to >> one large VG. The advantage is that with one error on one disk, you won't >> lose redundancy on the whole RAID mirror but only on
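
A minimal sketch of the layout Simon describes, assuming a hypothetical partition scheme on two disks:

    # Pair up equal-sized partitions from the two disks into small RAID1
    # arrays (partition layout is an assumption for illustration).
    mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md11 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mdadm --create /dev/md12 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

    # Add every array as a PV and pool them into one volume group:
    pvcreate /dev/md10 /dev/md11 /dev/md12
    vgcreate vg_data /dev/md10 /dev/md11 /dev/md12

A bad sector then only degrades the one segment it lives on, while the rest of the mirrors stay redundant.
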
2017 Apr 14
2
Possible bug with latest 7.3 installer, md RAID1, and SATADOM.
I'm seeing a problem that I think may be a bug in the mdraid software on the latest CentOS installer. I have a couple of new Supermicro servers and each system has two Innodisk 32GB SATADOMs that are experiencing the same issue. I used the latest CentOS-7-x86_64-1611 to install a simple RAID1 for the root onto the two SATADOMs. The install goes just fine but when I boot off the new
2023 Jan 12
1
Upgrading system from non-RAID to RAID1
> Follow-up question: Is my proposed strategy below correct: > - Make a copy of all existing directories and files on the current disk using Clonezilla. > - Install the new M.2 SSDs. > - Partition the new SSDs for RAID1 using an external tool. > - Do a minimal installation of C7 and mdraid. > - If choosing three RAID partitions, one for /boot, one for /boot/efi and the
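
For the three-partition variant at the end of that list, a hedged sketch (NVMe partition names are hypothetical; metadata 1.0 is one common choice for the ESP mirror, since it sits at the end of the members and leaves the partition readable as plain FAT by the firmware):

    # p1 = /boot/efi, p2 = /boot, p3 = / (assumed layout)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/nvme0n1p1 /dev/nvme1n1p1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/nvme0n1p2 /dev/nvme1n1p2
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/nvme0n1p3 /dev/nvme1n1p3
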
2015 Feb 10
2
CentOS 7 : create RAID arrays manually using mdadm --create ?
On Tue, Feb 10, 2015 at 1:54 PM, Chris Murphy <lists at colorremedies.com> wrote: > > - I would not put swap on an md device, I'd just put a plain swap > partition on each device; first create two swap mountpoints. If one of the devices fails, doesn't that mean that any processes with swap on the associated space will be killed? Avoiding that is kind of the point of
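
The objection here is that mirrored swap survives a member failure while plain per-disk swap does not. A minimal sketch, assuming hypothetical swap partitions:

    # Mirror swap so that losing one disk doesn't kill processes whose
    # pages are swapped out (device names are hypothetical):
    mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5
    mkswap /dev/md3
    swapon /dev/md3
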
2017 Feb 17
3
RAID questions
On 2017-02-15, John R Pierce <pierce at hogranch.com> wrote: > On 2/14/2017 4:48 PM, tdukes at palmettoshopper.com wrote: > >> 3 - Can additional drive(s) be added later with a change in RAID level >> without current data loss? > > Only some systems support that sort of restriping, and it's a dangerous > activity (if the power fails or system crashes midway through
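
mdadm can do such a restripe in place; the backup file is what guards the critical window the post warns about. A hedged sketch, assuming a 2-disk RAID1 being converted to a 3-disk RAID5 (device names hypothetical):

    # Add the new member, convert the level, then reshape onto 3 devices.
    mdadm /dev/md0 --add /dev/sdc1
    mdadm --grow /dev/md0 --level=5
    mdadm --grow /dev/md0 --raid-devices=3 --backup-file=/root/md0-reshape.bak

    # Watch progress; an interrupted reshape resumes from the backup file:
    cat /proc/mdstat
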
2011 Apr 13
1
Expanding RAID 10 array, WAS: 40TB File System Recommendations
On 4/13/11, Rudi Ahlers <Rudi at softdux.com> wrote: >> to expand the array :) > > I haven't had problems doing it this way yet. I finally figured out my mistake creating the raid devices and got a working RAID 0 on two RAID 1 arrays. But I wasn't able to add another RAID 1 component to the array; it failed with the error "mdadm: add new device failed for /dev/md/mdr1_3 as 2":
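
For reference, a sketch of the working nested layout described (RAID 0 across two RAID 1 pairs; device names hypothetical):

    # Two RAID 1 pairs, then a RAID 0 stripe across them:
    mdadm --create /dev/md/mdr1_1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md/mdr1_2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
    mdadm --create /dev/md/mdr0 --level=0 --raid-devices=2 /dev/md/mdr1_1 /dev/md/mdr1_2
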
2023 Jan 12
1
Upgrading system from non-RAID to RAID1
> On 01/11/2023 01:33 PM, H wrote: >> On 01/11/2023 02:09 AM, Simon Matter wrote: >>> What I usually do is this: "cut" the large disk into several pieces of >>> equal size and create individual RAID1 arrays. Then add them as LVM PVs to >>> one large VG. The advantage is that with one error on one disk, you won't
2010 Jun 11
1
Linux software RAID 1.2 superblocks
Hi, Just to bring one more Debian concern to the Syslinux table: the default metadata format in upstream mdadm changed to 1.2, which means MD superblocks at the beginning of the partition, after a 4 KB hole. Is our favorite bootloader prepared to handle such situations? -- Thanks, Feri.
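
For context, the superblock position is chosen at array creation time: formats 0.90 and 1.0 sit at the end of each member, 1.1 at the very start, and 1.2 at a 4 KiB offset from the start. A sketch, assuming hypothetical member partitions:

    # Pin the metadata format so the superblock sits where the
    # bootloader expects it:
    mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1

    # Check what an existing array uses:
    mdadm --detail /dev/md0 | grep Version
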
2008 Aug 29
3
new software raid installs
I have noticed that when I do a software raid install (RAID1), one of the first things it says when I reboot is that md1 is not in sync, doing background reconstruction... md0 is my /root partition, md1 is my /home partition. Why would md1 not be in sync after an install? Jerry
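
That message is normal: a freshly created mirror always performs one initial resync of its members in the background. To watch it finish (array name taken from the post):

    cat /proc/mdstat
    mdadm --detail /dev/md1 | grep -E 'State|Rebuild'
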
2017 Sep 08
0
CentOS 7 on PPC64le booting from a MD RAID volume
I've been trying to install CentOS 7 AltArch ppc64le onto a new Power 8 system and I want to configure mdraid for the volumes. I can get everything working if I install to a single disk, but when I configure for RAID 1, the system fails to boot. So, is mdraid1 supported for booting a Power 8 system? Tom Leach leach at coas.oregonstate.edu
2023 Jan 11
2
Upgrading system from non-RAID to RAID1
On 01/11/2023 02:09 AM, Simon Matter wrote: > What I usually do is this: "cut" the large disk into several pieces of > equal size and create individual RAID1 arrays. Then add them as LVM PVs to > one large VG. The advantage is that with one error on one disk, you won't > lose redundancy on the whole RAID mirror but only on a partial segment. > You can even lose another
2009 Nov 19
3
New install RAID 1+0
I have a new server to set up: 4 hard drives, and I had intended it to be hardware raid but that's a long story. Does it make sense to set up the first two hard drives with RAID-0 partitions, get through the install, then go back later and create identically sized RAID-0 partitions on the other two drives, and finally create the RAID-1 mirror from the first pair to the second? Craig
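
Worth noting: mdadm also has a native RAID10 level that builds the equivalent of mirror-plus-stripe over four disks in a single array, avoiding the nesting entirely. A sketch with hypothetical device names:

    # Native md RAID10 over four disks, as an alternative to nesting:
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[abcd]1
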
2011 Jul 18
2
Kernel 2.6.32.41 raid bug on squeeze
Hi, I'm having issues on intel and amd: when booting it doesn't assemble the raid devices. It seems to work ok in Lenny but when you upgrade to squeeze it won't boot. Anyone else with same issue? Ian
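
One common cause on Debian when arrays stop assembling at boot is array definitions missing from the initramfs; a hedged sketch of the usual check (assuming the standard Debian mdadm.conf location):

    # Record the arrays in mdadm.conf and rebuild the initramfs so the
    # boot environment knows how to assemble them:
    mdadm --examine --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u
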
2010 Mar 04
1
removing a md/software raid device
Hello folks, I successfully stopped the software RAID. How can I delete the ones found on scan? I also see them in dmesg.

    [root at extragreen ~]# mdadm --stop --scan ; echo $?
    0
    [root at extragreen ~]# mdadm --examine --scan
    ARRAY /dev/md0 level=raid5 num-devices=4 UUID=89af91cb:802eef21:b2220242:b05806b5
    ARRAY /dev/md0 level=raid6 num-devices=4 UUID=3ecf5270:339a89cf:aeb092ab:4c95c5c3
    [root
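
Stopping an array only deactivates it; the on-disk superblocks keep turning up in --examine --scan until they are wiped. A sketch, with hypothetical member devices:

    # Wipe the md superblocks from each former member:
    mdadm --zero-superblock /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1

    # Verify nothing is found any more:
    mdadm --examine --scan
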
2005 Dec 02
1
FIXED Re: Re: MD Raid 1 software device not booting not even reaching grub
Doing grub-install /dev/sda will give me the "corresponding BIOS device" error. But now I fixed it by doing a manual grub install. First boot with cd1 and type linux rescue at the prompt. When you're at the linux prompt, after detecting and mounting the partitions, do a "chroot /mnt/sysimage", then:

    # grub --batch
    grub> root (hd0,0)
    Filesystem type is ext2fs,
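
The message is cut off at this point; in a typical grub-legacy rescue session of this kind, the remaining steps would be (a hedged sketch, not from the original message):

    grub> setup (hd0)
    grub> quit

followed by exiting the chroot and rebooting.
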
2015 Feb 18
2
CentOS 7: software RAID 5 array with 4 disks and no spares?
On 18/02/2015 23:12, Chris Murphy wrote: > "installer is organized around mount points" is correct, and what gets > mounted on mount points? Volumes, not partitions. Says who? -- Microlinux - 100% Linux and free-software IT solutions 7, place de l'Église - 30730 Montpezat Web : http://www.microlinux.fr Mail : info at microlinux.fr Tél. : 04 66 63 10 32
2016 Oct 26
1
"Shortcut" for creating a software RAID 60?
On 10/25/2016 11:54 AM, Gordon Messmer wrote: > If you built a RAID0 array of RAID6 arrays, then you'd fail a disk by > marking it failed and removing it from whichever RAID6 array it was a > member of, in the same fashion as you'd remove it from any other array > type. FWIW, what I've done in the past is build the raid 6's with mdraid, then use LVM to stripe them
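
A minimal sketch of that layout (two mdadm RAID 6 arrays striped together by LVM rather than an md RAID 0 layer; member names hypothetical):

    # Two RAID 6 arrays built with mdadm:
    mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]1
    mdadm --create /dev/md1 --level=6 --raid-devices=6 /dev/sd[h-m]1

    # Stripe across them with LVM:
    pvcreate /dev/md0 /dev/md1
    vgcreate vg_r60 /dev/md0 /dev/md1
    lvcreate --type striped -i 2 -l 100%FREE -n data vg_r60
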