Displaying 20 results from an estimated 5000 matches similar to: "Recover RAID"
2002 Mar 02
4
ext3 on Linux software RAID1
Everyone,
We just had a pretty bad crash on one of our production boxes and the ext2
filesystem on the data partition of our box had some major filesystem
corruption. Needless to say, I am now looking into converting the
filesystem to ext3 and I have some questions regarding ext3 and Linux
software RAID.
I have read that previously there were some issues running ext3 on a
software raid device
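The conversion itself is normally just a matter of adding a journal in place; a minimal sketch, assuming the data partition is the hypothetical /dev/md0:
# Add an ext3 journal to the existing ext2 filesystem (non-destructive)
tune2fs -j /dev/md0
# Then change the filesystem type in /etc/fstab from ext2 to ext3 and remount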
2008 Feb 01
2
RAID Hot Spare
I've googled this question without finding a great deal of information.
Monday I'm rebuilding a Linux server at work. Instead of purchasing 3
drives for this system I purchased 4 with the intent to create a hot spare.
Here is my usual setup, which I'll do again but with a hot spare for each
partition.
Create /dev/md0 mount point /boot RAID1 3 drives with 1 hot spare
Create two more raid setups
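A minimal sketch of one such array with its hot spare, assuming the hypothetical partitions /dev/sda1 through /dev/sdd1:
# Three-way RAID1 for /boot plus one hot spare that takes over on failure
mdadm --create /dev/md0 --level=1 --raid-devices=3 --spare-devices=1 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1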
2007 Mar 06
1
blocks 256k chunks on RAID 1
Hi, I have a RAID 1 (using mdadm) on CentOS Linux and in /proc/mdstat I
see this:
md7 : active raid1 sda2[0] sdb2[1]
26627648 blocks [2/2] [UU] [-->> it's OK]
md1 : active raid1 sdb3[1] sda3[0]
4192896 blocks [2/2] [UU] [-->> it's OK]
md2 : active raid1 sda5[0] sdb5[1]
4192832 blocks [2/2] [UU] [-->> it's OK]
md3 : active raid1 sdb6[1] sda6[0]
4192832 blocks [2/2]
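For more detail than /proc/mdstat shows, mdadm can report per-array state; a small sketch, using md7 from the output above:
# Show array state, member disks and any rebuild progress
mdadm --detail /dev/md7
# Follow a resync live
watch cat /proc/mdstat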
2012 Mar 29
3
RAID-10 vs Nested (RAID-0 on 2x RAID-1s)
Greetings-
I'm about to embark on a new installation of CentOS 6 x64 on 4x SATA HDDs. The plan is to use RAID-10 as a nice combination of data security (RAID1) and speed (RAID0). However, I'm finding either a lack of raw information on the topic, or I'm having a mental block that is preventing the osmosis of the implementation into my brain.
Option #1:
My understanding of RAID10 using 4
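For comparison, the single-layer variant that md calls raid10 is created in one step; a minimal sketch, assuming four hypothetical partitions:
# Native md RAID10 across four disks (default near-2 layout)
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1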
2007 Apr 02
2
Software RAID 10?
Hello...
I have a server with 4 x SCSI drives and I would like to install Centos
4 (or 5) onto a software RAID 10 array. Do you know if this is
possible? I noticed that under the Centos 4.92 beta, RAID 5 is an option
but for some reason RAID 10 is not listed.
There does appear to be a RAID 10 module....
/lib/modules/2.6.9-42.0.8.ELsmp/kernel/drivers/md/raid10.ko
More info I found here:
2016 Oct 25
3
"Shortcut" for creating a software RAID 60?
Hello all,
Testing stuff virtually over here before taking it to the physical servers.
I found a shortcut for creating a software RAID 10 ("--level=10") device in
CentOS 6.
Looking at the below, I don't see anything about a shortcut for RAID 60.
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/s1-raid-levels.html
Is RAID
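As far as I know, mdadm has no --level=60 shortcut, so the usual route is to nest by hand; a sketch, assuming twelve hypothetical member partitions:
# Two 6-disk RAID6 legs
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]1
mdadm --create /dev/md1 --level=6 --raid-devices=6 /dev/sd[h-m]1
# Stripe the two legs together to get a RAID60
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1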
2020 Nov 05
3
BIOS RAID0 and differences between disks
My computer running CentOS 7 is configured to use BIOS RAID0 and has two identical SSDs, which are also encrypted. I had a crash the other day and, due to a bug in the operating system update, I am unable to boot the system in RAID mode since dracut does not recognize the disks in grub. After modifying the grub command line I am able to boot the system from one of the hard disks after entering the
2010 Mar 26
23
RAID10
Hi All,
I am looking at ZFS and I get that they call it RAIDZ, which is similar to RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data protection?
So if I have 8 x 1.5TB drives, wouldn't I:
- mirror drive 1 and 5
- mirror drive 2 and 6
- mirror drive 3 and 7
- mirror drive 4 and 8
Then stripe 1,2,3,4
Then stripe 5,6,7,8
How does one do this with ZFS?
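In ZFS the equivalent is a pool built from several mirror vdevs, which the pool then stripes across automatically; a minimal sketch, assuming the hypothetical device names c0t1d0 through c0t8d0 and a pool called tank:
# Four 2-way mirrors, paired as described above; writes are striped over all four
zpool create tank mirror c0t1d0 c0t5d0 mirror c0t2d0 c0t6d0 \
                  mirror c0t3d0 c0t7d0 mirror c0t4d0 c0t8d0
zpool status tank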
2009 Sep 08
4
Can ZFS simply concatenate LUNs (eg no RAID0)?
Hi,
I have a disk array that is providing striped LUNs to my Solaris box. Hence I'd like to simply concatenate those LUNs without adding another layer of striping.
Is this possible with ZFS?
As far as I understood, if I use
zpool create myPool lun-1 lun-2 ... lun-n
I will get a RAID0 striping where each data block is split across all "n" LUNs.
If that's
2006 Jun 24
3
recover data from linear raid
Hello,
I had a Scientific Linux 3.0.4 system (RHEL compatible), with 3
IDE disks: one for / and two others in a linear raid (250 GB and 300 GB
respectively).
This system was obsoleted, so I moved the raid disks to a new
Scientific Linux 3.0.7 installation. However, the raid array was not
detected (I put the disks on the same channels and the same master/slave
setup as in the previous setup). In fact
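When an old array is not picked up automatically, the usual first step is to read the md superblocks and assemble it by hand; a rough sketch, assuming the hypothetical member partitions /dev/hdb1 and /dev/hdd1:
# Inspect the md superblock on each member disk
mdadm --examine /dev/hdb1 /dev/hdd1
# Assemble the linear array explicitly instead of relying on autodetection
mdadm --assemble /dev/md0 /dev/hdb1 /dev/hdd1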
2012 May 25
1
Installing CIFS on CentOS4
Hello,
I have a CentOS4 install and I am trying to mount a Windows Server 2008
folder. When I use this command:
mount -t cifs //10.1.1.17/Org/MR\ Ops/ test, it only mounts the Org folder.
When I try the same thing on a newer computer (Fedora 15), it mounts all
the way to MR Ops. So I am pretty sure that my cifs file needs to be
updated, but I am having a really hard time doing this. I updated
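One workaround when the client cannot mount a subdirectory of a share directly is to mount the share itself and reach the folder beneath the mount point; a sketch with hypothetical credentials:
mkdir -p /mnt/org
# Mount the Org share, then access the subfolder with the space under it
mount -t cifs //10.1.1.17/Org /mnt/org -o username=winuser,password=secret
ls "/mnt/org/MR Ops"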
2020 Sep 10
3
Btrfs RAID-10 performance
I cannot verify it, but I think that even a JBOD is presented as a
virtual device. If you create a JBOD from 3 different disks, the low-level
parameters may differ.
And old firmware is probably the reason we used RAID-0 two or three
years ago.
Thank you for the ideas.
Kind regards
Milo
On 10.09.2020 at 16:15, Scott Q. wrote:
> Actually there is, filesystems like ZFS/BTRFS prefer to see
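For reference, the filesystem-level RAID10 being compared here is chosen at mkfs time; a minimal sketch, assuming four hypothetical devices:
# Btrfs RAID10 for both data and metadata across four devices
mkfs.btrfs -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# After mounting, confirm the data/metadata profiles
btrfs filesystem df /mnt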
2007 May 27
1
raid
Hi,
I have 4 x 500GB PATA hard disks and I would like to have them striped and mirrored.
What is the best practice: RAID 0+1 or 1+0? And how do I go about it?
Thanks
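RAID 1+0 (mirror first, then stripe) is generally preferred over 0+1 because it survives more multi-disk failure combinations; a sketch of the nested form with mdadm, assuming hypothetical partitions on the four PATA disks:
# Two mirrored pairs
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hdb1 /dev/hdd1
# Stripe across the two mirrors (RAID 1+0)
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1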
2006 Apr 02
2
raid setup
Hi,
I have 2 identical xSeries 346 servers with 2 identical IBM 72GB SCSI drives each. What I
did was install the CentOS 4.2 server CD on the first one and set the disks up as
raid1, with raid0 for swap. Then I took the 2nd HDD from the 1st
server, swapped it with the 1st HDD in the 2nd server, and rebuilt the RAIDs. The
1st server rebuilt the array fine. My problem is the second server: after
rebuilding it and
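The rebuild on each box normally amounts to re-adding the swapped-in disk's partitions to the degraded arrays and letting md resync; a sketch, assuming the hypothetical members /dev/sdb1 and /dev/sdb2:
# Re-add the foreign disk's partitions to the existing arrays
mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdb2
# Watch the resync
cat /proc/mdstat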
2020 Nov 05
1
BIOS RAID0 and differences between disks
> On Nov 4, 2020, at 9:21 PM, John Pierce <jhn.pierce at gmail.com> wrote:
>
> is it RAID 0 (striped) or raid1 (mirrored) ??
>
> if you wrote on half of a raid0 stripe set, you basically trashed it.
> blocks are striped across both drives, so like 16k on the first disk, then
> 16k on the 2nd then 16k back on the first, repeat (replace 16k with
> whatever your raid
2020 Nov 12
1
BIOS RAID0 and differences between disks
On 11/04/2020 10:21 PM, John Pierce wrote:
> is it RAID 0 (striped) or raid1 (mirrored) ??
>
> if you wrote on half of a raid0 stripe set, you basically trashed it.
> blocks are striped across both drives, so like 16k on the first disk, then
> 16k on the 2nd then 16k back on the first, repeat (replace 16k with
> whatever your raid stripe size is).
>
> if its a raid 1
2015 Mar 17
3
unable to recover software raid1 install
Hello All,
on a Centos5 system installed with software raid I'm getting:
raid1: raid set md127 active with 2 out of 2 mirrors
md:.... autorun DONE
md: Autodetecting RAID arrays
md: autorun.....
md : autorun DONE
trying to resume from /dev/md1
creating root device
mounting root device
mounting root filesystem
ext3-fs : unable to read superblock
mount :
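When the primary ext3 superblock cannot be read, a common next step is to try a backup superblock; a sketch, assuming the hypothetical root array /dev/md0:
# List where the backup superblocks would live (-n makes no changes)
mke2fs -n /dev/md0
# Check the filesystem using a backup superblock (32768 is typical for 4k blocks)
e2fsck -b 32768 /dev/md0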
2016 Oct 26
1
"Shortcut" for creating a software RAID 60?
On 10/25/2016 11:54 AM, Gordon Messmer wrote:
> If you built a RAID0 array of RAID6 arrays, then you'd fail a disk by
> marking it failed and removing it from whichever RAID6 array it was a
> member of, in the same fashion as you'd remove it from any other array
> type.
FWIW, what I've done in the past is build the raid 6's with mdraid, then
use LVM to stripe them
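That md-plus-LVM layering looks roughly like this; a sketch, assuming the two hypothetical RAID6 arrays /dev/md0 and /dev/md1:
# Turn each RAID6 array into a physical volume and group them
pvcreate /dev/md0 /dev/md1
vgcreate vg_data /dev/md0 /dev/md1
# Logical volume striped across both arrays (2 stripes, 256 KiB stripe size)
lvcreate -i 2 -I 256 -n lv_data -l 100%FREE vg_data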
2012 Jan 17
8
[RFC][PATCH 1/2] Btrfs: try to allocate new chunks with degenerated profile
If there is no free space, the free space allocator will try to get space from
the block group with the degenerated profile. For example, if there is no free
space in the RAID1 block groups, the allocator will try to allocate space from
the DUP block groups. Besides that, the space reservation has similar
behaviour: if there is not enough space in the space cache to reserve, it will
reserve
2009 Dec 31
3
Lost mdadm.conf
Hi,
I lost my mdadm.conf (and /proc/mdstat shows nothing useful) and I'd like to
mount the filesystem again. So I've booted using rescue but I was wondering
if I can do a command like this safely (i.e. without losing the data
previously stored).
mdadm -C /dev/md0 --level=raid0 --raid-devices=2 /dev/sda1 /dev/sdb1
Where of course the raid devices and the /dev/x are the correct ones
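Rather than re-creating the array (mdadm -C writes fresh superblocks), the safer first step is usually to assemble from the superblocks already on disk and regenerate the config from them; a sketch:
# Assemble the existing array from its member superblocks
mdadm --assemble --scan
# Regenerate mdadm.conf from what was found
mdadm --examine --scan >> /etc/mdadm.conf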