similar to: mdadm update

Displaying 20 results from an estimated 30000 matches similar to: "mdadm update"

2019 Jan 31
0
C7, mdadm issues
> On 30/01/19 16:49, Simon Matter wrote: >>> On 01/30/19 03:45, Alessandro Baggi wrote: >>>> On 29/01/19 20:42, mark wrote: >>>>> Alessandro Baggi wrote: >>>>>> On 29/01/19 18:47, mark wrote: >>>>>>> Alessandro Baggi wrote: >>>>>>>> On 29/01/19 15:03, mark wrote:
2019 Jan 30
3
C7, mdadm issues
On 30/01/19 16:49, Simon Matter wrote: >> On 01/30/19 03:45, Alessandro Baggi wrote: >>> On 29/01/19 20:42, mark wrote: >>>> Alessandro Baggi wrote: >>>>> On 29/01/19 18:47, mark wrote: >>>>>> Alessandro Baggi wrote: >>>>>>> On 29/01/19 15:03, mark wrote: >>>>>>>
2019 Jan 30
0
C7, mdadm issues
On 30/01/19 14:02, mark wrote: > On 01/30/19 03:45, Alessandro Baggi wrote: >> On 29/01/19 20:42, mark wrote: >>> Alessandro Baggi wrote: >>>> On 29/01/19 18:47, mark wrote: >>>>> Alessandro Baggi wrote: >>>>>> On 29/01/19 15:03, mark wrote: >>>>>> >>>>>>> I've no idea what
2019 Jan 30
0
C7, mdadm issues
On 30/01/19 16:33, mark wrote: > Alessandro Baggi wrote: >> On 30/01/19 14:02, mark wrote: >>> On 01/30/19 03:45, Alessandro Baggi wrote: >>>> On 29/01/19 20:42, mark wrote: >>>>> Alessandro Baggi wrote: >>>>>> On 29/01/19 18:47, mark wrote: >>>>>>> Alessandro Baggi wrote:
2019 Jan 30
1
C7, mdadm issues
Alessandro Baggi wrote: > On 30/01/19 16:33, mark wrote: > >> Alessandro Baggi wrote: >> >>> On 30/01/19 14:02, mark wrote: >>> >>>> On 01/30/19 03:45, Alessandro Baggi wrote: >>>> >>>>> On 29/01/19 20:42, mark wrote: >>>>> >>>>>> Alessandro Baggi wrote:
2019 Jan 30
2
C7, mdadm issues
Alessandro Baggi wrote: > On 30/01/19 14:02, mark wrote: >> On 01/30/19 03:45, Alessandro Baggi wrote: >>> On 29/01/19 20:42, mark wrote: >>>> Alessandro Baggi wrote: >>>>> On 29/01/19 18:47, mark wrote: >>>>>> Alessandro Baggi wrote: >>>>>>> On 29/01/19 15:03, mark wrote:
2019 Jan 29
0
C7, mdadm issues
On 29/01/19 18:47, mark wrote: > Alessandro Baggi wrote: >> On 29/01/19 15:03, mark wrote: >> >>> I've no idea what happened, but the box I was working on last week has >>> a *second* bad drive. Actually, I'm starting to wonder about that >>> particular hot-swap bay. >>> >>> Anyway, mdadm --detail shows /dev/sdb1
2019 Jan 30
4
C7, mdadm issues
On 01/30/19 03:45, Alessandro Baggi wrote: > On 29/01/19 20:42, mark wrote: >> Alessandro Baggi wrote: >>> On 29/01/19 18:47, mark wrote: >>>> Alessandro Baggi wrote: >>>>> On 29/01/19 15:03, mark wrote: >>>>> >>>>>> I've no idea what happened, but the box I was working on last week
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote: > On 29/01/19 18:47, mark wrote: >> Alessandro Baggi wrote: >>> On 29/01/19 15:03, mark wrote: >>> >>>> I've no idea what happened, but the box I was working on last week >>>> has a *second* bad drive. Actually, I'm starting to wonder about >>>> that particular hot-swap bay. >>>>
2019 Jan 30
0
C7, mdadm issues
On 29/01/19 20:42, mark wrote: > Alessandro Baggi wrote: >> On 29/01/19 18:47, mark wrote: >>> Alessandro Baggi wrote: >>>> On 29/01/19 15:03, mark wrote: >>>> >>>>> I've no idea what happened, but the box I was working on last week >>>>> has a *second* bad drive. Actually, I'm starting to wonder about
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote: > On 29/01/19 15:03, mark wrote: > >> I've no idea what happened, but the box I was working on last week has >> a *second* bad drive. Actually, I'm starting to wonder about that >> particular hot-swap bay. >> >> Anyway, mdadm --detail shows /dev/sdb1 removed. I've added /dev/sdi1... >> but see both /dev/sdh1 and
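The usual md replacement sequence for a situation like this is short; here is a minimal sketch, assuming the array is /dev/md0 and the member names (/dev/sdb1 failed, /dev/sdi1 replacement) are only illustrative:

    mdadm --detail /dev/md0                      # confirm which member is failed/removed
    mdadm --manage /dev/md0 --fail /dev/sdb1     # mark it failed, if md has not done so already
    mdadm --manage /dev/md0 --remove /dev/sdb1   # detach it from the array
    mdadm --manage /dev/md0 --add /dev/sdi1      # add the replacement; the rebuild starts at once
    cat /proc/mdstat                             # watch the resync progress

If an old member keeps showing up afterwards, mdadm --examine on that partition shows whether it still carries a stale md superblock, which mdadm --zero-superblock can clear (only on a device you are certain is no longer part of any array).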
2019 Jan 30
0
C7, mdadm issues
> On 01/30/19 03:45, Alessandro Baggi wrote: >> On 29/01/19 20:42, mark wrote: >>> Alessandro Baggi wrote: >>>> On 29/01/19 18:47, mark wrote: >>>>> Alessandro Baggi wrote: >>>>>> On 29/01/19 15:03, mark wrote: >>>>>> >>>>>>> I've no idea what happened, but the box I was working
2007 Jun 10
1
mdadm Linux Raid 10: is it 0+1 or 1+0?
The relevance of this question can be found here: http://aput.net/~jheiss/raid10/ I read the mdadm documents but I could not find a positive answer. I even read the raid10 module source but I didn't find the answer there either. Does someone here know it? Thank you!
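For what it's worth, the md raid10 personality is a single driver rather than literal nesting, and the layout it uses is visible from mdadm itself; a small sketch, with array and device names purely hypothetical:

    mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=4 /dev/sd[b-e]1
    mdadm --detail /dev/md0 | grep -i layout      # prints something like "Layout : near=2"

With the default near=2 layout on four disks, the two copies of each chunk land on adjacent devices, so in terms of which disks may fail together it behaves like a stripe over mirrored pairs (1+0) rather than a mirror of stripes (0+1).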
2019 Sep 30
1
CentOS 8 broken mdadm Raid10
Hello, on my system with an Intel SCU controller and a RAID 10 volume it is not possible to install onto this RAID10. I tested this with CentOS 7 and openSUSE and both found my RAID, but with CentOS 8 it is broken? At the start of the installation I get an error from mdadm, that is all. Now I am downloading and testing the Stream iso? and hoping ..... -- with kind regards / best regards Günther J,
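When the CentOS 8 installer trips over an Intel SCU/IMSM volume like this, it can help to see what mdadm itself reports from a shell before assuming the RAID is lost; a hedged diagnostic sketch, device names illustrative:

    mdadm --detail-platform           # what the IMSM/RST firmware advertises to the OS
    mdadm --examine /dev/sda          # look for IMSM container metadata on a member disk
    cat /proc/mdstat                  # was the container/volume assembled at all?
    dmesg | grep -i -e isci -e md     # driver and md messages from boot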
2013 Oct 04
1
btrfs raid0
How can I verify the read speed of a btrfs raid0 pair in Arch Linux? I assume raid0 means striped activity in parallel, at least similar to raid0 in mdadm. How can I measure the btrfs read speed, since it is copy-on-write, which is not the norm in mdadm raid0? Perhaps I cannot use the same approach in btrfs to determine the performance. Secondly, I see a methodology for raid10 using
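One way to get a comparable sequential-read number out of a btrfs raid0 pair is simply to stream a large file back with the page cache dropped; a minimal sketch, assuming the filesystem is mounted at /mnt/btrfs and all sizes and device names are illustrative:

    mkfs.btrfs -d raid0 -m raid1 /dev/sdb /dev/sdc        # data striped across both disks
    mount /dev/sdb /mnt/btrfs
    dd if=/dev/zero of=/mnt/btrfs/testfile bs=1M count=4096 conv=fsync   # write a 4 GiB test file
    echo 3 > /proc/sys/vm/drop_caches                     # force the next read to hit the disks
    dd if=/mnt/btrfs/testfile of=/dev/null bs=1M          # dd reports the read throughput

Running the same dd-plus-drop_caches pair against an mdadm raid0 gives a rough like-for-like comparison; copy-on-write mostly shows up in write and fragmentation behaviour, not in a single streaming read of a freshly written file.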
2020 Sep 18
0
Drive failed in 4-drive md RAID 10
> I got the email that a drive in my 4-drive RAID10 setup failed. What are my options?
>
> Drives are WD1000FYPS (Western Digital 1 TB 3.5" SATA).
>
> mdadm.conf:
>
> # mdadm.conf written out by anaconda
> MAILADDR root
> AUTO +imsm +1.x -all
> ARRAY /dev/md/root level=raid10 num-devices=4 UUID=942f512e:2db8dc6c:71667abc:daf408c3
>
2020 Sep 18
4
Drive failed in 4-drive md RAID 10
I got the email that a drive in my 4-drive RAID10 setup failed. What are my options?

Drives are WD1000FYPS (Western Digital 1 TB 3.5" SATA).

mdadm.conf:

# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/root level=raid10 num-devices=4 UUID=942f512e:2db8dc6c:71667abc:daf408c3

/proc/mdstat:

Personalities : [raid10]
md127 : active raid10 sdf1[2](F)
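Before pulling anything, it is worth mapping the member flagged (F) to a physical drive; a sketch of one way to do that, assuming smartmontools is installed and /dev/sdg1 stands in for whatever the replacement partition ends up being called:

    cat /proc/mdstat                                # sdf1 carries the (F) flag here
    mdadm --detail /dev/md127                       # confirms which slot is faulty
    ls -l /dev/disk/by-id/ | grep sdf               # maps sdf to a model/serial symlink
    smartctl -i /dev/sdf                            # double-check the serial before swapping
    mdadm --manage /dev/md127 --remove /dev/sdf1    # detach the failed member
    mdadm --manage /dev/md127 --add /dev/sdg1       # add the replacement and let it resync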
2014 Apr 07
3
Software RAID10 - which two disks can fail?
Hi All. I have a server which uses RAID10 made of 4 partitions for / and boots from it. It looks like so:

mdadm -D /dev/md1
/dev/md1:
        Version : 00.90
  Creation Time : Mon Apr 27 09:25:05 2009
     Raid Level : raid10
     Array Size : 973827968 (928.71 GiB 997.20 GB)
  Used Dev Size : 486913984 (464.36 GiB 498.60 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1
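Which two disks can go depends on the layout mdadm reports for the array; a small sketch of reading it off the output that is truncated above:

    mdadm --detail /dev/md1 | grep -E 'Layout|Raid Devices'
    mdadm --detail /dev/md1 | grep '/dev/sd'       # members listed in slot order

Assuming the common near=2 layout on four devices, slots 0/1 mirror each other and slots 2/3 mirror each other, so the array survives losing one disk from each pair but not both disks of the same pair.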
2012 Mar 29
3
RAID-10 vs Nested (RAID-0 on 2x RAID-1s)
Greetings- I'm about to embark on a new installation of Centos 6 x64 on 4x SATA HDDs. The plan is to use RAID-10 as a nice combo between data security (RAID1) and speed (RAID0). However, I'm finding either a lack of raw information on the topic, or I'm having a mental issue preventing the osmosis of the implementation into my brain. Option #1: My understanding of RAID10 using 4
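For comparison, the two builds being weighed look roughly like this with mdadm (all device and md names are illustrative):

    # Option 1: the md raid10 personality, one array over all four disks
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[a-d]1

    # Option 2: explicit nesting, two RAID1 pairs striped together with RAID0
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
    mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/md1 /dev/md2

The single --level=10 array is simpler to create and monitor, while the nested build makes the mirror pairs explicit; with the default near=2 layout the fault tolerance of the two is comparable.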
2009 Dec 10
3
raid10, centos 4.x
I just created a 4-drive mdadm --level=raid10 on a CentOS 4.8-ish system here, and shortly thereafter remembered I hadn't updated it in a while, so I ran yum update... while installing/updating stuff, got these errors:

Installing: kernel ####################### [14/69]
raid level raid10 (in /proc/mdstat) not recognized
...
Installing: kernel-smp
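That "raid level raid10 (in /proc/mdstat) not recognized" message appears during the kernel package's install scripts; one reading is that the CentOS 4-era mkinitrd simply does not know the raid10 level. A hedged diagnostic sketch (kernel version string would be whatever the new package installs):

    modinfo raid10                         # is the personality shipped with the installed kernel?
    grep Personalities /proc/mdstat        # the running kernel clearly has it loaded already
    df / && cat /proc/mdstat               # is the root filesystem actually on the raid10 array?

If / does not live on the raid10 array, the message during the update is reportedly harmless; if it does, the new kernel's initrd may need rebuilding by hand (for example mkinitrd -f --with=raid10 against the new kernel version) once mkinitrd has been taught about the raid10 level.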