
Displaying 20 results from an estimated 6000 matches similar to: "C7, mdadm issues"

2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote: > On 29/01/19 15:03, mark wrote: > >> I've no idea what happened, but the box I was working on last week has >> a *second* bad drive. Actually, I'm starting to wonder about that >> particular hot-swap bay. >> >> Anyway, mdadm --detail shows /dev/sdb1 removed. I've added /dev/sdi1... >> but see both /dev/sdh1 and
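
A minimal sketch of the usual inspect-and-re-add sequence; the array name /dev/md0 is a placeholder, while /dev/sdb1 and /dev/sdi1 come from the quoted message:

  # Check the array state and the metadata on the replacement partition
  mdadm --detail /dev/md0
  mdadm --examine /dev/sdi1
  # Add the replacement member and watch the resync
  mdadm --manage /dev/md0 --add /dev/sdi1
  cat /proc/mdstat
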
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote: > On 29/01/19 18:47, mark wrote: >> Alessandro Baggi wrote: >>> On 29/01/19 15:03, mark wrote: >>> >>>> I've no idea what happened, but the box I was working on last week >>>> has a *second* bad drive. Actually, I'm starting to wonder about >>>> that particular hot-swap bay. >>>>
2019 Jan 30
4
C7, mdadm issues
On 01/30/19 03:45, Alessandro Baggi wrote: > On 29/01/19 20:42, mark wrote: >> Alessandro Baggi wrote: >>> On 29/01/19 18:47, mark wrote: >>>> Alessandro Baggi wrote: >>>>> On 29/01/19 15:03, mark wrote: >>>>> >>>>>> I've no idea what happened, but the box I was working on last week
2019 Jan 30
2
C7, mdadm issues
Alessandro Baggi wrote: > On 30/01/19 14:02, mark wrote: >> On 01/30/19 03:45, Alessandro Baggi wrote: >>> On 29/01/19 20:42, mark wrote: >>>> Alessandro Baggi wrote: >>>>> On 29/01/19 18:47, mark wrote: >>>>>> Alessandro Baggi wrote: >>>>>>> On 29/01/19 15:03, mark wrote:
2019 Jan 30
1
C7, mdadm issues
Alessandro Baggi wrote: > On 30/01/19 16:33, mark wrote: > >> Alessandro Baggi wrote: >> >>> On 30/01/19 14:02, mark wrote: >>> >>>> On 01/30/19 03:45, Alessandro Baggi wrote: >>>> >>>>> On 29/01/19 20:42, mark wrote: >>>>> >>>>>> Alessandro Baggi wrote:
2019 Jan 30
3
C7, mdadm issues
On 30/01/19 16:49, Simon Matter wrote: >> On 01/30/19 03:45, Alessandro Baggi wrote: >>> On 29/01/19 20:42, mark wrote: >>>> Alessandro Baggi wrote: >>>>> On 29/01/19 18:47, mark wrote: >>>>>> Alessandro Baggi wrote: >>>>>>> On 29/01/19 15:03, mark wrote: >>>>>>>
2019 Jan 22
2
C7 and mdadm
A user's system had a hard drive failure over the weekend: Linux RAID 6. I identified the drive and brought the system down (8 drives, and I didn't know the s/n of the bad one; why it was there in the box, rather than where I started looking...). Brought it up, RAID not working. I finally found that I had to do an mdadm --stop /dev/md0, then I could do an assemble, then I could add the new
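
The sequence described there, as a rough sketch (array and device names are placeholders; --scan assumes the array is listed in mdadm.conf or carries recognizable superblocks):

  # Stop the half-assembled array, reassemble it, then add the new member
  mdadm --stop /dev/md0
  mdadm --assemble --scan
  mdadm --manage /dev/md0 --add /dev/sdX1
  cat /proc/mdstat
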
2023 Mar 30
1
Performance: lots of small files, hdd, nvme etc.
Well, you have *way* more files than we do... :) On 30/03/2023 11:26, Hu Bert wrote: > Just an observation: is there a performance difference between a sw > raid10 (10 disks -> one brick) or 5x raid1 (each raid1 a brick) Err... RAID10 is not 10 disks unless you stripe 5 mirrors of 2 disks. > with > the same disks (10TB hdd)? The heal processes on the 5xraid1-scenario >
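
For comparison, a sketch of the two layouts being discussed, with hypothetical device names (sdb..sdk):

  # One 10-disk RAID10 brick
  mdadm --create /dev/md0 --level=10 --raid-devices=10 /dev/sd[b-k]
  # ...versus five 2-disk RAID1 bricks
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd /dev/sde
  # ...and so on for the remaining three pairs
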
2017 Sep 28
1
upgrade to 3.12.1 from 3.10: df returns wrong numbers
Hi, When I upgraded my cluster, df started returning some odd numbers for my legacy volumes. For volumes newly created after the upgrade, df works just fine. I have been researching since Monday and have not found any reference to this symptom. "vm-images" is the old legacy volume, "test" is the new one. [root@st-srv-03 ~]# (df -h|grep bricks;ssh st-srv-02 'df -h|grep
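
One way to narrow this down is to compare what each brick reports locally with what the client mount shows; a sketch, where the brick and mount paths are placeholders and the volume name comes from the message:

  df -h /bricks/vm-images          # on each brick server
  gluster volume status vm-images detail
  df -h /mnt/vm-images             # on a client mount
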
2012 Nov 13
1
mdX and mismatch_cnt when building an array
CentOS 6.3, x86_64. I have noticed when building a new software RAID-6 array on CentOS 6.3 that the mismatch_cnt grows monotonically while the array is building:

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md11 : active raid6 sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
      3904890880 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
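
The same counters can be read from sysfs while the build runs; a sketch using the md11 name from the quoted output (mismatch_cnt is usually only meaningful after an explicit check pass, not during the initial resync):

  cat /sys/block/md11/md/sync_action
  cat /sys/block/md11/md/sync_completed
  cat /sys/block/md11/md/mismatch_cnt
  # After the initial build finishes, a scrub recomputes the counter
  echo check > /sys/block/md11/md/sync_action
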
2019 Jan 29
0
C7, mdadm issues
On 29/01/19 18:47, mark wrote: > Alessandro Baggi wrote: >> On 29/01/19 15:03, mark wrote: >> >>> I've no idea what happened, but the box I was working on last week has >>> a *second* bad drive. Actually, I'm starting to wonder about that >>> particular hot-swap bay. >>> >>> Anyway, mdadm --detail shows /dev/sdb1
2019 Jan 30
0
C7, mdadm issues
On 29/01/19 20:42, mark wrote: > Alessandro Baggi wrote: >> On 29/01/19 18:47, mark wrote: >>> Alessandro Baggi wrote: >>>> On 29/01/19 15:03, mark wrote: >>>> >>>>> I've no idea what happened, but the box I was working on last week >>>>> has a *second* bad drive. Actually, I'm starting to wonder about
2007 Aug 23
1
Transport endpoint not connected after crash of one node
Hi, I am on SLES 10 SP1, x86_64, running the distribution RPMs of OCFS2: ocfs2console-1.2.3-0.7, ocfs2-tools-1.2.3-0.7. I have a two-node ocfs2 cluster configured. One node died (manual reset), and the second immediately started having problems accessing the file system, with the following reason in the logs: Transport endpoint not connected. A mounted.ocfs2 on the still living
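
For checking the cluster side on the surviving node, a sketch using the tools shipped with ocfs2-tools of that era (the device path is a placeholder):

  /etc/init.d/o2cb status
  mounted.ocfs2 -f /dev/sdX1    # lists the nodes that have the volume mounted
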
2019 Jan 30
0
C7, mdadm issues
On 30/01/19 14:02, mark wrote: > On 01/30/19 03:45, Alessandro Baggi wrote: >> On 29/01/19 20:42, mark wrote: >>> Alessandro Baggi wrote: >>>> On 29/01/19 18:47, mark wrote: >>>>> Alessandro Baggi wrote: >>>>>> On 29/01/19 15:03, mark wrote: >>>>>> >>>>>>> I've no idea what
2019 Jan 30
0
C7, mdadm issues
On 30/01/19 16:33, mark wrote: > Alessandro Baggi wrote: >> On 30/01/19 14:02, mark wrote: >>> On 01/30/19 03:45, Alessandro Baggi wrote: >>>> On 29/01/19 20:42, mark wrote: >>>>> Alessandro Baggi wrote: >>>>>> On 29/01/19 18:47, mark wrote: >>>>>>> Alessandro Baggi wrote:
2019 Jan 30
0
C7, mdadm issues
> On 01/30/19 03:45, Alessandro Baggi wrote: >> On 29/01/19 20:42, mark wrote: >>> Alessandro Baggi wrote: >>>> On 29/01/19 18:47, mark wrote: >>>>> Alessandro Baggi wrote: >>>>>> On 29/01/19 15:03, mark wrote: >>>>>> >>>>>>> I've no idea what happened, but the box I was working
2009 Sep 24
4
mdadm size issues
Hi, I am trying to create a 10-drive RAID6 array. OS is CentOS 5.3 (64-bit). All 10 drives are 2T in size. Devices sd{a,b,c,d,e,f} are on my motherboard; devices sd{i,j,k,l} are on a PCI Express Areca card (relevant lspci info below).
# lspci
06:0e.0 RAID bus controller: Areca Technology Corp. ARC-1210 4-Port PCI-Express to SATA RAID Controller
The controller is set to JBOD mode for the drives. All
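
A sketch of the create command for that layout, using the whole-disk names given in the message (with 2T members a version-1 superblock is usually the safer choice, since the old 0.90 format has a commonly cited limit of about 2TB per component):

  mdadm --create /dev/md0 --level=6 --raid-devices=10 --metadata=1.2 \
      /dev/sd[a-f] /dev/sd[i-l]
  mdadm --detail /dev/md0 | grep -i size
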
2019 Jan 31
0
C7, mdadm issues
> On 30/01/19 16:49, Simon Matter wrote: >>> On 01/30/19 03:45, Alessandro Baggi wrote: >>>> On 29/01/19 20:42, mark wrote: >>>>> Alessandro Baggi wrote: >>>>>> On 29/01/19 18:47, mark wrote: >>>>>>> Alessandro Baggi wrote: >>>>>>>> On 29/01/19 15:03, mark wrote:
2010 Mar 25
3
RAID 5 setup?
Can anyone provide a tutorial or advice on how to configure a software RAID 5 from the command line (since I did not install Gnome)? I have 8 x 1.5 TB drives. -Jason
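
A minimal command-line sketch for that setup; device names, filesystem, and mount point are placeholders rather than anything from the original post:

  # Create the array across the 8 data disks
  mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[b-i]
  # Record it so it assembles at boot, then put a filesystem on it
  mdadm --detail --scan >> /etc/mdadm.conf
  mkfs.ext3 /dev/md0
  mkdir -p /srv/raid5 && mount /dev/md0 /srv/raid5
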
2020 Sep 18
4
Drive failed in 4-drive md RAID 10
I got the email that a drive in my 4-drive RAID10 setup failed. What are my options? Drives are WD1000FYPS (Western Digital 1 TB 3.5" SATA).

mdadm.conf:
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/root level=raid10 num-devices=4 UUID=942f512e:2db8dc6c:71667abc:daf408c3

/proc/mdstat:
Personalities : [raid10]
md127 : active raid10 sdf1[2](F)
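
The usual replacement path for a setup like this, as a sketch (sdf1 and md127 come from the quoted output; the replacement disk may come up under a different device name after the swap):

  # Remove the member already flagged (F), swap the physical disk,
  # partition it like the others, then re-add and watch the rebuild
  mdadm --manage /dev/md127 --remove /dev/sdf1
  mdadm --manage /dev/md127 --add /dev/sdf1
  cat /proc/mdstat
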