Displaying 20 results from an estimated 2000 matches similar to: "C7 and mdadm"
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote:
> On 29/01/19 15:03, mark wrote:
>
>> I've no idea what happened, but the box I was working on last week has
>> a *second* bad drive. Actually, I'm starting to wonder about that
>> particular hot-swap bay.
>>
>> Anyway, mdadm --detail shows /dev/sdb1 removed. I've added /dev/sdi1...
>> but see both /dev/sdh1 and
2017 Sep 28
1
upgrade to 3.12.1 from 3.10: df returns wrong numbers
Hi,
When I upgraded my cluster, df started returning some odd numbers for my
legacy volumes.
For volumes created after the upgrade, df works just fine.
I have been researching since Monday and have not found any reference to
this symptom.
"vm-images" is the old legacy volume, "test" is the new one.
[root@st-srv-03 ~]# (df -h|grep bricks;ssh st-srv-02 'df -h|grep
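A minimal way to narrow this down (a sketch only; the /mnt/vm-images mount point is an assumption, not taken from the post) is to compare what each brick's local filesystem reports with what the glusterfs FUSE mount of the legacy volume reports on every node:

# Hostnames and the vm-images volume name are from the post;
# the mount point /mnt/vm-images is assumed.
for host in st-srv-02 st-srv-03; do
    echo "== $host =="
    ssh "$host" 'df -h | grep bricks'      # raw brick filesystems
    ssh "$host" 'df -h /mnt/vm-images'     # FUSE mount of the legacy volume
done

# Volume layout, to check brick counts and options after the upgrade
gluster volume info vm-images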
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote:
> On 29/01/19 18:47, mark wrote:
>> Alessandro Baggi wrote:
>>> On 29/01/19 15:03, mark wrote:
>>>
>>>> I've no idea what happened, but the box I was working on last week
>>>> has a *second* bad drive. Actually, I'm starting to wonder about
>>>> that particular hot-swap bay.
>>>>
2019 Jan 30
4
C7, mdadm issues
On 01/30/19 03:45, Alessandro Baggi wrote:
> On 29/01/19 20:42, mark wrote:
>> Alessandro Baggi wrote:
>>> On 29/01/19 18:47, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 15:03, mark wrote:
>>>>>
>>>>>> I've no idea what happened, but the box I was working on last week
2019 Jan 30
2
C7, mdadm issues
Alessandro Baggi wrote:
> On 30/01/19 14:02, mark wrote:
>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>> On 29/01/19 20:42, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 18:47, mark wrote:
>>>>>> Alessandro Baggi wrote:
>>>>>>> On 29/01/19 15:03, mark wrote:
2019 Jan 29
2
C7, mdadm issues
I've no idea what happened, but the box I was working on last week has a
*second* bad drive. Actually, I'm starting to wonder about that
particular hot-swap bay.
Anyway, mdadm --detail shows /dev/sdb1 removed. I've added /dev/sdi1... but
see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find a reliable
way to make either one active.
Actually, I would have expected the linux
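When md leaves a new device sitting as a spare like this, the usual sequence is to drop the stale member, wipe its old metadata, and add it back so recovery starts. A sketch only: /dev/md0 is a placeholder for the actual array; sdh1/sdi1 are the devices mentioned in the post.

cat /proc/mdstat                     # confirm which member is missing or rebuilding
mdadm --detail /dev/md0              # shows removed/spare/active roles

# Drop one of the two spares, wipe its stale metadata, and add it back
mdadm /dev/md0 --remove /dev/sdh1
mdadm --zero-superblock /dev/sdh1
mdadm /dev/md0 --add /dev/sdh1       # md should now start recovering onto it

watch cat /proc/mdstat               # recovery progress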
2019 Jan 30
1
C7, mdadm issues
Alessandro Baggi wrote:
> On 30/01/19 16:33, mark wrote:
>
>> Alessandro Baggi wrote:
>>
>>> On 30/01/19 14:02, mark wrote:
>>>
>>>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>>>
>>>>> On 29/01/19 20:42, mark wrote:
>>>>>
>>>>>> Alessandro Baggi wrote:
2019 Jan 30
3
C7, mdadm issues
On 30/01/19 16:49, Simon Matter wrote:
>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>> On 29/01/19 20:42, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 18:47, mark wrote:
>>>>>> Alessandro Baggi wrote:
>>>>>>> On 29/01/19 15:03, mark wrote:
>>>>>>>
2023 Mar 30
1
Performance: lots of small files, hdd, nvme etc.
Well, you have *way* more files than we do... :)
On 30/03/2023 11:26, Hu Bert wrote:
> Just an observation: is there a performance difference between a sw
> raid10 (10 disks -> one brick) and 5x raid1 (each raid1 a brick)
Err... RAID10 is not 10 disks unless you stripe 5 mirrors of 2 disks.
> with
> the same disks (10TB hdd)? The heal processes on the 5xraid1-scenario
>
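For reference, the two layouts being compared look roughly like this with mdadm. A sketch only: the /dev/sd[b-k] device names and md numbers are placeholders, not taken from the post.

# Option A: one 10-disk RAID10 -> a single brick
mdadm --create /dev/md0 --level=10 --raid-devices=10 /dev/sd[b-k]1

# Option B: five 2-disk RAID1 mirrors -> five bricks
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
# ...and so on for the remaining three pairs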
2007 Aug 23
1
Transport endpoint not connected after crash of one node
Hi,
I am on SLES 10, SP1, x86_64, running the distribution RPMs of ocfs2:
ocfs2console-1.2.3-0.7
ocfs2-tools-1.2.3-0.7
I have a two-node ocfs2 cluster configured. One node died (manual reset),
and the second immediately started to have problems accessing the file
system, with the following reason in the logs: Transport endpoint not
connected.
a mounted.ocfs2 on the still living
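A first round of checks on the surviving node might look like the sketch below; it only uses the ocfs2-tools utilities already mentioned in the post plus the o2cb service, and is not taken from the original thread.

# Is the cluster stack still up and quorate on the surviving node?
/etc/init.d/o2cb status

# Which nodes does ocfs2 think still have the filesystem mounted?
mounted.ocfs2 -f

# Look for o2net/o2cb messages about the dead node and fencing decisions
dmesg | grep -i -e o2net -e o2cb -e ocfs2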
2012 Nov 13
1
mdX and mismatch_cnt when building an array
CentOS 6.3, x86_64.
I have noticed when building a new software RAID-6 array on CentOS 6.3
that the mismatch_cnt grows monotonically while the array is building:
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md11 : active raid6 sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
3904890880 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
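A growing mismatch_cnt during the initial resync of a freshly created array is generally not meaningful; the value only matters after an explicit check on a fully synced array. A sketch, using the md11 device name from the post:

# Wait for the initial build to finish, then run an explicit check
cat /proc/mdstat
echo check > /sys/block/md11/md/sync_action

# After the check completes, this is the number that matters
cat /sys/block/md11/md/mismatch_cnt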
2020 Sep 18
4
Drive failed in 4-drive md RAID 10
I got the email that a drive in my 4-drive RAID10 setup failed. What are my
options?
Drives are WD1000FYPS (Western Digital 1 TB 3.5" SATA).
mdadm.conf:
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/root level=raid10 num-devices=4
UUID=942f512e:2db8dc6c:71667abc:daf408c3
/proc/mdstat:
Personalities : [raid10]
md127 : active raid10 sdf1[2](F)
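The standard replacement path for a failed RAID10 member is to remove the failed device, swap the disk, and add the new partition back. A sketch only: md127 and sdf1 are from the post; /dev/sdg1 is a placeholder for the partition on the replacement drive.

mdadm --detail /dev/md127                 # confirm which member is marked (F)
mdadm /dev/md127 --remove /dev/sdf1       # drop the failed member

# After physically swapping the disk and partitioning it like the others:
mdadm /dev/md127 --add /dev/sdg1          # rebuild starts automatically
cat /proc/mdstat                          # watch recovery progress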
2019 Jan 29
0
C7, mdadm issues
On 29/01/19 18:47, mark wrote:
> Alessandro Baggi wrote:
>> On 29/01/19 15:03, mark wrote:
>>
>>> I've no idea what happened, but the box I was working on last week has
>>> a *second* bad drive. Actually, I'm starting to wonder about that
>>> particular hot-swap bay.
>>>
>>> Anyway, mdadm --detail shows /dev/sdb1
2019 Jan 30
0
C7, mdadm issues
On 29/01/19 20:42, mark wrote:
> Alessandro Baggi wrote:
>> On 29/01/19 18:47, mark wrote:
>>> Alessandro Baggi wrote:
>>>> On 29/01/19 15:03, mark wrote:
>>>>
>>>>> I've no idea what happened, but the box I was working on last week
>>>>> has a *second* bad drive. Actually, I'm starting to wonder about
2019 Jan 30
0
C7, mdadm issues
On 30/01/19 14:02, mark wrote:
> On 01/30/19 03:45, Alessandro Baggi wrote:
>> On 29/01/19 20:42, mark wrote:
>>> Alessandro Baggi wrote:
>>>> On 29/01/19 18:47, mark wrote:
>>>>> Alessandro Baggi wrote:
>>>>>> On 29/01/19 15:03, mark wrote:
>>>>>>
>>>>>>> I've no idea what
2019 Jan 30
0
C7, mdadm issues
On 30/01/19 16:33, mark wrote:
> Alessandro Baggi wrote:
>> On 30/01/19 14:02, mark wrote:
>>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>>> On 29/01/19 20:42, mark wrote:
>>>>> Alessandro Baggi wrote:
>>>>>> On 29/01/19 18:47, mark wrote:
>>>>>>> Alessandro Baggi wrote:
2019 Jan 30
0
C7, mdadm issues
> On 01/30/19 03:45, Alessandro Baggi wrote:
>> On 29/01/19 20:42, mark wrote:
>>> Alessandro Baggi wrote:
>>>> On 29/01/19 18:47, mark wrote:
>>>>> Alessandro Baggi wrote:
>>>>>> On 29/01/19 15:03, mark wrote:
>>>>>>
>>>>>>> I've no idea what happened, but the box I was working
2019 Jan 31
0
C7, mdadm issues
> On 30/01/19 16:49, Simon Matter wrote:
>>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>>> On 29/01/19 20:42, mark wrote:
>>>>> Alessandro Baggi wrote:
>>>>>> On 29/01/19 18:47, mark wrote:
>>>>>>> Alessandro Baggi wrote:
>>>>>>>> On 29/01/19 15:03, mark wrote:
2015 Aug 25
0
CentOS 6.6 - reshape of RAID 6 is stuck
Hello
I have a CentOS 6.6 server with 13 disks in a RAID 6. Some weeks ago, I upgraded it to 17 disks, two of them configured as spares. The reshape proceeded normally at first, but at 69% it stopped.
md2 : active raid6 sdj1[0] sdg1[18](S) sdh1[2] sdi1[5] sdm1[15] sds1[12] sdr1[14] sdk1[9] sdo1[6] sdn1[13] sdl1[8] sdd1[20] sdf1[19] sdq1[16] sdb1[10] sde1[17](S) sdc1[21]
19533803520
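A common way to inspect, and possibly unstick, a frozen reshape is sketched below. md2 is from the post; the backup-file path is a placeholder and only applies if the grow was started with one.

cat /proc/mdstat                          # is the reshape still progressing?
cat /sys/block/md2/md/sync_action         # e.g. reshape / idle / frozen
mdadm --detail /dev/md2                   # reshape position and member states

# If md is waiting at a checkpoint, raising sync_max lets it continue
cat /sys/block/md2/md/sync_max
echo max > /sys/block/md2/md/sync_max

# If the reshape was started with a backup file, mdadm can resume it
mdadm --grow --continue /dev/md2 --backup-file=/root/md2-reshape-backup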
2010 Mar 25
3
RAID 5 setup?
Can anyone provide a tutorial or advice on how to configure a software RAID 5 from the command-line (since I did not install Gnome)?
I have 8 x 1.5 TB drives.
-Jason
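A minimal command-line sequence with mdadm would look something like the sketch below; the /dev/sd[b-i] device names and the choice of filesystem are assumptions, not taken from the post.

# Create an 8-drive RAID 5 (one drive's worth of capacity goes to parity)
mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[b-i]

cat /proc/mdstat                          # watch the initial build

# Put a filesystem on it and record the array so it assembles at boot
mkfs -t ext4 /dev/md0                     # or ext3, depending on the CentOS release
mdadm --detail --scan >> /etc/mdadm.conf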