Displaying 20 results from an estimated 41 matches for "sdh1".
2017 Sep 28
1
upgrade to 3.12.1 from 3.10: df returns wrong numbers
...% /bricks/sde1
/dev/sdf1                          7.3T  711G  6.6T  10% /bricks/sdf1
/dev/sdg1                          7.3T  756G  6.6T  11% /bricks/sdg1
/dev/sdh1                          7.3T  753G  6.6T  11% /bricks/sdh1
[root@st-srv-03 ~]# df -h|grep localhost
localhost:/test                     59T  5.7T   53T  10% /gfs/test
localhost:/vm-images               7.3T  717G  6.6T  10%...
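One way to cross-check numbers like these, sketched with the volume name "test" and the /bricks and /gfs mount points taken from the excerpt above (a reasonably current gluster CLI is assumed):

# what the kernel reports for each brick filesystem
df -h /bricks/*
# what glusterd itself believes about brick capacity and free space
gluster volume status test detail
# the aggregate the FUSE client computes from the bricks
df -h /gfs/test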
2013 Oct 06
5
btrfs device delete problem
...I'm getting an error when trying to delete a device from a raid1 (data
and metadata mirrored).
> btrfs filesystem show
failed to read /dev/sr0
Label: none uuid: 78b5162b-489e-4de1-a989-a47b91adef50
Total devices 2 FS bytes used 107.64GB
devid 2 size 149.05GB used 109.01GB path /dev/sdh1
devid 1 size 156.81GB used 109.03GB path /dev/sdb6
Btrfs v0.20-rc1
> btrfs device delete /dev/sdh1 /mnt/raid-data/
ERROR: error removing the device '/dev/sdh1' - Inappropriate ioctl for device
Raid has been working fine for a long time. Both devices are present
but /d...
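"Inappropriate ioctl for device" from btrfs device delete usually means the last argument is not the mount point of a mounted btrfs filesystem, although very old btrfs-progs (v0.20-rc1 here) can also be a factor. A minimal check, reusing the paths from the message:

# confirm /mnt/raid-data is really where the filesystem is mounted
mount | grep raid-data
# run the delete against that mount point as root, then re-check the device list
btrfs device delete /dev/sdh1 /mnt/raid-data
btrfs filesystem show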
2019 Jan 29
2
C7, mdadm issues
...no idea what happened, but the box I was working on last week has
>> a *second* bad drive. Actually, I'm starting to wonder about that
>> particular hot-swap bay.
>>
>> Anyway, mdadm --detail shows /dev/sdb1 removed. I've added /dev/sdi1...
>> but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find a
>> reliable way to make either one active.
>>
>> Actually, I would have expected the linux RAID to replace a failed one
>> with a spare....
>>
>> Clues for the poor? I *really* don't want to freak out the user by...
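To see why both disks are being counted as spares instead of being pulled into the array, the first step is usually to compare the array's view with each member's own superblock. A sketch, with /dev/md0 standing in for the real array:

# slot-by-slot state: active, faulty, removed, spare
mdadm --detail /dev/md0
# what each disk's superblock thinks its role is
mdadm --examine /dev/sdh1 /dev/sdi1
# live kernel view, including any rebuild in progress
cat /proc/mdstat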
2019 Jan 29
2
C7, mdadm issues
...week
>>>> has a *second* bad drive. Actually, I'm starting to wonder about
>>>> that particular hot-swap bay.
>>>>
>>>> Anyway, mdadm --detail shows /dev/sdb1 removed. I've added
>>>> /dev/sdi1...
>>>> but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find
>>>> a reliable way to make either one active.
>>>>
>>>> Actually, I would have expected the linux RAID to replace a failed
>>>> one with a spare....
>>> can you report your raid configuration lik...
2019 Jan 30
4
C7, mdadm issues
...drive. Actually, I'm starting to wonder about
>>>>>> that particular hot-swap bay.
>>>>>>
>>>>>> Anyway, mdadm --detail shows /dev/sdb1 removed. I've added
>>>>>> /dev/sdi1...
>>>>>> but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find
>>>>>> a reliable way to make either one active.
>>>>>>
>>>>>> Actually, I would have expected the linux RAID to replace a failed
>>>>>> one with a spare....
>>
>>>>...
2019 Jan 29
2
C7, mdadm issues
I've no idea what happened, but the box I was working on last week has a
*second* bad drive. Actually, I'm starting to wonder about that
particular hot-swap bay.
Anyway, mdadm --detail shows /dev/sdb1 removed. I've added /dev/sdi1... but
see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find a reliable
way to make either one active.
Actually, I would have expected the linux RAID to replace a failed one
with a spare....
Clues for the poor? I *really* don't want to freak out the user by taking
it down, and building yet another array....
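md normally starts rebuilding onto a spare by itself once a slot fails, so when two disks just sit as spares the usual suspects are a stale failed/removed entry still occupying the slot or an array that came up read-only. A cautious sequence to try, sketched against /dev/md0 (substitute the real array and have backups first):

# drop members recorded as failed or physically gone
mdadm /dev/md0 --remove failed
mdadm /dev/md0 --remove detached
# if the array assembled read-only, make it writable so recovery can begin
mdadm --readwrite /dev/md0
# watch whether one of the spares starts rebuilding
cat /proc/mdstat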
2019 Jan 30
2
C7, mdadm issues
...>>>>>>>> to wonder about that particular hot-swap bay.
>>>>>>>>
>>>>>>>> Anyway, mdadm --detail shows /dev/sdb1 removed. I've added
>>>>>>>> /dev/sdi1...
>>>>>>>> but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet
>>>>>>>> to find a reliable way to make either one active.
>>>>>>>>
>>>>>>>> Actually, I would have expected the linux RAID to replace a
>>>>>>>> failed one with a sp...
2019 Jan 30
1
C7, mdadm issues
...onder about that particular hot-swap bay.
>>>>>>>>>>
>>>>>>>>>> Anyway, mdadm --detail shows /dev/sdb1 removed. I've
>>>>>>>>>> added /dev/sdi1...
>>>>>>>>>> but see both /dev/sdh1 and /dev/sdi1 as spare, and have
>>>>>>>>>> yet to find a reliable way to make either one active.
>>>>>>>>>>
>>>>>>>>>> Actually, I would have expected the linux RAID to
>>>>>>>>>>...
2008 Sep 25
0
qcow support
...upported by the Ubuntu
8.04 LTS kernel and xen-3.2.1.
Is this an Ubuntu-specific problem, or is it solved in SUSE, Red Hat,
XenSource, whatever?
When I try to block-attach it, xm displays no error, but the device is
not created.
# xm block-attach 0 tap:qcow:/var/xen/domains/dapper10/test.img /dev/sdh1 w 0
# mount /dev/sdh1 /mnt/test/
mount: special device /dev/sdh1 does not exist
In the config file 'tap:qcow:/var/xen/domains/dapper10/test.img,sda2,w',
--> test.img is in use. And I thought there was no /dev/sdh1?
xm block-detach ...
Now the VM boots, but gets stuck after kernel...
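A few things worth checking here, sketched with domain id 0 and the image path from the message (qemu-img may or may not be installed on a Xen 3.2 host):

# does the toolstack think the block device was attached at all?
xm block-list 0
# is the image a valid qcow file?
qemu-img info /var/xen/domains/dapper10/test.img
# tap:qcow needs the blktap kernel module and a running tapdisk process
lsmod | grep blktap
ps ax | grep tapdisk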
2019 Jan 29
0
C7, mdadm issues
...d, but the box I was working on last week has
>>> a *second* bad drive. Actually, I'm starting to wonder about that
>>> particular hot-swap bay.
>>>
>>> Anyway, mdadm --detail shows /dev/sdb1 removed. I've added /dev/sdi1...
>>> but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find a
>>> reliable way to make either one active.
>>>
>>> Actually, I would have expected the linux RAID to replace a failed one
>>> with a spare....
>>>
>>> Clues for the poor? I *really* don't want...
2019 Jan 30
0
C7, mdadm issues
...>>>>> has a *second* bad drive. Actually, I'm starting to wonder about
>>>>> that particular hot-swap bay.
>>>>>
>>>>> Anyway, mdadm --detail shows /dev/sdb1 removed. I've added
>>>>> /dev/sdi1...
>>>>> but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find
>>>>> a reliable way to make either one active.
>>>>>
>>>>> Actually, I would have expected the linux RAID to replace a failed
>>>>> one with a spare....
>
>>>> can you report yo...
2019 Jan 30
3
C7, mdadm issues
...nder about
>>>>>>>> that particular hot-swap bay.
>>>>>>>>
>>>>>>>> Anyway, mdadm --detail shows /dev/sdb1 removed. I've added
>>>>>>>> /dev/sdi1...
>>>>>>>> but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find
>>>>>>>> a reliable way to make either one active.
>>>>>>>>
>>>>>>>> Actually, I would have expected the linux RAID to replace a failed
>>>>>>>> one with a sp...
2019 Jan 30
0
C7, mdadm issues
...I'm starting to wonder about
>>>>>>> that particular hot-swap bay.
>>>>>>>
>>>>>>> Anyway, mdadm --detail shows /dev/sdb1 removed. I've added
>>>>>>> /dev/sdi1...
>>>>>>> but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find
>>>>>>> a reliable way to make either one active.
>>>>>>>
>>>>>>> Actually, I would have expected the linux RAID to replace a failed
>>>>>>> one with a spare....
>>...
2019 Jan 30
0
C7, mdadm issues
...>>>>>>>>> to wonder about that particular hot-swap bay.
>>>>>>>>>
>>>>>>>>> Anyway, mdadm --detail shows /dev/sdb1 removed. I've added
>>>>>>>>> /dev/sdi1...
>>>>>>>>> but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet
>>>>>>>>> to find a reliable way to make either one active.
>>>>>>>>>
>>>>>>>>> Actually, I would have expected the linux RAID to replace a
>>>>>>>>> fail...
2019 Jan 30
0
C7, mdadm issues
...I'm starting to wonder about
>>>>>>> that particular hot-swap bay.
>>>>>>>
>>>>>>> Anyway, mdadm --detail shows /dev/sdb1 removed. I've added
>>>>>>> /dev/sdi1...
>>>>>>> but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find
>>>>>>> a reliable way to make either one active.
>>>>>>>
>>>>>>> Actually, I would have expected the linux RAID to replace a failed
>>>>>>> one with a spare....
>>...
2019 Jan 22
2
C7 and mdadm
...e I
started looking...) Brought it up, RAID not working. I finally found that
I had to do an mdadm --stop /dev/md0, then I could do an assemble, then I
could add the new drive.
But: it's now
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active (auto-read-only) raid5 sdg1[8](S) sdh1[7] sdf1[4] sde1[3]
sdd1[2] sdc1[1]
23441313792 blocks super 1.2 level 5, 512k chunk, algorithm 2 [7/5]
[_UUUU_U]
bitmap: 0/30 pages [0KB], 65536KB chunk
unused devices: <none>
and I can't mount it (it's xfs, btw). *Should* I make it readwrite, or is
there something else...
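The "(auto-read-only)" flag means md has not written to the array yet and will not start any resync or recovery while it stays that way, so switching to read-write is the usual first step. Note, though, that [7/5] on a raid5 set means two members are missing, which is more than raid5 can lose, so the missing disks may need to be re-added before the array will really run. A sketch, assuming /dev/md0 and an arbitrary /mnt mount point:

# leave auto-read-only so a pending rebuild (e.g. onto the sdg1 spare) can start
mdadm --readwrite /dev/md0
cat /proc/mdstat
# only once the array is up and clean, try the filesystem again
mount /dev/md0 /mnt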
2015 Aug 25
0
CentOS 6.6 - reshape of RAID 6 is stuck
Hello
I have a CentOS 6.6 server with 13 disks in a RAID 6. Some weeks ago, I upgraded it to 17 disks, two of them configured as spares. The reshape worked normally at first, but at 69% it stopped.
md2 : active raid6 sdj1[0] sdg1[18](S) sdh1[2] sdi1[5] sdm1[15] sds1[12] sdr1[14] sdk1[9] sdo1[6] sdn1[13] sdl1[8] sdd1[20] sdf1[19] sdq1[16] sdb1[10] sde1[17](S) sdc1[21]
19533803520 blocks super 1.2 level 6, 1024k chunk, algorithm 2 [15/15] [UUUUUUUUUUUUUUU]
[=============>.......] reshape = 69.0% (1347861324/1953380352) fi...
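When a reshape sits at a fixed percentage while the array otherwise looks healthy, one common cause is that md has been told to stop at a checkpoint, which shows up as a finite sector count in sync_max. A read-mostly sketch against md2 as named above (take backups before poking a reshaping array):

cat /sys/block/md2/md/sync_action
cat /sys/block/md2/md/sync_max
cat /proc/mdstat
# if sync_max holds a sector count instead of "max",
# letting it run to the end resumes the reshape
echo max > /sys/block/md2/md/sync_max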
2019 Jan 31
0
C7, mdadm issues
...>>>>>>>>> that particular hot-swap bay.
>>>>>>>>>
>>>>>>>>> Anyway, mdadm --detail shows /dev/sdb1 removed. I've added
>>>>>>>>> /dev/sdi1...
>>>>>>>>> but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to
>>>>>>>>> find
>>>>>>>>> a reliable way to make either one active.
>>>>>>>>>
>>>>>>>>> Actually, I would have expected the linux RAID to replace a
>...
2023 Mar 30
1
Performance: lots of small files, hdd, nvme etc.
Well, you have *way* more files than we do... :)
On 30/03/2023 11:26, Hu Bert wrote:
> Just an observation: is there a performance difference between a sw
> raid10 (10 disks -> one brick) or 5x raid1 (each raid1 a brick)
Err... RAID10 is not 10 disks unless you stripe 5 mirrors of 2 disks.
> with
> the same disks (10TB hdd)? The heal processes on the 5xraid1-scenario
>
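For reference, the two layouts being compared would be created roughly like this (a sketch only; the device names are invented and all tuning options are left at their defaults):

# one 10-disk RAID10 set, later used as a single brick
mdadm --create /dev/md0 --level=10 --raid-devices=10 /dev/sd[b-k]1
# alternatively, five 2-disk RAID1 mirrors on the same disks, each its own brick
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
# ...and so on for the remaining three pairs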
2012 Jul 10
1
Problem with RAID on 6.3
...e0 0000
00001d0 0000 0000 0000 0000 0000 0000 0000 0000
*
00001f0 0000 0000 0000 0000 0000 0000 0000 aa55
0000200
So far, so normal. This works fine under 2.6.32-220.23.1.el6.x86_64
Personalities : [raid1] [raid10] [raid6] [raid5] [raid4]
md127 : active raid5 sdj3[2] sdi2[1] sdk4[3] sdh1[0]
5860537344 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
However, I just patched to CentOS 6.3 and on reboot this array failed
to be built. The 2.6.32-279 kernel complained that /dev/sdj was too
similar to /dev/sdj3. But if I reboot to -220.23.1, it works.
And, indeed, if I ru...
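One common source of a "too similar" complaint is a 0.90-format superblock, which lives near the end of the device: the same metadata can then be seen both through the last partition and through the whole-disk node, and the newer kernel/mdadm refuses to guess which one to use. A way to confirm, as a sketch:

# compare what each node claims to be a member of
mdadm --examine /dev/sdj
mdadm --examine /dev/sdj3
# if both report the same array UUID, restricting scanning to partitions
# (e.g. "DEVICE partitions" plus explicit ARRAY lines in /etc/mdadm.conf)
# keeps assembly from ever considering the whole-disk node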