Displaying 20 results from an estimated 20000 matches similar to: "An mdadm question"
2019 Jan 30
3
C7, mdadm issues
On 30/01/19 16:49, Simon Matter wrote:
>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>> On 29/01/19 20:42, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 18:47, mark wrote:
>>>>>> Alessandro Baggi wrote:
>>>>>>> On 29/01/19 15:03, mark wrote:
>>>>>>>
2019 Jan 30
4
C7, mdadm issues
On 01/30/19 03:45, Alessandro Baggi wrote:
> On 29/01/19 20:42, mark wrote:
>> Alessandro Baggi wrote:
>>> On 29/01/19 18:47, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 15:03, mark wrote:
>>>>>
>>>>>> I've no idea what happened, but the box I was working on last week
2008 Apr 17
2
Question about RAID 5 array rebuild with mdadm
I'm using CentOS 4.5 right now, and I had a RAID 5 array stop because
two drives became unavailable. After adjusting the cables on several
occasions and shutting down and restarting, I was able to see the
drives again. This is when I snatched defeat from the jaws of
victory. Please, someone with vast knowledge of how RAID 5 with mdadm
works, tell me if I have any chance at all
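When two members of a RAID5 drop out, the data is often still intact even though the array stops; the usual non-destructive first steps look roughly like this (a sketch, device names hypothetical):
  # compare the event counters and states recorded in each member's superblock
  mdadm --examine /dev/sd[abcd]1
  # a forced assemble will re-admit a member whose event count is only slightly stale
  mdadm --assemble --force /dev/md0 /dev/sd[abcd]1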
2011 Aug 17
1
RAID5 suddenly broken
Hello,
I have a RAID5 array on my CentOS 5.6 x86_64 workstation which
"suddenly" failed to work (actually after the system could not resume
from a suspend).
I recently had issues after moving the workstation to another office,
where one of the disks got accidentally unplugged. But the RAID was
working and had reconstructed the data (as far as I can tell).
After I replugged the disk,
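Before touching anything in a case like this, it helps to capture what the kernel and mdadm currently believe (a sketch; md0 is a placeholder):
  cat /proc/mdstat
  mdadm --detail /dev/md0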
2011 Nov 11
3
[PATCH v2] Add mdadm-create, list-md-devices APIs.
This adds the mdadm-create API for creating RAID devices, and
includes various fixes for the other two patches.
Rich.
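For reference, the calls as they later shipped in libguestfs are exposed to guestfish as md-create and list-md-devices; a rough usage sketch against two scratch disks (details may differ from this v2 patch):
  truncate -s 1G /tmp/d1 /tmp/d2
  guestfish -a /tmp/d1 -a /tmp/d2 <<'EOF'
  run
  md-create md0 "/dev/sda /dev/sdb" level:raid1
  list-md-devices
  EOF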
2019 Feb 25
7
Problem with mdadm, raid1 and automatically adds any disk to raid
Hi.
CentOS 7.6.1810, fresh install; I use this as a base to create/upgrade new/old machines.
I was trying to set up two disks as a RAID1 array, using these lines:
mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2
mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2
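Note that --level=0 builds a stripe (RAID0); a RAID1 mirror, which the subject line suggests was intended, would normally be created like this (a sketch, same hypothetical partitions):
  mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1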
2014 Jul 25
2
Convert "bare partition" to RAID1 / mdadm?
I have a large disk full of data that I'd like to upgrade to SW RAID 1
with a minimum of downtime. Taking it offline for a day or more to rsync
all the files over is a non-starter. Since I've mounted SW RAID1 drives
directly with "mount -t ext3 /dev/sdX" it would seem possible to flip
the process around, perhaps change the partition type with fdisk or
parted, and remount as
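One frequently suggested in-place route (a sketch only, risky without a backup, device names hypothetical) relies on 1.0-format metadata living at the end of the device, so an existing filesystem can survive array creation if it is first shrunk slightly:
  # shrink the fs a little so the superblock area at the end of the partition is free
  e2fsck -f /dev/sdX1 && resize2fs /dev/sdX1 <slightly-smaller-size>
  # create a degraded mirror in place, metadata at the end of the device
  mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sdX1 missing
  mount -t ext3 /dev/md0 /data
  # later, add the second disk and let it resync
  mdadm --add /dev/md0 /dev/sdY1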
2011 Dec 06
4
/dev/sda
We're just using Linux software RAID for the first time - RAID1, and the
other day, a drive failed. We have a clone machine to play with, so it's
not that critical, but....
I partitioned a replacement drive. On the clone, I marked the RAID
partitions on /dev/sda as failed, removed them, and pulled the drive. After
several iterations, I waited a minute or two, until all messages had
stopped,
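The per-member sequence being described is roughly this (a sketch, names hypothetical):
  # mark a member on the failing disk faulty, then remove it from the array
  mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
  # after swapping the drive and re-creating the partitions:
  mdadm /dev/md0 --add /dev/sda1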
2019 Jan 22
2
C7 and mdadm
A user's system had a hard drive failure over the weekend: Linux RAID 6. I
identified the drive and brought the system down (8 drives, and I didn't know
the s/n of the bad one, which is why it was there in the box rather than where
I started looking...). Brought it up, RAID not working. I finally found that
I had to do an mdadm --stop /dev/md0, then I could do an assemble, then I
could add the new
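The recovery sequence described, roughly (array and device names hypothetical):
  mdadm --stop /dev/md0
  mdadm --assemble /dev/md0
  mdadm /dev/md0 --add /dev/sdX1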
2014 Aug 29
3
*very* ugly mdadm issue
We have a machine that's a distro mirror - a *lot* of data, not just
CentOS. We had the data on /dev/sdc. I added another drive, /dev/sdd, and
created that as /dev/md4, with --missing, made an ext4 filesystem on it,
and rsync'd everything from /dev/sdc.
Note that we did this on *raw*, unpartitioned drives (not my idea). I then
umounted /dev/sdc, and mounted /dev/md4, and it looked fine; I
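With whole, unpartitioned disks the superblock sits directly on the device node; a quick way to confirm what mdadm sees (a sketch, names as in the post):
  mdadm --examine /dev/sdd
  cat /proc/mdstat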
2011 Oct 08
1
CentOS 6.0 CR mdadm-3.2.2 breaks Intel BIOS RAID
I just upgraded my home KVM server to CentOS 6.0 CR to make use of the
latest libvirt and now my RAID array with my VM storage is missing. It
seems that the upgrade to mdadm-3.2.2 is the culprit.
This is the output from mdadm when scanning that array,
# mdadm --detail --scan
ARRAY /dev/md0 metadata=imsm UUID=734f79cf:22200a5a:73be2b52:3388006b
ARRAY /dev/md126 metadata=imsm
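With IMSM firmware RAID the container and the volume appear as separate md devices. If a newer mdadm refuses the array because of platform checks, one commonly cited workaround is the IMSM_NO_PLATFORM override (verify against your mdadm version before relying on it):
  # inspect the container and its member volume
  mdadm --detail /dev/md126
  # relax the platform capability checks for assembly (use with care)
  IMSM_NO_PLATFORM=1 mdadm --assemble --scan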
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote:
> On 29/01/19 18:47, mark wrote:
>> Alessandro Baggi wrote:
>>> On 29/01/19 15:03, mark wrote:
>>>
>>>> I've no idea what happened, but the box I was working on last week
>>>> has a *second* bad drive. Actually, I'm starting to wonder about
>>>> that particular hot-swap bay.
>>>>
2019 Jan 30
2
C7, mdadm issues
Alessandro Baggi wrote:
> On 30/01/19 14:02, mark wrote:
>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>> On 29/01/19 20:42, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 18:47, mark wrote:
>>>>>> Alessandro Baggi wrote:
>>>>>>> On 29/01/19 15:03, mark wrote:
2008 Jun 16
2
mdadm on reboot
Hi,
I'm in the process of trying mdadm for the first time.
I've been trying stuff out of tutorials, etc.
At this point I know how to create stripes, and mirrors.
My stripe is automatically restarting on reboot,
but the degraded mirror isn't.
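Auto-assembly at boot generally depends on the arrays being listed in /etc/mdadm.conf (and, on some distros, on rebuilding the initrd); a minimal sketch:
  # record the existing arrays so they are assembled at boot
  mdadm --examine --scan >> /etc/mdadm.conf
  # a degraded array may additionally need to be run with a member missing
  mdadm --run /dev/md1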
--
Drew Einhorn
2011 Oct 12
1
raid on large disks?
What's the right way to set up >2TB partitions for raid1 autoassembly?
I don't need to boot from this but I'd like it to come up and mount
automatically at boot.
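The usual route for members over 2TB is GPT partitioning plus v1.x metadata, which autoassembles from the on-disk superblock; a sketch with hypothetical device names:
  parted /dev/sdb mklabel gpt
  parted /dev/sdb mkpart primary 1MiB 100%
  parted /dev/sdb set 1 raid on
  # repeat for /dev/sdc, then:
  mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.2 /dev/sdb1 /dev/sdc1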
--
Les Mikesell
lesmikesell at gmail.com
2011 Apr 12
8
GUI Software Raid Monitor Software
2017 Feb 17
3
RAID questions
On 2017-02-15, John R Pierce <pierce at hogranch.com> wrote:
> On 2/14/2017 4:48 PM, tdukes at palmettoshopper.com wrote:
>
>> 3 - Can additional drive(s) be added later with a change in RAID level
>> without current data loss?
>
> Only some systems support that sort of restriping, and it's a dangerous
> activity (if the power fails or system crashes midway through
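mdadm does support online restriping with a backup file to protect the critical section; a sketch (names hypothetical, and a real backup is still strongly advisable):
  # add the new disk as a spare, then reshape from 3 to 4 members
  mdadm /dev/md0 --add /dev/sdd1
  mdadm --grow /dev/md0 --raid-devices=4 --backup-file=/root/md0-grow.bak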
2009 May 20
2
help with rebuilding md0 (Raid5)
Sorry, this is going to be a rather long post... Here's the situation: I
have 4 IDE disks from an old Snap Server which fails to mount the RAID
array. We believe there is a controller error on the SNAP, so we've put
them in another box running CentOS 5 and can see the disks OK.
hda through hdd look like this:
Disk /dev/hdd: 185.2 GB, 185283624960 bytes
255 heads, 63 sectors/track, 22526
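To work out how the members fit together before attempting anything destructive, the superblocks on each disk can be compared (a sketch, IDE names as in the post):
  mdadm --examine /dev/hd[abcd]
  # if the metadata agrees, try a forced, read-only assemble first
  mdadm --assemble --force --readonly /dev/md0 /dev/hd[abcd]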
2019 Jan 30
1
C7, mdadm issues
Alessandro Baggi wrote:
> On 30/01/19 16:33, mark wrote:
>
>> Alessandro Baggi wrote:
>>
>>> On 30/01/19 14:02, mark wrote:
>>>
>>>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>>>
>>>>> On 29/01/19 20:42, mark wrote:
>>>>>
>>>>>> Alessandro Baggi wrote:
2009 Dec 08
3
botched RAID, now e2fsck or what?
Hi all,
Somehow I managed to mess with a RAID array containing an ext3 partition.
An aside, in case it matters: I physically disconnected a drive while
the array was online. Next thing, I lost the right order of the drives
in the array. While trying to re-create it, I overwrote the raid
superblocks. Luckily, the array was RAID5 degraded, so whenever I
re-created it, it didn't go into sync;
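The standard advice at this point is to experiment only on copies or overlays, and to re-create with --assume-clean so no resync overwrites data while candidate member orders are tried (a sketch, not a guaranteed recovery; order and names are hypothetical):
  # re-create the degraded array without triggering a resync
  mdadm --create /dev/md0 --level=5 --raid-devices=3 --assume-clean /dev/sdb1 /dev/sdc1 missing
  # sanity-check the filesystem read-only before trusting this order
  fsck.ext3 -n /dev/md0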