similar to: Problem with softwareraid

Displaying 20 results from an estimated 4000 matches similar to: "Problem with softwareraid"

2017 Aug 19
2
Problem with softwareraid
Hello Gordon, yeah, it is really strange. From one boot to the next, everything is messed up (2 months between). Any idea? [root@quad live]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 1.8T 0 disk └─sda1 8:1 0 1.8T 0 part
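A quick way to see what is sitting on top of those partitions (an illustrative sketch only; device names follow the lsblk output above):

# Show every block device with its type and any device-mapper layers stacked on it
lsblk -o NAME,TYPE,SIZE,MOUNTPOINT
# List device-mapper maps; any multipath maps claiming the disks will show up here
dmsetup ls --tree
# See which md arrays the kernel currently knows about
cat /proc/mdstat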
2017 Aug 19
0
Problem with softwareraid
18. Aug 2017 13:35 by euroregistrar at gmail.com: > Hello all, > > I have already had a discussion on the software RAID mailing list and I > want to switch to this one :) > > I am having a really strange problem with my md0 device running > CentOS 7. After a restart of my server the md0 was gone. Now after > trying to find the problem I detected the following: > >
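If the array simply was not assembled at boot, the member superblocks can be checked and the array re-assembled by hand. A sketch, assuming the members are sda1 through sdd1 as elsewhere in the thread:

# Verify each member still carries an md superblock and note the array UUID
mdadm --examine /dev/sd[abcd]1
# Assemble everything mdadm can find from those superblocks
mdadm --assemble --scan --verbose
# Confirm the result
cat /proc/mdstat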
2017 Aug 20
0
Problem with softwareraid
On 08/19/2017 12:06 PM, Mr Typo wrote: > sda 8:0 0 1.8T 0 disk > └─sda1 8:1 0 1.8T 0 part > └─WDC_WD20EFRX-68AX9N0_WD-WMC1T2547260 253:3 0 1.8T 0 mpath > └─WDC_WD20EFRX-68AX9N0_WD-WMC1T2547260p1 253:8 0 1.8T 0 part You haven't said anything about multipath hardware yet,
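If these are plain SATA drives with no real multipath hardware, one common approach is to blacklist them so multipathd releases the devices (or disable multipathd entirely). A sketch only; the vendor/product strings are assumptions and should be matched against multipathd's own output:

# /etc/multipath.conf -- blacklist the local WD drives so multipath stops claiming them
blacklist {
    device {
        vendor  "ATA"
        product "WDC WD20EFRX*"
    }
}

# Then reload multipath, flush the stale maps, and try the array again
systemctl restart multipathd
multipath -F
mdadm --assemble --scan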
2017 Aug 18
0
Problem with softwareraid
On 08/18/2017 12:35 PM, Mr Typo wrote: > mdadm: /dev/sda1 is busy - skipping > mdadm: /dev/sdb1 is busy - skipping > mdadm: /dev/sdc1 is busy - skipping > mdadm: /dev/sdd1 is busy - skipping That's plenty strange. The output of "lsblk" might tell you why those devices are busy.
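Besides lsblk, sysfs shows exactly which kernel device has claimed each partition. A minimal check, assuming the same device names as in the error messages:

# Anything listed here (e.g. a dm-* multipath map) is what makes the partition "busy"
ls -l /sys/block/sda/sda1/holders/
# Map dm-* names back to friendly device-mapper names
dmsetup info -c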
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote: > On 29/01/19 15:03, mark wrote: > >> I've no idea what happened, but the box I was working on last week has >> a *second* bad drive. Actually, I'm starting to wonder about that >> particular hot-swap bay. >> >> Anyway, mdadm --detail shows /dev/sdb1 removed. I've added /dev/sdi1... >> but see both /dev/sdh1 and
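One way to nudge a stuck spare, sketched under the assumption that the array is /dev/md0 and the device names match the post:

# See whether a rebuild is actually pending or running
cat /proc/mdstat
mdadm --detail /dev/md0
# If a member is only ever listed as spare, remove it and add it back so the rebuild restarts
mdadm /dev/md0 --remove /dev/sdi1
mdadm /dev/md0 --add /dev/sdi1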
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote: > On 29/01/19 18:47, mark wrote: >> Alessandro Baggi wrote: >>> On 29/01/19 15:03, mark wrote: >>> >>>> I've no idea what happened, but the box I was working on last week >>>> has a *second* bad drive. Actually, I'm starting to wonder about >>>> that particular hot-swap bay. >>>>
2019 Jan 30
4
C7, mdadm issues
On 01/30/19 03:45, Alessandro Baggi wrote: > On 29/01/19 20:42, mark wrote: >> Alessandro Baggi wrote: >>> On 29/01/19 18:47, mark wrote: >>>> Alessandro Baggi wrote: >>>>> On 29/01/19 15:03, mark wrote: >>>>> >>>>>> I've no idea what happened, but the box I was working on last week
2019 Jan 30
2
C7, mdadm issues
Alessandro Baggi wrote: > On 30/01/19 14:02, mark wrote: >> On 01/30/19 03:45, Alessandro Baggi wrote: >>> On 29/01/19 20:42, mark wrote: >>>> Alessandro Baggi wrote: >>>>> On 29/01/19 18:47, mark wrote: >>>>>> Alessandro Baggi wrote: >>>>>>> On 29/01/19 15:03, mark wrote:
2019 Jan 30
1
C7, mdadm issues
Alessandro Baggi wrote: > On 30/01/19 16:33, mark wrote: > >> Alessandro Baggi wrote: >> >>> On 30/01/19 14:02, mark wrote: >>> >>>> On 01/30/19 03:45, Alessandro Baggi wrote: >>>> >>>>> On 29/01/19 20:42, mark wrote: >>>>> >>>>>> Alessandro Baggi wrote:
2019 Jan 29
2
C7, mdadm issues
I've no idea what happened, but the box I was working on last week has a *second* bad drive. Actually, I'm starting to wonder about that particular hot-swap bay. Anyway, mdadm --detail shows /dev/sdb1 removed. I've added /dev/sdi1... but see both /dev/sdh1 and /dev/sdi1 as spare, and have yet to find a reliable way to make either one active. Actually, I would have expected the linux
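Comparing the event counters on the members usually shows which drive the array still trusts and which one carries stale metadata. An illustrative check only; array and device names are taken from the post:

# The member with a much lower "Events" count is the stale one
mdadm --examine /dev/sd[bhi]1 | grep -E 'Events|/dev/sd'
# Full per-member state as md sees it right now
mdadm --detail /dev/md0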
2019 Jan 30
3
C7, mdadm issues
On 30/01/19 16:49, Simon Matter wrote: >> On 01/30/19 03:45, Alessandro Baggi wrote: >>> On 29/01/19 20:42, mark wrote: >>>> Alessandro Baggi wrote: >>>>> On 29/01/19 18:47, mark wrote: >>>>>> Alessandro Baggi wrote: >>>>>>> On 29/01/19 15:03, mark wrote: >>>>>>>
2013 Mar 03
4
Strange behavior from software RAID
Somewhere, mdadm is caching information. Here is my /etc/mdadm.conf file: more /etc/mdadm.conf # mdadm.conf written out by anaconda DEVICE partitions MAILADDR root ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382 ARRAY /dev/md2 level=raid1 num-devices=2
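If the ARRAY lines no longer match the running arrays, the usual suspect is a stale copy inside the initramfs rather than mdadm itself. A sketch for a dracut-based release, keeping a backup first:

# Keep the anaconda-written file around
cp /etc/mdadm.conf /etc/mdadm.conf.bak
# Print ARRAY lines for the arrays that are actually running; merge them in by hand
mdadm --detail --scan
# The initramfs carries its own copy of mdadm.conf, so rebuild it after any change
dracut -f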
2009 Dec 20
1
mdadm help
Hey List, So I had a 4-drive software RAID 5 setup consisting of /dev/sdb1, /dev/sdc1, /dev/sdd1 and /dev/sde1. I reinstalled my OS and after the reinstall I made the mistake of re-assembling the array incorrectly by typing "sudo mdadm --assemble /dev/md0 /dev/sdb /dev/sdc /dev/sdd /dev/sde" in a moment of stupidity. Obviously this didn't work and the array wouldn't mount and
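Assembling (as opposed to creating) does not rewrite superblocks, so pointing mdadm back at the partitions is usually enough. A sketch, assuming the original member partitions still carry their metadata:

# Stop the half-assembled array if one exists
mdadm --stop /dev/md0
# Check that the partition superblocks are intact
mdadm --examine /dev/sd[bcde]1
# Re-assemble from the partitions, not the whole disks
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1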
2016 Aug 11
5
Software RAID and GRUB on CentOS 7
Hi, When I perform a software RAID 1 or RAID 5 installation on a LAN server with several hard disks, I wonder if GRUB already gets installed on each individual MBR, or if I have to do that manually. On CentOS 5.x and 6.x, this had to be done like this: # grub grub> device (hd0) /dev/sda grub> device (hd1) /dev/sdb grub> root (hd0,0) grub> setup (hd0) grub> root (hd1,0) grub>
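On CentOS 7 the legacy grub shell shown above is gone; the usual equivalent is to run grub2-install against each disk that should remain bootable. A sketch for a two-disk BIOS/MBR setup, with the disk names assumed:

# Install the GRUB2 boot code into the MBR of every disk in the mirror
grub2-install /dev/sda
grub2-install /dev/sdb
# Regenerate the configuration once; it is shared via /boot on the RAID1
grub2-mkconfig -o /boot/grub2/grub.cfg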
2010 Feb 28
3
puzzling md error ?
This has never happened to me before, and I'm somewhat at a loss. Got an email from the cron thing... /etc/cron.weekly/99-raid-check: WARNING: mismatch_cnt is not 0 on /dev/md10 WARNING: mismatch_cnt is not 0 on /dev/md11 OK, md10 and md11 are each RAID1s made from 2 x 72GB SCSI drives, on a Dell 2850 or something, dual single-core 3GHz server. These two md's are in
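The counter can be inspected and a repair pass triggered through sysfs. A sketch using md10 as in the message (on RAID1 devices holding swap, a non-zero count can be harmless):

# Current mismatch count for the array
cat /sys/block/md10/md/mismatch_cnt
# Ask md to rewrite the inconsistent blocks, then watch the pass finish
echo repair > /sys/block/md10/md/sync_action
cat /proc/mdstat
# Re-run a check afterwards; the count should return to 0
echo check > /sys/block/md10/md/sync_action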
2011 Feb 23
2
LVM problem after adding new (md) PV
Hello, I have a weird problem after adding a new PV to an LVM volume group. It seems the error comes out only during boot time. Please read the story. I have a couple of 1U machines. They all have two, four or more Fujitsu-Siemens SAS 2.5" disks, which are bound into RAID1 pairs with Linux mdadm. The first pair of disks always has two arrays (md0, md1). Small md0 is used for booting and the rest - md1
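When a freshly created md array backs a new PV, a boot-time error is often just the initramfs not knowing about that array before LVM scans. A sketch of the usual steps on a dracut-based release; the array name md2 and the volume group name vg_data are placeholders:

# Create the mirror, turn it into a PV and grow the volume group (vg_data is hypothetical)
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
pvcreate /dev/md2
vgextend vg_data /dev/md2
# Record the array and rebuild the initramfs so it is assembled before LVM activation
mdadm --detail --scan >> /etc/mdadm.conf
dracut -f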
2008 Apr 18
1
create raid /dev/md2
Hi, currently I have 2 RAID devices, /dev/md0 and /dev/md1. I have added 2 new disks, fdisked them, and created 2 primary partitions with type fd (Linux raid autodetect). Now I want to create a RAID from them: [root@vmhost1 ~]# mdadm --create --verbose /dev/md2 --level=1 /dev/sdc1 /dev/sdd1 mdadm: error opening /dev/md2: No such file or directory It returns that error; what should I do? Thanks!
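Two things look missing in that command on older releases: the --raid-devices count and the /dev/md2 node itself. A hedged sketch covering both:

# Let mdadm create the missing device node itself (alternatively: mknod /dev/md2 b 9 2)
mdadm --create --verbose /dev/md2 --auto=yes --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
# Record the new array so it assembles at boot
mdadm --detail --scan | grep md2 >> /etc/mdadm.conf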
2007 Nov 29
1
RAID, LVM, extra disks...
Hi, This is my current config: /dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot /dev/md1 -> 36 GB -> sda2 + sdd2 -> form VolGroup00 with md2 /dev/md2 -> 18 GB -> sdb1 + sde1 -> form VolGroup00 with md1 sda,sdd -> 36 GB 10k SCSI HDDs sdb,sde -> 18 GB 10k SCSI HDDs I have added two 36 GB 10K SCSI drives to it; they are detected as sdc and sdf. What should I do if I
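One conservative path is to follow the existing layout: mirror the new pair into another md array and feed it into VolGroup00. A sketch only; the partition names, /dev/md3 and LogVol00 are assumptions:

# Partition the new disks the same way as sda/sdd (type fd), then mirror them
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdf1
# Turn the new mirror into a PV and add it to the existing volume group
pvcreate /dev/md3
vgextend VolGroup00 /dev/md3
# Grow a logical volume and its filesystem with the new space (LogVol00 is hypothetical)
lvextend -L +34G /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00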
2014 Mar 17
1
Slow RAID resync
OK, today's problem. I have an HP N54L Microserver running CentOS 6.5. In this box I have a 3x2TB disk RAID 5 array, which I am in the process of extending to a 4x2TB RAID 5 array. I've added the new disk --> mdadm --add /dev/md0 /dev/sdb And grown the array --> mdadm --grow /dev/md0 --raid-devices=4 Now the problem: the resync speed is very slow, it refuses to rise above 5MB/s, in general
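The reshape speed is usually capped by the md speed limits, so raising them is the common first step. A sketch; the values are only examples:

# Current limits, in KB/s per device
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
# Raise the floor so the reshape is not throttled behind other I/O
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=200000
# A larger stripe cache often helps RAID5/6 grows as well
echo 8192 > /sys/block/md0/md/stripe_cache_size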
2010 Mar 25
3
RAID 5 setup?
Can anyone provide a tutorial or advice on how to configure a software RAID 5 from the command line (since I did not install Gnome)? I have 8 x 1.5TB drives. -Jason
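A minimal command-line sketch for eight disks, assuming they are sdb through sdi and each carries a single partition of type fd; adjust the names to match the machine:

# Build the array (one drive's worth of capacity goes to parity)
mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[b-i]1
# Save the array definition and put a filesystem on it
mdadm --detail --scan >> /etc/mdadm.conf
mkfs.ext4 /dev/md0
# Watch the initial build
cat /proc/mdstat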