Displaying 20 results from an estimated 10000 matches similar to: "raid resync deleting data?"

2007 Mar 20
1
centos raid 1 question
Hi, I'm seeing this on my screen and in dmesg, and I'm not sure whether it is an error message. BTW, I'm using CentOS 4.4 with 2 x 200GB PATA drives. md: md0: sync done. RAID1 conf printout: --- wd:2 rd:2 disk 0, wo:0, o:1, dev:hda2 disk 1, wo:0, o:1, dev:hdc2 md: delaying resync of md5 until md3 has finished resync (they share one or more physical units) md: syncing RAID array md5 md: minimum _guaranteed_
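These are informational messages, not errors: md serializes resyncs of arrays that share physical disks, which is why md5 waits for md3. A minimal way to watch the queue and the progress (no assumptions beyond a stock md setup):

    cat /proc/mdstat              # the waiting array shows resync=DELAYED
    watch -n 5 cat /proc/mdstat   # refresh every 5 seconds until both finish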
2009 Jul 29
0
Software RAID-1 partition constantly syncing
I have a partition set up as software RAID-1 on a CentOS 5.3 machine. Today the system was rebooted; when it came back up, I noticed that it had started to resync. It completes the sync, then immediately starts again. From the log: Jul 29 09:46:02 cbserver kernel: md: syncing RAID array md2 Jul 29 09:46:02 cbserver kernel: md: minimum _guaranteed_ reconstruction speed: 5000 KB/sec/disc. Jul
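When a sync restarts in a loop like this, the sysfs state of the device shows which action md thinks it is running and how far along it is; a minimal check, assuming md2 as in the log:

    cat /sys/block/md2/md/sync_action      # idle, resync, check, repair, ...
    cat /sys/block/md2/md/sync_completed   # sectors done / total sectors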
2011 Feb 14
2
rescheduling sector linux raid?
Hi list, what does this mean? md: syncing RAID array md0 md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc. md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reconstruction. md: using 128k window, over a total of 2096384 blocks. md: md0: sync done. RAID1 conf printout: --- wd:2 rd:2 disk 0, wo:0, o:1, dev:sda2 disk 1, wo:0, o:1, dev:sdb2 sd 0:0:0:0:
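The "rescheduling sector" messages come from the disk layer rather than from md, so it is worth checking the drive itself. A sketch using smartmontools, with /dev/sda as an assumed device name:

    smartctl -H /dev/sda    # overall SMART health verdict
    smartctl -a /dev/sda    # full report; look at reallocated/pending sector counts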
2010 Feb 28
3
puzzling md error ?
This has never happened to me before, and I'm somewhat at a loss. Got an email from the weekly cron job... /etc/cron.weekly/99-raid-check: WARNING: mismatch_cnt is not 0 on /dev/md10 WARNING: mismatch_cnt is not 0 on /dev/md11 OK, md10 and md11 are each RAID1s made from 2 x 72GB SCSI drives, on a Dell 2850 or similar dual single-core 3GHz server. These two md's are in
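mismatch_cnt is filled in by the weekly "check" pass that 99-raid-check runs; a minimal way to re-run the check by hand and read the counter, assuming md10 as in the warning:

    echo check > /sys/block/md10/md/sync_action   # read both mirrors and compare
    cat /sys/block/md10/md/mismatch_cnt           # non-zero: blocks differ between mirrors

On RAID1, mismatches under swap or unallocated space can be harmless, so a non-zero count is not automatically data corruption.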
2010 May 13
1
raid resync speed?
Has anything changed in updates that would affect md raid1 resync speed? I regularly swap a 750G drive and resync to keep an offsite copy, and haven't paid enough attention to know when things changed, but it seems to take much longer to sync than it did months ago, even if I unmount the partition and stop most other processes that might compete with it. -- Les Mikesell
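One thing worth ruling out is the kernel's resync throttle, which caps how much bandwidth md will take; a quick look and an illustrative bump (the value is an example, not a recommendation):

    sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max   # current limits, KB/s
    sysctl -w dev.raid.speed_limit_min=50000                   # let md claim more bandwidth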
2011 Mar 20
2
task md1_resync:9770 blocked for more than 120 seconds and OOM errors
Hello, last night I had a problem with my server at a hosting provider (strato.de). I couldn't ssh to it, and over the remote serial console I saw "out of memory" errors (sorry, I don't have the text). Then I had to reinstall CentOS 5.5/64-bit plus all my setup (2h of work), because I have a contract with a social network and they will shut down my little card game if it is not
2009 May 08
3
Software RAID resync
I have configured 2 x 500GB SATA HDDs as software RAID1 with three partitions, md0, md1 and md2, md2 being 400+ GB. Now, almost 36 hours later, the status is: cat /proc/mdstat Personalities : [raid1] md0 : active raid1 hdb1[1] hda1[0] 104320 blocks [2/2] [UU] resync=DELAYED md1 : active raid1 hdb2[1] hda2[0] 4096448 blocks [2/2] [UU] resync=DELAYED md2 : active raid1
2005 Oct 28
0
Xen and EVMS/Raid5 - Null pointer dereference
Hi, a problem with EVMS and Xen: I have patched a 2.6 kernel with the EVMS patches and then with the Xen patches, compiled it and installed it. (The kernel is 2.6.11.9, which is what this server has been running for the past few months without the EVMS patches.) At first everything seems to work just fine: I am able to use EVMS to create a new "volume", in this case it is based on MD/RAID-5, but
2014 Mar 17
1
Slow RAID resync
OK, today's problem. I have an HP N54L MicroServer running CentOS 6.5. In this box I have a 3 x 2TB disk RAID5 array, which I am in the process of extending to a 4 x 2TB RAID5 array. I've added the new disk --> mdadm --add /dev/md0 /dev/sdb And grown the array --> mdadm --grow /dev/md0 --raid-devices=4 Now the problem: the resync speed is very slow; it refuses to rise above 5MB/s, in general
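For RAID5/6 grows specifically, the stripe cache is often the bottleneck rather than the speed limits; a hedged sketch, assuming /dev/md0 as above (the values are illustrative and cost RAM):

    echo 8192 > /sys/block/md0/md/stripe_cache_size   # default is 256 stripes
    sysctl -w dev.raid.speed_limit_min=50000          # stop md from yielding to idle I/O
    cat /proc/mdstat                                  # confirm the reshape speed changed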
2019 Jan 30
0
C7, mdadm issues
On 29/01/19 20:42, mark wrote: > Alessandro Baggi wrote: >> On 29/01/19 18:47, mark wrote: >>> Alessandro Baggi wrote: >>>> On 29/01/19 15:03, mark wrote: >>>> >>>>> I've no idea what happened, but the box I was working on last week >>>>> has a *second* bad drive. Actually, I'm starting to wonder about
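When a second drive looks bad, the usual first step is to compare what the array and the individual members report; a minimal sketch, with /dev/md0 and /dev/sdb1 as assumed names:

    mdadm --detail /dev/md0     # array state, failed/removed slots
    mdadm --examine /dev/sdb1   # per-member superblock and event count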
2019 Jan 30
0
C7, mdadm issues
On 30/01/19 14:02, mark wrote: > On 01/30/19 03:45, Alessandro Baggi wrote: >> On 29/01/19 20:42, mark wrote: >>> Alessandro Baggi wrote: >>>> On 29/01/19 18:47, mark wrote: >>>>> Alessandro Baggi wrote: >>>>>> On 29/01/19 15:03, mark wrote: >>>>>> >>>>>>> I've no idea what
2019 Jan 30
0
C7, mdadm issues
On 30/01/19 16:33, mark wrote: > Alessandro Baggi wrote: >> On 30/01/19 14:02, mark wrote: >>> On 01/30/19 03:45, Alessandro Baggi wrote: >>>> On 29/01/19 20:42, mark wrote: >>>>> Alessandro Baggi wrote: >>>>>> On 29/01/19 18:47, mark wrote: >>>>>>> Alessandro Baggi wrote:
2019 Jan 30
0
C7, mdadm issues
> On 01/30/19 03:45, Alessandro Baggi wrote: >> On 29/01/19 20:42, mark wrote: >>> Alessandro Baggi wrote: >>>> On 29/01/19 18:47, mark wrote: >>>>> Alessandro Baggi wrote: >>>>>> On 29/01/19 15:03, mark wrote: >>>>>> >>>>>>> I've no idea what happened, but the box I was working
2019 Jan 30
1
C7, mdadm issues
Alessandro Baggi wrote: > On 30/01/19 16:33, mark wrote: > >> Alessandro Baggi wrote: >> >>> On 30/01/19 14:02, mark wrote: >>> >>>> On 01/30/19 03:45, Alessandro Baggi wrote: >>>> >>>>> On 29/01/19 20:42, mark wrote: >>>>> >>>>>> Alessandro Baggi wrote:
2015 Aug 25
0
CentOS 6.6 - reshape of RAID 6 is stuck
Hello, I have a CentOS 6.6 server with 13 disks in a RAID6. Some weeks ago, I upgraded it to 17 disks, two of them configured as spares. The reshape proceeded normally at first, but at 69% it stopped. md2 : active raid6 sdj1[0] sdg1[18](S) sdh1[2] sdi1[5] sdm1[15] sds1[12] sdr1[14] sdk1[9] sdo1[6] sdn1[13] sdl1[8] sdd1[20] sdf1[19] sdq1[16] sdb1[10] sde1[17](S) sdc1[21] 19533803520
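Whether md still believes the reshape is running can be read from sysfs; a minimal check, assuming md2 as above:

    cat /sys/block/md2/md/sync_action     # should report "reshape" while it runs
    cat /sys/block/md2/md/sync_completed  # position; compare two readings over time
    dmesg | tail -n 50                    # look for I/O errors that may have frozen it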
2011 Apr 28
2
Server offline :-( please help to repair software RAID
Hello, for weeks I had been ignoring this warning on my CentOS 5.6/64-bit machine - /etc/cron.weekly/99-raid-check: WARNING: mismatch_cnt is not 0 on /dev/md0 - in the hope that the software RAID would slowly repair itself. I had also executed "echo 100000 > /proc/sys/dev/raid/speed_limit_max" on advice from the mailing list. But now my web server is offline - I had to boot
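For the record, a non-zero mismatch_cnt never repairs itself; the rewrite has to be requested explicitly. A minimal sketch, assuming md0 as in the warning (on RAID1 a mismatch under swap or free space can be harmless):

    echo repair > /sys/block/md0/md/sync_action   # rewrite mismatched blocks
    cat /proc/mdstat                              # the repair shows up as a sync pass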
2019 Jan 31
0
C7, mdadm issues
> On 30/01/19 16:49, Simon Matter wrote: >>> On 01/30/19 03:45, Alessandro Baggi wrote: >>>> On 29/01/19 20:42, mark wrote: >>>>> Alessandro Baggi wrote: >>>>>> On 29/01/19 18:47, mark wrote: >>>>>>> Alessandro Baggi wrote: >>>>>>>> On 29/01/19 15:03, mark wrote:
2010 Mar 04
1
Resync raid1 from disk with unreadable sectors
Hello, after some fiddling with the server I now have a broken RAID1, with the "current" mirror on the disk with a few unreadable sectors. If I try to re-add the other disk to the mirror, the resync runs until those bad sectors and then starts over from the beginning, and so on. Is it possible to somehow force the resync to continue even after errors? A manual resync with dd would take a bit too long
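For a source disk with unreadable sectors, GNU ddrescue is the usual alternative to dd, because it skips bad areas on the first pass and retries them later; a sketch, with /dev/sda as the assumed failing disk and /dev/sdb as the replacement (the direction matters):

    ddrescue -f -n /dev/sda /dev/sdb rescue.map   # fast first pass, skip bad areas
    ddrescue -f /dev/sda /dev/sdb rescue.map      # retry the areas recorded in the map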
2019 Jan 30
2
C7, mdadm issues
Alessandro Baggi wrote: > On 30/01/19 14:02, mark wrote: >> On 01/30/19 03:45, Alessandro Baggi wrote: >>> On 29/01/19 20:42, mark wrote: >>>> Alessandro Baggi wrote: >>>>> On 29/01/19 18:47, mark wrote: >>>>>> Alessandro Baggi wrote: >>>>>>> On 29/01/19 15:03, mark wrote:
2012 Jan 31
1
force-resync fails to recover all messages in mdbox
To my understanding, when using mdbox, doveadm force-resync should be able to recover all the messages from the storage files alone, though of course losing all metadata except the initial delivery folder. However, this does not seem to be the case. For me, force-resync creates only partial indices that lose messages. The message contents are of course still in the storage files, but Dovecot just
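For reference, force-resync takes a user and a mailbox mask; a minimal invocation (the username is an assumed example):

    doveadm force-resync -u user@example.com '*'   # rebuild indices for all of the user's mailboxes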