search for: sync_action

Displaying 20 results from an estimated 20 matches for "sync_action".

2010 Feb 28
3
puzzling md error ?
this has never happened to me before, and I'm somewhat at a loss. got an email from the cron thing... /etc/cron.weekly/99-raid-check: WARNING: mismatch_cnt is not 0 on /dev/md10 WARNING: mismatch_cnt is not 0 on /dev/md11 ok, md10 and md11 are each raid1's made from 2 x 72GB scsi drives, on a dell 2850 or something dual single-core 3ghz server. these two md's are in
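For context, the usual follow-up to that warning is to re-run a scrub by hand and see whether the count persists; a rough sketch against the md10 device named above (md11 is handled the same way):
    # start a scrub pass -- essentially what the weekly raid-check script does
    echo check > /sys/block/md10/md/sync_action
    cat /proc/mdstat                          # wait for the check to finish
    cat /sys/block/md10/md/mismatch_cnt       # re-read the counter afterwards
    # 'repair' resynchronises the mismatched blocks; md cannot tell which copy
    # is correct, it only makes the two copies identical again
    echo repair > /sys/block/md10/md/sync_action
On RAID-1 a small non-zero count is often harmless (swap or memory-mapped files changing while the scrub runs), so a repair followed by a second check is usually enough to separate that noise from a genuinely failing drive.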
2008 Sep 21
3
question about software Raid 1
Does software raid 1 compare checksums or otherwise verify that the same bits are coming from both disks during reads? What I'm interested in is whether bit errors that were somehow undetected by the hardware would be detected by the raid 1 software. Thanks, Nataraj
2012 Nov 13
1
mdX and mismatch_cnt when building an array
CentOS 6.3, x86_64. I have noticed when building a new software RAID-6 array on CentOS 6.3 that the mismatch_cnt grows monotonically while the array is building: # cat /proc/mdstat Personalities : [raid6] [raid5] [raid4] md11 : active raid6 sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0] 3904890880 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
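For what it's worth, mismatch_cnt is generally only meaningful once the initial sync has finished; a sketch of the usual sequence, reusing the md11 name from the post:
    cat /sys/block/md11/md/sync_action   # reports 'resync' while the array is still building
    cat /proc/mdstat                     # progress and estimated completion time
    # once sync_action is back to 'idle', run an explicit scrub and only then
    # treat the counter as significant
    echo check > /sys/block/md11/md/sync_action
    cat /sys/block/md11/md/mismatch_cnt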
2019 Jan 29
2
C7, mdadm issues
...d /dev/sdc1 to /dev/md0 as 0 >> mdadm: /dev/md0 assembled from 4 drives and 2 spares - not enough to >> start the array. >> >> --examine shows me /dev/sdd1 and /dev/sdh1, but that both are spares. > Hi Mark, > please post the result from > > cat /sys/block/md0/md/sync_action There is none. There is no /dev/md0. mdadm refuses, saying that it's lost too many drives. mark
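When assembly stops with "not enough to start the array", the usual next step is to compare the members' event counts and, if they largely agree, attempt a forced assembly. A rough sketch with hypothetical member names, since the thread never shows the full device list; forcing can discard the last few writes, so image the disks first if the data matters:
    mdadm --examine /dev/sd[b-i]1 | grep -E '/dev/|Events|Device Role'
    # stop the half-assembled array, then force-assemble from the members
    # whose event counts are closest to each other
    mdadm --stop /dev/md0
    mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1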
2019 Jan 30
4
C7, mdadm issues
...sembled from 4 drives and 2 spares - not enough to >>>> start the array. >>>> >>>> --examine shows me /dev/sdd1 and /dev/sdh1, but that both are spares. >>> Hi Mark, >>> please post the result from >>> >>> cat /sys/block/md0/md/sync_action >> >> There is none. There is no /dev/md0. mdadm refuses, saying that it's lost >> too many drives. >> >> mark >> >> _______________________________________________ >> CentOS mailing list >> CentOS at centos.org >> https://list...
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote: > On 29/01/19 15:03, mark wrote: > >> I've no idea what happened, but the box I was working on last week has >> a *second* bad drive. Actually, I'm starting to wonder about that >> particular hot-swap bay. >> >> Anyway, mdadm --detail shows /dev/sdb1 removed. I've added /dev/sdi1... >> but see both /dev/sdh1 and
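After swapping out a suspect drive, the replacement is normally added back explicitly and the rebuild watched in /proc/mdstat; a short sketch using the device names mentioned in the thread (they are specific to that box):
    mdadm --detail /dev/md0          # member states: active sync, spare, removed, faulty
    mdadm /dev/md0 --add /dev/sdi1   # add the replacement; md starts rebuilding onto it
    cat /proc/mdstat                 # watch recovery progress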
2019 Jan 30
2
C7, mdadm issues
...the array. >>>>>> >>>>>> --examine shows me /dev/sdd1 and /dev/sdh1, but that both are >>>>>> spares. >>>>> Hi Mark, >>>>> please post the result from >>>>> >>>>> cat /sys/block/md0/md/sync_action >>>> >>>> There is none. There is no /dev/md0. mdadm refuses, saying that >>>> it's lost too many drives. >>>> >>>> mark >>>> >>>> >>>> _______________________________________________ >>...
2019 Jan 30
1
C7, mdadm issues
...>>>>>>> --examine shows me /dev/sdd1 and /dev/sdh1, but that both >>>>>>>> are spares. >>>>>>> Hi Mark, >>>>>>> please post the result from >>>>>>> >>>>>>> cat /sys/block/md0/md/sync_action >>>>>> >>>>>> There is none. There is no /dev/md0. mdadm refuses, saying >>>>>> that it's lost too many drives. >>>>>> >>>>>> mark >>>>>> >>>>>> >>>>...
2012 Jan 02
0
raid resync deleting data?
Hello, I have a c5 box with a 6 drive raid 6 array. I was going away over Christmas so I was shutting the machine down; I noticed a raid resync (on the raid 6 array), so I stopped it using the command # echo "idle" > /sys/block/md5/md/sync_action and then shut the machine down. A week later I turned the machine on and started copying data to it; around 24 hours later a raid resync occurred and wiped all the new data I had copied to it. To me this seems broken; is it because I manually stopped a previous resync that it got in this state? Any...
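For reference, writing "idle" only aborts the pass that is running; md normally resumes or restarts an unfinished resync after the next assembly, so stopping it is not needed for a clean shutdown. A minimal way to see what the array is doing first, using the md5 device from the post:
    cat /sys/block/md5/md/sync_action   # resync, recover, check, repair or idle
    cat /proc/mdstat                    # which array, how far along, rough ETA
    # abort the current pass only if you really need the bandwidth right now;
    # the array stays usable (just slower) while it resyncs
    echo idle > /sys/block/md5/md/sync_action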
2019 Jan 29
0
C7, mdadm issues
...sdd1 and /dev/sdh1, but that both are spares. > > mark > > _______________________________________________ > CentOS mailing list > CentOS at centos.org > https://lists.centos.org/mailman/listinfo/centos > Hi Mark, please post the result from cat /sys/block/md0/md/sync_action
2019 Jan 30
0
C7, mdadm issues
...>>> mdadm: /dev/md0 assembled from 4 drives and 2 spares - not enough to >>> start the array. >>> >>> --examine shows me /dev/sdd1 and /dev/sdh1, but that both are spares. >> Hi Mark, >> please post the result from >> >> cat /sys/block/md0/md/sync_action > > There is none. There is no /dev/md0. mdadm refuses, saying that it's lost > too many drives. > > mark > > _______________________________________________ > CentOS mailing list > CentOS at centos.org > https://lists.centos.org/mailman/listinfo/centos...
2019 Jan 30
3
C7, mdadm issues
...>>>>>> start the array. >>>>>> >>>>>> --examine shows me /dev/sdd1 and /dev/sdh1, but that both are spares. >>>>> Hi Mark, >>>>> please post the result from >>>>> >>>>> cat /sys/block/md0/md/sync_action >>>> >>>> There is none. There is no /dev/md0. mdadm refuses, saying that it's >>>> lost >>>> too many drives. >>>> >>>> mark >>>> >>>> _______________________________________________ >>...
2019 Jan 30
0
C7, mdadm issues
...spares - not enough to >>>>> start the array. >>>>> >>>>> --examine shows me /dev/sdd1 and /dev/sdh1, but that both are spares. >>>> Hi Mark, >>>> please post the result from >>>> >>>> cat /sys/block/md0/md/sync_action >>> >>> There is none. There is no /dev/md0. mdadm refuses, saying that it's >>> lost >>> too many drives. >>> >>> mark >>> >>> _______________________________________________ >>> CentOS mailing list >>...
2019 Jan 30
0
C7, mdadm issues
...>>> >>>>>>> --examine shows me /dev/sdd1 and /dev/sdh1, but that both are >>>>>>> spares. >>>>>> Hi Mark, >>>>>> please post the result from >>>>>> >>>>>> cat /sys/block/md0/md/sync_action >>>>> >>>>> There is none. There is no /dev/md0. mdadm refuses, saying that >>>>> it's lost too many drives. >>>>> >>>>> mark >>>>> >>>>> >>>>> ______________________...
2019 Jan 30
0
C7, mdadm issues
...spares - not enough to >>>>> start the array. >>>>> >>>>> --examine shows me /dev/sdd1 and /dev/sdh1, but that both are spares. >>>> Hi Mark, >>>> please post the result from >>>> >>>> cat /sys/block/md0/md/sync_action >>> >>> There is none. There is no /dev/md0. mdadm refuses, saying that it's >>> lost >>> too many drives. >>> >>> mark >>> >>> _______________________________________________ >>> CentOS mailing list >>...
2019 Jan 31
0
C7, mdadm issues
...>>> >>>>>>> --examine shows me /dev/sdd1 and /dev/sdh1, but that both are >>>>>>> spares. >>>>>> Hi Mark, >>>>>> please post the result from >>>>>> >>>>>> cat /sys/block/md0/md/sync_action >>>>> >>>>> There is none. There is no /dev/md0. mdadm refuses, saying that it's >>>>> lost >>>>> too many drives. >>>>> >>>>> mark >>>>> >>>>> ______________________...
2020 Sep 17
2
storage for mailserver
Hello Phil, Wednesday, September 16, 2020, 7:40:24 PM, you wrote: PP> You can achieve this with a hybrid RAID1 by mixing SSDs and HDDs, and PP> marking the HDD members as --write-mostly, meaning most of the reads PP> will come from the faster SSDs retaining much of the speed advantage, PP> but you have the redundancy of both SSDs and HDDs in the array. PP> Read performance is
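As a concrete illustration of that layout (device names here are hypothetical; --write-mostly applies to the devices listed after it, so the SSD partition comes first):
    # two-way mirror: reads favour the SSD, the HDD mostly just absorbs writes
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/nvme0n1p2 --write-mostly /dev/sda2
On reasonably recent kernels the flag can also be toggled on a live array by writing writemostly (or -writemostly to clear it) into /sys/block/md0/md/dev-sda2/state, without rebuilding anything.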
2020 Sep 19
1
storage for mailserver
..."Write Mode."? I set it to the maximum of 16383 which must be done when the bitmap is created, so remove the bitmap and create a new one with the options you prefer: mdadm /dev/mdX --grow --bitmap=none mdadm /dev/mdX --grow --bitmap=internal --bitmap-chunk=512M --write-behind=16383 Note sync_action must be idle if you decide to script this.? Bigger bitmap-chunks are my preference, but might not be yours.? Your mileage and performance may differ.? :-) I've been meaning to test big write-behind's on my CentOS 8 systems... [1] https://bugzilla.redhat.com/show_bug.cgi?id=1582673? (login...
2016 Jan 17
10
HDD badblocks
Hi list, I've a notebook with C7 (1511). This notebook has 2 disks (640 GB) and I've configured them with MD at level 1. Some days ago I've noticed some critical slowdown while opening applications. First of all I've disabled acpi on the disks. I've checked disks sda and sdb for badblocks 4 consecutive times and I've noticed a strange behaviour. On sdb there are
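For anyone repeating this, the read-only form of badblocks is the safe one to run against members of a live mirror, and SMART counters are worth checking alongside it:
    badblocks -sv /dev/sda   # read-only scan (the default); -s progress, -v verbose
    badblocks -sv /dev/sdb
    smartctl -a /dev/sda     # SMART error log, pending/reallocated sector counts
    smartctl -a /dev/sdb
The -n (non-destructive read-write) and -w (destructive write) modes should not be used on disks that are part of an active array.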
2013 Aug 22
23
Question: How can I recover this partition? (unable to find logical $hugenum len 4096)
Hi list! I recently butchered my filesystem, and I was wondering if anyone knows how to help. Problem: My filesystem is screwed up, and I can't mount it at all right now. In the logs, the problem begins around 45s. Background: I'm running a 6x4TB RAID5 array using md. I have a few virtual machines using said array, and one of them is a btrfs storage server. I ran into some
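Without the rest of the thread it is hard to say what applies here, but the usual low-risk first steps on a btrfs volume of that era that refuses to mount look roughly like this (the device name is a placeholder for whatever block device actually holds the filesystem):
    mount -o ro,recovery /dev/md0 /mnt        # try a backup tree root, read-only
                                              # (newer kernels call this option usebackuproot)
    btrfsck /dev/md0                          # read-only diagnosis; avoid --repair until data is copied off
    # if mounting keeps failing, btrfs restore can copy files out of an unmountable volume
    btrfs restore -D /dev/md0 /tmp/ignored    # -D: dry run, list what would be restored
    btrfs restore /dev/md0 /mnt/rescue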