similar to: raid resync speed?

Displaying 20 results from an estimated 10000 matches similar to: "raid resync speed?"

2009 May 08
3
Software RAID resync
I have configured 2x 500GB SATA HDDs as software RAID1 with three partitions (md0, md1 and md2, with md2 at 400+ GB). It has now been almost 36 hours and the status is:
cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdb1[1] hda1[0]
      104320 blocks [2/2] [UU]
        resync=DELAYED
md1 : active raid1 hdb2[1] hda2[0]
      4096448 blocks [2/2] [UU]
        resync=DELAYED
md2 : active raid1
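The "resync=DELAYED" state above just means md serializes resyncs of arrays that share physical disks, so md0/md1 wait while another array syncs. A quick way to count queued arrays; the sample text below is illustrative, not copied verbatim from the post:

```shell
# md serializes resyncs that share physical disks, so queued arrays
# report "resync=DELAYED" in /proc/mdstat while one array actually runs.
# Sample mdstat text (illustrative, not from the original post):
mdstat='md0 : active raid1 hdb1[1] hda1[0]
      104320 blocks [2/2] [UU]
        resync=DELAYED
md1 : active raid1 hdb2[1] hda2[0]
      4096448 blocks [2/2] [UU]
        resync=DELAYED'

# Count arrays whose resync is queued behind another array
delayed=$(printf '%s\n' "$mdstat" | grep -c 'resync=DELAYED')
echo "$delayed"
```

On a live system the same count would come from `grep -c 'resync=DELAYED' /proc/mdstat`.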
2014 Mar 17
1
Slow RAID resync
OK, today's problem. I have an HP N54L MicroServer running CentOS 6.5. In this box I have a 3x2TB disk RAID 5 array, which I am in the process of extending to a 4x2TB RAID 5 array. I've added the new disk --> mdadm --add /dev/md0 /dev/sdb and grown the array --> mdadm --grow /dev/md0 --raid-devices=4. Now the problem: the resync speed is very slow; it refuses to rise above 5MB/s, in general
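A reshape pinned at a few MB/s is often being throttled by the md speed limits; raising the floor is a common remedy. A sketch, assuming the standard md sysctl paths; the 2TB size and 5MB/s speed are illustrative figures taken loosely from the post:

```shell
# The md resync/reshape throttle lives in two sysctls (values in KB/s):
#   /proc/sys/dev/raid/speed_limit_min   floor, default 1000 (~1 MB/s)
#   /proc/sys/dev/raid/speed_limit_max   ceiling, default 200000
# Raising the floor often unsticks a slow reshape (run as root):
#   echo 50000 > /proc/sys/dev/raid/speed_limit_min

# Rough finish-time estimate at the reported ~5 MB/s (illustrative numbers):
size_kb=$((2 * 1000 * 1000 * 1000))   # ~2 TB member expressed in KB
speed_kb=5000                          # ~5 MB/s in KB/s
hours=$(( size_kb / speed_kb / 3600 ))
echo "${hours} hours"
```

That back-of-envelope estimate shows why 5MB/s is painful: over four days to touch one 2TB member's worth of data.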
2012 Jan 02
0
raid resync deleting data?
Hello, I have a CentOS 5 box with a 6-drive RAID 6 array. I was going away over Christmas, so I was shutting the machine down. I noticed a RAID resync (on the RAID 6 array), so I stopped it using the command # echo "idle" > /sys/block/md5/md/sync_action, then shut the machine down. A week later I turned the machine on and started copying data to it; around 24 hours later a raid resync
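For context on the "idle" write: md checkpoints an interrupted resync rather than discarding anything, so pausing it this way does not itself delete data. A sketch of the writes the md sync_action interface accepts (the loop below merely prints the names):

```shell
# Writes accepted by /sys/block/mdX/md/sync_action (md kernel interface):
#   idle    stop the current sync/check; md records a checkpoint and
#           resumes from it the next time the array is started
#   check   read-only scrub that counts mismatched blocks
#   repair  scrub that also rewrites mismatched blocks
# None of these remove user data; a resync only re-copies existing blocks.
for action in idle check repair; do
    echo "$action"
done
```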
2007 Mar 20
1
centos raid 1 question
Hi, I'm seeing this on my screen and in dmesg, and I'm not sure if it is an error message. BTW I'm using CentOS 4.4 with 2 x 200GB PATA drives.
md: md0: sync done.
RAID1 conf printout:
 --- wd:2 rd:2
 disk 0, wo:0, o:1, dev:hda2
 disk 1, wo:0, o:1, dev:hdc2
md: delaying resync of md5 until md3 has finished resync (they share one or more physical units)
md: syncing RAID array md5
md: minimum _guaranteed_
2010 Mar 04
1
Resync raid1 from disk with unreadable sectors
Hello, after some fiddling with the server I now have a broken RAID1, with the "current" mirror on a disk that has a few unreadable sectors. If I try to re-add the other disk to the mirror, the resync runs until it hits those bad sectors and then starts from the beginning, and so on. Is it possible to somehow force the resync to continue even after errors? A manual resync with dd would require a bit too long
2007 Feb 18
1
(no subject)
Hello, I would like to replace one of the disks in a RAID 1 array (software RAID) on CentOS 4.4, for the purpose of saving the removed drive as a backup of the system: replace it with a new disk and have the RAID resync. That way the removed disk can be used to restore the system to that point in time if something dramatic occurred. I have a number of questions, and I can't find the answers
2010 Apr 12
1
How long will CentOS 4.X take to automatically resync to the time server?
We have several CentOS 4 and 5 servers, all with NTP set up to sync to a time server. Several days ago, due to a power outage, all the servers rebooted. Because the DNS server did not come up quickly, the CentOS servers started up and could not find the time server. The CentOS 5.X servers resynced to the time server within about 30 minutes; the CentOS 4.X servers have now gone 3 days and still have NOT synced to the time
2007 Dec 18
2
resync linksys SPA9XX config file from Asterisk
Hi all, does anyone know the SIP header to send to a Linksys to resync its config file? Thanks. JR -- JR Richardson Engineering for the Masses
2018 Sep 12
1
Adding namespace alias_for causes index resync?
I just added a new namespace alias with alias_for. Apparently this causes all mailbox indexes to be resynced? Is this intentional, and/or is there some way to avoid this? My NFS storage pretty much kills itself when hundreds of thousands of users need to resync their indexes :) Thanks -- Tom
2012 May 26
1
Crash on force-resync if / is given as mailbox name
Hi, when I specify a slash as the mailbox name on the command line of doveadm force-resync, it throws a panic. I'm not sure whether this is considered a bug.
mail01:~# doveadm force-resync -u user1 at example.org /
doveadm(user1 at example.org): Panic: file mailbox-list-fs.c: line 150 (fs_list_get_path): assertion failed: (mailbox_list_is_valid_pattern(_list, name))
doveadm(user1 at example.org): Error:
2012 Nov 28
1
corrupt mdbox, force-resync segfaults
I could use some help with a corrupt mdbox. doveadm force-resync is crashing (see below), but I really need just to get this account functioning. What's my next step, as far as deleting index files? These were the earliest errors I could find: Nov 28 09:40:21 macy dovecot[6615]: imap(cory at metro-email.com): Error: Corrupted index cache file
2011 Jan 12
3
variable raid1 rebuild speed?
I have a 750Gb 3-member software raid1 where 2 partitions are always present and the third is regularly rotated and re-synced (SATA disks in hot-swap bays). The timing of the resync seems to be extremely variable recently, taking anywhere from 3 to 10 hours even if the partition is unmounted and the drives aren't doing anything else and regardless of what I echo into
2013 Mar 07
1
[dovecot-2.1.15] mdbox corruption, doveadm force-resync can't repair it (throws segfault)
Hi Timo, hi all! Today I noticed imap throwing segmentation faults and dumping cores. Looking into the logs I can see:
2013-03-07T12:12:52.257986+01:00 meteor dovecot: imap(marcinxxx at kolekcja.mejor.pl) <7sRXtFPXYAA+eX93>: Error: Corrupted dbox file /dane/domeny/mejor.pl/mail/marcin//.mdbox/mailinglists/storage/m.75 (around offset=2779212): EOF reading msg header (got 0/30 bytes)
2017 Jan 13
2
tinc behind CISCO ASA 5506
Hi there, I have the following setup: Home - main tinc server with a public IP, running on pfSense; Work - tinc client behind a Cisco ASA firewall with a public IP, running on Windows 10; Offsite - tinc client on a Tomato router behind a double NAT. Home and Offsite connect, and I can see all PCs and devices and connect to them easily on either side. Work to Home or Offsite connects
2023 Jan 12
1
Upgrading system from non-RAID to RAID1
Hello Simon, > Anyway, the splitting of large disks has additional advantages. Think of > what happens in case of a failure (power loss, kernel crash...). With the > disk as one large chunk, the whole disk has to be resynced on restart > while with smaller segments only those which are marked as dirty have to > be resynced. This can make a big difference. I am not sure if this is
2023 Jan 12
2
Upgrading system from non-RAID to RAID1
> Hello Simon, > >> Anyway, the splitting of large disks has additional advantages. Think of >> what happens in case of a failure (power loss, kernel crash...). With >> the >> disk as one large chunk, the whole disk has to be resynced on restart >> while with smaller segments only those which are marked as dirty have to >> be resynced. This can make a big
2011 Feb 16
2
Software RAID Level 1, smartd and changing dev numbers
We have about 50 CentOS servers with software RAID level 1 (mirroring). Each week, we swap out one of the drives (the one in the second of four hot-swap bays, only the first two of which contain drives) on each server and take them offsite for safekeeping. The problem is, the kernel seemingly randomly switches between /dev/sdb and /dev/sdc for these devices. This makes the process slower by
2012 Jan 31
1
force-resync fails to recover all messages in mdbox
To my understanding, when using mdbox, doveadm force-resync should be able to recover all the messages from the storage files alone, though of course losing all metadata except the initial delivery folder. However, this does not seem to be the case. For me, force-resync creates only partial indices that lose messages. The message contents are of course still in the storage files, but dovecot just
2004 Jan 29
4
Can't Figure out why rsync job stops
I'm connecting to two offsite servers that are connected over dedicated T1 lines. I'm using the same script on both servers. One runs fine, but the other starts, gets the file list, and processes a few folders. Then it hangs for about 5 minutes before spitting out the following errors:
receiving file list ... 16756 files to consider
dslagel/
dslagel/DRIVERS/
dslagel/DRIVERS/ATP_PKG/
2017 Dec 17
2
Offsite hosted backup solutions
-----Original Message----- From: CentOS [mailto:centos-bounces at centos.org] On Behalf Of Nicolas Kovacs Sent: Sunday, December 17, 2017 12:52 PM To: centos at centos.org Subject: Re: [CentOS] Offsite hosted backup solutions > Can't say about Windows clients, but for all my Linux machines, I'm > using Rsnapshot, either on public or LAN servers. Basically uses rsync > over SSH,