2020 Feb 03
0
Hard disk activity will not die down
> ----- Original Message -----
> From: Chris Pemberton [mailto:pchris.bci at gmail.com]
> To: centos at centos.org
> Sent: Mon, 3 Feb 2020 13:28:27 -0600
> Subject: [CentOS] Hard disk activity will not die down
>
> I updated my backup server this weekend from CentOS 7 to CentOS 8.
> OS disk is SSD, /dev/md0 are two 4TB WD mechanical drives.
> No hardware was changed.
2020 Feb 03
0
Hard disk activity will not die down
Hi,
Ext4 is (slowly) initializing its block group metadata in the background (lazy init), as far as I can remember. Patience should do the trick :)
HTH,
On 3 February 2020 at 20:28:27 GMT+01:00, Chris Pemberton <pchris.bci at gmail.com> wrote:
>I updated my backup server this weekend from CentOS 7 to CentOS 8.
>OS disk is SSD, /dev/md0 are two 4TB WD mechanical drives.
>No hardware was changed.
>
>1. wiped all drives
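If the cause is ext4's deferred initialization, it can be confirmed directly. A minimal sketch, assuming the filesystem on /dev/md0 is already mounted; the commands are generic, not from the thread:

# The deferred initialization runs as a kernel thread; if it shows up, that's the writer
ps ax | grep '[e]xt4lazyinit'

# Watch throughput on the array; steady background writes are consistent with lazy init
iostat -x 5 /dev/md0

# For future rebuilds, the initialization can be done up front instead of in the background
mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/md0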
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote:
> On 29/01/19 15:03, mark wrote:
>
>> I've no idea what happened, but the box I was working on last week has
>> a *second* bad drive. Actually, I'm starting to wonder about that
>> particular hot-swap bay.
>>
>> Anyway, mdadm --detail shows /dev/sdb1 as removed. I've added /dev/sdi1...
>> but see both /dev/sdh1 and
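A sketch of how the member states are usually checked and a stale slot cleared before the rebuild, assuming the array is /dev/md0 and using the device names from the post; the exact sequence is an assumption, not what the thread settled on:

# Which slots are active, spare, faulty or removed
mdadm --detail /dev/md0

# What the superblocks on the candidate partitions say
mdadm --examine /dev/sdh1 /dev/sdi1

# Clear out members recorded as failed or no longer present, then watch the rebuild
mdadm /dev/md0 --remove failed
mdadm /dev/md0 --remove detached
cat /proc/mdstat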
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote:
> On 29/01/19 18:47, mark wrote:
>> Alessandro Baggi wrote:
>>> On 29/01/19 15:03, mark wrote:
>>>
>>>> I've no idea what happened, but the box I was working on last week
>>>> has a *second* bad drive. Actually, I'm starting to wonder about
>>>> that particular hot-swap bay.
>>>>
2019 Jan 30
4
C7, mdadm issues
On 01/30/19 03:45, Alessandro Baggi wrote:
> On 29/01/19 20:42, mark wrote:
>> Alessandro Baggi wrote:
>>> On 29/01/19 18:47, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 15:03, mark wrote:
>>>>>
>>>>>> I've no idea what happened, but the box I was working on last week
2019 Jan 30
2
C7, mdadm issues
Alessandro Baggi wrote:
> On 30/01/19 14:02, mark wrote:
>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>> On 29/01/19 20:42, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 18:47, mark wrote:
>>>>>> Alessandro Baggi wrote:
>>>>>>> On 29/01/19 15:03, mark wrote:
2014 Dec 03
7
DegradedArray message
Received the following message in mail to root:
Message 257:
From root at desk4.localdomain Tue Oct 28 07:25:37 2014
Return-Path: <root at desk4.localdomain>
X-Original-To: root
Delivered-To: root at desk4.localdomain
From: mdadm monitoring <root at desk4.localdomain>
To: root at desk4.localdomain
Subject: DegradedArray event on /dev/md0:desk4
Date: Tue, 28 Oct 2014 07:25:27
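Mails like this are produced by the mdadm monitor. A minimal sketch of how that monitoring is typically wired up; the MAILADDR value is a placeholder, and on CentOS the monitor normally runs as the mdmonitor service rather than being started by hand:

# /etc/mdadm.conf: where event mail should go
MAILADDR root

# Started manually, the monitor looks like this (checks every 30 minutes)
mdadm --monitor --scan --daemonise --delay=1800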
2014 Dec 04
2
DegradedArray message
Thanks for all the responses. A little more digging revealed:
md0 is made up of two 250G disks on which the OS and a very large /var
partition reside for a number of virtual machines.
md1 is made up of two 2T disks on which /home resides.
The challenge is that disk 0 of md0 is the problem, and it has a 524M /boot
partition outside of the RAID partition.
My plan is to back up /home (md1) and at a
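A rough sketch of the usual replacement steps for this layout. The device names (failing disk sda, healthy disk sdb) and partition numbers are assumptions based on the description above, so they must be checked before anything is run:

# Take the failing member out of the array
mdadm /dev/md0 --fail /dev/sda2 --remove /dev/sda2

# After swapping the disk, copy the partition layout from the surviving drive
sfdisk -d /dev/sdb | sfdisk /dev/sda

# Re-add the RAID member, then restore the non-RAID /boot from backup and reinstall grub
mdadm /dev/md0 --add /dev/sda2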
2011 Apr 01
5
question on software raid
dmesg is not reporting any issues.
The /proc/mdstat looks fine.
md0 : active raid1 sdb1[1] sda1[0]
X blocks [2/2] [UU]
however /var/log/messages says:
smartd[3392] Device /dev/sda 20 offline uncorrectable sectors
The machine is running fine... the RAID array looks good - what
is up with smartd?
Thanks,
Jerry
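smartd is reading the drive's own SMART counters; md only notices a bad sector when it actually tries to read it. A small sketch of how to dig further (generic commands, device names taken from the post):

# Raw SMART attributes; IDs 197/198 are current-pending and offline-uncorrectable sectors
smartctl -A /dev/sda

# Make md read every sector of the array; on RAID1 a read error is repaired
# by rewriting the block from the other mirror
echo check > /sys/block/md0/md/sync_action
cat /proc/mdstat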
2012 Jun 07
1
mdadm: failed to write superblock to
Hello,
I have a little problem. Our server has a broken RAID.
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[2](F) sdb1[1]
2096064 blocks [2/1] [_U]
md2 : active raid1 sda3[2](F) sdb3[1]
1462516672 blocks [2/1] [_U]
md1 : active raid1 sda2[0] sdb2[1]
524224 blocks [2/2] [UU]
unused devices: <none>
I have removed the partition:
# mdadm --remove
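The command above is cut off in the archive. A sketch of the sequence usually meant here, with sda taken as the failed disk from the mdstat output; the partition numbers match that output, but the replace/re-add steps are an assumption:

# Remove the failed members from the two degraded arrays (md1 is still [UU])
mdadm /dev/md0 --remove /dev/sda1
mdadm /dev/md2 --remove /dev/sda3

# Once the disk has been replaced and repartitioned, add the members back
mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md2 --add /dev/sda3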
2019 Jan 30
1
C7, mdadm issues
Alessandro Baggi wrote:
> On 30/01/19 16:33, mark wrote:
>
>> Alessandro Baggi wrote:
>>
>>> On 30/01/19 14:02, mark wrote:
>>>
>>>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>>>
>>>>> On 29/01/19 20:42, mark wrote:
>>>>>
>>>>>> Alessandro Baggi wrote:
2009 Apr 28
2
new install and software raid
Is there a reason why, after a software RAID install (from kickstart),
md1 is always unclean? md0 seems fine.
The boot screen says md1 is dirty and
cat /proc/mdstat shows md1 as being rebuilt.
Any ideas?
Jerry
--------------- my kickstart --------------
echo "bootloader --location=mbr --driveorder=$HD1SHORT --append=\"rhgb
quiet\" " >
2019 Jan 30
3
C7, mdadm issues
On 30/01/19 16:49, Simon Matter wrote:
>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>> On 29/01/19 20:42, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 18:47, mark wrote:
>>>>>> Alessandro Baggi wrote:
>>>>>>> On 29/01/19 15:03, mark wrote:
>>>>>>>
2014 Sep 30
1
Centos 6 Software RAID 10 Setup
I am setting up a CentOS 6.5 box to host some OpenVZ containers. I
have a 120 GB SSD I am going to use for boot, / and swap. Should allow
for fast boots. I have a 4TB drive I am going to mount as /backup and
use to move container backups to, etc. The remaining four 3TB drives
I am putting in a software RAID 10 array and mounting as /vz, and all the
containers will go there. It will have by far the
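A minimal sketch of creating and mounting the four-drive RAID 10 array described above; the array name /dev/md10, the device letters sdc-sdf, the single RAID partition per drive and the ext4 choice are all assumptions, not details from the post:

# Build the RAID 10 array from one RAID partition on each of the four 3TB drives
mdadm --create /dev/md10 --level=10 --raid-devices=4 /dev/sd[cdef]1

# Filesystem, mount point and a persistent array definition
mkfs.ext4 /dev/md10
mkdir -p /vz
mount /dev/md10 /vz
mdadm --detail --scan >> /etc/mdadm.conf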
2009 May 08
3
Software RAID resync
I have configured 2x 500G SATA HDDs as software RAID1 with three partitions,
md0, md1 and md2, with md2 being 400+ gigs.
It has now been almost 36 hours and the status is:
cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdb1[1] hda1[0]
104320 blocks [2/2] [UU]
resync=DELAYED
md1 : active raid1 hdb2[1] hda2[0]
4096448 blocks [2/2] [UU]
resync=DELAYED
md2 : active raid1
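md resyncs arrays that share the same physical disks one at a time, which is why md0 and md1 sit at resync=DELAYED while md2 runs. A sketch of checking and raising the resync throttle; the 50000 KB/s figure is only an example:

# Current resync speed limits in KB/s
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max

# Temporarily raise the floor so the resync is not throttled on an otherwise idle box
echo 50000 > /proc/sys/dev/raid/speed_limit_min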
2011 Feb 14
2
rescheduling sector linux raid ?
Hi List,
What does this mean?
md: syncing RAID array md0
md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
md: using maximum available idle IO bandwidth (but not more than
200000 KB/sec) for reconstruction.
md: using 128k window, over a total of 2096384 blocks.
md: md0: sync done.
RAID1 conf printout:
--- wd:2 rd:2
disk 0, wo:0, o:1, dev:sda2
disk 1, wo:0, o:1, dev:sdb2
sd 0:0:0:0:
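The md lines above are just a normal resync log; the "rescheduling sector" messages usually come from the SCSI/ATA layer retrying a read on one of the member disks. A small sketch for checking whether the disks themselves are reporting trouble (generic commands, not from the thread):

# Overall SMART health and the drive's own error log
smartctl -H /dev/sda
smartctl -l error /dev/sda

# Mismatches found by the last check/repair pass on the array
cat /sys/block/md0/md/mismatch_cnt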
2016 Mar 12
4
C7 + UEFI + GPT + RAID1
Hi list,
I'm new to UEFI and GPT.
For several years I've used an MBR partition table. I've installed my
system on software RAID1 (mdadm) using md0 (sda1, sdb1) for swap,
md1 (sda2, sdb2) for /, and md2 (sda3, sdb3) for /home. From several how-tos
concerning RAID1 installation, I must put each partition on a different
md device. I asked some time ago whether it's more correct to create the
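For reference, a rough sketch of how the same three-array layout is often carved up on GPT for a UEFI install; the partition sizes, the sgdisk tool and keeping a plain EFI System Partition on each disk are assumptions, not recommendations from this thread:

# On each disk: an EFI System Partition, then three Linux RAID members
sgdisk -n 1:0:+512M -t 1:ef00 \
       -n 2:0:+8G   -t 2:fd00 \
       -n 3:0:+50G  -t 3:fd00 \
       -n 4:0:0     -t 4:fd00 /dev/sda

# Mirror the pairs as before: swap, /, /home
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2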
2010 Oct 19
3
more software raid questions
hi all!
Back in August several of you assisted me in solving a problem where one
of my drives had dropped out of (or been kicked out of) the RAID1 array.
Something vaguely similar appears to have happened just a few minutes ago,
upon rebooting after a small update. I received four emails like this,
one for /dev/md0, one for /dev/md1, one for /dev/md125 and one for
/dev/md126:
Subject: DegradedArray
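A sketch of the usual first checks when a member drops out across a reboot; the device names here are placeholders, and md125/md126 are often the same arrays assembled under auto-generated names:

# Which arrays exist and which members each one currently holds
cat /proc/mdstat
mdadm --detail /dev/md0

# Look at the superblock of the dropped partition, then try putting it back
mdadm --examine /dev/sdb1
mdadm /dev/md0 --re-add /dev/sdb1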
2014 Dec 09
2
DegradedArray message
On Thu, 2014-12-04 at 16:46 -0800, Gordon Messmer wrote:
> On 12/04/2014 05:45 AM, David McGuffey wrote:
> In practice, however, there's a bunch of information you didn't provide,
> so some of those steps are wrong.
>
> I'm not sure what dm-0, dm-2 and dm-3 are, but they're indicated in your
> mdstat. I'm guessing that you made partitions, and then made
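A quick way to see what dm-0, dm-2 and dm-3 actually map to, since device-mapper names in mdstat usually mean LVM or encrypted devices are involved somewhere in the stack; generic commands, not from the reply:

# Block device tree with device-mapper names resolved
lsblk -o NAME,KNAME,TYPE,SIZE,MOUNTPOINT

# Device-mapper targets with their major:minor numbers (the N in dm-N is the minor)
dmsetup info -c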
2014 Feb 07
3
Software RAID1 Failure Help
I am running software RAID1 on a somewhat critical server. Today I
noticed one drive is giving errors. Good thing I had RAID. I planned
on upgrading this server in the next month or so. Just wondering if there
is an easy way to fix this to avoid rushing the upgrade? Having a
single drive is slowing down reads as well, I think.
Thanks.
Feb 7 15:28:28 server smartd[2980]: Device: /dev/sdb
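If the failing disk is dragging read performance down, a common interim step is to fail it out and run degraded until the replacement is in hand; a sketch only, with /dev/sdb taken from the smartd line above and the partition number assumed:

# Confirm what SMART is actually complaining about
smartctl -a /dev/sdb

# Fail and remove the member so md stops sending I/O to it
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1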