Displaying 20 results from an estimated 8000 matches similar to: "Prolonged transfers turn hard drive into read-only."

2006 Oct 28 (2) - hard drive failing in linux raid
Hello all. I have a server with a Linux software RAID1 setup between
two drives of the same model: one hard drive as primary IDE master
and the second hard drive as secondary master. Now the primary master hard
drive is displaying a lot of SMART errors, so I would like to remove it
and replace it with another drive (different brand but same size).
Partitions are /dev/md0 through /dev/md5. I think I know what
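
For context, the usual replacement sequence for a setup like this looks roughly as follows (a sketch only; the IDE device names /dev/hda and /dev/hdc and the per-array partition mapping are assumptions):

  # Mark the failing disk's partition as faulty and pull it from each array
  mdadm /dev/md0 --fail /dev/hda1 --remove /dev/hda1
  # ...repeat for /dev/md1 through /dev/md5 with the matching partitions...

  # After swapping in the new disk, copy the partition table from the good drive
  sfdisk -d /dev/hdc | sfdisk /dev/hda

  # Add the new partitions back and watch the resync
  mdadm /dev/md0 --add /dev/hda1
  cat /proc/mdstat

A different brand of the same nominal size is normally fine, as long as each new partition is at least as large as the one it replaces.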

2019 Mar 01 (2) - What files to edit when changing the sdX of hard drives?
On Thu, Feb 28, 2019 at 05:19:49PM +0100, Nicolas Kovacs (info at microlinux.fr) wrote:
> > On 28/02/2019 at 04:12, Jobst Schmalenbach wrote:
> > I want to lock in the SDA/SDB/SDC for my drives
>
> In short : use UUIDs or labels instead of hardcoding /dev/sdX.
I **KNOW** how to use UUIDs ... this is NOT the reason why I am doing this!
I *NEED* the order of the disks to be
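
The two standard ways to make this independent of probe order are filesystem UUIDs or labels in fstab and persistent udev symlinks keyed to the drive's serial number; a minimal sketch (the UUID, serial number and symlink name below are made-up examples):

  # /etc/fstab: mount by filesystem UUID instead of /dev/sdX
  UUID=3e6be9de-8139-4a1c-9106-a43f08d82306  /data  xfs  defaults  0 0

  # /etc/udev/rules.d/99-local-disknames.rules: stable symlink keyed to the serial
  SUBSYSTEM=="block", KERNEL=="sd?", ENV{ID_SERIAL_SHORT}=="WD-WCC4EXAMPLE", SYMLINK+="disk-backup"

The kernel's sdX assignment itself is not guaranteed to be stable across boots or hardware changes, which is why the persistent-naming mechanisms exist.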

2011 Sep 19 (1) - mdadm and drive identification?
I have a server with 16 drives in it, connected to a 3ware 9650SE-16ML SATA RAID card. I have the card set to export all the drives as JBOD because I prefer Linux to do the reporting of drive and RAID health. I'm using mdadm to create a RAID6 with a hot spare. Done this way, I can take the disks, put them on a completely different SATA controller/computer, and still have the RAID
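
When the controller exports plain JBOD like this, the usual way to tie an md member back to a physical drive is by serial number or controller port; for example (a sketch, device names are placeholders):

  # Which devices belong to the array, and their states
  mdadm --detail /dev/md0

  # Serial number of a given member, to match against the drive label
  smartctl -i /dev/sdc | grep -i serial

  # Persistent names udev already maintains, keyed to serial and to port
  ls -l /dev/disk/by-id/ /dev/disk/by-path/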

2020 Sep 18 (0) - Drive failed in 4-drive md RAID 10
> I got the email that a drive in my 4-drive RAID10 setup failed. What are
> my
> options?
>
> Drives are WD1000FYPS (Western Digital 1 TB 3.5" SATA).
>
> mdadm.conf:
>
> # mdadm.conf written out by anaconda
> MAILADDR root
> AUTO +imsm +1.x -all
> ARRAY /dev/md/root level=raid10 num-devices=4
> UUID=942f512e:2db8dc6c:71667abc:daf408c3

2014 Oct 02 (0) - [PATCH v2 1/4] appliance: Use dhclient instead of hard-coding IP address of appliance.
qemu in SLIRP mode offers DHCP services to the appliance. We don't
use them, but use a fixed IP address instead. This changes the
appliance to get its IP address using DHCP.
Note: This is only used when the network is enabled. dhclient is
somewhat slower, but the penalty (a few seconds) is only paid for
network users. We could consider using the faster systemd dhcp client
instead.
---
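
The shape of the change, expressed as shell commands rather than the actual patch (the interface name and the qemu SLIRP addresses below are assumptions):

  # Before: a fixed address chosen to match qemu's user-mode network
  ip addr add 10.0.2.15/24 dev eth0
  ip link set eth0 up
  ip route add default via 10.0.2.2

  # After: let qemu's built-in DHCP server hand out the lease
  dhclient eth0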

2019 Mar 01 (0) - What files to edit when changing the sdX of hard drives?
On 2/28/19 10:04 PM, Jobst Schmalenbach wrote:
> On Thu, Feb 28, 2019 at 05:19:49PM +0100, Nicolas Kovacs (info at microlinux.fr) wrote:
>> On 28/02/2019 at 04:12, Jobst Schmalenbach wrote:
>>> I want to lock in the SDA/SDB/SDC for my drives
>>
>> In short : use UUIDs or labels instead of hardcoding /dev/sdX.
>
> I **KNOW** how to use UUIDs ... this is

2020 Sep 18 (4) - Drive failed in 4-drive md RAID 10
I got the email that a drive in my 4-drive RAID10 setup failed. What are my
options?
Drives are WD1000FYPS (Western Digital 1 TB 3.5" SATA).
mdadm.conf:
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/root level=raid10 num-devices=4
UUID=942f512e:2db8dc6c:71667abc:daf408c3
/proc/mdstat:
Personalities : [raid10]
md127 : active raid10 sdf1[2](F)
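
The usual recovery path for a failed member looks roughly like this (a sketch; /dev/sdf1 is taken from the mdstat excerpt above, while the surviving partner /dev/sde used as the partitioning template is an assumption):

  # Drop the failed partition from the array (it is already marked (F))
  mdadm /dev/md127 --remove /dev/sdf1

  # After physically replacing the disk, copy the partition layout from a healthy member
  sfdisk -d /dev/sde | sfdisk /dev/sdf

  # Add the new partition back and let the RAID10 resync
  mdadm /dev/md127 --add /dev/sdf1
  cat /proc/mdstat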

2018 Sep 11 (1) - [PATCH] daemon: consider /etc/mdadm/mdadm.conf while inspecting mountpoints.
From: Nikolay Ivanets <stenavin@gmail.com>
Inspection code checks /etc/mdadm.conf to map MD device paths listed in
mdadm.conf to MD device paths in the guestfs appliance. However, on some
operating systems (e.g. Ubuntu) mdadm.conf has an alternative location:
/etc/mdadm/mdadm.conf.
This patch considers the alternative location of mdadm.conf as well.
---
daemon/inspect_fs_unix_fstab.ml | 13
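
In effect the lookup gains a fallback to the Debian/Ubuntu path when the traditional one is absent; the equivalent logic in shell terms (a sketch only, the real change lives in the OCaml daemon code):

  if [ -f /etc/mdadm.conf ]; then
      conf=/etc/mdadm.conf
  elif [ -f /etc/mdadm/mdadm.conf ]; then
      conf=/etc/mdadm/mdadm.conf
  fi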

2006 Apr 11 (1) - SATA Raid 5 and losing a drive
Hi Folks -
Using CentOS on a server destined to have a dozen SATA drives in it.
The server is fine; RAID 5 is set up on groups of 4 SATA drives.
Today we decided to disconnect one SATA drive to simulate a failure. The
box trucked on fine... a little too fine. We waited some minutes but no
problem was visible in /proc/mdstat or in /var/log/messages or on the
console.
I ran mdadm --monitor
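
For reference, the checks and monitoring setup that usually apply here (a sketch; the array name is a placeholder):

  # Did md actually notice? Look for (F) flags or missing members
  cat /proc/mdstat
  mdadm --detail /dev/md0

  # Run the monitor as a daemon and mail alerts for degraded arrays
  mdadm --monitor --scan --daemonise --mail=root

Note that md generally only marks a member as failed once an I/O to it actually fails, so a mostly idle array can keep looking healthy for a while after a drive is pulled.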

2019 Jan 29 (0) - C7, mdadm issues
On 29/01/19 18:47, mark wrote:
> Alessandro Baggi wrote:
>> On 29/01/19 15:03, mark wrote:
>>
>>> I've no idea what happened, but the box I was working on last week has
>>> a *second* bad drive. Actually, I'm starting to wonder about that
>>> particular hot-swap bay.
>>>
>>> Anyway, mdadm --detail shows /dev/sdb1

2019 Jan 30 (0) - C7, mdadm issues
On 29/01/19 20:42, mark wrote:
> Alessandro Baggi wrote:
>> On 29/01/19 18:47, mark wrote:
>>> Alessandro Baggi wrote:
>>>> On 29/01/19 15:03, mark wrote:
>>>>
>>>>> I've no idea what happened, but the box I was working on last week
>>>>> has a *second* bad drive. Actually, I'm starting to wonder about

2008 Jul 17 (2) - lvm errors after replacing drive in raid 10 array
I thought I'd test replacing a failed drive in a 4-drive RAID 10 array on
a CentOS 5.2 box before it goes online and before a drive really fails.
I 'mdadm fail'ed and 'remove'd the drive, powered off, replaced it, partitioned with
sfdisk -d /dev/sda | sfdisk /dev/sdb, and finally 'mdadm add'ed it.
Everything seems fine until I try to create a snapshot lv. (Creating a
snapshot lv
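
The swap procedure being described, spelled out together with the step that then fails (a sketch; device, VG and LV names are placeholders):

  # Fail and remove the old member, then power off and replace the disk
  mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

  # Clone the partition table from the surviving drive and re-add
  sfdisk -d /dev/sda | sfdisk /dev/sdb
  mdadm /dev/md0 --add /dev/sdb1

  # The step that fails in this report: creating a snapshot LV
  lvcreate --snapshot --size 1G --name testsnap /dev/vg0/home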

2019 Jan 30 (0) - C7, mdadm issues
On 30/01/19 14:02, mark wrote:
> On 01/30/19 03:45, Alessandro Baggi wrote:
>> On 29/01/19 20:42, mark wrote:
>>> Alessandro Baggi wrote:
>>>> On 29/01/19 18:47, mark wrote:
>>>>> Alessandro Baggi wrote:
>>>>>> On 29/01/19 15:03, mark wrote:
>>>>>>
>>>>>>> I've no idea what

2019 Jan 29 (2) - C7, mdadm issues
Alessandro Baggi wrote:
> On 29/01/19 15:03, mark wrote:
>
>> I've no idea what happened, but the box I was working on last week has
>> a *second* bad drive. Actually, I'm starting to wonder about that
>> particular hot-swap bay.
>>
>> Anyway, mdadm --detail shows /dev/sdb1 removed. I've added /dev/sdi1...
>> but see both /dev/sdh1 and
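
To work out whether the same physical bay keeps eating drives, it helps to map the kernel names to controller ports and serial numbers (a sketch; device names are placeholders):

  # Member states, including anything marked faulty or removed
  mdadm --detail /dev/md0

  # Which port/bay each disk hangs off
  ls -l /dev/disk/by-path/

  # Serial number of the suspect disk, to match against the physical label
  smartctl -i /dev/sdi | grep -i serial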

2019 Jan 30 (0) - C7, mdadm issues
On 30/01/19 16:33, mark wrote:
> Alessandro Baggi wrote:
>> On 30/01/19 14:02, mark wrote:
>>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>>> On 29/01/19 20:42, mark wrote:
>>>>> Alessandro Baggi wrote:
>>>>>> On 29/01/19 18:47, mark wrote:
>>>>>>> Alessandro Baggi wrote:

2019 Jan 30 (0) - C7, mdadm issues
> On 01/30/19 03:45, Alessandro Baggi wrote:
>> On 29/01/19 20:42, mark wrote:
>>> Alessandro Baggi wrote:
>>>> On 29/01/19 18:47, mark wrote:
>>>>> Alessandro Baggi wrote:
>>>>>> On 29/01/19 15:03, mark wrote:
>>>>>>
>>>>>>> I've no idea what happened, but the box I was working

2012 Feb 12 (0) - Bug#659642: xen-hypervisor-4.0-amd64: outl segfaults when restoring monitor from sleep with DPMS
Package: xen-hypervisor-4.0-amd64
Version: 4.0.1-4
Severity: normal
Tags: squeeze
When the monitor is being restored from power saving mode via DPMS, X will lock
up/restart. This only occurs when running the Xen hypervisor. Running just 2.6.32-5
-xen-amd64 doesn't produce this effect. Note that both 2.6.32-5-xen-amd64 and
2.6.32-5-xen-amd64 with Xen 4.0.1 are running with nopat (to workaround

2019 Jan 29 (2) - C7, mdadm issues
Alessandro Baggi wrote:
> On 29/01/19 18:47, mark wrote:
>> Alessandro Baggi wrote:
>>> On 29/01/19 15:03, mark wrote:
>>>
>>>> I've no idea what happened, but the box I was working on last week
>>>> has a *second* bad drive. Actually, I'm starting to wonder about
>>>> that particular hot-swap bay.
>>>>

2019 Jan 30 (1) - C7, mdadm issues
Alessandro Baggi wrote:
> On 30/01/19 16:33, mark wrote:
>
>> Alessandro Baggi wrote:
>>
>>> On 30/01/19 14:02, mark wrote:
>>>
>>>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>>>
>>>>> On 29/01/19 20:42, mark wrote:
>>>>>
>>>>>> Alessandro Baggi wrote:

2011 Nov 24 (1) - mdadm / Ubuntu 10.04 error
md_create: mdadm: boot: mdadm: boot is not a block device. at /home/rjones/d/libguestfs/images/guest-aux/make-fedora-img.pl line 95.
Looking into this, it appears the old version of mdadm shipped in
Ubuntu (mdadm 2.6.7) doesn't support the notion of giving arbitrary
names to devices. Thus you have to do:
mdadm --create /dev/md0 [devices]
We do:
mdadm --create boot [devices]
which it
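
The two invocation forms being contrasted, written out in full (a sketch; the RAID level and member devices are placeholders):

  # What old mdadm (2.6.7) requires: the array must be named as a numbered node
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

  # What libguestfs passes: a bare array name, which newer mdadm accepts
  mdadm --create boot --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1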