
Displaying 20 results from an estimated 30000 matches similar to: "C7, and my RAID"

2019 Jan 30
0
C7, mdadm issues
On 30/01/19 16:33, mark wrote: > Alessandro Baggi wrote: >> On 30/01/19 14:02, mark wrote: >>> On 01/30/19 03:45, Alessandro Baggi wrote: >>>> On 29/01/19 20:42, mark wrote: >>>>> Alessandro Baggi wrote: >>>>>> On 29/01/19 18:47, mark wrote: >>>>>>> Alessandro Baggi wrote:
2019 Jan 30
1
C7, mdadm issues
Alessandro Baggi wrote: > On 30/01/19 16:33, mark wrote: > >> Alessandro Baggi wrote: >> >>> On 30/01/19 14:02, mark wrote: >>> >>>> On 01/30/19 03:45, Alessandro Baggi wrote: >>>> >>>>> On 29/01/19 20:42, mark wrote: >>>>> >>>>>> Alessandro Baggi wrote:
2019 Jan 30
2
C7, mdadm issues
Alessandro Baggi wrote: > On 30/01/19 14:02, mark wrote: >> On 01/30/19 03:45, Alessandro Baggi wrote: >>> On 29/01/19 20:42, mark wrote: >>>> Alessandro Baggi wrote: >>>>> On 29/01/19 18:47, mark wrote: >>>>>> Alessandro Baggi wrote: >>>>>>> On 29/01/19 15:03, mark wrote:
2019 Jan 30
0
C7, mdadm issues
On 30/01/19 14:02, mark wrote: > On 01/30/19 03:45, Alessandro Baggi wrote: >> On 29/01/19 20:42, mark wrote: >>> Alessandro Baggi wrote: >>>> On 29/01/19 18:47, mark wrote: >>>>> Alessandro Baggi wrote: >>>>>> On 29/01/19 15:03, mark wrote: >>>>>> >>>>>>> I've no idea what
2019 Jan 30
0
C7, mdadm issues
On 29/01/19 20:42, mark wrote: > Alessandro Baggi wrote: >> On 29/01/19 18:47, mark wrote: >>> Alessandro Baggi wrote: >>>> On 29/01/19 15:03, mark wrote: >>>> >>>>> I've no idea what happened, but the box I was working on last week >>>>> has a *second* bad drive. Actually, I'm starting to wonder about
2008 Apr 17
2
Question about RAID 5 array rebuild with mdadm
I'm using Centos 4.5 right now, and I had a RAID 5 array stop because two drives became unavailable. After adjusting the cables on several occasions and shutting down and restarting, I was able to see the drives again. This is when I snatched defeat from the jaws of victory. Please, someone with vast knowledge of how RAID 5 with mdadm works, tell me if I have any chance at all
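
Whether anything is recoverable usually comes down to how far the event counters on the surviving members have diverged. A minimal sketch of the usual forced-reassembly attempt, with illustrative names (/dev/md0 and /dev/sd[bcde]1 are assumptions, not taken from the post), and only after the remaining good data is backed up:

  mdadm --examine /dev/sd[bcde]1    # compare the event counters on the members
  mdadm --stop /dev/md0             # make sure any half-assembled array is stopped
  mdadm --assemble --force /dev/md0 /dev/sd[bcde]1
  cat /proc/mdstat                  # see whether it came back clean or degraded
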
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote: > On 29/01/19 18:47, mark wrote: >> Alessandro Baggi wrote: >>> On 29/01/19 15:03, mark wrote: >>> >>>> I've no idea what happened, but the box I was working on last week >>>> has a *second* bad drive. Actually, I'm starting to wonder about >>>> that particular hot-swap bay. >>>>
2019 Jan 30
4
C7, mdadm issues
On 01/30/19 03:45, Alessandro Baggi wrote: > On 29/01/19 20:42, mark wrote: >> Alessandro Baggi wrote: >>> On 29/01/19 18:47, mark wrote: >>>> Alessandro Baggi wrote: >>>>> On 29/01/19 15:03, mark wrote: >>>>> >>>>>> I've no idea what happened, but the box I was working on last week
2019 Jan 31
0
C7, mdadm issues
> On 30/01/19 16:49, Simon Matter wrote: >>> On 01/30/19 03:45, Alessandro Baggi wrote: >>>> On 29/01/19 20:42, mark wrote: >>>>> Alessandro Baggi wrote: >>>>>> On 29/01/19 18:47, mark wrote: >>>>>>> Alessandro Baggi wrote: >>>>>>>> On 29/01/19 15:03, mark wrote:
2019 Jan 29
0
C7, mdadm issues
On 29/01/19 18:47, mark wrote: > Alessandro Baggi wrote: >> On 29/01/19 15:03, mark wrote: >> >>> I've no idea what happened, but the box I was working on last week has >>> a *second* bad drive. Actually, I'm starting to wonder about that >>> particular hot-swap bay. >>> >>> Anyway, mdadm --detail shows /dev/sdb1
2019 Jan 30
0
C7, mdadm issues
> On 01/30/19 03:45, Alessandro Baggi wrote: >> On 29/01/19 20:42, mark wrote: >>> Alessandro Baggi wrote: >>>> On 29/01/19 18:47, mark wrote: >>>>> Alessandro Baggi wrote: >>>>>> On 29/01/19 15:03, mark wrote: >>>>>> >>>>>>> I've no idea what happened, but the box I was working
2019 Jan 29
2
C7, mdadm issues
Alessandro Baggi wrote: > On 29/01/19 15:03, mark wrote: > >> I've no idea what happened, but the box I was working on last week has >> a *second* bad drive. Actually, I'm starting to wonder about that >> particular hot-swap bay. >> >> Anyway, mdadm --detail shows /dev/sdb1 removed. I've added /dev/sdi1... >> but see both /dev/sdh1 and
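
The usual replace-a-member sequence for this situation, sketched with /dev/md0 as the array name (an assumption; the member names /dev/sdb1 and /dev/sdi1 are the ones from the post):

  mdadm --detail /dev/md0                      # confirm which member is failed/removed
  mdadm --manage /dev/md0 --remove /dev/sdb1   # clear the old member if it is still listed
  mdadm --manage /dev/md0 --add /dev/sdi1      # add the replacement and let it resync
  cat /proc/mdstat                             # watch the rebuild
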
2019 Feb 25
0
Problem with mdadm, raid1 and automatically adds any disk to raid
In article <20190225050144.GA5984 at button.barrett.com.au>, Jobst Schmalenbach <jobst at barrett.com.au> wrote: > Hi. > > CENTOS 7.6.1810, fresh install - use this as a base to create/upgrade new/old machines. > > I was trying to setup two disks as a RAID1 array, using these lines > > mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1
2019 Jan 30
3
C7, mdadm issues
On 30/01/19 16:49, Simon Matter wrote: >> On 01/30/19 03:45, Alessandro Baggi wrote: >>> On 29/01/19 20:42, mark wrote: >>>> Alessandro Baggi wrote: >>>>> On 29/01/19 18:47, mark wrote: >>>>>> Alessandro Baggi wrote: >>>>>>> On 29/01/19 15:03, mark wrote: >>>>>>>
2016 Mar 12
4
C7 + UEFI + GPT + RAID1
Hi list, I'm new to UEFI and GPT. For several years I've used an MBR partition table. I've installed my system on software raid1 (mdadm), using md0 (sda1,sdb1) for swap, md1 (sda2,sdb2) for /, and md2 (sda3,sdb3) for /home. From several how-tos on raid1 installation, I must put each partition on a different md device. I asked some time ago whether it's more correct to create the
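
One layout commonly suggested for UEFI, sketched with illustrative partition numbers (the device names and the choice to keep a small EFI System Partition per disk are assumptions, not from the thread): the ESP is read directly by the firmware, so it either stays outside md entirely or is mirrored with 1.0 metadata so the firmware still sees a plain FAT filesystem.

  # sda1/sdb1 = EFI System Partitions; the rest mirrored much like the old MBR setup
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # swap
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3   # /
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda4 /dev/sdb4   # /home
  # optionally mirror the ESP itself; 1.0 metadata sits at the end of the device
  mdadm --create /dev/md3 --level=1 --metadata=1.0 --raid-devices=2 /dev/sda1 /dev/sdb1
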
2019 Feb 25
0
Problem with mdadm, raid1 and automatically adds any disk to raid
> Hi. > > CENTOS 7.6.1810, fresh install - use this as a base to create/upgrade > new/old machines. > > I was trying to setup two disks as a RAID1 array, using these lines > > mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 > /dev/sdc1 > mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 > /dev/sdc2 > mdadm
2010 Oct 24
0
CentOS Digest, Vol 69, Issue 24
Spam Sent on the Sprint® Now Network from my BlackBerry® -----Original Message----- From: centos-request at centos.org Sender: centos-bounces at centos.org Date: Sun, 24 Oct 2010 12:00:02 To: <centos at centos.org> Reply-To: centos at centos.org Subject: CentOS Digest, Vol 69, Issue 24 Send CentOS mailing list submissions to centos at centos.org To subscribe or unsubscribe via the World
2010 Oct 19
3
more software raid questions
hi all! back in Aug several of you assisted me in solving a problem where one of my drives had dropped out of (or been kicked out of) the raid1 array. something vaguely similar appears to have happened just a few mins ago, upon rebooting after a small update. I received four emails like this, one for /dev/md0, one for /dev/md1, one for /dev/md125 and one for /dev/md126: Subject: DegradedArray
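
A minimal triage sketch for a DegradedArray event (the array and member names below are illustrative, not from the message):

  cat /proc/mdstat                             # degraded arrays show [U_] instead of [UU]
  mdadm --detail /dev/md0                      # identify which member is missing
  mdadm --examine /dev/sda2                    # check the dropped partition's superblock and event count
  mdadm --manage /dev/md0 --re-add /dev/sda2   # re-add if the metadata allows; otherwise --add does a full resync
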
2010 Nov 14
3
RAID Resynch...??
So still coming up to speed with mdadm and I notice this morning one of my servers acting sluggish... so when I looked at the mdadm raid device I see this:

mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Mon Sep 27 22:47:44 2010
     Raid Level : raid10
     Array Size : 976759808 (931.51 GiB 1000.20 GB)
  Used Dev Size : 976759808 (931.51 GiB 1000.20 GB)
           Raid
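
The sluggishness is usually just the resync competing for I/O. A quick way to confirm and, if needed, throttle it (these are the standard md sysctls, in KiB/s per device, not anything taken from the post):

  cat /proc/mdstat                           # shows resync/recovery progress and current speed
  sysctl dev.raid.speed_limit_min            # default 1000
  sysctl dev.raid.speed_limit_max            # default 200000
  sysctl -w dev.raid.speed_limit_max=50000   # cap the resync if it is starving the server
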
2019 Feb 25
7
Problem with mdadm, raid1 and automatically adds any disk to raid
Hi. CENTOS 7.6.1810, fresh install - use this as a base to create/upgrade new/old machines. I was trying to set up two disks as a RAID1 array, using these lines

mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2
mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2
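
Note that --level=0 builds a striped RAID0 set, not a mirror. If the intent really was RAID1 as the subject says, the same commands would normally use --level=1 (a sketch reusing the first two partition pairs from the post; the third pair is truncated above, so it is left out):

  mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
  mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2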