2011 Oct 12
1
raid on large disks?
What's the right way to set up >2TB partitions for raid1 autoassembly?
I don't need to boot from this but I'd like it to come up and mount
automatically at boot.
--
Les Mikesell
lesmikesell at gmail.com
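One common way to do what the question asks (a >2TB RAID1 that autoassembles and mounts at boot) is GPT partitioning plus an mdadm array recorded in mdadm.conf and fstab. This is a hedged sketch; the device names sdb/sdc and the /data mount point are assumptions, not from the thread:

```shell
# >2TB partitions need a GPT label; MBR tops out around 2TB.
parted -s /dev/sdb mklabel gpt mkpart primary 1MiB 100%
parted -s /dev/sdc mklabel gpt mkpart primary 1MiB 100%

# Create the mirror; the default 1.2 superblock is autoassembled at boot.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Record the array UUID so the initramfs/udev can assemble it reliably.
mdadm --detail --scan >> /etc/mdadm.conf

# Filesystem, then an fstab entry by UUID for automatic mounting.
mkfs.ext4 /dev/md0
echo "UUID=$(blkid -s UUID -o value /dev/md0) /data ext4 defaults 0 2" >> /etc/fstab
```

Mounting by UUID rather than /dev/md0 avoids surprises if the array gets renumbered on a later boot.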
2015 Jul 02
3
An mdadm question
CentOS 7.
I have a server with four drives. 1 is /, and the other three are in
RAID5. I need to pull a drive, so I can test whether the server can read >
2TB drives. I've been googling, but don't want to screw the server up....
I think I'd like to
1. stop the RAID
2. pull a drive
3. put in a large drive, and run parted, and mkfs
4. pull the large drive
5. replace the
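The numbered plan above maps to a short command sequence. A hedged sketch, assuming the RAID5 array is /dev/md0 and the test drive appears as /dev/sdd (both are assumptions; the drives here must not carry data you care about):

```shell
# 1. stop the RAID so a member can be pulled cleanly
mdadm --stop /dev/md0

# 2. physically pull a member drive, insert the large (>2TB) drive

# 3. partition the large drive (GPT for >2TB) and make a filesystem
parted -s /dev/sdd mklabel gpt mkpart primary 1MiB 100%
mkfs.ext4 /dev/sdd1

# 4./5. pull the large drive, reinsert the original member, then
# reassemble; mdadm will resync if the member fell out of date
mdadm --assemble --scan
```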
2011 Dec 06
4
/dev/sda
We're just using Linux software RAID for the first time - RAID1, and the
other day, a drive failed. We have a clone machine to play with, so it's
not that critical, but....
I partitioned a replacement drive. On the clone, I marked the RAID
partitions on /dev/sda as failed, removed them, and pulled the drive.
After several iterations, I waited a minute or two, until all messages
had stopped,
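The fail/remove/pull procedure described here is usually done per partition with mdadm. A minimal sketch, assuming a RAID1 with arrays md0/md1 built on sda1/sda2 (the array and partition names are assumptions matching the description, not from the post):

```shell
# Mark each /dev/sda member failed, then remove it from its array.
mdadm /dev/md0 --fail /dev/sda1
mdadm /dev/md0 --remove /dev/sda1
mdadm /dev/md1 --fail /dev/sda2
mdadm /dev/md1 --remove /dev/sda2

# Now the drive can be pulled. After inserting and partitioning the
# replacement, add the new partitions back and watch the resync:
mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md1 --add /dev/sda2
cat /proc/mdstat
```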
2011 Oct 26
0
PCIe errors handled by OS
...A
sdb:
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't
support DPO or FUA
sda: sda1 sda2 sda3
sd 0:0:0:0: [sda] Attached SCSI disk
sdb1 sdb2 sdb3
sd 1:0:0:0: [sdb] Attached SCSI disk
dracut: Autoassembling MD Raid
md: md0 stopped.
md: bind<sdb1>
md: bind<sda1>
md: raid1 personality registered for level 1
bio: create slab <bio-1> at 1
md/raid1:md0: active with 2 out of 2 mirrors
md0: detected capacity change from 0 to 2146369536
dracut: mdadm: /dev/md0 has been started with 2 driv...
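After a boot log like the one above, two read-only commands confirm what dracut/mdadm actually assembled (md0 matches the array name in the log; no changes are made to the array):

```shell
# Kernel's view: registered personalities, member disks, sync state.
cat /proc/mdstat

# Superblock metadata for the assembled array: level, UUID, member
# status, and whether the mirror is clean or resyncing.
mdadm --detail /dev/md0
```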