----- Original Message -----
| I have a system running CentOS 6.3, with a SCSI attached RAID:
|
|
|
| http://www.raidweb.com/index.php/2012-10-24-12-40-09/janus-ii-scsi/2012-10-24-12-40-59.html
|
|
| For disaster recovery purposes, I want to build up a spare system
| which could take the place of the server hosting the RAID above.
|
| But here's what I see:
|
| # fdisk -l /dev/sdc
|
| WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util
| fdisk doesn't support GPT. Use GNU Parted.
|
|
| Disk /dev/sdc: 44004.7 GB, 44004691814400 bytes
| 255 heads, 63 sectors/track, 5349932 cylinders
| Units = cylinders of 16065 * 512 = 8225280 bytes
| Sector size (logical/physical): 512 bytes / 512 bytes
| I/O size (minimum/optimal): 524288 bytes / 524288 bytes
| Disk identifier: 0x00000000
|
|    Device Boot      Start         End      Blocks   Id  System
| /dev/sdc1               1      267350  2147483647+  ee  GPT
| Partition 1 does not start on physical sector boundary.
| #
|
|
| But here's the partitions I have:
|
| # df -k |grep sdc
| /dev/sdc1            15379809852 8627488256 6596071608  57% /space01
| /dev/sdc2             6248052728  905001184 5279574984  15% /space02
| /dev/sdc5             8175038780 2418326064 5673659088  30% /space03
| /dev/sdc4             6248052728 1444121916 4740454252  24% /space04
| /dev/sdc3             6248052728 1886640284 4297935884  31% /space05
| #
|
|
| How can I build up a new system to be ready for this existing RAID?
| Or will the latest/greatest CentOS just know what to do, and allow
| me to simply copy the /etc/fstab over and respect it?
Personally, I wouldn't (and don't) partition the disk(s) at all. Instead I use
LVM to manage the storage, which gives far greater flexibility and lets you
provision the disks better. I'd recommend you use LVM on your new machine to
create the volumes, and use file system volume labels instead of physical
device paths such as /dev/sdc1. Volume labels are not tied to a specific
system, although conflicting labels can cause problems.
pvcreate /dev/sdc
vgcreate DATA /dev/sdc
lvcreate -L 15379809852K -n space01 DATA
lvcreate -L 6248052728K -n space02 DATA
...
mkfs.xfs -L space01 /dev/DATA/space01
mkfs.xfs -L space02 /dev/DATA/space02
...
then in /etc/fstab
LABEL=space01 /space01 xfs defaults 0 0
LABEL=space02 /space02 xfs defaults 0 0
...
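To sanity-check the result before relying on it (just a sketch using the volume
names above), verify the labels and mount everything from fstab:

blkid /dev/DATA/space01
blkid /dev/DATA/space02
mount -a
df -h /space01 /space02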
This all depends on what you're trying to accomplish, of course, but I would
certainly recommend moving away from partitions. If you're just rsync'ing the
data over, the new volumes don't have to match the old sizes exactly; they just
have to be large enough to hold the data.
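For example, the initial copy could be something like this (rough sketch; the
old server's hostname and the exact rsync options are up to you):

rsync -aHAX --numeric-ids oldserver:/space01/ /space01/
rsync -aHAX --numeric-ids oldserver:/space02/ /space02/
...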
Right now you have over-provisioned the space considerably, and with fixed
partitions, managing the growth of, say, /space01 (which is already much larger
than the rest) is awkward: how would you grow or shrink the other partitions to
make room? With LVM you would provision each volume with some headroom over
what is used now and then grow the file systems as needed.
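Growing a volume and its XFS file system later is two commands (a sketch,
assuming the DATA/space01 naming above; the +500G is just an example amount):

lvextend -L +500G /dev/DATA/space01
xfs_growfs /space01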
As a side note, you may want to investigate Gluster or DRBD to actually
replicate the data across the nodes, giving you a more "true" replication and
fail-over configuration.
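For example, a two-node replicated Gluster volume looks roughly like this
(sketch only; the hostnames and brick paths are placeholders, and you'd want to
read the Gluster docs before deploying):

gluster volume create space01-repl replica 2 node1:/bricks/space01 node2:/bricks/space01
gluster volume start space01-repl
mount -t glusterfs node1:/space01-repl /space01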
--
James A. Peltier
Manager, IT Services - Research Computing Group
Simon Fraser University - Burnaby Campus
Phone : 778-782-6573
Fax : 778-782-3045
E-Mail : jpeltier at sfu.ca
Website : http://www.sfu.ca/itservices
"A successful person is one who can lay a solid foundation from the bricks
others have thrown at them." -David Brinkley via Luke Shaw