Hi Daire,
In your thread discussing "e2scan MDT backup", I was very interested in
finding out more about how you initially set up your OSTs and MDTs with
LVM. We're implementing a new set of production Lustre servers and had
Sun come on-site last fall to help get us started. Our Sun Lustre
consultant was very knowledgeable and helpful, and he set up our
environment as follows:
MGS/MDT Servers:
MGS# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vgroot-lvroot
9.7G 5.3G 3.9G 58% /
/dev/mapper/vgroot-lvtmp
992M 34M 908M 4% /tmp
/dev/mapper/vgroot-lvlawsonusr
2.0G 68M 1.8G 4% /lawson/usr
/dev/xvda1 122M 85M 31M 74% /boot
tmpfs 8.0G 0 8.0G 0% /dev/shm
/dev/xvdb 1008M 34M 923M 4% /mnt/mgs
MDT# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vgroot-lvroot
9.7G 6.5G 2.8G 71% /
/dev/mapper/vgroot-lvtmp
992M 34M 908M 4% /tmp
/dev/mapper/vgroot-lvlawsonusr
2.0G 1.3G 609M 68% /lawson/usr
/dev/xvda1 122M 85M 31M 74% /boot
tmpfs 8.0G 0 8.0G 0% /dev/shm
/dev/xvdj 3.5G 173M 3.2G 6% /mnt/lusfs01/mdt
/dev/xvdk 3.5G 170M 3.2G 6% /mnt/lusfs02/mdt
/dev/xvdl 3.5G 169M 3.2G 5% /mnt/lusfs03/mdt
/dev/xvdm 3.5G 173M 3.2G 6% /mnt/lusfs04/mdt
/dev/xvdn 3.5G 170M 3.2G 6% /mnt/lusfs05/mdt
OSS Servers:
OSS1# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vgroot-lvroot
9.7G 7.7G 1.6G 84% /
/dev/mapper/vgroot-lvtmp
992M 34M 908M 4% /tmp
/dev/mapper/vgroot-lvlawsonusr
2.0G 68M 1.8G 4% /lawson/usr
/dev/xvda1 122M 85M 31M 74% /boot
tmpfs 8.0G 0 8.0G 0% /dev/shm
/dev/xvde1 174G 461M 165G 1% /mnt/lusfs01/ost01
/dev/xvdg1 174G 461M 165G 1% /mnt/lusfs01/ost03
/dev/xvde2 106G 461M 100G 1% /mnt/lusfs02/ost01
/dev/xvdg2 106G 461M 100G 1% /mnt/lusfs02/ost03
/dev/xvde3 16G 439M 14G 3% /mnt/lusfs03/ost01
/dev/xvdg3 16G 439M 14G 3% /mnt/lusfs03/ost03
/dev/xvde5 7.6G 331M 6.9G 5% /mnt/lusfs04/ost01
/dev/xvdg5 7.6G 331M 6.9G 5% /mnt/lusfs04/ost03
/dev/xvde6 16G 2.2G 13G 15% /mnt/lusfs05/ost01
/dev/xvdg6 16G 2.9G 12G 21% /mnt/lusfs05/ost03
OSS2# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vgroot-lvroot
9.7G 7.2G 2.1G 78% /
/dev/mapper/vgroot-lvtmp
992M 34M 908M 4% /tmp
/dev/mapper/vgroot-lvlawsonusr
2.0G 68M 1.8G 4% /lawson/usr
/dev/xvda1 122M 85M 31M 74% /boot
tmpfs 8.0G 0 8.0G 0% /dev/shm
/dev/xvdd1 174G 461M 165G 1% /mnt/lusfs01/ost00
/dev/xvdd2 106G 461M 100G 1% /mnt/lusfs02/ost00
/dev/xvdf2 106G 461M 100G 1% /mnt/lusfs02/ost02
/dev/xvdd3 16G 439M 14G 3% /mnt/lusfs03/ost00
/dev/xvdf3 16G 439M 14G 3% /mnt/lusfs03/ost02
/dev/xvdd5 7.6G 331M 6.9G 5% /mnt/lusfs04/ost00
/dev/xvdf5 7.6G 331M 6.9G 5% /mnt/lusfs04/ost02
/dev/xvdd6 16G 2.5G 12G 18% /mnt/lusfs05/ost00
/dev/xvdf6 16G 1.1G 14G 8% /mnt/lusfs05/ost02
/dev/xvdf1 174G 461M 165G 1% /mnt/lusfs01/ost02
Lustre client server:
CLIENT# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vgroot-lvroot
9.7G 3.6G 5.7G 39% /
/dev/mapper/vgroot-lvlawsonusr
2.0G 68M 1.8G 4% /lawson/usr
/dev/mapper/vgroot-lvwebsphere
9.7G 1.7G 7.6G 18% /websphere
/dev/mapper/vgroot-lvtest
2.0G 68M 1.8G 4% /test
/dev/mapper/vgroot-lvtmp
992M 34M 908M 4% /tmp
/dev/xvda1 122M 77M 39M 67% /boot
tmpfs 8.0G 0 8.0G 0% /dev/shm
/dev/mapper/vgroot-lvlogs
5.0G 139M 4.6G 3% /logs
10.203.4.100@tcp,10.203.4.101@tcp:/lusfs01
694G 1.9G 657G 1% /content
10.203.4.100@tcp,10.203.4.101@tcp:/lusfs02
423G 1.8G 400G 1% /newproducts
10.203.4.100@tcp,10.203.4.101@tcp:/lusfs04
31G 1.3G 28G 5% /gid
10.203.4.100@tcp,10.203.4.101@tcp:/lusfs03
61G 1.8G 56G 3% /products
10.203.4.100@tcp,10.203.4.101@tcp:/lusfs05
61G 8.6G 49G 15% /scs/content
As you can see, our Sun consultant set our production environment up to
use standard disk partitions that are striped across two sets of servers
for our five Lustre filesystems (which is working fine, and we can grow
it with no problems). We're running RHEL 5.2 Xen MGS/MDT/OSS servers
using Linux Heartbeat for STONITH and failover. The Xen domUs are
LVM-backed on EMC SAN. Lustre was compiled into the Xen kernel, and
we've compiled and are using the patchless Lustre client (Lustre 1.6.6).
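For completeness, each client mounts the filesystems via the failover
NID pair shown in the df output above, i.e. something along the lines
of:

    mount -t lustre 10.203.4.100@tcp,10.203.4.101@tcp:/lusfs01 /content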
What I'm really curious about is how to go about setting up this
environment under LVM2. Is it as simple as taking the "xvd?" disks on
the OSS and MDS servers and doing a pvcreate, vgcreate, lvcreate, and
mkfs? Do you have to be careful about anything (such as the striping
seen above)? May I ask if you might be able to give an example? I'd like
to test LVM in our Lustre certification environment, but I'd like to
make sure I have an idea of what I'm doing before I mess up my working
cert environment. Are there any pros/cons for LVM2 versus disk
partitions? Backup issues, speed, performance, etc.?
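My guess at the OST recipe is below (just a sketch; the volume group
name "vglustre", the LV name and size, and the MGS NID are placeholders
I made up, and I've left out the --failnode option we'd presumably need
for our Heartbeat failover pairs):

    # carve an OST out of one of the xvd devices
    pvcreate /dev/xvde
    vgcreate vglustre /dev/xvde
    lvcreate -L 170G -n lusfs01-ost01 vglustre
    # format the LV as a Lustre OST and mount it
    mkfs.lustre --fsname=lusfs01 --ost --mgsnode=10.203.4.100@tcp \
        /dev/vglustre/lusfs01-ost01
    mount -t lustre /dev/vglustre/lusfs01-ost01 /mnt/lusfs01/ost01

Is that about right, or is there more to it than that?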
> Once you move to LVM then you can snapshot the MDT and mount that
> without unmounting the production MDT. You should probably destroy the
> snapshot afterwards as it affects performance somewhat.
>
> Regards,
>
> Daire
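If I follow you, the snapshot procedure would go roughly like this
(again just a sketch with names I invented, assuming the lusfs01 MDT
lives on an LV in a volume group called vglustre):

    # snapshot the MDT LV; 1G of copy-on-write space is a guess
    lvcreate -s -L 1G -n mdt-snap /dev/vglustre/lusfs01-mdt
    mkdir -p /mnt/mdt-snap
    # mount the snapshot read-only as ldiskfs, then back it up / e2scan it
    mount -t ldiskfs -o ro /dev/vglustre/mdt-snap /mnt/mdt-snap
    umount /mnt/mdt-snap
    # drop the snapshot afterwards, per your advice
    lvremove -f /dev/vglustre/mdt-snap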
Cheers and many thanks for your consideration and time,
Ms. Andrea D. Rucks
Sr. Unix Systems Administrator,
Lawson ITS Unix Server Team