Hi,
I'm having some problems, and I'm on a tight schedule, so if
someone could help I would appreciate it :)
- I've configured Lustre to access a storage resource. We have 2
machines connected to it. The config generation script is:
#!/bin/sh
# config.sh
DEV_MDS='/dev/sds'
DEV_HOME1='/dev/sdr'
DEV_HOME2='/dev/sdq'
DEV_SOFT1='/dev/sdo'
DEV_SOFT2='/dev/sdp'
DEV_VOL1='/dev/sdm'
DEV_VOL2='/dev/sdn'
DEV_DPM1='/dev/sdk'
DEV_DPM2='/dev/sdl'
# Create nodes
rm -f config.xml
lmc -m config.xml --add net --node zephyr.up.pt --nid zephyr.up.pt --nettype tcp
lmc -m config.xml --add net --node hades.up.pt --nid hades.up.pt --nettype tcp
# New client profiles
lmc -m config.xml --add net --node SE-vpool --nid '*' --nettype tcp
lmc -m config.xml --add net --node SE-ppool --nid '*' --nettype tcp
lmc -m config.xml --add net --node VO-soft --nid '*' --nettype tcp
lmc -m config.xml --add net --node homes --nid '*' --nettype tcp
lmc -m config.xml --add net --node shared --nid '*' --nettype tcp
# Configure the MDS
lmc -m config.xml --add mds --node hades.up.pt --mds mds-polo3 --fstype ext3 --dev $DEV_MDS
# Configure LOVs
# Homes
lmc -m config.xml --add lov --lov lov-home --mds mds-polo3 --stripe_sz 1048576 --stripe_cnt 0 --stripe_pattern 0
# VOs software space
lmc -m config.xml --add lov --lov lov-soft --mds mds-polo3 --stripe_sz 1048576 --stripe_cnt 0 --stripe_pattern 0
# Volatile space for DPM
lmc -m config.xml --add lov --lov lov-tmp --mds mds-polo3 --stripe_sz 1048576 --stripe_cnt 0 --stripe_pattern 0
# DPM space
lmc -m config.xml --add lov --lov lov-dpm --mds mds-polo3 --stripe_sz 1048576 --stripe_cnt 0 --stripe_pattern 0
# Configure OSTs
# Homes
lmc -m config.xml --add ost --node hades.up.pt --lov lov-home --ost home-hades --fstype ldiskfs --dev $DEV_HOME1
lmc -m config.xml --add ost --node zephyr.up.pt --lov lov-home --ost home-zephyr --fstype ldiskfs --dev $DEV_HOME2
# Soft
lmc -m config.xml --add ost --node hades.up.pt --lov lov-soft --ost soft-hades --fstype ldiskfs --dev $DEV_SOFT1
lmc -m config.xml --add ost --node zephyr.up.pt --lov lov-soft --ost soft-zephyr --fstype ldiskfs --dev $DEV_SOFT2
# Volatile
lmc -m config.xml --add ost --node hades.up.pt --lov lov-tmp --ost tmp-hades --fstype ldiskfs --dev $DEV_VOL1
lmc -m config.xml --add ost --node zephyr.up.pt --lov lov-tmp --ost tmp-zephyr --fstype ldiskfs --dev $DEV_VOL2
# DPM
lmc -m config.xml --add ost --node hades.up.pt --lov lov-dpm --ost dpm-hades --fstype ldiskfs --dev $DEV_DPM1
lmc -m config.xml --add ost --node zephyr.up.pt --lov lov-dpm --ost dpm-zephyr --fstype ldiskfs --dev $DEV_DPM2
# Homes client profile
lmc -m config.xml --add mtpt --node homes --path /local/home --mds mds-polo3 --lov lov-home
# VO software share client profile
lmc -m config.xml --add mtpt --node VO-soft --path /vosoft --mds mds-polo3 --lov lov-soft
# Volatile pool client profile
lmc -m config.xml --add mtpt --node SE-vpool --path /mnt/vpool --mds mds-polo3 --lov lov-tmp
# Permanent pool client profile
lmc -m config.xml --add mtpt --node SE-ppool --path /mnt/ppool --mds mds-polo3 --lov lov-dpm
I've added the mount points for the clients to fstab:
# Lustre mount points
hades.up.pt:/mds-polo3/SE-vpool /mnt/vpool lustre rw 0 0
hades.up.pt:/mds-polo3/SE-ppool /mnt/ppool lustre rw 0 0
hades.up.pt:/mds-polo3/VO-soft /vosoft lustre rw 0 0
hades.up.pt:/mds-polo3/homes /local/home lustre rw 0 0
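(Those fstab entries correspond to manual mounts of the form below, in case anyone wants to reproduce this by hand; this is just the equivalent sketch, not an extra step:)
mount -t lustre hades.up.pt:/mds-polo3/homes /local/home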
And doing a 'df -h' I can see the filesystems are mounted:
hades.up.pt:/mds-polo3/SE-vpool
50G 153M 47G 1% /mnt/vpool
hades.up.pt:/mds-polo3/SE-ppool
1.2T 209M 1.1T 1% /mnt/ppool
hades.up.pt:/mds-polo3/VO-soft
306G 185M 290G 1% /vosoft
hades.up.pt:/mds-polo3/homes
591G 193M 561G 1% /local/home
Now, if I go to one of the partitions, let's say '/local/home',
and create a file, that file also shows up in the other filesystems!
Doing "lfs df -h":
UUID bytes Used Available Use% Mounted on
mds-polo3_UUID 39.4G 2.1G 37.3G 5% /mnt/vpool[MDT:0]
tmp-hades_UUID 24.6G 1.3G 23.3G 5% /mnt/vpool[OST:0]
tmp-zephyr_UUID 24.6G 1.3G 23.3G 5% /mnt/vpool[OST:1]
filesystem summary: 49.2G 2.6G 46.6G 5% /mnt/vpool
UUID bytes Used Available Use% Mounted on
mds-polo3_UUID 39.4G 2.1G 37.3G 5% /mnt/ppool[MDT:0]
dpm-hades_UUID 564.5G 28.8G 535.7G 5% /mnt/ppool[OST:0]
dpm-zephyr_UUID 564.5G 28.8G 535.7G 5% /mnt/ppool[OST:1]
filesystem summary: 1.1T 57.6G 1.0T 5% /mnt/ppool
UUID bytes Used Available Use% Mounted on
mds-polo3_UUID 39.4G 2.1G 37.3G 5% /vosoft[MDT:0]
soft-hades_UUID 152.6G 7.8G 144.7G 5% /vosoft[OST:0]
soft-zephyr_UUID 152.6G 7.8G 144.7G 5% /vosoft[OST:1]
filesystem summary: 305.1G 15.7G 289.5G 5% /vosoft
UUID bytes Used Available Use% Mounted on
mds-polo3_UUID 39.4G 2.1G 37.3G 5% /local/home[MDT:0]
home-hades_UUID 295.3G 15.1G 280.2G 5% /local/home[OST:0]
home-zephyr_UUID 295.3G 15.1G 280.2G 5% /local/home[OST:1]
filesystem summary: 590.6G 30.2G 560.4G 5% /local/home
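(To check where a particular file's data actually ends up, its striping can also be inspected with lfs; the path below is just a hypothetical test file:)
lfs getstripe /local/home/testfile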
The only thing I can think of is that I am using the same MDS for every LOV. Do I
have to create an MDS for each LOV?
Thanks in advance
--
Rui Ramos
==============================================
Universidade do Porto - IRICUP
Praça Gomes Teixeira, 4099-002 Porto, Portugal
email: rramos[at]reit.up.pt
phone: +351 22 040 8164
==============================================
On Mar 26, 2007 11:21 +0100, Rui Ramos wrote:
> lmc -m config.xml --add lov --lov lov-home --mds mds-polo3 --stripe_sz 1048576 --stripe_cnt 0 --stripe_pattern 0
> # VOs software space
> lmc -m config.xml --add lov --lov lov-soft --mds mds-polo3 --stripe_sz 1048576 --stripe_cnt 0 --stripe_pattern 0
> # Volatile space for DPM
> lmc -m config.xml --add lov --lov lov-tmp --mds mds-polo3 --stripe_sz 1048576 --stripe_cnt 0 --stripe_pattern 0
> # DPM space
> lmc -m config.xml --add lov --lov lov-dpm --mds mds-polo3 --stripe_sz 1048576 --stripe_cnt 0 --stripe_pattern 0
>
> Now, if I go to one of the partitions, let's say '/local/home', and create a file, that file also shows up in the other filesystems!
>
> The only thing I can think of is that I am using the same MDS for every LOV. Do I have to create an MDS for each LOV?

You need to configure a separate MDS device for each LOV.

It's very interesting that this even works, however. Essentially, the same MDS is storing the metadata for all of the filesystems, and files can be stored in different pools of OSTs based on the mountpoint/LOV where they were created. However, this will almost certainly have bad consequences for MDS recovery, because it won't be storing the per-OST information correctly.

Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.
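(As an illustration of the suggestion above, untested and with placeholder device names: the config script would add one MDS per filesystem and point each LOV, and the matching --add mtpt line, at its own MDS, along these lines:)
# one MDS device per filesystem (placeholder devices, pick real partitions)
lmc -m config.xml --add mds --node hades.up.pt --mds mds-home --fstype ext3 --dev /dev/sdX1
lmc -m config.xml --add mds --node hades.up.pt --mds mds-soft --fstype ext3 --dev /dev/sdX2
lmc -m config.xml --add mds --node hades.up.pt --mds mds-tmp --fstype ext3 --dev /dev/sdX3
lmc -m config.xml --add mds --node hades.up.pt --mds mds-dpm --fstype ext3 --dev /dev/sdX4
# ...and each LOV then references its own MDS:
lmc -m config.xml --add lov --lov lov-home --mds mds-home --stripe_sz 1048576 --stripe_cnt 0 --stripe_pattern 0
# likewise lov-soft -> mds-soft, lov-tmp -> mds-tmp, lov-dpm -> mds-dpm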