Ted Wetherbee wrote:
> I'm checking out Lustre toward a symmetric cluster with an independent
> metadata server: each machine would see all machine disks -- including its
> own -- as clearly identifiable volumes served up by each machine. With the
> test script below, the volumes show up with df, but only one of them works
> as a shared volume, with the other failing with something like "cp: cannot
> fstat ..." on a copy. Sometimes df fails.
Your configuration is on the right track, but is not quite correct.
If you want two file systems (/mnt/ws210 and /mnt/ws212) then you need
two metadata partitions, even if they run on the same node.
Try something like:
> # symmetric cluster test script - Lustre - Jan 20, 2004 ####################
> # create nodes ----- ws210, ws211, and ws212 -----------------------
> lmc -o test.xml --add net --node ws210 --nid ws210 --nettype tcp
> lmc -m test.xml --add net --node ws211 --nid ws211 --nettype tcp
> lmc -m test.xml --add net --node ws212 --nid ws212 --nettype tcp
> # configure mds server ---- ws211 dedicated ---------------------------
Break /dev/sdb1 into two partitions. Note that you don't need to supply
--size for real partitions -- it will default to the size of the entire
partition.
lmc -m test.xml --add mds --node ws211 --mds mds0 --fstype ext3 --dev /dev/sdb1
lmc -m test.xml --add mds --node ws211 --mds mds2 --fstype ext3 --dev /dev/sdb2
> # configure ost ---- ws210 and ws212 serving each other and themselves ---------
> # ------ ws210
Use the first MDS here:
lmc -m test.xml --add lov --lov lov0 --mds mds0 --stripe_sz 65536 --stripe_cnt 0 --stripe_pattern 0
> lmc -m test.xml --add ost --node ws210 --lov lov0 --ost ost0 --fstype ext3 --dev /dev/sdc1 --size 35559846
> # ------ ws212
And the second for the other LOV:
lmc -m test.xml --add lov --lov lov2 --mds mds2 --stripe_sz 65536 --stripe_cnt 0 --stripe_pattern 0
> lmc -m test.xml --add ost --node ws212 --lov lov2 --ost ost2 --fstype ext3 --dev /dev/sdc1
> # create client config for ws210 and ws212 ---------------------------------------------------
Likewise for the client mountpoints:
# -------- mount ws210 drive
lmc -m test.xml --add mtpt --node ws210 --path /mnt/ws210 --mds mds0 --lov lov0
lmc -m test.xml --add mtpt --node ws212 --path /mnt/ws210 --mds mds0 --lov lov0
# -------- mount ws212 drive
lmc -m test.xml --add mtpt --node ws210 --path /mnt/ws212 --mds mds2 --lov lov2
lmc -m test.xml --add mtpt --node ws212 --path /mnt/ws212 --mds mds2 --lov lov2
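
To put the corrected configuration in one place, the full script would look something like this -- same node names and devices as above, with /dev/sdb1 and /dev/sdb2 being the two MDS partitions after repartitioning, and --size left to its default:

```shell
# symmetric cluster test script -- corrected: two MDSes, two LOVs
# nodes
lmc -o test.xml --add net --node ws210 --nid ws210 --nettype tcp
lmc -m test.xml --add net --node ws211 --nid ws211 --nettype tcp
lmc -m test.xml --add net --node ws212 --nid ws212 --nettype tcp
# two metadata partitions, both served from ws211
lmc -m test.xml --add mds --node ws211 --mds mds0 --fstype ext3 --dev /dev/sdb1
lmc -m test.xml --add mds --node ws211 --mds mds2 --fstype ext3 --dev /dev/sdb2
# LOV/OST pair backing /mnt/ws210
lmc -m test.xml --add lov --lov lov0 --mds mds0 --stripe_sz 65536 --stripe_cnt 0 --stripe_pattern 0
lmc -m test.xml --add ost --node ws210 --lov lov0 --ost ost0 --fstype ext3 --dev /dev/sdc1
# LOV/OST pair backing /mnt/ws212
lmc -m test.xml --add lov --lov lov2 --mds mds2 --stripe_sz 65536 --stripe_cnt 0 --stripe_pattern 0
lmc -m test.xml --add ost --node ws212 --lov lov2 --ost ost2 --fstype ext3 --dev /dev/sdc1
# client mountpoints: both nodes mount both file systems
lmc -m test.xml --add mtpt --node ws210 --path /mnt/ws210 --mds mds0 --lov lov0
lmc -m test.xml --add mtpt --node ws212 --path /mnt/ws210 --mds mds0 --lov lov0
lmc -m test.xml --add mtpt --node ws210 --path /mnt/ws212 --mds mds2 --lov lov2
lmc -m test.xml --add mtpt --node ws212 --path /mnt/ws212 --mds mds2 --lov lov2
```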
Does that make sense?
> I'm wondering if something is fundamentally wrong with mounting 2+ volumes
> per client with Lustre, and I've missed a simple example on this. Mounting
> one volume across machines seems to work well through a variety of tests.
No, this is not a problem -- although the standard warnings about
running a client on an OST still apply. That is not a 100% safe option
yet, and would take some bug fixing before it could be; it is not
supported in a production environment today.
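
Once the XML is regenerated, you would bring the cluster up with lconf as usual -- something like the following, though check the exact flags against your Lustre version, since I'm writing this from memory:

```shell
# run on each node, services first (MDS node, then OSTs, then clients)
lconf --reformat --node ws211 test.xml   # MDS node; --reformat only on first setup
lconf --reformat --node ws210 test.xml
lconf --reformat --node ws212 test.xml
```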
Hope that helps--
-Phil