On 2012-07-18, at 3:24 PM, Tiago Soares wrote:
> Dear all,
>
> I would like to know if there is some trick to make parallel MPI-IO work
> with an HDF5 file on Lustre?
> I have been trying for a while to fix an issue that happens on Lustre.
>
> Basically, I have 8 processes writing small amounts of data to the same
> dataset on each attempt. Each attempt writes different data than the one
> before, and when the attempt count gets close to 2500, the parallel I/O
> processes break. There is also another serial process writing other data
> to the same file.
>
>
> I found that this is a common Lustre issue, since it does not support
> locking! So I tried the parameters above, but my application still breaks.
Sorry, no "parameters above", but to enable distributed POSIX locking
you need to mount clients with "-o flock".
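
As a quick sanity check, here is a minimal C sketch (not from the original
thread; the default path is a hypothetical example) that probes whether POSIX
byte-range locking works on a given mount. On Lustre clients mounted without
"-o flock" (or "-o localflock"), the fcntl() call below is expected to fail,
which is the same failure mode MPI-IO runs into:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        /* Path on the Lustre mount to probe (hypothetical default). */
        const char *path = (argc > 1) ? argv[1] : "/mnt/lustre/locktest.tmp";
        int fd = open(path, O_RDWR | O_CREAT, 0644);
        if (fd < 0) {
            perror("open");
            return EXIT_FAILURE;
        }

        /* Try to take a write lock on the first byte of the file. */
        struct flock fl;
        memset(&fl, 0, sizeof(fl));
        fl.l_type   = F_WRLCK;
        fl.l_whence = SEEK_SET;
        fl.l_start  = 0;
        fl.l_len    = 1;

        if (fcntl(fd, F_SETLK, &fl) < 0)
            fprintf(stderr, "fcntl lock failed: %s\n", strerror(errno));
        else
            printf("fcntl locking works on %s\n", path);

        close(fd);
        unlink(path);
        return EXIT_SUCCESS;
    }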
> Also, I read somewhere to set the stripe count to 8 (the number of OSTs)
> for parallel I/O, but it still doesn't work.
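
One note here: striping is fixed at file-creation time, so the stripe count
has to be set on the parent directory beforehand (e.g. with "lfs setstripe")
or passed as a creation hint. A minimal sketch, assuming a ROMIO-based MPI
whose Lustre driver honors the reserved "striping_factor" and "striping_unit"
info keys (they are ignored for files that already exist):

    #include <mpi.h>

    /* Build an MPI_Info asking ROMIO to create the file with 8 stripes.
     * These hints only take effect with MPI_MODE_CREATE on a new file. */
    static MPI_Info make_striping_hints(void)
    {
        MPI_Info info;
        MPI_Info_create(&info);
        MPI_Info_set(info, "striping_factor", "8");     /* stripe count: 8 OSTs */
        MPI_Info_set(info, "striping_unit", "1048576"); /* stripe size: 1 MiB   */
        return info;
    }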
> MPI_Info_create(&info);
>
> /* Disable ROMIO's data sieving */
> MPI_Info_set(info, "romio_ds_read", "disable");
> MPI_Info_set(info, "romio_ds_write", "disable");
>
> /* Enable ROMIO's collective buffering */
> MPI_Info_set(info, "romio_cb_read", "enable");
> MPI_Info_set(info, "romio_cb_write", "enable");
>
> https://wickie.hlrs.de/platforms/index.php/MPI-IO
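
For reference, a self-contained sketch of how such hints are typically passed
down to parallel HDF5 through the file access property list (the file name and
error handling are illustrative, not from the original post):

    #include <stdio.h>
    #include <mpi.h>
    #include <hdf5.h>   /* requires an HDF5 build with parallel I/O enabled */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        MPI_Info info;
        MPI_Info_create(&info);
        MPI_Info_set(info, "romio_ds_read",  "disable");
        MPI_Info_set(info, "romio_ds_write", "disable");
        MPI_Info_set(info, "romio_cb_read",  "enable");
        MPI_Info_set(info, "romio_cb_write", "enable");

        /* Hand the hints to HDF5 via the file access property list. */
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, info);

        /* All ranks create/open the file collectively. */
        hid_t file = H5Fcreate("output.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
        if (file < 0)
            fprintf(stderr, "H5Fcreate failed\n");
        else
            H5Fclose(file);

        H5Pclose(fapl);
        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }

Using collective transfers for the actual dataset writes (H5Pset_dxpl_mpio
with H5FD_MPIO_COLLECTIVE) is usually what lets collective buffering
aggregate many small writes into larger, stripe-aligned ones.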
>
> I ran my application on PVFS, and it works fine. I know that, unlike
> Lustre, PVFS uses non-aligned data striping. Could that be the reason it
> works on PVFS?
>
> Regards
>
> --
> Tiago Steinmetz Soares
> MSc Student of Computer Science - UFSC
>
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss
Cheers, Andreas
--
Andreas Dilger Whamcloud, Inc.
Principal Lustre Engineer http://www.whamcloud.com/