Hello,
Charles Taylor wrote:
> Anyone have any experience with MpiBlast and Lustre? We have
> MpiBlast-1.4.0-pio and lustre-1.6.3 and we are seeing some pretty
> poor performance with most of the mpiblast threads spending 20% to
> 50% of their time in disk wait. We have the genbank nt database
> split into 24 fragments (one for each of our OSTs, 3 per OSS). The
> individual fragments are not striped due to the ldlm_poold issue so
> that should not be the problem.
>
> Should we not be using the PIO (ROM-IO) version of MPIBlast w/ lustre?
>
I am not familiar with the MpiBlast implementation. Does it use collective I/O?
Could you describe its I/O pattern here?
Also, if you use MPI-IO, it is always worth experimenting with these ROMIO
hints (a minimal example of setting them is sketched below):
disable/enable romio_cb_write
disable/enable romio_ds_write
ind_wr_buffer_size and ind_rd_buffer_size
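
Here is a minimal sketch of passing those hints through an MPI_Info object at
MPI_File_open time. The file name, hint values, and buffer sizes below are
placeholders for illustration only; mpiBLAST itself would need to be changed
(or a ROMIO hints file used) for them to take effect.

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    MPI_Info info;

    MPI_Init(&argc, &argv);
    MPI_Info_create(&info);

    /* Toggle collective buffering and data sieving for writes. */
    MPI_Info_set(info, "romio_cb_write", "disable");
    MPI_Info_set(info, "romio_ds_write", "disable");

    /* Independent I/O buffer sizes in bytes; 4 MB here is only an example. */
    MPI_Info_set(info, "ind_wr_buffer_size", "4194304");
    MPI_Info_set(info, "ind_rd_buffer_size", "4194304");

    /* "output.dat" is a placeholder path on the Lustre file system. */
    MPI_File_open(MPI_COMM_WORLD, "output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);

    /* ... MPI-IO writes would go here ... */

    MPI_File_close(&fh);
    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}

If rebuilding mpiBLAST is not practical, ROMIO can also read hints from a
plain-text file named by the ROMIO_HINTS environment variable, one
"key value" pair per line, which lets you try different settings without
touching the code.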
Thanks,
WangDi

> I'm wondering if there are some already discovered guidelines for
> using MPIBlast on top of Lustre that I'm not finding via Google.
>
> Thanks,
>
> Charlie Taylor
> UF HPC Center
>
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at clusterfs.com
> https://mail.clusterfs.com/mailman/listinfo/lustre-discuss
>