Hi all,

I have completed almost all of the suggested and pending changes to the evaluation module. The module now uses Xapian's TermGenerator and QueryParser to index documents and build query objects, instead of the modules from the TREC harness code, and the user can change the weighting scheme and enable bi-grams from the configuration itself. After switching indexing and query construction to TermGenerator and QueryParser, the precision of the system increased drastically, so the low precision was caused by the outdated TREC harness modules; that problem is now solved.

I have shared a glimpse of the performance of all weighting schemes on the FIRE data set here: Xapian Evaluation Results<https://docs.google.com/spreadsheet/pub?key=0AoCWuAKuwBGfdGljZGJ2MDJIY0dzdkVaTFFBLU1QQWc&single=true&gid=0&output=html>.

Most of the evaluation results look fine. BM25 (the default weighting scheme) has outperformed the current implementations of all the other weighting schemes. However, according to the surveys and the initial paper we used as a benchmark when implementing the Language Model weighting scheme, our Language Model scheme should perform at least as well as BM25, if not better. I am investigating the reason for the poor performance of the Language Model. The problem might be the log trick: some documents might be getting a negative score because we do not have a good transformation factor, and we still need to investigate the results using different parameters and smoothing schemes. The results shown above are for the current default schemes.

Given this deviation from the expected results, I will now look for problems in the implementation of the Language Model and improve the scheme using the evaluation results.

Please comment on the evaluation results, the Language Model weighting scheme implementation, and the future course of action.

Thanks,
--
with regards
Gaurav Arora
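(For anyone following the negative-score point above: here is a minimal, self-contained sketch of why a language-model score needs some transformation before it can be used as a retrieval weight. The corpus, counts, and function names are made up for illustration, and it uses Jelinek-Mercer smoothing as one concrete example; it is not the code in the evaluation module. Xapian requires per-document weights to be non-negative, but a raw smoothed query log-likelihood is a sum of logs of probabilities and is therefore always <= 0, so without a good transformation factor documents can end up with negative scores.)

```python
import math

# Hypothetical toy data, for illustration only.
collection = {"the": 100, "quick": 5, "fox": 3, "xapian": 2}
coll_len = sum(collection.values())

doc = {"the": 4, "quick": 1, "fox": 1}
doc_len = sum(doc.values())

LAMBDA = 0.7  # Jelinek-Mercer interpolation weight (assumed value)

def jm_log_likelihood(query_terms):
    """Raw log P(q|d) under Jelinek-Mercer smoothing.

    Each factor is a probability < 1, so each log term is negative
    and the sum is always <= 0 -- unusable directly as a Xapian
    weight, which must be non-negative.
    """
    score = 0.0
    for t in query_terms:
        p_doc = doc.get(t, 0) / doc_len
        p_coll = collection.get(t, 0) / coll_len
        score += math.log(LAMBDA * p_doc + (1 - LAMBDA) * p_coll)
    return score

def jm_log_ratio(query_terms):
    """One possible transformation: rank by the log likelihood-ratio
    against the collection model. The ratio is >= 1 whenever the term
    occurs in the document (and exactly 1 when it does not), so the
    log is always >= 0."""
    score = 0.0
    for t in query_terms:
        p_doc = doc.get(t, 0) / doc_len
        p_coll = collection.get(t, 0) / coll_len
        score += math.log((LAMBDA * p_doc + (1 - LAMBDA) * p_coll)
                          / ((1 - LAMBDA) * p_coll))
    return score

print(jm_log_likelihood(["quick", "fox"]))  # negative
print(jm_log_ratio(["quick", "fox"]))       # non-negative
```

Whether the likelihood-ratio form (or some other shift) is the right transformation for our implementation is exactly what needs investigating against the evaluation results.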