search for: irevaluationofletorrankingschem

Displaying 5 results from an estimated 5 matches for "irevaluationofletorrankingschem".

2014 Mar 04
2
Test Dataset for performance and accuracy analysis
Hi Parth, I implemented DFR algorithms in Xapian as part of GSoC last year under the mentorship of Olly. This year, I want to work on analyzing and optimizing the performance of the DFR algorithms and comparing them with BM25. I also want to work on profiling the query expansion schemes and testing the relevance (precision and recall) / speed (time taken) of the
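For context on what such a BM25-vs-DFR comparison looks like at the API level, here is a minimal sketch (not from the thread) that runs one query under BM25 and under PL2, one of the DFR schemes contributed to Xapian. The database path and query string are placeholder assumptions; the Xapian calls themselves (Enquire::set_weighting_scheme, BM25Weight, PL2Weight) are real API.

// Minimal sketch: run the same query under BM25 and PL2 (a DFR scheme)
// and print the top document weights for side-by-side inspection.
#include <xapian.h>
#include <iostream>

// NOTE: "/path/to/db" and the query string are placeholders, not values
// taken from the thread above.
static void run(Xapian::Database& db, const Xapian::Weight& wt,
                const char* label) {
    Xapian::QueryParser qp;
    qp.set_database(db);
    qp.set_stemmer(Xapian::Stem("en"));
    Xapian::Enquire enquire(db);
    enquire.set_query(qp.parse_query("information retrieval"));
    enquire.set_weighting_scheme(wt);  // swap the ranking function here
    Xapian::MSet mset = enquire.get_mset(0, 10);
    std::cout << label << ":\n";
    for (Xapian::MSetIterator i = mset.begin(); i != mset.end(); ++i)
        std::cout << "  doc " << *i << "  weight " << i.get_weight() << "\n";
}

int main() {
    Xapian::Database db("/path/to/db");
    run(db, Xapian::BM25Weight(), "BM25");
    run(db, Xapian::PL2Weight(), "PL2 (DFR)");
}

Timing each run() call (and feeding the ranked lists to a relevance-judgment tool) would give the speed and precision/recall comparison the message describes.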
2014 Mar 04
4
Questions on letor module
Hi, I have several questions regarding the letor module. I looked at the framework of learning to rank in Xapian (http://rishabhmehrotra.com/gsoc/17.png), and I am a little confused. Why use deep learning to find unsupervised features in the test data? In my understanding, a learning-to-rank model usually learns features from the training data and then applies the model to the test data. Why test set and
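To make the train/test split the question describes concrete, here is a toy pointwise sketch: model parameters are fit on labelled training data only, and the frozen model then scores unseen test documents. This is entirely hypothetical illustration, not the xapian-letor API.

// Toy pointwise LTR flow: learn on labelled training data, score test data.
// Hypothetical stand-in code, not xapian-letor.
#include <iostream>
#include <vector>

struct Example { std::vector<double> features; double label; };

// "Training": fit one weight per feature by correlating it with the label.
// A stand-in for SVM/ListNet/etc.; the point is it only sees training data.
std::vector<double> train(const std::vector<Example>& train_set) {
    size_t nf = train_set[0].features.size();
    std::vector<double> w(nf, 0.0);
    for (const Example& e : train_set)
        for (size_t j = 0; j < nf; ++j)
            w[j] += e.features[j] * e.label / train_set.size();
    return w;
}

// Scoring applies the frozen weights; no labels, no learning on test data.
double score(const std::vector<double>& w, const std::vector<double>& feats) {
    double s = 0.0;
    for (size_t j = 0; j < w.size(); ++j) s += w[j] * feats[j];
    return s;
}

int main() {
    // Labelled training examples (features might be BM25 score, TF, etc.).
    std::vector<Example> train_set = {{{0.9, 0.2}, 1.0}, {{0.1, 0.8}, 0.0}};
    std::vector<double> w = train(train_set);
    // Unlabelled test document: only scored, never trained on.
    std::cout << score(w, {0.7, 0.3}) << "\n";
}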
2016 May 14
2
GSoC 2016 Letor dataset discussion
Hello, I wanted to decide on the dataset that should be used for the Letor stabilisation project. I think the 2009 INEX Wikipedia Collection <http://www.mpi-inf.mpg.de/departments/databases-and-information-systems/software/inex/> should work fine. It's a collection of 2,666,190 XML articles, 115 topics <http://inex.mmci.uni-saarland.de/protected/adhoc/2009-topics.zip>, 50,275 qrel
2016 Jul 28
2
Weighting Schemes: Evaluation results
...w he's used some of > them in the past. > I can say FIRE is also a reliable source, but INEX/TREC are better. INEX can give you free access, while TREC is not freely available. I had used INEX for Xapian in the past, and some details are here: https://trac.xapian.org/wiki/GSoC2011/LTR/Notes#IREvaluationofLetorrankingscheme I roughly remember that there was a discussion with this year's GSoC student Ayush about the INEX data. He had also obtained it, so this would also be a good way to collaborate with him :) and try to establish a common evaluation dataset for the future. Cheers Parth > > Certainly until we have som...
2016 Jul 25
3
Weighting Schemes: Evaluation results
Hi James, > We probably don't want them committed in git where they're evaluation > runs (because we can recreate them); a gist might be more appropriate. Sorry, I have moved the results files over to a gist for each individual weighting scheme. Link: https://gist.github.com/ivmarkp/secret > I can't tell, but are some of those files from FIRE? If so, they > shouldn't be