|1||Jun 10, 2010 7:13 PM||This year's model, but in Phase 2 also use the reviewer bids to guide the assignments. (The software should be able to come up with a third reviewer for each borderline paper, just as it would have in a single-phase model.)
I heard from ECML09 area chairs that it was very hard to do manual assignments because you don't know what expertise/interests the huge pool of PC reviewers has---you tend to just assign papers to the few colleagues you know personally---but that doesn't spread the load evenly.|
|2||Jun 10, 2010 10:55 PM||I think having only two reviewers in the first phase is too few. It also means that the authors do not get to respond to any additional reviews that come in after the phase 1 reviews. If there is going to be a response period, then I think it makes sense to try to get all the reviewers' comments in before then. I'm indifferent as to whether the reviewers are assigned based on their bids or by area chairs.|
|3||Jun 10, 2010 11:12 PM||A single-phase model where reviewers are assigned to papers based on their bids and the opinions of area chairs---so nothing is completely automatic, but nothing is totally manual either.
Also, there should remain a chance for some papers (say 20%) to go to a second phase, but it should not happen for most papers.
But there still should be a discussion phase.|
|4||Jun 10, 2010 11:36 PM||Again, why not use some machine learning?|
|5||Jun 10, 2010 11:55 PM||Single-phase model, where reviewers are assigned to papers by area chairs, but based on their bids where possible.|
|6||Jun 11, 2010 2:10 AM||The two-phase model was poorly implemented this year. On one paper, I specifically said something like "this paper is so bad it should not be given a second-round review", and the other reviewer's review was equally negative (both reviewers had very high confidence), yet the paper still received some second-phase reviewing. If that's going to be the case, we might as well do single-phase review. Otherwise we need more aggressive first-round pruning, especially for such clear-cut cases (and I can't imagine a more clear-cut case than this particular paper!).|
|7||Jun 11, 2010 7:21 AM||NIPS is trying an automated system based|
|8||Jun 12, 2010 11:12 AM||The NIPS idea looks nice:
1. propose, say, 15 papers to each reviewer based on the current system
2. reviewers rank them
3. train a system to make assignments based on those rankings|
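The three-step procedure above could be sketched roughly as follows. This is a minimal, hypothetical illustration, not the actual NIPS system: the function names, the rank-based keyword weighting, and the greedy capacity-limited assignment are all assumptions.

```python
from collections import Counter

def learn_preferences(rankings, paper_keywords):
    """Step 3 (training), crudely: build per-reviewer keyword weights
    from their rankings of the proposed papers (best paper first)."""
    prefs = {}
    for reviewer, ranked in rankings.items():
        weights = Counter()
        n = len(ranked)
        for pos, paper in enumerate(ranked):
            w = n - pos  # higher-ranked papers contribute more weight
            for kw in paper_keywords[paper]:
                weights[kw] += w
        prefs[reviewer] = weights
    return prefs

def score(prefs, reviewer, keywords):
    # Affinity of a reviewer for a paper = sum of learned keyword weights.
    return sum(prefs[reviewer].get(kw, 0) for kw in keywords)

def assign(prefs, paper_keywords, reviewers_per_paper=2, max_load=3):
    """Greedily assign each paper its best-scoring reviewers,
    respecting a per-reviewer load limit."""
    load = {r: 0 for r in prefs}
    assignment = {}
    for paper, kws in paper_keywords.items():
        ranked = sorted(prefs, key=lambda r: score(prefs, r, kws), reverse=True)
        chosen = [r for r in ranked if load[r] < max_load][:reviewers_per_paper]
        for r in chosen:
            load[r] += 1
        assignment[paper] = chosen
    return assignment
```

A real system would of course use a proper learning-to-rank model and a global (not greedy) assignment, but the pipeline shape---propose, rank, learn, assign---is the same.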
|9||Jun 25, 2010 6:39 AM||Assign papers based on the match between a paper's content (keywords) and a reviewer's expertise, after eliminating conflicts, with some manual fine-tuning of assignments.|
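A minimal sketch of this keyword-matching idea, under stated assumptions: the function names are hypothetical, the overlap is scored with a simple Jaccard measure, and conflicts are just a set of excluded reviewers. Manual fine-tuning would happen on top of the returned candidate list.

```python
def match_score(paper_kws, reviewer_kws):
    # Jaccard overlap between paper keywords and reviewer expertise.
    p, r = set(paper_kws), set(reviewer_kws)
    return len(p & r) / len(p | r) if p | r else 0.0

def candidate_reviewers(paper_kws, expertise, conflicts):
    """expertise: reviewer -> keyword list; conflicts: reviewers to exclude.
    Returns conflict-free reviewers with nonzero overlap, best match first."""
    scored = [(match_score(paper_kws, kws), r)
              for r, kws in expertise.items() if r not in conflicts]
    return [r for s, r in sorted(scored, reverse=True) if s > 0]
```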