0xDE

Applications of group fairness to the assignment of reviewers in large conferences: arxiv.org/abs/2410.03474, via retractionwatch.com/2024/10/12

The goal is to assign reviewers in such a way that no subcommunity feels it would be better off making its own splinter conference. Subcommunities are not part of the input; they are an emergent feature of the model. The model of reviewing is a little oversimplified: all papers are single-author, the only conflicts of interest are with one's own papers, acceptance is determined by a strict threshold on review scores, and only authors' preferences over reviewers are considered, not reviewers' preferences over what to review. And there is no consideration of the possibility that giving authors power to select their preferred reviewers is a recipe for quid-pro-quo behavior and refereeing cartels. Still, I think it's an interesting idea.
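
To make the fairness notion concrete: in the simplified single-author model, write N for the set of researchers, A(i) for the reviewer assigned to i's paper, and u_i(r) for author i's preference for reviewer r. My paraphrase of the core condition (not necessarily the paper's exact formal statement) is that no coalition S ⊆ N can reassign its own papers to reviewers drawn only from S, via some alternative assignment B, in such a way that

\[
u_i(B(i)) \ge u_i(A(i)) \ \text{for all } i \in S
\quad\text{and}\quad
u_j(B(j)) > u_j(A(j)) \ \text{for some } j \in S.
\]

If no such blocking coalition exists, the assignment is in the core, matching the intuition that no subcommunity would gain by withdrawing and reviewing its own papers.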

arXiv.org · Group Fairness in Peer Review

Large conferences such as NeurIPS and AAAI serve as crossroads of various AI fields, since they attract submissions from a vast number of communities. However, in some cases, this has resulted in a poor reviewing experience for some communities, whose submissions get assigned to less qualified reviewers outside of their communities. An often-advocated solution is to break up any such large conference into smaller conferences, but this can lead to isolation of communities and harm interdisciplinary research. We tackle this challenge by introducing a notion of group fairness, called the core, which requires that every possible community (subset of researchers) be treated in a way that prevents it from unilaterally benefiting by withdrawing from a large conference. We study a simple peer review model, prove that it always admits a reviewing assignment in the core, and design an efficient algorithm to find one such assignment. We use real data from the CVPR and ICLR conferences to compare our algorithm to existing reviewing assignment algorithms on a number of metrics.
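
As a toy illustration of the same condition, here is a brute-force core check for the simplified model above (single-author papers, one reviewer per paper, no self-review). The names and data layout are my own invention, and the exhaustive search over coalitions is only meant to show the definition, not the paper's efficient algorithm.

from itertools import combinations, product

def coalition_blocks(S, assign, u):
    # S: coalition of author indices; assign[i]: reviewer of i's paper;
    # u[i][r]: author i's preference for reviewer r (higher is better).
    # The coalition blocks if it can reassign its own papers among its own
    # members (no self-review) so that everyone is weakly better off and
    # at least one member is strictly better off.
    members = list(S)
    options = [[r for r in members if r != a] for a in members]
    for alt in product(*options):
        weakly = all(u[a][r] >= u[a][assign[a]] for a, r in zip(members, alt))
        strictly = any(u[a][r] > u[a][assign[a]] for a, r in zip(members, alt))
        if weakly and strictly:
            return True
    return False

def in_core(assign, u):
    # Exhaustively check every coalition of two or more authors.
    n = len(u)
    return not any(coalition_blocks(S, assign, u)
                   for k in range(2, n + 1)
                   for S in combinations(range(n), k))

# Tiny example: u[i][r] is author i's score for reviewer r (diagonal unused).
u = [[0, 3, 1], [2, 0, 3], [3, 1, 0]]
print(in_core({0: 1, 1: 2, 2: 0}, u))  # True: everyone gets their favorite reviewer
print(in_core({0: 2, 1: 0, 2: 1}, u))  # False: all three authors can jointly do better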