Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
UPDATE AFTER AUTHOR RESPONSE: I thank the authors for a very comprehensive author response, in particular the explanation of how efficiency and fairness trade off and of when E(M)/E_opt does not converge. In light of these changes, I am raising my score to an accept.

----

The paper is a creative approach to reconciling efficiency and fairness in a graph-based setting.

High-level comments:

(1) Figures 1 and 2 suggest that E(M) appears to match E_opt. Are there multiple matchings M with the same (or very close) overall utility, and does this algorithm find the fairest one? Or do fairness and utility always converge in this problem framework? Put another way, if we run max-flow with regard only for overall utility, what level of fairness do we get without explicit constraints in the algorithm? When would we expect E(M)/E_opt to be much smaller? Can we construct synthetic examples -- or expand the inclusion criteria for the taxi dataset -- to find cases where the ratio does not plateau?

(2) Ridesharing is a two-sided market. In line 25, the introduction mentions that prior work focuses on passenger utility and efficiency. How are the two sides related? It feels remiss to restructure the problem as utility and fairness for the drivers without a discussion of how to reconcile this with earlier work from the passenger perspective. Could the utility of each request be augmented to reflect the passenger's side?

(3) What does the underlying dataset look like? Line 104 comments that \Delta is effectively very small across the entire graph. This seems to suggest that optimizing for fairness and utility will not have very different outcomes than optimizing for utility alone. Do we see this empirically in the dataset? Intuitively, very uneven distributions of edge weights might create greater disparities between utilitarian efficiency and Rawlsian fairness.
The relatively benign decreases in efficiency under the additional fairness criterion could simply be a consequence of the edge weights.

(4) How does this algorithm perform in runtime and against other baselines? Performing a binary search (line 173) to find a fair assignment M_fair while running a max-flow algorithm such as Ford-Fulkerson or Edmonds-Karp at each step takes O(VE^2 log F) time. Is this what the authors did in their implementation? Experimentally, how costly was this? A discussion of baselines -- starting with pure utilitarian maximization and then related prior work -- would help contextualize the contribution of this paper.

(5) There are other sources of unfairness in ridesharing than strictly the historical utility as defined in the experiment section. What about drivers who start from less populated areas and therefore have fewer edges (fewer requests nearby) and higher edge weights (farther to drive)?

Specific comments:

(1) When implementing, would we directly use the fair assignment found in M_fair (line 167)? Is there a benefit to Algorithm 1 that I am missing?

(2) What happens when |V| > |R|? Scarcity of resources, i.e., requests, may be precisely the case where fairness criteria are particularly important.

(3) Should p2, line 100 read "maximum utility difference of different vehicles [across all requests]"?

(4) It took me a few readings of "Compute a fair assignment" (line 167) to realize that one can choose not to assign a request, i.e., that requests may go unserved. Clarifying this earlier would improve readability.

(5) In Supplement p1, line 17, it is not immediately obvious to me why F_opt is used instead of f. Wouldn't using f give a tighter lower bound?

(6) Figure 1b: what does "both sorted from smallest to largest" mean?

(7) p6, line 255: "We also tested several other datasets" -- I would be interested to see these results in the supplement.

(8) Minor point, but it could be helpful to clarify that E+ includes the old E and R+ includes the old R. At first reading, E+ and R+ seem to contain only the new r^bar_i and corresponding edges.
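As an aside on point (4), the binary-search-plus-feasibility-check scheme discussed there could look roughly like the sketch below, with a simple augmenting-path bipartite matching standing in for the paper's max-flow step. All names here are my own illustration, not the authors' implementation.

```python
# Hedged sketch: binary-search over candidate per-edge utility floors lam;
# for each lam, keep only edges with utility >= lam and check whether every
# vehicle can still be matched to a request (Kuhn's augmenting-path matching,
# a stand-in for the max-flow feasibility check in the paper).

def bipartite_match(n_left, adj):
    """Size of a maximum bipartite matching via augmenting paths."""
    match_right = {}  # right vertex -> matched left vertex

    def augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match_right or augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    return sum(augment(u, set()) for u in range(n_left))

def fairest_feasible_floor(utility):
    """Largest per-edge utility floor that still admits a perfect matching."""
    n = len(utility)
    candidates = sorted({w for row in utility for w in row})
    lo, hi, best = 0, len(candidates) - 1, None
    while lo <= hi:  # binary search over the distinct edge utilities
        mid = (lo + hi) // 2
        lam = candidates[mid]
        adj = [[j for j, w in enumerate(row) if w >= lam] for row in utility]
        if bipartite_match(n, adj) == n:
            best, lo = lam, mid + 1  # feasible: try a higher floor
        else:
            hi = mid - 1             # infeasible: lower the floor
    return best

print(fairest_feasible_floor([[100, 10], [5, 1]]))  # max-min edge utility
```

Each feasibility check costs one matching/max-flow computation, and the binary search multiplies that by a logarithmic factor, which matches the O(VE^2 log F) figure mentioned above when Edmonds-Karp is the inner solver.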
(See improvements below.) AFTER READING REBUTTAL: The rebuttal bumped my score up a bit; thanks to the authors for addressing some of my concerns. I do think the connection to related literature is still a little light, as I mentioned in my review, and I would appreciate seeing the work better situated in future versions of the paper. Point (1) -- another discussion-based point -- should also be addressed.
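On point (1) above -- when E(M)/E_opt might be much smaller than 1 -- a small hand-built instance with very uneven edge weights (my own toy construction, not taken from the paper) already separates the two objectives:

```python
from itertools import permutations

# Toy instance: vehicle 0 has lucrative edges, vehicle 1 does not. The
# utilitarian optimum starves the badly-placed driver, while the max-min
# (Rawlsian) assignment helps that driver at a large cost in total utility,
# so the efficiency ratio E_fair / E_opt falls well below 1.
utility = [[100, 10],  # vehicle 0's utility for requests 0, 1
           [5, 1]]     # vehicle 1's utility for requests 0, 1

def totals(perm):
    """Total utility and worst per-vehicle utility of an assignment."""
    gains = [utility[i][j] for i, j in enumerate(perm)]
    return sum(gains), min(gains)

perms = list(permutations(range(len(utility))))
utilitarian = max(perms, key=lambda p: totals(p)[0])  # maximise the sum
rawlsian = max(perms, key=lambda p: totals(p)[1])     # maximise the minimum

E_opt = totals(utilitarian)[0]   # 100 + 1 = 101
E_fair = totals(rawlsian)[0]     # 10 + 5 = 15
print(E_fair / E_opt)            # ratio well below 1
```

Scaling up such constructions, or filtering the taxi data for sparse regions with skewed edge weights, might surface realistic cases where the ratio does not plateau.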
The paper considers the ridesharing setting (think Uber or Lyft): private drivers are matched with passengers through a mobile app. The authors study fairness in the allocation of a set of riders to a set of vehicles. A standard solution would try to maximise efficiency, i.e., maximise some aggregate measure of pairwise affinity between rider and vehicle (for instance, minimise the total time to serve the requests). The authors propose an alternative solution that instead optimises for a predetermined level of fairness (for instance, an upper bound on the time to serve any request) while still attempting to maximise efficiency. The authors then show that the proposed algorithm provides a lower bound on efficiency which is optimal within the proposed model. Finally, the authors empirically study a public dataset of New York taxi trips, and conclude that significant improvements in fairness can be obtained with very small losses in efficiency.

I think this is a good paper for a few reasons:
- It is clearly written and easy to understand.
- It is technically sound, and the proposed solution is very simple and directly implementable with nothing more than a linear assignment solver.
- It addresses a problem of great importance: machine learning objectives have historically focused on the mean or median of a distribution, whereas here we see an attempt to improve the "worst-off" individual in an allocation problem, in line with the notion of fairness proposed by Rawls.
- It provides further evidence of a very important finding I have seen elsewhere: often, introducing fairness constraints costs only a negligible amount of efficiency. Whenever this is true, organisations can do much greater good without affecting their profitability (in effect buying very cheap insurance against potential future public backlash over unfair treatment of certain customers).
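As a footnote to "nothing more than a linear assignment solver": the purely utilitarian baseline is a textbook linear assignment problem, solvable off the shelf, e.g. with SciPy's Hungarian-algorithm routine. The snippet below is an illustrative sketch with made-up affinities, not the authors' code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# affinity[i][j]: pairwise affinity between vehicle i and request j
# (e.g. negated time-to-serve); values here are made up for illustration.
affinity = np.array([[9.0, 4.0, 7.0],
                     [6.0, 8.0, 3.0],
                     [5.0, 2.0, 6.0]])

# Utilitarian baseline: the matching that maximises total affinity.
rows, cols = linear_sum_assignment(affinity, maximize=True)
total = affinity[rows, cols].sum()
print(list(zip(rows.tolist(), cols.tolist())), total)
```

Comparing this unconstrained optimum against the fairness-constrained solution would give exactly the kind of baseline discussion the first review asks for.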