Robustness of Empirical Revenue Maximization in Auction Learning

Abstract

Empirical Revenue Maximization (ERM) is an important price-learning algorithm in data-driven auction design. From samples of the bidders’ value distribution, it learns an approximately revenue-optimal reservation price, in both repeated auctions and uniform-price auctions.
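
For concreteness, here is a minimal Python sketch of ERM in its standard single-parameter form: pick the price p (among the sample values) that maximizes the empirical revenue p · |{i : v_i ≥ p}| / N. Function and variable names are illustrative and not taken from the paper.

```python
import random

def erm_reserve_price(samples):
    """Return the sample value maximizing empirical revenue p * |{v >= p}| / N."""
    n = len(samples)
    sorted_vals = sorted(samples, reverse=True)
    best_price, best_revenue = 0.0, 0.0
    # The value at (0-indexed) position i in descending order has i + 1 samples
    # greater than or equal to it, so its empirical revenue is p * (i + 1) / n.
    for i, p in enumerate(sorted_vals):
        revenue = p * (i + 1) / n
        if revenue > best_revenue:
            best_price, best_revenue = p, revenue
    return best_price

# Example: with samples from Uniform[0, 1], the ERM price approaches the
# true optimal reserve 0.5 as the number of samples grows.
samples = [random.random() for _ in range(10_000)]
print(erm_reserve_price(samples))
```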

However, in these scenarios the bidders who provide the samples have an incentive to manipulate them in order to lower ERM’s output price. We show that ERM is robust against such manipulation as long as the number of manipulated samples is small. Specifically, we generalize the “incentive-awareness measure” proposed by Lavi et al. (2019) to quantify the reduction of ERM’s output price caused by changing m out of the N input samples, for 1 ≤ m ≤ o(N^0.5), and provide explicit rates at which this measure converges to zero as N goes to infinity. Using this measure, we employ ERM to construct an efficient, approximately incentive-compatible, and revenue-optimal learning algorithm for repeated auctions against non-myopic bidders, and we show approximate group incentive compatibility in uniform-price auctions.
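
A rough formalization of the robustness statement, in notation of my own (the exact definition of the incentive-awareness measure in Lavi et al. (2019) and in the paper may differ):

```latex
% Illustrative notation only; not necessarily the paper's exact definition.
Let $S = (v_1, \dots, v_N)$ be the submitted samples and let $S'$ range over
profiles obtained from $S$ by changing at most $m$ of the $N$ values. One
natural measure of ERM's sensitivity to such manipulation is
\[
  \Delta_{m,N} \;=\; \max_{S'} \frac{\mathrm{ERM}(S) - \mathrm{ERM}(S')}{\mathrm{ERM}(S)},
\]
the largest relative decrease of the output price. The robustness result says
that, with high probability over the random draw of $S$, $\Delta_{m,N} \to 0$
as $N \to \infty$ whenever $m \le o(N^{0.5})$, with explicit convergence rates.
```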

This is joint work with Xiaotie Deng, Ron Lavi, Qi Qi, Wenwei Wang, and Xiang Yan, accepted at NeurIPS 2020 (see https://arxiv.org/abs/2010.05519).

Date
Dec 23, 2020
Location
Institute for Theoretical Computer Science (ITCS), SUFE
Shanghai, China
Tao Lin
PhD student in Computer Science