[Seminar] Proper Losses, Moduli of Convexity, and Surrogate Regret Bounds, by Dr. Han Bao, Kyoto University

Wednesday, June 7, 2023, 1:30 PM
Lab5, DE18 Lounge space

Description

Speaker: Assistant Professor Han Bao, Kyoto University

Title: Proper Losses, Moduli of Convexity, and Surrogate Regret Bounds

Abstract: Proper losses (or proper scoring rules) have been used for over half a century to elicit a user's subjective probability from observations. In the machine learning community, the elicited probabilities are often used for downstream tasks such as classification and bipartite ranking. Here, we assess the quality of the probabilities elicited by different proper losses through surrogate regret bounds, which describe how fast an estimated probability converges to the optimal one when a proper loss is optimized. This work contributes a sharp analysis of surrogate regret bounds in two ways. First, we provide general surrogate regret bounds for proper losses measured by the $L^1$ distance. This abstraction eschews a tailor-made analysis of each downstream task and delineates how universally a loss function operates. Our analysis relies on a classical mathematical tool known as the moduli of convexity, which is of independent interest. Second, we evaluate the surrogate regret bounds with polynomials to identify the quantitative convergence rate. These devices enable us to compare different losses, confirming that the surrogate regret is lower-bounded by $\Omega(\epsilon^{1/2})$ for popular loss functions.
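Background (standard definitions for orientation, not results from the talk): the modulus of convexity $\delta_f$ measures how strongly a convex function $f$ bends, and a surrogate regret bound, in the style of Bartlett, Jordan, and McAuliffe (2006), controls the downstream-task regret of a predictor $h$ by a transform $\psi$ of its surrogate regret:

$$\delta_f(\epsilon) = \inf\left\{ \frac{f(x)+f(y)}{2} - f\!\left(\frac{x+y}{2}\right) \,:\, |x-y| \ge \epsilon \right\}, \qquad \psi\big(\mathrm{regret}_{\mathrm{task}}(h)\big) \le \mathrm{regret}_{\mathrm{surrogate}}(h),$$

where $\psi$ is a nondecreasing transform depending on the loss. Inverting $\psi$ gives the convergence speed referred to in the abstract; for instance, a quadratic $\psi$ yields a square-root rate of the $\epsilon^{1/2}$ type mentioned above.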
