Journal article in Machine Learning, 2018

Simpler PAC-Bayesian Bounds for Hostile Data

Abstract

PAC-Bayesian learning bounds are of the utmost interest to the learning community. Their role is to connect the generalization ability of an aggregation distribution $\rho$ to its empirical risk and to its Kullback-Leibler divergence with respect to some prior distribution $\pi$. Unfortunately, most of the available bounds typically rely on heavy assumptions such as boundedness and independence of the observations. This paper aims at relaxing these constraints and provides PAC-Bayesian learning bounds that hold for dependent, heavy-tailed observations (hereafter referred to as \emph{hostile data}). In these bounds, the Kullback-Leibler divergence is replaced with a general version of Csisz\'ar's $f$-divergence. We prove a general PAC-Bayesian bound, and show how to use it in various hostile settings.
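For context, here is a minimal sketch (not the paper's result) of the classical McAllester-style PAC-Bayesian bound that this work generalizes. It assumes $n$ i.i.d. observations and a loss bounded in $[0,1]$, precisely the restrictions relaxed in the paper; $R$ denotes the risk, $r_n$ the empirical risk, and $1-\delta$ the confidence level. With probability at least $1-\delta$, simultaneously for all aggregation distributions $\rho$,
\[
\mathbb{E}_{\theta\sim\rho}\left[R(\theta)\right] \;\le\; \mathbb{E}_{\theta\sim\rho}\left[r_n(\theta)\right] + \sqrt{\frac{\mathrm{KL}(\rho\,\|\,\pi) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}.
\]
The bounds of the present paper replace the Kullback-Leibler term $\mathrm{KL}(\rho\,\|\,\pi)$ with a general version of Csisz\'ar's $f$-divergence, and dispense with the boundedness and independence requirements on the observations.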
Main file: main.pdf (162.73 KB). Origin: files produced by the author(s).

Dates and versions

hal-01385064 , version 1 (20-10-2016)
hal-01385064 , version 2 (23-10-2016)
hal-01385064 , version 3 (23-05-2019)


Cite

Pierre Alquier, Benjamin Guedj. Simpler PAC-Bayesian Bounds for Hostile Data. Machine Learning, 2018, ⟨10.1007/s10994-017-5690-0⟩. ⟨hal-01385064v2⟩
