Regularized Bayesian inference (RegBayes) is a computational framework that allows Bayesian and nonparametric Bayesian models to incorporate rich side knowledge into the inference process by defining an appropriate posterior regularization term. When the posterior regularization follows the max-margin principle, RegBayes lets you learn Bayesian and nonparametric Bayesian models discriminatively, much as in support vector machines; but here everything is done via probabilistic inference, which gracefully handles noise, ambiguity, and missing values and discovers statistical structures hidden in complex data. Combined with nonparametric techniques, this yields infinite support vector machines and infinite latent support vector machines. Details can be found in our papers and in my recent tutorial talks at ADL 2016, ACML 2013, and MLA 2013. A highlight of my work on Bayesian methods was published in IEEE Intelligent Systems under the title "AI's 10 to Watch."
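As a rough sketch of the idea (the exact notation varies across our papers), posterior regularization casts Bayesian inference as an optimization problem over the post-data distribution $q$:

$$\inf_{q(\mathbf{M})}\; \mathrm{KL}\big(q(\mathbf{M}) \,\|\, p(\mathbf{M}\mid \mathcal{D})\big) \;+\; c\,\Omega\big(q(\mathbf{M})\big)$$

where $p(\mathbf{M}\mid\mathcal{D})$ is the ordinary Bayesian posterior, $\Omega$ is a posterior regularization term (e.g., an expected max-margin hinge loss), and $c$ trades off the two. When $\Omega \equiv 0$, the optimum recovers the standard Bayesian posterior, so ordinary Bayesian inference is a special case.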
Here are some of my favorite examples showing what we can get by doing Bayesian inference and max-margin learning jointly:
- MedLDA: a max-margin supervised topic model with efficient and scalable inference algorithms;
- MMH: a max-margin latent space model for multi-view data analysis;
- iSVM: a Dirichlet Process (DP) mixture of large-margin kernel machines that allows you to discover clustering structures in SVM classifiers;
- iLSVM: an SVM model that learns latent features suited to classification and multi-task learning, and determines the feature dimension automatically;
- MedLFRM: a link prediction model that learns latent features and determines the feature dimension automatically;
- BM3F: a nonparametric Bayesian formulation of max-margin matrix factorization, with applications in collaborative prediction and recommendation.
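To give a concrete feel for how max-margin regularization reshapes a posterior, here is a minimal toy sketch (not code from any of the models above). It uses the fact that, for a KL objective with an additive expected-loss regularizer, the regularized posterior takes the reweighted form $q(w) \propto p(w\mid\mathcal{D})\,\exp(-c\cdot\mathrm{hinge}(w))$. The 1-D classifier, grid discretization, logistic likelihood, and the value of $c$ are all illustrative assumptions:

```python
import numpy as np

# Toy posterior regularization in the RegBayes spirit (illustrative only):
# a 1-D linear classifier w on a grid, a N(0,1) prior, a logistic
# likelihood, and a hinge-loss posterior regularizer with weight c.

w_grid = np.linspace(-3.0, 3.0, 601)       # discretized model space
log_prior = -0.5 * w_grid**2               # standard normal prior (log, unnormalized)

X = np.array([-2.0, -1.0, 1.0, 2.0])       # 1-D inputs (assumed toy data)
y = np.array([-1, -1, 1, 1])               # binary labels

def log_lik(w):
    # log p(y | X, w) under a logistic model
    return np.sum(-np.log1p(np.exp(-y * w * X)))

def hinge(w):
    # total hinge loss of classifier w on the data
    return np.sum(np.maximum(0.0, 1.0 - y * w * X))

log_post = np.array([log_lik(w) for w in w_grid]) + log_prior

# Regularized posterior: q(w) ∝ p(w|D) * exp(-c * hinge(w))
c = 2.0
log_q = log_post - c * np.array([hinge(w) for w in w_grid])

def normalize(log_p):
    p = np.exp(log_p - log_p.max())
    return p / p.sum()

plain_post = normalize(log_post)
reg_post = normalize(log_q)

# The hinge penalty pushes posterior mass toward large-margin classifiers,
# so the regularized posterior mean of w is larger than the plain one.
print("E[w], plain posterior:    ", np.sum(w_grid * plain_post))
print("E[w], RegBayes-style post:", np.sum(w_grid * reg_post))
```

The point of the sketch is that the discriminative (max-margin) signal enters through a reweighting of the Bayesian posterior rather than through a separate training objective, which is what lets the models above keep full probabilistic semantics while being trained discriminatively.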