## Robust Linear Classifier

(Bhattacharyya et al., 2004; Shivaswamy et al., 2006) built Robust Support Vector Machines to handle missing values in data. When values are missing, a secondary estimation procedure can often impute them, albeit with a certain degree of uncertainty. These works model that uncertainty by its expected value and covariance structure and propose a robust classification method via a worst-case optimization scheme. The task of learning a robust SVM is first posed as an SVM with chance constraints, which is then relaxed, with the help of Chebyshev's inequality, into a second-order cone program.
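As a sketch of this construction (notation assumed here, following the standard robust-SVM formulation rather than quoted from the cited papers), each training input is modeled as a random vector $x_i$ with known mean $\mu_i$ and covariance $\Sigma_i$, and correct classification is required with probability at least $\kappa_i$:

```latex
\min_{w, b, \xi} \; \frac{1}{2}\|w\|_2^2 + C \sum_i \xi_i
\quad \text{s.t.} \quad
\Pr\!\left( y_i (w^\top x_i + b) \ge 1 - \xi_i \right) \ge \kappa_i,
\qquad \xi_i \ge 0 .
```

The one-sided Chebyshev (Cantelli) inequality, which holds for any distribution with the given first two moments, implies that each chance constraint is satisfied whenever

```latex
y_i (w^\top \mu_i + b) \;\ge\; 1 - \xi_i + \gamma_i \left\| \Sigma_i^{1/2} w \right\|_2,
\qquad \gamma_i = \sqrt{\frac{\kappa_i}{1 - \kappa_i}} ,
```

which is a second-order cone constraint, so the relaxed learning problem becomes a second-order cone program.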

(Bhadra et al., 2009; Ben-Tal et al., 2011) present a novel methodology for constructing maximum-margin classifiers that are robust to interval-valued uncertainty in the examples. The idea is to employ chance constraints which ensure that the uncertain examples are classified correctly with high probability. The key novelty is in employing Bernstein bounding schemes to relax the resulting chance-constrained program into a convex second-order cone program. Bernstein bounds exploit richer partial information (the bounded support of the uncertainty, in addition to its moments) and hence can be far less conservative than Chebyshev bounds. Owing to this tighter modeling of uncertainty, the resulting classifiers achieve larger classification margins and hence better generalization.
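The gap between the two bounding schemes can be made concrete with a small numerical sketch (an illustration of the standard Cantelli and Bernstein tail inequalities, not the exact relaxations used in the cited papers): for a sum of bounded, zero-mean, independent variables, the Bernstein bound uses the support information and decays exponentially, while the Chebyshev-type bound, which uses only the variance, decays polynomially.

```python
import math
import random

# Compare the one-sided Chebyshev (Cantelli) bound with the Bernstein bound
# on P(S >= t), where S is a sum of n independent, zero-mean variables,
# each uniform on [-M, M]. All names here are illustrative.

random.seed(0)

n = 100          # number of independent terms
M = 1.0          # each term lies in [-M, M] (interval/support information)
var_sum = n / 3  # Var(U(-1, 1)) = 1/3, so Var(S) = n/3
t = 20.0         # tail threshold

# Cantelli: uses only the mean and variance of S.
cantelli_bound = var_sum / (var_sum + t**2)

# Bernstein: additionally uses |Z_i| <= M, giving exponential decay.
bernstein_bound = math.exp(-t**2 / (2 * (var_sum + M * t / 3)))

# Monte Carlo estimate of the true tail probability, for reference.
trials = 20000
hits = sum(
    1
    for _ in range(trials)
    if sum(random.uniform(-M, M) for _ in range(n)) >= t
)
emp_freq = hits / trials

print(f"Cantelli bound : {cantelli_bound:.4f}")   # ~0.0769
print(f"Bernstein bound: {bernstein_bound:.5f}")  # ~0.00674
print(f"Empirical tail : {emp_freq:.5f}")
```

With these numbers the Bernstein bound is roughly an order of magnitude tighter than the Cantelli bound, which is the sense in which Bernstein-based relaxations are "far less conservative" and yield larger margins.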