The central theme of this project is the study of linear and non-linear SVM formulations in the presence of uncertain observations. We propose novel methodologies for constructing maximum-margin classifiers that are robust to uncertainty in the data points. Each data point is represented by a random vector rather than a deterministic feature vector, and the proposed classifiers are required to correctly classify every realization of that random vector. The main contribution of this work is the derivation of robust classifiers from only partial knowledge of the underlying uncertainty, covering both linear and non-linear kernelized classification.
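To make the idea concrete, the sketch below illustrates one simple instance of a robust maximum-margin classifier — it is not the formulation developed in this work, but a minimal example under the assumption that each uncertain data point lies in a ball of known radius around its nominal feature vector. The worst-case margin over each ball is the nominal margin reduced by the radius times the weight norm, and a subgradient method minimizes the corresponding robust hinge loss. All function names and parameters here are illustrative.

```python
import numpy as np

def robust_linear_svm(X, y, r, lam=0.01, lr=0.1, epochs=200):
    """Illustrative robust linear SVM (assumption: ball uncertainty).

    Each point X[i] is treated as a ball of radius r[i]; the worst-case
    margin over that ball is y_i * (w.x_i + b) - r_i * ||w||, so the
    hinge loss penalizes points whose entire ball is not on the correct
    side of the margin. Trained by plain subgradient descent.
    """
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        wn = np.linalg.norm(w) + 1e-12          # avoid division by zero
        margins = y * (X @ w + b) - r * wn      # worst case over each ball
        active = margins < 1.0                  # points violating the margin
        # Subgradient of lam/2*||w||^2 + mean hinge(1 - worst-case margin)
        gw = lam * w + (np.where(active, r, 0.0)[:, None] * (w / wn)
                        - (active * y)[:, None] * X).mean(axis=0)
        gb = -(active * y).mean()
        w -= lr * gw
        b -= lr * gb
    return w, b
```

On well-separated data with small radii this recovers a separating hyperplane whose margin additionally clears every uncertainty ball; the formulations studied in this project replace the ball model with distributional (random-vector) descriptions of the uncertainty.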