Variable Sparsity Kernel Learning
IIT-Bombay, IISc, Technion
The key features of the algorithm (in contrast with SimpleMKL) are:
Solves MKL for various mixed-norm based regularizers
Provable global convergence
Computational complexity expressed in terms of the number of training data points, the maximum number of kernels in any group, and the total number of kernels
Similar in spirit to projected gradient descent, but with negligible computational effort for the projection and step-size selection
Number of iterations nearly independent of the number of kernels
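The cheap-projection property can be illustrated with entropic mirror descent over the probability simplex, the setting analyzed in [Beck and Teboulle, 2003]: a multiplicative update followed by renormalization plays the role of the projection, so no iterative projection routine is needed. Below is a minimal sketch in Python; the quadratic toy objective and the 1/sqrt(t) step-size rule are illustrative assumptions, not the actual VSKL objective or schedule.

```python
import numpy as np

def entropic_md_step(d, grad, step):
    """One entropic mirror-descent step on the probability simplex.

    The exponentiated (multiplicative) update followed by a
    renormalization keeps d on the simplex in closed form, which is
    why the 'projection' costs essentially nothing.
    """
    w = d * np.exp(-step * grad)
    return w / w.sum()

# Toy objective standing in for the MKL subproblem: f(d) = 0.5 * d'Qd,
# minimized over the simplex (d >= 0, sum(d) = 1).
Q = np.diag([1.0, 4.0, 9.0])
d = np.ones(3) / 3.0              # start at the simplex center
for t in range(1, 201):
    g = Q @ d                      # gradient of the toy objective
    d = entropic_md_step(d, g, step=1.0 / np.sqrt(t))
```

After the loop, d remains a valid probability vector and concentrates mass on the coordinates with the smallest curvature, as expected for this objective.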
References
 Aharon Ben-Tal, Tamar Margalit, and Arkadi Nemirovski. The Ordered Subsets Mirror Descent Optimization Method with Applications to Tomography. SIAM Journal on Optimization, 12(1):79–108, 2001.
 Amir Beck and Marc Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31:167–175, 2003.
 J. Aflalo, A. Ben-Tal, C. Bhattacharyya, J. Saketha Nath, and S. Raman. Variable Sparsity Kernel Learning --- Algorithms and Applications. Submitted to JMLR, 2009.
 J. Saketha Nath, G. Dinesh, S. Raman, C. Bhattacharyya, A. Ben-Tal, and K. R. Ramakrishnan. On the Algorithmics and Applications of a Mixed-norm Regularization Based Kernel Learning Formulation. Advances in Neural Information Processing Systems, 2009.
 G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan. Learning the Kernel Matrix with Semidefinite Programming. Journal of Machine Learning Research, 5:27–72, 2004.
 A. Rakotomamonjy, F. Bach, S. Canu, and Y. Grandvalet. SimpleMKL. Journal of Machine Learning Research, 9:2491–2521, 2008.
 M. Szafranski, Y. Grandvalet, and A. Rakotomamonjy. Composite Kernel Learning. In Proceedings of ICML, 2008.
For comments or bug reports, please email ramans [AT] csa.iisc.ernet.in.