Eprint already available on another site (E-prints, working papers and research blog)
Elastic-Net Multiple Kernel Learning: Combining Multiple Data Sources for Prediction
Mourão-Miranda, Janaina; Hussain, Zakria; Tsirlis, Konstantinos et al.
2025
 

Files


Full Text
2512.11547v1.pdf
Author postprint (887.16 kB) Creative Commons License - Attribution
Details



Keywords :
Computer Science - Learning; Multiple Kernel Learning (MKL); MRI; Neuroimaging; Predictive model
Abstract :
[en] Multiple Kernel Learning (MKL) models combine several kernels in supervised and unsupervised settings to integrate multiple data representations or sources, each represented by a different kernel. MKL seeks an optimal linear combination of base kernels that maximizes a generalized performance measure under a regularization constraint. Various norms have been used to regularize the kernel weights, including $\ell_1$, $\ell_2$ and $\ell_p$, as well as the "elastic-net" penalty, which combines the $\ell_1$- and $\ell_2$-norms to promote both sparsity and the selection of correlated kernels. This property makes elastic-net regularized MKL (ENMKL) especially valuable when model interpretability is critical and kernels capture correlated information, such as in neuroimaging. Previous ENMKL methods have followed a two-stage procedure: fix the kernel weights, train a support vector machine (SVM) with the weighted kernel, and then update the weights via gradient descent, cutting-plane methods, or surrogate functions. Here, we introduce an alternative ENMKL formulation that yields a simple analytical update for the kernel weights. We derive explicit algorithms for both SVM and kernel ridge regression (KRR) under this framework, and implement them in the open-source Pattern Recognition for Neuroimaging Toolbox (PRoNTo). We evaluate these ENMKL algorithms against $\ell_1$-norm MKL and against SVM (or KRR) trained on the unweighted sum of kernels across three neuroimaging applications. Our results show that ENMKL matches or outperforms $\ell_1$-norm MKL in all tasks and only underperforms standard SVM in one scenario. Crucially, ENMKL produces sparser, more interpretable models by selectively weighting correlated kernels.
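To make the two-stage procedure described in the abstract concrete, the following is a minimal sketch, assuming scikit-learn and synthetic data: base kernels are combined through a weight vector, an SVM is trained on the resulting precomputed kernel, and the weights are then re-estimated from the fitted dual coefficients. The multiplicative weight update shown here is only an illustrative placeholder for the generic MKL wrapper loop; it is not the analytical elastic-net update derived in the paper, nor the PRoNTo implementation.

```python
# Illustrative two-stage MKL loop (hypothetical example data; the weight
# update below is a generic placeholder, not the paper's ENMKL rule).
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)

# Base kernels: RBF kernels at different bandwidths stand in for kernels
# computed from different data sources (e.g. imaging modalities).
gammas = [0.01, 0.1, 1.0]
base_kernels = [rbf_kernel(X, gamma=g) for g in gammas]

weights = np.ones(len(base_kernels)) / len(base_kernels)  # uniform start
for _ in range(20):
    # Stage 1: train an SVM on the weighted sum of the base kernels.
    K = sum(w * Km for w, Km in zip(weights, base_kernels))
    svm = SVC(C=1.0, kernel="precomputed").fit(K, y)

    # Per-kernel contribution alpha^T K_m alpha over the support vectors.
    sv = svm.support_
    coef = svm.dual_coef_.ravel()  # signed dual coefficients y_i * alpha_i
    contrib = np.array([coef @ Km[np.ix_(sv, sv)] @ coef for Km in base_kernels])

    # Stage 2: re-weight the kernels. Placeholder multiplicative update
    # normalized onto the simplex; the paper instead applies a closed-form
    # elastic-net update at this step.
    new_weights = weights * contrib
    new_weights /= new_weights.sum()
    if np.allclose(new_weights, weights, atol=1e-6):
        break
    weights = new_weights

print("kernel weights:", np.round(weights, 3))
```

Using a precomputed kernel in the inner solver is what makes the wrapper generic: any weighted combination of base kernels can be plugged in without changing the SVM (or KRR) training step.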
Disciplines :
Engineering, computing & technology: Multidisciplinary, general & others
Author, co-author :
Mourão-Miranda, Janaina
Hussain, Zakria
Tsirlis, Konstantinos
Phillips, Christophe; Université de Liège - ULiège > GIGA > GIGA Neurosciences - Development in data acquisition & modeling
Shawe-Taylor, John
Language :
English
Title :
Elastic-Net Multiple Kernel Learning: Combining Multiple Data Sources for Prediction
Publication date :
12 December 2025
Source :
Commentary :
Technical Report
Available on ORBi :
since 16 December 2025

