[1] M. A. Alam, H. Y. Lin, H. W. Deng, V. D. Calhoun and Y. P. Wang, A
kernel machine method for detecting higher order interactions in
multimodal datasets: Application to schizophrenia, Journal of
Neuroscience Methods 309 (2018), 161-174.
DOI: https://doi.org/10.1016/j.jneumeth.2018.08.027
[2] M. A. Alam, V. D. Calhoun and Y. P. Wang, Identifying outliers using
multiple kernel canonical correlation analysis with application to
imaging genetics, Computational Statistics and Data Analysis 125
(2018), 70-85.
DOI: https://doi.org/10.1016/j.csda.2018.03.013
[3] M. A. Alam and K. Fukumizu, Higher-order regularized kernel
canonical correlation analysis, International Journal of Pattern
Recognition and Artificial Intelligence 29(4) (2015), 1-24.
DOI: https://doi.org/10.1142/S0218001415510052
[4] N. S. Altman, An introduction to kernel and nearest-neighbor
nonparametric regression, The American Statistician 46(3) (1992),
175-185.
[5] J. Ali, R. Khan, N. Ahmad and I. Maqsood, Random forests and
decision trees, International Journal of Computer Science 9(5) (2012),
272-278.
[6] A. Smola and S. V. N. Vishwanathan, Introduction to Machine
Learning, 1st Edition, Cambridge University Press, Cambridge, United
Kingdom, 2008, pp 181-186.
[7] P. Brazdil, C. Giraud-Carrier, C. Soares and R. Vilalta,
Metalearning: Applications to Data Mining, 1st Edition, Springer-Verlag,
Berlin, Heidelberg, 2009, pp 17-42.
[8] A. Ben-Hur, D. Horn, H. T. Siegelmann and V. Vapnik,
Support vector clustering, Journal of Machine Learning Research 2
(2001), 125-137.
[9] V. Cherkassky and F. M. Mulier, Learning From Data: Concepts,
Theory, and Methods, 2nd Edition, John Wiley, IEEE Press, 2007, pp
340-464.
[10] G. De’ath, Multivariate regression trees: A new technique
for modeling species-environment relationships, Ecology 83(4) (2002),
1105-1117.
DOI: https://doi.org/10.2307/3071917
[11] G. Zhu and D. G. Blumberg, Classification using ASTER data
and SVM algorithms: The case study of Beer Sheva, Israel, Remote
Sensing of Environment 80(2) (2002), 233-240.
DOI: https://doi.org/10.1016/S0034-4257(01)00305-4
[12] M. Greenacre and R. Primicerio, Multivariate Analysis of
Ecological Data, 1st Edition, Rubes Editorial, Spain, 2013, pp
15-24.
[13] A. Goyal and R. Mehta, Performance comparison of Naive Bayes and
J48 classification algorithms, International Journal of Applied
Engineering Research 7(11) (2012), 281-297.
[14] H. Hormozi, E. Hormozi and H. R. Nohooji, The classification of
the applicable machine learning methods in robot manipulators,
International Journal of Machine Learning and Computing 2(5) (2012),
560-563.
DOI: https://doi.org/10.7763/IJMLC.2012.V2.189
[15] J. Han, M. Kamber and J. Pei, Data Mining: Concepts and
Techniques, 3rd Edition, Morgan Kaufmann, USA, 2012, pp 327-439.
DOI: https://doi.org/10.1016/C2009-0-61819-5
[16] N. Landwehr, M. Hall and E. Frank, Logistic model trees, Machine
Learning 59(1-2) (2005), 161-205.
DOI: https://doi.org/10.1007/s10994-005-0466-3
[17] I. Jenhani, N. Ben Amor and Z. Elouedi, Decision trees as
possibilistic classifiers, International Journal of Approximate
Reasoning 48(3) (2008), 784-807.
DOI: https://doi.org/10.1016/j.ijar.2007.12.002
[18] L. I. Kuncheva, Combining Pattern Classifiers: Methods and
Algorithms, 1st Edition, John Wiley & Sons, West Sussex, England,
2004, pp 45-99.
[19] K. J. Archer and R. V. Kimes, Empirical characterization of
random forest variable importance measures, Computational Statistics &
Data Analysis 52(4) (2008), 2249-2260.
DOI: https://doi.org/10.1016/j.csda.2007.08.015
[20] S. B. Kotsiantis, I. D. Zaharakis and P. E. Pintelas, Machine
learning: A review of classification and combining techniques,
Artificial Intelligence Review 26(3) (2006), 159-190.
DOI: https://doi.org/10.1007/s10462-007-9052-3
[21] M. H. Dunham and S. Sridhar, Data Mining: Introductory and
Advanced Topics, 1st Edition, Pearson Education, UK, 2006, pp 75-84.
[22] G. J. McLachlan, Discriminant Analysis and Statistical Pattern
Recognition, 1st Edition, Wiley-Interscience, UK, 2004, pp 189-200.
[23] F. Nigsch, A. Bender, B. van Buuren, J. Tissen, E. Nigsch and
J. B. O. Mitchell, Melting point prediction employing k-nearest
neighbor algorithms and genetic parameter optimization, Journal of
Chemical Information and Modeling 46(6) (2006), 2412-2422.
DOI: https://doi.org/10.1021/ci060149f
[24] F. Y. Osisanwo, J. E. T. Akinsola, O. Awodele, J. O. Hinmikaiye,
O. Olakanmi and J. Akinjobi, Supervised machine learning algorithms:
Classification and comparison, International Journal of Computer
Trends and Technology 48(3) (2017), 128-138.
DOI: https://doi.org/10.14445/22312803/IJCTT-V48P126
[25] K. R. Pradeep and N. C. Naveen, A collective study of machine
learning (ML) algorithms with big data analytics (BDA) for healthcare
analytics (HcA), International Journal of Computer Trends and
Technology 47(3) (2017), 149-155.
DOI: https://doi.org/10.14445/22312803/IJCTT-V47P121
[26] L. Rokach and O. Maimon, Data Mining with Decision Trees:
Theory and Applications, 1st Edition, World Scientific Publishing Co.,
Inc., USA, 2008, pp 31-58.
[27] S. Huang, N. Cai, P. P. Pacheco, S. Narrandes, Y. Wang and
W. Xu, Applications of support vector
machine (SVM) learning in cancer genomics, Cancer Genomics and
Proteomics 15(1) (2018), 41-51.
DOI: https://doi.org/10.21873/cgp.20063
[28] A. K. Sharma and S. Sahni, A comparative study of classification
algorithms for spam email data analysis, International Journal on
Computer Science and Engineering 3(5) (2011), 1890-1895.
[29] I. H. Witten and E. Frank, Data Mining: Practical Machine
Learning Tools and Techniques, 3rd Edition, Morgan Kaufmann, USA,
2011, pp 203-215.
[30] M. J. Crawley, The R Book, 1st Edition, John Wiley & Sons, Ltd.,
England, 2007, pp 811-827.