Version 1: Received: 7 April 2019 / Approved: 8 April 2019 / Online: 8 April 2019 (11:50:05 CEST)
Version 2: Received: 5 June 2020 / Approved: 5 June 2020 / Online: 5 June 2020 (04:37:39 CEST)
Version 3: Received: 5 June 2020 / Approved: 7 June 2020 / Online: 7 June 2020 (17:44:06 CEST)
Version 4: Received: 23 December 2021 / Approved: 24 December 2021 / Online: 24 December 2021 (16:08:06 CET)
Version 5: Received: 18 April 2023 / Approved: 19 April 2023 / Online: 19 April 2023 (07:43:17 CEST)
Elgharabawy, A.; Prasad, M.; Lin, C.-T. Preference Neural Network. IEEE Transactions on Emerging Topics in Computational Intelligence 2023, 1–15, doi:10.1109/tetci.2023.3268707.
Abstract
Equality and incomparability in multi-label ranking have not previously been addressed in learning. This paper proposes a new native ranker neural network that handles multi-label ranking, including incomparable preference orders, using new activation and error functions and a new architecture. The Preference Neural Network (PNN) solves the multi-label ranking problem where labels may have indifference preference orders or subgroups that are equally ranked. PNN is a non-deep network of multiple-value neurons with a single middle layer and one or more output layers. It uses a novel positive smooth staircase (PSS) or smooth staircase (SS) activation function and represents preference orders and the Spearman ranking correlation as objective functions. PNN is introduced in two types: Type A uses a traditional NN architecture, while Type B uses an expanding architecture that introduces a new kind of hidden neuron with multiple activation functions in the middle layer and duplicated output layers, reinforcing the ranking by increasing the number of weights. PNN accepts a single data instance as input; the output neurons correspond to the labels, and each output value represents a preference value. PNN is evaluated using a new preference-mining data set that contains repeated label values, which has not been experimented on before. SS and PSS speed up learning, and PNN outperforms five previously proposed methods for strict label ranking in terms of accuracy with high computational efficiency.
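The two building blocks named in the abstract can be illustrated briefly. The following is a minimal sketch, not the authors' implementation: `smooth_staircase` is a hypothetical staircase-shaped activation built by summing shifted sigmoids (the paper's exact PSS/SS formulas may differ), and `spearman_rho` is the standard Spearman rank correlation coefficient that the abstract cites as an objective function.

```python
import math

def smooth_staircase(x, steps=3, steepness=25.0):
    """Hypothetical smooth staircase: a sum of shifted sigmoids.

    Produces `steps` plateaus near the integer levels 0..steps,
    approximating a quantized (multi-valued) activation while
    remaining differentiable. Illustration only; the paper's PSS/SS
    definitions are not reproduced here.
    """
    return sum(1.0 / (1.0 + math.exp(-steepness * (x - k)))
               for k in range(steps))

def spearman_rho(rank_a, rank_b):
    """Spearman rank correlation for two tie-free rankings:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)).
    """
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```

Using Spearman correlation as a training objective rewards output rankings that agree with the target ordering (rho = 1 for identical rankings, rho = -1 for fully reversed ones), rather than penalizing raw value errors.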
Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright:
This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.