Version 1: Received: 3 May 2024 / Approved: 7 May 2024 / Online: 7 May 2024 (11:42:50 CEST)
How to cite:
Liu, Y.; Li, G.; Payne, T. R.; Yue, Y.; Man, K. L. Non-stationary Transformer Architecture: A Versatile Framework for Recommendation Systems. Preprints 2024, 2024050378. https://doi.org/10.20944/preprints202405.0378.v1
APA Style
Liu, Y., Li, G., Payne, T. R., Yue, Y., & Man, K. L. (2024). Non-stationary Transformer Architecture: A Versatile Framework for Recommendation Systems. Preprints. https://doi.org/10.20944/preprints202405.0378.v1
Chicago/Turabian Style
Liu, Y., G. Li, T. R. Payne, Y. Yue, and K. L. Man. 2024. "Non-stationary Transformer Architecture: A Versatile Framework for Recommendation Systems." Preprints. https://doi.org/10.20944/preprints202405.0378.v1
Abstract
Recommendation systems are crucial for navigating the vast digital market. However, the dynamic and non-stationary nature of user data often hinders their efficacy. Traditional models struggle to adapt to the evolving preferences and behaviours inherent in user interaction data, posing a significant challenge for accurate prediction and personalisation. To address this, we propose a novel theoretical framework, the Non-stationary Transformer, designed to capture and leverage the temporal dynamics within data effectively. This approach enhances the traditional transformer architecture by introducing mechanisms that account for non-stationary elements, offering a robust and adaptable solution for recommendation systems. Our experimental analysis, encompassing both deep learning and reinforcement learning paradigms, demonstrates the framework's superiority over benchmark models. The empirical results confirm the efficacy of the proposed framework, which not only delivers significant performance gains, approximately an 8% reduction in Logloss and up to a 2% increase in F1 score, but also underscores its applicability to cumulative-reward scenarios. These findings advocate adopting Non-stationary Transformer models to tackle the complexities of today's recommendation tasks.
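To give a concrete sense of what "mechanisms accounting for non-stationary elements" can mean in practice, the sketch below shows one common pattern for handling non-stationary inputs before a transformer: per-instance stationarization (removing each sequence's own mean and scale) paired with a de-stationarization step that restores those statistics to the output. This is an illustrative assumption, not the authors' actual mechanism; the function names and shapes are hypothetical.

```python
import numpy as np

def stationarize(x):
    """Per-instance normalization: remove each sequence's own mean and
    standard deviation so the downstream model sees a locally stationary
    signal. x has shape (batch, seq_len, features)."""
    mu = x.mean(axis=1, keepdims=True)
    sigma = x.std(axis=1, keepdims=True) + 1e-8  # avoid division by zero
    return (x - mu) / sigma, mu, sigma

def destationarize(y, mu, sigma):
    """Restore the removed statistics, re-injecting the non-stationary
    component into the model's output."""
    return y * sigma + mu

# Toy round trip: normalize, then restore the original statistics.
x = np.random.randn(2, 10, 4) * 3.0 + 5.0
z, mu, sigma = stationarize(x)
x_rec = destationarize(z, mu, sigma)
```

In this scheme the transformer operates on `z`, whose per-sequence mean is zero and variance is one, while `mu` and `sigma` carry the drifting statistics forward so no information about the shift is lost.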
Keywords
Non-stationary Transformer; Recommendation Systems; Deep Learning; Reinforcement Learning; User-centric Systems
Subject
Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright:
This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.