Ni, H., Meng, S., Chen, X., Zhao, Z., Chen, A., Li, P., Zhang, S., Yin, Q., Wang, Y., & Chan, Y. (2024). Harnessing Earnings Reports for Stock Predictions: A QLoRA-Enhanced LLM Approach. Preprints. https://doi.org/10.20944/preprints202408.0631.v1
Abstract
Accurate stock market predictions following earnings reports are crucial for investors. Traditional methods, particularly classical machine learning models, struggle with these predictions because they cannot effectively process and interpret the extensive textual data contained in earnings reports and often overlook nuances that influence market movements. This paper introduces an advanced approach by employing Large Language Models (LLMs) fine-tuned with a novel combination of instruction-based techniques and quantized low-rank adaptation (QLoRA) compression. Our methodology integrates ‘base factors’, such as financial metric growth and earnings transcripts, with ‘external factors’, including recent market index performance and analyst grades, to create a rich, supervised dataset. This comprehensive dataset enables our models to achieve superior predictive performance in terms of accuracy, weighted F1, and Matthews correlation coefficient (MCC), especially evident in comparison with benchmarks such as GPT-4. We specifically highlight the efficacy of the llama-3-8b-Instruct-4bit model, which showcases significant improvements over baseline models. The paper also discusses the potential of expanding the output capabilities to include a ‘Hold’ option and extending the prediction horizon, aiming to accommodate various investment styles and time frames. This study not only demonstrates the power of integrating cutting-edge AI with fine-tuned financial data but also paves the way for future research in enhancing AI-driven financial analysis tools.
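As context for the evaluation metric named in the abstract, the sketch below computes the Matthews correlation coefficient for binary Up/Down predictions in pure Python. The label strings and the sample sequences are illustrative assumptions, not data or code from the paper.

```python
import math

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary 'Up'/'Down' labels."""
    tp = sum(t == p == "Up" for t, p in zip(y_true, y_pred))
    tn = sum(t == p == "Down" for t, p in zip(y_true, y_pred))
    fp = sum(t == "Down" and p == "Up" for t, p in zip(y_true, y_pred))
    fn = sum(t == "Up" and p == "Down" for t, p in zip(y_true, y_pred))
    # MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Illustrative: perfect agreement gives MCC = 1.0
print(mcc(["Up", "Down", "Up", "Down"], ["Up", "Down", "Up", "Down"]))  # 1.0
```

Unlike plain accuracy, MCC remains informative on class-imbalanced label sets (e.g. mostly ‘Up’ periods), which is why it is often reported alongside accuracy and weighted F1.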
Keywords
large language model; quantized low-rank adaptation; instruction fine-tuning
Subject
Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.