Full-Parameter Fine-Tuning Method of LLMs for Sports Injury Prevention and Treatment
Abstract
Fine-tuning large language models (LLMs) for sports injury prevention and treatment in resource-constrained environments poses significant challenges due to memory demands and the growing size of training data. This paper proposes an efficient full-parameter fine-tuning approach based on Gradient Low-Rank Projection (GaLore) to reduce memory usage. In addition, a data augmentation strategy for sports injury prevention and treatment is used to fine-tune a question-and-answer (Q&A) model with 0.5B parameters on consumer GPUs with 24GB of memory. Experimental results show that the proposed GaLore-enhanced method outperforms state-of-the-art (SOTA) methods such as low-rank adaptation (LoRA) in convergence accuracy, training time, memory consumption, and the BLEU-4 and ROUGE-2 metrics. Moreover, empirical results on injury-prevention Q&A cases indicate that a Qwen2-0.5B-Instruct model trained with the proposed method has clear advantages in understanding professional knowledge and mitigating hallucinations.
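To make the training setup concrete, the following is a minimal sketch of GaLore-based full-parameter fine-tuning, assuming the open-source galore-torch package and Hugging Face Transformers. The GaLore hyperparameters (rank, update_proj_gap, scale), the module-name filter, and the sample prompt are illustrative assumptions, not the settings reported in the paper.

# Minimal sketch: full-parameter fine-tuning with GaLore-projected optimizer states.
# Assumes `pip install galore-torch transformers`; hyperparameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from galore_torch import GaLoreAdamW

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")

# Apply low-rank gradient projection to the 2-D weight matrices of the
# attention and MLP blocks; all other parameters get plain AdamW updates.
galore_params, regular_params = [], []
for name, p in model.named_parameters():
    if p.dim() == 2 and ("attn" in name or "mlp" in name):
        galore_params.append(p)
    else:
        regular_params.append(p)

optimizer = GaLoreAdamW(
    [
        {"params": regular_params},
        {"params": galore_params, "rank": 128,
         "update_proj_gap": 200, "scale": 0.25, "proj_type": "std"},
    ],
    lr=1e-5,
)

# Standard training step: every parameter is updated (full-parameter
# fine-tuning), but the optimizer states of the projected matrices live
# in a rank-128 subspace, which is what keeps memory within a 24GB GPU.
batch = tokenizer("How should a sprained ankle be treated?", return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()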