Ethnic Festival LLM Via Prompt and LoRA Fine-Tuning


Dan Wang
Weina Zhao
Longlong Ma
Bo An

Abstract

Ethnic festivals are a core carrier of ethnic culture, and constructing a multi-ethnic festival question-answering model helps promote the digital preservation of ethnic cultural heritage. Addressing the difficulty of building domain-specific large language models and the high cost of full-parameter fine-tuning, this paper investigates LoRA fine-tuning methods for an ethnic festival large language model. The question-answering model is built around core content from books such as Readings on Traditional Chinese Festival Culture (Collector's Edition) and Ethnic Minority Festivals, together with ethnic festival introductions from Baidu Baike and ethnic-culture websites. First, we constructed the datasets and, through prompt design, guided a model to generate knowledge question-answer pairs. Next, several general-purpose large models were fine-tuned with LoRA. Finally, the best fine-tuned model was selected through evaluation and its effectiveness verified. Experimental results show that the fine-tuned ChatGLM4-9B-Chat model improved over the base model by 16.74, 14.25, 13.48, and 16.00 points on BLEU-4, ROUGE-1, ROUGE-2, and ROUGE-L, respectively. The experiments demonstrate that the selected model can effectively learn and comprehend festival-domain knowledge and provide accurate answers to user queries, helping to disseminate ethnic festival culture and advance the digitalization of festivals.
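The LoRA technique referenced in the abstract freezes the pretrained weight matrix W and learns only a low-rank update scaled by alpha/r, so the effective weight is W + (alpha/r)·BA. A minimal numpy sketch of this idea follows; the dimensions, rank, and scaling values are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def lora_delta(A, B, alpha, r):
    """Low-rank weight update: (alpha / r) * B @ A."""
    return (alpha / r) * (B @ A)

d_out, d_in, r, alpha = 64, 64, 8, 16   # illustrative dimensions and hyperparameters
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))                 # trainable, zero init -> delta starts at 0

W_eff = W + lora_delta(A, B, alpha, r)   # weight actually used in the forward pass

full_params = W.size                     # parameters touched by full fine-tuning
lora_params = A.size + B.size            # parameters trained under LoRA
print(lora_params / full_params)         # → 0.25 with these dims
```

With the zero-initialized B, the model starts out identical to the base model and only diverges as A and B are trained, which is why LoRA can adapt a general-purpose model to the festival domain at a fraction of the trainable-parameter cost of full fine-tuning.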

Article Details

How to Cite
Wang, D., Zhao, W., Ma, L., & An, B. (2025). Ethnic Festival LLM Via Prompt and LoRA Fine-Tuning. Journal of Research in Multidisciplinary Methods and Applications, 4(10), 01250410002. Retrieved from http://www.satursonpublishing.com/jrmma/article/view/a01250410002
Section
Articles

References

Zhang J, Zhou Z. Digital Preservation and Presentation of China's Outstanding Traditional Culture [J]. Journal of Guizhou Minzu University (Philosophy and Social Sciences Edition), online first, 2025: 1-16.

Xu Yuemei, Hu Ling, Zhao Jiayi, et al. Technical Application Prospects and Risk Challenges of Large Language Models [J]. Computer Applications, 2024, 44(06): 1655-1662.

Peng Shaoliang, Liu Wenjuan. Computing Power Requirements and Future Challenges for Large Models [J/OL]. Communications of the China Computer Federation, 2024(2).

Huang L, Yu W J, Ma W T, et al. A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions [EB/OL]. (2024-11-19).

Yan Jingqun. Readings on Traditional Chinese Festival Culture (Collector's Edition) [M]. Beijing: Oriental Publishing House, 2009.

Ji Chengqian. Ethnic Minority Festivals [M]. Beijing: China Social Sciences Press, 2006.

Papineni K, Roukos S, Ward T, et al. BLEU: a method for automatic evaluation of machine translation [C]//Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. 2002: 311-318.

Lin C Y. ROUGE: A Package for Automatic Evaluation of Summaries [C]//Text Summarization Branches Out. Association for Computational Linguistics, 2004: 74-81.

Team GLM, Zeng A H, Xu B, et al. ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools [EB/OL]. (2024-07-30) [2025-09-06].

Yang A, Yang B S, Hui B Y, et al. Qwen2 Technical Report [EB/OL]. (2024-09-10) [2025-09-06].

Wang S, Zheng Y, Wang G, et al. Llama3-8B-Chinese-Chat [EB/OL]. (2024-05-09) [2025-09-06].