Oleksii KUZNIETSOV, PhD Student
ORCID ID: 0000-0002-3537-9976
e-mail: oleksiy1908@gmail.com
National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, Kyiv, Ukraine
Gennadiy KYSELOV, PhD (Engin.), Assoc. Prof.
ORCID ID: 0000-0003-2682-3593
e-mail: g.kyselov@gmail.com
National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, Kyiv, Ukraine
DOI: https://doi.org/10.17721/AIT.2024.1.04
Abstract
B a c k g r o u n d . The article reviews existing approaches to evaluating the quality of automatically generated summaries of informational texts. It provides an overview of automatic summarization methods, from classical approaches to modern models based on artificial intelligence. The review covers extractive methods such as TF-IDF, as well as graph-based methods such as PageRank and TextRank. Special attention is given to abstractive approaches, including the Generative Pretrained Transformer (GPT) and Bidirectional and Auto-Regressive Transformers (BART) models. The quality of generated summaries is evaluated using quantitative metrics of summary relevance, particularly ROUGE and BLEU.
M e t h o d s . The article analyzes several approaches to automatic text summarization. Classical extractive methods, such as TF-IDF, calculate the importance of terms based on their frequency within a document and across a collection of documents. PageRank and TextRank utilize graph models to determine the significance of sentences based on the connections between them. Abstractive methods, such as GPT and BART, generate new sentences that succinctly convey the content of the original text. The effectiveness of each approach is assessed using the ROUGE and BLEU metrics, which measure the overlap between automatically generated summaries and reference texts. Particular attention is given to analyzing their accuracy, flexibility, resource requirements, and ease of implementation.
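The TF-IDF sentence scoring described in the methods can be sketched in a few lines of Python. This is a minimal, self-contained illustration only: the function name and the treatment of each sentence as its own "document" are our own simplifying assumptions, not details taken from the article.

```python
import math
import re
from collections import Counter

def tfidf_summarize(text, top_n=2):
    """Extractive summarization sketch: score each sentence by the sum of
    TF-IDF weights of its terms (each sentence is treated as a document)
    and return the top_n sentences in their original order."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    tokenized = [re.findall(r"\w+", s.lower()) for s in sentences]
    n_docs = len(tokenized)
    # Document frequency: in how many sentences each term occurs.
    df = Counter(term for toks in tokenized for term in set(toks))
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        # TF = relative frequency in the sentence; IDF = log(N / df).
        score = sum((tf[t] / len(toks)) * math.log(n_docs / df[t]) for t in tf)
        scores.append(score)
    ranked = sorted(range(n_docs), key=lambda i: scores[i], reverse=True)[:top_n]
    return [sentences[i] for i in sorted(ranked)]
```

Terms shared by every sentence get an IDF of log(1) = 0 and thus contribute nothing, which is exactly the behavior that lets TF-IDF suppress uninformative common words.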
R e s u l t s . The results of the study show that ROUGE metrics demonstrate good accuracy in measuring n-gram overlaps (sequences of n words), while BLEU is effective in machine translation tasks but may not account for certain syntactic features of the text. The evaluation of automatic summarization methods using these metrics revealed that extractive summarization methods, such as TF-IDF, are effective for processing simple texts but may lose important context in complex texts. PageRank and TextRank consider the connections between sentences but may produce less relevant results for texts with weak structural connections. Abstractive models like GPT and BART provide a more flexible approach to summarization, creating new sentences that better convey the meaning, though they require significant computational resources and are complex to implement.
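The n-gram overlap that ROUGE measures can be illustrated with a simplified ROUGE-N recall. This is our own minimal sketch, not the official ROUGE package; it uses plain whitespace tokenization and clipped n-gram counts.

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams (as tuples) in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_recall(candidate, reference, n=1):
    """Fraction of the reference's n-grams that also occur in the candidate,
    with counts clipped to the candidate's counts (ROUGE-N recall)."""
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    if not ref:
        return 0.0
    overlap = sum(min(cand[g], count) for g, count in ref.items())
    return overlap / sum(ref.values())
```

Because it is recall-oriented (normalized by the reference length), this score rewards a summary for covering the reference's content, whereas BLEU's modified n-gram precision is normalized by the candidate, which is one reason the two metrics behave differently on summaries.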
C o n c l u s i o n s . Combining classical and modern methods of automatic text summarization allows for achieving higher quality results. It is important to consider the specificity of the text and the requirements for the final outcome, adapting the selected approaches and metrics according to the task.
K e y w o r d s : automatic summarization, extractive methods, abstractive methods, GPT, BART, ROUGE, BLEU, TextRank, PageRank, TF-IDF.
Published
2024-12-20
How to Cite
Oleksii KUZNIETSOV, Gennadiy KYSELOV, “USING AND ANALYSIS OF FORMAL METHODS FOR EVALUATING THE RELEVANCE OF AUTOMATICALLY GENERATED SUMMARIES OF INFORMATIONAL TEXTS,” Advanced Information Technology, vol. 1(3), pp. 32–48, 2024.
Issue
Advanced Information Technology № 1 (3), 2024
Section
Applied information systems and technology
References
Callison-Burch, C., Osborne, M., & Koehn, P. (2006). Re-evaluating the role of BLEU in machine translation research. In 11th Conference of the European Chapter of the Association for Computational Linguistics, Trento, Italy (pp. 249–256). Association for Computational Linguistics. https://aclanthology.org/E06-1032/
Christen, P., Hand, D. J., & Kirielle, N. (2024). A review of the F-measure: Its history, properties, criticism, and alternatives. ACM Computing Surveys, 56(3), 1–24. https://doi.org/10.1145/3606367
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota (pp. 4171–4186). Association for Computational Linguistics. https://doi.org/10.18653/v1/N19-1423
Hosna, A., Merry, E., Gyalmo, J., Alom, Z., Aung, Z., & Azim, M. A. (2022). Transfer learning: A friendly introduction. Journal of Big Data, 9(102).
Kuznetsov, O., Kyselyov, G. (2022). Methods of Text Recognition and Keyword Search for Automatic Text Summarization. System Sciences and Informatics: Proceedings of the 1st Scientific and Practical Conference “System Sciences and Informatics”, November 22–29, 2022 (pp. 331–335). National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute” [in Ukrainian]. https://ai.kpi.ua/ua/document2022.pdf
Kuznietsov, O., & Kyselov, G. (2024). An overview of current issues in automatic text summarization of natural language using artificial intelligence methods. Technology Audit and Production Reserves, 4(78), 12–19. https://journals.uran.ua/tarp/article/view/309472
Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., Stoyanov, V., & Zettlemoyer, L. (2020). BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online (pp. 7871–7880). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.acl-main.703
Lin, C.-Y. (2004). ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, Barcelona, Spain (pp. 74–81). Association for Computational Linguistics. https://aclanthology.org/W04-1013.pdf
Lin, C.-Y., & Hovy, E. H. (2000). The automated acquisition of topic signatures for text summarization. In Proceedings of COLING-00, Saarbrücken, Germany (pp. 495–501). International Committee on Computational Linguistics.
Mihalcea, R., & Tarau, P. (2004). TextRank: Bringing order into texts. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP 2004), Barcelona, Spain. Association for Computational Linguistics. https://web.eecs.umich.edu/~mihalcea/papers/mihalcea.emnlp04.pdf
Moratanch, N., & Chitrakala, S. (2016). A survey on abstractive text summarization. In 2016 International Conference on Circuit, Power and Computing Technologies (ICCPCT), Nagercoil, India (pp. 1–7). Institute of Electrical and Electronics Engineers. https://ieeexplore.ieee.org/document/7530193/
OpenAI. (2022). ChatGPT. https://openai.com/chatgpt/
Papineni, K., Roukos, S., Ward, T., & Zhu, W.-J. (2002). BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (pp. 311–318). Association for Computational Linguistics. http://aclweb.org/anthology/P/P02/P02-1040.pdf
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30. Curran Associates, Inc. https://arxiv.org/abs/1706.03762