Holle, Khadijah Fahmi Hayati (ORCID: https://orcid.org/0000-0002-6991-1748), Munna, Daurin Nabilatul and Ekaputri, Enggarani Wahyu (2025) Performance Evaluation of Transformer Models: Scratch, Bart, and Bert for News Document Summarization. Jurnal Teknik Informatika (JUTIF), 6 (2). pp. 787-802. ISSN 2723-3871
Abstract
This study evaluates the performance of three Transformer models on the task of news document summarization: a Transformer built from scratch, BART (Bidirectional and Auto-Regressive Transformers), and BERT (Bidirectional Encoder Representations from Transformers). The evaluation results show that BERT excels at capturing the bidirectional context of text, with a ROUGE-1 score of 0.2471, ROUGE-2 of 0.1597, and ROUGE-L of 0.1597. BART demonstrates strong de-noising ability and produces coherent summaries, with a ROUGE-1 score of 0.5239, ROUGE-2 of 0.3517, and ROUGE-L of 0.3683. The Transformer trained from scratch, despite requiring large amounts of training data and computational resources, achieves the best performance when trained optimally, with a ROUGE-1 score of 0.7021, ROUGE-2 of 0.5652, and ROUGE-L of 0.6383. This evaluation provides insight into the strengths and weaknesses of each model in the context of news document summarization.
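For readers who want to reproduce this style of evaluation, the sketch below shows how ROUGE-1, ROUGE-2, and ROUGE-L scores of the kind reported in the abstract can be computed with Google's open-source `rouge-score` package. The paper does not state which tooling it used, so the library choice and the example sentences here are assumptions for illustration, not the authors' setup.

```python
# Minimal sketch of a ROUGE evaluation (pip install rouge-score).
# The reference and candidate texts below are hypothetical placeholders,
# not data from the paper.
from rouge_score import rouge_scorer

reference = "the central bank raised interest rates to curb rising inflation"
candidate = "the bank raised rates to fight inflation"

# ROUGE-1 and ROUGE-2 measure unigram and bigram overlap with the
# reference summary; ROUGE-L is based on the longest common subsequence,
# so it rewards in-order matches.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)
scores = scorer.score(reference, candidate)

for metric, result in scores.items():
    print(f"{metric}: precision={result.precision:.4f} "
          f"recall={result.recall:.4f} f1={result.fmeasure:.4f}")
```

The F1 component (`fmeasure`) is the value most commonly reported as "the ROUGE score" in summarization papers, averaged over all reference/candidate pairs in the test set.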
| Item Type: | Journal Article |
|---|---|
| Keywords: | BART, BERT, Document Summarization, NLP, ROUGE, Transformer. |
| Subjects: | 08 INFORMATION AND COMPUTING SCIENCES > 0801 Artificial Intelligence and Image Processing > 080107 Natural Language Processing |
| Divisions: | Faculty of Technology > Department of Informatics Engineering |
| Depositing User: | Khadijah Fahmi Hayati Holle |
| Date Deposited: | 05 Dec 2025 08:10 |