Open Access Article
Efficient Transformer-Based Abstractive Urdu Text Summarization Through Selective Attention Pruning
1 Department of Applied Data Science, Hong Kong Shue Yan University, Hong Kong SAR, China
2 Department of Computer Science, University of Sahiwal, Sahiwal 57000, Pakistan
3 College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin 150001, China
4 Faculty of Data Science and Information Technology, INTI International University, Nilai 71800, Negeri Sembilan, Malaysia
* Author to whom correspondence should be addressed.
Information 2025, 16(11), 991; https://doi.org/10.3390/info16110991
Submission received: 30 September 2025 / Revised: 5 November 2025 / Accepted: 10 November 2025 / Published: 16 November 2025
Abstract
In today’s data-driven world, automatic text summarization is essential for extracting insights from large data volumes. While extractive summarization is well-studied, abstractive summarization remains limited, especially for low-resource languages like Urdu. This study introduces process innovation through transformer-based models—Efficient-BART (EBART), Efficient-T5 (ET5), and Efficient-GPT-2 (EGPT-2)—optimized for Urdu abstractive summarization. Innovations include strategically removing inefficient attention heads to reduce computational complexity and improve accuracy. Theoretically, this pruning preserves structural integrity by retaining heads that capture diverse linguistic features, while eliminating redundant ones. Adapted from BART, T5, and GPT-2, these optimized models significantly outperform their originals in ROUGE evaluations, demonstrating the effectiveness of process innovation and optimization for Urdu natural language processing.
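The selective head pruning described above can be sketched with the Hugging Face Transformers library, whose pretrained models expose a prune_heads() utility. The example below is not the authors' implementation: the checkpoint, the pruned layer/head indices, and the sample input are illustrative assumptions only, standing in for heads that a head-importance analysis would flag as inefficient.

```python
# Minimal sketch of selective attention-head pruning with Hugging Face
# Transformers (not the authors' code). The checkpoint, the pruned
# layer/head indices, and the sample input are illustrative assumptions.
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# Hypothetical set of "inefficient" heads, e.g. chosen via a
# head-importance score: {layer_index: [head_indices_to_remove]}.
heads_to_prune = {0: [2, 7], 5: [0, 3, 11], 10: [6]}

# prune_heads() deletes the corresponding slices of the attention
# projection matrices, shrinking the per-layer computation.
model.prune_heads(heads_to_prune)

# The pruned model would then be fine-tuned on an Urdu summarization
# corpus; generation works exactly as with the unpruned model.
inputs = tokenizer("A sample source text to summarize.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

In the paper's setting, the analogous operation would be applied to BART, T5, and GPT-2 checkpoints adapted for Urdu, with the retained heads chosen so that diverse linguistic features are preserved while redundant heads are removed.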
Share and Cite
MDPI and ACS Style
Azhar, M.; Amjad, A.; Farid, G.; Dewi, D.A.; Batumalay, M. Efficient Transformer-Based Abstractive Urdu Text Summarization Through Selective Attention Pruning. Information 2025, 16, 991. https://doi.org/10.3390/info16110991

AMA Style
Azhar M, Amjad A, Farid G, Dewi DA, Batumalay M. Efficient Transformer-Based Abstractive Urdu Text Summarization Through Selective Attention Pruning. Information. 2025; 16(11):991. https://doi.org/10.3390/info16110991

Chicago/Turabian Style
Azhar, Muhammad, Adeen Amjad, Ghulam Farid, Deshinta Arrova Dewi, and Malathy Batumalay. 2025. "Efficient Transformer-Based Abstractive Urdu Text Summarization Through Selective Attention Pruning." Information 16, no. 11: 991. https://doi.org/10.3390/info16110991

APA Style
Azhar, M., Amjad, A., Farid, G., Dewi, D. A., & Batumalay, M. (2025). Efficient Transformer-Based Abstractive Urdu Text Summarization Through Selective Attention Pruning. Information, 16(11), 991. https://doi.org/10.3390/info16110991
Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.