- 4.2 Impact Factor
- 7.5 CiteScore
- 17 days Time to First Decision
Computers, Volume 13, Issue 4
April 2024 - 25 articles
Cover Story: Transformers have emerged as a major deep-learning architecture. Their adoption has spread to a wide population thanks to popular, user-friendly interfaces, and their use has extended from the original NLP domain to images and other forms of data. However, their user-friendliness has not translated into an equal degree of transparency: transformers retain the opacity typically associated with deep-learning architectures. Efforts are nevertheless underway to add explainability to transformers. This paper reviews the current status of those efforts and the observed trends, describing the major explainability techniques through a taxonomy based on the component of the architecture that is exploited to explain the transformer's results.
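As a minimal illustration of one family of techniques the taxonomy covers, attention-based explanation inspects the attention weights a model assigns between tokens. The sketch below is not from the paper; it assumes PyTorch, and the layer, input, and dimensions are hypothetical stand-ins for a real transformer.

```python
# Minimal sketch: attention weights as a crude explanation signal.
# Assumes PyTorch; model, tokens, and sizes are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)

embed_dim, num_heads, seq_len = 16, 2, 5
attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

# Hypothetical token embeddings for a 5-token input sequence.
x = torch.randn(1, seq_len, embed_dim)

# need_weights=True returns attention weights averaged over heads,
# shaped (batch, target_len, source_len).
_, attn_weights = attn(x, x, x, need_weights=True)

# Treat the attention each token receives, averaged over query
# positions, as a rough importance score for that token.
importance = attn_weights[0].mean(dim=0)
for i, score in enumerate(importance.tolist()):
    print(f"token {i}: attention-based importance = {score:.3f}")
```

Whether raw attention weights constitute a faithful explanation is itself debated in the explainability literature; the sketch only shows the mechanics of extracting them.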
- Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
- You may sign up for email alerts to receive the table of contents of newly released issues.
- PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.

