Raising the Bar on Acceptability Judgments Classification: An Experiment on ItaCoLA Using ELECTRA
Abstract
1. Introduction
2. Related Work
3. Materials and Methods
3.1. Dataset
3.2. Models
3.2.1. BERT
3.2.2. ELECTRA
4. Results and Discussion
4.1. Quantitative Analysis
4.2. Qualitative Analysis
5. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
Label | Sentence |
---|---|
0 | Maria andava nella sua l’inverno passato città. (Maria went to her winter past city) |
1 | Max vuole sposare Andrea (Max wants to marry Andrea) |
0 | Il racconto ti hanno colpito. (The story have impressed you) |
1 | Il racconto ti ha colpito. (The story has impressed you) |
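The rows above follow ItaCoLA's binary labeling scheme (0 = unacceptable, 1 = acceptable). As a minimal sketch, such data can be parsed from a tab-separated file into (label, sentence) pairs for classification; the column names used here are illustrative and may not match the corpus's actual header.

```python
import csv
import io

# Hypothetical ItaCoLA-style TSV excerpt; the real file's layout may differ.
RAW = (
    "label\tsentence\n"
    "0\tMaria andava nella sua l'inverno passato città.\n"
    "1\tMax vuole sposare Andrea\n"
    "0\tIl racconto ti hanno colpito.\n"
    "1\tIl racconto ti ha colpito.\n"
)

def load_pairs(text):
    """Parse a TSV string into a list of (label, sentence) tuples."""
    reader = csv.DictReader(io.StringIO(text), delimiter="\t")
    return [(int(row["label"]), row["sentence"]) for row in reader]

pairs = load_pairs(RAW)
```

In practice these pairs would then be tokenized and fed to a fine-tuned sequence classifier; the parsing step itself is model-agnostic.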
Model | Accuracy | MCC |
---|---|---|
LSTM | | |
BERT | | |
ELECTRA | | |
Phenomenon | Sentences | Description | Example |
---|---|---|---|
Simple | 365 | One-verb sentences composed of only mandatory arguments. | “Marco ha baciato Alice” (En. Marco kissed Alice.) |
Cleft constructions | 136 | Sentences in which a constituent is displaced from its typical position to give it emphasis. | “È Clara che Anna ha visto uscire” (En. It is Clara whom Anna saw leaving.) |
Subject–verb agreement | 406 | Sentences lacking the agreement in gender or number between subject and verb. | “Maurizio sostiene che Lucia ha parlato di lui a casa con la moglie” (En. Maurizio claims that Lucia talked about him at home with his wife.) |
Indefinite pronouns | 312 | Sentences with one or more indefinite pronouns referring to someone or something. | “Spero in qualcosa che arriverà” (En. I am hoping for something to come.) |
Copular constructions | 855 | Sentences in which the subject is connected to a noun or an adjective with a copulative verb. | “Cicerone era un grande oratore” (En. Cicero was a great speaker.) |
Auxiliary | 398 | Sentences containing the verb “essere” (to be) or “avere” (to have). | “Stavamo correndo nel pomeriggio” (En. We were running in the afternoon.) |
Bind | 27 | Sentences in which anaphoric elements are grammatically associated with their antecedents. | “Cesare adula se stesso” (En. Caesar flatters himself.) |
Wh-islands violations | 53 | Sentences that open with a Wh- element. | “Che opera lirica avevi suggerito di andare a vedere stasera?” (En. What opera did you suggest we see tonight?) |
Questions | 177 | Interrogative sentences. | “È tua quella bicicletta rossa?” (En. Is that red bicycle yours?) |
Phenomenon | ELECTRA (MCC/ACC) | BERT (MCC/ACC) |
---|---|---|
Cleft construction | 0.53/0.82 | 0.48/0.80 |
Copular construction | 0.56/0.88 | 0.36/0.88 |
Subject–verb agreement | 0.54/0.88 | 0.41/0.86 |
Wh-islands violations | 0.50/0.83 | 0.46/0.81
Simple | 0.54/0.89 | 0.35/0.86 |
Question | 0.50/0.86 | 0.37/0.86 |
Auxiliary | 0.47/0.85 | 0.30/0.82 |
Bind | 0.43/0.70 | 0.18/0.55 |
Indefinite pronouns | 0.51/0.87 | 0.28/0.83 |
Total | 0.54/0.87 | 0.37/0.84 |
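The table reports the Matthews Correlation Coefficient (MCC) alongside accuracy; MCC is the standard metric for acceptability classification because it remains informative under the class imbalance typical of CoLA-style corpora. A minimal pure-Python sketch of how MCC is computed for binary labels:

```python
import math

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Conventionally reported as 0 when any marginal count is zero.
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

# Perfect agreement yields 1.0; chance-level agreement yields 0.0.
assert mcc([1, 1, 0, 0], [1, 1, 0, 0]) == 1.0
```

MCC ranges from −1 (total disagreement) through 0 (chance) to +1 (perfect prediction), which is why the per-phenomenon scores above (e.g., 0.54 for ELECTRA overall) are harder to inflate than raw accuracy.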
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Guarasci, R.; Minutolo, A.; Buonaiuto, G.; De Pietro, G.; Esposito, M. Raising the Bar on Acceptability Judgments Classification: An Experiment on ItaCoLA Using ELECTRA. Electronics 2024, 13, 2500. https://doi.org/10.3390/electronics13132500