Search Results (114)

Search Parameters:
Keywords = multiple dictionaries

28 pages, 3308 KB  
Article
Structural Discourse Markers in German Palliative Care Interactions
by Aaron Schmidt-Riese
Languages 2025, 10(8), 195; https://doi.org/10.3390/languages10080195 - 18 Aug 2025
Viewed by 549
Abstract
The aim of this study is to provide a systematic account of structural discourse markers operating at a conversational macro-level in German Palliative Care interactions, focusing on their frequency, distribution, co-occurrence, and speaker-group-specific usage. By combining qualitative approaches from conversation analysis and interactional linguistics with quantitative methods from corpus linguistics, discourse markers are analyzed together as a functional category from multiple analytical perspectives to enhance the overall understanding of the use of discourse markers. The analysis reveals a functional distribution across different transition points in conversation: Whereas the German so most frequently appears in openings and transitions to non-verbal activities, gut and okay predominate in topic shifts and conversation closings. However, gut and okay differ in their composition of discursive functions, although discourse structuring emerges as the second most frequent function in both cases, an observation that stands in contrast to the continued neglect of this function in standard dictionary entries. The institutional asymmetries inherent in medical interactions are reflected in the finding that both doctors and caregivers use significantly more structural discourse markers than patients and their relatives. Differences between physicians’ and caregivers’ use of discourse markers can be attributed to their different professional roles and communicative responsibilities. Full article
(This article belongs to the Special Issue Current Trends in Discourse Marker Research)

9 pages, 224 KB  
Article
Cardiovascular Toxicities in Chimeric Antigen Receptor Therapy in Relapsed and Refractory Multiple Myeloma and Lymphoma Using FAERS Database
by Fathima Shehnaz Ayoobkhan, Suryakumar Balasubramanian, Arindam Bagga and Tarun Parvataneni
Lymphatics 2025, 3(3), 16; https://doi.org/10.3390/lymphatics3030016 - 20 Jun 2025
Viewed by 596
Abstract
Introduction: In the past decade, chimeric antigen receptor T-cell therapy (CAR-T) has revolutionized the treatment of relapsed refractory multiple myeloma (RRMM) and lymphoma, but it is associated with significant cardiovascular adverse effects. We aim to analyze the incidence, patterns, and outcomes of cardiac events in RRMM and lymphoma patients undergoing CAR-T therapy utilizing the FDA Adverse Event Reporting System (FAERS) database, paving the way for future research and being more vigilant in treating high-risk populations. Methods: We conducted a retrospective post-marketing pharmacovigilance inquiry using the FDA Adverse Event Reporting System (FAERS) database and the Medical Dictionary for Regulatory Activities (MedDRA). We examined the adverse effects associated with CAR-T and TCE since their FDA approval in US and non-US populations (accessed 5 January 2024), and we analyzed the incidence of cardiac events related to six CAR-T products: Idecabtagene vicleucel, Ciltacabtagene autoleucel, Axicabtagene ciloleucel, Tisagenlecleucel, Lisocabtagene maraleucel, and Brexucabtagene autoleucel since FDA approval. Cardiotoxicities were assessed, including coronary artery disease (CAD), myocardial infarction (MI), arrhythmia, heart failure, and hypotension. Results: Out of 12,949 adverse events, we identified 675 (5.2%) cardiac events irrespective of severity. Almost 440 (65%) cardiac events were associated with cytokine release syndrome (CRS). The most common cardiotoxic event was atrial fibrillation (122), followed by the development of heart failure (113), ventricular arrhythmia (108), hypotension (87), and bradyarrhythmia (41). The mortality rate was highest among Brexucabtagene autoleucel recipients (n = 26, 2.3%), followed by Tisagenlecleucel (n = 71, 2.1%) and Lisocabtagene maraleucel (n = 10, 2.1%). Conclusions: CAR-T therapy can result in fatal adverse events due to its cardiotoxic properties. Timely monitoring, such as screening echocardiography and electrocardiograms, can help identify the at-risk population and allow for early intervention—particularly in patients with high baseline cardiovascular risk or previous exposure to cardiotoxic agents—thereby improving outcomes by enabling risk stratification and supportive management. Full article
(This article belongs to the Collection Lymphomas)
20 pages, 4751 KB  
Article
Recovery and Characterization of Tissue Properties from Magnetic Resonance Fingerprinting with Exchange
by Naren Nallapareddy and Soumya Ray
J. Imaging 2025, 11(5), 169; https://doi.org/10.3390/jimaging11050169 - 20 May 2025
Viewed by 592
Abstract
Magnetic resonance fingerprinting (MRF), a quantitative MRI technique, enables the acquisition of multiple tissue properties in a single scan. In this paper, we study a proposed extension of MRF, MRF with exchange (MRF-X), which can enable acquisition of the six tissue properties T1a, T2a, T1b, T2b, ρ and τ simultaneously. In MRF-X, ‘a’ and ‘b’ refer to distinct compartments modeled in each voxel, while ρ is the fractional volume of component ‘a’, and τ is the exchange rate of protons between the two components. To assess the feasibility of recovering these properties, we first empirically characterize a similarity metric between MRF and MRF-X reconstructed tissue property values and known reference property values for candidate signals. Our characterization indicates that such a recovery is possible, although the similarity metric surface across the candidate tissue properties is less structured for MRF-X than for MRF. We then investigate the application of different optimization techniques to recover tissue properties from noisy MRF and MRF-X data. Previous work has widely utilized template dictionary-based approaches in the context of MRF; however, such approaches are infeasible with MRF-X. Our results show that Simplicial Homology Global Optimization (SHGO), a global optimization algorithm, and the Limited-memory Broyden–Fletcher–Goldfarb–Shanno algorithm with Bounds (L-BFGS-B), a local optimization algorithm, performed comparably with direct matching in two-tissue property MRF at an SNR of 5. These optimization methods also successfully recovered five tissue properties from MRF-X data. However, with the current pulse sequence and reconstruction approach, recovering all six tissue properties remains challenging for all the methods investigated. Full article
(This article belongs to the Section Medical Imaging)
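The dictionary-free recovery the abstract describes amounts to bounded nonlinear least squares over tissue properties. A minimal sketch of that idea with SciPy's L-BFGS-B, using a made-up two-parameter toy signal model in place of the actual MRF-X Bloch simulation (all names and values here are illustrative assumptions, not the authors' code):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy signal model standing in for the MRF-X simulation:
# s(t) = (1 - exp(-t/T1)) * exp(-t/T2). Only the optimization pattern
# mirrors the paper; the real forward model is far more involved.
t = np.linspace(0.05, 3.0, 60)

def signal(params):
    t1, t2 = params
    return (1 - np.exp(-t / t1)) * np.exp(-t / t2)

true = np.array([1.2, 0.3])
observed = signal(true)  # noiseless "measured" fingerprint

# L-BFGS-B: bounded local minimization of the signal mismatch, replacing
# exhaustive template-dictionary matching.
res = minimize(lambda p: np.sum((signal(p) - observed) ** 2),
               x0=[1.0, 0.5], method="L-BFGS-B",
               bounds=[(0.01, 5.0), (0.01, 5.0)])
print(res.x)  # recovered (T1, T2) estimates
```

With noisy data one would minimize the same objective against the noisy fingerprint, which is where the relative robustness of SHGO versus L-BFGS-B becomes relevant.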

28 pages, 1764 KB  
Article
A Generative Model Approach for LiDAR-Based Classification and Ego Vehicle Localization Using Dynamic Bayesian Networks
by Muhammad Adnan, Pamela Zontone, David Martín Gómez, Lucio Marcenaro and Carlo Regazzoni
Appl. Sci. 2025, 15(9), 5181; https://doi.org/10.3390/app15095181 - 7 May 2025
Viewed by 917
Abstract
Our work presents a robust framework for classifying static and dynamic tracks and localizing an ego vehicle in dynamic environments using LiDAR data. Our methodology leverages generative models, specifically Dynamic Bayesian Networks (DBNs), interaction dictionaries, and a Markov Jump Particle Filter (MJPF), to accurately classify objects within LiDAR point clouds and localize the ego vehicle without relying on external odometry data during testing. The classification phase effectively distinguishes between static and dynamic objects with high accuracy, achieving an F1 score of 91%. The localization phase utilizes a combined dictionary approach, integrating multiple static landmarks to improve robustness, particularly during simultaneous multi-track observations and no-observation intervals. Experimental results validate the efficacy of our proposed approach in enhancing localization accuracy and maintaining consistency in diverse scenarios. Full article

19 pages, 5990 KB  
Article
FPCAM: A Weighted Dictionary-Driven Model for Single-Cell Annotation in Pulmonary Fibrosis
by Guojun Liu, Yan Shi, Hongxu Huang, Ningkun Xiao, Chuncheng Liu, Hongyu Zhao, Yongqiang Xing and Lu Cai
Biology 2025, 14(5), 479; https://doi.org/10.3390/biology14050479 - 26 Apr 2025
Cited by 2 | Viewed by 900
Abstract
The groundbreaking development of scRNA-seq has significantly improved cellular resolution. However, accurate cell-type annotation remains a major challenge. Existing annotation tools are often limited by their reliance on reference datasets, the heterogeneity of marker genes, and subjective biases introduced through manual intervention, all of which impact annotation accuracy and reliability. To address these limitations, we developed FPCAM, a fully automated pulmonary fibrosis cell-type annotation model. Built on the R Shiny platform, FPCAM utilizes a matrix of up-regulated marker genes and a manually curated gene–cell association dictionary specific to pulmonary fibrosis. It achieves accurate and efficient cell-type annotation through similarity matrix construction and optimized matching algorithms. To evaluate its performance, we compared FPCAM with state-of-the-art annotation models, including SCSA, SingleR, and SciBet. The results showed that FPCAM and SCSA both achieved an accuracy of 89.7%, outperforming SingleR and SciBet. Furthermore, FPCAM demonstrated high accuracy in annotating the external validation dataset GSE135893, successfully identifying multiple cell subtypes. In summary, FPCAM provides an efficient, flexible, and accurate solution for cell-type identification and serves as a powerful tool for scRNA-seq research in pulmonary fibrosis and other related diseases. Full article
(This article belongs to the Special Issue Computational Discovery Tools in Genomics and Precision Medicine)
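The core matching step described for FPCAM can be pictured as scoring each cluster's up-regulated markers against a weighted gene-to-cell-type dictionary. A minimal sketch of that scoring logic (not the authors' R Shiny implementation; the genes, cell types, and weights below are illustrative assumptions):

```python
# Hand-made gene -> {cell type: weight} dictionary; a real one would be
# curated for pulmonary fibrosis, as in the paper.
marker_dict = {
    "SFTPC":  {"AT2": 1.0},
    "AGER":   {"AT1": 1.0},
    "COL1A1": {"Fibroblast": 1.0},
    "ACTA2":  {"Fibroblast": 0.5, "Smooth muscle": 1.0},
}

def annotate(up_genes):
    """Sum dictionary weights over a cluster's up-regulated markers and
    return the best-scoring cell type."""
    scores = {}
    for g in up_genes:
        for cell_type, w in marker_dict.get(g, {}).items():
            scores[cell_type] = scores.get(cell_type, 0.0) + w
    return max(scores, key=scores.get) if scores else "Unknown"

print(annotate(["COL1A1", "ACTA2"]))  # Fibroblast (score 1.5 vs 1.0)
```

The weighting lets ambiguous markers (here ACTA2) contribute without dominating, which is the intuition behind a weighted rather than binary marker dictionary.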

14 pages, 2548 KB  
Article
Lossless Compression with Trie-Based Shared Dictionary for Omics Data in Edge–Cloud Frameworks
by Rani Adam, Daniel R. Catchpoole, Simeon J. Simoff, Zhonglin Qu, Paul J. Kennedy and Quang Vinh Nguyen
J. Sens. Actuator Netw. 2025, 14(2), 41; https://doi.org/10.3390/jsan14020041 - 9 Apr 2025
Viewed by 1374
Abstract
The growing complexity and volume of genomic and omics data present critical challenges for storage, transfer, and analysis in edge–cloud platforms. Existing compression techniques often involve trade-offs between efficiency and speed, requiring innovative approaches that ensure scalability and cost-effectiveness. This paper introduces a lossless compression method that integrates Trie-based shared dictionaries within an edge–cloud architecture. It presents a software-centric scientific research process of the design and evaluation of the proposed compression method. By enabling localized preprocessing at the edge, our approach reduces data redundancy before cloud transmission, thereby optimizing both storage and network efficiency. A global shared dictionary is constructed using N-gram analysis to identify and prioritize repeated sequences across multiple files. A lightweight index derived from this dictionary is then pushed to edge nodes, where Trie-based sequence replacement is applied to eliminate redundancy locally. The preprocessed data are subsequently transmitted to the cloud, where advanced compression algorithms, such as Zstd, GZIP, Snappy, and LZ4, further compress them. Evaluation on real patient omics datasets from B-cell Acute Lymphoblastic Leukemia (B-ALL) and Chronic Lymphocytic Leukemia (CLL) demonstrates that edge preprocessing significantly improves compression ratios, reduces upload times, and enhances scalability in hybrid cloud frameworks. Full article
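The pipeline the abstract outlines (shared-dictionary construction from frequent N-grams, edge-side substitution, then general-purpose compression in the cloud) can be sketched end to end. This is a toy illustration under stated assumptions, not the paper's trie-based implementation: zlib stands in for the Zstd/GZIP stage, a single dictionary entry replaces the pushed index, and the token byte is arbitrary (safe here because the synthetic data contains only letters):

```python
import zlib
from collections import Counter

# Synthetic stand-ins for omics files; real inputs would be FASTA/VCF-like.
files = [b"ACGTACGTACGT" * 50, b"TTACGTACGTGG" * 50]
n = 6  # N-gram length

# "Cloud" step: count N-grams across all files to build the shared dictionary.
counts = Counter()
for data in files:
    for i in range(len(data) - n + 1):
        counts[data[i:i + n]] += 1

# Keep just the most frequent N-gram as the shared dictionary entry; the
# paper keeps many, organized in a trie for fast longest-match lookup.
(seq, _), = counts.most_common(1)
token = b"\x01"

# "Edge" step: replace the shared sequence locally before transmission.
preprocessed = [data.replace(seq, token) for data in files]

# Final compression stage (zlib here; Zstd/GZIP/Snappy/LZ4 in the paper).
raw_size = sum(len(zlib.compress(d)) for d in files)
edge_size = sum(len(zlib.compress(d)) for d in preprocessed)
print(raw_size, edge_size)
```

A production version needs an escape mechanism for token bytes that occur naturally in the data, and a trie so that many dictionary entries can be matched in one pass.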

16 pages, 2643 KB  
Article
The Geometry of Concepts: Sparse Autoencoder Feature Structure
by Yuxiao Li, Eric J. Michaud, David D. Baek, Joshua Engels, Xiaoqing Sun and Max Tegmark
Entropy 2025, 27(4), 344; https://doi.org/10.3390/e27040344 - 27 Mar 2025
Cited by 15 | Viewed by 4714
Abstract
Sparse autoencoders have recently produced dictionaries of high-dimensional vectors corresponding to the universe of concepts represented by large language models. We find that this concept universe has interesting structure at three levels: (1) The “atomic” small-scale structure contains “crystals” whose faces are parallelograms or trapezoids, generalizing well-known examples such as (man:woman::king:queen). We find that the quality of such parallelograms and associated function vectors improves greatly when projecting out global distractor directions such as word length, which is efficiently performed with linear discriminant analysis. (2) The “brain” intermediate-scale structure has significant spatial modularity; for example, math and code features form a “lobe” akin to functional lobes seen in neural fMRI images. We quantify the spatial locality of these lobes with multiple metrics and find that clusters of co-occurring features, at coarse enough scale, also cluster together spatially far more than one would expect if feature geometry were random. (3) The “galaxy”-scale large-scale structure of the feature point cloud is not isotropic, but instead has a power law of eigenvalues with steepest slope in middle layers. We also quantify how the clustering entropy depends on the layer. Full article
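The claim that projecting out a global distractor direction sharpens analogy parallelograms can be demonstrated on synthetic vectors. A minimal sketch (illustrative construction, not the paper's SAE features; here the distractor direction is known and projected out directly, whereas the paper estimates it with linear discriminant analysis):

```python
import numpy as np

# Build a man:woman::king:queen parallelogram corrupted by a shared
# "distractor" direction (e.g. word length) with different magnitudes.
rng = np.random.default_rng(0)
offset = rng.normal(size=8)              # the shared concept offset
distractor = rng.normal(size=8)
distractor /= np.linalg.norm(distractor)

man, king = rng.normal(size=8), rng.normal(size=8)
woman = man + offset + 2.0 * distractor  # distractor contaminates each pair
queen = king + offset - 1.5 * distractor # with a different magnitude

def project_out(x, d):
    # Remove the component of x along unit direction d.
    return x - (x @ d) * d

# Parallelogram quality: how close the two pair-offsets are.
gap = np.linalg.norm((woman - man) - (queen - king))
cleaned = [project_out(v, distractor) for v in (man, woman, king, queen)]
gap_clean = np.linalg.norm((cleaned[1] - cleaned[0]) - (cleaned[3] - cleaned[2]))
print(gap, gap_clean)  # the cleaned gap collapses to ~0
```

By construction the residual offsets agree exactly once the distractor component is removed, which is the geometric effect the paper reports for real SAE features.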

17 pages, 2787 KB  
Article
Improved Variational Bayes for Space-Time Adaptive Processing
by Kun Li, Jinyang Luo, Peng Li, Guisheng Liao, Zhixiang Huang and Lixia Yang
Entropy 2025, 27(3), 242; https://doi.org/10.3390/e27030242 - 26 Feb 2025
Viewed by 810
Abstract
To tackle the challenge of enhancing moving target detection performance in environments characterized by small sample sizes and non-uniformity, methods rooted in sparse signal reconstruction have been incorporated into Space-Time Adaptive Processing (STAP) algorithms. Given the prominent sparse nature of clutter spectra in the angle-Doppler domain, adopting sparse recovery algorithms has proven to be a feasible approach for accurately estimating high-resolution spatio-temporal two-dimensional clutter spectra. Sparse Bayesian Learning (SBL) is a pivotal tool in sparse signal reconstruction and has been previously utilized, yet it has demonstrated limited success in enhancing sparsity, resulting in insufficient robustness in local fitting. To significantly improve sparsity, this paper introduces a hierarchical Bayesian prior framework and derives iterative parameter update formulas through variational inference techniques. However, this algorithm encounters significant computational hurdles during the parameter update process. To overcome this obstacle, the paper proposes an enhanced Variational Bayesian Inference (VBI) method that leverages prior information on the rank of the temporal clutter covariance matrix to refine the parameter update formulas, thereby significantly reducing computational complexity. Furthermore, this method fully exploits the joint sparsity of the Multiple Measurement Vector (MMV) model to achieve greater sparsity without compromising accuracy, and employs a first-order Taylor expansion to eliminate grid mismatch in the dictionary. The research presented in this paper enhances the moving target detection capabilities of STAP algorithms in complex environments and provides new perspectives and methodologies for the application of sparse signal reconstruction in related fields. Full article
(This article belongs to the Section Signal and Data Analysis)

22 pages, 4759 KB  
Article
An Improved Nonnegative Matrix Factorization Algorithm Combined with K-Means for Audio Noise Reduction
by Yan Liu, Haozhen Zhu, Yongtuo Cui, Xiaoyu Yu, Haibin Wu and Aili Wang
Electronics 2024, 13(20), 4132; https://doi.org/10.3390/electronics13204132 - 21 Oct 2024
Cited by 1 | Viewed by 1589
Abstract
Clustering algorithms have the characteristics of being simple and efficient and can complete calculations without a large number of datasets, making them suitable for application in noise reduction processing for audio module mass production testing. In order to solve the problems of the NMF algorithm easily getting stuck in local optimal solutions and difficult feature signal extraction, an improved NMF audio denoising algorithm combined with K-means initialization was designed. Firstly, the Euclidean distance formula of K-means has been improved to extract audio signal features from multiple dimensions. Combined with the initialization strategy of K-means decomposition, the initialization dictionary matrix of the NMF algorithm has been optimized to avoid getting stuck in local optimal solutions and effectively improve the robustness of the algorithm. Secondly, in the sparse coding part of the NMF algorithm, feature extraction expressions are added to solve the problem of noise residue and partial spectral signal loss in audio signals during the operation process. At the same time, the size of the coefficient matrix is limited to reduce operation time and improve the accuracy of feature extraction in high-precision audio signals. Then, comparative experiments were conducted using the NOIZEUS and NOISEX-92 datasets, as well as random noise audio signals. This algorithm improved the signal-to-noise ratio by 10–20 dB and reduced harmonic distortion by approximately −10 dB. Finally, a high-precision audio acquisition unit based on FPGA was designed, and practical applications have shown that it can effectively improve the signal-to-noise ratio of audio signals and reduce harmonic distortion. Full article

20 pages, 9500 KB  
Article
Image Captioning Based on Semantic Scenes
by Fengzhi Zhao, Zhezhou Yu, Tao Wang and Yi Lv
Entropy 2024, 26(10), 876; https://doi.org/10.3390/e26100876 - 18 Oct 2024
Cited by 2 | Viewed by 3320
Abstract
With the development of artificial intelligence and deep learning technologies, image captioning has become an important research direction at the intersection of computer vision and natural language processing. The purpose of image captioning is to generate corresponding natural language descriptions by understanding the content of images. This technology has broad application prospects in fields such as image retrieval, autonomous driving, and visual question answering. Currently, many researchers have proposed region-based image captioning methods. These methods generate captions by extracting features from different regions of an image. However, they often rely on local features of the image and overlook the understanding of the overall scene, leading to captions that lack coherence and accuracy when dealing with complex scenes. Additionally, image captioning methods are unable to extract complete semantic information from visual data, which may lead to captions with biases and deficiencies. Due to these reasons, existing methods struggle to generate comprehensive and accurate captions. To fill this gap, we propose the Semantic Scenes Encoder (SSE) for image captioning. It first extracts a scene graph from the image and integrates it into the encoding of the image information. Then, it extracts a semantic graph from the captions and preserves semantic information through a learnable attention mechanism, which we refer to as the dictionary. During the generation of captions, it combines the encoded information of the image and the learned semantic information to generate complete and accurate captions. To verify the effectiveness of the SSE, we tested the model on the MSCOCO dataset. The experimental results show that the SSE improves the overall quality of the captions. The improvement in scores across multiple evaluation metrics further demonstrates that the SSE possesses significant advantages when processing identical images. Full article
(This article belongs to the Collection Entropy in Image Analysis)

34 pages, 4479 KB  
Article
Development of a Children’s Educational Dictionary for a Low-Resource Language Using AI Tools
by Diana Rakhimova, Aidana Karibayeva, Vladislav Karyukin, Assem Turarbek, Zhansaya Duisenbekkyzy and Rashid Aliyev
Computers 2024, 13(10), 253; https://doi.org/10.3390/computers13100253 - 2 Oct 2024
Cited by 7 | Viewed by 3854
Abstract
Today, various interactive tools or partially available artificial intelligence applications are actively used in educational processes to solve multiple problems for resource-rich languages, such as English, Spanish, French, etc. Unfortunately, the situation is different and more complex for low-resource languages, like Kazakh, Uzbek, Mongolian, and others, due to the lack of qualitative and accessible resources, morphological complexity, and the semantics of agglutinative languages. This article presents research on early childhood learning resources for the low-resource Kazakh language. Generally, a dictionary for children differs from classical educational dictionaries. The difference between dictionaries for children and adults lies in their purpose and methods of presenting information. A themed dictionary will make learning and remembering new words easier for children because they will be presented in a specific context. This article discusses developing an approach to creating a thematic children’s dictionary of the low-resource Kazakh language using artificial intelligence. The proposed approach is based on several important stages: the initial formation of a list of English words with the use of ChatGPT; identification of their semantic weights; generation of phrases and sentences with the use of the list of semantically related words; translation of obtained phrases and sentences from English to Kazakh, dividing them into bigrams and trigrams; and processing with Kazakh language POS pattern tag templates to adapt them for children. When the dictionary was formed, the semantic proximity of words and phrases to the given theme and age restrictions for children were taken into account. The formed dictionary phrases were evaluated using the cosine similarity, Euclidean similarity, and Manhattan distance metrics. Moreover, the dictionary was extended with visual and audio data by implementing models like DALL-E 3, Midjourney, and Stable Diffusion to illustrate the dictionary data and TTS (Text to Speech) technology for the Kazakh language for voice synthesis. The developed thematic dictionary approach was tested, and a SUS (System Usability Scale) assessment of the application was conducted. The experimental results demonstrate the proposed approach’s high efficiency and its potential for wide use in educational purposes. Full article
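The three evaluation measures named in the abstract are standard vector comparisons. A minimal sketch on made-up two-dimensional embedding vectors (the real system would score phrase embeddings against a theme vector; these values are illustrative only):

```python
import math

def cosine(a, b):
    # Cosine similarity: angle-based, ignores vector magnitude.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def euclidean(a, b):
    # Euclidean (L2) distance between the two vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    # Manhattan (L1) distance: sum of coordinate-wise gaps.
    return sum(abs(x - y) for x, y in zip(a, b))

theme, phrase = [1.0, 0.0], [0.6, 0.8]
print(cosine(theme, phrase), euclidean(theme, phrase), manhattan(theme, phrase))
```

Using all three gives complementary views: cosine ranks by direction (topical agreement), while the two distances also penalize magnitude differences.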
(This article belongs to the Special Issue Smart Learning Environments)

20 pages, 24086 KB  
Article
Clustering Hyperspectral Imagery via Sparse Representation Features of the Generalized Orthogonal Matching Pursuit
by Wenqi Guo, Xu Xu, Xiaoqiang Xu, Shichen Gao and Zibu Wu
Remote Sens. 2024, 16(17), 3230; https://doi.org/10.3390/rs16173230 - 31 Aug 2024
Cited by 1 | Viewed by 1512
Abstract
This study focused on improving the clustering performance of hyperspectral imaging (HSI) by employing the Generalized Orthogonal Matching Pursuit (GOMP) algorithm for feature extraction. Hyperspectral remote sensing imaging technology, which is crucial in various fields like environmental monitoring and agriculture, faces challenges due to its high dimensionality and complexity. Supervised learning methods require extensive data and computational resources, while clustering, an unsupervised method, offers a more efficient alternative. This research presents a novel approach using GOMP to enhance clustering performance in HSI. The GOMP algorithm iteratively selects multiple dictionary elements for sparse representation, which makes it well-suited for handling complex HSI data. The proposed method was tested on two publicly available HSI datasets and evaluated in comparison with other methods to demonstrate its effectiveness in enhancing clustering performance. Full article
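The pipeline described (generalized OMP selecting multiple dictionary atoms per iteration to produce sparse codes, which are then clustered) can be sketched compactly. This is a simplified illustration on synthetic data under stated assumptions, not the paper's method or a real hyperspectral scene; the dictionary, iteration count, and atoms-per-iteration value are arbitrary:

```python
import numpy as np
from sklearn.cluster import KMeans

def gomp(D, x, n_iter=3, L=2):
    """Generalized OMP: pick the L most correlated unused atoms per
    iteration, then refit coefficients by least squares on the support."""
    residual, support = x.copy(), []
    for _ in range(n_iter):
        corr = np.abs(D.T @ residual)
        corr[support] = -np.inf                   # never reselect an atom
        support += list(np.argsort(corr)[-L:])    # L best new atoms
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

rng = np.random.default_rng(0)
D = rng.normal(size=(50, 120))
D /= np.linalg.norm(D, axis=0)                    # unit-norm dictionary
pixels = rng.normal(size=(30, 50))                # 30 "pixels", 50 bands

# Sparse codes become the features fed to an ordinary clustering algorithm.
codes = np.stack([gomp(D, p) for p in pixels])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(codes)
print(codes.shape, labels.shape)
```

Selecting L > 1 atoms per iteration is what distinguishes GOMP from plain OMP and reduces the number of least-squares refits needed for a given sparsity level.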

30 pages, 5833 KB  
Review
A Survey: Security Vulnerabilities and Protective Strategies for Graphical Passwords
by Zena Mohammad Saadi, Ahmed T. Sadiq, Omar Z. Akif and Alaa K. Farhan
Electronics 2024, 13(15), 3042; https://doi.org/10.3390/electronics13153042 - 1 Aug 2024
Cited by 7 | Viewed by 4115
Abstract
As technology advances and develops, the need for strong and simple authentication mechanisms that can help protect data intensifies. The contemporary approach to giving access control is through graphical passwords comprising images, patterns, or graphical items. The objective of this review was to determine the documented security risks that are related to the use of graphical passwords, together with the measures that have been taken to prevent them. The review was intended to present an extensive literature review of the subject matter on graphical password protection and to point toward potential future research directions. Many attacks, such as shoulder surfing attacks, SQL injection attacks, and spyware attacks, can easily exploit the graphical password scheme, which is one of the most widely used. To counter these security threats, several measures have been suggested, but none of the security attacks can be completely overcome. Each of the proposed measures has its pros and cons. This study begins by elucidating some of the graphical password schemes studied between 2012 and 2023, delving into potential threats and defense mechanisms associated with these schemes. Following a thorough identification and selection process, five of the reviewed papers explain the threat of shoulder surfing and spyware attacks on graphical password schemes, while two explain the threat of brute force attacks. One paper focuses on dictionary attacks, while four other papers address social engineering, SQL injection attacks, and guessing attacks as potential threats to graphical password schemes. In addition, the papers recognize other forms of attacks, such as video recording attacks, filtering attacks, reverse engineering attacks, multiple observation attacks, key/mouse logger attacks, insider attacks, computer vision attacks, image gallery attacks, sonar attacks, replay attacks, data interception attacks, and histogram manipulation attacks. These attacks are examined in three, three, eight, one, four, one, one, one, one, one, one, and one papers, respectively. Moreover, out of all such countermeasures, most of them are based on three categories—randomization, obfuscation, and password space complexity—which are the most commonly employed strategies for improving graphical password schemes. Full article
(This article belongs to the Special Issue AI in Cybersecurity, 2nd Edition)

20 pages, 1291 KB  
Article
Reversal of the Word Sense Disambiguation Task Using a Deep Learning Model
by Algirdas Laukaitis
Appl. Sci. 2024, 14(13), 5550; https://doi.org/10.3390/app14135550 - 26 Jun 2024
Viewed by 2593
Abstract
Word sense disambiguation (WSD) remains a persistent challenge in the natural language processing (NLP) community. While various NLP packages exist, the Lesk algorithm in the NLTK library demonstrates suboptimal accuracy. In this research article, we propose an innovative methodology and an open-source framework that effectively addresses the challenges of WSD by optimizing memory usage without compromising accuracy. Our system seamlessly integrates WSD into NLP tasks, offering functionality similar to that provided by the NLTK library. However, we go beyond the existing approaches by introducing a novel idea related to WSD. Specifically, we leverage deep neural networks and consider the language patterns learned by these models as the new gold standard. This approach suggests modifying existing semantic dictionaries, such as WordNet, to align with these patterns. Empirical validation through a series of experiments confirmed the effectiveness of our proposed method, achieving state-of-the-art performance across multiple WSD datasets. Notably, our system does not require the installation of additional software beyond the well-known Python libraries. The classification model is saved in a readily usable text format, and the entire framework (model and data) is publicly available on GitHub for the NLP research community. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
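The Lesk baseline the abstract measures against can be sketched in a few lines (a simplified gloss-overlap version with made-up sense labels and glosses, not NLTK's actual implementation): the chosen sense is the one whose dictionary gloss shares the most words with the sentence context.

```python
def simplified_lesk(context_words, senses):
    """Pick the sense whose gloss overlaps most with the context.

    `senses` maps a sense label to its gloss text. This mirrors the
    overlap heuristic of the classic Lesk algorithm; real systems
    also remove stopwords and extend glosses with related senses.
    """
    context = {w.lower() for w in context_words}
    best_label, best_overlap = None, -1
    for label, gloss in senses.items():
        overlap = len(context & set(gloss.lower().split()))
        if overlap > best_overlap:
            best_label, best_overlap = label, overlap
    return best_label

senses = {
    "bank.n.01": "sloping land beside a body of water such as a river",
    "bank.n.02": "a financial institution that accepts deposits and lends money",
}
print(simplified_lesk("I deposited my money at the bank".split(), senses))
# bank.n.02
```

The accuracy ceiling of such overlap counting against full WordNet glosses is exactly the gap the paper's deep learning approach targets.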

33 pages, 531 KB  
Article
The Limits of Words: Expanding a Word-Based Emotion Analysis System with Multiple Emotion Dictionaries and the Automatic Extraction of Emotive Expressions
by Lu Wang, Sho Isomura, Michal Ptaszynski, Pawel Dybala, Yuki Urabe, Rafal Rzepka and Fumito Masui
Appl. Sci. 2024, 14(11), 4439; https://doi.org/10.3390/app14114439 - 23 May 2024
Cited by 2 | Viewed by 2363
Abstract
Wide adoption of social media has caused an explosion of information stored online, with the majority of that information containing subjective, opinionated, and emotional content produced daily by users. The field of emotion analysis has helped effectively process such human emotional expressions expressed in daily social media posts. Unfortunately, one of the greatest limitations of popular word-based emotion analysis systems has been the limited emotion vocabulary. This paper presents an attempt to extensively expand one such word-based emotion analysis system by integrating multiple emotion dictionaries and implementing an automatic extraction mechanism for emotive expressions. We first leverage diverse emotive expression dictionaries to expand the emotion lexicon of the system. To do that, we solve numerous problems with the integration of various dictionaries collected using different standards. We demonstrate the performance improvement of the system with improved accuracy and granularity of emotion classification. Furthermore, our automatic extraction mechanism facilitates the identification of novel emotive expressions in an emotion dataset, thereby enriching the depth and breadth of emotion analysis capabilities. In particular, the automatic extraction method shows promising results for applicability in further expansion of the dictionary base in the future, thus advancing the field of emotion analysis and offering new avenues for research in sentiment analysis, affective computing, and human–computer interaction. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
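The dictionary-integration step described above can be illustrated with a toy sketch (the entries and the label mapping are invented for illustration; the paper's actual dictionaries and label inventories differ): expressions from sources that use different emotion labels are normalized onto one shared label set and merged into a single lexicon.

```python
# Two toy emotion dictionaries that label the same emotion differently.
DICT_A = {"delighted": "joy", "furious": "anger"}
DICT_B = {"cheerful": "happiness", "overjoyed": "happiness", "terrified": "fear"}

# Assumed mapping from each source's labels onto one shared inventory;
# building this mapping is where conflicting annotation standards
# must be reconciled.
LABEL_MAP = {"joy": "joy", "happiness": "joy", "anger": "anger", "fear": "fear"}

def merge_lexicons(*dicts):
    """Merge several {expression: label} dictionaries into one
    {expression: set_of_shared_labels} lexicon, normalizing labels first."""
    merged = {}
    for d in dicts:
        for expr, label in d.items():
            merged.setdefault(expr, set()).add(LABEL_MAP[label])
    return merged

lexicon = merge_lexicons(DICT_A, DICT_B)
print(lexicon["cheerful"])  # {'joy'}
```

Storing a set of labels per expression also accommodates genuinely ambiguous expressions that different sources annotate with different emotions.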
