Big Data Cogn. Comput., Volume 6, Issue 3 (September 2022) – 18 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
Article
A Holistic Scalability Strategy for Time Series Databases Following Cascading Polyglot Persistence
Big Data Cogn. Comput. 2022, 6(3), 86; https://doi.org/10.3390/bdcc6030086 - 18 Aug 2022
Viewed by 7
Abstract
Time series databases aim to handle large amounts of data quickly, both when introducing new data to the system and when retrieving it later on. However, depending on the scenario in which these databases participate, reducing the number of requested resources becomes a further requirement. Following this goal, NagareDB and its Cascading Polyglot Persistence approach were born. They were intended not just to provide a fast time series solution, but also to strike a good cost-efficiency balance. However, although they provided outstanding results, they lacked a natural way of scaling out in a cluster fashion. Consequently, monolithic deployments could extract the maximum value from the solution, but distributed ones had to rely on general scalability approaches. In this research, we propose a holistic approach specially tailored for databases following Cascading Polyglot Persistence to further maximize their inherent resource-saving goals. The proposed approach reduced the cluster size by 33% in a setup with just three ingestion nodes, and by up to 50% in a setup with 10 ingestion nodes. Moreover, the evaluation shows that our scaling method provides efficient cluster growth, offering scalability speedups greater than 85% of a theoretically perfect (100%) scaling, while also ensuring data safety via data replication. Full article
(This article belongs to the Topic Electronic Communications, IOT and Big Data)
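As a hedged reading of the reported speedup figure (the paper's exact definition may differ), scaling efficiency is commonly computed as the observed speedup divided by the ideal linear speedup. A minimal Python sketch with made-up timings:

```python
# Hypothetical illustration: scaling efficiency relative to ideal linear scaling.
# The timings below are made up for demonstration; the paper's measurements differ.

def scaling_efficiency(t_single: float, t_cluster: float, nodes: int) -> float:
    """Observed speedup divided by the ideal speedup of `nodes`."""
    observed_speedup = t_single / t_cluster
    return observed_speedup / nodes

# Example: a workload taking 90 s on one node and 35 s on three nodes
print(scaling_efficiency(90.0, 35.0, 3))  # ~0.857 -> ~85.7% of perfect scaling
```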
Article
Combination of Deep Cross-Stage Partial Network and Spatial Pyramid Pooling for Automatic Hand Detection
Big Data Cogn. Comput. 2022, 6(3), 85; https://doi.org/10.3390/bdcc6030085 - 09 Aug 2022
Viewed by 268
Abstract
The human hand is involved in many computer vision tasks, such as hand posture estimation, hand movement identification, human activity analysis, and other similar tasks, for which hand detection is an important preprocessing step. It is still difficult to correctly recognize some hands in cluttered environments because of the complex appearance variations of agile human hands and their wide range of motion. In this study, we provide a brief assessment of CNN-based object detection algorithms, specifically Densenet Yolo V2, Densenet Yolo V2 CSP, Densenet Yolo V2 CSP SPP, Resnet 50 Yolo V2, Resnet 50 CSP, Resnet 50 CSP SPP, Yolo V4 SPP, Yolo V4 CSP SPP, and Yolo V5. The advantages of CSP and SPP are thoroughly examined and described in detail for each algorithm. Our experiments show that Yolo V4 CSP SPP achieves the highest precision among the evaluated models, and that the CSP and SPP layers help improve the testing accuracy of the CNN models. Our model leverages the advantages of both CSP and SPP. The proposed Yolo V4 CSP SPP outperformed previous research results by an average of 8.88%, with an improvement from 87.6% to 96.48%. Full article
(This article belongs to the Topic Machine and Deep Learning)
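The abstract reports precision gains; as a generic, hedged illustration of how detector predictions are scored (not the authors' evaluation code), the intersection-over-union (IoU) between a predicted and a ground-truth box can be computed as follows:

```python
# Generic IoU computation for axis-aligned boxes (x1, y1, x2, y2).
# Illustrative only; the paper's evaluation pipeline is not reproduced here.

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```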
Article
RSS-Based Wireless LAN Indoor Localization and Tracking Using Deep Architectures
Big Data Cogn. Comput. 2022, 6(3), 84; https://doi.org/10.3390/bdcc6030084 - 08 Aug 2022
Viewed by 311
Abstract
Wireless Local Area Network (WLAN) positioning is a challenging task indoors due to environmental constraints and the unpredictable behavior of signal propagation, even at a fixed location. The aim of this work is to develop deep learning-based approaches for indoor localization and tracking by utilizing Received Signal Strength (RSS). The study proposes Multi-Layer Perceptron (MLP), One- and Two-Dimensional Convolutional Neural Network (1D CNN and 2D CNN), and Long Short-Term Memory (LSTM) deep network architectures for WLAN indoor positioning, based on data obtained from actual RSS measurements on an existing WLAN infrastructure in a mobile user scenario. Results for these deep architectures are presented alongside existing WLAN algorithms, with the Root Mean Square Error (RMSE) used as the assessment criterion. The proposed LSTM Model 2 achieved a dynamic positioning RMSE of 1.73 m, which outperforms probabilistic WLAN algorithms such as Memoryless Positioning (RMSE: 10.35 m) and the Nonparametric Information (NI) filter with variable acceleration (RMSE: 5.2 m) in the same experimental environment. Full article
(This article belongs to the Topic Machine and Deep Learning)
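Since RMSE over positions is the assessment criterion, a minimal sketch of that metric, assuming one (x, y) position pair per time step (illustrative values, not the paper's trajectories):

```python
import numpy as np

# Minimal RMSE over 2D position estimates, assuming one (x, y) pair per step.
true_xy = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 3.0]])
pred_xy = np.array([[0.5, 0.0], [1.0, 2.0], [2.5, 3.0]])

rmse = np.sqrt(np.mean(np.sum((true_xy - pred_xy) ** 2, axis=1)))
print(f"positioning RMSE: {rmse:.2f} m")
```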
Article
Impactful Digital Twin in the Healthcare Revolution
Big Data Cogn. Comput. 2022, 6(3), 83; https://doi.org/10.3390/bdcc6030083 - 08 Aug 2022
Viewed by 245
Abstract
Over the last few decades, our digitally expanding world has experienced another significant digitalization boost because of the COVID-19 pandemic. Digital transformations are changing every aspect of this world, and new technological innovations are springing up continuously, attracting increasing attention and investment. Digital twin, one of the highest-trending technologies of recent years, is now joining forces with the healthcare sector, which has been under the spotlight since the outbreak of COVID-19. This paper sets out to promote a better understanding of digital twin technology, clarify some common misconceptions, and review the current trajectory of digital twin applications in healthcare. Furthermore, the functionalities of the digital twin in different life stages are summarized in the context of a digital twin model in healthcare. Following the Internet of Things as a service concept and the digital twinning as a service model supporting Industry 4.0, we propose a paradigm of digital twinning everything as a healthcare service, and we clarify the different groups of physical entities for clear reference in a digital twin architecture for healthcare. This research discusses the value of digital twin technology in healthcare, as well as current challenges and insights for future research. Full article
Article
Multi-State Synchronization of Chaotic Systems with Distributed Fractional Order Derivatives and Its Application in Secure Communications
Big Data Cogn. Comput. 2022, 6(3), 82; https://doi.org/10.3390/bdcc6030082 - 27 Jul 2022
Viewed by 292
Abstract
This study investigates multiple synchronizations of distributed fractional-order chaotic systems with unknown parameters, disturbances, and time delays. A robust adaptive control method is designed for multistage distributed fractional-order chaotic systems in which the system parameters change step by step. Using a Lyapunov function, adaptive rules are designed to estimate the parameters while convergence of the synchronization error to zero is guaranteed. Then, a secure communication scheme is proposed using a new chaotic masking method. Finally, simulations are performed on a distributed fractional-order Duffing chaotic system. The results show the high efficiency of the proposed synchronization scheme using robust adaptive control, despite parametric uncertainties, external disturbances, and variable, unknown time delays. Simulations were then performed on sinusoidal message signals in a secure-communication application; the results show that the proposed masking scheme, combined with synchronization, successfully encodes and decodes information. Full article
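As a hedged sketch of the standard master-slave setup behind such schemes (the paper's exact fractional-order system and adaptive gains are not reproduced here), the synchronization error and a typical quadratic Lyapunov argument read:

```latex
% Generic master-slave synchronization setup (illustrative, not the paper's exact system)
e(t) = y(t) - x(t), \qquad
V(e) = \tfrac{1}{2}\, e^{\top} e, \qquad
\dot{V}(e) = e^{\top} \dot{e} \le -k\, e^{\top} e \;\; (k > 0)
\;\Longrightarrow\; e(t) \to 0 .
```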
Article
An Evaluation of Key Adoption Factors towards Using the Fog Technology
Big Data Cogn. Comput. 2022, 6(3), 81; https://doi.org/10.3390/bdcc6030081 - 26 Jul 2022
Viewed by 352
Abstract
Fog technology is one of the recent improvements in cloud technology that is designed to reduce some of its drawbacks. Fog technology architecture is often widely distributed to minimize the time required for data processing and enable Internet of Things (IoT) innovations. The purpose of this paper is to evaluate the main factors that might influence the adoption of fog technology. This paper offers a combined framework that addresses fog technology adoption from the technology adoption perspective, which has been comprehensively researched in the information systems discipline. The proposed integrated framework combines the technology acceptance model (TAM) and diffusion of innovation (DOI) theory to develop a holistic perspective on the adoption of fog technology. The factors that might affect the adoption of fog technology were analyzed from the results of an online survey of 43 different organizations across a wide range of industries. These factors were assessed using data collected from 216 participants, including professional IT staff and senior business executives, and the analysis was conducted using structural equation modeling (SEM). The research results identified nine factors with a statistically significant impact on the adoption of fog technology: relative advantage, compatibility, awareness, cost-effectiveness, security, infrastructure, ease of use, usefulness, and location. The findings offer insight to organizations looking to implement fog technology to enable IoT and tap into the digital transformation opportunities presented by the new digital economy. Full article
Article
How Does AR Technology Adoption and Involvement Behavior Affect Overseas Residents’ Life Satisfaction?
Big Data Cogn. Comput. 2022, 6(3), 80; https://doi.org/10.3390/bdcc6030080 - 25 Jul 2022
Viewed by 345
Abstract
This study aims to better understand foreign residents' life satisfaction by exploring residents' AR technology adoption behavior (a combination of transportation applications' usefulness and ease of use) and travel involvement. Data were collected from 400 randomly selected respondents through a questionnaire-based survey, and SPSS and AMOS were used for the analysis. Overall life satisfaction is operationalized as the dependent variable measuring a traveler's sense of satisfaction, while traveler involvement and AR adoption of the necessary transportation apps are constructed as independent variables. The proposed model explores the impact of travel satisfaction on overall life satisfaction, focusing on the role of traveling involvement as the first variable in that relationship. AR technology adoption behavior here refers to people using travel apps before and during travel to fulfill travel needs, obtain details about locations, and make proper arrangements, among other facilities. Transportation apps and traveler involvement both played significant roles in the development of travel satisfaction and overall life satisfaction, with both variables having a positive effect on travel satisfaction and life satisfaction. The results also revealed that AR mobile travel applications, together with traveler involvement, can help improve overseas residents' travel satisfaction, and that travel satisfaction in turn strengthens feelings of satisfaction with life in South Korea. Full article
(This article belongs to the Special Issue Virtual Reality, Augmented Reality, and Human-Computer Interaction)
Article
Real-Time End-to-End Speech Emotion Recognition with Cross-Domain Adaptation
Big Data Cogn. Comput. 2022, 6(3), 79; https://doi.org/10.3390/bdcc6030079 - 15 Jul 2022
Viewed by 462
Abstract
Language resources are the main factor in speech-emotion-recognition (SER)-based deep learning models. Thai is a low-resource language with a smaller data size than high-resource languages such as German. This paper describes a framework that uses a pretrained-model-based front-end and back-end network to adapt feature spaces from the speech recognition domain to the speech emotion classification domain. It consists of two parts: a speech recognition front-end network and a speech emotion recognition back-end network. For speech recognition, Wav2Vec2 is the state-of-the-art for high-resource languages, while XLSR is used for low-resource languages. Both offer generalized end-to-end learning for speech understanding, with their feature encoders providing feature-space representations learned in the speech recognition domain; this is one reason we selected Wav2Vec2 and XLSR as the pretrained models for our front-end network. The pretrained Wav2Vec2 and XLSR models are used as front-end networks and fine-tuned for specific languages using the Common Voice 7.0 dataset. The feature vectors of the front-end network are then input to the back-end networks, which include convolution time reduction (CTR) and linear mean encoding transformation (LMET). Experiments on two different datasets show that our proposed framework outperforms the baselines in terms of unweighted and weighted accuracies. Full article
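A minimal sketch of the front-end/back-end split, assuming a public Wav2Vec2 checkpoint and an illustrative four-class linear head (not the authors' CTR/LMET back-end):

```python
# Hedged sketch: extracting Wav2Vec2 features as a front-end for an emotion classifier.
# The checkpoint name and head dimensions are illustrative assumptions.
import torch
from transformers import Wav2Vec2Model, Wav2Vec2FeatureExtractor

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
frontend = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")

waveform = torch.randn(16000)  # 1 s of stand-in 16 kHz audio
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden = frontend(**inputs).last_hidden_state  # (1, frames, 768)

# Back-end stand-in: mean-pool over time, then a linear head for 4 emotion classes
pooled = hidden.mean(dim=1)
logits = torch.nn.Linear(768, 4)(pooled)
print(logits.shape)  # torch.Size([1, 4])
```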
Article
Enhancing Marketing Provision through Increased Online Safety That Imbues Consumer Confidence: Coupling AI and ML with the AIDA Model
Big Data Cogn. Comput. 2022, 6(3), 78; https://doi.org/10.3390/bdcc6030078 - 12 Jul 2022
Viewed by 497
Abstract
To enhance the effectiveness of artificial intelligence (AI) and machine learning (ML) in online retail operations and avoid succumbing to digital myopia, marketers need to be aware of the different approaches to utilizing AI/ML in terms of the information they make available to appropriate groups of consumers. This can be viewed as utilizing AI/ML to improve the customer journey experience. Reflecting on this, the main question to be addressed is: how can retailers utilize big data through the implementation of AI/ML to improve the efficiency of their marketing operations so that customers feel safe buying online? To answer this question, we conducted a systematic literature review and posed several subquestions that resulted in insights into why marketers need to pay specific attention to AI/ML capability. We explain how different AI/ML tools/functionalities can be related to different stages of the AIDA (Awareness, Interest, Desire, and Action) model, which in turn helps retailers to recognize potential opportunities as well as increase consumer confidence. We outline how digital myopia can be reduced by focusing on human inputs. Although challenges still exist, it is clear that retailers need to identify the boundaries in terms of AI/ML’s ability to enhance the company’s business model. Full article
(This article belongs to the Special Issue Artificial Intelligence for Online Safety)
Article
We Know You Are Living in Bali: Location Prediction of Twitter Users Using BERT Language Model
Big Data Cogn. Comput. 2022, 6(3), 77; https://doi.org/10.3390/bdcc6030077 - 07 Jul 2022
Viewed by 545
Abstract
Twitter user location data provide essential information that can be used for various purposes. However, user location is not easy to identify because many profiles omit this information, or users enter data that do not correspond to their actual locations. Several related works have attempted to predict location from English-language tweets. In this study, we attempted to predict the location of Indonesian tweets. We utilized machine learning approaches, i.e., long short-term memory (LSTM) and bidirectional encoder representations from transformers (BERT), to infer Twitter users' home locations using the display name in the profile, the user description, and the user's tweets. By concatenating the display name, description, and aggregated tweets, the model achieved its best accuracy of 0.77. The IndoBERT model outperformed several baseline models. Full article
(This article belongs to the Topic Machine and Deep Learning)
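A hedged sketch of the concatenate-then-classify idea; the checkpoint name and the number of location classes are assumptions, not the authors' exact configuration:

```python
# Hedged sketch of concatenating profile fields and tweets for location prediction.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

MODEL = "indobenchmark/indobert-base-p1"  # a public IndoBERT checkpoint (assumption)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# num_labels is an assumption, e.g., one class per candidate home-location
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=34)

# Concatenate display name, description, and tweets, as the abstract describes
display_name, description = "Made in Bali", "Suka pantai dan kopi"
tweets = "Sunset di Kuta hari ini luar biasa"
text = " ".join([display_name, description, tweets])

inputs = tokenizer(text, truncation=True, max_length=256, return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1)
print(pred)  # index of the predicted home-location class
```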
Article
Optimizing Operation Room Utilization—A Prediction Model
Big Data Cogn. Comput. 2022, 6(3), 76; https://doi.org/10.3390/bdcc6030076 - 06 Jul 2022
Cited by 1 | Viewed by 634
Abstract
Background: Operating rooms are the core of hospitals. They are a primary source of revenue and are often seen as one of the bottlenecks in the medical system. Many efforts are made to increase throughput, reduce costs, and maximize income, as well as to optimize clinical outcomes and patient satisfaction. We trained a predictive model of surgery length to improve the productivity and utilization of operating rooms in general hospitals. Methods: We collected clinical and administrative data from the last 10 years from two large general public hospitals in Israel. We trained a machine learning model to predict the expected length of surgery using pre-operative data, including diagnoses, laboratory tests, risk factors, demographics, procedures, anesthesia type, and the main surgeon's level of experience. We compared our model to a naïve model that represented current practice. Findings: Our prediction model achieved better performance than the naïve model and explained almost 70% of the variance in surgery durations. Interpretation: A machine learning-based model can be a useful approach to increasing operating room utilization. Among the most important factors were the type of procedure and the main surgeon's level of experience. The model enables harmonizing hospital productivity through wise scheduling and matching suitable teams to a variety of clinical procedures, for the benefit of the individual patient and the system as a whole. Full article
(This article belongs to the Special Issue Data Science in Health Care)
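The abstract does not name the exact algorithm used; a hedged sketch of a surgery-duration regressor on synthetic stand-in features, reporting explained variance (R²):

```python
# Hedged sketch of a surgery-duration regressor; features, model choice, and data
# are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Stand-in pre-operative features: procedure type, surgeon experience, patient age
X = np.column_stack([
    rng.integers(0, 20, n),   # procedure type
    rng.uniform(1, 30, n),    # main surgeon's years of experience
    rng.uniform(18, 90, n),   # patient age
])
# Synthetic duration (minutes) driven mostly by procedure type and experience
y = 30 + 8 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(0, 10, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print(f"explained variance (R^2): {r2_score(y_te, model.predict(X_te)):.2f}")
```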
Opinion
Environmental Justice and the Use of Artificial Intelligence in Urban Air Pollution Monitoring
Big Data Cogn. Comput. 2022, 6(3), 75; https://doi.org/10.3390/bdcc6030075 - 05 Jul 2022
Viewed by 541
Abstract
The main aims of urban air pollution monitoring are to optimize the interaction between humanity and nature, to combine and integrate environmental databases, and to develop sustainable approaches to the production and organization of the urban environment. One of the main applications of urban air pollution monitoring is exposure assessment for public health studies. Artificial intelligence (AI) and machine learning (ML) approaches can be used to build air pollution models that predict pollutant concentrations and assess environmental and health risks. Air pollution data can be fed into AI/ML models to estimate different exposure levels within different communities, and the correlation between exposure estimates and public health surveys is important for assessing health risks. These aspects are critical where environmental injustice is concerned. Computational approaches should efficiently manage, visualize, and integrate large datasets, and effective data integration and management are key to the successful application of computational intelligence approaches in ecology. In this paper, we consider some of these constraints and discuss possible ways to overcome current problems and environmental injustice. The most successful global approach is the development of the smart city; however, such an approach can also increase environmental injustice, as not all regions have access to AI/ML technologies. It is challenging to develop successful regional projects for the analysis of environmental data under the current complicated operating conditions, taking into account time, computing power, and other constraints in the context of environmental injustice. Full article
(This article belongs to the Special Issue Big Data and Internet of Things)
Article
Topological Data Analysis Helps to Improve Accuracy of Deep Learning Models for Fake News Detection Trained on Very Small Training Sets
Big Data Cogn. Comput. 2022, 6(3), 74; https://doi.org/10.3390/bdcc6030074 - 05 Jul 2022
Viewed by 483
Abstract
Topological data analysis has recently found applications in various areas of science, such as computer vision and the understanding of protein folding. However, applications of topological data analysis to natural language processing remain under-researched. This study applies topological data analysis to a particular natural language processing task: fake news detection. We found that deep learning models are more accurate at this task than topological data analysis alone. However, combining a deep learning model with topological data analysis in an ensemble significantly improves the model's accuracy when the available training set is very small. Full article
(This article belongs to the Topic Machine and Deep Learning)
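A hedged sketch of the ensemble idea, with hypothetical placeholder functions for the deep-model scores and the TDA features (real pipelines might use libraries such as giotto-tda or ripser):

```python
# Hedged sketch: combine deep-model scores with TDA-derived features in a
# logistic regression. Both feature functions are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

def tda_features(texts):
    """Hypothetical stand-in for persistent-homology features of text embeddings."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 8))

def deep_model_scores(texts):
    """Hypothetical stand-in for a fine-tuned deep classifier's fake-news scores."""
    rng = np.random.default_rng(1)
    return rng.uniform(size=(len(texts), 1))

texts = ["headline one", "headline two", "headline three", "headline four"]
labels = np.array([0, 1, 0, 1])

X = np.hstack([deep_model_scores(texts), tda_features(texts)])
ensemble = LogisticRegression().fit(X, labels)
print(ensemble.predict(X))
```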
Article
Digital Technologies and the Role of Data in Cultural Heritage: The Past, the Present, and the Future
Big Data Cogn. Comput. 2022, 6(3), 73; https://doi.org/10.3390/bdcc6030073 - 04 Jul 2022
Viewed by 661
Abstract
Is culture to be considered our past, our roots, ancient ruins, or an old piece of art? Culture is all the factors that define who we are, how we act and interact in our world, in our daily activities, in our personal and public relations, in our life. Culture is all the things we are not obliged to do. However, today we live in a mixed environment, a combination of the "offline" world and the online, digital one. In this mixed environment, it is technology that defines our behaviour, technology that unites people across a large world and that, finally, defines a status of "monoculture". In this article, we examine the role of technology, and especially big data, in relation to culture. We present the advances that led to paradigm shifts in the research area of cultural informatics and forecast the future of culture as it will be defined in this mixed world. Full article
(This article belongs to the Special Issue Big Data Analytics for Cultural Heritage)
Article
Lightweight AI Framework for Industry 4.0 Case Study: Water Meter Recognition
Big Data Cogn. Comput. 2022, 6(3), 72; https://doi.org/10.3390/bdcc6030072 - 01 Jul 2022
Cited by 1 | Viewed by 580
Abstract
The evolution of applications in telecommunications, networking, computing, and embedded systems has led to the emergence of the Internet of Things and Artificial Intelligence. The combination of these technologies has improved productivity by optimizing consumption and facilitating access to real-time information. This work focuses on the Industry 4.0 and Smart City paradigms and proposes a new approach to monitoring and tracking water consumption using OCR together with artificial intelligence algorithms, in particular the YOLO v4 machine learning model. The goal of this work is to provide optimized results in real time. The recognition rate obtained with the proposed algorithms is around 98%. Full article
(This article belongs to the Special Issue Advancements in Deep Learning and Deep Federated Learning Models)
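A hedged sketch of reading meter digits with a YOLO detector through OpenCV's DNN module; the file names, class list, and thresholds are assumptions, and non-maximum suppression is omitted for brevity:

```python
# Hedged sketch: digit detection on a meter image with a Darknet-format YOLO model.
# "digits_yolov4.cfg"/"digits_yolov4.weights" are hypothetical trained files.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("digits_yolov4.cfg", "digits_yolov4.weights")
layer_names = net.getUnconnectedOutLayersNames()

img = cv2.imread("water_meter.jpg")
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)

digits = []  # (x_center, class_id) pairs
for output in net.forward(layer_names):
    for det in output:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        if scores[class_id] > 0.5:
            digits.append((det[0], class_id))  # det[0] is the normalized x-center

# Read the detected digits left to right to reconstruct the meter value
digits.sort(key=lambda d: d[0])
print("reading:", "".join(str(c) for _, c in digits))
```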
Article
A Comprehensive Spark-Based Layer for Converting Relational Databases to NoSQL
Big Data Cogn. Comput. 2022, 6(3), 71; https://doi.org/10.3390/bdcc6030071 - 27 Jun 2022
Viewed by 707
Abstract
Currently, the continuous massive growth in the size, variety, and velocity of data is referred to as big data. Relational databases have a limited ability to work with big data, so Not-only-SQL (NoSQL) databases have been utilized to handle it: NoSQL represents data in diverse models and uses a variety of query languages, unlike traditional relational databases. Using NoSQL has therefore become essential, and many studies have attempted to propose layers for converting relational databases to NoSQL; however, most of them targeted only one or two NoSQL models and evaluated their layers on a single node rather than in a distributed environment. This study proposes a Spark-based layer for mapping relational databases to NoSQL models, focusing on the document, column, and key–value NoSQL databases. The proposed Spark-based layer comprises two parts. The first part converts relational databases to document, column, and key–value databases and encompasses two phases: a metadata analyzer of relational databases, and Spark-based transformation and migration. The second part focuses on executing structured query language (SQL) queries on the NoSQL databases. The suggested layer was applied and compared with Unity, which has similar components and features and supports sub-queries and join operations, in a single-node environment. The experimental results show that the proposed layer outperformed Unity in query execution time by a factor of three. In addition, the proposed layer was applied to multi-node clusters using different scenarios, and the results show that integrating the Spark cluster with NoSQL databases on multi-node clusters provided better read and write performance as the dataset size increased than a single node did. Full article
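A hedged sketch of the relational-to-NoSQL migration idea in PySpark; the connection strings, table names, and connector options are assumptions, and the layer's metadata-analysis phase is not shown:

```python
# Hedged sketch: read a relational table over JDBC and write it to a document store.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("rdb-to-nosql-sketch")
         .getOrCreate())

# 1) Read a relational table over JDBC (hypothetical MySQL source)
orders = (spark.read.format("jdbc")
          .option("url", "jdbc:mysql://localhost:3306/shop")
          .option("dbtable", "orders")
          .option("user", "reader").option("password", "secret")
          .load())

# 2) Reshape for a document model (here: rename the key column to _id)
docs = orders.selectExpr("order_id as _id", "customer_id", "total", "created_at")

# 3) Write to a document store (format name per the MongoDB Spark connector docs)
(docs.write.format("mongodb")
     .option("database", "shop").option("collection", "orders")
     .mode("overwrite").save())
```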
Article
DeepWings©: Automatic Wing Geometric Morphometrics Classification of Honey Bee (Apis mellifera) Subspecies Using Deep Learning for Detecting Landmarks
Big Data Cogn. Comput. 2022, 6(3), 70; https://doi.org/10.3390/bdcc6030070 - 27 Jun 2022
Viewed by 800
Abstract
Honey bee classification by wing geometric morphometrics entails, as a first step, the manual annotation of 19 landmarks at the forewing vein junctions. This is a time-consuming and error-prone endeavor, with implications for classification accuracy. Herein, we developed a software tool called DeepWings© that overcomes this constraint in wing geometric morphometrics classification by automatically detecting the 19 landmarks on digital images of the right forewing. We used a database containing 7634 forewing images, including 1864 analyzed by F. Ruttner in the original delineation of 26 honey bee subspecies, to tune a convolutional neural network as a wing detector, a deep learning U-Net as a landmark segmenter, and a support vector machine as a subspecies classifier. The implemented MobileNet wing detector achieved a mAP of 0.975, and the landmark segmenter detected the 19 landmarks with 91.8% accuracy and an average positional precision of 0.943 relative to manually annotated landmarks. The subspecies classifier, in turn, presented an average accuracy of 86.6% for 26 subspecies and 95.8% for a subset of five important subspecies. The final implementation of the system showed good speed, requiring only 14 s to process 10 images. DeepWings© is very user-friendly and is the first fully automated software, offered as a free Web service, for honey bee classification from wing geometric morphometrics. DeepWings© can be used for honey bee breeding, conservation, and scientific purposes, as it provides the coordinates of the landmarks in Excel format, facilitating the work of research teams using classical identification approaches and alternative analytical tools. Full article
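A hedged sketch of the pipeline's final stage, an SVM classifying subspecies from the flattened 19 (x, y) landmarks; the data here are synthetic placeholders, not wing measurements:

```python
# Hedged sketch: SVM subspecies classifier on flattened landmark coordinates.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_wings, n_landmarks, n_subspecies = 200, 19, 5

X = rng.normal(size=(n_wings, n_landmarks * 2))   # flattened (x, y) landmarks
y = rng.integers(0, n_subspecies, size=n_wings)   # subspecies labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict(X[:3]))  # predicted subspecies for the first three wings
```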
Article
Comparative Analysis of Backbone Networks for Deep Knee MRI Classification Models
Big Data Cogn. Comput. 2022, 6(3), 69; https://doi.org/10.3390/bdcc6030069 - 21 Jun 2022
Viewed by 619
Abstract
This paper focuses on different types of backbone networks for machine learning architectures that perform classification of knee Magnetic Resonance Imaging (MRI) images. It aims to compare different types of feature extraction networks for the same classification task in terms of accuracy and performance. Multiple variations of machine learning models were trained based on the MRNet architecture, with AlexNet, ResNet, VGG-11, VGG-16, and EfficientNet as the backbones. The models were evaluated on the MRNet validation dataset using the Area Under the Receiver Operating Characteristic Curve (ROC-AUC), accuracy, F1 score, and Cohen's Kappa as evaluation metrics. The MRNet-VGG16 model variant shows the best results for Anterior Cruciate Ligament (ACL) tear detection. For general abnormality detection, MRNet-VGG16 is dominated by MRNet-ResNet at confidence values between 0.5 and 0.75 and by MRNet-VGG11 at confidence values above 0.8. Because backbone performance is not uniform across MRI planes, it is advisable to use a logistic regression ensemble of: VGG16 on the coronal plane for all classification tasks and on the axial plane for abnormality and ACL tear detection; AlexNet on the sagittal plane for abnormality detection and on the axial plane for meniscal tear detection; and VGG11 on the sagittal plane for ACL tear detection. The results also indicate that Cohen's Kappa is a valuable metric for model evaluation on the MRNet dataset, as it provides deeper insight into classification decisions. Full article
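A minimal sketch of the evaluation metrics named in the abstract, computed with scikit-learn on synthetic predictions (not the MRNet validation data):

```python
# Hedged illustration of the paper's evaluation metrics on synthetic labels/scores.
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score, f1_score, cohen_kappa_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=100)                 # e.g., ACL tear present or not
y_prob = np.clip(y_true * 0.6 + rng.uniform(size=100) * 0.5, 0, 1)
y_pred = (y_prob >= 0.5).astype(int)

print("ROC-AUC:      ", roc_auc_score(y_true, y_prob))
print("accuracy:     ", accuracy_score(y_true, y_pred))
print("F1 score:     ", f1_score(y_true, y_pred))
print("Cohen's Kappa:", cohen_kappa_score(y_true, y_pred))
```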