Topic Editors

Prof. Dr. Chuan-Ming Liu
Department of Computer Science and Information Engineering, National Taipei University of Technology (Taipei Tech), Taipei 10608, Taiwan
Prof. Dr. Wei-Shinn Ku
Department of Computer Science and Software Engineering, Auburn University, Auburn, AL 36849, USA

Applied Computing and Machine Intelligence (ACMI)

Abstract submission deadline: closed (30 September 2023)
Manuscript submission deadline: closed (31 December 2023)
Viewed by: 46830

Topic Information

Dear Colleagues,

Applied computing addresses real-world problems through the application of computer technologies. It spans both theoretical and applied computer science, from hardware to software, and is a multidisciplinary field that integrates computer and information technology with a second discipline. Machine intelligence (artificial intelligence) is one of the most important and promising technologies for applied computing; it enables computers to learn from data using a variety of techniques across many application areas.

This Topic Issue (TI) focuses on the latest ideas, solutions, and developments in applied computing using machine intelligence. The topics of interest include all kinds of computing applications that use machine/artificial intelligence techniques, as well as theoretical contributions. Related areas of computation and technology include information management, systems, networking, programming, software engineering, mobile technology, graphic applications and visualization, data integration, security, and artificial intelligence. Because this area sits at the intersection of multiple disciplines, it has a wide range of applications, including finance, retail, education, healthcare, agriculture, navigation, lifestyle, manufacturing, etc.

Prof. Dr. Chuan-Ming Liu
Prof. Dr. Wei-Shinn Ku
Topic Editors

Keywords

  • applied computing
  • applied information technology
  • artificial intelligence
  • big data
  • bioinformatics
  • computational intelligence
  • cloud computing
  • data science
  • data management
  • data analytics
  • data mining
  • deep learning
  • health informatics
  • healthcare
  • e-learning
  • edge computing
  • electronic commerce
  • enterprise computing
  • image processing
  • Internet of Things (IoT)
  • information systems
  • law, social and behavioral sciences
  • machine learning
  • mobile computing
  • networking
  • neural networks
  • security
  • sensing technology
  • smart agriculture
  • smart campus
  • smart city
  • smart governance
  • smart transportation
  • software engineering
  • visualization
  • wireless networks

Participating Journals

  • AI (ai): Impact Factor -; CiteScore -; launched 2020; first decision (median) 20.8 days; APC CHF 1600
  • Applied Sciences (applsci): Impact Factor 2.7; CiteScore 4.5; launched 2011; first decision (median) 16.9 days; APC CHF 2400
  • Big Data and Cognitive Computing (BDCC): Impact Factor 3.7; CiteScore 4.9; launched 2017; first decision (median) 18.2 days; APC CHF 1800
  • Sensors (sensors): Impact Factor 3.9; CiteScore 6.8; launched 2001; first decision (median) 17 days; APC CHF 2600
  • Information (information): Impact Factor 3.1; CiteScore 5.8; launched 2010; first decision (median) 18 days; APC CHF 1600

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics cooperates with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of the following benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea from being stolen with this time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (24 papers)

30 pages, 163128 KiB  
Article
PCGen: A Fully Parallelizable Point Cloud Generative Model
by Nicolas Vercheval, Remco Royen, Adrian Munteanu and Aleksandra Pižurica
Sensors 2024, 24(5), 1414; https://doi.org/10.3390/s24051414 - 22 Feb 2024
Viewed by 627
Abstract
Generative models have the potential to revolutionize 3D extended reality. A primary obstacle is that augmented and virtual reality need real-time computing. Current state-of-the-art point cloud random generation methods are not fast enough for these applications. We introduce a vector-quantized variational autoencoder model (VQVAE) that can synthesize high-quality point clouds in milliseconds. Unlike previous work in VQVAEs, our model offers a compact sample representation suitable for conditional generation and data exploration with potential applications in rapid prototyping. We achieve this result by combining architectural improvements with an innovative approach for probabilistic random generation. First, we rethink current parallel point cloud autoencoder structures, and we propose several solutions to improve robustness, efficiency and reconstruction quality. Notable contributions in the decoder architecture include an innovative computation layer to process the shape semantic information, an attention mechanism that helps the model focus on different areas and a filter to cover possible sampling errors. Secondly, we introduce a parallel sampling strategy for VQVAE models consisting of a double encoding system, where a variational autoencoder learns how to generate the complex discrete distribution of the VQVAE, not only allowing quick inference but also describing the shape with a few global variables. We compare the proposed decoder and our VQVAE model with established and concurrent work, and we prove, one by one, the validity of the single contributions. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))
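As background for readers unfamiliar with vector quantization, the sketch below shows the core VQVAE codebook lookup with a straight-through gradient estimator. It is a generic PyTorch illustration with made-up tensor sizes, not the authors' PCGen implementation.

```python
import torch

def vector_quantize(latents, codebook):
    """Map each latent vector to its nearest codebook entry (straight-through VQ).

    latents:  (N, D) encoder outputs
    codebook: (K, D) learned embedding vectors
    Returns the quantized latents and the chosen codebook indices.
    """
    # Euclidean distance between every latent and every codebook entry
    dists = torch.cdist(latents, codebook)          # (N, K)
    indices = dists.argmin(dim=1)                   # nearest entry per latent
    quantized = codebook[indices]                   # (N, D)
    # Straight-through estimator: gradients flow to the encoder as if
    # quantization were the identity function.
    quantized = latents + (quantized - latents).detach()
    return quantized, indices

# Toy usage: 1024 latent vectors of dimension 64, codebook of 512 entries
latents = torch.randn(1024, 64)
codebook = torch.randn(512, 64)
q, idx = vector_quantize(latents, codebook)
```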

13 pages, 554 KiB  
Article
Network Traffic Characteristics and Analysis in Recent Mobile Games
by Daekyeong Moon
Appl. Sci. 2024, 14(4), 1397; https://doi.org/10.3390/app14041397 - 8 Feb 2024
Viewed by 741
Abstract
The landscape of mobile gaming has evolved significantly over the years, with profound changes in network reliability and traffic patterns. In the early 2010s, mobile games faced challenges due to unreliable networks and primarily featured asynchronous gameplay. However, in the current era, modern mobile games benefit from robust network connectivity, mirroring PC gaming experiences by relying on persistent connections to game servers. This shift prompted us to conduct an in-depth traffic analysis of two mobile games that represent opposite ends of the genre spectrum: a massively multiplayer game resembling PC MMORPGs with tightly synchronized gameplay, and a single-player puzzle game that incorporates asynchronous social interactions. Surprisingly, both games exhibited remarkably similar traffic footprints; small packets with short inter-packet arrival times, indicating their high expectations for network reliability. This suggests that game developers now prioritize network quality similarly to their PC gaming counterparts. Additionally, our analysis of packet lengths unveiled that recent mobile games predominantly employ short packets dominated by a few key packet types closely tied to player actions, which conforms to observations from PC online games. However, the self-similarity in traffic patterns, a notable feature in PC online games, only partially explains the traffic in mobile games, varying across genres. These findings shed light on the evolving traffic patterns in mobile games and emphasize the need for further research in this dynamic domain. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))
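The packet-level statistics discussed above (packet lengths and inter-packet arrival times) can be computed from a capture with a few lines of code. The sketch below uses a hypothetical toy trace of (timestamp, length) tuples, not the authors' game data.

```python
import statistics

def traffic_summary(packets):
    """Summarize packet lengths and inter-arrival times.

    `packets` is a list of (timestamp_seconds, length_bytes) tuples,
    e.g. extracted from a pcap with a tool such as tshark or scapy.
    """
    lengths = [length for _, length in packets]
    times = [ts for ts, _ in packets]
    gaps = [t2 - t1 for t1, t2 in zip(times, times[1:])]
    return {
        "mean_len_bytes": statistics.mean(lengths),
        "median_len_bytes": statistics.median(lengths),
        "mean_gap_ms": 1000 * statistics.mean(gaps),
        "p95_gap_ms": 1000 * sorted(gaps)[int(0.95 * (len(gaps) - 1))],
    }

# Toy trace: small packets arriving every ~30 ms
trace = [(0.000, 120), (0.031, 96), (0.060, 104), (0.092, 88), (0.121, 132)]
print(traffic_summary(trace))
```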

17 pages, 3238 KiB  
Article
An Algorithm for Finding Optimal k-Core in Attribute Networks
by Jing Liu and Yong Zhong
Appl. Sci. 2024, 14(3), 1256; https://doi.org/10.3390/app14031256 - 2 Feb 2024
Cited by 1 | Viewed by 456
Abstract
As a structural indicator of dense subgraphs, k-core has been widely used in community search due to its concise and efficient calculation. Many community search algorithms have been expanded on the basis of k-core. However, relevant algorithms often set k values based on empirical analysis of datasets or require users to input manually. Once users are not familiar with the graph network structure, they may miss the optimal solution due to an improper k setting. Especially in attribute social networks, characterizing communities with only k-cores may lead to a lack of semantic interpretability of communities. Consequently, this article proposes a method for identifying the optimal k-core with the greatest attribute score in the attribute social network as the target community. The difficulty of the problem is that the query needs to integrate both structural and textual indicators of the community while fully considering the diversity of attribute scoring functions. To effectively reduce computational costs, we incorporate the topological characteristics of the k-core and the attribute characteristics of entities to construct a hierarchical forest. It is worth noting that we name tree nodes in a way similar to pre-order traversal and can maintain the order of all tree nodes during the forest creation process. In such an attribute forest, it is possible to quickly locate the initial solution containing all query vertices and reuse intermediate results during the process of expanding queries. We conducted effectiveness and performance experiments on multiple real datasets. As the results show, attribute scoring functions are not monotonic, and the algorithm proposed in this paper can avoid scores falling into local optima. With the help of the attribute k-core forest, the actual query time of the Advanced algorithm has improved by two orders of magnitude compared to the BaseLine algorithm. In addition, the average F1 score of our target community has increased by 2.04 times and 26.57% compared to ACQ and SFEG, respectively. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))
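For readers unfamiliar with the k-core, the sketch below shows the standard peeling procedure that repeatedly removes vertices of degree below k. The paper's contribution (the attribute k-core forest and the search for the optimal k) builds on this primitive and is not reproduced here.

```python
from collections import deque

def k_core(adj, k):
    """Return the vertex set of the k-core of an undirected graph.

    adj: dict mapping each vertex to a set of neighbours.
    Iteratively removes vertices of degree < k until none remain.
    """
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    removed = set()
    queue = deque(v for v, d in degree.items() if d < k)
    while queue:
        v = queue.popleft()
        if v in removed:
            continue
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                degree[u] -= 1
                if degree[u] < k:
                    queue.append(u)
    return set(adj) - removed

# Toy graph: a triangle plus a pendant vertex; its 2-core is the triangle
g = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(k_core(g, 2))   # {'a', 'b', 'c'}
```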

17 pages, 8308 KiB  
Article
Weed Detection Method Based on Lightweight and Contextual Information Fusion
by Chi Zhang, Jincan Liu, Hongjun Li, Haodong Chen, Zhangxun Xu and Zhen Ou
Appl. Sci. 2023, 13(24), 13074; https://doi.org/10.3390/app132413074 - 7 Dec 2023
Cited by 1 | Viewed by 1035
Abstract
Weed detection technology is of paramount significance in achieving automation and intelligence in weed control. Nevertheless, it grapples with several formidable challenges, including imprecise small target detection, high computational demands, inadequate real-time performance, and susceptibility to environmental background interference. In response to these practical issues, we introduce CCCS-YOLO, a lightweight weed detection algorithm, built upon enhancements to the Yolov5s framework. In this study, the Faster_Block is integrated into the C3 module of the YOLOv5s neck network, creating the C3_Faster module. This modification not only streamlines the network but also significantly amplifies its detection capabilities. Subsequently, the context aggregation module is enhanced in the head by improving the convolution blocks, strengthening the network’s ability to distinguish between background and targets. Furthermore, the lightweight Content-Aware ReAssembly of Feature (CARAFE) module is employed to replace the upsampling module in the neck network, enhancing the performance of small target detection and promoting the fusion of contextual information. Finally, Soft-NMS-EIoU is utilized to replace the NMS and CIoU modules in YOLOv5s, enhancing the accuracy of target detection under dense conditions. Through detection on a publicly available sugar beet weed dataset and sesame weed datasets, the improved algorithm exhibits significant improvement in detection performance compared to YOLOv5s and demonstrates certain advancements over classical networks such as YOLOv7 and YOLOv8. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))
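Of the components listed above, Soft-NMS is the easiest to illustrate in isolation. The sketch below implements the common Gaussian Soft-NMS with plain IoU; it is a simplification of the Soft-NMS-EIoU variant used in the paper.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, boxes given as (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: decay the scores of overlapping boxes instead of dropping them."""
    boxes, scores = boxes.copy(), scores.copy()
    keep, idxs = [], list(range(len(boxes)))
    while idxs:
        best = max(idxs, key=lambda i: scores[i])
        keep.append(best)
        idxs.remove(best)
        if not idxs:
            break
        rest = np.array(idxs)
        overlaps = iou(boxes[best], boxes[rest])
        scores[rest] *= np.exp(-(overlaps ** 2) / sigma)   # Gaussian penalty
        idxs = [i for i in rest if scores[i] > score_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(soft_nms(boxes, scores))   # indices of kept boxes
```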

21 pages, 84693 KiB  
Article
Measuring and Predicting Sensor Performance for Camouflage Detection in Multispectral Imagery
by Tobias Hupel and Peter Stütz
Sensors 2023, 23(19), 8025; https://doi.org/10.3390/s23198025 - 22 Sep 2023
Viewed by 976
Abstract
To improve the management of multispectral sensor systems on small reconnaissance drones, this paper proposes an approach to predict the performance of a sensor band with respect to its ability to expose camouflaged targets under a given environmental context. As a reference for sensor performance, a new metric is introduced that quantifies the visibility of camouflaged targets in a particular sensor band: the Target Visibility Index (TVI). For the sensor performance prediction, several machine learning models are trained to learn the relationship between the TVI for a specific sensor band and an environmental context state extracted from the visual band by multiple image descriptors. Using a predicted measure of performance, the sensor bands are ranked according to their significance. For the training and evaluation of the performance prediction approach, a dataset featuring 853 multispectral captures and numerous camouflaged targets in different environments was created and has been made publicly available for download. The results show that the proposed approach can successfully determine the most informative sensor bands in most cases. Therefore, this performance prediction approach has great potential to improve camouflage detection performance in real-world reconnaissance scenarios by increasing the utility of each sensor band and reducing the associated workload of complex multispectral sensor systems. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))

18 pages, 5546 KiB  
Article
PBA-YOLOv7: An Object Detection Method Based on an Improved YOLOv7 Network
by Yang Sun, Yi Li, Song Li, Zehao Duan, Haonan Ning and Yuhang Zhang
Appl. Sci. 2023, 13(18), 10436; https://doi.org/10.3390/app131810436 - 18 Sep 2023
Cited by 2 | Viewed by 2347
Abstract
Deep learning-based object detection methods must trade off detection accuracy against detection speed. This paper proposes the PBA-YOLOv7 network algorithm, which is based on the YOLOv7 network. It first introduces PConv, which lightens the ELAN module in the backbone network and reduces the number of parameters to improve detection speed; it then designs and introduces the BiFusionNet network, which better aggregates high-level and low-level semantic features; and finally, on this basis, the coordinate attention mechanism is introduced to make the network focus on more critical feature information and improve its feature expression ability without increasing model complexity. Experiments on the publicly available KITTI dataset show that the PBA-YOLOv7 network model significantly improves both detection accuracy and detection speed compared to the original YOLOv7 model, with 4% and 7.8% improvements in mAP0.5 and mAP0.5:0.95, respectively, and an improvement of six frames per second. The improved algorithm balances detection accuracy and detection speed and performs well compared to other algorithms, such as YOLOv7 and YOLOv5l. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))

13 pages, 6244 KiB  
Article
Scale-Aware Tracking Method with Appearance Feature Filtering and Inter-Frame Continuity
by Haiyu He, Zhen Chen, Zhen Li, Xiangdong Liu and Haikuo Liu
Sensors 2023, 23(17), 7516; https://doi.org/10.3390/s23177516 - 30 Aug 2023
Viewed by 561
Abstract
Visual object tracking is a fundamental task in computer vision that requires estimating the position and scale of a target object in a video sequence. However, scale variation is a difficult challenge that affects the performance and robustness of many trackers, especially those based on the discriminative correlation filter (DCF). Existing scale estimation methods based on multi-scale features are computationally expensive and degrade the real-time performance of the DCF-based tracker, especially in scenarios with restricted computing power. In this paper, we propose a practical and efficient solution that can handle scale changes without using multi-scale features and can be combined with any DCF-based tracker as a plug-in module. We use color name (CN) features and a salient feature to reduce the target appearance model’s dimensionality. We then estimate the target scale based on a Gaussian distribution model and introduce global and local scale consistency assumptions to restore the target’s scale. We fuse the tracking results with the DCF-based tracker to obtain the new position and scale of the target. We evaluate our method on the benchmark dataset Temple Color 128 and compare it with some popular trackers. Our method achieves competitive accuracy and robustness while significantly reducing the computational cost. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))

22 pages, 4319 KiB  
Article
Evaluating Deep Learning Techniques for Blind Image Super-Resolution within a High-Scale Multi-Domain Perspective
by Valdivino Alexandre de Santiago Júnior
AI 2023, 4(3), 598-619; https://doi.org/10.3390/ai4030032 - 1 Aug 2023
Viewed by 1607
Abstract
Although several solutions and experiments addressing image super-resolution (SR), boosted by deep learning (DL), have been conducted recently, they do not usually design evaluations with high scaling factors. Moreover, the datasets are generally benchmarks which do not truly encompass a significant diversity of domains to properly evaluate the techniques. It is also worth remarking that blind SR is attractive for real-world scenarios since it is based on the idea that the degradation process is unknown, and, hence, techniques in this context rely basically on low-resolution (LR) images. In this article, we present a high-scale (8×) experiment which evaluates five recent DL techniques tailored for blind image SR: Adaptive Pseudo Augmentation (APA), Blind Image SR with Spatially Variant Degradations (BlindSR), Deep Alternating Network (DAN), FastGAN, and Mixture of Experts Super-Resolution (MoESR). We consider 14 datasets from five broader domains (Aerial, Fauna, Flora, Medical, and Satellite), and note that some of the DL approaches were designed for single-image SR while others were not. Based on two no-reference metrics, NIQE and the transformer-based MANIQA score, MoESR can be regarded as the best solution, although the perceptual quality of the high-resolution (HR) images created by all the techniques still needs to improve. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))

24 pages, 2668 KiB  
Article
CNN Hardware Accelerator for Real-Time Bearing Fault Diagnosis
by Ching-Che Chung, Yu-Pei Liang and Hong-Jin Jiang
Sensors 2023, 23(13), 5897; https://doi.org/10.3390/s23135897 - 25 Jun 2023
Cited by 1 | Viewed by 1276
Abstract
This paper introduces a one-dimensional convolutional neural network (CNN) hardware accelerator. It is crafted to conduct real-time assessments of bearing conditions using economical hardware components, implemented on a field-programmable gate array evaluation platform, negating the necessity to transfer data to a cloud-based server. The adoption of the down-sampling technique augments the visible time span of the signal in an image, thereby enhancing the accuracy of the bearing condition diagnosis. Furthermore, the proposed method of quaternary quantization enhances precision and shrinks the memory demand for the neural network model by an impressive 89%. Provided that the current signal data sampling rate stands at 64 K samples/s, the proposed design can accomplish real-time fault diagnosis at a clock frequency of 100 MHz. Impressively, the response duration of the proposed CNN hardware system is a mere 0.28 s, with the fault diagnosis precision reaching a remarkable 96.37%. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))
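A quaternary (four-level, 2-bit) weight quantization step can be sketched as below. The level placement and scaling used here are illustrative assumptions, not the exact scheme implemented in the accelerator.

```python
import numpy as np

def quantize_quaternary(weights):
    """Quantize float weights to four levels (2 bits each) -- a generic sketch,
    not the paper's exact scheme.

    Levels are assumed to be {-a, -a/3, +a/3, +a} with a = max |w|.
    Returns the 2-bit codes and the scale needed to dequantize.
    """
    a = np.max(np.abs(weights)) + 1e-12
    levels = np.array([-a, -a / 3, a / 3, a])
    # Index of the nearest level for every weight
    codes = np.abs(weights[..., None] - levels).argmin(axis=-1).astype(np.uint8)
    return codes, a

def dequantize(codes, a):
    levels = np.array([-a, -a / 3, a / 3, a])
    return levels[codes]

w = np.random.randn(8, 8).astype(np.float32)
codes, scale = quantize_quaternary(w)
w_hat = dequantize(codes, scale)
print("max abs error:", np.abs(w - w_hat).max())
```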

19 pages, 2256 KiB  
Article
Lhia: A Smart Chatbot for Breastfeeding Education and Recruitment of Human Milk Donors
by Joeckson Santos Corrêa, Ari Pereira de Araújo Neto, Giovanny Rebouças Pinto, Lucas Daniel Batista Lima and Ariel Soares Teles
Appl. Sci. 2023, 13(12), 6923; https://doi.org/10.3390/app13126923 - 8 Jun 2023
Viewed by 1673
Abstract
Human milk is the most important way to feed and protect newborns as it has the components to ensure human health. Human Milk Banks (HMBs) form a network that offers essential services to ensure that newborns and mothers can take advantage of the benefits of human milk. Despite this, there is low adherence to exclusive breastfeeding in Brazil, and human milk stocks available in HMBs are usually below demand. This study aimed to co-develop a smart conversational agent (Lhia chatbot) for breastfeeding education and human milk donor recruitment for HMBs. The co-design methodology was carried out with health professionals from the HMB of the University Hospital of the Federal University of Maranhão (HMB-UHFUMA). Five natural language processing pipelines based on deep learning were trained to classify different user intents. During the rounds in the co-design procedure, improvements were made in the content and structure of the conversational flow, and the data produced were used in subsequent training sessions of pipelines. The best-performing pipeline achieved an accuracy of 93%, with a fallback index of 15% for 1851 interactions. In addition, the conversational flow improved, reaching 2904 responses given by the chatbot during the last co-design round. The pipeline with the best performance and the most improved conversational flow were deployed in the Lhia chatbot to be put into production. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))

17 pages, 3552 KiB  
Article
Corner-Point and Foreground-Area IoU Loss: Better Localization of Small Objects in Bounding Box Regression
by Delong Cai, Zhaoyun Zhang and Zhi Zhang
Sensors 2023, 23(10), 4961; https://doi.org/10.3390/s23104961 - 22 May 2023
Cited by 5 | Viewed by 1858
Abstract
Bounding box regression is a crucial step in object detection, directly affecting the localization performance of the detected objects. Especially in small object detection, an excellent bounding box regression loss can significantly alleviate the problem of missing small objects. However, there are two major problems with the broad Intersection over Union (IoU) losses, also known as Broad IoU losses (BIoU losses) in bounding box regression: (i) BIoU losses cannot provide more effective fitting information for predicted boxes as they approach the target box, resulting in slow convergence and inaccurate regression results; (ii) most localization loss functions do not fully utilize the spatial information of the target, namely the target’s foreground area, during the fitting process. Therefore, this paper proposes the Corner-point and Foreground-area IoU loss (CFIoU loss) function by delving into the potential for bounding box regression losses to overcome these issues. First, we use the normalized corner point distance between the two boxes instead of the normalized center-point distance used in the BIoU losses, which effectively suppresses the problem of BIoU losses degrading to IoU loss when the two boxes are close. Second, we add adaptive target information to the loss function to provide richer target information to optimize the bounding box regression process, especially for small object detection. Finally, we conducted simulation experiments on bounding box regression to validate our hypothesis. At the same time, we conducted quantitative comparisons of the current mainstream BIoU losses and our proposed CFIoU loss on the small object public datasets VisDrone2019 and SODA-D using the latest anchor-based YOLOv5 and anchor-free YOLOv8 object detection algorithms. The experimental results demonstrate that YOLOv5s (+3.12% Recall, +2.73% mAP@0.5, and +1.91% mAP@0.5:0.95) and YOLOv8s (+1.72% Recall and +0.60% mAP@0.5), both incorporating the CFIoU loss, achieved the highest performance improvement on the VisDrone2019 test set. Similarly, YOLOv5s (+6% Recall, +13.08% mAP@0.5, and +14.29% mAP@0.5:0.95) and YOLOv8s (+3.36% Recall, +3.66% mAP@0.5, and +4.05% mAP@0.5:0.95), both incorporating the CFIoU loss, also achieved the highest performance improvement on the SODA-D test set. These results indicate the effectiveness and superiority of the CFIoU loss in small object detection. Additionally, we conducted comparative experiments by fusing the CFIoU loss and the BIoU loss with the SSD algorithm, which is not proficient in small object detection. The experimental results demonstrate that the SSD algorithm incorporating the CFIoU loss achieved the highest improvement in the AP (+5.59%) and AP75 (+5.37%) metrics, indicating that the CFIoU loss can also improve the performance of algorithms that are not proficient in small object detection. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))
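To make the corner-point idea concrete, the sketch below shows an IoU loss whose penalty term is the normalized distance between matching corner points rather than between box centers. It is an illustrative variant only; the published CFIoU loss additionally incorporates the target's foreground area.

```python
import torch

def corner_distance_iou_loss(pred, target, eps=1e-9):
    """IoU loss plus a normalized corner-point distance penalty.

    Boxes are (x1, y1, x2, y2), shape (N, 4). Illustrative only: in the spirit
    of the corner-point idea, not the paper's exact CFIoU formulation.
    """
    # Intersection and union
    x1 = torch.max(pred[:, 0], target[:, 0]); y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2]); y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Squared distances between matching corners (top-left and bottom-right)
    d_tl = (pred[:, 0] - target[:, 0]) ** 2 + (pred[:, 1] - target[:, 1]) ** 2
    d_br = (pred[:, 2] - target[:, 2]) ** 2 + (pred[:, 3] - target[:, 3]) ** 2

    # Normalize by the squared diagonal of the smallest enclosing box
    ex1 = torch.min(pred[:, 0], target[:, 0]); ey1 = torch.min(pred[:, 1], target[:, 1])
    ex2 = torch.max(pred[:, 2], target[:, 2]); ey2 = torch.max(pred[:, 3], target[:, 3])
    diag = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + eps

    return 1 - iou + (d_tl + d_br) / (2 * diag)

pred = torch.tensor([[0., 0., 4., 4.]])
target = torch.tensor([[1., 1., 5., 5.]])
print(corner_distance_iou_loss(pred, target))
```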

17 pages, 7980 KiB  
Article
An Automated Skill Assessment Framework Based on Visual Motion Signals and a Deep Neural Network in Robot-Assisted Minimally Invasive Surgery
by Mingzhang Pan, Shuo Wang, Jingao Li, Jing Li, Xiuze Yang and Ke Liang
Sensors 2023, 23(9), 4496; https://doi.org/10.3390/s23094496 - 5 May 2023
Cited by 1 | Viewed by 1688
Abstract
Surgical skill assessment can quantify the quality of the surgical operation via the motion state of the surgical instrument tip (SIT), which is considered one of the effective primary means by which to improve the accuracy of surgical operation. Traditional methods have displayed promising results in skill assessment. However, this success is predicated on the SIT sensors, making these approaches impractical when employing the minimally invasive surgical robot with such a tiny end size. To address the assessment issue regarding the operation quality of robot-assisted minimally invasive surgery (RAMIS), this paper proposes a new automatic framework for assessing surgical skills based on visual motion tracking and deep learning. The new method innovatively combines vision and kinematics. The kernel correlation filter (KCF) is introduced in order to obtain the key motion signals of the SIT and classify them by using the residual neural network (ResNet), realizing automated skill assessment in RAMIS. To verify its effectiveness and accuracy, the proposed method is applied to the public minimally invasive surgical robot dataset, the JIGSAWS. The results show that the method based on visual motion tracking technology and a deep neural network model can effectively and accurately assess the skill of robot-assisted surgery in near real-time. In a fairly short computational processing time of 3 to 5 s, the average accuracy of the assessment method is 92.04% and 84.80% in distinguishing two and three skill levels. This study makes an important contribution to the safe and high-quality development of RAMIS. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))

24 pages, 1784 KiB  
Article
Complementing Solutions for Facility Location Optimization via Video Game Crowdsourcing and Machine Learning Approach
by Mariano Vargas-Santiago, Diana A. León-Velasco, Ricardo Marcelín Jiménez and Luis Alberto Morales-Rosales
Appl. Sci. 2023, 13(8), 4884; https://doi.org/10.3390/app13084884 - 13 Apr 2023
Cited by 2 | Viewed by 1713
Abstract
The facility location problem (FLP) is a complex optimization problem that has been widely researched and applied in industry. In this research, we proposed two innovative approaches to complement the limitations of traditional methods, such as heuristics, metaheuristics, and genetic algorithms. The first approach involves utilizing crowdsourcing through video game players to obtain improved solutions, filling the gap in existing research on crowdsourcing for FLP. The second approach leverages machine learning techniques, specifically prediction methods, to provide an efficient exploration of the solution space. Our findings indicate that machine learning techniques can complement existing solutions by providing a more comprehensive approach to solving FLP and filling gaps in the solution space. Furthermore, machine learning predictive models are efficient for decision making and provide quick insights into the system’s behavior. In conclusion, this research contributes to the advancement of problem-solving techniques and has potential implications for solving a wide range of complex, NP-hard problems in various domains. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))

13 pages, 426 KiB  
Article
BTDM: A Bi-Directional Translating Decoding Model-Based Relational Triple Extraction
by Zhi Zhang, Junan Yang, Hui Liu and Pengjiang Hu
Appl. Sci. 2023, 13(7), 4447; https://doi.org/10.3390/app13074447 - 31 Mar 2023
Viewed by 1237
Abstract
The goal of relational triple extraction is to extract knowledge-rich relational triples from unstructured text. Although previous methods obtain considerable performance, some problems remain, such as error propagation, the overlapping triple problem, and suboptimal subject–object alignment. To address these shortcomings, in this paper we decompose the task into three subtasks from a fresh perspective: entity extraction, subject–object alignment, and relation judgement, and propose a novel bi-directional translating decoding model (BTDM). Specifically, a bidirectional translating decoding structure is designed to perform entity extraction and subject–object alignment, decoding entity pairs from both forward and backward extraction. The bidirectional structure effectively mitigates the error propagation problem and aligns the subject–object pairs, while the translating decoding approach handles the overlapping triple problem. Finally, an (entity pair, relation) bipartite graph is designed to achieve practical relation judgement. Experiments show that our model outperforms previous methods and achieves state-of-the-art performance on NYT and WebNLG, with F1-scores of 92.7% and 93.8% on the two datasets. Our model also demonstrates consistent performance gains in various complementary experiments on complex scenarios. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))

12 pages, 817 KiB  
Article
The Application of Intelligent Data Models for Dementia Classification
by Rabah AlShboul, Fadi Thabtah, Alexander James Walter Scott and Yun Wang
Appl. Sci. 2023, 13(6), 3612; https://doi.org/10.3390/app13063612 - 11 Mar 2023
Cited by 6 | Viewed by 1639
Abstract
Background and Objective: Dementia is a broad term for a complex range of conditions that affect the brain, such as Alzheimer’s disease (AD). Dementia affects a lot of people in the elderly community, hence there is a huge demand to better understand this condition by using cost-effective and quick methods, such as neuropsychological tests, since pathological assessments are invasive and demand expensive resources. One of the promising initiatives that deals with dementia and Mild Cognitive Impairment (MCI) is the Alzheimer’s Disease Neuroimaging Initiative (ADNI), which includes cognitive tests, such as Clinical Dementia Rating (CDR) scores. The aim of this research is to investigate non-invasive dementia indicators, such as cognitive features, that are typically diagnosed by clinical assessment within ADNI’s data to understand their effect on dementia. Methods: To achieve this aim, machine learning techniques were utilized to classify patients as Cognitively Normal (CN), MCI, or having dementia, based on the sum of CDR scores (CDR-SB) and demographic variables. In particular, the performance of Support Vector Machine (SVM), K-nearest neighbors (KNN), Decision Trees (C4.5), Probabilistic Naïve Bayes (NB), and Rule Induction (RIPPER) is measured with respect to different evaluation measures, including specificity, sensitivity, and harmonic mean (F-measure), among others, on a large number of cases and controls from the ADNI dataset. Results: The results indicate competitive performance when classifying subjects from the baseline selected variables using machine learning technology. Though we observed fairly good results across all machine learning algorithms utilized, there was still variation in performance, indicating that some algorithms, such as NB and C4.5, are better suited to the task of classifying dementia status based on our baseline data. Conclusions: Using cognitive tests, such as CDR-SB scores, together with demographic attributes to pinpoint dementia using machine learning can be seen as a less invasive approach that could be suitable for clinical use to aid in the diagnosis of dementia. This study gives an indication that a comprehensive assessment tool, such as CDR, may be adequate in assessing and assigning a dementia class to patients, upon their visit, in order to speed up further clinical procedures. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))
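A minimal version of this classification setup (CDR-SB plus a demographic feature fed to one of the tested classifiers) might look like the sketch below. The data and the cut-offs used to create labels are synthetic, illustrative stand-ins for the ADNI table, not clinical thresholds.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import classification_report

# Synthetic stand-in for the ADNI baseline table; real variable names differ.
rng = np.random.default_rng(0)
n = 300
cdrsb = np.clip(rng.gamma(1.5, 1.5, n), 0, 18)   # CDR sum-of-boxes scores
age = rng.normal(73, 7, n)
df = pd.DataFrame({"CDRSB": cdrsb, "AGE": age})
# Labels assigned with purely illustrative cut-offs
df["DX"] = pd.cut(cdrsb, [-0.1, 0.5, 4.0, 18.1], labels=["CN", "MCI", "Dementia"])

X_train, X_test, y_train, y_test = train_test_split(
    df[["CDRSB", "AGE"]], df["DX"], test_size=0.3, stratify=df["DX"], random_state=0)

clf = GaussianNB().fit(X_train, y_train)   # one of the classifiers compared in the paper
print(classification_report(y_test, clf.predict(X_test)))
```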

18 pages, 603 KiB  
Review
A Taxonomy of Machine Learning Clustering Algorithms, Challenges, and Future Realms
by Shahneela Pitafi, Toni Anwar and Zubair Sharif
Appl. Sci. 2023, 13(6), 3529; https://doi.org/10.3390/app13063529 - 9 Mar 2023
Cited by 12 | Viewed by 5695
Abstract
In the field of data mining, clustering has proven to be an important technique. Numerous clustering methods have been devised and put into practice, and most of them locate high-quality or optimal clustering outcomes in the fields of computer science, data science, statistics, pattern recognition, artificial intelligence, and machine learning. This research provides a modern, thorough review of both classic and cutting-edge clustering methods. The taxonomy of clustering is presented in this review from an applied angle, together with a comparison of some hierarchical and partitional clustering algorithms under various parameters. We also discuss open challenges in clustering, such as computational complexity, refinement of clusters, speed of convergence, data dimensionality, effectiveness and scalability, data object representation, evaluation measures, data streams, and knowledge extraction; scientists and professionals alike will be able to use the review as a benchmark as they strive to advance the state of the art in clustering techniques. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))

24 pages, 1639 KiB  
Article
Ontology Learning Applications of Knowledge Base Construction for Microelectronic Systems Information
by Frank Wawrzik, Khushnood Adil Rafique, Farin Rahman and Christoph Grimm
Information 2023, 14(3), 176; https://doi.org/10.3390/info14030176 - 9 Mar 2023
Cited by 4 | Viewed by 2024
Abstract
Knowledge base construction (KBC) using AI has been one of the key goals of this highly popular technology since its emergence, as it helps to comprehend everything, including relations, around us. The construction of knowledge bases can summarize a piece of text in a machine-processable and understandable way. This can prove to be valuable and assistive to knowledge engineers. In this paper, we present the application of natural language processing in the construction of knowledge bases. We demonstrate how a trained bidirectional long short-term memory or bi-LSTM neural network model can be used to construct knowledge bases in accordance with the exact ISO26262 definitions as defined in the GENIAL! Basic Ontology. We provide the system with an electronic text document from the microelectronics domain and the system attempts to create a knowledge base from the available information in textual format. This information is then expressed in the form of graphs when queried by the user. This method of information retrieval presents the user with a much more technical and comprehensive understanding of an expert piece of text. This is achieved by applying the process of named entity recognition (NER) for knowledge extraction. This paper provides a result report of the current status of our knowledge construction process and knowledge base content, as well as describes our challenges and experiences. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))

15 pages, 1215 KiB  
Article
A Four-Stage Algorithm for Community Detection Based on Label Propagation and Game Theory in Social Networks
by Atefeh Torkaman, Kambiz Badie, Afshin Salajegheh, Mohammad Hadi Bokaei and Seyed Farshad Fatemi Ardestani
AI 2023, 4(1), 255-269; https://doi.org/10.3390/ai4010011 - 8 Feb 2023
Cited by 3 | Viewed by 2747
Abstract
Over the years, detecting stable communities in a complex network has been a major challenge in network science. Global and local structures help to detect communities from different perspectives; however, previous methods based on them suffer from high complexity and from falling into local optima, respectively. The Four-Stage Algorithm (FSA) is proposed to reduce these issues and to allocate nodes to stable communities. Balancing global and local information, as well as accuracy and time complexity, while ensuring the allocation of nodes to stable communities, are the fundamental goals of this research. The FSA is described and demonstrated using four real-world datasets with ground truth and three real networks without ground truth. In addition, it is evaluated against the results of seven community detection methods: the three-stage algorithm (TS), Louvain, Infomap, Fastgreedy, Walktrap, Eigenvector, and label propagation (LPA). Experimental results on the seven real network datasets show the effectiveness of the proposed approach and confirm that it is capable of identifying the more desirable communities; they also confirm that the proposed method can detect more stable and assured communities. For future work, deep learning methods can also be used to extract semantic content features that are more beneficial for investigating networks. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))
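Label propagation, one of the baselines listed above and a building block of this family of methods, can be sketched in a few lines: each node repeatedly adopts the majority label among its neighbours until labels stabilize. This is the classic LPA, not the FSA itself.

```python
import random

def label_propagation(adj, max_iters=20, seed=0):
    """Classic label propagation for community detection.

    adj: dict mapping each node to a list/set of neighbours.
    Each node repeatedly adopts the most frequent label among its neighbours.
    """
    rng = random.Random(seed)
    labels = {v: v for v in adj}          # start with one unique label per node
    nodes = list(adj)
    for _ in range(max_iters):
        rng.shuffle(nodes)
        changed = False
        for v in nodes:
            if not adj[v]:
                continue
            counts = {}
            for u in adj[v]:
                counts[labels[u]] = counts.get(labels[u], 0) + 1
            best = max(counts, key=counts.get)
            if labels[v] != best:
                labels[v], changed = best, True
        if not changed:
            break
    return labels

# Two triangles joined by a single edge form two natural communities
g = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3, 5, 6], 5: [4, 6], 6: [4, 5]}
print(label_propagation(g))
```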

21 pages, 15441 KiB  
Article
Anomaly Detection of DC Nut Runner Processes in Engine Assembly
by James Simon Flynn, Cinzia Giannetti and Hessel Van Dijk
AI 2023, 4(1), 234-254; https://doi.org/10.3390/ai4010010 - 7 Feb 2023
Cited by 1 | Viewed by 2789
Abstract
In many manufacturing systems, anomaly detection is critical to identifying process errors and ensuring product quality. This paper proposes three semi-supervised solutions to detect anomalies in Direct Current (DC) Nut Runner engine assembly processes. The nut runner process is a challenging anomaly detection problem due to the manual nature of the process inducing high variability and ambiguity of the anomalous class. These characteristics lead to a scenario where anomalies are not outliers, and the normal operating conditions are difficult to define. To address these challenges, a Gaussian Mixture Model (GMM) was trained using a semi-supervised approach. Three dimensionality reduction methods were compared in pre-processing: PCA, t-SNE, and UMAP. These approaches are demonstrated to outperform the current approaches used by a major automotive company on two real-world datasets. Furthermore, a novel approach to labelling real-world data is proposed, including the concept of an ‘Anomaly No Concern’ class, in addition to the traditional labels of ‘Anomaly’ and ‘Normal’. Introducing this new term helped address knowledge gaps between data scientists and domain experts, as well as providing new insights during model development and testing. This represents a major advancement in identifying anomalies in manual production processes that use handheld tools. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))
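The core of the semi-supervised approach (fit a density model on normal data only, then flag low-likelihood samples) can be sketched as below with synthetic data. PCA stands in for the compared dimensionality-reduction methods; the thresholding rule is an illustrative choice, not the paper's.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

# Hypothetical fixed-length feature vectors extracted from nut-runner traces
rng = np.random.default_rng(0)
normal_train = rng.normal(0, 1, size=(500, 40))     # normal runs only
test = np.vstack([rng.normal(0, 1, size=(50, 40)),
                  rng.normal(3, 1, size=(5, 40))])  # last 5 rows are anomalous

# Dimensionality reduction (the paper also compares t-SNE and UMAP), then a GMM
pca = PCA(n_components=5).fit(normal_train)
gmm = GaussianMixture(n_components=3, random_state=0).fit(pca.transform(normal_train))

scores = gmm.score_samples(pca.transform(test))     # log-likelihood per sample
threshold = np.percentile(gmm.score_samples(pca.transform(normal_train)), 1)
is_anomaly = scores < threshold
print(is_anomaly.sum(), "samples flagged as anomalous")
```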

17 pages, 2209 KiB  
Article
Two Practical Methods for the Forward Kinematics of 3-3 Type Spatial and 3-RRR Planar Parallel Manipulators
by Ercan Düzgün and Osman Kopmaz
Appl. Sci. 2022, 12(24), 12811; https://doi.org/10.3390/app122412811 - 13 Dec 2022
Cited by 1 | Viewed by 1726
Abstract
The forward kinematics in parallel manipulators is a mathematically challenging issue, unlike serial manipulators. Kinematic constraint equations are non-linear transcendental equations that can be reduced to algebraic equations with appropriate transformations. For this reason, sophisticated and time-consuming methods such as the Bezout method, the Groebner bases method, and the like, are used. In this paper, we demonstrate that these equations can be solved by non-complicated mathematical methods for some special types of manipulators such as the 3-3 and 6-3 types of Stewart platforms, and the 3-RRR planar parallel manipulator. Our first method is an analytical approach that exploits the special structure of kinematic constraint equations and yields polynomials of 32nd and 16th order, as mentioned in the previous works. In the second method, an error function is defined. This error function is employed to find the most appropriate initial values for the non-linear equation solver which is used for solving kinematic constraint equations. Determining the initial values in this manner saves computation time and guarantees fast convergence to real solutions. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))
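The second method described above, using an error function to pick good starting points for a non-linear solver, can be sketched as below. The constraint equations here are a two-variable toy system, not the actual loop-closure equations of a Stewart platform or 3-RRR manipulator; only the structure of the method is illustrated.

```python
import numpy as np
from scipy.optimize import fsolve

def constraints(q):
    """Toy stand-in for the kinematic constraint equations F(q) = 0.

    A real 3-3 Stewart-platform or 3-RRR model would put its loop-closure
    equations here; the procedure below is unchanged.
    """
    x, y = q
    return [x**2 + y**2 - 4.0,   # e.g. a distance constraint
            np.sin(x) - y]       # e.g. an orientation constraint

def error(q):
    """Scalar error: sum of squared constraint residuals."""
    return sum(v * v for v in constraints(q))

# Coarse grid search for the most promising initial value ...
grid = [(x, y) for x in np.linspace(-3, 3, 31) for y in np.linspace(-3, 3, 31)]
q0 = min(grid, key=error)

# ... then refine with a standard non-linear equation solver
solution = fsolve(constraints, q0)
print("initial guess:", q0, "-> solution:", solution)
```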

13 pages, 895 KiB  
Article
Detecting Emotions behind the Screen
by Najla Alkaabi, Nazar Zaki, Heba Ismail and Manzoor Khan
AI 2022, 3(4), 948-960; https://doi.org/10.3390/ai3040056 - 22 Nov 2022
Cited by 4 | Viewed by 2222
Abstract
Students’ emotional health is a major contributor to educational success. Hence, to support students’ success in online learning platforms, we contribute an analysis of the emotional orientations and triggers in their text messages. Such analysis could be automated and used for early detection of the emotional status of students. In our approach, we relied on transfer learning to train the model, using the pre-trained Bidirectional Encoder Representations from Transformers (BERT) model. The model classified messages as positive, negative, or neutral. The transfer learning model was then used to classify a larger unlabeled dataset, and fine-grained emotions were identified in the negative messages only, using the NRC lexicon. In our analysis of the results, we focused on discovering the dominant negative emotions expressed and the most common words students used to express them. We believe this can be an important clue or first line of detection that may assist mental health practitioners in developing targeted programs for students, especially with the massive shift to online education due to the COVID-19 pandemic. We compared our model to a state-of-the-art ML-based model and found that our model outperformed it, achieving 91% accuracy compared to 86%. To the best of our knowledge, this is the first study to focus on a mental health analysis of students in online educational platforms other than massive open online courses (MOOCs). Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))
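The first stage of the approach, classifying the polarity of a student message with a pre-trained transformer, can be approximated with an off-the-shelf pipeline as in the sketch below. The paper fine-tunes its own BERT model with an additional neutral class and then applies the NRC lexicon to the negative messages; the example messages here are invented.

```python
from transformers import pipeline

# Off-the-shelf sentiment pipeline (downloads a default English model) as a
# stand-in for the paper's fine-tuned BERT classifier.
classifier = pipeline("sentiment-analysis")

messages = [
    "I really enjoyed today's lecture, thank you!",
    "I am so stressed, I don't understand anything in this module.",
]
for msg, result in zip(messages, classifier(messages)):
    print(result["label"], round(result["score"], 3), "-", msg)
```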

29 pages, 21299 KiB  
Article
Sensing and Detection of Traffic Signs Using CNNs: An Assessment on Their Performance
by Lorenzo Canese, Gian Carlo Cardarilli, Luca Di Nunzio, Rocco Fazzolari, Hamed Famil Ghadakchi, Marco Re and Sergio Spanò
Sensors 2022, 22(22), 8830; https://doi.org/10.3390/s22228830 - 15 Nov 2022
Cited by 1 | Viewed by 2440
Abstract
Traffic sign detection systems constitute a key component in trending real-world applications such as autonomous driving and driver safety and assistance. In recent years, many learning systems have been used to help detect traffic signs more accurately, such as ResNet, VGG, SqueezeNet, and DenseNet, but which of these systems performs best is debatable. They must be examined carefully and under the same conditions: the same dataset structure, the same number of training epochs, the same implementation language, and the same way of invoking the training procedure. Only under these equal conditions is a comparison between different learning systems valid. In this article, traffic sign detection was performed using the AlexNet and XResNet-50 training methods, which had not been used for this task until now, and then with the ResNet 18, 34, and 50, DenseNet 121, 169, and 201, Vgg16_bn, Vgg19_bn, AlexNet, SqueezeNet1_0, and SqueezeNet1_1 training methods under completely identical conditions. The results are compared with each other, and finally, the best models for detecting traffic signs are identified. The experimental results showed that, considering training loss, validation loss, accuracy, error rate, and time, three CNN learning models, Vgg16_bn, Vgg19_bn, and AlexNet, performed better for the intended purpose. As a result, these three learning models can be considered for further studies. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))

18 pages, 1990 KiB  
Article
Enhanced Authenticated Key Agreement for Surgical Applications in a Tactile Internet Environment
by Tian-Fu Lee, Xiucai Ye, Wei-Yu Chen and Chi-Chang Chang
Sensors 2022, 22(20), 7941; https://doi.org/10.3390/s22207941 - 18 Oct 2022
Cited by 1 | Viewed by 1474
Abstract
The Tactile Internet enables physical touch to be transmitted over the Internet. In the context of electronic medicine, an authenticated key agreement for the Tactile Internet allows surgeons to perform operations via robotic systems and receive tactile feedback from remote patients. The fifth generation of networks has completely changed the network space and has increased the efficiency of the Tactile Internet with its ultra-low latency, high data rates, and reliable connectivity. However, inappropriate and insecure authenticated key agreements for the Tactile Internet may cause misjudgment and improper operation by medical staff, endangering the life of patients. In 2021, Kamil et al. developed a novel and lightweight authenticated key agreement scheme that is suitable for remote surgery applications in the Tactile Internet environment. However, their scheme directly encrypts communication messages with constant secret keys and directly stores secret keys in the verifier table, making the scheme vulnerable to possible attacks. Therefore, in this investigation, we discuss the limitations of the scheme proposed by Kamil et al. and present an enhanced scheme. The enhanced scheme uses a one-time key to protect communication messages, whereas the verifier table is protected with a secret gateway key, mitigating the mentioned limitations. The enhanced scheme is proven secure against possible attacks, provides more security functionalities than similar schemes, and retains a lightweight computational cost. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))

22 pages, 10795 KiB  
Article
Machine Learning Approach to Predict the Performance of a Stratified Thermal Energy Storage Tank at a District Cooling Plant Using Sensor Data
by Afzal Ahmed Soomro, Ainul Akmar Mokhtar, Waleligne Molla Salilew, Zainal Ambri Abdul Karim, Aijaz Abbasi, Najeebullah Lashari and Syed Muslim Jameel
Sensors 2022, 22(19), 7687; https://doi.org/10.3390/s22197687 - 10 Oct 2022
Cited by 5 | Viewed by 2600
Abstract
In the energy management of district cooling plants, the thermal energy storage (TES) tank is critical. As a result, it is essential to keep track of TES performance, which has been measured using a variety of methodologies, both numerical and analytical. In this study, the performance of the TES tank in terms of thermocline thickness is predicted using an artificial neural network (ANN), a support vector machine (SVM), and k-nearest neighbors (KNN), an approach that has remained unexplored. One year of data was collected from a district cooling plant, with fourteen sensors used to measure the temperature at different points. Using engineering judgement, 263 rows of data were selected to develop the prediction models. A total of 70% of the data were used for training and 30% for testing, and k-fold cross-validation was used. Sensor temperature data were used as the model input and thermocline thickness as the model output. The data were normalized, and, in addition, moving-average and median-filter data smoothing techniques were applied while developing the KNN and SVM prediction models to enable a comparison. The hyperparameters of the three machine learning models were chosen by trial and error; on this basis, the optimum ANN architecture was 14-10-1, which gives the maximum R-squared value, i.e., 0.9, and the minimum mean square error. Finally, the prediction accuracies of the three techniques were compared: the accuracy of the ANN is 92%, that of the SVM is 89%, and that of KNN is 96.3%, so KNN performs better than the others. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))
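The prediction setup (fourteen temperature sensors in, thermocline thickness out, with smoothing and normalization before a KNN regressor) can be sketched as below. The data here are synthetic stand-ins for the plant measurements, and the target definition is invented for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import r2_score

def moving_average(x, window=3):
    """Simple moving-average smoothing applied column-wise to sensor readings."""
    kernel = np.ones(window) / window
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, x)

# Synthetic stand-in: 263 rows x 14 temperature sensors -> thermocline thickness
rng = np.random.default_rng(1)
X = rng.normal(10, 2, size=(263, 14))
y = X[:, :7].mean(axis=1) - X[:, 7:].mean(axis=1) + rng.normal(0, 0.1, 263)

X = moving_average(X)                          # smooth, then normalize
X = (X - X.mean(axis=0)) / X.std(axis=0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
knn = KNeighborsRegressor(n_neighbors=5).fit(X_tr, y_tr)
print("R^2 on held-out data:", r2_score(y_te, knn.predict(X_te)))
```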