Computers, Volume 11, Issue 1 (January 2022) – 14 articles

Cover Story: The complexity of production systems significantly affects companies, especially small- and medium-sized enterprises (SMEs), which need to reduce costs and, at the same time, become more competitive and increase their productivity by optimizing their production processes to make manufacturing processes more efficient. From a mathematical point of view, most real-world machine scheduling and sequencing problems are classified as NP-hard problems. Thus, heuristic and metaheuristic techniques are widely used, as are commercial solvers. In this paper, we develop a matheuristic algorithm to optimize the job-shop problem. The matheuristic algorithm combines a genetic algorithm with a disjunctive mathematical model, and the Coin-OR Branch and Cut open-source solver is employed.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
19 pages, 1194 KiB  
Article
A Systematic Selection Process of Machine Learning Cloud Services for Manufacturing SMEs
by Can Kaymakci, Simon Wenninger, Philipp Pelger and Alexander Sauer
Computers 2022, 11(1), 14; https://doi.org/10.3390/computers11010014 - 17 Jan 2022
Cited by 13 | Viewed by 5621
Abstract
Small and medium-sized enterprises (SMEs) in manufacturing are increasingly facing challenges of digital transformation and a shift towards cloud-based solutions to leverage artificial intelligence (AI) or, more specifically, machine learning (ML) services. Although the literature covers a variety of frameworks related to the adoption of cloud solutions, cloud-based ML solutions in SMEs are not yet widespread, and an end-to-end process for ML cloud service selection is lacking. The purpose of this paper is to present a systematic selection process of ML cloud services for manufacturing SMEs. Following a design science research approach, including a literature review and qualitative expert interviews, as well as a case study of a German manufacturing SME, this paper presents a four-step process to select ML cloud services for SMEs based on an analytic hierarchy process. We identified 24 evaluation criteria for ML cloud services relevant to SMEs by merging knowledge from manufacturing, cloud computing, and ML with practical aspects. The paper provides an interdisciplinary, hands-on, and easy-to-understand decision support system that lowers the barriers to the adoption of ML cloud services and supports digital transformation in manufacturing SMEs. We advocate applying the process in other practical use cases to support SMEs and, at the same time, to develop it further. Full article
(This article belongs to the Special Issue Sensors and Smart Cities 2023)
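The analytic hierarchy process at the core of the selection step can be sketched in a few lines: pairwise comparisons of criteria on Saaty's 1–9 scale are reduced to priority weights via the principal eigenvector, followed by a consistency check. The three criteria and comparison values below are illustrative placeholders, not the paper's 24 criteria.

```python
import numpy as np

# Pairwise comparison matrix for three hypothetical criteria,
# e.g. cost vs. data privacy vs. ease of integration (Saaty 1-9 scale).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# Priority weights = normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio; RI = 0.58 is Saaty's random index for n = 3.
n = A.shape[0]
lam_max = eigvals.real[k]
ci = (lam_max - n) / (n - 1)
cr = ci / 0.58
print("weights:", np.round(w, 3), "CR:", round(cr, 3))
```

A comparison matrix is usually considered acceptably consistent when CR < 0.1; with the placeholder values above, the check passes.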

24 pages, 37226 KiB  
Article
An IoT System Using Deep Learning to Classify Camera Trap Images on the Edge
by Imran Zualkernan, Salam Dhou, Jacky Judas, Ali Reza Sajun, Brylle Ryan Gomez and Lana Alhaj Hussain
Computers 2022, 11(1), 13; https://doi.org/10.3390/computers11010013 - 13 Jan 2022
Cited by 31 | Viewed by 10696
Abstract
Camera traps deployed in remote locations provide an effective method for ecologists to monitor and study wildlife in a non-invasive way. However, current camera traps suffer from two problems. First, the images are manually classified and counted, which is expensive. Second, due to manual coding, the results are often stale by the time they get to the ecologists. Using the Internet of Things (IoT) combined with deep learning represents a good solution for both these problems, as the images can be classified automatically, and the results immediately made available to ecologists. This paper proposes an IoT architecture that uses deep learning on edge devices to convey animal classification results to a mobile app using the LoRaWAN low-power, wide-area network. The primary goal of the proposed approach is to reduce the cost of the wildlife monitoring process for ecologists, and to provide real-time animal sightings data from the camera traps in the field. Camera trap image data consisting of 66,400 images were used to train the InceptionV3, MobileNetV2, ResNet18, EfficientNetB1, DenseNet121, and Xception neural network models. While performance of the trained models was statistically different (Kruskal–Wallis: Accuracy H(5) = 22.34, p < 0.05; F1-score H(5) = 13.82, p = 0.0168), there was only a 3% difference in the F1-score between the worst (MobileNetV2) and the best model (Xception). Moreover, the models made similar errors (Adjusted Rand Index (ARI) > 0.88 and Adjusted Mutual Information (AMI) > 0.82). Subsequently, the best model, Xception (Accuracy = 96.1%; F1-score = 0.87; F1-score = 0.97 with oversampling), was optimized and deployed on the Raspberry Pi, Google Coral, and Nvidia Jetson edge devices using both TensorFlow Lite and TensorRT frameworks. Optimizing the models to run on edge devices reduced the average macro F1-score to 0.7, and adversely affected the minority classes, reducing their F1-score to as low as 0.18.
Upon stress testing, by processing 1000 images consecutively, Jetson Nano, running a TensorRT model, outperformed others with a latency of 0.276 s/image (s.d. = 0.002) while consuming an average current of 1665.21 mA. Raspberry Pi consumed the least average current (838.99 mA) with a ten times worse latency of 2.83 s/image (s.d. = 0.036). Nano was the only reasonable option as an edge device because it could capture most animals whose maximum speeds were below 80 km/h, including goats, lions, ostriches, etc. While the proposed architecture is viable, unbalanced data remain a challenge and the results can potentially be improved by using object detection to reduce imbalances and by exploring semi-supervised learning. Full article
(This article belongs to the Special Issue Survey in Deep Learning for IoT Applications)
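The Kruskal–Wallis comparison reported above can be reproduced in outline with `scipy.stats.kruskal`: per-model score samples go in, the H statistic and p-value come out. The per-fold F1-scores below are invented for illustration and are not the paper's measurements.

```python
from scipy.stats import kruskal

# Hypothetical per-fold F1-scores for three of the six models
# (illustrative numbers only, not the paper's data).
xception  = [0.87, 0.88, 0.86, 0.87, 0.88]
inception = [0.86, 0.85, 0.86, 0.87, 0.85]
mobilenet = [0.84, 0.83, 0.85, 0.84, 0.83]

# Non-parametric test: do the F1 distributions differ across models?
h, p = kruskal(xception, inception, mobilenet)
print(f"H = {h:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Reject H0: the models' F1 distributions differ.")
```

As in the paper, a significant H statistic only says the distributions differ somewhere; the practical gap between best and worst model can still be small.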

23 pages, 1011 KiB  
Article
Meta-Governance Framework to Guide the Establishment of Mass Collaborative Learning Communities
by Majid Zamiri, Luis M. Camarinha-Matos and João Sarraipa
Computers 2022, 11(1), 12; https://doi.org/10.3390/computers11010012 - 8 Jan 2022
Cited by 4 | Viewed by 2837
Abstract
The application of mass collaboration in different areas of study and work has been increasing over the last few decades. For example, in the education context, this emerging paradigm has opened new opportunities for participatory learning, namely, “mass collaborative learning (MCL)”. The development of such an innovative and complementary method of learning, which can lead to the creation of knowledge-based communities, has helped to reap the benefits of diversity and inclusion in the creation and development of knowledge. In other words, MCL allows for enhanced connectivity among the people involved, providing them with the opportunity to practice learning collectively. Despite recent advances, this area still faces many challenges, such as a lack of common agreement about the main concepts, components, applicable structures, relationships among the participants, as well as applicable assessment systems. From this perspective, this study proposes a meta-governance framework that benefits from various other related ideas, models, and methods that together can better support the implementation, execution, and development of mass collaborative learning communities. The proposed framework was applied to two case-study projects in which vocational education and training respond to the needs of collaborative education–enterprise approaches. It was also further used in an illustration of the MCL community called the “community of cooks”. Results from these application cases are discussed. Full article
(This article belongs to the Special Issue Computing, Electrical and Industrial Systems 2021)

31 pages, 10375 KiB  
Article
Approximator: A Software Tool for Automatic Generation of Approximate Arithmetic Circuits
by Padmanabhan Balasubramanian, Raunaq Nayar, Okkar Min and Douglas L. Maskell
Computers 2022, 11(1), 11; https://doi.org/10.3390/computers11010011 - 8 Jan 2022
Cited by 1 | Viewed by 3556
Abstract
Approximate arithmetic circuits are an attractive alternative to accurate arithmetic circuits because they have significantly reduced delay, area, and power, albeit at the cost of some loss in accuracy. By keeping errors due to approximate computation within acceptable limits, approximate arithmetic circuits can be used for various practical applications such as digital signal processing, digital filtering, low-power graphics processing, neuromorphic computing, hardware realization of neural networks for artificial intelligence and machine learning, etc. The degree of approximation that can be incorporated into an approximate arithmetic circuit tends to vary depending on the error resiliency of the target application. Given this, the manual coding of approximate arithmetic circuits corresponding to different degrees of approximation in a hardware description language (HDL) may be a cumbersome and time-consuming process—more so when the circuit is big. Therefore, a software tool that can automatically generate approximate arithmetic circuits of any size corresponding to a desired accuracy would not only aid the design flow but also help to improve a designer's productivity by speeding up circuit/system development. In this context, this paper presents 'Approximator', a software tool developed to automatically generate approximate arithmetic circuits based on a user's specification. Approximator can automatically generate Verilog HDL codes of approximate adders and multipliers of any size based on the novel approximate arithmetic circuit architectures proposed by us. The Verilog HDL codes output by Approximator can be used for synthesis in an FPGA or ASIC (standard cell based) design environment. Additionally, the tool can perform error and accuracy analyses of approximate arithmetic circuits. The salient features of the tool are illustrated through some example screenshots captured during different stages of tool use.
Approximator has been made open-access on GitHub for the benefit of the research community, and the tool documentation is provided for the user’s reference. Full article
(This article belongs to the Special Issue Feature Paper in Computers)
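To make the idea of an approximate adder concrete, the sketch below models a classic lower-part OR adder (LOA), in which the k least-significant bits are OR-ed instead of added while the upper bits are summed exactly; this is a well-known scheme from the literature, not necessarily one of the architectures Approximator generates.

```python
def approx_add(a, b, k, width=8):
    """Lower-part OR adder (LOA): the k LSBs are approximated with a
    bitwise OR (no carry chain), the upper bits are added exactly.
    One classic approximation scheme, shown here for illustration."""
    mask = (1 << k) - 1
    lower = (a & mask) | (b & mask)      # approximate lower part
    upper = ((a >> k) + (b >> k)) << k   # exact upper part
    return (upper | lower) & ((1 << (width + 1)) - 1)

# Exhaustive error analysis over all 8-bit operand pairs for k = 3;
# the error of an LOA is exactly (a & b) restricted to the low bits.
errs = [abs(approx_add(a, b, 3) - (a + b))
        for a in range(256) for b in range(256)]
print("mean absolute error:", sum(errs) / len(errs))
print("max absolute error:", max(errs))
```

This mirrors the kind of error and accuracy analysis the tool automates: trading a bounded, characterizable error (here at most 2^k − 1) for a shorter carry chain.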

14 pages, 16960 KiB  
Article
Brain Tumour Classification Using Noble Deep Learning Approach with Parametric Optimization through Metaheuristics Approaches
by Dillip Ranjan Nayak, Neelamadhab Padhy, Pradeep Kumar Mallick, Dilip Kumar Bagal and Sachin Kumar
Computers 2022, 11(1), 10; https://doi.org/10.3390/computers11010010 - 7 Jan 2022
Cited by 39 | Viewed by 5627 | Correction
Abstract
Deep learning has surged in popularity in recent years, notably in the domains of medical image processing, medical image analysis, and bioinformatics. In this study, we offer a completely autonomous brain tumour segmentation approach based on deep neural networks (DNNs). We describe a unique CNN architecture which varies from those usually used in computer vision. The classification of tumour cells is very difficult due to their heterogeneous nature. From a visual learning and brain tumour recognition point of view, a convolutional neural network (CNN) is the most extensively used machine learning algorithm. This paper presents a CNN model along with parametric optimization approaches for analysing brain tumour magnetic resonance images. In this work, the authors have tuned the parameters of the CNN, which is applied to a dataset of brain MRIs to detect any portion of a tumour, through new advanced optimization techniques, i.e., SFOA, FBIA, and MGA. The accuracy percentage in the simulation of the above-mentioned model is exactly 100% throughout the nine runs, i.e., Taguchi's L9 design of experiment. This comparative analysis of all three algorithms will pique the interest of readers who are interested in applying these techniques to a variety of technical and medical challenges. Full article
(This article belongs to the Special Issue Advances of Machine and Deep Learning in the Health Domain)
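Taguchi's L9 design of experiment mentioned above maps up to four three-level factors onto just nine runs via an orthogonal array. The sketch below enumerates such a design; the hyperparameter names and levels are illustrative assumptions, not the paper's settings.

```python
# Standard L9(3^4) orthogonal array: 9 runs, four 3-level factors,
# every ordered pair of levels appears exactly once per column pair.
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

# Illustrative CNN hyperparameters as the four factors (assumed names
# and levels, not taken from the paper).
factors = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "batch_size":    [16, 32, 64],
    "kernel_size":   [3, 5, 7],
    "dropout":       [0.2, 0.3, 0.5],
}

names = list(factors)
runs = [{n: factors[n][lvl] for n, lvl in zip(names, row)} for row in L9]
for i, run in enumerate(runs, 1):
    print(f"run {i}: {run}")
```

Each of the nine runs would then be trained and scored, instead of exhaustively sweeping all 3^4 = 81 combinations.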

19 pages, 505 KiB  
Article
Application of the Crow Search Algorithm to the Problem of the Parametric Estimation in Transformers Considering Voltage and Current Measures
by David Gilberto Gracia-Velásquez, Andrés Steven Morales-Rodríguez and Oscar Danilo Montoya
Computers 2022, 11(1), 9; https://doi.org/10.3390/computers11010009 - 6 Jan 2022
Cited by 7 | Viewed by 2679
Abstract
The problem of the electrical characterization of single-phase transformers is addressed in this research through the application of the crow search algorithm (CSA). A nonlinear programming model to determine the series and parallel impedances of the transformer is formulated using the mean square error (MSE) between the voltages and currents measured and calculated as the objective function. The CSA is selected as a solution technique since it is efficient in dealing with complex nonlinear programming models using penalty factors to explore and exploit the solution space with minimum computational effort. Numerical results in single-phase transformers with nominal sizes of 20 kVA, 45 kVA, 112.5 kVA, and 167 kVA demonstrate the efficiency of the proposed approach to define the transformer parameters when compared with the large-scale nonlinear solver fmincon in the MATLAB programming environment. Regarding the final objective function value, the CSA reaches objective functions lower than 2.75×10⁻¹¹ for all the simulation cases, which confirms its effectiveness in minimizing the MSE between real (measured) and expected (calculated) voltage and current variables in the transformer. Full article
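The core CSA update is simple enough to sketch: each "crow" follows another crow's memorized best position unless that crow is "aware", in which case the follower relocates randomly. The toy quadratic below stands in for the paper's MSE between measured and calculated transformer quantities; the bounds, coefficients, and parameter names are illustrative assumptions.

```python
import random

def crow_search(f, dim, bounds, n_crows=20, iters=200,
                fl=2.0, ap=0.1, seed=1):
    """Minimal crow search algorithm (CSA) minimizing f.
    fl = flight length, ap = awareness probability."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_crows)]
    M = [x[:] for x in X]              # each crow's memorized best position
    for _ in range(iters):
        for i in range(n_crows):
            j = rng.randrange(n_crows)         # crow i follows crow j
            if rng.random() >= ap:             # j unaware: move toward M[j]
                new = [X[i][d] + rng.random() * fl * (M[j][d] - X[i][d])
                       for d in range(dim)]
            else:                              # j aware: random relocation
                new = [rng.uniform(lo, hi) for _ in range(dim)]
            new = [min(max(v, lo), hi) for v in new]   # keep inside bounds
            X[i] = new
            if f(new) < f(M[i]):               # update memory on improvement
                M[i] = new
    return min(M, key=f)

# Toy stand-in for the MSE between measured and calculated voltages and
# currents: a quadratic whose optimum (0.5, 0.3) plays the role of
# hypothetical per-unit series and parallel parameters.
mse = lambda z: (z[0] - 0.5) ** 2 + (z[1] - 0.3) ** 2
best = crow_search(mse, dim=2, bounds=(0.0, 1.0))
print("estimated parameters:", [round(v, 3) for v in best])
```

In the paper's setting, `mse` would evaluate the transformer circuit model against the measured voltage and current data rather than a fixed quadratic.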

13 pages, 2049 KiB  
Article
Melanoma Detection in Dermoscopic Images Using a Cellular Automata Classifier
by Benjamín Luna-Benoso, José Cruz Martínez-Perales, Jorge Cortés-Galicia, Rolando Flores-Carapia and Víctor Manuel Silva-García
Computers 2022, 11(1), 8; https://doi.org/10.3390/computers11010008 - 4 Jan 2022
Cited by 7 | Viewed by 2778
Abstract
Cancer is one of the leading causes of death worldwide. Skin cancer is a condition in which malignant cells form in the tissues of the skin; melanoma is known as the most aggressive and deadly skin cancer type. The mortality rates of melanoma are associated with its high potential for metastasis in later stages, spreading to other body sites such as the lungs, bones, or the brain. Thus, early detection and diagnosis are closely related to survival rates. Computer-Aided Diagnosis (CAD) systems carry out a pre-diagnosis of a skin lesion based on clinical criteria or global patterns associated with its structure. A CAD system is essentially composed of three modules: (i) lesion segmentation, (ii) feature extraction, and (iii) classification. In this work, a methodology is proposed for the development of a CAD system that detects global patterns using texture descriptors based on statistical measurements, allowing melanoma detection from dermoscopic images. Image analysis was carried out using spatial domain methods, statistical measurements were used for feature extraction, and a classifier based on cellular automata (ACA) was used for classification. The proposed model was applied to dermoscopic images obtained from the PH2 database, and it was compared with other models using accuracy, sensitivity, and specificity as metrics. With the proposed model, values of 0.978, 0.944, and 0.987 for accuracy, sensitivity, and specificity, respectively, were obtained. The results of the evaluated metrics show that the proposed method is more effective than other state-of-the-art methods for melanoma detection in dermoscopic images. Full article
(This article belongs to the Special Issue Advances of Machine and Deep Learning in the Health Domain)
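First-order statistical texture descriptors of the kind the abstract describes can be computed directly from pixel intensities. The feature set below (mean, standard deviation, skewness, kurtosis, histogram entropy) is one common choice for this family of measurements, not the paper's exact descriptors; the synthetic patch is illustrative.

```python
import numpy as np

def texture_features(gray):
    """First-order statistical texture descriptors of a grayscale
    lesion patch with values in [0, 255]."""
    g = gray.astype(float).ravel()
    mean = g.mean()
    std = g.std()
    skew = ((g - mean) ** 3).mean() / (std ** 3 + 1e-12)
    kurt = ((g - mean) ** 4).mean() / (std ** 4 + 1e-12) - 3.0
    # Normalized histogram entropy as a simple texture measure.
    hist, _ = np.histogram(g, bins=32, range=(0, 255))
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return {"mean": mean, "std": std, "skewness": skew,
            "kurtosis": kurt, "entropy": entropy}

# Example on a synthetic grayscale patch (stand-in for a lesion region).
rng = np.random.default_rng(0)
patch = rng.normal(120, 25, size=(64, 64)).clip(0, 255)
print(texture_features(patch))
```

A feature vector like this, computed per segmented lesion, is what the classification module would consume.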

17 pages, 3935 KiB  
Article
Experimental and Mathematical Models for Real-Time Monitoring and Auto Watering Using IoT Architecture
by Jabar H. Yousif and Khaled Abdalgader
Computers 2022, 11(1), 7; https://doi.org/10.3390/computers11010007 - 3 Jan 2022
Cited by 16 | Viewed by 7396
Abstract
Manufacturing industries based on Internet of Things (IoT) technologies play an important role in the economic development of intelligent agriculture and watering. Water availability has become a global problem that afflicts many countries, especially in remote and desert areas. An efficient irrigation system is needed for optimizing the amount of water consumption, agriculture monitoring, and reducing energy costs. This paper proposes a real-time monitoring and auto-watering system based on predictive mathematical models that efficiently control the water rate needed. It gives the plant the optimal amount of required water, which helps to save water. It also ensures interoperability among heterogeneous sensing data streams to support large-scale agricultural analytics. The mathematical model is embedded in the Arduino Integrated Development Environment (IDE) for sensing the soil moisture level and checking whether it is less than the pre-defined threshold value; if so, plant watering is performed automatically. The proposed system enhances the watering system's efficiency by reducing water consumption by more than 70% and increasing production due to irrigation optimization. It also reduces water and energy consumption and decreases maintenance costs. Full article
(This article belongs to the Special Issue Real-Time Systems in Emerging IoT-Embedded Applications)
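The threshold check the abstract describes reduces to a short control loop: read the soil moisture, compare against a pre-defined threshold, and switch the pump accordingly. The sketch below mirrors that logic in Python for readability (the paper's implementation runs as an Arduino sketch); the threshold value and pump interface are illustrative assumptions.

```python
# Threshold-based auto-watering logic; the threshold and the pump
# callbacks are hypothetical stand-ins for the real sensor/actuator.
MOISTURE_THRESHOLD = 30.0   # assumed percent soil moisture

def watering_decision(moisture_pct, threshold=MOISTURE_THRESHOLD):
    """Return True if the pump should run for this reading."""
    return moisture_pct < threshold

def control_step(read_sensor, pump_on, pump_off):
    """One iteration of the monitoring loop."""
    level = read_sensor()
    if watering_decision(level):
        pump_on()
    else:
        pump_off()
    return level

# Simulated run over a few moisture readings:
readings = iter([45.0, 33.2, 28.7, 22.1, 31.5])
log = []
for _ in range(5):
    control_step(lambda: next(readings),
                 lambda: log.append("ON"),
                 lambda: log.append("OFF"))
print(log)  # ['OFF', 'OFF', 'ON', 'ON', 'OFF']
```

On the actual device, `read_sensor` would poll the soil moisture probe and the pump callbacks would toggle a relay pin.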

21 pages, 1579 KiB  
Review
A Review of Intelligent Sensor-Based Systems for Pressure Ulcer Prevention
by Arlindo Silva, José Metrôlho, Fernando Ribeiro, Filipe Fidalgo, Osvaldo Santos and Rogério Dionisio
Computers 2022, 11(1), 6; https://doi.org/10.3390/computers11010006 - 31 Dec 2021
Cited by 16 | Viewed by 7704
Abstract
Pressure ulcers are a critical issue not only for patients, decreasing their quality of life, but also for healthcare professionals, contributing to burnout from continuous monitoring, with a consequent increase in healthcare costs. Due to the relevance of this problem, many hardware and software approaches have been proposed to ameliorate some aspects of pressure ulcer prevention and monitoring. In this article, we focus on reviewing solutions that use sensor-based data, possibly in combination with other intrinsic or extrinsic information, processed by some form of intelligent algorithm, to provide healthcare professionals with knowledge that improves the decision-making process when dealing with a patient at risk of developing pressure ulcers. We used a systematic approach to select 21 studies that were thoroughly reviewed and summarized, considering which sensors and algorithms were used, the most relevant data features, the recommendations provided, and the results obtained after deployment. This review allowed us not only to describe the state of the art regarding the previous items, but also to identify the three main stages where intelligent algorithms can bring meaningful improvement to pressure ulcer prevention and mitigation. Finally, as a result of this review and following discussion, we drew guidelines for a general architecture of an intelligent pressure ulcer prevention system. Full article
(This article belongs to the Special Issue Advances of Machine and Deep Learning in the Health Domain)

15 pages, 3156 KiB  
Article
Application of Unsupervised Multivariate Analysis Methods to Raman Spectroscopic Assessment of Human Dental Enamel
by Iulian Otel, Joao Silveira, Valentina Vassilenko, António Mata, Maria Luísa Carvalho, José Paulo Santos and Sofia Pessanha
Computers 2022, 11(1), 5; https://doi.org/10.3390/computers11010005 - 28 Dec 2021
Cited by 1 | Viewed by 2964
Abstract
This work explores the suitability of data treatment methodologies for Raman spectra of teeth using multivariate analysis methods. Raman spectra were measured in our laboratory and obtained from control enamel samples and samples with a protective treatment before and after an erosive attack. Three different approaches for data treatment were undertaken in order to evaluate the aptitude of distinguishing between groups: A—Principal Component Analysis (PCA) of the numerical parameters derived from deconvoluted spectra; B—PCA of average Raman spectra after baseline correction; and C—PCA of average raw Raman spectra. Additionally, Hierarchical Cluster Analysis was applied to Raman spectra of enamel measured with different laser wavelengths (638 nm or 785 nm) to evaluate the most suitable choice of illumination. According to the different approaches, the PC1 scores obtained between the control and treatment groups were A—50.5%, B—97.1%, and C—83.0% before the erosive attack and A—55.2%, B—93.2%, and C—87.8% after an erosive attack. The obtained results showed that performing PCA of raw or baseline-corrected Raman spectra of enamel was not as efficient in the evaluation of samples with different treatments. Moreover, acquiring Raman spectra with a 785 nm laser increases precision in the data treatment methodologies. Full article
(This article belongs to the Special Issue Computing, Electrical and Industrial Systems 2021)
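Approach B, PCA of baseline-corrected spectra, can be outlined as follows on synthetic stand-in spectra (a phosphate-like band near 960 cm⁻¹ with a small treatment-induced shift); the data are simulated for illustration only and are not the paper's measurements.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
x = np.linspace(800, 1100, 300)   # Raman shift axis (cm^-1)

def spectrum(center, n):
    """n noisy Gaussian 'spectra' with a band centered at `center`."""
    peak = np.exp(-((x - center) ** 2) / (2 * 8.0 ** 2))
    amp = rng.normal(1.0, 0.05, (n, 1))           # amplitude variability
    noise = rng.normal(0.0, 0.01, (n, x.size))    # measurement noise
    return peak[None, :] * amp + noise

control = spectrum(960.0, 10)   # untreated enamel (simulated)
treated = spectrum(963.0, 10)   # treatment shifts the band slightly

# PCA of the (already baseline-free) spectra; PC1 should capture the
# dominant between-group difference here.
X = np.vstack([control, treated])
pca = PCA(n_components=2)
scores = pca.fit_transform(X)
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
```

Plotting PC1 vs. PC2 scores for the two groups is the usual way to visualize the separation the abstract quantifies.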

12 pages, 3180 KiB  
Article
Learn2Write: Augmented Reality and Machine Learning-Based Mobile App to Learn Writing
by Md. Nahidul Islam Opu, Md. Rakibul Islam, Muhammad Ashad Kabir, Md. Sabir Hossain and Mohammad Mainul Islam
Computers 2022, 11(1), 4; https://doi.org/10.3390/computers11010004 - 27 Dec 2021
Cited by 11 | Viewed by 6212
Abstract
Augmented reality (AR) has been widely used in education, particularly for child education. This paper presents the design and implementation of a novel mobile app, Learn2Write, using machine learning techniques and augmented reality to teach alphabet writing. The app has two main features: (i) guided learning to teach users how to write the alphabet and (ii) on-screen and AR-based handwriting testing using machine learning. A learner needs to write on the mobile screen in on-screen testing, whereas AR-based testing allows one to evaluate writing on paper or a board in a real world environment. We implement a novel approach to use machine learning for AR-based testing to detect an alphabet written on a board or paper. It detects the handwritten alphabet using our developed machine learning model. After that, a 3D model of that alphabet appears on the screen with its pronunciation/sound. The key benefit of our approach is that it allows the learner to use a handwritten alphabet. As we have used marker-less augmented reality, it does not require a static image as a marker. The app was built with ARCore SDK for Unity. We further evaluated and quantified the performance of our app on multiple devices. Full article
(This article belongs to the Special Issue Xtended or Mixed Reality (AR+VR) for Education)

29 pages, 1562 KiB  
Review
A Review of Urdu Sentiment Analysis with Multilingual Perspective: A Case of Urdu and Roman Urdu Language
by Ihsan Ullah Khan, Aurangzeb Khan, Wahab Khan, Mazliham Mohd Su’ud, Muhammad Mansoor Alam, Fazli Subhan and Muhammad Zubair Asghar
Computers 2022, 11(1), 3; https://doi.org/10.3390/computers11010003 - 27 Dec 2021
Cited by 23 | Viewed by 16505
Abstract
Research efforts in the field of sentiment analysis have increased exponentially in the last few years due to its applicability in areas such as online product purchasing, marketing, and reputation management. Social media and online shopping sites have become a rich source of user-generated data. Manufacturing, sales, and marketing organizations are progressively turning to this source to get worldwide feedback on their activities and products. Millions of sentences in Urdu and Roman Urdu are posted daily on social sites, such as Facebook, Instagram, Snapchat, and Twitter. Disregarding people's opinions in Urdu and Roman Urdu and considering only the resource-rich English language leads to the loss of this vast amount of data. Our research focused on collecting research papers related to the Urdu and Roman Urdu languages and analyzing them in terms of preprocessing, feature extraction, and classification techniques. This paper contains a comprehensive study of research conducted on Roman Urdu and Urdu text for product reviews. This study is divided into categories, such as collection of relevant corpora, data preprocessing, feature extraction, classification platforms and approaches, limitations, and future work. The comparison was made based on evaluating different research factors, such as corpus, lexicon, and opinions. Each reviewed paper was evaluated according to some provided benchmarks and categorized accordingly. Based on the results obtained and the comparisons made, we suggest some helpful steps for future study. Full article

12 pages, 9688 KiB  
Article
Markerless Dog Pose Recognition in the Wild Using ResNet Deep Learning Model
by Srinivasan Raman, Rytis Maskeliūnas and Robertas Damaševičius
Computers 2022, 11(1), 2; https://doi.org/10.3390/computers11010002 - 24 Dec 2021
Cited by 10 | Viewed by 5274
Abstract
The analysis and perception of behavior has usually been a crucial task for researchers. The goal of this paper is to address the problem of recognition of animal poses, which has numerous applications in zoology, ecology, biology, and entertainment. We propose a methodology to recognize dog poses. The methodology includes the extraction of frames for labeling from videos and deep convolutional neural network (CNN) training for pose recognition. We employ a semi-supervised deep learning model of reinforcement. During training, we used a combination of restricted labeled data and a large amount of unlabeled data. A sequential CNN is also used for feature localization and to find the canine's motions and posture for spatio-temporal analysis. To detect the canine's features, we employ image frames to locate the annotations and estimate the dog's posture. As a result of this process, we avoid starting from scratch with the feature model and reduce the need for a large dataset. We present the results of experiments on a dataset of more than 5000 images of dogs in different poses. We demonstrated the effectiveness of the proposed methodology for images of canine animals in various poses and behavior. The methodology was implemented as a mobile app that can be used for animal tracking. Full article
(This article belongs to the Special Issue Feature Paper in Computers)

16 pages, 9185 KiB  
Article
Matheuristic Algorithm for Job-Shop Scheduling Problem Using a Disjunctive Mathematical Model
by Eduardo Guzman, Beatriz Andres and Raul Poler
Computers 2022, 11(1), 1; https://doi.org/10.3390/computers11010001 - 22 Dec 2021
Cited by 10 | Viewed by 4504
Abstract
This paper focuses on the investigation of a new efficient method for solving machine scheduling and sequencing problems. The complexity of production systems significantly affects companies, especially small- and medium-sized enterprises (SMEs), which need to reduce costs and, at the same time, become more competitive and increase their productivity by optimizing their production processes to make manufacturing processes more efficient. From a mathematical point of view, most real-world machine scheduling and sequencing problems are classified as NP-hard problems. Different algorithms have been developed to solve scheduling and sequencing problems in the last few decades. Thus, heuristic and metaheuristic techniques are widely used, as are commercial solvers. In this paper, we propose a matheuristic algorithm to optimize the job-shop problem which combines a genetic algorithm with a disjunctive mathematical model, and the Coin-OR Branch & Cut open-source solver is employed. The matheuristic algorithm allows efficient solutions to be found, and cuts computational times by using an open-source solver combined with a genetic algorithm. This provides companies with an easy-to-use tool and does not incur costs associated with expensive commercial software licenses. Full article
(This article belongs to the Special Issue Computing, Electrical and Industrial Systems 2021)
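A central step in such a matheuristic is evaluating a GA chromosome by decoding it into a feasible schedule. The sketch below uses the common operation-based encoding (a gene is a job id; its k-th occurrence schedules that job's k-th operation) on a tiny illustrative 3×3 instance, not one from the paper.

```python
# Job-shop instance: job -> ordered list of (machine, processing_time).
# A 3-job, 3-machine toy example for illustration only.
jobs = [
    [(0, 3), (1, 2), (2, 2)],
    [(0, 2), (2, 1), (1, 4)],
    [(1, 4), (2, 3), (0, 1)],
]

def makespan(chromosome):
    """Decode an operation-based chromosome into a semi-active
    schedule and return its makespan (the GA fitness)."""
    n_machines = 1 + max(m for ops in jobs for m, _ in ops)
    next_op = [0] * len(jobs)          # next unscheduled op per job
    job_ready = [0] * len(jobs)        # finish time of each job's last op
    mach_ready = [0] * n_machines      # time each machine frees up
    for j in chromosome:
        m, p = jobs[j][next_op[j]]
        start = max(job_ready[j], mach_ready[m])
        job_ready[j] = mach_ready[m] = start + p
        next_op[j] += 1
    return max(job_ready)

# One valid chromosome: each job id appears once per operation.
print(makespan([0, 1, 2, 0, 1, 2, 0, 1, 2]))
```

In the full matheuristic, the GA would evolve such chromosomes while the disjunctive MILP model, solved with Coin-OR Branch and Cut, refines or evaluates candidate sequences exactly.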
