Search Results (35)

Search Parameters:
Keywords = real-time machine learning business processes

32 pages, 4251 KB  
Article
Context-Aware ML/NLP Pipeline for Real-Time Anomaly Detection and Risk Assessment in Cloud API Traffic
by Aziz Abibulaiev, Petro Pukach and Myroslava Vovk
Mach. Learn. Knowl. Extr. 2026, 8(1), 25; https://doi.org/10.3390/make8010025 - 22 Jan 2026
Viewed by 66
Abstract
We present a combined ML/NLP (Machine Learning, Natural Language Processing) pipeline for protecting cloud-based APIs (Application Programming Interfaces) that operates both at the level of individual HTTP (Hypertext Transfer Protocol) requests and in an access-log reading mode, explicitly linking technical anomalies to business risks. The system processes each event or access-log entry through parallel numerical and textual branches: a set of anomaly detectors trained on engineered traffic characteristics, and a hybrid NLP stack that combines rules, TF-IDF (Term Frequency-Inverse Document Frequency), and character-level models trained on enriched security datasets. Their outputs are integrated by a risk-aware policy that takes into account endpoint type, data sensitivity, exposure, and authentication status, and produces a discrete risk level with human-readable explanations and recommended SOC (Security Operations Center) actions. We implement this design as a containerized microservice pipeline (input, preprocessing, ML, NLP, merging, alerting, and retraining services), orchestrated with Docker Compose and instrumented with OpenSearch Dashboards. Experiments with OWASP-like (Open Worldwide Application Security Project) attack scenarios show a high detection rate for injections, SSRF (Server-Side Request Forgery), Data Exposure, and Business Logic Abuse, while per-request processing time remains within real-time limits even in sequential testing mode. The pipeline thus bridges the gap between ML/NLP security research and practical API protection channels that can evolve over time through feedback and retraining. Full article
(This article belongs to the Section Safety, Security, Privacy, and Cyber Resilience)
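
As an illustration of the merging step described in the abstract, the sketch below combines an anomaly score from the numerical branch with a label from the textual branch and endpoint context to produce a discrete risk level; thresholds, field names, and recommended actions are invented for illustration and are not taken from the paper.

# Minimal sketch of a risk-aware merging policy (illustrative only).
from dataclasses import dataclass

@dataclass
class EndpointContext:
    sensitive_data: bool      # endpoint exposes PII or secrets
    public_exposure: bool     # reachable from the public internet
    authenticated: bool       # request carried a valid token

def merge(anomaly_score: float, nlp_label: str, ctx: EndpointContext) -> tuple[str, str]:
    """Combine ML and NLP branch outputs into a discrete risk level and an action."""
    score = anomaly_score
    if nlp_label in {"injection", "ssrf"}:           # textual branch found an attack pattern
        score = max(score, 0.9)
    if ctx.sensitive_data:
        score += 0.1
    if ctx.public_exposure and not ctx.authenticated:
        score += 0.1
    if score >= 0.9:
        return "HIGH", "block source and open SOC incident"
    if score >= 0.6:
        return "MEDIUM", "rate-limit and queue for analyst review"
    return "LOW", "log only"

print(merge(0.42, "injection", EndpointContext(True, True, False)))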

26 pages, 759 KB  
Article
AI-Driven Process Innovation: Transforming Service Start-Ups in the Digital Age
by Neda Azizi, Peyman Akhavan, Claire Davison, Omid Haass, Shahrzad Saremi and Syed Fawad M. Zaidi
Electronics 2025, 14(16), 3240; https://doi.org/10.3390/electronics14163240 - 15 Aug 2025
Viewed by 2240
Abstract
In today’s fast-moving digital economy, service start-ups are reshaping industries; however, they face intense uncertainty, limited resources, and fierce competition. This study introduces an Artificial Intelligence (AI)-powered process modeling framework designed to give these ventures a competitive edge by combining big data analytics, machine learning, and Business Process Model and Notation (BPMN). While past models often overlook the dynamic, human-centered nature of service businesses, this research fills that gap by integrating AI-Driven Ideation, AI-Augmented Content, and AI-Enabled Personalization to fuel innovation, agility, and customer-centricity. Expert insights, gathered through a two-stage fuzzy Delphi method and validated using DEMATEL, reveal how AI can transform start-up processes by offering real-time feedback, predictive risk management, and smart customization. This model does more than optimize operations; it empowers start-ups to thrive in volatile, data-rich environments, improving strategic decision-making and even health and safety governance. By blending cutting-edge AI tools with process innovation, this research contributes a fresh, scalable framework for digital-age entrepreneurship. It opens exciting new pathways for start-up founders, investors, and policymakers looking to harness AI’s full potential in transforming how new ventures operate, compete, and grow. Full article
(This article belongs to the Special Issue Advances in Information, Intelligence, Systems and Applications)
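
For readers unfamiliar with the DEMATEL step mentioned above, the sketch below runs the standard calculation on a toy three-factor direct-influence matrix (invented values, not the study's expert ratings): the matrix is normalized and the total-relation matrix T = N(I - N)^(-1) is computed, from which prominence (D + R) and net influence (D - R) are derived.

# Toy DEMATEL computation (illustrative data, not the study's expert ratings).
import numpy as np

# Direct-influence matrix: rows influence columns (0 = none .. 4 = very high).
X = np.array([[0, 3, 2],
              [1, 0, 4],
              [2, 1, 0]], dtype=float)

N = X / X.sum(axis=1).max()                 # normalize by the largest row sum
T = N @ np.linalg.inv(np.eye(3) - N)        # total-relation matrix T = N (I - N)^-1
D, R = T.sum(axis=1), T.sum(axis=0)         # influence dispatched and received
for i, name in enumerate(["Ideation", "Content", "Personalization"]):
    print(f"{name:15s} prominence={D[i]+R[i]:.2f} relation={D[i]-R[i]:.2f}")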

27 pages, 1889 KB  
Article
Advancing Smart City Sustainability Through Artificial Intelligence, Digital Twin and Blockchain Solutions
by Ivica Lukić, Mirko Köhler, Zdravko Krpić and Miljenko Švarcmajer
Technologies 2025, 13(7), 300; https://doi.org/10.3390/technologies13070300 - 11 Jul 2025
Cited by 4 | Viewed by 2720
Abstract
This paper presents an integrated Smart City platform that combines digital twin technology, advanced machine learning, and a private blockchain network to enhance data-driven decision making and operational efficiency in both public enterprises and small and medium-sized enterprises (SMEs). The proposed cloud-based business intelligence model automates Extract, Transform, Load (ETL) processes, enables real-time analytics, and secures data integrity and transparency through blockchain-enabled audit trails. By implementing the proposed solution, Smart City and public service providers can significantly improve operational efficiency, including a 15% reduction in costs and a 12% decrease in fuel consumption for waste management, as well as increased citizen engagement and transparency in Smart City governance. The digital twin component facilitated scenario simulations and proactive resource management, while the participatory governance module empowered citizens through transparent, immutable records of proposals and voting. This study also discusses technical, organizational, and regulatory challenges, such as data integration, scalability, and privacy compliance. The results indicate that the proposed approach offers a scalable and sustainable model for Smart City transformation, fostering citizen trust, regulatory compliance, and measurable environmental and social benefits. Full article
(This article belongs to the Section Information and Communication Technologies)
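
A minimal way to picture the blockchain-enabled audit trail over the automated ETL runs is a hash chain of batch records, as sketched below; this is a simplified stand-in for the private blockchain network described in the abstract, with invented field names.

# Simplified hash-chained audit trail for ETL batches (illustrative stand-in
# for a private blockchain ledger).
import hashlib, json, time

def append_block(chain: list, payload: dict) -> dict:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"ts": time.time(), "payload": payload, "prev": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)
    return block

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False
        if i and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

ledger = []
append_block(ledger, {"etl_batch": 1, "rows_loaded": 5230, "source": "waste-mgmt-sensors"})
append_block(ledger, {"etl_batch": 2, "rows_loaded": 4981, "source": "waste-mgmt-sensors"})
print(verify(ledger))   # True; tampering with any loaded batch breaks the chain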

26 pages, 824 KB  
Article
Advancing Credit Rating Prediction: The Role of Machine Learning in Corporate Credit Rating Assessment
by Nazário Augusto de Oliveira and Leonardo Fernando Cruz Basso
Risks 2025, 13(6), 116; https://doi.org/10.3390/risks13060116 - 17 Jun 2025
Viewed by 6196
Abstract
Accurate corporate credit ratings are essential for financial risk assessment; yet, traditional methodologies relying on manual evaluation and basic statistical models often fall short in dynamic economic conditions. This study investigated the potential of machine-learning (ML) algorithms as a more precise and adaptable alternative for credit rating predictions. Using a seven-year dataset from S&P Capital IQ Pro, corporate credit ratings across 20 countries were analyzed, leveraging 51 financial and business risk variables. The study evaluated multiple ML models, including Logistic Regression, Support Vector Machines, Decision Trees, Random Forest, Gradient Boosting (GB), and Neural Networks, using rigorous data pre-processing, feature selection, and validation techniques. Results indicate that Artificial Neural Networks (ANN) and GB consistently outperform traditional models, particularly in capturing non-linear relationships and complex interactions among predictive factors. This study advances financial risk management by demonstrating the efficacy of ML-driven credit rating systems, offering a more accurate, efficient, and scalable solution. Additionally, it provides practical insights for financial institutions aiming to enhance their risk assessment frameworks. Future research should explore alternative data sources, real-time analytics, and model explainability to facilitate regulatory adoption. Full article
(This article belongs to the Special Issue Risk and Return Analysis in the Stock Market)
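
The kind of model comparison reported above can be sketched with scikit-learn on synthetic data, as below; the actual S&P Capital IQ features, pre-processing, and tuning are not reproduced here.

# Illustrative model comparison for multi-class credit-rating prediction on
# synthetic data (the study's proprietary S&P Capital IQ features are not used).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2000, n_features=51, n_informative=20,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

models = {
    "logistic_regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")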

29 pages, 7747 KB  
Article
Empowering Retail in the Metaverse by Leveraging Consumer Behavior Analysis for Personalized Shopping: A Pilot Study in the Saudi Market
by Monerah Alawadh and Ahmed Barnawi
J. Theor. Appl. Electron. Commer. Res. 2025, 20(2), 63; https://doi.org/10.3390/jtaer20020063 - 2 Apr 2025
Cited by 4 | Viewed by 3907
Abstract
The integration of advanced technologies, such as the Metaverse, has the potential to revolutionize the retail industry and enhance the shopping experience. Understanding consumer behavior and leveraging machine learning predictions based on that analysis can significantly enhance user experiences, enabling personalized interactions and fostering engagement within the virtual environment. As part of an ongoing research effort, we have developed a consumer behavior framework that predicts buying patterns of interest by analyzing sales transaction records with association rule learning, with the aim of improving sales parameters for retailers. In this paper, we present a validation analysis of this predictive framework and show how it can improve personalization in virtual reality shopping environments, which offer marketing capabilities that conventional real-time shopping does not. The findings are promising in terms of achieving satisfactory prediction accuracy in a focused pilot study conducted with a prominent retailer in Saudi Arabia. Such results can be used to personalize the shopping experience, especially on virtual platforms such as the Metaverse, which is expected to play a transformative role in future business and other activities. Shopping in the Metaverse offers a unique blend of immersive experiences and new possibilities, enabling consumers to interact with products and brands in a virtual environment as never before. This integration of cutting-edge technology not only transforms the retail landscape but also paves the way for a new era of personalized and engaging shopping experiences. Lastly, it offers new opportunities for retailers and streamlines the process of engaging with customers in innovative ways. Full article
(This article belongs to the Special Issue Emerging Digital Technologies and Consumer Behavior)
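
The association rule learning step underlying the framework can be sketched with the mlxtend implementation of Apriori (assumed available) on a few invented baskets; the retailer's real transaction records are, of course, not shown.

# Toy association-rule mining over basket data (illustrative; requires mlxtend).
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

baskets = [["dates", "coffee", "cardamom"],
           ["dates", "coffee"],
           ["coffee", "cardamom"],
           ["dates", "cardamom"],
           ["dates", "coffee", "cardamom"]]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit_transform(baskets), columns=te.columns_)
frequent = apriori(onehot, min_support=0.4, use_colnames=True)   # itemsets in >= 40% of baskets
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])

High-confidence rules of this kind can then drive product placement or recommendations inside the virtual store.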

21 pages, 2523 KB  
Systematic Review
Transformation of the Dairy Supply Chain Through Artificial Intelligence: A Systematic Review
by Gabriela Joseth Serrano-Torres, Alexandra Lorena López-Naranjo, Pedro Lucas Larrea-Cuadrado and Guido Mazón-Fierro
Sustainability 2025, 17(3), 982; https://doi.org/10.3390/su17030982 - 25 Jan 2025
Cited by 12 | Viewed by 7827
Abstract
The dairy supply chain encompasses all stages involved in the production, processing, distribution, and delivery of dairy products from farms to end consumers. Artificial intelligence (AI) refers to the use of advanced technologies to optimize processes and make informed decisions. Using the PRISMA methodology, this research analyzes AI technologies applied in the dairy supply chain, their impact on process optimization, the factors facilitating or hindering their adoption, and their potential to enhance sustainability and operational efficiency. The findings show that artificial intelligence (AI) is transforming dairy supply chain management through technologies such as artificial neural networks, deep learning, IoT sensors, and blockchain. These tools enable real-time planning and decision-making optimization, improve product quality and safety, and ensure traceability. The use of machine learning algorithms, such as Tabu Search, ACO, and SARIMA, is highlighted for predicting production, managing inventories, and optimizing logistics. Additionally, AI fosters sustainability by reducing environmental impact through more responsible farming practices and process automation, such as robotic milking. However, its adoption faces barriers such as high costs, lack of infrastructure, and technical training, particularly in small businesses. Despite these challenges, AI drives operational efficiency, strengthens food safety, and supports the transition toward a more sustainable and resilient supply chain. It is important to note that the study has limitations in analyzing long-term impacts, stakeholder resistance, and the lack of comparative studies on the effectiveness of different AI approaches. Full article
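
As one concrete example of the forecasting techniques the review covers, the sketch below fits a SARIMA model to a synthetic monthly milk-intake series with statsmodels; the orders are illustrative and untuned, and the data are invented.

# Toy SARIMA forecast of monthly milk intake (synthetic seasonal series).
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
idx = pd.date_range("2020-01-01", periods=48, freq="MS")
volume = 1000 + 50 * np.sin(2 * np.pi * idx.month / 12) + rng.normal(0, 15, 48)
series = pd.Series(volume, index=idx)

model = SARIMAX(series, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12))
fit = model.fit(disp=False)
print(fit.forecast(steps=6))   # expected intake for the next six months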

18 pages, 731 KB  
Review
Computational Methods for Information Processing from Natural Language Complaint Processes—A Systematic Review
by J. C. Blandón Andrade, A. Castaño Toro, A. Morales Ríos and D. Orozco Ospina
Computers 2025, 14(1), 28; https://doi.org/10.3390/computers14010028 - 20 Jan 2025
Viewed by 2130
Abstract
Complaint processing is of great importance for companies because it allows them to understand customer satisfaction levels, which is crucial for business success. Complaints reveal users' real perceptions of a service and make its problems visible, and they are usually expressed in oral or written natural language. In addition, the treatment of complaints matters because, under the laws of many countries, companies are obliged to respond to them within a specified time. The specialized literature reports that enterprises have lost USD 75 billion due to poor customer service, highlighting that companies need to understand customer perceptions, especially emotions, and product reviews in order to learn from customer feedback, given the importance of the voice of the customer for an organization. There is therefore a clear need for research on computational language processing for handling user requests, and on how such techniques could improve processes in the productive sector. This work searches indexed journals for information on computational methods for processing relevant data from user complaints, applying a systematic literature review (SLR) method that combines Kitchenham's literature review guidelines with the PRISMA statement. The systematic process allows consistent information to be extracted; after applying it, 27 articles were obtained and analyzed. The results show various proposals using linguistic, statistical, machine learning, and hybrid methods, and we find that most authors combine Natural Language Processing (NLP) and Machine Learning (ML) to create hybrid methods. These methods extract relevant information from customer complaints in natural language across domains such as government, medicine, banking, e-commerce, public services, agriculture, customer service, the environment, and tourism. This work supports the creation of new systems that can give companies a significant competitive advantage through their ability to reduce complaint response times as established by law. Full article
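
The hybrid NLP/ML approach most of the surveyed works adopt can be pictured with a minimal sketch: a keyword rule escalates legally sensitive complaints, and a TF-IDF plus logistic regression pipeline classifies the rest; categories and example texts are invented.

# Minimal hybrid complaint classifier of the kind surveyed (illustrative data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["my invoice was charged twice this month",
               "the technician never showed up for the repair",
               "refund still not received after four weeks",
               "internet service keeps dropping every evening"]
train_labels = ["billing", "service", "billing", "service"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

def classify(text: str) -> str:
    if any(word in text.lower() for word in ("lawsuit", "regulator", "legal")):
        return "escalate-legal"              # rule-based branch takes priority
    return model.predict([text])[0]          # statistical branch otherwise

print(classify("I will contact the regulator about this overcharge"))
print(classify("charged twice and no refund"))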

25 pages, 1610 KB  
Article
A Novel End-to-End Provenance System for Predictive Maintenance: A Case Study for Industrial Machinery Predictive Maintenance
by Emrullah Gultekin and Mehmet S. Aktas
Computers 2024, 13(12), 325; https://doi.org/10.3390/computers13120325 - 4 Dec 2024
Viewed by 2270
Abstract
In this study, we address the critical gap in predictive maintenance systems regarding the absence of a robust provenance system and specification. To tackle this issue, we propose a provenance system based on the PROV-O schema, designed to enhance explainability, accountability, and transparency in predictive maintenance processes. Our framework facilitates the collection, processing, recording, and visualization of provenance data, integrating them seamlessly into these systems. We developed a prototype to evaluate the effectiveness of our approach and conducted comprehensive user studies to assess the system’s usability. Participants found the extended PROV-O structure valuable, with improved task completion times. Furthermore, performance tests demonstrated that our system manages high workloads efficiently, with minimal overhead. The contributions of this study include the design of a provenance system tailored for predictive maintenance and a specification that ensures scalability and efficiency. Full article
(This article belongs to the Special Issue Computational Science and Its Applications 2024 (ICCSA 2024))
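
A minimal PROV-O-style record for one prediction run might look like the sketch below, written with the Python prov package (assumed available); identifiers are invented, and the paper's extended schema adds attributes beyond plain PROV-O.

# Minimal PROV-O-style trace of one inference run (illustrative identifiers).
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace("pdm", "http://example.org/pdm#")

readings = doc.entity("pdm:vibration-window-0412")      # raw sensor input
model = doc.entity("pdm:failure-model-v3")              # trained model version
prediction = doc.entity("pdm:prediction-0412")          # produced prediction
run = doc.activity("pdm:inference-run-0412")
service = doc.agent("pdm:scoring-service")

doc.used(run, readings)
doc.used(run, model)
doc.wasGeneratedBy(prediction, run)
doc.wasAssociatedWith(run, service)

print(doc.get_provn())   # human-readable PROV-N serialization of the trace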

27 pages, 3396 KB  
Review
Internet of Things and Distributed Computing Systems in Business Models
by Albérico Travassos Rosário and Ricardo Raimundo
Future Internet 2024, 16(10), 384; https://doi.org/10.3390/fi16100384 - 21 Oct 2024
Cited by 5 | Viewed by 3971
Abstract
The integration of the Internet of Things (IoT) and Distributed Computing Systems (DCS) is transforming business models across industries. IoT devices allow immediate monitoring of equipment and processes, mitigating lost time and enhancing efficiency. Manufacturing companies, for example, use IoT sensors to monitor machinery, predict failures, and schedule maintenance, while automation via IoT reduces manual intervention, boosting productivity in smart factories and automated supply chains. IoT devices generate vast amounts of data, which businesses analyze to gain insights into customer behavior, operational inefficiencies, and market trends; Distributed Computing Systems process these data, providing actionable insights and enabling advanced analytics and machine learning for future trend predictions. IoT also facilitates personalized products and services by collecting data on customer preferences and usage patterns, enhancing satisfaction and loyalty; supports new customer interactions, such as wearable health devices; and enables subscription-based and pay-per-use models in transportation and utilities. In addition, real-time monitoring enhances security, as distributed systems respond quickly to threats, ensuring operational safety, and it aids regulatory compliance by providing accurate operational data. Through a Bibliometric Literature Review (LRSB) of 91 screened pieces of literature, this study aims to ascertain to what extent these capabilities enhance business models in terms of efficiency and effectiveness. The study concludes that, taken together, these systems give businesses a competitive edge and promote continuous innovation and adaptability to market dynamics. In particular, the integration of IoT and Distributed Systems in business models brings numerous advantages: smart infrastructures (e.g., smart grids); edge computing that allows data processing closer to the data source (e.g., autonomous vehicles); predictive analytics that help businesses anticipate issues (e.g., foreseeing equipment failures); personalized services (e.g., e-commerce platforms offering personalized recommendations to users); and enhanced security with a reduced risk of centralized attacks (e.g., blockchain technology). Future research avenues are suggested. Full article
(This article belongs to the Collection Information Systems Security)

26 pages, 3537 KB  
Article
From Data to Insight: Transforming Online Job Postings into Labor-Market Intelligence
by Giannis Tzimas, Nikos Zotos, Evangelos Mourelatos, Konstantinos C. Giotopoulos and Panagiotis Zervas
Information 2024, 15(8), 496; https://doi.org/10.3390/info15080496 - 20 Aug 2024
Cited by 6 | Viewed by 8986
Abstract
In the continuously changing labor market, understanding the dynamics of online job postings is crucial for economic and workforce development. With the increasing reliance on Online Job Portals, analyzing online job postings has become an essential tool for capturing real-time labor-market trends. This paper presents a comprehensive methodology for processing online job postings to generate labor-market intelligence. The proposed methodology encompasses data source selection, data extraction, cleansing, normalization, and deduplication procedures. The final step involves information extraction based on employer industry, occupation, workplace, skills, and required experience. We address the key challenges that emerge at each step and discuss how they can be resolved. Our methodology is applied to two use cases: the first focuses on the analysis of the Greek labor market in the tourism industry during the COVID-19 pandemic, revealing shifts in job demands, skill requirements, and employment types. In the second use case, a data-driven ontology is employed to extract skills from job postings using machine learning. The findings highlight that the proposed methodology, utilizing NLP and machine-learning techniques instead of LLMs, can be applied to different labor market-analysis use cases and offer valuable insights for businesses, job seekers, and policymakers. Full article
(This article belongs to the Special Issue Second Edition of Predictive Analytics and Data Science)
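
The normalization and deduplication steps can be pictured with the simplified pass below, which fingerprints a few normalized fields and drops repeats; the field names and cleanup rules are illustrative, not the paper's full procedure.

# Simplified normalization + deduplication pass over scraped job postings.
import hashlib
import re

def normalize(posting: dict) -> dict:
    clean = lambda s: re.sub(r"\s+", " ", s.lower()).strip()
    return {"title": clean(posting["title"]),
            "employer": clean(posting["employer"]),
            "location": clean(posting["location"])}

def deduplicate(postings: list) -> list:
    seen, unique = set(), []
    for p in postings:
        n = normalize(p)
        fingerprint = hashlib.md5("|".join(n.values()).encode()).hexdigest()
        if fingerprint not in seen:          # same ad reposted on another portal
            seen.add(fingerprint)
            unique.append(n)
    return unique

ads = [{"title": "Hotel  Receptionist", "employer": "Example Resorts", "location": "Crete"},
       {"title": "hotel receptionist", "employer": "Example Resorts ", "location": "crete"}]
print(deduplicate(ads))   # only one posting survives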

32 pages, 3606 KB  
Systematic Review
Integrating Blockchain, IoT, and XBRL in Accounting Information Systems: A Systematic Literature Review
by Mohamed Nofel, Mahmoud Marzouk, Hany Elbardan, Reda Saleh and Aly Mogahed
J. Risk Financial Manag. 2024, 17(8), 372; https://doi.org/10.3390/jrfm17080372 - 19 Aug 2024
Cited by 11 | Viewed by 11243
Abstract
Over the last few decades, remarkable technical advancements have emerged, including artificial intelligence, machine learning, big data, blockchain, cloud computing, and the Internet of Things, and these tools have the potential to change the accounting process. This study conducts a systematic literature review on using the Internet of Things (IoT), blockchain, and eXtensible Business Reporting Language (XBRL) in a single accounting information system (AIS) to enhance the quality of digital financial reports. The paper employs a systematic literature review (SLR) methodology, specifically the widely accepted PRISMA technique, with a final sample of 309 related studies published from 2013 to 2023. Our findings highlight the lack of literature on integrating these three technologies within a unified AIS. The study is significant because it proposes a new research stream that explores the possibility of integrating IoT, blockchain, and XBRL in a single accounting system, and the potential benefits of such an integration are evident, including enhanced transparency, real-time reporting capabilities, and improved data security. The paper's main contribution is that it is, to the best of our knowledge, the first to explore the integration of these three technologies. We also identify important gaps in the research and point out ways for future research to explore further how such an integrated system affects accounting practices. Full article

18 pages, 1456 KB  
Article
Insights into Cybercrime Detection and Response: A Review of Time Factor
by Hamed Taherdoost
Information 2024, 15(5), 273; https://doi.org/10.3390/info15050273 - 12 May 2024
Cited by 14 | Viewed by 8549
Abstract
Amidst an unprecedented period of technological progress, incorporating digital platforms into diverse domains of life has become indispensable, fundamentally altering the operational processes of governments, businesses, and individuals. Nevertheless, rapid digitization has also led to the emergence of cybercrime, which takes advantage of weaknesses in interconnected systems. As organizations, businesses, and individuals grow increasingly dependent on digital platforms for communication, commerce, and information sharing, malicious actors have identified the vulnerabilities in these systems and exploit them for hacking, identity theft, ransomware, and phishing attacks. This study examines 28 research papers on intrusion detection systems (IDS), and phishing detection in particular, and on how quickly detections and responses in cybersecurity can be made. We investigate various approaches and quantitative measurements to understand the link between detection time and reaction time and emphasize the necessity of minimizing both for improved cybersecurity, with a particular focus on phishing attempts. In settings such as smart grids and automotive control networks, faster attack detection is especially important, and machine learning can help. The study also stresses the need to improve protocols to address increasing cyber risks while maintaining scalability, interoperability, and resilience. Although machine-learning-based techniques show promise in detection precision and reaction speed, obstacles still need to be addressed to attain real-time capabilities and to adapt to constantly changing threats. To create effective defensive mechanisms against cyberattacks, future research should investigate innovative methodologies, integrate real-time threat intelligence, and encourage collaboration. Full article
(This article belongs to the Special Issue Cybersecurity, Cybercrimes, and Smart Emerging Technologies)

19 pages, 6982 KB  
Article
Machine Learning-Based Lane-Changing Behavior Recognition and Information Credibility Discrimination
by Xing Chen, Song Yan, Jingsheng Wang and Yi Zhang
Symmetry 2024, 16(1), 58; https://doi.org/10.3390/sym16010058 - 1 Jan 2024
Cited by 4 | Viewed by 2131
Abstract
Intelligent Vehicle–Infrastructure Collaboration Systems (i-VICS) impose higher requirements on the real-time security of dynamic traffic information interaction, which is difficult to ensure by means of traditional static information security alone. In this study, a method is proposed that combines machine learning-based lane-changing (LC) behavior recognition with information credibility discrimination, exploiting traffic business characteristics. The method consists of three stages: LC behavior recognition based on a Support Vector Machine (SVM), LC speed prediction based on a Recurrent Neural Network (RNN), and credibility discrimination of speed information under LC states. First, the labeling rules for vehicle LC behavior and the inputs/outputs of each stage were determined, and the raw NGSIM data were processed to obtain datasets for LC behavior identification and LC speed prediction; the SVM classification and RNN prediction models were then trained and tested. Afterwards, a model for discriminating the credibility of speed information under LC states was constructed, and real vehicle speed data were processed for verification. The results show that the overall accuracy of vehicle status recognition by the SVM model was 99.18% and that the RNN model's prediction error was on the order of cm/s. Considering both transverse and longitudinal abnormal velocities, the accuracy of credibility discrimination for LC speed exceeded 97% in most experimental groups. The model can effectively identify abnormal speed data from LC vehicles and supports the real-time verification of LC vehicle speed information under i-VICS. Full article
(This article belongs to the Section Engineering and Materials)
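
The first stage, SVM-based LC behavior recognition, can be sketched on synthetic trajectory features as below; the NGSIM preprocessing and labeling rules from the paper are not reproduced.

# Sketch of stage one (SVM lane-change recognition) on synthetic features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 600
lateral_velocity = rng.normal(0, 0.15, n)      # m/s, near zero when lane keeping
heading_offset = rng.normal(0, 0.01, n)        # rad
labels = np.zeros(n, dtype=int)
changing = rng.choice(n, size=150, replace=False)
labels[changing] = 1
lateral_velocity[changing] += rng.normal(0.6, 0.1, 150)   # LC adds lateral motion
heading_offset[changing] += rng.normal(0.05, 0.01, 150)

X = np.column_stack([lateral_velocity, heading_offset])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print(f"hold-out accuracy: {clf.score(X_te, y_te):.3f}")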

20 pages, 2660 KB  
Article
Reinforcement Learning-Based Multi-Objective of Two-Stage Blocking Hybrid Flow Shop Scheduling Problem
by Ke Xu, Caixia Ye, Hua Gong and Wenjuan Sun
Processes 2024, 12(1), 51; https://doi.org/10.3390/pr12010051 - 25 Dec 2023
Cited by 11 | Viewed by 3534
Abstract
Consideration of upstream congestion caused by busy downstream machinery, as well as of transportation time between different production stages, is critical for improving production efficiency and reducing energy consumption in process industries. A two-stage hybrid flow shop scheduling problem is studied with the objectives of minimizing the makespan and the total energy consumption while taking blocking and transportation restrictions into account. An adaptive objective selection-based Q-learning algorithm is designed to solve the problem. Nine state characteristics are extracted from real-time information about jobs, machines, and waiting processing queues, and eight heuristic rules, including SPT, FCFS, and Johnson, are used as scheduling actions. To address the multi-objective optimization problem, an adaptive objective selection strategy based on t-tests is designed for making action decisions; it determines the optimization objective from the confidence of the objective function under the current job and machine state, achieving coordinated optimization of multiple objectives. The experimental results indicate that, compared with standard Q-learning and the non-dominated sorting genetic algorithm, the proposed algorithm achieves average improvements of 4.19% and 22.7% in the makespan and of 5.03% and 9.8% in the total energy consumption, respectively. The generated scheduling solutions provide theoretical guidance for production scheduling in process industries such as steel manufacturing, helping enterprises reduce blocking and the transportation energy consumed between upstream and downstream stages. Full article
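
The core idea of treating dispatching rules as Q-learning actions can be sketched with a tiny tabular update, as below; the scheduling environment, the nine-feature state, and the t-test-based objective selection are replaced by a stub reward for illustration.

# Tabular Q-learning skeleton where actions are dispatching rules (SPT, FCFS, ...).
import random

ACTIONS = ["SPT", "FCFS", "LPT", "Johnson"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
Q = {}   # (state, action) -> value

def choose(state):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)                        # explore
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0))  # exploit

def update(state, action, reward, next_state):
    best_next = max(Q.get((next_state, a), 0) for a in ACTIONS)
    old = Q.get((state, action), 0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

def stub_reward(action):       # stand-in for negative makespan / energy change
    return {"SPT": 1.0, "FCFS": 0.4, "LPT": 0.2, "Johnson": 0.8}[action] + random.gauss(0, 0.1)

state = "queue-long"
for _ in range(500):
    action = choose(state)
    update(state, action, stub_reward(action), state)
print(max(ACTIONS, key=lambda a: Q.get((state, a), 0)))   # rule the agent learns to prefer

In the paper, the reward and the choice between the makespan and energy objectives come from the scheduling environment rather than a fixed table.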

19 pages, 8110 KB  
Article
Prediction of Short-Shot Defects in Injection Molding by Transfer Learning
by Zhe-Wei Zhou, Hui-Ya Yang, Bei-Xiu Xu, Yu-Hung Ting, Shia-Chung Chen and Wen-Ren Jong
Appl. Sci. 2023, 13(23), 12868; https://doi.org/10.3390/app132312868 - 30 Nov 2023
Cited by 6 | Viewed by 3484
Abstract
For a long time, the traditional injection molding industry has faced challenges in improving production efficiency and product quality. With advancements in Computer-Aided Engineering (CAE) technology, many factors that could lead to product defects have been eliminated, reducing the costs associated with trial runs during the manufacturing process. However, CAE simulation results still deviate slightly from actual conditions, so relying solely on CAE simulations cannot entirely prevent product defects, and businesses still need real-time quality checks during production. In this study, we developed a Back Propagation Neural Network (BPNN) model that takes injection molding process states as input to predict the occurrence of short-shot defects during the injection molding process. Additionally, we investigated the effectiveness of two transfer learning methods. The first trains the neural network on CAE simulation data for products with a length–thickness ratio (LT) of 60 and then applies transfer learning with real process data; the second trains the network on real process data for products with LT60 and then applies transfer learning with real process data from products with LT100. From the results, we infer that transfer learning, compared to conventional neural network training, can prevent overfitting given the same amount of training data. The short-shot prediction models trained using transfer learning achieved accuracies of 90.2% and 94.4% on the validation datasets of products with LT60 and LT100, respectively. Integrated with the injection molding machine, this enables production personnel to determine whether a product will experience a short shot before the mold is opened, thereby gaining time for troubleshooting. Full article
(This article belongs to the Special Issue The Future of Manufacturing and Industry 4.0)
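
The transfer-learning procedure (pretrain on CAE simulation data, then fine-tune on scarce real process data) can be sketched in Keras as below; layer sizes, the number of process features, and the random data are placeholders, not the paper's configuration.

# Keras sketch of the transfer-learning idea: pretrain a small BPNN on CAE
# simulation data, then fine-tune on real process data (placeholder data).
import numpy as np
from tensorflow import keras

def make_bpnn(n_features: int) -> keras.Model:
    model = keras.Sequential([
        keras.Input(shape=(n_features,)),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(16, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),   # P(short shot)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

rng = np.random.default_rng(0)
X_sim, y_sim = rng.normal(size=(2000, 5)), rng.integers(0, 2, 2000)   # CAE data
X_real, y_real = rng.normal(size=(200, 5)), rng.integers(0, 2, 200)   # machine data

model = make_bpnn(5)
model.fit(X_sim, y_sim, epochs=20, verbose=0)          # pretraining on simulations

for layer in model.layers[:-1]:                        # freeze feature layers,
    layer.trainable = False                            # retrain only the output head
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_real, y_real, epochs=20, verbose=0)        # fine-tuning on real shots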