Search Results (166)

Search Parameters:
Keywords = scheduling single machine

21 pages, 2775 KB  
Article
Deep Learning-Based Disaggregation of EV Fast Charging Stations for Intelligent Energy Management in Smart Grids
by Sami M. Alshareef
Sustainability 2026, 18(6), 2729; https://doi.org/10.3390/su18062729 - 11 Mar 2026
Viewed by 151
Abstract
This paper investigates the deployment of four electric vehicle (EV) fast-charging stations (FCSs) in a commercial facility’s parking area, where multiple service centers operate on varying schedules. The commercial load demand is modeled using Monte Carlo Simulation (MCS), introducing realistic stochastic variability and overlapping power patterns with FCS operations. A single-point sensing strategy at the point of common coupling (PCC) is adopted for load disaggregation. Continuous Wavelet Transform (CWT) is employed for feature extraction, and multiclass classification is performed using Error-Correcting Output Codes (ECOC). Under commercial load interference, conventional machine-learning classifiers achieve a macro classification accuracy of 89.53%, with the lowest class accuracy dropping to 76.74%. To address this limitation, a deep learning (DL)-based framework is implemented. Simulation results demonstrate that the proposed DL approach improves overall classification accuracy from 89.53% to 100%, corresponding to a 10.47 percentage-point absolute improvement, an 11.7% relative gain, and complete elimination of misclassification errors. Notably, the most affected charging station class (FCS2) accuracy increases from 76.74% to 100%. These results demonstrate that the proposed deep learning framework reliably detects FCS activations even under overlapping, variable, and high-power commercial load conditions, enabling more efficient energy management and optimal utilization of electrical resources, reduced energy waste, and enhanced sustainability of EV charging infrastructure within commercial facilities. Full article
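
The single-point sensing idea above can be illustrated with a toy sketch: windowed power features are extracted from the aggregate signal at the PCC and thresholded to flag charging activity. The function names, window size, threshold, and trace values are all illustrative assumptions; the paper's actual pipeline uses CWT features and ECOC/deep-learning classifiers.

```python
# Toy sketch of single-point load sensing at the PCC: extract windowed
# mean/peak power features from an aggregate power trace, a simplified
# stand-in for the CWT feature extraction described above.
# All names, thresholds, and values here are illustrative assumptions.

def window_features(signal, window):
    """Split the aggregate power trace into windows and return
    (mean, peak) features per window."""
    feats = []
    for start in range(0, len(signal) - window + 1, window):
        chunk = signal[start:start + window]
        feats.append((sum(chunk) / window, max(chunk)))
    return feats

def detect_fcs_activity(features, mean_threshold):
    """Flag windows whose mean power exceeds a threshold, a crude
    proxy for 'an FCS is drawing power' in that window."""
    return [mean >= mean_threshold for mean, _peak in features]

# Example: baseline commercial load ~50 kW, FCS adds ~150 kW mid-trace.
trace = [50.0] * 8 + [200.0] * 8 + [52.0] * 8
feats = window_features(trace, window=4)
flags = detect_fcs_activity(feats, mean_threshold=100.0)
```

In the paper's setting the thresholding step is replaced by a trained classifier, which is what lifts accuracy under overlapping commercial load.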

35 pages, 633 KB  
Article
Bi-Objective Optimization for Scalable Resource Scheduling in Dense IoT Deployments via 5G Network Slicing Using NSGA-II
by Francesco Nucci and Gabriele Papadia
Telecom 2026, 7(2), 24; https://doi.org/10.3390/telecom7020024 - 2 Mar 2026
Viewed by 234
Abstract
The proliferation of Internet of Things (IoT) devices demands efficient resource management in fifth-generation (5G) networks, particularly through network slicing mechanisms supporting massive machine-type communications (mMTCs). This paper addresses IoT connectivity in 5G network slicing through a bi-objective optimization framework that balances operational costs with quality-of-service (QoS) requirements across heterogeneous 5G network slices. The proposed approach employs a tailored Non-dominated Sorting Genetic Algorithm II (NSGA-II) incorporating domain-specific constraints, including device priorities, slicing isolation requirements, radio resource limitations, and battery capacity. Through extensive simulations on scenarios with up to 5000 devices, our method generates diverse Pareto-optimal solutions achieving hypervolume improvements of 8–13% over multi-objective DRL, 15–28% over single-objective DRL baselines, and 22–41% over heuristic approaches, while maintaining computational scalability suitable for real-time network management (sub-2 min execution). Validation with real-world traffic traces from operational deployments confirms the algorithm's robustness under realistic burstiness and temporal patterns, with only 7% performance degradation versus synthetic traffic, within expected simulation–reality gaps. This work provides a practical framework for IoT resource scheduling in current 5G and future Beyond-5G (B5G) telecommunications infrastructures.
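
The non-dominated sorting step at the core of NSGA-II can be sketched in a few lines for a bi-objective minimization problem (e.g. operational cost versus a QoS penalty). This is a minimal illustration, not the authors' tailored implementation with its slicing constraints; the sample objective pairs are made up.

```python
# Minimal non-dominated sorting, the ranking step NSGA-II uses to
# group candidate solutions into successive Pareto fronts
# (minimization in both objectives).

def dominates(a, b):
    """a dominates b if it is no worse in every objective and
    strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated_sort(points):
    """Return the list of Pareto fronts as sorted index lists."""
    remaining = set(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(sorted(front))
        remaining -= set(front)
    return fronts

# (cost, qos_penalty) pairs for five candidate slice allocations.
objs = [(1.0, 5.0), (2.0, 3.0), (3.0, 1.0), (3.0, 3.0), (4.0, 4.0)]
fronts = non_dominated_sort(objs)
```

The first front here contains the three trade-off solutions; NSGA-II then applies crowding distance within fronts to preserve diversity.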

30 pages, 503 KB  
Article
Due-Window Assignment Scheduling Problems with Position-Dependent Weights, Truncated Learning Effects and Past-Sequence-Dependent Setup-Times
by Li-Yan Wang
Symmetry 2026, 18(3), 396; https://doi.org/10.3390/sym18030396 - 24 Feb 2026
Viewed by 243
Abstract
This paper addresses single-machine due-window assignment scheduling with truncated learning effects and past-sequence-dependent setup times. In practical production systems, truncated learning effects capture the ceiling of skill improvement, past-sequence-dependent setup times reflect sequence-dependent switching efforts, and position-dependent weights allow varying importance across job positions. The due-window assignment includes the common, slack, and different assignments. The objective is to minimize the weighted sum of earliness and tardiness, the number of early and tardy jobs, the due-window cost, and the completion time, where the weights are position-dependent. For the common and slack due-window assignments, several optimal structural properties are established. Based on these, the optimal schedule can be derived by solving a series of assignment problems, i.e., the problems can be solved in polynomial time O(n^5), where n is the number of jobs. Under the common, slack, and different assignments without the early- and tardy-job cost, the optimal schedule can be obtained from a single assignment problem, i.e., the problems can be solved in O(n^3) time. In addition, an extension to job-dependent processing times is given. This study extends existing research models in this domain and proposes polynomial-time algorithms that guarantee optimal solutions for minimizing the objective cost function. The proposed approach not only advances scheduling theory by handling multiple realistic constraints simultaneously but also offers a practical decision-making tool for just-in-time production systems. The algorithms are tested numerically and compared with simulated annealing and tabu search algorithms.
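
The assignment-problem reduction can be sketched as follows: build an n-by-n cost matrix c[j][r] giving the cost of placing job j in sequence position r, then find the minimum-cost job-to-position assignment. The matrix below is a hypothetical 3-job example, and a brute-force solver is used for clarity; a Hungarian-method solver would give the polynomial running times cited in the abstract.

```python
# Solving a tiny positional assignment problem by brute force.
# cost[j][r] = cost of placing job j in position r (illustrative data).
from itertools import permutations

def solve_assignment(cost):
    """Return (best_cost, best_perm), where best_perm[r] is the job
    assigned to position r. Exhaustive search: only for tiny n."""
    n = len(cost)
    best = (float("inf"), None)
    for perm in permutations(range(n)):
        total = sum(cost[perm[r]][r] for r in range(n))
        if total < best[0]:
            best = (total, perm)
    return best

cost = [[4, 2, 8],
        [4, 3, 7],
        [3, 1, 6]]
best_cost, order = solve_assignment(cost)
```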
(This article belongs to the Section Mathematics)

17 pages, 1287 KB  
Article
Time-Dependent DCE-MRI Radiomics to Predict Response to Neoadjuvant Therapy in Breast Cancer: A Multicenter Study with External Validation
by Giulia Vatteroni, Riccardo Levi, Paola Nardi, Giulia Pruneddu, Elisa Salpietro, Federica Fici, Cinzia Monti, Rubina Manuela Trimboli and Daniela Bernardi
Diagnostics 2026, 16(4), 611; https://doi.org/10.3390/diagnostics16040611 - 19 Feb 2026
Viewed by 432
Abstract
Background: The accurate prediction of response to neoadjuvant therapy (NAT) is crucial for optimizing breast cancer management. Conventional breast Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) radiomics typically relies on single post-contrast phases and may not fully capture temporal enhancement patterns related to tumor heterogeneity. This study evaluated a machine learning model based on time-dependent radiomic features extracted from pre-treatment DCE-MRI for predicting NAT response in breast cancer patients. Methods: Breast DCE-MRI examinations of women scheduled for NAT, acquired on 1.5 T scanners from three different vendors, were retrospectively collected from two centers. Tumors were automatically segmented on the third post-contrast DCE image using a 3D nnUNet model trained on 30 lesions. All DCE phases were registered to the reference image, and radiomic features were extracted from a consistent tumor region of interest across all phases. Time-dependent radiomic features were computed using linear regression modeling of feature evolution over time. A random forest classifier integrating static and time-dependent radiomic features was developed to predict pathological complete response (pCR), partial response (pPR), and non-response (pNR). Model performance was evaluated using internal validation (Center 1) and an independent external test cohort (Center 2). Results: A total of 212 patients were included (173 from Center 1 and 39 from Center 2), comprising 103 pCR, 103 pPR and 6 pNR cases. Among 759 extracted features, 30 showed significant differences across response groups. Several time-dependent texture features related to intratumoral heterogeneity were significantly associated with pNR. The model achieved AUC values of 0.80, 0.81, and 0.95 in the internal validation cohort and 0.75, 0.74, and 0.86 in the external test cohort for predicting pCR, pPR, and pNR, respectively. 
Conclusions: Time-dependent radiomic features derived from pre-treatment breast DCE-MRI enable the accurate prediction of response to NAT, with particularly strong performance in identifying non-responders. This approach may support imaging-based risk stratification and contribute to more personalized treatment. Full article
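
The "time-dependent feature" construction can be sketched simply: for each radiomic feature, fit a straight line to its values across the registered post-contrast phases and keep the slope as a new feature. The closed-form least-squares slope below is standard; the phase times and feature values are illustrative, not study data.

```python
# Per-feature temporal slope across DCE phases, a sketch of the
# linear-regression modeling of feature evolution over time.

def slope(times, values):
    """Least-squares slope of values over times: cov(t, v) / var(t)."""
    n = len(times)
    t_mean = sum(times) / n
    v_mean = sum(values) / n
    cov = sum((t - t_mean) * (v - v_mean) for t, v in zip(times, values))
    var = sum((t - t_mean) ** 2 for t in times)
    return cov / var

# A feature value measured at five post-contrast phases (minutes,
# illustrative): it rises by 0.5 per phase, so the slope is 0.5.
phases = [1.0, 2.0, 3.0, 4.0, 5.0]
feature_values = [2.0, 2.5, 3.0, 3.5, 4.0]
time_feature = slope(phases, feature_values)
```

The static features and these slopes are then concatenated as inputs to the random forest classifier.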
(This article belongs to the Special Issue Advances in Breast Diagnostics)

23 pages, 7420 KB  
Article
Machine Learning-Based Physical Layer Security for 5G/6G-Enabled Electric Vehicle Charging Network
by Livin Shaji, Yang Luo, Cheng Yin and Jie Lin
Electronics 2026, 15(4), 865; https://doi.org/10.3390/electronics15040865 - 19 Feb 2026
Viewed by 322
Abstract
The rapid deployment of electric vehicle (EV) charging infrastructure, coupled with the integration of 5G/6G and Internet of Vehicles (IoV) technologies, has transformed charging stations into cyber–physical systems that rely on wireless communication for authentication, control, and grid coordination. While existing security standards such as ISO 15118 provide cryptographic protection at upper layers, they are insufficient to address physical-layer threats inherent to wireless connectivity. In particular, wireless active eavesdropping attacks can corrupt channel estimation during the authentication phase, enabling impersonation, unauthorized charging, and disruption of grid operations. This paper proposes a machine learning-based physical layer security (PLS) framework for detecting active eavesdropping attacks in 5G/6G-enabled EV charging systems. By modeling malicious EVs as pilot-spoofing attackers, three discriminative features, namely mean power, power ratio, and angle-based feature, are extracted from received pilot signals at the charging station. Three classifiers are evaluated: single-class support vector machine (SC-SVM), Random Forest (RF), and DNN. Simulation results demonstrate that the SC-SVM maintains a stable accuracy between 94% and 96% across all attacker power levels, while RF and DNN significantly outperform it under stronger attack conditions. Specifically, under strong attacker conditions, RF achieves an accuracy of 99.9%, and DNN reaches 99.8%, both exceeding 99% detection accuracy. By preventing pilot-spoofing-based impersonation during authentication, the proposed framework enhances charging availability, billing integrity, and grid-aware scheduling in intelligent EV charging infrastructure. Full article
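
The three pilot-based detection features can be sketched from received pilot samples. The definitions below are simplified assumptions for illustration (the angle feature is reduced to the phase of the averaged sample), not the paper's exact formulas; the sample values are made up.

```python
# Sketch of pilot-based detection features at the charging station:
# mean power, power ratio, and a simplified angle feature, computed
# from complex received pilot samples. Illustrative definitions only.
import cmath

def pilot_features(samples):
    """Return (mean_power, power_ratio, mean_angle) for complex pilots."""
    powers = [abs(s) ** 2 for s in samples]
    mean_power = sum(powers) / len(powers)
    power_ratio = max(powers) / min(powers)       # spread across pilots
    mean_angle = cmath.phase(sum(samples) / len(samples))
    return mean_power, power_ratio, mean_angle

# A clean pilot vs. one corrupted by an extra (spoofed) component:
# the corrupted pilot shows higher mean power and a wider power spread.
clean = [1 + 0j, 1 + 0j, 1 + 0j]
spoofed = [1 + 0j, 2 + 0j, 1 + 1j]
clean_feats = pilot_features(clean)
spoofed_feats = pilot_features(spoofed)
```

Feature vectors like these are what the SC-SVM, RF, and DNN classifiers consume to flag active eavesdropping.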

20 pages, 354 KB  
Article
Study on Controllable Processing Time and Minmax Group Scheduling with Common Due-Window Assignment
by Li-Han Zhang, Ming-Hui Li and Lin Lin
Symmetry 2026, 18(2), 358; https://doi.org/10.3390/sym18020358 - 14 Feb 2026
Cited by 1 | Viewed by 190
Abstract
We consider the single-machine group scheduling problem with controllable processing times (i.e., resource allocation) under a common due-window (condw) assignment. The objective is to minimize a total cost composed of earliness, tardiness, due-window-related penalties, and resource consumption. Motivated by realistic production settings such as aerospace component machining and electronics batch assembly, the study addresses the joint optimization of group sequence, job sequence, due-window placement, and resource allocation. For linear and convex resource models, we propose a branch-and-bound (B&B) algorithm and efficient heuristics. Numerical experiments show that the B&B algorithm can solve instances with up to 250 jobs and 16 groups. The heuristics (UB), including a simulated annealing (SA) algorithm, obtain near-optimal solutions with an average error below 0.05% in far less time, demonstrating their practical usefulness for real-time scheduling.
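
The earliness/tardiness cost structure of a common due-window can be sketched directly: jobs run in a given order, and each completion time is penalized if it falls before or after the window [d1, d2], plus a charge on the window width. The penalty weights and job data below are illustrative, not the paper's instances, and resource allocation is omitted for brevity.

```python
# Evaluating a common due-window schedule: earliness, tardiness, and
# window-width penalties for jobs run in the given order.

def due_window_cost(proc_times, d1, d2, alpha, beta, gamma):
    """Total cost = alpha*earliness + beta*tardiness + gamma*(d2 - d1)."""
    t, cost = 0.0, gamma * (d2 - d1)
    for p in proc_times:
        t += p                          # completion time of this job
        if t < d1:
            cost += alpha * (d1 - t)    # finished early
        elif t > d2:
            cost += beta * (t - d2)     # finished tardy
    return cost

# Three jobs, window [4, 6]: completions 2 (early by 2), 5 (inside),
# and 9 (tardy by 3).
total = due_window_cost([2, 3, 4], d1=4, d2=6, alpha=1.0, beta=2.0, gamma=0.5)
```

Optimizing over the job order, window placement, and (in the paper) resource allocation is what the B&B algorithm and heuristics search over.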
26 pages, 2547 KB  
Article
An Artificial Plant Community with a Random-Pairwise Single-Elimination Tournament System for Conflict-Free Human–Machine Collaborative Manufacturing in Industry 5.0
by Zhengying Cai, Xinfei Dou, Cancan He, Huiyan Deng and Zhen Liu
Machines 2026, 14(2), 205; https://doi.org/10.3390/machines14020205 - 10 Feb 2026
Viewed by 264
Abstract
Human–machine collaborative manufacturing plays an important role in emerging Industry 5.0 and smart manufacturing. However, addressing the conflict-free human–machine collaborative manufacturing problem (CHMCMP) is extremely challenging because the cooperation and conflict between humans and machines are closely intertwined. This article examines the CHMCMP within the context of integrating the flexible job-shop scheduling problem (FJSP) and the flow-shop scheduling problem (FSP). Firstly, the CHMCMP was modeled as a job-flow-shop scheduling problem (JFSP), where machine processing is an FJSP and human operation is an FSP. Our goal is to complete all manufacturing jobs while pursuing multi-objective optimization, i.e., high manufacturing performance, conflict-free human–machine collaboration, and low no-load energy consumption. Secondly, an improved artificial plant community (APC) algorithm was developed to solve the NP-hard problem. A random-pairwise single-elimination tournament system is introduced for elite selection, with a time complexity of O(S) linearly correlated with the population size (S), superior to the sorting-based elite selection used by most evolutionary algorithms with polynomial time complexity, i.e., O(S^3) for the genetic algorithm (GA) and O(S^2) for the non-dominated sorting genetic algorithm-II (NSGA-II). Thirdly, a medium-scale benchmark dataset was exploited according to a human–machine collaborative manufacturing scenario. The Gantt charts of machine processing and human operation reveal that the FJSP and the FSP are entangled and interdependent in the CHMCMP, and solving the FJSP and FSP separately cannot eliminate the conflict between the two. Compared with other state-of-the-art algorithms, the APC algorithm improves the makespan by up to 11.38%, the total transfer time of humans by up to 14.09%, and the no-load processing energy consumption by up to 12.62% with conflict avoidance.
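
The random-pairwise single-elimination tournament can be sketched as follows: shuffle the population, pair neighbours, keep each pair's winner, and repeat until one individual remains. Each individual plays at most one match per round, so the total number of comparisons is linear in the population size S. Fitness here is a plain number to minimize; the setup is purely illustrative.

```python
# Random-pairwise single-elimination tournament for elite selection,
# with O(S) total comparisons (in fact S - 1 matches overall).
import random

def tournament_elite(population, fitness, rng):
    """Return the surviving individual after single elimination."""
    pool = population[:]
    rng.shuffle(pool)                   # random pairings
    while len(pool) > 1:
        winners = []
        for i in range(0, len(pool) - 1, 2):
            a, b = pool[i], pool[i + 1]
            winners.append(a if fitness(a) <= fitness(b) else b)
        if len(pool) % 2 == 1:          # odd individual gets a bye
            winners.append(pool[-1])
        pool = winners
    return pool[0]

rng = random.Random(0)
# With identity fitness (minimize), individual 0 always survives.
elite = tournament_elite(list(range(10)), fitness=lambda x: x, rng=rng)
```

This is why it undercuts sort-based elite selection: no global ordering of the population is ever computed.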
(This article belongs to the Section Advanced Manufacturing)

20 pages, 1295 KB  
Article
A Conceptual AI-Based Framework for Clash Triage in Building Information Modeling (BIM): Towards Automated Prioritization in Complex Construction Projects
by Andrzej Szymon Borkowski and Alicja Kubrat
Buildings 2026, 16(4), 690; https://doi.org/10.3390/buildings16040690 - 7 Feb 2026
Viewed by 384
Abstract
Effective clash management is critical to the success of complex construction projects, yet BIM coordinators face severe information overload when modern detection tools generate thousands or even millions of collision reports, making interdisciplinary coordination increasingly difficult. This article presents a conceptual framework for using AI for collision triage in a Building Information Modeling (BIM) environment. Previous approaches have focused mainly on collision detection itself and simple, rule-based prioritization, rarely exploiting the potential of Artificial Intelligence (AI) methods for post-processing of results, which constitutes the main innovation of this work. The proposed framework describes a modular system in which collision detection results and data from BIM models, schedules (4D), and cost estimates (5D) are processed by a set of AI components, offering adaptive, data-driven decision support unlike static rule-based methods. These include: a classifier that filters out irrelevant collisions (noise), algorithms that group recurring collisions into single design problems, a model that assesses the significance of collisions by determining a composite ‘AI Triage Score’ indicator, and a module that assigns responsibility to the appropriate trades and process participants. The framework leverages supervised machine learning methods (gradient boosting algorithms, selected for their effectiveness with tabular data) for noise filtering, density-based clustering (HDBSCAN, chosen for its ability to detect clusters of varying densities without predefined cluster count) for clash aggregation, and multi-criteria scoring models for priority assessment. The article also discusses a potential way to integrate the framework into the existing BIM workflow and possible scenarios for its validation based on case studies and expert evaluation. 
The proposed conceptual framework represents a step towards moving from manual, intuitive collision triage to a data- and AI-based approach, which can contribute to increased coordination efficiency, reduced risk of errors, and better use of design resources. As a conceptual study, the framework provides a foundation for future empirical validation and its limitations include dependency on historical training data availability and the need for calibration to project-specific contexts. Full article
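
The composite "AI Triage Score" idea can be sketched as a weighted multi-criteria score over per-clash factors. The factor names, weights, and scales below are illustrative assumptions, not the framework's actual scoring model.

```python
# Multi-criteria priority scoring for detected clashes: each clash has
# normalized factor values in [0, 1], and the triage score is their
# weighted sum. Factor names and weights are hypothetical.

def triage_score(clash, weights):
    """Weighted sum of the clash's normalized factors."""
    return sum(weights[k] * clash[k] for k in weights)

weights = {"severity": 0.5, "schedule_impact": 0.3, "cost_impact": 0.2}
clashes = [
    {"id": "A", "severity": 0.9, "schedule_impact": 0.8, "cost_impact": 0.4},
    {"id": "B", "severity": 0.2, "schedule_impact": 0.1, "cost_impact": 0.9},
]
# Highest-priority clashes first, for the coordinator's work queue.
ranked = sorted(clashes, key=lambda c: triage_score(c, weights), reverse=True)
```

In the proposed framework the factor values would come from the upstream ML components (noise filter, clustering, 4D/5D data) rather than being set by hand.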
(This article belongs to the Section Construction Management, and Computers & Digitization)

20 pages, 1878 KB  
Article
Research on Scheduling of Metal Structural Part Blanking Workshop with Feeding Constraints
by Yaping Wang, Xuebing Wei, Xiaofei Zhu, Lili Wan and Zihui Zhao
Math. Comput. Appl. 2026, 31(1), 24; https://doi.org/10.3390/mca31010024 - 6 Feb 2026
Viewed by 322
Abstract
Taking a metal structural part blanking workshop as the application background, this study addresses the challenges of high material variety, long crane feeding travel caused by heterogeneous line-side storage layouts, and frequent machine stoppages due to the limited feeding capacity of a single overhead crane. To this end, an integrated machine–crane dual-resource scheduling model is developed by explicitly considering line-side storage locations. The objective is to minimize the maximum waiting time among all machine tools. Under constraints of material assignment, processing sequence, and the crane’s single-task execution and travel requirements, the storage positions of materials in line-side buffers are jointly optimized. To solve the problem, a genetic algorithm with fitness-value-based crossover is proposed, and a simulated-annealing acceptance criterion is embedded to suppress premature convergence and enhance the ability to escape local optima. Comparative experiments on randomly generated instances show that the proposed algorithm can significantly reduce the maximum waiting time and yield more stable results for medium- and large-scale cases. Furthermore, a simulation based on real production data from an industrial enterprise verifies that, under limited feeding capacity, the proposed method effectively shortens material-waiting time, improves equipment utilization, and enhances production efficiency, demonstrating its effectiveness. Full article
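
The embedded simulated-annealing acceptance criterion can be sketched on its own: an improving offspring always replaces its parent, while a worse one is still accepted with Metropolis probability exp(-delta/T), which is what suppresses premature convergence. The parameters and costs below are illustrative.

```python
# Metropolis acceptance rule as embedded in the genetic algorithm:
# accept improvements unconditionally, and worse solutions with
# probability exp(-(child - parent) / T).
import math
import random

def sa_accept(parent_cost, child_cost, temperature, rng):
    """Decide whether the child solution replaces the parent."""
    if child_cost <= parent_cost:
        return True
    return rng.random() < math.exp(-(child_cost - parent_cost) / temperature)

rng = random.Random(42)
improved = sa_accept(10.0, 9.0, temperature=1.0, rng=rng)     # always kept
far_worse = sa_accept(10.0, 30.0, temperature=0.01, rng=rng)  # ~never kept
```

At high temperature even poor offspring survive occasionally, preserving diversity; as the temperature is lowered the search converges.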

28 pages, 5845 KB  
Article
High-Accuracy ETA Prediction for Long-Distance Tramp Shipping: A Stacked Ensemble Approach
by Pengfei Huang, Jinfen Cai, Jinggai Wang, Hongbin Chen and Pengfei Zhang
J. Mar. Sci. Eng. 2026, 14(2), 177; https://doi.org/10.3390/jmse14020177 - 14 Jan 2026
Viewed by 530
Abstract
The Estimated Time of Arrival (ETA) of vessels is a vital operational indicator for voyage planning, fleet deployment, and resource allocation. However, most existing studies focus on short-distance liner services with fixed routes, while ETA prediction for long-distance tramp bulk carriers remains insufficiently accurate, often resulting in operational inefficiencies and charter party disputes. To fill this gap, this study proposes a data-driven stacking ensemble learning framework that integrates Light Gradient-Boosting Machine (LightGBM), Extreme Gradient Boosting (XGBoost), and Random Forest (RF) as base learners, combined with a Linear Regression meta-learner. This framework is specifically tailored to the unique complexities of tramp shipping, advancing beyond traditional single-model approaches by incorporating systematic feature engineering and model fusion. The study also introduces the construction of a comprehensive multi-dimensional AIS feature system, incorporating baseline, temporal, speed-related, course-related, static, and historical behavioral features, thereby enabling more nuanced and accurate ETA prediction. Using AIS trajectory data from bulk carrier voyages between Weipa (Australia) and Qingdao (China) in 2023, the framework leverages multi-feature fusion to enhance predictive performance. The results demonstrate that the stacking model achieves the highest accuracy, reducing the Mean Absolute Error (MAE) to 3.30 h—a 74.7% improvement over the historical averaging benchmark and an 11.3% reduction compared with the best individual model, XGBoost. Extensive performance evaluation and interpretability analysis confirm that the stacking ensemble provides stability and robustness. Feature importance analysis reveals that vessel speed, course stability, and remaining distance are the primary drivers of ETA prediction. 
Additionally, meta-learner weighting analysis shows that LightGBM offers a stable baseline, while systematic deviations in XGBoost predictions act as effective error-correction signals, highlighting the complementary strengths captured by the ensemble. The findings provide operational insights for maritime logistics and port management, offering significant benefits for port scheduling and maritime logistics management. Full article
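
The stacking idea can be sketched as follows: each base model produces an ETA prediction, a linear meta-learner combines them, and quality is measured by MAE as in the study. The base predictions, meta-weights, and true ETAs below are illustrative numbers, and the meta-weights are fixed rather than fitted.

```python
# Linear meta-learner over base-model ETA predictions (hours), plus
# the MAE metric used to compare models. All numbers are made up.

def stack_predict(base_preds, weights, bias=0.0):
    """Weighted combination of base-model predictions."""
    return bias + sum(w * p for w, p in zip(weights, base_preds))

def mae(y_true, y_pred):
    """Mean absolute error across voyages."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical (lightgbm-like, xgboost-like, rf-like) predictions
# for three voyages, combined with fixed meta-weights.
base = [(100.0, 104.0, 102.0), (250.0, 246.0, 252.0), (75.0, 77.0, 73.0)]
weights = (0.4, 0.35, 0.25)
ensemble = [stack_predict(p, weights) for p in base]
error = mae([102.0, 249.0, 75.0], ensemble)
```

In a real stacking setup the meta-weights are learned by fitting the linear regressor on out-of-fold base-model predictions, which is what lets systematic base-model deviations act as correction signals.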
(This article belongs to the Section Ocean Engineering)

33 pages, 4474 KB  
Article
An Improved Multi-Objective Memetic Algorithm with Q-Learning for Distributed Hybrid Flow Shop Considering Sequence-Dependent Setup Times
by Yong Shen, Yibo Liu, Hongwei Kang, Xingping Sun and Qingyi Chen
Symmetry 2026, 18(1), 135; https://doi.org/10.3390/sym18010135 - 9 Jan 2026
Cited by 1 | Viewed by 355
Abstract
Most multi-objective studies on distributed hybrid flow shops that include tardiness-related objectives focus solely on optimizing makespan alongside a single tardiness objective. However, in real-world scenarios with strict contractual deadlines or high penalty costs for delays, minimizing both total tardiness and the number of tardy jobs becomes critically important. This paper addresses this gap by prioritizing tardiness-related objectives while simultaneously optimizing makespan, total tardiness, and the number of tardy jobs. It investigates a distributed hybrid flow shop scheduling problem (DHFSP) that exhibits symmetries among its machines. We propose an improved multi-objective memetic algorithm incorporating Q-learning (IMOMA-QL) to solve this problem, featuring (1) a hybrid initialization method that generates high-quality, diverse solutions by balancing all three objectives; (2) a multi-factory SB2OX crossover operator preserving high-performance job sequences across factories; (3) six problem-specific neighborhood structures for efficient solution space exploration; and (4) a Q-learning-guided variable neighborhood search that adaptively selects neighborhood structures. Based on extensive numerical experiments across 100 generated instances and a comprehensive comparison with four comparative algorithms, the proposed IMOMA-QL demonstrates its effectiveness and proves to be a competitive method for solving the DHFSP.
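
The Q-learning-guided neighbourhood selection can be sketched with a one-row Q-table over the six neighbourhood structures: epsilon-greedy choice picks a structure, and its Q-value is updated from the improvement the move yields. The reward values, learning rate, and single-state simplification below are illustrative assumptions, not the paper's exact design.

```python
# Epsilon-greedy neighbourhood selection with a simple Q-value update:
# structures that keep producing improvements get chosen more often.
import random

def choose(q_row, epsilon, rng):
    """Epsilon-greedy pick of a neighbourhood index from one Q-row."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_row))        # explore
    return max(range(len(q_row)), key=lambda a: q_row[a])  # exploit

def update(q_row, action, reward, alpha=0.5):
    """One-step Q-update (no next-state bootstrap, for brevity)."""
    q_row[action] += alpha * (reward - q_row[action])

rng = random.Random(1)
q = [0.0] * 6                     # six neighbourhood structures
for _ in range(200):              # pretend structure 2 works best
    a = choose(q, epsilon=0.2, rng=rng)
    update(q, a, reward=1.0 if a == 2 else 0.1)
best = max(range(6), key=lambda a: q[a])
```

In the full algorithm the reward would come from the actual objective improvement of the local-search move, and the state would encode search progress.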
(This article belongs to the Section Computer)

27 pages, 382 KB  
Article
Single Machine Scheduling Problems: Standard Settings and Properties, Polynomially Solvable Cases, Complexity and Approximability
by Nodari Vakhania, Frank Werner and Kevin Johedan Ramírez-Fuentes
Algorithms 2026, 19(1), 38; https://doi.org/10.3390/a19010038 - 4 Jan 2026
Viewed by 513
Abstract
Since the publication of the first scheduling paper in 1954, a huge number of works dealing with different types of single machine problems have appeared. They addressed many heuristics and enumerative procedures, complexity results or structural properties of certain problems. Regarding surveys, often particular subjects like special objective functions were discussed or more general scheduling problems were surveyed, in which a substantial part was devoted to single machine problems. In this paper, we focus on standard settings, basic structural properties of these settings, polynomial algorithms and complexity and approximation issues, which have not been reviewed so far, and suggest some future work in this area. Full article
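
A classic example of the polynomially solvable cases such surveys cover: minimizing total completion time on a single machine, solved optimally by the Shortest Processing Time (SPT) rule, justified by a pairwise-interchange argument. The job data below are illustrative.

```python
# SPT rule for the single-machine total-completion-time problem:
# sequencing jobs in nondecreasing processing-time order is optimal.

def total_completion_time(proc_times):
    """Sum of completion times for jobs run in the given order."""
    t, total = 0, 0
    for p in proc_times:
        t += p                      # this job's completion time
        total += t
    return total

def spt_order(proc_times):
    """Sort jobs by nondecreasing processing time."""
    return sorted(proc_times)

jobs = [4, 1, 3, 2]
arbitrary = total_completion_time(jobs)            # 4 + 5 + 8 + 10 = 27
optimal = total_completion_time(spt_order(jobs))   # 1 + 3 + 6 + 10 = 20
```

Rules of this kind (SPT, EDD, and their weighted variants) are the backbone of the polynomial algorithms and structural properties the survey reviews.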
(This article belongs to the Special Issue 2024 and 2025 Selected Papers from Algorithms Editorial Board Members)

18 pages, 1327 KB  
Article
Affective Response Dataset for Virtual Workspaces: Based on Color Stimuli and Multimodal Physiological Signals
by Yimeng Zhang, Ting Li, Zihan Li, Jean-Marc Pondo, Xiaobo Wang and Ping An
Sensors 2025, 25(24), 7461; https://doi.org/10.3390/s25247461 - 8 Dec 2025
Cited by 1 | Viewed by 768
Abstract
In the context of post-pandemic remote work normalization and the emergence of the metaverse, virtual workspaces have attracted significant attention as critical digital infrastructure with promising application prospects. While virtual workspaces enable efficient task performance, compared with traditional ones, the lack of emotional connection between humans and machines adversely affects participants' mental health. The emergence of affective computing has made it possible to endow virtual workspaces with "affective intelligence". Therefore, this study aims to clarify the relationship between color and participants' emotions in virtual workspaces through an experiment involving 48 participants. Eight virtual workspaces were constructed, incorporating four color conditions (red, blue, yellow, and green) and two workspace types (shared and single). Data were synchronously collected using the Positive and Negative Affect Schedule (PANAS), a questionnaire item on arousal, electrodermal activity (EDA), and heart rate variability (HRV). The results successfully established specific associations between colors and emotions: red with "anxious", yellow with "happy", and blue with "calm". Although no specific emotion word was identified for green, this study successfully achieved the emotion classification of virtual workspaces and constructed a corresponding dataset. These findings provide a theoretical foundation for the development of affective computing models.
(This article belongs to the Section Intelligent Sensors)

41 pages, 3181 KB  
Article
Transmission-Path Selection with Joint Computation and Communication Resource Allocation in 6G MEC Networks with RIS and D2D Support
by Yao-Liang Chung
Future Internet 2025, 17(12), 565; https://doi.org/10.3390/fi17120565 - 6 Dec 2025
Viewed by 737
Abstract
This paper proposes a transmission-path selection algorithm with joint computation and communication resource allocation for sixth-generation (6G) mobile edge computing (MEC) networks enhanced by helper-assisted device-to-device (D2D) communication and reconfigurable intelligent surfaces (RIS). The novelty of this work lies in the joint design of three key components: a helper-assisted D2D uplink scheme, a packet-partitioning cooperative MEC offloading mechanism, and RIS-assisted downlink transmission and deployment design. These components collectively enable diverse transmission paths under strict latency constraints, helping mitigate overload and reduce delay. To demonstrate its performance advantages, the proposed algorithm is compared with a baseline algorithm without helper-assisted D2D or RIS support under two representative scheduling policies: modified maximum rate and modified proportional fair. Simulation results in single-base-station (BS) and dual-BS environments show that the proposed algorithm consistently achieves a higher effective packet-delivery success percentage and a lower average total delay. The former is defined as the fraction of packets whose total delay (uplink, MEC computation, and downlink) satisfies service-specific latency thresholds; the latter is the mean total delay of all successfully delivered packets, regardless of whether individual delays exceed their thresholds. Both metrics are evaluated separately for ultra-reliable low-latency communications, enhanced mobile broadband, and massive machine-type communications services. These results indicate that the proposed algorithm provides solid performance and robustness in supporting diverse 6G services under stringent latency requirements across different scheduling policies and deployment scenarios. Full article
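The two evaluation metrics in the abstract above can be computed directly from per-packet delay components. The following Python sketch is illustrative only (it is not the paper's code, and all names such as `Packet` and `latency_threshold` are hypothetical): it shows the effective packet-delivery success percentage and the average total delay as defined in the abstract.

```python
# Illustrative sketch (not the paper's implementation) of the two metrics
# defined in the abstract. A packet's total delay is the sum of its uplink,
# MEC computation, and downlink delays.
from dataclasses import dataclass

@dataclass
class Packet:
    uplink_delay: float       # seconds
    compute_delay: float      # MEC computation time, seconds
    downlink_delay: float     # seconds
    latency_threshold: float  # service-specific (e.g., URLLC vs. eMBB)

    @property
    def total_delay(self) -> float:
        return self.uplink_delay + self.compute_delay + self.downlink_delay

def effective_success_percentage(packets: list[Packet]) -> float:
    """Fraction (%) of packets whose total delay meets its latency threshold."""
    met = sum(1 for p in packets if p.total_delay <= p.latency_threshold)
    return 100.0 * met / len(packets)

def average_total_delay(packets: list[Packet]) -> float:
    """Mean total delay over all delivered packets, threshold met or not."""
    return sum(p.total_delay for p in packets) / len(packets)
```

In a simulation, both metrics would be accumulated per service class (URLLC, eMBB, mMTC), each with its own threshold.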

27 pages, 2640 KB  
Article
An Exact Approach for Multitasking Scheduling with Two Competitive Agents on Identical Parallel Machines
by Xin Xin, Suxia Zhou and Jinsheng Gao
Appl. Sci. 2025, 15(22), 12111; https://doi.org/10.3390/app152212111 - 14 Nov 2025
Cited by 1 | Viewed by 573
Abstract
The cloud manufacturing (CMfg) platform serves as a centralized hub for allocating and scheduling tasks to distributed resources. This paper studies a concrete two-agent model that addresses real-world industrial needs: the first agent handles long-term flexible tasks, while the second agent manages urgent short-term tasks, and both share a common due date. The second agent employs multitasking scheduling, which allows tasks to be flexibly suspended and switched. The resulting novel scheduling problem aims to minimize the total weighted completion time of the first agent’s jobs while guaranteeing the second agent’s due date. For single-machine cases, a polynomial algorithm provides an efficient baseline; for parallel machines, an exact branch-and-price approach is developed, in which the polynomial method informs the pricing problem and structural properties accelerate convergence. Computational results demonstrate significant improvements: the branch-and-price approach solves large instances (up to 40 jobs) within 7200 s, outperforming CPLEX, which fails to find solutions for instances with more than 15 jobs. The approach is scalable to industrial cloud manufacturing applications, such as automotive parts production, and is capable of handling both design validation and quality inspection tasks. Full article
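As background for the single-machine objective above, the classical rule for minimizing total weighted completion time on one machine (1 || Σ wⱼCⱼ) is WSPT: sequence jobs by decreasing weight-to-processing-time ratio. The sketch below shows plain WSPT only; the paper's polynomial baseline must additionally respect the second agent's common due date, which this illustration omits.

```python
# Background sketch, not the paper's algorithm: the classical WSPT
# (weighted shortest processing time) rule for 1 || sum w_j C_j.
def wspt_schedule(jobs):
    """jobs: list of (processing_time, weight) tuples.
    Returns (ordered jobs, total weighted completion time)."""
    # Sort by decreasing weight-to-processing-time ratio.
    order = sorted(jobs, key=lambda j: j[1] / j[0], reverse=True)
    t, total = 0.0, 0.0
    for p, w in order:
        t += p           # completion time of this job
        total += w * t   # accumulate weighted completion time
    return order, total
```

For example, jobs (p, w) = (3, 1), (1, 2), (2, 2) have ratios 1/3, 2, and 1, so WSPT runs them in the order (1, 2), (2, 2), (3, 1) for a total weighted completion time of 2·1 + 2·3 + 1·6 = 14.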
