Article

Autonomous Dispatch of Mobile Robots in Manufacturing Using Convolutional Neural Networks

Lyle School of Engineering, Southern Methodist University, Dallas, TX 75205, USA
*
Author to whom correspondence should be addressed.
Machines 2026, 14(5), 512; https://doi.org/10.3390/machines14050512
Submission received: 6 April 2026 / Revised: 1 May 2026 / Accepted: 1 May 2026 / Published: 5 May 2026
(This article belongs to the Section Automation and Control Systems)

Abstract

Material delivery plays a critical role in manufacturing efficiency, with manual retrieval introducing non-value-added (NVA) time and disrupting workflow continuity. Autonomous mobile robots (AMRs) can improve performance by enabling overlap between material transport and productive work, but their effectiveness depends on how they are deployed. In this work, a convolutional neural network (CNN)-based autonomous dispatch framework was implemented and tested in a controlled experimental setting. This study utilized a representative aerospace assembly task to evaluate three material delivery approaches across 60 runs, including manual walking, manual AMR dispatch, and autonomous AMR deployment. System performance was assessed using total operation time, panel lead times, and non-value-added time. Results showed that manual AMR dispatch significantly increased total operation time and non-value-added time due to sequential task execution. Autonomous deployment reduced this inefficiency by enabling preemptive material transport and overlap with operator activity, but did not significantly outperform manual walking under the tested conditions. Operator variability also influenced non-value-added time under automated dispatch. These results indicate that AMR effectiveness depends strongly on deployment timing and workflow synchronization, with the greatest potential benefits expected in environments that allow greater overlap between transport and productive work.

1. Introduction

Efficient material handling is a critical component of modern manufacturing systems, directly influencing productivity, workflow continuity, and overall system performance. In many industrial environments, operators repeatedly retrieve materials from designated pick up locations and transport them to workstations. While straightforward, this process introduces non-value-added (NVA) time, interrupts task continuity, and contributes to operator fatigue.
Autonomous mobile robots (AMRs) are increasingly being adopted in manufacturing environments to support material transport and reduce reliance on manual handling. By enabling material transport without direct operator involvement, AMRs offer the potential for parallelization between delivery and productive work. However, performance improvements are not guaranteed. When robot movement is initiated only after materials are required, deployment becomes sequential, leading to idle waiting time and reduced efficiency.
This highlights a key limitation in current implementations. The effectiveness of AMR integration depends not only on the presence of robotic systems, but on how and when they are deployed within the workflow. Preemptive and context-aware dispatch strategies are necessary to fully leverage the potential for overlapping robot travel with operator task execution.
Computer vision and machine learning provide a pathway for enabling such strategies. Convolutional neural networks (CNNs) can monitor manufacturing processes in real-time and support automated decision making based on workflow progression. By integrating perception with control, AMR deployment can be triggered dynamically in response to process state rather than relying on manual intervention.
With this in mind, the objective of this study was to quantify the impact of deployment strategy on AMR effectiveness and to determine the conditions under which autonomous dispatch provides measurable benefits over manual approaches. The results provide insight into the role of workflow synchronization, operator variability, and system design in realizing the full potential of AMR-based material delivery.
The remainder of this work is organized as follows. Section 2 explores related work relevant to the inclusion of AMRs and CNNs in manufacturing workflows. Section 3 presents the experimental setup, perception system, and autonomous dispatch logic. Section 4 provides a detailed analysis of system performance across three material delivery scenarios. Section 5 discusses key findings and implications for real-world deployment. Finally, Section 6 summarizes the contributions of this work.

2. Related Work

AMRs have seen increasing adoption across a range of industries due to their flexibility, autonomy, and ability to operate in dynamic environments. Beyond traditional warehouse and logistics applications, AMRs have been explored in mobile manipulation systems [1], cyber–physical production platforms [2], healthcare [3], and facilities management [4] to improve efficiency, reduce operator workload, and enhance system responsiveness.
Within manufacturing specifically, AMRs are increasingly utilized for intralogistics, offering advantages over both manual material transport and traditional automated guided vehicles that rely on fixed infrastructure [5,6]. Unlike manual transport, which requires operators to leave their workstations and introduces NVA time, AMRs enable material delivery without interrupting ongoing tasks. Their ability to autonomously navigate and integrate with dynamic manufacturing environments supports applications in workstation supply, part delivery, and production line coordination [7].
However, realizing these benefits depends on effective integration with existing production systems and control architectures [8,9]. In practice, this integration is non-trivial, as manufacturing environments involve dynamic task execution, variable processing times, and real-time material demand, making dispatch a complex problem [10]. Existing approaches largely focus on routing, scheduling, and system-level integration rather than real-time, context-aware deployment strategies [11]. As a result, many implementations rely on manually triggered dispatch or predefined scheduling, resulting in sequential operation and limiting the robot’s ability to operate in parallel with ongoing tasks.
A potential solution to these limitations is the use of computer vision and machine learning to provide real-time awareness of manufacturing processes. CNNs, in particular, have been widely adopted for various tasks. For example, CNN-based systems are used to detect unsafe operator behavior and hazardous conditions to improve situational awareness and worker safety [12,13]. They are also applied to monitor manufacturing processes such as computer numerical control machining, printed circuit board etching, and additive manufacturing to support process stability and quality control [14,15,16]. Additionally, CNNs analyze sensor data for fault detection and tool condition monitoring, enabling proactive maintenance strategies [17,18].
While CNNs have been successfully applied to a wide range of perception and monitoring tasks, their use in conjunction with AMRs has primarily focused on navigation, localization, and obstacle avoidance in unstructured environments [19,20,21]. These approaches enable robot autonomy but do not incorporate process-level understanding into higher-level operational decisions.
Despite the extensive use of CNNs for monitoring and AMRs for material transport, limited work has explored the integration of CNN-based process monitoring with AMR dispatch logic. Prior research has primarily focused on state-aware dispatch strategies based on digital twin frameworks, sensor-driven signals, and multi-agent coordination for task allocation and routing [2,8,22]. While these methods provide system-level awareness for scheduling and coordination, they generally rely on predefined signals or structured data inputs rather than direct perception of workflow progression.
As a result, the use of real-time process state estimation through CNN-based methods to inform and optimize material delivery decisions remains largely unaddressed. This ultimately highlights the need for context-aware dispatch strategies that leverage perception-driven insights to improve AMR utilization and overall manufacturing efficiency.

3. Materials and Methods

3.1. Manufacturing Testbed and Experimental Design

A representative aerospace assembly process was developed to evaluate the impact of material delivery strategies on manufacturing performance. The experimental testbed consisted of a fuselage model requiring sequential installation of two removable panels, as shown in Figure 1.
In Figure 2, the two panels are designated as Panel 1 and Panel 2, and each corresponding fastener location is explicitly labeled. The holes for each panel are numbered starting at the top-right corner and proceeding in a counterclockwise direction. These labeled positions were used to define a structured installation sequence in which Panel 1 was installed first, with bolts and washers placed in specific locations at designated steps throughout the procedure, followed by an analogous installation process for Panel 2.
Each experimental run followed a standardized sequence of operations, including tool acquisition, hole inspection, and panel placement and fastening using the previously defined installation sequence. This process was divided into discrete steps to enable detailed temporal analysis and consistent comparison across trials. Three material retrieval and delivery scenarios were evaluated, as summarized in Algorithm 1. The overarching workflow remained consistent across scenarios, with differences arising only in the method of panel acquisition and delivery.
In the first scenario, operators manually walked between the material drop off location (MDO) and the material pick up location (MPU) to retrieve panels, representing a fully manual baseline condition. In the second and third scenarios, material transport was performed using an OMRON LD-60 autonomous mobile robot platform shown in Figure 3.
Algorithm 1 Manufacturing Workflow with Scenario-Dependent Material Handling
Inspection of Tools and Fuselage Model:
 Obtain wrench and ensure fit for use
 Inspect Holes 1–8 for debris and document defects
 Inspect Holes 9–11 for debris and document defects
 Inspect Holes 12–15 for debris and document defects
 Inspect Holes 16–17 for debris and document defects
 Inspect Holes 18–22 for debris and document defects
 Inspect confined spaces for debris and document defects

Panel 1 Acquisition:
if Scenario 1 (Manual Walking) then
    Walk to MPU and retrieve Panel 1 and hardware
    Return to manufacturing area  
else if Scenario 2 (Manual AMR Dispatch) then
    Dispatch AMR to MPU via MobilePlanner to retrieve Panel 1 and hardware
    Dispatch AMR to MDO via MobilePlanner  
else if Scenario 3 (Autonomous AMR Dispatch) then
    Wait for AMR arrival at MDO and retrieve Panel 1 and hardware
end if

Panel 1 Installation:
 Hand install initial bolts (Holes 1.1, 1.4, 1.8, 1.11)
 Hand install remaining bolts (Holes 1.2, 1.3, 1.5, 1.6, 1.7, 1.9, 1.10, 1.12)
 Torque all Panel 1 bolts in numerical sequence

Panel 2 Acquisition:
if Scenario 1 (Manual Walking) then
    Walk to MPU and retrieve Panel 2 and hardware
    Return to manufacturing area  
else if Scenario 2 (Manual AMR Dispatch) then
    Dispatch AMR to MPU via MobilePlanner to retrieve Panel 2 and hardware
    Dispatch AMR to MDO via MobilePlanner  
else if Scenario 3 (Autonomous AMR Dispatch) then
    Wait for AMR arrival at MDO and retrieve Panel 2 and hardware  
end if

Panel 2 Installation:
 Hand install initial bolts (Holes 2.2, 2.5, 2.8, 2.10)
 Hand install remaining bolts (Holes 2.1, 2.3, 2.4, 2.6, 2.7, 2.9)
 Torque all Panel 2 bolts in numerical sequence

Complete Manufacturing Process:
 Return tools to cart
In the second scenario, operators manually dispatched the AMR to retrieve materials, with dispatch initiated only when materials were immediately required, resulting in sequential execution of tasks and robot travel. In the third scenario, AMR deployment was automatically triggered using a CNN-based perception system that monitored the manufacturing process via an overhead Rhombus security camera. This enabled preemptive material transport and potential overlap between robot travel and operator task execution. Across all scenarios, the approximate round trip travel distance between pick up and drop off locations, as shown in Figure 4, was 49 m.
A total of four operators participated in the study, with each operator completing five runs per scenario in a randomized order to reduce potential learning and fatigue effects. This resulted in a balanced experimental design with 60 total runs, enabling the evaluation of both scenario level effects and operator variability, which are explored later.
The primary response variables of interest for this work included total operation time, Panel 1 lead time, Panel 2 lead time, non-value-added time, and individual step durations. Time measurements were obtained through post-processing of overhead video recordings captured using the Rhombus camera system. This approach ensured consistent timing data while eliminating disruption to operator workflow during experimental execution. Representative video examples of the three experimental scenarios are available at https://drive.google.com/file/d/1VuQPxw3TsS2kYm44Ue7qtzXG65GyJMnI/view?usp=sharing (accessed on 28 April 2026).
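The time-based metrics above reduce to simple arithmetic over annotated step intervals extracted from the overhead video. The following is a minimal sketch of that bookkeeping; the `Step` structure, field names, and the example run are hypothetical and not taken from the study's actual annotation tooling.

```python
from dataclasses import dataclass

@dataclass
class Step:
    """One annotated step from the overhead video (times in seconds)."""
    name: str
    start: float
    end: float
    value_added: bool  # False for waiting, walking, and retrieval steps

def total_operation_time(steps):
    """Elapsed time from the start of the first step to the end of the last."""
    return max(s.end for s in steps) - min(s.start for s in steps)

def non_value_added_time(steps):
    """Sum of durations of all steps flagged as non-value-added."""
    return sum(s.end - s.start for s in steps if not s.value_added)

# Hypothetical run: two value-added steps separated by a retrieval walk.
run = [
    Step("inspect holes", 0.0, 40.0, True),
    Step("walk to MPU and retrieve Panel 1", 40.0, 95.0, False),
    Step("install Panel 1", 95.0, 200.0, True),
]
print(total_operation_time(run))   # 200.0
print(non_value_added_time(run))   # 55.0
```

Panel lead times follow the same pattern, measured from the step that initiates panel acquisition to completion of that panel's installation.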

3.2. CNN-Based Perception System

To monitor the representative manufacturing process and enable automated decision making for AMR deployment, two convolutional neural network models based on the YOLOv8m-cls architecture were implemented. The medium-sized YOLOv8 classification model was selected due to its balance between accuracy and computational efficiency. Because all training and inference were performed on a desktop system equipped with an NVIDIA RTX 4070 GPU, model size was not constrained by embedded hardware limitations. This allowed for improved classification performance during deployment. Additionally, the YOLOv8 framework was chosen for its ease of implementation and integration within a Python 3.12.7 workflow using the Ultralytics library. It is important to note that this work does not aim to benchmark CNN architectures, but rather to demonstrate an autonomous dispatch system enabled through reliable visual classification.
Input images for both models were obtained from an overhead Rhombus security camera providing a continuous view of the manufacturing area, as shown in Figure 5. Regions of interest (ROIs) corresponding to relevant areas of the workspace were extracted from the video stream and resized to 640 × 640 pixels to serve as input to each CNN.
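The ROI extraction step amounts to cropping a fixed rectangle from each video frame before resizing. A minimal pure-Python sketch of the crop is shown below; the frame representation and box coordinates are illustrative (the deployed system would operate on camera frames and resize the crop to 640 × 640, e.g., with `cv2.resize`, before inference).

```python
def crop_roi(frame, box):
    """Crop a rectangular region of interest from a frame.

    frame: a 2-D grid of pixels (list of rows); box: (x, y, w, h) in pixels.
    In deployment the crop would then be resized to the 640x640 CNN input.
    """
    x, y, w, h = box
    return [row[x:x + w] for row in frame[y:y + h]]

# Hypothetical 4x6 frame with one bright 2-pixel region in row 1.
frame = [[0] * 6 for _ in range(4)]
frame[1][2] = frame[1][3] = 255
roi = crop_roi(frame, (2, 1, 2, 2))  # 2x2 patch at column 2, row 1
print(roi)  # [[255, 255], [0, 0]]
```

Fixing the ROI in the overhead camera's image coordinates is what allows each CNN to see only its relevant workspace region (the MDO/tool cart for CNN 1, the fuselage for CNN 2).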
The perception system was structured into two stages. In Stage 1, CNN 1 classified the state of the material drop off location and adjacent tool cart. This model was trained to distinguish four classes representing combinations of tool presence and AMR material status, as shown in Figure 6. A numerical labeling convention was assigned to each class to facilitate integration with the autonomous dispatch logic described in the following section.
The CNN 1 dataset consisted of 12,738 labeled images distributed relatively evenly across the four classes. The dataset was randomly split into 80% training and 20% validation subsets. Training settings were configured with a maximum of 40 epochs, an input image size of 640 × 640 pixels, a batch size of 32, and default optimization parameters. As seen in Figure 7, training and validation loss values decreased to below 0.05 within eight epochs, while classification accuracy exceeded 0.99 after a single epoch. Based on this rapid convergence, training was terminated after 9 epochs. The high performance is attributed to the strong visual separability between classes, including clear differences in AMR material and torque wrench presence. These results are not indicative of overfitting, but rather reflect the distinct and easily distinguishable features present in each class.
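The random 80/20 split described above can be reproduced with a seeded shuffle; the sketch below uses hypothetical file names and a reduced image count, with the actual Ultralytics training call indicated only as a comment (parameter names follow the Ultralytics API, but the exact invocation used in the study is an assumption).

```python
import random

def train_val_split(image_paths, val_fraction=0.2, seed=0):
    """Randomly split labeled images into training and validation sets."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)  # seeded for reproducibility
    n_val = int(len(paths) * val_fraction)
    return paths[n_val:], paths[:n_val]

# Hypothetical file list standing in for the 12,738 labeled CNN 1 images.
images = [f"class1/img_{i:05d}.jpg" for i in range(100)]
train, val = train_val_split(images)
print(len(train), len(val))  # 80 20

# Training itself used the Ultralytics classification API, roughly:
#   model = YOLO("yolov8m-cls.pt")
#   model.train(data=dataset_dir, epochs=40, imgsz=640, batch=32)
```

The same split and settings apply to the CNN 2 dataset described below.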
In Stage 2, CNN 2 classified the state of the fuselage model based on panel installation progress. The input to this model consisted of cropped images of the fuselage region where panel installation occurred. Four classes were defined to represent all combinations of panel presence, as shown in Figure 8. Similar to CNN 1, a numerical labeling convention was used to support integration with the dispatch logic.
The CNN 2 dataset consisted of 9708 labeled images, with approximately 2400 images per class. An 80–20% train–validation split was used. Training settings were identical to those used for CNN 1. As seen in Figure 9, training and validation loss values decreased to below 0.05 within five epochs, while classification accuracy exceeded 0.99 after the first epoch. Consistent with this behavior, training was terminated after 10 epochs once convergence was observed. As with CNN 1, this performance is attributed to the clear visual differences between classes, specifically the presence or absence of panels on the fuselage, rather than overfitting.

3.3. Autonomous Dispatch Logic

The autonomous dispatch system was implemented through a Python-based control framework that communicated directly with the OMRON LD-60 AMR via a network socket interface. This framework enabled real-time transmission of navigation commands between predefined pick up and drop off locations based on the state of the manufacturing process.
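A navigation command over the socket interface reduces to sending a short text string to the robot's command port. The sketch below assumes an ARCL-style `goto <goal>` text protocol and hypothetical goal names; the actual command syntax, goal names, host, and port configured in MobilePlanner are assumptions, not taken from the study's code.

```python
import socket

# Hypothetical goal names as they might be configured in MobilePlanner.
MDO_GOAL = "MaterialDropOff"
MPU_GOAL = "MaterialPickUp"

def make_goto(goal):
    """Format a text navigation command (assumed ARCL-style 'goto <goal>')."""
    return f"goto {goal}\n"

def send_command(host, port, command, timeout=5.0):
    """Send one command to the robot over a TCP socket (sketch only)."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(command.encode("ascii"))

cmd = make_goto(MPU_GOAL)
# send_command("192.168.1.10", 7171, cmd)  # hypothetical robot address
```

Keeping command construction separate from transmission makes the dispatch logic testable without a live robot connection.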
Dispatch decisions were governed by a two-stage classification pipeline that was activated only when the AMR was located at the MDO. In Stage 1, the output of CNN 1 was used to detect the current state of the workstation, including tool presence and AMR material status. Specific classifications (CNN 1.2 and CNN 1.4) triggered auditory feedback from the AMR, such as commands to “grab material” or “grab material and tools”, providing real-time guidance to the operator. Compliance with these commands was inferred through subsequent visual observations by CNN 1, as the absence of both tools and materials (CNN 1.1) resulted in progression to Stage 2 for further evaluation of material requirements.
To account for the possibility of missed commands in a noisy manufacturing environment, auditory cues were repeated at fixed time intervals of 1 min if the corresponding visual state remained unchanged. This reminder interval can be adjusted as needed, but ensured continued operator guidance until the required action was completed. A classification indicating that no material was present on the AMR while tools were present on the cart (CNN 1.3) triggered progression to Stage 2, where the state of the fuselage assembly was evaluated to determine whether the manufacturing process was complete.
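The one-minute reminder behavior is a simple timing predicate over the last cue time and the current CNN 1 state. A minimal sketch, with illustrative names, is:

```python
REMINDER_INTERVAL_S = 60.0  # repeat the auditory cue every minute

def should_repeat_cue(state_unchanged, last_cue_time, now,
                      interval=REMINDER_INTERVAL_S):
    """Return True when the visual state is unchanged and the reminder
    interval has elapsed since the last auditory cue was issued."""
    return state_unchanged and (now - last_cue_time) >= interval

print(should_repeat_cue(True, last_cue_time=0.0, now=59.0))   # False
print(should_repeat_cue(True, last_cue_time=0.0, now=61.0))   # True
print(should_repeat_cue(False, last_cue_time=0.0, now=120.0)) # False
```

Once the cue is repeated, `last_cue_time` would be reset to the current time, and the cycle continues until CNN 1 observes that the required action has been completed.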
In Stage 2, CNN 2 evaluated the state of the fuselage assembly to determine whether the manufacturing process had progressed to a point where additional material would soon be required or whether the process was complete. Once both panels were installed (CNN 2.1), the manufacturing process was considered complete and the AMR was docked at a nearby location outside of the manufacturing area.
When dispatch to the material pick up location was triggered (CNN 2.2–CNN 2.4), the AMR issued auditory commands to request a specific panel based on the detected assembly state. Because the MPU was not visually monitored in the current implementation, explicit confirmation that auditory commands were received or followed was not available. To facilitate material loading in the absence of visual confirmation, a fixed waiting period of 10 s was enforced at the MPU prior to returning to the MDO. As a result, in a noisy manufacturing environment, a command may be missed, leading to the AMR returning to the MDO without material after the fixed waiting period. In such cases, subsequent classification at the MDO detects the absence of material and triggers a return to the MPU, resulting in repeated retrieval attempts until material is loaded.
The duration of the fixed waiting period was selected to approximate the time spent at the MPU during material retrieval in the baseline condition represented by Scenario 1, enabling a consistent comparison across scenarios. While this simplified implementation does not guarantee successful or time-efficient material loading, a more advanced framework could incorporate visual monitoring of the MPU to confirm successful loading, provide repeated auditory cues, and enable dynamic wait times.
To improve reliability, the dispatch logic accounted for non-ideal scenarios such as CNN 2.4, which corresponds to the presence of Panel 2 without Panel 1 on the fuselage. While this condition is not expected under normal process sequencing, it may occur if panels are delivered or installed out of order. Incorporating this condition ensured that the system remained functional under variations in operator behavior or material handling.
More complex non-sequential events or additional failure modes, such as mid-process tool changes or fastener issues, were not explicitly considered in this proof-of-concept framework. Addressing such scenarios would require additional visual states, expanded classification outputs, and more sophisticated decision logic. Nonetheless, the dispatch logic presented here enables context-aware decision making for AMR deployment within the scope of a structured and highly controlled assembly process.
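The two-stage decision logic described above can be condensed into a mapping from the pair of classification outputs to a high-level action. The sketch below follows the class semantics stated in this section (CNN 1.2/1.4 trigger auditory cues; CNN 1.1/1.3 progress to Stage 2; CNN 2.1 indicates completion; CNN 2.2–2.4 trigger dispatch), but the action strings and function shape are illustrative, not the study's implementation.

```python
def dispatch_action(cnn1_class, cnn2_class):
    """Map the two-stage classification outputs (evaluated while the AMR
    waits at the MDO) to a high-level action. Class numbers follow the
    labeling convention described in the text; actions are illustrative."""
    if cnn1_class in (2, 4):      # material (and possibly tools) still on AMR
        return "issue auditory cue"
    # CNN 1.1 or 1.3: material has been taken; evaluate the fuselage state.
    if cnn2_class == 1:           # both panels installed: process complete
        return "dock AMR"
    return "dispatch AMR to MPU"  # CNN 2.2-2.4: a panel is still required

print(dispatch_action(2, 3))  # issue auditory cue
print(dispatch_action(1, 1))  # dock AMR
print(dispatch_action(3, 2))  # dispatch AMR to MPU
```

Extending the framework to the additional failure modes noted above would amount to enlarging this mapping with further class combinations and actions.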
In order to ensure robust operation, a stability criterion of 3.5 s was enforced at both stages of the classification pipeline. More specifically, classification outputs at each stage were required to remain consistent for this duration before triggering an action. This prevented unintended AMR actions and ensured that final deployment decisions were based on stable and reliable classification results rather than transient misclassifications.
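The 3.5 s stability criterion is a temporal debounce over the classification stream: a label is acted upon only once it has persisted unchanged for the full window. A minimal sketch, with an injected timestamp so the logic is testable without a live camera, is shown below (the class and method names are hypothetical).

```python
STABILITY_WINDOW_S = 3.5  # required duration of a consistent classification

class StableClassifier:
    """Suppress transient misclassifications by requiring an output to
    persist for a fixed window before it is treated as valid."""

    def __init__(self, window=STABILITY_WINDOW_S):
        self.window = window
        self.label = None   # most recently observed label
        self.since = None   # time at which that label was first observed

    def update(self, label, now):
        """Feed one classification; return the label once stable, else None."""
        if label != self.label:
            self.label, self.since = label, now  # label changed: reset timer
            return None
        if now - self.since >= self.window:
            return label
        return None

sc = StableClassifier()
print(sc.update("CNN 1.3", 0.0))  # None (first observation)
print(sc.update("CNN 1.2", 1.0))  # None (label changed; timer resets)
print(sc.update("CNN 1.2", 3.0))  # None (only 2.0 s of stability)
print(sc.update("CNN 1.2", 4.6))  # CNN 1.2 (stable for 3.6 s)
```

One instance of such a filter per stage is sufficient, since each stage acts on a single classification stream.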
Once the stability condition was satisfied at both stages and the combined classification logic indicated that material delivery was required, a dispatch command was sent to the AMR. This enabled preemptive robot movement, allowing material transport to occur concurrently with operator task execution. The overall decision making process and control flow are illustrated in Figure 10. The Python-based dispatch control framework used in this study can be made available upon reasonable request to the corresponding author.

4. Results

Experimental results were evaluated across 60 total runs using several time-based performance metrics, including total operation time, Panel 1 lead time, Panel 2 lead time, non-value-added time, and individual step durations. These metrics were selected to quantify both overall system efficiency and the temporal impact of different material delivery strategies on the manufacturing process.
Statistical analysis was performed to assess differences between dispatch scenarios and to evaluate the influence of operator variability. A one-way analysis of variance (ANOVA) was used to determine whether significant differences existed between scenarios for each response metric. A two-way ANOVA was then conducted to evaluate the effects of both scenario and operator, as well as their interaction, on system performance.
Following significant ANOVA results, Tukey’s honestly significant difference (HSD) tests were performed to identify pairwise differences between scenarios and, where appropriate, between operators. These post hoc comparisons enabled direct evaluation of how each dispatch strategy influenced performance outcomes and revealed cases in which operator variability contributed to observed differences. When a significant interaction between scenario and operator was identified in the two-way ANOVA, simple effects analysis was performed by applying Tukey HSD within each operator across scenarios and within each scenario across operators to further characterize these interactions.
Additionally, paired t-tests were used to compare Panel 1 and Panel 2 lead times within each scenario to assess symmetry in panel acquisition behavior. Lead times were also examined at the operator–scenario level to evaluate consistency across individual operators. All statistical tests were conducted using a significance level of α = 0.05.
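For reference, the core of the one-way scenario comparison is the ANOVA F-statistic over the per-scenario groups of run times. The sketch below computes it in pure Python with hypothetical data (the study's analysis additionally produced p-values, e.g., as `scipy.stats.f_oneway` would):

```python
def one_way_anova_f(groups):
    """One-way ANOVA F-statistic: between-group mean square over
    within-group mean square, for a list of numeric groups."""
    k = len(groups)                       # number of groups (scenarios)
    n = sum(len(g) for g in groups)       # total observations (runs)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical per-scenario times with one clearly slower group.
f = one_way_anova_f([[1, 2, 3], [2, 3, 4], [10, 11, 12]])
print(round(f, 1))  # 73.0
```

A large F relative to the F-distribution with (k − 1, n − k) degrees of freedom yields the small p-values reported in Table 1, which in turn justify the Tukey HSD pairwise comparisons.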

4.1. Dispatch Scenario Comparative Analysis

To evaluate the impact of material delivery strategy independent of operator specific effects, a scenario level analysis was conducted by aggregating results across all runs within each dispatch condition. Total operation time across scenarios is shown in Figure 11, where each box plot represents the distribution of all runs within a given scenario. Panel lead times and NVA time are similarly presented in Figure 12.
At the scenario level, Scenario 2 produced significantly higher total operation time and non-value-added time than both Scenarios 1 and 3, as reflected by the elevated box plot medians in Figure 11 and Figure 12. In contrast, Scenarios 1 and 3 exhibited similar median values for both total operation time and non-value-added time.
Panel lead times were significantly greater in Scenarios 2 and 3 compared to Scenario 1, as shown in Figure 12. This trend aligns with the expected behavior that AMR-based material retrieval introduces additional travel time relative to manual operator movement between the MDO and MPU.
Within each dispatch scenario, the box plot medians for Panel 1 and Panel 2 lead times were similar, as shown in Figure 12. This observation was confirmed by paired t-tests (all p > 0.05), indicating symmetric panel acquisition behavior across workflow stages.
To quantitatively compare scenario performance, a one-way ANOVA was conducted to evaluate the effect of dispatch scenario on each time-based performance metric, with results shown in Table 1.
Because the one-way ANOVA indicated a significant effect of scenario for each time metric (all p < 0.05), Tukey’s HSD post hoc tests were conducted to determine pairwise differences, as shown in Table 2.
All statistically significant pairwise differences (p < 0.05) are consistent with the trends observed in Figure 11 and Figure 12. Most notably, no statistically significant differences were observed between Scenarios 1 and 3 (p > 0.05) for total operation time or non-value-added time, indicating that while automation eliminates dispatch inefficiency relative to Scenario 2, it does not significantly outperform the fully manual baseline in this study.

4.2. Operator Influence Across Dispatch Scenarios

While the previous subsection evaluated scenario level performance by aggregating across operators, this section examines whether those trends remain consistent when operator variability is explicitly considered. Because only four operators were included in this study, operator-specific results should be interpreted as preliminary. To begin, cumulative step time profiles for each operator–scenario combination are shown in Figure 13.
The cumulative step time plot reveals trends and variability across operators and scenarios, including occasional outlier runs within a given operator–scenario combination. For example, Run 1 for Operator 3 in Scenario 1 is notably slower than the other runs for that grouping. These profiles provide insight into execution variability and inform later discussion in Section 5 regarding autonomous deployment optimization and selection of logic initiation steps for CNN-based workflow control.
Total operation time distributions for each operator–scenario combination are shown in Figure 14.
Observable trends indicate that Operator 2 generally exhibited the fastest execution, followed by Operator 3, then Operator 4, while Operator 1 was consistently the slowest across scenarios. This highlights the presence of operator variability in total performance. However, the scenario level trends identified previously remain consistent across all operators.
To quantitatively evaluate these observations, a two-way ANOVA was conducted for total operation time, with results shown in Table 3.
The two-way ANOVA revealed significant main effects of both scenario and operator on total operation time, with no significant interaction between operator and scenario (p > 0.05). The absence of a significant interaction indicates that the effect of scenario on total operation time is consistent across operators. Because scenario effects were previously examined using one-way ANOVA and Tukey HSD, post hoc analysis in this section focuses on the operator main effect, as shown in Table 4.
Post hoc analysis using Tukey’s HSD indicated that Operator 2 completed tasks significantly faster than Operator 1 (mean difference = 114.9 s, p = 0.044), consistent with trends observed in Figure 14. No other pairwise operator comparisons were statistically significant (p > 0.05). This indicates that while modest operator variability exists, significant total operation time differences are limited to a single operator pair and do not reflect broad systematic performance disparities.
Panel lead times and non-value-added time distributions for each operator–scenario combination are shown in Figure 15.
Panel lead times were generally consistent across operators within each scenario. Non-value-added time was also relatively consistent, with the exception of Scenario 3, where Operator 1 appears as an outlier. When compared to the manual baseline in Scenario 1, the effect of automated AMR dispatch in Scenario 3 varied across operators, with some exhibiting reduced NVA time, others experiencing minimal change, and some showing slight increases. This highlights variability in how operators benefit from automation.
Within individual operator–scenario combinations, Panel 1 and Panel 2 lead times were generally statistically consistent (most p > 0.05). A single comparison (Operator 1–Scenario 2) showed a statistically significant difference (p = 0.010). However, no consistent bias was observed across operators or scenarios. Overall, panel acquisition behavior remained symmetric at the operator level.
A two-way ANOVA was conducted for Panel 1 lead time, with results shown in Table 5.
The two-way ANOVA revealed a statistically significant main effect of dispatch scenario on Panel 1 lead time. A statistically significant main effect of operator was also observed (p = 0.046), suggesting modest differences across operators. However, the interaction between operator and scenario was not statistically significant (p = 0.076), indicating that the influence of the dispatch scenario on Panel 1 lead time was consistent across operators.
To further evaluate the operator effect on Panel 1 lead time, Tukey HSD post hoc comparisons were conducted, as shown in Table 6.
Although the operator main effect reached statistical significance in the ANOVA, Tukey HSD post hoc comparisons revealed that no individual pairwise operator differences were statistically significant (all p > 0.05). This indicates that while minor variability exists, no specific operator consistently differed from another in Panel 1 lead time. Overall, the dispatch scenario has a substantially stronger influence on Panel 1 lead time than operator identity.
A two-way ANOVA was also conducted for Panel 2 lead time, with results summarized in Table 7.
The results indicate a significant main effect of dispatch scenario, with no significant main effect of operator or interaction. This further supports that Panel 2 lead time is primarily driven by dispatch strategy rather than operator differences.
The results for a two-way ANOVA conducted for non-value-added time are shown in Table 8.
The analysis revealed significant main effects of both operator and scenario, as well as a significant interaction, indicating that the impact of dispatch strategy on NVA time varies across operators. Because the interaction is significant, interpretation of main effects alone is insufficient, and further simple effects analysis is required. As a result, simple effects analysis was conducted using Tukey HSD, including comparisons of scenarios within each operator in Table 9 and comparisons of operators within each scenario in Table 10.
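As a consistency check, the F statistics reported in Table 8 can be reproduced from the reported sums of squares and degrees of freedom, since F = (SS_effect/df_effect)/(SS_residual/df_residual). A minimal sketch using the Table 8 values:

```python
# Reproduce the F statistics in Table 8 from the reported sums of squares:
# F = (SS_effect / df_effect) / (SS_residual / df_residual).

SS_RES, DF_RES = 8568.800, 48  # residual row of Table 8

def f_stat(ss_effect: float, df_effect: int) -> float:
    """Ratio of effect mean square to residual mean square."""
    return (ss_effect / df_effect) / (SS_RES / DF_RES)

print(round(f_stat(3873.733, 3), 3))     # Operator            -> 7.233
print(round(f_stat(480326.933, 2), 3))   # Scenario            -> 1345.328
print(round(f_stat(5993.467, 6), 3))     # Operator x Scenario -> 5.596
```

The same relation reproduces the F values reported in Tables 3, 5, and 7 from their respective residual rows.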
Across all operators, Scenarios 1 and 2, as well as Scenarios 2 and 3, were consistently statistically different in non-value-added time (p < 0.05 in all cases), confirming that Scenario 2 yielded the largest non-value-added time for every operator. However, the comparison between Scenarios 1 and 3 was operator dependent. Specifically, Scenarios 1 and 3 differed significantly for Operators 1 and 3, but not for Operators 2 and 4. Thus, while automated AMR deployment in Scenario 3 uniformly reduced non-value-added time relative to Scenario 2, its performance relative to the fully manual baseline in Scenario 1 varied by operator.
Further simple effects analysis revealed no statistically significant operator differences within Scenarios 1 or 2 (all p > 0.05), indicating comparable performance under manual walking and manual AMR dispatch conditions. In contrast, significant operator differences emerged under Scenario 3 (p < 0.05 for multiple comparisons), suggesting that the automated AMR dispatch condition introduced variability in how effectively individual operators benefited from automation.
These findings indicate that automation consistently reduced non-value-added time relative to the manual AMR dispatch condition in Scenario 2. However, its effect relative to the fully manual baseline in Scenario 1 was heterogeneous across operators. Consequently, automation eliminated dispatch inefficiency but did not uniformly outperform manual execution for all operators.

5. Discussion

This study demonstrated that manual AMR dispatch in Scenario 2 produced significantly greater total operation time and non-value-added time than both walking in Scenario 1 and autonomous AMR dispatch in Scenario 3. At the scenario level, Scenarios 1 and 3 exhibited statistically comparable total operation and NVA times. Thus, while automation eliminated dispatch inefficiency relative to manual robot calling, it did not significantly outperform manual walking under the specific experimental conditions implemented in this study.
More specifically, manual AMR dispatch in Scenario 2 consistently underperformed because it combined the slower traversal speed of the robot with sequential task execution. Operators initiated robot movement only when a part was immediately required, resulting in unavoidable waiting time. This highlights that simply introducing an AMR into a manufacturing workflow does not guarantee efficiency gains, and that the deployment strategy of the AMR is equally important.
When comparing Scenarios 1 and 3, the similarity in performance is primarily attributable to system-level constraints. For example, the round-trip traversal distance between the MDO and MPU was relatively short at approximately 49 m, limiting the walking penalty in Scenario 1. Furthermore, the manufacturing sequence preceding Panel 1 installation was brief, which constrained the ability to fully overlap AMR travel with productive work in Scenario 3. As a result, even with an earlier dispatch, non-value-added time would still occur during Panel 1 delivery due to insufficient temporal buffer relative to AMR travel duration.
The similarity in performance is also influenced by non-optimized deployment timing. More specifically, Panel 2 delivery in Scenario 3 consistently occurred prior to material demand. While early AMR arrival is generally preferable to late delivery from a time-based perspective, this pattern indicates that sufficient overlap existed later in the workflow and that the selected trigger point was not synchronized with the optimal arrival time. This suboptimal behavior is a direct consequence of the autonomous trigger points in this study being based on easily distinguishable workflow states. The nature of these states enabled consistent and robust autonomous triggering in this proof-of-concept framework; ensuring optimal system synchronization would have required transitioning to more complex visual states or dispatch logic, which was outside the scope of this initial study. As a result, synchronization between AMR arrival and material demand was not guaranteed, and kickoff timing was not optimized for either panel.
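The state-based triggering described above can be illustrated with a minimal sketch. The state labels loosely mirror the CNN classes in Figures 6 and 8 (tool presence, robot load state, panel installation state); the mapping below is an illustrative assumption, not the exact rule set of this study, whose dispatch logic is given in Figure 10.

```python
# Hypothetical sketch of CNN-state-driven dispatch logic. The decision rules
# are illustrative assumptions, not the study's exact dispatch diagram.

def dispatch_action(tools_present: bool, robot_loaded: bool,
                    panels_installed: int) -> str:
    """Map classified workflow states to an AMR dispatch command."""
    if not tools_present:
        return "wait"             # operation has not started yet
    if panels_installed >= 2:
        return "idle"             # job complete, no further material needed
    if robot_loaded:
        return "deliver_to_MDO"   # material staged: send it to the operator
    return "fetch_from_MPU"       # trigger a pickup run for the next panel
```

Because each classified state maps to exactly one command, a misclassified frame can only delay or prematurely fire a dispatch, which is why easily distinguishable states were favored in the proof-of-concept framework.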
In principle, autonomous deployment enables parallelization of AMR travel and productive labor, allowing the slower traversal speed of the robot to be offset through temporal overlap. The effectiveness of this approach depends on selecting trigger points that align AMR travel time with cumulative task duration. Achieving this alignment would require identifying, where possible, workflow states that provide sufficient temporal buffer between kickoff and expected material demand. These states may be defined by more complex visual conditions based on partial task completion, or by maintaining easily distinguishable states and applying time offsets following their detection.
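In equation form, the ideal kickoff instant is t_kickoff = t_demand − t_travel. A minimal sketch of selecting the latest detectable state that still leaves sufficient buffer, and the residual time offset to apply after its detection (state names and timings are hypothetical):

```python
# Choose a dispatch trigger so AMR travel overlaps productive work.
# cumulative_times maps each detectable workflow state to the cumulative
# task time (s) at which it is observed; all values here are hypothetical.

def select_trigger(cumulative_times: dict, demand_time: float,
                   travel_time: float):
    """Return (state, extra_delay) such that dispatching at the state's
    detection time plus the delay makes the AMR arrive at demand_time."""
    target = demand_time - travel_time           # ideal kickoff instant
    feasible = {s: t for s, t in cumulative_times.items() if t <= target}
    if not feasible:                             # insufficient buffer:
        state = min(cumulative_times, key=cumulative_times.get)
        return state, 0.0                        # dispatch at earliest state
    state = max(feasible, key=feasible.get)      # latest state before target
    return state, target - feasible[state]       # residual time offset

states = {"tools_placed": 20.0, "panel1_installed": 140.0}
print(select_trigger(states, demand_time=230.0, travel_time=60.0))
# -> ("panel1_installed", 30.0): dispatch 30 s after the state is detected
```

When no state leaves enough buffer, as for Panel 1 in this study's short pre-installation sequence, some waiting time is unavoidable regardless of trigger choice.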
Furthermore, the significant operator × scenario interaction observed for non-value-added time in this work reflects variability in step execution speed across operators. This interaction is most pronounced in Scenario 3, where non-value-added time is driven entirely by waiting for Panel 1 delivery, as Panel 2 consistently arrived prior to demand. Differences in NVA time across operators arise from variation in execution speed between the standardized kickoff point and Panel 1 demand. More specifically, since the kickoff trigger remained fixed across all operators, faster operators progressed more quickly through intermediate steps and experienced longer waiting periods, while slower operators experienced reduced delay. This behavior highlights that the implemented trigger strategy was not only suboptimal at the system level, as previously mentioned, but also did not account for operator-specific execution differences.
This observation suggests that, in systems where sufficient temporal overlap exists and general synchronization between material demand and delivery is achieved, further performance gains could be realized by tailoring trigger points to individual operators. Operator-specific trigger selection could be informed by historical execution data, such as the cumulative step time profiles shown in Figure 13, by identifying when material demand occurs and determining appropriate kickoff points based on AMR travel duration. These triggers could be implemented through alternative visual states or by applying time offsets to existing, easily distinguishable states, as described earlier. This approach would reduce waiting time for faster operators while avoiding excessively early delivery for slower operators, enabling synchronization at the operator level. It is important to note that these operator-related findings are based on a limited sample size and may not fully generalize across broader operator populations.
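As a sketch of this idea (all numbers hypothetical), per-operator offsets can be derived from historical profiles such as those in Figure 13: for each operator, the delay applied after the shared trigger state is the historical trigger-to-demand gap minus the AMR travel time, clipped at zero.

```python
# Derive operator-specific dispatch offsets from historical timing data.
# Each profile entry is the operator's historical time (s) from the shared
# trigger state to the point of material demand; values are hypothetical.

TRAVEL_TIME = 60.0  # assumed constant AMR travel duration (s)

def operator_offsets(trigger_to_demand: dict) -> dict:
    """Delay (s) to apply after the trigger state, per operator, so that
    delivery coincides with demand; 0.0 means dispatch immediately."""
    return {op: max(0.0, gap - TRAVEL_TIME)
            for op, gap in trigger_to_demand.items()}

profiles = {"op1": 95.0, "op2": 60.0, "op3": 48.0}  # fast operators: small gaps
print(operator_offsets(profiles))
# -> {"op1": 35.0, "op2": 0.0, "op3": 0.0}
```

Under this scheme a slower operator (op1) receives a deliberate delay so material does not arrive excessively early, while faster operators trigger immediate dispatch, reducing their waiting time.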
In larger-scale manufacturing environments, the relative advantage of optimized autonomous deployment may become more pronounced. Facilities with longer traversal distances, such as those on the order of hundreds of meters in aerospace assembly lines, impose substantial walking penalties that accumulate across repeated tasks. When coupled with more complex manufacturing procedures containing multiple intermediate steps, greater temporal buffer may exist to enable increased overlap between AMR travel and productive labor. Under such conditions, appropriately optimized autonomous dispatch has the potential to reduce both total operation time and non-value-added time relative to manual walking. However, these outcomes are context-dependent and were not directly evaluated in the present study.
Beyond time-based metrics, AMR deployment may provide additional operational benefits relevant to manufacturing environments. Automated material transport can improve process consistency by reducing task interruptions due to material availability. AMRs also support improved predictability, repeatability, and traceability of material movement, which are critical considerations in aerospace manufacturing. Additionally, reducing manual traversal may decrease physical workload and workflow disruption for operators. While these factors were not directly quantified in this work, they represent important considerations when evaluating the broader impact of AMR integration.

6. Conclusions

Autonomous mobile robots have gained increasing interest as a means of improving efficiency and flexibility in manufacturing workflows. To evaluate their impact on material delivery, a controlled experimental study of a representative aerospace assembly process was conducted. Three scenarios were considered, including manual walking, manual AMR dispatch, and CNN-based autonomous AMR deployment.
The results showed that manual AMR dispatch produced significantly higher total operation time and non-value-added time due to sequential task execution and delayed robot deployment. Autonomous AMR deployment eliminated this inefficiency by enabling preemptive material transport and overlap between robot travel and operator task execution. However, under the conditions of this study, autonomous deployment did not significantly outperform manual walking due to limited traversal distance, minimal overlap between Panel 1 delivery and productive work, and non-optimized deployment timing.
These findings demonstrate that the effectiveness of AMR integration depends strongly on deployment strategy. When properly synchronized with workflow demand, autonomous deployment has the potential to outperform manual material retrieval, particularly in larger-scale manufacturing environments where conditions may allow for greater temporal overlap between transport and productive work.
Future work will focus on extending the proposed framework to support more adaptive and comprehensive manufacturing environments. In particular, incorporating visual monitoring of both the MDO and MPU would enable the material handling process to be dynamically adjusted based on system state at both ends. Additionally, expanding the perception system to include non-sequential workflow conditions would improve robustness to variations in process execution, such as handling out-of-order operations and responding to mid-process disruptions. This would require the incorporation of more complex visual states and associated decision logic.
Furthermore, advanced visual states and decision logic could be implemented to improve synchronization between material demand and delivery at both the system and operator level. This may involve detecting partial task completion, applying timing offsets, and integrating historical execution data to inform adaptive trigger selection tailored to individual operator behavior. Such enhancements would enable more precise alignment between AMR deployment and workflow progression, ultimately improving system efficiency beyond the baseline framework presented in this work.
Overall, this study demonstrates that manual robot dispatch increases inefficiency when travel is not preemptively coordinated with workflow demand. In contrast, autonomous dispatch has the potential to reduce delays associated with material retrieval and increase overlap between transport and productive work when effectively synchronized with the manufacturing process. Ultimately, realizing these benefits requires careful synchronization of deployment triggers with human workflow timing.

Author Contributions

Conceptualization, G.M., C.L.C., and Y.H.; methodology, G.M. and G.T.; software, G.M.; validation, G.M., G.M.G., G.T., and B.C.; formal analysis, G.M.; investigation, G.M.; resources, G.M.G. and B.C.; data curation, G.M., G.M.G., G.T., and B.C.; writing—original draft preparation, G.M.; writing—review and editing, G.M., C.L.C., and Y.H.; visualization, G.M.G., G.T., and B.C.; supervision, C.L.C. and Y.H.; project administration, G.M.; funding acquisition, C.L.C. All authors have read and agreed to the published version of the manuscript.

Funding

Funding for this research was provided by the Mechanical Engineering Department of the Lyle School of Engineering, Southern Methodist University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used in this study were collected at Southern Methodist University and are not publicly available due to institutional restrictions. However, they are available from the corresponding author upon reasonable request.

Acknowledgments

The research presented in this paper was conducted at Southern Methodist University (SMU) within the Center for Digital and Human-Augmented Manufacturing (CDHAM) and the SMU Control Systems Laboratory. Supervision for this work was provided by Chris Colaw at CDHAM, and Yildirim Hurmuzlu at SMU. The research team consisted of Ph.D. and undergraduate students from the SMU Mechanical Engineering Department. The authors wish to express sincere gratitude to the supervisors and research team for their guidance and dedication, which were instrumental in the conceptualization and development of this autonomous dispatch system.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
NVA: Non-Value-Added
AMR: Autonomous Mobile Robot
CNN: Convolutional Neural Network
MDO: Material Drop Off Location
MPU: Material Pick Up Location
YOLO: You Only Look Once
GPU: Graphics Processing Unit
ROI: Region of Interest
ANOVA: Analysis of Variance
HSD: Honestly Significant Difference

Figure 1. Fuselage model with no panels (left) and with panels (right).
Figure 2. Fuselage diagram with panels and corresponding holes numbered.
Figure 3. OMRON LD-60 used in experimental runs for scenarios 2 and 3.
Figure 4. Material drop off location (left) and material pick up location (right).
Figure 5. A frame from the Rhombus camera stream monitoring the manufacturing area.
Figure 6. CNN 1.1: No tools and OMRON with no material (top left). CNN 1.2: No tools and OMRON with material (top right). CNN 1.3: Tools present and OMRON with no material (bottom left). CNN 1.4: Tools present and OMRON with material (bottom right).
Figure 7. CNN 1 training and validation loss (left) and validation accuracy (right).
Figure 8. CNN 2.1: Both panels on fuselage (top left). CNN 2.2: Panel 1 on fuselage (top right). CNN 2.3: No panels on fuselage (bottom left). CNN 2.4: Panel 2 on fuselage (bottom right).
Figure 9. CNN 2 training and validation loss (left) and validation accuracy (right).
Figure 10. Autonomous AMR dispatch logic diagram based on CNN classification.
Figure 11. Total operation time by dispatch scenario. In each box plot, the central line represents the median, the box spans the interquartile range, and the whiskers extend to values within 1.5× the interquartile range. Individual data points are overlaid.
Figure 12. Panel lead and non-value-added time by dispatch scenario. Box plot conventions as in Figure 11.
Figure 13. Cumulative time per operator–scenario.
Figure 14. Total operation time by scenario and operator. Box plot conventions as in Figure 11.
Figure 15. Panel lead and non-value-added time by scenario and operator. Box plot conventions as in Figure 11.
Table 1. One-way ANOVA results evaluating the effect of dispatch scenario on time metrics.
Response Variable | F-Statistic | p-Value
Total Operation Time | 51.844 | <0.001
Panel 1 Lead Time | 1239.917 | <0.001
Panel 2 Lead Time | 784.212 | <0.001
Non-Value-Added Time | 742.532 | <0.001
Table 2. Tukey HSD post hoc comparisons of dispatch scenarios across performance metrics.
Response Variable | Comparison | Mean Difference (s) | Adjusted p-Value
Total Operation Time | Scenario 1 vs. 2 | 210.650 | <0.001
Total Operation Time | Scenario 1 vs. 3 | 14.500 | 0.806
Total Operation Time | Scenario 2 vs. 3 | −196.150 | <0.001
Panel 1 Lead Time | Scenario 1 vs. 2 | 94.100 | <0.001
Panel 1 Lead Time | Scenario 1 vs. 3 | 95.300 | <0.001
Panel 1 Lead Time | Scenario 2 vs. 3 | 1.200 | 0.849
Panel 2 Lead Time | Scenario 1 vs. 2 | 95.300 | <0.001
Panel 2 Lead Time | Scenario 1 vs. 3 | 92.400 | <0.001
Panel 2 Lead Time | Scenario 2 vs. 3 | −2.900 | 0.543
Non-Value-Added Time | Scenario 1 vs. 2 | 189.400 | <0.001
Non-Value-Added Time | Scenario 1 vs. 3 | −0.800 | 0.989
Non-Value-Added Time | Scenario 2 vs. 3 | −190.200 | <0.001
Table 3. Two-way ANOVA results for total operation time across operators and dispatch scenarios.
Source | Sum of Squares | df | F-Statistic | p-Value
Operator | 99,117.517 | 3 | 8.182 | <0.001
Scenario | 553,723.300 | 2 | 68.562 | <0.001
Operator × Scenario | 11,446.833 | 6 | 0.472 | 0.825
Residual | 193,831.200 | 48 | |
Table 4. Tukey HSD post hoc comparisons for operator effect on total operation time.
Comparison | Mean Difference (s) | Adjusted p-Value
Operator 1 vs. 2 | −114.933 | 0.044
Operator 1 vs. 3 | −55.533 | 0.563
Operator 1 vs. 4 | −56.000 | 0.556
Operator 2 vs. 3 | 59.400 | 0.506
Operator 2 vs. 4 | 58.933 | 0.513
Operator 3 vs. 4 | −0.467 | 1.000
Table 5. Two-way ANOVA results for Panel 1 lead time across operators and dispatch scenarios.
Source | Sum of Squares | df | F | p-Value
Operator | 342.933 | 3 | 2.868 | 0.046
Scenario | 119,588.933 | 2 | 1500.175 | <0.001
Operator × Scenario | 492.667 | 6 | 2.060 | 0.076
Residual | 1913.200 | 48 | |
Table 6. Tukey HSD post hoc comparisons for operator effect on Panel 1 lead time.
Comparison | Mean Difference (s) | Adjusted p-Value
Operator 1 vs. 2 | −5.467 | 0.989
Operator 1 vs. 3 | −0.133 | 1.000
Operator 1 vs. 4 | 0.267 | 1.000
Operator 2 vs. 3 | 5.333 | 0.989
Operator 2 vs. 4 | 5.733 | 0.987
Operator 3 vs. 4 | 0.400 | 1.000
Table 7. Two-way ANOVA results for Panel 2 lead time across operators and dispatch scenarios.
Source | Sum of Squares | df | F | p-Value
Operator | 119.267 | 3 | 0.540 | 0.657
Scenario | 117,521.733 | 2 | 797.569 | <0.001
Operator × Scenario | 615.333 | 6 | 1.392 | 0.237
Residual | 3536.400 | 48 | |
Table 8. Two-way ANOVA results for non-value-added time across operators and dispatch scenarios.
Source | Sum of Squares | df | F | p-Value
Operator | 3873.733 | 3 | 7.233 | <0.001
Scenario | 480,326.933 | 2 | 1345.328 | <0.001
Operator × Scenario | 5993.467 | 6 | 5.596 | <0.001
Residual | 8568.800 | 48 | |
Table 9. Simple effects analysis: Tukey HSD comparisons of dispatch scenarios within each operator (non-value-added time).
Operator | Comparison | Mean Difference (s) | Adjusted p-Value
Operator 1 | Scenario 1 vs. 2 | 193.400 | <0.001
Operator 1 | Scenario 1 vs. 3 | −29.400 | <0.001
Operator 1 | Scenario 2 vs. 3 | −222.800 | <0.001
Operator 2 | Scenario 1 vs. 2 | 179.600 | <0.001
Operator 2 | Scenario 1 vs. 3 | −2.000 | 0.966
Operator 2 | Scenario 2 vs. 3 | −181.600 | <0.001
Operator 3 | Scenario 1 vs. 2 | 188.000 | <0.001
Operator 3 | Scenario 1 vs. 3 | 25.400 | 0.019
Operator 3 | Scenario 2 vs. 3 | −162.600 | <0.001
Operator 4 | Scenario 1 vs. 2 | 196.600 | <0.001
Operator 4 | Scenario 1 vs. 3 | 2.800 | 0.967
Operator 4 | Scenario 2 vs. 3 | −193.800 | <0.001
Table 10. Simple effects analysis: Tukey HSD comparisons of operators within each dispatch scenario (non-value-added time).
Scenario | Comparison | Mean Difference (s) | Adjusted p-Value
Scenario 1 | Operator 1 vs. 2 | 8.200 | 0.246
Scenario 1 | Operator 1 vs. 3 | 2.400 | 0.939
Scenario 1 | Operator 1 vs. 4 | 8.600 | 0.212
Scenario 1 | Operator 2 vs. 3 | −5.800 | 0.527
Scenario 1 | Operator 2 vs. 4 | 0.400 | 1.000
Scenario 1 | Operator 3 vs. 4 | 6.200 | 0.473
Scenario 2 | Operator 1 vs. 2 | −5.600 | 0.970
Scenario 2 | Operator 1 vs. 3 | −3.000 | 0.995
Scenario 2 | Operator 1 vs. 4 | 11.800 | 0.785
Scenario 2 | Operator 2 vs. 3 | 2.600 | 0.997
Scenario 2 | Operator 2 vs. 4 | 17.400 | 0.526
Scenario 2 | Operator 3 vs. 4 | 14.800 | 0.649
Scenario 3 | Operator 1 vs. 2 | 35.600 | <0.001
Scenario 3 | Operator 1 vs. 3 | 57.200 | <0.001
Scenario 3 | Operator 1 vs. 4 | 40.800 | <0.001
Scenario 3 | Operator 2 vs. 3 | 21.600 | 0.015
Scenario 3 | Operator 2 vs. 4 | 5.200 | 0.836
Scenario 3 | Operator 3 vs. 4 | −16.400 | 0.076

Share and Cite

Madison, G.; Griser, G.M.; Truelson, G.; Churches, B.; Colaw, C.L.; Hurmuzlu, Y. Autonomous Dispatch of Mobile Robots in Manufacturing Using Convolutional Neural Networks. Machines 2026, 14, 512. https://doi.org/10.3390/machines14050512
