Figure 1.
Representative 2D part silhouettes (top row) and their corresponding feeder configurations (bottom row). From left to right: a simple convex polygon with a single straight fence; an I-shaped profile with dual curved fences; and a highly asymmetric free-form part interacting with a mixed straight–curved layout. This visual taxonomy motivates the geometric diversity tackled in the present study.
Figure 2.
Block diagram of the overall workflow, illustrating the pipeline from STL import and physics simulation in CoppeliaSim to data representation using DFT coefficients, one-hot encoding, and PMF/CDF labels.
Figure 3.
Simulation overview. (a) The CoppeliaSim simulation environment, with a part on the linear feeder belt approaching a feeder configuration. (b) Time-stamped snapshots (0–1800 ms) of a typical simulation run, showing the part’s trajectory, fence contact, and final rest configuration. The sequence highlights stochastic elements, such as initial orientation randomization and micro-impacts, captured by Bullet and subsequently encoded in the orientation PMF.
Figure 5.
Distribution of Jensen–Shannon (JS) divergence between the 100-iteration and 1000-iteration orientation PMFs across N = 1158 configurations. No configurations satisfy JS at 100 iterations (median JS ).
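As a reference for how the per-configuration divergences summarized in this figure can be computed, here is a minimal sketch of the Jensen–Shannon divergence between two binned orientation PMFs. The base-2 logarithm (which bounds JS in [0, 1]) and the small smoothing constant are our assumptions; the paper's exact settings are not restated here.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (base 2) between two discrete PMFs."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))  # KL divergence in bits
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Comparing the 100-iteration PMF of a configuration against its 1000-iteration PMF with this function yields one point of the plotted distribution; identical PMFs give 0 and disjoint PMFs give 1.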
Figure 6.
DFT-based part representation. (a) Conceptual illustration of representing a 2D part shape (blue outline) using a finite number of Discrete Fourier Transform coefficient epicycles. (b) Examples of DFT reconstruction using 15 coefficients for diverse part shapes. The original outlines (blue) are closely approximated by the reconstructed shapes (solid orange), demonstrating the representation’s fidelity.
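The (frequency, magnitude, phase) encoding and epicycle reconstruction illustrated above can be sketched as follows. The contour sampling and normalization conventions are our assumptions; only the idea of keeping the k largest-magnitude DFT coefficients follows the text.

```python
import numpy as np

def dft_descriptor(contour_xy, k=15):
    """Represent a closed 2D outline by its k largest-magnitude DFT coefficients.

    contour_xy: (N, 2) array of boundary points, treated as complex z = x + iy.
    Returns (frequency, magnitude, phase) triples, as in the part feature vector.
    """
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
    coeffs = np.fft.fft(z) / len(z)
    freqs = np.fft.fftfreq(len(z), d=1 / len(z))  # integer cycle counts
    top = np.argsort(-np.abs(coeffs))[:k]
    return np.stack([freqs[top], np.abs(coeffs[top]), np.angle(coeffs[top])], axis=1)

def reconstruct(desc, n_points=200):
    """Rebuild the outline from (freq, mag, phase) triples as rotating epicycles."""
    t = np.linspace(0, 1, n_points, endpoint=False)
    z = np.zeros(n_points, dtype=complex)
    for f, m, p in desc:
        z += m * np.exp(1j * (2 * np.pi * f * t + p))
    return np.stack([z.real, z.imag], axis=1)
```

For a unit circle a single coefficient suffices; more complex silhouettes need more of the 15 retained epicycles for a faithful reconstruction.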
Figure 7.
Fence locations and encoding: Location 1 at m and Location 2 at m, both at m, each with lateral slots A/B at m and an outward offset to m; straight fences are rotated by angle and curved fences use radius m. The corresponding input uses one-hot encoding for fence type and presence at two locations, along with normalized angle values.
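The feeder encoding described here (3 one-hot type bits plus one scaled angle per location, two locations, 8 dimensions total) can be sketched as below. The one-hot category ordering and the angle bounds are placeholders, not values from the paper.

```python
import numpy as np

FENCE_TYPES = ["none", "straight", "curved"]  # assumed ordering

def encode_location(fence_type, angle_deg=0.0, angle_min=-45.0, angle_max=45.0):
    """4-dim encoding per fence location: 3 one-hot type bits + min-max scaled angle.

    The scaled angle is forced to 0 when no fence is present, as noted in Table 1.
    angle_min/angle_max are illustrative placeholders fitted on the training split.
    """
    one_hot = np.zeros(3)
    one_hot[FENCE_TYPES.index(fence_type)] = 1.0
    if fence_type == "none":
        scaled = 0.0
    else:
        scaled = (angle_deg - angle_min) / (angle_max - angle_min)
    return np.concatenate([one_hot, [scaled]])

def encode_feeder(loc1, loc2):
    """Concatenate two locations into the 8-dim feeder vector."""
    return np.concatenate([encode_location(*loc1), encode_location(*loc2)])
```

Concatenated with the 45-dim DFT part features, this yields the 53-dim part + feeder input used downstream.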
Figure 8.
Examples of fence configurations used in our dataset. Each panel shows a bird’s eye view of a walkway (grey) with fences (white lines) on the left and/or right. The top row illustrates single-fence cases: (a) a straight fence on the right side, (b) a straight fence on the left side, (c) a curved fence on the right side, and (d) a curved fence on the left side. The bottom row shows the four possible two-fence combinations: (e) straight fences on both sides, (f) a straight fence on the right and a curved fence on the left, (g) a curved fence on the right and a straight fence on the left, and (h) curved fences on both sides. These eight scenarios cover the possible arrangements considered in the analysis.
Figure 9.
Examples of final orientation distributions for different part–feeder configurations, represented as discrete PMFs with probability per 1° bin (not CDFs). The diversity of shapes (unimodal, multimodal, and uniform) highlights the complexity captured by the simulation data.
Figure 10.
Architecture of the standard fully connected regression model.
Figure 11.
VAE architecture. Part 1 is the part + feeder feature branch with 53 dimensions. Part 2 is the distribution branch with 360 CDF bins. The encoders share a latent space, and the decoder reconstructs both outputs.
Figure 12.
Example predictions from model 3 on the test dataset (held-out configurations) for four representative cases, panels a–d. The model maintains good values on these held-out samples. Curves shown are circular CDFs, not PMFs, computed as unshifted cumulative sums from 0° of the 360-bin PMF; PMFs are smoothed at dataset creation, wrapped Gaussian with , and no shared shift is applied. Model 3 outputs PMFs, and we plot their derived CDFs.
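The unshifted circular CDFs plotted in this figure follow directly from the 360-bin PMFs; a minimal sketch of that conversion:

```python
import numpy as np

def pmf_to_cdf(pmf):
    """Unshifted circular CDF: cumulative sum of the 360-bin PMF from 0 degrees.

    No rotation/shift is applied; the final bin equals 1 for a normalized PMF.
    """
    pmf = np.asarray(pmf, dtype=float)
    pmf = pmf / pmf.sum()  # guard against small normalization drift
    return np.cumsum(pmf)
```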
Figure 13.
Example predictions from model 3 on the parts dataset with unseen part geometries and only three unique parts for four representative cases. The significant performance drop highlights the difficulty of generalizing to new shapes. Curves shown are circular CDFs, not PMFs, computed as unshifted cumulative sums from 0° of the 360-bin PMF; PMFs are smoothed at dataset creation, wrapped Gaussian with , and no shared shift is applied. Model 3 outputs PMFs, and we plot their derived CDFs.
Figure 14.
Cross-convergence evaluation comparing within-level reconstruction (green, control) against partial-to-100% prediction (red). The within-level curve shows stable across all iteration levels, confirming that the VAE architecture is sound. The cross-convergence curve shows that predicting fully converged CDFs from partial-iteration inputs yields poor results () at fewer than 30 iterations, with substantial improvement only above 60–70%. Error bars show ±1 SD across holdout configurations. This demonstrates that partial simulations lack sufficient information to predict final distributions until convergence is nearly complete.
Table 1.
Key simulation and data processing parameters.
| Parameter | Value | Units/Notes |
|---|---|---|
| Simulator | CoppeliaSim Edu 4.6 | Platform [28] |
| Physics engine | Bullet 2.78 | Timestep 5 ms; 10 substeps [29] |
| Gravity | Not explicitly set | Scene default; not logged in API script |
| Contact solver iterations | Not explicitly set | Scene default; not logged |
| ERP/CFM | Not explicitly set | Engine defaults; not logged |
| Linear/angular damping | Not explicitly set | Scene defaults; not logged |
| Friction combine mode | Not explicitly set | Engine default; not logged |
| Restitution combine mode | Not explicitly set | Engine default; not logged |
| Conveyor speed | 0.10 | m/s; chosen for stable transport and fence interaction in simulation; not calibrated to a specific feeder |
| Friction coefficients (Coulomb ) | Parts 0.8; fences 0.5; belt 1.0 | Static = kinetic; effective parameters for stable, friction-dominated motion; not calibrated [29] |
| Restitution coefficients | | Highly damped contacts to reduce bounce in Bullet [29] |
| Initial part orientation () | (, , ) | Flat on belt, random yaw |
| Simulation iterations per config. | 1000 | Chosen based on JS divergence analysis (Figure 4) |
| Output angle range | | Degrees (simulator-native) |
| PMF smoothing | Wrapped Gaussian | Circular convolution; |
| Part representation | DFT | 15 largest coefficients (freq, mag, phase) |
| Feature vector (part) | 45 | Standardized parameters (z-score) |
| Feeder representation | 8 | 3 one-hot type + 1 scalar angle min–max scaled to on training split; angle set to 0 when type = None |
| Fence geometry (straight) | Fixed | Rigid cuboid; dimensions fixed in CoppeliaSim scene (length/width/thickness not varied; values not logged) |
| Fence geometry (curved) | | Arc segment with fixed cross-section (length/width/thickness fixed in scene; values not logged); placement uses radius r |
| Fence placement slots | ; ; | Locations 1/2 (downstream/upstream) and lateral A/B offsets |
| Conveyor implementation | CoppeliaSim Conveyor model | Built-in conveyor object with constant belt velocity; internal parameters not logged |
| Label representation (options) | Full (360) / PCA (170/85) | Bin counts for PMF/CDF: PMF 360 or 170; CDF 360 or 85 |
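The wrapped-Gaussian PMF smoothing listed in Table 1 can be sketched as a circular convolution. The kernel width is left as a parameter (the table's sigma value is not restated here), and the minimum-circular-distance kernel is a single-wrap approximation that is accurate for sigma well below 360°.

```python
import numpy as np

def smooth_pmf_wrapped_gaussian(pmf, sigma_deg):
    """Smooth a 360-bin orientation PMF by circular convolution with a
    wrapped Gaussian kernel (sigma in degrees)."""
    pmf = np.asarray(pmf, dtype=float)
    n = len(pmf)
    # Circular distance of each bin from 0 degrees.
    d = np.arange(n)
    d = np.minimum(d, n - d)
    kernel = np.exp(-0.5 * (d / sigma_deg) ** 2)
    kernel /= kernel.sum()
    # Circular convolution via FFT keeps all mass on the circle.
    out = np.real(np.fft.ifft(np.fft.fft(pmf) * np.fft.fft(kernel)))
    out /= out.sum()
    return out
```

Because the convolution is circular, mass near 359° spills smoothly into the bins just above 0°, avoiding the edge artifacts a linear Gaussian filter would introduce.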
Table 3.
VAE hyperparameters and CDF reconstruction.
| Parameter | Value |
|---|---|
| Latent Dimension | 35 |
| Learning Rate | 0.002 |
| Batch Size | 64 |
| Epochs | 120 |
| Optimizer | Adam, weight decay |
| Gradient Clipping | Max norm: 5.0 |
| Loss Weights: recon P + C, recon CDF, KL, KS penalty | 2, 8, 0.25, 20 |
| Gradient Penalty Weight, | 4 |
| KL Annealing Schedule | Weight grew linearly, capped at 0.25 |
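The KS penalty row in Table 3 can be read as a Kolmogorov–Smirnov-style term on the reconstructed CDF. A minimal numpy sketch of the statistic follows; the hard max shown here (rather than a smooth, differentiable surrogate usable during training) is our assumption.

```python
import numpy as np

def ks_statistic(cdf_pred, cdf_true):
    """Largest absolute gap between two CDF vectors (Kolmogorov-Smirnov statistic)."""
    return float(np.max(np.abs(np.asarray(cdf_pred) - np.asarray(cdf_true))))

# With the Table 3 weights, the total loss would combine as:
# total = 2 * recon_PC + 8 * recon_CDF + 0.25 * KL + 20 * ks_statistic(pred, true)
```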
Table 4.
Dataset summary, total samples = 1236.
| Dataset | Samples | Unique Parts | Purpose | Protocol |
|---|---|---|---|---|
| Main (Train) | 890 | 38 | Model training | – |
| Main (Test) | 158 | 38 | Held-out configurations (random split) | P1 |
| Fences | 78 | 18 | Unseen fence configurations | P1 |
| Parts | 110 | 3 | Unseen part geometries | P3 |
| Total (All) | 1236 | 41 | All datasets combined | – |
Table 5.
Regression model performance on training data: and in deg, mean ± SD; configurations.
| Model | | [deg] |
|---|---|---|
| Model 1 | 0.90 ± 0.08 | |
| Model 2 | | |
| Model 3 | | |
| Model 4 | | |
| Model 5 | | |
| Model 6 | | |
| Model 7 | | |
| Model 8 | | |
Table 6.
Regression model performance on test dataset: and in deg, mean ± SD; configurations.
| Model | | [deg] |
|---|---|---|
| Model 1 | | |
| Model 2 | | |
| Model 3 | | |
| Model 4 | | |
| Model 5 | | |
| Model 6 | | |
| Model 7 | | |
| Model 8 | | |
Table 7.
Regression model performance on fences dataset, unseen feeder configurations: and in degrees, mean ± SD; configurations.
| Model | | [deg] |
|---|---|---|
| Model 1 | | |
| Model 2 | | |
| Model 3 | | |
| Model 4 | | |
| Model 5 | | |
| Model 6 | | |
| Model 7 | | |
| Model 8 | | |
Table 8.
Regression model performance on parts dataset: and in deg, mean ± SD; configurations.
| Model | | [deg] |
|---|---|---|
| Model 1 | | |
| Model 2 | | |
| Model 3 | | |
| Model 4 | | |
| Model 5 | | |
| Model 6 | | |
| Model 7 | | |
| Model 8 | | |
Table 9.
VAE reconstruction performance, mean ± SD, unitless, on held-out fences, , and parts, , datasets. The VAE was trained only on the main dataset, ; fences and parts were excluded from training. This table reports autoencoding reconstruction accuracy using both part + feeder features and CDF inputs, not conditional prediction from geometry alone.
| Dataset | (Part + Feeder) | (CDF) |
|---|---|---|
| Fences | | |
| Parts | | |
Table 10.
Delta-to-full correction performance (CDF ) at representative iteration levels. Cost saved is . Part split = unseen parts; config split = random configuration split with parts seen during training. Full 5% sweep results are provided in Supplementary Section S11.
| Level | Cost Saved (%) | Part Split () | Part Baseline | Config Split () | Config Baseline |
|---|---|---|---|---|---|
| 5% | 95.0 | 0.82 ± 0.22 | −0.02 ± 0.69 | 0.83 ± 0.21 | −0.02 ± 0.69 |
| 10% | 90.0 | 0.82 ± 0.21 | −0.01 ± 0.68 | 0.83 ± 0.21 | −0.01 ± 0.69 |
| 25% | 75.0 | 0.84 ± 0.18 | 0.03 ± 0.62 | 0.84 ± 0.20 | 0.03 ± 0.62 |
| 50% | 50.0 | 0.86 ± 0.16 | 0.31 ± 0.35 | 0.86 ± 0.18 | 0.31 ± 0.36 |
| 75% | 25.0 | 0.96 ± 0.07 | 0.89 ± 0.06 | 0.95 ± 0.06 | 0.89 ± 0.06 |
| 90% | 10.0 | 0.99 ± 0.01 | 0.99 ± 0.01 | 0.99 ± 0.01 | 0.99 ± 0.01 |
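The goodness-of-fit values reported across Tables 5–10 are coefficients of determination between predicted and reference curves. A minimal sketch of the per-curve computation follows (whether the paper aggregates exactly this way is an assumption); note that a prediction worse than the mean of the reference curve yields a negative value, as seen in the Table 10 baselines.

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination between a reference curve and a prediction."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)  # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot
```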