Article

Workpiece Coordinate System Measurement for a Robotic Timber Joinery Workflow

by
Francisco Quitral-Zapata
1,2,*,
Rodrigo García-Alvarado
2,
Alejandro Martínez-Rocamora
3 and
Luis Felipe González-Böhme
4
1
Department of Architecture, Universidad Técnica Federico Santa María, San Joaquín 8940897, Chile
2
Department of Design and Theory of Architecture, Universidad del Bío-Bío, Concepción 4051381, Chile
3
IUACC, ArDiTec Research Group, Department of Architectural Constructions II, Higher Technical School of Building Engineering, University of Seville, Av. Reina Mercedes 4-a, 41012 Seville, Spain
4
Department of Architecture, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
*
Author to whom correspondence should be addressed.
Buildings 2025, 15(15), 2712; https://doi.org/10.3390/buildings15152712
Submission received: 29 May 2025 / Revised: 25 July 2025 / Accepted: 29 July 2025 / Published: 31 July 2025
(This article belongs to the Special Issue Architectural Design Supported by Information Technology: 2nd Edition)

Abstract

Robotic timber joinery demands integrated, adaptive methods to compensate for the inherent dimensional variability of wood. We introduce a seamless robotic workflow to enhance the measurement accuracy of the Workpiece Coordinate System (WCS). The approach leverages a Zivid 3D camera mounted in an eye-in-hand configuration on a KUKA industrial robot. The proposed algorithm applies a geometric method that strategically crops the point cloud and fits planes to the workpiece surfaces to define a reference frame, calculate the corresponding transformation between coordinate systems, and measure the cross-section of the workpiece. This enables reliable toolpath generation by dynamically updating the WCS and effectively accommodating real-world geometric deviations in timber components. The workflow includes camera-to-robot calibration, point cloud acquisition, robust detection of workpiece features, and precise alignment of the WCS. Experimental validation confirms that the proposed method is efficient and improves milling accuracy. By dynamically identifying the workpiece geometry, the system successfully addresses challenges posed by irregular timber shapes, resulting in higher accuracy for timber joints. This method contributes to advanced manufacturing strategies in robotic timber construction and supports the processing of diverse workpiece geometries, with potential applications in civil engineering for building construction through the precise fabrication of structural timber components.

1. Introduction

Robotic timber joinery is a research field that explores how robots can assist—and ultimately collaborate with—human carpenters in cutting complex timber joints and assembling timber-frame structures [1,2]. According to Beemer [3,4], the term joinery refers to both the craft of connecting timbers using woodwork joints and the joints themselves. This joinery is usually designed to connect the end of one timber piece to the side or end of another, often involving variations of the basic mortise-and-tenon joint. In carpentry jargon, the term mortise refers to the rectangular cavity into which a tenon is inserted. The term tenon refers to a rectangular projection resulting from cutting into the end of a timber piece, flanked by one or more resulting shoulders and sized to match a mating mortise. Shoulder, in turn, refers to the element of the tenoned member that is perpendicular to the tenon cheek and which lies against the face of the mortised member; there can be as few as one and as many as four shoulders on the tenoned member. The term cheek refers to the broad surface of a tenon—and, correspondingly, to the matching surface of a mortise (Figure 1).
In a broader context, the integration of digital technologies into architectural fabrication—understood as the design and production of architectural components using advanced fabrication techniques—has significantly transformed traditional construction practices [5], particularly in timber-based processes and joinery, where geometric accuracy is critical due to the inherent dimensional variability of the natural material [6,7]. Within this context, robotic timber joinery, in particular, has shown strong potential to overcome these challenges, offering high precision, repeatability, and efficiency in the production of complex architectural components [8,9].
However, ensuring the accuracy of robotic machining fundamentally depends on the correct definition of the Workpiece Coordinate System (WCS) [10], which corresponds to the object coordinate system as defined in the ISO standard [11]. Traditional measurement methods, which presume dimensional stability, are insufficient for timber components, which naturally exhibit geometric variations [12].
Contemporary industrial workflows reveal inherent limitations in conventional strategies for coordinate system calibration within robotic timber fabrication. For instance, the KUKA manual [13] describes the “3-point method” for calibrating the coordinates of the BASE—the term used in KUKA System Software (KSS) to define the WCS, comprising X, Y, and Z (position) and A, B, and C (orientation). This method is executed using a previously calibrated tool mounted on the robot flange. Firstly, the tool center point (TCP) is positioned at the origin of the coordinate system—typically a top vertex in timber workpieces. Secondly, the TCP is positioned along a point on the positive x-axis—usually the longitudinal edge of the timber workpiece. Finally, the TCP is positioned at a point on the XY-plane—commonly the top face of the timber workpiece. These three points define the new “BASE” or WCS (Figure 2).
This method presumes idealized geometry—namely, that timber vertices are sharp and that edges and faces form a perfectly square and parallel prism. In practice, however, timber workpieces frequently deviate from this ideal due to handling damage or dimensional distortions introduced during the drying process. Such imperfections complicate the accurate calibration of the WCS and may introduce cumulative errors in downstream robotic operations.
In addition, the method assumes that the tool center point (TCP) can be easily identified at a distinct vertex of the tool. This assumption becomes problematic in robotic machining scenarios involving tools such as milling cutters (Figure 3), chainsaws, or saw blades, whose geometries make TCP identification inherently challenging [14].
One alternative to improve measurement accuracy is to temporarily mount a tool with a well-defined vertex and use it as the TCP to define the WCS more accurately. While this approach can yield more accurate results, it introduces additional tool change operations into the workflow, interrupting the process and adding overhead each time a new workpiece requires measurement.
Despite the use of highly repeatable clamping and positioning systems, geometric discrepancies often persist—especially in repetitive joinery tasks involving multiple timber components.
Workpiece localization remains a persistent challenge, even in conventional CNC machining centers. Although tools such as edge finders and touch probes are widely available, achieving high precision still requires substantial intervention from skilled operators, rendering the process largely manual and time-consuming. To address these limitations, both researchers and industry have increasingly turned to automation-based solutions. For instance, computer vision systems equipped with image processing algorithms have been developed to streamline localization tasks and significantly reduce operator involvement [15]. In aerospace manufacturing, robotic milling systems frequently employ eye-in-hand laser profilers to capture accurate geometric data, thereby enhancing precision in adaptive machining contexts [16]. These examples underscore that workpiece localization continues to be a critical issue, even in technologically advanced sectors. While recent advancements in computer vision and point cloud processing have helped mitigate similar challenges in other industrial domains [17,18], adaptive coordinate measurement solutions tailored to the constraints and material variability encountered in robotic timber joinery remain underdeveloped. This gap underscores the complexity of automating processes that have historically depended on nuanced human expertise.
Traditional carpenters, for instance, have long recognized the inherent dimensional instability of wood—its tendency to warp, twist, or shrink according to moisture content and internal stress [3]. In response, they developed joinery techniques that not only tolerated such imperfections but actively anticipated and compensated for them. Central to traditional carpentry is the craftsman’s deep familiarity with wood as a living, variable material. This embodied, experience-based knowledge enables carpenters to determine appropriate tolerances for each assembly based on the specific condition of the timber [19]. Such a practice-based, adaptive approach stands in sharp contrast to contemporary robotic fabrication methods, which typically rely on exact digital representations and assume idealized, static geometry throughout the process.
As digital technologies increasingly permeate timber construction [20,21], this reliance on tacit knowledge and manual adjustments poses significant challenges for automation [22]. Unlike skilled human operators, robot systems require explicit geometric definitions, high-fidelity digital models, and tight manufacturing tolerances to execute tasks with precision. Consequently, the flexibility inherent to traditional carpentry must be reinterpreted as computational logic—translated into digital workflows and sensor-based feedback systems capable of recognizing and adapting to material variability [23].
Building on these advances, robotic fabrication has more broadly emerged as a transformative paradigm in architecture, offering new possibilities for integrating digital design and material performance. Over the past decade, it has enabled significant improvements in precision, efficiency, and material utilization across a wide range of applications [1,24]. Digital and robotic methods have facilitated the production of increasingly complex structural and architectural components, responding to the growing demand for sustainability and prefabrication [25,26]. However, wood’s anisotropic behavior, internal stress variation, and moisture-dependent deformation introduce additional complexity into digital manufacturing workflows. These material-specific challenges have driven the development of adaptive robotic strategies capable of managing geometric deviations during both the design and fabrication stages [27,28].
The integration of advanced sensing technologies into robotic fabrication remains in its early stages, with recent research beginning to explore their potential [29,30]—including structured-light cameras, laser scanners, and adaptive control strategies—to enable dynamic geometric referencing and more responsive machining processes. These technologies have laid the groundwork for new approaches in robotic timber construction, where material variability must be addressed more directly. For instance, Svilans et al. [31] proposed digital timber fabrication workflows that integrate computational models and sensing tools directly into the design-to-production chain. Their work, focused on free-form timber components, illustrates how material-specific behavior can be captured and fed back into the process, resulting in more informed geometric decisions and structurally optimized outcomes.
A number of recent studies have operationalized these concepts, demonstrating practical implementations of adaptive sensing and fabrication in robotic timber workflows. For example, Vestartas and Weinand [29] developed a workflow for processing raw timber, using a stationary laser scanner mounted on an ABB robotic arm. Their approach automates the geometric acquisition of unprocessed logs, significantly reducing manual effort. The captured data are aligned to the robot’s coordinate system and used for adaptive toolpath planning, enabling direct fabrication from minimally processed timber.
Building on this notion of a sensor-driven approach, Brunn et al. [30] employed an eye-in-hand Zivid 3D camera for cooperative robotic assembly involving three ABB robots. By merging multiple 3D scans into a unified point cloud registered to the robot’s coordinate system, their method enables accurate pick-and-place operations and path planning—an essential capability for disassembly and reassembly tasks on existing timber structures.
To address the added complexity of multi-robot coordination, Adel et al. [32] proposed a feedback-based framework for adaptive timber construction. Using 2D laser profiling, the system refines element placement in real time through scanning-based feature extraction. The material processing workflow includes cutting, placing the elements with flat ends pressed against the top and bottom plates, and fastening them to assemble a complete wall.
Expanding on these strategies, Chai et al. [33] present a multifunctional robotic platform for the flexible processing of tree forks. The system features a dual-gantry KUKA setup and a unified control interface (FURobot) implemented in Grasshopper. One robot performs clamping with a gripper, while the other handles scanning, cutting, and joint drilling. A calibrated camera captures interest points on the component, which are used to align a pre-generated 3D model obtained through photogrammetry. Using hand–eye calibration, the model is transformed into the robot’s coordinate system, enabling localization of the tree fork within the workspace.
Focusing on accuracy within the robot’s workspace, Pantscharowitsch et al. [34] investigated the machining performance of an ABB industrial robot when milling a tenon in glued-laminated timber. Using laser tracking and scanning, the study generated absolute and relative accuracy maps, evaluating the robot’s machining precision across various positions and assessing its ability to perform precise milling along a vertical plane.
These studies demonstrate significant progress in robotic timber construction; however, important differences remain when compared to the approach presented here. Previous work has explored workflows involving static machining portals operating over moving workpieces for large-scale glue-laminated components [31], scanning entire logs to generate digital inventories, and strategies for direct fabrication from irregular raw wood [29]. Other approaches emphasize pick-and-place operations for disassembly and reassembly [30], multi-robot systems for assembling and fastening straight timber members with flat ends [32], or dual-robot platforms using photogrammetry and circular saws for processing tree forks [33]. Another line of research has investigated machining precision across the robot’s workspace, aiming to define areas of higher milling accuracy [34]. While these contributions mark important advances, they do not fully address the challenges posed by dimensional variability in real-world workshop and prefabrication settings, where timber components frequently exhibit subtle yet significant deviations that can compromise assembly accuracy.
To bridge this gap, we introduce and experimentally validate an integrated robotic workflow that combines 3D vision sensing, structured point cloud analysis, and coordinate system feedback to dynamically update robotic toolpath generation prior to milling. The proposed method directly addresses the inherent dimensional variability of timber by identifying the actual position, orientation, and cross-sectional dimensions of each individual element before milling begins. The system employs a Zivid 3D camera in an eye-in-hand configuration on a KUKA industrial robot to enable rapid and precise geometric acquisition. A geometric algorithm processes the point cloud by cropping and fitting planes, computing the WCS, and extracting dimensional parameters. By automating WCS alignment based on actual geometry, this solution enables accurate, efficient, and flexible robotic milling—enhancing joinery quality while significantly reducing setup time. The proposed workflow offers a scalable and adaptive strategy for robotic timber joinery and contributes to bridging the gap between high-level digital design and the material variability encountered in real fabrication environments.

2. Materials and Methods

2.1. Robotic and Vision System Setup

The experiments were conducted at the Robotic Construction and Manufacturing Laboratory (RCML) at Universidad Técnica Federico Santa María, Santiago, Chile. The laboratory setup consists of a KUKA Quantec KR 210 R3100-2 C industrial robotic arm mounted overhead on a KL 4000 C linear axis with a total travel of 10 m. To constrain the experimental variables, the linear motion of the external axis was not utilized during the trials.
The calibrated end-effector consists of an HSD spindle (ES951—8 kW) with automatic tool-changing capabilities. For the purposes of this workflow, however, a single three-flute solid carbide helical end mill was used, featuring a 20 mm diameter and a 102 mm cutting length. This tool is commonly employed in the lab for robotic milling of timber joints made of radiata pine, Douglas fir, and oak.
A custom 3D-printed bracket was designed and fabricated to mount a Zivid 2 M70 3D camera onto the aluminum adapter that connects the spindle to the robot flange. The bracket positions the camera at a strategic angle relative to the tool center point (TCP), enabling both vision-based image acquisition and machining operations to be carried out without physical interference.
The Zivid 2 M70 is a high-precision structured light sensor engineered for industrial machine vision applications. It captures accurate, full-color point clouds by integrating depth, surface normals, and RGB data into a single acquisition. The camera operates optimally within a working distance of 500–1100 mm, with a fixed focal distance of 700 mm, and provides a maximum resolution of 2.3 megapixels (1944 × 1200 pixels). These specifications make it particularly suitable for mid-range scanning tasks in robotic setups that demand precise spatial reconstruction.
The RCML setup included a fixed calibration board and a working table configured as a clamping system for timber workpieces, providing stable and repeatable reference conditions for calibration, image acquisition, and milling operations (Figure 4).

2.2. Software

The workspace and system configuration were modeled using Rhinoceros 8 [35], Grasshopper [36], and KUKA|prc [37]. Within this environment, 20 distinct angular poses of the KUKA robot were defined with respect to the calibration board, aiming to span a representative range of orientations across the working volume. These angular configurations were exported and subsequently reused for the camera-to-robot calibration procedure. The same toolchain was also employed to visualize, verify, and program the robot’s milling paths for tenon joints in offline mode.
The core algorithm for camera calibration, point cloud acquisition, and WCS computation was implemented in Python 3.9.12, using the Visual Studio Code (1.100) development environment.
Robot motion during the calibration and capture processes was managed using RoboDK 5.9 [38] from the host laptop, which communicated with the KUKA Robot Controller (KRC) over a dedicated 1 Gb Ethernet connection. The KRC was running the C3 Bridge Interface Server [39], enabling remote clients to send commands and receive responses.
Point cloud capture and processing were handled using the Zivid SDK and API [40]. The NumPy library [41] was employed for efficient manipulation of arrays and transformation matrices, as well as for general point cloud data operations. Open3D [42] was used for visualizing the processed point clouds and performing geometric operations necessary for WCS computation. SciPy [43] was used for a specific statistical operation to determine the physical dimensions of the workpiece.
To manage configuration data and facilitate communication between scripts, the YAML library [44] was used for reading and writing structured files, enabling seamless data exchange between programming environments in a human-readable and easily editable format.

2.3. Camera-to-Robot Calibration

The Zivid 2 M70 camera was mounted in an eye-in-hand configuration directly on the robot flange and connected to the host laptop via a dedicated 10 Gb Ethernet network, ensuring optimal data acquisition and processing performance. The calibration routine (camera_robot_calibration.py, Figure 5) computed the transformation between the robot’s flange coordinate system and the camera’s coordinate frame. The resulting 4 × 4 transformation matrix, along with the residual error data, was stored in a calibration_results.yaml file, which is used in subsequent workflow steps and retained for debugging or verification purposes.
The calibration routine involves connecting to the robot and moving it through the aforementioned 20 predefined angular poses using methods provided by RoboDK. At each pose, a 3D image of the calibration target is captured using a custom eye-in-hand method implemented through the Zivid API. This process establishes a set of correspondences between the robot’s end-effector poses and the observed camera frames. Based on this dataset, the system computes the eye-in-hand transformation matrix, which enables accurate mapping of visual reference points to the robot’s workspace. This calibration ensures precise coordinate system alignment and supports robust 3D measurements for downstream operations.

2.4. Point Cloud Capture

Following the completion of the camera-to-robot calibration, a dedicated script (pointcloud_capture.py, Figure 5) was developed to enable dynamic point cloud acquisition. The robot is instructed to move to a predefined capture pose, positioning the timber workpiece within the camera’s field of view—approximately at the optimal focal distance of 700 mm—for accurate data acquisition. Once in position, the host laptop connects to the camera, updates the capture parameters, and initiates the acquisition process.
The Zivid camera outputs an organized point cloud, in which each 3D point corresponds directly to a pixel in the 2D image. This one-to-one mapping preserves the spatial structure of the data, meaning that adjacent pixels in the image correspond to neighboring points in the 3D space (Figure 6).
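This pixel-to-point structure can be illustrated with a synthetic organized array shaped like the Zivid output; the data below are fabricated for demonstration, with NaN marking pixels without valid depth.

```python
import numpy as np

# A synthetic organized point cloud shaped like the camera output:
# one XYZ triple per pixel of a (rows x cols) image, NaN where invalid.
rows, cols = 1200, 1944                     # Zivid 2 M70 full resolution
xyz = np.full((rows, cols, 3), np.nan)      # start fully invalid
u, v = np.meshgrid(np.arange(cols), np.arange(rows))
# Fill a rectangular region to mimic a planar surface facing the camera.
xyz[400:800, 600:1400, 0] = (u[400:800, 600:1400] - 972) * 0.5   # X (mm)
xyz[400:800, 600:1400, 1] = (v[400:800, 600:1400] - 600) * 0.5   # Y (mm)
xyz[400:800, 600:1400, 2] = 700.0                                # Z (mm)

# Because the cloud is organized, pixel (row, col) indexes its 3D point
# directly, and adjacent pixels are neighboring points in space.
p = xyz[500, 1000]
q = xyz[500, 1001]
valid = ~np.isnan(xyz[..., 2])              # per-pixel validity mask
```

Here adjacent pixels `p` and `q` are 0.5 mm apart, and `valid` gives the mask later used to discard NaN points before transformation.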
Using the flange-to-camera transformation, the 4 × 4 matrix corresponding to the capture pose—referred to as the world-to-camera matrix—is computed. This matrix is subsequently applied to the original point cloud, producing a transformed point cloud that is saved as transformed_cloud.ply. The *.ply format is used to facilitate visual verification against the workspace and system configuration modeled in Rhinoceros. The transformed point cloud is archived both for documentation purposes and for debugging.
The robot joint values at the moment of capture, along with the computed world-to-camera transformation matrix, are stored in a pose_capture_info.yaml file. This file is saved together with the original point cloud (original_pointcloud.zdf, Figure 5) and used in the subsequent step of the workflow.
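The composition of the capture pose with the hand-eye result can be sketched as follows. The matrices are purely illustrative stand-ins (identity rotations with translations), not calibration output.

```python
import numpy as np

def transform_points(T, points):
    """Apply a 4x4 homogeneous transform to an (N, 3) array of points."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ T.T)[:, :3]

# Hypothetical poses: the robot's base-to-flange pose at capture time and
# the flange-to-camera matrix from hand-eye calibration (here pure
# translations for illustration, in mm).
base_to_flange = np.eye(4);   base_to_flange[:3, 3] = [1200.0, 300.0, 900.0]
flange_to_camera = np.eye(4); flange_to_camera[:3, 3] = [0.0, 80.0, 150.0]

# Composing both gives the world-to-camera matrix used to express the
# captured cloud in the robot's world frame.
world_to_camera = base_to_flange @ flange_to_camera

camera_points = np.array([[0.0, 0.0, 700.0]])   # a point 700 mm ahead
world_points = transform_points(world_to_camera, camera_points)
```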

2.5. Workpiece Coordinate System Computation

The main algorithm processes the geometric data from the point cloud to filter and retain only the subset of points corresponding to the exposed end of the timber workpiece. It then computes its local coordinate system (compute_wcs_pointcloud.py, Figure 5). The main steps of the method are as follows:
  1. The world-to-camera matrix, stored in the pose_capture_info.yaml file, is loaded along with the original point cloud (original_pointcloud.zdf).
  2. From this original point cloud (referenced to the camera coordinate frame), the point closest to the camera is located—specifically, the point with the minimum Z-value. In the capture configuration used, this point corresponds approximately to the top-front-right vertex of the timber piece. It is then transformed into the world coordinate system and stored as cropping_origin_world_xyz, which serves as the reference for a subsequent cropping operation.
  3. Invalid points (Not a Number, NaN) are filtered out from the original point cloud, and all valid points are transformed from the camera frame to the world frame using the transformation matrix. A cropping operation is then applied based on the world X-coordinate: points whose X-value exceeds that of cropping_origin_world_xyz plus a 200 mm threshold are removed, effectively isolating the exposed end of the timber workpiece. The resulting cropped point cloud is retained for subsequent operations.
  4. This step defines the origin and orientation. Three planes are sequentially fitted, excluding points (inliers) used for one plane from subsequent fittings. The Random Sample Consensus (RANSAC) algorithm—specifically, a customized fit_plane_ransac function—is used to identify planes that best fit subsets of the point cloud. RANSAC helps find the dominant plane within a set of candidate points by discarding points that deviate significantly (Figure 7).
    a. YZ’ Plane (red points): Points from the cropped cloud whose X-coordinates are close to the median X-value are selected. A plane (plane_yz_model) is fitted to these points using RANSAC, and the inliers are stored.
    b. XZ’ Plane (green points): From the remaining points (excluding the YZ’ plane inliers), points whose Y-coordinates are close to the median Y-value (of this subset) are selected. A second plane (plane_xz_model) is fitted to these points, and the inliers are stored.
    c. XY’ Plane (blue points): From the remaining points (excluding the inliers from the YZ’ and XZ’ planes), points whose Z-coordinates are close to the median Z-value (of this subset) are selected. A third plane (plane_xy_model) is fitted to these points, and the inliers are stored.
  5. The intersection point of the three fitted planes (plane_yz_model, plane_xz_model, and plane_xy_model) is computed. This point (x, y, z) is defined as the origin of the WCS (wcs_xyz).
  6. Using the normal vectors of the three fitted planes, the x, y, and z axes of a new orthonormal coordinate system are computed:
    a. The WCS x-axis is derived from the cross product of the normal vectors of the XZ’ and XY’ planes.
    b. The WCS y-axis is derived from the cross product of the normal vectors of the XY’ and YZ’ planes.
    c. The WCS z-axis is computed to be orthogonal to the WCS x and y axes.
    The resulting axes are adjusted to ensure a right-handed coordinate system. A rotation matrix is then constructed from these axes, and the Euler angles A, B, and C (following the ZYX convention of KUKA) are extracted from the rotation matrix to finally construct the WCS origin and orientation.
  7. The inlier points from the first fitted plane (YZ’ plane, red) are used. These points are transformed into the newly computed WCS (origin and orientation):
    a. Width is calculated by taking the mode of the Y-coordinates (in the WCS frame) from a subset of these transformed points—specifically, those with the highest Y-values.
    b. Height is calculated by taking the mode of the Z-coordinates (in the WCS frame) from a subset of these transformed points—specifically, those with the lowest Z-values.
    The scipy.stats.mode function is used for this calculation.
  8. A dictionary is created containing the WCS (X, Y, Z, A, B, C) parameters defining the KUKA BASE origin and orientation. The calculated Width and Height values are added to this dictionary, which is saved as an output_parameters.yaml file for use in the next step of the workflow.
  9. The cropped point cloud in world coordinates, including its color information, is saved as cropped_cloud.ply using Open3D.
  10. A visualization of the cropped point cloud is prepared. The KUKA BASE coordinate system axes are rendered at the computed origin and orientation (WCS). The 3D scene is displayed for user validation.
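The plane-fitting and frame-construction steps above can be sketched in plain NumPy. The fit_plane_ransac below is a simplified stand-in for the customized function named in the text, the prism point sets are synthetic and noiseless, and the Euler extraction assumes the KUKA ZYX (A about Z, B about Y, C about X) convention.

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, threshold=1.0, rng=None):
    """Fit a plane (unit normal n, offset d, with n.p + d = 0) by RANSAC:
    sample 3 points, keep the model with the most inliers."""
    rng = np.random.default_rng(rng)
    best_model, best_inliers = None, np.zeros(len(points), bool)
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                          # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -np.dot(n, p0)
        inliers = np.abs(points @ n + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (n, d), inliers
    return best_model, best_inliers

def intersect_planes(m1, m2, m3):
    """Solve the 3x3 system n_i . p = -d_i for the common point."""
    A = np.array([m1[0], m2[0], m3[0]])
    b = -np.array([m1[1], m2[1], m3[1]])
    return np.linalg.solve(A, b)

def kuka_abc_from_rotation(R):
    """Extract A, B, C angles (degrees) assuming R = Rz(A) @ Ry(B) @ Rx(C)."""
    B = np.degrees(np.arcsin(-R[2, 0]))
    A = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    C = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    return A, B, C

# Synthetic demo: three faces of an axis-aligned 90 x 90 mm prism whose
# top-front corner sits at the world origin.
gen = np.random.default_rng(0)
end  = np.column_stack([np.zeros(300), gen.uniform(0, 90, 300), gen.uniform(-90, 0, 300)])
top  = np.column_stack([gen.uniform(0, 200, 300), gen.uniform(0, 90, 300), np.zeros(300)])
side = np.column_stack([gen.uniform(0, 200, 300), np.zeros(300), gen.uniform(-90, 0, 300)])
m_yz, _ = fit_plane_ransac(end, rng=0)
m_xy, _ = fit_plane_ransac(top, rng=0)
m_xz, _ = fit_plane_ransac(side, rng=0)
origin = intersect_planes(m_yz, m_xz, m_xy)   # WCS origin at the corner
```

In the real workflow the fitted normals must additionally be sign-corrected and orthonormalized into a right-handed triad before the Euler extraction, as described in step 6.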

2.6. Toolpath Generation for Milling

Using the Python 3 scripting component within Grasshopper, the YAML library is imported to read the output_parameters.yaml file and extract the X, Y, Z, A, B, and C parameters defining the KUKA origin and orientation. These values are loaded into KUKA|prc using the hardcoded BASE mode. The Width and Height values are then updated in the geometry that defines the milling toolpaths—specifically, in the tenon joint program previously created.
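The parameter handoff can be sketched as a YAML round-trip; the dictionary values below are illustrative, and an in-memory buffer stands in for the output_parameters.yaml file.

```python
import io
import yaml  # PyYAML

# Parameters of the kind written by the WCS computation step: the KUKA
# BASE pose plus the measured cross-section (illustrative values).
params = {"X": 1250.4, "Y": -310.2, "Z": 642.8,
          "A": 0.31, "B": -0.12, "C": 0.05,
          "Width": 89.6, "Height": 90.3}

# Serialize as would be done into output_parameters.yaml ...
buffer = io.StringIO()
yaml.safe_dump(params, buffer)

# ... and read it back, as the Grasshopper Python component does, to feed
# the BASE values into KUKA|prc and resize the tenon toolpath geometry.
buffer.seek(0)
loaded = yaml.safe_load(buffer)
base = [loaded[k] for k in ("X", "Y", "Z", "A", "B", "C")]
```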
Visualizing the BASE coordinate system, the updated workpiece dimensions, and all milling trajectories in Rhinoceros verifies that the program will execute correctly and safely within the defined workspace. Finally, the .src file containing the KUKA Robot Language (KRL) code is sent to the robot for offline milling execution.

2.7. Experimental Validation Procedure

To assess the effectiveness of the proposed workflow for adaptive WCS measurement, a series of milling tests was conducted using Douglas fir timber specimens. The test pieces featured a nominal cross-section of 90 × 90 mm and a length of 1 m. Each specimen was cut using a miter saw to ensure square 90° end cuts with well-defined vertices. Douglas fir was specifically selected due to its characteristic dimensional variability, representative of typical conditions encountered in workshop environments. In addition, Douglas fir—commonly known as “pino oregón” in Chile—has both historical and practical significance in local timber construction. It is widely used for structural and finish applications and stands among the most commonly employed softwoods in Chilean carpentry. Its widespread availability, favorable strength-to-weight ratio, ease of machining, and pronounced aesthetic qualities make it an ideal reference material for evaluating robotic workflows under realistic fabrication scenarios.
Four experimental workflows for WCS measurement were evaluated and compared, using five specimens per approach, as follows:
I. Fixed WCS measured once: A single BASE coordinate system was defined using the conventional manual 3-point method, relying on operator input and physical referencing of the timber workpiece. The same coordinate system and milling program were applied to all specimens.
II. Individual WCS per piece: A separate BASE coordinate system was manually defined for each specimen using the 3-point method. A single milling program was used for all specimens, but in this case, the BASE data was updated automatically on the KRC.
III. Individual WCS and cross-section per piece: For each specimen, both the BASE coordinate system (using the 3-point method) and the cross-sectional dimensions were measured and updated. These values were then used to regenerate and adjust the milling program accordingly.
IV. Vision-based workflow: The algorithm described in Section 2.3, Section 2.4, Section 2.5, and Section 2.6 was implemented, utilizing the Zivid 2 M70 camera, along with automated calibration and point cloud acquisition, to dynamically compute the BASE coordinate system for each specimen.
For each method, five identical tenon joint milling operations were programmed and executed. Twenty specimens were processed under controlled laboratory conditions at the RCML, ensuring consistency across all test scenarios. To ensure the independence of each experimental run, a calibration test was conducted prior to executing each approach. This procedure involved measuring the BASE coordinate system using the corresponding method and milling a sacrificial specimen, which was intentionally excluded from the evaluation dataset. The purpose of this step was to eliminate any potential carryover of coordinate definitions, toolpath adjustments, or machine state between approaches, thereby isolating the effects of each WCS measurement strategy and ensuring that comparisons across methods remained valid and experimentally robust.
The evaluation focused on three main aspects: time per piece and total workflow duration, milling accuracy, and each system’s ability to accommodate dimensional variations in the timber workpieces.
The validation procedure involved the following steps:
  • Recording the time required to define the WCS using each of the four approaches;
  • Measuring the X, Y, and Z positional accuracy of the milled tenon joints by comparing the programmed toolpaths with the actual cuts on the specimens;
  • Performing all measurements using a caliper, following a consistent measurement protocol to ensure repeatability and reliability across all specimens;
  • Calculating the total 3D deviation of the milled joints relative to the intended toolpath geometry.

3. Results

3.1. Workflow Times

Controlled trials were conducted for each of the four workflows—I: Fixed WCS measured once, II: Individual WCS per piece, III: Individual WCS and cross-section per piece, and IV: Vision-based workflow—during which the total procedure time required to process five pieces was recorded. To enable a normalized and objective comparison among the different workflows, the mean time per piece was subsequently calculated for each case (Table 1). This derived metric provides a standardized basis for evaluating the relative efficiency and operational performance of each method.
For a more granular comparison, the mean time per piece was analyzed across six defined procedural steps (Table 2 and Figure 8):
  • Calibration: This step was only included in IV—Vision-based workflow. The camera-to-robot calibration routine required approximately 135 s to complete. This time was prorated across the five pieces, resulting in 27 s per piece. The routine involved the robot moving through 20 predefined poses, capturing images of a calibration board, and computing the corresponding transformation matrix.
  • Workpiece Clamping: For practical purposes, a uniform average clamping time was assumed across all workflows, as no significant variation was observed between approaches.
  • WCS Measurement: In I—Fixed WCS (once), the manual WCS measurement was performed only once, and its total duration was prorated across the five pieces.
    In II and III, an individual WCS measurement was carried out for each piece.
    In IV—Vision-based workflow, the time required to run both pointcloud_capture.py and compute_wcs_pointcloud.py was recorded and included in this step.
  • Cross-Section Measurement: In III—WCS + section per piece, this step involved manual measurement of the workpiece cross-section using a caliper.
    In IV—Vision-based workflow, equivalent operations were performed by specific functions within the compute_wcs_pointcloud.py script.
  • Program Data Update: This step accounts for the time required to enter or modify milling parameters based on the measured data.
    In I, it was performed only once, and the total duration was prorated across the five pieces.
    In II, this step was not recorded, as WCS updates were applied automatically via the robot’s BASE data.
    In III, individual WCS and cross-section values were manually updated in the program for each piece.
    In IV, the recorded time corresponds to the average duration the operator spent reviewing toolpaths offline to ensure safe execution of the generated code.
  • Milling Execution: Observed variations in milling time across workflows were minimal and considered negligible. Accordingly, a consistent average milling duration was used to standardize the comparison across all cases.
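The per-piece normalization used above, in which one-time steps are amortized across the batch, can be written as a small helper (our own sketch; the function and its names are illustrative, not part of the study's code):

```python
def mean_time_per_piece(one_time_steps_s, per_piece_steps_s, batch_size):
    """Mean processing time per piece, in seconds: one-time steps
    (e.g., the 135 s camera-to-robot calibration, or a fixed WCS
    measured once) are prorated across the batch, while per-piece
    steps (clamping, measurement, milling) count in full."""
    return sum(one_time_steps_s) / batch_size + sum(per_piece_steps_s)

# The 135 s calibration of workflow IV amortized over 5 pieces adds 27 s each
calibration_overhead = mean_time_per_piece([135.0], [], 5)
```

With larger batches the one-time cost shrinks quickly: amortized over 50 pieces, the same calibration would add only 2.7 s per piece.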
The fastest workflow was I—Fixed WCS (once), with a mean time per piece of 05:17, which serves as the baseline.
Compared to this reference, the following applies:
  • II—WCS per piece required approximately 50% more time per piece;
  • III—WCS + section per piece required approximately 79% more time due to the addition of manual cross-section measurement;
  • IV—Vision-based workflow showed only a 9% increase in time, making it the most efficient alternative after the baseline.

3.2. Positional Accuracy of the Milled Tenon Joints

The position of the milled tenon joints (Figure 9) was measured, and the relative displacement along the x, y, and z axes was calculated with respect to the ideal CAD model. This displacement is reported as the milling error along each axis (Table 3). For reference, a perfectly milled tenon would yield an error of 0.0 mm in X, Y, and Z, i.e., exact alignment with the nominal geometry.
Axis-specific errors reported in Table 3 were visualized using an error bar plot (Figure 10), where each data point corresponds to a measured specimen. In addition, the overall 3D positional errors were calculated and plotted independently in Figure 11 to provide a comparative view of total deviation across workflows.
The 3D positional error for each specimen was computed using the Euclidean norm, based on the individual deviations along the x, y, and z axes.
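In code, this is a one-line norm; the sketch below (with illustrative numbers, not the study's measurements) makes the computation explicit:

```python
import numpy as np

def positional_error_3d(measured_xyz, nominal_xyz):
    """Total 3D positional error: the Euclidean norm of the per-axis
    deviations between the milled feature and its nominal CAD position."""
    deviation = np.asarray(measured_xyz, float) - np.asarray(nominal_xyz, float)
    return float(np.linalg.norm(deviation))

# Illustrative per-axis deviations of 0.23, -0.03, and 0.36 mm
error = positional_error_3d([100.23, 49.97, 30.36], [100.0, 50.0, 30.0])
```

Note that, because the norm is computed per specimen, the mean 3D error over a set of specimens is not the norm of the mean axis errors.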
Finally, Table 4 compares the mean positional errors and standard deviations per workflow, summarizing the overall milling accuracy across approaches.
The data presented in Table 4 reveal substantial differences in positional accuracy across the four evaluated workflows. Workflow I—Fixed WCS (once) exhibited the highest 3D milling error, with a mean deviation of 4.27 ± 0.40 mm, primarily driven by large errors in the x and y axes, and high variability in y (±2.76 mm), indicating limited repeatability due to the static reference system. Workflow II—WCS per piece showed a slight improvement in 3D error (3.30 ± 1.02 mm), but still presented considerable dispersion, particularly along the y-axis.
A notable increase in accuracy was observed in Workflow III—WCS + section per piece, with a reduced 3D error of 2.20 ± 0.88 mm. Despite a higher z-axis error (1.22 mm), this approach benefited from piece-specific adjustments, especially in x. The most accurate and consistent results were achieved with Workflow IV—Vision-based, which yielded a significantly lower 3D error of 0.64 ± 0.21 mm. Errors in all three axes were minimal, with reduced standard deviations, confirming the effectiveness of vision-based automation for achieving high-precision milling with improved reliability across specimens.
To interpret the milling error values in relation to the WCS, it is important to consider the functional impact of each axis:
  • x-axis error represents a discrepancy in the length of the tenon;
  • y-axis error reflects a lateral misalignment of the tenon, typically manifested as dimensional differences between the left and right shoulders;
  • z-axis error corresponds to a vertical offset, which is often visually noticeable, as it directly cuts into the nominal geometry of the tenon, reducing its effective height.
A graphical example of this effect is illustrated in Figure 12, which highlights the consequences of z-axis displacement. This type of error is particularly critical, as it alters the functional geometry of the tenon, potentially compromising the structural fit and assembly of the joint.

4. Discussion

4.1. Comparison Between Traditional Mechanical Processing and Timber Processing

Traditional mechanical processing, commonly applied to materials such as metal or plastic, relies on consistent material properties, including homogeneity, isotropy, and predictable mechanical behavior. Consequently, established machining strategies typically involve fixed coordinate referencing, precise clamping systems, and standardized tooling parameters with minimal adaptation during production. This approach ensures repeatability and predictable outcomes across large batches of nearly identical workpieces.
In contrast, timber processing poses unique challenges due to wood’s inherent variability and anisotropic characteristics [45]. Timber exhibits significant dimensional changes influenced by moisture content, internal stresses, and grain orientation, while its machinability is affected by defects such as knots. These material-specific factors necessitate adaptive approaches, as fixed referencing systems can introduce considerable errors and misalignments. The problem is exacerbated by manufacturing defects, such as inaccurate sawing or planing, which often result in non-square cross-sections. For instance, a timber piece with nominal dimensions of 90 × 90 mm can in practice measure 88 × 92 mm due to production variations or changes in moisture content.
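To make the consequence concrete, the sketch below (our own simplification, assuming a toolpath referenced to the nominal cross-section center) computes the per-face offsets such a piece would introduce before any clamping or placement error is considered:

```python
def face_offsets_mm(nominal_wh, measured_wh):
    """Offset of each face pair relative to the nominal envelope,
    assuming the piece is referenced at its cross-section center:
    positive = faces lie outside nominal, negative = inside."""
    return tuple((m - n) / 2.0 for n, m in zip(nominal_wh, measured_wh))

# A nominal 90 x 90 mm piece that actually measures 88 x 92 mm
offsets = face_offsets_mm((90.0, 90.0), (88.0, 92.0))  # (-1.0, 1.0)
```

Every milled surface of this piece would thus be offset by up to 1 mm under a fixed coordinate reference, which is on the order of the per-axis errors observed for workflows I and II.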
The basic properties of wood—including density, moisture content, and grain orientation—directly influence its mechanical behavior and machinability, affecting cutting resistance, dimensional stability, and surface quality during robotic milling. Structural-grade Pseudotsuga menziesii (Douglas fir) presents well-defined ranges of permissible defects and density, which support its predictable performance under machining conditions [46]. In this study, although detailed modeling of wood variability is beyond the scope, the proposed workflow was tested using commercially available and dry specimens to ensure dimensional stability during evaluation.
Consequently, robotic timber processing demands dynamic measurement and adaptive calibration strategies to compensate for geometric variations at the individual workpiece level. The proposed robotic workflow integrates computer vision, structured point cloud processing, and adaptive toolpath generation, offering substantial improvements in machining accuracy and flexibility compared to conventional fixed-coordinate processes. By dynamically updating the WCS based on the actual material geometry, the presented method effectively addresses the core limitations of conventional machining when applied to variable and anisotropic materials such as timber.
Therefore, the innovation of this work is the development of a robust, vision-based adaptive strategy explicitly tailored to the unique properties of timber—a challenge that conventional mechanical processing strategies have not typically needed to address.

4.2. Adaptive Robotic Workflow for Timber Joinery

A similar strategy is presented by Geng et al. [18], who use an eye-in-hand 3D camera mounted on a welding robot to extract seam geometry from steel plates forming a concave triple-plane butt joint. They use a multi-plane RANSAC fitting algorithm to detect the intersection planes and plan the welding path. The algorithm takes approximately 5 s to compute a welding path for a single workpiece, with a reported maximum deviation of less than 1.5 mm compared to the hand-guided teaching method. Although the material and application differ, the geometric logic is comparable to our method for WCS computation in robotic timber joinery.
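Although implementations differ in detail, the single-plane RANSAC step at the heart of such multi-plane fitting can be sketched in plain NumPy (a simplified illustration; parameter values and names are ours, not taken from either study):

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.5, seed=0):
    """RANSAC plane fit for an Nx3 point cloud (units: mm). Repeatedly
    fits a candidate plane through 3 random points, keeps the candidate
    with the most inliers within distance `tol`, then refines the plane
    by least squares (SVD) over the inlier set. Returns (n, d) with
    the plane defined by n . p = d."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                      # skip degenerate (collinear) samples
            continue
        n /= norm
        inliers = np.abs((points - p0) @ n) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    pts = points[best_inliers]
    centroid = pts.mean(axis=0)
    normal = np.linalg.svd(pts - centroid)[2][-1]  # smallest singular direction
    return normal, float(normal @ centroid)
```

Libraries such as Open3D provide equivalent, optimized routines; the point here is only the geometric logic of sampling, scoring inliers, and refining.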
In the context of raw timber scanning, Vestartas and Weinand [29] report that acquiring point clouds of tree trunks (5–10 m long, 40–70 cm in diameter) using linear robot motions takes about 3–4 min. Their reference-point-based calibration achieves an accuracy of ±1.5 to ±2.0 mm, which they deem sufficient for locating timber logs in downstream joinery processes. The primary goal of their scanning is to generate complete digital stock models for use in fabrication workflows.
Chai et al. [33] explore photogrammetry as a means of capturing the complex geometry of tree forks. Their scanning and localization routine typically takes 20–30 min per component. Although the resulting localization accuracy of approximately 3 mm may seem coarse, the authors argue that this level of precision does not compromise structural integrity at the scale of their applications. However, they also highlight the challenges posed by the irregularity and unpredictability of natural wood geometry.
Complementing these approaches, Pantscharowitsch et al. [34] focus on the absolute accuracy of robotic milling within the robot’s vertical workspace. To ensure consistent referencing, they incorporate a laser-tracking system and implement a seven-point measurement routine to establish a workpiece coordinate system prior to machining. Their analysis reports deviations ranging from −1.283 to +1.684 mm in the y-axis and from −2.591 to +0.402 mm in the z-axis—exceeding the ±2 mm tolerance in some cases. Notably, they identify a correlation between robot reach and accuracy, recommending mid-range positions within the robot’s working envelope to maximize precision. These findings underscore the importance of spatial calibration, referencing strategy, and workspace planning in robotic timber joinery.
These studies highlight a common challenge: the need for accurate, adaptable workpiece coordinate systems to ensure precision in robotic fabrication.
Dimensional accuracy is critical in timber joinery. Since jointed pieces may shrink unpredictably, an overly tight fit can cause splitting during assembly or raising, while a loose fit may lead to misalignment [47]. Some clearance is often necessary—especially when assembling multiple joints at once—but the ideal remains a smooth, sliding fit [3]. For example, in a typical mortise-and-tenon joint connecting a double-shouldered post to a sill (a horizontal timber resting on the foundation and linking the posts in a frame), the construction implications of previously reported positional errors can be interpreted as follows (Figure 13):
  • WCS X (red axis; Figure 13): A post tenon shorter than its nominal length may reduce the moment capacity of the joint [48,49]. Conversely, a longer post tenon may prevent the shoulders from properly transferring vertical loads onto the sill or may even lift the plate (the primary longitudinal timber in a frame that ties the bents at the top, stiffens the wall and roof planes, and supports the rafters) during assembly.
  • WCS Y (green axis; Figure 13): Lateral misalignment of the tenon, typically manifested as dimensional differences between the left and right shoulders, can lead to misalignment of the double-shouldered posts and may result in surface deformation when installing cladding or finishing elements over the frame.
  • WCS Z (blue axis; Figure 13): A vertical displacement directly cuts into the nominal geometry of the tenon, creating a mismatch with the mortise. This may allow the tenon to shift within the joint, reducing the tightness of the fit.
Such dimensional discrepancies, common in workshop environments, can significantly compromise the precision and performance of timber joints, particularly in processes requiring high geometric fidelity. If not addressed through proper adjustment or tolerance strategies, these deviations can propagate throughout the construction process, leading to delays, on-site modifications, or misalignments during assembly; in severe cases, they may compromise structural performance or result in long-term deterioration of building components.
This need for precision becomes particularly critical in the execution of carpentry connections for timber construction, which are especially relevant in frame structures—including, but not limited to, post-and-beam systems. While these joints can also be applied in certain ductile or assembly connections within solid, plate, or planar systems, their utility becomes especially evident in the connection of three-dimensional modules due to their ability to facilitate precise and reversible linking. Post-and-beam assemblies, in particular, involve orthogonal intersections that define the volumetric organization of architectural spaces [50]. Consequently, ensuring precision across all dimensional edges is critical—not only to guarantee geometric stability during assembly, but also to prevent delays associated with on-site adjustments and to enable effective maintenance or recovery of components during disassembly [51]. This level of precision is vital in the context of prefabrication and aligns with emerging principles of circular construction, which envision the reuse of structural components beyond the initial service life of the building.
To mitigate these challenges, the proposed vision-based workflow dynamically adapts milling trajectories to the actual dimensions of each workpiece. This ensures that joint components, such as mortises and tenons, can be fabricated with dimensional reciprocity, which is critical for both the structural integrity and the aesthetic quality of timber assemblies. By aligning toolpaths with real-world geometries, the system reduces risks associated with manual measurement errors and material inconsistencies, leading to tighter fits and improved joint performance in situ.
It is worth noting that, despite the presence of visible knots in some of the timber components—such as the middle element in Figure 9 IV—no significant impact was observed on the measurement process. In certain cases, knots are embedded within the timber and only become apparent during machining, when the cutting tool dislodges loose knot material. While this may produce a characteristic sound during milling, it does not affect the dimensional evaluation performed by the vision system.
In addition to enhancing precision, this adaptive approach improves the scalability of robotic timber fabrication by reducing reliance on standardized material dimensions and minimizing post-processing adjustments. It demonstrates how integrating coordinate system feedback—i.e., dynamically adjusting milling references based on measurements—into robotic programming can significantly improve outcomes in contexts where natural material variability is unavoidable.
Nonetheless, it is important to contrast this with conventional practices. An experienced robot programmer can typically complete the 3-point method in approximately 3.5 min (mean time measured in our experiments). In scenarios where workpieces exhibit stable and standardized dimensions, this process suffices, as the BASE coordinate system can be defined once and reused across multiple machining operations.
Yet, when handling repetitive tasks involving variable timber components, relying on a single predefined BASE frame introduces critical issues—particularly in the fabrication of precision-dependent timber joints.
Given the inherent variability of timber, even minor deviations between workpieces can lead to misalignments, resulting in loose fits or assembly failures. In traditional workflows, such discrepancies accumulate when machining multiple pieces without recalibrating, compromising both the precision and structural performance of the joints.
The adaptive workflow presented in this study addresses this limitation by recalculating the BASE coordinate system for each individual workpiece, ensuring alignment with the actual material geometry. This eliminates dependence on uniform dimensions and manual recalibration, offering clear advantages in small-batch production, customized fabrication, and the processing of timber with variable cross-sections.
While the workflow has demonstrated significant improvements in adaptability and milling precision, further experimentation is required to enhance its robustness and to package it as an integrated software solution. Future research will focus on exploring alternative geometric methods for calculating the WCS, potentially increasing precision when handling irregular or more complex geometries.
In particular, the current method assumes that the timber piece presents three reference surfaces that are approximately planar and mutually orthogonal, which facilitates the calculation of the WCS. This constraint limits the applicability of the approach in cases where pieces include non-orthogonal faces, angled cuts, or curved sections such as round timber or logs. In such scenarios, the method would need to be adapted—either by incorporating alternative geometric referencing strategies or by extending the vision-based surface analysis—to ensure accurate alignment and measurement.
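Under this three-plane assumption, the geometric core of the WCS computation can be sketched as follows (our own simplified illustration, not the study's code): the origin is the intersection of the three fitted planes, and the axes come from snapping the three plane normals to the nearest proper rotation.

```python
import numpy as np

def wcs_from_planes(normals, offsets):
    """Build a homogeneous workpiece transform from three fitted planes
    n_i . p = d_i whose unit normals are roughly mutually orthogonal.
    Origin: the triple-plane intersection (solve N p = d). Axes: the
    normals projected onto the nearest proper rotation via the
    orthogonal Procrustes solution (SVD)."""
    N = np.asarray(normals, dtype=float)          # 3x3, rows are unit normals
    origin = np.linalg.solve(N, np.asarray(offsets, dtype=float))
    U, _, Vt = np.linalg.svd(N)
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt  # right-handed
    T = np.eye(4)
    T[:3, :3] = R.T                               # columns: x, y, z axes
    T[:3, 3] = origin
    return T
```

A transform of this kind could then be used to update the BASE coordinate system on the controller for each piece, as workflow IV does; for non-orthogonal or curved faces the plane model itself, not just this step, would need to change.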
Additionally, implementing and evaluating an eye-to-hand configuration may address operational limitations associated with the current eye-in-hand approach. Mounting the camera directly on the robot flange exposes it to factors such as vibrations and the accumulation of sawdust or fine wood particles, which could impact long-term sensor performance—even with industrial-grade 3D cameras. A fixed, external camera setup could mitigate these risks and enhance the reliability of vision-based measurement in continuous or demanding production environments.
These developments aim to expand the applicability of the workflow across diverse timber fabrication scenarios and further optimize adaptability in robotic milling processes.

5. Conclusions

This experimental validation demonstrates the potential of computer vision systems integrated into robotic workflows to enhance accuracy and adaptability in robotic timber joinery. Compared to conventional methods, the incorporation of a 3D camera and detection algorithms enables an effective response to the dimensional variability of wood, maintaining milling accuracy without requiring standardized material dimensions or manual measurement. Methodologically, the viability of an automated approach to defining the workpiece coordinate system is validated, marking a step toward more scalable, replicable, and robust solutions in robotic timber construction.
Among the four tested workflows, the vision-based system achieved the lowest mean 3D positional error, 0.64 ± 0.21 mm, with small mean per-axis errors of 0.23 mm, −0.03 mm, and 0.36 mm in x, y, and z, respectively. In contrast, traditional approaches based on fixed or manually defined workpiece coordinate systems exhibited 3D errors of up to 4.27 ± 0.40 mm, primarily due to accumulated deviations and the absence of per-piece calibration. Despite a moderate 9% increase in average processing time compared to the baseline, the vision-based workflow significantly improved geometric accuracy without requiring additional manual steps. This additional time can likely be reduced by further refining the algorithm. Calibration time becomes negligible in workflows involving multiple workpieces, as the fixed duration of the calibration step is amortized over a larger batch; in this comparison, the total calibration time was distributed across five pieces. Additionally, the offline toolpath verification step, currently included as a precautionary measure, could be eliminated with further robustness and reliability testing. As the current implementation remains experimental, there is substantial room for improving the codebase and optimizing the computational performance of the vision-based workflow.
While still under development, this approach creates new opportunities for customized production of structural components with high geometric accuracy, reducing operational time without compromising the fit or structural performance of the joinery.

Author Contributions

F.Q.-Z.: Conceptualization, methodology, software, investigation, resources, data curation, writing—review and editing, visualization, funding acquisition. R.G.-A.: Conceptualization, validation, investigation, writing—review and editing, supervision. A.M.-R.: Conceptualization, validation, investigation, writing—review and editing, supervision. L.F.G.-B.: Conceptualization, validation, investigation, resources, writing—review and editing, funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This research was originally funded by the Ministry of Science of Chile through the National Agency for Research and Development (ANID) via the National PhD Scholarship, grant number 21202373. It is currently funded by Universidad Técnica Federico Santa María through the internal research project “Integrated Workpiece Measurement System for Robotic Timber Construction”, under the USM 2024 Research Initiation Line, project code PI_LII_24_06.

Data Availability Statement

Data are contained within the article.

Acknowledgments

We are deeply grateful to the colleagues and teaching assistants of AReA (Área de Robots en Arquitectura) at the Department of Architecture, Universidad Técnica Federico Santa María, for their valuable support and collaboration throughout this work.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
WCS: Workpiece Coordinate System
RANSAC: Random Sample Consensus
KRC: KUKA Robot Controller
KSS: KUKA System Software
KRL: KUKA Robot Language
RCML: Robotic Construction and Manufacturing Laboratory at Universidad Técnica Federico Santa María, Santiago, Chile

References

  1. González Böhme, L.F.; Quitral Zapata, F.; Maino Ansaldo, S. Roboticus tignarius: Robotic reproduction of traditional timber joints for the reconstruction of the architectural heritage of Valparaíso. Constr. Robot. 2017, 1, 61–68.
  2. González-Böhme, L.F.; Maino-Ansaldo, S. Uniones Carpinteras de Valparaíso: La Geometría de Ensambles y Empalmes; RIL Editores: Valparaíso, Chile, 2019; p. 156.
  3. Beemer, W. Learn to Timber Frame: Craftsmanship, Simplicity, Timeless Beauty; Storey Publishing: North Adams, MA, USA, 2016.
  4. Beemer, W. Timber framing for beginners: VI. A glossary of terms. J. Timber Fram. Guild 2003, 68, 12–17.
  5. Eversmann, P.; Gramazio, F.; Kohler, M. Robotic prefabrication of timber structures: Towards automated large-scale spatial assembly. Constr. Robot. 2017, 1, 49–60.
  6. Heesterman, M.; Sweet, K. Robotic Connections: Customisable Joints for Timber Construction. In Proceedings of the XXII SIGraDi International Conference of the Iberoamerican Society of Digital Graphics, São Carlos, Brazil, 7–9 November 2018.
  7. Quitral-Zapata, F.J.; González-Böhme, L.F.; García-Alvarado, R.; Martínez-Rocamora, A. Workflow for a Timber Joinery Robotics. In Proceedings of the XXIV SIGraDi International Conference of the Iberoamerican Society of Digital Graphics, Online, 18–20 November 2020; pp. 291–296.
  8. Koerner-Al-Rawi, J.; Park, K.E.; Phillips, T.K.; Pickoff, M.; Tortorici, N. Robotic timber assembly. Constr. Robot. 2020, 4, 175–185.
  9. Geno, J.; Goosse, J.; Van Nimwegen, S.; Latteur, P. Parametric design and robotic fabrication of whole timber reciprocal structures. Autom. Constr. 2022, 138, 104198.
  10. Hua, H.; Hovestadt, L.; Tang, P. Optimization and prefabrication of timber Voronoi shells. Struct. Multidiscip. Optim. 2020, 61, 1897–1911.
  11. ISO 9787:2013; Robots and Robotic Devices—Coordinate Systems and Motion Nomenclatures. ISO: Geneva, Switzerland, 2013.
  12. Willmann, J.; Knauss, M.; Bonwetsch, T.; Apolinarska, A.A.; Gramazio, F.; Kohler, M. Robotic timber construction—Expanding additive fabrication to new dimensions. Autom. Constr. 2016, 61, 16–23.
  13. KUKA Deutschland GmbH. KUKA System Software 8.6: Operating and Programming Instructions for End Users. 2022. Available online: https://xpert.kuka.com/ID/PB11700 (accessed on 28 July 2025).
  14. Meng, Y.; Sun, Y.; Chang, W.-S. Optimal trajectory planning of complicated robotic timber joints based on particle swarm optimization and an adaptive genetic algorithm. Constr. Robot. 2021, 5, 131–146.
  15. de Araujo, P.R.M.; Lins, R.G. Cloud-based approach for automatic CNC workpiece origin localization based on image analysis. Robot. Comput.-Integr. Manuf. 2021, 68, 102090.
  16. Gao, Y.; Gao, H.; Bai, K.; Li, M.; Dong, W. A Robotic Milling System Based on 3D Point Cloud. Machines 2021, 9, 355.
  17. Guo, Q.; Yang, Z.; Xu, J.; Jiang, Y.; Wang, W.; Liu, Z.; Zhao, W.; Sun, Y. Progress, challenges and trends on vision sensing technologies in automatic/intelligent robotic welding: State-of-the-art review. Robot. Comput.-Integr. Manuf. 2024, 89, 102767.
  18. Geng, Y.; Lai, M.; Tian, X.; Xu, X.; Jiang, Y.; Zhang, Y. A novel seam extraction and path planning method for robotic welding of medium-thickness plate structural parts based on 3D vision. Robot. Comput.-Integr. Manuf. 2023, 79, 102433.
  19. Sobon, J.A. Hand Hewn: The Traditions, Tools, and Enduring Beauty of Timber Framing; Storey Publishing: North Adams, MA, USA, 2019.
  20. Settimi, A.; Gamerro, J.; Weinand, Y. Augmented-reality-assisted timber drilling with smart retrofitted tools. Autom. Constr. 2022, 139, 104272.
  21. Yang, X.; Amtsberg, F.; Sedlmair, M.; Menges, A. Challenges and potential for human–robot collaboration in timber prefabrication. Autom. Constr. 2024, 160, 105333.
  22. Aguilera-Carrasco, C.A.; González-Böhme, L.F.; Valdes, F.; Quitral-Zapata, F.J.; Raducanu, B. A Hand-Drawn Language for Human–Robot Collaboration in Wood Stereotomy. IEEE Access 2023, 11, 100975–100985.
  23. Lai, Z.; Xiao, Y.; Chen, Z.; Li, H.; Huang, L. Preserving Woodcraft in the Digital Age: A Meta-Model-Based Robotic Approach for Sustainable Timber Construction. Buildings 2024, 14, 2900.
  24. Wagner, H.J.; Alvarez, M.; Groenewolt, A.; Menges, A. Towards digital automation flexibility in large-scale timber construction: Integrative robotic prefabrication and co-design of the BUGA Wood Pavilion. Constr. Robot. 2020, 4, 187–204.
  25. Chai, H.; So, C.; Yuan, P.F. Manufacturing double-curved glulam with robotic band saw cutting technique. Autom. Constr. 2021, 124, 103571.
  26. Cisneros-Gonzalez, J.J.; Rasool, A.; Ahmad, R. Digital technologies and robotics in mass-timber manufacturing: A systematic literature review on construction 4.0/5.0. Constr. Robot. 2024, 8, 29.
  27. Gandia, A.; Gramazio, F.; Kohler, M. Tolerance-aware design of robotically assembled spatial structures. In Proceedings of the 42nd Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA), Philadelphia, PA, USA, 27–29 October 2022.
  28. Naboni, R.; Kunic, A.; Kramberger, A.; Schlette, C. Design, simulation and robotic assembly of reversible timber structures. Constr. Robot. 2021, 5, 13–22.
  29. Vestartas, P.; Weinand, Y. Laser Scanning with Industrial Robot Arm for Raw-wood Fabrication. In Proceedings of the 37th International Symposium on Automation and Robotics in Construction (ISARC), Online, 27–28 October 2020.
  30. Bruun, E.P.G.; Besler, E.; Adriaenssens, S.; Parascho, S. Scaffold-free cooperative robotic disassembly and reuse of a timber structure in the ZeroWaste project. Constr. Robot. 2024, 8, 20.
  31. Svilans, T.; Tamke, M.; Thomsen, M.R.; Runberger, J.; Strehlke, K.; Antemann, M. New Workflows for Digital Timber. In Digital Wood Design; Lecture Notes in Civil Engineering; Springer International Publishing: Cham, Switzerland, 2019; pp. 93–134.
  32. Adel, A.; Ruan, D.; McGee, W.; Mozaffari, S. Feedback-driven adaptive multi-robot timber construction. Autom. Constr. 2024, 164, 105444.
  33. Chai, H.; Zhou, X.; Gao, X.; Yang, Q.; Zhou, Y.; Yuan, P.F. Integrated workflow for cooperative robotic fabrication of natural tree fork structures. Autom. Constr. 2024, 165, 105524.
  34. Pantscharowitsch, M.; Moser, L.; Kromoser, B. A study of the accuracy of industrial robots and laser-tracking for timber machining across the workspace. Wood Mater. Sci. Eng. 2024, 20, 75–93.
  35. McNeel, R. Rhinoceros®, Version 8 SR18; Robert McNeel & Associates. 2025. Available online: https://www.rhino3d.com/ (accessed on 28 July 2025).
  36. Rutten, D. Grasshopper®, Build 1.0.0008; Robert McNeel & Associates. 2025. Available online: https://www.grasshopper3d.com/ (accessed on 28 July 2025).
  37. Brell-Cokcan, S.; Braumann, J. KUKA|prc, Version 2025-02-24; Association for Robots in Architecture. 2025. Available online: https://robotsinarchitecture.org/ (accessed on 28 July 2025).
  38. RoboDK API for Python, Version 5.9. 2025. Available online: https://robodk.com/ (accessed on 28 July 2025).
  39. Lavygin, D. C3 Bridge Interface Server, Version 1.7.1. 2025. Available online: https://c3.ulsu.tech/ (accessed on 28 July 2025).
  40. Zivid SDK & Zivid Python, v2.15.0; Zivid AS. 2025. Available online: https://www.zivid.com/ (accessed on 28 July 2025).
  41. Harris, C.R.; Millman, K.J.; van der Walt, S.J.; Gommers, R.; Virtanen, P.; Cournapeau, D.; Wieser, E.; Taylor, J.; Berg, S.; Smith, N.J.; et al. Array programming with NumPy. Nature 2020, 585, 357–362.
  42. Zhou, Q.-Y.; Park, J.; Koltun, V. Open3D: A Modern Library for 3D Data Processing. arXiv 2018, arXiv:1801.09847.
  43. Virtanen, P.; Gommers, R.; Oliphant, T.E.; Haberland, M.; Reddy, T.; Cournapeau, D.; Burovski, E.; Peterson, P.; Weckesser, W.; Bright, J.; et al. SciPy 1.0: Fundamental algorithms for scientific computing in Python. Nat. Methods 2020, 17, 261–272.
  44. Simonov, K. YAML, PyYAML 6.0.2. 2024. Available online: https://github.com/yaml/pyyaml (accessed on 28 July 2025).
  45. Arriaga, F.; Wang, X.; Íñiguez-González, G.; Llana, D.F.; Esteban, M.; Niemz, P. Mechanical Properties of Wood: A Review. Forests 2023, 14, 1202.
  46. Vásquez, L.; Hernández, G.; Campos, R.; Elgueta, P.; González, M. Grados Estructurales de la Madera Aserrada de Pino Oregón Clasificada Visualmente. Informe Técnico N° 196; Instituto Forestal: Concepción, Chile, 2013.
  47. Shanks, J.; Walker, P. Strength and stiffness of all-timber pegged connection. J. Mater. Civ. Eng. 2009, 21, 10–18.
  48. Ogawa, K.; Sasaki, Y.; Yamasaki, M. Theoretical estimation of the mechanical performance of traditional mortise–tenon joint involving a gap. J. Wood Sci. 2016, 62, 242–250. [Google Scholar] [CrossRef]
  49. Liu, K.; Du, Y.; Hu, X.; Zhang, H.; Wang, L.; Gou, W.; Li, L.; Liu, H.; Luo, B. Investigating the Influence of Tenon Dimensions on White Oak (Quercus alba) Mortise and Tenon Joint Strength. Forests 2024, 15, 1612. [Google Scholar] [CrossRef]
  50. Poblete, C.; Hempel, R. Sistemas Estructurales en Madera; Universidad del Bío-Bío: Concepción, Chile, 1991. [Google Scholar]
  51. David, M.-N.; Miguel, R.-S.; Ignacio, P.-Z. Timber structures designed for disassembly: A cornerstone for sustainability in 21st century construction. J. Build. Eng. 2024, 96, 110619. [Google Scholar] [CrossRef]
Figure 1. Joinery jargon illustrated on a robot-milled tenon.
Figure 2. The “3-point method” for calibrating the coordinates of the BASE or WCS using an end mill positioned over a timber workpiece (the red arrow indicates the x-axis, the green arrow the y-axis, and the blue arrow the z-axis).
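The 3-point calibration shown in Figure 2 amounts to constructing an orthonormal frame from three probed points: the WCS origin, a point on the +X axis, and a point in the XY plane on the +Y side. The sketch below is a hedged NumPy illustration of that construction (the function name and point order are assumptions, not the paper's code):

```python
import numpy as np

def frame_from_three_points(origin, x_point, xy_point):
    """Build a 4x4 homogeneous WCS frame from three probed points
    (KUKA-style 3-point method): origin, a point on the +X axis,
    and a point in the XY plane on the +Y side."""
    o = np.asarray(origin, dtype=float)
    x = np.asarray(x_point, dtype=float) - o
    x /= np.linalg.norm(x)
    v = np.asarray(xy_point, dtype=float) - o
    z = np.cross(x, v)              # normal of the XY plane
    z /= np.linalg.norm(z)
    y = np.cross(z, x)              # completes a right-handed frame
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, o
    return T
```

The resulting matrix maps workpiece coordinates into the robot's base frame, which is exactly the transformation the operator records when teaching a BASE on the controller.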
Figure 3. The cutting edges of an end mill (left), used as a theoretical TCP, are visually aligned with the top vertex of a timber workpiece (right) to define the WCS using the 3-point method.
Figure 4. Experimental setup showing the overhead-mounted KUKA Quantec KR 210 R3100-2 C industrial robotic arm, with a fixed calibration board and a working table configured as a clamping fixture for timber workpieces.
Figure 5. Diagram of the vision-based workflow, showing the sequential steps, software modules, and hardware components.
Figure 6. (Left): RGB image captured by the Zivid 2 M70 camera. (Right): Corresponding organized point cloud.
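An organized point cloud like the one in Figure 6 keeps the sensor's H×W pixel grid and marks invalid pixels with NaN (this is how the Zivid SDK returns `copy_data("xyz")`). Before cropping and plane fitting, such a cloud is typically flattened to its valid points; a minimal sketch, with an illustrative helper name not taken from the paper's code:

```python
import numpy as np

def flatten_organized_cloud(xyz):
    """Flatten an organized HxWx3 point cloud (NaN marks invalid
    pixels) to an Nx3 array of valid points, which can then be
    wrapped in e.g. an Open3D PointCloud for cropping and fitting."""
    pts = xyz.reshape(-1, 3)
    return pts[~np.isnan(pts).any(axis=1)]   # drop invalid pixels
```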
Figure 7. Visualization of RANSAC plane fitting, showing inliers for YZ’ (red), XZ’ (green), and XY’ (blue) planes during the Workpiece Coordinate System Computation steps.
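The plane fitting visualized in Figure 7 is RANSAC-based, as provided by Open3D's `segment_plane`. For illustration only, a minimal self-contained NumPy version of the same idea (parameter defaults are assumptions, not the paper's settings):

```python
import numpy as np

def ransac_plane(points, dist_thresh=0.5, iters=200, seed=0):
    """Minimal RANSAC plane fit: returns ((a, b, c, d), inlier_indices)
    for the plane a*x + b*y + c*z + d = 0 with the largest consensus
    set, the same idea as Open3D's segment_plane."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, np.array([], dtype=int)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                 # skip degenerate (collinear) samples
            continue
        n /= norm
        d = -n.dot(p0)
        dist = np.abs(points @ n + d)   # point-to-plane distances
        inliers = np.flatnonzero(dist < dist_thresh)
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (n[0], n[1], n[2], d), inliers
    return best_model, best_inliers
```

To recover three mutually orthogonal workpiece faces as in the figure, the fit is usually run three times, removing each plane's inliers before fitting the next.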
Figure 8. Stacked bar graph showing the mean time per piece for each procedural step across the four evaluated workflows.
Figure 9. A total of 20 specimens were tested, with five specimens processed under each of the four WCS measurement workflows.
Figure 10. Distribution of milling errors (x, y, and z) across workflows.
Figure 11. Distribution of computed 3D positional errors by workflow.
Figure 12. (Top): Specimen I–02, showing the highest z-axis error (2.18 mm) among all specimens. (Bottom): Specimen IV–04, with the highest z-axis error within the vision-based workflow (0.60 mm) but the lowest 3D error overall (0.36 mm). The images on the left were digitally mirrored to display the opposite perspective of the tenon shown on the right.
Figure 13. WCS alignment and dimensional reference for the robot-milled tenon.
Table 1. Time performance metrics for the four experimental workflows: per-piece and total values.

| Workflow | Mean Time per Piece (min:s) | Total Procedure Time (min:s) |
|---|---|---|
| I—Fixed WCS (once) | 05:17 | 26:25 |
| II—WCS per piece | 07:56 | 39:40 |
| III—WCS + section per piece | 09:26 | 47:10 |
| IV—Vision-based workflow | 05:44 | 28:40 |
Table 2. Breakdown of mean time per piece (in seconds) for each procedural step across the four WCS measurement workflows.

| Workflow | Calibration | Workpiece Clamping | WCS Measurement | Cross-Section Measurement | Program Data Update | Milling Execution |
|---|---|---|---|---|---|---|
| I—Fixed WCS (once) | - | 110 | 42 | - | 9 | 156 |
| II—WCS per piece | - | 110 | 210 | - | - | 156 |
| III—WCS + section per piece | - | 110 | 210 | 40 | 50 | 156 |
| IV—Vision-based workflow | 27 | 110 | 16 | 10 | 25 ¹ | 156 |

¹ Given the experimental nature of this workflow, an additional offline review of the toolpaths was included as a precautionary measure. The recorded time corresponds to the average duration of this verification step.
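As a consistency check, Table 1's mean time per piece equals the sum of the step times reported in Table 2. A small sketch of that check (dictionary keys are illustrative, not the paper's terminology):

```python
# Mean per-step times in seconds, taken from Table 2 (omitted = step skipped).
step_times = {
    "I":   {"clamping": 110, "wcs": 42, "update": 9, "milling": 156},
    "II":  {"clamping": 110, "wcs": 210, "milling": 156},
    "III": {"clamping": 110, "wcs": 210, "section": 40, "update": 50,
            "milling": 156},
    "IV":  {"calibration": 27, "clamping": 110, "wcs": 16, "section": 10,
            "update": 25, "milling": 156},
}

def mean_time_per_piece(workflow):
    """Sum a workflow's step times and format as min:s, as in Table 1."""
    m, s = divmod(sum(step_times[workflow].values()), 60)
    return f"{m:02d}:{s:02d}"
```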
Table 3. Positional deviation (milling error) per specimen along the x, y, and z axes.

| Specimen ID | Milling Error x (mm) | Milling Error y (mm) | Milling Error z (mm) |
|---|---|---|---|
| I—01 | 2.78 | −1.84 | 2.08 |
| I—02 | 3.10 | −0.36 | 2.18 |
| I—03 | 3.62 | 2.50 | −1.14 |
| I—04 | 1.56 | 4.48 | 0.00 |
| I—05 | 1.54 | 4.02 | −0.46 |
| II—01 | 1.38 | 2.20 | −0.72 |
| II—02 | 2.80 | 3.06 | 0.68 |
| II—03 | 3.84 | −1.34 | 0.40 |
| II—04 | 1.32 | 0.08 | 1.24 |
| II—05 | 3.26 | −1.62 | 1.06 |
| III—01 | −1.30 | −0.56 | 1.18 |
| III—02 | 1.08 | −1.80 | 1.82 |
| III—03 | 1.74 | −0.94 | 0.60 |
| III—04 | 0.66 | 0.22 | 0.76 |
| III—05 | 1.32 | −2.48 | 1.74 |
| IV—01 | 0.10 | 0.46 | 0.58 |
| IV—02 | 0.46 | −0.50 | 0.60 |
| IV—03 | 0.58 | 0.32 | 0.02 |
| IV—04 | −0.22 | −0.24 | 0.16 |
| IV—05 | 0.24 | −0.18 | 0.44 |
Table 4. Axis-specific and 3D milling errors across workflows (mean ± standard deviation).

| Workflow | Mean Error in x (mm) | Mean Error in y (mm) | Mean Error in z (mm) | Mean 3D Error (mm) |
|---|---|---|---|---|
| I—Fixed WCS (once) | 2.52 ± 0.93 | 1.76 ± 2.76 | 0.53 ± 1.51 | 4.27 ± 0.40 |
| II—WCS per piece | 2.50 ± 1.11 | 0.48 ± 2.09 | 0.54 ± 0.78 | 3.30 ± 1.02 |
| III—WCS + section per piece | 0.70 ± 1.18 | −1.11 ± 1.06 | 1.22 ± 0.55 | 2.20 ± 0.88 |
| IV—Vision-based workflow | 0.23 ± 0.31 | −0.03 ± 0.40 | 0.36 ± 0.26 | 0.64 ± 0.21 |
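The aggregates in Table 4 follow directly from the per-specimen data in Table 3: the 3D error is the Euclidean norm of each specimen's (x, y, z) deviation, and the spreads are sample standard deviations (ddof = 1). Reproducing the vision-based workflow (row IV) as a sketch:

```python
import numpy as np

# Per-specimen milling errors (x, y, z) in mm for workflow IV, from Table 3.
errors_iv = np.array([
    [ 0.10,  0.46, 0.58],   # IV-01
    [ 0.46, -0.50, 0.60],   # IV-02
    [ 0.58,  0.32, 0.02],   # IV-03
    [-0.22, -0.24, 0.16],   # IV-04
    [ 0.24, -0.18, 0.44],   # IV-05
])

axis_mean = errors_iv.mean(axis=0)          # mean signed error per axis
axis_std = errors_iv.std(axis=0, ddof=1)    # sample standard deviation
err3d = np.linalg.norm(errors_iv, axis=1)   # Euclidean 3D error per specimen
mean3d, std3d = err3d.mean(), err3d.std(ddof=1)
```

Rounded to two decimals, these values match the last row of Table 4 (0.23 ± 0.31, −0.03 ± 0.40, 0.36 ± 0.26, and 0.64 ± 0.21 mm).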