Article

Use of Parametric Digital Tools in Grasshopper and Python for Optimization of CNC Prefabrication Process in WikiHouse Prototype Construction

by Doris Esenarro Vargas 1,2,*, Emerson Porras 1, Jesica Vilchez Cairo 1,2, Abigail Ortiz Curinambe 1, Vanessa Raymundo 1,2, Lidia Chang 1, Jesus Peña 1,2, Ramiro Torrico 3 and Santiago Paz Nakura 1
1 Faculty of Architecture and Urbanism, Ricardo Palma University (URP), Santiago de Surco, Lima 15039, Peru
2 Research Laboratory for Formative Investigation and Architecture Innovation (LABIFIARQ), Ricardo Palma University (URP), Santiago de Surco, Lima 15039, Peru
3 Universidad Tecnológica Privada de Santa Cruz (UTEPSA), Santa Cruz de la Sierra 00591, Bolivia
* Author to whom correspondence should be addressed.
Buildings 2025, 15(21), 3895; https://doi.org/10.3390/buildings15213895
Submission received: 16 September 2025 / Revised: 15 October 2025 / Accepted: 23 October 2025 / Published: 28 October 2025

Abstract

High material waste, long execution times, and the limited adoption of technological solutions hinder construction processes in the building sector. In response, this project proposes the development and validation of parametric digital tools to optimize the design and CNC fabrication of WikiHouse prototypes, an open-source modular system that enables precise assemblies without the need for additional metal joints. The main objective is to optimize the architectural design process through tools such as Grasshopper and Python, increasing precision, reducing material waste, and shortening the manufacturing times of CNC components for WikiHouse. The results show drastic time reductions when shifting from manual workflows to parameterized CAD–CAM workflows, with processing times of approximately 1 min (≈0:32–1:16) compared to 63–109 min using manual methods. This study demonstrates that parameterization, rather than robotization, is a realistic pathway for transferring open systems like WikiHouse to low-tech contexts: it reduces preparation times to minutes, cuts waste, and decreases variability among operators.

1. Introduction

The use of technologies in the architectural design process has always been crucial in defining the efficiency and quality of constructions. At present, however, the architecture, engineering, and construction industry faces multiple challenges in conventional construction methods. One of the main problems is prolonged execution time [1,2], since these systems require numerous sequential stages, such as formwork, concrete curing, structural assembly, and manual finishing [3,4], which extend schedules and cause delays. This situation is worsened by a high dependence on specialized labor [5,6], especially for tasks such as the installation of metal structures or the handling of wet systems, which increases project costs and makes projects vulnerable to shortages of skilled workers.
Added to this is the high level of waste on construction sites, caused by calculation errors, poor logistical planning, or excessive material stockpiling. In traditional projects, between 10% and 30% of the materials purchased are wasted, generating not only significant cost overruns but also an inefficient use of resources [7,8]. At the same time, the conventional construction industry faces increasing pressure due to its environmental impact [9,10], as it is responsible for approximately one third of global carbon dioxide emissions (37%) [11,12]. In particular, cement production, fundamental to concrete manufacturing, contributes about 8% of global CO2 emissions, owing to the clinker calcination processes and the high energy consumption involved in its production [13].
In this regard, the use of parametric digital tools in construction emerges as a promising alternative to the structural challenges of the traditional method, such as high material waste, elevated operational costs, and dependence on specialized labor. At the international level, as shown in Figure 1A, DFAB House is a pioneering project developed by ETH Zürich in collaboration with NCCR Digital Fabrication. This building was designed and constructed using advanced digital fabrication methods, including 3D concrete printing, robotics, and computational design. The project integrates technologies such as the In situ Fabricator, Mesh Mould, and Smart Slab, demonstrating the potential of digitalization in architecture and construction. DFAB House has been widely documented in various academic and professional sources [14].
In Latin America, a notable case is the DIGFABMTY2.0 Parametric Pavilion, an experimental project developed by students of the School of Architecture, Art, and Design at Tecnológico de Monterrey, Monterrey campus. This pavilion (see Figure 1B) was designed using parametric design tools such as Rhinoceros and Grasshopper, and built with materials like Coroplast. The project consisted of a temporary structure derived from the exploration of pyramidal forms and was carried out as part of the course Advanced Architectural Technology. The process included the development of algorithms, prototypes at different scales, and the fabrication and assembly of 304 pieces that make up the parametric system [15,16].
In the Peruvian context, the LADDFAB laboratory (see Figure 1C) of Universidad Ricardo Palma (Peru) developed a prototype of a laminated plywood structure using computational design and two-axis CNC cutting. The project proposes an efficient and low-cost alternative for areas with limited access to advanced technologies through the use of segmented hexagonal plates and wooden “half-lap” mechanical joints, assembled without screws or adhesives. The model, composed of 63 panels and 163 connectors, withstood a point load of 0.8 kN with a maximum deformation of only 5 mm and was assembled by two people in less than six hours. This work not only demonstrates the technical and structural feasibility of the system but also proposes a viable path to democratize digital construction in emerging contexts, with a clear ecological and pedagogical focus [17].
These references demonstrate that the integration of parametric digital tools in the architectural design process not only increases the precision, efficiency, and sustainability of construction projects but also contributes to the democratization of advanced technologies, even in contexts with limited resources. Nevertheless, despite their contributions, these cases present gaps that open new lines of research.
DFAB House exhibits a high technological level, but its complexity and cost make it difficult to replicate in emerging contexts. DIGFABMTY2.0, despite its pedagogical and exploratory value, focused mainly on formal experimentation and on demonstrating the aesthetic potential of parametric design, without addressing optimization processes aimed at material reduction or structural efficiency. For its part, LADDFAB represents a significant advancement in the Peruvian context by incorporating CNC cutting and mechanical assemblies, although its scope is limited to the experimental scale and does not develop strategies aimed at reducing waste or accelerating serial production of components.
Within this framework, the present research aims to address this gap through comparative experiments applied to the construction process of the WikiHouse system, with study groups employing two approaches: a digital parametric solution (Grasshopper + Python) and a conventional modeling method (Rhinoceros + SketchUp). The experiments evaluate and contrast processing time, geometric precision, traceability, and material waste, determining the impact of algorithmic automation on process efficiency. Likewise, a satisfaction survey was applied to the participating groups in order to measure the perceived usability, speed, and effectiveness of both methodologies.

2. Materials and Methods

2.1. Methodology

The methodology used in this study follows a mixed approach [18,19], allowing for a comprehensive analysis of the data from both objective and subjective perspectives. It is developed at a correlational level [20], seeking to identify the relationship between the use of parametric digital tools in Grasshopper and Python and the optimization of architectural design in the production of CNC components. The research is applied in nature [21,22], as it aims to generate practical solutions to improve efficiency in the fabrication of prototypes and full-scale construction. Likewise, the research design is experimental [23,24], which implies that variables and their dimensions are manipulated to evaluate correlations and their impact on design and production processes.

2.2. Population and Sample

The population consisted of undergraduate students from the Faculty of Architecture at Universidad Ricardo Palma. The sample was composed of 12 students selected according to the criterion of having prior knowledge of Rhino 8, Grasshopper, and SketchUp 2024 software. This group was chosen because the participants were enrolled in the seventh and eighth semesters of the curriculum, levels at which they had already acquired advanced digital competencies through courses such as Digital Presentation and Comprehensive Architectural Design. This training enabled them to actively participate in the elective course, applying digital processes aimed at the automation and optimization of prefabrication, integrating Grasshopper and Python concepts into the development of more efficient architectural proposals.
The selection of this academic level facilitated the analysis of the application of parametric tools in an educational context where students possess sufficient technical and design foundations, ensuring the relevance of the results without requiring advanced professional specialization. The choice of students is based on a strategic academic approach: working in a controlled environment ensures traceability and minimizes variability due to different experience levels, allowing for a precise evaluation of the efficiency and comprehension of parameterized digital processes. Furthermore, it establishes the basis for replicating and expanding the research into professional contexts and more complex workshops.

2.3. Variables, Dimensions, and Indicators

The purpose of this matrix of variables is to establish the analytical elements necessary to evaluate the use of parametric digital tools, specifically Grasshopper and Python, in optimizing the prefabrication process within the field of architectural design. The independent variable, corresponding to the use of these tools, is broken down into four main dimensions: ease of use, accessibility, training, and perceived difficulty, each with indicators that measure the level of understanding, resource availability, autonomy achieved after training, and perception of complexity (Table 1 and Table 2).

2.4. Methodological Scheme

The experimental methodology was structured into four main stages—Experiments 1 to 4—each focused on a specific phase of the digital workflow related to the WikiHouse system. The general purpose was to compare the efficiency, understanding, and satisfaction level of participants during the development of various digital tasks associated with design and fabrication.
Each experiment followed the same logic: first, the specific task and its objective within the digital workflow were defined; then, the working groups were assigned:
Group A (Experimental): used automated scripts developed in Grasshopper.
Group B (Control 1): performed the tasks manually in Rhinoceros 8 (Rhino 8).
Group C (Control 2): carried out the tasks manually in SketchUp Web (SKP).
The execution time of each task was objectively recorded using a stopwatch operated by an external supervisor, in order to avoid human error and ensure measurement accuracy.
Finally, the participants’ satisfaction level was evaluated according to five criteria: clarity and effectiveness of the training, simplicity of the procedure, perception of difficulty and time allocation, availability of resources and connectivity, and acceptance of the procedure with potential for reuse. Each combination of experiment and group constituted a differentiated phase (for example, Experiment 1 included phases 1A, 1B, and 1C corresponding to Groups A, B, and C, respectively), as shown in Figure 2.
Experiment 1: Piece Downloading
The first experiment addressed the initial phase of the digital workflow, focusing on the downloading and organization of CAD files from the WikiHouse platform. The manual procedure for obtaining pieces was compared with the automated downloading process using native components and a script developed in the Grasshopper visual environment.
The experimental group (A) used the script to automate the download directly from GitHub, while the control groups (B and C) carried out the process manually using Rhinoceros (with files obtained from GitHub) and SketchUp (via the official WikiHouse website), respectively. Each group stored the downloaded files in its own folder, ensuring traceability of results. This execution allowed the evaluation of the impact of automation on reducing processing time and improving procedural clarity.
Experiment 2: Classification and Quantification of Pieces for Fabrication
In the second experiment, the ability of each group to classify and quantify the downloaded pieces from a provided 3D model was evaluated according to their type and quantity.
The experimental group (A) performed this process through a script developed in Grasshopper that automated identification and counting of the pieces, while the control groups (B and C) carried out the task manually using Rhino 8 and SketchUp Web, respectively.
Experiment 3: Transformation of CNC Router Data to Laser
The third experiment focused on the transformation of the manufacturing (CAD) files, adapting the CNC router data to a format compatible with laser CNC cutting.
The experimental group (A) used an automated routine developed in Rhino 8–Grasshopper, which reorganized and optimized the cutting geometries, automatically generating the files necessary for the new format. In contrast, the control groups (B and C) performed the transformation manually, using Rhino 8 in the case of Group B and SketchUp Web together with AutoCAD 2025 in the case of Group C.
Experiment 4: Nesting Laser CNC Data (Material Optimization)
The fourth and final experiment focused on nesting or arranging the 2D pieces on the material sheets, optimizing surface use and minimizing waste.
The experimental group (A) used a parametric script that automated the distribution of pieces according to their size, rotation, and proximity, while the control groups (B and C) performed the nesting manually in Rhino and SketchUp Web.

3. Results

The findings are presented based on four successive experiments, each aimed at evaluating the impact of using parametric digital tools (Grasshopper and Python) on different phases of the WikiHouse workflow. The results of each experiment are structured around two axes: the experience of using the digital tools, and the operational performance of the prefabrication process.

3.1. Result of Experiment 1: Downloading of Parts

This first experiment focused on the initial phase of the WikiHouse workflow: obtaining the files that define each building component. Manual downloading of parts was compared with downloading assisted by a parametric script (see Figure 3) that automates the retrieval of modular parts from an online database. Grasshopper components generate the names of the parts and classify them into categories (A). The corresponding web address (URL) is then added to locate the exact part to be downloaded, in addition to specifying the required quantity (B). The “C:/” component then defines the location where the downloaded files will be saved, usually a folder on the user’s desktop (C). Finally, verification components confirm that the download has completed successfully (D).
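The logic of stages (A)–(D) can be illustrated with a short, standalone Python sketch. The repository URL, part names, quantities, and file extension below are hypothetical placeholders rather than the actual WikiHouse naming scheme, and the real definition runs inside Grasshopper rather than as a standalone script.

import os
import urllib.request

# Hypothetical repository location and part list; the actual WikiHouse
# repository structure and file naming are not reproduced here.
BASE_URL = "https://raw.githubusercontent.com/example-org/wikihouse-parts/main"
PARTS = {"WALL-S-1": 4, "FLOOR-END-1": 2}          # part name -> required quantity
TARGET_DIR = os.path.join(os.path.expanduser("~"), "Desktop", "wikihouse_parts")

os.makedirs(TARGET_DIR, exist_ok=True)

for name, quantity in PARTS.items():
    url = "{}/{}.dxf".format(BASE_URL, name)       # assumed file extension
    destination = os.path.join(TARGET_DIR, name + ".dxf")
    urllib.request.urlretrieve(url, destination)   # download the cutting file (stages B-C)
    # Verification step, analogous to stage (D) in Figure 3:
    ok = os.path.isfile(destination) and os.path.getsize(destination) > 0
    print("{}: x{} -> {}".format(name, quantity, "downloaded" if ok else "FAILED"))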
The main quantitative parameter evaluated at this stage was the time required to complete the downloading and organization of the pieces (see Table 3). The control groups performed the procedure manually, while the experimental group used the automated parametric script.
The results show a significant difference between the three groups: the experimental group, which used the parametric script, achieved an average execution time of 1 min 15 s, representing a 95.4% reduction compared to the SKP control group (25 min) and a 98.7% reduction compared to the Rhino control group (90 min). This difference suggests a direct correlation between the use of automated digital tools and the improvement in operational efficiency in the component download and organization phase. In practical terms, the experimental group was 21.7 times faster than the SKP control group and more than 78.3 times faster than the Rhino control group, equivalent to savings of 23.85 min and 88.85 min, respectively.
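The time metrics reported throughout the results (percentage reduction and speedup factor) follow directly from the averaged execution times. A minimal sketch of the two formulas is shown below, using arbitrary placeholder values rather than the measured averages from the tables.

def percent_reduction(t_manual_min, t_auto_min):
    # Relative time saved when moving from the manual to the automated workflow.
    return 100.0 * (t_manual_min - t_auto_min) / t_manual_min

def speedup(t_manual_min, t_auto_min):
    # How many times faster the automated workflow is than the manual one.
    return t_manual_min / t_auto_min

# Placeholder values only, not the measured averages:
print(percent_reduction(30.0, 1.5))   # 95.0 (% reduction)
print(speedup(30.0, 1.5))             # 20.0 (times faster)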

3.1.1. Experimental Group A (Script): Satisfaction Level

Based on the clarity and effectiveness of the training, the results reflect a very positive perception. The clarity of the instructions (Question 1) received a perfect score of 5.0, while technical support (Question 4) achieved an average of 4.75, indicating an overall satisfactory experience, with slight variations attributable to specific differences in assistance. Based on the intuitiveness and simplicity of the procedure, the simplicity of the system was highly valued. Questions 2 and 5 obtained an average score of 5.0, indicating that the interface and procedural steps were easily understood, generating no doubts or confusion. Based on the perceived difficulty and allocated time, the perceived difficulty was low, with an average score of 5.0 for Question 3 and 4.75 for Question 8. The allocated time was considered appropriate, although one response suggested that certain steps might require further clarification. Finally, the results show a high level of acceptance of the procedure. The statement regarding recommending it received a score of 5 from all participants, reflecting 100% acceptance. Regarding the potential for reuse, 75% gave the highest score, and 25% assigned a score of 4, indicating agreement with some reservations. Overall, these findings demonstrate a positive assessment of both its immediate applicability and its future versatility in academic or professional contexts (see Table 4 and Figure 4).

3.1.2. Experiment 1: Control Group B (Rhino): Satisfaction Level

Based on the intuitiveness and simplicity of the procedure, the results show that the interface was the lowest-rated aspect of the entire survey (3.75), indicating difficulties in understanding the environment. The technical procedures obtained a score of 4.0, showing that simplicity was acceptable but not fully achieved. There was a lower perception of intuitiveness compared to the experimental group. Based on the perceived difficulty and allocated time, the results reveal that the available time (4.0) and the absence of barriers (4.5) were positively evaluated, although with indications of higher demand. While no significant obstacles were reported, the process was perceived as more complex and demanding. Based on the availability of resources and connectivity, the results indicate that access to resources and internet connectivity received a score of 4.25 for both questions. Although generally adequate, minor issues or differences were observed that slightly affected the experience of some participants. Finally, the results show that the statement regarding the usefulness of parametric algorithms obtained an average score of 4.5, with all participants rating between 4 and 5. This reflects a clear perception that the manual procedure could be optimized through the use of automated digital tools (see Table 4 and Figure 4).

3.1.3. Experiment 1: Control Group C (SketchUp): Satisfaction Level

Based on the clarity and effectiveness of the training, the results indicate that the instructions received (Question 1) were rated with an average score of 4.75, suggesting that they were mostly clear, although one participant reported some difficulty. Technical support (Question 4) also received a score of 4.75, reflecting adequate assistance but with room for improvement in terms of standardization. Based on the intuitiveness and simplicity of the procedure, the results show that both the software interface (Question 2) and the technical procedures (Question 5) were rated 5.0, demonstrating unanimous perception of simplicity and clarity in the use of the manual system based on SketchUp. Based on the perceived difficulty and allocated time, the results reveal that the allocated time (Question 3) was rated 4.75, while the perception of technical barriers (Question 8) obtained a score of 4.5. This indicates that although the overall experience was positive, there were isolated cases of difficulty or time pressure. Based on the availability of resources and connectivity, the results indicate that Questions 6 and 7, related to access and connectivity, received the lowest scores (4.25). This suggests that some participants experienced issues with equipment or connection stability, which slightly affected the development of the experiment. Finally, the results show that the statement regarding the efficiency of parametric algorithms compared to the manual process obtained an average score of 4.5, with all participants assigning ratings between 4 and 5. This reflects a clear recognition of the potential for improvement through automation (see Table 4 and Figure 4).

3.1.4. General Survey Results

The data obtained (see Table 4) show a significant difference in the qualitative assessment of the procedure between the three participating groups in Experiment 1. The experimental group, which used a parametric script, obtained averages of 5.0 in most of the categories evaluated, including clarity of instructions, simplicity of the procedure, perception of the allotted time, and general acceptance. Only a slight variation was recorded in technical support (4.75), which did not affect the overall positive assessment of the automated method.
In contrast, the Rhino control group presented the lowest scores in interface intuition (3.75) and technical simplicity (4.0), with overall averages between 4.25 and 4.75 in the remaining dimensions, indicating a greater cognitive load during the manual procedure. Although the perception of the usefulness of the algorithms was favorable (4.5), the operating experience was perceived as more demanding.
For its part, the SketchUp control group obtained intermediate evaluations, standing out for its high perception of simplicity and intuition in the interface (both with 5.0), but with slightly lower scores in clarity of instructions, time allocated and connectivity (between 4.25 and 4.75), suggesting a positive experience, although not as homogeneous as that of the experimental group.
These differences reflect a clear trend: the greater the automation and operational clarity, the greater the satisfaction reported by participants, especially in terms of technical accessibility, methodological clarity and procedural feasibility.

3.2. Result of Experiment 2: Classification and Quantification of Parts for Manufacturing

This second experiment addressed the intermediate stage of the WikiHouse workflow, focusing on the classification and quantification of the parts required for the CNC manufacturing of a component. At this stage, the manual method of reviewing, classifying, and downloading the pieces one by one was contrasted with an algorithm that automates the identification, counting, and organization of the 2D construction components from the 3D model. A list of all the elements present in the model is generated (A), identifying the quantity and numbering of each part. However, this initial information is often overloaded, making direct reading of the data difficult. Therefore, the text and identifiers are reorganized (B) to clarify the correspondence between the model elements and their attributes. Next, a destination folder is defined on the desktop using the “C:/” component (C), which will serve as the location for the downloaded files. Finally, when the components are executed, a search is performed for the 2D files (cutting files) in that folder, and only one instance of each existing part type is imported into the workspace (D) (see Figure 5).
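As an illustration of stages (A), (B), and (D), the core counting logic reduces to grouping object names by part type. The following minimal Python sketch uses hard-coded example names as an assumption, whereas the actual Grasshopper definition reads the names directly from the Rhino model.

from collections import Counter

# Schematic stand-in for the object names read from the 3D model; the part
# names below are examples, not the WikiHouse nomenclature.
model_part_names = [
    "WALL-S-1", "WALL-S-1", "FLOOR-END-1", "WALL-S-2", "FLOOR-END-1", "WALL-S-1",
]

# Stages (A)-(B): build a legible inventory of part types and quantities.
inventory = Counter(model_part_names)
for part_type, count in sorted(inventory.items()):
    print("{}: {} unit(s)".format(part_type, count))

# Stage (D): keep a single reference instance of each part type for import.
unique_parts = sorted(inventory.keys())
print("Types to import once each:", unique_parts)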
The data obtained (see Table 5) show a significant difference between the three groups. The experimental group, which used the parametric script to classify and quantify the parts, obtained an average execution time of 1:51 min, which represents a 94.7% reduction compared to the SKP control group (32:29 min) and 95.0% compared to the Rhino control group (34:20 min). This difference suggests a direct correlation between the automation of the counting and the improvement in operational efficiency in the preparation stage for CNC cutting.
In practical terms, the experimental group was 19 times faster than the SKP control group and 20 times faster than the Rhino control group, equivalent to a saving of 30.88 min and 32.56 min, respectively. In terms of accuracy in piece classification and quantification (see Table 5), the results again show a substantial advantage in favor of the experimental group. This group achieved 100% correct classification and accurate counting, without presenting any type of error in the process (0% margin of error). In contrast, the Rhino control group presented an average of 89.3% correct classification and 90.9% correct counting, with an average margin of error of 9.9%. The SKP control group obtained an average of 84.5% correct classification, 87.4% correct counting, and a significantly higher margin of error of 17.0%.

3.2.1. Experiment 2: Experimental Group A (Script): Satisfaction Level

Based on the clarity and effectiveness of the training, the results indicate that the instructions were clearly understood, achieving an average score of 5.0 for Question 1. Technical support (Question 4) was rated 4.75 due to slight variations, reflecting an effective training process with potential improvement in assistance during certain stages. Based on the intuitiveness and simplicity of the procedure, the results show that Questions 2 and 5 obtained average scores of 5.0, indicating that the interface was considered clear and functional, and the technical steps were perceived as accessible and logically structured. No difficulties were reported, reinforcing the usability and intuitive design of the parametric script. Based on the perceived difficulty and allocated time, the results reveal that Questions 3 and 8 received scores of 5.0, indicating that the allotted time was sufficient and no technical barriers were identified. This suggests that the procedure was carried out smoothly within the established timeframe. Based on the availability of resources and connectivity, the results indicate that Items 6 and 7 also received scores of 5.0, showing that the available resources and internet connection were adequate, without any interference in the experiment’s development. Finally, the results demonstrate total acceptance: all participants stated they would recommend the procedure (Question 9) and recognized its adaptability for other applications (Question 10), both with perfect scores of 5.0. This reflects a high degree of satisfaction and a strong perception of its potential future utility (see Table 6 and Figure 6).

3.2.2. Experiment 2: Control Group B (Rhino): Satisfaction Level

Based on the clarity and effectiveness of the training, the results show that both the instructions and the technical support were highly rated, with average scores of 4.75 for Questions 1 and 4. Participants demonstrated a solid understanding of the process and felt adequately supported, although slight individual variations suggest opportunities for improving communication strategies. Based on the intuitiveness and simplicity of the procedure, the results indicate that the interface was rated 4.75 (Question 2), slightly lower than the group that used the script, suggesting a greater manual workload and interpretive demand on the user. Nevertheless, the technical procedures (Question 5) achieved a perfect score of 5.0, showing that once understood, the steps were perceived as clear and manageable. Based on the perceived difficulty and allocated time, the results reveal that Questions 3 and 8 reached average scores of 4.75, indicating a generally positive experience, though with minor signs of pressure or difficulty in one isolated case. This aligns with the longer execution times observed in this group. Based on the availability of resources and connectivity, the results demonstrate that resources and infrastructure were rated 5.0 in both Questions 6 and 7. All participants agreed that the technical environment was adequate and free from interference, contributing to the smooth development of the activity. Finally, the results show that the statement regarding the usefulness of parametric algorithms received an average score of 4.75 (Question 9), reflecting a generally positive perception of the efficiency of the automated approach, even among those who did not apply it directly (see Table 6 and Figure 6).

3.2.3. Experiment 2: Control Group C (SketchUp): Satisfaction Level

Based on the clarity and effectiveness of the training, the results indicate that both the instructions and technical support were rated with an average score of 4.75 (Questions 1 and 4). Although the overall experience was satisfactory, small individual differences were recorded, suggesting that the support process could be further standardized. Based on the intuitiveness and simplicity of the procedure, the results show that the SketchUp interface was rated 4.75 (Question 2) and the technical procedures 5.0 (Question 5). This indicates that, despite being a manual process, participants perceived clarity in the step-by-step execution and ease of use, particularly among those already familiar with the software. Based on the perceived difficulty and allocated time, the results reveal that Questions 3 and 8 obtained an average score of 4.5, reflecting a generally positive experience but with a higher perceived effort compared to the experimental group. One participant expressed lower satisfaction, suggesting occasional difficulties related to pace or comprehension. Based on the availability of resources and connectivity, the results indicate that technical conditions received the lowest ratings in this group (4.5 for Questions 6 and 7). One participant reported a negative experience, noting that certain infrastructural issues may have hindered the smooth development of the activity. Finally, the results show that the use of algorithms was considered a clear improvement to the procedure (average score of 5.0, Question 9). However, the recommendation of the manual process to other students (Question 10) received the lowest rating in the group (3.75), due to one critical response (score of 1), indicating a less satisfactory experience in at least one case (see Table 6 and Figure 6).

3.2.4. General Survey Results

The data obtained (see Figure 6) show a notable difference in the qualitative assessment of the procedure between the three groups. The experimental group, which used a parametric script for file conversion in the organization stage prior to CNC cutting, achieved average scores of 5.0 in almost all dimensions evaluated, including clarity of instructions, simplicity of the procedure, perception of time, availability of resources, and general acceptance. The only variation was in technical support, with a slight decrease to 4.75.
In contrast, the Rhino control group obtained slightly lower scores, with averages between 4.75 and 5.0. Although the technical procedures were well understood, the interface presented a slight interpretive burden, as reflected in the 4.75 usability score.
The SKP control group, on the other hand, presented the lowest scores, especially in perception of time and infrastructure (4.5), as well as in recommendation of the procedure (3.75), influenced by an individual critical response. Despite this, participants recognized the usefulness of using algorithms (5.0), although they did not apply this approach directly.

3.3. Result of Experiment 3: Transformation of CNC Router Data to Laser

This third experiment corresponds to the penultimate phase of the WikiHouse workflow, focusing on the transformation of cutting files originally generated for CNC routers into a format compatible with laser cutting machines. In this process, the manual conversion procedure was compared with the assistance of a parametric script that automates the debugging, reorganization, and geometric adaptation of the data for correct execution in another manufacturing technology.
The manual transformation of parts was compared with the transformation assisted by a parametric script (see Figure 7), which allows the automation of the conversion of lines and geometries from files prepared for routers to a format optimized for laser cutting. This automated process not only reduces execution time but also improves accuracy and reduces errors that could result from manual processing of cutting lines.
The developed algorithm executes a sequence of operations that optimize work with 2D files. Initially, components are updated to recalculate the elements present in the workspace (A), and these are grouped into a container that allows selection between different previously stored models (B). Then, the curves are classified according to their function using color codes (C), which facilitates their identification and manipulation. Next, a geometric simplification process is carried out to eliminate unnecessary lines or lines incompatible with laser cutting (D). Once the files are refined, they are reconstructed based on the new machine type (E) and can be rescaled to the desired scale (F). The number of parts is automatically adjusted according to the previous count (G), and finally, the entire assembly is grouped into a single component for correct organization and export (H).
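The sequence (C)–(F) can be sketched in plain Python as a filter over a list of classified curves. The color convention, the dog-bone length threshold, and the scale factor below are illustrative assumptions; the actual definition operates on the real curve geometry inside Rhino and Grasshopper.

from dataclasses import dataclass

@dataclass
class Curve2D:
    color: str        # color code used to classify the curve's function (stage C)
    length_mm: float  # schematic stand-in for the real curve geometry

# Assumed color convention; the codes used in the actual definition may differ.
LAYER_BY_COLOR = {"red": "CUT", "blue": "ENGRAVE", "green": "OUTLINE"}
DOGBONE_MAX_LENGTH_MM = 8.0   # assumed threshold for router-only relief geometry (stage D)
SCALE_FACTOR = 0.5            # assumed rescaling for the laser-cut prototype (stage F)

def to_laser_format(curves):
    # Classify curves by color, drop router-specific relief geometry, rescale.
    cleaned = []
    for c in curves:
        layer = LAYER_BY_COLOR.get(c.color, "UNCLASSIFIED")
        # Router "dog bones" are short relief arcs that the laser does not need.
        if layer == "CUT" and c.length_mm <= DOGBONE_MAX_LENGTH_MM:
            continue
        cleaned.append((layer, c.length_mm * SCALE_FACTOR))
    return cleaned

parts = [Curve2D("red", 600.0), Curve2D("red", 6.0), Curve2D("blue", 120.0)]
print(to_laser_format(parts))   # [('CUT', 300.0), ('ENGRAVE', 60.0)]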
The results show a clear difference between the three groups tested (see Table 7). The experimental group, which used a parametric script to transform the geometric information, completed the process in an average of 3:54 min. In contrast, the control group, which worked with Rhino, took an average of 29:43 min, while the SketchUp group averaged 41:34 min. This notable difference in execution times demonstrates how the use of automated digital tools can significantly streamline tasks that, when performed manually, are slower and more error-prone. The script not only facilitated the process but also allowed participants to focus on key aspects without being distracted by repetitive or technical tasks.
From a comparative perspective, the experimental group was able to complete the file transformation in significantly less time than the control groups. On average, they were approximately 10.5 times faster than the SKP control group and 7.7 times faster than the Rhino control group, representing a time saving of 37 min and 25.8 min, respectively.
In addition to time, the transformation accuracy results also favored the group that used the parametric script. All participants in this group achieved 100% correct transformations. In contrast, the Rhino control group achieved an average of 89.4% correct transformations, while the SKP control group achieved 91.3%.

3.3.1. Experiment 3: Experimental Group A (Script): Satisfaction Level

Based on the clarity and effectiveness of training, the results reflect that the instructions for the experiment were fully understood, with an average score of 5.0 (question 1). Technical support was also rated 5.0 (question 4), indicating clear and sufficient training. Based on the intuitiveness and simplicity of the procedure, the results reflect that questions on ease of use (2 and 5) obtained 5.0, reflecting that the interface and technical steps were perceived as simple and intuitive, with no difficulties during execution. Based on the perception of difficulty and time allotted, the results reflect that the time was considered adequate and no technical obstacles were reported, with scores of 5.0 in questions 3 and 8. Based on the availability of resources and connectivity, the results reflect that the facilities were highly rated (4.75 in question 6), although a slight variation was observed in the quality of the internet connection (4.25 in question 7). Finally, the results obtained show that the algorithm was well received. The procedure recommendation obtained a score of 4.75 (question 9), while its reusability potential reached the maximum score (5.0, question 10) (See Table 8 and Figure 8).

3.3.2. Experiment 3: Control Group B (Rhino): Satisfaction Level

Based on the clarity and effectiveness of the training, the results indicate that both the instructions and technical support received an average score of 4.75 (Questions 1 and 4), demonstrating a solid understanding of the procedure, although slight variations among participants suggest potential improvements in communication. Based on the intuitiveness and simplicity of the procedure, the results show that the interface obtained a score of 4.75 (Question 2) and the technical procedures 4.5 (Question 5), indicating a generally positive perception, though with signs of a higher manual workload compared to automated processes. Based on the perceived difficulty and allocated time, the results reveal that scores of 4.75 and 4.25 for Questions 3 and 8 indicate that, although the process was generally smooth, some participants experienced a degree of difficulty or additional effort during execution. Based on the availability of resources and connectivity, the results show that Items 6 and 7 were rated 4.75, suggesting that both resources and internet connectivity were adequate and did not represent obstacles during the experiment. Finally, the results indicate that recognition of the value of algorithms achieved an average score of 4.5 (Question 9), while the recommendation of the manual procedure was lower (4.25 for Question 10), suggesting a moderate level of acceptance of the manual approach (See Table 8 and Figure 8).

3.3.3. Experiment 3: Control Group C (SketchUp): Satisfaction Level

Based on the clarity and effectiveness of the training, the results reflect that Questions 1 and 4 obtained averages of 4.25 and 5.0, respectively. Although the majority understood the instructions well, some cases show that the support could be strengthened. Based on the intuitiveness and simplicity of the procedure, the results reflect that the interface was evaluated with 4.75 (Question 2) and the procedures with 4.25 (Question 5). This reflects that, although the software was understandable, there was some complexity in the technical execution for some users. Based on the perceived difficulty and allotted time, the results reflect that, with averages of 4.75 (Question 3) and 4.25 (Question 8), the data suggest an acceptable experience, although with greater demands than the experimental group. Based on resource availability and connectivity, the results show that Questions 6 and 7 obtained 4.5 and 4.75, indicating generally adequate resources, but with one case that highlighted technical limitations during the test. Finally, the results show that the rating of the algorithms’ potential (Question 9) reached 4.5, demonstrating interest in more efficient options. The recommendation for the manual process (Question 10) was lower (3.75), due to a critical response suggesting specific dissatisfaction (see Table 8 and Figure 8).

3.3.4. General Survey Results

The data obtained (see Figure 8) show a notable difference in the qualitative assessment of the procedure among the three groups. The experimental group, which used a parametric script during the file transformation process, obtained average scores of 5.0 in almost all dimensions evaluated, including comprehension of instructions, ease of use, time adequacy, and perception of the procedure. Only slight variations were recorded in connectivity (4.25) and infrastructure conditions (4.75), without compromising overall performance.
In contrast, the Rhino control group obtained slightly lower results, with averages between 4.5 and 4.75. While the experience was generally positive, signs of greater manual workload were evident, especially in technical execution and time perception. The SKP control group, meanwhile, presented the lowest ratings, with scores of 4.25 in key aspects such as technical clarity and perception of execution, and 3.75 in procedure recommendation, reflecting a less favorable experience and greater individual variability. This difference suggests a direct correlation between the use of automated tools and a better perception of the process in terms of clarity, simplicity, and operational acceptance. As manual intervention is reduced, levels of satisfaction, replicability, and confidence in the efficiency of the procedure increase.

3.4. Result of Experiment 4: Nesting the CNC Laser Data (Material Optimization)

This fourth experiment corresponds to the final phase of the WikiHouse workflow, focusing on the nesting process, that is, the efficient arrangement of 2D parts on the material sheets for subsequent CNC laser cutting. Manual nesting was compared with nesting performed using a parametric script (see Table 9), which automates the placement of corrected parts on virtual sheets, calculating their ideal distribution according to size, rotation, and proximity. The developed algorithm (see Figure 9) executes a series of steps designed to maximize process efficiency. First, the elements of the 2D model are ungrouped so they can be processed individually by the nesting system (A). Next, the material format is defined, that is, the size and proportion of the cutting sheet (B). Then, the minimum tolerances between parts and with respect to the edges are parameterized (C), ensuring precise nesting without interference.
The algorithm then identifies the coding texts and replaces them with a typeface optimized for laser engraving. These texts are grouped together for joint processing (D), and once nesting is complete, they are recoded and regrouped together with their respective parts (E). Finally, the system identifies the different types of lines in the file (cut, engraving, outline), assigning them specific layers and differentiated colors, using Elefront components that automate this final classification (G).
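A minimal, self-contained Python sketch of the packing idea behind stages (A)–(C) is shown below: pieces are reduced to their bounding boxes and placed row by row on standard sheets while respecting a clearance. The sheet dimensions, clearance value, and piece sizes are assumptions for illustration; the actual script nests the real 2D geometry and also handles labels, rotation, and layer assignment.

SHEET_W, SHEET_H = 2440.0, 1220.0   # assumed plywood sheet size (mm)
CLEARANCE = 10.0                    # assumed minimum tolerance between pieces and edges

def nest(pieces):
    # pieces: list of (width, height) bounding boxes in mm. Returns the sheets used.
    pieces = sorted(pieces, key=lambda p: p[1], reverse=True)  # tallest pieces first
    sheets, x, y, row_h = 1, CLEARANCE, CLEARANCE, 0.0
    for w, h in pieces:
        if x + w + CLEARANCE > SHEET_W:            # piece does not fit in this row
            x, y = CLEARANCE, y + row_h + CLEARANCE
            row_h = 0.0
        if y + h + CLEARANCE > SHEET_H:            # row does not fit on this sheet
            sheets, x, y, row_h = sheets + 1, CLEARANCE, CLEARANCE, 0.0
        x += w + CLEARANCE                         # place the piece and advance in the row
        row_h = max(row_h, h)
    return sheets

print(nest([(1200, 400)] * 6 + [(600, 300)] * 10))   # number of sheets needed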
The results obtained reflect a significant difference between the three groups in terms of execution time and the amount of material used. The experimental group, assisted by a parametric script, achieved an average execution time of 0:52 min, while the SKP control group recorded 63:32 min and the Rhino control group 78:44 min. This difference represents a 98.6% reduction compared to the SKP group and a 99.0% reduction compared to the Rhino group.
In practical terms, the experimental group completed the task 73 times faster than the SKP control group and 90 times faster than the Rhino control group, equivalent to an approximate saving of 62.7 min and 77.9 min, respectively.
Likewise, regarding material usage, the experimental group used 52 plates in all cases, while the Rhino control group averaged 54.5 plates and the SKP control group 56.5 plates. This indicates an optimization in material usage of 2.5 to 4.5 plates compared to manual methods. This difference implies a reduction in material consumption of 4.6% and 8.0%, respectively.
The data obtained (see Table 10) show a clear variation in the percentage of material waste among the three groups evaluated during the part nesting process for CNC laser cutting. The experimental group, which used a parametric script for automated part organization, recorded an average waste percentage of 21.06%, based on a total of 154.79 m2 of material used.
In contrast, the Rhino control group had an average waste of 24.1%, with an average use of 162.23 m2, while the SKP control group had the highest percentage, reaching an average waste of 25.2% out of 168.19 m2 of material used. These differences reflect a lower proportion of wasted material in the experimental group, both in percentage and in absolute terms. Automation kept the area used constant and reduced variability, unlike the control groups, which showed greater fluctuations between participants, as well as more intensive use of the available material.
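The waste percentages in Table 10 follow from the ratio between the gross sheet area purchased and the net area actually occupied by parts. A short sketch of that calculation is given below, with an assumed sheet size and an assumed net part area rather than the measured values.

def waste_percentage(sheets_used, sheet_area_m2, net_parts_area_m2):
    # Share of the purchased material that does not end up in a cut part.
    gross_area = sheets_used * sheet_area_m2
    return 100.0 * (gross_area - net_parts_area_m2) / gross_area

# Assumed inputs for illustration (e.g., a 2.44 m x 1.22 m sheet of about 2.98 m2):
print(round(waste_percentage(sheets_used=52, sheet_area_m2=2.98, net_parts_area_m2=122.0), 1))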

3.4.1. Experiment 4: Experimental Group A (Script): Satisfaction Level

Based on the clarity and effectiveness of the training, the results reflect that Questions 1 and 4 obtained a perfect score of 5.0, indicating that both the instructions and the technical support were fully understood and valued as adequate. Based on the intuitiveness and simplicity of the procedure, the results reflect that Items 2 and 5 also achieved the maximum score, reflecting that the participants considered the interface and the procedures to be clear, accessible, and intuitive. Based on the perception of difficulty and allotted time, the results reflect that the absence of technical barriers and the allotted time were highly evaluated (questions 3 and 8), both with scores of 5.0, indicating a fluid execution. Based on the availability of resources and connectivity, the results reflect that access to the equipment was highly valued (4.75), although a slight variation in the perception of connectivity was detected (4.25), which indicates possible specific differences in the technical environment. Finally, the results show that the procedure was highly accepted. The willingness to recommend it was 4.75, and its potential reuse was fully recognized, with an average of 5.0 (See Table 11 and Figure 10).

3.4.2. Experiment 4: Control Group B (Rhino): Satisfaction Level

Based on the clarity and effectiveness of the training, the results reflect that questions 1 and 4 obtained an average of 4.75, reflecting a good understanding of the procedure. However, slight differences were evident between participants, suggesting areas for improvement in technical support. Based on the intuitiveness and simplicity of the procedure, the results reflect that the interface received an average of 4.5 and the technical steps were rated 5.0, indicating that, although execution was clear, interaction with the graphical environment required greater effort compared to the automated group. Based on the perception of difficulty and time allocated, the results reflect that the ratings for questions 3 and 8 (4.0 and 5.0) show that, in general, time was sufficient, although some participants experienced more workload or difficulty during development. Based on resource availability and connectivity, the results reflect that questions 6 and 7 obtained scores of 5.0, indicating that the physical resources and connectivity functioned adequately throughout the activity. Finally, the results show that the usefulness of the parametric algorithms was recognized with an average of 5.0 (question 9). In contrast, the recommendation for the manual procedure was 3.5 (question 10), reflecting a lower willingness to replicate it, possibly due to comparative experience with automated methods (See Table 11 and Figure 10).

3.4.3. Experiment 4: Control Group C (SketchUp): Satisfaction Level

Based on the clarity and effectiveness of the training, the results reflect that questions 1 and 4 obtained averages of 4.25 and 5.0, respectively. Although the majority understood the instructions well, some cases show opportunities for improvement in technical support. Based on the intuitiveness and simplicity of the procedure, the results reflect that the interface was evaluated with a 4.75 (question 2), while the technical procedures received a 4.25 (question 5). This indicates that the environment was mostly understandable, but the execution presented some complexity for some participants. Based on the perception of difficulty and time allocated, the results reflect that the scores of 4.75 (question 3) and 4.25 (question 8) reflect a functional experience, although with greater demands compared to the group that used automated tools. Based on the availability of resources and connectivity, the results reflect that Questions 6 and 7 were rated 4.5 and 4.75. Although the majority had adequate access to resources, at least one case with technical difficulties was recorded. Finally, the results obtained show that the perception regarding the use of algorithms was positive (4.5 in question 9), while the recommendation of the manual procedure was lower (4.0 in question 10), due to an individual critical evaluation (see Table 11 and Figure 10).

3.4.4. General Survey Results

The data obtained (see Figure 10) show a notable difference in the qualitative assessment of the procedure among the three groups. The experimental group, which used a parametric script during the process, achieved average scores of 5.0 in almost all dimensions evaluated, including clarity of instructions, simplicity of procedure, perception of time, availability of resources, and general acceptance. In contrast, the Rhino control group obtained slightly lower scores, with averages between 4.5 and 4.75, while the SKP control group presented the lowest ratings, especially in the recommendation of the procedure (3.75) and technical simplicity (4.25) (See Table 11 and Figure 10).

3.5. General Results of the 4 Experiments

3.5.1. Experiment 1: Downloading Parts

  • Group A used a parametric algorithm developed in Grasshopper, composed of native components and custom Python scripts, aimed at automating the download of the 2D files corresponding to the WikiHouse parts. The system established a direct link between the nomenclature of each component and its respective address in the GitHub repository, simultaneously generating a default destination path on the desktop of each test subject. This configuration reduced manual intervention and minimized errors due to omission, while maintaining traceability between the digital model and the manufacturing files.
  • Group B executed the download process manually from WikiHouse’s GitHub repository, selecting and storing each file individually. This approach required more execution time and visual control, as it lacked validation mechanisms or automatic detection of missing files.
  • Group C downloaded the file through the official WikiHouse website, also using a manual and sequential process. The lack of automation led to greater exposure to human error, as well as delays resulting from the individual search and management of files.
Results: Group A completed the file download in an average time of less than 1 min, while Groups B and C required between 63 and 109 min, representing a time reduction of approximately 98–99%. Furthermore, the automated workflow guaranteed 100% accuracy in file retrieval, with no omissions or redundancies, in contrast to manual procedures that showed errors of up to 2–3 files per session and delays associated with non-automated handling. These results confirm the effectiveness of the parameterized CAD–CAM workflow in maintaining data traceability and consistency during the initial download phase.

3.5.2. Experiment 2: Classification and Quantification of Parts for Manufacturing

The experiment aimed to identify and quantify all the component parts of the 3D model of a WikiHouse system, in order to generate a digital manufacturing inventory.
  • Group A used a parametric algorithm developed in Grasshopper, programmed to automatically detect, classify, and count each element present in the 3D model within the Rhinoceros 3D environment. The system displayed real-time reading panels with the nomenclature and exact number of units per part type, eliminating the need for manual intervention and ensuring data accuracy and traceability.
  • Group B worked directly in Rhinoceros, using native tools to visually isolate the components, proceed with their manual identification, and record the count individually. This procedure required more execution time and presented risks of inconsistencies in data transcription.
  • Group C performed the task in SketchUp, following a completely manual process of visualization, counting, and identification, without the support of automated routines. This involved considerable operational effort and a higher probability of classification errors.
Results: Group A achieved complete identification and counting in approximately 30 s through the automatic execution of the parametric algorithm, obtaining exact nomenclatures and quantities without error (0% deviation). In contrast, Groups B and C, who performed the classification manually in Rhino and SketchUp, respectively, recorded errors of 1.8% and 3.2%, with processing times 40–60 times longer (18–31 min). Furthermore, inter-operator variability was reduced from ≈15% to less than 2%, demonstrating that parametric automation not only accelerates processing, but also standardizes manufacturing information and eliminates human uncertainty in the CAD–CAM workflow.

3.5.3. Experiment 3: Transforming CNC Router Data to Laser

The experiment aimed to adapt existing CNC router machining files for use in laser cutting by simplifying and geometrically refining the parts. This process involved eliminating details typical of drill machining, such as “dog bones” or lightening allowances, since such geometries are unnecessary in laser technology and can cause reading errors, redundant pointer paths, and reduced cutting efficiency.
  • Group A used the information derived from experiments 1 and 2, utilizing the exact parts list and their 2D geometries. These were imported into the three-dimensional environment of Rhino 8, where an algorithm with integrated scripts designed to automate geometric debugging and optimize cutting paths was applied. The procedure concluded with the generation of 2D files ready for manufacturing, with a clean format compatible with laser cutting systems.
  • Group B manually modified the parts within Rhino 8, using only the software’s native tools. This method required more operating time and manual precision, increasing the likelihood of geometric inconsistencies.
  • Group C replicated the procedure manually in AutoCAD, using the program’s native tools to edit, simplify, and debug the 2D geometries. Like Group B, the process was longer and depended on human intervention.
Results: Group A executed the geometric transformation of the CNC router files to laser format automatically and in less than 2% of the time required by manual methods. The integrated scripts successfully eliminated dog bones and milling offsets, optimizing the cutting paths and reducing geometric errors to 0%. In contrast, Groups B and C—which modified the parts manually in Rhino and AutoCAD—experienced geometric inconsistencies in 10–15% of the files and up to 40 times longer editing times. The automated workflow thus demonstrated substantially greater geometric and time efficiency, aligned with the principles of file-to-factory and DFMA (Design for Manufacturing and Assembly).

3.5.4. Experiment 4: Nesting the CNC Laser Data (Material Optimization)

  • Group A used the automated algorithm, which performed the nesting process comprehensively, considering both the number of parts and the spatial efficiency of the layout. The system rearranged, coded, and arranged the parts in Rhino, generating a file ready for the laser cutting process.
  • Group B performed the nesting manually in Rhino 8, duplicating the parts according to the required quantity and arranging them on the sheets without automated assistance, relying solely on the operator’s experience to optimize space.
  • Group C replicated the procedure in AutoCAD, using the program’s native tools to duplicate, rotate, and distribute the parts manually. Like Group B, the process relied entirely on human manipulation and visual estimation to achieve optimization.
Results: Group A achieved automated and optimized nesting, using approximately 52 sheets of material with an average waste of 21.06%, compared to 54.5–56.5 sheets and 24.1–26.8% waste for groups B and C. This represents a reduction of 3–4 percentage points (≈10–20% relative) in material waste and operational savings of nearly 98% in run time. The manual groups presented less compact distributions and a higher margin of error, confirming that nesting parameterization increases spatial and material efficiency, ensuring repeatability, traceability, and dimensional consistency between operators.
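For orientation only, the sheet-count side of nesting can be approximated with a first-fit-decreasing heuristic over part areas, as sketched below. The sheet size, the 0.8 usable-area factor, and the demo parts are illustrative assumptions; the actual nesting algorithm additionally solves 2D placement, rotation, part coding, and kerf spacing.

```python
# Simplified sketch of the sheet-count side of nesting: first-fit-decreasing
# assignment of part areas to standard sheets. Values are illustrative only.
SHEET_W_MM, SHEET_H_MM = 1220, 2440
SHEET_AREA = SHEET_W_MM * SHEET_H_MM
USABLE_FRACTION = 0.80  # allowance for kerf, margins, and irregular outlines


def estimate_nesting(part_sizes_mm):
    """Return (number of sheets, waste %) for a list of (width, height) parts."""
    areas = sorted((w * h for w, h in part_sizes_mm), reverse=True)
    capacity = SHEET_AREA * USABLE_FRACTION
    open_sheets = []  # remaining usable area per sheet
    for area in areas:
        for i, free in enumerate(open_sheets):
            if area <= free:  # first open sheet with room wins
                open_sheets[i] = free - area
                break
        else:
            open_sheets.append(capacity - area)  # open a new sheet
    used_area = sum(areas)
    waste_pct = 100.0 * (1.0 - used_area / (len(open_sheets) * SHEET_AREA))
    return len(open_sheets), waste_pct


if __name__ == "__main__":
    demo_parts = [(1200, 300)] * 20 + [(600, 400)] * 15 + [(300, 300)] * 30
    sheets, waste = estimate_nesting(demo_parts)
    print(f"{sheets} sheets, {waste:.1f}% waste")
```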

3.6. Hypothesis Testing Results of the Experiments

3.6.1. Hypothesis Testing for Experiment 1

  • Hypothesis for Experiment 1: The use of parametric digital tools reduces the download times of the 3D components required for the production of WikiHouse prototypes.
  • H0 (null hypothesis): The use of parametric digital tools reduces the download times of the 3D components required for the production of WikiHouse prototypes.
  • H1 (alternative hypothesis): The use of parametric digital tools does not reduce the download times of the 3D components required for the production of WikiHouse prototypes.
Since the observed significance level (0.755) is greater than 0.05, the null hypothesis H0 is retained and the alternative hypothesis H1 is not supported; that is, the use of parametric digital tools reduces the download times of the 3D components required for the manufacture of WikiHouse prototypes (see Table 12, Table 13 and Table 14).
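The chi-square value and significance level reported in Table 14 can be reproduced from the counts in Table 13 with a standard test of independence, for example with SciPy (assuming SciPy is available; rows and columns follow the Table 13 labels):

```python
# Reproducing the Pearson chi-square of Table 14 from the Table 13 crosstab
# (P1 x P8). Rows: "Neither agree nor disagree", "Agree", "Strongly agree";
# columns: "Agree", "Strongly agree".
from scipy.stats import chi2_contingency

observed = [
    [0, 1],
    [1, 2],
    [3, 5],
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.4f}, df = {dof}, p = {p_value:.3f}")
# Prints chi2 = 0.5625 (0.563 in Table 14), df = 2, p = 0.755; since p > 0.05,
# the decision rule described above applies.
```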

3.6.2. Hypothesis Testing of Experiment 2

  • Hypothesis of Experiment 2: The use of digital tools verifies and reduces errors in the components required for the manufacture of WikiHouse prototypes.
  • H0 (null hypothesis): The use of digital tools verifies and reduces errors in the components required for the manufacture of WikiHouse prototypes.
  • H1 (alternative hypothesis): The use of digital tools does not verify and reduce errors in the components required for the manufacture of WikiHouse prototypes.
Since the observed significance level (0.225) is greater than 0.05, the null hypothesis H0 is retained and the alternative hypothesis H1 is not supported; that is, the use of digital tools verifies and reduces errors in the components required for the manufacture of WikiHouse prototypes (see Table 15, Table 16 and Table 17).

3.6.3. Hypothesis Testing of Experiment 3

  • Hypothesis of Experiment 3: The use of digital tools reduces the classification execution time, minimizes the percentage of errors in the quantification and classification of components, and reduces the download times of the 2D components required for the manufacture of WikiHouse prototypes.
  • H0 (null hypothesis): The use of digital tools reduces the classification execution time, minimizes the percentage of errors in the quantification and classification of components, and reduces the download times of the 2D components required for the manufacture of WikiHouse prototypes.
  • H1 (alternative hypothesis): The use of digital tools does not reduce the classification execution time, minimize the percentage of errors in the quantification and classification of components, or reduce the download times of the 2D components required for the manufacture of WikiHouse prototypes.
Since the observed significance level (0.126) is greater than 0.05, the null hypothesis H0 is retained and the alternative hypothesis H1 is not supported; that is, the use of digital tools reduces the classification execution time, minimizes the percentage of errors in the quantification and classification of components, and reduces the download times of the 2D components required for the manufacture of WikiHouse prototypes (see Table 18, Table 19 and Table 20).

3.6.4. Hypothesis Testing of Experiment 4

  • Hypothesis of Experiment 4: The use of digital tools reduces the execution time of the transformation and minimizes the percentage of errors in the transformation of data from CNC Router to CNC Laser for the manufacture of WikiHouse prototypes.
  • H0 (null hypothesis): The use of digital tools reduces the execution time of the transformation and minimizes the percentage of errors in the transformation of data from CNC Router to CNC Laser for the manufacture of WikiHouse prototypes.
  • H1 (alternative hypothesis): The use of digital tools does not reduce the execution time of the transformation or minimize the percentage of errors in the transformation of data from CNC Router to CNC Laser for the manufacture of WikiHouse prototypes.
Since the observed significance level (0.595) is greater than 0.05, the null hypothesis H0 is retained and the alternative hypothesis H1 is not supported; that is, the use of digital tools reduces the execution time and minimizes the percentage of errors in the data transformation from CNC Router to CNC Laser for the manufacture of WikiHouse prototypes (see Table 21, Table 22 and Table 23).

4. Discussion

4.1. Transfer Models: Open–Granular vs. Robotic–Compact

In open-source ecosystems such as WikiHouse, granularity (dozens to hundreds of CNC pieces per construction unit and per sub-assembly, such as walls or slabs) increases the coordination demands of CAD–CAM processes (those that translate design intent into executable manufacturing files, aligning rules, parameters, and formats for reliable, repeatable execution and assembly). Parametrizing the workflow (definitions in Grasshopper/Python) standardizes manufacturing data and reduces operator variability, maintaining the file-to-factory logic (a digital chain from model to machine that minimizes reinterpretation and preserves traceability) using 2–3-axis CNC machines available in low-tech contexts. In contrast, robotic and low-fragmentation systems (e.g., AUAR) maximize repeatability and dimensional control within automated cells but require higher adoption thresholds (infrastructure, training, and more proprietary technological ecosystems). These are not competitors—they are complementary. The open, parametrized granular route enables production today with common CNCs and builds a maturity ladder that can, when context allows, scale toward robotic cells without losing Design for Manufacturing and Assembly (DFMA) or Design for Disassembly (DfD).

4.2. Operational and Material Efficiencies

The experiments demonstrate dramatic time reductions when moving from manual workflows to parametrized CAD–CAM flows (minutes instead of tens of minutes or hours) and material gains through optimized nesting (piece layout on sheets).
In our data, the scripted group nested in approximately 52 sheets (controls: 54.5–56.5) and achieved 21.06% waste (controls: 24.1–26.8%), a reduction of about 3–4 percentage points (≈10–20% relative), with times of ~1 min (≈0:32–1:16) compared to 63–109 min for manual methods (see Table 9 and Table 10; Figure 9). These effects—observed in the nesting experiment under the evaluated material/thickness—explain the improvements in traceability (consistent nomenclature, coherent cutting layers) and operator consistency, which are key for transfer and auditing. Moreover, perceived clarity and adequacy increased with automation (experimental group ≈5.0 vs. controls ≈4.5–4.75 and ≈3.75–4.25; see Figure 8), consistent with the reduced manual load.
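As a quick arithmetic check, the percentage-point and relative reductions quoted above follow directly from the group averages in Table 10:

```python
# Quick check of the waste comparison using the group averages from Table 10.
script_waste = 21.06             # experimental group, %
control_wastes = [24.10, 25.20]  # Rhino and SketchUp group averages, %

for control in control_wastes:
    pp_reduction = control - script_waste       # percentage points
    relative = 100.0 * pp_reduction / control   # relative improvement
    print(f"{control:.2f}% -> -{pp_reduction:.2f} pp ({relative:.1f}% relative)")
# Roughly 3.0-4.1 percentage points, i.e., about 13-16% relative, within the
# 3-4 pp and ~10-20% ranges reported above.
```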

4.3. Contextual Validation and Pragmatic Transfer

Although LADDFAB is a different system from WikiHouse, it operates on the same principles (DFMA/DfD and file-to-factory with common CNCs) and uses algorithmic methods (Grasshopper/Python) to translate granularity into standardized cutting files. This case demonstrates the viability of intermediate technology using 2-axis CNCs and rapid assembly by small crews (see Figure 1C), supporting the notion that parametrization reduces time, stabilizes execution, and improves nesting—thus enabling DFMA/DfD in ordinary workshops. In summary, the same algorithmic “enzyme” can make a WikiHouse kit digestible under low-tech conditions. Parametrization does not replace the precision or production speed of robotic cells nor the pending structural validation (e.g., cyclic tests, diaphragms, anchors); however, it closes an operational gap—coordination, time, traceability, and material use—and establishes an intermediate step from which contexts can later scale toward advanced automation.

4.4. Limitations

(1) External validity: Limited sample size and student bias; controlled academic environment.
(2) Measurement and learning: Performance depends on software familiarity; times were recorded by an observer using a stopwatch under a uniform protocol; future replications will include dual observation and video timestamping.
(3) Partial metrology: Focused on time/errors and a proxy for material efficiency (number of sheets, % waste); lacks on-site assembly metrics (time per joint, rework, field errors with non-expert crews).
(4) Seismic resistance: Pending cyclic tests, target drifts, failure mechanisms, and diaphragm/anchor verification for seismic contexts.
(5) Technical generalization: Results sensitive to tolerances/kerf, machine condition, and OSB/ply quality; requires replicable quality assurance and control protocols across workshops.

4.5. Implications for Adoption (Micro-Factories, Fab Labs, Open Housing)

In low-tech contexts, data parametrization is an immediate lever: minutes instead of hours for critical tasks, −3–4 pp waste (≈10–20% relative improvement in nesting), and reduced operator variability (see Table 9 and Table 10; Figure 9). With complementary tools (parametric scripts and replicability/quality guides), production can be decentralized with full traceability; robotization becomes a subsequent step once industrial capacities are established.

The Role of Academia as a Transfer Vector

Academia can act as micro-infrastructure for transfer, converting technical innovation into installed capacity. In terms of curriculum, this implies case-based learning and workshops with verifiable outcomes (testing protocols, tolerances, times, and costs), literacy in Life-Cycle Assessment (LCA) using local data, and implementation of school micro-factories (2–3-axis CNCs) reproducing open design-to-fabrication chains (CAD–CAM, nesting, and quality protocols). For emerging contexts, distributed manufacturing in micro-factories using 2–3-axis CNCs, optimized nesting, and open chains offers a coherent scaling path versus robotic macro-plants: more accessible, replicable, and with a lower adoption threshold. As a rigor criterion, such programs should report public indicators (onboarding time, initial error per joint, % waste, cost per m2 of cutting, kgCO2e/m2), ensure BIM/IFC interoperability (manufacturing properties, DFMA/DfD, part codes), and document cyclic tests and assembly metrics with replicable protocols.

4.6. Next Steps

To consolidate transfer, interoperability, and regulatory validity, we propose an applied research agenda:
  • Open parametric constructions with seismic variables. Families/templates (Grasshopper/Revit/BlenderBIM) of panels, diaphragms, and joints with parameters for target drift, ductility, resistance hierarchy, and detailing; cyclic test protocols (load history, failure mechanisms, anchors) and performance models by typology.
  • BIM/IFC interoperability for manufacturing and disassembly. Mapping to IFC 4.x with properties for kerf, offsets, cutting order, part codes, reuse, and DFMA/DfD; validation through Model View Definitions (MVD) and round-trip tests (design→IFC→fabrication) without semantic loss.
  • Automated verification (rule-checking). Parametric rules for compliance (drift ≤ limit, connector spacing, clearances, cutting sequences) and non-conformity metrics detected prior to fabrication; a minimal sketch follows this list.
  • Tolerances and environment in the solver. Incorporate real kerf, tool wear, humidity, and expansion/contraction in nesting and assembly; measure design-to-piece deviation (mm), reworks, and on-site adjustments.
  • LCA + cost coupling to the design-fabrication flow. A1–A4 inventories, machine energy/time, cost per m2 of cutting and per joint; report Pareto curves (cost–carbon–time) for public-policy decisions.
  • Adoptability with non-expert crews. Measure onboarding time, cognitive load (e.g., NASA-TLX), errors and rework per joint with and without parametric guides.
  • Reproducible repository (when applicable). Datasets of hysteretic responses, times per joint, and typical errors with metadata and DOI; 15–30 min reproducible examples for initial training.
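A minimal sketch of the rule-checking idea listed above is given below; the rule set, limit values, and panel fields are illustrative placeholders rather than normative requirements:

```python
# Minimal illustration of pre-fabrication rule-checking. The rules, limits,
# and fields are placeholders for the checks listed above (drift, connector
# spacing, clearances); they are not normative values.
from dataclasses import dataclass


@dataclass
class PanelCheckInput:
    drift_ratio: float            # predicted inter-storey drift ratio
    connector_spacing_mm: float
    edge_clearance_mm: float


RULES = [
    ("drift ratio <= 0.007", lambda p: p.drift_ratio <= 0.007),
    ("connector spacing <= 150 mm", lambda p: p.connector_spacing_mm <= 150.0),
    ("edge clearance >= 10 mm", lambda p: p.edge_clearance_mm >= 10.0),
]


def check_panel(panel):
    """Return the non-conformities detected before fabrication."""
    return [name for name, rule in RULES if not rule(panel)]


if __name__ == "__main__":
    panel = PanelCheckInput(drift_ratio=0.009, connector_spacing_mm=140.0,
                            edge_clearance_mm=12.0)
    print(check_panel(panel) or "no non-conformities")
```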

4.7. Synthesis

Parametrizing before robotizing offers a realistic transfer path: it accelerates tasks, reduces waste, and preserves DFMA/DfD and traceability—making WikiHouse feasible in common workshops and enabling territorial scaling.
With complementary tools, BIM/IFC interoperability, and expanded evidence (statistics, assembly, seismic performance, LCA/costs), this pathway can translate into effective proliferation for social housing and decentralized production.

5. Conclusions

This study shows that parametrizing before robotizing is a realistic pathway to transfer open systems such as WikiHouse into low-tech contexts: it reduces preparation times to minutes, cuts material waste by 3–4 percentage points (≈10–20% relative improvement in nesting), and decreases operator variability while preserving traceability, DFMA, and DfD. Standardized CAD–CAM coordination (Grasshopper/Python) functions as an “enzyme” that makes the system’s granularity digestible, and the LADDFAB case confirms its viability as an intermediate technology using common CNC equipment.
This approach does not replace robotic cells in terms of precision or production rate, but it closes the operational gap and creates an intermediate step from which scaling becomes feasible once industrial capacities are in place. For territorial adoption, the combination of scripts, quality guides, and micro-factories with 2–3-axis CNCs enables distributed production with attainable costs and skill requirements. Academia can act as the decisive transfer vector—through workshops with verifiable metrics, LCA literacy based on local data, and BIM/IFC interoperability—installing capacities and accelerating replication.
Moving forward, consolidating cyclic tests, assembly metrics, LCA/cost data, and interoperability will be essential to strengthen the evidence base and scale open housing models under public and traceable standards.

Author Contributions

Conceptualization, S.P.N.; methodology, D.E.V., J.V.C., V.R. and R.T.; software, E.P.; validation, A.O.C.; investigation, A.O.C., R.T. and S.P.N.; resources, R.T.; data curation, L.C. and J.P.; writing—original draft, J.V.C., A.O.C. and V.R.; writing—review & editing, J.V.C., A.O.C. and V.R.; visualization, A.O.C., J.V.C. and V.R.; supervision, D.E.V.; funding acquisition, D.E.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to extend their sincere appreciation for the administrative support provided by the Faculty of Architecture and Urbanism of Ricardo Palma University.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yudhistira, A.T.; Satyarno, I.; Nugroho, A.S.B.; Handayani, T.N. Effect of Construction Delays and the Preventive Role of Concrete Works Optimization: Systematic Literature Review. TEM J. 2024, 13, 1203–1217.
  2. Sambasivan, M.; Soon, Y.W. Causes and Effects of Delays in Malaysian Construction Industry. Int. J. Proj. Manag. 2007, 25, 517–526.
  3. Shufrin, I.; Pasternak, E.; Dyskin, A. Environmentally Friendly Smart Construction—Review of Recent Developments and Opportunities. Appl. Sci. 2023, 13, 12891.
  4. Abanda, F.H.; Byers, L. An Investigation of the Impact of Building Orientation on Energy Consumption in a Domestic Building Using Emerging BIM (Building Information Modelling). Energy 2016, 97, 517–527.
  5. Karimi, H.; Timothy, R.; Dadi, G.; Goodrum, P.; Srinivasan, C. Impact of Skilled Labor Availability on Construction Project Cost Performance. J. Constr. Eng. Manag. 2018, 144.
  6. Tey, J.S.; Goh, K.C.; Seow, T.W.; Goh, H.H. Challenges in Adopting Sustainable Materials in Malaysian Construction Industry. 2013. Available online: https://www.irbnet.de/daten/iconda/CIB_DC26777.pdf (accessed on 20 October 2025).
  7. Formoso, C.T.; Soibelman, L.; De Cesare, C.; Isatto, E.L. Material Waste in Building Industry: Main Causes and Prevention. J. Constr. Eng. Manag. 2002, 128, 316–325.
  8. Osmani, M.; Glass, J.; Price, A.D.F. Architects’ Perspectives on Construction Waste Reduction by Design. Waste Manag. 2008, 28, 1147–1158.
  9. United Nations Environment Programme. Building Materials and the Climate: Constructing a New Future; UNEP: Nairobi, Kenya, 2023. Available online: https://www.unep.org/resources/report/building-materials-and-climate-constructing-new-future (accessed on 20 June 2025).
  10. De Brabandere, L.; Grigorjev, V.; Van den Heede, P.; Nachtergaele, H.; Degezelle, K.; De Belie, N. Using Fines from Recycled High-Quality Concrete as a Substitute for Cement. Sustainability 2025, 17, 1506.
  11. World Steel Association. Climate Change and the Production of Iron and Steel; World Steel Association: Brussels, Belgium, 2021. Available online: https://worldsteel.org/climate-action/climate-change-and-the-production-of-iron-and-steel/ (accessed on 15 May 2025).
  12. Peteck, J.; Glavic, P.; Kostevsek, A. Comprehensive approach to increase energy efficiency based on versatile industrial practices. J. Clean. Prod. 2016, 112, 2813–2821.
  13. Le Quéré, C.; Andrew, R.M.; Friedlingstein, P.; Sitch, S.; Pongratz, J.; Manning, A.C.; Korsbakken, J.I.; Peters, G.P.; Canadell, J.G.; Jackson, R.B.; et al. Global Carbon Budget 2016. Earth Syst. Sci. Data 2016, 8, 605–649.
  14. Pintos, P. DFAB House/NCCR Digital Fabrication. ArchDaily 2020. Available online: https://www.archdaily.com/942221/dfab-house-eth-zurich-plus-nccr-digital-fabrication (accessed on 10 July 2025).
  15. Graser, K.; Kahlert, A.; Hall, D.M. DFAB HOUSE: Implications of a building-scale demonstrator for adoption of digital fabrication in AEC. Constr. Manag. Econ. 2021, 39, 853–873.
  16. ArchDaily. Pabellón Paramétrico DIGFABMTY: Proyecto Experimental de Estudiantes Mexicanos. ArchDaily Perú, 2015. Available online: https://www.archdaily.pe/pe/773482/pabellon-parametrico-digfabmty-proyecto-experimental-de-estudiantes-mexicanos (accessed on 19 May 2025).
  17. Esenarro, D.; Porras, E.; Ventura, H.; Figueroa, J.; Raymundo, V.; Castañeda, L. Use of Digital Tools (Wikihouse System) in Multi-Local Social Housing. Sustainability 2024, 16, 3231.
  18. Taherdoost, H. What are Different Research Approaches? Comprehensive Review of Qualitative, Quantitative, and Mixed Method Research, Their Applications, Types, and Limitations. J. Manag. Sci. Eng. Res. 2022, 5, 53–63.
  19. Mulisa, F. When Does a Researcher Choose a Quantitative, Qualitative, or Mixed Research Approach? Interchange 2021, 53, 113–131.
  20. Ramos-Galarza, C. Los Alcances de una investigación. CienciAmérica 2020, 9, 1–6.
  21. Castro Maldonado, J.J.; Gómez Macho, L.K.; Camargo Casallas, E. La investigación aplicada y el desarrollo experimental en el fortalecimiento de las competencias de la sociedad del siglo XXI. Tecnura 2023, 27, 140–174.
  22. Shi, Y.; Wang, D.; Zhang, Z. Categorical Evaluation of Scientific Research Efficiency in Chinese Universities: Basic and Applied Research. Sustainability 2022, 14, 4402.
  23. Agudelo Viana, L.; Aigneren Aburto, J. Diseños de Investigación Experimental y No-Experimental; Universidad de Antioquia, Facultad de Ciencias Sociales y Humanas: Medellín, Colombia, 2008. Available online: http://hdl.handle.net/10495/2622 (accessed on 18 June 2025).
  24. Miller, C.J.; Smith, S.N.; Pugatch, M. Experimental and quasi-experimental designs in implementation research. Psychiatry Res. 2020, 283, 112452.
Figure 1. (A) DFAB ET, Switzerland; (B) DIGFABMTY2.0 Parametric Pavilion; (C) LADDFAB Wooden Prototype, Peru.
Figure 2. Methodological process.
Figure 3. Parametric script for automating the download and organization of WikiHouse pieces.
Figure 4. Survey results (Experiment 1): average rating per question in the three study groups.
Figure 5. Parametric script for automating part classification and quantification for manufacturing.
Figure 6. Survey results (Experiment 2): average rating per question in the three study groups.
Figure 7. Parametric script for the automation of CNC Router to Laser data transformation.
Figure 8. Survey results (Experiment 3): average rating per question in the three study groups.
Figure 9. Parametric script for the automation of nesting CNC laser data.
Figure 10. Survey results (Experiment 4): average rating per question in the three study groups.
Table 1. Variables, dimensions, and techniques used in the methodological process.

Variable | Dimension | Indicators | Techniques and Instruments
Use of parametric digital tools in Grasshopper and Python (X) | Ease of use (X1) | Level of understanding of the procedure; ease in executing commands and scripts | Likert-type survey; direct observation
Use of parametric digital tools in Grasshopper and Python (X) | Accessibility (X2) | Availability of software and digital resources; compatibility with devices or systems |
Use of parametric digital tools in Grasshopper and Python (X) | Training (X3) | Clarity of the instruction received; autonomy achieved after training |
Use of parametric digital tools in Grasshopper and Python (X) | Perceived difficulty (X4) | Degree of complexity perceived in the use of the software; estimated adaptation time |
Optimization of the pre-fabrication process (Y) | Execution time (Y1) | Total time spent completing each task or experiment | Stopwatch; control sheet
Optimization of the pre-fabrication process (Y) | Precision (Y2) | Degree of accuracy of the generated pieces in relation to the digital model | Direct observation; control sheet
Optimization of the pre-fabrication process (Y) | Margin of error (Y3) | Difference between theoretical dimensions and those obtained in the final piece |
Optimization of the pre-fabrication process (Y) | Material waste (Y4) | Percentage of material not utilized during the cutting or assembly process |
Table 2. Design of the questionnaire for control and experimental groups. The same eight statements were applied to both groups and rated on a five-point Likert scale (1 = Strongly disagree, 2 = Disagree, 3 = Neither agree nor disagree, 4 = Agree, 5 = Strongly agree).

Indicator | No. | Question (identical for control and experimental groups)
Training | 1 | The instructions provided for conducting the experiment were clear and easy to understand.
Simplicity | 2 | The interface I used (software/web) was intuitive, visually clear, and easy to follow.
Perceived difficulty | 3 | The time assigned to complete the experiments was sufficient to finish them.
Training | 4 | I received adequate support from the staff in charge to resolve technical questions or difficulties.
Simplicity | 5 | The technical procedures and instructions were easy to follow.
Access | 6 | The facilities and equipment were available without delays.
Access | 7 | The internet connection on the available equipment was sufficient to perform the experiment.
Perceived difficulty | 8 | I believe there were no significant barriers that affected my performance during the experiments.
Table 3. Individual and average execution times per group for downloading and organizing WikiHouse pieces.

Group | Participant | Execution Time (min) | Group Average
Experimental Group (Script) | Subject 1 | 1.07 | 1 min 15 s
Experimental Group (Script) | Subject 2 | 1.38 |
Experimental Group (Script) | Subject 3 | 1.12 |
Experimental Group (Script) | Subject 4 | 1.18 |
Control Group (Rhino) | Subject 5 | 76.3 | 90 min
Control Group (Rhino) | Subject 6 | 103.7 |
Control Group (Rhino) | Subject 7 | 94.5 |
Control Group (Rhino) | Subject 8 | 85.5 |
Control Group (SketchUp) | Subject 9 | 21.8 | 25 min
Control Group (SketchUp) | Subject 10 | 28.4 |
Control Group (SketchUp) | Subject 11 | 26.7 |
Control Group (SketchUp) | Subject 12 | 23.1 |
Table 4. Survey results from experiment 1: experimental group (script); control group (Rhino) and control group (SketchUp).
Experimental Group (Script)Control Group (Rhino)Control Group (SketchUp)
IndicatorQuestionsSubjectsAverageSubjectsAverageSubjectsAverage
S1S2S3S4S5S6S7S8S9S10S11S12
Capacitation15555544554.555544.75
Simplicity25555514553.7555555
Perceived difficulty3555551555455544.75
Capacitation454554.7545554.7555454.75
Simplicity5555553535455555
Access655454.7545354.2535544.25
Access755454.7545354.2535544.25
Perceived difficulty854554.7545554.554554.75
Table 5. Individual results per group in the classification and quantification of 2D parts for CNC manufacturing using manual and automated methods.
GroupParticipantExecution Time (min)Average Time (min)% Correct Classification% Correct Accounting% Margin of Error
Experimental Group (Script)Subject 12:151:511001000
Subject 21:281001000
Subject 31:401001000
Subject 42:001001000
Control Group (Rhino)Subject 538:234:2084.2185.115.30
Subject 632:4589.588.910.8
Subject 729:3289.592.68.95
Subject 837:194.796.34.5
Control Group (SKP)Subject 936:332:2995.0090.007.5
Subject 1032:284.285.215.3
Subject 1126:4773.777.824.25
Subject 1235:484.288.913.45
Table 6. Survey results from experiment 2: experimental group (script); control group (Rhino); and control group (SketchUp).
Experimental Group (Script)Control Group (Rhino)Control Group (SketchUp)
IndicatorQuestionsSubjectsAverageSubjectsAverageSubjectsAverage
S1S2S3S4S5S6S7S8S9S10S11S12
Capacitation15555555544.7555454.75
Simplicity25555545554.7555454.75
Perceived difficulty35555545554.7555354.75
Capacitation454554.755555555555
Simplicity5555555555555454.75
Access6555555555554544.5
Access75555555544.7544554.5
Perceived difficulty8545555555555544.5
Table 7. Individual results per group in CNC Router to Laser data transformation using manual and automated methods.

Group | Participant | Execution Time (min:s) | Average Time (min:s) | % Correct Transformation
Experimental Group (Script) | Subject 1 | 2:05 | 3:54 | 100
Experimental Group (Script) | Subject 2 | 3:16 | | 100
Experimental Group (Script) | Subject 3 | 7:40 | | 100
Experimental Group (Script) | Subject 4 | 2:36 | | 100
Control Group (Rhino) | Subject 5 | 33:01 | 29:43 | 89.4
Control Group (Rhino) | Subject 6 | 42:20 | | 94.7
Control Group (Rhino) | Subject 7 | 27:11 | | 100
Control Group (Rhino) | Subject 8 | 16:21 | | 84.2
Control Group (SketchUp) | Subject 9 | 33:26 | 41:34 | 94.7
Control Group (SketchUp) | Subject 10 | 42:41 | | 78.94
Control Group (SketchUp) | Subject 11 | 50:15 | | 100
Control Group (SketchUp) | Subject 12 | 39:52 | | 94.7
Table 8. Survey results from experiment 3: experimental group (script); control group (Rhino); and control group (SketchUp).
Experimental Group (Script)Control Group (Rhino)Control Group (SketchUp)
IndicatorQuestionsSubjectsAverageSubjectsAverageSubjectsAverage
S1S2S3S4S5S6S7S8S9S10S11S12
Capacitation15555555544.7554544.25
Simplicity25555554554.7554554.75
Perceived difficulty35555545554.7555544.75
Capacitation45455545554.7555455
Simplicity55555545544.544545.25
Access654554.7545554.7553554.5
Access754354.2545554.7554554.75
Perceived difficulty85555533544.2554534.75
Table 9. Individual results per group in CNC Laser data nesting, using manual and automated methods.

Group | Participant | Execution Time (min:s) | Average Time (min:s) | Number of Plates | Average Quantity
Experimental Group (Script) | Subject 1 | 0:32 | 0:52 | 52 | 52
Experimental Group (Script) | Subject 2 | 0:56 | | 52 |
Experimental Group (Script) | Subject 3 | 0:42 | | 52 |
Experimental Group (Script) | Subject 4 | 1:16 | | 52 |
Control Group (Rhino) | Subject 5 | 69:48 | 78:44 | 54 | 54.5
Control Group (Rhino) | Subject 6 | 56:48 | | 55 |
Control Group (Rhino) | Subject 7 | 79:16 | | 55 |
Control Group (Rhino) | Subject 8 | 109:05 | | 54 |
Control Group (SketchUp) | Subject 9 | 60:47 | 63:32 | 55 | 56.5
Control Group (SketchUp) | Subject 10 | 69:27 | | 56 |
Control Group (SketchUp) | Subject 11 | 31:06 | | 57 |
Control Group (SketchUp) | Subject 12 | 92:48 | | 58 |
Table 10. Individual waste results per group in CNC Laser data nesting, using manual and automated methods.

Group | Participant | Total Material (m2) | Average Total Material (m2) | Material Wasted (m2) | Waste (%) | Average Waste (%)
Experimental Group (Script) | Subject 1 | 154.79 | 154.79 | 32.6 | 21.06 | 21.06
Experimental Group (Script) | Subject 2 | 154.79 | | 32.6 | 21.06 |
Experimental Group (Script) | Subject 3 | 154.79 | | 32.6 | 21.06 |
Experimental Group (Script) | Subject 4 | 154.79 | | 32.6 | 21.06 |
Control Group (Rhino) | Subject 5 | 160.74 | 162.23 | 42.29 | 26.31 | 24.10
Control Group (Rhino) | Subject 6 | 163.72 | | 35.84 | 21.89 |
Control Group (Rhino) | Subject 7 | 163.72 | | 35.84 | 21.89 |
Control Group (Rhino) | Subject 8 | 160.74 | | 42.29 | 26.31 |
Control Group (SketchUp) | Subject 9 | 163.72 | 168.19 | 35.84 | 21.89 | 25.20
Control Group (SketchUp) | Subject 10 | 166.70 | | 42.29 | 25.37 |
Control Group (SketchUp) | Subject 11 | 169.67 | | 45.33 | 26.72 |
Control Group (SketchUp) | Subject 12 | 172.65 | | 46.31 | 26.82 |
Table 11. Survey results from experiment 4: experimental group (script); control group (Rhino) and control group (SketchUp).
Experimental Group (Script)Control Group (Rhino)Control Group (SketchUp)
IndicatorQuestionsSubjectsAverageSubjectsAverageSubjectsAverage
S1S2S3S4S5S6S7S8S9S10S11S12
Capacitation1555555555554554.75
Simplicity255454.7545544.554554.75
Perceived difficulty3555554453455544.75
Capacitation45555545554.7555544.75
Simplicity5555555555555544.75
Access6555555555535555
Access755454.755555535544.75
Perceived difficulty8555555555555544.75
Table 12. Case processing summary.

Cases | Valid N | Valid % | Missing N | Missing % | Total N | Total %
P1 × P8 | 12 | 100.0% | 0 | 0.0% | 12 | 100.0%

Table 13. Crosstab P1 × P8 (count).

P1 \ P8 | Agree | Strongly agree | Total
Neither agree nor disagree | 0 | 1 | 1
Agree | 1 | 2 | 3
Strongly agree | 3 | 5 | 8
Total | 4 | 8 | 12

Table 14. Chi-square tests.

Statistic | Value | df | Asymptotic significance (two-sided)
Pearson chi-square | 0.563 a | 2 | 0.755
Likelihood ratio | 0.872 | 2 | 0.647
Linear-by-linear association | 0.373 | 1 | 0.541
Number of valid cases | 12 | |
a. Five cells (83.3%) have an expected count of less than 5. The minimum expected count is 0.33.
Table 15. Case processing summary.

Cases | Valid N | Valid % | Missing N | Missing % | Total N | Total %
P2 × P7 | 12 | 100.0% | 0 | 0.0% | 12 | 100.0%

Table 16. Crosstab P2 × P7 (count).

P2 \ P7 | Neither agree nor disagree | Agree | Strongly agree | Total
Neither agree nor disagree | 0 | 1 | 0 | 1
Agree | 0 | 0 | 1 | 1
Strongly agree | 1 | 1 | 8 | 10
Total | 1 | 2 | 9 | 12

Table 17. Chi-square tests.

Statistic | Value | df | Asymptotic significance (two-sided)
Pearson chi-square | 5.667 a | 4 | 0.225
Likelihood ratio | 4.534 | 4 | 0.338
Linear-by-linear association | 0.860 | 1 | 0.354
Number of valid cases | 12 | |
a. Eight cells (88.9%) have an expected count of less than 5. The minimum expected count is 0.08.
Table 18. Case processing summary.

Cases | Valid N | Valid % | Missing N | Missing % | Total N | Total %
P3 × P6 | 12 | 100.0% | 0 | 0.0% | 12 | 100.0%

Table 19. Crosstab P3 × P6 (count).

P3 \ P6 | Neither agree nor disagree | Agree | Strongly agree | Total
Strongly disagree | 0 | 1 | 0 | 1
Agree | 0 | 1 | 0 | 1
Strongly agree | 1 | 1 | 8 | 10
Total | 1 | 3 | 8 | 12

Table 20. Chi-square tests.

Statistic | Value | df | Asymptotic significance (two-sided)
Pearson chi-square | 7.200 a | 4 | 0.126
Likelihood ratio | 6.994 | 4 | 0.136
Linear-by-linear association | 1.276 | 1 | 0.259
Number of valid cases | 12 | |
a. Eight cells (88.9%) have an expected count of less than 5. The minimum expected count is 0.08.
Table 21. Case processing summary.

Cases | Valid N | Valid % | Missing N | Missing % | Total N | Total %
P4 × P5 | 12 | 100.0% | 0 | 0.0% | 12 | 100.0%

Table 22. Crosstab P4 × P5 (count).

P4 \ P5 | Neither agree nor disagree | Agree | Strongly agree | Total
Agree | 1 | 0 | 2 | 3
Strongly agree | 1 | 1 | 7 | 9
Total | 2 | 1 | 9 | 12

Table 23. Chi-square tests.

Statistic | Value | df | Asymptotic significance (two-sided)
Pearson chi-square | 1.037 a | 2 | 0.595
Likelihood ratio | 1.189 | 2 | 0.552
Linear-by-linear association | 0.398 | 1 | 0.528
Number of valid cases | 12 | |
a. Five cells (83.3%) have an expected count of less than 5. The minimum expected count is 0.25.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
