1. Introduction
The sustained growth of university enrollment worldwide has placed significant pressure on the physical and functional capacity of higher education institutions. According to UNESCO, by 2024, more than 256 million individuals were enrolled in tertiary education programs, representing an 80% increase compared to 2010 [1]. This unprecedented expansion reflects not only an increase in demand for higher education but also the urgent need for adequate physical infrastructure to support such growth. The global expansion of higher education has intensified pressure on university campuses, which now face the dual challenge of accommodating more students while ensuring spaces remain functional, safe, and aligned with evolving academic needs. In the case of Peru, this tension is aggravated by reactive maintenance practices and limited budgets, creating a structural gap between enrollment growth and the functional capacity of existing educational spaces [2,3].
In Latin America, these dynamics unfold within contexts marked by unequal investment and long-standing infrastructure deficits. While countries such as Chile and Brazil have made advances in educational infrastructure modernization, others, including Peru, continue to struggle with deficiencies in comfort, accessibility, and technological integration. At Ricardo Palma University, for example, a study revealed that 54% of surveyed students perceived no substantial improvements in renovated spaces between 2014 and 2018 [4,5].
As illustrated in Figure 1, global disparities in higher education enrollment rates highlight inequities not only in access but also in the quality of infrastructure that supports academic activities. For Peru, this global contrast underscores how deficiencies in planning, outdated construction systems, and underfunded maintenance programs exacerbate existing inequalities in higher education, directly affecting students’ learning conditions and overall university quality [1,6]. In other words, while countries with higher enrollment rates have managed to couple expansion with systematic investments in infrastructure, the Peruvian case reflects how rapid enrollment growth has not been matched by adequate maintenance and modernization policies, producing an increasing gap between demand and capacity.
One of the recurring problems in university infrastructure management in Peru is the absence of standardized instruments to evaluate academic spaces after maintenance or renovation interventions. While projects are executed with the intention of modernizing facilities, there is no systematic way to verify whether these efforts enhance durability, functionality, or curricular fitness [7,8]. In practice, this weakness translates into premature deterioration, fragmented interventions, and limited evidence to guide future investments. Without reliable methodological indicators, universities risk making decisions based on superficial aspects, neglecting the long-term impact of maintenance on academic performance [9,10].
Maintenance, although often overlooked, is one of the most critical phases in the life cycle of a building. It is not merely a technical afterthought but a stage that directly affects sustainability, cost-efficiency, and user experience. Studies indicate that maintenance can represent between 60% and 80% of a building’s total life cycle costs [10], a figure that highlights its strategic importance.
Figure 2 illustrates the four dimensions proposed for evaluating constructability in the maintenance of academic infrastructure: functionality, use, investment, and alignment with the curricular plan [11,12]. These criteria enable a more integrated perspective, combining technical evaluation with educational objectives. Despite this importance, decisions regarding infrastructure investment in Peruvian universities are frequently influenced by immediate needs and budgetary constraints rather than by systematic methodologies. Consequently, many interventions lack continuity, present deficiencies in ventilation or spatial distribution, or fail to incorporate emerging technologies. Institutional assessments have confirmed that even renovated environments continue to exhibit shortcomings in comfort, flexibility, and technological adequacy, which negatively impact both teaching and learning activities [13]. This reality demonstrates the necessity of adopting models that evaluate maintenance not as an isolated phase but as a multidimensional process with long-term implications.
Constructability provides a useful conceptual framework to address these shortcomings. Initially designed to improve construction efficiency during design and execution, it has gradually expanded to encompass other phases of the building life cycle [14]. As shown in Figure 3, the life cycle of a building consists of six key stages, namely, planning, design, construction, operation, maintenance, and decommissioning, with maintenance being the longest and most influential for ensuring safety, efficiency, and adaptability of academic spaces [11,12,15]. By applying constructability principles to maintenance, universities can anticipate potential deficiencies, optimize resources, and align technical decisions with educational goals.
The adaptation of constructability principles to the maintenance phase requires reinterpreting them in a way that reflects the specific context of higher education. Table 1 summarizes the evaluation criteria and associated variables used in this study, which include functionality (circulation flow, spatial flexibility, equipment integration), use (compatibility with activities, accessibility, intensity of use), investment efficiency (resource optimization, maintenance costs, durability), and curricular alignment (fit with academic programs, space distribution, scheduling). This framework provides a structured approach to evaluating interventions and ensuring that resources are used effectively to enhance both infrastructure and learning conditions [16,17].
Another innovative aspect of this study is the inclusion of Post-Occupancy Evaluation (POE). POE is essential in educational contexts because it integrates user perceptions, comfort, accessibility, and adequacy of learning conditions into the evaluation process, complementing technical assessments that might otherwise overlook these dimensions [18,19]. In environments where adaptability and flexibility are critical, POE ensures that interventions are aligned not only with architectural standards but also with the day-to-day experiences of students and faculty. In Peru, however, the lack of articulation between physical investments and curricular requirements remains a persistent challenge. Renovations are often executed without considering academic intensity, compatibility with specific activities, or the flexibility needed to accommodate curriculum changes, producing underutilized environments and inefficient use of resources [20].
This article proposes and validates an innovative methodology that combines constructability criteria with post-occupancy evaluation. Using a quasi-experimental design, two comparable academic spaces within the Faculty of Architecture and Urbanism at Ricardo Palma University were analyzed. One of them, a computer laboratory, was renovated under the proposed criteria, while the other, a BIM laboratory, remained unaltered, serving as a control group [21]. This design enabled a systematic comparison of pre- and post-intervention conditions, providing robust evidence of the impact of constructability-based maintenance on both technical performance and user satisfaction [4,15,16].
The contributions of this study are threefold: first, it develops a methodological approach to assess constructability during the maintenance phase of university infrastructure; second, it empirically demonstrates the impact of this methodology on perceived quality and service performance using statistical tests; and third, it offers a replicable and context-sensitive framework that can guide decision-making in other higher education institutions.
Section 2 presents the methodological design, Section 3 outlines the results, Section 4 discusses the implications for practice and policy, and Section 5 concludes with recommendations for future research.
Operational Definitions and Scope. In this study, “a methodological approach” refers to a mixed-method, quasi-experimental design that integrates technical evaluation (checklists and normative contrast) with user-centered tools (Likert surveys and structured observation) to compare an intervened space with a non-intervened control. “Constructability in the maintenance phase” is understood as the systematic application of constructability principles, adapted from CCI Chile, to planning, execution, and control of maintenance activities to optimize functionality, use, investment, and curricular alignment during the longest stage of the building life cycle. “University quality,” for the purposes of this article, denotes the perceived quality and service performance of academic spaces by their users (students and technical staff), in coherence with educational-organization criteria such as fitness for purpose, safety/well-being, and alignment with curricular requirements [21,22].
2. Materials and Methods
This study adopts a mixed-methods approach to evaluate the impact of applying a tool based on constructability principles on the perceived quality of academic spaces in the context of Peruvian higher education. Based on an applied, explanatory, and quasi-experimental design (Figure 4), two comparable spaces were selected: one that underwent intervention and another that did not, both located within the Faculty of Architecture and Urbanism at Ricardo Palma University. This approach allowed for the identification of measurable differences between the two environments and provided insight into how technical criteria and user experience influence the evaluation of university infrastructure. Furthermore, the study is based on the hypothesis that integrating constructability criteria into academic maintenance processes can significantly improve users’ perception of quality in university environments.
As shown in Table 2, the methodological framework adopted in this study integrates the fundamental components needed to evaluate the influence of constructability criteria on the maintenance of educational buildings in a structured manner. It is an applied and comparative approach that combines both quantitative and qualitative tools, such as Likert-scale surveys, semi-structured interviews, and technical checklists, to gather data from multiple perspectives. The study population includes both direct users (students) and technical staff responsible for maintenance, allowing for robust methodological triangulation. This framework makes it possible not only to measure the perception of spatial quality but also to compare it against national and international technical standards such as the RNE and ISO 21001. Overall, the research design reflects an effort to align the planning, execution, and control of maintenance processes with the actual needs of the university environment.
2.1. Study Design
The study was conducted under a quasi-experimental design with non-randomized groups, allowing for a comparative analysis of the perceived quality of two equivalent academic spaces. One space, a computer laboratory, was intervened using an innovative methodological proposal, while the other, a BIM laboratory, retained its original configuration and served as the control group. This design enabled a more precise observation of the effects attributable to the architectural intervention based on the adapted constructability criteria [23].
A pretest and post-test evaluation were carried out from both a technical perspective and the users’ perception, with the aim of identifying significant differences in variables related to comfort, functional adequacy, operational efficiency, and curricular alignment. The implemented methodology was based on the adaptation of four key principles from the constructability model developed by CCI Chile: functionality, use, investment efficiency, and alignment with the curricular plan [24].
To ensure replicability, all steps of the quasi-experimental design were documented in a procedural protocol including site selection, timing of pre/post evaluations, and identical instruments across groups.
Rationale for Site Selection. The computer laboratory (intervention) and the BIM laboratory (control) were selected due to comparable size, user load, and curricular centrality in digital design courses; both exhibit intensive technological use and similar scheduling, enabling a like-for-like contrast of maintenance decisions.
2.2. Participants and Context
The study population consisted of senior architecture students who used both evaluated spaces during the 2023-2 academic semester. A non-probabilistic, purposive sampling method was applied, selecting students who had direct experience with both environments. A minimum of 30 students were surveyed, meeting the threshold required to perform statistical analyses with acceptable significance levels [25].
In addition, the technical perspective of ten professionals and technicians from the university’s Maintenance Office was included. Their participation was essential to validate the feasibility of the methodology from both operational and regulatory perspectives. The intervened spaces were characterized by intensive use and were directly related to key academic activities such as computer-aided design, three-dimensional modeling, and BIM simulation. The sampling rationale was to capture both user perception and technical feasibility, thereby triangulating perspectives. The study period was one full academic semester (2023-2), ensuring repeated exposure of participants to both spaces.
2.3. Instruments and Data Collection
Three primary instruments were developed and specifically validated to assess the influence of the applied methodology on both technical and perceptual aspects:
User Perception Survey: Administered as both pretest and post-test to students using a five-point Likert scale. It assessed variables such as perceived spatial quality, thermal comfort, lighting, accessibility, furniture distribution, and functional adequacy.
Technical Evaluation Form: Designed for technical staff, this instrument evaluated the implementation of the four adapted constructability criteria across the three phases of maintenance: planning, execution, and control. It included indicators such as compliance with design requirements, operational sustainability, costs, and durability [26].
Structured Observation Guide: Used by the research team to document physical, technical, and spatial conditions of both environments. This tool supported the triangulation of user perceptions with observable evidence.
All instruments were validated by expert judgment, including architects, faculty members, and maintenance technicians. The content validity indices exceeded 0.80. The reliability of the user perception survey was verified using Cronbach’s alpha, with results ranging from 0.82 to 0.90 across dimensions, ensuring the internal consistency of the applied questionnaire [27].
Each dimension was measured with multiple items: for example, Planning and Design included questions on adequacy of spatial distribution, compliance with regulations, and furniture integration; Execution considered indicators such as ventilation, lighting, and safety; Control examined maintenance monitoring, durability, and compliance with scheduling; while Perceived Quality included user comfort, accessibility, and overall satisfaction.
All instruments were piloted with a small group (n = 8) to refine wording and scale reliability.
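The internal-consistency check described above can be reproduced with a short script. The sketch below implements the standard Cronbach’s alpha formula over hypothetical five-point Likert responses (the study’s survey data are not published with this article, so the figures are illustrative only), using nothing beyond the Python standard library.

```python
from statistics import pvariance

def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha: internal consistency of a multi-item scale.

    `items[j][i]` is respondent i's score on item j (e.g., a 1-5 Likert answer).
    """
    k = len(items)
    if k < 2:
        raise ValueError("alpha requires at least two items")
    # Sum of the variances of the individual items.
    item_var_sum = sum(pvariance(col) for col in items)
    # Variance of each respondent's total score across all items.
    totals = [sum(scores) for scores in zip(*items)]
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical answers from six students to three items of one survey
# dimension (illustrative only; not the study's actual data).
survey = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 5, 2, 4, 3, 5],
]
print(round(cronbach_alpha(survey), 2))  # → 0.89
```

An alpha of about 0.89 for this illustrative data would fall inside the 0.82–0.90 range reported for the actual questionnaire; values above roughly 0.80 are conventionally read as acceptable internal consistency.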
2.4. Data Analysis
The data were processed using a mixed-methods approach, combining both quantitative and qualitative analyses. On the quantitative side, descriptive and inferential statistics were applied, including paired-sample Student’s t-tests and ANOVA, to compare results from the pretest and post-test stages, as well as between the control and experimental groups. This analysis enabled the identification of statistically significant differences attributable to the architectural intervention, particularly regarding thermal comfort, spatial flexibility, and alignment with the academic curriculum [28].
On the qualitative side, structured interviews with technical staff and observation records were analyzed through content analysis. This triangulation helped to validate users’ perceptions against actual conditions observed in the field.
Additionally, a comparative technical evaluation matrix was developed to assess both spaces. This matrix was based on regulatory dimensions from the Peruvian National Building Code (RNE) and ISO 21001 [21], which targets educational organizations. The comparison enabled the identification of objective physical improvements associated with the implementation of the proposed methodology.
Given the ordinal nature of Likert responses and the non-normality expected in small educational samples, we prioritized non-parametric tests (Wilcoxon signed-rank and Mann–Whitney U) with a two-tailed α = 0.05; effect directions were interpreted alongside medians and percentage shifts to enhance practical significance. Quantitative analyses were carried out using SPSS v.27; qualitative data from interviews and observation guides were examined through content analysis. Two independent researchers coded the data manually to enhance reliability and reduce bias. All datasets were anonymized prior to analysis.
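As a transparency aid, the Wilcoxon signed-rank statistic and its large-sample normal approximation used in this analysis can be sketched as follows. The paired scores shown are illustrative stand-ins, not the study’s responses; ties in the absolute differences receive averaged ranks, and zero differences are dropped, as in the standard procedure.

```python
import math

def average_ranks(values: list[float]) -> list[float]:
    """1-based ranks of `values`, averaging the ranks of tied entries."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j + 2) / 2  # mean of the 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def wilcoxon_signed_rank(pre: list[float], post: list[float]):
    """Return (W, z): the smaller signed-rank sum and its normal approximation."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]  # drop zero differences
    n = len(diffs)
    if n == 0:
        raise ValueError("all paired differences are zero")
    ranks = average_ranks([abs(d) for d in diffs])
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w = min(w_plus, n * (n + 1) / 2 - w_plus)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return w, (w - mean) / sd

# Illustrative paired Likert scores (pretest vs. post-test for one group).
w, z = wilcoxon_signed_rank([3, 3, 2, 4, 3], [4, 5, 3, 4, 5])
```

For real sample sizes (n ≈ 30, as in this study) the normal approximation of z is the quantity compared against the two-tailed α = 0.05 threshold.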
3. Results
The Results section is organized into five subsections corresponding to the methodological dimensions defined in Section 2: (i) overall influence of constructability on service quality, (ii) planning and design, (iii) execution, (iv) control, and (v) technical–normative contrast. This structure allows for a stepwise presentation from general findings to specific dimensions, ensuring logical coherence and alignment with the methodological framework.
This chapter presents the findings obtained after implementing the constructability evaluation methodology in the maintenance of university buildings. The main objective of the research was to analyze the impact of this methodology on the perceived quality of academic infrastructure by comparing two equivalent environments: one intervened using adapted constructability criteria, and the other maintained without modifications, serving as the control group. The results are structured according to the methodological dimensions evaluated (planning and design, execution, and maintenance control) and are aligned with the perceived service quality indicators: user satisfaction, environmental comfort, safety conditions, and alignment with the academic curriculum.
3.1. General Influence of Constructability Methodology on the Quality of University Infrastructure Services
The overall results show that the implementation of the methodology had a positive impact on both maintenance performance and the perceived quality of infrastructure services. Table 3 summarizes the comparison of both variables before and after the intervention.
As shown in Table 4, there was an increase in both variables. The level of maintenance performance improved from “effective” to “efficient,” while user satisfaction rose from “slightly satisfied” to “satisfied.” These changes reflect a significant improvement attributable to the structured application of constructability principles.
As shown in Figure 5, following the implementation of the methodology, 50% of the technical staff rated the intervened environment as either “efficient” or “highly efficient,” whereas only 40% gave this rating to the non-intervened environment, a difference of 10 percentage points.
As shown in Figure 6, 56% of students in the intervened laboratory reported being “satisfied” or “very satisfied,” an increase of 6 percentage points compared to the non-intervened environment.
To statistically verify the significance of these differences, the Wilcoxon signed-rank test (Table 5) and the Mann–Whitney U test (Table 6) were applied, with the results presented below.
The results of these tests confirm the existence of statistically significant differences between the compared groups (p < 0.05), validating the positive effect of the implemented methodology on the quality of service and the maintenance of academic infrastructure (Table 7).
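For readers wishing to replicate the between-group contrast, a minimal Mann–Whitney U computation (tie-averaged ranks, smaller-U convention) can be sketched as follows. The inputs are illustrative ratings, not the study data.

```python
def mann_whitney_u(group_a: list[float], group_b: list[float]) -> float:
    """Mann-Whitney U (smaller-U convention) with tie-averaged ranks."""
    combined = sorted(group_a + group_b)
    # Average 1-based rank for each distinct value (ties share a rank).
    rank_of = {}
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1] == combined[i]:
            j += 1
        rank_of[combined[i]] = (i + j + 2) / 2
        i = j + 1
    n_a, n_b = len(group_a), len(group_b)
    u_a = sum(rank_of[v] for v in group_a) - n_a * (n_a + 1) / 2
    return min(u_a, n_a * n_b - u_a)

# Illustrative independent samples (e.g., intervened vs. control ratings).
print(mann_whitney_u([4, 5, 4, 5, 3], [3, 3, 4, 2, 3]))  # → 3.5
```

The resulting U is then referred to its exact distribution (small samples) or a normal approximation (larger samples) to obtain the p-value compared against α = 0.05.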
3.2. Planning and Design of Maintenance and Its Impact on Perceived Quality of Academic Spaces
Following the general results, the analysis now turns to the planning and design stage, which represents the anticipatory phase of maintenance. Planning and designing maintenance activities represent the most decisive phases to ensure that academic spaces adequately meet pedagogical and operational needs. By applying constructability methodological criteria at this stage, the aim was to align technical processes with the user experience, adopting an integrated and anticipatory approach.
The comparison between the intervened and non-intervened laboratories revealed substantial improvements in perceived quality from both students and technical staff.
Table 8 presents the results obtained for both key variables before and after the intervention:
As shown in Table 8, both variables exhibit a one-point improvement on the scale, indicating a shift from an “acceptable” level to an “optimal” one. This improvement is reinforced by interview testimonials, where users highlighted greater coherence between the functional requirements of the course and the physical conditions of the environment.
As shown in Figure 7, 50% of the maintenance personnel rated the intervened environment as efficient or notably efficient, compared to 40% in the non-intervened laboratory, indicating a 10-percentage-point increase in favorable technical perception following the implementation of the methodology.
According to Figure 8, student satisfaction with lighting and ventilation increased from 48% before the intervention to 72% after the intervention, representing a statistically significant improvement.
To statistically verify the significance of these improvements, the Wilcoxon signed-rank test was applied, with the results presented for the planning and design stage (Table 9) and for the perception of academic space quality (Table 10).
The above results confirm that the implemented methodology had a significant effect on improving both technical planning and the perception of academic spaces (Table 11). Together, these findings demonstrate that anticipatory design based on constructability criteria enables better resource optimization and creates more functional and comfortable environments for university education.
3.3. Execution of Maintenance and Its Influence on Safety and Wellbeing in University Environments
Building on the planning and design stage, the next focus is the execution stage, the point at which planning decisions materialize into tangible interventions. An efficient execution aligned with constructability principles can generate direct impacts on physical safety, user wellbeing, and academic performance. This section evaluates the relationship between the application of such a methodology and tangible improvements in university environmental conditions (Table 12).
The results show a consistent one-point improvement in both variables following the intervention, suggesting a positive impact on operational effectiveness and the environmental conditions of the academic setting. This improvement was supported by field observations and the technical responses from maintenance personnel. Specifically, the intervention included the replacement of conventional desks with modular ergonomic furniture, the installation of structured cabling to improve connectivity, and the application of technical porcelain flooring to enhance durability. These tangible changes were directly associated with improvements in safety and comfort.
As shown in Figure 9, 50% of the staff considered that the intervened environment reached a level of “efficient” or “efficient with merit,” compared to only 40% in the non-intervened environment, reflecting a 10-percentage-point improvement in the perception of execution.
According to Figure 10, the intervened laboratory obtained 57% positive responses (“satisfied” and “very satisfied”) compared to 43% in the non-intervened environment, a 14-percentage-point improvement in this dimension.
To statistically assess the intervention effects, the Wilcoxon signed-rank test was applied, with significant results obtained for maintenance execution (Table 13) and for the perception of safety and well-being (Table 14).
The results obtained reflect a significant influence of the implemented methodology on the improvement of maintenance execution, especially regarding environmental hygiene, safe access, adequate ventilation, and risk control (Table 15). These findings confirm that an execution phase based on structured technical criteria can positively impact the perception of well-being and safety among university users.
3.4. Maintenance Control and Its Relationship with Perceived Quality of Academic Space
After execution, attention shifts to the control stage, emphasizing supervision and monitoring to ensure long-term quality. Maintenance control represents the final stage of the technical process, during which the intervention is monitored, validated, and reviewed. At this stage, the rigorous application of a methodology based on constructability not only ensures the quality of execution but also helps maintain the long-term functionality of spaces, optimizing resources and enhancing user experience.
In this study, the influence of technical control, based on four key criteria (functionality, use, investment, and curricular alignment), was evaluated in relation to users’ perception of academic space quality. To this end, the conditions before and after the intervention were compared (Table 16).
The results show a uniform improvement of one point in both variables. This progression suggests that strengthening technical control supports not only compliance with operational standards but also enhances users’ perception of order, spatial coherence, and comfort.
As shown in Figure 11, 70% of the technical staff rated the intervened environment as efficient or highly efficient in terms of maintenance control, compared to only 50% in the non-intervened environment. This 20-percentage-point difference represents a notable improvement in the consistency and quality of project monitoring.
As shown in Figure 12, the intervened environment reached 50% combined satisfaction and high satisfaction, surpassing the non-intervened environment by 6 percentage points. This reflects a more favorable perception of the academic environment as a result of effective supervision.
To compare the control and intervention groups, the Wilcoxon signed-rank test was applied, showing significant differences in maintenance execution (Table 17) and in the perception of safety and well-being (Table 18).
The statistical analyses demonstrate that the constructability-based intervention had significant impacts on the maintenance control dimension, consolidating a more rigorous, transparent, and standards-aligned management model (Table 19). This model is consistent with regulatory frameworks such as the Peruvian National Building Regulations (RNE), ISO 21001, and BIM principles for higher education infrastructure.
3.5. Technical-Normative Contrast: Physical–Spatial Analysis of Academic Environments
Finally, a technical–normative contrast was conducted to validate whether perceived changes corresponded to objective improvements against standards. As a complement to the statistical findings, a comparative technical analysis was conducted between the intervened and non-intervened environments, considering physical–spatial, operational, and regulatory criteria. This phase aimed to triangulate quantitative and qualitative data through an objective evaluation based on recognized standards, verifying whether the changes perceived by users and technical staff were reflected in tangible improvements in infrastructure.
The evaluation was guided by the methodological principles of the constructability model proposed by CCI Chile (2024), adapted to four key criteria: functionality, use, investment, and curricular alignment. In addition, national and international standards were incorporated, including the Peruvian National Building Regulations (RNE), ISO 21001, ISO 9241-6, ISO 11064-4, and the guidelines of the BIM Plan Peru.
As observed in Table 20, the intervened environment shows notable improvements across all evaluated criteria. Greater functional efficiency was recorded in terms of spatial distribution, accessibility, and ergonomic furniture design. Additionally, better environmental conditions (lighting and ventilation) were identified, along with a higher level of technological adequacy and curricular alignment. For instance, the increase in area per person (1.90 m² vs. 1.71 m²) improved circulation and reduced overcrowding, which students highlighted as enhancing comfort. Likewise, improved ventilation and optimized lighting were perceived by users as contributing to thermal stability and reduced visual fatigue. The technological upgrades (48 PCs with BIM software and interactive whiteboards) were explicitly mentioned by students as strengthening curricular alignment and academic performance.
This contrast confirms that the application of a technically and normatively driven methodology not only improves user perception but also ensures sustainable transformations aligned with the contemporary demands of higher education.
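The area-per-person figures in the technical contrast follow from a simple ratio of floor area to occupancy. The sketch below back-calculates hypothetical floor areas from the reported ratios, assuming the 48 workstations correspond to 48 occupants; the actual floor areas are not reported in this article, so both inputs are illustrative.

```python
def area_per_person(floor_area_m2: float, occupants: int) -> float:
    """Usable floor area allotted to each occupant, in square meters."""
    return floor_area_m2 / occupants

# Hypothetical floor areas back-calculated from the reported ratios,
# assuming 48 occupants (one per PC); the source does not give the areas.
intervened = area_per_person(91.2, 48)   # reported ratio: 1.90 m² per person
control = area_per_person(82.08, 48)     # reported ratio: 1.71 m² per person
print(round(intervened, 2), round(control, 2))
```

Under these assumptions the intervention corresponds to roughly 9 m² of additional usable area for the same occupancy, which is consistent with the reported gains in circulation and reduced overcrowding.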
4. Discussion
The results of this research confirm that the implementation of a methodology grounded in the principles of constructability has a significant effect on improving the quality of university infrastructure. This conclusion is supported by a robust analysis that integrates quantitative, qualitative, and technical–normative approaches, offering clear and systematic evidence of the effectiveness of the applied intervention.
From a statistical perspective, the Wilcoxon test yielded Z = −2.705 with a p-value of 0.007, below the α = 0.05 threshold, indicating significant differences in perceived quality between the control and experimental groups. This favorable difference for the intervened environment is also reflected in the technical analyses, which report objective improvements in key indicators such as accessibility, thermal comfort, lighting, and curricular alignment. The methodology not only articulated technical and operational dimensions but also incorporated functional and pedagogical aspects, in accordance with normative frameworks such as the National Building Code (RNE), ISO 21001, and the Constructability Guide by CCI Chile (2024).
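The reported p-value can be recovered from the reported Z statistic under the large-sample normal approximation: the two-tailed p for Z = −2.705 is about 0.007. A minimal check using only the Python standard library:

```python
import math

def two_tailed_p(z: float) -> float:
    """Two-tailed p-value of a standard-normal test statistic z."""
    # Survival function: sf(|z|) = 0.5 * erfc(|z| / sqrt(2)); doubled for two tails.
    return math.erfc(abs(z) / math.sqrt(2))

print(round(two_tailed_p(-2.705), 3))  # → 0.007
```

Since 0.007 < 0.05, the null hypothesis of no difference between the control and experimental groups is rejected at the conventional significance level.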
The positive outcomes were evident both in user perceptions and in technical evaluations. The analyzed figures show notable increases: for instance, during the planning and design stage, student satisfaction rose by 6 percentage points and the technical evaluation improved by 10 percentage points relative to the non-intervened environment. This impact is directly associated with improved spatial adequacy for academic activities, in line with standards such as ISO 11064-4 and the BIM Plan Peru guidelines.
Likewise, the execution of maintenance activities showed significant improvements. Following the intervention, a 14-percentage-point increase in perceived efficiency was recorded, reflecting enhancements in response times, handling of critical access areas, and resource optimization. These findings demonstrate that a more technical and anticipatory maintenance management approach can directly influence user safety and well-being.
In the control phase, there was a 6-percentage-point increase in student satisfaction and a 20-percentage-point improvement in the maintenance team’s evaluation, further consolidating the positive influence of the applied methodology. These results are complemented by the physical–spatial comparative analysis, in which the intervened environment consistently outperformed the non-intervened one across the four evaluated criteria: functionality, use, investment, and curricular alignment. These technical improvements are in line with current regulations and meet the evolving academic demands of higher education.
In summary, this study demonstrates that applying constructability principles during the maintenance phase effectively addresses structural gaps that persist in many Latin American universities. This incorporation not only enhances comfort, safety, and functionality levels within educational spaces but also strengthens the alignment between infrastructure and institutional learning objectives. In this sense, constructability transcends its traditional association with the early phases of a building’s life cycle and reaffirms itself as an essential tool for the continuous management of educational quality.
5. Conclusions
The application of a methodology to evaluate constructability in the maintenance of university buildings proved to be effective, coherent, and replicable. The analysis confirmed its positive influence on the functional, technical, and perceived quality of academic environments, allowing for tangible improvements in the spatial and operational conditions of the intervened areas.
The proposed methodology, based on the criteria of functionality, use, investment, and curricular alignment, and guided by the twelve principles of CCI Chile (2024), successfully integrated technical considerations with the real needs of the educational community. This integration enabled more efficient and sustainable maintenance processes in accordance with regulatory standards such as the National Building Code, ISO 21001, and the BIM Plan Peru guidelines.
Statistical results empirically validated both the general and specific hypotheses. The improvements observed across the planning, execution, and control stages of maintenance were significant, not only in terms of user perception but also from a technical and normative evaluation standpoint. Furthermore, this study demonstrated that constructability should not be restricted to the early phases of a building’s life cycle. Its application during the maintenance stage can optimize resources, minimize rework, and enhance the overall quality of the academic environment.
The evaluation of key indicators, such as compliance with quality requirements, fulfillment of initial design, cost and time efficiency, and social and environmental impact, provided a comprehensive perspective on university maintenance performance. These indicators, beyond offering technical evidence, also helped establish useful parameters for institutional decision-making.
Overall, the developed methodology represents a strategic management tool that is technically sound and scalable. Its implementation not only improves the physical quality of university facilities but also strengthens the relationship between infrastructure and curriculum. It fosters environments that are more functional, safe, and aligned with contemporary academic goals. This proposal can be adopted and adapted by other institutions seeking to transition toward more efficient, sustainable, and user-centered maintenance models.