1. Introduction
Ensuring the safety and quality of engineering outputs remains a critical challenge for governments, industry, and society. In many countries, improving the quality of engineering education and professional development is considered one of the most pressing issues. This is largely due to the high level of responsibility of engineers, whose decisions and actions directly affect public safety, environmental sustainability, and the reliability of critical infrastructure [1,2,3]. History has shown that inadequate qualifications of engineers can lead to serious consequences, including industrial accidents, structural failures, and man-made disasters. In this context, it is also important to define safety and quality standards governing engineering practice, as well as to provide basic training in the functional safety of technologies to ensure the safe use of machines, equipment, and products by third parties. International standards such as ISO 9001 (quality management) [4], ISO 45001 (occupational health and safety) [5], and IEC 61508 (functional safety) [6] are of particular relevance in this regard. Moreover, the ergonomic education of engineers can significantly contribute to these aspects, since safety is considered an integral part of ergonomics. In Europe, for example, certification systems recognize the professional qualification of ergonomists within the framework of the CREE (Centre for Registration of European Ergonomists) system, which emphasizes the importance of human-centered design and a safe working environment. These competencies related to safety and ergonomics are considered in the certification system as part of several evaluation criteria, including those related to developing engineering solutions in accordance with specified requirements, evaluating the social implications of engineering practice, and adhering to professional ethics and standards. This raises the question of the extent to which engineers and technicians should receive basic and continuing education in ergonomics and safety-related competencies, especially when they perform functions as safety specialists.
However, in current practice, technicians and engineers typically receive only basic instruction in occupational safety, while systematic ergonomics training is rarely included in their education. Specialized methods, such as Methods-Time Measurement (MTM), are usually applied only in specific industrial settings and are not part of standard engineering curricula. As manufacturing systems become more complex, there is a growing need to strengthen both initial education and continuing professional development in ergonomics and occupational safety. This is particularly important for engineers serving as safety officers, who are responsible for risk assessment and the design of safe workplaces. Introducing ergonomics-focused modules and basic MTM training could help them prevent injuries more effectively and improve the overall efficiency of production systems.
In this context, professional certification systems play a critical role in verifying engineers’ competence and readiness to practice responsibly. However, traditional certification models typically rely on fixed validity periods (e.g., 3 or 5 years), regardless of individual competence, experience, or the intensity of technological progress in specific fields. This creates a risk that engineers may continue working under outdated certifications that no longer reflect their actual readiness to meet modern safety and technical requirements [7].
Therefore, a two-stage system for recognizing engineers’ qualifications operates in many countries [8,9]. The first stage is the accreditation of engineering educational institutions, which ensures their compliance with international quality standards (ABET in the USA, ECUK in the UK, CEAB in Canada, JABEE in Japan, etc.). The second stage is the professional certification of engineers, which aims at recognizing their competencies within a specific field of engineering practice (NCEES in the USA, ECUK in the UK, Engineers Canada in Canada, IPEJ in Japan, etc.).
Professional certification of engineers is one of the most important mechanisms for verifying their competencies. The main outcomes of professional certification include the development of engineering education and the engineering profession, improved preparation and quality of graduates from technical education institutions, and motivation of engineers to enhance their professional competencies [10,11,12,13,14,15,16]. With regard to safety-related competencies, certification systems should assess not only the reliable functioning of technologies but also the human-oriented aspects of engineering practice. These include competencies in risk assessment, compliance with international safety and quality standards (e.g., ISO 45001, IEC 61508), and the ability to design work processes and environments that ensure the safe operation and ease of use of machines and products by third parties. In addition, ergonomic assessment is a critical aspect of safety competencies, as it links technical reliability with human well-being. However, in some countries, such training is not a systematic component of engineering education, which highlights the need to strengthen certification mechanisms to ensure the consistent development and assessment of safety competencies.
The literature analysis showed that there are many models of certification in other countries [17,18,19,20,21,22,23,24]. For example, in the USA, certification is conducted on a voluntary basis. The certification procedure is controlled by the National Council of Examiners for Engineering and Surveying (NCEES). The NCEES develops and administers the examinations for candidates at the Fundamentals of Engineering (FE), Professional Engineer (PE), and Structural Engineer (SE) levels. The certificate validity period is 3 years. The certificate renewal procedure is based on the principles of continuing professional development [25]. In the United Kingdom and other European countries, the national engineering regulatory body is the Engineering Council. This body is responsible for developing and maintaining standards of professional competencies and for implementing principles of ethical behavior in the field of engineering activities. Engineers can be awarded the following titles: Chartered Engineer (CEng), Incorporated Engineer (IEng), Engineering Technician (EngTech), and Information and Communications Technology Technician (ICTTech). The certificate validity period is 3 years [26].
In Russia, the certification of engineers is carried out on the basis of the APEC Engineer Register. The documents of the applicant for certification are submitted to and reviewed by the Secretariat of the Russian Supervision Committee. Then the Certification Center conducts an examination to verify the applicant’s competencies. The decision on the recognition of the engineer’s professional competencies is made by the Russian Supervision Committee.
In Kazakhstan, the certification process is carried out on a voluntary and mandatory basis. Currently, Kazakhstan is implementing the National Qualifications System, which is a comprehensive set of legal and institutional regulations governing the demand for and supply of qualifications, ensuring the linkage between economic sectors, the labor market, and the system of vocational education and training. The main elements of the national qualifications system are the National Qualifications Framework, sectoral qualifications frameworks, professional standards, educational programs, and certification of specialists. The general requirements for the certification procedure are specified in the Law of the Republic of Kazakhstan “On Professional Qualifications” (No. 314-VIII, dated 4 July 2023). The period of validity of the certificate is regulated by the Register of Professions of the Republic of Kazakhstan and can be 1, 2, 3 or 5 years.
However, in the presented models of certification, the connection between the validity period of the certificate and the characteristics of the candidate (such as the level of competencies and work experience), as well as the characteristics of the engineering field (such as the intensity of technological development), is not addressed. The certificate validity period is typically represented as a fixed value (3–5 years). This fixed-duration approach may not adequately reflect the dynamic nature of engineering practice and evolving competency requirements. To address this gap, we propose a new model in which the duration of certification validity is determined by a set of objective criteria, including the engineer’s demonstrated competencies, accumulated professional experience, and the intensity of technological advancement within the relevant engineering discipline.
As shown in
Figure 1, the model is represented as a black-box model. The inputs to the model include the engineer’s competencies, work experience, and the intensity of technological development within the engineering discipline, while the output is the certificate validity period.
This study contributes to the literature by introducing a novel model that combines expert-based ranking, fuzzy logic, and bibliometric indicators (Q1/Q2 Scopus data) to dynamically predict the validity period of engineering certificates. Unlike previous fixed-duration approaches, the proposed model enables personalized and data-driven certification outcomes. Thus, the aim of this paper is to develop an approach for analyzing the influence of candidate characteristics on the validity period of engineering certificates.
In recent years, fuzzy logic has been increasingly applied in engineering to support multi-criteria decision-making and enhance personalization in complex systems. For example, Lee and Kang [27] developed a three-phased fuzzy logic model to evaluate smart TV operating systems, illustrating the potential of fuzzy reasoning to process diverse inputs and produce nuanced outcomes. Similarly, Su et al. [28] proposed a fuzzy analytic hierarchy process combined with set pair analysis for evaluating micro-edge honing quality, demonstrating how fuzzy models can improve the interpretability and precision of engineering assessments. These approaches support the premise of this study that fuzzy-based models offer powerful tools for increasing the fairness and accuracy of professional engineering certification.
The proposed methodology employs fuzzy set theory to address uncertainties in assessing an engineer’s competence and related factors. The effectiveness of this approach depends on the correct definition of membership functions and aggregation of individual criteria. Each criterion is evaluated by subject matter experts, providing input data for the model. This combination of expert assessments and fuzzy logic enables a more flexible and accurate determination of overall competence, which is subsequently used to establish the recommended certificate validity period.
2. Materials and Methods
2.1. Description of the Approach for Determining the Certificate Validity Period
To determine the certificate validity period, a conventional scale ranging from 0 to 1 is used. The scale values increase in increments of 0.1. Each value on the scale corresponds to a specific certificate validity period (
Table 1).
Three input parameters—the level of the engineer’s competencies, the engineer’s work experience, and the intensity of technological development within the engineering field—are evaluated as scores S1, S2, and S3, respectively, each ranging from 0 to 1. To determine the certificate validity period, the average score (Sa) is calculated as follows:

Sa = (S1 + S2 + S3)/3(1)
The scale for the certificate validity period ranges from 0 to 5 years. A value of 0 years corresponds to failure in the certification exam. The maximum value of 5 years is based on the commonly accepted maximum period in many countries.
Given that the relationship between the input parameters—the engineer’s competence level and work experience—and the certificate validity period is direct (i.e., an increase in these parameters leads to an increase in the certificate validity period), their change from low to high values corresponds to a change from 0 to 1 on the normalized scale. Conversely, the relationship between the intensity of technological development within the engineering field and certificate validity period is inverse. Therefore, as the intensity parameter increases from low to high, its corresponding normalized scale value decreases from 1 to 0.
Thus, after calculating the scores S1, S2, and S3, and consequently, the average score Sa, the certificate validity period can be determined using
Table 1.
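The averaging step and the score-to-period lookup can be sketched in a few lines. The exact score-to-years correspondence is defined by Table 1; the linear mapping used below (0 → 0 years for a failed exam, 1 → 5 years maximum) is an assumption for illustration only, and the function names are hypothetical.

```python
# Sketch of the averaging step and the score-to-period lookup of Section 2.1.
# The linear mapping (0 -> 0 years, 1 -> 5 years) is an assumed stand-in for
# Table 1; function names are hypothetical.

def average_score(s1: float, s2: float, s3: float) -> float:
    """Average the three normalized input scores and snap to the 0.1 grid."""
    return round((s1 + s2 + s3) / 3, 1)

def validity_period_years(sa: float) -> float:
    """Map the averaged score to a validity period (assumed linear mapping)."""
    if not 0.0 <= sa <= 1.0:
        raise ValueError("score must lie on the conventional 0-1 scale")
    return round(sa * 5.0, 1)  # 0 years = failed exam, 5 years = maximum

sa = average_score(0.67, 0.70, 0.38)   # Engineer X's scores from Section 2
print(sa, validity_period_years(sa))   # 0.6 3.0
```

With Engineer X’s scores this reproduces the 3-year period reported later in the paper.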
To evaluate the applicability of the proposed model, the validity period of Engineer X’s certificate was assessed. The engineer’s characteristics presented in
Table 2 were used as input data for the model.
The predicted validity period was compared with the officially established regulatory validity period in the Republic of Kazakhstan, which made it possible to evaluate the accuracy and practical applicability of the developed model.
Potential risks associated with a lack of competence are then assessed based on the severity of potential consequences across several dimensions, including safety, business continuity, financial, environmental, and regulatory impacts. Expert judgment is used to evaluate each dimension, and the scores are normalized to a range between 0 and 1. The probability of insufficient competence is then combined with the average normalized severity to calculate an overall risk score. Finally, the baseline certification period, derived from the competence score, is compared with the specified risk intervals, and the final recertification period is adjusted, if necessary, to reflect both the engineer’s competence and the assessed risk level.
2.2. Determination of the Score for Evaluating an Engineer’s Competence Level (S1)
The engineer’s level of competence was evaluated using established assessment criteria. The determination of these criteria for the certification procedure was based on 12 evaluation criteria (C1, C2, C3, …, C12) defined in the Washington Accord [29,30]. The names of these criteria are presented in Table 3.
As shown in
Table 3, the twelve evaluation criteria encompass the full spectrum of knowledge, skills, and competencies required of a modern engineer, including a solid grounding in the fundamental sciences and engineering principles; the ability to analyze and design complex systems; conduct evidence-based research; apply modern technologies; assess the social and environmental impacts of engineering practice; adhere to professional ethics; work effectively in teams and demonstrate leadership; communicate efficiently; and maintain a commitment to continuous professional development.
The engineer’s level of competence is evaluated by six experts (E1–E6), all of whom are specialists in the field of mechanical engineering. The assessment of criteria is conducted through both theoretical and practical examinations.
Prior to the evaluation, the criteria are ranked according to their relative importance [31,32]. For this purpose, the experts are asked to complete two forms: the first form is used to assess the level of importance of each criterion, while the second form is used for peer assessment of the experts themselves. A ten-point rating scale is applied in both forms.
The results of the assessment of the significance of criteria on a ten-point scale are presented in
Table 4.
The results of the mutual assessment of experts are presented in
Table 5.
Then, absolute experts’ ratings are converted into relative values. For this purpose, each expert’s rating is divided by the sum of all ratings. The resulting relative values of the experts are presented in
Table 6.
A rank table is then compiled (
Table 7), in which the highest value in each row is assigned rank 1, the second highest value is assigned rank 2, and so on.
The Spearman rank correlation coefficient is used to evaluate the degree of concordance between expert assessments. The coefficients are calculated for each pair of experts. As an example, the formula for calculating the Spearman rank correlation coefficient for the first and second experts is given below [32]:

ρ1,2 = 1 − 6Σd_i²/(n(n² − 1))(2)

In Equation (2), d_i represents the deviation between the ranking assessments of the first and second experts for the i-th criterion used in the certification of engineers, and n denotes the number of criteria. After the Spearman rank correlation coefficients have been calculated using Equation (2) for all 15 pairwise combinations of the six experts, a correlation matrix of rank relationships is constructed (Table 8).
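The pairwise concordance check of Equation (2) is straightforward to compute for untied rankings. The rank vectors below are illustrative placeholders, not the actual rankings of Table 7:

```python
# Spearman rank correlation for one pair of experts (Equation (2)),
# valid for untied rankings. The rank vectors are illustrative.

def spearman_rho(ranks_a: list[int], ranks_b: list[int]) -> float:
    """rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    if len(ranks_a) != len(ranks_b):
        raise ValueError("rankings must cover the same set of criteria")
    n = len(ranks_a)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

r1 = list(range(1, 13))       # ranks of the 12 criteria assigned by one expert
r2 = list(reversed(r1))       # a hypothetical fully reversed ranking
print(spearman_rho(r1, r1))   # 1.0  (perfect concordance)
print(spearman_rho(r1, r2))   # -1.0 (perfect discordance)
```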
Using the results of the second form, the expert competence coefficients are calculated according to the following formula [32]:

K_j = Q_j/(Q_1 + Q_2 + … + Q_m)(3)

In Equation (3), Q_j is the total peer-assessment score assigned to the j-th expert, and m is the number of experts. Based on Equation (3), the values of the expert competence coefficients K_1–K_6 were calculated.
Then, the weight coefficient w_i of each criterion, taking expert competence into account, is determined by multiplying the expert competence coefficients by the experts’ assessments and summing the products [32], i.e.,:

w_i = K_1·a_i1 + K_2·a_i2 + … + K_m·a_im(4)

In Equation (4), a_ij is the importance rating assigned to the i-th criterion by the j-th expert. By comparing the values of w_i calculated using Equation (4), a rank is assigned to each criterion (Table 9).
As shown in
Table 9, the criterion with the highest importance is No. 12 (“Commitment to continual professional development and lifelong learning”), while the criterion with the lowest importance is No. 10 (“Proficiency in communication and knowledge of foreign languages”).
After the criteria have been ranked, each of the six experts assigns a score between 0 and 1 to each criterion, as presented in
Table 10.
Each score is multiplied by the weighting coefficient
of the corresponding criterion. A weighted sum of the scores is calculated for each expert, resulting in a single final score per expert. To obtain the overall final score, these six individual scores are averaged using the arithmetic mean. The results of calculations are presented in
Table 11.
Thus, the score evaluating the level of engineer’s competence (S1) is 0.67.
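The weighted-sum-and-average procedure that yields S1 can be sketched as follows. The criterion weights and expert scores are placeholders (the real values come from Tables 9–11), and the function names are hypothetical:

```python
# Sketch of the S1 computation: weight each expert's 12 criterion scores,
# sum them per expert, then average the six totals. Weights and scores are
# placeholders, not the values of Tables 9-11.

def expert_total(scores: list[float], weights: list[float]) -> float:
    """Weighted sum of one expert's criterion scores."""
    return sum(s * w for s, w in zip(scores, weights))

def competence_score_s1(all_scores: list[list[float]],
                        weights: list[float]) -> float:
    """Arithmetic mean of the weighted totals over all experts."""
    totals = [expert_total(scores, weights) for scores in all_scores]
    return round(sum(totals) / len(totals), 2)

weights = [1 / 12] * 12                    # placeholder: equal criterion weights
experts = [[0.67] * 12 for _ in range(6)]  # placeholder: six experts, flat scores
print(competence_score_s1(experts, weights))  # 0.67 for these placeholder inputs
```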
2.3. Determination of the Score Considering Work Experience (S2)
Work experience is another input parameter affecting the certificate validity period. In practice, the terms “High work experience”, “Medium work experience”, and “Low work experience” are commonly used. However, the number of years corresponding to these categories may vary depending on the type of engineering specialization. That is, what is considered “high experience” in one engineering field may be classified as “medium” or even “low” in another, reflecting the specific requirements, complexity, and professional standards of each discipline. To evaluate the value of the work experience parameter, fuzzy set theory is applied [33,34,35]. Fuzzy set theory allows a gradual assessment of the membership of elements in a set using membership functions with values in the real unit interval [0,1].
Let us define two sets: the set of linguistic terms L = {l1, l2, l3}, where l1 represents “Low work experience”, l2 represents “Medium work experience”, and l3 represents “High work experience”; and the set of scores X = {x}, 0 < x < 1. A fuzzy set A_l describing a linguistic term l over the set of scores is represented as follows:

A_l = {(x, μ_l(x)) | x ∈ X}(5)

It is necessary to determine the degrees of membership of the elements of the set X to the elements of the set L, i.e., to find μ_l(x) for all x ∈ X and l ∈ L. The membership function is defined based on a statistical analysis of the expert group’s opinions. Experts complete a questionnaire in which they express their assessment of the degree to which each element x ∈ X belongs to the fuzzy set A_l.
In the assessment of engineer X’s work experience, experts E1, E2, E3, E4, E5, and E6 were asked to classify the engineer’s experience in the field of Mechanical Engineering into the categories “High work experience”, “Medium work experience”, and “Low work experience”. The following experience ranges were proposed for evaluation: “more than 10 years”, “7–10 years”, “3–6 years”, “1–2 years”, and “less than 1 year”. Each expert assessed which of these ranges corresponded to each experience category, taking into account the specific requirements and professional standards of the Mechanical Engineering discipline (Table 12).
Based on the results of the experts’ responses, the degree of membership to the fuzzy set A_l is calculated using the following formula:

μ_l(x) = (1/K)·(b_1(x) + b_2(x) + … + b_K(x))(6)

In Equation (6), K is the number of experts, and b_k(x) represents the opinion of the k-th expert regarding the membership of element x in the fuzzy set A_l, with k = 1, …, K. Expert scores are assumed to be binary, b_k(x) ∈ {0, 1}, where “1” indicates the presence of the fuzzy set properties in element x, and “0” indicates their absence.
In
Table 13, the numbers above the line represent the votes cast by the experts for the membership of the corresponding element of the universal set in the fuzzy set.
The numbers below the line indicate the degrees of membership calculated using Equation (6). Thus, using fuzzy set theory, the evaluation of the work experience set is performed by a membership function with values in the real unit interval [0,1].
Given that the candidate has 8 years of experience, Table 13 considers the range of 7 to 10 years. Within this range, the majority of experts classified the engineer’s experience as “Medium work experience”. The corresponding S2 value, based on the maximum number of votes (four of six), is 0.7.
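The vote-counting rule of Equation (6) can be sketched directly. The vote split below is a hypothetical illustration for a candidate in the 7–10-year range, not the actual content of Table 13:

```python
# Vote-counting rule of Equation (6): the membership degree is the share of
# the K experts who assign the element to the fuzzy set. The vote split is a
# hypothetical illustration, not the actual content of Table 13.

def membership_degree(votes: list[int]) -> float:
    """Fraction of binary expert votes in favour of membership."""
    if any(v not in (0, 1) for v in votes):
        raise ValueError("votes must be binary (0 or 1)")
    return round(sum(votes) / len(votes), 2)

votes_by_term = {"low": [0, 0, 0, 0, 0, 0],      # six experts, one vote each
                 "medium": [1, 1, 1, 1, 0, 0],   # four of six pick "medium"
                 "high": [0, 0, 0, 0, 1, 1]}
degrees = {term: membership_degree(v) for term, v in votes_by_term.items()}
s2 = round(max(degrees.values()), 1)  # winning degree snapped to the 0.1 grid
print(degrees, s2)  # {'low': 0.0, 'medium': 0.67, 'high': 0.33} 0.7
```

Snapping the winning degree to the 0.1-step conventional scale yields the S2 = 0.7 used in the worked example.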
2.4. Determination of the Score for the Intensity of Technological Development Within an Engineering Discipline (S3)
The determination of the score for the intensity of technological development within a given engineering type is conducted by analyzing the number of papers published in Q1 and Q2 journals currently indexed in Scopus [36].
Table 14 presents the number of Q1 and Q2 papers for the period from 2019 to 2023.
Based on the data presented in
Table 14, the intensity of technological development within the engineering type can be evaluated using the following formula:
The results of the calculated values of the intensity of technological development using Equation (7) are presented in
Table 15.
As shown in
Table 15, the highest value of technological development intensity corresponds to the field of Engineering (miscellaneous), while the lowest value corresponds to the field of Media Technology.
Using proportional scaling and considering the inverse relationship between technological development intensity and the certificate validity period, the conversion to the conventional scale can be performed as follows:

S3 = 1 − I/I_max(8)

In Equation (8), I represents the intensity of technological development for a given engineering type, and I_max is the maximum technological development intensity among all engineering types (in this case, corresponding to Engineering (miscellaneous)).
For Engineer X, working in the field of Mechanical Engineering, the value of S3 calculated using Equation (8) is 0.38.
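Assuming Equation (8) takes the inverse proportional form S3 = 1 − I/I_max, the conversion can be sketched as follows; the intensity values are invented for illustration and are not the figures of Table 15:

```python
# Conversion of technological-development intensity to the 0-1 scale,
# assuming Equation (8) has the inverse proportional form S3 = 1 - I / I_max.
# Intensity values are invented for illustration, not the figures of Table 15.

def s3_score(intensity: float, max_intensity: float) -> float:
    """Inverse scaling: the faster the field develops, the lower S3."""
    if not 0 <= intensity <= max_intensity:
        raise ValueError("intensity must lie in [0, max_intensity]")
    return round(1 - intensity / max_intensity, 2)

intensities = {"Engineering (miscellaneous)": 1.00,  # assumed maximum field
               "Mechanical Engineering": 0.62,       # assumed relative value
               "Media Technology": 0.05}             # assumed relative value
i_max = max(intensities.values())
print({field: s3_score(i, i_max) for field, i in intensities.items()})
# Mechanical Engineering -> 0.38 under these assumed intensities
```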
The results of the proposed model are presented in
Table 16.
Thus, the score evaluating the engineer’s level of competence (S1) is 0.67, the score evaluating work experience (S2) is 0.70, and the score evaluating the intensity of technological development in the field of mechanical engineering (S3) is 0.38.
Based on the proposed formula, the score assessing the validity period of the certificate is 0.60.
According to
Table 1, the certificate validity period is set at 3 years.
2.5. Risk Assessment Based on Competence and Consequences
In addition to the average competence score (Sa), a structured assessment of the potential risk associated with the recertification intervals is introduced. This risk combines the probability of insufficient competence and the severity of the possible consequences.
The possible consequences of insufficient competence are divided into individual aspects, each of which is rated by experts on a scale from 1 to 5 (1 = minimal impact, 5 = catastrophic). To integrate these scores into the risk level calculation, they are normalized to a 0–1 range by dividing by 5. The results are presented in
Table 17.
Along with safety risk, business continuity, financial loss, environmental impact, and regulatory consequences, a baseline indicator of ergonomic education was incorporated into the assessment model. Limited ergonomic awareness may increase safety risks and reduce long-term performance. Therefore, ergonomics was integrated into the risk analysis as a separate consequence category, highlighting the importance of regular ergonomics training as a preventative measure to sustain competence and mitigate risks.
The probability p that an engineer’s competence will be insufficient at the time of recertification is estimated as:

p = 1 − C(9)

In Equation (9), C represents an aggregate measure of competence, professional experience, and technological intensity, calculated as a weighted sum:

C = w1·S1 + w2·S2 + w3·S3(10)

In Equation (10), w1, w2, and w3 are the average weights estimated by experts. Based on these expert assessments, average weights were assigned to the three parameters, and the competence score was calculated as C = 0.608. Consequently, the probability of insufficient competence at the time of recertification is obtained by substituting this value into Equation (9).
Then, the overall risk score R is determined by multiplying the probability of insufficient competence (p) by the average severity (Sev):

R = p·Sev(11)

Given that the average severity (Sev) is 0.6, the overall risk score is calculated as R = 0.25.
The threshold values for mapping the overall risk score R to the recertification intervals are determined by experts, taking into account the criticality of the engineering area, historical experience, and regulatory requirements. The calculated overall risk score R is then compared with the risk–certification period table (Table 18), allowing the recertification interval to be adjusted according to the assessed risk level.
The mapping of risk score to recommended recertification intervals is based on the principle that higher risk levels require shorter intervals to ensure timely reassessment and risk mitigation. The thresholds (≥0.6, 0.3–0.6, <0.3) correspond to high, medium, and low risk levels, providing a practical framework for adjusting certification periods according to both competence and severity of consequences. These thresholds can be further adapted to the criticality requirements of specific industries.
The exact validity period of the certificate is determined by combining the engineer’s competence assessment (Sa) with the calculated risk score (R). The base validity period of the certificate is first determined from Sa using
Table 1. This period is then compared with the risk score intervals: if the base validity period of the certificate falls within the range corresponding to the assessed risk, it is retained; if it exceeds the upper limit of the interval, it is reduced accordingly.
For the engineer in question, the aggregated competence score (Sa = 0.6) corresponds to a baseline certification period of 3 years. The calculated risk score of 0.25 falls within the low-risk range (4–5 years). Since the baseline period does not exceed the upper limit of this range, the final recommended certification period remains 3 years. This approach ensures that the validity period of the certificate reflects both the engineer’s competence and the assessed risk level, providing a transparent and practical method for determining recertification intervals.
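The full risk-adjustment step of Section 2.5 can be sketched as follows. The expression p = 1 − C for Equation (9) and the year bands for the high- and medium-risk rows are assumptions for illustration; only the thresholds (≥0.6, 0.3–0.6, <0.3) and the low-risk band (4–5 years) are taken from the text:

```python
# Sketch of the risk adjustment of Section 2.5. Assumptions: Equation (9) is
# taken as p = 1 - C, and the year bands for the high- and medium-risk rows
# of Table 18 are invented; the thresholds and the low-risk band follow the
# text.

def risk_score(c: float, severity: float) -> float:
    """Probability of insufficient competence (1 - C) times average severity."""
    return round((1 - c) * severity, 2)

def risk_interval(r: float) -> tuple[int, int]:
    """Recommended recertification interval (years) for a given risk score."""
    if r >= 0.6:
        return (1, 2)   # high risk: short interval (assumed band)
    if r >= 0.3:
        return (3, 4)   # medium risk (assumed band)
    return (4, 5)       # low risk (band stated in the text)

def final_period(baseline_years: int, r: float) -> int:
    """Keep the Sa-derived baseline unless it exceeds the interval's cap."""
    _, upper = risk_interval(r)
    return min(baseline_years, upper)

r = risk_score(0.608, 0.6)    # aggregate competence C and average severity
print(r, final_period(3, r))  # low risk; the 3-year baseline is retained
```

With C = 0.608 and an average severity of 0.6, this sketch yields R ≈ 0.24, marginally below the 0.25 reported above; the gap stems from the assumed form of Equation (9) and from rounding.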
3. Results and Discussion
The obtained results confirm that the certificate validity period in professional engineering certification can be determined when personalized factors are taken into account. In the case studied, considering the engineer’s competence level, work experience, and technological intensity of the engineering field resulted in a certificate validity period that was two years shorter than the standard period. This finding highlights the potential of the proposed model to optimize the certification process by providing more tailored and evidence-based outcomes, particularly for fields characterized by rapid technological change or varying competency requirements.
As shown in
Table 16, the assessment of the certificate validity period was calculated as the arithmetic mean of the values S1 = 0.67, S2 = 0.7, and S3 = 0.38. The most significant factors influencing the certificate validity period are the engineer’s level of knowledge and work experience, with S1 = 0.67 and S2 = 0.70, respectively.
The greatest influence on the assessment of the engineer’s competence level (S1) was exerted by criterion C12 (“Commitment to continual professional development and lifelong learning”), for which the engineer received the highest marks and which, according to the expert assessment method, has the greatest weight.
For the parameter reflecting the engineer’s work experience (S2), the value S2 = 0.7 was obtained based on four expert votes.
Regarding the third parameter—the assessment of the technology development intensity by engineering type (S3)—its value was 0.38, as this engineering field is characterized by a relatively high intensity of technology development. This, in turn, indicates that the engineer should undergo recertification more frequently.
The risk assessment is compared with the recommended recertification intervals based on the principle that higher risk levels require shorter intervals to ensure timely reassessment and risk mitigation. Thresholds for high, medium and low risk provide a practical basis for adjusting the certification periods according to both competence and the severity of the potential consequences and can be adapted to specific industry requirements. The exact validity period is determined by combining the engineer’s competence assessment (Sa) with the calculated risk assessment (R): the base period is derived from Sa using
Table 1 and compared with the risk intervals, retained if it falls within the range or reduced if it exceeds the upper limit. For the engineer in question, Sa = 0.6 corresponds to a base period of 3 years, while a risk assessment of 0.25 falls in the low risk range (4–5 years); thus, the final recommended certification period remains 3 years. This approach ensures that the validity of certification reflects both competence and assessed risk, providing a transparent and practical method for determining recertification intervals.
The findings of this study align with the broader trend of applying fuzzy logic to improve decision-making in complex engineering systems. While our model focuses on predicting certificate validity based on expert and bibliometric data, similar methods have been employed to evaluate technical systems and operational alternatives. For instance, Lee and Kang [27] developed a three-phased fuzzy MCDM model to support decision-making in consumer electronics evaluation, and Su et al. [28] demonstrated the utility of trapezoidal fuzzy AHP in assessing manufacturing quality. These approaches confirm the potential of fuzzy models to enhance objectivity and transparency in structured, multi-criteria assessments.
The proposed model could be applied in a broader international context, particularly in countries seeking to improve their national qualification systems or to align their certification practices with global standards such as the Washington Accord. By incorporating both objective (bibliometric) and subjective (expert) assessments, the model can be adapted to different engineering disciplines and regulatory environments. Its modular structure also allows for future integration with digital platforms or decision-support tools used by certification centers and professional bodies.
However, several limitations of the study should be acknowledged. The expert reviews used to determine competency scores were limited to a single institution and region, which may introduce subjective bias. In addition, the model was tested on only one representative case, which does not fully capture the diversity of engineering disciplines and experience profiles. Future research should aim to validate the model on a larger sample, include cross-institutional expert panels, and consider additional input variables such as ethical qualifications, education, or continuing professional development activities.
4. Conclusions
This study investigated the key factors influencing the validity period of engineering certificates, which serve as a critical indicator for ensuring the safety and quality of engineering outcomes. A factor model was developed, integrating three input parameters: engineers’ competencies, professional experience, and the technological intensity of a specific engineering field. These parameters were quantified using expert assessments, fuzzy set theory, and bibliometric analysis of publications in first- and second-quartile (Q1/Q2) Scopus journals. The output parameter—the certificate validity period—was determined using a lookup table that associates the average input scores with the recommended duration.
The results indicate that these factors significantly influence the recommended certification duration. For instance, when applied to data from the Kazakhstan certification registry, the model suggested an optimal validity period of three years for mechanical engineers, compared to the statutory five years, representing a 40% reduction. This finding addresses the original research question, demonstrating that fixed certification periods may not adequately reflect variations in competence, experience, and technological dynamics.
The proposed model offers certification bodies, policymakers, and engineering organizations a practical tool for tailoring certification timelines to individual engineers’ profiles. Its application is particularly relevant in engineering sectors characterized by rapid technological change, where timely reassessment of competencies is essential for maintaining safety and minimizing occupational risks. More adaptive certification schedules can support evidence-based workforce planning, risk prevention, and the modernization of national qualification systems.
However, this study has certain limitations. It relies on expert judgment within a limited national context, which may not fully capture the diversity of engineering fields worldwide. Further research should aim to validate the model using larger datasets across multiple countries and disciplines, incorporate additional factors such as continuous professional development, and explore the potential for integration with digital certification platforms to enable automated reassessment planning.