Article

Design Aspects in Repairability Scoring Systems: Comparing Their Objectivity and Completeness

Industrial Design Engineering, TU Delft, Landbergstraat 15, 2628CE Delft, The Netherlands
* Author to whom correspondence should be addressed.
Sustainability 2022, 14(14), 8634; https://doi.org/10.3390/su14148634
Submission received: 16 May 2022 / Revised: 25 June 2022 / Accepted: 4 July 2022 / Published: 14 July 2022
(This article belongs to the Section Resources and Sustainable Utilization)

Abstract

The Circular Economy Action Plan adopted by the European Commission aims to keep value in products as long as possible by developing product-specific requirements for durability and repairability. In this context, various scoring systems have been developed for scoring product repairability. This study assessed the objectivity and completeness of six major repair scoring systems, to see what further development may be required to make them policy instruments for testing product repairability. Completeness of the scoring systems was assessed by comparing them to the latest literature on what design features and principles drive product repairability. Objectivity was determined by assessing whether the scoring levels in each criterion were clearly defined, with a quantifiable and operator-independent testing method. Results showed that most of the criteria in the scoring systems were acceptably objective and complete. However, improvements are recommended: the health and safety criterion lacked objectivity and has not yet been fully addressed. Further research is required to expand the eDiM database, and to identify whether the additional accuracy provided by eDiM compared to counting disassembly steps compensates for the increased difficulty in testing. Finally, the assessment of reassembly and diagnosis should be expanded. Addressing these gaps will lead to the development of a scoring system that could be better used in policymaking, and for assessment by consumer organizations, market surveillance authorities, and other interested stakeholders, to promote the repairability of products.

1. Introduction

Consumer goods are nowadays less durable and repairable than in the past, and average product lifetimes seem to be decreasing [1]. This contributes to an increase in waste electrical and electronic equipment (WEEE), which has been growing at a rate of 2–5% per year [2]. A report by the Organization for Economic Co-operation and Development (OECD) indicates that extending product lifetimes could help solve this issue [3]. In response, the Circular Economy Action Plan adopted by the European Commission sets out to keep value in products as long as possible by developing product-specific requirements for durability and repairability [4]. In this context, various scoring systems have been developed for scoring the repairability of electrical and electronic equipment (EEE) [5,6,7,8,9,10]. Such scoring systems could also contribute to ongoing and future standardization, and provide designers and market surveillance authorities (MSAs) with recommendations on improving the repairability of products. Additionally, this could empower consumers to make informed choices when buying products.
A good scoring system should be objective and provide a complete assessment of the repairability of products [11]; that is, it should reflect the science-based literature on design aspects related to repairability. These qualities are crucial for application in policymaking, and for assessment by consumer organizations, MSAs, and other interested stakeholders, to promote the repairability of products.
Bracquene et al. [12] compared three scoring systems for vacuum cleaners: AsMeR (Assessment Matrix for ease of Repair) [6], ONR 192102 (Label of Excellence for Durable, Repair Friendly, Designed Electrical and Electronic Appliances) [7], and iFixit 2018 [9]. Following this, they also compared AsMeR and RSS (Joint Research Centre Repair Scoring System) [5] for washing machines [13]. However, this research did not assess the completeness of these scoring systems. Furthermore, the following recent scoring systems have not been assessed: iFixit 2019 (smartphone repairability scoring system) [8], FRI (French Reparability Index) [14], and EN 45554 (general methods for the assessment of the ability to repair, reuse and upgrade energy-related products) [15].
This paper fills the gaps by answering the following questions: First, how do the current scoring systems reflect science-based literature on design aspects related to repairability? Second, how objective are the current scoring systems? By answering these research questions, this study aims to provide insights and opportunities for improvements in repairability scoring systems in general.
This research was conducted in two steps. First, a literature review was conducted to identify which design features and principles influence the repairability of products, and thus which design elements should be captured by a repairability scoring system. The design features and principles taken from the literature were then compared with six chosen scoring systems and standards: Assessment Matrix for ease of Repair (AsMeR) [12]; Joint Research Centre Repair Scoring System (RSS) [5]; iFixit 2019 (smartphone repairability scoring system) [8]; General methods for the assessment of the ability to repair, reuse and upgrade energy-related products (EN 45554) [15]; Label of Excellence for Durable, Repair-Friendly, Designed Electrical and Electronic Appliances (ONR 192102) [7]; and the French Reparability Index (FRI) [14]. This comparison assessed the completeness of the scoring systems. Second, this study assessed the objectivity of the scoring systems by analysing and comparing their scoring methods.

1.1. Scoring Systems for Repairability

Several repairability assessment systems are currently available. Six scoring systems were chosen for this study based on the following criteria:
  • The criteria for these scoring systems are publicly available in the English language.
  • The evaluation method used is quantitative or at least semi-quantitative in nature, to provide a more objective assessment and enable ranked comparisons of products.
  • The system is the latest iteration or version of the assessment system from its organisation/group.
Table 1 provides an overview of the six chosen scoring systems. Their criteria were expected to overlap, firstly because they all measure the repairability of electrical and electronic equipment (EEE), but also because newer scoring systems tend to have been developed after consideration and study of previous scoring systems.

2. Method

2.1. Assessing Completeness of the Scoring Systems

From December 2020 to February 2021, a review of the literature was conducted to identify design principles, features, and guidelines related to the repairability of household electronic and electrical equipment. Relevant scientific literature related to design aspects of repairability was identified via the Google Scholar search engine and SCOPUS citation database.
Search terms were “design”, “features”, “principles”, and “guidelines”, combined with “repair OR maintain”. Additionally, the search focused on the following product categories: “appliance”, “household products”, “EEE”, “white goods”, “brown goods”, “electrical and electronic equipment”, “mobile phones”, “vacuum cleaner”, “laptop”. This was an iterative process in which different combinations of these terms were used. Wildcards were used to ensure wide coverage, and a proximity criterion (co-occurring search terms within five words) was used to narrow down the relevant results (see Figure 1). The search was conducted within titles, abstracts, and keywords, in papers published from 2000 to 2021, and was focused on the following subject areas: engineering, material science, environmental science, industrial design, and design.
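To make the search strategy concrete, the sketch below composes a Scopus-style query from the terms above. It is an illustration, not the authors' exact query string: the grouping, the W/5 proximity operator, and the wildcard placement are assumptions based on the description.

```python
# Illustrative only: a Scopus-style query assembled from the reported terms.
# The exact query used in the study is not published; grouping and wildcard
# placement here are assumptions.
design_terms = ["design*", "feature*", "principle*", "guideline*"]
repair_terms = ["repair*", "maintain*"]
product_terms = [
    "appliance*", '"household product*"', "EEE", '"white goods"',
    '"brown goods"', '"electrical and electronic equipment"',
    '"mobile phone*"', '"vacuum cleaner*"', "laptop*",
]

def or_group(terms):
    """Parenthesised OR-group, e.g. (design* OR feature*)."""
    return "(" + " OR ".join(terms) + ")"

# TITLE-ABS-KEY limits the search to titles, abstracts, and keywords;
# W/5 requires the design and repair terms to co-occur within 5 words.
query = (
    f"TITLE-ABS-KEY({or_group(design_terms)} W/5 {or_group(repair_terms)}) "
    f"AND TITLE-ABS-KEY({or_group(product_terms)}) "
    f"AND PUBYEAR > 1999 AND PUBYEAR < 2022"
)
print(query)
```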
This review focused on aspects related to the physical design of the product. These included design features, principles, and guidelines related to the repairability of household electronic and electrical equipment. Articles beyond the aforementioned scope were excluded. These included elements related to automotive products and textiles, and also user and market aspects related to repairability (such as spare part prices and availability). The results were screened for their relevancy firstly by checking headings, then by reviewing the abstract and conclusion. A full review of the paper was then conducted, and relevant articles selected. Additional papers were identified via snowballing using the reference list of a paper or its citations to identify additional articles [16].
During the analysis phase, each chosen paper was read, and sections marked wherever design-related aspects related to repairability were mentioned. The design aspects were considered relevant only if the addressed repairability aspects were an outcome of an empirical study.
Two studies have previously been conducted on design guidelines and principles related to repairability: the paper by Bovea et al. [17] provides nine relevant recommendations related to repairability originating from 34 different sources. Similarly, Den Hollander [18] provides 16 design principles related to the repairability of products originating from six different pieces of literature published before 2016. To avoid multiple references, the literature already addressed by Bovea et al. [17] and Den Hollander [18] was not considered in our study.
Results of our analysis were clustered into design features and principles empirically shown in the literature to improve repairability, to enable a comparison with the criteria measured by the different scoring systems. The completeness of each scoring system was determined by checking whether the identified design elements were reflected in that scoring system.
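Conceptually, this completeness check is a set comparison: the design elements supported by the literature versus the elements covered by a given system's criteria. The minimal sketch below illustrates the idea with hypothetical element names and an invented criteria set; it is not the actual mapping used for any of the six systems.

```python
# Hypothetical illustration of the completeness check: which literature-backed
# design elements are missing from a scoring system's criteria. Element names
# and the example criteria set are placeholders, not real study data.
literature_elements = {
    "disassembly", "reassembly", "diagnosis", "health and safety",
    "tools required", "fastener type", "firmware reset",
}
system_x_criteria = {"disassembly", "tools required", "fastener type",
                     "firmware reset"}

missing = sorted(literature_elements - system_x_criteria)
print("Elements not reflected by system X:", missing)
# -> ['diagnosis', 'health and safety', 'reassembly']
```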

2.2. Assessing Objectivity of the Scoring Systems

Objectivity is important for the repeatability of scores. To assess objectivity, the criteria presented in the different scoring systems were clustered under the identified design features and principles (see Table 2). Afterwards, each criterion and its testing method were categorized into three levels: objective, semi-objective, and subjective, based on the following criteria (a minimal encoding of this rubric is sketched after the list):
  • Objective: Each achievable score level is clearly defined, and the testing action to achieve the score can be quantified and is operator-independent.
  • Semi-objective: Whilst the testing action can be quantified, no clear indication is given on how each level of the score is achieved, causing a degree of operator dependence.
  • Subjective: One or more testing actions cannot be quantified objectively; the result is operator-dependent.
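Stated as a decision rule, the rubric depends on two yes/no questions: can the testing action be quantified, and is every score level clearly defined? The sketch below encodes this; the function and type names are our own shorthand, not part of any scoring system.

```python
from enum import Enum

class Objectivity(Enum):
    OBJECTIVE = "objective"
    SEMI_OBJECTIVE = "semi-objective"
    SUBJECTIVE = "subjective"

def classify(action_quantifiable: bool, levels_defined: bool) -> Objectivity:
    """Apply the three-level rubric from Section 2.2.

    A non-quantifiable testing action is subjective regardless of how the
    levels are defined; a quantifiable action with under-specified levels
    is semi-objective; only quantifiable actions with clearly defined,
    operator-independent levels count as objective.
    """
    if not action_quantifiable:
        return Objectivity.SUBJECTIVE
    return Objectivity.OBJECTIVE if levels_defined else Objectivity.SEMI_OBJECTIVE

# e.g., counting disassembly steps against defined thresholds:
print(classify(True, True).value)    # objective
# e.g., rating "intuitiveness" with no quantifiable test:
print(classify(False, False).value)  # subjective
```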

3. Results and Discussion

This section first shows how well each analysed scoring system captures the design elements that have been empirically shown in the literature to drive repairability. It then assesses the completeness and objectivity of each scoring system, as well as highlighting differences between them.
Considering both the literature and the different scoring systems, a total of 17 different design elements were identified that are considered important for repairability in EEE. Table 2 provides the list of design elements and their descriptions based on the literature. Table 3 provides an overview of the scoring systems compared to the literature. In general, all criteria in the scoring systems seem to be reflected in the literature.
Table 3 shows that seven of the seventeen aspects related to repairability from the literature were well reflected in most (more than three) of the scoring systems. These include disassembly, fastener type, tools required, information content, standardized parts and interface, and firmware reset. In contrast, seven aspects (coloured red in Table 3) were not addressed or only partially addressed. These are described below.

3.1. Aspects Not Addressed or Only Partially Addressed by the Scoring Systems

Four aspects were not addressed directly by any of the scoring systems: “ease of handling”, “interchangeability”, “redundancy”, and “material selection/robustness”. These may be missing because, as Table 3 shows, there is much less literature on them than on other aspects of repair. Similarly, “diagnosis” and “health and safety risk” are only partially addressed. However, these aspects may still be sufficiently important to include in the scoring.
The first aspect not addressed in the scoring systems is “ease of handling”. Features such as small size, a low centre of gravity, and handles make product manipulation (flipping, tilting, etc.) easier during disassembly, and make it easier to take the product in for repair. However, the absence of these features does not seem to severely alter the repairability of a product.
The second aspect not addressed by the scoring systems is “interchangeability”. Interchangeability allows for component testing [23] and facilitates the removal and replacement of components. Additionally, interchangeability allows for part replacement with third-party spare parts. Interchangeability of components could also enable components extracted from old products to be used for repair; however, minimal data are available on how often this repair scenario occurs in the EU. Further investigation may be required to determine the extent of component extraction from old products, and to what extent third-party spare parts are used for repair within the EU. This could be achieved by surveying and observing repairers and their repair process.
The third aspect not addressed is “robustness”. This principle ensures that handling and disassembly actions during repair do not break or damage the product [34]. It also increases confidence during disassembly [25]. The majority of the scoring systems (4/6) indicate that if breakage occurs during the disassembly process, the fastener for the part being disassembled is considered “non-removable”; the “material selection”/“robustness” aspect is thus partially addressed by the “fastener removability and reusability” criterion. However, testing the robustness of a product is normally carried out through complex simulations, destructive stress tests, and accelerated life tests [43], all requiring significant resources. This most likely outweighs the benefit of having this criterion in a repairability scoring system. Further research may be needed to determine whether an easier testing method could be developed for the material selection/robustness of products. One approach is to check products for features that influence robustness (e.g., a curved screen is generally more prone to breakage than a flat screen). Such features can be extracted from a database of failed products. This is currently under investigation and will be published in an upcoming paper under the EU Horizon PROMPT project. However, initial research shows that product failure can be caused by multiple design principles, and it is difficult to reliably assess the robustness of products by considering design features alone.
Similarly, the literature is unclear on the extent to which redundancy in a product promotes repair. “Redundancy” relates to providing an excess of functionality and/or material in products or parts that allow for normal wear or removal of material as part of a recovery intervention [32]. This principle was found to help users locate and isolate faults [23,25]. However, this redundancy normally increases the material requirements and cost of the product. Therefore, this design feature may not justify the additional cost and materials needed for manufacture.
One of the two partially addressed aspects is “diagnosis”. In most of the scoring systems (4/6), the ability of a product to sense faults and alert the user via a display or error codes is regarded as diagnosis, and a criterion for it is developed accordingly. However, according to Pozo Arcos et al. [25], various other design features also play a role in ease of diagnosis for users (such as transparent housings and easily accessible testing points). This parameter in the scoring systems could be developed further by incorporating the results from Pozo Arcos et al. [25]. Additionally, ONR 192102 includes “low-level function when faulty” and “operation after removal of the cover” as criteria for diagnosis; these two features have not been addressed by any other scoring system and could be interesting additions to the assessment of diagnosis.
The other incompletely addressed aspect is health and safety risk. Safety concerns include the safety of the person performing the repair, the safety of using the product after repair, and safety related to damaging the product during or after the repair. Aspects of safety during repair have been addressed by the majority of scoring systems (EN 45554, RSS, ONR 192102, iFixit), but safety after a repair has not been addressed by any of them. Safety after repair is important if a product that has been incorrectly repaired becomes dangerous when operated (e.g., an incorrectly reattached lawnmower blade might fly out at high speed). There is only limited literature on product and user safety during and after repair of EEE. The public report of Ingemarsdotter et al. [44] indicates that most repair actions are safe to perform and that others could be made safe through relatively small design changes. However, repair safety has been identified as one of the barriers to pushing forward product repair from political and company perspectives [45]. Therefore, to overcome this barrier, it is crucial that health and safety aspects are fully and transparently addressed in a repairability scoring system.

3.2. Interdependencies between Design Elements

Several interdependencies were observed between the design elements: fastener type, tools required, fastener visibility, reassembly, modularity, interchangeability, material robustness, design simplicity, information availability, and handling. These elements have all been identified as influencing the overall ease of disassembly of the product [8,18,20,21,38]. Additionally, diagnosis related to physical design seems to be influenced by the aspects of interchangeability, modularity, disassembly, design simplicity/complexity, robustness, and information availability [23,25].
These interdependencies between different design elements might lead to double counting in scores, and may also indicate that not all the identified design elements need to be scored to provide a useful assessment of repairability. An assessment addressing the relation between the related disassembly and repairability elements can be observed in the Ease of Disassembly Metric (eDiM) [20]. eDiM already addresses the following elements: disassembly, reassembly, tool type, and fastener visibility. If a scoring system (such as AsMeR) already uses eDiM, then these aspects are implicitly covered and may not need separate scoring criteria. In essence, a scoring system might be simplified by eliminating some metrics without losing important information. Simplifying a scorecard could ease its application, since it simplifies implementation and testing by manufacturers and surveillance authorities [20].

3.3. Comparing Scoring Systems

Table 4 shows how well the scoring systems reflect the design principles and features identified in the literature. Additionally, this table shows how scores are determined, and assesses their objectivity. All the criteria from the French Reparability Index were identified as objective. However, it was the least complete of the scoring systems and lacks criteria that are currently more qualitative (such as diagnosis and safety aspects). RSS was the most complete scoring system, covering 11 criteria, of which 6 were objective. The scorecard with the least objectivity was ONR 192102, specifically because most of its criteria could be scored out of 5 or 10 but no specific instruction was provided on how each increment should be assessed.
Two criteria (diagnosis, and health and safety) were semi-objective across the majority of the scoring systems. Firstly, for diagnosis, the term “intuitive interface” in EN 45554, RSS, and AsMeR needs further clarification to provide better objectivity. In terms of safety, the iFixit score is clear and objective, indicating specific tools (e.g., wire cutter and knife) and features (open pouch battery) that relate to safety risks. However, the RSS system is more subjective; it refers to the low voltage directive (2006/95/EC) and machinery directive (2006/42/EC), saying “machinery must be designed and constructed in such a way as to allow access in safety to all areas where intervention is necessary during operation, adjustment and maintenance of the machinery”, and other safety information needed. Similarly, concerning safety, EN 45554 and RSS indicate whether a process can or cannot be carried out in specific environments (home use, workshop, production) and whether specific skills (layman, generalist, expert, manufacturer, not feasible) are required to carry out the repair process. However, details on what aspects are measured to determine the suitability of repair environments and the skills required are lacking and are susceptible to subjectivity. Ingemarsdotter et al. [44] provide a risk assessment framework that could be applied to analyse the safety risk of household products. This framework builds on Failure Mode Effect Analysis (a widely applied method for failure analysis of products) and the Rapid Exchange of Information System (a commonly agreed framework for risk assessment of consumer products). This framework could be further developed and implemented to objectively assess the risk to safety during and after repair.
The majority of these scoring systems (RSS, AsMeR, FRI, iFixit) have to be calibrated with a reference value to work effectively. This reference value is normally calibrated by scoring a range of products (cheap to expensive, with variation in designs) from a specific product category, and determining an average, a minimum, and a maximum threshold [46]. However, the number and range of products required for this calibration, and how often calibration should be carried out, are both still unclear, and there is an opportunity for further research to establish a standard protocol for identifying this reference value.
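The sketch below illustrates such a calibration under stated assumptions: a handful of raw values (e.g., eDiM minutes) from one product category yields minimum, average, and maximum thresholds, against which a new product's raw value is rescaled. The sample values and the min-max rescaling formula are hypothetical; the cited systems do not prescribe this exact procedure.

```python
def calibrate(reference_values):
    """Derive (min, average, max) thresholds from a sample of one category."""
    lo, hi = min(reference_values), max(reference_values)
    avg = sum(reference_values) / len(reference_values)
    return lo, avg, hi

def rescale(raw, lo, hi):
    """Map a raw value onto 0..1 against the category thresholds, clamped."""
    return max(0.0, min(1.0, (raw - lo) / (hi - lo)))

# Hypothetical eDiM disassembly times (minutes) for five washing machines:
lo, avg, hi = calibrate([12.0, 18.5, 24.0, 31.0, 40.0])
# For a time-based criterion, lower is better, so invert the rescaled value:
score = 1.0 - rescale(22.0, lo, hi)
print(f"normalised repairability contribution: {score:.2f}")  # 0.64
```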
For ease of disassembly, most of the scoring systems (5/6) measure either time or the number of disassembly steps, and each method has both benefits and drawbacks. Measured disassembly time is subjective, depending on who is disassembling the product [20]. A more objective measurement is to record disassembly actions based on the Maynard Operation Sequence Technique (MOST), where time represents the performance of an average skilled operator, under standard conditions, at a normal pace [47]. This allows the creation of a proxy time, as was carried out in the Ease of Disassembly Metric (eDiM) [21]. This method is recognized as more representative of the ease of disassembly of a product than the number of disassembly steps. Furthermore, when assessing ease of disassembly, there is a significant difference between eDiM times and disassembly step counts [13], and eDiM captures the diversity of product designs better than disassembly step counts. However, fully implementing eDiM would require a disassembly time database of all possible disassembly actions. Currently, the database is limited to ICT products, and the process of calculating eDiM is more labour-intensive than counting disassembly steps. Better representation of ease of disassembly might be important for a scoring system that places a high weight on disassembly, as well as for consumer organisations, manufacturers, designers, and MSAs that would like to assess the ease of product disassembly. Therefore, further research is required on eDiM to expand and simplify it, and to determine the balance between accuracy and ease of testing.
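As a simplified illustration of the proxy-time idea, the sketch below sums MOST-derived standard times over a sequence of disassembly actions, which is the principle underlying eDiM [20,21]. The action categories and second values here are placeholders; the real eDiM uses a published time database per task type and fastener.

```python
# Placeholder standard times per disassembly action (seconds); the published
# eDiM database values differ and cover more task/fastener combinations.
STANDARD_TIMES_S = {
    "tool change": 5.0,
    "identify connector": 3.0,
    "manipulate product": 4.0,
    "position tool": 2.5,
    "remove screw": 6.0,
    "remove part": 3.5,
}

def proxy_disassembly_time(actions):
    """Sum operator-independent standard times over a disassembly sequence."""
    return sum(STANDARD_TIMES_S[a] for a in actions)

# Hypothetical sequence to free one component held by two screws:
sequence = ["tool change", "identify connector", "position tool",
            "remove screw", "remove screw", "remove part"]
print(f"proxy time: {proxy_disassembly_time(sequence):.1f} s")  # 26.0 s
```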
The iFixit scoring system also has another disassembly criterion called the “path of entry”, which describes the ease of disassembly up to the point where the critical components are visible [8]. It combines disassembly time and the tools required to disassemble to that point, and therefore has a testing method similar to that of ease of disassembly, even though iFixit already has separate criteria for disassembly time and tools. The value of the path of entry is reflected in the report of iFixit market observations [37], which describes how an easy path of entry builds confidence in users self-repairing their products. Additionally, this criterion also helps in diagnosis, since viewing the critical components could be required by users during the diagnosis process [23,25]. Therefore, “path of entry” is a good addition to the disassembly criteria for a scoring system assessing self-repair.
An aspect of reassembly, “fastener removability and reusability”, was addressed by most of the scoring systems. However, only two out of six scoring systems considered reassembly time in their criteria (EN 45554 and AsMeR indicate checking the reassembly time using eDiM). The newer scoring criteria of RSS and iFixit only instruct to check whether reassembly is possible, and consider reassembly the opposite of disassembly. There is therefore a discrepancy in the importance given to this matter between the scoring methods. However, the report by Peeters et al. [21] shows that reassembly time is in some cases higher than disassembly time. This is generally due to the additional actions required to position fasteners (such as screws) and components. Furthermore, positioning design features such as spring-loaded components and long routed cables further add to the reassembly time. eDiM partially covers the additional actions for positioning fasteners in its method; however, specific reassembly actions such as assembling spring-loaded components and routing long cables are not considered. Therefore, the eDiM database could be further expanded to address more reassembly-specific actions. Additionally, if a scoring system counts disassembly steps instead of using eDiM, then additional elements influencing reassembly (e.g., criteria addressing cable routing) should be added as steps.
Two design elements for which most scoring systems agree and provide straightforward, objective test procedures are “fastener removability and reusability” and “tools required”. ADEME, EN 45554, and RSS apply similar criteria to fasteners (reusable, non-reusable, non-removable). These criteria, and also the testing method (disassemble and check fastener type), are consistent across the different scoring systems, and the testing parameters seem straightforward and objective. Similarly, the “tools required” parameters appear to be in agreement across the scoring systems. The list of tools is well defined, and most of the scoring systems (4/6) reference the EN 45554 standard. The criterion and test for tools required seem clear and objective.
No list or other reference for standardized parts and interfaces is given in any of the scoring systems. Whilst RSS and EN 45554 consider the presence or absence of a standard interface per part, AsMeR and ONR 192102 adopt a more subjective approach: RSS advises checking the manufacturer’s information, whilst ONR 192102 suggests disassembling and checking the interface/part. Objectively assessing standard parts and interfaces would require a list of standard parts and interfaces similar to that of the “tools required” criterion. Listing these standard parts, however, seems difficult given the large diversity of parts and components. Additionally, enforcing standardisation may impede innovation. Instead, the benefits of standardisation (as discussed in Table 2) could be addressed by the following criteria: (a) spare part cost and availability, (b) tools required, (c) information accessibility of product identification, (d) ease of diagnosis, (e) ease of disassembly, (f) safety, and (g) interchangeability of components. Most of these criteria are already present in the scoring systems; therefore, if the aforementioned criteria are addressed, standardisation as a separate criterion may not be required.
“Information accessibility” scores the ability of the public and of repairers to access repair information. The information content required by the different scoring systems is presented in Table 5. This table shows that “repair instruction”, “exploded view”, “diagnosis information”, “safety measures”, “procedure to reset to working condition”, and “disassembly sequences” have been addressed by most (4/6) scoring systems. These are followed by “product identification”, “tools required”, “replacement/supplier information”, “circuit diagram”, “component identification”, “maintenance instructions”, and “error codes”. Most of the scoring systems seem to agree that information on diagnosis, safety, disassembly, and reset is important and should be provided by the manufacturer. The testing procedure for obtaining this information involves checking the official website, consulting a manual, or calling customer service. This criterion and its testing procedure seem straightforward and objective and could easily be implemented. However, apart from information on diagnosis, safety, disassembly sequences, and factory reset, there is a discrepancy between scoring systems on what additional information from the manufacturer could be important. This may require further research.
In addition, the majority (4/6) of the scoring systems assess information accessibility at the product level and do not specify to what extent this applies to the most frequently occurring faults. This could result in invalid scoring (e.g., if a company provides repair information on just one fault, it may still attain a favourable score). Therefore, for information that depends on specific faults (such as repair information and diagnosis information), it is important to provide information covering the most frequently occurring faults.
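The sketch below shows what fault-level scoring could look like: weighting information availability by fault frequency, so that documenting a single rare fault no longer earns a high score. The fault names, frequencies, and resulting coverage figure are all hypothetical.

```python
# Hypothetical fault statistics for one product category; frequencies sum to 1.
fault_frequency = {
    "drive belt worn": 0.35,
    "control PCB failure": 0.25,
    "door latch broken": 0.20,
    "pump blocked": 0.20,
}
# Faults for which the manufacturer actually publishes repair/diagnosis info:
documented_faults = {"drive belt worn", "door latch broken"}

coverage = sum(freq for fault, freq in fault_frequency.items()
               if fault in documented_faults)
print(f"frequency-weighted information coverage: {coverage:.0%}")  # 55%
```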
Suitable media for communicating this information include printed manuals, websites, digital information carriers such as QR codes, DVDs, or flash drives, and the telephone [15,46]. AsMeR, ONR 192102, and iFixit have clear criteria on how information on safety, disassembly, and product and component identification should be relayed, with “attached to the product” scoring highest, followed by access via a manual or website video. For the rest of the scoring systems, the medium of information does not seem to matter as long as it can be accessed by the public. Again, there are discrepancies concerning the importance of the information medium among these systems. However, the literature shows that providing visual markings on the product (such as numbered wires or warning signs) assists in correct reassembly and decreases safety hazards [44]. Similarly, providing component identification numbers assists in buying the correct spare parts for replacement [34]. Therefore, it could be important to assess the information medium for disassembly, safety, and component identification.

4. Recommendations for Future Work

Our analysis found several opportunities for improvement in the current scoring systems, and we also identified limitations of our own analysis. Both of these suggest recommendations for future work. Our primary recommendations are to improve the current scoring systems in the following ways:
-
Assessments of health and safety were semi-objective across the majority of the scoring systems. Therefore, there is an opportunity to develop objective criteria and testing methodologies for assessing the health and safety of the user and the product during and after repair.
-
The eDiM method database could be expanded and further simplified to measure the ease of disassembly more universally. Additionally, the question of whether the additional accuracy provided by eDiM compared to counting disassembly steps compensates for the increased difficulty in testing needs to be considered.
-
Since reassembly time is sometimes higher than disassembly time, it might be important to consider ease of reassembly as a separate criterion whenever eDiM is not used.
-
In terms of repair information content, it is important to establish what information is most critical to promote repair. Additionally, information that is dependent on specific faults/components should be addressed at the fault/component level instead of the product level.
This study’s limitations may also provide opportunities for future work. Ease of testing and validity were both discussed only partially. Whilst a scoring system could be complete and objective with all the aspects required to score repairability, such a scoring system might be too burdensome to score products within a feasible budget and time. Therefore, future work could investigate balancing ease of testing versus objectivity and completeness of the testing program. Future work could also further test the validity and feasibility of different scoring systems by having multiple test personnel independently test different products with each scoring system and checking levels of agreement. This is planned for upcoming research.
This review focused on how scoring systems in the current literature reflect physical design features, principles, and guidelines related to the repairability of household electronic and electrical equipment and on how they are tested. However, research has also shown the importance of user and market aspects in repair. Future research could investigate how the current scoring systems reflect this, by testing user and market aspects related to repairability.
While this research was intended to create tools useful for policy makers, it was beyond the scope of this project to predict what specific policy types would be most effective in using the tools. Therefore, further research is recommended to find the most effective policies to best improve repairability, such as taxing, mandatory product labelling, mandatory minimum reparability scores, or other implementations.

5. Conclusions

This study assessed the objectivity and completeness of six major repair scoring systems, to see what further development may be required to make them policy instruments. The completeness of each scoring system was assessed by comparing it to the latest literature on the design features and principles that drive product repairability. Similarly, the objectivity of each scoring system was assessed by checking whether the scoring levels per criterion were clearly defined, with a quantifiable and operator-independent testing method. In general, most of the scoring systems were acceptably objective and complete. The FRI and iFixit scoring systems were found to be the most objective, and RSS (JRC) was the most complete. However, they could all be further improved by the recommendations presented in this paper.
Addressing the gaps presented in this paper would lead to the development of an ideal scoring system with an effective testing program that could be used for policymaking. Additionally, such a scoring system could be used for assessment by consumer organizations, MSAs, and other interested stakeholders, to promote the repairability of products, which will, in turn, improve their lifetimes.

Author Contributions

Conceptualization—All authors; Writing—original draft, S.D.; Writing—review & editing, S.D., J.F. and R.B.; Supervision, J.F. and R.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the European Commission under the Horizon 2020 Premature Obsolescence Multi stakeholder Product Testing Program (PROMPT) (Grant Agreement number 820331).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bakker, C.; Wang, F.; Huisman, J.; den Hollander, M. Products that go round: Exploring product life extension through design. J. Clean Prod. 2014, 69, 10–16. [Google Scholar] [CrossRef]
  2. Baldé, C.; Forti, V.; Gray, V.; Kuehr, R.; Stegmann, P. Suivi des Déchets d’équipements Électriques et Électroniques à l’échelle Mondiale 2017: Quantités, Flux et Ressources; UNITAR: Geneva, Switzerland, 2017. [Google Scholar]
  3. OECD. Material Resources, Productivity and the Environment; OECD: Paris, France, 2015; pp. 1–14. [Google Scholar]
  4. European Commission. Circular Economy Action Plan. 2020. Available online: https://ec.europa.eu/environment/topics/circular-economy/first-circular-economy-action-plan_en (accessed on 10 August 2021).
  5. Sanfelix, J.; Cordella, M.; Alfieri, F. Methods for the Assessment of the Reparability and Upgradability of Energy-Related Products: Application to TVs Final Report; European Commission Publications Office: Seville, Spain, 2019. [Google Scholar] [CrossRef]
  6. Bracquené, E.; Brusselaers, J.; Dams, Y.; Peeters, J.; de Schepper, K.; Duflou, J.; Dewulf, W. ASMER BENELUX Repairability Criteria for Energy Related Products; Study in the BeNeLux Context to Evaluate the Options to Extend the Product Life Time; BeNeLux: Bruxelles, Belgium, 2018. [Google Scholar]
  7. ONR 192102; Label of Excellence for Durable, Repair Friendly, Designed Electrical and Electronic Appliances. Beuth Publishing: Berlin, Germany, 2014.
  8. Flipsen, B.; Huisken, M.; Opsomer, T.; Depypere, M. IFIXIT Smartphone Reparability Scoring: Assessing the Self-Repair Potential of Mobile ICT Devices. PLATE Conf. 2019, 2019, 18–20. [Google Scholar]
  9. iFixit. Smartphone Repairability Scores 2021. Available online: https://www.ifixit.com/smartphone-repairability (accessed on 2 August 2021).
  10. Ademe, M.H.; Ciarabelli, L.; Alma, D.; Eric, E.W.M.; Virginie, L.; Guillaume, D.; Benjamin, M.; Astrid, L.F. Benchmark International Du Secteur De La Reparation; Agence de l’Environnement: Paris, France, 2018. [Google Scholar]
  11. Franceschini, F.; Galetto, M.; Maisano, D. Management by Measurement: Designing Key Indicators and Performance Measurement Systems: With 87 Figures and 62 Tables; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  12. Bracquene, E.; Peeters, J.R.; Burez, J.; de Schepper, K.; Duflou, J.R.; Dewulf, W. Repairability evaluation for energy related products. Procedia CIRP 2019, 80, 536–541. [Google Scholar] [CrossRef]
  13. Bracquene, E.; Peeters, J.; Alfieri, F.; Sanfelix, J.; Duflou, J.; Dewulf, W.; Cordella, M. Analysis of evaluation systems for product repairability: A case study for washing machines. J. Clean. Prod. 2021, 281, 125122. [Google Scholar] [CrossRef]
  14. Indice de Réparabilité. 2021. Available online: https://www.ecologie.gouv.fr/indice-reparabilite (accessed on 19 April 2022).
  15. EN 45554. General Methods for the Assessment of the Ability to Repair, Reuse and Upgrade Energy-Related Products; European Committee for Electrotechnical Standardization: Brussels, Belgium, 2021. [Google Scholar]
  16. Wohlin, C. Guidelines for snowballing in systematic literature studies and a replication in software engineering. In Proceedings of the EASE ‘14: 18th International Conference on Evaluation and Assessment in Software Engineering, London, UK, 13–14 May 2014. [Google Scholar] [CrossRef]
  17. Bovea, M.D.; Pérez-Belis, V. Identifying design guidelines to meet the circular economy principles: A case study on electric and electronic equipment. J. Environ. Manag. 2018, 228, 483–494. [Google Scholar] [CrossRef] [PubMed]
  18. Den Hollander, M.C. Design for Managing Obsolescence: Design Methodology for Preserving Product Integrity in a Circular Economy. Ph.D. Thesis, Delft University of Technology, Delft, The Netherlands, 2018. [Google Scholar]
  19. EN 62542:2013; Environmental Standardization for Electrical and Electronic Products and Systems—Glossary of Terms. European Committee for Electrotechnical Standardization: Bruxelles, Belgium, 2013.
  20. Vanegas, P.; Peeters, J.R.; Cattrysse, D.; Tecchio, P.; Ardente, F.; Mathieux, F.; Dewulf, W.; Duflou, J.R. Ease of disassembly of products to support circular economy strategies. Resour. Conserv. Recycl. 2018, 135, 323–334. [Google Scholar] [CrossRef] [PubMed]
  21. Peeters, J.R.; Tecchio, P.; Vanegas, P. eDIM: Further Development of the Method to Assess the Ease of Disassembly and Reassembly of Products: Application to Notebook Computers; Publications Office of the European Union: Luxembourg, 2018. [Google Scholar] [CrossRef]
  22. Bonvoisin, J.; Halstenberg, F.; Buchert, T.; Stark, R. A systematic literature review on modular product design. J. Eng. Des. 2016, 27, 488–514. [Google Scholar] [CrossRef]
  23. Pozo Arcos, B.; Bakker, C.A.; Flipsen, B.; Balkenende, R. Practices of Fault Diagnosis in Household Appliances: Insights for Design. J. Clean. Prod. 2020, 265, 121812. [Google Scholar] [CrossRef]
  24. Cordella, M.; Sanfelix, J.; Alfieri, F. Development of an Approach for Assessing the Reparability and Upgradability of Energy-related Products. Procedia CIRP 2018, 69, 888–892. [Google Scholar] [CrossRef]
  25. Pozo Arcos, B.; Bakker, C.A.; Flipsen, B.; Balkenende, R. Faults in consumer products are difficult to diagnose, and design is to blame: A user observation study. J. Clean. Prod. 2021, 319, 128741. [Google Scholar] [CrossRef]
  26. Dangal, S.; van den Berge, R.; Pozo Arcos, B.; Faludi, J.; Balkenende, R. Perceived capabilities and barriers for do-it-yourself repair. In Proceedings of the 4th PLATE 2021 Conference, Virtual, 26–28 May 2021. [Google Scholar]
  27. Moss, M. Designing for Minimal Maintenance Expense: The Practical Application of Reliability And Maintainability. Quality and Reliability Series Part 1; Marcel Dekker: New York, NY, USA, 1985. [Google Scholar]
  28. Perera, H.S.C.; Nagarur, N.; Tabucanon, M.T. Component part standardization: A way to reduce the life-cycle costs of products. Int. J. Prod. Econ. 1999, 60, 109–116. [Google Scholar] [CrossRef]
  29. Deloitte. Study on Socioeconomic Impacts of Increased Reparability—Final Report; Prepared for the European Commission, DG ENV; Publications Office of the European Union: Luxembourg, 2016. [Google Scholar] [CrossRef]
  30. Shahbazi, S.; Jönbrink, A.K. Design guidelines to develop circular products: Action research on nordic industry. Sustainability 2020, 12, 3679. [Google Scholar] [CrossRef]
  31. Pérez-Belis, V.; Braulio-Gonzalo, M.; Juan, P.; Bovea, M.D. Consumer attitude towards the repair and the second-hand purchase of small household electrical and electronic equipment. A Spanish case study. J. Clean. Prod. 2017, 158, 261–275. [Google Scholar] [CrossRef]
  32. Keoleian, G.; Menerey, D. Life Cycle Design Guidance Manual: Environmental Requirements and the Product System; Office of Research and Development: Washington, DC, USA, 1993. [Google Scholar]
  33. Viegand Maagøe A/S; Van Holsteijn en Kemna B.V. Review Study on Vacuum Cleaners Final Report; European Commission, Directorate-General for Energy: Brussels, Belgium, 2019. [Google Scholar]
  34. Tecchio, P.; Ardente, F.; Mathieux, F. Understanding lifetimes and failure modes of defective washing machines and dishwashers. J. Clean. Prod. 2019, 215, 1112–1122. [Google Scholar] [CrossRef]
  35. Sabbaghi, M.; Cade, W.; Behdad, S.; Bisantz, A.M. The current status of the consumer electronics repair industry in the U.S.: A survey-based study. Resour. Conserv. Recycl. 2017, 116, 137–151. [Google Scholar] [CrossRef]
  36. Dewberry, E.; Saca, L.; Moreno, M.; Sheldrick, L.; Sinclair, M. A Landscape of Repair. Sustain Innov. 2016, 2016, 76–85. [Google Scholar]
  37. IFixit. Repair Market Observations from Ifixit; IFixit: San Luis Obispo, CA, USA, 2019. [Google Scholar]
  38. Flipsen, B.; Bakker, C.; van Bohemen, G. FLIPSEN Developing a reparability indicator for electronic products. In Proceedings of the 2016 Electron Goes Green 2016+ (EGG), Berlin, Germany, 6–9 September 2016; pp. 1–9. [Google Scholar] [CrossRef]
  39. Jaeger-Erben, M.; Frick, V.; Hipp, T. Why do users (not) repair their devices? A study of the predictors of repair practices. J. Clean. Prod. 2020, 286, 125382. [Google Scholar] [CrossRef]
  40. Ackermann, L.; Mugge, R.; Schoormans, J. Consumers’ perspective on product care: An exploratory study of motivators, ability factors, and triggers. J. Clean. Prod. 2018, 183, 380–391. [Google Scholar] [CrossRef]
  41. Jef, R.P.; Paul, V.; Cattrysse, D.; Tecchio, P.; Mathieux, F.; Ardente, F. Study for a Method to Assess the Ease of Disassembly of Electrical and Electronic Equipment. Method Development and Application to a Flat Panel Display Case Study; Publications Office of the European Union: Luxembourg, 2016. [Google Scholar] [CrossRef]
  42. Laitala, K.; Klepp, I.G.; Haugrønning, V.; Throne-Holst, H.; Strandbakken, P. Increasing repair of household appliances, mobile phones and clothing: Experiences from consumers and the repair industry. J. Clean. Prod. 2021, 282, 125349. [Google Scholar] [CrossRef]
  43. Willems, G. Electronics Design-for-eXcellence Guideline, Design-for-Robustness of Electronics; IMEC: Louvain, Belgium, 2019; pp. 1–36. [Google Scholar]
  44. Ingemarsdotter, A.E.; Stolk, M.; Balkenende, R. Design for Safe Repair in a Circular Economy; Technical University Delft: Delft, The Netherlands, 2021. [Google Scholar]
  45. Svensson-Hoglund, S.; Richter, J.L.; Maitre-Ekern, E.; Russell, J.D.; Pihlajarinne, T.; Dalhammar, C. Barriers, enablers and market governance: A review of the policy landscape for repair of consumer electronics in the EU and the U.S. J. Clean. Prod. 2021, 288, 125488. [Google Scholar] [CrossRef]
  46. Cordella, M.; Alfieri, F.; Sanfelix, J. Analysis and Development of a JRC Scoring System for Repair and Upgrade of Products—Final Report; European Commission Publications Office: Seville, Spain, 2019. [Google Scholar] [CrossRef]
  47. Zandin, K.B. MOST Work Measurement Systems, 4th ed.; Taylor & Francis Group: Abingdon, UK, 2002. [Google Scholar]
Figure 1. Overview of the search process followed. “*” = search term uses wildcards.
Table 1. Overview of the chosen six scoring systems (“VC” = Vacuum cleaner, “WM” = washing machine, “DW” = Dishwasher).
EN 45554 (2020)
  • Mainly based on: literature research on product repairability; co-construction by professional organizations, manufacturers, distributors, repairers, NGOs, and experts.
  • Products that can be tested: All EEE.
  • Details: General method of assessment for repair, reuse, and upgrade. Provides a generic set of tools and is not tailored to specific products. Intended for both professional repairers and self-repairers.
FRI (2020)
  • Mainly based on: literature research on product repairability; co-construction by professional organizations, manufacturers, distributors, repairers, NGOs, start-ups, and experts.
  • Products that can be tested: Washing machines, TVs, laptops, smartphones, lawnmowers.
  • Details: Based on five criteria: documentation, disassembly, spare part availability, spare part price, and additional product-based criteria. Intended for both professional repairers and self-repairers.
iFixit (2019)
  • Mainly based on: literature research on product repairability; co-construction by iFixit experts and the sustainability (SMART) consortium.
  • Products that can be tested: Mobile phones.
  • Details: Eight criteria focused on assessing ease of self-repair.
RSS (2019)
  • Mainly based on: literature research following preliminary EN 45554 and AsMeR 2018; co-construction by industry, trade associations, repairers, and academia; case studies.
  • Products that can be tested: VCs, laptops, TVs, mobile phones, WMs, DWs.
  • Details: Assessment of repairability, reusability, and upgradability. Intended for professional repairers.
AsMeR (2018)
  • Mainly based on: literature research on product repairability; case studies.
  • Products that can be tested: All EEE.
  • Details: Based on five main repair steps: product identification, failure diagnosis, disassembly and reassembly, spare part replacement, and restoring to working condition. Three repairability criteria: information provision, product design, and service. Intended for professional repairers and self-repairers.
ONR 192102 (2014)
  • Mainly based on: co-construction by repairers and the Federal Ministry of Land, Forestry, Environment, and Water.
  • Products that can be tested: Brown goods and white goods.
  • Details: Assessment of both durability and repairability. Criteria relate to product design, provision of information, and services. Intended for professional repairers.
Table 2. Overview of design features and principles empirically shown to drive repairability, and their descriptions in the literature.
Design feature/principle: Definition and how it relates to repair
Disassembly: The product is taken apart so that it can subsequently be reassembled and made operational [19]. Required to access components for most repairs [20].
Reassembly: Assembling a product to its original configuration after disassembly [21]. Required to return a product to operation.
Fastener removability and reusability: Facilitation of the removability of fasteners while ensuring that there is no impairment of the parts [or product] due to the process. Required for disassembly and ease of reassembly.
Fastener visibility: Whether more than 0.5 mm² of the fastener surface area is visible when looking in the fastening direction [20], and visual cues [8]. Facilitates product disassembly.
Tools required: Number and type of tools necessary for repair of the product [15].
Modularity: The product design is composed of different modules. A module can consist of one or more components. Modules can be separated from the rest of the product as self-contained, semi-autonomous chunks, and they can be recombined with other components [22]. Modularity improves diagnosis [23], product disassembly [24], and spare part price. The degree of modularity needs to be balanced: bundling into bigger modules decreases disassembly time but makes spare parts expensive, and vice versa.
Diagnosis: Process of isolating the reason for product failure. Diagnosis is facilitated by designed signals (text, light, sound, or movement) [23]. Even without these features, visible surfaces and component accessibility for inspection can also promote failure isolation [25].
Health and safety: Health and safety risks to the user during and after repair. Features minimizing safety risks also increase confidence in product disassembly and reassembly [26].
Standard parts and interface: Enforcing “the conformance of commonly used parts and assemblies to generally accepted design standards for configuration, dimensional tolerances, performance ratings, and other functional design attributes” [27]. Standardization beneficially affects spare part cost and availability, tooling, component identification complexity, and skill levels required, and increases the interchangeability of components during maintenance and repair [28].
Information accessibility: Information available to the product user and repairers. Whilst this is not directly a design element, manuals and labels are provided with the product. Guides the repair process [23,25,29,30,31].
Design simplicity/complexity: A minimal number of disassembly steps and/or disassembly time [24], and simplicity in understanding the interface and malfunction feedback to assist failure diagnosis [25].
Adaptability/upgradability: Adaptability allows performance of the designed functions in a changing environment. Upgrading enhances the functionality of a product [18]. Software-related issues in a product can sometimes be repaired through updates.
Ease of handling: Features such as small size, a low centre of gravity, and the presence of handles all promote ease of product handling [17,18]. Facilitates the disassembly process during product manipulation.
Interchangeability: Assuring components can be replaced in the field with no reworking required to achieve a physical fit. Allows for component testing [23,25] and facilitates component replacement.
Robustness: Selecting designs that are robust. Assures products do not break during repair [8]; increases confidence during disassembly [25].
Redundancy: Providing an excess of functionality and/or material in products or parts. Allows removal of material as part of a recovery intervention [32]. Functional redundancy assists fault location and isolation [23].
Firmware reset: Software- and electronics-related issues can be fixed via reset [33]. Reset functions facilitate cause-oriented diagnosis [23].
Table 3. Overview of scoring systems compared to the literature [17,18,23,29,30,31,34,35,36,37,38,39,40,41,42]. Red rows = missing or partially addressed design elements in the scoring system. Bullet points = addressed aspects, Hollow bullet points = partially addressed aspects. Numbers in the column Bovea et al. [17], and Den Hollander et al. [18] = the number of papers they list relating to each design principle.
Columns: Scoring systems: EN 45554; RSS (JRC); AsMeR (Benelux); ONR 192102; FRI; iFixit 2018. Literature: Bovea et al. [17]; Den Hollander [18]; Shahbazi et al. [30]; Tecchio et al. [34]; Pozo Arcos et al. [23]; Pérez-Belis et al. [31]; Deloitte [29]; Sabbaghi et al. [35]; Dewberry et al. [36]; iFixit [37]; Flipsen et al. [38]; Jaeger-Erben et al. [39]; Ackermann et al. [40]; Jef et al. [41]; Laitala et al. [42].
Rows (design aspects related to repairability): Disassembly; Reassembly; Fastener removability and reusability; Fastener visibility; Tools required; Modularity; Diagnosis; Health and safety risk (design); Standard parts and interface; Repair information to user; Upgradability/adaptability; Design simplicity/complexity; Handling; Interchangeability; Material selection/robustness; Redundancy; Firmware reset.
Table 4. Scorecard analysis for criteria on diagnosis and component accessibility. Green cells = objective, yellow cells = semi-objective, and red cells = subjective. “Dis.” = Disassembly, “Rea.” = Reassembly, “Mfr.” = Manufacturer, “c.” = check, “#” = number of.
For each design element, the test applied and the scoring levels are given per scoring system (EN 45554; RSS (JRC); AsMeR (Benelux); ONR 192102; FRI; iFixit), followed by testing method details. Systems not listed under an element do not assess it (N/A).
Disassembly
  • Test: EN 45554: dis. time or # steps; RSS: dis. time or # steps; AsMeR: dis. time; ONR 192102: dis. possibility; FRI: dis. # steps; iFixit: dis. time, path of entry.
  • Scoring levels: EN 45554: levels not determined, dis. step or time (eDiM) *; RSS: 4 levels of dis. step/time (eDiM); AsMeR: 4 levels of dis. step/time (eDiM); ONR 192102: 10 levels for possibility of dis., 5 levels of dis. effort; FRI: 4 levels of dis. step; iFixit: continuous dis. time, path of entry *.
  • Details: dis. required; “possibility” = possibility of full dis.; * = c. with reference value; “cont.” = continuous levels.
Reassembly
  • Test: EN 45554: rea. time; RSS: rea. time, c. info; AsMeR: rea. time.
  • Scoring levels: EN 45554: rea. time (eDiM) *; RSS: 2: description of rea., rea. time (eDiM) *; AsMeR: 4: rea. time (eDiM) *.
  • Details: dis. & rea. required; “c. info” = check information on rea.; * = check with reference value.
Modularity
  • Test: AsMeR: dis. ability; ONR 192102: dis. ability.
  • Scoring levels: AsMeR: 3: 50% replaceable, 75% replaceable, all replaceable; ONR 192102: 10: all reducible to individual components.
  • Details: dis. & check the possibility for critical components to be reducible.
Fastener type
  • Test: EN 45554: dis. & c. type; RSS: dis. & c. type; ONR 192102: dis. & c. type; FRI: dis. & c. type.
  • Scoring levels: EN 45554: 3 *; RSS: 3 *; ONR 192102: 10: non-removable; FRI: 3 *.
  • Details: disassemble & check fastener type; * = (reusable > removable > non-removable).
Fastener visibility
  • Test: EN 45554: dis. & c. visibility; iFixit: dis. & c. visibility.
  • Scoring levels: EN 45554: 3: visible, not visible, hidden; iFixit: 3: highlighted, visible, not visible.
  • Details: check fastener visibility during dis.; dis. required.
Tools required
  • Test: all six systems: dis. & c. tools.
  • Scoring levels: EN 45554: 4: no or basic, product specific, commercially available, prop., not removable; RSS: 3: basic, product specific, prop.; AsMeR: 3: basic, product specific, prop.; ONR 192102: 5: intuitive device operation; FRI: 4: basic or supplied, product specific, prop., not removable; iFixit: 4: basic, product specific, prop., requires heat gun.
  • Details: check tools needed during dis.; dis. required; “prop.” = proprietary.
Diagnosis
  • Test: EN 45554: cause f. & c. interface operability, c. interface, c. available documents; RSS: cause f. & c. interface operability, c. interface, c. available documents; AsMeR: cause f. & c. interface operability, c. interface; ONR 192102: cause f. & c. interface operability.
  • Scoring levels: EN 45554: 4: intuitive, coded, additional software/hardware & prop.; RSS: 4: intuitive, coded, additional software/hardware & prop.; AsMeR: 4: intuitive interface, coded, additional software/hardware & prop.; ONR 192102: 10: display & test mode, 10: low-level operation, operation after cover removal.
  • Details: “f.” = fault; document availability could be via manual, official website, or service centre call.
Health & safety risk during repair (design)
  • Test: RSS: c. mfr. instructions; ONR 192102: dis. & c. features; iFixit: dis. & c. features.
  • Scoring levels: RSS: 1: instruction from mfr.; ONR 192102: 5: protection in control processors, 4: danger warning signs, 5: warnings on sensitive components; iFixit: 8: battery case type, adhesive use & type, requirement of heating & sharp tools.
  • Details: instructions could be included via manual, official website, or service centre call.
Working environment (safety)
  • Test: EN 45554: c. mfr. instruction; RSS: c. mfr. instruction.
  • Scoring levels: EN 45554: 3: any condition, workshop, production environment; RSS: 3: any condition, workshop, production environment.
  • Details: check instruction from mfr. for the work environment required for repair (via manual, official website, or service centre call).
Skill required (safety)
  • Test: EN 45554: c. mfr. instruction; RSS: c. mfr. instruction.
  • Scoring levels: EN 45554: 4: layman, generalist, expert, mfr., not feasible; RSS: 3: layman, expert, mfr.
  • Details: check instruction from mfr. for the skill required for repair (via manual, official website, or service centre call).
Information media
  • Test: AsMeR: c. info media; ONR 192102: c. info media; iFixit: c. info media.
  • Scoring levels: AsMeR: 4: attached to product, manual, website, not available; ONR 192102: 4: attached to product, manual, website, toll-free contact support, local-fee contact support; iFixit: 3: attached to product, video, on website.
  • Details: check information media as listed in the criteria.
Information content
  • Test: EN 45554: c. mfr. instructions, c. media; RSS: mfr., c. media; AsMeR: mfr., c. media; ONR 192102: mfr., c. media; FRI: mfr.; iFixit: c. media.
  • Scoring levels: EN 45554: 9: c. presence (Table 5); RSS: 9: c. presence (Table 5); AsMeR: 9: c. presence (Table 5); ONR 192102: 13: c. presence (Table 5); FRI: 13: c. presence (Table 5); iFixit: 5: c. presence (Table 5).
  • Details: check actual availability in different media; check the manufacturer’s declaration.
Std. parts & interface
  • Test: EN 45554: c. mfr. info; RSS: c. mfr. info; AsMeR: c. mfr. info; ONR 192102: dis. & c. type.
  • Scoring levels: EN 45554: 3: std. part & interface, prop. part with std. interface, prop. part with non-std. interface; RSS: 2: non-prop. & has a std. interface, prop. or lacks std.; AsMeR: 3: all parts std., few parts std., no std.; ONR 192102: 2: std. interface, non-std. interface.
  • Details: check manufacturer information; “std.” = standardised; “prop.” = proprietary.
Reset (firmware & card)
  • Test: EN 45554: c. possibility; RSS: c. possibility, c. information; AsMeR: c. information; ONR 192102: c. possibility, c. information; FRI: c. instruction.
  • Scoring levels: EN 45554: 4: integrated, external, service, not possible; RSS: 4: integrated, external, service, not possible; AsMeR: 1: possibility to reset; ONR 192102: 1: possibility to reset; FRI: 1: possibility to reset.
  • Details: c. possibility to reset by trying to reset the product; c. information & instruction on firmware reset.
Design simplicity
  • Test: ONR 192102: operate & c.
  • Scoring levels: ONR 192102: 5: intuitive device operation.
  • Details: operate the device & check intuitiveness.
Table 5. Information required in different scoring systems. Bullet points = Information aspect addressed by the scoring system.
Information availability | Scoring systems: EN 45554; RSS (JRC); AsMeR; ONR 192102; FRI; iFixit
Features being claimed in update
Update method
Documentation of updates offered after the point of sale
Repair instructions/manual/bulletin
Product identification
Component identification
Exploded view
Regular maintenance instructions
Diagnosis information/testing procedure/troubleshooting chart
Repair/upgrade service offered by the manufacturer
Safety measures related to use, maintenance, and repair
List of available updates
Disassembly instruction
Reassembly sequence
Product identification
Fault detection software
PCB/electronic board diagram
Error codes
3D printing of spare parts
Reconditioning
Procedure to reset to working condition
Service centre accessibility
Transportation instructions
Circuit/wiring diagram
Replacement supplier/supply information
Tools required
Service plan of electrical boards
Training materials for repair
Recommended torque for fasteners
Compatibility of parts with other products
Functional specification of parts
Reference values for measurements

