Design Aspects in Repairability Scoring Systems: Comparing Their Objectivity and Completeness

Abstract: The Circular Economy Action Plan adopted by the European Commission aims to keep value in products as long as possible through developing product-specific requirements for durability and repairability. In this context, various scoring systems have been developed for scoring product repairability. This study assessed the objectivity and completeness of six major repair scoring systems, to see what further development may be required to make them policy instruments for testing product repairability. Completeness of the scoring systems was assessed by comparing them to the latest literature on what design features and principles drive product repairability. Objectivity was determined by assessing whether the scoring levels in each criterion were clearly defined with a quantifiable and operator-independent testing method. Results showed that most of the criteria in the scoring systems were acceptably objective and complete. However, improvements are recommended: The health and safety criterion lacked objectivity and has not yet been fully addressed. Further research is required to expand the eDiM database, and to identify whether the additional accuracy provided by eDiM compared to disassembly step counts compensates for the increased difficulty in testing. Finally, assessment of reassembly and diagnosis should be expanded. Addressing these gaps will lead to the development of a scoring system that could be better used in policymaking, and for assessment by consumer organizations, market surveillance authorities, and other interested stakeholders, to promote the repairability of products.


Introduction
Consumer goods are nowadays less durable and repairable than in the past, and the average lifetime of products seems to be decreasing [1]. This contributes towards an increase in waste electronic and electrical equipment (WEEE), which has been growing at the rate of 2-5% per year [2]. A report by the Organization for Economic Co-operation and Development (OECD) indicates that extending product lifetime could help solve this issue [3]. As a response, the Circular Economy Action Plan adopted by the European Commission sets out to keep value in products as long as possible through developing product-specific requirements for durability and repairability [4]. In this context, various scoring systems have been developed for scoring the repairability of electronic and electrical equipment (EEE) [5][6][7][8][9][10]. Such scoring systems could also contribute to ongoing and future standardization to provide designers and market surveillance authorities (MSAs) with recommendations on improving the repairability of products. Additionally, this could empower consumers to make informed choices when buying their products. A good scoring system should be objective and provide a complete assessment of the repairability of products [11]; it should therefore be assessed on whether it reflects the science-based literature on design aspects related to repairability.

The six scoring systems analysed in this study were selected based on the following criteria:
• The criteria for the scoring system are publicly available in the English language.
• The evaluation method used is quantitative or at least semi-quantitative in nature, to provide a more objective assessment and enable ranked comparisons of products.
• It must be the latest iteration or version of the assessment system from the organisation/group.
Table 1 provides an overview of the chosen six scoring systems. The criteria of these scoring systems were expected to overlap, firstly because they all measure the repairability of electrical and electronic equipment (EEE), but also because newer scoring systems tend to have been developed after consideration and study of previous scoring systems.
Table 1. Overview of the chosen six scoring systems ("VC" = vacuum cleaner, "WM" = washing machine, "DW" = dishwasher).

EN 45554 (2020)
• Mainly based on: literature research on product repairability; co-construction by professional organizations, manufacturers, distributors, repairers, NGOs, and experts.
• Products that can be tested: all EEE.
• Details: general method of assessment for repair, reuse, and upgrade. Provides a generic set of tools and is not tailored to specific products. Intended for both professional repairers and self-repairers.

French Repairability Index
• Mainly based on: literature research on product repairability; co-construction by professional organizations, manufacturers, distributors, repairers, NGOs, start-ups, and experts.
• Products that can be tested: washing machines, TVs, laptops, smartphones, lawnmowers.
• Details: based on five criteria: documentation, disassembly, spare part availability, spare part price, and additional product-based criteria. Intended for both professional repairers and self-repairers.

iFixit
• Products that can be tested: mobile phones.
• Details: eight criteria focused on assessing ease of self-repair.

AsMer (2018)
• Mainly based on: literature research on product repairability; case studies.
• Products that can be tested: all EEE.
• Details: based on five main repair steps: product identification, failure diagnostic, disassembly and reassembly, spare part replacement, and restoring to working condition. Three different repairability criteria: information provision, product design, and service. Intended for professional repairers and self-repairers.

ONR 192102 (2014)
• Mainly based on: co-construction by repairers and the Federal Ministry of Agriculture, Forestry, Environment, and Water Management.
• Products that can be tested: brown goods and white goods.
• Details: assessment of both durability and repairability. Criteria are related to product design, provision of information, and services. Intended for professional repairers.

Assessing Completeness of the Scoring Systems
From December 2020 to February 2021, a review of the literature was conducted to identify design principles, features, and guidelines related to the repairability of household electronic and electrical equipment. Relevant scientific literature related to design aspects of repairability was identified via the Google Scholar search engine and SCOPUS citation database.
Search terms were "design", "features", "principles", and "guidelines". These were followed by "repair OR maintain". Additionally, the search term focused on the following product categories: "appliance", "household products", "EEE", "white goods", "brown goods", "electrical and electronic equipment", "mobile phones", "vacuum cleaner", "laptop". This was an iterative process where different combinations of the provided terms were used. Wildcards were used to ensure wide coverage, and a proximity criterion of within 5 was used to narrow down the relevant results with co-occurring search terms (see Figure 1). The search was conducted within titles, abstracts, and keywords, in papers published from 2000 to 2021. The search was also focused on the following subject areas: engineering, material science, environmental science, industrial design, and design.
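As an illustration, the combination of term lists, wildcards, and the within-5 proximity operator can be sketched as a query builder. The term lists below follow the search terms above, but the assembled query strings are a hypothetical reconstruction of the Scopus advanced-search syntax, not the exact queries used in the study:

```python
# Hypothetical reconstruction of the iterative query-building process.
# Wildcards (*) widen coverage; W/5 requires co-occurrence within 5 words.
design_terms = ["design", "feature*", "principle*", "guideline*"]
repair_terms = ["repair*", "maintain*"]
products = ["appliance*", "white goods", "EEE", "mobile phone*"]

def scopus_query(design: str, repair: str, product: str) -> str:
    """Build a TITLE-ABS-KEY query pairing a design term with a repair
    term via a within-5 proximity operator, scoped to one product term."""
    return f'TITLE-ABS-KEY(("{design}" W/5 "{repair}") AND "{product}")'

queries = [scopus_query(d, r, p)
           for d in design_terms for r in repair_terms for p in products]
print(len(queries))  # 4 design x 2 repair x 4 product = 32 combinations
print(queries[0])
```

Each combination would then be run, screened, and refined in the iterative fashion described above.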
This review focused on aspects related to the physical design of the product. These included design features, principles, and guidelines related to the repairability of household electronic and electrical equipment. Articles beyond the aforementioned scope were excluded. These included elements related to automotive products and textiles, and also user and market aspects related to repairability (such as spare part prices and availability). The results were screened for their relevancy firstly by checking headings, then by reviewing the abstract and conclusion. A full review of the paper was then conducted, and relevant articles selected. Additional papers were identified via snowballing using the reference list of a paper or its citations to identify additional articles [16].
During the analysis phase, each chosen paper was read, and sections marked wherever design-related aspects related to repairability were mentioned. The design aspects were considered relevant only if the addressed repairability aspects were an outcome of an empirical study.
Two studies have been conducted previously on design guidelines and principles related to repairability: the paper by Bovea et al. [17] provides nine relevant recommendations related to repairability originating from 34 different sources. Similarly, Den Hollander [18] provides 16 design principles related to the repairability of products originating from six different pieces of literature published before 2016. To avoid multiple references, the literature already addressed by Bovea et al. [17] and Den Hollander [18] was not considered in our study.

Results of our analysis were clustered into design features and principles empirically shown in the literature to improve repairability, to enable a comparison with the criteria measured by the different scoring systems. The completeness of the scoring system was determined by checking whether the identified design elements were reflected in the scoring system.

Assessing Objectivity of the Scoring Systems
Objectivity is important for the repeatability of scores. To assess objectivity, the criteria presented in the different scoring systems were clustered within the identified design features and principles (see Table 2). Afterwards, each criterion and its testing method were categorized into three levels: objective, semi-objective, and subjective, based on the following definitions:
• Objective: each level score that can be achieved is clearly defined, and the testing action to achieve the score can be quantified and is operator-independent.
• Semi-objective: whilst the testing action can be quantified, no clear indication is given on how each level of the score is achieved, causing a degree of operator dependence.
• Subjective: one or more testing actions cannot be quantified objectively; the result is operator-dependent.
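The rubric above reduces to two yes/no questions per criterion: can the testing action be quantified, and is each score level clearly defined? A minimal sketch of this classification logic (the example criteria in the comments are illustrative, not taken from any of the scorecards):

```python
def objectivity_level(action_quantifiable: bool, levels_defined: bool) -> str:
    """Classify a scoring criterion using the three-level rubric:
    - objective: quantifiable testing action AND clearly defined score levels
    - semi-objective: quantifiable action, but level boundaries undefined
    - subjective: the testing action itself cannot be quantified
    """
    if not action_quantifiable:
        return "subjective"
    return "objective" if levels_defined else "semi-objective"

# Illustrative (hypothetical) criteria:
print(objectivity_level(True, True))    # e.g., counting disassembly steps
print(objectivity_level(True, False))   # e.g., a 0-10 score with no increment rules
print(objectivity_level(False, False))  # e.g., "judge the repair difficulty"
```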

Results and Discussion
This section first shows how well each analysed scoring system captures the design elements that have been empirically shown in the literature to drive repairability. It then assesses the completeness and objectivity of each scoring system, as well as highlighting differences between them.
Considering both the literature and the different scoring systems, a total of 17 different design elements were identified that are considered important for repairability in EEE. Table 2 provides the list of design elements and their descriptions based on the literature. Table 3 provides an overview of the scoring systems compared to the literature. In general, all criteria in the scoring systems seem to be reflected in the literature. Table 3 shows that seven out of eighteen aspects related to repairability from the literature were well reflected in most (more than three) of the scoring systems. These include disassembly, fastener type, tools required, information content, standardized parts and interface, and firmware reset. In contrast, seven aspects (coloured in red) were not addressed or only partially addressed. These are described below.

Aspects Not Addressed or Only Partially Addressed by the Scoring Systems
Four aspects were not addressed directly by any of the scoring systems: "ease of handling", "interchangeability", "robustness"/"material selection", and "redundancy". These may be missing from the scoring systems because, as the table shows, there is much less literature on them than on other aspects of repair. Similarly, "diagnosis" and "health and safety risk" are only partially addressed. However, they may still be sufficiently important to include in the scoring.
The first aspect not addressed in the scoring system is "ease of handling". Features such as small size, low centre of gravity and handles make product manipulation (flipping, tilting, etc.) easier during disassembly, and make it easier to take the product for repair. However, the absence of these features does not seem to severely alter the repairability of the product.
The second aspect not addressed by the scoring systems is "interchangeability". Interchangeability allows for component testing [23], as well as facilitating the removal and replacement of the component. Additionally, interchangeability allows for part replacement with third-party spare parts. Interchangeability of components could also enable extracted components from old products to be used for repair; however, minimal data are available on how often this repair scenario occurs in the EU. Further investigation may be required to determine the extent of component extraction from old products, and to what extent third-party spare parts are used for repair within the EU. This could be achieved by surveying and observing repairers and their repair process.
The third aspect not addressed is "robustness". This principle ensures that handling and disassembling actions during repair do not break or damage the product [34]. It also increases confidence during disassembly [25]. The majority of the scoring systems (4/6) indicate that if breakage occurs during the disassembly process, the fastener for the part being disassembled is considered "non-removable"; the "material selection"/"robustness" aspect is therefore partially addressed by the "fastener removability and reusability" criterion. However, testing the robustness of the product is normally carried out through complex simulations, destructive stress tests, and accelerated life tests [43], all requiring significant resources. This most likely outweighs the benefit of having this criterion in the repairability scoring system. Further research may be needed to determine if an easier testing method could be developed to test for the material selection/robustness of products. One method to achieve this is by checking products for features which influence robustness (e.g., a curved screen is generally more prone to breakage than a flat screen). These features can be extracted from a database of failed products. This is currently under investigation and will be published in an upcoming paper under the EU Horizon PROMPT project. However, initial research shows that product failure can be caused by multiple design principles, and it is difficult to reliably assess the robustness of products by considering design features alone.
Table 3. Overview of scoring systems compared to the literature [17,18,23,[29][30][31][34][35][36][37][38][39][40][41][42]]. Red rows = missing or partially addressed design elements in the scoring system. Bullet points = addressed aspects; hollow bullet points = partially addressed aspects. Numbers in the columns Bovea et al. [17] and Den Hollander [18] = the number of papers they list relating to each design principle.
Similarly, the literature is unclear on the extent to which redundancy in a product promotes repair. "Redundancy" relates to providing an excess of functionality and/or material in products or parts that allow for normal wear or removal of material as part of a recovery intervention [32]. This principle was found to help users locate and isolate faults [23,25]. However, this redundancy normally increases the material requirements and cost of the product. Therefore, this design feature may not justify the additional cost and materials needed for manufacture.

Fastener Removability and Reusability
One of the two partially addressed aspects is "diagnosis". In most of the scoring systems (4/6), the ability of the product to sense faults and alert the user via a display or error codes is regarded as diagnosis, and a criterion for it is developed accordingly. However, according to Arcos et al. [25], various other design features also play a role in ease of diagnosis for users (such as transparent housing and easily accessible testing points). This parameter in the scoring systems could be developed by incorporating the results from Arcos et al. [25]. Additionally, ONR 192102 includes "low-level function when faulty" and "operation after removal of the cover" as criteria for diagnosis; these two features have not been addressed by any other scoring system and could be interesting additions to the assessment of diagnosis.
The other incompletely addressed aspect is health and safety risk. Safety concerns include the safety of the person performing the repair, the safety of using the product after repair, and safety related to damaging the product during or after the repair. Aspects of safety during repair have been addressed by the majority of scoring systems (EN 45554, RSS, ONR 192102, iFixit), but safety after a repair has not been addressed by any of them. Safety after repair is important if a product that has been incorrectly repaired becomes dangerous when operated (e.g., an incorrectly reattached lawnmower blade might fly out at high speed). There is only limited literature on product and user safety during and after repair of EEE. The public report of Ingemarsdotter et al. [44] indicates that most repair actions are safe to perform and that others could be made safe through relatively small design changes. However, repair safety has been identified as one of the barriers to pushing forward product repair from political and company perspectives [45]. Therefore, to overcome this barrier, it is crucial that health and safety aspects are fully and transparently addressed in a repairability scoring system.

Interdependencies between Design Elements
Several interdependencies were observed between the design elements: fastener type, tools required, fastener visibility, reassembly, modularity, interchangeability, material robustness, design simplicity, information availability, and handling. These elements have all been identified as influencing the overall ease of disassembly of the product [8,18,20,21,38]. Additionally, diagnosis related to physical design seems to be influenced by the aspects of interchangeability, modularity, disassembly, design simplicity/complexity, robustness, and information availability [23,25].
These interdependencies between different design elements might lead to double counting in scores, and may also indicate that not all the identified design elements need to be scored to provide a useful assessment of repairability. An assessment addressing the relation between the related disassembly and repairability elements can be observed in the Ease of Disassembly Metric (eDiM) [20]. eDiM already addresses the following elements: disassembly, reassembly, tool type, and fastener visibility. If a scoring system (such as AsMer) already uses eDiM, then these aspects are implicitly covered and may not need a separate scoring criterion. In essence, a scoring system might be simplified by eliminating some metrics without losing important information. Simplifying a scorecard could ease its application, since it simplifies implementation and testing by manufacturers and surveillance authorities [20].
Table 4 shows how well the scoring systems reflect design principles and features identified in the literature. Additionally, this table shows how scores are determined, and assesses their objectivity. All the criteria from the French repairability index were identified as objective. However, it was the least complete of the scoring systems and lacks criteria that currently are more qualitative (such as diagnosis and safety aspects). RSS was the most complete scoring system, covering 11 criteria, of which 6 were objective. The scorecard with the least objectivity was ONR 192102, specifically because most of the criteria could be scored out of 5 or 10 but no specific instruction was provided on how each increment should be assessed.
Table 4. Scorecard analysis for criteria on diagnosis and component accessibility. Green cells = objective, yellow cells = semi-objective, and red cells = subjective. "Dis." = Disassembly, "Rea." = Reassembly, "Mfr." = Manufacturer, "c." = check, "#" = number of.
Several scoring systems assess which repair environment and what specific skills (layman, generalist, expert, manufacturer, not feasible) are required to carry out the repair process. However, details on what aspects are measured to determine the suitability of repair environments, and also the skills required, are lacking and are susceptible to subjectivity. Ingemarsdotter et al. [44] provide a risk assessment framework that could be applied to analyse the safety risk of household products. This framework builds on Failure Mode Effect Analysis (a widely applied method for failure analysis of products) and the Rapid Exchange of Information System (a commonly agreed framework for risk assessment of consumer products). This framework could be further developed and implemented to objectively assess the risk to safety during and after the repair.
The majority of these scoring systems (RSS, AsMer, FRI, iFixit) have to be calibrated with a reference value to work effectively. This reference value is normally calibrated through scoring a range of products (cheap-to-expensive, and with variation in designs) from a specific product category, and determining an average, a minimum, and a maximum threshold [46]. However, the number and range of products required for this calibration, and how often calibration should be carried out, are both still unclear, and there is an opportunity for further research to establish a standard protocol to identify this reference value.
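The calibration step can be sketched as deriving thresholds from a scored reference set and normalizing raw scores against them. This is a minimal illustration under assumed threshold rules (linear mapping to a 0-10 scale); the actual calibration protocols of RSS, AsMer, FRI, and iFixit differ and are not standardized:

```python
def calibrate(reference_scores):
    """Derive (minimum, average, maximum) thresholds from raw scores of a
    reference product range (cheap-to-expensive, varied designs)."""
    lo, hi = min(reference_scores), max(reference_scores)
    avg = sum(reference_scores) / len(reference_scores)
    return lo, avg, hi

def normalize(raw_score, lo, hi):
    """Map a product's raw score onto a 0-10 scale between the thresholds
    (an assumed linear rule; scores outside the range are clipped)."""
    if hi == lo:
        return 10.0
    clipped = min(max(raw_score, lo), hi)
    return round(10 * (clipped - lo) / (hi - lo), 1)

# Hypothetical raw repairability scores for a vacuum-cleaner reference set:
ref = [22, 35, 41, 48, 60]
lo, avg, hi = calibrate(ref)
print(lo, avg, hi)            # thresholds: 22, 41.2, 60
print(normalize(41, lo, hi))  # a new product's raw score of 41 maps to 5.0
```

How many reference products are needed, and how often this recalibration should be repeated, are exactly the open questions noted above.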

Design
For ease of disassembly, most of the scoring systems (5/6) either measure time or the number of disassembly steps, and each method has both benefits and drawbacks. Disassembly time is subjective, depending on who is disassembling the product [20]. A more objective measurement is to record disassembly actions based on the Maynard Operation Sequence Technique (MOST), where time represents the performance of an average skilled operator, under standard conditions, at a normal pace [47]. This allows the creation of a proxy time, as was carried out in the Ease of Disassembly Metric (eDiM) [21]. This method is recognized as more representative of the ease of disassembly of the product than the number of disassembly steps. Furthermore, when assessing ease of disassembly, there is a significant difference between eDiM times and disassembly step counts [13], and eDiM captures the diversity of product designs better than disassembly step counts. However, fully implementing eDiM would require a disassembly time database of all possible disassembly actions. Currently, the database is limited to ICT products, and the process of calculating eDiM is more labour-intensive than counting disassembly steps. Providing a better representation of ease of disassembly might be important for a scoring system that places a high weight on disassembly, as well as for consumer organisations, manufacturers, designers, and MSAs that would like to assess the ease of product disassembly. Therefore, further research is required on eDiM to expand and simplify it, and to determine the balance between accuracy and ease of testing. The iFixit scoring system also has another disassembly criterion called the "path of entry", which describes the ease of disassembly to the point where critical components are visible [8]. This combines the criteria of disassembly time and tools required to disassemble until the critical components are visible, and therefore seems to have a similar testing method to ease of disassembly.
Although iFixit already has a separate criterion related to disassembly time and tools, the path of entry assesses the tools required and the disassembly needed up to the point where all the critical components are visible. Furthermore, the criteria related to the path of entry are reflected in the report of iFixit market observations [42], which describes how an easy path of entry builds confidence in users self-repairing their products. Additionally, these criteria also help in diagnosis, since viewing the critical components could be required by users during the diagnosis process [23,25]. Therefore, "path of entry" is a good addition to the disassembly criteria for a scoring system assessing self-repair.
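The MOST-based logic behind eDiM can be illustrated by summing standard times over a recorded disassembly sequence. The action categories and time values below are invented for illustration; the real eDiM database (currently limited to ICT products) defines both:

```python
# Illustrative eDiM-style proxy-time calculation. Standard times represent
# an average skilled operator at a normal pace (MOST-derived); the values
# here are assumptions, not entries from the published eDiM database.
STANDARD_TIME_S = {
    "tool_change": 3.0,
    "identify_fastener": 1.5,
    "unscrew": 4.5,
    "unclip": 2.0,
    "remove_part": 2.5,
}

def proxy_disassembly_time(actions):
    """Sum standard times (seconds) over a recorded disassembly sequence."""
    return sum(STANDARD_TIME_S[a] for a in actions)

# Hypothetical sequence to reach a target component:
sequence = ["tool_change", "identify_fastener", "unscrew", "unscrew",
            "unclip", "remove_part"]
print(proxy_disassembly_time(sequence))  # 18.0 s, vs. a bare step count of 6
```

The proxy time distinguishes two products with the same step count but different fastener mixes, which is why eDiM captures design diversity better than step counting, at the cost of a fuller action database and more labour-intensive testing.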
An aspect of reassembly, "fastener removability and reusability", was addressed by most of the scoring systems. However, only two out of six scoring systems considered reassembly time in their criteria (EN 45554 and the AsMer scoring system indicate checking the reassembly time using eDiM). The newer scoring criteria of RSS and iFixit only instruct testers to check whether reassembly is possible, treating reassembly as the opposite of disassembly. There is therefore a discrepancy in the importance given to this matter between the scoring methods. However, the report by Peters et al. [21] shows that reassembly time is in some cases higher than disassembly time. This is generally due to the additional actions required to position fasteners (such as screws) and components. Furthermore, positioning design features such as spring-loaded components and long routed cables further adds to the reassembly time. eDiM partially covers the additional actions for positioning fasteners in its method; however, specific reassembly actions such as assembling spring-loaded components and routing long cables are not considered. Therefore, the eDiM database could be further expanded to address more reassembly-specific actions. Additionally, if a scoring system considers the disassembly step count instead of eDiM, then additional elements influencing reassembly (e.g., criteria addressing cable routing) should be added.
Two design elements for which most scoring systems agree and provide straightforward, objective test procedures were "fastener removability and reusability" and "tools required". ADEME, EN 45554, and RSS apply similar criteria to fasteners (reusable, non-reusable, non-removable). These criteria, and also the testing method (disassemble and check fastener type), seem to be consistent across the different scoring systems, and the testing parameters seem to be straightforward and objective. Similarly, the "tools required" parameters appear to be in agreement across the scoring systems. The list of tools is well defined, and most of the scoring systems (4/6) reference the EN 45554 standard. The criterion and test for tools required seem to be clear and objective.
No list or other reference for standardized parts and interfaces is given in any of the scoring systems. Whilst RSS and EN 45554 consider the presence or absence of a standard interface per part, AsMer and ONR 192102 adopt a more subjective approach. RSS advises checking the manufacturer's information, whilst ONR 192102 suggests disassembling and checking the interface/part. However, objectively assessing standard parts and interfaces would require a list of standard parts and interfaces similar to that of the "tools required" criterion. Listing these standard parts, however, seems difficult given the large diversity of parts and components. Additionally, enforcing standardisation may impede innovation. Instead, the benefits of standardisation (as discussed in Table 2) could be addressed by the following criteria: (a) spare parts cost and availability, (b) tools required, (c) information accessibility of product identification, (d) ease of diagnosis, (e) ease of disassembly, (f) safety, and (g) interchangeability of components. Most of these criteria are already present in scoring systems; therefore, if the aforementioned criteria are addressed, standardisation as a separate criterion may not be required.
"Information accessibility" scores the ability of the public and of repairers to access repair information. The information content required by the different scoring systems is presented in Table 5. This table shows that "repair instruction", "exploded view", "diagnosis information", "safety measures", "procedure to reset to working condition", and "disassembly sequences" have been addressed by most (4/6) scoring systems. This is followed by; "product identification", "tools required", "replacement/supplier information", "circuit diagram", "component identification", "maintenance instructions", and "error codes". Most of the scoring systems seem to agree that information on diagnosis, safety, disassembly, and reset are important and that such information should be provided by the manufacturer. The testing procedure for obtaining this information seems to involve checking the official website, consulting a manual, or calling customer service. This criterion and its testing procedure seem straightforward and objective and could be easily implemented. However, apart from information on diagnosis, safety, disassembly sequences and factory reset, there is a discrepancy between scoring systems on what additional information from the manufacturer could be important. This may require further research.
In addition, the majority (4/6) of the scoring systems assess information accessibility at the product level and do not specify to what extent this applies to the most frequently occurring faults. This could result in invalid scoring (e.g., a company that provides repair information on just one fault may still attain a favourable score). Therefore, for information that depends on specific faults (such as repair information and diagnosis information), it is important to provide information covering the most frequently occurring faults.
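A fault-level assessment as suggested above could, for instance, weight the availability of repair information by how frequently each fault occurs in practice. The following sketch illustrates this idea; the fault list, relative frequencies, and the binary "information available" flags are invented for illustration and are not taken from any of the reviewed scoring systems:

```python
# Hypothetical fault-level weighting of an information-availability score.
# The faults, frequencies, and availability flags below are invented for
# illustration; they do not come from any reviewed scoring system.

def fault_weighted_score(faults):
    """Weight per-fault information availability by relative fault frequency."""
    total_freq = sum(freq for _, freq, _ in faults)
    return sum(freq * available for _, freq, available in faults) / total_freq

# (fault, relative frequency among repairs, repair info published? 1/0)
washing_machine_faults = [
    ("door lock failure",   0.35, 1),
    ("drain pump blocked",  0.30, 1),
    ("bearing wear",        0.20, 0),
    ("control board fault", 0.15, 0),
]

score = fault_weighted_score(washing_machine_faults)
print(round(score, 2))  # 0.65: the published information covers 65% of expected repair demand
```

Under such a weighting, publishing information on a single rare fault would no longer yield a favourable score, addressing the validity concern raised above.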
Suitable media for communicating this information could include printed manuals, websites, digital information carriers such as QR codes, DVDs, or flash drives, and the telephone [15,46]. AsMer, ONR 192102, and iFixit have clear criteria on how information on safety, disassembly, and product and component identification should be relayed; with "attached to the product" scoring highest, followed by access to a manual or website video. For the rest of the scoring systems, the medium of information does not seem to matter as long as it can be accessed by the public. Again, there are discrepancies concerning the importance of the information medium among the aforementioned systems. However, the literature shows that providing visual markings on the product (such as numbering wires, or warning signs) assists in correct reassembly and decreases the safety hazard [44]. Similarly, providing component identification numbers assists in buying the correct spare parts for replacement [34]. Therefore, it could be important to assess the information medium for disassembly, safety, and component identification.

Recommendations for Future Work
Our analysis found several opportunities for improvement in the current scoring systems and also identified limitations of its own. Both suggest recommendations for future work. Our primary recommendations for improving the current scoring systems are as follows:
- Assessments of health and safety were semi-objective across the majority of the scoring systems. There is therefore an opportunity to develop objective criteria and testing methodologies for assessing the health and safety of the user and the product during and after repair.
- The eDiM method database could be expanded and further simplified so that ease of disassembly can be measured more universally. Additionally, it needs to be considered whether the additional accuracy provided by eDiM, compared to counting disassembly steps, compensates for the increased difficulty in testing.
- Since the time for reassembly is sometimes higher than that for disassembly, it might be important to consider ease of reassembly as a separate criterion whenever eDiM is not used.
- In terms of repair information content, it is important to establish which information is most critical to promote repair. Additionally, information that is dependent on specific faults/components should be addressed at the fault/component level instead of the product level.
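The accuracy-versus-effort trade-off between eDiM and a simple disassembly-step count can be illustrated with a minimal sketch. The per-task times below are placeholders invented for the example; the actual eDiM method draws on MOST-based time values from its published database:

```python
# Contrast a simple disassembly-step count with an eDiM-style time estimate.
# The per-task times are placeholders for illustration only; the real eDiM
# method uses MOST-based time values from its task database.

PLACEHOLDER_TASK_TIMES = {  # seconds per task occurrence (invented values)
    "tool change": 5.0,
    "remove screw": 4.0,
    "release snap-fit": 2.0,
    "remove connector": 3.0,
    "remove part": 1.5,
}

# Hypothetical disassembly sequence to reach a target component.
sequence = [
    "tool change", "remove screw", "remove screw", "remove screw",
    "remove part", "release snap-fit", "remove connector", "remove part",
]

step_count = len(sequence)                                 # step metric: 8
edim_time = sum(PLACEHOLDER_TASK_TIMES[t] for t in sequence)
print(step_count)   # 8
print(edim_time)    # 25.0 seconds under the placeholder times
```

The step count treats every action as equal, whereas the time-based estimate distinguishes slow tasks (e.g., tool changes) from fast ones (e.g., snap-fits). The open question raised above is whether this finer resolution justifies the additional measurement effort.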
This study's limitations may also provide opportunities for future work. Ease of testing and validity were each discussed only partially. Whilst a scoring system could be complete and objective, covering all the aspects required to score repairability, such a scoring system might be too burdensome to apply within a feasible budget and time. Therefore, future work could investigate how to balance ease of testing against the objectivity and completeness of the testing program. Future work could also further test the validity and feasibility of different scoring systems by having multiple test personnel independently test different products with each scoring system and checking their levels of agreement. This is planned for upcoming research.
This review focused on how scoring systems in the current literature reflect physical design features, principles, and guidelines related to the repairability of household electronic and electrical equipment, and on how they are tested. However, research has also shown the importance of user and market aspects in repair. Future research could investigate how well the current scoring systems reflect these by testing user and market aspects related to repairability.
While this research was intended to create tools useful for policy makers, it was beyond the scope of this project to predict which specific policy types would make the most effective use of those tools. Therefore, further research is recommended to identify the most effective policies for improving repairability, such as taxation, mandatory product labelling, mandatory minimum repairability scores, or other implementations.

Conclusions
This study assessed the objectivity and completeness of six major repair scoring systems, to see what further development may be required to make them policy instruments. The completeness of each scoring system was assessed by comparing it to the latest literature on the design features and principles that drive product repairability. Similarly, the objectivity of each scoring system was assessed by checking whether the scoring levels for each criterion were clearly defined, with a quantifiable and operator-independent testing method. In general, most of the scoring systems were acceptably objective and complete. The FRI and iFixit scoring systems were found to be the most objective, and JRC was the most complete. However, they could all be further improved by the recommendations presented in this paper.
Addressing the gaps presented in this paper would lead to the development of an ideal scoring system with an effective testing program that could be used for policymaking. Additionally, this scoring system could be used for assessment by consumer organizations, MSAs, and other interested stakeholders, to promote the repairability of products, which will, in turn, extend their lifetime.