Article

The Derivation of Defect Priorities and Core Defects through Impact Relationship Analysis between Embedded Software Defects

Sang Moo Huh 1 and Woo-Je Kim 2,*
1 Graduate School of Public Policy and Information Technology, Seoul National University of Science and Technology, Gongneung-ro 232, Nowon-gu, Seoul 01811, Korea
2 Department of Industrial Engineering, Seoul National University of Science and Technology, Gongneung-ro 232, Nowon-gu, Seoul 01811, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(19), 6946; https://doi.org/10.3390/app10196946
Submission received: 4 September 2020 / Revised: 28 September 2020 / Accepted: 2 October 2020 / Published: 4 October 2020
(This article belongs to the Collection Big Data Analysis and Visualization Ⅱ)

Abstract
As embedded software is closely related to hardware equipment, any defect in embedded software can lead to major accidents. Thus, all defects must be collected, classified, and tested based on their severity. In the pure software field, a method of deriving core defects already exists, enabling the collection and classification of all possible defects. However, in the embedded software field, studies that have collected and categorized relevant defects into an integrated perspective are scarce, and none of them have identified core defects. Therefore, the present study collected embedded software defects worldwide and identified 12 types of embedded software defect classifications through iterative consensus processes with embedded software experts. The impact relation map of the defects was drawn using the decision-making trial and evaluation laboratory (DEMATEL) method, which analyzes the influence relationship between elements. As a result of analyzing the impact relation map, the following core embedded software defects were derived: hardware interrupt, external interface, timing error, device driver, and task management. All defects can be tested using this defect classification. Moreover, knowing the correct test order of all defects can eliminate critical defects and improve the reliability of embedded systems.

1. Introduction

Embedded systems are used in various industries, including automotive, railway, construction, medical, aerospace, shipbuilding, defense, and space. However, these systems can contain software defects that cause fatal accidents. In the medical field, a defect in a safety-critical radiotherapy system caused more than six patients to be injured by excessive radiation over a two-year period [1]. In the space sector, the failure of Ariane 5 Flight 501 on its maiden flight is a commonly cited example of an accident caused by a software defect [2]. In the defense sector, a software defect caused a Patriot missile battery in Dhahran, Saudi Arabia, to malfunction, killing 28 US Army soldiers and injuring 98 [3]. As these examples show, fatal consequences can occur if defects are not eliminated from embedded systems. Therefore, all defects must be collected and classified so that no embedded defect remains untested. A review of existing studies on embedded software defects shows that many analyses are based on embedded architecture or address defects in interfaces, dynamic memory, and exception handling, as well as miscellaneous defects found in aircraft or space-exploration applications. Although these studies are meaningful in their respective fields, the defects they report have not been consolidated and classified. If only the defects from some previous studies are referenced and tested, defects may remain untested and cause serious problems.
To address this problem, this study collected possible defects reported worldwide and classified them from a systematic, integrated viewpoint by applying a content analysis technique. In addition, when a defect occurs, it can affect other defects; because of this characteristic, a defect can become progressively more serious. In this paper, such defects are considered core defects because they can cause critical failures in an embedded system. These core defects were derived by analyzing the impact relationships between embedded software defects with the decision-making trial and evaluation laboratory (DEMATEL) method. Using these results, developers can eliminate defects without omission by following the integrated defect types and can improve embedded software quality through intensive management of core defects.
Various defects in the pure software field have been collected, classified, and studied worldwide [4,5]. Mäntylä and Lassenius [4] argue that focusing on the number of defects instead of the defect types often undermines the benefits of code review; for this reason, they collected and categorized defect types that are useful in code review. Huh and Kim [5] collected a series of pure software defect studies (Table 1) and classified the defects into specific functional categories in their meta-analysis (Table 2). Using the analytic network process (ANP), they derived what they identified as core defects in general software applications, such as personnel, salary, and accounting systems. By analyzing the impact relationships of those pure software defects, they derived a set of core defects and concluded that targeting core defects could eliminate related peripheral defects, making troubleshooting more efficient. The present study expands on Huh and Kim's [5] defect-classifying study, using their list of pure software defects as the present study's list of pure software area defects.
Next, the present study investigated other studies that focused on embedded software defects. Barr [9,10] presented ten important embedded software defects, including race conditions, non-reentrant functions, missing volatile keywords, stack overflow, heap fragmentation, memory leaks, deadlocks, priority inversions, and incorrect priority assignments. Lutz [11] classified 387 errors encountered during the Voyager and Galileo missions, while Hagar [12] presented test methods for various embedded defects. Lee et al. [13] defined 11 faults for input, control, and output, and then tested them within a vehicle's embedded system. Jung et al. [14] studied defects that violate the Motor Industry Software Reliability Association-C (MISRA-C) 2004 coding rules, using static analysis tools. Bennett and Wennberg [15] took a different approach to defect analysis, studying cost-effective integrated testing for five types of defects found during spacecraft development. Seo [16] studied and tested defects that occur in the interface between the software (SW) and hardware (HW) of embedded systems. Choi [17] defined dynamic memory defects and subsequently tested for them in embedded systems. Lee [18] examined methods for recovering from faults through exception-processing routines when they occur in embedded systems. Other researchers approached the analysis by manually injecting and testing defects: Cotroneo et al. [19] investigated the accuracy of injecting defects into an embedded system, Lee et al. [20] and Lee and Park [21] conducted similar fault injection tests by putting six defects into a defense embedded system, and Lee [22] studied fault injection tests for six orthogonal defect classification (ODC) defects in an aerospace embedded system.
Despite the large number of studies conducted, these defects have been neither comprehensively aggregated nor classified as mutually exclusive and collectively exhaustive (MECE). Moreover, there has been no attempt to derive significant defects from the influence relationships between defects. For this reason, this study collected embedded software defects reported worldwide and classified them as MECE, with the intent to derive core defects that can be applied to embedded software applications.

2. Materials and Methods

All collected defects were categorized as MECE to address numerous unique defects noted by the different researchers. This study used content analysis to categorize and integrate terms based on their characteristics and meanings [23], thus creating the categories used here. Then, the DEMATEL method was used to identify the impact relationship between the defect categories and distinguish cause defects from effect defects [24].

2.1. Content Analysis

First, the present study used content analysis, a method suited to studying multifaceted and sensitive phenomena and their characteristics, to categorize the many collected defects [25]. This technique categorizes and structures information derived from textual material, quantifying qualitative data. However, the method can be time-consuming, and problems may arise when interpreting or transforming ambiguous or extensive information. Moreover, content analyses may suffer from researcher overinterpretation, calling into question the validity of the analysis [25]. Nevertheless, with a step-by-step analysis, it is an effective way to classify sensitive topics, with its constructed categories open to change whenever appropriate throughout the analysis process [26]. This mixed approach to data analysis enables researchers to measure the reliability of their classifications [27]. Content analysis is generally performed using either Honey's content analysis technique or the bootstrapping technique. The present study applied the bootstrapping content analysis technique, following the seven-step procedure shown in Table 3 [23].
Following the bootstrapping procedure (Table 3), the researcher and the collaborator each classified the elements, and matched classifications were recorded as shown in Table 4. Researcher categories are recorded in the left column and collaborator categories in the top row. When the category in the left column matches the category in the top row, it is adopted as an agreed category. If the categories do not match, new categories are created after discussion between the researcher and the collaborators, and the agreed categories are adopted through this repeated consensus process. Additionally, elements are recorded within the diagonal cells of the agreed categories (elements are recorded by construct number). When the elements recorded in a diagonal cell match, they are adopted as agreed elements; if any elements do not match, the researcher and collaborator reclassify them through discussion, and the agreed elements are adopted through the same repeated consensus process. The reliability of the agreed elements is measured by the classification index, which is the percentage of agreed values located on the diagonal. On this scale, 80–89% is rated as good, while 90% or more is rated as excellent [23].
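For readers who want to compute the classification index programmatically, the following is a minimal sketch (in Python with NumPy, using hypothetical counts rather than the study's data) of the agreement calculation described above:

```python
import numpy as np

# Hypothetical contingency table: rows = researcher categories,
# columns = collaborator categories; cell [i][j] = number of elements
# the researcher put in category i and the collaborator put in category j.
table = np.array([
    [5, 1, 0],   # 5 agreed elements, 1 disagreement
    [0, 5, 1],
    [0, 2, 0],
])

agreed = np.trace(table)   # agreed elements lie on the diagonal
total = table.sum()        # all classified elements
classification_index = agreed / total * 100

print(f"Classification index: {classification_index:.1f}%")
# 80-89% is rated as good, 90% or more as excellent [23].
```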

2.2. DEMATEL Method

The decision-making trial and evaluation laboratory (DEMATEL) method was initially developed by the Science and Human Affairs Program of the Battelle Memorial Institute of Geneva to solve complex and intertwined problems. This study used the DEMATEL method for five reasons: (1) it can analyze the impact relationships between complex factors; (2) it can create an impact relationship map (IRM) that visualizes the relationships between factors, clearly illustrating one factor's effect on another; (3) the alternatives can be ranked, and their weights can be measured through a six-step derivation process that yields the cause-and-effect relationships between elements [28]; (4) factors affected by other factors are assigned a lower priority, whereas factors that affect others are given higher priority [29]; and (5) a similar methodology was applied by Seyed-Hosseini et al. [30] with notably positive results: they reprioritized system failure modes by applying the DEMATEL method to the defects observed in a turbocharged engine, thereby overcoming the disadvantages of the traditional risk priority number (RPN) method used in failure mode and effects analysis (FMEA) for that engine. The DEMATEL method, however, has two primary disadvantages: (1) the factors are ranked only according to the relationships between them, and (2) a relative weight cannot be assigned to each expert evaluation [31].

2.2.1. Step 1: Deriving the Direct Relation Matrix (DRM)

The impact value with which the element in row i affects the element in column j was collected from each respondent, and the DRM was calculated by averaging these impact values. The DRM A is shown in Equation (1).
A = \begin{bmatrix} a_{11} & \cdots & a_{1j} & \cdots & a_{1n} \\ \vdots & & \vdots & & \vdots \\ a_{i1} & \cdots & a_{ij} & \cdots & a_{in} \\ \vdots & & \vdots & & \vdots \\ a_{n1} & \cdots & a_{nj} & \cdots & a_{nn} \end{bmatrix} \quad (1)
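As an illustration only (not the authors' implementation), the averaging in Equation (1) can be sketched in Python with NumPy using hypothetical 3 x 3 respondent matrices:

```python
import numpy as np

# responses: one n x n matrix per expert, entries 0-4 (no impact ... very high impact),
# with zeros on the diagonal. The values below are hypothetical.
responses = [
    np.array([[0, 3, 2],
              [1, 0, 4],
              [2, 2, 0]]),
    np.array([[0, 2, 3],
              [2, 0, 3],
              [1, 3, 0]]),
]

# Direct relation matrix A: element-wise average over all respondents (Equation (1)).
A = np.mean(responses, axis=0)
print(A)
```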

2.2.2. Step 2: Normalizing the Matrix

The larger of the maximum row sum and the maximum column sum of the DRM is chosen, as seen in Equation (2). The DRM (A) is then divided by this value to obtain the normalized matrix (N), as seen in Equation (3).
s = \max\left[ \max_{i} \sum_{j=1}^{n} a_{ij},\; \max_{j} \sum_{i=1}^{n} a_{ij} \right] \quad (2)
N = \frac{A}{s} \quad (3)
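A minimal sketch of the normalization in Equations (2) and (3), assuming a hypothetical DRM A:

```python
import numpy as np

A = np.array([[0.0, 2.5, 2.5],
              [1.5, 0.0, 3.5],
              [1.5, 2.5, 0.0]])   # hypothetical direct relation matrix

# s is the larger of the maximum row sum and the maximum column sum (Equation (2)).
s = max(A.sum(axis=1).max(), A.sum(axis=0).max())

# Normalized matrix N (Equation (3)).
N = A / s
```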

2.2.3. Step 3: Calculating for the Total Relation Matrix (TRM, T)

The total relation matrix T is obtained by adding together all the direct and indirect effects of the normalized direct influence matrix N, as seen in Equation (4).
T = N + N^{2} + N^{3} + \cdots + N^{m} = N(I - N)^{-1}, \quad \text{when } m \to \infty \quad (4)
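Because the powers of N form a convergent series when the spectral radius of N is below one, the infinite sum collapses to N(I - N)^-1. A sketch of Equation (4), again with hypothetical values:

```python
import numpy as np

N = np.array([[0.00, 0.45, 0.45],
              [0.27, 0.00, 0.64],
              [0.27, 0.45, 0.00]])  # hypothetical normalized matrix

n = N.shape[0]
I = np.eye(n)

# Total relation matrix T = N (I - N)^(-1) (Equation (4)).
T = N @ np.linalg.inv(I - N)
```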

2.2.4. Step 4: Separating the Influencing (Cause) Elements and the Influenced (Effect) Elements

The sum of each row (D) shows the level of influence an element exerts, while the sum of each column (R) represents the level of influence an element receives, as seen in Equations (5) and (6). The D value numerically expresses the degree to which one factor affects other factors, while the R value expresses the degree to which one factor is affected by other factors. D+R is the sum of the influence a factor exerts on other factors and the influence it receives from them, whereas D-R is the difference between them. The larger the value of D-R, the greater the influencing power of the factor; the smaller its value, the more it is affected by other factors [32]. Factors with positive D-R values are considered the cause group, while factors with negative D-R values are considered the effect group [25].
D = [D_{i}]_{n \times 1} = \left[ \sum_{j=1}^{n} T_{ij} \right]_{n \times 1} \quad (5)
R = [R_{j}]_{1 \times n} = \left[ \sum_{i=1}^{n} T_{ij} \right]_{1 \times n} \quad (6)
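A sketch of Equations (5) and (6), computing D, R, D+R, and D-R from a hypothetical total relation matrix:

```python
import numpy as np

T = np.array([[0.47, 0.54, 0.64],
              [0.54, 0.44, 0.63],
              [0.52, 0.50, 0.52]])   # hypothetical total relation matrix

D = T.sum(axis=1)   # row sums: influence an element exerts (Equation (5))
R = T.sum(axis=0)   # column sums: influence an element receives (Equation (6))

prominence = D + R  # overall importance of each element
relation = D - R    # positive -> cause group, negative -> effect group
```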

2.2.5. Step 5: Calculating the Threshold

The threshold is calculated as the average of all elements of the total relation matrix T, as seen in Equation (7) [28].
\sigma = \frac{\sum_{i=1}^{n} \sum_{j=1}^{n} T_{ij}}{n^{2}} \quad (7)
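A sketch of the threshold calculation in Equation (7), reusing the hypothetical matrix from the previous step:

```python
import numpy as np

T = np.array([[0.47, 0.54, 0.64],
              [0.54, 0.44, 0.63],
              [0.52, 0.50, 0.52]])   # hypothetical total relation matrix

# Threshold: the average over all n*n entries of T (Equation (7)).
threshold = T.mean()

# Only relationships stronger than the threshold are kept for the diagram.
significant = T > threshold
```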

2.2.6. Step 6: Drawing the Cause and Effect Diagram

The cause and effect diagram visualizes the complex interrelationships of all elements and provides information on the most important elements and influencing factors [33]. The diagram is drawn using only the matrix elements whose values are greater than the threshold [34].
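The diagram is conventionally drawn by plotting each element at (D+R, D-R) and connecting pairs whose T values exceed the threshold. The following sketch, assuming matplotlib is available and using hypothetical three-element data, illustrates this construction (it is not the tool used in the study):

```python
import numpy as np
import matplotlib.pyplot as plt

labels = ["E1", "E2", "E3"]                    # hypothetical element labels
T = np.array([[0.47, 0.54, 0.64],
              [0.54, 0.44, 0.63],
              [0.52, 0.50, 0.52]])

D, R = T.sum(axis=1), T.sum(axis=0)
threshold = T.mean()

fig, ax = plt.subplots()
ax.scatter(D + R, D - R)
for k, name in enumerate(labels):
    ax.annotate(name, (D[k] + R[k], D[k] - R[k]))

# Draw an arrow from element i to element j when T[i, j] exceeds the threshold.
for i in range(len(labels)):
    for j in range(len(labels)):
        if i != j and T[i, j] > threshold:
            ax.annotate("", xy=(D[j] + R[j], D[j] - R[j]),
                        xytext=(D[i] + R[i], D[i] - R[i]),
                        arrowprops=dict(arrowstyle="->", alpha=0.4))

ax.set_xlabel("D + R")
ax.set_ylabel("D - R")
ax.axhline(0.0, linewidth=0.5)
plt.show()
```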

2.3. Research Procedure

This study went through several stages: beginning with data collection, then the standardization of terms, content analysis, survey collection, and, finally, the derivation of core defects (Figure 1). In the first stage, previously studied embedded software defects and critical factors are collected without omission. Second, the terms are standardized to eliminate the errors caused by differences in classification, as the terms used by researchers did not match. Third, the bootstrapping content analysis technique is used (Table 3). Fourth, the opinions of experts are collected through questionnaires, and the cause-and-effect relationships among defects are analyzed using the DEMATEL technique. Finally, the core defects are derived by analyzing the resulting cause-and-effect relationship diagram.

2.4. Materials

2.4.1. Collected Critical Embedded Elements and Embedded Software Defects

Only pure software defects and defects in the embedded software that controls hardware were collected for this study; defects in the hardware itself were excluded. Using the pure software defects that were previously studied (Table 1) and classified (Table 2), embedded software defects were collected, as shown in Table 5.

2.4.2. Standardization of Terms

As the terms of defects studied by each researcher in Table 2 and Table 5 are not consistent, this study standardized the terms of defects. Standardization was discussed with four embedded software experts (as shown in Table 6) who helped classify representative words based on the defects classified in the previous studies in Table 2. The standardized terms of defects that were identified are shown in Table 7.

2.4.3. Embedded Software Defects via Content Analysis

Using the content analysis procedures (as shown in Table 3), the researchers and collaborators (as shown in Table 6) classified defects in Table 7 and used the reliability table in Table 4 to derive matching classifications. This process was repeated several times to extract the 12 embedded software defects shown in Table 8. The categorization index, a ratio of the agreed value located on the diagonal line of Figure 2, was used to confirm the reliability of the agreed-upon classifications. The classification index was evaluated at about 96%, with 64 of the 66 defects agreed upon by the researchers and experts, qualifying it as “excellent.”

3. Results

3.1. Derived 12 Embedded Software Defects

The 12 embedded software defect classes and their sub-defects were generated after analyzing and standardizing the numerous terms collected. This defect classification includes all the collected defects, and targeting the defects listed here may mitigate the risk that a defect will remain untested. Next, using the DEMATEL method, this study determined the relationships between embedded software defects to derive core defects.

3.2. Expert Opinions on the Influence Relationships of Embedded Software Defects

The opinions of 16 experts (with an average of 9.5 years of embedded software development experience) were collected using a survey to analyze the impact relationships of the 12 identified defects. These experts are professional engineers, top engineers, and information technology (IT) auditors with 6 to 20 years of embedded software experience, as shown in Table 9, along with their specific fields and the survey analysis results. The impact values of the 12 defects were collected from these experts, with values ranging from zero to four (zero: no impact, one: low impact, two: normal impact, three: high impact, four: very high impact). Cronbach's α was used to measure the reliability of the survey; its value, calculated with the SPSS tool, was 0.906, as shown in Table 9.
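Cronbach's α was computed with SPSS in this study; as a cross-check for readers without SPSS, the standard formula α = k/(k-1) * (1 - Σ item variances / variance of the total score) can be sketched as follows, using a hypothetical response matrix:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: respondents x items matrix of survey answers."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Hypothetical answers from 5 respondents to 4 items (0-4 impact scale).
answers = np.array([
    [3, 2, 4, 3],
    [2, 2, 3, 3],
    [4, 3, 4, 4],
    [1, 1, 2, 2],
    [3, 3, 3, 4],
])
print(f"Cronbach's alpha: {cronbach_alpha(answers):.3f}")
```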

3.3. DEMATEL Analysis of Expert Opinions

The DEMATEL method was applied to the questionnaire in stages to analyze the impact relationships of the defects. First, for the collected questionnaire values, the arithmetic mean was calculated using Equation (1), which yielded the generalized matrix (A; Table 10). Second, to normalize the generalized matrix (A), the maximum value was calculated using Equation (2) and applied in Equation (3), deriving the normalized matrix (N). Third, the TRM (T) was calculated (Table 11) by multiplying N by the inverse of (I - N), as in Equation (4). Fourth, this study calculated the sum of rows (D) and the sum of columns (R), as well as the (D+R) and (D-R) factors, in the TRM (T) of Table 12, using Equations (5) and (6). Fifth, the threshold value was calculated using Equation (7), giving a value of 0.4928. Values in matrix (T) smaller than the threshold were identified as having no impact, whereas larger values have an impact. Finally, the factor (D-R) in Table 12 was set on the y-axis and the factor (D+R) on the x-axis, and these were used to draw the impact relationship map.
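The stages above can be chained into a single pass. The following sketch (hypothetical three-defect input, not the study's survey data) reproduces the sequence from the generalized matrix A to the D-R ranking that underlies the impact relationship map:

```python
import numpy as np

labels = ["Defect A", "Defect B", "Defect C"]          # hypothetical defects
A = np.array([[0.0, 2.5, 2.5],
              [1.5, 0.0, 3.5],
              [1.5, 2.5, 0.0]])                        # averaged survey values

s = max(A.sum(axis=1).max(), A.sum(axis=0).max())      # Equation (2)
N = A / s                                              # Equation (3)
T = N @ np.linalg.inv(np.eye(len(A)) - N)              # Equation (4)

D, R = T.sum(axis=1), T.sum(axis=0)                    # Equations (5) and (6)
threshold = T.mean()                                   # Equation (7)

# Defects with higher D - R are cause defects and are tested first.
order = sorted(zip(labels, D + R, D - R), key=lambda x: -x[2])
for name, prominence, relation in order:
    group = "cause" if relation > 0 else "effect"
    print(f"{name}: D+R={prominence:.2f}, D-R={relation:.2f} ({group})")
```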

3.4. Influence Analysis between Embedded Software Defects

The DEMATEL method determined each defect's influencing power in relation to the other defects, enabling this study to plot an IRM, as shown in Figure 3. The D column lists the row sums, and the R column lists the column sums. The D value numerically expresses the degree to which one defect affects other defects, while the R value expresses the degree to which one defect is affected by other defects. D+R is the sum of the influence a defect exerts on other defects and the influence it receives from them, and is useful for identifying the total importance of a defect. D-R is the difference between the influence a defect exerts and the influence it receives. The larger the value of D-R, the greater the influencing power of the defect; the smaller its value, the more it is affected by other defects. Therefore, defects with positive D-R values are cause defects and belong to the cause group, while defects with negative D-R values are effect defects and belong to the effect group [25]. Additionally, a cause defect should be tested first because it affects other defects, whereas an affected defect should be tested later because it is affected by other defects [30]. Therefore, defects with higher D-R values should be tested first, and defects with lower D-R values should be tested later.
As shown in Table 13, the D-R values of the following defects are positive, so they should be tested first: wrong logic (E1), wrong function (E2), flash memory and file system (E12), data and shared memory (E10), dynamic memory (E11), internal software interface (E5), and exception handling (E4). Meanwhile, the D-R values of the following defects are negative, so they should be tested later: task management (E3), device driver (E7), hardware interrupt (E8), timing error (E9), and external interface (E6).
As defined at the beginning of the paper, core defects are defects that become increasingly severe due to the influence of other defects. On this basis, the five core defects identified in the effect group, with negative D-R values and different characteristics, are the following: external interface (E6), timing error (E9), hardware interrupt (E8), device driver (E7), and task management (E3). Minimizing these defects is vital, so tests appropriate to each defect's characteristics should be performed. The characteristics of the five core defects are as follows. The primary core defect is the one with the smallest D-R value, the external interface fault, which includes network, serial port, and human interface defects. For example, if there is a defect in the human interface, the system may still operate, but in an unintended way, even when the user requests the desired function. This can be understood as the most important fault because incorrectly received commands from an external system can lead to serious problems through incorrect operation. The second important defect is the hardware interrupt defect, which includes interrupts raised by operations such as dividing by zero, overflow, and underflow; if defects occur in interrupt processing, serious problems may follow. The third important defect is the device driver defect, as the device driver provides the interface for controlling the hardware; defects in the device driver are essential to note because they prevent users from predicting how the embedded system will operate. The fourth important defect is the timing error. Embedded systems can be directly linked to human life, as in automobile autonomous navigation systems, automatic navigation systems in aviation, nuclear power plant control systems, and missile control devices in the defense industry; if a function requiring an immediate response times out, unpredictable consequences may occur. The last important defect is the task management defect, since tasks may not be performed normally due to deadlocks, race conditions, and similar problems.
In a comprehensive interpretation of the core defects, this study found that embedded systems should run robustly without being affected by external systems and environments, that interrupts should be handled correctly, and that systems should be implemented to respond properly to different types of hardware. The desired function must be performed within a limited time, ensuring that the original function executes faithfully without damaging other tasks. Therefore, applying test methods suited to these characteristics is the best way to minimize defects.

3.5. Validation with Embedded Software Developer Experts

To confirm the reliability of the study results, this study asked 10 embedded software development experts to select the six defects they considered most important among the 12 embedded defects listed in Table 8. Table 14 illustrates that, although some experts suggested that logic, exception handling, data, and dynamic memory defects are also important, the common opinion is that hardware interrupt, external interface, timing error, device driver, and task management defects are the biggest impediments to proper system functioning. Comparing the important defects identified by the experts with the core defects derived in this study, there are only slight differences, with the rest being approximately identical.

3.6. The Difference between Previous Studies and This Study

Current embedded software defect research only covers the specific areas that individual researchers consider essential, as shown in Table 5. If defects are tested according to previous studies alone, some defects may remain untested and may cause failures. For this reason, the present study collected embedded software defects reported worldwide, classified them as MECE, and organized them into 12 categories and sub-defects to solve this problem. Therefore, if the 12 categories and their sub-defects are used and tested, all defects can be accounted for, minimizing failures of the embedded system.
Given that defects affect each other, problems can arise if the effect defect is tested first and the cause defect is tested later. For example, if a cause defect is found after removing the effect defects, the effect defects must be tested again, as the cause defects may affect the effect defects. Therefore, testing the cause defects first and then testing the effect defect later is a way to minimize the defect without running multiple tests [30]. In the present study, the cause defects and the effect defects were derived by analyzing the influence relationship between defects. Thus, the defect can be eliminated by testing the cause defects first and then testing the effect defects later.
Embedded software defects range from minor to severe. Naturally, more weight should be placed on severe defects than on minor ones to improve the safety of embedded systems. Various embedded software defects have been studied, but there is insufficient research on the major defects that must be addressed to minimize embedded system failures. In the present study, the influence relationships between defects were analyzed to identify the major defects, and the cause defects and effect defects were identified from those relationships. A cause defect, once eliminated, no longer leads to failure on its own. An effect defect, however, can be triggered again by remaining cause defects even after the defect itself has been eliminated. Therefore, effect defects should be managed and tested more intensively than cause defects. The effect defects derived in this study are called core defects, and the analysis showed that hardware-dependent defects are greatly affected by other defects. Therefore, if in-depth tests are conducted on the core defects derived in this study, failures of the embedded system can be minimized.

4. Conclusions

This study was able to derive 12 defect categories and sub-defects using the content analysis technique, draw the cause and effect relationship between embedded software defects, and derive core defects using the DEMATEL method. After studying the data yielded throughout the different stages of the study’s analyses, the results show that the core embedded software defects were the external interface defect, the hardware interrupt defect, device driver defect, timing error, and task management defect.
This study integrates and organizes pure software defects and embedded software defects from around the world, opening avenues for other researchers to improve software quality. It also helps mitigate the risk that critical defects remain untested. Moreover, the impact relationships between defects can be better understood through the diagrams presented here. Lastly, using the cause and effect relationships, this study constructed a basis for estimating defect weights; future studies may validate these weights and use them as criteria for targeting embedded software defects.
There are also many industrial applications of this study. First, by eliminating the time required to collect and classify defects, practitioners can immediately inspect and target any defects that may be present. Second, when developing an embedded system, defects can be removed more efficiently and effectively using a guide that orders defects by importance. Third, analyzing the priorities of the defects may make it easier to select the appropriate embedded software test technique. Lastly, when performing a fault injection test, this study suggests that more defects can be injected and tested in the source code where core defects are likely to occur.
In this study, embedded software defects were classified into 12 defect categories and sub-defects. Moreover, the influence relationships of the defects were analyzed for the 12 defect categories, and the defects were classified into cause group defects and effect group defects. However, the weights of the defects were not fully calculated, and the influence relationships of the sub-defects were not analyzed. Therefore, future studies that derive the weights of sub-defects and that analyze the influence relationships of sub-defects, identifying cause and effect defects at the sub-defect level, are needed. In addition, future researchers can examine how to improve the defect removal rate of embedded tests (such as defect injection tests) that use the defects derived in this study, compared with existing tests.

Author Contributions

Conceptualization, S.M.H. and W.-J.K.; methodology, S.M.H. and W.-J.K.; validation, W.-J.K.; investigation, S.M.H.; resources, S.M.H. and W.-J.K.; writing—original draft preparation, S.M.H.; writing—review and editing, S.M.H. and W.-J.K.; project administration, S.M.H. and W.-J.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Leveson, N.G.; Turner, C.S. An investigation of the Therac-25 accidents. Computer 1993, 26, 18–41. [Google Scholar] [CrossRef]
  2. Lann, G.L. The Ariane 5 Flight 501 failure—A case study in system engineering for computing systems. [Research Report] RR-3079. INRIA 1996. Available online: https://hal.inria.fr/inria-00073613 (accessed on 2 September 2020).
  3. Cousot, P.; Cousot, R. A gentle introduction to formal verification of computer systems by abstract interpretation. In Logics and Languages for Reliability and Security; Esparza, J., Spanfelner, B., Grumberg, O., Eds.; IOS Press: Amsterdam, The Netherlands, 2010; pp. 1–29. [Google Scholar] [CrossRef]
  4. Mäntylä, M.V.; Lassenius, C. What types of defects are really discovered in code reviews? IEEE Trans. Softw. Eng. 2009, 35, 430–448. [Google Scholar] [CrossRef] [Green Version]
  5. Huh, S.-M.; Kim, W.-J. A method to establish severity weight of defect factors for application software using ANP [Korean]. J. KIISE 2015, 42, 1349–1360. Available online: http://www.riss.kr/link?id=A101312445 (accessed on 2 September 2020). [CrossRef]
  6. IEEE. IEEE standard classification for software anomalies. In IEEE Std. 1044–2009 (Revision of IEEE Std 1044–1993); IEEE: New York, NY, USA, 2010; pp. 1–23. Available online: http://www.ctestlabs.org/neoacm/1044_2009.pdf (accessed on 2 September 2020).
  7. Huber, J.T. A Comparison of IBM’s Orthogonal Defect Classification to Hewlett Packard’s Defect Origins, Types and Modes 1.0. Hewlett Packard Co. 1999. Available online: http://www.stickyminds.com/sitewide.asp?Function=edetail&ObjectType=ART&ObjectId=2883 (accessed on 2 September 2020).
  8. IBM. Orthogonal Defect Classification v5.2 for Software Design and Code. IBM. 2013. Available online: https://researcher.watson.ibm.com/researcher/files/us-pasanth/ODC-5-2.pdf (accessed on 2 September 2020).
  9. Barr, M. Five top causes of nasty embedded software bugs. Embed. Syst. Des. 2010, 23, 10–15. [Google Scholar]
  10. Barr, M. Five more top causes of nasty embedded software bugs. Embed. Syst. Des. 2010, 23, 9–12. [Google Scholar]
  11. Lutz, R. Analyzing software requirements errors in safety-critical, embedded systems. In Proceedings of the IEEE International Symposium on Requirements Engineering; IEEE: San Diego, CA, USA, 1993; pp. 126–133. [Google Scholar] [CrossRef]
  12. Hagar, J.D. Software Test Attacks to Break Mobile and Embedded Devices; Chapman and Hall/CRC: Boca Raton, FL, USA, 2013. [Google Scholar]
  13. Lee, S.Y.; Jang, J.S.; Choi, K.H.; Park, S.K.; Jung, K.H.; Lee, M.H. A study of verification for embedded software [Korean]. Ind. Eng. Manag. Sys. 2004, 11, 669–676. Available online: http://www.riss.kr/link?id=A60279480 (accessed on 2 September 2020).
  14. Jung, D.-H.; Ahn, S.-J.; Choi, J.-Y. Programming enhancements for embedded software development-focus on MISRA-C [Korean]. J. KIISE Comp. Pract. Lett. 2013, 19, 149–152. Available online: http://www.riss.kr/link?id=A99686225 (accessed on 2 September 2020).
  15. Bennett, T.; Wennberg, P. Eliminating embedded software defects prior to integration test. J. Def. Softw. Eng. Triakis Corp. 2005. Available online: https://pdfs.semanticscholar.org/3070/1dcef9b58d6c167751aaeaff7c9628cf04c4.pdf (accessed on 2 October 2020).
  16. Seo, J. Embedded Software Interface Test Based on the Status of System [Korean]. Ph.D. Thesis, Department of Computer Science and Engineering Graduate School, EWHA Womans University, Seoul, Korea, 2009. Available online: http://www.riss.kr/link?id=T11551362 (accessed on 2 September 2020).
  17. Choi, Y.N. Automated Debugging Cooperative Method for Dynamic Memory Defects in Embedded Software System Test [Korean]. Master’s Thesis, Department of Computer Science and Engineering Graduate School, EWHA Womans University, Seoul, Korea, 2010. Available online: http://dspace.ewha.ac.kr/handle/2015.oak/188271 (accessed on 2 September 2020).
  18. Lee, S. Automated Method for Reliability Verification in Embedded Software System Exception Handling Test [Korean]. Master’s Thesis, Department of Computer Science and Engineering Graduate School, Ewha Womans University, Seoul, Korea, 2011. Available online: https://dspace.ewha.ac.kr/handle/2015.oak/188659 (accessed on 2 September 2020).
  19. Cotroneo, D.; Lanzaro, A.; Natella, R. Faultprog: Testing the accuracy of binary-level software fault injection. IEEE T. Depend. Secure. 2016, 15, 40–53. [Google Scholar] [CrossRef]
  20. Lee, H.-J.; Yoon, J.-H.; Lee, K.-Y.; Lee, D.-W.; Na, J.-W. Reclassification of fault, error, and failure types for reliability verification of defense embedded systems [Korean]. Proc. Inst. Control Robot. Syst. 2012, 7, 925–932. Available online: http://www.riss.kr/link?id=A99705651 (accessed on 2 September 2020).
  21. Lee, H.-J.; Park, J.-W. JTAG fault injection methodology for reliability verification of defense embedded systems [Korean]. J. Korea Acad. Ind. Coop. Soc. 2013, 14, 5123–5129. [Google Scholar] [CrossRef] [Green Version]
  22. Lee, H.-J. Statistical JTAG Fault Injection Methodology for Reliability Verification of Aerospace Embedded Systems [Korean]. Master’s Thesis, Department of Electronics and Information Engineering Korea Aerospace University, Gyeonggi, Korea, 2012. Available online: http://www.riss.kr/link?id=T12740845 (accessed on 2 September 2020).
  23. Jankowicz, D. The Easy Guide to Repertory Grids; John Wiley and Sons: Chichester, UK, 2005. [Google Scholar]
  24. Si, S.-L.; You, X.-Y.; Liu, H.-C.; Zhang, P. DEMATEL technique: A systematic review of the state-of-the-art literature on methodologies and applications. Math. Probl. Eng. 2018. [Google Scholar] [CrossRef] [Green Version]
  25. Elo, S.; Kyngäs, H. The qualitative content analysis process. J. Adv. Nurs. 2008, 62, 107–115. [Google Scholar] [CrossRef] [PubMed]
  26. White, M.D.; Marsh, E.E. Content analysis: A flexible methodology. Libr. Trends 2006, 55, 22–45. Available online: http://hdl.handle.net/2142/3670 (accessed on 2 September 2020). [CrossRef] [Green Version]
  27. Mayring, P. Qualitative content analysis. Qual. Soc. Res. 2000, 1, 1159–1176. [Google Scholar] [CrossRef]
  28. Sumrit, D.; Anuntavoranich, P. Using DEMATEL method to analyze the causal relations on technological innovation capability evaluation factors in Thai technology-based firms. Int. T. J. Eng. Manage. Appl. Sci. Technol. 2013, 4, 81–103. Available online: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.411.8216&rep=rep1&type=pdf (accessed on 2 September 2020).
  29. Asgharpour, M.J. Group Decision Making and Game Theory in Operation Research, 3rd ed.; University of Tehran Publications: Enghelab Square, Iran, 2003. [Google Scholar]
  30. Seyed-Hosseini, S.M.; Safaei, N.; Asgharpour, M.J. Reprioritization of failures in a system failure mode and effects analysis by decision making trial and evaluation laboratory technique. Reliab. Eng. Syst. Safe. 2006, 91, 872–881. [Google Scholar] [CrossRef]
  31. Ölçer, M.G. Developing a spreadsheet based decision support system using DEMATEL and ANP approaches. Master’s Thesis, DEÜ Fen Bilimleri Enstitüsü, Turkey, 2013. Available online: http://hdl.handle.net/20.500.12397/7682 (accessed on 2 September 2020).
  32. Hsu, C.-C. Evaluation criteria for blog design and analysis of causal relationships using factor analysis and DEMATEL. Expert. Syst. Appl. 2012, 39, 187–193. [Google Scholar] [CrossRef]
  33. Shieh, J.-I.; Wu, H.-H.; Huang, K.-K. A DEMATEL method in identifying key success factors of hospital service quality. Knowl. Based Syst. 2010, 23, 277–282. [Google Scholar] [CrossRef]
  34. Yang, Y.-P.O.; Leu, J.-D.; Tzeng, G.-H. A novel hybrid MCDM model combined with DEMATEL and ANP with applications. Int. J. Oper. Res. 2008, 5, 160–168. [Google Scholar]
  35. Ji, S.; Bao, X. Research on software hazard classification. Procedia Eng. 2014, 80, 407–414. [Google Scholar] [CrossRef] [Green Version]
  36. Noergaard, T. Embedded Systems Architecture: A Comprehensive Guide for Engineers and Programmers, 1st ed.; Elsevier Inc.: Oxford, UK, 2005. [Google Scholar]
  37. Choi, H.; Sung, A.; Choi, B.; Kim, J. A functionality-based evaluation model for embedded software [Korean]. J. KIISE Softw. App. 2005, 32, 1192–1205. Available online: http://www.riss.kr/link?id=A82294417 (accessed on 2 September 2020).
  38. Seo, J.; Choi, B. An interface test tool based on an emulator for improving embedded software testing [Korean]. J. KIISE Comp. Pract. Lett. 2008, 32, 547–558. Available online: http://www.riss.kr/link?id=A82300048 (accessed on 2 September 2020).
  39. Sung, A.; Choi, B.; Shin, S. An interface test model for hardware-dependent software and embedded OS API of the embedded system. Comp. Stand. Inter. 2007, 29, 430–443. [Google Scholar] [CrossRef]
  40. Rodriguez Dapena, P. Software Safety Verification in Critical Software Intensive Systems. Ph.D. Thesis, Technische Universiteit Eindhoven, Eindhoven, The Netherlands, 2002. [Google Scholar] [CrossRef]
  41. Sung, A. Interface based embedded software test for real-time operating system [Korean]. Ph.D. Thesis, Department of Computer Science and Engineering, Ewha Womans University, Seoul, Korea, 2007. Available online: http://www.riss.kr/link?id=T11039605 (accessed on 2 September 2020).
  42. Jung, H. Failure Mode Based Test Methods for Embedded Software [Korean]. Master’s Thesis, Ajou University Graduate School of Engineering, Suwon-si, Korea, 2007. Available online: http://www.riss.kr/link?id=T11077107 (accessed on 2 September 2020).
  43. Jones, N. A Taxonomy of Bug Types in Embedded Systems. Stack Overflow, Embeddedgurus.Com. 2009. Available online: https://embeddedgurus.com/stack-overflow/2009/10/a-taxonomy-of-bug-types-in-embedded-systems (accessed on 2 September 2020).
  44. Durães, J.A.; Madeira, H.S. Emulation of software faults: A field data study and a practical approach. IEEE Trans. Softw. Eng. 2006, 32, 849–867. [Google Scholar] [CrossRef]
Figure 1. Research procedure for classification of collected defects and derivation of core defects.
Figure 2. The final reliability table that was agreed upon between the researchers and collaborators based on the standardization of terms shown in Table 7.
Figure 3. The impact relationship map of embedded software defects.
Table 1. Huh and Kim's [5] references for pure software defects.
Researchers | Software Defects
IEEE 1044 [6] | IEEE 1044-1993 standard classification for software anomalies
HP Huber JT [7] | Hewlett Packard's defect origins, types, and modes
IBM ODC [8] | Orthogonal defect classification (ODC) for software design and code
Other researchers | Collected software defects defined by 21 researchers worldwide
Table 2. Huh and Kim's [5] classified list of pure software defects.
Category | Defects | Sub-Defects
Logic | Conditional statement | Checking, duplicating IF statement, empty IF statement, compared with other variables, missing important conditions (case, etc.)
 | Rotation logic | Infinite loops, infinite recursions, algorithm, logic sequences, flow control, error checking, check scope, status handling, missing a step
 | Concurrent logic | Synchronization, race conditions, mutual exclusions, critical sections, concurrent processing, coordination process, condition loads
Interface, Timing | External interface | Human interface error, different protocols, incorrect protocols
 | Wrong function (internal interface) | Return pointers, incorrect application programming interface (API), software architecture, function/class/object relationship, no existence subroutines, missing return values, incorrect parameter's error, calls incorrect subroutines, calls incorrect module, incorrect interrupt
 | I/O timing | Time overrun, incorrect input/output (I/O) timing
Computation | Division by zero | Divide by zero
 | Expression | Wrong operator, incorrect parenthesis usage, different unit calculations, incorrect sign usage, missing expressions, wrong expressions
 | Precision loss | Mixed modes, round/truncation calculations, underflow, overflow
Data | Data structure | Error of data design, wrong data structures, wrong data units, pack/unpack
 | Data usage | Leaks, use after free, un-assignment, initialization pointer, memory violation, other variable type usages, other variable use dimensions, null pointers, wrong index, use other flags, error script variable usage, save/access errors, initialization errors
 | Data value self | Wrong input data, wrong operation data, wrong external data, wrong sensor data
Table 3. The classification procedures of bootstrapping content analysis.
Step | Procedure
1 | Create an appropriate category that contains the attributes of the first element
2 | Create a new category if the following element is different from the first element
3 | Distribute the following elements into similar categories
4 | Combine and detach existing categories as needed to create new ones
5 | Repeat until all elements are classified
6 | Place all unclassifiable elements in the miscellaneous group
7 | Repeat classification until the elements in the miscellaneous group account for less than 5% of the total
Table 4. An example of a reliability table where elements are matched by researchers and collaborators.
Researcher \ Collaborator | Category C1 (matched with R1) | Category C2 (matched with R2) | Category C3 (non-matched)
Category R1 (matched with C1) | 1.1, 2.1, 4.2, 3.2, 3.3 (matched) | 1.4 (non-matched) | -
Category R2 (matched with C2) | - | 2.2, 2.3, 2.4, 3.1, 3.4 (matched) | 2.5 (non-matched)
Category R3 (non-matched) | - | 1.2, 4.3 (non-matched) | -
Category R4 (non-matched) | - | 4.4 (non-matched) | 4.1, 1.3 (non-matched)
Table 5. Embedded software defects and sources.
Researcher | Embedded Software Defects | Researcher | Embedded Software Defects
Barr [9]Race condition
Non-reentrant function
Missing volatile keyword
Stack overflow
Heap fragmentation
Barr [10]Memory leak
Deadlock
Priority inversion
Incorrect priority assignment
Ji and Bao [35]Initialize
Input
Interface
Output
Control
Fault detection
Fault handles
Performance
Noergaard [36]Managing data: serial and parallel I/O
Interfacing the I/O components
Device driver
Flash memory
Multitasking and process management
Memory management
I/O and file system management
Choi et al. [37]Task management
Inter-task communication
Time management
Interrupt
Signal processing
I/O management
Memory management
Networking
File system
Jung et al. [14]Types
Declarations and definitions
Pointer type conversion
Arithmetic type conversion
Expressions
Control flow
Control statement expressions
Switch statements
Functions
Hagar [12]Data computation bug
Structural logic flow
Long duration control
Logic and control law
Data
Computation
Software(S/W)-to-Hardware(H/W) interface
H/W-to-S/W interface
S/W-system fault tolerance
S/W error recovery
H/W to S/W communications bug
Time-related
Human interface
Seo and Choi [38]Memory
Timer
I/O device
Task management
Exception handling
Inter-task communication
Virtual memory management
Physical memory management
Time management
Interrupt handling
I/O management (i.e., device driver I/O)
Networking
File system
Sung et al. [39]Task management
Inter-task management
Time management
Interrupt/signal/exception handling
Memory management
I/O management
Networking
File system
I/O device
Timer
Hardware initialization
Rodriguez -Dapena [40]Calculations faults
Data faults
Internal interface faults
Logic faults
Control flow faults
Interface between components
Control flow between components
H/W to S/W interface faults
H/W to S/W interface
User interface faults
Seo [16]S/W-to-H/W interface
S/W-to-S/W interface
Lee [18]Exception handling
Lutz [11]Process flowInterface specificationBennett and Wennberg [15]Internal faults
Interface faults
Program faultInternal faults
Interface faults
Functional faults
Function faultsOperating faults
Condition faults
Behavior faults
Sung [41]Kernel
Interface
Task management
Inter-task communication
Time management
Interrupt/exception handling
Memory management
I/O management
Networking
File system
Lee [22]Assignment
Checking
Interface
Algorithm
Lee et al. [20]; Lee and Park [21]Time out
Data violation
Complete with delay
Error without effect
Exception
Hardware interface
Jung
[42]
Control logicJones [43]Getting bored and runningRun-time environment (e.g., stack and heap allocation, memory models, etc.)
SensorMissing S/W logic between sensor and system
Wrong analog/digital conversion
Knocking off the obvious mistakesInitialization
Pointer dereferencing
Arithmetic errors
KeyWrong A/D conversion table,
Key alone event
Background/foreground issuesReentrancy
Atomicity
Interrupt response times
LCD panelDisplay error
IndicatorIndication error
Buzzer
Timing relatedResource allocation mistake
Priority/schedule issues
Deadlocks
Priority inversion
Race conditions
Motor actuator
Analog/digital(A/D) conversion error
Digital/analog(D/A) conversion error
Durães and Madeira [44]; Cotroneo et al. [19]Missing variable initialization
Missing variable assignment with a value
Missing variable assignment with an expression
The incorrect value assigned to a variable
Missing function call
Missing IF construct + statements
Missing IF construct + statements + ELSE construct
Missing small and localized part of the algorithm
Missing IF construct around statements
Missing AND in expression used as branch condition
Missing OR in expression used as branch condition
Wrong variable used in the parameter of function call
Wrong arithmetic expression in function call parameter
Lee et al. [13]Input data
handling logic
Analog/digital sampling
Analog/digital conversion
Fail-safe
Interrupt
Control logicExpression
Data processing
Branch control
Loop control
Output data
Handling logic
Output port set
Abort output
(Incorrect time, feedback control error)
Fail-safe
YN Choi [17]Memory allocationLeakage
Zero allocation
Fail allocation
Memory accessNull pointer access
Free pointer access
Invalid pointer access
Outbound access
Collision
Memory freeIllegal free
Null pointer free
Duplicate free
Table 6. Four embedded software experts involved in terms standardization.
Expert | Field of Embedded Software Development | Years of Embedded Software Experience | Expert Class or Certification
Expert 1 | Mobile, Internet of Things | 20 years | IT auditor
Expert 2 | Internet cable TV, IoT, device driver | 10 years | Professional engineer
Expert 3 | Mobile, industrial device control | 8 years | Top engineer
Expert 4 | Industrial control | 8.5 years | Professional engineer
Table 7. Standardized embedded software defect terms.
Code | Embedded Software Defects | Code | Embedded Software Defects
1-1 | Data access | 4-1 | Wrong interrupts
1-2 | Shared memory | 4-2 | Incorrect subroutine called
1-3 | Data violation | 4-3 | Nonexistent subroutine call
1-4 | Data boundary error | 4-4 | Wrong parameter
1-5 | Type mismatch | 4-5 | Inter-task communication
1-6 | Save storage data | 4-6 | Internal interface
1-7 | Flash memory | 4-7 | Module interface
1-8 | Memory initialization | 4-8 | Incorrect API usage
1-9 | Memory management | 4-9 | Wrong protocol
1-10 | Memory access | 4-10 | Software architecture
1-11 | Resource leaks | 4-11 | Exception handling
1-12 | Memory free error | 4-12 | None sensor logic
1-13 | Memory overflow error | 5-1 | Missing computation
1-14 | Memory violation error | 5-2 | Incorrect operand and operator
2-1 | Wrong H/W interface | 5-3 | Incorrect parenthesis
2-2 | I/O devices | 5-4 | Round and truncate
2-3 | User interfaces | 5-5 | Sign convention
2-4 | External interface | 5-6 | Divide by zero
2-5 | Send and receive packets error | 5-7 | Arithmetic overflow and underflow
2-6 | Networking | 6-1 | Wrong logic
2-7 | Input value error | 6-2 | Non-reentrant function
2-8 | Output signal | 6-3 | Wrong objects
2-9 | Data I/O process | 6-4 | Wrong relationship
2-10 | Incorrect sensor data | 6-5 | Incorrect return
3-1 | Optimization | 6-6 | Logic error
3-2 | Time out | 7-1 | Infinite loops
3-3 | Time fault causes data loss | 7-2 | If and case statements
3-4 | Complete with delay | 7-3 | Check variables
3-5 | Time delay | 7-4 | Serialization
3-6 | Feedback control error | 7-5 | Deadlock
3-7 | Set time and read | 7-6 | Concurrent processing
3-8 | Time management | 7-7 | Task management
- | - | 7-8 | Recursion
Table 8. Derived final embedded software defects.
Code | Embedded Software Defects | Definition | Sub-Defects
E1 | Wrong logic | Control logic and calculation | Control flow, if, case, loop statements, divided by zero
E2 | Wrong function | Function itself defects | Non-reentrant function, incorrect objects
E3 | Task management | Concurrent processing error | Deadlock, race condition, task management
E4 | Exception handling | Device driver exception handle error | Software exception handling excluding device driver error
E5 | Internal software interface | Communication error between software | Internal interface, inconsistent module interface, wrong parameter
E6 | External interface | Communication error with the external system | Networking, send and receive packet error, human interface
E7 | Device driver | Hardware control device driver | I/O device, I/O port process, I/O device status
E8 | Hardware interrupt | The processing routine for hardware interrupt | Non interrupt routine, incorrect interrupt routine, process error
E9 | Timing error | Defects that cannot complete in time | Time out, time delay, feedback control error, set time and read time
E10 | Data, shared memory | Data and static memory | Data definition, data access, shared memory
E11 | Dynamic memory | Defect using dynamic memory | Memory initialization, memory management, resource leaks, memory overflow
E12 | Flash memory and file system | Data storage device | Flash memory, storage data save
Table 9. Summary of Cronbach's α analysis results for survey of embedded expert respondents.
No. | Field of Embedded Software Development | Number of Years of Embedded Software Experience | Respondent Class or Certification | Scale Average if This Item was Deleted | Scale Distribution if This Item was Deleted | Modified Full Correlation | Cronbach's α if This Item was Deleted
R1 | IP CCTV, IoT, Device driver | 10 | P.E. * | 26.639 | 98.358 | 0.765 | 0.895
R2 | Device driver | 6.5 | P.E. * | 26.882 | 99.615 | 0.736 | 0.897
R3 | Industrial device control | 6 | P.E. * | 27.778 | 103.559 | 0.312 | 0.911
R4 | Mobile, IoT | 20 | IT auditor | 27.333 | 101.231 | 0.534 | 0.902
R5 | Mobile, IoT | 9.5 | IT auditor | 27.222 | 99.447 | 0.676 | 0.898
R6 | Mobile | 10.5 | IT auditor | 27.132 | 97.346 | 0.671 | 0.897
R7 | Mobile | 14 | P.E. * | 27.681 | 102.680 | 0.584 | 0.901
R8 | Mobile, IoT | 12 | P.E. * | 27.208 | 99.649 | 0.546 | 0.902
R9 | IoT | 8 | Top engineer | 27.111 | 98.155 | 0.632 | 0.899
R10 | Network cam | 7 | P.E. * | 26.757 | 98.843 | 0.717 | 0.897
R11 | Mobile, industrial device control | 8 | Top engineer | 27.819 | 101.743 | 0.478 | 0.904
R12 | Industrial control | 8.5 | P.E. * | 27.882 | 98.748 | 0.569 | 0.901
R13 | Mobile | 10 | P.E. * | 27.882 | 98.748 | 0.569 | 0.901
R14 | Intrusion Prevention System | 6 | P.E. * | 27.569 | 97.939 | 0.477 | 0.906
R15 | IoT | 5 | Top engineer | 26.604 | 98.507 | 0.693 | 0.897
R16 | Home automation | 5 | Top engineer | 27.167 | 96.252 | 0.616 | 0.900
Average | - | 9.125 | N/A | N/A | N/A | N/A | 0.906
* P.E.: Professional engineer.
Table 10. Generalized cause and effect matrix (A).
Code | E1 | E2 | E3 | E4 | E5 | E6 | E7 | E8 | E9 | E10 | E11 | E12
E1 | 0.0000 | 2.3125 | 2.6250 | 2.3750 | 1.6875 | 1.6250 | 2.4375 | 2.1875 | 2.6250 | 2.5000 | 2.6250 | 2.5000
E2 | 2.3125 | 0.0000 | 2.7500 | 2.3750 | 2.3125 | 2.1875 | 2.1875 | 2.1250 | 2.1875 | 2.2500 | 2.3125 | 1.6250
E3 | 1.9375 | 1.7500 | 0.0000 | 2.1250 | 2.3750 | 2.4375 | 2.2500 | 2.5625 | 2.6250 | 2.3125 | 2.3125 | 1.6875
E4 | 2.3125 | 1.9375 | 2.3125 | 0.0000 | 1.7500 | 1.9375 | 1.7500 | 1.8125 | 2.0625 | 1.7500 | 1.6250 | 1.3750
E5 | 1.9375 | 2.0625 | 2.3750 | 1.9375 | 0.0000 | 2.1875 | 2.0625 | 1.8750 | 2.2500 | 1.3750 | 1.6250 | 1.5000
E6 | 1.6250 | 1.3750 | 1.7500 | 1.6875 | 1.7500 | 0.0000 | 1.8750 | 2.0625 | 2.8125 | 1.4375 | 1.3750 | 0.8750
E7 | 1.5625 | 1.5000 | 1.8125 | 1.6250 | 1.6875 | 2.1875 | 0.0000 | 2.3125 | 2.5000 | 1.8125 | 1.8750 | 1.2500
E8 | 1.3750 | 1.2500 | 1.9375 | 1.3125 | 1.6250 | 2.1875 | 2.7500 | 0.0000 | 2.5000 | 1.6875 | 1.6875 | 1.4375
E9 | 1.6250 | 1.6875 | 2.3750 | 1.7500 | 2.0000 | 2.7500 | 2.4375 | 2.3750 | 0.0000 | 1.6875 | 1.3750 | 1.3750
E10 | 2.3125 | 2.3750 | 2.3125 | 2.0000 | 2.0000 | 1.8750 | 2.0000 | 1.9375 | 1.9375 | 0.0000 | 2.3125 | 1.6875
E11 | 2.2500 | 2.3125 | 2.1875 | 2.1250 | 2.0000 | 1.7500 | 2.0000 | 1.8750 | 2.0625 | 2.3125 | 0.0000 | 1.6250
E12 | 1.6875 | 1.6250 | 2.1250 | 1.3750 | 1.6875 | 1.7500 | 2.0625 | 2.3125 | 1.8125 | 2.1250 | 1.9375 | 0.0000
Table 11. Total cause and effect matrix (T).
Code | E1 | E2 | E3 | E4 | E5 | E6 | E7 | E8 | E9 | E10 | E11 | E12
E1 | 0.4742 | 0.5415 | 0.6444 | 0.5562 | 0.5387 | 0.5825 | 0.6266 | 0.6103 | 0.6662 | 0.5715 | 0.5711 | 0.4786
E2 | 0.5421 | 0.4435 | 0.6307 | 0.5416 | 0.5444 | 0.5846 | 0.6007 | 0.5907 | 0.6343 | 0.5467 | 0.5448 | 0.4360
E3 | 0.5205 | 0.4992 | 0.5229 | 0.5239 | 0.5378 | 0.5845 | 0.5939 | 0.5961 | 0.6390 | 0.5395 | 0.5353 | 0.4306
E4 | 0.4732 | 0.4474 | 0.5365 | 0.3879 | 0.4563 | 0.5006 | 0.5081 | 0.5035 | 0.5472 | 0.4602 | 0.4526 | 0.3714
E5 | 0.4677 | 0.4583 | 0.5472 | 0.4657 | 0.3998 | 0.5181 | 0.5274 | 0.5144 | 0.5631 | 0.4548 | 0.4595 | 0.3811
E6 | 0.4099 | 0.3897 | 0.4718 | 0.4103 | 0.4168 | 0.3884 | 0.4686 | 0.4684 | 0.5256 | 0.4086 | 0.4031 | 0.3210
E7 | 0.4337 | 0.4189 | 0.5034 | 0.4336 | 0.4405 | 0.4951 | 0.4291 | 0.5054 | 0.5459 | 0.4477 | 0.4462 | 0.3549
E8 | 0.4189 | 0.4024 | 0.4980 | 0.4148 | 0.4305 | 0.4869 | 0.5181 | 0.4140 | 0.5368 | 0.4355 | 0.4319 | 0.3545
E9 | 0.4573 | 0.4456 | 0.5472 | 0.4594 | 0.4730 | 0.5383 | 0.5412 | 0.5323 | 0.4838 | 0.4653 | 0.4514 | 0.3766
E10 | 0.5112 | 0.4991 | 0.5803 | 0.4982 | 0.5026 | 0.5394 | 0.5591 | 0.5495 | 0.5879 | 0.4345 | 0.5140 | 0.4130
E11 | 0.5048 | 0.4927 | 0.5711 | 0.4980 | 0.4981 | 0.5304 | 0.5540 | 0.5423 | 0.5865 | 0.5130 | 0.4262 | 0.4072
E12 | 0.4466 | 0.4317 | 0.5238 | 0.4335 | 0.4490 | 0.4888 | 0.5134 | 0.5150 | 0.5325 | 0.4679 | 0.4581 | 0.3156
Table 12. Results of the cause and effect analysis of embedded software defects.
Code | Defects | D | R | D+R | D-R
E1 | Wrong logic | 6.86 | 5.66 | 12.52 | 1.20
E2 | Wrong function | 6.64 | 5.47 | 12.11 | 1.17
E3 | Task management | 6.52 | 6.58 | 13.10 | -0.05
E4 | Exception handling | 5.64 | 5.62 | 11.27 | 0.02
E5 | Internal software interface | 5.76 | 5.69 | 11.44 | 0.07
E6 | External interface | 5.08 | 6.24 | 11.32 | -1.16
E7 | Device driver | 5.45 | 6.44 | 11.89 | -0.99
E8 | Hardware interrupt | 5.34 | 6.34 | 11.68 | -1.00
E9 | Timing error | 5.77 | 6.85 | 12.62 | -1.08
E10 | Data, shared memory | 6.19 | 5.75 | 11.93 | 0.44
E11 | Dynamic memory | 6.12 | 5.69 | 11.82 | 0.43
E12 | Flash memory and file system | 5.58 | 4.64 | 10.22 | 0.94
Table 13. Embedded software defects sorted with D+R and D-R.
Code | Defects Sorted by D+R | D+R | Code | Defects Sorted by D-R | D-R
E3 | Task management | 13.10 | E1 | Wrong logic | 1.20
E9 | Timing error | 12.62 | E2 | Wrong function | 1.17
E1 | Wrong logic | 12.52 | E12 | Flash memory and file system | 0.94
E2 | Wrong function | 12.11 | E10 | Data, shared memory | 0.44
E10 | Data, shared memory | 11.93 | E11 | Dynamic memory | 0.43
E7 | Device driver | 11.89 | E5 | Internal software interface | 0.07
E11 | Dynamic memory | 11.82 | E4 | Exception handling | 0.02
E8 | Hardware interrupt | 11.68 | E3 | Task management | -0.05
E5 | Internal software interface | 11.44 | E7 | Device driver | -0.99
E6 | External interface | 11.32 | E8 | Hardware interrupt | -1.00
E4 | Exception handling | 11.27 | E9 | Timing error | -1.08
E12 | Flash memory and file system | 10.22 | E6 | External interface | -1.16
Table 14. Opinions of embedded software development experts on important defects.
CodeDefectsOpinions of 10 Embedded Software ExpertsRank
12345678910
E1Device driver644 1231 14
E2H/W interrupt42223145 41
E3Timing error53 54523653
E4Data and shared memory 65 5
E5Dynamic memory25 6
E6Flash memory and file system1 6 52
E7Internal software interface 1
E8External interface 6335312232
E9Wrong logic 1 6 36
E10Wrong function 44
E11Task management3 51 45 5
E12Exception handling 442 6 6
