Finest Magic Cloth or a Naked Emperor? The SKQuest Data Set on Software Metrics for Improving Transparency and Quality
Abstract
1. Introduction
“You software guys are too much like the weavers in the story about the Emperor and his new clothes. When I go out to check on a software development the answers I get sound like, ‘We’re fantastically busy weaving this magic cloth. Just wait a while and it’ll look terrific.’ But there’s nothing I can see or touch, no numbers I can relate to, no way to pick up signals that things aren’t really all that great. And there are too many people I know who have come out at the end wearing a bunch of expensive rags or nothing at all”.[5]
- The SKQuest data set that is provided as supplementary material to this paper,
- The survey instrument (see Section 2 and Appendix A),
- Key figures of the SKQuest data set (see Section 3.1),
- A demonstration of the data set by using its data to investigate:
  - Whether transparency is a problem in software projects,
  - Whether there is a desire for more transparency,
  - Whether metrics can improve transparency,
  - How a metric tool fits into the existing tool landscape (Section 3.2).
- An attempt to replicate findings from a smaller study [6] regarding how the participants’ role and situation influence the perception of metrics with the larger SKQuest data set (see Section 3.3),
- A discussion of the above findings and the SKQuest data set (see Section 4).
2. Materials and Methods
2.1. Survey Background
2.2. Original Goals of the Survey and the Four Primordial Questions
- Is transparency a problem in software projects?
- Is there a desire for more transparency in such projects?
- Can metrics contribute to improving the situation?
- How can AENEAS fit into the existing tool landscape?
2.3. Conducting the Survey
2.4. Recruitment of Participants
- Personal contacts as participants,
- Personal contacts acting as multipliers who emailed their contacts and posted messages on social media platforms,
- Presentations and presence at two European conferences, an industry workshop on software for space, and an ECSS standardization group meeting,
- Providing a web-based form so that participants could easily invite other participants (using a mailto link),
- Paid advertisements in search engines and social media networks.
2.5. The Survey Instrument
2.6. The Underlying Quality Model
2.7. Overview of the Software Metrics in SKQuest
2.8. Data Filtering, Permutation, Correcting, and Amending
2.9. Related Work
3. Results
3.1. Key Figures of the Data Set
3.2. Answering the Four Primordial Questions
3.2.1. Question 1: Is Transparency a Problem in Software Development Projects?
3.2.2. Question 2: Is There a Desire for More Transparency in Projects?
3.2.3. Question 3: Can Metrics Contribute to Improving the Situation?
3.2.4. Question 4: How Can AENEAS Fit into the Current Tool Landscape?
3.3. Replication of Earlier Results
- (Qi) Do you agree with the statement about software and the emperor’s new clothes?
- (Qii) Do you wish for more transparency in software development?
- (Qiii) Would regular delivery of ECSS metrics help you fulfill your role?
3.3.1. Mapping of the Attitude Variables
3.3.2. Mapping Demographic Variables and Replication of Statements
4. Discussion
4.1. Potential Effects on Existing and Future Standards
4.2. Threats to Validity
5. Conclusions
- Transparency is a problem in software development projects, i.e., stakeholders, in particular customers, lack transparency. Increased transparency leads to higher satisfaction with project execution, improves processes, and reduces the fear of overlooking project risks.
- Although respondents feel quite well informed and acknowledge certain risks from transparency itself, there is a desire for more transparency.
- Metrics are not the primary means of improving transparency in a software development project, but they can help.
- To improve the benefit of software metrics, they should be exchanged between supplier and customer. However, opinions regarding delivery frequency are polarized. Metrics for functionality, reliability, and maintainability are desired the most.
Supplementary Materials
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
- 5LI: 5-point Likert-scaled item. The range usually goes from fully disagree (−2) through neutral (0) to fully agree (2).
- MLI: multiple 5-point Likert-scaled items.
- MC: multiple-choice questions, i.e., multiple-answer options can be selected. The responses are stored as multiple boolean yes/no values for the respective options.
- NT: number entered as text.
- SC: single-choice question; only one answer option can be selected.
- YN: single-choice question with only yes or no answers.
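The coding of the Likert-scaled items above can be sketched as a small decoding helper. This is a minimal sketch under stated assumptions: the two intermediate label strings are assumptions (the data set may instead store the integer codes from −2 to 2 directly).

```python
# Sketch: map 5-point Likert labels (5LI/MLI items) to the integer codes
# -2..2 described above. The intermediate labels "rather disagree" and
# "rather agree" are assumptions; check the supplementary material for
# the actual encoding used in SKQuest.
LIKERT_CODES = {
    "fully disagree": -2,
    "rather disagree": -1,  # assumed label
    "neutral": 0,
    "rather agree": 1,      # assumed label
    "fully agree": 2,
}

def encode_likert(label: str) -> int:
    """Return the integer code for a 5-point Likert label."""
    return LIKERT_CODES[label.strip().lower()]
```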
Topic | QT | Question Item and Answer Options |
---|---|---|
Introduction | n/a | Background and organization of the survey (e.g., context, topic, duration, etc.), information regarding anonymity, and voluntariness of participation. |
Metadata | n/a | $mDuration: Duration of the interview in seconds. $mComplete: True iff the interview was completed successfully. $mCompleteness: Estimates how complete the response is, i.e., for how many logical blocks of questions there is an answer. Note that complete responses may have $mCompleteness < 1 since respondents might have skipped questions. $mBogus: True iff a participant is suspected of not having answered seriously, as determined by manual analysis. We recommend filtering these out; however, since we may have misjudged, we leave the responses in the data set. |
Demographics | ||
Involved in software | YN | $involvedInSw: Whether the participant is involved in at least one project that includes the development, procurement, contracting, or integration of software. |
Domain | MC | Domains the participant works in. While this item provided over 20 answer options, the values are mapped to only two values for anonymization reasons: $domAerospace: Whether the participant is working on aerospace projects. $domNonAero: Whether the participant is working on non-aerospace projects. |
Office location | | Country where the participant’s office is located. Data is not included for anonymization reasons. |
Company size | | Size of the participant’s company/institution in five predefined size levels. Data is not included for anonymization reasons. |
Experience | MC | Several multiple-choice questions regarding experience: $expYDom: Years of professional experience in the domain: “<5”, “5–10” (precisely: more than 5 but no more than 10 years), “10–20” (precisely: more than 10 but no more than 20 years), “>20”. $expSwDev: Software development experience: “Very low” to “very high”. $expSwMet: Software metrics experience: “Very low” to “very high”. |
Development culture | SC | Background of the participant to give a rough indication of the kind and culture of development work. $culture: “Is your work more focused on research or products?” Values: Fundamental (4) or applied research (3) vs. product development (2) or manufacturing (1). $pubOrPriv: The variable stores information on whether the respondent’s organization is a “private” (for-profit) company or a “public” institute/institution. |
Project demographics | ||
Choose “the project” | n/a | Upon reaching this part and before continuing with the questionnaire, participants were instructed to now think of one concrete current or past project. For the remainder of this survey, participants should answer project-related questions with respect to this project. |
Project role | MC | The role or roles that the participant fulfilled in the context of the project. $roleClvl: C-level management. $roleHead: head of division/department/team/organizational unit. $rolePM: project manager. $roleEngL: head of engineering/development. $roleCM: configuration manager. $roleDev: engineer or developer. $roleQA: (software) quality or product assurance. $roleAdm: controller, project administrator, legal support. $roleStud: student; originally not included but frequently named as “other” category. $roleOth: other. |
The agility of project management | SC | $agility: How is the project managed? Fully traditional, Rather traditional, Equally agile and traditional, Rather agile, and Fully agile. |
Project budget | SC | $budget: Average overall annual budget of the project. Values: “<100 K €”, “100 K–1 M €”, “1 M–10 M €”, and “>10 M €”. |
Customer vs. supplier | MC | The customer and supplier roles in projects often imply very distinct views. This aspect needs to be reflected in the questions presented to participants. The participants’ responses to this question therefore had an enormous impact on what the rest of the survey looked like. “Software is developed …” $isSupplier: “… for an external customer”. $isCustomer: “… by an external supplier”. $isDev4Self: “… by my own organization for our own purposes or our own products”. Note: Some participants reported problems assigning their project to one of the three options. We therefore added an “Other” option that enables all question items. However, this did not add usable results. |
Public customer | SC | $custPP: “Is your direct customer in “the project” a public sector entity or a private entity?” (Only suppliers). Values: public, or private. |
Status quo of project execution, product quality, and transparency | ||
Satisfaction with project/product quality | MLI | Status quo of and satisfaction with project execution and product quality. “I am completely satisfied with …” $ssSoftware: “… the software delivered by our organization”. (Only suppliers). $ssProcess: “… the compliance to the processes used for software development for our customer”. (Only suppliers). $ssEfficiency: “… the efficiency of the processes used for software development for our customer”. (Only suppliers). $scSoftware: “… the software delivered to me by my external software supplier”. (Only customers). $scProcess: “… the compliance to the processes used by my external software supplier”. (Only customers). $scEfficiency: “… the efficiency of the processes used by my external software supplier”. (Only customers). $soSoftware: “… the software produced by my own organization for our own purposes”. (Only internal development). $soProcess: “… the compliance to the processes used for the development of software by my own organization for our own purposes”. (Only internal development). $soEfficiency: “… the efficiency of the processes used for the development of software by my own organization for our own purposes”. (Only internal development). |
Satisfaction with transparency | MLI | Satisfaction with the status quo of transparency in the project. $ssInfoStatus: “I feel well informed about the development status quo of the software delivered by us to our external customer in the project”. (Only suppliers). $ssSurprises: “Surprises (e.g., regarding schedule or cost) happen regarding the development of software delivered by us to our external customer in the project”. (Only suppliers). $ssInfoQuality: “I feel well informed about the quality (e.g., functionality, dependability, …) of software developed and delivered by us to our external customer in the project”. (Only suppliers). $ssOverinfo: “The external customer knows too much about the project and our processes”. (Only suppliers). $scInfoStatus: “I feel well informed about the development status quo of the software delivered to us by our external suppliers in the project”. (Only customers). $scSurprises: “Surprises (e.g., regarding schedule or cost) happen regarding the development of software delivered to us by our external supplier(s) in the project”. (Only customers). $scInfoQuality: “I feel well informed about the quality (e.g., functionality, dependability, …) of software developed and delivered to us by our external supplier(s) in the project”. (Only customers). $scOverinfo: “Receiving less information from our external supplier(s) in the project would be fine for me”. (Only customers). $soInfoStatus: “I feel well informed about the development status quo of the software developed internally in the project”. (Only internal development). $soSurprises: “Surprises (e.g., regarding schedule or cost) happen regarding the development of software developed internally in the project”. (Only internal development). $soInfoQuality: “I feel well informed about the quality (e.g., functionality, dependability, …) of software developed internally in the project”. (Only internal development). $soOverinfo: “Other organizational entities of my organization get to know too much about the project”. |
Transparency activities | MC | Activities that are in place that increase transparency. “Which of the following activities are already part of the development process of the software?” $tRegRel: “Regular releases of intermediate software versions to the customer”. $tRegManMeet: “Regular face-to-face meetings between managing representatives of the different stakeholders”. $tRegTechMeet: “Regular face-to-face meetings between the technical staff of the different stakeholders”. $tRegReflect: “Regular reflection of the effectiveness of the process and possible improvement”. $tFocusInteract: “Focus on interaction between individuals rather than between institutions”. $tUseMet: “Regular use of software metrics by the software team”. $tDlvrMet: “Regular delivery of software metrics to the customer”. $tCommMet: “Regular communication of software metrics to other stakeholders within the developing organization”. $tMiles: “Regular milestone or phase reviews with quality gates”. $tDoc: “Detailed documentation”. $tDocUpd: “Regular updates of documentation”. $tTeamVisit: “Regular visits to the software team by the customer”. |
Emperor’s New Clothes | 5LI | Agreement with the anecdote of the metaphorical comparison between software developers and the weavers of the emperor’s new clothes. |
Metric use | SC | Current use of metrics in the project. $tsMetReport: “How many metrics do you report regularly for your project?” (Only suppliers or internal development). $tcMetReport: “How many metrics are reported regularly to you for your project?” (Only customers or internal development). Values: “None”, “<5” (Up to 5), “5–10” (More than 5 but no more than 10), and “>10” (More than 10). |
Metric format | SC | Current use of machine-readable data formats for reporting metrics. $tsMetMachRead: “Do you report the software metrics in a machine-readable format (e.g., XML and CSV)?” (Only suppliers or for internal development). $tcMetMachRead: “Are software metrics reported in a machine-readable format to you (e.g., XML and CSV)?” (Only customers or for internal development). Values: Yes, Partially, and No. |
The role of transparency for project success | ||
Process and quality | SC | This item establishes a relationship between process quality and product quality, i.e., how important the process is deemed for product quality. $relStrProcQual: “How important do you consider the software development process for the quality of the developed product?” Note: The question item was implemented as a slider. Values: 0.0 (unimportant), 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, and 1.0 (important). |
Effects of transparency | MLI | A broad analysis of the wide range of effects that transparency can have on the project, its execution, and its environment. “Increased transparency …” $tEffMeetObj: “… reduces the risk of not meeting the development objectives”. $tEffMeetBudget: “… reduces the risk of exceeding the budget”. $tEffMeetQual: “… reduces the risk of insufficient product quality”. $tEffIncEffort: “… substantially increases the effort on the side of the information provider”. $tEffNoBenefitS: “… has no benefits for the information provider”. $tEffNoBenefit: “… benefits no one”. $tEffImpStakCom: “… improves communication between stakeholders”. $tEffImpTrustInS: “… strengthens the customer’s trust in the external supplier(s)”. $tEffExploitS: “… gives the customer leverage to take advantage of the external supplier(s)”. $tEffInfoLeak: “… increases the risk of information leaks to competitors”. $tEffImpOwnrspC: “… leads to a stronger sense of product ownership by the customer”. $tEffImpCtrl: “… improves control over the development progress, reducing schedule risks”. $tEffIncDiscuss: “… also implies more effort for discussing the information with the customer”. $tEffProcImpInsight: “… gives better insight into possible improvements of development processes”. $tEffIncCResponsibility: “… increases the customer’s co-responsibility for failure”. $tEffAssessQuality: “… allows for more objective assessment of product quality”. $tEffImpCommitC: “… leads to improved commitment by the customer”. $tEffAssessProcImp: “… allows for more objective assessment of process improvements”. $tEffAssessEmpPerf: “… enables more objective performance assessment of team members and employees”. $tEffInappEmpMonitor: “… makes employees feel inappropriately monitored”. $tEffInappSMonitor: “… may be considered inappropriate monitoring by external supplier(s)”. $tEffExploitEmp: “… enables employers to take advantage of their employees”. $tEffEmpReplaceable: “… makes people become more easily replaceable”. |
Transparency and project success | SC | Expectations of customers and suppliers of how increased transparency affects the quality of process, product, and project execution, i.e., whether it gets worse or better. $tEffSProc: “Regarding the software provided by you, if transparency increases relative to its current state, the process …” (Only supplier). $tEffSProd: “Regarding the software provided by you, if transparency increases relative to its current state, the product …” (Only supplier). $tEffSProj: “Regarding the software provided by you, if transparency increases relative to its current state, the project execution …” (Only supplier). $tEffCProc: “Regarding the software provided to you, if transparency increases relative to its current state, the process …” (Only customer). $tEffCProd: “Regarding the software provided to you, if transparency increases relative to its current state, the product …” (Only customer). $tEffCProj: “Regarding the software provided to you, if transparency increases relative to its current state, the project execution …” (Only customer). $tEffOProc: “Regarding the software provided internally, if transparency increases relative to its current state, the process …” (Only internal development). $tEffOProd: “Regarding the software provided internally, if transparency increases relative to its current state, the product …” (Only internal development). $tEffOProj: “Regarding the software developed internally, if transparency increases relative to its current state, the project execution …” (Only internal development). Values: “(gets) much better” (5), “(gets) slightly better” (4), “(is) not affected” (3), “(gets) slightly worse” (2), and “(gets) much worse” (1). |
Increasing transparency | ||
Acceptance of increased transparency | 5LI | Likert-scale item measuring whether suppliers and development teams would accept an increase in transparency. $accIncT: “I would be willing to increase the transparency of our process and the current state of software development in the project for our customer(s)”. (Only suppliers and internal development). |
Activities that increase transparency | MC | Assessment of whether certain activities are useful for increasing transparency. Note: The role of metrics for increasing transparency is discussed separately and in more detail in the next block. “What other measures do you consider useful for increasing transparency?” $tIncARegManMeet: “More face-to-face meetings between managing representatives of the different stakeholders”. $tIncARegTechMeet: “More face-to-face meetings between the technical staff of the different stakeholders”. $tIncARegReflectCS: “More frequent reflection on the effectiveness of the process and possible improvements by customer and supplier”. $tIncARegReflectTeam: “More frequent reflection on the effectiveness of the process and possible improvements by the software team”. $tIncARegReflectOrg: “More frequent reflection on the effectiveness of the process and possible improvements by different units of the organization”. $tIncACloserCoopCS: “Closer cooperation between customer and supplier”. $tIncAFocusInteract: “Increased focus on the interaction between individuals rather than between institutions”. $tIncAMilestones: “More milestone or phase reviews with quality gates”. $tIncATeamVisits: “More frequent visits to the software team by the customer”. $tIncADoc: “Provision of more detailed documentation”. $tIncADocUpd: “Provision of more up-to-date documentation”. |
Increasing transparency through metrics | 5LI | Several Likert-scale items measuring the respondents’ assessment of whether metrics can increase transparency: $tIncAMetCFreq: “Transparency increases for the customer when software metrics are delivered to the customer more often”. $tIncAMetCMore: “Transparency increases for the customer when more software metrics are delivered to the customer”. $tIncAMetSFreq: “Transparency increases for the software team when metrics are used by the team more often”. $tIncAMetSMore: “Transparency increases for the software team when more metrics are used by the team”. $tIncAMetOFreq: “Transparency increases inside the organization when metrics are communicated to other stakeholders within the developing organization more often”. $tIncAMetOMore: “Transparency increases inside the organization when more metrics are communicated to other stakeholders within the developing organization”. |
The usefulness of and increasing transparency with metrics | ||
Acceptable cost of metrics | NT | A percentage value of the yearly project budget that would be an acceptable effort for collecting metrics. $accMetCost: “What percentage of the yearly budget would you consider acceptable for regularly gathering software metrics in the project?” Values: An integer from 0 to 100. |
Metric delivery frequency | SC | Recommended frequency of updating and exchanging measured software metrics. $metRecFreqS: “What would be a good frequency for delivering up-to-date software metrics in your project to your external customer?” (Only supplier). $metRecFreqC: “What would be a good frequency for receiving software metrics from your external software suppliers?” (Only customer). $metRecFreqO: “What would be a good frequency for updating software metrics for your internal software development?” (Only internal development). Values: “quarterly” (Up to once every three months), “monthly” (More often than once per quarter but not more often than once per month), “biweekly” (More often than once per month but not more often than once every two weeks), “weekly” (More often than once every two weeks but not more often than once per week), “daily” (More often than once per week), and in “real time”. |
Metric usefulness | MLI | Participants were asked to rate the overall usefulness of various software metrics. Only ten metrics out of the full set of 41 ECSS metrics were presented to participants. The metrics were selected randomly for each participant. $metric[…]: “[Short explanation of metric.] [Metric] is a useful software metric”. |
New metrics | MC | Upon what kind of metrics should research and development of new metrics focus? It was possible to choose individual product quality characteristics from the ISO-25000 and process effectiveness and efficiency. $newMetFunc: product quality, functionality. $newMetRel: product quality, reliability. $newMetMaint: product quality, maintainability. $newMetReuse: product quality, reusability. $newMetSfty: product quality, suitability for safety. $newMetSec: product quality, security. $newMetUsa: product quality, usability. $newMetEffi: product quality, efficiency. $newMetComp: product quality, compatibility. $newMetPort: product quality, portability. $newMetDevEffe: software process, development effectiveness. $newMetDevEffi: software process, development efficiency. |
Quality of tools and support for metrics | ||
Open metrics database benefits | MLI | Estimated usefulness of a cross-project and cross-institutional database with project information and software metrics for different communities. “A cross-project and cross-institutional database of project information including software metrics would be useful to …” $metDbBenefitSpace: “… the space community”. $metDbBenefitIndustry: “… industry”. $metDbBenefitScience: “… science”. $metDbBenefitDevCom: “… the software development community”. |
Open metrics database contribution | SC | Willingness to contribute data to a cross-project and cross-institutional database with software metrics, and whether such contribution would need to guarantee anonymity. $metDbContrib: “Would you personally be willing to provide such data to such a database?” Values: “openly” (Yes, even if the data are not anonymized), “anonymously” (Yes, if the data are anonymized), or “no”. |
Current metrics tool landscape | MLI | Statements regarding the availability and suitability of software metrics tools. $mtMissRelMet: “Current metrication tools do not support the metrics relevant to me”. $mtPricy: “All metrication tools relevant to me are too expensive”. $mtEffortAnalysis: “Assessing the metric data with current tools requires just too much effort”. $mtOldData: “With current tools, metric data are always too old when they become available”. $mtLackSources: “None of the tools available supports all the data sources I need”. $mtDisjunct: “I need multiple tools, but they cannot be integrated into a complete solution”. $mtNeedFormat: “We urgently need some standards for the exchange of metric data between tools”. $mtNewSources: “I often have new data sources which need to be integrated with the metric collection tools”. $mtProcMismatch: “Metrication tools are difficult to integrate with our processes”. $mtITMismatch: “Available metrication tools don’t fit with our IT”. $mtApproveData: “There must be a process to approve metrics data before it is disclosed to the customer”. $mtNeedOSS: “I would only use a metrication tool if it is available under an open-source license”. |
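As a usage illustration of the metadata variables described above, the filtering recommended for $mBogus (together with $mComplete) could look as follows. This is a sketch under stated assumptions: the file name `skquest.csv` is a placeholder, and the boolean encoding of the two columns should be checked against the actual supplementary material.

```python
import pandas as pd

def load_filtered(path: str = "skquest.csv") -> pd.DataFrame:
    """Load SKQuest responses and drop incomplete or suspected-bogus rows.

    The file name and the boolean encoding of $mComplete/$mBogus are
    assumptions; consult the supplementary material for the real format.
    """
    df = pd.read_csv(path)
    # Keep completed interviews that were not flagged as bogus.
    return df[df["$mComplete"] & ~df["$mBogus"]]
```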
Appendix B
Code/Var. Name | Metric Name | Short Description |
---|---|---|
A.3.3.01 metricReqAlloc | Requirement allocation [var. swdc] | Percentage of Software Requirements allocated to Software Design Components. |
A.3.3.01 metricSysReqAlloc | Requirement allocation [var. Sys2Sw] | Percentage of System Requirements allocated to Software Level Requirements. |
A.3.3.02 metricReqImpl | Requirement implementation coverage | Percentage of correctly implemented and validated Software Requirements. |
A.3.3.03 metricReqWithTBD | Requirement completeness | Percentage of Software Requirements containing TBC/TBD to be confirmed/defined. |
A.3.3.04 metricVVCmplness | V&V coverage | Percentage of Software Requirements covered by at least one verification or validation activity. |
A.3.3.05 metricBugHistory | SPR/NCR trend analysis | Evolution of open vs. closed bug reports over time. |
A.3.3.06 metricReqClarity | Requirement clarity | Percentage of software requirements containing ambiguous phrases. |
A.3.3.07 metricCheckDocSui | Suitability of development documentation | Percentage of positively answered questions in the checklist for documentation suitability. |
A.3.3.08 metricCodingRuleCompl | Adherence to coding standards | Percentage of positively evaluated coding standard checks. |
A.3.3.09 metricCPUsed | CPU margin | Minimum unused CPU capacity during operation. |
A.3.3.10 metricMemUsed | Memory margin | Percentage of available memory used. |
A.3.3.11 metricMcCabe | Cyclomatic complexity (VG) | Average cyclomatic complexity per source code module. |
A.3.3.12 metricNesting | Nesting level | Maximum nesting level per source code module. |
A.3.3.13 metricLocPerMod | Lines of code | Number of lines of code per module without comments/blank lines. |
A.3.3.14 metricCommentLines | Comment frequency | Percentage of comment lines in source code per module. |
A.3.3.15 metricReqTested | Requirement testability | Percentage of Software Requirements validated by a test. |
A.3.3.16 metricFanOut | Modular span of control | Average number of subroutines called per function.
A.3.3.17 metricCoupling | Modular coupling | The median level of coupling between pairs of modules. |
A.3.3.18 metricCohesion | Modular cohesion | The maximum level of cohesion of source code modules. |
A.3.3.19 metricCheckRelAct | Process reliability adequacy | Percentage of positively answered questions in the checklist for performed reliability activities. |
A.3.3.20 metricBranchCov | Structural coverage [var. branch] | Percentage of branches executed during testing. |
A.3.3.20 metricMCDCov | Structural coverage [var. MC/DC] | Percentage of MC/DC coverage achieved during testing. |
A.3.3.20 metricStatementCov | Structural coverage [var. statement] | Percentage of source code statements executed during testing. |
A.3.3.21 metricOpenBugs | SPR/NCR status | Number of open bugs by criticality class over time. |
A.3.3.22 metricCompBehavSw | Environmental software independence | Percentage of design components expected to maintain their correct behavior in a different software environment. |
A.3.3.23 metricCompBehavHw | System hardware independence | Percentage of design components expected to maintain their correct behavior in a different hardware environment. |
A.3.3.24 metricReuseChkl | Reusability checklist | Percentage of positively answered questions in the checklist for reuse potential. |
A.3.3.25 metricModLineReuse | Reuse modification rate | Percentage of modified/added lines of existing/reused software. |
A.3.3.26 metricSafetyChkl | Safety activities adequacy | Percentage of positively answered questions in the checklist for performed safety activities. |
A.3.3.27 metricSecurityChkl | Security checklist | Percentage of positively answered questions in the checklist for performed security activities. |
A.3.3.28 metricAmbigUM | User documentation clarity | Percentage of sentences in the user manual containing ambiguous phrases. |
A.3.3.29 metricUserDocCmpl | User documentation completeness | Percentage of sections within the user manual containing TBC/TBD to be confirmed/defined. |
A.3.3.30 metricUserManSui | User manual suitability | Percentage of positively answered questions in the checklist for user manual suitability. |
A.3.3.31 metricCheckMMI | Adherence to MMI standards | Percentage of positively answered questions in the checklist for adherence to man-machine-interface standards. |
A.3.3.32 metricCmmiLvl | Process assessment [ECSS-Q-HB-80-02] | The assessed level of the contractor’s process capability and maturity.
A.3.3.33 metricMilestoneMet | Milestone tracking | Difference between planned and actually achieved dates for project milestones. |
A.3.3.34 metricEffortMet | Effort tracking | Estimated and actual effort figures for each ongoing or completed task relevant to the software project. |
A.3.3.35 metricSizeStable | Code size stability | Estimated and actual physical lines of code for each major design component. |
A.3.3.36 metricReqStable | Requirement stability | Percentage of software requirements added, modified, and deleted since the last software version.
A.3.3.37 metricOpenItems | RID/action status | Number of open action items from milestone reviews over time. |
A.3.3.38 metricVVProgress | V&V progress | Percentage of successfully completed verification and validation activities. |
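Many of the metrics above are simple coverage percentages over traceability data. As an illustration only (the data structures below are hypothetical and not part of the SKQuest data set), a metric such as A.3.3.04 (V&V coverage) could be computed like this:

```python
def vv_coverage(requirements, vv_links):
    """Percentage of requirements covered by at least one V&V activity.

    requirements: list of requirement IDs.
    vv_links: dict mapping requirement IDs to lists of V&V activities.
    Illustrative sketch only; not taken from the SKQuest data set.
    """
    if not requirements:
        return 0.0
    covered = sum(1 for req in requirements if vv_links.get(req))
    return 100.0 * covered / len(requirements)
```

For example, four requirements of which two have linked verification activities yield a coverage of 50.0; requirements missing from the mapping count as uncovered.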
References
- Guanter, L.; Kaufmann, H.; Segl, K.; Foerster, S.; Rogass, C.; Chabrillat, S.; Kuester, T.; Hollstein, A.; Rossner, G.; Chlebek, C.; et al. The EnMAP Spaceborne Imaging Spectroscopy Mission for Earth Observation. Remote Sens. 2015, 7, 8830–8857.
- Prause, C.R.; Bibus, M.; Dietrich, C.; Jobi, W. Software product assurance at the German space agency. J. Softw. Evol. Proc. 2016, 28, 744–761.
- Donaldson, S.E.; Siegel, S.G. Successful Software Development, 2nd ed.; Prentice Hall PTR: Upper Saddle River, NJ, USA, 2001.
- Tu, Y.-C.; Tempero, E.; Thomborson, C. Evaluating Presentation of Requirements Documents: Results of an Experiment. In Requirements Engineering; Junqueira Barbosa, S.D., Chen, P., Cuzzocrea, A., Du, X., Filipe, J., Kara, O., Kotenko, I., Sivalingam, K.M., Ślęzak, D., Washio, T., et al., Eds.; Springer: Berlin/Heidelberg, Germany, 2014; pp. 120–134. ISBN 978-3-662-43609-7.
- Boehm, B.W. Software and Its Impact: A Quantitative Assessment; RAND Corporation: Santa Monica, CA, USA, 1972.
- Prause, C.R.; Hönle, A. Emperor’s New Clothes: Transparency Through Metrication in Customer-Supplier Relationships. In Product-Focused Software Process Improvement; Kuhrmann, M., Schneider, K., Pfahl, D., Amasaki, S., Ciolkowski, M., Hebig, R., Tell, P., Klünder, J., Küpper, S., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 288–296. ISBN 978-3-030-03672-0.
- Betta, J.; Boronina, L. Transparency in Project Management—From Traditional to Agile. Adv. Econ. Bus. Manag. Res. 2018, 56, 446–449.
- ECSS-Q-ST-80C; Space Product Assurance: Software Product Assurance. ECSS Executive Secretariat: Noordwijk, The Netherlands, 2017.
- ECSS-Q-HB-80-04A; Space Product Assurance: Software Metrication Programme Definition and Implementation. ECSS Executive Secretariat: Noordwijk, The Netherlands, 2011.
- ISO/IEC/IEEE 15939:2017; Systems and Software Engineering—Measurement Process. ISO: Geneva, Switzerland, 2017.
- LamaPoll. Sichere Online Umfrage. Available online: http://www.lamapoll.de (accessed on 23 March 2023).
- ISO/IEC 25010:2011; Systems and Software Engineering—Systems and Software Quality Requirements and Evaluation (SQuaRE)—System and Software Quality Models. ISO: Geneva, Switzerland, 2011.
- Basili, V.R.; McGarry, F.E.; Pajerski, R.; Zelkowitz, M.V. Lessons learned from 25 years of process improvement. In Proceedings of the 24th International Conference on Software Engineering—ICSE ′02, Orlando, FL, USA, 19–25 May 2002; Tracz, W., Magee, J., Young, M., Eds.; ACM Press: New York, NY, USA, 2002; p. 69, ISBN 158113472X. [Google Scholar]
- Chidamber, S.R.; Kemerer, C.F. A metrics suite for object oriented design. IEEE Trans. Softw. Eng. 1994, 20, 476–493. [Google Scholar] [CrossRef]
- Saraiva, J.; Barreiros, E.; Almeida, A.; Lima, F.; Alencar, A.; Lima, G.; Soares, S.; Castor, F. Aspect-oriented software maintenance metrics: A systematic mapping study. In Proceedings of the 16th International Conference on Evaluation & Assessment in Software Engineering (EASE 2012), Ciudad Real, Spain, 14–15 May 2012; IET: Hong Kong, China, 2012; pp. 253–262, ISBN 978-1-84919-541-6. [Google Scholar]
- Bouwers, E.; van Deursen, A.; Visser, J. Towards a catalog format for software metrics. In Proceedings of the 5th International Workshop on Emerging Trends in Software Metrics, ICSE ′14, 36th International Conference on Software Engineering, Hyderabad, India, 3 June 2014; Counsell, S., Marchesi, M.L., Visaggio, A., Zhang, H., Venkatasubramanyam, R., Eds.; ACM: New York, NY, USA, 2014; pp. 44–47, ISBN 9781450328548. [Google Scholar]
- Sayyad Shirabad, J.; Menzies, T.J. The PROMISE Repository of Software Engineering Databases. 2005. Available online: http://promise.site.uottawa.ca/SERepository (accessed on 13 March 2023).
- Vogel, M.; Knapik, P.; Cohrs, M.; Szyperrek, B.; Pueschel, W.; Etzel, H.; Fiebig, D.; Rausch, A.; Kuhrmann, M. Metrics in automotive software development: A systematic literature review. J. Softw. Evol. Proc. 2021, 33, e2296. [Google Scholar] [CrossRef]
- Le Son, H.; Pritam, N.; Khari, M.; Kumar, R.; Phuong, P.; Thong, P. Empirical Study of Software Defect Prediction: A Systematic Mapping. Symmetry 2019, 11, 212. [Google Scholar] [CrossRef]
- Choras, M.; Springer, T.; Kozik, R.; Lopez, L.; Martinez-Fernandez, S.; Ram, P.; Rodriguez, P.; Franch, X. Measuring and Improving Agile Processes in a Small-Size Software Development Company. IEEE Access 2020, 8, 78452–78466. [Google Scholar] [CrossRef]
- Brüggemann, S.; Prause, C. Status quo agiler Software-Entwicklung in der europäischen institutionellen Raumfahrt. In Proceedings of the Deutscher Luft- und Raumfahrtkongress (DLRK), Friedrichshafen, Germany, 4–6 September 2018; pp. 1–8. [Google Scholar]
- Ofem, P.; Isong, B.; Lugayizi, F. On the Concept of Transparency: A Systematic Literature Review. IEEE Access 2022, 10, 89887–89914. [Google Scholar] [CrossRef]
- Saraiva, R.; Medeiros, A.; Perkusich, M.; Valadares, D.; Gorgonio, K.C.; Perkusich, A.; Almeida, H. A Bayesian Networks-Based Method to Analyze the Validity of the Data of Software Measurement Programs. IEEE Access 2020, 8, 198801–198821. [Google Scholar] [CrossRef]
- de Vaus, D.A. Surveys in Social Research, 5th ed.; Routledge: London, UK, 2002; ISBN 0415268575. [Google Scholar]
Topic | Summary of Question Item and Answer Options |
---|---|
Introduction | The survey started with the background and organization of the survey (e.g., context, topic, duration, etc.), information regarding anonymity, and the voluntariness of participation. The data set also contains metadata for each response, including duration, completeness, whether the respondent was invited through a search engine, and a manual assessment of whether the response was valid. |
Demographics | Several questions were directed at understanding the participant’s background and context: whether they were involved in a software project, the domain of work, company size, office location, experience with software and metrics, and the development culture with respect to whether it is research or product development. For reasons of anonymization, office location and company size are not included in the published data set, and the domains are collapsed into aerospace and non-aerospace. |
Project demographics | Upon reaching this part and before continuing with the questionnaire, participants were instructed to think of one concrete current or past project. For the remainder of the survey, they were to answer project-related questions with respect to “the project”. Information about the project gathered here included the participant’s roles in the project, the degree of agility (as in agile software development) in project management, the project budget, whether participants acted as customers or suppliers in the project, and (if applicable) whether the customer was a public customer. |
Status quo of project execution, product quality, and transparency | Several questions addressed the current situation in “the project” with respect to satisfaction with how the project ran, the quality of the product, and transparency. It asked about performed activities that increase transparency, and the use and technical reporting of software metrics. |
The role of transparency for project success | This part of the survey addressed the role that transparency plays in project success. It covered the expected importance of the development process for product quality, the various effects that transparency has on project success, and whether transparency positively or negatively impacts project execution, process quality, and product quality. |
Increasing transparency | Another set of questions addressed whether increased transparency is acceptable to participants, and which activities are useful for increasing it. Software metrics as a means of improving transparency received a special focus with additional question items. |
The usefulness of and increasing transparency with metrics | This set of questions addressed the usefulness of metrics in general. It also inquired how often metrics should be used to improve transparency, and what cost of metric gathering would be acceptable. Finally, it asked for a perspective on what kinds of metrics might be missing that could guide further research. |
Quality of tools and support for metrics | The last part of the survey addressed the vision of supporting aerospace, industry, and research with a cross-institutional database of metrics, asked which kinds of future metrics are needed, and captured opinions about the current tool landscape for gathering metrics. |
Closing remarks | The survey concluded with closing remarks and the opportunity to provide a free-text comment regarding the survey or any other open points. The free-text comment is not included (see discussion below). As a thank you for their participation, participants were asked to which charitable organization we should donate (which is also not included). |
Interrelations for the Status Quo of Transparency Variables | Well Informed about Software Quality | Occurrence of Surprises | Development Outsiders Are Over-Informed |
---|---|---|---|
Well informed about project status | 0.67/0.75/0.74 | −0.10 ⚠/0.42/0.32 | 0.38/0.20/0.24 |
Well informed about software quality | −0.03 ⚠/0.40/0.48 | 0.51/0.30/0.39 | |
Occurrence of surprises | 0.16 ⚠/0.50/0.39 |
Satisfied with … | Well Informed about Project Status | Well Informed about Software Quality | Occurrence of Surprises | Development Outsiders Are Over-Informed |
---|---|---|---|---|
Product quality | 0.61/0.47/0.63 | 0.59/0.50/0.56 | −0.04 ⚠/0.20/0.26 | 0.14 ⚠/0.30/0.54 |
Process compliance | 0.66/0.63/0.53 | 0.70/0.61/0.48 | −0.04 ⚠/0.42/0.29 | 0.41/0.26/0.60 |
Process efficiency | 0.58/0.58/0.54 | 0.61/0.60/0.59 | 0.16 ⚠/0.36/0.31 | 0.48/0.35/0.58 |
Fully Disagree | Disagree | Neutral | Agree | Fully Agree | Mean [−2, 2] | |
---|---|---|---|---|---|---|
Overall | 2 | 3 | 7 | 9 | 15 | 0.89 |
(fraction) | 6% | 8% | 19% | 25% | 42% | - |
Public entities | 1 | 2 | 4 | 5 | 8 | 0.85 |
(fraction) | 5% | 10% | 20% | 25% | 40% | - |
Private entities | 1 | 1 | 2 | 4 | 6 | 0.93 |
(fraction) | 7% | 7% | 14% | 29% | 43% | - |
Fully agile or agile projects | 0 | 1 | 1 | 2 | 6 | 1.30 |
(fraction) | 0% | 10% | 10% | 20% | 60% | - |
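The mean column in the table above is consistent with coding the five answer options from −2 (fully disagree) to +2 (fully agree) and averaging; the coding is inferred from the “[−2, 2]” column header. A minimal Python check for the “Overall” row:

```python
# Answer counts for the "Overall" row: fully disagree ... fully agree
counts = [2, 3, 7, 9, 15]
codes = [-2, -1, 0, 1, 2]  # five-point coding inferred from the [−2, 2] header

total = sum(counts)
mean = sum(n * c for n, c in zip(counts, codes)) / total
fractions = [round(100 * n / total) for n in counts]

print(round(mean, 2))  # → 0.89
print(fractions)       # → [6, 8, 19, 25, 42]
```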
Well Informed about Project Status | Well Informed about Software Quality | Occurrence of Surprises | Development Outsiders Are Over-Informed | |
---|---|---|---|---|
Customer | 0.46 ⚠ | 0.74 | - | 0.66 ⚠ |
Supplier | - | - | 0.54 | 0.45 |
Internal development | - | - | 0.55 | 0.40 ⚠ |
Increased Frequency | More Metrics | Mean | |
---|---|---|---|
For the customer | +0.81 | +0.63 | +0.72 |
For the software team itself | +1.24 | +1.05 | +1.14 |
For others within dev. organization | +1.05 | +0.88 | +0.96 |
The mean of rows above | +0.99 | +0.82 | +0.91 |
Customers’ opinions | +1.13 | +0.89 | +1.00 |
Suppliers’ opinions | +1.03 | +0.86 | +0.94 |
Demographic Factors | Qi | Qii | Qiii |
---|---|---|---|
Size | 0.49 vs. 0.08 | 0.45 vs. 0.29 * | 0.57 * vs. 0.11 |
Hierarchy level | 0.31 vs. 0.08 | 0.15 vs. 0.03 | −0.03 vs. 0.13 |
Is Customer? | 0.43 * vs. 0.03 | 0.21 vs. 0.04 | −0.07 vs. −0.03 |
Is product/quality assurance? | 0.36 vs. 0.00 | 0.07 vs. 0.28 * | 0.16 vs. −0.01 |
Is Software? | −0.49 * vs. 0.15 | −0.01 vs. 0.03 | 0.28 vs. 0.18 * |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Prause, C.R.; Gerlich, R. Finest Magic Cloth or a Naked Emperor? The SKQuest Data Set on Software Metrics for Improving Transparency and Quality. Standards 2023, 3, 136-168. https://doi.org/10.3390/standards3020012