# Propose-Specific Information Related to Prediction Level at x and Mean Magnitude of Relative Error: A Case Study of Software Effort Estimation


## Abstract


## 1. Introduction

## 2. Related Works

## 3. Sig Formula

**Lemma 1.**

**Proof.**

**Lemma 2.**

**Proof.**

**Remark**: A similar lemma could be proven: If the number of consecutive dots below the baseline increases, the values of both $si{g}_{Left}(y,\widehat{y})$ and $si{g}_{Right}(y,\widehat{y})$ decrease. □

## 4. Characteristics of Sig Formula

- If all predicted values are greater than the respective actual values (i.e., all dots are above the baseline), then, according to Equation (11):$$sig(y,\widehat{y})=\frac{1}{N}\sum _{i=1}^{N}{a}_{i}=\frac{1}{N}\sum _{i=1}^{N}(+1)=1$$
- If all predicted values are smaller than the respective actual values (i.e., all dots are below the baseline), then, according to Equation (11):$$sig(y,\widehat{y})=\frac{1}{N}\sum _{i=1}^{N}{a}_{i}=\frac{1}{N}\sum _{i=1}^{N}(-1)=-1$$
- If there is no difference between each predicted value and the respective actual value (i.e., all dots are on the baseline), then, according to Equation (11):$$sig(y,\widehat{y})=\frac{1}{N}\sum _{i=1}^{N}{a}_{i}=\frac{1}{N}\sum _{i=1}^{N}\left(0\right)=0$$
- In general, the predicted and actual values fluctuate: some predicted values are greater than, and others smaller than, the respective actual ones. The dots in Figure 1 then form a cloud around the baseline, and the value of $sig(y,\widehat{y})$ lies between −1 and +1.
- Moreover, in the case of a uniform symmetry (named UniSym), i.e., if the values alternate around the baseline (one dot is above the baseline, the next one below, the next above, etc.), then with the increasing number N of the observations, the value of $sig(y,\widehat{y})$ approaches 0.$$\underset{N\to \infty}{lim}sig(y,\widehat{y})=\underset{N\to \infty}{lim}\frac{1}{N}\sum _{i=1}^{N}{a}_{i}=0$$
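The boundary cases above can be checked with a small sketch. This is a hypothetical Python rendering, assuming $a_i$ is simply the sign of $\widehat{y}_i-y_i$ (+1 for a dot above the baseline, −1 below, 0 on it); the function name is ours:

```python
def sig(y, y_hat):
    """Sketch of the sig measure: the mean of a_i = sign(y_hat_i - y_i),
    i.e. +1 for a dot above the baseline, -1 below, 0 on the baseline."""
    a = [(d > 0) - (d < 0) for d in (p - t for t, p in zip(y, y_hat))]
    return sum(a) / len(a)

actual = [10.0, 20.0, 30.0, 40.0]

# All predictions above the baseline -> sig = +1
assert sig(actual, [12.0, 25.0, 31.0, 44.0]) == 1.0
# All predictions below the baseline -> sig = -1
assert sig(actual, [8.0, 15.0, 29.0, 36.0]) == -1.0
# Alternating over/under (UniSym) -> sig = 0
assert sig(actual, [12.0, 15.0, 31.0, 36.0]) == 0.0
```

With a finite UniSym sample the value is exactly zero only for an even number of observations; for odd $N$ it is $\pm 1/N$, which still tends to 0 as $N$ grows.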

## 5. Research Questions

- RQ1: What are the differences among the $si{g}_{Left}$, $si{g}_{Right}$, and $sig$ formulas?
- RQ2: What is the importance of additional information related to $PRED\left(x\right)$ and $MMRE$ when measuring the performance of a prediction model?

## 6. Results and Discussion

- Model 1: The predicted efforts are random guesses whose values are mostly greater than the real values; its $PRED\left(0.25\right)$ reaches the maximum compared with Models 5, 6, 7, and 8.
- Model 2: The predicted values are random guesses opposite to Model 1 (mostly smaller than the real values); its $PRED\left(0.25\right)$ is equal to that of Model 1.
- Model 3: The predicted efforts are random guesses such that the first half of the predicted values is mostly less than the real values, while the remaining half is mostly greater; its $PRED\left(0.25\right)$ is assumed equal to that of Model 1.
- Model 4: Similar to Model 3, but in the inverse sense. Furthermore, the predicted values in this model were purposefully chosen to minimize the $MMRE$.
- Models 5, 6, 7, and 8: The predicted efforts follow the rule that one or more initial predicted values are greater (or less) than the actual values, followed by one or more predicted values on the opposite side; this pattern repeats until the testing dataset is exhausted. Furthermore, we assumed their $PRED\left(0.25\right)$ values are the same: greater than 0.7 but less than the $PRED\left(0.25\right)$ of Models 1, 2, 3, and 4, while their $MMRE$ is greater than or equal to that of Models 1, 2, 3, and 4.
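The two accuracy criteria used in the comparison can be sketched as follows. This is a minimal Python rendering of the standard definitions of $MMRE$ and $PRED(x)$; the function names are ours:

```python
def mmre(y, y_hat):
    # Mean Magnitude of Relative Error: mean of |y_i - y_hat_i| / y_i
    return sum(abs(t - p) / t for t, p in zip(y, y_hat)) / len(y)

def pred(y, y_hat, x=0.25):
    # PRED(x): fraction of observations whose relative error is <= x
    return sum(abs(t - p) / t <= x for t, p in zip(y, y_hat)) / len(y)

actual = [100.0, 200.0]
predicted = [120.0, 190.0]
print(round(mmre(actual, predicted), 3))   # 0.125 (MREs 0.2 and 0.05)
print(pred(actual, predicted, 0.25))       # 1.0 (both MREs <= 0.25)
```

Note that both criteria discard the *direction* of the error: swapping overestimates for underestimates of the same relative size leaves $MMRE$ and $PRED(x)$ unchanged, which is exactly the gap the $sig$ measure is meant to fill.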

- The predicted values produced by Model 1 are mostly greater than the corresponding real values (all dots lie above the baseline), while Model 2 is the opposite.
- In Model 3, the dots in the first half mostly lie below and the dots in the second half mostly lie above the baseline, and vice versa in Model 4.
- The dots in Models 5 and 6 lie around the baseline, but the number of dots above the baseline is greater than the number below it (systematic overestimation). This demonstrates a positive $sig$ value.
- The dots in Model 7 also lie around the baseline, but the number of dots above the baseline is smaller than the number below it (systematic underestimation). This demonstrates a negative $sig$ value. Moreover, Model 8 is an approximately UniSym model, so its $sig$ value is close to zero.
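Because each $a_i$ is +1, −1, or 0, a model's $sig$ value follows directly from the counts of dots above and below the baseline. A brief sketch (hypothetical helper, under the same sign-based reading of Equation (11)):

```python
def sig_from_counts(n_above, n_below, n_on=0):
    # sig = (1/N) * sum(a_i) reduces to (n_above - n_below) / N
    # when a_i is +1 above the baseline, -1 below, and 0 on it.
    n = n_above + n_below + n_on
    return (n_above - n_below) / n

# Model 1 in the case study: 27 of the 28 dots lie above the baseline
print(round(sig_from_counts(27, 1), 3))  # 0.929
```

The result matches the $sig$ value reported for Model 1 in the metrics table, and the symmetric Model 2 (1 above, 27 below) gives −0.929.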

## 7. Conclusions and Future Work

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References

- Azzeh, M.; Nassif, A.B. Project productivity evaluation in early software effort estimation. J. Softw. Evol. Process. 2018, 30, e21110.
- Braz, M.R.; Vergilio, S.R. Software effort estimation based on use cases. In Proceedings of the 30th Annual International Computer Software and Applications Conference (COMPSAC’06), Chicago, IL, USA, 17–21 September 2006; Volume 1, pp. 221–228.
- Mahmood, Y.; Kama, N.; Azmi, A.; Khan, A.S.; Ali, M. Software effort estimation accuracy prediction of machine learning techniques: A systematic performance evaluation. Softw. Pract. Exp. 2022, 52, 39–65.
- Munialo, S.W.; Muketha, G.M. A review of agile software effort estimation methods. Int. J. Comput. Appl. Technol. Res. 2016, 5, 612–618.
- Silhavy, R.; Silhavy, P.; Prokopova, Z. Evaluating subset selection methods for use case points estimation. Inf. Softw. Technol. 2018, 97, 1–9.
- Trendowicz, A.; Jeffery, R. Software project effort estimation. Found. Best Pract. Guidel. Success Constr. Cost Model. 2014, 12, 277–293.
- Azzeh, M.; Nassif, A.B.; Attili, I.B. Predicting software effort from use case points: Systematic review. Sci. Comput. Program. 2021, 204, 102596.
- Conte, S.D.; Dunsmore, H.E.; Shen, Y.E. Software Engineering Metrics and Models; Benjamin-Cummings Publishing Co., Inc.: San Francisco, CA, USA, 1986.
- Praynlin, E. Using meta-cognitive sequential learning neuro-fuzzy inference system to estimate software development effort. J. Ambient. Intell. Humaniz. Comput. 2021, 12, 8763–8776.
- Fadhil, A.A.; Alsarraj, R.G.H.; Altaie, A.M. Software cost estimation based on dolphin algorithm. IEEE Access 2020, 8, 75279–75287.
- Bilgaiyan, S.; Mishra, S.; Das, M. Effort estimation in agile software development using experimental validation of neural network models. Int. J. Inf. Technol. 2019, 11, 569–573.
- Mustapha, H.; Abdelwahed, M. Investigating the use of random forest in software effort estimation. Procedia Comput. Sci. 2019, 148, 343–352.
- Ullah, A.; Wang, B.; Sheng, J.; Long, J.; Asim, M.; Riaz, F. A novel technique of software cost estimation using flower pollination algorithm. In Proceedings of the 2019 International Conference on Intelligent Computing, Automation and Systems (ICICAS), Chongqing, China, 6–8 December 2019; pp. 654–658.
- Sethy, P.K.; Rani, S. Improvement in COCOMO model using optimization algorithms to reduce MMRE values for effort estimation. In Proceedings of the 2019 4th International Conference on Internet of Things: Smart Innovation and Usages (IoT-SIU), Ghaziabad, India, 18–19 April 2019; pp. 1–4.
- Effendi, Y.A.; Sarno, R.; Prasetyo, J. Implementation of bat algorithm for COCOMO II optimization. In Proceedings of the 2018 International Seminar on Application for Technology of Information and Communication, Semarang, Indonesia, 21–22 September 2018; pp. 441–446.
- Khan, M.S.; ul Hassan, C.A.; Shah, M.A.; Shamim, A. Software cost and effort estimation using a new optimization algorithm inspired by strawberry plant. In Proceedings of the 2018 24th International Conference on Automation and Computing (ICAC), Newcastle upon Tyne, UK, 6–7 September 2018; pp. 1–6.
- Desai, V.S.; Mohanty, R. ANN-cuckoo optimization technique to predict software cost estimation. In Proceedings of the 2018 Conference on Information and Communication Technology (CICT), Jabalpur, India, 26–28 October 2018; pp. 1–6.
- Jørgensen, M.; Halkjelsvik, T.; Liestøl, K. When should we (not) use the mean magnitude of relative error (MMRE) as an error measure in software development effort estimation? Inf. Softw. Technol. 2022, 143, 106784.
- Foss, T.; Stensrud, E.; Kitchenham, B.; Myrtveit, I. A simulation study of the model evaluation criterion MMRE. IEEE Trans. Softw. Eng. 2003, 29, 985–995.
- Myrtveit, I.; Stensrud, E.; Shepperd, M. Reliability and validity in comparative studies of software prediction models. IEEE Trans. Softw. Eng. 2005, 31, 380–391.
- Kitchenham, B.A.; Pickard, L.M.; MacDonell, S.G.; Shepperd, M.J. What accuracy statistics really measure [software estimation]. IEE Proc. Softw. 2001, 148, 81–85.
- Shepperd, M.; MacDonell, S. Evaluating prediction systems in software project estimation. Inf. Softw. Technol. 2012, 54, 820–827.
- Villalobos-Arias, L.; Quesada-López, C.; Guevara-Coto, J.; Martínez, A.; Jenkins, M. Evaluating hyper-parameter tuning using random search in support vector machines for software effort estimation. In Proceedings of the 16th ACM International Conference on Predictive Models and Data Analytics in Software Engineering, Virtual, 8–9 November 2020; pp. 31–40.
- Idri, A.; Hosni, M.; Abran, A. Improved estimation of software development effort using classical and fuzzy analogy ensembles. Appl. Soft Comput. 2016, 49, 990–1019.
- Gneiting, T. Making and evaluating point forecasts. J. Am. Stat. Assoc. 2011, 106, 746–762.
- Strike, K.; El Emam, K.; Madhavji, N. Software cost estimation with incomplete data. IEEE Trans. Softw. Eng. 2001, 27, 890–908.
- Hamid, M.; Zeshan, F.; Ahmad, A.; Ahmad, F.; Hamza, M.A.; Khan, Z.A.; Munawar, S.; Aljuaid, H. An intelligent recommender and decision support system (IRDSS) for effective management of software projects. IEEE Access 2020, 8, 140752–140766.
- Ali, A.; Gravino, C. A systematic literature review of software effort prediction using machine learning methods. J. Softw. Evol. Process. 2019, 31, e2211.
- Gautam, S.S.; Singh, V. The state-of-the-art in software development effort estimation. J. Softw. Evol. Process. 2018, 30, e1983.
- Silhavy, R.; Silhavy, P.; Prokopova, Z. Algorithmic optimisation method for improving use case points estimation. PLoS ONE 2015, 10, e0141887.
- Silhavy, R.; Silhavy, P.; Prokopova, Z. Analysis and selection of a regression model for the use case points method using a stepwise approach. J. Syst. Softw. 2017, 125, 1–14.
- Ochodek, M.; Nawrocki, J.; Kwarciak, K. Simplifying effort estimation based on use case points. Inf. Softw. Technol. 2011, 53, 200–213.
- Subriadi, A.P.; Ningrum, P.A. Critical review of the effort rate value in use case point method for estimating software development effort. J. Theoretical Appl. Inf. Technol. 2014, 59, 735–744.
- Hoc, H.T.; Hai, V.V.; Nhung, H.L.T.K. AdamOptimizer for the Optimisation of Use Case Points Estimation; Springer: Berlin/Heidelberg, Germany, 2020; pp. 747–756.
- Karner, G. Metrics for Objectory; No. LiTH-IDA-Ex-9344; University of Linköping: Linköping, Sweden, 1993; p. 21.
- ISO/IEC 20926:2009; Software and Systems Engineering—Software Measurement—IFPUG Functional Size Measurement Method. ISO/IEC: Geneva, Switzerland, 2009.
- Azzeh, M.; Nassif, A.; Banitaan, S. Comparative analysis of soft computing techniques for predicting software effort based use case points. IET Softw. 2018, 12, 19–29.
- Santos, R.; Vieira, D.; Bravo, A.; Suzuki, L.; Qudah, F. A systematic mapping study on the employment of neural networks on software engineering projects: Where to go next? J. Softw. Evol. Process. 2022, 34, e2402.
- Carbonera, E.C.; Farias, K.; Bischoff, V. Software development effort estimation: A systematic mapping study. IET Softw. 2020, 14, 328–344.
- Idri, A.; Hosni, M.; Abran, A. Systematic literature review of ensemble effort estimation. J. Syst. Softw. 2016, 118, 151–175.
- Ouwerkerk, J.; Abran, A. An evaluation of the design of use case points (UCP). In Proceedings of the International Conference on Software Process and Product Measurement (MENSURA), Cádiz, Spain, 6–8 November 2006; pp. 83–97.
- Abran, A. Software Metrics and Software Metrology; John Wiley & Sons: Hoboken, NJ, USA, 2010.
- Géron, A. Ensemble Learning and Random Forests; O’Reilly: Sebastopol, CA, USA, 2019; pp. 189–212.

| Authors | Proposal Model | Compare With | Criteria |
|---|---|---|---|
| Mahmood et al. [3] | Machine-learning-based ensemble techniques | Machine-learning-based solo techniques | $MMRE$, $PRED\left(x\right)$ |
| Praynlin [9] | Metacognitive neuro-fuzzy | Particle swarm optimization, genetic algorithm, and backpropagation network | $MMRE$, $PRED\left(x\right)$, and other evaluation criteria |
| Fadhil et al. [10] | DolBat | COCOMO II | $MMRE$, $PRED\left(x\right)$ |
| Bilgaiyan et al. [11] | Feedforward backpropagation NN | Cascade correlation NN, Elman NN | $MMRE$, $PRED\left(x\right)$, $MSE$ |
| Mustapha et al. [12] | RF | Classical regression tree | $MMRE$, $PRED\left(x\right)$, $MdMRE$ |
| Ullah et al. [13] | Flower pollination algorithm | COCOMO II | $MMRE$ |
| Sethy and Rani [14] | TLBO | Bailey, COCOMO 2, Halstead, SEL BCO | $MMRE$ |
| Effendi et al. [15] | Optimization of COCOMO II constants | COCOMO II | $MMRE$ |
| Khan et al. [16] | Optimization of COCOMO | COCOMO | $MMRE$ |
| Desai and Mohanty [17] | ANN-COA | Other neural-network-based techniques | $MMRE$, $RMSE$ |

Effort (person-hours): actual values (Real_P20) and estimations ($\widehat{y}$) of Models 1–8.

| No. | Real_P20 | Model 1 | Model 2 | Model 3 | Model 4 | Model 5 | Model 6 | Model 7 | Model 8 |
|---|---|---|---|---|---|---|---|---|---|
| P1 | 151.85 | 157.2 | 135.9 | 108.4 | 157 | 148 | 149 | 133 | 145 |
| P2 | 95.85 | 101 | 90 | 77.2 | 102 | 101 | 102 | 105.9 | 101.9 |
| P3 | 58.65 | 64 | 48 | 51 | 60.3 | 55 | 53 | 39.8 | 49 |
| P4 | 37.1 | 39 | 35 | 30 | 47 | 41 | 65.2 | 47 | 39.9 |
| P5 | 30.7 | 54.6 | 24 | 28 | 34 | 17 | 25 | 18.2 | 20 |
| P6 | 24.6 | 46 | 8 | 21 | 55 | 31.9 | 33 | 28 | 48 |
| P7 | 13.85 | 15 | 5 | 6 | 15 | 8 | 9 | 8 | 8.2 |
| P8 | 179.65 | 186 | 171 | 175.3 | 191.9 | 206 | 200 | 189 | 196 |
| P9 | 84.05 | 83 | 82.3 | 68.2 | 103 | 78 | 74 | 72 | 76 |
| P10 | 67.2 | 72 | 52 | 52 | 86 | 81 | 84 | 96 | 97 |
| P11 | 61 | 80.5 | 41 | 46 | 64 | 64 | 49 | 58 | 44 |
| P12 | 36 | 42 | 16 | 16 | 39.2 | 52 | 58 | 58 | 44.9 |
| P13 | 25.7 | 31 | 13 | 8.7 | 34 | 22.6 | 24 | 18 | 17 |
| P14 | 19.85 | 25 | 22.8 | 19 | 25 | 28 | 31 | 15.3 | 18 |
| P15 | 184.2 | 193 | 182 | 197 | 170 | 157 | 176.3 | 181 | 179 |
| P16 | 99 | 110 | 93 | 114 | 96 | 103 | 100 | 95 | 103 |
| P17 | 197.5 | 203 | 180 | 207 | 183 | 192 | 169 | 193 | 178 |
| P18 | 96.25 | 102 | 81.7 | 101 | 83 | 101 | 111 | 105 | 109 |
| P19 | 108.75 | 115 | 93 | 109 | 93 | 72 | 78 | 93 | 93.1 |
| P20 | 111.3 | 129 | 99.2 | 115 | 113.8 | 135.8 | 118 | 116 | 118 |
| P21 | 132 | 145 | 118 | 145.9 | 113 | 105 | 103 | 109 | 113 |
| P22 | 128.4 | 135.4 | 119 | 129 | 111 | 132 | 148.1 | 135.7 | 143 |
| P23 | 152.1 | 172 | 151 | 155 | 138 | 123 | 151 | 137 | 149 |
| P24 | 84.8 | 114 | 74 | 108.6 | 82 | 81 | 81 | 105.9 | 105 |
| P25 | 183.5 | 195 | 177 | 230.2 | 171 | 193.2 | 200 | 157 | 167.2 |
| P26 | 143 | 172 | 126 | 172 | 128.4 | 161 | 161 | 148 | 152 |
| P27 | 137 | 156.1 | 121.6 | 154 | 117 | 148 | 148 | 123 | 132 |
| P28 | 168 | 175 | 150 | 165 | 160 | 192 | 179 | 181 | 174 |

| Metrics | Model 1 | Model 2 | Model 3 | Model 4 | Model 5 | Model 6 | Model 7 | Model 8 |
|---|---|---|---|---|---|---|---|---|
| $si{g}_{Left}$ | 0.901 | −0.926 | −0.488 | 0.527 | 0.059 | −0.03 | −0.172 | −0.108 |
| $si{g}_{Right}$ | 0.956 | −0.931 | 0.345 | −0.384 | 0.227 | 0.172 | −0.113 | −0.034 |
| $sig$ | 0.929 | −0.929 | −0.072 | 0.072 | 0.143 | 0.071 | −0.143 | −0.071 |
| $MMRE$ | 0.163 | 0.179 | 0.175 | 0.156 | 0.163 | 0.186 | 0.175 | 0.175 |
| $PRED\left(0.25\right)$ | 0.80 | 0.80 | 0.80 | 0.80 | 0.76 | 0.76 | 0.76 | 0.76 |

Differences between the actual and predicted efforts ($y-\widehat{y}$) for Models 1–8.

| No. | Model 1 | Model 2 | Model 3 | Model 4 | Model 5 | Model 6 | Model 7 | Model 8 |
|---|---|---|---|---|---|---|---|---|
| P1 | −5.35 | 15.95 | 43.45 | −5.15 | 3.85 | 2.85 | 18.85 | 6.85 |
| P2 | −5.15 | 5.85 | 18.65 | −6.15 | −5.15 | −6.15 | −10.05 | −6.05 |
| P3 | −5.35 | 10.65 | 7.65 | −1.65 | 3.65 | 5.65 | 18.85 | 9.65 |
| P4 | −1.9 | 2.1 | 7.1 | −9.9 | −3.9 | −28.1 | −9.9 | −2.8 |
| P5 | −23.9 | 6.7 | 2.7 | −3.3 | 13.7 | 5.7 | 12.5 | 10.7 |
| P6 | −21.4 | 16.6 | 3.6 | −30.4 | −7.3 | −8.4 | −3.4 | −23.4 |
| P7 | −1.15 | 8.85 | 7.85 | −1.15 | 5.85 | 4.85 | 5.85 | 5.65 |
| P8 | −6.35 | 8.65 | 4.35 | −12.25 | −26.35 | −20.35 | −9.35 | −16.35 |
| P9 | 1.05 | 1.75 | 15.85 | −18.95 | 6.05 | 10.05 | 12.05 | 8.05 |
| P10 | −4.8 | 15.2 | 15.2 | −18.8 | −13.8 | −16.8 | −28.8 | −29.8 |
| P11 | −19.5 | 20 | 15 | −3 | −3 | 12 | 3 | 17 |
| P12 | −6 | 20 | 20 | −3.2 | −16 | −22 | −22 | −8.9 |
| P13 | −5.3 | 12.7 | 17 | −8.3 | 3.1 | 1.7 | 7.7 | 8.7 |
| P14 | −5.15 | −2.95 | 0.85 | −5.15 | −8.15 | −11.15 | 4.55 | 1.85 |
| P15 | −8.8 | 2.2 | −12.8 | 14.2 | 27.2 | 7.9 | 3.2 | 5.2 |
| P16 | −11 | 6 | −15 | 3 | −4 | −1 | 4 | −4 |
| P17 | −5.5 | 17.5 | −9.5 | 14.5 | 5.5 | 28.5 | 4.5 | 19.5 |
| P18 | −5.75 | 14.55 | −4.75 | 13.25 | −4.75 | −14.75 | −8.75 | −12.75 |
| P19 | −6.25 | 15.75 | −0.25 | 15.75 | 36.75 | 30.75 | 15.75 | 15.65 |
| P20 | −17.7 | 12.1 | −3.7 | −2.5 | −24.5 | −6.7 | −4.7 | −6.7 |
| P21 | −13 | 14 | −13.9 | 19 | 27 | 29 | 23 | 19 |
| P22 | −7 | 9.4 | −0.6 | 17.4 | −3.6 | −19.7 | −7.3 | −14.6 |
| P23 | −19.9 | 1.1 | −2.9 | 14.1 | 29.1 | 1.1 | 15.1 | 3.1 |
| P24 | −29.2 | 10.8 | −23.8 | 2.8 | 3.8 | 3.8 | −21.1 | −20.2 |
| P25 | −11.5 | 6.5 | −46.7 | 12.5 | −9.7 | −16.5 | 26.5 | 16.3 |
| P26 | −29 | 17 | −29 | 14.6 | −18 | −18 | −5 | −9 |
| P27 | −19.1 | 15.4 | −17 | 20 | −11 | −11 | 14 | 5 |
| P28 | −7 | 18 | 3 | 8 | −24 | −11 | −13 | −6 |

| Metric | Dataset-1 | Dataset-2 |
|---|---|---|
| R-squared | 0.884 | 0.974 |
| $MMRE$ | 0.318 | 0.136 |
| $PRED\left(0.20\right)$ | 0.222 | 0.714 |
| $si{g}_{Left}$ | −0.422 | 0.429 |
| $si{g}_{Right}$ | −0.689 | 0.143 |
| $sig$ | −0.556 | 0.286 |


© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Huynh Thai, H.; Silhavy, P.; Fajkus, M.; Prokopova, Z.; Silhavy, R.
Propose-Specific Information Related to Prediction Level at x and Mean Magnitude of Relative Error: A Case Study of Software Effort Estimation. *Mathematics* **2022**, *10*, 4649.
https://doi.org/10.3390/math10244649
