Mathematics
  • Article
  • Open Access

6 May 2022

Prototype of 3D Reliability Assessment Tool Based on Deep Learning for Edge OSS Computing

1 Graduate School of Sciences and Technology for Innovation, Yamaguchi University, Yamaguchi 755-8611, Japan
2 Graduate School of Engineering, Tottori University, Tottori 680-8552, Japan
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
This article belongs to the Special Issue Mathematics and Computer Programming in 2D and 3D Open Source Software

Abstract

We focus on an estimation method based on deep learning for the fault correction time, aimed at the operational reliability assessment of open-source software (OSS) in an edge computing service environment. We then discuss fault severity levels in order to account for the difficulty of fault correction. A deep feedforward neural network is used to estimate the fault correction times. In particular, we visualize the characteristics of fault trends with three-dimensional graphs. In this way, the proposed deep-learning-based method makes large-scale fault data more recognizable from the standpoint of fault severity levels under edge OSS operation.

1. Introduction

Several researchers have discussed open-source software (OSS) reliability assessment methods [1]. Many of them are based on software reliability growth models [2,3,4,5], and various such models have been proposed for reliability assessment in the system testing phase of software development. Recently, the style of software development has undergone a paradigm shift, and OSS development is one of its most successful examples. On the other hand, OSS development has a quality problem because there is no specific testing phase. In the development and operation phases of OSS, a bug-tracking system is used in most cases.
Moreover, cloud computing as a software service is supported by many users. At present, cloud services are shifting toward services based on edge computing, and edge computing is expected to grow substantially in the future. OSS is also used in edge computing: OpenStack is a major example of OSS in cloud computing, and edge OSS components have recently been embedded in cloud OSS. In this situation, it is very important to assess operational reliability under edge OSS computing.
Considering software reliability assessment, many research papers are based on stochastic models, while several others employ AHP, fuzzy logic, and neural networks [6,7,8,9,10,11,12].
This paper discusses the software fault correction time of OSS components under an edge computing service. In particular, we analyze fault correction times based on fault severity levels for actual data sets. We then visualize the results with three-dimensional graphs produced by an estimation method based on deep learning, and we discuss the estimation results for two kinds of fault severity levels. Furthermore, we develop a prototype of a 3D reliability assessment tool based on deep learning for the edge OSS computing service. Finally, we show several numerical illustrations obtained with the developed prototype using actual large-scale fault data.
The organization of this paper is as follows:
Section 2 describes the data preprocessing for large-scale OSS fault data together with the related work; there, two fault severity levels are assumed in the proposed method, considering the operation of edge OSS computing. Section 3 presents the developed prototype tool and the estimation procedure based on deep learning. Section 4 shows several numerical illustrations obtained with the proposed method using actual data sets. Section 5 concludes the paper.

2. Data Preprocessing for Large-Scale OSS Fault Data

2.1. Related Work

Generally, the number of data points determines the degrees of freedom in statistics. In the case of big data, it is very difficult to estimate the number of faults by using stochastic models because the volume of data is huge. Past research on reliability assessment has conventionally used only the number of software faults; examples include software reliability growth models, hazard rate models, and stochastic differential equation models. In contrast, we focus on all the data sets recorded in the bug-tracking system. Therefore, we can analyze OSS reliability from the following various standpoints.
  • The software fault is caused as the result of a cause-and-effect relationship. Various data sets in terms of the cause-and-effect relationship are recorded on the bug-tracking system.
  • We can comprehend the cause-and-effect relationship by using the big data on the bug-tracking system.
  • Typical stochastic models are difficult to apply, because the fault big data include many explanatory variables, which leads to local-minimum problems in the estimation of model parameters.
  • The advance of our method beyond existing work is that it can use the fault big data directly. Moreover, our method performs automatic feature extraction from the fault big data.
We now critically discuss typical OSS reliability assessment. Many software reliability assessment models have been proposed by several researchers [2,3,4,5], and most of them use fault count data only. There is no stochastic-model-based reliability assessment method that uses the large-scale data recorded in the bug-tracking systems of OSS. Compared with other approaches, the proposed model differs in the types of data it uses. Therefore, the proposed method can judge reliability comprehensively from a multifaceted perspective.
Furthermore, several research papers have been published on cloud computing [13,14]. These papers address the scalability of hardware, such as cloud storage services and cloud scalability. In contrast, we focus on edge computing, software, and reliability. As background, the structure of edge computing is shown in Figure 1. There are several research papers on debugging methods, system architectures, and stochastic models for edge computing [15,16,17]. However, there is no reliability assessment method for the environment of edge OSS computing.
Figure 1. The structure of edge OSS computing.

2.2. OSS Data Set

There are several approaches to software reliability assessment based on neural networks. Traditionally, research comparing software reliability growth models with neural network methods has been reported [18,19]. Past research based on neural networks has especially used fault count data only, and in most cases the machine learning methods are based on time-series analysis. In contrast, the proposed method uses many different types of data relevant to software reliability analysis. The unique feature of our research is the use of two kinds of fault severity levels for the output data sets.
Several researchers have proposed deep learning algorithms. For example, application research based on deep learning for the min-cut theorem is presented in [20]. Deep learning is also used for automatic recognition in the area of sound recognition [21,22], and many deep learning algorithms have been proposed for image recognition [23,24,25]. In particular, deep learning algorithms optimized for each research area have been developed by many researchers and applied widely, as in the above-mentioned papers. We therefore focus on a deep learning approach for edge OSS reliability, and we apply deep learning as a discrete-time model to the reliability of edge OSS operation by using the fault correction time.
In this paper, we apply a deep feedforward neural network to learn the large-scale fault data recorded in the bug-tracking systems of OSS projects. We then use the following items of information to estimate the weight parameters for the fault correction time. All data for each explanatory variable are converted from character data to numerical values by using count encoding and frequency encoding.
  • Opened: $T_o$ is converted to the difference from the previous day, expressed in days;
  • Changed: $T_c$ is converted to the difference from the previous day, expressed in days;
  • Product: $F_p$ is converted to the occurrence ratio of the product name by frequency encoding;
  • Component: $F_c$ is converted to the occurrence ratio of the component name by frequency encoding;
  • Version: $F_v$ is converted to the frequency of appearance of the same version number by frequency encoding;
  • Reporter: $F_r$ is converted to the frequency of appearance of the same reporter nickname by frequency encoding;
  • Assignee: $F_a$ is converted to the frequency of appearance of the same assignee nickname by frequency encoding;
  • Severity: $F_l$ is converted to values based on the occurrence count of each fault level by frequency encoding;
  • Status: $F_s$ is converted to the occurrence ratio of the status name by frequency encoding;
  • Resolution: $F_w$, i.e., what happened to the bug, is converted to the occurrence ratio of the resolution name by frequency encoding;
  • Hardware: $F_h$ is converted to numerical values in terms of hardware by frequency encoding;
  • OS: $F_o$ is also converted to numerical values in terms of the operating system by frequency encoding;
  • Summary: $C_s$ is converted to the number of words by count encoding;
where $T_*$ denotes a value expressed in units of time (days), $F_*$ denotes values converted by frequency encoding, and $C_*$ denotes values converted by count encoding.
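As an illustration of this preprocessing, the following Python sketch shows how character-valued fields could be converted with frequency and count encoding. The column names and the pandas-based implementation are our own assumptions for demonstration, not part of the released tool.

```python
import pandas as pd

# Hypothetical excerpt of raw bug-tracking data (column names are assumptions).
raw = pd.DataFrame({
    "Component": ["nova", "neutron", "nova", "cinder", "nova"],
    "Severity":  ["High", "Medium", "High", "Medium", "Medium"],
    "Summary":   ["instance fails to boot", "port binding error",
                  "live migration hangs", "volume detach timeout",
                  "scheduler race condition"],
})

def frequency_encode(column: pd.Series) -> pd.Series:
    """Replace each category with its relative frequency of occurrence."""
    freq = column.value_counts(normalize=True)
    return column.map(freq)

def count_encode_words(column: pd.Series) -> pd.Series:
    """Replace each text field with its number of words (count encoding)."""
    return column.str.split().str.len()

encoded = pd.DataFrame({
    "F_c": frequency_encode(raw["Component"]),   # component occurrence ratio
    "F_l": frequency_encode(raw["Severity"]),    # severity occurrence ratio
    "C_s": count_encode_words(raw["Summary"]),   # number of words in summary
})
print(encoded)
```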

2.3. Data Preprocessing

We convert all the above items from character data to numerical values by using frequency and count encoding. In particular, we use the correction time of the detected faults as the output data for learning. The correction time of software faults is a useful measure of software stability. We define the instantaneous correction time of software faults as follows:

$O_k = T_k^c - T_k^o$,    (1)

where $O_k$ is the k-th instantaneous correction time of software faults, $T_k^c$ is the k-th changed date of the OSS fault, and $T_k^o$ is the k-th opened date of the OSS fault. We define $O_k$ as the objective variable of deep learning, i.e., the output value for the learning data. Thus, $O_k$ represents the output values as the instantaneous correction times of the detected faults.
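For instance, a minimal sketch of this definition, assuming the "Opened" and "Changed" fields have been parsed as dates (the field names follow the list in Section 2.2, while the helper function itself is hypothetical):

```python
import pandas as pd

def instantaneous_correction_time(opened: pd.Series, changed: pd.Series) -> pd.Series:
    """O_k = T_k^c - T_k^o, expressed in days, for each recorded fault k."""
    t_o = pd.to_datetime(opened)
    t_c = pd.to_datetime(changed)
    return (t_c - t_o).dt.days

# Example with two hypothetical fault records.
opened  = pd.Series(["2021-04-01", "2021-04-03"])
changed = pd.Series(["2021-04-05", "2021-04-10"])
print(instantaneous_correction_time(opened, changed))  # 4 and 7 days
```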
As shown in Figure 2, a characteristic of edge OSS computing is the distribution of fault severity levels: the "High" and "Medium" classes account for a remarkable number of faults. Therefore, we focus on the fault detection phenomenon in terms of severity levels. The "Medium" class denotes faults of medium severity, which we treat as the lower of the two levels considered here. Faults classified as "High" are difficult to remove from the source code and have a large impact on the OSS system; software reliability therefore depends greatly on the high-level faults. Considering the "High" faults, we define the following:

$O_k^h \in F_k^{hl}$, subject to $F_k^{hl} \subseteq F_k^{l}$,    (2)

where $F_k^{hl}$ denotes the k-th high-level fault. Similarly, focusing on the "Medium" faults, we consider $O_k^m$ to be the k-th instantaneous correction time of detected faults of the medium level. Then, we define the following:

$O_k^m \in F_k^{ml}$, subject to $F_k^{ml} \subseteq F_k^{l}$.    (3)
Figure 2. The fault severity levels for edge OSS.
Similarly, $F_k^{ml}$ denotes the k-th "Medium" fault.
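In code, this split could look as follows; this is a sketch under the assumption that severity is stored as the string labels "High" and "Medium", as on the bug-tracking system.

```python
import pandas as pd

# Hypothetical preprocessed fault table with correction times and severity labels.
faults = pd.DataFrame({
    "O_k":      [4, 7, 1, 12, 3],
    "Severity": ["High", "Medium", "Medium", "High", "Medium"],
})

# O_k^h: correction times of "High" faults; O_k^m: those of "Medium" faults.
O_high   = faults.loc[faults["Severity"] == "High",   "O_k"].reset_index(drop=True)
O_medium = faults.loc[faults["Severity"] == "Medium", "O_k"].reset_index(drop=True)
```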

3. Development of Prototype Tool

Our research group has proposed several reliability assessment tools. In particular, we have developed a three-dimensional tool for OSS reliability assessment, which makes it easy to understand reliability trends from various points of view through three-dimensional modeling. The cumulative number of detected faults $M_*(t)$ at time t in our previously proposed three-dimensional model is given as follows [26]:
$M_1(t) = R_1(t)\left\{1 - \frac{1+c}{1 + c\exp(-bt)}\exp\bigl(-bt - \sigma_1\,\omega_1(t)\bigr)\right\}$,    (4)

$M_2(t) = R_2(t)\left\{1 - \frac{1+c}{1 + c\exp(-bt)}\exp\bigl(-bt - \sigma_2\,\omega_2(t)\bigr)\right\}$,    (5)

$M_3(t) = R_3(t)\left\{1 - \frac{1+c}{1 + c\exp(-bt)}\exp\bigl(-bt - \sigma_3\,\omega_3(t)\bigr)\right\}$,    (6)
where $R_i(t)\ (i = 1, 2, 3)$ is the amount of change in the specification of each version of the OSS. In addition, $R_i(t)$ is defined as $\alpha_i e^{\beta_i t}$, where $\alpha_i\ (i = 1, 2, 3)$ is the number of latent faults in the OSS used in cloud computing and $\beta_i\ (i = 1, 2, 3)$ is the rate of specification change for each version of the OSS. We thus assume that the fault-prone specification of each OSS version grows exponentially with time t. The OSS shows a regression trend of reliability if $\beta_i$ is negative and, conversely, a reliability growth trend if $\beta_i$ is positive. Moreover, $\sigma_1$, $\sigma_2$, and $\sigma_3$ are noise factors representing the magnitude of the noisy fluctuation, and $\omega_i(t)$ is the i-th Wiener process. Furthermore, b is the detection rate per fault and c is the fault factor parameter.
In addition, the integrated equation is as follows:
$M(t) = R(t)\left\{1 - \frac{1+c}{1 + c\exp(-bt)}\exp\bigl(-bt - \sigma_1\,\omega_1(t) - \sigma_2\,\omega_2(t) - \sigma_3\,\omega_3(t)\bigr)\right\}$.    (7)
In the proposed model, assuming the independence of each noise term, the parameter $\sigma_1$ represents the failure-occurrence phenomenon due to inherent faults, $\sigma_2$ represents the network changing rate per unit time resulting from OSS cloud computing, and $\sigma_3$ represents the renewal rate per unit time resulting from big data.
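For illustration, the following sketch simulates one sample path of the integrated model in Equation (7). All parameter values below are assumptions chosen for demonstration only, not estimates fitted to the OpenStack data.

```python
import numpy as np

# Illustrative parameter values (assumptions, not fitted estimates).
alpha, beta = 1000.0, 0.01            # R(t) = alpha * exp(beta * t)
b, c = 0.05, 2.0                      # fault detection rate and fault factor
sigma = np.array([0.02, 0.01, 0.01])  # noise magnitudes sigma_1..sigma_3

rng = np.random.default_rng(0)
T, dt = 200, 1.0
t = np.arange(T) * dt

# Three independent Wiener processes omega_i(t).
omega = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(3, T)), axis=1)

R = alpha * np.exp(beta * t)
noise = (sigma[:, None] * omega).sum(axis=0)
M = R * (1.0 - (1.0 + c) / (1.0 + c * np.exp(-b * t)) * np.exp(-b * t - noise))
print(M[-1])  # sample cumulative number of detected faults at t = T
```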
In our model in Equation (7), only the effort and fault data sets are used as reliability data. On the other hand, various data sets are recorded in the bug-tracking system, and the amount of data there is huge. By using all of the data recorded in the bug-tracking system, we can take advantage of the information contained in many fault factors.
In this paper, edge OSS reliability can be understood through three-dimensional modeling from the standpoint of fault levels. The deep learning procedure used in this paper is shown in Figure 3. The steps of the proposed prototype are as follows:
Figure 3. The workflow of estimation for each fault severity level by using the procedures of our deep learning method.
  • The user of the prototype starts from the main menu by running our application. In addition, the user completes data preprocessing.
  • The user selects the calculation button. The application window then calls the Python program, which imports the tensorflow package, and the proposed deep learning algorithm is executed according to the workflow in Figure 3 (a minimal sketch of this step is given after this list).
  • After the completion of the learning phase, several reliability assessment measures are illustrated by selecting the graph button.
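As a rough illustration of the deep feedforward network called in the second step, the sketch below regresses the fault correction time on the encoded explanatory variables. The layer sizes, activation function, and dropout value are our own assumptions; the paper fixes these to common values but does not specify them here, and the random data stand in for the encoded bug-tracking records.

```python
import numpy as np
import tensorflow as tf

# Hypothetical encoded feature matrix X (T_o, T_c, F_p, ..., C_s) and target O_k.
X = np.random.rand(1000, 13).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

# A plain feedforward network with fixed activation and dropout (assumed values).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(13,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1),            # estimated fault correction time
])
model.compile(optimizer="adam", loss="mse")

# 30% of the data is held out for validation, mirroring the 30% testing data in Section 4.
history = model.fit(X, y, validation_split=0.3, epochs=10, batch_size=32, verbose=0)
print(history.history["val_loss"][-1])
```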
We use the data preprocessing method described in Section 2. For example, all data sets are converted from the form in Table 1 to that in Table 2; as reference information, Table 1 and Table 2 will help readers understand the proposed method. The "Summary" factor is omitted from Table 1 because the text is very long. By using data sets such as those in Table 1 and Table 2, we feed the data prepared in the above-mentioned step 1 into deep learning. As Table 1 and Table 2 show, the conventional reliability assessment methods cannot make use of all of these data sets, because they use only the number of detected faults.
Table 1. A part of raw data logged on the bug-tracking system.
Table 2. A part of the numeric values converted from the raw data.
Considering the structure of the software tool, UML offers several visualization diagrams, such as the class diagram, object diagram, component diagram, activity diagram, use case diagram, and sequence diagram. UML is generally used when software is developed from scratch by a software manufacturer. However, our tool is implemented in a package-based development style, which is a characteristic of our research. In the case of package-based development, we can show the structure of our prototype by using the package diagram in UML, for the following reasons:
  • Different programming languages are used (HTML, CSS, JavaScript, and Python).
  • Dynamic links are based on Node.js and the file system of the OS, such as macOS, Windows, or Linux.
  • The packages differ in scale and language.
From the above-mentioned characteristics, we show the structure of our tool by using the package diagram. Then, Figure 4 shows the package diagram of the prototype developed by using UML. Furthermore, we show the structure of data preprocessing in Figure 5.
Figure 4. The package diagram of the prototype developed by using UML.
Figure 5. The structure of data preprocessing.
Our tool is developed as a prototype. We believe that the task of researchers extends only to providing a prototype, together with an application framework for the proposed method; therefore, we present such a framework in this paper. A completed tool can then be developed straightforwardly by software developers and business practitioners. Our prototype will help users and developers assess reliability in the operation of the edge OSS service. In particular, the activation functions and dropout values are always set to the same values in this paper. The reason is that there are many OSS projects, and many hyperparameters and functions would otherwise have to be changed according to the various situations of OSS computing. Therefore, we have fixed the activation function and dropout value, considering standardization across OSS projects.

4. Performance Illustrations of the Developed 3D Application

4.1. Data Set for Edge OSS Computing

We focus on the OpenStack Project [27], which includes several edge components. In this paper, we show numerical examples using data sets collected on the assumption of an edge OSS service. The data used in this paper are collected from the bug-tracking system.
A demonstration of our prototype tool is available at the following URL; however, the calculation function is disabled for security reasons: http://www.tam.eee.yamaguchi-u.ac.jp/js/ec/, accessed on 24 February 2022.
Our prototype tool was released as OSS under the GNU General Public License (GPL) in March 2022. The source code of our tool is available from "SOFTWARE" at the following URL: http://www.tam.eee.yamaguchi-u.ac.jp/, accessed on 24 February 2022.
Table 1 and Table 2 show parts of the full data sets. The data comprise about 20,000 lines, i.e., about 140,000 data items in total, for the specified versions. In practice, users can obtain even more data from the various OSS projects.

4.2. Estimation Results

We analyze the fault big data in terms of fault correction time for the OSS component of edge computing included in cloud computing such as OpenStack [27]. The fault correction times are obtained from the "Opened" and "Changed" factors in the bug-tracking system. In this paper, we discuss two fault severity levels, "High" and "Medium".
First, Figure 6 shows the main screen of our tool, Figure 7 shows its menu, and Figure 8 shows its simplified readme screen. Our tool is structured around a dynamic link based on NW.js and Python; therefore, the simple menus shown in Figure 6, Figure 7 and Figure 8 can be programmed with HTML and JavaScript code.
Figure 6. The main screen of our tool.
Figure 7. The menu of our tool.
Figure 8. The readme screen of our tool.
Figure 9 and Figure 10 show overall pictures of the estimated errors between validation and training in the case of 30% testing data. From Figure 9 and Figure 10, we find that the validation and training errors fit well in the 30% case; in particular, the error of the "Medium" class fits better than that of the "High" class. Similarly, Figure 11 and Figure 12 show the estimated errors between validation and training for the high and medium levels with 30% testing data, respectively. From Figure 11 and Figure 12, we find that there is no indication of overfitting.
Figure 9. The overall picture of the estimated error between validation and training in case of 30% testing data.
Figure 10. Another angle of the estimated error between validation and training in case of 30% testing data.
Figure 11. The estimated error between validation and training in case of high-level 30% testing data.
Figure 12. The estimated error between validation and training in case of medium-level 30% testing data.
Moreover, Figure 13 and Figure 14 show overall pictures of the estimated fault correction times in the case of 30% testing data. From Figure 13 and Figure 14, we can confirm the scatter of the estimated fault correction times for each fault level. We find that many "High" class faults occur in the early stage of operation, whereas many "Medium" class faults occur in the later stage. Similarly, Figure 15 and Figure 16 show the estimated fault correction times for the high and medium levels with 30% testing data, respectively. From Figure 15 and Figure 16, we find that the variations of the "High" and "Medium" classes are almost the same.
Figure 13. The overall picture of the estimated fault correction time in case of 30% testing data.
Figure 14. Another angle of the estimated fault correction time in case of 30% testing data.
Figure 15. The estimated fault correction time in case of high-level 30% testing data.
Figure 16. The estimated fault correction time in case of medium-level 30% testing data.
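A three-dimensional scatter plot of this kind can be drawn, for example, with matplotlib. The axes chosen below (fault index, severity level, estimated correction time) and the random data are assumptions intended only to mirror the style of Figures 13-16, not a reproduction of the tool's output.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical estimation results: fault index, severity (0 = Medium, 1 = High),
# and estimated correction time in days.
rng = np.random.default_rng(1)
idx = np.arange(300)
level = rng.integers(0, 2, size=300)
corr_time = rng.gamma(shape=2.0, scale=5.0, size=300)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(idx, level, corr_time, c=level, cmap="coolwarm", s=10)
ax.set_xlabel("Fault index")
ax.set_ylabel("Severity level (0 = Medium, 1 = High)")
ax.set_zlabel("Estimated correction time [days]")
plt.show()
```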
Furthermore, the overall pictures of the estimated cumulative fault correction times in the case of 30% testing data are shown in Figure 17 and Figure 18, respectively. From Figure 9, Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16, Figure 17 and Figure 18, we find that the condition of learning is stable on the whole. In particular, the number of faults in the “High” class becomes large in the early stage of operation.
Figure 17. The overall picture of the estimated cumulative fault correction time in case of 30% testing data.
Figure 18. Another angle of the estimated cumulative fault correction time in case of 30% testing data.
From the above-mentioned results, we confirm that the developed prototype tool for reliability assessment based on deep learning is useful for estimating reliability in the near future. In particular, the advantage of our method is that it can make use of all the data in the bug-tracking system.

4.3. Comparison Results

As explained in Section 2, our method differs from reliability assessment methods based on typical stochastic models. However, we can compare our method with a method based on a typical neural network as machine learning. Figure 19 shows the estimated error between validation and training as the comparison result in the case of 30% testing data. From Figure 19, we find that the error becomes large for the "Medium" class.
Figure 19. The estimated error between validation and training as comparison results in case of 30% testing data.

5. Concluding Remarks

In the operation of a cloud service, several edge OSS components are embedded in cloud OSS computing. The bug-tracking systems of OSS record several severity levels of software faults. As a characteristic of edge OSS, the "High" and "Medium" classes are the most influential fault severity levels. The reliability of edge OSS can be better assessed and improved if the trend of software fault correction times is understood. We therefore discussed an estimation method for fault correction times.
In this paper, we have proposed an estimation method for fault correction times for two kinds of fault severity levels. Estimating fault correction times will help OSS managers assess OSS reliability in an edge computing service environment. In addition, the proposed deep-learning-based method considering fault severity levels has been discussed in this paper. In particular, the proposed method can comprehend the reliability trend based on the fault correction times for the main fault severity levels.
Finally, this paper has discussed the trend of OSS faults for edge computing. In addition, we have developed the prototype of a software tool based on the proposed method by using actual edge OSS data as follows:
  • The comprehension of the trend of large-scale OSS fault levels as data preprocessing.
  • The estimation of fault correction times based on two-stage deep learning.
  • The development of a prototype reliability assessment tool based on deep learning that can be used by users who are not familiar with deep learning.
The proposed method and prototype will be helpful as assessment measures of reliability control for an edge OSS service in the operation phase.

Author Contributions

Conceptualization, Y.T. and S.Y.; methodology, Y.T. and S.Y.; software, Y.T.; validation, Y.T. and S.Y.; data curation, Y.T.; writing—review and editing, Y.T. and S.Y.; visualization, Y.T.; project administration, Y.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the JSPS KAKENHI Grant No. 20K11799 in Japan.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data used in this study were collected from the OpenStack Project [27].

Acknowledgments

This work was supported in part by the JSPS KAKENHI Grant No. 20K11799 in Japan.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yamada, S.; Tamura, Y. OSS Reliability Measurement and Assessment; Springer International Publishing: Cham, Switzerland, 2016. [Google Scholar]
  2. Lyu, M.R. (Ed.) Handbook of Software Reliability Engineering; IEEE Computer Society Press: Los Alamitos, CA, USA, 1996. [Google Scholar]
  3. Yamada, S. Software Reliability Modeling: Fundamentals and Applications; Springer: Tokyo, Japan; Heidelberg, Germany, 2014. [Google Scholar]
  4. Kapur, P.K.; Pham, H.; Gupta, A.; Jha, P.C. Software Reliability Assessment with OR Applications; Springer: London, UK, 2011. [Google Scholar]
  5. Kingma, D.P.; Rezende, D.J.; Mohamed, S.; Welling, M. Semi-supervised learning with deep generative models. In Proceedings of the 27th International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014. [Google Scholar]
  6. Sahu, K.; Srivastava, R.K. Revisiting software reliability, data management, analytics and innovation. In Advances in Intelligent Systems and Computing; Springer: Singapore, 2019; Volume 808. [Google Scholar]
  7. Sahu, K.; Srivastava, R.K. Soft computing approach for prediction of software reliability. ICIC Express Lett. 2018, 12, 1213–1222. [Google Scholar]
  8. Ji, C.; Su, X.; Qin, Z.; Nawaz, A. Probability analysis of construction risk based on noisy-or gate bayesian networks. Reliab. Eng. Syst. Saf. 2022, 217, 107974. [Google Scholar] [CrossRef]
  9. Sahu, K.; Srivastava, R.K. Needs and importance of reliability prediction: An industrial perspective. Inf. Sci. Lett. 2020, 9, 1–5. [Google Scholar]
  10. Sahu, K.; Srivastava, R.K. Predicting software bugs of newly and large datasets through a unified neuro-fuzzy approach: Reliability perspective. Adv. Math. Sci. J. 2021, 10, 543–555. [Google Scholar] [CrossRef]
  11. Türk, A.; Özkök, M. Shipyard location selection based on fuzzy AHP and TOPSIS. J. Intell. Fuzzy Syst. 2020, 39, 4557–4576. [Google Scholar] [CrossRef]
  12. Abuhamdah, A.; Boulila, W.; Jaradat, G.M.; Quteishat, A.M.; Alsmadi, M.K.; Almarashdeh, I.A. A novel population-based local search for nurse rostering problem. Int. J. Electr. Comput. Eng. 2021, 11, 471–480. [Google Scholar] [CrossRef]
  13. Ibrahim, I.M.; Mostafa, M.G.M.; El-Din, S.H.N.; Elgohary, R.; Faheem, H. A robust generic multi-authority attributes management system for cloud storage services. IEEE Trans. Cloud Comput. 2018, 9, 435–446. [Google Scholar] [CrossRef]
  14. Al-Said, A.A.; Andras, P. Scalability analysis comparisons of cloud-based software services. J. Cloud Comput. Adv. Syst. Appl. 2019, 8, 1–17. [Google Scholar] [CrossRef]
  15. Ozcan, M.O.; Odaci, F.; Ari, I. Remote debugging for containerized applications in edge computing environments. In Proceedings of the 2019 IEEE International Conference on Edge Computing (EDGE), Milan, Italy, 8–13 July 2019; pp. 30–32. [Google Scholar] [CrossRef]
  16. Hu, P.; Chen, W. Software-defined edge computing (SDEC): Principles, open system architecture and challenges. In Proceedings of the 2019 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), Leicester, UK, 19–23 August 2019; pp. 8–16. [Google Scholar] [CrossRef]
  17. Alsenani, Y.; Crosby, G.; Velasco, T. SaRa: A stochastic model to estimate reliability of edge resources in volunteer cloud. In Proceedings of the 2018 IEEE International Conference on Edge Computing (EDGE), San Francisco, CA, USA, 2–7 July 2018; pp. 121–124. [Google Scholar] [CrossRef]
  18. Karunanithi, N.; Whitley, D.; Malaiya, Y.K. Using neural networks in reliability prediction. IEEE Softw. 1992, 9, 53–59. [Google Scholar] [CrossRef]
  19. Dohi, T.; Nishio, Y.; Osaki, S. Optimal software release scheduling based on artificial neural networks. Ann. Softw. Eng. 1999, 8, 167–185. [Google Scholar] [CrossRef]
  20. Blum, A.; Lafferty, J.; Rwebangira, M.R.; Reddy, R. Semi-supervised learning using randomized mincuts. In Proceedings of the International Conference on Machine Learning, Banff, AB, Canada, 4–8 July 2004. [Google Scholar]
  21. George, E.D.; Dong, Y.; Li, D.; Alex, A. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Trans. Audio Speech Lang. Process. 2012, 20, 30–42. [Google Scholar]
  22. Vincent, P.; Larochelle, H.; Lajoie, I.; Bengio, Y.; Manzagol, P.A. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res. 2010, 11, 3371–3408. [Google Scholar]
  23. Martinez, H.P.; Bengio, Y.; Yannakakis, G.N. Learning deep physiological models of affect. IEEE Comput. Intell. Mag. 2013, 8, 20–33. [Google Scholar] [CrossRef]
  24. Hutchinson, B.; Deng, L.; Yu, D. Tensor deep stacking networks. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1944–1957. [Google Scholar] [CrossRef] [PubMed]
  25. Kingma, D.P.; Ba, J.L. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015; pp. 1–15. [Google Scholar]
  26. Tamura, Y.; Yamada, S. Multi-dimensional software tool for OSS project management considering cloud with big data. Int. J. Reliab. Qual. Saf. Eng. 2018, 25, 1850014-1–1850014-16. [Google Scholar] [CrossRef]
  27. The OpenStack Project, Build the Future of Open Infrastructure. Available online: https://www.openstack.org/ (accessed on 24 February 2022).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
