GUI Component Detection-Based Automated Software Crash Diagnosis
Abstract
1. Introduction
- An automated crash-diagnosis method that extracts GUI components using an object-detection technique, and is therefore applicable to any GUI-based software, is proposed.
- During crash diagnosis, a state transition graph (STG) is generated to avoid meaningless test cases by structuring changes in the software state from the class, location, and size of the screen and its GUI components.
- The approach supports cross-platform black-box testing.
- Quantitative and empirical crash diagnoses were conducted on open-source Android application datasets.
- An application was developed and tested to provide an environment for detecting multiple crashes.
- The STG generated during crash diagnosis was used as a training dataset, laying the groundwork for research on reinforcement-learning-based GUI software-crash-diagnosis techniques.
2. Related Work
3. Overall Approach
3.1. Step 1: Construction of the STG
3.1.1. Extraction of GUI Components
3.1.2. Generation of STG
- No change on the screen;
- Crash;
- OOA;
- Maximum steps exceeded;
- No executable action.
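The five conditions above terminate exploration of a path. A minimal sketch of how such a screen-status check might be organized is shown below; the enum values, function name, and parameters are illustrative assumptions, not the authors' implementation.

```python
from enum import Enum, auto

class ScreenStatus(Enum):
    """Terminal states that stop STG exploration (per the list above)."""
    NO_CHANGE = auto()   # the screen did not change after the action
    CRASH = auto()       # the application crashed
    OOA = auto()         # control left the application under test
    MAX_STEPS = auto()   # the exploration step budget is exhausted
    NO_ACTION = auto()   # no executable action remains on this screen

def screen_analysis(foreground_pkg, target_pkg, crash_dialog_visible,
                    screen_changed, step, max_step, has_actions):
    """Return a terminal ScreenStatus, or None to continue exploring."""
    if crash_dialog_visible:
        return ScreenStatus.CRASH
    if foreground_pkg != target_pkg:
        return ScreenStatus.OOA
    if not screen_changed:
        return ScreenStatus.NO_CHANGE
    if step >= max_step:
        return ScreenStatus.MAX_STEPS
    if not has_actions:
        return ScreenStatus.NO_ACTION
    return None
```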
Algorithm 1: Configure Initial STG
Input: device, model, application
1   device.install(application)
2   nodeList = new List()
3   lastNode = null
4   step = 0
5   while step < MAX_STEP                    // iterate until MAX_STEP is reached
6       image = device.getScreen()
7       screenStatus = screenAnalysis(image)
8       if screenStatus != null              // check for crash, OOA, or state change
9           lastNode.getAction(index=0).setStatus(screenStatus)
10          break
11      node = makeNode(image)               // generate a node for the graph
12      nodeList.add(node)
13      if lastNode != null                  // not the initial node: link the previous action
14          lastNode.getAction(index=0).setNode(node)
15      UI_List = GUI_Component_detect(model, image)   // detect GUI components
16      actionList = makeActions(UI_List)
17      node.addActions(actionList)
18      command = node.getAction(index=0).getCommand(device_type=device.getType())   // create a command matching the device type
19      device.execute(command)
20      lastNode = node
21      step = step + 1
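Algorithm 1 can be sketched in runnable Python. The `Node`/`Action` classes, the device interface, and the callback names below are illustrative assumptions standing in for the authors' implementation; the detector and screen-analysis functions are injected as callables.

```python
class Action:
    """An executable GUI action; records the node (or status) it leads to."""
    def __init__(self, command):
        self.command = command
        self.node = None      # node reached by executing this action
        self.status = None    # terminal status (crash, OOA, ...), if any

class Node:
    """One software state in the STG: a screen image plus its actions."""
    def __init__(self, image):
        self.image = image
        self.actions = []
    def get_action(self, index):
        return self.actions[index]

def configure_initial_stg(device, detect, screen_analysis, max_step=10):
    """Algorithm 1: seed the STG by walking one path of first actions."""
    node_list, last_node, step = [], None, 0
    while step < max_step:
        image = device.get_screen()
        status = screen_analysis(image)
        if status is not None:            # crash / OOA / no state change
            last_node.get_action(0).status = status
            break
        node = Node(image)                # generate a node for the graph
        node_list.append(node)
        if last_node is not None:         # link the previous action to it
            last_node.get_action(0).node = node
        commands = detect(image)          # GUI-component detection -> commands
        node.actions = [Action(cmd) for cmd in commands]
        device.execute(node.get_action(0).command)
        last_node = node
        step += 1
    return node_list
```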
3.2. Step 2: Testing with STG
3.2.1. STG-Based Testing Process
Algorithm 2: STG-based testing
Input: device, model, nodeList, application, testScenarioList
1   startNode = nodeList.get(index=0)
2   while startNode.isClosed() != True       // iterate until nothing remains to be done at the initial node
3       device.restart(application)
4       step, nowNode, history = doTest(testScenarioList)
5       while step < MAX_STEP                // execute while the step is smaller than MAX_STEP
6           image = device.getScreen()
7           screenStatus = screenAnalysis(image)
8           if screenStatus != null
9               nowNode.setStatus(screenStatus)
10              break
11          nowNode = getAddConnectNode(nowNode, image)   // replace the existing node with a new one and connect it
12          actions = nowNode.getActions()   // non-closed actions
13          if actions == null
14              nowNode.setClose()
15              break
16          action, others = select(actions)
17          add_scenario(history, others)
18          history.add(action)
19          device.execute(action.command())
20          step = step + 1
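Lines 16–17 of Algorithm 2 are the branching step: one unexplored action is taken now, and each remaining alternative is queued as a future scenario that replays the current history first. A small sketch of that bookkeeping, under the assumption that actions carry a `closed` flag and scenarios are action lists (names are hypothetical):

```python
def select(actions):
    """Pick the first unexplored (non-closed) action on the current node;
    return it together with the remaining unexplored alternatives."""
    open_actions = [a for a in actions if not a.get("closed", False)]
    return open_actions[0], open_actions[1:]

def add_scenario(history, others, test_scenario_list):
    """Queue one future scenario per alternative action: replay the
    current action history, then take that alternative instead."""
    for alt in others:
        test_scenario_list.append(history + [alt])
```

Queued scenarios let each restart of the application (line 3) fast-forward to an unexplored branch instead of re-searching from scratch.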
3.2.2. Reporting
4. Experiment
4.1. Configuration of Experimental Environment
4.1.1. Design for Performance Testing of the GUI-Component Detection Model
4.1.2. Design for Testing Open-Source Applications
4.1.3. Crash Detection Testing Design
4.2. Experimental Results
4.2.1. Results of Performance Evaluation of GUI-Component Detection Model
4.2.2. Results of Open-Source Application Testing Performance
4.2.3. Crash-Detection Testing Results
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Basili, V.R. Software development: A paradigm for the future. In Proceedings of the Thirteenth Annual International Computer Software & Applications Conference, Orlando, FL, USA, 20–22 September 1989; pp. 471–485. [Google Scholar]
- Yu, J. Research process on software development model. IOP Conf. Ser. Mater. Sci. Eng. 2018, 394, 032045. [Google Scholar] [CrossRef]
- Dingsøyr, T.; Moe, N.B. Exploring software development at the very large-scale: A revelatory case study and research agenda for agile method adaptation. Empir. Softw. Eng. 2018, 23, 490–520. [Google Scholar] [CrossRef]
- Hu, C.; Neamtiu, I. Automating GUI testing for Android applications. In Proceedings of the 6th International Workshop on Automation of Software Test, Honolulu, HI, USA, 23–24 May 2011; Association for Computing Machinery: New York, NY, USA; pp. 77–83. [Google Scholar]
- Sui, Y.; Zhang, Y. Event trace reduction for effective bug replay of Android apps via differential GUI state analysis. In Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, Tallinn, Estonia, 26–30 August 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 1095–1099. [Google Scholar]
- Moran, K. Enhancing android application bug reporting. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, Bergamo, Italy, 30 August–4 September 2015; Association for Computing Machinery: New York, NY, USA; pp. 1045–1047. [Google Scholar]
- Ko, Y.; Zhu, B. Fuzzing with automatically controlled interleavings to detect concurrency bugs. J. Syst. Softw. 2022, 191, 111379. [Google Scholar] [CrossRef]
- Jovic, M.; Adamoli, A. Catch me if you can: Performance bug detection in the wild. In Proceedings of the 2011 ACM International Conference on Object Oriented Programming Systems Languages and Applications, Portland, OR, USA, 22–27 October 2011; Association for Computing Machinery: New York, NY, USA, 2011; pp. 155–170. [Google Scholar]
- Choi, W.; Necula, G. Guided gui testing of android apps with minimal restart and approximate learning. ACM Sigplan Not. 2013, 48, 623–640. [Google Scholar] [CrossRef]
- Dong, Z.; Böhme, M. Time-travel testing of android apps. In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering, New York, NY, USA, 27 June–19 July 2020; pp. 481–492. [Google Scholar]
- Gu, T.; Sun, C. Practical GUI testing of Android applications via model abstraction and refinement. In Proceedings of the 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE), Montreal, QC, Canada, 25–31 May 2019; pp. 269–280. [Google Scholar]
- Wang, J.; Jiang, Y. ComboDroid: Generating high-quality test inputs for Android apps via use case combinations. In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering, Seoul, Republic of Korea, 5–11 October 2020; pp. 469–480. [Google Scholar]
- Machiry, A.; Tahiliani, R. Dynodroid: An input generation system for android apps. In Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering, Saint Petersburg, Russia, 18–26 August 2013; Association for Computing Machinery: New York, NY, USA, 2013; pp. 224–234. [Google Scholar]
- Mao, K.; Harman, M. Sapienz: Multi-objective automated testing for android applications. In Proceedings of the 25th International Symposium on Software Testing and Analysis, Saarbrücken, Germany, 18–20 July 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 94–105. [Google Scholar]
- Pan, M.; Huang, A. Reinforcement learning based curiosity-driven testing of Android applications. In Proceedings of the 29th ACM SIGSOFT International Symposium on Software Testing and Analysis, Virtual Event, 18–22 July 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 153–164. [Google Scholar]
- Su, T.; Meng, G. Guided, stochastic model-based GUI testing of Android apps. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering, Paderborn, Germany, 4–8 September 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 245–256. [Google Scholar]
- Zheng, Y.; Xie, X. Wuji: Automatic online combat game testing using evolutionary deep reinforcement learning. In Proceedings of the 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE), San Diego, CA, USA, 10–15 November 2019; pp. 772–784. [Google Scholar]
- Dolan, R.J.; Matthews, J.M. Maximizing the utility of customer product testing: Beta test design and management. J. Prod. Innov. Manag. 1993, 10, 318–330. [Google Scholar] [CrossRef]
- Pelivani, E.; Cico, B. A comparative study of automation testing tools for web applications. In Proceedings of the 2021 10th Mediterranean Conference on Embedded Computing (MECO), Budva, Montenegro, 7–11 June 2021; pp. 1–6. [Google Scholar]
- Jiang, Z.; Scheibe, K.P. The Economics of Public Beta Testing. Decis. Sci. 2017, 48, 150–175. [Google Scholar] [CrossRef]
- Lamkanfi, A.; Demeyer, S. Predicting the severity of a reported bug. In Proceedings of the 2010 7th IEEE Working Conference on Mining Software Repositories (MSR 2010), Cape Town, South Africa, 2–3 May 2010; pp. 1–10. [Google Scholar]
- Sharma, M.; Kumari, M. Multiattribute based machine learning models for severity prediction in cross project context. In Proceedings of the Computational Science and Its Applications–ICCSA 2014: 14th International Conference, Guimarães, Portugal, 30 June–3 July 2014; pp. 227–241. [Google Scholar]
- Chaturvedi, K.K.; Singh, V.B. Determining bug severity using machine learning techniques. In Proceedings of the 2012 CSI Sixth International Conference on Software Engineering (CONSEG), Indore, India, 5–7 September 2012; pp. 1–6. [Google Scholar]
- Lamkanfi, A.; Demeyer, S. Comparing mining algorithms for predicting the severity of a reported bug. In Proceedings of the 2011 15th European Conference on Software Maintenance and Reengineering, Oldenburg, Germany, 1–4 March 2011; pp. 249–258. [Google Scholar]
- Menzies, T.; Marcus, A. Automated severity assessment of software defect reports. In Proceedings of the 2008 IEEE International Conference on Software Maintenance, Beijing, China, 28 September–4 October 2008; pp. 346–355. [Google Scholar]
- Tian, Y.; Lo, D. Information retrieval based nearest neighbor classification for fine-grained bug severity prediction. In Proceedings of the 2012 19th Working Conference on Reverse Engineering, Kingston, ON, Canada, 15–18 October 2012; pp. 215–224. [Google Scholar]
- Yatskiv, S.; Voytyuk, I. Improved method of software automation testing based on the robotic process automation technology. In Proceedings of the 2019 9th International Conference on Advanced Computer Information Technologies (ACIT), Ceske Budejovice, Czech Republic, 5–7 June 2019; pp. 293–296. [Google Scholar]
- Ma, Y.W.; Lin, D.P. System design and development for robotic process automation. In Proceedings of the 2019 IEEE International Conference on Smart Cloud (SmartCloud), Tokyo, Japan, 10–12 December 2019; pp. 187–189. [Google Scholar]
- Maalla, A. Development prospect and application feasibility analysis of robotic process automation. In Proceedings of the 2019 IEEE 4th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chengdu, China, 20–22 December 2019; pp. 2714–2717. [Google Scholar]
- Yatskiv, N.; Yatskiv, S. Method of robotic process automation in software testing using artificial intelligence. In Proceedings of the 2020 10th International Conference on Advanced Computer Information Technologies (ACIT), Deggendorf, Germany, 13–15 May 2020; pp. 501–504. [Google Scholar]
- Battina, D.S. Artificial intelligence in software test automation: A systematic literature review. Int. J. Emerg. Technol. Innov. Res. 2019, 6, 2349–5162. [Google Scholar]
- Bajammal, M.; Stocco, A. A survey on the use of computer vision to improve software engineering tasks. IEEE Trans. Softw. Eng. 2020, 48, 1722–1742. [Google Scholar] [CrossRef]
- Jia, L.; Dong, W. Bug Finder Evaluation Guided Program Analysis Improvement. In Proceedings of the 2019 IEEE 7th International Conference on Computer Science and Network Technology (ICCSNT), Dalian, China, 19–20 October 2019; pp. 122–125. [Google Scholar]
- Ranorex. Available online: https://www.ranorex.com (accessed on 27 April 2023).
- Sikulix. Available online: http://sikulix.com/ (accessed on 27 April 2023).
- Krizhevsky, A.; Sutskever, I. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
- Szegedy, C.; Liu, W. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
- LeCun, Y.; Bottou, L. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
- He, K.; Zhang, X. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
- Hinton, G.E.; Osindero, S. A fast learning algorithm for deep belief nets. Neural Comput. 2006, 18, 1527–1554. [Google Scholar] [CrossRef] [PubMed]
- Hinton, G.E.; Salakhutdinov, R.R. Reducing the dimensionality of data with neural networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef] [PubMed]
- Everingham, M.; Eslami, S.A. The pascal visual object classes challenge: A retrospective. Int. J. Comput. Vis. 2015, 111, 98–136. [Google Scholar] [CrossRef]
- Coco Detection Challenge (Bounding Box). Available online: https://competitions.codalab.org/competitions/20794 (accessed on 14 March 2023).
- ImageNet. Imagenet Object Localization Challenge. Available online: https://www.kaggle.com/c/imagenet-object-localization-challenge (accessed on 14 March 2023).
- Google Research. Open Images 2019—Object Detection Challenge. Available online: https://www.kaggle.com/c/open-images-2019-object-detection (accessed on 14 March 2023).
- Bouma-Sims, E.; Reaves, B. A First Look at Scams on YouTube. arXiv 2021, arXiv:2104.06515. [Google Scholar]
- Cortes, C.; Vapnik, V. Support vector machine. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
- Zhao, Z.Q.; Zheng, P. Object detection with deep learning: A review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3212–3232. [Google Scholar] [CrossRef]
- Wang, C.Y.; Bochkovskiy, A. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv 2022, arXiv:2207.02696. [Google Scholar]
- Ren, S.; He, K. Faster r-cnn: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28, 1–14. [Google Scholar] [CrossRef]
- He, K.; Gkioxari, G. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
- Sharif, A.; Marijan, D. DeepOrder: Deep learning for test case prioritization in continuous integration testing. In Proceedings of the 2021 IEEE International Conference on Software Maintenance and Evolution (ICSME), Luxembourg, 27 September–1 October 2021; pp. 525–534. [Google Scholar]
- Qiao, L.; Li, X. Deep learning based software defect prediction. Neurocomputing 2020, 385, 100–110. [Google Scholar] [CrossRef]
- Kim, J.; Kwon, M. Generating test input with deep reinforcement learning. In Proceedings of the 11th International Workshop on Search-Based Software Testing, Gothenburg, Sweden, 28–29 May 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 51–58. [Google Scholar]
- Liu, M.; Li, K. DeepSQLi: Deep semantic learning for testing SQL injection. In Proceedings of the 29th ACM SIGSOFT International Symposium on Software Testing and Analysis, Virtual Event, 18–22 July 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 286–297. [Google Scholar]
- Mirabella, A.G.; Martin-Lopez, A. Deep learning-based prediction of test input validity for restful apis. In Proceedings of the 2021 IEEE/ACM Third International Workshop on Deep Learning for Testing and Testing for Deep Learning, Madrid, Spain, 1 June 2021; pp. 9–16. [Google Scholar]
- Oz, M.; Kaya, C. On the use of generative deep learning approaches for generating hidden test scripts. Int. J. Softw. Eng. Knowl. Eng. 2021, 31, 1447–1468. [Google Scholar] [CrossRef]
- Amalfitano, D.; Fasolino, A.R. MobiGUITAR: Automated model-based testing of mobile apps. IEEE Softw. 2015, 32, 53–59. [Google Scholar] [CrossRef]
- Amalfitano, D.; Riccio, V. Combining automated GUI exploration of android apps with capture and replay through machine learning. Inf. Softw. Technol. 2019, 105, 95–116. [Google Scholar] [CrossRef]
- Wendland, T.; Sun, J. Andror2: A dataset of manually-reproduced bug reports for android apps. In Proceedings of the 2021 IEEE/ACM 18th International Conference on Mining Software Repositories (MSR), Madrid, Spain, 17–19 May 2021; pp. 600–604. [Google Scholar]
- UI/Application Exerciser Monkey. Available online: https://developer.android.com/studio/test/monkey (accessed on 14 March 2023).
- Pilgun, A.; Gadyatskaya, O. Fine-grained code coverage measurement in automated black-box android testing. ACM Trans. Softw. Eng. Methodol. (TOSEM) 2020, 29, 1–35. [Google Scholar] [CrossRef]
- MLOps. Available online: https://cloud.google.com/architecture/mlops-continuous-delivery-and-automation-pipelines-in-machine-learning (accessed on 14 March 2023).
Item | Description
---|---
Number of Android repositories | 100
Number of images collected from XML | 295
Number of labeled datasets | 726
Data processing |
Data filtering |
Ratio of training dataset | 80%
Ratio of test dataset | 10%
Ratio of validation dataset | 10%
Link to the dataset | https://github.com/sd05031/Dataset_for_GUI_components (accessed on 21 May 2023)
Item | Description
---|---
Implementation framework | Deep learning: PyTorch; object detection: MMDetection
GUI-component detection model | Faster R-CNN
Component types to be detected | 13 types (Button, Image, TextView, ToggleButton, RadioButton, EditText, ProgressBar, SeekBar, RatingBar, ScrollView, Switch, Spinner, CheckBox)
Average time for component detection | 0.28 s
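Detected components must be turned into executable device actions (the `makeActions` step of Algorithm 1). A hypothetical sketch of such a mapping is shown below, assuming detections arrive as (class, bounding box) tuples and that commands are issued as adb `input` shell commands; the class-to-gesture assignments are illustrative, not the authors' exact rules.

```python
# Component classes that respond to a simple tap (assumed grouping).
TAPPABLE = {"Button", "ToggleButton", "RadioButton", "Switch",
            "CheckBox", "Spinner", "Image", "TextView"}

def make_actions(ui_list):
    """Map detected components (cls, x1, y1, x2, y2) to adb input commands."""
    actions = []
    for cls, x1, y1, x2, y2 in ui_list:
        cx, cy = (x1 + x2) // 2, (y1 + y2) // 2   # bounding-box center
        if cls in TAPPABLE:
            actions.append(f"input tap {cx} {cy}")
        elif cls == "EditText":
            # Focus the field, then type placeholder text.
            actions.append(f"input tap {cx} {cy} && input text test")
        elif cls in ("SeekBar", "ScrollView", "RatingBar"):
            # Horizontal drag across the component.
            actions.append(f"input swipe {x1} {cy} {x2} {cy}")
        # ProgressBar is display-only: no action is generated.
    return actions
```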
Item | Description |
---|---|
Device name | Android Emulator |
Device type | Virtual device |
OS | Android API 26 Android 8.0 OS (Oreo) |
Resolution | Width: 1080 px Length: 2280 px |
RAM capacity | 1536 MB (1.5 GB) |
GPU hardware | Yes |
Item | Description
---|---
Name of the dataset | AndroR2
Total number of published bug reports | 90
Bug types | Crash, Output, GUI
Data feed | Files containing bugs (APK), bug reports, reproduction scripts (Java), metadata (JSON)
Bug reports used for crash-detection testing | Three crash-type bug reports: #7, #11, and #50
Bug report ID | Bug type | Application name | Android OS version reported | GUI actions in bug scenario
---|---|---|---|---
7 | Crash | HAB Panel Viewer | 8.1 | 5
11 | Crash | Noad Player | 6.1 | 2
50 | Crash | Berkeley Mobile | 9.0 | 1
Item | Description
---|---
Measurement tool | Android Code Coverage Tool (ACV Tool) [62]
Data applied | AndroR2 [60], bug report #62 (Output type)
Baseline | Monkey [61]
Length of the test scenario | 5 and 10
Number of events set for Monkey test | 100, 500, and 1000
Seed value of Monkey | 1 and 777
Measurement object | Total code coverage
Item | Content |
---|---|
Platform | Android Native |
Programming language | Kotlin |
Minimum OS requirements | Android API 26 (Android 8.0, Oreo) |
Permission requests | None |
Number of activities | 28 |
Number of action events | 32 |
Number of crash events | 5 |
Crash 1 event activity | A0 |
Crash 2 event activity | F0 |
Crash 3 event activity | G1 |
Crash 4 event activity | H0 |
Crash 5 event activity | J2 |
Test item | Value |
---|---|
Macro precision | 0.8529 |
Macro recall | 0.8349 |
Macro F1-score | 0.8430 |
Accuracy | 0.8659 |
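The macro-averaged metrics above can be computed from per-class true-positive, false-positive, and false-negative counts. A short sketch follows; it assumes the common convention of averaging per-class F1 scores (conventions differ, so this may not match the authors' exact computation).

```python
def macro_scores(per_class):
    """Macro precision/recall/F1 from per-class (tp, fp, fn) counts.
    Macro-F1 here is the unweighted mean of per-class F1 scores."""
    ps, rs, f1s = [], [], []
    for tp, fp, fn in per_class:
        p = tp / (tp + fp) if tp + fp else 0.0  # per-class precision
        r = tp / (tp + fn) if tp + fn else 0.0  # per-class recall
        ps.append(p)
        rs.append(r)
        f1s.append(2 * p * r / (p + r) if p + r else 0.0)
    n = len(per_class)
    return sum(ps) / n, sum(rs) / n, sum(f1s) / n
```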
Metric | Proposed (MAX_STEP 5) | Proposed (MAX_STEP 10) | Monkey (100 events, seed 1) | Monkey (500 events, seed 1) | Monkey (1000 events, seed 1) | Monkey (1000 events, seed 777)
---|---|---|---|---|---|---
Statement coverage | 6.543% | 7.629% | 5.331% | 6.382% | 6.438% | 6.628%
Function coverage | 8.354% | 9.317% | 7.360% | 8.497% | 8.579% | 8.277%
Class coverage | 12.056% | 13.517% | 11.346% | 12.279% | 12.279% | 11.812%
Activity coverage | 17.460% | 68.254% | 17.160% | 17.460% | 17.460% | 17.460%
Code coverage | 6.141% | 7.089% | 5.331% | 6.382% | 6.483% | 6.216%
Technique | Detection performance | Test 1 (#7) | Test 2 (#11) | Test 3 (#50)
---|---|---|---|---
Proposed technique | Number of crash detections | 1 | 1 | 9
 | Number of GUI action executions | 5 | 2 | 1, 3
 | Number of OOA detections | 0 | 2 | 0
Baseline (Monkey) | Number of crash detections (seed 1) | 0 | 0 | 3
 | Number of crash detections (seed 2) | 0 | 0 | 2
 | Number of crash detections (seed 3) | 0 | 0 | 2
 | Number of crash detections (seed 4) | 1 | 0 | 2
 | Number of crash detections (seed 5) | 2 | 0 | 3
 | Detection exception type | Illegal State Exception | None | Null Pointer Exception
Test execution | Crash 1 | Crash 2 | Crash 3 | Crash 4 | Crash 5
---|---|---|---|---|---
Success in detection | O | O | O | O | O
Number of GUI actions | 2 | 4 | 4 | 4 | 7
Number of events per loop: 100 | | | | |
Number of events detected | - | - | 57/100 | 76/100 | -
Number of loops detected | - | - | 63/100 | 35/100 | -
Number of searches | <10,000 | <10,000 | 5763 | 3576 | <10,000
Success in detection | X | X | O | O | X
Number of events per loop: 500 | | | | |
Number of events detected | 258/500 | 340/500 | 481/500 | 160/500 | -
Number of loops detected | 28/100 | 27/100 | 6/100 | 10/100 | -
Number of searches | 14,285 | 13,840 | 3481 | 5160 | <50,000
Success in detection | O | O | O | O | X
Number of events per loop: 1000 | | | | |
Number of events detected | 716/1000 | 318/1000 | 133/1000 | 236/1000 | 823/1000
Number of loops detected | 40/100 | 4/100 | 21/100 | 42/100 | 77/100
Number of searches | 40,716 | 4318 | 21,133 | 42,236 | 77,823
Success in detection | O | O | O | O | O
Nam, S.-G.; Seo, Y.-S. GUI Component Detection-Based Automated Software Crash Diagnosis. Electronics 2023, 12, 2382. https://doi.org/10.3390/electronics12112382