Predicting Website Performance: A Systematic Review of Metrics, Methods, and Research Gaps (2010–2024)
Abstract
1. Introduction
2. Research Methodology
2.1. Research Questions
2.2. Search Process
2.3. Study Selection
2.4. Inclusion and Exclusion Criteria
2.5. Quality Assessment
2.6. Data Extraction
2.7. Data Synthesis
3. SLR Results
3.1. Search Results
3.2. Quality Assessment Results
3.3. Key Quality Factors and Evaluation Methods
4. Discussion
4.1. Challenges in Evaluating Website Performance
4.2. Distribution of Research Focus
4.3. Trends by Publication Year
4.4. Research Geography
4.5. Study Context and Website Types
4.6. Summary of Research Trends
4.7. What Are the Approaches Used to Predict Website Performance?
4.8. What Are the Key Performance Indicators (KPIs) Used in Studies?
4.9. Technological Advances and Development of Performance Criterion
5. Discussion and Recommendation
5.1. Discussions
Limitations of Study
5.2. Recommendations for Practitioners
- Adopt Core Performance Metrics Early in Development
- Prioritize foundational metrics such as Load Time, Time to First Byte, and Page Size, as they directly impact user experience and are supported by nearly all performance testing tools.
- Tools such as Google PageSpeed Insights and WebPageTest can be used to monitor these metrics continuously throughout development.
- Implement User-Centric Indicators
- Incorporate Largest Contentful Paint (LCP) and Time to Interactive (TTI) for a more realistic evaluation of perceived performance.
- These should be particularly emphasized in dynamic, content-heavy websites.
- Optimize Design and Front-End Assets
- Reduce the number of requests and overall page weight through efficient asset management.
- Apply compression techniques, lazy loading, and minified JavaScript/CSS to improve rendering times.
- Ensure Code and Accessibility Quality
- Regularly validate HTML structure using tools such as W3C Markup Validator to detect errors and improve maintainability.
- Check for broken links, missing ALT tags, and other accessibility issues that degrade both SEO and usability.
- Apply Predictive and Intelligent Tools Where Possible
- Use machine learning-based evaluations to predict site performance, especially in high-traffic applications where small inefficiencies can scale.
- Integrate automated performance testing pipelines into CI/CD environments.
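The last recommendation can be made concrete with a performance budget enforced in the pipeline: each build's measured metrics are compared against agreed limits, and the build fails on any violation. The sketch below illustrates this idea; the metric names mirror the indicators discussed above, and the threshold values are purely illustrative, not recommendations drawn from the reviewed studies.

```python
# Minimal sketch of a CI performance-budget gate.
# Threshold values below are illustrative examples only.

BUDGET = {
    "load_time_ms": 3000,   # full page load
    "ttfb_ms": 600,         # time to first byte
    "page_size_kb": 1500,   # total page weight
    "lcp_ms": 2500,         # largest contentful paint
    "tti_ms": 5000,         # time to interactive
}

def check_budget(measured: dict, budget: dict = BUDGET) -> list:
    """Return (metric, measured_value, limit) for every exceeded budget."""
    violations = []
    for metric, limit in budget.items():
        value = measured.get(metric)
        if value is not None and value > limit:
            violations.append((metric, value, limit))
    return violations

# A pipeline step would fail the build when the list is non-empty:
result = check_budget({"load_time_ms": 4200, "ttfb_ms": 480, "lcp_ms": 2600})
```

In practice the `measured` dictionary would be populated from an automated testing tool's output; the gate itself stays a few lines of comparison logic.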
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. Quality Assessment
Studies | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Q8 | Q9 | Sum |
---|---|---|---|---|---|---|---|---|---|---|
P1 | Y | Y | Y | P | P | P | Y | P | P | 6.5 |
P2 | Y | Y | Y | P | P | P | Y | P | P | 6.5 |
P3 | Y | Y | Y | P | P | P | Y | Y | Y | 7.5 |
P4 | Y | P | Y | N | N | N | Y | P | P | 4.5 |
P5 | Y | Y | P | P | P | P | Y | P | Y | 6.5 |
P6 | P | Y | Y | P | Y | Y | P | P | Y | 7.5 |
P7 | P | Y | Y | P | P | P | Y | P | P | 6 |
P8 | P | Y | Y | Y | P | P | Y | P | P | 6.5 |
P9 | Y | Y | Y | Y | Y | Y | Y | Y | Y | 9 |
P10 | Y | Y | P | P | P | P | Y | P | P | 6 |
P11 | Y | Y | Y | P | Y | P | Y | Y | N | 7 |
P12 | Y | Y | P | Y | P | P | P | P | N | 5.5 |
P13 | N | Y | Y | P | N | N | P | P | Y | 4.5 |
P14 | P | P | P | P | P | N | Y | P | P | 4.5 |
P15 | P | Y | Y | P | P | N | P | P | N | 4.5 |
P16 | Y | P | Y | N | N | N | Y | Y | Y | 5.5 |
P17 | Y | Y | Y | N | P | Y | P | Y | N | 6 |
P18 | P | Y | P | P | P | P | Y | Y | N | 5.5 |
P19 | Y | Y | Y | N | P | P | Y | Y | N | 6 |
P20 | Y | Y | Y | P | N | N | P | N | P | 4.5 |
P21 | P | Y | P | P | P | P | Y | P | P | 5.5 |
P22 | Y | N | Y | N | N | N | Y | P | Y | 4.5 |
P23 | Y | P | Y | P | P | P | Y | Y | P | 6.5 |
P24 | Y | N | Y | P | P | P | Y | P | P | 5.5 |
P25 | Y | Y | Y | P | Y | P | Y | P | Y | 7.5 |
P26 | Y | Y | Y | Y | Y | P | Y | Y | N | 7.5 |
P27 | Y | P | P | P | P | P | Y | Y | N | 5.5 |
P28 | Y | P | P | P | P | P | P | P | P | 5 |
P29 | Y | Y | Y | P | P | P | P | P | P | 6 |
P30 | P | Y | P | P | P | P | P | Y | N | 5 |
P31 | Y | N | Y | P | P | P | P | Y | N | 5 |
P32 | Y | Y | Y | N | P | P | P | Y | Y | 6.5 |
P33 | Y | Y | P | N | P | P | P | P | P | 5 |
P34 | P | Y | P | P | N | P | P | P | Y | 5 |
Appendix B. Studies
Paper ID | Title | Journal | Author | Country | Year | Context | Ref |
---|---|---|---|---|---|---|---|
P1 | Web Application Performance Analysis of E-Commerce Sites in Bangladesh: An Empirical Study | Modern Education and Computer Science Press(MECS Press) | Mahfida Amjad, Md. Tutul Hossain, Rakib Hassan, Md. Abdur Rahman | Bangladesh | 2021 | e-commerce sites | [23] |
P2 | Evaluating the performance of government websites: An automatic assessment system based on the TFN-AHP methodology | Journal of Information Science | Xudong Cai, Shengli Li, Gengzhong Feng | China | 2020 | e-government | [8] |
P3 | The Performance Evaluation of a Website using Automated Evaluation Tools | Technology Innovation Management and Engineering Science International | Achaporn Kwangsawad; Aungkana Jattamart; Paingruthai Nusawat | Thailand | 2019 | herbal cosmetic | [33] |
P4 | Performance evaluation of websites using entropy and grey relational analysis methods: The case of airline companies | Decision Science Letters | Kemal Vatansever, Yakup Akgűl | Turkey | 2018 | airlines | [26] |
P5 | An Intelligent Method to Assess Webpage Quality using Extreme Learning Machine | International Journal of Computer Science and Network Security | Jayanthi, B., Krishnakumari, P | India | 2016 | education, finance, news and health | [32] |
P6 | Analytic Hierarchy Process (AHP) Based Model for Assessing Performance Quality of Library Websites | Information Technology Journal | Harshan, R. K., Chen, X., and Shi, B. | China | 2017 | library | [25] |
P7 | An empirical performance evaluation of universities website | International Journal of Computer Applications | KAUR, Sukhpuneet; KAUR, Kulwant; KAUR, Parminder | India | 2016 | education | [24] |
P8 | Predicting web page performance level based on web page characteristics | International Journal of Web Engineering and Technology | Junzan Zhou, Yun Zhang, Bo Zhou and Shanping Li | China | 2015 | education | [21] |
P9 | Measuring Quality of Asian Airline Websites Using Analytical Hierarchy Process: A Future Customer Satisfaction Approach | Information Systems International | Humera Khan, P.D.D.Dominic | Malaysia | 2013 | airline | [29] |
P10 | A comparison of Asian e-government websites quality: using a non-parametric test | International Journal of Business Information Systems | P.D.D. Dominic and Handaru Jati | Malaysia | 2011 | e-government | [44] |
P11 | Quality Ranking of E-Government Websites: PROMETHEE II Approach | International Conference for Informatics for Development | Handaru Jati | Indonesia | 2011 | e-government | [28] |
P12 | Evaluation of Usage of University Websites in Bangladesh | DESIDOC Journal of Library & Information Technology | ISLAM, Anwarul; TSUJI, Keita | Bangladesh | 2011 | university | [40] |
P13 | Measuring the quality of e-commerce websites using analytical hierarchy process | TELKOMNIKA (Telecommunication Computing Electronics and Control) | Aziz, U. A., Wibisono, A., and Nisafani | Indonesia | 2019 | e-commerce | [27] |
P14 | Measuring website quality of the Indian railways | International Journal of Entrepreneurial Knowledge | Jain, R. K., and Rangnekar | India | 2015 | railways | [45] |
P15 | Evaluation of Nigeria Universities Websites Quality: A Comparative Analysis | Library Philosophy and Practice | Sunday Adewale Olaleye, Ismaila Temitayo Sanusi, Dandison C. Ukpabi, Adekunle Okunoye | Nigeria | 2018 | university | [46] |
P16 | A comparative approach to web evaluation and website evaluation methods | International Journal of Public Information Systems | Zahran, D. I., Al-Nuaim, H. A., Rutter, M. J., and Benyon, D | Scotland, UK | 2014 | government | [7] |
P17 | A comparison of Asian airlines websites quality: using a non-parametric test | International Journal of Business Innovation and Research | Dominic, P. D. D., and Jati, H | Malaysia | 2011 | airline | [47] |
P18 | A filter-wrapper based feature selection for optimized website quality prediction | Amity International Conference on Artificial Intelligence (AICAI) | Akshi Kumar, Anshika Arora | India | 2019 | commercial, organization, government | [22] |
P19 | A neuro-fuzzy classifier for website quality prediction | International Conference on Advances in Computing, Communications and Informatics | Malhotra, R., and Sharma, A | India | 2013 | NA | [30] |
P20 | A Novel Model for Assessing e-Government Websites Using Hybrid Fuzzy Decision-Making Methods | International Journal of Computational Intelligence Systems | Shayganmehr, M., and Montazer, G. A | Iran | 2021 | e-government | [48] |
P21 | A proposal for a quality model for e-government website | International Conference on Data and Software Engineering (ICoDSE) | HENDRADJAYA, Bayu; PRAPTINI, Rina | Indonesia | 2015 | government | [49] |
P22 | Performance Evaluation of Websites Using Machine Learning | EIMJ | MM Ghattas, PDB Sartawi | Palestine | 2020 | NA | [18] |
P23 | Analysis and modelling of websites quality using fuzzy technique | Second International Conference on Advanced Computing & Communication Technologies | MITTAL, Harish; SHARMA, Monika; MITTAL, J. P | India | 2012 | NA | [50] |
P24 | Analytic hierarchy process for website evaluation | Intelligent Decision Technologies | KABASSI, Katerina | Greece | 2018 | government, health | [51] |
P25 | Application of mathematical simulation methods for evaluating the websites effectiveness | Systems of Signals Generating and Processing in the Field of on-Board Communications | Erokhin, A. G., Vanina, M. F., and Frolova, E. A | Russia | 2019 | e-commerce | [52] |
P26 | Empirical validation of website quality using statistical and machine learning methods. | International Conference-Confluence the Next Generation Information Technology Summit (Confluence) | Poonam Dhiman, Anjali | India | 2014 | NA | [20] |
P27 | Evaluating the Websites’ Quality of Five- and Four-Star Hotels in Egypt | Minia Journal of Tourism and Hospitality Research MJTHR | Elsater, S. A. E., Dawood, A. E. A. A., Mohamed Hussein, M. M., and Ali, M. A | Egypt | 2022 | hotel | [16] |
P28 | A review of website evaluation using web diagnostic tools and data envelopment analysis | Bulletin of Electrical Engineering and Informatics | Najadat, H., Al-Badarneh, A., and Alodibat | Jordan | 2021 | e-government | [6] |
P29 | Empirical and Automated Analysis of Web Applications | International Journal of Computer Applications | KULKARNI, R. B.; DIXIT, S. K | India | 2012 | e-commerce, banking, and e-governance | [53] |
P30 | Website Performance Analysis and Evaluation using Automated Tools. | International Conference on Electrical, Electronics, Communication, Computer Technologies and Optimization Techniques | Kumar, N., Kumar, S., and Rajak, R | India | 2021 | organization | [39] |
P31 | Framework for evaluation of academic website | International Journal of Computer Techniques | Devi, K., and Sharma, A | India | 2016 | academic | [35] |
P32 | Brief analysis on Website performance evaluation | IET Digital Library | Li Peng; YueMing Lu; Dongbin Wang | China | 2015 | NA | [54] |
P33 | Web page prediction using genetic algorithm and logistic regression based on weblog and web content features | International Conference on Electronics and Sustainable Communication Systems | Gangurde, R., and Kumar | India | 2020 | organization | [31] |
P34 | Performance Testing and Optimization of DiTenun Website | Journal of Applied Science, Engineering, Technology, and Education | Barus, A. C., Sinambela, E. S., Purba, I., Simatupang, J., Marpaung, M., and Pandjaitan, N. | Indonesia | 2022 | industry | [55] |
Appendix C. Metrics
No | Name | Description |
---|---|---|
1 | fully loaded (requests) | The number of requests the browser must make for pieces of content on the page (images, JavaScript, CSS, etc.). Each request is a message sent from the client to a server using the hypertext transfer protocol (HTTP); to display images, text, or pages in the user’s browser, the browser must first request that data. |
2 | first CPU idle | First CPU Idle measures when a page is minimally interactive, or when the window is quiet enough to handle user input. |
3 | speed index | A metric that measures how quickly the contents of a webpage are visually displayed during loading. |
4 | start render | The time when the first non-white content is painted on the screen, indicating the beginning of the webpage rendering process |
5 | load time | The time it takes for a webpage to load completely, including all resources and rendering |
6 | mobile optimization | The optimization of the website for mobile devices, including responsive design, mobile-friendly layouts, and fast loading times on mobile networks, to enhance user experience for mobile users |
7 | document complete (time) | Indicates the point at which the browser’s onload event fires, signaling that all of the static page content has been loaded. |
8 | last painted hero | A synthetic indicator that marks when the final critical piece of content is visually rendered on the screen. |
9 | first contentful paint | The time when the first content element (such as text or images) is rendered on the screen. |
10 | first byte | Measures the time between when an internet user makes an HTTP request, such as loading a webpage, and when the client’s browser receives the first byte of data. |
11 | bytes in | The amount of data that the browser needs to download in order to fully load a webpage. |
12 | time to interactive | The moment when the last long task ends, followed by 5 s of network and main-thread quiet. TTI gives a comprehensive view of the website’s responsiveness from the site visitor’s perspective. |
13 | max potential first input delay | The period between when a user first interacts with your site, such as clicking a button, and when the browser is fully ready to respond to that interaction. |
14 | first meaningful paint | The amount of time it takes for the main content of a page to appear on the screen. It is used as an approximation, although the approximation frequently captures non-meaningful paints such as headers and navigation bars. |
15 | largest contentful paint | Largest contentful paint (LCP) is a crucial, user-centric metric for measuring perceived load speed, as it marks the point in the page load timeline when the page’s main content has loaded. |
16 | cumulative layout shift | A metric that measures the amount of unexpected layout shifts that occur during the loading process, affecting visual stability and user experience |
17 | first input delay (FID) | Measures the delay between the user’s first interaction (like clicking a link or button) and the time the browser begins processing that interaction. It reflects real interactivity performance and responsiveness from the user’s perspective. |
18 | availability of hyperlinks | Verifies if users of websites can access the pages without any problems. |
19 | updatability of information | Refers to the information updates on websites. It is measured by the percentage of updated hyperlinks on websites during the assessment cycle. |
20 | richness of content | Determines if webpages have a variety of information resources. |
21 | security of website | Verifies whether websites are protected from Trojans by robust security measures. |
22 | impacts on search engines | Refers to the performance of websites on search engines. |
23 | impacts on social media | Checks the influence of websites on the social media. |
24 | impacts on network | Measures the effect of websites on the Internet by reflecting their popularity and importance through the use of tools like PageRank. |
25 | page size | Refers to a specific web page’s overall size. Every file that makes up the webpage is included: the HTML document, embedded images, style sheets, scripts, and other material. |
26 | page requests | A request for a web page, in whole or in part (including requests for additional frames), resulting from user actions such as typing a URL, clicking a link, issuing a “refresh” command, or navigating across the page. |
27 | browser cache | A temporary storage area in memory or on disk that holds the most recently downloaded Web pages. |
28 | page redirects | Page redirects add to the loading cycle, increasing the time to display a page. |
29 | compression | Ensuring that JavaScript and CSS files are properly compressed makes the website load much faster. |
30 | render-blocking JavaScript | Render-blocking resources are portions of code in website files, usually CSS and JavaScript, that prevent a web page from loading quickly. |
31 | traffic | The browser gathers data, which is then transmitted to the Alexa website. At the website, this collected data is stored and analyzed, forming the foundation for the company’s web traffic reporting. |
32 | page rank | It is used to calculate and display the PageRank for each Website. |
33 | browser compatibility | Ensuring that the website is compatible with different web browsers and devices, optimizing performance and user experience across various platforms. |
34 | content delivery network | The use of CDN services to distribute webpage content across multiple servers located geographically closer to users, improving load times and reducing latency. |
35 | response time | A website server is expected to respond to a browser request within specific parameters. |
36 | markup validation | It is employed to evaluate and compute the quantity of HTML errors present on the website, including orphan codes, coding errors, missing tags, and similar issues. |
37 | broken links | Links on websites may be internal or external; a link is broken when the target page fails to load after a visitor clicks it. |
38 | total link | Total Link on webpage. |
39 | text link | Total Text Link. |
40 | word count | Total words on page. |
41 | total body words | Number of words in sentence. |
42 | total sentence | Number of sentences in paragraph. |
43 | total paragraph | Number of paragraphs in body text. |
44 | total cluster count | Number of text cluster on page. |
45 | total image | Total Image on page. |
46 | alt image count | Number of images with ALT clause. |
47 | no alt image count | Number of images without ALT clause. |
48 | animation count | Number of animated elements. |
49 | unique image count | Number of unique images. |
50 | image map count | Number of image maps on page. |
51 | Un-sized image count | Number of images without size definition. |
52 | total color | Total color on page. |
53 | reading complexity | Overall Page Readability. |
54 | number of components | The number of request/response exchanges between a client and a host. |
55 | design optimization | The scripts, HTML, or CSS codes are optimized to enhance loading speed. This optimization concurrently reduces the quantity of website elements, including images, scripts, HTML, CSS codes, or videos. |
56 | availability | Whether the website is accessible. |
57 | the frequency of update | Checks how frequently the website is updated with new content. |
58 | html page sizes | The size of all the HTML code on a web page; this does not include images, external JavaScript, or external CSS files. |
59 | download time | The average time to download any page related to the services, including all content contained therein. |
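Of the metrics above, the Speed Index (no. 3) is the least self-explanatory: it is defined as the area under the curve of visual incompleteness over time, so a page that paints most of its content early scores better than one that stays blank and then renders all at once. The sketch below illustrates that computation; the frame samples are invented for illustration.

```python
# Illustrative computation of the Speed Index: the integral of
# (1 - visual_completeness(t)) over the page-loading timeline.
# Frame samples below are made up for illustration.

def speed_index(frames):
    """frames: list of (time_ms, visual_completeness in [0, 1]),
    sorted by time and ending at full completeness (1.0).
    Returns the Speed Index in ms (lower is better)."""
    si = 0.0
    for (t0, vc0), (t1, _vc1) in zip(frames, frames[1:]):
        # Rectangle rule: incompleteness held at (1 - vc0) over [t0, t1).
        si += (1.0 - vc0) * (t1 - t0)
    return si

frames = [(0, 0.0), (500, 0.4), (1200, 0.8), (2000, 1.0)]
# speed_index(frames) = 1.0*500 + 0.6*700 + 0.2*800 = 1080.0
```

Real tools derive the completeness samples from video frames or paint events; the integration step is the same.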
Appendix D. Data Extraction
Paper ID | Factors Examined | Algorithms/Approaches |
---|---|---|
P1 | fully loaded (requests), first CPU idle, speed index, start render, load time, fully loaded (time), document complete (time), last painted hero, first contentful paint, and first byte | automated evaluation tools |
P2 | availability of hyperlinks, updatability of information, loading speed of web pages, richness of content, security of website, construction of columns, impacts on search engines | TFNs and AHP |
P3 | page size, page requests, page speed, browser cache, page redirects, compression, and render-blocking JavaScript | automated evaluation tools |
P4 | Traffic, page rank, design optimization, load time, response time, markup, and broken links | Entropy and Grey Relational Analysis |
P5 | Total Link, Text Link, Word Count, Total Body Words, Total Sentence, Total paragraph, Total cluster count, Total Image, Alt Image Count, Unique Image Count, Image map Count, Unsized Image count, Total Color, Reading Complexity | Extreme Learning Machine (ELM), SVM |
P6 | Load time, number of components, page speed, page size, response time, mark-up validation, broken links, and design optimization | AHP and FAHP |
P7 | No. of Requests, Load time and Page size | automated evaluation tools |
P8 | Number of servers contacted, Number of origins contacted, Number of object requests median, Object request size median, Number of JavaScript objects median, Size of JavaScript objects median, Number of image objects median, Size of image objects median, Number of flash objects median, Size of flash objects median, Number of CSS objects median, Size of CSS objects median, Maximum size of objects normalized median | RF, AdaBoost, Logistic Regression, SVM, NB, BN |
P9 | Load time, page size, response time, page speed, availability, broken links, no. of components, markup validation | Analytical Hierarchy Process |
P10 | Load time, response time, page rank, the frequency of update, traffic, design optimization, page size, number of the item, accessibility error, markup validation | LWM, AHP, FAHP, NHM |
P11 | Load time, response time, page rank, the frequency of update, traffic, design optimization, size, no of items, accessibility error, markup validation, and broken link | PROMETHEE II and AHP |
P12 | Total no of HTML files, HTML page sizes, composition, total number of images, and download time | web diagnostic tools |
P13 | Load Time, Page Size, Number of Item, Page Speed Score, Availability, Page Rank, Traffic, Design Optimization, Markup Validation | AHP |
P14 | Continuous Connectivity, Quick Response, Ease of Access, Options to Pay, Content Usefulness, Ease of Navigation, Clarity of Data, Privacy and Security, Aesthetics, Customization | Statistical tools (ANOVA) |
P15 | ease of use, processing speed, aesthetic design, interactive responsiveness, entertainment, trust and usefulness | web analytical tools |
P16 | Usability, maintainability, reliability, efficiency, navigation, content | web analytical tools |
P17 | Load time, response time, page rank, frequency of update, traffic, design optimization, size, number of items, accessibility error, broken links | LWM, AHP, FAHP |
P18 | Relevance, Updating, Accuracy, Total size, Broken Links, Loading Time, Communication, Social Media Connectivity, Browser Compatibility, Typography & Font, Color Scheme, Overall Theme | NB, KNN, DT, RF |
P19 | Word Count, Body Text Words, Page Size, Table Count, Graphics Count, Division Count, List Count, Number of Links, Page Title length | ANFIS clustering algorithms |
P20 | Speed of servers’ responsiveness, Compatibility with social networks, Document downloading time, Bandwidth, File size, Picture size, Server location, Security, Content quality | Hybrid Fuzzy Decision-Making Methods |
P21 | Responsiveness, Service Availability, Multi-lingual, Service Accuracy, User Satisfaction, Security, Trust, Information Accuracy, System availability, Access Time, Browser Usage, Usability | automated evaluation tools |
P22 | Page size, load time, design optimization, markup validation, response time, speed, broken links | Linear regression, SVM |
P23 | load time, response time, mark-up validation, broken link, accessibility error, size, page rank, frequency of update, traffic and design optimization | Fuzzy logic |
P24 | Content and appearance, Information quality, Navigability, Graphic design, FAQs, Interactivity, Satisfaction, Usability, Reliability, Privacy, Web Services, Technology, Functionality | AHP, fuzzy AHP |
P25 | conversion metric, time spent on site, number of refusals, number of pages viewed | mathematical simulation methods |
P26 | Total words length, Body text length, Title text length, Total links, Internal links, Size of page in KB, Emphasize text, HTML Lines, JS Lines, Complexity, Tables, Graphics | statistical and ML methods |
P27 | Informational content, Design, Ease of use, Interactivity, Marketing Image, Online processes | Statistical tools |
P28 | Ambiguity, uncertainty, time, Usefulness, satisfaction, Download time, help features, dynamic content, response time, average page size, hits, visitors | automated evaluation tools |
P29 | Page load, response time, optimal navigation, HTML, maintainability, security, functionality, usability, efficiency, creditability | automated evaluation tools |
P30 | User Friendliness, Accessibility, Security, SEO, Social | automated evaluation tools |
P31 | Usability, Content, Presentation, Functionality, and Reliability | automated evaluation tools |
P32 | Query DNS, Response to request, Establish connection | automated evaluation tools |
P33 | Web log | automated evaluation tools |
P34 | response time and service availability | Logistic Regression (LR) |
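The Analytic Hierarchy Process (AHP) recurs throughout this table (e.g. P6, P9, P13, P24): criteria such as load time, page size, and broken links are compared pairwise and the comparison matrix is reduced to a weight vector. The sketch below shows that weighting step using the common column-normalization approximation of the principal eigenvector; the pairwise values are invented for illustration, whereas real studies elicit them from experts on Saaty’s 1–9 scale.

```python
# Minimal sketch of the AHP weighting step used by several reviewed studies.
# Pairwise comparison values below are hypothetical examples.

def ahp_weights(matrix):
    """Approximate the principal eigenvector of a pairwise comparison
    matrix by normalizing each column and averaging across rows."""
    n = len(matrix)
    col_sums = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    return [sum(matrix[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

# Hypothetical comparisons: load time vs page size vs broken links.
pairwise = [
    [1.0, 3.0, 5.0],    # load time moderately-to-strongly preferred
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
]
weights = ahp_weights(pairwise)  # sums to 1; load time gets the largest weight
```

A full AHP study would also compute the consistency ratio of the matrix before accepting the weights; that check is omitted here for brevity.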
Appendix E. Research Focus
Research Topics | Paper ID |
---|---|
Identifying factors influencing website performance | P1,P2,P3,P4,P5,P6,P7,P8,P9,P10,P11,P12,P14,P15,P16,P17,P18,P19,P20,P21,P22,P23,P24,P25,P26,P27,P28,P29,P30,P31,P32,P34 |
The state-of-the-art in performance evaluation of websites | P2,P4,P6,P9,P10,P11,P14,P17,P19,P20,P23,P24,P25,P28,P32 |
ML and Deep learning | P5,P8,P18,P19,P26,P33,P22 |
Appendix F. Full Survey Questionnaire
References
- Faustina, F.; Balaji, T. Evaluation of universities websites in Chennai city, India using analytical hierarchy process. In Proceedings of the 2016 International Conference on Electrical, Electronics, and Optimization Techniques (ICEEOT), Chennai, India, 3–5 March 2016; IEEE: New York, NY, USA; pp. 112–116. [Google Scholar] [CrossRef]
- Hidayah, N.A.; Subiyakto, A.; Setyaningsih, F. Combining Webqual and Importance Performance Analysis for Assessing A Government Website. In Proceedings of the 2019 7th International Conference on Cyber and IT Service Management (CITSM), Jakarta, Indonesia, 6–8 November 2019; pp. 1–6. [Google Scholar] [CrossRef]
- Shayganmehr, M.; Montazer, G.A. Identifying Indexes Affecting the Quality of E-Government Websites. In Proceedings of the 2019 5th International Conference on Web Research (ICWR), Tehran, Iran, 24–25 April 2019; pp. 167–171. [Google Scholar] [CrossRef]
- Joyami, E.N.; Salmani, D. Assessing the Quality of Online Services (Website) of Tehran University. 2019. Available online: https://un-pub.eu/ojs/index.php/pntsbs/article/view/4519 (accessed on 28 August 2025).
- Fogli, D.; Guida, G. Evaluating Quality in Use of Corporate Web Sites: An Empirical Investigation. ACM Trans. Web 2018, 12, 1–35. [Google Scholar] [CrossRef]
- Najadat, H.; Al-Badarneh, A.; Alodibat, S. A review of website evaluation using web diagnostic tools and data envelopment analysis. Bull. Electr. Eng. Inform. 2021, 10, 258–265. [Google Scholar] [CrossRef]
- Zahran, D.I.; Al-Nuaim, H.A.; Rutter, M.J.; Benyon, D. A Comparative Approach To Web Evaluation And Website Evaluation Methods. Int. J. Public Inf. Syst. 2014, 10. Available online: http://www.ijpis.net/index.php/IJPIS/article/view/126 (accessed on 28 August 2025).
- Cai, X.; Li, S.; Feng, G. Evaluating the performance of government websites: An automatic assessment system based on the TFN-AHP methodology. J. Inf. Sci. 2020, 46, 760–775. [Google Scholar] [CrossRef]
- Saleh, A.H.; Yusoff, R.C.M.; Bakar, N.A.A.; Ibrahim, R. Systematic literature review on university website quality. Indones. J. Electr. Eng. Comput. Sci. 2022, 25, 511. [Google Scholar] [CrossRef]
- Kitchenham, B. Procedures for Performing Systematic Reviews; Technical Report TR/SE-0401; Keele University: Keele, UK, 2004. [Google Scholar]
- Liberati, A.; Altman, D.G.; Tetzlaff, J.; Mulrow, C.; Gøtzsche, P.C.; Ioannidis, J.P.; Clarke, M.; Devereaux, P.J.; Kleijnen, J.; Moher, D. The PRISMA Statement for Reporting Systematic Reviews and Meta-Analyses of Studies That Evaluate Health Care Interventions: Explanation and Elaboration. Ann. Intern. Med. 2009, 151, W-65. [Google Scholar] [CrossRef]
- Ghobadi, S. What drives knowledge sharing in software development teams: A literature review and classification framework. Inf. Manag. 2015, 52, 82–97. [Google Scholar] [CrossRef]
- Niazi, M.G.; Kamran, M.K.A.; Ghaebi, A. Presenting a proposed framework for evaluating university websites. Electron. Libr. 2020, 38, 881–904. [Google Scholar] [CrossRef]
- Adepoju, S.A.; Oyefolahan, I.O.; Abdullahi, M.B.; Mohammed, A.A. Integrated Usability Evaluation Framework for University Websites. i-Manager 2019, 8, 40–48. [Google Scholar] [CrossRef]
- Allison, R.; Hayes, C.; McNulty, C.A.M.; Young, V. A Comprehensive Framework to Evaluate Websites: Literature Review and Development of GoodWeb. JMIR Form. Res. 2019, 3, e14372. [Google Scholar] [CrossRef]
- Elsater, S.A.E.A.; Dawood, A.E.A.A.; Hussein, M.M.M.; Ali, M.A. Evaluating the Websites’ Quality of Five and Four Star Hotels in Egypt. Minia J. Tour. Hosp. Res. MJTHR 2022, 13, 183–193. [Google Scholar] [CrossRef]
- Alsulami, M.H.; Khayyat, M.M.; Aboulola, O.I.; Alsaqer, M.S. Development of an Approach to Evaluate Website Effectiveness. Sustainability 2021, 13, 13304. [Google Scholar] [CrossRef]
- Ghattas, M.M. Performance Evaluation of Websites Using Machine Learning. Master’s Thesis, Al-Quds University, Jerusalem Governorate, Palestine, 2020. [Google Scholar]
- Kinnunen, M. Evaluating and Improving Web Performance Using Free-to-Use Tools. Available online: https://oulurepo.oulu.fi/handle/10024/15601 (accessed on 18 February 2024).
- Dhiman, P.; Anjali. Empirical validation of website quality using statistical and machine learning methods. In Proceedings of the 2014 5th International Conference—Confluence The Next Generation Information Technology Summit (Confluence), Noida, India, 25–26 September 2014; pp. 286–291. [Google Scholar] [CrossRef]
- Zhou, J.; Zhang, Y.; Zhou, B.; Li, S. Predicting web page performance level based on web page characteristics. Int. J. Web Eng. Technol. 2015, 10, 152. [Google Scholar] [CrossRef]
- Kumar, A.; Arora, A. A Filter-Wrapper based Feature Selection for Optimized Website Quality Prediction. In Proceedings of the 2019 Amity International Conference on Artificial Intelligence (AICAI), Dubai, United Arab Emirates, 4–6 February 2019; pp. 284–291. [Google Scholar] [CrossRef]
- Amjad, M.; Hossain, M.T.; Hassan, R.; Rahman, M.A. Web Application Performance Analysis of ECommerce Sites in Bangladesh: An Empirical Study. Int. J. Inf. Eng. Electron. Bus. 2021, 13, 47–54. [Google Scholar] [CrossRef]
- Kaur, S.; Kaur, K.; Kaur, P. An Empirical Performance Evaluation of Universities Website. Int. J. Comput. Appl. 2016, 146, 10–16. [Google Scholar] [CrossRef]
- Harshan, R.K.; Chen, X.; Shi, B. Analytic Hierarchy Process (AHP) Based Model for Assessing Performance Quality of Library Websites. Inf. Technol. J. 2016, 16, 35–43. [Google Scholar] [CrossRef]
- Vatansever, K.; Akgűl, Y. Performance evaluation of websites using entropy and grey relational analysis methods: The case of airline companies. Decis. Sci. Lett. 2018, 7, 119–130. [Google Scholar] [CrossRef]
- Aziz, U.A.; Wibisono, A.; Nisafani, A.S. Measuring the quality of e-commerce websites using analytical hierarchy process. TELKOMNIKA Telecommun. Comput. Electron. Control 2019, 17, 1283–1290. [Google Scholar] [CrossRef]
- Jati, H. Quality Ranking of E-Government Websites—PROMETHEE II Approach. In Proceedings of the International Conference on Informatics for Development 2011 (ICID 2011), Yogyakarta, Indonesia, 26 November 2011; pp. 39–45. Available online: https://www.semanticscholar.org/paper/Quality-Ranking-of-E-Government-Websites-PROMETHEE-Jati/75baad420698797cfca91b7fd1278a512cdecb6b (accessed on 19 February 2024).
- Khan, H.; Dominic, P.D.D. Measuring Quality of Asian Airline Websites Using Analytical Hierarchy Process: A Future Customer Satisfaction Approach. In Proceedings of the Information Systems International Conference, Bali, Indonesia, 2–4 December 2013. [Google Scholar]
- Malhotra, R.; Sharma, A. A neuro-fuzzy classifier for website quality prediction. In Proceedings of the 2013 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Mysore, India, 22–25 August 2013; pp. 1274–1279. [Google Scholar] [CrossRef]
- Gangurde, R.; Kumar, B. Web Page Prediction Using Genetic Algorithm and Logistic Regression based on Weblog and Web Content Features. In Proceedings of the 2020 International Conference on Electronics and Sustainable Communication Systems (ICESC), Coimbatore, India, 2–4 July 2020; pp. 68–74. [Google Scholar] [CrossRef]
- Jayanthi, B.; Krishnakumari, P. An Intelligent Method to Assess Webpage Quality using Extreme Learning Machine. Int. J. Comput. Sci. Netw. Secur. (IJCSNS) 2016, 16, 81. [Google Scholar]
- Kwangsawad, A.; Jattamart, A.; Nusawat, P. The Performance Evaluation of a Website using Automated Evaluation Tools. In Proceedings of the 2019 4th Technology Innovation Management and Engineering Science International Conference (TIMES-iCON), Bangkok, Thailand, 11–13 December 2019; pp. 1–5. [Google Scholar] [CrossRef]
- Massaro, A.; Giannone, D.; Birardi, V.; Galiano, A.M. An Innovative Approach for the Evaluation of the Web Page Impact Combining User Experience and Neural Network Score. Future Internet 2021, 13, 145. [Google Scholar] [CrossRef]
- Devi, K.; Sharma, A.K. Framework for Evaluation of Academic Website. Int. J. Comput. Tech. 2016, 3, 234–239. [Google Scholar]
- Wang, X.S.; Balasubramanian, A.; Krishnamurthy, A.; Wetherall, D. Demystifying Page Load Performance with WProf. In Proceedings of the 10th USENIX Symposium on Networked Systems Design and Implementation (NSDI 13), Lombard, IL, USA, 2–5 April 2013; pp. 473–485. Available online: https://www.usenix.org/conference/nsdi13/technical-sessions/presentation/wang_xiao (accessed on 18 February 2024).
- De Fausti, F.; Pugliese, F.; Zardetto, D. Towards Automated Website Classification by Deep Learning. arXiv 2021, arXiv:1910.09991. [Google Scholar] [CrossRef]
- Rasheed, K.; Noman, M.; Imran, M.; Iqbal, M.; Khan, Z.M.; Abid, M.M. Performance Comparison among Local and Foreign Universities Websites Using SEO Tools. ICTACT J. Soft Comput. 2018, 8, 1559–1564. [Google Scholar]
- Kumar, N.; Kumar, S.; Rajak, R. Website Performance Analysis and Evaluation using Automated Tools. In Proceedings of the 2021 5th International Conference on Electrical, Electronics, Communication, Computer Technologies and Optimization Techniques (ICEECCOT), Mysuru, India, 10–11 December 2021; pp. 210–214. [Google Scholar] [CrossRef]
- Islam, A.; Tsuji, K. Evaluation of Usage of University Websites in Bangladesh. DESIDOC J. Libr. Inf. Technol. 2011, 31, 469–479. [Google Scholar] [CrossRef]
- Armaini, I.; Dar, M.H.; Bangun, B. Evaluation of Labuhanbatu Regency Government Website based on Performance Variables. Sink. J. Dan Penelit. Tek. Inform. 2022, 7, 760–776. [Google Scholar] [CrossRef]
- Pandya, S. Review paper on web page prediction using data mining. Int. J. Comput. Eng. Intell. Syst. 2015, 6, 760–766. [Google Scholar]
- Dominic, P.D.D.; Jati, H.; Kannabiran, G. Performance evaluation on quality of Asian e-government websites—An AHP approach. Int. J. Bus. Inf. Syst. 2010, 6, 219–239. [Google Scholar] [CrossRef]
- Dominic, P.D.D.; Jati, H.; Sellappan, P.; Nee, G.K. A comparison of Asian e-government websites quality: Using a non-parametric test. Int. J. Bus. Inf. Syst. 2011, 7, 220–246. [Google Scholar] [CrossRef]
- Jain, R.K.; Rangnekar, S. Measuring website quality of the Indian railways. Int. J. Entrep. Knowl. 2015, 3, 57–64. [Google Scholar] [CrossRef]
- Olaleye, S.A.; Sanusi, I.T.; Ukpabi, D.C.; Okunoye, A. Evaluation of Nigeria Universities Websites Quality: A Comparative Analysis. Available online: https://oulurepo.oulu.fi/handle/10024/23263 (accessed on 17 February 2024).
- Dominic, P.D.D.; Jati, H. A comparison of Asian airlines websites quality: Using a non-parametric test. Int. J. Bus. Innov. Res. 2011, 5, 599–623. [Google Scholar] [CrossRef]
- Shayganmehr, M.; Montazer, G.A. A Novel Model for Assessing e-Government Websites Using Hybrid Fuzzy Decision-Making Methods. Int. J. Comput. Intell. Syst. 2021, 14, 1468–1488. [Google Scholar] [CrossRef]
- Hendradjaya, B.; Praptini, R. A proposal for a quality model for e-government website. In Proceedings of the 2015 International Conference on Data and Software Engineering (ICoDSE), Yogyakarta, Indonesia, 25–26 November 2015; pp. 19–24. [Google Scholar] [CrossRef]
- Mittal, H.; Sharma, M.; Mittal, J.P. Analysis and Modelling of Websites Quality Using Fuzzy Technique. In Proceedings of the 2012 Second International Conference on Advanced Computing & Communication Technologies, Rohtak, Haryana, India, 7–8 January 2012; pp. 10–15. [Google Scholar] [CrossRef]
- Kabassi, K. Analytic Hierarchy Process for website evaluation. Intell. Decis. Technol. 2018, 12, 137–148. [Google Scholar] [CrossRef]
- Erokhin, A.G.; Vanina, M.F.; Frolova, E.A. Application of Mathematical Simulation Methods for Evaluating the Websites Effectiveness. In Proceedings of the 2019 Systems of Signals Generating and Processing in the Field of on Board Communications, Moscow, Russia, 20–21 March 2019; pp. 1–5. [Google Scholar] [CrossRef]
- Kulkarni, R.B.; Dixit, D.S.K. Empirical and Automated Analysis of Web Applications. Int. J. Comput. Appl. 2012, 38, 1–8. [Google Scholar] [CrossRef]
- Peng, L.; Lu, Y.; Wang, D. Brief analysis on Website performance evaluation. In Proceedings of the Third International Conference on Cyberspace Technology, Beijing, China, 17–18 October 2015; p. 4. [Google Scholar] [CrossRef]
- Barus, A.C.; Sinambela, E.S.; Purba, I.; Simatupang, J.; Marpaung, M.; Pandjaitan, N. Performance Testing and Optimization of DiTenun Website. J. Appl. Sci. Eng. Technol. Educ. 2022, 4, 45–54. [Google Scholar] [CrossRef]
Online Database | URL |
---|---|
IEEE Xplore | http://ieeexplore.ieee.org/ (accessed on 10 October 2025) |
Google Scholar | https://scholar.google.com/ (accessed on 10 October 2025) |
Scopus | http://www.scopus.com/ (accessed on 10 October 2025) |
SpringerLink | https://link.springer.com/ (accessed on 10 October 2025) |
ResearchGate | https://www.researchgate.net/ (accessed on 10 October 2025) |
ACM Digital Library | https://dl.acm.org/ (accessed on 10 October 2025) |
Directory of Open Access Journals (DOAJ) | https://doaj.org/ (accessed on 10 October 2025) |
Inclusion Criteria | Exclusion Criteria |
---|---|
Published 2010–2024 | Not peer-reviewed |
English language | Lack of methodology |
Web quality focus | Focus on unrelated topics |
Peer-reviewed | Inaccessible full text |
QA Code | Assessment Question |
---|---|
Q1 | Is the research objective clearly stated? |
Q2 | Is the context and scope of the study well-defined? |
Q3 | Is the methodology appropriate and clearly described? |
Q4 | Are the data sources valid and reliable? |
Q5 | Are the performance evaluation metrics clearly defined? |
Q6 | Are the results clearly presented and supported by data? |
Q7 | Are limitations discussed and addressed? |
Q8 | Does the study contribute new knowledge or findings? |
Q9 | Is the overall structure and academic quality of the article satisfactory? |
Field | Description |
---|---|
ID | Unique identifier assigned to each article for referencing. |
Title | The title of the article. |
Authors | The author(s) of the article. |
Publication Year | The publication year of the article. |
Country | The country in which the research was conducted. |
Performance Factors | Website performance aspects studied (e.g., load time, page size). |
Methodologies | Research approaches used (e.g., classification, clustering, regression). |
Techniques | Specific algorithms employed (e.g., SVM, Neural Networks, Decision Tree). |
Context | Details about the study context, such as the website domain and participants. |
Database | Initial Results | After Duplicate Removal | After Title/Abstract Filter | After Access Filter | After Abstract/Conclusion Filter | After Inclusion/Exclusion | After QA |
---|---|---|---|---|---|---|---|
IEEE Xplore | 243 | 17 | 17 | 17 | 12 | 5 | 5 |
Google Scholar | 1043 | 431 | 178 | 23 | 15 | 9 | 7 |
Scopus | 104 | 60 | 60 | 13 | 8 | 5 | 5 |
ResearchGate | 2690 | 496 | 268 | 9 | 18 | 3 | 2 |
SpringerLink | 705 | 23 | 23 | 23 | 8 | 2 | 2 |
ACM | 661 | 266 | 73 | 24 | 16 | 2 | 2 |
DOAJ | 1211 | 358 | 161 | 11 | 9 | 8 | 7 |
Total | 6657 | 1651 | 780 | 120 | 86 | 34 | 30 |
# of Metrics | Sample Metrics | Category |
---|---|---|
12 | Load time, TTFB, Page size | Performance |
8 | Alt text, Color contrast | Accessibility |
9 | Navigation, Readability | Usability |
7 | Meta tags, Link structure | SEO |
6 | Layout, Mobile responsiveness | Design Quality |
7 | Relevance, Freshness | Content Quality |
No. | Selected Metric | Operational Definition | Measurement Method |
---|---|---|---|
1 | Load Time | Total duration required for a webpage to fully load all resources (HTML, CSS, JS, images). | Measured in milliseconds using tools such as Google Lighthouse, GTmetrix, or WebPageTest. |
2 | Time to First Byte (TTFB) | The time between initiating a request and receiving the first byte from the server. | Measured in ms via browser dev tools or performance testing tools. |
3 | Page Size | The total size of the webpage including all assets (HTML, CSS, scripts, images). | Measured in KB/MB using performance testing tools. |
4 | Number of Requests | The total number of HTTP(S) requests made to load a webpage. | Counted via dev tools or testing tools like WebPageTest. |
5 | Time to Interactive (TTI) | The time it takes for a page to become fully interactive for the user. | Measured in ms using Lighthouse. |
6 | Largest Contentful Paint (LCP) | Time taken for the largest visible content element (image/text block) to render in the viewport. | Measured in ms using Lighthouse/Core Web Vitals. |
7 | Total Link | The number of hyperlinks included in the webpage. | Counted using HTML parsers or crawler tools. |
8 | Byte In | The total amount of data transferred from the server to load the page. | Measured in KB/MB using WebPageTest or network monitors. |
9 | Start Render Time | Time when the browser starts painting the first pixels on the screen. | Measured in ms using WebPageTest or Lighthouse. |
10 | Document Complete Time | The time until the document and resources are fully loaded. | Measured in ms using WebPageTest or GTmetrix. |
11 | Speed Index | A user-centric metric showing how quickly page content is visually displayed. | Measured in ms using Lighthouse or WebPageTest. |
12 | Compression | The use of resource compression (e.g., GZIP, Brotli) to reduce file size. | Checked via HTTP headers or Lighthouse audits. |
13 | Broken Links Detection | Identifies invalid or non-functioning hyperlinks on the page. | Evaluated using crawler tools (e.g., ScreamingFrog, W3C Link Checker). |
14 | Markup Validation (HTML Errors) | Detects errors in HTML structure affecting compatibility and rendering. | Measured using W3C Validator or similar tools. |
15 | Response Time | The time a server takes to respond to a client request. | Measured in ms using dev tools or monitoring platforms. |
16 | Design Optimization | Assessment of layout efficiency, visual hierarchy, and responsive design practices. | Evaluated qualitatively and with tools (e.g., Lighthouse best-practice audits). |
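Several of the metrics above (No. 7 "Total Link", and the broken-link and markup checks in Nos. 13–14) are gathered by parsing page markup. As a minimal illustration of how such a crawler-style audit might work, the following Python sketch uses only the standard-library `html.parser` to count hyperlinks and flag images missing ALT text; the sample page is invented, and the sketch is not drawn from any of the reviewed tools.

```python
from html.parser import HTMLParser

class LinkAltAuditor(HTMLParser):
    """Counts hyperlinks (metric No. 7, 'Total Link') and flags <img>
    tags missing ALT text, as a crawler tool would on each fetched page."""
    def __init__(self):
        super().__init__()
        self.links = 0
        self.images_missing_alt = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links += 1                 # only anchors that navigate count
        elif tag == "img" and not attrs.get("alt"):
            self.images_missing_alt += 1    # accessibility issue (missing ALT)

sample = """
<html><body>
  <a href="/home">Home</a>
  <a href="https://example.org/docs">Docs</a>
  <a name="anchor-only">no href</a>
  <img src="logo.png" alt="Site logo">
  <img src="banner.png">
</body></html>
"""
auditor = LinkAltAuditor()
auditor.feed(sample)
print(auditor.links, auditor.images_missing_alt)  # 2 1
```

A real audit would feed the auditor the fetched HTML of each page and, for broken-link detection, additionally issue an HTTP request per collected `href`.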
Attribute | Categories |
---|---|
Gender | 21 Male, 14 Female |
Age Groups | 25–34 (15), 35–44 (13), 45–50 (7) |
Professional Role | Junior Developer (10), Senior (12), Researcher/Lead (13) |
Years of Experience | 2–5 (9), 6–10 (14), 11–20 (12) |
Education Level | Bachelor’s (20), Master’s (10), PhD (5) |
Country of Residence | Palestine, Jordan, Egypt, Lebanon, Spain |
Aspect | Problem | Potential Solution | References |
---|---|---|---|
Researchers’ Aspects | Variations in evaluation methodologies | Standardize protocols and metrics | [14,15] |
Focus on e-government and education domains | Encourage interdisciplinary studies and funding | [7,13] | |
Subjective evaluation criteria | Use objective, standard-based measures | [7] | |
Issues of validity and reliability | Apply statistical validation and peer review | [16] | |
Developers’ Aspects | Limited time, budget, and expertise | Use cost-effective tools with minimal requirements | [15,16,17] |
Complexity of existing tools | Provide user-friendly interfaces and guidance | [6,7] | |
Dynamic nature of websites | Employ agile and continuous evaluation | [18,19] | |
Website Evaluation Aspects | Domain diversity makes generalization hard | Customize evaluation per domain | [7,9,13] |
User evaluation complexity | Include usability testing and feedback | [14,15] | |
Cross-domain comparability is limited | Create standardized benchmarks | [9,13] |
Approach/Method | Strengths | Limitations | Suitable Domains/Contexts |
---|---|---|---|
Support Vector Machine (SVM) | High accuracy with small-to-medium datasets; effective for classification; robust against overfitting with proper kernel choice. | Sensitive to parameter tuning (C, γ); computationally expensive with large datasets. | E-commerce (traffic prediction), Education (content-heavy sites). |
Random Forest (RF) | Handles high-dimensional data; robust to noise and imbalance; provides feature importance ranking. | Less interpretable; slower training with very large datasets. | E-government portals; Healthcare (multi-factor performance). |
Decision Trees (DTs) | Easy to interpret and visualize; fast training; suitable for categorical features. | Prone to overfitting; limited generalization without pruning/ensembles. | Educational sites; Small-scale organizational portals. |
Naïve Bayes (NB) | Extremely fast and efficient; works well with text/content features; low data requirement. | Assumes independence among features; lower accuracy in complex scenarios. | News/media sites; Content-driven platforms. |
Logistic/Linear Regression | Simple, interpretable; effective baseline for binary outcomes. | Limited in capturing non-linear relationships; lower predictive power. | Benchmarking studies; Simple performance classification. |
K-Nearest Neighbors (KNN) | Non-parametric; intuitive; adapts easily to new data. | Inefficient with large datasets; sensitive to noisy/irrelevant features. | Social media; User-interaction heavy sites. |
Ensemble Methods (AdaBoost, Gradient Boosting, XGBoost) | High predictive accuracy; reduces variance and bias; robust in complex data scenarios. | Higher complexity; harder to interpret; longer training times. | Cross-domain applications; Large heterogeneous datasets. |
Statistical/Heuristic (Regression, Fuzzy, Rule-based) | Interpretable; useful with limited/incomplete data; simple implementation. | Limited adaptability; lower accuracy with complex/large datasets. | Early-stage studies; Benchmarking frameworks. |
Hybrid/Intelligent (Neuro-fuzzy, AHP-ML, Expert Systems) | Combine strengths of multiple paradigms; innovative; context-aware. | Limited adoption; higher complexity; lack of standardized frameworks. | Specialized domains (finance, healthcare, smart systems). |
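None of the approaches in the table above is tied to a particular implementation. As a toy illustration of the KNN row, the following self-contained Python sketch selects k for a K-Nearest Neighbors classifier via leave-one-out cross-validation on a small synthetic two-cluster dataset; the data and candidate k values are invented for illustration.

```python
from collections import Counter
import math

def knn_predict(train, x, k):
    """Classify x by majority vote among its k nearest training points."""
    neighbors = sorted(train, key=lambda p: math.dist(p[0], x))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

def loo_accuracy(data, k):
    """Leave-one-out cross-validation accuracy for a given k."""
    hits = sum(
        knn_predict(data[:i] + data[i + 1:], x, k) == y
        for i, (x, y) in enumerate(data)
    )
    return hits / len(data)

# Synthetic dataset: two well-separated clusters of (features, label).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 0),
        ((5, 5), 1), ((5, 6), 1), ((6, 5), 1), ((6, 6), 1)]

scores = {k: loo_accuracy(data, k) for k in (1, 3, 5, 7)}
best_k = max(scores, key=scores.get)   # ties resolve to the smallest candidate
print(best_k, scores[best_k])
```

On this separable toy data, k = 7 fails (each left-out point is outvoted by the opposite cluster) while smaller k values classify perfectly, which is why the KNN row flags sensitivity to the choice of k.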
Area | Recommendations | References |
---|---|---|
Core Web Vitals (CWVs) | - Improve LCP: Optimize image sizes, minify and combine resources, utilize browser caching, and consider lazy loading. | [32,33] |
- Enhance FID: Minimize JavaScript execution, prioritize critical JS, avoid render-blocking resources. | [33,34] | |
- Minimize CLS: Set fixed dimensions for images and videos, avoid layout shifts from third-party content, and preload key resources. | [13,35,36] | |
Content & Design | - Compress images: Use efficient formats (WebP), and optimize sizes without quality loss. | [37,38] |
- Minify and combine resources: Reduce HTTP requests, minify HTML/CSS/JS, and combine when possible. | [6,39,40] | |
- Implement lazy loading: Load non-critical elements only when needed, improve initial page load. | [21,23] | |
Browser Caching | - Enable caching for static assets: Set appropriate headers for local storage, reduce load and improve experience. | [41,42] |
- Consider a CDN: Distribute content across servers to reduce latency and improve global performance. - Optimize server response times, which are crucial for efficient performance. | [21,23,41] | |
Mobile Responsiveness | - Use responsive design: Ensure seamless adaptation to different screen sizes and devices. | [17,28,33,38,42] |
- Test for mobile usability: Use tools like Google’s Mobile-Friendly Test to identify and fix issues. | ||
Monitoring & Analysis | - Use website analytics: Track key metrics (page load, bounce rate, conversion) to identify improvement areas. | [19,33,39] |
- Conduct regular performance audits: Use tools like Google PageSpeed Insights and Lighthouse to detect technical issues and optimization opportunities. |
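To make the "minify and compress" recommendations above concrete, the sketch below applies a naive whitespace minifier and GZIP compression to a hypothetical repetitive HTML payload, using only the Python standard library. It is illustrative only: production sites would use syntax-aware minifiers and server-side GZIP/Brotli rather than a regex.

```python
import gzip
import re

# Hypothetical page: repeated CSS rules mimic the redundancy of real markup.
html = ("<!doctype html><html><head><style>\n"
        + "  .card { margin: 0 auto;  padding: 16px; }\n" * 40
        + "</style></head><body>Hello</body></html>")

# Naive whitespace minification (real minifiers are syntax-aware and
# also strip comments, shorten identifiers, etc.).
minified = re.sub(r"\s+", " ", html)

raw = html.encode()
compressed = gzip.compress(raw)

print(len(raw) > len(minified.encode()))  # minification shrinks the page
print(len(compressed) < len(raw) // 5)    # gzip shrinks repetitive markup a lot
```

Both steps reduce Page Size and Byte In (metrics Nos. 3 and 8), which is why the recommendations treat compression and minification as low-effort, high-impact optimizations.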
Algorithm | Key Parameters and Ranges | Notes/Best Practices | Ref |
---|---|---|---|
SVM | C ∈ [10⁻², 10³], γ ∈ [10⁻⁴, 10⁰] (log scale) | Standardize features; prefer RBF kernel as default; tune via grid/Bayesian search. | [18,22] |
Random Forest | n_estimators ≥ 300; max_depth = 6–20; min_samples_leaf = 1–5 | Monitor out-of-bag error; use permutation importance for feature interpretability. | [20,39] |
Gradient-Boosted Trees/XGBoost | learning_rate = 0.05–0.2; n_estimators = 200–800; max_depth = 4–8; subsample/colsample = 0.7–1.0 | Apply early stopping (20–50 rounds patience); balance bias/variance with tuned depth. | [18,26] |
Logistic Regression | Regularization: L2; C tuned on log scale | Standardize inputs; report calibrated probabilities for thresholding KPIs. | [6,40] |
KNN | k = 3–15; weighting = distance-based | Scale features; cross-validate k; prefer distance weighting under class imbalance. | [16,29] |
All models | Validation: Nested CV; Metrics: AUC, F1, Accuracy, MAE/RMSE | Stratify splits if imbalance exists; use SHAP for interpretability in deployment. | [15,22,43] |
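The validation metrics listed in the "All models" row can be computed without any ML framework. The following Python sketch implements accuracy, F1, and RMSE from first principles; the fold predictions are invented for illustration (1 = "slow page", 0 = "fast page").

```python
import math

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def rmse(y_true, y_pred):
    """Root-mean-square error for regression-style targets."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Hypothetical fold results from a binary performance classifier.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print(round(accuracy(y_true, y_pred), 3))  # 0.75
print(round(f1(y_true, y_pred), 3))        # 0.75
print(round(rmse([2.1, 3.4, 1.8], [2.0, 3.0, 2.0]), 3))
```

In practice these would be averaged across the folds of the nested cross-validation recommended above, with stratified splits when classes are imbalanced.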
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Ghattas, M.; Odeh, S.; Mora, A.M. Predicting Website Performance: A Systematic Review of Metrics, Methods, and Research Gaps (2010–2024). Computers 2025, 14, 446. https://doi.org/10.3390/computers14100446