Peer-Review Record

AI-Driven Cybersecurity in Mobile Financial Services: Enhancing Fraud Detection and Privacy in Emerging Markets

J. Cybersecur. Priv. 2025, 5(3), 77; https://doi.org/10.3390/jcp5030077
by Ebrahim Mollik and Faisal Majeed
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 11 August 2025 / Revised: 3 September 2025 / Accepted: 11 September 2025 / Published: 22 September 2025
(This article belongs to the Section Security Engineering & Applications)

Round 1

Reviewer 1 Report

  • The results section is very detailed with percentages and charts, but lacks deeper inferential analysis.
  • Regression/SEM results are mentioned but not presented in full tables.
  • The paper would benefit from showing how this combined framework performs compared to using single models.
  • Many subsections (trust, confidence, satisfaction, recommendation) repeat similar “moderate-to-high trust but with privacy concerns” findings. This could be streamlined to avoid redundancy.
  • The paper refers to multiple charts, but they are only briefly explained.
  • The conclusion repeats discussion points at length. It could be more concise by focusing on contributions, limitations, and clear future research directions.
  • The overall writing is clear enough to follow, but it is not very engaging for the reader. Abbreviations are sometimes introduced inconsistently; for example, MFS should always be written out in full at first mention, with the abbreviation in parentheses, and then used consistently. Furthermore, the full terms for abbreviations should be consistently capitalized for each word; check Line 9 [Mobile Financial Services (MFS)], Lines 59, 66, 112, 176, 203, 250, 353, 366, and so on.
  • In places where the text indicates a list (e.g., Line 82 “This study has made three significant contributions”), the content is written as a long paragraph instead of bullet points, which makes it harder to read.

Author Response

We sincerely thank the reviewer for the careful reading of our manuscript and for providing constructive feedback. We have revised the paper extensively to address each of the comments. Below, we provide a detailed point-by-point response.

 

  1. Introduction – too many statistics, not enough synthesis

Reviewer comment: The introduction is comprehensive but not concise. It covers a lot of ground and gives useful context, but it spends too much space on statistics and not enough on summarizing prior scholarly work.

Response: We have revised the Introduction to reduce reliance on descriptive statistics and replaced non-academic reports with peer-reviewed studies (e.g., Arner et al., 2020; Zhou et al., 2021; Awosika et al., 2023). The section now provides a more balanced academic synthesis while still highlighting the relevance of mobile financial services in emerging markets. (pp. 3–5).

 

  2. Methodology – insufficient detail

Reviewer comment: Methods are described clearly enough to understand the overall approach, but lack detail on survey items, statistical results, and sampling rationale.

Response: We expanded the Methodology to clarify the adaptation of survey items, provide justification for the purposive convenience sampling, and include additional statistical details. Specifically, we now report Cronbach’s alpha values, confirmatory factor analysis (CFA) results, and SEM fit indices (SRMR, RMSEA, CFI, TLI). (pp. 10–11).
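For readers less familiar with the reliability statistic reported here, Cronbach's alpha can be sketched in a few lines. The construct, items, and Likert responses below are hypothetical and are not taken from the study's data.

```python
# Illustrative sketch of Cronbach's alpha, the per-construct reliability
# statistic the revised Methodology reports. The "trust" items and the
# 5-point responses below are invented for illustration only.

def cronbach_alpha(items):
    """items: one list of respondent scores per survey item (equal lengths)."""
    k = len(items)
    n = len(items[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(sample_var(it) for it in items) / sample_var(totals))

# Four hypothetical trust items answered by six respondents (Likert 1-5):
trust_items = [
    [4, 5, 3, 4, 5, 2],
    [4, 4, 3, 5, 5, 2],
    [5, 5, 2, 4, 4, 3],
    [4, 5, 3, 4, 5, 3],
]
print(round(cronbach_alpha(trust_items), 2))  # 0.92
```

Values above 0.70 are conventionally taken as acceptable internal consistency, which is the threshold the manuscript cites.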

 

  3. Results – too descriptive; missing regression/SEM tables

Reviewer comment: The results section is detailed with percentages and charts but lacks deeper inferential analysis; regression/SEM results are not fully presented.

Response: We revised the Results section to present full regression and SEM outputs in tables (Tables 1–3). These include measurement validity, path coefficients, and subgroup analyses. We also added deeper inferential interpretation to strengthen the analysis beyond descriptive statistics. (pp. 14–15).

 

 

 

  4. Framework – comparison with single models

Reviewer comment: The paper would benefit from showing how the combined TAM–Privacy Calculus–UTAUT framework performs compared to single models.

Response: In the Discussion, we now explicitly compare the explanatory power of the integrated framework with that of the individual models, demonstrating how the combined approach provides greater explanatory value. (pp. 19–21).

 

  5. Redundancy in findings (trust, confidence, satisfaction, recommendation)

Reviewer comment: Many subsections repeat similar findings (“moderate-to-high trust but with privacy concerns”).

Response: We streamlined overlapping subsections by merging redundant points into concise summaries. This reduces repetition while preserving the distinct findings related to trust, satisfaction, and recommendations. (pp. 21–22).

 

  6. Figures and charts – insufficient explanation

Reviewer comment: The paper refers to multiple charts but only briefly explains them.

Response: We expanded the figure explanations, ensuring each chart/table is fully described and directly linked to the theoretical framework. This improves clarity and integration with the discussion.

 

  7. Conclusion – repetitive

Reviewer comment: The conclusion repeats discussion points at length; should be concise with contributions, limitations, and future research.

Response: We rewrote the Conclusion to focus on three elements: (i) key contributions, (ii) study limitations, and (iii) directions for future research. Repetitive material was removed to make the conclusion concise and impactful. (pp. 21–22).

 

  8. Abbreviations – inconsistent usage

Reviewer comment: Abbreviations like MFS should be written in full at first mention and used consistently; capitalization inconsistent.

Response: We standardized abbreviations throughout. “Mobile Financial Services (MFS)” is now introduced at first mention and used consistently thereafter. Capitalization errors have been corrected across the manuscript.

 

  9. Lists – written as paragraphs

Reviewer comment: Contributions are written as a long paragraph instead of bullet points.

Response: We reformatted the contributions section into bullet points for clarity and readability. (p. 21).

 

Reviewer 2 Report

This paper makes a timely contribution by examining how AI-driven cybersecurity can enhance fraud detection and address privacy concerns in mobile financial services across emerging markets. The study is valuable in combining a mixed-methods design with an integrated theoretical framework (TAM, Privacy Calculus, and UTAUT), providing insights into user trust, cultural influences, and regulatory gaps. The findings are relevant and clearly presented, showing both the potential of AI for real-time fraud detection and the significant challenges of transparency and privacy. However, improvements are needed: the reliance on convenience sampling limits generalizability, the linkage between empirical results and theory should be strengthened, and some sections (introduction and literature review) would benefit from replacing non-academic references with peer-reviewed sources. Overall, the study is promising and relevant, but substantial revisions are required to improve its rigor and impact.

1. Title: "AI-Driven Cybersecurity in Mobile Financial Services: Enhancing Fraud Detection and Privacy in Emerging Markets" (p. 1)

My comment: The title is precise but overly long and combines multiple aspects.

My suggestion: Shorten to emphasize fraud detection and privacy, e.g.,
“AI-Driven Fraud Detection and Privacy Protection in Mobile Financial Services.”

2. Introduction & Background: "The fast growth of MFS... holding 66% of the global share..." (p. 1)

My comment: The Introduction is data-rich but relies on non-academic reports (CoinLaw, Reuters, Economic Times).

My suggestion: Incorporate peer-reviewed references from high-impact journals such as IEEE TDSC or Computers & Security.

3. Problem Statement: "rule-based systems traditionally employed for fraud detection... are not agile and sophisticated enough" (p. 2)

My comment: The gap is well identified but lacks a comparative illustration.

My suggestion: Provide a concise contrast between traditional and AI-based methods, supported by numerical evidence.

4. Research Questions & Objectives: "Research questions: 1. How do users perceive... 2. What privacy concerns... 3. How are cultural...?" (p. 2)

My comment: The questions are valid but broad and somewhat overlapping with the objectives.

My suggestion: Reformulate to ensure a one-to-one mapping with the research objectives.

5. Literature Review: "Another emerging approach is federated learning (FL)..." (p. 3)

My comment: The review is comprehensive but descriptive.

My suggestion: Add a critical subsection to highlight strengths and shortcomings in prior work.

6. Methodology: "The study applies a convergent parallel mixed-methods design..." (p. 6)

My comment: Clear design, but the sample size (151, convenience sampling) weakens generalizability.

My suggestion: Clarify the exploratory nature and recommend future replication with larger, randomized samples.

7. Quantitative Analysis: "Cronbach’s alpha coefficients... over the threshold of 0.70" (pp. 6–7)

My comment: Reliability is claimed, but the actual alpha values are missing.

My suggestion: Present a detailed table with CFA and reliability statistics.

8. Results: "95.4% of respondents reported concerns about their data privacy..." (p. 10)

My comment: A strong finding, but under-discussed in the theoretical framing.

My suggestion: Explicitly link this result to Privacy Calculus Theory.

9. Discussion: "False positives still remain a huge barrier..." (p. 17)

My comment: A valuable insight, yet it lacks a numerical linkage to the findings.

My suggestion: Include quantitative references (e.g., the percentage of users reporting false positives).

10. Conclusion & Future Directions: "Future Research: Longitudinal studies remain necessary..." (p. 18)

My comment: The conclusion is thorough but partly repetitive.

My suggestion: Focus more on the theoretical innovation (integrated TAM + Privacy Calculus + UTAUT framework).

Author Response

We sincerely thank the reviewer for the constructive feedback and valuable suggestions. We have carefully revised the manuscript to address each point. Below is a detailed, point-by-point response.

 

  1. Title – overly long

Reviewer comment: The title is precise but overly long and combines multiple aspects. Suggested: “AI-Driven Fraud Detection and Privacy Protection in Mobile Financial Services.”

Response: We appreciate the reviewer’s suggestion. After careful consideration, we decided to retain the original title:
“AI-Driven Cybersecurity in Mobile Financial Services: Enhancing Fraud Detection and Privacy in Emerging Markets.”

This title was intentionally kept broader because it directly reflects the scope of our study, which goes beyond fraud detection alone and also addresses privacy, trust, and regulatory aspects within emerging markets. Furthermore, the title aligns with our data collection instruments and research objectives, ensuring methodological consistency.

 

  2. Introduction – reliance on non-academic references

Reviewer comment: The Introduction relies on non-academic reports (CoinLaw, Reuters, Economic Times). Suggested: replace with peer-reviewed references.

Response: We replaced non-academic reports with peer-reviewed sources from high-impact journals such as IEEE Transactions on Dependable and Secure Computing and Computers & Security. The introduction now integrates more scholarly synthesis with limited descriptive statistics. (pp. 3–4).

 

  3. Problem Statement – lacks comparative illustration

Reviewer comment: Gap well identified but lacks contrast between traditional and AI-based methods.

Response: We revised the Problem Statement to include a comparative illustration, showing limitations of rule-based systems (e.g., static rules, high false positives) versus AI-driven methods (e.g., adaptability, anomaly detection, reduced latency). We supported this with recent empirical evidence. (pp. 7–8).
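The static-rule versus adaptive-method contrast described in this response can be illustrated with a minimal sketch. The $500 threshold, the transaction amounts, and the z-score cutoff below are invented for illustration and do not come from the manuscript.

```python
# Hypothetical contrast: a fixed-threshold rule misses a transaction that a
# simple per-user anomaly check flags. All numbers here are invented.

def rule_based_flag(amount, threshold=500.0):
    """Static rule: flag any transaction above a fixed amount."""
    return amount > threshold

def anomaly_flag(amount, history, z_cutoff=3.0):
    """Adaptive check: flag amounts far from this user's own spending pattern."""
    mean = sum(history) / len(history)
    var = sum((x - mean) ** 2 for x in history) / (len(history) - 1)
    std = var ** 0.5 or 1.0  # guard against zero-variance histories
    return abs(amount - mean) / std > z_cutoff

history = [20, 35, 25, 30, 40, 22, 28, 33]  # a user's typical purchases

# A $450 purchase slips past the static rule but stands out statistically:
print(rule_based_flag(450))        # False: below the fixed threshold
print(anomaly_flag(450, history))  # True: far outside this user's pattern
```

The static rule also fires on every large legitimate purchase above its threshold, which is the false-positive behavior the revised text contrasts with adaptive methods.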

 

  4. Research Questions & Objectives – overlap

Reviewer comment: Questions valid but broad and overlapping with objectives; reformulate to ensure one-to-one mapping.

Response: Thank you for this valuable comment. We carefully considered the concern. In line with MDPI’s structure guidelines, we chose not to create a standalone “Research Questions & Objectives” section. Instead, we revised the final part of the Introduction to clearly articulate three guiding research questions and their directly corresponding objectives. This ensures a one-to-one mapping, avoids overlap, and provides a transparent link between the study’s aims and its analysis. The revisions can be found on page 4, Paragraph 6 of the revised manuscript.

 

  5. Literature Review – too descriptive

Reviewer comment: Review is comprehensive but descriptive; should add critical subsection.

Response: We added a critical subsection summarizing strengths and weaknesses of prior works, focusing on AI-based fraud detection, privacy-preserving approaches, and explainable AI (XAI). Additionally, Appendix C now contains a comparative table of reviewed studies (2019–2024), highlighting methodological gaps. (pp. 30–31; Appendix C).

 

  6. Methodology – convenience sampling limits generalizability

Reviewer comment: Clear design, but reliance on 151 convenience samples weakens generalizability.

Response: We clarified that the study is exploratory in nature and acknowledged the limitation of convenience sampling. We now recommend replication with larger, randomized samples in future studies. This limitation is explicitly stated in both the Methodology and Conclusion. (pp. 9–11).

 

  7. Quantitative Analysis – missing alpha values

Reviewer comment: Reliability is claimed but actual alpha values are missing.

Response: We added a detailed table with Cronbach’s alpha values for each construct, alongside CFA loadings, composite reliability (CR), and AVE values. This improves transparency and statistical rigor. (Table 2, p. 15).

 

  8. Results – under-discussed privacy concerns

Reviewer comment: Privacy concerns finding (95.4% respondents) is strong but under-discussed theoretically.

Response: We revised the Results and Discussion to explicitly link privacy concerns to the Privacy Calculus framework, showing how perceived risks outweigh benefits in certain contexts, especially where transparency is lacking.

 

  9. Discussion – false positives lack numerical linkage

Reviewer comment: Insightful point but lacks quantitative linkage.

Response: We revised the discussion to include quantitative linkage by reporting that 38% of respondents experienced false positives in AI-driven fraud detection systems, which undermines user trust. This strengthens the connection between findings and discussion.

 

  10. Conclusion – partly repetitive

Reviewer comment: Conclusion is thorough but repetitive; should focus on theoretical innovation.

Response: We streamlined the Conclusion by removing repetition and emphasizing the theoretical innovation of integrating TAM, Privacy Calculus, and UTAUT. We also clarified contributions, limitations, and directions for future research. (pp. 21–22).

 

Reviewer 3 Report

 

1- The abstract was limited to clarifying the research gap without providing clear and simplified steps for the proposed algorithm or numerical values for the results reached by the researchers to reflect the efficiency of what was proposed.

2- The contributions did not clarify the years covered by the related work or the metrics used to demonstrate the effectiveness of the proposed model. Furthermore, the full acronym was not mentioned when it first appeared here (TAM):
Firstly, it fills a gap in the literature by engaging with users' perspectives about AI security tools in the poorly researched field of emerging markets. Secondly, the paper combines the TAM and Privacy Calculus frameworks to discuss trust and risk trade-offs. Thirdly, it provides empirical recommendations on implementing AI with transparency and explainability on fraud detection systems, so these align with local norms and expectations of privacy.

3- The researchers' findings have not been discussed with other previous work using the same criteria.

4- The concepts and types of XAI are not adequately explained to the reader.

 

1- It is possible to add a table summarizing all the works included in Literature Review (section two), showing the strengths and weaknesses of each one according to the technical tools used.

2- Ensure that the first appearance of any scientific term is accompanied by its abbreviation and that this is not repeated with each appearance.

3- Add a paragraph at the end of the introduction that explains the general structure of the research paper with all its sections.

4- The conclusions are weak and need to be rephrased to clarify the research gap and the solutions offered by the manuscript.

Author Response

We sincerely thank the reviewer for the valuable feedback that has helped us improve the clarity, methodological rigor, and contribution of the manuscript. Below, we respond point-by-point to each comment. All changes have been incorporated into the revised manuscript, with tracked changes highlighted.

 

Major Comments

Comment 1:
The abstract was limited to clarifying the research gap without providing clear and simplified steps for the proposed algorithm or numerical values for the results reached by the researchers to reflect the efficiency of what was proposed.

Response 1:
Thank you for highlighting this. We have revised the Abstract to include both methodological clarity and numerical findings. Specifically, we now state the mixed-methods approach (quantitative survey with n = 151 and qualitative thematic analysis) and report numerical outcomes (e.g., 95.4% raised privacy concerns, 78.2% recognized benefits, regression results β = 0.63, p < 0.01). This provides a clearer view of efficiency and outcomes (Page 2, Lines 7–12).

Comment 2:
The contributions did not clarify the years covered by the related work or the metrics used to demonstrate the effectiveness of the proposed model. Furthermore, the full acronym was not mentioned when it first appeared here (TAM).

Response 2:
We appreciate this observation. In the Introduction and Contributions section, we now specify that the literature review covers the period 2014–2024, focusing on peer-reviewed works on XAI, trust, and fraud detection. We also clarified metrics (AUC, accuracy, false positives) used in related works. Additionally, all acronyms (e.g., Technology Acceptance Model (TAM), Privacy Calculus (PC)) are now written in full at first appearance and used consistently thereafter (Page 3).

 

Comment 3:
The researchers' findings have not been discussed with other previous work using the same criteria.

Response 3:
We agree and have strengthened the Discussion by explicitly comparing our findings with prior studies that used similar criteria (e.g., Zhou et al., 2021 on fraud detection accuracy, Hernandez-Ortega, 2020 on transparency and trust, and Yang et al., 2019 on federated learning). This allows clearer positioning of our contributions relative to established benchmarks (Pages 19–21).

 

Comment 4:
The concepts and types of XAI are not adequately explained to the reader.

Response 4:
We acknowledge this gap. In the Literature Review, we added a subsection that explains the main types of XAI (e.g., model-agnostic vs. model-specific approaches, post-hoc vs. intrinsic explainability, and popular techniques like SHAP, LIME, and counterfactuals) and highlight their differences. This addition improves conceptual clarity for readers unfamiliar with XAI (Page 6–7).
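As a concrete instance of one family named in this response (a post-hoc, model-agnostic technique), permutation importance can be sketched as follows. The toy fraud model, its weights, and the feature names are hypothetical, and this is a simpler relative of SHAP and LIME rather than either algorithm itself.

```python
# Post-hoc, model-agnostic explanation sketch: permutation importance on a
# toy fraud-scoring function. The model, weights, and features are invented.
import random

def fraud_score(tx):
    # Toy black box: the amount z-score dominates; the hour is ignored.
    return 0.8 * tx["amount_z"] + 0.2 * tx["new_device"] + 0.0 * tx["hour"]

def permutation_importance(model, rows, feature, trials=200, seed=0):
    """Mean absolute score change when one feature is shuffled across rows."""
    rng = random.Random(seed)
    base = [model(r) for r in rows]
    total = 0.0
    for _ in range(trials):
        vals = [r[feature] for r in rows]
        rng.shuffle(vals)
        shuffled = [{**r, feature: v} for r, v in zip(rows, vals)]
        total += sum(abs(model(s) - b) for s, b in zip(shuffled, base)) / len(rows)
    return total / trials

rows = [{"amount_z": z, "new_device": d, "hour": h}
        for z, d, h in [(0.1, 0, 9), (2.5, 1, 2), (0.3, 0, 14), (3.1, 1, 3)]]

for feat in ("amount_z", "new_device", "hour"):
    print(feat, round(permutation_importance(fraud_score, rows, feat), 3))
# amount_z ranks highest; hour, with zero weight, scores exactly 0.0
```

SHAP and LIME follow the same post-hoc, model-agnostic logic but produce per-prediction attributions rather than this kind of global feature ranking.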

 

Detailed Comments

Comment 1:
It is possible to add a table summarizing all the works included in Literature Review (section two), showing the strengths and weaknesses of each one according to the technical tools used.

Response 1:
We have added Table 1 in the Literature Review, summarizing key prior studies, their methodological tools, and their strengths/weaknesses. This provides a clear comparative overview for readers (Page 8).

 

Comment 2:
Ensure that the first appearance of any scientific term is accompanied by its abbreviation and that this is not repeated with each appearance.

Response 2:
We carefully reviewed the entire manuscript to ensure consistency. All scientific terms now appear in full at first mention, accompanied by their abbreviations in parentheses, and the abbreviations are used consistently thereafter (e.g., Mobile Financial Services (MFS), Artificial Intelligence (AI), Explainable AI (XAI)).

 

Comment 3:
Add a paragraph at the end of the introduction that explains the general structure of the research paper with all its sections.

Response 3:
We have added a structural roadmap paragraph at the end of the Introduction outlining the sequence of sections (Literature Review, Methodology, Results, Discussion, Practical Implications, and Conclusion). This improves readability and flow (Pages 4–5).

Comment 4:
The conclusions are weak and need to be rephrased to clarify the research gap and the solutions offered by the manuscript.

Response 4:
We revised the Conclusion to make it more concise, emphasize the research gap and how our study addresses it (integration of TAM, UTAUT, and Privacy Calculus in the context of AI fraud detection in emerging markets), highlight contributions, and propose stronger future research directions. (Pages 21–22).

 

Response to English Language

Comment:
The English could be improved to more clearly express the research.

Response:
We carefully edited the manuscript for clarity, conciseness, and consistency in terminology. Abbreviations and technical terms were standardized, and long sentences were shortened where necessary.

 

Reviewer 4 Report

The paper's topic is relevant and merits further dedication. Nevertheless, the current work has several fundamental flaws.

The central premise is flawed because it disregards a significant body of existing industry knowledge. The authors mistakenly claim that no centralized companies focus exclusively on fraud detection, which is factually incorrect; firms like Feedzai have been operating in this space for years. This oversight compromises the manuscript's entire analysis. Moreover, the paper's discussion is disjointed, addressing three unrelated topics in a confusing sequence: the impact of shallow and deep models, the use of ensemble models with XAI, and federated learning for data privacy. These points lack a clear, unifying theme, and the relationships between them are not evident. The authors should focus on a single, coherent topic or restructure the paper to demonstrate the connections between these different approaches.

 

Additionally, a significant disconnect exists between the introduction and the methodology. The methods section relies entirely on a questionnaire administered to participants, a methodology ill-suited for the problem. A more credible analysis would have been based on a comprehensive review of existing scientific publications or on practical, empirical evaluations. The reliance on survey data diminishes the scientific rigor and credibility of the findings.

Author Response


We sincerely thank the reviewer for the constructive feedback. The comments have been invaluable in refining the manuscript’s conceptual grounding, methodological rigor, and overall coherence. Below, we provide point-by-point responses. All changes have been incorporated into the revised manuscript, with tracked changes highlighted.

 

Major Comments

Comment 1:
The central premise is flawed because the manuscript disregards a significant body of existing industry knowledge. The authors mistakenly claim that no centralized companies focus exclusively on fraud detection, which is factually incorrect; companies like Feedzai have been operating in this space for years.

Response 1:
We appreciate this important observation. In the revised Introduction and Discussion, we have corrected this oversight by explicitly referencing Feedzai, FeatureSpace, and SAS, among others, as established industry leaders in fraud detection. Our revised framing clarifies that our contribution lies not in claiming exclusivity but in exploring how AI-driven methods can enhance trust and privacy in emerging markets, thereby complementing existing industry practices (Page 4; Page 20).

 

Comment 2:
The discussion is disjointed, addressing three unrelated topics (shallow vs. deep models, ensemble models with XAI, and federated learning for data privacy) without a clear unifying theme.

Response 2:
We agree that the earlier version lacked coherence. The Discussion section has now been restructured into thematic subsections: 7.1 Theoretical Integration, 7.2 Novel Contribution, 7.3 Industry Practices, 7.4 Trust and Privacy, and 7.5 Summary. This new structure highlights how different approaches (deep vs. shallow models, XAI, and federated learning) are interconnected in solving fraud detection and privacy challenges, offering a unified narrative (Page 19–21).

 

Comment 3:
The methods section relies entirely on a questionnaire administered to participants, which is ill-suited for the problem.

Response 3:
We acknowledge this concern. While our primary data collection was survey-based (n = 151), we have strengthened the methodology by including a mini-systematic review of 15 peer-reviewed AI fraud detection studies (2019–2024), now provided as Appendix C. This addition triangulates user perception data with technical evidence from the literature, enhancing both rigor and credibility. The methodology section has been updated to reflect this expanded approach (Page 9–10; Appendix C, Pages 30–31).

 

Comment 4:
The reliance on survey data diminishes the scientific rigor and credibility of the findings.

Response 4:
We agree and have revised the framing of our contribution. Instead of presenting the study as a standalone technical solution, we now emphasize its role in integrating user perspectives with established AI models. The contribution is clarified as advancing the understanding of trust, adoption, and privacy in emerging markets while positioning our work as complementary to existing fraud detection systems (Page 21).

 

Detailed Comments

Comment 1:
The introduction and methodology show a disconnect.

Response 1:
We revised the Introduction to better align with the methodology, clearly linking the research problem (user trust and privacy concerns in AI fraud detection) with our mixed-methods design. The structural roadmap paragraph at the end of the Introduction also improves alignment between the problem statement and research approach (Pages 4–5).

Comment 2:
Industry oversight weakens credibility.

Response 2:
As noted above, we have added explicit references to existing fraud detection companies (Feedzai, FeatureSpace, SAS) and discussed their approaches, thereby strengthening industry alignment (Page 4; Page 20).

 

Response to English Language

Comment:
The English could be improved to more clearly express the research.

Response:
We have carefully edited the manuscript for clarity, conciseness, and consistency. Redundant expressions were removed, sentences shortened, and abbreviations standardized. This has improved readability throughout the paper.

 

Round 2

Reviewer 1 Report

No comments.


Author Response

We sincerely thank the reviewer for their positive evaluation of the revised manuscript. We are pleased that the revisions have addressed the earlier concerns. As suggested, we have also carefully checked the language to further improve clarity and conciseness.

Reviewer 2 Report

None remaining. All substantive issues raised in the previous review have been thoroughly addressed.

1- The introduction has been revised to rely on peer-reviewed, high-quality references (e.g., IEEE TDSC, Computers & Security). This significantly improves academic rigor.

2- The problem statement now includes a clear comparative illustration between rule-based and AI-driven approaches, supported by empirical evidence.

3- The alignment between research questions and objectives has been clarified, with one-to-one mapping presented in the revised introduction.

4- The literature review has been expanded with a critical subsection and a comparative table (Appendix C), highlighting methodological gaps in prior studies.

5- Methodological limitations (convenience sampling, exploratory nature) are explicitly acknowledged, with recommendations for future replication.

6- Reliability and validity statistics (Cronbach’s α, CFA, CR, AVE) are now clearly presented in tables, enhancing transparency.

7- The strong finding on privacy concerns (95.4%) has been theoretically framed within the Privacy Calculus model.

8- Discussion of false positives now includes quantitative evidence (~27% of users), linking findings directly to user trust.

9- The conclusion has been streamlined to avoid repetition, with stronger emphasis on theoretical innovation (hybrid TAM + Privacy Calculus + UTAUT).

Only one minor issue remains: the title is still overly long and combines multiple aspects. While the authors justified keeping it, shortening it slightly would improve readability and precision.

Overall, the manuscript is now much improved and suitable for publication.

Author Response

We sincerely thank the reviewer for the very detailed and positive evaluation of the revised manuscript. We appreciate the acknowledgment that the introduction, problem statement, research questions, literature review, methodology, results, and conclusion have been substantially improved.

Regarding the reviewer’s helpful suggestion on shortening the title, we respectfully prefer to retain the current version:
“AI-Driven Cybersecurity in Mobile Financial Services: Enhancing Fraud Detection and Privacy in Emerging Markets”
This is because the title directly reflects the broader scope of our study, which addresses not only fraud detection and privacy but also the specific challenges within emerging markets. We believe this scope is important for accurately representing the contribution of the paper.

We sincerely thank the reviewer for the recommendation that the manuscript is now suitable for publication.

Reviewer 3 Report

All comments have been addressed well.

No comments.

Author Response

We sincerely thank the reviewer for acknowledging that all comments have been satisfactorily addressed. We greatly appreciate the constructive feedback received in earlier rounds, which significantly improved the manuscript. We also conducted a final language polish to ensure additional clarity and conciseness.
