Search Results (48)

Search Parameters:
Keywords = machine ethics test

17 pages, 1707 KiB  
Article
A Structural Causal Model Ontology Approach for Knowledge Discovery in Educational Admission Databases
by Bern Igoche Igoche, Olumuyiwa Matthew and Daniel Olabanji
Knowledge 2025, 5(3), 15; https://doi.org/10.3390/knowledge5030015 - 4 Aug 2025
Abstract
Educational admission systems, particularly in developing countries, often suffer from opaque decision processes, unstructured data, and limited analytic insight. This study proposes a novel methodology that integrates structural causal models (SCMs), ontological modeling, and machine learning to uncover and apply interpretable knowledge from an admission database. Using a dataset of 12,043 records from Benue State Polytechnic, Nigeria, we demonstrate this approach as a proof of concept by constructing a domain-specific SCM ontology, validating it with conditional independence testing (CIT), and extracting features for predictive modeling. Five classifiers were evaluated using stratified 10-fold cross-validation: Logistic Regression, Decision Tree, Random Forest, K-Nearest Neighbors (KNN), and Support Vector Machine (SVM). SVM and KNN achieved the highest classification accuracy (92%), with precision exceeding 95% and recall reaching 100%. Feature importance analysis revealed ‘mode of entry’ and ‘current qualification’ as key causal factors influencing admission decisions. This framework provides a reproducible pipeline that combines semantic representation and empirical validation, offering actionable insights for institutional decision-makers. Comparative benchmarking, ethical considerations, and model calibration are integrated to enhance methodological transparency. Limitations, including reliance on single-institution data, are acknowledged, and directions for generalizability and explainable AI are proposed. Full article
(This article belongs to the Special Issue Knowledge Management in Learning and Education)
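The evaluation protocol the abstract describes (five classifiers compared under stratified 10-fold cross-validation) maps directly onto scikit-learn. A minimal sketch, assuming a hypothetical flat CSV extract of the admission database with already-encoded numeric features and a binary admitted label; the study's actual features come from its SCM ontology:

```python
# Hypothetical sketch: five classifiers under stratified 10-fold CV,
# mirroring the evaluation setup named in the abstract.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

df = pd.read_csv("admissions.csv")     # hypothetical extract; features assumed numeric/encoded
X = df.drop(columns=["admitted"])      # e.g. mode_of_entry, current_qualification
y = df["admitted"]

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```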
25 pages, 953 KiB  
Article
Command Redefined: Neural-Adaptive Leadership in the Age of Autonomous Intelligence
by Raul Ionuț Riti, Claudiu Ioan Abrudan, Laura Bacali and Nicolae Bâlc
AI 2025, 6(8), 176; https://doi.org/10.3390/ai6080176 - 1 Aug 2025
Viewed by 154
Abstract
Artificial intelligence has taken a seat at the executive table, challenging the assumption that authority belongs to humans alone. This article projects a future of leadership in which managers collaborate with learning algorithms, formalized in the Neural-Adaptive Artificial Intelligence Leadership Model, which draws on the transformational leadership literature, socio-technical systems theory, and work on algorithmic governance. We assessed the model with thirty in-depth interviews, system-level traces of behavior, and a validated survey, testing six hypotheses concerning algorithmic delegation, ethical oversight, and human judgment versus machine insight in relation to agility and performance. We found that decisions are faster, change is more effective, and engagement is stronger where agile practices and strong digital literacy exist, and statistical tests suggest that human flexibility and clear governance amplify those benefits. As single-industry research based on self-reported measures, the study's generalizability is limited; replication in other industries with more objective measures is needed. Practitioners are offered a practical playbook for making algorithmic jobs meaningful, introducing moral fail-safes, and building learning feedback loops that keep people and machines aligned. Socially, the practice can reduce bias and foster inclusion by making accountability visible in both code and practice. Bridging the gap between leadership theory and algorithmic reality, the study provides a reproducible model for leading intelligent systems in organizations. Full article
(This article belongs to the Section AI Systems: Theory and Applications)
34 pages, 1835 KiB  
Article
Advancing Neurodegenerative Disease Management: Technical, Ethical, and Regulatory Insights from the NeuroPredict Platform
by Marilena Ianculescu, Lidia Băjenaru, Ana-Mihaela Vasilevschi, Maria Gheorghe-Moisii and Cristina-Gabriela Gheorghe
Future Internet 2025, 17(7), 320; https://doi.org/10.3390/fi17070320 - 21 Jul 2025
Viewed by 246
Abstract
Worldwide, neurodegenerative diseases, including multiple sclerosis, Parkinson’s, and Alzheimer’s, pose considerable healthcare challenges that demand novel approaches to early detection and efficient treatment. With its ability to provide real-time patient monitoring, customized medical care, and advanced predictive analytics, artificial intelligence (AI) is fundamentally transforming how healthcare is provided. Through the integration of wearable physiological sensors, motion sensors, and neurological assessment tools, the NeuroPredict platform harnesses AI and smart sensor technologies to enhance the management of specific neurodegenerative diseases. Machine learning algorithms process these data flows to find patterns that indicate disease evolution. This paper covers the design and architecture of the NeuroPredict platform, stressing the ethical and regulatory requirements that guide its development. The discussion section addresses the initial development of AI algorithms for disease monitoring, technical achievements, and ongoing enhancements driven by early user feedback. To establish the platform’s trustworthiness and data security, the paper also outlines risk analysis and mitigation approaches. The NeuroPredict platform’s potential for AI-driven smart healthcare solutions is highlighted, even though it is currently in the development stage. Subsequent research is expected to focus on improving data integration, expanding AI models, and ensuring regulatory compliance for clinical application. The current results are based on incremental laboratory tests using simulated user roles, with no clinical patient data involved so far. This study reports an experimental technology evaluation of modular components of the NeuroPredict platform, integrating multimodal sensors and machine learning pipelines in a laboratory-based setting, with co-design and clinical validation foreseen for a later project phase. Full article
(This article belongs to the Special Issue Artificial Intelligence-Enabled Smart Healthcare)
14 pages, 896 KiB  
Article
Calculating the Risk of Admission to Intensive Care Units in COVID-19 Patients Using Machine Learning
by Mireia Ladios-Martin, María José Cabañero-Martínez, José Fernández-de-Maya, Francisco-Javier Ballesta-López, Ignacio Garcia-Garcia, Adrián Belso-Garzas, Francisco-Manuel Aznar-Zamora and Julio Cabrero-García
J. Clin. Med. 2025, 14(12), 4205; https://doi.org/10.3390/jcm14124205 - 13 Jun 2025
Viewed by 409
Abstract
Background: The COVID-19 pandemic clearly posed a global challenge to healthcare systems, where the allocation of limited resources had important logistical and ethical implications. Detecting and prioritizing the population at risk of intensive care unit (ICU) admission is the first step to being able to care for the most vulnerable people and avoid unnecessary consumption of resources by mildly ill patients. Objective: To create a model, using machine learning techniques, capable of identifying the risk of ICU admission throughout the hospital stay of COVID-19 patients, and to evaluate the model’s performance. Methods: A retrospective cohort design was used to develop and validate a classification model of adult COVID-19 patients with or without risk of ICU admission. Data from three hospitals in Spain were used to develop the model (n = 1272) and for subsequent external validation (n = 550). Sensitivity, specificity, positive and negative predictive value, accuracy, F1 score, Youden index, and area under the curve of the model were evaluated. Results: The LightGBM model, incorporating 40 variables, was used. On the test dataset, the model obtained an area under the curve of 1.00 (0.99–1.00), specificity of 0.99 (0.97–1.00), and sensitivity of 0.92 (0.86–0.98). Conclusions: A model for predicting ICU admission of hospitalized COVID-19 patients was created with very good results. The identification and prioritization of COVID-19 patients at risk of ICU admission allows the right care to be provided to those most in need when the healthcare system is under pressure. Full article
(This article belongs to the Section Epidemiology & Public Health)
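The kind of model and metrics the abstract reports are straightforward to sketch with LightGBM and scikit-learn. A minimal, hypothetical example assuming a cohort CSV with an icu_admission label; the file and column names are illustrative, not the study's:

```python
# Hypothetical sketch: LightGBM classifier for ICU-admission risk,
# scored with AUC, sensitivity, and specificity on a held-out set.
import lightgbm as lgb
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

df = pd.read_csv("covid_cohort.csv")   # hypothetical extract (40 predictor variables)
X, y = df.drop(columns=["icu_admission"]), df["icu_admission"]
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = lgb.LGBMClassifier(n_estimators=500, learning_rate=0.05)
model.fit(X_train, y_train)

proba = model.predict_proba(X_test)[:, 1]
pred = (proba >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
print("AUC:        ", roc_auc_score(y_test, proba))
print("Sensitivity:", tp / (tp + fn))
print("Specificity:", tn / (tn + fp))
```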
10 pages, 267 KiB  
Article
Dataset on Programming Competencies Development Using Scratch and a Recommender System in a Non-WEIRD Primary School Context
by Jesennia Cárdenas-Cobo, Cristian Vidal-Silva and Nicolás Máquez
Data 2025, 10(6), 86; https://doi.org/10.3390/data10060086 - 3 Jun 2025
Viewed by 464
Abstract
The ability to program has become an essential competence for individuals in an increasingly digital world. However, access to programming education remains unequal, particularly in non-WEIRD (Western, Educated, Industrialized, Rich, and Democratic) contexts. This study presents a dataset resulting from an educational intervention designed to foster programming competencies and computational thinking skills among primary school students aged 8 to 12 years in Milagro, Ecuador. The intervention integrated Scratch, a block-based programming environment that simplifies coding by eliminating syntactic barriers, and the CARAMBA recommendation system, which provided personalized learning paths based on students’ progression and preferences. A structured educational process was implemented, including an initial diagnostic test to assess logical reasoning, guided activities in Scratch to build foundational skills, a phase of personalized practice with CARAMBA, and a final computational thinking evaluation using a validated assessment instrument. The resulting dataset encompasses diverse information: demographic data, logical reasoning test scores, computational thinking test results pre- and post-intervention, activity logs from Scratch, recommendation histories from CARAMBA, and qualitative feedback from university student tutors who supported the intervention. The dataset is anonymized, ethically collected, and made available under a CC-BY 4.0 license to encourage reuse. This resource is particularly valuable for researchers and practitioners interested in computational thinking development, educational data mining, personalized learning systems, and digital equity initiatives. It supports comparative studies between WEIRD and non-WEIRD populations, validation of adaptive learning models, and the design of inclusive programming curricula. Furthermore, the dataset enables the application of machine learning techniques to predict educational outcomes and optimize personalized educational strategies. By offering this dataset openly, the study contributes to filling critical gaps in educational research, promoting inclusive access to programming education, and fostering a more comprehensive understanding of how computational competencies can be developed across diverse socioeconomic and cultural contexts. Full article
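As one illustration of what this dataset supports, the pre- and post-intervention computational thinking scores lend themselves to a paired comparison. A minimal sketch; the file name and column names (ct_pre, ct_post) are hypothetical, and the real schema is in the dataset's documentation:

```python
# Hypothetical sketch: paired t-test on pre/post computational-thinking scores.
import pandas as pd
from scipy import stats

df = pd.read_csv("scratch_caramba_dataset.csv")   # hypothetical file name
t, p = stats.ttest_rel(df["ct_post"], df["ct_pre"])
gain = (df["ct_post"] - df["ct_pre"]).mean()
print(f"Mean gain: {gain:.2f}, t = {t:.2f}, p = {p:.4f}")
```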
18 pages, 4885 KiB  
Article
Decoding Poultry Welfare from Sound—A Machine Learning Framework for Non-Invasive Acoustic Monitoring
by Venkatraman Manikandan and Suresh Neethirajan
Sensors 2025, 25(9), 2912; https://doi.org/10.3390/s25092912 - 5 May 2025
Cited by 2 | Viewed by 1427
Abstract
Acoustic monitoring presents a promising, non-invasive modality for assessing animal welfare in precision livestock farming. In poultry, vocalizations encode biologically relevant cues linked to health status, behavioral states, and environmental stress. This study proposes an integrated analytical framework that combines signal-level statistical analysis with machine learning and deep learning classifiers to interpret chicken vocalizations in a welfare assessment context. The framework was evaluated using three complementary datasets encompassing health-related vocalizations, behavioral call types, and stress-induced acoustic responses. The pipeline employs a multistage process comprising high-fidelity signal acquisition, feature extraction (e.g., mel-frequency cepstral coefficients, spectral contrast, zero-crossing rate), and classification using models including Random Forest, HistGradientBoosting, CatBoost, TabNet, and LSTM. Feature importance analysis and statistical tests (e.g., t-tests, correlation metrics) confirmed that specific MFCC bands and spectral descriptors were significantly associated with welfare indicators. LSTM-based temporal modeling revealed distinct acoustic trajectories under visual and auditory stress, supporting the presence of habituation and stressor-specific vocal adaptations over time. Model performance, validated through stratified cross-validation and multiple statistical metrics (e.g., F1-score, Matthews correlation coefficient), demonstrated high classification accuracy and generalizability. Importantly, the approach emphasizes model interpretability, facilitating alignment with known physiological and behavioral processes in poultry. The findings underscore the potential of acoustic sensing and interpretable AI as scalable, biologically grounded tools for real-time poultry welfare monitoring, contributing to the advancement of sustainable and ethical livestock production systems. Full article
(This article belongs to the Special Issue Sensors in 2025)
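The feature-extraction stage the abstract lists (MFCCs, spectral contrast, zero-crossing rate) is commonly implemented with librosa. A minimal sketch feeding those features to one of the named classifiers, Random Forest; the file paths and labels are hypothetical placeholders:

```python
# Hypothetical sketch: acoustic features per recording, then a Random Forest.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def acoustic_features(path: str) -> np.ndarray:
    """Mean MFCCs, spectral contrast, and zero-crossing rate for one file."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    contrast = librosa.feature.spectral_contrast(y=y, sr=sr).mean(axis=1)
    zcr = librosa.feature.zero_crossing_rate(y).mean(axis=1)
    return np.concatenate([mfcc, contrast, zcr])

# Hypothetical recordings and welfare labels.
paths = ["call_001.wav", "call_002.wav"]
labels = ["healthy", "distress"]
X = np.vstack([acoustic_features(p) for p in paths])
clf = RandomForestClassifier(random_state=0).fit(X, labels)
```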
20 pages, 713 KiB  
Article
Spendception: The Psychological Impact of Digital Payments on Consumer Purchase Behavior and Impulse Buying
by Naeem Faraz and Amna Anjum
Behav. Sci. 2025, 15(3), 387; https://doi.org/10.3390/bs15030387 - 19 Mar 2025
Cited by 1 | Viewed by 6226
Abstract
This study introduces a novel construct, Spendception, which conceptualizes the psychological impact of digital payment systems on consumer behavior, marking a significant contribution to the fields of consumer psychology and behavioral economics. Spendception reflects the reduced psychological resistance to spending when using digital payment methods, as compared to cash, due to the diminished visibility of transactions and the perceived ease of payments. This research explores the role of Spendception in increasing consumer purchase behavior, with impulse buying examined as a mediator. To test the proposed model, an extensive survey of 1162 respondents from diverse backgrounds was conducted. We employed exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) to validate the measurement of key constructs, and structural equation modeling (SEM) to test the hypothesized relations among the variables. A machine learning technique was also used to test the robustness of the model. Results showed that Spendception significantly increased consumer purchase behavior, with impulse buying partially mediating the relation. Gender moderated the relationship, with female consumers being more susceptible to impulse buying caused by Spendception. The study showed that digital payment systems made spending feel less noticeable, leading people to spend more without realizing the financial impact. Spendception extends existing consumer behavior theories by explaining how digital payment systems reduce psychological barriers to spending: it connects to the pain of paying, demonstrating that the lack of immediate visibility and physicality in digital payments alters consumers’ perceptions of spending, leading to impulse buying and higher purchase behavior. The findings offer actionable insights for marketers designing targeted campaigns and for policymakers promoting financial literacy to ensure the ethical use of digital payment systems. Full article
(This article belongs to the Section Social Psychology)
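The mediation structure tested above (Spendception driving purchase behavior partly through impulse buying) can be illustrated with plain OLS regressions in the Baron and Kenny style, a simpler stand-in for the paper's full SEM. A minimal sketch with hypothetical column names:

```python
# Hypothetical sketch: OLS-based mediation check, not the paper's SEM.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("spendception_survey.csv")   # hypothetical survey extract

total    = smf.ols("purchase ~ spendception", data=df).fit()            # total effect
mediator = smf.ols("impulse ~ spendception", data=df).fit()             # X -> mediator
direct   = smf.ols("purchase ~ spendception + impulse", data=df).fit()  # direct effect

# Partial mediation: the spendception coefficient shrinks but remains
# significant once impulse buying is controlled for.
print("total effect:   ", total.params["spendception"])
print("a-path (X -> M):", mediator.params["spendception"])
print("direct effect:  ", direct.params["spendception"])
```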
28 pages, 1922 KiB  
Article
An Adaptive Conceptualisation of Artificial Intelligence and the Law, Regulation and Ethics
by Ikpenmosa Uhumuavbi
Laws 2025, 14(2), 19; https://doi.org/10.3390/laws14020019 - 19 Mar 2025
Cited by 1 | Viewed by 1715
Abstract
The description of a combination of technologies as ‘artificial intelligence’ (AI) is misleading. To ascribe intelligence to a statistical model without human attribution points towards an attempt at shifting legal, social, and ethical responsibilities to machines. This paper exposes the deeply flawed characterisation of AI and the unearned assumptions that are central to its current definition, characterisation, and efforts at controlling it. The contradictions in the framing of AI underlie the continuing failure to regulate it. A revival of applied definitional framing of AI across disciplines has produced a plethora of conceptions and inconclusiveness. The research therefore advances this position with two fundamental and interrelated arguments. First, the difficulty in regulating AI is tied to its characterisation as artificial intelligence, which has triggered existing and new conflicting notions of the meaning of ‘artificial’ and ‘intelligence’, notions that are broad and largely unsettled. Second, difficulties in developing a global consensus on responsible AI stem from this inconclusiveness. To advance these arguments, this paper utilises functional contextualism to analyse the fundamental nature and architecture of artificial intelligence and human intelligence. There is a need to establish a test for ‘artificial intelligence’ in order to ensure the appropriate allocation of rights, duties, and responsibilities. This research therefore proposes, develops, and recommends an adaptive three-element, three-step threshold for achieving responsible artificial intelligence. Full article
(This article belongs to the Topic Emerging Technologies, Law and Policies)
22 pages, 4157 KiB  
Article
Prediction of Green Sukuk Investment Interest Drivers in Nigeria Using Machine Learning Models
by Mukail Akinde, Olasunkanmi Olapeju, Olusegun Olaiju, Timothy Ogunseye, Adebayo Emmanuel, Sekinat Olagoke-Salami, Foluke Oduwole, Ibironke Olapeju, Doyinsola Ibikunle and Kehinde Aladelusi
J. Risk Financial Manag. 2025, 18(2), 89; https://doi.org/10.3390/jrfm18020089 - 6 Feb 2025
Cited by 1 | Viewed by 1325
Abstract
This study developed and evaluated machine learning models (MLMs) for predicting the drivers of green sukuk investment interest (GSII) in Nigeria, drawing on hypothesised determinants adapted from variants of the planned behaviour model and behavioural finance theory. Of the seven models leveraged in the prediction, random forest, which had the highest accuracy (82.35% on the testing and 90.37% on the training dataset) with a good R² value (0.774), was the optimal choice for prediction. The random forest model ultimately classified 10 of the hypothesised predictors of GSII, underpinning constructs such as risk, perceived behavioural control, information availability, and growth, as highly important; 21, spanning all of the hypothesised constructs in measurement, as moderately important; and the remaining 15 as low in importance. The feature importance determined by the random forest model afforded indicator-specific values, which can help green sukuk (GS) issuers prioritise the most important drivers of investment interest, suggest important contexts for ethical investment policy enhancement, and inform insights about optimal resource allocation and pragmatic recommendations for stakeholders with respect to the funding of climate change mitigation projects in Nigeria. Full article
(This article belongs to the Special Issue Machine Learning-Based Risk Management in Finance and Insurance)
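The importance-tiering step described above is easy to sketch with a fitted random forest: rank the learned importances, then bucket them into high, moderate, and low tiers. The file, column names, and tier thresholds below are hypothetical, not the paper's:

```python
# Hypothetical sketch: random forest feature importances split into tiers.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("gsii_survey.csv")   # hypothetical survey data
X, y = df.drop(columns=["investment_interest"]), df["investment_interest"]
rf = RandomForestClassifier(random_state=0).fit(X, y)

imp = pd.Series(rf.feature_importances_, index=X.columns, name="importance")
imp = imp.sort_values(ascending=False)
# Illustrative cut points; the paper's tier boundaries are not reproduced here.
tiers = pd.cut(imp, bins=[0, 0.01, 0.03, 1.0], labels=["low", "moderate", "high"])
print(pd.concat([imp, tiers.rename("tier")], axis=1))
```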
25 pages, 1047 KiB  
Review
Artificial Intelligence in Cardiac Surgery: Transforming Outcomes and Shaping the Future
by Vasileios Leivaditis, Eleftherios Beltsios, Athanasios Papatriantafyllou, Konstantinos Grapatsas, Francesk Mulita, Nikolaos Kontodimopoulos, Nikolaos G. Baikoussis, Levan Tchabashvili, Konstantinos Tasios, Ioannis Maroulis, Manfred Dahm and Efstratios Koletsis
Clin. Pract. 2025, 15(1), 17; https://doi.org/10.3390/clinpract15010017 - 14 Jan 2025
Cited by 2 | Viewed by 4113
Abstract
Background: Artificial intelligence (AI) has emerged as a transformative technology in healthcare, with its integration into cardiac surgery offering significant advancements in precision, efficiency, and patient outcomes. However, a comprehensive understanding of AI’s applications, benefits, challenges, and future directions in cardiac surgery is needed to inform its safe and effective implementation. Methods: A systematic review was conducted following PRISMA guidelines. Literature searches were performed in PubMed, Scopus, Cochrane Library, Google Scholar, and Web of Science, covering publications from January 2000 to November 2024. Studies focusing on AI applications in cardiac surgery, including risk stratification, surgical planning, intraoperative guidance, and postoperative management, were included. Data extraction and quality assessment were conducted using standardized tools, and findings were synthesized narratively. Results: A total of 121 studies were included in this review. AI demonstrated superior predictive capabilities in risk stratification, with machine learning models outperforming traditional scoring systems in mortality and complication prediction. Robotic-assisted systems enhanced surgical precision and minimized trauma, while computer vision and augmented cognition improved intraoperative guidance. Postoperative AI applications showed potential in predicting complications, supporting patient monitoring, and reducing healthcare costs. However, challenges such as data quality, validation, ethical considerations, and integration into clinical workflows remain significant barriers to widespread adoption. Conclusions: AI has the potential to revolutionize cardiac surgery by enhancing decision making, surgical accuracy, and patient outcomes. Addressing limitations related to data quality, bias, validation, and regulatory frameworks is essential for its safe and effective implementation. Future research should focus on interdisciplinary collaboration, robust testing, and the development of ethical and transparent AI systems to ensure equitable and sustainable advancements in cardiac surgery. Full article
13 pages, 2625 KiB  
Article
Moving Healthcare AI Support Systems for Visually Detectable Diseases to Constrained Devices
by Tess Watt, Christos Chrysoulas, Peter J. Barclay, Brahim El Boudani and Grigorios Kalliatakis
Appl. Sci. 2024, 14(24), 11474; https://doi.org/10.3390/app142411474 - 10 Dec 2024
Cited by 1 | Viewed by 1810
Abstract
Image classification usually requires connectivity and access to the cloud, which is often limited in many parts of the world, including hard-to-reach rural areas. Tiny machine learning (tinyML) aims to solve this problem by hosting artificial intelligence (AI) assistants on constrained devices, eliminating connectivity issues by processing data within the device itself, without Internet or cloud access. This study explores the use of tinyML to provide healthcare support with low-spec devices in low-connectivity environments, focusing on the diagnosis of skin diseases and the ethical use of AI assistants in a healthcare setting. To investigate this, images of skin lesions were used to train a model for classifying visually detectable diseases (VDDs). The model weights were then offloaded to a Raspberry Pi with a webcam attached, to be used for the classification of skin lesions without Internet access. It was found that the developed prototype achieved a test accuracy of 78% when trained on the HAM10000 dataset, and a test accuracy of 85% when trained on the ISIC 2020 Challenge dataset. Full article
(This article belongs to the Special Issue Computer-Vision-Based Biomedical Image Processing)
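The offloading step the abstract describes (moving a trained classifier onto a constrained device without cloud access) is typically done by converting the model to TensorFlow Lite. A minimal sketch under that assumption; the model file, input preprocessing, and class labels are hypothetical:

```python
# Hypothetical sketch: convert a trained Keras classifier to TFLite and
# run it on-device (e.g., a Raspberry Pi) without Internet access.
import numpy as np
import tensorflow as tf

# Off-device: convert and shrink the trained model.
model = tf.keras.models.load_model("vdd_classifier.keras")   # hypothetical model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]          # reduce size for the device
with open("vdd_classifier.tflite", "wb") as f:
    f.write(converter.convert())

# On-device: classify a preprocessed lesion image.
interpreter = tf.lite.Interpreter(model_path="vdd_classifier.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
image = np.zeros(inp["shape"], dtype=np.float32)              # placeholder webcam frame
interpreter.set_tensor(inp["index"], image)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))                   # class probabilities
```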
14 pages, 253 KiB  
Review
Novel Approaches for the Early Detection of Glaucoma Using Artificial Intelligence
by Marco Zeppieri, Lorenzo Gardini, Carola Culiersi, Luigi Fontana, Mutali Musa, Fabiana D’Esposito, Pier Luigi Surico, Caterina Gagliano and Francesco Saverio Sorrentino
Life 2024, 14(11), 1386; https://doi.org/10.3390/life14111386 - 28 Oct 2024
Cited by 3 | Viewed by 2819
Abstract
Background: If left untreated, glaucoma—the second most common cause of blindness worldwide—causes irreversible visual loss due to a gradual neurodegeneration of the retinal ganglion cells. Conventional techniques for identifying glaucoma, like optical coherence tomography (OCT) and visual field exams, are frequently laborious and dependent on subjective interpretation. Through the fast and accurate analysis of massive amounts of imaging data, artificial intelligence (AI), in particular machine learning (ML) and deep learning (DL), has emerged as a promising method to improve the early detection and management of glaucoma. Aims: The purpose of this study is to examine the current uses of AI in the early diagnosis, treatment, and detection of glaucoma while highlighting the advantages and drawbacks of different AI models and algorithms. In addition, it aims to determine how AI technologies might transform glaucoma treatment and suggest future lines of inquiry for this area of study. Methods: A thorough search of databases, including Web of Science, PubMed, and Scopus, was carried out to find pertinent papers released until August 2024. The inclusion criteria were limited to research published in English in peer-reviewed publications that used AI, ML, or DL to diagnose or treat glaucoma in human subjects. Articles were chosen and vetted according to their quality, contribution to the field, and relevancy. Results: Convolutional neural networks (CNNs) and other deep learning algorithms are among the AI models included in this paper that have been shown to have excellent sensitivity and specificity in identifying glaucomatous alterations in fundus photos, OCT scans, and visual field tests. By automating standard screening procedures, these models have demonstrated promise in distinguishing between glaucomatous and healthy eyes, forecasting the course of the disease, and possibly lessening the workload of physicians. Nonetheless, several significant obstacles remain, such as the need for diverse training datasets, external validation, transparency in decision-making, and the handling of moral and legal issues. Conclusions: Artificial intelligence (AI) holds great promise for improving the diagnosis and treatment of glaucoma by facilitating prompt and precise interpretation of imaging data and assisting in clinical decision making. To guarantee wider accessibility and better patient results, future research should create robust, generalizable AI models validated in various populations, address ethical and legal matters, and incorporate AI into clinical practice. Full article
(This article belongs to the Special Issue Cornea and Anterior Eye Diseases: 2nd Edition)
38 pages, 11831 KiB  
Article
CIPHER: Cybersecurity Intelligent Penetration-Testing Helper for Ethical Researcher
by Derry Pratama, Naufal Suryanto, Andro Aprila Adiputra, Thi-Thu-Huong Le, Ahmada Yusril Kadiptya, Muhammad Iqbal and Howon Kim
Sensors 2024, 24(21), 6878; https://doi.org/10.3390/s24216878 - 26 Oct 2024
Cited by 3 | Viewed by 4841
Abstract
Penetration testing, a critical component of cybersecurity, typically requires extensive time and effort to find vulnerabilities. Beginners in this field often benefit from collaborative approaches with the community or experts. To address this, we develop Cybersecurity Intelligent Penetration-testing Helper for Ethical Researchers (CIPHER), a large language model specifically trained to assist in penetration testing tasks as a chatbot. Unlike software development, penetration testing involves domain-specific knowledge that is not widely documented or easily accessible, necessitating a specialized training approach for AI language models. CIPHER was trained using over 300 high-quality write-ups of vulnerable machines, hacking techniques, and documentation of open-source penetration testing tools, augmented with an expert response structure. Additionally, we introduced the Findings, Action, Reasoning, and Results (FARR) Flow augmentation, a novel method to augment penetration testing write-ups to establish a fully automated pentesting simulation benchmark tailored for large language models. This approach fills a significant gap in traditional cybersecurity Q&A benchmarks and provides a realistic and rigorous standard for evaluating LLMs’ technical knowledge, reasoning capabilities, and practical utility in dynamic penetration testing scenarios. In our assessments, CIPHER achieved the best overall performance in providing accurate suggestion responses compared to other open-source penetration testing models of similar size and even larger state-of-the-art models like Llama 3 70B and Qwen1.5 72B Chat, particularly on insane-difficulty machine setups. This demonstrates that the current capabilities of general large language models (LLMs) are insufficient for effectively guiding users through the penetration testing process. We also discuss the potential for improvement through scaling and the development of better benchmarks using FARR Flow augmentation results. Full article
(This article belongs to the Section Internet of Things)
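The FARR Flow idea (structuring a write-up as Findings, Action, Reasoning, Results steps) can be pictured as a simple record type. A minimal sketch; the field names follow the acronym, while the class name, serialization, and sample values are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical sketch: one FARR step from an augmented pentest write-up.
from dataclasses import dataclass, asdict
import json

@dataclass
class FARRStep:
    findings: str   # what the tester has observed so far
    action: str     # the command or technique applied next
    reasoning: str  # why that action follows from the findings
    results: str    # what the action produced

step = FARRStep(
    findings="Port 80 open, outdated CMS banner",
    action="Enumerate installed plugin versions",
    reasoning="Outdated CMS plugins are a common initial foothold",
    results="Vulnerable plugin identified",
)
print(json.dumps(asdict(step), indent=2))
```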
16 pages, 690 KiB  
Perspective
Artificial Intelligence as a Replacement for Animal Experiments in Neurology: Potential, Progress, and Challenges
by Thorsten Rudroff
Neurol. Int. 2024, 16(4), 805-820; https://doi.org/10.3390/neurolint16040060 - 29 Jul 2024
Cited by 10 | Viewed by 7842
Abstract
Animal experimentation has long been a cornerstone of neurology research, but it faces growing scientific, ethical, and economic challenges. Advances in artificial intelligence (AI) are providing new opportunities to replace animal testing with more human-relevant and efficient methods. This article explores the potential of AI technologies such as brain organoids, computational models, and machine learning to revolutionize neurology research and reduce reliance on animal models. These approaches can better recapitulate human brain physiology, predict drug responses, and uncover novel insights into neurological disorders. They also offer faster, cheaper, and more ethical alternatives to animal experiments. Case studies demonstrate AI’s ability to accelerate drug discovery for Alzheimer’s, predict neurotoxicity, personalize treatments for Parkinson’s, and restore movement in paralysis. While challenges remain in validating and integrating these technologies, the scientific, economic, practical, and moral advantages are driving a paradigm shift towards AI-based, animal-free research in neurology. With continued investment and collaboration across sectors, AI promises to accelerate the development of safer and more effective therapies for neurological conditions while significantly reducing animal use. The path forward requires the ongoing development and validation of these technologies, but a future in which they largely replace animal experiments in neurology appears increasingly likely. This transition heralds a new era of more humane, human-relevant, and innovative brain research. Full article
(This article belongs to the Collection Advances in Neurodegenerative Diseases)
23 pages, 3766 KiB  
Article
DOxy: A Dissolved Oxygen Monitoring System
by Navid Shaghaghi, Frankie Fazlollahi, Tushar Shrivastav, Adam Graham, Jesse Mayer, Brian Liu, Gavin Jiang, Naveen Govindaraju, Sparsh Garg, Katherine Dunigan and Peter Ferguson
Sensors 2024, 24(10), 3253; https://doi.org/10.3390/s24103253 - 20 May 2024
Cited by 4 | Viewed by 2614
Abstract
Dissolved Oxygen (DO) in water enables marine life. Measuring the prevalence of DO in a body of water is an important part of sustainability efforts because low oxygen levels are a primary indicator of contamination and distress. Aquariums and aquaculture operations of all types therefore need near real-time dissolved oxygen monitoring, yet they spend heavily on purchasing and maintaining DO meters that are expensive, inefficient, or manually operated, in which case frequent manual readings must also be ensured, which is time consuming. A cost-effective and sustainable automated Internet of Things (IoT) system for this task is necessary and long overdue. DOxy is such an IoT system, under research and development at Santa Clara University’s Ethical, Pragmatic, and Intelligent Computing (EPIC) Laboratory. It utilizes cost-effective, accessible, and sustainable Sensing Units (SUs) to measure the dissolved oxygen levels present in bodies of water and send readings to a web-based cloud infrastructure for storage, analysis, and visualization. DOxy’s SUs are equipped with a High-sensitivity Pulse Oximeter meant for measuring oxygen levels in human blood, not water. Hence, a number of parallel readings of water samples were gathered with both the High-sensitivity Pulse Oximeter and a standard dissolved oxygen meter. Two approaches for relating the readings were then investigated. In the first, various machine learning models were trained and tested to produce a dynamic mapping of sensor readings to actual DO values. In the second, curve-fitting models were used to produce a conversion formula usable in the DOxy SUs offline. Both proved successful in producing accurate results. Full article
(This article belongs to the Section Smart Agriculture)
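The second approach described above (an offline conversion formula fitted from paired readings) is a standard curve-fitting task. A minimal sketch with scipy; the quadratic form and the sample readings are illustrative assumptions, not DOxy's actual calibration:

```python
# Hypothetical sketch: fit an offline conversion curve from pulse-oximeter
# readings to reference DO-meter values.
import numpy as np
from scipy.optimize import curve_fit

def conversion(x, a, b, c):
    """Illustrative quadratic mapping from raw sensor reading to DO (mg/L)."""
    return a * x**2 + b * x + c

oximeter = np.array([1200, 1500, 1800, 2100, 2400])   # hypothetical raw readings
do_meter = np.array([4.1, 5.0, 6.2, 7.1, 8.3])        # hypothetical reference DO (mg/L)

params, _ = curve_fit(conversion, oximeter, do_meter)
print("fitted coefficients:", params)
print("predicted DO at reading 2000:", conversion(2000, *params))
```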