
Search Results (64)

Search Parameters:
Keywords = Jupyter Notebook

16 pages, 4485 KB  
Article
A Modeling Approach to Aggregated Noise Effects of Offshore Wind Farms in the Canary and North Seas
by Ion Urtiaga-Chasco and Alonso Hernández-Guerra
J. Mar. Sci. Eng. 2026, 14(1), 2; https://doi.org/10.3390/jmse14010002 - 19 Dec 2025
Viewed by 386
Abstract
Offshore wind farms (OWFs) represent an increasingly important renewable energy source, yet their environmental impacts, particularly underwater noise, require systematic study. Estimating the operational source level (SL) of a single turbine and predicting sound pressure levels (SPLs) at sensitive locations can be challenging. Here, we integrate a turbine SL prediction algorithm with open-source propagation models in a Jupyter Notebook (version 7.4.7) to streamline aggregated SPL estimation for OWFs. Species-specific audiograms and weighting functions are included to assess potential biological impacts. The tool is applied to four planned OWFs, two in the Canary region and two in the Belgian and German North Seas, under conservative assumptions. Results indicate that at 10 m/s wind speed, a single turbine’s SL reaches 143 dB re 1 µPa in the one-third octave band centered at 160 Hz. Sensitivity analyses indicate that variations in wind speed can cause the operational source level at 160 Hz to increase by up to approximately 2 dB re 1 µPa²/Hz from the nominal value used in this study, while differences in sediment type can lead to transmission loss variations ranging from 0 to on the order of 100 dB, depending on bathymetry and range. Maximum SPLs of 112 dB re 1 µPa are predicted within OWFs, decreasing to ~50 dB re 1 µPa at ~100 km. Within OWFs, Low-Frequency (LF) cetaceans and Phocid Carnivores in Water (PCW) would likely perceive the noise; National Marine Fisheries Service (NMFS) marine mammals’ auditory-injury thresholds are not exceeded, but behavioral-harassment thresholds may be crossed. Outside the farms, only LF audiograms are crossed. In high-traffic North Sea regions, OWF noise is largely masked, whereas in lower-noise areas, such as the Canary Islands, it can exceed ambient levels, highlighting the importance of site-specific assessments, accurate ambient noise monitoring and propagation modeling for ecological impact evaluation. Full article
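The aggregated-noise arithmetic behind such a tool is incoherent (energy) summation of per-turbine received levels, with each received level given by the passive sonar relation RL = SL − TL. A minimal sketch with invented levels, not the paper's modeled values:

```python
import numpy as np

def received_level(sl_db, tl_db):
    """Received level from one turbine: RL = SL - TL (passive sonar relation)."""
    return sl_db - tl_db

def aggregate_spl(levels_db):
    """Incoherent (energy) summation of sound pressure levels given in dB."""
    levels_db = np.asarray(levels_db, dtype=float)
    return 10.0 * np.log10(np.sum(10.0 ** (levels_db / 10.0)))

# Hypothetical: three turbines with SL = 143 dB re 1 uPa and different
# transmission losses to a single receiver position.
rls = [received_level(143.0, tl) for tl in (43.0, 45.0, 48.0)]
total = aggregate_spl(rls)  # ~102.9 dB: a few dB above the loudest turbine
```

Two equal sources sum to only +3 dB (10·log10 2), which is why adding turbines raises the aggregate level slowly compared with changes in per-turbine SL.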

11 pages, 1595 KB  
Communication
PyMossFit: A Google Colab Option for Mössbauer Spectra Fitting
by Fabio D. Saccone
Spectrosc. J. 2025, 3(4), 29; https://doi.org/10.3390/spectroscj3040029 - 4 Nov 2025
Cited by 1 | Viewed by 561
Abstract
This article introduces the main characteristics of PyMossFit, a software package for Mössbauer spectra fitting, and explains how each part of the code works. Built on the Lmfit Python package, it is a robust data-fitting tool. Designed to run through Jupyter Notebook in the Google Colab cloud, it works across multiple devices and operating systems and allows the fitting procedure to be performed collaboratively among researchers. The software folds raw data using a discrete Fourier transform, offers data smoothing via a Savitzky–Golay algorithm, and uses a K-nearest-neighbor algorithm to identify the phases present by matching correlations of hyperfine parameters against a local database. Full article
(This article belongs to the Special Issue Advances in Spectroscopy Research)
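The Savitzky–Golay smoothing mentioned in the abstract fits a low-order polynomial in a sliding window and keeps its central value. Below is an illustrative NumPy re-implementation (not PyMossFit's code), applied to an invented noisy Lorentzian absorption line:

```python
import numpy as np

def savgol_smooth(y, window=11, order=3):
    """Savitzky-Golay smoothing: least-squares polynomial fit per window,
    evaluated at the window center (t = 0)."""
    half = window // 2
    t = np.arange(-half, half + 1)
    A = np.vander(t, order + 1, increasing=True)  # columns 1, t, t^2, ...
    weights = np.linalg.pinv(A)[0]                # row giving the fit value at t=0
    ypad = np.pad(y, half, mode="edge")
    return np.convolve(ypad, weights[::-1], mode="valid")

rng = np.random.default_rng(0)
v = np.linspace(-10.0, 10.0, 501)                 # velocity axis (mm/s)
clean = 1.0 - 0.3 / (1.0 + (v / 0.25) ** 2)       # single Lorentzian dip
noisy = clean + rng.normal(0.0, 0.01, v.size)
smooth = savgol_smooth(noisy)
```

Because the window fit is cubic, any cubic signal passes through unchanged away from the padded edges; only higher-frequency noise is attenuated, which preserves line shapes better than a moving average.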

23 pages, 2649 KB  
Article
RUSH: Rapid Remote Sensing Updates of Land Cover for Storm and Hurricane Forecast Models
by Chak Wa (Winston) Cheang, Kristin B. Byrd, Nicholas M. Enwright, Daniel D. Buscombe, Christopher R. Sherwood and Dean B. Gesch
Remote Sens. 2025, 17(18), 3165; https://doi.org/10.3390/rs17183165 - 12 Sep 2025
Viewed by 1168
Abstract
Coastal vegetated ecosystems, including tidal marshes, vegetated dunes, and shrub- and forest-dominated wetlands, can mitigate hurricane impacts such as coastal flooding and erosion by increasing surface roughness and reducing wave energy. Land cover maps can be used as input to improve simulations of surface roughness in advanced hydro-morphological models. Consequently, there is a need for efficient tools to develop up-to-date land cover maps that include the accurate distribution of vegetation types prior to an extreme storm. In response, we developed the RUSH tool (Rapid remote sensing Updates of land cover for Storm and Hurricane forecast models). RUSH delivers high-resolution maps of coastal vegetation for near-real-time or historical conditions via a Jupyter Notebook application and a graphical user interface (GUI). The application generates 3 m spatial resolution land cover maps with classes relevant to coastal settings, especially along mainland beaches, headlands, and barrier islands, as follows: (1) open water; (2) emergent wetlands; (3) dune grass; (4) woody wetlands; and (5) bare ground. These maps are developed by applying one of two seasonal random-forest machine learning models to Planet Labs SuperDove multispectral imagery. Cool Season and Warm Season Models were trained on 665 and 594 reference points, respectively, located across study regions in the North Carolina Outer Banks, the Mississippi Delta in Louisiana, and a portion of the Florida Gulf Coast near Apalachicola. Cool Season and Warm Season Models were tested with 666 and 595 independent points, with an overall accuracy of 93% and 94%, respectively. The Jupyter Notebook application provides users with a flexible platform for customization for advanced users, whereas the GUI, designed with user-experience feedback, provides non-experts access to remote sensing capabilities. This application can also be used for long-term coastal geomorphic and ecosystem change assessments. Full article

33 pages, 21287 KB  
Article
Interactive, Shallow Machine Learning-Based Semantic Segmentation of 2D and 3D Geophysical Data from Archaeological Sites
by Lieven Verdonck, Michel Dabas and Marc Bui
Remote Sens. 2025, 17(17), 3092; https://doi.org/10.3390/rs17173092 - 4 Sep 2025
Viewed by 1636
Abstract
In recent decades, technological developments in archaeological geophysics have led to growing data volumes, so that an important bottleneck is now at the stage of data interpretation. The manual delineation and classification of anomalies are time-consuming, and different methods for (semi-)automatic image segmentation have been proposed, based on explicitly formulated rulesets or deep convolutional neural networks (DCNNs). So far, these have not been used widely in archaeological geophysics because of the complexity of the segmentation task (due to the low contrast between archaeological structures and background and the low predictability of the targets). Techniques based on shallow machine learning (e.g., random forests, RFs) have been explored very little in archaeological geophysics, although they are less case-specific than most rule-based methods, do not require large training sets as is the case for DCNNs, and can easily handle 3D data. In this paper, we show their potential for geophysical data analysis. For the classification on the pixel level, we use ilastik, an open-source segmentation tool developed in medical imaging. Algorithms for object classification, manual reclassification, post-processing, vectorisation, and georeferencing were brought together in a Jupyter Notebook, available on GitHub (version 7.3.2). To assess the accuracy of the RF classification applied to geophysical datasets, we compare it with manual interpretation. A quantitative evaluation using the mean intersection over union metric results in scores of ~60%, which only slightly increases after the manual correction of the RF classification results. Remarkably, a similar score results from the comparison between independent manual interpretations. This observation illustrates that quantitative metrics are not a panacea for evaluating machine-generated geophysical data interpretation in archaeology, which is characterised by a significant degree of uncertainty. 
It also raises the question of how the semantic segmentation of geophysical data (whether carried out manually or with the aid of machine learning) can best be evaluated. Full article

18 pages, 4429 KB  
Article
Integrating Unsupervised Land Cover Analysis with Socioeconomic Change for Post-Industrial Cities: A Case Study of Ponca City, Oklahoma
by Jaryd Hinch and Joni Downs
Remote Sens. 2025, 17(17), 2957; https://doi.org/10.3390/rs17172957 - 26 Aug 2025
Viewed by 879
Abstract
Urban centers shaped by industrial histories often exhibit complex patterns of land cover change that are not well-captured by standard classification techniques. This study investigates post-industrial urban change in Ponca City, Oklahoma, using remote sensing, unsupervised machine learning, and socioeconomic contextualization. Using a Jupyter Notebook version 7.0.8 environment for Python libraries, Landsat imagery from 1990 to 2020 was analyzed to detect shifts in land cover patterns across a relatively small, heterogeneous landscape. Principal component analysis (PCA) was applied to reduce dimensionality and enhance pixel distinction across multiband reflectance data. Socioeconomic data and historical context were incorporated to interpret changes in land use alongside patterns of industrial reduction and urban redevelopment. Results revealed changes in five distinct land cover classes of urban, vegetative, and industrial land uses, with observable trends aligning with key periods of economic and infrastructural transition. The trends also aligned with socioeconomic changes of the city, with a larger reduction in industrial and commercial land cover than in residential and vegetation cover types. These findings demonstrate the utility of machine learning classification in small-scale, heterogeneous environments and provide a replicable methodological framework for smaller city municipalities to monitor urban change. Full article
(This article belongs to the Special Issue Remote Sensing Measurements of Land Use and Land Cover)
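The PCA step described above can be sketched in plain NumPy via the SVD of mean-centered data; the six "bands" here are synthetic stand-ins for multiband reflectance (the weights and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                       # shared brightness signal
weights = np.array([1.0, 0.9, 0.8, 0.5, 0.3, 0.1])
X = latent * weights + 0.05 * rng.normal(size=(200, 6))  # 200 pixels x 6 bands

Xc = X - X.mean(axis=0)                                  # center each band
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)                          # variance ratio per component
scores = Xc @ Vt[:2].T                                   # first two PCs per pixel
```

With highly correlated bands, the first component captures most of the variance, which is what makes the reduced representation effective for separating pixels before clustering.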

19 pages, 1317 KB  
Article
Clinical, Immunohistochemical, and Inflammatory Profiles in Colorectal Cancer: The Impact of MMR Deficiency
by Vlad Alexandru Ionescu, Gina Gheorghe, Ioana Alexandra Baban, Alexandru Barbu, Ninel Iacobus Antonie, Teodor Florin Georgescu, Razvan Matei Bratu, Carmen Cristina Diaconu, Cristina Mambet, Coralia Bleotu, Valentin Enache and Camelia Cristina Diaconu
Diagnostics 2025, 15(17), 2141; https://doi.org/10.3390/diagnostics15172141 - 25 Aug 2025
Cited by 1 | Viewed by 1249
Abstract
Background/Objectives: Mismatch repair (MMR) deficiency assessment has proven to be a valuable tool for prognostic evaluation and therapeutic management guidance in patients with colorectal cancer (CRC). Our study aimed to investigate the associations between MMR deficiency and a range of clinicopathological parameters. Methods: We conducted a retrospective observational study including 264 patients diagnosed with CRC, for whom immunohistochemical (IHC) data were available. Statistical analysis was performed using the Python 3.12.7 programming language within the Jupyter Notebook environment (Anaconda distribution). Results: MMR deficiency was identified in 18.18% of patients. It was significantly associated with younger age (<50 years), female sex, right-sided tumor location, poor tumor differentiation (G3), smoking, and loss of CDX2 expression (p < 0.001). MLH1 and PMS2 were the most frequently affected proteins, with concurrent loss in 77.08% of MMR-deficient cases. Loss of MLH1 expression correlated with female sex (p = 0.004), right-sided location (p < 0.001), poor differentiation (p < 0.001), and loss of CDX2 expression (p < 0.001). Additionally, the loss of PMS2 expression was associated with female sex (p = 0.015), right-sided tumor location (p = 0.003), and poor differentiation (p < 0.001). No significant associations were identified between MMR status and tumor stage, histological subtype, PLR, or NLR values. Conclusions: Gaining deeper insights into the clinical relevance of MMR status in CRC could contribute to improved testing rates and support the design of tailored management strategies that address the specific biological features of these tumors. Full article
(This article belongs to the Special Issue Advances in the Diagnosis of Gastrointestinal Diseases—2nd Edition)

27 pages, 9197 KB  
Data Descriptor
A Six-Year, Spatiotemporally Comprehensive Dataset and Data Retrieval Tool for Analyzing Chlorophyll-a, Turbidity, and Temperature in Utah Lake Using Sentinel and MODIS Imagery
by Kaylee B. Tanner, Anna C. Cardall and Gustavious P. Williams
Data 2025, 10(8), 128; https://doi.org/10.3390/data10080128 - 13 Aug 2025
Viewed by 1262
Abstract
Data from earth observation satellites provide unique and valuable information about water quality conditions in freshwater lakes but require significant processing before they can be used, even with the use of tools like Google Earth Engine. We use imagery from Sentinel 2 and MODIS and in situ data from the State of Utah Ambient Water Quality Management System (AQWMS) database to develop models and to generate a highly accessible, easy-to-use CSV file of chlorophyll-a (which is an indicator of algal biomass), turbidity, and water temperature measurements on Utah Lake. From a collection of 937 Sentinel 2 images spanning the period from January 2019 to May 2025, we generated 262,081 estimates each of chlorophyll-a and turbidity, with an additional 1,140,777 data points interpolated from those estimates to provide a dataset with a consistent time step. From a collection of 2333 MODIS images spanning the same time period, we extracted 1,390,800 measurements each of daytime water surface temperature and nighttime water surface temperature and interpolated or imputed an additional 12,058 data points from those estimates. We interpolated the data using piecewise cubic Hermite interpolation polynomials to preserve the original distribution of the data and provide the most accurate estimates of measurements between observations. We demonstrate the processing steps required to extract usable, accurate estimates of these three water quality parameters from satellite imagery and format them for analysis. We include summary statistics and charts for the resulting dataset, which show the usefulness of this data for informing Utah Lake management issues. We include the Jupyter Notebook with the implemented processing steps and the formatted CSV file of data as supplemental materials. The Jupyter Notebook can be used to update the Utah Lake data or can be easily modified to generate similar data for other waterbodies. 
We provide this method, tool set, and data to make remotely sensed water quality data more accessible to researchers, water managers, and others interested in Utah Lake and to facilitate the use of satellite data for those interested in applying remote sensing techniques to other waterbodies. Full article
(This article belongs to the Collection Modern Geophysical and Climate Data Analysis: Tools and Methods)
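The piecewise cubic Hermite (PCHIP) interpolation used to fill gaps between observations can be sketched as below. This is a simplified NumPy re-implementation: interior slopes use the standard weighted harmonic mean, but the end slopes are plain secants rather than SciPy's shape-preserving endpoint rule, and the sample times and values are invented:

```python
import numpy as np

def pchip_slopes(x, y):
    """Shape-preserving slopes: weighted harmonic mean of adjacent secants."""
    h = np.diff(x)
    delta = np.diff(y) / h
    d = np.zeros_like(y)
    for i in range(1, len(y) - 1):
        if delta[i - 1] * delta[i] > 0:     # secants agree in sign
            w1 = 2 * h[i] + h[i - 1]
            w2 = h[i] + 2 * h[i - 1]
            d[i] = (w1 + w2) / (w1 / delta[i - 1] + w2 / delta[i])
    d[0], d[-1] = delta[0], delta[-1]       # simple secant end slopes
    return d

def pchip_eval(x, y, xq):
    """Evaluate the cubic Hermite interpolant at query points xq."""
    d = pchip_slopes(x, y)
    i = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
    h = x[i + 1] - x[i]
    t = (xq - x[i]) / h
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * y[i] + h10 * h * d[i] + h01 * y[i + 1] + h11 * h * d[i + 1]

x = np.array([0.0, 1.0, 2.0, 4.0, 7.0])    # observation days (hypothetical)
y = np.array([0.0, 1.0, 1.5, 1.6, 4.0])    # measurements (hypothetical)
filled = pchip_eval(x, y, np.linspace(0.0, 7.0, 200))
```

Unlike an unconstrained cubic spline, this interpolant does not overshoot: monotone input data yields a monotone curve, which is why the authors chose it to preserve the distribution of the original measurements.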

24 pages, 1467 KB  
Article
Introducing Machine Learning in Teaching Quantum Mechanics
by M. K. Pawelkiewicz, Filippo Gatti, Didier Clouteau, Viatcheslav Kokoouline and Mehdi Adrien Ayouz
Atoms 2025, 13(7), 66; https://doi.org/10.3390/atoms13070066 - 8 Jul 2025
Viewed by 1143
Abstract
In this article, we describe an approach to teaching introductory quantum mechanics and machine learning techniques. This approach combines several key concepts from both fields. Specifically, it demonstrates solving the Schrödinger equation using the discrete-variable representation (DVR) technique, as well as the architecture and training of neural network models. To illustrate this approach, a Python-based Jupyter notebook is developed. This notebook can be used for self-learning or for learning with an instructor. Furthermore, it can serve as a toolbox for demonstrating individual concepts in quantum mechanics and machine learning and for conducting small research projects in these areas. Full article
(This article belongs to the Special Issue Artificial Intelligence for Quantum Sciences)
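The core of such a notebook, representing the Hamiltonian on a grid and diagonalizing it, can be illustrated with the simplest grid discretization (a central finite-difference Laplacian; an actual DVR uses a different kinetic-energy matrix) applied to the harmonic oscillator in atomic units:

```python
import numpy as np

n, L = 501, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

# Kinetic energy -1/2 d^2/dx^2 via central finite differences
T = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / (2 * dx**2)
V = np.diag(0.5 * x**2)            # harmonic potential, exact E_n = n + 1/2
E = np.linalg.eigvalsh(T + V)[:3]  # ~ [0.5, 1.5, 2.5]
```

Refining the grid (smaller dx) converges these eigenvalues quadratically toward the exact ladder n + 1/2, a convergence study that works well as a classroom exercise.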

21 pages, 2614 KB  
Review
Exploring the Applications of Lemna minor in Animal Feed: A Review Assisted by Artificial Intelligence
by Helmut Bethancourt-Dalmasí, Manuel Viuda-Martos, Raquel Lucas-González, Fernando Borrás and Juana Fernández-López
Appl. Sci. 2025, 15(12), 6732; https://doi.org/10.3390/app15126732 - 16 Jun 2025
Viewed by 3379
Abstract
The work aims to apply cheap and widely accessible tools based on artificial intelligence to analyze, group, and categorize a large amount of available research literature (from a massive bibliographic search) on the use of Lemna minor for animal feed, not only comprehensively and objectively, but also in a more effective and less time-consuming way. In addition, a comprehensive and critical summary was conducted to highlight recent applications of L. minor in animal feed. The Scopus database was used for the original bibliographic search. Then, a freely available online tool, Jupyter Notebook on Google Colab, was applied to cluster the large volume of bibliographic data (1432 papers) obtained in the basic search, reducing it to only 148 papers. These papers were reviewed in a traditional way, yielding relevant information about L. minor production, nutritional value, composition, and its application as animal feed. In this sense, the most successful applications were for fish and poultry feeding, reaching inclusion levels of 15–20% in fish and 5–15% in poultry. This is of great interest because of the expected increase in prices of conventional protein sources for animal feed. Full article

27 pages, 4562 KB  
Article
Text Mining for Consumers’ Sentiment Tendency and Strategies for Promoting Cross-Border E-Commerce Marketing Using Consumers’ Online Review Data
by Changting Liu, Tao Chen, Qiang Pu and Ying Jin
J. Theor. Appl. Electron. Commer. Res. 2025, 20(2), 125; https://doi.org/10.3390/jtaer20020125 - 2 Jun 2025
Cited by 5 | Viewed by 3643
Abstract
With the rapid advancement of information technology and the increasing maturity of online shopping platforms, cross-border shopping has experienced rapid growth. Online consumer reviews, as an essential part of the online shopping process, have become a vital way for merchants to obtain user feedback and gain insights into market demands. The research employs Python tools (Jupyter Notebook 7.0.8) to analyze 14,078 review texts from the top four best-selling products in a certain product category on a certain cross-border e-commerce platform. By applying social network analysis, constructing LDA (Latent Dirichlet Allocation) topic models, and establishing LSTM (Long Short-Term Memory) sentiment classification models, the topics and sentiment distribution of the review set are obtained, and the evolution trends of topics and sentiments are analyzed across different periods. The research finds that in the overall review set, consumers’ focus is concentrated on five aspects: functional features, quality and cost-effectiveness, usage effectiveness, post-purchase support, and design and assembly. In terms of changes in review sentiments, the proportion of negative sentiment remains relatively high for the functional features and usage effectiveness topics. Given the above, this study integrates the 4P and 4C theories to propose strategies for enhancing the marketing capabilities of cross-border e-commerce in the context of digital cross-border operations, providing theoretical and practical marketing insights for cross-border e-commerce enterprises. Full article
(This article belongs to the Special Issue Human–Technology Synergies in AI-Driven E-Commerce Environments)
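One building block named above, the keyword network behind the social network analysis, can be sketched with the standard library alone; the three reviews here are invented:

```python
from collections import Counter
from itertools import combinations

reviews = [
    "battery quality great value",
    "assembly easy design solid",
    "battery quality poor support slow",
]

# Edge weights of a keyword co-occurrence network: two words are linked
# whenever they appear in the same review.
edges = Counter()
for review in reviews:
    words = sorted(set(review.split()))
    for a, b in combinations(words, 2):
        edges[(a, b)] += 1

top = edges.most_common(1)[0]  # (('battery', 'quality'), 2)
```

Real pipelines tokenize and filter stop words first; the resulting weighted graph is what centrality and community measures are then computed on.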

17 pages, 1852 KB  
Article
A Tutorial Toolbox to Simplify Bioinformatics and Biostatistics Analyses of Microbial Omics Data in an Island Context
by Isaure Quétel, Sourakhata Tirera, Damien Cazenave, Nina Allouch, Chloé Baum, Yann Reynaud, Degrâce Batantou Mabandza, Virginie Nerrière, Serge Vedy, Matthieu Pot, Sébastien Breurec, Anne Lavergne, Séverine Ferdinand, Vincent Guerlais and David Couvin
BioMedInformatics 2025, 5(2), 27; https://doi.org/10.3390/biomedinformatics5020027 - 19 May 2025
Viewed by 2998
Abstract
Background: Bioinformatics is increasingly used across scientific work, and the large amounts of heterogeneous data now being generated are difficult to interpret and analyze effectively. Several software tools have been developed to facilitate the handling and analysis of biological data, based on specific needs. Methods: The Galaxy web platform is one such tool, offering users free access and facilitating the use of thousands of tools. Others, such as Bioconda or Jupyter Notebook, ease the installation of tools and their dependencies, while RStudio provides a powerful interface for data analysis and statistics with the R programming language. Results: The aim of this study is to provide the scientific community with guides for performing bioinformatics and biostatistical analyses in a simpler manner. With this work, we also try to democratize well-documented software tools so that they suit both bioinformaticians and non-bioinformaticians. We believe that user-friendly guides and concrete, real-life examples will give end-users suitable and easy-to-use methods for their bioinformatics analysis needs. Tutorials and usage examples are available on our dedicated GitHub repository. Conclusions: These tutorials and examples (in English and/or French) can serve as pedagogical tools to promote bioinformatics analysis and offer potential solutions to several bioinformatics needs. Special emphasis is placed on microbial omics data analysis. Full article

17 pages, 4131 KB  
Article
Enhancing Malignant Lymph Node Detection in Ultrasound Imaging: A Comparison Between the Artificial Intelligence Accuracy, Dice Similarity Coefficient and Intersection over Union
by Iulian-Alexandru Taciuc, Mihai Dumitru, Andreea Marinescu, Crenguta Serboiu, Gabriela Musat, Mirela Gherghe, Adrian Costache and Daniela Vrinceanu
J. Mind Med. Sci. 2025, 12(1), 29; https://doi.org/10.3390/jmms12010029 - 4 May 2025
Cited by 2 | Viewed by 2361
Abstract
Background: The accurate identification of malignant lymph nodes in cervical ultrasound images is crucial for early diagnosis and treatment planning. Traditional evaluation metrics, such as accuracy and the Dice Similarity Coefficient (DSC), often fail to provide a realistic assessment of segmentation performance, as they do not account for partial overlaps between predictions and ground truth. This study addresses this gap by introducing the Intersection over Union (IoU) as an additional metric to offer a more comprehensive evaluation of model performance. Specifically, we aimed to develop a convolutional neural network (CNN) capable of detecting suspicious malignant lymph nodes and assess its effectiveness using both conventional and IoU-based performance metrics. Methods: A dataset consisting of 992 malignant lymph node images was extracted from 166 cervical ultrasound scans and labeled using the ImgLab annotation tool. A CNN was developed using Python, Keras, and TensorFlow and employed within the Jupyter Notebook environment. The network architecture consists of four neural layers trained to distinguish malignant lymph nodes. Results: The CNN achieved a training accuracy of 97% and a validation accuracy of 99%. The DSC score was 0.984, indicating a strong segmentation performance, although it was limited to detecting malignant lymph nodes in positive cases. An IoU evaluation applied to the test images revealed an average overlap of 74% between the ground-truth labels and model predictions, offering a more nuanced measure of the segmentation accuracy. Conclusions: The CNN demonstrated high accuracy and DSC scores, confirming its effectiveness in identifying malignant lymph nodes. However, the IoU values, while lower than conventional accuracy metrics, provided a more realistic evaluation of the model’s performance, highlighting areas for potential improvement in segmentation accuracy. 
This study underscores the importance of using IoU alongside traditional metrics to obtain a more reliable assessment of deep learning-based medical image analysis models. Full article
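The relationship the study leans on is a simple identity: for a given pair of binary masks, IoU = DSC / (2 − DSC), so IoU is always the stricter (lower) score. A minimal NumPy check on toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient for two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def iou(a, b):
    """Intersection over Union for two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

a = np.zeros((10, 10), bool); a[2:8, 2:8] = True    # 36-pixel square
b = np.zeros((10, 10), bool); b[4:10, 4:10] = True  # 36 pixels, 16 overlapping

d = dice(a, b)   # 32/72 ~ 0.444
j = iou(a, b)    # 16/56 ~ 0.286
```

For a single pair of masks, the two metrics are locked together by this identity; reported aggregate values can diverge when the metrics are averaged over different subsets of images, as in the abstract above.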

18 pages, 1686 KB  
Article
Comparative Analysis of Machine Learning and Deep Learning Models for Lung Cancer Prediction Based on Symptomatic and Lifestyle Features
by Bireswar Dutta
Appl. Sci. 2025, 15(8), 4507; https://doi.org/10.3390/app15084507 - 19 Apr 2025
Cited by 2 | Viewed by 2741
Abstract
Lung cancer remains a leading cause of global mortality, with early detection being critical for improving the patient survival rates. However, applying machine learning and deep learning effectively for lung cancer prediction using symptomatic and lifestyle data requires the careful consideration of feature selection and model optimization, which is not consistently addressed in the existing research. This research addresses this gap by systematically evaluating and comparing the predictive efficacy of several machine learning and deep learning models, employing rigorous data preprocessing, including feature selection with Pearson’s correlation, outlier removal, and normalization, on a patient symptom and lifestyle factor dataset from Kaggle. Machine learning classifiers, including Decision Trees, K-Nearest Neighbors, Random Forest, Naïve Bayes, AdaBoost, Logistic Regression, and Support Vector Machines, were implemented using Weka simultaneously with neural network models with 1, 2, and 3 hidden layers, which were developed in Python within a Jupyter Notebook environment. The model performance was assessed using K-fold cross-validation and 80/20 train/test splitting. The results highlight the importance of feature selection for enhancing the model accuracy and demonstrate that the single-hidden-layer neural network, trained for 800 epochs, achieved a prediction accuracy of 92.86%, outperforming the machine learning models. This study contributes to developing more effective computational methods for early lung cancer detection, ultimately supporting improved patient outcomes. Full article
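The K-fold cross-validation used to assess the models partitions the data into K folds, each serving once as the test set; a generic NumPy sketch of the index splitting (not the paper's Weka or Python setup):

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Yield (train, test) index arrays for k-fold cross-validation."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, folds[i]

splits = list(kfold_indices(10, 5))  # 5 folds over 10 samples
```

Each sample appears in exactly one test fold, so the K accuracy scores average over every observation without train/test leakage.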

20 pages, 6745 KB  
Article
A Proposed Method of Automating Data Processing for Analysing Data Produced from Eye Tracking and Galvanic Skin Response
by Javier Sáez-García, María Consuelo Sáiz-Manzanares and Raúl Marticorena-Sánchez
Computers 2024, 13(11), 289; https://doi.org/10.3390/computers13110289 - 8 Nov 2024
Cited by 1 | Viewed by 2338
Abstract
The use of eye tracking technology, together with other physiological measurements such as psychogalvanic skin response (GSR) and electroencephalographic (EEG) recordings, provides researchers with information about users’ physiological behavioural responses during their learning process in different types of tasks. These devices produce a large volume of data. However, in order to analyse these records, researchers have to process and analyse them using complex statistical and/or machine learning techniques (supervised or unsupervised) that are usually not incorporated into the devices. The objectives of this study were (1) to propose a procedure for processing the extracted data; (2) to address the potential technical challenges and difficulties in processing logs in integrated multichannel technology; and (3) to offer solutions for automating data processing and analysis. A Notebook in Jupyter is proposed with the steps for importing and processing data, as well as for using supervised and unsupervised machine learning algorithms. Full article
(This article belongs to the Special Issue Smart Learning Environments)
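The import-and-summarize stage such a notebook automates can be sketched with the standard library; the column names, units, and values below are hypothetical, not the devices' actual log format:

```python
import csv
import io
import statistics

# Hypothetical multichannel log: time (s), fixation duration (ms), GSR (uS)
raw = """t,fix_ms,gsr
0.0,210,1.02
0.5,180,1.05
1.0,640,1.31
1.5,220,1.08
"""

rows = list(csv.DictReader(io.StringIO(raw)))
fix = [float(r["fix_ms"]) for r in rows]
gsr = [float(r["gsr"]) for r in rows]

summary = {
    "mean_fix_ms": statistics.fmean(fix),  # mean fixation duration
    "max_gsr": max(gsr),                   # peak arousal proxy
}
```

Tabular features like these (mean fixation duration, GSR peaks) are what then feed the supervised or unsupervised algorithms the authors describe.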

21 pages, 4763 KB  
Article
MCMC Methods for Parameter Estimation in ODE Systems for CAR-T Cell Cancer Therapy
by Elia Antonini, Gang Mu, Sara Sansaloni-Pastor, Vishal Varma and Ryme Kabak
Cancers 2024, 16(18), 3132; https://doi.org/10.3390/cancers16183132 - 11 Sep 2024
Cited by 2 | Viewed by 2776
Abstract
Chimeric antigen receptor (CAR)-T cell therapy represents a breakthrough in treating resistant hematologic cancers. It is based on genetically modifying T cells transferred from the patient or a donor. Although its implementation has increased over the last few years, CAR-T has many challenges to be addressed, for instance, the associated severe toxicities, such as cytokine release syndrome. To model CAR-T cell dynamics, focusing on their proliferation and cytotoxic activity, we developed a mathematical framework using ordinary differential equations (ODEs) with Bayesian parameter estimation. Bayesian statistics were used to estimate model parameters through Monte Carlo integration, Bayesian inference, and Markov chain Monte Carlo (MCMC) methods. This paper explores MCMC methods, including the Metropolis–Hastings algorithm and DEMetropolis and DEMetropolisZ algorithms, which integrate differential evolution to enhance convergence rates. The theoretical findings and algorithms were validated using Python and Jupyter Notebooks. A real medical dataset of CAR-T cell therapy was analyzed, employing optimization algorithms to fit the mathematical model to the data, with the PyMC library facilitating Bayesian analysis. The results demonstrated that our model accurately captured the key dynamics of CAR-T cell therapy. This conclusion underscores the potential of parameter estimation to improve the understanding and effectiveness of CAR-T cell therapy in clinical settings. Full article
(This article belongs to the Section Cancer Immunology and Immunotherapy)
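The Metropolis–Hastings step at the heart of these samplers can be sketched in a few lines of NumPy; here a toy standard-normal log-posterior stands in for the ODE-model likelihood so the sampler's output is checkable:

```python
import numpy as np

def log_post(theta):
    """Toy log-posterior (standard normal); a real run would evaluate the
    ODE model against the observed data here."""
    return -0.5 * theta**2

rng = np.random.default_rng(1)
theta, chain = 0.0, []
for _ in range(20_000):
    prop = theta + rng.normal(0.0, 1.0)              # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                                 # accept; otherwise keep current
    chain.append(theta)

post = np.array(chain[5_000:])                       # discard burn-in
```

DEMetropolis-style variants replace the fixed random-walk proposal with differences of other chains' states, which is what accelerates convergence on correlated ODE parameters.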
