Search Results (14)

Search Parameters:
Keywords = nature F0 contours

12 pages, 1346 KiB  
Article
A Language Vision Model Approach for Automated Tumor Contouring in Radiation Oncology
by Yi Luo, Hamed Hooshangnejad, Xue Feng, Gaofeng Huang, Xiaojian Chen, Rui Zhang, Quan Chen, Wil Ngwa and Kai Ding
Bioengineering 2025, 12(8), 835; https://doi.org/10.3390/bioengineering12080835 - 31 Jul 2025
Viewed by 239
Abstract
Background: Lung cancer ranks as the leading cause of cancer-related mortality worldwide. The complexity of tumor delineation, crucial for radiation therapy, requires expertise often unavailable in resource-limited settings. Artificial Intelligence (AI), particularly with advancements in deep learning (DL) and natural language processing (NLP), offers potential solutions yet is challenged by high false positive rates. Purpose: The Oncology Contouring Copilot (OCC) system is developed to leverage oncologist expertise for precise tumor contouring using textual descriptions, aiming to increase the efficiency of oncological workflows by combining the strengths of AI with human oversight. Methods: Our OCC system initially identifies nodule candidates from CT scans. Employing Language Vision Models (LVMs) such as GPT-4V, OCC then effectively reduces false positives with clinical descriptive texts, merging textual and visual data to automate tumor delineation and elevate the quality of oncology care by incorporating knowledge from experienced domain experts. Results: The deployment of the OCC system resulted in a 35.0% reduction in the false discovery rate, a 72.4% decrease in false positives per scan, and an F1-score of 0.652 across our dataset for unbiased evaluation. Conclusions: OCC represents a significant advance in oncology care, particularly through the use of the latest LVMs, improving contouring results by (1) streamlining oncology treatment workflows by optimizing tumor delineation and reducing manual processes; (2) offering a scalable and intuitive framework to reduce false positives in radiotherapy planning using LVMs; (3) introducing novel medical language vision prompt techniques to minimize LVM hallucinations, validated with an ablation study; and (4) conducting a comparative analysis of LVMs, highlighting their potential in addressing medical language vision challenges. Full article
(This article belongs to the Special Issue Novel Imaging Techniques in Radiotherapy)
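
To make the reported evaluation concrete, here is a minimal sketch of how the metrics quoted in the abstract (false discovery rate, false positives per scan, and F1-score) can be computed from per-scan detection counts; the `ScanCounts` structure and the example numbers are hypothetical illustrations, not data from the study.

```python
from dataclasses import dataclass

@dataclass
class ScanCounts:
    """Per-scan detection outcomes (hypothetical structure)."""
    tp: int  # true-positive nodule detections
    fp: int  # false-positive detections
    fn: int  # missed nodules

def detection_metrics(scans):
    """Aggregate false discovery rate, false positives per scan, and F1."""
    tp = sum(s.tp for s in scans)
    fp = sum(s.fp for s in scans)
    fn = sum(s.fn for s in scans)
    fdr = fp / (tp + fp) if tp + fp else 0.0       # false discovery rate
    fp_per_scan = fp / len(scans)                   # false positives per scan
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return fdr, fp_per_scan, f1

# Example: three scans, with candidate detections before and after an
# LVM-based false-positive filtering step (numbers are made up).
baseline = [ScanCounts(2, 6, 1), ScanCounts(1, 4, 0), ScanCounts(3, 5, 1)]
filtered = [ScanCounts(2, 1, 1), ScanCounts(1, 1, 0), ScanCounts(3, 2, 1)]
print(detection_metrics(baseline))
print(detection_metrics(filtered))
```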

19 pages, 3130 KiB  
Article
Deep Learning-Based Instance Segmentation of Galloping High-Speed Railway Overhead Contact System Conductors in Video Images
by Xiaotong Yao, Huayu Yuan, Shanpeng Zhao, Wei Tian, Dongzhao Han, Xiaoping Li, Feng Wang and Sihua Wang
Sensors 2025, 25(15), 4714; https://doi.org/10.3390/s25154714 - 30 Jul 2025
Viewed by 234
Abstract
The conductors of high-speed railway OCSs (Overhead Contact Systems) are susceptible to conductor galloping due to the impact of natural elements such as strong winds, rain, and snow, resulting in conductor fatigue damage and significantly compromising train operational safety. Consequently, monitoring the galloping status of conductors is crucial, and instance segmentation techniques, by delineating the pixel-level contours of each conductor, can significantly aid in the identification and study of galloping phenomena. This work expands upon the YOLO11-seg model and introduces an instance segmentation approach for galloping video and image sensor data of OCS conductors. The algorithm, designed for the stripe-like distribution of OCS conductors in the data, employs four-direction Sobel filters to extract edge features in horizontal, vertical, and diagonal orientations. These features are subsequently integrated with the original convolutional branch to form the FDSE (Four Direction Sobel Enhancement) module. It integrates the ECA (Efficient Channel Attention) mechanism for the adaptive augmentation of conductor characteristics and utilizes the FL (Focal Loss) function to mitigate the class-imbalance issue between positive and negative samples, hence enhancing the model’s sensitivity to conductors. Consequently, segmentation outcomes from neighboring frames are utilized, and mask-difference analysis is performed to autonomously detect conductor galloping locations, emphasizing their contours for the clear depiction of galloping characteristics. Experimental results demonstrate that the enhanced YOLO11-seg model achieves 85.38% precision, 77.30% recall, 84.25% AP@0.5, 81.14% F1-score, and a real-time processing speed of 44.78 FPS. When combined with the galloping visualization module, it can issue real-time alerts of conductor galloping anomalies, providing robust technical support for railway OCS safety monitoring. Full article
(This article belongs to the Section Industrial Sensors)
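
As a rough illustration of the four-direction edge extraction described above, the sketch below applies Sobel kernels in horizontal, vertical, and two diagonal orientations with NumPy/SciPy; the diagonal kernels are common Sobel variants and are assumptions here, since the paper's exact FDSE filters are not reproduced in the abstract.

```python
import numpy as np
from scipy.ndimage import convolve

# Standard Sobel kernels in four orientations; the diagonal variants are
# common extensions, not necessarily the exact kernels used in the paper.
SOBEL_KERNELS = {
    "horizontal": np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),
    "vertical":   np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float),
    "diag_45":    np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], float),
    "diag_135":   np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]], float),
}

def four_direction_edges(gray: np.ndarray) -> np.ndarray:
    """Stack edge responses from the four Sobel orientations into (H, W, 4)."""
    responses = [np.abs(convolve(gray, k, mode="reflect"))
                 for k in SOBEL_KERNELS.values()]
    return np.stack(responses, axis=-1)

# Example on a synthetic image with a thin bright "conductor" stripe.
img = np.zeros((64, 64))
img[:, 30:32] = 1.0
edges = four_direction_edges(img)
print(edges.shape, edges.max())
```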

21 pages, 4777 KiB  
Article
Harnessing Semantic and Trajectory Analysis for Real-Time Pedestrian Panic Detection in Crowded Micro-Road Networks
by Rongyong Zhao, Lingchen Han, Yuxin Cai, Bingyu Wei, Arifur Rahman, Cuiling Li and Yunlong Ma
Appl. Sci. 2025, 15(10), 5394; https://doi.org/10.3390/app15105394 - 12 May 2025
Viewed by 415
Abstract
Pedestrian panic behavior is a primary cause of overcrowding and stampede accidents in public micro-road network areas with high pedestrian density. However, reliably detecting such behaviors remains challenging due to their inherent complexity, variability, and stochastic nature. Current detection models often rely on single-modality features, which limits their effectiveness in complex and dynamic crowd scenarios. To overcome these limitations, this study proposes a contour-driven multimodal framework that first employs a CNN (CDNet) to estimate density maps and, by analyzing steep contour gradients, automatically delineates a candidate panic zone. Within these potential panic zones, pedestrian trajectories are analyzed through LSTM networks to capture irregular movements, such as counterflow and nonlinear wandering behaviors. Concurrently, semantic recognition based on Transformer models is utilized to identify verbal distress cues extracted through Baidu AI’s real-time speech-to-text conversion. The three embeddings are fused through a lightweight attention-enhanced MLP, enabling end-to-end inference at 40 FPS on a single GPU. To evaluate branch robustness under streaming conditions, the UCF Crowd dataset (150 videos without panic labels) is processed frame-by-frame at 25 FPS solely for density assessment, whereas full panic detection is validated on 30 real Itaewon-Stampede videos and 160 SUMO/Unity simulated emergencies that include explicit panic annotations. The proposed system achieves 91.7% accuracy and 88.2% F1 on the Itaewon set, outperforming all single- or dual-modality baselines and offering a deployable solution for proactive crowd safety monitoring in transport hubs, festivals, and other high-risk venues. Full article
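
A toy PyTorch sketch of the attention-enhanced fusion step described above follows; the embedding size, the single linear attention scorer, and the two-class head are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class AttentionFusionMLP(nn.Module):
    """Toy attention-weighted fusion of three modality embeddings."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.attn = nn.Linear(dim, 1)          # scores each modality embedding
        self.mlp = nn.Sequential(
            nn.Linear(dim, 64), nn.ReLU(),
            nn.Linear(64, 2),                  # panic vs. no-panic logits
        )

    def forward(self, density, trajectory, semantic):
        x = torch.stack([density, trajectory, semantic], dim=1)  # (B, 3, dim)
        weights = torch.softmax(self.attn(x), dim=1)             # (B, 3, 1)
        fused = (weights * x).sum(dim=1)                         # (B, dim)
        return self.mlp(fused)

model = AttentionFusionMLP()
logits = model(torch.randn(4, 128), torch.randn(4, 128), torch.randn(4, 128))
print(logits.shape)  # torch.Size([4, 2])
```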

25 pages, 2085 KiB  
Article
How Much Does the Dynamic F0 Curve Affect the Expression of Emotion in Utterances?
by Tae-Jin Yoon
Appl. Sci. 2024, 14(23), 10972; https://doi.org/10.3390/app142310972 - 26 Nov 2024
Viewed by 1134
Abstract
The modulation of vocal elements, such as pitch, loudness, and duration, plays a crucial role in conveying both linguistic information and the speaker’s emotional state. While acoustic features like fundamental frequency (F0) variability have been widely studied in emotional speech analysis, accurately classifying emotion remains challenging due to the complex and dynamic nature of vocal expressions. Traditional analytical methods often oversimplify these dynamics, potentially overlooking intricate patterns indicative of specific emotions. This study examines the influences of emotion and temporal variation on dynamic F0 contours in the analytical framework, utilizing a dataset valuable for its diverse emotional expressions. However, the analysis is constrained by the limited variety of sentences employed, which may affect the generalizability of the findings to broader linguistic contexts. We utilized the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), focusing on eight distinct emotional states performed by 24 professional actors. Sonorant segments were extracted, and F0 measurements were converted into semitones relative to a 100 Hz baseline to standardize pitch variations. By employing Generalized Additive Mixed Models (GAMMs), we modeled non-linear trajectories of F0 contours over time, accounting for fixed effects (emotions) and random effects (individual speaker variability). Our analysis revealed that incorporating emotion-specific, non-linear time effects and individual speaker differences significantly improved the model’s explanatory power, ultimately explaining up to 66.5% of the variance in the F0. The inclusion of random smooths for time within speakers captured individual temporal modulation patterns, providing a more accurate representation of emotional speech dynamics. The results demonstrate that dynamic modeling of F0 contours using GAMMs enhances the accuracy of emotion classification in speech. This approach captures the nuanced pitch patterns associated with different emotions and accounts for individual variability among speakers. The findings contribute to a deeper understanding of the vocal expression of emotions and offer valuable insights for advancing speech emotion recognition systems. Full article
(This article belongs to the Special Issue Advances and Applications of Audio and Speech Signal Processing)
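
The semitone normalization used in the study is a simple formula, st = 12 * log2(F0 / 100 Hz). A minimal NumPy sketch, assuming unvoiced frames are coded as 0 Hz, is:

```python
import numpy as np

def f0_to_semitones(f0_hz, baseline_hz: float = 100.0) -> np.ndarray:
    """Convert F0 (Hz) to semitones relative to a baseline (100 Hz in the study)."""
    f0_hz = np.asarray(f0_hz, dtype=float)
    st = np.full_like(f0_hz, np.nan)
    voiced = f0_hz > 0                     # unvoiced/zero frames stay NaN
    st[voiced] = 12.0 * np.log2(f0_hz[voiced] / baseline_hz)
    return st

print(f0_to_semitones([100.0, 200.0, 150.0, 0.0]))
# [ 0.   12.    7.02   nan]
```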

16 pages, 2632 KiB  
Article
Dual-Guided Brain Diffusion Model: Natural Image Reconstruction from Human Visual Stimulus fMRI
by Lu Meng and Chuanhao Yang
Bioengineering 2023, 10(10), 1117; https://doi.org/10.3390/bioengineering10101117 - 24 Sep 2023
Cited by 3 | Viewed by 4258
Abstract
The reconstruction of visual stimuli from fMRI signals, which record brain activity, is a challenging task with crucial research value in the fields of neuroscience and machine learning. Previous studies tend to emphasize reconstructing pixel-level features (contours, colors, etc.) or semantic features (object category) of the stimulus image, but typically, these properties are not reconstructed together. In this context, we introduce a novel three-stage visual reconstruction approach called the Dual-guided Brain Diffusion Model (DBDM). Initially, we employ the Very Deep Variational Autoencoder (VDVAE) to reconstruct a coarse image from fMRI data, capturing the underlying details of the original image. Subsequently, the Bootstrapping Language-Image Pre-training (BLIP) model is utilized to provide a semantic annotation for each image. Finally, the image-to-image generation pipeline of the Versatile Diffusion (VD) model is utilized to recover natural images from the fMRI patterns guided by both visual and semantic information. The experimental results demonstrate that DBDM surpasses previous approaches in both qualitative and quantitative comparisons. In particular, the best performance is achieved by DBDM in reconstructing the semantic details of the original image; the Inception, CLIP and SwAV distances are 0.611, 0.225 and 0.405, respectively. This confirms the efficacy of our model and its potential to advance visual decoding research. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Imaging)

21 pages, 23680 KiB  
Article
FBC-ANet: A Semantic Segmentation Model for UAV Forest Fire Images Combining Boundary Enhancement and Context Awareness
by Lin Zhang, Mingyang Wang, Yunhong Ding, Tingting Wan, Bo Qi and Yutian Pang
Drones 2023, 7(7), 456; https://doi.org/10.3390/drones7070456 - 9 Jul 2023
Cited by 16 | Viewed by 2618
Abstract
Forest fires are one of the most serious natural disasters that threaten forest resources. The early and accurate identification of forest fires is crucial for reducing losses. Compared with satellites and sensors, unmanned aerial vehicles (UAVs) are widely used in forest fire monitoring tasks due to their flexibility and wide coverage. The key to fire monitoring is to accurately segment the area where the fire is located in the image. However, for early forest fire monitoring, fires captured remotely by UAVs have the characteristics of a small area, irregular contour, and susceptibility to forest cover, making the accurate segmentation of fire areas from images a challenge. This article proposes an FBC-ANet network architecture that integrates boundary enhancement modules and context-aware modules into a lightweight encoder–decoder network. FBC-ANet can extract deep semantic features from images and enhance shallow edge features, thereby achieving an effective segmentation of forest fire areas in the image. The FBC-ANet model uses an Xception network as the backbone of an encoder to extract features of different scales from images. By transforming the extracted deep semantic features through the CIA module, the model’s feature learning ability for fire pixels is enhanced, making feature extraction more robust. FBC-ANet integrates the BEM module into the decoder to enhance the extraction of shallow edge features in images. The experimental results indicate that the FBC-ANet model has a better segmentation performance for small target forest fires compared to the baseline model. The segmentation accuracy on the FLAME dataset is 92.19%, the F1 score is 90.76%, and the IoU reaches 83.08%. This indicates that the FBC-ANet model can indeed extract more valuable features related to fire in the image, thereby better segmenting the fire area from the image. Full article

27 pages, 11254 KiB  
Article
A Robust Real-Time Ellipse Detection Method for Robot Applications
by Wenshan He, Gongping Wu, Fei Fan, Zhongyun Liu and Shujie Zhou
Drones 2023, 7(3), 209; https://doi.org/10.3390/drones7030209 - 17 Mar 2023
Cited by 8 | Viewed by 4403
Abstract
Over the years, many ellipse detection algorithms have been studied broadly, yet the critical problem of accurately and effectively detecting ellipses in the real world using robots remains a challenge. In this paper, we propose a practical real-time robot-oriented ellipse detector and a simple tracking algorithm. The method uses low-cost RGB cameras and conversion into HSV space to obtain reddish regions of interest (RROIs) contours, effective arc selection and grouping strategies, and candidate ellipse selection procedures that eliminate invalid edges and clustering functions. Extensive experiments are conducted to tune and verify the method’s parameters for the best performance. Combined with a simple tracking algorithm, the method executes in approximately 30 ms per video frame in most cases. The results show that the proposed method achieved high-quality performance (precision, recall, and F-measure scores) with the least execution time compared with nine existing state-of-the-art methods on three public real-application datasets. Our method can detect elliptical markers in real time in practical applications, detect ellipses adaptively under natural light, and reliably detect severely occluded and specularly reflective ellipses whether the elliptical object is far from or close to the robot. The average detection frequency meets real-time requirements (>10 Hz). Full article
(This article belongs to the Topic 3D Computer Vision and Smart Building and City)
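
The reddish-region-of-interest step lends itself to a short OpenCV sketch; the HSV thresholds below are illustrative guesses rather than the paper's tuned parameters.

```python
import cv2
import numpy as np

def reddish_roi_contours(bgr: np.ndarray):
    """Threshold reddish pixels in HSV and return their external contours."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Red hue wraps around 0 on OpenCV's 0-179 hue scale, so use two ranges.
    lower = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))
    upper = cv2.inRange(hsv, (170, 80, 60), (179, 255, 255))
    mask = cv2.bitwise_or(lower, upper)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return contours, mask

# Example: a synthetic frame containing one red disc.
frame = np.zeros((120, 160, 3), np.uint8)
cv2.circle(frame, (80, 60), 25, (0, 0, 255), -1)   # BGR red
contours, _ = reddish_roi_contours(frame)
print(len(contours))  # 1
```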

13 pages, 1496 KiB  
Article
Arbitrary-Shaped Text Detection with B-Spline Curve Network
by Yuwei You, Yuxin Lei, Zixu Zhang and Minglei Tong
Sensors 2023, 23(5), 2418; https://doi.org/10.3390/s23052418 - 22 Feb 2023
Cited by 1 | Viewed by 2140
Abstract
Text regions in natural scenes have complex and variable shapes. Directly using contour coordinates to describe text regions will make the modeling inadequate and lead to low accuracy of text detection. To address the problem of irregular text regions in natural scenes, we propose an arbitrary-shaped text detection model based on Deformable DETR called BSNet. The model differs from the traditional method of directly predicting contour points by using B-Spline curve to make the text contour more accurate and reduces the number of predicted parameters simultaneously. The proposed model eliminates manually designed components and dramatically simplifies the design. The proposed model achieves F-measure of 86.8% and 87.6% on CTW1500 and Total-Text, demonstrating the model’s effectiveness. Full article
(This article belongs to the Section Sensor Networks)
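
To illustrate the idea of describing a text contour with a B-spline instead of raw contour points, here is a small SciPy sketch; the closed cubic spline and the sample counts are assumptions, not BSNet's actual parameterization.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def bspline_contour(points: np.ndarray, n_samples: int = 100, degree: int = 3):
    """Fit a closed B-spline to 2D contour points and resample it densely."""
    x, y = points[:, 0], points[:, 1]
    tck, _ = splprep([x, y], s=0.0, k=degree, per=True)   # periodic = closed curve
    u = np.linspace(0.0, 1.0, n_samples)
    xs, ys = splev(u, tck)
    return np.stack([xs, ys], axis=1)

# Example: a rough octagonal "text region" outline smoothed into a B-spline.
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
octagon = np.stack([np.cos(theta), 0.4 * np.sin(theta)], axis=1)
curve = bspline_contour(octagon)
print(curve.shape)  # (100, 2)
```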

14 pages, 1256 KiB  
Article
CA-STD: Scene Text Detection in Arbitrary Shape Based on Conditional Attention
by Xing Wu, Yangyang Qi, Jun Song, Junfeng Yao, Yanzhong Wang, Yang Liu, Yuexing Han and Quan Qian
Information 2022, 13(12), 565; https://doi.org/10.3390/info13120565 - 1 Dec 2022
Cited by 7 | Viewed by 2134
Abstract
Scene Text Detection (STD) is critical for obtaining textual information from natural scenes, serving for automated driving and security surveillance. However, existing text detection methods fall short when dealing with the variation in text curvatures, orientations, and aspect ratios in complex backgrounds. To meet the challenge, we propose a method called CA-STD to detect arbitrarily shaped text against a complicated background. Firstly, a Feature Refinement Module (FRM) is proposed to enhance feature representation. Additionally, the conditional attention mechanism is proposed not only to decouple the spatial and textual information from scene text images, but also to model the relationship among different feature vectors. Finally, the Contour Information Aggregation (CIA) is presented to enrich the feature representation of text contours by considering circular topology and semantic information simultaneously to obtain the detection curves with arbitrary shapes. The proposed CA-STD method is evaluated on different datasets with extensive experiments. On the one hand, the CA-STD outperforms state-of-the-art methods and achieves 82.9 in precision on the dataset of TotalText. On the other hand, the method has better performance than state-of-the-art methods and achieves the F1 score of 83.8 on the dataset of CTW-1500. The quantitative and qualitative analysis proves that the CA-STD can detect variably shaped scene text effectively. Full article
(This article belongs to the Special Issue Intelligence Computing and Systems)

18 pages, 4607 KiB  
Article
Rapid Target Detection of Fruit Trees Using UAV Imaging and Improved Light YOLOv4 Algorithm
by Yuchao Zhu, Jun Zhou, Yinhui Yang, Lijuan Liu, Fei Liu and Wenwen Kong
Remote Sens. 2022, 14(17), 4324; https://doi.org/10.3390/rs14174324 - 1 Sep 2022
Cited by 30 | Viewed by 4083
Abstract
The detection and counting of fruit tree canopies are important for orchard management, yield estimation, and phenotypic analysis. Previous research has shown that most fruit tree canopy detection methods are based on the use of traditional computer vision algorithms or machine learning methods to extract shallow features such as color and contour, with good results. However, due to the lack of robustness of these features, most methods are hardly adequate for the recognition and counting of fruit tree canopies in natural scenes. Other studies have shown that deep learning methods can be used to perform canopy detection. However, the adhesion and occlusion of fruit tree canopies, as well as background noise, limit the accuracy of detection. Therefore, to improve the accuracy of fruit tree canopy recognition and counting in real-world scenarios, an improved YOLOv4 (you only look once v4) is proposed, trained on a dataset produced from fruit tree canopy UAV imagery. It is combined with the Mobilenetv3 network, which lightens the model and increases detection speed; the CBAM (convolutional block attention module), which increases the feature extraction capability of the network; and ASFF (adaptively spatial feature fusion), which enhances the multi-scale feature fusion capability of the network. In addition, the K-means algorithm and linear scale scaling are used to optimize the generation of pre-selected boxes, and a cosine annealing learning strategy is used to train the model, thus accelerating training and improving detection accuracy. The results show that the improved YOLOv4 model can effectively overcome noise in an orchard environment and achieve fast and accurate recognition and counting of fruit tree crowns while keeping the model lightweight. The mAP reached 98.21%, FPS reached 96.25, and the F1-score reached 93.60% for canopy detection, with a significant reduction in model size; the average overall accuracy (AOA) reached 96.73% for counting. In conclusion, the YOLOv4-Mobilenetv3-CBAM-ASFF-P model meets the practical requirements of orchard fruit tree canopy detection and counting in this study, providing optional technical support for the digitalization, refinement, and smart development of smart orchards. Full article
(This article belongs to the Special Issue Machine Vision and Advanced Image Processing in Remote Sensing)
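
The anchor (pre-selected box) optimization step can be sketched with a standard K-means clustering of box widths and heights, as below; note that YOLO implementations often use an IoU-based distance rather than the Euclidean distance scikit-learn uses, and the synthetic box sizes here are purely illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def anchor_boxes(wh: np.ndarray, n_anchors: int = 9, input_size: int = 608):
    """Cluster normalized (width, height) pairs into anchor boxes, YOLO-style."""
    km = KMeans(n_clusters=n_anchors, n_init=10, random_state=0).fit(wh)
    anchors = km.cluster_centers_ * input_size          # scale to network input
    return anchors[np.argsort(anchors.prod(axis=1))]    # sort anchors by area

# Example with synthetic normalized canopy bounding-box sizes.
rng = np.random.default_rng(0)
wh = rng.uniform(0.05, 0.4, size=(500, 2))
print(anchor_boxes(wh, n_anchors=9).round(1))
```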

13 pages, 560 KiB  
Article
How Visual Word Decoding and Context-Driven Auditory Semantic Integration Contribute to Reading Comprehension: A Test of Additive vs. Multiplicative Models
by Yu Li, Hongbing Xing, Linjun Zhang, Hua Shu and Yang Zhang
Brain Sci. 2021, 11(7), 830; https://doi.org/10.3390/brainsci11070830 - 23 Jun 2021
Cited by 4 | Viewed by 3286
Abstract
Theories of reading comprehension emphasize decoding and listening comprehension as two essential components. The current study aimed to investigate how Chinese character decoding and context-driven auditory semantic integration contribute to reading comprehension in Chinese middle school students. Seventy-five middle school students were tested. Context-driven auditory semantic integration was assessed with speech-in-noise tests in which the fundamental frequency (F0) contours of spoken sentences were either kept natural or acoustically flattened, with the latter requiring a higher degree of contextual information. Statistical modeling with hierarchical regression was conducted to examine the contributions of Chinese character decoding and context-driven auditory semantic integration to reading comprehension. Performance in Chinese character decoding and auditory semantic integration scores with the flattened (but not natural) F0 sentences significantly predicted reading comprehension. Furthermore, the contributions of these two factors to reading comprehension were better fitted by an additive model than by a multiplicative model. These findings indicate that reading comprehension in middle schoolers is associated with not only character decoding but also the listening ability to make better use of the sentential context for semantic integration in a severely degraded speech-in-noise condition. The results add to our understanding of the multi-faceted nature of reading comprehension in children. Future research could further address the age-dependent development and maturation of reading skills by examining and controlling other important cognitive variables, and apply neuroimaging techniques such as functional magnetic resonance imaging and electrophysiology to reveal the neural substrates and neural oscillatory patterns underlying the contribution of auditory semantic integration and the observed additive model to reading comprehension. Full article
(This article belongs to the Section Neurolinguistics)
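
The additive-versus-multiplicative comparison amounts to asking whether an interaction term improves the regression; a minimal statsmodels sketch with synthetic stand-in scores (not the study's data) looks like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical scores; in the study these would be character decoding,
# flattened-F0 speech-in-noise integration, and reading comprehension.
rng = np.random.default_rng(1)
n = 75
df = pd.DataFrame({
    "decoding": rng.normal(size=n),
    "integration": rng.normal(size=n),
})
df["reading"] = 0.5 * df.decoding + 0.4 * df.integration + rng.normal(scale=0.5, size=n)

additive = smf.ols("reading ~ decoding + integration", data=df).fit()
multiplicative = smf.ols("reading ~ decoding * integration", data=df).fit()  # adds interaction

print(additive.rsquared_adj, multiplicative.rsquared_adj)
# A negligible gain from the interaction term favors the additive account.
```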

18 pages, 2948 KiB  
Article
Tonal Contour Generation for Isarn Speech Synthesis Using Deep Learning and Sampling-Based F0 Representation
by Pongsathon Janyoi and Pusadee Seresangtakul
Appl. Sci. 2020, 10(18), 6381; https://doi.org/10.3390/app10186381 - 13 Sep 2020
Cited by 5 | Viewed by 2949
Abstract
The modeling of fundamental frequency (F0) in speech synthesis is a critical factor affecting the intelligibility and naturalness of synthesized speech. In this paper, we focus on improving the modeling of F0 for Isarn speech synthesis. We propose the F0 model for this based on a recurrent neural network (RNN). Sampled values of F0 are used at the syllable level of continuous Isarn speech combined with their dynamic features to represent supra-segmental properties of the F0 contour. Different architectures of the deep RNNs and different combinations of linguistic features are analyzed to obtain conditions for the best performance. To assess the proposed method, we compared it with several RNN-based baselines. The results of objective and subjective tests indicate that the proposed model significantly outperformed the baseline RNN model that predicts values of F0 at the frame level, and the baseline RNN model that represents the F0 contours of syllables by using discrete cosine transform. Full article
(This article belongs to the Special Issue Intelligent Speech and Acoustic Signal Processing)
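
The two contour representations compared in the paper, syllable-level F0 samples with dynamic features versus a discrete cosine transform, can be sketched in a few lines of NumPy/SciPy; the toy contour and the number of samples/coefficients are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.fft import dct

def sample_f0(f0: np.ndarray, n_points: int = 10) -> np.ndarray:
    """Represent a syllable's F0 contour by n equally spaced samples."""
    idx = np.linspace(0, len(f0) - 1, n_points)
    return np.interp(idx, np.arange(len(f0)), f0)

def dct_f0(f0: np.ndarray, n_coefs: int = 10) -> np.ndarray:
    """Alternative representation: the first n DCT coefficients of the contour."""
    return dct(f0, norm="ortho")[:n_coefs]

def delta(x: np.ndarray) -> np.ndarray:
    """First-order dynamic (delta) features of a parameter sequence."""
    return np.gradient(x)

syllable_f0 = 120 + 15 * np.sin(np.linspace(0, np.pi, 40))   # toy rising-falling contour
sampled = sample_f0(syllable_f0)
print(np.round(sampled, 1))
print(np.round(delta(sampled), 2))
print(np.round(dct_f0(syllable_f0), 1))
```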

12 pages, 4724 KiB  
Article
A 3D-QSAR Study on the Antitrypanosomal and Cytotoxic Activities of Steroid Alkaloids by Comparative Molecular Field Analysis
by Charles Okeke Nnadi, Julia Barbara Althaus, Ngozi Justina Nwodo and Thomas Jürgen Schmidt
Molecules 2018, 23(5), 1113; https://doi.org/10.3390/molecules23051113 - 8 May 2018
Cited by 18 | Viewed by 6033
Abstract
As part of our research for new leads against human African trypanosomiasis (HAT), we report on a 3D-QSAR study for antitrypanosomal activity and cytotoxicity of aminosteroid-type alkaloids recently isolated from the African medicinal plant Holarrhena africana A. DC. (Apocynaceae), some of which are strong trypanocides against Trypanosoma brucei rhodesiense (Tbr), with low toxicity against mammalian cells. Fully optimized 3D molecular models of seventeen congeneric Holarrhena alkaloids were subjected to a comparative molecular field analysis (CoMFA). CoMFA models were obtained for both, the anti-Tbr and cytotoxic activity data. Model performance was assessed in terms of statistical characteristics (R2, Q2, and P2 for partial least squares (PLS) regression, internal cross-validation (leave-one-out), and external predictions (test set), respectively, as well as the corresponding standard deviation error in prediction (SDEP) and F-values). With R2 = 0.99, Q2 = 0.83 and P2 = 0.79 for anti-Tbr activity and R2 = 0.94, Q2 = 0.64, P2 = 0.59 for cytotoxicity against L6 rat skeletal myoblasts, both models were of good internal and external predictive power. The regression coefficients of the models representing the most prominent steric and electrostatic effects on anti-Tbr and for L6 cytotoxic activity were translated into contour maps and analyzed visually, allowing suggestions for possible modification of the aminosteroids to further increase the antitrypanosomal potency and selectivity. Very interestingly, the 3D-QSAR model established with the Holarrhena alkaloids also applied to the antitrypanosomal activity of two aminocycloartane-type compounds recently isolated by our group from Buxus sempervirens L. (Buxaceae), which indicates that these structurally similar natural products share a common structure–activity relationship (SAR) and, possibly, mechanism of action with the Holarrhena steroids. This 3D-QSAR study has thus resulted in plausible structural explanations of the antitrypanosomal activity and selectivity of aminosteroid- and aminocycloartane-type alkaloids as an interesting new class of trypanocides and may represent a starting point for lead optimization. Full article
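
The PLS statistics reported above (fitted R2 and leave-one-out cross-validated Q2) can be reproduced in spirit with scikit-learn; the descriptor matrix below is random stand-in data, since real CoMFA fields come from 3D steric/electrostatic grids, and an external P2 would additionally require a held-out test set.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

# Hypothetical field descriptors (columns) for 17 training compounds and a
# toy activity vector; not the Holarrhena alkaloid data.
rng = np.random.default_rng(42)
X = rng.normal(size=(17, 60))
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.3, size=17)

pls = PLSRegression(n_components=3)
pls.fit(X, y)
r2 = r2_score(y, pls.predict(X))                             # fitted R^2

loo_pred = cross_val_predict(PLSRegression(n_components=3), X, y, cv=LeaveOneOut())
q2 = r2_score(y, loo_pred)                                   # leave-one-out Q^2

print(round(r2, 2), round(q2, 2))
```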

11 pages, 2182 KiB  
Article
Fusarium graminearum in Stored Wheat: Use of CO2 Production to Quantify Dry Matter Losses and Relate This to Relative Risks of Zearalenone Contamination under Interacting Environmental Conditions
by Esther Garcia-Cela, Elsa Kiaitsi, Michael Sulyok, Angel Medina and Naresh Magan
Toxins 2018, 10(2), 86; https://doi.org/10.3390/toxins10020086 - 17 Feb 2018
Cited by 28 | Viewed by 6504
Abstract
Zearalenone (ZEN) contamination from Fusarium graminearum colonization is particularly important in food and feed wheat, especially during post-harvest storage with legislative limits for both food and feed grain. Indicators of the relative risk of exceeding these limits would be useful. We examined the effect of different water activities (aw; 0.95–0.90) and temperatures (10–25 °C) in naturally contaminated and irradiated wheat grain, both inoculated with F. graminearum and stored for 15 days, on (a) respiration rate; (b) dry matter losses (DML); (c) ZEN production and (d) the relationship between DML and ZEN contamination relative to the EU legislative limits. Gas chromatography was used to measure the temporal respiration rates and the total accumulated CO2 production. There was an increase in temporal CO2 production rates in wetter and warmer conditions in all treatments, with the highest respiration in the 25 °C × 0.95 aw treatments + F. graminearum inoculation. This was reflected in the total accumulated CO2 in the treatments. The maximum DMLs were in the 0.95 aw/20–25 °C treatments and at 10 °C/0.95 aw. The DMLs were modelled to produce contour maps of the environmental conditions resulting in maximum/minimum losses. Contamination with ZEN/ZEN-related compounds was quantified. Maximum production was at 25 °C/0.95–0.93 aw and 20 °C/0.95 aw. ZEN contamination levels plotted against DMLs for all the treatments showed that at ca. <1.0% DML, there was a low risk of ZEN contamination exceeding EU legislative limits, while at >1.0% DML, the risk was high. This type of data is important in building a database for the development of a post-harvest decision support system for relative risks of different mycotoxins. Full article
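
As a visual aid for the kind of contour map described above, here is a matplotlib sketch over the studied aw x temperature grid with the ~1% DML risk boundary marked; the DML surface itself is a placeholder formula, not the fitted model from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative only: a placeholder dry-matter-loss (DML) surface over the
# water activity x temperature grid studied (0.90-0.95 aw, 10-25 degrees C);
# the real surface would come from the fitted respiration/DML model.
aw = np.linspace(0.90, 0.95, 50)
temp = np.linspace(10, 25, 50)
AW, T = np.meshgrid(aw, temp)
DML = 2.0 * (AW - 0.90) / 0.05 * (0.3 + 0.7 * (T - 10) / 15)   # % dry matter loss

fig, ax = plt.subplots()
cs = ax.contourf(AW, T, DML, levels=10, cmap="viridis")
ax.contour(AW, T, DML, levels=[1.0], colors="red")   # ~1% DML: elevated ZEN risk
ax.set_xlabel("Water activity (aw)")
ax.set_ylabel("Temperature (deg C)")
fig.colorbar(cs, label="Dry matter loss (%)")
plt.show()
```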
