
Search Results (3)

Search Parameters:
Keywords = phonoenterogram (PEG)

15 pages, 1109 KB  
Article
A Novel Unsupervised You Only Listen Once (YOLO) Machine Learning Platform for Automatic Detection and Characterization of Prominent Bowel Sounds Towards Precision Medicine
by Gayathri Yerrapragada, Jieun Lee, Mohammad Naveed Shariff, Poonguzhali Elangovan, Keerthy Gopalakrishnan, Avneet Kaur, Divyanshi Sood, Swetha Rapolu, Jay Gohri, Gianeshwaree Alias Rachna Panjwani, Rabiah Aslam Ansari, Jahnavi Mikkilineni, Naghmeh Asadimanesh, Thangeswaran Natarajan, Jayarajasekaran Janarthanan, Shiva Sankari Karuppiah, Vivek N. Iyer, Scott A. Helgeson, Venkata S. Akshintala and Shivaram P. Arunachalam
Bioengineering 2025, 12(11), 1271; https://doi.org/10.3390/bioengineering12111271 - 19 Nov 2025
Viewed by 2624
Abstract
Phonoenterography (PEG) offers a non-invasive and radiation-free technique to assess gastrointestinal activity through acoustic signal analysis. In this feasibility study, 110 high-resolution PEG recordings (44.1 kHz, 16-bit) were acquired from eight healthy individuals, yielding 6314 prominent bowel sound (PBS) segments through automated segmentation. Each event was characterized using a 279-feature acoustic profile comprising Mel-frequency cepstral coefficients (MFCCs), their first-order derivatives (Δ-MFCCs), and six global spectral parameters. After normalization and dimensionality reduction with PCA and UMAP (cosine distance, 35 neighbors, minimum distance = 0.01), five clustering strategies were evaluated. K-Means (k = 5) achieved the most favorable balance between cluster quality (silhouette = 0.60; Calinski–Harabasz = 19,165; Davies–Bouldin = 0.68) and interpretability, consistently identifying five acoustic patterns: single-burst, multiple-burst, harmonic, random-continuous, and multi-modal. Temporal modeling of clustered events further revealed distinct sequential dynamics, with single-burst events showing the longest dwell times, random-continuous the shortest, and strong diagonal elements in the transition matrix confirming measurable state persistence. Frequent transitions between random-continuous and multi-modal states suggested dynamic exchanges between transient and overlapping motility patterns. Together, these findings demonstrate that unsupervised PEG-based analysis can capture both acoustic variability and temporal organization of bowel sounds. This annotation-free approach provides a scalable framework for real-time gastrointestinal monitoring and holds potential for clinical translation in conditions such as postoperative ileus, bowel obstruction, irritable bowel syndrome, and inflammatory bowel disease.
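The normalize → reduce → cluster stage described in this abstract can be sketched in Python with scikit-learn. This is a minimal illustration, not the authors' code: synthetic Gaussian blobs stand in for the paper's 6314 × 279 acoustic feature matrix, and the UMAP step (which requires the separate umap-learn package) is omitted, so the sketch stops at PCA before K-Means. All array sizes here are illustrative assumptions.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Synthetic stand-in for the 279-dimensional acoustic profiles
# (MFCCs, delta-MFCCs, global spectral parameters). Five separated
# blobs mimic the five acoustic patterns the authors report.
rng = np.random.default_rng(42)
centers = rng.normal(scale=6.0, size=(5, 279))
X = np.vstack([c + rng.normal(size=(200, 279)) for c in centers])

# Normalize, reduce dimensionality, then cluster with K-Means (k = 5).
Xs = StandardScaler().fit_transform(X)
Xr = PCA(n_components=20, random_state=0).fit_transform(Xs)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(Xr)
score = silhouette_score(Xr, labels)   # same metric the paper reports
```

In the paper, UMAP follows PCA before clustering and each resulting cluster is mapped to an acoustic pattern; here the silhouette score simply confirms that k = 5 separates the synthetic blobs.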

13 pages, 769 KB  
Article
A Novel You Only Listen Once (YOLO) Deep Learning Model for Automatic Prominent Bowel Sounds Detection: Feasibility Study in Healthy Subjects
by Rohan Kalahasty, Gayathri Yerrapragada, Jieun Lee, Keerthy Gopalakrishnan, Avneet Kaur, Pratyusha Muddaloor, Divyanshi Sood, Charmy Parikh, Jay Gohri, Gianeshwaree Alias Rachna Panjwani, Naghmeh Asadimanesh, Rabiah Aslam Ansari, Swetha Rapolu, Poonguzhali Elangovan, Shiva Sankari Karuppiah, Vijaya M. Dasari, Scott A. Helgeson, Venkata S. Akshintala and Shivaram P. Arunachalam
Sensors 2025, 25(15), 4735; https://doi.org/10.3390/s25154735 - 31 Jul 2025
Cited by 3 | Viewed by 3475
Abstract
Accurate diagnosis of gastrointestinal (GI) diseases typically requires invasive procedures or imaging studies that pose the risk of various post-procedural complications or involve radiation exposure. Bowel sounds (BSs), though typically described during a GI-focused physical exam, are highly inaccurate and variable, with low clinical value in diagnosis. Interpretation of the acoustic characteristics of BSs, i.e., using a phonoenterogram (PEG), may aid in diagnosing various GI conditions non-invasively. Use of artificial intelligence (AI) and improvements in computational analysis can enhance the use of PEGs in different GI diseases and lead to a non-invasive, cost-effective diagnostic modality that has not been explored before. The purpose of this work was to develop an automated AI model, You Only Listen Once (YOLO), to detect prominent bowel sounds and enable real-time analysis for future GI disease detection and diagnosis. A total of 110 two-minute PEGs sampled at 44.1 kHz were recorded using the Eko DUO® stethoscope from eight healthy volunteers at two locations, the left upper quadrant (LUQ) and right lower quadrant (RLQ), after IRB approval. The datasets were annotated by trained physicians, categorizing BSs as prominent or obscure using version 1.7 of Label Studio Software®. Each BS recording was split into 375 ms segments with 200 ms overlap for real-time BS detection. Each segment was binned based on whether it contained a prominent BS, resulting in a dataset of 36,149 non-prominent segments and 6435 prominent segments. The dataset was divided into training, validation, and test sets (60/20/20% split). A 1D-CNN-augmented transformer was trained to classify these segments from Mel-frequency cepstral coefficient inputs. The developed AI model achieved an area under the receiver operating characteristic (ROC) curve of 0.92, accuracy of 86.6%, precision of 86.85%, and recall of 86.08%. This shows that the 1D-CNN-augmented transformer with Mel-frequency cepstral coefficients achieved creditable performance metrics, signifying the YOLO model's capability to classify prominent bowel sounds that can be further analyzed for various GI diseases. This proof-of-concept study in healthy volunteers demonstrates that automated BS detection can pave the way for developing more intuitive and efficient AI-PEG devices that can be trained and utilized to diagnose various GI conditions. To ensure the robustness and generalizability of these findings, further investigations encompassing a broader cohort, inclusive of both healthy and disease states, are needed.
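The windowing scheme this abstract describes (375 ms segments with 200 ms overlap at 44.1 kHz) can be reproduced in a few lines of NumPy. This is a hedged sketch under the stated parameters, not the authors' implementation; the synthetic noise signal merely stands in for a two-minute PEG recording.

```python
import numpy as np

FS = 44_100                       # sampling rate used in the study (Hz)
WIN_S, OVERLAP_S = 0.375, 0.200   # 375 ms windows, 200 ms overlap

def segment(signal, fs=FS, win_s=WIN_S, overlap_s=OVERLAP_S):
    """Split a 1-D recording into fixed, overlapping analysis windows."""
    win = int(win_s * fs)              # samples per window (~16,537)
    hop = win - int(overlap_s * fs)    # step between window starts
    starts = range(0, len(signal) - win + 1, hop)
    return np.stack([signal[s:s + win] for s in starts])

# Synthetic stand-in for one 2-minute PEG recording.
x = np.random.default_rng(0).standard_normal(2 * 60 * FS)
segs = segment(x)
```

Each resulting row would then be converted to MFCCs and labeled prominent or non-prominent before training the classifier; the overlap ensures a bowel sound falling on a window boundary still appears whole in an adjacent window.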
(This article belongs to the Special Issue Biomedical Signals, Images and Healthcare Data Analysis: 2nd Edition)

21 pages, 1337 KB  
Review
Practicing Digital Gastroenterology through Phonoenterography Leveraging Artificial Intelligence: Future Perspectives Using Microwave Systems
by Renisha Redij, Avneet Kaur, Pratyusha Muddaloor, Arshia K. Sethi, Keirthana Aedma, Anjali Rajagopal, Keerthy Gopalakrishnan, Ashima Yadav, Devanshi N. Damani, Victor G. Chedid, Xiao Jing Wang, Christopher A. Aakre, Alexander J. Ryu and Shivaram P. Arunachalam
Sensors 2023, 23(4), 2302; https://doi.org/10.3390/s23042302 - 18 Feb 2023
Cited by 16 | Viewed by 9145
Abstract
Production of bowel sounds, established in the 1900s, has limited application in existing patient-care regimes and diagnostic modalities. We review the physiology of bowel sound production, developments in recording technologies and the clinical application in various scenarios, to understand the potential of a bowel sound recording and analysis device, the phonoenterogram, in future gastroenterological practice. Bowel sound production depends on, but is not entirely limited to, the type of food consumed, the amount of air ingested and the type of intestinal contractions. Recording technologies for extraction and analysis of these sounds include wavelet-based filtering, the autoregressive moving average model, multivariate empirical mode decomposition, the radial basis function network, two-dimensional positional mapping, the neural network model and the acoustic biosensor technique. Prior studies have evaluated the application of bowel sounds in conditions such as intestinal obstruction, acute appendicitis, large bowel disorders such as inflammatory bowel disease and bowel polyps, ascites, post-operative ileus, sepsis, irritable bowel syndrome, diabetes mellitus, neurodegenerative disorders such as Parkinson's disease and neonatal conditions such as hypertrophic pyloric stenosis. Recording and analysis of bowel sounds using artificial intelligence is crucial for creating an accessible, inexpensive and safe device with a broad range of clinical applications. Microwave-based digital phonoenterography has huge potential for impacting GI practice and patient care.
(This article belongs to the Special Issue Microwave and Antenna System in Medical Applications)
