Search Results (65)

Search Parameters:
Keywords = Turbo coding

15 pages, 1295 KB  
Article
Use of Small-Molecule Inhibitors of CILK1 and AURKA as Cilia-Promoting Drugs to Decelerate Medulloblastoma Cell Replication
by Sean H. Fu, Chelsea Park, Niyathi A. Shah, Ana Limerick, Ethan W. Powers, Cassidy B. Mann, Emily M. Hyun, Ying Zhang, David L. Brautigan, Sijie Hao, Roger Abounader and Zheng Fu
Biomedicines 2026, 14(2), 265; https://doi.org/10.3390/biomedicines14020265 - 24 Jan 2026
Viewed by 211
Abstract
Background/Objective: The primary cilium is the sensory organelle of a cell and a dynamic membrane protrusion during the cell cycle. It originates from the centriole at G0/G1 and undergoes disassembly to release centrioles for spindle formation before a cell enters mitosis, thereby serving as a cell cycle checkpoint. Cancer cells that undergo rapid cell-cycle progression and replication have a low ciliation rate. In this study, we aimed to identify cilia-promoting drugs that can accelerate ciliation and decelerate replication of cancer cells. Methods: To perform a comprehensive and efficient literature search on drugs that can promote ciliation, we developed an intelligent process that integrates the GPT-4 Turbo, Gemini 1.5 Pro, or Claude 3.5 Haiku application programming interfaces (APIs) into a PubMed scraper that we coded, enabling the large language models (LLMs) to directly query articles for predefined user questions. We evaluated the performance of this intelligent literature search based on accuracy, precision, and recall metrics and tested the effect of two candidate drugs on the ciliation and proliferation of medulloblastoma cells. Results: Gemini was the best model overall, as it balanced high accuracy with solid precision and recall scores. Among the top candidate drugs identified are Alvocidib and Alisertib, small-molecule inhibitors of CILK1 and AURKA, respectively. Here, we show that both kinase inhibitors can effectively increase cilia frequency and significantly decrease the replication of medulloblastoma cells. Conclusions: The results demonstrate the potential of using cilia-promoting drugs, such as Alvocidib and Alisertib, to suppress cancer cell replication. Additionally, the study shows the substantial benefit of integrating accessible large language models to conduct sweeping, rapid, and accurate literature searches. Full article
(This article belongs to the Special Issue Signaling of Protein Kinases in Development and Disease (2nd Edition))
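As a rough illustration of the retrieval-plus-questioning pattern described above (not the authors' pipeline), the Python sketch below pulls abstracts from PubMed via the NCBI E-utilities and poses a predefined question to a chat-completion API; the search term, question text, and model name are placeholder assumptions.

```python
"""Rough sketch of an LLM-assisted PubMed screen (illustrative only).

Assumes an OpenAI API key in the environment; the search term, question,
and model name below are placeholders, not the authors' settings.
"""
import requests
from openai import OpenAI

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"
QUESTION = "Does this article report a drug that increases ciliation? Answer yes/no and name it."

def pubmed_abstracts(term: str, retmax: int = 20) -> list[str]:
    # Find PubMed IDs matching the search term.
    ids = requests.get(f"{EUTILS}/esearch.fcgi",
                       params={"db": "pubmed", "term": term,
                               "retmode": "json", "retmax": retmax},
                       timeout=30).json()["esearchresult"]["idlist"]
    # Fetch plain-text abstracts for those IDs and split them into records.
    text = requests.get(f"{EUTILS}/efetch.fcgi",
                        params={"db": "pubmed", "id": ",".join(ids),
                                "rettype": "abstract", "retmode": "text"},
                        timeout=30).text
    return [a.strip() for a in text.split("\n\n\n") if a.strip()]

def ask_llm(abstract: str, client: OpenAI, model: str = "gpt-4-turbo") -> str:
    # Pose the predefined question about a single abstract.
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": f"{QUESTION}\n\nAbstract:\n{abstract}"}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    for abstract in pubmed_abstracts("cilia assembly drug"):  # placeholder search term
        print(ask_llm(abstract, client))
```
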
32 pages, 611 KB  
Article
Combining LLMs and Knowledge Graphs to Reduce Hallucinations in Biomedical Question Answering
by Larissa Pusch and Tim O. F. Conrad
BioMedInformatics 2025, 5(4), 70; https://doi.org/10.3390/biomedinformatics5040070 - 9 Dec 2025
Cited by 1 | Viewed by 1183
Abstract
Advancements in natural language processing (NLP), particularly Large Language Models (LLMs), have greatly improved how we access knowledge. However, in critical domains like biomedicine, challenges like hallucinations—where language models generate information not grounded in data—can lead to dangerous misinformation. This paper presents a hybrid approach that combines LLMs with Knowledge Graphs (KGs) to improve the accuracy and reliability of question-answering systems in the biomedical field. Our method, implemented using the LangChain framework, includes a query-checking algorithm that checks and, where possible, corrects LLM-generated Cypher queries, which are then executed on the Knowledge Graph, grounding answers in the KG and reducing hallucinations in the evaluated cases. We evaluated several LLMs, including several GPT models and Llama 3.3:70b, on a custom benchmark dataset of 50 biomedical questions. GPT-4 Turbo achieved 90% query accuracy, outperforming most other models. We also evaluated prompt engineering, but found little statistically significant improvement compared to the standard prompt, except for Llama 3:70b, which improved with few-shot prompting. To enhance usability, we developed a web-based interface that allows users to input natural language queries, view generated and corrected Cypher queries, and inspect results for accuracy. This framework improves reliability and accessibility by accepting natural language questions and returning verifiable answers directly from the knowledge graph, enabling inspection and reproducibility. The source code for generating the results of this paper and for the user-interface can be found in our Git repository: https://git.zib.de/lpusch/cyphergenkg-gui, accessed on 1 November 2025. Full article
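A loose sketch of the query-checking idea, assuming a Neo4j back end reachable through the standard Python driver: it validates the node labels and relationship types in an LLM-generated Cypher query against the graph's actual schema before execution. The credentials and example query are invented, and this is not the paper's LangChain implementation.

```python
"""Sketch of a pre-execution check for LLM-generated Cypher (illustrative only).

Assumes a local Neo4j instance; credentials and the example query are placeholders.
"""
import re
from neo4j import GraphDatabase

def schema_terms(session):
    # Labels and relationship types that actually exist in the graph.
    labels = {r["label"] for r in session.run("CALL db.labels()")}
    rels = {r["relationshipType"] for r in session.run("CALL db.relationshipTypes()")}
    return labels, rels

def check_query(cypher: str, labels: set[str], rels: set[str]) -> list[str]:
    # Flag any label or relationship type in the query that the graph does not contain.
    problems = []
    for label in re.findall(r"\(\s*\w*\s*:\s*(\w+)", cypher):
        if label not in labels:
            problems.append(f"unknown node label: {label}")
    for rel in re.findall(r"\[\s*\w*\s*:\s*(\w+)", cypher):
        if rel not in rels:
            problems.append(f"unknown relationship type: {rel}")
    return problems

if __name__ == "__main__":
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
    generated = "MATCH (g:Gene)-[:ASSOCIATED_WITH]->(d:Disease) RETURN g.name, d.name"
    with driver.session() as session:
        labels, rels = schema_terms(session)
        issues = check_query(generated, labels, rels)
        if issues:
            print("query rejected:", issues)  # a corrected query would be requested here
        else:
            print([r.data() for r in session.run(generated)])
    driver.close()
```
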
19 pages, 3351 KB  
Article
A Multi-Point Preliminary Design Method for Centrifugal Compressor Stages of Fuel Cell-Based Propulsion Systems
by Alessandro Cappiello, Viviane Ciais and Matteo Pini
Int. J. Turbomach. Propuls. Power 2025, 10(4), 39; https://doi.org/10.3390/ijtpp10040039 - 3 Nov 2025
Viewed by 809
Abstract
The successful implementation of an airborne propulsion system based on hydrogen-powered fuel cell technology highly depends on the development of an efficient, lightweight and compact air supply compressor. Meeting these requirements by designing the compressor using conventional single-point preliminary design methods can be challenging, due to the very wide range of corrected mass flow rate and pressure ratio values that the air supply compressor must be able to accommodate. This article presents a multi-point design methodology for the preliminary design of centrifugal compressors of air supply systems. The method is implemented in an in-house code, called TurboSim, and allows the user to perform single- and multi-objective constrained optimization of vaneless centrifugal compressors. Furthermore, an automatic design-point selection method is available. The accuracy of the compressor lumped-parameter model is validated against experimental data obtained on a high-pressure-ratio single-stage vaneless centrifugal compressor from the literature. Subsequently, the design methodology is applied to optimize the compressor of the air supply system of an actual fuel cell powertrain. The results, compared to those obtained with a more conventional single-point design method, show that the multi-point method provides compressor designs that feature superior performance and that better comply with the specified constraints at the target operating points. Full article
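The essence of the multi-point approach is optimizing a single geometry against several operating points simultaneously, subject to constraints at each point. The toy sketch below illustrates that structure with a made-up placeholder performance model and fictitious design variables; it is not TurboSim and not the paper's compressor model.

```python
"""Toy multi-point design optimization (illustrative only; not the TurboSim model).

The 'performance' function is an invented placeholder for a compressor
lumped-parameter model; design variables and operating points are fictitious.
"""
import numpy as np
from scipy.optimize import minimize

# Hypothetical operating points: (corrected mass flow [kg/s], target pressure ratio, weight)
OPERATING_POINTS = [(0.10, 2.8, 0.6), (0.04, 1.6, 0.4)]

def performance(x, mflow):
    # Placeholder lumped model mapping design variables to (efficiency, pressure ratio).
    # x = [impeller tip radius (m), blade exit angle (deg)]; the algebra is invented.
    r_tip, beta2 = x
    pr = 1.0 + 45.0 * r_tip
    eta = 0.85 - 4.0 * (mflow - 2.0 * r_tip) ** 2 - 1e-4 * (beta2 + 30.0) ** 2
    return eta, pr

def multi_point_loss(x):
    # Weighted sum of efficiency shortfalls over all design points.
    return sum(w * (1.0 - performance(x, m)[0]) for m, pr, w in OPERATING_POINTS)

def pressure_ratio_margin(x):
    # Constraint: achieved pressure ratio must reach the target at every point (>= 0 when met).
    return min(performance(x, m)[1] - pr for m, pr, _ in OPERATING_POINTS)

result = minimize(multi_point_loss, x0=np.array([0.04, -30.0]),
                  bounds=[(0.02, 0.08), (-60.0, 0.0)],
                  constraints=[{"type": "ineq", "fun": pressure_ratio_margin}],
                  method="SLSQP")
print("design variables:", result.x, "weighted loss:", result.fun)
```
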
12 pages, 211 KB  
Article
A Comparative Study of Large Language Models in Programming Education: Accuracy, Efficiency, and Feedback in Student Assignment Grading
by Andrija Bernik, Danijel Radošević and Andrej Čep
Appl. Sci. 2025, 15(18), 10055; https://doi.org/10.3390/app151810055 - 15 Sep 2025
Viewed by 2220
Abstract
Programming education traditionally requires extensive manual assessment of student assignments, which is both time-consuming and resource-intensive for instructors. Recent advances in large language models (LLMs) open opportunities for automating this process and providing timely feedback. This paper investigates the application of artificial intelligence (AI) tools for preliminary assessment of undergraduate programming assignments. A multi-phase experimental study was conducted across three computer science courses: Introduction to Programming, Programming 2, and Advanced Programming Concepts. A total of 315 Python assignments were collected from the Moodle learning management system, with 100 randomly selected submissions analyzed in detail. AI evaluation was performed using ChatGPT-4 (GPT-4-turbo), Claude 3, and Gemini 1.5 Pro models, employing structured prompts aligned with a predefined rubric that assessed functionality, code structure, documentation, and efficiency. Quantitative results demonstrate high correlation between AI-generated scores and instructor evaluations, with ChatGPT-4 achieving the highest consistency (Pearson coefficient 0.91) and the lowest average absolute deviation (0.68 points). Qualitative analysis highlights AI’s ability to provide structured, actionable feedback, though variability across models was observed. The study identifies benefits such as faster evaluation and enhanced feedback quality, alongside challenges including model limitations, potential biases, and the need for human oversight. Recommendations emphasize hybrid evaluation approaches combining AI automation with instructor supervision, ethical guidelines, and integration of AI tools into learning management systems. The findings indicate that AI-assisted grading can improve efficiency and pedagogical outcomes while maintaining academic integrity. Full article
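The reported agreement statistics (Pearson correlation and mean absolute deviation between AI and instructor scores) reduce to a few lines of arithmetic; the sketch below uses invented scores purely to show the computation.

```python
"""Sketch of the agreement metrics used to compare AI and instructor grading.

The score lists are invented; only the computation mirrors the abstract's metrics.
"""
import numpy as np
from scipy.stats import pearsonr

instructor = np.array([8.0, 6.5, 9.0, 7.0, 5.5, 10.0, 7.5, 6.0])
ai_model   = np.array([8.5, 6.0, 9.0, 7.5, 5.0,  9.5, 8.0, 6.5])

r, p_value = pearsonr(instructor, ai_model)    # linear agreement between graders
mad = np.mean(np.abs(instructor - ai_model))   # average absolute deviation in points

print(f"Pearson r = {r:.2f} (p = {p_value:.3f}), mean absolute deviation = {mad:.2f} points")
```
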
17 pages, 1583 KB  
Article
Comparative Analysis of AI Models for Python Code Generation: A HumanEval Benchmark Study
by Ali Bayram, Gonca Gokce Menekse Dalveren and Mohammad Derawi
Appl. Sci. 2025, 15(18), 9907; https://doi.org/10.3390/app15189907 - 10 Sep 2025
Viewed by 6258
Abstract
This study conducts a comprehensive comparative analysis of six contemporary artificial intelligence models for Python code generation using the HumanEval benchmark. The evaluated models include GPT-3.5 Turbo, GPT-4 Omni, Claude 3.5 Sonnet, Claude 3.7 Sonnet, Claude Sonnet 4, and Claude Opus 4. A total of 164 Python programming problems were utilized to assess model performance through a multi-faceted methodology incorporating automated functional correctness evaluation via the Pass@1 metric, cyclomatic complexity analysis, maintainability index calculations, and lines-of-code assessment. The results indicate that Claude Sonnet 4 achieved the highest performance with a success rate of 95.1%, followed closely by Claude Opus 4 at 94.5%. Across all metrics, Anthropic's Claude models consistently outperformed OpenAI's GPT models by margins exceeding 20%. Statistical analysis further confirmed the existence of significant differences between the model families (p < 0.001). The Claude models were observed to generate more sophisticated and maintainable solutions with superior syntactic accuracy. In contrast, the GPT models tended to adopt simpler strategies but exhibited notable limitations in terms of reliability. These findings offer evidence-based insights to guide the selection of AI-powered coding assistants in professional software development contexts. Full article
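Pass@1 on HumanEval is conventionally computed with the unbiased pass@k estimator; with one sample per problem it is simply the fraction of the 164 problems whose generated solution passes all unit tests. A minimal sketch, with per-problem counts invented so that the average reproduces the 95.1% figure quoted above:

```python
"""Unbiased pass@k estimator (standard HumanEval formulation); counts are invented."""
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    # n = samples generated per problem, c = samples that pass all tests.
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# One sample per problem (k = 1): pass@1 is the per-problem pass indicator,
# averaged over the benchmark's 164 problems.
results = [pass_at_k(n=1, c=1, k=1)] * 156 + [pass_at_k(n=1, c=0, k=1)] * 8
print(f"pass@1 = {sum(results) / len(results):.3f}")  # 0.951 with these invented counts
```
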
67 pages, 2605 KB  
Article
Polar Codes for 6G and Beyond Wireless Quantum Optical Communications
by Peter Jung, Kushtrim Dini, Faris Abdel Rehim and Hamza Almujahed
Electronics 2025, 14(17), 3563; https://doi.org/10.3390/electronics14173563 - 8 Sep 2025
Viewed by 969
Abstract
Wireless communication applications above 300 GHz need careful analog electronics design that takes into account the frequency-dependent nature of ohmic resistance at these frequencies. The cumbersome development of such electronics brings quantum optical communication solutions for the sixth generation (6G) THz band located between 300 GHz and 10 THz into focus. In this manuscript, the authors propose to replace the classical radio-frequency-based inner physical-layer transceiver blocks used in channel-coded short-range wireless communication systems with wireless quantum optical communication concepts. In addition to discussing the resulting generic concept of wireless quantum optical communications and illustrating optimum quantum data detection schemes, novel reduced-state quantum data detection and novel Kohonen-map-based quantum data detection will be addressed. All the considered quantum data detection schemes provide the soft outputs required for the lowest possible block error ratio (BLER) at the output of the channel decoder. Furthermore, after illustrating the basics of polar codes, a novel polar code design approach is presented for the first time, determining the polar sequence by appropriately combining already available polar sequences tailored for low BLER. In addition, turbo equalization for wireless quantum optical communications using polar codes will be presented, for the first time explicitly stating the generation of soft information associated with the code bits and introducing a novel scheme for the computation of extrinsic soft outputs to be used in the turbo equalization iterations. New simulation results emphasize the viability of the theoretical concepts. Full article
(This article belongs to the Special Issue Channel Coding and Measurements for 6G Wireless Communications)
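For readers unfamiliar with the polar code basics the abstract builds on, non-systematic encoding amounts to placing information bits on the most reliable positions of a polar sequence and applying the n-fold Kronecker power of the 2x2 kernel. A minimal encoder sketch follows; the 8-entry reliability order used here is a textbook example, not the combined polar sequence proposed in the article.

```python
"""Minimal non-systematic polar encoder over GF(2) (illustrative).

The 8-bit reliability order below is a textbook example, not the combined
polar sequence proposed in the article.
"""
import numpy as np

def polar_transform(u: np.ndarray) -> np.ndarray:
    # Butterfly implementation of u * F^{(x)n} over GF(2), with N a power of two.
    x = u.copy()
    n = x.size
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            x[i:i + step] ^= x[i + step:i + 2 * step]
        step *= 2
    return x

def encode(info_bits: np.ndarray, frozen_mask: np.ndarray) -> np.ndarray:
    # Frozen positions carry zeros; information bits fill the reliable positions.
    u = np.zeros(frozen_mask.size, dtype=np.uint8)
    u[~frozen_mask] = info_bits
    return polar_transform(u)

# N = 8, K = 4: freeze the four least reliable indices of an example sequence.
reliability = np.array([0, 1, 2, 4, 3, 5, 6, 7])  # least to most reliable (example order)
frozen = np.zeros(8, dtype=bool)
frozen[reliability[:4]] = True
print(encode(np.array([1, 0, 1, 1], dtype=np.uint8), frozen))
```
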
18 pages, 1609 KB  
Article
Using Large Language Models to Extract Structured Data from Health Coaching Dialogues: A Comparative Study of Code Generation Versus Direct Information Extraction
by Sai Sangameswara Aadithya Kanduri, Apoorv Prasad and Susan McRoy
BioMedInformatics 2025, 5(3), 50; https://doi.org/10.3390/biomedinformatics5030050 - 4 Sep 2025
Viewed by 3601
Abstract
Background: Virtual coaching can help people adopt new healthful behaviors by encouraging them to set specific goals and helping them review their progress. One challenge in creating such systems is analyzing clients’ statements about their activities. Limiting people to selecting among predefined answers detracts from the naturalness of conversations and user engagement. Large Language Models (LLMs) offer the promise of covering a wide range of expressions. However, using an LLM for simple entity extraction would not necessarily perform better than functions coded in a programming language, while creating higher long-term costs. Methods: This study uses a real data set of annotated human coaching dialogs to develop LLM-based models for two training scenarios: one that generates pattern-matching functions and another that performs direct extraction. We use models of different sizes and complexity, including Meta-Llama, Gemma, and ChatGPT, and calculate their speed and accuracy. Results: LLM-generated pattern-matching functions took an average of 10 milliseconds (ms) per item, compared to 900 ms (ChatGPT 3.5 Turbo) to 5 s (Llama 2 70B). The accuracy for pattern matching was 99% on real data, while LLM accuracy ranged from 90% (Llama 2 70B) to 100% (ChatGPT 3.5 Turbo) on both real and synthetically generated examples created for fine-tuning. Conclusions: These findings suggest promising directions for future research that combine both methods (reserving the LLM for cases that cannot be matched directly) or that use LLMs to generate synthetic training data with greater expressive variety, which can be used to improve the coverage of either generated code or fine-tuned models. Full article
(This article belongs to the Section Methods in Biomedical Informatics)
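The speed gap described above, generated pattern-matching functions versus a per-item LLM call, is easy to picture with a small timing harness. The regex, sample utterances, and timing loop below are illustrative assumptions, not the study's generated code.

```python
"""Illustrative comparison of regex extraction vs. a per-item LLM call.

The regular expression and sample utterances are invented for demonstration;
timing an actual LLM call would require an API key and network access.
"""
import re
import time

# A pattern-matching function of the kind an LLM might generate for step-count goals.
STEPS_PATTERN = re.compile(r"\b(\d{1,3}(?:,\d{3})*|\d+)\s*steps\b", re.IGNORECASE)

def extract_step_goal(utterance: str) -> str | None:
    match = STEPS_PATTERN.search(utterance)
    return match.group(1).replace(",", "") if match else None

utterances = [
    "I walked about 7,500 steps yesterday after dinner.",
    "Let's aim for 10000 steps a day next week.",
    "I mostly rested, no walking to report.",
] * 1000

start = time.perf_counter()
results = [extract_step_goal(u) for u in utterances]
elapsed_ms = (time.perf_counter() - start) * 1000 / len(utterances)
print(f"regex extraction: {elapsed_ms:.4f} ms per item; sample output: {results[:3]}")
# A direct-extraction baseline would instead send each utterance to a chat model
# and parse its reply, at a cost of hundreds of milliseconds to seconds per item.
```
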
17 pages, 1976 KB  
Article
A Novel Reconfigurable Vector-Processed Interleaving Algorithm for a DVB-RCS2 Turbo Encoder
by Moshe Bensimon, Ohad Boxerman, Yehuda Ben-Shimol, Erez Manor and Shlomo Greenberg
Electronics 2025, 14(13), 2600; https://doi.org/10.3390/electronics14132600 - 27 Jun 2025
Viewed by 746
Abstract
Turbo Codes (TCs) are a family of convolutional codes that provide powerful Forward Error Correction (FEC) and operate near the Shannon limit for channel capacity. In the context of modern communication systems, such as those conforming to the DVB-RCS2 standard, Turbo Encoders (TEs) play a crucial role in ensuring robust data transmission over noisy satellite links. A key computational bottleneck in the Turbo Encoder is the non-uniform interleaving stage, where input bits are rearranged according to a dynamically generated permutation pattern. This stage often requires the intermediate storage of data, resulting in increased latency and reduced throughput, especially in embedded or real-time systems. This paper introduces a vector processing algorithm designed to accelerate the interleaving stage of the Turbo Encoder. The proposed algorithm is tailored for vector DSP architectures (e.g., CEVA-XC4500), and leverages the hardware’s SIMD capabilities to perform the permutation operation in a structured, phase-wise manner. Our method adopts a modular Load–Execute–Store design, facilitating efficient memory alignment, deterministic latency, and hardware portability. We present a detailed breakdown of the algorithm’s implementation, compare it with a conventional scalar (serial) model, and analyze its compatibility with the DVB-RCS2 specification. Experimental results demonstrate significant performance improvements, achieving a speed-up factor of up to 3.4× in total cycles, 4.8× in write operations, and 7.3× in read operations, relative to the baseline scalar implementation. The findings highlight the effectiveness of vectorized permutation in FEC pipelines and its relevance for high-throughput, low-power communication systems. Full article
(This article belongs to the Special Issue Evolutionary Hardware-Software Codesign Based on FPGA)
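At its core, the interleaving stage applies a data-dependent permutation to the information bits ahead of the second constituent encoder, and the vectorized version replaces the per-bit loop with a single gather. The NumPy sketch below stands in for that SIMD gather; the simple (P*i + Q) mod N permutation rule and the block length are placeholders, not the DVB-RCS2 interleaver definition.

```python
"""Vectorized interleaving sketch (NumPy gather as a stand-in for a SIMD DSP gather).

The permutation rule here is a simple (P*i + Q) mod N placeholder, not the
actual DVB-RCS2 interleaver; P, Q, and the block length are arbitrary examples.
"""
import numpy as np

def interleave(bits: np.ndarray, P: int = 19, Q: int = 3) -> np.ndarray:
    # Load: generate the whole permutation vector at once (no per-bit loop).
    N = bits.size
    perm = (P * np.arange(N) + Q) % N          # placeholder permutation rule
    # Execute + Store: a single vector gather reorders all bits in one step.
    return bits[perm]

def scalar_interleave(bits: np.ndarray, P: int = 19, Q: int = 3) -> np.ndarray:
    # Baseline serial model: one read-modify-write per bit, as in a scalar code path.
    N = bits.size
    out = np.empty_like(bits)
    for i in range(N):
        out[i] = bits[(P * i + Q) % N]
    return out

rng = np.random.default_rng(0)
block = rng.integers(0, 2, size=1504, dtype=np.uint8)  # example block length
assert np.array_equal(interleave(block), scalar_interleave(block))
print("vector and scalar interleavers agree on", block.size, "bits")
```
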
15 pages, 3669 KB  
Article
Turbo Equalization Based on a Virtual Decoder for Underwater Acoustic Communication
by Cong Peng, Lei Wang, Lerong Hong, Zehua Lin and An Luo
J. Mar. Sci. Eng. 2025, 13(6), 1099; https://doi.org/10.3390/jmse13061099 - 30 May 2025
Viewed by 853
Abstract
By iteratively transferring extrinsic information between the equalizer and the decoder, turbo equalization achieves performance close to the channel capacity. Conventional turbo equalization (CTE) relies on channel coding, whereas the transfer of extrinsic information is a problem in an uncoded system, and turbo equalization without channel coding (TECC) remains unexplored. Therefore, this paper introduces a TECC framework with a virtual decoder constructed by bidirectional processing. The main innovation is that the virtual decoder enables the transfer of extrinsic information. Under this new framework, we implement it with a minimum mean square error decision feedback equalizer (MMSE-DFE) and evaluate its performance across stationary channels and multipath fading channels. Simulation results demonstrate significant communication performance enhancement after three to four iterations, surpassing both conventional bidirectional and unidirectional equalization. In addition, the proposed TECC is verified through underwater acoustic communication in a sea trial. The results also demonstrate that the TECC achieves better bit error performance. Full article
31 pages, 9910 KB  
Article
Automated Identification and Representation of System Requirements Based on Large Language Models and Knowledge Graphs
by Lei Wang, Ming-Chao Wang, Yuan-Rong Zhang, Jian Ma, Hong-Yu Shao and Zhi-Xing Chang
Appl. Sci. 2025, 15(7), 3502; https://doi.org/10.3390/app15073502 - 23 Mar 2025
Cited by 4 | Viewed by 2525
Abstract
In the product design and manufacturing process, the effective management and representation of system requirements (SRs) are crucial for ensuring product quality and consistency. However, current methods are hindered by document ambiguity, weak requirement interdependencies, and limited semantic expressiveness in model-based systems engineering. To address these challenges, this paper proposes a prompt-driven integrated framework that synergizes large language models (LLMs) and knowledge graphs (KGs) to automate the visualization of SR text and structured knowledge extraction. Specifically, this paper introduces a template for information extraction tailored to arbitrary requirement documents, designed around five SysML-defined SR categories: functional requirements, interface requirements, performance requirements, physical requirements, and design constraints. By defining structured elements for each category and leveraging the GPT-4 model to extract key information from unstructured texts, the system can effectively extract and present the structured requirement information. Furthermore, the system constructs a knowledge graph to represent system requirements, visually illustrating the interdependencies and constraints between them. A case study applying this approach to Chapters 18–22 of the ‘Code for Design of Metro’ demonstrates the effectiveness of the proposed method in automating requirement representation, enhancing requirement traceability, and improving management. Moreover, a comparison of information extraction accuracy between GPT-4, GPT-3.5-turbo, BERT, and RoBERTa using the same dataset reveals that GPT-4 achieves an overall extraction accuracy of 84.76% compared to 79.05% for GPT-3.5-turbo and 59.05% for both BERT and RoBERTa. This proves the effectiveness of the proposed method in information extraction and provides a new technical pathway for intelligent requirement management. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
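The extraction step described, assigning each requirement to one of the five SysML categories and pulling out its structured elements, can be prompted fairly directly. The sketch below is a generic illustration of such a prompt and its parsed JSON reply; the prompt wording, JSON keys, and model name are assumptions rather than the paper's template.

```python
"""Generic sketch of LLM-based requirement extraction into SysML categories.

The prompt wording, JSON keys, and model name are illustrative assumptions,
not the template proposed in the article.
"""
import json
from openai import OpenAI

CATEGORIES = ["functional", "interface", "performance", "physical", "design constraint"]

def build_prompt(requirement: str) -> str:
    return (
        "Classify the following system requirement into exactly one of these categories: "
        + ", ".join(CATEGORIES)
        + ". Then extract the subject, the constrained quantity (if any), and the condition. "
          'Reply with JSON only, using the keys "category", "subject", "quantity", and "condition".'
          "\n\nRequirement: " + requirement
    )

def extract(requirement: str, client: OpenAI, model: str = "gpt-4") -> dict:
    # One call per requirement sentence; the reply is parsed as a JSON record.
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_prompt(requirement)}],
    )
    return json.loads(resp.choices[0].message.content)

if __name__ == "__main__":
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    record = extract("The train doors shall open fully within 3 seconds of the open command.", client)
    print(record)  # e.g. {"category": "performance", "subject": "train doors", ...}
```
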
12 pages, 207 KB  
Article
A Large Language Model-Based Approach for Coding Information from Free-Text Reported in Fall Risk Surveillance Systems: New Opportunities for In-Hospital Risk Management
by Davide Rango, Giulia Lorenzoni, Henrique Salmazo Da Silva, Vicente Paulo Alves and Dario Gregori
J. Clin. Med. 2025, 14(5), 1580; https://doi.org/10.3390/jcm14051580 - 26 Feb 2025
Cited by 1 | Viewed by 1003
Abstract
Background/Objectives: Falls are the most common adverse in-hospital event, resulting in a considerable social and economic burden on individuals, their families, and the healthcare system. This study aims to develop and implement an automatic coding system using large language models (LLMs) to extract and categorize free-text information (including the location of the fall and any resulting injury) from in-hospital fall records. Methods: The study used the narrative descriptions of falls reported through the Incident Reporting system to the Risk Management Service of an Italian Local Health Authority (name not disclosed as per research agreement). The OpenAI application programming interface (API) was used to access the generative pre-trained transformer (GPT) models and extract data from the narrative descriptions of the falls; the GPT-4-turbo model was used for the classification task. Two independent reviewers manually coded the information, providing the gold standard for the classification task. Sensitivity, specificity, and accuracy were calculated to evaluate the performance of the task. Results: The analysis included 187 fall records with free-text event descriptions detailing the location of the fall and 93 records providing information about the presence or absence of an injury. GPT-4-turbo showed excellent performance, with specificity, sensitivity, and accuracy values of at least 0.913 for detecting the location and 0.953 for detecting the injury. Conclusions: The GPT models effectively extracted and categorized the information, even though the text was not optimized for GPT-based analysis. This shows the potential of LLMs for use in clinical risk management research. Full article
(This article belongs to the Section Epidemiology & Public Health)
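Sensitivity, specificity, and accuracy against a two-reviewer gold standard come straight from confusion-matrix counts. The sketch below shows the computation on invented labels for the injury yes/no task; only the metric definitions mirror the abstract.

```python
"""Confusion-matrix metrics of the kind reported for the injury classification task.

The label vectors are invented; only the metric definitions follow the abstract.
"""
gold = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1]   # reviewers' manual coding (1 = injury)
pred = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1]   # model output for the same records

tp = sum(g == 1 and p == 1 for g, p in zip(gold, pred))
tn = sum(g == 0 and p == 0 for g, p in zip(gold, pred))
fp = sum(g == 0 and p == 1 for g, p in zip(gold, pred))
fn = sum(g == 1 and p == 0 for g, p in zip(gold, pred))

sensitivity = tp / (tp + fn)   # recall on records with an injury
specificity = tn / (tn + fp)   # recall on records without an injury
accuracy = (tp + tn) / len(gold)

print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} accuracy={accuracy:.3f}")
```
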
12 pages, 1807 KB  
Article
Fluorescent Clade IIb Lineage B.1 Mpox Viruses for Antiviral Screening
by Francisco Javier Alvarez-de Miranda, Rocío Martín, Antonio Alcamí and Bruno Hernáez
Viruses 2025, 17(2), 253; https://doi.org/10.3390/v17020253 - 13 Feb 2025
Cited by 2 | Viewed by 1673
Abstract
The ongoing global outbreak of mpox caused by clade IIb viruses has led to more than 100,000 confirmed cases around the world, highlighting the urgent need for antiviral research to combat current and future mpox outbreaks. Reporter viruses expressing fluorescent proteins to monitor viral replication and virus spreading in cell culture provide a powerful tool for antiviral drug screening. In this work, we engineered two recombinant mpox clade IIb viruses by inserting, under the control of the vaccinia early/late promoter 7.5, the coding sequence of two different fluorescent proteins (EGFP and TurboFP635) in a previously unreported location within the viral genome. These recombinant viruses replicate in BSC-1 cells at rates similar to those of the parental virus. We show how these reporter mpox viruses allow the discrimination of infected cells by cell flow cytometry and facilitate the quantification of viral spread in cell culture. Finally, we validated these reporter viruses with two previously known inhibitors of poxvirus replication, cytosine arabinoside (AraC) and bisbenzimide. Full article
29 pages, 8379 KB  
Article
Vertex-Oriented Method for Polyhedral Reconstruction of 3D Buildings Using OpenStreetMap
by Hanli Liu, Carlos J. Hellín, Abdelhamid Tayebi, Francisco Calles and Josefa Gómez
Sensors 2024, 24(24), 7992; https://doi.org/10.3390/s24247992 - 14 Dec 2024
Cited by 1 | Viewed by 1282
Abstract
This work presents the mathematical definition and programming considerations of an efficient geometric algorithm used to add roofs to polyhedral 3D building models obtained from OpenStreetMap. The algorithm covers numerous roof shapes, including some well-defined shapes that lack an explicit reconstruction theory. These shapes include gabled, hipped, pyramidal, skillion, half-hipped, gambrel, and mansard. The input data for the developed code consist of latitude and longitude coordinates defining the target area. Geospatial data necessary for the algorithm are obtained through a request to the overpass-turbo service. The findings show excellent performance for buildings with straightforward footprints but reveal limitations for buildings with intricate footprints; further refinement is needed in future work to address this limitation. Full article
(This article belongs to the Special Issue Advanced Intelligent Sensing for Building Monitoring)
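The geospatial input described, building footprints and roof tags for a latitude/longitude bounding box, can be fetched with a single Overpass query. A minimal sketch against the public Overpass API endpoint; the bounding box and the simplified tag handling are illustrative choices, not the paper's data pipeline.

```python
"""Minimal Overpass query for building footprints in a bounding box (illustrative).

The bounding box covers a small, arbitrarily chosen area; tag handling is simplified.
"""
from collections import Counter
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

def building_footprints(south: float, west: float, north: float, east: float) -> list[dict]:
    query = f"""
    [out:json][timeout:60];
    way["building"]({south},{west},{north},{east});
    out geom;
    """
    elements = requests.post(OVERPASS_URL, data={"data": query}, timeout=90).json()["elements"]
    footprints = []
    for way in elements:
        ring = [(pt["lat"], pt["lon"]) for pt in way.get("geometry", [])]
        roof_shape = way.get("tags", {}).get("roof:shape", "unspecified")  # OSM roof tag, if any
        footprints.append({"outline": ring, "roof_shape": roof_shape})
    return footprints

if __name__ == "__main__":
    buildings = building_footprints(40.4165, -3.7045, 40.4185, -3.7015)  # small example block
    shapes = Counter(b["roof_shape"] for b in buildings)
    print(len(buildings), "building footprints; roof shapes:", dict(shapes))
```
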
50 pages, 3145 KB  
Review
A History of Channel Coding in Aeronautical Mobile Telemetry and Deep-Space Telemetry
by Michael Rice
Entropy 2024, 26(8), 694; https://doi.org/10.3390/e26080694 - 16 Aug 2024
Cited by 2 | Viewed by 4507
Abstract
This paper presents a history of the development of channel codes in deep-space telemetry and aeronautical mobile telemetry. The history emphasizes “firsts” and other remarkable achievements. Because coding was used first in deep-space telemetry, the history begins with the codes used for Mariner and Pioneer. The history continues with the international standard for concatenated coding developed for the Voyager program and the remarkable role channel coding played in rescuing the nearly-doomed Galileo mission. The history culminates with the adoption of turbo codes and LDPC codes and the programs that relied on them. The history of coding in aeronautical mobile telemetry is characterized by a number of “near misses” as channel codes were explored, sometimes tested, and rarely adopted. Aeronautical mobile telemetry is subject to bandwidth constraints that make the use of low-rate codes, with their accompanying bandwidth expansion, an unattractive option. The emergence of a family of high-rate LDPC codes coupled with a bandwidth-efficient modulation has nudged the aeronautical mobile telemetry community to adopt these codes in its standards. Full article
(This article belongs to the Special Issue Coding for Aeronautical Telemetry)
10 pages, 1405 KB  
Article
Performance Evaluation of Open Channel Buhlmann Fecal Calprotectin Turbo Assay on Abbott Alinity C Analyzer
by Kavithalakshmi Sataranatarajan, Shishir Adhikari, Ngoc Nguyen, Madhusudhanan Narasimhan, Jyoti Balani and Alagarraju Muthukumar
Diagnostics 2024, 14(16), 1744; https://doi.org/10.3390/diagnostics14161744 - 11 Aug 2024
Viewed by 2672
Abstract
Inflammatory bowel disease (IBD) is characterized by chronic inflammation of the gastrointestinal (GI) tract. Fecal calprotectin (fCAL) is a noninvasive laboratory test used in the diagnosis and monitoring of IBDs such as Crohn’s disease and ulcerative colitis. The fCAL send-out test that our facility has been offering so far uses an ELISA-based method. In the current study, we sought to validate the performance of a Buhlmann fCAL turbo assay in an automated Abbott Alinity C analyzer (AFCAL) in our core laboratory. Five-day imprecision studies showed good performance for both within-run (5.3%) and between-day (2.5%) measurements. The reportable range was verified as 30–20,000 µg/g. Deming regression and Bland–Altman analysis indicated a strong correlation of r = 0.99 with a low, acceptable bias of 1.8% for AFCAL relative to the predicate Buhlmann fCAL ELISA results. AFCAL’s clinical performance was determined retrospectively in 62 patients with ICD codes for IBD. Overall, the implementation of AFCAL in our routine clinical testing has improved our turnaround time, reduced the cost per test, and significantly increased our clinician satisfaction. Full article
(This article belongs to the Special Issue Diagnosis and Management of Gastrointestinal Inflammation)
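The method-comparison statistics cited above can be reproduced in a few lines; the sketch below computes a simple correlation and a Bland-Altman percent bias on invented paired results (the abstract's Deming regression is not reimplemented here).

```python
"""Sketch of method-comparison statistics (correlation and Bland-Altman % bias).

Paired fCAL results are invented; only the calculations illustrate the analysis,
and Pearson correlation stands in for the abstract's Deming regression.
"""
import numpy as np

elisa   = np.array([45.0, 120.0, 310.0, 890.0, 1500.0, 60.0, 240.0, 510.0])  # predicate method (ug/g)
alinity = np.array([46.0, 118.0, 320.0, 905.0, 1530.0, 62.0, 236.0, 525.0])  # new automated assay (ug/g)

r = np.corrcoef(elisa, alinity)[0, 1]
percent_diff = 100.0 * (alinity - elisa) / ((alinity + elisa) / 2.0)
bias = percent_diff.mean()
limits = (bias - 1.96 * percent_diff.std(ddof=1), bias + 1.96 * percent_diff.std(ddof=1))

print(f"r = {r:.3f}; mean % bias = {bias:.1f}%; 95% limits of agreement = "
      f"({limits[0]:.1f}%, {limits[1]:.1f}%)")
```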