Big Data System for Global Health

A special issue of Big Data and Cognitive Computing (ISSN 2504-2289).

Deadline for manuscript submissions: 30 June 2024 | Viewed by 4959

Special Issue Editors


Dr. Ahmed Abdeen Hamed
Guest Editor
Sano Centre for Computational Medicine, Krakow, Poland
Interests: artificial intelligence; data science; machine learning; drug research; clinical research

Dr. Maciej Malawski
Guest Editor
Sano Centre for Computational Medicine, Krakow, Poland
Interests: parallel and distributed computing; high-performance computing (HPC); cloud and big data technologies; scientific workflows; biomedical applications

Special Issue Information

Dear Colleagues,

The coronavirus pandemic has taught us the significance of Big Data on a global scale. Many of the successes achieved in vaccine development, testing, treatment, and contact tracing were centered around Big Data. Clearly, the pandemic would not have been manageable without Big Data and the computational methods built around it. This Special Issue aims to give an overview of the most recent advances in how Big Data can contribute to solving complex global health problems such as the coronavirus pandemic. Specifically, it presents selected Big Data contributions that aid in tracking infectious disease outbreaks, discovering treatments, advancing the understanding of non-communicable diseases (heart disease, stroke, cancer, diabetes), and, lastly, assessing how climate change will impact human health globally. Potential topics include, but are not limited to:

  • Advances in the use of social media data for tracking infectious diseases
  • Development of human diseases atlases
  • Development of the human brain atlas
  • Biomedical applications centered around knowledge graphs (disease treatment)
  • Using scientific workflows in modeling climate change and its impact on human health
  • Agent-based modeling for animal health, food sourcing, and supply
  • Literature mining applications for repurposing drugs for new uses against various diseases (cancer, Alzheimer's)
  • Advances in Big Data-driven drug development

Dr. Ahmed Abdeen Hamed
Dr. Maciej Malawski
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Big Data and Cognitive Computing is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • outbreak tracking
  • social media data
  • literature mining
  • human disease
  • knowledge graphs
  • drug development
  • drug repurposing
  • scientific workflows
  • agent-based modeling

Published Papers (2 papers)


Research

15 pages, 4233 KiB  
Article
Toward Morphologic Atlasing of the Human Whole Brain at the Nanoscale
by Wieslaw L. Nowinski
Big Data Cogn. Comput. 2023, 7(4), 179; https://doi.org/10.3390/bdcc7040179 - 01 Dec 2023
Viewed by 1614
Abstract
Although no dataset at the nanoscale for the entire human brain has yet been acquired and neither a nanoscale human whole brain atlas has been constructed, tremendous progress in neuroimaging and high-performance computing makes them feasible in the non-distant future. To construct the human whole brain nanoscale atlas, there are several challenges, and here, we address two, i.e., the morphology modeling of the brain at the nanoscale and designing of a nanoscale brain atlas. A new nanoscale neuronal format is introduced to describe data necessary and sufficient to model the entire human brain at the nanoscale, enabling calculations of the synaptome and connectome. The design of the nanoscale brain atlas covers design principles, content, architecture, navigation, functionality, and user interface. Three novel design principles are introduced supporting navigation, exploration, and calculations, namely, a gross neuroanatomy-guided navigation of micro/nanoscale neuroanatomy; a movable and zoomable sampling volume of interest for navigation and exploration; and a nanoscale data processing in a parallel-pipeline mode exploiting parallelism resulting from the decomposition of gross neuroanatomy parcellated into structures and regions as well as nano neuroanatomy decomposed into neurons and synapses, enabling the distributed construction and continual enhancement of the nanoscale atlas. Numerous applications of this atlas can be contemplated ranging from proofreading and continual multi-site extension to exploration, morphometric and network-related analyses, and knowledge discovery. To my best knowledge, this is the first proposed neuronal morphology nanoscale model and the first attempt to design a human whole brain atlas at the nanoscale.
(This article belongs to the Special Issue Big Data System for Global Health)
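To make the parallel-pipeline idea above concrete, here is a minimal, hypothetical Python sketch (not the author's implementation): regions of a gross-neuroanatomy parcellation are processed in parallel, and each region's neurons pass through a small per-region pipeline. The region names, neuron identifiers, and the detect_synapses stage are invented placeholders.

    # Hedged sketch: parallel processing over regions, with a toy pipeline within each region.
    from multiprocessing import Pool

    # Hypothetical parcellation: region name -> neuron identifiers (toy data).
    PARCELLATION = {
        "hippocampus": ["n001", "n002", "n003"],
        "cerebellum": ["n004", "n005"],
        "thalamus": ["n006"],
    }

    def detect_synapses(neuron_id):
        """Stage 1 (placeholder): return a synapse count for one neuron."""
        return len(neuron_id)  # stand-in for real nanoscale image analysis

    def process_region(item):
        """Per-region pipeline: synapse detection per neuron, then aggregation."""
        region, neurons = item
        counts = [detect_synapses(n) for n in neurons]  # stage 1
        return region, sum(counts)                      # stage 2: aggregate

    if __name__ == "__main__":
        with Pool() as pool:  # regions are independent, so they can run in parallel
            for region, total in pool.map(process_region, PARCELLATION.items()):
                print(f"{region}: {total} synapses (toy numbers)")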

24 pages, 2070 KiB  
Article
An Automatic Generation of Heterogeneous Knowledge Graph for Global Disease Support: A Demonstration of a Cancer Use Case
by Noura Maghawry, Samy Ghoniemy, Eman Shaaban and Karim Emara
Big Data Cogn. Comput. 2023, 7(1), 21; https://doi.org/10.3390/bdcc7010021 - 24 Jan 2023
Cited by 6 | Viewed by 2479
Abstract
Semantic data integration provides the ability to interrelate and analyze information from multiple heterogeneous resources. With the growing complexity of medical ontologies and the big data generated from different resources, there is a need for integrating medical ontologies and finding relationships between distinct concepts from different ontologies where these concepts have logical medical relationships. Standardized Medical Ontologies are explicit specifications of shared conceptualization, which provide predefined medical vocabulary that serves as a stable conceptual interface to medical data sources. Intelligent Healthcare systems such as disease prediction systems require a reliable knowledge base that is based on Standardized medical ontologies. Knowledge graphs have emerged as a powerful dynamic representation of a knowledge base. In this paper, a framework is proposed for automatic knowledge graph generation integrating two medical standardized ontologies- Human Disease Ontology (DO), and Symptom Ontology (SYMP) using a medical online website and encyclopedia. The framework and methodologies adopted for automatically generating this knowledge graph fully integrated the two standardized ontologies. The graph is dynamic, scalable, easily reproducible, reliable, and practically efficient. A subgraph for cancer terms is also extracted and studied for modeling and representing cancer diseases, their symptoms, prevention, and risk factors.
(This article belongs to the Special Issue Big Data System for Global Health)
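As a rough illustration of the kind of heterogeneous knowledge graph described above, the following Python sketch (assuming the networkx library; it is not the authors' framework) links a disease node to symptom nodes with typed edges and extracts a disease-centered subgraph. The DOID/SYMP identifiers are illustrative placeholders.

    # Hedged sketch: a tiny disease-symptom knowledge graph in the spirit of DO + SYMP integration.
    import networkx as nx

    kg = nx.DiGraph()

    # Nodes carry a "source" attribute naming the ontology they would come from (toy IDs).
    kg.add_node("DOID:1612", label="breast cancer", source="DO")
    kg.add_node("SYMP:0000729", label="fatigue", source="SYMP")
    kg.add_node("SYMP:0000614", label="pain", source="SYMP")

    # Edges carry the medical relationship that connects concepts across ontologies.
    kg.add_edge("DOID:1612", "SYMP:0000729", relation="has_symptom")
    kg.add_edge("DOID:1612", "SYMP:0000614", relation="has_symptom")

    # Extract a disease-centered subgraph (e.g., for a cancer use case).
    cancer = kg.subgraph(["DOID:1612"] + list(kg.successors("DOID:1612")))
    for u, v, data in cancer.edges(data=True):
        print(u, data["relation"], v)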
