
Table of Contents

Data, Volume 3, Issue 3 (September 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view papers in PDF format, click the "PDF Full-text" link and open them with the free Adobe Reader.
Open Access Data Descriptor Dataset for SERS Plasmonic Array: Width, Spacing, and Thin Film Oxide Thickness Optimization
Data 2018, 3(3), 37; https://doi.org/10.3390/data3030037
Received: 22 August 2018 / Revised: 7 September 2018 / Accepted: 18 September 2018 / Published: 19 September 2018
PDF Full-text (1270 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Surface-enhanced Raman spectroscopy (SERS) extends the scope and power of Raman spectroscopy by taking advantage of plasmonic nanostructures, which can enhance Raman signal strength by several orders of magnitude and thereby enable the detection of analyte molecules. The dataset presented provides results of a computational study that used a finite element method (FEM) to model gold nanowires on a silicon dioxide substrate. The survey calculated the surface average of optical surface enhancement due to plasmonic effects across the entire model and varied several geometric parameters: the width of the nanowires, the spacing between the nanowires, and the thickness of the silicon dioxide substrate. From these data, enhancement values were found to exhibit a periodicity with respect to the thickness of the silicon dioxide. Additionally, strong plasmonic enhancement was found at smaller distances between nanowires, as expected; however, additional surface enhancement was observed at greater gap distances, which was not anticipated and is possibly due to resonance between the periodic dimensions and the frequency of the light. This data presentation will benefit future SERS studies by probing further into the computational and mathematical material presented previously.

Open Access Data Descriptor De Novo Transcriptome Assembly of Cucurbita pepo L. Leaf Tissue Infested by Aphis gossypii
Received: 24 July 2018 / Revised: 10 September 2018 / Accepted: 14 September 2018 / Published: 16 September 2018
PDF Full-text (1745 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Zucchini (Cucurbita pepo L.), extensively cultivated in temperate areas, belongs to the Cucurbitaceae family and is a species of great economic value. One major threat to zucchini cultivation is the damage caused by the cotton/melon aphid Aphis gossypii Glover (Homoptera: Aphididae). We performed RNA-sequencing on cultivar “San Pasquale” leaves, uninfested and infested by A. gossypii, collected at three time points (24, 48, and 96 h post infestation). We then combined all high-quality reads for de novo assembly of the transcriptome. This resource was primarily established as a reference for gene expression studies investigating the transcriptome reprogramming of zucchini plants following aphid infestation. In addition, the raw reads will be valuable for new experiments based on the latest bioinformatic tools and analytical approaches. The assembled transcripts will serve as an important reference for sequence-based studies and for primer design. Both datasets can be used to support and improve the prediction of protein-coding genes in the zucchini genome, which has recently been released into the public domain.

Open Access Article Evolutionary Path of Factors Influencing Life Satisfaction among Chinese Elderly: A Perspective of Data Visualization
Received: 31 July 2018 / Revised: 24 August 2018 / Accepted: 6 September 2018 / Published: 11 September 2018
PDF Full-text (3302 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
China has the largest aging population of any country and faces severe aging issues. As an important indicator of quality of life, the life satisfaction of elderly Chinese people has received increasing attention. Based on cross-sectional survey data collected from 2002 to 2014, provided as open datasets by the CLHLS (Chinese Longitudinal Healthy Longevity Survey) project, this study investigated how the influence and importance of factors associated with life satisfaction in the elderly have changed over these years. In view of previous research and the questionnaire data, demographic, physiological, psychological, economic, and social characteristics were selected as potential influencing factors of life satisfaction. Using the R programming language, we measured the influence of each associated factor with IV (information value) and determined the importance of each factor by fitting a random forest model for each year. Data visualization with the Tableau visualization tool was used to demonstrate the change in each factor over time. The results show that the influence of most factors has fluctuated. Since 2002, the most significant factors have consistently been self-rated health, self-evaluation of economic level, economic self-sufficiency, and bright personality.
(This article belongs to the Special Issue Curative Power of Medical Data)
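The IV indicator used in the abstract above can be sketched briefly. The following is a minimal, hypothetical illustration of how information value is commonly computed for one categorical factor against a binary outcome (satisfied vs. not satisfied); the factor levels, counts, and binning are invented and do not reproduce the study's actual data or R implementation.

```python
from math import log

def information_value(bins):
    """IV of one candidate factor for a binary outcome.

    `bins` maps each level of the factor to a pair of counts:
    (satisfied, unsatisfied) respondents at that level.
    """
    total_sat = sum(s for s, _ in bins.values())
    total_unsat = sum(u for _, u in bins.values())
    iv = 0.0
    for sat, unsat in bins.values():
        p_sat = sat / total_sat        # share of all satisfied in this bin
        p_unsat = unsat / total_unsat  # share of all unsatisfied in this bin
        # Contribution: share difference times the weight of evidence
        iv += (p_sat - p_unsat) * log(p_sat / p_unsat)
    return iv

# Hypothetical "self-rated health" factor with three levels
health = {"good": (500, 100), "fair": (300, 200), "poor": (100, 300)}
iv = information_value(health)  # larger IV = stronger influence on the outcome
```

A factor whose level distribution barely differs between satisfied and unsatisfied respondents contributes near-zero IV, which is how low-influence factors drop out of the ranking.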

Open Access Data Descriptor Spatial Distribution of Overhead Power Lines and Underground Cables in Germany in 2016
Received: 26 July 2018 / Revised: 5 September 2018 / Accepted: 6 September 2018 / Published: 10 September 2018
PDF Full-text (1048 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
In the context of transformative energy policy frameworks, such as the German “Energiewende”, state and federal agencies, regulators, and country planners need reliable data on the energy system infrastructure to make substantiated decisions about line routing and extension. The decision-making processes are accompanied by interdisciplinary research efforts in the areas of energy system planning and modelling, economic viability, and environmental impact, e.g., visual amenity or potential impacts on species. Validated data on the spatial distribution of the electricity transmission and distribution network can inform these efforts, particularly when combined with key technological parameters such as installed capacity, total size, and required space. Without these data, adequate assessments of potential impacts, e.g., the collision of birds with overhead lines, are not possible. However, no such comprehensive dataset exists for Germany. The dataset produced in this paper is based on open-source data from OpenStreetMap (OSM). It covers the spatial distribution of overhead power lines and underground cables in Germany, combined with the attributes needed for adequate environmental impact assessment of overhead lines, such as voltage levels, route lengths, and circuit lengths. Furthermore, the dataset is validated against publicly available statistics provided by the German Federal Grid Agency and official spatial data of the Federal Office of Cartography and Geodesy.

Open Access Data Descriptor Gridded Population Maps Informed by Different Built Settlement Products
Received: 16 July 2018 / Revised: 15 August 2018 / Accepted: 27 August 2018 / Published: 4 September 2018
PDF Full-text (1981 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
The spatial distribution of humans on the Earth is critical knowledge that informs many disciplines and is available in a spatially explicit manner through gridded population techniques. While many approaches exist to produce specialized gridded population maps, little has been done to explore how remotely sensed, built-area datasets might be used to dasymetrically constrain these estimates. This study evaluates the effectiveness of three different high-resolution built-area datasets for producing gridded population estimates through the dasymetric disaggregation of census counts in Haiti, Malawi, Madagascar, Nepal, Rwanda, and Thailand. The modeling techniques include a binary dasymetric redistribution, a random forest with a dasymetric component, and a hybrid of the two. The relative merits of these approaches and the data are discussed with regard to studying human populations and related spatially explicit phenomena. Results showed that the accuracy of the random forest and hybrid models was comparable in five of the six countries.
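Of the three techniques named above, the binary dasymetric redistribution is the simplest and can be sketched in a few lines. This is an illustrative toy version, assuming a census unit rasterized into grid cells with a 0/1 built-settlement mask; it is not the authors' actual pipeline.

```python
def binary_dasymetric(census_count, built_mask):
    """Evenly spread one census unit's count over its built cells.

    built_mask: one 0/1 flag per grid cell in the unit, where 1
    means the cell intersects a remotely sensed built area.
    Returns a per-cell population estimate.
    """
    n_built = sum(built_mask)
    if n_built == 0:
        # No built cells detected: fall back to a uniform spread
        return [census_count / len(built_mask)] * len(built_mask)
    return [census_count * flag / n_built for flag in built_mask]

# A 5-cell unit holding 1000 people; only cells 0, 2, and 3 are built
cells = binary_dasymetric(1000, [1, 0, 1, 1, 0])
```

The built-area product thus acts as a hard constraint: population is forced to zero outside detected settlement, which is exactly where the quality of the three built-area datasets matters.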

Open Access Article Synthesizing High-Utility Patterns from Different Data Sources
Received: 3 August 2018 / Revised: 25 August 2018 / Accepted: 30 August 2018 / Published: 3 September 2018
PDF Full-text (1056 KB) | HTML Full-text | XML Full-text
Abstract
In large organizations, it is often necessary to collect data from different geographic branches spread over different locations. Extensive amounts of data may be gathered at a centralized location in order to generate interesting patterns via mono-mining the amassed database. However, it is also feasible to mine the useful patterns at each data source and forward only these patterns, rather than the entire original database, to the centralized company. These patterns exist in huge numbers, and different sources calculate different utility values for each pattern. This paper proposes a weighted model for aggregating the high-utility patterns from different data sources. A pattern-selection procedure is also proposed to efficiently extract high-utility patterns in the weighted model by discarding low-utility patterns. The synthesizing model yields high-utility patterns, unlike association rule mining, in which frequent itemsets are generated by treating each item as having equal utility, an assumption that does not hold in real-life applications such as sales transactions. Extensive experiments performed on datasets with varied characteristics show that the proposed algorithm is effective for mining very sparse and sparse databases with a huge number of transactions. The proposed model also outperforms various state-of-the-art distributed mining models in terms of running time.
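The weighted aggregation described above can be illustrated schematically. In this sketch, each source's weight is taken to be proportional to the number of high-utility patterns it reports, and synthesized patterns below a utility threshold are discarded; both the weighting scheme and the threshold are illustrative stand-ins, not the paper's exact formulation.

```python
def synthesize_patterns(sources, min_utility):
    """Combine per-source pattern utilities into one weighted value.

    sources: list of dicts, one per branch, each mapping a pattern
    (here a frozenset of items) to the utility that branch computed.
    """
    total_reported = sum(len(patterns) for patterns in sources)
    combined = {}
    for patterns in sources:
        weight = len(patterns) / total_reported  # illustrative weight choice
        for pattern, utility in patterns.items():
            combined[pattern] = combined.get(pattern, 0.0) + weight * utility
    # Pattern selection: keep only high-utility synthesized patterns
    return {p: u for p, u in combined.items() if u >= min_utility}

branch_a = {frozenset({"milk", "bread"}): 120.0, frozenset({"ink"}): 15.0}
branch_b = {frozenset({"milk", "bread"}): 80.0}
result = synthesize_patterns([branch_a, branch_b], min_utility=50.0)
```

Note that only the compact per-branch pattern dictionaries cross the network, not the raw transaction databases, which is the communication saving the abstract describes.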

Open Access Article Nested Stochastic Valuation of Large Variable Annuity Portfolios: Monte Carlo Simulation and Synthetic Datasets
Received: 11 July 2018 / Revised: 24 August 2018 / Accepted: 30 August 2018 / Published: 1 September 2018
PDF Full-text (2361 KB) | HTML Full-text | XML Full-text
Abstract
Dynamic hedging has been adopted by many insurance companies to mitigate the financial risks associated with variable annuity guarantees. To simulate the performance of dynamic hedging for variable annuity products, insurance companies rely on nested stochastic projections, which are highly computationally intensive and often prohibitive for large variable annuity portfolios. Metamodeling techniques have recently been proposed to address these computational issues. However, it is difficult for researchers to obtain real datasets from insurance companies to test metamodeling techniques and publish the results in academic journals. In this paper, we create synthetic datasets that can be used to address the computational issues associated with the nested stochastic valuation of large variable annuity portfolios. The runtime needed to create these synthetic datasets would be about three years if a single CPU were used. These datasets are readily available to researchers and practitioners so that they can focus on testing metamodeling techniques.

Open Access Article Linking Synthetic Populations to Household Geolocations: A Demonstration in Namibia
Received: 18 June 2018 / Revised: 20 July 2018 / Accepted: 7 August 2018 / Published: 9 August 2018
PDF Full-text (3333 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Whether evaluating gridded population dataset estimates (e.g., WorldPop, LandScan) or household survey sample designs, a population census linked to residential locations is needed. Geolocated census microdata, however, are almost never available and are thus best simulated. In this paper, we simulate a close-to-reality population of individuals nested in households geolocated to realistic building locations. Using the R simPop package and ArcGIS, multiple realizations of a geolocated synthetic population are derived from the Namibia 2011 census 20% microdata sample, Namibia census enumeration area boundaries, the Namibia 2013 Demographic and Health Survey (DHS), and dozens of spatial covariates derived from publicly available datasets. Realistic household latitude-longitude coordinates are manually generated based on public satellite imagery. Simulated households are linked to latitude-longitude coordinates by identifying distinct household types with multivariate k-means analysis and modelling a probability surface for each household type using Random Forest machine learning methods. We simulate five realizations of a synthetic population in Namibia’s Oshikoto region, including demographic, socioeconomic, and outcome characteristics at the level of household, woman, and child. Variables in the synthetic population were compared with the 2011 census 20% sample and 2013 DHS data by primary sampling unit/enumeration area. We found that the synthetic population variable distributions matched the observed distributions and followed expected spatial patterns. We outline a novel process to simulate a close-to-reality microdata census geolocated to realistic building locations in a low- or middle-income country setting to support spatial demographic research and survey methodological development while avoiding disclosure risk to individuals.

Open Access Data Descriptor Microstructural and Metabolic Recovery of Anhedonic Rat Brains: An In Vivo Diffusion MRI and 1H-MRS Approach
Received: 29 June 2018 / Revised: 23 July 2018 / Accepted: 26 July 2018 / Published: 30 July 2018
PDF Full-text (184 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
This article presents longitudinal 1H-MR spectroscopy (1H-MRS) data from the ventral hippocampus and in vivo diffusion MRI (dMRI) data of the brain from control and anhedonic rats. The 1H-MRS and dMRI data were acquired using a 9.4 T preclinical imaging system. Before the MRI experiments, animals were exposed to unpredictable chronic mild stress for eight weeks and, on the basis of a sucrose consumption test, were identified as anhedonic or resilient. An age-matched group of animals unexposed to the unpredictable chronic mild stress paradigm served as the control. Data were acquired at the ages of 18, 20, and 25 weeks in the anhedonic group and at the ages of 18 and 22 weeks in the control group. These multimodal MRI data provide metabolic information on the ventral hippocampus and dMRI-based microstructural parameters of the brain.
Open Access Review Deep Learning in Data-Driven Pavement Image Analysis and Automated Distress Detection: A Review
Received: 15 June 2018 / Revised: 5 July 2018 / Accepted: 18 July 2018 / Published: 24 July 2018
PDF Full-text (1061 KB) | HTML Full-text | XML Full-text
Abstract
Deep learning, more specifically deep convolutional neural networks, is fast becoming a popular choice for computer vision-based automated pavement distress detection. While pavement image analysis has been extensively researched over the past three decades or so, recent ground-breaking achievements of deep learning algorithms in the areas of machine translation, speech recognition, and computer vision have sparked interest in applying deep learning to the automated detection of distresses in pavement images. This paper provides a narrative review of recently published studies in this field, highlighting the current achievements and challenges. A comparison of the deep learning software frameworks, network architectures, hyper-parameters employed by each study, and crack detection performance is provided, which is expected to offer a good foundation for driving further research on this important topic in the context of smart pavement or asset management systems. The review concludes with potential avenues for future research, especially in the application of deep learning to not only detect but also characterize the type, extent, and severity of distresses from 2D and 3D pavement images.
(This article belongs to the Special Issue Big Data Challenges in Smart Cities)

Open Access Article Data Quality: A Negotiator between Paper-Based and Digital Records in Pakistan’s TB Control Program
Received: 23 May 2018 / Revised: 16 July 2018 / Accepted: 18 July 2018 / Published: 19 July 2018
PDF Full-text (3749 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Background: The cornerstone of the public health function is to identify healthcare needs, influence policy development, and inform change in practice. Current data management practices with paper-based recording systems are prone to data quality defects. Increasingly, healthcare organizations are using technology for the efficient management of data. The aim of this study was to compare the data quality of digital records with that of the corresponding paper-based records using a data quality assessment framework. Methodology: We conducted a desk review of paper-based and digital records over the study duration from April 2016 to July 2016 at six enrolled tuberculosis (TB) clinics. We entered all data fields of the patient treatment (TB01) card into a spreadsheet-based template to undertake a field-to-field comparison of the fields shared between the TB01 cards and the digital data. Findings: A total of 117 TB01 cards were prepared at the six enrolled sites, whereas just 50% of the records (n = 59 out of 117 TB01 cards) were digitized. There were 1239 comparable data fields, of which 65% (n = 803) matched correctly between the paper-based and digital records. However, 35% of the data fields (n = 436) had anomalies, either in the paper-based records or in the digital records. The calculated number of data quality issues per patient record was 1.9 for digital records and 2.1 for paper-based records. Based on the analysis of valid data quality issues, there were more data quality issues in the paper-based records (n = 123) than in the digital records (n = 110). Conclusion: There were fewer data quality issues in the digital records than in the corresponding paper-based records of tuberculosis patients. Greater use of mobile data capture and continued data quality assessment can deliver more meaningful information for decision making.
(This article belongs to the Special Issue Data Management Strategy, Policy and Standard)
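The field-to-field comparison in the methodology lends itself to a short sketch. The snippet below compares one paper-based TB01 record with its digital counterpart over their shared fields, counting matches and collecting anomalous fields; the field names and values are hypothetical, and the study's actual template was spreadsheet-based.

```python
def compare_records(paper, digital, shared_fields):
    """Field-to-field comparison of one TB01 card with its digital copy."""
    matched = 0
    anomalies = []
    for field in shared_fields:
        if paper.get(field) == digital.get(field):
            matched += 1
        else:
            anomalies.append(field)  # defect in either the paper or digital record
    return matched, anomalies

# Hypothetical record pair: the age field disagrees between the two copies
paper = {"name": "A. Khan", "age": 34, "regimen": "HRZE"}
digital = {"name": "A. Khan", "age": 43, "regimen": "HRZE"}
matched, anomalies = compare_records(paper, digital, ["name", "age", "regimen"])
```

Note that a mismatch alone does not say which copy is wrong, which is why the study separately classifies which record holds the valid value.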

Open Access Article Current Core Competencies Trend of Small to Medium Enterprises (SMEs) in China—A Concurrent Comprehensive Evaluation and Active Learning Study of Newly Listed Chinese Stocks from 2015 through 2017
Received: 25 June 2018 / Revised: 12 July 2018 / Accepted: 14 July 2018 / Published: 17 July 2018
PDF Full-text (859 KB) | HTML Full-text | XML Full-text
Abstract
With plenty of stocks newly listed in the Chinese stock market every day, it is becoming more and more important for managers and regulators to examine the trend of core competencies for these companies. Since most companies behind newly listed stocks are small to medium-sized enterprises, existing methods are not capable of adequately evaluating their competitiveness. To provide an understanding of the trend of core competencies in the Chinese market, this article applies a concurrent comprehensive evaluation and active learning methodology to analyze the stocks newly listed on the SSE (Shanghai Stock Exchange Composite Index) and SZSE (Shenzhen Stock Exchange Component Index) from 2015 through 2017. There is evidence that Number of Market Makers, Equity Financing Frequency, and Executive Replacement Frequency were the three main core competencies from 2015 through 2017. The authors contend that the findings in this paper question the status quo of core competencies for small to medium-sized enterprises in the Chinese market.

Open Access Data Descriptor Indian Diabetic Retinopathy Image Dataset (IDRiD): A Database for Diabetic Retinopathy Screening Research
Received: 5 June 2018 / Revised: 6 July 2018 / Accepted: 6 July 2018 / Published: 10 July 2018
PDF Full-text (944 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Diabetic retinopathy is the most prevalent cause of avoidable vision impairment, mainly affecting the working-age population of the world. Recent research has given a better understanding of the requirement in clinical eye care practice to identify better and cheaper ways of identifying, managing, diagnosing, and treating retinal disease. The importance of diabetic retinopathy screening programs, and the difficulty of achieving reliable early diagnosis of diabetic retinopathy at a reasonable cost, call for the development of computer-aided diagnosis tools. Computer-aided disease diagnosis in retinal image analysis could ease the mass screening of populations with diabetes mellitus and help clinicians use their time more efficiently. Recent technological advances in computing power, communication systems, and machine learning techniques provide opportunities for biomedical engineers and computer scientists to meet the requirements of clinical practice. Diverse and representative retinal image sets are essential for developing and testing digital screening programs and the automated algorithms at their core. To the best of our knowledge, IDRiD (Indian Diabetic Retinopathy Image Dataset) is the first database representative of an Indian population. It contains typical diabetic retinopathy lesions and normal retinal structures annotated at the pixel level. The dataset provides information on the disease severity of diabetic retinopathy and of diabetic macular edema for each image, which makes it well suited for the development and evaluation of image analysis algorithms for the early detection of diabetic retinopathy.

Open Access Data Descriptor HANDY: A Benchmark Dataset for Context-Awareness via Wrist-Worn Motion Sensors
Received: 23 May 2018 / Revised: 14 June 2018 / Accepted: 22 June 2018 / Published: 24 June 2018
PDF Full-text (1272 KB) | HTML Full-text | XML Full-text
Abstract
Being aware of personal context is a promising task for various applications, such as biometry, human-computer interaction, telemonitoring, remote care, mobile marketing, and security. The task can be formally defined as the classification of a person into one of a set of predefined labels, which may correspond to his or her identity, gender, physical properties, the activity being performed, or any other attribute related to the surrounding environment. Here, we offer a solution to the problem with a set of multiple motion sensors worn on the wrist. We first provide an annotated and publicly accessible benchmark set for context-awareness through wrist-worn sensors, namely accelerometers, magnetometers, and gyroscopes. Second, we present an evaluation of recent computational methods for two relevant tasks: activity recognition and person identification from hand movements. Finally, we show that the fusion of two motion sensors (i.e., accelerometers and magnetometers) leads to higher accuracy for both tasks compared with the individual use of each sensor type.

Open Access Article A System for Acquisition, Processing and Visualization of Image Time Series from Multiple Camera Networks
Received: 13 May 2018 / Revised: 12 June 2018 / Accepted: 20 June 2018 / Published: 24 June 2018
PDF Full-text (11069 KB) | HTML Full-text | XML Full-text
Abstract
A system built on multiple camera networks is proposed for the continuous monitoring of ecosystems by processing image time series. The system is built around the Finnish Meteorological Image PROcessing Toolbox (FMIPROT), which covers data acquisition, processing, and visualization from multiple camera networks. The toolbox has a user-friendly graphical user interface (GUI) that requires only minimal computer knowledge and skills. Images from camera networks are acquired and handled automatically according to common communication protocols, e.g., the File Transfer Protocol (FTP). Processing features include GUI-based selection of the region of interest (ROI), an automatic analysis chain, extraction of ROI-based indices such as the green fraction index (GF), red fraction index (RF), blue fraction index (BF), green-red vegetation index (GRVI), and green excess index (GEI), as well as a custom index defined by a user-provided mathematical formula. Analysis results are visualized as interactive plots, both in the GUI and in hypertext markup language (HTML) reports. Users can implement their own algorithms to extract information from digital image series for any purpose. The toolbox can also be run in non-GUI mode, which allows series of analyses to run on servers, unattended and on a schedule. The system is demonstrated using an environmental camera network in Finland.
(This article belongs to the Special Issue Data in Astrophysics & Geophysics: Research and Applications)
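The ROI-based indices listed in the abstract are simple functions of the ROI-mean red, green, and blue channel values. The sketch below uses the standard formulations of these chromatic indices; consult the FMIPROT documentation for the toolbox's exact definitions, and note that the input values here are hypothetical.

```python
def chromatic_indices(r, g, b):
    """Common phenology indices from ROI-mean RGB channel values."""
    total = r + g + b
    return {
        "GF": g / total,             # green fraction index
        "RF": r / total,             # red fraction index
        "BF": b / total,             # blue fraction index
        "GRVI": (g - r) / (g + r),   # green-red vegetation index
        "GEI": 2 * g - (r + b),      # green excess index
    }

# ROI means from a hypothetical midsummer canopy image
idx = chromatic_indices(90.0, 120.0, 60.0)
```

Tracking such indices over an image time series is what turns a camera network into a phenology record: GF and GRVI typically rise with leaf-out and fall with senescence.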

Open Access Article Statistical Estimate of Radon Concentration from Passive and Active Detectors in Doha
Received: 22 March 2018 / Revised: 19 May 2018 / Accepted: 13 June 2018 / Published: 21 June 2018
PDF Full-text (1537 KB) | HTML Full-text | XML Full-text
Abstract
Harnessing knowledge of the physical and natural conditions that affect our health, general livelihood, and sustainability has long been at the core of scientific research. The health risks of ionising radiation from exposure to radon and radon decay products in homes, workplaces, and other public places entail developing novel approaches to modelling the occurrence of the gas and its decay products, in order to cope with the physical and natural dynamics in human habitats. Various data modelling approaches and techniques have been developed and applied to identify potential relationships among individual local meteorological parameters with a potential impact on radon concentrations, i.e., temperature, barometric pressure, and relative humidity. In this first research work on radon concentrations in the State of Qatar, we present a combination of exploratory, visualisation, and algorithmic estimation methods to understand the radon variations in and around the city of Doha. Data were obtained from the Central Radiation Laboratories (CRL) in Doha, gathered from 36 passive radon detectors deployed in various schools, residences, and workplaces in and around Doha, as well as from one active radon detector located at the CRL. Our key findings show high variations, mainly attributable to technical variations in data gathering, as the equipment and devices appear to heavily influence the levels of radon detected. A parameter-maximisation method, applied to simulate data with behaviour similar to that of the passive detectors in four of the neighbourhoods, appears appropriate for estimating parameters in cases of data limitation. Data from the active detector exhibit interesting seasonal variations: the data cluster into two clearly separable groups, and the passive and active detectors show a large disagreement in readings. These patterns highlight challenges related to detection methods, in particular ensuring that deployed detectors and calculations of radon concentrations are adapted to local conditions. The study does not dwell much on building materials and makes rather fundamental assumptions, including an equal exhalation rate of radon from the soil across neighbourhoods, based on Doha’s homogeneous underlying geological formation. The study also highlights potential extensions to the broader category of pollutants, such as hydrocarbons, airborne particulates, carbon monoxide, and nitrogen dioxide, at specific time periods of the year, and particularly how these may tie in with the requirements of global health institutions.
