
Table of Contents

High-Throughput, Volume 7, Issue 2 (June 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the version of record for papers, which are published in both HTML and PDF form. To view a paper in PDF format, click its "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: Multi-well plates and cell arrays enable microscopy-based screening in an arrayed format and, in [...]
Displaying articles 1-10
Open Access Review: Microfluidic Devices for Drug Assays
High-Throughput 2018, 7(2), 18; https://doi.org/10.3390/ht7020018
Received: 2 May 2018 / Revised: 7 June 2018 / Accepted: 13 June 2018 / Published: 20 June 2018
PDF Full-text (1089 KB) | HTML Full-text | XML Full-text
Abstract
In this review, we give an overview of the current state of microfluidic-based high-throughput drug assays. In this highly interdisciplinary research field, various approaches have been applied to high-throughput drug screening, including microtiter plates, droplet microfluidics, and continuous-flow, diffusion- and concentration-gradient-based microfluidic drug assays. We therefore reviewed over 100 recent publications in the field and sorted them according to their microfluidic approach. On this basis, we showcase, compare and discuss broadly applied approaches as well as singular promising ones that might contribute to shaping the future of this field.
Open Access Article: A Parallel Software Pipeline for DMET Microarray Genotyping Data Analysis
High-Throughput 2018, 7(2), 17; https://doi.org/10.3390/ht7020017
Received: 31 March 2018 / Revised: 21 May 2018 / Accepted: 7 June 2018 / Published: 14 June 2018
PDF Full-text (2490 KB) | HTML Full-text | XML Full-text
Abstract
Personalized medicine is an aspect of P4 medicine (predictive, preventive, personalized and participatory) based on tailoring all aspects of medical care to each subject. In personalized medicine, the development of medical treatments and drugs is tailored to the individual characteristics and needs of each subject, according to the study of diseases at different scales, from genotype to phenotype. To realize the goal of personalized medicine, it is necessary to employ high-throughput methodologies such as Next Generation Sequencing (NGS), Genome-Wide Association Studies (GWAS), Mass Spectrometry or Microarrays, which are able to investigate a single disease from a broader perspective. A side effect of high-throughput methodologies is the massive amount of data produced for each experiment, which poses several challenges (e.g., long execution times and large memory requirements) to bioinformatics software. A main requirement of modern bioinformatics software is therefore the use of good software engineering methods and efficient programming techniques able to face those challenges, including parallel programming and efficient, compact data structures. This paper presents the design and experimental evaluation of a comprehensive software pipeline, named microPipe, for the preprocessing, annotation and analysis of microarray-based Single Nucleotide Polymorphism (SNP) genotyping data. A use case in pharmacogenomics is presented. The main advantages of microPipe are the reduction of errors that may occur when making data compatible among different tools, the ability to analyze huge datasets in parallel, and the easy annotation and integration of data. microPipe is available under a Creative Commons license and is freely downloadable for academic and not-for-profit institutions.
(This article belongs to the Special Issue Applications of Microarrays in Diagnostics)

Open Access Review: Handling Complexity in Animal and Plant Science Research—From Single to Functional Traits: Are We There Yet?
High-Throughput 2018, 7(2), 16; https://doi.org/10.3390/ht7020016
Received: 4 March 2018 / Revised: 10 May 2018 / Accepted: 24 May 2018 / Published: 28 May 2018
PDF Full-text (214 KB) | HTML Full-text | XML Full-text
Abstract
The current knowledge of the main factors governing livestock, crop and plant quality, as well as yield, in different species is incomplete. This is evidenced, for example, by the persistence of benchmark crop varieties for many decades in spite of the gains achieved over the same period. In recent years, it has been demonstrated that molecular breeding based on DNA markers has led to advances in animal and crop breeding. However, these advances are not of the magnitude initially anticipated by researchers in the field. According to several scientists, one of the main reasons for this is the evidence that complex target traits such as grain yield, composition or nutritional quality depend on multiple factors in addition to genetics. Therefore, some questions need to be asked: are the current approaches in molecular genetics the most appropriate to deal with complex traits such as yield or quality? Are the current tools for phenotyping complex traits sufficient to differentiate among genotypes? Do we need to change the way that data are collected and analysed?
Open Access Review: Functional Genomics Approaches to Studying Symbioses between Legumes and Nitrogen-Fixing Rhizobia
High-Throughput 2018, 7(2), 15; https://doi.org/10.3390/ht7020015
Received: 9 April 2018 / Revised: 13 May 2018 / Accepted: 16 May 2018 / Published: 18 May 2018
PDF Full-text (635 KB) | HTML Full-text | XML Full-text
Abstract
Biological nitrogen fixation gives legumes a pronounced growth advantage in nitrogen-deprived soils and is of considerable ecological and economic interest. In exchange for reduced atmospheric nitrogen, typically delivered to the plant in the form of amides or ureides, the legume provides nitrogen-fixing rhizobia with nutrients and highly specialised root structures called nodules. To elucidate the molecular basis underlying these physiological adaptations on a genome-wide scale, functional genomics approaches such as transcriptomics, proteomics and metabolomics have been used. This review presents an overview of the different functional genomics approaches that have been applied to rhizobial symbiosis, with a focus on studies investigating the molecular mechanisms used by the bacterial partner to interact with the legume. While rhizobia belonging to the alpha-proteobacterial group (alpha-rhizobia) have been well studied, few studies to date have investigated this process in beta-proteobacteria (beta-rhizobia).

Open Access Article: Computational Convolution of SELDI Data for the Diagnosis of Alzheimer’s Disease
High-Throughput 2018, 7(2), 14; https://doi.org/10.3390/ht7020014
Received: 23 February 2018 / Revised: 9 May 2018 / Accepted: 14 May 2018 / Published: 17 May 2018
PDF Full-text (1439 KB) | HTML Full-text | XML Full-text
Abstract
Alzheimer’s disease is rapidly becoming epidemic among people over the age of 65. A vital path towards reversing this ominous trend is the building of reliable diagnostic devices for definite and early diagnoses, in lieu of the longitudinal, usually inconclusive and non-generalizable methods currently in use. In this article, we present a survey of methods for mining pools of mass spectrometry saliva data for the diagnosis of Alzheimer’s disease. These computational methods provide new approaches for gleaning latent information from mass spectra. They improve on traditional machine learning algorithms and are well suited to handling matrix data, including problems beyond protein identification and biomarker discovery.
(This article belongs to the Special Issue Parallel and Cloud-Based Bioinformatics and Biomedicine)

Open Access Communication: Comparison of Cell Arrays and Multi-Well Plates in Microscopy-Based Screening
High-Throughput 2018, 7(2), 13; https://doi.org/10.3390/ht7020013
Received: 7 March 2018 / Revised: 8 May 2018 / Accepted: 9 May 2018 / Published: 15 May 2018
PDF Full-text (3274 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Multi-well plates and cell arrays enable microscopy-based screening assays in which many samples can be analysed in parallel. Each format possesses its own strengths and weaknesses, but reference comparisons between these platforms, and a rationale for choosing between them, are lacking. We aim to fill this gap by comparing two RNA interference (RNAi)-mediated fluorescence microscopy-based assays, namely epidermal growth factor (EGF) internalization and cell cycle progression, on both platforms. Quantitative analysis revealed that both platforms enabled the generation of data in which the expected phenotypes were significantly distinct from the negative controls. Measurements of cell cycle progression were less variable in multi-well plates; this can largely be attributed to the higher cell numbers, which reduce data variability in an assay that generates phenotypic cell subpopulations. The EGF internalization assay, with a uniform phenotype over nearly the whole cell population, performed better on cell arrays than in multi-well plates. This result was achieved while scoring five times fewer cells on cell arrays than in multi-well plates, indicating the efficiency of the cell array format. Our data indicate that the choice of screening platform should primarily depend on the type of cellular assay in order to achieve maximum data quality and screening efficiency.

Open Access Review: Reactive Chemicals and Electrophilic Stress in Cancer: A Minireview
High-Throughput 2018, 7(2), 12; https://doi.org/10.3390/ht7020012
Received: 3 March 2018 / Revised: 19 April 2018 / Accepted: 26 April 2018 / Published: 27 April 2018
PDF Full-text (1820 KB) | HTML Full-text | XML Full-text
Abstract
Exogenous reactive chemicals can impair cellular homeostasis and are often associated with the development of cancer. Significant progress has been achieved by studying the macromolecular interactions of chemicals that possess various electron-withdrawing groups and by elucidating the protective responses of cells to chemical interventions. However, the formation of electrophilic species inside the cell and the relationship between oxidative and electrophilic stress remain largely unclear. Derivatives of nitro-benzoxadiazole (also referred to as nitro-benzofurazan) are potent producers of hydrogen peroxide and have been used as a model to study the generation of reactive species in cancer cells. This survey highlights the pivotal role of Cu/Zn superoxide dismutase 1 (SOD1) in the production of reactive oxygen and electrophilic species in cells exposed to cell-permeable chemicals. Lipophilic electrophiles rapidly bind to SOD1 and induce stable and functionally active dimers, which produce excess hydrogen peroxide, leading to aberrant cell signalling. Moreover, reactive oxygen species and reactive electrophilic species, simultaneously generated by redox reactions, behave as independent entities that attack a variety of proteins. It is postulated that the binding of the electrophilic moiety to multiple proteins, impairing different cellular functions, may explain unpredictable side effects in patients undergoing chemotherapy with reactive oxygen species (ROS)-inducing drugs. The identification of proteins susceptible to electrophiles at early steps of oxidative and electrophilic stress is a promising way to develop rational strategies for dealing with stress-related malignant tumors.

Open Access Article: Fast-GPU-PCC: A GPU-Based Technique to Compute Pairwise Pearson’s Correlation Coefficients for Time Series Data—fMRI Study
High-Throughput 2018, 7(2), 11; https://doi.org/10.3390/ht7020011
Received: 2 March 2018 / Revised: 4 April 2018 / Accepted: 17 April 2018 / Published: 20 April 2018
Cited by 1 | PDF Full-text (1594 KB) | HTML Full-text | XML Full-text
Abstract
Functional magnetic resonance imaging (fMRI) is a non-invasive brain imaging technique that has been regularly used for studying the brain’s functional activities in the past few years. A widely used measure for capturing functional associations in the brain is Pearson’s correlation coefficient, which is commonly employed for constructing functional networks and studying the dynamic functional connectivity of the brain. These are useful measures for understanding the effects of brain disorders on connectivity among brain regions. fMRI scanners produce a huge number of voxels, and using traditional central processing unit (CPU)-based techniques to compute pairwise correlations is very time consuming, especially when a large number of subjects are studied. In this paper, we propose a graphics processing unit (GPU)-based algorithm called Fast-GPU-PCC for computing pairwise Pearson’s correlation coefficients. Exploiting the symmetry of the correlation matrix, the approach returns the N(N−1)/2 correlation coefficients located in its strictly upper triangular part. Storing the correlations in a one-dimensional array, in the order proposed in this paper, facilitates further use. Our experiments on real and synthetic fMRI data, for different numbers of voxels and varying lengths of time series, show that the proposed approach outperforms state-of-the-art GPU-based techniques as well as sequential CPU-based versions. We show that Fast-GPU-PCC runs 62 times faster than the CPU-based version and about 2 to 3 times faster than two other state-of-the-art GPU-based methods.
(This article belongs to the Special Issue Parallel and Cloud-Based Bioinformatics and Biomedicine)
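The output convention described in the abstract — the strictly upper triangle of the N×N correlation matrix flattened row-major into a one-dimensional array of N(N−1)/2 values — can be reproduced with a small CPU-side NumPy reference. This is a sketch of the layout only, not the authors' GPU implementation:

```python
import numpy as np

def pairwise_pcc_upper(data):
    """Return the N(N-1)/2 pairwise Pearson correlations of the rows of
    `data` (shape N x T), stored row-major as the strictly upper
    triangle of the correlation matrix."""
    # Centre and normalise each time series so dot products give correlations.
    z = data - data.mean(axis=1, keepdims=True)
    z /= np.linalg.norm(z, axis=1, keepdims=True)
    corr = z @ z.T                           # full N x N correlation matrix
    iu = np.triu_indices(corr.shape[0], k=1) # strictly upper triangle, row-major
    return corr[iu]

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 100))            # 5 voxels, 100 time points
flat = pairwise_pcc_upper(x)                 # 5 * 4 / 2 = 10 coefficients
```

A GPU version exploits the same symmetry: only the upper-triangle dot products are launched, and the flat row-major index maps each thread back to its (i, j) pair.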

Open Access Article: Red Blood Cell Agglutination for Blood Typing Within Passive Microfluidic Biochips
High-Throughput 2018, 7(2), 10; https://doi.org/10.3390/ht7020010
Received: 22 March 2018 / Revised: 16 April 2018 / Accepted: 16 April 2018 / Published: 19 April 2018
PDF Full-text (22491 KB) | HTML Full-text | XML Full-text
Abstract
A pre-transfusion bedside compatibility test is mandatory to check that the donor and the recipient have compatible blood groups before any transfusion is performed. Although blood typing devices are on the market, they still suffer from various drawbacks, such as results based on naked-eye observation or difficulties in blood handling and process automation. In this study, we addressed the development of a red blood cell (RBC) agglutination assay for point-of-care blood typing. An injection-molded microfluidic chip, designed to enhance capillary flow, contained anti-A or anti-B dried reagents inside its microchannel. The only blood handling step in the assay protocol was the deposit of a blood drop at the tip of the biochip, after which imaging was performed. The embedded reagents were able to trigger RBC agglutination in situ, allowing us to monitor the whole process in real time. An image processing algorithm was developed on diluted blood to compute a real-time agglutination indicator and was further validated on undiluted blood. Through this proof of concept, we achieved efficient, automated, real-time and quantitative measurement of agglutination inside a passive biochip for blood typing, which could be further generalized to blood biomarker detection and quantification.
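The paper's exact image processing algorithm is not reproduced in the abstract, but a toy indicator conveys the principle: agglutination clumps RBCs, making the image patchy, so the spread of local mean intensities rises over time. Everything below — the block size, the synthetic frames, the function name — is invented for illustration:

```python
import numpy as np

def agglutination_index(frame, block=8):
    """Toy indicator: variance of block-mean intensities. Clumps make
    some blocks markedly darker or brighter than others, so the spread
    of block means grows as agglutination proceeds."""
    h, w = frame.shape
    h, w = h - h % block, w - w % block          # crop to a multiple of block
    tiles = frame[:h, :w].reshape(h // block, block, w // block, block)
    return float(tiles.mean(axis=(1, 3)).var())

rng = np.random.default_rng(1)
uniform = rng.normal(0.5, 0.02, (64, 64))        # homogeneous RBC suspension
clumped = uniform.copy()
clumped[8:24, 8:24] += 0.4                        # a simulated agglutinate
```

Tracking such an index frame by frame yields the kind of real-time agglutination curve the study monitors inside the biochip.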

Open Access Review: Recent Advances in Targeted and Untargeted Metabolomics by NMR and MS/NMR Methods
High-Throughput 2018, 7(2), 9; https://doi.org/10.3390/ht7020009
Received: 19 March 2018 / Revised: 9 April 2018 / Accepted: 13 April 2018 / Published: 18 April 2018
PDF Full-text (2035 KB) | HTML Full-text | XML Full-text
Abstract
Metabolomics has made significant progress on multiple fronts in the last 18 months. This minireview aims to give an overview of these advancements in the light of their contribution to targeted and untargeted metabolomics. New computational approaches have emerged to automate the manual absolute quantitation of metabolites in one-dimensional (1D) 1H nuclear magnetic resonance (NMR) spectra, providing more consistency in inter-laboratory comparisons. The integration of two-dimensional (2D) NMR metabolomics databases under a unified web server has allowed for very accurate identification of the metabolites catalogued in these databases. For the remaining uncatalogued and unknown metabolites, new cheminformatics approaches have been developed by combining NMR and mass spectrometry (MS). These hybrid MS/NMR approaches have accelerated the identification of unknowns in untargeted studies, and they now allow ever larger numbers of metabolites to be profiled in application studies.
