Abstract: Establishing how a series of potentially important genes might relate to each other is relevant to understanding the origin and evolution of illnesses, such as cancer. High‑throughput biological experiments have played a critical role in providing information in this regard. A special challenge, however, is that of trying to reconcile information from separate microarray experiments to build a potential genetic signaling path. This work proposes a two-step, optimization-based analysis pipeline for meta-analysis aimed at building a proxy for a genetic signaling path.
Abstract: A strategy is presented that allows a causal analysis of co-expressed genes, which may be subject to common regulatory influences. A state-of-the-art promoter analysis for potential transcription factor (TF) binding sites, in combination with a knowledge-based analysis of the upstream pathways that control the activity of these TFs, is shown to lead to hypothetical master regulators. This strategy was implemented as a workflow in a comprehensive bioinformatics software platform. We applied this workflow to gene sets that were identified by a novel triclustering algorithm in naphthalene-induced gene expression signatures of murine liver and lung tissue. As a result, tissue-specific master regulators were identified that are known to be linked with tumorigenic and apoptotic processes. To our knowledge, this is the first time that genes of expression triclusters have been used to identify upstream regulators.
Abstract: Microarray technologies have been the basis of numerous important findings regarding gene expression in the last few decades. Studies have generated large amounts of data describing various processes, which, thanks to public databases, are widely available for further analysis. Given their lower cost and higher maturity compared to newer sequencing technologies, these data continue to be produced, even though data quality has been the subject of some debate. Moreover, given the large volume of data generated, integration can help overcome some issues related, e.g., to noise or reduced time resolution, while providing additional insight into features not directly addressed by sequencing methods. Here, we present an integration test case based on public Drosophila melanogaster datasets (gene expression, binding site affinities, known interactions). Using an evolutionary computation framework, we show how integration can enhance the ability to recover transcriptional gene regulatory networks from these data, and we indicate which data types are more important for quantitative and qualitative network inference. Our results show a clear improvement in performance when multiple datasets are integrated, indicating that microarray data will remain a valuable and viable resource for some time to come.
Abstract: Tissue microarray (TMA) methodology allows the concomitant analysis of hundreds of tissue specimens arrayed in the same manner on a recipient block. Subsequently, all samples can be processed under identical conditions, such as the antigen retrieval procedure, reagent concentrations, and incubation times with antibodies/probes, thereby escaping inter-assay variability. The use of TMA has therefore revolutionized histopathology translational research projects and has become a tool very often used for putative biomarker investigations. TMAs are particularly relevant for large-scale analysis of a defined disease entity. In the course of these exploratory studies, rare subpopulations can be discovered or identified. These may be subsets of patients with particular phenotypic or genotypic disease features of low incidence, or patients receiving a particular treatment. Such rare cohorts should be collected for more specific investigations at a later time, when more samples of a rare identity may be available and more knowledge from concomitant, e.g., genetic, investigations will have been acquired. In this article, we analyze for the first time the limits and opportunities of constructing new TMA blocks using tissues from older available arrays and supplementary donor blocks. In summary, we describe the rationale and technical details for the construction of rare disease entity arrays.
Abstract: Protein microarray technology has gone through numerous innovative developments in recent decades. In this review, we focus on the development of the protein detection methods embedded in the technology. Early microarrays utilized chromophores and versatile biochemical techniques dominated by high-throughput illumination. Recently, the realization of label-free techniques has been greatly advanced by the combination of knowledge in material sciences, computational design and nanofabrication. These rapidly advancing techniques aim to provide data without the intervention of label molecules. Here, we present a brief overview of this remarkable innovation from the perspectives of label-based and label-free techniques in transducing nano‑biological events.
Abstract: Nucleic Acid Programmable Protein Arrays (NAPPA) have emerged as a powerful and innovative technology for the screening of biomarkers and the study of protein-protein interactions, among other possible applications. Their principal advantages are the high specificity and sensitivity that this platform offers. Moreover, compared to conventional protein microarrays, NAPPA technology avoids the need for protein purification, which is expensive and time-consuming, by instead expressing proteins in situ with an in vitro transcription/translation kit. NAPPA arrays have been broadly employed in a variety of studies, improving our knowledge of diseases and responses to treatments. Here, we review the principal advances and applications achieved with this platform in recent years.