Abstract: In this paper we report findings from the first phase of an investigation that explored the experience of learning amongst high-level managers, project leaders and visitors in Queensland University of Technology’s (QUT) “Cube”. “The Cube” is a giant, interactive, multimedia display and an award-winning installation that hosts several interactive projects. The research team worked with three groups of participants to understand the relationship between: (a) the learning experiences that were intended in the establishment phase; (b) the learning experiences that were enacted through the design and implementation of specific projects; and (c) the lived experiences of learning of visitors interacting with the system. We adopted phenomenography as a research approach to understand variation in people’s understandings and lived experiences of learning in this environment. The project was conducted within the first twelve months of The Cube being open to visitors.
Abstract: As we move into the big data era, data grows not just in size but also in complexity, containing a rich set of attributes, including location and time information, such as data from mobile devices (e.g., smart phones), natural disasters (e.g., earthquakes and hurricanes), epidemic spread, etc. Motivated by this rising challenge, we build a visualization tool for exploring generic spatiotemporal data, i.e., records containing time and location information together with numeric attribute values. Since the values often evolve over time and across geographic regions, we are particularly interested in detecting and analyzing anomalous changes over time and space. Our analytic tool is based on a geographic information system and is combined with spatiotemporal data mining algorithms, as well as various data visualization techniques, such as anomaly grids and anomaly bars superimposed on the map. We study how effectively the tool guides users to potential anomalies by demonstrating and evaluating it on publicly available spatiotemporal datasets. The tool for spatiotemporal anomaly analysis and visualization is useful in many domains, such as security investigation and monitoring, situational awareness, etc.
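The abstract above mentions anomaly grids, i.e., flagging grid cells whose values change abnormally over time. A minimal sketch of that idea, assuming a hypothetical record format of (lat, lon, time, value) and a simple per-cell z-score test (the paper's actual mining algorithms are not specified here):

```python
# Hypothetical sketch of grid-based spatiotemporal anomaly scoring.
# Assumes records of (lat, lon, time, value); not the authors' tool.
from collections import defaultdict
import statistics

def anomaly_grid(records, cell=1.0, threshold=3.0):
    """Flag grid cells whose new value deviates strongly from that cell's history."""
    history = defaultdict(list)  # (grid_x, grid_y) -> past values in that cell
    anomalies = []
    for lat, lon, t, value in sorted(records, key=lambda r: r[2]):
        key = (int(lat // cell), int(lon // cell))
        past = history[key]
        if len(past) >= 2:
            mu = statistics.mean(past)
            sd = statistics.pstdev(past) or 1.0  # avoid division by zero
            if abs(value - mu) / sd > threshold:
                anomalies.append((key, t, value))
        past.append(value)
    return anomalies
```

Flagged cells could then be rendered as colored grid squares or bars superimposed on a map, as the abstract describes.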
Abstract: Molecular imaging by definition is the visualization of molecular and cellular processes within a given system. The modalities and reagents described here represent a diverse array spanning both pre-clinical and clinical applications. Innovations in probe design and technologies would greatly benefit therapeutic outcomes by enhancing diagnostic accuracy and the assessment of acute therapy. Opportunistic pathogens continue to pose a worldwide threat, despite advancements in treatment strategies, which highlights the continued need for improved diagnostics. In this review, we present a summary of the current clinical protocol for the imaging of a suspected infection, methods currently in development to optimize this imaging process, and finally, insight into endocarditis as a model of infectious disease in immediate need of improved diagnostic methods.
Abstract: Segmentation in ultrasound (US) images is a challenge in computer vision, due to high signal noise, artifacts that produce discontinuities in the boundaries, and shadows that hide part of the received signal. In this paper, a solution based on ellipse fitting, motivated by natural artery geometry, will be proposed. To optimize the parameters that define such an ellipse, a strategy based on an evolutionary algorithm was adopted. The paper will also demonstrate that the method can be solved in a reasonable amount of time by making intensive use of GPGPU (general-purpose computing on graphics processing units), where an excellent computing performance gain is obtained (up to 54 times faster than the parallel CPU implementation). The proposed approach is compared with other artery segmentation methods in US images, obtaining very promising results. Furthermore, the proposed approach is parameter-free and does not require an initial estimate close to the final solution.
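The core idea above, fitting ellipse parameters with an evolutionary search, can be illustrated with a toy example. This is only an assumed sketch (the paper's actual encoding, operators, and GPU implementation are not described here): candidates are axis-aligned ellipses (cx, cy, a, b), fitness is the deviation of candidate edge points from the ellipse equation, and a simple elitist mutate-and-select loop optimizes it.

```python
# Illustrative sketch, NOT the paper's implementation: evolutionary
# fitting of an axis-aligned ellipse (cx, cy, a, b) to edge points.
import math
import random

def fitness(params, points):
    """Sum of deviations of each point from the implicit ellipse equation = 1."""
    cx, cy, a, b = params
    return sum(abs(((x - cx) / a) ** 2 + ((y - cy) / b) ** 2 - 1.0)
               for x, y in points)

def fit_ellipse(points, generations=200, pop_size=30, seed=0):
    rng = random.Random(seed)
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    def random_candidate():
        # Center anywhere in the point cloud, axes up to its extent.
        return [rng.uniform(min(xs), max(xs)), rng.uniform(min(ys), max(ys)),
                rng.uniform(1e-3, max(xs) - min(xs) + 1.0),
                rng.uniform(1e-3, max(ys) - min(ys) + 1.0)]
    pop = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, points))
        parents = pop[:pop_size // 2]            # elitist selection
        children = [[g + rng.gauss(0.0, 0.1) for g in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children                 # mutated offspring
    return min(pop, key=lambda p: fitness(p, points))
```

Because every candidate's fitness is independent, this evaluation step is exactly the kind of work that parallelizes well on a GPU, which is consistent with the speedup the abstract reports.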
Abstract: Numerous initiatives have allowed users to share knowledge or opinions using collaborative platforms. In most cases, the users provide a textual description of their knowledge, following very limited or no constraints. Here, we tackle the classification of documents written in such an environment. As a use case, our study draws on material from a text-mining evaluation campaign concerned with the classification of cooking recipes tagged by users of a collaborative website. This context makes some of the corpus specificities difficult to model for machine-learning-based systems and keyword- or lexicon-based systems. In particular, different authors might have different opinions on how to classify a given document. The systems presented hereafter were submitted to the Défi Fouille de Textes 2013 evaluation campaign, where they obtained the best overall results, ranking first on task 1 and second on task 2. In this paper, we explain our approach for building relevant and effective systems dealing with such a corpus.
Abstract: We address the problem of automatically processing collocations—a subclass of multi-word expressions characterized by a high degree of morphosyntactic flexibility—in the context of two major applications, namely, syntactic parsing and machine translation. We show that parsing and collocation identification are interrelated processes that benefit from each other, inasmuch as syntactic information is crucial for acquiring collocations from corpora and, vice versa, collocational information can be used to improve parsing performance. We likewise focus on the interrelation between collocations and machine translation, highlighting the use of translation information for multilingual collocation identification, as well as the use of collocational knowledge for improving translation. We give a panorama of the existing relevant work, and we parallel the literature surveys with our own experiments involving a symbolic parser and a rule-based translation system. The results show a significant improvement over approaches in which the corresponding tasks are decoupled.
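Acquiring collocations from corpora, as mentioned above, is commonly framed as scoring candidate word pairs by an association measure. As a toy baseline only (the work described uses syntactic information, which this sketch deliberately omits), adjacent word pairs can be ranked by pointwise mutual information (PMI):

```python
# Toy baseline for corpus-based collocation extraction: rank adjacent
# word pairs by pointwise mutual information (PMI). The abstract's
# approach is syntax-based; this simplified sketch is only illustrative.
import math
from collections import Counter

def pmi_collocations(tokens, min_count=2):
    """Return adjacent word pairs ranked by PMI, highest first."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    scores = {}
    for (w1, w2), c in bigrams.items():
        if c < min_count:               # ignore rare, unreliable pairs
            continue
        p_pair = c / (n - 1)            # probability of the adjacent pair
        p1, p2 = unigrams[w1] / n, unigrams[w2] / n
        scores[(w1, w2)] = math.log2(p_pair / (p1 * p2))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

A syntax-aware variant would count pairs in a syntactic relation (e.g., verb-object) rather than mere adjacency, which is precisely why parsing and collocation identification reinforce each other.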