
Table of Contents

Future Internet, Volume 4, Issue 4 (December 2012), Pages 865-1104

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.

Research


Open Access Article: Plausible Description Logic Programs for Stream Reasoning
Future Internet 2012, 4(4), 865-881; doi:10.3390/fi4040865
Received: 9 August 2012 / Revised: 9 September 2012 / Accepted: 25 September 2012 / Published: 17 October 2012
Cited by 1 | PDF Full-text (295 KB) | HTML Full-text | XML Full-text
Abstract
Sensor networks are expected to drive the formation of the future Internet, with stream reasoning responsible for analysing sensor data. Stream reasoning is defined as real-time logical reasoning on large, noisy, heterogeneous data streams, aiming to support the decision processes of large numbers of concurrent querying agents. In this research we exploited non-monotonic rule-based systems for handling inconsistent or incomplete information, and ontologies to deal with heterogeneity. Data is aggregated from distributed streams in real time, and plausible rules fire when new data is available. The advantages of lazy evaluation on data streams were investigated in this study, with the help of a prototype developed in Haskell.
(This article belongs to the Special Issue Semantic Interoperability and Knowledge Building)
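
The lazy evaluation mentioned in the abstract above is easy to illustrate. The following is a minimal, hypothetical Haskell sketch, not the paper's prototype; the names Reading, sensorStream, and plausibleAlert are invented for illustration. It treats the sensor stream as an infinite lazy list, so a query forces only as many readings as it actually consumes while a simple plausible rule fires on each one.

```haskell
-- Minimal, hypothetical sketch (not the paper's code): a sensor stream as a
-- lazy infinite list, with a simple "plausible" rule applied on demand.

type Reading = (Int, Double)  -- (timestamp, value)

-- An infinite, lazily produced stream standing in for live sensor data.
sensorStream :: [Reading]
sensorStream = [(t, 20 + fromIntegral (t `mod` 7)) | t <- [0 ..]]

-- A plausible rule: flag a reading whose value exceeds a threshold.
plausibleAlert :: Double -> Reading -> Bool
plausibleAlert threshold (_, v) = v > threshold

-- Laziness means only as many readings are produced as the query demands:
-- here, just enough of the infinite stream to yield the first five alerts.
main :: IO ()
main = mapM_ print (take 5 (filter (plausibleAlert 24) sensorStream))
```
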
Open Access Article: Contributions to the Development of Local e-Government 2.0
Future Internet 2012, 4(4), 882-899; doi:10.3390/fi4040882
Received: 28 August 2012 / Revised: 28 September 2012 / Accepted: 10 October 2012 / Published: 22 October 2012
Cited by 2 | PDF Full-text (291 KB) | HTML Full-text | XML Full-text
Abstract
With the emergence of Web 2.0 (Blog, Wiki, RSS, YouTube, Flickr, Podcast, Social Networks, and Mashups), new ways of communicating, interacting and being on the Web have arisen. These new communication tools and strategies can radically change some specific work processes in communities, such as the work processes of an autarchy. Some authors emphasize the advantages of using Web 2.0 tools in autarchies; thus, we were interested in exploring the possibilities and constraints of implementing these tools in our region of Portugal, the Minho. Using a case study methodology, we aimed to find out about the possibilities of implementing Web 2.0 tools in autarchies through exploring the interest and motivation of autarchic collaborators in their use (our unit of analysis in autarchies). Information was gathered with the help of a questionnaire, the design of which was based on previous exploratory interviews, and which was applied to four autarchic units in the Minho region. In each unit, three different target-groups were surveyed (Councilors, Information Systems (IS) Technicians, and General Staff), so that we could triangulate the data. Data analysis and results emphasized the interest and motivation of the autarchies in using Web 2.0 tools, as well as the main constraints that would be faced during Web 2.0 implementation. It also allowed us to establish some guidelines for adequate Web 2.0 implementation, including an “ideal” profile of the person responsible for the implementation process.
(This article belongs to the Special Issue Government 2.0)
Open Access Article: Creating Open Government Ecosystems: A Research and Development Agenda
Future Internet 2012, 4(4), 900-928; doi:10.3390/fi4040900
Received: 31 July 2012 / Revised: 24 September 2012 / Accepted: 7 October 2012 / Published: 23 October 2012
Cited by 18 | PDF Full-text (289 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we propose to view the concept of open government from the perspective of an ecosystem, a metaphor often used by policy makers, scholars, and technology gurus to convey a sense of the interdependent social systems of actors, organizations, material infrastructures, and symbolic resources that can be created in technology-enabled, information-intensive social systems. We use the concept of an ecosystem to provide a framework for considering the outcomes of a workshop organized to generate a research and development agenda for open government. The agenda was produced in discussions among participants from the government (at the federal, state, and local levels), academic and civil sector communities at the Center for Technology in Government (CTG) at the University at Albany, SUNY in April 2011. The paper begins by discussing concepts central to understanding what is meant by an ecosystem and some principles that characterize its functioning. We then apply this metaphor more directly to government, proposing that policymakers engage in strategic ecosystems thinking, which means being guided by the goal of explicitly and purposefully constructing open government ecosystems. From there, we present the research agenda questions essential to the development of this new view of government's interaction with users and organizations. Our goal is to call attention to some of the fundamental ways in which government must change in order to evolve from outdated industrial bureaucratic forms to information age networked and interdependent systems.
(This article belongs to the Special Issue Government 2.0)
Open Access Article: Semantic Legal Policies for Data Exchange and Protection across Super-Peer Domains in the Cloud
Future Internet 2012, 4(4), 929-954; doi:10.3390/fi4040929
Received: 21 September 2012 / Revised: 13 October 2012 / Accepted: 17 October 2012 / Published: 25 October 2012
Cited by 1 | PDF Full-text (1958 KB) | HTML Full-text | XML Full-text
Abstract
In semantic policy infrastructure, a Trusted Legal Domain (TLD), designated as a Super-Peer Domain (SPD), is a legal cage model used to circumscribe the legal virtual boundary of data disclosure and usage in the cloud. Semantic legal policies in compliance with the law are enforced at the super-peer within an SPD to enable Law-as-a-Service (LaaS) for cloud service providers. In addition, cloud users could query fragmented but protected outsourcing cloud data from a law-aware super-peer, where each query is also compliant with the law. Semantic legal policies are logic-based formal policies, which are shown to be a combination of OWL-DL ontologies and stratified Datalog rules with negation, i.e., so-called non-monotonic cq-programs, for policy representation and enforcement. An agent at the super-peer is a unique law-aware guardian that provides protected data integration services for its peers within an SPD. Furthermore, agents at the super-peers specify how law-compliant legal policies are unified with each other to provide protected data exchange services across SPDs in the semantic data cloud.
(This article belongs to the Special Issue Semantic Interoperability and Knowledge Building)
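
The policy formalism named in the abstract above, stratified Datalog rules with negation, can be pictured with a minimal, hypothetical sketch; the predicates requested, restricted, and disclose are invented for illustration and are not taken from the paper. A rule of the form disclose(X) :- requested(X), not restricted(X) is safe to evaluate once the lower stratum of plain facts is complete.

```haskell
import qualified Data.Set as Set

-- Minimal, hypothetical sketch of one stratified-Datalog-with-negation rule:
--   disclose(X) :- requested(X), not restricted(X).
-- The predicates are invented for illustration, not drawn from the paper.

type Item = String

-- Lower stratum: plain facts, fully known before the rule above fires.
requested, restricted :: Set.Set Item
requested  = Set.fromList ["record1", "record2", "record3"]
restricted = Set.fromList ["record2"]

-- Upper stratum: negation-as-failure is safe here because `restricted`
-- belongs to a lower stratum and is already complete.
disclose :: Set.Set Item
disclose = requested `Set.difference` restricted

main :: IO ()
main = print (Set.toList disclose)  -- ["record1","record3"]
```
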
Open Access Article: Social Media and Experiential Ambivalence
Future Internet 2012, 4(4), 955-970; doi:10.3390/fi4040955
Received: 17 August 2012 / Revised: 7 October 2012 / Accepted: 22 October 2012 / Published: 26 October 2012
Cited by 3 | PDF Full-text (204 KB) | HTML Full-text | XML Full-text
Abstract
At once fearful and dependent, hopeful and distrustful, our contemporary relationship with technology is highly ambivalent. Using experiential accounts from an ongoing Facebook-based qualitative study (N = 231), I both diagnose and articulate this ambivalence. I argue that technological ambivalence is rooted primarily in the deeply embedded moral prescription to lead a meaningful life, and a related uncertainty about the role of new technologies in the accomplishment of this task. On the one hand, technology offers the potential to augment or even enhance personal and public life. On the other hand, technology looms with the potential to supplant or replace real experience. I examine these polemic potentialities in the context of personal experiences, interpersonal relationships, and political activism. I conclude by arguing that the pervasive integration and non-optionality of technical systems amplifies utopian hopes, dystopian fears, and ambivalent concerns in the contemporary era.
(This article belongs to the Special Issue Theorizing the Web 2012)
Open Access Article: The Cousins of Stuxnet: Duqu, Flame, and Gauss
Future Internet 2012, 4(4), 971-1003; doi:10.3390/fi4040971
Received: 18 September 2012 / Revised: 17 October 2012 / Accepted: 31 October 2012 / Published: 6 November 2012
Cited by 16 | PDF Full-text (230 KB) | HTML Full-text | XML Full-text
Abstract
Stuxnet was the first targeted malware that received worldwide attention for causing physical damage in an industrial infrastructure seemingly isolated from the online world. Stuxnet was a powerful targeted cyber-attack, and soon other malware samples were discovered that belong to this family. In this paper, we will first present our analysis of Duqu, an information-collecting malware sharing striking similarities with Stuxnet. We describe our contributions in the investigation ranging from the original detection of Duqu via finding the dropper file to the design of a Duqu detector toolkit. We then continue with the analysis of the Flame advanced information-gathering malware. Flame is unique in the sense that it used advanced cryptographic techniques to masquerade as a legitimate proxy for the Windows Update service. We also present the newest member of the family, called Gauss, whose unique feature is that one of its modules is encrypted such that it can only be decrypted on its target system; hence, the research community has not yet been able to analyze this module. For this particular malware, we designed a Gauss detector service and we are currently collecting intelligence information to be able to break its very special encryption mechanism. Besides explaining the operation of these pieces of malware, we also examine if and how they could have been detected by vigilant system administrators manually or in a semi-automated manner using available tools. Finally, we discuss lessons that the community can learn from these incidents. We focus on technical issues, and avoid speculations on the origin of these threats and other geopolitical questions.
(This article belongs to the Special Issue Aftermath of Stuxnet)
Open Access Article: Three Steps to Heaven: Semantic Publishing in a Real World Workflow
Future Internet 2012, 4(4), 1004-1015; doi:10.3390/fi4041004
Received: 22 September 2012 / Revised: 24 October 2012 / Accepted: 2 November 2012 / Published: 8 November 2012
PDF Full-text (153 KB) | HTML Full-text | XML Full-text
Abstract
Semantic publishing offers the promise of computable papers, enriched visualisation and a realisation of the linked data ideal. In reality, however, the publication process contrives to prevent richer semantics while culminating in a "lumpen" PDF. In this paper, we discuss a web-first approach to publication, and describe a three-tiered approach that integrates with the existing authoring tooling. Critically, although it adds limited semantics, it does provide value to all the participants in the process: the author, the reader and the machine.
Open Access Article: Supporting Trust and Privacy with an Identity-Enabled Architecture
Future Internet 2012, 4(4), 1016-1025; doi:10.3390/fi4041016
Received: 1 September 2012 / Revised: 24 September 2012 / Accepted: 25 October 2012 / Published: 19 November 2012
PDF Full-text (520 KB) | HTML Full-text | XML Full-text
Abstract
Cost reduction and a vastly increased potential to create new services, such as via the proliferation of the Cloud, have led to many more players and “end points”. With many of them being new entrants, possibly short-lived, the question arises of how to handle trust and privacy in this new context. In this paper, we specifically look at the underlying infrastructure that connects end-points served by these players, which is an essential part of the overall architecture for enabling trust and privacy. We present an enhanced architecture that allows real people, objects and services to reliably interact via an infrastructure providing assured levels of trust.
(This article belongs to the Special Issue Privacy in the Future Internet)
Open Access Article: Traceability in Model-Based Testing
Future Internet 2012, 4(4), 1026-1036; doi:10.3390/fi4041026
Received: 8 October 2012 / Revised: 27 October 2012 / Accepted: 19 November 2012 / Published: 26 November 2012
Cited by 1 | PDF Full-text (345 KB) | HTML Full-text | XML Full-text
Abstract
The growing complexity of software and the demand for shorter time to market are two important challenges facing today’s IT industry. These challenges demand increases in both the productivity and the quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models help to navigate from one model to another, and to trace back to the respective requirements and the design model when a test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose the Relation Definition Markup Language (RDML) for defining the relationships between models.
(This article belongs to the Special Issue Selected Papers from ITA 11)
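
The kind of model-to-model relationship RDML is described as defining can be pictured with a minimal, hypothetical sketch; the TraceLink type and the sample data below are invented and are not the paper's RDML. A trace link connects a test case to the design element and requirement it covers, so a failing test can be traced back to its sources.

```haskell
-- Minimal, hypothetical sketch of model-to-model trace relationships;
-- the TraceLink type and sample data are invented for illustration.

data TraceLink = TraceLink
  { testCase    :: String  -- element of the test model
  , designElem  :: String  -- element of the design model
  , requirement :: String  -- element of the requirements model
  } deriving Show

links :: [TraceLink]
links =
  [ TraceLink "TC-01" "LoginSequence" "REQ-7"
  , TraceLink "TC-02" "CheckoutState" "REQ-12"
  ]

-- When a test fails, navigate back to the design element and requirement.
traceBack :: String -> [TraceLink]
traceBack tc = [l | l <- links, testCase l == tc]

main :: IO ()
main = mapM_ print (traceBack "TC-02")
```
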
Open Access Article: Virtual Astronaut for Scientific Visualization—A Prototype for Santa Maria Crater on Mars
Future Internet 2012, 4(4), 1049-1068; doi:10.3390/fi4041049
Received: 16 July 2012 / Revised: 5 December 2012 / Accepted: 10 December 2012 / Published: 13 December 2012
Cited by 1 | PDF Full-text (4320 KB) | HTML Full-text | XML Full-text
Abstract
To support scientific visualization of multiple-mission data from Mars, the Virtual Astronaut (VA) creates an interactive virtual 3D environment built on the Unity3D Game Engine. A prototype study was conducted based on orbital and Opportunity Rover data covering Santa Maria Crater in Meridiani Planum on Mars. The VA at Santa Maria provides dynamic visual representations of the imaging, compositional, and mineralogical information. The VA lets one navigate through the scene and provides geomorphic and geologic contexts for the rover operations. User interactions include visualization of in-situ observations, feature measurement, and animation control of rover drives. This paper covers our approach and implementation of the VA system, along with a brief summary of the prototype system’s functions and user feedback. Based on external review and comments from the science community, the prototype at Santa Maria has proven the VA to be an effective tool for virtual geovisual analysis.
(This article belongs to the Special Issue Geovisual Analytics)
Open Access Article: A Web-Based Geovisual Analytical System for Climate Studies
Future Internet 2012, 4(4), 1069-1085; doi:10.3390/fi4041069
Received: 10 October 2012 / Revised: 20 November 2012 / Accepted: 10 December 2012 / Published: 14 December 2012
Cited by 4 | PDF Full-text (2695 KB) | HTML Full-text | XML Full-text
Abstract
Climate studies involve petabytes of spatiotemporal datasets that are produced and archived at distributed computing resources. Scientists need an intuitive and convenient tool to explore the distributed spatiotemporal data. Geovisual analytical tools have the potential to provide such an intuitive and convenient method for scientists to access climate data, discover the relationships between various climate parameters, and communicate the results across different research communities. However, implementing a geovisual analytical tool for complex climate data in a distributed environment poses several challenges. This paper reports our research and development of a web-based geovisual analytical system to support the analysis of climate data generated by climate models. Using the ModelE developed by the NASA Goddard Institute for Space Studies (GISS) as an example, we demonstrate that the system is able to (1) manage large volume datasets over the Internet; (2) visualize 2D/3D/4D spatiotemporal data; (3) broker various spatiotemporal statistical analyses for climate research; and (4) support interactive data analysis and knowledge discovery. This research also provides an example for managing, disseminating, and analyzing Big Data in the 21st century.
(This article belongs to the Special Issue Geovisual Analytics)
Open Access Article: Towards Content Neutrality in Wiki Systems
Future Internet 2012, 4(4), 1086-1104; doi:10.3390/fi4041086
Received: 16 October 2012 / Revised: 29 November 2012 / Accepted: 3 December 2012 / Published: 19 December 2012
Cited by 2 | PDF Full-text (230 KB) | HTML Full-text | XML Full-text
Abstract
The neutral point of view (NPOV) cornerstone of Wikipedia (WP) is challenged for next-generation knowledge bases. A case is presented for content neutrality as a new, every point of view (EPOV) guiding principle. The architectural implications of content neutrality are discussed and translated into novel concepts of Wiki architectures, and guidelines for implementing this architecture are presented. Although NPOV is criticized, the contribution avoids ideological controversy and focuses on the benefits of the novel approach.
(This article belongs to the Special Issue Selected Papers from ITA 11)

Other


Open Access Essay: Textual Dualism and Augmented Reality in the Russian Empire
Future Internet 2012, 4(4), 1037-1048; doi:10.3390/fi4041037
Received: 16 August 2012 / Revised: 6 November 2012 / Accepted: 5 December 2012 / Published: 10 December 2012
Cited by 1 | PDF Full-text (185 KB) | HTML Full-text | XML Full-text
Abstract
While the current focus on how digital technology alters our conception of the self and its place in the broader perceived reality yields fascinating insight into modern issues, there is much to be gained by analyzing the presence of dualist and augmented reality discourses in a pre-digital era. This essay examines the ontological interplay of textual dualist norms in the Russian and Soviet states of the 19th and early 20th centuries and how those norms were challenged by augmented claims embodied in rumors, refrains, and the spelling of names. By utilizing the informational concepts of mobility and asynchronicity, three Russian historical vignettes—the Emancipation of the Serfs in 1861, the documentation of Jews in Imperial Russia, and the attempts by Trotsky to realize the Soviet smychka—demonstrate that not only are dualist discourses prevalent in periods outside of the contemporary, but also that the way in which those conflicts framed themselves in the past directly influences their deployment in today’s digital world.
(This article belongs to the Special Issue Theorizing the Web 2012)

Journal Contact

MDPI AG
Future Internet Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
futureinternet@mdpi.com
Tel.: +41 61 683 77 34
Fax: +41 61 302 89 18