Special Issue "Social Web, New Media, Algorithms and Power"

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Techno-Social Smart Systems".

Deadline for manuscript submissions: closed (30 April 2020).

Special Issue Editors

Prof. Dr. Lluís Codina
Guest Editor
Department of Communication, Universitat Pompeu Fabra, 08002 Barcelona, Spain
Interests: digital news media; digital journalism; information science; journalism SEO; academic SEO; methods in analysis and evaluation of websites; innovation in journalism; information architecture; information seeking and retrieval; information organization and taxonomies
Dr. Cristòfol Rovira
Guest Editor
Department of Communication, Universitat Pompeu Fabra, 08002 Barcelona, Spain
Interests: information science; SEO; academic SEO; eye tracking
Dr. Frederic Guerrero-Solé
Guest Editor
Department of Communication, Universitat Pompeu Fabra, 08002 Barcelona, Spain
Interests: social media; media effects; political communication; social media algorithms

Special Issue Information

Dear Colleagues,

The future of the web will be shaped by several converging forces, among them the social web, new media and artificial intelligence, chiefly in the form of machine learning. In the near future we will see the growing power of algorithms across many dimensions of society. In this context, digital media and news platforms must adapt their functions to remain relevant as sources of credible and trusted information for citizens.

In this Special Issue we invite contributions on what the current web is like and where it is heading in relation to the factors mentioned above: social networks, new media, machine learning and the growing power of algorithms in social spaces.

A significant example is the arrival of artificial intelligence in the newsrooms of news outlets and on digital platforms. As a result, one of the most important factors in understanding the future web is the use of machine learning and deep learning to recommend content to users on social networks and platforms such as YouTube or Facebook.

In this Special Issue we will consider both theoretical and critical contributions, preferably focused on (but not limited to) the production, recommendation and dissemination of news and user-generated content.

Prof. Dr. Lluís Codina
Dr. Cristòfol Rovira
Dr. Frederic Guerrero-Solé
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • social web
  • new media
  • social networks
  • machine learning
  • algorithms
  • YouTube
  • Facebook
  • news outlets
  • digital news media

Published Papers (6 papers)


Research


Open AccessArticle
A Bibliometric Overview of Twitter-Related Studies Indexed in Web of Science
Future Internet 2020, 12(5), 91; https://doi.org/10.3390/fi12050091 - 20 May 2020
Viewed by 2180
Abstract
Twitter has been one of the most popular social network sites for academic research. The main objective of this study was to update the current knowledge boundary surrounding Twitter-related investigations, identify the major research topics and analyze their evolution over time. A bibliometric analysis was applied: we retrieved 19,205 Twitter-related academic articles from Web of Science after several steps of data cleaning and preparation, and mainly used the R package “Bibliometrix” to analyze this content. The study has two sections: the performance analysis covers five categories (Annual Scientific Production, Most Relevant Sources, Most Productive Authors, Most Cited Publications and Most Relevant Keywords), while the science mapping includes country collaboration analysis and thematic analysis. We highlight the thematic analysis by splitting the bibliographic dataset into three temporal periods, so that thematic evolution over time can be traced. This study is one of the most comprehensive bibliometric overviews of Twitter-related studies to date, and we explain how the results advance the understanding of current academic research interests in the social media giant.
(This article belongs to the Special Issue Social Web, New Media, Algorithms and Power)
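The performance analysis this abstract describes (for example, a "Most Relevant Keywords" table split across temporal periods) can be sketched in plain Python. The paper itself used the R package Bibliometrix; the records and field names below are hypothetical stand-ins for a Web of Science export.

```python
from collections import Counter

# Hypothetical bibliographic records (stand-ins for a Web of Science export).
records = [
    {"year": 2012, "keywords": ["twitter", "social media", "politics"]},
    {"year": 2015, "keywords": ["twitter", "sentiment analysis"]},
    {"year": 2019, "keywords": ["twitter", "machine learning", "sentiment analysis"]},
]

def top_keywords(records, periods, n=3):
    """Count keyword frequencies per temporal period, mimicking a
    'Most Relevant Keywords' analysis sliced across time."""
    out = {}
    for start, end in periods:
        counts = Counter(
            kw
            for r in records
            if start <= r["year"] <= end
            for kw in r["keywords"]
        )
        out[(start, end)] = counts.most_common(n)
    return out

print(top_keywords(records, [(2010, 2016), (2017, 2020)]))
```

Comparing the top-ranked keywords of each period is one simple way to surface the thematic evolution the study reports.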

Open AccessArticle
Language-Independent Fake News Detection: English, Portuguese, and Spanish Mutual Features
Future Internet 2020, 12(5), 87; https://doi.org/10.3390/fi12050087 - 11 May 2020
Cited by 4 | Viewed by 2124
Abstract
Online Social Media (OSM) have been substantially transforming the process of spreading news, improving its speed and lowering the barriers to reaching a broad audience. However, OSM provide very limited mechanisms for checking the credibility of news propagated through their structure. The majority of studies on automatic fake news detection are restricted to English documents, few works evaluate other languages, and none compare language-independent characteristics. Moreover, the spreading of deceptive news tends to be a worldwide problem; therefore, this work evaluates textual features that are not tied to a specific language when describing textual data for detecting news. Corpora of news written in American English, Brazilian Portuguese, and Spanish were explored to study complexity, stylometric, and psychological text features. The extracted features support the detection of fake, legitimate, and satirical news. We compared four machine learning algorithms (k-Nearest Neighbors (k-NN), Support Vector Machine (SVM), Random Forest (RF), and Extreme Gradient Boosting (XGB)) to induce the detection model. Results show our proposed language-independent features successfully describe fake, satirical, and legitimate news across three different languages, with an average detection accuracy of 85.3% with RF.
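As an illustration only (not the paper's pipeline or corpora), the idea of classifying texts by language-independent surface features can be sketched with a tiny 1-nearest-neighbour classifier in plain Python; the snippets, features and labels below are hypothetical.

```python
import math

def text_features(text):
    """Language-independent surface features: capitalization rate,
    exclamation rate, and type-token ratio -- toy stand-ins for the
    complexity/stylometric features described in the abstract."""
    words = text.split()
    ttr = len({w.lower() for w in words}) / len(words)
    caps = sum(c.isupper() for c in text) / len(text)
    excl = text.count("!") / len(text)
    return (caps, excl, ttr)

def nearest_label(train, x):
    """Classify x with the label of its nearest training point (1-NN)."""
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    return min(train, key=lambda item: dist(item[0], x))[1]

# Hypothetical labelled snippets (not from the paper's corpora).
train = [
    (text_features("Officials confirmed the report in a press briefing today."), "legitimate"),
    (text_features("SHOCKING!!! You won't believe what they are hiding!!!"), "fake"),
]
sample = text_features("UNBELIEVABLE!!! The secret they never wanted you to see!!!")
print(nearest_label(train, sample))  # → fake
```

Because none of these features depend on a particular vocabulary, the same extractor can be applied unchanged to English, Portuguese or Spanish text, which is the core idea behind the paper's cross-language comparison.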

Open AccessArticle
Artificial Intelligence Systems-Aided News and Copyright: Assessing Legal Implications for Journalism Practices
Future Internet 2020, 12(5), 85; https://doi.org/10.3390/fi12050085 - 08 May 2020
Cited by 1 | Viewed by 2233
Abstract
Automated news, or the artificial intelligence systems (AIS)-aided production of news items, has been developing since 2010. It comprises a variety of practices in which data, software and human intervention are involved to differing degrees. This can affect the application of intellectual property and copyright law in many ways. Using comparative legal methods, we examine the implications for several legal categories, such as authorship (and hence required originality) and types of works, namely collaborative, derivative and, most especially, collective works. Sui generis and neighboring rights are also examined as being applicable to AIS-aided news outputs. Our main conclusion is that the economic intellectual property rights are guaranteed in any case through collective works, and we propose a shorter term of duration before such works enter the public domain. There remains a place for more authorial, personal rights, although moral rights prove more difficult to accommodate, especially in common law countries.
Open AccessArticle
Aggregated Indices in Website Quality Assessment
Future Internet 2020, 12(4), 72; https://doi.org/10.3390/fi12040072 - 17 Apr 2020
Cited by 5 | Viewed by 2312
Abstract
Website users have increasingly high expectations regarding website quality, from performance through to content. This article provides a list and the characteristics of selected website quality indices and testing applications that are available free of charge. Aggregated website quality indices were characterised based on a review of various source materials, including the academic literature and Internet materials. Aggregated indices are usually developed with a less specialised user (customer) in mind who is searching for descriptive information; their presentation focuses on aesthetic appeal, and their values are most frequently expressed in points or percent. Many of these indices appear to be of little substantive value, as they present approximate, estimated values; they are, however, of great marketing value. Specific ("single") indices are of a specialised nature. They are more difficult to interpret and address the subtle aspects of website and web application functioning. They offer great value to designers and software developers, as they indicate the critical spots that affect website quality. Most of them are expressed precisely, often to two or three decimal places, in specific units. Algorithmic tests of website quality, whose results are presented using indices, reduce the cost of testing and allow an increase in the number and frequency of tests, as the tests are repetitive and their number is not limited. Moreover, they allow results to be compared.
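The contrast this abstract draws between aggregated and single indices can be illustrated with a toy weighted-mean aggregation; the metric names, scores and weights below are hypothetical, not those of any real testing application.

```python
def aggregated_index(scores, weights):
    """Collapse several single quality indices (each normalized to
    0-100) into one aggregated, percent-style score via a weighted
    mean -- the kind of descriptive figure aimed at non-specialist
    users, at the cost of hiding which specific aspect is weak."""
    total = sum(weights.values())
    return sum(scores[name] * w for name, w in weights.items()) / total

# Hypothetical single indices for one website.
scores = {"performance": 62.5, "accessibility": 90.0, "seo": 80.0}
weights = {"performance": 2, "accessibility": 1, "seo": 1}
print(aggregated_index(scores, weights))  # → 73.75
```

The single "performance" score of 62.5 pinpoints the critical spot for a developer, while the aggregated 73.75 reads as a comfortable overall grade, which is exactly the marketing-versus-diagnostic trade-off the article describes.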

Open AccessArticle
Know Your Customer (KYC) Implementation with Smart Contracts on a Privacy-Oriented Decentralized Architecture
Future Internet 2020, 12(2), 41; https://doi.org/10.3390/fi12020041 - 24 Feb 2020
Cited by 2 | Viewed by 3113
Abstract
Enterprise blockchain solutions attempt to solve the crucial matter of user privacy, even though blockchain was initially directed towards full transparency. In the context of Know Your Customer (KYC) standardization, a decentralized scheme that enables user privacy protection on enterprise blockchains is proposed, built on two types of smart contracts. Through the public KYC smart contract, a user registers and uploads their KYC information to IPFS storage, with these actions recorded as transactions on the permissioned blockchain of the Alastria Network. Through the same public contract, an admin user approves or rejects the validity and expiration date of the user's initial KYC documents. Inside the private KYC smart contract, CRUD (create, read, update and delete) operations on the KYC file repository take place. The presented system achieves effective and time-efficient operations through the simplicity of its scheme and the smart integration of the different technology modules and components. The design treats blockchain technology as the most important and critical part of the architecture and aims for optimal clarity of the scheme.
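The approval workflow the abstract outlines can be modelled very loosely in plain Python (not Solidity, and not the paper's actual contract interface; the class, method and field names here are hypothetical): a content hash stands in for the document stored off-chain on IPFS, and an admin check mimics the contract's access control.

```python
import hashlib
from datetime import date

class KYCRegistry:
    """Toy model of the public KYC contract's workflow: a user
    registers a document (only its hash is kept, mimicking an IPFS
    content identifier), then an admin approves or rejects it and
    sets an expiration date."""

    def __init__(self, admin):
        self.admin = admin
        self.records = {}  # user -> {"hash", "status", "expires"}

    def register(self, user, document: bytes):
        doc_hash = hashlib.sha256(document).hexdigest()
        self.records[user] = {"hash": doc_hash, "status": "pending", "expires": None}
        return doc_hash

    def review(self, caller, user, approve, expires=None):
        if caller != self.admin:
            raise PermissionError("only the admin may review KYC records")
        rec = self.records[user]
        rec["status"] = "approved" if approve else "rejected"
        rec["expires"] = expires if approve else None

reg = KYCRegistry(admin="alice")
reg.register("bob", b"passport scan")
reg.review("alice", "bob", approve=True, expires=date(2021, 2, 24))
print(reg.records["bob"]["status"])  # → approved
```

On the real system each of these calls would be a transaction on the permissioned Alastria Network, with the document itself living in IPFS rather than in memory.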

Other


Open AccessConcept Paper
Re-examining the Effect of Online Social Support on Subjective Well-Being: The Moderating Role of Experience
Future Internet 2020, 12(5), 88; https://doi.org/10.3390/fi12050088 - 15 May 2020
Viewed by 1505
Abstract
Building upon the perspectives of social capital theory, social support, and experience, this study developed a theoretical model to investigate the determinants of subjective well-being on social media, and examined the moderating role of experience in the relationship between social support and subjective well-being. Data collected from 267 social media users in Taiwan were used to test the proposed model; structural equation modeling was used to assess both the measurement model and the structural model. The findings reveal that receiving and providing online support are the key predictors of subjective well-being, and that social capital positively influences both the reception and provision of online support. Finally, providing online support has a significant effect on the subjective well-being of users with low levels of use experience, while receiving online support exerts a stronger influence on the subjective well-being of users with high levels of use experience.
