Next Issue
Volume 11, September
Previous Issue
Volume 11, July

Table of Contents

Future Internet, Volume 11, Issue 8 (August 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Cover Story: It is anticipated that Artificial General Intelligence (AGI) will be the disruptive technological [...]
Open Access Article
A Novel Task Caching and Migration Strategy in Multi-Access Edge Computing Based on the Genetic Algorithm
Future Internet 2019, 11(8), 181; https://doi.org/10.3390/fi11080181 - 20 Aug 2019
Viewed by 462
Abstract
Multi-access edge computing (MEC) brings high-bandwidth and low-latency access to applications distributed at the edge of the network. Data transmission and exchange become faster, and the overhead of task migration between mobile devices and the edge cloud becomes smaller. In this paper, we adopt a fine-grained task migration model. At the same time, in order to further reduce the delay and energy consumption of task execution, the concept of the task cache is proposed, which involves caching completed tasks and related data on the edge cloud. Then, we consider the limitations of the edge cloud cache capacity to study the task caching strategy and fine-grained task migration strategy on the edge cloud using the genetic algorithm (GA). Thus, we obtain the mobile device task migration strategy that minimizes energy consumption together with the optimal cache placement on the edge cloud. The simulation results show that the task caching strategy based on fine-grained migration can greatly reduce the energy consumption of mobile devices in the MEC environment. Full article
(This article belongs to the Section Network Virtualization and Edge/Fog Computing)
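The cache-placement side of the problem described in the abstract lends itself to a compact illustration. The following is a minimal sketch, not the paper's actual algorithm: a genetic algorithm searches binary chromosomes (bit i = cache task i on the edge cloud) for the selection that maximizes energy savings without exceeding cache capacity. All task sizes, savings values, and GA parameters here are invented for illustration.

```python
import random

def fitness(chromosome, sizes, savings, capacity):
    """Energy saved by caching the selected tasks; zero if over capacity."""
    used = sum(s for s, bit in zip(sizes, chromosome) if bit)
    if used > capacity:
        return 0.0
    return sum(g for g, bit in zip(savings, chromosome) if bit)

def evolve(sizes, savings, capacity, pop_size=30, generations=50, seed=1):
    rng = random.Random(seed)
    n = len(sizes)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        # elitist selection: keep the fitter half as survivors
        pop.sort(key=lambda c: fitness(c, sizes, savings, capacity), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)          # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n)               # single-bit mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda c: fitness(c, sizes, savings, capacity))

# four hypothetical tasks: cache sizes, energy savings, total capacity 9
best = evolve(sizes=[4, 3, 2, 5], savings=[7, 4, 3, 8], capacity=9, seed=1)
```

The same chromosome encoding extends naturally to joint caching-and-migration decisions by widening the gene alphabet.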

Open Access Article
Combined Self-Attention Mechanism for Chinese Named Entity Recognition in Military
Future Internet 2019, 11(8), 180; https://doi.org/10.3390/fi11080180 - 18 Aug 2019
Viewed by 568
Abstract
Military named entity recognition (MNER) is one of the key technologies in military information extraction. Traditional methods for the MNER task rely on cumbersome feature engineering and specialized domain knowledge. To solve this problem, we propose a method employing a bidirectional long short-term memory (BiLSTM) neural network with a self-attention mechanism to identify military entities automatically. We obtain distributed vector representations of the military corpus by unsupervised learning, and the BiLSTM model combined with the self-attention mechanism is adopted to fully capture the contextual information carried by the character vector sequence. The experimental results show that the self-attention mechanism can effectively improve the performance of the MNER task. The F-scores on military documents and online military texts were 90.15% and 89.34%, respectively, which is better than other models. Full article
(This article belongs to the Section Network Virtualization and Edge/Fog Computing)
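To make the self-attention component concrete, here is a minimal, dependency-free sketch: identity projections stand in for the learned query/key/value matrices of the paper's model, and the toy character vectors are invented. Each position's output is a softmax-weighted combination of every position in the sequence.

```python
import math

def softmax(xs):
    m = max(xs)                          # subtract max for numeric stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(seq):
    """Scaled dot-product self-attention with identity Q/K/V projections."""
    d = len(seq[0])
    out = []
    for q in seq:
        # attention scores of this position against every position
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in seq]
        w = softmax(scores)
        # context vector: weighted sum of all value vectors
        out.append([sum(wj * v[i] for wj, v in zip(w, seq)) for i in range(d)])
    return out

# three toy 2-dimensional character vectors
ctx = self_attention([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

In the paper's setting these context vectors would be computed over BiLSTM hidden states rather than raw embeddings.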

Open Access Article
Impact of Modern Virtualization Methods on Timing Precision and Performance of High-Speed Applications
Future Internet 2019, 11(8), 179; https://doi.org/10.3390/fi11080179 - 16 Aug 2019
Viewed by 664
Abstract
The presented work is the result of extended research and analysis on the precision of timing methods, their efficiency in different virtual environments, and the impact of timing precision on the performance of high-speed network applications. We investigated how timer hardware is shared among heavily CPU- and I/O-bound tasks on a virtualized OS as well as on a bare OS. By replacing the invoked timing methods within a well-known application for estimating available path bandwidth, we analyze their impact on estimation accuracy. We show that timer overhead and precision are crucial for high-performance network applications, and that the delays and overheads introduced by low-precision timing methods under virtualization degrade application performance in the virtual environment. Furthermore, we confirm that, by using the methods we developed for both precise timing operations and available bandwidth (AvB) estimation, it is possible to overcome the inefficiency of standard time-related operations and the overhead that comes with virtualization. These negative virtualization factors were investigated in five different environments to determine the optimal virtual environment for high-speed network applications. Full article
(This article belongs to the Section Network Virtualization and Edge/Fog Computing)

Open Access Article
Artificial Intelligence Imagery Analysis Fostering Big Data Analytics
Future Internet 2019, 11(8), 178; https://doi.org/10.3390/fi11080178 - 15 Aug 2019
Viewed by 818
Abstract
In an era of accelerating digitization and advanced big data analytics, harnessing quality data and insights will enable innovative research methods and management approaches. Among others, Artificial Intelligence Imagery Analysis has recently emerged as a new method for analyzing the content of large amounts of pictorial data. In this paper, we provide background information on Artificial Intelligence Imagery Analysis and outline its application to large collections of pictorial data. We suggest that it constitutes a profound improvement over previous methods, which have mostly relied on manual work by humans. We then discuss its applications for research and practice and provide an example of its use in research. In the case study, we employed Artificial Intelligence Imagery Analysis to decompose and assess thumbnail images in the context of marketing and media research, and show how properly assessed and designed thumbnail images promote the consumption of online videos. We conclude the paper with a discussion of the potential of Artificial Intelligence Imagery Analysis for research and practice across disciplines. Full article
(This article belongs to the Special Issue Big Data Analytics and Artificial Intelligence)

Open Access Article
RLXSS: Optimizing XSS Detection Model to Defend Against Adversarial Attacks Based on Reinforcement Learning
Future Internet 2019, 11(8), 177; https://doi.org/10.3390/fi11080177 - 14 Aug 2019
Viewed by 850
Abstract
With the development of artificial intelligence, machine learning and deep learning algorithms are widely applied in attack detection models. Adversarial attacks against such models become an inevitable problem, yet research on hardening cross-site scripting (XSS) attack detection models against them is lacking. It is therefore extremely important to design a method that can effectively improve the robustness of the detection model against attacks. In this paper, we present a reinforcement learning-based method (called RLXSS), which aims to optimize the XSS detection model to defend against adversarial attacks. First, adversarial samples of the detection model are mined by an adversarial attack model based on reinforcement learning. Secondly, the detection model and the adversarial model are trained alternately: after each round, the newly mined adversarial samples are marked as malicious samples and used to retrain the detection model. Experimental results show that the proposed RLXSS model can successfully mine adversarial samples that escape black-box and white-box detection while retaining aggressive features. Moreover, by alternately training the detection model and the adversarial attack model, the escape rate of the detection model is continuously reduced, which indicates that the approach improves the detection model's ability to defend against attacks. Full article
(This article belongs to the Section Cybersecurity)
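The alternating mine-and-retrain loop can be sketched with stand-in components: a memorizing detector and a fixed case-flip "evasion" replace the paper's deep detection model and reinforcement-learning adversary. Every name and payload below is illustrative, not from RLXSS.

```python
def alternate_training(detector_fit, detector_predict, mutate, seeds, rounds=3):
    """Alternately mine escaping (adversarial) samples and retrain the detector."""
    training = [(s, 1) for s in seeds]          # label 1 = malicious
    detector_fit(training)
    escapes_per_round = []
    for _ in range(rounds):
        # adversary step: mutate known payloads, keep those the detector misses
        adversarial = [mutate(s) for s in seeds]
        escaped = [a for a in adversarial if detector_predict(a) == 0]
        escapes_per_round.append(len(escaped))
        # defender step: mark escapes as malicious and retrain
        training += [(a, 1) for a in escaped]
        detector_fit(training)
    return escapes_per_round

class MemorizingDetector:
    """Toy detector that simply memorizes known malicious strings."""
    def __init__(self):
        self.known = set()
    def fit(self, labelled):
        self.known = {x for x, y in labelled if y == 1}
    def predict(self, x):
        return 1 if x in self.known else 0

def mutate(payload):
    # toy evasion: case-flip the payload (stand-in for the RL mutation agent)
    return payload.swapcase()

det = MemorizingDetector()
history = alternate_training(det.fit, det.predict, mutate,
                             ["<script>alert(1)</script>"])
```

Even in this toy form the key dynamic is visible: the first round's escapes vanish once they are folded back into the training set, mirroring the falling escape rate reported in the abstract.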

Open Access Article
Research on Factors Affecting Solvers’ Participation Time in Online Crowdsourcing Contests
Future Internet 2019, 11(8), 176; https://doi.org/10.3390/fi11080176 - 12 Aug 2019
Viewed by 627
Abstract
A crowdsourcing contest is one of the most popular modes of crowdsourcing and an important tool for enterprises to implement open innovation. Solvers' active participation is one of the major reasons for the success of crowdsourcing contests, and research on their participation behavior helps in understanding the sustainability and incentives of participation on online crowdsourcing platforms. How to attract more solvers to participate and invest more effort is therefore a focus of researchers. Previous studies mainly used submission quantity to measure solvers' participation behavior and lacked an effective measure of the degree of effort expended by a solver. For the first time, we use solvers' participation time as a dependent variable to measure their effort in a crowdsourcing contest, thus incorporating participation time into research on solver participation. With data from Taskcn.com, we analyze how participation time is affected by four groups of factors: task design, task description, task process, and environment. We found that, first, for task design, higher task rewards attract solvers to invest more time in the participation process, and the relationship between participation time and task duration is inverted U-shaped. Second, for task description, the length of the task description has a negative impact on participation time, while a task description attachment positively influences it. Third, for the task process, communication and supplementary explanations during a crowdsourcing process positively affect participation time. Fourth, for environmental factors, the task density of the crowdsourcing platform and the market price of all crowdsourcing contests have negative and positive effects on participation time, respectively. Full article
(This article belongs to the Special Issue Network Economics and Utility Maximization)

Open Access Article
Quality of Experience (QoE)-Aware Fast Coding Unit Size Selection for HEVC Intra-Prediction
Future Internet 2019, 11(8), 175; https://doi.org/10.3390/fi11080175 - 11 Aug 2019
Viewed by 677
Abstract
The exorbitant increase in the computational complexity of modern video coding standards, such as High Efficiency Video Coding (HEVC), is a compelling challenge for resource-constrained consumer electronic devices. For instance, the brute-force evaluation of all possible combinations of available coding modes and the quadtree-based coding structure in HEVC to determine the optimum set of coding parameters for a given content demands a substantial amount of computational and energy resources. Thus, the resource requirements for real-time operation of HEVC have become a contributing factor towards the Quality of Experience (QoE) of the end users of emerging multimedia and future internet applications. In this context, this paper proposes a content-adaptive Coding Unit (CU) size selection algorithm for HEVC intra-prediction. The proposed algorithm builds content-specific weighted Support Vector Machine (SVM) models in real time during the encoding process to provide an early estimate of CU size for a given content, avoiding the brute-force evaluation of all possible coding mode combinations in HEVC. The experimental results demonstrate an average encoding time reduction of 52.38%, with an average Bjøntegaard Delta Bit Rate (BDBR) increase of 1.19% compared to the HM16.1 reference encoder. Furthermore, perceptual visual quality assessments conducted with the Video Quality Metric (VQM) show minimal visual quality impact on the reconstructed videos of the proposed algorithm compared to state-of-the-art approaches. Full article
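The early CU-size decision can be illustrated with a toy stand-in: a cheap texture feature (sample variance of the block) fed to a fixed linear decision function takes the place of the paper's online-trained weighted SVM. The weights, bias, and sample blocks below are invented for illustration only.

```python
def variance(block):
    """Sample variance of a flat list of luma values (a cheap texture feature)."""
    n = len(block)
    mean = sum(block) / n
    return sum((x - mean) ** 2 for x in block) / n

def early_cu_decision(block, w=0.05, b=-1.0):
    """Linear decision stand-in for a trained weighted SVM:
    score > 0 -> 'split' (complex texture, try smaller CUs),
    otherwise  -> 'no-split' (keep the large CU, skip deeper evaluation)."""
    score = w * variance(block) + b
    return "split" if score > 0 else "no-split"

flat = [100] * 64                                   # homogeneous 8x8 block
noisy = [100 + (17 * i) % 50 for i in range(64)]    # textured 8x8 block
```

Skipping the exhaustive quadtree search whenever the classifier is confident is what yields the encoding-time reduction reported in the abstract; the real system trains and reweights such models per content during encoding.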

Open Access Review
A Systematic Analysis of Real-World Energy Blockchain Initiatives
Future Internet 2019, 11(8), 174; https://doi.org/10.3390/fi11080174 - 10 Aug 2019
Viewed by 749
Abstract
The application of blockchain technology to the energy sector promises to derive new operating models focused on local generation and sustainable practices, which are driven by peer-to-peer collaboration and community engagement. However, real-world energy blockchains differ from typical blockchain networks insofar as they must interoperate with grid infrastructure, adhere to energy regulations, and embody engineering principles. Naturally, these additional dimensions make real-world energy blockchains highly dependent on the participation of grid operators, engineers, and energy providers. Although much theoretical and proof-of-concept research has been published on energy blockchains, this research aims to establish a lens on real-world projects and implementations that may inform the alignment of academic and industry research agendas. This research classifies 131 real-world energy blockchain initiatives to develop an understanding of how blockchains are being applied to the energy domain, what type of failure rates can be observed from recently reported initiatives, and what level of technical and theoretical details are reported for real-world deployments. The results presented from the systematic analysis highlight that real-world energy blockchains are (a) growing exponentially year-on-year, (b) producing relatively low failure/drop-off rates (~7% since 2015), and (c) demonstrating information sharing protocols that produce content with insufficient technical and theoretical depth. Full article
(This article belongs to the Special Issue Blockchain: Current Challenges and Future Prospects/Applications)

Open Access Feature Paper Article
Mars to Earth Data Downloading: A Directory Synchronization Approach
Future Internet 2019, 11(8), 173; https://doi.org/10.3390/fi11080173 - 08 Aug 2019
Viewed by 663
Abstract
This paper aims to present a possible alternative to direct file transfer in “challenged networks”, by using DTNbox, a recent application for peer-to-peer directory synchronization between DTN nodes. This application uses the Bundle Protocol (BP) to tackle long delays and link intermittency typical of challenged networks. The directory synchronization approach proposed in the paper consists of delegating the transmission of bulk data files to DTNbox, instead of modifying source applications to interface with the API of a specific BP implementation, or making use of custom scripts for file transfers. The validity of the proposed approach is investigated in the paper by considering a Mars to Earth interplanetary environment. Experiments are carried out by means of Virtual Machines running ION, the NASA-JPL implementation of DTN protocols. The results show that the directory synchronization approach is a valid alternative to direct transfer in interplanetary scenarios such as that considered in the paper. Full article
(This article belongs to the Special Issue Delay-Tolerant Networking)

Open Access Feature Paper Article
Scheduling for Multi-User Multi-Input Multi-Output Wireless Networks with Priorities and Deadlines
Future Internet 2019, 11(8), 172; https://doi.org/10.3390/fi11080172 - 05 Aug 2019
Viewed by 681
Abstract
The spectral efficiency of wireless networks can be significantly improved by exploiting spatial multiplexing techniques known as multi-user MIMO. These techniques enable the allocation of multiple users to the same time-frequency block, thus reducing the interference between users. There is ample evidence that user groupings can have a significant impact on the performance of spatial multiplexing. The situation is even more complex when the data packets have priorities and deadlines for delivery. Hence, combining packet queue management and beamforming can considerably enhance the overall system performance. In this paper, we propose a combination of beamforming and scheduling to improve the overall performance of multi-user MIMO systems under realistic conditions where data packets have both priorities and deadlines beyond which they become obsolete. This method, dubbed Reward Per Second (RPS), combines advanced matrix factorization at the physical layer with recently developed queue management techniques. We demonstrate the merits of this technique compared to other state-of-the-art scheduling methods through simulations. Full article
(This article belongs to the Special Issue Signal Processing for Next Generation Wireless Networks)

Open Access Article
Modeling of Cumulative QoE in On-Demand Video Services: Role of Memory Effect and Degree of Interest
Future Internet 2019, 11(8), 171; https://doi.org/10.3390/fi11080171 - 04 Aug 2019
Viewed by 866
Abstract
The growing demand for video streaming services increasingly motivates the development of reliable and accurate models for the assessment of Quality of Experience (QoE). Human-related factors, which have a significant influence on QoE, play a crucial role in this task. However, the complexity caused by the multiple effects of those factors on human perception poses challenges for contemporary studies. In this paper, we inspect the impact of human-related factors, namely perceptual factors, the memory effect, and the degree of interest. Based on our investigation, a novel QoE model is proposed that effectively incorporates those factors to reflect the user's cumulative perception. Evaluation results indicate that the proposed model performs excellently in predicting cumulative QoE at any moment within a streaming session. Full article

Open Access Article
Artificial Intelligence Implementations on the Blockchain. Use Cases and Future Applications
Future Internet 2019, 11(8), 170; https://doi.org/10.3390/fi11080170 - 02 Aug 2019
Viewed by 5520
Abstract
An exemplary paradigm of how an AI can be a disruptive technological paragon via the utilization of blockchain comes straight from the world of deep learning. Data scientists have long struggled to maintain the quality of a dataset for machine learning by an AI entity. Datasets can be very expensive to purchase, as, depending on both the proper selection of the elements and the homogeneity of the data contained within, constructing and maintaining the integrity of a dataset is difficult. Blockchain as a highly secure storage medium presents a technological quantum leap in maintaining data integrity. Furthermore, blockchain’s immutability constructs a fruitful environment for creating high quality, permanent and growing datasets for deep learning. The combination of AI and blockchain could impact fields like Internet of things (IoT), identity, financial markets, civil governance, smart cities, small communities, supply chains, personalized medicine and other fields, and thereby deliver benefits to many people. Full article
(This article belongs to the Special Issue Blockchain: Current Challenges and Future Prospects/Applications)

Open Access Article
An Image Feature-Based Method for Parking Lot Occupancy
Future Internet 2019, 11(8), 169; https://doi.org/10.3390/fi11080169 - 01 Aug 2019
Viewed by 784
Abstract
The main scope of the presented research was the development of an innovative product for the management of city parking lots. Our application supports the Smart City concept by using computer vision and communication platforms, which enable the development of new integrated digital services. The use of video cameras can simplify and lower the costs of parking lot control. For parking space detection, an aggregated decision is proposed, employing various metrics computed over a sliding window of frames provided by the camera. A history built over 20 images provides an adaptive background model and accurate detection. The system has shown high robustness on two benchmarks, achieving a recognition rate higher than 93%. Full article
(This article belongs to the Section Smart System Infrastructure and Applications)
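The sliding-window background idea from the abstract can be sketched per parking spot. A mean intensity over the last 20 frames stands in for the paper's richer image metrics; the window length matches the abstract, but the threshold and pixel values are invented.

```python
from collections import deque

class SpotMonitor:
    """Adaptive background model over the last 20 frames (one mean
    intensity per parking spot), flagging occupancy on large deviation."""
    def __init__(self, window=20, threshold=25.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def update(self, intensity):
        occupied = False
        if len(self.history) == self.history.maxlen:
            background = sum(self.history) / len(self.history)
            occupied = abs(intensity - background) > self.threshold
        if not occupied:            # only empty-looking frames update the background
            self.history.append(intensity)
        return occupied

spot = SpotMonitor()
empties = [spot.update(80.0) for _ in range(20)]    # empty asphalt frames
car = spot.update(150.0)                            # a car arrives
```

Because the model keeps adapting while the spot is empty, slow illumination changes are absorbed into the background instead of triggering false detections.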

Open Access Article
Latency-Based Dynamic Controller Assignment in Hybrid SDNs: Considering the Impact of Legacy Routers
Future Internet 2019, 11(8), 168; https://doi.org/10.3390/fi11080168 - 28 Jul 2019
Viewed by 808
Abstract
Software-defined networking (SDN) is a modern network architecture that separates the network control plane from the data plane. Considering the gradual migration from traditional networks to SDNs, the hybrid SDN, which consists of SDN-enabled devices and legacy devices, is an intermediate state. For wide-area hybrid SDNs, to guarantee control performance such as low latency, multiple SDN controllers usually need to be deployed at different places. How to assign them to switches and partition the network into several control domains is a critical problem. For this problem, the control latency and the packet loss rate of control messages are important metrics, which have been considered in many previous works. However, hybrid SDNs have unique characteristics that affect the assignment scheme and have been ignored by previous studies. For example, control messages pass through Legacy Forwarding Devices (LFDs) in hybrid SDNs and incur higher latency and packet loss due to queuing compared with SDN-enabled Forwarding Devices (SFDs). In this paper, we propose a dynamic controller assignment scheme for hybrid SDNs, called Legacy Based Assignment (LBA), which dynamically delegates each controller with a subset of SFDs with the objective of minimizing the average SFD-to-controller latency. Experiments comparing LBA with other schemes show that it performs better in terms of latency and packet loss rate. Full article

Open Access Review
A Hybrid Adaptive Transaction Injection Protocol and Its Optimization for Verification-Based Decentralized System
Future Internet 2019, 11(8), 167; https://doi.org/10.3390/fi11080167 - 27 Jul 2019
Viewed by 743
Abstract
Latency is a critical issue that impacts the performance of decentralized systems. Recently, we designed various protocols to regulate the injection rate of unverified transactions into the system to improve system performance. Each of the protocols is designed to address issues related to a particular network traffic syndrome. In this work, we first provide a review of our prior protocols. We then provide a hybrid scheme that combines our transaction injection protocols and provides an optimal linear combination of the protocols based on the syndromes in the network. The goal is to speed up the verification process of systems that rely on only one single basic protocol. The underlying basic protocols are Periodic Injection of Transaction via Evaluation Corridor (PITEC), Probabilistic Injection of Transactions (PIT), and Adaptive Semi-synchronous Transaction Injection (ASTI). Full article

Open Access Article
Software Defined Wireless Mesh Network Flat Distribution Control Plane
Future Internet 2019, 11(8), 166; https://doi.org/10.3390/fi11080166 - 25 Jul 2019
Viewed by 800
Abstract
Wireless Mesh Networks (WMNs) have the potential to offer relatively stable broadband Internet access. The rapid development and growth of WMNs attract ISPs seeking to support users' coverage anywhere, anytime. To achieve this goal, the network architecture must be addressed carefully. Software Defined Networking (SDN) proposes a new network architecture for wired and wireless networks, and Software Defined Wireless Networking (SDWN) has great potential to increase efficiency, ease the complexity of control and management, and accelerate the rate of technology innovation in wireless networking. An SDN controller is the core component of an SDN network. It needs updated reports of network status changes, such as network topology and quality of service (QoS) metrics, in order to effectively configure and manage the network it controls. In this paper, we propose a Flat Distributed Software Defined Wireless Mesh Network architecture in which the controller aggregates the entire topology discovery and monitors the QoS properties of extended WMN nodes using the Link Layer Discovery Protocol (LLDP), which is not possible in ordinary multi-hop architectures. The proposed architecture has been implemented on top of the POX controller and the Advanced Message Queuing Protocol (AMQP). The experiments were conducted in the Mininet-wifi emulator; the results demonstrate the consistency of the architecture's control plane and two application cases: topology discovery and QoS monitoring. The current results motivate the study of QoS routing for video streaming over WMNs. Full article

Open Access Article
Social Emotional Opinion Decision with Newly Coined Words and Emoticon Polarity of Social Networks Services
Future Internet 2019, 11(8), 165; https://doi.org/10.3390/fi11080165 - 25 Jul 2019
Viewed by 795
Abstract
Nowadays, with mobile devices and the Internet, social network services (SNS) are common to everyone, and the social opinions expressed on them are very important to governments, companies, and individuals. Analyzing and deciding the polarity of SNS posts about social happenings, political issues, government policies, or commercial products is therefore critical. Newly coined words and emoticons are created on SNS every day: emoticons are made and sold by individuals or companies, while newly coined words mostly arise in various kinds of communities. Since SNS big data mainly consist of normal text mixed with newly coined words and emoticons, analyzing both is essential to understanding social and public opinion. Social big data is informally produced and unstructured; the newly coined words implicitly carry people's opinions and trends, and emoticons strongly express their emotional states. Although they are an important part of social opinion analysis, newly coined words and emoticons have typically been excluded from emotional dictionaries and social big data analysis, which limits the accuracy of that analysis. In this research, newly coined words and emoticons are extracted from raw Twitter messages, analyzed, and added to a pre-built dictionary with their polarity and weight, which are then used for emotional classification. The proposed classification algorithm sums the weights of positive and negative polarity into a total polarity weight for a social opinion: if the total exceeds a pre-fixed threshold, the tweet is decided as positive; if it falls below the threshold, the tweet is decided as negative; and other values mean a neutral opinion. The accuracy of the social big data analysis result is improved by quantifying and analyzing emoticons and newly coined words. Full article
(This article belongs to the Special Issue Selected Papers from IEEE ICICT 2019)
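The threshold decision described in the abstract can be sketched as a weighted lexicon lookup. The lexicon entries, weights, and thresholds below are invented for illustration and are not the paper's learned values.

```python
def classify_tweet(tokens, lexicon, pos_threshold=1.0, neg_threshold=-1.0):
    """Sum the polarity weights of known words, newly coined words, and
    emoticons; decide positive/negative/neutral against fixed thresholds."""
    total = sum(lexicon.get(t, 0.0) for t in tokens)   # unknown tokens weigh 0
    if total > pos_threshold:
        return "positive"
    if total < neg_threshold:
        return "negative"
    return "neutral"

# toy lexicon mixing ordinary words, a coined word, and emoticons
# (illustrative weights, not mined from real Twitter data)
lexicon = {"good": 1.0, "bad": -1.0, "daebak": 1.5, "T_T": -1.2, ":)": 0.8}
```

In the paper's pipeline, the coined-word and emoticon entries and their weights are mined from raw tweets and merged into a pre-built dictionary before this classification step.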

Open Access Article
Energy Efficient Communications for Reliable IoT Multicast 5G/Satellite Services
Future Internet 2019, 11(8), 164; https://doi.org/10.3390/fi11080164 - 25 Jul 2019
Viewed by 840
Abstract
Satellites can provide strong added value and complementarity to the new fifth generation (5G) cellular system in cost-effective solutions for a massive number of users/devices/things. Due to the inherent broadcast nature of satellite communications, which assures access to remote areas and supports a very large number of devices, satellite systems will gain a major role in the development of the Internet of Things (IoT) sector. In this vision, reliable multicast services via satellite can be provided to deliver the same content efficiently to multiple devices on Earth, for software updates to groups of cars in the Machine-to-Machine (M2M) context, or for sending control messages to actuators/IoT embedded devices. The paper focuses on Network Coding (NC) techniques applied to a hybrid satellite/terrestrial network to support reliable multicast services. An energy optimization method is proposed based on the joint adaptation of: (i) the repetition factor of data symbols on multiple subcarriers of the transmitted orthogonal frequency division multiplexing (OFDM) signal; and (ii) the mean number of needed coded packets according to the requirements of each group and the physical satellite link conditions. Full article
(This article belongs to the Special Issue Satellite Communications in 5G Networks)
