Topic Editors

Department of Electronic Engineering, National Formosa University, Yunlin City 632, Taiwan
Director of the Cognitions Humaine et Artificielle Laboratory and Professor of Cognitive Psychology, Université Paris 8, France
Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan
Department of Recreation and Health Care Management, Chia Nan University of Pharmacy & Science, Tainan City 71710, Taiwan
Department of Digital Media Design, National Yunlin University of Science and Technology, Yunlin 640, Taiwan
Department of Electrical Engineering, Lunghwa University of Science and Technology, Taoyuan 333, Taiwan

Electronic Communications, IOT and Big Data

Abstract submission deadline: 30 September 2023
Manuscript submission deadline: 30 November 2023

Topic Information

Dear Colleagues,

The 2nd IEEE International Conference on Electronic Communications, Internet of Things and Big Data (ICEIB 2022) will be held in Hsinchu, Taiwan, from July 15 to 17, 2022 (http://www.iceib.asia/). It will provide a communication platform for industry professionals and researchers working on electronic communications, the Internet of Things, and big data. Asia's booming economic development, and especially its advanced technology in electronic communications, the Internet of Things, and big data, has attracted great attention from universities, research institutions, and companies. The conference will focus on research with innovative ideas, results, and practical applications. Topics of interest include, but are not limited to, the following:

I. Big Data and Cloud Computing:

1) Models and algorithms of big data;

2) Architecture of big data;

3) Big data management;

4) Big data analysis and processing;      

5) Security and privacy of big data;  

6) Big data in smart cities; 

7) Search, mining and visualization of big data;  

8) Technologies, services and application of big data; 

9) Edge computing;

10) Architectures and systems of cloud computing;  

11) Models, simulations, designs and paradigms of cloud computing;  

12) Management and operations of cloud computing;

13) Technologies, services and applications of cloud computing;

14) Dynamic resource supply and consumption;  

15) Management and analysis of geospatial big data;

16) UAV oblique photography and ground 3D real scene modeling;

17) Aerial photography of UAV loaded with multispectral sensors.

II. Technologies and Applications of Artificial Intelligence:

1) Basic theory and application of Artificial Intelligence;

2) Knowledge science and knowledge engineering;    

3) Machine learning and data mining;  

4) Machine perception and virtual reality;    

5) Natural language processing and understanding;  

6) Neural networks and deep learning;   

7) Pattern recognition theory and application;   

8) Rough set and soft computing;

9) Biometric identification;  

10) Computer vision and image processing;  

11) Evolutionary computation;

12) Information retrieval and web search;  

13) Intelligent planning and scheduling;

14) Intelligent control;

15) Classification and change detection of remote sensing images or aerial images.   

III. Robotics Science and Engineering:   

1) Robot control;  

2) Mobile robotics;  

3) Intelligent eldercare robots;

4) Mobile sensor networks;   

5) Perception systems;     

6) Micro robots and micro-manipulation;      

7) Visual servoing;

8) Search, rescue and field robotics;     

9) Robot sensing and data fusion;      

10) Indoor localization and navigation;    

11) Dexterous manipulation;     

12) Medical robots and bio-robotics;      

13) Human-centered systems;

14) Space and underwater robots;     

15) Tele-robotics.

IV. Internet of Things and Sensor Technology:

1) Technology architecture of Internet of Things;

2) Sensors in Internet of Things;

3) Perception technology of Internet of Things information;

4) Multi-terminal cooperative control and intelligent Internet of Things terminals;

5) Multi-network resource sharing in the environment of Internet of Things;  

6) Heterogeneous fusion and multi-domain collaboration in the Internet of Things environment;

7) SDN and intelligent service network;

8) Key technologies and their applications in the Internet of Things;

9) Cloud computing and big data in the Internet of Things;  

10) Information analysis and processing of the Internet of Things; 

11) CPS technology and intelligent information system;  

12) Internet of Things technology standard;

13) Internet of Things information security;

14) Narrow Band Internet of Things (NB-IoT);

15) Smart cities;

16) Smart farming;

17) Smart grids;  

18) Digital health/telehealth/telemedicine.

V. Special Session: Intelligent Big Data Analysis and Applications

1) Big data and its application;

2) Data mining and its application;

3) Cloud computing and its application;

4) Deep learning and its application;

5) Fuzzy theory and its application;

6) Evolutionary computing and its application.

Prof. Dr. Teen-Hang Meen
Prof. Dr. Charles Tijus
Prof. Dr. Cheng-Chien Kuo
Prof. Dr. Kuei-Shu Hsu
Prof. Dr. Kuo-Kuang Fan
Prof. Dr. Jih-Fu Tu
Topic Editors

Keywords

  • electronic communications
  • Internet of Things
  • Big Data

Participating Journals

Journal Name                                     Impact Factor   CiteScore   Launched Year   First Decision (median)   APC
Applied Sciences (applsci)                       2.838           3.7         2011            14.9 days                 2300 CHF
Big Data and Cognitive Computing (BDCC)          -               6.1         2017            17.2 days                 1600 CHF
Computers (computers)                            -               3.7         2012            14.6 days                 1600 CHF
Electronics (electronics)                        2.690           3.7         2012            14.4 days                 2000 CHF
Journal of Sensor and Actuator Networks (jsan)   -               6.9         2012            18.4 days                 1600 CHF

Preprints is a platform dedicated to making early versions of research outputs permanently available and citable. MDPI journals allow posting on preprint servers such as Preprints.org prior to publication. For more details about preprints, please visit https://www.preprints.org.

Published Papers (17 papers)

Article
Resource Allocation and Trajectory Optimization in OTFS-Based UAV-Assisted Mobile Edge Computing
Electronics 2023, 12(10), 2212; https://doi.org/10.3390/electronics12102212 - 12 May 2023
Abstract
Mobile edge computing (MEC) powered by unmanned aerial vehicles (UAVs), with the advantages of flexible deployment and wide coverage, is a promising technology to solve computationally intensive communication problems. In this paper, an orthogonal time frequency space (OTFS)-based UAV-assisted MEC system is studied, in which OTFS technology is used to mitigate the Doppler effect in UAV high-speed mobile communication. The weighted total energy consumption of the system is minimized by jointly optimizing the time division, CPU frequency allocation, transmit power allocation and flight trajectory while considering Doppler compensation. The resultant problem is a challenging nonconvex problem. We propose a joint algorithm that combines the benefits of the atomic orbital search (AOS) algorithm and convex optimization. First, an improved AOS algorithm is proposed to swiftly obtain the time slot allocation and a high-quality solution for the optimal UAV path. Second, the optimal solution for the CPU frequency and transmit power allocation is found by using Lagrangian duality and the first-order Taylor formula. Finally, the optimal solution of the original problem is obtained iteratively. The simulation results show that the weighted total energy consumption of the OTFS-based system decreases by 13.6% compared with the orthogonal frequency division multiplexing (OFDM)-based system. The weighted total energy consumption of the proposed algorithm decreases by 11.7% and 26.7% compared with convex optimization and heuristic algorithms, respectively.

Article
Characteristics Mode Analysis-Inspired Compact UWB Antenna with WLAN and X-Band Notch Features for Wireless Applications
J. Sens. Actuator Netw. 2023, 12(3), 37; https://doi.org/10.3390/jsan12030037 - 23 Apr 2023
Abstract
A compact circular structured monopole antenna for ultrawideband (UWB) and UWB dual-band notch applications is designed and fabricated on an FR4 substrate. The UWB antenna has a hybrid configuration of a circle and three ellipses as the radiating plane and a less-than-quarter lowered ground plane. The overall dimensions of the proposed antenna are 16 × 11 × 1.6 mm³, with a −10 dB impedance bandwidth of 113% (3.7–13.3 GHz). Further, two frequency band notches were created using two inverted U-shaped slots on the radiator. These slots notch the frequency bands of 5–5.6 GHz and 7.3–8.3 GHz, covering IEEE 802.11, Wi-Fi, WLAN, and the entire X-band satellite communication. A comprehensive frequency and time domain analysis is performed to validate the effectiveness of the proposed antenna design. In addition, a circuit model of the proposed antenna design is built, and its performance is evaluated. Furthermore, unlike the traditional technique, which uses the simulated surface current distribution to verify functioning, characteristic mode analysis (CMA) is used to provide deeper insight into distinct modes on the antenna.

Article
A Quadruple Notch UWB Antenna with Decagonal Radiator and Sierpinski Square Fractal Slots
J. Sens. Actuator Netw. 2023, 12(2), 24; https://doi.org/10.3390/jsan12020024 - 14 Mar 2023
Abstract
A novel quadruple-notch UWB (ultrawideband) antenna for wireless applications is presented. The antenna consists of a decagonal-shaped radiating part with Sierpinski square fractal slots up to iteration 3. The ground part is truncated and loaded with stubs and slots. Each individual stub at the ground plane creates/controls a particular notch band. Initially, a UWB antenna is designed with the help of truncation at the ground plane. Miniaturization in this design is achieved with the help of Sierpinski square fractal slots. Additionally, these slots help improve the UWB impedance bandwidth. This design is then extended to achieve a quadruple notch by loading the ground with various rectangular-shaped stubs. The final antenna shows a UWB range from 4.21 to 13.92 GHz and notch frequencies at 5.02 GHz (C-band), 7.8 GHz (satellite band), and 9.03 and 10.86 GHz (X-band). The simulated and measured results are nearly identical, which shows the efficacy of the proposed design.

Article
Automated and Optimized Regression Model for UWB Antenna Design
J. Sens. Actuator Netw. 2023, 12(2), 23; https://doi.org/10.3390/jsan12020023 - 10 Mar 2023
Abstract
Antenna design involves continuously optimizing antenna parameters to meet the desired requirements. Since the process is manual, laborious, and time-consuming, a surrogate model based on machine learning provides an effective solution. The conventional approach for selecting antenna parameters is mapped to a regression problem to predict the antenna performance in terms of S parameters. In this regard, a heuristic approach is employed using an optimized random forest model. The design parameters are obtained from an ultrawideband (UWB) antenna simulated using the high-frequency structure simulator (HFSS). The designed antenna is an embedded structure consisting of a circular monopole with a rectangle. The ground plane of the proposed antenna is reduced to realize a wider impedance bandwidth; the lowered ground plane creates a new current channel that affects the uniform current distribution and helps in achieving the wider impedance bandwidth. Initially, the data were preprocessed, and feature extraction was performed using additive regression. Further, ten different regression models with optimized parameters were used to determine the best values for the antenna design. The proposed method was evaluated by splitting the dataset into train and test data in the ratio of 60:40 and by employing a ten-fold cross-validation scheme. A correlation coefficient of 0.99 was obtained using the optimized random forest model.
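
As a rough illustration of the surrogate-modeling workflow described above, the sketch below fits a random forest regressor on tabular design data with a 60:40 split and ten-fold cross-validation. The data here is synthetic; in the paper's setting the features would be HFSS-swept geometry parameters and the target the simulated S-parameter response.

```python
# Minimal surrogate-model sketch: random forest regression over antenna
# design parameters. All data below is a synthetic stand-in, not from the paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split, cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(size=(400, 6))                                # stand-in geometry parameters
y = X @ rng.uniform(size=6) + 0.1 * rng.standard_normal(400)  # stand-in |S11| target

# 60:40 train/test split, as in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.6, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("test R^2:", model.score(X_te, y_te))
# Ten-fold cross-validation, also as in the abstract.
print("10-fold R^2:", cross_val_score(model, X, y, cv=10).mean())
```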

Article
Application of Somatosensory Computer Game for Nutrition Education in Preschool Children
Computers 2023, 12(1), 20; https://doi.org/10.3390/computers12010020 - 16 Jan 2023
Abstract
With the popularization of technological products, people's everyday lives are now full of 3C (computer, communication, and consumer electronics) products, and children have gradually become acquainted with these new technologies. In recent years, more somatosensory games have been introduced along with the development of new media puzzle games for children. Several studies have shown that somatosensory games can improve physical, brain, and sensory integrated development in children, as well as promoting parent–child and peer interactions and enhancing children's attention and cooperation in play. The purpose of this study is to assess the effect of integrating somatosensory computer games into early childhood nutrition education. The subjects of this study were 15 preschool children (aged 5–6 years old) from a preschool in Taichung City, Taiwan. We used the somatosensory game "Arno's Fruit and Vegetable Journey" as an intervention tool for early childhood nutrition education. The somatosensory game was produced using Scratch combined with Rabboni sensors. The somatosensory game education intervention was carried out for one hour a week over two consecutive weeks. We used questionnaires and nutrition knowledge learning sheets to evaluate the children's nutrition knowledge, learning status, and satisfaction in the first and second weeks of the study. The results showed no statistically significant differences in the preschool children's game scores, game times, or nutritional knowledge scores before and after the intervention. Most of the preschool children highly enjoyed the somatosensory game educational activities. We reveal some problems in the teaching activities of somatosensory games, which can provide a reference for future research on designing and producing somatosensory games for preschool children and somatosensory game-based education.

Article
Research on Multi-Agent D2D Communication Resource Allocation Algorithm Based on A2C
Electronics 2023, 12(2), 360; https://doi.org/10.3390/electronics12020360 - 10 Jan 2023
Abstract
Device-to-device (D2D) communication technology is a main component of future communication that greatly improves the utilization of spectrum resources. However, in D2D multiplexed communication networks, interference between communication links is serious and system performance is degraded. Traditional resource allocation schemes need a lot of channel information when dealing with interference in the system, and they suffer from weak dynamic resource allocation capability and low system throughput. To address this challenge, this paper proposes a multi-agent D2D communication resource allocation algorithm based on Advantage Actor Critic (A2C). First, a multi-D2D cellular communication system model based on A2C is established; then the parameters of the actor network and the critic network in the system are updated; finally, the resource allocation scheme for D2D users is output dynamically and adaptively. The simulation results show that, compared with DQN (deep Q-network) and MAAC (multi-agent actor–critic), the average throughput of the system is improved by 26% and 12.5%, respectively.
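
A minimal single-transition A2C update, sketched below in PyTorch, shows the actor/critic parameter updates the abstract refers to; the state and action sizes, network shapes, and reward are illustrative placeholders, not the paper's D2D system model.

```python
# Minimal A2C update sketch: one actor step and one critic step per transition.
import torch
import torch.nn as nn

n_obs, n_act = 16, 8        # hypothetical state/action sizes for illustration
actor = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(), nn.Linear(64, n_act))
critic = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-3)

def a2c_step(s, a, r, s_next, gamma=0.99):
    v, v_next = critic(s), critic(s_next).detach()
    advantage = r + gamma * v_next - v                  # TD advantage estimate
    logp = torch.log_softmax(actor(s), dim=-1)[..., a]  # log pi(a|s)
    actor_loss = -(logp * advantage.detach()).mean()    # policy-gradient term
    critic_loss = advantage.pow(2).mean()               # value-regression term
    opt.zero_grad()
    (actor_loss + critic_loss).backward()
    opt.step()

a2c_step(torch.randn(n_obs), a=3, r=torch.tensor(1.0), s_next=torch.randn(n_obs))
```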

Article
Photoplethysmography Data Reduction Using Truncated Singular Value Decomposition and Internet of Things Computing
Electronics 2023, 12(1), 220; https://doi.org/10.3390/electronics12010220 - 02 Jan 2023
Abstract
Biometric-based identity authentication is integral to modern-day technologies. From smart phones, personal computers, and tablets to security checkpoints, they all utilize a form of identity check based on methods such as face recognition and fingerprint verification. Photoplethysmography (PPG) is another form of biometric-based authentication that has recently been gaining momentum, because it is effective and easy to implement. This paper considers a cloud-based system model for PPG authentication, where the PPG signals of various individuals are collected with distributed sensors and communicated to the cloud for authentication. Such a model incurs large signal traffic, especially in crowded places such as airport security checkpoints. This motivates the need for a compression–decompression scheme (or a Codec for short). The Codec is required to reduce the data traffic by compressing each PPG signal before it is communicated, i.e., encoding the signal right after it comes off the sensor and before it is sent to the cloud to be reconstructed (i.e., decoded). Therefore, the Codec has two system requirements to meet: (i) produce high-fidelity signal reconstruction; and (ii) have a computationally lightweight encoder. Both requirements are met by the Codec proposed in this paper, which is designed using truncated singular value decomposition (T-SVD). The proposed Codec is developed and tested using a publicly available dataset of PPG signals collected from multiple individuals, namely the CapnoBase dataset. It is shown to achieve a 95% compression ratio and a 99% coefficient of determination. This means that the Codec delivers high-fidelity reconstruction while producing highly compressed signals, and those compressed signals do not require heavy computations to produce. An implementation of the encoder on a single-board computer shows that it averages 300 milliseconds per signal on a Raspberry Pi 3, which is enough time to encode a PPG signal prior to transmission to the cloud.
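
The encoder/decoder split described above can be sketched in a few lines of NumPy: a truncated-SVD basis is learned offline, the sensor side projects each frame onto k basis vectors (one matrix-vector product), and the cloud side reconstructs. The frame length, k, and the random stand-in data are illustrative; real PPG frames are highly structured, which is why a small k can retain high fidelity.

```python
# Minimal truncated-SVD codec sketch. Random data is a shape placeholder only.
import numpy as np

rng = np.random.default_rng(0)
train = rng.standard_normal((500, 1000))   # stand-in: 500 PPG frames, 1000 samples each

k = 50                                     # 50 coefficients per 1000-sample frame: 95% compression
_, _, Vt = np.linalg.svd(train, full_matrices=False)
basis = Vt[:k]                             # truncated right-singular vectors, shape (k, 1000)

def encode(x):                             # sensor side: lightweight projection
    return basis @ x

def decode(c):                             # cloud side: reconstruct the frame
    return basis.T @ c

x = train[0]
x_hat = decode(encode(x))
r2 = 1 - np.sum((x - x_hat) ** 2) / np.sum((x - x.mean()) ** 2)
print("coefficient of determination:", r2)  # approaches 1 on structured PPG data
```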

Article
A Low-Latency Fair-Arbiter Architecture for Network-on-Chip Switches
Appl. Sci. 2022, 12(23), 12458; https://doi.org/10.3390/app122312458 - 06 Dec 2022
Abstract
As semiconductor technology evolves, computing platforms attempt to integrate hundreds of processing cores and associated interconnects into a single chip. Network-on-chip (NoC) technology has been widely used for on-chip data exchange in recent years. As a core element of the NoC, the round-robin arbiter provides fair and fast arbitration, which is essential to ensure the high performance of each module on the chip. In this paper, we propose a low-latency fair switch arbiter (FSA) architecture based on a tree-structure search algorithm. The FSA uses a feedback-based parallel priority update mechanism to complete arbitration within the leaf nodes and a lock-based round-robin search algorithm to guarantee global fairness. To reduce latency, the FSA keeps the lock structure only at the leaf nodes so that the complexity of the critical path does not increase. Meanwhile, the FSA achieves a critical path with only O(log₄ N) delay by using four input nodes in parallel. According to the synthesis results, the latency of the proposed circuit is on average 22.2% better than existing fair structures and 8.1% better than the fastest arbiter. The proposed architecture is well suited for high-speed network-on-chip switches and has better scalability for switches with large numbers of ports.
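
For orientation, the sketch below is a behavioral model of the classic round-robin arbitration policy that the FSA builds on: grant the first active requester at or after a rotating priority pointer. It is a software reference only, not the paper's tree-structured, lock-based hardware microarchitecture.

```python
# Behavioral round-robin arbiter model: fair grant selection with a rotating pointer.
def round_robin_arbiter(requests, pointer):
    """requests: list of bools; pointer: port with current highest priority.
    Returns (granted port or None, updated pointer)."""
    n = len(requests)
    for offset in range(n):
        i = (pointer + offset) % n
        if requests[i]:
            return i, (i + 1) % n    # rotate priority just past the winner
    return None, pointer             # no requests: pointer unchanged

# Ports 1 and 3 request; with the pointer at 2, port 3 wins and the pointer moves to 0.
grant, ptr = round_robin_arbiter([False, True, False, True], pointer=2)
print(grant, ptr)  # -> 3 0
```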

Article
An Explainable and Lightweight Deep Convolutional Neural Network for Quality Detection of Green Coffee Beans
Appl. Sci. 2022, 12(21), 10966; https://doi.org/10.3390/app122110966 - 29 Oct 2022
Abstract
In recent years, the demand for coffee has increased tremendously. During production, green coffee beans are traditionally screened manually for defective beans before they are packed into coffee bean packages; however, this method is not only time-consuming but also increases the rate of human error due to fatigue. Therefore, this paper proposes a lightweight deep convolutional neural network (LDCNN) for a green coffee bean quality detection system, which combines depthwise separable convolution (DSC), squeeze-and-excite blocks (SE blocks), skip blocks, and other frameworks. To avoid the performance loss caused by the lightweight model's low parameter count during training, rectified Adam (RA), lookahead (LA), and gradient centralization (GC) were included to improve efficiency, and the model was deployed in an embedded system. Finally, the local interpretable model-agnostic explanations (LIME) model was employed to explain the model's predictions. The experimental results indicate that the model can reach an accuracy rate of 98.38% and an F1 score of 98.24% when detecting the quality of green coffee beans, achieving higher accuracy with lower computing time and fewer parameters. Moreover, the interpretability analysis verified that the lightweight model in this work is reliable, giving screening personnel a basis for understanding its judgments and thereby improving the classification and prediction of the model.
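
Two of the building blocks named above, depthwise separable convolution and the squeeze-and-excite block, can be sketched compactly; the PyTorch modules below are generic textbook versions with illustrative channel sizes, not the paper's LDCNN layer configuration.

```python
# Generic DSC and SE building blocks (PyTorch), as named in the abstract.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in)  # per-channel 3x3
        self.pointwise = nn.Conv2d(c_in, c_out, 1)                         # 1x1 channel mixing
    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class SEBlock(nn.Module):
    def __init__(self, c, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(c, c // reduction), nn.ReLU(),
            nn.Linear(c // reduction, c), nn.Sigmoid())
    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze: global average pool per channel
        return x * w[:, :, None, None]    # excite: reweight channels

x = torch.randn(1, 8, 32, 32)             # illustrative input
print(SEBlock(8)(DepthwiseSeparableConv(8, 8)(x)).shape)  # torch.Size([1, 8, 32, 32])
```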

Article
XOR-Based Meaningful (n, n) Visual Multi-Secrets Sharing Schemes
Appl. Sci. 2022, 12(20), 10368; https://doi.org/10.3390/app122010368 - 14 Oct 2022
Abstract
The basic visual cryptography (VC) model was proposed by Naor and Shamir in 1994. The secret image is encrypted into pieces, called shares, and can be viewed by collecting and directly stacking these shares. Many related studies have subsequently been proposed. The most recent advancement in visual cryptography, XOR-based VC, can address the issue of OR-based VC's poor restored-image quality at a low hardware cost. Sharing multiple secret images simultaneously can reduce computational costs, while designing the shares as meaningful, unrelated images helps avoid attacks and makes the shares easier to manage. Both have been topics of interest to many researchers in recent years. This study proposes XOR-based VC schemes that simultaneously encrypt several secret images and make each share individually meaningful. Theoretical analysis and experimental results show that our methods are secure and effective. Compared with previous schemes, our scheme has more capabilities.

Communication
Applying Natural Language Processing and TRIZ Evolutionary Trends to Patent Recommendations for Product Design
Appl. Sci. 2022, 12(19), 10105; https://doi.org/10.3390/app121910105 - 08 Oct 2022
Abstract
Traditional TRIZ theory provides methods and processes for the systematic analysis of engineering problems, which can improve the efficiency of problem solving. However, the quality of the solutions is not necessarily guaranteed and depends on the user's profession and experience. Therefore, this study proposes a methodology that applies the evolutionary benefits in the 37 trend lines developed by TRIZ researchers to assist in intelligently screening relevant patents applicable to the content of a product design. In this way, problem-solving efficiency and product design quality may be improved more effectively. First, the patent database is used as the training dataset, and words and sentences in the patent documents are analyzed through natural language processing to obtain keywords that may be related to evolutionary benefits. Using word vectors trained by Doc2vec, semantic similarity can be calculated to obtain the relationship between patent text and evolutionary benefit. Second, the goals of a product development project may be related to the evolutionary benefits, and applicable patent recommendations can then be provided. The proposed methodology may achieve the purpose of intelligent design assistance to enhance the product development process and problem solving.
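
The Doc2vec similarity step described above can be sketched with gensim: train paragraph vectors over patent texts, embed an evolutionary-benefit description, and rank patents by cosine similarity. The two-document corpus and token lists below are placeholders, not the paper's patent database.

```python
# Minimal Doc2vec similarity sketch (gensim); corpus contents are placeholders.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

patents = [["ultrasonic", "sensor", "distance"], ["battery", "thermal", "control"]]
corpus = [TaggedDocument(words, [i]) for i, words in enumerate(patents)]

model = Doc2Vec(corpus, vector_size=50, min_count=1, epochs=40)

# Embed an "evolutionary benefit" phrase and rank patents by similarity to it.
benefit = ["improved", "thermal", "control"]
vec = model.infer_vector(benefit)
print(model.dv.most_similar([vec], topn=2))  # (patent id, cosine similarity) pairs
```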

Article
XOR-Based (n, n) Visual Cryptography Schemes for Grayscale or Color Images with Meaningful Shares
Appl. Sci. 2022, 12(19), 10096; https://doi.org/10.3390/app121910096 - 08 Oct 2022
Abstract
The XOR-based Visual Cryptography Scheme (XOR-based VCS) is a method of secret image sharing. The principle of XOR-based VCS is to encrypt a secret image into several encrypted images, called shares. No information about the secret can be obtained from any of the shares, and after applying the logical XOR operation to stack these shares, the original secret image can be recovered. In this paper, we present a new XOR-based VCS for grayscale or color secret images. This scheme encrypts the secret grayscale (or color) image into n meaningful grayscale (or color) shares, which can incorporate n different cover images. After stacking the n shares using the XOR operation, the original secret image can be completely restored. Both the theoretical proof and experimental results show that our method is accurate and efficient. To the best of our knowledge, ours is the only scheme that currently provides this functionality for grayscale and color secret images.
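
The core XOR recovery property is easy to demonstrate: in a basic (n, n) XOR scheme, n − 1 shares are random and the last is the XOR of the secret with all of them, so XOR-stacking all n shares restores the secret exactly. The sketch below shows only this baseline; the paper's contribution of making each share a meaningful cover image is not reproduced here.

```python
# Baseline (n, n) XOR secret sharing over 8-bit images (NumPy).
import numpy as np

def share(secret, n, rng=np.random.default_rng()):
    shares = [rng.integers(0, 256, secret.shape, dtype=np.uint8) for _ in range(n - 1)]
    last = secret.copy()
    for s in shares:
        last ^= s                      # fold each random share into the last one
    return shares + [last]

def recover(shares):
    out = np.zeros_like(shares[0])
    for s in shares:
        out ^= s                       # XOR-stacking all n shares cancels the randomness
    return out

secret = np.arange(256, dtype=np.uint8).reshape(16, 16)   # stand-in grayscale image
assert np.array_equal(recover(share(secret, n=4)), secret)
```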

Article
Development of a Machine Learning-Based Framework for Predicting Vessel Size Based on Container Capacity
Appl. Sci. 2022, 12(19), 9999; https://doi.org/10.3390/app12199999 - 05 Oct 2022
Abstract
Ports are important hubs in logistics and supply chain systems, where the majority of the available data is still not being fully exploited. Container throughput, measured in TEU (twenty-foot equivalent units), reflects the amount of work a port performs and its ability to handle containers at minimal cost. This throughput capacity is the most important component of the scale of services, which is a crucial factor in selecting port terminals. At a port container terminal, it is necessary to allocate an appropriate number of available quay cranes to the berth before container ships arrive. Predicting the size of a ship is especially important for calculating the number of quay cranes that should be allocated to ships that will eventually dock at the terminal. Machine learning techniques are flexible tools for unlocking the value of the data. In this paper, we used neighborhood component analysis as a tool for feature selection and state-of-the-art machine learning algorithms for multiclass classification. The paper proposes a novel two-stage approach for estimating and predicting vessel size based on container capacity. Our approach revealed seven unique features of port data that are the essential parameters for identifying vessel size. We obtained the highest average classification accuracy of 97.6% with the linear support vector machine classifier. This study paves a new direction for research in port logistics incorporating machine learning.
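
The two stages named above, NCA for feature weighting and a linear SVM for multiclass classification, compose naturally in scikit-learn; the synthetic dataset below stands in for the port features, which are not public in this listing.

```python
# Two-stage sketch: neighborhood component analysis + linear SVM (scikit-learn).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Synthetic stand-in for port records: 12 features, 3 vessel-size classes.
X, y = make_classification(n_samples=300, n_features=12, n_informative=7,
                           n_classes=3, random_state=0)

clf = make_pipeline(NeighborhoodComponentsAnalysis(random_state=0), LinearSVC())
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```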

Article
A Novel Mechanical Fault Diagnosis Based on Transfer Learning with Probability Confidence Convolutional Neural Network Model
Appl. Sci. 2022, 12(19), 9670; https://doi.org/10.3390/app12199670 - 26 Sep 2022
Abstract
For fault diagnosis, convolutional neural networks (CNNs) have performed well as a data-driven method for identifying mechanical fault features in vibration signals. However, because CNNs identify unknown fault categories ineffectively and inaccurately, we propose a model based on transfer learning with a probability confidence CNN (TPCCNN) to model the fault features of rotating machinery for fault diagnosis. TPCCNN includes three major modules: (1) feature engineering, which performs a series of data pre-processing and feature extraction steps; (2) transfer learning of features across heterogeneous datasets, so that different datasets generalize better in model training and the time for modeling and parameter tuning is reduced; and (3) a PCCNN model to classify known and unknown fault categories. In addition to solving the problem of an imbalanced sample size, TPCCNN self-learns and retrains by iterating unknown classes back into the original model. The model is verified with the open-source CWRU and Ottawa datasets. The experimental results for feature transfer across heterogeneous datasets show average accuracy rates of 99.2% and 93.8% for known and unknown categories, respectively, proving TPCCNN effective in training heterogeneous datasets. Likewise, similar feature sets can be applied to reduce the training time of predictive models by 34% and 68%.
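
The "probability confidence" idea for unknown categories can be illustrated with a simple rejection rule: classify normally when the top softmax probability is high, otherwise flag the sample as unknown so it can be fed back for retraining. The threshold and logits below are illustrative, not the paper's calibrated values.

```python
# Softmax-confidence rejection sketch for open-set fault diagnosis.
import numpy as np

def classify_with_unknown(logits, threshold=0.8):
    z = np.exp(logits - logits.max())      # numerically stable softmax
    probs = z / z.sum()
    top = int(probs.argmax())
    return top if probs[top] >= threshold else -1   # -1 marks "unknown" for retraining

print(classify_with_unknown(np.array([4.0, 0.5, 0.2])))  # confident -> class 0
print(classify_with_unknown(np.array([1.0, 0.9, 0.8])))  # ambiguous -> -1 (unknown)
```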

Article
Design, Analysis, and Simulation of 60 GHz Millimeter Wave MIMO Microstrip Antennas
J. Sens. Actuator Netw. 2022, 11(4), 59; https://doi.org/10.3390/jsan11040059 - 24 Sep 2022
Abstract
This article comparatively shows the evolution of parameters of three types of MIMO microstrip antenna arrays as the number of ports is gradually incremented up to 32. The three arrays have a 1 × 2 configuration at each port and present different geometries or types of coupling, as follows: a square patch with quarter-wave coupling (Antenna I), a square patch with inset feed (Antenna II), and a circular patch with quarter-wave coupling (Antenna III). The arrays were designed and simulated to operate in the millimeter-wave band, specifically at 60 GHz, for use in wireless technologies such as IEEE 802.11ad. A rapid prototyping method was formulated to increase the number of elements in the array, obtaining the dimensions and layout coordinates in short periods of time. The simulation was conducted with ADS software, and the results for gain, directivity, return loss, bandwidth, beamwidth, and efficiency were evaluated. For the 32-port arrays, Antenna III obtained the lowest return loss at −42.988 dB, more than 19 dB lower than the others. The highest gain was also obtained by Antenna III, with 24.541 dBi and an efficiency of 66%. Antenna II obtained better efficiency, reaching 71.03%, but with a gain more than 2 dB below that of Antenna III. Antenna I obtained the best bandwidth.

Article
A Holistic Scalability Strategy for Time Series Databases Following Cascading Polyglot Persistence
Big Data Cogn. Comput. 2022, 6(3), 86; https://doi.org/10.3390/bdcc6030086 - 18 Aug 2022
Abstract
Time series databases aim to handle large amounts of data quickly, both when introducing new data to the system and when retrieving it later. However, depending on the scenario in which these databases participate, reducing the number of requested resources becomes a further requirement. Following this goal, NagareDB and its Cascading Polyglot Persistence approach were born. They were intended not just to provide a fast time series solution, but also to strike a good cost-efficiency balance. However, although they provided outstanding results, they lacked a natural way of scaling out in a cluster fashion. Consequently, monolithic deployments could extract the maximum value from the solution, but distributed ones had to rely on general scalability approaches. In this research, we propose a holistic approach specially tailored for databases following Cascading Polyglot Persistence to further maximize its inherent resource-saving goals. The proposed approach reduced the cluster size by 33% in a setup with just three ingestion nodes and by up to 50% in a setup with 10 ingestion nodes. Moreover, the evaluation shows that our scaling method provides efficient cluster growth, offering scalability speedups greater than 85% compared to a theoretical 100% perfect scaling, while also ensuring data safety via data replication.

Article
Energy Efficient Hybrid Relay-IRS-Aided Wireless IoT Network for 6G Communications
Electronics 2022, 11(12), 1900; https://doi.org/10.3390/electronics11121900 - 16 Jun 2022
Abstract
Intelligent Reflecting Surfaces (IRS) have been recognized as a highly energy-efficient and optimal solution for future fast-growing 6G communication systems, reflecting the incident signal towards the receiver. Large numbers of Internet of Things (IoT) devices are distributed randomly in order to serve users while providing a high data rate, seamless data transfer, and Quality of Service (QoS). The major challenge in satisfying the above requirements is the energy consumed by the IoT network. Hence, in this paper, we examine the energy efficiency (EE) of a hybrid relay-IRS-aided wireless IoT network for 6G communications. In our analysis, we study the EE performance of IRS-aided and DF relay-aided IoT networks separately, as well as that of a hybrid relay-IRS-aided IoT network. Our numerical results show that the EE of the hybrid relay-IRS-aided system outperforms both the conventional relay and the IRS-aided IoT network. Furthermore, we find that multiple IRS blocks can beat the relay in a high-SNR regime, which results in lower hardware costs and reduced power consumption.
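
As a back-of-envelope companion to the EE analysis above, energy efficiency is commonly defined as achievable rate divided by total consumed power; the sketch below applies that definition with a Shannon-rate model. The SNR and power figures are illustrative placeholders, not the paper's link budgets.

```python
# Energy-efficiency sketch: EE = B*log2(1+SNR) / total power (bit/joule).
import numpy as np

B = 1e6                                    # bandwidth in Hz (illustrative)

def energy_efficiency(snr_linear, power_watts):
    rate = B * np.log2(1 + snr_linear)     # Shannon achievable rate, bit/s
    return rate / power_watts              # bit/s per watt = bit/joule

print("active DF relay EE:", energy_efficiency(snr_linear=100, power_watts=2.0))
print("passive IRS    EE:", energy_efficiency(snr_linear=60, power_watts=0.5))
```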
