Search Results (386)

Search Parameters:
Keywords = amount of information shared

22 pages, 1000 KiB  
Article
A Transfer-Learning-Based Approach to Symmetry-Preserving Dynamic Equivalent Modeling of Large Power Systems with Small Variations in Operating Conditions
by Lahiru Aththanayake, Devinder Kaur, Shama Naz Islam, Ameen Gargoom and Nasser Hosseinzadeh
Symmetry 2025, 17(7), 1023; https://doi.org/10.3390/sym17071023 - 29 Jun 2025
Viewed by 318
Abstract
Robust dynamic equivalents of large power networks are essential for fast and reliable stability analysis of bulk power systems, because the dimensionality of modern power systems raises convergence issues in stability-analysis programs. However, even with modern computational power, it is challenging to find reduced-order models for power systems, for the following reasons: the tedious mathematical analysis involved in classical reduction techniques requires large amounts of computational power; inadequate information sharing between geographical areas prohibits the execution of model-dependent reduction techniques; and frequent fluctuations in the operating conditions (OPs) of power systems necessitate updates to reduced models. This paper focuses on a measurement-based approach that uses a deep artificial neural network (DNN) to estimate the dynamics of an external system (ES) of a power system, enabling stability analysis of a study system (SS). This DNN technique requires only boundary measurements between the SS and the ES. However, machine-learning-based techniques such as this DNN are known for their extensive training requirements. In particular, for power systems that undergo continuous fluctuations in operating conditions due to the use of renewable energy sources, the applications of this DNN technique are limited. To address this issue, a Deep Transfer Learning (DTL)-based technique is proposed in this paper. This approach accounts for variations in the OPs, such as fluctuations in loads over time and intermittent power generation from wind and solar energy sources. The proposed technique adjusts the parameters of a pretrained DNN model to a new OP, leveraging symmetry in the balanced adaptation of model layers to maintain consistent dynamics across operating conditions. The experimental results were obtained by representing the Queensland (QLD) system in the simplified Australian 14-generator (AU14G) model as the SS and the rest of AU14G as the ES, in five scenarios representing changes to the OP caused by variations in loads and power generation. Full article
(This article belongs to the Special Issue Symmetry Studies and Application in Power System Stability)
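The parameter-adaptation idea can be sketched with a toy network: the transferred hidden layer stays frozen while only the output layer is re-fitted to measurements from the new operating condition. All names and dimensions below are invented for illustration; this is not the authors' DNN/DTL implementation.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def forward(W1, w2, x):
    # hidden layer: weights W1 are frozen (the "transferred" part)
    h = relu([sum(wij * xj for wij, xj in zip(row, x)) for row in W1])
    # linear output layer: the only part adapted to a new OP
    return sum(w * hj for w, hj in zip(w2, h)), h

def adapt_output_layer(W1, w2, data, lr=0.1, epochs=50):
    """Fine-tune only the output weights w2 via SGD on (input, target)
    pairs measured under the new operating condition; W1 stays fixed."""
    w2 = list(w2)
    for _ in range(epochs):
        for x, y in data:
            y_hat, h = forward(W1, w2, x)
            err = y_hat - y
            w2 = [w - lr * err * hj for w, hj in zip(w2, h)]
    return w2
```

Freezing the shared layers is what keeps retraining cheap when the operating point shifts only slightly, which is the small-variation regime the paper targets.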

21 pages, 2502 KiB  
Article
Characterisation of Waste Textiles from Mixed MSW and Separate Collection—A Case Study from Vienna, Austria
by Pablo Kählig, Wolfgang Ipsmiller, Andreas Bartl and Jakob Lederer
Sustainability 2025, 17(12), 5484; https://doi.org/10.3390/su17125484 - 13 Jun 2025
Viewed by 520
Abstract
Textile recycling approaches require input material streams of defined purity. Establishing sorting facilities and defining viable sorting fractions for efficient subsequent recycling necessitates knowledge of the composition and material content of the textiles to be processed. This information is therefore crucial for the implementation of a sustainable circular economy for textiles. This study presents the results of a comprehensive waste textile sampling and characterisation campaign, along with data on the quantities and composition of waste textiles in Vienna in 2022. The data reveal that only 28% of the 19,975 t of waste textiles generated end up in separate collection, of which a significant amount goes to the international market. However, the results regarding fibre composition show that textiles from mixed municipal solid waste and separate collection are very similar. Cotton fibres accounted for approximately half of the fibre mass from non-complex textiles, with 9328 t overall (6776 t in the mixed municipal solid waste and 2522 t in separate collection). A further analysis of fibre blends found that a total of 6275 t of single-fibre materials and 5132 t of two-fibre materials were present. This reveals great potential for using this waste stream in fibre-to-fibre recycling processes. Collecting accurate data on this waste stream enables sorters and recyclers to tailor their processes to the expected input material. By increasing the amount of recycled materials, the share of incinerated or landfilled textiles will decrease, which in turn will have a positive impact on the environment. However, further research in textile identification and material separation, as well as regulations to keep these materials in a sustainable closed loop, is required. Full article
(This article belongs to the Special Issue Recycling Materials for the Circular Economy—2nd Edition)

30 pages, 7143 KiB  
Article
Enabling Future Maritime Traffic Management: A Decentralized Architecture for Sharing Data in the Maritime Domain
by Dennis Höhn, Lorenz Mumm, Benjamin Reitz, Christina Tsiroglou and Axel Hahn
J. Mar. Sci. Eng. 2025, 13(4), 732; https://doi.org/10.3390/jmse13040732 - 5 Apr 2025
Cited by 1 | Viewed by 564
Abstract
Digitalization is transforming the maritime sector, and the amount and variety of data generated are increasing rapidly. Effective data utilization is crucial for data-driven services such as those for highly automated maritime systems and efficient traffic coordination. However, these applications depend on heterogeneous, distributed data sources managed by different actors, making secure and sovereign information sharing difficult. This paper investigates how maritime data can be exchanged reliably and securely without jeopardizing data sovereignty. Based on the existing literature, we identify the main challenges and the current research gap in sharing maritime information, emphasizing the importance of data availability. From this, we derive requirements for a secure and sovereign infrastructure for data exchange. To address these challenges, we propose a fully decentralized architecture for the maritime sector based on the concept of a data space. Our approach integrates protocols to improve data availability while minimizing data volume, considering maritime constraints such as volatile connectivity, low bandwidth, and existing standards. We evaluate our architecture through a maritime traffic management case study and demonstrate its ability to enable secure and sovereign exchange of heterogeneous data. The results confirm that our solution reliably supports distributed data collection and enables data-driven, value-added services, which in turn will improve the safety and efficiency of the maritime domain in the near future. Full article

25 pages, 1712 KiB  
Article
Improving the Information Systems of a Warehouse as a Critical Component of Logistics: The Case of Lithuanian Logistics Companies
by Kristina Vaičiūtė and Aušra Katinienė
Systems 2025, 13(3), 186; https://doi.org/10.3390/systems13030186 - 7 Mar 2025
Cited by 2 | Viewed by 1401
Abstract
Rapid changes in the modern world, together with technological advances, are drawing increasing attention to the problems associated with obtaining big data and to the modification of decision-making processes in diverse spheres. In logistics companies, special attention should be given to the warehouse as a critical component of logistics, in particular to processes such as big data processing and automation, as well as the improvement, development, and support of information systems. Enhancing logistics information systems provides companies with a competitive advantage, reduces human error, accelerates processes, and ensures that information and big data are collected and shared in a sustainable manner. The automation of warehouse processes improves operational safety and overall service quality. The present paper reviews the importance of improving warehouse automation and logistics information systems. The advantages of such improvements are highlighted, and the results of the conducted research are provided to expose the problem areas of warehouse automation and to encourage improvements in information systems in Lithuanian logistics companies, where there is a need to transfer large amounts of information and to increase service quality. Full article

18 pages, 2737 KiB  
Article
Cost-Effective Multitask Active Learning in Wearable Sensor Systems
by Asiful Arefeen and Hassan Ghasemzadeh
Sensors 2025, 25(5), 1522; https://doi.org/10.3390/s25051522 - 28 Feb 2025
Viewed by 701
Abstract
Multitask learning models reduce model complexity and improve accuracy by concurrently learning multiple tasks with shared representations. Leveraging inductive knowledge transfer, these models mitigate the risk of overfitting on any specific task, leading to enhanced overall performance. However, supervised multitask learning models, like many neural networks, require substantial amounts of labeled data. Given the cost associated with data labeling, there is a need for an efficient label acquisition mechanism, known as multitask active learning (MTAL). In wearable sensor systems, the success of MTAL largely hinges on its query strategies, because active learning in such settings involves interaction with end-users (e.g., patients) for annotation. However, these strategies have not been studied in mobile health settings and wearable systems to date. While strategies like one-sided sampling, alternating sampling, and rank-combination-based sampling have been proposed in the past, their applicability in mobile sensor settings, a domain constrained by label deficits, remains largely unexplored. This study investigates MTAL querying approaches and addresses crucial questions related to the choice of sampling methods and the effectiveness of multitask learning in mobile health applications. Utilizing two datasets on activity recognition and emotion classification, our findings reveal that rank-based sampling outperforms other techniques, particularly in tasks with high correlation. However, sole reliance on informativeness for sample selection may introduce biases into models. To address this issue, we also propose a Clustered Stratified Sampling (CSS) method in tandem with the multitask active learning query process. CSS identifies clustered mini-batches of samples, optimizing budget utilization and maximizing performance. When employed alongside rank-based query selection, our proposed CSS algorithm demonstrates up to a 9% improvement in accuracy over traditional querying approaches for a 2000-query budget. Full article
(This article belongs to the Special Issue Edge AI for Wearables and IoT)
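Rank-combination sampling of the kind discussed can be sketched as follows: each task ranks the unlabeled samples by its own uncertainty, the per-task ranks are summed, and the samples with the best combined rank are queried. This is a generic illustration with invented names; the paper's exact scoring is not reproduced here.

```python
def rank_combination_select(uncertainties, k):
    """uncertainties: dict mapping task name -> per-sample uncertainty
    scores (higher = more informative). Each task ranks all samples
    (rank 0 = most uncertain); ranks are summed across tasks and the
    k samples with the lowest combined rank are selected for labeling."""
    n = len(next(iter(uncertainties.values())))
    combined = [0] * n
    for scores in uncertainties.values():
        order = sorted(range(n), key=lambda i: -scores[i])
        for rank, i in enumerate(order):
            combined[i] += rank
    return sorted(range(n), key=lambda i: combined[i])[:k]
```

Summing ranks rather than raw scores makes the combination robust to tasks whose uncertainty values live on different scales, which is why it tends to help most when task uncertainties are correlated.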

23 pages, 1454 KiB  
Article
Slot Allocation Protocol for UAV Swarm Ad Hoc Networks: A Distributed Coalition Formation Game Approach
by Liubin Song and Daoxing Guo
Entropy 2025, 27(3), 256; https://doi.org/10.3390/e27030256 - 28 Feb 2025
Viewed by 1220
Abstract
With the rapid development of unmanned aerial vehicle (UAV) manufacturing technology, large-scale UAV swarm ad hoc networks are becoming widely used in military and civilian spheres. UAV swarms equipped with ad hoc networks and satellite networks are being developed for 6G heterogeneous networks, especially in offshore and remote areas. A key operational aspect of large-scale UAV swarm networks is slot allocation, which must provide large capacity and a low probability of conflict. Traditional methods typically form coalitions among UAVs that are in close spatial proximity to reduce internal network interference, thereby achieving greater throughput; however, significant internal interference still persists. Given that UAV networks must transmit a substantial amount of safety-related control information, any packet loss due to internal interference can easily pose potential risks. In this paper, we propose a distributed time coalition formation game algorithm that ensures the absence of internal interference and collisions while sharing time slot resources, thereby enhancing the network's throughput performance. Instead of forming a coalition from UAVs within a contiguous block area, as in prior studies, UAV nodes that do not interfere with each other form what can be called a time coalition. UAVs belonging to one coalition share their transmitting slots with each other, so every UAV node gains access to all the transmitting slots of its coalition members, and members can transmit data packets simultaneously without interference. In addition, a distributed coalition formation game-based TDMA (DCFG-TDMA) protocol based on the distributed time coalition formation algorithm is designed for UAV swarm ad hoc networks. Our simulation results verify that the proposed algorithm significantly improves UAV throughput compared with the conventional TDMA protocol. Full article
(This article belongs to the Special Issue Space-Air-Ground-Sea Integrated Communication Networks)
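The interference-free coalition idea can be sketched as a greedy partition of the interference graph: a node joins the first coalition containing no node it interferes with, so members of a coalition can safely share their slots. This is an illustrative stand-in for the paper's game-theoretic algorithm, not the algorithm itself.

```python
def form_time_coalitions(n, interference_edges):
    """Greedily partition UAV nodes 0..n-1 into coalitions such that
    no two members of a coalition interfere; coalition members may
    then pool their TDMA slots and transmit simultaneously."""
    neighbors = {i: set() for i in range(n)}
    for a, b in interference_edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    coalitions = []
    for node in range(n):
        for c in coalitions:
            # join the first coalition with no interfering member
            if all(node not in neighbors[m] for m in c):
                c.append(node)
                break
        else:
            coalitions.append([node])
    return coalitions
```

Each coalition is an independent set of the interference graph, so within a coalition every member effectively gains all members' slots, which is the throughput mechanism the abstract describes.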

18 pages, 639 KiB  
Article
A Directory of Datasets for Mining Software Repositories
by Themistoklis Diamantopoulos and Andreas L. Symeonidis
Data 2025, 10(3), 28; https://doi.org/10.3390/data10030028 - 20 Feb 2025
Viewed by 1365
Abstract
The amount of software engineering data is constantly growing, as more and more developers employ online services to store their code, keep track of bugs, or even discuss issues. The data residing in these services can be mined to address different research challenges; therefore, certain initiatives have been established to encourage researchers to share the datasets they collect. In this work, we investigate the effect of such an initiative; we create a directory that includes the papers and the corresponding datasets of the data track of the Mining Software Repositories (MSR) conference. Specifically, our directory includes metadata and citation information for the papers of all data tracks throughout the last twelve years. We also annotate the datasets according to the data source and further assess their compliance with the FAIR principles. Using our directory, researchers can find useful datasets for their research, or even design methodologies for assessing their quality, especially in the software engineering domain. Moreover, the directory can be used for analyzing the citations of data papers, especially with regard to different data categories, as well as for examining their FAIRness scores throughout the years, along with the effect of these scores on the usage and citation of the datasets. Full article
(This article belongs to the Section Information Systems and Data Management)
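FAIRness scoring of the kind described can be illustrated with a minimal sketch: score a dataset by the fraction of FAIR principles its annotations satisfy. The rubric below is a simplified stand-in; the paper's actual assessment criteria are not reproduced here.

```python
FAIR_CRITERIA = ("findable", "accessible", "interoperable", "reusable")

def fair_score(annotations):
    """Fraction of the four FAIR principles a dataset satisfies,
    given a dict of boolean annotations; missing keys count as
    not satisfied."""
    met = sum(bool(annotations.get(c, False)) for c in FAIR_CRITERIA)
    return met / len(FAIR_CRITERIA)
```

A per-year average of such scores would be one way to examine the trend in FAIRness the abstract mentions.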

22 pages, 2718 KiB  
Article
Closing the Loop of Biowaste Composting by Anaerobically Co-Digesting Leachate, a By-Product from Composting, with Glycerine
by Thi Cam Tu Le, Katarzyna Bernat, Tomasz Pokój and Dorota Kulikowska
Energies 2025, 18(3), 537; https://doi.org/10.3390/en18030537 - 24 Jan 2025
Viewed by 840
Abstract
To achieve the required recycling rates, organic recycling via composting should be widely introduced in Poland for selectively collected biowaste. However, this process produces not only compost but also leachate (LCB), a nitrogen- and organics-rich liquid by-product. So far, there has been limited information on the application of anaerobic digestion (AD) for treating LCB, which has fermentative potential. However, for effective methane production (MP) via AD, the ratio of chemical oxygen demand to total Kjeldahl nitrogen (COD/TKN) and the pH of LCB are too low; thus, it should be co-digested with other organics-rich waste, e.g., glycerine (G). The present study tested the effect of the G content of the feedstock (in the range of 3–5% (v/v)) on the effectiveness of co-digestion with LCB, based on MP and the removal of COD. MP was assessed using an automatic methane potential test system (AMPTS). Regardless of the feedstock composition (LCB, or LCB with G), the efficiency of COD removal was over 91%. Co-digestion increased not only MP, by 6–15%, but also the methane content of the biogas, by 4–14%, compared to LCB only (353 NL/kg CODadded, 55%). MP and COD removal proceeded in two phases. During co-digestion in the 1st phase, volatile fatty acids (VFA) accumulated up to 2800 mg/L and the pH decreased below 6.8. The presence of G altered the shares of individual VFA and promoted the accumulation of propionic acid, in contrast to LCB only, where caproic acid predominated. The initial accumulation of propionic acid and acidification in the mixtures decreased the kinetic constants of MP (from 0.79 to 0.54 d−1) and the rate of COD removal (from 2193 to 1603 mg/(L·d)). In the 2nd phase, the pH recovered, VFA concentrations decreased, and MP was no longer limited by these factors. However, it should be noted that excessive amounts of G, especially in reactors with constant feeding, may cause greater VFA accumulation and create a toxic environment for methanogens, inhibiting biogas production. In contrast, digestion of LCB only may lead to ammonium buildup if the COD/TKN ratio of the feedstock is too low. Despite these limitations, the use of AD in the treatment of LCB as a sustainable "closed-loop nutrient" technology closes the loop in the composting of biowaste. Full article
(This article belongs to the Special Issue New Challenges in Waste-to-Energy and Bioenergy Systems)
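Kinetic constants like the quoted 0.79 and 0.54 d−1 are commonly fitted to a first-order cumulative methane production curve, MP(t) = MPmax(1 − e^(−kt)); the sketch below assumes that model, which the abstract does not explicitly name, and borrows the quoted 353 NL/kg CODadded as an example plateau value.

```python
import math

def methane_production(mp_max, k, t):
    """Cumulative methane production after t days under a first-order
    model: mp_max is the ultimate yield (e.g., NL/kg COD added) and
    k is the rate constant in 1/d."""
    return mp_max * (1.0 - math.exp(-k * t))
```

Under this model a drop in k from 0.79 to 0.54 d−1 stretches the time to reach 90% of the ultimate yield from roughly 2.9 to 4.3 days (t90 = ln(10)/k), which is the practical meaning of the slower kinetics the abstract attributes to propionic acid accumulation.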

22 pages, 7366 KiB  
Article
Analyzing the Corporate Business Intelligence Impact: A Case Study in the Financial Sector
by Serap Akcan Yetgin and Hilal Altas
Appl. Sci. 2025, 15(3), 1012; https://doi.org/10.3390/app15031012 - 21 Jan 2025
Cited by 1 | Viewed by 2662
Abstract
Business intelligence comprises the processes and methods that enable businesses to effectively analyze large amounts of data and transform it into meaningful information, helping to increase efficiency and productivity and thus enabling businesses to gain a competitive advantage. In this context, business intelligence improves data management and decision-making processes and plays a critical role in strategic management. The main purpose of this study is to analyze the transition to business intelligence solutions in financial institutions in detail, to increase efficiency in reporting processes, and to optimize decision-making processes. The study examines the 'Cheque Report', which reports the status of cheques in XY Financial Institution. Within the scope of the study, the 'Cheque Report' was examined in three stages of the institution's transition to business intelligence: in the first stage, reports were prepared manually; in the second stage, they were prepared with PL/SQL; and in the last stage, they were prepared with a business intelligence solution, and the outputs of the three stages were compared. As a result, it was observed that the use of business intelligence provides fast and direct access to reports, data security, reduced dependence on individual employees, and efficiency in internal information sharing. Full article

24 pages, 654 KiB  
Article
Deep Learning Framework for Advanced De-Identification of Protected Health Information
by Ahmad Aloqaily, Emad E. Abdallah, Rahaf Al-Zyoud, Esraa Abu Elsoud, Malak Al-Hassan and Alaa E. Abdallah
Future Internet 2025, 17(1), 47; https://doi.org/10.3390/fi17010047 - 20 Jan 2025
Cited by 3 | Viewed by 1578
Abstract
Electronic health records (EHRs) are widely used in healthcare institutions worldwide, containing vast amounts of unstructured textual data. However, the sensitive nature of Protected Health Information (PHI) embedded within these records presents significant privacy challenges, necessitating robust de-identification techniques. This paper introduces a novel approach, leveraging a Bi-LSTM-CRF model to achieve accurate and reliable PHI de-identification, using the i2b2 dataset sourced from Harvard University. Unlike prior studies that often unify Bi-LSTM and CRF layers, our approach focuses on the individual design, optimization, and hyperparameter tuning of both the Bi-LSTM and CRF components, allowing for precise model performance improvements. This rigorous approach to architectural design and hyperparameter tuning, often underexplored in the existing literature, significantly enhances the model’s capacity for accurate PHI tag detection while preserving the essential clinical context. Comprehensive evaluations are conducted across 23 PHI categories, as defined by HIPAA, ensuring thorough security across critical domains. The optimized model achieves exceptional performance metrics, with a precision of 99%, recall of 98%, and F1-score of 98%, underscoring its effectiveness in balancing recall and precision. By enabling the de-identification of medical records, this research strengthens patient confidentiality, promotes compliance with privacy regulations, and facilitates safe data sharing for research and analysis. Full article
(This article belongs to the Special Issue eHealth and mHealth)
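The reported precision of 99%, recall of 98%, and F1-score of 98% are consistent with the standard definition of F1 as the harmonic mean of precision and recall, which can be checked directly:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall; returns 0.0 when both
    are zero to avoid division by zero."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

With P = 0.99 and R = 0.98 this gives about 0.985, matching the rounded 98% figure quoted in the abstract.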

17 pages, 2764 KiB  
Article
Passive Radar-Based Parameter Estimation of Low Earth Orbit Debris Targets
by Justin K. A. Henry and Ram M. Narayanan
Aerospace 2025, 12(1), 53; https://doi.org/10.3390/aerospace12010053 - 15 Jan 2025
Viewed by 1252
Abstract
Major space agencies such as NASA and the ESA have long reported the growing dangers posed by resident space objects orbiting our planet. These objects continue to grow in number as satellites break apart and debris objects collide with one another, causing fragmentation. As a result, significant efforts by both the public and private sectors are geared towards enhancing space domain awareness capabilities to protect future satellites and astronauts from impact by orbiting debris. Current approaches and standards implement very large radar arrays, telescopes, and laser ranging systems to detect and track such objects. These systems are very expensive, may take significant amounts of time to develop, and are still only sparingly able to efficiently track debris targets less than 10 cm in diameter. This work proposes a theoretical passive-radar-based method using illuminators of opportunity for detecting space debris while estimating motion direction and Doppler. We show that by using a signal processing chain based on the self-mixing technique and digital filters, Doppler information can be extracted and continuously tracked by a uniform linear receiver array. This can be achieved by a passive sensor system, which has the advantage of lower cost and does not need to emit signals, thereby avoiding spectrum-sharing constraints. Full article
(This article belongs to the Special Issue Advances in Avionics and Astrionics Systems)
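The self-mixing step can be illustrated with a toy sketch: multiplying the surveillance channel by the conjugate of the direct-path reference cancels the illuminator's carrier and leaves a tone at the Doppler shift, which a brute-force DFT peak search recovers. The signal model, parameters, and names below are invented for illustration and are far simpler than the paper's processing chain.

```python
import cmath

def estimate_doppler(direct, surveillance, fs, n_freq=400, f_max=200.0):
    """Self-mix the surveillance channel against the direct-path
    reference, then locate the Doppler tone by scanning a frequency
    grid from -f_max to +f_max Hz and picking the DFT-magnitude peak."""
    mixed = [s * r.conjugate() for s, r in zip(surveillance, direct)]
    best_f, best_mag = 0.0, -1.0
    for i in range(n_freq + 1):
        f = -f_max + 2.0 * f_max * i / n_freq
        acc = sum(m * cmath.exp(-2j * cmath.pi * f * k / fs)
                  for k, m in enumerate(mixed))
        if abs(acc) > best_mag:
            best_f, best_mag = f, abs(acc)
    return best_f
```

Because the mixing product depends only on the frequency difference between the two channels, the receiver never needs to transmit anything, which is the passive-sensing advantage the abstract highlights.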

22 pages, 2042 KiB  
Article
Secrecy Rate Performance Analysis of Jammer-Aided Symbiotic Radio with Sensing Errors for Fifth Generation Wireless Networks
by Muhammed Yusuf Onay
Appl. Sci. 2025, 15(1), 289; https://doi.org/10.3390/app15010289 - 31 Dec 2024
Cited by 1 | Viewed by 832
Abstract
Symbiotic radio (SR), which has recently been introduced as an effective solution for 5G wireless networks, stands out with system models that include hybrid devices that share the frequency spectrum and transmit information to the same receiver. However, the low bit rate and the small amount of energy harvested in SR, where backscatter communication systems are integrated, make the system vulnerable to eavesdropping. To quantify security, the secrecy rate is defined as the difference between the number of bits transmitted to the receiver over the information channel and the number of bits reaching the eavesdropper (ED) over the wiretap channel. This paper is the first work that aims to maximize the secrecy rate for friendly jammer-aided SR networks with EDs, over time allocation and the power reflection coefficient, in the presence of sensing errors. The proposed model consists of a base station (BS), a hybrid transmitter (HT) in a symbiotic relationship with the BS, a WiFi access point used by the HT for energy harvesting, a jammer cooperating with the HT and BS, an information receiver, and EDs trying to access the information of the HT and BS. The simulation results provide valuable insights into the impact of system parameters on secrecy rate performance. Although accounting for sensing errors degrades system performance, it makes the model more representative of real-world conditions. The proposed system also outperforms wireless powered communication networks in the literature that use only the energy harvest-then-transmit protocol and assume a power reflection coefficient of zero. Full article
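The secrecy-rate definition above can be sketched with Shannon capacities: the legitimate link's rate minus the wiretap link's rate, floored at zero. This is a generic illustration; the paper's actual channel, jamming, and sensing-error model are more involved.

```python
import math

def secrecy_rate(snr_receiver, snr_eavesdropper, bandwidth=1.0):
    """Secrecy rate as the capacity of the legitimate link minus the
    capacity of the wiretap link, floored at zero (bits/s for the
    given bandwidth, assuming Shannon capacity log2(1 + SNR))."""
    c_main = bandwidth * math.log2(1.0 + snr_receiver)
    c_eve = bandwidth * math.log2(1.0 + snr_eavesdropper)
    return max(0.0, c_main - c_eve)
```

The max(0, ·) floor reflects that no secret bits can be delivered once the eavesdropper's channel is at least as good as the legitimate one, which is exactly the regime a friendly jammer is meant to prevent.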

24 pages, 4109 KiB  
Article
AI-Based Malicious Encrypted Traffic Detection in 5G Data Collection and Secure Sharing
by Gang Han, Haohe Zhang, Zhongliang Zhang, Yan Ma and Tiantian Yang
Electronics 2025, 14(1), 51; https://doi.org/10.3390/electronics14010051 - 26 Dec 2024
Viewed by 1232
Abstract
With the development and widespread application of network information, new technologies led by 5G are emerging, resulting in an increasingly complex network security environment and more diverse attack methods. Unlike traditional networks, 5G networks feature higher connection density, faster data transmission speeds, and lower latency, which are widely applied in scenarios such as smart cities, the Internet of Things, and autonomous driving. The vast amounts of sensitive data generated by these applications become primary targets during the processes of collection and secure sharing, and unauthorized access or tampering could lead to severe data breaches and integrity issues. However, as 5G networks extensively employ encryption technologies to protect data transmission, attackers can hide malicious content within encrypted communication, rendering traditional content-based traffic detection methods ineffective for identifying malicious encrypted traffic. To address this challenge, this paper proposes a malicious encrypted traffic detection method based on reconstructive domain adaptation and adversarial hybrid neural networks. The proposed method integrates generative adversarial networks with ResNet, ResNeXt, and DenseNet to construct an adversarial hybrid neural network, aiming to tackle the challenges of encrypted traffic detection. On this basis, a reconstructive domain adaptation module is introduced to reduce the distribution discrepancy between the source domain and the target domain, thereby enhancing cross-domain detection capabilities. By preprocessing traffic data from public datasets, the proposed method is capable of extracting deep features from encrypted traffic without the need for decryption. The generator utilizes the adversarial hybrid neural network module to generate realistic malicious encrypted traffic samples, while the discriminator achieves sample classification through high-dimensional feature extraction. 
Additionally, the domain classifier within the reconstructive domain adaptation module further improves the model’s stability and generalization across different network environments and time periods. Experimental results demonstrate that the proposed method significantly improves the accuracy and efficiency of malicious encrypted traffic detection in 5G network environments. Full article
(This article belongs to the Special Issue Novel Methods Applied to Security and Privacy Problems, Volume II)
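The core of the domain-adaptation step described above is shrinking the distribution gap between source-domain and target-domain traffic features. As a minimal sketch (not the paper's actual module), the snippet below measures that gap with a linear-kernel Maximum Mean Discrepancy (MMD) and shows how aligning the target features' mean with the source reduces it; the feature shapes and the mean-matching step are illustrative assumptions.

```python
import numpy as np

def mmd_linear(source_feats: np.ndarray, target_feats: np.ndarray) -> float:
    """Linear-kernel Maximum Mean Discrepancy between two feature batches.

    A small value means the source and target feature distributions have
    close means, which is the direction a domain-adaptation loss pushes in.
    """
    delta = source_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(delta @ delta)

rng = np.random.default_rng(0)
# Source-domain features and a mean-shifted target domain (illustrative).
src = rng.normal(loc=0.0, scale=1.0, size=(256, 64))
tgt = rng.normal(loc=0.5, scale=1.0, size=(256, 64))

before = mmd_linear(src, tgt)
# "Adapting" the target batch by matching its mean to the source's
# closes the measured gap; a learned module would do this implicitly.
tgt_adapted = tgt - tgt.mean(axis=0) + src.mean(axis=0)
after = mmd_linear(src, tgt_adapted)
assert after < before
```

In a trained system this discrepancy would be one term of the loss, minimized jointly with the classification and reconstruction objectives rather than fixed by an explicit shift.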
26 pages, 1839 KiB  
Systematic Review
A Systematic Literature Review of Privacy Information Disclosure in AI-Integrated Internet of Things (IoT) Technologies
by M A Shariful Amin, Seongjin Kim, Md Al Samiul Amin Rishat, Zhenya Tang and Hyunchul Ahn
Sustainability 2025, 17(1), 8; https://doi.org/10.3390/su17010008 - 24 Dec 2024
Cited by 2 | Viewed by 2495
Abstract
The rapid advancement and integration of Artificial Intelligence (AI) in Internet of Things (IoT) technologies have raised significant concerns regarding privacy information disclosure. As AI-enabled IoT devices collect, process, and share vast amounts of personal data, it is crucial to understand the current state of research on this topic and identify areas for future investigation. This research systematically analyzed 38 peer-reviewed articles on privacy information disclosure in the AI-enabled IoT context. The analysis yielded pivotal themes pertinent to information disclosure in the IoT realm, encompassing consumer IoT adoption, personalized service, the commodification of information, external threats, vulnerability, innovation, regulation, behavioral patterns, trust, demographic considerations, user satisfaction, strategic marketing plans, and institutional reputation. In light of these findings, the paper proposes a combined summary research framework explaining user-centric information disclosure behavior in the IoT sphere. The insights presented cater to diverse stakeholders, including researchers, policymakers, and businesses, aiming for optimized AI-integrated IoT engagement while prioritizing privacy. Full article
(This article belongs to the Section Economic and Business Aspects of Sustainability)
20 pages, 12422 KiB  
Article
LHSDNet: A Lightweight and High-Accuracy SAR Ship Object Detection Algorithm
by Dahai Dai, Hao Wu, Yue Wang and Penghui Ji
Remote Sens. 2024, 16(23), 4527; https://doi.org/10.3390/rs16234527 - 3 Dec 2024
Cited by 7 | Viewed by 1191
Abstract
At present, the majority of deep learning-based ship object detection algorithms concentrate predominantly on enhancing recognition accuracy, often overlooking algorithmic complexity. These complex algorithms demand significant computational resources, making them unsuitable for deployment on resource-constrained edge devices, such as airborne and spaceborne platforms, thereby limiting their practicality. To alleviate this problem, a lightweight and high-accuracy synthetic aperture radar (SAR) ship image detection network (LHSDNet) is proposed. Initially, GhostHGNetV2 was utilized as the feature extraction network, with GhostConv reducing the network's computational load. Next, a lightweight feature fusion network was designed to combine shallow and deep features through lightweight convolutions, effectively preserving more information while minimizing computational requirements. Lastly, the feature extraction module was integrated through parameter sharing, and the detection head was made lightweight to further save computing resources. The results from our experiments demonstrate that the proposed LHSDNet model increases mAP50 by 0.7% in comparison to the baseline model. Additionally, it achieves a pronounced decrease in parameter count, computational demand, and model file size of 48.33%, 51.85%, and 41.26%, respectively, when contrasted with the baseline model. LHSDNet achieves a balance between precision and computing resources, rendering it more appropriate for edge device implementation. Full article
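GhostConv's saving comes from producing only a fraction of the output channels with a full convolution and generating the rest with cheap depthwise operations. The back-of-envelope sketch below counts weights for a standard 3×3 convolution versus a Ghost module; the split ratio s = 2 and the 3×3 depthwise kernel follow the original GhostNet design and are assumptions here, since the abstract does not state LHSDNet's exact settings.

```python
def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def ghost_conv_params(c_in: int, c_out: int, k: int, s: int = 2, d: int = 3) -> int:
    """Weights in a Ghost module: a primary k x k convolution produces
    c_out // s intrinsic channels, then cheap d x d depthwise operations
    generate the remaining (s - 1) * (c_out // s) "ghost" channels."""
    intrinsic = c_out // s
    primary = k * k * c_in * intrinsic
    cheap = d * d * intrinsic * (s - 1)  # depthwise: one filter per channel
    return primary + cheap

c_in, c_out, k = 64, 128, 3
standard = conv_params(c_in, c_out, k)     # 3*3*64*128      = 73728
ghost = ghost_conv_params(c_in, c_out, k)  # 3*3*64*64 + 3*3*64 = 37440
assert ghost < standard / 1.9  # roughly an s-fold parameter reduction
```

The depthwise cheap operations contribute almost nothing to the total, so the parameter count falls by nearly the factor s, which is the kind of saving the abstract's reported reductions build on.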