Innovations, Challenges and Emerging Technologies in Data Engineering

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (15 September 2024) | Viewed by 5606

Special Issue Editors


Guest Editor
Department of Applied Computing, Faculty of Electrical Engineering and Computing, University of Zagreb, Unska 3, HR-10000 Zagreb, Croatia
Interests: formal knowledge representation; automated reasoning; machine learning; information retrieval; Semantic Web

Guest Editor
Department of Applied Computing, Faculty of Electrical Engineering and Computing, University of Zagreb, Unska 3, HR-10000 Zagreb, Croatia
Interests: computer-aided instruction; learning management systems; computer science education; database systems; data processing; data warehouses; mobile applications; website; semantic web

Guest Editor
Department of Applied Computing, Faculty of Electrical Engineering and Computing, University of Zagreb, Unska 3, HR-10000 Zagreb, Croatia
Interests: data processing; data collection; data integration; data preprocessing; information exchange; data warehouses; database machines; semantic web

Special Issue Information

Dear Colleagues,

In the age of big data, data engineering has become a cornerstone of technological progress and innovation. The unprecedented growth of data generated by digital platforms, social media, IoT devices, and other digital channels has brought the need for sophisticated data engineering practices to the forefront of modern computational challenges. As we explore this area, we recognize its enormous potential, but also the numerous challenges that accompany the rapid development of data-driven technologies. These advancements are not only transforming industry but also shaping the future in unprecedented ways. The increasing need for advanced data engineering solutions underscores the importance of this field in today's digital landscape.

This Special Issue, entitled "Innovations, Challenges, and Emerging Technologies in Data Engineering", is dedicated to the latest advances, current challenges, and emerging technologies in data engineering. It is the natural continuation of the First Edition's subject, "Advanced Web Applications", in which data engineering has its foundations, with each topic building upon the other.

Our goal is to provide a comprehensive overview of the current state of data engineering as well as insights into the latest developments and discussions about the challenges that professionals and researchers are facing in this rapidly evolving field.

This Special Issue seeks contributions on the development and innovative application of data engineering technologies, as well as data engineering analysis, design, implementation, evaluation, and training. We welcome high-quality case studies that describe the successful implementation of data engineering projects, as well as research papers that propose new approaches and techniques for solving traditional and emerging data engineering challenges. We welcome submissions that provide insights into innovative data engineering practices as well as solutions and strategies for overcoming complexity and harnessing data's potential in various domains.

The topics covered include, but are not restricted to, the following:

  • Data platforms, cloud platforms, building data pipelines.
  • Data lake architecture and management.
  • Data democratization.
  • Data sources: transferring on-premises data to the cloud, fetching data from relational or NoSQL databases and public sources, and data ingestion.
  • Using Pub/Sub agents for message forwarding and data integration, metadata and data provenance.
  • Data consolidation and transformation, message schema control, building schema catalogs, and addressing message schema evolution problems.
  • Using software containers, managing Kubernetes clusters and Docker containers.
  • Creating APIs in the cloud and serving via message forwarding agents.
  • Specific technologies including, but not restricted to, the Go and Python programming languages, Git, Jenkins, Terraform, Swagger, and the Avro message format.
  • Applying DevOps concepts in code development.
  • Establishing development, testing, staging, and production environments in the cloud.
  • Advanced analytics and machine learning in data engineering.
  • Real-time analytics and stream processing.
  • Case studies on successful data engineering projects.
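Several of the topics above, such as message schema control and schema evolution, can be made concrete with a small sketch: a reader resolves a record written under an older schema against a newer schema whose added field carries a default value. This dependency-free example only mimics Avro-style default resolution without the Avro library; all schema and field names are illustrative.

```python
# Dependency-free sketch of Avro-style schema evolution: a record written
# under an old schema is read under a new schema whose added field has a
# default value, so previously written messages remain readable.
OLD_SCHEMA = {"fields": [{"name": "id"}, {"name": "amount"}]}
NEW_SCHEMA = {"fields": [{"name": "id"}, {"name": "amount"},
                         {"name": "currency", "default": "EUR"}]}

def read_with_schema(record, schema):
    """Resolve a record against a reader schema, filling missing fields from defaults."""
    out = {}
    for field in schema["fields"]:
        if field["name"] in record:
            out[field["name"]] = record[field["name"]]
        elif "default" in field:
            out[field["name"]] = field["default"]
        else:
            raise ValueError(f"missing field without default: {field['name']}")
    return out

old_record = {"id": 1, "amount": 9.99}           # written with OLD_SCHEMA
print(read_with_schema(old_record, NEW_SCHEMA))  # {'id': 1, 'amount': 9.99, 'currency': 'EUR'}
```

In real Avro deployments the same resolution is performed by the reader against schemas registered in a schema catalog, which is what makes backward-compatible evolution possible.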

Dr. Marko Horvat
Prof. Dr. Igor Mekterović
Dr. Ljiljana Brkić
Prof. Dr. Ping-Feng Pai
Dr. Guanfeng Liu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website; once registered, you can access the submission form there. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • data engineering
  • DevOps
  • data platform architecture
  • cloud computing
  • data pipelines
  • data ingestion
  • data provenance
  • real-time analytics
  • stream processing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (2 papers)


Research

16 pages, 2723 KiB  
Article
Using Deep Learning, Optuna, and Digital Images to Identify Necrotizing Fasciitis
by Ming-Jr Tsai, Chung-Hui Lin, Jung-Pin Lai and Ping-Feng Pai
Electronics 2024, 13(22), 4421; https://doi.org/10.3390/electronics13224421 - 11 Nov 2024
Cited by 1 | Viewed by 1844
Abstract
Necrotizing fasciitis, which is categorized as a medical and surgical emergency, is a life-threatening soft tissue infection. Its diagnosis primarily relies on computed tomography (CT), magnetic resonance imaging (MRI), ultrasound scans, surgical biopsy, blood tests, and expert knowledge from doctors or nurses. Necrotizing fasciitis develops rapidly, making early diagnosis crucial. With the rapid progress of information technology and systems, in terms of both hardware and software, deep learning techniques have been employed to address problems in various fields. This study develops an information system using convolutional neural networks (CNNs), Optuna, and digital images (CNNOPTDI) to detect necrotizing fasciitis. The determination of the hyperparameters in convolutional neural networks plays a critical role in classification performance; therefore, Optuna, an optimization framework for hyperparameter selection, is utilized to optimize the hyperparameters of the CNN models. The images for this study were collected from open data sources such as Open-i and Wikipedia. The numerical results reveal that the developed CNNOPTDI system is feasible and effective in identifying necrotizing fasciitis with very satisfactory classification accuracy. A potential future application of the CNNOPTDI system could therefore be in remote medical stations or telemedicine settings to assist with the early detection of necrotizing fasciitis.
(This article belongs to the Special Issue Innovations, Challenges and Emerging Technologies in Data Engineering)
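The hyperparameter search that the abstract attributes to Optuna can be sketched as follows. This is a dependency-free illustration, not the authors' CNNOPTDI code: Optuna's samplers (e.g. TPE) are replaced by plain random search, the objective is a toy surrogate rather than CNN training, and all parameter names and ranges are assumptions.

```python
# Dependency-free stand-in for an Optuna-style hyperparameter search.
# Each trial samples CNN-style hyperparameters and scores them; the best
# configuration found is kept. A real objective would train a CNN and
# return its validation accuracy.
import random

def objective(params):
    # Toy surrogate for validation accuracy (peaks near lr=1e-3, dropout=0.3).
    return 1.0 - abs(params["lr"] - 1e-3) * 100 - abs(params["dropout"] - 0.3)

def random_search(n_trials, seed=0):
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {
            "lr": 10 ** rng.uniform(-5, -1),          # log-uniform learning rate
            "dropout": rng.uniform(0.1, 0.5),
            "n_filters": rng.choice([16, 32, 64, 128]),
        }
        score = objective(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best, score = random_search(50)
print(best)
```

Optuna replaces the naive sampler here with adaptive ones and adds pruning of unpromising trials, which is why it is attractive for expensive CNN training runs.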

24 pages, 6869 KiB  
Article
Automobile-Demand Forecasting Based on Trend Extrapolation and Causality Analysis
by Zhengzhu Zhang, Haining Chai, Liyan Wu, Ning Zhang and Fenghe Wu
Electronics 2024, 13(16), 3294; https://doi.org/10.3390/electronics13163294 - 19 Aug 2024
Viewed by 2894
Abstract
Accurate automobile-demand forecasting can provide effective guidance for automobile-manufacturing enterprises in terms of production planning and supply planning. However, automobile sales volume is affected by historical sales volume and other external factors, and it shows strong non-stationarity, nonlinearity, autocorrelation, and other complex characteristics, making it difficult to forecast accurately with traditional models. To solve this problem, a forecasting model combining trend extrapolation and causality analysis is proposed, derived from the historical predictors of sales volume and the influence of external factors. In the trend-extrapolation submodel, the historical predictors of the sales series were captured using the Seasonal Autoregressive Integrated Moving Average (SARIMA) and Polynomial Regression (PR) models; then, Empirical Mode Decomposition (EMD), a stationarity-test algorithm, and an autocorrelation-test algorithm were introduced to reconstruct the sales sequence into stationary components with strong seasonality and trend components, which reduced the influence of non-stationarity and nonlinearity on the modeling. In the causality-analysis submodel, 31-dimensional feature data were extracted from influencing factors such as date, macroeconomy, and promotion activities, and a Gradient-Boosting Decision Tree (GBDT) was used to establish the mapping between influencing factors and future sales because of its excellent ability to fit nonlinear relationships. Finally, the forecasting performance of three combination strategies, namely the boosting (series), stacking (parallel), and weighted-average (parallel) strategies, was tested. Comparative experiments on three groups of sales data showed that the weighted-average parallel combination strategy performed best, with loss reductions of 16.81% and 4.68% for data from the number-one brand, 25.60% and 2.79% for data from the number-two brand, and 46.26% and 14.37% for data from the number-three brand compared with the other combination strategies. Further ablation studies and comparative experiments with six basic models confirmed the effectiveness and superiority of the proposed model.
(This article belongs to the Special Issue Innovations, Challenges and Emerging Technologies in Data Engineering)
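The weighted-average parallel strategy described in the abstract can be illustrated with a minimal sketch. The numbers and the weight below are made up for illustration; in the paper, the two input series would come from the trend-extrapolation (SARIMA/PR) and causality-analysis (GBDT) submodels, with the weight tuned on validation data.

```python
# Minimal sketch of a weighted-average parallel combination: two parallel
# forecasts are blended point-by-point with a single weight w.
def weighted_average(trend_forecast, causal_forecast, w):
    """Blend two forecasts: weight w on trend extrapolation, (1 - w) on causality analysis."""
    return [w * t + (1 - w) * c for t, c in zip(trend_forecast, causal_forecast)]

trend = [120.0, 130.0, 125.0]   # e.g. SARIMA/PR output (illustrative values)
causal = [110.0, 140.0, 120.0]  # e.g. GBDT output from external features
combined = weighted_average(trend, causal, w=0.6)
print(combined)  # [116.0, 134.0, 123.0]
```

Unlike the boosting (series) strategy, where one submodel corrects the other's residuals, the parallel strategies run both submodels independently and merge their outputs only at the end.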
