
Applied Artificial Intelligence for Distributed Industrial Systems—Edge–Cloud Intelligence and Time Series Forecasting

Special Issue Information

Dear Colleagues,

The field of Artificial Intelligence has advanced rapidly in the last few years. Novel architectures, such as the transformers underlying large language models (LLMs), have shown unprecedented levels of prediction accuracy in loosely defined environments, even when applied beyond their original domain. At the same time, the processing of structured data, as encountered in many industrial settings, has come into focus.

Industrial environments generate vast amounts of data, most of which takes the form of time series. Such data streams are inherently noisy, increasingly multivariate, and often produced at high frequency by dense sensor networks and automated control systems. Applying modern machine learning techniques to such data can significantly enhance forecasting, anomaly detection, and classification tasks. These tasks provide the predictive and diagnostic signals that drive operational decisions, triggering control actions, maintenance interventions, and safety responses within industrial processes.
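
To make the intended scope concrete, the following minimal sketch flags anomalies in a multivariate sensor stream using per-channel rolling z-scores. The synthetic data, the 50-sample window, and the 3-sigma threshold are illustrative assumptions, not a prescribed method.

    import numpy as np

    # Illustrative sketch: rolling z-score anomaly detection on a synthetic
    # multivariate sensor stream (window length and threshold are assumptions).
    rng = np.random.default_rng(0)
    stream = rng.normal(size=(1000, 4))   # 1000 samples from 4 sensor channels
    stream[700, 2] += 8.0                 # inject a spike on channel 2

    window = 50                           # recent-history window per channel
    for t in range(window, stream.shape[0]):
        ref = stream[t - window:t]
        z = (stream[t] - ref.mean(axis=0)) / (ref.std(axis=0) + 1e-9)
        flagged = np.where(np.abs(z) > 3.0)[0]
        if flagged.size:                  # a few chance 3-sigma crossings are expected
            print(f"t={t}: anomalous channels {flagged.tolist()}")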

Because such decisions must often be made immediately, real-time industrial environments impose strict requirements for low-latency processing, high reliability, and minimal reliance on centralized cloud-based infrastructure. Addressing these constraints increasingly relies on a flexible edge–cloud continuum, in which computational workloads are distributed across edge devices, on-premise servers, and cloud-scale resources. Within this continuum, emerging techniques for distributed LLM inference also allow large foundation models to be partitioned, compressed, or collaboratively executed across heterogeneous industrial nodes. This brings advanced reasoning, contextual understanding, and multimodal analytics closer to where data are generated, while still leveraging the cloud for heavier coordination, model updates, and cross-site aggregation. The result is an architecture that combines rapid, local intelligence with scalable, cloud-enabled model management.
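
As a deliberately simplified illustration of such partitioned execution, the sketch below splits a tiny two-layer model so that the first stage runs on an edge node and the second stage runs in the cloud, with only a compact intermediate activation crossing the network. The model, weights, and function names are hypothetical.

    import numpy as np

    # Hypothetical split-inference sketch: the first model partition stays on the
    # edge device, the second runs on cloud-scale resources.
    rng = np.random.default_rng(0)
    W_edge = rng.normal(size=(16, 8))    # edge-resident weights: 16 inputs -> 8 features
    W_cloud = rng.normal(size=(8, 3))    # cloud-resident weights: 8 features -> 3 outputs

    def edge_forward(x):
        """Run the first partition locally and return a small activation."""
        return np.maximum(x @ W_edge, 0.0)          # ReLU computed near the data source

    def cloud_forward(h):
        """Run the remaining partition in the cloud."""
        return h @ W_cloud

    x = rng.normal(size=(1, 16))                    # one sensor feature vector
    activation = edge_forward(x)                    # only this small tensor is transmitted
    prediction = cloud_forward(activation)
    print(prediction.shape)                         # (1, 3)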

Distributed AI across the edge–cloud continuum, including federated, collaborative, and LLM-based inference paradigms, offers a powerful approach to these challenges. By enabling scalable, privacy-preserving, and context-aware processing directly within industrial systems, edge intelligence can support timely, robust, and autonomous decision-making across complex industrial environments.
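
In the same spirit, the following minimal federated-averaging sketch shows how edge sites can contribute to a shared model without exposing raw measurements; the local least-squares fit and the sample-weighted aggregation are illustrative assumptions rather than a method endorsed by this call.

    import numpy as np

    # Illustrative federated averaging: each site fits a model on private data and
    # shares only its parameters; the server aggregates them by sample count.
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0, 0.5])             # ground truth used to simulate sites

    def local_update(n_samples):
        """Fit a least-squares model on one site's private data."""
        X = rng.normal(size=(n_samples, 3))
        y = X @ true_w + 0.1 * rng.normal(size=n_samples)
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        return w, n_samples

    updates = [local_update(n) for n in (200, 500, 100)]   # three edge sites
    total = sum(n for _, n in updates)
    global_w = sum(w * n for w, n in updates) / total      # weighted average of weights
    print(np.round(global_w, 2))                           # close to [ 2. -1.  0.5]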

This Special Issue invites research and practical case studies on applied AI methods that leverage the edge–cloud continuum, distributed learning, and emerging LLM-based inference to enhance real-time forecasting, decision-making, and operational intelligence in industrial environments, from manufacturing and logistics to smart energy systems and autonomous operations.

Prof. Dr. Andreas Fischer
Prof. Dr. Andreas Kassler
Prof. Dr. Bestoun S. Ahmed
Prof. Dr. Carlos Enrique Palau Salvador
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Big Data and Cognitive Computing is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • multivariate time-series forecasting
  • federated and split learning
  • edge intelligence and edge–cloud continuum architectures
  • distributed LLM inference for industrial AI
  • high-volume and high-frequency data processing
  • multivariate anomaly detection
  • data stream classification
  • explainable and trustworthy AI in industrial settings

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

