Search Results (668)

Search Parameters:
Keywords = AIS big data

40 pages, 1225 KB  
Article
F-DeNETS: A Hybrid Methodology for Complex Multi-Criteria Decision-Making Under Uncertainty
by Konstantinos A. Chrysafis
Systems 2025, 13(11), 1019; https://doi.org/10.3390/systems13111019 (registering DOI) - 13 Nov 2025
Abstract
In the modern business environment, where uncertainty and complexity make decision-making difficult, the need for robust, transparent and adaptable support tools is pressing. The proposed method, named Flexible Decision Navigator for Evaluating Trends and Strategies (F-DeNETS), offers a complementary perspective to classic Artificial Intelligence (AI), Big Data and Multi-Criteria Decision-Making (MCDM) tools. Despite their broad use, these methods frequently suffer from critical sensitivities in the weighting of criteria and the handling of uncertainty, leading to compromised reliability and limited practical utility in environments with limited data availability. To bridge this gap, F-DeNETS integrates intuition and uncertainty into a transparent and statistically grounded process. It introduces a balanced approach that combines statistical evidence with human judgment, extending the boundaries of classic AI, Big Data and MCDM methods. Classic MCDM methods, although useful, are sometimes limited by subjectivity, staticity and dependence on large volumes of data. To address these limitations, F-DeNETS, a hybrid framework combining Fuzzy Decision-Making Trial and Evaluation Laboratory (DEMATEL), Non-Asymptotic Fuzzy Estimators (NAFEs) and Fuzzy Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), transforms expert judgments into statistically sound fuzzy quantifications, incorporates dynamic adaptation to new data, reduces bias and enhances reliability. A numerical application from the shipping industry demonstrates that F-DeNETS offers a flexible and interpretable methodology for optimal decisions in environments of high uncertainty. Full article
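The final ranking stage of pipelines like this is a fuzzy TOPSIS step. The sketch below is a minimal fuzzy TOPSIS illustration in Python, assuming triangular fuzzy ratings and crisp criterion weights (which F-DeNETS would instead derive from fuzzy DEMATEL and NAFEs); the alternatives, criteria, and numbers are invented for illustration, not taken from the paper.

```python
import numpy as np

# Triangular fuzzy ratings (l, m, u) for 3 hypothetical alternatives x 2 criteria
ratings = np.array([
    [[5, 7, 9], [3, 5, 7]],
    [[3, 5, 7], [7, 9, 9]],
    [[1, 3, 5], [5, 7, 9]],
], dtype=float)
weights = np.array([0.6, 0.4])   # crisp criterion weights (assumed given)

# Normalize benefit-type criteria by the largest upper bound per criterion, then weight
u_max = ratings[:, :, 2].max(axis=0)
weighted = (ratings / u_max[None, :, None]) * weights[None, :, None]

# Fuzzy positive/negative ideal solutions after normalization to [0, 1]
fpis, fnis = np.ones(3), np.zeros(3)

def tfn_distance(a, b):
    """Vertex distance between triangular fuzzy numbers."""
    return np.sqrt(((a - b) ** 2).mean(axis=-1))

d_pos = tfn_distance(weighted, fpis).sum(axis=1)   # distance to the ideal
d_neg = tfn_distance(weighted, fnis).sum(axis=1)   # distance to the anti-ideal
closeness = d_neg / (d_pos + d_neg)                # higher = better

print("closeness coefficients:", closeness.round(3))
print("ranking (best first):", np.argsort(-closeness))
```

Higher closeness coefficients indicate alternatives nearer the fuzzy positive ideal; the ranking is read directly from them.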
31 pages, 1106 KB  
Review
AI-Driven Drug Repurposing: Applications and Challenges
by Paraskevi Keramida, Nikolaos K. Syrigos, Marousa Kouvela, Garyfallia Poulakou, Andriani Charpidou and Oraianthi Fiste
Medicines 2025, 12(4), 28; https://doi.org/10.3390/medicines12040028 - 13 Nov 2025
Abstract
Drug repurposing is the process of discovering new therapeutic indications for already existing drugs. By using already approved molecules with known safety profiles, this approach reduces the time, costs, and failure rates associated with traditional drug development, accelerating the availability of new treatments to patients. Artificial Intelligence (AI) plays a crucial role in drug repurposing by exploiting various computational techniques to analyze and process big datasets of biological and medical information, predict similarities between biomolecules, and identify disease mechanisms. The purpose of this review is to explore the role of AI tools in drug repurposing and underline their applications across various medical domains, mainly in oncology, neurodegenerative disorders, and rare diseases. However, several challenges remain to be addressed. These include the need for a deeper understanding of molecular mechanisms, ethical concerns, regulatory requirements, and issues related to data quality and interpretability. Overall, AI-driven drug repurposing is an innovative and promising field that can transform medical research and drug development, covering unmet medical needs efficiently and cost-effectively. Full article
34 pages, 1010 KB  
Systematic Review
Big Data Management and Quality Evaluation for the Implementation of AI Technologies in Smart Manufacturing
by Alexander E. Hramov and Alexander N. Pisarchik
Appl. Sci. 2025, 15(22), 11905; https://doi.org/10.3390/app152211905 - 9 Nov 2025
Viewed by 518
Abstract
This review examines the role of industrial data in enabling artificial intelligence (AI) technologies within the framework of Industry 4.0. Key aspects of industrial data management, including collection, preprocessing, integration, and utilization for training AI models, are analyzed and systematically categorized. Criteria for assessing data quality are defined, covering accuracy, completeness, consistency, and confidentiality, and practical recommendations are proposed for preparing data for effective machine learning and deep learning applications. In addition, current approaches to data management are compared, and methods for evaluating and improving data quality are outlined. Particular attention is given to challenges and limitations in industrial contexts, as well as the prospects for leveraging high-quality data to enhance AI-driven smart manufacturing. Full article
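As a concrete illustration of the data-quality criteria discussed above (completeness, consistency, accuracy), the sketch below computes simple pandas-based quality indicators over a toy sensor log; the column names, value ranges, and thresholds are assumptions made up for this example, not taken from the review.

```python
import numpy as np
import pandas as pd

# Toy sensor log exhibiting the kinds of defects discussed: a gap, a duplicate, a spike
df = pd.DataFrame({
    "timestamp": pd.to_datetime(["2025-01-01 00:00", "2025-01-01 00:01",
                                 "2025-01-01 00:01", "2025-01-01 00:03"]),
    "machine_id": ["M1", "M1", "M1", "M1"],
    "temperature_c": [72.1, np.nan, 72.3, 9999.0],
})

# Completeness: share of non-missing values per column
completeness = df.notna().mean()

# Consistency: rate of duplicated (machine, timestamp) records
duplicate_rate = df.duplicated(subset=["machine_id", "timestamp"]).mean()

# Accuracy proxy: share of readings inside a physically plausible range
in_range_share = df["temperature_c"].between(-20, 200).mean()

print(completeness)
print(f"duplicate record rate: {duplicate_rate:.2f}")
print(f"plausible-range share: {in_range_share:.2f}")
```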
27 pages, 9075 KB  
Review
Visualized Analysis of Adolescent Non-Suicidal Self-Injury and Comorbidity Networks
by Zhen Zhang, Juan Guo, Yali Zhao, Xiangyan Li and Chunhui Qi
Behav. Sci. 2025, 15(11), 1513; https://doi.org/10.3390/bs15111513 - 7 Nov 2025
Viewed by 410
Abstract
Non-suicidal self-injury (NSSI) has become an increasingly salient mental health concern among adolescents, and it commonly co-occurs with depression, anxiety, borderline personality disorder, substance use, and childhood maltreatment, forming a complex psychological risk structure. Despite a growing body of literature, a systematic understanding of the structural links between NSSI and psychiatric comorbidities remains limited. This study uses bibliometric and visualization methods to map the developmental trajectory and knowledge structure of the field and to identify research hotspots and frontiers. Drawing on the Web of Science Core Collection, we screened 1562 papers published between 2005 and 2024 on adolescent NSSI and comorbid psychological problems. Using CiteSpace 6.3.R1, VOSviewer 1.6.20, and R 4.3.3, we constructed knowledge graphs from keyword co-occurrence, clustering, burst-term detection, and co-citation analyses. The results show an explosive growth of research in recent years. Hotspots center on comorbidity mechanisms of mood disorders, the impact of childhood trauma, and advances in dynamic assessment. Research has evolved from describing behavioral features toward integrative mechanisms, with five current emphases: risk factor modeling, diagnostic standard optimization, cultural sensitivity, stratified intervention strategies, and psychological risks in special populations. With big data and AI applications, the field is moving toward dynamic prediction and precision intervention. Future work should strengthen cross-cultural comparisons, refine comorbidity network theory, and develop biomarker-informed differentiated interventions to advance both theory and clinical practice. Full article
(This article belongs to the Section Health Psychology)
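The knowledge graphs described in this study start from keyword co-occurrence counts. The sketch below shows, on assumed toy keyword lists rather than the 1562 WoS records, how such pairwise counts can be built in Python before being handed to tools such as VOSviewer or CiteSpace.

```python
from collections import Counter
from itertools import combinations

# Toy author-keyword lists, one per paper (stand-ins for real WoS records)
papers = [
    ["non-suicidal self-injury", "depression", "adolescents"],
    ["non-suicidal self-injury", "anxiety", "childhood maltreatment"],
    ["depression", "anxiety", "adolescents"],
]

# Count how often each unordered keyword pair appears in the same paper
cooccurrence = Counter()
for keywords in papers:
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccurrence[(a, b)] += 1

# The heaviest edges mark the densest regions of the keyword map
for (a, b), weight in cooccurrence.most_common(5):
    print(f"{a} -- {b}: {weight}")
```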
23 pages, 5376 KB  
Review
Interferences and Frontiers Between Industry 4.0 and Circular Economy
by Dorel Badea, Andra-Teodora Gorski, Diana Elena Ranf, Elisabeta-Emilia Halmaghi and Hortensia Gorski
Processes 2025, 13(11), 3579; https://doi.org/10.3390/pr13113579 - 6 Nov 2025
Viewed by 307
Abstract
The article examines the relationship between Industry 4.0 (I4.0) and the circular economy (CE), two modern concepts widely used across scientific disciplines as well as in interdisciplinary and transdisciplinary fields. The premise is that exploring conceptual interoperability is itself a resource for knowledge and innovation, both for clarifying the theory specific to each discipline and for opening new, less explored perspectives of value to everyday economic practice. The main methodological component is bibliometric analysis, built on a database of existing approaches to the two key concepts drawn from the Web of Science Core Collection (WoS). The research shows that the theoretical and practical scope of both concepts is expanding, as reflected in the consistency and diversification of the literature within the considered frame of reference. The main conclusion of the study is that AI-driven servitization, IoT, and big data are facilitators of CE implementation. The contribution lies in consolidating an updated bibliometric overview of this interdisciplinary field and in highlighting new directions. Full article
(This article belongs to the Special Issue Circular Economy on Production Processes and Systems Engineering)
72 pages, 1461 KB  
Systematic Review
LLMs for Cybersecurity in the Big Data Era: A Comprehensive Review of Applications, Challenges, and Future Directions
by Aristeidis Karras, Leonidas Theodorakopoulos, Christos Karras, Alexandra Theodoropoulou, Ioanna Kalliampakou and Gerasimos Kalogeratos
Information 2025, 16(11), 957; https://doi.org/10.3390/info16110957 - 4 Nov 2025
Viewed by 816
Abstract
This paper presents a systematic review of research (2020–2025) on the role of Large Language Models (LLMs) in cybersecurity, with emphasis on their integration into Big Data infrastructures. Based on a curated corpus of 235 peer-reviewed studies, this review synthesizes evidence across multiple domains to evaluate how models such as GPT-4, BERT, and domain-specific variants support threat detection, incident response, vulnerability assessment, and cyber threat intelligence. The findings confirm that LLMs, particularly when coupled with scalable Big Data pipelines, improve detection accuracy and reduce response latency compared with traditional approaches. However, challenges persist, including adversarial susceptibility, risks of data leakage, computational overhead, and limited transparency. The contribution of this study lies in consolidating fragmented research into a unified taxonomy, identifying sector-specific gaps, and outlining future research priorities: enhancing robustness, mitigating bias, advancing explainability, developing domain-specific models, and optimizing distributed integration. In doing so, this review provides a structured foundation for both academic inquiry and practical adoption of LLM-enabled cyberdefense strategies. The last literature search was conducted on 30 April 2025; the review followed PRISMA 2020, assessed risk of bias, and used random-effects syntheses. Full article
(This article belongs to the Special Issue IoT, AI, and Blockchain: Applications, Security, and Perspectives)
21 pages, 2680 KB  
Review
Big Data and AI-Enabled Construction of a Novel Gemstone Database: Challenges, Methodologies, and Future Perspectives
by Yu Zhang and Guanghai Shi
Minerals 2025, 15(11), 1149; https://doi.org/10.3390/min15111149 - 31 Oct 2025
Viewed by 419
Abstract
Gemstone samples, as objects of study in gemology, carry rich geological information and cultural value, playing an irreplaceable role in teaching, research, and public science communication. In the current age of big data, machine learning and artificial intelligence techniques based on gemstone databases have emerged as a cutting-edge area of gemology. However, traditional gemstone databases have three major limitations: an absence of standardized data schemas, incomplete core datasets (e.g., records of synthetic and treated gemstones and inclusion characteristics), and poor data interoperability. These deficiencies hinder the application of advanced technologies, such as machine learning (ML) and AI techniques. This paper reviews gemstone data and applications, as well as existing gem-related sample databases, and proposes a framework for a new gemstone database based on standardization (FAIR principles), integration (blockchain technology), and dynamism (real-time updates). This framework could transform the gemstone industry, shifting it from “experience-driven” to “data-driven” practices. Powered by big data technology, this novel database will revolutionize gemological research, jewelry authentication, market transactions, and educational outreach, fostering innovation in academic research and practical applications. Full article
18 pages, 2239 KB  
Article
AI–Big Data Analytics Platform for Energy Forecasting in Modern Power Systems
by Martin Santos-Dominguez, Nicasio Hernandez Flores, Isaac Alberto Parra-Ramirez and Gustavo Arroyo-Figueroa
Big Data Cogn. Comput. 2025, 9(11), 272; https://doi.org/10.3390/bdcc9110272 - 31 Oct 2025
Viewed by 717
Abstract
Big Data Analytics is vital for power grids: it empowers informed decision-making, anticipates potential operational and maintenance issues, optimizes grid management, supports renewable energy integration, monitors consumer behavior, and enables new services, ultimately reducing costs and improving customer service. This paper describes an AI–Big Data Analytics architecture built on a data lake that uses a reduced and customized set of Hadoop and Spark components as a cost-effective, on-premises alternative for advanced data analytics in power systems. As a case study, a comparative analysis of electricity price forecasting models in the day-ahead market for nodes of the Mexican national electrical system is presented, using statistical, machine learning, and deep learning models. A data science and machine learning methodology is used to build and select the best forecasting model. The results show that the Gradient Boosting and Support Vector Regression models delivered the best performance, with a Mean Absolute Percentage Error (MAPE) between 1% and 4% for five-day-ahead electricity price forecasting. Implementing the best forecasting model in the Big Data Analytics Platform automates the calculation of the local electricity price forecast per node (every 24, 72, or 120 h) and its display in a comparative dashboard of actual and forecasted data for on-demand decision-making. The proposed architecture is a valuable tool that paves the way for future intelligent energy forecasting models in power grids, covering load demand, fuel prices, power generation, and consumption, among others. Full article
(This article belongs to the Special Issue Machine Learning and AI Technology for Sustainable Development)
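To make the evaluation concrete, the sketch below trains a Gradient Boosting regressor on lagged values of a synthetic hourly price series and reports MAPE for a 120-hour (five-day-ahead) horizon. It is only a minimal illustration of the kind of model compared in the paper, not the authors' pipeline; the series, lag length, and train/test split are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic hourly price series standing in for one node of the day-ahead market
hours = np.arange(24 * 60)
price = 50 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

# Features: the previous 24 hourly prices; target: the price 120 hours later
n_lags, horizon = 24, 120
X = np.stack([price[i:i + n_lags]
              for i in range(price.size - n_lags - horizon + 1)])
y = price[n_lags + horizon - 1:]

split = int(0.8 * len(X))
model = GradientBoostingRegressor(random_state=0).fit(X[:split], y[:split])
pred = model.predict(X[split:])

# MAPE = (100 / n) * sum(|actual - forecast| / |actual|)
mape = np.mean(np.abs((y[split:] - pred) / y[split:])) * 100
print(f"five-day-ahead MAPE: {mape:.2f}%")
```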
17 pages, 405 KB  
Article
AI-Driven Responsible Supply Chain Management and Ethical Issue Detection in the Tourism Industry
by Minjung Hong and JongMyoung Kim
Sustainability 2025, 17(21), 9622; https://doi.org/10.3390/su17219622 - 29 Oct 2025
Viewed by 504
Abstract
This study aims to develop and evaluate an AI- and big-data-based innovation system for proactively managing ESG (Environmental, Social, and Governance) risks within the tourism supply chain. Drawing on heterogeneous data sources including supply chain records, news articles, social media, and public databases, the research employs advanced methodologies such as network analysis, anomaly detection, natural language processing (including greenwashing detection), and predictive modeling. Through this comprehensive approach, the study demonstrates the feasibility and effectiveness of a dynamic AI-driven ESG risk management system that delivers reliable risk identification and quantitative performance evaluation. The theoretical contribution lies in bridging AI-driven ESG evaluation frameworks with sustainable tourism and hospitality literature, moving beyond static, indicator-based assessments toward a more systematic, replicable, and predictive methodology capable of capturing the dynamic, multiscalar, and networked nature of tourism supply chains. Ultimately, this research provides tourism and hospitality firms with a powerful tool to enhance transparency, mitigate ethical and reputational risks, and strengthen stakeholder trust, while offering actionable insights for managers and policymakers developing data-driven ESG integration strategies. Full article
(This article belongs to the Section Tourism, Culture, and Heritage)
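One building block named above is anomaly detection over supplier indicators. The sketch below is a hedged illustration using scikit-learn's IsolationForest on synthetic supplier features; the feature names and the contamination rate are assumptions, not the study's actual variables or settings.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical supplier-level indicators: monthly order volume, labor-complaint
# mentions mined from news/social media, and an emissions-intensity estimate
typical = rng.normal(loc=[100, 2, 0.5], scale=[15, 1, 0.1], size=(200, 3))
risky = rng.normal(loc=[100, 15, 1.5], scale=[15, 3, 0.3], size=(5, 3))
X = np.vstack([typical, risky])

# Unsupervised anomaly detection: suppliers whose indicator profile deviates
# from the bulk of the network become candidates for manual ESG review
detector = IsolationForest(contamination=0.03, random_state=0).fit(X)
flags = detector.predict(X)   # -1 = anomalous, 1 = typical
print("flagged supplier rows:", np.where(flags == -1)[0])
```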
15 pages, 1041 KB  
Article
Implementation and Rollout of a Trusted AI-Based Approach to Identify Financial Risks in Transportation Infrastructure Construction Projects
by Michael Grims, Daniel Karas, Marina Ivanova, Gerhard Höfinger, Sebastian Bruchhaus, Marco X. Bornschlegl and Matthias L. Hemmje
Appl. Syst. Innov. 2025, 8(6), 161; https://doi.org/10.3390/asi8060161 - 24 Oct 2025
Viewed by 390
Abstract
Using big data for risk analysis of construction projects is a largely unexplored area. In this traditional industry, risk identification is often based either on so-called domain expert knowledge, in other words on experience, or on statistical and quantitative analyses of individual past projects. This research is motivated by DARIA, an implemented and evaluated data-driven, AI-based approach for identifying financial risks in the execution phase of transportation infrastructure construction projects; DARIA delivers exceptional results early in the execution phase and has already been deployed into enterprise-wide production within the STRABAG group. Because DARIA is in productive use, doubts about the trustworthiness of its ML algorithm are to be expected, especially when DARIA flags risky projects while all conventional metrics within the STRABAG controlling system indicate no problems. “If AI systems do not prove to be worthy of trust, their widespread acceptance and adoption will be hindered, and the potentially vast societal and economic benefits will not be fully realized”. Building on the results of a user study conducted during DARIA’s deployment into enterprise-wide production, this paper therefore focuses on identifying suitable indicators for measuring the trustworthiness of the DARIA ML algorithm in the interaction between individuals and systems, as well as on modeling the reproducibility of the internal state of DARIA’s ML model. Full article
(This article belongs to the Special Issue AI-Driven Decision Support for Systemic Innovation)
30 pages, 2575 KB  
Review
Industrial Site Selection: Methodologies, Advances and Challenges
by Dongbo Wang, Yubo Zhu, Xidao Mao, Jianyi Wang and Xiaohui Ji
Appl. Sci. 2025, 15(21), 11379; https://doi.org/10.3390/app152111379 - 23 Oct 2025
Viewed by 554
Abstract
Industrial site selection holds strategic importance in the layout of industrial facilities. Scientific decision-making in site selection not only enhances the economic and technical feasibility of a project but also lays the foundation for sustainable development. However, industrial site selection is considered an NP-hard problem. For researchers and engineers, the key questions concern which criteria evaluate site suitability, which methods prove effective under different conditions, which big data sources have been introduced, and which data gaps, methodological limitations, and research priorities most affect decision quality. Using the Web of Science (WoS) Core Collection as the data source, this paper retrieved literature on the themes of “industrial site selection” and “facility location decision making” and selected 149 highly relevant papers. It systematically categorizes three mainstream site selection methods: operations research-based methods, geographic information system (GIS)-based approaches, and artificial intelligence techniques. On this basis, the paper provides a systematic review of the overall industrial site selection process and methodologies, aiming to offer references for subsequent site selection research and practical site selection work. An “MCDM–GIS–AI” technology convergence roadmap is also proposed to identify remaining research gaps and offer a set of “good-practice guidelines” that inform both practical applications and future analytical studies. Full article
(This article belongs to the Special Issue Applications of Big Data and Artificial Intelligence in Geoscience)
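As a minimal illustration of the “MCDM–GIS–AI” idea, the sketch below combines toy criterion rasters with assumed weights into a weighted-overlay suitability surface and applies a hard constraint mask; layer names, weights, and grid values are invented for the example and are not from the review.

```python
import numpy as np

rng = np.random.default_rng(3)
shape = (100, 100)                    # toy raster grid over a study area

# Criterion layers, each already rescaled to 0-1 suitability (assumed inputs)
transport_access = rng.random(shape)  # nearness to road/rail networks
land_cost = rng.random(shape)         # 1 = cheap land, 0 = expensive
flood_safety = rng.random(shape)      # 1 = low flood risk

# MCDM step: criterion weights, e.g. elicited with AHP or entropy weighting
w_transport, w_cost, w_flood = 0.5, 0.3, 0.2

# GIS step: weighted linear overlay produces a suitability surface
suitability = (w_transport * transport_access
               + w_cost * land_cost
               + w_flood * flood_safety)

# Hard constraint: protected cells are excluded outright rather than down-weighted
protected = rng.random(shape) < 0.05
suitability[protected] = 0.0

row, col = np.unravel_index(np.argmax(suitability), shape)
print(f"most suitable cell: ({row}, {col}), score {suitability[row, col]:.3f}")
```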
20 pages, 589 KB  
Article
From Big Data to Cultural Intelligence: An AI-Powered Framework and Machine Learning Validation for Global Marketing
by Jungwon Lee
J. Theor. Appl. Electron. Commer. Res. 2025, 20(4), 288; https://doi.org/10.3390/jtaer20040288 - 22 Oct 2025
Viewed by 733
Abstract
This research addresses the ‘cultural blind spot’ in Big Data and AI, where algorithms treat global user-generated content monolithically, fostering biased marketing models. It proposes a dynamic ‘contextual value amplification’ framework, integrating Impression Management and Construal Level Theories. The study argues that service context—luxury versus budget—systematically reconfigures how cultural values are expressed in online customer reviews. A dual-method approach was applied to 284,746 negative hotel reviews. First, a high-dimensional fixed-effects model provided evidence for ‘cultural complaint signatures’ and revealed a novel mechanism: the luxury context amplifies individualists’ focus on relational Service but dampens their focus on transactional Value. Second, an XGBoost model offered computational validation. Including these theoretically derived features improved the model’s ability to classify a reviewer’s cultural orientation by over 220%. The study proposes a dynamic, context-contingent theory of cross-cultural expression, offers a methodological template fusing econometrics and machine learning to mitigate bias, and advances a conceptual framework for ‘Cultural Intelligence’. Full article
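The computational validation step classifies a reviewer's cultural orientation with XGBoost from theory-derived features. The sketch below reproduces only that general pattern on synthetic data (a hypothetical luxury-context flag and service/value focus scores); it is not the paper's 284,746-review dataset, feature set, or model configuration.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(2)
n = 2000

# Hypothetical review-level features: a luxury-context flag, shares of Service-
# vs. Value-related terms, and a generic text-length control
luxury = rng.integers(0, 2, n)
service_focus = rng.normal(0, 1, n)
value_focus = rng.normal(0, 1, n)
review_length = rng.normal(0, 1, n)

# Synthetic label (1 = individualist orientation) driven by context interactions,
# loosely mimicking the "amplify Service / dampen Value" pattern described above
logit = 1.2 * luxury * service_focus - 0.8 * luxury * value_focus + 0.3 * review_length
y = (logit + rng.normal(0, 1, n) > 0).astype(int)

X = np.column_stack([luxury, service_focus, value_focus, review_length])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X_tr, y_tr)
print(f"holdout accuracy: {accuracy_score(y_te, model.predict(X_te)):.3f}")
```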
20 pages, 1517 KB  
Article
Divergent Paths of SME Digitalization: A Latent Class Approach to Regional Modernization in the European Union
by Rumiana Zheleva, Kamelia Petkova and Svetlomir Zdravkov
World 2025, 6(4), 144; https://doi.org/10.3390/world6040144 - 21 Oct 2025
Viewed by 597
Abstract
Small and medium-sized enterprises (SMEs) constitute the backbone of the EU economy, yet their uneven digital transformation raises challenges for competitiveness and territorial cohesion. This article examines the organizational and spatial aspects of SME digitalization across the European Union using Flash Eurobarometer 486 data and latent class analysis (LCA) combined with Bayesian multilevel multinomial regression. The results reveal four SME digitalization profiles—Digitally Conservative Backbone; Partially Digital and Upgrading; Digitally Advanced and Diversified; and Focused Digital Integrators—reflecting diverse adoption patterns of key technologies such as AI, big data and cloud computing. Digitalization is shaped by organizational factors (firm size, value chain integration, digital barriers) and territorial factors (urbanity, border proximity, national digital infrastructure as measured by the Digital Economy and Society Index, DESI). Contrary to linear modernization assumptions, digital adoption follows geographically embedded trajectories, with sectoral uptake occurring even in low-DESI or non-urban regions. These results challenge core–periphery models and highlight the significance of place-based innovation networks. The study contributes to modernization theory and regional innovation systems by showing that digital inequalities exist not only between countries but also within regions and among adoption profiles, emphasizing the need for nuanced, multi-level digital policy approaches across Europe. Full article
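Latent class analysis groups firms by their pattern of categorical adoption indicators. The sketch below fits a simple latent class model with an EM algorithm on synthetic binary adoption items (AI, big data, cloud, and e-commerce are assumed stand-ins for the Eurobarometer items); it omits the Bayesian multilevel multinomial regression stage and is meant only to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic SME adoption indicators (1 = adopted): AI, big data, cloud, e-commerce,
# generated from three planted profiles that mimic distinct digitalization classes
true_profiles = np.array([[0.1, 0.1, 0.2, 0.3],   # digitally conservative
                          [0.3, 0.4, 0.8, 0.7],   # partially digital
                          [0.9, 0.8, 0.9, 0.9]])  # digitally advanced
membership = rng.choice(3, size=3000, p=[0.5, 0.3, 0.2])
X = (rng.random((3000, 4)) < true_profiles[membership]).astype(float)

# EM for a latent class model: K classes, conditionally independent Bernoulli items
K, d = 3, X.shape[1]
pi = np.full(K, 1 / K)                     # class shares
theta = rng.uniform(0.25, 0.75, (K, d))    # item-response probabilities

for _ in range(200):
    # E-step: posterior class memberships given current parameters
    log_post = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(pi)
    log_post -= log_post.max(axis=1, keepdims=True)
    resp = np.exp(log_post)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update class shares and item probabilities
    pi = resp.mean(axis=0)
    theta = (resp.T @ X) / resp.sum(axis=0)[:, None]
    theta = theta.clip(1e-6, 1 - 1e-6)

print("estimated class shares:", pi.round(2))
print("estimated item-response profiles:\n", theta.round(2))
```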
24 pages, 797 KB  
Article
Towards a Sustainable Workforce in Big Data Analytics: Skill Requirements Analysis from Online Job Postings Using Neural Topic Modeling
by Fatih Gurcan, Ahmet Soylu and Akif Quddus Khan
Sustainability 2025, 17(20), 9293; https://doi.org/10.3390/su17209293 - 20 Oct 2025
Viewed by 618
Abstract
Big data analytics has become a cornerstone of modern industries, driving advancements in business intelligence, competitive intelligence, and data-driven decision-making. This study applies Neural Topic Modeling (NTM) using the BERTopic framework and N-gram-based textual content analysis to examine job postings related to big data analytics in real-world contexts. A structured analytical process was conducted to derive meaningful insights into workforce trends and skill demands in the big data analytics domain. First, expertise roles and tasks were identified by analyzing job titles and responsibilities. Next, key competencies were categorized into analytical, technical, developer, and soft skills and mapped to corresponding roles. Workforce characteristics such as job types, education levels, and experience requirements were examined to understand hiring patterns. In addition, essential tasks, tools, and frameworks in big data analytics were identified, providing insights into critical technical proficiencies. The findings show that big data analytics requires expertise in data engineering, machine learning, cloud computing, and AI-driven automation. They also emphasize the importance of continuous learning and skill development to sustain a future-ready workforce. By connecting academia and industry, this study provides valuable implications for educators, policymakers, and corporate leaders seeking to strengthen workforce sustainability in the era of big data analytics. Full article
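The core method here is BERTopic-based neural topic modeling of posting text. The sketch below shows the basic fit-and-inspect pattern on a tiny, repeated toy corpus standing in for real job postings; in practice a large corpus is needed, and the model downloads a sentence-transformer embedding backend on first use.

```python
from bertopic import BERTopic

# A tiny, repeated toy corpus of posting snippets; real analyses need thousands
postings = [
    "data engineer building spark and hadoop pipelines on aws",
    "machine learning engineer deploying python models with kubernetes",
    "business intelligence analyst creating dashboards in tableau and sql",
    "cloud architect designing data lakes on azure with databricks",
    "data scientist applying deep learning to customer churn prediction",
    "analytics manager leading a team of sql and power bi developers",
] * 25  # repeat so the embedding/clustering steps have enough documents

# Documents are embedded with a sentence transformer, reduced with UMAP,
# clustered with HDBSCAN, and described with c-TF-IDF keywords
topic_model = BERTopic(min_topic_size=10)
topics, probs = topic_model.fit_transform(postings)

# Inspect the discovered role/skill themes and their top descriptors
print(topic_model.get_topic_info().head())
print(topic_model.get_topic(0))
```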
25 pages, 4025 KB  
Review
Precision Forestry Revisited
by Can Vatandaslar, Kevin Boston, Zennure Ucar, Lana L. Narine, Marguerite Madden and Abdullah Emin Akay
Remote Sens. 2025, 17(20), 3465; https://doi.org/10.3390/rs17203465 - 17 Oct 2025
Viewed by 862
Abstract
This review presents a synthesis of global research on precision forestry, a field that integrates advanced technologies to enhance—rather than replace—established tools and methods used in operational forest management and the wood products industry. By evaluating 210 peer-reviewed publications indexed in Web of Science (up to 2025), the study identifies six main categories and eight components of precision forestry. The findings indicate that “forest management and planning” is the most common category, with nearly half of the studies focusing on this topic. “Remote sensing platforms and sensors” emerged as the most frequently used component, with unmanned aerial vehicle (UAV) and light detection and ranging (LiDAR) systems being the most widely adopted tools. The analysis also reveals a notable increase in precision forestry research since the early 2010s, coinciding with rapid developments in small UAVs and mobile sensor technologies. Despite growing interest, robotics and real-time process control systems remain underutilized, mainly due to challenging forest conditions and high implementation costs. The research highlights geographical disparities, with Europe, Asia, and North America hosting the majority of studies. Italy, China, Finland, and the United States stand out as the most active countries in terms of research output. Notably, the review emphasizes the need to integrate precision forestry into academic curricula and support industry adoption through dedicated information and technology specialists. As the forestry workforce ages and technology advances rapidly, a growing skills gap exists between industry needs and traditional forestry education. Equipping the next generation with hands-on experience in big data analysis, geospatial technologies, automation, and Artificial Intelligence (AI) is critical for ensuring the effective adoption and application of precision forestry. Full article
(This article belongs to the Special Issue Digital Modeling for Sustainable Forest Management)