Search Results (1,285)

Search Parameters:
Keywords = AI-generated news

28 pages, 30228 KB  
Article
Generative AI for Cultural Heritage: Shanghai Revolutionary Culture Digital Design Based on the SD–LoRA Model
by Chunmao Wu, Jian Tang and Ling Tong
Appl. Sci. 2026, 16(9), 4427; https://doi.org/10.3390/app16094427 - 1 May 2026
Abstract
In recent years, the development of generative artificial intelligence, particularly diffusion models such as Stable Diffusion (SD), has provided new opportunities for the digital representation and creative dissemination of cultural heritage. This study takes Shanghai revolutionary cultural heritage as a case study and develops an application-oriented integrated workflow for generating revolutionary cultural images. By introducing Low-Rank Adaptation (LoRA) into the SD framework and combining structural control with local refinement strategies, this study enhances the style expression and structural quality of the generated images. Furthermore, an interactive creation platform is constructed to support the generation and creation of revolutionary cultural images. The evaluation results, including subjective assessment and SSIM/LPIPS metrics, indicate that the proposed workflow achieves higher style consistency and structural reliability while improving the structural integrity and detail reliability of facial regions. The proposed workflow and platform are intended to support revolutionary cultural venues in practical digital production and dissemination of heritage content while also promoting public engagement with revolutionary culture, especially among younger audiences. This study highlights the application potential of generative Artificial Intelligence (AI) in enhancing the accessibility, digitalization, and sustainable dissemination of revolutionary cultural heritage. It also provides a practical reference for interdisciplinary research in the field of cultural heritage and AI-assisted digital communication. Full article
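The LoRA adapter this paper builds on injects a low-rank delta into frozen Stable Diffusion weights. A minimal dependency-free sketch of that update rule, assuming the common alpha/r scaling convention (the shapes and values are illustrative, not taken from the paper):

```python
def lora_update(W, A, B, alpha=16, r=4):
    """Apply the LoRA delta W' = W + (alpha / r) * (B @ A).

    W: d_out x d_in base weights, B: d_out x r, A: r x d_in,
    all plain nested lists so the sketch stays dependency-free."""
    s = alpha / r
    d_out, d_in = len(W), len(W[0])
    return [[W[i][j] + s * sum(B[i][k] * A[k][j] for k in range(r))
             for j in range(d_in)]
            for i in range(d_out)]
```

In practice libraries such as diffusers or PEFT handle this; the point is only that the adapter touches W through a rank-r product B @ A, which is what keeps fine-tuning cheap.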
23 pages, 2342 KB  
Article
AI-Driven Traffic Control Method and Reliability Analysis for Digital City Local Narrow-Road, Dense-Network
by Aixu Ji, Jie Wang, Hui Deng, Zipeng Wang, Mingfang Zhang and Pangwei Wang
Appl. Sci. 2026, 16(9), 4430; https://doi.org/10.3390/app16094430 - 1 May 2026
Abstract
In urban environments characterized by narrow roads and dense networks with short intersection spacing and high connectivity, traffic flows exhibit strong spatiotemporal coupling and pose safety challenges. Conventional traffic signal control approaches struggle to achieve effective regional coordination, while existing control models based on artificial intelligence (AI) lack consideration for trustworthiness and robustness. To address these challenges, an AI-driven traffic control method for digital city traffic signals is proposed. A unified and decodable latent action representation space is constructed, in which the dependency between phase selection and green time duration is captured using discrete action embedding tables and a conditional variational autoencoder (CVAE), ensuring the stability and interpretability of the AI-driven model. Building on this foundation, a globally shared latent representation is integrated with a local coordination mechanism, and the proximal policy optimization (PPO) algorithm is employed for policy training. A state residual prediction regularization loss is introduced to improve the model’s generalization capability and convergence efficiency. Experiments were conducted using a real road network and traffic flow data from the Rongdong District of Xiongan New Area. Under spatially imbalanced peak-hour traffic conditions, the model reduced average vehicle delay by 14.84% and average queue length by 9.2%; under temporally imbalanced peak-hour traffic, it achieved reductions of 5.36% and 7.2% in delay and queue length, respectively. These results demonstrate that the proposed method significantly enhances both traffic efficiency and system robustness, offering scalable, reliable technical support for secure intelligent transportation systems (ITSs). Full article
37 pages, 11499 KB  
Article
Automated Mid-Surface Mesh Generation Method for Automotive Plastic Parts Based on Deep Learning
by Hongbin Tang, Zehui Huang, Jingchun Wang, Jianjiao Deng, Shibin Wang, Zhiguo Zhang and Zhenjiang Wu
Vehicles 2026, 8(5), 96; https://doi.org/10.3390/vehicles8050096 - 1 May 2026
Abstract
Automotive plastic parts present multiple challenges for Computer-Aided Engineering (CAE) simulation modeling, including complex thin-walled geometries, difficulties in meshing fine features (e.g., clips and snap-fits), and time-consuming manual processing with inconsistent quality. To address these issues, this paper proposes an automated method for generating mid-surface meshes. The proposed approach integrates AI-based feature recognition, point cloud registration, and geometric fitting. First, a specialized point cloud dataset consisting of 132,000 samples of plastic part features was constructed. Using a PointNet++ model, precise semantic segmentation of typical features, such as clips and backing plates, was achieved. Subsequently, a library of typical features was established, and an FPFH-ICP point cloud registration strategy was implemented. Based on the matching rate, an adaptive selection between two processing paths (direct standard mesh replacement and segmentation-fitting generation) was performed. For features with low matching rates, a suite of segmentation-fitting algorithms was proposed. These algorithms incorporate incomplete cylinder parameter extraction, Monte Carlo boundary identification, and internal point cloud reordering, thereby facilitating high-quality mid-surface mesh generation for complex topological structures. Finally, experimental validation was conducted on typical automotive interior plastic parts as well as on new cross-platform vehicle models. The results demonstrate that the proposed method reduces mesh modeling time by 67% while preserving the accuracy of geometric feature restoration. The mesh quality compliance rate increases from 52.27% to 90.9% with the proposed method, reaching a level comparable to that of professional manual meshing. In cross-platform validation, the proposed method maintained high accuracy. Consequently, this approach significantly enhances the intelligence and engineering reliability of CAE pre-processing, providing effective technical support for the automated simulation modeling of complex thin-walled components. Full article
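The adaptive choice between standard-mesh replacement and segmentation-fitting hinges on a registration matching rate. A toy sketch, assuming the rate is the fraction of source points with a near neighbor in the target cloud and assuming a hypothetical 0.8 threshold (the paper's actual criterion is not given in the abstract):

```python
import math

def matching_rate(source, target, tol=0.05):
    """Fraction of source points whose nearest target point lies within tol
    (brute-force nearest neighbor; real pipelines use k-d trees)."""
    hits = sum(1 for p in source
               if min(math.dist(p, q) for q in target) <= tol)
    return hits / len(source)

def choose_path(rate, threshold=0.8):
    """Pick the mesh-generation path from the matching rate (threshold assumed)."""
    return "replace_standard_mesh" if rate >= threshold else "segment_and_fit"
```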
48 pages, 2547 KB  
Review
Security and Privacy in Generative Semantic Communication Systems: A Comprehensive Survey
by Mehwish Ali Naqvi and Insoo Sohn
Mathematics 2026, 14(9), 1522; https://doi.org/10.3390/math14091522 - 30 Apr 2026
Abstract
Semantic communication (SemCom) has emerged as a task-oriented communication paradigm that prioritizes meaning delivery over exact bit recovery. The integration of generative artificial intelligence (GenAI) into SemCom further enables knowledge-guided inference, multimodal reconstruction, and semantic compression through architectures such as large language models, variational autoencoders, generative adversarial networks, and diffusion models. At the same time, this integration introduces new security and privacy risks, including semantic eavesdropping, model inversion, semantic jamming, covert backdoors, prompt manipulation, and knowledge-base leakage, which are not adequately captured by conventional communication security models. In this survey, we provide a security-centric review of GenAI-assisted semantic communication systems by organizing the literature according to threat models, attack surfaces, defence strategies, and semantic modalities across text, image, and multimodal settings. The survey was conducted using IEEE Xplore, ACM Digital Library, SpringerLink, arXiv, and Google Scholar. Approximately 180 papers were initially screened, and 53 representative studies published between 2021 and 2026 were selected for detailed review. Based on this analysis, we classify the major threats into adversarial perturbation, jamming, poisoning and backdoor attacks, privacy leakage and semantic eavesdropping, and generative-model-specific vulnerabilities involving diffusion, large language models, and multimodal foundation models. We further map the corresponding defences, including adversarial training, model ensembling, semantic-aware encryption, diffusion-guided denoising, privacy-preserving representation learning, and secure resource allocation. The survey also identifies persistent open challenges, including the lack of standardized semantic security metrics, unified benchmarks, cross-layer evaluation frameworks, and robust defences for GenAI-native and multimodal semantic communication systems. Overall, this work provides a structured reference for the design of secure, trustworthy, and attack-resilient generative semantic communication systems for future intelligent networks. Full article
(This article belongs to the Special Issue Advances in Blockchain and Intelligent Computing)
21 pages, 10232 KB  
Review
The Significance of Angiopoietin Valency in Vascular Health and Disease
by Yan Ting Zhao, Devon D. Ehnes, Julie Mathieu and Hannele Ruohola-Baker
Cells 2026, 15(9), 820; https://doi.org/10.3390/cells15090820 - 30 Apr 2026
Abstract
The Angiopoietin–Tie2 pathway is a key regulator of postnatal vascular maintenance and remodeling, regulating vascular barrier function and integrity. While the opposing roles of the ligands Angiopoietin-1 (Ang 1) and Angiopoietin-2 (Ang 2) have been recognized for decades, the structural mechanism governing their distinct signaling outputs has only recently been elucidated. As artificial intelligence and protein design continue to develop, emerging evidence suggests that ligand valency and receptor clustering are key determinants of Tie2 pathway activation and endothelial cell function; that is, “form follows function”. This review summarizes the latest discoveries in the structural biology and signaling mechanisms of the Tie2 pathway, using protein design to decode the ligand–receptor interactions. Probing the underlying molecular basis of Tie2 offers new therapeutic opportunities for targeting diseases featuring vascular dysfunction, such as sepsis, traumatic brain injury, acute respiratory diseases, chronic inflammation, and cancer. This also highlights the next generation of AI-designed protein therapeutics. Full article
(This article belongs to the Section Cell Signaling)
44 pages, 856 KB  
Article
A GPT-Based Assessment of Alignment Between Privacy Legal Frameworks & ISO/IEC 27701:2025: A Latin American Case Study
by David Cevallos-Salas, José Estrada-Jiménez and Danny S. Guamán
Technologies 2026, 14(5), 273; https://doi.org/10.3390/technologies14050273 - 30 Apr 2026
Abstract
The 2025 update of the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27701 standard offers a major advantage by enabling organizations to implement a Privacy Information Management System (PIMS) autonomously while maintaining alignment with the General Data Protection Regulation (GDPR). However, it remains unclear to what extent privacy legal frameworks in developing jurisdictions, particularly in Latin American countries, align with this new standard. At the same time, the traditional method for assessing the alignment between privacy legal frameworks and ISO/IEC 27701 continues to rely on manual mapping between the standard’s subclauses and privacy regulatory articles, a process that is time-consuming, costly, and error-prone. More critically, no method exists to quantitatively assess the reliability of such mappings, leaving alignment assessments largely subjective. To address these limitations, this paper proposes a novel method based on an OpenAI Generative Pre-trained Transformer (GPT) combined with a Chain-of-Thought (CoT) reasoning strategy to quantitatively assess the alignment between privacy legal frameworks and ISO/IEC 27701:2025. By leveraging GPT’s logarithmic probabilities (logprobs) and the standard’s subclause definitions as classification categories, the method enables confidence-based evaluation of legal–standard alignment. The proposed method is then applied to analyze the privacy legal frameworks of Paraguay, Chile, Ecuador, México, Colombia, and Perú, examining how effectively they promote the standard’s guidelines. A suitable confidence threshold is then selected by assessing the GDPR and comparing the results with the reference mappings reported in Annex D of the standard. Finally, the method identifies the number of compliant subclauses per clause, the regulatory articles influencing the resulting logprobs, and the underlying privacy gaps that reduce alignment across the analyzed privacy legal frameworks. Overall, our results indicate that while Latin American privacy legal frameworks mandate protective measures by promoting a suitable operation and continuous improvement of a PIMS, they do not explicitly demand adequate risk management and sufficient preventive safeguards for citizens’ Personally Identifiable Information (PII) in dynamic contexts. Full article
(This article belongs to the Section Information and Communication Technologies)
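The confidence-based mapping described above can be illustrated with a small sketch: normalize candidate-label logprobs with a softmax and accept the top subclause only above a threshold. The labels and the 0.6 threshold are invented for illustration; the paper's actual calibration against Annex D is not reproduced here.

```python
import math

def classify_with_confidence(logprobs, threshold=0.6):
    """Softmax the candidate subclause log-probabilities and return the top
    label only if its normalized probability clears the threshold."""
    exps = {label: math.exp(lp) for label, lp in logprobs.items()}
    total = sum(exps.values())
    label, p = max(((k, v / total) for k, v in exps.items()),
                   key=lambda kv: kv[1])
    return (label, p) if p >= threshold else (None, p)
```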
31 pages, 895 KB  
Article
From Smart Maps to Smart Citizens: Evaluating AI-Based Urban Mapping as a Tool for Informal Sustainability Education in Manchester
by Yundi Zhang and Marcellus Forh Mbah
Sustainability 2026, 18(9), 4378; https://doi.org/10.3390/su18094378 - 29 Apr 2026
Abstract
This paper explores the ways in which AI-based urban mapping tools may influence informal sustainability learning, with a particular emphasis on their use in participatory “Green Walk” activities in Manchester. As cities continue to integrate algorithmic systems to respond to environmental concerns, it becomes increasingly relevant to ask how such technologies affect not only governance structures but also public modes of understanding and engagement. Grounded in theories of place-based learning, embodied cognition, and constructionism, the study captured participants’ interaction with AI-generated maps that visualised carbon data, land use, and ecological sites. Drawing on semi-structured interviews and field observations, the findings suggest that combining algorithmic representations with real-world walking experiences helped participants develop a stronger awareness of local environmental issues. The study points out both the pedagogical potential and limitations of AI-based tools in sustainability education. While they can support conceptual learning and foster new perspectives, they are not neutral or universally accessible. The effectiveness of these tools depends on how they are embedded within inclusive, dialogic, and situated pedagogical practices. Overall, this paper contributes to a more nuanced understanding of digital tools in place-based learning and informal education. Full article
(This article belongs to the Section Sustainable Urban and Rural Development)
27 pages, 979 KB  
Article
Time Series Evidence on Artificial Intelligence and Green Transformation: The Impact of AI Policy on Corporate Carbon Performance
by Wei Wen, Kangan Jiang and Xiaojing Shao
Mathematics 2026, 14(9), 1489; https://doi.org/10.3390/math14091489 - 28 Apr 2026
Abstract
Artificial intelligence development offers new solutions for enhancing corporate carbon performance and is crucial for promoting sustainable business practices. This study investigates the dynamic impact of artificial intelligence (AI) policy on corporate carbon performance using time series panel data of Chinese A-share listed companies from 2010 to 2024. Leveraging the staggered establishment of the National New Generation Artificial Intelligence Innovation Development Pilot Zones as a quasi-natural experiment, we develop a multi-period difference-in-differences framework with time-varying treatment. Our time series-based identification strategy addresses serial correlation and time-varying confounding factors through robust clustering and event study specifications. The findings reveal that AI policy significantly improves corporate carbon performance, a conclusion that remains robust after rigorous endogeneity tests, placebo checks, and counterfactual analyses. Using dynamic panel models, this study traces the temporal evolution of policy effects and demonstrates that AI exerts indirect effects through three time-lagged pathways: micro-level technological diffusion, future industry development, and the progressive accumulation of digital infrastructure and computing resources. Heterogeneity analysis reveals differentiated impacts across micro- and macro-levels, providing granular insights for forecasting heterogeneous treatment effects. By integrating panel time series econometrics with causal inference, this study contributes to the literature on corporate carbon performance while expanding analytical frameworks for understanding AI’s enabling effects. The findings offer policy insights and empirical benchmarks for forecasting green transition trajectories, with direct implications for green finance and sustainable economic development. Full article
(This article belongs to the Special Issue Time Series Forecasting for Green Finance and Sustainable Economics)
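The study's multi-period difference-in-differences design generalizes the textbook 2x2 comparison, which a short sketch makes concrete (the numbers are illustrative; the paper's estimator additionally handles staggered treatment timing and clustered standard errors):

```python
from statistics import mean

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Textbook 2x2 difference-in-differences: the treated group's mean change
    minus the control group's mean change over the same period."""
    return ((mean(treated_post) - mean(treated_pre))
            - (mean(control_post) - mean(control_pre)))
```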
26 pages, 663 KB  
Review
Globalization in the Healthcare Industry: Drivers, Risks, and Adaptation
by Anasztázia Kész and Ildikó Balatoni
Healthcare 2026, 14(9), 1177; https://doi.org/10.3390/healthcare14091177 - 28 Apr 2026
Abstract
Globalization refers to the increasing density of economic, social, and technological interconnections on a global scale. In the healthcare industry, it simultaneously accelerates innovation and increases systemic vulnerabilities. This study aims to review and conceptually synthesise the main channels of impact: (1) pharmaceuticals, clinical development, and regulation; (2) supply chains and resilience; (3) service mobility (health tourism); (4) human resources and competencies; (5) digitalization, artificial intelligence (AI), and data governance; (6) ethics, law, and public policy; and (7) sustainability and climate change. The COVID-19 pandemic highlighted the risks associated with global interdependencies, particularly in supply chains, while also demonstrating the innovation-accelerating effects of knowledge sharing and international cooperation. Particular attention is given to artificial intelligence and digital health, which open up new potential for efficiency and quality improvement from research and development through diagnostics to healthcare organization, while simultaneously intensifying concerns related to data protection, cyber security, and liability. Telemedicine, platform-based systems, and real-world data may contribute to addressing the care needs of ageing societies, but only when supported by appropriate competencies and sound data governance. As global data flows intensify, the importance of data protection, bias mitigation, transparency, and accountability correspondingly increases. Through the cultural channels of globalization, health-conscious lifestyles and complementary approaches are also spreading, which we address in a brief, separate subsection. The guidelines of international organizations foster standardization; however, due to differences in local capacities and institutional environments, the effects are not homogeneous. In conclusion, the study emphasises the dual nature of globalization; it expands access and accelerates innovation, while at the same time creating new vulnerabilities—in supply chains, labour mobility, and data security—and, together with climate-related risks, generating complex adaptive pressures for the healthcare industry. Full article
(This article belongs to the Section Healthcare and Sustainability)
25 pages, 1105 KB  
Article
Few-Shot Portfolio Optimization: Can Large Language Models Outperform Quantitative Portfolio Optimization? A Comparative Study of LLMs and Optimized Portfolio Allocators
by Lamukanyani Alson Mantshimuli and John Weirstrass Muteba Mwamba
J. Risk Financial Manag. 2026, 19(5), 320; https://doi.org/10.3390/jrfm19050320 - 28 Apr 2026
Abstract
Recent advances in large language models (LLMs) have raised questions about their potential role in portfolio allocation beyond traditional sentiment analyses. This study investigated whether LLMs, when prompted directly, can autonomously generate portfolio weights that compete with classical optimization and AI-enhanced strategies. We evaluated seven medium-sized open-source LLMs—Gemma-7B, Mistral-7B, Jansen Adapt-Finance-Llama2-7B, DeepSeek-R1-8B, QuantFactory Llama-3-8B-Instruct-Finance, Qwen-7B, and Llama2-7B—using systematic prompt engineering and temperature tuning. Portfolios were constructed from financial news headlines for S&P 500 equities and benchmarked against mean–variance optimization (MVO), the Black–Litterman model, AI-driven optimizers, and naive diversification strategies. The results show that, while LLM-generated portfolios outperformed naive diversification (Sharpe ratio up to 0.741), they lagged behind AI-optimized benchmarks (Sharpe ratio up to 1.361). A transaction cost analysis revealed that low-turnover LLM strategies retain their competitiveness post-costs, surpassing cap-weighted benchmarks. Statistical tests confirmed significant performance differences (p < 0.01). These findings highlight the ability of LLMs to extract signals from unstructured text, but also their limitations without explicit optimization. Future research should explore hybrid frameworks that combine LLM reasoning with quantitative optimization for cost-sensitive environments. Full article
(This article belongs to the Section Financial Technology and Innovation)
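The Sharpe ratios and transaction-cost adjustment reported above follow standard definitions, sketched below in plain Python (the 10 bps cost level and the return series are illustrative assumptions, not the paper's data):

```python
from statistics import mean, stdev

def sharpe_ratio(returns, rf=0.0):
    """Sample Sharpe ratio: mean excess return over its standard deviation."""
    excess = [r - rf for r in returns]
    return mean(excess) / stdev(excess)

def net_returns(gross, turnover, cost_bps=10.0):
    """Deduct linear transaction costs proportional to per-period turnover,
    so low-turnover strategies lose less of their gross performance."""
    c = cost_bps / 1e4
    return [g - t * c for g, t in zip(gross, turnover)]
```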
28 pages, 10170 KB  
Article
An RL-Guided Hybrid Forecasting Framework for Aircraft Engine RUL and Performance Emission Prediction
by Ukbe Üsame Uçar and Hakan Aygün
Appl. Sci. 2026, 16(9), 4271; https://doi.org/10.3390/app16094271 - 27 Apr 2026
Abstract
In this paper, a new hybrid prediction method is proposed for estimating remaining useful life, emissions, and performance parameters using experimental data obtained from a micro-turbojet engine. Experiments were conducted under various rotational speed conditions, yielding a total of 342 measurement points. Turbine speed, exhaust gas temperature, fuel flow rate, and thrust were considered as input variables in the study. Thermal efficiency, total power, CO2, and NO2 were considered as output variables. The experimental findings showed that thermal efficiency varied between 0.49% and 7.1%, total power between 0.266 and 13.94 kW, and CO2 emissions by volume between 0.317% and 2.183%. The proposed RL-MH-LR-CBR approach combines the advantages of multiple methods. In this method, the interpretable formulation of linear regression serves as the foundation. Additionally, in the adaptive meta-heuristic optimization process, a hyper-heuristic selection mechanism based on the UCB1-based multi-arm bandit approach is used to select the optimal algorithm from among the meta-heuristic methods. Finally, the CatBoost-based residual error learning component aims to capture non-linear patterns that cannot be explained by the linear model. The method was compared with 14 different methods on both the NASA C-MAPSS FD001 dataset and real engine data. The results demonstrate that the proposed framework exhibits more balanced, stable, and higher generalization capabilities compared to classical regression models and powerful AI methods, particularly in non-linear, noisy, and heterogeneous outputs. In the real engine dataset, the proposed method produced R2 values of 0.968 for CO2 and 0.936 for NO2, while the predictive performance was even stronger for thermal efficiency and total power, with corresponding R2 values of 0.998 and 0.995, respectively. Additionally, the method demonstrated a clear advantage in hard-to-model outputs by reducing the error level to 0.061 in NO2 predictions. These findings demonstrate that the proposed approach is not limited to micro-turbojet engines. The developed method provides a robust decision support framework that is applicable, scalable, and generalizable to predictive maintenance, emissions monitoring, energy systems, aviation analytics, and other highly dynamic engineering problems. Full article
(This article belongs to the Section Aerospace Science and Engineering)
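The UCB1-based hyper-heuristic selection described above scores each candidate meta-heuristic by its mean reward plus an exploration bonus that shrinks as an arm is tried more often. A minimal sketch (the reward bookkeeping shown here is an assumption; the paper's exact reward definition is not given in the abstract):

```python
import math

def ucb1_select(total_reward, counts, t):
    """UCB1 arm selection: exploit the best empirical mean while granting
    an exploration bonus sqrt(2 ln t / n_i) to under-tried arms."""
    def score(i):
        if counts[i] == 0:
            return math.inf          # force each arm to be tried once
        return (total_reward[i] / counts[i]
                + math.sqrt(2 * math.log(t) / counts[i]))
    return max(range(len(counts)), key=score)
```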
4 pages, 161 KB  
Editorial
Sci and AI
by Claus Jacob
Sci 2026, 8(5), 95; https://doi.org/10.3390/sci8050095 - 27 Apr 2026
Abstract
Artificial Intelligence (AI) is rapidly changing the format, style and content of scientific publishing. Traditional reviews are likely to give way to more personalized, AI-generated literature surveys on the one hand and more innovative, perhaps even controversial hypothesis, opinion or essay-style contributions on the other. Original publications based on experimental data are still less affected even if AI teams up with robots. Ultimately, science and scientific publishing are social activities and although the AI-driven tools and technologies at hand may accelerate and also refine scientific publishing, scientists, as always, are well equipped to adapt and to turn these challenges into new opportunities, for instance in handling, processing and illustrating experimental data. Full article
28 pages, 1639 KB  
Article
A Generative AI-Based Framework for Proactive Quality Assurance and Auditing
by Galina Ilieva, Tania Yankova, Vera Hadzhieva and Yuliy Iliev
Appl. Sci. 2026, 16(9), 4237; https://doi.org/10.3390/app16094237 - 26 Apr 2026
Abstract
Generative artificial intelligence (AI) is increasingly used to support decision-making in manufacturing quality assurance (QA), but its adoption raises concerns regarding governance, traceability, and auditability. This paper proposes a proactive framework that integrates generative AI into quality management and auditing while preserving standards alignment and human oversight. The framework structures quality activities across supplier, in-process, and post-market domains and across three hierarchical levels—product, process, and operation—to link quality outcomes with documentary evidence requirements. A proof-of-concept (PoC) study in electronics manufacturing focused on New Product Introduction (NPI) planning and compared two parallel workflows: an expert QA team and a generative AI-assisted chatbot workflow. Within a fixed time window, both workflows produced an aligned Process Failure Mode and Effects Analysis (PFMEA), Control Plan, supplier Production Part Approval Process (PPAP) request package, and internal audit evidence pack. Three independent experts evaluated the integrated deliverable package using five indices covering documentation quality and audit readiness, detection and containment logic, process capability and stability, governance and provenance safeguards, and execution (time) efficiency. Compared with the expert package, the generative AI–assisted workflow produced more traceable, governance-rich documentation (ownership, versioning, clause-to-evidence links) and reduced manual audit-evidence consolidation, supporting quality planning and change-control readiness. Full article
31 pages, 1741 KB  
Article
AI-Driven Approaches to System Requirements and Test Case Generation: A New Paradigm in Software Engineering
by Ziad Salem, Luay Tahat, Yasmeen Humaidan and Noor Tahat
Technologies 2026, 14(5), 260; https://doi.org/10.3390/technologies14050260 - 25 Apr 2026
Abstract
Artificial intelligence (AI) is a new paradigm in software engineering that automates key phases of the development cycle. The methods of creating test cases and designing requirements are still mostly manual and prone to error. Unclear requirements can result in expensive rework and undiscovered defects in the development process. Scalability and dependability are crucial concerns in complex systems. These shortcomings highlight the need for improved methods to enhance accuracy and consistency throughout these critical phases. To generate well-organized system requirements, this article outlines a clear strategy that leverages Extended Finite State Machine models as formal inputs for large language models (LLMs). Five system models are used to assess the suggested framework. The comparison analysis evaluates the accuracy, completeness, test coverage, and runtime efficiency of the artifacts. Along with a comparison with a human-made reference standard, the study evaluates the performance of LLMs such as ChatGPT-5, Claude Sonnet 4.5, and DeepSeek V3.2. The findings demonstrate that AI models can achieve human-comparable accuracy by exceeding 90% with EFSM-based prompting. Claude Sonnet generated the most reliable findings, ChatGPT demonstrated exceptional flexibility, and DeepSeek demonstrated exceptional runtime economy. These findings show that human–AI workflows provide a new paradigm in scalable, traceable, and reproducible system engineering. Full article
(This article belongs to the Section Information and Communication Technologies)
37 pages, 1099 KB  
Article
The Impact of National New-Generation Artificial Intelligence Innovation and Development Pilot Zone Construction on ESG Performance of Manufacturing Enterprises
by Yi Cao, Zhou Lan, Jie Dong and Ling Cao
Sustainability 2026, 18(9), 4190; https://doi.org/10.3390/su18094190 - 23 Apr 2026
Abstract
Enhancing the ESG performance of manufacturing enterprises represents a critical pathway for promoting high-quality economic development and achieving sustainable development goals. Leveraging the establishment of National New-Generation Artificial Intelligence Innovation and Development Pilot Zones as a quasi-natural experiment, this study examined A-share listed manufacturing enterprises on the Shanghai and Shenzhen Stock Exchanges from 2010 to 2023, employing a multi-period difference-in-differences model to systematically evaluate the policy’s impact on enterprise ESG performance and its underlying mechanisms. The empirical results demonstrate that the Artificial Intelligence Innovation and Development Pilot Zone policy exerts a significant positive effect on manufacturing enterprises’ ESG performance, with the robustness of this conclusion validated through parallel trends tests, placebo tests, and multiple robustness checks. A mechanism analysis revealed that the policy primarily enhances manufacturing enterprises’ ESG performance through two transmission channels: intensifying R&D expenditure intensity and strengthening environmental compliance pressures. Furthermore, enterprise resource allocation and operational efficiencies significantly moderate the policy effect, amplifying the enabling effect of the policy on ESG performance. A heterogeneity analysis indicates that, from the perspectives of enterprise ownership and responsibility orientation, the policy demonstrates more pronounced enabling effects on non-state-owned enterprises and non-high-pollution enterprises; from the perspectives of technological endowment and factor structure, the policy effects are more evident among high-tech enterprises, non-capital-intensive enterprises, and non-labor-intensive enterprises. This study elucidates the multi-dimensional transmission mechanisms through which the Artificial Intelligence Innovation and Development Pilot Zone policy empowers ESG development in manufacturing enterprises, providing theoretical foundations and practical guidance for refining artificial intelligence policy frameworks and promoting the sustainable development of manufacturing enterprises. The research findings also contribute empirical evidence from emerging economies to comparative research on global AI governance. Full article