Search Results (56)

Search Parameters:
Keywords = Peters’ model

19 pages, 1922 KB  
Article
Validated Transfer Learning Peters–Belson Methods for Survival Analysis: Ensemble Machine Learning Approaches with Overfitting Controls for Health Disparity Decomposition
by Menglu Liang and Yan Li
Stats 2025, 8(4), 114; https://doi.org/10.3390/stats8040114 - 10 Dec 2025
Viewed by 223
Abstract
Background: Health disparities research increasingly relies on complex survey data to understand survival differences between population subgroups. While Peters–Belson decomposition provides a principled framework for distinguishing disparities explained by measured covariates from unexplained residual differences, traditional approaches face challenges with complex data patterns and model validation for counterfactual estimation. Objective: To develop validated Peters–Belson decomposition methods for survival analysis that integrate ensemble machine learning with transfer learning while ensuring logical validity of counterfactual estimates through comprehensive model validation. Methods: We extend the traditional Peters–Belson framework through ensemble machine learning that combines Cox proportional hazards models, cross-validated random survival forests, and regularized gradient boosting approaches. Our framework incorporates a transfer learning component via principal component analysis (PCA) to discover shared latent factors between majority and minority groups. We note that this “transfer learning” differs from the standard machine learning definition (pre-trained models or domain adaptation); here, we use the term in its statistical sense to describe the transfer of covariate structure information from the pooled population to identify group-level latent factors. We develop a comprehensive validation framework that ensures Peters–Belson logical bounds compliance, preventing mathematical violations in counterfactual estimates. The approach is evaluated through simulation studies across five realistic health disparity scenarios using stratified complex survey designs. Results: Simulation studies demonstrate that validated ensemble methods achieve superior performance compared to individual models (proportion explained: 0.352 vs. 0.310 for individual Cox, 0.325 for individual random forests), with the validation framework reducing logical violations from 34.7% to 2.1% of cases. Transfer learning provides an additional 16.1% average improvement in explanation of unexplained disparity when significant unmeasured confounding exists, with a 90.1% overall validation success rate. The validation framework ensures explanation proportions remain within realistic bounds while maintaining computational efficiency, with 31% overhead for validation procedures. Conclusions: Validated ensemble machine learning provides substantial advantages for Peters–Belson decomposition when combined with proper model validation. Transfer learning offers conditional benefits for capturing unmeasured group-level factors while preventing mathematical violations common in standard approaches. The framework demonstrates that realistic health disparity patterns show 25–35% of differences explained by measured factors, providing actionable targets for reducing health inequities. Full article
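The core Peters–Belson step (score the minority group's covariates with a model fitted on the majority group, then split the mean gap into explained and unexplained parts) can be sketched with a plain linear outcome. The paper itself works with survival models, ensembles, and survey weights, so this is only a minimal illustration of the decomposition's arithmetic, with made-up inputs.

```python
import numpy as np

def peters_belson(X_maj, y_maj, X_min, y_min):
    """Split the mean-outcome gap between two groups into a part
    explained by covariate differences and an unexplained residual."""
    # Fit a linear model on the majority group (intercept + covariates).
    A_maj = np.column_stack([np.ones(len(X_maj)), X_maj])
    beta, *_ = np.linalg.lstsq(A_maj, y_maj, rcond=None)
    # Counterfactual: minority covariates scored with majority coefficients.
    A_min = np.column_stack([np.ones(len(X_min)), X_min])
    y_cf = A_min @ beta
    total = y_maj.mean() - y_min.mean()
    explained = y_maj.mean() - y_cf.mean()    # attributable to covariates
    unexplained = y_cf.mean() - y_min.mean()  # residual disparity
    return total, explained, unexplained
```

The survival version replaces the linear fit with a Cox-type model and the group means with survival summaries; the paper's validation framework additionally constrains the explained proportion to its logical bounds.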
16 pages, 313 KB  
Article
The Virgin Mary’s Image Usage in Albigensian Crusade Primary Sources
by Eray Özer and Meryem Gürbüz
Histories 2025, 5(4), 49; https://doi.org/10.3390/histories5040049 - 10 Oct 2025
Viewed by 1003
Abstract
The image of the Virgin Mary appears with increasing frequency in written sources from the 12th and 13th centuries compared to earlier periods. Three major works produced by four eyewitness authors of the Albigensian Crusade (Historia Albigensis, Chronica, and Canso de la Crozada) reflect on and respond to this popular theme. These sources focus on the Albigensian Crusade against heretical groups, particularly the Cathars, and employ the Virgin Mary motif for various purposes. The Virgin Mary is presented as a Catholic model for women drawn to Catharism (a movement in which female spiritual leadership was also present), as a divine protector of the just side in war, and as a means of legitimizing the authors’ claims. While Mary appears sporadically in Peter of Vaux-de-Cernay’s Historia Albigensis, she is extensively invoked in the Canso by both William and his anonymous successor. In contrast, the image of the Virgin Mary is scarcely mentioned in Chronica, likely due to the narrative’s intended audience and objectives. This article aims to provide a comparative analysis of how the image of the Virgin Mary is utilized in these primary sources from the Albigensian Crusade and to offer a new perspective on the relationship between historical events and authors’ intentions, laying the groundwork for further research. Full article
(This article belongs to the Section Cultural History)
21 pages, 596 KB  
Review
Literature Review on Public Transport and Land Use: Based on CiteSpace Statistical Analysis
by Yinjie He, Biao Liu, Chengyou Xu and Dafang Wu
Land 2025, 14(5), 1096; https://doi.org/10.3390/land14051096 - 18 May 2025
Cited by 2 | Viewed by 4562
Abstract
With the growing demand for mobility fueled by global population expansion and rapid urbanization, the intricate interplay between public transport and land use, along with their economic, environmental, and social externalities, has emerged as a critical concern for policymakers and the public alike. This study assembles publicly available academic literature, including papers, reports, books, and news articles, to construct a comprehensive database. Using CiteSpace 5.8.R3 software, this study conducts a visualized analysis of 10,470 scholarly works on public transport and land use published since 1932, identifying and synthesizing the key researchers, research stages, theories, models, and hotspots in the field. Findings reveal that since Mitchell and Rapkin first introduced the transport–land use relationship in 1954, research in this field has steadily gained traction, particularly after the 1973 oil embargo crisis. The Journal of Transport and Land Use and institutions such as the University of Minnesota’s Transportation Research Center have played pivotal roles, particularly with the establishment of the World Society for Transport and Land Use Research (WSTLUR). In recent years, China’s high-speed rail expansion has further revitalized interest in this field. Prominent scholars in this domain include Robert Cervero, Reid Ewing, Michael Duncan, and Peter Calthorpe. Major theoretical frameworks encompass utility theory, urban economic theory, and human–land system theory. Key modeling approaches include the spatial interaction model, the stochastic utility model, and urban economic models. Current research hotspots center on safety and public health, equity and valuation, environmental sustainability and energy efficiency, as well as transit-oriented development (TOD) and accessibility. This systematic literature review offers valuable insights to inform land use planning, enhance spatial structure, guide transportation project decision making, and optimize transport infrastructure and service provision. Full article
(This article belongs to the Special Issue Territorial Space and Transportation Coordinated Development)
26 pages, 5141 KB  
Article
Multi-Hazard Assessment in Post-Mining Landscape and Potential for Geotourism Development (On the Example of the Central Spiš Region in Slovakia)
by Vladimír Čech, Radoslav Klamár, Juliana Krokusová and Jana Vašková
Land 2025, 14(5), 1000; https://doi.org/10.3390/land14051000 - 5 May 2025
Cited by 2 | Viewed by 1349
Abstract
The presented article focuses on a spatial analysis and identification of high- and medium-risk areas and their impact on the potential for geotourism development in the Central Spiš region in Slovakia. To achieve this goal, we used a combination of two methods: the multi-hazard assessment method and the quantitative Geosite Assessment Model. The research results show that the geosites with the highest potential for the development of geotourism are also located in the low-threat zone. These are mainly GS9 adit Pavol (overall point value 16.25), GS1 adit Rochus (15.25), and GS8 adit Peter (14.00). On the other hand, geosites with a low point value for the development of geotourism, such as GS10 sinkhole Baniská (7.75) and GS5 tailings impoundment Markušovce (10.50), are located in the high-threat zone. The obtained results show that even in the significantly anthropogenically burdened and economically underdeveloped post-mining landscape of the studied area, it is possible to identify positive impulses for further development. This concerns in particular the use of evaluated geosites in geotourism, with the aim of simultaneously supporting the protection of local cultural heritage, the natural environment, and the socioeconomic development of the local community. Full article
8 pages, 209 KB  
Article
Does Pre-Existing Chronic Obstructive Pulmonary Disease Increase the Risk of Checkpoint Inhibitor Pneumonitis in Advanced/Metastatic Non-Small Cell Lung Cancer Treated with Immune Checkpoint Inhibitors?
by David Spillane, Carmela Pepe, Goulnar Kasymjanova, Diane Cruiziat, Sara Cohen, Jeremy Naimer and Jason Agulnik
Curr. Oncol. 2025, 32(5), 259; https://doi.org/10.3390/curroncol32050259 - 29 Apr 2025
Cited by 1 | Viewed by 1282
Abstract
Objective: Immune checkpoint inhibitors (ICIs) are front-line treatment options for NSCLC. ICI therapy is associated with a risk of immune-related adverse events (irAEs). Checkpoint inhibitor pneumonitis (CIP) is a potentially life-threatening irAE. Previous studies have demonstrated that asthma and interstitial lung disease are associated with an increased risk of CIP. We sought to determine whether chronic obstructive pulmonary disease (COPD) is associated with CIP. Methods: This retrospective study examines a cohort of NSCLC patients treated with ICIs, either with or without chemotherapy, at the Anna and Peter Brojde Lung Cancer Centre, Jewish General Hospital in Montreal, Canada between 2014 and 2023. We explored associations between risk factors and CIP using the Mann–Whitney U test or Fisher’s exact test. Analysis of prognostic factors was performed using a logistic regression model. All statistical analyses were carried out using SPSS software, version 24.0 (SPSS, Chicago, IL, USA). p-values of 0.05 or less were considered significant. Results: Of the 327 selected patients on ICIs, 23 experienced an acute respiratory deterioration that was attributed to CIP. Overall, 87/327 (26.6%) patients had a pre-existing diagnosis of COPD, and 11/87 (12.6%) COPD patients experienced CIP compared to 13/240 (5.5%) non-COPD patients (p = 0.061). There was no statistical or clinically meaningful correlation between COPD severity and CIP. The only variable significantly associated with CIP was a poor ECOG performance status. Among ECOG 1 patients, 18/91 (19.8%) experienced CIP compared to 5/226 (2.2%) of those with an ECOG of 0. A multivariate assessment involving all 327 patients revealed no significant factors affecting CIP development. Conclusions: Our single-institution study revealed that although there was a trend, the presence of COPD was not statistically associated with an increased risk of CIP. Additionally, neither FEV1 nor DLCO had a meaningful impact on the development of CIP in COPD patients. Given these findings, we emphasize the need for larger prospective studies to confirm these observations before drawing definitive clinical recommendations. Full article
(This article belongs to the Section Thoracic Oncology)
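The COPD-versus-CIP comparison above rests on a 2×2 table, which Fisher's exact test evaluates by summing hypergeometric probabilities. Below is a minimal stdlib sketch applied to the reported counts (11/87 vs. 13/240); the abstract does not state whether its p = 0.061 came from this exact two-sided procedure or from SPSS's variant, so no particular p-value is assumed.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum all hypergeometric probabilities no larger than the observed one."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    denom = comb(n, c1)
    def pmf(k):
        return comb(r1, k) * comb(r2, c1 - k) / denom
    p_obs = pmf(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    probs = [pmf(k) for k in range(lo, hi + 1)]
    return sum(p for p in probs if p <= p_obs * (1 + 1e-9))

# Counts from the abstract: CIP in 11/87 COPD vs. 13/240 non-COPD patients.
p = fisher_exact_two_sided(11, 87 - 11, 13, 240 - 13)
```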
12 pages, 974 KB  
Article
Association of Maternal Exposure to Fine Particulate Matter During Pregnancy with Anterior Segment Dysgenesis Risk: A Matched Case-Control Study
by Sooyeon Choe, Kyung-Shin Lee, Ahnul Ha, Soontae Kim, Jin Wook Jeoung, Ki Ho Park, Yun-Chul Hong and Young Kook Kim
J. Clin. Med. 2025, 14(9), 3003; https://doi.org/10.3390/jcm14093003 - 26 Apr 2025
Viewed by 817
Abstract
Background/Objectives: To assess the association of residential-level maternal particulate matter of 2.5 μm diameter or less (PM2.5) exposure during pregnancy with anterior segment dysgenesis (ASD) risk. Methods: This study used data from children diagnosed with ASD (i.e., aniridia, iris hypoplasia, Peters anomaly, Axenfeld–Rieger syndrome, or primary congenital glaucoma) by an experienced pediatric ophthalmologist at a National Referral Center for Rare Diseases between 2004 and 2021 and their biological mothers. Individual PM2.5 exposure concentration was assessed by reference to residential addresses and district-specific PM2.5 concentrations predicted by the universal Kriging prediction model. Results: The study included 2328 children (582 ASD cases and 1746 controls [1:3 matched for birth year, sex, and birth-place]). The mean (SD) annual PM2.5 exposure was 29.2 (16.9) μg/m3. An IQR increase in PM2.5 during the preconception period (11.6 μg/m3; RR, 1.18; 95% CI, 1.03–1.34), the 1st trimester (11.1 μg/m3; RR, 1.15; 95% CI, 1.03–1.27), and the 2nd trimester (11.2 μg/m3; RR 1.14; 95% CI, 1.01–1.29) significantly increased ASD risk. Meanwhile, the association between IQR increase in PM2.5 during the 3rd trimester and ASD risk showed borderline significance (11.0 μg/m3; RR, 1.10; 95% CI, 0.99–1.21). An IQR increase in PM2.5 (6.9 μg/m3) from the preconception period to the 3rd trimester was associated with a significantly increased risk of ASD (RR, 1.13; 95% CI, 1.08–1.20). Conclusions: The findings of this study suggest that PM2.5 exposure during the preconception period and pregnancy is associated with increased risk of ASD, supporting a need for further improvements in air quality to prevent congenital ocular anomalies. Full article
(This article belongs to the Section Ophthalmology)
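The "IQR increase" relative risks quoted above are conventionally obtained by rescaling a per-unit log-RR by the interquartile range, RR_IQR = exp(log(RR_unit) · IQR). A small sketch of that conversion; the per-unit value used in the test is back-derived from the reported 1st-trimester figures (RR 1.15 per 11.1 µg/m³) purely for illustration.

```python
from math import exp, log

def rr_per_iqr(rr_per_unit, iqr):
    """Rescale a per-unit relative risk to a per-IQR relative risk."""
    return exp(log(rr_per_unit) * iqr)

def rr_per_unit(rr_iqr, iqr):
    """Inverse: recover the per-unit relative risk from a per-IQR one."""
    return exp(log(rr_iqr) / iqr)
```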
19 pages, 1287 KB  
Article
Enhancing Personalized Explainable Recommendations with Transformer Architecture and Feature Handling
by Ming-Yen Lin, I-Chen Hsieh and Sue-Chen Hsush
Electronics 2025, 14(5), 998; https://doi.org/10.3390/electronics14050998 - 28 Feb 2025
Viewed by 2112
Abstract
The advancement of explainable recommendations aims to improve the quality of textual explanations for recommendations. Traditional methods primarily used Recurrent Neural Networks (RNNs) or their variants to generate personalized explanations. However, recent research has focused on leveraging Transformer architectures to enhance explanations by extracting user reviews and incorporating features from interacted items. Nevertheless, previous studies have failed to fully exploit the relationship between reviews and user ratings to generate more personalized explanations. In this paper, we propose a novel model named EPER (Enhanced Personalization for Explainable Recommendation), which considers reviews, user ratings, feature words, and item titles to generate high-quality personalized explanations. The EPER model employs a masking mechanism to prevent interference between rating prediction and explanation generation. Moreover, we propose an innovative feature-handling method to manage missing interaction features in existing models. Experimental results on public datasets demonstrate that EPER generally outperforms other well-known methods, including NETE, PETER+, and MMCT. Compared with MMCT, EPER improves explanation quality (ROUGE metric) by 3.27%, personalization (FMR metric) by 6.82%, and rating prediction (MSE metric) by 1.2% for the Amazon Clothing dataset. Overall, the EPER model provides personalized recommendation explanations that match or exceed the best existing methods, demonstrating its potential for practical applications. Full article
17 pages, 2028 KB  
Systematic Review
The Role of Simpson Grading System in Spinal Meningioma Surgery: Institutional Case Series, Systematic Review and Meta-Analysis
by Giuseppe Corazzelli, Sergio Corvino, Valentina Cioffi, Ciro Mastantuoni, Maria Rosaria Scala, Salvatore Di Colandrea, Luigi Sigona, Antonio Bocchetti and Raffaele de Falco
Cancers 2025, 17(1), 34; https://doi.org/10.3390/cancers17010034 - 26 Dec 2024
Cited by 3 | Viewed by 2503
Abstract
Background: Although its validity has recently been questioned since its introduction, the Simpson grade has remained one of the most relevant factors in estimating the recurrence risk of intracranial meningiomas. This study aims to assess its role in spinal meningiomas through a retrospective analysis of a mono-institutional surgical series and literature meta-analysis. Methods: We conducted a systematic review and meta-analysis of the literature from 1980 to 2023, complemented by a mono-institutional series of 74 patients treated at “Santa Maria delle Grazie” hospital. Demographic, clinical, neuroradiological, pathological, surgical, and outcome data of case series were analyzed. For the meta-analysis, studies were selected based on predefined inclusion criteria, and a fixed-effects model was used to synthesize data due to assumed homogeneity among included studies. Statistical analyses included odds ratios (OR) for recurrence risk and assessment of publication bias using Peter’s test. Results: Mono-institutional sample included 74 patients, most of whom were women (85%) with a median age of 61.9 years. The thoracic spine was the most common tumor location (81%). Recurrences occurred in patients with Simpson grade II and III resections. The meta-analysis involved 2142 patients from 25 studies and revealed a significantly higher recurrence rate for Simpson grades III–V compared to grades I–II (OR 0.10; CI95 0.06–0.16). Additionally, Simpson grade II had a higher recurrence risk than grade I (OR 0.42; CI95 0.20–0.90). Conclusions: The Simpson grading remains a valid predictor of recurrence also for spinal meningiomas. Our findings revealed a significant increase in recurrence rate with higher Simpson grades. These results support the need to strive for Simpson grade I resection when feasible. Full article
(This article belongs to the Special Issue Meningioma Recurrences: Risk Factors and Management)
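A fixed-effects meta-analysis of recurrence, as described above, pools per-study log odds ratios with inverse-variance weights. This is the generic pooling step only, with invented example inputs; the paper's exact software pipeline and its Peter's-test bias assessment are not reproduced.

```python
from math import exp, log

def pooled_or_fixed(ors, ses):
    """Fixed-effects (inverse-variance) pooled odds ratio.
    `ors` are per-study odds ratios; `ses` are the standard errors
    of the corresponding log-ORs."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled_log = sum(w * log(o) for w, o in zip(weights, ors)) / sum(weights)
    return exp(pooled_log)
```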
59 pages, 10748 KB  
Article
Manufacture of a 4-Degree-of-Freedom Robot to Support an IRB 120 Robot
by Ricardo Balcazar, José de Jesús Rubio, Mario Alberto Hernandez, Jaime Pacheco, Adrian-Josue Guel-Cortez, Genaro Ochoa, Enrique Garcia, Alejandro Zacarías and Gabriela Saavedra González
Actuators 2024, 13(12), 483; https://doi.org/10.3390/act13120483 - 28 Nov 2024
Viewed by 1855
Abstract
In this work, we present the construction and control of a four-degrees-of-freedom (DOF) manipulator aimed at addressing one of the key challenges faced by the Academy-Industry Cooperation Center (CCAI): the need for mechatronic equipment to support and facilitate the development of advanced robotic cells. We begin by designing the robot’s structure and components using SolidWorks software for computer-aided design (CAD) modeling. This ensures that all the links and parts fit together properly without collisions. The robot links are then manufactured using 3D printing. Additionally, we performed kinematic modeling, dynamic analysis, and PI-V control, along with control using a trigonometric function (hyperbolic tangent). To evaluate the robot’s movement, we simulate these processes using Matlab R2019a/Simulink software, focusing on key parameters such as position, velocity, and acceleration, which inform the design of PI-V control for each link. We also present the electrical and electronic designs, followed by system implementation. The kinematics of the robot play a crucial role in the dynamics and controller design. We validate the kinematics using Peter Corke’s libraries based on the Denavit–Hartenberg parameters. The results show that the controller based on the trigonometric function improves the response time, particularly enhancing the performance of axes 2 and 3. Full article
(This article belongs to the Section Actuators for Robotics)
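Validating kinematics against Peter Corke's toolbox, as the abstract describes, amounts to chaining per-link homogeneous transforms built from Denavit–Hartenberg parameters. A minimal standard-DH sketch; the actual DH table of the 4-DOF robot is not given in the abstract, so the parameters in the test are placeholders.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one link from standard
    Denavit-Hartenberg parameters (theta, d, a, alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    """Chain the per-link transforms base-to-tip."""
    T = np.eye(4)
    for row in dh_rows:
        T = T @ dh_transform(*row)
    return T
```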
14 pages, 4100 KB  
Article
The Propagation Characteristics of Turbulent Expanding Flames of Methane/Hydrogen Blending Gas
by Haoran Zhao, Chunmiao Yuan, Gang Li and Fuchao Tian
Energies 2024, 17(23), 5997; https://doi.org/10.3390/en17235997 - 28 Nov 2024
Cited by 3 | Viewed by 1149
Abstract
In the present study, the effect of hydrogen addition on turbulent flame propagation characteristics is investigated in a fan-stirred combustion chamber. The turbulent burning velocities of methane/hydrogen mixtures are determined over a wide range of hydrogen fractions, and four classical unified scaling models (the Zimont model, the Gülder model, the Schmidt model, and the Peters model) are evaluated against the experimental data. The acceleration onset, cellular structure, and acceleration exponent of turbulent expanding flames are determined, and an empirical model of turbulent flame acceleration is proposed. The results indicate that turbulent burning velocity increases nonlinearly with hydrogen addition, similar to laminar burning velocity. Turbulent flame acceleration weakens with hydrogen addition, in contrast to laminar flame acceleration. Turbulent flame acceleration is dominated by turbulent stretch, and flame intrinsic instability is negligible. Turbulent stretch is reduced by hydrogen addition because the interaction duration between turbulent vortexes and flamelets is shortened. The related data and conclusions can provide a useful reference for the model optimization and risk assessment of hydrogen-enriched gas explosions. Full article
(This article belongs to the Special Issue Storage, Transportation and Use of Hydrogen-Rich Fuel)
13 pages, 439 KB  
Article
Impact of Screening on Mortality for Patients Diagnosed with Hepatocellular Carcinoma in a Safety-Net Healthcare System: An Opportunity for Addressing Disparities
by Kalyani Narra, Madison Hull, Kari J. Teigen, Vedaamrutha Reddy, Jolonda C. Bullock, Riyaz Basha, Nadia Alawi-Kakomanolis, David E. Gerber and Timothy J. Brown
Cancers 2024, 16(22), 3829; https://doi.org/10.3390/cancers16223829 - 14 Nov 2024
Viewed by 1598
Abstract
Purpose: We describe the impact of screening on outcomes of patients diagnosed with hepatocellular carcinoma (HCC) in an urban safety-net healthcare system compared to a non-screened cohort diagnosed with HCC. Methods: Patients diagnosed with HCC at John Peter Smith Health Network were identified by querying the hospital tumor registry and allocated to the screened cohort if they had undergone any liver imaging within one year prior to HCC diagnosis, while the remainder were allocated to the non-screened cohort. Kaplan–Meier methods and log-rank tests were used to compare 3-year survival curves from an index date of HCC diagnosis. Cox proportional hazard models were used to calculate unadjusted and adjusted hazard ratios (HRs) and 95% confidence intervals (CIs). The Duffy adjustment was used to address lead-time bias. Results: A total of 158 patients were included (n = 53 screened, n = 105 non-screened). The median overall survival (OS) for the screened cohort was 19.0 months (95% CI: 9.9–NA) and that for the non-screened cohort was 5.4 months (95% CI: 3.7–8.5) [HR for death (non-screened vs. screened) = 2.4, 95% CI: 1.6–3.6; log-rank p < 0.0001]. The benefit of screening remained after adjusting for lead-time bias (HR 2.19, 95% CI: 1.4–3.3, p = 0.0002). Conclusions: In an urban safety-net population, screening for HCC was associated with improved outcomes compared to patients diagnosed with HCC outside of a screening protocol. Full article
(This article belongs to the Special Issue Emerging Trends in Global Cancer Epidemiology: 2nd Edition)
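The 3-year survival curves above come from the Kaplan–Meier product-limit estimator, which multiplies one-minus-the-hazard at each observed event time while censored patients simply leave the risk set. A dependency-free sketch on toy data (not the study's patient records):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.  `events` is 1 for an observed
    death and 0 for censoring.  Returns (event_time, S(t)) pairs."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, s, curve = len(times), 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = removed = 0
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            removed += 1
            i += 1
        if deaths:
            s *= 1.0 - deaths / at_risk   # step down at each event time
            curve.append((t, s))
        at_risk -= removed                # events and censorings both leave
    return curve
```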
16 pages, 7390 KB  
Article
Geoacoustic Digital Model for the Sea of Japan Shelf (Peter the Great Bay)
by Aleksandr Samchenko, Grigory Dolgikh, Igor Yaroshchuk, Roman Korotchenko and Alexandra Kosheleva
Geosciences 2024, 14(11), 288; https://doi.org/10.3390/geosciences14110288 - 28 Oct 2024
Cited by 1 | Viewed by 1314
Abstract
In this paper, the authors present and analyze the geoacoustic digital seabed model they developed, which is a digital description of the water column characteristics, seabed topography, and information about sediments and rocks (their composition and elastic properties) for Peter the Great Bay, the Sea of Japan. The model consists of four relief layers, a foundation and three layers of bottom sediments, and also contains the velocities of longitudinal waves in rocks and statistical characteristics of the sound velocity distribution in the water layer for three seasons. Acoustic characteristics of geological structures are based on seismoacoustic studies, sediment lithology, and laboratory measurements of rock samples collected onshore. The velocities of longitudinal and transversal waves and also the density of the sediments were calculated from their empirical dependencies on the granulometric composition of bottom sediment samples over an area of about 800 km2. In a limited area of the shelf (approximately 130 km2), high-frequency acoustic studies were carried out using echo sounders, and the longitudinal wave velocities of the top sedimentary layer were determined. Porosity, density, longitudinal, and transverse wave velocities in bottom sediments were calculated using empirical models with a normal coefficient of reflection from the seabed. A comparison was made of the results of calculating the elastic properties of the seabed using various methods. Full article
(This article belongs to the Section Geophysics)
8 pages, 1077 KB  
Proceeding Paper
Industrial Metabolism MFA Model Applied in a Startup in Canada
by Jorge Hernán Torres-Berrío and Miguel Ángel Ospina Usaquén
Eng. Proc. 2024, 76(1), 11; https://doi.org/10.3390/engproc2024076011 - 16 Oct 2024
Viewed by 1396
Abstract
Industrial metabolism is a recent field of research in engineering and sustainability. Its practical objective is to provide structural solutions to organizations and regulate the productive, ecological, and economic system. Due to being a relatively new topic and without a known pattern, the present research adapts the 4R (resources) urban metabolism MFA model developed at ETH in Switzerland by Professor Peter Baccini and applies it to a Canadian food startup (Missfresh). Within the adjusted model, clean production tools, circularity plans, ecological design, inventory flow analysis, and the four general study variables (materials, infrastructure, impacts, and standards) were used for solutions within the company. This research seeks two academic results: the metabolic map (static–dynamic animation) that describes the behavior of the company during a period of time, and the industrial metabolism model adapted and validated for subsequent applications. In practical results, the impacts of the model in the 3Es of sustainability in the industry and the sector are evaluated: ecology, ergonomics, and economy. This research is conducted within the framework of an internship to obtain a master’s degree in Sustainable Development and Environment at the University of Montreal. Full article

19 pages, 4899 KB  
Article
The Many Shades of the Vegetation–Climate Causality: A Multimodel Causal Appreciation
by Yuhao Shao, Daniel Fiifi Tawia Hagan, Shijie Li, Feihong Zhou, Xiao Zou and Pedro Cabral
Forests 2024, 15(8), 1430; https://doi.org/10.3390/f15081430 - 14 Aug 2024
Cited by 2 | Viewed by 1789
Abstract
The causal relationship between vegetation and temperature is a driving factor for global warming in the climate system. However, causal relationships typically have complex facets, particularly within natural systems, necessitating the ongoing development of robust approaches capable of addressing the challenges inherent in causality analysis. Different causality approaches offer distinct perspectives on the underlying causal structure, even when experiments are meticulously designed with a specific target. Here, we use the complex vegetation–climate interaction to demonstrate some of the many facets of causality analysis by applying three different causality frameworks: (i) the kernel Granger causality (KGC), a nonlinear extension of Granger causality (GC), to capture the nonlinearity in the vegetation–climate causal relationship; (ii) the Peter and Clark momentary conditional independence (PCMCI) method, which combines the Peter and Clark (PC) algorithm with the momentary conditional independence (MCI) test, to distinguish feedback and coupling signs in the vegetation–climate interaction; and (iii) the Liang–Kleeman information flow (L-K IF), a rigorously formulated causality formalism, to reveal the causal influence of vegetation on the evolution of temperature variability. The results aim at a fuller understanding of the causal influence of leaf area index (LAI) on air temperature (T) during 1981–2018, revealing characteristic differences across distinct climatic tipping-point regions, particularly in terms of nonlinearity, feedback signals, and variability sources.
This study demonstrates that a more holistic causal picture of complex problems such as the vegetation–climate interaction benefits from the combined use of multiple models, each shedding light on a different aspect of the causal structure and thus revealing insights that are missed when relying on any single approach. This prompts a move toward multimodel causality analysis, which could reduce biases and limitations in causal interpretations.
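Of the three frameworks, the Liang–Kleeman information flow has a closed-form bivariate estimator (Liang 2014) built entirely from sample covariances of the two series and of the finite-difference tendency of the target series. A minimal sketch of that estimator on synthetic data (the coupled AR system and variable names are illustrative; this is not the authors' code):

```python
import numpy as np

def liang_flow(x1, x2, dt=1.0):
    """Bivariate Liang-Kleeman information flow T_{2->1} (Liang 2014):
    the rate at which x2 transfers information to x1, estimated as
    (C11*C12*C2d1 - C12^2*C1d1) / (C11^2*C22 - C11*C12^2)."""
    d1 = (x1[1:] - x1[:-1]) / dt            # Euler-forward tendency of x1
    x1, x2 = x1[:-1], x2[:-1]
    C = np.cov(x1, x2)                      # 2x2 sample covariance matrix
    c11, c12, c22 = C[0, 0], C[0, 1], C[1, 1]
    c1d1 = np.cov(x1, d1)[0, 1]             # cov(x1, dx1/dt)
    c2d1 = np.cov(x2, d1)[0, 1]             # cov(x2, dx1/dt)
    return (c11 * c12 * c2d1 - c12**2 * c1d1) / (c11**2 * c22 - c11 * c12**2)

# Synthetic coupled system: x2 drives x1, but not the other way around.
n = 5000
rng = np.random.default_rng(0)
x1, x2 = np.zeros(n), np.zeros(n)
for k in range(n - 1):
    x2[k + 1] = 0.8 * x2[k] + rng.standard_normal()
    x1[k + 1] = 0.5 * x1[k] + 0.4 * x2[k] + rng.standard_normal()

T_21 = liang_flow(x1, x2)   # flow from the driver x2 into x1
T_12 = liang_flow(x2, x1)   # reverse direction; theoretically zero here
```

For a one-way coupled system like this, the estimate of T_{2→1} is clearly nonzero while T_{1→2} hovers near zero, which is the asymmetry the L-K IF framework exploits to assign causal direction.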
(This article belongs to the Special Issue Applications of Artificial Intelligence in Forestry)

23 pages, 4536 KB  
Article
Timescales of Ecological Processes, Settling, and Estuarine Transport to Create Estuarine Turbidity Maxima: An Application of the Peter–Parker Model
by Lilian Engel and Mark Stacey
Water 2024, 16(15), 2084; https://doi.org/10.3390/w16152084 - 24 Jul 2024
Viewed by 1680
Abstract
The estuarine exchange flow increases the longitudinal dispersion of passive tracers and traps sinking particles, potentially creating an estuarine turbidity maximum (ETM): a localized maximum of suspended particulate matter concentration in an estuary. The ETM can have many implications: dead zones due to increased turbidity or to hypoxia from organic matter decomposition, naval navigation challenges, and other water quality problems. Using timescales, we investigate how the interaction between exchange flow and particle sinking leads to ETMs by modeling a sinking tracer in an idealized box model of the Total Exchange Flow (TEF) first developed by Parker MacCready. Results indicate that the balance of particle sinking and vertical mixing is critical in determining ETM size and location. We then focus on the role of ecology in ETM formation through the Peter–Parker Model, a new biophysical model that combines the TEF box model with a Nutrient–Phytoplankton–Zooplankton–Detritus (NPZD) model, the likes of which were first developed by Peter J.S. Franks. Detritus sinking rates similarly influence the peak detritus concentration and its location (an ETM), but detritus ETMs occur in a different location than the sinking tracer does because of biological factors, which introduce a time lag of about one day. Lastly, we characterize the flow in the models with a dimensionless parameter that compares timescales, summarizes the dynamics of the sinking tracer in ETM formation, and can be applied across systems.
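The biological half of such a coupled model is the NPZD compartment cycle: nutrient uptake by phytoplankton, grazing by zooplankton, mortality and sloppy feeding routed to detritus, and remineralization closing the loop back to nutrients. A minimal sketch of a generic closed NPZD step (the rate constants are illustrative, not the Peter–Parker parameterization, and the transport terms are omitted):

```python
def npzd_step(N, P, Z, D, dt=0.01):
    """One Euler step of a generic closed NPZD model. Total N+P+Z+D is
    conserved because every loss term reappears as a gain elsewhere."""
    uptake = 1.0 * N / (0.5 + N) * P    # Michaelis-Menten uptake by P
    grazing = 0.6 * P * Z               # zooplankton grazing on P
    p_mort = 0.05 * P                   # phytoplankton mortality -> D
    z_mort = 0.05 * Z                   # zooplankton mortality -> D
    sloppy = 0.3 * grazing              # unassimilated grazing -> D
    remin = 0.1 * D                     # remineralization D -> N
    dN = remin - uptake
    dP = uptake - grazing - p_mort
    dZ = (grazing - sloppy) - z_mort
    dD = sloppy + p_mort + z_mort - remin
    return N + dt * dN, P + dt * dP, Z + dt * dZ, D + dt * dD

# Integrate a closed water parcel for 10 time units.
state = (4.0, 1.0, 0.5, 0.2)   # initial N, P, Z, D
for _ in range(1000):
    state = npzd_step(*state)
```

In a TEF-coupled setting, each box would carry its own (N, P, Z, D) state, with exchange-flow transport between boxes and a sinking term moving detritus downward; the conservation property above is what makes the detritus ETM budget interpretable.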
