Search Results (49)

Search Parameters:
Keywords = zero interval limit

38 pages, 8985 KiB  
Article
Impact of Daylight Saving Time on Energy Consumption in Higher Education Institutions: A Case Study of Portugal and Spain
by Ivo Araújo, João Garcia and António Curado
Energies 2025, 18(12), 3157; https://doi.org/10.3390/en18123157 - 16 Jun 2025
Viewed by 392
Abstract
Daylight Saving Time (DST), involving clock shifts forward in spring and backward in autumn, was introduced to promote energy savings. However, its effectiveness remains controversial, especially in buildings with temporary occupancy like academic institutions, which have high daytime use but low summer occupancy. This study investigates the impact of DST transitions on energy consumption across seven campuses of two higher education institutions (HEIs) in northern Portugal and Spain, located in different time zones, using measured data from 2023. The analysis accounted for the structural and operational characteristics of each campus to contextualize consumption patterns. Weekly electricity consumption before and after DST changes was compared using independent-samples t-tests to assess statistical significance. Results show that the spring transition to DST led to an average energy saving of 1.7%, while the autumn return to standard time caused an average increase of 1.2%. Significant differences (p < 0.05) were found in five of the seven campuses. Descriptive statistics and confidence intervals indicated that only sites with intervals excluding zero exhibited consistent changes. Seasonal energy demand appeared more influenced by academic schedules and thermal comfort needs—particularly heating—than by DST alone. Higher consumption coincided with periods of intense academic activity and extreme temperatures, while lower demand aligned with holidays and longer daylight months. Although DST yielded modest energy savings, its overall impact on academic campus energy use is limited and highly dependent on local conditions. The findings highlight the need to consider regional climate, institutional policies, user behavior, and smart technology integration in future energy efficiency analyses in academic settings. Full article
(This article belongs to the Section B: Energy and Environment)
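
As an illustration of the before/after comparison described in this abstract, the sketch below runs a Welch two-sample t-test on made-up weekly consumption figures and checks whether the 95% confidence interval of the mean difference excludes zero; the numbers are placeholders, not the campus data.

```python
# Sketch: compare weekly electricity consumption before vs. after a DST change
# and check whether the 95% CI of the mean difference excludes zero.
# The numbers below are illustrative placeholders, not the campuses' data.
import numpy as np
from scipy import stats

before = np.array([41.2, 39.8, 42.5, 40.1, 43.0, 41.7])  # MWh/week before transition
after  = np.array([39.9, 38.6, 41.0, 39.2, 42.1, 40.3])  # MWh/week after transition

t_stat, p_value = stats.ttest_ind(before, after, equal_var=False)  # Welch's t-test

diff = before.mean() - after.mean()
se = np.sqrt(before.var(ddof=1) / len(before) + after.var(ddof=1) / len(after))
# Welch-Satterthwaite degrees of freedom
df = se**4 / ((before.var(ddof=1) / len(before))**2 / (len(before) - 1)
              + (after.var(ddof=1) / len(after))**2 / (len(after) - 1))
ci = diff + np.array([-1, 1]) * stats.t.ppf(0.975, df) * se

print(f"mean saving = {diff:.2f} MWh/week ({100 * diff / before.mean():.1f} %)")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
print("CI excludes zero" if ci[0] * ci[1] > 0 else "CI includes zero")
```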

21 pages, 7419 KiB  
Article
On Numerical Simulations of Turbulent Flows over a Bluff Body with Aerodynamic Flow Control Based on Trapped Vortex Cells: Viscous Effects
by Dmitry A. Lysenko
Fluids 2025, 10(5), 120; https://doi.org/10.3390/fluids10050120 - 8 May 2025
Viewed by 367
Abstract
Turbulent flows over a semi-circular cylinder (a limiting case of a thick airfoil with a chord equal to the diameter base) are investigated using high-fidelity large-eddy simulations at a diameter-based Reynolds number, Re = 130,000, Mach number, M = 0.05, and a zero angle of attack. The aerodynamic flow control system, designed with two trapped vortex cells, achieves a complete non-separated flow over the bluff body, except for low-scale turbulence effects, reaching approximately 80% of the theoretical lift coefficient limit (2π for the half-circular airfoil). Viscous effects are analyzed using the conventional Reynolds-averaged Navier–Stokes approach for a broad range of Reynolds numbers, 75,000 ≤ Re ≤ 1,000,000. Numerical results demonstrate that the aerodynamic properties of the implemented concept are independent of the Reynolds number within this interval, highlighting its significant potential for further development. Full article
(This article belongs to the Collection Feature Paper for Mathematical and Computational Fluid Mechanics)

20 pages, 630 KiB  
Article
Overcoming Dilution of Collision Probability in Satellite Conjunction Analysis via Confidence Distribution
by Hangbin Lee and Youngjo Lee
Entropy 2025, 27(4), 329; https://doi.org/10.3390/e27040329 - 21 Mar 2025
Viewed by 451
Abstract
In satellite conjunction analysis, dilution of collision probability has been recognized as a significant deficiency of probabilistic inference. A recent study identified the false confidence problem as another limitation and suggested a possible causal link between the two, arguing that addressing false confidence could be necessary to prevent dilution of collision probability. However, this paper clarifies the distinction between probability dilution and false confidence by investigating a confidence distribution (CD) with a point mass at zero. Although the point mass in CD has often been perceived as paradoxical, we demonstrate that it plays an essential role in satellite conjunction analysis by capturing the uncertainty in the data. Consequently, the CD resolves the probability dilution of probabilistic inference and the ambiguity in Neymanian confidence intervals, while addressing the false confidence for the hypothesis of interest in satellite conjunction analysis. Furthermore, the confidence derived from the CD offers a direct interpretation as a p-value for hypothesis testing related to collision risk. Full article
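
As a numerical aside, the probability-dilution effect that motivates the paper can be reproduced with a toy two-dimensional encounter model: as positional uncertainty grows, the computed collision probability first rises and then decays toward zero. The hard-body radius, miss distance, and covariance values in the sketch below are arbitrary assumptions.

```python
# Toy illustration of probability dilution in a 2D conjunction-plane model.
# Pc = integral of a Gaussian miss-distance density over the hard-body disk.
import numpy as np
from scipy.stats import multivariate_normal

HBR = 20.0                       # combined hard-body radius, m (assumed)
miss = np.array([150.0, 0.0])    # nominal miss vector in the encounter plane, m (assumed)

def collision_probability(sigma, n=400):
    """Grid-integrate an isotropic Gaussian with std `sigma` over the disk |r| <= HBR."""
    xs = np.linspace(-HBR, HBR, n)
    xx, yy = np.meshgrid(xs, xs)
    inside = xx**2 + yy**2 <= HBR**2
    pdf = multivariate_normal(mean=miss, cov=sigma**2 * np.eye(2)).pdf(np.dstack([xx, yy]))
    cell = (xs[1] - xs[0])**2
    return float(np.sum(pdf[inside]) * cell)

for sigma in [20, 50, 100, 500, 2000, 10000]:   # growing positional uncertainty, m
    print(f"sigma = {sigma:6d} m  ->  Pc = {collision_probability(sigma):.2e}")
# Pc peaks at moderate sigma and dilutes toward zero as sigma grows, which is the
# deficiency of the plain probabilistic report that the CD approach addresses.
```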

23 pages, 1450 KiB  
Article
Supply–Demand Dynamics Quantification and Distributionally Robust Scheduling for Renewable-Integrated Power Systems with Flexibility Constraints
by Jiaji Liang, Jinniu Miao, Lei Sun, Liqian Zhao, Jingyang Wu, Peng Du, Ge Cao and Wei Zhao
Energies 2025, 18(5), 1181; https://doi.org/10.3390/en18051181 - 28 Feb 2025
Viewed by 811
Abstract
The growing penetration of renewable energy sources (RES) has exacerbated operational flexibility deficiencies in modern power systems under time-varying conditions. To address the limitations of existing flexibility management approaches, which often exhibit excessive conservatism or risk exposure in managing supply–demand uncertainties, this study introduces a data-driven distributionally robust optimization (DRO) framework for power system scheduling. The methodology comprises three key phases: First, a meteorologically aware uncertainty characterization model is developed using Copula theory, explicitly capturing spatiotemporal correlations in wind and PV power outputs. System flexibility requirements are quantified through integrated scenario-interval analysis, augmented by flexibility adjustment factors (FAFs) that mathematically describe heterogeneous resource participation in multi-scale flexibility provision. These innovations facilitate the formulation of physics-informed flexibility equilibrium constraints. Second, a two-stage DRO model is established, incorporating demand-side resources such as electric vehicle fleets as flexibility providers. The optimization objective aims to minimize total operational costs, encompassing resource activation expenses and flexibility deficit penalties. To strike a balance between robustness and reduced conservatism, polyhedral ambiguity sets bounded by generalized moment constraints are employed, leveraging Wasserstein metric-based probability density regularization to diminish the probabilities of extreme scenarios. Third, the bilevel optimization structure is transformed into a solvable mixed-integer programming problem using a zero-sum game equivalence. This problem is subsequently solved using an enhanced column-and-constraint generation (C&CG) algorithm with adaptive cut generation. Finally, simulation results demonstrate that the proposed model positively impacts the flexibility margin and economy of the power system, compared to traditional uncertainty models. Full article
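
The Copula-based uncertainty characterization mentioned in the abstract can be sketched generically with a Gaussian copula coupling assumed wind and PV marginals; the sketch below is illustrative only and does not reproduce the paper's model or data.

```python
# Sketch of Copula-based scenario generation for correlated wind and PV output.
# A Gaussian copula with an assumed dependence couples assumed marginals
# (Weibull for wind power factor, Beta for PV power factor); all parameters
# are illustrative, not taken from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = -0.3                       # assumed wind/PV dependence in the copula
cov = np.array([[1.0, rho], [rho, 1.0]])

z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=1000)
u = stats.norm.cdf(z)            # map to uniforms -> this is the copula sample

wind = np.clip(stats.weibull_min.ppf(u[:, 0], c=2.0, scale=0.45), 0.0, 1.0)  # per-unit wind
pv   = stats.beta.ppf(u[:, 1], a=2.0, b=3.0)                                 # per-unit PV

rho_s = stats.spearmanr(wind, pv)[0]
print(f"sample Spearman correlation: {rho_s:.3f}")
print(f"mean wind p.u.: {wind.mean():.3f}   mean PV p.u.: {pv.mean():.3f}")
```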

32 pages, 9228 KiB  
Article
Measurement-Based Assessment of Energy Performance and Thermal Comfort in Households Under Non-Controllable Conditions
by George M. Stavrakakis, Dimitris Bakirtzis, Dimitrios Tziritas, Panagiotis L. Zervas, Emmanuel Fotakis, Sofia Yfanti, Nikolaos Savvakis and Dimitris A. Katsaprakakis
Energies 2025, 18(5), 1087; https://doi.org/10.3390/en18051087 - 24 Feb 2025
Viewed by 695
Abstract
The current research presents a practical approach to assess energy performance and thermal comfort in households through monitoring campaigns. The campaigns are conducted in a Greek city, involving the installation of low-intrusive recording devices for hourly electricity consumption, indoor temperature, and relative humidity in different residences in winter and summer periods. The recorded indoor environmental conditions are initially compiled into the Predicted Mean Vote (PMV) index, followed by the formulation of databases of hourly electricity consumption, PMV, and local outdoor climate conditions retrieved from an official source of meteorological data. A special database-processing algorithm was developed that takes into account the eligibility of the data series, i.e., only those corresponding to non-zero electricity consumption are treated as eligible. First, the temporal progression of energy consumption and thermal comfort is produced to assess energy-use intensity and thermal comfort patterns. Secondly, by summing the electricity consumption within 0.5-step PMV intervals, under three outdoor temperature intervals with approximately the same number of eligible measurements, reliable interrelations of energy consumption and PMV are obtained even for residences with a limited amount of measured data. The weekly electricity consumption ranged within 0.15–3.59 kWh/m2 for the winter cases and within 0.29–1.72 kWh/m2 for the summer cases. The acceptable −1 ≤ PMV ≤ 1 interval holds an occurrence frequency from 69.46% to 93.39% and from 37.94% to 70.31% for the examined winter and summer cases, respectively. Less resistance to discomfort conditions is observed in most of the examined summer households, which exhibit the electricity peak within the 1 ≤ PMV ≤ 1.5 interval, contrary to the winter cases, for which the electricity peak occurred within the −1 ≤ PMV ≤ −0.5 interval. The study provides graphical relationships of PMV and electricity consumption under various outdoor temperatures, paving the way for correlating thermal comfort and energy consumption. Full article
(This article belongs to the Special Issue Research Trends of Thermal Comfort and Energy Efficiency in Buildings)
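
The database-processing step described here (non-zero consumption records, 0.5-step PMV bins, three outdoor-temperature groups of similar size) can be sketched with pandas; column names and the simulated data below are assumptions.

```python
# Sketch of the database-processing step: keep non-zero consumption records,
# split them into three outdoor-temperature groups of similar size, and sum
# electricity use within 0.5-wide PMV bins.  Column names and data are assumed.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "kwh": rng.gamma(2.0, 0.15, 2000) * rng.integers(0, 2, 2000),  # some zero hours
    "pmv": rng.normal(-0.2, 0.8, 2000).clip(-3, 3),
    "t_out": rng.normal(12.0, 6.0, 2000),
})

eligible = df[df["kwh"] > 0].copy()                        # non-zero consumption only
eligible["t_band"] = pd.qcut(eligible["t_out"], q=3,       # ~equal-sized temperature groups
                             labels=["cold", "mild", "warm"])
eligible["pmv_bin"] = pd.cut(eligible["pmv"], bins=np.arange(-3.0, 3.5, 0.5))

profile = (eligible.groupby(["t_band", "pmv_bin"], observed=True)["kwh"]
           .sum().unstack(level=0))
print(profile.round(2))   # electricity consumption per PMV interval and temperature band
```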

15 pages, 3457 KiB  
Article
Fractional Dynamical Behaviour Modelling Using Convolution Models with Non-Singular Rational Kernels: Some Extensions in the Complex Domain
by Jocelyn Sabatier
Fractal Fract. 2025, 9(2), 79; https://doi.org/10.3390/fractalfract9020079 - 24 Jan 2025
Viewed by 843
Abstract
This paper introduces a convolution model with non-singular rational kernels in which coefficients are considered complex. An interlacing property of the poles and zeros in these rational kernels permits the accurate approximation of the power law function t^ν in a predefined time range, where ν can be complex or real. This class of model can be used to model fractional (dynamical) behaviours in order to avoid fractional calculus-based models which are now associated with several limitations. This is an extension of a previous study by the author. In the real case, this allows a better approximation, close to the limits of the approximation interval, compared to the author’s previous work. In the complex case, this extends the scope of application of the convolution models proposed by the author. Full article
(This article belongs to the Section Numerical and Computational Methods)
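
The interlaced pole–zero idea can be illustrated with the classical Oustaloup recursive approximation of s^ν, which is not the author's convolution-kernel construction but relies on the same alternation of poles and zeros to mimic a power law over a chosen band; the band, order, and ν below are assumed.

```python
# Not the paper's convolution-kernel construction: a classical Oustaloup-style
# recursive approximation of s^nu, shown only to illustrate how interlaced
# (alternating) poles and zeros reproduce a power-law response over a chosen band.
import numpy as np

nu = 0.5                  # assumed real fractional order
wb, wh, N = 1e-2, 1e2, 4  # assumed frequency band [wb, wh] rad/s and number of cells

k = np.arange(-N, N + 1)
zeros = wb * (wh / wb) ** ((k + N + 0.5 - nu / 2) / (2 * N + 1))   # interlaced zeros
poles = wb * (wh / wb) ** ((k + N + 0.5 + nu / 2) / (2 * N + 1))   # interlaced poles
gain = wh ** nu

w = np.logspace(-1, 1, 7)          # test frequencies inside the band
s = 1j * w
H = gain * np.prod((s[:, None] + zeros) / (s[:, None] + poles), axis=1)

for wi, Hi in zip(w, H):
    print(f"w = {wi:7.3f}  |H| = {abs(Hi):7.4f}   w^nu = {wi**nu:7.4f}")
# |H(jw)| tracks w^nu inside [wb, wh]; outside the band the approximation degrades.
```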

34 pages, 7354 KiB  
Article
Analysis of High-Frequency Sea-State Variability Using SWOT Nadir Measurements and Application to Altimeter Sea State Bias Modelling
by Estelle Mazaleyrat, Ngan Tran, Laïba Amarouche, Douglas Vandemark, Hui Feng, Gérald Dibarboure and François Bignalet-Cazalet
Remote Sens. 2024, 16(23), 4361; https://doi.org/10.3390/rs16234361 - 22 Nov 2024
Viewed by 1479
Abstract
The 1-day fast-sampling orbit phase of the Surface Water Ocean Topography (SWOT) satellite mission provides a unique opportunity to analyze high-frequency sea-state variability and its implications for altimeter sea state bias (SSB) model development. Time series with 1-day repeat sampling of sea-level anomaly (SLA) and SSB input parameters—comprising the significant wave height (SWH), wind speed (WS), and mean wave period (MWP)—are constructed using SWOT’s nadir altimeter data. The analyses corroborate the following key SSB modelling assumption central to empirical developments: the SLA noise due to all factors, aside from sea state change, is zero-mean. Global variance reduction tests on the SSB model’s performance using corrected SLA differences show that correction skill estimation using a specific (1D, 2D, or 3D) SSB model is unstable when using short time difference intervals ranging from 1 to 5 days, reaching a stable asymptotic limit after 5 days. It is proposed that this result is related to the temporal auto- and cross-correlations associated with the SSB model’s input parameters; the present study shows that SSB wind-wave input measurements take time (typically 1–4 days) to decorrelate in any given region. The latter finding, obtained using unprecedented high-frequency satellite data from multiple ocean basins, is shown to be consistent with estimates from an ocean wave model. The results also imply that optimal time-differencing (i.e., >4 days) should be considered when building SSB model data training sets. The SWOT altimeter data analysis of the temporal cross-correlations also permits an evaluation of the relationships between the SSB input parameters (SWH, WS, and MWP), where distinct behaviors are found in the swell- and wind-sea-dominated areas, and associated time scales are less than or on the order of 1 day. Finally, it is demonstrated that computing cross-correlations between the SLA (with and without SSB correction) and the SSB input parameters offers an additional tool for evaluating the relevance of candidate SSB input parameters, as well as for assessing the performance of SSB correction models, which, so far, mainly rely on the reduction in the variance of the differences in the SLA at crossover points. Full article
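
The decorrelation-time diagnostic used in the study can be sketched by computing the lagged autocorrelation of a daily sea-state series and reporting the first lag below 1/e; a synthetic AR(1) series stands in for the SWOT nadir record, and the 1/e threshold is one common convention.

```python
# Sketch: estimate how many days a sea-state parameter takes to decorrelate.
# A synthetic AR(1) series stands in for a daily SWH record from the
# 1-day fast-sampling phase; parameters are assumed.
import numpy as np

rng = np.random.default_rng(2)
phi = 0.6                                  # assumed day-to-day persistence
swh = np.empty(90)
swh[0] = 2.5
for t in range(1, swh.size):               # simple AR(1) stand-in, metres
    swh[t] = 2.5 + phi * (swh[t - 1] - 2.5) + rng.normal(0, 0.3)

def autocorr(x, lag):
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x)) if lag else 1.0

acf = [autocorr(swh, lag) for lag in range(0, 11)]
decorrelation = next((lag for lag, r in enumerate(acf) if r < 1 / np.e), len(acf))
print("ACF by lag (days):", np.round(acf, 2))
print("decorrelation time ≈", decorrelation, "days")
```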

11 pages, 1995 KiB  
Article
Angle-Tunable Method for Optimizing Rear Reflectance in Fabry–Perot Interferometers and Its Application in Fiber-Optic Ultrasound Sensing
by Yufei Chu, Mohammed Alshammari, Xiaoli Wang and Ming Han
Photonics 2024, 11(12), 1100; https://doi.org/10.3390/photonics11121100 - 21 Nov 2024
Cited by 1 | Viewed by 1008
Abstract
With the introduction of advanced Fiber Bragg Grating (FBG) technology, Fabry–Pérot (FP) interferometers have become widely used in fiber-optic ultrasound detection. In these applications, the slope of the reflectance is a critical factor influencing detection results. Due to the intensity limitations of the laser source in fiber-optic ultrasound detection, the reflectance of the FBG is generally increased to enhance the signal-to-noise ratio (SNR). However, increasing reflectance can cause the reflectance curve to deviate from a sinusoidal shape, which in turn affects the slope of the reflectance and introduces greater errors. This paper first investigates the relationship between the transmission curve of the FP interferometer and reflectance, with a focus on the errors introduced by simplified assumptions. Further research shows that in sensors with asymmetric reflectance slopes, their transmittance curves deviate significantly from sinusoidal signals. This discrepancy highlights the importance of achieving symmetrical slopes to ensure consistent and accurate detection. To address this issue, this paper proposes an innovative method to adjust the rear-end reflectance of the FP interferometer by combining stress modulation, UV adhesive, and a high-reflectivity metal disk. Additionally, by adjusting the rear-end reflectance to ensure that the transmittance curve approximates a sinusoidal signal, the symmetry of the slope is maintained. Finally, through practical ultrasound testing, by adjusting the incident wavelength to the positions of slope extrema (or zero) at equal intervals, the expected ultrasound signals at extrema (or zero) can be detected. This method converts the problem of approximating a sinusoidal signal into a problem of the slope adjustment of the transmittance curve, making it easier and more direct to determine its impact on detection results. The proposed method not only improves the performance of fiber-optic ultrasound sensors but also reduces costs, paving the way for broader applications in medical diagnostics and structural health monitoring. Full article
(This article belongs to the Special Issue Optical Sensing Technologies, Devices and Their Data Applications)
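
The deviation of the fringe from a sinusoid as reflectance increases can be checked directly from the generic Airy transmission formula for an ideal lossless cavity; the sketch below quantifies the harmonic distortion for a few reflectance values and uses no sensor-specific parameters from the paper.

```python
# Airy transmission of an ideal lossless two-mirror Fabry-Perot fringe:
#   T(delta) = 1 / (1 + F * sin^2(delta / 2)),  F = 4R / (1 - R)^2.
# At low mirror reflectance R the fringe is close to a pure sinusoid in delta;
# as R grows, higher harmonics appear and the fringe (and hence its slope)
# departs from the sinusoidal shape assumed by simple demodulation schemes.
# Generic textbook formula, not the sensor model of the paper.
import numpy as np

delta = np.linspace(0, 2 * np.pi, 4096, endpoint=False)

for R in (0.04, 0.30, 0.80):
    F = 4 * R / (1 - R) ** 2
    T = 1.0 / (1.0 + F * np.sin(delta / 2) ** 2)
    spectrum = np.abs(np.fft.rfft(T - T.mean()))
    distortion = spectrum[2:6].sum() / spectrum[1]   # harmonics 2-5 vs fundamental
    print(f"R = {R:.2f}:  harmonic distortion of the fringe = {distortion:.3f}")
```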

21 pages, 2782 KiB  
Review
Situational and Dispositional Achievement Goals and Measures of Sport Performance: A Systematic Review with a Meta-Analysis
by Marc Lochbaum and Cassandra Sisneros
Sports 2024, 12(11), 299; https://doi.org/10.3390/sports12110299 - 4 Nov 2024
Cited by 1 | Viewed by 2370
Abstract
The purposes of this systematic review (PROSPERO ID: CRD42024510614, no funding source) were to quantify relationships between situational and dispositional dichotomous achievement goals and sport performance and explore potential relationship moderators. Published studies that reported at least one situational or dispositional achievement goal and a performance score were included. Studies without performance scores or based in a non-sport context were excluded. Information sources consisted of studies found in relevant published meta-analyses and EBSCOhost databases (finalized September 2024). The following statistics were conducted to assess the risk of bias: class-fail-safe n, Orwin’s fail-safe n, and funnel plots with trim and fill estimates. The summary statistics were r and d. Thirty studies from 1994 to 2024 met all inclusion criteria with 8708 participants from Europe, Asia, North America, and Oceania. The majority of samples were non-elite male youths and adolescents. The random-effects relationships (r) between task climate, 0.20 [0.14, 0.25], task orientation, 0.17 [0.12, 0.23], ego orientation, 0.09 [0.03, 0.16], and sport performance were small and significantly different (p < 0.05) from zero, while the ego motivational climate relationship was not, −0.00 [−0.48, 0.05]. The random-effects standard differences in means (d) for both the task orientation, 0.08 [0.02, 0.14], and ego orientation, 0.11 [−0.05, 0.26] were minimal in meaningfulness. Mixed-effects moderator analyses resulted in the following significant (p < 0.05) sub-group differences: subjective compared to objective performance measures (task orientation), elite compared to non-elite samples (task climate), and athlete-completed compared to coach-completed performance measures and performance records (task orientation). Finding only 30 studies meeting the inclusion criteria, which limited sub-group samples for moderation analyses, was the main limitation. Despite this limitation, AGT provides athletes and practitioners performance enhancement strategies. However, caution is warranted regarding relationship expectations given the small mean effect size values and the true prediction interval ranging from negative to positive, perhaps as a result of the heterogeneous samples and performance measures. A clear line of future research, considering the reviewed studies, with elite athletes is needed to verify the performance benefits of the task climate and ego orientation as well as the use of the ego goal orientation in selection decisions. Full article
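
The pooled correlations, confidence intervals, and prediction interval reported above follow standard random-effects synthesis; the sketch below applies a DerSimonian–Laird estimator with Fisher's z to invented (r, n) pairs purely to show the computation.

```python
# Sketch of a DerSimonian-Laird random-effects synthesis of correlations via
# Fisher's z, with a 95% CI and a prediction interval.  The (r, n) pairs below
# are invented placeholders, not the studies included in the review.
import numpy as np
from scipy import stats

r = np.array([0.25, 0.10, 0.30, 0.05, 0.22, 0.18])   # per-study correlations (assumed)
n = np.array([120, 85, 200, 60, 150, 95])            # per-study sample sizes (assumed)

z = np.arctanh(r)               # Fisher z transform
v = 1.0 / (n - 3)               # within-study variance of z
w = 1.0 / v

z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(r) - 1)) / c)              # between-study variance

w_star = 1.0 / (v + tau2)
z_re = np.sum(w_star * z) / np.sum(w_star)
se_re = np.sqrt(1.0 / np.sum(w_star))

ci = np.tanh(z_re + np.array([-1, 1]) * 1.96 * se_re)
t_crit = stats.t.ppf(0.975, df=len(r) - 2)
pred = np.tanh(z_re + np.array([-1, 1]) * t_crit * np.sqrt(tau2 + se_re ** 2))

print(f"pooled r = {np.tanh(z_re):.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
print(f"95% prediction interval = [{pred[0]:.2f}, {pred[1]:.2f}]  (tau^2 = {tau2:.4f})")
```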

23 pages, 438 KiB  
Article
Skew-Normal Inflated Models: Mathematical Characterization and Applications to Medical Data with Excess of Zeros and Ones
by Guillermo Martínez-Flórez, Roger Tovar-Falón, Víctor Leiva and Cecilia Castro
Mathematics 2024, 12(16), 2486; https://doi.org/10.3390/math12162486 - 12 Aug 2024
Cited by 2 | Viewed by 1300
Abstract
The modeling of data involving proportions, confined to a unit interval, is crucial in diverse research fields. Such data, expressing part-to-whole relationships, span from the proportion of individuals affected by diseases to the allocation of resources in economic sectors and the survival rates of species in ecology. However, modeling these data and interpreting information obtained from them present challenges, particularly when there is high zero–one inflation at the extremes of the unit interval, which indicates the complete absence or full occurrence of a characteristic or event. This inflation limits traditional statistical models, which often fail to capture the underlying distribution, leading to biased or imprecise statistical inferences. To address these challenges, we propose and derive the skew-normal zero–one inflated (SNZOI) models, a novel class of asymmetric regression models specifically designed to accommodate zero–one inflation presented in the data. By integrating a continuous-discrete mixture distribution with covariates in both continuous and discrete parts, SNZOI models exhibit superior capability compared to traditional models when describing these complex data structures. The applicability and effectiveness of the proposed models are demonstrated through case studies, including the analysis of medical data. Precise modeling of inflated proportion data unveils insights representing advancements in the statistical analysis of such studies. The present investigation highlights the limitations of existing models and shows the potential of SNZOI models to provide more accurate and precise inferences in the presence of zero–one inflation. Full article
(This article belongs to the Special Issue Applied Statistics in Real-World Problems)
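
The zero–one inflated structure can be sketched with a simpler stand-in for the continuous component (a beta density instead of the paper's skew-normal construction); the simulated data and the separable maximum-likelihood fit below are illustrative only.

```python
# Sketch of a zero-one inflated model: point masses at 0 and 1 plus a continuous
# density on (0, 1).  For simplicity the continuous part here is a beta density
# (a stand-in for the paper's skew-normal-based component); data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
y = np.concatenate([
    np.zeros(150),                       # excess zeros
    np.ones(60),                         # excess ones
    stats.beta.rvs(2.0, 5.0, size=800, random_state=rng),  # interior proportions
])

p0_hat = np.mean(y == 0)                 # MLE of the point mass at zero
p1_hat = np.mean(y == 1)                 # MLE of the point mass at one
interior = y[(y > 0) & (y < 1)]
a_hat, b_hat, _, _ = stats.beta.fit(interior, floc=0, fscale=1)

print(f"P(Y=0) = {p0_hat:.3f},  P(Y=1) = {p1_hat:.3f}")
print(f"continuous part on (0,1): Beta(a = {a_hat:.2f}, b = {b_hat:.2f})")
# The mixture weights and the continuous component can be estimated separately
# because the inflated likelihood factorises over the three regions of support.
```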

12 pages, 1118 KiB  
Article
Bifenthrin Residues in Table Grapevine: Method Optimization, Dissipation and Removal of Residues in Grapes and Grape Leaves
by Saleh S. Alhewairini, Rania M. Abd El-Hamid, Nevein S. Ahmed, Sherif B. Abdel Ghani and Osama I. Abdallah
Plants 2024, 13(12), 1695; https://doi.org/10.3390/plants13121695 - 19 Jun 2024
Cited by 2 | Viewed by 1507
Abstract
The QuEChERS method was adjusted to determine bifenthrin residues in grapes and grape leaves. Extraction and cleanup procedures were optimized to decrease co-extracted materials and enhance the detection of bifenthrin. The method was validated per the European Union (EU) Guidelines criteria. Accuracy was 98.8% and 93.5% for grapes and grape leaves, respectively. Precision values were 5.5 and 6.4 (RSDr) and 7.4 and 6.7 (RSDR) for grapes and grape leaves, respectively. LOQs (the lowest spiking levels) were 2 and 20 µg/kg for grapes and grape leaves, respectively. Linearity, expressed as determination coefficient (R2) values, was 0.9997 and 0.9964 for grapes and grape leaves, respectively, in matrix over the 1–100 µg/L range of analyte concentration. This was very close to the value in the pure solvent (0.9999), showing the efficiency of the cleanup in removing the co-extracted and co-injected materials; the matrix effect was close to zero in both sample matrices. Dissipation of bifenthrin was studied in a supervised trial conducted in a grapevine field during the summer of 2023 at the recommended dose and double the dose. Dissipation factor k values were 0.1549 and 0.1672 (recommended dose) and 0.235 and 0.208 (double dose) for grapes and grape leaves, respectively. The pre-harvest interval (PHI) was calculated for the Maximum Residue Limit (MRL) values of the EU database. Residues of bifenthrin were removed effectively from grapes using simple washing with tap water in a laboratory study. Residues reached the MRL of 0.3 mg/kg in both washing treatments, i.e., running or soaking in tap water for 5 min. Washing did not decrease residue levels in grape leaves to the MRL. Full article
(This article belongs to the Special Issue Pesticide Residues in Plants)
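
The reported first-order dissipation constants translate directly into half-lives and a pre-harvest interval once an initial residue level is assumed; the sketch below uses the k values quoted in the abstract with an assumed initial deposit.

```python
# First-order dissipation: C(t) = C0 * exp(-k * t).
# Half-life t_1/2 = ln(2) / k, and the pre-harvest interval is the time needed
# for the residue to fall to the MRL: PHI = ln(C0 / MRL) / k.
# k values follow the abstract; the initial deposit C0 is an assumed example,
# and the grape MRL is applied to leaves here purely for illustration.
import math

MRL = 0.3          # mg/kg, EU MRL for bifenthrin in grapes (as quoted above)
C0 = 1.2           # mg/kg, assumed initial residue just after application

for label, k in [("grapes, recommended dose", 0.1549),
                 ("grape leaves, recommended dose", 0.1672),
                 ("grapes, double dose", 0.235),
                 ("grape leaves, double dose", 0.208)]:
    half_life = math.log(2) / k
    phi = max(0.0, math.log(C0 / MRL) / k)
    print(f"{label:32s}  t1/2 = {half_life:4.1f} d   PHI to reach MRL = {phi:4.1f} d")
```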

8 pages, 226 KiB  
Article
Multimodel Approaches Are Not the Best Way to Understand Multifactorial Systems
by Benjamin M. Bolker
Entropy 2024, 26(6), 506; https://doi.org/10.3390/e26060506 - 11 Jun 2024
Cited by 3 | Viewed by 1996
Abstract
Information-theoretic (IT) and multi-model averaging (MMA) statistical approaches are widely used but suboptimal tools for pursuing a multifactorial approach (also known as the method of multiple working hypotheses) in ecology. (1) Conceptually, IT encourages ecologists to perform tests on sets of artificially simplified models. (2) MMA improves on IT model selection by implementing a simple form of shrinkage estimation (a way to make accurate predictions from a model with many parameters relative to the amount of data, by “shrinking” parameter estimates toward zero). However, other shrinkage estimators such as penalized regression or Bayesian hierarchical models with regularizing priors are more computationally efficient and better supported theoretically. (3) In general, the procedures for extracting confidence intervals from MMA are overconfident, providing overly narrow intervals. If researchers want to use limited data sets to accurately estimate the strength of multiple competing ecological processes along with reliable confidence intervals, the current best approach is to use full (maximal) statistical models (possibly with Bayesian priors) after making principled, a priori decisions about model complexity. Full article
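
The shrinkage alternative recommended here (a penalized fit of the full model rather than averaging over simplified sub-models) can be sketched with ridge regression on synthetic multifactorial data; predictors, effect sizes, and the penalty strength are arbitrary.

```python
# Sketch of the recommended alternative to IT/MMA: fit the full multifactorial
# model once and shrink coefficients with a penalty (here, ridge regression),
# rather than averaging over many artificially simplified sub-models.
# Synthetic data; predictors and effect sizes are arbitrary.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(4)
n, p = 60, 8                        # few data points relative to predictors
X = rng.normal(size=(n, p))
true_beta = np.array([0.8, 0.5, 0.3, 0.0, 0.0, 0.0, 0.0, 0.0])
y = X @ true_beta + rng.normal(scale=1.0, size=n)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)    # alpha chosen ad hoc; use CV in practice

print("true  :", np.round(true_beta, 2))
print("OLS   :", np.round(ols.coef_, 2))
print("ridge :", np.round(ridge.coef_, 2))   # estimates pulled toward zero
```
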
14 pages, 1847 KiB  
Article
Impact of Time on Parameters for Assessing the Microstructure Equivalence of Topical Products: Diclofenac 1% Emulsion as a Case Study
by Andreu Mañez-Asensi, Mª Jesús Hernández, Víctor Mangas-Sanjuán, Ana Salvador, Matilde Merino-Sanjuán and Virginia Merino
Pharmaceutics 2024, 16(6), 749; https://doi.org/10.3390/pharmaceutics16060749 - 1 Jun 2024
Viewed by 1268
Abstract
The demonstration of bioequivalence proposed in the European Medicines Agency’s (EMA’s) draft guideline for topical products with the same qualitative and quantitative composition requires the confirmation of the internal structure equivalence. The impact of the shelf-life on the parameters proposed for internal structure comparison has not been studied. The objectives of this work were: (1) to quantify the effect of the time since manufacturing on the mean value and variability of the parameters proposed by the EMA to characterize the internal structure and performance of topical formulations of a complex topical formulation, and (2) to evaluate the impact of these changes on the assessment of the microstructure equivalence. A total of 5 batches of a topical emulgel containing 1% diclofenac diethylamine were evaluated 5, 14, and 23 months after manufacture. The zero-shear viscosity (η0), viscosity at 100 s−1 (η100), yield stress (σ0), elastic (G′) and viscous (G″) moduli, internal phase droplet size and in vitro release of the active ingredient were characterized. While no change in variability over time was detected, the mean value of all the parameters changed, especially the droplet size and in vitro release. Thus, combining data from batches of different manufacturing dates may compromise the determination of bioequivalence. The results confirm that to assess the microstructural similarity of complex formulations (such as emulgel), the 90% confidence interval limit for the mean difference in rheological and in vitro release parameters should be 20% and 25%, respectively. Full article
(This article belongs to the Special Issue Topical Drug Delivery: Current Status and Perspectives)
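
The 20% acceptance criterion mentioned above is typically checked with a 90% confidence interval for the test-to-reference difference; the sketch below does this for a rheological parameter using invented batch values.

```python
# Sketch: 90% confidence interval for the relative difference in a rheological
# parameter (e.g., zero-shear viscosity) between test and reference batches,
# checked against a +/-20% acceptance band.  Values are invented placeholders.
import numpy as np
from scipy import stats

reference = np.array([152.0, 148.5, 155.2, 150.3, 149.8, 153.1])  # eta_0, Pa.s
test      = np.array([146.9, 151.2, 144.8, 149.5, 147.3, 150.0])

diff = test.mean() - reference.mean()
se = np.sqrt(test.var(ddof=1) / len(test) + reference.var(ddof=1) / len(reference))
df = len(test) + len(reference) - 2                          # pooled df; Welch also possible
ci = diff + np.array([-1, 1]) * stats.t.ppf(0.95, df) * se   # 90% two-sided CI

ci_pct = 100 * ci / reference.mean()                         # CI as % of the reference mean
within = (ci_pct[0] > -20) and (ci_pct[1] < 20)
print(f"90% CI of the difference: [{ci_pct[0]:.1f} %, {ci_pct[1]:.1f} %] of reference")
print("within the +/-20% acceptance band" if within else "outside the acceptance band")
```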

13 pages, 7076 KiB  
Article
CycleGAN-Driven MR-Based Pseudo-CT Synthesis for Knee Imaging Studies
by Daniel Vallejo-Cendrero, Juan Manuel Molina-Maza, Blanca Rodriguez-Gonzalez, David Viar-Hernandez, Borja Rodriguez-Vila, Javier Soto-Pérez-Olivares, Jaime Moujir-López, Carlos Suevos-Ballesteros, Javier Blázquez-Sánchez, José Acosta-Batlle and Angel Torrado-Carvajal
Appl. Sci. 2024, 14(11), 4655; https://doi.org/10.3390/app14114655 - 28 May 2024
Cited by 1 | Viewed by 1775
Abstract
In the field of knee imaging, the incorporation of MR-based pseudo-CT synthesis holds the potential to mitigate the need for separate CT scans, simplifying workflows, enhancing patient comfort, and reducing radiation exposure. In this work, we present a novel DL framework, grounded in the development of the Cycle-Consistent Generative Adversarial Network (CycleGAN) method, tailored specifically for the synthesis of pseudo-CT images in knee imaging to surmount the limitations of current methods. Upon visually examining the outcomes, it is evident that the synthesized pseudo-CTs show excellent quality and high robustness. Despite the limited dataset employed, the method is able to capture the particularities of the bone contours in the resulting image. The experimental Mean Absolute Error (MAE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), Zero-Normalized Cross Correlation (ZNCC), Mutual Information (MI), Relative Change (RC), and absolute Relative Change (|RC|) report values of 30.4638 ± 7.4770, 28.1168 ± 1.5245, 0.9230 ± 0.0217, 0.9807 ± 0.0071, 0.8548 ± 0.1019, 0.0055 ± 0.0265, and 0.0302 ± 0.0218 (median ± median absolute deviation), respectively. The voxel-by-voxel correlation plot shows an excellent correlation between pseudo-CT and ground-truth CT Hounsfield units (m = 0.9785; adjusted R2 = 0.9988; ρ = 0.9849; p < 0.001). The Bland–Altman plot shows that the average of the differences is low (HU_CT − HU_pseudo-CT = 0.7199 ± 35.2490; 95% confidence interval [−68.3681, 69.8079]). This study represents the first reported effort in the field of MR-based knee pseudo-CT synthesis, paving the way to significantly advance the field of knee imaging. Full article
(This article belongs to the Special Issue Biomedical Imaging: From Methods to Applications)
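
Three of the agreement metrics quoted above (MAE, PSNR, ZNCC) are straightforward to compute voxel-wise; the sketch below does so with numpy on random stand-in volumes, with an assumed HU dynamic range for PSNR.

```python
# Minimal sketch of three of the reported agreement metrics between a ground-truth
# CT volume and a synthesized pseudo-CT, computed voxel-wise with numpy alone.
# Random volumes stand in for real data; the HU range used for PSNR is an assumption.
import numpy as np

rng = np.random.default_rng(5)
ct = rng.normal(0.0, 300.0, size=(64, 64, 64))          # stand-in CT, HU
pseudo_ct = ct + rng.normal(0.0, 30.0, size=ct.shape)   # stand-in pseudo-CT

mae = np.mean(np.abs(ct - pseudo_ct))

data_range = 4096.0                                     # assumed HU dynamic range
mse = np.mean((ct - pseudo_ct) ** 2)
psnr = 10.0 * np.log10(data_range ** 2 / mse)

a = ct - ct.mean()
b = pseudo_ct - pseudo_ct.mean()
zncc = float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

print(f"MAE  = {mae:.2f} HU")
print(f"PSNR = {psnr:.2f} dB")
print(f"ZNCC = {zncc:.4f}")
```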

26 pages, 1767 KiB  
Article
Event-Triggered Adaptive Neural Network Control for State-Constrained Pure-Feedback Fractional-Order Nonlinear Systems with Input Delay and Saturation
by Changhui Wang, Jiaqi Yang and Mei Liang
Fractal Fract. 2024, 8(5), 256; https://doi.org/10.3390/fractalfract8050256 - 26 Apr 2024
Viewed by 1444
Abstract
In this research, the adaptive event-triggered neural network controller design problem is investigated for a class of state-constrained pure-feedback fractional-order nonlinear systems (FONSs) with external disturbances, unknown actuator saturation, and input delay. An auxiliary compensation function based on the integral function of the input signal is presented to handle the input delay. The barrier Lyapunov function (BLF) is utilized to deal with state constraints, and an event-triggered strategy is applied to reduce the communication burden arising from limited communication resources. Using a backstepping scheme and a radial basis function neural network, an adaptive event-triggered neural state-feedback stabilization controller is constructed, in which fractional-order dynamic surface filters are employed to reduce the computational burden of the recursive procedure. It is proven via fractional-order Lyapunov analysis that all solutions of the closed-loop system are bounded and the tracking error converges to a small interval around zero, while the state constraint is satisfied and Zeno behavior is strictly ruled out. Two examples are finally given to show the effectiveness of the proposed control strategy. Full article
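
The event-triggered idea (updating the control signal only when a triggering error exceeds a threshold) can be illustrated with a toy integer-order simulation; this is not the paper's fractional-order backstepping design, and the plant, gain, and threshold below are assumed.

```python
# Toy illustration of event-triggered feedback (not the paper's fractional-order
# backstepping design): the control input is refreshed only when the deviation
# between the current state and the last transmitted state exceeds a threshold,
# which cuts the number of controller-to-actuator transmissions.
import numpy as np

dt, T = 0.01, 10.0
a, k_gain, threshold = 1.0, 3.0, 0.05      # assumed plant pole, gain, trigger level

x, x_sent = 1.0, 1.0                        # state and last transmitted state
events = 0
for step in range(int(T / dt)):
    if abs(x - x_sent) > threshold:         # event-triggering condition
        x_sent = x                          # transmit/update the controller state
        events += 1
    u = -k_gain * x_sent                    # control held constant between events
    x += dt * (a * x + u)                   # Euler step of dx/dt = a*x + u

print(f"final |x| = {abs(x):.4f}  (settles in a small neighbourhood of zero)")
print(f"control updates: {events} out of {int(T / dt)} simulation steps")
```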
