Search Results (22)

Search Parameters:
Keywords = discrete choice under uncertainty

18 pages, 1476 KiB  
Article
Ambiguities, Built-In Biases, and Flaws in Big Data Insight Extraction
by Serge Galam
Information 2025, 16(8), 661; https://doi.org/10.3390/info16080661 - 2 Aug 2025
Abstract
I address the challenge of extracting reliable insights from large datasets using a simplified model that illustrates how hierarchical classification can distort outcomes. The model consists of discrete pixels labeled red, blue, or white. Red and blue indicate distinct properties, while white represents unclassified or ambiguous data. A macro-color is assigned only if one color holds a strict majority among the pixels. Otherwise, the aggregate is labeled white, reflecting uncertainty. This setup mimics a percolation threshold at fifty percent. Assuming that the color proportions cannot be accessed directly from the data, I implement a hierarchical coarse-graining procedure. Elements (first pixels, then aggregates) are recursively grouped and reclassified via local majority rules, ultimately producing a single super-aggregate whose color represents the inferred macro-property of the collection of pixels as a whole. Analytical results supported by simulations show that the process introduces additional white aggregates beyond any white pixels present initially; these arise from groups lacking a clear majority, which require arbitrary symmetry-breaking decisions to attribute a color to them. While each local resolution may appear minor and inconsequential, their repetition introduces a growing systematic bias. Even with complete data, unavoidable asymmetries in local rules are shown to skew outcomes. This study highlights a critical limitation of recursive data reduction: insight extraction is shaped not only by data quality but also by how local ambiguity is handled, resulting in built-in biases. Thus, the related flaws are due not to the data but to structural choices made during local aggregations. Although based on a simple model, these findings expose a high likelihood of inherent flaws in widely used hierarchical classification techniques. Full article
(This article belongs to the Section Artificial Intelligence)
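
The recursive majority-rule coarse-graining described in this abstract is easy to reproduce in a toy simulation. The sketch below is an illustration of the general mechanism, not the author's code: it assumes groups of three, a strict-majority rule, and a configurable tie-break so the bias introduced by asymmetric local resolutions can be observed directly.

```python
# Illustrative sketch (not the paper's code) of hierarchical majority
# coarse-graining with an explicit tie-breaking rule. Group size 3 and the
# tie-break favouring "red" are assumptions made for the example.
import random

def coarse_grain(cells, group_size=3, tie_break="white"):
    """One level of coarse-graining: each group gets the strict-majority
    color, or the tie_break label when no strict majority exists."""
    out = []
    for i in range(0, len(cells), group_size):
        group = cells[i:i + group_size]
        counts = {c: group.count(c) for c in ("red", "blue", "white")}
        best = max(counts, key=counts.get)
        if counts[best] > len(group) / 2:      # strict majority
            out.append(best)
        else:                                   # ambiguous group
            out.append(tie_break)
    return out

def super_aggregate(cells, group_size=3, tie_break="white"):
    while len(cells) > 1:
        cells = coarse_grain(cells, group_size, tie_break)
    return cells[0]

random.seed(0)
pixels = random.choices(["red", "blue", "white"], weights=[45, 45, 10], k=3**6)
for rule in ("white", "red"):   # neutral vs. biased local resolution
    print(rule, super_aggregate(pixels, tie_break=rule))
```
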
12 pages, 343 KiB  
Article
Nepalese Cancer Patients’ Willingness to Pay for Improved Quality of Life: A Choice Experiment Study
by Adnan Shahid and Alok Bohara
Healthcare 2025, 13(14), 1645; https://doi.org/10.3390/healthcare13141645 - 8 Jul 2025
Viewed by 254
Abstract
Background/Objectives: In Nepal, cancer, among non-communicable diseases, has a high mortality rate. The disease significantly affects patients’ quality of life (QoL). This study aims to identify key attributes of QoL and explore patients’ preferences regarding these attributes. Methods: We implement a discrete choice experiment (DCE) survey to understand cancer patients’ preferences for different attributes of QoL, their willingness to pay for improved QoL, and their preference heterogeneity. This study innovatively uses the EuroQol measure in a DCE setting to elicit the patients’ preferences and their willingness to pay. Results: Using a random parameter logit model, we find that cancer patients prefer lower levels of pain and higher levels of performing usual activities. Overall, we find that cancer patients are willing to pay a total amount of about NRS 2.6 million [about USD 26,000] for improved quality of life. Our analysis also shows that preference heterogeneity exists among cancer patients, and the presence of uncertainty in the preferences of patients does not affect the results. Conclusions: This study sheds light on the preferences and willingness to pay for improved quality of life among cancer patients in Nepal. Understanding these preferences can inform healthcare policy and resource allocation decisions aimed at improving the QoL of cancer patients in the region. Full article
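
For readers unfamiliar with how willingness-to-pay figures are obtained from choice-model coefficients, the snippet below shows the standard marginal WTP ratio for a utility specification that is linear in cost. The coefficient values are hypothetical placeholders, not the study's random parameter logit estimates.

```python
# Illustrative sketch of how marginal willingness to pay (WTP) is typically
# derived from discrete-choice coefficients; the coefficient values below are
# hypothetical, not the paper's estimates.
betas = {"pain_reduced": 0.85, "usual_activities": 0.60}  # utility coefficients
beta_cost = -0.0004                                       # per NRS of out-of-pocket cost

for attr, b in betas.items():
    wtp = -b / beta_cost      # standard WTP ratio for linear-in-cost utility
    print(f"WTP for {attr}: NRS {wtp:,.0f}")
```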

19 pages, 2708 KiB  
Article
Simulation of Extreme Hydrographs in Heterogeneous Catchments with Limited Data
by Alfonso Arrieta-Pastrana, Oscar E. Coronado-Hernández and Helena M. Ramos
Water 2025, 17(11), 1713; https://doi.org/10.3390/w17111713 - 5 Jun 2025
Viewed by 475
Abstract
Rainfall-based methods have been employed for computing hydrographs in urban drainage systems. However, their implementation often introduces uncertainty in various aspects, such as the selection of a unit hydrograph, the choice of abstraction methods, and the formulas used to calculate the time of concentration, among others. Conventional consultancy studies tend to oversimplify catchment representation by treating it as a homogeneous unit or discretizing it into a few segments with simplified flood routing. This research proposes a streamlined methodology for computing hydrographs, considering the sub-basins’ heterogeneity. The methodology is based on the principles of proportionality and superposition. A sensitivity analysis of the proposed methodology is conducted, considering both homogeneous and heterogeneous catchments and the temporal distribution of rainfall. The proposed methodology is applied to the catchment of the Ricaurte channel, located in Cartagena de Indias (Colombia), with a watershed area of 728.8 ha. It has proven effective in representing a recorded simultaneous rainfall-runoff event, achieving a Root Mean Square Error of 3.93% in estimating the total volume of the measured hydrographs. A key advantage of the methodology, compared to traditional rainfall–runoff approaches, is that it does not require an extensive number of parameters to be calibrated. It may be utilized to estimate extreme flood events in urban areas with limited data availability, relying on minimal data inputs. Full article
(This article belongs to the Section Hydrology)
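
The proportionality-and-superposition principle the methodology builds on can be illustrated with a discrete convolution of excess rainfall against a unit response. The unit hydrograph ordinates and rainfall pulses below are invented for the example and are not taken from the Ricaurte channel study.

```python
# Minimal illustration of proportionality and superposition in unit-hydrograph
# methods: the catchment response to a rainfall series is the convolution of
# the excess-rainfall increments with a unit response.
import numpy as np

unit_hydrograph = np.array([0.1, 0.35, 0.3, 0.15, 0.1])  # m^3/s per mm of excess rain
excess_rain_mm = np.array([2.0, 5.0, 1.0])               # excess rainfall per time step

# Proportionality: each rain pulse scales the unit response.
# Superposition: the shifted, scaled responses are added.
hydrograph = np.convolve(excess_rain_mm, unit_hydrograph)
print(hydrograph)   # composite outflow hydrograph ordinates
```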

48 pages, 26155 KiB  
Article
A Process Algebraic Approach to Predict and Control Uncertainty in Smart IoT Systems for Smart Cities Based on Permissible Probabilistic Equivalence
by Junsup Song, Dimitris Karagiannis and Moonkun Lee
Sensors 2024, 24(12), 3881; https://doi.org/10.3390/s24123881 - 15 Jun 2024
Viewed by 1411
Abstract
Process algebra is one of the most suitable formal methods to model smart IoT systems for smart cities. Each IoT in the systems can be modeled as a process in algebra. In addition, the nondeterministic behavior of the systems can be predicted by defining probabilities on the choice operations in some algebra, such as PALOMA and PACSR. However, there are no practical mechanisms in algebra either to measure or control uncertainty caused by the nondeterministic behavior in terms of satisfiability of the system requirements. In our previous research, to overcome the limitation, a new process algebra called dTP-Calculus was presented to verify probabilistically the safety and security requirements of smart IoT systems: the nondeterministic behavior of the systems was defined and controlled by the static and dynamic probabilities. However, the approach required a strong assumption to handle the unsatisfied probabilistic requirements: enforcing an optimally arbitrary level of high-performance probability from the continuous range of the probability domain. In the paper, the assumption from the previous research is eliminated by defining the levels of probability from the discrete domain based on the notion of Permissible Process and System Equivalences so that satisfiability is incrementally enforced by both Permissible Process Enhancement in the process level and Permissible System Enhancement in the system level. In this way, the unsatisfied probabilistic requirements can be incrementally enforced with better-performing probabilities in the discrete steps until the final decision for satisfiability can be made. The SAVE tool suite has been developed on the ADOxx meta-modeling platform to demonstrate the effectiveness of the approach with a smart EMS (emergency medical service) system example, which is one of the most practical examples for smart cities. SAVE showed that the approach is very applicable to specify, analyze, verify, and especially, predict and control uncertainty or risks caused by the nondeterministic behavior of smart IoT systems. The approach based on dTP-Calculus and SAVE may be considered one of the most suitable formal methods and tools to model smart IoT systems for smart cities. Full article
(This article belongs to the Special Issue Advanced IoT Systems in Smart Cities: 2nd Edition)

18 pages, 7025 KiB  
Article
Probabilistic Solar Forecasts as a Binary Event Using a Sky Camera
by Mathieu David, Joaquín Alonso-Montesinos, Josselin Le Gal La Salle and Philippe Lauret
Energies 2023, 16(20), 7125; https://doi.org/10.3390/en16207125 - 17 Oct 2023
Cited by 2 | Viewed by 1439
Abstract
With the fast increase of solar energy plants, a high-quality short-term forecast is required to smoothly integrate their production in the electricity grids. Usually, forecasting systems predict the future solar energy as a continuous variable. But for particular applications, such as concentrated solar plants with tracking devices, the operator needs to anticipate the achievement of a solar irradiance threshold to start or stop their system. In this case, binary forecasts are more relevant. Moreover, while most forecasting systems are deterministic, the probabilistic approach provides additional information about their inherent uncertainty that is essential for decision-making. The objective of this work is to propose a methodology to generate probabilistic solar forecasts as a binary event for very short-term horizons between 1 and 30 min. Among the various techniques developed to predict the solar potential for the next few minutes, sky imagery is one of the most promising. Therefore, we propose in this work to combine a state-of-the-art model based on a sky camera and a discrete choice model to predict the probability of an irradiance threshold suitable for plant operators. Two well-known parametric discrete choice models, logit and probit models, and a machine learning technique, random forest, were tested to post-process the deterministic forecast derived from sky images. All three models significantly improve the quality of the original deterministic forecast. However, random forest gives the best results and especially provides reliable probability predictions. Full article
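
The post-processing step, turning a deterministic irradiance forecast into the probability of a binary threshold exceedance, can be sketched with an off-the-shelf logit model. The synthetic forecasts and the 500 W/m² threshold below are assumptions for illustration; the study's inputs come from sky-camera imagery and it also evaluates probit and random forest variants.

```python
# Sketch of post-processing a deterministic irradiance forecast into the
# probability of exceeding an operating threshold with a logit model.
# Synthetic data and the threshold are assumed for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
forecast = rng.uniform(100, 1000, size=500)              # deterministic forecast, W/m^2
observed = forecast + rng.normal(0, 120, size=500)       # "measured" irradiance with error
exceeds = (observed > 500).astype(int)                   # binary event of interest

model = LogisticRegression().fit(forecast.reshape(-1, 1), exceeds)
prob = model.predict_proba(np.array([[480.0], [650.0]]))[:, 1]
print(prob)   # probability that the 500 W/m^2 threshold is exceeded
```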

26 pages, 3539 KiB  
Article
Inpainting in Discrete Sobolev Spaces: Structural Information for Uncertainty Reduction
by Marco Seracini and Stephen R. Brown
Appl. Sci. 2023, 13(16), 9405; https://doi.org/10.3390/app13169405 - 18 Aug 2023
Viewed by 1424
Abstract
In this article, we introduce a new mathematical functional whose minimization determines the quality of the solution for the exemplar-based inpainting-by-patch problem. The new functional expression includes finite difference terms in a similar fashion to what happens in the theoretical Sobolev spaces: its use reduces the uncertainty in the choice of the most suitable values for each point to inpaint. Moreover, we introduce a probabilistic model by which we prove that the usual principal directions, generally employed for continuous problems, are not enough to achieve consistent reconstructions in the discrete inpainting asset. Finally, we formalize a new priority index and new rules for its dynamic update. The quality of the reconstructions, achieved using a reduced neighborhood size of more than 95% with respect to the current state-of-the-art algorithms based on the same inpainting approach, further provides the experimental validation of the method. Full article
(This article belongs to the Special Issue Signal and Image Processing: From Theory to Applications)
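
As a rough illustration of the idea of adding finite-difference terms to a patch-matching criterion, the sketch below augments the usual sum of squared intensity differences with squared differences of discrete gradients. The weighting and patch sizes are hypothetical; this is not the functional proposed in the article.

```python
# Sketch of a Sobolev-like patch distance for exemplar-based inpainting: the
# L2 intensity term is augmented with finite-difference (gradient) terms.
# The gradient weight is a hypothetical choice for illustration.
import numpy as np

def sobolev_patch_distance(p, q, w_grad=0.5):
    """p, q: 2-D arrays of equal shape (candidate and target patches)."""
    d0 = np.sum((p - q) ** 2)                                     # intensity (L2) term
    dx = np.sum((np.diff(p, axis=1) - np.diff(q, axis=1)) ** 2)   # horizontal differences
    dy = np.sum((np.diff(p, axis=0) - np.diff(q, axis=0)) ** 2)   # vertical differences
    return d0 + w_grad * (dx + dy)                                # H1-like total

a = np.arange(9.0).reshape(3, 3)
b = a + np.random.default_rng(0).normal(0, 0.1, (3, 3))
print(sobolev_patch_distance(a, b))
```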

49 pages, 4386 KiB  
Article
Free Choice in Quantum Theory: A p-adic View
by Vladimir Anashin
Entropy 2023, 25(5), 830; https://doi.org/10.3390/e25050830 - 22 May 2023
Cited by 6 | Viewed by 2271
Abstract
In this paper, it is rigorously proven that since observational data (i.e., numerical values of physical quantities) are rational numbers only due to inevitably nonzero measurement errors, the conclusion about whether Nature at the smallest scales is discrete or continuous, random and chaotic, or strictly deterministic, solely depends on the experimentalist’s free choice of the metric (real or p-adic) used to process the observational data. The main mathematical tools are p-adic 1-Lipschitz maps (which therefore are continuous with respect to the p-adic metric). The maps are exactly the ones defined by sequential Mealy machines (rather than by cellular automata) and therefore are causal functions over discrete time. A wide class of the maps can naturally be expanded to continuous real functions, so the maps may serve as mathematical models of open physical systems both over discrete and over continuous time. For these models, wave functions are constructed, an entropic uncertainty relation is proven, and no hidden parameters are assumed. The paper is motivated by the ideas of I. Volovich on p-adic mathematical physics, by G. ‘t Hooft’s cellular automaton interpretation of quantum mechanics, and to some extent, by recent papers on superdeterminism by J. Hance, S. Hossenfelder, and T. Palmer. Full article
(This article belongs to the Special Issue New Trends in Theoretical and Mathematical Physics)
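
The central objects here are 1-Lipschitz maps with respect to the p-adic metric, i.e. maps satisfying |f(x) - f(y)|_p <= |x - y|_p. A small numerical check of this property for an integer-coefficient polynomial (a standard example of such a map, chosen for illustration and not taken from the paper) is sketched below with p = 2.

```python
# Numerical illustration (assumed example): any polynomial with integer
# coefficients is 1-Lipschitz with respect to the p-adic metric.
def p_adic_abs(n, p=2):
    """p-adic absolute value |n|_p = p^(-v), where v is the p-adic valuation."""
    if n == 0:
        return 0.0
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return p ** (-v)

f = lambda x: x * x + 3 * x + 7          # integer-coefficient polynomial
for x, y in [(8, 24), (5, 21), (12, 13)]:
    lhs = p_adic_abs(f(x) - f(y))
    rhs = p_adic_abs(x - y)
    print(x, y, lhs <= rhs)              # always True: |f(x)-f(y)|_2 <= |x-y|_2
```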

31 pages, 2897 KiB  
Article
A Systematic Approach to the Management of Military Human Resources through the ELECTRE-MOr Multicriteria Method
by Igor Pinheiro de Araújo Costa, Adilson Vilarinho Terra, Miguel Ângelo Lellis Moreira, Maria Teresa Pereira, Luiz Paulo Lopes Fávero, Marcos dos Santos and Carlos Francisco Simões Gomes
Algorithms 2022, 15(11), 422; https://doi.org/10.3390/a15110422 - 9 Nov 2022
Cited by 21 | Viewed by 2970
Abstract
Personnel selection is increasingly proving to be an essential factor for the success of organizations. These issues almost universally involve multiple conflicting objectives, uncertainties, costs, and benefits in decision-making. In this context, personnel assessment problems, which include several candidates as alternatives, along with several complex evaluation criteria, can be solved by applying Multicriteria Decision Making (MCDM) methods. Uncertainty and subjectivity characterize the choice of personnel for missions or promotions at the military level. In this paper, we evaluated 30 Brazilian Navy officers in the light of four criteria and 34 subcriteria. To support the decision-making process regarding the promotion of officers, we applied the ELECTRE-MOr MCDM method. We categorized the alternatives into three classes in the modeling proposed in this work, namely: Class A (Promotion by deserving), Class B (Promotion by seniority), and Class C (Military not promoted). As a result, the method assigned 20% of the officers evaluated to Class A, 53% of the alternatives to Class B, and 26.7% to Class C. In addition, we presented a sensitivity analysis procedure through variation of the cut-off level λ, allowing decision-making on more flexible or rigorous scenarios at the discretion of the Naval High Administration. This work brings a valuable contribution to academia and society since it represents the application of a state-of-the-art MCDM method to help solve a real problem. Full article

15 pages, 1909 KiB  
Article
Robust Estimation in Continuous–Discrete Cubature Kalman Filters for Bearings-Only Tracking
by Haoran Hu, Shuxin Chen, Hao Wu and Renke He
Appl. Sci. 2022, 12(16), 8167; https://doi.org/10.3390/app12168167 - 15 Aug 2022
Cited by 1 | Viewed by 1818
Abstract
The model of bearings-only tracking is generally described by discrete–discrete filtering systems. Discrete robust methods are also frequently used to address measurement uncertainty problems in bearings-only tracking. The recently popular continuous–discrete filtering system considers the state model of the target to be continuous in time, and is more suitable for bearings-only tracking because of its higher mathematical solution accuracy. However, a sufficient evaluation of robust methods in continuous–discrete systems is not available. In addition, the choice of a robust algorithm in different continuous–discrete measurement environments also needs to be discussed. To fill this gap, this paper first establishes the continuous–discrete target tracking model and then evaluates the performance of the proposed robust square-root continuous–discrete cubature Kalman filter algorithms in measurement uncertainty problems. From the simulation results, the robust square-root continuous–discrete maximum correntropy cubature Kalman filter algorithm and the variational Bayesian square-root continuous–discrete cubature Kalman filter algorithm have better environmental adaptability, which provides a promising means for solving continuous–discrete robust problems. Full article
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
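
To make the cubature Kalman filter machinery concrete, the sketch below implements a plain discrete measurement update with a bearings-only observation h(x) = atan2(dy, dx) and third-degree cubature points. It is a simplified illustration under assumed state dimensions and noise values; the continuous-time state propagation and the robust (maximum correntropy / variational Bayesian) modifications studied in the paper are omitted.

```python
# Simplified, assumed sketch of a cubature Kalman filter measurement update
# for a bearings-only observation. Not the paper's algorithm: state is a 2-D
# position only and angle wrapping of the innovation is ignored.
import numpy as np

def cubature_points(mean, cov):
    n = mean.size
    S = np.linalg.cholesky(cov)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # 2n unit directions
    return mean[:, None] + S @ xi                           # shape (n, 2n)

def bearing(x, observer=np.zeros(2)):
    return np.arctan2(x[1] - observer[1], x[0] - observer[0])

def ckf_update(mean, cov, z, R):
    pts = cubature_points(mean, cov)                 # (n, 2n)
    zs = np.array([bearing(p) for p in pts.T])       # predicted bearings
    z_hat = zs.mean()
    Pzz = np.mean((zs - z_hat) ** 2) + R             # innovation variance
    Pxz = (pts - mean[:, None]) @ (zs - z_hat) / pts.shape[1]
    K = Pxz / Pzz                                    # Kalman gain (scalar measurement)
    return mean + K * (z - z_hat), cov - np.outer(K, K) * Pzz

m = np.array([1000.0, 500.0])                        # assumed target position estimate
P = np.diag([200.0**2, 200.0**2])
m_new, P_new = ckf_update(m, P, z=0.50, R=np.radians(1.0)**2)
print(m_new)
```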

32 pages, 1893 KiB  
Review
A Review of Proxy Modeling Highlighting Applications for Reservoir Engineering
by Peyman Bahrami, Farzan Sahari Moghaddam and Lesley A. James
Energies 2022, 15(14), 5247; https://doi.org/10.3390/en15145247 - 20 Jul 2022
Cited by 39 | Viewed by 6318
Abstract
Numerical models can be used for many purposes in oil and gas engineering, such as production optimization and forecasting, uncertainty analysis, history matching, and risk assessment. However, subsurface problems are complex and non-linear, and making reliable decisions in reservoir management requires substantial computational effort. Proxy models have gained much attention in recent years. They are advanced non-linear interpolation tables that can approximate complex models and alleviate computational effort. Proxy models are constructed by running high-fidelity models to gather the necessary data to create the proxy model. Once constructed, they can be a great choice for different tasks such as uncertainty analysis, optimization, and forecasting. The application of proxy modeling in oil and gas has had an increasing trend in recent years, and there is no consensus rule on the correct choice of proxy model. As a result, it is crucial to better understand the advantages and disadvantages of various proxy models. The existing work in the literature does not comprehensively cover all proxy model types, and there is a considerable need to fill the existing gaps in summarizing the classification techniques and their applications. We propose a novel categorization method covering all proxy model types. This review paper provides a more comprehensive guideline on comparing and developing a proxy model than the existing literature. Furthermore, we point out the advantages of smart proxy models (SPM) compared to traditional proxy models (TPM) and suggest how we may further improve SPM accuracy where the literature is limited. This review paper first introduces proxy models and shows how they are classified in the literature. Then, it explains that the current classifications cannot cover all types of proxy models and proposes a novel categorization based on various development strategies. This new categorization includes four groups: multi-fidelity models (MFM), reduced-order models (ROM), TPM, and SPM. MFMs are constructed based on simplifying physics assumptions (e.g., coarser discretization), and ROMs are based on dimensional reduction (i.e., neglecting irrelevant parameters). Developing these two models requires an in-depth knowledge of the problem. In contrast, TPMs and novel SPMs require less effort. In other words, they do not solve the complex underlying mathematical equations of the problem; instead, they decouple the mathematical equations into a numeric dataset and train statistical/AI-driven models on the dataset. Nevertheless, SPMs implement feature engineering techniques (i.e., generating new parameters) for their development and can capture the complexities within the reservoir, such as the constraints and characteristics of the grids. The newly introduced parameters can help find the hidden patterns within the parameters, which eventually increases the accuracy of SPMs compared to the TPMs. This review highlights the superiority of SPMs over traditional statistical/AI-based proxy models. Finally, the application of various proxy models in the oil and gas industry, especially in subsurface modeling with a set of real examples, is presented. The guideline introduced in this review aids researchers in obtaining valuable information on the current state of proxy modeling problems in the oil and gas industry. Full article
(This article belongs to the Special Issue Recent Advances in Reservoir Simulation)

18 pages, 8185 KiB  
Article
Structural Safety of the Steel Hall under Dynamic Excitation Using the Relative Probabilistic Entropy Concept
by Rafał Bredow and Marcin Kamiński
Materials 2022, 15(10), 3587; https://doi.org/10.3390/ma15103587 - 18 May 2022
Cited by 9 | Viewed by 1994
Abstract
This work aimed to analyze the reliability of a steel hall recently erected in central Poland and subjected to dynamic wind excitation, using the stochastic finite element method. Reliability analysis was completed using the relative entropy concept delivered by Bhattacharyya and contrasted with the first-order reliability method recommended by the engineering design codes. The Bhattacharyya probabilistic relative entropy was additionally rescaled in this study to fit the demands and recommended admissibility intervals given in Eurocode 0. The finite element method study was carried out with a discrete model created in the ABAQUS 2019 system, while all further statistical and probabilistic computations were programmed and completed in the symbolic environment of MAPLE 2019. Contrary to most engineering analyses in the steel structures area, this study included the important warping effect while designing the hall ridges and the purlins. Dynamic structural responses were determined via the Hilber-Hughes-Taylor algorithm, and their series were obtained numerically for a set of input uncertainty parameters representing several mechanical and environmental quantities. The generalized 10th order iterative stochastic perturbation technique was contrasted in this context with statistical estimators from the Monte Carlo simulations and numerical integration resulting from the semi-analytical approach. The key research finding of this study was an extremely good coincidence between the FORM indices and the rescaled relative probabilistic entropies for the given stochastic excitations, which additionally did not depend on the choice of one of the three proposed numerical approaches. Full article
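
As context for the reliability measure used here, the commonly quoted closed form of the Bhattacharyya relative entropy (distance) for two Gaussian variables is straightforward to evaluate. The sketch below uses placeholder means and standard deviations rather than the hall's actual structural response statistics, and the rescaling to Eurocode 0 admissibility intervals described in the paper is not reproduced.

```python
# Closed-form Bhattacharyya relative entropy (distance) of two Gaussian
# variables. The numbers below are hypothetical placeholders, e.g. a
# resistance-like variable vs. a load-effect-like variable.
from math import log

def bhattacharyya_gauss(mu1, sig1, mu2, sig2):
    return ((mu1 - mu2) ** 2) / (4.0 * (sig1**2 + sig2**2)) \
           + 0.5 * log((sig1**2 + sig2**2) / (2.0 * sig1 * sig2))

print(bhattacharyya_gauss(mu1=310.0, sig1=15.0, mu2=240.0, sig2=25.0))
```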

15 pages, 5749 KiB  
Article
“Realistic Choice of Annual Matrices Contracts the Range of λS Estimates” under Reproductive Uncertainty Too
by Dmitrii O. Logofet, Leonid L. Golubyatnikov, Elena S. Kazantseva and Nina G. Ulanova
Mathematics 2021, 9(23), 3007; https://doi.org/10.3390/math9233007 - 24 Nov 2021
Cited by 4 | Viewed by 1729
Abstract
Our study is devoted to a subject popular in the field of matrix population models, namely, estimating the stochastic growth rate, λS, a quantitative measure of long-term population viability, for a discrete-stage-structured population monitored during many years. “Reproductive uncertainty” refers to a feature inherent in the data and life cycle graph (LCG) when the LCG has more than one reproductive stage, but the progeny cannot be associated with a parent stage in a unique way. Reproductive uncertainty complicates the estimation of λS, which is defined from the limit of a sequence of population projection matrices (PPMs) chosen randomly from a given set of annual PPMs. To construct a Markov chain that governs the choice of PPMs for a local population of Eritrichium caucasicum, a short-lived perennial alpine plant species, we found a local weather index that is correlated with the variations in the annual PPMs, and we considered its long time series as a realization of the Markov chain to be constructed. Reproductive uncertainty required a proper modification of how the transition matrix is restored from a long realization of the chain, and the restored matrix has governed the random choice in several series of Monte Carlo simulations of sufficiently long sequences. The resulting ranges of λS estimates turn out to be narrower than those obtained by the popular i.i.d. methods of random choice (independent and identically distributed matrices); hence, we obtain a more accurate and reliable forecast of population viability. Full article
(This article belongs to the Special Issue Advances in the Mathematics of Ecological Modelling)
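
For orientation, the usual Monte Carlo estimator of the stochastic growth rate averages log-growth increments along a long sequence of randomly drawn annual matrices. A toy version driven by a two-state Markov chain is sketched below; the matrices and transition probabilities are invented for the example and are unrelated to the Eritrichium caucasicum data or to the reproductive-uncertainty correction developed in the paper.

```python
# Toy Monte Carlo estimator of the stochastic growth rate lambda_S: annual
# projection matrices are drawn along a Markov chain and log-growth
# increments are averaged. All numbers are illustrative placeholders.
import numpy as np

A = [np.array([[0.2, 1.5], [0.4, 0.6]]),    # "good year" PPM (toy)
     np.array([[0.1, 0.6], [0.2, 0.5]])]    # "bad year" PPM (toy)
P = np.array([[0.7, 0.3],                   # Markov chain over year types
              [0.4, 0.6]])

rng = np.random.default_rng(2)
state, x = 0, np.array([1.0, 1.0])
log_growth = []
for _ in range(20000):
    x = A[state] @ x
    s = x.sum()
    log_growth.append(np.log(s))
    x /= s                                   # renormalize to avoid under/overflow
    state = rng.choice(2, p=P[state])

print("lambda_S estimate:", np.exp(np.mean(log_growth)))
```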

25 pages, 9391 KiB  
Review
Review and Suggestion of Failure Theories in Voids Scenario for VARTM Processed Composite Materials
by Vivek Kumar Dhimole, Pruthvi Serrao and Chongdu Cho
Polymers 2021, 13(6), 969; https://doi.org/10.3390/polym13060969 - 22 Mar 2021
Cited by 15 | Viewed by 5747
Abstract
Fiber-reinforced composite structures are used in different applications due to their excellent strength-to-weight ratio. Due to cost and tool handling issues in conventional manufacturing processes, like resin transfer molding (RTM) and autoclave, vacuum-assisted resin transfer molding (VARTM) is the preferred choice in industry. VARTM is highly productive and cheap. The VARTM process produces complex, lightweight, and bulky structures suitable for mass and cost-effective production, but the presence of voids and fiber misalignment in the final processed composite influences its strength. Voids are the primary defects, and they cannot be eliminated completely, so a design that does not consider void defects will be unreliable. Many conventional failure theories have been used for composite design but did not consider the effect of void defects, thus creating misleading failure characteristics. Due to voids, stress and strain uncertainty affects failure mechanisms such as microcracking, delamination, and fracture. That is why a proper selection and understanding of failure theories is necessary. This review discusses previous conventional failure theories, followed by work considering the effect of voids. Based on the review, a few prominent theories are suggested to estimate composite strength in the void scenario because they consider the effect of the voids through crack density, crack, or void modeling. These suggested theories are based on damage mechanics (discrete damage mechanics), fracture mechanics (the virtual crack closure technique), and micromechanics (representative volume element). The suggested theories are well established in finite element modeling (FEM), representing an effective time- and money-saving tool in design strategy, with better early estimation to enhance the effectiveness of current design practices for composites. This paper gives insight into choosing failure theories for composites in the presence of voids, which occur in higher percentages in mass-production and less costly processes such as VARTM. Full article
(This article belongs to the Special Issue New Advances in Composites Design and Manufacturing)

14 pages, 872 KiB  
Article
Contracts to Govern the Transition towards Sustainable Production: Evidence from a Discrete Choice Analysis in the Durum Wheat Sector in Italy
by Stefano Ciliberti, Simone Del Sarto, Angelo Frascarelli, Giulia Pastorelli and Gaetano Martino
Sustainability 2020, 12(22), 9441; https://doi.org/10.3390/su12229441 - 13 Nov 2020
Cited by 6 | Viewed by 3022
Abstract
The increasing request for food sustainability is affecting the pasta sector in Italy. This phenomenon introduces different sources of uncertainty that, in turn, put pressure on all stages of the supply chain, with a consequent emerging need for a higher level of coordination. Based on the Transaction Costs Theory approach, this paper aims to verify whether contract design (revolving around the negotiation of contractual attributes with different functions in terms of safeguard, adaptability, and coordination) plays a crucial role in aligning the sources of uncertainty surrounding transactions with the allocation of property and decision rights. To this aim, a sample of durum wheat producers was interviewed to express their preferences for contractual features such as price, production and quality rules, sustainable environmental techniques, and advisory services. A discrete choice analysis based on a multinomial logit model reveals that, thanks to the presence of attributes able to ensure coordination and adaptability, contracts are able to steer towards elements of sustainability related to food quality and safety, whereas further efforts are needed to share environmental goals with farmers. Full article
(This article belongs to the Special Issue Innovative Alignments of Economic Incentives with the Environment)
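
The workhorse behind this kind of analysis is the conditional (multinomial) logit choice probability, P(j) = exp(V_j) / sum_k exp(V_k). The toy snippet below evaluates it for hypothetical contract profiles and coefficients; neither the attributes nor the numbers correspond to the study's estimates.

```python
# Toy conditional (multinomial) logit choice probabilities for hypothetical
# contract alternatives. Attribute coding and coefficients are invented.
import numpy as np

beta = {"price_premium": 0.8, "quality_rules": 0.5, "advisory": 0.3}
contracts = {                                 # hypothetical contract profiles
    "A": {"price_premium": 1, "quality_rules": 1, "advisory": 0},
    "B": {"price_premium": 0, "quality_rules": 1, "advisory": 1},
    "opt_out": {"price_premium": 0, "quality_rules": 0, "advisory": 0},
}
V = {c: sum(beta[k] * v for k, v in attrs.items()) for c, attrs in contracts.items()}
expV = {c: np.exp(v) for c, v in V.items()}
total = sum(expV.values())
print({c: round(e / total, 3) for c, e in expV.items()})   # choice probabilities
```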

25 pages, 13934 KiB  
Article
LED Wristbands for Cell-Based Crowd Evacuation: An Adaptive Exit-Choice Guidance System Architecture
by Miguel A. Lopez-Carmona and Alvaro Paricio-Garcia
Sensors 2020, 20(21), 6038; https://doi.org/10.3390/s20216038 - 23 Oct 2020
Cited by 9 | Viewed by 2970
Abstract
Cell-based crowd evacuation systems provide adaptive or static exit-choice indications that favor a coordinated group dynamic, improving evacuation time and safety. While a great effort has been made to model their control logic by assuming an ideal communication and positioning infrastructure, the architectural dimension and the influence of pedestrian positioning uncertainty have been largely overlooked. In our previous research, a cell-based crowd evacuation system (CellEVAC) was proposed that dynamically allocates exit gates to pedestrians in a cell-based pedestrian positioning infrastructure. This system provides optimal exit-choice guidance through color-based indications and a control logic module built upon an optimized discrete-choice model. Here, we investigate how location-aware technologies and wearable devices can be used for a realistic deployment of CellEVAC. We consider a simulated real evacuation scenario (Madrid Arena) and propose a system architecture for CellEVAC that includes a controller node, a radio-controlled light-emitting diode (LED) wristband subsystem, and a cell-node network equipped with active Radio Frequency Identification (RFID) devices. These subsystems coordinate to provide control, display, and positioning capabilities. We quantitatively study the sensitivity of evacuation time and safety to uncertainty in the positioning system. Results showed that CellEVAC was operational within a limited range of positioning uncertainty. Further analyses revealed that reprogramming the control logic module through a simulation optimization process, simulating the positioning system's expected uncertainty level, improved CellEVAC's performance in scenarios with poor positioning systems. Full article
(This article belongs to the Section Wearables)