Search Results (8)

Search Parameters:
Keywords = DREAM-Suite

57 pages, 12419 KB  
Article
The Learning Rate Is Not a Constant: Sandwich-Adjusted Markov Chain Monte Carlo Simulation
by Jasper A. Vrugt and Cees G. H. Diks
Entropy 2025, 27(10), 999; https://doi.org/10.3390/e27100999 - 25 Sep 2025
Viewed by 447
Abstract
A fundamental limitation of maximum likelihood and Bayesian methods under model misspecification is that the asymptotic covariance matrix of the pseudo-true parameter vector θ* is not the inverse of the Fisher information, but rather the sandwich covariance matrix (1/n)A*⁻¹B*A*⁻¹, where A* and B* are the sensitivity and variability matrices, respectively, evaluated at θ* for the training data record ω₁, …, ωₙ. This paper makes three contributions. First, we review existing approaches to robust posterior sampling, including the open-faced sandwich adjustment and magnitude- and curvature-adjusted Markov chain Monte Carlo (MCMC) simulation. Second, we introduce a new sandwich-adjusted MCMC method. Unlike existing approaches that rely on arbitrary matrix square roots, eigendecompositions or a single scaling factor applied uniformly across the parameter space, our method employs a parameter-dependent learning rate λ(θ) that enables direction-specific tempering of the likelihood. This allows the sampler to capture directional asymmetries in the sandwich distribution, particularly under model misspecification or in small-sample regimes, and yields credible regions that remain valid when standard Bayesian inference underestimates uncertainty. Third, we propose information-theoretic diagnostics for quantifying model misspecification, including a strictly proper divergence score and scalar summaries based on the Frobenius norm, Earth mover’s distance, and the Herfindahl index. These principled diagnostics complement residual-based metrics for model evaluation by directly assessing the degree of misalignment between the sensitivity and variability matrices, A* and B*. Applications to two parametric distributions and a rainfall-runoff case study with the Xinanjiang watershed model show that conventional Bayesian methods systematically underestimate uncertainty, while the proposed method yields asymptotically valid and robust uncertainty estimates. Together, these findings advocate for sandwich-based adjustments in Bayesian practice and workflows. Full article
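
As a companion illustration of the quantities named in this abstract, the sketch below estimates the sandwich covariance (1/n)A*⁻¹B*A*⁻¹ from per-observation scores under a deliberately misspecified model. It is a minimal numpy sketch, not the authors' DREAM-Suite implementation; the normal-versus-Student-t setup, the finite-difference derivatives, and all names are assumptions chosen only for illustration.

```python
import numpy as np

def per_obs_loglik(theta, w):
    """Placeholder per-observation log-likelihood: a normal model with mean theta[0]
    and log-standard-deviation theta[1] (an assumption, not the paper's model)."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return -0.5 * np.log(2 * np.pi) - log_sigma - 0.5 * ((w - mu) / sigma) ** 2

def score(theta, w, eps=1e-5):
    """Per-observation score vector via central finite differences."""
    g = np.zeros_like(theta)
    for j in range(theta.size):
        e = np.zeros_like(theta); e[j] = eps
        g[j] = (per_obs_loglik(theta + e, w) - per_obs_loglik(theta - e, w)) / (2 * eps)
    return g

def sandwich_covariance(theta, data, eps=1e-4):
    """Estimate (1/n) A^{-1} B A^{-1} at theta:
    A = minus the mean Hessian of the log-likelihood (sensitivity matrix),
    B = the mean outer product of the scores       (variability matrix)."""
    n, p = data.size, theta.size
    scores = np.array([score(theta, w) for w in data])            # n x p
    B = scores.T @ scores / n
    A = np.zeros((p, p))
    for j in range(p):                                            # finite-difference Hessian
        e = np.zeros(p); e[j] = eps
        A[:, j] = -(np.mean([score(theta + e, w) for w in data], axis=0)
                    - np.mean([score(theta - e, w) for w in data], axis=0)) / (2 * eps)
    A_inv = np.linalg.inv(0.5 * (A + A.T))                        # symmetrize before inverting
    return A_inv @ B @ A_inv / n

rng = np.random.default_rng(1)
data = rng.standard_t(df=3, size=500)       # misspecified: fit a normal to t-distributed data
theta_hat = np.array([data.mean(), np.log(data.std())])
print(sandwich_covariance(theta_hat, data))
```

Under correct specification A and B coincide and the sandwich reduces to the usual inverse Fisher information; the mismatch between the two matrices is exactly what the diagnostics described in the abstract quantify.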

24 pages, 10666 KB  
Article
Three-Dimensional Path Planning for UAV Based on Multi-Strategy Dream Optimization Algorithm
by Xingyu Yang, Shiwei Zhao, Wei Gao, Peifeng Li, Zhe Feng, Lijing Li, Tongyao Jia and Xuejun Wang
Biomimetics 2025, 10(8), 551; https://doi.org/10.3390/biomimetics10080551 - 21 Aug 2025
Viewed by 645
Abstract
The multi-strategy optimized dream optimization algorithm (MSDOA) is proposed to address the challenges of inadequate search capability, slow convergence, and susceptibility to local optima in intelligent optimization algorithms applied to UAV three-dimensional path planning, aiming to enhance the global search efficiency and accuracy of UAV path planning algorithms in 3D environments. First, the algorithm utilizes Bernoulli chaotic mapping for population initialization to widen individual search ranges and enhance population diversity. Subsequently, an adaptive perturbation mechanism is incorporated during the exploration phase along with a lens imaging reverse learning strategy to update the population, thereby improving the exploration ability and accelerating convergence while mitigating premature convergence. Lastly, an Adaptive Individual-level Mixed Strategy (AIMS) is developed to conduct a more flexible search process and enhance the algorithm’s global search capability. The performance of the algorithm is evaluated through simulation experiments using the CEC2017 benchmark test functions. The results indicate that the proposed algorithm achieves superior optimization accuracy, faster convergence speed, and enhanced robustness compared to other swarm intelligence algorithms. Specifically, MSDOA ranks first on 28 out of 29 benchmark functions in the CEC2017 test suite, demonstrating its outstanding global search capability and convergence performance. Furthermore, UAV path planning simulation experiments conducted across multiple scenario models show that MSDOA exhibits stronger adaptability to complex three-dimensional environments. In the most challenging scenario, compared to the standard DOA, MSDOA reduces the best cost function fitness by 9% and decreases the average cost function fitness by 12%, thereby generating more efficient, smoother, and higher-quality flight paths. Full article
(This article belongs to the Section Biological Optimisation and Management)
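
One ingredient named in the abstract, Bernoulli chaotic-map population initialization, can be sketched compactly. The snippet below is a generic reading of that idea, not the authors' MSDOA code; the map parameter beta, the bounds, and the seeding are assumptions.

```python
import numpy as np

def bernoulli_chaotic_init(pop_size, dim, lower, upper, beta=0.4, seed=0):
    """Initialize a population with the Bernoulli shift map so the initial
    individuals spread more evenly over the search space than a plain
    uniform draw (beta and the seeding are illustrative assumptions)."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    z = rng.uniform(0.05, 0.95, size=dim)        # one chaotic state per dimension
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        # Bernoulli shift map: piecewise-linear and chaotic on (0, 1)
        z = np.where(z <= 1.0 - beta, z / (1.0 - beta), (z - 1.0 + beta) / beta)
        pop[i] = lower + z * (upper - lower)     # map chaotic values into the bounds
    return pop

# Example: 30 candidate 3-D waypoints in a 1 km x 1 km x 200 m volume
population = bernoulli_chaotic_init(30, 3, lower=[0, 0, 0], upper=[1000, 1000, 200])
print(population.shape)  # (30, 3)
```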

17 pages, 1584 KB  
Article
Race/Ethnicity and Homeownership in an Emerging Immigrant Gateway of the US Southeast: A Neighborhood Scale Analysis
by Madhuri Sharma
Soc. Sci. 2024, 13(11), 624; https://doi.org/10.3390/socsci13110624 - 18 Nov 2024
Viewed by 1486
Abstract
Owning a home has become a distant, often unattainable dream for many Americans since the 2007–2009 recession. The shortage of homes has decreased affordability, forcing 43 million U.S. households to become renters rather than owners. Racially targeted policies and widespread discrimination, coupled with neoliberal urban renewal policies, have placed communities of color, especially immigrants and the foreign-born, at the greatest disadvantage in homeownership. This paper examines tract-scale disparities in homeownership across major racial/ethnic groups. Using the U.S. Office of Management and Budget’s (OMB) 2019 definition of the 13-county metropolitan statistical area (MSA) of Nashville, Tennessee, as the study area, I use five-year (2015–2019) American Community Survey (ACS) estimates to examine the spatial disparity in homeownership and its predictors. The Nashville MSA is one of the fastest-growing southern gateways, and it is also the largest, most diverse, and most intermixed metropolis in Tennessee. Its share of foreign-born residents is higher than the state’s overall, and during 2019–2040 its share of immigrants is projected to grow by 40.7%, making it the best-suited laboratory for race/immigrant-focused research on housing. The analysis finds significant differences in race-based mean per-capita income, with Whites ($32,522) and Asians ($32,556) at the top and Blacks ($25,062) and Hispanics ($20,091) at the bottom. The ratio of race-based per-capita income to median housing value is highest for Whites (15.19) and Asians (15.07) and lowest for Blacks (11.49) and Hispanics (9.27), making these two groups the most disadvantaged in terms of affordability. Regression models suggest lower White homeownership in tracts with higher diversity among foreign-born non-citizens (FBNCs), whereas Black and Hispanic homeownership is higher in tracts with higher FBNC diversity. Interestingly, Asian homeownership is high in high-income Black tracts, pointing toward the increasing significance of class. Full article

14 pages, 2453 KB  
Article
Advancing Persistent Character Generation: Comparative Analysis of Fine-Tuning Techniques for Diffusion Models
by Luca Martini, Saverio Iacono, Daniele Zolezzi and Gianni Viardo Vercelli
AI 2024, 5(4), 1779-1792; https://doi.org/10.3390/ai5040088 - 29 Sep 2024
Cited by 1 | Viewed by 4860
Abstract
In the evolving field of artificial intelligence, fine-tuning diffusion models is crucial for generating contextually coherent digital characters across various media. This paper examines four advanced fine-tuning techniques: Low-Rank Adaptation (LoRA), DreamBooth, Hypernetworks, and Textual Inversion. Each technique enhances the specificity and consistency of character generation, expanding the applications of diffusion models in digital content creation. LoRA efficiently adapts models to new tasks with minimal adjustments, making it ideal for environments with limited computational resources. It excels in low VRAM contexts due to its targeted fine-tuning of low-rank matrices within cross-attention layers, enabling faster training and efficient parameter tweaking. DreamBooth generates highly detailed, subject-specific images but is computationally intensive and suited for robust hardware environments. Hypernetworks introduce auxiliary networks that dynamically adjust the model’s behavior, allowing for flexibility during inference and on-the-fly model switching. This adaptability, however, can result in slightly lower image quality. Textual Inversion embeds new concepts directly into the model’s embedding space, allowing for rapid adaptation to novel styles or concepts, but is less effective for precise character generation. This analysis shows that LoRA is the most efficient for producing high-quality outputs with minimal computational overhead. In contrast, DreamBooth excels in high-fidelity images at the cost of longer training. Hypernetworks provide adaptability with some tradeoffs in quality, while Textual Inversion serves as a lightweight option for style integration. These techniques collectively enhance the creative capabilities of diffusion models, delivering high-quality, contextually relevant outputs. Full article
(This article belongs to the Section AI Systems: Theory and Applications)
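
The LoRA mechanism summarized above (training only a low-rank update to a frozen projection inside the attention layers) can be illustrated in a few lines. This is a minimal PyTorch sketch under assumed rank, scaling, and layer names, not the configuration or tooling evaluated in the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update W + (alpha/r) * B @ A.
    Only A and B are trained, so the tunable parameter count drops from
    in_features*out_features to r*(in_features + out_features)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():              # freeze the pretrained weights
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # start as identity behavior
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# Example: adapt a hypothetical 768-dim cross-attention query projection
q_proj = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
trainable = sum(p.numel() for p in q_proj.parameters() if p.requires_grad)
print(trainable)   # 2 * 8 * 768 = 12288 trainable parameters instead of 589824
```

Because B starts at zero, the adapted layer initially reproduces the frozen model exactly, which is why this style of fine-tuning is cheap in VRAM and safe to switch on and off at inference time.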

17 pages, 26443 KB  
Article
Generative Artificial Intelligence Image Tools among Future Designers: A Usability, User Experience, and Emotional Analysis
by Joana Casteleiro-Pitrez
Digital 2024, 4(2), 316-332; https://doi.org/10.3390/digital4020016 - 17 Apr 2024
Cited by 16 | Viewed by 6088
Abstract
Generative Artificial Intelligence (GenAI) image tools hold the promise of revolutionizing a designer’s creative process. The increasing supply of this type of tool leads us to consider whether they suit future design professionals. This study aims to unveil if three GenAI image tools—Midjourney 5.2, DreamStudio beta, and Adobe Firefly 2—meet future designers’ expectations. Do these tools have good Usability, show sufficient User Experience (UX), induce positive emotions, and provide satisfactory results? A literature review was performed, and a quantitative empirical study based on a multidimensional analysis was executed to answer the research questions. Sixty users used the GenAI image tools and then responded to a holistic evaluation framework. The results showed that while the GenAI image tools received favorable ratings for Usability, they fell short in achieving high scores, indicating room for improvement. None of the platforms received a positive evaluation in all UX scales, highlighting areas for enhancement. The benchmark comparison revealed that all platforms, except for Adobe Firefly’s Efficiency scale, require enhancements in pragmatic and hedonic qualities. Despite inducing neutral to above-average positive emotions and minimal negative emotions, the overall satisfaction was moderate, with Midjourney aligning more closely with user expectations. This study emphasizes the need for significant improvements in Usability, positive emotional resonance, and result satisfaction, even more so in UX, so that GenAI image tools can meet future designers’ expectations. Full article
(This article belongs to the Special Issue Digital in 2024)

29 pages, 3488 KB  
Article
A Two-Phase Machine Learning Framework for Context-Aware Service Selection to Empower People with Disabilities
by Abdallah Namoun, Adnan Ahmed Abi Sen, Ali Tufail, Abdullah Alshanqiti, Waqas Nawaz and Oussama BenRhouma
Sensors 2022, 22(14), 5142; https://doi.org/10.3390/s22145142 - 8 Jul 2022
Cited by 15 | Viewed by 3288
Abstract
The use of software and IoT services is increasing significantly among people with special needs, who constitute 15% of the world’s population. However, selecting appropriate services to create a composite assistive service based on the evolving needs and context of disabled user groups remains a challenging research endeavor. Our research applies a scenario-based design technique to contribute (1) an inclusive disability ontology for assistive service selection, (2) semi-synthetic generated disability service datasets, and (3) a machine learning (ML) framework to choose services adaptively to suit the dynamic requirements of people with special needs. The ML-based selection framework is applied in two complementary phases. In the first phase, all available atomic tasks are assessed to determine their appropriateness to the user goal and profiles, whereas in the subsequent phase, the list of service providers is narrowed by matching their quality-of-service factors against the context and characteristics of the disabled person. Our methodology is centered around a myriad of user characteristics, including their disability profile, preferences, environment, and available IT resources. To this end, we extended the widely used QWS V2.0 and WS-DREAM web services datasets with a fusion of selected accessibility features. To ascertain the validity of our approach, we compared its performance against common multi-criteria decision making (MCDM) models, namely AHP, SAW, PROMETHEE, and TOPSIS. The findings demonstrate superior service selection accuracy in contrast to the other methods while ensuring accessibility requirements are satisfied. Full article
(This article belongs to the Section Intelligent Sensors)
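
As context for the baseline comparison, the sketch below implements one of the standard MCDM scorers named in the abstract, Simple Additive Weighting (SAW), on a toy quality-of-service matrix. The criteria, weights, and values are invented placeholders, not the paper's extended QWS V2.0/WS-DREAM data or its accessibility features.

```python
import numpy as np

def saw_rank(qos, weights, benefit):
    """Simple Additive Weighting: normalize each criterion column, take the
    weighted sum, and rank candidate services (higher score = better).
    `benefit[j]` is True when larger values of criterion j are better."""
    qos = np.asarray(qos, float)
    norm = np.empty_like(qos)
    for j in range(qos.shape[1]):
        col = qos[:, j]
        norm[:, j] = col / col.max() if benefit[j] else col.min() / col
    scores = norm @ np.asarray(weights, float)
    return scores, np.argsort(-scores)

# Toy QoS matrix: rows = candidate services, columns = [response time (ms),
# availability (%), throughput (req/s)]; weights sum to 1.
qos = [[120, 99.1, 450],
       [ 80, 97.5, 600],
       [200, 99.9, 300]]
scores, order = saw_rank(qos, weights=[0.4, 0.3, 0.3], benefit=[False, True, True])
print(scores, order)   # ranked indices of the candidate services
```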

13 pages, 4781 KB  
Communication
Identifying Forest Degradation and Restoration Opportunities in the Lancang-Mekong Region: A Tool to Determine Criteria and Indicators
by Kalifi Ferretti-Gallon, James Douglas Langston, Guangyu Wang, Kebiao Huang, Chao Long and Hongbo Zhai
Climate 2022, 10(4), 52; https://doi.org/10.3390/cli10040052 - 30 Mar 2022
Cited by 1 | Viewed by 3780
Abstract
Forest restoration is increasingly becoming a priority at international and national levels. Identifying forest degradation, however, is challenging because its drivers are underlying and site-specific. Existing frameworks and principles for identifying forest degradation are useful at larger scales; at smaller scales, however, a framework that includes iterative input from local knowledge-holders is needed. Here, we present a new mechanism: a framework for developing criteria and indicators that enables the identification of forest degradation and of restoration opportunities in landscapes, free from the failures often inherent to project cycles. The Degradation and Restoration Assessment Mechanism (DReAM) uses an iterative process, grounded in local expertise and established regional knowledge, to inform what constitutes forest degradation and how to monitor restoration. We tested the mechanism’s utility at several sites in the Lancang-Mekong Region (Cambodia, Laos, Myanmar, Thailand, and Vietnam). Applying the mechanism yielded a suite of appropriate criteria and indicators for identifying degraded forests, which can help inform detailed guidelines for developing rehabilitation approaches. The mechanism is designed to be used by any individual or group interested in identifying degradation and/or assessing rehabilitation. Full article
(This article belongs to the Special Issue Climate Change and Deforestation and Forest Degradation)

26 pages, 1155 KB  
Article
Modeling Delayed Dynamics in Biological Regulatory Networks from Time Series Data
by Emna Ben Abdallah, Tony Ribeiro, Morgan Magnin, Olivier Roux and Katsumi Inoue
Algorithms 2017, 10(1), 8; https://doi.org/10.3390/a10010008 - 9 Jan 2017
Cited by 4 | Viewed by 6925
Abstract
Background: The modeling of Biological Regulatory Networks (BRNs) relies on background knowledge derived from the literature and/or the analysis of biological observations. However, with the development of high-throughput data, there is a growing need for methods that automatically generate admissible models. Methods: Our aim is to provide a logical approach for inferring BRNs from given time series data and known influences among genes. Results: We propose a new methodology for models expressed as a timed extension of automata networks (well suited to biological systems). The main purpose is to obtain a resulting network that is as consistent as possible with the observed datasets. Conclusion: The originality of our work is three-fold: (i) identifying the sign of each interaction; (ii) directly integrating quantitative time delays in the learning approach; and (iii) identifying the qualitative discrete levels that drive the systems’ dynamics. We demonstrate the benefits of such an automatic approach on dynamical biological models, the DREAM4 (in silico) and DREAM8 (breast cancer) datasets, popular reverse-engineering challenges, and discuss the precision and computational performance of our modeling method. Full article
(This article belongs to the Special Issue Biological Networks)
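
To build intuition for the learning task, the toy sketch below scores candidate (regulator, target, delay) triples from a time series by correlating a regulator's level with the target's change d steps later. This is only an intuition-building illustration of delayed-influence detection with signed interactions; it is not the authors' logic-based automata-network method, and the data and scoring rule are invented.

```python
import numpy as np

def score_delayed_influences(series, max_delay=3):
    """For every ordered gene pair (g, h) and delay d, correlate gene g's level
    at time t with gene h's change between t+d-1 and t+d. The sign of a strong
    correlation is read as the sign of a candidate delayed influence g -> h."""
    n_steps, n_genes = series.shape
    candidates = []
    for g in range(n_genes):
        for h in range(n_genes):
            if g == h:
                continue
            for d in range(1, max_delay + 1):
                x = series[:n_steps - d, g]             # regulator level at t
                dy = np.diff(series[:, h])[d - 1:]      # target change at t+d
                if x.std() == 0 or dy.std() == 0:
                    continue
                r = np.corrcoef(x, dy)[0, 1]
                candidates.append((g, h, d, r))
    return sorted(candidates, key=lambda c: -abs(c[3]))

# Invented 3-gene time series: gene 0 activates gene 2 with a delay of 2 steps
rng = np.random.default_rng(0)
ts = rng.random((40, 3))
ts[2:, 2] = 0.8 * ts[:-2, 0] + 0.2 * rng.random(38)
print(score_delayed_influences(ts)[:3])   # strongest (regulator, target, delay, sign) triples
```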
