Search Results (124)

Search Parameters:
Keywords = alternative reward

20 pages, 4405 KiB  
Article
Transcranial Direct Current Stimulation over the Orbitofrontal Cortex Enhances Self-Reported Confidence but Reduces Metacognitive Sensitivity in a Perceptual Decision-Making Task
by Daniele Saccenti, Andrea Stefano Moro, Gianmarco Salvetti, Sandra Sassaroli, Antonio Malgaroli, Jacopo Lamanna and Mattia Ferro
Biomedicines 2025, 13(7), 1522; https://doi.org/10.3390/biomedicines13071522 - 21 Jun 2025
Viewed by 481
Abstract
Background: Metacognition refers to the ability to reflect on and regulate cognitive processes. Despite advances in neuroimaging and lesion studies, its neural correlates, as well as their interplay with other cognitive domains, remain poorly understood. The orbitofrontal cortex (OFC) is proposed as a potential substrate for metacognitive processing due to its contribution to evaluating and integrating reward-related information, decision-making, and self-monitoring. Methods: This study examined OFC involvement in metacognition using transcranial direct current stimulation (tDCS) while participants performed a two-alternative forced choice task with confidence ratings to assess their metacognitive sensitivity. Before stimulation, the subjects completed the Metacognitions Questionnaire-30 and a monetary intertemporal choice task for the quantification of delay discounting. Results: Linear mixed-effects models showed that anodal tDCS over the left OFC reduced participants’ metacognitive sensitivity compared to sham stimulation, leaving perceptual decision-making accuracy unaffected. Moreover, real stimulation increased self-reported confidence ratings compared to the sham. Significant correlations were found between metacognitive sensitivity and negative beliefs about thinking. Conclusions: These results highlight the potential involvement of the OFC in the processing of retrospective second-order judgments about decision-making performance. Additionally, they support the notion that OFC overstimulation contributes to metacognitive dysfunctions detected in clinical conditions, such as difficulties in assessing the reliability of one’s thoughts and decision outcomes. Full article
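The analysis above fits linear mixed-effects models with stimulation condition as a fixed effect and participant as a grouping factor. Below is a minimal sketch of that kind of model using statsmodels; the column names (confidence, condition, subject) and the synthetic data are illustrative assumptions, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial-level data: one row per trial, with the participant's
# confidence rating and the stimulation condition (anodal vs. sham).
rng = np.random.default_rng(0)
n_subjects, n_trials = 20, 40
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_trials),
    "condition": np.tile(["sham", "anodal"], n_subjects * n_trials // 2),
})
df["confidence"] = (3.0 + 0.4 * (df["condition"] == "anodal")
                    + rng.normal(0, 0.5, len(df)))

# Random intercept per participant; fixed effect of stimulation condition.
model = smf.mixedlm("confidence ~ condition", df, groups=df["subject"])
result = model.fit()
print(result.summary())
```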

52 pages, 1790 KiB  
Review
Emotion, Motivation, Reasoning, and How Their Brain Systems Are Related
by Edmund T. Rolls
Brain Sci. 2025, 15(5), 507; https://doi.org/10.3390/brainsci15050507 - 16 May 2025
Viewed by 1425
Abstract
A unified theory of emotion and motivation is updated in which motivational states are states in which instrumental goal-directed actions are performed to obtain anticipated rewards or avoid punishers, and emotional states are states that are elicited when the (conditioned or unconditioned) instrumental reward or punisher is or is not received. This advances our understanding of emotion and motivation, for the same set of genes and associated brain systems can define the primary or unlearned rewards and punishers such as a sweet taste or pain, and the brain systems that learn to expect rewards or punishers and that therefore produce motivational and emotional states. It is argued that instrumental actions under the control of the goal are important for emotion, because they require an intervening emotional state in which an action is learned or performed to obtain the goal, that is, the reward, or to avoid the punisher. The primate including human orbitofrontal cortex computes the reward value, and the anterior cingulate cortex is involved in learning the action to obtain the goal. In contrast, when the instrumental response is overlearned and becomes a habit with stimulus–response associations, emotional states may be less involved. In another route to output, the human orbitofrontal cortex has effective connectivity to the inferior frontal gyrus regions involved in language and provides a route for declarative reports about subjective emotional states to be produced. Reasoning brain systems provide alternative strategies to obtain rewards or avoid punishers and can provide different goals for action compared to emotional systems. Full article
(This article belongs to the Special Issue Defining Emotion: A Collection of Current Models)

21 pages, 554 KiB  
Review
The Emotional Reinforcement Mechanism of and Phased Intervention Strategies for Social Media Addiction
by Jingsong Wang and Shen Wang
Behav. Sci. 2025, 15(5), 665; https://doi.org/10.3390/bs15050665 - 13 May 2025
Viewed by 2597
Abstract
Social media addiction has become a global public health challenge, and understanding its mechanism’s complexity requires the integration of the transitional characteristics of addiction development stages and breaking through the traditional single-reinforcement-path explanatory framework. This study is based on the dual pathway of positive and negative emotional reinforcement, integrating multidisciplinary evidence from neuroscience, psychology, and computational behavioral science to propose an independent and dynamic interaction mechanism of positive reinforcement (driven by social rewards) and negative reinforcement (driven by emotional avoidance) in social media addiction. Through a review, it was found that early addiction is mediated by the midbrain limbic dopamine system due to immediate pleasurable experiences (such as liking), while late addiction is maintained by negative emotional cycles due to the dysfunction of the prefrontal limbic circuit. The transition from early addiction to late addiction is characterized by independence and interactivity. Based on this, a phased intervention strategy is proposed, which uses reward competition strategies (such as cognitive behavioral therapy and alternative rewards) to weaken dopamine sensitization in the positive reinforcement stage, enhances self-control by blocking emotional escape (such as through mindfulness training and algorithm innovation) in the negative reinforcement stage, and uses cross-pathway joint intervention in the interaction stage. This study provides a theoretical integration framework for interdisciplinary research on social media addiction from a dynamic perspective for the first time. It is recommended that emotional reinforcement variables are included in addiction diagnosis, opening up new paths for precise intervention in different stages of social media addiction development. Full article

22 pages, 6354 KiB  
Article
A Novel Integrated Path Planning and Mode Decision Algorithm for Wheel–Leg Vehicles in Unstructured Environment
by Kui Wang, Xitao Wu, Shaoyang Shi, Mingfan Xu, Yifei Han, Zhewei Zhu and Yechen Qin
Sensors 2025, 25(9), 2888; https://doi.org/10.3390/s25092888 - 3 May 2025
Viewed by 606
Abstract
Human exploration and rescue in unstructured environments including hill terrain and depression terrain are fraught with danger and difficulty, making autonomous vehicles a promising alternative in these areas. In flat terrain, traditional wheeled vehicles demonstrate excellent maneuverability; however, their passability is limited in unstructured terrains due to the constraints of the chassis and drivetrain. Considering the high passability and exploration efficiency, wheel–leg vehicles have garnered increasing attention in recent years. In the automation process of wheel–leg vehicles, planning and mode decisions are crucial components. However, current path planning and mode decision algorithms are mostly designed for wheeled vehicles and cannot determine when to adopt which mode, thus limiting the full exploitation of the multimodal advantages of wheel–leg vehicles. To address this issue, this paper proposes an integrated path planning and mode decision algorithm (IPP-MD) for wheel–leg vehicles in unstructured environments, modeling the mode decision problem using a Markov Decision Process (MDP). The state space, action space, and reward function are innovatively designed to dynamically determine the most suitable mode of progression, fully utilizing the potential of wheel–leg vehicles in autonomous movement. The simulation results show that the proposed method demonstrates significant advantages in terms of fewer mode-switching occurrences compared to existing methods. Full article
(This article belongs to the Section Vehicular Sensing)
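The mode-decision problem above is formulated as a Markov Decision Process whose reward must weigh traversal cost against mode-switching overhead. The toy sketch below illustrates such a formulation with a finite-horizon value iteration over (position, mode) states; the terrain classes, step costs, and switching penalty are invented for illustration and are not the IPP-MD design.

```python
import numpy as np

# Toy grid of terrain classes along a 1-D path: 0 = flat, 1 = rough.
terrain = np.array([0, 0, 1, 1, 0, 1, 0, 0])
modes = ["wheel", "leg"]                              # action = mode used for the next step
step_cost = {("wheel", 0): 1.0, ("wheel", 1): 5.0,    # wheels are cheap on flat, costly on rough ground
             ("leg", 0): 2.0, ("leg", 1): 2.5}        # legs cost more on flat ground but handle rough terrain
switch_penalty = 1.5                                  # discourage frequent mode switching
gamma = 1.0

# State = (position, current mode); finite-horizon value iteration backwards from the goal.
n = len(terrain)
V = np.zeros((n + 1, len(modes)))                     # V[n] = 0: goal reached
policy = np.zeros((n, len(modes)), dtype=int)
for pos in reversed(range(n)):
    for m, mode in enumerate(modes):
        costs = []
        for a, new_mode in enumerate(modes):
            c = step_cost[(new_mode, terrain[pos])]
            c += switch_penalty if new_mode != mode else 0.0
            costs.append(c + gamma * V[pos + 1, a])
        policy[pos, m] = int(np.argmin(costs))
        V[pos, m] = min(costs)

# Roll out the optimal mode sequence starting in wheel mode.
mode_idx, plan = 0, []
for pos in range(n):
    mode_idx = policy[pos, mode_idx]
    plan.append(modes[mode_idx])
print("mode plan:", plan)
print("expected cost from start:", V[0, 0])
```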

14 pages, 1078 KiB  
Article
Honey Bees Can Use Sequence Learning to Predict Rewards from a Prior Unrewarded Visual Stimulus
by Bahram Kheradmand, Ian Richardson-Ramos, Sarah Chan, Claudia Nelson and James C. Nieh
Insects 2025, 16(4), 358; https://doi.org/10.3390/insects16040358 - 31 Mar 2025
Viewed by 785
Abstract
Learning to anticipate upcoming events can increase fitness by allowing animals to choose the best course of action, and many species can learn sequences of events and anticipate rewards. To date, most studies have focused on sequences over short time scales such as a few seconds. Whereas events separated by a few seconds are easily learned, events separated by longer delays are typically more difficult to learn. Here, we show that honey bees (Apis mellifera) can learn a sequence of two visually distinct food sources alternating in profitability every few minutes. Bees were challenged to learn that the rewarded pattern was the one that was non-rewarded on the prior visit. We show that bees can predict and choose the feeder that will be rewarding upon their next approach more frequently than predicted by chance, and they improve with experience, with 64% correct choices made in the second half of their visit sequence (N = 320 visits by 20 different bees). These results increase our understanding of honey bee visual sequential learning and further demonstrate the flexibility of foragers’ learning strategies. Full article
(This article belongs to the Section Social Insects and Apiculture)
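The headline figure above (about 64% correct choices over 320 visits) can be compared with the 50% chance level using a simple binomial test. The sketch below is only an illustration of that comparison: it takes the abstract's counts at face value and ignores the non-independence of repeated visits by the same bee, so it is not the paper's statistical analysis.

```python
from scipy.stats import binomtest

n_visits = 320                      # visits reported in the abstract
k_correct = round(0.64 * n_visits)  # ~64% correct choices
result = binomtest(k_correct, n_visits, p=0.5, alternative="greater")
print(f"{k_correct}/{n_visits} correct, p = {result.pvalue:.3g}")
```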

25 pages, 3050 KiB  
Article
Optimizing Autonomous Vehicle Performance Using Improved Proximal Policy Optimization
by Mehmet Bilban and Onur İnan
Sensors 2025, 25(6), 1941; https://doi.org/10.3390/s25061941 - 20 Mar 2025
Cited by 2 | Viewed by 2080
Abstract
Autonomous vehicles must make quick and accurate decisions to operate efficiently in complex and dynamic urban traffic environments, necessitating a reliable and stable learning mechanism. The proximal policy optimization (PPO) algorithm stands out among reinforcement learning (RL) methods for its consistent learning process, ensuring stable decisions under varying conditions while avoiding abrupt deviations during execution. However, the PPO algorithm often becomes trapped in a limited search space during policy updates, restricting its adaptability to environmental changes and alternative strategy exploration. To overcome this limitation, we integrated Lévy flight’s chaotic and comprehensive exploration capabilities into the PPO algorithm. Our method helped the algorithm explore larger solution spaces and reduce the risk of getting stuck in local minima. In this study, we collected real-time data such as speed, acceleration, traffic sign positions, vehicle locations, traffic light statuses, and distances to surrounding objects from the CARLA simulator, processed via Apache Kafka. These data were analyzed by both the standard PPO and our novel Lévy flight-enhanced PPO (LFPPO) algorithm. While the PPO algorithm offers consistency, its limited exploration hampers adaptability. The LFPPO algorithm overcomes this by combining Lévy flight’s chaotic exploration with Apache Kafka’s real-time data streaming, an advancement absent in state-of-the-art methods. Tested in CARLA, the LFPPO algorithm achieved a 99% success rate compared to the PPO algorithm’s 81%, demonstrating superior stability and rewards. These innovations enhance safety and RL exploration, with the LFPPO algorithm reducing collisions to 1% versus the PPO algorithm’s 19%, advancing autonomous driving beyond existing techniques. Full article
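The LFPPO approach described above injects Lévy-flight steps into PPO's exploration so the policy search can make occasional long jumps out of local optima. Lévy-distributed steps are commonly drawn with Mantegna's algorithm; the sketch below shows that draw and one hypothetical way of perturbing a policy's mean action with it. The integration point is an assumption for illustration, since the abstract does not spell out where the perturbation is applied.

```python
import numpy as np
from math import gamma as gamma_fn

def levy_step(size, beta=1.5, rng=None):
    """Draw Levy-flight steps via Mantegna's algorithm (stability index beta)."""
    rng = rng or np.random.default_rng()
    sigma_u = (gamma_fn(1 + beta) * np.sin(np.pi * beta / 2)
               / (gamma_fn((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma_u, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

# Hypothetical use inside an RL loop: occasionally perturb the mean action
# proposed by the PPO policy with a Levy step to escape local optima.
rng = np.random.default_rng(42)
mean_action = np.array([0.2, -0.1])   # e.g. [throttle, steering]
explore_scale = 0.05
perturbed = mean_action + explore_scale * levy_step(mean_action.shape, rng=rng)
print("perturbed action:", np.clip(perturbed, -1.0, 1.0))
```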

22 pages, 436 KiB  
Article
Strategic Impacts of RSUs on Company Performance: Insights into EPS and Profitability Growth
by Won (Albert) Park, Elena Sernova and Cheong-Yeul Park
Int. J. Financial Stud. 2025, 13(1), 34; https://doi.org/10.3390/ijfs13010034 - 1 Mar 2025
Viewed by 1470
Abstract
Restricted stock units (RSUs) are a key component of executive compensation schemes, aligning executive incentives with the long-term goals of the company and compensating for the limitations of traditional stock options. This study empirically analyzes the impact of RSUs on corporate performance, particularly earnings per share (EPS) and operating profit. S&P 500 companies’ 27 years of data from 1997 to 2023 were used to evaluate the change in performance before and after the introduction of RSUs, and a paired t-test and hierarchical regression analysis were applied. The research results show that the introduction of RSUs has a stronger performance improvement effect in the 6th to 10th year after the introduction, suggesting that over time, even if RSUs cause short-term cost burdens, they increase the company’s financial stability in the long term and contribute to sustainable growth. In addition, the same analysis was conducted by setting not only EPS but also operating profit as an alternative variable, and it was confirmed that RSUs also have a positive impact on actual profitability improvement. This study emphasizes the need for companies to design RSUs as a strategic compensation system for long-term value creation, not as a short-term performance reward, and suggests the need for a further analysis of the effects of RSUs in various industries and regions. Full article
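The before/after comparison described above rests on a paired t-test (alongside hierarchical regression). A minimal SciPy sketch of the paired test is below; the EPS values are made-up placeholders, not data from the study.

```python
from scipy.stats import ttest_rel

# Hypothetical EPS for the same firms before and after RSU adoption (paired observations).
eps_before = [2.1, 1.8, 3.4, 0.9, 2.7, 1.5]
eps_after  = [2.6, 2.0, 3.9, 1.2, 3.1, 1.4]
t_stat, p_value = ttest_rel(eps_after, eps_before)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```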
27 pages, 3281 KiB  
Article
A Reinforcement Learning-Based Solution for the Capacitated Electric Vehicle Routing Problem from the Last-Mile Delivery Perspective
by Özge Aslan Yıldız, İnci Sarıçiçek and Ahmet Yazıcı
Appl. Sci. 2025, 15(3), 1068; https://doi.org/10.3390/app15031068 - 22 Jan 2025
Cited by 8 | Viewed by 2487
Abstract
The growth of the urban population and the increase in e-commerce activities have resulted in challenges for last-mile delivery. On the other hand, electric vehicles (EVs) have been introduced to last-mile delivery as an alternative to fossil fuel vehicles. Electric vehicles (EVs) not only play a pivotal role in reducing greenhouse gas emissions and air pollution but also contribute significantly to the development of more energy-efficient and environmentally sustainable urban transportation systems. Within these dynamics, the Electric Vehicle Routing Problem (EVRP) has begun to replace the Vehicle Routing Problem (VRP) in last-mile delivery. While classic vehicle routing ignores fueling, both the location of charging stations and charging time should be included in the Electric Vehicle Routing Problem due to the long recharging time. This study addresses the Capacitated EVRP (CEVRP) with a novel Q-learning algorithm. Q-learning is a model-free reinforcement learning algorithm designed to maximize an agent’s cumulative reward over time by selecting optimal actions. Additionally, a new dataset is also published for the EVRP considering field constraints. For the design of the dataset, real geographical positions have been used, located in the province of Eskisehir, Türkiye. It also includes environmental information, such as streets, intersections, and traffic density, unlike classical EVRP datasets. Optimal solutions are obtained for each instance of the EVRP by using the mathematical model. The results of the proposed Q-learning algorithm are compared with the optimal solutions of the presented dataset. Test results show that the proposed algorithm provides remarkable advantages in obtaining routes in a shorter time for EVs. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
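The abstract above describes a Q-learning agent that learns which customer to visit next so as to maximize cumulative reward. The sketch below shows the core tabular Q-learning update on a toy four-node routing instance with a negative-distance reward; the state encoding, charging-station constraints, and reward shaping of the actual CEVRP formulation are not reproduced here.

```python
import random
import numpy as np

# Toy instance: depot 0 plus three customers, symmetric distance matrix.
dist = np.array([[0, 4, 6, 3],
                 [4, 0, 2, 5],
                 [6, 2, 0, 4],
                 [3, 5, 4, 0]], dtype=float)
n = len(dist)
alpha, gamma, eps, episodes = 0.1, 0.95, 0.2, 5000
Q = {}  # key: (current_node, frozenset(visited)) -> array of action values

def q(state):
    return Q.setdefault(state, np.zeros(n))

for _ in range(episodes):
    current, visited = 0, frozenset({0})
    while len(visited) < n:
        state = (current, visited)
        unvisited = [a for a in range(n) if a not in visited]
        if random.random() < eps:
            action = random.choice(unvisited)          # explore
        else:
            action = max(unvisited, key=lambda a: q(state)[a])  # exploit
        reward = -dist[current, action]                # shorter hops earn higher reward
        next_visited = visited | {action}
        next_state = (action, next_visited)
        if len(next_visited) < n:
            remaining = [a for a in range(n) if a not in next_visited]
            target = reward + gamma * max(q(next_state)[a] for a in remaining)
        else:
            target = reward - dist[action, 0]          # return to the depot at the end
        q(state)[action] += alpha * (target - q(state)[action])
        current, visited = action, next_visited

# Greedy rollout of the learned policy.
current, visited, route = 0, frozenset({0}), [0]
while len(visited) < n:
    state = (current, visited)
    unvisited = [a for a in range(n) if a not in visited]
    current = max(unvisited, key=lambda a: q(state)[a])
    visited = visited | {current}
    route.append(current)
route.append(0)
print("learned route:", route,
      "length:", sum(dist[a, b] for a, b in zip(route, route[1:])))
```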

16 pages, 521 KiB  
Review
Understanding Deep Endometriosis: From Molecular to Neuropsychiatry Dimension
by Magdalena Pszczołowska, Kamil Walczak, Weronika Kołodziejczyk, Magdalena Kozłowska, Gracjan Kozłowski, Martyna Gachowska and Jerzy Leszek
Int. J. Mol. Sci. 2025, 26(2), 839; https://doi.org/10.3390/ijms26020839 - 20 Jan 2025
Cited by 1 | Viewed by 1742
Abstract
Endometriosis is a widespread disease that affects about 8% of the world’s female population. This condition may be described as the spread of endometrial tissue outside the uterine cavity, but the pathomechanism of this process remains unclear. Apart from classic endometriosis symptoms, which are pelvic pain, infertility, and bleeding problems, there are neuropsychiatric comorbidities that are usually difficult to diagnose. In our review, we attempted to summarize some of them. Conditions like migraine, anxiety, and depression occur more often in women with endometriosis and have a significant impact on life quality and pain perception. Interestingly, 77% of endometriosis patients with depression also have anxiety. Neuroimaging gives an image of the so-called endometriosis brain, which refers to alterations in pain processing and cognition, self-regulation, and reward. Genetic factors, including mutations in KRAS, PTEN, and ARID1A, influence cellular proliferation, differentiation, and chromatin remodeling, potentially exacerbating lesion severity and complicating treatment. In this review, we focused on the aspects of sciatic and obturator nerve endometriosis, the emotional well-being of endometriosis-affected patients, and the potential influence of endometriosis on dementia, also focusing on prolonged diagnosis. Addressing endometriosis requires a multidisciplinary approach, encompassing molecular insights, innovative therapies, and attention to its psychological and systemic effects. Full article

17 pages, 5027 KiB  
Article
Ornstein–Uhlenbeck Adaptation as a Mechanism for Learning in Brains and Machines
by Jesús García Fernández, Nasir Ahmad and Marcel van Gerven
Entropy 2024, 26(12), 1125; https://doi.org/10.3390/e26121125 - 22 Dec 2024
Viewed by 1402
Abstract
Learning is a fundamental property of intelligent systems, observed across biological organisms and engineered systems. While modern intelligent systems typically rely on gradient descent for learning, the need for exact gradients and complex information flow makes its implementation in biological and neuromorphic systems challenging. This has motivated the exploration of alternative learning mechanisms that can operate locally and do not rely on exact gradients. In this work, we introduce a novel approach that leverages noise in the parameters of the system and global reinforcement signals. Using an Ornstein–Uhlenbeck process with adaptive dynamics, our method balances exploration and exploitation during learning, driven by deviations from error predictions, akin to reward prediction error. Operating in continuous time, Ornstein–Uhlenbeck adaptation (OUA) is proposed as a general mechanism for learning in dynamic, time-evolving environments. We validate our approach across a range of different tasks, including supervised learning and reinforcement learning in feedforward and recurrent systems. Additionally, we demonstrate that it can perform meta-learning, adjusting hyper-parameters autonomously. Our results indicate that OUA provides a promising alternative to traditional gradient-based methods, with potential applications in neuromorphic computing. It also hints at a possible mechanism for noise-driven learning in the brain, where stochastic neurotransmitter release may guide synaptic adjustments. Full article
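The OUA mechanism above lets parameters drift under an Ornstein–Uhlenbeck process while a global signal, the deviation of the observed error from a running prediction of it, steers that drift. The sketch below is one simplified reading of that idea on a toy regression problem: parameters follow OU dynamics around an adaptive mean, and the mean is pulled toward parameter values whose error beats the running prediction. It is an assumption-laden illustration, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy task: fit y = 2x + 1 with theta = [w, b], using only a scalar loss signal.
x = rng.uniform(-1, 1, 64)
y_true = 2.0 * x + 1.0
def loss(theta):
    w, b = theta
    return float(np.mean((w * x + b - y_true) ** 2))

theta = np.zeros(2)          # current parameters (perturbed by OU noise)
mu = np.zeros(2)             # OU mean: the current "belief" about good parameters
baseline = loss(theta)       # running prediction of the error (cf. reward prediction error)
kappa, sigma, dt = 1.0, 0.3, 0.1
lr_mu, lr_baseline = 0.5, 0.1

for step in range(3000):
    # Ornstein-Uhlenbeck update of the parameters around the adaptive mean mu.
    theta += kappa * (mu - theta) * dt + sigma * np.sqrt(dt) * rng.normal(size=2)
    err = loss(theta)
    # Deviation from the predicted error acts as a global reinforcement signal:
    # if the error is lower than expected, pull the mean toward the current parameters.
    if baseline - err > 0:
        mu += lr_mu * (theta - mu)
    baseline += lr_baseline * (err - baseline)

print("adapted parameter mean:", np.round(mu, 2), "(toy target: w=2.0, b=1.0)")
```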

15 pages, 965 KiB  
Review
Impact of Substance Use Disorder on Tryptophan Metabolism Through the Kynurenine Pathway: A Narrative Review
by Lindsey Contella, Christopher L. Farrell, Luigi Boccuto, Alain H. Litwin and Marion L. Snyder
Metabolites 2024, 14(11), 611; https://doi.org/10.3390/metabo14110611 - 10 Nov 2024
Cited by 1 | Viewed by 1586
Abstract
Background/Objectives: Substance use disorder is a crisis impacting many people in the United States. This review aimed to identify the effect addictive substances have on the kynurenine pathway. Tryptophan is an essential amino acid metabolized by the serotonin and kynurenine pathways. The metabolites of these pathways play a role in the biological reward system. Addictive substances have been shown to cause imbalances in the ratios of these metabolites. With current treatment and therapeutic options being suboptimal, identifying biochemical mechanisms that are impacted during the use of addictive substances can provide alternative options for treatment or drug discovery. Methods: A systematic literature search was conducted to identify studies evaluating the relationship between substance use disorder and tryptophan metabolism through the kynurenine pathway. A total of 32 articles meeting eligibility criteria were used to review the relationship between the kynurenine pathway, tryptophan breakdown, and addictive substances. Results: The use of addictive substances dysregulates tryptophan metabolism and kynurenine metabolite concentrations. This imbalance directly affects the dopamine reward system and is thought to promote continued substance use. Conclusions: Further studies are needed to fully evaluate the metabolites of the kynurenine pathway, along with other options for treatment to repair the metabolite imbalance. Several possible therapeutics have been identified; drugs that restore homeostasis, such as Ro 61-8048 and natural products like Tinospora cordifolia or Decaisnea insignis, are promising options for the treatment of substance use disorder. Full article
(This article belongs to the Section Animal Metabolism)

28 pages, 1238 KiB  
Article
Resource Allocation in UAV-D2D Networks: A Scalable Heterogeneous Multi-Agent Deep Reinforcement Learning Approach
by Huayuan Wang, Hui Li, Xin Wang, Shilin Xia, Tao Liu and Ruonan Wang
Electronics 2024, 13(22), 4401; https://doi.org/10.3390/electronics13224401 - 10 Nov 2024
Cited by 1 | Viewed by 1671
Abstract
In unmanned aerial vehicle (UAV)-assisted device-to-device (D2D) caching networks, the uncertainty from unpredictable content demands and variable user positions poses a significant challenge for traditional optimization methods, often making them impractical. Multi-agent deep reinforcement learning (MADRL) offers significant advantages in optimizing multi-agent system decisions and serves as an effective and practical alternative. However, its application in large-scale dynamic environments is severely limited by the curse of dimensionality and communication overhead. To resolve this problem, we develop a scalable heterogeneous multi-agent mean-field actor-critic (SH-MAMFAC) framework. The framework treats ground users (GUs) and UAVs as distinct agents and designs cooperative rewards to convert the resource allocation problem into a fully cooperative game, enhancing global network performance. We also implement a mixed-action mapping strategy to handle discrete and continuous action spaces. A mean-field MADRL framework is introduced to minimize individual agent training loads while enhancing total cache hit probability (CHP). The simulation results show that our algorithm improves CHP and reduces transmission delay. A comparative analysis with existing mainstream deep reinforcement learning (DRL) algorithms shows that SH-MAMFAC significantly reduces training time and maintains high CHP as GU count grows. Additionally, by comparing with SH-MAMFAC variants that do not include trajectory optimization or power control, the proposed joint design scheme significantly reduces transmission delay. Full article
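The mean-field device that SH-MAMFAC builds on replaces the joint action of all other agents in each agent's critic input with the mean action of its neighbours, so the critic input keeps a fixed size as the number of ground users grows. The sketch below only illustrates that input construction; the agent counts, action dimensions, and neighbourhood rule are illustrative assumptions rather than the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(7)

n_gu, action_dim, state_dim = 12, 3, 5                  # ground users, per-agent action/state sizes
states = rng.normal(size=(n_gu, state_dim))              # local observations
actions = rng.uniform(-1, 1, size=(n_gu, action_dim))    # e.g. channel/power/cache decisions

def critic_input(i, states, actions):
    """Mean-field critic input for agent i: own state, own action,
    and the mean action of all other agents (instead of their joint action)."""
    others = np.delete(actions, i, axis=0)
    mean_action = others.mean(axis=0)
    return np.concatenate([states[i], actions[i], mean_action])

# Input size is state_dim + 2 * action_dim regardless of how many agents exist,
# which is what keeps the approach scalable as the GU count grows.
x = critic_input(0, states, actions)
print("critic input size:", x.shape[0])                  # 5 + 3 + 3 = 11
```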

14 pages, 273 KiB  
Article
Evaluation of the Friday Night Live Mentoring Program on Supporting Positive Youth Development Outcomes
by Kathleen P. Tebb and Ketan Tamirisa
Healthcare 2024, 12(21), 2199; https://doi.org/10.3390/healthcare12212199 - 4 Nov 2024
Viewed by 1381
Abstract
Introduction: The use of alcohol, tobacco, and other drugs (ATOD) is a leading cause of preventable morbidity and mortality among adolescents. While traditional interventions have targeted specific health-risk behaviors (e.g., substance use, initiation of sexual intercourse, truancy, etc.), the evidence suggests that using a positive youth development (PYD) framework may have positive impacts across a number of domains. Friday Night Live Mentoring (FNLM) is a PYD-based, cross-age peer mentoring program that engages teams of older high school-aged youth to mentor teams of middle school-aged youth in a structured, ongoing, one-on-one relationship. While studies have demonstrated significant but small effect sizes of intergenerational youth mentoring programs in which an adult mentor is paired with the youth mentee, research on cross-age mentoring programs is limited. The purpose of the current study is to evaluate FNLM on its ability to improve participants’ knowledge, attitudes, skills, opportunities to develop caring relationships, school engagement, and academic performance. Methods: A retrospective, pre–post survey was administered online to FNLM participants across 13 California counties. Participants rated their knowledge and attitudes about ATOD, skills, relationships with peers and adults, and academic indicators. Open-ended questions gathered information about participants’ experiences in FNLM. Non-parametric related-samples Wilcoxon signed rank tests (an alternative to paired t-test) were used to compare pre–post differences. Participants were also asked two open-ended questions: “What are the best parts of FNLM?” and “What, if anything, would you change?”. The responses to each question were reviewed, coded, and analyzed according to key themes. Results: A total of 512 participants completed the survey (287 mentors and 225 protégés). There were small but statistically significant improvements across all items for both mentors and protégés. Qualitative analyses showed that most mentors and protégés especially enjoyed getting to know and spend time with one another. Several mentors added that it was rewarding to be a positive influence on or to make a positive difference in the protégé’s life. Many youth stated that the relationships formed, especially with their partner, and the activities were the best part of FNLM. The overwhelming majority would not change anything about the program. Those who provided recommendations for program improvement suggested more activities or more hands-on and engaging activities and more or longer meetings. Conclusion: FNLM actively engages youth and provides them with support and opportunities that promote knowledge, skill development, positive relationships, academic engagement, and success and raise awareness of the harms that the use of alcohol, tobacco, and other drugs (ATOD) can cause. While ATOD use was low prior to program participation, it was significantly lower after participating in the program. Full article
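The pre/post comparison described above uses the non-parametric Wilcoxon signed-rank test on paired ratings. A minimal SciPy sketch follows; the ratings are invented placeholders, not survey data from the evaluation.

```python
from scipy.stats import wilcoxon

# Hypothetical paired ratings (1-5 scale) from the same participants before and after the program.
pre  = [3, 2, 4, 3, 3, 2, 4, 3, 2, 3]
post = [4, 3, 4, 4, 3, 3, 5, 4, 3, 4]
stat, p_value = wilcoxon(pre, post)
print(f"Wilcoxon W = {stat}, p = {p_value:.3f}")
```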
33 pages, 2210 KiB  
Article
Online Three-Dimensional Fuzzy Reinforcement Learning Modeling for Nonlinear Distributed Parameter Systems
by Xianxia Zhang, Runbin Yan, Gang Zhou, Lufeng Wang and Bing Wang
Electronics 2024, 13(21), 4217; https://doi.org/10.3390/electronics13214217 - 27 Oct 2024
Cited by 1 | Viewed by 1125
Abstract
Distributed parameter systems (DPSs) frequently appear in industrial manufacturing processes, with complex characteristics such as time–space coupling, nonlinearity, infinite dimension, uncertainty and so on, which is full of challenges to the modeling of the system. At present, most DPS modeling methods are offline. When the internal parameters or external environment of DPS change, the offline model is incapable of accurately representing the dynamic attributes of the real system. Establishing an online model for DPS that accurately reflects the real-time dynamics of the system is very important. In this paper, the idea of reinforcement learning is creatively integrated into the three-dimensional (3D) fuzzy model and a reinforcement learning-based 3D fuzzy modeling method is proposed. The agent improves the strategy by continuously interacting with the environment, so that the 3D fuzzy model can adaptively establish the online model from scratch. Specifically, this paper combines the deterministic strategy gradient reinforcement learning algorithm based on an actor critic framework with a 3D fuzzy system. The actor function and critic function are represented by two 3D fuzzy systems and the critic function and actor function are updated alternately. The critic function uses a TD (0) target and is updated via the semi-gradient method; the actor function is updated by using the chain derivation rule on the behavior value function and the actor function is the established DPS online model. Since DPS modeling is a continuous problem, this paper proposes a TD (0) target based on average reward, which can effectively realize online modeling. The suggested methodology is implemented on a three-zone rapid thermal chemical vapor deposition reactor system and the simulation results demonstrate the efficacy of the methodology. Full article
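The critic update sketched above uses a TD(0) target defined relative to the average reward, which is the standard device for continuing (non-episodic) tasks such as online modeling. Below is a minimal sketch of differential semi-gradient TD(0) with a linear value function on a toy two-state chain; the environment and features are stand-ins, not the 3D fuzzy system or the RTCVD application.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy continuing task: two states, fixed policy, linear value function v(s) = w . phi(s).
phi = np.array([[1.0, 0.0], [0.0, 1.0]])   # one-hot features for states 0 and 1
P = np.array([[0.9, 0.1], [0.2, 0.8]])     # transition probabilities under the policy
R = np.array([1.0, 0.0])                   # expected reward received in each state

w = np.zeros(2)          # value-function weights
avg_r = 0.0              # running estimate of the average reward (R-bar)
alpha, beta = 0.05, 0.01 # step sizes for the weights and for R-bar
s = 0

for t in range(20000):
    s_next = rng.choice(2, p=P[s])
    r = R[s]
    # Differential TD(0): the target subtracts the average-reward estimate,
    # so values measure how much better a state is than "average" in the long run.
    delta = r - avg_r + w @ phi[s_next] - w @ phi[s]
    avg_r += beta * delta                  # update R-bar from the TD error
    w += alpha * delta * phi[s]            # semi-gradient update of the critic weights
    s = s_next

print("estimated average reward:", round(avg_r, 3))
print("differential state values:", np.round(w, 3))
```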

18 pages, 1197 KiB  
Article
The Influence of Fixed and Flexible Funding Mechanisms on Reward-Based Crowdfunding Success
by Lenny Phulong Mamaro and Athenia Bongani Sibindi
J. Risk Financial Manag. 2024, 17(10), 454; https://doi.org/10.3390/jrfm17100454 - 7 Oct 2024
Viewed by 1880
Abstract
This study examined whether fixed or flexible funding mechanisms influence crowdfunding success. Under the fixed funding mechanism, the pledges contributed to the crowdfunding campaign projects are returned to the backers if the project fails, whereas, under the flexible funding mechanism, the project creator can keep all the raised pledges, irrespective of whether the project succeeds or fails. Secondary data consisted of reward-based crowdfunding projects retrieved from The Crowd Data Centre. Logistic regression was employed to respond to research objectives. The results reveal that the fixed funding mechanism increases the probability of success more than flexible funding. Entrepreneur experience, spelling errors, and project description negatively affect crowdfunding success, and backers positively affect crowdfunding success. The findings guide entrepreneurs seeking financing to design and choose an appropriate funding mechanism that effectively reduces the failure rate. Although many entrepreneurs seek funding in the crowdfunding market, relatively little research has been conducted on the influence of flexible or fixed funding mechanisms on crowdfunding success in Africa. This study provides entrepreneurs with appropriate financing strategies that enhance crowdfunding success. The empirical literature indicates that the flexible funding mechanism creates distrust among backers due to unrealistic target amounts. Full article
(This article belongs to the Section Financial Technology and Innovation)
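The study above estimates a logistic regression of campaign success on the funding mechanism and campaign characteristics. A minimal statsmodels sketch follows; the variable names and the synthetic dataset are illustrative assumptions that merely mimic the reported signs (fixed funding and backers positive, spelling errors negative), not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 500
df = pd.DataFrame({
    "fixed_funding": rng.integers(0, 2, n),   # 1 = fixed mechanism, 0 = flexible
    "backers": rng.poisson(60, n),
    "spelling_errors": rng.poisson(1.5, n),
})
# Synthetic outcome loosely mirroring the reported directions of effect.
lin_pred = -1.0 + 0.8 * df.fixed_funding + 0.03 * df.backers - 0.4 * df.spelling_errors
df["success"] = (rng.random(n) < 1 / (1 + np.exp(-lin_pred))).astype(int)

model = smf.logit("success ~ fixed_funding + backers + spelling_errors", data=df).fit(disp=0)
print(model.params.round(3))
```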
