Search Results (187)

Search Parameters:
Keywords = human-behavior representation

13 pages, 4834 KB  
Article
Validation of Body Surface Area Equations for Estimating Fat-Free Mass by Dual X-Ray Absorptiometry in a Regional Chilean Sample Aged 4 to 85 Years
by Marco Cossio-Bolaños, Rubén Vidal Espinoza, Jose Sulla-Torres, Camilo Urra-Albornoz, Lucila Sanchez-Macedo, Miguel de Arruda, Fernando Alvear-Vasquez, Evandro Lazari and Rossana Gomez-Campos
Diagnostics 2025, 15(23), 2982; https://doi.org/10.3390/diagnostics15232982 - 24 Nov 2025
Viewed by 245
Abstract
Background/Objectives: Body surface area (BSA) is an important metric that represents human dimensionality and could provide a more accurate representation of body composition. The objectives were (a) to verify the validity of a set of BSA-based equations for estimating lean body mass (LBM), using dual X-ray absorptiometry (DXA) as the reference method, and (b) to propose reference values for BSA by anthropometry and for LBM by DXA in a regional Chilean sample aged 4 to 85 years. Methods: A descriptive cross-sectional study was performed with a sample of 5493 participants. Weight and height were measured, BSA was calculated using seven equations, and LBM was assessed by DXA. Results: Only three BSA equations (Dubois & Dubois 1916, Fujimoto & Watanabe 1969, and Mattar 1981) adequately explained LBM, with explanatory power of R² = 83–84% in males and R² = 69% in females. The standard error of estimation (SEE) of the three equations was acceptable in both sexes, ranging from 0.049 to 0.080 in males and from 0.035 to 0.088 in females. Bland–Altman concordance analysis showed adequate limits of agreement, ranging from −0.092 to 0.069 m² in males and from −0.064 to 0.084 m² in females. Reference values for BSA and LBM were constructed using percentiles. Conclusions: This study demonstrated the validity of three equations for estimating LBM in a Chilean sample aged 4 to 85 years. The results show consistent behavior and acceptable accuracy across all ages, especially for the Mattar equation, although the Dubois & Dubois and Fujimoto equations could also serve as alternatives in females. Reference values for BSA and LBM were generated by age and sex, and the results suggest their applicability and usefulness in clinical and public health contexts.
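
The Dubois & Dubois (1916) equation cited above is commonly written as BSA = 0.007184 · weight^0.425 · height^0.725 (weight in kg, height in cm), and Bland–Altman limits of agreement are conventionally the mean difference ± 1.96 SD. A minimal Python sketch of both, using illustrative inputs rather than data from the study:

```python
import numpy as np

def bsa_dubois(weight_kg: float, height_cm: float) -> float:
    """Body surface area (m^2) via the commonly cited Dubois & Dubois (1916) equation."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

def bland_altman_limits(estimate: np.ndarray, reference: np.ndarray):
    """Mean difference (bias) and 95% limits of agreement (bias ± 1.96 SD)."""
    diff = estimate - reference
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative values only, not data from the study.
print(f"BSA = {bsa_dubois(weight_kg=70.0, height_cm=170.0):.2f} m^2")   # ~1.81 m^2
est = np.array([1.80, 1.95, 1.62])   # BSA from an equation
ref = np.array([1.84, 1.90, 1.70])   # reference values
print(bland_altman_limits(est, ref))
```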

12 pages, 509 KB  
Review
Deciding When to Align: Computational and Neural Mechanisms of Goal-Directed Social Alignment
by Aial Sobeh and Simone Shamay-Tsoory
Brain Sci. 2025, 15(11), 1200; https://doi.org/10.3390/brainsci15111200 - 7 Nov 2025
Viewed by 567
Abstract
Human behavior is shaped by a pervasive motive to align with others, manifesting across a wide range of tendencies—from motor synchrony and emotional contagion to convergence in beliefs and choices. Existing accounts explain how alignment arises through predictive coding and observation–execution mechanisms, but they do not address how it is regulated in a manner that considers when alignment is adaptive and with whom it should occur. We propose a goal-directed model of social alignment that integrates computational and neural levels of analysis, to enhance our understanding of alignment as a context-sensitive decision process rather than a reflexive social tendency. Computationally, alignment is formalized as a prediction-error minimization process over the gap between self and other, augmented by a meta-learning layer in which the learning rate is adaptively tuned according to the inferred value of aligning versus maintaining independence. Assessments of the traits and mental states of self and other serve as key inputs to this regulatory function. Neurally, higher-order representations of these inputs are carried by the mentalizing network (dmPFC, TPJ), which exerts top-down control through the executive control network (dlPFC, rIFG) to enhance or inhibit alignment tendencies generated by observation–execution (mirror) circuitry. By reframing alignment as a form of social decision-making under uncertainty, the model specifies both the computations and neural circuits that integrate contextual cues to arbitrate when and with whom to align. It yields testable predictions across developmental, comparative, cognitive, and neurophysiological domains, and provides a unified framework for understanding the adaptive functions of social alignment, such as strategic social learning, as well as its maladaptive outcomes, including groupthink and false information cascades. Full article
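
The core computation described here, prediction-error minimization over the self–other gap with a meta-learned learning rate, can be pictured with a toy update rule. The sketch below is an illustration of that idea, not the authors' model; the gating of the learning rate by a value signal is an assumption for demonstration:

```python
import numpy as np

def align_step(x_self, x_other, value_of_aligning, base_lr=0.5):
    """One toy update: shrink the self-other prediction error at a rate
    gated by the inferred value of aligning (0 = stay independent, 1 = align)."""
    prediction_error = x_other - x_self
    alpha = base_lr * np.clip(value_of_aligning, 0.0, 1.0)  # meta-learned gain (assumed form)
    return x_self + alpha * prediction_error

# Illustrative: an opinion drifts toward a partner's only while aligning is valued.
x_self, x_other = 0.0, 1.0
for value in (0.9, 0.9, 0.1, 0.1):
    x_self = align_step(x_self, x_other, value)
    print(round(float(x_self), 3))
```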

18 pages, 1540 KB  
Review
From Fractal Geometry to Fractal Cognition: Experimental Tools and Future Directions for Studying Recursive Hierarchical Embedding
by Mauricio J. D. Martins
Fractal Fract. 2025, 9(10), 654; https://doi.org/10.3390/fractalfract9100654 - 10 Oct 2025
Viewed by 984
Abstract
The study of fractals has a long history in mathematics and signal analysis, providing formal tools to describe self-similar structures and scale-invariant phenomena. In recent years, cognitive science has developed a set of powerful theoretical and experimental tools capable of probing the representations that enable humans to extend hierarchical structures beyond given input and to generate fractal-like patterns across multiple domains, including language, music, vision, and action. These paradigms target recursive hierarchical embedding (RHE), a generative capacity that supports the production and recognition of self-similar structures at multiple scales. This article reviews the theoretical framework of RHE, surveys empirical methods for measuring it across behavioral and neural domains, and highlights their potential for cross-domain comparisons and developmental research. It also examines applications in linguistic, musical, visual, and motor domains, summarizing key findings and their theoretical implications. Despite these advances, the computational and biological mechanisms underlying RHE remain poorly understood. Addressing this gap will require linking cognitive models with algorithmic architectures and leveraging the large-scale behavioral and neuroimaging datasets generated by these paradigms for fractal analyses. Integrating theory, empirical tools, and computational modelling offers a roadmap for uncovering the mechanisms that give rise to recursive generativity in the human mind. Full article
(This article belongs to the Special Issue Fractal Dynamics of Complex Systems in Society and Behavioral Science)
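
Recursive hierarchical embedding, as used above, means generating a structure by embedding a pattern inside itself. A minimal sketch of that generative idea (an illustration only, not one of the experimental paradigms reviewed):

```python
def embed(pattern: str, depth: int, placeholder: str = "*") -> str:
    """Recursively embed a pattern within itself: each level replaces the
    placeholder with a fresh copy of the pattern, yielding self-similar nesting."""
    result = pattern
    for _ in range(depth):
        result = result.replace(placeholder, pattern)
    return result.replace(placeholder, ".")  # terminate the innermost level

# Three levels of recursive embedding of a simple bracketed motif.
print(embed("[ * ]", depth=3))   # [ [ [ [ . ] ] ] ]
```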

30 pages, 1778 KB  
Article
AI, Ethics, and Cognitive Bias: An LLM-Based Synthetic Simulation for Education and Research
by Ana Luize Bertoncini, Raul Matsushita and Sergio Da Silva
AI Educ. 2026, 1(1), 3; https://doi.org/10.3390/aieduc1010003 - 4 Oct 2025
Viewed by 4172
Abstract
This study examines how cognitive biases may shape ethical decision-making in AI-mediated environments, particularly within education and research. As AI tools increasingly influence human judgment, biases such as normalization, complacency, rationalization, and authority bias can lead to ethical lapses, including academic misconduct, uncritical reliance on AI-generated content, and acceptance of misinformation. To explore these dynamics, we developed an LLM-generated synthetic behavior estimation framework that modeled six decision-making scenarios with probabilistic representations of key cognitive biases. The scenarios addressed issues ranging from loss of human agency to biased evaluations and homogenization of thought. Statistical summaries of the synthetic dataset indicated that 71% of agents engaged in unethical behavior influenced by biases like normalization and complacency, 78% relied on AI outputs without scrutiny due to automation and authority biases, and misinformation was accepted in 65% of cases, largely driven by projection and authority biases. These statistics are descriptive of this synthetic dataset only and are not intended as inferential claims about real-world populations. The findings nevertheless suggest the potential value of targeted interventions—such as AI literacy programs, systematic bias audits, and equitable access to AI tools—to promote responsible AI use. As a proof-of-concept, the framework offers controlled exploratory insights, but all reported outcomes reflect text-based pattern generation by an LLM rather than observed human behavior. Future research should validate and extend these findings with longitudinal and field data. Full article
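
The "probabilistic representations of key cognitive biases" can be pictured as bias-dependent probabilities of an unethical choice sampled per synthetic agent. The sketch below is a heavily simplified illustration; the bias effects and base rate are arbitrary placeholders, not parameters from the paper:

```python
import random

# Placeholder bias effects (arbitrary illustrative numbers, not the paper's values):
# each bias raises the probability that a synthetic agent acts unethically.
BIAS_EFFECTS = {"normalization": 0.25, "complacency": 0.20, "authority": 0.15}
BASE_RATE = 0.10

def simulate_agent(biases, rng):
    p_unethical = min(1.0, BASE_RATE + sum(BIAS_EFFECTS[b] for b in biases))
    return rng.random() < p_unethical

rng = random.Random(0)
agents = [simulate_agent(["normalization", "complacency"], rng) for _ in range(1000)]
print(f"unethical share: {sum(agents) / len(agents):.2%}")  # descriptive of this synthetic run only
```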

17 pages, 3363 KB  
Article
Social-LLM: Modeling User Behavior at Scale Using Language Models and Social Network Data
by Julie Jiang and Emilio Ferrara
Sci 2025, 7(4), 138; https://doi.org/10.3390/sci7040138 - 2 Oct 2025
Viewed by 1697
Abstract
The proliferation of social network data has unlocked unprecedented opportunities for extensive, data-driven exploration of human behavior. The structural intricacies of social networks offer insights into various computational social science issues, particularly concerning social influence and information diffusion. However, modeling large-scale social network data comes with computational challenges. Although large language models make it easier than ever to model textual content, advanced network representation methods often struggle with scalability and efficient deployment to out-of-sample users. In response, we introduce a novel approach tailored for modeling social network data in user-detection tasks. This method integrates localized social network interactions with the capabilities of large language models. Operating under the premise of social network homophily, which posits that socially connected users share similarities, our approach is designed with scalability and inductive capabilities in mind, avoiding the need for full-graph training. We conduct a thorough evaluation of our method across seven real-world social network datasets, spanning a diverse range of topics and detection tasks, showcasing its applicability to advancing research in computational social science.
(This article belongs to the Topic Social Computing and Social Network Analysis)
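
A rough sketch of the general idea: represent a user by their own text embedding together with an aggregate of local neighbors' embeddings, so that (under homophily) out-of-sample users can be represented without full-graph training. This is an illustration only, not the Social-LLM architecture itself:

```python
import numpy as np

def user_representation(user_text_emb: np.ndarray,
                        neighbor_text_embs: list[np.ndarray]) -> np.ndarray:
    """Concatenate a user's own text embedding with the mean embedding of their
    local neighbors -- a homophily-based, inductive representation that needs no
    full-graph training (sketch of the general idea, not the paper's model)."""
    if neighbor_text_embs:
        neighborhood = np.mean(neighbor_text_embs, axis=0)
    else:
        neighborhood = np.zeros_like(user_text_emb)  # out-of-sample user with no known ties
    return np.concatenate([user_text_emb, neighborhood])

# Illustrative 4-d embeddings; in practice these would come from an LLM encoder.
rep = user_representation(np.ones(4), [np.zeros(4), np.full(4, 2.0)])
print(rep)  # [1. 1. 1. 1. 1. 1. 1. 1.]
```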

14 pages, 882 KB  
Article
Media Narratives of Human-Wildlife Conflict: Iberian Orcas and Boats in the Spanish Press
by José Domingo Villarroel, Joyse Vitorino and Alvaro Antón
Conservation 2025, 5(4), 54; https://doi.org/10.3390/conservation5040054 - 2 Oct 2025
Viewed by 1448
Abstract
The killer whale (Orcinus orca) is a crucial predator in marine ecosystems, affecting prey populations and overall ecosystem health. Since May 2020, Iberian killer whales in the Strait of Gibraltar have interacted unusually with pleasure boats, posing significant maritime safety challenges. Because these whales are recognized as critically endangered by the IUCN, a conservation plan for them has been approved in Spain. This study analyzes media coverage of these interactions, as media can shape public opinion and influence policies regarding human–wildlife conflicts. A total of 107 news articles published between June 2022 and September 2024 in Spanish media were examined, focusing on the interactions between Iberian killer whales and boats. The research included six variables from prior studies to enhance understanding of media representation and its effects on conservation management. Findings suggest that media coverage often limits comprehension of orca behavior and their vulnerable status.
(This article belongs to the Special Issue Social Sciences in Marine Ecology Conservation)

8 pages, 206 KB  
Proceeding Paper
Transitive Self-Reflection: A Fundamental Criterion for Detecting Intelligence
by Krassimir Markov and Velina Slavova
Proceedings 2025, 126(1), 8; https://doi.org/10.3390/proceedings2025126008 - 15 Sep 2025
Viewed by 687
Abstract
This survey investigates the concept of transitive self-reflection as a fundamental criterion for detecting and measuring intelligence. We explore the manifestation of this ability in humans, consider its potential presence in other animals, and discuss the challenges and possibilities of replicating it in artificial intelligence systems. Transitive self-reflection is characterized by an awareness of oneself through complex cognitive abilities rooted in evolutionary mechanisms that are innate in humans. Although the origins of transitive self-reflection cannot be fully replicated in AI, its behavioral characteristics can be analyzed and, to some extent, imitated. The study delves into various forms of transitive self-reflection, including self-recognition, object-mediated self-reflection, and reflective social cognition, highlighting their philosophical roots and recent advancements in cognitive science. We also examine the multifaceted nature of intelligence, encompassing cognitive, emotional, and social dimensions. Despite significant progress, current AI systems lack true transitive self-reflection. Developing AI with this capability requires advances in knowledge representation, reasoning algorithms, and machine learning. Incorporating transitive self-reflection into AI systems holds transformative potential for creating socially adept and more human-like intelligence in machines. This research underscores the importance of transitive self-reflection in advancing both our understanding and the development of intelligent systems.
(This article belongs to the Proceedings of The 1st International Online Conference of the Journal Philosophies)

8 pages, 171 KB  
Proceeding Paper
How Brooks' Behavior-Based Robots Teach Us a Lesson About Knowledge
by Saskia Janina Neumann
Proceedings 2025, 126(1), 5; https://doi.org/10.3390/proceedings2025126005 - 12 Sep 2025
Viewed by 414
Abstract
This work argues that there is more than one form of knowledge. By comparing human cognition with Rodney Brooks’ behavior-based robots, which act without representational content, I show that humans interact with the world through contentful representations, while robots rely on contentless, embodied routines. Drawing on empirical cases—spreading activation, object recognition, agnosia, and vision reconstruction—I argue that humans require content and thus face the hard problem of content. I propose that content is internally generated. Ultimately, I defend a pluralistic view: knowledge can be both contentful and contentless and neither form is inherently superior. Full article
(This article belongs to the Proceedings of The 1st International Online Conference of the Journal Philosophies)

25 pages, 1689 KB  
Article
A Data-Driven Framework for Modeling Car-Following Behavior Using Conditional Transfer Entropy and Dynamic Mode Decomposition
by Poorendra Ramlall and Subhradeep Roy
Appl. Sci. 2025, 15(17), 9700; https://doi.org/10.3390/app15179700 - 3 Sep 2025
Viewed by 945
Abstract
Accurate modeling of car-following behavior is essential for understanding traffic dynamics and enabling predictive control in intelligent transportation systems. This study presents a novel data-driven framework that combines information-theoretic input selection via conditional transfer entropy (CTE) with dynamic mode decomposition with control (DMDc) for identifying and forecasting car-following dynamics. In the first step, CTE is employed to identify the specific vehicles that exert directional influence on a given subject vehicle, thereby systematically determining the relevant control inputs for modeling its behavior. In the second step, DMDc is applied to estimate and predict the dynamics by reconstructing the closed-form expression of the dynamical system governing the subject vehicle’s motion. Unlike conventional machine learning models that typically seek a single generalized representation across all drivers, our framework develops individualized models that explicitly preserve driver heterogeneity. Using both synthetic data from multiple traffic models and real-world naturalistic driving datasets, we demonstrate that DMDc accurately captures nonlinear vehicle interactions and achieves high-fidelity short-term predictions. Analysis of the estimated system matrices reveals that DMDc naturally approximates kinematic relationships, further reinforcing its interpretability. Importantly, this is the first study to apply DMDc to model and predict car-following behavior using real-world driving data. The proposed framework offers a computationally efficient and interpretable tool for traffic behavior analysis, with potential applications in adaptive traffic control, autonomous vehicle planning, and human-driver modeling. Full article
(This article belongs to the Section Transportation and Future Mobility)
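
Dynamic mode decomposition with control (DMDc), the second step of the framework, fits a linear model x_{k+1} ≈ A x_k + B u_k to snapshot matrices by least squares. A generic sketch of that fit (standard DMDc, not the paper's full pipeline, which also includes CTE-based input selection):

```python
import numpy as np

def dmdc(X: np.ndarray, Xp: np.ndarray, U: np.ndarray):
    """Least-squares DMD with control: fit x_{k+1} ≈ A x_k + B u_k.
    X, Xp: state snapshots (n x m), Xp shifted one step ahead; U: inputs (q x m)."""
    Omega = np.vstack([X, U])        # stacked states and control inputs
    G = Xp @ np.linalg.pinv(Omega)   # [A B] via pseudoinverse
    n = X.shape[0]
    return G[:, :n], G[:, n:]        # A (n x n), B (n x q)

# Tiny synthetic check: recover a known 1-D system x_{k+1} = 0.9 x_k + 0.5 u_k.
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
x = np.zeros(201)
for k in range(200):
    x[k + 1] = 0.9 * x[k] + 0.5 * u[k]
A, B = dmdc(x[:-1][None, :], x[1:][None, :], u[None, :])
print(A.round(3), B.round(3))  # ≈ [[0.9]] [[0.5]]
```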

25 pages, 23235 KB  
Article
Multidimensional Representation Dynamics for Abstract Visual Objects in Encoded Tangram Paradigms
by Yongxiang Lian, Shihao Pan and Li Shi
Brain Sci. 2025, 15(9), 941; https://doi.org/10.3390/brainsci15090941 - 28 Aug 2025
Viewed by 865
Abstract
Background: The human visual system is capable of processing large quantities of visual objects with varying levels of abstraction. The brain also exhibits hierarchical integration and learning capabilities that combine various attributes of visual objects (e.g., color, shape, local features, and categories) into coherent representations. However, prevailing theories in visual neuroscience employ simple stimuli or natural images with uncontrolled feature correlations, which constrains the systematic investigation of multidimensional representation dynamics. Methods: In this study, we aimed to bridge this methodological gap by developing a novel large tangram paradigm in visual cognition research and proposing cognitive-associative encoding as a mathematical basis. Critical representation dimensions—including animacy, abstraction level, and local feature density—were computed across a public dataset of over 900 tangrams, enabling the construction of a hierarchical model of visual representation. Results: Neural responses to 85 representative images were recorded using Electroencephalography (n = 24), and subsequent behavioral analyses and neural decoding revealed that distinct representational dimensions are independently encoded and dynamically expressed at different stages of cognitive processing. Furthermore, representational similarity analysis and temporal generalization analysis indicated that higher-order cognitive processes, such as “change of mind,” reflect the selective activation or suppression of local feature processing. Conclusions: These findings demonstrate that tangram stimuli, structured through cognitive-associative encoding, provide a generalizable computational framework for investigating the dynamic stages of human visual object cognition. Full article
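
Representational similarity analysis, mentioned in the Results, compares a neural dissimilarity matrix with a model-based one built from stimulus dimensions such as animacy or local feature density. A generic sketch with random placeholder data (standard RSA, not the study's analysis code):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa(neural_patterns: np.ndarray, model_features: np.ndarray) -> float:
    """Spearman correlation between a neural RDM (pairwise distances of response
    patterns) and a model RDM (pairwise distances of stimulus features)."""
    neural_rdm = pdist(neural_patterns, metric="correlation")
    model_rdm = pdist(model_features, metric="euclidean")
    rho, _ = spearmanr(neural_rdm, model_rdm)
    return rho

# Illustrative random data: 85 stimuli, 64 channels, 3 model dimensions.
rng = np.random.default_rng(0)
print(rsa(rng.standard_normal((85, 64)), rng.standard_normal((85, 3))))
```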

21 pages, 3261 KB  
Article
A Driving-Preference-Aware Framework for Vehicle Lane Change Prediction
by Ying Lyu, Yulin Wang, Huan Liu, Xiaoyu Dong, Yifan He and Yilong Ren
Sensors 2025, 25(17), 5342; https://doi.org/10.3390/s25175342 - 28 Aug 2025
Viewed by 855
Abstract
With the development of intelligent connected vehicle and artificial intelligence technologies, mixed traffic scenarios where autonomous and human-driven vehicles coexist are becoming increasingly common. Autonomous vehicles need to accurately predict the lane change behavior of preceding vehicles to ensure safety. However, lane change behavior of human-driven vehicles is influenced by both environmental factors and driver preferences, which increases its uncertainty and makes prediction more difficult. To address this challenge, this paper focuses on the mining of driving preferences and the prediction of lane change behavior. We clarify the definition of driving preference and its relationship with driving style and construct a representation of driving operations based on vehicle dynamics parameters and statistical features. A preference feature extractor based on the SimCLR contrastive learning framework is designed to capture high-dimensional driving preference features through unsupervised learning, effectively distinguishing between aggressive, normal, and conservative driving styles. Furthermore, a dual-branch lane change prediction model is proposed, which fuses explicit temporal features of vehicle states with implicit driving preference features, enabling efficient integration of multi-source information. Experimental results on the HighD dataset show that the proposed model significantly outperforms traditional models such as Transformer and LSTM in lane change prediction accuracy, providing technical support for improving the safety and human-likeness of autonomous driving decision-making. Full article
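
The dual-branch design described here, fusing explicit temporal features of vehicle states with an implicit driving-preference embedding, can be sketched as follows; the LSTM encoder, layer sizes, and three-class output are assumptions for illustration, not the paper's configuration:

```python
import torch
import torch.nn as nn

class DualBranchLaneChange(nn.Module):
    """Sketch of a dual-branch predictor: one branch encodes the vehicle-state
    time series, the other ingests a (pretrained, e.g. contrastively learned)
    driving-preference embedding; fused features predict left / keep / right."""
    def __init__(self, state_dim=6, pref_dim=32, hidden=64):
        super().__init__()
        self.temporal = nn.LSTM(state_dim, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden + pref_dim, hidden),
                                  nn.ReLU(),
                                  nn.Linear(hidden, 3))

    def forward(self, states, pref_emb):
        _, (h, _) = self.temporal(states)       # h: (1, batch, hidden)
        fused = torch.cat([h[-1], pref_emb], dim=-1)
        return self.head(fused)                 # logits over lane-change classes

logits = DualBranchLaneChange()(torch.randn(8, 50, 6), torch.randn(8, 32))
print(logits.shape)  # torch.Size([8, 3])
```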

21 pages, 6890 KB  
Article
SOAR-RL: Safe and Open-Space Aware Reinforcement Learning for Mobile Robot Navigation in Narrow Spaces
by Minkyung Jun, Piljae Park and Hoeryong Jung
Sensors 2025, 25(17), 5236; https://doi.org/10.3390/s25175236 - 22 Aug 2025
Viewed by 1598
Abstract
As human–robot shared service environments become increasingly common, autonomous navigation in narrow space environments (NSEs), such as indoor corridors and crosswalks, becomes challenging. Mobile robots must go beyond reactive collision avoidance and interpret surrounding risks to proactively select safer routes in dynamic and spatially constrained environments. This study proposes a deep reinforcement learning (DRL)-based navigation framework that enables mobile robots to interact with pedestrians while identifying and traversing open and safe spaces. The framework fuses 3D LiDAR and RGB camera data to recognize individual pedestrians and estimate their position and velocity in real time. Based on this, a human-aware occupancy map (HAOM) is constructed, combining both static obstacles and dynamic risk zones, and used as the input state for DRL. To promote proactive and safe navigation behaviors, we design a state representation and reward structure that guide the robot toward less risky areas, overcoming the limitations of traditional approaches. The proposed method is validated through a series of simulation experiments, including straight, L-shaped, and cross-shaped layouts, designed to reflect typical narrow space environments. Various dynamic obstacle scenarios were incorporated during both training and evaluation. The results demonstrate that the proposed approach significantly improves navigation success rates and reduces collision incidents compared to conventional navigation planners across diverse NSE conditions. Full article
(This article belongs to the Section Navigation and Positioning)
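
One way to picture a reward structure that guides the robot toward less risky, more open areas is to combine goal progress, an open-space bonus, and a pedestrian-risk penalty. The sketch below is an assumption-laden illustration, not the reward function used in SOAR-RL:

```python
def navigation_reward(progress_m, min_clearance_m, pedestrian_risk, collided,
                      w_progress=1.0, w_open=0.5, w_risk=1.5):
    """Illustrative reward shaping: favor goal progress and open space, penalize
    proximity to pedestrian risk zones and collisions. All weights and terms are
    assumptions for demonstration, not the paper's reward function."""
    if collided:
        return -10.0
    open_space_bonus = w_open * min(min_clearance_m, 2.0) / 2.0   # saturates at 2 m
    risk_penalty = w_risk * pedestrian_risk                        # risk in [0, 1]
    return w_progress * progress_m + open_space_bonus - risk_penalty

print(navigation_reward(progress_m=0.3, min_clearance_m=1.0,
                        pedestrian_risk=0.2, collided=False))  # 0.3 + 0.25 - 0.3 = 0.25
```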

16 pages, 707 KB  
Article
High-Resolution Human Keypoint Detection: A Unified Framework for Single and Multi-Person Settings
by Yuhuai Lin, Kelei Li and Haihua Wang
Algorithms 2025, 18(8), 533; https://doi.org/10.3390/a18080533 - 21 Aug 2025
Viewed by 1741
Abstract
Human keypoint detection has become a fundamental task in computer vision, underpinning a wide range of downstream applications such as action recognition, intelligent surveillance, and human–computer interaction. Accurate localization of keypoints is crucial for understanding human posture, behavior, and interactions in various environments. In this paper, we propose a deep-learning-based human skeletal keypoint detection framework that leverages a High-Resolution Network (HRNet) to achieve robust and precise keypoint localization. Our method maintains high-resolution representations throughout the entire network, enabling effective multi-scale feature fusion, without sacrificing spatial details. This approach preserves the fine-grained spatial information that is often lost in conventional downsampling-based methods. To evaluate its performance, we conducted extensive experiments on the COCO dataset, where our approach achieved competitive performance in terms of Average Precision (AP) and Average Recall (AR), outperforming several state-of-the-art methods. Furthermore, we extended our pipeline to support multi-person keypoint detection in real-time scenarios, ensuring scalability for complex environments. Experimental results demonstrated the effectiveness of our method in both single-person and multi-person settings, providing a comprehensive and flexible solution for various pose estimation tasks in dynamic real-world applications. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
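
Heatmap-based pose estimators such as HRNet pipelines typically decode each keypoint as the argmax of its predicted heatmap mapped back to image coordinates. A generic decoding sketch (not the paper's implementation; real pipelines usually add sub-pixel refinement):

```python
import numpy as np

def decode_heatmaps(heatmaps: np.ndarray, image_size: tuple[int, int]):
    """Decode (K, h, w) keypoint heatmaps into image-space (x, y, confidence)
    via per-keypoint argmax, a generic step in heatmap-based pose estimation."""
    K, h, w = heatmaps.shape
    img_h, img_w = image_size
    keypoints = []
    for k in range(K):
        idx = heatmaps[k].argmax()
        y, x = divmod(idx, w)
        keypoints.append((x * img_w / w, y * img_h / h, float(heatmaps[k, y, x])))
    return keypoints

# Illustrative: 17 COCO keypoints on a 64x48 heatmap for a 256x192 crop.
hm = np.random.rand(17, 64, 48)
print(decode_heatmaps(hm, (256, 192))[0])
```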

27 pages, 1766 KB  
Article
A Novel Optimized Hybrid Deep Learning Framework for Mental Stress Detection Using Electroencephalography
by Maithili Shailesh Andhare, T. Vijayan, B. Karthik and Shabana Urooj
Brain Sci. 2025, 15(8), 835; https://doi.org/10.3390/brainsci15080835 - 4 Aug 2025
Viewed by 967
Abstract
Mental stress is a psychological or emotional strain that typically arises from threatening, challenging, or overwhelming conditions and affects human behavior. Various factors, such as professional, environmental, and personal pressures, often trigger it. In recent years, various deep learning (DL)-based schemes using electroencephalograms (EEGs) have been proposed. However, effective DL-based detection remains challenging because of intricate network structures, class imbalance, poor feature representation, low frequency resolution, and the complexity of multi-channel signal processing. This paper presents a novel hybrid DL framework, BDDNet, which combines a deep convolutional neural network (DCNN), bidirectional long short-term memory (BiLSTM), and a deep belief network (DBN). BDDNet provides superior spectral–temporal feature representation and better captures long-term dependencies in the local and global features of EEGs. BDDNet accepts multiple EEG features (MEFs) that provide the spectral and time-domain characteristics of EEGs. A novel improved crow search algorithm (ICSA) is presented for channel selection to minimize the computational complexity of multichannel stress detection. Further, a novel employee optimization algorithm (EOA) is utilized for hyper-parameter optimization of the hybrid BDDNet to enhance training performance. The outcomes of BDDNet were assessed using the public DEAP dataset. Compared with traditional techniques, BDDNet-ICSA offers improved recall (97.6%), precision (97.6%), F1-score (97.6%), selectivity (96.9%), negative predictive value (NPV, 96.9%), and accuracy (97.3%).
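
The reported metrics all derive from the binary confusion matrix; a quick reference sketch with illustrative counts (not the study's confusion matrix):

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard binary-classification metrics from confusion-matrix counts."""
    recall = tp / (tp + fn)                    # sensitivity
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    selectivity = tn / (tn + fp)               # specificity
    npv = tn / (tn + fn)                       # negative predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {"recall": recall, "precision": precision, "f1": f1,
            "selectivity": selectivity, "npv": npv, "accuracy": accuracy}

# Illustrative counts only, not data from the study.
print(binary_metrics(tp=97, fp=3, tn=96, fn=3))
```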

27 pages, 9910 KB  
Article
Predicting the Next Location of Urban Individuals via a Representation-Enhanced Multi-View Learning Network
by Maoqi Lun, Peixiao Wang, Sheng Wu, Hengcai Zhang, Shifen Cheng and Feng Lu
ISPRS Int. J. Geo-Inf. 2025, 14(8), 302; https://doi.org/10.3390/ijgi14080302 - 2 Aug 2025
Cited by 1 | Viewed by 1103
Abstract
Accurately predicting the next location of urban individuals is a central issue in human mobility research. Human mobility exhibits diverse patterns, requiring the integration of spatiotemporal contexts for location prediction. In this context, multi-view learning has become a prominent method in location prediction. Despite notable advances, current methods still face challenges in effectively capturing non-spatial proximity of regional preferences, complex temporal periodicity, and the ambiguity of location semantics. To address these challenges, we propose a representation-enhanced multi-view learning network (ReMVL-Net) for location prediction. Specifically, we propose a community-enhanced spatial representation that transcends geographic proximity to capture latent mobility patterns. In addition, we introduce a multi-granular enhanced temporal representation to model the multi-level periodicity of human mobility and design a rule-based semantic recognition method to enrich location semantics. We evaluate the proposed model using mobile phone data from Fuzhou. Experimental results show a 2.94% improvement in prediction accuracy over the best-performing baseline. Further analysis reveals that community space plays a key role in narrowing the candidate location set. Moreover, we observe that prediction difficulty is strongly influenced by individual travel behaviors, with more regular activity patterns being easier to predict. Full article
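
Next-location prediction is commonly scored with top-k accuracy over candidate locations, the kind of metric behind the reported 2.94% improvement; the sketch below uses random placeholder scores, not data from the study:

```python
import numpy as np

def top_k_accuracy(scores: np.ndarray, true_locations: np.ndarray, k: int = 1) -> float:
    """scores: (n_samples, n_locations) predicted scores per candidate location;
    true_locations: (n_samples,) index of the actually visited location. Standard Acc@k."""
    topk = np.argsort(-scores, axis=1)[:, :k]
    hits = (topk == true_locations[:, None]).any(axis=1)
    return hits.mean()

# Illustrative random scores for 1000 trips over 200 candidate locations.
rng = np.random.default_rng(0)
scores = rng.random((1000, 200))
truth = rng.integers(0, 200, size=1000)
print(top_k_accuracy(scores, truth, k=5))  # ≈ 5/200 = 0.025 for random scores
```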