Search Results (52)

Search Parameters:
Keywords = egocentric networks

14 pages, 867 KiB  
Article
(In)Visible Nuances: Analytical Methods for a Relational Impact Assessment of Anti-Poverty Projects
by M. Licia Paglione
Societies 2025, 15(4), 105; https://doi.org/10.3390/soc15040105 - 18 Apr 2025
Viewed by 362
Abstract
In recent social science debates, poverty is seen as a multidimensional phenomenon, not only economic, but also psychological, educational, moral, and relational. The empirical observation and analysis of this latter dimension and its qualities represent a sociological challenge, especially in assessing the integral effectiveness of social projects. As part of this debate, this article proposes an analytical method—based on Social Network Analysis, according to the egocentric or personal approach—and describes its use during an empirical “relational impact assessment” of a specific anti-poverty project in the Northwest region of Argentina. Analysis of the data—collected longitudinally through questionnaires—highlights the changes in the personal “relational configurations” of small entrepreneurs in the tourist area, i.e., the beneficiaries of the project, while also highlighting the emergence of “relational goods”. In this way, this article offers an analytical method to evaluate the “relational impact” of anti-poverty projects in quali–quantitative terms.

17 pages, 2231 KiB  
Article
Brain Functional Connectivity During First- and Third-Person Visual Imagery
by Ekaterina Pechenkova, Mary Rachinskaya, Varvara Vasilenko, Olesya Blazhenkova and Elena Mershina
Vision 2025, 9(2), 30; https://doi.org/10.3390/vision9020030 - 6 Apr 2025
Viewed by 1317
Abstract
The ability to adopt different perspectives, or vantage points, is fundamental to human cognition, affecting reasoning, memory, and imagery. While the first-person perspective allows individuals to experience a scene through their own eyes, the third-person perspective involves an external viewpoint, which is thought to demand greater cognitive effort and different neural processing. Despite the frequent use of perspective switching across various contexts, including modern media and in therapeutic settings, the neural mechanisms differentiating these two perspectives in visual imagery remain largely underexplored. In an exploratory fMRI study, we compared both activation and task-based functional connectivity underlying first-person and third-person perspective taking in the same 26 participants performing two spatial egocentric imagery tasks, namely imaginary tennis and house navigation. No significant differences in activation emerged between the first-person and third-person conditions. The network-based statistics analysis revealed a small subnetwork of the early visual and posterior temporal areas that manifested stronger functional connectivity during the first-person perspective, suggesting a closer sensory recruitment loop, or, in different terms, a loop between long-term memory and the “visual buffer” circuits. The absence of a strong neural distinction between the first-person and third-person perspectives suggests that third-person imagery may not fully decenter individuals from the scene, as is often assumed.
(This article belongs to the Special Issue Visual Mental Imagery System: How We Image the World)

22 pages, 12110 KiB  
Article
Learning a Memory-Enhanced Multi-Stage Goal-Driven Network for Egocentric Trajectory Prediction
by Xiuen Wu, Sien Li, Tao Wang, Ge Xu and George Papageorgiou
Biomimetics 2024, 9(8), 462; https://doi.org/10.3390/biomimetics9080462 - 31 Jul 2024
Viewed by 1797
Abstract
We propose a memory-enhanced multi-stage goal-driven network (ME-MGNet) for egocentric trajectory prediction in dynamic scenes. Our key idea is to build a scene layout memory inspired by human perception in order to transfer knowledge from prior experiences to the current scenario in a top-down manner. Specifically, given a test scene, we first perform scene-level matching based on our scene layout memory to retrieve trajectories from visually similar scenes in the training data. This is followed by trajectory-level matching and memory filtering to obtain a set of goal features. In addition, a multi-stage goal generator takes these goal features and uses a backward decoder to produce several stage goals. Finally, we integrate the above steps into a conditional autoencoder and a forward decoder to produce trajectory prediction results. Experiments on three public datasets, JAAD, PIE, and KITTI, and a new egocentric trajectory prediction dataset, Fuzhou DashCam (FZDC), validate the efficacy of the proposed method.
(This article belongs to the Special Issue Biomimetics and Bioinspired Artificial Intelligence Applications)

31 pages, 372 KiB  
Article
What about Your Friends? Friendship Networks and Mental Health in Critical Consciousness
by Christopher M. Wegemer, Emily Maurin-Waters, M. Alejandra Arce, Elan C. Hope and Laura Wray-Lake
Youth 2024, 4(2), 854-884; https://doi.org/10.3390/youth4020056 - 7 Jun 2024
Cited by 3 | Viewed by 3934
Abstract
Scholars have documented positive and negative relationships between adolescents’ critical consciousness and mental health. This study aims to clarify the role of friendship networks in these associations. Using egocentric network data from a nationwide adolescent sample (N = 984, 55.0% female, 23.9% nonbinary, 72.7% non-white), regression analyses examined whether adolescents’ psychological distress and flourishing were predicted by their friend group’s average critical consciousness and by the difference between adolescents and their friends on critical consciousness dimensions (sociopolitical action, critical agency, and critical reflection), accounting for network and demographic covariates. Higher friend group critical consciousness positively predicted flourishing, and higher friend group sociopolitical action negatively predicted psychological distress. Adolescents who participated in sociopolitical action more frequently than their friends had higher psychological distress and lower flourishing. Those with higher agency than their friends had lower flourishing. At the individual level, adolescents’ sociopolitical action predicted higher psychological distress and flourishing, critical agency predicted higher flourishing, and critical reflection predicted higher psychological distress and lower flourishing. Adolescent mental health is uniquely related to friends’ critical consciousness. Findings highlight the utility of social network analyses for understanding the social mechanisms that underlie relationships between critical consciousness and mental health.
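
As an illustration of the ego-level predictors described in this abstract, the sketch below computes a friend group's mean score and the ego-friend difference for one respondent. All names and numbers are hypothetical; the study's actual multilevel regression model is not reproduced here.

```python
# Sketch of two egocentric-network predictors: the friend group's mean
# critical consciousness score and the ego-minus-friends difference.
# Variable names and values are illustrative, not from the study.

def ego_predictors(ego_score, friend_scores):
    """Return (friend-group mean, ego minus friend-group mean) for one ego."""
    mean_friends = sum(friend_scores) / len(friend_scores)
    return mean_friends, ego_score - mean_friends

# Example: an adolescent with a sociopolitical action score of 4.0 whose
# three named friends score 2.0, 3.0, and 4.0.
group_mean, difference = ego_predictors(4.0, [2.0, 3.0, 4.0])
```

In a full analysis these two quantities would enter the regression alongside network and demographic covariates, one pair per critical consciousness dimension.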
15 pages, 641 KiB  
Article
A Multi-Modal Egocentric Activity Recognition Approach towards Video Domain Generalization
by Antonios Papadakis and Evaggelos Spyrou
Sensors 2024, 24(8), 2491; https://doi.org/10.3390/s24082491 - 12 Apr 2024
Cited by 4 | Viewed by 2438
Abstract
Egocentric activity recognition is a prominent computer vision task that is based on the use of wearable cameras. Since egocentric videos are captured through the perspective of the person wearing the camera, her/his body motions severely complicate the video content, imposing several challenges. In this work we propose a novel approach for domain-generalized egocentric human activity recognition. Typical approaches use a large amount of training data, aiming to cover all possible variants of each action. Moreover, several recent approaches have attempted to handle discrepancies between domains with a variety of costly and mostly unsupervised domain adaptation methods. In our approach we show that through simple manipulation of available source domain data and with minor involvement from the target domain, we are able to produce robust models, able to adequately predict human activity in egocentric video sequences. To this end, we introduce a novel three-stream deep neural network architecture combining elements of vision transformers and residual neural networks which are trained using multi-modal data. We evaluate the proposed approach using a challenging, egocentric video dataset and demonstrate its superiority over recent, state-of-the-art research works.

15 pages, 2895 KiB  
Article
Patterns in Temporal Networks with Higher-Order Egocentric Structures
by Beatriz Arregui-García, Antonio Longa, Quintino Francesco Lotito, Sandro Meloni and Giulia Cencetti
Entropy 2024, 26(3), 256; https://doi.org/10.3390/e26030256 - 13 Mar 2024
Cited by 6 | Viewed by 2454
Abstract
The analysis of complex and time-evolving interactions, such as those within social dynamics, represents a current challenge in the science of complex systems. Temporal networks stand as a suitable tool for schematizing such systems, encoding all the interactions appearing between pairs of individuals in discrete time. Over the years, network science has developed many measures to analyze and compare temporal networks. Some of them imply a decomposition of the network into small pieces of interactions; i.e., only involving a few nodes for a short time range. Along this line, a possible way to decompose a network is to assume an egocentric perspective; i.e., to consider for each node the time evolution of its neighborhood. This was proposed by Longa et al. by defining the “egocentric temporal neighborhood”, which has proven to be a useful tool for characterizing temporal networks relative to social interactions. However, this definition neglects group interactions (quite common in social domains), as they are always decomposed into pairwise connections. A more general framework that also allows considering larger interactions is represented by higher-order networks. Here, we generalize the description of social interactions to hypergraphs. Consequently, we generalize their decomposition into “hyper egocentric temporal neighborhoods”. This enables the analysis of social interactions, facilitating comparisons between different datasets or nodes within a dataset, while considering the intrinsic complexity presented by higher-order interactions. Even if we limit the order of interactions to the second order (triplets of nodes), our results reveal the importance of a higher-order representation. In fact, our analyses show that second-order structures are responsible for the majority of the variability at all scales: between datasets, amongst nodes, and over time.
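
The "egocentric temporal neighborhood" idea summarized in the abstract above can be sketched as follows: for each node, collect the time-ordered sequence of its neighbor sets from a list of timestamped pairwise interactions. This is a minimal illustration under assumed data shapes, not the authors' implementation; the hypergraph ("hyper ego") variant would store whole group interactions rather than pairwise neighbors.

```python
from collections import defaultdict

# Minimal sketch of an egocentric temporal neighborhood: for each node,
# the sequence of its neighbor sets over discrete time steps, built from
# timestamped pairwise interactions (t, u, v).

def egocentric_neighborhoods(edges):
    """edges: iterable of (t, u, v) timestamped interactions.
    Returns {node: [neighbor set at each active time, in time order]}."""
    neigh = defaultdict(lambda: defaultdict(set))
    for t, u, v in edges:
        neigh[u][t].add(v)  # interactions are undirected: record both ways
        neigh[v][t].add(u)
    return {node: [frozenset(by_time[t]) for t in sorted(by_time)]
            for node, by_time in neigh.items()}

# Node "a" talks to "b" and "c" at time 0, then only to "b" at time 1.
timeline = egocentric_neighborhoods([(0, "a", "b"), (0, "a", "c"), (1, "a", "b")])
```

A hyper-ego version would replace the inner neighbor sets with the hyperedges (groups) the ego belongs to at each time step.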

19 pages, 310 KiB  
Article
A Neoteric Paradigm to Improve Food Security: The Predictors of Women’s Influence on Egocentric Networks’ Food Waste Behaviors
by Karissa Palmer, Robert Strong and Chanda Elbert
Nutrients 2024, 16(6), 788; https://doi.org/10.3390/nu16060788 - 10 Mar 2024
Viewed by 2056
Abstract
COVID-19, the most recent multi-dimensional global food crisis, challenged leadership and impacted individuals’ personal networks. Two cross-sectional surveys were disseminated to women involved in their state’s women’s leadership committee to understand food waste behaviors. An egocentric network analysis was chosen as the methodology to better understand personal advice network characteristics and to examine the impacts of Farm Bureau women’s leadership committee members’ advice networks on their food waste behavior. A multilevel model was conducted to identify factors related to respondents leading their network members toward positive food waste decisions. Independent variables at the individual (e.g., each respondent’s race and generation), dyadic (e.g., how long the respondent had known each member of her network), and network (e.g., the proportion of the respondent’s network that was female) levels were included in the model. Women were more likely to report connections with people they led to positive food waste behaviors and food security when they had higher food waste sum scores, they were part of Generation X, the network member they led to more positive food waste behaviors was a friend, and there were fewer women in their advice networks.
(This article belongs to the Special Issue The Optimal Diet for a Sustainable Future)
21 pages, 179914 KiB  
Article
Integrating Egocentric and Robotic Vision for Object Identification Using Siamese Networks and Superquadric Estimations in Partial Occlusion Scenarios
by Elisabeth Menendez, Santiago Martínez, Fernando Díaz-de-María and Carlos Balaguer
Biomimetics 2024, 9(2), 100; https://doi.org/10.3390/biomimetics9020100 - 8 Feb 2024
Cited by 4 | Viewed by 2207
Abstract
This paper introduces a novel method that enables robots to identify objects based on user gaze, tracked via eye-tracking glasses. This is achieved without prior knowledge of the objects’ categories or their locations and without external markers. The method integrates a two-part system: a category-agnostic object shape and pose estimator using superquadrics and Siamese networks. The superquadrics-based component estimates the shapes and poses of all objects, while the Siamese network matches the object targeted by the user’s gaze with the robot’s viewpoint. Both components are effectively designed to function in scenarios with partial occlusions. A key feature of the system is the user’s ability to move freely around the scenario, allowing dynamic object selection via gaze from any position. The system is capable of handling significant viewpoint differences between the user and the robot and adapts easily to new objects. In tests under partial occlusion conditions, the Siamese networks demonstrated an 85.2% accuracy in aligning the user-selected object with the robot’s viewpoint. This gaze-based Human–Robot Interaction approach demonstrates its practicality and adaptability in real-world scenarios.
(This article belongs to the Special Issue Intelligent Human-Robot Interaction: 2nd Edition)

20 pages, 2103 KiB  
Article
Fusion of Appearance and Motion Features for Daily Activity Recognition from Egocentric Perspective
by Mohd Haris Lye, Nouar AlDahoul and Hezerul Abdul Karim
Sensors 2023, 23(15), 6804; https://doi.org/10.3390/s23156804 - 30 Jul 2023
Cited by 2 | Viewed by 1383
Abstract
Videos from a first-person or egocentric perspective offer a promising tool for recognizing various activities related to daily living. In the egocentric perspective, the video is obtained from a wearable camera, and this enables the capture of the person’s activities from a consistent viewpoint. Recognition of activity using a wearable sensor is challenging for various reasons, such as motion blur and large variations. The existing methods are based on extracting handcrafted features from video frames to represent the contents. These features are domain-dependent, where features that are suitable for a specific dataset may not be suitable for others. In this paper, we propose a novel solution to recognize daily living activities from a pre-segmented video clip. The pre-trained convolutional neural network (CNN) model VGG16 is used to extract visual features from sampled video frames, which are then aggregated by the proposed pooling scheme. The proposed solution combines appearance and motion features extracted from video frames and optical flow images, respectively. The methods of mean and max spatial pooling (MMSP) and max mean temporal pyramid (TPMM) pooling are proposed to compose the final video descriptor. The feature is applied to a linear support vector machine (SVM) to recognize the type of activities observed in the video clip. The evaluation of the proposed solution was performed on three public benchmark datasets. We performed studies to show the advantage of aggregating appearance and motion features for daily activity recognition. The results show that the proposed solution is promising for recognizing activities of daily living. Compared to several methods on three public datasets, the proposed MMSP–TPMM method produces higher classification performance in terms of accuracy (90.38% with the LENA dataset, 75.37% with the ADL dataset, 96.08% with the FPPA dataset) and average per-class precision (AP) (58.42% with the ADL dataset and 96.11% with the FPPA dataset).
(This article belongs to the Special Issue Applications of Body Worn Sensors and Wearables)
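
The mean-and-max spatial pooling step summarized in the abstract above can be illustrated with a minimal sketch: the video descriptor concatenates the element-wise mean and the element-wise max of per-frame feature vectors. This is a simplified illustration of the pooling idea only, not the paper's full MMSP–TPMM pipeline.

```python
# Sketch of mean-and-max pooling over per-frame feature vectors, in the
# spirit of the MMSP step: concatenate the element-wise mean and the
# element-wise max across sampled frames to form one video descriptor.

def mean_max_pool(frame_features):
    """frame_features: list of equal-length per-frame feature vectors."""
    n = len(frame_features)
    mean_vec = [sum(col) / n for col in zip(*frame_features)]
    max_vec = [max(col) for col in zip(*frame_features)]
    return mean_vec + max_vec  # concatenated descriptor, length 2 * dim

# Two frames with 2-dimensional features each.
descriptor = mean_max_pool([[1.0, 4.0], [3.0, 2.0]])
```

In the paper's pipeline the pooled descriptors for appearance and optical-flow streams would then be fed to a linear SVM.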

22 pages, 10163 KiB  
Article
Closed-Chain Inverse Dynamics for the Biomechanical Analysis of Manual Material Handling Tasks through a Deep Learning Assisted Wearable Sensor Network
by Riccardo Bezzini, Luca Crosato, Massimo Teppati Losè, Carlo Alberto Avizzano, Massimo Bergamasco and Alessandro Filippeschi
Sensors 2023, 23(13), 5885; https://doi.org/10.3390/s23135885 - 25 Jun 2023
Cited by 4 | Viewed by 3334
Abstract
Despite the automation of many industrial and logistics processes, human workers are still often involved in the manual handling of loads. These activities lead to many work-related disorders that reduce the quality of life and the productivity of aged workers. A biomechanical analysis of such activities is the basis for a detailed estimation of the biomechanical overload, thus enabling focused prevention actions. Thanks to wearable sensor networks, it is now possible to analyze human biomechanics by an inverse dynamics approach in ecological conditions. The purposes of this study are the conceptualization, formulation, and implementation of a deep learning-assisted fully wearable sensor system for an online evaluation of the biomechanical effort that an operator exerts during a manual material handling task. In this paper, we show a novel, computationally efficient algorithm, implemented in ROS, to analyze the biomechanics of the human musculoskeletal system by an inverse dynamics approach. We also propose a method for estimating the load and its distribution, relying on an egocentric camera and deep learning-based object recognition. This method is suitable for objects of known weight, as is often the case in logistics. Kinematic data, along with foot contact information, are provided by a fully wearable sensor network composed of inertial measurement units. The results show good accuracy and robustness of the system for object detection and grasp recognition, thus providing reliable load estimation for a high-impact field such as logistics. The outcome of the biomechanical analysis is consistent with the literature. However, improvements in gait segmentation are necessary to reduce discontinuities in the estimated lower limb articular wrenches.

21 pages, 4339 KiB  
Article
Recurrent Network Solutions for Human Posture Recognition Based on Kinect Skeletal Data
by Bruna Maria Vittoria Guerra, Stefano Ramat, Giorgio Beltrami and Micaela Schmid
Sensors 2023, 23(11), 5260; https://doi.org/10.3390/s23115260 - 1 Jun 2023
Cited by 10 | Viewed by 2137
Abstract
Ambient Assisted Living (AAL) systems are designed to provide unobtrusive and user-friendly support in daily life and can be used for monitoring frail people based on various types of sensors, including wearables and cameras. Although cameras can be perceived as intrusive in terms of privacy, low-cost RGB-D devices (i.e., Kinect V2) that extract skeletal data can partially overcome these limits. In addition, deep learning-based algorithms, such as Recurrent Neural Networks (RNNs), can be trained on skeletal tracking data to automatically identify different human postures in the AAL domain. In this study, we investigate the performance of two RNN models (2BLSTM and 3BGRU) in identifying daily living postures and potentially dangerous situations in a home monitoring system, based on 3D skeletal data acquired with Kinect V2. We tested the RNN models with two different feature sets: one consisting of eight human-crafted kinematic features selected by a genetic algorithm, and another consisting of 52 ego-centric 3D coordinates of each considered skeleton joint, plus the subject’s distance from the Kinect V2. To improve the generalization ability of the 3BGRU model, we also applied a data augmentation method to balance the training dataset. With this last solution we reached an accuracy of 88%, the best we achieved so far.
(This article belongs to the Section Biomedical Sensors)

17 pages, 12583 KiB  
Article
Cross-Viewpoint Semantic Mapping: Integrating Human and Robot Perspectives for Improved 3D Semantic Reconstruction
by László Kopácsi, Benjámin Baffy, Gábor Baranyi, Joul Skaf, Gábor Sörös, Szilvia Szeier, András Lőrincz and Daniel Sonntag
Sensors 2023, 23(11), 5126; https://doi.org/10.3390/s23115126 - 27 May 2023
Cited by 3 | Viewed by 2797
Abstract
Allocentric semantic 3D maps are highly useful for a variety of human–machine interaction related tasks, since egocentric viewpoints can be derived by the machine for the human partner. Class labels and map interpretations, however, may differ or could be missing for the participants due to their different perspectives, particularly when considering the viewpoint of a small robot, which differs significantly from that of a human. In order to overcome this issue, and to establish common ground, we extend an existing real-time 3D semantic reconstruction pipeline with semantic matching across human and robot viewpoints. We use deep recognition networks, which usually perform well from higher (i.e., human) viewpoints but are inferior from lower viewpoints, such as that of a small robot. We propose several approaches for acquiring semantic labels for images taken from unusual perspectives. We start with a partial 3D semantic reconstruction from the human perspective that we transfer and adapt to the small robot’s perspective using superpixel segmentation and the geometry of the surroundings. The quality of the reconstruction is evaluated in the Habitat simulator and in a real environment using a robot car with an RGBD camera. We show that the proposed approach provides high-quality semantic segmentation from the robot’s perspective, with accuracy comparable to the original one. In addition, we exploit the gained information to improve the recognition performance of the deep network for the lower viewpoints, and we show that the small robot alone is capable of generating high-quality semantic maps for the human partner. The computations are close to real-time, so the approach enables interactive applications.

23 pages, 5262 KiB  
Article
Adapting to Crisis: The Governance of Public Services for Migrants and Refugees during COVID-19 in Four European Cities
by Federica Zardo, Lydia Rössl and Christina Khoury
Soc. Sci. 2023, 12(4), 213; https://doi.org/10.3390/socsci12040213 - 4 Apr 2023
Cited by 4 | Viewed by 2777
Abstract
The lack of access to basic services was among the key effects of COVID-19 on migrants and refugees. This paper examines the governance dynamics behind public services for migrants and refugees to understand how COVID-19 has impacted them and what accounts for different levels of adaptive capacity. It employs a mixed-methods approach, using egocentric network analysis and qualitative interviews to compare the service ecosystems of four European cities (Birmingham, Larissa, Malaga, and Palermo) from 2020 to 2022. The paper explores the impact of two conditions on the service ecosystems’ ability to adapt to the pandemic: the structure of governance and the presence of dynamic capabilities. We argue that the ability of local governments to manage pandemic challenges is highly dependent on the formal distribution of comprehensive competences across various levels (the structure of governance) and the quality of network cooperation between different administrations and civil society (dynamic capabilities). Our analysis reveals that while both conditions are critical for the level of adaptive capacity in the provision of public services, the structure of governance is more likely to act as a constraint on or a trigger for coping strategies.
(This article belongs to the Section Contemporary Politics and Society)

24 pages, 14566 KiB  
Article
YOLO Series for Human Hand Action Detection and Classification from Egocentric Videos
by Hung-Cuong Nguyen, Thi-Hao Nguyen, Rafał Scherer and Van-Hung Le
Sensors 2023, 23(6), 3255; https://doi.org/10.3390/s23063255 - 20 Mar 2023
Cited by 21 | Viewed by 8110
Abstract
Hand detection and classification is a very important pre-processing step in building applications based on three-dimensional (3D) hand pose estimation and hand activity recognition. To automatically limit the hand data area on egocentric vision (EV) datasets, and especially to examine the development and performance of the “You Only Look Once” (YOLO) network over the past seven years, we propose a study comparing the efficiency of hand detection and classification based on the YOLO-family networks. This study addresses the following problems: (1) systematizing all architectures, advantages, and disadvantages of YOLO-family networks from version (v)1 to v7; (2) preparing ground-truth data for pre-trained models and evaluation models of hand detection and classification on EV datasets (FPHAB, HOI4D, RehabHand); (3) fine-tuning the hand detection and classification models based on the YOLO-family networks and evaluating hand detection and classification on the EV datasets. Hand detection and classification results on the YOLOv7 network and its variations were the best across all three datasets. The results of the YOLOv7-w6 network are as follows: FPHAB, P = 97% with an IoU threshold of 0.5; HOI4D, P = 95% with an IoU threshold of 0.5; RehabHand, P > 95% with an IoU threshold of 0.5. The processing speed of YOLOv7-w6 is 60 fps at a resolution of 1280 × 1280 pixels, and that of YOLOv7 is 133 fps at a resolution of 640 × 640 pixels.

18 pages, 2114 KiB  
Article
Towards an Integrated Framework for Information Exchange Network of Construction Projects
by Yingnan Yang, Xianjie Liu, Hongming Xie and Zhicheng Zhang
Buildings 2023, 13(3), 763; https://doi.org/10.3390/buildings13030763 - 14 Mar 2023
Cited by 3 | Viewed by 2184
Abstract
The application of building information modeling (BIM) disrupts the interaction between individuals and industry organizations in the time and spatial dimensions. However, the temporal dimension of interaction is usually a neglected factor in the application of social network analysis (SNA) to project communication networks. Additionally, the social incorporation of BIM enables full collaboration across multiple disciplines and stakeholders, which calls for multi-dimensional research agendas and the practice of different network models. To fill this gap, this study develops an integrated framework to guide the analysis of information exchange in construction projects. According to the findings, three network models can be used for network analysis at the industry, project, and individual levels. It is worth noting that the majority of recent attention to project communication networks has focused on the industry and project levels. Network analysis at the individual level is under-researched, so we explore how to extend the scope of network analysis from the project and industry levels to the individual level. An ego network model was thus proposed to explore project communication networks at the individual level, from which network indices were derived. The outputs imply that the proposed model has the potential to explore egocentric networks in construction projects.
(This article belongs to the Special Issue Research on BIM-Based Building Process Management)
