Article

Neural Signatures of Human Risk Perception in Post-Disaster Scenarios: Insights for Rapid Building Damage Assessment

1 Department of Disaster Mitigation for Structures, Tongji University, Shanghai 200092, China
2 Earthquake Engineering Research and Test Center, Guangzhou University, Guangzhou 510006, China
* Author to whom correspondence should be addressed.
Buildings 2026, 16(6), 1237; https://doi.org/10.3390/buildings16061237
Submission received: 20 February 2026 / Revised: 15 March 2026 / Accepted: 18 March 2026 / Published: 20 March 2026
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)

Abstract

Rapid post-disaster building damage assessment requires recognizing explicit structural failures and interpreting implicit situational cues in visually complex scenes. Whereas conventional automated methods are often confined to detecting explicit damage patterns, human perception naturally integrates both types of information into a holistic risk judgment. This study presents an exploratory investigation into the neural signatures underlying this integrated judgment process using electroencephalography. A modified oddball paradigm was employed to probe the cognitive dynamics of risk evaluation in participants with civil engineering backgrounds. Although participants were instructed only to identify damaged buildings without explicit severity grading, event-related potential analysis revealed systematic, graded neural responses that scaled with damage severity. This suggests that the brain encodes damage-related information not as a binary state but as a continuous spectrum of perceived risk, implicitly processing severity even in the absence of explicit instructions. Furthermore, single-trial analysis demonstrated that time-domain features contain robust discriminative information, verifying the feasibility of decoding these latent judgments from brain activity. These findings provide a physiological basis for developing future cognition-informed algorithms and human-in-the-loop frameworks, bridging the semantic gap to enhance the reliability of automated disaster assessment.

1. Introduction

In the immediate aftermath of natural hazards, rapid building damage assessment is a race against time, requiring assessors to extract critical safety information from complex and often chaotic visual environments [1,2,3]. Unlike controlled structural health monitoring, post-disaster assessment fundamentally relies on interpreting fragmented visual evidence to judge structural safety and potential habitability [3,4,5]. In these scenarios, the visual field is rarely composed of clean, isolated structural components. Instead, it is cluttered with debris, occlusions, and ambiguous environmental signals. Consequently, decision-making is not merely a retrieval of predefined damage rules, but a dynamic cognitive process of holistic risk appraisal under high uncertainty.
Current assessment workflows, typically conducted by experts interpreting remote sensing imagery, rely heavily on identifying explicit structural failures (such as collapsed roofs, fragmented walls, or scattered debris) that can be directly mapped to damage grades [6,7]. Whereas advances in computer vision (CV) have automated the detection of these visual patterns, such algorithms typically operate within a closed semantic system, confined to features that are explicitly defined in visual terms [8,9,10,11,12,13,14]. Consequently, they often struggle to capture the implicit situational cues that characterize complex disaster scenarios [15,16,17,18]. In real-world assessments, a building may appear structurally intact but be perceived as high-risk through subtle environmental anomalies: emergency cordons isolating a block, chaotic traffic patterns indicating functional paralysis, piles of relief supplies suggesting evacuation, or abnormal crowd gatherings hinting at instability (Figure 1). These cues bear no direct structural resemblance to conventional “damage” indicators but nonetheless convey critical contextual information about risk. Human assessors utilizing domain-relevant cognitive schemas (formed through education or experience) naturally integrate these implicit cues with explicit evidence to form a holistic risk judgment, traversing the semantic gap that currently challenges automated frameworks. This implicit risk perception contributes to the resilience and reliability of post-disaster decision-making, yet remains largely unquantified.
Human visual perception exhibits remarkable efficiency and robustness in complex environments, allowing observers to extract salient information and generate meaningful judgments within a few hundred milliseconds [19,20,21]. Importantly, such judgments extend beyond low-level feature detection to reflect higher-order evaluative processes associated with attention allocation and decision-related appraisal. Cognitive neuroscience research demonstrates that these internal evaluations are accompanied by measurable neural responses, suggesting that human judgment under uncertainty can be indirectly deciphered through neural signals. Electroencephalography (EEG), particularly event-related potentials (ERPs), offers a non-invasive and temporally precise means of capturing these rapid cognitive processes [22,23,24,25,26,27]. However, a significant gap remains in applying EEG to disaster assessment. Whereas EEG-based paradigms have been successfully applied to remote sensing tasks, existing studies have largely focused on binary target detection (e.g., identifying the presence of an object) rather than the graded evaluation of risk levels that is essential for structural safety [28,29,30,31,32,33,34]. Moreover, most prior work relies on static or deterministic feature extraction, often overlooking the complex, multi-component neural responses across time and frequency domains evoked by the inherent uncertainty of visual evaluation [35,36]. Consequently, the neural mechanisms underlying human risk perception remain underexplored, particularly in application-driven scenarios.
Motivated by these gaps, this study adopts a perception-driven perspective to decipher the neural signatures of human risk perception during rapid post-disaster image assessment. A realistic visual inspection task was designed for participants with civil engineering backgrounds. By analyzing averaged and single-trial neural signals, this study aims to provide empirical evidence that damage-related perceptual judgment constitutes a salient, quantifiable cognitive process. Beyond demonstrating feasibility, these findings aim to provide a physiological reference for the development of future cognition-informed algorithms and human-in-the-loop frameworks, thereby helping to bridge the semantic gap and enhance the reliability of automated disaster assessment.

2. Methodology

2.1. Experiments and Data Acquisition

2.1.1. Participants and Equipment

Seventeen healthy volunteers (Sub ID 01~17) were recruited for the experiment. All participants were postgraduate students from the College of Civil Engineering at Tongji University. This specific demographic was intentionally selected to ensure a homogeneous cohort with a shared disciplinary background and foundational domain knowledge in structural engineering. It should be noted that while these participants possessed relevant academic knowledge, they were not professional damage assessment experts, as they lacked formal field experience in post-disaster reconnaissance or remote-sensing-based visual inspection. The utilization of this domain-informed but non-expert cohort was a deliberate methodological choice. It allowed us to establish a tightly controlled experimental baseline by minimizing severe confounding variables, such as heterogeneous field experience, varying risk thresholds, and highly individualized inspection routines, that seasoned professionals typically introduce to early-stage laboratory studies. Furthermore, none of the participants had prior experience with EEG experiments. They were aged between 23 and 25 years, all right-handed, and had normal or corrected-to-normal vision. The study protocol was approved by the Ethics Committee for Science and Technology of Tongji University (approval number: tjdxsr086), and written informed consent was obtained from all participants. After data quality screening (see Section 3.1), 14 participants (gender ratio 1:1) were retained for the final analysis.
As shown in Figure 2, continuous EEG data were acquired using a portable wireless 64-channel system (NeuSen W64, Neuracle Technology Co., Ltd., Changzhou, China) at a sampling rate of 1000 Hz. Of the 64 channels, 59 were used as EEG electrodes, while the remaining channels (ECG, HEOR, HEOL, VEOU, and VEOL) served as auxiliary recordings. Electrode placement followed the international 10–20 system. Visual stimuli were presented on a 16-inch laptop display (2560 × 1600 pixels, 240 Hz refresh rate), which also controlled stimulus presentation and data logging. An additional external monitor was used for real-time visualization of EEG signals to ensure recording quality throughout the experiment.

2.1.2. Stimulus and Task

Remote sensing images were obtained from the Land Information New Zealand Data Service under the project “Christchurch Post-Earthquake 0.1 m Urban Aerial Photos (24 February 2011)”. Based on the spatial distribution of building damage in the 2011 Christchurch earthquake, 130 ortho-rectified RGB images with a spatial resolution of 0.1 m were selected and cropped to 512 × 512 pixels as visual stimuli (Figure 3a). As this study represents a first-step exploratory investigation, our primary methodological priority was to ensure strict internal validity before expanding to external validity. Therefore, all stimuli were intentionally derived from a single post-earthquake orthophoto release to minimize event-irrelevant heterogeneity in low-level visual properties.
Images containing damaged buildings constituted a small proportion of the original dataset, naturally satisfying the probability imbalance required by the oddball paradigm. For experimental control, target stimuli (images containing damaged buildings) were manually selected to comprise 10% of the total image pool. Crucially, the final stimulus set was carefully screened to rigorously satisfy multiple constraints simultaneously: clear nadir-view visibility, absolute exclusion of stitching or distortion artifacts that could mislead visual processing, and reliable damage labeling from an overhead perspective. The formal experiment consisted of 15 randomized blocks, each containing 100 images (10 target images and 90 non-target images), yielding a total of 1500 trials. The sampled scenes covered central urban, industrial, and residential areas within the study region, providing a necessary degree of contextual variety while preserving dataset homogeneity. All images were normalized for brightness and contrast to further reduce the influence of low-level visual confounds, ensuring that the observed neural differences could be confidently attributed to the cognitive process of damage-related appraisal.
During the experiment, participants were seated approximately 60 cm from the display (visual angle ≈ 6.56°), while EEG signals were continuously recorded (Figure 2 and Figure 3b). Visual stimuli were presented using a custom PsychoPy script (Figure 3c). Each image was displayed for 750 ms, followed by a jittered inter-stimulus interval (ISI) of 1000~1500 ms to reduce expectancy effects [37]. A fixation cross (“+”) was shown prior to each image sequence to guide attention.
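The reported viewing geometry can be checked with the standard relation that a stimulus of size s viewed head-on at distance d subtends a full visual angle of 2·arctan(s / 2d). The physical on-screen stimulus size is not reported; the ~6.9 cm used below is back-computed from the stated 60 cm distance and ≈6.56° angle, so it is an assumption:

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Full visual angle (degrees) subtended by a stimulus viewed head-on."""
    return 2 * math.degrees(math.atan(size_cm / (2 * distance_cm)))

# A stimulus of ~6.9 cm at the reported 60 cm viewing distance
# subtends roughly 6.56 degrees
angle = visual_angle_deg(6.88, 60)
```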
Under nadir-view remote sensing conditions, earthquake-induced building damage primarily manifests as surrounding debris and roof collapse or deformation. To examine neural responses associated with different damage conditions, building images were annotated according to a four-level damage scheme inspired by the Joint Damage Scale used in the xBD dataset [9]: intact, minor damaged, major damaged, and collapsed. When multiple damage levels were present in a single image, the label was assigned based on the most severe damage observed. Each damage category contained an equal number of target samples. In addition to this primary categorization, three derived label groupings (one three-class and two binary schemes) were defined for subsequent analysis (Figure 3d). Crucially, participants were completely blind to these internal categorizations.
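The four-level scheme and its derived groupings might be encoded as follows. This is a hypothetical sketch: the label strings are illustrative, and the handling of trials that fall outside a binary scheme (here, exclusion) is an assumption rather than something stated in the text; the binary contrasts (intact vs. damaged, intact vs. collapsed) follow the definitions used in Section 3.3.

```python
# Hypothetical label-grouping sketch; names and exclusion rules are assumptions.

THREE_CLASS = {            # intermediate severities pooled (cf. Section 3.2)
    "intact": "intact",
    "minor_damaged": "damaged",
    "major_damaged": "damaged",
    "collapsed": "collapsed",
}

def regroup(label, scheme):
    """Map a four-level label into a derived scheme ('three', 'A', or 'B')."""
    if scheme == "three":
        return THREE_CLASS[label]
    if scheme == "A":                      # Binary-A: intact vs. damaged
        return None if label == "collapsed" else (
            "intact" if label == "intact" else "damaged")
    if scheme == "B":                      # Binary-B: intact vs. collapsed
        return label if label in ("intact", "collapsed") else None
    raise ValueError(f"unknown scheme: {scheme}")
```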
Participants were only instructed to silently count the number of images containing damaged buildings in each block and report their estimate afterward. This silent counting task served as an orthogonal behavioral proxy for damage-related risk perception, intentionally omitting explicit severity-grading requirements. This passive design served two critical neurophysiological purposes. First, under rapid serial visual presentation, requiring a trial-wise forced-choice manual rating would introduce massive decisional, response-mapping, and motor execution components (e.g., motor-related cortical potentials). These motor artifacts are known to severely overlap with and contaminate the exact stimulus-locked ERP time windows that are utilized for cognitive interpretation. Second, it explicitly minimized top-down task bias, as explicit categorization demands can fundamentally alter late evaluative ERP components, often competing with spontaneous or implicit forms of processing for shared neural resources. By utilizing this passive design, we were able to rigorously test whether damage-related severity is nonetheless spontaneously and implicitly encoded in human neural activity during rapid visual inspection. A short practice session preceded the formal experiment to familiarize participants with the task and presentation pace.

2.2. EEG Processing and Analysis

2.2.1. Data Preprocessing

Raw EEG recordings were preprocessed in EEGLAB following a standard pipeline [38]. As illustrated in Figure 4, the main steps were: exclude the auxiliary channels and retain the 59 EEG channels; apply a 0.1 Hz high-pass filter, a 30 Hz low-pass filter, and a 48~52 Hz notch filter to remove drifts, high-frequency noise, and power-line interference; segment the data into epochs from −1 to 2 s relative to stimulus onset and baseline-correct using the −1~0 s interval; detect bad channels and repair them via spherical interpolation; re-reference to the common average; run independent component analysis (ICA) [39] and remove components classified as ocular or muscular artifacts; and reject remaining epochs exceeding ±100 μV. These preprocessing choices balance artifact suppression with signal preservation for single-trial ERP analysis and were applied consistently across all subjects.
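The epoching, baseline-correction, and ±100 μV rejection steps of such a pipeline can be sketched in NumPy as follows. This is an illustrative re-implementation only (the study used EEGLAB); the function and parameter names are assumptions:

```python
import numpy as np

def epoch_and_reject(data, events, fs=1000, tmin=-1.0, tmax=2.0,
                     baseline=(-1.0, 0.0), reject_uv=100.0):
    """Segment continuous EEG (channels x samples, in microvolts) into epochs,
    baseline-correct each epoch, and drop epochs exceeding +/- reject_uv.
    Assumes every onset lies at least |tmin|*fs samples into the recording."""
    n0, n1 = int(tmin * fs), int(tmax * fs)
    b0 = int((baseline[0] - tmin) * fs)
    b1 = int((baseline[1] - tmin) * fs)
    epochs = []
    for onset in events:  # onset: sample index of stimulus presentation
        seg = data[:, onset + n0: onset + n1].astype(float)
        # subtract the per-channel mean of the pre-stimulus baseline interval
        seg -= seg[:, b0:b1].mean(axis=1, keepdims=True)
        if np.abs(seg).max() <= reject_uv:  # +/- 100 uV rejection criterion
            epochs.append(seg)
    if epochs:
        return np.stack(epochs)
    return np.empty((0, data.shape[0], n1 - n0))
```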

2.2.2. Feature Extraction

To identify neural correlates of perceived damage, features were extracted using a data-driven approach. Rather than assuming fixed time windows, point-wise statistical tests were conducted to identify temporal intervals exhibiting significant modulation by damage severity. For each of the 59 electrodes, point-wise paired t-tests (α = 0.05) were conducted along the temporal dimension (−1 to 2 s) to compare ERP responses across conditions. In the four-category damage labeling task, six pair-wise condition comparisons (C(4,2) = 6) were performed at each electrode, yielding a p-value at every time point. To adopt a conservative criterion in this multi-comparison setting, the maximum p-value across the six comparisons was retained at each time point. Time points were considered potentially discriminative only if this maximum p-value was below the significance threshold. To further control the risk of false positives arising from multiple comparisons [40], false discovery rate (FDR) correction was applied using the Benjamini–Hochberg procedure (q = 0.05) [40,41]. The resulting statistically significant temporal segments were retained as time-domain ERP features.
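For a single electrode, the point-wise paired t-tests, the conservative max-p criterion, and the Benjamini–Hochberg correction described above can be sketched as follows (a minimal illustration with SciPy; the data layout and variable names are assumptions, not the study's code):

```python
import numpy as np
from itertools import combinations
from scipy.stats import ttest_rel

def discriminative_timepoints(erps, alpha=0.05, q=0.05):
    """erps maps condition name -> (n_subjects, n_timepoints) subject-average
    ERPs at one electrode. Returns a boolean mask of time points surviving the
    conservative max-p criterion plus Benjamini-Hochberg FDR correction."""
    conds = list(erps)
    n_t = next(iter(erps.values())).shape[1]
    max_p = np.zeros(n_t)
    for a, b in combinations(conds, 2):        # all C(4,2) = 6 condition pairs
        _, p = ttest_rel(erps[a], erps[b], axis=0)
        max_p = np.maximum(max_p, p)           # conservative: worst pair wins
    # Benjamini-Hochberg step-up procedure applied to the max-p values
    order = np.argsort(max_p)
    thresh = q * np.arange(1, n_t + 1) / n_t
    passed = max_p[order] <= thresh
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    mask = np.zeros(n_t, dtype=bool)
    mask[order[:k]] = True
    return mask & (max_p < alpha)
```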
In addition to time-domain ERP features, complementary spectral–temporal information was extracted using time–frequency analysis. Preprocessed EEG signals were transformed via a short-time Fourier transform with a 0.2 s Hanning window, yielding time–frequency representations that capture transient oscillatory dynamics associated with visual processing. Statistically significant time–frequency segments were identified in parallel with the time-domain features and included in subsequent analyses.
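The 0.2 s Hanning-window transform step can be illustrated with SciPy on a toy oscillation (a sketch under assumed parameters, not the study's implementation; note that a 0.2 s window at 1000 Hz gives a 5 Hz frequency resolution per bin):

```python
import numpy as np
from scipy.signal import stft

fs = 1000                               # sampling rate (Hz)
t = np.arange(0, 3.0, 1 / fs)           # one 3 s epoch (-1 to 2 s)
x = np.sin(2 * np.pi * 10 * t)          # toy 10 Hz oscillation

# 0.2 s Hanning window -> nperseg = 200 samples, 5 Hz bin spacing
f, seg_times, Z = stft(x, fs=fs, window="hann", nperseg=int(0.2 * fs))
power = np.abs(Z) ** 2                  # time-frequency power representation
peak_freq = f[np.argmax(power.mean(axis=1))]
```

With these settings, the mean power peaks in the bin containing the injected oscillation, mimicking how band-limited features (e.g., the Alpha-band segment in Section 3.2) emerge from the representation.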

2.2.3. Single-Trial Classification

To assess whether the identified neural features contained discriminative information at the single-trial level, we employed a classical machine learning classifier as a conservative feasibility baseline. A support vector machine (SVM) with a radial basis function kernel was deliberately selected for this neurocognitive proof-of-concept. The present paradigm involves a high-dimensional, relatively small-sample EEG classification task, a scenario where SVM-based approaches continue to demonstrate robust generalization capabilities without the extreme overfitting risks often associated with more complex models. The primary epistemological purpose of this step was therefore to rigorously test whether the statistically identified, physiologically interpretable EEG features carried sufficient intrinsic discriminative information for single-trial decoding. Crucially, the interpretability of our framework originates from these explicitly defined neurophysiological inputs rather than the classifier itself. Employing a classical baseline allowed us to transparently validate the discriminability of these specific features without conflating them with the hidden representational complexity of opaque end-to-end models. Thus, SVM was strictly deployed to establish a methodologically conservative and physiologically transparent baseline, before future systematic comparison with more expressive deep learning architectures.
Time-domain and time–frequency features were evaluated separately and in combination. For each participant, class balance was ensured by random undersampling, and all features were standardized using z-score normalization. Model performance was evaluated using ten-fold cross-validation, and results were summarized using accuracy (Acc), F1 score (F1), and area under the receiver operating characteristic curve (AUC).
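The per-participant evaluation loop might be sketched with scikit-learn as follows (illustrative only; function names are assumptions, and the AUC computation is omitted here because multiclass AUC requires additional choices such as a one-vs-rest averaging scheme):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_validate

def decode_single_trials(X, y, seed=0):
    """X: (n_trials, n_features) EEG features; y: trial labels.
    Randomly undersample to the minority-class count, z-score the features,
    then run ten-fold cross-validation with an RBF-kernel SVM."""
    y = np.asarray(y)
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    # class balancing via random undersampling
    idx = np.concatenate([rng.choice(np.where(y == c)[0], n_min, replace=False)
                          for c in classes])
    # z-score normalization is fit inside each CV fold to avoid leakage
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_validate(clf, X[idx], y[idx], cv=10,
                            scoring=("accuracy", "f1_macro"))
    return {k: v.mean() for k, v in scores.items() if k.startswith("test")}
```

Placing the scaler inside the pipeline ensures normalization statistics are estimated on training folds only, which matters for honest single-trial estimates.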

3. Results

3.1. Task Feasibility and Behavioral Consistency

To verify that the experimental task was intuitive and could be reliably completed without formal expert training, behavioral performance was examined prior to the EEG analysis. Two summary indices were used for this purpose: M1, the mean reported number of damaged-building images per block, and M2, the mean absolute deviation between the reported count and the ground truth count of 10. Most participants adapted to the task within three practice blocks, indicating rapid familiarization with the task instructions and presentation pace. After initial data quality screening, three participants (Sub ID 03/06/09) were excluded due to excessive motion artifacts or unstable electrode contact. Consequently, EEG recordings from 14 participants were retained for the final analysis. Table 1 summarizes the behavioral feedback data and the EEG epoch retention rates (all >90% after artifact rejection) for these retained participants. Across the retained participants, M1 averaged 10.1, closely matching the ground truth, whereas M2 averaged 0.6, indicating low variability and stable task performance across blocks. These behavioral results confirm that participants could reliably execute the damage detection task, thereby providing a sound, well-controlled behavioral basis for the subsequent EEG analyses.
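The two behavioral indices reduce to simple block-wise statistics; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def behavioral_indices(reported_counts, ground_truth=10):
    """M1: mean reported number of damaged-building images per block.
    M2: mean absolute deviation between reported and ground-truth counts."""
    r = np.asarray(reported_counts, dtype=float)
    m1 = r.mean()
    m2 = np.abs(r - ground_truth).mean()
    return m1, m2
```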

3.2. Neural Correlates of Damage-Related Risk Perception

To examine whether post-disaster building damage elicits systematic neural responses, ERPs were analyzed across different damage conditions. Representative time-domain and time–frequency results are shown in Figure 5 and Figure 6.
At the parietal electrode Pz, ERP waveforms revealed clear differences across damage categories (Figure 5a). Statistical comparison indicated that responses to “Minor Damaged” and “Major Damaged” buildings did not differ significantly, whereas both differed from the “Intact” and “Collapsed” conditions. This lack of differentiation between intermediate levels suggests that under the strict temporal constraints of rapid visual inspection, the neural system may default to a coarse-grained categorization strategy rather than fine-grained grading. Accordingly, these two intermediate categories were merged into a unified “Damaged” group for subsequent analyses, yielding a three-level condition structure (Intact/Damaged/Collapsed) (Figure 5b).
Specifically, two prominent ERP components were observed: an N2 component at approximately 290 ms and a later P3 component spanning roughly 350~750 ms. While N2 amplitudes primarily differentiated intact from damaged buildings, the P3 component exhibited a graded amplitude increase corresponding to damage severity (Intact < Damaged < Collapsed). Damaged and collapsed buildings elicited significantly stronger and more sustained neural responses than intact structures, particularly within the late positive time windows associated with evaluative processing. After FDR correction, two statistically significant time-domain segments were identified at Pz (365~383 ms and 448~666 ms; Figure 5c). Figure 5d further illustrates that the strongest activity within these late windows was concentrated over parietal–occipital regions.
To further capture complementary neural signatures, time–frequency analysis was performed using short-time Fourier transform (STFT). Figure 6a shows the averaged time-domain signals for the three-class comparison at occipital sites (e.g., Oz), serving as the basis for spectral decomposition, while Figure 6b presents the corresponding time–frequency representations obtained by the STFT. To systematically quantify these oscillatory dynamics, a Boolean monotonic trend mask across categories was extracted to highlight regions showing consistent severity-related variation (Figure 6c). Parallel pair-wise paired-sample t-tests were then applied in the time–frequency domain (Figure 6d). Intersecting the monotonic trend regions with statistically significant regions yielded candidate time–frequency features (Figure 6e). Finally, FDR correction retained a distinct Alpha-band feature at 175~203 ms and 8~9 Hz (Figure 6f). Because this feature did not temporally overlap with the significant time-domain segments at the same channel, it represents a distinct, parallel neural signature of damage perception. Figure 6g maps the topographical distribution of this extracted Alpha-band feature across the three categories.
Across all 59 channels, 12 time-domain and 7 time–frequency feature segments were identified. As detailed in Table 2, significant discriminative features were predominantly distributed over parietal and occipital regions, with a marked concentration in the left hemisphere. Time-domain features were distributed across frontal, central, parietal, and occipital regions, whereas time–frequency features were primarily localized to parietal-occipital electrodes within Delta (1~4 Hz)-, Theta (4~8 Hz)-, and Alpha (8~13 Hz)-bands. These spatiotemporal results collectively indicate that damage-related appraisal under rapid inspection engages coordinated posterior visual–evaluative processing rather than a single local effect. Their cognitive interpretation and potential interrelationships are further discussed in Section 4.1.

3.3. Single-Trial Evidence for Perceptual Discriminability

To determine whether the group-level neural signatures identified in Section 3.2 were also informative at the single-trial level, classification was performed separately for each retained participant using subject-specific SVM models and ten-fold cross-validation. The resulting performance values were then averaged across participants. For comparison, classifiers were also trained on EEG segments unrelated to post-stimulus processing, namely a prestimulus time-domain segment (−50~0 ms at Pz) and a prestimulus high-frequency segment (−50~0 ms, 10~15 Hz at Pz) serving as the time–frequency baseline. As expected, these irrelevant segments yielded chance-level performance across all tasks, confirming that the subsequent performance gains genuinely reflected task-relevant neural information rather than bias within the evaluation framework.
Figure 7 summarizes the classification results across feature domains and combination sizes. For time-domain features (Figure 7a), performance improved steadily as statistically significant segments were combined, indicating that distinct temporal windows contributed complementary discriminative information. The best performance was obtained with six features in the three-class task (F1 = 0.481, Acc = 0.500, AUC = 0.688), seven features in Binary-A (intact vs. damaged; F1 = 0.731, Acc = 0.734, AUC = 0.793), and five features in Binary-B (intact vs. collapsed; F1 = 0.753, Acc = 0.761, AUC = 0.827). In contrast, time–frequency features (Figure 7b) peaked earlier and at consistently lower levels, with gains saturating after only two or three features and then tending to plateau or decline. This pattern indicates that while oscillatory features carry discriminative information, their incremental contribution is less scalable and more redundant than that of time-domain ERP features. Across both domains, the two binary tasks outperformed the three-class task, with Binary-B demonstrating the highest discriminability.
Because the time-domain combinations provided the strongest and most stable results, the error structure was further examined using the optimal time-domain models (Figure 8). In the three-class task (Figure 8a), recall values were 0.65 for “Intact”, 0.35 for “Damaged”, and 0.50 for “Collapsed”. The dominant error mode was mutual confusion between the two non-intact categories: 34% of “Damaged” trials were predicted as “Collapsed”, and 32% of “Collapsed” trials were predicted as “Damaged”. Although “Damaged” and “Collapsed” are clearly distinct engineering states, they can share partially overlapping nadir-view visual cues (such as scattered debris and partial roof deformation) under rapid remote sensing inspection, especially because the pooled “Damaged” category contains heterogeneous intermediate cases. This pattern aligns with the neural evidence reported in Section 3.2, indicating that while the presence of damage is reliably detected, discriminating the precise severity grade remains challenging. Crucially, by comparison, misclassification as “Intact” was lower but still non-negligible, amounting to 31% for “Damaged” and 19% for “Collapsed”. In the binary tasks, performance became markedly more stable. In Binary-A (Figure 8b), 77% of “Intact” and 70% of “Damaged” trials were correctly classified. In Binary-B (Figure 8c), the corresponding correct classification rates were 78% for “Intact” and 74% for “Collapsed”. These results collectively indicate that the presence of damage was detected much more reliably than its precise severity grade under rapid visual inspection.
To further evaluate inter-individual variability, classification was repeated using only the single most discriminative feature from each domain (Figure 9), namely the late time-domain feature at Pz (448~666 ms) and the time–frequency feature at O1 (256~279 ms, 1~4 Hz). The time-domain feature produced comparatively compact cross-subject distributions across all three tasks, indicating a relatively stable neural template for damage-related appraisal. The time–frequency feature, by contrast, showed broader distributions and larger tails, suggesting that its discriminability depended more strongly on individual strategy, signal quality, or cortical dynamics. Notably, a single spectral feature sometimes achieved slightly higher median values than the single time-domain feature, particularly in the three-class and Binary-A tasks. However, this advantage was not consistent across participants and disappeared once multiple features were combined (Figure 7). Taken together, these results suggest that oscillatory features can be highly informative for certain individuals, whereas late time-domain ERPs provide the more reliable basis for generalized single-trial decoding. These results are further discussed in Section 4.2.

4. Discussion

4.1. Cognitive Mechanisms

The discriminative EEG features identified in this study provide neurophysiological evidence for the holistic risk appraisal process during rapid post-disaster building assessment. Unlike automated algorithms that isolate explicit damage features, the observed neural spatiotemporal organization suggests that the human brain engages a continuous cognitive cascade to interpret visually ambiguous and incomplete information embedded in complex post-disaster scenes.
  • Risk appraisal relies on parietal–occipital processing that supports fine-grained structural interpretation, rather than simple object-level damage detection. The parietal–occipital dominance observed in the discriminative feature distribution aligns with the cognitive demand for detailed visual analysis and spatial attention [42,43,44,45,46,47]. In post-disaster imagery, intact and damaged buildings often share similar global appearances, while information that is relevant to potential risk is conveyed through localized, indirect, or context-dependent visual cues. Under such conditions, perceptual judgment relies less on global pattern recognition and more on the fine-grained interpretation of local anomalies, thereby transcending simple object detection. The engagement of posterior networks reflects the detailed scrutiny and semantic integration of these ambiguous cues, while the additional fronto-parietal activation suggests goal-directed cognitive regulation, confirming that participants were actively evaluating the structural implications of visual evidence rather than merely detecting salient features [48,49].
  • Damage severity is encoded in late-stage neural responses as a continuous risk representation, rather than a binary classification outcome. While early components (<200 ms) are commonly associated with initial sensory encoding, the dominant discriminative effects emerged in later time windows (>200 ms), reflecting extended post-perceptual evaluation [50]. Rather than reflecting visual saliency alone, the prolonged P3 activity observed in damage conditions is interpreted as increased evaluative demand, as participants integrate incomplete or indirect visual cues to form intuitive inferences about potential hazards. In this sense, damage perception in the present task does not correspond to a simple binary decision, but to an iterative judgment process in which visual evidence is progressively weighed against perceived structural risk. Crucially, the graded P3 amplitude mirrors a continuous spectrum of perceived severity, indicating that the human brain represents damage-related risk in an analog, severity-dependent manner rather than as a discrete label.
  • Multi-frequency oscillatory dynamics reflect coordinated cognitive processing for risk appraisal in visually complex scenarios. The modulation of occipital Alpha activity is commonly linked to selective attention and the suppression of irrelevant information [51], supporting focused processing of ambiguous, risk-relevant visual cues. Theta-band activity reflects working-memory engagement and cognitive control, likely associated with the comparison between observed structures and internal representations of intact buildings [51,52]. Delta-band involvement, often associated with decision-making and signal detection, suggests that images depicting severe structural damage may trigger sustained risk-related appraisal and context updating beyond purely local visual analysis [53]. Together, these oscillatory patterns indicate that rapid damage assessment engages attentional, mnemonic, and evaluative processes in an integrated manner to support reliable judgment under time pressure.

4.2. Single-Trial Interpretation

Unlike conventional EEG target detection paradigms that rely on simplified or well-defined stimuli [28,30,36], the present experiment employed realistic post-disaster remote sensing imagery characterized by heterogeneous content, subtle inter-class differences, and inherently ambiguous visual cues. Under these naturalistic conditions, the single-trial classification results substantiate the feasibility of externalizing this implicit risk judgment in real time. These findings complement the cognitive interpretations in Section 4.1 and reinforce the potential role of EEG as a window into latent human knowledge that is currently inaccessible to standard CV metrics.
  • Time-domain ERP features provide the most robust and scalable basis for single-trial risk decoding. The comparison of feature domains yields a practical engineering insight: whereas spectral features tend to capture individual cognitive strategies, the stability of time-locked responses (e.g., the P3 timing) points to a shared neural template for risk evaluation across observers. This finding is pivotal for developing generalized cognition-informed tools, as it implies that the temporal structure and magnitude of neural responses provide a stable physiological reference for calibrating automated assessment models.
  • Misclassification is concentrated in intermediate damage states, reflecting intrinsic perceptual ambiguity rather than noise. The observed asymmetry in perceptual errors, where confusion primarily occurred between damage categories while intact buildings remained distinct, reflects a fundamental characteristic of human judgment under uncertainty. Since participants detected damage as a proxy for perceived risk rather than performing fine-grained grading, the differentiation between damaged and collapsed structures relied on spontaneous inference. The “Damaged” category inherently includes diverse and partially conflicting cues (e.g., limited roof deformation without global collapse), increasing perceptual ambiguity. From this perspective, the reduced discriminability for intermediate damage states should not be viewed merely as noise, but as a neural signature of uncertainty processing. This highlights that ambiguity is an intrinsic component of human risk perception, offering a potential “soft label” for training automated systems to better handle borderline cases in realistic disaster scenes.
  • Inter-individual variability necessitates adaptive human-in-the-loop assessment rather than full automation. The successful decoding using standard SVM classifiers establishes a conservative baseline for feasibility. It demonstrates that discriminative risk information is accessible without relying on “black-box” deep learning models, preserving the interpretability of the neural features. The substantial inter-individual variability observed reflects differences in perceptual strategies and prior experience, which are common sources of human error in visual inspection. Rather than being treated solely as a limitation, such variability represents a defining characteristic of human risk perception, underscoring the necessity of adaptive human-in-the-loop frameworks that can accommodate personalized cognitive baselines when integrating human judgment with automated analysis.
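The conservative SVM baseline discussed above can be sketched as follows. The feature matrices here are simulated stand-ins (the `simulate_subject` helper, its `separation` parameter, and the trial counts are all illustrative assumptions, not the study's data); the pipeline structure — standardized time-domain features, a linear SVM, and per-subject cross-validated AUC — reflects the kind of analysis described.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)

def simulate_subject(n_trials=120, n_features=6, separation=0.8):
    """Toy single-trial feature matrix, standing in for mean ERP amplitudes
    extracted from the significant windows (e.g., Pz 448~666 ms)."""
    y = rng.integers(0, 2, n_trials)          # 0 = intact, 1 = damaged
    X = rng.standard_normal((n_trials, n_features))
    X[y == 1] += separation                   # damaged trials shifted upward
    return X, y

aucs = []
for _ in range(5):  # five simulated "subjects"
    X, y = simulate_subject()
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    aucs.append(cross_val_score(clf, X, y, cv=cv, scoring="roc_auc").mean())

print(f"mean AUC = {np.mean(aucs):.2f}, spread = {np.std(aucs):.2f}")
```

The per-subject AUC spread in this sketch plays the role of the inter-individual variability discussed above: an adaptive human-in-the-loop system would calibrate its decision thresholds against each observer's own baseline rather than a pooled model.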
Finally, it is essential to position this proposed EEG framework relative to existing automated CV algorithms. We acknowledge that large-scale benchmarks (e.g., the xBD dataset) have enabled rapid and substantial advances in image-based building damage assessment. However, a strict head-to-head quantitative benchmark (e.g., directly comparing F1/Acc/AUC) between the present EEG protocol and standard CV detectors would be methodologically incongruous. Conventional automated algorithms infer physical damage directly from explicitly defined image pixels. In contrast, our classifier is designed to decode latent, holistic human evaluative states from neural activity. Despite their high throughput, the recent literature documents several persistent challenges for CV models in complex, real-world disaster settings. These include severe performance degradation under domain shifts, under-representation of specific disaster types, and continuing difficulty in capturing the implicit contextual information required for fine-grained damage discrimination [7,54,55]. From this perspective, the value of the proposed EEG framework is fundamentally complementary rather than competitive. It identifies and quantifies a vital source of information that current CV models do not explicitly encode: the human observer’s latent risk judgment when dealing with visually ambiguous or borderline cases. Ultimately, the two approaches target different but highly synergistic variables, positioning this cognition-informed EEG method as a physiological complement within future human-in-the-loop disaster assessment architectures.

4.3. Limitations and Future Work

While this study provides foundational neurophysiological evidence for implicit risk appraisal, its exploratory nature entails certain limitations that define the trajectory for future research.
First, to prioritize internal validity and isolate the core cognitive hypotheses from severe confounding variables, we utilized a tightly controlled, single-event image dataset and a domain-informed but non-expert participant cohort. Although this design effectively minimized event-irrelevant visual heterogeneity and idiosyncratic expert biases, it restricts the immediate external validity. Future studies must validate whether the identified neural signatures remain stable across multi-event, cross-region datasets (e.g., incorporating floods and hurricanes). Furthermore, rigorous comparative analyses involving professional damage assessment experts, ideally utilizing synchronized EEG and eye-tracking to map spatial attention priors, are necessary to quantify how professional field experience modulates these cognitive strategies.
Second, methodologically, this study prioritized physiological interpretability over algorithmic maximization. The implementation of a silent counting task and a conservative SVM baseline successfully captured spontaneous neural evaluations while preventing motor artifacts and attribution ambiguity. However, this entails that the current framework does not represent an optimally scaled engineering solution, nor does it fully validate explicit fine-grained severity grading. A systematic progression toward more expressive deep learning architectures (e.g., EEGNet [56]) and asymmetric multimodal integration is a critical next step. In such future frameworks, imagery would remain the primary operational modality, whereas EEG could provide auxiliary cognitive supervision (e.g., serving as “soft labels” or confidence weights) to calibrate image-based algorithms for ambiguous cases.
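The asymmetric integration scheme proposed above — imagery as the primary modality, EEG-derived soft labels consulted only for ambiguous cases — can be sketched as a simple gating rule. Everything here is a hypothetical illustration: the function name, the fixed 50/50 blend, and the confidence threshold `tau` are assumptions that a real system would learn or calibrate.

```python
import numpy as np

def fuse_predictions(p_cv, p_eeg, cv_confidence, tau=0.8):
    """Asymmetric fusion sketch: the CV damage probability is kept as-is
    unless the model's own confidence falls below tau, in which case the
    EEG-derived soft label is blended in. All parameters are illustrative."""
    p_cv = np.asarray(p_cv, dtype=float)
    p_eeg = np.asarray(p_eeg, dtype=float)
    uncertain = np.asarray(cv_confidence, dtype=float) < tau
    fused = p_cv.copy()
    # Convex blend on ambiguous cases only; the 0.5/0.5 weights are a placeholder.
    fused[uncertain] = 0.5 * p_cv[uncertain] + 0.5 * p_eeg[uncertain]
    return fused

p_cv = np.array([0.95, 0.55, 0.10])   # CV damage probabilities
p_eeg = np.array([0.90, 0.80, 0.20])  # decoded neural soft labels
conf = np.array([0.97, 0.60, 0.92])   # CV self-reported confidence
fused = fuse_predictions(p_cv, p_eeg, conf)
# Only the low-confidence middle case is revised toward the EEG judgment.
```

This gating design keeps the high-throughput imagery pipeline untouched for clear-cut cases while letting the neural signal act as auxiliary supervision exactly where Section 4.2 located the ambiguity: intermediate damage states.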
Ultimately, these current boundaries outline a clear roadmap toward a staged, human-in-the-loop decision-support architecture. In practical deployment, high-throughput automated computer vision models will remain responsible for broad-area screening. Cases characterized by low algorithmic confidence or conflicting visual cues will then be escalated for cognition-augmented human review. Within this workflow, the neurophysiological markers identified in this study will serve as a physiological basis to quantify uncertainty-sensitive human appraisal. By addressing practical challenges such as cross-subject calibration and wearable hardware optimization, future research can successfully transition these neurocognitive insights into robust, operational disaster assessment workflows.

5. Conclusions

This study investigated the neural signatures of rapid damage-related appraisal during post-disaster building inspection under visually complex remote sensing conditions. The experimental results revealed that damage-related information was systematically reflected in measurable EEG signatures, even when observers were not explicitly instructed to rate damage severity. In the time domain, the most prominent discriminative activity was observed at Pz, where two significant windows (365~383 ms and 448~666 ms) were identified after FDR correction. Here, the late positive component (P3) showed the clearest monotonic differentiation across damage conditions. In the time–frequency domain, a distinct occipital Alpha-band feature (175~203 ms at 8~9 Hz) provided complementary evidence that rapid damage appraisal engages not only late evaluative processing but also early attention-related modulation. At the single-trial level, time-domain features decoded these latent evaluative judgments more robustly and scalably than time–frequency features. Optimal combinations of time-domain features achieved strong discriminative performance, with peak average AUC values reaching 0.83 in binary classification tasks, while also demonstrating lower cross-subject variability than the best spectral features. This supports the feasibility of decoding certain aspects of latent human evaluative processing from high-dimensional neural data.
At the same time, these findings should be interpreted within the boundaries of the present proof-of-concept design, which utilized a tightly controlled single-event image pool, a domain-informed but non-expert cohort, an orthogonal counting task, and a conservative SVM baseline. Accordingly, these results provide neurophysiological evidence for implicit and relatively coarse-grained damage-related appraisal, rather than acting as a fully operational or fine-grained rating system. The practical contribution of this work is therefore to establish a measurable human cognitive baseline that may inform future multimodal, uncertainty-aware, human-in-the-loop frameworks for disaster assessment.

Author Contributions

Conceptualization, E.Z., C.Y. and Q.K.; methodology, E.Z. and Q.K.; software, E.Z.; validation, E.Z.; formal analysis, E.Z.; investigation, E.Z.; resources, Q.K.; data curation, E.Z.; writing—original draft preparation, E.Z.; writing—review and editing, E.Z., C.Y., H.H. and Q.K.; visualization, E.Z.; supervision, C.Y., H.H. and Q.K.; project administration, E.Z. and Q.K.; funding acquisition, Q.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Shanghai Municipal Commission of Economy and Informatization, grant number RZ-CYA1-01-25-0804.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee for Science and Technology of Tongji University (tjdxsr086, 31 December 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to thank all the participants who contributed the data for this study. During the preparation of this manuscript, the authors used ChatGPT 5.4 in order to improve the readability and language of the manuscript. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CV: Computer vision
EEG: Electroencephalography
ERP: Event-related potential
ISI: Inter-stimulus interval
ICA: Independent component analysis
FDR: False discovery rate
STFT: Short-time Fourier transform
SVM: Support vector machine
Acc: Accuracy
F1: F1 score
AUC: Area under the receiver operating characteristic curve

References

  1. Braik, A.M.; Koliou, M. Automated building damage assessment and large-scale mapping by integrating satellite imagery, GIS, and deep learning. Comput.-Aided Civ. Infrastruct. Eng. 2024, 39, 2389–2404. [Google Scholar] [CrossRef]
  2. Kaur, N.; Lee, C.; Mostafavi, A.; Mahdavi-Amiri, A. Large-scale building damage assessment using a novel hierarchical transformer architecture on satellite images. Comput.-Aided Civ. Infrastruct. Eng. 2023, 38, 2072–2091. [Google Scholar] [CrossRef]
  3. Khajwal, A.B.; Cheng, C.; Noshadravan, A. Post-disaster damage classification based on deep multi-view image fusion. Comput.-Aided Civ. Infrastruct. Eng. 2023, 38, 528–544. [Google Scholar] [CrossRef]
  4. Singh, D.K.; Hoskere, V. Post Disaster Damage Assessment Using Ultra-High-Resolution Aerial Imagery with Semi-Supervised Transformers. Sensors 2023, 23, 8235. [Google Scholar] [CrossRef]
  5. Koliou, M.; van de Lindt, J.W.; McAllister, T.P.; Ellingwood, B.R.; Dillard, M.; Cutler, H. State of the research in community resilience: Progress and challenges. Sustain. Resilient Infrastruct. 2020, 5, 131–151. [Google Scholar] [CrossRef]
  6. Kang, D.; Cha, Y.J. Autonomous UAVs for structural health monitoring using deep learning and an ultrasonic beacon system with geo-tagging. Comput.-Aided Civ. Infrastruct. Eng. 2018, 33, 885–902. [Google Scholar] [CrossRef]
  7. Saleem, M.R.; Mayne, R.; Napolitano, R. Evaluating Human Expert Knowledge in Damage Assessment Using Eye Tracking: A Disaster Case Study. Buildings 2024, 14, 2114. [Google Scholar] [CrossRef]
  8. Yuan, F.-G.; Zargar, S.A.; Chen, Q.; Wang, S. Machine learning for structural health monitoring: Challenges and opportunities. Sens. Smart Struct. Technol. Civ. Mech. Aerosp. Syst. 2020, 11379, 1137903. [Google Scholar]
  9. Gupta, R.; Hosfelt, R.; Sajeev, S.; Patel, N.; Goodman, B.; Doshi, J.; Heim, E.; Choset, H.; Gaston, M. xBD: A dataset for assessing building damage from satellite imagery. arXiv 2019, arXiv:1911.09296. [Google Scholar] [CrossRef]
  10. Gupta, R.; Shah, M. Rescuenet: Joint building segmentation and damage assessment from satellite imagery. In 2020 25th International Conference on Pattern Recognition (ICPR); IEEE: New York, NY, USA, 2021. [Google Scholar]
  11. Weber, E.; Kané, H. Building disaster damage assessment in satellite imagery with multi-temporal fusion. arXiv 2020, arXiv:2004.05525. [Google Scholar] [CrossRef]
  12. Shen, Y.; Zhu, S.; Yang, T.; Chen, C.; Pan, D.; Chen, J.; Xiao, L.; Du, Q. Bdanet: Multiscale convolutional neural network with cross-directional attention for building damage assessment from satellite images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–14. [Google Scholar] [CrossRef]
  13. Hao, H.; Baireddy, S.; Bartusiak, E.R.; Konz, L.; LaTourette, K.; Gribbons, M.; Chan, M.; Delp, E.J.; Comer, M.L. An attention-based system for damage assessment using satellite imagery. In 2021 IEEE International Geoscience and Remote Sensing Symposium Igarss; IEEE: New York, NY, USA, 2021. [Google Scholar]
  14. Wu, C.; Zhang, F.; Xia, J.; Xu, Y.; Li, G.; Xie, J.; Du, Z.; Liu, R. Building damage detection using U-Net with attention mechanism from pre-and post-disaster remote sensing datasets. Remote Sens. 2021, 13, 905. [Google Scholar] [CrossRef]
  15. Cheng, C.S.; Behzadan, A.H.; Noshadravan, A. Deep learning for post-hurricane aerial damage assessment of buildings. Comput.-Aided Civ. Infrastruct. Eng. 2021, 36, 695–710. [Google Scholar] [CrossRef]
  16. Matin, S.S.; Pradhan, B. Challenges and limitations of earthquake-induced building damage mapping techniques using remote sensing images-A systematic review. Geocarto Int. 2022, 37, 6186–6212. [Google Scholar] [CrossRef]
  17. van Dyck, L.E.; Gruber, W.R. Seeing eye-to-eye? A comparison of object recognition performance in humans and deep convolutional neural networks under image manipulation. arXiv 2020, arXiv:2007.06294. [Google Scholar]
  18. Wang, Z.; Cha, Y.-J. Unsupervised deep learning approach using a deep auto-encoder with a one-class support vector machine to detect damage. Struct. Health Monit. 2021, 20, 406–425. [Google Scholar] [CrossRef]
  19. Oyekoya, O.; Stentiford, F. Perceptual image retrieval using eye movements. Int. J. Comput. Math. 2007, 84, 1379–1391. [Google Scholar] [CrossRef]
  20. Mohedano, E.; Healy, G.; McGuinness, K.; Giró-I-Nieto, X.; O’cOnnor, N.E.; Smeaton, A.F. Improving object segmentation by using EEG signals and rapid serial visual presentation. Multimed. Tools Appl. 2015, 74, 10137–10159. [Google Scholar] [CrossRef]
  21. Keysers, C.; Xiao, D.-K.; Földiák, P.; Perrett, D.I. The speed of sight. J. Cogn. Neurosci. 2001, 13, 90–101. [Google Scholar] [CrossRef]
  22. Torres, E.P.; Torres, E.A.; Hernández-Álvarez, M.; Yoo, S.G. EEG-based BCI emotion recognition: A survey. Sensors 2020, 20, 5083. [Google Scholar] [CrossRef]
  23. Henry, J.C. Electroencephalography: Basic principles, clinical applications, and related fields. Neurology 2006, 67, 2092. [Google Scholar] [CrossRef]
  24. Haynes, J.-D.; Rees, G. Decoding mental states from brain activity in humans. Nat. Rev. Neurosci. 2006, 7, 523–534. [Google Scholar] [CrossRef] [PubMed]
  25. Squires, K.C.; Wickens, C.; Squires, N.K.; Donchin, E. The effect of stimulus sequence on the waveform of the cortical event-related potential. Science 1976, 193, 1142–1146. [Google Scholar] [CrossRef] [PubMed]
  26. Tan, J.; Luo, F.; Zhang, X.; Liu, J. Visual stimulus event related potential and its advances in related studies. Chin. J. Forensic Med. 2017, 32, 44–47. [Google Scholar]
  27. Donchin, E.; Karis, D.; Bashore, T.R.; Coles, M.G.H.; Gratton, G. Cognitive psychophysiology and human information processing. In Psychophysiology: Systems, Processes, and Applications; Coles, M.G.H., Donchin, E., Porges, S.W., Eds.; Guilford Press: New York, NY, USA, 1986; pp. 244–267. [Google Scholar]
  28. Bigdely-Shamlo, N.; Vankov, A.; Ramirez, R.R.; Makeig, S. Brain activity-based image classification from rapid serial visual presentation. IEEE Trans. Neural Syst. Rehabil. Eng. 2008, 16, 432–441. [Google Scholar] [CrossRef]
  29. Matran-Fernandez, A.; Poli, R. Collaborative brain-computer interfaces for target localisation in rapid serial visual presentation. In 2014 6th Computer Science and Electronic Engineering Conference (CEEC); IEEE: New York, NY, USA, 2014. [Google Scholar]
  30. Matran-Fernandez, A.; Poli, R. Brain–computer interfaces for detection and localization of targets in aerial images. IEEE Trans. Biomed. Eng. 2016, 64, 959–969. [Google Scholar] [CrossRef]
  31. Fan, L.; Shen, H.; Xie, F.; Su, J.; Yu, Y.; Hu, D. DC-tCNN: A deep model for EEG-based detection of dim targets. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 30, 1727–1736. [Google Scholar] [CrossRef]
  32. Sajda, P.; Gerson, A.; Parra, L. High-throughput image search via single-trial event detection in a rapid serial visual presentation task. In First International IEEE EMBS Conference on Neural Engineering; Conference Proceedings; IEEE: New York, NY, USA, 2003. [Google Scholar]
  33. Gerson, A.D.; Parra, L.C.; Sajda, P. Cortically coupled computer vision for rapid image search. IEEE Trans. Neural Syst. Rehabil. Eng. 2006, 14, 174–179. [Google Scholar] [CrossRef]
  34. Parra, L.C.; Christoforou, C.; Gerson, A.C.; Dyrholm, M.; Luo, A.; Wagner, M.; Philiastides, M.G.; Sajda, P. Spatiotemporal linear decoding of brain state. IEEE Signal Process. Mag. 2007, 25, 107–115. [Google Scholar] [CrossRef]
  35. Simanova, I.; Van Gerven, M.; Oostenveld, R.; Hagoort, P. Identifying object categories from event-related EEG: Toward decoding of conceptual representations. PLoS ONE 2010, 5, e14465. [Google Scholar] [CrossRef]
  36. Wang, C.; Xiong, S.; Hu, X.; Yao, L.; Zhang, J. Combining features from ERP components in single-trial EEG for discriminating four-category visual objects. J. Neural Eng. 2012, 9, 056013. [Google Scholar] [CrossRef] [PubMed]
  37. Böcker, K.B.E.; Brunia, C.H.M.; Berg-Lenssen, M.M.C.v.D. A spatiotemporal dipole model of the stimulus preceding negativity (SPN) prior to feedback stimuli. Brain Topogr. 1994, 7, 71–88. [Google Scholar] [CrossRef] [PubMed]
  38. Delorme, A.; Makeig, S. EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods 2004, 134, 9–21. [Google Scholar] [CrossRef] [PubMed]
  39. Makeig, S.; Bell, A.; Jung, T.P.; Sejnowski, T.J. Independent component analysis of electroencephalographic data. Adv. Neural Inf. Process. Syst. 1995, 8, 145–151. [Google Scholar]
  40. Hu, L.; Zhang, Z. EEG Signal Processing and Feature Extraction; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
  41. Benjamini, Y.; Hochberg, Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. J. R. Stat. Soc. Ser. B (Methodol.) 1995, 57, 289–300. [Google Scholar] [CrossRef]
  42. Iidaka, T.; Yamashita, K.; Kashikura, K.; Yonekura, Y. Spatial frequency of visual image modulates neural responses in the temporo-occipital lobe. An investigation with event-related fMRI. Cogn. Brain Res. 2004, 18, 196–204. [Google Scholar] [CrossRef]
  43. Peyrin, C.; Baciu, M.; Segebarth, C.; Marendaz, C. Cerebral regions and hemispheric specialization for processing spatial frequencies during natural scene recognition. An event-related fMRI study. Neuroimage 2004, 23, 698–707. [Google Scholar] [CrossRef]
  44. Rossion, B.; Schiltz, C.; Crommelinck, M. The functionally defined right occipital and fusiform “face areas” discriminate novel from visually familiar faces. Neuroimage 2003, 19, 877–883. [Google Scholar] [CrossRef]
  45. Sugiura, M.; Shah, N.J.; Zilles, K.; Fink, G.R. Cortical representations of personally familiar objects and places: Functional organization of the human posterior cingulate cortex. J. Cogn. Neurosci. 2005, 17, 183–198. [Google Scholar] [CrossRef]
  46. Elman, J.A.; Cohn-Sheehy, B.I.; Shimamura, A.P. Dissociable parietal regions facilitate successful retrieval of recently learned and personally familiar information. Neuropsychologia 2013, 51, 573–583. [Google Scholar] [CrossRef]
  47. Xu, Y.; Chun, M.M. Visual grouping in human parietal cortex. Proc. Natl. Acad. Sci. USA 2007, 104, 18766–18771. [Google Scholar] [CrossRef]
  48. Marek, S.; Dosenbach, N.U. The frontoparietal network: Function, electrophysiology, and importance of individual precision mapping. Dialogues Clin. Neurosci. 2018, 20, 133–140. [Google Scholar] [CrossRef] [PubMed]
  49. Scolari, M.; Seidl-Rathkopf, K.N.; Kastner, S. Functions of the human frontoparietal attention network: Evidence from neuroimaging. Curr. Opin. Behav. Sci. 2015, 1, 32–39. [Google Scholar] [CrossRef] [PubMed]
  50. Luck, S.J.; Kappenman, E.S. The Oxford Handbook of Event-Related Potential Components; Oxford University Press: Oxford, UK, 2011. [Google Scholar]
  51. Klimesch, W. EEG alpha and theta oscillations reflect cognitive and memory performance: A review and analysis. Brain Res. Rev. 1999, 29, 169–195. [Google Scholar] [CrossRef] [PubMed]
  52. Herweg, N.A.; Solomon, E.A.; Kahana, M.J. Theta oscillations in human memory. Trends Cogn. Sci. 2020, 24, 208–227. [Google Scholar] [CrossRef]
  53. Harmony, T. The functional significance of delta oscillations in cognitive processing. Front. Integr. Neurosci. 2013, 7, 83. [Google Scholar] [CrossRef]
  54. Gu, J.; Xie, Z.; Zhang, J.; He, X. Advances in rapid damage identification methods for post-disaster regional buildings based on remote sensing images: A survey. Buildings 2024, 14, 898. [Google Scholar] [CrossRef]
  55. Lagap, U.; Ghaffarian, S.; Gelinas-Gagne, S.; Jilma, J.; Liu, Z.; Luo, Z. Towards reliable deep learning for post-disaster damage assessment: An XAI-based evaluation. Int. J. Disaster Risk Reduct. 2025, 108, 105839. [Google Scholar] [CrossRef]
  56. Lawhern, V.J.; Solon, A.J.; Waytowich, N.R.; Gordon, S.M.; Hung, C.P.; Lance, B.J. EEGNet: A compact convolutional neural network for EEG-based brain-computer interfaces. J. Neural Eng. 2018, 15, 056013. [Google Scholar] [CrossRef]
Figure 1. The semantic gap in post-disaster building damage inspection.
Figure 2. Real photos of the experimental scene and overview of data collection.
Figure 3. Experimental design and stimulus preparation paradigm. (a) Construction of experimental stimuli. (b) Acquisition of EEG data. (c) Design of experimental paradigm. (d) Assignment of classification tasks.
Figure 4. Data preprocessing procedures.
Figure 5. Time-domain ERP responses to varying levels of building damage at Pz. (a) Four-class comparison. (b) Three-class comparison and the corresponding paired-sample t-test results. (c) Significant time-domain feature segments identified after multiple comparisons correction. (d) Topographical distributions of the representative late time-domain responses across categories.
Figure 6. Time–frequency ERP responses to varying levels of building damage at Oz. (a) Time-domain signals in the three-class task. (b) Time–frequency representations obtained by the STFT. (c) Monotonic trend consistency mask across the three categories. (d) Pair-wise paired-sample t-tests among the three categories. (e) Intersection of the trend consistency and significance regions. (f) Statistically significant time–frequency feature retained after multiple comparisons correction. (g) Topographical distributions of the extracted Alpha-band feature across categories.
Figure 7. Single-trial classification performance metrics across feature domains. (a) Results of time-domain features and their combinations. (b) Results of time–frequency features and their combinations. A downward-pointing arrow (“↓”) at the top of a bar indicates a decline in the corresponding metric compared to the previous feature count.
Figure 8. Confusion matrices under the optimal time-domain feature combinations. (a) Three-class (6 features). (b) Binary-A (7 features). (c) Binary-B (5 features).
Figure 9. Distribution of classification performance for optimal single features. Box and violin plots, with black dots indicating the mean values.
Table 1. Behavioral feedback summary for the retained participants.
Sub ID   M1     M2    Retention Rate      Sub ID   M1     M2    Retention Rate
01       9.7    0.5   93%                 11       10.3   1.0   99.5%
02       10.3   0.4   97%                 12       10.2   0.5   90.9%
04       9.2    1.1   97.5%               13       10.1   0.3   99.7%
05       11.1   1.3   94.7%               14       10.7   0.7   99.7%
07       9.5    0.6   99.1%               15       10.2   0.3   99.5%
08       10.8   0.9   92.0%               16       9.9    0.5   100.0%
10       9.9    0.3   99.7%               17       9.5    0.5   98.5%
Table 2. Significant features after t-test and FDR.
Time Domain
Channel   Time Window
FC1       439~466 ms
CP1       389~422 ms; 580~651 ms; 657~673 ms
Pz        365~383 ms; 448~666 ms
P3        249~268 ms; 392~431 ms; 436~764 ms
Oz        411~449 ms
O1        418~450 ms
O2        418~440 ms

Time–Frequency Domain
Channel   Time Window    Frequency Window   Frequency Band
PO5       220~283 ms     4~7 Hz             Theta
PO5       261~282 ms     1~4 Hz             Delta
Oz        175~203 ms     8~9 Hz             Alpha
O1        186~207 ms     8~9 Hz             Alpha
O1        191~206 ms     7~8 Hz             Theta
O1        240~279 ms     4~6 Hz             Theta
O1        256~279 ms     1~4 Hz             Delta

[Image: topographical distribution of significant electrodes]

Share and Cite

Zhu, E.; Yuan, C.; Hao, H.; Kong, Q. Neural Signatures of Human Risk Perception in Post-Disaster Scenarios: Insights for Rapid Building Damage Assessment. Buildings 2026, 16, 1237. https://doi.org/10.3390/buildings16061237
