2. State of the Art
In work related to the evacuation of visually impaired people from burning buildings, we consider an entire pipeline: from sensing and localization, through environment representation as a topological graph derived from a floor plan or map, to real-time, multi-objective route optimization, followed by a multimodal interface (audio/haptic/visual) and validation via simulation. For users with visual impairments, the literature reports a consistent shift from single-technology solutions to hybrid systems (BLE/Wi-Fi/UWB combined with IMU, vision, ultrasonic sensing, and mobile applications), with a strong focus on operational robustness, consistency of instructions, and scalability in public spaces; recent review papers serve as helpful reference points for classifications, standards, and current research gaps [3,4,5,6].
Over the past decade, digital evacuation support has shifted from fixed infrastructure to devices in everyone’s hands [7]. The smartphone has emerged as a guidance node: it can fuse indoor positioning, scene perception (computer vision and lightweight AR), a building’s digital representation, and adaptive messaging to deliver personalized routes and instructions [8]. Multiple studies report that mobile applications can provide dynamic, personalized guidance through various UI modalities [7,9,10,11]. This technical convergence aligns with a change in people’s attitudes: interviews report a strong willingness to rely on phones for instructions in burning buildings, provided the application offers credible routes, remains resilient to localization errors, and communicates clearly [12,13,14]. Beyond algorithmic pathfinding, two practical premises stand out: evacuation time is jointly determined by individual heterogeneity and exit congestion, and minute-level improvements in response times translate into meaningful life-safety gains [15,16]. The study in [17] develops an evacuation time estimation model for large buildings that explicitly parameterizes pedestrian characteristics and real-time exit crowding, thereby motivating congestion-aware rerouting at the handset level. In parallel, Jaldell [18] quantifies the value of time in fire and rescue contexts, showing that a faster response leads to measurable life-safety benefits. Crucially for accessibility, the article in [19] presents the distinct movement patterns of blind and visually impaired evacuees under fire conditions, documenting lower effective speeds, different pre-movement behaviors, and a stronger dependence on non-visual cues.
For deaf and hard-of-hearing people, empirical work in sleep contexts shows that low-frequency alarm signals (around 520 Hz) and tactile notifiers (bed/pillow shakers) are effective at waking and alerting occupants [20]. Although the bedroom use case is specific, the broader principle of combining low-frequency audio with robust tactile/visual notifications generalizes to mobile evacuation apps (e.g., strong vibration patterns plus text and pictograms), particularly for users with residual hearing. Beyond alerting, newer mobile sound-recognition applications with interactive ML can translate critical auditory events (sirens, fire alarms, spoken announcements) into tactile/visual cues during pre-movement, thereby reducing uncertainty before route guidance begins [21,22].
Mobility limitations introduce different constraints. Reviews focused on functional limitations in high-rise buildings survey vertical evacuation aids, defend-in-place strategies, refuge areas, and the circumstances under which evacuation elevators can be used safely [23,24,25]. A mobile app can serve as a personal plan broadcaster that aligns users, assistants, and facility teams, and as an interface to refuge systems, while filtering building graphs to identify barrier-free routes. Regarding inclusive evacuation beyond visual impairment, review articles map existing solutions and gaps for groups with functional limitations, including in high-rise buildings, covering aspects ranging from human behavior to infrastructure provisions such as refuge areas and evacuation elevators. A recurring conclusion is that accessibility criteria (e.g., avoiding stairs, slopes, and width constraints; providing multimodal signage) are poorly standardized in routing models, despite being essential for realistic evacuation times [24].
For indoor navigation targeting users with visual impairments [3], foundational work has combined UWB/BLE infrastructure with topological representations and dynamic programming algorithms, thereby validating smartphone-based guidance. The GuideBeacon BLE line [26] remains a representative example of the cost–coverage trade-off in large buildings. In parallel, infrastructure-free approaches based on mobile computer vision with IMU fusion are advancing, while systems such as VISA [27] integrate global navigation and proximity perception into a single application. Recent evaluations suggest that, in the presence of crowding or smoke, the temporal consistency of the position estimate and the quality of instructions matter more than absolute metric accuracy [28]. Building on infrastructure-assisted navigation, Ref. [29] presents SUGAR, a UWB-based smartphone system that fuses high-precision indoor positioning with a path-planning algorithm, demonstrating feasibility through a functional prototype tested with a blind user and highlighting UWB’s accuracy and its relatively light deployment in complex interiors. In contrast, Ref. [30] advances an RFID-based architecture that maps building blueprints, localizes the user, and performs obstacle-aware guidance for visually impaired and elderly users.
In recent years, the literature has converged on the idea that the smartphone can act as the primary guidance interface for daily activities [27,31] or during evacuations, as it enables personalized, multimodal, real-time delivery of instructions, even when static signage is compromised by smoke or darkness [32,33]. A particularly striking experimental result comes from the study dedicated to SVGES (Smartphone Voice-Guided Evacuation System) [7]: in a university basement, all 26 participants (100%) successfully evacuated with the application enabled, compared to 58% without it, a difference attributed primarily to the voice instructions, which override visual ambiguity under smoke conditions. The system relies on Bluetooth beacons associated with detectors, pre-designed routes compliant with relevant codes, and audio/graphical notifications on the phone, thereby avoiding a strict reliance on high-accuracy positioning when such estimates become unstable. Even so, tactile feedback remains valuable and gives users confidence [34,35].
On the occupant-acceptance side, qualitative interview studies indicate that people who carry their smartphones during evacuation expect personalized information (e.g., “the safest route at this moment is…”) and prefer short, clear messages delivered with minimal delay. These findings support an interface design based on step-by-step, minimalist, spatially synchronized audio instructions and confirm the usefulness of redundant channels (text and pictograms) when hearing is not impaired [12].
Recent research increasingly shifts from occupant-only guidance to collaborative fire emergency response, where evacuation routing is coupled with information exchange and responder workflows. For example, the CORE system explicitly supports indoor navigation and information sharing among occupants, firefighters, and facility managers, enabling person-to-person rescue, evacuation, and access to resources for suppression [36]. Complementary work also integrates real-time evacuation with rescue-oriented actions, highlighting the need to align automated guidance with operational constraints and responder priorities during dynamic incidents [37]. In addition, BIM-centred emergency platforms increasingly fuse hazard simulation and building semantics to provide updated situational information, which is crucial for coordinating evacuation decisions with on-scene responses [38].
Emergency wayfinding is strongly shaped by acute stress and uncertainty, which can alter perception, attention, and decision-making; recent syntheses formalize these effects and report strong compliance effects for salient cues and social influence in emergencies [39]. Beyond general-population behaviour, controlled experiments show that stress can measurably degrade navigation efficiency and strategy choice, underscoring the importance of designing guidance that minimizes cognitive burden [40]. For impaired users, recent assistive navigation studies highlight that guidance modality and message complexity can introduce additional cognitive load; accordingly, systems should prioritize concise, unambiguous prompts and low-cognitive-load interaction patterns, especially when conditions are time-critical [41]. Decision-making research in fire-like VR scenarios further indicates that smoke can alter exit-choice behavior and the relative weight of familiarity/social cues, reinforcing that smoke-aware guidance and modeling are critical for fire evacuation validity [42].
Recent post-disaster communication research demonstrates rapidly deployable ad hoc networking approaches that enable device-to-device exchange without relying on intact infrastructure, an important direction for decentralized, field-ready systems [43]. In parallel, responder-focused mobile tools combine tracking (e.g., IMU/RFID or similar) with geo-referenced Points of Interest, supporting coordination and situational awareness during indoor missions where GPS fails [44].
The current landscape effectively legitimizes the smartphone as the final guidance node in evacuation, from solid experimental results (e.g., SVGES) and frameworks that reroute on the order of seconds to platforms such as EVAGUIDE/eVACUATE. Based on this, we explicitly commit in this paper to the following pipeline: map, regulation-compliant graph, accessibility-aware cost function (smoke, congestion, stairs/slopes, decision-point density), graph routing with dynamic rerouting, and audio and visual instructions delivered directly on the smartphone. For users with visual impairments, the voice channel remains central, while visual feedback is a natural extension for users with residual vision; together they allow the mobile application to close the sensing–routing–guidance feedback loop.
6. Wi-Fi RSSI-Based Indoor Positioning
Accurate user location is essential for computing evacuation routes from the current position to safe exits. The system implements Wi-Fi-based indoor positioning using Received Signal Strength Indicator (RSSI) measurements from multiple access points.
The positioning algorithm estimates the user coordinates $(x, y)$ by measuring the signal strength of $N$ access points with known positions $(x_i, y_i)$. The relationship between RSSI and distance follows the log-distance path loss model:

$$RSSI_i = RSSI_0 - 10\,n \log_{10}\left(\frac{d_i}{d_0}\right) + X_\sigma,$$

where
$RSSI_i$: measured signal strength from access point $i$ (dBm);
$RSSI_0$: reference signal strength at the reference distance $d_0$ (typically 1 m);
$n$: path loss exponent (typically 2–4 for indoor environments);
$d_i$: distance to the access point $i$;
$X_\sigma$: zero-mean Gaussian noise representing multipath interference.
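As a minimal Dart sketch, the distance to an access point can be recovered by inverting the log-distance path loss model above. The reference RSSI and path loss exponent used here are assumed calibration constants, not values reported in this work:

```dart
import 'dart:math';

// Illustrative inversion of the log-distance path loss model:
// d = d0 * 10^((RSSI_0 - RSSI_i) / (10 n)), with d0 = 1 m assumed.
// rssi0 and n are assumed calibration constants, not measured values.
double distanceFromRssi(double rssi, {double rssi0 = -40, double n = 3}) {
  return pow(10, (rssi0 - rssi) / (10 * n)).toDouble();
}
```

In practice, each deployment would calibrate $RSSI_0$ and $n$ per building, since both vary with wall materials and AP hardware.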
The Flutter application continuously scans for Wi-Fi signals and applies weighted least squares optimization to estimate position, as described in Algorithm A1.
Weighted Least Squares: minimizes the weighted residual error to find the optimal user position:

$$(\hat{x}, \hat{y}) = \arg\min_{(x, y)} \sum_{i=1}^{N} w_i \left( \sqrt{(x - x_i)^2 + (y - y_i)^2} - d_i \right)^2,$$

where $\arg\min$ finds the $(x, y)$ coordinates that minimize the sum, $(x_i, y_i)$ are known AP positions, $d_i$ are estimated distances from RSSI, and $w_i$ are confidence weights.
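The weighted least squares objective above can be minimized in several ways; the following Dart sketch uses plain gradient descent as one possibility (the paper's Algorithm A1 may differ). The step size, iteration count, and centroid initialization are illustrative assumptions:

```dart
import 'dart:math';

// Known AP position, RSSI-estimated distance, and confidence weight.
class AccessPoint {
  final double x, y, d, w;
  AccessPoint(this.x, this.y, this.d, [this.w = 1.0]);
}

// Gradient-descent sketch of the weighted least squares objective.
// The step size and iteration count are illustrative tuning values.
List<double> estimatePosition(List<AccessPoint> aps,
    {double step = 0.05, int iters = 2000}) {
  // Start from the centroid of the APs.
  var x = aps.map((a) => a.x).reduce((a, b) => a + b) / aps.length;
  var y = aps.map((a) => a.y).reduce((a, b) => a + b) / aps.length;
  for (var k = 0; k < iters; k++) {
    var gx = 0.0, gy = 0.0;
    for (final ap in aps) {
      final r = sqrt(pow(x - ap.x, 2) + pow(y - ap.y, 2));
      if (r < 1e-9) continue; // skip the degenerate gradient at an AP
      // Partial derivatives of w * (r - d)^2 with respect to x and y.
      final c = 2 * ap.w * (r - ap.d) / r;
      gx += c * (x - ap.x);
      gy += c * (y - ap.y);
    }
    x -= step * gx;
    y -= step * gy;
  }
  return [x, y];
}
```

A production implementation would typically prefer Gauss–Newton or a closed-form linearization for speed, but the objective being minimized is the same.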
The estimated position $\hat{p} = (\hat{x}, \hat{y})$ is mapped to the nearest graph vertex $v^*$ using

$$v^* = \arg\min_{v \in V} \lVert \hat{p} - p_v \rVert_2,$$

where $\arg\min$ finds the vertex $v$ that minimizes the expression, $p_v = (x_v, y_v)$ are the coordinates of the vertex $v$ in the building graph, and $\lVert \cdot \rVert_2$ denotes the Euclidean distance (L2 norm): $\lVert \hat{p} - p_v \rVert_2 = \sqrt{(\hat{x} - x_v)^2 + (\hat{y} - y_v)^2}$.
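The nearest-vertex mapping above reduces to a linear scan over the graph vertices; a minimal Dart sketch (the `Vertex` type is an illustrative stand-in for the app's graph node class):

```dart
import 'dart:math';

// Illustrative vertex type; the real graph stores building nodes.
class Vertex {
  final String id;
  final double x, y;
  Vertex(this.id, this.x, this.y);
}

// Maps the estimated position to the nearest graph vertex by
// Euclidean distance, as in the argmin expression above.
Vertex nearestVertex(double px, double py, List<Vertex> vertices) {
  var best = vertices.first;
  var bestD = double.infinity;
  for (final v in vertices) {
    final d = sqrt(pow(px - v.x, 2) + pow(py - v.y, 2));
    if (d < bestD) {
      bestD = d;
      best = v;
    }
  }
  return best;
}
```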
Position updates trigger route recalculation when any of the following holds:
the user moves beyond a threshold distance (in m) from the previous position;
the estimated position deviates beyond a threshold distance (in m) from the planned route;
the position accuracy drops below a threshold (in m).
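The three trigger conditions above can be sketched as a single predicate. The threshold values below are illustrative placeholders, not the calibrated thresholds used by the system:

```dart
// Sketch of the route-recalculation triggers. The three thresholds are
// illustrative placeholder values. "Accuracy drops" is interpreted here
// as the estimated error radius growing beyond accuracyThreshold.
bool shouldRecalculate({
  required double movedDistance,    // m since the last accepted fix
  required double routeDeviation,   // m from the planned route
  required double positionAccuracy, // estimated error radius (m)
  double moveThreshold = 2.0,
  double deviationThreshold = 3.0,
  double accuracyThreshold = 5.0,
}) {
  return movedDistance > moveThreshold ||
      routeDeviation > deviationThreshold ||
      positionAccuracy > accuracyThreshold;
}
```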
Accuracy: with 4–6 access points, the system achieves 2–4 m positioning accuracy in typical building environments, sufficient for corridor-level routing. Kalman filtering could be added in the future to smooth jitter and reduce false recalculations caused by RSSI variability.
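A scalar Kalman filter of the kind mentioned above could smooth each AP's RSSI stream before the distance conversion. The following is a minimal sketch; the process and measurement noise variances are assumed tuning values:

```dart
// Minimal 1D Kalman filter sketch for smoothing a noisy RSSI stream.
// q (process noise) and r (measurement noise) are assumed tuning values.
class RssiKalman {
  double _x;      // filtered RSSI estimate (dBm)
  double _p;      // estimate variance
  final double q; // process noise variance
  final double r; // measurement noise variance
  RssiKalman(double initial, {this.q = 0.05, this.r = 4.0})
      : _x = initial,
        _p = 1.0;

  double update(double z) {
    _p += q;                 // predict: uncertainty grows between samples
    final k = _p / (_p + r); // Kalman gain
    _x += k * (z - _x);      // correct toward the new measurement
    _p *= (1 - k);
    return _x;
  }
}
```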
7. Flutter/Dart Implementation
The evacuation routing system is implemented in Dart using the Flutter framework. This technology stack provides several critical advantages for a real-time, safety-critical application. Flutter provides cross-platform development (iOS/Android) with native performance through Dart-to-ARM compilation [46]. Its reactive UI and built-in accessibility support (TalkBack/VoiceOver integration) make it well suited to deploying assistive technology. Some of the advantages of Dart and Flutter are as follows:
Performance and AOT Compilation: Dart compiles Ahead-Of-Time to native machine code (ARM/x86) for mobile devices. This ensures consistent, low-latency performance, crucial for recalculating evacuation routes in real time as environmental conditions change.
Strong Typing and Null Safety: The Dart type system, with its sound null safety, significantly reduces the risk of runtime errors (such as null pointer exceptions) that could cause the application to crash during an emergency.
Asynchronous Processing: Dart’s Future and Stream APIs allow for the efficient handling of asynchronous sensor data updates without blocking the main execution thread, ensuring that the user interface remains responsive.
Cross-Platform Deployment: The Flutter framework enables the deployment of a single codebase to both iOS and Android platforms, ensuring broad accessibility for all building occupants.
The application uses three isolates (Dart’s lightweight threads): Main Isolate: UI rendering; Positioning Isolate: continuous sensor processing; Routing Isolate: pathfinding and sensor monitoring without blocking the UI. Isolates communicate via message passing, ensuring that pathfinding latency does not affect UI responsiveness.
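The message-passing pattern between isolates can be sketched as follows. The payloads here (a `List<double>` of readings in, a `double` out) are illustrative stand-ins for the app's actual `AccessPoint` and `PositioningResult` objects:

```dart
import 'dart:async';
import 'dart:isolate';

// A worker isolate receives raw readings and replies with a processed
// value, never blocking the main isolate. The averaging step is a
// placeholder for the real positioning computation.
void positioningEntry(SendPort toMain) async {
  final inbox = ReceivePort();
  toMain.send(inbox.sendPort); // hand our SendPort back to the spawner
  await for (final msg in inbox) {
    if (msg == null) break; // null acts as a shutdown signal
    final readings = (msg as List).cast<double>();
    final avg = readings.reduce((a, b) => a + b) / readings.length;
    toMain.send(avg); // stand-in for a PositioningResult message
  }
}

Future<double> processInIsolate(List<double> readings) async {
  final fromIsolate = ReceivePort();
  await Isolate.spawn(positioningEntry, fromIsolate.sendPort);
  final it = StreamIterator(fromIsolate);
  await it.moveNext();
  final toIsolate = it.current as SendPort; // first message: the port
  toIsolate.send(readings);
  await it.moveNext();
  final result = it.current as double; // second message: the result
  toIsolate.send(null); // let the worker isolate exit
  await it.cancel();
  return result;
}
```

In the application, the Positioning and Routing isolates stay alive for the whole session rather than being spawned per request; the handshake shown here is the same.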
7.1. User Interface
Within the Main Isolate (UI), state management is handled natively through the StatefulWidget lifecycle, ensuring efficient updates to UI components. For auditory feedback, the system integrates the flutter_tts package, providing synthesized text-to-speech capabilities to reinforce visual instructions.
The core of the Flutter application is encapsulated within the MainPage, a StatefulWidget that orchestrates the lifecycle of the evacuation guidance view. The layout employs a responsive hierarchy rooted in a Scaffold widget, which provides the fundamental Material Design visual structure. The main body is centered and arranged vertically using a Column widget, as depicted in Figure 5. The directional grid is structured with a forward indicator at the top, a middle row containing left and right indicators, and a backward indicator at the bottom. Additionally, a control interface includes a “Start” button to initiate the guidance sequence, primarily for simulation and testing.
The visual state of the directional arrows is managed dynamically by an active arrow state variable. In the active state, the arrow corresponding to the current navigational instruction is rendered in green to signify the correct path. Conversely, all inactive arrows are rendered in grey. This contrast is intentionally designed to reduce cognitive load, allowing the user to focus solely on the active direction without distraction. To further enhance situational awareness, the UI integrates with the flutter_tts engine. The system is programmed to trigger a synthesized voice command (e.g., “Forward”, “Right”) synchronously with visual state changes. This multimodal feedback mechanism ensures that users receive directional guidance even if their visual attention is momentarily diverted. The UI architecture is designed to be a passive consumer of data from backend modules. The evacuation system relies on two primary modules: the Navigation Module and the Routing Module. The Navigation Module is responsible for real-time computation of the user’s precise indoor location. Subsequently, the Routing Module uses this location data to dynamically calculate the shortest path to the nearest safe exit and determine the immediate directional vector required for the user.
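The green/grey arrow logic and its synchronized voice prompt can be modeled as follows. The enum, color strings, and spoken phrases are illustrative assumptions; the real UI renders Flutter widgets and speaks via the flutter_tts engine:

```dart
// Illustrative model of the directional-arrow state: exactly one arrow
// is active (rendered green); all others are grey.
enum Direction { forward, left, right, backward }

class ArrowState {
  Direction active = Direction.forward;

  /// Render color for a given arrow under the green/grey contrast rule.
  String colorFor(Direction d) => d == active ? 'green' : 'grey';

  /// Activates an arrow and returns the phrase that would be handed to
  /// text-to-speech, keeping voice and visuals synchronized.
  String setActive(Direction d) {
    active = d;
    const phrases = {
      Direction.forward: 'Forward',
      Direction.left: 'Left',
      Direction.right: 'Right',
      Direction.backward: 'Backward',
    };
    return phrases[d]!;
  }
}
```

Returning the phrase from the same call that mutates the visual state is one way to guarantee the audio and visual channels never diverge.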
7.2. Pathfinding Logic
The pathfinding logic, already introduced in Section 4, is encapsulated in the DijkstraAlgorithm class. The implementation uses Dart’s standard library collections for efficiency and relies on the following key data structures:
GraphNode represents a physical location with a unique ID, name, exit status, and blocked status;
Edge represents a connection between two nodes, with a base distance and a dynamically computed current cost;
SensorData contains real-time measurements: smoke level, crowding level, fire presence flag, and timestamp;
Graph maintains the complete building topology, with methods for pathfinding, cost updates, and node management.
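The data structures above can be sketched in Dart as follows. The dynamic cost formula (weights `alpha` and `beta`, infinite cost under fire) is an illustrative assumption, not the paper's calibrated weighting:

```dart
// Sketch of the key data structures listed above.
class SensorData {
  final double smoke;    // normalized smoke level in [0, 1]
  final double crowding; // normalized crowding level in [0, 1]
  final bool fire;       // fire presence flag
  final DateTime timestamp;
  SensorData(this.smoke, this.crowding, this.fire, this.timestamp);
}

class GraphNode {
  final String id;
  final String name;
  final bool isExit;
  bool blocked;
  GraphNode(this.id, this.name, {this.isExit = false, this.blocked = false});
}

class Edge {
  final GraphNode from, to;
  final double baseDistance; // static corridor length (m)
  double currentCost;        // recomputed as sensor data arrives
  Edge(this.from, this.to, this.baseDistance) : currentCost = baseDistance;

  // Illustrative dynamic weighting: hazard readings inflate the base
  // distance; fire on the edge makes it effectively impassable.
  void updateCost(SensorData s, {double alpha = 5, double beta = 2}) {
    currentCost = s.fire
        ? double.infinity
        : baseDistance * (1 + alpha * s.smoke + beta * s.crowding);
  }
}
```

Keeping `baseDistance` immutable while only `currentCost` changes lets the Routing Isolate refresh edge weights in place without rebuilding the graph.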
7.3. Indoor Positioning and Graph Mapping Implementation
The positioning logic is encapsulated within a dedicated background isolate. The main thread communicates with the positioning isolate via asynchronous message passing using SendPort and ReceivePort, exchanging the following information: an input list of detected AccessPoint objects, containing known coordinates and distances estimated from the Received Signal Strength Indicator (RSSI), and an output PositioningResult object containing the computed device coordinates and the identifier of the nearest graph edge.
The core positioning logic employs a 2D trilateration algorithm, already presented in Section 6. Given distances $d_1, d_2, d_3$ from three known access points $(x_1, y_1), (x_2, y_2), (x_3, y_3)$, the device’s position $(x, y)$ is determined by solving the intersection of three circles defined by

$$(x - x_i)^2 + (y - y_i)^2 = d_i^2, \quad i = 1, 2, 3.$$

The implementation linearizes these equations by subtracting them pairwise to eliminate the quadratic terms, resulting in a system of linear equations of the form $A\mathbf{x} = \mathbf{b}$, which is solved using Cramer’s rule or substitution. Once the coordinate $(x, y)$ is determined, the system identifies the user’s location within the building’s navigational context. The building is modeled as a graph $G = (V, E)$, where the nodes $V$ represent junctions or access points and the edges $E$ represent walkable corridors. The algorithm iterates through all edges in the graph. For each edge defined by endpoints $A$ and $B$, it calculates the perpendicular distance from the user’s position $P$ to the line segment $\overline{AB}$. The edge minimizing this distance is selected as the user’s current location. This projection logic handles cases in which the projection falls outside the segment endpoints by clamping it to the nearest endpoint.
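The two geometric steps above, pairwise linearization solved by Cramer's rule and clamped projection onto a corridor segment, can be sketched in Dart (input coordinates and distances are illustrative):

```dart
import 'dart:math';

// Trilateration: subtracting circle equations pairwise yields a 2x2
// linear system A [x, y]^T = b, solved here with Cramer's rule.
List<double> trilaterate(List<double> xs, List<double> ys, List<double> ds) {
  // Row i: 2(x_{i+1}-x_1) x + 2(y_{i+1}-y_1) y =
  //        d_1^2 - d_{i+1}^2 + x_{i+1}^2 - x_1^2 + y_{i+1}^2 - y_1^2
  final a11 = 2 * (xs[1] - xs[0]), a12 = 2 * (ys[1] - ys[0]);
  final a21 = 2 * (xs[2] - xs[0]), a22 = 2 * (ys[2] - ys[0]);
  final b1 = pow(ds[0], 2) - pow(ds[1], 2) +
      pow(xs[1], 2) - pow(xs[0], 2) + pow(ys[1], 2) - pow(ys[0], 2);
  final b2 = pow(ds[0], 2) - pow(ds[2], 2) +
      pow(xs[2], 2) - pow(xs[0], 2) + pow(ys[2], 2) - pow(ys[0], 2);
  final det = a11 * a22 - a12 * a21; // zero if the three APs are collinear
  return [(b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det];
}

// Distance from P to segment AB, with the projection clamped to the
// endpoints (used to select the nearest graph edge).
double distanceToSegment(
    double px, double py, double ax, double ay, double bx, double by) {
  final abx = bx - ax, aby = by - ay;
  final len2 = abx * abx + aby * aby;
  var t = len2 == 0 ? 0.0 : ((px - ax) * abx + (py - ay) * aby) / len2;
  t = t.clamp(0.0, 1.0).toDouble(); // clamp projection onto the segment
  final cx = ax + t * abx, cy = ay + t * aby;
  return sqrt(pow(px - cx, 2) + pow(py - cy, 2));
}
```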
10. Conclusions
This research presented a comprehensive mobile evacuation guidance system designed to address the critical safety gaps faced by visually impaired individuals during fire emergencies. By shifting the computational burden from vulnerable building infrastructure to the user’s mobile device, the proposed decentralized architecture ensures operational continuity even when central networks fail. The system integrates real-time environmental monitoring with personalized navigation, providing a robust safety net that adapts to the unpredictable evolution of fires.
The technical core of the solution relies on a modified Dijkstra’s algorithm that goes beyond static pathfinding. By incorporating dynamic edge weights derived from real-time sensor data—including smoke density, crowd congestion, and fire proximity—the system effectively models the changing risk landscape. This is complemented by a Wi-Fi RSSI-based positioning system that, despite the inherent challenges of indoor signal propagation, provides sufficient accuracy for corridor-level navigation. The integration of these components into a cross-platform Flutter application demonstrates the practical feasibility of performing complex, life-critical computations directly on consumer hardware with negligible latency.
Experimental validation across multiple simulated scenarios confirmed the system’s ability to make life-saving decisions in real time. In tests involving sudden blockages and rapidly spreading hazards, the routing engine successfully identified safe alternatives, diverting users from compromised exits to viable safe zones with minimal delay. Performance evaluation on mobile emulators showed that initial path calculations average 100 ms, while critical recalculations occur in just 150 ms, comfortably meeting the sub-second response times required for emergency guidance.
The simulation conducted using the ns-3 platform confirms that the proposed multi-tier architecture achieves high reliability, low latency, stable temporal behavior, and efficient energy use. Separating sensing, edge processing, and centralized management enables scalable deployment while maintaining quality-of-service guarantees.
Privacy and security were foundational to the system’s design, not afterthoughts. The architecture strictly adheres to privacy-by-design principles, ensuring that sensitive user location data never leaves the device. By relying on broadcast-only sensor networks and local processing, the system eliminates the risks associated with centralized tracking while maintaining full functionality. This approach not only protects user privacy but also enhances system scalability, as the network does not become a bottleneck during mass evacuations.
Future work will focus on refining the user interaction model, specifically by implementing a more granular directional guidance system using the Flutter framework to ensure cross-platform compatibility. The proposed implementation prioritizes inclusive design, aiming to expand the system’s utility beyond visual impairments. Future iterations will incorporate redundant sensory cues, such as enhanced haptic patterns and visual strobing, to support deaf and hard-of-hearing individuals. Additionally, further research is planned to validate the system in larger, multi-story complexes and to explore integrating sensor fusion techniques [47,48], combining Wi-Fi RSSI with inertial odometry to improve positioning stability in complex environments [49,50].
11. Limitations and Practical Considerations
While the proposed system demonstrates significant potential to enhance the safety of individuals with impairments during fire evacuations, several limitations and practical challenges must be addressed to ensure robust deployment in real-world scenarios. It is important to highlight, however, that the current decentralized architecture and cross-platform implementation provide a scientifically sound methodological foundation. By shifting the computational load to the user’s device, the system reduces latency and the dependence on central processing servers, a critical step toward resilience. The successful integration of real-time sensor data into a modified Dijkstra’s algorithm validates the core concept of dynamic routing. Nevertheless, to bridge the gap between this promising proof of concept and a fully deployable industrial solution, this section critically examines the system’s reliance on infrastructure and the current scope of its evacuation logic.
11.1. Infrastructure Dependency and Resilience
A critical limitation of the current implementation is its reliance on the building’s Wi-Fi infrastructure for both indoor positioning (RSSI-based) and data transmission. In a severe fire incident, power outages or structural damage may compromise the Wi-Fi network, potentially rendering the localization and real-time update mechanisms inoperable. This contradicts the goal of a fully resilient, decentralized system.
To mitigate this risk, future iterations of the system must incorporate multimodal fallback strategies. Specifically, integrating Bluetooth Low Energy (BLE) beacons, which can operate on battery power independent of the building’s main electrical supply, would provide a robust alternative for localization when Wi-Fi is unavailable. Additionally, leveraging inertial measurement units (IMUs) on the user’s device (accelerometers and gyroscopes) could enable dead-reckoning navigation in a degraded mode, allowing the application to maintain a position estimate even without external signals. Furthermore, exploring opportunistic device-to-device (D2D) communication protocols could allow users’ devices to share hazard information directly, creating a mesh network that functions without a central server or access points.
11.2. Evacuation Logic and Decision Support
The current routing algorithm focuses primarily on finding the optimal path to an exit. However, this approach simplifies the complex decision-making required in a fire emergency, particularly for mobility-impaired users. The system currently lacks a comprehensive Risk Assessment step to determine whether immediate evacuation is indeed the safest option compared to a “defend-in-place” strategy.
For users with severe mobility limitations, attempting to traverse a smoke-filled corridor might pose a greater risk than remaining in a designated fire-safe compartment. To address this, future developments will expand the decision support logic to include a predictive risk protocol. This would involve integrating a simplified fire dynamics model to forecast the spread of smoke and heat over a short time horizon (e.g., the next 5–10 min). By making path costs temporally aware, the system could recommend a shelter-in-place strategy if all viable evacuation routes are predicted to become hazardous before the user can traverse them. The decision logic should explicitly evaluate the safety of the current zone against the predicted conditions of the evacuation path, and provide recommendations such as “Evacuate via Route A,” “Move to Refuge Area B,” or “Remain in Safe Zone.”
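One way the proposed decision rule could look, simplified to an evacuate-or-shelter choice, is sketched below. Everything here is an assumption for illustration: the class, its fields, and the safety margin all depend on the future fire dynamics model, and the refuge-area option is omitted for brevity:

```dart
// Heavily hedged sketch: compare each route's predicted time-to-hazard
// (from a future fire dynamics model) with the user's traversal time.
class RouteOption {
  final String label;              // e.g. 'Route A'
  final double traversalTime;      // s, at the user's effective speed
  final double timeUntilHazardous; // s, forecast for this route
  RouteOption(this.label, this.traversalTime, this.timeUntilHazardous);
}

String recommend(List<RouteOption> routes, {double safetyMargin = 60}) {
  for (final r in routes) {
    // A route must remain tenable for the full traversal plus a margin.
    if (r.timeUntilHazardous > r.traversalTime + safetyMargin) {
      return 'Evacuate via ${r.label}';
    }
  }
  return 'Remain in Safe Zone';
}
```

The effective speed entering `traversalTime` would itself be user-specific, reflecting the lower effective speeds documented for visually impaired and mobility-impaired evacuees.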
11.3. Network Degradation
While the experimental validation demonstrated efficiency gains, the tests assumed a functioning network environment. Real-world fire scenarios often involve significant signal interference and network congestion. Future validation efforts will include simulations of network degradation to quantify the system’s performance limits under packet loss and high-latency conditions, ensuring that the fail-safe mechanisms are triggered reliably when communication quality drops below a critical threshold.