Article

Viewpoint Selection for 3D Scenes in Map Narratives

by Shichuan Liu 1, Yong Wang 1,*, Qing Tang 1 and Yaoyao Han 2

1 Research Centre of Geo-Spatial Big Data Application, Chinese Academy of Surveying and Mapping, Beijing 100036, China
2 School of Spatial Informatics and Geomatics Engineering, Anhui University of Science and Technology, Huainan 232001, China
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2025, 14(6), 219; https://doi.org/10.3390/ijgi14060219
Submission received: 6 March 2025 / Revised: 21 May 2025 / Accepted: 27 May 2025 / Published: 31 May 2025

Abstract

Narrative mapping, an advanced geographic information visualization technology, presents spatial information episodically, enhancing readers’ spatial understanding and event cognition. However, during 3D scene construction, viewpoint selection is heavily reliant on the cartographer’s subjective interpretation of the event. Even with fixed-angle settings, the task of ensuring that selected viewpoints align with the narrative theme remains challenging. To address this, an automated viewpoint selection method constrained by narrative relevance and visual information is proposed. Narrative relevance is determined by calculating spatial distances between each element and the thematic element within the scene. Visual information is quantified by assessing the visual salience of elements as the ratio of their projected area on the view window to their total area. Pearson’s correlation coefficient is used to evaluate the relationship between visual salience and narrative relevance, serving as a constraint to construct a viewpoint fitness function that integrates the visual salience of the convex polyhedron enclosing the scene. The chaotic particle swarm optimization (CPSO) algorithm is utilized to locate the viewpoint position while maximizing the fitness function, identifying a viewpoint meeting narrative and visual salience requirements. Experimental results indicate that, compared to the maximum projected area method and fixed-value method, a higher viewpoint fitness is achieved by this approach. The narrative views generated by this method were positively recognized by approximately two-thirds of invited professionals. This process aligns effectively with narrative visualization needs, enhances 3D narrative map creation efficiency, and offers a robust strategy for viewpoint selection in 3D scene-based narrative mapping.

1. Introduction

Map narrative, which refers to the use of map language to describe events [1], represents a cartographic approach that incorporates a narrative structure [2]. Map narratives enable episodic representation of spatial data, enhancing the comprehensibility of complex events [3], particularly fostering a deep understanding of the geospatial context of the locations where events occur [4]. Consequently, map narratives have been applied in diverse fields, including disaster management [5,6], historical research [7], and geography education [8].
To support immersive and cognitively effective storytelling, 3D visualization has emerged as a powerful extension of traditional 2D cartographic techniques. Compared with flat maps, 3D scenes offer richer visual expressiveness, enabling users to explore events from spatial perspectives. In map narratives, spatial scenes encompass information about the state of events and are composed of spatial elements [9]. These scenes and elements, designed to convey the narrative theme, are referred to as narrative scenes and narrative elements, respectively, in this study. However, because 3D scenes often contain redundant visual information that creates significant visual search challenges [10], 3D narrative maps often suffer from inefficient information delivery and poor focus control.
Effective viewpoint selection thus plays a vital role in map narratives: it not only helps reduce the visual search burden but also aligns the viewer’s focus with the intended narrative theme. By controlling viewpoint position and orientation, storytellers can emphasize critical elements and suppress distracting information [11]. From a cognitive perspective, the viewing angles of 3D objects are constrained by principles of cognitive psychology, which lead to the formation of similar perspectives [12], and these constraints have also been validated physiologically [13]. Similarly, the viewpoints chosen by humans when observing a 3D scene should have a convergent distribution pattern. In the context of map narratives, appropriate viewpoint selection highlights focus, refines details, and accentuates contrasts among narrative elements [14]. This process necessitates accounting for the differentiation of visual information based on the importance or relevance of elements to the narrative theme.
Although viewpoint selection is critical for enhancing narrative effectiveness in 3D map visualization, how to automatically determine viewpoints that best serve narrative goals remains underexplored in cartographic research. A variety of visual metrics—such as visibility [15], projected area [16], surface area entropy [17], curvature [18], outline [16,18], and grid salience [19]—have been used to measure the differentiation in visual information. However, many of these metrics are limited in their applicability to 3D map narratives, where elements may include non-volumetric entities such as points, polylines, and polygons. To ensure generalizability, this study adopts projected area as a universal visual metric, as it applies across element types and also facilitates the assessment of the scene’s overall visual salience [20].
Meanwhile, the viewpoint space in a 3D scene is continuous and infinite, posing a considerable computational challenge. Traditional approaches have relied on fixed or manually defined viewpoints [21,22,23], which often fail to adapt to the semantic or visual complexity of narrative scenes. Manual selection of viewpoints, while intuitive, relies heavily on the subjective cognition of the user and is time-consuming and labor-intensive. Machine learning methods estimate viewpoint parameters using neural networks, but they often lack interpretability and require extensive training datasets [24]. Candidate-based approaches determine viewpoints by fitting vertices and centroids of convex polyhedrons [15,25] or uniformly sampling points on enclosing spheres [26]. However, these methods may overlook more appropriate viewpoints. To address these limitations, this study leverages particle swarm optimization (PSO), particularly chaotic PSO, which enhances global search efficiency and convergence capability in complex, multi-constraint scenarios [20,27].
Building upon the above, this study focuses on viewpoint selection within a single 3D narrative scene that depicts a specific event. It proposes a viewpoint selection method tailored to narrative scenes, which evaluates the narrative relevance of scene elements based on their spatial distances, constrains the visual differentiation of elements using their projected areas, incorporates the projected area of the scene to enhance global salience control, and utilizes a chaotic particle swarm optimization (CPSO) algorithm to efficiently explore the viewpoint space under multiple constraints. The main contributions are as follows:
(1)
Introducing spatial distance as a quantitative measure to assess the narrative relevance of elements within a scene.
(2)
Proposing a constraint framework to regulate the visual differentiation of elements in narrative processes.
(3)
Applying a chaotic particle swarm optimization algorithm to achieve efficient and interpretable viewpoint selection for narrative scenes.

2. Methodological Framework

2.1. Overview

The process of the viewpoint selection method proposed in this paper is outlined in Figure 1. First, the spatial distance between each element in the scene and the narrative thematic element is calculated to assess narrative relevance. The external convex polyhedron and the enclosing sphere of all elements are then obtained. The initial viewpoint positions are determined by uniformly sampling points from the spherical surface. For each viewpoint, the visual salience sequence of elements projected onto the view window is calculated, and the degree of fit between this sequence and the narrative relevance sequence is determined. Additionally, the visual salience of the convex polyhedron of the scene is integrated to evaluate viewpoint fitness. The chaotic particle swarm optimization (CPSO) algorithm is applied to search for the viewpoint position that maximizes the fitness function, and the viewpoint with the highest fitness is selected as the final result.

2.2. Spatial Reference and Viewpoint Orientation

As shown in Figure 2a, the spatial coordinate system used in this study is the Earth-Centered, Earth-Fixed (ECEF) coordinate system, in which the origin of the Cartesian coordinate system is located at the center of the Earth. The x-axis points toward longitude 0° on the equator, the y-axis points toward longitude 90° on the equator, and the z-axis points toward the North Pole. A narrative scene $S$ consists of multiple elements, which are divided into the narrative thematic element set $E$ and the non-thematic element set $E'$. The relationship between the thematic and non-thematic elements is described by Equation (1), where a thematic element is denoted by $e$ and a non-thematic element by $e'$.
$S = E \cup E', \quad E = \{ e_i \},\ i \in \mathbb{Z}^{+}, \quad E' = \{ e'_i \},\ i \in \mathbb{Z}^{+}$ (1)
The viewpoint orientation is defined by the viewpoint position and always points toward the center of the scene, as shown in Figure 2b. The convex polyhedron of the scene is obtained using the QuickHull3D algorithm [28], and the center $(x_0, y_0, z_0)$ of the circumscribed sphere of the polyhedron is taken as the scene center $s$. The orientation of the viewpoint $p$ is calculated from the viewpoint's position $(x, y, z)$. The orientation angles of the viewpoint, namely heading $\alpha$ and pitch $\beta$, are computed as follows (a code sketch is given after these steps):
(1)
The ground normal vector $\mathbf{g}$ is calculated, and its unit vector is denoted $\hat{\mathbf{g}}$:
$\mathbf{g} = (g_x, g_y, g_z) = (x_0, y_0, z_0)$ (2)
(2)
The north vector $\mathbf{n}$ is the projection of the vector $(0, 0, 1)$ onto the ground plane, with its unit vector denoted $\hat{\mathbf{n}}$:
$\mathbf{n} = (0, 0, 1) - \dfrac{(0, 0, 1) \cdot \mathbf{g}}{\lVert \mathbf{g} \rVert^{2}} \, \mathbf{g}$ (3)
(3)
The viewpoint direction vector $\mathbf{d}$ and its projection onto the ground plane, $\mathbf{d}_g$, are also calculated, with the corresponding unit vectors denoted $\hat{\mathbf{d}}$ and $\hat{\mathbf{d}}_g$:
$\mathbf{d} = (d_x, d_y, d_z) = (x_0 - x,\ y_0 - y,\ z_0 - z), \quad \mathbf{d}_g = \hat{\mathbf{d}} - (\hat{\mathbf{d}} \cdot \hat{\mathbf{g}}) \, \hat{\mathbf{g}}$ (4)
(4)
The heading angle is the clockwise angle of $\hat{\mathbf{d}}_g$ relative to the north vector, disambiguated by comparing the viewpoint's longitude $lon_p$ with the longitude $lon_s$ of the scene center; the pitch angle is the angle between the viewpoint direction vector and the ground normal vector:
$\alpha = \begin{cases} \arccos(\hat{\mathbf{d}}_g \cdot \hat{\mathbf{n}}), & lon_p > lon_s \\ 2\pi - \arccos(\hat{\mathbf{d}}_g \cdot \hat{\mathbf{n}}), & lon_p \le lon_s \end{cases}$ (5)
$\beta = \arccos(\hat{\mathbf{d}} \cdot \hat{\mathbf{g}})$ (6)
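To make the computation above concrete, the following is a minimal JavaScript sketch of Equations (2)–(6), assuming ECEF positions given as plain number arrays; the helper and function names (e.g., `headingPitch`) are illustrative and not part of the published implementation.

```javascript
// Minimal sketch of the heading/pitch computation (Equations (2)-(6)).
// Inputs are ECEF coordinates: scene center [x0, y0, z0] and viewpoint [x, y, z].
const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const norm = (a) => Math.sqrt(dot(a, a));
const scale = (a, s) => a.map((c) => c * s);
const sub = (a, b) => a.map((c, i) => c - b[i]);
const normalize = (a) => scale(a, 1 / norm(a));

function headingPitch(center, viewpoint) {
  // Ground normal: vector from the Earth's center to the scene center.
  const g = center;
  const gHat = normalize(g);

  // North vector: projection of (0, 0, 1) onto the local ground plane.
  const up = [0, 0, 1];
  const n = sub(up, scale(g, dot(up, g) / norm(g) ** 2));
  const nHat = normalize(n);

  // View direction: from the viewpoint toward the scene center,
  // and its projection onto the ground plane.
  const d = sub(center, viewpoint);
  const dHat = normalize(d);
  const dg = sub(dHat, scale(gHat, dot(dHat, gHat)));
  const dgHat = normalize(dg);

  // Heading: clockwise angle from north, disambiguated by longitude.
  const lonP = Math.atan2(viewpoint[1], viewpoint[0]);
  const lonS = Math.atan2(center[1], center[0]);
  let heading = Math.acos(dot(dgHat, nHat));
  if (lonP <= lonS) heading = 2 * Math.PI - heading;

  // Pitch: angle between the view direction and the ground normal.
  const pitch = Math.acos(dot(dHat, gHat));
  return { heading, pitch };
}
```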

2.3. Element Evaluation

According to the principle of spatial relevance, the closer narrative elements are to the thematic elements, the higher their relevance. Thus, to evaluate the elements in the scene, this study defines narrative relevance (NR) based on spatial distance, reflecting how strongly each element is associated with the thematic ones. Assuming that the geometric centers of a thematic element and a non-thematic element are located at $(x_e, y_e, z_e)$ and $(x_{e'}, y_{e'}, z_{e'})$, respectively, the narrative relevance $NR_{e'}$ is calculated as shown in Equation (7).
$NR_{e'} = I'(e')\,(1 - \sigma) + \sigma, \quad I_{\max} = \max I(e') + \sigma, \quad I(e') = \dfrac{1}{\left[ \sqrt{(x_e - x_{e'})^2 + (y_e - y_{e'})^2 + (z_e - z_{e'})^2} \right]^{\tau} + \varepsilon}$ (7)
where $NR_{e'}$ is the narrative relevance of an element, taking values in the range $(\sigma, 1)$; $\sigma$ is the standard deviation of all values of $I$; $\tau$ is the decay constraint, where the larger its value, the faster the relevance decays as the spatial distance increases; $\varepsilon$ is a control value much smaller than the distances involved, ensuring that $I(e')$ remains well defined when the distance is 0; $I'(e')$ is obtained by standardizing $I(e')$ with maximum–minimum normalization against $I_{\max}$; and $I_{\max}$ is the maximum value of $I(e')$ plus its standard deviation $\sigma$.
This method provides a quantitative assessment of how strongly each element in the scene is connected to the narrative’s central theme. The narrative relevance is critical in guiding the viewpoint selection process, ensuring that elements which are more relevant to the narrative theme are emphasized in the final visualization.
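As a concrete illustration, the sketch below evaluates Equation (7) in JavaScript under the reconstruction given above (raw relevance decaying with distance, min–max normalization against $\max I + \sigma$, rescaling into $(\sigma, 1)$); all names are illustrative, and the default values of `tau` and `epsilon` are assumptions.

```javascript
// Sketch of narrative relevance (Equation (7)) for a set of element centers.
function narrativeRelevance(thematicCenter, elementCenters, tau = 1, epsilon = 1e-6) {
  const dist = (a, b) =>
    Math.sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 + (a[2] - b[2]) ** 2);

  // Raw relevance I(e'): decays with distance to the thematic element.
  const I = elementCenters.map((c) => 1 / (dist(thematicCenter, c) ** tau + epsilon));

  // Standard deviation sigma of the raw values.
  const mean = I.reduce((s, v) => s + v, 0) / I.length;
  const sigma = Math.sqrt(I.reduce((s, v) => s + (v - mean) ** 2, 0) / I.length);

  // Min-max normalization against I_max = max I + sigma, then rescale into (sigma, 1).
  const iMin = Math.min(...I);
  const iMax = Math.max(...I) + sigma;
  const Iprime = I.map((v) => (v - iMin) / (iMax - iMin));
  return Iprime.map((v) => v * (1 - sigma) + sigma);
}

// Example: three elements; the first coincides with the thematic element.
// narrativeRelevance([0, 0, 0], [[0, 0, 0], [100, 0, 0], [0, 500, 0]]);
```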

2.4. Viewpoint Information Quantification

The visualization of the 3D scene and its elements must effectively convey visual information through the view window centered on the viewpoint. This study uses the projected area to calculate the prominence of each element in visual perception when observing the 3D scene from a particular viewpoint, thereby measuring the amount of visual information conveyed by that viewpoint.
The prominence of an element is directly related to the size of its projected area in the view window. The larger the projected area of an element relative to the total area of the window, the more visually significant it is from the perspective of the viewer. This quantification method enables the evaluation of how well each element is represented visually when viewed from a specific viewpoint.
The horizontal unit vector $\hat{\mathbf{u}}$ and the vertical unit vector $\hat{\mathbf{v}}$ of the view window are computed as:
$\hat{\mathbf{u}} = \dfrac{\hat{\mathbf{o}} \times \hat{\mathbf{g}}}{\lVert \hat{\mathbf{o}} \times \hat{\mathbf{g}} \rVert}, \quad \hat{\mathbf{v}} = \dfrac{\hat{\mathbf{d}} \times \hat{\mathbf{g}}}{\lVert \hat{\mathbf{d}} \times \hat{\mathbf{g}} \rVert}, \quad \hat{\mathbf{o}} = \begin{cases} (1, 0, 0), & \hat{\mathbf{g}} \parallel (0, 1, 0) \\ (0, 1, 0), & \text{otherwise} \end{cases}$ (8)
where $\hat{\mathbf{o}}$ is an arbitrary unit vector not parallel to $\hat{\mathbf{g}}$, e.g., $(1, 0, 0)$ or $(0, 1, 0)$.
The horizontal coordinate $u_i$ and vertical coordinate $v_i$ of a scene point $V_i = (x_i, y_i, z_i)$ with index $i$, projected onto the view window, are calculated as:
$u_i = \mathbf{V}'_i \cdot \hat{\mathbf{u}}, \quad v_i = \mathbf{V}'_i \cdot \hat{\mathbf{v}}, \quad \mathbf{V}_i = (x_i - x,\ y_i - y,\ z_i - z), \quad \mathbf{V}'_i = \mathbf{V}_i - (\mathbf{V}_i \cdot \hat{\mathbf{d}}) \, \hat{\mathbf{d}}$ (9)
where $\mathbf{V}_i$ is the vector from the viewpoint $(x, y, z)$ to the point $V_i$, and $\mathbf{V}'_i$ is its projection onto the view window plane.
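The sketch below illustrates Equations (8) and (9): constructing the window axes and projecting a scene point to 2D window coordinates. It reuses the small vector helpers (`dot`, `sub`, `scale`, `normalize`) defined in the previous sketch; the near-parallel threshold and the function names are assumptions.

```javascript
const cross = (a, b) => [
  a[1] * b[2] - a[2] * b[1],
  a[2] * b[0] - a[0] * b[2],
  a[0] * b[1] - a[1] * b[0],
];

// Window axes (Equation (8)): o is an auxiliary vector chosen not to be parallel to gHat.
function windowAxes(gHat, dHat) {
  const o = Math.abs(dot(gHat, [0, 1, 0])) > 0.999 ? [1, 0, 0] : [0, 1, 0];
  const u = normalize(cross(o, gHat));    // horizontal window axis
  const v = normalize(cross(dHat, gHat)); // vertical window axis
  return { u, v };
}

// Window coordinates of a scene point (Equation (9)).
function projectToWindow(point, viewpoint, dHat, axes) {
  const V = sub(point, viewpoint);                // viewpoint -> scene point
  const Vp = sub(V, scale(dHat, dot(V, dHat)));   // remove the component along the view direction
  return [dot(Vp, axes.u), dot(Vp, axes.v)];
}
```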
The projected area is calculated differently depending on the geometry type of the element $e$ (a code sketch is given at the end of this subsection):
(1)
When $e$ is a polygon element defined by a set $\Lambda = \{ (x_k, y_k, z_k) \},\ k \in \mathbb{Z}^{+}$, of $n$ vertices, where $k$ is the vertex index, its projected area $A_e$ is calculated as:
$A_e = \dfrac{1}{2} \left| \sum_{k=1}^{n} (u_k v_{k+1} - v_k u_{k+1}) \right|, \quad (u_{n+1}, v_{n+1}) = (u_1, v_1)$ (10)
(2)
When $e$ is a polyline element defined by a set $\Lambda = \{ (x_k, y_k, z_k) \},\ k \in \mathbb{Z}^{+}$, of $n$ vertices and rendered with width $w$, its projected area $A_e$ is calculated as:
$A_e = w \sum_{k=1}^{n-1} \sqrt{(u_{k+1} - u_k)^2 + (v_{k+1} - v_k)^2}$ (11)
(3)
When e is a point element, the corresponding projected area A e is calculated according to its specific visualization method.
The ratio of the projected area $A_e$ of the element $e$ to the area $A_{window}$ of the view window expresses the visual salience $VS_e$ of the element $e$ from the viewpoint $p$:
$VS_e = \dfrac{A_e}{A_{window}}, \quad A_{window} = Width \times Height$ (12)
The value of $VS_e$ in Equation (12) ranges from 0 to 1. The larger the value, the more visually significant the element is.
The ratio of the area $A_{convex}$ of the scene outline projected onto the window to the window area $A_{window}$ is the visual salience $VS_s$ of the scene $S$:
$VS_s = \dfrac{A_{convex}}{A_{window}}$ (13)
$A_{convex}$ in Equation (13) is calculated similarly to Equation (10), using the contour vertices of the projected scene outline. $VS_s$ also ranges from 0 to 1; a value close to 1 indicates that the scene almost fills the view window and is therefore visually prominent.
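A minimal sketch of Equations (10)–(13) follows, operating on the 2D window coordinates produced by `projectToWindow` above; the function names are illustrative, and the point-element case, which depends on the symbol used for visualization, is omitted.

```javascript
// Polygon: shoelace formula over the projected vertices (Equation (10)).
function polygonArea(verts) {
  let sum = 0;
  for (let k = 0; k < verts.length; k++) {
    const [u1, v1] = verts[k];
    const [u2, v2] = verts[(k + 1) % verts.length]; // wraps back to the first vertex
    sum += u1 * v2 - v1 * u2;
  }
  return Math.abs(sum) / 2;
}

// Polyline: projected length multiplied by the rendered width (Equation (11)).
function polylineArea(verts, width) {
  let len = 0;
  for (let k = 0; k < verts.length - 1; k++) {
    len += Math.hypot(verts[k + 1][0] - verts[k][0], verts[k + 1][1] - verts[k][1]);
  }
  return width * len;
}

// Visual salience: ratio of a projected area to the window area (Equations (12) and (13)).
const visualSalience = (area, windowWidth, windowHeight) => area / (windowWidth * windowHeight);
```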

2.5. Viewpoint Search and Determination

In our method, the chaotic particle swarm optimization (CPSO) algorithm improves on standard particle swarm optimization (PSO) by introducing chaos theory. Compared with the traditional PSO algorithm, it enhances global search capability while limiting the randomness of the search results, making the selected viewpoints more suitable and reliable. Initial viewpoints are sampled uniformly, using the Fibonacci method, on the circumscribed sphere whose center is the geometric center $(x_0, y_0, z_0)$ of the scene's convex hull polyhedron and whose radius is $R$; only points above the ground are retained. The starting coordinates $(x_i, y_i, z_i)$ of viewpoint $p_i$ are calculated as follows:
$\theta = \dfrac{2 \pi i}{\phi}, \quad \phi = \dfrac{1 + \sqrt{5}}{2}$ (14)
where $\phi$ is the golden ratio, $\theta$ is the azimuthal angle obtained by spacing points according to the golden ratio, which avoids the symmetry produced by uniformly spaced angles and distributes the points more evenly over the sphere, and $i$ is the index of the viewpoint.
$z = 1 - \dfrac{2i}{n - 1}, \quad r = \sqrt{1 - z^{2}}$ (15)
where $z$ is the z-axis component of the viewpoint $p_i$ on the unit sphere, and $r$ is the length of its projection onto the x–y plane of the sphere's local coordinate system.
$x_i = r \cos\theta \cdot R + x_0, \quad y_i = r \sin\theta \cdot R + y_0, \quad z_i = z \cdot R + z_0$ (16)
where $x_i$, $y_i$, and $z_i$ are the coordinates of the viewpoint along the x-, y-, and z-axes of the ECEF coordinate system, respectively.
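The following is a minimal sketch of the Fibonacci sampling in Equations (14)–(16), assuming the sphere center and radius are already known; filtering out points below the ground, mentioned above, is left to the caller, and the function name is illustrative.

```javascript
// Uniform initial viewpoints on the enclosing sphere via the golden-ratio (Fibonacci) lattice.
function fibonacciViewpoints(center, R, n) {
  const phi = (1 + Math.sqrt(5)) / 2; // golden ratio
  const points = [];
  for (let i = 0; i < n; i++) {
    const theta = (2 * Math.PI * i) / phi;        // azimuth spaced by the golden ratio
    const z = 1 - (2 * i) / (n - 1);              // z component on the unit sphere
    const r = Math.sqrt(Math.max(0, 1 - z * z));  // radius in the local x-y plane
    points.push([
      r * Math.cos(theta) * R + center[0],
      r * Math.sin(theta) * R + center[1],
      z * R + center[2],
    ]);
  }
  return points; // points below the ground still need to be filtered out
}
```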
The fitness function $F$, taking values in $[-1, 1]$, evaluates how suitable a viewpoint position is during the particle search and is constructed from narrative relevance and visual salience:
$F = \dfrac{\varsigma + \lambda \cdot VS_s}{1 + \lambda \cdot VS_s}$ (17)
where $\lambda$ is a control constant, and $\varsigma$ is Pearson's correlation coefficient between the narrative relevance sequence and the visual salience sequence of all elements in the scene:
$\varsigma = \dfrac{\sum_{i=1}^{n} (NR_{e_i} - \overline{NR}_e)(VS_{e_i} - \overline{VS}_e)}{\sqrt{\sum_{i=1}^{n} (NR_{e_i} - \overline{NR}_e)^{2}} \sqrt{\sum_{i=1}^{n} (VS_{e_i} - \overline{VS}_e)^{2}}}$ (18)
where $\overline{NR}_e$ and $\overline{VS}_e$ are the average values of the respective sequences.
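A direct JavaScript sketch of Equations (17) and (18) is shown below; `nrSeq` and `vsSeq` are the per-element narrative relevance and visual salience sequences, `vsScene` is $VS_s$, and the default value of `lambda` is an assumption.

```javascript
// Pearson correlation between two equal-length sequences (Equation (18)).
function pearson(xs, ys) {
  const mean = (a) => a.reduce((s, v) => s + v, 0) / a.length;
  const mx = mean(xs);
  const my = mean(ys);
  let num = 0, dx = 0, dy = 0;
  for (let i = 0; i < xs.length; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    dx += (xs[i] - mx) ** 2;
    dy += (ys[i] - my) ** 2;
  }
  return num / Math.sqrt(dx * dy);
}

// Viewpoint fitness combining correlation and scene salience (Equation (17)).
function fitness(nrSeq, vsSeq, vsScene, lambda = 1) {
  const rho = pearson(nrSeq, vsSeq);
  return (rho + lambda * vsScene) / (1 + lambda * vsScene);
}
```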
For each generation, the particle velocity and position are updated with Equations (19) and (20), respectively; the performance of each particle is evaluated using the fitness function $F$, and the individual best position $l_{best}$ and the global best position $g_{best}$ are updated.
$\upsilon_{x,j}(t+1) = \omega \upsilon_{x,j}(t) + c_1 r_1 (l_{x,best} - x_j(t)) + c_2 r_2 (g_{x,best} - x_j(t))$
$\upsilon_{y,j}(t+1) = \omega \upsilon_{y,j}(t) + c_1 r_1 (l_{y,best} - y_j(t)) + c_2 r_2 (g_{y,best} - y_j(t))$
$\upsilon_{z,j}(t+1) = \omega \upsilon_{z,j}(t) + c_1 r_1 (l_{z,best} - z_j(t)) + c_2 r_2 (g_{z,best} - z_j(t))$ (19)
$x_j(t+1) = x_j(t) + \upsilon_{x,j}(t+1)$
$y_j(t+1) = y_j(t) + \upsilon_{y,j}(t+1)$
$z_j(t+1) = z_j(t) + \upsilon_{z,j}(t+1)$ (20)
where $\upsilon_{x,j}$, $\upsilon_{y,j}$, and $\upsilon_{z,j}$ represent the velocity components of particle $j$ along the x-, y-, and z-axes, respectively, while $x_j$, $y_j$, and $z_j$ denote the particle's coordinate position. $\omega$ is the inertia weight, which controls the influence of the current velocity and is typically used to maintain the particle's direction of movement. $c_1$ is the self-learning factor that quantifies the particle's attraction to its own historical best position $l_{best}$, while $c_2$ is the social learning factor that measures the particle's attraction to the global best position $g_{best}$. $r_1$ and $r_2$ are random numbers in $[0, 1]$, incorporated to enhance the randomness of the search. $l_{x,best}$, $l_{y,best}$, and $l_{z,best}$ are the components of the individual best position, while $g_{x,best}$, $g_{y,best}$, and $g_{z,best}$ are those of the global best position. Particles are randomly repositioned to one of the initial viewpoints if they move outside the view window.
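The update rules in Equations (19) and (20) correspond to the following sketch for a single particle per generation; the chaotic initialization and perturbation steps of CPSO, the fitness evaluation, and the out-of-range repositioning are omitted, and the default coefficient values are assumptions.

```javascript
// One PSO velocity/position update for a particle { position: [x, y, z], velocity: [vx, vy, vz] }.
function updateParticle(particle, lBest, gBest, { omega = 0.7, c1 = 1.5, c2 = 1.5 } = {}) {
  const r1 = Math.random();
  const r2 = Math.random();
  for (let k = 0; k < 3; k++) { // x, y, z components (Equations (19) and (20))
    particle.velocity[k] =
      omega * particle.velocity[k] +
      c1 * r1 * (lBest[k] - particle.position[k]) +
      c2 * r2 * (gBest[k] - particle.position[k]);
    particle.position[k] += particle.velocity[k];
  }
  return particle;
}
```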

3. Experiment

3.1. Data

The experimental data used in this study are based on the narrative description of a rainstorm disaster that occurred in July 2024 in Hanyuan County, Sichuan Province, China. These descriptions, obtained from the Baidu Encyclopedia (https://baike.baidu.com/) (accessed on 25 November 2024), provide details about the events, including the geographic distribution of affected areas, damaged infrastructure, and key locations. Table 1 outlines the titles and descriptions of the two narrative scenes. Table 2 lists the extracted narrative elements from both scenes, along with their geometric types and identifiers. “Flooded Area 1#” serves as the thematic element for Scene A, while “Damaged Bridge 2#” and “Damaged Bridge 3#” are designated as thematic elements for Scene B. Figure 3 illustrates the spatial distribution of all narrative elements, offering visual context.

3.2. Parameters Analysis

The number N of initial viewpoints and the number G of iterations of the particle swarm algorithm greatly affect the speed and effectiveness of the algorithm. In this study, we tested values of N (100, 200, 300, and 400) and G (100, 200, 300, and 400) to analyze their impacts on computational time and fitness. Figure 4a–d shows the spatial distributions of initial viewpoints for N = 100, 200, 300, and 400. As N increases, the uniformity of the sampled points improves, providing more comprehensive coverage of the potential viewpoint space. Similarly, increasing the number of iterations G allows more refined adjustment of the particles' positions during optimization, leading to higher fitness values.
Based on the iterative calculations from the initial viewpoints shown in Figure 4 and the corresponding time consumption and fitness depicted in Figure 5a–d, the fitness curves stabilize after a certain number of generations. G = 300 offers a good balance between fitness and computation time; thus, the number of iterations of the particle swarm algorithm is set to 300. With G = 300, the impact of varying N on the computational results is illustrated in Figure 5e, revealing that the algorithm achieves optimal performance when N is also set to 300.

3.3. Result and Analysis

The results of the viewpoint selection experiments are examined from the perspectives of narrative relevance and visual salience. The proposed method is compared with two baseline approaches: the fixed-value method and the maximum projected area method. The maximum projected area method maximizes the projected area of the entire scene, i.e., it makes $VS_s$ as large as possible. In contrast, the fixed-value method positions the viewpoint along the vertical angular bisector of the field of view at a fixed 45° angle to the scene plane. The calculation results of narrative relevance and visual salience are presented in Table 3 and Table 4. Table 3 summarizes the results of Scene A, where *F1 marks the narrative thematic element F1, and VS-Max, VS-Fixed, and VS-CPSO denote the visual salience scores of the maximum projected area method, the fixed-value method, and the proposed method, respectively. Table 4 presents the corresponding results for Scene B, in which *B2 and *B3 are jointly considered as the narrative thematic elements.
Figure 6, a visualization derived from Table 3 and Table 4, clearly shows that the visual salience achieved by the proposed method aligns more closely with narrative relevance. It also enhances the differentiation between elements, particularly polygonal elements.
The resultant views are presented in Figure 7, in which (a) corresponds to the view generated by the maximum projected area method, (b) represents the view with a fixed 45° downward angle, (c) illustrates the view produced by the proposed method for Scene A, and (d) shows the result of the proposed method applied to Scene B.
Taking Scene A as an example, the viewpoint selection process of the proposed method is depicted in Figure 8. The upper section illustrates the results of viewpoint selection across generations, with fitness values represented by different colors, while the lower section depicts the spatial distribution of viewpoint fitness. It is evident that viewpoints at higher altitudes exhibit greater fitness values. In combination with Figure 4, viewpoints aligned with the orientation of the narrative thematic elements also display higher fitness values; these values are not concentrated at a single point but favor a regional range.

3.4. Discussion

To evaluate the method proposed in this paper, fitness and votes are selected as evaluation metrics, with the results presented in Table 5. Fitness is the viewpoint fitness value calculated with Equation (17). In addition, 26 professionals with relevant backgrounds, such as cartography, disaster relief, and emergency management, were invited to vote for the most appropriate viewpoint based on the scenario description. The number of votes received by each viewpoint is recorded as 'Vote' in Table 5.
Table 5 shows that 66% of the respondents agreed that the scene view selected by the proposed method provides the most accurate description. This indicates that the viewpoints selected by this method outperform those of the fixed-value and maximum projected area methods and align more closely with the respondents' understanding and cognition of the scene. However, the proposed method still has limitations, particularly for polyline and point elements, whose visual information is not reflected in a balanced manner. Furthermore, the viewpoint orientation is fixed to always face the center of the scene, i.e., the center point of the view window coincides with the center point of the scene. This simplification does not exclude the possibility of overlooking a more suitable viewpoint.

4. Conclusions

Aiming to address the issues of redundant scene information and the low efficiency of viewpoint selection in the construction of 3D scenes for map narratives, this paper proposes a viewpoint evaluation method that integrates narrative relevance and visual salience, enabling automatic viewpoint selection for 3D scenes based on the chaotic particle swarm optimization algorithm. The experimental results demonstrate that the proposed viewpoint selection method not only aligns more closely with human visual cognition but also improves the efficiency of viewpoint selection in 3D scene construction, which is significant for 3D visualization in map narratives. However, when an event is described by a sequence of scenes, viewpoint transitions between scenes must maintain continuity, and the underlying logic of the narrative is implied through this continuity. Expressing the logical relationship between successive viewpoints in a map narrative is therefore the objective of our future research.

5. Software

Figure 3 was created using QGIS version 3.34.5, while Figure 8 was generated with ArcGIS Pro version 3.3.2. The CesiumJS 3D engine was employed to visualize the scene in 3D and to produce the view results shown in Figure 7. The implementation was written primarily in JavaScript with the Vue.js framework, with the Microsoft Edge browser serving as the runtime environment.

Author Contributions

Conceptualization, Shichuan Liu and Yong Wang; methodology, Shichuan Liu; software, Shichuan Liu; validation, Yaoyao Han; formal analysis, Shichuan Liu; investigation, Shichuan Liu; resources, Yong Wang; data curation, Qing Tang; writing—original draft preparation, Shichuan Liu; writing—review & editing, Yaoyao Han; visualization, Qing Tang; supervision, Yong Wang; project administration, Yong Wang; funding acquisition, Yong Wang. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Key Project of the National Key Research and Development Program of China (2024YFC3015603).

Data Availability Statement

The event description page used in this study is from the Baidu Encyclopedia: https://baike.baidu.com/item/7·20汉源暴雨/64672866 (accessed on 25 November 2024). All code implementations and data used in this experiment have been made publicly available on GitHub: https://github.com/LeoSzechwan/ViewpointSelection.git (master branch used).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Su, S.; Wang, L.; Du, Q.; Zhang, J.; Kang, M.; Weng, M. Revisiting Narrative Maps: Fundamental Theoretical Issues and a Research Agenda. Acta Geod. Cartogr. Sin. 2023, 52, 168–186. [Google Scholar] [CrossRef]
  2. Roth, R.E. Cartographic Design as Visual Storytelling: Synthesis and Review of Map-Based Narratives, Genres, and Tropes. Cartogr. J. 2021, 58, 83–114. [Google Scholar] [CrossRef]
  3. Wood, D. The Power of Maps; Guilford Press: New York, NY, USA, 1992; ISBN 978-0-89862-492-2. [Google Scholar]
  4. Bodenhamer, D.J.; Corrigan, J.; Harris, T.M. Deep Maps and Spatial Narratives; Indiana University Press: Bloomington, IN, USA, 2015; ISBN 978-0-253-01555-6. [Google Scholar]
  5. A’rachman, F.R.; Setiawan, C.; Hardi, O.S.; Insani, N.; Alicia, R.N.; Fitriani, D.; Hafizh, A.R.; Alhadin, M.; Mozzata, A.N. Designing Effective Educational Storymaps for Flood Disaster Mitigation in the Ciliwung River Basin: An Empirical Study. IOP Conf. Ser. Earth Environ. Sci. 2024, 1314, 012082. [Google Scholar] [CrossRef]
  6. Tasliya, R.; Fatimah, E.; Umar, M. Investigating the Impact of Story Maps in Developing Students’ Spatial Abilities on Hydrometeorological Disasters for E-Portfolio Assignments. Int. J. Soc. Sci. Educ. Econ. Agric. Res. Technol. 2023, 2, 459–475. [Google Scholar] [CrossRef]
  7. Carrard, P. Mapped Stories: Cartography, History, and the Representation of Time in Space. Front. Narrat. Stud. 2018, 4, 263–276. [Google Scholar] [CrossRef]
  8. Li, J.; Xia, H.; Qin, Y.; Fu, P.; Guo, X.; Li, R.; Zhao, X. Web GIS for Sustainable Education: Towards Natural Disaster Education for High School Students. Sustainability 2022, 14, 2694. [Google Scholar] [CrossRef]
  9. Lv, G.N.; Yu, Z.Y.; Yuan, L.W.; Luo, W.; Zhou, L.C.; Wu, M.G.; Sheng, Y.H. Is the Future of Cartography the Scenario Science? J. Geo-Inf. Sci. 2018, 20, 5–10. [Google Scholar]
  10. Dong, W.; Liao, H.; Zhan, Z.; Liu, B.; Wang, S.; Yang, T. New Research Progress of Eye Tracking-Based Map Cognition in Cartography since 2008. Acta Geogr. Sin. 2019, 74, 599–614. [Google Scholar]
  11. Liu, B.; Dong, W.; Wang, Y.; Zhang, N. The Influence of FOV and Viewing Angle on the VisualInformation Processing of 3D Maps. J. Geo-Inf. Sci. 2015, 17, 1490–1496. [Google Scholar]
  12. Blanz, V.; Tarr, M.J.; Bülthoff, H.H. What Object Attributes Determine Canonical Views? Perception 1999, 28, 575–599. [Google Scholar] [CrossRef]
  13. Stewart, E.E.M.; Fleming, R.W.; Schütz, A.C. A Simple Optical Flow Model Explains Why Certain Object Viewpoints Are Special. Proc. R. Soc. B 2024, 291, 20240577. [Google Scholar] [CrossRef]
  14. Shen, Y.; Yajie, X.; Yu, L. Elements and Organization of Narrative Maps: A Element-Structure-Scene Architecture Based on the Video Game Perspective. Acta Geod. Cartogr. Sin. 2024, 53, 967–980. [Google Scholar]
  15. Bing-Jie, L.; Chang-Bin, W. An Optimal Viewpoint Selection Approach for 3D Cadastral Property Units Considering Human Visual Perception. Geogr. Geo-Inf. Sci. 2023, 39, 3–9. [Google Scholar]
  16. Câmara, G.; Egenhofer, M.J.; Fonseca, F.; Vieira Monteiro, A.M. What’s in an Image? In Spatial Information Theory; Montello, D.R., Ed.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2001; Volume 2205, pp. 474–488. ISBN 978-3-540-42613-4. [Google Scholar]
  17. Vázquez, P.-P.; Feixas, M.; Sbert, M.; Heidrich, W. Viewpoint Selection Using Viewpoint Entropy. In Proceedings of the Vision Modeling and Visualization Conference 2001; Aka GmbH: Frankfurt, Germany, 2001; pp. 273–280. [Google Scholar]
  18. Page, D.L.; Koschan, A.F.; Sukumar, S.R.; Roui-Abidi, B.; Abidi, M.A. Shape Analysis Algorithm Based on Information Theory. In Proceedings of the 2003 International Conference on Image Processing (Cat. No. 03CH37429), Barcelona, Spain, 14–17 September 2003; Volume 1, pp. I-229–I-232. [Google Scholar]
  19. Lee, C.H.; Varshney, A.; Jacobs, D.W. Mesh Saliency. ACM Trans. Graph. 2005, 24, 659–666. [Google Scholar] [CrossRef]
  20. Zhang, F.; Li, M.; Wang, X.; Wang, M.; Tang, Q. 3D Scene Viewpoint Selection Based on Chaos-Particle Swarm Optimization. In Proceedings of the Seventh International Symposium of Chinese CHI, Xiamen, China, 27 June 2019; ACM: New York, NY, USA, 2019; pp. 97–100. [Google Scholar]
  21. Häberling, C.; Bär, H.; Hurni, L. Proposed Cartographic Design Principles for 3D Maps: A Contribution to an Extended Cartographic Theory. Cartographica 2008, 43, 175–188. [Google Scholar] [CrossRef]
  22. Schmidt, M.; Delazari, L. Gestalt Aspects for Differentiating the Representation of Landmarks in Virtual Navigation. Cartogr. Geogr. Inf. Sci. 2013, 40, 159–164. [Google Scholar] [CrossRef]
  23. Liu, Z.; Ma, J.; Pan, X. Optimal Viewpoint Extraction Algorithm for Three-Dimensional Model Based on Features Adaption. J. Comput. Aided Des. Comput. Graph. 2014, 26, 1774–1780. [Google Scholar]
  24. Zhang, Y.; Fei, G.; Yang, G. 3D Viewpoint Estimation Based on Aesthetics. IEEE Access 2020, 8, 108602–108621. [Google Scholar] [CrossRef]
  25. Cao, W.; Hu, P.; Li, H.; Lin, Z. Canonical Viewpoint Selection Based on Distance-Histogram. J. Comput. Aided Des. Comput. Graph. 2010, 22, 1515–1521. [Google Scholar]
  26. Neuville, R.; Pouliot, J.; Poux, F.; Billen, R. 3D Viewpoint Management and Navigation in Urban Planning: Application to the Exploratory Phase. Remote Sens. 2019, 11, 236. [Google Scholar] [CrossRef]
  27. Panpan, J.; Yanyang, Z.; Yunxia, F. Best Viewpoint Selection for 3D Visualization Using Particle Swarm Optimization. J. Syst. Simul. 2017, 29, 156–162. [Google Scholar] [CrossRef]
  28. Poppe, M. QuickHull3D. Available online: https://github.com/mauriciopoppe/quickhull3d (accessed on 24 December 2024).
Figure 1. Method flow chart. This flowchart illustrates an iterative viewpoint optimization process integrating narrative relevance, visual salience, and fitness evaluation. It begins with calculating narrative relevance and extracting viewpoints from a convex hull. The visual salience of these viewpoints is analyzed, followed by relevance and fitness computations. The process iterates by updating viewpoint positions until a termination condition is met, leading to the selection of the final viewpoint.
Figure 2. Spatial references and viewpoints. A composite diagram illustrating a visibility-based fitness evaluation. (a) shows a 3D distribution of viewpoints forming intersecting curves around a sphere. (b) provides a zoomed-in view of a specific viewpoint and its visibility analysis.
Figure 3. Map of the narrative elements of the scene. A geographical map highlighting flood-affected areas with satellite imagery as the base layer. The map includes markers for signal stations, damaged bridges, and damaged roads, represented by distinct icons in the legend. A blue polygon outlines the flooded area. Latitude and longitude coordinates are displayed on the map’s borders. A scale bar indicating 250 and 500 m is located in the bottom right corner.
Figure 4. Different number of initial viewpoints. Four-panel visualization of 3D terrain depicting particle simulations with varying particle counts: (a) N = 100, (b) N = 200, (c) N = 300, and (d) N = 400. Each panel shows red lines radiating outward from a central region, with yellow points marking the ends of the lines. A blue feature in the center represents a key area of interest. The terrain is shaded green with visible elevation variations.
Figure 5. Parameter effect comparison. A series of five-line charts analyzing the relationship between computational time and fitness value. Charts (ad) show variations in the parameter G (generations) for fixed particle counts N of 100, 200, 300, and 400, respectively. Chart (e) compares results across different N values for a fixed G = 300. Blue lines represent time in milliseconds, while green lines represent fitness values. Both metrics increase with larger parameters, with some variation in fitness peaks. Dual y-axes are used to plot time and fitness on separate scales.
Figure 6. Comparison of the relationship between NR and VS. A radial chart comparing performance metrics across different methods, labeled as NR, VS-CPSO, VS-Max, and VS-Fixed. (a) Results for Scene A, with F1 as the narrative thematic element. (b) Results for Scene B, with B2 and B3 jointly considered as thematic elements. The segments are divided into multiple radial sections labeled R1 to R7, B1 to B3, F1, F2, and SS. The chart uses varying shades of blue to represent the different methods, with darker shades indicating better performance. Each segment’s width and height visually indicate the relative performance of each method.
Figure 7. Results view. Comparison of 3D scene viewpoints selected using three methods: (a) maximum projected area method, (b) fixed-value method, (c) the proposed method for Scene A, and (d) the proposed method for Scene B. Features are marked as magenta for damaged bridges, orange for signal stations, yellow for damaged roads, and blue for flooded areas. The proposed method offers a clearer, narrative-focused view in both scenes, effectively emphasizing thematic elements such as flooded areas and damaged infrastructure, in contrast to the baseline methods that either obscure key features or fail to reflect narrative relevance.
Figure 8. The viewpoint selection process. 3D visualization showing the spatial distribution of viewpoint fitness values over a terrain. The upper layer maps individual viewpoints with color-coded fitness levels, where yellow indicates higher fitness (0.79–0.84) and dark purple to black indicates lower fitness (−0.35 to −0.22). The lower layer represents a fitness surface plot, with peaks corresponding to higher fitness regions and valleys indicating lower fitness. This visualization highlights the areas with optimal viewpoints for narrative relevance and visual salience.
Table 1. Events.
Scene | Title | Description
A | Sudden Flash Flood | At about 2:30 a.m. on 20 July 2024, a flash flood occurred in Xinhua Village, Malie Township, Hanyuan County, Ya'an City, as a result of heavy rainfall, disrupting signals, roads, and bridges.
B | Traffic Conditions Return | By 9:21 a.m. on 22 July 2024, both small bridges to the disaster area had been restored. Doufushi Bridge was reopened earlier that morning with a temporary emergency bridge, and a heavy-duty steel bridge was installed later to meet flood season and reconstruction needs.
Table 2. Scene elements.
Name | Id | Geometry Type
Flooded Area 1# | F1 | Polygon
Flooded Area 2# | F2 | Polygon
Damaged Road 1# | R1 | Polyline
Damaged Road 2# | R2 | Polyline
Damaged Road 3# | R3 | Polyline
Damaged Road 4# | R4 | Polyline
Damaged Road 5# | R5 | Polyline
Damaged Road 6# | R6 | Polyline
Damaged Road 7# | R7 | Polyline
Signal Station | SS | Point
Damaged Bridge 1# | B1 | Point
Damaged Bridge 2# | B2 | Point
Damaged Bridge 3# | B3 | Point
Table 3. Calculation results of narrative relevance and visual salience in Scene A.
Element Id | NR | VS-Max | VS-Fixed | VS-CPSO
*F1 | 1.0000 | 8.3559 × 10⁻³ | 8.1118 × 10⁻³ | 1.3010 × 10⁻²
R2 | 0.8315 | 2.0557 × 10⁻⁴ | 1.0091 × 10⁻⁴ | 1.2151 × 10⁻⁴
R3 | 0.6984 | 1.2787 × 10⁻⁴ | 2.4231 × 10⁻⁴ | 2.6994 × 10⁻⁴
B1 | 0.6839 | 9.6880 × 10⁻⁵ | 9.6880 × 10⁻⁵ | 9.6880 × 10⁻⁵
SS | 0.6177 | 4.3058 × 10⁻⁵ | 4.3058 × 10⁻⁵ | 4.3058 × 10⁻⁵
R1 | 0.5616 | 5.2424 × 10⁻⁴ | 3.0681 × 10⁻⁴ | 3.7913 × 10⁻⁴
R4 | 0.5484 | 1.7082 × 10⁻⁴ | 1.0009 × 10⁻⁴ | 1.1906 × 10⁻⁴
F2 | 0.4637 | 2.7533 × 10⁻³ | 2.1064 × 10⁻³ | 3.2765 × 10⁻³
R5 | 0.4593 | 8.7834 × 10⁻⁵ | 2.0996 × 10⁻⁴ | 2.4224 × 10⁻⁴
B2 | 0.3408 | 9.6880 × 10⁻⁵ | 9.6880 × 10⁻⁵ | 9.6880 × 10⁻⁵
R6 | 0.3362 | 4.2096 × 10⁻⁴ | 3.4138 × 10⁻⁴ | 4.2183 × 10⁻⁴
B3 | 0.2956 | 9.6880 × 10⁻⁵ | 9.6880 × 10⁻⁵ | 9.6880 × 10⁻⁵
R7 | 0.2946 | 3.7097 × 10⁻⁴ | 2.7704 × 10⁻⁴ | 3.5369 × 10⁻⁴
Table 4. Calculation results of narrative relevance and visual salience in Scene B.
Element Id | NR | VS-Max | VS-Fixed | VS-CPSO
*B2 | 1 | 9.69 × 10⁻⁵ | 9.69 × 10⁻⁵ | 2.06 × 10⁻⁴
*B3 | 1 | 9.69 × 10⁻⁵ | 9.69 × 10⁻⁵ | 2.06 × 10⁻⁴
R7 | 0.7487 | 3.71 × 10⁻⁴ | 2.77 × 10⁻⁴ | 3.48 × 10⁻⁴
R6 | 0.7324 | 4.21 × 10⁻⁴ | 3.41 × 10⁻⁴ | 4.17 × 10⁻⁴
R5 | 0.3800 | 8.78 × 10⁻⁵ | 2.10 × 10⁻⁴ | 1.70 × 10⁻⁴
R4 | 0.3555 | 1.71 × 10⁻⁴ | 1.00 × 10⁻⁴ | 1.53 × 10⁻⁴
SS | 0.3515 | 4.31 × 10⁻⁵ | 4.31 × 10⁻⁵ | 9.16 × 10⁻⁵
F1 | 0.3265 | 8.36 × 10⁻³ | 8.11 × 10⁻³ | 7.44 × 10⁻³
B1 | 0.3177 | 9.69 × 10⁻⁵ | 9.69 × 10⁻⁵ | 2.06 × 10⁻⁴
R1 | 0.3175 | 5.24 × 10⁻⁴ | 3.07 × 10⁻⁴ | 3.56 × 10⁻⁴
R3 | 0.3163 | 1.28 × 10⁻⁴ | 2.42 × 10⁻⁴ | 1.99 × 10⁻⁴
F2 | 0.3109 | 2.75 × 10⁻³ | 2.11 × 10⁻³ | 1.80 × 10⁻³
R2 | 0.2160 | 2.06 × 10⁻⁴ | 1.01 × 10⁻⁴ | 1.63 × 10⁻⁴
Table 5. Evaluation results.
Method | Fitness | Vote
Maximum projected area method | 0.37 | 4
Fixed-value method | 0.38 | 5
Our method | 0.84 | 17
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
