Map Generalization for the Future: Editorial Comments on the Special Issue

Generalization of geospatial data is a cornerstone of cartography, a sequence of often unnoticed operations that lays the foundation of visual communication. In recent years, the practice of cartographic generalization has advanced beyond a narrow focus on display and legibility to include strategies that support analysis and feature recognition, exploit spatial and semantic contexts, and preserve relationships between and among features. It is increasingly recognized that generalization exists not just to produce static reference maps, but also to enable dynamic, interactive web maps, facilitate data distribution, and support multi-scale modelling and analysis. Generalization methods are in regular use in national mapping agencies that produce and steward very large data sets and archives, in private commercial organizations managing multi-scale data portals, in web-based mapping platforms that permit a range of zoom levels, and in volunteer-driven and open source mapping hubs that provide services for downloading base maps and thematic data layers. It is within this context of widening needs and applications that emerging research in generalization and multiscale representation was sought for this Special Issue.
Although the corpus of eight articles published herein is perhaps too small to encapsulate the entire field, three topics emerge that seem broadly representative of current research trends, and point to significant positive advances that may be expected to permeate the field in the near future. These are: (1) adoption of new adaptive frameworks to intelligently guide traditional generalization techniques, (2) use of advanced deep learning and artificially intelligent algorithms that perform generalization operations, and (3) expansion of generalization principles beyond traditional application areas to new, specialized or hitherto neglected corners of cartography. With these advances, cartographers are strategically extending existing techniques, exploring new computational paradigms, and expanding their field of vision. In its own way, each of the eight articles in this issue demonstrates these principles.
Cartography suffers no lack of algorithms that select, simplify, aggregate, exaggerate or otherwise modify features for display at smaller scales. The criticism, of course, is that many algorithms are quite mechanical in their performance, blindly repeating simple steps without due consideration of context, nuance and competing priorities. One way out of this dilemma is to adopt innovative frameworks that intelligently sequence and parameterize existing algorithms based on multiple objectives and metrics. The hope is that such frameworks will lead to adaptive processes grounded in the logic of experts, and be able to replicate the aesthetic balancing act that cartographers have performed manually for centuries. Three of the articles here explore such avenues, refining existing logical frameworks and creating new ones to address core problems in cartographic generalization.
Borrowing from information theory, Liu and Li [1] develop a model of feature-based information entropy that informs the aggregation of road segments (i.e., stroke generation) to define road features for selection and thinning. Their goal is straightforward: find the optimal angle threshold to parameterize stroke generation for different cities. By measuring changes in the information content at different angle thresholds, they identify a unique pattern for each road network and parameterize the stroke generation procedure accordingly. The authors illustrate how the process defines different parameters for road networks in different cities, and the results are an example of adaptive procedures that promise to move cartographic generalization beyond a one-size-fits-all approach. Han et al. [2] also address stroke generation for road selection, but their focus prioritizes multiple topological and contextual criteria. They combine network-based metrics of each road segment's importance with its relevance to nearby points of interest (POIs), using an analytic hierarchy process (AHP) to derive a score for each segment from weights calculated for each criterion. Their research demonstrates that an AHP can balance competing priorities. Since cartographic generalization necessarily involves consideration of many criteria, this approach may simulate choices made by experts. Fiedukowicz [3] compares different variations of fuzzy and rough set theory to determine attributes that can be used for automatic selection or omission of roads, buildings and watercourses on small-scale maps. Focusing on the concept of reducts (subsets of attributes sufficient to discriminate between features selected and features omitted by experts), the author shows that some variations of fuzzy or rough sets identify core attributes better than others.
Specifically, the author finds that dominance-based rough set theory retains an excessively large number of variables, while fuzzy-rough set theory tends to eliminate too many; classical rough set theory seems to strike a happy middle ground. More experiments of this nature are critical to guide the automation of selection and other generalization processes.
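To make the stroke-generation step that both Liu and Li [1] and Han et al. [2] build upon concrete, the following Python sketch chains road segments into strokes whenever the deflection angle at a shared junction falls below a threshold. This is a minimal illustration of the general "good continuation" principle, not the authors' implementations: the greedy tail-only extension and the 45° default threshold are simplifying assumptions.

```python
import math

def deflection(a, b, c):
    """Deflection angle in degrees at node b when travelling a -> b -> c
    (0 means a perfectly straight continuation)."""
    ang1 = math.atan2(b[1] - a[1], b[0] - a[0])
    ang2 = math.atan2(c[1] - b[1], c[0] - b[0])
    d = abs(math.degrees(ang2 - ang1)) % 360
    return min(d, 360 - d)

def build_strokes(segments, angle_threshold=45.0):
    """Chain road segments into strokes by 'good continuation'.

    segments: list of ((x1, y1), (x2, y2)) endpoint pairs.
    Returns a list of strokes, each an ordered list of nodes.
    Each stroke is extended at its tail with the unused segment whose
    deflection at the junction is smallest, if below the threshold.
    """
    used = [False] * len(segments)
    strokes = []
    for i in range(len(segments)):
        if used[i]:
            continue
        used[i] = True
        stroke = [segments[i][0], segments[i][1]]
        while True:
            prev, tail = stroke[-2], stroke[-1]
            best, best_ang, best_next = None, angle_threshold, None
            for j in range(len(segments)):
                if used[j]:
                    continue
                # Try the candidate segment in both orientations.
                for p, q in (segments[j], segments[j][::-1]):
                    if p == tail:
                        ang = deflection(prev, tail, q)
                        if ang < best_ang:
                            best, best_ang, best_next = j, ang, q
            if best is None:
                break
            used[best] = True
            stroke.append(best_next)
        strokes.append(stroke)
    return strokes

# A straight two-segment road with a perpendicular branch: the two
# collinear segments merge into one stroke, the branch stays separate.
roads = [((0, 0), (1, 0)), ((1, 0), (2, 0)), ((1, 0), (1, 1))]
print(build_strokes(roads, angle_threshold=45.0))
```

The angle threshold directly controls how many strokes emerge from a network, which is precisely the parameter that Liu and Li's entropy measure is used to tune per city.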
A second theme of articles in this special issue on map generalization focuses on deep learning and artificially intelligent algorithms. Whereas research on frameworks for sequencing and parameterizing existing techniques promises to capture and formalize the intrinsic logic of cartographic experts, methods using neural networks and deep learning attempt to bypass much of this formalization, promising instead to replicate good cartography through pattern recognition. While such techniques are sometimes criticized as "black-box" approaches, Karsznia and Sielicka [4] demonstrate that pattern recognition and formalization are not mutually exclusive. Tackling the classic problem of settlement selection, they use genetic algorithms to determine variables that explain which settlements are selected and which are omitted in the General Geographic Object Database for several Polish districts. The result is more than a technique for replicating the selection process. It also builds a decision tree that defines the rules of selection. Thus, the black box leads to transparency: while we may not understand how the rules were derived, we can understand the rules themselves, and refine them if need be. Neural networks are also employed by Courtial et al. [5] to tackle the classic problem of line simplification in the context of sinuous mountain roads. Training a convolutional neural network on raster images of detailed and generalized roads from a national topographic database, they demonstrate that deep learning can replicate many difficult aspects of expert generalization, such as line smoothing, enlargement and displacement. Although issues of topological connectivity and creation of artifacts remain, their research demonstrates the potential of deep learning to solve some of the most challenging problems in cartographic generalization.
The last theme emerging from this special issue identifies new cartographic domains where research on generalization is just beginning or has been neglected. Sayidov et al. [6] examine constraint-based generalization of geologic maps, which stand out for their complex mosaics of areal partitions that vary in size, shape and clustering. They show that a combination of constraints on area, separation distances, and granularity can successfully guide elimination, enlargement, aggregation and displacement of geologic units. The paper indicates that explicit identification and handling of polygon clusters is an important area for future research. Mapping indoor spaces is a burgeoning activity in some urban areas where navigation of shopping malls, office complexes, and entertainment venues dominates the lived experience of many citizens. Generalization processes required for user-centric maps of indoor spaces are explored by Xiao et al. [7] through the development of a web-based application. Their work highlights the higher density of information that is needed on indoor maps compared to outdoor maps, the concomitant need to focus on POIs and simple geometric representations, and the advantages of 3D visualization and of adaptive maps that can be personalized to meet user needs. As indoor mapping and navigation apps continue to roll out at an unprecedented pace, cartographers need to join or lead the conversation about how to solve design problems in these contexts. And last, in a thoughtful survey that will interest any advanced cartography student, Raposo et al. [8] discuss generalization techniques employed in some of the most famous thematic maps.
Reviewing both classics, such as Beck's London Tube map and Minard's map of Napoleon's march on Russia, and newer innovations, such as Benjamin Hennig's election cartograms of the United Kingdom and Cameron Beccario's "Earth", they find that some generalization operations are particularly common and effective in thematic maps: namely, content reclassification and aggregation, merging, and simplification of geometries. They make a compelling argument that in order to remain relevant and influence how mapping is used in the modern era, cartographers should pay more attention to the generalization processes behind the plethora of compelling thematic, dynamic, and interactive content that increasingly captures public attention.
It is worthwhile to reflect on topics that receive less attention in this corpus of eight articles. Just as interpretation of a map is informed as much by what it omits as by what it depicts, so too must the current state of cartographic research be evaluated in part by what it is failing to adequately address. One observation is that assessment of generalization techniques remains challenging and difficult to reproduce. While several authors use maps or databases created by national mapping agencies as standards for assessment, the data they use differ from previous studies, and future studies will likely use different data as well. It would benefit the community to develop standard expert-generalized datasets that could be used by multiple authors, allowing direct comparison of the efficacy of different techniques and algorithms. With growing adherence to reproducible science practices, such reusable datasets and tools should become more readily available.
Another area that deserves greater attention is the effect of generalization on spatial analysis. There is an implicit assumption that scientific studies will universally use the highest available resolution data, but this assumption becomes untenable in an era of ultra-high spatial and temporal resolution data. If we want to empower people to contribute to the scientific enterprise of studying our world, we need to ensure that data are generalized for analysis as well as for aesthetics. Although standards for data production have been developed and are in wide use, formal methods to evaluate the impacts of generalization on higher-level geometric, topological, and semantic properties are still not widely available. This is an area where partnership between academic researchers and mapping agencies can result in tangible benefits to the broader community.
Finally, there continues to be a need for cognitive research and discourse to answer the most important questions about how, when and why generalization is effective. With modern mapping driven by readily available and easily usable maps from the likes of Google and Bing, valid questions are being posed about whether cartographic generalization is still relevant. These questions should be met with clear and compelling evidence for the value of generalization, as well as an honest assessment of its limitations. Are there some cases where the need for generalization has been driven more by production constraints than by perceptual needs? If so, surely there must be other examples where the value of high-quality generalized data and maps is greater than ever. Clearly elucidating when and how cartographic generalization supports cognitive processes, and acknowledging situations where it does not, would go a long way towards building a scientific foundation for vast map production projects that are ongoing.
Author Contributions: The three authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Conflicts of Interest: The authors declare no conflict of interest.
Disclaimer: Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government.