Search Results (15)

Search Parameters:
Keywords = hand-drawn sketch

18 pages, 11460 KiB  
Article
A Computational Sketch-Based Approach Towards Optimal Product Design Solutions
by Paschalis Charalampous
Appl. Sci. 2025, 15(5), 2413; https://doi.org/10.3390/app15052413 - 24 Feb 2025
Viewed by 756
Abstract
This paper presents a numerical sketch-based methodology to achieve optimal product design solutions, bridging the gap between initial conceptual sketches and advanced engineering analyses. The proposed approach transforms simple hand-drawn sketches into digital models suitable for complex computational simulations and design optimization. Using computer vision algorithms, sketches are processed to generate digital design components that serve as inputs for Finite Element Analysis (FEA). To further enhance the design process, topology optimization (TO) is also performed, iteratively refining the geometry to achieve optimal material distribution for improved structural performance. Additionally, Adaptive Mesh Refinement (AMR) techniques are applied to ensure computational efficiency and accuracy by dynamically refining the mesh in regions of high complexity or stress concentration. The synergy of sketch-based modeling, FEA, TO, and AMR demonstrates significant potential for reducing design cycles while maintaining high performance standards. Finally, the proposed pipeline is fully automated, so it can flatten the learning curve for designers, enabling companies to onboard employees faster and to integrate advanced design techniques into their workflows without extensive training. These modules make the introduced approach particularly suitable for product design development across several industries, such as mechanical engineering, manufacturing, and furniture.
(This article belongs to the Special Issue Smart Manufacturing and Materials II)
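The vision stage of such a pipeline is easy to make concrete. The following is a minimal, hypothetical sketch (not the authors' code) of extracting a mesh-ready boundary polygon from a scanned hand-drawn sketch with OpenCV; the file name and thresholds are illustrative assumptions.

```python
# Illustrative only: contour extraction of the kind the first stage
# describes. "sketch.png" is a hypothetical scanned hand-drawn input.
import cv2
import numpy as np

img = cv2.imread("sketch.png", cv2.IMREAD_GRAYSCALE)
# Binarize: hand-drawn strokes are dark on a light background.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# Close small gaps between pen strokes before tracing the outline.
kernel = np.ones((5, 5), np.uint8)
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Simplify the largest outline into a polygon a CAD/FEA mesher can accept.
outline = max(contours, key=cv2.contourArea)
polygon = cv2.approxPolyDP(outline, 0.01 * cv2.arcLength(outline, True), True)
print(f"Extracted boundary with {len(polygon)} vertices")
```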

20 pages, 21419 KiB  
Article
A New Approach to Detect Hand-Drawn Dashed Lines in Engineering Sketches
by Raquel Plumed, Manuel Contero, Ferran Naya and Pedro Company
Appl. Sci. 2024, 14(10), 4023; https://doi.org/10.3390/app14104023 - 9 May 2024
Viewed by 2220
Abstract
Sketched drawings sometimes include non-solid lines drawn as sets of consecutive strokes. They represent dashed lines, which are useful for various purposes. Recognizing such dashed lines while parsing drawings is reasonably straightforward if they are outlined with a ruler and compass, but becomes challenging when they are hand-drawn. The problem is manageable if the strokes are drawn consecutively, so we can leverage the entire sequence. However, it becomes more challenging if they are drawn unordered and/or we do not have access to the sequence (as in batch vectorization). In this paper, we describe a new approach for identifying groups of strokes that depict single hand-drawn dashed lines. The approach does not use sequence information and is tolerant of irregularities and imprecisions in the strokes. Our goal is to identify hidden lines in sketched engineering line-drawings, which would enable the interpretation of line-drawings with hidden edges, which currently cannot be efficiently vectorized. We speculate that other fields, such as hand-drawn graph interpretation, may also benefit from our approach.
(This article belongs to the Section Computing and Artificial Intelligence)
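To make the stroke-grouping idea concrete, here is a toy heuristic (not the authors' algorithm) that decides whether two unordered strokes could belong to the same hand-drawn dashed line, using only geometry and no sequence information; the tolerances are invented for illustration.

```python
import numpy as np

def cross2(a, b):
    """z-component of the 2D cross product."""
    return a[0] * b[1] - a[1] * b[0]

def stroke_direction(points):
    """Fit a stroke (list of (x, y) points) with PCA; return centroid and unit direction."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

def same_dashed_line(s1, s2, angle_tol=0.15, dist_tol=5.0):
    """Heuristic: two strokes belong to one dashed line if their directions
    agree and each centroid lies near the other's supporting line."""
    c1, d1 = stroke_direction(s1)
    c2, d2 = stroke_direction(s2)
    if abs(cross2(d1, d2)) > angle_tol:          # sine of the angle between directions
        return False
    return abs(cross2(d1, c2 - c1)) < dist_tol   # perpendicular offset from the line

dash_a = [(0, 0.0), (4, 0.1), (8, -0.1)]
dash_b = [(15, 0.2), (19, 0.0), (23, 0.1)]
print(same_dashed_line(dash_a, dash_b))  # True: roughly collinear dashes
```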

22 pages, 7704 KiB  
Article
Zero-Shot Sketch-Based Remote-Sensing Image Retrieval Based on Multi-Level and Attention-Guided Tokenization
by Bo Yang, Chen Wang, Xiaoshuang Ma, Beiping Song, Zhuang Liu and Fangde Sun
Remote Sens. 2024, 16(10), 1653; https://doi.org/10.3390/rs16101653 - 7 May 2024
Cited by 2 | Viewed by 2036
Abstract
Effectively and efficiently retrieving images from remote-sensing databases is a critical challenge in the realm of remote-sensing big data. Utilizing hand-drawn sketches as retrieval inputs offers intuitive and user-friendly advantages, yet the potential of multi-level feature integration from sketches remains underexplored, leading to suboptimal retrieval performance. To address this gap, our study introduces a novel zero-shot, sketch-based retrieval method for remote-sensing images, leveraging multi-level feature extraction, self-attention-guided tokenization and filtering, and cross-modality attention update. This approach employs only vision information and does not require semantic knowledge concerning the sketch and image. It starts by employing multi-level self-attention guided feature extraction to tokenize the query sketches, as well as self-attention feature extraction to tokenize the candidate images. It then employs cross-attention mechanisms to establish token correspondence between these two modalities, facilitating the computation of sketch-to-image similarity. Our method significantly outperforms existing sketch-based remote-sensing image retrieval techniques, as evidenced by tests on multiple datasets. Notably, it also exhibits robust zero-shot learning capabilities in handling unseen categories and strong domain adaptation capabilities in handling unseen novel remote-sensing data. The method's scalability can be further enhanced by the pre-calculation of retrieval tokens for all candidate images in a database. This research underscores the significant potential of multi-level, attention-guided tokenization in cross-modal remote-sensing image retrieval. For broader accessibility and research facilitation, we have made the code and dataset used in this study publicly available online.
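The token-correspondence step can be illustrated in a few lines of PyTorch. This is a hedged sketch of a generic cross-attention similarity between sketch and image tokens, in the spirit of the method described above; the shapes and the scoring rule are assumptions, not the paper's released code.

```python
import torch
import torch.nn.functional as F

def cross_attention_similarity(sketch_tokens, image_tokens):
    """sketch_tokens: (S, D); image_tokens: (I, D). Each sketch token
    attends over the image tokens; similarity is the mean cosine agreement
    between a token and its attention-weighted image summary."""
    d = sketch_tokens.shape[-1]
    attn = torch.softmax(sketch_tokens @ image_tokens.T / d ** 0.5, dim=-1)
    attended = attn @ image_tokens                       # (S, D)
    return F.cosine_similarity(sketch_tokens, attended, dim=-1).mean()

sketch = torch.randn(16, 256)   # 16 query-sketch tokens (illustrative)
image = torch.randn(49, 256)    # 49 candidate-image tokens (7x7 grid)
print(cross_attention_similarity(sketch, image).item())
```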

22 pages, 38737 KiB  
Article
A Computer Vision Framework for Structural Analysis of Hand-Drawn Engineering Sketches
by Isaac Joffe, Yuchen Qian, Mohammad Talebi-Kalaleh and Qipei Mei
Sensors 2024, 24(9), 2923; https://doi.org/10.3390/s24092923 - 3 May 2024
Viewed by 2734
Abstract
Structural engineers are often required to draw two-dimensional engineering sketches for quick structural analysis, either by hand calculation or using analysis software. However, calculation by hand is slow and error-prone, and the manual conversion of a hand-drawn sketch into a virtual model is tedious and time-consuming. This paper presents a complete and autonomous framework for converting a hand-drawn engineering sketch into an analyzed structural model using a camera and computer vision. In this framework, a computer vision object detection stage initially extracts information about the raw features in the image of the beam diagram. Next, a computer vision number-reading model transcribes any handwritten numerals appearing in the image. Then, feature association models are applied to characterize the relationships among the detected features in order to build a comprehensive structural model. Finally, the structural model generated is analyzed using OpenSees. In the system presented, the object detection model achieves a mean average precision of 99.1%, the number-reading model achieves an accuracy of 99.0%, and the models in the feature association stage achieve accuracies ranging from 95.1% to 99.5%. Overall, the tool analyzes 45.0% of images entirely correctly and the remaining 55.0% partially correctly. The proposed framework holds promise for other types of structural sketches, such as trusses and frames. Moreover, it can be a valuable tool for structural engineers, capable of improving the efficiency, safety, and sustainability of future construction projects.
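As a toy illustration of the feature-association stage, the snippet below reduces it to matching each transcribed number to the nearest detected feature by centre distance; the detector and number-reader outputs are hypothetical, and the paper's actual association models are learned, not rule-based.

```python
import math

features = [  # hypothetical detector output: (label, cx, cy)
    ("point_load", 120, 40),
    ("support_pin", 20, 100),
    ("support_roller", 220, 100),
]
numbers = [  # hypothetical number-reader output: (value, cx, cy)
    (10.0, 125, 25),   # likely the load magnitude
    (4.0, 25, 130),    # likely a dimension near the pin support
]

def nearest_feature(num_xy, feats):
    """Greedy association: pick the feature whose centre is closest."""
    return min(feats, key=lambda f: math.dist(num_xy, (f[1], f[2])))

for value, x, y in numbers:
    label, _, _ = nearest_feature((x, y), features)
    print(f"{value} associated with {label}")
```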

16 pages, 3196 KiB  
Article
Enhancing Urban Landscape Design: A GAN-Based Approach for Rapid Color Rendering of Park Sketches
by Ran Chen, Jing Zhao, Xueqi Yao, Yueheng He, Yuting Li, Zeke Lian, Zhengqi Han, Xingjian Yi and Haoran Li
Land 2024, 13(2), 254; https://doi.org/10.3390/land13020254 - 18 Feb 2024
Cited by 11 | Viewed by 3791
Abstract
In urban ecological development, the effective planning and design of living spaces are crucial. Traditional color plan rendering methods, mainly using generative adversarial networks (GANs), rely heavily on edge extraction. This often leads to the loss of important details from hand-drawn drafts, significantly affecting the portrayal of the designer's key concepts. This issue is especially critical in complex park planning. To address this, our study introduces a system based on conditional GANs that rapidly converts black-and-white park sketches into comprehensive color designs. We also employ a data augmentation strategy to enhance the quality of the output. The research reveals that: (1) our model efficiently produces designs suitable for industrial applications; (2) the GAN-based data augmentation increases the data volume, leading to improved rendering effects; and (3) our approach of rendering directly from sketches offers a novel method in urban planning and design. This study aims to enhance the rendering stage of an intelligent workflow for landscape design. More efficient rendering techniques shorten the iteration time of early design solutions and accelerate designers' iterative thinking, improving the speed and efficiency of the whole design process.
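A generic conditional-GAN generator objective (pix2pix-style) makes the sketch-to-rendering setup concrete; this is only a standard formulation, and the paper's actual architecture and loss weights may differ.

```python
import torch
import torch.nn.functional as F

def generator_loss(disc_fake_logits, fake_rgb, real_rgb, l1_weight=100.0):
    """Adversarial term pushes renders to look real to the discriminator;
    the L1 term keeps them faithful to the ground-truth colour plan."""
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    recon = F.l1_loss(fake_rgb, real_rgb)
    return adv + l1_weight * recon

logits = torch.randn(4, 1, 30, 30)    # PatchGAN-style discriminator outputs
fake = torch.rand(4, 3, 256, 256)     # generated colour renders
real = torch.rand(4, 3, 256, 256)     # ground-truth renders
print(generator_loss(logits, fake, real).item())
```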

20 pages, 14749 KiB  
Article
FBANet: Transfer Learning for Depression Recognition Using a Feature-Enhanced Bi-Level Attention Network
by Huayi Wang, Jie Zhang, Yaocheng Huang and Bo Cai
Entropy 2023, 25(9), 1350; https://doi.org/10.3390/e25091350 - 17 Sep 2023
Cited by 3 | Viewed by 2277
Abstract
The House-Tree-Person (HTP) sketch test is a psychological analysis technique designed to assess the mental health status of test subjects. Mature methods now exist for recognizing depression from the HTP sketch test. However, existing works primarily rely on manual analysis of drawing features, which suffers from strong subjectivity and low automation. Only a small number of works automatically recognize depression using machine learning and deep learning methods, and their complex data preprocessing pipelines and multi-stage computational processes indicate a relatively low level of automation. To overcome these issues, we present a novel deep learning-based one-stage approach for depression recognition in HTP sketches, which combines a simple data preprocessing pipeline and calculation process with a high accuracy rate. In terms of data, we use a hand-drawn HTP sketch dataset containing drawings of healthy individuals and patients with depression. On the model side, we design a novel network called the Feature-Enhanced Bi-Level Attention Network (FBANet), which contains feature enhancement and bi-level attention modules. Due to the limited size of the collected data, transfer learning is employed: the model is pre-trained on a large-scale sketch dataset and fine-tuned on the HTP sketch dataset. Using cross-validation on the HTP sketch dataset, FBANet achieves a maximum accuracy of 99.07% on the validation data, with an average accuracy of 97.71%, outperforming traditional classification models and previous works. In summary, the proposed FBANet, after pre-training, demonstrates superior performance on the HTP sketch dataset and is expected to serve as an auxiliary method for diagnosing depression.
(This article belongs to the Special Issue Entropy: The Cornerstone of Machine Learning)
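The pre-train/fine-tune recipe is standard and easy to sketch. Below is a generic PyTorch version using a ResNet-18 with ImageNet weights as a stand-in for FBANet and its sketch-corpus pre-training; it only illustrates freezing a backbone and fine-tuning a new two-class head.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet weights stand in for the paper's large-scale sketch pre-training.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():            # freeze the pre-trained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)   # depression vs. control

# Only the new head is optimized during fine-tuning on the HTP sketches.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 2])
```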

18 pages, 5154 KiB  
Article
Conditional Generative Adversarial Networks with Total Variation and Color Correction for Generating Indonesian Face Photo from Sketch
by Mia Rizkinia, Nathaniel Faustine and Masahiro Okuda
Appl. Sci. 2022, 12(19), 10006; https://doi.org/10.3390/app121910006 - 5 Oct 2022
Cited by 8 | Viewed by 3820
Abstract
Historically, hand-drawn face sketches have been commonly used by Indonesia's police force, especially to quickly describe a person's facial features in searching for fugitives based on eyewitness testimony. Several studies have been performed, aiming to increase the effectiveness of the method, such as comparing the facial sketch with the all-points bulletin (DPO in Indonesian terminology) or generating a facial composite. However, making facial composites using an application takes quite a long time. Moreover, when these composites are directly compared to the DPO, the accuracy is insufficient, and thus, the technique requires further development. This study applies a conditional generative adversarial network (cGAN) to convert a face sketch image into a color face photo, with an additional Total Variation (TV) term in the loss function to improve the visual quality of the resulting image. Furthermore, we apply a color correction to adjust the resulting skin tone to be similar to that of the ground truth. The face image dataset was collected from various sources matching Indonesian skin tones and facial features. We aim to provide a method for Indonesian face sketch-to-photo generation that visualizes facial features more accurately than the conventional method. This approach produces visually realistic photos from face sketches, with true skin tones.
(This article belongs to the Special Issue Recent Advances in Deep Learning for Image Analysis)
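The Total Variation term is a standard regularizer; the snippet below shows a common anisotropic form in PyTorch. How the paper weights it against the cGAN loss is not reproduced here.

```python
import torch

def tv_loss(img):
    """Sum of absolute differences between neighbouring pixels; penalizing
    it smooths colour noise in the generated face photo."""
    dh = (img[..., 1:, :] - img[..., :-1, :]).abs().sum()
    dw = (img[..., :, 1:] - img[..., :, :-1]).abs().sum()
    return dh + dw

fake_photo = torch.rand(1, 3, 128, 128)
print(tv_loss(fake_photo).item())
```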

18 pages, 2852 KiB  
Article
Exploring the Spatial Image of Traditional Villages from the Tourists’ Hand-Drawn Sketches
by Zuoming Jiang and Yang Sun
Sustainability 2022, 14(10), 5977; https://doi.org/10.3390/su14105977 - 14 May 2022
Cited by 9 | Viewed by 2548
Abstract
As an important concept in cognitive psychology and behavioural geography, destination spatial image cognition has a significant impact on the quality of tourists' experience and on their behavioural intention. However, studies of spatial image cognition in small-scale traditional villages are limited. Therefore, the present study analyses the spatial image characteristics of four traditional villages at World Cultural Heritage sites in China through tourists' hand-drawn sketches, using a sample of 366 respondents to further explore how the types and constituent elements of cognitive maps evolve with the number of days tourists stay. Results indicate that the spatial cognitive map and landmarks are the main type and dominant element of spatial image cognition, respectively. The tourists' spatial cognitive process includes two sequences: the evolution sequence of dominant cognitive maps is "spatial + individual → spatial + individual + hybrid → spatial + individual", while the evolution sequence of dominant cognition elements is "landmark + path + animal and plant → landmark + animal and plant + path". This study extends the destination spatial image cognition literature and has substantial value for destinations developing sustainable tourism in traditional villages based on tourists' attitudes, as obtained by the cognitive map method.

19 pages, 3117 KiB  
Article
DrawnNet: Offline Hand-Drawn Diagram Recognition Based on Keypoint Prediction of Aggregating Geometric Characteristics
by Jiaqi Fang, Zhen Feng and Bo Cai
Entropy 2022, 24(3), 425; https://doi.org/10.3390/e24030425 - 19 Mar 2022
Cited by 14 | Viewed by 9011
Abstract
Offline hand-drawn diagram recognition is concerned with digitizing diagrams sketched on paper or a whiteboard to enable further editing. Some existing models can identify individual objects such as arrows and symbols, but they are unable to understand a diagram's structure. This shortcoming hinders the digitization or reconstruction of a diagram from its hand-drawn version. Other methods can accomplish this goal, but they rely on temporal stroke information and time-consuming post-processing, which limits their practicability. Recently, Convolutional Neural Networks (CNNs) have been shown to achieve state-of-the-art performance across many visual tasks. In this paper, we propose DrawnNet, a unified CNN-based keypoint detector, for recognizing individual symbols and understanding the structure of offline hand-drawn diagrams. DrawnNet is built upon CornerNet with two extensions: novel keypoint pooling modules, which extract and aggregate the geometric characteristics of polygonal contours (such as rectangles, squares, and diamonds) within hand-drawn diagrams, and an arrow orientation prediction branch, which predicts the direction an arrow points by predicting arrow keypoints. We conducted extensive experiments on public diagram benchmarks to evaluate the proposed method. Results show that DrawnNet achieves recognition rate improvements of 2.4%, 2.3%, and 1.7% over the state-of-the-art methods on the FC-A, FC-B, and FA benchmarks, respectively, outperforming existing diagram recognition systems on each metric. An ablation study reveals that the proposed method effectively enables hand-drawn diagram recognition.
(This article belongs to the Topic Machine and Deep Learning)
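DrawnNet builds on CornerNet, whose corner pooling is simple to state. The following is a plain-PyTorch rendering of standard top-left corner pooling; the paper's novel pooling variants for polygonal contours are not reproduced.

```python
import torch

def top_left_corner_pool(feat):
    """feat: (B, C, H, W). For each location, take the max over everything
    to its right and everything below it, so corner evidence accumulates
    at the top-left of a shape."""
    right = torch.flip(torch.cummax(torch.flip(feat, [-1]), dim=-1).values, [-1])
    below = torch.flip(torch.cummax(torch.flip(feat, [-2]), dim=-2).values, [-2])
    return right + below

x = torch.randn(1, 8, 32, 32)
print(top_left_corner_pool(x).shape)  # torch.Size([1, 8, 32, 32])
```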

25 pages, 6995 KiB  
Article
The Drawing and Perception of Architectural Spaces through Immersive Virtual Reality
by Hugo C. Gómez-Tone, John Bustamante Escapa, Paola Bustamante Escapa and Jorge Martin-Gutierrez
Sustainability 2021, 13(11), 6223; https://doi.org/10.3390/su13116223 - 1 Jun 2021
Cited by 30 | Viewed by 6748
Abstract
The technologies that have sought to intervene in the architectural drawing process have focused on the sense of sight, leaving aside the use of the hands and the entire body, which together achieve more sensory designs. Nowadays, to the benefit of the draftsman, that ideal scenario in which sight, hands, and body work holistically is returning thanks to Immersive Virtual Reality (IVR). The purpose of this research is to analyze the perception of two-dimensionally drawn spaces, the drawing of such spaces through three-dimensional sketches in IVR, and the perception of spaces both sketched in 3D and modeled realistically in IVR. First- and fifth-year architecture students went through the four phases of the experiment: (a) the perception of a space based on 2D sketches, (b) real-scale 3D space drawing in IVR, (c) the perception of a space drawn in 3D in IVR, and (d) the perception of the same space realistically modeled in 3D in IVR. The data were obtained through three questionnaires and a grading sheet. The perception of two-dimensionally drawn spaces was high (70.8%), while the precision of a space drawn in IVR was even higher (83.9%). The real or natural scale at which spaces can be experienced in IVR was the characteristic most recognized by the students; however, this and the other qualities did not support a reliable conclusion of a homogeneous perception of sensations within the virtual spaces.
(This article belongs to the Special Issue Visual Technologies for Sustainable Digital Environments)

28 pages, 1401 KiB  
Article
An Efficient 3D Human Pose Retrieval and Reconstruction from 2D Image-Based Landmarks
by Hashim Yasin and Björn Krüger
Sensors 2021, 21(7), 2415; https://doi.org/10.3390/s21072415 - 1 Apr 2021
Cited by 6 | Viewed by 5400
Abstract
We propose an efficient and novel architecture for 3D articulated human pose retrieval and reconstruction from 2D landmarks extracted from a 2D synthetic image, an annotated 2D image, an in-the-wild real RGB image, or even a hand-drawn sketch. Given 2D joint positions in a single image, we devise a data-driven framework to infer the corresponding 3D human pose. To this end, we first normalize 3D human poses from a Motion Capture (MoCap) dataset by eliminating translation, orientation, and skeleton-size discrepancies, and then build a knowledge base by projecting a subset of joints of the normalized 3D poses onto 2D image planes, fully exploiting a variety of virtual cameras. With this approach, we not only transform the 3D pose space into a normalized 2D pose space but also resolve the 2D-3D cross-domain retrieval task efficiently. The proposed architecture searches for poses in the MoCap dataset that are near a given 2D query pose in a feature space made up of specific joint sets. The retrieved poses are then used to construct a weak-perspective camera and a final 3D posture under the camera model that minimizes the reconstruction error. To estimate the unknown camera parameters, we introduce a nonlinear, two-fold method that exploits the retrieved similar poses and the viewing directions at which the MoCap dataset was sampled to minimize the projection error. Finally, we evaluate our approach thoroughly on a large number of heterogeneous 2D examples generated synthetically, 2D images with ground truth, a variety of real in-the-wild internet images, and a proof of concept using 2D hand-drawn sketches of human poses. We conduct a pool of experiments to perform a quantitative study on the PARSE dataset and show that the proposed system yields competitive, convincing results in comparison to other state-of-the-art methods.
(This article belongs to the Special Issue Sensors for Posture and Human Motion Recognition)
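The retrieval core (normalize the 2D poses, then search for nearest neighbours in the knowledge base) can be sketched compactly. This stand-in uses random data in place of projected MoCap poses and scikit-learn's k-NN; it is not the authors' system.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def normalize_pose(joints):
    """joints: (J, 2). Remove translation and overall scale so poses from
    different cameras and skeletons become comparable."""
    j = np.asarray(joints, dtype=float)
    j -= j.mean(axis=0)
    return (j / (np.linalg.norm(j) + 1e-8)).ravel()

rng = np.random.default_rng(0)
knowledge_base = rng.normal(size=(1000, 15, 2))      # stand-in for projected MoCap poses
kb = np.stack([normalize_pose(p) for p in knowledge_base])

knn = NearestNeighbors(n_neighbors=5).fit(kb)
query = normalize_pose(rng.normal(size=(15, 2)))     # e.g., landmarks from a sketch
dists, idx = knn.kneighbors(query[None])
print(idx[0])   # indices of candidate poses for 3D reconstruction
```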

30 pages, 4677 KiB  
Article
Multi-Domain Recognition of Hand-Drawn Diagrams Using Hierarchical Parsing
by Vincenzo Deufemia and Michele Risi
Multimodal Technol. Interact. 2020, 4(3), 52; https://doi.org/10.3390/mti4030052 - 14 Aug 2020
Cited by 3 | Viewed by 5539
Abstract
This paper presents an approach for the recognition of multi-domain hand-drawn diagrams, which exploits Sketch Grammars (SkGs) to model both the symbols' shapes and the abstract syntax of diagrammatic notations. The recognition systems automatically generated from SkGs process the input sketches in the following phases: the user's strokes are first segmented and interpreted as primitive shapes; then, by exploiting the domain context, they are clustered into symbols of the domain; and, finally, an interpretation of the whole diagram is given. The main contribution of this paper is an efficient parsing model suitable for both interactive and non-interactive sketch-based interfaces, configurable to different domains, and able to exploit contextual information to improve recognition accuracy and resolve interpretation ambiguities. The proposed approach was evaluated in the domain of UML class diagrams, obtaining good results in terms of recognition accuracy and usability.
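As a toy illustration of the hierarchical idea, the snippet below clusters recognized primitive shapes into domain symbols by multiset rules; real SkG productions also carry geometric and contextual constraints, which are omitted here, and the rules shown are invented.

```python
from collections import Counter

# Hypothetical rules for a UML-class-diagram domain: each symbol is
# defined by the multiset of primitives that composes it.
RULES = {
    "class_box": Counter({"rectangle": 1, "line": 2}),   # name/attr/method bands
    "association": Counter({"line": 1}),
}

def parse_symbols(primitives):
    """Greedily reduce recognized primitives into domain symbols."""
    pool = Counter(primitives)
    symbols = []
    for name, needed in RULES.items():
        while all(pool[p] >= n for p, n in needed.items()):
            pool -= needed
            symbols.append(name)
    return symbols, pool

print(parse_symbols(["rectangle", "line", "line", "line"]))
# (['class_box', 'association'], Counter())
```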

13 pages, 5140 KiB  
Article
A Novel Sketch-Based Three-Dimensional Shape Retrieval Method Using Multi-View Convolutional Neural Network
by Dianhui Mao and Zhihao Hao
Symmetry 2019, 11(5), 703; https://doi.org/10.3390/sym11050703 - 23 May 2019
Cited by 10 | Viewed by 3683
Abstract
Retrieving 3D models using hand-drawn sketches as input has become a popular research topic. Most current methods are based on manually selected features and the best view produced for 3D model calculations. However, these methods suffer from problems such as distortion. To deal with such issues, this paper proposes a novel feature representation method to select the projection view and adapts the maxout network to an extended Siamese network architecture. In addition, the strategy handles the over-fitting issue of convolutional neural networks (CNNs) and mitigates the discrepancies between the 3D shape domain and the sketch domain. A pre-trained AlexNet was used to extract sketch features. For 3D shapes, multiple 2D views were compiled into compact feature vectors using pre-trained multi-view CNNs. Siamese convolutional neural networks were then trained to transform the original features of the two domains into a nonlinear feature space, which mitigated the domain discrepancy while preserving discrimination. Two large datasets were used for the experiments, and the results show that the method is superior to prior methods in accuracy.
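A generic contrastive objective for a two-branch (Siamese) setup illustrates how sketch and shape features are pulled into a shared space; the paper's exact loss and maxout layers are not reproduced, and all dimensions are illustrative.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(sketch_feat, shape_feat, same_class, margin=1.0):
    """Matched pairs are pulled together; mismatched pairs are pushed
    apart until their distance exceeds the margin."""
    d = F.pairwise_distance(sketch_feat, shape_feat)
    pos = same_class * d.pow(2)
    neg = (1 - same_class) * F.relu(margin - d).pow(2)
    return (pos + neg).mean()

s = torch.randn(8, 128)                 # sketch-branch embeddings
v = torch.randn(8, 128)                 # multi-view shape embeddings
y = torch.randint(0, 2, (8,)).float()   # 1 = same category, 0 = different
print(contrastive_loss(s, v, y).item())
```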

17 pages, 13278 KiB  
Article
Using the Spatial Knowledge of Map Users to Personalize City Maps: A Case Study with Tourists in Madrid, Spain
by María-Teresa Manrique-Sancho, Silvania Avelar, Teresa Iturrioz-Aguirre and Miguel-Ángel Manso-Callejo
ISPRS Int. J. Geo-Inf. 2018, 7(8), 332; https://doi.org/10.3390/ijgi7080332 - 20 Aug 2018
Cited by 16 | Viewed by 5138
Abstract
The aim of personalized maps is to help individual users read maps and focus on the most task-relevant information. Several approaches have been suggested to develop personalized maps for cities, but few consider the spatial knowledge of their users. We propose the design of "cognitively-aware" personalized maps, which take into account users' previous experience of the city and how the urban space is configured in their minds. Our aim is to facilitate users' mental links between maps and city places, stimulating users to recall features of the urban space and to assimilate new spatial knowledge. To achieve this goal, we propose personalizing maps through a map design process based on user modeling and on inferring personalization guidelines from hand-drawn sketches of urban spaces. We applied this process in an experiment with tourists in Madrid, Spain. We categorized the participants into three types of tourists, "Guided", "Explorer", and "Conditioned", according to individual and contextual factors that can influence their spatial knowledge of the city. We also extracted design guidelines from tourists' sketches and developed map prototypes. The empirical results seem promising for developing personalized city maps that could be produced on-the-fly in the future.

20 pages, 4498 KiB  
Article
Digital Sketch Maps and Eye Tracking Statistics as Instruments to Obtain Insights Into Spatial Cognition
by Merve Keskin, Kristien Ooms, Ahmet Ozgur Dogru and Philippe De Maeyer
J. Eye Mov. Res. 2018, 11(3), 1-20; https://doi.org/10.16910/jemr.11.3.4 - 15 Jun 2018
Cited by 16 | Viewed by 112
Abstract
This paper explores map users' cognitive processes in learning, acquiring, and remembering information presented via screen maps. In this context, we conducted a mixed-methods user experiment employing digital sketch maps and eye tracking. On the one hand, the performance of the participants was assessed based on the order in which the objects were drawn and the influence of visual variables (e.g., presence and location, size, shape, color). On the other hand, trial durations and eye tracking statistics, such as the average duration of fixations and the number of fixations per second, were compared. Moreover, selected AoIs (Areas of Interest) were explored to gain deeper insight into the visual behavior of map users. Depending on the normality of the data, we used either a two-way ANOVA or the Mann-Whitney U test to inspect the significance of the results. Based on the evaluation of drawing order, we observed that experts and males drew roads first, whereas novices and females focused more on hydrographic objects. According to the assessment of the drawn elements, no significant differences emerged between experts and novices, or between females and males, in the retrieval of spatial information presented on 2D maps with a simple design and content. The differences in trial durations between novices and experts were not statistically significant for either studying or drawing. Similarly, no significant difference occurred between female and male participants for either studying or drawing. Eye tracking metrics supported these findings: no significant difference was found between experts and novices, or between females and males, in the average duration of fixations, and likewise none in the mean number of fixations.
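The normality-dependent choice of test can be shown with SciPy. This sketch simplifies to a single factor (a t-test in place of the two-way ANOVA) and uses synthetic fixation durations, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
experts = rng.normal(220, 40, size=30)   # synthetic avg fixation durations (ms)
novices = rng.normal(240, 45, size=30)

# Shapiro-Wilk checks normality; the result picks the comparison test.
if stats.shapiro(experts).pvalue > 0.05 and stats.shapiro(novices).pvalue > 0.05:
    stat, p = stats.ttest_ind(experts, novices)      # normal: parametric test
else:
    stat, p = stats.mannwhitneyu(experts, novices)   # otherwise: non-parametric
print(f"p = {p:.3f}")
```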