Search Results (14)

Search Parameters:
Keywords = photo to sketch

29 pages, 7485 KiB  
Article
SKVOS: Sketch-Based Video Object Segmentation with a Large-Scale Benchmark
by Ruolin Yang, Da Li, Conghui Hu and Honggang Zhang
Appl. Sci. 2025, 15(4), 1751; https://doi.org/10.3390/app15041751 - 9 Feb 2025
Viewed by 1039
Abstract
In this paper, we propose sketch-based video object segmentation (SKVOS), a novel task that segments objects consistently across video frames using human-drawn sketches as queries. Traditional reference-based methods, such as photo masks and language descriptions, are commonly used for segmentation. Photo masks provide high precision but are labor intensive, limiting scalability. While language descriptions are easy to provide, they often lack the specificity needed to distinguish visually similar objects within a frame. Despite their simplicity, sketches capture rich, fine-grained details of target objects and can be rapidly created, even by non-experts, making them an attractive alternative for segmentation tasks. We introduce a new approach that utilizes sketches as efficient and informative references for video object segmentation. To evaluate sketch-guided segmentation, we introduce a new benchmark consisting of three datasets: Sketch-DAVIS16, Sketch-DAVIS17, and Sketch-YouTube-VOS. Building on a memory-based framework for semi-supervised video object segmentation, we explore effective strategies for integrating sketch-based references. To ensure robust spatiotemporal coherence, we introduce two key innovations: the Temporal Relation Module and Sketch-Anchored Contrastive Learning. These modules enhance the model’s ability to maintain consistency both across time and across different object instances. Our method is evaluated on the Sketch-VOS benchmark, demonstrating superior performance with overall improvements of 1.9%, 3.3%, and 2.0% over state-of-the-art methods on the Sketch-YouTube-VOS, Sketch-DAVIS 2016, and Sketch-DAVIS 2017 validation sets, respectively. Additionally, on the YouTube-VOS validation set, our method outperforms the leading language-based VOS approach by 10.1%. Full article
(This article belongs to the Special Issue Advances in Computer Vision and Semantic Segmentation, 2nd Edition)
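The sketch-anchored contrastive learning mentioned in the abstract can be illustrated with a minimal InfoNCE-style loss in which the sketch embedding acts as the anchor, the target object's embedding as the positive, and other object instances as negatives. This is an illustrative sketch under those assumptions, not the paper's actual implementation:

```python
import numpy as np

def sketch_anchored_contrastive_loss(sketch_emb, pos_emb, neg_embs, tau=0.07):
    """InfoNCE-style loss: pull the target object's embedding toward its
    sketch anchor and push other object instances away.
    sketch_emb, pos_emb: (d,) vectors; neg_embs: (k, d) matrix."""
    def normalize(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    s, p, n = normalize(sketch_emb), normalize(pos_emb), normalize(neg_embs)
    pos_sim = np.dot(s, p) / tau          # similarity to the positive
    neg_sim = n @ s / tau                  # similarities to the negatives
    logits = np.concatenate([[pos_sim], neg_sim])
    # cross-entropy with the positive at index 0
    return -pos_sim + np.log(np.sum(np.exp(logits)))
```

The loss is small when the sketch and its target align and large when a distractor instance sits closer to the sketch than the target does.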

17 pages, 5563 KiB  
Article
Portrait Sketch Generative Model for Misaligned Photo-to-Sketch Dataset
by Hyungbum Kim, Junho Kim and Heekyung Yang
Mathematics 2023, 11(17), 3761; https://doi.org/10.3390/math11173761 - 1 Sep 2023
Cited by 1 | Viewed by 2089
Abstract
A deep-learning-based model for generating line-based portrait sketches from portrait photos is proposed in this paper. The misalignment problem is addressed by the introduction of a novel loss term, designed to tolerate misalignments between Ground Truth sketches and generated sketches. Artists’ sketching strategies are mimicked by dividing the portrait into face and hair regions, with separate models trained for each region, and the outcomes subsequently combined. Our contributions include the resolution of misalignment between photos and artist-created sketches, and high-quality sketch results via region-based model training. The experimental results show the effectiveness of our approach in generating convincing portrait sketches, with both quantitative and visual comparisons to State-of-the-Art techniques. The quantitative comparisons demonstrate that our method preserves the identity of the input portrait photos, while applying the style of Ground Truth sketch. Full article
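A misalignment-tolerant loss of the kind described can be sketched, for illustration only, as an L1 term that takes the minimum over small translations of the ground-truth sketch; this is a hypothetical simplification, not the paper's actual loss term:

```python
import numpy as np

def shift_tolerant_l1(pred, gt, max_shift=2):
    """L1 loss that takes the minimum over small translations of the
    ground-truth sketch, so slightly misaligned GT sketches are not
    over-penalized. pred, gt: 2-D arrays of equal shape."""
    best = np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(gt, dy, axis=0), dx, axis=1)
            best = min(best, np.mean(np.abs(pred - shifted)))
    return best
```

A prediction that matches the ground truth up to a one-pixel shift incurs zero loss here, whereas a plain L1 term would penalize it.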

12 pages, 1866 KiB  
Article
Backdoor Attack against Face Sketch Synthesis
by Shengchuan Zhang and Suhang Ye
Entropy 2023, 25(7), 974; https://doi.org/10.3390/e25070974 - 25 Jun 2023
Cited by 1 | Viewed by 1854
Abstract
Deep neural networks (DNNs) are easily exposed to backdoor threats when trained with poisoned samples. Backdoored models behave normally on benign samples but perform poorly on poisoned samples manipulated with pre-defined trigger patterns. Currently, research on backdoor attacks focuses on image classification and object detection. In this article, we investigate backdoor attacks in facial sketch synthesis, which can be beneficial for many applications, such as animation production and assisting police in searching for suspects. Specifically, we propose a simple yet effective poison-only backdoor attack suitable for generation tasks. We demonstrate that when the backdoor is integrated into the target model via our attack, it can mislead the model to synthesize unacceptable sketches of any photos stamped with the trigger patterns. Extensive experiments are executed on the benchmark datasets. The light strokes devised by our backdoor attack strategy significantly decrease the perceptual quality; nevertheless, the FSIM score of light strokes is 68.21% on the CUFS dataset, while the FSIM scores of pseudo-sketches generated by FCN, cGAN, and MDAL are 69.35%, 71.53%, and 72.75%, respectively. The small difference shows that the degradation is hardly reflected in the FSIM metric, which proves the effectiveness of the proposed backdoor attack method. Full article
(This article belongs to the Special Issue Trustworthy AI: Information Theoretic Perspectives)
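A poison-only attack of this kind works by stamping a trigger onto a fraction of the training photos and pairing them with degraded target sketches. A minimal illustration of the stamping step follows; note that the paper's actual trigger is light pencil strokes, not the solid corner patch used here:

```python
import numpy as np

def stamp_trigger(photo, trigger):
    """Stamp a small trigger patch into the bottom-right corner of a photo
    (pixel values in [0, 1]). At training time, stamped photos are paired
    with degraded sketches; at test time, any stamped photo triggers the
    degraded output while clean photos are synthesized normally."""
    out = photo.copy()           # leave the original photo untouched
    th, tw = trigger.shape
    out[-th:, -tw:] = trigger
    return out
```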

22 pages, 5579 KiB  
Article
Experiencing Temporary Home Design for Young Urban Dwellers: “We Can’t Put Anything on the Wall”
by Marjolein Euwkje Overtoom, Marja G. Elsinga and Philomena M. Bluyssen
Buildings 2023, 13(5), 1318; https://doi.org/10.3390/buildings13051318 - 18 May 2023
Viewed by 2274
Abstract
A significant number of young people live in temporary homes, which are designed to fulfil basic needs and provide space for normal activities. However, it is unclear what those basic activities are. Moreover, the indoor environmental quality is often left out of the meaning of home, although activities and objects can affect its experienced quality. We therefore verbally and visually explored how young temporary dwellers appropriate and experience their homes, including the indoor environmental quality. Fourteen young adults took part in semi-structured interviews and photographed their most used as well as their favourite place. The interviews were transcribed and analysed following an interpretative phenomenological analysis. The experiences of appropriation in the home were connected to the physical environment through an analysis of the photos and floor plans (sketched by the researcher) using an architectural analysis from the user perspective. The outcome showed that the young adults appropriated their home in three ways: by familiarising the place with objects and “normal” activities, organising where things are and when they happen, and managing the indoor environmental quality through activities and objects. It is concluded that qualitative and visual analyses can assist with making recommendations to improve the design of temporary housing. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)

17 pages, 6183 KiB  
Article
Feature Fusion and Metric Learning Network for Zero-Shot Sketch-Based Image Retrieval
by Honggang Zhao, Mingyue Liu and Mingyong Li
Entropy 2023, 25(3), 502; https://doi.org/10.3390/e25030502 - 14 Mar 2023
Cited by 5 | Viewed by 2331
Abstract
Zero-shot sketch-based image retrieval (ZS-SBIR) is an important computer vision problem. The image category in the test phase is a new category that was not visible in the training stage. Because sketches are extremely abstract, the commonly used backbone networks (such as VGG-16 and ResNet-50) cannot handle both sketches and photos. Semantic similarities between the same features in photos and sketches are difficult to reflect in deep models without textual assistance. To solve this problem, we propose a novel and effective feature embedding model called Attention Map Feature Fusion (AMFF). The AMFF model combines the excellent feature extraction capability of the ResNet-50 network with the excellent representation ability of the attention network. By processing the residuals of the ResNet-50 network, the attention map is finally obtained without introducing external semantic knowledge. Most previous approaches treat the ZS-SBIR problem as a classification problem, which ignores the huge domain gap between sketches and photos. This paper proposes an effective method to optimize the entire network, called domain-aware triplets (DAT). Domain feature discrimination and semantic feature embedding can be learned through DAT. In this paper, we also use the classification loss function to stabilize the training process to avoid getting trapped in a local optimum. Compared with the state-of-the-art methods, our method shows a superior performance. For example, on the TU-Berlin dataset, we achieved 61.2 ± 1.2% Prec200. On the Sketchy_c100 dataset, we achieved 62.3 ± 3.3% mAPall and 75.5 ± 1.5% Prec100. Full article
(This article belongs to the Topic Machine and Deep Learning)
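The triplet structure that DAT builds on can be illustrated with a plain margin-based triplet loss whose anchor is drawn from the sketch domain and whose positive and negative come from the photo domain. This is a simplified stand-in for the authors' domain-aware formulation, not their exact loss:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Margin triplet loss on embeddings. With sketch-domain anchors and
    photo-domain positives/negatives, minimizing it pulls matching
    sketch-photo pairs together and pushes mismatched pairs apart,
    shrinking the sketch-photo domain gap in the embedding space."""
    d_ap = np.linalg.norm(anchor - positive)   # anchor-positive distance
    d_an = np.linalg.norm(anchor - negative)   # anchor-negative distance
    return max(0.0, d_ap - d_an + margin)      # hinge at the margin
```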

18 pages, 5154 KiB  
Article
Conditional Generative Adversarial Networks with Total Variation and Color Correction for Generating Indonesian Face Photo from Sketch
by Mia Rizkinia, Nathaniel Faustine and Masahiro Okuda
Appl. Sci. 2022, 12(19), 10006; https://doi.org/10.3390/app121910006 - 5 Oct 2022
Cited by 8 | Viewed by 3815
Abstract
Historically, hand-drawn face sketches have been commonly used by Indonesia’s police force, especially to quickly describe a person’s facial features in searching for fugitives based on eyewitness testimony. Several studies have been performed, aiming to increase the effectiveness of the method, such as comparing the facial sketch with the all-points bulletin (DPO in Indonesian terminology) or generating a facial composite. However, making facial composites using an application takes quite a long time. Moreover, when these composites are directly compared to the DPO, the accuracy is insufficient, and thus, the technique requires further development. This study applies a conditional generative adversarial network (cGAN) to convert a face sketch image into a color face photo with an additional Total Variation (TV) term in the loss function to improve the visual quality of the resulting image. Furthermore, we apply a color correction to adjust the resulting skin tone similar to that of the ground truth. The face image dataset was collected from various sources matching Indonesian skin tone and facial features. We aim to provide a method for Indonesian face sketch-to-photo generation to visualize the facial features more accurately than the conventional method. This approach produces visually realistic photos from face sketches, as well as true skin tones. Full article
(This article belongs to the Special Issue Recent Advances in Deep Learning for Image Analysis)
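The Total Variation term added to the cGAN loss penalizes differences between neighboring pixels of the generated photo, suppressing high-frequency noise. A minimal anisotropic TV computation, shown here as a generic illustration rather than the paper's exact implementation:

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation of a 2-D image: the sum of absolute
    differences between vertically and horizontally adjacent pixels.
    Adding it to a generator loss encourages locally smooth output."""
    dv = np.abs(np.diff(img, axis=0)).sum()  # vertical neighbor differences
    dh = np.abs(np.diff(img, axis=1)).sum()  # horizontal neighbor differences
    return dv + dh
```

A constant image has zero TV; noisy or blocky images score high, which is why the term improves the visual quality of generated photos.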

16 pages, 2469 KiB  
Article
Exploiting an Intermediate Latent Space between Photo and Sketch for Face Photo-Sketch Recognition
by Seho Bae, Nizam Ud Din, Hyunkyu Park and Juneho Yi
Sensors 2022, 22(19), 7299; https://doi.org/10.3390/s22197299 - 26 Sep 2022
Cited by 4 | Viewed by 2417
Abstract
The photo-sketch matching problem is challenging because the modality gap between a photo and a sketch is very large. This work features a novel approach to the use of an intermediate latent space between the two modalities that circumvents the problem of modality gap for face photo-sketch recognition. To set up a stable homogeneous latent space between a photo and a sketch that is effective for matching, we utilize a bidirectional (photo → sketch and sketch → photo) collaborative synthesis network and equip the latent space with rich representation power. To provide rich representation power, we employ StyleGAN architectures, such as StyleGAN and StyleGAN2. The proposed latent space equipped with rich representation power enables us to conduct accurate matching because we can effectively align the distributions of the two modalities in the latent space. In addition, to resolve the problem of insufficient paired photo/sketch samples for training, we introduce a three-step training scheme. Extensive evaluation on a public composite face sketch database confirms superior performance of the proposed approach compared to existing state-of-the-art methods. The proposed methodology can be employed in matching other modality pairs. Full article
(This article belongs to the Special Issue Challenges in Energy Perspective on Mobile Sensor Networks)

15 pages, 4577 KiB  
Article
Multi-Level Cycle-Consistent Adversarial Networks with Attention Mechanism for Face Sketch-Photo Synthesis
by Danping Ren, Jiajun Yang and Zhongcheng Wei
Sensors 2022, 22(18), 6725; https://doi.org/10.3390/s22186725 - 6 Sep 2022
Cited by 2 | Viewed by 2238
Abstract
The synthesis between face sketches and face photos has important application values in law enforcement and digital entertainment. In cases of a lack of paired sketch-photo data, this paper proposes an unsupervised model to solve the problems of missing key facial details and a lack of realism in the synthesized images of existing methods. The model is built on the CycleGAN architecture. To retain more semantic information in the target domain, a multi-scale feature extraction module is inserted before the generator. In addition, the convolutional block attention module (CBAM) is introduced into the generator to enhance the ability of the model to extract important feature information. Via CBAM, the model improves the quality of the converted image and reduces the artifacts caused by image background interference. Next, in order to preserve more identity information in the generated photo, this paper constructs the multi-level cycle consistency loss function. Qualitative experiments on the CUFS and CUFSF public datasets show that the facial details and edge structures synthesized by our model are clearer and more realistic. Meanwhile, the performance indices of structural similarity and peak signal-to-noise ratio in quantitative experiments are also significantly improved compared with other methods. Full article
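One plausible reading of a multi-level cycle-consistency loss is an L1 reconstruction term evaluated at several image scales. The sketch below uses 2x average-pool downsampling purely for illustration; the paper's loss is defined on its own network levels:

```python
import numpy as np

def multilevel_cycle_loss(x, x_rec, levels=3):
    """Cycle-consistency L1 computed at several scales: compare an image
    and its reconstruction after a domain round-trip, downsampling both by
    2x at each level so coarse structure and fine detail are both kept.
    x and x_rec must be square with sides divisible by 2**levels."""
    loss = 0.0
    a, b = x.astype(float), x_rec.astype(float)
    for _ in range(levels):
        loss += np.mean(np.abs(a - b))
        # 2x average-pool downsample before the next level
        a = (a[::2, ::2] + a[1::2, ::2] + a[::2, 1::2] + a[1::2, 1::2]) / 4
        b = (b[::2, ::2] + b[1::2, ::2] + b[::2, 1::2] + b[1::2, 1::2]) / 4
    return loss / levels
```

Because average pooling is linear, a uniform pixel offset yields the same penalty at every level, while localized errors are weighted by how far up the scale pyramid they survive.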

21 pages, 11786 KiB  
Article
aRTIC GAN: A Recursive Text-Image-Conditioned GAN
by Edoardo Alati, Carlo Alberto Caracciolo, Marco Costa, Marta Sanzari, Paolo Russo and Irene Amerini
Electronics 2022, 11(11), 1737; https://doi.org/10.3390/electronics11111737 - 30 May 2022
Cited by 3 | Viewed by 3277
Abstract
Generative Adversarial Networks have recently demonstrated the capability to synthesize photo-realistic real-world images. However, they still struggle to offer high controllability of the output image, even if several constraints are provided as input. In this work, we present a Recursive Text-Image-Conditioned GAN (aRTIC GAN), a novel approach for multi-conditional image generation under concurrent spatial and text constraints. It employs a few line drawings and short descriptions to provide informative yet human-friendly conditioning. The proposed scenario is based on accessible constraints with high degrees of freedom: sketches are easy to draw and place strong restrictions on the generated objects, such as their orientation or main physical characteristics, while text is so common and expressive that it easily conveys information otherwise impossible to provide with minimal illustrations, such as the colors and shades of object components. Our aRTIC GAN is suitable for the sequential generation of multiple objects due to its compact design. In fact, the algorithm exploits the previously generated image in conjunction with the sketch and the text caption, resulting in a recurrent approach. We developed three network blocks to tackle the fundamental problems of capturing captions’ semantic meanings and of handling the trade-off between smoothing grid-pattern artifacts and preserving visual detail. Furthermore, a compact three-task discriminator (covering global, local and textual aspects) was developed to preserve a lightweight and robust architecture. Extensive experiments proved the validity of aRTIC GAN and show that the combined use of sketch and description allows us to avoid explicit object labeling. Full article
(This article belongs to the Collection Image and Video Analysis and Understanding)

17 pages, 2608 KiB  
Article
A Decision Support System for Face Sketch Synthesis Using Deep Learning and Artificial Intelligence
by Irfan Azhar, Muhammad Sharif, Mudassar Raza, Muhammad Attique Khan and Hwan-Seung Yong
Sensors 2021, 21(24), 8178; https://doi.org/10.3390/s21248178 - 8 Dec 2021
Cited by 11 | Viewed by 4085
Abstract
The recent development in the area of IoT technologies is likely to be implemented extensively in the next decade. There is a great increase in the crime rate, and the handling officers are responsible for dealing with a broad range of cyber and Internet issues during investigation. IoT technologies are helpful in the identification of suspects, and few technologies are available that use IoT and deep learning together for face sketch synthesis. Convolutional neural networks (CNNs) and other constructs of deep learning have become major tools in recent approaches. A new neural network architecture, called Spiral-Net, is presented in this work; it is a modified version of U-Net that performs face sketch synthesis (this phase is known as the compiler network C here). Spiral-Net works in combination with a pre-trained Vgg-19 network called the feature extractor F. It first identifies the top n matches from viewed sketches to a given photo. F is again used to formulate a feature map based on the cosine distance of a candidate sketch formed by C from the top n matches. A customized CNN configuration (called the discriminator D) then computes loss functions based on differences between the candidate sketch and the feature map. Values of these loss functions alternately update C and F. The ensemble of these nets is trained and tested on selected datasets, including CUFS, CUFSF, and a part of the IIT photo–sketch dataset. Results of this modified U-Net are evaluated with the legacy NLDA (1998) face recognition scheme and its newer version, OpenBR (2013), and demonstrate an improvement of 5% compared with the current state of the art in the relevant domain. Full article

12 pages, 5238 KiB  
Article
Virtual 3D Campus for Universiti Teknologi Malaysia (UTM)
by Syahiirah Salleh, Uznir Ujang and Suhaibah Azri
ISPRS Int. J. Geo-Inf. 2021, 10(6), 356; https://doi.org/10.3390/ijgi10060356 - 22 May 2021
Cited by 25 | Viewed by 6049
Abstract
University campuses consist of many buildings within a large area managed by a single organization. As in 3D city modeling, a 3D model of a campus can be utilized to provide a better foundation for planning, navigation and management of buildings. This study approaches 3D modeling of the UTM campus by utilizing data from aerial photos and site observations. The 3D models of buildings were drawn from building footprints in SketchUp and converted to CityGML using FME software. The CityGML models were imported into a geodatabase using 3DCityDB and visualized in Cesium. The resulting 3D model of buildings was in CityGML format level of detail 2 (LoD2), consisting of ground, wall and roof surfaces. The 3D models were positioned with real-world coordinates using the geolocation function in SketchUp. The non-spatial attributes of the 3D models were also stored in a database managed by PostgreSQL. The methodology demonstrated in this study was able to create LoD2 building models; however, issues of accuracy arose in terms of building details and positioning. Therefore, higher-accuracy data, such as point cloud data, should produce higher-LoD models and more accurate positioning. Full article
(This article belongs to the Special Issue Virtual 3D City Models)

26 pages, 6717 KiB  
Article
Integrated Archaeological Research: Archival Resources, Surveys, Geophysical Prospection and Excavation Approach at an Execution and Burial Site: The German Nazi Labour Camp in Treblinka
by Sebastian Różycki, Rafał Zapłata, Jerzy Karczewski, Andrzej Ossowski and Jacek Tomczyk
Geosciences 2020, 10(9), 336; https://doi.org/10.3390/geosciences10090336 - 24 Aug 2020
Cited by 8 | Viewed by 8926
Abstract
This article presents the results of multidisciplinary research undertaken in 2016–2019 at the German Nazi Treblinka I Forced Labour Camp. Housing 20,000 prisoners, Treblinka I was established in 1941 as a part of a network of objects such as forced labour camps, resettlement camps and prison camps that were established in the territory of occupied Poland from September 1939. This paper describes archaeological research conducted in particular on the execution site and burial site—the area where the “death pits” have been found—in the so-called Las Maliszewski (Maliszewa Forest). In this poorly documented area, exhumation work was conducted only until 1947, so the location of these graves is only approximately known. The research was resumed at the beginning of the 21st century using, e.g., non-invasive methods and remote-sensing data. The leading aim of this article is to describe the comprehensive research strategy, with a particular stress on non-invasive geophysical surveys. The integrated archaeological research presented in this paper includes an analysis of archive materials (aerial photos, witness accounts, maps, plans, and sketches), contemporary data resources (orthophotomaps, airborne laser scanning-ALS data), field work (verification of potential objects, ground penetrating radar-GPR surveys, excavations), and the integration, analysis and interpretation of all these datasets using a GIS platform. The results of the presented study included the identification of the burial zone within the Maliszewa Forest area, including six previously unknown graves, the creation of a new database, and the expansion of the Historical-GIS-Treblinka. The obtained results indicate that the integration and analyses within the GIS environment of various types of remote-sensing data and geophysical measurements significantly contribute to archaeological research and increase the chances to discover previously unknown “graves” from the time when the labour camp Treblinka I functioned. Full article
(This article belongs to the Special Issue Selected papers from the SAGA Workshop 1)

12 pages, 3839 KiB  
Article
A Joint Training Model for Face Sketch Synthesis
by Weiguo Wan and Hyo Jong Lee
Appl. Sci. 2019, 9(9), 1731; https://doi.org/10.3390/app9091731 - 26 Apr 2019
Cited by 10 | Viewed by 3680
Abstract
The exemplar-based method is most frequently used in face sketch synthesis because of its efficiency in representing the nonlinear mapping between face photos and sketches. However, the sketches synthesized by existing exemplar-based methods suffer from block artifacts and blur effects. In addition, most exemplar-based methods ignore the training sketches in the weight representation process. To improve synthesis performance, a novel joint training model is proposed in this paper, taking sketches into consideration. First, we construct the joint training photo and sketch by concatenating the original photo and its sketch with a high-pass filtered image of their corresponding sketch. Then, an offline random sampling strategy is adopted for each test photo patch to select the joint training photo and sketch patches in the neighboring region. Finally, a novel locality constraint is designed to calculate the reconstruction weight, allowing the synthesized sketches to have more detailed information. Extensive experimental results on public datasets show the superiority of the proposed joint training model in both subjective perceptual evaluation and FaceNet-based objective face recognition evaluation, compared to existing state-of-the-art sketch synthesis methods. Full article
(This article belongs to the Special Issue Intelligent Imaging and Analysis)
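A locality constraint of the kind described typically weights nearby training patches more heavily when reconstructing a test photo patch. The following LLC-style computation is a hypothetical illustration, not the paper's exact formulation:

```python
import numpy as np

def locality_weights(patch, neighbors, lam=0.1):
    """Locality-constrained reconstruction weights: represent a test photo
    patch as a weighted combination of sampled training patches, with a
    distance-based penalty that favors nearby neighbors.
    patch: (d,) vector; neighbors: (k, d) matrix of candidate patches."""
    D = neighbors - patch                    # center the basis on the patch
    C = D @ D.T                              # local covariance matrix
    dist = np.linalg.norm(neighbors - patch, axis=1)
    C += lam * np.diag(dist ** 2)            # locality regularizer
    w = np.linalg.solve(C + 1e-8 * np.eye(len(C)), np.ones(len(C)))
    return w / w.sum()                       # weights sum to 1
```

The distance penalty drives the weights of far-away patches toward zero, so the synthesized sketch patch is dominated by its closest training exemplars.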

17 pages, 1473 KiB  
Article
Interpretation of Aerial Photographs and Satellite SAR Interferometry for the Inventory of Landslides
by Tazio Strozzi, Christian Ambrosi and Hugo Raetzo
Remote Sens. 2013, 5(5), 2554-2570; https://doi.org/10.3390/rs5052554 - 22 May 2013
Cited by 66 | Viewed by 9689
Abstract
An inventory of landslides with an indication of the state of activity is necessary in order to establish hazard maps. We combine interpretation of aerial photographs and information on surface displacement from satellite Synthetic Aperture Radar (SAR) interferometry for mapping landslides and intensity classification. Sketch maps of landslides distinguished by typology and depth, including geomorphological features, are compiled by stereoscopic photo-interpretation. Results achieved with differential SAR interferometry (InSAR) and Persistent Scatterer Interferometry (PSI) are used to estimate the state of activity of landslides around villages and in sparsely vegetated areas with numerous exposed rocks. For validation and possible extension of the inventory around vegetated areas, where InSAR and PSI failed to retrieve displacement information, traditional monitoring data such as topographic measurements and GPS are considered. Our results, covering extensive areas, are a valuable contribution towards the analysis of landslide hazards in areas where traditional monitoring techniques are sparse or unavailable. In this contribution we discuss our methodology for a study area around the deep-seated landslide in Osco in southern Switzerland. Full article