Machine and Deep Learning in Computer Vision Applications

A special issue of Big Data and Cognitive Computing (ISSN 2504-2289).

Deadline for manuscript submissions: closed (15 February 2022) | Viewed by 11552

Special Issue Editors


Dr. Álvaro Rodríguez Tajes
Guest Editor
Department of Computer Science, University of A Coruña, A Coruña, Galicia, Spain
Interests: computational science; artificial intelligence

Mr. Alberto José Alvarellos González
Guest Editor
Department of Computer Science, University of A Coruña, A Coruña, Galicia, Spain
Interests: computational science; artificial intelligence

Special Issue Information

Dear Colleagues,

In recent years, we have witnessed a revolutionary advance in the areas of machine learning and deep learning applied to computer vision. Machine and deep learning have always been closely related to computer vision and image processing, and are used in object recognition, background subtraction, video tracking, detection, and motion estimation, in applications ranging from driverless cars and facial recognition to robotics and bioinformatics. Nowadays, machine and deep learning have displaced traditional algorithms in the interpretation stage of computer vision. In turn, computer vision has broadened the scope of machine learning and deep learning.

This Special Issue is dedicated to the presentation of novel approaches and results in machine learning and deep learning in computer vision applied scenarios, from the application of existing algorithms in diverse contexts to the development of new techniques. Submissions are invited across a range of topics related to machine and deep learning in computer vision, including but not limited to the following fields:

Transport and mobility, smart cities, medical imaging, health monitoring, sports and rehabilitation, agriculture, marine science, ecology, geology, forestry, urban/rural planning, civil engineering, smart manufacturing, industrial inspection, disaster management, climate and atmosphere, navigation systems, etc.

Dr. Álvaro Rodríguez Tajes
Mr. Alberto José Alvarellos González
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Big Data and Cognitive Computing is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • deep learning
  • data analysis
  • big data analytics
  • image processing: detection, recognition, classification, tracking
  • computer vision
  • robot vision
  • medical imaging
  • civil engineering
  • inspection
  • intelligent manufacturing
  • remote sensing
  • nondestructive testing and evaluation (NDT/E)
  • single or multiple modalities: visible spectrum, 3D, infrared, THz, X-ray, etc.
  • multispectral and hyperspectral imaging
  • data fusion
  • optics

Published Papers (2 papers)

Research

16 pages, 1815 KiB  
Article
Google Street View Images as Predictors of Patient Health Outcomes, 2017–2019
by Quynh C. Nguyen, Tom Belnap, Pallavi Dwivedi, Amir Hossein Nazem Deligani, Abhinav Kumar, Dapeng Li, Ross Whitaker, Jessica Keralis, Heran Mane, Xiaohe Yue, Thu T. Nguyen, Tolga Tasdizen and Kim D. Brunisholz
Big Data Cogn. Comput. 2022, 6(1), 15; https://doi.org/10.3390/bdcc6010015 - 27 Jan 2022
Cited by 11 | Viewed by 5440
Abstract
Collecting neighborhood data can be both time- and resource-intensive, especially across broad geographies. In this study, we leveraged 1.4 million publicly available Google Street View (GSV) images from Utah to construct indicators of the neighborhood built environment and evaluate their associations with 2017–2019 health outcomes of approximately one-third of the population living in Utah. The use of electronic medical records allows for the assessment of associations between neighborhood characteristics and individual-level health outcomes while controlling for predisposing factors, which distinguishes this study from previous GSV studies that were ecological in nature. Among 938,085 adult patients, we found that individuals living in communities in the highest tertiles of green streets and non-single-family homes have 10–27% lower diabetes, uncontrolled diabetes, hypertension, and obesity, but higher substance use disorders, controlling for age, White race, Hispanic ethnicity, religion, marital status, health insurance, and area deprivation index. Conversely, the presence of visible utility wires overhead was associated with 5–10% more diabetes, uncontrolled diabetes, hypertension, obesity, and substance use disorders. Our study found that non-single-family homes and green streets were related to a lower prevalence of chronic conditions, while visible utility wires and single-lane roads were connected with a higher burden of chronic conditions. These contextual characteristics can help healthcare organizations better understand the drivers of their patients' health by further considering patients' residential environments, which present both risks and resources.
(This article belongs to the Special Issue Machine and Deep Learning in Computer Vision Applications)

15 pages, 3933 KiB  
Article
GANs and Artificial Facial Expressions in Synthetic Portraits
by Pilar Rosado, Rubén Fernández and Ferran Reverter
Big Data Cogn. Comput. 2021, 5(4), 63; https://doi.org/10.3390/bdcc5040063 - 4 Nov 2021
Cited by 7 | Viewed by 4763
Abstract
Generative adversarial networks (GANs) provide powerful architectures for deep generative learning. GANs have enabled us to achieve an unprecedented degree of realism in the creation of synthetic images of human faces, landscapes, and buildings, among others. Not only image generation but also image manipulation is possible with GANs. Generative deep learning models are inherently limited in their creative abilities because of a focus on learning for perfection. We investigated the potential of GANs' latent spaces to encode human expressions, highlighting the creative interest of suboptimal solutions rather than perfect reproductions, in pursuit of the artistic concept. We trained a Deep Convolutional GAN (DCGAN) and StyleGAN using a collection of portraits of detained persons, portraits of people who died of violent causes, and portraits of people taken during an orgasm. We present results that diverge from the standard usage of GANs, with the specific intention of producing portraits that may assist us in the representation and recognition of otherness in contemporary identity construction.
(This article belongs to the Special Issue Machine and Deep Learning in Computer Vision Applications)