Information Fusion in Medical Image Computing

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Algorithms for Multidisciplinary Applications".

Deadline for manuscript submissions: closed (15 August 2021)

Special Issue Editor

Dr. Girija Chetty
School of Information Technology and Systems, Faculty of Science and Technology, University of Canberra, 11 Kirinari Street, Bruce, ACT 2617, Australia
Interests: multimodal systems; sensor fusion; big data analytics; Internet of Things; computer vision; pattern recognition; data mining; medical image computing

Special Issue Information

Dear Colleagues,

Biomedical data are acquired in a broad range of formats and application settings, from unstructured text notes to structured data records and biomedical sensor signals, whether 1D signals, 2D images, 3D volumes, or even higher-dimensional data such as temporal 3D sequences. This massive amount of heterogeneous data needs to be processed appropriately so that useful information and knowledge can be extracted and used to provide automatic, computer-based decision support systems.

In this Special Issue, contributions are solicited from researchers and practitioners on recent advances in algorithms and applications that fuse or combine medical information from multiple sources, on intelligent processing of this information using innovative machine learning, deep learning, and AI approaches, and on their implementations in different biomedical computing application settings.

Topics for this Special Issue include (but are not limited to):

  • Information fusion of structured and unstructured data for biomedical systems;
  • Machine learning and deep learning for biomedical image data;
  • Multisensor, multimodal, and multiview learning;
  • Combining multiple sources and models for biomedical data;
  • Early, feature, and late fusion for biomedical data (a brief illustrative sketch follows this list);
  • Hierarchical models and architectures for biomedical fusion systems;
  • Joint feature learning and cross-modal learning.
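
To make the fusion terminology concrete, the following is a minimal illustrative sketch (not part of the call itself) contrasting early and late fusion of two hypothetical modalities, here synthetic "imaging" and "lab" feature vectors. Feature-level (intermediate) fusion would concatenate learned representations rather than raw inputs, but has the same overall structure. The variable names and the scikit-learn setup are assumptions made purely for illustration.

# Minimal sketch, assuming two synthetic modalities and logistic-regression base models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
y = rng.integers(0, 2, n)                              # binary label
x_img = y[:, None] * 0.8 + rng.normal(size=(n, 10))    # synthetic "imaging" features
x_lab = y[:, None] * 0.5 + rng.normal(size=(n, 5))     # synthetic "lab" features

idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# Early fusion: concatenate the raw feature vectors and train a single model.
x_early = np.hstack([x_img, x_lab])
early = LogisticRegression(max_iter=1000).fit(x_early[idx_tr], y[idx_tr])
p_early = early.predict_proba(x_early[idx_te])[:, 1]

# Late fusion: train one model per modality and combine their output probabilities.
m_img = LogisticRegression(max_iter=1000).fit(x_img[idx_tr], y[idx_tr])
m_lab = LogisticRegression(max_iter=1000).fit(x_lab[idx_tr], y[idx_tr])
p_late = 0.5 * m_img.predict_proba(x_img[idx_te])[:, 1] \
       + 0.5 * m_lab.predict_proba(x_lab[idx_te])[:, 1]

print("early-fusion accuracy:", ((p_early > 0.5) == y[idx_te]).mean())
print("late-fusion accuracy: ", ((p_late > 0.5) == y[idx_te]).mean())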

Dr. Girija Chetty
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Information fusion
  • Multimodal
  • Biomedical
  • Machine learning

Published Papers (1 paper)


Research

20 pages, 5419 KiB  
Article
PFSegIris: Precise and Fast Segmentation Algorithm for Multi-Source Heterogeneous Iris
by Lin Dong, Yuanning Liu and Xiaodong Zhu
Algorithms 2021, 14(9), 261; https://doi.org/10.3390/a14090261 - 30 Aug 2021
Cited by 6
Abstract
Current segmentation methods have limitations for multi-source heterogeneous iris segmentation, since differences in acquisition devices and acquisition environments lead to images of greatly varying quality across iris datasets. Thus, different segmentation algorithms are generally applied to distinct datasets. Meanwhile, deep-learning-based iris segmentation models occupy considerable storage space and have long inference times. Therefore, we propose PFSegIris, a lightweight, precise, and fast segmentation network aimed at multi-source heterogeneous iris images. First, purpose-designed iris feature extraction modules were used to fully extract heterogeneous iris feature information while reducing the number of parameters, the computation, and the loss of information. Then, an efficient parallel attention mechanism was introduced only once, between the encoder and the decoder, to capture semantic information, suppress noise interference, and enhance the discriminability of iris region pixels. Finally, we added a skip connection from low-level features to capture more detailed information. Experiments on four near-infrared datasets and three visible-light datasets show that the segmentation precision is better than that of existing algorithms, while the parameter count and storage footprint are only 1.86 M and 0.007 GB, respectively, and the average prediction time is less than 0.10 s. The proposed algorithm segments multi-source heterogeneous iris images more precisely and more quickly than other algorithms.
(This article belongs to the Special Issue Information Fusion in Medical Image Computing)
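
For readers unfamiliar with this type of architecture, the following is a rough illustrative sketch (in PyTorch) of a lightweight encoder-decoder segmenter with a single attention block between encoder and decoder and one low-level skip connection, i.e. the overall structure the abstract describes. The module layout, channel widths, and the specific attention form are assumptions made for illustration and are not the published PFSegIris design.

# Illustrative sketch only, not the published PFSegIris architecture.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Depthwise-separable convolution to keep the parameter count small.
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in),
        nn.Conv2d(c_in, c_out, 1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class TinyIrisSegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)              # low-level features (skip source)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        # Simple channel-attention gate standing in for the parallel attention module.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(32, 32, 1), nn.Sigmoid()
        )
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec = conv_block(32 + 16, 16)         # decoder consumes skip + upsampled features
        self.head = nn.Conv2d(16, 1, 1)            # 1-channel iris mask logits

    def forward(self, x):
        low = self.enc1(x)                         # full-resolution, low-level features
        deep = self.enc2(self.pool(low))           # downsampled, higher-level features
        deep = deep * self.attn(deep)              # attention applied once, between encoder and decoder
        fused = torch.cat([self.up(deep), low], dim=1)   # skip connection from low-level features
        return self.head(self.dec(fused))

# Example: a batch of two 1-channel 128x128 near-infrared iris images.
mask_logits = TinyIrisSegNet()(torch.randn(2, 1, 128, 128))
print(mask_logits.shape)  # torch.Size([2, 1, 128, 128])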
