
Journal of Imaging

Journal of Imaging is an international, multi/interdisciplinary, peer-reviewed, open access journal of imaging techniques, published online monthly by MDPI.

Indexed in PubMed | Quartile Ranking JCR - Q2 (Imaging Science and Photographic Technology)

All Articles (2,204)

A Slicer-Independent Framework for Measuring G-Code Accuracy in Medical 3D Printing

  • Michel Beyer,
  • Alexandru Burde and
  • Andreas E. Roser
  • + 3 authors

In medical 3D printing, accuracy is critical for fabricating patient-specific implants and anatomical models. Although printer performance has been widely examined, the influence of slicing software on geometric fidelity is less frequently quantified. The slicing step, which converts STL files into printer-readable G-code, may introduce deviations that affect the final printed object. To quantify slicer-induced G-code deviations, G-code-derived geometries were compared with their reference STL models. Twenty mandibular models were processed using five slicers (PrusaSlicer (version 2.9.1), Cura (version 5.2.2), Simplify3D (version 4.1.2), Slic3r (version 1.3.0) and Fusion 360 (version 2.0.19725)). A custom Python workflow converted the G-code into point clouds and reconstructed STL meshes through XY and Z corrections, marching cubes surface extraction, and volumetric extrusion. A calibration object enabled coordinate normalization across slicers. Accuracy was assessed using Mean Surface Distance (MSD), Root Mean Square (RMS) deviation, and Volume Difference. MSD ranged from 0.071 to 0.095 mm, and RMS deviation from 0.084 to 0.113 mm, depending on the slicer. Volumetric differences were slicer-dependent. PrusaSlicer yielded the highest surface accuracy; Simplify3D and Slic3r showed the best repeatability. Fusion 360 produced the largest deviations. The slicers introduced geometric deviations below 0.1 mm, which represent a substantial proportion of the overall error in the FDM workflow.

4 January 2026

Mandible with its calibration object near the global origin and the corresponding regions as point clouds.
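The MSD and RMS metrics reported in the abstract above can be computed from surface distances between a reconstructed mesh and its reference STL. The following is a minimal sketch using trimesh, assuming both meshes are already available as STL files; the file names and the 50,000-point sampling density are illustrative assumptions, not the authors' actual pipeline.

```python
# Hedged sketch: comparing a G-code-derived mesh against a reference STL using
# Mean Surface Distance (MSD), RMS deviation, and Volume Difference.
import numpy as np
import trimesh

reference = trimesh.load("mandible_reference.stl")       # hypothetical reference STL
reconstructed = trimesh.load("mandible_from_gcode.stl")  # hypothetical reconstructed mesh

# Sample points on the reconstructed surface and measure each point's distance
# to the closest point on the reference surface.
points, _ = trimesh.sample.sample_surface(reconstructed, 50_000)
_, distances, _ = trimesh.proximity.closest_point(reference, points)

msd = distances.mean()                  # Mean Surface Distance
rms = np.sqrt((distances ** 2).mean())  # RMS deviation
# Volume comparison assumes both meshes are watertight.
volume_diff = reconstructed.volume - reference.volume

print(f"MSD: {msd:.3f} mm, RMS: {rms:.3f} mm, ΔV: {volume_diff:.2f} mm^3")
```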

Facial expression recognition (FER) technology has progressively matured over time. However, existing FER methods are primarily optimized for frontal face images, and their recognition accuracy degrades significantly when processing profile or large-angle rotated facial images, which hinders the practical deployment of FER systems. To mitigate the interference caused by large pose variations and improve recognition accuracy, we propose a FER method based on profile-to-frontal transformation and multimodal learning. Specifically, we first leverage the visual understanding and generation capabilities of Qwen-Image-Edit to transform profile images into frontal viewpoints, preserving key expression features while standardizing facial poses. Second, we introduce the CLIP model to enhance the semantic representation of expression features through vision–language joint learning. Qualitative and quantitative experiments on the RAF (89.39%), EXPW (67.17%), and AffectNet-7 (62.66%) datasets demonstrate that our method outperforms existing approaches.

4 January 2026

Outline of the proposed QC-FER. QC-FER identifies the emotional state of the target through four components. Qwen denotes Qwen-Image-Edit; CLIP refers to CLIP ViT-L/14; AFP stands for the Adaptive Feature Preprocessing module; ESV represents the Ensemble Soft Voting classification module. LR indicates logistic regression, RF denotes Random Forest, and SVM refers to Support Vector Machine.
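As a rough illustration of the vision–language component described above, the sketch below scores a (hypothetically frontalized) face image against emotion prompts with CLIP ViT-L/14 via Hugging Face Transformers. The prompt wording and the zero-shot scoring shown here are assumptions for demonstration, not the authors' training procedure.

```python
# Hedged sketch: vision-language expression scoring with CLIP ViT-L/14.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

emotions = ["happy", "sad", "angry", "surprised", "fearful", "disgusted", "neutral"]
prompts = [f"a photo of a {e} facial expression" for e in emotions]

image = Image.open("frontalized_face.jpg")  # hypothetical output of the frontalization step
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity scores
probs = logits.softmax(dim=-1).squeeze()

for emotion, p in zip(emotions, probs.tolist()):
    print(f"{emotion}: {p:.3f}")
```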

The thematic processing of pseudocolor composite images, especially those created from remote sensing data, is of considerable interest. The set of spectral classes comprising such images is typically described by a nominal scale, meaning the absence of any predetermined relationships between the classes. However, in many cases, images of this type may contain elements of a regular spatial order, one variant of which is a gradient structure. Gradient structures are characterized by a regular spatial ordering of spectral classes. Recognizing gradient patterns in the structure of pseudocolor composite images opens up new possibilities for deeper thematic image processing. This article describes an algorithm for analyzing the spatial structure of a pseudocolor composite image to identify gradient patterns. In the process, the initial nominal scale of spectral classes is transformed into a rank scale of the gradient legend. The algorithm is based on analyzing the Moore neighborhood of each image pixel, which yields an array of the prevalence of all types of local binary patterns (a pixel and its nearest neighbors). All possible variants of the spectral class rank scale composition are then considered, and the variant that describes the largest proportion of image pixels within its gradient order is taken as the final result. The user can independently define the criteria for the significance of the gradient order in the analyzed image, focusing either on the overall statistics of the proportion of pixels consistent with the spatial structure of the selected gradient or on the statistics of a selected key image region. The proposed algorithm is illustrated with test examples.

4 January 2026

Schemes of accounting for the nearest environment on a square lattice ((a) Von Neumann neighborhood; (b) Moore neighborhood).
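To make the neighborhood analysis concrete, the sketch below counts class co-occurrences in Moore neighborhoods and scores every candidate rank ordering by the share of neighbor pairs it explains. It is an illustrative reconstruction under a simple consistency rule (neighboring ranks differ by at most one), not the authors' implementation; the toy class map is hypothetical.

```python
# Hedged sketch: Moore-neighborhood pattern counting and rank-scale search.
import itertools
import numpy as np

def moore_pair_counts(class_map: np.ndarray) -> dict:
    """Count how often each (class_a, class_b) pair appears as Moore neighbors."""
    counts = {}
    h, w = class_map.shape
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
    for y in range(h):
        for x in range(w):
            a = class_map[y, x]
            for dy, dx in offsets:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    pair = (a, class_map[ny, nx])
                    counts[pair] = counts.get(pair, 0) + 1
    return counts

def best_rank_order(class_map: np.ndarray):
    """Try every permutation of classes; keep the one whose rank scale explains
    the largest share of neighbor pairs (equal or adjacent ranks)."""
    counts = moore_pair_counts(class_map)
    total = sum(counts.values())
    classes = sorted(np.unique(class_map))
    best_order, best_score = None, 0.0
    for perm in itertools.permutations(classes):
        rank = {c: i for i, c in enumerate(perm)}
        consistent = sum(n for (a, b), n in counts.items() if abs(rank[a] - rank[b]) <= 1)
        score = consistent / total
        if score > best_score:
            best_order, best_score = perm, score
    return best_order, best_score

# Toy example: a small map with a left-to-right gradient of classes 0..2.
demo = np.array([[0, 0, 1, 2],
                 [0, 1, 1, 2],
                 [0, 1, 2, 2]])
order, share = best_rank_order(demo)
print(order, f"{share:.2%}")
```

The exhaustive permutation search mirrors the abstract's "all possible variants of the rank scale"; for large class counts a pruned or heuristic search would be needed.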

This study assesses the efficiency of vision–language models in detecting and classifying carious and non-carious lesions in intraoral photographs. A dataset of 172 annotated images was labeled for microcavitation, cavitated lesions, staining, calculus, and non-carious lesions. Florence-2, PaLI-Gemma, and YOLOv8 models were trained on this dataset. The dataset was divided into an 80:10:10 split, and model performance was evaluated using mean average precision (mAP), mAP50-95, and class-specific precision and recall. YOLOv8 outperformed the vision–language models, achieving an mAP of 37% with a precision of 42.3% (100% for cavitation detection) and a recall of 31.3%. PaLI-Gemma produced recall values of 13% and 21%. Florence-2 yielded an mAP of 10%, with a precision of 51% and a recall of 35%. Overall, YOLOv8 achieved the strongest performance. Florence-2 and PaLI-Gemma underperformed relative to YOLOv8 despite their potential for multimodal contextual understanding, highlighting the need for larger, more diverse datasets and hybrid architectures to improve performance.

3 January 2026

Dataset overview: polygonal JSON annotation of upper and lower jaw.
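For readers who want to reproduce the YOLOv8 baseline from the comparison above, a minimal fine-tuning sketch with the ultralytics package is shown below. The dataset YAML, model size, and training settings are assumptions rather than the study's exact configuration.

```python
# Hedged sketch: fine-tuning YOLOv8 on an annotated intraoral-image dataset.
from ultralytics import YOLO

# Pretrained YOLOv8 detection weights as the starting point.
model = YOLO("yolov8n.pt")

# 'caries.yaml' is a hypothetical dataset config describing the 80:10:10 split
# and the five classes (microcavitation, cavitated lesion, staining, calculus,
# non-carious lesion).
model.train(data="caries.yaml", epochs=100, imgsz=640)

# Evaluate on the held-out split; metrics include mAP@0.5 and mAP@0.5:0.95.
metrics = model.val()
print(metrics.box.map50, metrics.box.map)
```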


Reprints of Collections

Computational Intelligence in Remote Sensing

2nd Edition
Editors: Yue Wu, Kai Qin, Maoguo Gong, Qiguang Miao


J. Imaging - ISSN 2313-433X