Search Results (18)

Search Parameters:
Keywords = volumetric video

12 pages, 7676 KiB  
Article
A Novel 3D-Printing Model Resin with Low Volumetric Shrinkage and High Accuracy
by Long Ling, Theresa Lai, Pei-Ting Chung, Sara Sabet, Victoria Tran and Raj Malyala
Polymers 2025, 17(5), 610; https://doi.org/10.3390/polym17050610 - 25 Feb 2025
Viewed by 1380
Abstract
This study aims to assess and compare the shrinkage, accuracy, and accuracy stability of a novel 3D-printing model resin and eight commercially available 3D-printing model resin materials. The experimental model resin was developed using our proprietary 3D-printing resin technology. Eight commercially available 3D-printing model resins were included for comparison. The AcuVol video imaging technique was used to test volumetric shrinkage. Full-arch tooth models were printed for each model resin via digital light processing (DLP) technology. The 3D average distance between the scanned model and the designed CAD digital file was applied to determine the dimensional accuracy of the 3D-printed full-arch tooth models. One-way ANOVA and Tukey’s post hoc test (p < 0.05) were utilized to analyze the average values of volumetric shrinkage and 3D average distance (dimensional accuracy). The experimental model resin showed significantly lower volumetric shrinkage (7.28%) and significantly higher accuracy and accuracy stability (11.66–13.77 µm from the initial day to four weeks) than the other commercially available model resins (7.66–11.2%, 14.03–41.14 µm from the initial day to four weeks). A strong correlation was observed between volumetric shrinkage and dimensional accuracy (Pearson correlation coefficient R = 0.7485). For clinically successful modelling applications in restorations, orthodontics, implants, and so on, the new 3D-printing model resin is a promising option.
(This article belongs to the Section Polymer Applications)
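The statistical workflow this abstract describes, a one-way ANOVA across resin groups followed by a Pearson correlation between shrinkage and accuracy, can be sketched in plain Python. The replicate values and per-resin means below are hypothetical illustrations, not the study's data:

```python
from statistics import mean
from math import sqrt

def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of sample groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical replicate shrinkage measurements (%) for three resins
shrinkage = [
    [7.2, 7.3, 7.3],     # experimental resin
    [7.7, 7.6, 7.8],     # commercial resin A
    [11.1, 11.2, 11.3],  # commercial resin B
]
f_stat = one_way_anova_f(shrinkage)

# Hypothetical per-resin means: shrinkage (%) vs. 3D average distance (µm)
shrink_means = [7.28, 7.66, 8.5, 9.4, 11.2]
distance_um = [12.0, 14.0, 20.0, 28.0, 41.0]
r = pearson_r(shrink_means, distance_um)
```

A large F statistic motivates the pairwise Tukey comparison the study reports; the strongly positive r mirrors the shrinkage-accuracy relationship described above.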

15 pages, 2250 KiB  
Article
Video-Based Plastic Bag Grabbing Action Recognition: A New Video Dataset and a Comparative Study of Baseline Models
by Pei Jing Low, Bo Yan Ng, Nur Insyirah Mahzan, Jing Tian and Cheung-Chi Leung
Sensors 2025, 25(1), 255; https://doi.org/10.3390/s25010255 - 4 Jan 2025
Cited by 1 | Viewed by 1238
Abstract
Recognizing the action of taking a plastic bag in CCTV video footage represents a highly specialized and niche challenge within the broader domain of action video classification. To address this challenge, our paper introduces a novel benchmark video dataset specifically curated for the task of identifying the action of grabbing a plastic bag. Additionally, we propose and evaluate three distinct baseline approaches. The first approach employs a combination of handcrafted feature extraction techniques and a sequential classification model to analyze motion and object-related features. The second approach leverages a multiple-frame convolutional neural network (CNN) to exploit temporal and spatial patterns in the video data. The third approach explores a 3D CNN-based deep learning model, which is capable of processing video data as volumetric inputs. To assess the performance of these methods, we conduct a comprehensive comparative study, demonstrating the strengths and limitations of each approach within this specialized domain.
(This article belongs to the Special Issue Computer Vision-Based Human Activity Recognition)

23 pages, 23514 KiB  
Article
Deep-Learning-Based Automated Building Construction Progress Monitoring for Prefabricated Prefinished Volumetric Construction
by Wei Png Chua and Chien Chern Cheah
Sensors 2024, 24(21), 7074; https://doi.org/10.3390/s24217074 - 2 Nov 2024
Viewed by 2663
Abstract
Prefabricated prefinished volumetric construction (PPVC) is a relatively new technique that has recently gained popularity for its ability to improve flexibility in scheduling and resource management. Given the modular nature of PPVC assembly and the large amounts of visual data amassed throughout a construction project today, PPVC building construction progress monitoring can be conducted by quantifying assembled PPVC modules within images or videos. As manually processing high volumes of visual data can be extremely time-consuming and tedious, building construction progress monitoring can be automated to be more efficient and reliable. However, the complex nature of construction sites and the presence of nearby infrastructure could occlude or distort visual data. Furthermore, imaging constraints can also result in incomplete visual data. Therefore, it is hard to apply existing purely data-driven object detectors to automate building progress monitoring at construction sites. In this paper, we propose a novel 2D window-based automated visual building construction progress monitoring (WAVBCPM) system to overcome these issues by mimicking human decision making during manual progress monitoring, with a primary focus on PPVC building construction. WAVBCPM is segregated into three modules. A detection module first conducts detection of windows on the target building. This is achieved by detecting windows within the input image at two scales, using YOLOv5 as a backbone network for object detection, before applying a window detection filtering process to omit irrelevant detections from the surrounding areas. Next, a rectification module is developed to account for missing windows in the mid-section and near-ground regions of the constructed building that may be caused by occlusion and poor detection. Lastly, a progress estimation module checks the processed detections for missing or excess information before performing building construction progress estimation. The proposed method is tested on images from actual construction sites, and the experimental results demonstrate that WAVBCPM effectively addresses real-world challenges. By mimicking human inference, it overcomes imperfections in visual data, achieving higher accuracy in progress monitoring compared to purely data-driven object detectors.
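The detection-filtering and progress-estimation steps described above can be sketched as follows. The box format, centre-in-region rule, and row-based progress heuristic are simplifying assumptions for illustration, not the WAVBCPM implementation:

```python
def filter_window_detections(boxes, building_box):
    """Keep only window detections whose centre lies inside the target
    building's bounding box (a simple stand-in for the filtering step).
    Boxes are (x1, y1, x2, y2) tuples in pixel coordinates."""
    bx1, by1, bx2, by2 = building_box
    kept = []
    for x1, y1, x2, y2 in boxes:
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        if bx1 <= cx <= bx2 and by1 <= cy <= by2:
            kept.append((x1, y1, x2, y2))
    return kept

def estimate_progress(window_rows_detected, window_rows_planned):
    """Naive progress estimate: fraction of planned window rows detected."""
    return min(window_rows_detected / window_rows_planned, 1.0)

# Hypothetical detections: two on the building, one from a neighbouring block
detections = [(100, 50, 140, 90), (100, 150, 140, 190), (600, 80, 640, 120)]
building = (80, 0, 400, 800)
on_building = filter_window_detections(detections, building)  # keeps 2 boxes
progress = estimate_progress(8, 20)  # 0.4
```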

20 pages, 4837 KiB  
Article
Optical Particle Tracking in the Pneumatic Conveying of Metal Powders through a Thin Capillary Pipe
by Lorenzo Pedrolli, Luigi Fraccarollo, Beatriz Achiaga and Alejandro Lopez
Technologies 2024, 12(10), 191; https://doi.org/10.3390/technologies12100191 - 3 Oct 2024
Viewed by 4764
Abstract
Directed Energy Deposition (DED) processes necessitate a consistent material flow to the melt pool, typically achieved through pneumatic conveying of metal powder via thin pipes. This study aims to record and analyze the multiphase fluid–solid flow. An experimental setup utilizing a high-speed camera and specialized optics was constructed, and the flow through thin transparent pipes was recorded. The resulting information was analyzed and compared with coupled Computational Fluid Dynamics-Discrete Element Modeling (CFD-DEM) simulations, with special attention to the solids flow fluctuations. The proposed methodology shows a significant improvement in accuracy and reliability over existing approaches, particularly in capturing flow rate fluctuations and particle velocity distributions in small-scale systems. Moreover, it allows for accurately analyzing Particle Size Distribution (PSD) in the same setup. This paper details the experimental design, video analysis using particle tracking, and a novel method for deriving volumetric concentrations and flow rate from flat images. The findings confirm the accuracy of the CFD-DEM simulations and provide insights into the dynamics of pneumatic conveying and individual particle movement, with the potential to improve DED efficiency by reducing variability in material deposition rates.
(This article belongs to the Section Manufacturing Technology)

18 pages, 6787 KiB  
Article
An Implementation of LASER Beam Welding Simulation on Graphics Processing Unit Using CUDA
by Ernandes Nascimento, Elisan Magalhães, Arthur Azevedo, Luiz E. S. Paes and Ariel Oliveira
Computation 2024, 12(4), 83; https://doi.org/10.3390/computation12040083 - 17 Apr 2024
Cited by 3 | Viewed by 1994
Abstract
The maximum number of parallel threads in traditional CFD solutions is limited by the Central Processing Unit (CPU) capacity, which is lower than the capabilities of a modern Graphics Processing Unit (GPU). In this context, the GPU allows for simultaneous processing of several parallel threads with double-precision floating-point formatting. The present study focused on evaluating the advantages and drawbacks of implementing LASER Beam Welding (LBW) simulations using the CUDA platform. The performance of the developed code was compared to that of three top-rated commercial codes executed on the CPU. The unsteady three-dimensional heat conduction Partial Differential Equation (PDE) was discretized in space and time using the Finite Volume Method (FVM). The Volumetric Thermal Capacitor (VTC) approach was employed to model the melting-solidification. The GPU solutions were computed using a CUDA-C language in-house code, running on a Gigabyte Nvidia GeForce RTX 3090 video card and an MSI 4090 video card (both made in Hsinchu, Taiwan), each with 24 GB of memory. The commercial solutions were executed on an Intel® Core i9-12900KF CPU (made in Hillsboro, Oregon, United States of America) with a 3.6 GHz base clock and 16 cores. The results demonstrated that GPU and CPU processing achieve similar precision, but the GPU solution exhibited significantly faster speeds and greater power efficiency, resulting in speed-ups ranging from 75.6 to 1351.2 times compared to the CPU solutions. The in-house code also demonstrated optimized memory usage, with an average of 3.86 times less RAM utilization. Therefore, adopting parallelized algorithms run on GPU can lead to reduced CFD computational costs compared to traditional codes while maintaining high accuracy.
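The per-cell independence that makes the explicit FVM heat-conduction update attractive for GPU threads can be illustrated with a minimal single-threaded sketch. The 1D grid, material constant, and insulated boundaries are illustrative assumptions, not the paper's 3D CUDA kernel:

```python
def fvm_heat_step(T, alpha, dx, dt):
    """One explicit finite-volume update of the 1D heat equation
    dT/dt = alpha * d2T/dx2 on a uniform grid with insulated ends.
    Each interior-cell update depends only on the previous time level,
    which is exactly what a GPU thread per cell parallelizes."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme stability limit"
    new = T[:]
    for i in range(1, len(T) - 1):
        new[i] = T[i] + r * (T[i - 1] - 2 * T[i] + T[i + 1])
    # insulated (zero-flux) boundaries
    new[0], new[-1] = new[1], new[-2]
    return new

# Hypothetical bar with a hot spot in the middle (temperatures in deg C)
T = [20.0] * 11
T[5] = 1000.0
for _ in range(100):
    T = fvm_heat_step(T, alpha=1e-5, dx=1e-3, dt=0.025)
```

After the loop the profile has diffused symmetrically: the peak stays at the centre cell but is far below its initial value.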

15 pages, 2036 KiB  
Article
Assessment of Degree of Conversion and Volumetric Shrinkage of Novel Self-Adhesive Cement
by Long Ling, Yulin Chen and Raj Malyala
Polymers 2024, 16(5), 581; https://doi.org/10.3390/polym16050581 - 21 Feb 2024
Cited by 3 | Viewed by 1745
Abstract
The degree of monomer conversion and polymerization shrinkage are two of the main reasons for potential adhesion failure between the tooth structure and the restoration substrate. To evaluate the degree of conversion and polymerization shrinkage of a newly developed self-adhesive resin cement, the degree of conversion (DC) was measured using FTIR under different activation modes, temperatures, and times. Volumetric shrinkage was tested using the AcuVol video imaging method. The experimental cement showed a higher DC than other cements under self-curing. The DC of the experimental cement was higher than that of other cements, except SpeedCem Plus under light curing. The experimental cement had a higher DC than other cements, except SpeedCem Plus in some conditions under dual curing. All self-adhesive cements had a higher DC at 37 °C than at 23 °C under self-curing, and there was no statistical difference between 23 °C and 37 °C under light curing. All self-adhesive cements showed a significantly higher DC at 10 min than at 5 min under self-curing. There was no statistical difference between 5 min and 10 min for most cements under dual curing. All self-adhesive cements statistically had the same volumetric shrinkage under light curing and self-curing. The newly developed self-adhesive resin cement exhibited a higher degree of conversion and similar volumetric shrinkage compared to the commercial self-adhesive resin cements.

55 pages, 1876 KiB  
Review
A Survey on Video Streaming for Next-Generation Vehicular Networks
by Chenn-Jung Huang, Hao-Wen Cheng, Yi-Hung Lien and Mei-En Jian
Electronics 2024, 13(3), 649; https://doi.org/10.3390/electronics13030649 - 4 Feb 2024
Cited by 12 | Viewed by 4586
Abstract
As assisted driving technology advances and vehicle entertainment systems rapidly develop, future vehicles will become mobile cinemas, where passengers can use various multimedia applications in the car. In recent years, the progress in multimedia technology has given rise to immersive video experiences. In addition to conventional 2D videos, 360° videos are gaining popularity, and volumetric videos, which can offer users a better immersive experience, have been discussed. However, these applications place high demands on network capabilities, leading to a dependence on next-generation wireless communication technology to address network bottlenecks. Therefore, this study provides an exhaustive overview of the latest advancements in video streaming over vehicular networks. First, we introduce related work and background knowledge, and provide an overview of recent developments in vehicular networking and video types. Next, we detail various video processing technologies, including the latest released standards. Detailed explanations are provided for network strategies and wireless communication technologies that can optimize video transmission in vehicular networks, paying special attention to the relevant literature on the current development of 6G technology as applied to vehicle communication. Finally, we propose future research directions and challenges. Building upon the technologies introduced in this paper and considering diverse applications, we suggest a suitable vehicular network architecture for next-generation video transmission.
(This article belongs to the Special Issue Featured Review Papers in Electrical and Autonomous Vehicles)

31 pages, 1853 KiB  
Review
Taxonomy and Survey of Current 3D Photorealistic Human Body Modelling and Reconstruction Techniques for Holographic-Type Communication
by Radostina Petkova, Ivaylo Bozhilov, Desislava Nikolova, Ivaylo Vladimirov and Agata Manolova
Electronics 2023, 12(22), 4705; https://doi.org/10.3390/electronics12224705 - 19 Nov 2023
Cited by 1 | Viewed by 2278
Abstract
The continuous evolution of video technologies is now primarily focused on enhancing 3D video paradigms and consistently improving their quality, realism, and level of immersion. Both the research community and the industry work towards improving 3D content representation, compression, and transmission. Their collective efforts culminate in striving for the real-time transfer of volumetric data between distant locations, laying the foundation for holographic-type communication (HTC). However, to truly enable a realistic holographic experience, the 3D representation of the HTC participants must accurately convey the real individuals’ appearance, emotions, and interactions by creating authentic and animatable 3D human models. In this regard, our paper aims to examine the most recent and widely acknowledged works in the realm of 3D human body modelling and reconstruction. In addition, we provide insights into the datasets and the 3D parametric body models utilized by the examined approaches, along with the employed evaluation metrics. Our contribution involves organizing the examined techniques, making comparisons based on various criteria, and creating a taxonomy rooted in the nature of the input data. Furthermore, we discuss the assessed approaches with respect to different indicators and HTC.
(This article belongs to the Special Issue Neural Networks and Deep Learning in Computer Vision)

18 pages, 8885 KiB  
Article
Physics-Based Differentiable Rendering for Efficient and Plausible Fluid Modeling from Monocular Video
by Yunchi Cen, Qifan Zhang and Xiaohui Liang
Entropy 2023, 25(9), 1348; https://doi.org/10.3390/e25091348 - 17 Sep 2023
Viewed by 2735
Abstract
Realistic fluid models play an important role in computer graphics applications. However, efficiently reconstructing volumetric fluid flows from monocular videos remains challenging. In this work, we present a novel approach for reconstructing 3D flows from monocular inputs through a physics-based differentiable renderer coupled with joint density and velocity estimation. Our primary contributions include the proposed efficient differentiable rendering framework and an improved coupled density and velocity estimation strategy. Rather than relying on automatic differentiation, we derive the differential form of the radiance transfer equation under single scattering. This allows the direct computation of the radiance gradient with respect to density, yielding higher efficiency compared to prior works. To improve temporal coherence in the reconstructed flows, subsequent fluid densities are estimated via a coupled strategy that enables smooth and realistic fluid motions suitable for applications that require high efficiency. Experiments on synthetic and real-world data demonstrated our method’s capacity to efficiently reconstruct plausible volumetric flows with smooth dynamics. Comparisons to prior work on fluid motion reconstruction from monocular video revealed speedups of 50–170× across multiple resolutions.

14 pages, 1126 KiB  
Article
Comparison of a Nanofiber-Reinforced Composite with Different Types of Composite Resins
by Zümrüt Ceren Özduman, Burcu Oglakci, Derya Merve Halacoglu Bagis, Binnur Aydogan Temel and Evrim Eliguzeloglu Dalkilic
Polymers 2023, 15(17), 3628; https://doi.org/10.3390/polym15173628 - 1 Sep 2023
Cited by 6 | Viewed by 2799
Abstract
The aim of this study was a comprehensive evaluation and comparison of the physical and mechanical properties of a newly developed nano-sized hydroxyapatite fiber-reinforced composite with other fiber-reinforced and particle-filled composites. Eight commercially available composite resins (three fiber-reinforced and five particle-filled) were used. Fiber-reinforced composites: (1) NovaPro Fill (Nanova): newly developed nano-sized hydroxyapatite fiber-reinforced composite (nHAFC-NF); (2) Alert (Pentron): micrometer-scale glass fiber-reinforced composite (µmGFC-AL); (3) Ever X Posterior (GC Corp): millimeter-scale glass fiber-reinforced composite (mmGFC-EX). Particle-filled composites: (4) SDR Plus (Dentsply): low-viscosity bulk-fill (LVBF-SDR); (5) Estelite Bulk Fill (Tokuyama Corp.): low-viscosity bulk-fill (LVBF-EBF); (6) Filtek Bulk Fill Flow (3M ESPE): low-viscosity bulk-fill (LVBF-FBFF); (7) Filtek Bulk Fill (3M ESPE): high-viscosity bulk-fill (HVBF-FBF); and (8) Filtek Z250 (3M ESPE): microhybrid composite (µH-FZ). For Vickers microhardness, cylindrical specimens (diameter: 4 mm, height: 2 mm) were fabricated (n = 10). For the three-point bending test, bar-shaped (2 × 2 × 25 mm) specimens were fabricated (n = 10). Flexural strength and modulus of elasticity were calculated. AcuVol, a video imaging device, was used for volumetric polymerization shrinkage (VPS) evaluations (n = 6). The polymerization degree of conversion (DC) was measured on the top and bottom surfaces with Fourier Transform Near-Infrared Spectroscopy (FTIR; n = 5). The data were statistically analyzed using one-way ANOVA, Tukey HSD, Welch ANOVA, and Games–Howell tests (p < 0.05). The Pearson correlation coefficient was used to determine linear correlations. Group µH-FZ displayed the highest microhardness, flexural strength, and modulus of elasticity, while Group HVBF-FBF exhibited significantly lower VPS than the other composites. When comparing the fiber-reinforced composites, Group mmGFC-EX showed significantly higher microhardness, flexural strength, and modulus of elasticity, and lower VPS, than Group nHAFC-NF, but a similar DC. A strong correlation was determined between microhardness, VPS, and inorganic filler content by wt% and vol% (r = 0.572–0.877). Fiber type and length could affect the physical and mechanical properties of fiber-containing composite resins.
(This article belongs to the Special Issue Polymer Materials in Dentistry)

22 pages, 3715 KiB  
Article
Feel the Music!—Audience Experiences of Audio–Tactile Feedback in a Novel Virtual Reality Volumetric Music Video
by Gareth W. Young, Néill O’Dwyer, Mauricio Flores Vargas, Rachel Mc Donnell and Aljosa Smolic
Arts 2023, 12(4), 156; https://doi.org/10.3390/arts12040156 - 13 Jul 2023
Cited by 7 | Viewed by 4542
Abstract
The creation of imaginary worlds has been the focus of philosophical discourse and artistic practice for millennia. Humans have long evolved to use media and imagination to express their inner worlds outwardly via artistic practice. As a fundamental factor of fantasy world-building, the imagination can produce novel objects, virtual sensations, and unique stories related to previously unlived experiences. The expression of the imagination often takes a narrative form that applies some medium to facilitate communication, for example, books, statues, music, or paintings. These virtual realities are expressed and communicated via multiple multimedia immersive technologies, stimulating modern audiences via their combined Aristotelian senses. Incorporating interactive graphic, auditory, and haptic narrative elements in extended reality (XR) permits artists to express their imaginative intentions with visceral accuracy. However, these technologies are constantly in flux, and the precise role of multimodality has yet to be fully explored. Thus, this contribution to Feeling the Future—Haptic Audio explores the potential of novel multimodal technology to communicate artistic expression via an immersive virtual reality (VR) volumetric music video. We compare user experiences of our affordable volumetric video (VV) production to more expensive commercial VR music videos. Our research also inspects audio–tactile interactions in the auditory experience of immersive music videos, where both auditory and haptic channels receive vibrations during the imaginative virtual performance. This multimodal interaction is then analyzed from the audience’s perspective to capture the user’s experiences and examine the impact of this form of haptic feedback in practice via applied human–computer interaction (HCI) evaluation practices. Our results demonstrate the application of haptics in contemporary music consumption practices, discussing how haptics affect audience experiences regarding functionality, usability, and the perceived quality of a musical performance.
(This article belongs to the Special Issue Feeling the Future—Haptic Audio)

18 pages, 4365 KiB  
Article
Video-Monitoring Tools for Assessing Beach Morphodynamics in Tidal Beaches
by Juan Montes, Laura del Río, Theocharis A. Plomaritis, Javier Benavente, María Puig and Gonzalo Simarro
Remote Sens. 2023, 15(10), 2650; https://doi.org/10.3390/rs15102650 - 19 May 2023
Cited by 6 | Viewed by 2394
Abstract
Beach behaviour and evolution are controlled by a large number of factors and are susceptible to human-derived pressures and the impacts of climate change. In order to understand beach behaviour at different scales, systematic monitoring programs that assess shoreline and volumetric changes are required. Video-monitoring systems are widely used in this regard, as they are cost-effective and acquire data automatically and continuously, even in bad weather conditions. This work presents a methodology that uses the basic products of low-cost IP video cameras to identify both the cross-shore and long-shore variability of tidal beaches. Shorelines were automatically obtained, digital elevation models (DEMs) were generated and validated with real data, and the outputs were combined to analyse beach behaviour from a morphodynamic perspective. The proposed methodology was applied to La Victoria Beach (SW Spain) for the analysis of beach variations over a 5-year period. The combination of shoreline position analysis and data from DEMs facilitates understanding and provides a complete overview of beach behaviour, revealing alongshore differences in an apparently homogeneous beach. Furthermore, the methods used allowed us to inter-relate the different processes occurring on the beach, which is difficult to achieve with other types of techniques.
(This article belongs to the Special Issue Advances in Remote Sensing in Coastal Geomorphology Ⅱ)

11 pages, 954 KiB  
Article
The Effect of Two Different Light-Curing Units and Curing Times on Bulk-Fill Restorative Materials
by Gokcen Deniz Bayrak, Elif Yaman-Dosdogru and Senem Selvi-Kuvvetli
Polymers 2022, 14(9), 1885; https://doi.org/10.3390/polym14091885 - 5 May 2022
Cited by 5 | Viewed by 2707
Abstract
This study aimed to evaluate the effect of two different light-curing units and curing times on the surface microhardness (SMH), compressive strength (CS), and volumetric shrinkage (VS) of four restorative materials (Filtek™ Z250, Filtek™ Bulk Fill Posterior, Beautifil® Bulk Restorative, ACTIVA™ BioACTIVE). For all tests, each material was divided into two groups depending on the curing unit (Woodpecker LED-E and CarboLED), and each curing unit group was further divided into two subgroups according to curing time (10 s and 20 s). SMH was evaluated using a Vickers hardness tester, CS was tested using a universal testing machine, and VS was measured using video imaging. In all the restorative materials cured with Woodpecker LED-E, the 20 s subgroup demonstrated significantly higher SMH values than the 10 s subgroup. In both light-curing time subgroups, the CarboLED group showed significantly higher CS values than the Woodpecker LED-E group for all restorative materials except Filtek™ Bulk Fill Posterior cured for 20 s. ACTIVA™ BioACTIVE showed significantly greater volumetric change than the other restorative materials. A higher curing light intensity and longer curing time had a positive effect on the SMH and CS of the restorative materials tested in this study. On the other hand, curing unit and time did not show a significant effect on the VS values of the restorative materials.
(This article belongs to the Topic Advances in Biomaterials)

14 pages, 3200 KiB  
Article
Applying Compressed Sensing Volumetric Interpolated Breath-Hold Examination and Spiral Ultrashort Echo Time Sequences for Lung Nodule Detection in MRI
by Yu-Sen Huang, Emi Niisato, Mao-Yuan Marine Su, Thomas Benkert, Ning Chien, Pin-Yi Chiang, Wen-Jeng Lee, Jin-Shing Chen and Yeun-Chung Chang
Diagnostics 2022, 12(1), 93; https://doi.org/10.3390/diagnostics12010093 - 31 Dec 2021
Cited by 11 | Viewed by 3220
Abstract
This prospective study aimed to investigate the ability of spiral ultrashort echo time (UTE) and compressed sensing volumetric interpolated breath-hold examination (CS-VIBE) sequences in magnetic resonance imaging (MRI) compared to conventional VIBE and chest computed tomography (CT) in terms of image quality and small nodule detection. Patients with small lung nodules scheduled for video-assisted thoracoscopic surgery (VATS) for lung wedge resection were prospectively enrolled. Each patient underwent non-contrast chest CT and non-contrast MRI on the same day prior to thoracic surgery. The chest CT was performed to obtain a standard reference for nodule size, location, and morphology. The chest MRI included breath-hold conventional VIBE and CS-VIBE with scanning durations of 11 and 13 s, respectively, and free-breathing spiral UTE for 3.5–5 min. The signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and normal structure visualizations were measured to evaluate MRI quality. Nodule detection sensitivity was evaluated on a lobe-by-lobe basis. Inter-reader and inter-modality reliability analyses were performed using the Cohen κ statistic and the nodule size comparison was performed using Bland–Altman plots. Among 96 pulmonary nodules requiring surgery, the average nodule diameter was 7.7 ± 3.9 mm (range: 4–20 mm); of the 73 resected nodules, most were invasive cancer (74%) or pre-invasive carcinoma in situ (15%). Both spiral UTE and CS-VIBE images achieved significantly higher overall image quality scores, SNRs, and CNRs than conventional VIBE. Spiral UTE (81%) and CS-VIBE (83%) achieved a higher lung nodule detection rate than conventional VIBE (53%). Specifically, the nodule detection rate for spiral UTE and CS-VIBE reached 95% and 100% for nodules >8 and >10 mm, respectively. A 90% detection rate was achieved for nodules of all sizes with a part-solid or solid morphology. 
Compared to the reference CT, spiral UTE underestimated nodule size by 0.2 ± 1.4 mm (95% limits of agreement: −2.6 to 2.9 mm) and CS-VIBE by 0.2 ± 1.7 mm (95% limits of agreement: −3.3 to 3.5 mm). In conclusion, chest CT remains the gold standard for lung nodule detection due to its high spatial resolution. Both spiral UTE and CS-VIBE MRI could detect small lung nodules requiring surgery and may be considered potential alternatives to chest CT; however, their clinical application requires further investigation. Full article
(This article belongs to the Special Issue Advances in Diagnostic Medical Imaging)
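The abstract reports Bland–Altman agreement between MRI and reference CT nodule sizes (mean bias ± SD with 95% limits of agreement). As a minimal sketch of that computation, assuming hypothetical size measurements and an illustrative helper name (`bland_altman_limits` is not from the study):

```python
import numpy as np

def bland_altman_limits(ref, test):
    """Mean bias and 95% limits of agreement between two sets of measurements."""
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    diff = test - ref                      # per-nodule size difference (mm)
    bias = diff.mean()                     # mean bias
    sd = diff.std(ddof=1)                  # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical CT vs. MRI diameters (mm), for illustration only
ct_mm  = [5.0, 6.0, 7.0, 8.0]
mri_mm = [5.4, 5.8, 7.4, 7.8]
bias, (lo, hi) = bland_altman_limits(ct_mm, mri_mm)
```

A bias near zero with narrow limits of agreement indicates the MRI sequence sizes nodules consistently with CT; the study's reported 0.2 mm bias with roughly ±3 mm limits reflects this kind of analysis.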
15 pages, 4994 KiB  
Article
Research on Vehicle-Road Co-Location Method Oriented to Network Slicing Service and Traffic Video
by Zhi Ma and Songlin Sun
Sustainability 2021, 13(10), 5334; https://doi.org/10.3390/su13105334 - 11 May 2021
Cited by 10 | Viewed by 2170
Abstract
The development of 5G network slicing technology, combined with the application scenarios of vehicle–road collaborative positioning, provides end-to-end, large-bandwidth, low-latency, and highly reliable flexible customized services for Internet of Vehicle (IoV) services in different business scenarios. Starting from the needs of the network [...] Read more.
The development of 5G network slicing technology, combined with vehicle–road collaborative positioning scenarios, provides end-to-end, large-bandwidth, low-latency, and highly reliable customized services for Internet of Vehicles (IoV) applications across different business scenarios. Starting from the network requirements of co-location-oriented business scenarios, we researched the application of 5G network slicing technology in a vehicle–road cooperative localization system. We considered the scheduling of 5G slice resources and the creation of slices to guarantee system safety, which provided an optimized solution for deploying the vehicle–road coordinated positioning system. On this basis, this paper proposes a BeiDou-based vehicle–road coordinated combined positioning method. Building on BeiDou positioning and dead reckoning, and exploiting the strengths of the cubature Kalman filter (CKF), a CKF-based combined positioning algorithm was established. To further improve positioning accuracy, vehicle features can be extracted from the traffic monitoring video stream to optimize the service-oriented positioning system. Since vehicles in an urban traffic system can, in theory, only travel on roads, the scheme can be further refined using road network information. Simulations preliminarily verified that this approach improves on single positioning methods. Full article
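The combined positioning algorithm in this abstract is built on the cubature Kalman filter, which propagates 2n deterministically chosen "cubature points" through the motion and measurement models instead of linearizing them. A minimal one-step sketch, assuming generic motion model `f` and measurement model `h` (the function names and shapes here are illustrative, not the paper's implementation):

```python
import numpy as np

def cubature_points(x, P):
    """Generate the 2n cubature points for state mean x and covariance P."""
    n = x.size
    S = np.linalg.cholesky(P)              # matrix square root of P
    pts = np.empty((2 * n, n))
    for i in range(n):
        col = np.sqrt(n) * S[:, i]
        pts[i] = x + col
        pts[n + i] = x - col
    return pts

def ckf_step(x, P, z, f, h, Q, R):
    """One predict/update cycle of a cubature Kalman filter."""
    n = x.size
    # Predict: propagate cubature points through the motion model f
    pts = cubature_points(x, P)
    fp = np.array([f(p) for p in pts])
    x_pred = fp.mean(axis=0)
    P_pred = (fp - x_pred).T @ (fp - x_pred) / (2 * n) + Q
    # Update: propagate predicted points through the measurement model h
    pts = cubature_points(x_pred, P_pred)
    hp = np.array([h(p) for p in pts])
    z_pred = hp.mean(axis=0)
    Pzz = (hp - z_pred).T @ (hp - z_pred) / (2 * n) + R   # innovation covariance
    Pxz = (pts - x_pred).T @ (hp - z_pred) / (2 * n)      # cross-covariance
    K = Pxz @ np.linalg.inv(Pzz)                          # Kalman gain
    x_new = x_pred + K @ (z - z_pred)
    P_new = P_pred - K @ Pzz @ K.T
    return x_new, P_new
```

In a BeiDou/dead-reckoning fusion setting, `f` would encode the vehicle motion model and `h` would map the state to BeiDou (and, here, video-derived) observations; for linear models the CKF reduces exactly to the standard Kalman filter.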
