Search Results (96)

Search Parameters:
Keywords = 360° camera

14 pages, 5485 KiB  
Article
Immersive 3D Soundscape: Analysis of Environmental Acoustic Parameters of Historical Squares in Parma (Italy)
by Adriano Farina, Antonella Bevilacqua, Matteo Fadda, Luca Battisti, Maria Cristina Tommasino and Lamberto Tronchin
Urban Sci. 2025, 9(7), 259; https://doi.org/10.3390/urbansci9070259 - 3 Jul 2025
Viewed by 360
Abstract
Sound source localization is one of the major challenges in soundscape studies, owing to the dynamic nature of a large variety of signals. Many applications relate to ecosystems, such as studying the migration of birds and other animals or surveying wildlife in terrestrial environments; other sound-recording applications rely on sensors that detect animal movement. This paper deals with immersive 3D soundscapes captured using a multi-channel spherical microphone probe in combination with a 360° camera. The soundscape survey was carried out in three squares across the city of Parma, Italy. The acoustic maps obtained from the data processing reveal the directivity of the dynamic sound sources typical of an urban environment. The analysis of objective environmental parameters (loudness, roughness, sharpness, and prominence) was conducted alongside an investigation of the historical importance of Italian squares as places of social inclusivity. A dedicated listening playback is provided by the AGORA project, using a portable listening room built from modular soundbar units. Full article

17 pages, 1564 KiB  
Review
Capsule Endoscopy: Current Trends, Technological Advancements, and Future Perspectives in Gastrointestinal Diagnostics
by Chang-Chao Su, Chu-Kuang Chou, Arvind Mukundan, Riya Karmakar, Binusha Fathima Sanbatcha, Chien-Wei Huang, Wei-Chun Weng and Hsiang-Chen Wang
Bioengineering 2025, 12(6), 613; https://doi.org/10.3390/bioengineering12060613 - 4 Jun 2025
Viewed by 4032
Abstract
Capsule endoscopy (CE) has revolutionized gastrointestinal (GI) diagnostics by providing a non-invasive, patient-centered approach to observing the digestive tract. Conceived in 2000 by Gavriel Iddan, CE employs a diminutive, ingestible capsule containing a high-resolution camera, LED lighting, and a power supply. It specializes in visualizing the small intestine, a region frequently unreachable by conventional endoscopy. CE helps detect and monitor disorders, such as unexplained gastrointestinal bleeding, Crohn’s disease, and cancer, while presenting a lower procedural risk than conventional endoscopy. Contrary to conventional techniques that necessitate anesthesia, CE reduces patient discomfort and complications. Nonetheless, its constraints, specifically the incapacity to conduct biopsies or therapeutic procedures, have spurred technical advancements. Five primary types of capsule endoscopes have emerged: steerable, magnetic, robotic, tethered, and hybrid. Their performance varies substantially. For example, the image sizes vary from 256 × 256 to 640 × 480 pixels, the fields of view (FOV) range from 140° to 360°, the battery life is between 8 and 15 h, and the frame rates fluctuate from 2 to 35 frames per second, contingent upon motion-adaptive capture. This study addresses a significant gap by methodically evaluating CE platforms, outlining their clinical preparedness, and examining the underexploited potential of artificial intelligence in improving diagnostic precision. Through the examination of technical requirements and clinical integration, we highlight the progress made in overcoming existing CE constraints and outline prospective developments for next-generation GI diagnostics. Full article
(This article belongs to the Special Issue Novel, Low Cost Technologies for Cancer Diagnostics and Therapeutics)

16 pages, 4637 KiB  
Article
Low-Cost Solution for Kinematic Mapping Using Spherical Camera and GNSS
by Lukáš Běloch and Karel Pavelka
Appl. Sci. 2025, 15(11), 5972; https://doi.org/10.3390/app15115972 - 26 May 2025
Viewed by 689
Abstract
The use of spherical cameras for mapping is a common application in surveying. Surveying typically relies on very expensive, high-quality cameras supplemented by systems for determining their position; cheap cameras, in most cases, only complement laser scanners, with the images used to color the laser point cloud. This article investigates the use of action cameras in combination with low-cost GNSS (Global Navigation Satellite System) equipment. The research involves the development of a methodology and software for georeferencing spherical images, captured by the kinematic method, using GNSS RTK (Real-Time Kinematics) or PPK (Post-Processing Kinematics) coordinates. Testing was carried out in two case studies with differing survey environments. Because the images from the low-cost 360° camera are of lower quality, an artificial intelligence tool was used to improve them. The point clouds from the low-cost device were compared with more accurate methods, among them the SLAM (Simultaneous Localization and Mapping) method with the Faro Orbis device. The results show sufficient accuracy and data quality for mapping purposes. Given the very low price of the device used in this work, the method can readily be adopted in practice. Full article

14 pages, 4647 KiB  
Article
Rotary Panoramic and Full-Depth-of-Field Imaging System for Pipeline Inspection
by Qiang Xing, Xueqin Zhao, Kun Song, Jiawen Jiang, Xinhao Wang, Yuanyuan Huang and Haodong Wei
Sensors 2025, 25(9), 2860; https://doi.org/10.3390/s25092860 - 30 Apr 2025
Viewed by 478
Abstract
To address the limited adaptability and insufficient imaging quality of conventional in-pipe imaging techniques for irregular pipelines and unstructured scenes, this study proposes a novel radial rotating full-depth-of-field focusing imaging system designed to cope with the structural complexity of irregular pipelines, capable of effectively acquiring fine details at depths of 300–960 mm inside the pipeline. Firstly, a fast full-depth-of-field imaging method driven by depth features is proposed. Secondly, a full-depth rotating imaging apparatus is developed, incorporating a zoom camera, a miniature servo rotation mechanism, and a control system, enabling 360° multi-view, full-depth-of-field focusing imaging. Finally, full-depth-of-field focusing imaging experiments are carried out on pipelines with depth-varying characteristics. The results demonstrate that the imaging device can acquire depth data of the pipeline interior and rapidly obtain high-definition sequence images of the inner pipeline wall. In the depth-of-field segmentation with multiple view angles, the clarity of the fused image is improved by 75.3% relative to a single frame, and the SNR and PSNR reach 6.9 dB and 26.3 dB, respectively. Compared to existing closed-circuit television (CCTV) and other in-pipeline imaging techniques, the developed rotating imaging system offers high integration, faster imaging, and adaptive capacity. It thus provides an adaptive imaging solution for detecting defects on the inner surfaces of irregular pipelines, with significant potential for practical applications in pipeline inspection and maintenance. Full article
(This article belongs to the Special Issue Sensors in 2025)
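The full-depth-of-field fusion step described in this abstract can be approximated by a generic focus-stacking baseline: for each pixel, pick the frame in the focus stack with the highest local sharpness. The sketch below is illustrative only, not the authors' pipeline; the Laplacian-energy sharpness measure and the toy two-frame stack are assumptions.

```python
import numpy as np

def laplacian_energy(img):
    """Local sharpness map: squared response of a discrete 4-neighbor Laplacian."""
    lap = (-4 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return lap ** 2

def fuse_focus_stack(frames):
    """Per-pixel pick of the sharpest frame from a focus stack."""
    energy = np.stack([laplacian_energy(f) for f in frames])  # (N, H, W)
    best = np.argmax(energy, axis=0)                          # sharpest frame index per pixel
    frames = np.stack(frames)
    rows, cols = np.indices(best.shape)
    return frames[best, rows, cols]

# toy stack: each frame is sharp (high-frequency pattern) in a different half
sharp = np.tile(np.array([0.0, 1.0]), (8, 4))  # 8x8 alternating columns
blur = np.full((8, 8), 0.5)                    # defocused region: flat gray
f1 = np.concatenate([sharp[:, :4], blur[:, 4:]], axis=1)
f2 = np.concatenate([blur[:, :4], sharp[:, 4:]], axis=1)
fused = fuse_focus_stack([f1, f2])
```

The fused image takes the left half from `f1` and the right half from `f2`, mimicking how a multi-focus sequence is merged into one all-in-focus frame.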

15 pages, 14361 KiB  
Article
Precision Monitoring of Dead Chickens and Floor Eggs with a Robotic Machine Vision Method
by Xiao Yang, Jinchang Zhang, Bidur Paneru, Jiakai Lin, Ramesh Bahadur Bist, Guoyu Lu and Lilong Chai
AgriEngineering 2025, 7(2), 35; https://doi.org/10.3390/agriengineering7020035 - 3 Feb 2025
Cited by 1 | Viewed by 1842
Abstract
Modern poultry and egg production is facing challenges such as dead chickens and floor eggs in cage-free housing. Precision poultry management strategies are needed to address those challenges. In this study, convolutional neural network (CNN) models and an intelligent bionic quadruped robot were used to detect floor eggs and dead chickens in cage-free housing environments. A dataset comprising 1200 images was used to develop detection models, which were split into training, testing, and validation sets in a 3:1:1 ratio. Five different CNN models were developed based on YOLOv8 and the robot’s 360° panoramic depth perception camera. The final results indicated that YOLOv8m exhibited the highest performance, achieving a precision of 90.59%. The application of the optimal model facilitated the detection of floor eggs in dimly lit areas such as below the feeder area and in corner spaces, as well as the detection of dead chickens within the flock. This research underscores the utility of bionic robotics and convolutional neural networks for poultry management and precision livestock farming. Full article
(This article belongs to the Section Livestock Farming Technology)

28 pages, 12050 KiB  
Article
Construction Payment Automation Through Scan-to-BIM and Blockchain-Enabled Smart Contract
by Hamdy Elsharkawi, Emad Elbeltagi, Mohamed S. Eid, Wael Alattyih and Hossam Wefki
Buildings 2025, 15(2), 213; https://doi.org/10.3390/buildings15020213 - 13 Jan 2025
Cited by 3 | Viewed by 3268
Abstract
Timely approvals and payments to the project participants are crucial for successful completion of construction projects. However, the construction industry faces persistent delays and non-payments to contractors. Despite the desirable benefits of automated payments and enhanced access to digitized data progress, most payment applications rely on centralized control mechanisms; inefficient procedures; and documentation that takes time to prepare, review, and approve. As such, there is a need for a reliable payment automation system that guarantees timely execution of payments upon the detection of completed works. Therefore, this study used a cutting-edge approach to automate construction payments by integrating blockchain-enabled smart contracts and scan-to-Building Information Modeling (BIM). In this approach, scan-to-BIM provides accurate, real-time building progress data, which serve as the source of verifiable off-chain data. A chain-link is then used to securely relay these data to the blockchain system. Blockchain-enabled smart contracts automate the execution of payments upon meeting contract conditions. The proposed approach was implemented on a real case study project. The actual site scan was captured using a photogrammetry 360° camera, which uses a combination of structured light and infrared depth sensing technology to capture 3D data and create detailed 3D models of spaces. This study leveraged accurate, real-time building progress data to automate payments using blockchain-enabled smart contracts upon work completion, thus reducing payment disputes by tying payments to verifiable construction progress, leading to faster release of payments. The findings show that this approach provides a transparent basis for payment, enhancing trust and allowing precise project progress tracking. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)

20 pages, 8826 KiB  
Article
Coffee Leaf Rust Disease Detection and Implementation of an Edge Device for Pruning Infected Leaves via Deep Learning Algorithms
by Raka Thoriq Araaf, Arkar Minn and Tofael Ahamed
Sensors 2024, 24(24), 8018; https://doi.org/10.3390/s24248018 - 16 Dec 2024
Cited by 2 | Viewed by 1949
Abstract
Global warming and extreme climate conditions caused by unsuitable temperature and humidity lead to coffee leaf rust (Hemileia vastatrix) diseases in coffee plantations. Coffee leaf rust is a severe problem that reduces productivity. Currently, pesticide spraying is considered the most effective solution for mitigating coffee leaf rust. However, the application of pesticide spray is still not efficient for most farmers worldwide. In these cases, pruning the most infected leaves with leaf rust at coffee plantations is important to help pesticide spraying to be more efficient by creating a more targeted, accessible treatment. Therefore, detecting coffee leaf rust is important to support the decision on pruning infected leaves. The dataset was acquired from a coffee farm in Majalengka Regency, Indonesia. Only images with clearly visible spots of coffee leaf rust were selected. Data collection was performed via two devices, a digital mirrorless camera and a phone camera, to diversify the dataset and test it with different datasets. The dataset, comprising a total of 2024 images, was divided into three sets with a ratio of 70% for training (1417 images), 20% for validation (405 images), and 10% for testing (202 images). Images with leaves infected by coffee leaf rust were labeled via LabelImg® with the label “CLR”. All labeled images were used to train the YOLOv5 and YOLOv8 algorithms through the convolutional neural network (CNN). The trained model was tested with a test dataset, a digital mirrorless camera image dataset (100 images), a phone camera dataset (100 images), and real-time detection with a coffee leaf rust image dataset. After the model was trained, coffee leaf rust was detected in each frame. The mean average precision (mAP) and recall for the trained YOLOv5 model were 69% and 63.4%, respectively. For YOLOv8, the mAP and recall were approximately 70.2% and 65.9%, respectively. 
To evaluate the performance of the two trained models in detecting coffee leaf rust on trees, 202 original images were used for testing with the best-trained weight from each model. Compared to YOLOv5, YOLOv8 demonstrated superior accuracy in detecting coffee leaf rust. With a mAP of 73.2%, YOLOv8 outperformed YOLOv5, which achieved a mAP of 70.5%. An edge device was utilized to deploy real-time detection of CLR with the best-trained model. The detection was successfully executed with high confidence in detecting CLR. The system was further integrated into pruning solutions for Arabica coffee farms. A pruning device was designed using Autodesk Fusion 360® and fabricated for testing on a coffee plantation in Indonesia. Full article
(This article belongs to the Special Issue Deep Learning for Intelligent Systems: Challenges and Opportunities)
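The 70/20/10 split reported above (1417/405/202 images out of 2024) can be reproduced with a simple shuffled split. This is a generic sketch, not the authors' script; the placeholder filenames, the rounding rule, and the shuffle seed are assumptions.

```python
import random

def split_dataset(items, ratios=(0.7, 0.2, 0.1), seed=42):
    """Shuffle and split into train/val/test; the test set takes the remainder."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n_train = round(len(items) * ratios[0])
    n_val = round(len(items) * ratios[1])
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

images = [f"img_{i:04d}.jpg" for i in range(2024)]  # hypothetical filenames
train, val, test = split_dataset(images)
print(len(train), len(val), len(test))  # prints: 1417 405 202
```

Rounding 2024 × 0.7 and 2024 × 0.2 gives exactly the 1417/405/202 counts quoted in the abstract.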

13 pages, 2355 KiB  
Article
Diagnostic Ability of Quantitative Parameters of Whole-Body Bone SPECT/CT Using a Full-Ring 360° Cadmium-Zinc-Telluride Camera for Detecting Bone Metastasis in Patients with Prostate Cancer
by Ik Dong Yoo, Sun-pyo Hong, Sang Mi Lee, Hee Jo Yang, Ki Hong Kim, Si Hyun Kim and Jeong Won Lee
Diagnostics 2024, 14(23), 2714; https://doi.org/10.3390/diagnostics14232714 - 2 Dec 2024
Cited by 2 | Viewed by 1103
Abstract
Background/Objectives: This study aimed to assess the diagnostic capability of quantitative parameters from whole-body bone single-photon emission computed tomography/computed tomography (SPECT/CT) in detecting bone metastases in prostate cancer patients; Methods: We retrospectively analyzed 82 prostate cancer patients who underwent staging bone scintigraphy with a full-ring 360° Cadmium-Zinc-Telluride (CZT) SPECT/CT system. From the SPECT/CT images, we measured the maximum (SUVmax) and mean (SUVmean) standardized uptake values at six normal bone sites (skull, humerus, thoracic spine, lumbar spine, iliac bone, and femur), and the SUVmax for both metastatic and benign bone lesions. Ratios of lesion SUVmax-to-maximum and mean uptake values at the skull, humerus, and femur were computed for each lesion; Results: SUVmax and SUVmean at the skull and femur exhibited significantly lower variance compared to those at the thoracic spine, lumbar spine, and iliac bone, and revealed no significant differences between patients with and without bone metastasis. In receiver operating characteristic curve analysis for detecting bone metastasis among 482 metastatic lesions, 132 benign bone lesions, and 477 normal bone sites, the lesion-to-femur mean uptake ratio demonstrated the highest area under the curve value (0.955) among SPECT/CT parameters. Using a cut-off value of 5.38, the lesion-to-femur mean uptake ratio achieved a sensitivity of 94.8% and a specificity of 86.5%; Conclusions: The bone lesion-to-femur mean uptake ratio was the most effective quantitative bone SPECT/CT parameter for detecting bone metastasis in prostate cancer patients. Quantitative analysis of bone SPECT/CT images could thus play a crucial role in diagnosing bone metastasis. Full article
(This article belongs to the Special Issue Nuclear Medicine Imaging and Therapy in Prostate Cancer)
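As a minimal illustration of applying the reported cut-off of 5.38 to lesion-to-femur mean uptake ratios, the sketch below computes sensitivity and specificity from thresholded ratios. The ratios and labels are invented toy values, not the study's data.

```python
def sensitivity_specificity(ratios, labels, cutoff=5.38):
    """Classify a lesion as metastatic when its uptake ratio exceeds the cutoff,
    then compare against the reference labels (True = metastatic)."""
    tp = sum(r > cutoff and y for r, y in zip(ratios, labels))
    fn = sum(r <= cutoff and y for r, y in zip(ratios, labels))
    tn = sum(r <= cutoff and not y for r, y in zip(ratios, labels))
    fp = sum(r > cutoff and not y for r, y in zip(ratios, labels))
    return tp / (tp + fn), tn / (tn + fp)

# toy data (not from the study)
ratios = [8.1, 6.0, 5.5, 4.9, 3.2, 2.8, 7.4, 5.0]
labels = [True, True, True, True, False, False, True, False]
sens, spec = sensitivity_specificity(ratios, labels)
```

In a real ROC analysis the cut-off itself would be chosen by sweeping thresholds and maximizing a criterion such as Youden's index.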

27 pages, 22427 KiB  
Article
Multi-Camera Rig and Spherical Camera Assessment for Indoor Surveys in Complex Spaces
by Luca Perfetti, Nazarena Bruno and Riccardo Roncella
Remote Sens. 2024, 16(23), 4505; https://doi.org/10.3390/rs16234505 - 1 Dec 2024
Viewed by 1590
Abstract
This study compares the photogrammetric performance of three multi-camera systems—two spherical cameras (INSTA 360 Pro2 and MG1) and one multi-camera rig (ANT3D)—to evaluate their accuracy and precision in confined environments. These systems are particularly suited for indoor surveys, such as narrow spaces, where traditional methods face limitations. The instruments were tested for the survey of a narrow spiral staircase within Milan Cathedral and the results were analyzed based on different processing strategies, including different relative constraints between sensors, various calibration sets for distortion parameters, interior orientation (IO), and relative orientation (RO), as well as two different ground control solutions. This study also included a repeatability test. The findings showed that, with appropriate ground control, all systems achieved the target accuracy of 1 cm. In partially unconstrained scenarios, the drift errors ranged between 5 and 10 cm. Performance varied depending on the processing pipelines; however, the results suggest that imposing a multi-camera constraint between sensors and estimating both IO and RO parameters during the Bundle Block Adjustment yields the best outcomes. In less stable environments, it might be preferable to pre-calibrate and fix the IO parameters. Full article

21 pages, 15517 KiB  
Article
3D Reconstruction of Building Blocks Based on Extraction of Exterior Wall Lines Using Point Cloud Density Generated from Spherical Camera Images
by Qazale Askari, Hossein Arefi and Mehdi Maboudi
Remote Sens. 2024, 16(23), 4377; https://doi.org/10.3390/rs16234377 - 23 Nov 2024
Viewed by 1345
Abstract
The 3D modeling of urban buildings has become a common research area in various disciplines such as photogrammetry and computer vision, with different applications such as intelligent city management, navigation of self-driving cars and architecture, just to name a few. The objective of this study is to produce a 3D model of the external facades of buildings with the precision, accuracy and level of detail required by the user, while minimizing time and cost. This research focuses on the production of 3D models for blocks of residential buildings in Tehran, Iran. The Insta 360 One X2 spherical camera was selected to capture the data due to its low cost and 360 × 180° field of view. The camera specifications facilitated more efficient data collection in terms of both time and cost. The proposed modeling method is based on extracting the lines of external walls through the use of the point cloud density concept. Initially, photogrammetric point clouds are generated from the spherical camera images with a reconstruction precision of 0.24 m. In the next step, the 3D point cloud is projected into a 2D point cloud by setting the height component to zero. The 2D point cloud is then rotated by the direction angle determined by the Hough transform so that the perpendicular walls are parallel to the axes of the coordinate system. Next, a 2D point cloud density analysis is performed by voxelizing the point cloud and counting the number of points in each voxel in both the horizontal and vertical directions. By determining the peaks in the density plot, the lines of the external vertical and horizontal walls are extracted. To extract the diagonal external walls, the density analysis is performed in the direction of the first principal component. Finally, by determining the height of each wall in the point cloud, a 3D model is created at Level of Detail 1 (LoD1). 
The resulting model has a precision of 0.32 m compared to real sizes, and the 2D plan has a precision of 0.31 m compared to the ground truth map. The use of the spherical camera and point cloud density analysis makes this method efficient and cost-effective, making it a promising approach for future urban modeling projects. Full article
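The density-peak idea described in this abstract can be sketched as follows: histogram an already axis-aligned 2D point cloud along each coordinate and treat bins whose counts spike above a threshold as candidate wall lines. This is an illustrative simplification of the paper's voxel-counting approach; the bin size, count threshold, and synthetic points are assumptions, and the Hough-based rotation and diagonal-wall handling are omitted.

```python
import numpy as np

def wall_lines_from_density(points_xy, bin_size=0.1, min_count=50):
    """Histogram the (axis-aligned) 2D points along each axis; bins whose
    count exceeds min_count are returned as candidate wall-line positions."""
    walls = {}
    for axis, name in ((0, "vertical"), (1, "horizontal")):
        coords = points_xy[:, axis]
        edges = np.arange(coords.min(), coords.max() + bin_size, bin_size)
        counts, edges = np.histogram(coords, bins=edges)
        walls[name] = edges[:-1][counts > min_count] + bin_size / 2  # bin centers
    return walls

# synthetic facade: two walls at x = 0 and x = 5, one wall at y = 0
rng = np.random.default_rng(0)
wall_x0 = np.column_stack([rng.normal(0.0, 0.02, 500), rng.uniform(0, 5, 500)])
wall_x5 = np.column_stack([rng.normal(5.0, 0.02, 500), rng.uniform(0, 5, 500)])
wall_y0 = np.column_stack([rng.uniform(0, 5, 500), rng.normal(0.0, 0.02, 500)])
pts = np.vstack([wall_x0, wall_x5, wall_y0])
walls = wall_lines_from_density(pts)
```

Points belonging to a wall pile up in one histogram bin, while points from perpendicular walls spread thinly across many bins, so a simple count threshold separates the two.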

19 pages, 33194 KiB  
Article
A 3D-Printed, High-Fidelity Pelvis Training Model: Cookbook Instructions and First Experience
by Radu Claudiu Elisei, Florin Graur, Amir Szold, Răzvan Couți, Sever Cãlin Moldovan, Emil Moiş, Călin Popa, Doina Pisla, Calin Vaida, Paul Tucan and Nadim Al-Hajjar
J. Clin. Med. 2024, 13(21), 6416; https://doi.org/10.3390/jcm13216416 - 26 Oct 2024
Viewed by 1583
Abstract
Background: Since laparoscopic surgery became the gold standard for colorectal procedures, specific skills are required to achieve good outcomes. The best way to acquire basic and advanced skills and reach the learning curve plateau is by using dedicated simulators: box-trainers, video-trainers and virtual reality simulators. Laparoscopic skills training outside the operating room is cost-beneficial, faster and safer, and does not harm the patient. When compared to box-trainers, virtual reality simulators and cadaver models have no additional benefits. Several laparoscopic trainers available on the market as well as homemade box and video-trainers, most of them using plastic boxes and standard webcams, were described in the literature. The majority of them involve training on a flat surface without any anatomical environment. In addition to their demonstrated benefits, box-trainers which add anatomic details can improve the training quality and skills development of surgeons. Methods: We created a 3D-printed anatomic pelvi-trainer which offers a real-size narrow pelvic space environment for training. The model was created starting with a CT-scan performed on a female pelvis from the Anatomy Museum (Cluj-Napoca University of Medicine and Pharmacy, Romania), using Invesalius 3 software (Centro de Tecnologia da informação Renato Archer CTI, InVesalius open-source software, Campinas, Brazil) for segmentation, Fusion 360 with Netfabb software (Autodesk software company, Fusion 360 with Netfabb, San Francisco, CA, USA) for 3D modeling and a FDM technology 3D printer (Stratasys 3D printing company, Fortus 380mc 3D printer, Minneapolis, MN, USA). In addition, a metal mold for casting silicone valves was made for camera and endoscopic instruments ports. The trainer was tested and compared using a laparoscopic camera, a standard full HD webcam and “V-Box” (INTECH—Innovative Training Technologies, Milano, Italia), a dedicated hard paper box. 
The pelvi-trainer was tested by 33 surgeons with different qualifications and expertise. Results: We made a complete box-trainer with a versatile 3D-printed pelvi-trainer inside, designed for a wide range of basic and advanced laparoscopic skills training in the narrow pelvic space. We assessed the feedback of 33 surgeons regarding their experience using the anatomic 3D-printed pelvi-trainer for laparoscopic surgery training in the narrow pelvic space. Each surgeon tested the pelvi-trainer in three different setups: using a laparoscopic camera, using a webcam connected to a laptop and a “V-BOX” hard paper box. In the experiments that were performed, each participant completed a questionnaire regarding his/her experience using the pelvi-trainer. The results were positive, validating the device as a valid tool for training. Conclusions: We validated the anatomic pelvi-trainer designed by our team as a valuable alternative for basic and advanced laparoscopic surgery training outside the operating room for pelvic organs procedures, proving that it supports a much faster learning curve for colorectal procedures without harming the patients. Full article
(This article belongs to the Special Issue Recent Advances in the Management of Colorectal Cancer)

17 pages, 21943 KiB  
Article
Evaluation of Direct Sunlight Availability Using a 360° Camera
by Diogo Chambel Lopes and Isabel Nogueira
Solar 2024, 4(4), 555-571; https://doi.org/10.3390/solar4040026 - 1 Oct 2024
Viewed by 5152
Abstract
One important aspect to consider when buying a house or apartment is adequate solar exposure. The same applies to the evaluation of the shadowing effects of existing buildings on prospective construction sites and vice versa. In different climates and seasons, it is not always easy to assess if there will be an excess or lack of sunlight, and both can lead to discomfort and excessive energy consumption. The aim of our project is to design a method to quantify the availability of direct sunlight to answer these questions. We developed a tool in Octave to calculate representative parameters, such as sunlight hours per day over a year and the times of day for which sunlight is present, considering the surrounding objects. The apparent sun position over time is obtained from an existing algorithm and the surrounding objects are surveyed using a picture taken with a 360° camera from a window or other sunlight entry area. The sky regions in the picture are detected and all other regions correspond to obstructions to direct sunlight. The sky detection is not fully automatic, but the sky swap tool in the camera software could be adapted by the manufacturer for this purpose. We present the results for six representative test cases. Full article
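The core computation, testing each sampled sun position against an obstruction mask derived from the 360° photo, can be sketched as follows. The equirectangular mapping, the toy sky mask, and the synthetic sun path are all assumptions made for illustration; the paper's Octave tool uses a real sun-position algorithm and a mask segmented from an actual picture.

```python
import math

def sun_visible(azimuth_deg, elevation_deg, sky_mask):
    """Map a sun direction to equirectangular pixel coordinates and test the mask.
    sky_mask[row][col] is True where the 360-degree photo shows open sky."""
    rows, cols = len(sky_mask), len(sky_mask[0])
    if elevation_deg <= 0:  # sun below the horizon
        return False
    col = int(azimuth_deg / 360.0 * cols) % cols
    row = int((90.0 - elevation_deg) / 180.0 * rows)  # row 0 = zenith
    return sky_mask[row][col]

def sunlight_hours(sun_positions, sky_mask, step_hours=0.25):
    """Integrate visibility over sampled (azimuth, elevation) sun positions."""
    return step_hours * sum(sun_visible(az, el, sky_mask) for az, el in sun_positions)

# toy mask: sky open only in the eastern half of the panorama, above 30 deg elevation
R, C = 18, 36  # 10-degree cells
sky_mask = [[(r < 6) and (c < 18) for c in range(C)] for r in range(R)]

# toy daily sun path sampled every 15 minutes (not a real ephemeris)
path = [(90.0 + t, 60.0 * math.sin(math.pi * t / 180.0)) for t in range(0, 180, 4)]
hours = sunlight_hours(path, sky_mask)
```

Summing visibility over a year's worth of sampled sun positions, instead of one toy day, yields the sunlight-hours-per-day statistics described in the abstract.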

15 pages, 5588 KiB  
Article
Rolling Shutter-Based Underwater Optical Camera Communication (UWOCC) with Side Glow Optical Fiber (SGOF)
by Jia-Fu Li, Yun-Han Chang, Yung-Jie Chen and Chi-Wai Chow
Appl. Sci. 2024, 14(17), 7840; https://doi.org/10.3390/app14177840 - 4 Sep 2024
Cited by 1 | Viewed by 1346
Abstract
Nowadays, a variety of underwater activities, such as underwater surveillance and marine monitoring, are becoming crucial worldwide. Underwater sensors and autonomous underwater vehicles (AUVs) are widely adopted for underwater exploration. Underwater communication via radio frequency (RF) or acoustic waves suffers high transmission loss and limited bandwidth. In this work, we present and demonstrate a rolling shutter (RS)-based underwater optical camera communication (UWOCC) system utilizing a long short-term memory neural network (LSTM-NN) with side glow optical fiber (SGOF). The SGOF is made of poly-methyl methacrylate (PMMA); it is lightweight and flexibly bendable. Most importantly, SGOF is water resistant, so it can be installed in an underwater environment to provide 360° omni-directional, uniform radial light emission around its circumference. This large FOV facilitates optical detection in turbulent underwater environments. The proposed LSTM-NN has time-memorizing characteristics that enhance UWOCC signal decoding. The proposed LSTM-NN is also compared with other decoding methods in the literature, such as the PPB-NN. The experimental results demonstrate that the proposed LSTM-NN outperforms the PPB-NN in the UWOCC system. A data rate of 2.7 kbit/s can be achieved in UWOCC, satisfying the pre-forward error correction (FEC) condition (i.e., bit error rate, BER ≤ 3.8 × 10−3). We also found that the thin fiber allows spatial multiplexing to enhance transmission capacity. Full article
(This article belongs to the Section Optics and Lasers)
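The abstract above judges link quality by whether the measured bit error rate stays within the pre-FEC limit (BER ≤ 3.8 × 10⁻³). A minimal sketch of that check, in Python; the bit sequences and the `bit_error_rate` helper are hypothetical illustrations, not the paper's data or code:

```python
# Pre-FEC threshold cited in the abstract (bit error rate limit).
PRE_FEC_LIMIT = 3.8e-3

def bit_error_rate(transmitted, received):
    """Fraction of bit positions that differ between two equal-length sequences."""
    if len(transmitted) != len(received):
        raise ValueError("sequences must be the same length")
    errors = sum(t != r for t, r in zip(transmitted, received))
    return errors / len(transmitted)

# Example: 3 bit flips in 10,000 bits -> BER = 3e-4, below the pre-FEC limit.
tx = [0, 1] * 5000
rx = tx.copy()
for i in (7, 4242, 9001):  # flip three arbitrary bit positions
    rx[i] ^= 1

ber = bit_error_rate(tx, rx)
print(ber, ber <= PRE_FEC_LIMIT)
```

A decoded stream passing this check can, in principle, be corrected to error-free output by the FEC layer, which is why pre-FEC BER is the usual figure of merit in such experiments.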
11 pages, 214 KiB  
Article
Virtual Reality and Higher Education Sporting Events: Social Anxiety Perception as an Outcome of VR Simulation
by Kyu-Soo Chung, Chad Goebert and John David Johnson
Behav. Sci. 2024, 14(8), 695; https://doi.org/10.3390/bs14080695 - 10 Aug 2024
Cited by 2 | Viewed by 2266
Abstract
Background: This study investigates the relationship between Virtual Reality Exposure Therapy (VRET) and social anxiety in sport environments. Social anxiety is a mental health condition characterized by an intense fear of being watched and judged by others and by worry about humiliation. Because people with social anxiety often avoid attending live events due to a venue's sensory stimuli and the social encounters they anticipate, it is important to research potential tools like VRET that could help mitigate its impact. VR simulation could allow socially anxious individuals to fully experience a sporting event minus the anxiety induced by potential social encounters. Given the findings on VR interventions for mental health, VR's therapeutic effects on social anxiety merit exploration. Aim: The study assesses the impact of exposing socially anxious people to a virtual sporting game by measuring their levels of social anxiety, team identification, and intention to attend a live sporting event before and after the VR exposure. Because of the positive VR experience, social anxiety was expected to decrease, while team identification and intention to attend live sporting events were expected to increase owing to VR's ability to develop sport fanship. Method: Fourteen students with symptoms of social anxiety participated in the study. To create the VR simulation stimuli, the researchers used six 360° cameras to record an NCAA Division-I women's volleyball game. Participants experienced the sporting event via VR simulation. Data were analyzed via one-group pre- and post-comparison. Results and Conclusions: Significant results were found for participants' behavioral intentions after experiencing the simulation. The difference in social anxiety was −0.22, t(13) = 3.47, p < 0.01: after watching the game in VR, respondents' social anxiety decreased significantly. The difference in team identification was 0.53, t(13) = −3.56, p < 0.01, and the difference in event visit intentions was 0.24, t(13) = −2.35, p < 0.05: team identification and intention to visit a sporting event rose significantly after viewing the game in VR. Full article
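The pre/post comparisons reported above (t-values with df = 13 for n = 14 participants) are paired-samples t-tests. A minimal pure-Python sketch of the paired t statistic; the pre/post scores below are fabricated placeholder values for 14 participants, not the study's data:

```python
import math

def paired_t(pre, post):
    """Paired-samples t statistic for pre/post scores (df = n - 1)."""
    n = len(pre)
    diffs = [b - a for a, b in zip(pre, post)]
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    se = math.sqrt(var_d / n)                                # standard error of the mean difference
    return mean_d / se

# Hypothetical pre/post anxiety scores for 14 participants (illustrative only).
pre  = [3.1, 2.8, 3.5, 2.9, 3.3, 3.0, 2.7, 3.4, 3.2, 2.6, 3.1, 2.9, 3.0, 3.3]
post = [2.9, 2.7, 3.2, 2.8, 3.0, 2.9, 2.6, 3.1, 3.0, 2.5, 2.9, 2.8, 2.8, 3.1]

t = paired_t(pre, post)
print(round(t, 2), "df =", len(pre) - 1)
```

A negative t here corresponds to post-scores lower than pre-scores, i.e., a decrease in anxiety after exposure, matching the direction of the study's first reported result.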
12 pages, 2509 KiB  
Article
Unleashing the Potential of the 360° Baited Remote Underwater Video System (BRUVS): An Innovative Design for Complex Habitats
by Marisa A. Gomes, Catarina M. Alves, Fábio Faria, Regina Neto, Edgar Fernandes, Jesus S. Troncoso and Pedro T. Gomes
J. Mar. Sci. Eng. 2024, 12(8), 1346; https://doi.org/10.3390/jmse12081346 - 8 Aug 2024
Cited by 1 | Viewed by 2516
Abstract
Coastal ecosystems are vital for numerous demersal and benthopelagic species, offering critical habitats throughout their life cycles. Effective monitoring of these species in complex coastal environments is essential, yet traditional survey methodologies are often impractical due to environmental constraints such as strong currents and high wave regimes. This study introduces a new cost-effective Baited Remote Underwater Video System (BRUVS) design featuring a vertical structure and 360° cameras, developed to overcome limitations of traditional BRUVS such as system anchoring, overturning, and restricted frame view. The new design was compared against a previous one used on the northwest Iberian coast. Key performance metrics included species detection, habitat identification, and operational efficiency under complex hydrodynamic conditions. Findings reveal that the two designs can effectively identify the common species typically observed in the study area; however, the new design outperformed the previous one by significantly reducing equipment losses and anchoring issues. This improvement in the simplicity, operability, portability, and resilience of field operations underscores the new system's potential as a cost-effective and efficient tool for demersal and benthopelagic ecological surveys in challenging coastal seascapes. This innovative BRUVS design offers advanced monitoring solutions, improving habitat assessment accuracy and responsiveness. Full article