Article

Method and Tools to Collect, Process, and Publish Raw and AI-Enhanced Astronomical Observations on YouTube

Luxembourg Institute of Science and Technology, 5 Avenue des Hauts-Fourneaux, L-4362 Esch-sur-Alzette, Luxembourg
Electronics 2025, 14(13), 2567; https://doi.org/10.3390/electronics14132567
Submission received: 26 May 2025 / Revised: 16 June 2025 / Accepted: 23 June 2025 / Published: 25 June 2025
(This article belongs to the Special Issue Machine Learning Techniques for Image Processing)

Abstract

Observational astronomy requires specialized equipment and favourable outdoor conditions, creating barriers to access for many enthusiasts. Streaming platforms can help bridge this gap by offering accessible views of celestial events, fostering broader public engagement and educational opportunities. In this paper, we introduce a methodology and a set of tools designed to power a YouTube channel that shares authentic recordings of Deep-Sky Objects, the Sun, the Moon, and planets. Each video is accompanied by detailed information on observation conditions and post-processing steps. The content is structured into two complementary formats: raw footage, captured using smart telescopes, and AI-enhanced videos that highlight specific features or phenomena using custom-trained AI models. Furthermore, the YouTube channel and associated AI tools may serve as a dynamic platform for long-term sky observation, supporting the detection of seasonal patterns and transient celestial events.

1. Introduction

The majority of video content on the Internet is created for purely recreational purposes, in order to generate a massive and regular audience on streaming platforms [1]. However, the video format can also be used effectively for education and science popularization: scientific and educational content has been disseminated in this form via platforms like YouTube [2] or, more recently, TikTok [3]. The producers of this type of content can be very diverse: educational establishments, research institutes, specialist media, influencers (also called EduTubers [4]), etc.
Video is particularly well suited to live or recorded broadcasts of observations that are difficult or impossible for the general public to perform themselves; for example, live views of Mars have been broadcast via the Mars webcam [5], and deep-sea observations have been published through the Nautilus Live Stream [6]. It is beneficial for everyone: a recent study examined the extent to which appropriate digital content can help reconnect young people with nature [7], while it is also a good way for engineers and scientists to showcase their work differently to a wider audience [8].
Astronomy is well suited for online broadcasting due to its visually stunning nature (images of star clusters, nebulae, planets, galaxies, the Moon, the Sun, and even comets), the fascination it generates, and the ability to simplify complex concepts for a wide audience [9]. Animations, 3D simulations, tutorials, and real-time observations make the topics engaging and easy to understand. On this last point, astronomers and stargazers can use online platforms to share astronomical observations and offer streams accessible to a wide audience worldwide. Potential interaction with viewers through live chat, and the ability to re-watch videos later, make the experience immersive, interactive, and educational. This is particularly valuable given that many people lack access to a suitable location for night sky observation, or may not have the time, motivation, or necessary equipment to engage in this activity themselves [10]. Astronomy clubs, outreach organizations, and associations do their best to showcase night sky observations and make them accessible [11]. For instance, we can mention a yearly event called Nuits des Étoiles, organized by the AFA (https://www.afastronomie.fr/les-nuits-des-etoiles (accessed on 22 June 2025)).
Broadcasting astronomical observations via streaming platforms thus opens up the many deep-sky targets to a public that would probably not otherwise have access to them.
The technical feasibility of this kind of broadcasting has been enhanced by the advent of Electronically Assisted Astronomy (EAA). Sometimes known as Video Astronomy [9], EAA consists of collecting images directly from a dedicated camera coupled to an optical system, and then applying lightweight image processing on a computing device to generate and store enhanced images of celestial targets in near real time [12]. The recent emergence of smart telescopes has revolutionized the practice of EAA, making it significantly more accessible to a broader audience, including amateur astronomers. Available on the market from EUR 500, these telescopes are equipped with automated systems that handle traditionally time-consuming and complex tasks, such as initial sky alignment through pattern recognition, continuous object tracking, and precise focusing [13]. By streamlining these essential processes, smart telescopes eliminate many of the technical barriers that previously required extensive expertise, allowing users to focus more on observation and discovery.
For research projects started in 2021 at the Luxembourg Institute of Science and Technology (LIST), we are collecting astronomical images of Deep-Sky Objects (DSOs), planets, the Moon, and even the Sun with smart telescopes. This article describes how we have collected and used this astronomical data to produce and share videos on a YouTube channel that bridges the accessibility gap in observational astronomy by combining raw telescope footage with AI and Computer Vision enhancements. On the one hand, the raw videos document celestial observation in an authentic way, to show what can be seen without the need for a professional observatory or even a space telescope. On the other hand, the AI-processed videos provide additional interpretation and highlight elements that are less visible in the raw footage, providing keys to understanding what is seen, detected, highlighted, and/or ignored. Together, these resources promote scientific discovery and expand access to astronomy for educators, students, and amateur astronomers.
Communication around the production of astronomical images often focuses on the aesthetics of the final result, very often using attractive (and ultimately unrealistic) colours, leading people to believe that large, powerful observation instruments are absolutely essential, as are advanced software skills for observing the deep sky, the Moon, and the Sun. In this article, we propose instead an approach and tools to show what can be achieved with modern equipment and dedicated techniques.
The present paper is structured as follows. In Section 2, we list existing initiatives related to astronomy outreach with broadcasting and the benefits of using AI. We present how we have collected raw data in Section 3.1. We show how we have processed the images with dedicated AI models in Section 3.2. In Section 3.3, we detail the process to obtain movies from the raw and AI-processed images. We detail how we have published all the content on a YouTube channel in Section 3.4. Finally, we discuss the results in Section 4, and we propose some perspectives in Section 5.

2. Related Work

Streaming platforms were created in the 2000s, and numerous educational initiatives dedicated to astronomy gradually began to emerge [14]. These include HubbleCast, a series of videos launched in 2007 by the European Space Agency (ESA) that showcases the discoveries and spectacular images made by the Hubble Space Telescope, with captivating animations and explanations [15]. The Chandra X-ray Observatory YouTube channel, launched in 2011, features videos that highlight the groundbreaking discoveries and stunning X-ray images captured by NASA's Chandra Observatory, offering insights into high-energy astrophysics and cosmic phenomena [16].
In addition to the communication campaigns launched by agencies such as NASA and ESA to publicise space telescopes, other initiatives have also been launched. In 2017, a team from the University of Arizona showed how it produced and maintained a YouTube channel to disseminate content around astronomy [17]: managed by students and supervised by a researcher, this channel offers a mix of explanatory videos, discussions, and demonstrations on everything related to astronomy.
In 2020, during the COVID-19 pandemic, many science outreach organizations had to adapt in order to continue disseminating astronomical knowledge, but also to try and attract a new audience. The Bochum Planetarium, for example, expanded its activities by broadcasting its content on social networks and streaming platforms [18].
Video support is also used to conduct astronomy education and outreach through regular videos and Massive Open Online Courses (MOOCs) by the Department of Astronomy of the University of Arizona [19]. Although not directly related, we can also mention original educational initiatives that aim to disseminate astronomy content using technical combinations based on sensors, projectors, and smartphones [20].
In recent years, a number of channels run by enthusiastic amateurs or semi-professional astronomers have appeared on platforms such as YouTube, Twitch, and even TikTok. For instance, we can mention DeepSkyVideos (www.youtube.com/@DeepSkyVideos (accessed on 22 June 2025)), AstroBackyard (https://www.youtube.com/@AstroBackyard (accessed on 22 June 2025)), NebulaPhotos (https://www.youtube.com/@NebulaPhotos (accessed on 22 June 2025)), and AstroCanuck (https://m.twitch.tv/astrocanuck/ (accessed on 22 June 2025)). AstroBackyard offers deep-sky imaging tutorials, gear reviews, and advanced post-processing techniques. NebulaPhotos focuses on accessible setups with DSLR cameras, providing simple and creative guides. AstroCanuck combines astrophotography, space exploration, and humor, making the cosmos approachable and inspiring for all. Other channels, like Pompey Observatory (https://www.youtube.com/@PompeyObservatory (accessed on 22 June 2025)), are devoted to broadcasting live feeds directly from classical setups (telescopes and cameras) and smart telescopes.
Beyond the obvious interest of sharing scientific content as videos, let us also mention that studies have been carried out to determine the factors that make it possible to make this content visible or even popular on streaming platforms like YouTube [21]. Generally speaking, it can be seen as a good complement to academic sharing platforms such as Zenodo [22].
Regarding the application of AI in observational astronomy, it should be noted that it is mainly used in the context of scientific research, which makes perfect sense given the huge amount of data collected by ground-based and space observatories [23]. To our knowledge, only one YouTube channel uses Deep Learning (DL) to enhance its observations, in particular using a dedicated DL model to depict the Moon with more vivid colouring (https://www.youtube.com/@alainpaillou29 (accessed on 22 June 2025)).
The following sections detail how we have collected and generated content for a YouTube channel dedicated to presenting astronomical observations with smart telescopes, enriched with reproducible AI models, and accessible to amateurs and professionals.

3. Methodology

This paper presents a methodology for collecting, processing, annotating, describing, and publishing data from EAA (Electronically Assisted Astronomy) sessions on YouTube, with the goal of providing a realistic and faithful account of what can be observed in the night sky. Given that this process demands significant time, effort, and resources [24], a well-defined methodology coupled with efficient tools is essential to ensure that the work is efficient, consistent, and reproducible, without needing to reconsider each step along the way. To develop this approach, we customized and combined different CV and DL methods, and we drew inspiration from best practices in observational astronomy, as well as from lessons learned in other fields involving video dissemination, such as biodiversity monitoring [25].

3.1. Astronomical Image Collection

As part of research activities for the space domain at the Luxembourg Institute of Science and Technology (LIST), we have been regularly collecting astronomical images using smart telescopes: Stellina (from 2021), Vespera (from 2022), and Hestia (from 2024), all manufactured by Vaonis (France, https://vaonis.com (accessed on 22 June 2025)). Available on the market for the general public as well as for professionals, Stellina and Vespera automate all the tedious stages (initialization, focus, and tracking) so that astronomers can concentrate solely on acquiring data when the weather conditions allow. Since 2024, the more recent Hestia device has enabled wide-field astronomical pictures to be taken using a smartphone's sensor.
The observation sessions were carried out from Luxembourg, France, and Belgium, in dark places to avoid the light pollution produced by public or private lighting (Figure 1). Given that the meteorological conditions have been particularly poor since 2024, data capture has been a real challenge: every hour of clear sky in the evening or at night has been exploited to observe the starry sky. Part of the data acquisition process is presented in [12,26].
After observation sessions, collected data takes various forms:
  • The raw images come directly from the sensor (Stellina and Vespera), with an exposure time of 10 s. These images are stored in 16-bit FITS format, and they are linear and unstretched, meaning pixel values directly reflect light intensity. They often appear dark or featureless until contrast stretching reveals details while preserving intensity relationships.
  • The sequences of images are pre-processed by the smart telescopes throughout the capture process (Stellina and Vespera) in 8-bit JPEG format; these RGB pictures are the results of the alignment/registration, stacking, and stretching of raw images (Algorithm 1)—leading to a sequence of images showing the evolution of the process: first, a single raw image, then the result after stacking two raw images, and finally the result after stacking N images, with progressive signal enhancement. In general, an observation session corresponds to the capture of several hundred frames (between 100 and 600). For star clusters and planetary nebulae, stacking around 100 images is often sufficient due to their high brightness. In the case of galaxies, at least 200 stacked images are recommended, with more required for fainter objects. To optimally capture emission or reflection nebulae, stacking the highest possible number of images is advised.
  • Snapshots of the Moon, the Sun, and brighter DSOs like the Orion Nebula or Pleiades star cluster (Hestia). These images are obtained after short-exposure captures (a maximum of a few hundred milliseconds), and stored as 8-bit JPEG files.
Algorithm 1: Pseudo-code for the alignment, stacking, and stretching of deep-sky images.
  Data: Raw images I_1, I_2, ..., I_N
  Result: Stacked and stretched image I_stacked
  Select reference image I_ref
  for each image I_i do
     Compute and apply the transformation that aligns I_i with I_ref
  end
  Compute I_stacked as the median or weighted mean of the aligned images
  Apply a histogram stretch on I_stacked
  return I_stacked
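To make Algorithm 1 concrete, the following minimal Python sketch aligns, stacks, and stretches a folder of raw FITS frames. It assumes that a pure-translation alignment (phase cross-correlation) is sufficient and that a simple percentile stretch is acceptable; the file names, percentiles, and choice of median stacking are illustrative only and do not reproduce the proprietary pipeline embedded in the smart telescopes.

```python
# Minimal sketch of Algorithm 1 (align, stack, stretch); parameters are illustrative.
import glob

import numpy as np
from astropy.io import fits
from scipy.ndimage import shift as translate
from skimage.registration import phase_cross_correlation


def align_stack_stretch(pattern="M31-20240501-*.fits"):
    frames = [fits.getdata(path).astype(np.float32) for path in sorted(glob.glob(pattern))]
    reference = frames[0]  # select the reference image I_ref

    aligned = []
    for frame in frames:
        # Compute the translation aligning the frame with the reference, then apply it
        offset, _, _ = phase_cross_correlation(reference, frame)
        aligned.append(translate(frame, offset))

    # Median stacking reduces noise and rejects outliers (hot pixels, satellite trails)
    stacked = np.median(np.stack(aligned), axis=0)

    # Percentile-based histogram stretch to reveal faint structures
    low, high = np.percentile(stacked, (1.0, 99.8))
    return np.clip((stacked - low) / (high - low), 0.0, 1.0)


if __name__ == "__main__":
    result = align_stack_stretch()
    print(result.shape)
```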
The resolution (in pixels) of the captured data depends on the observed target: rather small for planets (around 1M pixels, because the smart telescopes used have small diameters and short focal lengths, and therefore little possibility of capturing detail), high resolution for the Moon and the Sun (around 6M pixels), and potentially very high resolution for DSOs (up to 20M pixels for the mosaics produced on nebulae of large apparent size).
In practice, the outputs of each observation session have been named with the target identifier and the date, for instance, Moon-20231101 or M31-20240501. A subset of raw deep-sky images has been published in several open archives [12]. Another subset of solar images has also been shared in a data repository [26].
As of 15 May 2025, we have obtained a dataset containing 299 MP4 videos of different resolutions, with an average duration of just over one minute each (shorts obviously bring the average down as they last less than 30 s).
The targets shown in the videos are varied:
  • DSOs identified in established astronomical catalogs [27]: Messier, New General Catalog (NGC), Index Catalog (IC), Sharpless, and Barnard. Different types of targets are listed: galaxies, nebulae, open clusters, globular clusters, planetary nebulae, supernova remnants, etc.
  • The Sun (obtained with a dedicated solar filter), allowing solar activity to be captured with sunspots.
  • Planets (Venus, Mars, Jupiter, Saturn, Uranus, and Neptune): Diameters of smart telescopes are small, so the images are only detailed for Jupiter and Saturn.
  • The Moon, allowing craters and other lunar relief to be observed differently depending on the Moon phases: we even had the opportunity to produce and publish a video on a partial eclipse of the Moon (18 September 2024 https://tinyurl.com/partialMoonEclipse2024 (accessed on 22 June 2025)).
  • Transient objects like Comets (12P/Pons-Brooks, C/2023 A3 Tsuchinshan-ATLAS): These targets can generally be difficult to observe (position in the sky but also limited period of visibility).
In terms of celestial targets covered, to date there have been the following:
  • 105 videos of galaxies and galaxy groups.
  • 75 videos of nebulae, HII regions, or supernova remnants.
  • 47 videos of open or globular clusters.
  • 32 videos of the Sun.
  • 21 videos of the Moon.
  • 15 videos of planets.
  • 4 videos of comets.

3.2. Image Processing with AI

Space agencies (NASA, ESA, etc.) may apply heavy image processing to astronomical photos in order to reveal faint details invisible to the naked eye or even raw sensors. This includes enhancing contrast and assigning artificial colours to different wavelengths, sometimes beyond the visible spectrum [28]; while scientifically valuable, these enhancements also serve aesthetic and communicative purposes, helping to captivate the public and highlight subtle structures in data. However, this can create a gap between expectation and reality, and recent Generative AI techniques can attempt to create images that are not even obtained from real data [29]: the final images may look surreal or overly artistic, making the cosmos feel less accessible or authentic to those unfamiliar with the underlying techniques.
In this work, we applied various AI methods to provide a different view of the targets observed, by slightly transforming the initial images captured with smart telescopes, in order to remain close and faithful to original data:
  • Highlighting DSOs in deep-sky views: As stars are often much more visible and contrasted than DSOs in astronomical images, we used a method to hide the stars and make the DSOs more visible.
  • Annotation of deep-sky views: To show where the targets of interest are in the images, we applied another method that detects and shows the position of the DSOs present and visible in the images, ignoring the stars.
  • Sunspot annotation: As smart telescopes can help to observe sunspots on solar images, a detection method has been used to highlight the ones visible in the images.
The next paragraphs provide details about the algorithms used to process the images, with a comparative study for each task.

3.2.1. Highlight of DSO in Deep-Sky Images

The first approach involved processing the images by isolating only the deep-sky object (DSO) in order to enhance its visibility. Stars often dominated the image, making it difficult to discern faint structures such as galaxy arms or diffuse nebulae.
While this processing could serve various tasks—such as star–galaxy classification or satellite trail detection [30]—we have applied the technique primarily for visualization purposes.
In practice, traditional CV methods may help to remove stars from astronomical images using thresholding, morphological operations, and inpainting (e.g., median or Gaussian filtering). Diffusion methods initially proposed for grayscale images can also be applied [30]. While effective for isolated stars on smooth backgrounds, they struggle with data from smart telescopes, where imperfect tracking can cause star trails: in such cases, these methods often leave artifacts.
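To make this classical baseline concrete, the minimal sketch below applies the thresholding, morphological dilation, and inpainting steps with OpenCV; the background estimate, threshold value, and kernel size are illustrative assumptions that would need tuning for real smart-telescope frames.

```python
# Sketch of a classical (non-DL) star-removal baseline: threshold, dilate, inpaint.
# Numeric parameters are illustrative and must be tuned per image.
import cv2


def remove_stars_classical(path="M8-20230706.jpg"):
    image = cv2.imread(path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Stars stand out as small bright residuals above a heavily smoothed background
    background = cv2.medianBlur(gray, 31)
    residual = cv2.subtract(gray, background)
    _, mask = cv2.threshold(residual, 30, 255, cv2.THRESH_BINARY)

    # Grow the mask slightly so that star halos are covered as well
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.dilate(mask, kernel, iterations=2)

    # Fill the masked pixels from their surroundings
    return cv2.inpaint(image, mask, 5, cv2.INPAINT_TELEA)


if __name__ == "__main__":
    cv2.imwrite("M8-20230706-starless.jpg", remove_stars_classical())
```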
Thus, we have tested and compared two existing DL models. Firstly, we applied Starnet (2.0.0) https://www.starnetastro.com (accessed on 22 June 2025), a widely used software among astrophotography enthusiasts. This tool employs a supervised DL model based on Generative Adversarial Networks (GANs) to effectively remove point sources from images, thereby producing a remarkable visual effect, particularly evident in the case of nebulae and galaxies. To check the efficiency of Starnet for our data, we selected a representative set of 2692 patches of 256 × 256 pixels obtained from images described in Section 3.1; we executed the StarNet model and we checked if it was able to remove point sources. To this end, we used the photutils (2.2.0) Python package, a tool for point source detection in astronomical images [31].
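This verification step can be sketched as follows: point sources are counted on a patch before and after star removal, and the patch is considered successfully cleaned when no detection remains. The FWHM and detection threshold passed to DAOStarFinder below are illustrative assumptions.

```python
# Sketch of the point-source check performed with photutils on 256x256 patches.
# FWHM and threshold values are illustrative assumptions.
import numpy as np
from astropy.stats import sigma_clipped_stats
from photutils.detection import DAOStarFinder


def count_point_sources(patch: np.ndarray) -> int:
    """Return the number of point sources detected in a grayscale patch."""
    _, median, std = sigma_clipped_stats(patch, sigma=3.0)
    finder = DAOStarFinder(fwhm=3.0, threshold=5.0 * std)
    sources = finder(patch - median)
    return 0 if sources is None else len(sources)


# Usage sketch: star removal is considered successful on a patch when
# count_point_sources(starless_patch) drops to zero while
# count_point_sources(original_patch) was strictly positive.
```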
Overall, the method proved to be effective (Figure 2), with the only drawback being the suppression of small galaxies that look like noisy artefacts generated by the sensor. In addition, the StarNet models were trained on synthetic data, which could raise doubts when applied to specific real data (high noise, sky background with undesired gradient, etc.).
Secondly, we have also tested nox (1.0.0), a Python package based on the same neural architecture, but trained on real data https://github.com/charvey2718/nox (accessed on 22 June 2025). In practice, the results are quite similar, sometimes better when the data is very noisy, or for managing stars with very strong light halos.
In both cases, applying the AI models to a sequence of images generated iteratively using Algorithm 1 made the impact of stacking visually apparent, by reducing noise and enhancing the visibility of the underlying signal.  Figure 3 illustrates the results of a sequence of observations of the Lagoon Nebula, obtained with a Vespera smart telescope (6 July 2023).

3.2.2. Annotation of Deep-Sky Images

Another approach we employed to emphasize the content of the observed images was through annotation: to automatically identify the elements present, we adapted dedicated detection techniques to this specific context.
DSO detection can usually be performed using astrometry, but this process requires images to be analysed using the coordinates of the part of the sky photographed and a dedicated database with all known DSOs and their celestial coordinates [32].
To enable a DSO detection process that uses images exclusively as input, we have trained DL models to outline galaxies, nebulae, and other deep-sky targets, while ignoring point sources. More precisely, we have used YOLO (You Only Look Once), a real-time object detection algorithm that analyses an entire image in a single pass, identifying and localizing multiple objects simultaneously with high speed and accuracy.
By using DeepSpaceYoloDataset, the annotated dataset described and published in [33], we have compared the YOLOv7 tiny model introduced in [34] to recent versions of the YOLO architecture, in particular versions 8 and 11, with different sizes (nano, medium, and extra-large). We used the official YOLOv7 Python package (0.1) (https://github.com/WongKinYiu/yolov7 (accessed on 22 June 2025)) and the Ultralytics Python package (8.3.95) (https://pypi.org/project/ultralytics/ (accessed on 22 June 2025)) without specific customization. The YOLOv7 implementation uses default hyperparameters such as lr0 = 0.01, momentum = 0.937, and weight_decay = 0.0005, with augmentations like mosaic = 1.0, fliplr = 0.5, and scale = 0.5. Moreover, we applied default data augmentation techniques during training, including Mosaic (combining 4 images), random affine transforms (scaling, rotation, and translation), HSV augmentation (colour changes), and horizontal flipping. YOLOv8, from Ultralytics, follows similar defaults but with slight adjustments and improved augmentation handling. YOLOv11 extends YOLOv8 and includes automatic tuning over hyperparameter ranges to optimize training dynamically. For training, validation, and testing, we reshuffled the initial dataset to obtain 333 images for validation and 222 images for testing. The training was performed for 400 epochs, on 128 GB RAM Intel(R) Xeon(R) Silver 4210 @ 2.20 GHz cores and NVIDIA Tesla V100-PCIE-32GB GPUs; Figure 4 illustrates the training of the YOLO8m model.
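For reference, the YOLO8/YOLO11 runs follow the standard Ultralytics training API sketched below; the dataset descriptor name, image size, and batch size are assumptions (the package defaults described above were otherwise kept).

```python
# Sketch of training and evaluating a YOLO detector with the Ultralytics package (8.3.x).
# "deepspaceyolo.yaml" is an assumed dataset descriptor pointing to the reshuffled
# train/validation/test splits of DeepSpaceYoloDataset, with a single "DSO" class.
from ultralytics import YOLO

model = YOLO("yolo11n.pt")          # pretrained checkpoint; e.g. "yolov8m.pt" for YOLO8m
model.train(
    data="deepspaceyolo.yaml",
    epochs=400,
    imgsz=608,                      # assumed input size
    batch=16,                       # assumed batch size
    device=0,                       # GPU index
)

metrics = model.val(split="test")   # precision, recall, mAP@50, mAP@50-95 on the test split
print(metrics.box.map50, metrics.box.map)
```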
In Table 1, we have used the following evaluation metrics for each trained model:
precision = TP / (TP + FP)
recall = TP / (TP + FN)
mAP = Σ_n (R_n - R_{n-1}) P_n
where TP, FP, and FN are the numbers of true positives, false positives, and false negatives, and P_n and R_n denote the precision and recall at the n-th confidence threshold.
The evaluation results highlight significant improvements over the original YOLOv7tiny model, which served as the baseline (Table 1). While YOLOv7tiny achieved a mAP@50 of 0.639 and a mAP@50-95 of 0.442, all custom YOLO11 and YOLO8 models surpassed it across these metrics. Notably, YOLO11n, despite having less than half the parameters, outperformed YOLOv7tiny with a mAP@50 of 0.742 and a mAP@50-95 of 0.569. The most precise model, YOLO8x, achieved a precision of 0.854, but at the cost of lower recall. These results demonstrate that the newly trained models, particularly the YOLO11 variants, offer a better trade-off between performance and model size, confirming the value of adapting architectures specifically for this use case.
Using YOLO models on the images presented in Section 3.1 allows us to see how detection evolves on a sequence of images obtained after aligning and stacking 1, 2, …, N raw images (Algorithm 1): as expected, stacking progressively improves the final image and the visibility of the targeted DSO. Figure 5 illustrates the results on a sequence of observations of the M64 galaxy, obtained with a Vespera smart telescope.

3.2.3. Annotation of Solar Images

Solar images are already beautiful to look at, but it is interesting to be able to highlight the most intriguing features to observe (which change every day): sunspots. Appearing as dark areas of varying sizes, they are indicators of intense disturbances in the Sun's magnetic field. They are continuously studied to better understand solar activity cycles. Regularly monitored by amateur astronomers as well, sunspots served as an early warning in May 2024 for a solar storm that produced spectacular northern lights visible at unusually low latitudes (https://en.wikipedia.org/wiki/May_2024_solar_storms (accessed on 22 June 2025)).
Thus, dedicated YOLO models have been customized to detect and localize sunspots in images captured by smart telescopes (solar data collection requiring the use of a specific solar filter) [26].
Even if this task is already performed in real time by spacecraft and ground-based observatories as part of the study of Space Weather [35], the idea here is rather to highlight the level of detail that can be captured in solar images obtained with smart telescopes.
As explained in Section 3.1, solar images are obtained here with exposures of a few hundred milliseconds, and atmospheric disturbances (like weather conditions) can alter the view of the Sun obtained. The YOLO model enables sunspots to be detected while dealing with potential disturbances (clouds, haze, and even tree branches if the observer’s view is obstructed by trees).
In the same way as previously described for DSOs, we have trained YOLO models using SunspotsYoloDataset, an annotated collection of smart telescope images dedicated to sunspot detection [26]. The images were carefully annotated to ensure that only real sunspots were labelled, despite challenges such as varying image quality, lighting conditions, the resolution of the telescopes, and undesired artifacts like tree branches when the Sun was low on the visible horizon.
According to Table 2, the results on the SunspotsYoloDataset test set (128 images) show that the original YOLOv7tiny model remains the top performer, achieving the highest recall (0.920), mAP@50 (0.915), and mAP@50-95 (0.728) among all models. While several custom models such as YOLO8n and YOLO11n come close in terms of precision and mAP@50, none surpass YOLOv7tiny overall. Notably, YOLO8n demonstrated a strong balance with high recall (0.802) and the highest mAP@50 among the custom models (0.849), despite having only half the parameters of YOLOv7tiny. These findings suggest that although the custom architectures are competitive and more lightweight, the original YOLOv7tiny remains best suited for sunspot detection in this dataset.
Applying these models to successive images of the Sun shows that the apparent movement of sunspots is too slow to be observed in real time, and also how the model's detections can be disturbed by external elements (bad weather conditions, as in Figure 6, or obstacles in the field of view). Adding annotations is useful for quickly showing where the dark spots are on the image (even when they are small), which helps viewers who do not know how to identify them.
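As an illustration of how these detections are produced before being compiled into videos, the minimal inference sketch below annotates a single solar frame with a trained model; the weights file, input image, and confidence threshold are assumptions.

```python
# Sketch of sunspot detection on a single solar frame with a trained YOLO model.
# Weights path, input file, and confidence threshold are illustrative.
import cv2
from ultralytics import YOLO

model = YOLO("sunspots_yolo.pt")                     # assumed trained weights
results = model.predict("Sun-20230616.jpg", conf=0.25)

annotated = results[0].plot()                        # image with bounding boxes drawn
cv2.imwrite("Sun-20230616-yolo.jpg", annotated)
print(len(results[0].boxes), "sunspots detected")
```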

3.2.4. Application of AI Models

We did not systematically apply the models to all the captured data described in Section 3.1, but we did so when the original images were of interest and could benefit from further processing (e.g., cleaning images with important noise, highlighting faint nebulae by removing stars, detecting small galaxies in large images, and detecting groups of sunspots in solar images).
In practice, the transformed images have been named with the target identifier, the date, and the used algorithm type, for instance, Sun-20230613-yolo or M33-20240901-yolo.

3.3. Video Generation

Based on the different building blocks that have been presented in previous sections, videos have been generated from raw and AI-processed images by using FFmpeg [36]. FFmpeg offers powerful benefits for video generation, including efficient compression, high-quality output, and extensive format support, making it ideal for handling complex video processing tasks. It enables precise control over video and audio parameters, allowing for advanced editing, filtering, and encoding/decoding.
More precisely, a specific workflow has been defined to obtain MP4 movies from the sequences of images (Figure 7), such that some of them can be generated as Shorts, a popular format [37]. A Short is a concise, vertical video designed for quick consumption and optimized for mobile platforms. Shorts are typically characterized by their limited duration (usually under 60 s) and are produced in a vertical aspect ratio (9:16) to maximize compatibility with mobile screens. They are increasingly employed in educational and scientific communication to highlight key insights, share findings, or provide accessible overviews of complex topics, leveraging technical features to reach broader and more diverse audiences.
The following FFmpeg operations have been used (a representative sketch of the corresponding commands is given after the list):
  • Generate an MP4 video from a sequence of images.
  • Add padding to an MP4 video (useful for low-resolution planetary data).
  • Rotate an MP4 video (‘Short’ movies require a vertical format).
  • Speed up an MP4 video.
  • Slow down an MP4 video.
  • Apply an animated, fluid zoom during an MP4 video.
  • Crop an MP4 video.
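Since the exact commands appear as figures in the published version, the sketch below reproduces representative FFmpeg invocations (wrapped in Python subprocess calls for consistency with the other examples); file names, frame rates, dimensions, and filter parameters are illustrative assumptions.

```python
# Representative FFmpeg invocations for the video-generation workflow.
# File names, frame rates, dimensions, and filter parameters are illustrative.
import subprocess


def run(command):
    subprocess.run(command, check=True)


# Generate an MP4 video from a numbered sequence of JPEG images
run(["ffmpeg", "-y", "-framerate", "10", "-i", "M31-20240501-%04d.jpg",
     "-c:v", "libx264", "-pix_fmt", "yuv420p", "m31.mp4"])

# Pad a low-resolution planetary video onto a centred 1920x1080 canvas
run(["ffmpeg", "-y", "-i", "jupiter.mp4",
     "-vf", "pad=1920:1080:(ow-iw)/2:(oh-ih)/2", "jupiter_padded.mp4"])

# Rotate to a vertical orientation for a Short
run(["ffmpeg", "-y", "-i", "m31.mp4", "-vf", "transpose=1", "m31_short.mp4"])

# Speed up (x2) or slow down (x0.5) the footage
run(["ffmpeg", "-y", "-i", "m31.mp4", "-filter:v", "setpts=0.5*PTS", "-an", "m31_fast.mp4"])
run(["ffmpeg", "-y", "-i", "m31.mp4", "-filter:v", "setpts=2.0*PTS", "-an", "m31_slow.mp4"])

# Crop the central 1080x1920 region of a larger frame
run(["ffmpeg", "-y", "-i", "m31.mp4",
     "-vf", "crop=1080:1920:(iw-1080)/2:(ih-1920)/2", "m31_crop.mp4"])

# The animated zoom relies on FFmpeg's zoompan filter; its expression syntax is omitted here.
```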
Using these commands, we can generate videos of the various observation sessions that we feel are of interest, making sure that they are not long (for example, an observation session lasting several hours is compiled into videos lasting a few minutes, or even a few seconds depending on the case). Apart from exceptional cases, we have ignored observation sessions with poor results (for example, due to weather conditions such as clouds, wind, or even rain).

3.4. Video Upload on YouTube

Videos were then shared over time on YouTube (Figure 8) by using a dedicated tool called YouTubeStudio (https://studio.youtube.com/ (accessed on 22 June 2025)). This tool allows the MP4 files to be uploaded, but also metadata to be added and published to describe the content. As shown in a recent study [38], detailed metadata are essential for improving the visibility of a video.
In practice, each published movie is described with a title and a description containing the following metadata (an illustrative example is sketched after the list):
  • Identifier (DSO only): It could be the Messier code, NGC code, Sharpless, IC code, or Barnard code of the celestial target.
  • Nickname if any (DSO only): Some targets have a common name used by astronomers [39] (example: Wild Duck Cluster for Messier 11).
  • Type (DSO only): It could be an emission nebula, reflection nebula, planetary nebula, galaxy, open cluster, globular cluster, supernova remnant, etc.
  • Date of the observation (example: 14 November 2024): Most of the time, videos were not immediately produced, processed, and published after observation—so the date of data capture is always specified.
  • Telescope used for the observation (example: Stellina, Vespera, or Hestia): For Hestia, the smartphone model is provided.
  • Sensor gain in dB: This parameter is used to set the sensitivity of the sensor (example: 20 dB).
  • Exposure time in seconds for each sub-frame (example: ten seconds).
  • Total count of subframes (example: 200 subframes).
  • Specific capture mode (if any): Automated dithering and/or mosaic mode can be enabled.
  • Filter if any (DSO only): A filter can be useful according to the target (example: dual-band filter for an emission nebula, and solar filter for the Sun).
  • Post-processing actions if any (example: images annotated with the YOLO model).
  • Link to a reference paper (example: about data acquisition, data processing, etc.).
  • Hashtags: They help categorize content, improve searchability, and increase visibility by linking videos to related topics (example: “#ngc2174 #monkeyhead #stars #nebula #stellina #telescope”).
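To make the scheme concrete, the sketch below assembles a description from these fields for the NGC2174 example cited in Section 4; the helper itself is hypothetical (not part of the published tooling), and the field values only reuse examples given in the text.

```python
# Hypothetical helper assembling a YouTube description from the metadata fields above.
video_metadata = {
    "identifier": "NGC2174",
    "nickname": "Monkey Head Nebula",
    "type": "Emission nebula",
    "observation_date": "2 February 2025",
    "telescope": "Stellina",
    "gain_db": 20,
    "exposure_s": 10,
    "subframes": 200,
    "capture_mode": "mosaic",
    "filter": "dual-band",
    "post_processing": "images annotated with the YOLO model",
    "reference": "see the data acquisition paper [12]",
    "hashtags": "#ngc2174 #monkeyhead #stars #nebula #stellina #telescope",
}


def build_description(meta):
    return "\n".join([
        f"{meta['identifier']} ({meta['nickname']}) - {meta['type']}",
        f"Observed on {meta['observation_date']} with a {meta['telescope']} smart telescope.",
        f"Gain: {meta['gain_db']} dB, exposure: {meta['exposure_s']} s, "
        f"{meta['subframes']} subframes, capture mode: {meta['capture_mode']}, "
        f"filter: {meta['filter']}.",
        f"Post-processing: {meta['post_processing']}.",
        f"Reference: {meta['reference']}",
        meta["hashtags"],
    ])


print(build_description(video_metadata))
```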
When generating and publishing the videos, we alternate between standard and Short formats, having observed that the latter generally have a greater impact in terms of visibility for YouTube users.

4. Discussion

The methodology described in this article has allowed us to consistently produce and publish a significant number of videos from 2023 to 2025, enabling efficient and timely content creation and dissemination. For a given video, the steps are as follows:
  • Weather conditions permitting, data capture can be carried out quickly with the smart telescopes: allowing for five minutes of automatic initialization of the instruments after they are switched on, it may take a few tens of minutes to obtain several hundred images for a given target (after 200 images with a unit exposure time of 10 s, we generally have a correct stacked image, but this depends on the magnitude of the DSO). For the Moon and the Sun, only a few minutes are needed (the exposure time is a few hundred milliseconds).
  • Captured data can also be retrieved very quickly from the smart telescopes via FTP access or a USB key.
  • Processing with one or more AI models can take just a few milliseconds per image, even on a low-performance laptop: inference with introduced YOLO models is fast; the same is true for models to remove stars.
  • Video generation with FFmpeg can take a few minutes: only the command creating a zoom effect is a bit slow, as it generates intermediate frames to make the effect smooth.
  • Publishing on YouTube finally takes a few minutes, in order to fill in the metadata in the form provided by YouTubeStudio.
One way of assessing the YouTube channel's impact is to measure the audience: on 15 May 2025, there were 380 subscribers and around 195,000 views (Figure 9). In total, 20 standard videos have more than 1000 views, 20 Short videos have more than 1500 views, and the best of them, ‘Live observation of Great Pegasus Cluster (M15) with a Vespera smart telescope (15 September 2023)’, has almost 19,000 views (https://tinyurl.com/m15pegasus2024 (accessed on 22 June 2025)). To compare orders of magnitude, all the raw data we have published on Zenodo (DeepSpaceYoloDataset, SunspotsYoloDataset, etc.) had been downloaded around 6500 times as of 15 May 2025. The statistics provided by YouTubeStudio are as follows:
  • In total, 23% of viewers are under 24, 45% are between 25 and 44, and 22% are over 45.
  • In terms of geographical coverage, the videos were viewed from 30 different countries, mainly India (22%), the United States (12%), and Russia (6%).
  • The audience is predominantly male (83%)—it can be mentioned that this bias is a specific topic of research [40,41].
In terms of content, we observed that videos about galaxies or planets are viewed more than videos about nebulae or (open/globular) star clusters: the former regularly exceed 1500 views, while the latter rarely surpass 600 views.
It is also interesting to note that in terms of visibility, the videos are well referenced on search engines, much more than if they had been archived on a platform like Zenodo. For example, on 15 May 2025, a specific search on Google with the query ngc2174 observation video returns ‘Observation of Monkey Head Nebula (NGC2174) with a Stellina portable smart telescope (2 February 2025)’ https://tinyurl.com/ngc2174 (accessed on 22 June 2025). In another case, a more generic query sun annotation video smart telescope also refers to the channel’s videos like ‘AI-powered annotation of Sunspots with a Vespera smart telescope (20 May 2024)’ https://tinyurl.com/sunYolo2024 (accessed on 22 June 2025).
What is more, we regularly receive comments from viewers on the YouTube platform, who are surprised by the results obtained with smart telescopes (smart telescopes are generally portable and therefore small in size), amazed by the fact that we can observe such distant objects, and also interested in the treatments that can be applied.
Generally speaking, the YouTube channel offers a versatile tool for educational use:
  • Introduction to Observational Astronomy: Raw videos produced with smart telescopes provide the public with an authentic view of the sky. For students and educators, this footage can be used in lessons on observing phases (and eclipses) of the Moon, watching sunspot evolution, and understanding the concept of magnitude https://en.wikipedia.org/wiki/Magnitude_(astronomy) (accessed on 22 June 2025) by observing galaxies or nebulae, providing hands-on learning without the need for physical equipment.
  • Astrophotography basics: Videos help to understand an essential principle in astronomical image capture: during an EAA session on a given target, the more data accumulated, the better the final result (better signal-to-noise ratio). These videos show how very high-quality images are gradually built up.
  • Technology in Astronomy: The use of smart telescopes allows viewers to experience advanced observational technology first-hand. The AI-enhanced videos demonstrate how AI can process observational data, enhancing the visibility of faint or transient phenomena and introducing students to real-world applications of AI in science.
  • Skill Development in Data Analysis: The availability of both raw and processed videos provides an opportunity to learn data analysis techniques. Students can explore the raw footage to make independent observations and then use AI-processed videos as a reference, deepening their understanding of how data processing affects interpretation.
  • Inspiration for STEM Careers: Exposure to real observational data, captured using smart technology and enhanced with AI, can inspire students to pursue careers in astronomy, data science, and engineering. Seeing the integration of smart telescopes and AI in practice highlights the tools available to today’s scientists.
Even though we have chosen to publish content without embedded text or sound, it may be interesting to add contextual information to the videos, in particular when specific processing has been applied to the images; recent studies have shown that this can help to improve the viewer's experience [42]. A promising direction would be to add automatic, content-based transcriptions; however, to the best of our knowledge, no existing approach currently achieves this for astronomical videos.
An important point to consider is data availability: publishing on YouTube does not offer the same guarantees of long-term preservation and accessibility as publishing on an open archive like Zenodo, which provides stable citation, a DOI, and long-term file preservation. However, these two platforms are complementary because they do not reach the same audience. As a result, we have started to share movies processed with YOLO models on a dedicated Zenodo repository [43], and we plan to add other movies too on a regular basis.

5. Conclusions and Perspectives

This paper presents methods and tools to collect, compile, process, and publish astronomical observations made during EAA sessions on a YouTube channel. The original images were captured with various smart telescopes over several years, the metadata describing the content has been clearly defined to allow rapid searching, and a subset of images has been post-processed with dedicated CV and AI algorithms.
The resulting YouTube channel offers more than a collection of videos; it is an accessible resource for education, public engagement, and scientific exploration. By combining raw, smart telescope-captured observations with AI-enhanced visuals, the channel provides a unique lens on the night sky, supporting both traditional observational learning and modern data analysis, and inspiring a broader appreciation of astronomy. In practice, this makes the videos more accessible and easier for the non-academic public to find than if they were only stored on open platforms like Zenodo.
Moreover, the approach can be replicated and improved for astronomy as well as for other research topics (monitoring of natural environments, observation of animals, etc.).
In future work, we plan to continue adding new videos, and we will consider embedding contextual information about the sequences (such as time, location, and data about the targets observed) directly into the videos. We will also work on other image processing approaches to enable observations to be enriched effectively.

Funding

This research was funded by the Luxembourg National Research Fund (FNR), grant reference 15872557. For the purpose of open access, and in fulfilment of the obligations arising from the grant agreement, the author has applied a Creative Commons Attribution 4.0 International (CC BY 4.0) license to any Author Accepted Manuscript version arising from this submission.

Data Availability Statement

Videos processed with YOLO models are available on a specific repository (https://zenodo.org/records/15676200 (accessed on 22 June 2025)). The YouTube channel can be found here: https://www.youtube.com/@doopyon (accessed on 22 June 2025). Videos and additional materials used to support the results of this paper are available from the corresponding author upon request.

Acknowledgments

Data storage, data processing, and detection model training were conducted on the LIST Artificial Intelligence and Data Analytics platform (40 cores with 128 GB RAM Intel(R) Xeon(R) Silver 4210 @ 2.20 GHz for CPU, NVIDIA Tesla V100-PCIE-32GB for GPU), with the assistance of Jean-François Merche and Raynald Jadoul. We have also used the MeluXina HPC infrastructure provided by LuxProvide in order to use multi-GPU nodes equipped with NVIDIA A100-40GB cards.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI      Artificial Intelligence
CV      Computer Vision
DSOs    Deep-Sky Objects
DL      Deep Learning
EAA     Electronically Assisted Astronomy
GANs    Generative Adversarial Networks
YOLO    You Only Look Once

References

  1. Foster, D. Factors influencing the popularity of YouTube videos and users’ decisions to watch them. Ph.D. Thesis, University of Wolverhampton, West Midlands, UK, 2020. [Google Scholar]
  2. Welbourne, D.J.; Grant, W.J. Science communication on YouTube: Factors that affect channel and video popularity. Public Underst. Sci. 2016, 25, 706–718. [Google Scholar]
  3. Radin, A.G.; Light, C.J. TikTok: An emergent opportunity for teaching and learning science communication online. J. Microbiol. Biol. Educ. 2022, 23, e00236–21. [Google Scholar]
  4. Pasquel-López, C.; Valerio-Ureña, G. EduTubers’s pedagogical best practices and their theoretical foundation. Informatics 2022, 9, 84. [Google Scholar]
  5. Ormston, T.; Denis, M.; Scuka, D.; Griebel, H. An ordinary camera in an extraordinary location: Outreach with the Mars Webcam. Acta Astronaut. 2011, 69, 703–713. [Google Scholar]
  6. Wishnak, S.; Cook, M.; Moran, K.; Fiely, J.; Lubetkin, M.; Ballard, E.; Dapcevich, M.; Fundis, A. Bringing The Ocean to A Remote Learning World. Oceanography 2021, 34, 20–23. [Google Scholar]
  7. Reed, J.; Beames, S.K.; Macleod, G. Young people’s networked constructions of nature: Evidence from a qualitative multiple case study in the United Kingdom. People Nat. 2025, 7, 463–474. [Google Scholar] [CrossRef]
  8. Brennan, E.B. Why should scientists be on YouTube? It’s all about bamboo, oil and ice cream. Front. Commun. 2021, 6, 586297. [Google Scholar] [CrossRef]
  9. Ashley, J. Video Astronomy on the Go; Springer International Publishing: Cham, Switzerland, 2017. [Google Scholar] [CrossRef]
  10. Azevedo, F.S.; Mann, M.J. Seeing in the dark: Embodied cognition in amateur astronomy practice. J. Learn. Sci. 2018, 27, 89–136. [Google Scholar] [CrossRef]
  11. Buxner, S.R.; Fitzgerald, M.T.; Freed, R.M. Amateur astronomy: Engaging the public in astronomy through exploration, outreach, and research. Space Sci. Public Engagem. 2021, 2021, 143–168. [Google Scholar]
  12. Parisot, O.; Hitzelberger, P.; Bruneau, P.; Krebs, G.; Destruel, C.; Vandame, B. MILAN Sky Survey, a dataset of raw deep sky images captured during one year with a Stellina automated telescope. Data Brief 2023, 48, 109133. [Google Scholar] [CrossRef]
  13. Parisot, O.; Bruneau, P.; Hitzelberger, P.; Krebs, G.; Destruel, C. Improving Accessibility for Deep Sky Observation. ERCIM News 2022, 21, 268. [Google Scholar]
  14. Shida, R.Y.; Gater, W. I Tune, You Tube, We Rule. Commun. Astron. Public J. 2007, 1, 30. [Google Scholar]
  15. Christensen, L.L.; Kornmesser, M.; Shida, R.Y.; Gater, W.; Liske, J. The Hubblecast—The world’s first full HD video podcast? In Proceedings of the IAU/National Observatory of Athens/ESA/ESO Conference, Athens, Greece, 8–11 October 2007. [Google Scholar]
  16. Wilkes, B.J.; Tananbaum, H. The Chandra X-ray Observatory. In Handbook of X-Ray and Gamma-Ray Astrophysics; Springer: Berlin/Heidelberg, Germany, 2024; pp. 1115–1147. [Google Scholar]
  17. Austin, C.; Calahan, J.; Resi Baucco, A.; Bullivant, C.W.; Eckley, R.; Ekstrom, W.H.; Fitzpatrick, M.R.; Genovese, T.F.; Impey, C.D.; Libby, K.; et al. Active Galactic Videos: A YouTube Channel for Astronomy Education and Outreach. Am. Astron. Soc. Meet. Abstr. 2017, 229, 335.04. [Google Scholar]
  18. Christoph, J.; Hüttemeister, S. ‘Planetarium@home’: Digital Astronomy Outreach During the Covid-19 Pandemic. Commun. Astron. Public J. 2021, 30, 38. [Google Scholar]
  19. Impey, C.; Pereira, V.; Danehy, A.; Wenger, M. Video as a Vehicle for Astronomy Education and Outreach. Astron. Educ. J. 2023, 3, 052resana-1–052resana-15. [Google Scholar] [CrossRef]
  20. Chastenay, P. Suggested classroom activities to promote perspective-taking in astronomy by projecting images from a phone or tablet up unto a screen. Astron. Educ. J. (AEJ) 2023, 3, 042aep-1–042aep-11. [Google Scholar]
  21. Velho, R.M.; Mendes, A.M.F.; Azevedo, C.L.N. Communicating science with YouTube videos: How nine factors relate to and affect video views. Front. Commun. 2020, 5, 567606. [Google Scholar] [CrossRef]
  22. Giglietto, F.; Rossi, L.; Bennato, D. The open laboratory: Limits and possibilities of using Facebook, Twitter, and YouTube as a research data source. J. Technol. Hum. Serv. 2012, 30, 145–159. [Google Scholar] [CrossRef]
  23. Smith, M.J.; Geach, J.E. Astronomia ex machina: A history, primer and outlook on neural networks in astronomy. R. Soc. Open Sci. 2023, 10, 221454. [Google Scholar] [CrossRef]
  24. Smith, A.A. Broadcasting ourselves: Opportunities for researchers to share their work through online video. Front. Environ. Sci. 2020, 8, 150. [Google Scholar] [CrossRef]
  25. Vins, M.; Aldecoa, M.; Hines, H.N. Sharing wildlife conservation through 4 billion views on YouTube. Glob. Ecol. Conserv. 2022, 33, e01970. [Google Scholar] [CrossRef]
  26. Parisot, O. Data and Models for Sunspots Detection in Solar Images Captured with Smart Telescopes. In Advanced Research in Technologies, Information, Innovation and Sustainability; Springer: Cham, Switzerland, 2024; pp. 153–163. [Google Scholar]
  27. Steinicke, W. Observing and Cataloguing Nebulae and Star Clusters: From Herschel to Dreyer’s New General Catalogue; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
  28. Rector, T.A.; Levay, Z.G.; Frattare, L.M.; English, J.; Pu’uohau-Pummill, K. Image-processing techniques for the creation of presentation-quality astronomical images. Astron. J. 2007, 133, 598. [Google Scholar] [CrossRef]
  29. Okulu, H.Z. Creating and evaluating instructional materials with generative artificial intelligence: Visual representations in astronomy education. Educ. Inf. Technol. 2025, 1–20. [Google Scholar] [CrossRef]
  30. Cohen, M.; Lu, W. A diffusion-based method for removing background stars from astronomical images. Astron. Comput. 2021, 37, 100507. [Google Scholar] [CrossRef]
  31. Bradley, L.; Sipocz, B.; Robitaille, T.; Tollerud, E.; Deil, C.; Vinícius, Z.; Barbary, K.; Günther, H.M.; Bostroem, A.; Droettboom, M.; et al. Photutils: Photometry tools. Astrophys. Source Code Libr. 2016, ascl–1609. [Google Scholar] [CrossRef]
  32. Kovalevsky, J.; Seidelmann, P.K. Fundamentals of Astrometry; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
  33. Parisot, O. DeepSpaceYoloDataset: Annotated Astronomical Images Captured with Smart Telescopes. Data 2024, 9, 12. [Google Scholar] [CrossRef]
  34. Parisot, O.; Jaziri, M. Deep Sky Objects Detection with Deep Learning for Electronically Assisted Astronomy. Astronomy 2024, 3, 122–138. [Google Scholar] [CrossRef]
  35. Velli, M.; Harra, L.K.; Vourlidas, A.; Schwadron, N.; Panasenco, O.; Liewer, P.; Müller, D.; Zouganelis, I.; St Cyr, O.; Gilbert, H.; et al. Understanding the origins of the heliosphere: Integrating observations and measurements from Parker Solar Probe, Solar Orbiter, and other space-and ground-based observatories. Astron. Astrophys. 2020, 642, A4. [Google Scholar] [CrossRef]
  36. Tomar, S. Converting video formats with FFmpeg. Linux J. 2006, 2006, 10. [Google Scholar]
  37. Violot, C.; Elmas, T.; Bilogrevic, I.; Humbert, M. Shorts vs. Regular Videos on YouTube: A Comparative Analysis of User Engagement and Content Creation Trends. In Proceedings of the 16th ACM Web Science Conference, Stuttgart, Germany, 21–24 May 2024; pp. 213–223. [Google Scholar]
  38. Lupșa-Tătaru, D.A.; Lixăndroiu, R. YouTube channels, subscribers, uploads and views: A multidimensional analysis of the first 1700 channels from july 2022. Sustainability 2022, 14, 13112. [Google Scholar] [CrossRef]
  39. Zotti, G.; Hoffmann, S.M.; Wolf, A.; Chéreau, F.; Chéreau, G. The simulated sky: Stellarium for cultural astronomy research. arXiv 2021, arXiv:2104.01019. [Google Scholar] [CrossRef]
  40. Landrum, A.R. Are women a missing audience for science on YouTube? An exploratory study. Front. Commun. 2021, 6, 610920. [Google Scholar] [CrossRef]
  41. McGhie, H. De-masculinising Astrophotography. In Proceedings of the Fast Forward Conference 5: “Hidden (Hi)stories: New Perspectives of Women’s Photographies”, Thessaloniki, Greece, 17–19 May 2024. [Google Scholar]
  42. Bernad-Mechó, E.; Valeiras-Jurado, J. Multimodal engagement strategies in science dissemination: A case study of TED talks and YouTube science videos. Discourse Stud. 2023, 25, 733–754. [Google Scholar] [CrossRef]
  43. Parisot, O. A Set of Smart Telescopes Videos Analyzed with YOLO Models (2023–2025); Zenodo: Geneva, Switzerland, 2025. [Google Scholar] [CrossRef]
Figure 1. Telescopes used by the authors during a session in January 2025 (from left to right: refractor and Maksutov for visual observation, Hestia, Stellina, and Vespera).
Figure 2. Considering 2692 patches of 256 × 256 pixels, percentage of images with point sources detected (blue represents the proportion of images with stars detected; green represents the proportion of images without stars detected); the original images are on the left, the images after applying the Starnet model are in the middle, and the images after running the nox package are on the right.
Figure 3. Observation of Lagoon Nebula (Messier 8) with a Vespera smart telescope (6 July 2023), and processing with Starnet. On the left, the stacked image after 10 s of capture; on the right, the stacked image after 1200 s of capture.
Figure 4. Training of a YOLO8m model by using the DeepSpaceYoloDataset in order to detect DSOs in astronomical images captured with smart telescopes.
Figure 5. Execution of the YOLOv7 tiny model on two images of M64 captured with a Vespera smart telescope (27 April 2023): the first image was obtained after the stacking of 2 raw images, and the second one was obtained after the stacking of 115 raw images.
Figure 6. Annotated view of the Sun as captured with a Vespera telescope on 16 June 2023: the conditions are not perfect and the image was blurred by the bad weather conditions of the day, but the YOLO model is able to detect most of the sunspots.
Figure 7. Diagram to use FFMPEG commands to generate an MP4 video from a sequence of images, with conditional steps for the different operations.
Figure 8. A subset of videos available on the YouTube channel.
Figure 9. Daily audience for the YouTube channel over 2024, measured by YouTubeStudio: the spring was very active, while the end of the year was marked by strong audiences of videos like ‘Live observation of Great Pegasus Cluster (M15) with a Vespera smart telescope (15 September 2023)’.
Table 1. Accuracy of trained YOLO models on the test set from DeepSpaceYoloDataset (222 images).

Model        Parameters    Precision    Recall    mAP@50    mAP@50-95
YOLO11n      2,582,347     0.778        0.646     0.742     0.569
YOLO8n       3,005,843     0.778        0.602     0.727     0.544
YOLOv7tiny   6,007,596     0.761        0.573     0.639     0.442
YOLO11m      20,030,803    0.802        0.619     0.738     0.559
YOLO8m       25,840,339    0.767        0.661     0.739     0.555
YOLO11x      56,828,179    0.758        0.657     0.726     0.541
YOLO8x       68,124,531    0.854        0.594     0.729     0.552
Table 2. Accuracy of trained YOLO models on the test set from SunspotsYoloDataset (128 images).

Model        Parameters    Precision    Recall    mAP@50    mAP@50-95
YOLO11n      2,582,347     0.804        0.727     0.807     0.526
YOLO8n       3,005,843     0.795        0.802     0.849     0.562
YOLOv7tiny   6,007,596     0.865        0.920     0.915     0.728
YOLO11m      20,030,803    0.815        0.764     0.841     0.560
YOLO8m       25,840,339    0.851        0.749     0.844     0.562
YOLO11x      56,828,179    0.838        0.734     0.823     0.541
YOLO8x       68,124,531    0.854        0.745     0.842     0.554

