Proceeding Paper

A BRAIN Study to Tackle Image Analysis with Artificial Intelligence in the ALMA 2030 Era †

1 European Southern Observatory, Karl-Schwarzschild-Straße 2, 85748 Garching, Germany
2 National Institute of Nuclear Physics, Section of Naples (INFN), Via Cinthia, 80126 Naples, Italy
3 INAF—Institute of Radio Astronomy, Via Gobetti 101, 40129 Bologna, Italy
4 Italian ALMA Regional Centre, Via Gobetti 101, 40129 Bologna, Italy
5 Max Planck Institute for Astrophysics, Karl-Schwarzschild-Straße 1, 85748 Garching, Germany
6 Department of Physics Ettore Pancini, Federico II University, Via Cinthia 26, 80126 Naples, Italy
7 Department for Computer Science, Technical University of Munich, Boltzmannstr. 3, 85748 Garching, Germany
* Author to whom correspondence should be addressed.
Presented at the 42nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, Garching, Germany, 3–7 July 2023.
Phys. Sci. Forum 2023, 9(1), 18; https://doi.org/10.3390/psf2023009018
Published: 13 December 2023

Abstract

An ESO internal ALMA development study, BRAIN, is addressing the ill-posed inverse problem of synthesis image analysis, employing astrostatistics and astroinformatics. These emerging fields of research offer interdisciplinary approaches at the intersection of observational astronomy, statistics, algorithm development, and data science. In this study, we provide evidence of the benefits of employing these approaches to ALMA imaging for operational and scientific purposes. We show the potential of two techniques, RESOLVE and DeepFocus, applied to ALMA-calibrated science data. Both offer significant advantages, with the prospect of improving the quality and completeness of the data products stored in the science archive and reducing the overall processing time for operations. Both approaches also point to a logical pathway for addressing the incoming revolution in data rates dictated by the planned electronic upgrades. Moreover, we bring additional products to the community through a new package, ALMASim, a refined ALMA simulator usable by a large community for training and testing new algorithms, to promote advancements in these fields.

1. Introduction

Since its inception in 2011, the Atacama Large Millimeter/submillimeter Array (ALMA) (Figure 1) has been at the forefront of astronomical research (see e.g., [1]), consistently delivering groundbreaking scientific discoveries with its versatile observatory capabilities. In 2018, the ALMA Development Roadmap provided a forward-thinking perspective to increase the observatory’s capabilities out to 2030 (ALMA2030) [2,3]. Scientifically motivated by the quest to understand the origins of galaxies, chemical complexity, and planets, ALMA will be enhanced to address (1) larger bandwidths and better receiver sensitivity, (2) improvements to the ALMA Science Archive, and (3) longer baselines. In addition, the observatory is currently expanding the frequency coverage to 35–950 GHz with two new receivers (band 1 and band 2) to capture a broader range of celestial objects and phenomena.
The ALMA2030 Roadmap [2] encourages innovative technical concepts rooted in astronomy, even if they are not explicitly listed within the document. In this sense, the ESO internal ALMA development study BRAIN fits into the original strategic vision of ALMA2030. In this study, synthesis imaging algorithms capable of learning from ALMA data are developed and tested. A powerful attribute of modern artificial intelligence systems is that algorithms can make data-driven decisions, discover insights, and provide predictions. We include the exploration of algorithms that can leverage graphics processing units (GPUs) to maximize computational power, achieving high performance. GPUs allow for more efficient and cost-effective computing solutions and open up new possibilities for solving complex problems with large data volumes. Moreover, we provide the community with a database and software to create ALMA-simulated data, promoting the development of new synthesis imaging algorithms.

Data Rate and Data Volume Incrementing with ALMA2030

The current top priority of the ALMA2030 vision is to broaden the receivers’ intermediate frequency (IF) bandwidth by a factor of 2–4 (currently ∼8 GHz) while upgrading the associated electronics and the correlator. A wider bandwidth will allow the observatory to capture more data in a shorter amount of time, accelerating data collection and analysis. Upgrades to the receivers and electronics improve the sensitivity, with the aim of detecting fainter signals and gathering more precise data. Efforts are being made to reduce system noise, interference, and instrumental effects in order to refine the signal-to-noise ratio. These enhancements have been officially designated as the ALMA2030 Wideband Sensitivity Upgrade (WSU) [4].
The development of the ALMA second-generation correlator, the improved receivers, the wider bandwidth, and the transmission system will enable a substantial increase in data volume during a single observation and up to 10 times higher observing speed efficiency. While ALMA had collected 1.7 PB of visibility data and products through Cycle 9, ALMA2030 is expected to augment the visibility data volume by up to a factor of 100. Secondly, the ALMA Science Archive requires improvements in order to handle the data from the increased bandwidth, in addition to enhancements to the user-friendliness of the archive and data products. Thirdly, the maximum baselines may be increased by a factor of 2–3, and the number of antennas will grow by up to 16%. Consequently, the ALMA resolution and UV coverage will be enriched beyond today’s possibilities thanks to the longer baselines and more efficient observational strategies.
The expanded bandwidth directly leads to enhancements in both the continuum sensitivity and spectral coverage. While the exact capabilities that can be offered are not yet known, an increase in the data rate of between 10 and 100 times is envisaged. Due to the increased sensitivity and data volume, continuum identification can become a challenging procedure, especially given the potential increase in spectral line detections. During the ramp-up years of the WSU, observations will occur with a lower number of antennas than employed today. In this time frame, sparse sampling may affect the data. To handle the increased volume and complexity of the data, data processing and analysis tools have to be strategically developed.

2. Methods and Results: Artificial Intelligence for Synthesis Imaging with BRAIN

The ESO internal ALMA development study named BRAIN was proposed in late 2019 and will conclude by the end of 2023. BRAIN stands for Bayesian Reconstruction through Adaptive Image Notion. Alternatives to commonly used applications in image processing have been sought and tested [5,6]. This study explores the astrostatistics and astroinformatics approaches suited to the ALMA2030 era. Astrostatistics and astroinformatics are essential for advancing our understanding of the universe and for making sense of the enormous volumes of data collected by astronomical instruments and observatories [7].
Astrostatistics involves the application of statistical analysis, data modeling, and probability theory to solve problems, perform predictions, and provide robust uncertainty quantification. By extracting meaningful insights from astronomical data, astrostatistics enhances the quality and reliability of astronomical research by providing quantitative and data-driven approaches.
Astroinformatics focuses on the management, analysis, and interpretation of large and complex astronomical datasets, combining elements of astronomy, computer science, and information technology. This field plays a crucial role in executing operations of large observatories in real time. It can support operations for sanity checks on the health of the observatory, as well as real-time image processing.
In this study, the information field theory [8] approach RESOLVE [9,10,11] and the astroinformatics technique DeepFocus [12] are applied to ALMA-calibrated data. A companion package of DeepFocus, named ALMASim, was developed with the additional scope to provide simulated data as well as open software for the scientific community. ALMASim enables training and testing supervised machine learning (ML) algorithms as well as reliability and quality assessments and comparisons of several techniques.

2.1. Resolve

The algorithm works on ALMA-calibrated measurement sets in the UV space. The input data d are modeled as a log-normal celestial signal s seen through the instrument response R (i.e., the spatial response pattern of the telescope array during an observation) and corrupted by the noise n (systematic and random errors), where d = R e^s + n. The process of synthesizing an image involves estimating the posterior probability density function (pdf) of potential true sky signal configurations using variational inference, where P(s|d) = e^{−H(s,d)}/Z(d), with H(s,d) being the information Hamiltonian and Z(d) the partition function. For more details on the RESOLVE algorithm, see [9,10,11] and the references therein. The RESOLVE algorithm outputs posterior samples of sky images and power spectra. From these posterior samples, summary statistics such as a mean sky map or an uncertainty map can be computed. The first application of RESOLVE to ALMA data is shown in [10]. In Figure 2, an application of RESOLVE to a protoplanetary disk sample, Elias 27, from the ALMA Disk Substructures at High Angular Resolution Project (DSHARP) [13,14] is shown. In Figure 2, image (A) is the fiducial image of the continuum detection of Elias 27 from the DSHARP data release [14,15], a self-calibrated image produced with CASA [16]. Images (B) and (C) are the outcomes of RESOLVE’s application to Elias 27 continuum ALMA data, showing the mean sky map and the uncertainty map, respectively. The DSHARP project was aimed at detecting a large number of protoplanetary disks at high resolution (35 mas) to reveal the amplitudes of small-scale substructures in the distributions of the disk material and understand their relation to the planet-forming process.
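As an illustration, the data model d = R e^s + n can be sketched in a toy one-dimensional NumPy example. The array size, the Gaussian "beam" standing in for R, and the noise level are invented for illustration; RESOLVE itself operates on visibilities and infers the posterior of s with variational inference rather than simply drawing forward samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D analogue of the RESOLVE data model d = R e^s + n:
# the sky brightness is the exponential of a latent Gaussian field s,
# observed through a linear response R (here, convolution with a
# Gaussian "beam") and corrupted by Gaussian noise n.
npix = 64
s = rng.normal(0.0, 1.0, npix)            # latent log-sky field
sky = np.exp(s)                           # positive sky brightness e^s

x = np.arange(npix) - npix // 2
beam = np.exp(-0.5 * (x / 3.0) ** 2)      # toy response kernel
beam /= beam.sum()

d = np.convolve(sky, beam, mode="same")   # R e^s
d += rng.normal(0.0, 0.05, npix)          # + n
```

Note that exponentiating s enforces positivity of the sky brightness, which is one reason the log-normal parameterization is attractive for imaging.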
RESOLVE is currently being applied to other science cases, such as self-calibration, full polarization, and single-dish imaging, as well as VLBI-based filming of evolving black hole environments [17,18]. Those applications should be extended to ALMA data.

2.2. DeepFocus

The deep learning pipeline DeepFocus was described in [6,12]. It performs deconvolution, source detection, and characterization. Beyond the original development, targeted at detecting faint compact objects, the deconvolution algorithm has been improved to detect extended emissions with a new ML model, the meta-learner, which is capable of exploring different architectures (e.g., CAE-VAE, U-Net, and ResNet) and assisting in model selection to predict the best-performing architecture given a task and a set of interferometric data. A Bayesian optimization algorithm is used for model selection. The procedure is supported by a taxonomy of models. This framework allows one to include several model architectures, hyperparameters, and evaluation metrics that are specific to the problem, the data, and the desired performance criteria. Multiple parameter realizations are tested in parallel, and a subsample of the original problem is used to measure the performance. Once the optimal architecture is chosen according to the image data, the deep learning pipeline performs deconvolution and denoising, focusing, and classification tasks [12].
The Bayesian optimization algorithm is supported by a parameter search employing surrogate models. The best set of parameters for a given objective is efficiently found while reducing the number of expensive model evaluations. The expensive objective function (the deep learning model) is approximated by a surrogate model, described by a Gaussian process (GP), y = GP(μ, Σ), with μ(x) and Σ(x, x′) being the mean and covariance functions, respectively. The Matérn kernel is chosen as the covariance function to capture smoothness and correlations in the data, where Σ = K(x, x′) = σ²(1 + (x − x′)²/(2αl))^(−α), with α and l representing the smoothness and length scale parameters, respectively. The surrogate model is trained on a limited number of initial evaluations of the objective function. In order to target promising regions of the parameter space, an acquisition function is used to decide which set of parameters is evaluated next. In this development, the expected improvement (EI) is used as the acquisition function, where EI(x) = E[max(f(x) − f(x*), 0)], with f(x) and f(x*) indicating the predicted mean of the objective function at point x, based on the GP, and the best value observed thus far in the optimization process, respectively. The EI quantifies the potential gain in performance. It prefers models for which μ(x) > f(x*) (exploitation) or where the standard deviation σ(x) is high (exploration). As already shown in [19], the key advantage of using surrogate models in Bayesian parameter searching is the reduced number of costly evaluations of the objective function.
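To make the procedure concrete, the following sketch implements GP regression with the kernel written above and the EI acquisition on a one-dimensional toy objective. The objective function, data points, and hyperparameter values are invented for illustration and are not part of the DeepFocus implementation:

```python
import numpy as np
from scipy.stats import norm

def kernel(x1, x2, sigma2=1.0, alpha=1.0, ell=1.0):
    # Covariance as written in the text: sigma^2 (1 + (x - x')^2 / (2 alpha l))^(-alpha)
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return sigma2 * (1.0 + d2 / (2.0 * alpha * ell)) ** (-alpha)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Standard GP regression: posterior mean and standard deviation at Xs.
    K = kernel(X, X) + noise * np.eye(len(X))
    Ks = kernel(X, Xs)
    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ a
    v = np.linalg.solve(L, Ks)
    var = np.diag(kernel(Xs, Xs)) - np.sum(v**2, axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sd, f_best):
    # EI(x) = E[max(f(x) - f*, 0)] in closed form for a Gaussian posterior.
    z = (mu - f_best) / sd
    return (mu - f_best) * norm.cdf(z) + sd * norm.pdf(z)

# A few evaluations of the "expensive" objective (here a cheap stand-in).
X = np.array([0.1, 0.4, 0.9])
y = np.sin(3 * X)
Xs = np.linspace(0.0, 1.0, 101)
mu, sd = gp_posterior(X, y, Xs)
ei = expected_improvement(mu, sd, y.max())
x_next = Xs[np.argmax(ei)]  # next parameter set to evaluate
```

The closed-form EI used here is the standard expression for a Gaussian posterior; it vanishes where the surrogate is confident and below the best value, and grows where either the mean or the uncertainty is high, matching the exploitation/exploration trade-off described above.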

Benchmark on Archived ALMA Data Cubes

The European ALMA Regional Centre (EU ARC) cluster is frequently utilized for processing the standard pipeline and generating official products delivered to principal investigators. Using the EU ARC cluster, DeepFocus was executed on 29 × 10³ archived ALMA cubes taken during the three most recent cycles (7–9; see Figure 3). We observed an average processing time of 1.13 min per cube and an average compute throughput of 140 MB/s. The same data were cleaned with the tCLEAN algorithm [16,20] (see also [5,10]). The tCLEAN algorithm was executed with the parameter niter = 1000, corresponding to the number of cleaning iterations, employing parallel computing. The average computing throughput with tCLEAN was 0.56 MB/s, a rate 250 times lower than that of the ML algorithm DeepFocus. The image processing speed improvement achieved by DeepFocus varied significantly depending on the image size, ranging from a 280-fold to a remarkable 5500-fold increase in speed. When employing algorithms such as DeepFocus, the image deconvolution process may take only a few minutes on large cubes. GPUs, which are commonly used by ML algorithms, provide a large benefit for synthesis image analysis. DeepFocus demonstrated high image fidelity and high-performance computing for image reconstruction on the ALMA data cubes.
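The quoted 250-fold throughput ratio follows directly from the two measured averages:

```python
# Reported average compute throughputs on the ~29,000 archived ALMA cubes.
deepfocus_mbps = 140.0   # MB/s, DeepFocus on the EU ARC cluster
tclean_mbps = 0.56       # MB/s, tCLEAN with niter = 1000

speedup = deepfocus_mbps / tclean_mbps
print(round(speedup))  # 250
```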

2.3. A Refined ALMA Simulator: ALMASim

In [6,12], DeepFocus was shown to be applicable to ALMA dirty cubes, learning the celestial sources, the noise, and the instrumental point spread function (beam) from the input data. The algorithm allows for extreme data compression by leveraging both spatial and frequency information. As DeepFocus is a supervised ML algorithm, the process of building and evaluating the models is essential. To support the needed workflow, the ALMASim package was developed.
ALMASim is an extension of the CASA Simulator package [16]. We increased the capabilities of this simulator to be tailored toward the development of ML imaging algorithms and for quality and reliability assessments. The novel package builds upon the CASA PiP Wheels, the MARTINI Package [21], and the Illustris Python Package [22] to facilitate the creation of observations involving high-redshift point-like sources as well as nearby extended sources across the entire spectrum of ALMA configurations.
As in the CASA Simulator, the observational input parameters (precipitable water vapor, band, antenna configuration, bandwidth, integration time, and scan time) are provided along with the source properties (signal-to-noise ratio and peak brightness). ALMASim introduces several optional models for the science target brightness distribution: diffuse, point-like, Gaussian, and extended (Figure 4). For the diffuse emission model, the Numerical Information Field Theory package (NIFTy) [23] is used. The simulated science target distributions are fed into the MARTINI package [21], which transforms the input data into radio observations. Mock observations of simulated celestial sources (such as high-redshift point sources, low-redshift galaxies, extended structures, and both continuum and emission lines in ALMA cubes) are produced efficiently. As output, the package generates sky model cubes and dirty ALMA cubes, both in FITS format. The sky models represent the simulated real sky without any noise or instrumental artifacts added to the image. The dirty ALMA cubes correspond to the Fourier inversion of the observed visibilities corrupted by noise (e.g., instrumental or thermal). ALMASim can optionally provide outputs such as the ALMA dirty beam, the measurement set files, and plots of the 2D integrated cubes and 1D spectra for all simulated data.
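The relation between a sky model and its dirty counterpart can be illustrated with a minimal NumPy sketch: a Gaussian source (one of the brightness models above), a toy dirty beam derived from an incompletely sampled UV plane, and thermal noise. All parameter values and the random UV sampling are invented for illustration; ALMASim itself builds on the CASA Simulator and MARTINI rather than this simplified Fourier exercise:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128

# Sky model: a single Gaussian source (one of ALMASim's brightness models).
yy, xx = np.mgrid[:n, :n]
x0, y0, fwhm, peak = 64, 64, 6.0, 1.0
sig = fwhm / 2.355
sky = peak * np.exp(-(((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sig**2)))

# Toy dirty beam: the PSF corresponding to an incompletely sampled UV plane.
uv_mask = rng.random((n, n)) < 0.15        # ~15% of UV cells sampled
beam = np.abs(np.fft.ifft2(uv_mask))
beam /= beam.max()

# Dirty image: sampled visibilities, Fourier-inverted, plus thermal noise.
vis = np.fft.fft2(sky) * uv_mask
dirty = np.fft.ifft2(vis).real
dirty += rng.normal(0.0, 0.01 * peak, (n, n))
```

The `sky` array plays the role of the noise-free sky model cube, while `dirty` corresponds to the Fourier inversion of the corrupted visibilities described above.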
We aim at having this service available to the scientific community as an open-source tool. A number of preconfigured parameter sets will be available to generate identical datasets on which to test algorithms. ALMASim empowers users to generate synthetic datasets designed for the development of deconvolution and source detection models. Each user will be able to generate a dataset on their local machine. The package is engineered to harness MPI parallel computing, particularly on contemporary HPC clusters, to efficiently produce thousands of ALMA data cubes and their visibilities. Users have the choice to define the input parameters or to opt for a completely randomized configuration and sources.

2.3.1. Sharpening Mock Data with Realistic Noise Characteristics

We want to ensure that the simulated data closely resemble actual ALMA observations. The generation of synthetic noise has to reproduce the noise in real ALMA observations and introduce those characteristics into the artificial images. Two approaches are used: (1) adding noise to the generated ALMA measurement sets and (2) empirical noise modeling. In the first approach, realistic noise is added to the generated ALMA measurement sets, which contain the visibility data and calibration information from the simulated observations. Synthetic noise can be generated with the CASA Simulator, which is capable of corrupting the visibility data with thermal noise and atmospheric attenuation, and even with leakages (cross-polarization). Corruption with atmospheric turbulence, gain fluctuations, or drift is also possible within the CASA Simulator. However, correlated noise (e.g., due to antenna placement) is not accounted for in a straightforward manner, although it can be introduced by corrupting the measurement sets directly. Once the several components are introduced, the final noisy image is verified to ensure that the noise levels in the synthetic image match the noise characteristics of the desired simulated observations. The alternative, empirical approach to noise modeling is not bound to any theoretical model, which makes it applicable to other observatories and wavelengths.
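The first approach, corrupting visibilities before imaging, can be sketched as follows. The visibility set, source flux, and noise level are invented toy values, and the per-visibility noise sigma is simply chosen here rather than derived from system temperature and bandwidth as a real simulator would:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy visibility set: a point source at the phase center has flat
# visibilities equal to its flux. Thermal noise is added per visibility,
# mimicking corruption of a measurement set before imaging.
nvis = 10000
flux_jy = 1.0
vis = np.full(nvis, flux_jy, dtype=complex)

# Independent Gaussian noise on the real and imaginary parts; in a real
# simulator sigma would follow from T_sys, bandwidth, and integration time.
sigma = 0.5
vis += rng.normal(0, sigma, nvis) + 1j * rng.normal(0, sigma, nvis)

# Imaging at the phase center reduces to averaging the visibilities;
# the noise on the recovered flux shrinks as 1/sqrt(nvis).
recovered = vis.mean().real
```

This also illustrates the verification step mentioned above: after corruption, one checks that the recovered flux and residual noise level match the targeted observation.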

Empirical Noise Modeling

The generation of synthetic noise is limited in reproducing all the complexities encountered in real image data, yet we want the simulated ALMA images to closely resemble actual ALMA observations. Commonly, quality assurance for a principal investigator’s parameter of interest (such as sensitivity or resolution) occurs in the image space. We investigated the possibility of learning the noise patterns from real ALMA data with the goal of understanding the noise properties, such as the spatial and spectral distribution of the noise, and possibly considering correlated noise. Therefore, rather than relying solely on artificial noise added to a simulated real sky, an empirical approach to noise modeling was used and applied to ALMASim to encompass a broader range of noise components. Noise components were characterized from the observed data at scales larger than, similar to, and smaller than the ALMA beam size. By extracting the high spatial frequency noise patterns present in the image, we tried to capture the complexity of the noise environment. When replicating them to create the new simulated image, we took care to preserve the statistical properties of the flux distribution on individual pixels, matching those observed in the original image (except in regions originally occupied by the astronomical sources of interest). In Figure 5, a simulated ALMA image is shown whose noise was extracted from a real ALMA image; the real ALMA image displays a quasi-stellar object with a jet-like structure. The empirical approach assists in reinforcing the creation of realistic ALMA simulations. Extracting noise from ALMA observations and integrating it into simulated sky data improves the fidelity of noise modeling, leading to a more precise representation of the intricate noise characteristics often encountered in ALMA observations.
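The decomposition and replication steps above can be sketched with NumPy and SciPy. The stand-in "real" image, the filter scales, the source mask, and the pixel-resampling scheme are all illustrative assumptions, not the actual ALMASim implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
n = 128

# Stand-in for a real ALMA image: correlated background plus a bright source.
real = gaussian_filter(rng.normal(0, 1, (n, n)), 2.0)
real[60:68, 60:68] += 5.0                     # "source" region

# Split the image into a large-scale background (low-pass) and
# high-spatial-frequency noise (residual), as in the empirical approach.
background = gaussian_filter(real, 8.0)
hf_noise = real - background

# Mask the source region so its flux does not leak into the noise model,
# then replicate the noise by resampling pixels, preserving the observed
# per-pixel flux distribution.
mask = np.ones((n, n), dtype=bool)
mask[56:72, 56:72] = False
pool = hf_noise[mask]
replicated = rng.choice(pool, size=(n, n))

# Inject the empirically derived components into a simulated sky model
# (here an empty sky for simplicity).
sim_sky = np.zeros((n, n))
sim_dirty = sim_sky + background.mean() + replicated
```

Resampling from the masked pixel pool preserves the single-pixel flux histogram by construction; reproducing spatially correlated structure as well would require replicating whole noise patches rather than individual pixels.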

3. Outlook and Conclusions

The ALMA2030 roadmap is bringing about a significant transformation due to the increase in data rates and volumes. The ESO internal ALMA development study BRAIN provided an exploration of concepts for advanced imaging techniques. RESOLVE and DeepFocus are applicable to large data volumes while requiring the least amount of human intervention.
While RESOLVE stands out for its exceptional capacity to detect extended emissions and distinguish them from point sources, it is also naturally suited to combining data from various measurement sets for group-level imaging. Group-level imaging allows us to combine data from multiple arrays (12 m, 7 m, and total power) to capture both high-resolution and low-resolution features in a single observation. RESOLVE appears to be emerging as a preferred tool in the scientific community. Currently, the RESOLVE algorithm is under development to enhance its efficiency in processing ALMA cube images. On the other hand, DeepFocus is well equipped with essential features for use in operations, such as real-time processing and cube imaging of all spectral windows for large datasets. DeepFocus is actively advancing in its capability to detect diverse source morphologies, and it is planned to learn from freshly archived data. Its companion package, ALMASim, opens doors to the creation of tailored imaging algorithms specifically designed to address unique scientific cases. Although a clear design for ALMA2030 is not yet available, it is foreseeable that DeepFocus will perform image processing soon after the calibrated data are reduced, in real time. If freshly observed data are calibrated and ingested into the archive, then the DeepFocus algorithm can efficiently perform the imaging of the calibrated data within a few minutes. A real-time imaging service from a science platform of an ALMA archive is realistic. During the ALMA imaging process, if structures are present in the images caused by, for example, a glitch, then an alert can be sent to the system astronomers to perform the needed investigations and take actions for the following ALMA observations, or to the next-generation CASA team [24].
If the archive is the central system for acquiring the ingested calibrated data, then the imaging algorithms of choice can be employed to perform the imaging of the measurement set even at the group level when available for each project.
Enhancing the efficiency of the observatory and establishing a comprehensive archive in terms of products will greatly enrich data mining opportunities for the scientific community.

Author Contributions

Conceptualization, F.G.; methodology, F.G., E.V., T.E., G.L., M.D.V., I.B., Ł.T., F.S. and J.R.; software, M.D.V., V.J., C.B., A.D. and J.R.; validation, F.S. and Ł.T.; formal analysis, M.D.V. and V.J.; resources, F.G., T.E., G.L. and M.D.V.; writing—original draft preparation, F.G.; writing—review and editing, T.E., F.S., E.V. and M.D.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by an ESO internal ALMA development study investigating interferometric image reconstruction methods. The authors would like to acknowledge the Max Planck Computing and Data Facility (MPCDF) for providing computational resources and support for this research. J.R. acknowledges financial support from the German Federal Ministry of Education and Research (BMBF) under grant 05A20W01 (Verbundprojekt D-MeerKAT).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. The data can be found by querying the ALMA Archive: https://almascience.eso.org/aq/.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ALMA: Atacama Large Millimeter/submillimeter Array
ALMA2030: ALMA Development Roadmap
BRAIN: Bayesian Reconstruction through Adaptive Image Notion
DSHARP: Disk Substructures at High Angular Resolution Project
EI: Expected improvement
ESO: European Southern Observatory
EU ARC: European ALMA Regional Centre
IF: Intermediate frequency
FITS: Flexible Image Transport System
GP: Gaussian process
GPU: Graphics processing unit
HPC: High-performance computing
ML: Machine learning
MPI: Message Passing Interface
NIFTy: Numerical Information Field Theory
pdf: Probability density function
WSU: Wideband Sensitivity Upgrade

References

  1. Wootten, A.; Thompson, A.R. The Atacama Large Millimeter/Submillimeter Array. Proc. IEEE 2009, 97, 1463–1471. [Google Scholar] [CrossRef]
  2. Carpenter, J.; Iono, D.; Testi, L.; Whyborn, N.; Wootten, A.; Evans, N. The ALMA Development Roadmap. 2018. Available online: https://www.eso.org/sci/facilities/alma/developmentstudies/ALMA_Development_Roadmap_public.pdf (accessed on 23 November 2023).
  3. Carpenter, J.; Iono, D.; Kemper, F.; Wootten, A. The ALMA Development Program: Roadmap to 2030. arXiv 2020, arXiv:2001.11076. [Google Scholar]
  4. Carpenter, J.; Brogan, C.; Iono, D.; Mroczkowski, T. ALMA Memo 621: The ALMA2030 Wideband Sensitivity Upgrade. arXiv 2022, arXiv:2211.00195. [Google Scholar]
  5. Guglielmetti, F.; Villard, E.; Fomalont, E. Bayesian Reconstruction through Adaptive Image Notion. Proceedings 2019, 33, 21. [Google Scholar] [CrossRef]
  6. Guglielmetti, F.; Arras, P.; Delli Veneri, M.; Enßlin, T.; Longo, G.; Tychoniec, L.; Villard, E. Bayesian and Machine Learning Methods in the Big Data Era for Astronomical Imaging. Phys. Sci. Forum 2022, 5, 50. [Google Scholar]
  7. Siemiginowska, A.; Eadie, G.; Czekala, I.; Feigelson, E.; Ford, E.B.; Kashyap, V.; Kuhn, M.; Loredo, T.; Ntampaka, M.; Stevens, A.; et al. Astro2020 Science White Paper: The Next Decade of Astroinformatics and Astrostatistics. arXiv 2019, arXiv:1903.06796. [Google Scholar]
  8. Enßlin, T.A.; Frommert, M.; Kitaura, F.S. Information field theory for cosmological perturbation reconstruction and nonlinear signal analysis. Phys. Rev. D 2009, 80, 105005. [Google Scholar] [CrossRef]
  9. Junklewitz, H.; Bell, M.R.; Selig, M.; Enßlin, T.A. RESOLVE: A new algorithm for aperture synthesis imaging of extended emission in radio astronomy. Astron. Astrophys. 2016, 586, A76. [Google Scholar] [CrossRef]
  10. Tychoniec, Ł.; Guglielmetti, F.; Arras, P.; Enßlin, T.; Villard, E. Bayesian Statistics Approach to Imaging of Aperture Synthesis Data: RESOLVE Meets ALMA. Phys. Sci. Forum 2022, 5, 52. [Google Scholar]
  11. Roth, J.; Arras, P.; Reinecke, M.; Perley, R.A.; Westermann, R.; Enßlin, T.A. Bayesian Radio Interferometric imaging with direction–dependent calibration. Astron. Astrophys. 2023, 678, A177. [Google Scholar] [CrossRef]
  12. Delli Veneri, M.; Tychoniec, Ł.; Guglielmetti, F.; Longo, G.; Villard, E. 3D detection and characterization of ALMA sources through deep learning. Mon. Not. R. Astron. Soc. 2023, 518, 3. [Google Scholar] [CrossRef]
  13. Andrews, S.M.; Huang, J.; Pérez, L.M.; Isella, A.; Dullemond, C.P.; Kurtovic, N.T.; Guzmán, V.V.; Carpenter, J.M.; Wilner, D.J.; Zhang, S.; et al. The Disk Substructures at High Angular Resolution Project (DSHARP). I. Motivation, Sample, Calibration, and Overview. Astrophys. J. Lett. 2018, 869, L41. [Google Scholar] [CrossRef]
  14. Huang, J.; Andrews, S.M.; Pérez, L.M.; Zhu, Z.; Dullemond, C.P.; Isella, A.; Benisty, M.; Bai, X.; Birnstiel, T.; Carpenter, J.M.; et al. The Disk Substructures at High Angular Resolution Project (DSHARP). III. Spiral Structures in the Millimeter Continuum of the Elias 27, IM Lup, and WaOph 6 Disks. Astrophys. J. Lett. 2018, 869, L43. [Google Scholar] [CrossRef]
  15. Disk Substructures at High Angular Resolution Project (DSHARP). Available online: https://almascience.eso.org/almadata/lp/DSHARP/ (accessed on 23 November 2023).
  16. The CASA Team. CASA, the Common Astronomy Software Applications for Radio Astronomy. Publ. Astron. Soc. Pac. 2022, 134, 1041. [Google Scholar] [CrossRef]
  17. Arras, P.; Frank, P.; Haim, P.; Knollmüller, J.; Leike, R.; Reinecke, M.; Enßlin, T. Variable structures in M87 from space, time and frequency resolved interferometry. Nat. Astron. 2022, 6, 259–269. [Google Scholar] [CrossRef]
  18. Knollmüller, J.; Arras, P.; Enßlin, T. Resolving Horizon-Scale Dynamics of Sagittarius A. arXiv 2023, arXiv:2310.16889. [Google Scholar]
  19. Preuss, R.; von Toussaint, U. Global Optimization Employing Gaussian Process-Based Bayesian Surrogates. Entropy 2018, 20, 201. [Google Scholar] [CrossRef] [PubMed]
  20. Högbom, J.A. Aperture synthesis with a non-regular distribution of interferometer baselines. Astron. Astrophys. Suppl. 1974, 15, 417. [Google Scholar]
  21. Oman, K.A. MARTINI: Mock Spatially Resolved Spectral Line Observations of Simulated Galaxies; Bibcode: 2019ascl.soft11005O; Astrophysics Source Code Library: College Park, MD, USA, 2019. [Google Scholar]
  22. The Illustris Collaboration. 2018. Available online: https://github.com/illustristng/illustris_python (accessed on 23 November 2023).
  23. The Nifty Team. 2021. Available online: https://gitlab.mpcdf.mpg.de/ift/nifty (accessed on 23 November 2023).
  24. CNGI. Available online: https://cngi-prototype.readthedocs.io/en/latest/_api/autoapi/cngi/index.html (accessed on 23 November 2023).
Figure 1. ALMA antennas on the Chajnantor plateau. Credit: ESO.
Figure 2. Application of RESOLVE to Elias 27 from the DSHARP ALMA project at 240 GHz (1.25 mm) continuum. (A) The fiducial image as given by the DSHARP team [14]. (B) RESOLVE mean sky map of Elias 27. (C) RESOLVE uncertainty map representation.
Figure 3. Comparison of processing time and computing throughput with tCLEAN and DeepFocus on 29 × 10³ archived cube data from cycles 7, 8, and 9. This represents a rough estimate because at this stage of development, it is challenging to make a robust comparison between the two techniques.
Figure 4. Example of ALMA-simulated sources (dirty images) created with the ALMASim package: (A) point-like, (B) Gaussian shape, (C) extended, and (D) diffuse emissions.
Figure 5. Simplified visual explanation of the empirical approach to noise modeling. Different background and noise components measured at scales larger and shorter than the typical beam scale (central panels) are isolated from a real ALMA image (e.g., an ALMA calibrator (left panel)) and then added to a simulated image (right panel). In this example, we considered local fluctuations (center, bottom panel), a large-scale background (central panel), and high spatial frequency patterns (center, top panel). Instead of using a theoretical model to simulate noise and instrumental response, the same effects were directly measured from real observations obtained in comparable situations (telescope configuration and atmospheric conditions).

Share and Cite

MDPI and ACS Style

Guglielmetti, F.; Delli Veneri, M.; Baronchelli, I.; Blanco, C.; Dosi, A.; Enßlin, T.; Johnson, V.; Longo, G.; Roth, J.; Stoehr, F.; et al. A BRAIN Study to Tackle Image Analysis with Artificial Intelligence in the ALMA 2030 Era. Phys. Sci. Forum 2023, 9, 18. https://doi.org/10.3390/psf2023009018

