Review

Artificial Intelligence Revolutionizing Time-Domain Astronomy

1 School of Physics, Henan Normal University, Xinxiang 453007, China
2 Henan Academy of Sciences, Zhengzhou 450046, China
3 INAF Osservatorio Astronomico di Padova, Vicolo dell’Osservatorio 5, I-35122 Padova, Italy
* Author to whom correspondence should be addressed.
Universe 2025, 11(11), 355; https://doi.org/10.3390/universe11110355
Submission received: 24 September 2025 / Revised: 17 October 2025 / Accepted: 23 October 2025 / Published: 28 October 2025
(This article belongs to the Special Issue Applications of Artificial Intelligence in Modern Astronomy)

Abstract

Artificial intelligence (AI) applications have attracted widespread attention and have proven highly successful at extracting information from data of many kinds. These applications have the potential to assist astronomers in exploring the massive amounts of astronomical data now available. In fact, the integration of AI techniques with astronomy began some time ago, significantly advancing our understanding of the universe by aiding in exoplanet discovery, galaxy morphology classification, gravitational wave event analysis, and more. In particular, AI is now recognized as a crucial component in time-domain astronomy, given the rapidly evolving nature of the targeted transients and the increasing number of candidates detected by powerful surveys. A notable success is SN 2023tyk, the first transient discovered and spectroscopically classified without human inspection, an achievement made even more remarkable by the fact that it was identified by the Zwicky Transient Facility, which detects millions of alert sources every night. There is no doubt that AI will play a crucial role in future astronomical observations across various messenger channels, aiding in transient discovery and classification and helping, or even replacing, observers in making real-time decisions. This review examines several cases where AI is transforming contemporary astronomy, especially time-domain astronomy. We discuss the AI algorithms and methodologies employed to date, highlight significant discoveries enabled by AI, and outline future research directions in this rapidly evolving field.

1. Introduction

In recent years, artificial intelligence (AI) has emerged as a powerful tool in various scientific disciplines, including astronomy. AI techniques, such as machine learning, deep learning, and data mining, have found diverse applications in astronomy, ranging from data analysis and source classification to predictive modeling and decision-making. Specifically, AI algorithms are employed to filter through vast amounts of observational data, identify patterns, and extract meaningful information from complex datasets. For example, machine learning algorithms can automate the process of classifying celestial objects based on their spectral characteristics, light curves, or morphological features. Moreover, AI techniques enable astronomers to make predictions about the behavior and properties of astronomical objects, facilitating the discovery of new phenomena and guiding observational campaigns.
It is worth noting that in this review, we adopt a broad and functional definition of AI, encompassing classical machine-learning algorithms, modern deep-learning architectures, and emerging generative or large language model approaches. In astronomy, these methods share a common goal: to extract informative features from complex, multidimensional data and to enable automated or semi-automated decision-making that traditionally required human intervention. We acknowledge that the terminology surrounding AI, machine learning, and deep learning is often used interchangeably and sometimes ambiguously, reflecting historical cycles of enthusiasm and reevaluation within the broader AI community. Our focus here is therefore not on the algorithmic details themselves but on how these data-driven techniques are being applied in time-domain astronomy, what types of datasets most benefit from AI-assisted analysis, and how such approaches are reshaping transient discovery and classification pipelines.
The application of AI in astronomy has led to significant advancements in various subfields. By asking ChatGPT-4 [1] 1 about successful applications of AI in astronomy, we received the following responses:
  • Exoplanet Discovery: AI algorithms analyze stellar data to identify patterns indicative of exoplanets orbiting distant stars. This has led to the discovery of numerous exoplanets, expanding our understanding of planetary systems beyond our own.
  • Galaxy Morphology Classification: AI techniques, such as deep learning, are used to classify the shapes and structures of galaxies in large-scale surveys. This helps astronomers study galaxy evolution and formation.
  • Gravitational Wave Detection: AI algorithms analyze data from gravitational wave observatories like LIGO and Virgo to detect and characterize gravitational wave signals emitted by cataclysmic cosmic events such as black hole mergers.
  • Transient Detection and Classification: AI is used to automatically detect and classify transient events such as supernovae, gamma-ray bursts, and fast radio bursts in astronomical surveys, enabling rapid follow-up observations.
  • Data Analysis and Interpretation: AI techniques are employed to analyze large datasets from telescopes and satellites, extracting valuable insights about the properties and behaviors of celestial objects.
From these answers, it can be seen that AI has a wide range of applications and prospects in astronomy. This review mainly focuses on describing the application of AI in time-domain astronomy, showing how it effectively assists astronomers in dealing with the opportunities and challenges brought about by instrument upgrades, ultimately facilitating revolutionary advancements in time-domain astronomy.
Time-domain astronomy focuses on the study of celestial events that vary or appear transiently over time. These events, known as transients, encompass a wide spectrum, ranging from dramatic cataclysmic events like supernovae (SNe) [2] and gamma-ray bursts (GRBs) [3] to compact stellar systems, including cataclysmic variables (CVs) [4] and X-ray binaries [5], to active galactic nuclei (AGNs) [6], whose brightness fluctuations trace accretion physics around supermassive black holes. Time-domain astronomy plays a crucial role in understanding the dynamic and evolving nature of the universe, offering insights into fundamental astrophysical processes, cosmology, and even the search for extraterrestrial intelligence.
Aiming to detect the variability of astronomical sources, particularly transients, modern synoptic surveys are designed to monitor large sky areas with high cadence. With the continuous increase in telescope aperture and the rapid advancement of detector technologies such as mosaic CCDs [7,8,9], the survey capability of current time-domain surveys has seen an exponential improvement compared to the past; for instance, the Pan-STARRS [10] and the Zwicky Transient Facility (ZTF) [11,12,13,14] cameras have a large field of view, i.e., 9 and 47 square degrees, respectively, and they can thus monitor the full northern sky with daily cadence. Consequently, the data volume of transient candidates detected each night far exceeds the capacity for human inspection, e.g., as shown in Figure 1, the number of public transients reported per year was relatively low between 2005 and 2012, allowing for classification by eye. After 2012, there was a rapid increase, reaching nearly 25,000 reported events per year. Of course, the reported candidates have already been pre-selected; in fact, the number of detected candidates is much larger than this figure. In the meantime, the number of classified supernovae remained relatively low, comparable to the earlier classification numbers. This is due to the scarcity of spectral resources. Thus, any tool aiding in the classification holds importance in time-domain astronomy research. AI, which encompasses a broad range of techniques and methodologies aimed at enabling machines to perform tasks that typically require human intelligence, can play a crucial role here. For instance, machine learning algorithms can be trained on labeled datasets to recognize patterns and anomalies associated with different types of transients, enabling rapid and accurate classification of new observations, even with multi-band photometry only. Furthermore, the integration of AI with time-domain astronomy holds the potential to uncover novel insights and discoveries that may have remained hidden using traditional approaches. By harnessing the power of AI to analyze multi-dimensional and time-varying data, astronomers can explore new parameter spaces and identify unexpected correlations or phenomena.
In the meantime, astronomers often do not have sufficient time to slowly process the vast amount of data and extract valuable insights from it. Compared to relatively common transients such as SNe, astronomers are now more inclined to explore rapidly evolving transients, aiming to unveil a broader spectrum of extreme astronomical phenomena such as kilonovae (KNe) [15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32], fast blue optical transients (FBOTs) [33,34,35,36,37], and the shock breakout (SBO) emissions of SNe; e.g., the cooling tail of SBO might be caught in ultra-violet and optical wavelengths [38,39]. Therefore, observational instruments need to possess high temporal resolution. To overcome limitations imposed by the horizon and the atmosphere, astronomers have initiated the deployment of space telescopes like Kepler [40,41,42], TESS [43,44,45,46], GAIA [47], etc. However, considering the costs associated with construction, maintenance, and operational lifespan, a more feasible alternative involves establishing fully automated telescopes at various locations worldwide, thereby creating a network for time-domain astronomy observations, e.g., Las Cumbres Observatory Global Telescope (LCOGT) [48], Asteroid Terrestrial Last-Alert System (ATLAS) [49,50,51], All-Sky Automated Survey for Supernovae (ASAS-SN) [52], Distance Less Than 40 Mpc Survey (DLT40) [53], etc. The operation of multiple telescopes often requires a significant amount of manpower. Therefore, AI-based collaborative observation techniques and fully automated operation technologies for telescopes have become important tools in current research in time-domain astronomy. AI-driven predictive models can forecast the properties and evolution of transient events, guiding follow-up observations and maximizing scientific return.
In summary, the integration of AI with time-domain astronomy represents a significant methodological change in the way we explore the transient universe. By leveraging AI techniques to analyze vast amounts of observational data, astronomers can unlock the full potential of time-domain surveys and uncover the secrets of the dynamic cosmos. For instance, with the support of AI techniques, astronomers can now classify millions of alerts within just a few hours, compared to the days or even months once required for human verification in earlier surveys. This dramatic, order-of-magnitude improvement in latency and throughput enables astronomers to identify rare and unusual transient events that might otherwise have been missed. In this context, SN 2023tyk [54,55] has been claimed to be the first transient that required no human action from its discovery to its spectroscopic classification (Type Ia supernova) and its report to the Transient Name Server (TNS)2. In this review paper, we use SN 2023tyk as a case illustrating successful applications of AI in astronomy. We explore the transformative impact of AI on time-domain astronomy with SN 2023tyk as an example in Section 2, demonstrating how meaningful features can be extracted from diverse datasets. In Section 3, we introduce various machine learning algorithms and evaluation criteria and summarize those that have been widely applied in time-domain astronomy. We highlight the key developments, challenges, and future directions in this rapidly evolving field in Section 4, and finally, we conclude this review in Section 5.

2. SN 2023tyk as a Case Illustrating the Application of AI in Time-Domain Astronomy

In Figure 2, we present a workflow diagram illustrating the monitoring process of the transient sky, including candidate detection, photometric and spectroscopic follow-up strategies, and the final diagnosis of the physical scenario. In this process, machine learning can primarily be applied in three parts, i.e., real bogus classification of candidates based on their images, pattern recognition based on the multi-band light curves, and automatic analysis based on spectral energy distributions (SEDs). Additionally, machine learning can also be utilized in various other ways, such as survey strategy optimization, although these applications are beyond the scope of this review.
SN 2023tyk, a.k.a. ZTF23abhvlji, was automatically identified as a bright extragalactic transient in the ZTF public alert stream by BTSbot [56]. Because the software itself judged the source to be of scientific interest, it was subsequently and automatically observed by the Bright Transient Survey (BTS) [57,58] using the SED Machine [59]. A spectrum was robotically obtained, automatically reduced using pySEDM [60], and classified with SNIascore [61]. The classification was then routinely reported to the TNS server.
In this section, we use the discovery and follow-up story of SN 2023tyk as an example to illustrate the fully robotic pipeline developed within the ZTF ecosystem. In Section 2.1, we present how to distinguish between real and bogus candidates by relying on the image stamps, while in Section 2.2 and Section 2.3, we demonstrate how to distinguish different candidate types based on their observed fluxes and ultimately connect them to the underlying physical scenarios.

2.1. Real–Bogus Classification

Astronomers use telescopes to capture images of the sky and compare them with historical images of the same celestial region to detect new transients. Upon acquisition of images, the most effective method for transient detection involves subtracting search images from reference images captured previously. However, a major challenge with optical images from ground-based telescopes is that the Point Spread Function (PSF) varies at different epochs due to changing atmospheric conditions, such as variable seeing. To perform subtraction more effectively without atmospheric interference, the PSFs from two different epochs can be convolved to match each other using several publicly available codes, such as ISIS [62], HotPants [63], and ZOGY [64]. Once the images are properly subtracted, astronomers can identify residuals in the difference images using tools like SExtractor [65]. Sources exhibiting a Gaussian-like profile are likely real, while those that do not may be artifacts caused by various factors. Overall, owing to various issues associated with the detector and subtraction algorithms—such as alterations in the flux distribution kernel of point sources caused by atmospheric changes—the candidates identified from the resultant residual images are frequently tainted by a plethora of artifacts (see Figure 3), typically with a ratio of 100–1000 bogus candidates for every real one (depending on the signal-to-noise threshold).
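To make this processing chain concrete, the minimal sketch below illustrates the subtract-then-detect logic with astropy and photutils. It assumes pre-aligned frames, a simple Gaussian blur in place of a proper PSF-matching kernel, and placeholder file names; production pipelines instead rely on dedicated codes such as HotPants or ZOGY for the kernel solution.

```python
# A simplified difference-imaging sketch (illustrative assumptions: aligned
# frames, Gaussian PSF degradation, placeholder file names).
import numpy as np
from astropy.io import fits
from astropy.convolution import convolve, Gaussian2DKernel
from photutils.detection import DAOStarFinder

science = fits.getdata("science.fits").astype(float)      # new epoch
reference = fits.getdata("reference.fits").astype(float)  # historical template

# Crudely degrade the (sharper) reference toward the science-frame seeing.
matched_ref = convolve(reference, Gaussian2DKernel(x_stddev=1.5))
difference = science - matched_ref

# Detect significant residuals; most will be artifacts needing real-bogus vetting.
bkg_sigma = np.std(difference)
finder = DAOStarFinder(fwhm=3.0, threshold=5.0 * bkg_sigma)
candidates = finder(difference)
n_cand = 0 if candidates is None else len(candidates)
print(f"{n_cand} raw candidates extracted from the difference image")
```

Every residual returned at this stage still has to pass real-bogus vetting, which is where the machine learning described next enters.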
As detailed in Section 1, advancements in observational facilities and techniques have significantly enhanced the capabilities of modern telescopes to monitor vast areas of the sky. As a result, they generate an overwhelming number of transient candidates each night—far beyond what astronomers can feasibly inspect manually. Human visual inspection becomes the bottleneck for rapid target identification. To address this challenge, AI has become an essential tool in the field, helping to alleviate the burden and enable efficient source identification.
Choosing informative, discriminating, and independent features is crucial to produce effective algorithms for pattern recognition, classification, and regression tasks. In fact, there are various approaches to constructing features for machine learning algorithms in addressing the real–bogus classification problem. One of the most commonly used methods, as proposed by [66], is to represent each transient candidate directly using its pixel data. As illustrated in Figure 3, for a given training example, a feature vector can be constructed by extracting an x-by-x pixel region centered on the transient (x is the side length of the square image stamp, which can vary depending on the seeing conditions; typically, a region spanning approximately five times the average seeing is used), shown as the left side in each subplot of Figure 3. Then, on the right-hand side of each subplot, a 1-D feature vector is created by sequentially taking each column of the substamp and concatenating them to form a vector of pixel intensity values. To mitigate variations in pixel intensity across different images—variations that are not intrinsic to the transient source itself—a normalization procedure is typically applied. One commonly used approach is the log normalization method, as described in Equation (1) of [66]. This step ensures that amplitude differences arising from external factors (e.g., background noise, detector sensitivity) are minimized. After normalization, the machine learning algorithm can more effectively identify patterns in the feature vectors that are characteristic of real or bogus detections.
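As a minimal illustration of this pixel-based feature construction—with the stamp size, column ordering, and log scaling chosen here as assumptions rather than the exact recipe of [66]—one could write:

```python
# Build a normalized 1-D feature vector from a difference-image stamp.
import numpy as np

def stamp_to_feature_vector(difference_image, x_cen, y_cen, half_size=10):
    """Extract a (2*half_size+1)^2 stamp and return a normalized 1-D vector."""
    stamp = difference_image[
        y_cen - half_size : y_cen + half_size + 1,
        x_cen - half_size : x_cen + half_size + 1,
    ].astype(float)

    # Flatten column by column (Fortran order), as described above.
    vec = stamp.flatten(order="F")

    # Log-style normalization to suppress image-to-image amplitude differences
    # (background level, detector sensitivity); the offset keeps values positive.
    # This scaling is illustrative, not necessarily Equation (1) of [66].
    vec = vec - vec.min() + 1.0
    return np.log10(vec / np.median(vec))

# Usage (hypothetical candidate position):
# features = stamp_to_feature_vector(difference, x_cen=120, y_cen=88)
```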
In addition to the feature extraction method described above, various alternative approaches have been explored. For instance, one can use source-fitting parameters obtained from tools like SExtractor, employ deep learning algorithms to automatically learn feature representations [68], or even apply image recognition models that operate directly on the raw images [69]. These different strategies have been adopted in several projects and have all demonstrated promising classification performance. A comprehensive overview and comparisons between these methods are beyond the scope of this paper.
In order to assist astronomers in identifying interesting celestial objects in the shortest possible time, allowing them ample time and confidence to prepare for subsequent deep observations with large-aperture telescopes, modern time-domain surveys such as the ZTF and Rubin LSST [70] have integrated tasks such as image processing, candidate source extraction, and machine learning-based real–bogus classification into streamlined, automated pipelines operated by expert teams. This enables the broader astronomical community to receive transient candidates, accompanied by essential contextual information, in real time. This need for rapid and accessible data dissemination is the key motivation behind the development of alert systems and dedicated broker platforms. Inspired by the data delivery architectures used in commercial applications—such as those employed by LinkedIn3 —and built upon frameworks like Apache Kafka [71], these surveys can efficiently distribute alert packets to multiple broker systems, including ALeRCE [72]4, Lasair [73]5, ANTARES [74]6, Fink [75]7, and Ampel [76]8, among others9. Each alert packet10 (see e.g., the ZTF alert documentation11) contains rich metadata and contextual information, including photometric measurements and image cutouts (science, reference, and difference images). This allows users to apply customized filters and algorithms to identify and prioritize transient events of interest. For example, one can refer to the Lasair’s filter page12, which provides a collection of community-designed filters aimed at identifying different types of transient and variable source candidates.
A variety of parameters, multi-epoch photometric data, and even image cutouts for each epoch are displayed on the dedicated Lasair webpage of SN 2023tyk13. All of this information is derived from the alert stream. For each epoch, the alert also provides associated parameters such as rb and drb, which represent the probabilities that the candidate is real, as computed using machine learning and deep learning algorithms, respectively. These values are designed to assist users in deciding whether a candidate merits follow-up observations and at which observational epochs the alert provides reliable flux measurements.
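A toy filter of the kind users might register with a broker is sketched below; the field names rb, drb, and magpsf follow the ZTF alert schema, while the thresholds and the dictionary layout are illustrative assumptions.

```python
# Keep only alert epochs whose real-bogus scores and brightness pass simple cuts.
def select_reliable_epochs(alerts, rb_min=0.65, drb_min=0.9, mag_limit=19.5):
    good = []
    for alert in alerts:
        cand = alert["candidate"]  # assumed per-epoch candidate dictionary
        if (cand.get("rb", 0.0) >= rb_min
                and cand.get("drb", 0.0) >= drb_min
                and cand.get("magpsf", 99.0) <= mag_limit):
            good.append(alert)
    return good
```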
BTSbot is a convolutional neural network designed to identify sources relevant to the BTS project. As shown in Figure 1 of [56], BTSbot achieves this by assigning a bright transient score to each ZTF alert packet [77] based on the image cutouts and 14 selected numeric features, autonomously requesting follow-up observations for the most promising candidates. Follow-up observations can thus generate multi-band light curves and SEDs of candidate objects, providing astronomers with crucial data for classifying transients based on empirical knowledge. Additionally, machine learning algorithms, by comparing these observations with vast historical datasets, can significantly enhance the efficiency of identifying and prioritizing important candidates.
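The following sketch outlines a BTSbot-style two-branch network: one branch ingests image cutouts, the other ingests numeric alert features, and the two are merged into a single bright transient score. The number of numeric features (14) follows the text; the depth, filter sizes, and activations are illustrative assumptions and do not reproduce the published architecture.

```python
# A minimal, hypothetical multi-input CNN for alert scoring.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_bts_like_model(stamp_size=63, n_channels=3, n_features=14):
    # Image branch: science/reference/difference cutouts stacked as channels.
    img_in = layers.Input(shape=(stamp_size, stamp_size, n_channels), name="cutouts")
    x = layers.Conv2D(32, 5, activation="relu")(img_in)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Flatten()(x)

    # Metadata branch: numeric alert features (magnitude, star-galaxy score, etc.).
    meta_in = layers.Input(shape=(n_features,), name="metadata")
    m = layers.Dense(32, activation="relu")(meta_in)

    # Merge branches and emit a single score in [0, 1].
    merged = layers.Concatenate()([x, m])
    merged = layers.Dense(64, activation="relu")(merged)
    score = layers.Dense(1, activation="sigmoid", name="bright_transient_score")(merged)

    model = Model(inputs=[img_in, meta_in], outputs=score)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])
    return model

# Usage: model.fit([cutouts, metadata], labels, ...) on labeled alert packets.
```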

2.2. Multi-Band Photometric Lightcurves

Once astronomers identify an interesting real transient through alert filters, the next step typically involves gathering additional information about the candidate through various means. This may include checking for associations with multi-wavelength or multi-messenger sources, or evaluating whether follow-up observations with large-aperture telescopes are feasible to obtain a more detailed spectral energy distribution of the source. The spectroscopic observations from large-aperture telescopes are often crucial for astronomers to delve into the intricacies of these transients. However, observation time on large-aperture telescopes is extremely precious. Astronomers often require reliable and efficient tools to quickly discern the uniqueness and scientific value of alerts, thus bolstering confidence to utilize precious large-aperture telescope resources for uncharted exploration. Multiband photometric information from small-aperture telescopes—and even from citizen scientists, whose equipment may be less advanced but who often obtain images with remarkably good cadence [78,79]—reflects, to some extent, the SED of a candidate, making such photometry a highly trusted tool for astronomers; for example, features such as color and brightness variation rates can be extracted from multiband light curve data. With these diverse photometric features in hand, astronomers can then employ various AI tools, such as machine learning algorithms, to compare them with historical data or relevant models and thus assess the potentially special physical eruption mechanisms underlying the multi-band light curves of these sources.
As before, we take SN 2023tyk as an example. Figure 4 shows its ZTF light curves as well as the machine learning-based photometric classification of the transient. Remarkably, using only limited photometric information, AI identified the source as a potential Type Ia supernova—a prediction that was later confirmed by spectroscopic observations. At present, such light curve-based machine learning approaches are applicable to most classes of transients. However, there are still too few cases to robustly assess their accuracy in distinguishing between, for instance, Type Ib and Type Ic SNe, separating kilonovae from the SBO cooling tails of SNe, or determining whether an SN is a superluminous event when distance information is unavailable.
Of course, the accuracy of light curve-based machine learning classification is strongly dependent on the completeness of the photometric sampling. Figure 5 illustrates the performance of astrorapid14, a deep recurrent neural network framework for real-time classification of multi-band light curves. During the earliest phases of an outburst, the classification scores of different supernova types are nearly uniform; however, within ∼5 days of the trigger, clear separation between Type Ia and non-Ia supernovae emerges. When light curves are well sampled—i.e., observed with sufficiently high cadence and across multiple filters—the predictive power of machine learning can approach that of direct spectroscopic classification. Nonetheless, as emphasized above, early-time SED observations are crucial, and in practice, astronomers often realize a transient’s importance only after this critical phase has passed. Rapid acquisition of rich early-time SED data therefore remains a major challenge for time-domain surveys. Two general strategies are currently pursued. One is to conduct large, indiscriminate spectroscopic campaigns, exemplified by projects such as LAMOST [80] and 7DT [81]. Such efforts, however, are rare and resource-intensive. More commonly, wide-field, high-cadence surveys rely on machine learning algorithms to classify transients from sparse photometric data. While this approach carries intrinsic risks, it can yield enormous scientific rewards when successful.
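A compact sketch in the spirit of such recurrent classifiers is given below; it is not the astrorapid implementation, and the sequence length, per-epoch features, and layer sizes are assumptions chosen for illustration. The key idea is that a masked recurrent layer emits an updated class probability at every epoch, so the prediction sharpens as photometry accumulates.

```python
# A minimal recurrent light curve classifier producing per-epoch predictions.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_realtime_classifier(max_epochs=50, n_features=4, n_classes=2):
    # Each epoch: e.g., time since trigger, flux, flux error, passband index.
    seq_in = layers.Input(shape=(max_epochs, n_features), name="light_curve")
    x = layers.Masking(mask_value=0.0)(seq_in)       # ignore zero-padded epochs
    x = layers.GRU(64, return_sequences=True)(x)     # hidden state at every epoch
    out = layers.TimeDistributed(
        layers.Dense(n_classes, activation="softmax"), name="class_probs")(x)
    model = Model(seq_in, out)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```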
The challenge of classifying transients from photometric light curves has been recognized for more than a decade. In 2010, the Supernova Photometric Classification Challenge (SNPhotCC) [82,83] released a mixed dataset of simulated supernovae to test methods for distinguishing Type Ia supernovae from other classes. This initiative spurred the development of classification techniques in preparation for the Dark Energy Survey (DES) [84], ranging from template fitting to more advanced approaches such as light curve parameter fitting. Semi-analytic models, e.g., refs. [85,86], explored the possibility of reproducing SNe light curves with only a small number of hyperparameters. These models can be further embedded within other frameworks, such as Gaussian processes [87], which may be applied for interpolation or parameter constraints, thereby providing informative features for transient classification. Many of these fitting and feature-extraction processes have been integrated into publicly available codes, allowing astronomers to conveniently extract analytic or physical parameters directly from observational data. Representative examples include HAFFET (easy access to public transient data and a variety of light curve/SED model fits) [88], PISCOLA (SNe light curve interpolation for cosmology) [89], MOSFiT (physically motivated models such as magnetar or radioactive decay) [90], TigerFit (similar to MOSFiT but placing greater emphasis on circumstellar medium interaction-powered scenarios)15, RESSPECT (enables the construction of optimized training samples for the Rubin LSST) [91], and TransFit (improves upon previous physical models, enabling faster and more accurate reproduction of light curves) [92].
This effort was expanded in 2019 with the Photometric Luminous and Transient Classification Challenge (PLAsTiCC) [93]16, which simulated 19 classes of transients using tools such as SNANA [94] to evaluate classification strategies for LSST. These challenges attracted a broad community of astronomers and algorithm developers, fostering a variety of feature extraction techniques and machine learning approaches. Collectively, such developments have driven time-domain astronomy to a new level, enabling deeper exploration of the dynamic universe.
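As an example of the Gaussian-process interpolation step mentioned above, the following sketch fits a sparsely sampled, single-band light curve with scikit-learn; the photometry, kernel choice, and hyperparameters are all illustrative.

```python
# Gaussian-process interpolation of a sparsely sampled light curve.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical single-band photometry: days since discovery, fluxes, errors.
t_obs = np.array([0.0, 2.1, 5.3, 9.8, 14.2, 20.5]).reshape(-1, 1)
flux = np.array([1.0, 3.2, 6.0, 5.1, 3.4, 1.8])
flux_err = np.array([0.2, 0.2, 0.3, 0.3, 0.2, 0.2])

kernel = ConstantKernel(1.0) * RBF(length_scale=5.0)
gp = GaussianProcessRegressor(kernel=kernel, alpha=flux_err**2, normalize_y=True)
gp.fit(t_obs, flux)

# Evaluate on a dense grid; the predictive std quantifies interpolation uncertainty.
t_grid = np.linspace(0, 25, 100).reshape(-1, 1)
flux_pred, flux_std = gp.predict(t_grid, return_std=True)
```

The interpolated curve (and its uncertainty) can then feed feature extraction or classifiers such as those entered in the challenges above.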

2.3. Spectral Energy Distributions

When a source is preliminarily identified as potentially significant—either through peculiar metadata (see Section 2.1) or early-time photometric behavior (see Section 2.2)—astronomers often announce the discovery via community networks such as the Gamma-ray Coordinates Network (GCN)17, Astronotes18, or the Astronomer’s Telegram (ATel)19. Once such announcements gain traction within the community, they typically trigger an intensive campaign of photometric and spectroscopic follow-up observations.
Although the light curve classification tools mentioned above are powerful in most cases, they may not be able to accurately classify SNe in certain specific situations, requiring spectroscopic identification. The SED describes the distribution of radiation energy of celestial bodies at different wavelengths, as shown in Figure 6. By analyzing the presence of hydrogen and helium in the spectrum, SNe can be further subdivided. SN 2023tyk is a widely studied SN that, after being flagged by BTSbot, had a spectrum obtained with the SED Machine and was classified by SNIascore. Similar to SNIascore, there is the CCSNscore [95] rating system, which quantifies the spectral features of different SNe to improve classification accuracy.
Currently, there are various automated classification tools such as Supernova Identification (SNID) [97]20, the GEneric cLAssification TOol (GELATO) [98]21, the Deep Automated Supernova and Host classifier (astrodash) [99]22, The Next Generation SuperFit (NGSF) [100]23, and the SNID SAGE24, which can quickly and accurately identify the types and relevant parameters of supernovae. SNID provides a spectroscopic classification method based on cross-correlation, effectively distinguishing SN Ia from other types of SNe and allowing for flexibility in adjustment when dealing with different spectral data. As shown in Figure 7, the observed spectra are compared with historic data via the SNID tool to identify the possible nature behind the sources. GELATO is a user-friendly online tool that can automatically compare input spectra with a rich template database to quickly identify the most similar SN spectra. Its intuitive interface and powerful backend algorithms enable even non-professional users to easily classify SNe. Additionally, astrodash provides a Python library that supports batch processing of SNe spectra, significantly improving data processing efficiency. NGSF has been integrated into the Weizmann Interactive Supernova Data Repository (WISeREP) [101] webpage25, allowing it to be executed directly on the spectral search page of the repository. By combining the strengths of these tools, utilizing machine learning techniques for the analysis and classification of SNe spectra is no longer a challenge, greatly improving the timeliness and accuracy of classifications.
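The cross-correlation idea underlying SNID-like tools can be illustrated with a bare-bones sketch: rebin the observed spectrum and each template onto a common logarithmic wavelength grid (so a redshift becomes a simple lag), normalize, and rank templates by peak correlation. The implementation below is schematic; real tools add apodization, error weighting, redshift estimation, and large curated template libraries.

```python
# Schematic template cross-correlation for spectral classification.
import numpy as np

def correlate_with_template(wave_obs, flux_obs, wave_tmpl, flux_tmpl, n_bins=1024):
    # Common log-wavelength grid covering the overlap region.
    lo = max(wave_obs.min(), wave_tmpl.min())
    hi = min(wave_obs.max(), wave_tmpl.max())
    log_grid = np.linspace(np.log(lo), np.log(hi), n_bins)

    f_obs = np.interp(log_grid, np.log(wave_obs), flux_obs)
    f_tmpl = np.interp(log_grid, np.log(wave_tmpl), flux_tmpl)

    # Crude continuum removal and normalization.
    f_obs = (f_obs - f_obs.mean()) / f_obs.std()
    f_tmpl = (f_tmpl - f_tmpl.mean()) / f_tmpl.std()

    corr = np.correlate(f_obs, f_tmpl, mode="full") / n_bins
    return corr.max()   # higher peak => better spectral match

# Usage: score each template in a library and keep the best-matching SN type.
```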

3. Machine Learning Algorithms and Criteria

In Section 2, we used the ZTF discovery and automated identification of SN 2023tyk as an example to illustrate the broad applicability of machine learning techniques in time-domain astronomy. In this section, we review the various machine learning algorithms and decision criteria that have been employed in this process. It is worth noting that machine learning has now developed into a highly systematic methodology. Its utility is not limited to astronomy; as long as meaningful features and reliable labels can be extracted, these algorithms can be readily applied across disciplines to perform large-scale data mining and uncover hidden patterns in complex datasets.

3.1. Machine Learning Concepts and Evaluation Standards

Machine learning is a discipline that studies how to improve algorithms and model performance through experience, i.e., historical data. The core terminology in machine learning that provides the foundation for methodological development and application includes:
  • Model: A model is an abstract representation of a computer program or algorithm used to process and analyze data, make decisions, or make predictions. A model can be seen as a decision center that learns patterns and rules from data to perform tasks like prediction or classification. In time-domain astronomy, models can be applied to process and analyze various types of data, such as tabular data, time series data, and image data, to explore and understand astronomical phenomena.
  • Dataset: A dataset is a collection of information used to train and test models. In time-domain astronomy, datasets can include various types of data, such as astronomical images, astronomical light curves, and observational flux distributions. Typically, datasets are divided into training sets and test sets. The training set is used for the learning and training of the model, providing a large number of sample data points that enable the model to learn the features and patterns of the data. The test set is used to evaluate the model’s performance and generalization ability on unseen data, verifying whether the model has truly learned knowledge from the training data.
  • Features and Labels: Features are attributes or characteristics used to describe data, such as color, shape, size, etc. In time-domain astronomy, features can be various numerical characteristics extracted from observational data. Labels are interpretations or tags assigned to data, similar to naming or classifying the data. By learning the associations between features and labels, models can classify or identify new observational data.
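A toy example ties these terms together: a table of synthetic light curve features, randomly assigned labels, a train/test split, and a model. All numbers here are illustrative placeholders rather than real survey data.

```python
# Minimal features/labels/train/test/model example with scikit-learn.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
# Hypothetical features extracted from light curves: peak mag, rise time, color.
X = np.column_stack([
    rng.normal(18.5, 1.0, n),   # peak magnitude
    rng.normal(15.0, 5.0, n),   # rise time [days]
    rng.normal(0.0, 0.5, n),    # g-r color
])
y = rng.integers(0, 2, n)       # labels: 0 = "SN Ia", 1 = "other" (random here)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)                     # learn feature-label associations
print("test accuracy:", model.score(X_test, y_test))
```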
Depending on whether the input data are labeled, unlabeled, or require interaction with an environment, machine learning is typically divided into three main categories:
  • Supervised Learning [103]: In this learning mode, models are trained using a set of input–output pairs, where the input consists of features and the output is the target variable. The goal of supervised learning is to master the mapping relationship between features and the target variable. Common algorithms include Decision Trees, Support Vector Machines (SVMs), and Random Forests (RFs). Typical application scenarios include classification (classifying spectra as stars or quasars [104]) and regression (estimating redshift from photometric measurements [105]).
  • Unsupervised Learning [106]: Unsupervised learning is used to discover the intrinsic structure of unlabeled data. It does not rely on labels provided by humans but instead reveals patterns and relationships in the data through techniques such as clustering, dimensionality reduction, and anomaly detection. Unsupervised learning is particularly important in scientific research because it can extract new knowledge from existing datasets and drive new discoveries. Common unsupervised learning methods include clustering algorithms (e.g., K-means, HDBSCAN, DBSCAN) [107,108,109], dimensionality reduction techniques (e.g., PCA, t-SNE, UMAP) [110,111,112], and anomaly detection algorithms [113,114,115,116].
  • Reinforcement learning (RL) [117]: RL is centered on the idea that an agent explores and exploits a specific environment, optimizing its decision-making process through trial and error. The goal is to learn how to take effective actions through interaction with the environment. Compared to the other two machine learning methods, RL significantly transforms the learning process into actual actions. Currently, in the field of astronomy, RL has been widely applied to telescope control [118,119,120,121,122] and hyperparameter tuning in radio astronomical data processing pipelines [123,124].
Evaluating model performance is undoubtedly a key step, and evaluation standards vary based on the nature of the task (classification or regression). In regression tasks, where the target variable is continuous, common evaluation metrics include Mean Absolute Error (MAE) and Mean Squared Error (MSE) [125]. The MAE measures the average absolute difference between predicted and actual values, making it suitable for scenarios where sensitivity to outliers needs to be minimized. The MSE calculates the average of the squared differences between predicted and actual values, making it more sensitive to outliers.
For classification tasks, where the target variable is discrete, evaluation standards typically include the Area Under the ROC Curve (AUC), confusion matrix, F1 score, F2 score, F1/2 score, F1/3 score, and accuracy.
  • ROC (Receiver Operating Characteristic) Curve: The ROC curve displays the performance of a classifier at different thresholds, with the horizontal axis representing the false positive rate (FPR) and the vertical axis representing the true positive rate (TPR). The closer the curve is to the top left corner (point (0,1)), the better the classification performance. The AUC value, which represents the area under the ROC curve, ranges from 0 to 1; the closer the AUC value is to 1, the better the model’s performance.
  • F1 Score: The F1 score is the harmonic mean of precision (P) and recall (R), calculated as follows:
    $F_1 = \frac{2 \cdot P \cdot R}{P + R}$
  • F2 Score: The F2 score gives more weight to recall than precision, calculated as follows:
    $F_2 = \frac{5 \cdot P \cdot R}{4 \cdot P + R}$
  • F1/2 Score: The F1/2 score emphasizes precision while still considering recall. It is a special case of the general $F_\beta = \frac{(1+\beta^2) \cdot P \cdot R}{\beta^2 \cdot P + R}$ with $\beta = 1/2$:
    $F_{1/2} = \frac{1.25 \cdot P \cdot R}{0.25 \cdot P + R} = \frac{5 \cdot P \cdot R}{P + 4 \cdot R}$
  • F1/3 Score: The F1/3 score places an even greater emphasis on precision, corresponding to $\beta = 1/3$:
    $F_{1/3} = \frac{10 \cdot P \cdot R}{P + 9 \cdot R}$
  • Accuracy: Accuracy measures the proportion of correct predictions (both true positives and true negatives) out of the total predictions made. It is calculated as follows:
    $\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$
In these equations:
$P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}$
where TP, TN, FP, and FN denote the numbers of true positives, true negatives, false positives, and false negatives, respectively.
Finally, the confusion matrix is an intuitive tool for displaying the comparison between the model’s predictions and actual labels, with each row corresponding to a true class and each column corresponding to a predicted class. These evaluation metrics provide a comprehensive assessment of classification model performance, allowing for better decision-making based on specific needs and contexts.
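In practice, all of these metrics can be computed directly from a set of predictions, for example with scikit-learn; fbeta_score with beta = 2 or beta = 1/2 reproduces the F2 and F1/2 scores defined above. The labels and scores below are small illustrative placeholders.

```python
# Computing the evaluation metrics above from a set of predictions.
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             fbeta_score, precision_score, recall_score,
                             roc_auc_score)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])                    # actual labels
y_score = np.array([0.9, 0.2, 0.8, 0.4, 0.3, 0.6, 0.7, 0.1, 0.95, 0.55])
y_pred = (y_score >= 0.5).astype(int)                                  # thresholded

print("confusion matrix:\n", confusion_matrix(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("F1:       ", f1_score(y_true, y_pred))
print("F2:       ", fbeta_score(y_true, y_pred, beta=2))
print("F1/2:     ", fbeta_score(y_true, y_pred, beta=0.5))
print("accuracy: ", accuracy_score(y_true, y_pred))
print("AUC:      ", roc_auc_score(y_true, y_score))
```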

3.2. Photometric Classification for Optical Transient Studies

In Section 2, we used SN 2023tyk as an example to illustrate how machine learning assists astronomers in identifying real sources in images and in classifying different source types through light curves and SEDs. In fact, the accuracy of machine learning algorithms on image vetting has already reached a very high level, e.g., a missed detection rate of around 10% by accepting a false positive rate of 1% [66], and for survey data such as that from ZTF, LSST, etc., it has been highly integrated into broker systems. At the same time, machine learning has significantly improved the accuracy of spectral classification of sources; however, spectroscopic data remain sparse and complex, meaning that expert analysis is still indispensable. As noted above, current surveys such as ZTF and LSST are extremely powerful, producing massive amounts of photometric data that can only be efficiently processed with machine learning algorithms. Methods to achieve this have varied, with differing levels of accuracy, and the development of improved approaches continues to be an active research area. Therefore, in this section, we systematically review, through a selection of representative studies, how astronomers have applied machine learning algorithms, e.g., Random Forests [126,127], Support Vector Machines [128], and Bayesian Neural Networks (BNNs) [129,130,131], to massive multi-band light curves in time-domain astronomy to carry out large-scale data mining.
In [86], the authors evaluated four feature extraction methods—model parameters, manually selected features, principal component analysis (PCA), and direct light curve inputs—together with two data-augmentation techniques (SMOTE and MVG) and three classification algorithms, i.e., SVM, RF, and Multi-Layer Perceptron (MLP). By combining these approaches into 24 distinct pipelines, they assessed the purity, accuracy, and completeness of transient classifications. Their analysis showed that the most successful pipeline—resulting in around 90% average accuracy, 70% average purity, and 80% average completeness for all SN classes—achieved the highest performance for Type Ia SNe and SLSNe, while the classification accuracy for Type Ib/c SNe remained comparatively poor. Convolutional Neural Networks (CNNs) have achieved significant success in image recognition [132,133,134], and their application to light curve data has also shown great potential. For example, the SCONE algorithm generates two-dimensional heat maps of light curves using Gaussian processes and utilizes CNNs for classification. Experimental results indicate that SCONE can classify six types of supernovae with over 98% accuracy without redshift information [135]. Recurrent neural networks (RNNs) are suitable for processing time-series data and have been widely used in light curve classification problems in recent years. Ref. [136] used long short-term memory networks (LSTM) to distinguish between Type Ia SNe and core-collapse SNe. With around 10,000 SNe used for training, the model achieves a Type Ia vs. non-Ia classification accuracy of 94.7%, an AUC of 0.986, and an F1 score of 0.64. Using early-epoch data only, performance remains high, with 93.1% accuracy, AUC = 0.977, and F1 = 0.58. A bidirectional RNN further distinguishes Types I, II, and III with 90.4% accuracy and AUC = 0.974, demonstrating competitive performance for large-scale photometric surveys. Ref. [137] trained classifiers using RNNs with a masking layer to handle zero-padded input arrays, successfully identifying SNe, kilonovae, and other rare sources from simulated ZTF light curves. Unlike previous methods, they adopted a new anomaly detection approach, combining multi-class isolation forests (MCIFs) that train separate forests for 17 categories of sources, avoiding the disadvantages of interpolation and discovering 41 ± 3 anomalies among the top 2000 ranked transient sources detected. Ref. [138] used the ELAsTiCC26 streaming dataset and the binary classifier in the Fink broker to divide the dataset into five categories: SN-like, Periodic, Non-periodic, Long, and Fast. They built a deep learning light curve classification framework using LSTM recurrent neural networks to evaluate the performance of SuperNNova (SNN), finding stable classification performance for SN-like and Periodic transients. For example, within this framework, CATS achieves more than 93% precision for all classes except Long (83%), while the best-performing binary classifier reaches more than 98% precision and 99% completeness for periodic sources.
On the other hand, unsupervised algorithms also demonstrate strong application potential in the classification of transient sources. Ref. [139] used the unsupervised clustering method HDBSCAN27 [108] and the isolation forest algorithm in the ASTRONOMALY28 Python package to explore seven unknown transient sources and two types of stellar flares on second-to-hour time scales. For time-series and tabular data, an early approach [140] simplified the data to a 2D representation for classification using deep learning. In a departure from traditional methods, Transformers [141], a new deep learning architecture, were applied by [142] to the PLAsTiCC dataset to handle multivariate time-series data, achieving excellent performance metrics. Ref. [143] developed ATAT, composed of two Transformer models, overcoming the resource-intensive nature of feature engineering (FE) [144]. They process the input data in two branches—a time-modulation mechanism for the light curves and a quantile feature tokenizer for the tabular features—and trained the ATAT model with different combinations of light curves, metadata, and features, pioneering multi-modal applications.
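As a schematic counterpart to these unsupervised searches, the sketch below scores synthetic light curve features with an isolation forest and ranks the most anomalous transients; the features, the injected outliers, and the contamination rate are illustrative assumptions.

```python
# Unsupervised anomaly ranking of light curve features with an isolation forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical features: rise time [days], decline rate [mag/day], peak color.
features = rng.normal(loc=[10.0, 0.05, 0.2], scale=[3.0, 0.02, 0.3], size=(1000, 3))
features[-5:] += [30.0, 0.3, 1.5]   # inject a few slow/red outliers

clf = IsolationForest(contamination=0.01, random_state=0).fit(features)
scores = clf.score_samples(features)          # lower score = more anomalous
ranked = np.argsort(scores)[:20]              # top-ranked anomaly candidates
print("indices of most anomalous transients:", ranked)
```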

3.3. Beyond the Optical: Classification of Transients Across the Spectrum

Of course, the applications of machine learning in time-domain astronomy extend far beyond those described in Section 3.2. In fact, many contemporary research areas in astronomy—such as gravitational-wave waveform recognition [145], exoplanet detection [146], and galaxy morphology classification [147]—rely heavily on machine learning techniques. More broadly, any discipline that involves large-scale data analysis can benefit from these methods. A comprehensive review of all such applications is beyond the scope of this paper. Instead, in this section, we highlight selected advances that illustrate how machine learning has been applied to transient astronomy beyond supernova studies, with particular emphasis on research frontiers involving gamma-ray bursts (GRBs) and fast radio bursts (FRBs).
FRBs are highly energetic millisecond-duration astrophysical phenomena typically categorized as repeaters or nonrepeaters. However, observational limitations may result in misclassifications, potentially leading to a higher proportion of repeaters than currently identified. Therefore, machine learning has been widely applied to the detection and analysis of FRBs [148,149,150,151,152,153,154]. With the first CHIME/FRB catalog, refs. [155,156] identified 188 and 117 repeater candidates, respectively, from 474 apparently nonrepeating FRBs through unsupervised learning. Ref. [157] reported 145 repeaters using FRB morphology as features, while ref. [158] identified dozens of repeater candidates with various supervised learning methods. Moreover, ref. [158] found that brightness temperature and rest-frame frequency bandwidth are the most significant factors distinguishing repeaters from nonrepeaters, whereas ref. [159] suggested that spectral running may also play an important role. Some of the authors of this manuscript have also applied machine learning with 16 physical features to identify over a hundred repeater candidates, revealing distinct empirical relations between repeating and nonrepeating clusters, with all parameters in these empirical relations being mutually independent [160]. In addition, machine learning has been used to classify thousands of bursts from highly active repeaters, such as FRB 20121102 [161] and FRB 20201124A [162], in order to probe their possible radiation mechanisms.
GRBs are among the most energetic explosions in the universe, occurring in distant galaxies and outshining virtually all other astrophysical phenomena. They represent the brightest and most powerful class of cosmic transients, releasing immense amounts of energy within short timescales. GRBs exhibit durations that range from a few milliseconds to several hours and are conventionally classified into two populations based on their burst duration: short GRBs (<2 s), which are typically associated with compact binary mergers, and long GRBs (>2 s), which are generally linked to the collapse of massive stars [163]. However, in recent years, it has become increasingly clear that the traditional classification of GRBs by burst duration is not always reliable. A growing number of GRB events challenge this dichotomy. For example, according to standard theory, short GRBs are associated with compact binary mergers, which naturally produce r-process nucleosynthesis and are accompanied by kilonova emission. Yet, several long GRBs have recently been observed with candidate kilonova counterparts, e.g., GRB 191019A [164], GRB 211211A [165], and GRB 230307A [166]. This is puzzling, since long GRBs are thought to originate from the collapse of massive stars, a process typically accompanied by core-collapse SNe rather than kilonovae. These unexpected associations have raised serious questions about the adequacy of the duration-based classification scheme. In this context, applying machine learning algorithms to raw GRB data may provide new insights and reveal more physically motivated categories. Ref. [167] used the dimensionality reduction algorithms t-SNE and UMAP based on four observational parameters (duration, peak energy, radiated energy, and peak flux) to classify Fermi GRBs into two categories. They found that GRBs associated with kilonovae all belonged to Category I (except GRB 211211A [168] and GRB 230307A [169]), but this classification did not have a clear boundary compared to the traditional long–short classification. Ref. [170] proposed ClassiPyGRB, a machine learning-based tool for GRB classification and visualization. This tool uses the t-SNE algorithm for dimensionality reduction and visualization of GRB data, significantly improving the accuracy and efficiency of GRB classification. In the future, further integration of deep learning techniques and more complex feature engineering methods is expected to enhance classification outcomes further.
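For illustration, the following sketch mimics this style of analysis by embedding four log-scaled burst observables into two dimensions with t-SNE; the input values are randomly generated stand-ins, not a real GRB catalog.

```python
# Schematic t-SNE embedding of four GRB observables (synthetic stand-in data).
import numpy as np
from sklearn.manifold import TSNE
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Columns: log10(duration [s]), log10(Ep [keV]), log10(Eiso [erg]), log10(peak flux).
short = rng.normal([np.log10(0.5), 2.8, 50.5, 0.8], 0.4, size=(200, 4))
long_ = rng.normal([np.log10(30.0), 2.2, 52.5, 0.5], 0.4, size=(800, 4))
X = StandardScaler().fit_transform(np.vstack([short, long_]))

embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
# embedding[:, 0] and embedding[:, 1] can now be plotted to inspect any clustering.
```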

4. Future Directions and Challenges

As detailed above, while the application of ML and AI has already transformed time-domain astronomy, several important challenges and opportunities lie ahead. Here, we outline key directions for future research, emphasizing both the scientific potential and the technical barriers that must be addressed.
First, the classification of transients from photometric light curves remains a central task. Although supervised models trained on well-curated datasets (e.g., SNPhotCC and PLAsTiCC) have demonstrated promising results, their performance is often limited by incomplete or sparsely sampled light curves. Recent work, such as astrorapid, has shown that deep recurrent networks can achieve near-spectroscopic performance when light curves are densely sampled across multiple bands. However, in practice, early-phase SEDs are scarce, leaving significant uncertainties in classification. Large-scale, indiscriminate spectroscopic campaigns (e.g., LAMOST, 7DT) are one strategy, but resource constraints mean that ML-based classification from limited data will continue to play a critical role.
Second, the integration of multimodal data is an emerging frontier. Rather than relying solely on photometry, future systems will need to incorporate images, contextual metadata, and spectra into unified frameworks. For example, the AppleCiDEr system [171] combines photometry, image cutouts, metadata, and spectra to improve early classification performance, even for rare transients. Similarly, LAISS [172] leverages host-galaxy environment variables in addition to light curve features to identify anomalous transients in real-time through the ANTARES broker. These approaches demonstrate the power of multimodal integration in uncovering rare and scientifically valuable events.
Third, anomaly detection will remain a vital tool for discovering new classes of astrophysical phenomena. Traditional supervised classifiers are inherently biased toward known types, whereas unsupervised or semi-supervised methods can highlight sources that deviate from expectations. Multi-Class Isolation Forest (MCIF) [137] has shown excellent performance in surfacing rare transients such as kilonovae and unstable SNe from ZTF-like datasets. Similarly, LAISS has demonstrated the potential of combining anomaly detection with contextual filtering to reveal unclassified transients missed by conventional pipelines.
Fourth, real-time scalability is an urgent challenge in the LSST era, when tens of millions of alerts will be generated each night. Efficient algorithms for streaming data are critical to enable low-latency classification and follow-up. Efforts such as FINK’s kilonova science module [173] and representation-learning searches for fast X-ray transients in archival data [174] showcase how specialized modules and deep learning approaches can push real-time anomaly detection to new regimes. Looking forward, optimizing deep architectures (e.g., transformers, graph-based models) for irregular, sparse, and multimodal data streams will be essential.
Finally, interpretability and reproducibility remain key scientific concerns. While deep learning has proven powerful, its “black box” nature risks reducing trust in results. Transparent algorithms, uncertainty quantification, and community benchmarks will be critical for ensuring robustness. Furthermore, systematic biases—such as uneven sky coverage, incomplete spectroscopic training sets, or differences in host-galaxy populations—must be carefully addressed to prevent misleading scientific conclusions.
In summary, the future of AI in time-domain astronomy lies in multimodal integration, robust anomaly detection, real-time scalability, and transparent interpretability. With ongoing advances, AI will not only accelerate the discovery of known transients but also unlock entirely new classes of astrophysical phenomena, driving the field into an era of discovery that matches the unprecedented scale of upcoming surveys.

5. Conclusions

Artificial intelligence, particularly machine learning and deep learning technologies, has significantly enhanced the efficiency of transient event detection and classification in time-domain astronomy. By automating data processing and enabling rapid analysis, AI not only alleviates the workload of astronomers but also provides powerful tools for faster and more accurate exploration of the universe. This review paper discusses the transformative applications of AI in time-domain astronomy, especially its potential in processing and analyzing vast amounts of astronomical data. As the demand for research on transient celestial objects continues to grow, AI technologies have become indispensable for identifying, classifying, and predicting astronomical events. The case of SN 2023tyk particularly illustrates how AI has achieved the discovery and classification of transient objects without human intervention. Looking ahead, as data volumes continue to increase and technologies advance, the prospects for AI applications in time-domain astronomy are promising. However, challenges such as algorithm reliability, interpretability, and fairness also need to be addressed. By further integrating multimodal data, enhancing real-time processing capabilities, and developing automated systems, time-domain astronomy is poised to transcend traditional research methods, revealing more of the universe’s mysteries and advancing the field of astronomy further.

Author Contributions

Writing—original draft preparation, Z.-N.W.; writing—review and editing, D.-C.Q. and S.Y.; supervision, S.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China under Grant Nos. 12303046 and 12505070, the Joint Fund of Henan Province Science and Technology R&D Program No. 235200810057, the Henan Provincial Natural Science Foundation No. 252300420902, the Henan Province High-Level Talent International Training Program, and the Startup Research Fund of Henan Academy of Sciences Nos. 242041217 and 241841222.

Data Availability Statement

The imaging data presented in Figure 3 are available through the ESO archive, i.e., http://archive.eso.org/cms.html, while the photometric and spectroscopic data of SN 2023tyk presented in Figure 4 and Figure 7 were derived from the following resources available in the public domain: https://alerce.online and https://www.wis-tns.org/.

Acknowledgments

The authors would like to thank the anonymous reviewers for their helpful comments and feedback, which greatly strengthened the overall manuscript. The authors acknowledge the use of ChatGPT as a grammar checker and paraphrasing tool. Based on observations collected at the European Southern Observatory under ESO programmes 095.D-0195, 095.D-0079, 096.D-0110 and 096.D-0141.

Conflicts of Interest

The authors declare no conflicts of interest.

Notes

1
https://chatgpt.com/. Note that all webpage links throughout this paper are accessed on 25 October 2025.
2
3
4
5
6
7
8
9
Check the detailed description and full list of LSST brokers at https://rubinobservatory.org/for-scientists/data-products/alerts-and-brokers.
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28

References

  1. OpenAI; Achiam, J.; Adler, S.; Agarwal, S.; Ahmad, L.; Akkaya, I.; Leoni Aleman, F.; Almeida, D.; Altenschmidt, J.; Altman, S.; et al. GPT-4 Technical Report. arXiv 2023, arXiv:2303.08774. [Google Scholar] [CrossRef]
  2. Alsabti, A.W.; Murdin, P. Handbook of Supernovae; Springer: Cham, Switzerland, 2017. [Google Scholar] [CrossRef]
  3. Bambi, C.; Santangelo, A. (Eds.) Handbook of X-Ray and Gamma-Ray Astrophysics; Springer: Singapore, 2024. [Google Scholar] [CrossRef]
  4. Warner, B. Cataclysmic Variable Stars; Cambridge University Press: Cambridge, UK, 1995. [Google Scholar] [CrossRef]
  5. Casares, J.; Jonker, P.G.; Israelian, G. X-Ray Binaries. In Handbook of Supernovae; Alsabti, A.W., Murdin, P., Eds.; Springer: Cham, Switzerland, 2017. [Google Scholar] [CrossRef]
  6. Beckmann, V.; Shrader, C.R. Active Galactic Nuclei; Wiley-VCH: Weinheim, Germany, 2012. [Google Scholar] [CrossRef]
  7. Luppino, G.A.; Tonry, J.L.; Stubbs, C.W. CCD mosaics–past, present, and future: A review. Opt. Astron. Instrum. 1998, 3355, 469–476. [Google Scholar] [CrossRef]
  8. Luppino, G.A.; Tonry, J.; Kaiser, N. The current state of the art in large CCD mosaic cameras, and a new strategy for wide field, high resolution optical imaging. In Optical Detectors for Astronomy II. Astrophysics and Space Science Library; Amico, P., Beletic, J.W., Eds.; Springer: Dordrecht, The Netherlands, 2000; Volume 252, p. 119. [Google Scholar] [CrossRef]
  9. Platais, I.; Kozhurina-Platais, V.; Girard, T.M.; van Altena, W.F.; Klemola, A.R.; Stauffer, J.R.; Armandroff, T.E.; Mighell, K.J.; Dell’Antonio, I.P.; Falco, E.E.; et al. WIYN Open Cluster Study. VIII. The Geometry and Stability of the NOAO CCD Mosaic Imager. Astron. J. 2002, 124, 601–611. [Google Scholar] [CrossRef]
  10. Chambers, K.C.; Magnier, E.A.; Metcalfe, N.; Flewelling, H.A.; Huber, M.E.; Waters, C.Z.; Denneau, L.; Draper, P.W.; Farrow, D.; Finkbeiner, D.P.; et al. The Pan-STARRS1 Surveys. arXiv 2016, arXiv:1612.05560. [Google Scholar] [CrossRef]
  11. Bellm, E.C.; Kulkarni, S.R.; Graham, M.J.; Dekany, R.; Smith, R.M.; Riddle, R.; Masci, F.J.; Helou, G.; Prince, T.A.; Adams, S.M.; et al. The Zwicky Transient Facility: System Overview, Performance, and First Results. Publ. Astron. Soc. Pac. 2018, 131, 018002. [Google Scholar] [CrossRef]
  12. Dekany, R.; Smith, R.M.; Riddle, R.; Feeney, M.; Porter, M.; Hale, D.; Zolkower, J.; Belicki, J.; Kaye, S.; Henning, J.; et al. The Zwicky Transient Facility: Observing System. Publ. Astron. Soc. Pac. 2020, 132, 038001. [Google Scholar] [CrossRef]
  13. Masci, F.J.; Laher, R.R.; Rusholme, B.; Shupe, D.L.; Groom, S.; Surace, J.; Jackson, E.; Monkewitz, S.; Beck, R.; Flynn, D.; et al. The Zwicky Transient Facility: Data Processing, Products, and Archive. Publ. Astron. Soc. Pac. 2019, 131, 018003. [Google Scholar] [CrossRef]
  14. Graham, M.J.; Kulkarni, S.R.; Bellm, E.C.; Adams, S.M.; Barbarino, C.; Blagorodnova, N.; Bodewits, D.; Bolin, B.; Brady, P.R.; Cenko, S.B.; et al. The Zwicky Transient Facility: Science Objectives. Publ. Astron. Soc. Pac. 2019, 131, 078001. [Google Scholar] [CrossRef]
  15. Li, L.X.; Paczyński, B. Transient Events from Neutron Star Mergers. Astrophys. J. Lett. 1998, 507, L59–L62. [Google Scholar] [CrossRef]
  16. Metzger, B.D.; Fernández, R. Red or blue? A potential kilonova imprint of the delay until black hole formation following a neutron star merger. Mon. Not. R. Astron. Soc. 2014, 441, 3444–3453. [Google Scholar] [CrossRef]
  17. Abbott, B.P.; Abbott, R.; Abbott, T.D.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R.X.; Adya, V.B.; et al. Multi-messenger Observations of a Binary Neutron Star Merger. Astrophys. J. Lett. 2017, 848, L12. [Google Scholar] [CrossRef]
  18. Coulter, D.A.; Foley, R.J.; Kilpatrick, C.D.; Drout, M.R.; Piro, A.L.; Shappee, B.J.; Siebert, M.R.; Simon, J.D.; Ulloa, N.; Kasen, D.; et al. Swope Supernova Survey 2017a (SSS17a), the optical counterpart to a gravitational wave source. Science 2017, 358, 1556–1558. [Google Scholar] [CrossRef] [PubMed]
  19. Valenti, S.; Sand, D.J.; Yang, S.; Cappellaro, E.; Tartaglia, L.; Corsi, A.; Jha, S.W.; Reichart, D.E.; Haislip, J.; Kouprianov, V. The Discovery of the Electromagnetic Counterpart of GW170817: Kilonova AT 2017gfo/DLT17ck. Astrophys. J. Lett. 2017, 848, L24. [Google Scholar] [CrossRef]
  20. Tanvir, N.R.; Levan, A.J.; González-Fernández, C.; Korobkin, O.; Mandel, I.; Rosswog, S.; Hjorth, J.; D’Avanzo, P.; Fruchter, A.S.; Fryer, C.L.; et al. The Emergence of a Lanthanide-rich Kilonova Following the Merger of Two Neutron Stars. Astrophys. J. 2017, 848, L27. [Google Scholar] [CrossRef]
  21. Lipunov, V.M.; Gorbovskoy, E.; Kornilov, V.G.; Tyurina, N.; Balanutsa, P.; Kuznetsov, A.; Vlasenko, D.; Kuvshinov, D.; Gorbunov, I.; Buckley, D.A.H.; et al. MASTER Optical Detection of the First LIGO/Virgo Neutron Star Binary Merger GW170817. Astrophys. J. Lett. 2017, 850, L1. [Google Scholar] [CrossRef]
  22. Soares-Santos, M.; Holz, D.E.; Annis, J.; Chornock, R.; Herner, K.; Berger, E.; Brout, D.; Chen, H.Y.; Kessler, R.; Sako, M.; et al. The Electromagnetic Counterpart of the Binary Neutron Star Merger LIGO/Virgo GW170817. I. Discovery of the Optical Counterpart Using the Dark Energy Camera. Astrophys. J. Lett. 2017, 848, L16. [Google Scholar] [CrossRef]
  23. Arcavi, I.; Hosseinzadeh, G.; Howell, D.A.; McCully, C.; Poznanski, D.; Kasen, D.; Barnes, J.; Zaltzman, M.; Vasylyev, S.; Maoz, D.; et al. Optical emission from a kilonova following a gravitational-wave-detected neutron-star merger. Nature 2017, 551, 64–66. [Google Scholar] [CrossRef]
  24. Kasliwal, M.M.; Nakar, E.; Singer, L.P.; Kaplan, D.L.; Cook, D.O.; Van Sistine, A.; Lau, R.M.; Fremling, C.; Gottlieb, O.; Jencson, J.E.; et al. Illuminating gravitational waves: A concordant picture of photons from a neutron star merger. Science 2017, 358, 1559–1565. [Google Scholar] [CrossRef]
  25. Yang, S.; Valenti, S.; Cappellaro, E.; Sand, D.J.; Tartaglia, L.; Corsi, A.; Reichart, D.E.; Haislip, J.; Kouprianov, V. An Empirical Limit on the Kilonova Rate from the DLT40 One Day Cadence Supernova Survey. Astrophys. J. Lett. 2017, 851, L48. [Google Scholar] [CrossRef]
  26. Goldstein, A.; Veres, P.; Burns, E.; Briggs, M.S.; Hamburg, R.; Kocevski, D.; Wilson-Hodge, C.A.; Preece, R.D.; Poolakkil, S.; Roberts, O.J.; et al. An Ordinary Short Gamma-Ray Burst with Extraordinary Implications: Fermi-GBM Detection of GRB 170817A. Astrophys. J. Lett. 2017, 848, L14. [Google Scholar] [CrossRef]
  27. Savchenko, V.; Ferrigno, C.; Kuulkers, E.; Bazzano, A.; Bozzo, E.; Brandt, S.; Chenevez, J.; Courvoisier, T.J.L.; Diehl, R.; Domingo, A.; et al. INTEGRAL Detection of the First Prompt Gamma-Ray Signal Coincident with the Gravitational-wave Event GW170817. Astrophys. J. Lett. 2017, 848, L15. [Google Scholar] [CrossRef]
  28. Haggard, D.; Nynka, M.; Ruan, J.J.; Kalogera, V.; Cenko, S.B.; Evans, P.; Kennea, J.A. A Deep Chandra X-Ray Study of Neutron Star Coalescence GW170817. Astrophys. J. Lett. 2017, 848, L25. [Google Scholar] [CrossRef]
  29. Troja, E.; Piro, L.; van Eerten, H.; Wollaeger, R.T.; Im, M.; Fox, O.D.; Butler, N.R.; Cenko, S.B.; Sakamoto, T.; Fryer, C.L.; et al. The X-ray counterpart to the gravitational-wave event GW170817. Nature 2017, 551, 71–74. [Google Scholar] [CrossRef]
  30. Hallinan, G.; Corsi, A.; Mooley, K.P.; Hotokezaka, K.; Nakar, E.; Kasliwal, M.M.; Kaplan, D.L.; Frail, D.A.; Myers, S.T.; Murphy, T.; et al. A radio counterpart to a neutron star merger. Science 2017, 358, 1579–1583. [Google Scholar] [CrossRef]
  31. Margutti, R.; Berger, E.; Fong, W.; Guidorzi, C.; Alexander, K.D.; Metzger, B.D.; Blanchard, P.K.; Cowperthwaite, P.S.; Chornock, R.; Eftekhari, T.; et al. The Electromagnetic Counterpart of the Binary Neutron Star Merger LIGO/Virgo GW170817. V. Rising X-Ray Emission from an Off-axis Jet. Astrophys. J. Lett. 2017, 848, L20. [Google Scholar] [CrossRef]
  32. Yang, S.; Sand, D.J.; Valenti, S.; Cappellaro, E.; Tartaglia, L.; Wyatt, S.; Corsi, A.; Reichart, D.E.; Haislip, J.; Kouprianov, V.; et al. Optical Follow-up of Gravitational-wave Events during the Second Advanced LIGO/VIRGO Observing Run with the DLT40 Survey. Astrophys. J. 2019, 875, 59. [Google Scholar] [CrossRef]
  33. Drout, M.R.; Chornock, R.; Soderberg, A.M.; Sanders, N.E.; McKinnon, R.; Rest, A.; Foley, R.J.; Milisavljevic, D.; Margutti, R.; Berger, E.; et al. Rapidly Evolving and Luminous Transients from Pan-STARRS1. Astrophys. J. 2014, 794, 23. [Google Scholar] [CrossRef]
  34. Prentice, S.J.; Maguire, K.; Smartt, S.J.; Magee, M.R.; Schady, P.; Sim, S.; Chen, T.W.; Clark, P.; Colin, C.; Fulton, M.; et al. The Cow: Discovery of a Luminous, Hot, and Rapidly Evolving Transient. Astrophys. J. Lett. 2018, 865, L3. [Google Scholar] [CrossRef]
  35. Ho, A.Y.Q.; Perley, D.A.; Gal-Yam, A.; Lunnan, R.; Sollerman, J.; Schulze, S.; Das, K.K.; Dobie, D.; Yao, Y.; Fremling, C.; et al. A Search for Extragalactic Fast Blue Optical Transients in ZTF and the Rate of AT2018cow-like Transients. Astrophys. J. 2023, 949, 120. [Google Scholar] [CrossRef]
  36. Perley, D.A.; Mazzali, P.A.; Yan, L.; Cenko, S.B.; Gezari, S.; Taggart, K.; Blagorodnova, N.; Fremling, C.; Mockler, B.; Singh, A.; et al. The fast, luminous ultraviolet transient AT2018cow: Extreme supernova, or disruption of a star by an intermediate-mass black hole? Mon. Not. R. Astron. Soc. 2019, 484, 1031–1049. [Google Scholar] [CrossRef]
  37. Margutti, R.; Metzger, B.D.; Chornock, R.; Vurm, I.; Roth, N.; Grefenstette, B.W.; Savchenko, V.; Cartier, R.; Steiner, J.F.; Terreran, G.; et al. An Embedded X-Ray Source Shines through the Aspherical AT 2018cow: Revealing the Inner Workings of the Most Luminous Fast-evolving Optical Transients. Astrophys. J. 2019, 872, 18. [Google Scholar] [CrossRef]
  38. Thompson, T.A.; Burrows, A.; Pinto, P.A. Shock Breakout in Core-Collapse Supernovae and Its Neutrino Signature. Astrophys. J. 2003, 592, 434–456. [Google Scholar] [CrossRef]
  39. Piro, A.L.; Chang, P.; Weinberg, N.N. Shock Breakout from Type Ia Supernova. Astrophys. J. 2010, 708, 598–604. [Google Scholar] [CrossRef]
  40. Basri, G.; Borucki, W.J.; Koch, D. The Kepler Mission: A wide-field transit search for terrestrial planets. New Astron. Rev. 2005, 49, 478–485. [Google Scholar] [CrossRef]
  41. Borucki, W.J. KEPLER Mission: Development and overview. Rep. Prog. Phys. 2016, 79, 036901. [Google Scholar] [CrossRef] [PubMed]
  42. Borucki, W.J.; Koch, D.; Basri, G.; Batalha, N.; Brown, T.; Caldwell, D.; Caldwell, J.; Christensen-Dalsgaard, J.; Cochran, W.D.; DeVore, E.; et al. Kepler Planet-Detection Mission: Introduction and First Results. Science 2010, 327, 977. [Google Scholar] [CrossRef]
  43. Ricker, G.R.; Winn, J.N.; Vanderspek, R.; Latham, D.W.; Bakos, G.Á.; Bean, J.L.; Berta-Thompson, Z.K.; Brown, T.M.; Buchhave, L.; Butler, N.R.; et al. Transiting Exoplanet Survey Satellite (TESS). J. Astron. Telesc. Instrum. Syst. 2015, 1, 014003. [Google Scholar] [CrossRef]
  44. Stassun, K.G.; Oelkers, R.J.; Paegert, M.; Torres, G.; Pepper, J.; De Lee, N.; Collins, K.; Latham, D.W.; Muirhead, P.S.; Chittidi, J.; et al. The Revised TESS Input Catalog and Candidate Target List. Astron. J. 2019, 158, 138. [Google Scholar] [CrossRef]
  45. Stassun, K.G.; Oelkers, R.J.; Pepper, J.; Paegert, M.; De Lee, N.; Torres, G.; Latham, D.W.; Charpinet, S.; Dressing, C.D.; Huber, D.; et al. The TESS Input Catalog and Candidate Target List. Astron. J. 2018, 156, 102. [Google Scholar] [CrossRef]
  46. Kempton, E.M.R.; Bean, J.L.; Louie, D.R.; Deming, D.; Koll, D.D.B.; Mansfield, M.; Christiansen, J.L.; López-Morales, M.; Swain, M.R.; Zellem, R.T.; et al. A Framework for Prioritizing the TESS Planetary Candidates Most Amenable to Atmospheric Characterization. Publ. Astron. Soc. Pac. 2018, 130, 114401. [Google Scholar] [CrossRef]
  47. Gaia Collaboration; Prusti, T.; de Bruijne, J.H.J.; Brown, A.G.A.; Vallenari, A.; Babusiaux, C.; Bailer-Jones, C.A.L.; Bastian, U.; Biermann, M.; Evans, D.W.; et al. The Gaia mission. Astron. Astrophys. 2016, 595, A1. [Google Scholar] [CrossRef]
  48. Foreman-Mackey, D.; Hogg, D.W.; Lang, D.; Goodman, J. emcee: The MCMC Hammer. Publ. Astron. Soc. Pac. 2013, 125, 306. [Google Scholar] [CrossRef]
  49. Smith, K.W.; Smartt, S.J.; Young, D.R.; Tonry, J.L.; Denneau, L.; Flewelling, H.; Heinze, A.N.; Weiland, H.J.; Stalder, B.; Rest, A.; et al. Design and Operation of the ATLAS Transient Science Server. Publ. Astron. Soc. Pac. 2020, 132, 085002. [Google Scholar] [CrossRef]
  50. Tonry, J.L. An Early Warning System for Asteroid Impact. Publ. Astron. Soc. Pac. 2010, 123, 58. [Google Scholar] [CrossRef]
  51. Tonry, J.L.; Denneau, L.; Heinze, A.N.; Stalder, B.; Smith, K.W.; Smartt, S.J.; Stubbs, C.W.; Weiland, H.J.; Rest, A. ATLAS: A High-cadence All-sky Survey System. Publ. Astron. Soc. Pac. 2018, 130, 064505. [Google Scholar] [CrossRef]
  52. Shappee, B.J.; Prieto, J.L.; Grupe, D.; Kochanek, C.S.; Stanek, K.Z.; Rosa, G.D.; Mathur, S.; Zu, Y.; Peterson, B.M.; Pogge, R.W.; et al. The man behind the curtain: X-rays drive the UV through nir variability in the 2013 active galactic nucleus outburst in ngc 2617. Astrophys. J. 2014, 788, 48. [Google Scholar] [CrossRef]
  53. Sand, D. Highlights from the D<40 Mpc Sub-Day Cadence Supernova Survey, DLT40. Am. Astron. Soc. Meet. Abstr. 2023, 241, 447.07. [Google Scholar]
  54. Rehemtulla, N.; Miller, A.A.; Jegou Du Laz, T.; Coughlin, M.W.; Fremling, C.; Perley, D.A.; Qin, Y.J.; Sollerman, J.; Mahabal, A.A.; Laher, R.R.; et al. The Zwicky Transient Facility Bright Transient Survey. III. BTSbot: Automated Identification and Follow-up of Bright Transients with Deep Learning. Astrophys. J. 2024, 972, 7. [Google Scholar] [CrossRef]
  55. Rehemtulla, N.; Miller, A.; Fremling, C.; Perley, D.A.; Qin, Y.; Sollerman, J.; Mahabal, A.; Neill, J.D.; Laz, T.J.D.; Coughlin, M. SN 2023tyk: Discovery to spectroscopic classification performed fully automatically. Transient Name Serv. AstroNote 2023, 265. [Google Scholar]
  56. Rehemtulla, N.; Miller, A.A.; Coughlin, M.W.; Jegou du Laz, T. BTSbot: A Multi-input Convolutional Neural Network to Automate and Expedite Bright Transient Identification for the Zwicky Transient Facility. arXiv 2023, arXiv:2307.07618. [Google Scholar] [CrossRef]
  57. Fremling, C.; Miller, A.A.; Sharma, Y.; Dugas, A.; Perley, D.A.; Taggart, K.; Sollerman, J.; Goobar, A.; Graham, M.L.; Neill, J.D.; et al. The Zwicky Transient Facility Bright Transient Survey. I. Spectroscopic Classification and the Redshift Completeness of Local Galaxy Catalogs. Astrophys. J. 2020, 895, 32. [Google Scholar] [CrossRef]
  58. Perley, D.A.; Fremling, C.; Sollerman, J.; Miller, A.A.; Dahiwale, A.S.; Sharma, Y.; Bellm, E.C.; Biswas, R.; Brink, T.G.; Bruch, R.J.; et al. The Zwicky Transient Facility Bright Transient Survey. II. A Public Statistical Sample for Exploring Supernova Demographics. Astrophys. J. 2020, 904, 35. [Google Scholar] [CrossRef]
  59. Blagorodnova, N.; Neill, J.D.; Walters, R.; Kulkarni, S.R.; Fremling, C.; Ben-Ami, S.; Dekany, R.G.; Fucik, J.R.; Konidaris, N.; Nash, R.; et al. The SED Machine: A Robotic Spectrograph for Fast Transient Classification. Publ. Astron. Soc. Pac. 2018, 130, 035003. [Google Scholar] [CrossRef]
  60. Rigault, M.; Neill, J.D.; Blagorodnova, N.; Dugas, A.; Feeney, M.; Walters, R.; Brinnel, V.; Copin, Y.; Fremling, C.; Nordin, J.; et al. Fully automated integral field spectrograph pipeline for the SEDMachine: Pysedm. Astron. Astrophys. 2019, 627, A115. [Google Scholar] [CrossRef]
  61. Fremling, C.; Hall, X.J.; Coughlin, M.W.; Dahiwale, A.S.; Duev, D.A.; Graham, M.J.; Kasliwal, M.M.; Kool, E.C.; Mahabal, A.A.; Miller, A.A.; et al. SNIascore: Deep-learning Classification of Low-resolution Supernova Spectra. Astrophys. J. Lett. 2021, 917, L2. [Google Scholar] [CrossRef]
  62. Alard, C.; Lupton, R. ISIS: A method for optimal image subtraction. Astrophysics Source Code Library, record ascl:9909.003. 1999. [Google Scholar]
  63. Becker, A. HOTPANTS: High Order Transform of PSF ANd Template Subtraction. Astrophysics Source Code Library. 2015. [Google Scholar]
  64. Zackay, B.; Ofek, E.O.; Gal-Yam, A. Proper Image Subtraction—Optimal Transient Detection, Photometry, and Hypothesis Testing. Astrophys. J. 2016, 830, 27. [Google Scholar] [CrossRef]
  65. Bertin, E.; Arnouts, S. SExtractor: Software for source extraction. Astron. Astrophys. Suppl. Ser. 1996, 117, 393–404. [Google Scholar] [CrossRef]
  66. Wright, D.E.; Smartt, S.J.; Smith, K.W.; Miller, P.; Kotak, R.; Rest, A.; Burgett, W.S.; Chambers, K.C.; Flewelling, H.; Hodapp, K.W.; et al. Machine learning for transient discovery in Pan-STARRS1 difference imaging. Mon. Not. R. Astron. Soc. 2015, 449, 451–466. [Google Scholar] [CrossRef]
  67. Brocato, E.; Branchesi, M.; Cappellaro, E.; Covino, S.; Grado, A.; Greco, G.; Limatola, L.; Stratta, G.; Yang, S.; Campana, S.; et al. GRAWITA: VLT Survey Telescope observations of the gravitational wave sources GW150914 and GW151226. Mon. Not. R. Astron. Soc. 2018, 474, 411–426. [Google Scholar] [CrossRef]
  68. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  69. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems; Curran Associates, Inc.: New York, NY, USA, 2012. [Google Scholar] [CrossRef]
  70. Ivezić, Ž.; Kahn, S.M.; Tyson, J.A.; Abel, B.; Acosta, E.; Allsman, R.; Alonso, D.; AlSayyad, Y.; Anderson, S.F.; Andrew, J.; et al. LSST: From Science Drivers to Reference Design and Anticipated Data Products. Astrophys. J. 2019, 873, 111. [Google Scholar] [CrossRef]
  71. Kreps, J. Kafka: A Distributed Messaging System for Log Processing. 2011. Available online: https://notes.stephenholiday.com/Kafka.pdf (accessed on 23 September 2025).
  72. Förster, F.; Cabrera-Vives, G.; Castillo-Navarrete, E.; Estévez, P.A.; Sánchez-Sáez, P.; Arredondo, J.; Bauer, F.E.; Carrasco-Davis, R.; Catelan, M.; Elorrieta, F.; et al. The Automatic Learning for the Rapid Classification of Events (ALeRCE) Alert Broker. Astron. J. 2021, 161, 242. [Google Scholar] [CrossRef]
  73. Smith, K.W.; Williams, R.D.; Young, D.R.; Ibsen, A.; Smartt, S.J.; Lawrence, A.; Morris, D.; Voutsinas, S.; Nicholl, M. Lasair: The Transient Alert Broker for LSST:UK. Res. Notes Am. Astron. Soc. 2019, 3, 26. [Google Scholar] [CrossRef]
  74. Matheson, T.; Stubens, C.; Wolf, N.; Lee, C.H.; Narayan, G.; Saha, A.; Scott, A.; Soraisam, M.; Bolton, A.S.; Hauger, B.; et al. The ANTARES Astronomical Time-domain Event Broker. Astron. J. 2021, 161, 107. [Google Scholar] [CrossRef]
  75. Leoni, M.; Ishida, E.E.O.; Peloton, J.; Möller, A. Fink: Early supernovae Ia classification using active learning. Astron. Astrophys. 2022, 663, A13. [Google Scholar] [CrossRef]
  76. Nordin, J.; Brinnel, V.; van Santen, J.; Bulla, M.; Feindt, U.; Franckowiak, A.; Fremling, C.; Gal-Yam, A.; Giomi, M.; Kowalski, M.; et al. Transient processing and analysis using AMPEL: Alert management, photometry, and evaluation of light curves. Astron. Astrophys. 2019, 631, A147. [Google Scholar] [CrossRef]
  77. Patterson, M.T.; Bellm, E.C.; Rusholme, B.; Masci, F.J.; Juric, M.; Krughoff, K.S.; Golkhou, V.Z.; Graham, M.J.; Kulkarni, S.R.; Helou, G.; et al. The Zwicky Transient Facility Alert Distribution System. Publ. Astron. Soc. Pac. 2019, 131, 018001. [Google Scholar] [CrossRef]
  78. Li, G.; Hu, M.; Li, W.; Yang, Y.; Wang, X.; Yan, S.; Hu, L.; Zhang, J.; Mao, Y.; Riise, H.; et al. A shock flash breaking out of a dusty red supergiant. Nature 2024, 627, 754–758. [Google Scholar] [CrossRef]
  79. Chen, T.W.; Yang, S.; Srivastav, S.; Moriya, T.J.; Smartt, S.J.; Rest, S.; Rest, A.; Lin, H.W.; Miao, H.Y.; Cheng, Y.C.; et al. Discovery and Extensive Follow-up of SN 2024ggi, a Nearby Type IIP Supernova in NGC 3621. Astrophys. J. 2025, 983, 86. [Google Scholar] [CrossRef]
  80. Zhao, G.; Zhao, Y.H.; Chu, Y.Q.; Jing, Y.P.; Deng, L.C. LAMOST spectral survey—An overview. Res. Astron. Astrophys. 2012, 12, 723–734. [Google Scholar] [CrossRef]
  81. Kim, J.H.; Im, M.; Lee, H.M.; Chang, S.W.; Choi, H.; Paek, G.S.H. Introduction to the 7-Dimensional Telescope: Commissioning Procedures and Data Characteristics. arXiv 2024, arXiv:2406.16462. [Google Scholar] [CrossRef]
  82. Kessler, R.; Conley, A.; Jha, S.; Kuhlmann, S. Supernova Photometric Classification Challenge. arXiv 2010, arXiv:1001.5210. [Google Scholar] [CrossRef]
  83. Kessler, R.; Bassett, B.; Belov, P.; Bhatnagar, V.; Campbell, H.; Conley, A.; Frieman, J.A.; Glazov, A.; González-Gaitán, S.; Hlozek, R.; et al. Results from the Supernova Photometric Classification Challenge. Publ. Astron. Soc. Pac. 2010, 122, 1415. [Google Scholar] [CrossRef]
  84. Dark Energy Survey Collaboration; Abbott, T.; Abdalla, F.B.; Aleksić, J.; Allam, S.; Amara, A.; Bacon, D.; Balbinot, E.; Banerji, M.; Bechtol, K.; et al. The Dark Energy Survey: More than dark energy—an overview. Mon. Not. R. Astron. Soc. 2016, 460, 1270–1299. [Google Scholar] [CrossRef]
  85. Bazin, G.; Palanque-Delabrouille, N.; Rich, J.; Ruhlmann-Kleider, V.; Aubourg, E.; Le Guillou, L.; Astier, P.; Balland, C.; Basa, S.; Carlberg, R.G.; et al. The core-collapse rate from the Supernova Legacy Survey. Astron. Astrophys. 2009, 499, 653–660. [Google Scholar] [CrossRef]
  86. Villar, V.A.; Berger, E.; Miller, G.; Chornock, R.; Rest, A.; Jones, D.O.; Drout, M.R.; Foley, R.J.; Kirshner, R.; Lunnan, R.; et al. Supernova Photometric Classification Pipelines Trained on Spectroscopically Classified Supernovae from the Pan-STARRS1 Medium-deep Survey. Astrophys. J. 2019, 884, 83. [Google Scholar] [CrossRef]
  87. Ambikasaran, S.; Foreman-Mackey, D.; Greengard, L.; Hogg, D.W.; O’Neil, M. Fast Direct Methods for Gaussian Processes. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 252. [Google Scholar] [CrossRef]
  88. Yang, S.; Sollerman, J. HAFFET: Hybrid Analytic Flux FittEr for Transients. Astrophys. J. Supp. 2023, 269, 40. [Google Scholar] [CrossRef]
  89. Müller-Bravo, T.E.; Sullivan, M.; Smith, M.; Frohmaier, C.; Gutiérrez, C.P.; Wiseman, P.; Zontou, Z. PISCOLA: A data-driven transient light-curve fitter. Mon. Not. R. Astron. Soc. 2022, 512, 3266–3283. [Google Scholar] [CrossRef]
  90. Guillochon, J.; Nicholl, M.; Villar, V.A.; Mockler, B.; Narayan, G.; Mandel, K.S.; Berger, E.; Williams, P.K.G. MOSFiT: Modular Open Source Fitter for Transients. Astrophys. J. Supp. 2018, 236, 6. [Google Scholar] [CrossRef]
  91. Kennamer, N.; Ishida, E.E.O.; Gonzalez-Gaitan, S.; de Souza, R.S.; Ihler, A.; Ponder, K.; Vilalta, R.; Moller, A.; Jones, D.O.; Dai, M.; et al. Active learning with RESSPECT: Resource allocation for extragalactic astronomical transients. arXiv 2020, arXiv:2010.05941. [Google Scholar] [CrossRef]
  92. Liu, L.D.; Zhang, Y.H.; Yu, Y.W.; Du, Z.X.; Li, J.Y.; Wu, G.L.; Dai, Z.G. TransFit: An Efficient Framework for Transient Light-Curve Fitting with Time-Dependent Radiative Diffusion. arXiv 2025, arXiv:2505.13825. [Google Scholar] [CrossRef]
  93. Kessler, R.; Narayan, G.; Avelino, A.; Bachelet, E.; Biswas, R.; Brown, P.J.; Chernoff, D.F.; Connolly, A.J.; Dai, M.; Daniel, S.; et al. Models and Simulations for the Photometric LSST Astronomical Time Series Classification Challenge (PLAsTiCC). Publ. Astron. Soc. Pac. 2019, 131, 094501. [Google Scholar] [CrossRef]
  94. Kessler, R.; Bernstein, J.P.; Cinabro, D.; Dilday, B.; Frieman, J.A.; Jha, S.; Kuhlmann, S.; Miknaitis, G.; Sako, M.; Taylor, M.; et al. SNANA: A Public Software Package for Supernova Analysis. Publ. Astron. Soc. Pac. 2009, 121, 1028. [Google Scholar] [CrossRef]
  95. Sharma, Y.; Mahabal, A.A.; Sollerman, J.; Fremling, C.; Kulkarni, S.R.; Rehemtulla, N.; Miller, A.A.; Aubert, M.; Chen, T.X.; Coughlin, M.W.; et al. CCSNscore: A multi-input deep learning tool for classification of core-collapse supernovae using SED-Machine spectra. arXiv 2024, arXiv:2412.08601. [Google Scholar] [CrossRef]
  96. Turatto, M. Classification of Supernovae. In Supernovae and Gamma-Ray Bursters; Weiler, K., Ed.; Springer: Berlin, Germany, 2003; Volume 598, pp. 21–36. [Google Scholar] [CrossRef]
  97. Blondin, S.; Tonry, J.L. Determining the Type, Redshift, and Age of a Supernova Spectrum. Astrophys. J. 2007, 666, 1024–1047. [Google Scholar] [CrossRef]
  98. Martin-Brualla, R.; Pandey, R.; Bouaziz, S.; Brown, M.; Goldman, D.B. GeLaTO: Generative Latent Textured Objects. arXiv 2020, arXiv:2008.04852. [Google Scholar] [CrossRef]
  99. Muthukrishna, D.; Parkinson, D.; Tucker, B.E. DASH: Deep Learning for the Automated Spectral Classification of Supernovae and Their Hosts. Astrophys. J. 2019, 885, 85. [Google Scholar] [CrossRef]
  100. Goldwasser, S.; Yaron, O.; Sass, A.; Irani, I.; Gal-Yam, A.; Howell, D.A. The Next Generation SuperFit (NGSF) tool is now available for online execution on WISeREP. Transient Name Serv. AstroNote 2022, 191. [Google Scholar]
  101. Yaron, O.; Gal-Yam, A. WISeREP—An Interactive Supernova Data Repository. Publ. Astron. Soc. Pac. 2012, 124, 668. [Google Scholar] [CrossRef]
  102. Sollerman, J.; Yang, S.; Perley, D.; Schulze, S.; Fremling, C.; Kasliwal, M.; Shin, K.; Racine, B. Maximum luminosities of normal stripped-envelope supernovae are brighter than explosion models allow. Astron. Astrophys. 2022, 657, A64. [Google Scholar] [CrossRef]
  103. Cunningham, P.; Cord, M.; Delany, S.J. Supervised Learning. In Machine Learning Techniques for Multimedia: Case Studies on Organization and Retrieval; Cord, M., Cunningham, P., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 21–49. [Google Scholar] [CrossRef]
  104. Clarke, A.O.; Scaife, A.M.M.; Greenhalgh, R.; Griguta, V. Identifying galaxies, quasars, and stars with machine learning: A new catalogue of classifications for 111 million SDSS sources without spectra. Astron. Astrophys. 2020, 639, A84. [Google Scholar] [CrossRef]
  105. Bilicki, M.; Dvornik, A.; Hoekstra, H.; Wright, A.H.; Chisari, N.E.; Vakili, M.; Asgari, M.; Giblin, B.; Heymans, C.; Hildebrandt, H.; et al. Bright galaxy sample in the Kilo-Degree Survey Data Release 4. Selection, photometric redshifts, and physical properties. Astron. Astrophys. 2021, 653, A82. [Google Scholar] [CrossRef]
  106. Ghahramani, Z. Unsupervised Learning. In Advanced Lectures on Machine Learning: ML Summer Schools 2003, Canberra, Australia, 2–14 February 2003, Tübingen, Germany, 4–16 August 2003, Revised Lectures; Bousquet, O., von Luxburg, U., Rätsch, G., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; pp. 72–112. [Google Scholar] [CrossRef]
  107. Ikotun, A.M.; Ezugwu, A.E.; Abualigah, L.; Abuhaija, B.; Heming, J. K-means clustering algorithms: A comprehensive review, variants analysis, and advances in the era of big data. Inform. Sci. 2023, 622, 178–210. [Google Scholar] [CrossRef]
  108. McInnes, L.; Healy, J.; Astels, S. hdbscan: Hierarchical density based clustering. J. Open Source Softw. 2017, 2, 205. [Google Scholar] [CrossRef]
  109. Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining; AAAI Press: Washington, DC, USA, 1996. KDD’96. pp. 226–231. [Google Scholar]
  110. Maćkiewicz, A.; Ratajczak, W. Principal components analysis (PCA). Comput. Geosci. 1993, 19, 303–342. [Google Scholar] [CrossRef]
  111. Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
  112. McInnes, L.; Healy, J.; Melville, J. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. arXiv 2018, arXiv:1802.03426. [Google Scholar] [CrossRef]
  113. Chandola, V.; Banerjee, A.; Kumar, V. Anomaly detection: A survey. ACM Comput. Surv. 2009, 41, 1–58. [Google Scholar] [CrossRef]
  114. Abraham, B.; Chuang, A. Outlier detection and time series modeling. Technometrics 1989, 31, 241–248. [Google Scholar] [CrossRef]
  115. Pruzhinskaya, M.V.; Malanchev, K.L.; Kornilov, M.V.; Ishida, E.E.O.; Mondon, F.; Volnova, A.A.; Korolev, V.S. Anomaly detection in the Open Supernova Catalog. Mon. Not. R. Astron. Soc. 2019, 489, 3591–3608. [Google Scholar] [CrossRef]
  116. Ishida, E.E.O.; Kornilov, M.V.; Malanchev, K.L.; Pruzhinskaya, M.V.; Volnova, A.A.; Korolev, V.S.; Mondon, F.; Sreejith, S.; Malancheva, A.A.; Das, S. Active anomaly detection for time-domain discoveries. Astron. Astrophys. 2021, 650, A195. [Google Scholar] [CrossRef]
  117. Sutton, R.; Barto, A. Reinforcement Learning: An Introduction. IEEE Trans. Neural Netw. 1998, 9, 1054. [Google Scholar] [CrossRef]
  118. Nousiainen, J.; Rajani, C.; Kasper, M.; Helin, T.; Haffert, S.Y.; Vérinaud, C.; Males, J.R.; Van Gorkom, K.; Close, L.M.; Long, J.D.; et al. Toward on-sky adaptive optics control using reinforcement learning. Model-based policy optimization for adaptive optics. Astron. Astrophys. 2022, 664, A71. [Google Scholar] [CrossRef]
  119. Landman, R.; Haffert, S.Y.; Radhakrishnan, V.M.; Keller, C.U. Self-optimizing adaptive optics control with reinforcement learning for high-contrast imaging. J. Astron. Telesc. Instrum. Syst. 2021, 7, 039002. [Google Scholar] [CrossRef]
  120. Nousiainen, J.; Rajani, C.; Kasper, M.; Helin, T. Adaptive optics control using model-based reinforcement learning. Opt. Express 2021, 29, 15327. [Google Scholar] [CrossRef]
  121. Jia, P.; Jia, Q.; Jiang, T.; Liu, J. Observation Strategy Optimization for Distributed Telescope Arrays with Deep Reinforcement Learning. Astron. J. 2023, 165, 233. [Google Scholar] [CrossRef]
  122. Jia, P.; Jia, Q.; Jiang, T.; Yang, Z. A simulation framework for telescope array and its application in distributed reinforcement learning-based scheduling of telescope arrays. Astron. Comput. 2023, 44, 100732. [Google Scholar] [CrossRef]
  123. Yatawatta, S.; Avruch, I.M. Deep reinforcement learning for smart calibration of radio telescopes. Mon. Not. R. Astron. Soc. 2021, 505, 2141–2150. [Google Scholar] [CrossRef]
  124. Yatawatta, S. Hint assisted reinforcement learning: An application in radio astronomy. arXiv 2023, arXiv:2301.03933. [Google Scholar] [CrossRef]
  125. Baron, D. Machine Learning in Astronomy: A practical overview. arXiv 2019, arXiv:1904.07248. [Google Scholar] [CrossRef]
  126. Chen, S.; Kargaltsev, O.; Yang, H.; Hare, J.; Volkov, I.; Rangelov, B.; Tomsick, J. Population of X-Ray Sources in the Intermediate-age Cluster NGC 3532: A Test Bed for Machine-learning Classification. Astrophys. J. 2023, 948, 59. [Google Scholar] [CrossRef]
  127. Neira, M.; Gómez, C.; Suárez-Pérez, J.F.; Gómez, D.A.; Reyes, J.P.; Hoyos, M.H.; Arbeláez, P.; Forero-Romero, J.E. MANTRA: A Machine-learning Reference Light-curve Data Set for Astronomical Transient Event Recognition. Astrophys. J. Suppl. Ser. 2020, 250, 11. [Google Scholar] [CrossRef]
  128. de Beurs, Z.L.; Islam, N.; Gopalan, G.; Vrtilek, S.D. A Comparative Study of Machine-learning Methods for X-Ray Binary Classification. Astrophys. J. 2022, 933, 116. [Google Scholar] [CrossRef]
  129. Debosscher, J.; Sarro, L.M.; Aerts, C.; Cuypers, J.; Vandenbussche, B.; Garrido, R.; Solano, E. Automated supervised classification of variable stars. I. Methodology. Astron. Astrophys. 2007, 475, 1159–1183. [Google Scholar] [CrossRef]
  130. Richards, J.W.; Starr, D.L.; Butler, N.R.; Bloom, J.S.; Brewer, J.M.; Crellin-Quick, A.; Higgins, J.; Kennedy, R.; Rischard, M. On machine-learned classification of variable stars with sparse and noisy time-series data. Astrophys. J. 2011, 733, 10. [Google Scholar] [CrossRef]
  131. Kim, D.W.; Protopapas, P.; Byun, Y.I.; Alcock, C.; Khardon, R.; Trichas, M. Quasi-stellar object selection algorithm using time variability and machine learning: Selection of 1620 quasi-stellar object candidates from macho large magellanic cloud database. Astrophys. J. 2011, 735, 68. [Google Scholar] [CrossRef]
  132. Razzano, M.; Cuoco, E. Image-based deep learning for classification of noise transients in gravitational wave detectors. Class. Quantum Gravity 2018, 35, 095016. [Google Scholar] [CrossRef]
  133. Flamary, R. Astronomical image reconstruction with convolutional neural networks. arXiv 2016, arXiv:1612.04526. [Google Scholar] [CrossRef]
  134. Martinazzo, A.; Espadoto, M.; Hirata, N.S.T. Self-supervised Learning for Astronomical Image Classification. arXiv 2020, arXiv:2004.11336. [Google Scholar] [CrossRef]
  135. Qu, H.; Sako, M. Photometric Classification of Early-time Supernova Light Curves with SCONE. Astron. J. 2022, 163, 57. [Google Scholar] [CrossRef]
  136. Charnock, T.; Moss, A. Supernova Photometric Classification with Deep Recurrent Neural Networks. arXiv 2017, arXiv:1706.01849. [Google Scholar]
  137. Gupta, R.; Muthukrishna, D.; Lochner, M. A classifier-based approach to multiclass anomaly detection for astronomical transients. RAS Tech. Instrum. 2025, 4, rzae054. [Google Scholar] [CrossRef]
  138. Fraga, B.M.O.; Bom, C.R.; Santos, A.; Russeil, E.; Leoni, M.; Peloton, J.; Ishida, E.E.O.; Möller, A.; Blondin, S. Transient Classifiers for Fink: Benchmarks for LSST. arXiv 2024, arXiv:2404.08798. [Google Scholar] [CrossRef]
  139. Webb, S.; Lochner, M.; Muthukrishna, D.; Cooke, J.; Flynn, C.; Mahabal, A.; Goode, S.; Andreoni, I.; Pritchard, T.; Abbott, T.M.C. Unsupervised machine learning for transient discovery in deeper, wider, faster light curves. Mon. Not. R. Astron. Soc. 2020, 498, 3077–3094. [Google Scholar] [CrossRef]
  140. Mahabal, A.; Sheth, K.; Gieseke, F.; Pai, A.; Djorgovski, S.G.; Drake, A.; Graham, M.; the CSS/CRTS/PTF Collaboration. Deep-Learnt Classification of Light Curves. arXiv 2017, arXiv:1709.06257. [Google Scholar] [CrossRef]
  141. Turner, R.E. An Introduction to Transformers. arXiv 2023, arXiv:2304.10557. [Google Scholar] [CrossRef]
  142. Allam, T.; McEwen, J.D. Paying attention to astronomical transients: Introducing the time-series transformer for photometric classification. RAS Tech. Instrum. 2024, 3, 209–223. [Google Scholar] [CrossRef]
  143. Cabrera-Vives, G.; Moreno-Cartagena, D.; Astorga, N.; Reyes-Jainaga, I.; Förster, F.; Huijse, P.; Arredondo, J.; Muñoz Arancibia, A.M.; Bayo, A.; Catelan, M.; et al. ATAT: Astronomical Transformer for time series and Tabular data. arXiv 2024, arXiv:2405.03078. [Google Scholar] [CrossRef]
  144. Nun, I.; Protopapas, P.; Sim, B.; Zhu, M.; Dave, R.; Castro, N.; Pichara, K. FATS: Feature Analysis for Time Series. arXiv 2015, arXiv:1506.00010. [Google Scholar] [CrossRef]
  145. Cuoco, E.; Powell, J.; Cavaglià, M.; Ackley, K.; Bejger, M.; Chatterjee, C.; Coughlin, M.; Coughlin, S.; Easter, P.; Essick, R.; et al. Enhancing gravitational-wave science with machine learning. Mach. Learn. Sci. Technol. 2020, 2, 011002. [Google Scholar] [CrossRef]
  146. Malik, A.; Moster, B.P.; Obermeier, C. Exoplanet detection using machine learning. Mon. Not. R. Astron. Soc. 2022, 513, 5505–5516. [Google Scholar] [CrossRef]
  147. de la Calleja, J.; Fuentes, O. Machine learning and image analysis for morphological galaxy classification. Mon. Not. R. Astron. Soc. 2004, 349, 87–93. [Google Scholar] [CrossRef]
  148. Wagstaff, K.L.; Tang, B.; Thompson, D.R.; Khudikyan, S.; Wyngaard, J.; Deller, A.T.; Palaniswamy, D.; Tingay, S.J.; Wayth, R.B. A Machine Learning Classifier for Fast Radio Burst Detection at the VLBA. Publ. Astron. Soc. Pac. 2016, 128, 084503. [Google Scholar] [CrossRef]
  149. Zhang, Y.G.; Gajjar, V.; Foster, G.; Siemion, A.; Cordes, J.; Law, C.; Wang, Y. Fast Radio Burst 121102 Pulse Detection and Periodicity: A Machine Learning Approach. Astrophys. J. 2018, 866, 149. [Google Scholar] [CrossRef]
  150. Wu, D.; Cao, H.; Lv, N.; Fan, J.; Tan, X.; Yang, S. Feature Matching Conditional GAN for Fast Radio Burst Localization with Cluster-fed Telescope. Astrophys. J. Lett. 2019, 887, L10. [Google Scholar] [CrossRef]
  151. Yang, X.; Zhang, S.B.; Wang, J.S.; Hobbs, G.; Sun, T.R.; Manchester, R.N.; Geng, J.J.; Russell, C.J.; Luo, R.; Tang, Z.F.; et al. 81 New candidate fast radio bursts in Parkes archive. Mon. Not. R. Astron. Soc. 2021, 507, 3238–3245. [Google Scholar] [CrossRef]
  152. Adámek, K.; Armour, W. Single-pulse Detection Algorithms for Real-time Fast Radio Burst Searches Using GPUs. Astrophys. J. Suppl. Ser. 2020, 247, 56. [Google Scholar] [CrossRef]
  153. Agarwal, D.; Aggarwal, K.; Burke-Spolaor, S.; Lorimer, D.R.; Garver-Daniels, N. FETCH: A deep-learning based classifier for fast transient classification. Mon. Not. R. Astron. Soc. 2020, 497, 1661–1674. [Google Scholar] [CrossRef]
  154. Bhatporia, S.; Walters, A.; Murugan, J.; Weltman, A. A Topological Data Analysis of the CHIME/FRB Catalogues. arXiv 2023, arXiv:2311.03456. [Google Scholar] [CrossRef]
  155. Chen, B.H.; Hashimoto, T.; Goto, T.; Kim, S.J.; Santos, D.J.D.; On, A.Y.L.; Lu, T.Y.; Hsiao, T.Y.Y. Uncloaking hidden repeating fast radio bursts with unsupervised machine learning. Mon. Not. R. Astron. Soc. 2021, 509, 1227–1236. [Google Scholar] [CrossRef]
  156. Zhu-Ge, J.M.; Luo, J.W.; Zhang, B. Machine learning classification of CHIME fast radio bursts-II. Unsupervised methods. Mon. Not. R. Astron. Soc. 2023, 519, 1823–1836. [Google Scholar] [CrossRef]
  157. Yang, X.; Zhang, S.B.; Wang, J.S.; Wu, X.F. Classifying FRB spectrograms using nonlinear dimensionality reduction techniques. Mon. Not. R. Astron. Soc. 2023, 522, 4342–4351. [Google Scholar] [CrossRef]
  158. Luo, J.W.; Zhu-Ge, J.M.; Zhang, B. Machine learning classification of CHIME fast radio bursts-I. Supervised methods. Mon. Not. R. Astron. Soc. 2022, 518, 1629–1641. [Google Scholar] [CrossRef]
  159. Sun, W.P.; Zhang, J.G.; Li, Y.; Hou, W.T.; Zhang, F.W.; Zhang, J.F.; Zhang, X. Exploring the Key Features of Repeating Fast Radio Bursts with Machine Learning. Astrophys. J. 2025, 980, 185. [Google Scholar] [CrossRef]
  160. Qiang, D.C.; Zheng, J.; You, Z.Q.; Yang, S. Unsupervised Machine Learning for Classifying CHIME Fast Radio Bursts and Investigating Empirical Relations. Astrophys. J. 2025, 982, 16. [Google Scholar] [CrossRef]
  161. Raquel, B.J.R.; Hashimoto, T.; Goto, T.; Chen, B.H.; Uno, Y.; Hsiao, T.Y.Y.; Kim, S.J.; Ho, S.C.C. Machine learning classification of repeating FRBs from FRB 121102. Mon. Not. R. Astron. Soc. 2023, 524, 1668–1691. [Google Scholar] [CrossRef]
  162. Chen, B.H.; Hashimoto, T.; Goto, T.; Raquel, B.J.R.; Uno, Y.; Kim, S.J.; Hsiao, T.Y.Y.; Ho, S.C.C. Classifying a frequently repeating fast radio burst, FRB 20201124A, with unsupervised machine learning. Mon. Not. R. Astron. Soc. 2023, 521, 5738–5745. [Google Scholar] [CrossRef]
  163. Ghirlanda, G.; Nava, L.; Ghisellini, G.; Celotti, A.; Firmani, C. Short versus long gamma-ray bursts: Spectra, energetics, and luminosities. Astron. Astrophys. 2009, 496, 585–595. [Google Scholar] [CrossRef]
  164. Rastinejad, J.C.; Gompertz, B.P.; Levan, A.J.; Fong, W.f.; Nicholl, M.; Lamb, G.P.; Malesani, D.B.; Nugent, A.E.; Oates, S.R.; Tanvir, N.R.; et al. A kilonova following a long-duration gamma-ray burst at 350 Mpc. Nature 2022, 612, 223–227. [Google Scholar] [CrossRef] [PubMed]
  165. Troja, E.; Fryer, C.L.; O’Connor, B.; Ryan, G.; Dichiara, S.; Kumar, A.; Ito, N.; Gupta, R.; Wollaeger, R.T.; Norris, J.P.; et al. A nearby long gamma-ray burst from a merger of compact objects. Nature 2022, 612, 228–231. [Google Scholar] [CrossRef] [PubMed]
  166. Levan, A.J.; Gompertz, B.P.; Salafia, O.S.; Bulla, M.; Burns, E.; Hotokezaka, K.; Izzo, L.; Lamb, G.P.; Malesani, D.B.; Oates, S.R.; et al. Heavy-element production in a compact object merger observed by JWST. Nature 2024, 626, 737–741. [Google Scholar] [CrossRef] [PubMed]
  167. Zhu, S.Y.; Sun, W.P.; Ma, D.L.; Zhang, F.W. Classification of Fermi gamma-ray bursts based on machine learning. Mon. Not. R. Astron. Soc. 2024, 532, 1434–1443. [Google Scholar] [CrossRef]
  168. Yang, J.; Ai, S.; Zhang, B.B.; Zhang, B.; Liu, Z.K.; Wang, X.I.; Yang, Y.H.; Yin, Y.H.; Li, Y.; Lü, H.J. A long-duration gamma-ray burst with a peculiar origin. Nature 2022, 612, 232–235. [Google Scholar] [CrossRef]
  169. Du, Z.; Lü, H.; Yuan, Y.; Yang, X.; Liang, E. The Progenitor and Central Engine of a Peculiar GRB 230307A. Astrophys. J. Lett. 2024, 962, L27. [Google Scholar] [CrossRef]
  170. Garcia-Cifuentes, K.; Becerra, R.; De Colle, F. ClassiPyGRB: Machine Learning-Based Classification and Visualization of Gamma Ray Bursts using t-SNE. J. Open Source Softw. 2024, 9, 5923. [Google Scholar] [CrossRef]
  171. Junell, A.; Sasli, A.; Fontinele Nunes, F.; Xu, M.; Border, B.; Rehemtulla, N.; Rizhko, M.; Qin, Y.J.; Jegou Du Laz, T.; Le Calloch, A.; et al. Applying multimodal learning to Classify transient Detections Early (AppleCiDEr) I: Data set, methods, and infrastructure. arXiv 2025, arXiv:2507.16088. [Google Scholar] [CrossRef]
  172. Aleo, P.D.; Engel, A.W.; Narayan, G.; Angus, C.R.; Malanchev, K.; Auchettl, K.; Baldassare, V.F.; Berres, A.; de Boer, T.J.L.; Boyd, B.M.; et al. Anomaly Detection and Approximate Similarity Searches of Transients in Real-time Data Streams. Astrophys. J. 2024, 974, 172. [Google Scholar] [CrossRef]
  173. Biswas, B.; Ishida, E.E.O.; Peloton, J.; Möller, A.; Pruzhinskaya, M.V.; de Souza, R.S.; Muthukrishna, D. Enabling the discovery of fast transients. A kilonova science module for the Fink broker. Astron. Astrophys. 2023, 677, A77. [Google Scholar] [CrossRef]
  174. Dillmann, S.; Martínez-Galarza, J.R.; Soria, R.; Stefano, R.D.; Kashyap, V.L. Representation learning for time-domain high-energy astrophysics: Discovery of extragalactic fast X-ray transient XRT 200515. Mon. Not. R. Astron. Soc. 2025, 537, 931–955. [Google Scholar] [CrossRef]
Figure 1. The number of reported transients has increased rapidly. This plot illustrates the evolution of the number of public transients (blue open circles and solid line) and classified supernovae (red open circles and solid line) from 2005 to 2022. The inset subplot provides a zoomed-in view of the data from 2021 to 2024 (the green shaded area), showing the monthly variation in the counts for both categories along with their cumulative distributions on a log scale. All data were queried from https://www.wis-tns.org/.
Figure 2. Workflow diagram for transient observation and the machine learning algorithms used for analysis and prediction. Depending on whether there is a trigger, sky-survey observations are divided into two categories. The observed images are first corrected for extinction and classified as real or bogus, with key information such as coordinates and brightness stored in a public database. Candidates are then followed up with photometry and spectroscopy, providing data for machine learning models to determine whether they are kilonovae. In either case, further analysis can be conducted through SED fitting and spectral analysis, deriving the probability that the candidate is a new transient through physical analysis and model comparison in order to guide survey strategies.
Figure 3. Visualization of subtracted image stamps as machine learning features for real-bogus classification. We construct feature vectors, following the approach illustrated in Figures 3 and 4 of [66], using images from the VST. These images are publicly available through the ESO archive (http://archive.eso.org/cms.html), and their labels were vetted by the author as part of the GRAWITA [67] project during the LIGO-Virgo-KAGRA Collaboration O1 observing run. In each subplot, the left side shows a 20 by 20 pixel stamp image, which is flattened into the 400-dimensional feature vector shown on the right by concatenating the pixel values row by row.
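To make the feature construction of Figure 3 concrete, the snippet below gives a minimal sketch, not the actual GRAWITA/VST pipeline: randomly generated 20 by 20 stamps stand in for real difference-image cutouts, each stamp is flattened row by row into a 400-dimensional vector, and a scikit-learn random forest is trained as a real-bogus classifier. In practice, the placeholder labels would be replaced by human-vetted tags, and convolutional networks acting directly on the 2D stamps generally outperform classifiers built on flattened pixels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)

# Placeholder data: in a real pipeline these would be 20x20 cutouts from the
# subtracted (difference) images, with labels vetted by human scanners.
n_stamps = 1000
stamps = rng.normal(size=(n_stamps, 20, 20))  # difference-image stamps
labels = rng.integers(0, 2, size=n_stamps)    # 1 = real, 0 = bogus

# Flatten each stamp row by row into a 400-dimensional feature vector,
# as illustrated in Figure 3.
features = stamps.reshape(n_stamps, -1)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Precision and recall for the real and bogus classes on held-out stamps.
print(classification_report(y_test, clf.predict(X_test)))
```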
Figure 4. The light curve and type predictions of SN 2023tyk. The left panel shows the ZTF g- and r-band light curves of SN 2023tyk obtained from https://alerce.online/object/ZTF23abhvlji, accessed on 24 September 2025; all of these data products are embedded within the ZTF alert packets. The black and red vertical lines indicate the epochs when the object was first reported to the TNS as a new transient and when it was subsequently classified as a Type Ia supernova, respectively. The right panel shows the machine learning-based assessment of the light curve with the astrorapid package.
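Readers who wish to retrieve the public photometry shown in Figure 4 can do so through the ALeRCE broker. The sketch below assumes the alerce Python client and its query_lightcurve method; the field names mjd, fid, magpsf, and sigmapsf follow the ZTF alert schema, but the current client interface should be checked against the ALeRCE documentation before use.

```python
# Minimal sketch for retrieving the public ZTF light curve of SN 2023tyk
# (ZTF23abhvlji) via the ALeRCE broker. It assumes the `alerce` client exposes
# a `query_lightcurve` method returning detections as a list of dictionaries;
# consult https://alerce.online for the current API.
from alerce.core import Alerce

client = Alerce()
lc = client.query_lightcurve("ZTF23abhvlji", format="json")

# Each detection carries the modified Julian date, the filter id (1 = g, 2 = r),
# and the PSF magnitude with its uncertainty.
for det in lc["detections"]:
    print(det["mjd"], det["fid"], det["magpsf"], det["sigmapsf"])
```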
Figure 5. An example illustrating the astrorapid prediction for a supernova, showing how the relative luminosity and the classification probabilities of an astronomical event evolve over time. The upper panel shows the relative luminosity as a function of days since trigger, with the blue and orange data points (and their error bars) indicating observations in different passbands. The lower panel displays the classification probabilities of various transient types (such as supernovae and kilonovae) over time, with different colored curves representing different classes. The epoch marked at t0 = −7.3 days indicates a key reference time for the event. All data were queried from https://astrorapid.readthedocs.io/en/latest/, accessed on 24 September 2025.
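Figure 5 highlights the defining output of early-epoch classifiers such as astrorapid: class probabilities that are updated every time a new photometric point arrives. The sketch below mimics this behaviour with a generic scikit-learn classifier evaluated on growing prefixes of a toy light curve. It is a schematic stand-in under stated assumptions (random placeholder training data and deliberately crude summary features), not the recurrent network used by astrorapid.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def summary_features(times, fluxes):
    """Crude placeholder features from a partial light curve:
    latest flux, mean flux, and a simple overall slope."""
    slope = 0.0
    if len(times) > 1:
        slope = (fluxes[-1] - fluxes[0]) / (times[-1] - times[0])
    return [fluxes[-1], float(np.mean(fluxes)), slope]

rng = np.random.default_rng(1)

# Placeholder training set: feature vectors for two classes
# (e.g., 0 = SN Ia-like, 1 = other), standing in for a real labelled sample.
X_train = rng.normal(size=(500, 3))
y_train = rng.integers(0, 2, size=500)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# A toy light curve: probabilities are re-evaluated whenever a new epoch arrives,
# producing probability-versus-time tracks like the lower panel of Figure 5.
times = np.arange(0.0, 30.0, 2.0)
fluxes = np.exp(-0.5 * ((times - 15.0) / 6.0) ** 2)  # bell-shaped toy transient

for k in range(2, len(times) + 1):
    probs = clf.predict_proba([summary_features(times[:k], fluxes[:k])])[0]
    print(f"t = {times[k - 1]:5.1f} d   P(class 0) = {probs[0]:.2f}   P(class 1) = {probs[1]:.2f}")
```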
Figure 6. The supernova classification scheme, adapted from Figure 1 of [96]. At the center is the “Supernova Zoo,” which sorts supernovae into different types through a series of yes/no questions. The left side addresses thermonuclear supernovae (thermonuclear SNe), including Type Ia and related examples, while the right side covers core-collapse supernovae (core-collapse SNe), further divided into stripped-envelope supernovae (stripped-envelope SNe) and other types. Each branch is refined by specific characteristics and observational data, providing a systematic way to understand and classify the different types of supernovae.
Figure 7. Spectral comparisons, reproduced from Figure 4 of [102]. The observed spectra under evaluation are shown in red, with their smoothed counterparts plotted in black. The telescope used for each spectrum is indicated. For reference, the closest template spectra identified with SNID are shown in yellow. All spectra displayed here are publicly available through WISeREP.
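The template comparisons in Figure 7 were produced with SNID, which classifies a spectrum by cross-correlating it against libraries of labelled template spectra. The toy example below illustrates only the core idea, normalizing the spectra and picking the template with the highest cross-correlation peak; it is not the SNID algorithm, which additionally works on a log-wavelength grid, apodizes and band-pass filters the spectra, and reports the rlap quality metric. The wavelength grid, mock Si II feature, and template names are illustrative placeholders.

```python
import numpy as np

def best_template_match(wave, flux, templates):
    """Toy spectral template matching: continuum-subtract and normalize, then
    cross-correlate the input spectrum against each template (assumed to share
    the same wavelength grid) and return (name, peak correlation, pixel shift)."""
    def normalize(f):
        f = f - np.median(f)
        return f / (np.std(f) + 1e-12)

    target = normalize(flux)
    best = (None, -np.inf, 0)
    for name, tmpl_flux in templates.items():
        tmpl = normalize(tmpl_flux)  # real spectra must first be resampled to a common grid
        corr = np.correlate(target, tmpl, mode="full")
        shift = int(np.argmax(corr)) - (len(target) - 1)
        if corr.max() > best[1]:
            best = (name, float(corr.max()), shift)
    return best

# Toy data: an 'SN Ia-like' template with a mock Si II absorption trough matches
# the observed spectrum better than a featureless 'galaxy-like' one. Real
# applications would use the SNID or NGSF template libraries instead.
wave = np.linspace(4000.0, 8000.0, 800)
sn_feature = -np.exp(-0.5 * ((wave - 6150.0) / 60.0) ** 2)
observed = 1.0 + sn_feature + np.random.default_rng(0).normal(0.0, 0.05, wave.size)
templates = {"SN Ia-like": 1.0 + sn_feature, "galaxy-like": np.ones_like(wave)}

print(best_template_match(wave, observed, templates))
```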
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
