Article

Computing Transiting Exoplanet Parameters with 1D Convolutional Neural Networks

by Santiago Iglesias Álvarez 1,*, Enrique Díez Alonso 1,2,*, María Luisa Sánchez Rodríguez 1,3, Javier Rodríguez Rodríguez 1, Saúl Pérez Fernández 1 and Francisco Javier de Cos Juez 1,4

1 Instituto Universitario de Ciencias y Tecnologías Espaciales de Asturias (ICTEA), C. Independencia 13, 33004 Oviedo, Spain
2 Departamento de Matemáticas, Facultad de Ciencias, Universidad de Oviedo, 33007 Oviedo, Spain
3 Departamento de Física, Universidad de Oviedo, 33007 Oviedo, Spain
4 Departamento de Explotación y Prospección Minera, Universidad de Oviedo, 33004 Oviedo, Spain
* Authors to whom correspondence should be addressed.
Axioms 2024, 13(2), 83; https://doi.org/10.3390/axioms13020083
Submission received: 29 November 2023 / Revised: 21 January 2024 / Accepted: 23 January 2024 / Published: 26 January 2024

Abstract

The transit method allows the detection and characterization of planetary systems by analyzing stellar light curves. Convolutional neural networks appear to offer a viable solution for automating these analyses. In this research, two 1D convolutional neural network models, which work with simulated light curves in which transit-like signals were injected, are presented. One model operates on complete light curves and estimates the orbital period, and the other operates on phase-folded light curves and estimates the semimajor axis of the orbit and the square of the planet-to-star radius ratio. Both models were tested on real TESS light curves with confirmed planets to ensure that they can work with real data. The results show that 1D CNNs are able to characterize transiting exoplanets from their host star's detrended light curve and, furthermore, reduce both the time and computational cost required compared with current detection and characterization algorithms.

1. Introduction

Exoplanet detection is one of the most relevant fields in astrophysics nowadays. Its origins trace back to 1992, when three exoplanets were discovered orbiting the pulsar PSR 1257+12 [1] from data collected by the 305 m Arecibo radio telescope. This discovery opened a research area that remains active today, even though such a planetary system was not what researchers were looking for, since the environment around a pulsar is completely different from that around a star capable of hosting planets similar to the Earth.
Over the years, various detection techniques have been established. One of the most used is the transit method, which consists of detecting periodic dimmings in stellar light curves (flux as a function of time) caused by an exoplanet (or several) crossing the line of sight between its host star and the telescope monitoring it. This is probably the most widely used technique nowadays, as a single telescope can monitor thousands of stars at the same time and a large amount of photometric data is available from different surveys. The first exoplanet discovery with this technique took place in the year 2000, when [2,3] detected a planet transiting the star HD 209458. The dimmings detected through this technique are described by the Mandel and Agol theoretical shape [4], which takes into account an optical effect known as limb darkening, which makes the star appear less bright at the edges than at the center. From these models, it is possible to estimate the orbital period (P), which is the temporal distance between two consecutive transits; the planet-to-star radius ratio (R_p/R_⋆, denoted R_p[R_⋆]), which is related to the transit depth; the semimajor axis of the orbit in terms of the stellar radius (a/R_⋆, denoted a[R_⋆]); and the orbital plane inclination angle (i).
The main challenges in searching for and characterizing transit-like signals in light curves are the high computational cost required to analyze the large datasets of light curves available from different surveys, the large amount of time required to visually inspect them, and the fact that the stellar noise present in light curves can make transit detection considerably more difficult.
One of the most relevant solutions to these problems comes from artificial intelligence (AI). If an AI model were able to distinguish between transit-like signals and noise, it would reduce the number of light curves that need to be analyzed with current algorithms.
Current algorithms, such as box least squares (BLS) [5] and transit least squares (TLS) [6], among others, can be classified mainly into Markov chain Monte Carlo (MCMC) methods and least squares algorithms. The latter, including BLS and TLS, search for periodic transit-like signals in the light curves and compute some of the most relevant parameters related to the transit method (such as P, R_p[R_⋆], etc.). However, BLS needs about 3 s per light curve and TLS about 10 s (taking into account the simulated light curves explained in Section 2.2 and the server used in this research (see Section 3)), making the analysis of thousands of light curves within a short timeframe exceedingly challenging.
The TESS (Transiting Exoplanet Survey Satellite) mission [7] is a space telescope launched in April 2018, whose main goal is to discover exoplanets smaller than Neptune through the transit method, orbiting stars bright enough to allow the characterization of their companions' atmospheres. The telescope is composed of 4 cameras, each with 7 lenses, allowing it to monitor a region of 24° × 96° during a sector (27 days). It took data with long and short cadence (30 min and 2 min, respectively). Its prime mission started in July 2018 and finished in April 2020. Currently, it is carrying out an extended mission, which started in May 2020. It is important to remark that TESS light curves obtained from full frame images (FFIs) usually present high noise levels for less bright stars, which makes planetary detection and characterization considerably more difficult.
Understanding exoplanet demographics as a function of the main stellar parameters is important not only for checking and improving planetary formation and evolution models, but also for studying planetary habitability [8]. The main parameters on which to compute demographics can be split into three main groups: planetary system, host star, and surrounding environment. First of all, it is important to remark that all detection techniques have clear biases regarding which types of planets they are able to detect, which strongly conditions the study of planetary demographics. One example is the radial velocity (RV) technique, which consists of measuring Doppler shifts in stellar spectra due to the gravitational interaction between the planets and their host star, and which is highly dependent on the planet-to-star mass ratio. From [9,10], it was estimated that ∼20% of solar-type stars host a giant exoplanet (i.e., M_P > 0.3 M_J, where M_J is the Jupiter mass) within 20 AU. The main contributions of RV research to planet demographics ([11,12], among others) include that, in solar-type stars and considering the short orbital period regime, low-mass planets are up to an order of magnitude more frequent than giant ones. In addition, the giant planet occurrence rate increases with the period up to about 3 AU from the host star and, for planets closer than 2.5 AU, these rates also increase with stellar mass and metallicity ([Fe/H]). Finally, Neptune-like planets are the most frequent beyond the frost line [13] (the minimum distance from the central protostar in a solar nebula at which the temperature drops sufficiently to allow the presence of volatile compounds such as water). Other techniques, such as transits, also contribute to planetary demographics. It is necessary to highlight the contributions of the Kepler mission [14]. Its exoplanet data allowed the study of the bimodality in the small-planet size distribution [15] and the discovery of the Neptune desert [16], a term used to describe the zone near a star (P < 2–4 d) where an absence of Neptune-sized exoplanets is observed. In addition, as most stars appear to host a super-Earth (or sub-Neptune), a type of planet that is not present in the Solar System, it seems that our planetary system architecture might not be very common. All transit-based detection facilities in general, and TESS in particular, show clear biases in the planetary systems found. Statistics from [17] show that it is more common to find planetary systems with low orbital periods, as these produce a greater number of transits in the light curves. Additionally, planets with low planet-to-star radius ratios and semimajor axes are more frequent.
Machine learning (ML) techniques allow transit-like signals to be detected and fitted in light curves without human judgment, thus reducing the overall time and computational cost required to analyze them. The first approach was the Robovetter project [18], which was used to create the seventh Kepler planet candidate (PC) catalog, employing a decision tree to classify threshold crossing events (TCEs), which are periodic transit-like signals. Others, such as Signal Detection using Random forest Algorithm (SIDRA) [19], used random forests to classify TCEs depending on different features related to transit-like signals. Artificial neural networks (ANNs), in particular convolutional neural networks (CNNs), were later found to be a better solution. The results from [20,21,22,23,24,25,26,27,28] show that CNNs perform better in transit-like signal detection, as these algorithms are designed for pattern recognition. One example is our previous work [28], in which the performance of 1D CNNs in transit detection was demonstrated by training a model on simulated K2 data. Nowadays, as current and future observing facilities provide very large datasets composed of many thousands of targets, automatic methods, such as CNNs, are crucial for analyzing all of them.
In this research, we go a step further. As our previous results showed, 1D CNNs are able to detect the presence of transit-like signals in light curves. The aim of this research is to develop 1D CNN models that are able to extract different planetary parameters from light curves in which transit-like signals are known to be present. Other detection techniques can also be automated with ML. One example was carried out in [29], which studied how deep learning techniques can estimate the planetary mass from RV data.
Our models were trained with simulated light curves that mimic those expected for the TESS mission, as there are not enough confirmed planets detected with TESS (with their respective host stars' light curves) to train a CNN model. Simulated TESS light curves were used instead of K2 ones because, apart from checking the performance of CNNs in transit-like signal characterization, another aim is to check their performance with light curves from another survey. Our light curve simulator was adapted to create both complete and phase-folded light curves. To phase-fold a light curve, it is first necessary to know the orbital period of the planet (computed as the distance between two consecutive transits). Once this parameter is known, a phase-folded light curve can be computed by folding the complete light curve as a function of the orbital period. This is crucial because, taking into account the most common observing cadence of the main telescopes (about 30 min), a transit lasting a few hours may be represented by only a few points, which makes the transit model fitting considerably difficult. By phase-folding a light curve, the transit shape is much more precise, as it is composed of all the in-transit data points from the complete light curve. From the complete light curves, our model is able to compute the orbital period, and from the phase-folded light curve, it calculates the planet-to-star radius ratio and the semimajor axis in terms of the host star radius (see Section 3).
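As an illustration of the folding step described above, the following minimal Python sketch (not the authors' code; the function and array names are ours) folds a light curve on a given period and epoch:

```python
import numpy as np

def phase_fold(time, flux, period, t0):
    """Fold a light curve on a given period, centering the transit at phase 0."""
    # Phase in [-0.5, 0.5), computed as ((t - t0)/P + 0.5) mod 1 - 0.5
    phase = ((time - t0) / period + 0.5) % 1.0 - 0.5
    order = np.argsort(phase)  # sort so the folded curve is monotonic in phase
    return phase[order], flux[order]

# Example with a synthetic time series (27-day sector, 30 min cadence)
time = np.arange(0.0, 27.0, 30.0 / 60.0 / 24.0)
flux = np.ones_like(time)
phase, folded_flux = phase_fold(time, flux, period=3.5, t0=1.2)
```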
The rest of this paper is structured as follows: In Section 2, the materials and methods used during the research are explained. More concretely, in Section 2.1, the theoretical transit shape, which appears in all the light curves with which our models are trained, is explained; in Section 2.2, light curve simulation is detailed; and in Section 2.3, the structure of our model is shown in detail. In Section 3, the training, test, and validation processes of the model are explained, and also the statistics related to the predicted values obtained during the test process and the model test on real TESS data are shown and discussed. Finally, in Section 4, the conclusions of all the research are outlined.

2. Materials and Methods

In this section, the main materials and methods used during this research are introduced, including the explanation of the theoretical transit shape, the light curve simulation, and CNNs, placing special emphasis on the models used.

2.1. Transit Shape

Theoretically, a transit can be described as a trapezoid in which the amount of flux reaching the detector decreases while the planet overlaps the star. Three main parameters can be derived from its shape: the transit depth (ΔF), the transit duration (t_T), and the duration of the flat region (t_F), which is observed while the whole planet overlaps the star (see Figure 1).
The flux that arrives at the detector monitoring the star can be expressed as [4]

$F(a[R_\star], R_p[R_\star]) = 1 - \lambda(a[R_\star], R_p[R_\star]),$    (1)

where F(a[R_⋆], R_p[R_⋆]) is the flux that reaches the telescope and λ(a[R_⋆], R_p[R_⋆]) is the fraction of the flux blocked by the planet. As the planet overlaps the star, the amount of flux that arrives decreases.
Actually, the transit shape is more complex, as there is an optical effect, known as limb darkening, that makes the star appear less bright at the edges than at the center. This effect causes the transit to be rounded. The most accurate transit shape taking this phenomenon into account is described in [4] (the Mandel and Agol theoretical shape). The limb darkening formulation adopted in the Mandel and Agol shape is described in [30], which describes how the intensity emitted by the star changes as a function of the position on the stellar disk from which the radiation comes:
$\frac{I(\mu)}{I(0)} = 1 - \sum_{m=1}^{4} c_m \left(1 - \mu^{m/2}\right),$    (2)
with μ = √(1 − r̃²), where r̃ ∈ [0, 1] is the normalized radial coordinate on the disk of the star, c_m are the limb darkening coefficients, and I(0) is the intensity at the center of the star. Another common parameterization of this phenomenon is given by a quadratic law [31], where u_1 and u_2 are the quadratic limb darkening coefficients:
$\frac{I(\mu)}{I(0)} = 1 - u_1 \cdot (1 - \mu) - u_2 \cdot (1 - \mu)^2 .$    (3)
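As a quick check of the quadratic parameterization above, the short Python sketch below (illustrative only; the coefficient values are made up) evaluates the law across the stellar disk:

```python
import numpy as np

def quadratic_limb_darkening(r, u1, u2):
    """Normalized intensity I(mu)/I(0) for the quadratic law, with r the
    normalized radial coordinate on the stellar disk (0 = center, 1 = limb)."""
    mu = np.sqrt(1.0 - r**2)
    return 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu) ** 2

# Example: intensity drops toward the limb; for u1 + u2 = 1 (as in Section 2.2)
# it goes from 1.0 at the center to 0.0 at the limb.
r = np.linspace(0.0, 1.0, 5)
print(quadratic_limb_darkening(r, u1=0.4, u2=0.6))
```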
Taking into account the limb darkening effect, the Mandel and Agol theoretical shape models transits as
$F(a[R_\star], R_p[R_\star]) = \left[\int_0^1 \mathrm{d}r\, 2r\, I(r)\right]^{-1} \int_0^1 \mathrm{d}r\, I(r)\, \frac{\mathrm{d}\left[\tilde{F}(a[R_\star]/r,\, R_p[R_\star]/r)\right]}{\mathrm{d}r},$    (4)
where F̃(a[R_⋆]/r, R_p[R_⋆]/r) is the transit shape without the limb darkening effect. An example of the Mandel and Agol transit shape obtained with TLS from a simulated light curve (see Section 2.2) is shown in Figure 2. The value t_F is much more difficult to determine. However, as previously mentioned, it is still possible to estimate different parameters, as in the case of R_p[R_⋆], which is related to the square root of the mean transit depth. For this reason, it is thought that convolutional neural networks, which learn from the shape of the input data, could be a solution. They could infer the main parameters from a phase-folded light curve. Other parameters, such as the orbital period, should be inferred from the complete light curve, because its value is the mean distance between two consecutive transits.

2.2. Light Curve Simulation

The aim of this research is the creation of a CNN architecture that is able to predict the values of P, R_p[R_⋆], and a[R_⋆] and that is trained with TESS light curves. However, as a CNN needs a large dataset to train all its parameters, and there are not enough real light curves with confirmed planetary transits to train it, it was decided to use simulated light curves to train and test the models. This is why the light curve simulator used in our previous work was adapted to TESS data. It is important to remark that TESS light curves are shorter than K2 ones, as TESS observes a sector for 27 days instead of 75 days (the mean duration of K2 campaigns). In contrast, as both have a mean observing cadence of 30 min, the observing cadence was not changed. However, less bright stars were selected in order to increase the noise levels, so that the network would also learn to characterize transits where noise levels make this task considerably more difficult.
Before explaining our light curve simulator, it is necessary to clarify that stellar light curves are affected by stellar variability phenomena, such as rotation, flares, and pulsations, which induce trends that have to be removed before analyzing them. Light curves from which these trends have been removed are usually known as detrended (or normalized) light curves.
To simulate light curves, the Batman package [32], which allows the creation of theoretical normalized light curves with transits (i.e., detrended light curves with transits but without noise), was used. The main parameters needed for creating the transit models are the orbital period (P); the planet-to-star radius ratio (R_p[R_⋆]); the semimajor axis in terms of the host star's radius (a[R_⋆]); the epoch (t_0), which is the time at which the first transit takes place; the inclination angle of the orbital plane (i [deg]); and the quadratic limb darkening coefficients (u_1 and u_2).
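For reference, a minimal sketch of how such a noiseless transit model can be generated with the batman package is shown below. The parameter values are arbitrary illustrations, not the ones used to build the training set, and a circular orbit (ecc = 0) is assumed:

```python
import batman
import numpy as np

params = batman.TransitParams()
params.t0 = 1.2                 # epoch of first transit [days]
params.per = 3.5                # orbital period P [days]
params.rp = 0.05                # planet-to-star radius ratio Rp[R*]
params.a = 10.0                 # semimajor axis a[R*]
params.inc = 89.0               # orbital inclination [deg]
params.ecc = 0.0                # circular orbit assumed
params.w = 90.0                 # longitude of periastron [deg] (irrelevant for ecc = 0)
params.u = [0.4, 0.6]           # quadratic limb darkening coefficients u1, u2
params.limb_dark = "quadratic"  # limb darkening law

# 27-day sector sampled every 30 min, as in the simulated TESS light curves
t = np.arange(0.0, 27.0, 30.0 / 60.0 / 24.0)
m = batman.TransitModel(params, t)   # initialize the model
flux = m.light_curve(params)         # noiseless, normalized flux (baseline = 1)
```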
All these parameters were selected, where possible, as random values whose upper and lower limits were chosen taking into account planetary statistics (in general, not only the TESS ones) or the main characteristics of TESS light curves. As previously mentioned, since the TESS mission detects exoplanets through the transit method, it is more common to find planetary systems with low periods, planet-to-star radius ratios, and semimajor axes. However, as the main objective is to develop a model that is as generalizable as possible, it was decided not to use these statistics, thus preventing the model from becoming dependent on them and failing to correctly characterize systems with different parameters. First, it was decided to inject at least 2 transits into each light curve (necessary to be able to estimate the orbital period from the light curves), so the maximum value of the period considered is half of the duration of the light curves. The value is chosen randomly between these limits. This is the only restriction applied that generates bias in the data, but it is necessary, because if there were not at least 2 transits in a light curve, the model would not be able to estimate the orbital period. Then, the epoch was chosen as a random value between 0 and the value of the orbital period.
To choose R_p[R_⋆], the following procedure was carried out: First, it is necessary to simulate the host star. The stellar mass was chosen as a random value following the statistics of the solar neighborhood, which is the region surrounding the Sun within ∼92 pc. The stellar radii were estimated following the mass–radius relationship of main sequence (MS) stars [33,34] (among others):
$R_\star[R_\odot] \approx \left(M_\star[M_\odot]\right)^{0.8}$    (5)
The distance and apparent magnitude (a measure of the brightness of a star observed from the Earth, on an inverse logarithmic scale) were chosen as random values limited, respectively, by the solar neighborhood size (up to ∼92 pc) and by the magnitudes typically detected by TESS (up to magnitude 16) [7]. The quadratic limb darkening parameters (u_1 and u_2) were chosen as random values between 0 and 1 satisfying u_1 + u_2 = 1. Then, the planetary radius was chosen following planetary statistics: it is not common to find Jupiter-like exoplanets orbiting low-mass stars such as red dwarfs, so the maximum radius of the orbiting exoplanet was limited to that expected for a Neptune-like planet around low-mass stars (i.e., the maximum R_p[R_⋆] was set to 0.05 for stars with masses lower than 0.75 M_⊙). For more massive stars, the upper limit is a Jupiter-like exoplanet (i.e., the maximum value is limited to R_p[R_⋆] = 0.1 if the stellar mass is larger than 0.75 M_⊙). a[R_⋆] was derived from Kepler's third law:
$a = \left(\frac{G \cdot M_\star}{4 \cdot \pi^2} \cdot P^2\right)^{\frac{1}{3}}.$    (6)
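To illustrate this step, the following sketch (values are illustrative; astropy constants are used for G, M_⊙, and R_⊙) derives a in units of the stellar radius from a period and a stellar mass:

```python
import numpy as np
from astropy import units as u
from astropy.constants import G, M_sun, R_sun

def semimajor_axis_in_stellar_radii(period_days, m_star_msun, r_star_rsun):
    """a[R*] from Kepler's third law, a = (G M* P^2 / (4 pi^2))^(1/3)."""
    P = period_days * u.day
    M = m_star_msun * M_sun
    a = (G * M * P**2 / (4.0 * np.pi**2)) ** (1.0 / 3.0)
    return (a / (r_star_rsun * R_sun)).decompose().value

# Example: a Sun-like star with a 3.5-day planet gives roughly 10 stellar radii
print(semimajor_axis_in_stellar_radii(3.5, 1.0, 1.0))
```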
To summarize all the stellar and planetary parameters used during the light curve simulation, their upper and lower limits are shown in Table 1.
As already mentioned, phase-folded light curves, which allow more accurate transit shapes to be obtained and thus produce better performance in transit characterization, were also simulated. The main difference when creating both types of light curves is that, for the complete light curve, the temporal vector needed to simulate it spans the whole duration of a TESS sector; on the contrary, a phase-folded light curve is created with a temporal vector between t_0 − P/2 and t_0 + P/2, which other authors call a global view of the transit (a local view would be created by zooming in on the transit) [22,24] (among others).
Batman creates light curves without noise, so it has to be added after generating them. For that purpose, Gaussian noise was implemented following the values expected for TESS light curves; i.e., the standard deviation (σ) expected for TESS light curves as a function of the stellar magnitude [7] was taken into account, and thus a noise vector centered at 0 was obtained. As the light curve without noise has a maximum of 1, the light curve with noise was created by adding both vectors. The signal-to-noise ratio (SNR) of each detrended light curve as a function of its stellar magnitude was estimated with the relationship shown in Equation (7) [35], where μ = 1 is the mean value (after adding both vectors) and σ is the standard deviation, which depends on the stellar magnitude [7]. In addition, key SNR values as a function of stellar magnitude are shown in Table 2. It is important to remark that the larger the magnitude, the lower the SNR, and thus the more difficult transit-like signal detection and characterization will be.
$\mathrm{SNR} = 10 \cdot \log_{10}\left(\frac{\mu}{\sigma}\right)$    (7)
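A minimal sketch of this noise-injection step is shown below. The magnitude-to-σ mapping here is a made-up placeholder (chosen so that the SNR matches the magnitude-8 entry of Table 2); the actual values were taken from the TESS photometric precision as a function of magnitude [7]:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def add_gaussian_noise(flux, sigma):
    """Add zero-centered Gaussian noise to a noiseless, normalized light curve."""
    noise = rng.normal(loc=0.0, scale=sigma, size=flux.shape)
    return flux + noise

def snr_db(mu, sigma):
    """SNR as defined in Equation (7)."""
    return 10.0 * np.log10(mu / sigma)

# Placeholder sigma for a magnitude-8 star (so that SNR = 40.84, as in Table 2)
sigma = 10 ** (-40.84 / 10.0)
noisy_flux = add_gaussian_noise(np.ones(1296), sigma)  # 27 d at 30 min cadence = 1296 points
print(round(snr_db(1.0, sigma), 2))                    # 40.84
```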
An example of the 2 views (complete and phase-folded) of a simulated light curve is shown in Figure 3. As shown, a phase-folded light curve allows a better transit shape to be obtained. To check this, a zoom of Figure 3 is shown in Figure 4. The transit in the phase-folded light curve is composed of many more points, thus allowing a better check of the Mandel and Agol theoretical shape. Both views of the light curve were simulated taking into account the main parameters of TIC 183537452 (TOI-192), except the epoch and the temporal span of the light curve, which were not considered as they do not condition the noise levels or the transit shape. TOI-192 is part of the TOI Catalog, short for the TESS Objects of Interest Catalog, which comprises a compilation of the most favorable targets for detecting exoplanets. Its complete and phase-folded light curves are shown in Figure 5. TESS light curves exhibit gaps that occur during data download by the telescope. However, this phenomenon was not considered because either no transits are lost in the gap, and thus the separation between two consecutive transits is maintained, or, if any transits fall in this zone, the last transit before the gap and the first one after it will be separated by a temporal distance that is a multiple of the orbital period, so the calculation of this value will not be affected. Aside from this, the simulated light curves closely resemble those expected from TESS, showing similar noise levels for similar stellar magnitudes and a transit depth similar to (R_p[R_⋆])².
The training process (see Section 3) requires 2 different datasets, as it is important to test both models to check their generalization to data unknown to the CNN (this is known as the test process). Thus, a training dataset with 650,000 light curves and a test dataset with 150,000 were created. The number of light curves in the training dataset was chosen after training both models a large number of times while increasing the number of light curves used. The final amount corresponds to the one with which the best results were obtained; above this number, no significant improvement was seen. In addition, the number of light curves in the test dataset was chosen so as to have large enough statistics to allow a good check of the results.

2.3. Convolutional Neural Networks (CNNs): Our 1D CNN Models

The transit shape previously shown differs from an outlier or from the noise present in light curves only by its shape. This is why CNNs play a crucial role in transit detection and characterization, as these algorithms can distinguish between both signals (noise and transits), even when they are at similar levels.
CNNs apply filters to the input data that allow the detection of different features that characterize them. An example of a 1D CNN filter is shown in Figure 6.
The main activation function used in this research is the parametric rectified linear unit (PReLU) [36]. If y_i is the input to the non-linear activation function, its mathematical definition is
$f(y_i) = \begin{cases} y_i & \text{if } y_i > 0 \\ a_i \cdot y_i & \text{if } y_i \le 0 \end{cases}$    (8)
The main difference between this activation function and the rectified linear unit (ReLU) [37], which is one of the most commonly used, is that in ReLU a_i is equal to 0, while in PReLU it is a learnable parameter.
The gradient of this function used when optimizing with backpropagation [38] is (taking the derivative with respect to the new trainable parameter a_i)
$\frac{\partial f(y_i)}{\partial a_i} = \begin{cases} 0 & \text{if } y_i > 0 \\ y_i & \text{if } y_i \le 0 \end{cases}$    (9)
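The two expressions above can be checked with a few lines of NumPy (a toy illustration only; the models themselves presumably rely on Keras' built-in PReLU layer):

```python
import numpy as np

def prelu(y, a):
    """PReLU forward pass: y if y > 0, a * y otherwise."""
    return np.where(y > 0, y, a * y)

def prelu_grad_a(y):
    """Gradient of PReLU with respect to the learnable slope a."""
    return np.where(y > 0, 0.0, y)

y = np.array([-2.0, -0.5, 0.0, 1.5])
print(prelu(y, a=0.1))     # [-0.2  -0.05  0.    1.5 ]
print(prelu_grad_a(y))     # [-2.  -0.5  0.   0. ]
```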
Our CNN approach actually consists of two 1D CNN models, as a transit analysis can be understood as a time series analysis. The key point is that the flux vectors of the simulated light curves were used as inputs for training, validating, and testing them (see Section 3). All the processes were carried out in Keras [39]. The first model (referred to as model 1 from now on) works with complete light curves and predicts the value of the orbital period of the transit-like signals. The second model (referred to as model 2 from now on) works with phase-folded light curves and predicts (R_p[R_⋆])² and a[R_⋆]. The model predicts (R_p[R_⋆])² instead of R_p[R_⋆] because the squared value is directly related to the transit depth.
The structure of both models is the same except for the last layer, which has the same number of neurons as the number of parameters to predict. Both models are composed of a 4-layer convolutional part and a 2-layer multilayer perceptron (MLP) part. In the convolutional part, all the layers have a filter size of 3 and a stride of 2. The numbers of convolutional filters are, respectively, 12, 24, 36, and 48. All the layers are followed by a PReLU activation function and use "same" padding, so the convolutional filters themselves do not reduce the length of the light curves across the layers. The MLP part is composed of a layer with 24 neurons activated by PReLU and a last layer with 1 neuron for model 1 and 2 neurons for model 2, without an activation function. The two parts of the models are connected by a flatten layer. A scheme of model 1 is shown in Figure 7; the scheme of model 2 differs only in that its last layer has 2 neurons.
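A sketch of how such an architecture could be written in Keras is shown below. This is our reading of the description above, not the authors' exact code; the input length of 1296 points (a 27-day sector at 30 min cadence) is an assumption, and the length of the phase-folded input is not specified in the text:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_length=1296, n_outputs=1):
    """1D CNN: 4 conv layers (12/24/36/48 filters, kernel 3, stride 2, PReLU),
    flatten, dense(24) + PReLU, and a linear output layer (1 or 2 neurons)."""
    inputs = layers.Input(shape=(input_length, 1))
    x = inputs
    for n_filters in (12, 24, 36, 48):
        x = layers.Conv1D(n_filters, kernel_size=3, strides=2, padding="same")(x)
        x = layers.PReLU()(x)
    x = layers.Flatten()(x)
    x = layers.Dense(24)(x)
    x = layers.PReLU()(x)
    outputs = layers.Dense(n_outputs)(x)  # no activation on the output layer
    return models.Model(inputs, outputs)

model_1 = build_model(n_outputs=1)  # orbital period from complete light curves
model_2 = build_model(n_outputs=2)  # (Rp[R*])^2 and a[R*] from phase-folded light curves
```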

3. Model Training and Test Results and Discussion

First of all, both processes were carried out on a server with an Intel Xeon E5-1650 v3 (3.50 GHz) with 12 CPUs (6 physical cores and 6 virtual) and 62.8 GB of RAM.
As both models are 1D CNN models, it is important to preprocess the input light curves, because convolutional filters tend to pick up the maximum values in them, which would entail the loss of the transits. Thus, the light curves were inverted and set between 0 and 1.
In addition, the values of the parameters that the 1D CNN models learn to predict (from now on referred to as labels) were normalized. The following transformation was applied to each label, where max_labels is the maximum of all the simulated labels and min_labels the minimum value:
$\mathrm{label} = \frac{\mathrm{label} - \mathrm{min\_labels}}{\mathrm{max\_labels} - \mathrm{min\_labels}}.$    (10)
This transformation sets the data between 0 and 1, which is necessary because deep learning techniques work better when the input and output data are normalized.
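In practice, this min–max scaling (and its inverse, needed to convert predictions back to physical units) can be written as follows (a simple sketch, not the authors' code):

```python
import numpy as np

def normalize_labels(labels):
    """Min-max scale labels to [0, 1]; keep the limits to undo the scaling later."""
    lo, hi = labels.min(), labels.max()
    return (labels - lo) / (hi - lo), lo, hi

def denormalize(scaled, lo, hi):
    """Map network outputs back to physical units."""
    return scaled * (hi - lo) + lo

periods = np.array([1.0, 3.5, 7.2, 13.5])   # example labels in days
scaled, lo, hi = normalize_labels(periods)
print(scaled)                               # [0.    0.2   0.496 1.   ]
print(denormalize(scaled, lo, hi))          # recovers the original periods
```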
Before training both models, adaptive moment estimation (Adam) [40] was selected as the optimizer. Adam is a stochastic gradient descent (SGD) [41] method that adapts the learning rate using estimates of the first- and second-order moments of the gradients. The initial learning rate was set to 0.0001 (chosen after many training runs). The mean squared error (MSE) was chosen as the loss function.
One of the main parts of the training process consists of validating the model with a dataset different from the one used for updating the weights in each epoch (an epoch is each time the whole training dataset is used to train the model), in order to check how well the model generalizes during training. Keras allows this through a parameter known as the validation split: a number between 0 and 1 that refers to the fraction of the training dataset used for this process. A proportion of 30% of the dataset was split off; i.e., a validation split of 0.3 was selected. Furthermore, a batch size of 16 was selected. The MSE loss function was used to monitor the training. Each epoch took about 200 s to complete on our server.
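Putting the above together, the training configuration can be sketched as follows. This builds on the hypothetical build_model helper shown earlier; X_train and y_train stand for the preprocessed flux vectors and normalized labels, and the number of epochs is a placeholder, as it is not specified in the text:

```python
from tensorflow.keras.optimizers import Adam

model = build_model(n_outputs=1)                   # e.g., model 1 (orbital period)
model.compile(optimizer=Adam(learning_rate=1e-4),  # initial learning rate 0.0001
              loss="mse")                          # mean squared error loss

history = model.fit(X_train, y_train,
                    validation_split=0.3,          # 30% of the training set for validation
                    batch_size=16,
                    epochs=50)                     # placeholder value
```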
After training both models, the training histories were obtained, in which the training and validation losses were plotted against the epochs. Figure 8 and Figure 9 show, respectively, the training histories of models 1 and 2 on a logarithmic scale. As shown, the training process was completed correctly, as the validation loss decreased along with the training loss over the epochs.
The test process was carried out with the test dataset composed of 150,000 light curves (as explained in Section 2.2). The accuracy of the predictions was studied by comparing them with the values with which the light curves were simulated. Theoretically, a perfect result would follow a linear function with a slope of 1 and an intercept of 0. The results were fit with a linear regression, obtaining R² = 0.991 for the orbital period (model 1), R² = 0.974 for (R_p[R_⋆])² (model 2), and R² = 0.971 for a[R_⋆] (model 2). These results mean that most of the predictions agree with the simulated data, which implies that our 1D CNN models are properly predicting the planetary parameters. Furthermore, the mean absolute error (MAE) was computed for both models, obtaining MAE = 0.015 d for the orbital period (model 1), MAE = 0.0003 for (R_p[R_⋆])² (model 2), and MAE = 0.9113 for a[R_⋆] (model 2). These results can be better appreciated in the plot of a small randomly chosen sample of 100 predictions and test values (see Figure 10 and Figure 11), because a plot of all the data is rather crowded and could make the results confusing. As shown, the predictions fit the simulated values correctly. In addition, as an absolute error can be difficult to interpret, the absolute percentage error (APE) was computed for all the predictions and the obtained values were plotted in histograms, in this case with the whole dataset (see Figure 12 and Figure 13, which refer, respectively, to models 1 and 2). The modes of the histograms of P(d), (R_p[R_⋆])², and a[R_⋆] are, respectively, 0.003, 0.69, and 0.38. All these results are summarized in Table 3.
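The statistics quoted above can be reproduced with standard scikit-learn and NumPy calls, sketched here. y_true and y_pred are placeholders for the simulated and predicted values of one parameter, and r2_score gives the coefficient of determination, which is closely related to (but not identical with) the R² of the linear-regression fit reported in the paper:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, r2_score

def evaluate(y_true, y_pred):
    """MAE, coefficient of determination, and the APE distribution."""
    mae = mean_absolute_error(y_true, y_pred)
    r2 = r2_score(y_true, y_pred)
    ape = 100.0 * np.abs(y_pred - y_true) / np.abs(y_true)  # absolute percentage error
    return mae, r2, ape

# Example with toy values
y_true = np.array([1.5, 3.2, 7.8, 12.1])
y_pred = np.array([1.48, 3.25, 7.75, 12.30])
mae, r2, ape = evaluate(y_true, y_pred)
```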
All these results show that both models properly predict the parameters P, (R_p[R_⋆])², and a[R_⋆] with low uncertainties. In addition, they show that both models generalize properly without becoming dependent on the training dataset. Together with previous studies, these results show that 1D CNNs are a good choice not only for checking whether a light curve presents transit-like signals, but also for estimating parameters related to their shape. The test dataset covers a wide range of values, but even so, the models do not show any bias in the results (which would mean that the models predict better in some regions than in others), which indicates that the models are properly trained. To put these results in context, 10,000 light curves from the test dataset were analyzed with TLS, and the MAE for P(d) and (R_p[R_⋆])² was computed with respect to the values with which the light curves were simulated (TLS does not compute the semimajor axis of the orbit). The obtained values are, respectively, 0.009 and 0.0018, which means that our CNN models predict all the parameters with a precision similar to that of the most widely used algorithms nowadays.
The models were also tested on real TESS data. From the Mikulski Archive for Space Telescopes (MAST) (bulk download https://archive.stsci.edu/tess/bulk_downloads/bulk_downloads_ffi-tp-lc-dv.html (accessed on 1 December 2023)), 25 light curves with confirmed transiting exoplanets were obtained from stars in the candidate target lists (CTLs) [42], a special subset of the TESS Input Catalog (TIC) containing targets well suited for detecting planetary-induced transit-like signals. This subset was chosen because preprocessed and corrected light curves are available for download for all of them. All the light curves were analyzed with our 1D CNN models, and the results were compared with their published values (obtained from ExoFOP; see Table 4). The phase-folded light curves were computed with the predicted value of the orbital period. From the results, the MAE, R², and mean absolute percentage error (MAPE) were computed. All the predicted and real values are shown in Table 4, along with the APE and the absolute error (AE) computed for each of the parameters. The obtained MAEs of P(d), (R_p[R_⋆])², and a[R_⋆] are, respectively, 0.0502, 0.0004, and 0.20; the MAPEs are, respectively, 1.87%, 7.16%, and 2.98%; and the R² values are, respectively, 0.981, 0.978, and 0.985. These results mean that our 1D CNN models also perform properly on real data. In Figure 14, the comparison between the real data and the predicted values is plotted to visually check the accuracy of the predictions. The differences between the results obtained with real and simulated data arise because, although we attempted to simulate light curves as similar as possible to those expected for the TESS mission, in reality it is impossible to make them identical. However, these results show that both models are able to characterize transiting exoplanets from the TESS light curves of their host stars also on real data, which means that our models were well trained and that the light curve simulator is able to mimic real TESS light curves with high accuracy.
In addition, it is important to remark that analyzing the 150,000 test light curves with each model took about 30 s, which is roughly three times the time required to analyze a single light curve with TLS on our server. TLS aims to maximize transit-like signal detection while decreasing the execution time as much as possible. However, as least squares algorithms allow different prior parameters to be chosen, the amount of time required depends considerably on the density of the priors. For example, the minimum depth value considered during the analysis or the period intervals in which to search for periodic signals considerably constrain the execution time of the analysis. In addition, the length of the light curves (in points) and their time span also have a high impact. As shown in [6], the execution time depends quadratically on the time span of the light curve. Obviously, the computational power available is also fundamental for reducing the computing times. The authors of [6] show that, on an Intel Core i7-7700K, a typical 4000-point K2 [43] light curve takes about 10 s. Such an analysis would probably not be possible with current algorithms in common computing facilities if the dataset to be analyzed were composed of many thousands of stars (as those provided by current observing facilities are), because current algorithms are highly time-consuming and also need a high computational power, which is not always available.
Apart from reducing the computational cost and time consumption, our approach avoids the data preparation required by current MCMC and least squares methods. Almost all of them require obtaining some parameters related to the star, such as the limb darkening coefficients, something that can be done with different algorithms that need stellar information, such as the effective temperature (T_eff), the metallicity ([M/H]), and the microturbulent velocity, among others. It is not always easy to obtain these parameters from stellar databases or to compute them, especially if the analysis is carried out on a dataset composed of many thousands of stars, like those provided by current observing facilities. In contrast, our 1D CNN models allow all the transit parameters to be determined in real time, only by applying a simple normalization to the input data in order to invert the light curves and set them between 0 and 1.

4. Conclusions

In this research, we went one step further than our previous work. The light curve simulator was generalized to TESS data and was also modified to obtain phase-folded light curves (in addition to the complete ones) with Gaussian noise, mimicking those expected for TESS data. As light curve data can be understood as time series and the predictions have a high shape dependency, it was thought that 1D CNNs would be the most suitable ML technique, as they learn from the shape of the input data through their convolutional filters and perform properly with time series data.
Our two models are similar, differing only in the last layer, which is adjusted to the required output values. The first model works with complete light curves and predicts the value of the orbital period. The second model works with phase-folded light curves and estimates the values of (R_p[R_⋆])² and a[R_⋆].
The training process was carried out with a dataset of 650,000 light curves, with 30% of them split off to validate the training. The training histories show the correct performance of both processes (training and validation), with both losses decreasing over the epochs without a large difference between them (which would indicate overfitting). After training both models, another dataset composed of 150,000 light curves (the test dataset) was used to test the results on a set different from the one used for training. The most visual way to analyze the accuracy of the predictions is to plot the predicted values against the simulated ones. In addition, statistics that quantify the average error with which the CNN predicts the values were computed; more concretely, the mean absolute error (MAE) and the R² coefficient of a linear regression were considered. For the orbital period, MAE = 0.015 d and R² = 0.980 were obtained (predictions made with model 1). For (R_p[R_⋆])², MAE = 0.0003 and R² = 0.974 were obtained (predictions made with model 2). For a[R_⋆], MAE = 0.9113 and R² = 0.971 were obtained (predictions made with model 2). The modes of the histograms computed with the absolute percentage errors (APEs) of P(d), (R_p[R_⋆])², and a[R_⋆] are, respectively, 0.003, 0.69, and 0.38. Apart from that, a set of 10,000 light curves from the test dataset was analyzed with TLS, showing that both models characterize planetary systems from their host star light curves with an accuracy similar to that of current algorithms. In addition, both models were tested on real data obtained from CTL light curves from MAST. The results obtained for P(d), (R_p[R_⋆])², and a[R_⋆] (MAEs of 0.0502, 0.0004, and 0.20, respectively; MAPEs of 1.87%, 7.16%, and 2.98%, respectively; and R² values of 0.98, 0.976, and 0.985, respectively) show that both models are able to characterize planetary systems from TESS light curves with high accuracy, which means not only that they are well trained and can thus be used to characterize new exoplanets from the TESS mission, but also that the light curve simulator is able to mimic with high fidelity the light curves expected from this mission. In addition, our models reduce the computing time and computational cost required for analyzing datasets as large as those available from current observing facilities; more concretely, our models take 30 s to complete the analysis of the test dataset (150,000 light curves), which is roughly three times the time required to analyze a single light curve with TLS. Moreover, with our models it is not necessary to set the priors with which to compute the main parameters, which would otherwise increase the time consumption.
In addition, we not only agree that CNNs in general, and 1D CNNs in particular, are a good choice for analyzing light curves in search of transiting exoplanets (as previous research shows [20,21,22,23,24,25,26,27,28]), but also conclude that CNN algorithms are able to characterize planetary systems with high accuracy in a short period of time.

Author Contributions

Research: S.I.Á.; coding: S.I.Á. and E.D.A.; writing: S.I.Á.; reviewing and editing: S.I.Á., E.D.A., M.L.S.R., J.R.R., S.P.F. and F.J.d.C.J.; formal analysis: S.I.Á., S.P.F. and M.L.S.R.; manuscript structure: S.I.Á., E.D.A. and J.R.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Proyecto Plan Regional by FUNDACION PARA LA INVESTIGACION CIENTIFICA Y TECNICA (FICYT), grant number SV-PA-21-AYUD/2021/51301; Plan Nacional by Ministerio de Ciencia, Innovación y Universidades, Spain, grant number MCIU-22-PID2021-127331NB-I00; and Plan Nacional by Ministerio de Ciencia, Innovación y Universidades, Spain, grant number MCINN-23-PID2022-139198NB-I00.

Data Availability Statement

Data are contained within the article.

Acknowledgments

This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. This research has made use of the Exoplanet Follow-up Observation Program (ExoFOP; DOI: 10.26134/ExoFOP5) website, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. We acknowledge the use of public TOI Release data from pipelines at the TESS Science Office and at the TESS Science Processing Operations Center. This paper includes data collected by the TESS mission, which are publicly available from the Mikulski Archive for Space Telescopes (MAST).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wolszczan, A.; Frail, D.A. A planetary system around the millisecond pulsar PSR1257 + 12. Nature 1992, 355, 145–147. [Google Scholar] [CrossRef]
  2. Charbonneau, D.; Brown, T.M.; Latham, D.W.; Mayor, M. Detection of Planetary Transits Across a Sun-like Star. Astrophys. J. 2000, 529, L45–L48. [Google Scholar] [CrossRef]
  3. Henry, G.W.; Marcy, G.W.; Butler, R.P.; Vogt, S.S. A Transiting “51 Peg–like” Planet. Astrophys. J. 2000, 529, L41–L44. [Google Scholar] [CrossRef]
  4. Mandel, K.; Agol, E. Analytic Light Curves for Planetary Transit Searches. Astrophys. J. 2002, 580, L171. [Google Scholar] [CrossRef]
  5. Kovács, G.; Zucker, S.; Mazeh, T. A box-fitting algorithm in the search for periodic transits. Astron. Astrophys. 2002, 391, 369–377. [Google Scholar] [CrossRef]
  6. Hippke, M.; Heller, R. Optimized transit detection algorithm to search for periodic transits of small planets. Astron. Astrophys. 2019, 623, A39. [Google Scholar] [CrossRef]
  7. Ricker, G.R.; Winn, J.N.; Vanderspek, R.; Latham, D.W.; Bakos, G.Á.; Bean, J.L.; Berta-Thompson, Z.K.; Brown, T.M.; Buchhave, L.; Butler, N.R.; et al. Transiting Exoplanet Survey Satellite (TESS). J. Astron. Telesc. Instrum. Syst. 2015, 1, 014003. [Google Scholar] [CrossRef]
  8. Gaudi, B.S.; Meyer, M.; Christiansen, J. The Demographics of Exoplanets. In ExoFrontiers; Big Questions in Exoplanetary Science; Madhusudhan, N., Ed.; IOP: London, UK, 2021; pp. 1–21. [Google Scholar] [CrossRef]
  9. Cumming, A.; Marcy, G.W.; Butler, R.P. The Lick Planet Search: Detectability and Mass Thresholds. Astrophys. J. 1999, 526, 890–915. [Google Scholar] [CrossRef]
  10. Cumming, A.; Butler, R.P.; Marcy, G.W.; Vogt, S.S.; Wright, J.T.; Fischer, D.A. The Keck Planet Search: Detectability and the Minimum Mass and Orbital Period Distribution of Extrasolar Planets. Publ. Astron. Soc. Pac. 2008, 120, 531. [Google Scholar] [CrossRef]
  11. Fischer, D.A.; Valenti, J. The Planet-Metallicity Correlation. Astrophys. J. 2005, 622, 1102–1117. [Google Scholar] [CrossRef]
  12. Johnson, J.A.; Aller, K.M.; Howard, A.W.; Crepp, J.R. Giant Planet Occurrence in the Stellar Mass-Metallicity Plane. Publ. Astron. Soc. Pac. 2010, 122, 905. [Google Scholar] [CrossRef]
  13. Suzuki, D.; Bennett, D.P.; Sumi, T.; Bond, I.A.; Rogers, L.A.; Abe, F.; Asakura, Y.; Bhattacharya, A.; Donachie, M.; Freeman, M.; et al. The exoplanet mass-ratio function from the Moa-II survey: Discovery of a break and likely peak at a neptune mass. Astrophys. J. 2016, 833, 145. [Google Scholar] [CrossRef]
  14. Borucki, W.J.; Koch, D.; Basri, G.; Batalha, N.; Brown, T.; Caldwell, D.; Caldwell, J.; Christensen-Dalsgaard, J.; Cochran, W.D.; DeVore, E.; et al. Kepler Planet-Detection Mission: Introduction and First Results. Science 2010, 327, 977. [Google Scholar] [CrossRef] [PubMed]
  15. Fulton, B.J.; Petigura, E.A. The California-Kepler Survey. VII. Precise Planet Radii Leveraging Gaia DR2 Reveal the Stellar Mass Dependence of the Planet Radius Gap. Astrophys. J. 2018, 156, 264. [Google Scholar] [CrossRef]
  16. Mazeh, T.; Holczer, T.; Faigler, S. Dearth of short-period Neptunian exoplanets: A desert in period-mass and period-radius planes. Astron. Astrophys. 2016, 589, A75. [Google Scholar] [CrossRef]
  17. NASA Exoplanet Archive. Confirmed Planets Table. Available online: https://exoplanetarchive.ipac.caltech.edu/cgi-bin/TblView/nph-tblView?app=ExoTbls&config=PS (accessed on 16 December 2023).
  18. Coughlin, J.L.; Mullally, F.; Thompson, S.E.; Rowe, J.F.; Burke, C.J.; Latham, D.W.; Batalha, N.M.; Ofir, A.; Quarles, B.L.; Henze, C.E.; et al. Planetary candidates observed by kepler. VII. the first fully uniform catalog based on the entire 48-month data set (Q1–Q17 DR24). Astrophys. J. Suppl. Ser. 2016, 224, 12. [Google Scholar] [CrossRef]
  19. Mislis, D.; Bachelet, E.; Alsubai, K.A.; Bramich, D.M.; Parley, N. SIDRA: A blind algorithm for signal detection in photometric surveys. Mon. Not. R. Astron. Soc. 2016, 455, 626–633. [Google Scholar] [CrossRef]
  20. Pearson, K.A.; Palafox, L.; Griffith, C.A. Searching for exoplanets using artificial intelligence. Mon. Not. R. Astron. Soc. 2018, 474, 478–491. [Google Scholar] [CrossRef]
  21. Zucker, S.; Giryes, R. Shallow Transits—Deep Learning. I. Feasibility Study of Deep Learning to Detect Periodic Transits of Exoplanets. Astron. J. 2018, 155, 147. [Google Scholar] [CrossRef]
  22. Ansdell, M.; Ioannou, Y.; Osborn, H.P.; Sasdelli, M.; 2018 NASA Frontier Development Lab Exoplanet Team; Smith, J.C.; Caldwell, D.; Jenkins, J.M.; Räissi, C.; Angerhausen, D.; et al. Scientific Domain Knowledge Improves Exoplanet Transit Classification with Deep Learning. Astrophys. J. Lett. 2018, 869, L7. [Google Scholar] [CrossRef]
  23. Chaushev, A.; Raynard, L.; Goad, M.R.; Eigmüller, P.; Armstrong, D.J.; Briegal, J.T.; Burleigh, M.R.; Casewell, S.L.; Gill, S.; Jenkins, J.S.; et al. Classifying exoplanet candidates with convolutional neural networks: Application to the Next Generation Transit Survey. Mon. Not. R. Astron. Soc. 2019, 488, 5232–5250. [Google Scholar] [CrossRef]
  24. Shallue, C.J.; Vanderburg, A. Identifying Exoplanets with Deep Learning: A Five-planet Resonant Chain around Kepler-80 and an Eighth Planet around Kepler-90. Astron. J. 2018, 155, 94. [Google Scholar] [CrossRef]
  25. Gupta, S. Harnessing the Power of Convolutional Neural Network for Exoplanet Discovery. Am. J. Adv. Comput. 2023, 2, 76–80. [Google Scholar]
  26. Haider, Z. A Novel Method of Transit Detection Using Parallel Processing and Machine Learning. J. Stud. Res. 2022. [Google Scholar] [CrossRef]
  27. Cuéllar Carrillo, S.; Granados, P.; Fabregas, E.; Cure, M.; Vargas, H.; Dormido-Canto, S.; Farias, G. Deep Learning Exoplanets Detection by Combining Real and Synthetic Data. PLoS ONE 2022, 17, e0268199. [Google Scholar] [CrossRef]
  28. Iglesias Álvarez, S.; Díez Alonso, E.; Sánchez Rodríguez, M.L.; Rodríguez Rodríguez, J.; Sánchez Lasheras, F.; de Cos Juez, F.J. One-Dimensional Convolutional Neural Networks for Detecting Transiting Exoplanets. Axioms 2023, 12, 348. [Google Scholar] [CrossRef]
  29. Tasker, E.J.; Laneuville, M.; Guttenberg, N. Estimating Planetary Mass with Deep Learning. Astron. J. 2020, 159, 41. [Google Scholar] [CrossRef]
  30. Claret, A. A new non-linear limb-darkening law for LTE stellar atmosphere models. Calculations for −5.0 ≤ log[M/H] ≤ +1, 2000 K ≤ Teff ≤ 50,000 K at several surface gravities. Astron. Astrophys. 2000, 363, 1081–1190. [Google Scholar]
  31. Kopal, Z. Detailed effects of limb darkening upon light and velocity curves of close binary systems. Harv. Coll. Obs. Circ. 1950, 454, 1–12. [Google Scholar]
  32. Kreidberg, L. Batman: BAsic Transit Model cAlculatioN in Python. Publ. Astron. Soc. Pac. 2015, 127, 1161. [Google Scholar] [CrossRef]
  33. Demory, B.-O.; Ségransan, D.; Forveille, T.; Queloz, D.; Beuzit, J.-L.; Delfosse, X.; Di Folco, E.; Kervella, P.; Le Bouquin, J.-B.; Perrier, C.; et al. Mass-radius relation of low and very low-mass stars revisited with the VLTI. Astron. Astrophys. 2009, 505, 205–215. [Google Scholar] [CrossRef]
  34. Demircan, O.; Kahraman, G. Stellar Mass/Luminosity and Mass / Radius Relations. Astrophys. Space Sci. 1991, 181, 313–322. [Google Scholar] [CrossRef]
  35. Schroeder, D.J. Astronomical Optics; Elsevier: Amsterdam, The Netherlands, 1999. [Google Scholar]
  36. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. arXiv 2015, arXiv:1502.01852. [Google Scholar]
  37. Agarap, A.F. Deep Learning using Rectified Linear Units (ReLU). arXiv 2018, arXiv:1803.08375. [Google Scholar]
  38. LeCun, Y.; Boser, B.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L.D. Backpropagation Applied to Handwritten Zip Code Recognition. Neural Comput. 1989, 1, 541–551. [Google Scholar] [CrossRef]
  39. Chollet, F. Keras. Available online: https://keras.io (accessed on 1 December 2023).
  40. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  41. Robbins, H.; Monro, S. A Stochastic Approximation Method. Ann. Math. Stat. 1951, 22, 400–407. [Google Scholar] [CrossRef]
  42. Stassun, K.G.; Oelkers, R.J.; Pepper, J.; Paegert, M.; Lee, N.D.; Torres, G.; Latham, D.W.; Charpinet, S.; Dressing, C.D.; Huber, D.; et al. The TESS Input Catalog and Candidate Target List. Astron. J. 2018, 156, 102. [Google Scholar] [CrossRef]
  43. Howell, S.B.; Sobeck, C.; Haas, M.; Still, M.; Barclay, T.; Mullally, F.; Troeltzsch, J.; Aigrain, S.; Bryson, S.T.; Caldwell, D.; et al. The K2 Mission: Characterization and Early Results. Publ. Astron. Soc. Pac. 2014, 126, 398. [Google Scholar] [CrossRef]
Figure 1. Theoretical transit model without limb darkening.
Figure 2. Example of a transit shape with the Mandel and Agol model computed with TLS from the simulated light curve shown in the upper panel of Figure 3. Black: phase-folded light curve. Red: TLS model.
Figure 3. Example of the 2 views of a simulated light curve. Top panel: complete light curve. Bottom panel: phase-folded light curve. Phase is computed as (t − t_0)/P.
Figure 4. Zoom of the light curves of Figure 3. Top panel: complete light curve. Bottom panel: phase-folded light curve.
Figure 5. TOI-192 light curve. Top panel: Full light curve. Bottom panel: phase-folded light curve computed with TLS.
Figure 6. Example of how a 1D CNN filter works. Red: input. Blue: output. Purple: filter input. Green: filter output (after applying the appropriate operation). Black arrows show the movement of the filter.
Figure 7. Scheme of model 1. Green: input. Salmon: convolutional layers. Red: MLP layers. Blue: output.
Figure 8. Training history of model 1. Blue: training loss. Orange: validation loss.
Figure 9. Training history of model 2. Blue: training loss. Orange: validation loss.
Figure 10. Comparison between predicted and test values (black dots) for a randomly chosen sample of 100 values of P. Red: linear regression. Green: perfect prediction. Shaded blue area: MAE.
Figure 11. Comparison between predicted and test values (black dots) for a randomly chosen sample of 100 values of (R_p[R_⋆])² (upper panel) and a[R_⋆] (bottom panel). Red: linear regression. Green: perfect prediction. Shaded blue area: MAE.
Figure 12. APE histograms for P ( d ) . Cyan: mode. Red: percentile 90 (P90).
Figure 13. APE histograms for ( R p [ R ] ) 2 (upper panel) and a [ R ] (lower panel). Cyan: mode. Red: percentile 90 (P90).
Figure 14. Predicted parameters on real TESS data. Prediction vs. real data (black dots) for P(d) (upper panel), (R_p[R_⋆])² (middle panel), and a[R_⋆] (bottom panel). Red: linear regression. Green: perfect prediction. Shaded blue area: MAE.
Table 1. Stellar and planetary parameters' upper and lower limits used for simulating the light curves.
Parameter | Min Value | Max Value
M_⋆ [M_⊙] | 0.1 | 10
R_⋆ [R_⊙] | 0.15 | 4.89
u_1 | 0 | 1
u_2 | 0 | 1
mag | 8 | 16
d [pc] | 1 | 92
P (d) | 1 | 13.5
t_0 | 0 | P (d)
R_p [R_⋆] | 0.01 | 0.1
Table 2. SNR as a function of stellar magnitude.
mag | SNR
8 | 40.84
10 | 36.20
12 | 31.56
14 | 26.92
16 | 22.28
Table 3. Statistics computed with the predictions of the test process for each of the predicted parameters (P(d), (R_p[R_⋆])², and a[R_⋆]).
Parameter | MAE | R² | Histogram Mode (%) | Histogram P90 (%)
P (d) | 0.015 | 0.99 | 0.7 | 7.5
(R_p[R_⋆])² | 0.0003 | 0.97 | 0.8 | 19
a[R_⋆] | 0.91 | 0.97 | 0.5 | 11
Table 4. Published and predicted parameters of exoplanets observed by TESS. The first two columns show, respectively, the TIC id of the star and the sector in which each light curve was obtained. The other columns show the published parameters, the predicted parameters, the AE, and the APE of each of the parameters and exoplanets. Published data were obtained from ExoFOP.
TIC id | Sector | P(d) Real | P(d) Pred | AE (P(d)) | APE (P(d)) (%) | (R_p[R_⋆])² Real | (R_p[R_⋆])² Pred | AE ((R_p[R_⋆])²) | APE ((R_p[R_⋆])²) (%) | a[R_⋆] Real | a[R_⋆] Pred | AE (a[R_⋆]) | APE (a[R_⋆]) (%)
183537452 | 2 | 3.9230 | 4.0200 | 0.0970 | 2.4726 | 0.0094 | 0.0098 | 4.2553 | 4.2553 | 12.6200 | 12.3000 | 0.3200 | 2.5357
158623531 | 2 | 7.8600 | 7.7900 | 0.0700 | 0.8906 | 0.0127 | 0.0125 | 1.2598 | 1.2598 | 14.5200 | 14.1000 | 0.4200 | 2.8926
100100827 | 2 | 0.9414 | 1.0150 | 0.0736 | 7.8181 | 0.0097 | 0.0100 | 3.0928 | 3.0928 | 3.3900 | 3.1500 | 0.2400 | 7.0796
388104525 | 2 | 2.4997 | 2.5762 | 0.0765 | 3.0604 | 0.0128 | 0.0130 | 1.5625 | 1.5625 | 7.4200 | 7.7900 | 0.3700 | 4.9865
149603524 | 2 | 4.4119 | 4.4986 | 0.0867 | 1.9651 | 0.0121 | 0.0122 | 1.0744 | 1.0744 | 5.5700 | 5.9000 | 0.3300 | 5.9246
184240683 | 2 | 1.6284 | 1.6700 | 0.0416 | 2.5547 | 0.0122 | 0.0123 | 0.6987 | 0.6987 | 3.3970 | 3.5800 | 0.1830 | 5.3871
38846515 | 2 | 2.8494 | 2.7500 | 0.0994 | 3.4885 | 0.0069 | 0.0071 | 3.0316 | 3.0316 | 5.4800 | 5.6600 | 0.1800 | 3.2847
230982885 | 2 | 2.0728 | 2.0000 | 0.0728 | 3.5122 | 0.0112 | 0.0108 | 3.5714 | 3.5714 | 9.7300 | 9.8500 | 0.1200 | 1.2333
149603524 | 1 | 4.4119 | 4.4600 | 0.0481 | 1.0902 | 0.0121 | 0.0121 | 0.3306 | 0.3306 | 6.2800 | 6.5600 | 0.2800 | 4.4586
38846515 | 1 | 2.8494 | 2.8800 | 0.0306 | 1.0739 | 0.0069 | 0.0065 | 5.7971 | 5.7971 | 9.8376 | 9.9961 | 0.1585 | 1.6113
388104525 | 3 | 2.4997 | 2.3400 | 0.1597 | 6.3888 | 0.0118 | 0.0111 | 5.9322 | 5.9322 | 9.8300 | 9.4600 | 0.3700 | 3.7640
149603524 | 3 | 4.4119 | 4.4285 | 0.0166 | 0.3763 | 0.0120 | 0.0128 | 6.6667 | 6.6667 | 10.8700 | 11.1500 | 0.2800 | 2.5759
268766053 | 3 | 3.3100 | 3.3600 | 0.0500 | 1.5106 | 0.0177 | 0.0169 | 4.4604 | 4.4604 | 5.4800 | 5.6700 | 0.1900 | 3.4672
388104525 | 4 | 2.4998 | 2.5340 | 0.0342 | 1.3681 | 0.0129 | 0.0121 | 5.9829 | 5.9829 | 7.3800 | 7.1654 | 0.2146 | 2.9079
38846515 | 4 | 2.8493 | 2.8803 | 0.0310 | 1.0880 | 0.0069 | 0.0068 | 1.8813 | 1.8813 | 9.8200 | 9.9600 | 0.1400 | 1.4257
289793076 | 1 | 3.0440 | 3.0548 | 0.0108 | 0.3548 | 0.0197 | 0.0201 | 2.2380 | 2.2380 | 9.2500 | 9.1300 | 0.1200 | 1.2973
300871545 | 1 | 4.8170 | 4.8450 | 0.0280 | 0.5813 | 0.0236 | 0.0253 | 6.9915 | 6.9915 | 5.8100 | 5.7300 | 0.0800 | 1.3769
231663901 | 1 | 1.4300 | 1.4292 | 0.0008 | 0.0559 | 0.0198 | 0.0192 | 3.0303 | 3.0303 | 5.4800 | 5.7800 | 0.3000 | 5.4745
234523599 | 1 | 3.7960 | 3.7940 | 0.0020 | 0.0527 | 0.0484 | 0.0462 | 4.5455 | 4.5455 | 18.8400 | 18.6500 | 0.1900 | 1.0085
290131778 | 1 | 3.3089 | 3.3167 | 0.0078 | 0.2357 | 0.0025 | 0.0036 | 44.0000 | 44.0000 | 4.3300 | 4.4250 | 0.0950 | 2.1940
97409519 | 1 | 3.3729 | 3.3406 | 0.0323 | 0.9576 | 0.0160 | 0.0138 | 13.7500 | 13.7500 | 9.1100 | 9.1820 | 0.0720 | 0.7903
281459670 | 1 | 3.1740 | 3.1857 | 0.0117 | 0.3698 | 0.0143 | 0.0134 | 6.2456 | 6.2456 | 8.7200 | 8.8100 | 0.0900 | 1.0321
260609205 | 1 | 4.4620 | 4.4711 | 0.0091 | 0.2035 | 0.0171 | 0.0179 | 4.6784 | 4.6784 | 6.9200 | 6.8200 | 0.1000 | 1.4451
25155319 | 1 | 3.2890 | 3.4147 | 0.1257 | 3.8214 | 0.0061 | 0.0085 | 38.4365 | 38.4365 | 7.8800 | 7.6500 | 0.2300 | 2.9188
25375553 | 1 | 2.3110 | 2.3500 | 0.0390 | 1.6876 | 0.0076 | 0.0080 | 5.6150 | 5.6150 | 4.3500 | 4.5000 | 0.1500 | 3.4483
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
