1. Introduction
Radiation has profound effects on eukaryotic cells, the fundamental units of life in complex organisms such as humans. When ionizing radiation interacts with biological tissues, it can cause significant damage at the cellular level, which can lead to tissue and organ failure and thus compromise human health [
1]. One of the primary types of damage is DNA damage, including single-strand breaks (SSBs) and more severe double-strand breaks (DSBs). DSBs, in particular, are highly cytotoxic and can lead to mutations, chromosomal rearrangements, or cell death [
2]. Additionally, radiation can induce oxidative stress by generating reactive oxygen species (ROS) [
3], disrupting cellular homeostasis [
4], and damaging cell membranes [
5], proteins, and other vital cellular components [
6]. These effects can result in cell cycle arrest, apoptosis (programmed cell death), or senescence, altering normal cellular functions and potentially leading to diseases such as cancer [
7].
To understand the mechanisms underlying radiation-induced cellular responses, it is important to perform basic radiobiological research [
8]. The knowledge gained is essential for developing and improving radiation therapies for cancer treatment. A profound understanding of how different types of radiation affect various cancer cells can lead to more targeted and efficient treatments while minimizing damage to healthy tissues, such as in FLASH therapy [
9] or particle minibeam therapy [
10]. Furthermore, understanding the effects of radiation on a single-cell basis is vital for assessing the risks associated with radiation exposure [
11]. Workers in nuclear facilities, astronauts, and patients undergoing radiation-based medical procedures benefit from this knowledge. By understanding the cellular responses to radiation, scientists can establish safe exposure limits and develop protective measures, ensuring the well-being of individuals exposed to radiation in various contexts [
12].
The damaging effects of radiation and their impact on human health on a large scale have been known for more than a century [
13]. Supported by technological developments, assays were developed that can be used to determine and quantify the reactions of large cell populations to radiation in vitro, such as colony-forming assays for cell survival [
14], MTT assays for cell viability [
15], and flow cytometry for assessing cell death [
16]. These three assays provide data that allow us to draw conclusions on the major radiation effects that predominantly influence cellular integrity. Furthermore, cell survival, viability, and cell death can influence tissue functionality, immune reactions, and human well-being in cancer treatment and after radiation accidents. All the assays mentioned face the same challenge: they can only be used at a particular time point for one endpoint in a sample, averaging over the reactions of thousands or millions of cells. This limits their applicability to low cell numbers, which are of interest, for example, in modern therapy methods such as micro- or minibeam therapy as well as in bystander research, where low cell numbers are irradiated. Furthermore, detailed information on how the transition from the irradiated cell to a colony, a viable cell population, or cell death occurs is missing. In addition, post-irradiation treatments such as trypsinization for post-plating or color staining might interfere with the biological reactions. For example, cells that are already severely damaged might be lost during the trypsinization process, which is needed to perform the post-plating in the colony-forming assay. Post-plating is mostly necessary when accurate irradiation is needed for micro-beam [
17,
18] or bystander research [
19]. In this case, the sample processing would alter the results. Finally, to create a full picture of radiation damage, several assays have to be used, each on a separate sample, and only indirect conclusions can be drawn, which limits how far these conclusions can be generalized.
Therefore, since the mid-20th century, the single-cell response has been of key interest for researchers [
20]. For a long time, however, limited data storage and long analysis times made single-cell tracking unattractive for gaining knowledge. Advances in data storage, image analysis methods, and computational speed now open the door to performing complex analyses on single cells by tracking them through several generations.
To overcome the problem of slow result generation and the resulting limited use of single-cell analysis, the CeCILE (Cell classification and in vitro lifecycle evaluation) project was started. In 2021, we published CeCILE, the first machine learning-based program that was able to detect and classify up to 100 cells in live-cell phase-contrast videos [
21]. We also showed the promising perspectives that artificial intelligence-based analysis offers for investigating cellular radiation responses. The aim of the current study was to generalize CeCILE to more cell lines and to increase its performance. The first step was to remove the limitation imposed by a maximum number of analyzable cells. The key developments in the new version, CeCILE 2.0, were the implementation of human-supervised cell tracking, which makes it possible to track cells and their descendants over several generations, and the generation of cell lineages for each cell from the tracking data. This allows for the scoring of defects in the cell cycle, of abnormalities occurring during division or throughout the cell cycle, and of proliferation and cell survival. These developments represent a big step toward the overall goal of the CeCILE project: an open-source platform for the automated analysis of cellular reactions imaged in their normal environment without interfering with the cells.
3. Discussion
In this study, we presented a novel method to study the cellular behavior of single eukaryotic cells. It is based on observing the cells for several days after irradiation by live-cell phase-contrast microscopy and analyzing the obtained data with the human-supervised algorithm CeCILE (Cell classification and in vitro lifecycle evaluation) 2.0, which is based on artificial intelligence. The introduced algorithm can detect and track cells on microscopic videos and classify them into four cell states depending on their morphology. It is also capable of evaluating various cell cycle-related endpoints such as proliferation, cell cycle duration, cell cycle abnormalities, and cell lineage. The first version of CeCILE published in 2021 [
21] was, to our knowledge, the first to present an artificial intelligence-based algorithm for analyzing cell response to radiation on live-cell phase-contrast videos.
As a first step, we implemented an artificial intelligence-based object detection system that is able to detect all cells on all frames of a live-cell phase-contrast video. The object detection in CeCILE 2.0 represents an upgrade over the previously published version, as it overcomes the limit of 100 cells that could be detected in the first version. Now there is no limit to the number of detectable cells, opening the way for increased statistics and deeper analysis. The chosen object detection algorithm is a pre-trained Faster R-CNN with ResNet-101 as the backbone CNN. The dataset includes images from a total of 20 experiments with different setups to increase generalization and widen the window of possible applications. It includes three cell lines, CHO-K1, LN229, and HeLa, which are commonly used in radiobiology. These cell lines have three important characteristics that are necessary for detection and tracking: they grow unchanged when seeded at low densities without clustering; they adhere to flat surfaces and grow in a 2D-like fashion, usually without overlapping; and their cells are easily distinguishable and can be followed throughout a video as long as the cell density is not too high. In principle, CeCILE 2.0 can be applied to any cell line that meets these requirements, but whether further training is necessary has to be tested before application.
Object detection performance was evaluated using the two videos of the test and tracking dataset, which were derived from an irradiation experiment. In the first video, CHO-K1 cells were irradiated with 3 Gy of X-rays, and in the second video, cells were sham irradiated. For these two videos, we created a detection and tracking groundtruth that was used to evaluate the performance of CeCILE 2.0 in both tasks, detection and tracking. First, object detection was evaluated in terms of its performance in localizing objects. This is the most important part of object detection in CeCILE 2.0, since tracking relies on the correct localization of cells. Here, the F1 score was calculated. Cells were counted as correctly detected when the predicted box overlapped a groundtruth box with an IoU > 0.5. The video of the irradiated cells achieved a mean F1-score of 0.90 with a mean precision of 0.88 and a mean recall of 0.93. The video of the sham cells had a mean F1 score of 0.92, a mean precision of 0.88, and a mean recall of 0.96. Thus, the object detector performed slightly better on the sham video. Overall, this means that over 90% of the cells were accurately localized in both videos. For a low-contrast biological sample, this is a very good value, and we conclude that the performance is good enough to proceed.
In addition to localization, an object detector also classifies the detected objects. For object detection, localization and classification are typically evaluated in a combined score, the mAP score. The mAP score is 1 for perfect detection and classification. The mAP can be calculated for each class individually and as an average mAP over all classes. The object detector achieved the best results in the detection and classification of the liv class. This class had a mean mAP of 0.95 in the irradiated video and 0.97 in the sham video. In the irradiated video, 75.1% of the cells were in class liv and in the sham video, 85.6% of the cells were in class liv. The second-best results were achieved in the dead class, with mean mAP values of 0.82 (irradiated) and 0.69 (sham). There were more dead cells in the irradiated video, with 17.0% of the cells, than in the sham video, with 9.8% of the cells. The third highest mAP score was achieved in the class round, with 0.72 in the irradiated video and 0.60 in the sham video. 7.6% of the cells were in the class round in the irradiated video, and 4.2% of the cells, in the sham video. By far the least common class was div. Only 0.3% and 0.4% of the cells in the irradiated and sham videos, respectively, were in this class. The mean mAP values were also the lowest, at 0.5 in both videos. This shows that the cell detection gave better results for classes containing more cells. However, it is important to note that the mAP score is much more affected by a missing bounding box or an incorrect classification when there are fewer cells in the class being examined. This can be seen in the classes liv, round, and dead, where higher mAP scores were obtained when there were more cells in a class. In particular, class liv, which contained the majority of cells, had very high mAP scores above 0.95, indicating that the vast majority of cells were correctly detected and classified. 
The mAP score of a class was only below 0.8 when less than 10% of the cells belonged to that class. As shown in the classification with a simple CNN, the classification struggles with cells that transition from one class to another. In the cell cycle, cells stay in the liv class most of the time. The class round, as a precursor to cell division, occurs only for about 30 min, and the class div can be observed only for about 10 min. We imaged the cells every 5 min. Therefore, the probability that a cell is in a transition state is higher for the round and div classes than for the liv class. Dead cells change their morphology after death. They may undergo self-digestion, as in apoptosis, or they may be digested by other cells. In these processes, dead cells disappear after some time. Therefore, dead cells are difficult to detect and classify, not only for an algorithm but also for a human expert annotator, if death occurred 1 h or more earlier. Misclassifications at cell state transitions do not affect the result after tracking, since it is not important whether a cell enters a particular cell state one frame earlier or later. Furthermore, it is not important to track dead cells for the whole observation time, since the only important information, namely that a cell is dead, has already been obtained. Therefore, it can be concluded that object detection provides all the information needed for tracking and performs well enough to be passed to a tracking algorithm. To improve the classification performance, the next development step of CeCILE will include time information, i.e., the state of the cell several frames before and after.
As a tracking algorithm, we implemented the centroid tracker proposed by A. Rosebrock [
22] and adapted it to the special requirements of tracking cells in phase-contrast videos. Furthermore, we implemented the IoU as a second feature to take box overlap into account when matching bounding boxes, which further increases robustness. Since the centroid tracker is a location-based tracker, it is well suited for cells in phase-contrast videos. Cells in phase-contrast videos hardly move between frames when a frame-to-frame interval of 5 min is applied during recording. An appearance-based tracker is less suitable for cell tracking because cells change their morphology during the cell cycle. Also, trackers that expect objects to move toward a specific target cannot be applied, because cells in culture move in a random-walk fashion due to the homogeneous distribution of nutrients in the culture. The centroid tracker is a hard-coded tracker that only considers the information of the previous and current frames. It is designed for the accurate tracking of objects where each object has one track, but it does not provide the ability to track across cell divisions, where one track splits into two. However, it can be used to track between cell divisions, and the tracks of two daughter cells can be assigned to a mother cell manually afterwards. The performance of the implemented tracker was tested, like the object detection before, on the two videos of the test and tracking dataset. We applied the tracker to the bounding boxes of the groundtruth in order to test only the performance of the tracker and not that of the previously tested detector. Since the centroid tracker cannot track across cell divisions, all ID assignments between cell divisions predicted by the tracker were compared with those given by the groundtruth to calculate the tracking accuracy. The tracking accuracy is the percentage of correct ID assignments out of all assignments. The centroid tracker achieved an accuracy of 97.77% in the irradiated video and 98.51% in the sham video.
This shows that the proposed location-based tracker is well suited for cell tracking.
The detection and tracking implemented in CeCILE 2.0 both give very good results when used separately. In combination, however, the errors made by each are amplified. Errors in detection, such as missing bounding boxes, lead to errors in tracking, since tracking relies on the bounding boxes of object detection. For the evaluation of phase-contrast videos on a single-cell basis, 100% accuracy is required, and no errors can be accepted, as they would falsify the result. In particular, switching cell identities or prematurely stopping a track would lead to incorrect results when evaluating the cell cycle and creating accurate cell lineages. Therefore, we implemented two manual monitoring steps. The first step was implemented after object detection to allow the user to add any missing bounding boxes. With these corrections, the tracking algorithm can be applied, which leads to very accurate results. In the second correction step, the tracking IDs can be adjusted, and the tracks of mother cells and their daughter cells can be combined. In both correction steps, the class labels of the bounding boxes can be corrected if necessary. With this supervised approach, the user can evaluate phase-contrast videos with 100% accuracy and obtain precise information about each cell in the video. Although very high accuracy could be achieved in the supervised mode, the required supervision is one of the main limitations of CeCILE 2.0. In the future, human supervision should be eliminated from the workflow. To achieve this, a different approach to tracking is necessary, one that is able to track over a cell division, recognize the daughter cells as such, and track them separately. For this, it is necessary to use the full video information, adding the time domain as a parameter, instead of sticking to single time planes. This development requires substantial improvement of the underlying model. One possible solution could be the use of conservation tracking, which was proposed for use in the life sciences [
23,
24], or the use of the Hidden Markov Model [
25].
In the experiment shown in this study, cells were imaged for 4 days and analyzed for cell viability, cell cycle and cell cycle abnormalities, proliferation, and survival. It is possible to extend the monitoring to more than 4 days, but the evaluation with CeCILE 2.0 is limited by the density of the cells. Proper detection and tracking are only possible as long as the cells are still distinguishable from each other. We show that the cells are in exponential growth during the imaging period, except for a cell cycle arrest of 4.4 h for the irradiated sample. After the first division, the cell cycle is constant, with (11.8 ± 0.3) h in the sham sample and (11.4 ± 0.4) h in the irradiated sample.
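As an illustration, the assumed growth model can be written as a short function. The parameter values below are those reported above; the function itself is a simplified sketch (synchronous doubling after an initial arrest), not the fitting procedure used in the analysis:

```python
def expected_cell_count(n0, t_hours, cycle_h, arrest_h=0.0):
    """Exponential growth model: no divisions during the arrest,
    then one doubling per cell cycle."""
    effective = max(0.0, t_hours - arrest_h)
    return n0 * 2.0 ** (effective / cycle_h)

# Sham sample: cycle time 11.8 h, no arrest.
# Irradiated sample: cycle time 11.4 h, cell cycle arrest of 4.4 h.
sham = expected_cell_count(100, 48, 11.8)
irradiated = expected_cell_count(100, 48, 11.4, arrest_h=4.4)
```

Under this model, the growth difference between the two samples is governed almost entirely by the arrest term, since the post-arrest cycle times are nearly identical.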
The cell lineages generated by CeCILE 2.0 provide deep insight into the evolution of each cell. For example, it can be seen here that some cells show abnormalities in their cell cycles. In the videos analyzed, fusions of two daughter cells and divisions into three daughter cells were observed. Combinations of both abnormalities were also observed. Overall, 35% of the irradiated cells showed cell cycle abnormalities but only 4% of the sham cells, with division abnormalities being the dominant process. The object detection of CeCILE 2.0 and the counted number of cells could also be used to correctly calculate the number of divisions and daughter cells and to confirm the assumed model of exponential growth with cell cycle arrest for the irradiated cells. Based on the data, cell survival could be predicted, which was in good agreement with actual measurements of the colony formation assay in the same setup [
22]. By comparing the results obtained, it can be concluded that the difference in cell growth between irradiated and sham cells is mainly a result of cell cycle arrests immediately after irradiation and therefore delayed cell divisions, and additionally of unsuccessful cell divisions, on the one hand because more daughter cells died and on the other hand because of cell cycle abnormalities.
4. Materials and Methods
4.1. Data Set
For the dataset used for training CeCILE 2.0, 20 videos of cells of three different cell lines (HeLa, CHO, and LN229) were recorded via live-cell imaging with a standard inverted microscope in our lab. An overview of the data used can be found in
Supplementary Tables S1 and S2. During recording, cells were kept in a stage-top incubator that ensures a healthy environment for the cells and enables continuous monitoring for up to 5 days. From the recorded videos, frames were chosen to be labeled and included in the dataset. In the first 13 videos, a fixed time interval of 20 min or 100 min was chosen between the labeled frames, and in videos 14 to 20, only frames in which at least one cell was in the state of cell division were included in the dataset. Cells were imaged after different treatments, under different conditions, and with different imaging modes, resulting in heterogeneous videos that represent a wide range of experiments. Cells were seeded into three different containers with different optical properties. The containers were coated with either gelatin or Cell-Tak or left uncoated. Cell samples were irradiated or left unirradiated. For testing the performance of object detection and tracking on videos, two videos were labeled. To create the videos of the test dataset, CHO cells were seeded on two μ-dishes without coating. After 24 h of incubation, one μ-dish was irradiated with 3 Gy of X-rays, and the other one was sham irradiated. Both μ-dishes were placed in the live-cell-imaging setup of the microscope, and recording started immediately after irradiation. The recording was performed for 4 days. For the groundtruth, 457 frames of the irradiated sample were labeled, corresponding to a time range of 38 h, and 341 frames of the unirradiated sample were labeled, corresponding to a time range of 28.3 h. Different numbers of labeled frames were chosen because the irradiated sample has a decreased growth rate compared to the sham sample. In the evaluated time ranges, the cells in both samples were able to divide three times and could still be tracked accurately, as the cell densities were not too high.
To create a groundtruth of these videos, the videos were labeled by the CeCILE 2.0 object detector and tracker, and the labels were manually corrected after object detection and tracking using VIA image annotator software [
26].
4.2. Object Detection
A Faster R-CNN implemented in the TensorFlow Object Detection API was chosen as the object detection model. This model is easy to use and to adapt to custom datasets, and it is very accurate in detecting many small objects in a crowded image [
27]. The Faster R-CNN also shows these characteristics when applied to microscopy images of cells [
28]. To save computational time, transfer learning with a pretrained ResNet-101 model trained on the COCO dataset [
29] from the TensorFlow 2 Detection Model Zoo was used. For identification of the cell-specific appearance, classification, and location, CeCILE 2.0 was trained and fine-tuned on the specific dataset described in
Section 4.1, which was split randomly (75%/25%) into a training dataset and a validation dataset. Training was performed on the training dataset, and for fine-tuning, the object detector's predictions were evaluated during training on the validation dataset. During this process, only the top 10 layers of ResNet-101 were trained, and the rest were frozen. For inspection during training, the qualification scores described in the next section were used. This way, the training process could be inspected and the parameters fine-tuned accordingly. The data preparation and training pipeline for the Faster R-CNN is implemented as described by Rosebrock [
30] and on the official website of the TensorFlow 2 object detection API [
31]. In the training process, the following parameters were fine-tuned: number of epochs, learning rate, aspect ratios and scales of the anchor boxes, data augmentation, non-maximum suppression, localization loss weight, classification loss weight, and objectness loss weight. These parameters were chosen because, during the development process, it turned out that they influenced the model's performance. The largest influences were seen for the learning rate and data augmentation. The final parameters are shown in
Table 3.
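The random 75%/25% dataset split described above can be sketched in a few lines; the function name and the fixed seed are illustrative, not part of the CeCILE 2.0 code:

```python
import random

def train_val_split(frames, val_fraction=0.25, seed=42):
    """Randomly split labeled frames into training (75%) and
    validation (25%) subsets."""
    frames = list(frames)
    rng = random.Random(seed)  # fixed seed for a reproducible split
    rng.shuffle(frames)
    n_val = int(len(frames) * val_fraction)
    return frames[n_val:], frames[:n_val]  # (train, validation)
```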
Training was performed on an NVIDIA GeForce RTX 2080 SUPER graphics card (NVIDIA Corporation, Santa Clara, CA, USA) with 8 GB of VRAM (video random access memory).
4.3. Qualification Scores
For object detection qualification, first the numbers of true positives, false positives, and false negatives were counted. True positives are all predicted boxes that overlap a groundtruth box with an intersection over union (IoU) > 0.5. The intersection over union is the area of overlap of the predicted and groundtruth bounding boxes divided by the area of their union. False positives are all predicted boxes that do not overlap any groundtruth box with an IoU > 0.5 and are, therefore, falsely predicted by the object detector. An example of a false positive is the prediction of two boxes for one object. Objects not predicted by the CeCILE 2.0 object detection are called false negatives. After that, the following scores were calculated for each frame:
Recall, also known as the true positive rate, is the percentage of cells correctly identified for a class out of the total cells of this class:
Recall = TP / (TP + FN)
Precision is the percentage of cells correctly predicted for a class out of all cells predicted for that class:
Precision = TP / (TP + FP)
The F1-score is the harmonic mean of precision and recall:
F1 = 2 · (Precision · Recall) / (Precision + Recall)
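Given the per-frame counts of true positives (TP), false positives (FP), and false negatives (FN), the three scores can be computed as follows (a minimal sketch):

```python
def detection_scores(tp, fp, fn):
    """Per-frame recall, precision, and F1 from TP/FP/FN counts."""
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0     # found / all groundtruth cells
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0  # correct / all predictions
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) > 0 else 0.0)           # harmonic mean
    return recall, precision, f1
```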
For the mAP score, the class labels of the boxes were also taken into account when assigning the boxes as true positives, false positives, and false negatives. Now, a box is only a true positive if it overlaps a groundtruth box with an IoU > 0.5 and the labels are the same. If the label does not match the label of the corresponding groundtruth box, the box is a false positive. The boxes were sorted according to the confidence score assigned by the object detector, starting with the highest confidence score and proceeding to boxes with smaller scores. The precision and the recall were calculated by taking into account only the current box and all boxes with a higher confidence score than the current box. This is repeated for all boxes, and in every step, one more box is taken into account. The average precision is the area under the resulting recall-precision curve. The mean average precision score was calculated for each class individually as the mean of the average precision at an IoU threshold of 0.5 (mAP liv, mAP round, mAP div, and mAP dead). If a class appeared only in the predictions or only in the groundtruth, it was assigned a mAP score of 0. The overall mAP was defined as the average over all classes present in each frame.
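The average precision calculation described above can be sketched as follows. The input is a confidence-sorted list of flags marking each predicted box of one class as a true or false positive; the AP is accumulated as the area under the resulting recall-precision curve (a plain rectangle rule, without the interpolation variants used by some benchmarks):

```python
def average_precision(is_tp, n_groundtruth):
    """AP for one class: area under the precision-recall curve built by
    sweeping over predictions in order of decreasing confidence.
    is_tp: booleans, one per predicted box, sorted by confidence."""
    tp = fp = 0
    prev_recall = 0.0
    ap = 0.0
    for flag in is_tp:
        if flag:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / n_groundtruth
        # Rectangle rule: precision times the recall gained in this step
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap
```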
4.4. Centroid Tracking
The bounding boxes obtained from the detection are passed to a tracking algorithm. By implementing a tracking algorithm that uses the bounding boxes obtained from object detection, CeCILE 2.0 is able to track each cell throughout the video. The tracking algorithm assigns a unique ID to each bounding box in the first frame of a video. In the next frame, the tracking algorithm matches the bounding boxes with the bounding boxes of the previous frame. If a matching bounding box is found, it is given the same ID as its matching partner. If there is a bounding box in the current frame that has no matching partner in the previous frame, it receives its own unique ID. This matching process is repeated for all frames in the video.
To track the cells based on their location and the bounding boxes obtained from the detection, the centroid tracker developed by Adrian Rosebrock [
22] was implemented in CeCILE 2.0 and adapted to the specific needs of cell tracking. This object tracker uses the OpenCV library in Python. In the first step of the centroid tracker, the centroid of each bounding box in a frame is determined by calculating the center coordinates of the bounding box from the box coordinates:
centroidX = (startX + endX) / 2,
centroidY = (startY + endY) / 2,
where centroidX and centroidY are the coordinates of the center of a bounding box in the x and y directions, startX and startY are the x- and y-coordinates of the upper left corner of a bounding box, and endX and endY correspond to the x- and y-coordinates of the lower right corner of a bounding box. In the first frame of the video, each bounding box is assigned a unique ID by the function register. This function stores the IDs as keys and the bounding box coordinates as values in the dictionary objects. In each subsequent frame n, the bounding boxes are mapped to the bounding boxes of the previous frame n − 1 using the Euclidean distance d. Here, the Euclidean distance between the centroids of each pair of bounding boxes in frame n and frame n − 1 is computed and stored in a distance matrix, where the columns correspond to the bounding boxes of frame n − 1 and the rows correspond to the bounding boxes of frame n:
d = sqrt((centroidX(n) − centroidX(n − 1))² + (centroidY(n) − centroidY(n − 1))²)
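These two steps, centroid computation and the pairwise distance matrix, can be sketched in Python (NumPy is used for the pairwise distances; this is a minimal illustration, not the CeCILE implementation):

```python
import numpy as np

def centroids(boxes):
    """Centers of boxes given as rows of (startX, startY, endX, endY)."""
    boxes = np.asarray(boxes, dtype=float)
    cx = (boxes[:, 0] + boxes[:, 2]) / 2.0
    cy = (boxes[:, 1] + boxes[:, 3]) / 2.0
    return np.stack([cx, cy], axis=1)

def distance_matrix(boxes_n, boxes_prev):
    """Euclidean distances between centroids of frame n (rows)
    and frame n-1 (columns)."""
    c_n = centroids(boxes_n)        # shape (rows, 2)
    c_prev = centroids(boxes_prev)  # shape (cols, 2)
    diff = c_n[:, None, :] - c_prev[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2))
```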
In addition to the tracking proposed by Rosebrock [
22], the IoU overlap of each pair of boxes in frame n − 1 and frame n was calculated and stored in a second matrix of the same shape as the Euclidean distance matrix. From these two matrices, a matching matrix was computed that combines the centroid distances with the box overlaps.
Now, the algorithm searches each row of the matching matrix for the smallest value. These values are ordered in ascending order, and the position of each value in the matrix is also stored. Starting with the first and smallest value, the corresponding bounding boxes in frame n and frame n − 1 are derived based on the position in the matrix. It can be assumed that these two bounding boxes contain the same object since they are the closest to each other and overlap the most. Therefore, the ID of the bounding box in frame n − 1 is assigned to the bounding box in frame n. This step is repeated for all the smallest values in all rows. To avoid double assignment of the bounding boxes of both frames, the indices of the bounding boxes used are stored. Before each new assignment of an ID, it is checked whether the bounding box in frame n − 1 has already been matched to a bounding box of frame n or vice versa. Each time a bounding box is matched, the dictionary objects are updated by assigning the value tuple containing the coordinates of the matched bounding box of frame n to the key ID that was found to be the matching ID. Finally, it is checked whether there are bounding boxes in frame n or n − 1 that have not been matched to a box of another frame. If a bounding box of frame n has no matching partner, the function register is executed. If there is no matching partner for a bounding box in frame n − 1, the deregister function is executed. This function deletes the ID and coordinates of this bounding box from the dictionary objects. This procedure is repeated for each frame of the video. By using the matching matrix instead of the Euclidean distance matrix for the matching, the matching of two boxes that do not overlap or only partly overlap is penalized, and the matching of boxes that have a small center-to-center distance and a huge box overlap is encouraged.
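The greedy matching step described above can be sketched as follows. This is a simplified illustration of the logic only, assuming a precomputed matching matrix; the actual implementation also updates the dictionary objects with the box coordinates and calls register/deregister for unmatched boxes:

```python
def greedy_match(matching_matrix):
    """Greedily pair rows (boxes of frame n) with columns (boxes of
    frame n-1), smallest matching value first, avoiding double
    assignment.  Returns a dict {row_index: col_index}; unmatched rows
    would be registered as new IDs, unmatched columns deregistered."""
    candidates = []
    for r, row in enumerate(matching_matrix):
        # Smallest value in each row and its position in the matrix
        c = min(range(len(row)), key=lambda j: row[j])
        candidates.append((row[c], r, c))
    assignments = {}
    used_rows, used_cols = set(), set()
    for value, r, c in sorted(candidates):  # ascending order
        if r in used_rows or c in used_cols:
            continue  # this box was already matched
        assignments[r] = c
        used_rows.add(r)
        used_cols.add(c)
    return assignments
```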
4.5. Tracking Accuracy
The tracking accuracy was measured by applying the centroid tracker to the groundtruth-bounding boxes of the two test videos and comparing the IDs assigned to the boxes by the tracker to the groundtruth IDs. As the centroid tracker is not able to track across cell divisions, the specific ID changes at cell divisions were ignored for the scoring, and only the consistency of the tracks in between cell divisions was scored. The IDs of each object in a frame
n > 0 were compared to the ID of the very same object in the previous frame
n − 1 in the tracks created by the centroid tracker (prediction) and in the manually corrected tracks (groundtruth). If the ID of an object did not change in either the groundtruth or the prediction between frame
n and frame
n − 1, the variable
right_ID, which was initially 0, was increased by 1. If the ID of an object changes in the prediction but not in the groundtruth, the variable
wrong_ID, which was initially 0, was increased by 1. This was performed for all bounding boxes and frames in the video. Afterwards, the tracking accuracy
tacc was calculated using the following formula:
tacc = right_ID / (right_ID + wrong_ID)
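Under the definitions above, the accuracy computation reduces to a counting loop; a minimal sketch over per-object ID sequences (the list-of-ID-lists input format is illustrative):

```python
def tracking_accuracy(pred_tracks, gt_tracks):
    """pred_tracks/gt_tracks: for each object, the list of IDs assigned
    to it in consecutive frames.  An assignment is correct if the ID
    stays unchanged in the prediction wherever it stays unchanged in
    the groundtruth; groundtruth ID changes (cell divisions) are ignored."""
    right_id = wrong_id = 0
    for pred_ids, gt_ids in zip(pred_tracks, gt_tracks):
        for n in range(1, len(gt_ids)):
            if gt_ids[n] != gt_ids[n - 1]:
                continue  # ID change in groundtruth: not scored
            if pred_ids[n] == pred_ids[n - 1]:
                right_id += 1
            else:
                wrong_id += 1
    total = right_id + wrong_id
    return right_id / total if total else 1.0
```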
4.6. Cell Culture and Irradiation
CHO cells were cultivated in the growth medium RPMI 1640 (R8758-500ML, Sigma-Aldrich, St. Louis, MO, USA) supplemented with 10% FCS (fetal calf serum, F0804-500ML, Sigma-Aldrich, USA), 1% Penicillin-Streptomycin (P4333-100ML, Sigma-Aldrich, USA), and 1% Sodium Pyruvate (S8636-100ML, Sigma-Aldrich, USA). Cells were grown in an incubator at a temperature of 37 °C, 5% CO2, and 100% humidity and were passaged twice a week. Cells were seeded for 24 h before irradiation on µ-dishes with glass bottoms (μ-Dish 35 mm, Ibidi, Martinsried, Germany). For irradiation, an X-ray cabinet (CellRad, Precision Xray Inc., Madison, CT, USA) was used. One dish was irradiated with 3 Gy of X-rays (130 kV) and a dose rate of 0.067 Gy/s and the other dish was sham irradiated.
4.7. Live-Cell Microscopy
Cells were imaged by live-cell microscopy for up to 4 days. For this purpose, the microscope was equipped with a stage-top incubator (Tokai Hit STX, Tokai Hit, Fujinomiya, Japan). The incubator allows cells to be maintained under cell culture conditions (37 °C, 5% CO2, and 100% humidity). The water reservoir in the stage-top incubator was replenished every day during the recording period, and the growth medium was replenished every second day. The water reservoir, high humidity, and medium replenishment prevented the cell sample from drying out and ensured optimal physical conditions and nutrient supply for the cells during observation. A 10× objective (Plan-Apochromat 10×/0.45 Ph1, Zeiss, Oberkochen, Germany) was used for imaging. Cells were imaged in two modes: in standard phase contrast using the phase stop Ph 1 (Zeiss, Germany), as suggested by the manufacturer, hereafter referred to as mode 1; and in a bright-field phase-contrast mode using the phase stop Ph 2 (Zeiss, Germany), hereafter referred to as mode 2. Both imaging modes were included in the dataset to increase the robustness of the detection algorithm and to encourage the algorithm to learn the structure and pattern of different cell morphologies rather than intensity patterns. The test videos were recorded in mode 1.