Rapid Quantification of Microvessels of Three-Dimensional Blood–Brain Barrier Model Using Optical Coherence Tomography and Deep Learning Algorithm

The blood–brain barrier (BBB) is a selective barrier that controls transport between the blood and neural tissue and maintains brain homeostasis to protect the central nervous system (CNS). In vitro models can be useful for understanding the role of the BBB in disease and for assessing the effects of drug delivery. Recently, we reported a 3D BBB model with perfusable microvasculature in a Transwell insert. It replicates several key features of the native BBB: it showed size-selective permeability to dextrans of different molecular weights, activity of the P-glycoprotein efflux pump, and functionality of receptor-mediated transcytosis (RMT), the most investigated pathway for the transport of macromolecules through the endothelial cells of the BBB. For quality control and permeability evaluation in commercial use, visualization and quantification of the 3D vascular lumen structures are crucial. Here, for the first time, we report a rapid, non-invasive optical coherence tomography (OCT)-based approach to quantify the microvessel network in the 3D in vitro BBB model. Briefly, we obtained 3D OCT images of the BBB model and processed them using three strategies: morphological imaging processing (MIP), random forest machine learning using the Trainable Weka Segmentation plugin (RF-TWS), and deep learning using pix2pix cGAN. The performance of these methods was evaluated by comparing their output images with manually selected ground truth images. The results suggest that deep learning performed well in identifying objects in OCT images, and its computed vessel counts and surface areas were close to the ground truth. This study not only facilitates the permeability evaluation of the BBB model but also offers a rapid, non-invasive observational and quantitative approach for the growing number of other 3D in vitro models.


Introduction
The blood-brain barrier (BBB) is a semi-permeable interface between the blood circulation and the extracellular fluid of the central nervous system (CNS). Its unique multicellular structure maintains brain homeostasis by tightly controlling the selective transport of molecules, ions, and other nutrients, and protects the brain against blood-borne toxins and pathogens [1]. The disruption of the BBB leads to multiple neurological diseases.
With the increasing application of OCT for in vitro models, there is a need for processing technologies to identify regions of interest (ROI) within the 3D images and derive quantitative values from OCT images for further analysis of the desired structures. However, to the best of our knowledge, there are no studies to date that have comprehensively examined the processing of OCT images of 3D in vitro vascular models.
In this study, we report a rapid and non-invasive OCT-based approach to visualize and quantify the vessel lumen structures of the 3D BBB in vitro model. OCT was applied to visualize the vascular network of the 3D BBB model, and the resultant image stacks were processed using three detectors (morphological imaging processing (MIP), random forest machine learning using the Trainable Weka Segmentation plugin (RF-TWS), and deep learning using pix2pix cGAN) to identify vascular lumen structures. Their performances were evaluated by comparing their output images with manually selected ground truth images. Finally, the vessel counts and total surface areas of the vascular structures were calculated (Figure 1). Our approach not only facilitates the permeability assessment of the 3D BBB model but also offers a rapid and non-invasive observational and image processing approach for the increasing number of other 3D in vitro models.
Biosensors 2023, 13, x FOR PEER REVIEW
To visualize and quantify the global morphology of micro-luminal structures of the 3D transwell BBB model for quality control in commercial use, a rapid and non-invasive approach is needed. Optical coherence tomography (OCT) is a powerful imaging technology based on interference between a split and later recombined broadband optical field [18]. It provides non-invasive 3D images of inhomogeneous samples and has been widely used in ophthalmology [19]. Recently, OCT has been used for imaging spheroidal BBB models [20], bovine preimplantation embryos [21], and 3D microvasculature models [22] to visualize the dynamics of angiogenic sprouting, owing to its fast image acquisition, wide range of sample depths, and lack of any need for sample preparation.

Figure 1. The workflow of the non-invasive OCT-based approach for visualizing lumen structures of the 3D BBB in vitro model. The x-z cross-sectional images obtained from OCT were converted to z-stacks of x-y images. These converted images were analyzed with morphological imaging processing (MIP), RF machine learning using the TWS plugin (RF-TWS), and deep learning using pix2pix cGAN, and the output images were then used for visualization and quantification of the 3D vessel network of the BBB model.

Cell Culture and Fabrication of 3D BBB Model
The 3D BBB model was prepared as previously reported [10]. Briefly, human brain microvascular endothelial cells/conditionally immortalized clone 18 (HBEC) [23], human astrocytes/conditionally immortalized clone 35 (HA) [24], and human brain pericytes/conditionally immortalized clone 37 (HP) [25] were resuspended in a fibrin gel at a ratio of 2:4:1 (HBEC:HA:HP), and the gel was then polymerized in membrane-free transwell inserts to partition the apical side from the basolateral side. The next day, HBEC were seeded onto the bottom of the fibrin gel to form an HBEC monolayer, which fused with the endothelial vasculature in the gel to form a luminal vascular network with perfusable opening structures after 7 days of culture (Figure 1a).

Image Acquisition and Preprocessing
For OCT imaging of the in vitro BBB model, the samples were captured from the bottom of the fibrin gel after 7 days of culture using a spectral domain OCT (SD-OCT) system (SCREEN Holdings Co., Ltd., Kyoto, Japan). The system was operated at a center wavelength of 890 nm with an A-scan rate of 6000 Hz, a sensitivity of 107 dB, and a resolution of 10 µm in the low-resolution mode and 3 µm in the high-resolution mode. The OCT provided 600 2D x-z cross-sectional images of 600 × 60 pixels (width × height; 10 µm per pixel) with 10 µm thickness (Figure 1b). For comparison with fluorescence images, the y-stack of x-z images was converted to a z-stack of x-y images (60 images) (Figure 1c).
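The stack conversion above is a simple re-slicing of the OCT volume; a minimal NumPy sketch (array sizes taken from the text, data here are random placeholders):

```python
import numpy as np

# Hypothetical sketch of the x-z -> x-y stack conversion described above.
# Assume the OCT export is 600 x-z frames (one per y position), each
# 60 rows (z, depth) by 600 columns (x), 10 um per pixel.
xz_stack = np.random.rand(600, 60, 600)  # (y, z, x) placeholder data

# Re-slice so each frame is an en-face x-y image at a fixed depth z:
xy_stack = xz_stack.transpose(1, 0, 2)   # -> (z, y, x) = (60, 600, 600)

print(xy_stack.shape)  # (60, 600, 600): 60 en-face images of 600 x 600 px
```

No interpolation is needed because the pixel pitch is the same (10 µm) in all three axes.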
To confirm that the luminal structures visualized by OCT are microvascular structures, the samples were used for fluorescence imaging of living cells after the OCT measurement. CD31 was used as the biomarker for endothelial cells forming blood vascular lumens [26]. Samples were washed twice with phosphate-buffered saline (PBS) and incubated with Alexa Fluor® 488 anti-CD31 antibody (Abcam, ab215911, Cambridge, UK) at 37 °C for 1 h, followed by two additional PBS washes. The resultant samples were scanned with a confocal laser-scanning microscope (Olympus FV3000, Olympus, Tokyo, Japan) using a 10× objective lens with a step size of 2 µm. The scan parameters were kept the same for all measurements of the six BBB samples.
We initially obtained 3D OCT images of each sample to depths of up to 600 µm, but the images were truncated to a depth of 300–350 µm based on the visible range of microvessels. The retained images were preprocessed with histogram equalization to increase global contrast before analysis. Finally, the 6 datasets obtained from the 6 BBB samples were separated into 2 sets of 3 for training and testing, respectively. Each dataset contains 30–35 images, and each image has a size of 600 × 600 pixels. The three detectors (MIP, RF-TWS, and pix2pix cGAN) (Figure 1d) were applied to each testing dataset. The binary images of luminal structures output by the three detectors (Figure 1e) were used to generate 3D images (Figure 1f). Unless stated otherwise, Fiji software (version 2.9.0) [27] was used for image processing.
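The histogram-equalization preprocessing can be sketched as follows; this is a minimal NumPy implementation of global equalization (Fiji's built-in equalization plays this role in the actual pipeline, and the example frame is synthetic):

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalization for an 8-bit grayscale frame.
    A minimal NumPy sketch of the preprocessing step described above."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)][0]        # first non-zero CDF value
    # Map each gray level so the cumulative distribution becomes ~linear.
    lut = (cdf - cdf_min) / (cdf[-1] - cdf_min) * 255
    lut = np.clip(np.round(lut), 0, 255).astype(np.uint8)
    return lut[img]

# Synthetic low-contrast frame: intensities clustered in a narrow band.
rng = np.random.default_rng(0)
frame = np.clip(rng.normal(100, 5, (600, 600)), 0, 255).astype(np.uint8)
eq = equalize_hist(frame)
print(frame.min(), frame.max(), "->", eq.min(), eq.max())
```

After equalization the narrow intensity band is stretched across the full 0-255 range, which is what raises the global contrast before segmentation.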

Morphological Imaging Processing (MIP)
The interfacial tension between the hydrogel and the transwell insert causes a depression at the center of the hydrogel (Figure S1a); thus, the 3D OCT images are vacant at their centers for the initial few layers of the image stacks (Figure S1b). The edge of the culture insert, which appears very bright, disturbed the recognition of luminal structures in the binarized images. To remove these untargeted regions (the edge and the central vacancy of the images), we first identified a binary region of interest (ROI) mask for each image stack (Figure S2a). For the main image processing, Gaussian blur and adaptive thresholding were performed on the preprocessed images, which were then inverted for morphological operations. The adaptive threshold value was adjusted with reference to the ground truth images for correct detection, followed by morphological erosion and opening with connected-component labeling to remove large and small objects, respectively. The resulting images were combined and filtered by the ROI mask, forming the binary output images (Figure S2b). This work was implemented in Python.
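A sketch of this MIP pipeline in Python, using SciPy's ndimage in place of the exact implementation (all parameter values and the toy frame are illustrative assumptions, not the values used in the study):

```python
import numpy as np
from scipy import ndimage as ndi

def mip_segment(frame, roi_mask, block=51, offset=5, min_size=20, max_size=5000):
    """Sketch of the MIP pipeline: blur, adaptive threshold, morphological
    erosion/opening, size filtering via connected components, ROI masking."""
    blurred = ndi.gaussian_filter(frame.astype(float), sigma=2)
    # Adaptive threshold: compare each pixel to its local mean; lumens are
    # darker than tissue, so selecting "darker than local mean" plays the
    # role of threshold-then-invert in the text.
    local_mean = ndi.uniform_filter(blurred, size=block)
    binary = blurred < local_mean - offset
    binary = ndi.binary_erosion(binary)
    binary = ndi.binary_opening(binary)
    # Connected-component labeling to drop too-small and too-large objects.
    labels, n = ndi.label(binary)
    sizes = ndi.sum(binary, labels, np.arange(1, n + 1))
    good = np.flatnonzero((sizes >= min_size) & (sizes <= max_size)) + 1
    return np.isin(labels, good) & roi_mask

# Toy frame: two dark circular "lumens" on a brighter tissue background.
yy, xx = np.mgrid[:200, :200]
frame = np.full((200, 200), 150.0)
frame[(yy - 60) ** 2 + (xx - 60) ** 2 < 100] = 40
frame[(yy - 140) ** 2 + (xx - 140) ** 2 < 100] = 40
roi_mask = np.ones_like(frame, dtype=bool)
out = mip_segment(frame, roi_mask)
print(out.sum(), "lumen pixels detected")
```

In the actual pipeline the ROI mask additionally excludes the bright insert edge and the central vacancy rather than passing the whole frame.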

RF Machine-Learning Using TWS (RF-TWS)
The semi-automatic Trainable Weka Segmentation 3D (TWS) [28] is a robust and user-friendly Fiji/ImageJ plugin containing a large number of machine learning algorithms for classifier training; the random forest (RF) algorithm was used in this study. In detail, the stack images from the training dataset were input into the classifier, and representative pixels of the vascular lumen and background in each slice were randomly selected and labeled as two classes for training. Once the classifier is obtained, the remaining unlabeled pixels are classified, and binary classification images are output. Classification results are inspected in the binary images, and misclassifications are corrected manually before repeatedly retraining the classifier and correcting misclassifications using different randomly selected subsets to obtain more accurate results. The final output images were post-processed with morphological closing to remove small, unwanted objects (Figure S3).
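TWS itself is an interactive Fiji plugin, but its core idea, a random forest over per-pixel feature stacks, can be sketched with scikit-learn (the feature set, toy image, and parameters below are illustrative assumptions, not the plugin's internals):

```python
import numpy as np
from scipy import ndimage as ndi
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img):
    """Per-pixel feature stack: raw intensity plus two Gaussian blurs
    (a small subset of the feature types TWS offers)."""
    feats = [img, ndi.gaussian_filter(img, 1), ndi.gaussian_filter(img, 4)]
    return np.stack([f.ravel() for f in feats], axis=1)

# Toy image: bright "tissue" with two dark "lumen" patches.
rng = np.random.default_rng(0)
img = np.full((64, 64), 0.8) + rng.normal(0, 0.02, (64, 64))
img[10:20, 10:20] = 0.2
img[40:55, 30:45] = 0.2
truth = (img < 0.5).astype(int)

X, y = pixel_features(img), truth.ravel()
# Train on a random subset of labeled pixels, mimicking the interactive
# "label a few representative pixels per class" workflow.
idx = rng.choice(len(y), 500, replace=False)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[idx], y[idx])

pred = clf.predict(X).reshape(img.shape)
print("pixel accuracy:", (pred == truth).mean())
```

In the interactive workflow, the retraining loop corresponds to re-fitting the forest after adding corrected labels to the training subset.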

Deep Learning Using Pix2pix cGAN
As a supervised image-to-image solution, the pix2pix conditional generative adversarial network (pix2pix cGAN), comprising a U-Net generator and a PatchGAN discriminator [29], does not need to be told about specific features of the input images but learns to differentiate the two classes through the simultaneous training of adversarial discriminative and generative networks. In our case, 104 OCT image frames from three individual datasets were horizontally and vertically flipped for training (training datasets 1-3), while 101 OCT image frames from the other three datasets were used for testing (testing datasets 1-3). For classifier training, we used a patch-based strategy to learn discriminative features. Briefly, each input image was split into nine overlapping 256 × 256-pixel patches, which were fed to the generator and translated to the target (output) images (Figure S4). This work was implemented in Mathematica.
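One plausible way to produce nine overlapping 256 × 256 patches from a 600 × 600 frame is to space the patch origins evenly on a 3 × 3 grid; the exact tiling used in the study is not specified, so this is an assumption:

```python
import numpy as np

def split_into_patches(img, patch=256, grid=3):
    """Split a square frame into a grid x grid set of overlapping
    patch x patch tiles. For a 600 x 600 frame with patch=256 and grid=3,
    the start offsets are [0, 172, 344], giving nine overlapping patches."""
    h, w = img.shape
    ys = np.linspace(0, h - patch, grid).astype(int)
    xs = np.linspace(0, w - patch, grid).astype(int)
    return [img[y:y + patch, x:x + patch] for y in ys for x in xs]

frame = np.random.rand(600, 600)
patches = split_into_patches(frame)
print(len(patches), patches[0].shape)  # 9 (256, 256)
```

The 256 × 256 patch size matches the input resolution the original pix2pix architecture was designed for, which is presumably why the 600 × 600 frames are tiled rather than resized.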

Performance Evaluation
In this study, the manually labeled binary images derived from testing datasets 1-3 served as the ground truth. Agreement between the ground truth images and those of the different image processing methods was evaluated with the inter-rater reliability measure known as Cohen's kappa [30]:

κ = (p_o − p_e) / (1 − p_e),  where p_o = (TP + TN)/N,  p_e = [(TP + FP)(TP + FN) + (FN + TN)(FP + TN)]/N²,  and N = TP + FP + TN + FN, (1)

where TP, FP, TN, and FN represent the numbers of true positives, false positives, true negatives, and false negatives in an image frame, respectively. A segmented object pixel that overlaps the ground truth is counted as a true positive; otherwise, it is a false positive. A positive ground truth pixel that is not overlapped by any segmented object is counted as a false negative, while a negative ground truth pixel that does not overlap a segmented object is counted as a true negative.
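Cohen's kappa can be evaluated directly from the four confusion counts; a minimal sketch with illustrative numbers:

```python
def cohens_kappa(tp, fp, tn, fn):
    """Cohen's kappa for a binary confusion matrix."""
    n = tp + fp + tn + fn
    p_o = (tp + tn) / n                      # observed agreement
    # Chance agreement from the two raters' marginal frequencies:
    # detector positives/negatives vs. ground-truth positives/negatives.
    p_pos = ((tp + fp) / n) * ((tp + fn) / n)
    p_neg = ((fn + tn) / n) * ((fp + tn) / n)
    p_e = p_pos + p_neg
    return (p_o - p_e) / (1 - p_e)

# Illustrative counts: 80% raw agreement, 50% expected by chance -> kappa 0.6.
print(round(cohens_kappa(tp=40, fp=10, tn=40, fn=10), 3))  # 0.6
```

Note how kappa discounts the agreement expected by chance: 80% raw pixel agreement here reduces to a kappa of 0.6.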

Visualization of Microvascular Structures in BBB Model by OCT
Typically, confocal microscopy (CM) is very effective for visualizing the structures of 3D tissue models in vitro, but limitations such as light scattering and the numerical aperture (NA) of the objective restrict the penetration depth to less than 100 µm [31]. In addition, to acquire images of 3D tissue samples with CM, the samples need to be preprocessed by immunostaining, which is time consuming (more than 2 days), and scanning the large volume (8457 × 8382 × 180 µm³) of the BBB model is demanding, requiring a complicated image stitching process to combine 49 smaller 3D images (1208 × 1197 × 180 µm³) into a reconstructed 3D global image of our BBB tissue sample. On the other hand, since OCT typically offers deeper optical penetration but lower resolution than confocal microscopy, we expected that OCT could visualize deeper vascular structures of our 3D BBB tissue in a shorter time. Therefore, we first evaluated the potential of OCT imaging for visualizing the global microvascular lumen morphology of the BBB model.
Our BBB model sample consists of a fibrin hydrogel construct that is ~1200 µm thick, with endothelial micro-tubular lumina extending from its bottom surface, which is covered with a monolayer of endothelial cells (Figure 1a). Numerous pore-like morphologies representing cross-sectional lumen areas are obvious in the representative OCT image, which is located at a depth of 160 µm from the bottom side of the construct (Figure 2a). In addition, we also observed closed micro-luminal structures inside the gel (Figures 2b and S6). Open and closed luminal structures could be observed more clearly in the high-magnification mode of OCT (Figure 2b). Although OCT could capture structures from the bottom side up to a depth of 600 µm, clear images could not be obtained beyond a depth of 300–350 µm due to the gradual blurring of the OCT images. Because open micro-luminal structures connected to the bottom side rarely extended beyond a depth of 350 µm, this poses no major problem for our purposes, such as permeability assessment, even if other closed luminal structures may be present.
The sample was subsequently stained with the endothelial marker CD31 and observed by confocal microscopy to confirm that the luminal structures visualized by OCT are microvascular structures. Although the two images cannot be perfectly matched because of the completely different measurement principles, image resolutions, etc., of OCT and CM, the matched objects in both images demonstrate that the pore-like morphologies in the OCT image are microvascular lumens formed by endothelial cells (Figure S5), as almost all micro-luminal structures are identical to microvessel structures. Moreover, we successfully obtained full 3D images of our BBB model sample with a volume of 6000 × 6000 × 600 µm³ (after conversion) in only 14 min using OCT, confirming that OCT imaging is a powerful tool for visualizing micro-luminal structures, including microvessels, in our BBB model.

Image-Based Evaluation of Performance of Three Detectors
To quantify the vascular lumen structures of the 3D BBB in vitro model, it is necessary to extract the vascular lumen structures accurately and selectively from the OCT images. However, OCT images generally lose resolution as tissue depth increases, and OCT images of the tissue model change in appearance. It is therefore challenging to extract the pore-like morphology precisely from 3D image stacks that include images with low contrast or unavoidable noise. In our study, we initially obtained 3D images of each sample to depths of up to 600 µm, but because most open structures extended less than 300–350 µm in depth, the images were cropped to this depth to minimize resolution and noise issues. The performance of the three candidate detectors (MIP, RF-TWS, and pix2pix cGAN) in vascular lumen recognition was then compared on 3D OCT images derived from one BBB model.
First, the 31 images in testing dataset 1 acquired by OCT were segregated into three regions along the z-axis, showing different morphologies. A representative image of each region is shown on the left side of Figure 3. The images of the surface region, including layers 1~5, had vacancies in the center due to the central hydrogel depression caused by the interfacial tension between the hydrogel and the culture insert, whereas tissue and luminal structures were observed at the edge of the insert (top-left). The images of the middle region, including layers 6~20, showed relatively clear contrast between tissue and luminal structures (middle-left). The images of the deeper region, including layers 21~31, gradually became unclear owing to the depth-dependent degradation of the OCT images (bottom-left). These original OCT images were processed by the three detectors as input images, and, as a control, the luminal structures were manually marked and used as ground truth images to evaluate each detector.

In the detector based on morphological imaging processing, we manually tuned a combination of general operations, including Gaussian blur, adaptive thresholding, and morphological erosion/opening, to recognize the luminal structures in testing datasets 1, 2, and 3. The edges of the culture insert, which appear bright, disturbed the recognition of luminal structures in the binarized images. Therefore, a circular ROI mask (r = 250 pixels) was first created and applied through all stacks to remove the interference of the insert edge, and the parameters of each operation for classifying luminal structures were then determined. However, despite optimizing each parameter manually as far as possible, recognition of luminal objects at the edges of the culture insert was unsuccessful. As a result, few luminal structures were detected in some layers belonging to the surface region (top-third from left). On the other hand, in comparison with the ground truth images, objects at middle depths were well recognized by the MIP detector, although recognition accuracy in the deeper region was poor (middle-third and bottom-third from the left).
Initially, the two trainable detectors, RF-TWS and pix2pix cGAN, were trained using three datasets (training datasets 1, 2, and 3) and validated using the other three datasets (testing datasets 1, 2, and 3). However, the classifier obtained from the training datasets was unsuccessful on the test datasets when using RF-TWS. We therefore applied the RF-TWS detector directly to the testing images to identify luminal structures. As a result, the locations of many objects were successfully detected, without applying the ROI mask, in layers belonging to the surface (top-center and top-right) and middle regions (middle-center and middle-right). Similar to the MIP detector, the RF-TWS detector showed poor object recognition at larger depths, possibly because of the lower contrast and uneven pixel intensity distributions of those images. Notably, pix2pix cGAN identified most of the objects even with low contrast and a complex background (bottom-center).

Quantitative Evaluation of the Performance of 3 Detectors
Next, for a more detailed assessment, we evaluated the performance of the three detectors quantitatively using the three testing datasets. The output binary images showing the locations of luminal structures identified by the three detectors at each depth layer in each dataset were compared with the ground truth images using Cohen's κ coefficient (Equation (1)) to evaluate the recognition accuracy of the luminal structures (Figure 4). Cohen's κ coefficient measures the level of agreement between two judges; in our case, these are the corresponding pixels of the output images and the ground truth images. The coefficient ranges from −1 to +1, with larger values of κ indicating better agreement. Values from 0.21 to 0.40 are considered fair, from 0.41 to 0.60 moderate, and from 0.61 to 0.80 substantial agreement [30,32].
The three image datasets (testing datasets 1, 2, and 3), consisting of 31, 35, and 35 z-stack images, respectively, were captured from three individual BBB samples. The contrast and background of the images differed slightly among the datasets owing to lot-to-lot differences in cell proliferation and in the formation of luminal structures in the fibrin hydrogel constructs. As a result, the images in testing dataset 1 were more easily detected than those in the other two datasets. For the MIP detector, the parameters were manually optimized to process the images of the three testing datasets, but the detection rate was low, indicated by κ values between 0.2 and 0.4. Testing dataset 1, however, showed higher detection performance than the other two, with κ ranging from 0.4 to 0.6 in the depth range of 80 µm to 200 µm (Figure 4a, upper). For RF-TWS, almost all κ values were less than 0.4 across the stack images of the three datasets. This result shows that the trained classifiers did not work well, even though the classifier was continuously optimized through the plugin's interactive feedback. This may be due to inconsistency among the three datasets and strong noise in the OCT images, causing the classifier to misidentify many noise signals in the vacant areas of the images as hole-like structures (Figure 4a, middle). For the deep-learning-based detector, the κ values were above 0.6 in the 100~200 µm depth range and 0.4~0.6 in the 0~100 µm and 200~350 µm depth ranges. These results show sufficient accuracy, and the detector performed with good consistency and reproducibility across the three testing datasets (Figure 4a, bottom).
The averaged κ values reached nearly 0.6 with the deep-learning-based detector (Figure 4b), indicating its good predictive ability. Deep learning generally requires large amounts of data for optimizing classifiers [33], but our detector, trained using only 104 images derived from three training datasets, worked well on the other three testing datasets. These results suggest that the cGANs used in our detection work have the potential to learn from limited information [34]. Patch-based methods and geometric transformations may be helpful for deep learning with limited data, and preprocessing operations such as histogram equalization might be applicable in learning models with non-uniform and low-contrast images. Overall, the recognition performance of our optimized deep-learning-based detector shows good agreement with the ground truth and reduces the complexity of object identification.

Figure 4. (a) Cohen's κ (Equation (1)) of each frame with increasing depth (from left to right) for the three datasets; each frame was 10 µm deep. (b) Averaged Cohen's κ of the three approaches calculated from testing datasets 1, 2, and 3; error bars represent the maximum and minimum values.
We investigated the performance of the three detectors via image-based and quantitative evaluations. Each approach to 3D OCT image analysis has advantages and disadvantages. Morphological imaging processing has good detection ability on high-contrast images with high accuracy, but it requires manually developed convolutional matrices for edge detection and feature extraction. RF-TWS has a user-friendly graphical interface; it provides feedback to users after segmentation, allowing them to correct or add new labels and retrain until ideal results are obtained. However, it might not work well on inhomogeneous datasets. Both of these detectors showed poor detection capability on low-contrast images. The deep-learning-based pix2pix cGAN can automatically detect objects with its robust convolutional neural network; it performs well on both high- and low-contrast images and offers the possibility of detecting objects at large scale without additional effort. However, it must be trained on a suitable set of images.

Quantification of 3D Vascular Structures of BBB Model
The accurate quantification of vascular lumen structures of the 3D BBB model is crucial for quality control in commercial use. By measuring the vascular surface area, permeability can be assessed for further drug screening. Thus, we characterized the microvessels of our model using the output images of the deep learning algorithm and compared its results with those from ground truth images for further performance evaluation.
The 3D vascular structures were segmented from the binarized images using pixel connectivity in the Image Processing Toolbox of MATLAB R2022b (Figure 5a). Since, in permeability assays, the tested compounds can only perfuse through the open ends of the capillaries, we define luminal structures that are connected to the bottom surface of the fibrin gel as open vascular structures; otherwise, structures are considered closed (Figure S6). Here, we removed the closed vascular structures and evaluated only the open vascular structures involved in the permeability assay. The number of structures in each dataset was counted, the average diameter was calculated as the minimum x-y distance across each ROI object, and the total surface area was computed as the product of the object circumference in each frame and the number of frames along the z-axis. The number of structures predicted by deep learning is almost the same as that of the ground truth (p = 0.97), and the total surface area of each BBB model is also close to the ground truth value (p = 0.59), although the average diameters returned by deep learning are slightly smaller (p = 0.01) owing to noise in the OCT measurement. These results demonstrate the possibility of quantifying the vascular lumen of the BBB model with OCT. In the future, automatic prediction by deep learning may be suitable for both commercial and research purposes for the rapid quantification of the 3D open vascular structures of the BBB.
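The open/closed distinction can be sketched with 3D connected-component labeling; here SciPy's ndimage stands in for MATLAB's Image Processing Toolbox, and the toy volume and function names are illustrative:

```python
import numpy as np
from scipy import ndimage as ndi

def open_vessel_stats(volume):
    """Label 3D connected components in a binary lumen volume (z, y, x),
    keep only "open" structures that touch the bottom slice (z = 0), and
    return their count together with a mask of their voxels."""
    labels, n = ndi.label(volume)
    # Labels present in the bottom slice are the open structures.
    open_ids = np.setdiff1d(np.unique(labels[0]), [0])
    open_mask = np.isin(labels, open_ids)
    return len(open_ids), open_mask

# Toy volume: one tube open at the bottom, one closed pocket higher up.
vol = np.zeros((10, 40, 40), dtype=bool)
vol[0:6, 5:10, 5:10] = True      # open: reaches z = 0
vol[4:7, 25:30, 25:30] = True    # closed: does not reach z = 0
count, open_mask = open_vessel_stats(vol)
print(count, "open structure(s);", open_mask.sum(), "open voxels")
```

From the retained open mask, the total surface area follows by summing each object's circumference per frame over the frames it spans, as described above.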

Conclusions
For the first time, we reported a rapid approach to quantify the microvessels of a 3D blood–brain barrier model, using non-invasive OCT to acquire global 3D images (14 min, including tomography computation) and deep-learning-based image processing (about 5 h to train a classifier). This study has its limitations; for example, the manually selected ground truth images can be affected by image quality, especially in deep layers, which in turn affects the evaluation accuracy. Although the detectable depth of OCT is limited to 350 µm, only open vascular structures are relevant to the assessment of 3D BBB vascular permeability, and these rarely extend beyond a depth of 300 µm. Structures in deeper layers are thus not required for this purpose.
Many excellent techniques can be used to study 3D in vitro models. Each has its own advantages and limitations, and the appropriate technique should be selected depending on the application. OCT can collect global three-dimensional images of a large area of living cell samples in a relatively short time without any sample preprocessing and can be used to visualize the dynamic generation of the vascular network. Combined with deep-learning-based image processing, morphological properties of the vessel network such as the total surface area (A) can be obtained; together with the permeability–surface-area product (PSe) of the compounds between the apical and basolateral sides, the effective permeability coefficient (Pe) can be evaluated as Pe = PSe/A. Thus, vascular responses to different test molecules can be monitored in real time. The combination of OCT and deep learning provides reliable technical support for the study of 3D microvascular networks and other increasingly developed 3D models. We believe that, with the rapidly growing demand for in vitro 3D models, this combination will be a novel way forward for characterization and analysis.
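The permeability evaluation reduces to a one-line computation once A is available from the OCT reconstruction. The sketch below, with a hypothetical helper name and purely illustrative numbers (not values measured in this study), shows the relation Pe = PSe/A:

```python
def effective_permeability(ps_e: float, surface_area: float) -> float:
    """Effective permeability coefficient Pe = PSe / A, where PSe is the
    permeability-surface-area product (e.g. cm^3/s) between the apical and
    basolateral sides and A is the total vessel surface area (cm^2)
    obtained from the OCT/deep-learning reconstruction."""
    if surface_area <= 0:
        raise ValueError("surface area must be positive")
    return ps_e / surface_area

# Illustrative numbers only:
pe = effective_permeability(ps_e=2.0e-6, surface_area=0.04)
print(pe)  # 5e-05 (cm/s)
```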
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/bios13080818/s1, Figure S1: View of OCT images; Figure S2: Flowchart of morphological imaging processing; Figure S3: Workflow of image processing using RF-TWS; Figure S4: Deep learning using pix2pix cGAN architecture for image processing; Figure S5: Cross-sectional view image frame obtained by OCT and confocal microscopy; Figure

Data Availability Statement: The data presented in this study are available on request from the corresponding author.