Transferable Deep Learning Model for the Identification of Fish Species for Various Fishing Grounds
Abstract
1. Introduction
- Resource survey:
- Collection of detailed catch information, including the quantity of fish, species, length, location, and fishing operations.
- Resource assessment:
- Estimating the condition of a fishery resource to determine its sustainability.
- Resource management:
- Coordinating with stakeholders and establishing control thresholds such as total allowable catch (TAC) based on various indicators.
- Catch:
- Fishing the following year in accordance with the resource management thresholds set.
Citation | Dataset | # of Species | # of Images | Labels | Location |
---|---|---|---|---|---|
[16] | fish4Knowledge | 23 | 27,142 | S | Underwater |
[17] | LifeCLEF 2015 Fish | 15 | >20,000 | S, B | Underwater |
[18] | WildFish | 1000 | 54,459 | S | Underwater |
[19] | WildFish++ | 2348 | 103,034 | S, D | Underwater |
[20] | Fish-Pak | 6 | 915 | S | Landed |
[21] | LargeFish | 9 | 430 | S, I | Landed |
[22] | URPC | 4 | 3701 | S | Underwater |
[23] | DeepFish | 59 | 1291 | S, I | Landed |
[24] | SEAMAPD21 | 130 | 28,328 | S, B | Underwater |
[15] | FishNet | 17,357 | 94,778 | S, B | Underwater, Landed |
Citation | Target Task | Model | Pre-Train |
---|---|---|---|
[4] | Original | NN | None |
[5] | Fish4Knowledge | CNN (2 layers) | None |
[8] | Original | Inception v3 | ImageNet |
[6] | Fish-Pak | CNN (32 layers) | None |
[9] | Fish4Knowledge | ResNet50 | ImageNet |
[10] | Fish4Knowledge | Inception v3 | ImageNet |
[11] | LifeCLEF 2015 Fish | ResNet50 | ImageNet |
[12] | LifeCLEF 2015 Fish & URPC | ResNet50 | ImageNet |
[25] | Original | VGG16 | Unknown |
[13] | Large Fish | MobileNetV2 | ImageNet |
[14] | SEAMAPD21 | MobileNetV3 | Unknown |
[15] | FishNet | ConvNeXt | ImageNet |
- We constructed a large fish identification dataset from images and labels provided by the Kanagawa Prefectural Museum of Natural History (KPM) and Zukan.com, Inc. The KPM dataset comprises approximately 200,000 images, each labeled with a fish species (Japanese name). Using this dataset together with several training strategies, we developed a transferable fish identification (TFI) model for various deep learning architectures.
- For the fish species identification task, experiments on publicly available datasets demonstrated that the developed model transfers better than common models pre-trained on ImageNet.
- Experiments on a subset of 129 fish species targeted by the Japanese resource survey showed that the proposed model achieved 72.9% accuracy without additional training.
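The transfer step underlying these contributions can be illustrated with a minimal sketch: keep the pre-trained backbone frozen and train only a new output layer for the target species set. Everything below (synthetic features, dimensions, learning rate) is illustrative and not the paper's actual configuration.

```python
import numpy as np

# Sketch of transfer to a new fishing ground: backbone features are frozen
# (here: random stand-ins), and only a fresh softmax output layer is trained
# on the target species. All sizes and the learning rate are illustrative.
rng = np.random.default_rng(0)
n, d, k = 200, 64, 5                 # samples, feature dim, target species
feats = rng.normal(size=(n, d))      # stand-in for frozen backbone features
labels = rng.integers(0, k, size=n)

W = np.zeros((d, k))                 # new output layer, trained from scratch

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ce_loss(W):
    p = softmax(feats @ W)
    return -np.log(p[np.arange(n), labels] + 1e-12).mean()

start = ce_loss(W)                   # ~log(k) before training
for _ in range(100):                 # plain gradient descent on the head only
    p = softmax(feats @ W)
    p[np.arange(n), labels] -= 1.0   # gradient of softmax cross-entropy
    W -= 0.1 * feats.T @ p / n
final = ce_loss(W)
```

In practice the head may also be fine-tuned jointly with the backbone; this sketch shows only the cheapest variant, where pre-trained features are reused as-is.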
2. Materials and Methods
2.1. Dataset Description
2.1.1. KPM
2.1.2. Webfish
2.1.3. Fish-Pak
2.1.4. LargeFish
2.2. Proposed Method
2.2.1. Pre-Training Phase
2.2.2. Implementation Phase
2.3. Experimental Settings
3. Results
- Potential performance evaluation:
- We gauge the proposed model’s potential by examining its estimation accuracy on the KPM test set during pre-training with the KPM dataset (Section 3.1).
- Impact of pre-training:
- The effect of pre-training methods on transfer performance is evaluated. The robustness of our approach is demonstrated through a transfer performance analysis comparing scenarios with large (Webfish dataset) versus small (Fish-Pak and LargeFish datasets) additional training data, the latter reflecting a “few-shot learning” context (Section 3.2).
- Performance without additional training:
- Our method’s performance is analyzed without additional training. This includes comparing direct inference using the TFI model to inference with output layer masking, incorporating prior knowledge (Section 3.3).
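The output layer masking mentioned above can be sketched as follows (the function name and toy numbers are ours, not the paper's): predictions of the TFI model are restricted to species known to occur at the target fishing ground by masking all other logits before the argmax.

```python
import numpy as np

# Output-layer masking sketch: the model scores all species, but only a known
# subset can occur at a given fishing ground, so other logits are masked out.
def masked_predict(logits, allowed_idx):
    """Return the most likely species among `allowed_idx` only."""
    masked = np.full_like(logits, -np.inf)
    masked[allowed_idx] = logits[allowed_idx]
    return int(np.argmax(masked))

logits = np.array([2.0, 5.0, 1.0, 4.0])   # toy scores over 4 species
pred_all = int(np.argmax(logits))          # unmasked prediction: species 1
pred_masked = masked_predict(logits, [0, 2, 3])  # species 1 excluded
```

This is how prior knowledge of the fishing ground can improve accuracy without any additional training: the 2826-way classifier is simply narrowed to the locally plausible species.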
3.1. Evaluation of Pre-Training Phase
3.2. Evaluation of Transfer Learning
3.2.1. Scenario When Relatively Large Dataset Is Available
3.2.2. Scenario Involving Few-Shot Learning
3.3. Evaluation of Output Layer Masking
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. Verification of CLIP as an Image Classifier
- Prepare images, captured in the laboratory, of flathead flounder, black scraper, yellowback seabream, and ukkari scorpionfish.
- The CLIP prompt is “This is xxx” (where xxx is the name of each fish species).
- In addition to the four species above, “John dory”, which somewhat resembles black scraper in appearance, is added to the candidate species names.
- The scores between 0 and 1 output by CLIP are used to evaluate the accuracy of fish species estimation.
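For context, a common way CLIP produces such per-prompt scores is a temperature-scaled softmax over image-prompt cosine similarities. The sketch below uses random stand-in embeddings rather than real CLIP encoders, and the paper's exact scoring may differ.

```python
import numpy as np

# CLIP-style zero-shot scoring, sketched with random stand-in embeddings
# (a real run would use CLIP's image and text encoders on the actual inputs).
rng = np.random.default_rng(0)
prompts = ["flathead flounder", "black scraper", "yellowback seabream",
           "ukkari scorpionfish", "John dory"]    # "This is xxx" candidates

img_emb = rng.normal(size=512)                    # mock image embedding
txt_embs = rng.normal(size=(len(prompts), 512))   # mock prompt embeddings

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

sims = np.array([cosine(img_emb, t) for t in txt_embs])
z = 100.0 * sims                                  # CLIP-like temperature scale
scores = np.exp(z - z.max())
scores /= scores.sum()                            # softmax over candidates
```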
Prompt Species \ Input Image | Flathead Flounder | Black Scraper | Yellowback Seabream | Ukkari Scorpionfish
---|---|---|---|---
flathead flounder | 0.326 | 0.317 | 0.284 | 0.286 |
black scraper | 0.239 | 0.227 | 0.236 | 0.205 |
yellowback seabream | 0.318 | 0.297 | 0.317 | 0.316 |
ukkari scorpionfish | 0.282 | 0.257 | 0.259 | 0.263 |
John dory | 0.336 | 0.338 | 0.315 | 0.297 |
Appendix B. Visualization of Feature Maps
Appendix C. Confusion Matrices of Our Model
1. KPM database: https://www.kahaku.go.jp/research/db/zoology/photoDB/ (English version: https://fishpix.kahaku.go.jp/fishimage-e/index.html) (accessed on 1 February 2024).
2. Web Sakana Zukan: https://zukan.com/fish/ (accessed on 1 February 2024).
3. PyTorch models and pre-trained weights: https://pytorch.org/vision/stable/models.html (accessed on 1 February 2024).
4. Image classification reference training scripts: https://github.com/pytorch/vision/tree/main/references/classification (accessed on 1 February 2024).
5. PyTorch models and pre-trained weights: https://pytorch.org/vision/stable/models.html (accessed on 1 February 2024).
6. TorchMetrics F1 score: https://torchmetrics.readthedocs.io/en/stable/classification/f1_score.html (accessed on 1 February 2024).
7. PyTorch ResNet: https://pytorch.org/vision/0.15/models/generated/torchvision.models.resnet50.html (accessed on 1 February 2024).
8. How to Train State-of-the-Art Models Using TorchVision’s Latest Primitives: https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/ (accessed on 1 February 2024).
9. Interactive t-SNE: https://haselab.fuis.u-fukui.ac.jp/research/fish/tsne_en.html (accessed on 1 February 2024).
References
- Garcia, R.; Prados, R.; Quintana, J.; Tempelaar, A.; Gracias, N.; Rosen, S.; Vågstøl, H.; Løvall, K. Automatic segmentation of fish using deep learning with application to fish size measurement. ICES J. Mar. Sci. 2019, 77, 1354–1366. [Google Scholar] [CrossRef]
- Hasegawa, T.; Tanaka, M. Few-shot Fish Length Recognition by Mask R-CNN for Fisheries Resource Management. IPSJ Trans. Consum. Devices Syst. 2022, 12, 38–48. (In Japanese) [Google Scholar]
- Tseng, C.H.; Kuo, Y.F. Detecting and counting harvested fish and identifying fish types in electronic monitoring system videos using deep convolutional neural networks. ICES J. Mar. Sci. 2020, 77, 1367–1378. [Google Scholar] [CrossRef]
- Pornpanomchai, C.; Lurstwut, B.; Leerasakultham, P.; Kitiyanan, W. Shape- and Texture-Based Fish Image Recognition System. Agric. Nat. Resour. 2013, 47, 624–634. [Google Scholar]
- Rathi, D.; Jain, S.; Indu, S. Underwater Fish Species Classification using Convolutional Neural Network and Deep Learning. In Proceedings of the 2017 Ninth International Conference on Advances in Pattern Recognition (ICAPR), Bangalore, India, 27–30 December 2017; pp. 1–6. [Google Scholar] [CrossRef]
- Rauf, H.T.; Lali, M.I.U.; Zahoor, S.; Shah, S.Z.H.; Rehman, A.U.; Bukhari, S.A.C. Visual features based automated identification of fish species using deep convolutional neural networks. Comput. Electron. Agric. 2019, 167, 105075. [Google Scholar] [CrossRef]
- Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Li, F.-F. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar] [CrossRef]
- Allken, V.; Handegard, N.O.; Rosen, S.; Schreyeck, T.; Mahiout, T.; Malde, K. Fish species identification using a convolutional neural network trained on synthetic data. ICES J. Mar. Sci. 2018, 76, 342–349. [Google Scholar] [CrossRef]
- Mathur, M.; Goel, N. FishResNet: Automatic Fish Classification Approach in Underwater Scenario. SN Comput. Sci. 2021, 2, 273. [Google Scholar] [CrossRef]
- Murugaiyan, J.; Palaniappan, M.; Durairaj, T.; Muthukumar, V. Fish species recognition using transfer learning techniques. Int. J. Adv. Intell. Inform. 2021, 7, 188–197. [Google Scholar] [CrossRef]
- Ben Tamou, A.; Benzinou, A.; Nasreddine, K. Live Fish Species Classification in Underwater Images by Using Convolutional Neural Networks Based on Incremental Learning with Knowledge Distillation Loss. Mach. Learn. Knowl. Extr. 2022, 4, 753–767. [Google Scholar] [CrossRef]
- Zhou, Z.; Yang, X.; Ji, H.; Zhu, Z. Improving the classification accuracy of fishes and invertebrates using residual convolutional neural networks. ICES J. Mar. Sci. 2023, 80, 1256–1266. [Google Scholar] [CrossRef]
- Dey, K.; Bajaj, K.; Ramalakshmi, K.S.; Thomas, S.; Radhakrishna, S. FisHook—An Optimized Approach to Marine Species Classification using MobileNetV2. In Proceedings of the OCEANS 2023, Limerick, Ireland, 5–8 June 2023; pp. 1–7. [Google Scholar]
- Alaba, S.Y.; Nabi, M.M.; Shah, C.; Prior, J.; Campbell, M.D.; Wallace, F.; Ball, J.E.; Moorhead, R. Class-Aware Fish Species Recognition Using Deep Learning for an Imbalanced Dataset. Sensors 2022, 22, 8268. [Google Scholar] [CrossRef] [PubMed]
- Khan, F.F.; Li, X.; Temple, A.J.; Elhoseiny, M. FishNet: A Large-scale Dataset and Benchmark for Fish Recognition, Detection, and Functional Trait Prediction. In Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 1–6 October 2023; pp. 20439–20449. [Google Scholar]
- Boom, B.J.; Huang, P.X.; He, J.; Fisher, R.B. Supporting ground-truth annotation of image datasets using clustering. In Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), Tsukuba, Japan, 11–15 November 2012; pp. 1542–1545. [Google Scholar]
- LifeCLEF. LifeCLEF 2015 Fish Task. Available online: https://www.imageclef.org/lifeclef/2015/fish (accessed on 9 October 2023).
- Zhuang, P.; Wang, Y.; Qiao, Y. WildFish: A Large Benchmark for Fish Recognition in the Wild. In Proceedings of the 2018 ACM Multimedia Conference on Multimedia Conference, Seoul, Republic of Korea, 22–26 October 2018; pp. 1301–1309. [Google Scholar]
- Zhuang, P.; Wang, Y.; Qiao, Y. Wildfish++: A Comprehensive Fish Benchmark for Multimedia Research. IEEE Trans. Multimed. 2021, 23, 3603–3617. [Google Scholar] [CrossRef]
- Shah, S.Z.H.; Rauf, H.T.; IkramUllah, M.; Khalid, M.S.; Farooq, M.; Fatima, M.; Bukhari, S.A.C. Fish-Pak: Fish species dataset from Pakistan for visual features based classification. Data Brief 2019, 27, 104565. [Google Scholar] [CrossRef] [PubMed]
- Ulucan, O.; Karakaya, D.; Turkan, M. A Large-Scale Dataset for Fish Segmentation and Classification. In Proceedings of the 2020 Innovations in Intelligent Systems and Applications Conference (ASYU), Istanbul, Turkey, 15–17 October 2020; pp. 1–5. [Google Scholar]
- Liu, C.; Li, H.; Wang, S.; Zhu, M.; Wang, D.; Fan, X.; Wang, Z. A Dataset and Benchmark of Underwater Object Detection for Robot Picking. In Proceedings of the 2021 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Shenzhen, China, 5–9 July 2021; pp. 1–6. [Google Scholar]
- Garcia-d’Urso, N.; Galan-Cuenca, A.; Pérez-Sánchez, P.; Climent-Pérez, P.; Fuster-Guillo, A.; Azorin-Lopez, J.; Saval-Calvo, M.; Guillén-Nieto, J.E.; Soler-Capdepón, G. The DeepFish computer vision dataset for fish instance segmentation, classification, and size estimation. Sci. Data 2022, 9, 287. [Google Scholar] [CrossRef]
- Boulais, O.E.; Alaba, S.Y.; Yu, J.; Iftekhar, A.T.; Zheng, A.; Prior, J.; Moorhead, R.; Ball, J.; Primrose, J.; Wallace, F. SEAMAPD21: A large-scale reef fish dataset for fine-grained categorization. In Proceedings of the FGVC8: The Eight Workshop on Fine-Grained Visual Categorization CVPR 2021, Online, 25 June 2021. [Google Scholar]
- Ou, L.; Liu, B.; Chen, X.; He, Q.; Qian, W.; Zou, L. Automated Identification of Morphological Characteristics of Three Thunnus Species Based on Different Machine Learning Algorithms. Fishes 2023, 8, 182. [Google Scholar] [CrossRef]
- Suzuki, A.; Sakanashi, H.; Kido, S.; Shouno, H. Feature Representation Analysis of Deep Convolutional Neural Network using Two-stage Feature Transfer—An Application for Diffuse Lung Disease Classification. IPSJ Trans. Math. Model. Its Appl. 2018, 11, 74–83. [Google Scholar]
- Dana, K.J.; van Ginneken, B.; Nayar, S.K.; Koenderink, J.J. Reflectance and Texture of Real-World Surfaces. ACM Trans. Graph. 1999, 18, 1–34. [Google Scholar] [CrossRef]
- Zhang, X.; Chen, Z.; Gao, J.; Huang, W.; Li, P.; Zhang, J. A two-stage deep transfer learning model and its application for medical image processing in Traditional Chinese Medicine. Knowl.-Based Syst. 2022, 239, 108060. [Google Scholar] [CrossRef]
- Zhang, W.; Deng, L.; Zhang, L.; Wu, D. A Survey on Negative Transfer. IEEE/CAA J. Autom. Sin. 2023, 10, 305–329. [Google Scholar] [CrossRef]
- Soviany, P.; Ionescu, R.T.; Rota, P.; Sebe, N. Curriculum Learning: A Survey. Int. J. Comput. Vis. 2022, 130, 1526–1565. [Google Scholar] [CrossRef]
- Shen, X.; Wang, Y.; Lin, M.; Huang, Y.; Tang, H.; Sun, X.; Wang, Y. DeepMAD: Mathematical Architecture Design for Deep Convolutional Neural Network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 6163–6173. [Google Scholar]
- Chen, X.; Liang, C.; Huang, D.; Real, E.; Wang, K.; Liu, Y.; Pham, H.; Dong, X.; Luong, T.; Hsieh, C.J.; et al. Symbolic Discovery of Optimization Algorithms. arXiv 2023, arXiv:2302.06675. [Google Scholar]
- Radford, A.; Kim, J.W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. Learning Transferable Visual Models from Natural Language Supervision. In Proceedings of the 38th International Conference on Machine Learning, Virtual, 18–24 July 2021; Volume 139, pp. 8748–8763. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef]
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer Using Shifted Windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, BC, Canada, 11–17 October 2021; pp. 10012–10022. [Google Scholar]
- Cubuk, E.D.; Zoph, B.; Shlens, J.; Le, Q. RandAugment: Practical Automated Data Augmentation with a Reduced Search Space. In Advances in Neural Information Processing Systems; Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., Lin, H., Eds.; Curran Associates, Inc.: Nice, France, 2020; Volume 33, pp. 18613–18624. [Google Scholar]
- Loshchilov, I.; Hutter, F. SGDR: Stochastic Gradient Descent with Warm Restarts. In Proceedings of the International Conference on Learning Representations, Toulon, France, 24–26 April 2017. [Google Scholar]
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar] [CrossRef]
- Zhang, Y.; Yang, Q. An overview of multi-task learning. Natl. Sci. Rev. 2017, 5, 30–43. [Google Scholar] [CrossRef]
- Deng, J.; Guo, J.; Xue, N.; Zafeiriou, S. ArcFace: Additive Angular Margin Loss for Deep Face Recognition. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 4685–4694. [Google Scholar] [CrossRef]
- Dhall, A.; Makarova, A.; Ganea, O.; Pavllo, D.; Greeff, M.; Krause, A. Hierarchical Image Classification using Entailment Cone Embeddings. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 3649–3658. [Google Scholar] [CrossRef]
- Chen, Y.; Zhu, G. Using machine learning to alleviate the allometric effect in otolith shape-based species discrimination: The role of a triplet loss function. ICES J. Mar. Sci. 2023, 80, 1277–1290. [Google Scholar] [CrossRef]
- Yang, Z.; Li, J.; Chen, T.; Pu, Y.; Feng, Z. Contrastive learning-based image retrieval for automatic recognition of in situ marine plankton images. ICES J. Mar. Sci. 2022, 79, 2643–2655. [Google Scholar] [CrossRef]
- Tan, M.; Le, Q. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; Chaudhuri, K., Salakhutdinov, R., Eds.; Volume 97, pp. 6105–6114. [Google Scholar]
- Tan, M.; Le, Q. EfficientNetV2: Smaller Models and Faster Training. In Proceedings of the 38th International Conference on Machine Learning, Virtual, 18–24 July 2021; Meila, M., Zhang, T., Eds.; Volume 139, pp. 10096–10106. [Google Scholar]
- Xu, J.; Pan, Y.; Pan, X.; Hoi, S.; Yi, Z.; Xu, Z. RegNet: Self-Regulated Network for Image Classification. IEEE Trans. Neural Netw. Learn. Syst. 2022, 34, 1–6. [Google Scholar] [CrossRef]
- Liu, Z.; Mao, H.; Wu, C.Y.; Feichtenhofer, C.; Darrell, T.; Xie, S. A ConvNet for the 2020s. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 11966–11976. [Google Scholar] [CrossRef]
- Bommasani, R.; Hudson, D.A.; Adeli, E.; Altman, R.; Arora, S.; von Arx, S.; Bernstein, M.S.; Bohg, J.; Bosselut, A.; Brunskill, E.; et al. On the Opportunities and Risks of Foundation Models. arXiv 2021, arXiv:2108.07258. [Google Scholar]
- Tanaka, M.; Hasegawa, T. Explainable Few-Shot fish classification method using CLIP. In Proceedings of the 85th National Convention of IPSJ, Tokyo, Japan, 2–4 March 2023; Volume 2023. (In Japanese). [Google Scholar]
- van der Maaten, L.; Hinton, G. Visualizing Data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
Dataset | Provider | # of Train | # of Test | Species | Used Section |
---|---|---|---|---|---|
KPM | KPM | 176,819 | 19,474 | 2826 | Section 3.1 |
Webfish | Zukan.com | 29,134 | 10,335 | 1212 | Section 3.2.1 |
Webfish-small | Zukan.com | - | 1171 | 129 | Section 3.3 |
Fish-Pak | Shah et al. [20] | 30 | - | 6 | Section 3.2.2
LargeFish | Ulucan et al. [21] | 90 | - | 9 | Section 3.2.2
Name | Source | CAWR | MTL | KPM | Webfish
---|---|---|---|---|---
- | None | - | - | - | 72.8
- | ImageNet | - | - | - | 72.6
(1) | None | | | 69.0 | 67.5
(2) | None | ✔ | | 72.4 | 67.6
(3) | None | | ✔ | 77.6 | 67.2
(4) | None | ✔ | ✔ | 77.7 | 66.7
(5) | ImageNet | | | 80.0 | 76.2
(6) | ImageNet | ✔ | | 80.8 | 76.2
(7) | ImageNet | | ✔ | 84.1 | 76.0
(8) | ImageNet | ✔ | ✔ | 84.2 | 76.0
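The CAWR column refers to cosine annealing with warm restarts (SGDR, Loshchilov and Hutter). The schedule itself fits in a few lines; the hyperparameters below (eta_max, eta_min, T_0, T_mult) are illustrative, not the paper's settings.

```python
import math

# CAWR (cosine annealing with warm restarts): the learning rate decays along
# a cosine curve and is reset to eta_max at the start of each cycle, whose
# length grows by a factor of T_mult. Hyperparameters are illustrative.
def cawr_lr(epoch, eta_max=0.1, eta_min=0.0, T_0=10, T_mult=2):
    """Learning rate at `epoch` for restart periods T_0, T_0*T_mult, ..."""
    T_i, t = T_0, epoch
    while t >= T_i:          # locate the current restart cycle
        t -= T_i
        T_i *= T_mult
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t / T_i))

# With these defaults the lr restarts to eta_max at epochs 0, 10, 30, 70, ...
```

PyTorch offers this schedule as `torch.optim.lr_scheduler.CosineAnnealingWarmRestarts`, which is what a training script would normally use instead of hand-rolling it.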
Model (# of Params) | KPM: ImageNet | KPM: ImageNet + CAWR | Webfish: None | Webfish: ImageNet | Webfish: KPM (8)
---|---|---|---|---|---
ResNet50_V1 (25.6M) | 80.0 | 84.1 | 72.8 | 72.6 | 76.0 |
ResNet50_V2 (25.6M) | 82.0 | 86.3 | 77.3 | 77.2 | 79.3 |
EfficientNet_B0 (5.3M) | 83.4 | 85.5 | 76.4 | 76.0 | 79.1 |
EfficientNet_V2_S (21.5M) | 85.6 | 87.9 | 79.8 | 79.4 | 81.7 |
RegNet_Y_800MF (6.4M) | 83.2 | 86.4 | 77.5 | 77.1 | 80.1 |
Swin_T (28.3M) | 86.2 | 88.4 | 81.1 | 81.6 | 83.7 |
ConvNeXt_Tiny (28.6M) | 84.1 | 87.2 | 77.6 | 77.9 | 79.6 |
Source | CAWR | MTL | Loss | KPM | Webfish
---|---|---|---|---|---
None | - | - | - | - | 49.1
ImageNet | - | - | - | - | 81.8
ImageNet | | | CE | 84.8 | 81.6
ImageNet | ✔ | | CE | 86.2 | 82.7
ImageNet | | ✔ | CE | 87.8 | 82.8
ImageNet | ✔ | ✔ | CE | 88.6 | 83.8
None | | | AF | 87.5 | 85.1
None | ✔ | | AF | 88.1 | 85.3
None | | ✔ | AF | 88.3 | 85.4
None | ✔ | ✔ | AF | 88.3 | 85.4
ImageNet | | | AF | 87.4 | 84.9
ImageNet | ✔ | | AF | 87.9 | 85.8
ImageNet | | ✔ | AF | 88.4 | 84.7
ImageNet | ✔ | ✔ | AF | 88.7 | 85.9
Model | 2826 Species: Top 1 | 2826 Species: Top 5 | Masked 129 Species: Top 1 | Masked 129 Species: Top 5
---|---|---|---|---
TFI model | 52.3 | 77.3 | 67.9 | 88.2 |
TFI model (MTL) | 56.7 | 79.3 | 69.9 | 88.3 |
TFI model (AF) | 54.7 | 78.6 | 68.2 | 88.7 |
TFI model (AF+MTL) | 58.5 | 81.6 | 72.9 | 90.2 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Hasegawa, T.; Kondo, K.; Senou, H. Transferable Deep Learning Model for the Identification of Fish Species for Various Fishing Grounds. J. Mar. Sci. Eng. 2024, 12, 415. https://doi.org/10.3390/jmse12030415