Article

AquaVision: AI-Powered Marine Species Identification

Oceanography Malta Research Group, Department of Geosciences, University of Malta, MSD 2080 Msida, Malta
* Author to whom correspondence should be addressed.
Information 2024, 15(8), 437; https://doi.org/10.3390/info15080437
Submission received: 3 June 2024 / Revised: 25 July 2024 / Accepted: 26 July 2024 / Published: 27 July 2024
(This article belongs to the Special Issue Machine Learning and Artificial Intelligence with Applications)

Abstract

This study addresses the challenge of accurately identifying fish species by using machine learning and image classification techniques. The primary aim is to develop an innovative algorithm that can dynamically identify the invasive Mediterranean fish species that are most common within Maltese coastal waters, based on available images. In particular, these include Fistularia commersonii, Lobotes surinamensis, Pomadasys incisus, Siganus luridus, and Stephanolepis diaspros, which have been adopted as this study’s target species. Through the use of machine-learning models and transfer learning, the proposed solution seeks to enable precise, on-the-spot species recognition. The methodology involved collecting and organising images as well as training the models on consistent datasets to ensure comparable results. After several models were tested, ResNet18 was found to be the most accurate and reliable, with YOLO v8 following closely behind. While the performance of YOLO was reasonably good, it exhibited less consistency in its results. These results underline the potential of the developed algorithm to significantly aid marine biology research, including citizen science initiatives, and to promote environmental management efforts through accurate fish species identification.

1. Introduction

Biodiversity in diverse ecosystems is increasingly under threat due to rapid global changes. Notably, global warming and the introduction of invasive alien species (IAS) and non-indigenous species (NIS) often act in tandem, aggravating their impact on ecosystems [1]. The Mediterranean coastal regions are a prime example of these biodiversity changes. Since the opening of the Suez Canal in 1869 and its subsequent expansion, over 150 non-indigenous fish species from the Red Sea have entered the Mediterranean Sea [2]. The significant warming of Mediterranean surface waters, which have risen by over 1 °C since 1980, has further driven the northward and westward spread of these tropical non-native species, affecting both their distribution and abundance [3].
In recent years, the use of underwater image processing for species identification has gained significant attention, especially in addressing the challenges posed by newly introduced invasive alien species. Accurate species classification has become essential in various fields, including fisheries, marine biology, and aquaculture. Identifying fish species serves several purposes in fisheries management. Firstly, it aids in post-catch inspections, particularly in countries with regulations prohibiting the capture of protected species and enforcing quotas on fishing vessels. Additionally, accurate species identification is crucial for effectively monitoring fish harvests and establishing sustainable and profitable fisheries [4].
Alien species are organisms or taxa introduced to regions beyond their natural habitats, often through human activities. These species can survive, reproduce, and spread outside their original range, exceeding their natural propagation [5]. The term includes various synonyms such as non-native, non-indigenous, allochthonous, foreign, exotic, immigrant, imported, transported, or adventive, as classified within the Colautti and MacIsaac system [6]. In biodiversity conservation, invasive alien species are particularly concerning as they can negatively impact native ecosystems. However, it is important to recognise that not all alien species are harmful; some may benefit society and the economy, especially in sectors like agriculture and aquaculture [5,7].
Invasive species are those that have successfully established themselves in new habitats, are rapidly increasing in population, and pose a threat to native species diversity and abundance [7]. They disrupt the ecological balance and can endanger economic activities and human health. As the International Union for Conservation of Nature (IUCN) notes, their impact is comparable to habitat loss [7]. Biological invasions in marine environments are particularly troubling for conservation and economic interests, prompting awareness campaigns by national governments and international organisations [8]. The Mediterranean Sea has over 500 recorded alien species, leading to significant impacts such as sudden declines in native species abundance and local extinctions. While immediate extinctions may not always occur, the changes induced by invasive species reduce genetic diversity, disrupt ecological functions, and alter habitats, increasing the risk of further declines and ecosystem homogenisation [9].
The introduction of Siganus luridus and S. rivulatus in the Mediterranean occurred in 1956 and 1927, respectively. S. luridus has established significant populations in both eastern and western regions, while S. rivulatus has been found as far west as Corsica [10]. These species impact algae through grazing, leading to areas barren of macroalgae, notably affecting species like Cystoseira sp. Another invasive species, the blue-spotted cornetfish (Fistularia commersonii), originally from the Indian and Pacific Oceans, is one of the most successful invaders in the Mediterranean and European waters and was first recorded along the Mediterranean coasts of Israel in January 2000 [11]. Similarly, the Atlantic tripletail (Lobotes surinamensis), a cosmopolitan species, has been found in Maltese coastal waters. The bastard grunt (Pomadasys incisus), a subtropical species, has naturally entered the Mediterranean Sea through the Straits of Gibraltar, along with other Haemulidae family members, including the non-native P. stridens [6]. Many of the introduced species, such as the reticulated leatherjacket (Stephanolepis diaspros), are often discarded by local fishermen [12,13].
Shipping remains the primary introduction pathway for alien species, followed by secondary dispersal within the Mediterranean Sea. There is a clear increasing trend in the reporting of alien species, peaking in the last decade. Additionally, the warming trend in Mediterranean waters is facilitating the spread of thermophilic alien species from the Eastern to the Central Mediterranean and the expansion of tropical and subtropical Eastern Atlantic species towards the east [5].
The study of marine species recognition is a crucial component of efforts to protect the ocean environment. Despite its significance, this area remains under-represented within the field of computer vision. Nevertheless, advancements in deep learning have stimulated a growing interest in this subject [14]. In recent studies, the precise automated detection and identification of individual fish has been achieved using machine-learning techniques, including support vector machines, nearest neighbour classifiers, discriminant analysis classifiers, and deep learning. Recent competitions and comparisons indicate that deep-learning methods, specifically CNNs, which integrate automatic image description and classification, consistently demonstrate superior performance [15]. Another study, conducted by Catalán et al. [16], investigated advancements in fish identification and classification within underwater imagery using deep learning. They underscored key issues such as the influence of diverse backgrounds, accuracy in labelling small fish, the need for a large training image dataset, and selection methodologies. The researchers introduced an annotated dataset comprising 18,400 records of Mediterranean fish representing 20 species within 1600 images featuring diverse backgrounds. Comparing two leading object-detection models, YOLOv5m and Faster R-CNN, they found that YOLOv5m consistently outperformed Faster R-CNN, achieving mAP@0.5 values exceeding 0.8 across various scenarios.
The study by Gauci et al. [17] explored the use of citizen science initiatives for classifying jellyfish species in Maltese waters using CNNs. As citizen science campaigns gain popularity, data collection in marine environments has increased. However, the validation of submitted reports, particularly for the taxonomic identification of jellyfish, remains challenging due to the high volume of reports and the limited number of trained staff. The photos for their dataset were sourced from the “Spot the Jellyfish” campaign and used to train region-based CNNs to classify the five most commonly reported jellyfish species in Maltese waters: Pelagia noctiluca, Cotylorhiza tuberculata, Carybdea marsupialis, Velella velella, and salps. The reliability of their models was evaluated using precision, recall, f1 score, and κ score metrics, with confusion matrices used to assess classification accuracy for each class. They also investigated the advantages of data augmentation and transfer-learning techniques. The results were very promising, suggesting the potential for embedding automated classification methods, possibly in smartphone apps, to reduce or eliminate the need for human validation of the submitted reports. The study found that models using the GoogLeNet and ResNet feature extraction networks performed the best, with an increase in accuracy observed with more anchors.
The main aim of this research study is to create an algorithm that will identify invasive alien marine species in the Mediterranean region. This algorithm will be integrated within a website where individuals can submit any invasive species caught as by-catch and receive an accurate identification, alongside a species description.
The findings will be saved to a database and used to continuously retrain the model on new morphological variations of the species, thus enhancing its learning and accuracy over time. Initially, the project will begin as a local initiative within the Maltese Islands, engaging the local community. Eventually, it will expand to cover the entire Mediterranean basin, allowing any country in the region to contribute its invasive species data.
Technological advancements nonetheless persist that can accurately identify fish species despite various factors, such as limited datasets, segmentation errors, image distortion, and object overlap, all of which have hindered the effectiveness of existing methods [18]. Moreover, the need to distinguish between species with similar characteristics, which is exacerbated by variations in body colouration caused by factors like light absorption at different depths, presents additional obstacles [19]. In addition, the degradation of image quality in underwater settings is consistently identified as a significant factor contributing to inaccuracies and misclassifications in image-based techniques. Addressing image quality concerns is therefore pivotal to improving the accuracy and reliability of underwater image analysis methods [11]. Manual classification methods were previously used for fish species identification, but these are labour-intensive and prone to human error, necessitating efficient automatic methods [19]. The application of CNNs has shown promise in automating this process, achieving high accuracy scores and reducing reliance on manual intervention [20].
Overall, this project aims to engage citizen scientists and raise awareness about the recurring issue of invasive marine species. It will create a comprehensive database documenting species findings, enabling the tracking of species spread and facilitating future mitigation efforts. Additionally, it will provide countries with early warnings to prepare for potential extreme invasions, thereby safeguarding local marine flora and fauna.

2. Materials and Methods

2.1. Framework

The framework adopted in this study is presented in Figure 1. This infrastructure was used to train both the image classifiers and the object detector; the only difference was that, for the object detector, the pre-processing stage also included image annotation and the creation of bounding boxes.

2.2. Generation of Image Dataset

A comprehensive dataset was created by sourcing images from various platforms, including Google, FishBase, iNaturalist, contributions from divers, and submissions from the ‘Spot the Alien’ citizen science campaign. These images (Figure 2) varied in resolution, viewing angle, and colouration, and depicted both younger and more mature specimens. This diverse range of images ensured thorough training of the algorithm, enabling it to effectively learn and adapt to the nuances of fish species identification.
Table 1 shows the distribution of images across the different classes, outlining the composition of the final dataset, which comprises five fish species and a total of 1337 images. These IAS were selected for two reasons: firstly, they are prevalent within Maltese waters, and secondly, an abundance of images is available for them, which is key to effectively training the models. The dataset was partitioned into train, validation, and test sets with proportions of 70%, 20%, and 10%, respectively, to ensure consistency across model training. All models underwent training on this standardised dataset to facilitate meaningful comparisons. It is worth noting, however, that around 15% of the test images could not be used by YOLO because of file format restrictions.
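As a minimal sketch of how the 70%/20%/10% partition could be reproduced, assuming the images are stored in one hypothetical folder per species (the paths, folder names, and random seed below are illustrative, not the authors’ actual setup), a stratified split keeps the class proportions similar across the three subsets:

import glob
import os
import shutil

from sklearn.model_selection import train_test_split

DATA_DIR = "images"  # hypothetical layout: images/<species_name>/<photo files>
SPECIES = ["Fistularia_commersonii", "Lobotes_surinamensis", "Pomadasys_incisus",
           "Siganus_luridus", "Stephanolepis_diaspros"]

paths, labels = [], []
for species in SPECIES:
    for path in glob.glob(os.path.join(DATA_DIR, species, "*")):
        paths.append(path)
        labels.append(species)

# 70% training; the remaining 30% is split 2:1 into validation (20%) and test (10%).
train_paths, rest_paths, train_labels, rest_labels = train_test_split(
    paths, labels, test_size=0.30, stratify=labels, random_state=42)
val_paths, test_paths, val_labels, test_labels = train_test_split(
    rest_paths, rest_labels, test_size=1 / 3, stratify=rest_labels, random_state=42)

# Copy the files into dataset/train, dataset/val, and dataset/test sub-folders.
for subset, subset_paths, subset_labels in [("train", train_paths, train_labels),
                                            ("val", val_paths, val_labels),
                                            ("test", test_paths, test_labels)]:
    for path, label in zip(subset_paths, subset_labels):
        destination = os.path.join("dataset", subset, label)
        os.makedirs(destination, exist_ok=True)
        shutil.copy(path, destination)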
In the pre-processing stage, images were resized to prepare them for training across the different models. Unnecessary backgrounds were cropped from certain images, and bounding boxes were created for the YOLO model to ensure the focus remained on the relevant objects. Notably, YOLO v8 does not require specific pixel dimensions; however, for ResNet18, images were resized to a standardised size of 224 pixels, while for the TF.Keras Sequential model resizing was set to 180 pixels. Tailoring the image sizes to each model in this way facilitated consistent training and evaluation processes.
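The model-specific resizing can be illustrated as follows; this is a sketch that assumes the hypothetical dataset/train folder layout from the previous snippet rather than the authors’ exact pipeline, using TensorFlow for the 180-pixel TF.Keras input and torchvision for the 224-pixel ResNet18 input:

import tensorflow as tf
from torchvision import datasets, transforms

# TF.Keras Sequential model: load images resized to 180 x 180 pixels.
keras_train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/train", image_size=(180, 180), batch_size=32)

# ResNet18 (PyTorch/torchvision): load images resized to 224 x 224 pixels.
resnet_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
resnet_train_ds = datasets.ImageFolder("dataset/train", transform=resnet_transform)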

2.3. Development of the Classification Models

Due to the limited availability of high-resolution images, the existing dataset was utilised until further testing with a larger high-resolution photo dataset could be undertaken. Nonetheless, it is essential to ensure that the training of the algorithm encompasses a wide array of resolutions and augmented images to enhance its robustness [6]. For the YOLO model, Roboflow was used to annotate all images (Figure 3) and to create bounding boxes around the objects of interest. This preparation of the dataset facilitates effective training of the model for the accurate detection and classification of objects within the images.
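For context, a label file in the YOLO format (a format Roboflow can export) stores one line per bounding box: a class index followed by the normalised centre coordinates and box dimensions. The snippet below writes a purely hypothetical label for a single Fistularia commersonii, assuming it is class 0 in the dataset:

# One line per bounding box: <class_id> <x_center> <y_center> <width> <height>,
# with all coordinates normalised to the range [0, 1] relative to the image size.
label_line = "0 0.512 0.430 0.780 0.115"  # hypothetical box for class 0

with open("fistularia_001.txt", "w") as label_file:
    label_file.write(label_line + "\n")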
In this study, three models, chosen on the basis of research reported in multiple journal articles and other sources, were utilised to differentiate between the five common invasive fish species. Amongst them, two are image classifiers (ResNet and TF.Keras), whilst the third, YOLO, serves as an object detector. Despite the widespread use of all of these models in species identification studies, image classifiers tend to be the preferred choice for this task.
The final models used were built upon the open versions freely available on the internet, which were already optimised and pre-trained on similar pattern recognition tasks via transfer learning. Transfer learning in machine learning involves using a model developed for one task as the foundation for a new model tailored to a different but related task. This technique makes use of the knowledge gained from solving one problem and applies it to another, facilitating more efficient learning, as demonstrated in this study. In computer vision, transfer learning is particularly prevalent, typically involving the use of a pre-trained model from an extensive database as a starting point for a specific task [21]. Such an approach involves the use of a pre-trained CNN that can extract features from thousands of natural images coming from various categories. By utilising a model that has already refined generic features from a similar task, transfer learning significantly reduces the time and computational resources needed in comparison to building and training a new model from the ground up [18].
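A minimal transfer-learning sketch in the spirit described above, assuming the torchvision ResNet18 pre-trained on ImageNet; whether the backbone was frozen, and which optimiser and learning rate were used, are not stated here, so the choices below are illustrative only:

import torch
import torch.nn as nn
from torchvision import models

# Start from ResNet18 weights pre-trained on ImageNet (recent torchvision versions).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Optionally freeze the pre-trained backbone so that only the new head is trained.
for parameter in model.parameters():
    parameter.requires_grad = False

# Replace the final fully connected layer with a head for the five target species.
model.fc = nn.Linear(model.fc.in_features, 5)

criterion = nn.CrossEntropyLoss()
optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)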
Subsequently, the models were trained and optimised to extract the features from the fish images. Feature extraction is the process of transforming raw data into interpretable representations modified for a specific classification task. For instance, an image consists of millions of pixels, each containing colour information. Feature extraction reduces the high dimensionality of these images by calculating abstract features, which are quantified representations that preserve relevant information for the classification task (such as shape, texture, or colour) while discarding redundant details [22]. This process enhances the efficiency and accuracy of the classification by focusing on the most pertinent aspects of the data.
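To make the feature-extraction step concrete, the sketch below (an illustration, not the authors’ exact pipeline) removes the classification head from a pre-trained ResNet18 so that each 224 × 224 image is reduced to a 512-dimensional feature vector:

import torch
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
# Dropping the final fully connected layer leaves a generic feature extractor.
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1])
feature_extractor.eval()

with torch.no_grad():
    dummy_image = torch.randn(1, 3, 224, 224)   # one RGB image, 224 x 224 pixels
    features = feature_extractor(dummy_image)   # output shape: (1, 512, 1, 1)
    feature_vector = features.flatten(1)        # 512-dimensional vector per image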
The models were trained for different numbers of epochs, ensuring sufficient training and reasonable accuracies while avoiding overfitting. Confusion matrices, graphs, and accuracy scores, including metrics such as precision, recall, and f1 score, were produced to evaluate the training and learning outcomes. After this thorough evaluation, the best model was chosen on the basis of these scores and was then fully optimised and developed for the final implementation.
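As an example of how one of the models can be trained for a chosen number of epochs, the Ultralytics Python API for YOLOv8 takes a dataset description file and an epoch count; the weights file, dataset file name, and hyperparameters below are placeholders rather than the settings used in this study:

from ultralytics import YOLO

model = YOLO("yolov8n.pt")          # pre-trained YOLOv8 nano weights (assumed choice)
model.train(data="fish.yaml",       # hypothetical dataset description file
            epochs=50, imgsz=640)   # illustrative epoch count and image size
metrics = model.val()               # evaluate on the validation split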
In many classification scenarios, the data available are often insufficient to train classifiers that are both accurate and robust. To tackle this challenge, a widely adopted strategy is data augmentation (DA) [23]. This approach involves generating synthetic instances to augment the size of the training dataset, aiding in the prevention of overfitting. In this research study, with only 1337 images available, overfitting was mitigated through augmentation, highlighting the effectiveness of this method in enhancing model robustness despite limited data availability.
In the implementation of the ResNet18 model, data augmentation techniques such as random resizing and random horizontal flipping were applied, alongside data normalisation, to both the training and validation datasets. The TF.Keras Sequential model likewise utilised data augmentation methods, ensuring that its datasets were enhanced for improved model performance, whilst YOLO v8 incorporated automatic data augmentation techniques. This step ensured that all models benefitted from augmented datasets during training, thereby improving their ability to learn diverse features and preventing overfitting.
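A hedged sketch of the ResNet18 augmentation pipeline described above (random resizing, random horizontal flipping, and normalisation); the ImageNet normalisation statistics are an assumption, as the exact values are not reported:

from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.RandomResizedCrop(224),                  # random resize and crop
    transforms.RandomHorizontalFlip(),                  # random horizontal flip
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],    # ImageNet statistics (assumed)
                         std=[0.229, 0.224, 0.225]),
])

val_transforms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])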

3. Results

3.1. Confusion Matrices

The confusion matrices obtained after testing all three classification models with the same set of images are shown in Table 2. The columns represent the actual species depicted in the test images, while the rows indicate the classifications assigned by the respective model. Confusion matrices were employed as they help discern whether a classifier is exhibiting random behaviour or encountering difficulty in distinguishing between certain classes [24].

3.2. Error Metrics

The accuracy, precision, recall, and f1 score are key evaluation metrics that assess the performance of machine-learning models. These were calculated using Equations (1)–(4) below. A true positive (TP) is an instance in which the model correctly predicts the positive class (fish species). A false positive (FP) is an instance in which the model incorrectly predicts the positive class when it is actually negative. A true negative (TN) is an instance in which the model correctly identifies the negative class. A false negative (FN) is an instance in which the model fails to identify the positive class, incorrectly predicting the negative class.
The precision (Equation (1)) measures the proportion of true positive predictions among all positive predictions made by the model. The recall (Equation (2)) calculates the proportion of true positive predictions among all actual positive instances. The f1 score (Equation (3)) balances precision and recall by calculating their harmonic mean and is useful for finding a balance between high precision and high recall, as it accounts for extreme values of either metric. Lastly, the accuracy (Equation (4)) is the proportion of correct predictions made by the model across the entire dataset.
Precision = TP / (TP + FP)        (1)
Recall = TP / (TP + FN)        (2)
f1 score = 2 × (Precision × Recall) / (Precision + Recall)        (3)
Accuracy = (TP + TN) / (TP + TN + FP + FN)        (4)
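The per-class metrics defined in Equations (1)–(4) can be computed directly from a confusion matrix. The sketch below assumes that rows correspond to actual classes and columns to predicted classes, and uses made-up counts purely for illustration:

import numpy as np

def per_class_metrics(confusion):
    """Precision, recall, and f1 per class, plus overall accuracy, from a square
    confusion matrix (rows = actual classes, columns = predicted classes)."""
    results = {}
    for k in range(confusion.shape[0]):
        tp = confusion[k, k]
        fn = confusion[k, :].sum() - tp   # actual class k predicted as something else
        fp = confusion[:, k].sum() - tp   # other classes predicted as class k
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        results[k] = {"precision": precision, "recall": recall, "f1": f1}
    accuracy = np.trace(confusion) / confusion.sum()
    return results, accuracy

# Illustrative three-class example (the counts are made up):
example = np.array([[24, 2, 1],
                    [3, 20, 2],
                    [1, 1, 18]])
print(per_class_metrics(example))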

3.3. Model Performance Metrics

The calculated metrics for all five species and each model are presented in Table 3, which provides a detailed comparison of how each model performs in identifying each species. In addition, Figure 4 and Figure 5 illustrate the confidence scores obtained by YOLO and TF.Keras on unseen images.

3.4. Discussion

It is worth noting that there is a research gap concerning four of the five fish species targeted within this study (i.e., Fistularia commersonii, Lobotes surinamensis, Pomadasys incisus, and Stephanolepis diaspros), given that these species have not been included in any previous image classification studies. This highlights a clear opportunity for further research into the application of image classification techniques to these invasive alien species.
A recent study presents automated identification protocols for two invasive rabbitfish species in the Mediterranean basin, one of which is Siganus luridus. Fleuré et al. compiled a dataset comprising 31,285 images, featuring Siganus spp. and six other native Mediterranean fish species, extracted from 40 underwater videos filmed across three reef habitats. Using a deep-learning algorithm based on ResNet50, the researchers trained the model to identify Siganus spp. within an image dataset containing these eight species [1].
The results demonstrated that the model achieved a recall of 0.92 for the Siganus genus, which increased to 0.98 after confidence-based post-processing; in contrast, the models used in our study achieved lower recall values for this species (Table 3): 0.91 (ResNet18), 0.74 (YOLO), and 0.46 (TF.Keras). In terms of precision, the model of Fleuré et al. obtained a value of 0.61, whereas our implemented models attained precision values of 0.81 (ResNet18) and 0.80 (YOLO), whilst the TF.Keras model had a considerably lower value of 0.52.
The accuracy values (Table 3) for the ResNet and YOLO (Figure 4) models were notably high, reaching 0.91 and 0.84, respectively, indicating that the resultant automated fish identification from the use of these two models is generally robust and consistent. In contrast, the TF.Keras sequential model (Figure 5) performed the worst, achieving an accuracy of only 0.61.
In terms of precision, the species classified most correctly were Lobotes surinamensis (with two of the three models reporting a precision higher than 0.93) and Fistularia commersonii (with two of the three models reporting a precision higher than 0.86, including an impressive 0.98 from the ResNet18 model).
Interestingly, the ResNet18 model had the highest precision value of 0.95 for Pomadasys incisus, while the other models achieved precision values of 0.60 and 0.39, making Pomadasys incisus the least identified species for those models. Both the YOLO and ResNet models registered precision values above 0.80 for Stephanolepis diaspros, with YOLO achieving an impressive 0.94 precision for S. diaspros.
Similarly, high recall and f1 score metrics were recorded for two of the three classification models, further substantiating their robust performance in correctly identifying fish species from the images provided. The f1 scores obtained in this study ranged from 0.47 to 0.95, with the lowest values for all species coming from the TF.Keras model, while the other two models had scores ranging from 0.69 to 0.95. Notably, the lowest f1 score for the ResNet model was 0.86 for S. luridus, followed by 0.87 for S. diaspros, which is still relatively high compared with the values from the other models.
Upon reviewing the outcomes of this study, it is crucial to acknowledge its limitations. As discussed by Villon et al. [25], there are multiple drawbacks when it comes to underwater image classification. For instance, one drawback of the deep-learning algorithm classifier is that its ability to identify a fish species is limited to images featuring a single individual at the centre. Consequently, raw underwater videos or images must be cropped to isolate fish individuals from noise (e.g., surrounding habitat features or other fish species). Moreover, underwater images might be affected by noise arising from poor lighting conditions or the image compression process. The training and testing sets used in this study were not affected by such noise to an extent that would hinder the detection process. To maintain the sharpness of the images, it was decided not to incorporate any filtering in the processing pipeline. Future work could explore the impact of impulse noise and potential filtering techniques to further enhance classification performance.
Another constraint involves the classifiers’ reliance on an adequate number of training photos for each of the targeted species. This necessitates the acquisition of thousands of images per species, without which recall and precision values are often reduced.

4. Conclusions

This study has identified the ResNet18 model as the most effective tool for the automated identification of fish species from images, achieving the highest accuracy score of 0.91. The performance of this model in classifying various fish species, such as Lobotes surinamensis and Fistularia commersonii, was particularly robust, with precision values reaching as high as 0.98. Given these promising results, it is planned to further develop the ResNet18 model and to focus on improving the overall database to ensure even higher accuracy and reliability values.
Currently, a website is being developed to make the ResNet18 model available online for use by any citizen scientist. This platform will feature an interactive map and will allow users to submit new sightings of marine species. Eventually, the long-term goal is to expand this database to include multiple marine invasive species, covering various taxa such as crabs, sea hares, and algae. This initiative will not only help monitor marine biodiversity changes through the monitoring of invasive alien species populations but also raise awareness of such species’ introductions through citizen science.
The website will enable citizen scientists to submit their findings, where the algorithm will identify the species and provide a brief description alongside the submitted photo. Each submission will record the location, and, after a review, the image will be added to the database to continually train and improve the algorithm.
An interactive map will eventually display these sightings, initially focusing on the Maltese Islands, so as to engage local citizens through the ‘Spot the Alien Fish’ citizen science campaign. By starting with Malta, we aim to build a strong foundation of local participation and data collection, which will be instrumental, at a second stage, in scaling up the project to cover the entire Mediterranean region.
Through this approach, we aspire to create a comprehensive and dynamic tool that not only aids in species identification but also contributes to the broader effort of marine conservation and invasive species management. The integration of citizen science will play a vital role in this process, empowering individuals to contribute to scientific research.

Author Contributions

Conceptualization, A.G.; formal analysis, B.M.S.; investigation, B.M.S.; writing—original draft, B.M.S.; writing—review & editing, A.G. and A.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partly funded by the University of Malta SEA-EU Blue Economy Student Research Grant.

Institutional Review Board Statement

This research was approved by the Faculty of Science Research Ethics Committee of the University of Malta (SCI-2024-00084).

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fleuré, V.; Magneville, C.; Mouillot, D.; Sébastien, V. Automated identification of invasive rabbitfishes in underwater images from the Mediterranean Sea. Aquat. Conserv. 2024, 34, e4073. [Google Scholar] [CrossRef]
  2. Azzurro, E.; Smeraldo, S.; D’Amen, M. Spatio-temporal dynamics of exotic fish species in the Mediterranean Sea: Over a century of invasion reconstructed. Glob. Change Biol. 2022, 28, 6268–6279. [Google Scholar] [CrossRef] [PubMed]
  3. Shaltout, M.; Omstedt, A. Recent sea surface temperature trends and future scenarios for the Mediterranean Sea. Oceanologia 2014, 56, 411–443. [Google Scholar] [CrossRef]
  4. Ovalle, J.C.; Vilas, C.; Antelo, L.T. On the use of deep learning for fish species recognition and quantification on board fishing vessels. Mar. Policy 2022, 139, 105015. [Google Scholar] [CrossRef]
  5. Evans, J.; Barbara, J.; Schembri, P.J. Updated review of marine alien species and other “newcomers” recorded from the Maltese Islands (Central Mediterranean). Mediterr. Mar. Sci. 2015, 16, 225. [Google Scholar] [CrossRef]
  6. Li, J.; Gray, R.M.; Olshen, R.A. Multiresolution image classification by hierarchical modeling with two-dimensional hidden Markov models. IEEE Trans. Inf. Theory 2000, 46, 1826–1841. [Google Scholar]
  7. Riley, S. Preventing Transboundary Harm From Invasive Alien Species. Rev. Eur. Community Int. Environ. Law 2009, 18, 198–210. [Google Scholar] [CrossRef]
  8. Occhipinti-Ambrogi, A.; Galil, B. Marine alien species as an aspect of global change. Adv. Oceanogr. Limnol. 2010, 1, 199–218. [Google Scholar] [CrossRef]
  9. Galil, B.S. Loss or gain? Invasive aliens and biodiversity in the Mediterranean Sea. Mar. Pollut. Bull. 2007, 55, 314–322. [Google Scholar] [CrossRef] [PubMed]
  10. Galanidi, M.; Zenetos, A.; Bacher, S. Assessing the socio-economic impacts of priority marine invasive fishes in the Mediterranean with the newly proposed SEICAT methodology. Mediterr. Mar. Sci. 2018, 19, 107. [Google Scholar] [CrossRef]
  11. Azzurro, E.; Soto, S.; Garofalo, G.; Maynou, F. Fistularia commersonii in the Mediterranean Sea: Invasion history and distribution modelling based on presence-only records. Biol. Invasions 2012, 15, 977–990. [Google Scholar] [CrossRef]
  12. Deidun, A.; Vella, P.; Sciberras, A.; Sammut, R. New records of Lobotes surinamensis (Bloch, 1790) in Maltese coastal waters. Aquat. Invasions 2010, 5 (Suppl. 1), S113–S116. [Google Scholar] [CrossRef]
  13. Pešić, A.; Marković, O.; Joksimović, A.; Ćetković, I.; Jevremović, A. Invasive Marine Species in Montenegro Sea Waters. In The Handbook of Environmental Chemistry; Springer: Berlin/Heidelberg, Germany, 2020; pp. 547–572. [Google Scholar] [CrossRef]
  14. Xu, L.; Bennamoun, M.; An, S.; Sohel, F.; Boussaid, F. Deep Learning for Marine Species Recognition. In Smart Innovation, Systems and Technologies; Springer: Berlin/Heidelberg, Germany, 2019; pp. 129–145. [Google Scholar] [CrossRef]
  15. Villon, S.; Mouillot, D.; Chaumont, M.; Darling, E.S.; Subsol, G.; Claverie, T.; Villéger, S. A Deep learning method for accurate and fast identification of coral reef fishes in underwater images. Ecol. Inform. 2018, 48, 238–244. [Google Scholar] [CrossRef]
  16. Catalán, I.A.; Álvarez-Ellacuría, A.; Lisani, J.-L.; Sánchez, J.; Vizoso, G.; Heinrichs-Maquilón, A.E.; Hinz, H.; Alós, J.; Signarioli, M.; Aguzzi, J.; et al. Automatic detection and classification of coastal Mediterranean fish from underwater images: Good practices for robust training. Front. Mar. Sci. 2023, 10, 1151758. [Google Scholar] [CrossRef]
  17. Gauci, A.; Deidun, A.; Abela, J. Automating Jellyfish Species Recognition through Faster Region-Based Convolution Neural Networks. Appl. Sci. 2020, 10, 8257. [Google Scholar] [CrossRef]
  18. Rum, S.N.M.; Az, F. FishDeTec: A Fish Identification Application using Image Recognition Approach. Int. J. Adv. Comput. Sci. Appl. 2021, 12, 102–106. [Google Scholar] [CrossRef]
  19. Barbedo, J.G.A. A Review on the Use of Computer Vision and Artificial Intelligence for Fish Recognition, Monitoring, and Management. Fishes 2022, 7, 335. [Google Scholar] [CrossRef]
  20. Hassoon, M.I. Fish Species Identification Techniques: A Review. Al-Nahrain J. Sci. 2022, 25, 39–44. [Google Scholar] [CrossRef]
  21. Ma, Y.-X.; Zhang, P.; Tang, Y. Research on Fish Image Classification Based on Transfer Learning and Convolutional Neural Network Model. In Proceedings of the 2018 14th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), Huangshan, China, 28–30 July 2018. [Google Scholar] [CrossRef]
  22. Wäldchen, J.; Mäder, P. Machine learning for image based species identification. Methods Ecol. Evol. 2018, 9, 2216–2225. [Google Scholar] [CrossRef]
  23. Fawzi, A.; Samulowitz, H.; Turaga, D.; Frossard, P. Adaptive data augmentation for image classification. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3688–3692. [Google Scholar] [CrossRef]
  24. Marom, N.D.; Rokach, L.; Shmilovici, A. Using the confusion matrix for improving ensemble classifiers. In Proceedings of the 2010 IEEE 26th Convention of Electrical and Electronics Engineers in Israel, Eilat, Israel, 17–20 November 2010; pp. 555–559. [Google Scholar] [CrossRef]
  25. Villon, S.; Iovan, C.; Mangeas, M.; Claverie, T.; Mouillot, D.; Villéger, S.; Vigliola, L. Automatic underwater fish species classification with limited data using few-shot learning. Ecol. Inform. 2021, 63, 101320. [Google Scholar] [CrossRef]
Figure 1. The proposed framework for model training initiates with pre-processing and augmenting an image dataset. This is then used to train the model, whilst employing transfer-learning techniques. Ultimately, the trained model extracts the image features, and stores them in a database, generating an output description.
Figure 2. The five invasive fish species chosen for this research study; (a) Pomadasys incisus, (b) Stephanolepis diaspros, (c) Lobotes surinamensis, (d) Siganus luridus, (e) Fistularia commersonii.
Figure 3. Bounding boxes applied using Roboflow (https://roboflow.com) to identify two species: (a) Fistularia commersonii and (b) Lobotes surinamensis. Each bounding box highlights the individual species within the image for clearer recognition and analysis.
Figure 4. Results from the YOLO model, correctly identifying two of the species (a,b) with a 90% confidence score.
Figure 5. Unseen test images given to the TF.Keras model, where it correctly identified (a) Stephanolepis diaspros, but misidentified (b), which is a Fistularia commersonii.
Table 1. Number of images used for training, validation, and testing for each species.
Species | Number of Training Images | Number of Validation Images | Number of Testing Images
Fistularia commersonii | 277 | 79 | 41
Lobotes surinamensis | 203 | 58 | 30
Pomadasys incisus | 123 | 35 | 19
Siganus luridus | 177 | 50 | 26
Stephanolepis diaspros | 153 | 43 | 23
Table 2. Confusion matrices presenting the classification results for five selected fish species using three different models: TF.Keras, ResNet18, and YOLO v8. Each matrix illustrates the performance and accuracy of the respective model in identifying and distinguishing between the species.
Model | Species | Fistularia commersonii (actual) | Lobotes surinamensis (actual) | Pomadasys incisus (actual) | Siganus luridus (actual) | Stephanolepis diaspros (actual)
TF.Keras | Fistularia commersonii | 24 | 2 | 8 | 4 | 3
TF.Keras | Lobotes surinamensis | 4 | 20 | 3 | 1 | 2
TF.Keras | Pomadasys incisus | 3 | 2 | 11 | 1 | 2
TF.Keras | Siganus luridus | 4 | 2 | 3 | 12 | 5
TF.Keras | Stephanolepis diaspros | 3 | 0 | 3 | 5 | 12
ResNet18 | Fistularia commersonii | 40 | 0 | 0 | 1 | 0
ResNet18 | Lobotes surinamensis | 1 | 28 | 0 | 0 | 1
ResNet18 | Pomadasys incisus | 0 | 0 | 18 | 1 | 0
ResNet18 | Siganus luridus | 1 | 1 | 1 | 21 | 2
ResNet18 | Stephanolepis diaspros | 1 | 2 | 0 | 0 | 20
YOLO v8 * | Fistularia commersonii | 32 | 0 | 0 | 0 | 0
YOLO v8 * | Lobotes surinamensis | 2 | 17 | 0 | 4 | 1
YOLO v8 * | Pomadasys incisus | 1 | 0 | 9 | 1 | 0
YOLO v8 * | Siganus luridus | 1 | 0 | 6 | 20 | 0
YOLO v8 * | Stephanolepis diaspros | 1 | 1 | 0 | 0 | 15
* A select number of images in the test dataset could not be used due to file format restrictions.
Table 3. The precision, recall, f1 score, and accuracy metrics for the image classification of five fish species using three models. The metrics provide a detailed assessment of the ability of each model to accurately predict and distinguish between the species.
Model | Species | Precision | Recall | f1 Score | Accuracy (per model)
TF.Keras | Fistularia commersonii | 0.63 | 0.59 | 0.61 | 0.57
TF.Keras | Lobotes surinamensis | 0.77 | 0.67 | 0.71 |
TF.Keras | Pomadasys incisus | 0.39 | 0.58 | 0.47 |
TF.Keras | Siganus luridus | 0.52 | 0.46 | 0.49 |
TF.Keras | Stephanolepis diaspros | 0.50 | 0.52 | 0.51 |
ResNet18 | Fistularia commersonii | 0.98 | 0.93 | 0.95 | 0.91
ResNet18 | Lobotes surinamensis | 0.93 | 0.90 | 0.92 |
ResNet18 | Pomadasys incisus | 0.95 | 0.95 | 0.94 |
ResNet18 | Siganus luridus | 0.81 | 0.91 | 0.86 |
ResNet18 | Stephanolepis diaspros | 0.87 | 0.87 | 0.87 |
YOLO v8 | Fistularia commersonii | 0.86 | 1.00 | 0.93 | 0.84
YOLO v8 | Lobotes surinamensis | 0.94 | 0.71 | 0.81 |
YOLO v8 | Pomadasys incisus | 0.60 | 0.82 | 0.69 |
YOLO v8 | Siganus luridus | 0.80 | 0.74 | 0.77 |
YOLO v8 | Stephanolepis diaspros | 0.94 | 0.88 | 0.91 |
