
Information Extraction from Retinal Images with Agent-Based Technology

BISITE Digital Innovation Hub, University of Salamanca, Edificio Multiusos I+D+i, Calle Espejo 2, 37007 Salamanca, Spain
Instituto de Investigación Biomédica de Salamanca (IBSAL), Hospital of Salamanca, Hospital Virgen de la Vega, 10a Planta, Paseo de San Vicente, 58-182, 37007 Salamanca, Spain
Primary Care Research Unit, La Alamedilla Health Center, Sanidad de Castilla y León (Sacyl), Red de investigaciones preventivas y promocion de la salud (REDIAPP), Avenida Comuneros, 37003 Salamanca, Spain
Department of Electronics, Information and Communication, Faculty of Engineering, Osaka Institute of Technology, Osaka 535-8585, Japan
Pusat Komputeran dan Informatik, Universiti Malaysia Kelantan, Karung Berkunci 36, Pengkaan Chepa, Kota Bharu 16100, Kelantan, Malaysia
Authors to whom correspondence should be addressed.
Processes 2018, 6(12), 254;
Original submission received: 23 October 2018 / Revised: 21 November 2018 / Accepted: 5 December 2018 / Published: 6 December 2018
(This article belongs to the Special Issue Systems Biomedicine )


The study of retinal vessels can provide information on a wide range of illnesses in the human body. Numerous works have already focused on this new field of research and several medical software programs have been proposed to facilitate the close examination of retinal vessels. Some allow for the automatic extraction of information and can be combined with other clinical tools for effective diagnosis and further medical studies. This article proposes an Agent-based Virtual Organizations (VO) System which applies a novel methodology for taking measurements from fundus images and extracting information on the retinal vessel caliber. A case study was conducted to evaluate the performance of the developed system, and the fundus images of different patients were used to extract information. Its performance was compared with that of similar tools.

1. Introduction

Fundus examination is a noninvasive technique which consists of the assessment of blood vessels in the eye from an image. The importance of these images lies in the information they contain, as different medical studies have related the caliber of retinal vessels with different pathologies. Fundus image analysis is challenging for several reasons, such as: (i) the intertwining of numerous blood vessels; (ii) image tone, which is affected by the amount of light present when the photo is taken and may result in variability between two images of the same eye; (iii) the difficulty in differentiating arteries from veins, even when they are evaluated by an expert; and (iv) the lack of validated procedures when measuring retinal blood vessels. However, there are also some advantages to consider; thanks to morphological relationships, it is possible to identify and analyze different components in the images. To be able to identify those relationships, it is first necessary to extract the information through the use of systems that measure different vessel factors. The present work proposes a supervised automatic software system as a support tool in the research on the relationship between diseases and the blood vessels of the eye. The software architecture is based on a Multi Agent System (MAS) running on the Platform for Automatic coNstruction of orGanizations of intElligent Agents (PANGEA) [1]. The case study, in which different fundus cameras were used to take 50 retinal images of several patients, has demonstrated that the system works correctly.
Although several previous studies have sought to develop tools or techniques to determine the thickness of blood vessels, such as [2,3,4,5], human intervention is required in all of them. There are also different tools that deal with the analysis of retinal vessels; most of them, such as [6] or [7], focus on specific eye pathologies. In other research, the amount of extracted information is insufficient, as in [8,9,10,11,12], or the extracted measurements depend on the user who analyzes the images [5].
In addition, the tools used in previous works are not open source; they lack flexibility and therefore cannot be expanded or integrated with other types of medical applications. As a result, it is necessary to design a new tool, one which will not only be capable of following existing methodologies but will also allow medical researchers to extract new parameters and propose new methods.
This study presents a new methodology that integrates, modifies and adapts existing image analysis techniques. All of the algorithms are described so that they can be freely implemented and adapted to any other tool. In addition, thanks to its design based on Virtual Organizations (VO) of agents, the modules of the presented tool can be modified or extended for the inclusion of new techniques.
Artificial Intelligence (AI) has been used in the development of the present software tool in order to make it capable of performing the analysis even when the images are very different from one another due to the large number of influencing factors. Compared to traditional techniques, AI provides greater flexibility when analyzing images. The survey presented in [13] demonstrates the increased precision achieved thanks to the evolution of AI techniques.
In this work, an MAS-based VO was designed, where agents with specific knowledge about every analysis stage collaborate with each other [14]. We present a novel methodology focused on retinal analysis, where the agents do most of the work that is normally done by an expert user. The agents apply different image processing techniques and perform the detection and extraction of the different measurements from the blood vessels (associated with every parameter).
The information extracted by the platform from the fundus images has been validated by the staff of the Research Unit from Centro de Salud de La Alamedilla (Sacyl) [15], and a validation protocol was published in [16]. Therefore, the main contribution of this article is a validated methodology for semi-supervised and accurate extraction of different measurements.
The remainder of the article is structured as follows: the next section provides a comprehensive review of related state-of-the-art works and a review of VO of agents. Subsequently, the proposed architecture is presented; then, the agent platform and the algorithms and techniques that make up this methodology are all described one by one in great detail. Finally, the results obtained from the case study are outlined, conclusions are drawn from them and plans for valuable future work are provided.

2. Fundus Image Analysis

A fundus camera provides clear images of the retina. This noninvasive technique makes it possible to see blood vessels. Upon analyzing the image, a structure associated with the blood vessels can be extracted separately from the rest of the components so that their characteristics can be obtained. This vascular structure is composed of veins and arteries which must be identified when detected. Once this information is obtained, it can be associated with existing knowledge about certain pathologies [17]. More specifically, studies can relate the caliber of retinal blood vessels with arterial hypertension [18], metabolic syndrome [19], left ventricular hypertrophy [20], stroke [21] and coronary heart disease [22].
Evidently, this relationship is not a recent discovery. In fact, retinal images have been analyzed for decades, and were initially analyzed manually, as in [23,24].
As a result of the considerable advances in computer vision, there has been greater interest in creating a tool for automatic extraction of information from those images [25]. Tools such as [26] apply algorithms based on adaptive filtering techniques to highlight the difference between the background and the blood vessels structure.
Studies such as [27,28,29] propose the application of morphological methods, which are widely used to extract features from images whose shape is known beforehand. Specifically, these methods have been very successful when used to obtain vasculature segmentation applied to the detection of micro-aneurysms. An image preprocessing algorithm which can automatically detect exudates in retinal images is proposed in [6]. However, the information extracted in these works is not as complete as the information that the present work intends to extract.
In addition to the previously mentioned tools, there are others, such as the Singapore I Vessel Assessment (SIVA) software tool [11], which allows for semi-supervised measurement of many retinal vessel parameters, such as mean vessel width, length, branching coefficient and angles, simple and curvature tortuosity, or fractals. It is the most complete tool on the market; however, unlike the tool proposed in this work, it does not provide information about the arteriovenous index (AVindex [30]) by area, which, according to the medical team involved in this work, is one of the most important parameters for retinal vessel measurement. Moreover, neither the article which originally proposed the SIVA tool nor other works which analyzed it [31] provide any type of experimental results. In addition, the methodology and the tool are not open source, so it was not possible to compare its performance with the platform proposed in this work.
Methods to detect diabetic retinopathy have also been proposed, for example in [7], where a four-block division of the analysis is presented, comprising ‘Image preprocessing’, ‘Shape estimation’, ‘Feature extraction’ and ‘Classification’. The authors in [8] present a set of techniques for measuring blood vessels, also focused on detecting diabetes, in addition to other pathologies such as hypertension or premature retinopathy.
The medical team involved in this development was also involved in the development of the tool called ‘AVIndex’, introduced in [10], which obtained information similar to that desired with the present project; however, it is incomplete, uses a smaller analyzed area, and is less accurate in extracting the vasculature.
A previous version of this tool was presented in [32], which was used to validate that the applied methodology was able to extract the correct parameters. In that case, the analysis was time-consuming for the supervising user, whose participation and decisions were required at every step. Based on that application design and additional measurement techniques, this new methodology was designed and developed.
A review of existing tools that apply artificial vision to the analysis of fundus images is presented in [33], which divides the analysis process into three different stages: preprocessing, location of the optic disc, and segmentation of the vasculature. The techniques applied at each stage are then analyzed. Contrast enhancement techniques are the most commonly applied in the preprocessing stage (7 of 19 articles). In the optic disc location stage, algorithms analyzing the position of vessels are the most applied (6 of 38 articles). Finally, in the vasculature segmentation stage, vessel tracking algorithms are the most applied (17 of 62 articles).
A more recent review is presented in [13], which, in addition to a new methodology, provides a comparison of 26 recent articles addressing the topic of blood vessel segmentation (it does not cover the information extraction phase). It shows that segmentation accuracy ranges between 87.73% and the 95.36% achieved by its own proposal. The results of our methodology place it among the top three of those evaluated.

3. Virtual Organizations

Software agents are able to adapt to the context of a problem at any given moment due to their specific characteristics. In the case of fundus image analysis, where images differ from one another even though their components follow standard criteria, such a decision-making capability can be very useful. To this end, each agent must specialize in a single functionality and provide all of its knowledge when required.
Since different stages and components must be detected when analyzing fundus images, the present study proposes an architecture based on an MAS in which different agents cooperate to reach the common goal of extracting the information hidden in the blood vessels. Each agent can apply the techniques of a given stage in a different way, or with different parameter values, than the other agents working on the same stage; the system then chooses the proposal that provides the best results at every stage and adapts the input parameters to every image.
In addition, agents are able to learn from the patients’ history, so they can refine those input parameters with every new image. In this way, results become more accurate as the patient database grows [34].
This structure also allows the VO to be integrated with other VOs specialized in processing the extracted information, in order to find relationships between this information and other diseases. This integration is the next phase of this project.

4. Automatic Image Analyzer to Assess Retinal Vessel Caliber (ALTAIR): A Virtual Organization Based Fundus Image Analyzer Platform

As mentioned, this paper uses the tool presented in [32] as its starting point and develops it further. The tool takes a series of steps to obtain the parameters from the fundus images that are associated with blood vessels. The clinical validity of the developed tool has been confirmed and its method of information extraction works; however, the image analysis results were not good enough and considerable intervention by a human expert was required in the process.
Once the parameters associated with the blood vessels (length, area and thickness) were identified, the analysis methodology was improved with automatic detection (although an unsupervised system would not be able to guarantee a 100% accuracy rate).
MAS technology has been leveraged for the creation of this platform because of its ability to dynamically interpret the information hidden in every image; agents provide different values (for the required parameters) so that the best result can be found for every image, at each step of the methodology. Throughout this section, two aspects of the proposed system are described. The first one is focused on a VO-based solution for agents. The second one is focused on the algorithms, specifically designed for the proposed methodology. The algorithms only use the techniques that have the highest performance in image analysis.

4.1. Virtual Organizations

When defining the agents and the system structure, it is necessary to consider the platform on which they are going to be deployed. A new version of PANGEA [1] was used to develop the platform. Its main advantage is that it integrates a wide range of functionalities, such as self-managed database access or the ability to easily define new agents and VO within the system. Thanks to PANGEA, developers do not have to worry about basic functionalities and may fully focus their attention on more specific functionalities, necessary for resolving the problem.
Furthermore, it is a multi-language programming platform capable of running in Cloud Computing (CC) environments, which makes it possible to publish an online tool that provides enough resources for parallel image processing. This will result in a bigger database with even more data, thus improving the accuracy and the knowledge of the system.
One of the designed VOs is responsible for extracting information from the image. It is composed of several agents which fulfill 14 different roles. Specifically, the agent with the Coordinator role is responsible for coordinating the flow of the analysis and for establishing rules. The agent with the Image role represents the current status of an image and is updated in every step of the flow. One of the agents performs four roles related to the techniques applied at every step of the analysis; it decides the best set of techniques to achieve the best result. These four roles are: Location, Segmentation, Skeleton, and Identification. The agent with the Measurement role extracts the parameters obtained in the analysis. The remaining roles are related to different AI and image analysis techniques, which are implemented by different agents to provide values for each of the input parameters of the algorithms, thus allowing them to select options such as using the mean, mode, maximum values or minimum values. Several agents may perform this task simultaneously; each one will intend to solve the problem using different values; however, only the best results, provided by any of the agents, are considered.
The remainder of this section provides a description of all the agent roles that are performed every time a new image is processed, as seen in Figure 1.
Coordinator: this role is implemented by a single agent responsible for keeping track of the entire analysis process—from the moment the image is received until the extracted information is presented. It regulates the analysis process and the communication between the system’s agents and it is in charge of obtaining the inputs for the other agents of the system.
Image: this role is implemented by a single agent that keeps track of the status of the image at every moment. The agents associated with each step are responsible for updating the status of the image by communicating it through the Coordinator Agent. Initially, an image is uploaded and, if the image agent validates that the image is correct, it notifies the Coordinator agent that the analysis process should start. Subsequently, the Coordinator agent notifies the Location agent to start its process. When a new algorithm requires preprocessed images, the Image agent is in charge of providing images from previous stages that have already been preprocessed by other algorithms.
Location: this role is used by agents whose purpose is to locate: (i) the macula; (ii) both retina edges and the optic disc. In these cases, the analysis is not as complex as in the following steps; consequently, a single agent is responsible for executing the task of locating the macula and another agent is responsible for locating the optic disk. The macula detection process is described in Algorithm 1 (Section 4.2.1), the retina edge detection process is described in Algorithm 2 (Section 4.2.2), and the optic disc detection process is described in Algorithm 3 (Section 4.2.2).
Segmentation: this role is implemented by four agents whose input values are the mean, mode, maximum values and minimum values, as mentioned previously. Their goal is to produce a structure that represents the segmentation of the vessels by dismissing the background, allowing the agents to process the vessels separately in subsequent steps. Every agent executes the step, applying the methodology with the required techniques and providing its results as output. A further description is presented in Section 4.2.3 and the blood vessel segmentation process is described in Algorithm 4.
Skeleton: this role is implemented by a single agent which is responsible for extracting the skeleton from the blood vessels structure obtained as output in the previous step. All the details are presented in Section 4.2.3 and the process performed to obtain the skeleton of the blood vessels is described in Algorithm 5.
Identification: this role is implemented by four agents whose inputs are also based on the mean, the mode, the maximum values and the minimum values. Their main goal is to identify every detected blood vessel by classifying them as either an artery or a vein. The only useful difference among the analyzed vessels is their color tone. However, identification is not an easy issue as the image illumination is not uniform and can vary according to the zone. To identify the vessels, the methodology (detailed in the next section) is applied in parallel by the agents, each using their own technique. The results are provided to the Coordinator Agent. All the details are presented in Section 4.2.4 and the blood vessel identification process is described in Algorithm 6.
Measurement: this role is implemented by the agent responsible for extracting the required parameters. These parameters are specified by the medical team. The thickness, length, and area of every vein and artery were identified as the data required for our analysis. The measurement process is fully detailed in Section 4.3.
Other roles representing techniques: there are other roles related to the technologies that can be applied at each stage of the analysis. Every agent receives a series of arguments which it uses as the input parameters of its algorithm. For example, some of these agent roles are NoiseRemove, Contrast, Binarizing, GrayScale, MorphologyFilter, VesselMorphology or Blur.
When evaluating the results provided by the agents at every step, it is important to bear in mind that there are several characteristics of retinal images that must be satisfied. As highlighted in the introduction, one of the few advantages of fundus analysis is the morphological relationship between some of the components. For example, we know that the number of arteries should be slightly higher than the number of veins, or that there should be at least three or four veins in the image. One of the agents with the Identification role will then show the most accurate results, which will be used to update the image status.
Therefore, the main advantage of the VO approach in the proposed software is that all agents perform image processing simultaneously (each agent applies its filters sequentially, so the process cannot be distributed in any other way). Thus, VOs allow better results to be obtained in a shorter period of time than a single sequential execution would.
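As a simple illustration of this best-result-wins pattern, the sketch below (hypothetical Python; names such as `segment_with` and `score` are made up for illustration and are not part of the actual PANGEA platform) runs four "agents" concurrently, each thresholding the same toy image with a value derived from a different statistic, and keeps the candidate that detects the most vessel pixels, which is the scoring criterion described above.

```python
# Hypothetical sketch of the "best result wins" coordination: several agents
# apply the same step with different input parameters (mean, mode, max, min)
# in parallel, and the coordinator keeps the candidate with the largest
# detected vascular area. All names here are illustrative.
from concurrent.futures import ThreadPoolExecutor

def segment_with(image, strategy):
    # Placeholder: a real agent would run the full segmentation pipeline
    # with a threshold derived from its statistic (mean/mode/max/min).
    threshold = {"mean": 0.50, "mode": 0.45, "max": 0.60, "min": 0.40}[strategy]
    vessels = [px for px in image if px >= threshold]
    return strategy, vessels

def score(result):
    # The platform scores candidates by detected area / vessel count.
    _, vessels = result
    return len(vessels)

def best_segmentation(image, strategies=("mean", "mode", "max", "min")):
    with ThreadPoolExecutor(max_workers=len(strategies)) as pool:
        results = list(pool.map(lambda s: segment_with(image, s), strategies))
    return max(results, key=score)

image = [0.2, 0.41, 0.48, 0.55, 0.62, 0.7]   # toy "pixel" intensities
strategy, vessels = best_segmentation(image)
```

Only the winning proposal is passed on; the losing agents' outputs are discarded, which mirrors how the Coordinator agent updates the image status with the best result at each stage.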

4.2. Information Extraction Methodology

Having described the architecture of the system, we now proceed to the methodology used to generate the results at each step. The set of techniques and algorithms used in the methodology is detailed below.
To gain a better understanding of this methodology, the process that an image goes through from the moment it enters the system has been divided into steps. Figure 2 shows a schema of those steps and the analysis techniques used to obtain the desired results at every step. The output of every step defines the status of the image, which is the input for the next step.
This section describes the steps involved in the visual analysis that leads to the extraction of information from the blood vessels.

4.2.1. Image Selection (Eye Side Detection)

Once the patient’s profile has been created, the system associates new images with that patient. The moment the user selects an image of a patient’s retina, the system starts the first stage of the analysis.
The first stage consists of identifying the eye (left or right); this task is performed by the Location Agent. It locates the macula for this purpose. In all the processed images, the optic disc is near the center of the image, so the macula must be located on either side, at a medium height. The macula can be detected as a darker area in the external part of the eye, so, when detected on the left side of the image, it corresponds to the right eye and vice versa. Algorithm 1 shows how the image provided by the software is processed, while Figure 3 shows the different parts of the image that Algorithm 1 uses, and the different parts of the eye.
Algorithm 1: Eye side detection
// w and h are the width and height of image I
Input: I, w, h
Output: S
// I_ij ∈ ℕ×ℕ×ℕ
I1 ← { I_ij : i ∈ [w × 0.2, w × 0.5], j ∈ [h × 0.35, h × 0.65] };
I2 ← { I_ij : i ∈ [w × 0.5, w × 0.8], j ∈ [h × 0.35, h × 0.65] };
// G1_ij ∈ ℕ, G2_ij ∈ ℕ
G1 ← rgb2gray(I1);
G2 ← rgb2gray(I2);
Ḡ1 ← (1 / (i × j)) Σ_{i,j} G1_ij;
Ḡ2 ← (1 / (i × j)) Σ_{i,j} G2_ij;
if Ḡ1 > Ḡ2 then
 | S ← 1;
else
 | S ← 2;
The example shown in Figure 3 would yield values of 93.5 for the left image (G1) and 114.5 for the right image (G2). Bearing in mind that the values vary between 0 and 255, where 0 is black and 255 is white, the darker region, where the macula is located, is on the left side (G1); therefore, the image corresponds to the right eye. The difference between the G1 and G2 values usually ranges from 5% to 20%, which is enough to determine the location of the macula.
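Algorithm 1 can be expressed compactly with NumPy. The following is an illustrative sketch, not the original implementation: the window fractions follow the pseudocode, and the grayscale conversion assumes the standard luminance weights, since the `rgb2gray` function used by the tool is not specified.

```python
# Minimal sketch of Algorithm 1 (eye side detection). Two windows at medium
# height are compared: left (20-50% of width) and right (50-80%); the darker
# window contains the macula. Luminance weights for the grayscale conversion
# are an assumption, as rgb2gray is not detailed in the paper.
import numpy as np

def eye_side(image):
    """image: H x W x 3 uint8 RGB fundus image. Returns 'right' or 'left'."""
    h, w = image.shape[:2]
    gray = image @ np.array([0.299, 0.587, 0.114])   # assumed rgb2gray
    left = gray[int(h * 0.35):int(h * 0.65), int(w * 0.2):int(w * 0.5)]
    right = gray[int(h * 0.35):int(h * 0.65), int(w * 0.5):int(w * 0.8)]
    # The macula is the darker region; macula on the left => right eye.
    return "right" if left.mean() < right.mean() else "left"
```

Because only two means are compared, this check is essentially free compared to the later segmentation stages, which is consistent with the design goal of maximum efficiency stated below.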
The proposed algorithm is simple because it was intended to have maximum efficiency in the shortest possible time. As demonstrated in Section 5, all the performed tests have correctly identified whether it was the right eye or the left eye. However, should it fail, the software tool always provides manual tools in the left side menu to correct the automatic analysis and change the eye being analyzed.

4.2.2. Bounds Detection

In the next step of the analysis, the edges of both the retina and the optic disc (or papilla) are detected. Information about the size of the retina is provided by detecting its edge, which is very useful when applying morphological techniques to analyze the image. As detecting the edge of the retina is computationally easier than detecting the edge of the optic disc, and the size of the optic disc is directly related to the size of the retina, retina edge detection is the first step taken when locating the optic disc. Algorithm 2 shows the methodology used to detect the edge of the retina.
The result can be seen in the example shown in Figure 4, where the diameter of the retina d in the image G is determined by discriminating the black border and any noise (a few individual or grouped pixels) that may be in it. It is therefore necessary for the background of the image to be uniform; most fundus cameras are designed to produce a uniform background.
Once the retina edge has been detected, additional information about the optic disc is available and it can now be located. More specifically, the diameter of the optic disc is known to be slightly less than one-sixth of the diameter of the retina, which is taken into consideration when locating it. The techniques used to locate the optic disc are described in Algorithm 3. Figure 4 shows both the relationship between the retina and the optic disc, and the result of the different filters applied to the image in this step.
Algorithm 2: Retina edge detection
// w, h and d are the width, height and diameter of image I
Input: I, w, h
Output: d
G ← rgb2gray(I);
// c is the background color, nc is the number of c-colored points
// d1 is the initial diameter point, d2 is the end diameter point
c ← G_00;
nc ← 0;
d ← 0;
d1 ← 0;
d2 ← 0;
for i ← 1 to w do
 | // [The loop body is rendered as an image in the original publication; it scans for non-background (≠ c) points to locate the first (d1) and last (d2) diameter points, from which the diameter d is obtained.]
Algorithm 3: Optic disc location
// crx is the x coord of the retina center, cry is the y coord of the retina center and d is the diameter of the retina
// cp is the (x, y) coords of the papilla
Input: I, crx, cry, d
Output: cp
// I_ij ∈ ℕ×ℕ×ℕ, G_ij ∈ ℕ
G ← rgb2gray(I);
S ← squareInscribed(G, d/2, crx, cry);
// Papilla size is always about 6 times smaller than the retina
P ← G_ij;
P ← sortToLower(P);
threshold ← P[π × (d/6)²];
B ← binaryFilter(S, threshold);
// Open filter to erode and dilate the image B
O ← openFilter(B);
// Fill gaps in the image O
C ← closeFilter(O);
blob ← getBiggestBlob(C);
rect ← getBoundingRect(blob);
cp ← getCenter(rect);
Locating the optic disc is necessary because the blood vessels enter the retina through it. Consequently, the analysis of the vessels begins at the edge of the optic disc.
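The chain of steps in Algorithm 3 can be sketched with NumPy and SciPy. This is an illustrative approximation under stated assumptions: `scipy.ndimage` stands in for the unspecified filter implementations, the threshold keeps roughly the expected papilla area of the brightest pixels (papilla diameter ≈ d/6), and the inscribed-square crop is omitted for brevity.

```python
# Sketch of Algorithm 3: threshold the brightest pixels (the papilla is the
# lightest region), clean the mask with morphological opening and closing,
# keep the biggest blob, and return the center of its bounding box.
import numpy as np
from scipy import ndimage

def locate_papilla(gray, d):
    """gray: 2-D intensity array; d: retina diameter in pixels.
    Returns the (row, col) center of the detected papilla, or None."""
    # Keep about the expected papilla area pi*(d/12)^2 of brightest pixels
    # (papilla diameter is roughly one sixth of the retina diameter d).
    area = int(np.pi * (d / 12) ** 2)
    threshold = np.sort(gray, axis=None)[::-1][area]
    mask = gray >= threshold
    # Opening removes small bright specks; closing fills gaps in the disc.
    mask = ndimage.binary_opening(mask, iterations=2)
    mask = ndimage.binary_closing(mask, iterations=2)
    labels, n = ndimage.label(mask)
    if n == 0:
        return None
    biggest = np.argmax(ndimage.sum(mask, labels, range(1, n + 1))) + 1
    rows, cols = np.where(labels == biggest)
    return ((rows.min() + rows.max()) // 2, (cols.min() + cols.max()) // 2)
```

Taking the biggest surviving blob makes the step robust to isolated bright exudates, which the opening alone may not fully remove.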

4.2.3. Segmentation

Once the edges of the retina and optic disc have been detected, it is necessary to analyze the area between them to segment the blood vessels from the background of the eye.
More specifically, the analysis is focused on the area between the edge of the optic disc and the concentric circle with a radius three times greater than that of the optic disc edge. Figure 5 shows the images obtained at every step of the detection methodology described in Algorithm 4, as provided by the agent with the best results; that is, the one that obtains the largest morphologically correct vascular area and, therefore, the greatest number of blood vessels.
Gaussian filters were used to segment the blood vessels from the background of the retina. This approach was chosen because of the morphological characteristics of the retina: the blood vessels (darker tonality) are narrow and elongated, and would otherwise be indistinguishable from the background (lighter tonality).
In the literature, Gaussian filtering is the most widely used segmentation technique. However, our proposal introduces novelties with respect to existing techniques. For example, when Gaussian filters are applied to the area of the optic disc, whose tonality is much lighter than the background, errors occur in the area of analysis adjacent to the optic disc. To avoid this, the optic disc is filled with the mean tonality of its closest outer contour, so that the Gaussian filter does not introduce error (P ← S̄).
Algorithm 4: Vessels detection
// cpx is the x coord of the papilla center, cpy is the y coord of the papilla center and rp is the radius of the papilla
// D is the image with the detected vessels in white and the other points in black
Input: I, cpx, cpy, rp
Output: D
// I_ij ∈ ℕ×ℕ×ℕ, Ir_ij ∈ ℕ
Ir ← rgb2gray(I, infraredFilter);
// P contains all the papilla points
P ← { Ir_ij : (i − cpx)² + (j − cpy)² ≤ rp² };
S ← { Ir_ij : i ∈ [cpx − rp × 1.5, cpx + rp × 1.5], j ∈ [cpy − rp × 1.5, cpy + rp × 1.5] };
S ← { S_ij : S ∩ P = ∅ };
// Change the papilla color to the closest background mean color to avoid noise when applying Gaussian filters
P ← S̄;
// The analysis area goes from the papilla limit to 3 times its radius
A ← { Ir_ij : i ∈ [cpx − rp × 4, cpx + rp × 4], j ∈ [cpy − rp × 4, cpy + rp × 4] };
// Ga_ij ∈ ℕ
Ga ← gaussianFilter(A);
Ir ← { Ir_ij : i < cpx − rp × 4 ∨ i > cpx + rp × 4; j < cpy − rp × 4 ∨ j > cpy + rp × 4 };
Ir ← { A_ij : i ∈ [cpx − rp × 4, cpx + rp × 4], j ∈ [cpy − rp × 4, cpy + rp × 4] };
// D_ij ∈ ℕ
D ← Ir;
// Grayscale: 0 black; 255 white
threshold ← Ir − Ir / 100;
D_ij ← 255 if D_ij ≥ threshold_ij;
D_ij ← 0 if D_ij < threshold_ij;
D_ij ← 0 if (i, j) ∈ P;
// Remove noise: blobs whose area is lower than the papilla diameter (all vessels must be bigger)
B ← getBlobs(D);
B ← { B_i : area(B_i) < (rp × 2) };
D′ ← D;
D′_ij ← 0 if (i, j) ∈ B_k;
As the Gaussian filter is a technique that consumes a considerable amount of computational capacity, it is only applied in the area of analysis, which is not the whole retina but extends from the edge of the optic disc to the bounding box (A) of a concentric circle with three times the radius of the optic disc. This limitation was specified by the medical team.
Next, the Gaussian filters are applied and any possible noise is filtered out (small blobs that morphologically do not meet the characteristics of blood vessels); the result is then compared with the Ir image, in which the blood vessels appear more marked against the background than in the original grayscale image (G).
At this point, each agent applies a different threshold that yields a vascular area. The detected vascular area is analyzed morphologically (branching pattern and vessel tracking) to ensure that noise is eliminated and only blood vessels remain. The performance of an agent is measured by the area and number of blood vessels it has identified; the agent that identifies the largest number of blood vessels is classified as the best-performing agent. In the performed tests, the threshold that gave the best results was Ir − Ir/100, so it is the one described in Algorithm 4.
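The combination of smoothing, per-pixel thresholding (Ir − Ir/100) and blob filtering can be sketched as follows. This is a minimal illustration rather than the system's implementation: the function name, the filter width, the minimum blob area and the direction of the comparison (vessel pixels darker than the smoothed background) are assumptions made here.

```python
import numpy as np
from scipy import ndimage

def segment_vessels(gray, sigma=5, rel_drop=0.01, min_blob_area=50):
    """Per-pixel thresholding against a Gaussian-smoothed background.

    Pixels darker than (smoothed background * (1 - rel_drop)) are marked
    as candidate vessel pixels; blobs below min_blob_area are discarded.
    """
    background = ndimage.gaussian_filter(gray.astype(float), sigma)
    # threshold = background - background/100, as in Algorithm 4
    vessels = gray.astype(float) < background * (1.0 - rel_drop)
    # label connected blobs and keep only those large enough to be vessels
    labels, n = ndimage.label(vessels)
    areas = ndimage.sum(vessels, labels, index=range(1, n + 1))
    keep = {i + 1 for i, a in enumerate(areas) if a >= min_blob_area}
    cleaned = np.isin(labels, list(keep))
    return np.where(cleaned, 255, 0).astype(np.uint8)
```

With a synthetic dark band on a bright background, the band is detected while the uniform background is not; the same relative threshold adapts to local brightness, which is the point of subtracting a fraction of the smoothed image instead of using a single global cut-off.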
After the segmentation of the vascular structure, the vessels must be identified. However, in order to identify them correctly, an intermediate process is required to avoid the problem of one vessel crossing another. This intermediate process determines whether a point of the structure belongs to a given vessel. To do this, the methodology proposed in [35] is followed. As a result, the skeleton of the structure is obtained.
Thanks to the analysis and classification of each point of the skeleton, useful information can be extracted for the identification process. Thus, the advantage of our proposal over existing techniques is that it effectively resolves crossings and branchings. As shown in Figure 6, four kinds of points can be distinguished according to the number of neighbors:
normal (2 neighbors, Figure 6a),
branch (3 neighbors, Figure 6b),
final (1 neighbor, Figure 6c),
cross (4 or more neighbors, Figure 6d).
This way, every detected vessel is fragmented into a series of segments which are analyzed individually. Every segment is a set of all the points between the following types of pairs: final–final, branch–final, branch–branch, branch–cross, cross–cross and cross–final. Additionally, full vessels (both veins and arteries) are points between a final–final pair with one or more chained segments. Algorithm 5 describes the process followed to obtain all the segments.
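The neighbor-count classification of Figure 6 can be sketched as follows, assuming a 1-pixel-wide binary skeleton; the function name and array representation are illustrative choices, not the system's API.

```python
import numpy as np

def classify_skeleton_points(sk):
    """Classify skeleton points by their number of 8-connected neighbors.

    sk: 2D array with 1 on the skeleton and 0 elsewhere.
    Returns {(i, j): 'final' | 'normal' | 'branch' | 'cross'}.
    """
    # count 8-connected neighbors by summing the 8 shifted copies
    padded = np.pad(sk, 1)
    h, w = sk.shape
    neighbors = sum(
        padded[1 + di : 1 + di + h, 1 + dj : 1 + dj + w]
        for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)
    )
    types = {}
    for (i, j) in zip(*np.nonzero(sk)):
        n = neighbors[i, j]
        if n == 1:
            types[(i, j)] = "final"    # end of a vessel (Figure 6c)
        elif n == 2:
            types[(i, j)] = "normal"   # interior point (Figure 6a)
        elif n == 3:
            types[(i, j)] = "branch"   # bifurcation (Figure 6b)
        else:
            types[(i, j)] = "cross"    # crossing, 4+ neighbors (Figure 6d)
    return types
```

Splitting the skeleton at every non-normal point then yields exactly the segment end-point pairs listed above (final–final, branch–final, and so on).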
Algorithm 5: Skeleton and segment extraction
// cpx is the x coord of the papilla center, cpy is the y coord of the papilla center and rp is the radius of the papilla
// Ir is the infrared-grayscaled image; D is a binary image with the detected vessels in white and the other points in black
// Sk is a binary image with the vessels' skeleton and Sk′ is the image with the vessels' skeleton represented by the Ir image background points
// TS is a subset with the skeleton of all the segments
Input: Ir, D, cpx, cpy, rp
Output: Sk, Sk′, TS
// Ir_ij ∈ ℕ, D_ij ∈ {0, 255}
// A contains all the points of the analyzed area
A ← { (i, j) / (i − cpx)² + (j − cpy)² > rp², (i − cpx)² + (j − cpy)² ≤ (rp·4)² };
// Get the skeleton image (Sk) and a new one from it (Sk′) with the closest (distance rp/2) background mean instead of white
// Sk_ij ∈ {0, 255}, Sk′_ij ∈ ℕ
Sk ← getSkeleton(D);
// P_ij contains the set of background points of the retina at a distance of rp/2
P_ij ← { (x, y) / (x − i)² + (y − j)² = (rp/2)², Sk_ij = 255, D_xy = 0, (x, y) ∈ A };
Sk′ ← Sk;
Sk′_ij ← (1/|P_ij|) Σ_{(x,y) ∈ P_ij} Ir_xy / Sk_ij = 255;
// Get the type of every white point of the Sk image (based on the number of neighbors)
// SP_ij ∈ { final, regular, branch, intersection }
SP ← { (i, j) / Sk_ij = 255 };
SP_ij ← getPointsType(Sk_ij); // See Figure 6
// Get all the segments, TS_i ⊆ A
TS ← getSegments(SP);

4.2.4. Identification

Once each point is associated with its blood vessel within the vascular structure under examination, each vessel can be individually classified as a vein or an artery.
The algorithm designed to identify these vessels takes the skeleton tonalities of each blood vessel and its closest background tonalities (ensuring that no other vessel point is taken as a background point).
In addition to tonality, other features facilitate the identification of the blood vessels, such as the fact that both veins and arteries must be present in each of the images. The agent with the Coordinator role is responsible for selecting the most appropriate results, which helps to automatically determine the threshold values used. Algorithm 6 describes this process in more detail.
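A very simplified sketch of a tonality-based vein/artery decision is shown below. It is not Algorithm 6 itself: the contrast ratio and its cut-off value are hypothetical, and the sketch only illustrates the general observation that veins appear darker relative to the nearby retinal background than arteries.

```python
import numpy as np

def classify_segment(skeleton_tones, background_tones, ratio_threshold=0.8):
    """Classify a vessel segment as 'vein' or 'artery' from tonality.

    skeleton_tones: grayscale values along the segment's skeleton
    background_tones: grayscale values of the nearby retinal background
    ratio_threshold is a hypothetical cut-off on the contrast ratio.
    """
    # lower ratio = darker vessel relative to its background
    contrast = np.mean(skeleton_tones) / np.mean(background_tones)
    return "vein" if contrast < ratio_threshold else "artery"
```

In the actual system the threshold is not fixed: each agent tries different values and the Coordinator selects the result that best satisfies global constraints, such as both vessel types being present in the image.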

4.3. Diagnosis

Once the position of veins and arteries is known, the next step of the system consists of extracting the encapsulated information. Different techniques are applied by the Measurement Agent to obtain parameters such as the length, thickness and area of every vessel, classified by type (vein or artery) and by proximity to the optic disc in three concentric circles.
The area is obtained automatically by counting the number of pixels inside every vessel. The length is equally easy to calculate: the number of pixels in every vessel's skeleton is counted. Finally, the ratio between the area and the length of every single vessel determines its mean thickness.
Figure 7 shows an example in which an area of 3919 pixels is detected (1); 308 pixels represent both its skeleton and its length (2); and the mean thickness is 12.72 pixels, the result of dividing the area by the length, which can be verified in the zoomed section of the vessel (3).
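These three measurements reduce to two pixel counts and one division; a minimal sketch (the function name and array representation are ours) reproduces the worked example of Figure 7:

```python
import numpy as np

def vessel_measurements(mask, skeleton):
    """Area, length and mean thickness of a single vessel.

    mask: boolean image of the vessel's pixels
    skeleton: boolean image of its 1-pixel-wide skeleton
    """
    area = int(mask.sum())        # number of vessel pixels
    length = int(skeleton.sum())  # number of skeleton pixels
    thickness = area / length     # mean thickness in pixels
    return area, length, thickness

# Figure 7 example: area 3919 px, skeleton 308 px
# → mean thickness 3919/308 ≈ 12.72 px
```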
Algorithm 6: Vessel and segment identification
// A is the set of segments which have been identified as arteries, V is the set of segments which have been identified as veins
Input: Ir, Sk′, TS
Output: A, V
avThr ← 15; mxThr ← 8; count ← 0; A ← {}; V ← {};
while count < 7 do
    … // loop body shown only as an image (Processes 06 00254 i002) in the published version
The system works with pixels when these parameters are calculated, but the results are provided in millimeters by applying the scale supplied by the fundus camera manufacturer, which can easily be configured in the software tool.
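The pixel-to-millimeter conversion amounts to multiplying by the manufacturer's scale factor; note that areas scale with the square of the linear factor. A sketch, where the function names and the scale value are illustrative (the real factor depends on the camera):

```python
def length_mm(pixels, mm_per_px):
    """Convert a linear pixel measurement (length, thickness) to mm."""
    return pixels * mm_per_px

def area_mm2(pixels, mm_per_px):
    """Convert a pixel-count area to mm²; areas scale with the
    square of the linear conversion factor."""
    return pixels * mm_per_px ** 2
```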

5. Results

When evaluating the results provided by every stage of the software, the only images used were those in which an expert user could easily identify the blood vessels. In addition, images with any type of error were not considered in the subsequent steps, as the results of each step depend on the output of the previous one. Table 1 shows the success rate for every step over a sample of 150 retinal images.
The works found in the state of the art each focus on a specific analysis and extract specific information, so there are great differences between them. One component common to all of them is the blood vessel segmentation stage, whose results in other proposals can be compared with ours. As indicated in Section 2, the authors of [13] present a comparison of 26 articles. Of all of them, only the methodologies presented in [36] (95.03%) and the proposal of [13] itself (95.36%) achieve a percentage higher than 95%. An equally high percentage has been obtained in the present work. The segmentation precision of the other 24 articles ranges from the 94.72% obtained in [37] down to the 87.73% obtained in the 1980s by the work presented in [38].
However, the proposed system has better computational performance than existing tools. The tool presented in [13] achieves 95.36% segmentation efficiency (the highest one); however, it takes that tool “around 20 mins” to process an image on a PC with an i5 processor at 2.53 GHz and 4 GB of RAM. In comparison, our system performs the same task in less than 2 min (1 min and 54 s) on a PC with the same characteristics.
Thanks to the use of an MAS as the basis of the platform, the techniques and algorithms were easily applied, facilitating the choice of the best methodologies and their best input parameters. In addition, the system can be auto-adapted to the requirements of every single image and multiple agents ensure that the best possible results are obtained at each step of image processing and analysis.
Another advantage of the MAS is that it allows the platform to run in a distributed environment. This makes it possible to decouple the technology applied from the analysis process.
The outcome of this research is a multiplatform software tool (Figure 8) whose main features include: compatibility with different fundus cameras, and the ability to share results on a common server. This makes it possible for any medical center to process images, and the data from medical centers can be sent to the database.
The main parameters measured in each image are: total vessel thickness (vThck), total artery thickness (aThck), total vein area (vArea), total artery area (aArea), total vein length (vLength), total artery length (aLength) and the relationship between artery and vein thickness (AVIndex). An example is shown in Table 2.
Other parameters are also exported, such as the eye (right or left), the image name, patient identifier, other clinical variables of the patient, or the main parameters classified by quadrants and proximity.

6. Conclusions

The system allows for the analysis of fundus images using different algorithms that are adjusted automatically according to the characteristics of the image. During this process, filtering, analysis and data mining techniques are applied in combination with the knowledge of experts to carry out the analysis and the extraction of parameters.
Thanks to the flexibility provided by the VO-based system, the software can be integrated with other clinical software in order to launch new studies on the relationship between the extracted information and other fields of medicine, such as blood test results. Furthermore, the use of virtual organizations simplifies the inclusion of new algorithms due to the open characteristics of this technology.
The medical research team involved in this research is currently finishing the validation phase; they are aiming to find connections between the extracted values and different pathologies. Knowledge of the parameters that are indicative of certain pathologies is very valuable to medicine. For this reason, it is important to develop tools that simplify the parameter analysis process and allow for obtaining reliable results.

Author Contributions

P.C. developed the platform; S.R. supervised the development; L.G.-O. and J.M.C. led the research. All authors contributed to the revision of the paper.


Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.


References

1. Sánchez, A.; Villarrubia, G.; Zato, C.; Rodríguez, S.; Chamoso, P. A gateway protocol based on FIPA-ACL for the new agent platform PANGEA. In Trends in Practical Applications of Agents and Multiagent Systems; Springer: Cham, Switzerland, 2013; pp. 41–51.
2. Macía, I.; Graña, M.; Paloc, C. Knowledge management in image-based analysis of blood vessel structures. Knowl. Inf. Syst. 2012, 30, 457–491.
3. Cheung, C.Y.L.; Ikram, M.K.; Sabanayagam, C.; Wong, T.Y. Retinal microvasculature as a model to study the manifestations of hypertension. Hypertension 2012, 60, 1094–1103.
4. Nguyen, T.T.; Wang, J.J.; Sharrett, A.R.; Islam, F.A.; Klein, R.; Klein, B.E.; Cotch, M.F.; Wong, T.Y. Relationship of Retinal Vascular Caliber with Diabetes and Retinopathy: The Multi-Ethnic Study of Atherosclerosis (MESA). Diabetes Care 2008, 31, 544–549.
5. Ortega, M.; Barreira, N.; Novo, J.; Penedo, M.G.; Pose-Reino, A.; Gómez-Ulla, F. Sirius: A web-based system for retinal image analysis. Int. J. Med. Inform. 2010, 79, 722–732.
6. Sanchez, C.I.; Hornero, R.; López, M.I.; Aboy, M.; Poza, J.; Abasolo, D. A novel automatic image processing algorithm for detection of hard exudates based on retinal image analysis. Med. Eng. Phys. 2008, 30, 350–357.
7. Ege, B.M.; Hejlesen, O.K.; Larsen, O.V.; Møller, K.; Jennings, B.; Kerr, D.; Cavan, D.A. Screening for diabetic retinopathy using computer based image analysis and statistical classification. Comput. Methods Programs Biomed. 2000, 62, 165–175.
8. Martinez-Perez, M.E.; Hughes, A.D.; Thom, S.A.; Bharath, A.A.; Parker, K.H. Segmentation of blood vessels from red-free and fluorescein retinal images. Med. Image Anal. 2007, 11, 47–61.
9. Podoleanu, A.G.; Rosen, R.B. Combinations of techniques in imaging the retina with high resolution. Prog. Retin. Eye Res. 2008, 27, 464–499.
10. García-Ortiz, L.; Recio-Rodríguez, J.I.; Parra-Sanchez, J.; Elena, L.J.G.; Patino-Alonso, M.C.; Agudo-Conde, C.; Rodríguez-Sánchez, E.; Gómez-Marcos, M.A. A new tool to assess retinal vessel caliber. Reliability and validity of measures and their relationship with cardiovascular risk. J. Hypertens. 2012, 30, 770–777.
11. Lau, Q.P.; Lee, M.L.; Hsu, W.; Wong, T.Y.; Ng, E.Y.K.; Acharya, U.R.; Campillo, A.; Suri, J.S. The Singapore eye vessel assessment system. Image Anal. Model. Ophthalmol. 2014, 143–160.
12. Perez-Rovira, A.; MacGillivray, T.; Trucco, E.; Chin, K.S.; Zutis, K.; Lupascu, C.; Tegolo, D.; Giachetti, A.; Wilson, P.; Doney, A.; et al. VAMPIRE: Vessel assessment and measurement platform for images of the REtina. In Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Boston, MA, USA, 30 August–3 September 2011; pp. 3391–3394.
13. GeethaRamani, R.; Balasubramanian, L. Retinal blood vessel segmentation employing image processing and data mining techniques for computerized retinal image analysis. Biocybern. Biomed. Eng. 2016, 36, 102–118.
14. Alberola, J.M.; del Val, E.; Costa, A.; Novais, P.; Julian, V. A genetic algorithm for group formation in elderly communities. AI Commun. 2018, 31, 409–425.
15. Garcia-Ortiz, L.; Perez-Ramos, H.; Chamoso-Santos, P.; Recio-Rodriguez, J.I.; Garcia-Garcia, A.; Maderuelo-Fernandez, J.A.; Gomez-Sanchez, L.; Martínez-Perez, P.; Rodriguez-Martin, C.; De Cabo-Laso, A.; et al. Automatic Image Analyzer to Assess Retinal Vessel Caliber (ALTAIR) Tool Validation for the Analysis of Retinal Vessels. J. Hypertens. 2016, 34, e160.
16. Garcia-Ortiz, L.; Gómez-Marcos, M.A.; Recio-Rodríguez, J.I.; Maderuelo-Fernández, J.A.; Chamoso-Santos, P.; Rodríguez-González, S.; de Paz-Santana, J.F.; Merchan-Cifuentes, M.A.; Agudo-Conde, C. Validation of the automatic image analyser to assess retinal vessel calibre (ALTAIR): A prospective study protocol. BMJ Open 2014, 4, e006144.
17. Akil, H.; Huang, A.S.; Francis, B.A.; Sadda, S.R.; Chopra, V. Retinal vessel density from optical coherence tomography angiography to differentiate early glaucoma, pre-perimetric glaucoma and normal eyes. PLoS ONE 2017, 12, e0170476.
18. Tanabe, Y.; Kawasaki, R.; Wang, J.J.; Wong, T.Y.; Mitchell, P.; Daimon, M.; Oizumi, T.; Kato, T.; Kawata, S.; Kayama, T.; et al. Retinal arteriolar narrowing predicts 5-year risk of hypertension in Japanese people: The Funagata Study. Microcirculation 2010, 17, 94–102.
19. Wong, T.Y.; Duncan, B.B.; Golden, S.H.; Klein, R.; Couper, D.J.; Klein, B.E.; Hubbard, L.D.; Sharrett, A.; Schmidt, M.I. Associations between the metabolic syndrome and retinal microvascular signs: The Atherosclerosis Risk in Communities study. Investig. Ophthalmol. Vis. Sci. 2004, 45, 2949–2954.
20. Tikellis, G.; Arnett, D.K.; Skelton, T.N.; Taylor, H.W.; Klein, R.; Couper, D.J.; Richey Sharrett, A.; Wong, T.Y. Retinal arteriolar narrowing and left ventricular hypertrophy in African Americans. The Atherosclerosis Risk in Communities (ARIC) study. Am. J. Hypertens. 2008, 21, 352–359.
21. Yatsuya, H.; Folsom, A.R.; Wong, T.Y.; Klein, R.; Klein, B.E.; Sharrett, A.R. Retinal microvascular abnormalities and risk of lacunar stroke: Atherosclerosis Risk in Communities study. Stroke 2010, 41, 1349–1355.
22. Wong, T.Y.; Klein, R.; Sharrett, A.R.; Duncan, B.B.; Couper, D.J.; Tielsch, J.M.; Klein, B.E.; Hubbard, L.D. Retinal arteriolar narrowing and risk of coronary heart disease in men and women: The Atherosclerosis Risk in Communities Study. JAMA 2002, 287, 1153–1159.
23. Daxer, A. The fractal geometry of proliferative diabetic retinopathy: Implications for the diagnosis and the process of retinal vasculogenesis. Curr. Eye Res. 1993, 12, 1103–1109.
24. Mainster, M.A. The fractal properties of retinal vessels: Embryological and clinical implications. Eye 1990, 4, 235–241.
25. Tu, S.; Huang, Y.; Liu, G. CSFL: A novel unsupervised convolution neural network approach for visual pattern classification. AI Commun. 2017, 30, 311–324.
26. Chapman, N.; Witt, N.; Gao, X.; Bharath, A.A.; Stanton, A.V.; Thom, S.A.; Hughes, A.D. Computer algorithms for the automated measurement of retinal arteriolar diameters. Br. J. Ophthalmol. 2001, 85, 74–79.
27. Matsopoulos, G.K.; Mouravliansky, N.A.; Delibasis, K.K.; Nikita, K.S. Automatic retinal image registration scheme using global optimization techniques. IEEE Trans. Inf. Technol. Biomed. 1999, 3, 47–60.
28. Zana, F.; Klein, J.C. A multimodal registration algorithm of eye fundus images using vessels detection and Hough transform. IEEE Trans. Med. Imaging 1999, 18, 419–428.
29. Zana, F.; Klein, J.C. Robust segmentation of vessels from retinal angiography. In Proceedings of the 13th International Conference on Digital Signal Processing, Santorini, Greece, 2–4 July 1997; Volume 2, pp. 1087–1090.
30. Espona, L.; Carreira, M.J.; Ortega, M.; Penedo, M.G. A snake for retinal vessel segmentation. In Iberian Conference on Pattern Recognition and Image Analysis; Springer: Berlin/Heidelberg, Germany, 2007; pp. 178–185.
31. Zhu, P.; Huang, F.; Lin, F.; Li, Q.; Yuan, Y.; Gao, Z.; Chen, F. The relationship of retinal vessel diameters and fractal dimensions with blood pressure and cardiovascular risk factors. PLoS ONE 2014, 9, e106551.
32. Chamoso, P.; Pérez-Ramos, H.; García-García, A. ALTAIR: Supervised Methodology to Obtain Retinal Vessels Caliber. Adv. Distrib. Comput. Artif. Intell. J. 2014, 3, 48–57.
33. Winder, R.J.; Morrow, P.J.; McRitchie, I.N.; Bailie, J.R.; Hart, P.M. Algorithms for digital image processing in diabetic retinopathy. Comput. Med. Imaging Graph. 2009, 33, 608–622.
34. Chamoso, P.; De Paz, J.F.; De La Prieta, F.; Bajo Pérez, J. Agreement technologies applied to transmission towers maintenance. AI Commun. 2017, 30, 83–98.
35. Mena, J.B. Vectorización automática de una imagen binaria mediante K-means y degeneración de la triangulación de Delaunay. Revista de la Asociación Española de Teledetección 2002, 7, 21–29.
36. Franklin, S.W.; Rajan, S.E. Computerized screening of diabetic retinopathy employing blood vessel segmentation in retinal images. Biocybern. Biomed. Eng. 2014, 34, 117–124.
37. Lam, B.S.; Gao, Y.; Liew, A.W.C. General retinal vessel segmentation using regularization-based multiconcavity modeling. IEEE Trans. Med. Imaging 2010, 29, 1369–1381.
38. Chaudhuri, S.; Chatterjee, S.; Katz, N.; Nelson, M.; Goldbaum, M. Detection of blood vessels in retinal images using two-dimensional matched filters. IEEE Trans. Med. Imaging 1989, 8, 263–269.
Figure 1. Virtual Organizations (VO) and agents’ schema.
Figure 2. Stages into which the method of analysis is divided.
Figure 3. The divisions resulting from preprocessing in the detection of the macula.
Figure 4. Image processing to locate the optic disc.
Figure 5. Vessel detection filter results.
Figure 6. Four kinds of points detected when analyzing the vessels’ skeleton. (a) normal; (b) branch; (c) final; (d) cross.
Figure 7. Procedure for obtaining the (1) area, (2) length and (3) thickness of a vessel.
Figure 8. Software screenshots on the last step of the analysis of two different images.
Table 1. Success rate for every step and comparative in blood vessel segmentation stage.
| Success (%) / Stage | Eye Side | Retina | Papilla | Segment. | Ident. |
|---|---|---|---|---|---|
| Proposed System | 100% | 100% | 99% | 95% | 91% |
| GeethaRamani et al. (2016) [13] | - | - | - | 95.36% | - |
| Franklin et al. (2014) [36] | - | - | - | 95.03% | - |
| Lam et al. (2010) [37] | - | - | - | 94.72% | - |
| Chaudhuri et al. (1989) [38] | - | - | - | 87.73% | - |
Table 2. Main exported parameters for the image in Figure 8 (right).

Chamoso, P.; Rodríguez, S.; García-Ortiz, L.; Corchado, J.M. Information Extraction from Retinal Images with Agent-Based Technology. Processes 2018, 6, 254.
