Article

An Artificial Visual System for Three Dimensional Motion Direction Detection

Mianzhe Han 1, Yuki Todo 1,* and Zheng Tang 2,*
1 Faculty of Electrical and Computer Engineering, Kanazawa University, Kakuma-Machi, Kanazawa 920-1192, Japan
2 Department of Intelligence Information Systems, University of Toyama, 3190 Gofuku, Toyama 930-8555, Japan
* Authors to whom correspondence should be addressed.
Electronics 2022, 11(24), 4161; https://doi.org/10.3390/electronics11244161
Submission received: 11 November 2022 / Revised: 8 December 2022 / Accepted: 9 December 2022 / Published: 13 December 2022
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

Abstract:
In mammals, enormous amounts of visual information are processed by neurons of the visual nervous system. Research on direction selectivity is of great significance, and local direction-selective ganglion neurons have been discovered. However, this research remains at the one-dimensional level and concentrates on single cells, so it is still challenging to explain the function and mechanism of overall motion direction detection. In our previous papers, we proposed a motion direction detection mechanism at the two-dimensional level to address these problems. Those studies, however, did not take into account that the information received by the left and right retinas differs, and therefore could not detect three-dimensional motion direction; further effort is required to develop a more realistic system in three dimensions. In this paper, we propose a new artificial visual system that extends the motion direction detection mechanism into three dimensions. We assume that a neuron can detect the local motion of a single-voxel object within three-dimensional space, and we take into consideration that the information on the left and right retinas differs. Based on this binocular disparity, a realistic motion direction mechanism for three dimensions is established: the neurons receive signals from the primary visual cortex of each eye and respond to motion in specific directions. A series of local direction-selective ganglion neurons, each realized by a logical AND operation, is arrayed on the retina, and the response of each local direction detection neuron is further integrated by the next neural layer to obtain the global motion direction. We carry out several computer simulations to demonstrate the validity of the mechanism. They show that the proposed mechanism is capable of detecting the motion of complex three-dimensional objects, consistent with most known physiological experimental results.

1. Introduction

The processing of visual information by the mammalian visual system is vital for mammals, and considerable research effort has been devoted to motion direction detection. In 1953, Kuffler characterized the receptive field by moving a light spot in front of a cat’s retina [1]. The direction selectivity mechanism was first described in the cat’s primary visual cortex by Hubel and Wiesel [2], who found that moving lights usually evoked a greater response than static lights and that responses to motion in a particular direction were strong while responses to the opposite direction were limited. In 1967, Oyster and Barlow determined the preferred directions of a number of direction-selective ganglion cells in the rabbit retina [3], and the preferred direction of direction-selective retinal ganglion cells was further investigated by Barlow and Hill [4]. In the following period, understanding of retinal direction selectivity gradually improved through studies of animals such as the mouse, rabbit, fly, and cat [5,6,7,8,9,10]. Later, the direction selectivity of blowfly motion-sensitive neurons was shown to be computed in a two-stage process [11]. Moreover, amacrine cells were found to be closely related to direction selectivity, and the research entered a new stage [12,13,14,15,16]. A computational model of a visual motion detector that integrates direction and orientation selectivity features was developed [17]. Which parts of the visual system achieve direction selectivity, and how, are still being uncovered [18,19,20,21]. In 2017, Mauss and Vlasits reviewed the visual circuits for direction selectivity [22]. Recently, there have also been studies in the three-dimensional field [23,24], and in 2020, further consequential studies enriched the model of direction-selective neurons [25,26,27]. However, the studies on direction selectivity mentioned above share the limitation that they concentrate on single cells and cannot explain the function and cooperation of overall motion direction detection [28]. Little is known about the mechanism of motion direction detection in the visual nervous system [29,30]; recent developments in this area have therefore attracted much attention.
In our previous papers, we proposed a mechanism to explain the direction selectivity of the retinal nerve at the two-dimensional level [31,32,33,34,35]. However, these existing methods [31,32,33,34,35] focus only on two-dimensional cases: none of them took into account that humans have two eyes, nor did they explain how the human eye perceives three-dimensional motion direction, so they could only deal with the direction and speed of two-dimensional motion. van Ee has reported that multiplexed orientation, motion direction, speed, and binocular disparity can help solve the binocular matching problem [36], but its function, structure, and contribution to global direction detection remain unclear.
In this paper, we propose an artificial visual system that provides a systemic explication of three-dimensional motion direction detection to solve these problems. Compared with existing methods [31,32,33,34,35]: (i) the mechanism explains how local three-dimensional direction detection neurons extract the three-dimensional motion direction; we introduce a neuron, realized by a logical AND operation, that detects the local motion direction in three dimensions. (ii) The mechanism takes into consideration that humans have two eyes, which produce different projections on the left and right retinas. Based on this binocular disparity, the motion direction in three dimensions can be detected by the local direction detection neuron, which receives signals from the primary visual cortex of the right and left eyes and responds to motion in a specific direction. We assume that the local motion direction detective ganglion neurons, which receive signals from the left and right retinas, respond to their preferred directions of motion. The responses of neurons in each area are further integrated by the next neural layer to obtain the global motion direction.
The rest of this article is organized as follows. Section 2 reviews related work. Section 3 introduces the materials and methods: we describe Barlow’s excitatory scheme and propose a model based on Barlow’s algorithm, introduce methods for projecting objects onto the retina and for judging the local motion direction of voxels, and finally count all the local motion directions to judge the global motion direction; based on this mechanism, we propose an artificial visual system for three-dimensional motion direction detection. In Section 4 (Results), we conduct a series of experiments to verify the validity of the proposed mechanism, which performs well in measuring the motion direction of objects. In Section 5 (Conclusions), we discuss the significance, advantages, and disadvantages of the proposed mechanism.

2. Related Work

In related work, a series of mechanisms was proposed to give a detailed explanation of direction-selective ganglion neurons [31,32,33,34,35]. We first proposed an inhibitory scheme to explain the direction detection mechanism at the cell level [31]. We then provided a velocity mechanism for object motion detection in the mammalian retina [32]. Based on the core computation of the Hassenstein–Reichardt correlator (HRC) model, a novel mechanism was developed for global motion direction detection, with reference to biological investigations of Drosophila [33]. Later, an orientation detection mechanism based on dendritic calculation of local orientation was proposed [34]. These mechanisms enrich our interpretation of direction selectivity. However, the existing methods above [31,32,33,34,35] focus only on two-dimensional cases: none of them took into account that humans have two eyes, nor did they explain how the human eye perceives three-dimensional motion direction, so they could only deal with the direction and speed of two-dimensional motion. Our paper solves these problems and provides a detailed explanation of motion in three-dimensional space. By extending the understanding of direction selectivity to three dimensions, this work greatly increases the possibility of practical application. We take into consideration that humans have two eyes, which produce different projections on the left and right retinas, and propose an artificial visual system that provides a systemic explication of three-dimensional motion direction detection.

3. Materials and Methods

This section proposes the Artificial Visual System (AVS) for three-dimensional direction detection based on Barlow’s excitatory direction-selective scheme. We first introduce Barlow’s scheme and propose the binocular mechanism of retinal direction selection, which explains how projections move in the retina. Based on these, we propose the local three-dimensional direction detection neuron, which serves as the local feature detection neuron in the AVS. We then expand it to global direction detection, regarded as the global feature detection neuron in the AVS. Finally, we present the entire structure of the AVS and how it processes input information.

3.1. Barlow’s Excitatory Direction-Selective Scheme

It has been intensively investigated that direction-selective neurons release a response signal when an object moves in their preferred direction; when the object moves in the opposite direction, the response is almost nonexistent. Barlow’s activation model is the most famous model of direction selectivity [3]. Figure 1 shows how neurons determine the motion direction of an object as it moves from left to right. Ra, Rb, and Rc are three receptors, each receiving the optical information of its corresponding receptive field. The delayer is responsible for delaying the transmission of the response in the channel, and the AND gate is responsible for analyzing the two signals it receives. When the object moves from Ra to Rb, Ra is activated first and releases a response; this response is delayed and finally reaches the AND gate of Rb. When the object reaches Rb, Rb releases a response. Once the AND gate receives the two responses from Ra and Rb, it is activated. In our previous papers, we proposed a model of the local direction detective neuron as shown in Figure 1, realized local two-dimensional detective neurons, and used them to extract two-dimensional motion direction information locally and infer the global two-dimensional motion direction of objects [31,32,33,34,35]. However, those studies ignored the difference between the left and right eyes: none of them took into account that humans have two eyes with different projections on the left and right retinas, nor did they explain how the human eye perceives the motion direction of three-dimensional objects. As a result, they could only deal with the direction and speed of two-dimensional motion and cannot detect three-dimensional motion direction.
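To make this computation concrete, the following is a minimal sketch of the delay-and-AND scheme for one preferred direction, assuming discrete time steps and a one-step synaptic delay; the function name and binary signal encoding are illustrative assumptions, not the implementation used in this paper.

```python
# A minimal sketch of Barlow's excitatory scheme for a single preferred
# direction (left to right), assuming discrete time steps and a unit delay.
def barlow_detector(frames):
    """frames: list of binary tuples (ra, rb), one per time step.
    Returns True if the AND gate at Rb fires, i.e. Ra was active exactly
    one step before Rb (motion from Ra toward Rb)."""
    fired = False
    delayed_ra = 0                    # the delayer holds Ra's last response
    for ra, rb in frames:
        if delayed_ra and rb:         # delayed Ra signal meets current Rb signal
            fired = True
        delayed_ra = ra
    return fired

# Object moving left to right: Ra fires at t=0, Rb at t=1 -> gate fires.
print(barlow_detector([(1, 0), (0, 1)]))   # True
# Opposite (null) direction: Rb fires first -> gate stays silent.
print(barlow_detector([(0, 1), (1, 0)]))   # False
```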

3.2. Binocular Mechanism of Retina Directional Selection

Objects in the receptive field form different projections on the retinas of the left and right eyes, and it is necessary to clarify the differences between these projections. Consider a single-voxel object in a three-dimensional space that is divided into voxels. The left and right retinas receive the position information of the object at the moment before moving (time T) and the moment after moving (time T + Δt); between these two moments, the object moves exactly once, and the motion is projected onto the retinas. As the object moves, so do its projections on the retinas of the left and right eyes. We decompose the motion direction into two components: (1) parallel to the retinal plane and (2) perpendicular to the retinal plane. If the object moves in a plane parallel to the retina, its projections on both retinas move along similar trajectories. If the object moves in a direction perpendicular to the retina, its projections on the two retinas move in opposite directions. In Figure 2, the object moves away from the retina, and the projections on the left and right retinas move in opposite directions.
For simplicity, we assume that space is divided into voxel blocks, with the length of each voxel regarded as a unit distance. We assume that the object has only two states, before and after moving, and moves exactly once between these two states. The image of the object on the retina is upside down and is adjusted in the brain. To facilitate understanding, we adjusted the upside-down retinal image (the retinal image can be recovered by rotating the image 180 degrees around the rotation center, which does not affect the subsequent analysis).
In Figure 3, consider an object occupying a single voxel in front of the eyes. We establish a coordinate system with the object as the origin, select the coordinate axes according to the direction of motion relative to the eyes, and expand the object along the three axes to obtain a 3 × 3 × 3 voxel space. In one step, the object moves in a fixed direction to one voxel of this 3 × 3 × 3 space, giving 3 × 3 × 3 = 27 possible cases: 26 moving directions plus the case where the object remains stationary, which is also regarded as a direction. We regard rightward as the positive direction of the x-axis, upward as the positive direction of the y-axis, and forward as the positive direction of the z-axis. For convenience, we label the twenty-six directions 1, 2, 3, ..., 26 and use (X,Y,Z) to represent a direction of motion, where X, Y, and Z are the distances the object moves along the x, y, and z axes.
The twenty-six directions, represented as vectors, are listed in Table 1.
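The numbering of Table 1 follows lexicographic order over {−1, 0, 1}³ with the stationary case (0,0,0) excluded. As a sketch (the list name is ours), the table can be reproduced as follows:

```python
# Enumerate the 26 direction vectors of Table 1 in lexicographic order,
# skipping the stationary case (0, 0, 0).
DIRECTIONS = [(x, y, z)
              for x in (-1, 0, 1)
              for y in (-1, 0, 1)
              for z in (-1, 0, 1)
              if (x, y, z) != (0, 0, 0)]

assert len(DIRECTIONS) == 26
assert DIRECTIONS[0] == (-1, -1, -1)   # direction 1
assert DIRECTIONS[16] == (0, 1, 1)     # direction 17
assert DIRECTIONS[25] == (1, 1, 1)     # direction 26
```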
We divide the 3 × 3 × 3 space into three layers based on the distance from the eyes (along the z-axis, the forward direction): the forward layer, the middle layer, and the backward layer. Each layer is parallel to the retinal plane. If the object moves within the same layer, its projections on the two retinas move similarly; if it moves to a different layer, its projections on the two retinas differ.
In Figure 4, the red block represents the object before moving and the green block represents the object after moving. The space the object can reach is divided into three layers according to the distance from the eye, each layer being a 3 × 3 voxel block. In Figure 4a, the object moves one voxel downward in three-dimensional space. Because the object stays within the same layer, the projections on both retinas also move downward; in this case, the motion of the object in three-dimensional space is similar to the motion of its projection on the retina. In Figure 4b, the object moves between layers: it moves one layer backward relative to Figure 4a. The corresponding projections on the retinas each move one voxel, but in opposite directions: on the left retina the projection moves one voxel to the left, while on the right retina it moves one voxel to the right.
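A minimal sketch of this projection rule is given below, assuming that each one-layer step in depth shifts the two retinal projections by one voxel in opposite horizontal directions, as in Figure 4b; the helper name and the sign convention are illustrative assumptions.

```python
# Map a 3D voxel displacement (dx, dy, dz) to the 2D displacements of its
# projections on the left and right retinas, assuming a one-voxel opposite
# horizontal shift per depth layer (an illustrative simplification).
def retinal_displacements(dx, dy, dz):
    left = (dx + dz, dy)    # moving backward (dz = -1): left projection shifts left
    right = (dx - dz, dy)   # ... while the right projection shifts right
    return left, right

# In-plane motion (Figure 4a): both projections move identically.
print(retinal_displacements(0, -1, 0))    # ((0, -1), (0, -1))
# Motion with a depth component (Figure 4b): horizontal components are opposite.
print(retinal_displacements(0, -1, -1))   # ((-1, -1), (1, -1))
```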

3.3. Global Direction Selection Mechanism

As mentioned above, we consider a single-voxel object in a three-dimensional coordinate system. The 3 × 3 × 3 voxel space around the object is mapped onto the retinas of the left and right eyes. We assume that each single-voxel object has coordinates R(i,j,t) within the 3 × 3 × 3 voxel space, corresponding to the region (i,j) in a two-dimensional retinal field (M × N) in which movement can be detected by the corresponding local motion detection retinal ganglion neurons. Figure 5 shows the projections of an object on the retinas before and after motion and how the neurons process the information from the retina. As in Figure 1, the object in the receptive field activates a receptor before moving, causing the receptor to release a response; this response passes through the delay channel and finally reaches the AND gate. After moving, the object is projected onto the retina again, and the newly activated receptors release responses that go directly to the AND gate. We assume that the response from before the motion passes through the delay channel so that the signals from before and after the motion arrive at the AND gate simultaneously; the AND gate responsible for detecting that particular direction is then activated. For each class of neuron with a different preferred direction, the position examined in the retinal projection map after the motion also differs. In total there are 26 directions of movement, and neurons of each direction class may be activated. We count the neurons activated for each direction and take the direction with the most activations as the global direction.
In practice, objects usually consist of multiple voxels. We compute the direction detected by each neuron and take the direction with the most activations as the final direction of motion. Labeling the directions 1, 2, ..., i, ..., 26, we denote the number of activations for direction i by f(i).
The global direction N is as follows:
N = f⁻¹{max[f(1), …, f(i), …, f(26)]}  (1)
In Figure 6, consider a two-voxel object in a 3 × 3 × 3 voxel space: a 1 × 1 × 2 object (composed of dark red and light red blocks) occupies the space before motion and moves in direction (0,−1,−1); the moved object is painted green. The proposed mechanism scans every voxel in this space, most of which fail to activate any neuron. The ‘scan’ mechanism treats each voxel in space as a central point and projects it onto the retina for analysis. In the following two cases, neurons are activated with a red voxel as the center point (no neurons are activated when an empty voxel is the center point). In Figure 6a, the dark red voxel is scanned; the 3 × 3 × 3 voxel space is scanned after the movement (T + Δt), and multiple voxels are projected onto the retina. Two of the direction-selective neurons connected to the retina, for directions (0,−1,−1) and (1,−1,−1), are activated. In Figure 6b, the light red voxel is scanned, and two of the direction-selective neurons, for directions (0,−1,−1) and (−1,−1,−1), are activated. Counting the activations of all neuron types, we find that direction (0,−1,−1) is activated the most.
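The scan-and-vote procedure can be sketched as follows. For brevity, this simplified version applies the delayed AND test directly in the voxel grid rather than on the two retinal projections, so it illustrates the counting of Equation (1) rather than the full binocular pipeline.

```python
# Scan every occupied voxel at time T, let each of the 26 local detectors
# AND the delayed centre signal with the current signal one voxel away in
# its preferred direction, and take the direction with the most activations
# (Equation (1)). Simplified to a single voxel grid for illustration.
import numpy as np

DIRECTIONS = [(x, y, z) for x in (-1, 0, 1) for y in (-1, 0, 1)
              for z in (-1, 0, 1) if (x, y, z) != (0, 0, 0)]

def global_direction(before, after):
    """before, after: binary 3D numpy arrays of equal shape."""
    counts = np.zeros(len(DIRECTIONS), dtype=int)
    shape = before.shape
    for p in zip(*np.nonzero(before)):              # scan occupied voxels at T
        for i, (dx, dy, dz) in enumerate(DIRECTIONS):
            q = (p[0] + dx, p[1] + dy, p[2] + dz)
            if all(0 <= q[k] < shape[k] for k in range(3)) and after[q]:
                counts[i] += 1                      # local AND gate activated
    return DIRECTIONS[int(np.argmax(counts))]       # N = argmax over f(i)

# A two-voxel object moving in direction (0, -1, -1), cf. Figure 6.
before = np.zeros((5, 5, 5), dtype=bool)
after = np.zeros((5, 5, 5), dtype=bool)
before[2, 2, 2] = before[2, 2, 3] = True
after[2, 1, 1] = after[2, 1, 2] = True
print(global_direction(before, after))              # (0, -1, -1)
```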

3.4. Artificial Visual System (AVS)

As shown in Figure 7, we propose an artificial visual system to explain how the retina processes motion information. The visual system consists of the sensory organs (eyes) and the pathways connecting them to the visual cortex and other parts of the central nervous system. Visual information received by the eyes is projected onto the retinas of the left and right eyes. The information from the left and right retinas is processed and delivered to Layer 1, whose neurons, the local feature detection neurons (LFDNs), are responsible for detecting the motion direction and correspond to cortical neurons. The information is then transmitted to Layer 2, where more complex features, corresponding to the subsequent primate layer in the middle temporal (MT) area of the brain, are assessed. Layer 2 is also regarded as the global feature detection neuron (GFDN) layer, which detects higher-order features.
The AVS is a feed-forward neural network and can in principle be trained by error back-propagation. However, unlike traditional and convolutional neural networks, the parameters of Layer 1 (the LFDNs) in the AVS can be determined in advance using prior knowledge, so the AVS does not need to learn in most cases. As a result, the AVS can be more efficient and convenient than a CNN.

4. Results

To verify the efficiency of the proposed mechanism, we conducted several computer experiments. We randomly generated a dataset of 32 × 32 × 32 voxel images in which the light spots form random-dot patterns, i.e., voxels distributed randomly and discretely in space. The objects in these images contain from 1 to 32 voxels and move randomly in one of the 26 directions. Our algorithm flowchart is shown in Figure 8. We scan each voxel in the receptive field to acquire the local direction. In this step, we use a logic AND gate that accepts two signals simultaneously: the gate (denoted ANDGate) receives signals from two receptors A and B, whose signals we denote SignalA and SignalB, and we define * as the logical AND operation.
ANDGate = SignalA * SignalB  (2)
The program then scans the space voxel by voxel and judges the motion direction of each point from the receptor information. After the program has scanned the whole three-dimensional space, we use Equation (1) to count all the local motion directions and judge the global motion direction.
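A sketch of how such random-dot stimuli can be generated is shown below; keeping the voxels one step away from the border so that the move stays inside the grid is our simplification, and the helper name is illustrative.

```python
# Generate a random-dot sample: n voxels placed at random in a 32x32x32
# grid, then displaced one step along one of the 26 direction vectors.
import numpy as np

DIRECTIONS = [(x, y, z) for x in (-1, 0, 1) for y in (-1, 0, 1)
              for z in (-1, 0, 1) if (x, y, z) != (0, 0, 0)]
rng = np.random.default_rng(0)

def make_sample(n_voxels, grid=32):
    before = np.zeros((grid, grid, grid), dtype=bool)
    pts = rng.integers(1, grid - 1, size=(n_voxels, 3))  # stay off the border
    before[pts[:, 0], pts[:, 1], pts[:, 2]] = True
    label = int(rng.integers(26))                        # direction number - 1
    after = np.zeros_like(before)
    moved = pts + np.array(DIRECTIONS[label])
    after[moved[:, 0], moved[:, 1], moved[:, 2]] = True
    return before, after, label

before, after, label = make_sample(8)
print(label + 1, DIRECTIONS[label])   # direction number and its vector
```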
First, consider a single-voxel object in three-dimensional space moving in direction 26 (1,1,1), as shown in Figure 9. The object is projected onto the left and right retinas. Local direction-selective neurons receive the information from the retinas and determine the local directions (LFDN layer); all the LFDN-layer information is then analyzed by the GFDN layer, which counts the activations in each direction and determines the final motion direction. The single-voxel pattern was displaced 1 unit along each of the x, y, and z directions, and the 26 local direction detective retinal ganglion neurons were used to detect the motion direction; the result is shown in Figure 9. The responses of the 26 neurons were obtained by spatially convolving them over the three-dimensional image at time t (after moving) and reacting with the delayed image at time t − Δt (before moving); in this simulation, the reaction was a logical AND of a voxel of the region at time t with the adjacent voxel at t − Δt in the particular direction. The recorded activities (Figure 9) show that only the neuron for that direction was activated, and the histogram likewise shows activation in only one direction, from which the direction of object movement can be inferred.
The same result also applies to more complex objects. In Figure 10, a four-voxel pattern is displaced −1 unit along the x direction and 1 unit along the y and z directions, activating direction 9 (−1,1,1). We counted the number of activations in each direction and drew the histogram in Figure 10. The neurons of direction 9 (−1,1,1) are activated the most, from which it can be inferred that the object moves in direction 9 (−1,1,1).
In Figure 11, a sixteen-voxel pattern is displaced 1 unit along the y and z directions. We counted the number of activations in each direction and drew the histogram in Figure 11. The number of activations in direction 17 (0,1,1) is clearly the highest, from which it can be inferred that the object is moving in direction 17 (0,1,1).
In recent years, convolutional neural networks (CNNs) have attracted great attention from researchers [37,38,39], including in the field of human–computer interaction. To verify the accuracy of our proposed mechanism, we compare it with a CNN-based method, whose structure is shown in Figure 12. The CNN we used has five layers: (i) a convolutional layer with 3 × 3 convolution kernels and 16 feature maps; (ii) a max-pooling layer with a pooling size of 2 × 2; (iii) a convolutional layer with 3 × 3 convolution kernels and 32 feature maps; (iv) another max-pooling layer with a pooling size of 2 × 2; and (v) a fully connected layer with 26 outputs.
The CNN in this experiment uses (i) the rectified linear unit (ReLU) as the activation function; (ii) the Adam optimizer with a learning rate of 0.02; and (iii) softmax cross entropy as the cost function.
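Under these hyperparameters, the comparison network can be sketched in Keras as follows. The input tensor layout, with the left and right 32 × 32 retina maps stacked as two channels, is our assumption, as the exact layout is not specified above.

```python
# A sketch of the five-layer comparison CNN with the stated hyperparameters.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 32, 2)),               # left + right retina maps (assumed)
    layers.Conv2D(16, (3, 3), activation="relu"),  # (i) 3x3 kernels, 16 feature maps
    layers.MaxPooling2D((2, 2)),                   # (ii) 2x2 max pooling
    layers.Conv2D(32, (3, 3), activation="relu"),  # (iii) 3x3 kernels, 32 feature maps
    layers.MaxPooling2D((2, 2)),                   # (iv) 2x2 max pooling
    layers.Flatten(),
    layers.Dense(26, activation="softmax"),        # (v) one output per direction
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),
    loss="sparse_categorical_crossentropy",        # softmax cross entropy
    metrics=["accuracy"],
)
model.summary()
```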
We use the projected left and right retina maps as the CNN dataset, which contains 2 × 10,000 images; 70% of them form the training set and the remaining 30% the testing set. Each of these images contains eight object voxels and no noise voxels.
To make the experiment more convincing, we added noise to the images. To describe the amount of added noise precisely, let Vo be the volume of the object (measured in voxels) and Vn the cumulative volume of the noise (measured in voxels). We define the noise ratio as follows:
Noise Ratio = Vn / Vo  (3)
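A sketch of the corresponding noise injection is given below, assuming that noise voxels are placed uniformly at random in empty positions; the placement details are our assumption.

```python
# Add Vn = ratio * Vo noise voxels at random empty positions of a binary
# 3D image, following the Noise Ratio definition above.
import numpy as np

rng = np.random.default_rng(1)

def add_noise(image, ratio):
    """image: binary 3D array; ratio: Vn / Vo (e.g. 2.0 for 200%)."""
    noisy = image.copy()
    v_o = int(image.sum())                  # object volume Vo in voxels
    v_n = int(round(ratio * v_o))           # noise volume Vn in voxels
    empty = np.argwhere(~noisy)             # candidate empty positions
    picks = empty[rng.choice(len(empty), size=v_n, replace=False)]
    noisy[picks[:, 0], picks[:, 1], picks[:, 2]] = True
    return noisy

img = np.zeros((32, 32, 32), dtype=bool)
img[10:12, 10:12, 10:12] = True             # an 8-voxel object
print(add_noise(img, 2.0).sum())            # 8 + 16 = 24 occupied voxels
```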
Table 2 shows that the AVS achieves 100% accuracy when there are no noise voxels, equal to the CNN. When we add 50% noise voxels to the testing set (Noise Ratio = 50%), the proposed mechanism remains at 100%, already better than the CNN’s 99.3%; at 100% noise the CNN drops quickly to 82% while the proposed mechanism stays at 100%. When the noise increases to 200% (Noise Ratio = 200%), our method still retains 100% accuracy, whereas the CNN, at 33.7%, can barely detect the direction. Table 2 thus indicates that the proposed mechanism has superior noise resistance. The results in Table 3 do not differ much from Table 2. We then increase the object size to 32 voxels in Table 4: the proposed mechanism still achieves 99.8% accuracy when the noise increases to 1600% (Noise Ratio = 1600%), while the accuracy of the CNN drops to 11.2%.
Compared with the CNN, the proposed mechanism performs better in the presence of noise and operates more efficiently, and it is closer to the biological model. On a 1660 Ti GPU, the CNN needs a large amount of training data and about 0.5–1 h to train, while the proposed mechanism needs no pre-training and takes less than 2 s to finish the test. Compared with the latest methods [31,32,33,34,35], which focus only on two-dimensional cases and consider neither that humans have two eyes nor how the human eye perceives three-dimensional motion direction, the proposed mechanism explains how local three-dimensional direction detection neurons extract the three-dimensional motion direction.

5. Conclusions

In this paper, we have presented a three-dimensional artificial visual system that provides a systemic explication of motion direction detection in three-dimensional space. We propose the three-dimensional direction detective neuron, which can be realized by a logical AND operation. With a local receptive field, these neurons extract elementary visual features, which are then analyzed by subsequent layers to detect higher-order features such as the global motion direction. We carried out several computer simulations to demonstrate the validity of the mechanism. Compared with previous studies, this mechanism explains how direction detection neurons analyze the motion of three-dimensional objects. At the same time, our research has some shortcomings and limitations: although its noise resistance is better than that of the CNN, the experimental results could be further improved with dedicated anti-noise processing, and the mechanism cannot yet measure the speed of objects, which is left for future work. The mechanism can serve as a framework for understanding many other basic phenomena, such as the perception of motion direction, moving speed, orientation, and binocular vision. It also gives guidance for visual computation in the visual cortex, explaining how visual input is fragmented and reassembled at different stages of the visual system and how function is divided across different elements of the visual circuit. Finally, the mechanism may help us understand how other sensory systems, such as olfaction, taste, and touch, are encoded at the cortical circuit level.

Author Contributions

Y.T. and Z.T. designed the study. M.H. performed the experiments. Y.T., Z.T. and M.H. analyzed the data and wrote the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by JSPS KAKENHI Grant No. 19K12136.

Informed Consent Statement

Written informed consent has been obtained from the patients to publish this paper.

Data Availability Statement

The codes presented in this study are openly available at https://github.com/MZH1991/3D-Generate.git (accessed on 14 October 2022).

Conflicts of Interest

The authors declare no competing interests.

References

  1. Kuffler, S.W. Discharge patterns and functional organization of mammalian retina. J. Neurophysiol. 1953, 16, 37–68. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Hubel, D.H.; Wiesel, T.N. Receptive fields of single neurones in the cat’s striate cortex. J. Physiol. 1959, 148, 574. [Google Scholar] [CrossRef] [PubMed]
  3. Oyster, C.W.; Barlow, H.B. Direction-selective units in rabbit retina: Distribution of preferred direction. Science 1967, 155, 841–842. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Barlow, H.B.; Hill, R.M. Selective sensitivity to direction of movement in ganglion cells of the rabbit retina. Science 1963, 139, 412–414. [Google Scholar] [CrossRef] [Green Version]
  5. Goodwin, A.W.; Henry, G.H.; Bishop, P.O. Direction selectivity of simple striate cells: Properties and mechanism. J. Neurophysiol. 1975, 38, 1500–1523. [Google Scholar] [CrossRef]
  6. Goodwin, A.W.; Henry, G.H. Direction selectivity of complex cells in a comparison with simple cells. J. Neurophysiol. 1975, 38, 1524–1540. [Google Scholar] [CrossRef]
  7. Winterson, B.J.; Collewijn, H. Inversion of direction-selectivity to anterior fields in neurons of nucleus of the optic tract in rabbits with ocular albinism. Brain Res. 1981, 220, 31–49. [Google Scholar] [CrossRef]
  8. Ganz, L. Visual cortical mechanisms responsible for direction selectivity. Vis. Res. 1984, 24, 3–11. [Google Scholar] [CrossRef]
  9. Grzywacz, N.M.; Koch, C. Functional properties of models for direction selectivity in the retina. Synapse 1987, 1, 417–434. [Google Scholar] [CrossRef]
  10. Shingai, R. A model for the formation of direction-selective cells in developing retina. IEEE Trans. Syst. Man Cybern. 1980, 10, 575–580. [Google Scholar] [CrossRef]
  11. Borst, A.; Egelhaaf, M. Direction selectivity of blowfly motion-sensitive neurons is computed in a two-stage process. Proc. Natl. Acad. Sci. USA 1990, 87, 9363–9367. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Tukker, J.J.; Taylor, W.R.; Smith, R.G. Direction selectivity in a model of the starburst amacrine cell. Vis. Neurosci. 2004, 21, 611–625. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Zhou, Z.J.; Lee, S. Synaptic physiology of direction selectivity in the retina. J. Physiol. 2008, 586, 4371–4376. [Google Scholar] [CrossRef] [PubMed]
  14. Briggman, K.L.; Helmstaedter, M.; Denk, W. Wiring specificity in the direction-selectivity circuit of the retina. Nature 2011, 471, 183–188. [Google Scholar] [CrossRef] [PubMed]
  15. Kim, J.S.; Greene, M.J.; Zlateski, A.; Lee, K.; Richardson, M.; Turaga, S.C.; Seung, H.S. Space–time wiring specificity supports direction selectivity in the retina. Nature 2014, 509, 331–336. [Google Scholar] [CrossRef] [Green Version]
  16. Chen, Q.; Wei, W. Stimulus-dependent engagement of neural mechanisms for reliable motion detection in the mouse retina. J. Neurophysiol. 2018, 120, 1153–1161. [Google Scholar] [CrossRef] [Green Version]
  17. Cyr, A.; Thériault, F.; Ross, M.; Berberian, N.; Chartier, S. Spiking neurons integrating visual stimuli orientation and direction selectivity in a robotic context. Front. Neurorobot. 2018, 12, 37–68. [Google Scholar] [CrossRef]
  18. Euler, T.; Detwiler, P.B.; Denk, W. Directionally selective calcium signals in dendrites of starburst amacrine cells. Nature 2002, 418, 845–852. [Google Scholar] [CrossRef]
  19. Sivyer, B.; Williams, S.R. Direction selectivity is computed by active dendritic integration in retinal ganglion cells. Nat. Neurosci. 2013, 16, 1848–1856. [Google Scholar] [CrossRef]
  20. Greene, M.J.; Kim, J.S.; Seung, H.S.; EyeWirers. Analogous convergence of sustained and transient inputs in parallel on and off pathways for retinal motion computation. Cell Rep. 2016, 14, 1892–1900. [Google Scholar] [CrossRef]
  21. Ding, H.; Smith, R.G.; Poleg-Polsky, A.; Diamond, J.S.; Briggman, K.L. Species-specific wiring for direction selectivity in the mammalian retina. Nature 2016, 535, 105–110. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  22. Mauss, A.S.; Vlasits, A.; Borst, A.; Feller, M. Visual circuits for direction selectivity. Annu. Rev. Neurosci. 2017, 40, 211–230. [Google Scholar] [CrossRef] [PubMed]
  23. Ju, Y.; Shi, B.; Jian, M.; Qi, L.; Dong, J.; Lam, K.-M. NormAttention-PSN: A High-frequency Region Enhanced Photometric Stereo Network with Normalized Attention. Int. J. Comput. Vis. 2022, 130, 3014–3034. [Google Scholar] [CrossRef]
  24. Liu, J.; Ji, P.; Bansal, N.; Cai, C.; Yan, Q.; Huang, X.; Xu, Y. PlaneMVS: 3D Plane Reconstruction from Multi-View Stereo. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022; pp. 8665–8675. [Google Scholar]
  25. Ankri, L.; Ezra-Tsur, E.; Maimon, S.R.; Kaushansky, N.; Rivlin-Etzion, M. Antagonistic center-surround mechanisms for direction selectivity in the retina. Curr. Biol. 2018, 28, 1204–1212. [Google Scholar] [CrossRef]
  26. Morrie, R.D.; Feller, M.B. A dense starburst plexus is critical for generating direction selectivity. Curr. Biol. 2018, 28, 1204–1212. [Google Scholar] [CrossRef] [Green Version]
  27. Chen, Q.; Smith, R.G.; Huang, X.; Wei, W. Preserving inhibition with a disinhibitory microcircuit in the retina. eLife 2020, 9, e62618. [Google Scholar] [CrossRef]
  28. Fukushima, K. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol. Cybern. 1980, 36, 193–202. [Google Scholar] [CrossRef]
  29. He, S.; Masland, R.H. Retinal direction selectivity after targeted laser ablation of starburst amacrine cells. Nature 1997, 389, 378–382. [Google Scholar] [CrossRef]
  30. Taylor, W.R.; He, S.; Levick, W.R.; Vaney, D.I. Dendritic computation of direction selectivity by retinal ganglion cells. Science 2000, 289, 2347–2350. [Google Scholar] [CrossRef]
  31. Han, M.; Todo, Y.; Tang, Z. Mechanism of Motion Direction Detection Based on Barlow’s Retina Inhibitory Scheme in Direction-Selective Ganglion Cells. Electronics 2021, 10, 1663. [Google Scholar] [CrossRef]
  32. Han, M.; Todo, Y.; Tang, Z. A Neuron for Velocity Detection Based on Inhibitory Mechanism in Retina Ganglion. In Proceedings of the 2021 4th International Conference on Artificial Intelligence and Big Data, Chengdu, China, 28–31 May 2021; pp. 459–462. [Google Scholar]
  33. Yan, C.; Todo, Y.; Tang, Z. The Mechanism of Motion Direction Detection Based on Hassenstein-Reichardt Model. In Proceedings of the 2021 6th International Conference on Computational Intelligence and Applications, Xiamen, China, 11–13 June 2021; pp. 180–184. [Google Scholar]
  34. Zhang, X.; Zheng, T.; Todo, Y. The Mechanism of Orientation Detection Based on Artificial Visual System. Electronics 2022, 11, 54. [Google Scholar] [CrossRef]
  35. Tang, C.; Todo, Y.; Ji, J.; Tang, Z. A novel motion direction detection mechanism based on dendritic computation of direction-selective ganglion cells. Knowl.-Based Syst. 2022, 241, 108205. [Google Scholar] [CrossRef]
  36. van Ee, R.; Anderson, B.L. Motion direction, speed and orientation in binocular matching. Nature 2001, 410, 690–694. [Google Scholar] [CrossRef] [Green Version]
  37. Zhou, Z.; Wang, M.; Cao, Y.; Su, Y. CNN feature-based image copy detection with contextual hash embedding. Mathematics 2020, 8, 1172. [Google Scholar] [CrossRef]
  38. Alghazzawi, D.; Bamasag, O.; Albeshri, A.; Sana, I.; Ullah, H.; Asghar, M.Z. Efficient prediction of court judgments using an LSTM+ CNN neural network model with an optimal feature set. Mathematics 2022, 10, 683. [Google Scholar] [CrossRef]
  39. Park, H.; Kim, J.; Lee, W. Development of CNN-Based Data Crawler to Support Learning Block Programming. Mathematics 2022, 10, 2223. [Google Scholar] [CrossRef]
Figure 1. The mechanism of Barlow’s excitatory scheme.
Figure 2. Projection of objects in the retina.
Figure 3. Take a single voxel as the origin of the coordinate system and extend it to a 3 × 3 × 3 voxel space. The 3 × 3 × 3 voxel space is divided into three layers based on the distance from the eye.
Figure 4. The position of the object in space (Left) and the projection of the object in the retina (Right). Twenty-six local motion direction detection neurons are used to detect the region from (1,1) to (3,7) over the two-dimensional retinal receptive field. The red block represents the object before moving and the green block represents the object after moving.
Figure 5. The position of the object in space (Left) and the projection of the object in the retina (Right). Twenty-six local motion direction detection neurons are used to detect the region from (1,1) to (3,7) over the two-dimensional retinal receptive field. The red block represents the object before moving and the green block represents the object after moving. The AND gates and delay channels are displayed in the right half of the figure.
Figure 6. The position of the object in space (Left) and the projection of the object in the retina (Right). Twenty-six local motion direction detection neurons are used to detect the region from (1,1) to (3,7) over the two-dimensional retinal receptive field. The red block represents the object before moving and the green block represents the object after moving. The AND gates and delay channels are displayed in the right half of the figure. The figure shows how the motion information of a two-voxel object is processed in the retina.
Figure 7. Structure of the Artificial Visual System (AVS). In the input module, visual information received by the eyes is projected onto the retinas of the left and right eyes. The information from the left and right retinas is processed and delivered to Layer 1, the local feature detective neuron (LFDN) layer; neurons in Layer 1 are responsible for detecting the motion direction, corresponding to cortical neurons. The information is then transmitted to the second layer, the global feature detective neuron (GFDN) layer, where more complex features (like the global motion direction) are investigated.
Figure 8. Algorithm flowchart.
Figure 9. The neuron activation diagram for the twenty-six directions caused by the motion of the 1-voxel random-dot object in three-dimensional space (left). Each line represents one of the 26 directions, each point on a line represents a detection neuron, and each stimulation on a line indicates that the neuron at that point is activated. The number of activations is represented by a histogram (right), which displays the number of activations of neurons in all directions. The 17th-direction neurons have the highest number of activations, from which it can be inferred that the object moves in the 17th direction.
Figure 10. The neuron activation diagram for the twenty-six directions caused by the motion of the 4-voxel random-dot object in three-dimensional space (left). Each line represents one of the 26 directions, each point on a line represents a detection neuron, and each stimulation on a line indicates that the neuron at that point is activated. The number of activations is represented by a histogram (right), which displays the number of activations of neurons in all directions. The 1st-direction neurons have the highest number of activations, from which it can be inferred that the object moves in the 1st direction.
Figure 11. The neuron activation diagram for the twenty-six directions caused by the motion of the 16-voxel random-dot object in three-dimensional space (left). Each line represents one of the 26 directions, each point on a line represents a detection neuron, and each stimulation on a line indicates that the neuron at that point is activated. The number of activations is represented by a histogram (right), which displays the number of activations of neurons in all directions. The 5th-direction neurons have the highest number of activations, from which it can be inferred that the object moves in the 5th direction.
Figure 12. The Convolutional Neural Network structure of the simulation, which contains two convolutional layers, two pooling layers and a fully connected layer.
Table 1. The twenty-six directions map.

Direction Number: 1, 2, 3, 4, 5, 6, 7
Vector direction: (−1,−1,−1), (−1,−1,0), (−1,−1,1), (−1,0,−1), (−1,0,0), (−1,0,1), (−1,1,−1)
Direction Number: 8, 9, 10, 11, 12, 13, 14
Vector direction: (−1,1,0), (−1,1,1), (0,−1,−1), (0,−1,0), (0,−1,1), (0,0,−1), (0,0,1)
Direction Number: 15, 16, 17, 18, 19, 20, 21
Vector direction: (0,1,−1), (0,1,0), (0,1,1), (1,−1,−1), (1,−1,0), (1,−1,1), (1,0,−1)
Direction Number: 22, 23, 24, 25, 26
Vector direction: (1,0,0), (1,0,1), (1,1,−1), (1,1,0), (1,1,1)
Table 2. Comparison of identification accuracy between the CNN and the proposed mechanism for an 8-voxel object.

Noise Ratio: 0% | 50% | 100% | 200% | 400% | 800% | 1600%
CNN: 100% | 99.3% | 82% | 33.7% | 15.9% | 4.9% | 4.9%
Proposed Mechanism: 100% | 100% | 100% | 100% | 100% | 100% | 99.7%
Table 3. Comparison of identification accuracy between the CNN and the proposed mechanism for a 16-voxel object.

Noise Ratio: 0% | 50% | 100% | 200% | 400% | 800% | 1600%
CNN: 100% | 99.7% | 95.5% | 63.4% | 30.8% | 15% | 9%
Proposed Mechanism: 100% | 100% | 100% | 100% | 100% | 100% | 99.1%
Table 4. Comparison of identification accuracy between the CNN and the proposed mechanism for a 32-voxel object.

Noise Ratio: 0% | 50% | 100% | 200% | 400% | 800% | 1600%
CNN: 100% | 100% | 98.2% | 65.8% | 29.2% | 17.7% | 11.2%
Proposed Mechanism: 100% | 100% | 100% | 100% | 100% | 100% | 99.8%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
