Article

Initial Results of Testing a Multilayer Laser Scanner in a Collision Avoidance System for Light Rail Vehicles

1 Department of Electrical and Electronics Engineering, Kirikkale University, 71450 Kirikkale, Turkey
2 Hacilar Huseyin Aytemiz Vocational High School, Kirikkale University, 71480 Kirikkale, Turkey
3 Department of Mechanical Engineering, Kirikkale University, 71450 Kirikkale, Turkey
4 R&D Department of Durmazlar Machine Inc., 16140 Bursa, Turkey
* Author to whom correspondence should be addressed.
Appl. Sci. 2018, 8(4), 475; https://doi.org/10.3390/app8040475
Submission received: 20 February 2018 / Revised: 12 March 2018 / Accepted: 17 March 2018 / Published: 21 March 2018
(This article belongs to the Special Issue Laser Scanning)

Abstract

This paper presents an application that detects and tracks obstacles using a multilayer laser scanner. The goal of the detection system is to provide collision avoidance for a Light Rail Vehicle (LRV). The laser scanner, mounted at the front of the tram, collects information in four scan planes. An object recognition and tracking module, composed of three sub-modules (segmentation, classification, and Kalman Filter tracking), processes the raw data and supplies its output to the collision avoidance module. The proposed system was applied to a tram named “Silkworm”, manufactured by Durmazlar Machine Inc. (Bursa, Turkey), and initial experimental tests were conducted at the facilities of Durmazlar Machine Inc. in the city of Bursa, Turkey. This study aims to illustrate part of the possible tests that can be carried out and to share with the scientific community an important application of multilayer laser scanners, albeit in its initial implementation phase, in urban rail transportation.

1. Introduction

According to the latest United Nations Population Division projections, the number of people on the planet exceeded 7.3 billion as of mid-2015, and it is expected to reach 9.7 billion by 2050 and 11.2 billion by 2100. In the next 15 years, 60% of the total population will be city dwellers, and this share will increase to 70% by 2050 [1]. These figures show that the population is not only increasing, but is also becoming progressively concentrated in urban areas. In this context, public transportation will remain an indispensable answer to economic, energy, and environmental questions. As a result, local communities are expanding public transportation networks, and people are using public transportation in increasing numbers.
Urban rail is the most rational solution for public transportation because it sits at the core of existing public transportation networks, given its high capacity, safety, and reliability [2]. In order to increase the usage of urban rail, some precautions, especially safety precautions, must be taken. According to the statistics reported by Mu et al. [3], 95% of traffic accidents are related to human factors, and 70% of these are directly related to human errors.
There are various studies on detecting obstacles that compare the advantages and disadvantages of different technologies. These technologies have been studied using different sensors, like laser scanners [4,5,6,7,8], radars [9,10,11,12], and computer vision [13,14,15,16]. Sensor fusion of the aforementioned sensors has also been studied for obtaining additional information, like laser scanner and radar fusion [17], laser scanner and computer vision fusion [18,19,20,21], radar and computer vision fusion [22,23,24], and the fusion of all three sensors [25,26]. Apart from these methods, digital mapping has been studied as a supplementary, sensor-based method for providing extra information to the vehicle environment perception system [27].
Computer vision-based obstacle detection methods are divided into three categories. The first is the knowledge-based approach, which uses prior knowledge of obstacle features, like color [28], symmetry [29], and shadow [30]. The second is the stereo vision-based approach [31], and the last is the motion-based approach [32]. Using computer vision to solve the problem of obstacle detection seems to be a rational solution at first glance. Although computer vision methods have advantages, such as lower sensor cost, higher information capacity, and lower operating power, all three approaches fail in cases of complex shadows, complex illumination, and challenging weather conditions, like heavy rain, snow, and turbid fog. Additionally, these methods have high computational costs. All of these disadvantages bring active sensors, like laser scanners and radars, to the forefront. In many applications, laser scanners and radar sensors have similar performance.
Radar sensors can be separated broadly into two categories depending on the application. The first is 77 GHz long-range radar (LRR) [33], which, like some laser scanner sensors, can measure distances of up to 200 m. The second is 24 GHz short-range radar (SRR) [34], which generally operates at shorter distances than LRR. Although radar-based methods are better in terms of detection range and reliability in adverse weather conditions, they have a relatively low angular resolution of 15° [35]. Thus, it is hard to detect multiple obstacles in crowded road conditions; to overcome this problem, multiple sensors need to be utilized, which makes the system more complicated.
Laser scanners are widely used for many applications, from bar-code reading to vibration assessment [36,37]. Unlike video cameras, laser scanners offer accurate distance measurements of up to 200 m. Moreover, because these devices send signals from their own signal sources, they are not easily interfered with by other light sources.
Using a laser scanner to detect obstacles presents some difficulties, especially in varying weather conditions. Under fog, rain, and heavy snow, particles can cause signal attenuation or false echoes. These kinds of problems have been mitigated by the latest generation of laser scanners, which use multiple scan planes [7].
In order to reduce human-induced road accidents, this study presents a collision avoidance system (CAS) based on a state-of-the-art multilayer laser scanner mounted at the front of a tram. The system is designed to detect all objects (static and moving) within a 110-degree field in front of the tram and aims to work in all heavy-traffic scenarios. To our knowledge, this is the first time a laser-scanner-based system has been used in real-time tests on trams.
This system is integrated into a Light Rail Vehicle (LRV) called “Silkworm”, which is manufactured by Durmazlar Machine Inc. (Bursa, Turkey), and was tested at the facilities of Durmazlar Machine Inc.

2. System Overview

Every second of traveling around the city requires complete concentration from tram drivers, as it does for drivers of all vehicles. Traffic conditions might lead to undesirable results, like collisions with other trams, cars, trucks, and pedestrians, at any time, especially during rush hours. In this study, a CAS combining a laser scanner sensor and a rail control unit was developed and adapted to trams, as shown in Figure 1. The system not only warns the tram driver of an imminent danger of a crash, but also activates the brakes and disengages the throttle to prevent a collision. The CAS is engineered to work in all terrain and congestion scenarios and is a first step towards automated trams. The system architecture and the laser scanner sensor of the proposed CAS are detailed as follows.

2.1. System Architecture

The system architecture of the CAS is divided into three parts, as seen in Figure 2. The laser scanner sensor provides raw data from the observed scene. Then, an object recognition algorithm processes the raw data. The object recognition algorithm consists of three processes, called segmentation, classification, and tracking. Although the object recognition algorithm used in this phase can be sensitive to complicated road surroundings or measurement errors, which reduce its accuracy, this complexity affects the algorithm less than it would for other road vehicles because trams travel on fixed rails. The feedback loop between the classification and tracking processes is used to improve the performance of object recognition: in each scan of the laser scanner, the positions of the objects in the scanned area are predicted and the classification parameters are updated.
After the object recognition algorithm, risk assessment is carried out. The system determines the distance and calculates the likelihood of collision based on variables such as speed. It can detect objects within a 110-degree field in front of the tram. If the CAS detects that an object is approaching dangerously close, it activates a visual and acoustic warning for the driver. Should the tram driver not react to the warning signals within two seconds, the automated system slows the tram to a complete stop. According to Lee et al., the driver reaction time is approximately 1.195 s [38]; if the driver does not respond within two seconds, it is assumed that the driver has not noticed the possible danger.

2.2. Sick Ladar Digital Multilayer Range Scanner (LD-MRS) Laser Scanner

In this project, the Sick Ladar Digital Multilayer Range Scanner (LD-MRS) laser scanner is utilized to detect obstacles on the tramline. The laser scanner is mounted near the center of the frontal area of the test vehicle, named “Silkworm”, as seen in Figure 1. The sensor can detect all objects in front of the vehicle up to 50 m with an accuracy of ±5 cm. The LD-MRS laser scanner has a variable scan area; in this study, the horizontal field of view is limited to 110°, from +50° to −60°. For obtaining simultaneous records of a scene, a video camera is mounted near the laser scanner sensor.
The LD-MRS is a four-layer laser scanner that uses Time-of-Flight (ToF) technology. It scans the surroundings radially with several rotating laser beams, receives the echoes with a photodiode receiver, and processes the data by means of a time of flight calculation. If the round trip time is t, then the distance to the object is calculated as seen below:
\[ d = \frac{c\,t}{2}, \tag{1} \]
where d is the distance to the object and c is the speed of light.
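As a minimal numeric sketch of this round-trip calculation (the function name is illustrative, not part of the paper):

```python
# Time-of-flight ranging: the echo travels to the target and back,
# so the one-way distance is half of c * t (Equation (1)).
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the object from the measured round-trip time."""
    return C * round_trip_time_s / 2.0
```

For example, an echo returning after roughly 333 ns corresponds to a target about 50 m away, the detection limit quoted above for the LD-MRS.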
Multilayer laser scanner technology provides pitch angle compensation through four scan planes with different vertical angles. Thus, the sensor can detect all objects reliably, even while the vehicle is accelerating or braking. Moreover, the LD-MRS has multi-echo capability, so it can gather and evaluate up to three echoes per transmitted laser pulse. While the echo of a raindrop yields a very low voltage over a short period of time, the echo of a solid object yields a high voltage over a longer period. Therefore, a raindrop and an object can be distinguished easily.

3. Methods

According to the European Enhanced Vehicle Safety Committee, 2004 (EEVC WG 19) [39], obstacle detection algorithms should be capable of the following:
  • Locating obstacles
  • Distinguishing obstacles (vehicles, pedestrians and others)
  • Tracking obstacles
  • Detecting potential hazards in order to eliminate false alarms
In designing a robust CAS, the accuracies of obstacle detection and tracking are the most critical factors, because an error in these processes causes the whole system to fail. This study uses the algorithm given in Figure 2 to detect, classify, track, and assess the possible collision of obstacles on railways. The algorithm is used to detect all static or moving obstacles located in front of the tram. The algorithm's processes are detailed as follows.

3.1. Segmentation

The segmentation phase is the first and fundamental step of the object detection algorithm. The raw measurements provided by the laser scanner are separated into clusters that are supposed to belong to the same object. Among several segmentation processes, the Point Distance-Based method (PDBS) was selected for segmentation. Basically, if the distance between two consecutive scan points, P_{n-1} and P_{n-2}, is not greater than a pre-defined threshold distance, the points belong to the same object.
\[ \text{If } D(r_{n-1}, r_{n-2}) > D_{thd}, \tag{2} \]
where \( D_{thd} \) is the threshold and \( D(r_{n-1}, r_{n-2}) \) is the Euclidean distance between two consecutive detected points, as shown in Figure 3.
The performance of the segmentation phase is closely related to the selection of a threshold. There are several threshold methods. In [40], the threshold is determined by:
\[ D_{thd} = C_0 + \sqrt{2\left(1-\cos\Delta\varphi\right)}\;\min\{r_n, r_{n-1}\}, \tag{3} \]
where \( C_0 \) denotes a constant accounting for the sensor noise and \( \Delta\varphi \) denotes the angular resolution of the laser scanner.
Lee used another method shown below [41]:
\[ D_{thd} = \left| \frac{r_{n-1} - r_{n-2}}{r_{n-1} + r_{n-2}} \right|, \tag{4} \]
\[ D(r_{n-1}, r_{n-2}) = \sqrt{r_{n-1}^{2} + r_{n-2}^{2} - 2\,r_{n-1} r_{n-2} \cos\Delta\varphi}, \tag{5} \]
This study uses the Adaptive Breakpoint Detector (ABD) proposed in [42] for the threshold condition:
\[ D_{thd} = r_n \frac{\sin\Delta\varphi}{\sin(\lambda - \Delta\varphi)} + \sigma_r, \tag{6} \]
where \( \lambda \) is an auxiliary parameter chosen based on user experience, and \( \sigma_r \) is a residual variance including the stochastic behavior of the sequence of detected points and the noise associated with \( r_n \).
The outputs before and after the segmentation process can be seen in Figure 4.
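As an illustration, the ABD criterion can be sketched in a few lines of Python. The values of λ and σ_r below are placeholders chosen for the example, not the ones used in the study; a real scan would supply the range/angle arrays.

```python
import math

def abd_segment(ranges, angles, lam=math.radians(10.0), sigma_r=0.05):
    """Cluster consecutive polar scan points with the Adaptive Breakpoint
    Detector: a new segment starts whenever the Euclidean gap between two
    consecutive points exceeds the range-dependent threshold D_thd."""
    segments, current = [], [0]
    for n in range(1, len(ranges)):
        d_phi = angles[n] - angles[n - 1]
        # Euclidean distance between consecutive points (law of cosines)
        dist = math.sqrt(ranges[n - 1] ** 2 + ranges[n] ** 2
                         - 2.0 * ranges[n - 1] * ranges[n] * math.cos(d_phi))
        # Adaptive threshold; lam and sigma_r are tuned by hand
        d_thd = ranges[n] * math.sin(d_phi) / math.sin(lam - d_phi) + sigma_r
        if dist > d_thd:
            segments.append(current)
            current = []
        current.append(n)
    segments.append(current)
    return segments
```

Because the threshold grows with the measured range, distant objects tolerate larger gaps between their points than nearby ones, which is the main advantage of ABD over a fixed threshold.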

3.2. Object Classification

After identifying which points belong to the same obstacle, object classification is performed to distinguish typical road objects, like cars, trucks, bicycles, and pedestrians. Normally, a dependable object classification would be expected in every scan of the laser scanner, but this does not happen because of the sensor's characteristics: the perceived shape of the same object can vary depending on its position. One way to overcome this problem is to take into consideration as many features as possible from previous object detections, especially dynamics and dimensions.
It is not possible to achieve a highly confident classification in the first scan in which an object appears. In this study, the Majority Voting Scheme (MVS) classification method is used. The MVS method is based on a voting scheme that considers every hypothesis over time until a high classification confidence is reached [43]. In this method, classification improves over time; therefore, a better classification can be obtained after several frames.
Each feature that characterizes the object represents a voting actor, where the weight of the vote depends on the influence of the related feature in characterizing the object and on the value v of that feature, as given by Figure 5 and Expression (7).
The confidence level is reached by adding the votes of all the actors (7). When one of the hypotheses achieves an acceptable value, the object is classified as belonging to the type of that hypothesis.
\[ V(v) = \begin{cases} V_0, & v \le L_0 \\ m_1 v + b_1, & L_0 < v < L_1 \\ V_1, & L_1 \le v \le L_2 \\ m_2 v + b_2, & L_2 < v < L_3 \\ V_2, & v \ge L_3 \end{cases} \tag{7} \]
MVS does not have a self-consistent mathematical framework to support its stability and consistency. Nevertheless, the results, which can be seen in [41], show its feasibility.
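A minimal sketch of the voting idea follows, assuming a dictionary of per-feature weights. The feature names, weights, and confidence threshold are illustrative, not taken from the paper.

```python
# Majority Voting Scheme sketch: each feature casts a weighted vote for
# every class hypothesis in every scan; votes accumulate over frames
# until one hypothesis passes a confidence threshold.

def mvs_update(scores, feature_votes, weights):
    """Accumulate one scan's weighted feature votes into running scores."""
    for feature, votes in feature_votes.items():
        for hypothesis, v in votes.items():
            scores[hypothesis] = scores.get(hypothesis, 0.0) + weights[feature] * v
    return scores

def mvs_classify(scores, threshold=5.0):
    """Return the winning hypothesis once its score passes the threshold."""
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

Because the scores persist between scans, an object that is ambiguous in its first frames is classified a few frames later, which matches the delayed pedestrian classification reported in the field tests.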

3.3. Kalman Filter Tracking

Object tracking is a very important phase of a CAS. There are many object tracking methods in the literature [4,18]. One of the most common methods is Kalman Filter tracking [6,7,41]. The Kalman Filter provides a recursive solution to the optimal filtering problem, and it can be applied to stationary as well as nonstationary environments.
In this study, the positions of detected segments are tracked over time using the Kalman Filter. The tracking algorithm estimates the position and velocity of each object from the current observation and the predicted state.
Kalman Filter consists of two main stages. In the first stage, the state of the process is predicted. In the second stage, predictions are corrected or updated by using measurements. Prediction and update equations of discrete-time Kalman Filter are as below:
1. Predict: For each time step t, the predicted (a priori) state \( \hat{x}_t^{-} \) is given by:
\[ \hat{x}_t^{-} = F_t \hat{x}_{t-1} + B_t u_t, \tag{8} \]
\[ \hat{x}_t = \begin{bmatrix} x & y & V_x & V_y \end{bmatrix}^{T}, \tag{9} \]
where x and y represent the coordinates of the object, and \( V_x \) and \( V_y \) represent the object's velocity. To apply the Kalman Filter, the objects are assumed to move at a constant velocity between scans.
The transition matrix F is:
\[ F = \begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \tag{10} \]
The matrix B, which converts the control input \( u_t \) into the state space, is:
\[ B = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}^{T}, \tag{11} \]
To conclude the Kalman Filter prediction stage, the error covariance matrix \( P_t^{-} \) is projected forward one time step:
\[ P_t^{-} = F_t P_{t-1} F_t^{T} + Q_t, \tag{12} \]
where P is the matrix representing error covariance in the state prediction and Q is the process noise covariance.
2. Update: After predicting the state \( \hat{x}_t^{-} \), the Kalman Filter computes the Kalman Gain that is used to correct the state estimate. The Kalman Gain is calculated as below:
\[ K_t = P_t^{-} H_t^{T} \left( H_t P_t^{-} H_t^{T} + R_t \right)^{-1}, \tag{13} \]
where R is the measurement noise covariance:
\[ R = \begin{bmatrix} \sigma_m^{2} & 0 \\ 0 & \sigma_m^{2} \end{bmatrix}, \tag{14} \]
After predicting the state \( \hat{x}_t^{-} \) and its error covariance at time t using the time update steps, the Kalman Filter updates the state estimate using the Kalman Gain and the measurements derived from the laser scanner:
\[ \hat{x}_t = \hat{x}_t^{-} + K_t \left( y_t - H_t \hat{x}_t^{-} \right), \tag{15} \]
where y is the measurement derived from the laser scanner. It contains two dimensions (the x and y coordinates) and, for n tracked objects, has the form:
\[ y = \begin{bmatrix} x_0, y_0 \\ x_1, y_1 \\ \vdots \\ x_{n-1}, y_{n-1} \end{bmatrix}, \tag{16} \]
As a result, the observation matrix H, which converts the state space into the measurement space, has the form:
\[ H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}, \tag{17} \]
Finally, the Kalman Filter updates the error covariance \( P_t^{-} \) into \( P_t \):
\[ P_t = \left( I - K_t H_t \right) P_t^{-}, \tag{18} \]
Although the Kalman Filter is a powerful tool for tracking, the measurement noise covariance matrix R, which enters the computation of the Kalman Gain, is normally difficult to determine. Thanks to the laser scanner's highly accurate measurements, however, a small noise covariance is sufficient for the position measurements.
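The prediction and update equations above can be sketched with NumPy as follows. The control term \( B u_t \) is omitted (u = 0), and the time step and noise magnitudes are illustrative assumptions, not the values used in the study.

```python
import numpy as np

# Constant-velocity Kalman tracker for one laser-scanner segment:
# state x = [x, y, Vx, Vy], position-only measurements z = [x, y].

class SegmentTracker:
    def __init__(self, x0, dt=0.08, sigma_m=0.05, q=0.1):
        self.x = np.asarray(x0, dtype=float)  # state estimate
        self.P = np.eye(4)                    # state covariance
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = q * np.eye(4)                # process noise covariance
        self.R = sigma_m ** 2 * np.eye(2)     # measurement noise covariance

    def predict(self):
        # Project the state and covariance forward one time step
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        # Kalman gain, then correct the prediction with the measurement
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, dtype=float) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x
```

Fed with successive segment centroids, the tracker converges to the object's velocity within a few scans, which is what the risk assessment module needs for its time-to-collision estimate.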

3.4. Risk Assessment System

After the locations of moving and static objects are obtained by the object recognition and tracking module, the risk assessment module estimates the time-to-collision for each object. In this estimation, the velocity of each object is considered constant, and the shortest distance from the closest point of the object to the test vehicle is measured by the laser scanning sensor. Time-to-collision and the predicted collision location are then estimated from the relative velocities and this shortest distance. However, since objects can change their motion, these estimates cannot be exact, and the distribution of likely future states of the objects must be taken into consideration. Since pedestrians, unlike vehicles, are capable of quick maneuvers, the risk assessment module allows extra time tolerance after the acoustic warning for pedestrians compared to vehicles.
In this study, the screen display obtained from the laser scanner is divided into two zones when considering the tram’s maximum velocity, dimensions, and route. The laser scanner scans the area with an angle of 110°. As seen in Figure 6, Zone 1 is the “Danger Zone”, whereas Zone 2 is the “Safe Zone”. Collision probability depends on not only the zone but also the type and velocity of the object. Three different collision scenarios are evaluated in the designed system. These scenarios are as follows:
1. No Collision Probability:
The scenario in which there is no probability of a collision when considering the heading, velocity, position, and distance of the objects that are detected on the tram line or in the vicinity of the line. In this case, shown in Figure 7, there is no need for warning signals or brake actuation.
2. Collision Probability:
This scenario is divided into two cases. In the first case, the detected object is in the Safe Zone but getting closer to the Danger Zone depending on its velocity and heading. In the second case, although the object is in the Danger Zone, it is moving in the same direction as the tram and slower than the tram. In this case, an exclamation mark appears on the screen and an alarm signal is played (Figure 8).
3. Collision Highly Probable:
The scenario in which the probability of collision is very high considering the position, velocity, heading, and distance of the detected object. In this case, a stop sign appears on the screen, an alarm signal plays and automatic brakes are activated if the tram driver does not respond in two seconds (Figure 9).
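Under the constant-velocity assumption described above, the three-level decision can be sketched as follows. Only the two-second braking window comes from the text; the 10 s warning horizon and the function names are illustrative assumptions.

```python
# Simplified risk assessment: time-to-collision under constant relative
# velocity, mapped to the three scenarios (no action / warn / brake).

def time_to_collision(distance_m, closing_speed_ms):
    """Time to collision in seconds; inf when the object is not closing."""
    if closing_speed_ms <= 0.0:
        return float("inf")
    return distance_m / closing_speed_ms

def risk_level(distance_m, closing_speed_ms, in_danger_zone):
    ttc = time_to_collision(distance_m, closing_speed_ms)
    if ttc <= 2.0:                     # collision highly probable: auto-brake
        return "warn_and_brake"
    if in_danger_zone or ttc < 10.0:   # collision probable: visual/acoustic warning
        return "warn"
    return "no_action"                 # no collision probability
```

A real implementation would also use the object class (pedestrian vs. vehicle) and heading, since the text grants pedestrians extra time tolerance after the acoustic warning.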

4. Field Tests

Field tests were implemented at the facilities of Durmazlar Machine Inc. in the city of Bursa, Turkey.
When the dynamics of objects are taken into account, there are two different types of objects that can be encountered in a traffic scene: pedestrians and vehicles. The tests were carried out with these two object types in various configurations, as seen in Figure 10. The first case is an LRV on the same line, the second is a pedestrian on the route of the vehicle, and the third is multiple pedestrians together with another LRV. All of the tests were carried out while the test vehicle was moving at a velocity of 20 km/h.
Some specific test scenarios were staged, such as a pedestrian running towards the test vehicle and pedestrians crossing dangerously. All of the test scenarios can be seen in Table 1.
In the first scenario, while the test vehicle is moving with a velocity of 20 km/h, there is another static LRV on the same line (Figure 11a). In terms of its position, the static vehicle is not in the danger zone, and because it is not moving, it does not pose any danger to the test vehicle. On the other hand, since the test vehicle is moving at 20 km/h, visual signals will appear as the static LRV enters the danger zone. In the second scenario, which can be seen in Figure 11b, another LRV on the same line approaches the test vehicle with a velocity of 10 km/h while the test vehicle proceeds at 20 km/h. The approaching LRV enters the danger zone and its heading is towards the test vehicle. For the moment, there is no definite probability of collision, so only a visual signal appears on the screen for the tram driver; there is no need for an acoustic warning or for the brakes to activate. The third scenario, which can be seen in Figure 11c, is the next step of the previous one: the LRV approaching at 10 km/h is now very close to the test vehicle. A collision is not yet possible within two seconds, but it can happen after this two-second period, so there is still no need for automatic brakes. In the next scenario, another LRV approaching the test vehicle with a velocity of 20 km/h enters the danger zone (Figure 11d). Because of its position, velocity, and heading, there is a possibility of collision within two seconds. A visual signal appears on the screen and an acoustic warning is activated. The automatic brakes activate at this point because the tram driver does not step on the brakes and the probability of collision is very high. The scenario in Figure 11e shows an incident that can frequently occur during the routine travels of the tram: an LRV with the same heading and velocity as the test vehicle.
Although the other LRV is in the danger zone, its velocity and its heading are the same as the test vehicle. In such a situation, a visual signal appears on the screen to warn the tram driver, but there is no probability of a collision in a short period of time. Thus, the acoustic warning and automatic brakes are not activated.
In the Figure 12a scenario, a pedestrian is running with a velocity of 5 km/h towards the test vehicle. The pedestrian is in the danger zone, but the probability of collision is not very high. A visual signal appears on the screen for the tram driver, but the acoustic warning is not activated. In the next scenario, which can be seen in Figure 12b, a pedestrian is running with a velocity of 11 km/h towards the test vehicle and is closer to it. A visual signal for the tram driver and an acoustic warning for the pedestrian are activated; at this point, it is not necessary for the automatic brakes to activate. The pedestrian in the Figure 12c scenario is running towards the test vehicle with a velocity of 14 km/h. Although the acoustic warning was activated, the pedestrian's heading has not changed and is still towards the test vehicle. Taking into account the velocity and heading of the pedestrian, a collision is highly probable within a two-second period, so the automatic brakes are activated. In the Figure 12d scenario, the pedestrian, running with a velocity of 13 km/h, is in the danger zone, and a collision is highly probable. Although the pedestrian has changed his heading, a probability of collision still exists when the dimensions of the tram are taken into account. Because of this, the automatic brakes are activated.
In the scenario shown in Figure 13, there are two pedestrians crossing the tram line and a static LRV. In terms of its position, the static LRV does not pose a danger to the test vehicle. The pedestrians are in the danger zone, but the probability of collision is not very high. A visual signal and an acoustic signal are activated.
In the first five test scenarios (Figure 11) and in the 7th, 8th, and 9th scenarios (Figure 12b–d), the object detection and classification results were good and the CAS worked well. In the sixth test scenario (Figure 12a), a pedestrian was detected by the algorithm but was misclassified, because the pedestrian suddenly started to move and kept moving slowly. Since the classification algorithm is closely tied to time, the object was classified as a pedestrian after a delay of a few seconds. In scenario 10, the system first classified the pedestrians and the static LRV correctly; when the two pedestrians got closer to each other, the classification algorithm could not distinguish them. Despite the misclassification in these two situations, the CAS issued the appropriate alarm because the pedestrians were still detected as objects.

5. Conclusions and Future Work

This study presents a CAS algorithm that enables an LRV to avoid collisions using only a multilayer laser scanning sensor mounted on a test vehicle called “Silkworm”. The proposed system has three modules. The first is a sensor module based on state-of-the-art laser scanning technology. The second, called the “recognition and tracking module”, has three submodules to identify and track the obstacles in the scanned area. After the obstacles are identified and tracked, the third module calculates the collision probability. The whole system was applied to the Silkworm, which was manufactured by Durmazlar Machine Inc., Bursa, Turkey, and initial tests were carried out at the Durmazlar Machine Inc. test facility. In normal traffic conditions, the objects that the system will mostly encounter are pedestrians and other vehicles, so the test objects were restricted to pedestrians and other LRVs. Three primary test cases were implemented: an LRV on the same line, a pedestrian on the route of the test vehicle, and both an LRV and multiple pedestrians on the route of the test vehicle.
It has been observed that the designed recognition and tracking module is robust against rapidly moving and quickly maneuvering objects and is able to follow a pedestrian running at over 6 m/s. The collision avoidance rate of the system is high due to its three-phase warning system: a visual warning for the tram driver, an acoustic warning for pedestrians, and finally automatic braking. Throughout the ten scenarios tested (Table 1), the laser scanning sensor successfully detected all of the objects. The classification algorithm failed in two situations:
  • The pedestrian suddenly starts to move and keeps moving slowly (Table 1 Scenario 6)
  • Two pedestrians get close to each other (Table 1 Scenario 10)
The qualification process has not yet been completed, and driving tests in the city have not begun; therefore, the designed system could not be tested under various weather conditions with various scenarios. Our future work will consist of implementing adverse weather conditions and varying scenarios during the qualification phase. Furthermore, to improve the efficiency of the detection and classification algorithms, the next research step will include developing a new method of laser/camera or laser/radar fusion. Nevertheless, a system using only a laser scanning sensor was used here for the first time in real-time tests on trams.
CASs that warn of collisions and automatically brake in an emergency are increasingly spreading in rail transportation systems, and they provide the basis for automated trams. All kinds of vehicles are expected to become smarter in the near future, and such a system is a first step towards unmanned railway vehicles.

Author Contributions

Murat Lüy, Ertuğrul Çam and İbrahim Uzun conceived and led the research. Salih İbrahim Akın performed and analyzed the experiments. Faruk Ulamis and Murat Lüy wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest with Durmazlar Machine Inc.

References

  1. United Nations, Department of Economic and Social Affairs, Population Division. World Population Prospects: The 2015 Revision; United Nations: New York, NY, USA, 2015; Available online: http://esa.un.org/unpd/wpp/Publications/Files/Key_Findings_WPP_2015.pdf (accessed on 1 January 2018).
  2. Vuchic, V.R. Urban passenger transport modes. In Urban transit systems and technology; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2007; pp. 45–91. [Google Scholar]
  3. Mu, G.Y.; Chen, M.W. Study on status quo and countermeasure of road traffic safety based on accident stat. Commun. Standardization 2005, 2, 37–39. [Google Scholar]
  4. Gidel, S.; Checchin, P.; Blanc, C.; Chateau, T.; Trassoudaine, L. Pedestrian detection and tracking in an urban environment using a multilayer laser scanner. IEEE Trans. Intell. Transp. Syst. 2010, 11, 579–588. [Google Scholar] [CrossRef]
  5. Jiménez, F.; Naranjo, J.E. Improving the obstacle detection and identification algorithms of a laserscanner-based collision avoidance system. Transp. Res. Part C Emerg. Technol. 2010, 19, 658–672. [Google Scholar] [CrossRef]
  6. Fuerstenberg, K.C.; Linzmeier, D.T.; Dietmayer, K.C.J. Pedestrian recognition and tracking of vehicles using a vehicle based multilayer laserscanner. In Proceedings of the 10th World Congress on Intelligent Transport Systems, Madrid, Spain, 16–20 November 2003; pp. 1–12. [Google Scholar]
  7. Fuerstenberg, K.C.; Dietmayer, K.C.J.; Eisenlauer, S.; Willhoeft, V. Multilayer laserscanner for robust object tracking and classification in urban traffic scenes. In Proceedings of the ITS, Chicago, IL, USA, 14–17 October 2002; pp. 7–16. [Google Scholar]
  8. Guivant, J.; Nebot, E.; Baiker, S. Autonomous navigation and map building using laser range sensors in outdoor applications. J. Robot. Syst. 2000, 17, 565–583. [Google Scholar] [CrossRef]
  9. Polychronopoulos, A.; Tsogas, M.; Amditis, J.A.; Andreone, L. Sensor fusion for predicting vehicles’ path for collision avoidance systems. IEEE Trans. Intell. Transp. Syst. 2007, 8, 549–562. [Google Scholar] [CrossRef]
  10. Le Beux, S.; Gagné, V.; Aboulhamid, E.M.; Marquet, P.; Dekeyser, J.L. Hardware/software exploration for an anti-collision radar system. In Proceedings of the IEEE 49th International Midwest Symposium on Circuits and Systems, San Juan, PR, USA, 6–9 August 2006; pp. 385–389. [Google Scholar]
  11. Pierowicz, J.; Jocoy, P.; Lloyd, M.; Bittner, A.; Pirson, B. Intersection Collision Avoidance Using ITS Counter Measures Task 9: Final Report; Technical Report NHTSA Rev 1; National Highway Traffic Safety Administration: Washington, DC, USA, 2000.
  12. Abou-Jaoude, R. ACC radar sensor technology, test requirement and test solution. IEEE Trans. Intell. Transp. Syst. 2004, 4, 115–122. [Google Scholar] [CrossRef]
  13. Caraffi, C.; Cattani, S.; Grisleri, P. Off-road path and obstacle detection using decision networks and stereo vision. IEEE Trans. Intell. Transp. Syst. 2007, 8, 607–618. [Google Scholar] [CrossRef]
  14. Broggi, A.; Caraffi, C.; Fedriga, R.I.; Grisleri, P. Obstacle detection with stereo vision for off-road vehicle navigation. In Proceedings of the International IEEE Workshop Machine Vision for Intelligent Vehicle, San Diego, CA, USA, 21–23 September 2005; p. 65. [Google Scholar]
  15. Elzein, H.; Lakshmanan, S.; Watta, P. A motion and shape-based pedestrian detection algorithm. Proc. IEEE Intell. Veh. Symp. 2003, 22, 500–504. [Google Scholar]
  16. Ball, D.; Upcroft, B.; Wyeth, G.; Corke, P. Vision based Obstacle Detection and Navigation for an Agricultural Robot. J. Field Robot. 2015, 33, 1107–1130. [Google Scholar] [CrossRef]
  17. Gavrila, D.M.; Kunert, M.; Lages, U. A multi-sensor approach for the protection of vulnerable traffic participants—The PROTECTOR project. In Proceedings of the IEEE Instrumentation and Measurement Technology Conference, Budapest, Hungary, 21–23 May 2001; Volume 3, pp. 2044–2048. [Google Scholar]
  18. Garcia, F.; Ponz, A.; Martin, D.; De la Escalera, A.; Armingol, J.M. Computer vision and laser scanner road environment perception. In Proceedings of the IEEE Systems, Signals and Image Processing (iwssip) Processing, Dubrovnik, Croatia, 12–15 May 2014; pp. 63–66. [Google Scholar]
  19. Kim, S.; Kim, H.; Yoo, W.; Huh, K. Sensor Fusion Algorithm Design in Detecting Vehicles Using Laser Scanner and Stereo Vision. IEEE Trans. Intell. Transp. Syst. 2016, 17, 1072–1084. [Google Scholar] [CrossRef]
  20. García, F.; García, J.; Ponz, A.; De la Escalera, A.; Armingol, J.M. Context aided pedestrian detection for danger estimation based on laser scanner and computer vision. Expert Syst. Appl. 2014, 41, 6646–6661. [Google Scholar] [CrossRef] [Green Version]
  21. Fuerstenberg, K.C.; Roessler, B. Results of the EC project INTERSAFE. In Proceedings of the 12th International Conference on Advanced Microsystems for Automotive Applications, Berlin, Germany, 11–12 March 2008; pp. 91–102. [Google Scholar]
  22. Chen, X.; Ren, W.; Liu, M.; Jin, L.; Bai, Y. An Obstacle Detection System for a Mobile Robot Based on Radar-Vision Fusion. In Proceedings of the 4th International Conference Computer Engineering and Networks, Harbin, China, 19–20 December 2015; Volume 355, pp. 677–685. [Google Scholar]
  23. Hermann, D.; Galeazzi, R.; Andersen, J.C.; Blanke, M. Smart Sensor Based Obstacle Detection for High-Speed Unmanned Surface Vehicle. In Proceeding of the 10th IFAC Conference on Manoeuvring and Control of Marine Craft (MCMC), Copenhagen, Denmark, 24–26 August 2015; Volume 48, pp. 190–197. [Google Scholar]
  24. Tokoro, S.; Moriizumi, M.; Kawasaki, T.; Nagao, T.; Abe, K.; Fujita, K. Sensor fusion system for pre-crash safety system. In Proceedings of the IEEE Intelligent Vehicle Symposium, Parma, Italy, 14–17 June 2004; pp. 945–950. [Google Scholar]
  25. Floudas, N.; Polychronopoulos, A.; Aycard, O.; Burlet, J.; Ahrholdt, M. High level sensor data fusion approaches for object recognition in road environment. In Proceedings of the 2007 IEEE Intelligent Vehicle Symposium, Istanbul, Turkey, 13–15 June 2007; pp. 136–141. [Google Scholar]
  26. Fuerstenberg, K.C.; Baraud, P.; Caporaletti, G.; Citelli, S.; Eitan, Z.; Lages, U.; Lavergne, C. Development of a pre-crash sensorial system—The CHAMELEON project. VDI BERICHTE 2001, 1653, 289–310. [Google Scholar]
  27. Wender, S.; Weiss, T.; Fuerstenberg, K.C.; Dietmayer, K. Object classification exploiting high level maps of intersections. In Advanced Microsystems for Automotive Applications; Springer: Berlin, Germany, 2006; pp. 187–203. [Google Scholar]
  28. Huang, M.C.; Yen, S.H. A real-time and color-based computer vision for traffic monitoring system. In Proceedings of the IEEE International Confrence on Multimedia and Expo (ICME), Taipei, Taiwan, 27–30 June 2004; Volume 3, pp. 2119–2122. [Google Scholar]
  29. Zielke, T.; Brauckmann, M.; Vonseelen, W. Intensity and edge-based symmetry detection with an application to car-following. CVGIP Image Underst. 1993, 58, 177–190. [Google Scholar] [CrossRef]
  30. Tzomakas, C.; Von Seelen, W. Vehicle Detection in Traffic Scenes Using Shadows; Institute Neuro Informatik, Ruht University Bochum: Bochum, Germany, 1998. [Google Scholar]
  31. Bertozzi, M.; Broggi, A.; Fascioli, A.; Nichele, S. Stereo Vision-based Vehicle Detection. In Proceedings of the IEEE Intelligent Vehicle Symposium, Dearbon, MI, USA, 3–5 October 2000; pp. 39–44. [Google Scholar]
  32. Lefaix, G.; Marchand, T.; Bouthemy, P. Motion-based obstacle detection and tracking for car driving assistance. In Proceedings of the 16th International Conference on IEEE Pattern Recognition, Quebec City, QC, Canada, 11–15 August 2002; Volume 4, pp. 74–77. [Google Scholar]
  33. Rasshofer, R.H.; Naab, K. 77 GHz long range radar systems status, ongoing developments and future challenges. In Proceedings of the IEEE Radar Conference, Paris, France, 3–4 October 2005; pp. 161–164. [Google Scholar]
  34. Klotz, M.; Rohling, H. A 24 GHz short range radar network for automotive applications. In Proceedings of the IEEE Radar CIE International Conference on Radar, Beijing, China, 15–18 October 2001; pp. 115–119. [Google Scholar]
  35. Scharenbroch, G. Safety Vehicles Using Adaptive Interface Technology (SAVE-IT) (Task 10): Technology Review. Delphi Electronics and Safety Systems, Tech Report 2005. Available online: http://www.volpe.dot.gov/opsad/saveit/docs/dec04/finalrep_10.pdf (accessed on 10 January 2016).
  36. Swartz, J.; Harrison, S.A.; Barkan, E.; Delfine, F.; Brown, G. Portable Laser Scanning Arrangement for and Method of Evaluating and Validating Bar Code Symbols. U.S. Patent No. 4,251,798, 17 February 1981. [Google Scholar]
  37. Heinemann, T.; Becker, S. Axial Fan Blade Vibration Assessment under Inlet Cross-Flow Conditions Using Laser Scanning Vibrometry. Appl. Sci. 2017, 7, 862. [Google Scholar] [CrossRef]
  38. Lee, J.D.; McGehee, D.V.; Brown, T.L.; Reyes, M.L. Collision warning timing, driver distraction, and driver response to imminent rear-end collisions in a high-fidelity driving simulator. Hum. Factors 2002, 44, 314–334. [Google Scholar] [CrossRef] [PubMed]
  39. European Enhanced Vehicle-safety Committee. EEVC Working Group 19 Report—Improved Test Methods to Evaluate Pedestrian Protection Afforded by Passenger Cars; TN0 Crash-Safety Research Centre: Delft, The Netherlands, 1998; pp. 6–8. [Google Scholar]
  40. Dietmayer, K.; Sparbert, J.; Streller, D. Model based object classification and object tracking in traffic scenes from range images. In Proceedings of the IV IEEE Intelligent Vehicles Symposium, Tokyo, Japan, 13–16 May 2001. [Google Scholar]
  41. Lee, K.J. Reactive Navigation for an Outdoor Autonomous Vehicle. Master’s Thesis, Department of Mechanical and Mechatronics Engineering, University of Sydney, Sydney, Australia, 2001. [Google Scholar]
  42. Borges, G.A.; Aldon, M.J. Line extraction in 2D range images for mobile robotics. J. Intell. Robot. Syst. 2004, 40, 267–297. [Google Scholar] [CrossRef]
  43. Mendes, A.; Bento, L.C.; Nunes, U. Multi-target detection and tracking with a laser scanner. In Proceedings of the IEEE Intelligent Vehicles Symposium, Parma, Italy, 14–17 June 2004; pp. 796–801. [Google Scholar]
Figure 1. The Ladar Digital Multilayer Range Scanner (LD-MRS) laser scanner mounted on the front of the test vehicle "Silkworm", manufactured by Durmazlar Machine Inc., and the control unit of the test vehicle.
Figure 2. Outline of the proposed collision avoidance system (CAS).
Figure 3. The distance between the consecutive scanned points of the object in front of the tram.
Figure 4. (a) Before segmentation; (b) after segmentation.
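Figures 3 and 4 illustrate the segmentation step: consecutive scan points are grouped into one object while the distance between neighbouring points stays small, and a new segment begins when that distance jumps. A minimal sketch of such distance-based segmentation (the 0.5 m threshold and the point format are illustrative assumptions, not values taken from the paper):

```python
import math

def segment_scan(points, max_gap=0.5):
    """Group consecutive (x, y) scan points into segments.

    A new segment starts whenever the Euclidean distance between
    two consecutive points exceeds max_gap (metres, assumed value).
    """
    segments = []
    current = []
    for p in points:
        if current and math.dist(current[-1], p) > max_gap:
            segments.append(current)
            current = []
        current.append(p)
    if current:
        segments.append(current)
    return segments

# Two point clusters separated by a large lateral gap
scan = [(0.0, 5.0), (0.1, 5.0), (0.2, 5.1), (3.0, 5.0), (3.1, 5.1)]
print(len(segment_scan(scan)))  # 2 segments
```

In practice the threshold is often made adaptive (growing with range), since the spacing of consecutive scan points increases with distance from the scanner.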
Figure 5. Graphical representation of majority voting scheme classifier.
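Figure 5 depicts the majority voting scheme used for classification. A hedged sketch of such a voter, in which each scan cycle contributes one provisional class label for a tracked segment and the most frequent label over recent cycles wins (the class names are illustrative, not the paper's label set):

```python
from collections import Counter

def majority_vote(label_history):
    """Return the class receiving the most votes.

    label_history holds the provisional class assigned to a tracked
    segment in each recent scan cycle; the majority label is kept as
    the object's class.
    """
    counts = Counter(label_history)
    label, _ = counts.most_common(1)[0]
    return label

print(majority_vote(["pedestrian", "pedestrian", "vehicle", "pedestrian"]))  # pedestrian
```

Voting over several cycles smooths out single-frame misclassifications, which is why such schemes are common with sparse multilayer scan data.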
Figure 6. Dangerous areas and obstacles (static and moving) in an observed scene.
Figure 7. All objects are in the safe zone; given their headings, velocities, and positions, there is no collision risk.
Figure 8. An object in the danger zone is slowly approaching the tram. An exclamation mark appears on the screen and an acoustic alarm is activated. The heading of the moving object is shown as a red line and the possible collision position as a yellow circle.
Figure 9. An object is in the danger zone and a collision is highly probable. A stop sign appears on the screen, an alarm sounds, and the automatic brakes are activated. The heading of the object is shown as a red line and the possible collision position as a red circle.
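Figures 7–9 show the three threat levels reported by the CAS: safe, warning (exclamation mark and acoustic alarm), and automatic braking (stop sign). A simple time-to-collision sketch of such a three-state decision (the thresholds are illustrative assumptions, not the system's calibrated values):

```python
def collision_state(distance_m, closing_speed_mps,
                    warn_ttc=6.0, brake_ttc=3.0):
    """Classify the threat level of a tracked object.

    Returns "safe", "warn" (alarm + exclamation mark), or "brake"
    (stop sign + automatic braking), based on the time to collision
    (TTC). Threshold values are illustrative only.
    """
    if closing_speed_mps <= 0:        # object not approaching
        return "safe"
    ttc = distance_m / closing_speed_mps
    if ttc < brake_ttc:
        return "brake"
    if ttc < warn_ttc:
        return "warn"
    return "safe"

print(collision_state(20.0, 1.0))   # safe: TTC = 20 s
print(collision_state(10.0, 2.5))   # warn: TTC = 4 s
print(collision_state(5.0, 4.0))    # brake: TTC = 1.25 s
```

A deployed system would combine such a check with the tracked heading and the predicted collision position shown in the figures, rather than range and closing speed alone.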
Figure 10. Scenarios for another Light Rail Vehicle (LRV) and a pedestrian.
Figure 11. The scenarios with another LRV on the same line. (a) A static LRV on the same line as the test vehicle; (b) Another LRV on the same line approaching the test vehicle from the opposite direction with a velocity of 10 km/h; (c) The same scenario as (b) at a later step, with the LRV closer to the test vehicle; (d) Another LRV approaching the test vehicle from the opposite direction with a velocity of 20 km/h; (e) Another LRV on the same line and with the same heading, moving with a velocity of 20 km/h.
Figure 12. The scenarios for a pedestrian approaching the test vehicle. (a) A pedestrian running towards the test vehicle with a velocity of 5 km/h; (b) A pedestrian running towards the test vehicle with a velocity of 11 km/h; (c) A pedestrian running towards the test vehicle with a velocity of 14 km/h; (d) A pedestrian running towards the test vehicle with a velocity of 13 km/h but changes direction when an acoustic alarm is activated.
Figure 13. The scenario with multiple pedestrians and a static LRV.
Table 1. Test Scenarios.
Test Number | Scenario | Related Figure
The case of another LRV on the same line
1 | A static LRV on the same line as the test vehicle. | Figure 11a
2 | Another LRV on the same line approaching the test vehicle from the opposite direction with a velocity of 10 km/h. | Figure 11b
3 | Another LRV on the same line approaching the test vehicle from the opposite direction with a velocity of 10 km/h; the next step of the previous scenario, with the LRV closer to the test vehicle. | Figure 11c
4 | Another LRV approaching the test vehicle from the opposite direction with a velocity of 20 km/h. | Figure 11d
5 | Another LRV on the same line and with the same heading, moving with a velocity of 20 km/h. | Figure 11e
The scenarios for a pedestrian approaching the test vehicle
6 | A pedestrian running towards the test vehicle with a velocity of 5 km/h. | Figure 12a
7 | A pedestrian running towards the test vehicle with a velocity of 11 km/h. | Figure 12b
8 | A pedestrian running towards the test vehicle with a velocity of 14 km/h. | Figure 12c
9 | A pedestrian running towards the test vehicle with a velocity of 13 km/h, changing direction when the acoustic alarm is activated. | Figure 12d
Mixed-type scenario
10 | A static LRV on the line and two pedestrians crossing the tram route. | Figure 13

Lüy, M.; Çam, E.; Ulamış, F.; Uzun, İ.; Akın, S.İ. Initial Results of Testing a Multilayer Laser Scanner in a Collision Avoidance System for Light Rail Vehicles. Appl. Sci. 2018, 8, 475. https://doi.org/10.3390/app8040475
