Article

Obstacle-Aware Crowd Surveillance with Mobile Robots in Transportation Stations

Department of Embedded Systems Engineering, Incheon National University, Incheon 22012, Republic of Korea
* Author to whom correspondence should be addressed.
Sensors 2025, 25(2), 350; https://doi.org/10.3390/s25020350
Submission received: 13 December 2024 / Revised: 6 January 2025 / Accepted: 7 January 2025 / Published: 9 January 2025
(This article belongs to the Special Issue Intelligent Service Robot Based on Sensors Technology)

Abstract

Recent transportation systems are operated through cooperating components, including mobile robots, smart vehicles, and intelligent management. Surveillance by mobile robots is expected to be particularly useful in complex transportation areas where high accuracy is required. In this paper, we introduce a crowd surveillance system that uses mobile robots and intelligent vehicles to provide obstacle avoidance in transportation stations, considering different moving strategies of the robots in a 2D area supported by line-based barriers and surveillance formations. We then formally define a problem that aims to minimize the distance traveled by the mobile robots while accounting for their moving speed and avoiding the risk of collisions as they move to specific locations to fulfill crowd surveillance. To solve this problem, we propose two different schemes that provide improved surveillance even when speed is taken into account. Finally, we define the conditions, vary the settings, and evaluate the performance of the schemes.

1. Introduction

Recently, transportation systems have been operated with a combination of 5G, 6G, the IoT (Internet of Things), the IIoT (Industrial Internet of Things), Open RAN (Radio Access Network), and cooperative components such as mobile robots, smart vehicles, and intelligence for autonomous control, intelligent services, etc. [1,2,3,4,5,6,7,8,9,10,11,12,13]. Affective surveillance based on virtual emotion detection is expected to expand its applicability to various industrial and academic areas and to support missions and tasks including terror prevention, patrol services, virtual emotion-based services, criminal tracking, maritime transportation monitoring, and smart complex area surveillance [14,15,16,17].
Mobile robots are increasingly being used in various fields together with 5G, 6G, IoT, and IIoT, and the scope of their use is expanding to environments such as industry, logistics, factories, and hospitals [18,19,20,21,22,23,24]. Mobile robots offer several advantages: they can move in small spaces and perform tasks efficiently in complex, obstacle-filled environments. In addition, they are equipped with functions to detect the surrounding environment, identify locations, plan routes, and control movement, which allows them to follow a desired path and carry out their work. Taking advantage of these capabilities, mobile robots can be used for various purposes in places such as transportation areas and theme parks. In particular, it is anticipated that mobile robots and the relevant detection technologies can take a critical role in surveillance and patrol [25,26,27,28,29,30,31,32]. In [33], an efficient coverage re-assignment strategy was studied for multi-robot long-term surveillance. A cooperative dual-task path planning scheme was studied for persistent surveillance supported by a group of unmanned ground vehicles [34]. Also, an automation kit for a dual-mode military unmanned ground vehicle was introduced for surveillance and security purposes [35]. In [36], a novel robot navigation approach based on deep reinforcement learning was proposed for crowd areas. A robot motion planning scheme with obstacle avoidance was presented in [37]. A distributed multi-robot navigation system was investigated based on a variational Bayesian model [38]. Moreover, a balanced task allocation and motion planning framework was developed that considers fuzzy time windows [39]. In [40], the authors developed a multi-agent deep reinforcement learning (MARL) approach that considers limited sensing capability and localization constraints. Also, in [41], a distributed and autonomous event-driven cooperative scheme using multiple robots was proposed to monitor mobile objects.
Cooperative and intelligent mobile robot systems are therefore needed to achieve joint goals through interaction. To this end, efficient communication and cooperation methods between mobile robots must be established, and algorithms for effective teamwork must be designed. Technical problems such as collision avoidance, routing, and task distribution should also be addressed effectively. Through these efforts, mobile robots can cooperate efficiently and complete complex tasks. Previous studies have dealt with mobile robot systems mainly in 2D environments but lacked consideration of robot movement speed. As a result, collisions between robots were ignored and the resulting problems were not sufficiently considered. However, collisions between robots can occur in real-world environments, which can prevent some robots from performing their tasks or force them to follow unexpected travel paths.
In this study, we address the problem of minimizing the total distance traveled by robots when performing monitoring tasks within a square 2D zone. Initially, the robots are placed randomly in the area, and all robots are assumed to move at the same speed. The relocation path of each robot should be planned in advance by taking this moving speed into account. In this setting, obstacles are present in the square area, and each obstacle is randomly assigned a priority that plays an important role in determining the robots' behavior: obstacles with higher priority must be avoided before those with lower priority, and avoidance proceeds sequentially according to priority. However, if the priority of an obstacle is at or below a certain level, forming the barrier takes precedence over avoidance and the obstacle is ignored. In this paper, we provide the ability to build such barriers in a given area using mobile robots.
Based on the above observations, the main contributions and contents of the paper can be summarized as follows.
  • First, we introduce a crowd surveillance system using mobile robots and intelligent vehicles to provide obstacle avoidance in transportation stations and smart buildings.
  • Then, we formally define the main research problem, which aims to minimize the distance traveled by robots when crowd surveillance is created in a given transportation area with smart buildings, where obstacles exist and are randomly assigned priorities reflecting security levels and monitoring importance.
  • To solve the problem under various settings and through multiple simulations, we develop two different schemes: one divides the given area in half and constructs surveillance in each half separately, while the other draws lines over the whole area and excludes already visited coordinates.
  • Moreover, we evaluate the performance of the proposed schemes based on numerical results obtained from extensive simulations with various settings and scenarios, and we provide discussion and analysis of the outcomes.
The rest of the paper is organized as follows: Section 2 covers the system settings, assumptions, system conditions, problem definition, and key terms. Section 3 presents the proposed schemes to resolve the defined problem. Then, in Section 4, the performance of the proposed algorithms is analyzed according to the numerical results obtained through simulations. Section 5 concludes the paper.

2. System Overview and Problem Definition

In this section, we specify the system settings, assumptions, system conditions, the main research problem definition, and essential terms which are used in the proposed crowd surveillance system.

2.1. System Settings and Assumptions

The following settings and assumptions are applied to the proposed system.
  • The given transportation station area is a 2D square area in which obstacles may exist. Each obstacle is assigned a priority at random.
  • Security importance in the transportation station area is determined by the given priority; an obstacle can be ignored if its priority is lower than a certain value.
  • The system components include mobile robots, and every mobile robot has the same moving speed.
  • Mobile robots have communication systems for efficient communication and cooperation.
  • If a mobile robot crashes, it cannot be used. Each mobile robot moves on the floor to specific positions to provide crowd surveillance.
  • Mobile robots can move freely in all areas.
  • Mobile robots work together to determine the optimal path and placement to create crowd surveillance.
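As an illustrative aid only, the settings above can be collected into a single configuration object. The field names and default values in the following Python sketch are assumptions made for illustration; the paper itself only fixes the 2D square area, equal robot speeds, random obstacle priorities, and the limits on communication range and travel distance.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SurveillanceConfig:
    # Hypothetical container for the system settings; names and defaults are assumed.
    area_size: float = 100.0      # side length of the square area S (m)
    num_robots: int = 100         # total number of mobile robots n
    robot_speed: float = 1.0      # identical moving speed for every robot
    comm_range: float = 50.0      # communication range r
    max_travel: float = 100.0     # allowable maximum movement per robot
    priority_bound: int = 3       # obstacles below this priority are ignored
    obstacles: List[Tuple[float, float, int]] = field(default_factory=list)
    # each obstacle: (x, y, randomly assigned priority)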

2.2. System Conditions

Mobile robots are used to support crowd surveillance that covers the entire given square area while preventing collisions between the mobile robots and minimizing their total travel distance. As a first condition, all mobile robots are set to the same moving speed. With identical speeds, the order and route of robot movements must be coordinated, which prevents conflicts and allows cooperative work; it also reduces algorithm complexity by simplifying ordering and routing. In addition, there are obstacles in the square area that should be avoided or enclosed by the barrier, and each obstacle has a randomly assigned priority. A higher priority means the obstacle must be avoided more carefully, but if the priority is lower than a certain value, the formation of the barrier becomes more important than avoiding the obstacle, and the obstacle is ignored. Next, a limit is set on the maximum travel distance of each robot. This prevents a robot from moving more than a certain distance and helps in finding the minimum travel distance; it also constrains the range of movement, which can lead to efficient path selection. Finally, a limit is set on the communication range of the mobile robots. Robots must be within a certain distance of each other to communicate or cooperate effectively, which allows them to exchange information, share the current situation, and work together. These system conditions control the elements required for the mobile robot system to generate barriers efficiently, prevent collisions, and achieve the minimum travel distance.
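A minimal sketch of how these conditions could be checked before committing a single robot move is given below. The function names and the way the constraints are combined are our own assumptions for illustration, not the paper's formal procedure.

import math

def obstacle_must_be_avoided(priority: int, priority_bound: int) -> bool:
    # Obstacles at or above the bound are avoided; lower-priority obstacles are
    # ignored in favour of forming the barrier (assumed encoding of the rule).
    return priority >= priority_bound

def move_is_allowed(start, target, travelled_so_far, max_travel, comm_range, neighbours):
    # Checks the maximum-travel-distance and communication-range conditions for
    # one candidate move; `neighbours` are positions of robots that must remain
    # reachable after the move (this pairing of constraints is an assumption).
    step = math.dist(start, target)
    if travelled_so_far + step > max_travel:
        return False
    return all(math.dist(target, p) <= comm_range for p in neighbours)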

2.3. Problem Definition

We deal with the problem of efficiently providing crowd surveillance using mobile robots within a given square area. The core of this problem is to minimize the total distance traveled by the robots while avoiding collisions between them. Obstacles exist within this zone and are assigned priorities at random. Crowd surveillance should be constructed while avoiding the obstacles, but the priorities indicate importance, and obstacles with priorities below a certain value may be ignored because they matter less than barrier formation. The system components include mobile robots that move within the square area to form crowd surveillance. The robots' moving speeds must be identical and adjusted appropriately so that they are neither too fast nor too slow. The robots' communication range and maximum travel distance are also restricted to minimize the travel distance. The key terms and main research problem for the proposed system are then defined as follows.
Definition 1 (Crowd transportation space).
The crowd transportation space, referred to as CTS, is a transportation space that allows crowd environments and factors, covering crowd pedestrians, vehicles, objects, and signals, to provide intelligent transportation services with regard to the large amount of information and data.
Definition 2 (Crowd surveillance).
Let us suppose that there exist a targeted crowd transportation space, a group of mobile components with moving speed, and a set of obstacles. Crowd surveillance, called CrowdSurv, is to support the detection of crowd groups, including penetrations and requested objects, in the given space.
Definition 3 (crowd surveillance total movement minimization problem).
Given a group of mobile robots and a set of obstacles in a crowd transportation space, the crowd surveillance total movement minimization problem, referred to as CrowdSurvTMin, is to minimize the total distance traveled by the mobile robots such that crowd surveillance is provided with the adjusted moving speed of the robots and with obstacle avoidance, while the required detection range and the allowable maximum movement limit are satisfied in the crowd surveillance space.
Then, the objective function (1) of the CrowdSurvTMin problem is to

Minimize μ    (1)

where μ denotes the total distance traveled by the mobile robots.
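For clarity, μ can be expanded as the sum of per-robot travel distances. The notation d_i (distance traveled by robot m_i) and D_max (allowable maximum movement) below is ours; it is consistent with Definition 3, but the paper does not state the objective in this expanded form.

Minimize μ = Σ_{m_i ∈ M} d_i
subject to: d_i ≤ D_max for every mobile robot m_i ∈ M (maximum movement limit),
            crowd surveillance is formed in S with the required detection range,
            every obstacle with priority at or above the bound is avoided.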

3. Proposed Schemes

In this section, two different algorithms are presented to reduce the total travel distance of the robots within the square zone and to obtain energy-efficient results with minimal total movement of the mobile robots. A description of the execution procedure for each algorithm is then given.

3.1. Algorithm 1: Half-Divided-Positioning

Now, we describe the first algorithm, called Algorithm 1: Half-Divided-Positioning. The execution stages of the algorithm are as follows:
  • Check the given square area.
  • Identify the location of randomly scattered mobile robots.
  • Verify the location of obstacles in the area and identify randomly assigned priorities.
  • Obstacles whose priority is below a certain value are excluded.
  • Divide the area in half horizontally.
  • Crowd surveillance is constituted by moving one robot at a time in the two divided areas.
  • Calculate the total distance traveled by mobile robots and return it as final output μ .
After the entire area is divided in half horizontally, mobile robots on the ground are moved one at a time in the upper and lower halves simultaneously to generate crowd surveillance. Each mobile robot moves to the nearest position it can reach; this also applies when avoiding obstacles, where the robot closest to an obstacle moves to avoid it. Because the robots operate in separate sub-regions, collisions between the mobile robots do not occur.
Figure 1 depicts the initial status and the half-divided sub-regions produced by Algorithm 1: Half-Divided-Positioning. Figure 1a shows the initial state with a group of mobile robots and a set of obstacles with security priorities. Figure 1b represents the status of the half-divided sub-regions, based on which the mobile robots determine their positioning and moving strategy.
Figure 2 shows the implementation procedure and cases of Algorithm 1: Half-Divided-Positioning. Figure 2a describes the initial status, with a verification of the group of mobile robots and the set of obstacles with priorities within the given area. Figure 2b depicts the first construction of crowd surveillance in the upper and lower sub-regions, respectively. Figure 2c describes the case of avoiding the obstacle with the highest priority in the upper sub-region while crowd surveillance is maintained around the obstacle with the second priority in the lower sub-region. Figure 2d stands for the continued construction of crowd surveillance in both the upper and lower sub-regions. Figure 2e shows the completed result, in which crowd surveillance is formed in both the upper and lower sub-regions.
Algorithm 1 Half-Divided-Positioning
Inputs: S, M, T; Output: μ
  1: accept the given area S;
  2: verify the positions of mobile robots M in S;
  3: set M;
  4: set total distance = 0;
  5: recognize the locations of obstacles T in S;
  6: assign the priority to each obstacle;
  7: divide S in half and set the halves as S_upper and S_lower;
  8: while a crowd surveillance is not formed do
  9:       select a robot m in M;
10:       move m to min(S_upper, S_lower);
11:       estimate the moving distance and add it to total distance;
12:       if the crowd surveillance is generated completely in S then
13:           exit
14: update total distance to μ;
15: return μ;
Moreover, the pseudocode of Half-Divided-Positioning is described in Algorithm 1 with clear representations.
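For readers who prefer executable form, the following Python sketch mirrors the greedy, nearest-position behavior of Algorithm 1. How the surveillance positions are generated from the obstacle layout is not specified here, so they are passed in as precomputed lists; that interface, and the strictly sequential assignment, are assumptions of this sketch rather than the paper's exact procedure.

import math
from typing import List, Tuple

Point = Tuple[float, float]

def half_divided_positioning(robots: List[Point],
                             targets_upper: List[Point],
                             targets_lower: List[Point]) -> float:
    # Sketch of Algorithm 1: each surveillance position in the upper and lower
    # halves is filled by the closest still-unassigned robot; the sum of the
    # straight-line moves is returned as the final output mu.
    total_distance = 0.0
    remaining = list(robots)
    for targets in (targets_upper, targets_lower):   # the two half-regions
        for target in targets:
            if not remaining:
                break
            robot = min(remaining, key=lambda p: math.dist(p, target))
            total_distance += math.dist(robot, target)
            remaining.remove(robot)
    return total_distance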

3.2. Algorithm 2: Excluded-Coordinated-Movement

Next, the second method, referred to as Algorithm 2: Excluded-Coordinated-Movement, is presented to resolve the CrowdSurvTMin problem. The essential strategy of Algorithm 2: Excluded-Coordinated-Movement is to draw virtual lines over the whole given area and to assign coordinates to the resulting sub-regions. Algorithm 2: Excluded-Coordinated-Movement then plans the movement path in advance using these coordinates, excluding coordinates that have already been passed. Figure 3 presents the initial state, showing the transportation space with a group of mobile robots and a set of obstacles with security priorities, toward securing crowd surveillance. As shown in Figure 3, the given transportation space is partitioned by dotted virtual lines so that coordinated sub-regions are created. As an advantage of Algorithm 2: Excluded-Coordinated-Movement, the coordinates make it easier to calculate the distance traveled, and the risk of collision is much lower because previously visited coordinates are excluded. Also, since the route is planned in advance, obstacle avoidance is efficient.
Then, the execution procedures and steps of Algorithm 2: Excluded-Coordinated-Movement are explained as follows:
  • Verify the given transportation space.
  • Place mobile robots randomly in the given space.
  • Identify the arbitrary location of mobile robots in all regions.
  • Check the location of obstacles within the area and identify randomly assigned priorities.
  • Exclude obstacles with priorities which do not exceed a certain value.
  • Draw a line and give coordinates to all areas.
  • Plan the robots' travel paths, excluding already visited coordinates.
  • Move the robots along the planned paths and create crowd surveillance.
  • Estimate the total moving distance by mobile robots and return it as final output μ .
Figure 4 shows the execution procedure and cases of Algorithm 2: Excluded-Coordinated-Movement. Figure 4a presents the initial state covering a group of mobile robots and a set of obstacles with priorities within the given area, where the coordinated sub-regions are drawn by virtual lines. Figure 4b depicts the scheduled mobile robot trajectories for crowd surveillance. Figure 4c represents the next planned status of the mobile robots, with already visited coordinates excluded. Figure 4d stands for the completed result, in which crowd surveillance is generated with obstacle avoidance according to the coordinated sub-regions.
Algorithm 2 Excluded-Coordinated-Movement
Inputs: S, M, T, b; Output: μ
  1: verify the given area S;
  2: locate a set of mobile robots M into S;
  3: check the positions of M;
  4: set M;
  5: set total distance = 0;
  6: set a security priority bound b;
  7: identify the positions of obstacles T in S;
  8: while a priority is assigned to every obstacle in T do
  9:       assign the priority p to an obstacle t of T;
10:       if p < b then
11:           exclude t from T;
12: generate the line-based coordinates in S;
13: while a crowd surveillance is not formed do
14:       select a robot m in M;
15:       move a robot to a position for crowd surveillance, excluding already visited coordinates;
16:       calculate the moving distance and add it to total distance;
17:       if the crowd surveillance is created completely in S then
18:           exit
19: update total distance to μ;
20: return μ;
In addition, the pseudocode of Excluded-Coordinated-Movement is presented in Algorithm 2 in more detail.
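A corresponding Python sketch of Algorithm 2 is given below. The grid granularity, the dictionary-based obstacle representation, and the greedy assignment are assumptions made for illustration; the paper only fixes the ideas of line-based coordinates, priority-based obstacle exclusion, and skipping already visited coordinates.

import math
from typing import Dict, List, Set, Tuple

Cell = Tuple[int, int]

def excluded_coordinated_movement(robots: List[Cell],
                                  obstacles: Dict[Cell, int],
                                  priority_bound: int,
                                  surveillance_cells: List[Cell]) -> float:
    # Obstacles at or above the security priority bound b are kept as blocked
    # coordinates; lower-priority obstacles are ignored (barrier formation first).
    blocked: Set[Cell] = {c for c, p in obstacles.items() if p >= priority_bound}
    visited: Set[Cell] = set()
    total_distance = 0.0
    remaining = list(robots)
    for cell in surveillance_cells:
        if cell in blocked or cell in visited or not remaining:
            continue                                  # excluded coordinate
        robot = min(remaining, key=lambda r: math.dist(r, cell))
        total_distance += math.dist(robot, cell)
        visited.add(cell)                             # exclude from later planning
        remaining.remove(robot)
    return total_distance                             # final output mu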

4. Experimental Evaluations and Discussions

4.1. Simulation Results

In this section, we analyze the outcomes of the two approaches, Algorithm 1: Half-Divided-Positioning and Algorithm 2: Excluded-Coordinated-Movement, and demonstrate their performance obtained from extensive simulations. For the simulation settings, the size of the transportation space is 100 (width) by 100 (height) m, 150 by 150, 200 by 200, and 250 by 250, respectively. The total number of mobile robots and vehicles n is between 100 and 250, and the communication range r is between 50 and 100 for each scenario graph. The required crowd surveillance level l ranges from 4 to 7. Every reported value of the total moving distance μ is the average over 1000 runs with different settings and parameters and with random locations of mobile robots and obstacles. As a whole, our experiments for crowd surveillance with obstacle avoidance consist of three groups of scenarios according to the given space size, the total number of mobile robots, and so on.
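Each reported value of μ is thus an average over 1000 random placements per setting. The sketch below illustrates such a sweep in Python; run_algorithm stands in for either proposed scheme, and the obstacle count and priority range are assumptions, since the simulation code itself is not published with the paper.

import random
import statistics

def average_total_distance(run_algorithm, space_size, n_robots,
                           comm_range, level, trials=1000, seed=0):
    # Average the total moving distance mu over `trials` random placements of
    # robots and obstacles, mirroring the evaluation protocol described above.
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        robots = [(rng.uniform(0, space_size), rng.uniform(0, space_size))
                  for _ in range(n_robots)]
        obstacles = {(rng.uniform(0, space_size), rng.uniform(0, space_size)):
                     rng.randint(1, 10) for _ in range(20)}  # assumed count/range
        results.append(run_algorithm(robots, obstacles, comm_range, level))
    return statistics.mean(results)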
For the first group of experiments, we run the two methods, Algorithm 1: Half-Divided-Positioning and Algorithm 2: Excluded-Coordinated-Movement, with various numbers of mobile robots n in a 100 × 100 crowd transportation space, as shown in Figure 5. In the result graphs, the X-axis stands for the communication range of the mobile robots and the Y-axis depicts the resulting total distance μ. Algorithm 1: Half-Divided-Positioning is presented with blue circle markers and Algorithm 2: Excluded-Coordinated-Movement with red triangle markers. Figure 5a,b demonstrate the results when n = 100 and n = 150, and Figure 5c,d show the performance when n = 200 and n = 250 are set up in the 100 × 100 crowd transportation space. As seen in Figure 5, the value of μ decreases as the communication range increases for both algorithms, and Algorithm 2: Excluded-Coordinated-Movement performs better than Algorithm 1: Half-Divided-Positioning.
For the second group of simulations, we run Algorithm 1: Half-Divided-Positioning and Algorithm 2: Excluded-Coordinated-Movement for different sizes of the crowd transportation space with the total number of mobile robots n = 100, as seen in Figure 6. As in the first group, Algorithm 1: Half-Divided-Positioning is presented with blue circle markers and Algorithm 2: Excluded-Coordinated-Movement with red triangle markers. Figure 6a,b show the performance when 100 × 100 and 150 × 150 spaces are considered, and Figure 6c,d present the numerical results for 200 × 200 and 250 × 250 spaces. According to Figure 6, not only does the total movement distance μ decrease as the communication range increases, but Algorithm 2: Excluded-Coordinated-Movement also yields a lower total moving distance than Algorithm 1: Half-Divided-Positioning.
For the third group of experiments, we execute Algorithm 1: Half-Divided-Positioning and Algorithm 2: Excluded-Coordinated-Movement for various crowd surveillance levels l with n = 100 in a 100 × 100 crowd transportation space, as shown in Figure 7. Algorithm 1: Half-Divided-Positioning is depicted with blue circle markers and Algorithm 2: Excluded-Coordinated-Movement with red triangle markers. Figure 7a,b demonstrate the performance when crowd surveillance levels l = 4 and l = 5 are given, and Figure 7c,d show the output when l = 6 and l = 7 are requested, respectively. Based on Figure 7, the value of μ decreases as the communication range increases for both algorithms, and Algorithm 2: Excluded-Coordinated-Movement returns a smaller moving distance μ than Algorithm 1: Half-Divided-Positioning for crowd surveillance. Overall, it is confirmed that Algorithm 2: Excluded-Coordinated-Movement outperforms Algorithm 1: Half-Divided-Positioning.

4.2. Complexity Analysis

When we calculate the complexity of the proposed Algorithm 1: Half-Divided-Positioning and Algorithm 2: Excluded-Coordinated-Movement, suppose that the given surveillance area is S, the total number of mobile robots is n, the total number of surveillance priorities is k, the total number of obstacles is b, and the total number of coordinates is c, where n > k, n > b, n > c.
First, considering the complexity of Algorithm 1: Half-Divided-Positioning, it identifies the given surveillance area S within O(1) and checks the locations of the randomly deployed mobile robots within O(n). It also verifies the assigned priorities within O(k) and checks the locations of the obstacles within O(b). Then, it divides the given area in half horizontally within O(1). Each mobile robot is positioned in the two divided areas while avoiding the b obstacles within b·O(n). The total distance of the mobile robots is then estimated within O(n) and returned within O(1). Hence, the total number of iterations and calculations is O(1) + O(n) + O(k) + O(1) + b·O(n) + O(n) + O(1). Because b is constant, the asymptotic upper bound is O(n). Hence, the complexity of Algorithm 1: Half-Divided-Positioning is estimated as O(n).
Second, evaluating the complexity of Algorithm 2: Excluded-Coordinated-Movement, it verifies the requested surveillance area S within O(1) and places the mobile robots in the area S within O(n). It generates the arbitrary positions of the mobile robots in all regions within O(n). It recognizes the locations of the obstacles within O(b) and the assigned priorities within O(k). Then, it draws the lines and assigns coordinates to each area within 2·O(1)·O(c). It also estimates the robots' travel paths, excluding the previously visited parts, within b·O(n)·O(c) while avoiding the b obstacles. The total distance of the mobile robots is then estimated within O(n) and returned within O(1). Hence, the total number of iterations and calculations is O(1) + O(n) + O(n) + O(b) + O(k) + 2·O(1)·O(c) + b·O(n)·O(c) + O(1). Since b and c are constants, the asymptotic upper bound is O(n). Therefore, the complexity of Algorithm 2: Excluded-Coordinated-Movement is estimated as O(n).
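Treating the numbers of priorities k, obstacles b, and coordinates c as constants that do not grow with the number of robots n (as the conclusions above imply), the two operation counts simplify as follows; this grouping is our own worked step rather than text from the paper.

T1(n) = O(1) + O(n) + O(k) + O(1) + b·O(n) + O(n) + O(1) = O((b + 2)·n + k) = O(n)
T2(n) = O(1) + 2·O(n) + O(b) + O(k) + 2·O(1)·O(c) + b·O(n)·O(c) + O(1) = O((b·c + 2)·n + b + k + c) = O(n)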

5. Concluding Remarks

In this article, we introduced a crowd surveillance framework using mobile robots and intelligent vehicles to provide obstacle avoidance with security priorities in transportation stations, which considers different moving strategies. Then, we gave formal definitions of the key terms and of the crowd surveillance total movement minimization problem. To solve the defined problem, two different algorithms were devised to provide complete crowd surveillance in the given transportation space. Furthermore, the performance of the proposed schemes was analyzed based on numerical results from extensive simulations with various settings and scenarios.

Author Contributions

Conceptualization, Y.C.; software, Y.C.; validation, Y.C.; investigation, Y.C. and H.K.; methodology, Y.C.; resources, Y.C.; data curation, Y.C.; writing—draft preparation, Y.C. and H.K.; writing—review and editing, H.K.; visualization, Y.C. and H.K.; supervision, H.K.; project administration, H.K.; funding acquisition, H.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
IoT	Internet of Things
IIoT	Industrial Internet of Things
RAN	Radio Access Network

References

  1. Zhu, Y.; Mao, B.; Kato, N. On a novel high accuracy positioning with intelligent reflecting surface and unscented kalman filter for intelligent transportation systems in B5G. IEEE J. Sel. Areas Commun. 2024, 42, 68–77. [Google Scholar] [CrossRef]
  2. Scotece, D.; Noor, D.A.; Foschini, L.; Corradi, A. 5G-Kube: Complex telco Core infrastructure deployment made low-cost. IEEE Commun. Mag. 2023, 61, 26–30. [Google Scholar] [CrossRef]
  3. Tuong, V.; Noh, W.; Cho, S. Sparse CNN and deep reinforcement learning-based D2D scheduling in UAV-assisted industrial IoT network. IEEE Trans. Ind. Inform. 2024, 20, 213–223. [Google Scholar] [CrossRef]
  4. Ha, T.; Masood, A.; Na, W.; Cho, S. Intelligent multi-path TCP congestion control for video streaming in internet of deep space things communication. ICT Express 2023, 9, 860–868. [Google Scholar] [CrossRef]
  5. Yeh, C.; Jo, G.D.; Ko, Y.-J.; Chung, H.K. Perspectives on 6G wireless communications. ICT Express 2023, 9, 82–91. [Google Scholar] [CrossRef]
  6. Ahammed, T.B.; Patgiri, R.; Nayak, S. A vision on the artificial intelligence for 6G communication. ICT Express 2023, 9, 197–210. [Google Scholar] [CrossRef]
  7. Hu, H.; Xiong, K.; Yang, H.; Ni, Q.; Gao, B.; Fan, P.; Letaief, K.B. AoI-minimal online scheduling for wireless-powered IoT: A lyapunov optimization-based approach. IEEE Trans. Green Commun. Netw. 2023, 7, 2081–2092. [Google Scholar] [CrossRef]
  8. Chatzakis, M.; Fatourou, P.; Kosmas, E.; Palpanas, T.; Peng, B. Odyssey: A journey in the land of distributed data series similarity search. Proc. VLDB Endow. 2023, 16, 1140–1153. [Google Scholar]
  9. Zhao, R.; Wang, Y.; Xue, Z.; Ohtsuki, T.; Adebisi, B.; Gui, G. Semisupervised Federated-Learning-Based Intrusion Detection Method for Internet of Things. IEEE Internet Things J. 2023, 10, 8645–8657. [Google Scholar] [CrossRef]
  10. Hossain, A.; Hossain, M.; Ansari, N. Dual-band aerial networks for priority-based traffic. IEEE Trans. Veh. Technol. 2023, 72, 9500–9510. [Google Scholar] [CrossRef]
  11. Kyung, Y.; Ko, H.; Lee, J.; Pack, S.; Park, N.; Ko, N. Location-aware B5G LAN-type services: Architecture, use case, and challenges. IEEE Commun. Mag. 2023, 62, 88–94. [Google Scholar] [CrossRef]
  12. Ko, H.; Pack, S.; Leung, V.C.M. Performance optimization of serverless computing for latency-guaranteed and energy-efficient task offloading in energy-harvesting industrial IoT. IEEE Internet Things J. 2023, 10, 1897–1907. [Google Scholar] [CrossRef]
  13. Cruz, M.; Abbade, L.; Lorenz, P.; Mafra, S.B.; Rodrigues, J.J.P.C. Detecting compromised IoT devices through XGBoost. IEEE Trans. Intell. Transp. Syst. 2023, 24, 15392–15399. [Google Scholar] [CrossRef]
  14. Zhao, M.; Adib, F.; Katabi, D. Emotion recognition using wireless signals. Commun. ACM 2018, 61, 91–100. [Google Scholar] [CrossRef]
  15. Kim, H.; Ben-Othman, J.; Cho, S.; Mokdad, L. Intelligent aerial-ground surveillance and epidemic prevention with discriminative public and private services. IEEE Netw. 2018, 36, 40–46. [Google Scholar] [CrossRef]
  16. Lee, S.; Lee, S.; Choi, Y.; Ben-Othman, J.; Mokdad, L.; Hwang, K.; Kim, H. Task-oriented surveillance framework for virtual emotion informatics in polygon spaces. IEEE Wirel. Commun. 2023, 30, 104–111. [Google Scholar] [CrossRef]
  17. Lin, P.; Song, Q.; Yu, F.; Wang, D.; Jamalipour, A.; Guo, L. Wireless virtual reality in beyond 5G systems with the internet of intelligence. IEEE Wirel. Commun. 2021, 28, 70–77. [Google Scholar] [CrossRef]
  18. Herrera, J.L.; Galán-Jiménez, J.; Foschini, L.; Bellavista, P.; Berrocal, J.; Murillo, J.M. QoS-aware fog node placement for intensive IoT applications in SDN-fog scenarios. IEEE Internet Things J. 2022, 9, 13725–13739. [Google Scholar] [CrossRef]
  19. Song, R.; Wu, J.; Pan, Q.; Imran, M.; Naser, N.; Jones, R.; Verikoukis, C.V. Zero-Trust Empowered Decentralized Security Defense against Poisoning Attacks in SL-IoT: Joint Distance-Accuracy Detection Approach. Proc. IEEE Globecom 2023, 2766–2771. [Google Scholar]
  20. Memos, V.A.; Psanni, K.E. Optimized UAV-based data collection from MWSNs. ICT Express 2023, 9, 29–33. [Google Scholar] [CrossRef]
  21. Liu, Y.; Xiong, K.; Zhang, W.; Yang, H.; Fan, P.; Letaief, K.B. Jamming-enhanced secure UAV communications with propulsion energy and curvature radius constraints. IEEE Trans. Veh. Technol. 2023, 72, 10852–10866. [Google Scholar] [CrossRef]
  22. Messous, M.A.; Senouci, S.; Sedjelmaci, H.; Cherkaoui, S. A Game Theory Based Efficient Computation Offloading in an UAV Network. IEEE Trans. Veh. Technol. 2019, 68, 4964. [Google Scholar] [CrossRef]
  23. Lhazmir, S.; Oualhaj, O.A.; Kobbane, A.; Ben-Othman, J. Matching Game With No-Regret Learning for IoT Energy-Efficient Associations With UAV. IEEE Trans. Green Commun. Netw. 2020, 4, 973–981. [Google Scholar] [CrossRef]
  24. Srivastava, V.; Debnath, S.K.; Bera, B.; Das, A.K.; Park, Y.; Lorenz, P. Blockchain-envisioned provably secure multivariate identity-based multi-signature scheme for internet of vehicles environment. IEEE Trans. Intell. Transp. Syst. 2022, 71, 9853–9867. [Google Scholar] [CrossRef]
  25. Saad, W.; Sanjab, A.; Wang, Y.; Kamhoua, C.A.; Kwiat, K.A. Hardware trojan detection game: A prospect-theoretic approach. IEEE Trans. Veh. Technol. 2022, 66, 7697–7710. [Google Scholar] [CrossRef]
  26. Feng, S.; Lu, X.; Sun, S.; Niyato, D.; Hossain, E. Securing large-scale D2D networks using covert communication and friendly jamming. IEEE Trans. Wirel. Commun. 2024, 23, 592–606. [Google Scholar] [CrossRef]
  27. Asheralieva, A.; Niyato, D.; Xiong, Z. Auction-and-learning based lagrange coded computing model for privacy-preserving, secure, and resilient mobile edge computing. IEEE Trans. Mob. Comput. 2023, 22, 744–764. [Google Scholar] [CrossRef]
  28. Xiao, Y.; Yan, C.; Lyu, S.; Pei, Q.; Liu, X.; Zhang, N.; Dong, M. Defed: An edge-feature-enhanced image denoised network against adversarial attacks for secure internet of things. IEEE Internet Things J. 2023, 10, 6836–6848. [Google Scholar] [CrossRef]
  29. Kang, J.; Ryu, D.; Baik, J. A Case Study of Industrial Software Defect Prediction in Maritime and Ocean Transportation Industries. J. KIISE 2020, 47, 769–778. [Google Scholar] [CrossRef]
  30. Park, S.; Kang, M.; Park, J.; Chom, S.; Han, S. Analyzing the Effects of API Calls in Android Malware Detection Using Machine Learning. J. KIISE 2021, 48, 257–263. [Google Scholar] [CrossRef]
  31. Hu, X.; Zhang, H.; Ma, D.; Wang, R.; Zheng, J. Minor class-based status detection for pipeline network using enhanced generative adversarial networks. Neurocomputing 2021, 424, 71–83. [Google Scholar] [CrossRef]
  32. Huang, L.; Zhou, M.; Hao, K.; Hou, E. A survey of multi-robot regular and adversarial patrolling. IEEE/CAA J. Autom. Sin. 2019, 6, 894–903. [Google Scholar] [CrossRef]
  33. Lee, S. An Efficient Coverage Area Re-Assignment Strategy for Multi-Robot Long-Term Surveillance. IEEE Access 2023, 11, 33757–33767. [Google Scholar] [CrossRef]
  34. Zhang, J.; Wu, Y.; Zhou, M. Cooperative Dual-Task Path Planning for Persistent Surveillance and Emergency Handling by Multiple Unmanned Ground Vehicles. IEEE Trans. Intell. Transp. Syst. 2024, 11, 16288–16299. [Google Scholar] [CrossRef]
  35. Naranjo, J.E.; Jimenez, F.; Anguita, M.; Rivera, J.L. Automation Kit for Dual-Mode Military Unmanned Ground Vehicle for Surveillance Missions. IEEE Intell. Transp. Syst. Mag. 2024, 12, 125–137. [Google Scholar] [CrossRef]
  36. Yang, H.; Yao, C.; Liu, C.; Chen, Q. RMRL: Robot Navigation in Crowd Environments with Risk Map-Based Deep Reinforcement Learning. IEEE Robot. Autom. Lett. 2023, 8, 7930–7937. [Google Scholar] [CrossRef]
  37. Hong, Y.; Ding, Z.; Yuan, Z.; Chi, W.; Sun, L. Obstacle Avoidance Learning for Robot Motion Planning in Human–Robot Integration Environments. IEEE Trans. Cogn. Dev. Syst. 2023, 15, 2169–2178. [Google Scholar] [CrossRef]
  38. Chen, L.; Wang, Y.; Miao, Z.; Feng, M.; Zhou, Z.; Wang, H. Toward Safe Distributed Multi-Robot Navigation Coupled with Variational Bayesian Model. IEEE Trans. Autom. Sci. Eng. 2024, 21, 7583–7598. [Google Scholar] [CrossRef]
  39. Xidias, E.; Zacharia, P. Balanced task allocation and motion planning of a multi-robot system under fuzzy time windows. Eng. Comput. 2024, 41, 1301–1326. [Google Scholar] [CrossRef]
  40. Mishra, M.; Poddar, P.; Agrawal, R.; Chen, J.; Tokekar, P.; Sujit, P.B. Multi-Agent Deep Reinforcement Learning for Persistent Monitoring with Sensing, Communication, and Localization Constraints. IEEE Trans. Autom. Sci. Eng. 2024, 1–13, Early Access. [Google Scholar] [CrossRef]
  41. Huang, L.; Zhou, M.; Hao, K.; Han, H. Multirobot Cooperative Patrolling Strategy for Moving Objects. IEEE Trans. Syst. Man Cybern. Syst. 2023, 53, 2995–3007. [Google Scholar] [CrossRef]
Figure 1. Initial status and the half-divided sub-regions by Algorithm 1: Half-Divided-Positioning.
Figure 2. The execution cases of Algorithm 1: Half-Divided-Positioning.
Figure 3. Initial status and the coordinated sub-areas according to virtual lines by Algorithm 2: Excluded-Coordinated-Movement.
Figure 4. The implementation cases of Algorithm 2: Excluded-Coordinated-Movement.
Figure 5. The total moving distance μ for various numbers of mobile robots n in a 100 × 100 crowd transportation space. (a) n = 100, (b) n = 150, (c) n = 200, (d) n = 250.
Figure 6. The total moving distance μ for different sizes of the crowd transportation space with the total number of mobile robots n = 100. (a) 100 × 100, (b) 150 × 150, (c) 200 × 200, (d) 250 × 250.
Figure 7. The total moving distance μ for various crowd surveillance levels l with n = 100 in a 100 × 100 crowd transportation space. (a) l = 4, (b) l = 5, (c) l = 6, (d) l = 7.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
