Article

A Dataset of Standard and Abrupt Industrial Gestures Recorded Through MIMUs

Department of Mechanical and Aerospace Engineering, Politecnico di Torino, 10129 Turin, Italy
* Author to whom correspondence should be addressed.
Robotics 2025, 14(12), 176; https://doi.org/10.3390/robotics14120176
Submission received: 10 October 2025 / Revised: 21 November 2025 / Accepted: 27 November 2025 / Published: 28 November 2025
(This article belongs to the Special Issue Human–Robot Collaboration in Industry 5.0)

Abstract

Considering the human-centric approach promoted by Industry 5.0, safety becomes a crucial aspect in scenarios of human–robot interaction, especially when abrupt human movements occur due to inattention or unexpected circumstances. To this end, human motion tracking is necessary to promote a safe and efficient human–machine interaction. Literature datasets related to the industrial context generally contain controlled and repetitive gestures tracked with visual systems or magneto-inertial measurement units (MIMUs), without considering the occurrence of unexpected events that might cause operators’ abrupt movements. Accordingly, the aim of this paper is to present DASIG (Dataset of Standard and Abrupt Industrial Gestures), a dataset of both standard industrial movements and abrupt movements recorded through MIMUs. Sixty healthy working-age participants were asked to perform standard pick-and-place gestures interspersed with unexpected abrupt movements triggered by visual or acoustic alarms. The dataset contains MIMU signals collected during the execution of the task, data related to the temporal generation of alarms, anthropometric data of all participants, and a script demonstrating DASIG usability. All raw data are provided, and the collected dataset is suitable for several analyses related to the industrial context (gesture recognition, motion planning, ergonomics, safety, statistics, etc.).

1. Introduction

In line with the human-centric approach promoted by Industry 5.0, the most critical safety concern in collaborative robotics arises when humans and robots share the same workspace [1]. In particular, alongside the repeated and regular gestures typical of the industrial context, abrupt gestures caused by inattention and unexpected circumstances can occur. In such scenarios, promptly detecting the onset of abrupt gestures is crucial to prevent human–machine collisions and to ensure that overall task execution remains efficient [2]. A possible approach to ensure safety is based on power and force limitation, which entails a combination of passive safety design and energy control methods in mobile robotic components, imposing constraints on forces, torques, and velocities. Another possibility is speed and separation monitoring (SSM), which imposes a protective distance between the robot and the operator, without specifying limitations for the robotic system. In particular, when the operator is inside the shared workspace, the SSM approach relies on real-time robot control to continuously adjust its trajectory, ensuring collisions are avoided. To monitor the minimum distance between the human and the robot, motion tracking systems are required. Typically, this tracking relies on a customized model of the human arm and should be achieved with a limited number of sensors and low computational time. Additionally, to update the robot trajectory, algorithms for collision avoidance must be implemented, also considering the possible occurrence of abrupt movements.
Focusing on repetitive and controlled industrial tasks such as assembly [3] and pick-and-place [4], the tracking of human motion and the recognition of human gestures were achieved through different technologies such as vision systems, wearable inertial sensors and electromyography [5,6,7,8]. The precision and accuracy of traditional visual systems were exploited to collect repetitive daily and industrial gestures. Dallel and colleagues proposed an RGB-D dataset for action recognition, collected in an industrial environment on sixteen participants [9]. Lagamtzis and colleagues published an RGB-D dataset of six participants comprising several industrial assembly tasks performed by the operator alone or in collaboration with a robot [10]. The dataset of Rudenko and colleagues contains human motion trajectory and eye gaze data of nine subjects in an indoor environment [11]. Delamare and colleagues [12] and Tamantini and colleagues [13] created two different datasets of six and eight subjects, respectively, performing assembly line work tasks, recorded with a video motion capture system. In the study of Kratzer and colleagues [14], a dataset of full-body motion of seven participants recorded during manipulation tasks using a traditional motion capture system and a wearable pupil-tracking device is presented. Duarte and Neto introduced a dataset of manufacturing tasks recorded from five participants through an event camera, a depth camera, and a magnetic tracking system [15].
Despite their advantages, traditional visual systems for motion capture have some downsides such as encumbrance, high costs, constraint of the analysis to a structured space and limited acquisition frequencies. To overcome these limitations in the industrial scenario, magneto-inertial measurement units (MIMUs) are an appropriate solution because they are low-cost, portable, easy to wear, minimally invasive, able to guarantee very high frequencies, and suitable for an almost unlimited tracking space [16]. In the literature, a few datasets of industrial gestures have been created with data collected through MIMUs. For example, Olivas-Padilla and colleagues proposed seven MIMUs datasets of gestures carried out by eighteen industrial operators and skilled craftsmen [17]. Alternatively, Maurice and colleagues created a whole-body kinematic dataset of thirteen participants performing several industrial activities such as screwing and handling loads [18].
All the previously cited and described datasets recorded with either visual systems or inertial sensors focus exclusively on repeated and regular gestures typical of industrial contexts, without considering the occurrence of abrupt gestures not directly related to the work task. Moreover, these datasets generally include fewer than twenty participants, which limits their statistical power and reduces the generalizability of their findings. Accordingly, this paper proposes a new MIMU-based Dataset of Standard and Abrupt Industrial Gestures (DASIG) [19], collected on sixty participants performing both standard and abrupt movements in a simulated workstation. In detail, participants were asked to perform a traditional pick-and-place task alternating with abrupt movements triggered by visual and acoustic alarms. The pick-and-place task was selected because it represents a fundamental action in many industrial contexts, including sorting components on assembly lines, transferring parts between containers, and positioning items for packaging. Specifically, the dataset includes raw MIMU data (accelerations, angular velocities, magnetic fields, and orientations expressed through quaternions) collected during the execution of the task, data related to the temporal generation of alarms, and anthropometric characteristics of all participants. Moreover, a Matlab® (Version 2024b, Mathworks, Natick, MA, USA) structure containing the database and a script for plotting its contents are included to demonstrate the usability of DASIG. The dataset has significant potential to support the development and fine-tuning of algorithms aimed at enhancing system safety by detecting and responding to abrupt movements. These algorithms can help ensure timely intervention to prevent collisions, thereby improving safety and efficiency in collaborative human–robot workspaces. Moreover, while artificial data generation can be a valuable complementary tool, the unpredictable and highly variable nature of abrupt gestures makes it difficult to reproduce realistically through simulation or augmentation alone. Accordingly, this real-world dataset also provides a necessary foundation for any subsequent artificial data generation efforts.

2. Dataset Collection and Design

2.1. Experimental Setting and Data Collection

To simulate a pick-and-place task, the workstation illustrated in Figure 1 was used. The task consisted of picking a golf ball from a box and placing it in the hole associated with a station. The set-up was positioned on a table in front of the participant, and it was composed of a box containing 30 golf balls (diameter of 43 mm), a board with different holes (diameter of 60 mm), and a second smaller board with a single hole (diameter of 60 mm) placed at a height of 30 cm above the table. Four stations were defined: SA, SB and SC on the first board and SD on the second board (Figure 1). The board was set up by selecting the most suitable holes for stations SA, SB, and SC from a group of distributed holes, taking each participant’s anthropometry into account. The distance of station SD from the participant was adjusted individually prior to the test. Stations SA, SB, and SC were positioned at table level, while SD was elevated by 30 cm. A sound buzzer, used to generate acoustic alarms, was installed on the left side of the table, and a pair of LEDs (one green and one red) was placed near each station (Figure 1). For the standard movement, participants were asked to pick one ball at a time from the box and to place it into a specific station hole, whose sequence was defined by the lighting of green LEDs positioned near each station (Figure 2a). Interspersed with the standard movements, visual or acoustic alarms were randomly generated through the lighting of a red LED or the sound buzzer, respectively. In both cases, participants were asked to perform an abrupt movement as fast as possible, placing the ball inside the hole corresponding to the activated red LED (Figure 2b) or vertically extending the arm in case of the sound buzzer (Figure 2c).
Abrupt events were designed to engage two sensory modalities (vision and hearing) and to elicit movements in different directions, thereby reflecting a broader and more realistic industrial workspace and increasing motion variability. The visual trigger reproduces a common industrial scenario, such as the sudden fall of an object from a workstation or conveyor, prompting the instinctive reaction to reach out and catch or stabilize it. The auditory trigger, instead, mimics unexpected noise such as the sudden release of compressed air or the accidental drop of a tool, which may induce a rapid protective or evasive movement. Both situations can be potentially hazardous, particularly in human–machine interaction contexts. Each task consisted of 30 pick-and-place gestures, including 4 sudden alarms. The test was conducted three times for each participant under different conditions: using the right hand with the trunk facing the table (FR_R), using the left hand with the trunk facing the table (FR_L), and using the left hand with the trunk oriented laterally to the table (LA_L). During the experiment, the researchers verified the correct execution of each trial. In cases where participants failed to respond to more than one alarm, they were asked to repeat the test. Conversely, when participants failed to respond to only one alarm, the test was considered valid.
All signal events, including the illumination of the green LEDs and the activation of acoustic and visual alarms, were managed by an Arduino Nano microcontroller (Arduino, Ivrea, Italy) equipped with an ATmega328 processor, a 16 MHz clock, and a 5 V operating voltage. The control logic was implemented using the Arduino integrated development environment. A block diagram illustrating the code structure is shown in Figure 3. First, the instants corresponding to the generation of the visual and the acoustic alarms are randomized (yellow boxes in Figure 3). Specifically, one visual and one acoustic alarm are generated in the first half of the test, after the first six events. Two more alarms (one visual and one acoustic) are generated in the second half of the trial. In addition, a control mechanism was incorporated to prevent visual and acoustic alarms from overlapping. Next, the loop managing the pick-and-place phase is introduced in the scheme. Index i varies from 0 to 29 (number of pick-and-place tasks in each trial), while index j varies from 0 to 3 (corresponding to a specific station: 0 for SA, 1 for SB, 2 for SC, 3 for SD). In each cycle, a green LED (green boxes in Figure 3) first lights up at a specific station. Red and blue blocks of Figure 3 are related to the generation of visual and acoustic alarms, respectively. After the green LED lights up, the code checks whether an alarm (visual or acoustic) should be triggered. If so, either a red LED lights up at a specific station (other than the one where the green LED has lit up) or the sound buzzer is activated. Both the acoustic and visual alarms are activated 500 ms after the green LED is lit. The sequence of green LEDs and alarms is then updated. When the 30th cycle is completed, the code ends (white boxes in Figure 3).
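As an illustration of this scheduling logic, the following MATLAB sketch reproduces the randomization of the alarm cycles and the per-cycle checks described above. It is not the actual firmware (which runs on the Arduino Nano); the candidate cycle ranges, the random green LED sequence, and all variable names are assumptions made only for this example.

    % Illustrative MATLAB re-implementation of the alarm-scheduling logic of Figure 3.
    % The real control code runs on the Arduino Nano; the candidate cycle ranges, the
    % random green LED sequence, and all variable names are assumptions for this sketch.

    nCycles = 30;                          % pick-and-place gestures per trial
    halves  = {7:15, 16:nCycles};          % assumed candidate cycles per half (after the first six events)

    % Draw one visual and one acoustic alarm in each half, never in the same cycle
    alarmCycle = [];
    alarmType  = strings(0, 1);
    for h = 1:numel(halves)
        picked     = halves{h}(randperm(numel(halves{h}), 2));   % two distinct cycles
        alarmCycle = [alarmCycle, picked];                       %#ok<AGROW>
        alarmType  = [alarmType; "visual"; "acoustic"];          %#ok<AGROW>
    end

    stations = ["SA" "SB" "SC" "SD"];
    for i = 1:nCycles
        j = randi(4);                                            % station of the green LED (assumed random here)
        fprintf('Cycle %2d: green LED at %s\n', i, stations(j));
        k = find(alarmCycle == i, 1);
        if ~isempty(k)                                           % alarm fires 500 ms after the green LED
            if alarmType(k) == "visual"
                jRed = mod(j, 4) + 1;                            % any station other than j
                fprintf('          red LED (visual alarm) at %s\n', stations(jRed));
            else
                fprintf('          sound buzzer (acoustic alarm)\n');
            end
        end
    end

Drawing the two alarms of each half from disjoint, pre-defined cycle ranges is one simple way to satisfy the no-overlap constraint mentioned above.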
This study was approved by the Ethics Committee for Research at the Politecnico di Torino, appointed by D.R. 1012 on 26 November 2020 (protocol number: n. 66004/202). In total, 60 healthy participants (36 males and 24 females) with no musculoskeletal or neurological diseases were recruited for the experiment, and all provided written informed consent. Participants were asked to indicate their age among 9 ranges of 5 years each:
  • Six participants in the range 20–24 years;
  • Forty-one participants in the range 25–29 years;
  • Eleven participants in the range 30–34 years;
  • One participant in the range 50–54;
  • One participant in the range 55–59.
No participants were recruited in the 35–39, 40–44, 45–49, and 60–65 age ranges. Anthropometric data (mean ± standard deviation) of the participants are reported in the following: height = 1.73 ± 0.09 m, weight = 67.9 ± 11.2 kg, Body Mass Index = 22.5 ± 2.3 kg/m2, upper arm length = 0.34 ± 0.03 m, forearm length = 0.29 ± 0.02 m. Fifty-one subjects were right-handed and nine were left-handed. Each participant was equipped with five wireless MIMUs from an inertial sensor system (Opal™, APDM, Portland, OR, USA). Each sensor included a tri-axial accelerometer, gyroscope, and magnetometer, with ranges of ±200 g, ±2000 deg/s, and ±8 Gauss, respectively. As illustrated in Figure 4, the sensors were attached to the participants using the bands provided in the APDM kit, positioned on the right upper arm (RUA), right forearm (RFA), sternum (STR), left upper arm (LUA), and left forearm (LFA), with the x-axis of each sensor aligned along the longitudinal axis of the corresponding body segment.
Sensor placement was standardized across participants to minimize variability, while inherent intra- and inter-subject differences, including gender and anthropometry, were preserved to enhance the generalizability of the dataset. Data transmission from the MIMUs to the PC was handled through a Bluetooth connection. Data acquisition was performed using the proprietary Motion Studio™ V2R software (APDM, Portland, OR, USA) at a sampling rate of 200 Hz. To synchronize the MIMU system with the Arduino, a 5 V voltage trigger was sent from the Arduino to the Opal sensors. Raw MIMU data are provided without filtering, allowing users to apply their preferred preprocessing methods.

2.2. Database Structure

The structure of the DASIG dataset is schematized in Figure 5. The dataset is contained in a folder called DASIG, which includes sixty folders, one for each subject. In addition, there are two files that summarize the anthropometric characteristics of all participants (sub_info.csv) and the dataset organization (readme.txt), respectively. Three example videos are also provided to clarify the execution of the task. Two Matlab files are also included: a structure containing the database (DASIG.mat) and a script providing a wizard to plot all the signals (DASIG_plot.m).
In the folder associated with each subject there are seven files of three different types:
  • Data collected from all MIMUs related to the three different trials (subXXX_FR_R_MIMU.csv, subXXX_FR_L_MIMU.csv, subXXX_LA_L_MIMU.csv). Each .csv file contains:
    o Time (s) of the acquisition with a sampling frequency of 200 Hz;
    o Accelerations (m/s2) along the three sensor axes;
    o Angular velocities (rad/s) around the three sensor axes;
    o Magnetic fields (G) along the three sensor axes;
    o Orientation (expressed in the form of quaternions) of the sensor with respect to the Earth reference frame.
  • Data collected from the Arduino system related to the three different trials (subXXX_FR_R_Arduino.csv, subXXX_FR_L_Arduino.csv, subXXX_LA_L_Arduino.csv). Each .csv file contains:
    o Sequence of temporal instants (s) corresponding to the occurrence of specific events during the test (lighting of a green LED, lighting of a red LED, and activation of the sound buzzer);
    o Numeric code identifying the type of each event and the specific station in which it occurs. In detail, numbers from 2 to 5 indicate the lighting of a green LED, numbers from 6 to 9 indicate the lighting of a red LED, and number 10 indicates the activation of the sound buzzer. Specifically, numbers 2 and 6 identify events that occurred at station SA, numbers 3 and 7 at station SB, numbers 4 and 8 at station SC, and numbers 5 and 9 at station SD (a loading and decoding sketch is given at the end of this subsection).
  • Anthropometric data of the subject: gender, age range (years), height (m), weight (kg), dominant arm, right upper arm length (m), left upper arm length (m), right forearm length (m), left forearm length (m).
Since two visual and two acoustic alarms were generated for each test of each participant, the number of prompted abrupt events per test was four. However, in some cases, subjects were not able to react to a visual alarm because of the short time elapsed from the lighting of the green LED. In detail, the tests missing one abrupt movement associated with a visual alarm are: sub027_FR_R, sub033_LA_L, sub039_FR_R, sub056_FR_R, and sub060_FR_R.
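To illustrate how the MIMU and Arduino files of one trial can be used together, the following MATLAB sketch loads both .csv files of one subject and decodes the event codes listed above. The column order used here (event instant in the first column, event code in the second) is an assumption made only for illustration; readme.txt gives the authoritative layout.

    % Minimal loading sketch for one trial of one subject, assuming the file contents
    % listed above. The column order below is an assumption; see readme.txt.

    subj   = 'sub005';
    folder = fullfile('DASIG', subj);
    mimu   = readmatrix(fullfile(folder, [subj '_FR_L_MIMU.csv']));      % MIMU signals
    events = readmatrix(fullfile(folder, [subj '_FR_L_Arduino.csv']));   % event instants and codes

    fprintf('MIMU file: %d samples (about %.0f s at 200 Hz)\n', size(mimu, 1), size(mimu, 1)/200);

    tEvent   = events(:, 1);             % assumed: event instant (s)
    code     = events(:, 2);             % assumed: event code
    stations = ["SA" "SB" "SC" "SD"];
    for k = 1:numel(code)
        if code(k) >= 2 && code(k) <= 5
            fprintf('%6.2f s  green LED at %s\n', tEvent(k), stations(code(k) - 1));
        elseif code(k) >= 6 && code(k) <= 9
            fprintf('%6.2f s  visual alarm (red LED) at %s\n', tEvent(k), stations(code(k) - 5));
        elseif code(k) == 10
            fprintf('%6.2f s  acoustic alarm (buzzer)\n', tEvent(k));
        end
    end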

2.3. Database Statistics

To ensure an informative dataset, descriptive statistics were computed for both linear accelerations and angular velocities. Signals were segmented into 3-s windows based on the activation of the green LEDs (information provided by Arduino). Each window was then classified as either standard or abrupt, depending on whether a visual or acoustic alarm occurred. For each window, the Root Mean Square (RMS) of the acceleration and angular velocity norms was calculated. Subsequently, the mean and standard deviation of the RMS values were computed across all windows and participants, keeping the three trials separate and distinguishing between standard and abrupt gestures (Table 1).
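As a minimal sketch of this procedure, the windowing and RMS computation could look as follows in MATLAB; placeholder data are used here, whereas in practice the acceleration record of one MIMU and the green LED instants from the matching Arduino file would be used, and the same steps apply to the angular velocity.

    % Sketch of the window-level RMS computation described above, on placeholder data.

    fs        = 200;                      % sampling frequency (Hz)
    acc       = 9.81 + randn(90*fs, 3);   % placeholder 90-s tri-axial acceleration (m/s2)
    ledOnsets = 0:3:87;                   % placeholder green LED instants (s)

    win    = 3 * fs;                      % 3-s window length in samples
    rmsAcc = zeros(numel(ledOnsets), 1);
    for k = 1:numel(ledOnsets)
        i0  = round(ledOnsets(k) * fs) + 1;                 % first sample of the window
        idx = i0 : min(i0 + win - 1, size(acc, 1));         % clip at the end of the record
        a   = vecnorm(acc(idx, :), 2, 2);                   % acceleration norm per sample
        rmsAcc(k) = sqrt(mean(a.^2));                       % RMS of the norm over the window
    end

    % Each window is then labelled standard or abrupt from the alarm codes, and the mean
    % and standard deviation of rmsAcc are computed separately for the two classes.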
It can be seen that the mean RMS values differ between standard and abrupt gestures. To assess the statistical significance of this difference, a Wilcoxon test was conducted, producing a p-value well below 0.01 for all three trials and for both signals.
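A minimal sketch of such a comparison is given below, assuming per-participant mean RMS values for the two classes and a paired Wilcoxon signed-rank test; the pairing choice and the placeholder values are assumptions, and MATLAB’s signrank function requires the Statistics and Machine Learning Toolbox.

    % Paired Wilcoxon signed-rank test between standard and abrupt RMS values.
    % The 60-by-1 vectors below are placeholders for per-participant mean RMS values.
    rmsStandard = 9.96 + 0.14*randn(60, 1);
    rmsAbrupt   = 11.18 + 1.17*randn(60, 1);
    p = signrank(rmsStandard, rmsAbrupt);
    fprintf('Wilcoxon signed-rank p-value: %.3g\n', p);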

3. Database Demonstration

To demonstrate how to use the contents of the proposed dataset, a MATLAB® (MathWorks, Natick, MA, USA) structure called DASIG.mat has been created from the .csv files and inserted into the folder. Additionally, a script named DASIG_plot.m has been written to visualize the database content. First, the script loads the structure containing the complete dataset. Then, the user is prompted to select the subject, trial, and MIMU location of interest by entering a numeric code following a guided procedure. Finally, the script generates four figures displaying linear accelerations, angular velocities, magnetic field, and orientation expressed through quaternions. As an example, Figure 6, Figure 7, Figure 8 and Figure 9 have been obtained by running the script DASIG_plot.m. They illustrate the signals recorded by the left forearm MIMU (LFA) during the FR_L trial of subject 005. Specifically, Figure 6 displays linear accelerations, Figure 7 shows angular velocities, Figure 8 presents the magnetic field components, and Figure 9 depicts the quaternion trends. Each figure consists of two panels. The upper panel shows the time series of all components of the selected signal. The x-axis spans 90 s for all plots, corresponding to the duration of each trial. The y-axis is standardized across all subjects, with limits set according to the maximum and minimum values in the dataset. Trigger events are marked with vertical colored lines: green for green LED activations, red for visual alarm activations, and blue for acoustic alarm activations. The occurrence of triggers is also depicted in the second panel of each figure, where colored dots (matching the vertical lines in the first panel) represent the temporal sequence of trigger events. Specifically, the horizontal placement of green and red dots indicates the station (SA, SB, SC, or SD) where the LEDs were activated. Blue dots, related to acoustic alarms (AA), are positioned at a higher level to indicate that subjects raised their arm.
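For users who prefer to work directly from the .csv files instead of DASIG.mat, a figure similar to the upper panel of Figure 6 can be reproduced with a few lines of MATLAB. The column layout assumed below (time in the first column and the accelerations of the chosen MIMU in the following three) is an assumption for illustration only; DASIG_plot.m remains the reference tool, since it handles the exact structure of DASIG.mat.

    % Sketch of an acceleration plot with trigger markers, built directly from the .csv files.
    % Column indices are assumptions; see readme.txt for the authoritative layout.
    subj   = 'sub005';
    mimu   = readmatrix(fullfile('DASIG', subj, [subj '_FR_L_MIMU.csv']));
    events = readmatrix(fullfile('DASIG', subj, [subj '_FR_L_Arduino.csv']));

    figure; hold on; grid on;
    plot(mimu(:, 1), mimu(:, 2:4));                     % assumed: time and LFA accelerations
    for k = 1:size(events, 1)
        if events(k, 2) <= 5
            c = 'g';                                    % green LED
        elseif events(k, 2) <= 9
            c = 'r';                                    % visual alarm (red LED)
        else
            c = 'b';                                    % acoustic alarm (buzzer)
        end
        xline(events(k, 1), c);                         % trigger instant
    end
    xlabel('Time (s)'); ylabel('Acceleration (m/s^2)'); xlim([0 90]);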

4. Discussion

DASIG contains comprehensive data acquired through MIMUs during the execution of the pick-and-place task, including raw inertial signals, information on the visual and acoustic alarms used to induce standard and abrupt movements, detailed anthropometric measurements for all sixty participants, and a script demonstrating how to load, process, and analyze the dataset. The data showed statistically significant differences between the two types of movement in both acceleration and angular velocity, demonstrating that the two groups are distinguishable and exhibit distinct characteristics.

By providing access to all raw and synchronized data streams, DASIG enables a wide range of analyses. Researchers can investigate movement patterns during standard gestures and assess gesture repeatability both within and across participants. The dataset also supports the evaluation of response times to alarm signals, as well as an in-depth examination of the specific abrupt gestures performed in reaction to those alarms. Furthermore, the structure of the experimental protocol allows for the study of how movement strategies vary depending on different starting and ending stations, as well as on the operator’s relative position within the workspace. Comparisons between gestures executed with the dominant and non-dominant arm are also possible, making the dataset suitable for assessing lateralization effects. These analyses can be conducted at both the intra-subject and inter-subject levels, thus providing insights into individual variability and collective trends. In addition, DASIG represents a valuable resource for training and validating artificial intelligence algorithms aimed at gesture recognition, anomaly detection, and early identification of abrupt movements.

It is important to emphasize that, although the provided data allow for a wide range of analyses, depending on the specific analysis, filtering, cleaning, and data-handling procedures are required to ensure robust and application-specific results. For example, in the case of abrupt-gesture recognition, it may be necessary to redefine abrupt windows not only by considering the presence or absence of an alarm, but also by accounting for the exact activation instant, the reaction time preceding the actual movement, and the duration of the movement itself, so as to eliminate the influence of normal motions occurring within the abrupt windows. Detecting these gestures promptly is crucial for developing effective and safe human–machine interaction in industrial contexts.

The main limitation of this dataset is that it was not collected in a real industrial environment. However, the experimental setup and protocol faithfully reproduce classical industrial pick-and-place gestures. Accordingly, the dataset is highly suitable for studying how human movements can be integrated with, or anticipated by, robotic systems in collaborative industrial scenarios.

5. Conclusions

This paper presents the DASIG dataset [19], collected using MIMUs on sixty participants performing a pick-and-place task in a simulated workstation. Existing datasets of industrial gestures, whether acquired with traditional visual systems or with MIMUs, typically include only repetitive and conventional movements and involve fewer than twenty participants. In contrast, the dataset proposed in this paper comprises both standard movements and abrupt movements elicited by unexpected situations or momentary inattention, and it includes sixty participants to enhance its representativity and the generalizability of any analyses conducted by future users. DASIG provides raw, synchronized data that support diverse analyses of movement patterns, gesture repeatability, alarm-response behavior, and workspace-dependent strategies at both the intra- and inter-subject levels. Its structure also makes it suitable for training and validating AI models for gesture recognition, anomaly detection, and early identification of abrupt movements.

Author Contributions

Conceptualization, E.D., M.P., L.G. and S.P.; methodology, E.D., M.P. and E.C.; data curation, E.D., M.P. and E.C.; writing—original draft preparation, E.D., M.P. and E.C.; writing—review and editing, E.D., L.G. and S.P.; supervision, L.G. and S.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The Dataset of Standard and Abrupt Industrial Gestures—DASIG is available here: https://doi.org/10.5281/zenodo.17660014 (accessed on 26 November 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. ISO/TS 15066:2016; Robots and Robotic Devices—Collaborative Robots. International Organization for Standardization: Geneva, Switzerland, 2016.
  2. Digo, E.; Polito, M.; Pastorelli, S.; Gastaldi, L. Detection of upper limb abrupt gestures for human–machine interaction using deep learning techniques. J. Braz. Soc. Mech. Sci. Eng. 2024, 46, 227.
  3. Cohen, Y.; Faccio, M.; Galizia, F.G.; Mora, C.; Pilati, F. Assembly system configuration through Industry 4.0 principles: The expected change in the actual paradigms. IFAC-PapersOnLine 2017, 50, 14958–14963.
  4. Quintero, C.P.; Tatsambon, R.; Gridseth, M.; Jagersand, M. Visual pointing gestures for bi-directional human robot interaction in a pick-and-place task. In Proceedings of the 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Kobe, Japan, 31 August–4 September 2015; pp. 349–354.
  5. Boldo, M.; Bombieri, N.; Centomo, S.; De Marchi, M.; Demrozi, F.; Pravadelli, G.; Quaglia, D.; Turetta, C. Integrating Wearable and Camera Based Monitoring in the Digital Twin for Safety Assessment in the Industry 4.0 Era. In Leveraging Applications of Formal Methods, Verification and Validation. Practice; Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Cham, Switzerland, 2022; Volume 13704 LNCS, pp. 184–194.
  6. De Feudis, I.; Buongiorno, D.; Grossi, S.; Losito, G.; Brunetti, A.; Longo, N.; Di Stefano, G.; Bevilacqua, V. Evaluation of Vision-Based Hand Tool Tracking Methods for Quality Assessment and Training in Human-Centered Industry 4.0. Appl. Sci. 2022, 12, 1796.
  7. Calderón-Sesmero, R.; Lozano-Hernández, A.; Frontela-Encinas, F.; Cabezas-López, G.; De-Diego-Moro, M. Human–Robot Interaction and Tracking System Based on Mixed Reality Disassembly Tasks. Robotics 2025, 14, 106.
  8. Raj, R.; Kos, A. Study of Human–Robot Interactions for Assistive Robots Using Machine Learning and Sensor Fusion Technologies. Electronics 2024, 13, 3285.
  9. Dallel, M.; Havard, V.; Baudry, D.; Savatier, X. InHARD-Industrial Human Action Recognition Dataset in the Context of Industrial Collaborative Robotics. In Proceedings of the 2020 IEEE International Conference on Human-Machine Systems, ICHMS 2020, Rome, Italy, 7–9 September 2020.
  10. Lagamtzis, D.; Schmidt, F.; Seyler, J.; Dang, T. CoAx: Collaborative Action Dataset for Human Motion Forecasting in an Industrial Workspace. Scitepress 2022, 3, 98–105.
  11. Rudenko, A.; Kucner, T.P.; Swaminathan, C.S.; Chadalavada, R.T.; Arras, K.O.; Lilienthal, A.J. THÖR: Human-Robot Navigation Data Collection and Accurate Motion Trajectories Dataset. IEEE Robot. Autom. Lett. 2020, 5, 676–682.
  12. Delamare, M.; Duval, F.; Boutteau, R. A new dataset of people flow in an industrial site with UWB and motion capture systems. Sensors 2020, 20, 4511.
  13. Tamantini, C.; Cordella, F.; Lauretti, C.; Zollo, L. The WGD—A dataset of assembly line working gestures for ergonomic analysis and work-related injuries prevention. Sensors 2021, 21, 7600.
  14. Kratzer, P.; Bihlmaier, S.; Midlagajni, N.B.; Prakash, R.; Toussaint, M.; Mainprice, J. MoGaze: A Dataset of Full-Body Motions that Includes Workspace Geometry and Eye-Gaze. IEEE Robot. Autom. Lett. 2021, 6, 367–373.
  15. Duarte, L.; Neto, P. Classification of primitive manufacturing tasks from filtered event data. J. Manuf. Syst. 2023, 68, 12–24.
  16. Digo, E.; Pastorelli, S.; Gastaldi, L. A Narrative Review on Wearable Inertial Sensors for Human Motion Tracking in Industrial Scenarios. Robotics 2022, 11, 138.
  17. Olivas-Padilla, B.E.; Glushkova, A.; Manitsaris, S. Motion Capture Benchmark of Real Industrial Tasks and Traditional Crafts for Human Movement Analysis. IEEE Access 2023, 11, 40075–40092.
  18. Maurice, P.; Malaisé, A.; Amiot, C.; Paris, N.; Richard, G.-J.; Rochel, O.; Ivaldi, S. Human movement and ergonomics: An industry-oriented dataset for collaborative robotics. Int. J. Robot. Res. 2019, 38, 1529–1537.
  19. Digo, E.; Polito, M.; Caselli, E.; Gastaldi, L.; Pastorelli, S. Dataset of Standard and Abrupt Industrial Gestures (DASIG). Zenodo, 2025. https://doi.org/10.5281/zenodo.17660014
Figure 1. Top view of the experimental set-up with all stations (SA, SB, SC, and SD). Pink arrows indicate the location of green and red LEDs of each station, while the blue arrow indicates the location of the sound buzzer.
Figure 2. Experimental protocol: (a) Standard movement—indicated by a green LED; (b) Abrupt movement—indicated by a red LED (visual alarm); (c) Abrupt movement—indicated by a buzzer (acoustic alarm).
Figure 3. Scheme of the developed Arduino code.
Figure 4. MIMUs positioning on upper body and their reference frames.
Figure 5. Dataset DASIG structure.
Figure 6. Example of plot from DASIG database (subject = 005, trial = FR_L, MIMU = LFA, and signal = acceleration).
Figure 7. Example of plot from DASIG database (subject = 005, trial = FR_L, MIMU = LFA, and signal = angular velocity).
Figure 8. Example of plot from DASIG database (subject = 005, trial = FR_L, MIMU = LFA, and signal = magnetic field).
Figure 9. Example of plot from DASIG database (subject = 005, trial = FR_L, MIMU = LFA, and signal = orientation through quaternions).
Table 1. RMS values of accelerations and angular velocities averaged among windows (mean ± standard deviation), and p-values obtained from the Wilcoxon test between standard and abrupt values (** indicate a statistically significant difference).
                            FR_R                          FR_L                          LA_L
                            Standard       Abrupt         Standard       Abrupt         Standard       Abrupt
Acceleration (m/s2)         9.96 ± 0.14    11.18 ± 1.17   9.98 ± 0.11    11.07 ± 1.01   9.96 ± 0.11    10.97 ± 0.91
p-value                     <0.01 **                      <0.01 **                      <0.01 **
Angular velocity (rad/s)    1.29 ± 0.35    2.11 ± 0.73    0.98 ± 0.25    1.94 ± 0.74    1.12 ± 0.33    1.97 ± 0.68
p-value                     <0.01 **                      <0.01 **                      <0.01 **

