1. Introduction
Industrial security is achieved when adequate measures and procedures are applied to obtain access to, handle or generate classified information during the execution of a classified contract or program. Industrial safety is the set of rules and activities aimed at preventing and limiting the potential risks associated with an industry, including both transient and permanent risks [
1,
2].
Many safety protocols have been proposed to improve the quality of life of workers using different techniques [
3,
4]. Several studies have examined how the availability of artificial intelligence (AI) techniques could affect the industrial organization of both AI service providers and industries adopting AI technology [
5]. Above all, the impact of AI on industry 4.0 and its possible applications in other fields have been studied in depth [
6].
In recent years, research has also been conducted on the applications of AI in the manufacturing industry [
7,
8,
9,
10,
11]. The system architectures described in these works integrate technology with communication systems and permit the analysis of intelligent manufacturing. The information provided offers an overview of the possible applications of AI in all industrial areas.
AI makes it possible to optimize decision making in both simple and very complex situations. The AI boom that has taken place in recent decades has led to the development of countless AI applications in numerous areas. At present, increasingly better solutions are available to protect the lives of workers when they are exposed to high-risk conditions. That is why, in industry, AI is combined with security measures in order to create an environment that offers better conditions for industrial development.
The objective of the proposed device is to improve occupational safety and health (OSH), increasing employee performance by reducing the probability of illness, injury, absence or death [
12]. Another objective is to contribute to the third wave, as proposed by Niu et al. [
13,
14], through the implementation of intelligent systems for early risk detection in the working environment.
Different studies have been conducted on the creation of devices for OSH, and they indicate the need to implement increasingly innovative solutions for workers in high-risk areas. For example, in 2014 [
15] a study conducted among 209 welders in India found that all of them had suffered more than 2 injuries and that 44% (92) had suffered more than 10. Furthermore, in 2020 [
16] an analysis of workplace-related injuries in major industries such as agriculture, construction, manufacturing and health care was carried out. The data were obtained from the Bureau of Labor Statistics, and it was found that, from 1992 to 2018, there were 4,471,340 injuries to the upper extremities, 3,296,547 to the lower extremities, and 5,889,940 to the trunk (
p < 0.05). Therefore, the motivation behind this research is to propose an innovative helmet equipped with sensors for temperature, humidity, atmospheric pressure, the force exerted between the helmet and the head of the user, axis variations, air quality and luminosity, so that, through specialized IoT modules, the reaction time to an accident within a work team can be reduced. All the research papers that address the problem of OSH are summarised in
Table 1 with the purpose of comparing the improvements and advantages of similar research.
The information coming from the sensors is analyzed through a platform known as ThingsBoard, and independent alarms are configured using this information. Likewise, the sensor data are adapted for classification by a Convolutional Neural Network (CNN), which reaches an accuracy of 92.05% in cross-validation and is compared with three other supervised learning models.
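For illustration, the sketch below shows a minimal 1D CNN of the kind described above, built with TensorFlow/Keras. The layer sizes, the number of input features and the number of classes are assumptions for demonstration only and do not reproduce the exact architecture or dataset used in this work.

```python
# Minimal sketch (not the exact architecture used in this work) of a 1D CNN that
# classifies fixed-length vectors of helmet sensor readings into risk classes.
import numpy as np
from tensorflow.keras import layers, models

N_FEATURES = 8   # assumed number of sensor features (temperature, humidity, pressure, ...)
N_CLASSES = 12   # assumed number of risk situations

def build_model() -> models.Sequential:
    model = models.Sequential([
        layers.Input(shape=(N_FEATURES, 1)),                      # one reading per sensor channel
        layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
        layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
        layers.GlobalMaxPooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(N_CLASSES, activation="softmax"),            # one probability per risk class
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Synthetic data standing in for the helmet dataset, only to show the training call.
    X = np.random.rand(1000, N_FEATURES, 1).astype("float32")
    y = np.random.randint(0, N_CLASSES, size=1000)
    model = build_model()
    model.fit(X, y, epochs=5, validation_split=0.2, verbose=0)
```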
The remaining part of this work is organized as follows:
Section 2 gives an overview of the related literature.
Section 3 describes the system design. A multisensory helmet with communication in IIoT and AI-based information analysis is presented in
Section 4. Finally, the last section presents the conclusions, limitations and future lines of research.
2. Related Works
Protective equipment is of obligatory use in cases where the safety of the worker is at risk. However, detecting hazardous situations in a timely manner is not always possible, leading to the occurrence of accidents. Such events compromise the worker’s health and safety and reduce the worker’s confidence in the company for which they work [
17,
18,
19]. For effective prevention of injuries or fatal accidents in the working environment, the integration of electronic components is crucial given their ability for early risk detection. The research of Henley, E.J. and Kumamoto, H. [
20] proposed a quantitative approach for the optimal design of safety systems which focused on information links (human and computer), sensors, and control systems. In 2003, Condition Monitoring (CM) was addressed in the research of Y. Han and Y. H. Song [
21] including a review of popular CM methods, as well as the status of CM research on transformers, generators, and induction motors. In December 2001, the factor structure of a safety climate within a road construction organization was determined by A.I. Glendon and D.K. Litherland [
22] using a modified version of the safety climate questionnaire (SCQ). They also investigated the relationship between safety climate and safety performance. In March 2011, Intelligent Internet of Things for Equipment Maintenance (IITEM) was presented by Xu Xiaoli et al. [
23]. The static and dynamic information on electrical and mechanical equipment is collected by IITEM from all kinds of sensors, and the different types of information are standardized, facilitating Internet of Things information transmission [
24,
25]. The investigations that address motion monitoring and sensor networks have been compiled in
Table 2.
Moreover, an Accident Root Causes Tracing Model (ARCTM), tailored to the needs of the construction industry, has been presented by Tariq S. Abdelhamid and John G. Everett [
26]. In January 2010, guidelines for identifying, analyzing and disseminating information on near misses at construction sites were defined by Fabricio Borges Cambraia et al. [
27]. In September 2013, three case studies were presented by Tao Cheng and Jochen Teizer [
28] which employed methods for recording data and visualizing information on construction activities at a (1) simulated virtual construction site, (2) outdoor construction setting, and (3) worker training environment. Furthermore, systems theory has been incorporated into Heinrich’s domino theory by Seokho Chi and Sangwon Han [
29] to explore the interrelationships between risks and to break the chain of accident causation. In April 2008, the reasons for which construction workers engage in unsafe behavior were discussed in the empirical research of Rafiq M. Choudhry and Dongping Fang [
30]. Interviews were conducted in Hong Kong with workers who had been accident victims. In addition, Daniel Fitton et al. [
31] applied sensing and communication technologies to augment equipment so that its use can be measured, in order to enable new pay-per-use payment models for equipment hire. The areas in which it is necessary to create a safer working environment are listed in
Table 3. This can be achieved through the use of sensors for monitoring environmental parameters and capturing motion.
In December 2008, the underlying biomechanical elements required to understand and study human movement were identified by A. Godfrey et al. [
32]. A method for investigating the kinematics and dynamics of locomotion without any laboratory-related limitations has been developed by Yasuaki Ohtaki et al. [
33]. In April 2012, the usage of the Unscented Kalman Filter (UKF) as the integration algorithm for the inertial measurements was proposed by Francisco Zampella et al. [
34]. Furthermore, in 2012, a micro wearable device based on a tri-axis accelerometer was introduced by Yinghui Zhou et al. [
35]. It can detect change in the acceleration of the human body on the basis of the position of the device. In 2009, a method for the recognition of daily human activities was developed by Chun Zhu and Weihua Sheng [
36]. This method involved fusing the data from two wearable inertial sensors attached to the foot and the waist of the subject. In October 2012, Martin J.-D. Otis and Bob-Antoine J. Menelas [
37] reported an ongoing project whose objective was to create intelligent clothes for fall prevention in the work environment. In 2007, a signal transform method, called Common Spatial Pattern, was introduced by Hong Yu et al. [
38] for Electroencephalographic (EEG) data processing. In March 2006, the development history of a wearable, called the scalable vibrotactile stimulus delivery system, was presented by Robert W. Lindeman et al. [
39]. In 2014, an objective and real-time approach based on EEG spectral analysis for the evaluation of fatigue in SSVEP-based BCIs was proposed by Teng Cao et al. [
40].
Thanks to the implementation of communication technologies, it is possible to notify both the managing staff and the workers about the hazards encountered in a particular working area. A helmet that implements Zigbee transmission technologies for the analysis of variables such as humidity, temperature and methane in mines has been developed by Qiang et al. (2009) [
41]. This helmet helps decrease the risk of suffering an accident during the coal extraction process. An intelligent helmet for the detection of anomalies in mining environments was also proposed by Nithya et al. (2018) [
42]. This research points to the possibility of integrating components in the PPE that would alert the worker of the presence of danger. Moreover, the vital signs of the worker are monitored by their helmet, making it possible to monitor their state of health. An emergency button on the helmet is used for the transmission of alerts via Zigbee technologies to the personnel nearest to the working environment. Accelerometers have been integrated in safety helmets by Kim et al. (2018) [
43], with the purpose of detecting if the safety helmet is being worn properly, improperly or not worn at all while the worker performs their tasks. In December 2016, a framework for real-time pro-active safety assistance was developed by Yihai Fang et al. [
44] for mobile crane lifting operations.
Ensuring the physical well-being of workers is the responsibility of employers. PPE helmets offer better protection to today’s workers by shielding the head from blows. However, in some cases it is also important to monitor other aspects of the worker’s safety. Li et al. (2014) [
45] developed a helmet which, by means of sensors, measures the impact of blows to the worker’s head. Sensors for brain activity detection are also implemented in the helmet. In terms of movement, identifying the position of the worker is essential in order to detect falls that result in physical injury or fatal accidents.
In 2019, Machine Learning (ML) algorithms for the prediction and classification of motorcycle crash severity were employed in research by Wahab, L., and Jiang, H. [
46]. Machine-learning-based techniques are non-parametric models without any presumption of the relationships between endogenous and exogenous variables. Another objective of that study was to evaluate and compare different approaches to modeling motorcycle crash severity, as well as to investigate the risk factors involved and the effects of motorcycle crashes. In 2015, a scalable concept and an integrated system demonstrator were designed by Bleser, G. et al. [
47]. The basic idea was to learn workflows by observing multiple expert operators and then to transfer the learned workflow models in order to monitor and assist other operators. In 2019, an intelligent video surveillance system which detected motorcycles automatically was developed by Yogameena, B., Menaka, K., and Perumaal, S. S. [
48]. Its purpose was to identify whether motorcyclists were wearing safety helmets or not. If a motorcyclist was found without a helmet, their License Plate (LP) number was recognised and legal action was taken against them by the traffic police and the legal authority, such as assigning penalty points to the motorcyclist’s vehicle license and Aadhar Number (applicable to the Indian scenario). In 2017, a comparison of four statistical and ML methods was presented by Iranitalab, A., and Khattak, A. [
49], including Multinomial Logit (MNL), Nearest Neighbor Classification (NNC), Support Vector Machines (SVM) and Random Forests (RF), in relation to their ability to predict traffic crash severity. A crash costs-based approach was developed to compare crash severity prediction methods, and to investigate the effects of data clustering methods—K-means Clustering (KC) and Latent Class Clustering (LCC)—on the performance of crash severity prediction models. These novel proposals are compiled in
Table 4. They employ artificial intelligence and machine learning, and they represent a significant improvement in different scenarios.
In 2005, the results obtained with the random forest classifier were presented in the research of M. Pal [
50] and its performance was compared with that of support vector machines (SVMs) in terms of classification accuracy, training time and user-defined parameters. In January 2012, the performance of the RF classifier for land cover classification of a complex area was explored by V. F. Rodriguez-Galiano et al. [
51]; the evaluation was based on several criteria: mapping accuracy, sensitivity to data set size and noise. Furthermore, in February 2014, a random forest (RF) classifier approach was proposed by Ahmad Taher Azar et al. [
52] for the diagnosis of lymph diseases. In April 2016, the use of the RF classifier in remote sensing was reviewed by Mariana Belgiu and Lucian Drăguţ [
53]. In addition, in 2015, machine learning approaches, including k-nearest neighbor (k-NN), a rules-based classifier (JRip), and random forest, were investigated by Esrafil Jedari et al. [
54] to estimate the indoor location of a user or an object using an RSSI-based fingerprinting method. Finally, in July 2011, a method utilizing the Healthcare Cost and Utilization Project (HCUP) dataset was presented by Mohammed Khalilia et al. [
55] for predicting disease risk in individuals on the basis of their medical history.
With regard to CNNs, in 2020, an automated system for the identification of motorcyclists without helmets from real-time traffic surveillance videos was presented by Shine L. and Jiji, C. V. [
56]. A two-stage classifier was used to detect motorcycles in surveillance videos, and the detected motorcycles were fed into a helmet identification stage based on a CNN. Moreover, in July 2019, a similar approach to detecting motorcyclists without helmets was presented by Yogameena B. et al. [
48]; it was different in that it combined a CNN with a Gaussian Mixture Model (GMM) [
57]. Furthermore, in 2020, a system that uses image processing and CNNs was developed by Raj K. C. et al. [
58] for the identification of motorcyclists who violate helmet laws. The system includes motorcycle detection, helmet vs. helmetless classification and motorcycle license plate recognition. As can be observed, CNNs have been used mainly for real-time image processing. In contrast, this paper proposes the use of a CNN for the evaluation of linear (one-dimensional) sensor data: the CNN is integrated (input–output) into a rules model for the classification of different problems in working environments. The works presented above inspired and support the given research. A diagram of the most representative technologies in the state of the art is given in
Figure 1. These technologies are the main basis of the proposal.
3. Smart Helmet 5.0 Platform
There are different methodologies for carrying out research on electronics and system design. Thus, in this section, a description of the hardware and software used for the development of the fifth version of the smart helmet is presented, and the procedure followed for its subsequent validation through the AI model is detailed. The four previous helmet versions included less sensorisation and connectivity, which is why a new version incorporating all the improvements was developed.
3.1. Hardware Platform
The development of the proposed helmet followed the steps of a prototype development methodology, beginning with the identification of the parameters to be monitored in the environment. A Job Safety Analysis (JSA) was performed to identify the risk factors that lead to worker injuries and accidents [
59]. The deficiencies that have been observed are presented in
Table 5 for workplaces such as mines, construction sites and electrical work areas. They are related to aspects such as lighting, detection of blows to the worker’s helmet (PPE detection), temperature levels that are dangerous for human activity and poor air quality in the environment. Other parameters that could be of interest, such as noise, pulse rate and body temperature, are implemented in other devices for better ergonomics.
Given the above, a series of specialized sensors are proposed to counteract the difficulties that usually occur in a high-risk work environment [
1], see
Table 6. As seen in the literature review, agricultural and industrial activities, among others, involve high risk.
In terms of the transmission of information from sensors, the use of Wi-Fi technologies has been selected due to their ability to transmit the information in Local Area Networks (LAN) to a web server responsible for collecting, processing and transmitting anomaly warnings to the worker or administrative personnel. The following describes the system design and the interaction of the components.
The elements used in the smart helmet and the risks it seeks to prevent or detect are detailed below. The operation of the Smart PPE and the distribution of the circuits will also be discussed. In addition, the architecture and technologies are explained, as well as the operating rules of the different sensors and actuators that make up the system. Finally, their communication system is considered, as well as the technology used for both the management of the data and for its visualization and treatment once obtained.
The aim of this Smart PPE is to protect the operator from possible impacts, while monitoring variables in their environment such as the amount of light, humidity, temperature, atmospheric pressure, presence of gases and air quality. At the same time, the Smart PPE must be bright enough to be seen by other workers, and its light source will provide extra vision to the operator. Any alerts will be transmitted to the operator by means of sound beeps. The sensors described below were selected as part of the set of electronic devices to be implemented:
For the visualization of environmental data, an LED strip is deployed on the helmet to notify the worker of anomalies in the environment through color codes. The block diagram shown in
Figure 2 is a representation of the electronic system integrated in the helmet.
The specifications of the sensors and the microcontroller used to monitor the environment are defined as follows:
The component used to supervise the parameters of gas, pressure, temperature and humidity is the low power environmental sensor DFRobot BME680. It is a MEMS (Micro-Electromechanical System) multifunctional 4 in 1 environmental sensor that integrates a VOC (Volatile Organic Compounds) sensor, temperature sensor, humidity sensor and barometer. The environmental pressure is subject to many short-term changes caused by external disturbances. To suppress disturbances in the output data without causing additional interface traffic and processor work load, the BME680 features an internal IIR filter. The output of the subsequent measurement step is filtered using the following Equation (
1):
$x_{\mathrm{new}} = \dfrac{x_{\mathrm{old}} \cdot (c - 1) + x_{\mathrm{ADC}}}{c} \qquad (1)$
where $x_{\mathrm{old}}$ is the data coming from the current filter memory, $x_{\mathrm{ADC}}$ is the data coming from the current ADC acquisition, $c$ is the filter coefficient, and $x_{\mathrm{new}}$ denotes the new value of the filter memory, which is also the value that will be sent to the output registers (a short numerical sketch of this filter is given after the component descriptions).
The sensor implemented for the monitoring of the level of brightness is the ALS-PT19 ambient light sensor. Due to its high rejection ratio of infrared radiation, the spectral response of the ambient light sensor resembles that of the human eye.
The sensor implemented for shock detection is a force-sensitive resistor (FSR); it emits shock alerts if the readings obtained in the environment exceed a threshold value.
The sensor responsible for detecting falls suffered by the worker is the MPU6050 module, an electronic component with six axes (three corresponding to the gyroscope and three to the accelerometer), making it possible to obtain positioning values along the X, Y and Z axes.
The light source integrated in the helmet is an Adafruit NeoPixel LED strip, which integrates a multicolor LED in each section of the strip. The algorithm implemented in the microcontroller is configured in such a way that the color of the LED strip can be controlled.
The microcontroller used for processing the sensor information and transmitting it to the web platform for display is the dual-core ESP-WROOM-32 module of the DFRobot FireBeetle series, which supports communication through Wi-Fi and Bluetooth. The main controller supports two power methods: USB and a 3.7 V external lithium battery.
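As anticipated in the BME680 description, the following short sketch shows the recursive filter of Equation (1) in Python; the filter coefficient and the sample readings are arbitrary values chosen for illustration.

```python
# Numerical sketch of the IIR smoothing described by Equation (1).
def iir_filter(x_old: float, x_adc: float, c: int = 3) -> float:
    """Return the new filter-memory value from the previous filtered value
    x_old, the current ADC acquisition x_adc, and the filter coefficient c."""
    return (x_old * (c - 1) + x_adc) / c

# Example: smoothing a short series of pressure readings (hPa); the 1020.5
# outlier is damped instead of propagating directly to the output registers.
filtered = 1013.0
for raw in (1013.2, 1013.1, 1020.5, 1013.0):
    filtered = iir_filter(filtered, raw)
    print(round(filtered, 2))
```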
The components are connected to the microcontroller, which obtains and processes the information coming from the sensors. This information is then transmitted to the implemented web server by means of the Wi-Fi module. The designed electronic system is located on the back of the helmet, as shown in
Figure 3. It also integrates a lamp which is activated automatically if the brightness value of the sensor is below the threshold value established in the programming of the microcontroller. The information transmitted by the helmet can be viewed on a web platform.
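To make the interaction between sensor readings and actuators concrete, the sketch below outlines the kind of decision logic running on the microcontroller. It is only an illustration: the threshold values are assumptions, and the readings are passed in as plain numbers rather than obtained from the actual BME680, ALS-PT19, FSR, MPU6050 and NeoPixel drivers.

```python
import math

# Illustrative thresholds; the real values are configured in the helmet firmware.
LIGHT_MIN_LUX = 50     # below this level the helmet lamp is switched on
FORCE_MAX = 600        # FSR readings above this are treated as a blow to the shell
ACCEL_FALL_G = 2.5     # acceleration magnitude above this suggests a fall

def control_step(light_lux: float, force_raw: float, accel_xyz: tuple) -> dict:
    """Compute actuator commands and alert flags for one loop iteration."""
    ax, ay, az = accel_xyz
    accel_mag = math.sqrt(ax * ax + ay * ay + az * az)   # MPU6050 magnitude in g

    commands = {
        "lamp_on": light_lux < LIGHT_MIN_LUX,            # automatic lamp activation
        "impact_alert": force_raw > FORCE_MAX,           # blow detected on the helmet
        "fall_alert": accel_mag > ACCEL_FALL_G,          # abrupt movement or fall
    }
    # Color code on the NeoPixel strip: red for any alert, green otherwise.
    commands["led_color"] = (255, 0, 0) if (commands["impact_alert"]
                                            or commands["fall_alert"]) else (0, 255, 0)
    return commands

# Example iteration with made-up readings (dark environment, no impact, strong acceleration).
print(control_step(light_lux=20, force_raw=120, accel_xyz=(0.1, 0.2, 3.1)))
```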
The following subsection describes the developed software and the interaction that takes place between the different components.
3.2. Intelligence Module
Firstly, the communication between the active sensors is enabled by Thingsboard. ThingsBoard is an open source IoT platform for data collection, processing, visualization and IoT device management. It is free for both personal and commercial use and can be implemented anywhere.
It enables device connectivity through industry-standard IoT protocols (MQTT, CoAP and HTTP) and supports both cloud and on-premise deployments. ThingsBoard combines scalability, fault tolerance and performance, ensuring that the users’ data are never lost.
ESP32 is a series of low-power system-on-a-chip microcontrollers with integrated Wi-Fi and dual-mode Bluetooth, as mentioned in the previous section. The device is responsible for transmitting the information to the ThingsBoard platform for its subsequent processing by the intelligent model (see
Figure 4), which interacts with the helmet.
Simple steps are required to link the devices to the platform (a minimal telemetry example is given after this list):
Copy the automatically generated access token from the Access token field.
Go to Devices, locate the ESP32 device, open the device details, and select the Latest telemetry tab.
It is now possible to view the data regarding an asset.
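As a minimal example of the device–platform link referred to above, the sketch below publishes one telemetry message to ThingsBoard over MQTT with the paho-mqtt client. The host name, access token and telemetry values are placeholders; on the helmet itself, the equivalent publish is performed by the ESP32 firmware.

```python
# Sketch of publishing helmet telemetry to ThingsBoard over MQTT (paho-mqtt).
import json
import paho.mqtt.client as mqtt

THINGSBOARD_HOST = "thingsboard.example.com"   # placeholder server address
ACCESS_TOKEN = "DEVICE_ACCESS_TOKEN"           # token copied from the device page

telemetry = {                                  # placeholder readings
    "temperature": 27.4,   # °C, BME680
    "humidity": 41.0,      # %RH, BME680
    "pressure": 1009.8,    # hPa, BME680
    "light": 35,           # ALS-PT19 reading
    "force": 80,           # raw FSR reading
}

client = mqtt.Client()                # for paho-mqtt >= 2.0, pass mqtt.CallbackAPIVersion.VERSION2
client.username_pw_set(ACCESS_TOKEN)  # ThingsBoard uses the token as the MQTT user name
client.connect(THINGSBOARD_HOST, 1883, 60)
client.loop_start()
client.publish("v1/devices/me/telemetry", json.dumps(telemetry), qos=1)
client.loop_stop()
client.disconnect()
```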
The data obtained through ThingsBoard are later processed by an intelligent model, which confirms or denies the existence of a real emergency. This is why configuring the platform correctly is very important.
An association must be created between the different sensor values and the corresponding response. Once these associations are created, any value can be modified depending on the values to be tested empirically or on the alarms. Alarms are configured in the device settings so that the respective notifications appear on the panel. A rule chain must then be added.
The attributes placed on the server and on the device’s threshold panel must then be selected. The names of the attributes on the server must correspond to those on the panel so that, when the data are dynamically configured, they are recognized correctly and appear on the diagram generated by the platform,
Figure 5.
Subsequently, the script block verifies whether the information coming from the device exceeds the established threshold value. If it does, an alarm is created and the information to be displayed is defined.
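To clarify what this check does, the fragment below reproduces the same threshold logic outside the platform. In ThingsBoard the check is actually written as a script inside the rule node; the attribute names and threshold values used here are assumptions for illustration only.

```python
# Python rendering of the threshold check performed in the rule-chain script block.
# Thresholds stand in for the server attributes configured on the platform.
SERVER_THRESHOLDS = {"temperature": 40.0, "humidity": 90.0, "force": 600}

def check_message(msg: dict) -> list:
    """Return an alarm description for every telemetry key that exceeds its threshold."""
    alarms = []
    for key, limit in SERVER_THRESHOLDS.items():
        value = msg.get(key)
        if value is not None and value > limit:
            alarms.append({"type": f"High {key}", "value": value, "threshold": limit})
    return alarms

print(check_message({"temperature": 43.2, "humidity": 40.0}))
# -> [{'type': 'High temperature', 'value': 43.2, 'threshold': 40.0}]
```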
Moreover, the root rule chain, which is in charge of obtaining and processing the information coming from the devices, has been modified. In this case, an originator type section has been added, in which the devices that transmit the information are identified. Likewise, additional rule chains have been generated to implement the customized code blocks in the panels. Finally, the information can be visualized on the data panel.
In cases where it is not necessary to perform this procedure, it is possible to view the notifications generated by the different devices. To this end, enter the Devices section, select one of the devices for which an alarm has been configured, and go to the Alarms tab, see
Figure 6, where the notifications generated by that device are displayed.
Once the alarms have been configured on the platform, validation is carried out by means of the AI model explained below [
60].
5. Conclusions and Discussion
Our work builds on a history of electronic development in which the use of a multisensory helmet was established. Through a conditional model of input–output rules, we tried to detect the different situations to which a worker was subjected. However, the input–output techniques produced false positives and false negatives, reaching 60% accuracy in the best of cases, which is why, after several stages, it was decided to implement AI in the helmet. This 60% ceiling is due to the combination of different circumstances, that is, the correlation that exists among the independent features; for that reason, we decided to innovate in the present work by using techniques that can find both linear and nonlinear relations. Since it is necessary to find the patterns that determine a particular action, deep learning techniques such as the one presented in this work are appropriate for this task.
A comparison between different AI models has been made in this research. Our innovation lies in using a CNN, which in the literature has mainly been used to analyze images or videos in intelligent helmets with the aim of saving lives, whereas we propose a multisensory approach to real-time feature analysis. Through the transmission of data via specialized IoT devices, a smart helmet has been designed to monitor the conditions in a working environment. The application areas of this proposal are the industrial and agricultural sectors and any other sector that involves risk for the workers. Thanks to the helmet, different injuries can be avoided, and if an accident occurs, the damage caused to the worker is lessened through prompt detection and attention.
It is possible to observe in
Figure 9,
Figure 10 and
Figure 11 that the MSA presented many false positives in the classes with the largest sample sizes, and even false positives of a repeated class (class 6) across more than five different classifications. NB and NN performed better on the minority classes; however, false positives appear across three different classifications in classes such as 11, 9, 7, 2, 1 and 0. The NN is more robust: the classes mentioned above still present false positives, but with a decrease to two wrong classes in almost all cases.
5.1. Limitations
The work has different limitations. It is well known that artificial intelligence has the ability to find patterns that can hardly be found in linear analysis models. However, as stated in [
73], risk analyses are not yet common in project-oriented industries. A problem with current risk analysis procedures is that those simple enough to be used by regular project staff are too simplistic to capture the subtlety of risk situations, whereas those complex enough to capture that essence and subtlety require an expert to operate them. That is why the combination of possible risk situations can be counterproductive in the industrial domain and should be analyzed in more detail, with the following consequences:
False positives would result in economic losses that would eventually affect the services and production areas involved, since the medical service would be attending to situations that were not actually risky, and production would have to be stopped every time a false positive is found.
On the other hand, false negatives are even more dangerous: the misinterpretation of data, caused by the complexity of the problem and by the imbalance of classes with smaller data sets, would result in losses that are not only economic but also human, due to situations that were not attended to in the required time.
Our system is limited in the amount of data that can be processed by the microcontroller and in the data that the model supports through the ESP32 module. For this reason, other techniques can be adopted, as discussed in the next subsection on future work.
5.2. Future Work Opportunities
The use of paradigms such as edge computing or fog computing would be a viable option for processing large volumes of data, such as the integration of images or video, allowing information to be transmitted in real time and avoiding saturation of the microcontroller. Several state-of-the-art studies have proposed smart helmets; among them is the US6798392B2 patent [
74], which integrates a global location system, an environmental interaction sensor, a mobile communications network device, a small display panel, a microphone and a speaker. The helmet knows the location of the user and their interaction with the environment, and it can provide data to the user and monitor the user’s actions and conditions. This work is quite interesting since it offers device–user interaction. In contrast, the advantage of our proposal is that it strives towards the autonomy of the system, where decisions are made by the convolutional model.
Furthermore, the US9389677B2 patent [
75] describes a smart helmet that includes a camera, a communications subsystem and a control subsystem. The control subsystem processes the video data from the camera, and the communications subsystem transmits this video data from the smart helmet to a target device. This work can be taken as a reference for future sensor integration, since in our proposal it would be possible to integrate a camera whose data can be processed through a deep CNN, for example thermal radiation data or even data for detecting people infected with COVID-19.
In addition, the US registered patent US20150130945A1 [
76] proposes a smart helmet that includes a helmet shell, a visor and a projector mounted on the helmet shell. The projector is configured so that content can be selected for display on the visor. The helmet shell defines an internal cavity, designed to receive the head of a user, and a passage that communicates with this cavity; the visor is rotatably attached to the shell and is configured to expose or cover the passage. This proposal focuses on ergonomics for the user, in addition to including navigation systems and control modules, and is comparable to our proposal.
Moreover, in 2013 a helmet was proposed by Rasli Mohd Khairul Afiq Mohd et al. [
77] for the prevention of accidents, in which an FSR and a BLDC fan were used to detect the head of the driver and the speed of the motorcycle, respectively. A 315 MHz radio frequency module was used as the wireless link between the transmitter circuit and the receiver circuit, and a PIC16F84A microcontroller was used to control the different components of the system. The motorcyclist could start the engine only when the helmet had been fastened. In comparison, in our proposal communication takes place through the IIoT for optimized decision making in case of accidents.
With reference to smart helmets connected to IoT, in 2016 [
78], Sreenithy Chandran et al. presented a design whose objective is to provide a channel and a device for monitoring and reporting accidents. Sensors, a Wi-Fi-enabled processor, and cloud computing infrastructure were used to build the system. The accident detection system communicates the accelerometer values to the processor, which continuously monitors erratic variations. When an accident occurs, details about it are sent to emergency contacts using a cloud-based service, and the location of the vehicle is obtained using the global positioning system. This work is closely related to ours in that communication is optimized to reduce the consequences of accidents; the approach differs in that we propose it for a work environment, and it could later be adapted to a case focused on vehicle safety, mainly on motorcycles.