Experimental Stand for Sorting Components Dismantled from Printed Circuit Boards

Abstract: There is nothing new about the fact that valuable materials occur in e-waste at concentrations up to 50 times higher than in mined ores. Moreover, the constant accumulation of excessive amounts of waste equipment has a negative impact on the environment. The components found in electronic equipment may contain hazardous materials, or materials that could be recycled and reintroduced into production processes, thus reducing the carbon footprint created by waste electrical and electronic equipment (WEEE). Sustainable e-waste recycling requires high-value, integrated recovery systems. By implementing a two-stage experimental sorting stand, this paper proposes an efficient and fast sorting method that can be industrially scaled up to reduce the time, energy and costs needed to sort electronic waste (e-waste). The sorting equipment is an ensemble of sensors consisting of cameras, color sensors, proximity sensors, metal detectors and a hyperspectral camera. The first stage of the system sorts the components based on the materials' spectral signature by using hyperspectral image (HSI) processing and, with the help of a robotic arm, removes the marked components from the conveyor belt. The second stage of the sorting stand uses a contour vision camera to detect specific shapes of the components to be sorted with the help of pneumatic actuators. The experimental sorting stand is able to distinguish up to five types of components with an efficiency of 89%.


Introduction
Electronic devices are already well integrated into every aspect of our lives, ranging from common appliances and portable gadgets to smart devices. Meanwhile, the replacement intervals of electronic devices are getting shorter. Often, many products will be thrown away once their batteries die, being quickly replaced with new devices. The increasing rate at which people are replacing their electronic devices is becoming more alarming by the day. Therefore, it is very important to have the means to obtain and recycle the materials from electronic waste (e-waste), to help conserve and limit the extraction of natural resources worldwide. The objective is to recover and reinsert all materials into the production process, with minimal energy consumption through non-polluting technologies, thereby ensuring reduced quantities of secondary waste and minimal environmental impact [1].
A printed circuit board (PCB) taken out of different electronic devices contains, in general, metallic or plastic screws, metallic or plastic frames, heat sinks, Al capacitors, Cu inductors, small capacitors and integrated circuit chips. Last but not least, it can contain precious metals (e.g., Au-plated surfaces) on the pin connectors or terminals [2]. In addition, a considerable amount of hazardous materials (lead, cadmium, mercury, fire retardants, etc.) can be found in such boards. On the stand's second conveyor belt, a contour vision sensor was trained to detect specific components to be sorted [35]. Sensor-based sorting systems have multiple advantages when compared to other types of recycling technologies: they decrease the costs of energy and water consumption, while the production efficiency and reliability of the recycling technologies are increased significantly.

An Overview of the Sampled Materials
Several sample materials were chosen from the results of dismantlement processes (mechanical and chemical) of end-of-life personal computer PCBs. They were scanned with the help of a hyperspectral camera (Figure 2) to examine the spectral signature of each object, thus creating a reference sample database. Using the spectral signature database thus created, the experimental stand could be easily reconfigured to sort other objects. The hyperspectral signatures were extracted from the image by applying a noise filter and running a principal component analysis (PCA) (Figure 3) so that the objects could be classified.

Data Acquisition of HSI
Hyperspectral images were acquired with an NIR camera covering the wavelength range from 400 to 1000 nm. The system is based on a pushbroom acquisition architecture, with a field of view of 240 mm, a resolution of 1024 spatial pixels and two halogen line illumination units. The hypercube was calibrated with the help of white and black reference targets. Depending on the number of wavelengths analyzed, the camera has a frame rate of 200 to 9900 Hz. The sample data were acquired with 224 bands at a frame rate of 210 Hz.

Data Processing with LabVIEW
Laboratory Virtual Instrument Engineering Workbench (LabVIEW) is a graphical programming language used to create different types of applications with the help of add-ons [36]. One of the many advantages of using LabVIEW is the ability to connect to third-party hardware (for example, a hyperspectral camera). To control the experimental sorting stand's first stage, two virtual instruments (VIs) were created. With the help of the first VI, the hyperspectral images were acquired, filtered and processed to recognize the objects of interest. By implementing the above-mentioned functions in a single VI, the speed of the algorithm increased. The second VI was used to control the Delta robot over a serial communication protocol.

The First Stage of Sorting
The first stage of the experimental sorting stand is based on hyperspectral image analysis. The stand consists of a conveyor belt; an asynchronous motor; a Mitsubishi Electric S500 inverter; an FX10e hyperspectral camera from Specim; a halogen lighting system; and a Delta robot with a vacuum suction cup attachment (Figure 4). The conveyor belt carries the objects for sorting, driven by the asynchronous motor controlled in frequency by the inverter. The halogen lighting system is focused under the hyperspectral camera's field of view (FOV) to ensure the necessary amount of light for the measurements, and the Delta robot with the vacuum suction cup is used to remove the objects from the conveyor belt.
In this stage of the sorting process, an operator places the objects to be sorted on the front end of the conveyor belt. The objects first travel under the hyperspectral camera. The analysis software continuously scans the conveyor belt, and if it detects any spectral signature other than that of the belt itself, it starts to process the acquired spectral image. If the detected spectral signature matches one of the two objects set for sorting in the first stage, the software also determines the center of mass of the object. After the analysis process, the VI sends the X and Y coordinates of the detected object to the Delta robot to pick it up and place it in the appropriate recycling bin. The flow chart of the HSI analysis software can be viewed in Figure 5.
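The detection logic described above (the authors' implementation is a LabVIEW VI and is not reproduced here) can be sketched in Python: each pixel of a scanned frame is classified against reference spectral signatures, and the center of mass of the detected object yields the coordinates sent to the robot. The signature values, labels and tolerance below are hypothetical.

```python
# Illustrative sketch of the first-stage detection loop (not the authors'
# LabVIEW VI). References map a label to a reference spectrum; any pixel
# whose signature differs from the belt's is attributed to an object.

def classify_pixel(spectrum, references, tolerance=0.05):
    """Return the label of the closest reference signature, or None."""
    best_label, best_err = None, tolerance
    for label, ref in references.items():
        # mean squared difference over the selected wavelength bands
        err = sum((s - r) ** 2 for s, r in zip(spectrum, ref)) / len(ref)
        if err < best_err:
            best_label, best_err = label, err
    return best_label

def detect_object(frame, references, belt_label="belt"):
    """Scan a frame (rows of per-pixel spectra); return (label, x, y) or None."""
    hits = {}
    for y, row in enumerate(frame):
        for x, spectrum in enumerate(row):
            label = classify_pixel(spectrum, references)
            if label and label != belt_label:
                hits.setdefault(label, []).append((x, y))
    if not hits:
        return None
    label, pixels = max(hits.items(), key=lambda kv: len(kv[1]))
    cx = sum(p[0] for p in pixels) / len(pixels)   # center of mass, X
    cy = sum(p[1] for p in pixels) / len(pixels)   # center of mass, Y
    return label, cx, cy
```

In the real system the (cx, cy) pixel coordinates would still have to be mapped into the Delta robot's workspace coordinates before being sent over serial.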

HSI Acquisition and Analysis Using LabVIEW
To establish a fast and secure connection with the FX10e hyperspectral camera, the Gigabit Ethernet (GigE) vision interface standard was used, which provided a high-speed data transmission rate (up to 900 Mbps) [37]. The high-speed transmission rate allowed the system to execute the operations in real time, thus speeding up the sorting process. Before the images were processed in real time, an ensemble image of the objects was preprocessed with the help of Evince, a Prediktera software package for hyperspectral image processing [38]. With the help of this step, the specific spectral signature of each object was pre-defined in a small database. The ensemble image taken from the hyperspectral camera was preprocessed by removing the noise with a spectral low-pass filter. After the first filtering process, a standard normal variate (SNV) transformation was applied to the image, reducing the scattering effect in the spectral data. Principal component analysis (PCA) was applied to the preprocessed image to provide a decomposition of the acquired data into a linear combination of the original principal component (PCo) and the spectral variations. The PCo was used to deduce the characteristics and grouping of the samples taken (Figure 6).
Once the PCA study was realized and all the identified groupings were classified, a partial least squares discriminant analysis (PLS-DA) model was built for each processing set, thus testing whether the identified spectral signatures are reliable (Figure 7). A total of four categories were created for the different types of silicon chips; these included fiberglass, resin, silicon and a mixture of fiberglass and copper. A sorting algorithm, which compares the hyperspectral images acquired from the FX10e camera with the spectral signature reference database, was implemented in LabVIEW. During the sorting process, the spectral signature of the passing object is compared with the reference database by searching for the specific wavelength and reflectance values stored within the reference spectral signatures. This comparison decides whether an object is to be sorted or not.
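The SNV correction mentioned above is a simple per-spectrum normalization; as a minimal sketch (independent of the Evince software used by the authors), each spectrum is centered on its own mean and scaled by its own standard deviation:

```python
# Standard normal variate (SNV): per-spectrum normalization that
# suppresses multiplicative scattering effects in reflectance data.
import statistics

def snv(spectrum):
    """Apply SNV to one spectrum (list of reflectance values)."""
    mean = statistics.fmean(spectrum)
    std = statistics.pstdev(spectrum)
    if std == 0:            # flat spectrum: nothing to scale
        return [0.0] * len(spectrum)
    return [(v - mean) / std for v in spectrum]
```

After SNV, every spectrum has zero mean and unit variance, so the subsequent PCA groups samples by spectral shape rather than by overall brightness.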

Control of the Delta X Robot with LabVIEW
A Delta-style robot was chosen to remove the objects from the conveyor belt because of its high speed and precision. The robot used for the experimental sorting stand is a Delta X, an open-source robot.
In order to control the Delta robot, a virtual prototype was first designed in SolidWorks, and a graphical programming algorithm was created using LabVIEW (Figure 8). The communication was established via a serial protocol, using a LabVIEW add-on named VISA. With the help of the VISA add-on, the communication between a workstation and the Delta robot's central processing unit was established. The Delta robot uses the G-code language to execute the commands sent over the serial connection.
To create a G-code command, a specifically formatted message had to be composed. The message format is "G01 Xa Yb Zc", where G01 is the command to move the robotic arm, and a, b and c are the coordinates resulting from the image processing (Figure 9). In addition, the Delta robot was equipped with a suction cup to grab the necessary components. The vacuum unit can be turned on or off with two G-code commands, M03 and M05.
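Only the message format ("G01 Xa Yb Zc", plus M03/M05 for the vacuum unit) is given in the text; a possible pick-and-place sequence built from these commands could look as follows. The coordinate precision and the Z heights are illustrative assumptions, not values from the paper.

```python
# Sketch of the G-code messages sent to the Delta X robot over serial.
# move_cmd follows the "G01 Xa Yb Zc" format described in the text;
# the two-decimal formatting and the z_pick/z_travel heights are assumed.

def move_cmd(x, y, z):
    """Build a linear-move command for the given coordinates (mm)."""
    return f"G01 X{x:.2f} Y{y:.2f} Z{z:.2f}"

def pick_and_place(obj_xy, bin_xy, z_pick=-320.0, z_travel=-250.0):
    """Return the command sequence to pick an object and drop it in a bin."""
    (ox, oy), (bx, by) = obj_xy, bin_xy
    return [
        move_cmd(ox, oy, z_travel),  # approach above the object
        move_cmd(ox, oy, z_pick),    # descend onto the object
        "M03",                       # vacuum on: grab the component
        move_cmd(ox, oy, z_travel),  # lift
        move_cmd(bx, by, z_travel),  # move above the sorting bin
        "M05",                       # vacuum off: release the component
    ]
```

Each string in the returned list would be written to the robot's serial port (e.g., via the VISA add-on in LabVIEW) followed by a newline.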

The Second Stage of Sorting
The second stage of the experimental sorting stand is based on a contour vision sensor. The stand (Figure 10) was constructed by using a caterpillar conveyor belt, an AC asynchronous motor controlled by a variable frequency drive (VFD), a contour vision sensor (O2D220) from IFM Electronics, an LED lighting ring with a variable color temperature and light intensity, a proximity and color sensor, four electro-pneumatic valves with directional air flow control and a programmable logic controller (PLC), Mitsubishi FX3G.

The sorting process (Figure 11) continues when an object unrecognized in the first stage passes onto the second conveyor belt. As the component passes under the vision sensor, the algorithm starts the recognition process. The onboard recognition software of the contour vision camera can take up to 1.2 s to recognize an object, depending on the lighting conditions and the number of objects stored in its memory. The necessary data for object recognition are stored in the sensor's nonvolatile memory.
Each object programmed into the memory of the sensor has a corresponding four-digit binary code, set by the user, with each of the digits corresponding to a digital output pin. Once the vision sensor recognizes an object, it will activate the corresponding digital output ports of the sensor, which, on their end, are connected to the PLC's digital input ports. As the PLC receives the signaling code, it will activate one of the four electro-pneumatic valves with a delay, thus removing the identified object from the conveyor belt into the appropriate sorting bin.
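The signalling scheme described above can be sketched as a lookup: the four sensor outputs form a user-assigned binary code, and the PLC maps that code to one of the four electro-pneumatic valves. The code-to-valve assignments below are hypothetical; the paper does not list the actual codes.

```python
# Sketch of the vision-sensor-to-PLC signalling. Each recognized object
# asserts a four-digit binary code on the sensor's digital outputs; the
# PLC decodes it to one valve. The table entries are assumed, not taken
# from the paper.

VALVE_TABLE = {
    (0, 0, 0, 1): "valve_fiberglass",
    (0, 0, 1, 0): "valve_small_silicon",
    (0, 1, 0, 0): "valve_plastic",
    (1, 0, 0, 0): "valve_reserve",
}

def select_valve(outputs):
    """Map the sensor's four digital outputs to a valve (None = no match)."""
    return VALVE_TABLE.get(tuple(outputs))
```

In the real system the PLC additionally applies a time delay before firing the valve, so that the object has travelled from the camera's position to the air nozzle.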

Contour Vision Sensor Algorithm
The contour vision sensor utilizes a specific contour detection algorithm to correctly determine the edges of the analyzed components. To identify the objects correctly, the sensor must be manually focused and set up in such a manner that the object itself occupies at least 5% of the sensor's field of view. The evaluation algorithm was chosen by taking into consideration the processing time, the accuracy of contour tracing, the data file size and the capability of the algorithm to correctly rebuild and enlarge the saved contour data. The contour vision sensor model O2D220 from IFM uses incident light or backlight to detect the image of a component and compares it to the several reference models saved in the nonvolatile memory of the sensor [39]. Based on the orientation, tolerance and degree of conformity of the object, it will be classified as a "pass" or a "fail" (Figure 12). Regarding the backlight detection algorithm: when the luminosity difference between the key object (in the foreground) and the background is high, background images are generated. When the object is in front of the sensor, it reflects light back, thus producing the difference between the object and the background. After running the background detection algorithm, the user can improve the detected edges and help the software recognize the object with a higher accuracy. Depending on the light conditions and the preferred image quality, the user can choose to use the inbuilt infrared light of the sensor, which operates at a wavelength of 880 nm, or an external light source. For the proposed experimental sorting stand, an external light source is used, consisting of an LED lighting ring placed around the camera and set to a color temperature of 3200 K and a light intensity of 77%; therefore, the image has a well-balanced contrast and is not over-exposed.
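The pass/fail decision described above can be reduced to a simple rule: an object passes when its degree of conformity with a stored reference contour meets the configured threshold and its orientation lies within the allowed tolerance. The on-sensor scoring itself is proprietary; the threshold of 80% is the value used in this work, while the orientation tolerance below is an assumed parameter.

```python
# Sketch of the O2D220 pass/fail decision rule as described in the text.
# The 80% conformity threshold comes from this work; max_tilt_deg is an
# assumed orientation tolerance, not a documented sensor default.

def evaluate(match_percent, orientation_deg, max_tilt_deg=15.0,
             threshold=80.0):
    """Classify a detection as 'pass' or 'fail'."""
    if match_percent >= threshold and abs(orientation_deg) <= max_tilt_deg:
        return "pass"
    return "fail"
```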

PLC Control System
The PLC unit and the electro-pneumatic valves use a 24 V DC voltage source. To control the electro-pneumatic valves, each of them was connected to a MOSFET transistor, whose gate was connected to one of the PLC's digital output lines. As soon as one of the PLC's outputs is activated (has a HIGH value), current passes through the MOSFET, thereby opening the electro-pneumatic valve. Because an electro-pneumatic valve contains a coil, a flywheel diode was placed in parallel to protect the MOSFET from any self-generated back-electromotive force (Figure 13). The PLC unit was programmed using a ladder logic diagram. The digital outputs of the PLC are controlled by a sequence of instructions. At first, the inputs coming from the vision sensor are stored in a register; afterwards, the stored value is compared to the memorized data, which activates one of the digital outputs of the PLC and actuates one of the electro-pneumatic valves.

Results and Discussions
The first conveyor belt was programmed with the help of a LabVIEW algorithm to detect and sort two categorized materials (silicon chips and silicon chips with resin). The objects to be sorted are loaded on the front of the conveyor belt, and while they travel along the conveyor, the created algorithm will determine which objects need to be sorted at this stage.
Two different sorting bins were placed in the work area of the Delta robot, one for simple silicon chips and another one for silicon chips which also contain resin residue. The Delta robot has a maximum working speed of 0.7 m/s, allowing the conveyor belt to move at a higher speed.
To optimize the speed of the system, only specific spectral wavelengths were chosen to be read and analyzed (Table 1). This way, the algorithm can determine faster whether the scanned object is to be sorted. By reducing the number of spectral bands scanned from 224 to 80, the processing speed of the software was increased by 64%. After this optimization, reading and processing the hyperspectral images takes about 438 ms on average. The reduction in the number of wavelengths read also ensures that the hyperspectral camera does not pick up other types of materials that are not intended for processing (finer particles of dirt and dust). In addition, to help avoid the processing of unwanted smaller particles, the implemented image processing algorithm uses a particle filter to remove small pixel spots from the image. By increasing the processing speed, both the conveyor belt speed and the hyperspectral camera frame rate could also be increased. The best values obtained were 170 frames per second for the hyperspectral camera and a speed of 0.061 m/s for the conveyor belt. With these settings, the system obtained the best possible results.
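A quick consistency check on the reported settings: at 170 scan lines per second and a belt speed of 0.061 m/s, each scan line corresponds to roughly 0.36 mm of belt travel, which sets the along-belt spatial resolution of the first sorting stage.

```python
# Back-of-the-envelope check of the optimized settings reported above.
frame_rate_hz = 170.0     # hyperspectral camera, scan lines per second
belt_speed_m_s = 0.061    # conveyor belt speed

line_spacing_mm = belt_speed_m_s / frame_rate_hz * 1000.0
print(f"{line_spacing_mm:.2f} mm of belt travel per scan line")
```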
The second conveyor belt controlled by a PLC sorts the objects coming from the first sorting stage with the help of a contour vision sensor. The second stage of sorting has in total three sorting bins. One is for components containing fiber glass, one is for silicon components which are too small for the robotic arm to handle and the third is for plastic components.
For a more accurate sorting, the second conveyor belt speed was maintained at an approximate speed of 0.066 m/s, which assured that the contour vision sensor would detect the objects moving underneath it. If the vision sensor matches the shape of the object to one from its internal database with a minimum match percentile of 80%, it will declare it as a pass, and the PLC algorithm will decide which of the electro-pneumatic valves will be triggered.
The experimental sorting stand was calibrated to sort a total of five different WEEE categories disassembled from old personal computer units. The categories included were small and large silicon chips, fiberglass, silicon chips with resin residue and plastic components. To calibrate the sorting stand, every variant of the components was individually registered and processed for testing. The registered components were tested at least 10 times to ensure a minimum recognition rate of 80% (2 out of 10 could fail). Following the calibration process, the experimental stand was tested using all the components available. These were run through the sorting process 10 times to obtain the average accuracy of the experimental sorting stand. All the components were placed randomly on the conveyor belt for each case studied. After the sorting process finished for a case, each sorting bin was evaluated to see if it contained the correct number of components. If a sorting bin did not contain all the components intended, the percentage for that category was reduced. After passing through 10 cases, the overall accuracy of the experimental sorting stand was 89% (Table 2).
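The evaluation procedure described above can be expressed as a small computation: for each run, every sorting bin is scored by the fraction of its intended components actually found in it, and the scores are averaged over all runs. The component counts in the example are hypothetical; only the scoring scheme follows the text.

```python
# Sketch of the accuracy bookkeeping described in the text: per-bin
# fractions of correctly sorted components, averaged over the test runs.
# The example counts are hypothetical.

def run_accuracy(found_per_bin, expected_per_bin):
    """Mean fraction of correctly binned components for one run."""
    scores = [min(found, exp) / exp
              for found, exp in zip(found_per_bin, expected_per_bin)]
    return sum(scores) / len(scores)

def overall_accuracy(runs, expected_per_bin):
    """Average accuracy over all runs, as a percentage."""
    return 100.0 * sum(run_accuracy(r, expected_per_bin)
                       for r in runs) / len(runs)
```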

Conclusions
The implemented two-stage experimental sorting stand was able to differentiate a total of five waste electrical and electronic equipment materials taken from discarded personal computer motherboards, with the help of a hyperspectral camera and a contour vision sensor. All the components from the PCs' motherboards were the result of a previous mechanical and chemical dismantlement process. A standalone LabVIEW application was developed for the first stage of sorting. The application recognizes two different types of materials to be placed in the appropriate sorting bins with the help of a hyperspectral camera. A second application was created for the PLC unit used in the second stage of sorting; this includes a contour vision sensor that recognizes differently contoured objects to be sorted into two further sorting bins. After the sorting stages, a fifth sorting bin collects the remaining plastic components. A total of two applications were created and optimized in the implementation stage.
The presented results were achieved under specific laboratory conditions. Because of the sensitivity of the hyperspectral camera to different types of light sources, a dark room was used to analyze the studied materials.
As a future improvement, the experimental stand could be built inside a black chamber; thus, no external light could infiltrate and damage the readings of the hyperspectral camera. Another improvement could be achieved by introducing a monitoring system to oversee the sorting process in real time from a distance.
The created algorithm can easily be modified to detect other types of materials. To improve the speed of the system, the algorithms could be implemented directly on a field-programmable gate array (FPGA), thus raising the stability, the efficiency and, most importantly, the analysis and sorting speed.
The proposed experimental stand could easily be scaled up to an industrial level. With this type of implementation, a wider variety of components could be sorted as a function of the recycling plant's requirements. Such an approach has high potential for reducing the manipulation costs of discarded electronics and sorted materials.