Unsupervised Optical Mark Recognition on Answer Sheets for Massive Printed Multiple-Choice Tests
Abstract
1. Introduction
- We propose a robust algorithm that has been extensively tested under real-world conditions, where applicants’ answering styles vary considerably. The algorithm requires no training stage for exams sharing the same layout; it works without modification, as demonstrated on a dataset of more than 6000 real-world answer sheet images, in which it correctly classified more than 99% of the test items.
- We developed an OMR system based on well-known image-processing algorithms, which lets us inspect what happens at each processing stage, locate errors when they occur and identify their probable causes; this level of interpretability is difficult to achieve with more complex approaches, such as those based on convolutional neural networks. The system requires neither a training stage nor high-performance hardware, making it suitable for computers with modest computing power, such as those found in office environments.
- The average execution time (1 s per answer sheet) is fast enough for the algorithm to be included in an automated exam-evaluation chain. Although it is not a real-time algorithm, its execution time could be further reduced with process-optimization tools and higher-performance hardware.
2. State of the Art
2.1. Controlled Condition Answer Sheet Acquisition
2.2. Uncontrolled Condition Answer Sheet Acquisition
2.3. Classical Computer Vision-Based Systems
2.4. Deep Learning Approaches
3. Materials and Methods
3.1. Answer Sheet Folder Organization and Layout
- A space where the students handwrite their name and date of exam application.
- A first reference rectangle with four rows of circles, each ranging from 0 to 9, which serve to encode a 4-digit identifier.
- A second reference rectangle containing three columns of items, each item with four answer options, grouped according to the grade at which the test is applied. There are two test layouts: one with 90 items and one with 100. The 90-item test has three columns of 30 items each, while the 100-item test has two columns of 33 items and one column of 34. Figure 2b shows the location of these reference rectangles on a sample answer sheet. The three option columns of the second reference rectangle are named section 0, section 1 and section 2.
- A common error is that sheets become bent during scanning.
- Scanner conditions vary, including resolution and image-quality differences, which would generate recognition errors if not corrected.
- Exams may be scanned in the wrong orientation, so a preprocessing step must correct incorrectly oriented scans.
- Even correctly oriented exams may exhibit small rotations that affect recognition performance, so an angle-correction step is required (a minimal deskew sketch is given after this list).
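As a rough illustration of the kind of angle correction mentioned above, the following sketch estimates the skew of the largest printed region and rotates the image to compensate. This is not the article’s implementation: OpenCV is assumed, and the thresholding strategy and the angle-sign convention (which differs between OpenCV versions) are assumptions that may need adjusting.

```python
import cv2
import numpy as np

def correct_small_rotation(gray):
    """Estimate and undo a small skew using the largest printed region (sketch)."""
    # Otsu binarization: printed ink becomes foreground.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)

    # Angle of the rotated bounding box of the largest region (e.g., the answer rectangle).
    (_, _), (_, _), angle = cv2.minAreaRect(largest)
    if angle > 45:  # OpenCV >= 4.5 reports angles in (0, 90]
        angle -= 90

    rows, cols = gray.shape[:2]
    matrix = cv2.getRotationMatrix2D((cols / 2.0, rows / 2.0), angle, 1.0)
    return cv2.warpAffine(gray, matrix, (cols, rows), borderMode=cv2.BORDER_REPLICATE)
```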
3.2. Proposed Algorithm
3.2.1. ID and Answer Section Detection
- Alternative rectangle-finding process 1. If the main process fails to detect two rectangles, this is the first alternative process to try. It involves more operations and works on more copies of the image, but it detects the two main rectangles even when the assessed student wrote over their borders. In this alternative process, we first threshold the normalized image obtained after the uneven-illumination correction, setting a low intensity value (60) as the threshold to capture the grayscale values corresponding to the borders. Next, we detect the contours of the resulting binary image and fill the regions inside the contours with the maximum intensity value. We apply an iterative morphological opening with a medium-sized structuring element to the resulting filled polygons to eliminate noise and smooth the borders of the binary objects. Through experimentation, we found that the minimum number of iterations yielding clean binary objects was 7. The next step is a new detection of the binary objects’ contours, keeping only those with an area greater than 2% of the image area. Due to the aggressiveness of the applied morphological processing, these contours might contain holes. To cope with these discontinuities, we approximate the contour polygons using the Ramer–Douglas–Peucker algorithm [29] (a minimal sketch of this process follows the list).
- Alternative rectangle-finding process 2. We found that the first alternative rectangle-finding process could fail if the assessed students wrote text inside the ID or answer bounding boxes. For those cases, we defined the following steps. First, we threshold the image using the Otsu algorithm to select the threshold. Next, we apply a morphological opening with a small structuring element to eliminate small pixel clusters. The following steps are similar to those of the first alternative process; however, before detecting the external contours, we apply an iterative morphological opening with a medium-sized structuring element, in this case with 5 iterations. The last two steps are the same as in the first alternative process.
- Alternative rectangle-finding process 3. This alternative process allowed us to correctly handle images in which sections of the contour of the two main rectangles were missing (significant discontinuities). In this version, the first three steps, thresholding, morphological opening and external-contour filtering by area, are the same as in the second alternative process. The next step detects the extremes of the detected contours by sorting the contour coordinates and finding the minimum and maximum x and y values, which define the corners of the rectangles. We also verify that the distance between corresponding corners on each side of the rectangles falls within a range previously measured on a typical sample. We then convert the corners into the required contours. This process is computationally more expensive than the main process.
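The following is a minimal sketch of the first alternative rectangle-finding process, assuming OpenCV. The threshold (60), the 7 iterations and the 2% area criterion follow the values stated above; the structuring-element size and the polygon-approximation epsilon are illustrative assumptions.

```python
import cv2
import numpy as np

def find_main_rectangles(normalized_gray):
    """Sketch of alternative rectangle-finding process 1."""
    # Low fixed threshold (60) keeps the dark printed borders as foreground.
    _, binary = cv2.threshold(normalized_gray, 60, 255, cv2.THRESH_BINARY_INV)

    # Detect contours and fill their interiors with the maximum intensity value.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    filled = np.zeros_like(binary)
    cv2.drawContours(filled, contours, -1, 255, thickness=cv2.FILLED)

    # Iterative opening (7 iterations) with a medium-sized kernel (size assumed)
    # to remove noise and smooth the borders of the binary objects.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))  # assumed size
    opened = cv2.morphologyEx(filled, cv2.MORPH_OPEN, kernel, iterations=7)

    # Re-detect contours and keep those larger than 2% of the image area.
    contours, _ = cv2.findContours(opened, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    min_area = 0.02 * normalized_gray.shape[0] * normalized_gray.shape[1]
    rectangles = []
    for c in (c for c in contours if cv2.contourArea(c) > min_area):
        # Ramer-Douglas-Peucker approximation closes small holes in the polygon.
        epsilon = 0.02 * cv2.arcLength(c, True)  # assumed tolerance
        rectangles.append(cv2.approxPolyDP(c, epsilon, True))
    return rectangles
```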
3.2.2. Alternative ID Section Circle Mark Detection (AIDSCMD)
- Alternative ID section processing 1. This process works well when the assessed student marks the chosen option with a large mark that touches the border of the section’s external rectangle. In this process, we work with the initial binary image obtained in the main process, masked with the current bounding box. First, we apply an iterative morphological opening with a structuring element to ensure separation between the circles and the border of the section’s bounding rectangle. Next, we detect the external circle contours, filter them by area and fit a minimum enclosing circle to each. Finally, we count the resulting circles.
- Alternative ID section processing 2. This process allowed us to work with images where the assessed student filled the option circles with an oversized mark extending beyond the circle border (a minimal sketch follows this list). In this alternative process, we also work with the initial binary image obtained in the first stage of the main process, masked by the current bounding box. First, we dilate this image with a horizontally oriented rectangular structuring element. Then, we produce a second version of the binary image by dilating the initial binary image with a vertically oriented rectangular structuring element. Next, we perform a bitwise AND operation between the vertically and horizontally dilated versions of the binary image to obtain the crossings. To eliminate holes, we apply a morphological closing with a structuring element to the resulting image. Next, we detect the contours of the resulting binary image, filter them by area and fit minimal enclosing circles to the contours. Finally, we count the resulting adjusted circles.
- Alternative ID section processing 3. This process allowed us to correctly process images where the assessed student used a deficient erasing technique, producing stained circle marks. We begin by applying an adaptive thresholding to a median-filtered version of the grayscale image. Next, we apply a morphological closing using a structuring element. Then, we detect the contours, filter by area and fit minimal enclosing circles. Finally, we count the number of resulting circles.
- Alternative ID section processing 4. This process worked well with images where the assessed student marked the option circles with varying intensities and with marks extending beyond the intended circle area. In this case, we work with the saturation channel, S, of the HSV version of the input image, masked by the current bounding box. Next, we apply an iterative morphological closing with a structuring element and then detect the external contours. We draw the detected contours filled on a new binary image. We apply an iterative morphological closing to this image, extract its external contours, filter them by area and fit minimal enclosing circles. Finally, we count the adjusted circles.
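A minimal sketch of the second alternative ID-section process, which locates marks as the crossings of horizontally and vertically dilated versions of the binary image, is shown below. All kernel sizes and the area cutoff are assumed values for illustration, not the ones used in the system.

```python
import cv2

def count_id_marks(binary_roi):
    """Sketch of alternative ID-section process 2 on a masked binary image."""
    # Dilate along each axis with rectangular structuring elements.
    h_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3))  # assumed size
    v_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 15))  # assumed size
    horizontal = cv2.dilate(binary_roi, h_kernel)
    vertical = cv2.dilate(binary_roi, v_kernel)

    # The crossings of both dilated versions approximate the mark locations.
    crossings = cv2.bitwise_and(horizontal, vertical)

    # Closing removes small holes inside the crossings.
    close_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))  # assumed size
    crossings = cv2.morphologyEx(crossings, cv2.MORPH_CLOSE, close_kernel)

    # Fit a minimum enclosing circle to each sufficiently large contour and count them.
    contours, _ = cv2.findContours(crossings, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    circles = [cv2.minEnclosingCircle(c) for c in contours
               if cv2.contourArea(c) > 50]  # assumed area cutoff
    return len(circles)
```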
3.2.3. Alternative Answer Section Circle Mark Detection (AASCMD)
- Alternative answers section processing 1. This process works on images with variations in grayscale levels that affect the thresholding used in the main process. It begins by thresholding a slightly blurred version of the RGB image around the color (in RGB space) corresponding to the answer-option circles (orange in our case). The following operations are similar to the second part of the main process, where the objective is to obtain the crossings between two versions of the binary image. To obtain the first version, we apply a morphological closing to the answer circles using a structuring element elongated in the vertical direction. Similarly, we apply the same morphological operator with a structuring element elongated in the horizontal direction to obtain the second version. We then use morphological opening and closing operations to eliminate holes and spur pixels around the detected circles. Next, we compute the centroids of the detected crossings and draw filled circles with a radius equal to that of a reference circle on a clean sheet. Finally, we count the obtained circles.
- Alternative answers section processing 2. We found that for some images, when the assessed student marks the answer circle with an oversized mark invading the space of the surrounding options, the main and first alternative processes might fail. If this happens, we apply a different method that uses adaptive thresholding over a median-filtered version of the original grayscale image. Next, we obtain the region of interest by masking with the current bounding box. After a morphological closing with a rectangular structuring element, we obtain the external contours and draw them filled. We correct holes and spur pixels with morphological closing and opening operations before detecting external contours again. We filter the detected contours by area and fit a minimum enclosing circle to each. The fitted circles represent the answer options, and we count them to estimate the number of answers in the bounding-box rectangle.
- Alternative answers section processing 3. This process allows us to handle images where the assessed student marked the answers using different circle sizes and grayscale tones (a minimal sketch follows this list). In this case, we use the saturation channel (S) of the HSV representation of the input image. As the first step, we threshold the S channel using Otsu’s algorithm and apply the current bounding-box mask. Then, we apply an iterative morphological closing using a square structuring element. Next, we find the external contours and draw them filled. We correct the borders of the filled contours with another morphological closing and detect the contours again. We filter the detected contours by area and fit a minimum enclosing circle to each. Finally, we count the fitted circles.
- Alternative answers section processing 4. We found that in some tests, the assessed student may erase a mistaken answer and, in doing so, also erase the printed circle outline. This can cause the main and previous alternative processes to fail. This alternative process begins by thresholding the normalized image obtained after the uneven-illumination correction, using a threshold close to the maximum value to ensure that the borders of the available option circles are captured. After masking with the current bounding box, we detect the contours inside the bounding box, filter them by area and compute their centroids. We find the centroids nearest to the corners of the bounding box and compute the vertical distance to determine whether the number of answer rows in the current section is 30, 33 or 34. We then create a matrix of circles with 30, 33 or 34 rows and four columns inside the current bounding box, with the same radius as a reference circle detected on a clean test sheet.
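A minimal sketch of the third alternative answer-section process, operating on the HSV saturation channel, is shown below under stated assumptions: kernel sizes, the iteration count and the area cutoff are illustrative values, not the ones used in the article.

```python
import cv2
import numpy as np

def count_answer_marks(bgr_image, bbox_mask):
    """Sketch of alternative answer-section process 3 on the saturation channel."""
    # Saturation channel of the HSV representation, thresholded with Otsu and
    # restricted to the current answer-section bounding box.
    saturation = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)[:, :, 1]
    _, binary = cv2.threshold(saturation, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    binary = cv2.bitwise_and(binary, binary, mask=bbox_mask)

    # Iterative closing with a square kernel merges fragmented marks.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))  # assumed size
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel, iterations=3)  # assumed iterations

    # Draw the external contours filled, then smooth their borders with another closing.
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    filled = np.zeros_like(closed)
    cv2.drawContours(filled, contours, -1, 255, thickness=cv2.FILLED)
    filled = cv2.morphologyEx(filled, cv2.MORPH_CLOSE, kernel)

    # Filter by area, fit minimum enclosing circles and count them.
    contours, _ = cv2.findContours(filled, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    circles = [cv2.minEnclosingCircle(c) for c in contours
               if cv2.contourArea(c) > 80]  # assumed area cutoff
    return len(circles)
```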
3.2.4. Circle Ordering and Pixel Counting in Reference Image (COPCRI)
3.2.5. Cross Comparison of Identified Marks in Binary and Grayscale Images (CCIMBGI)
3.2.6. Noisy Background Section Processing (NBSP)
3.2.7. Answer Values by Row and Test Layout Definition (AVRTLD)
3.3. Automatic MCQ Answer Sheet Recognition Application
- Python. This language allows us to develop multi-platform applications with minimal effort and provides object-oriented programming. The version used is 3.10.12.
- OpenCV. This library allowed us to implement the image-processing algorithms needed to identify the responses on the digitized answer sheets. The version used is 4.11.0.
- PyQt6. This library allowed us to add graphical user interfaces (GUIs), including functions to select a folder graphically. The version used is 6.7.0.
- The user selects the root folder containing the images associated with each institution where SET applied the exam; each institution folder contains at least one subfolder.
- The user selects an institution’s folder inside the root folder. The application then identifies the parent (root) folder but displays only the designated institution’s information in the interface.
- The user selects a grade folder inside an institution’s folder. In this case, the interface must infer the institution from the parent folder and the root folder from the parent of the institution folder (a minimal sketch of this inference follows this list).
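A minimal sketch of how the missing levels of the folder hierarchy could be inferred from the selection, under the root/institution/grade layout described above; the function name and interface are hypothetical, not part of the application’s code.

```python
from pathlib import Path

def infer_context(selected: Path, level: str):
    """Return (root, institution, grade) folders given the selection level.

    level is 'root', 'institution' or 'grade', depending on what the user picked.
    """
    if level == "root":
        return selected, None, None
    if level == "institution":
        # The root folder is the parent of the institution folder.
        return selected.parent, selected, None
    # Grade folder: the institution is its parent, and the root is the grandparent.
    return selected.parent.parent, selected.parent, selected
```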
4. Results
4.1. Circle-Detection Comparison with Circle Hough Transform
- Where the applicant exceeded the space available to mark the circle, extending their mark outward in a uniform or non-uniform manner. Figure 11a shows that the CHT algorithm fails to detect the mark in responses 3, 5, 6, 7, 8 and 39, while our algorithm successfully detects them (see Figure 11b). The failure of the CHT algorithm is attributable to the applicant not filling in the selected circle for the answer, or in some cases, exceeding the limits of that circle.
- Where the applicant joined several circles with pencil strokes, causing the CHT algorithm to fail while our algorithm succeeds. Figure 11c,d illustrate a case where both algorithms yield the same responses for most items; however, the CHT algorithm fails in response 38, given that the applicant drew a horizontal line connecting the circles in that response. In this particular case, the proposed algorithm successfully performs circle identification, and the output response is unmarked (X), as none of the options are selected.
- Where the applicant did not fill in the selected circle, making it impossible for the algorithm to determine whether the mark was present. The CHT algorithm fails in responses 6, 33, 34, 41, 68, 70 and 71 (see Figure 11e), while our algorithm successfully detects the circles in those same responses (see Figure 11f). The CHT algorithm fails because the applicant mismarked the space for that particular response (a minimal sketch of the CHT baseline configuration follows this list).
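For reference, the Circle Hough Transform baseline can be run with OpenCV’s cv2.HoughCircles as sketched below; the parameter values are assumptions chosen for illustration, not the ones used in the reported comparison.

```python
import cv2

def detect_circles_cht(gray_roi):
    """Sketch of a CHT baseline over a grayscale answer-section crop."""
    blurred = cv2.medianBlur(gray_roi, 5)
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT,
        dp=1, minDist=15,           # assumed minimum distance between circle centers
        param1=100, param2=20,      # Canny upper threshold / accumulator threshold (assumed)
        minRadius=6, maxRadius=14,  # assumed radius range for the printed option circles
    )
    # HoughCircles returns None when no circle passes the accumulator threshold.
    return [] if circles is None else circles[0]
```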
4.2. Individual Answer Item Accuracy and Error Analysis
4.3. Answer Sheet Accuracy and Error Analysis
4.4. Hard Cases Considered by the Proposed OMR Algorithm
- The handwritten text added by the student interferes with the mark-identification algorithms since its location generates false marks in undesired places. Ideally, it would be necessary to detect this text after scanning the test sheet to avoid the interference mentioned above. Figure 15 shows examples of this problem.
- Another problem detected by the mark-detection algorithm is folding of the answer sheet, which may occur accidentally during scanning. The mark-recognition stage discards these answer sheets because the folding causes the algorithms to find a different number of marks than expected. Figure 16 shows examples of this problem.
- In isolated cases, students exceed the limits of the circles when marking the answer sheet, and in some cases, the marking appears blurry. In most cases, the algorithm considers the mark as valid. Figure 17 shows answer sheets with this problem.
- In some isolated cases, students fill in the answer sheet incorrectly; instead of filling the circle, they only draw a diagonal mark, or the filling of the circle is incomplete. In most cases, the algorithm still considers this type of mark valid. Figure 18 shows examples where the student marked the response circle incorrectly or very faintly. Figure 18a shows a fragment of an answer sheet image with several of the situations the OMR algorithm faced. Although the test administrator instructed students to fill in the circles of the answer sheet, some ignore this instruction and mark only part of the circle (more precisely, in answers 31 to 38, among others), complicating the task of deciding whether a circle contains a mark. Another instruction is that marks should not go beyond the margins of the circle, but this instruction is also ignored (more precisely, in answers 1, 2, 6 and 7, among others).
- Each high school scans the answer sheets. This scanning can be susceptible to errors, such as scanning the sheets in the wrong orientation or introducing artifacts during digitization, including lines that do not belong to the original answer sheet. Figure 19 shows answer sheets with this problem.
- During the exam, a student may change the answer to a given question. The erasing process may affect the visibility of adjacent marks and leave a smudged imprint, complicating the recognition process. Figure 20 shows examples of this problem.
4.5. Detected Answer Report
4.6. Analysis of Alternative Process Usage
4.7. Execution Time Performance vs. Manual Answer Extraction
5. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Fuhrman, M. Developing good multiple-choice tests and test questions. J. Geosci. Educ. 1996, 44, 379–384. [Google Scholar] [CrossRef]
- de Elias, E.M.; Tasinaffo, P.M.; Hirata, R., Jr. Optical mark recognition: Advances, difficulties, and limitations. SN Comput. Sci. 2021, 2, 367. [Google Scholar] [CrossRef]
- Patel, R.; Sanghavi, S.; Gupta, D.; Raval, M.S. CheckIt-A low cost mobile OMR system. In Proceedings of the TENCON 2015—2015 IEEE Region 10 Conference, Macao, China, 1–4 November 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 1–5. [Google Scholar]
- Abbas, A.A. An automatic system to grade multiple choice questions paper based exams. J. Univ. Anbar Pure Sci. 2009, 3, 174–181. [Google Scholar] [CrossRef]
- Chai, D. Automated marking of printed multiple choice answer sheets. In Proceedings of the 2016 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE), Bangkok, Thailand, 7–9 December 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 145–149. [Google Scholar]
- Khan, I.; Rahman, S.; Alam, F. An efficient, cost effective and user friendly approach for MCQs treatment. Proc. Pakistan Acad. Sci. Phys. Comput. Sci. 2018, 55, 39–44. [Google Scholar]
- Hadžić, Đ.; Saletović, E.; Kapić, Z. Software system for automatic reading, storing, and evaluating scanned paper Evaluation Sheets for questions with the choice of one correct answer from several offered. IOP Conf. Ser. Mater. Sci. Eng. 2023, 1298, 012020. [Google Scholar] [CrossRef]
- Shaikh, E.; Mohiuddin, I.; Manzoor, A.; Latif, G.; Mohammad, N. Automated grading for handwritten answer sheets using convolutional neural networks. In Proceedings of the 2019 2nd International Conference on New Trends in Computing Sciences (ICTCS), Shanghai, China, 2–4 November 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–6. [Google Scholar]
- Jocovic, V.; Nikolic, B.; Bacanin, N. Software System for Automatic Grading of Paper Tests. Electronics 2023, 12, 4080. [Google Scholar] [CrossRef]
- Obradovic, M.; Srbljanovic, A.; Djukic, J.; Jocovic, V.; Misic, M. Improvements of Test Variant Assembly Tool for Massive Exams. In Proceedings of the 2023 31st Telecommunications Forum (TELFOR), Belgrade, Serbia, 21–22 November 2023; pp. 1–4. [Google Scholar] [CrossRef]
- Kommey, B.; Keelson, E.; Samuel, F.; Twum-Asare, S.; Akuffo, K.K. Automatic Multiple Choice Examination Questions Marking and Grade Generator Software. IPTEK J. Technol. Sci. 2022, 33, 175–189. [Google Scholar] [CrossRef]
- Rangkuti, A.H.; Athalia, Z.; Thalia, T.E.; Wiharja, C.E.; Rakhmansyah, A. Economical and Efficient Multiple-Choice Question Grading System using Image Processing Technique. Int. J. Intell. Syst. Appl. Eng. 2023, 11, 193–198. [Google Scholar]
- Calado, M.P.; Ramos, A.A.; Jonas, P. An application to generate, correct and grade multiple-choice tests. In Proceedings of the 2019 6th International Conference on Systems and Informatics (ICSAI), Shanghai, China, 2–4 November 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1548–1552. [Google Scholar]
- Jain, V.; Malik, S.; Bhatia, V. Robust Image Processing based Real-time Optical Mark Recognition System. In Proceedings of the 2022 IEEE 6th Conference on Information and Communication Technology (CICT), Gwalior, India, 18–20 November 2022; pp. 1–5. [Google Scholar] [CrossRef]
- Somaiya, E.; Mim, A.S.; Kader, M.A. Webcam Based Robust and Affordable Optical Mark Recognition System for Teachers. Indones. J. Electr. Eng. Inform. (IJEEI) 2024, 12, 870–882. [Google Scholar] [CrossRef]
- Karunanayake, N. OMR sheet evaluation by web camera using template matching approach. Int. J. Res. Emerg. Sci. Technol. 2015, 2, 40–44. [Google Scholar]
- Tavana, A.M.; Abbasi, M.; Yousefi, A. Optimizing the correction of MCQ test answer sheets using digital image processing. In Proceedings of the 2016 Eighth International Conference on Information and Knowledge Technology (IKT), Hamedan, Iran, 7–8 September 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 139–143. [Google Scholar]
- Rababaah, A.R. Machine vision algorithm for MCQ automatic grading–MVAAG. Int. J. Comput. Vis. Robot. 2025, 15, 233–251. [Google Scholar] [CrossRef]
- Loke, S.C.; Kasmiran, K.A.; Haron, S.A. A new method of mark detection for software-based optical mark recognition. PLoS ONE 2018, 13, 1–15. [Google Scholar] [CrossRef] [PubMed]
- Atencio, Y.P.; Suaquita, J.H.; Ramirez, J.M.; Moscoso, J.C.; Saucedo, F.M. Using OMR for Grading MCQ-Type Answer Sheets Based on Bubble Marks. In Advanced Computing, Proceedings of the 12th International Conference, IACC 2022, Hyderabad, India, 16–17 December 2022; Garg, D., Narayana, V.A., Suganthan, P.N., Anguera, J., Koppula, V.K., Gupta, S.K., Eds.; Springer Nature: Cham, Switzerland, 2023; pp. 395–404. [Google Scholar]
- Hafeez, Q.; Aslam, W.; Aziz, R.; Aldehim, G. An Enhanced Fault Tolerance Algorithm for Optical Mark Recognition Using Smartphone Cameras. IEEE Access 2024, 12, 121305–121319. [Google Scholar] [CrossRef]
- Sinchai, A.; Tuwanut, P. Using of an arithmetic sequence to estimate undetected existing circle choice locations. In Proceedings of the 2022 37th International Technical Conference on Circuits/Systems, Computers And Communications (ITC-CSCC), Phuket, Thailand, 5–8 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 268–271. [Google Scholar]
- Jingyi, T.; Hooi, Y.K.; Bin, O.K. Image processing for enhanced omr answer matching precision. In Proceedings of the 2021 International Conference on Computer & Information Sciences (ICCOINS), Virtual, 13–15 July 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 322–327. [Google Scholar]
- Shi, C.; Zhang, J.; Zhang, J.; Zhang, C.; Zang, X.; Wang, L.; Zhu, C. Unsupervised Optical Mark Localization for Answer Sheet Based on Energy Optimization. In Proceedings of the 2023 IEEE 9th International Conference on Cloud Computing and Intelligent Systems (CCIS), Dali, China, 12–13 August 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 386–392. [Google Scholar]
- Tabassum, K.; Rahman, Z. Optical Mark Recognition with Object Detection and Clustering. In Proceedings of the 2024 6th International Conference on Electrical Engineering and Information & Communication Technology (ICEEICT), Dhaka, Bangladesh, 2–4 May 2024; pp. 352–357. [Google Scholar] [CrossRef]
- Afifi, M.; Hussain, K.F. The achievement of higher flexibility in multiple-choice-based tests using image classification techniques. Int. J. Doc. Anal. Recognit. (IJDAR) 2019, 22, 127–142. [Google Scholar] [CrossRef]
- Mondal, S.; De, P.; Malakar, S.; Sarkar, R. OMRNet: A lightweight deep learning model for optical mark recognition. Multimed. Tools Appl. 2024, 83, 14011–14045. [Google Scholar] [CrossRef]
- Tinh, P.D.; Minh, T.Q. Automated Paper-based Multiple Choice Scoring Framework using Fast Object Detection Algorithm. Int. J. Adv. Comput. Sci. Appl. 2024, 15, 1174–1181. [Google Scholar] [CrossRef]
- Douglas, D.H.; Peucker, T.K. Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. Cartographica 1973, 10, 112–122. [Google Scholar] [CrossRef]
- Liu, Y.; Wang, X.; Hu, E.; Wang, A.; Shiri, B.; Lin, W. VNDHR: Variational single nighttime image Dehazing for enhancing visibility in intelligent transportation systems via hybrid regularization. IEEE Trans. Intell. Transp. Syst. 2025, 26, 10189–10203. [Google Scholar] [CrossRef]
- Talebi, H.; Milanfar, P. Global image denoising. IEEE Trans. Image Process. 2013, 23, 755–768. [Google Scholar] [CrossRef] [PubMed]
| Grade | Number of Tests |
|---|---|
| 10th grade | 2157 |
| 11th grade | 2143 |
| 12th grade | 1729 |
| Total | 6029 |
| Answers distribution over the dataset | |
| 90-answer tests | 3886 |
| 100-answer tests | 2143 |
| Total institutions | 44 |
| Total answers | 564,040 |
| Circle Type | Total Circles | Recognized Circles and Accuracy (CHT) | Recognized Circles and Accuracy (Proposed) |
|---|---|---|---|
| Marked circles | 546,978 | 171,094 (31.28%) | 533,030 (97.45%) |
| Unmarked circles | 1,709,212 | 1,648,364 (96.44%) | 1,707,160 (99.88%) |
| Total circles | 2,256,160 | 1,819,458 (80.64%) | 2,224,190 (99.26%) |
| Error in answers section | |
|---|---|
| Exams without error | 5797 |
| Exams with at least one error | 232 |
| Accuracy | 96.15% |
| Error in ID section | |
| Exams without error | 6000 |
| Exams with ID error | 29 |
| Accuracy | 99.50% |
| Error in answers compared to the entire dataset | |
| Total answers | 564,040 |
| Answers detected with error | 424 |
| Accuracy | 99.95% |
| Summary of Answer Errors | |
|---|---|
| Exams with 1 answer error | 166 |
| Exams with 2 answer errors | 25 |
| Exams with 3 answer errors | 14 |
| Exams with 4 answer errors | 9 |
| Exams with 5 answer errors | 8 |
| Exams with 6 answer errors | 4 |
| Exams with 7 answer errors | 1 |
| Exams with 8 answer errors | 2 |
| Exams with 9 answer errors | 0 |
| Exams with 10 answer errors | 0 |
| Exams with 11 answer errors | 0 |
| Exams with 12 answer errors | 1 |
| Exams with 13 answer errors | 0 |
| Exams with 14 answer errors | 1 |
| Exams with 15 answer errors | 0 |
| Exams with 16 answer errors | 0 |
| Exams with 17 answer errors | 1 |