Data Descriptor

Human Male Body Images from Multiple Perspectives with Multiple Lighting Settings

1 Department of Computer Science, California State University, Fullerton, CA 92831, USA
2 Department of Computer Science, North Dakota State University, Fargo, ND 58102, USA
* Author to whom correspondence should be addressed.
Submission received: 28 November 2018 / Revised: 20 December 2018 / Accepted: 20 December 2018 / Published: 23 December 2018

Abstract

There are multiple technological ways to identify humans and verify claimed identities. The dataset presented herein facilitates work on hard and soft biometric human identification and identity verification. It consists of full-body images of multiple fully clothed males from a constrained age range. The images were taken from multiple perspectives under varied lighting brightness and color temperature.
Dataset License: CC-BY

1. Summary

A variety of techniques exist for visually detecting humans and identifying particular individuals. Facial recognition [1,2,3,4], gait analysis [5,6,7,8] and body shape-based recognition [9,10,11] techniques have all been proposed, as have techniques that combine shape silhouettes and gait [12,13], and facial recognition and gait [14,15,16,17]. The use of techniques to detect gender [18,19,20], age [19] and other characteristics from visual data has also been demonstrated. Multi-factor authentication systems can use the identification of individuals or their characteristics as an additional check that a prospective system user is who he or she claims to be. Similar systems also have prospective uses in medical, retail, clothing recommendation and numerous other application areas.
This dataset:
  • Facilitates the testing of human body identification and verification algorithms and the comparison of performance between such algorithms.
  • Provides data to analyze algorithm performance under different lighting brightness conditions and temperatures, in addition to subject perspective.
  • Includes images from all perspectives under all experimental lighting conditions, allowing training under one lighting condition and testing under another.
  • Was collected in a controlled environment with a white backdrop.
  • Is of subjects that are all male and of similar ages; many have a similar build. This facilitates testing an algorithm’s ability to distinguish individuals from other, similar-looking individuals.
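As a minimal sketch of the train-under-one-lighting, test-under-another workflow that the dataset supports (the orientation and lighting labels below are illustrative, not the dataset’s actual file naming):

```python
from itertools import product

# Illustrative labels for the five orientations and five lighting
# settings described in this dataset.
orientations = ["left", "left-center", "center", "right-center", "right"]
lighting = ["5600K-100", "3300K-100", "4400K-25", "4400K-60", "4400K-100"]

# One image per (orientation, lighting) pair: 25 images per subject.
images = list(product(orientations, lighting))

# Train on one lighting condition, test on the remaining four.
train = [img for img in images if img[1] == "4400K-100"]
test = [img for img in images if img[1] != "4400K-100"]

print(len(images), len(train), len(test))  # 25 5 20
```

The same partitioning can be applied per subject once the images are mapped to their capture conditions.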

2. Data Description

This article describes a dataset that can be used to evaluate the performance of human identification and classification systems. The data are of multiple subjects and were collected from different perspectives, relative to the subject, and under different lighting conditions, including multiple levels of light brightness and temperature. The subjects are males between 18 and 26 years old. Multiple subjects have similar skin tones and builds, facilitating testing of a system’s ability to identify similar-looking individuals.
The data are provided as JPEG files. These files can be opened and processed by numerous common image viewing, editing and processing software programs. These JPEG files contain the original metadata included by the camera.

3. Methods

This section discusses the methods used to collect the data. First, the equipment used will be described, along with a description of how it was used during the collection of the data. Then, a description of the experimental setup is provided. Finally, the experimental procedures are detailed.

3.1. Equipment

The equipment used consisted of a camera and multiple LED lights. A Canon T5i with its stock lens was used to take the images in this dataset. The camera was set to aperture priority mode with a maximum ISO of 800, an f-stop of f/4, AI Focus, auto white balance, flash off and portrait mode, and shot at a resolution of 3456 × 5184 pixels. The photos were saved in JPEG format.
The subjects were photographed in front of a white photographic backdrop that was six feet wide and nine feet tall. Six Neewer LED500LRC lights and light stands were used to illuminate the subject and the background. Four of the lights were used for backdrop illumination and two were used to light the subject. A Tekpower lumen reader was used to record lumen levels.

3.2. Setup

The backdrop was positioned to extend under and beyond the feet of the subject. This prevented the floor from being visible in the image and a floor-backdrop seam from showing. Instead, a consistent white background was present throughout the images.
Two lights were placed to the left of the backdrop: one (the lower left light) lowered as far as its stand allowed and one (the upper left light) raised as high as its stand allowed. Two more lights (the lower right and upper right lights) were placed to the right of the backdrop in the same manner. The camera was placed centrally and positioned vertically at the subject’s waist level. One light (the subject light) was placed next to the camera at the camera’s height. Finally, a light (the lower subject light) was placed directly in front of the camera and lowered as far as its stand allowed. The coordinate positions of all of the equipment, in inches, are listed in Table 1. All measurements are from the left corner of the room, as shown in Figure 1. Figure 2 depicts the approximate positions of the equipment from overhead and the orientation of the coordinate system; note that it is not drawn to scale.
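The inch-to-meter conversions in Table 1 can be reproduced with a short sketch (positions copied from Table 1; a few table values are rounded to slightly different precision):

```python
INCH_TO_M = 0.0254  # exact definition of the inch in meters

# Light, camera and subject-light positions from Table 1, in inches.
positions_in = {
    "lower left light": (30, 41),
    "upper left light": (30, 33),
    "lower right light": (27, 132),
    "upper right light": (30, 141),
    "camera": (109, 87),
    "subject light": (103, 75),
    "lower subject light": (88, 87),
}

# Convert each (x, y) pair to meters, rounded to two decimal places.
positions_m = {
    name: tuple(round(v * INCH_TO_M, 2) for v in xy)
    for name, xy in positions_in.items()
}

print(positions_m["camera"])  # (2.77, 2.21)
```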

3.3. Experimental Procedures

Images were taken of fully clothed males aged between 18 and 26. The subjects varied in height and build. They were differently attired and some were wearing jackets. Some also had beards. A number had similar build types and skin tone levels. Each subject was asked to face and look straight ahead (in each orientation), with their arms at their sides. They were also asked to empty their pockets and take their shoes off.
The subjects were instructed to stand in each of five orientations while images were taken: facing left, half-way between left and forward, facing forward, half-way between right and forward, and facing right. In each orientation, an image was taken under each of five lighting brightness and temperature settings, for a total of 25 images per subject. Figure 3 depicts an individual in each of the orientations with each of the lighting configurations used.
The images were taken with the following lighting parameters: a color temperature of 5600 K at 100% brightness, 3300 K at 100% brightness, 4400 K at 25% brightness, 4400 K at 60% brightness and 4400 K at 100% brightness. The lumen level produced at each of these settings is listed in Table 2.
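The lighting settings and their measured lumen levels (copied from Table 2) can be captured in a small lookup, e.g. when normalizing images by recorded light level:

```python
# Lumen readings from Table 2, keyed by (color temperature, brightness).
lumens = {
    ("5600 K", "100%"): 620,
    ("3300 K", "100%"): 545,
    ("4400 K", "25%"): 326,
    ("4400 K", "60%"): 570,
    ("4400 K", "100%"): 865,
}

# Note that at 4400 K the brightness settings are roughly, but not
# exactly, proportional to the measured lumen output.
brightest = max(lumens, key=lumens.get)
print(brightest, lumens[brightest])  # ('4400 K', '100%') 865
```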

Author Contributions

Conceptualization, W.C. and J.S.; methodology, W.C. and J.S.; data curation, W.C.; writing—original draft preparation, W.C. and J.S.; writing—review and editing, J.S.; supervision, J.S.; project administration, J.S.; funding acquisition, J.S.

Funding

This work was supported by the U.S. National Science Foundation (NSF award # 1757659).

Acknowledgments

Thanks are given to Collin Gros and Marco Colasito for assisting with the experimental setup and data collection for this work. Facilities and some equipment used for the collection of these data were provided by the North Dakota State University Institute for Cyber Security Education and Research and the North Dakota State University Department of Computer Science.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fasel, B.; Luettin, J. Automatic facial expression analysis: A survey. Pattern Recognit. 2003, 36, 259–275.
  2. Samal, A.; Iyengar, P.A. Automatic recognition and analysis of human faces and facial expressions: A survey. Pattern Recognit. 1992, 25, 65–77.
  3. Vasilescu, M.A.O.; Terzopoulos, D. Multilinear image analysis for facial recognition. In Proceedings of the Object Recognition Supported by User Interaction for Service Robots, Quebec City, QC, Canada, 11–15 August 2002; Volume 2, pp. 511–514.
  4. Zhao, W.; Chellappa, R.; Phillips, P.J.; Rosenfeld, A. Face recognition. ACM Comput. Surv. 2003, 35, 399–458.
  5. Kale, A.; Cuntoor, N.; Yegnanarayana, B.; Rajagopalan, A.N.; Chellappa, R. Gait Analysis for Human Identification; Springer: Berlin/Heidelberg, Germany, 2003; pp. 706–714.
  6. Lee, L.; Grimson, W.E.L. Gait analysis for recognition and classification. In Proceedings of the Fifth IEEE International Conference on Automatic Face Gesture Recognition, Washington, DC, USA, 21 May 2002; pp. 155–162.
  7. Wang, J.; She, M.; Nahavandi, S.; Kouzani, A. A Review of Vision-Based Gait Recognition Methods for Human Identification. In Proceedings of the 2010 International Conference on Digital Image Computing: Techniques and Applications, Sydney, NSW, Australia, 1–3 December 2010; pp. 320–327.
  8. Kale, A.; Sundaresan, A.; Rajagopalan, A.N.; Cuntoor, N.P.; Roy-Chowdhury, A.K.; Kruger, V.; Chellappa, R. Identification of Humans Using Gait. IEEE Trans. Image Process. 2004, 13, 1163–1173.
  9. Godil, A.; Grother, P.; Ressler, S. Human identification from body shape. In Proceedings of the Fourth International Conference on 3-D Digital Imaging and Modeling, Banff, AB, Canada, 6–10 October 2003; pp. 386–392.
  10. Balamurali, R.; Chandrasekar, A. Multiple parameter algorithm approach for adult image identification. Cluster Comput. 2018.
  11. Bk, S.; Corvee, E.; Bremond, F.; Thonnat, M. Person Re-identification Using Spatial Covariance Regions of Human Body Parts. In Proceedings of the 2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance, Boston, MA, USA, 29 August–1 September 2010; pp. 435–440.
  12. Wang, L.; Tan, T.; Ning, H.; Hu, W. Silhouette analysis-based gait recognition for human identification. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 1505–1518.
  13. Collins, R.T.; Gross, R.; Shi, J. Silhouette-based human identification from body shape and gait. In Proceedings of the Fifth IEEE International Conference on Automatic Face Gesture Recognition, Washington, DC, USA, 21 May 2002; pp. 366–371.
  14. Kale, A.; Roychowdhury, A.K.; Chellappa, R. Fusion of gait and face for human identification. In Proceedings of the 2004 IEEE International Conference on Acoustics, Speech, and Signal Processing, Montreal, QC, Canada, 17–21 May 2004; Volume 5.
  15. Zhou, X.; Bhanu, B. Feature fusion of side face and gait for video-based human identification. Pattern Recognit. 2008, 41, 778–795.
  16. Shakhnarovich, G.; Darrell, T. On probabilistic combination of face and gait cues for identification. In Proceedings of the Fifth IEEE International Conference on Automatic Face Gesture Recognition, Washington, DC, USA, 20–21 May 2002; pp. 176–181.
  17. Shakhnarovich, G.; Lee, L.; Darrell, T. Integrated face and gait recognition from multiple views. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), Kauai, HI, USA, 8–14 December 2001; Volume 1.
  18. Kozlowski, L.T.; Cutting, J.E. Recognizing the sex of a walker from a dynamic point-light display. Percept. Psychophys. 1977, 21, 575–580.
  19. Makihara, Y.; Mannami, H.; Yagi, Y. Gait Analysis of Gender and Age Using a Large-Scale Multi-view Gait Database. In Asian Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2011; pp. 440–451.
  20. Zhang, D.; Wang, Y. Gender Recognition Based on Fusion of Face and Multi-view Gait. In International Conference on Biometrics; Springer: Berlin/Heidelberg, Germany, 2009; pp. 1010–1018.
Figure 1. Experimental setup.
Figure 2. Equipment positions (coordinate positions are presented in Table 1).
Figure 3. Subject in each orientation with varied lighting brightnesses and temperatures. Top left image: left facing, 5600 K, 100%. Top middle image: left-center facing, 3300 K, 100%. Top right image: center facing, 4400 K, 25%. Bottom left image: right-center facing, 4400 K, 60%. Bottom right image: right facing, 4400 K, 100%.
Table 1. Equipment positions.
Item                 Position (x, y) in Inches   Position (x, y) in Meters
Backdrop stands      10, 48 and 10, 122          0.25, 1.22 and 0.25, 3.09
Lower left light     30, 41                      0.76, 1.0
Upper left light     30, 33                      0.76, 0.84
Lower right light    27, 132                     0.69, 3.35
Upper right light    30, 141                     0.76, 3.58
Camera               109, 87                     2.77, 2.21
Subject light        103, 75                     2.62, 1.91
Lower subject light  88, 87                      2.24, 2.21
Table 2. Lumen production by temperature and brightness setting.
Lighting Configuration (Temperature, Brightness)   Lumen Production
5600 K, 100%                                       620
3300 K, 100%                                       545
4400 K, 25%                                        326
4400 K, 60%                                        570
4400 K, 100%                                       865

Clemons, W.; Straub, J. Human Male Body Images from Multiple Perspectives with Multiple Lighting Settings. Data 2019, 4, 3. https://doi.org/10.3390/data4010003
