Deep Learning-Based Imagery Style Evaluation for Cross-Category Industrial Product Forms
Abstract
1. Introduction
Aims of This Study
- (1)
- Construction of representative style paradigms: Naturalness, Technology, Toughness, Steadiness, Softness, and Dynamism (NTTSSD) for evaluating imagery styles across multiple product categories. Based on Kansei Engineering (KE), we integrate four key visual morphological features, namely contour line, edge transition angle, visual direction, and visual curvature, to establish the NTTSSD paradigms. These paradigms provide a systematic mapping between product form features and user emotional preferences, allowing perceptual evaluations to be quantified across diverse product categories. On this basis, a dataset of 8604 images covering 20 industrial product types was constructed (a minimal annotation sketch follows this list).
- (2)
- Development of an end-to-end intelligent ISE framework: This study integrates an enhanced You Only Look Once version 5 small (YOLOv5s) deep learning network with the NTTSSD paradigms, incorporating a Convolutional Block Attention Module (CBAM) to strengthen the learning of key morphological features (a CBAM sketch follows this list). A shallow-to-deep semantic feature fusion mechanism is also proposed to capture multi-scale morphological characteristics across diverse product types. Experimental results show that the mean Average Precision (mAP@0.5) across all categories reaches 86.1%, a 1.4% improvement over the state-of-the-art baseline model, while processing speed increases by 40.7 Frames Per Second (FPS). This framework substantially reduces the dependence on costly user surveys and complex data transformations, enabling efficient and collaborative evaluation across product categories.
- (3)
- Validation of the model’s generalization and interpretability: We use key-layer feature maps and Gradient-weighted Class Activation Mapping (Grad-CAM) to visualize the model’s attention to critical morphological regions, thereby enhancing interpretability (a generic Grad-CAM sketch follows this list). In extended testing across 40 product types, the ISE model achieves a root mean square error (RMSE) of 0.26, demonstrating strong cross-category generalization. These results highlight the feasibility of data-driven, intelligent imagery style evaluation in cross-category industrial product design.
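To make aim (1) concrete, the sketch below shows one plausible way to record a single dataset sample: the four morphological feature measurements (l, θ, d, k) together with a six-dimensional NTTSSD preference vector derived from user surveys. The field names, value ranges, and example numbers are illustrative assumptions, not the authors' published data schema.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical per-image annotation: the four morphological features (l, theta, d, k)
# plus a six-dimensional NTTSSD preference vector (Na, Te, To, St, So, Dy).
@dataclass
class NTTSSDSample:
    image_path: str
    contour_line: float           # l: overall contour-line character
    edge_transition_angle: float  # theta: sharpness of edge transitions (degrees)
    visual_direction: float       # d: dominant visual direction
    visual_curvature: float       # k: degree of visual curvature
    nttssd: List[float]           # [Na, Te, To, St, So, Dy], proportions summing to 1.0

    def dominant_style(self) -> str:
        labels = ["Naturalness", "Technology", "Toughness",
                  "Steadiness", "Softness", "Dynamism"]
        return labels[max(range(6), key=lambda i: self.nttssd[i])]

# Example: an off-road-car image whose user preferences lean toward Toughness.
sample = NTTSSDSample("images/offroad_001.jpg", 0.8, 95.0, 0.3, 0.2,
                      [0.03, 0.12, 0.66, 0.09, 0.02, 0.08])
print(sample.dominant_style())  # -> "Toughness"
```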
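For aim (2), the following PyTorch sketch shows a standard CBAM block (channel attention followed by spatial attention) of the kind attached to a YOLOv5s backbone. The channel count, reduction ratio, and stand-alone usage are placeholders; the exact placement inside the enhanced network is described in Section 4.1 and is not reproduced here.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention: shared MLP over global average- and max-pooled features."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx)

class SpatialAttention(nn.Module):
    """Spatial attention: 7x7 convolution over channel-wise average and max maps."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """CBAM: refine a feature map with channel attention, then spatial attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention()

    def forward(self, x):
        x = x * self.ca(x)
        return x * self.sa(x)

# Example: refine a 256-channel backbone feature map.
feat = torch.randn(1, 256, 40, 40)
print(CBAM(256)(feat).shape)  # torch.Size([1, 256, 40, 40])
```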
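For aim (3), Grad-CAM weights the activations of a chosen convolutional layer by the gradients of a class score and produces a coarse heat map of the regions the model attends to. The sketch below is a generic PyTorch implementation using forward and backward hooks; the stand-in model, target layer, and class index are assumptions rather than the ISE model's actual internals.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx):
    """Generic Grad-CAM: heat map of where `target_layer` supports `class_idx`."""
    activations, gradients = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))
    try:
        model.eval()
        score = model(image)[0, class_idx]        # assumes a (1, num_classes) output
        model.zero_grad()
        score.backward()
        act, grad = activations[0], gradients[0]  # both (1, C, H, W)
        weights = grad.mean(dim=(2, 3), keepdim=True)         # pooled gradients
        cam = F.relu((weights * act).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                            align_corners=False)
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
        return cam.squeeze().detach()
    finally:
        h1.remove()
        h2.remove()

# Example with a torchvision classifier standing in for the ISE network (assumption).
from torchvision.models import resnet18
net = resnet18(weights=None)
heatmap = grad_cam(net, torch.randn(1, 3, 224, 224), net.layer4[-1], class_idx=0)
print(heatmap.shape)  # torch.Size([224, 224])
```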
2. Related Work
2.1. Research on User Emotional Preferences in Product Forms Based on Kansei Engineering
2.2. Deep Learning Model
3. Constructing NTTSSD Paradigms for Imagery Style of Industrial Product Forms
3.1. Quantifying the Imagery Style of Industrial Product Forms Based on Kansei Engineering
3.1.1. Acquisition of Cross-Category Industrial Product Forms
3.1.2. Establishing the Vocabulary of Imagery Styles in Industrial Product Forms
3.1.3. Experimental Design of User Preference Using a Three-Stage Semantic Differential (SD) Questionnaire Survey
3.2. Constructing the NTTSSD Paradigms for Imagery Style Mapping Across Cross-Category Industrial Products
3.3. Constructing the Dataset Based on the NTTSSD Paradigms
4. Imagery Style Evaluation (ISE) Method for Cross-Category Industrial Product Forms
4.1. Constructing an Enhanced YOLOv5 Deep Learning Network for Imagery Style Evaluation
4.2. Testing and Comparative Analysis of the Imagery Style Evaluation Model
4.3. Testing the Generalization Ability of the ISE Model on Different Industrial Products
4.3.1. Experimental Program
4.3.2. Validation Results and Analysis
- (1)
- Coupled effect of feature distinguishability and background interference: When a product has a simple structure and highly distinguishable features (e.g., a trash bin with sharp lines and a rigid form, or headphones with smooth, rounded edges), the model accurately evaluates the imagery style regardless of background complexity. In contrast, when the structure is complex and the features are ambiguous (e.g., cases Bf1, Bf2, and Bf3), the model tends to misclassify the imagery style even against a simple background, because the stylistic cues overlap or conflict.
- (2)
- Structure-feature balance under background-free conditions: For complex products with homogeneous and dominant features (e.g., bicycles, robots), the model can effectively extract and evaluate key stylistic traits. However, for products with simple structures but conflicting features (e.g., cases Bf4 and Bf5), the model struggles due to semantic interference among competing stylistic signals, leading to ambiguous or incorrect predictions.
- (3)
- Feature competition in multi-object scenes: Products with complex structures but salient features (e.g., electric shavers, projectors) are still evaluated correctly whether the background is simple or cluttered. However, in scenes containing multiple objects, even when each object has a simple, clear structure (e.g., case Bf6), feature overlap and competition between objects introduce significant noise, often resulting in inconsistent or inaccurate assessments.
4.4. Product Form Design Decision Process Based on ISE Method
5. Feature Visualization of the Imagery Style Evaluation Method
5.1. Visualization Technique
5.2. Visualization Analysis
6. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Meaning
---|---
EEG | Electroencephalography
SVM | Support Vector Machines
ISE | Imagery Style Evaluation
NTTSSD | Naturalness, Technology, Toughness, Steadiness, Softness, Dynamism
CBAM | Convolutional Block Attention Module
FPS | Frames Per Second
Grad-CAM | Gradient-weighted Class Activation Mapping
RMSE | Root Mean Square Error
KE | Kansei Engineering
SD | Semantic Differential
TOPSIS | Technique for Order Preference by Similarity to Ideal Solution
DETR | DEtection TRansformer
YOLO | You Only Look Once
FLOPs | Floating-point Operations
AP | Average Precision
mAP | mean Average Precision
Param | Parameters
Conv | Convolution layer
C3 | Module containing three standard Conv layers
SPPF | Spatial Pyramid Pooling-Fast
PANet | Path Aggregation Network
l | Contour line
θ | Edge transition angle
d | Visual direction
k | Visual curvature
SGD | Stochastic Gradient Descent
IoU | Intersection over Union
CIoU | Complete Intersection over Union
NMS | Non-Maximum Suppression
Na | Naturalness
Te | Technology
To | Toughness
St | Steadiness
So | Softness
Dy | Dynamism
References
Model | Param (M) | FLOPs (G) | APval (%) |
---|---|---|---|
YOLOv5-S | 7.2 | 16.5 | 37.4 |
YOLOv6-3.0-S | 18.5 | 45.3 | 44.3 |
Gold-YOLO-S | 21.5 | 46.0 | 45.4 |
YOLO-MS-S | 8.1 | 31.2 | 46.2 |
YOLOv8-S | 11.2 | 28.6 | 44.9 |
YOLOv9-S | 7.1 | 26.4 | 46.7 |
YOLOv10-S | 7.2 | 21.6 | 46.8 |
RT-DETR-R18 | 20.0 | 60.0 | 46.5 |
Serial Number | Imagery Style Words | Serial Number | Imagery Style Words | Serial Number | Imagery Style Words | Serial Number | Imagery Style Words |
---|---|---|---|---|---|---|---|
01 | Tidy | 26 | Delicate | 51 | Austere | 76 | Geometric |
02 | Unaffected | 27 | Traditional | 52 | Lithe | 77 | Respected |
03 | Ponderous | 28 | Exaggerated | 53 | Soft | 78 | Ecological |
04 | Urbane | 29 | Meticulous | 54 | Airy | 79 | Exquisite |
05 | Graceful | 30 | Comfortable | 55 | Placid | 80 | Lively |
06 | Flexible | 31 | Directional | 56 | Smooth | 81 | Decorative |
07 | Revolving | 32 | Interesting | 57 | Masculine | 82 | Individual |
08 | Rough | 33 | Overbearing | 58 | Concise | 83 | Mysterious |
09 | Bold | 34 | Advanced | 59 | Trendy | 84 | High-end |
10 | Unique | 35 | Modern | 60 | Plush | 85 | Newfangled |
11 | Daring | 36 | Swanking | 61 | Natural | 86 | Comfortable |
12 | Organic | 37 | Kingly | 62 | Rational | 87 | High-grade |
13 | Ornate | 38 | Exquisite | 63 | Amiable | 88 | Rustic |
14 | Gliding | 39 | Conservative | 64 | Sedate | 89 | Classical |
15 | Steady | 40 | Coordinating | 65 | Laconic | 90 | Unified |
16 | Nostalgic | 41 | Technological | 66 | Sincere | 91 | Romantic |
17 | Simple | 42 | Plain | 67 | Staid | 92 | Dynamic |
18 | Rational | 43 | Tenuous | 68 | Friendly | 93 | Stately |
19 | Quiet | 44 | Lively | 69 | Slant | 94 | Formal |
20 | Cute | 45 | Vivid | 70 | Tough | 95 | Unbending |
21 | Sportive | 46 | Sharp | 71 | Stocky | 96 | Heavy |
22 | Streamline | 47 | Reliable | 72 | Future | 97 | Demure |
23 | Hard | 48 | Particular | 73 | Rounded | 98 | Fashion |
24 | Solid | 49 | Elegant | 74 | Pure | 99 | Bionic |
25 | Peculiar | 50 | Brisk | 75 | Genial | 100 | Speed |
Degree of Relevance | Irrelevant | Slightly Related | Relevant | Very Relevant | Strongly Relevant
---|---|---|---|---|---
 | 0 | 0.2 | 0.5 | 0.8 | 1
Product Images | Imagery Style Vocabulary | D+ (Distance to Positive Ideal Solution) | D- (Distance to Negative Ideal Solution) | C (Comprehensive Evaluation Score)
---|---|---|---|---
Business Car | Steady | 0.115 | 0.938 | 0.890
 | Demure | 0.163 | 0.872 | 0.842
 | Elegant | 0.381 | 0.667 | 0.637
 | Heavy | 0.274 | 0.758 | 0.735
 | Stately | 0.332 | 0.714 | 0.683
 | Formal | 0.261 | 0.797 | 0.753
Mini Car | Soft | 0.082 | 0.970 | 0.922
 | Amiable | 0.163 | 0.872 | 0.842
 | Rounded | 0.261 | 0.797 | 0.753
 | Quiet | 0.381 | 0.667 | 0.637
 | Urbane | 0.332 | 0.714 | 0.683
 | Cute | 0.321 | 0.755 | 0.701
Racing Car | Dynamic | 0.082 | 0.970 | 0.922
 | | 0.261 | 0.797 | 0.753
 | Gliding | 0.332 | 0.714 | 0.683
 | Sportive | 0.163 | 0.872 | 0.842
 | Flexible | 0.274 | 0.758 | 0.735
 | Speed | 0.464 | 0.561 | 0.548
Off-road Car | Tough | 0.115 | 0.938 | 0.890
 | Masculine | 0.163 | 0.872 | 0.842
 | Geometric | 0.274 | 0.758 | 0.735
 | Rough | 0.381 | 0.667 | 0.637
 | Sharp | 0.332 | 0.714 | 0.683
 | Hard | 0.424 | 0.616 | 0.592
Intelligent Car | Technological | 0.115 | 0.938 | 0.890
 | Advanced | 0.183 | 0.837 | 0.821
 | Future | 0.141 | 0.906 | 0.865
 | Trendy | 0.261 | 0.797 | 0.753
 | High-end | 0.381 | 0.667 | 0.637
 | Peculiar | 0.424 | 0.616 | 0.592
Sports Car | Natural | 0.082 | 0.970 | 0.922
 | Bionic | 0.115 | 0.938 | 0.890
 | Organic | 0.183 | 0.837 | 0.821
 | Brisk | 0.381 | 0.667 | 0.637
 | Ecological | 0.529 | 0.529 | 0.500
 | Comfortable | 0.424 | 0.616 | 0.592
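The comprehensive evaluation scores in the table above are consistent with the standard TOPSIS closeness coefficient C = D- / (D+ + D-), where D+ and D- are an alternative's distances to the positive and negative ideal solutions. A minimal sketch, assuming exactly this formula, reproduces the reported values to within rounding:

```python
def closeness(d_plus: float, d_minus: float) -> float:
    """TOPSIS relative closeness of an alternative to the ideal solution."""
    return d_minus / (d_plus + d_minus)

# A few (D+, D-) pairs taken from the table above.
for word, dp, dm in [("Steady", 0.115, 0.938),
                     ("Soft", 0.082, 0.970),
                     ("Ecological", 0.529, 0.529)]:
    print(f"{word}: C = {closeness(dp, dm):.3f}")
# Prints approximately 0.891, 0.922, 0.500, matching the tabulated scores.
```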
Number of surveyed users: 124 female and 119 male.
Sample Number | The Proportion of Naturalness (%) | The Proportion of Technology (%) | The Proportion of Toughness (%) | The Proportion of Steadiness (%) | The Proportion of Softness (%) | The Proportion of Dynamism (%)
---|---|---|---|---|---|---
01 | 8.23 | 6.17 | 3.29 | 27.57 | 1.24 | 53.50 |
02 | 7.82 | 11.11 | 18.11 | 0.82 | 0 | 62.14 |
03 | 39.09 | 8.64 | 8.23 | 1.65 | 1.24 | 41.15 |
04 | 51.44 | 40.33 | 0 | 1.23 | 4.12 | 2.88 |
05 | 52.26 | 3.29 | 0.41 | 2.06 | 28.81 | 13.17 |
06 | 8.23 | 4.53 | 0.82 | 12.35 | 65.84 | 8.23 |
07 | 7.41 | 11.11 | 2.47 | 16.46 | 54.32 | 8.23 |
08 | 2.06 | 1.65 | 16.87 | 65.02 | 7.82 | 6.58 |
09 | 0.41 | 2.88 | 12.35 | 77.36 | 1.65 | 5.35 |
10 | 2.47 | 4.94 | 13.17 | 66.67 | 1.23 | 11.52 |
11 | 53.50 | 2.06 | 4.12 | 7.41 | 8.23 | 24.68 |
12 | 1.65 | 6.58 | 0 | 0.41 | 74.49 | 16.87 |
13 | 12.35 | 55.14 | 19.75 | 6.17 | 2.06 | 4.53 |
14 | 2.88 | 44.44 | 36.63 | 3.29 | 0.41 | 12.35 |
15 | 4.53 | 62.55 | 12.34 | 8.23 | 1.65 | 10.70 |
16 | 2.47 | 45.27 | 35.80 | 4.12 | 0.41 | 11.93 |
17 | 3.70 | 14.40 | 70.78 | 3.70 | 0 | 7.42 |
18 | 0.82 | 6.59 | 78.19 | 1.23 | 0.82 | 12.35 |
19 | 2.88 | 18.52 | 65.84 | 3.70 | 1.24 | 7.82 |
20 | 2.47 | 11.52 | 52.26 | 4.53 | 0.41 | 28.81 |
Number of surveyed users: 20 female and 20 male.
Sample Number | The Proportion of Naturalness (%) | The Proportion of Technology (%) | The Proportion of Toughness (%) | The Proportion of Steadiness (%) | The Proportion of Softness (%) | The Proportion of Dynamism (%)
---|---|---|---|---|---|---
21 | 5.00 | 2.50 | 2.50 | 0 | 0 | 90.00 |
22 | 0 | 10.00 | 2.50 | 85.00 | 2.50 | 0 |
23 | 0 | 7.50 | 5.00 | 87.50 | 0 | 0 |
24 | 15.00 | 72.50 | 5.00 | 5.00 | 0 | 2.50 |
25 | 0 | 10.00 | 2.50 | 82.50 | 5.00 | 0 |
26 | 0 | 10.00 | 5.00 | 85.00 | 0 | 0 |
27 | 0 | 7.50 | 15.00 | 75.00 | 2.50 | 0 |
28 | 60.00 | 17.50 | 0 | 2.50 | 20.00 | 0 |
29 | 2.50 | 12.50 | 0 | 82.50 | 2.50 | 0 |
30 | 2.50 | 5.00 | 7.50 | 0 | 0 | 85.00 |
31 | 12.50 | 10.00 | 12.50 | 0 | 0 | 65.00 |
32 | 85.00 | 0 | 2.50 | 0 | 5.00 | 7.50 |
33 | 75.00 | 5.00 | 0 | 2.50 | 10.00 | 7.50 |
34 | 17.50 | 0 | 0 | 2.50 | 77.50 | 2.50 |
35 | 12.50 | 5.00 | 0 | 5.00 | 77.50 | 0 |
36 | 17.50 | 7.50 | 0 | 0 | 67.50 | 7.50 |
37 | 2.50 | 5.00 | 0 | 72.50 | 20.00 | 0 |
38 | 2.50 | 7.50 | 0 | 70.00 | 20.00 | 0 |
39 | 10.00 | 77.50 | 7.50 | 0 | 0 | 5.00 |
40 | 0 | 5.00 | 80.00 | 15.00 | 0 | 0 |
Algorithm | AP Na (%) | AP Te (%) | AP To (%) | AP So (%) | AP St (%) | AP Dy (%) | mAP (%) | F1 (%) | FPS
---|---|---|---|---|---|---|---|---|---
Enhanced YOLOv5 | 91.8 | 85.4 | 76.3 | 87.1 | 86.3 | 89.8 | 86.1 | 82 | 106.4 |
YOLOv5 | 89.9 | 76.7 | 66.2 | 86.2 | 85.3 | 83 | 81.2 | 76 | 104.2 |
YOLOv10 | 90.1 | 83.7 | 73.4 | 84.2 | 87.9 | 88.8 | 84.7 | 80 | 147.1 |
SVM | 54 | 57 | 32 | 43 | 63 | 61 | 52 | 66 | 195.0 |
Clusters | Total Number | Number of Predicted Successes | mAP (%) | RMSE |
---|---|---|---|---|
A | 860 | 802 | 86.1 | 0.26 |
B | 1077 | 1005 | 80 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).